AI is Flooding the Fair Work Commission. Here is What Employers Need to Know.
- Jordan Lowry

- Mar 30
If you run a business in Australia and you have employees, this article is for you.
The Fair Work Commission is dealing with a problem it has never faced before. Artificial intelligence tools, particularly ChatGPT and similar platforms, are being used by employees and former employees to draft and lodge workplace claims at a pace that is overwhelming the system. And it is having a real, measurable impact on employers.
This is not a future problem. It is happening right now. As a business owner, you need to understand what is going on, what the Commission is doing about it, and what steps you should be taking to protect your business.
The Numbers Tell the Story
In February 2026, Fair Work Commission President Justice Adam Hatcher delivered a presentation to the Victorian Bar Association that laid out just how dramatically the landscape has shifted.
Until 2023, the Commission dealt with roughly 30,000 matters per year. That figure jumped to around 45,000 in 2024-25, and is projected to reach between 50,000 and 55,000 in the current financial year. That is a 70% increase in total workload in just three years.
Unfair dismissal applications alone have risen 41%. General protections dismissal claims under section 365 are up 62%. Other general protections disputes have surged 135%.
Justice Hatcher was direct about the cause. The historical relationship between retrenchment rates and the number of dismissal claims, which previously moved together, has broken down. The timing of that breakdown lines up almost exactly with the public release of ChatGPT in late 2022.
What AI is Actually Doing
To illustrate the point, Justice Hatcher ran his own experiment. He opened ChatGPT, told it he had been dismissed from his job, provided a handful of basic facts, and asked what he could do. Within ten minutes, the tool produced a completed unfair dismissal application ready to lodge, a witness statement, and an estimate that he could expect between $15,000 and $40,000 in compensation.
The catch? The witness statement contained fabricated details. The case, on the facts provided, had no reasonable prospect of success. But the application looked polished, professional, and ready to file.
This is the challenge employers are now facing. AI tools can take a dismissed employee from frustration to a filed application in under ten minutes, with no professional advice, no reality check on the merits, and no understanding of the time, cost and stress that the process will impose on the employer who has to respond.
Why This Matters for Employers: Every claim that hits your desk costs time and money to respond to, regardless of whether it has any merit. The Commission has acknowledged that the volume of weak and unmeritorious claims is now stretching its capacity and pulling resources away from legitimate matters. Employers are bearing the cost of this on both sides.
The Deysel Decision: A Real-World Example
We have already seen what happens when employees rely entirely on AI for legal guidance. In Deysel v Electra Lift Co. [2025] FWC 2289, a former employee used ChatGPT to prepare and lodge a general protections application against his employer.
The application was filed more than two and a half years after his dismissal. ChatGPT had failed to flag the requirement under the Fair Work Act to lodge within 21 days. The application also failed to properly address the legal elements needed to make out the claim.
Deputy President Slevin did not mince words. He described the proceedings as hopeless and criticised the danger of relying solely on AI for legal advice. He noted that even ChatGPT itself had cautioned the applicant to seek professional help before acting on its suggestions.
The employer still had to deal with the application, attend the conference, and prepare a response. That is time and money that no business gets back.
What the Commission is Doing About It
The Fair Work Commission is now taking steps to respond. The most significant change for employers to be aware of is the introduction of a mandatory AI disclosure requirement across all FWC forms and documents.
Under the draft guidance note released in March 2026, anyone who uses generative AI to prepare an application, response, submission, or witness material will be required to:
• Disclose that AI was used in preparing the document.
• Verify that all facts are accurate and all case law cited actually exists, including providing working hyperlinks to decisions relied upon.
• Confirm (for witness statements and declarations) that the content reflects the person’s own knowledge and is true to the best of their understanding.
Failure to comply may result in the application being dismissed or a costs order being made.
The Commission has also streamlined how it manages general protections conciliation conferences. Conferences now focus purely on settlement, run for less time (often under an hour, sometimes as little as five minutes), and are terminated immediately if a respondent indicates it has no intention of making an offer.
Beyond procedural changes, Justice Hatcher is pushing the Federal Government for legislative amendments that would give the Commission greater powers to dismiss matters with no reasonable prospect of success and to deal with more matters on the papers without requiring a full hearing.
The Fair Work Ombudsman is Also Getting Involved
It is not just the Commission that is paying attention to AI. In late March 2026, the Fair Work Ombudsman confirmed that it is investing in an AI pilot program of its own. The goal is to explore whether AI-enabled tools could make workplace obligations easier for employers to understand and follow.
This came out of high-level discussions between federal workplace regulators about reducing regulatory complexity for Australian businesses. There are already some early examples of this approach in action, including chatbot tools designed to explain recent changes to the Fair Work Act for small business operators.
For employers, this is a positive signal. If AI can be used to simplify award interpretation, clarify entitlements, and streamline compliance, that is a win. But it also reinforces a broader message: AI is not going away, and both sides of the employment relationship need to engage with it responsibly.
The Bigger Picture: AI in Employment Decision-Making
There is also a longer-term regulatory conversation happening that employers should have on their radar. In early 2025, a House of Representatives Standing Committee published its Future of Work Report, which made 21 recommendations about AI and automated decision-making in the workplace.
The headline recommendation was that all AI systems used for employment-related purposes should be classified as high-risk. That includes systems used in recruitment, hiring, remuneration, promotion, training, and termination.
The report also recommended:
• Banning high-risk uses of worker data, including disclosures to technology developers.
• Prohibiting the sale of workers’ personal data to third parties.
• Requiring meaningful consultation and transparency with workers about AI use and surveillance measures.
• Empowering the Fair Work Commission to manage disputes relating to breaches of worker privacy.
None of this is law yet. But it signals where regulation is heading. If you use any form of AI in your hiring, rostering, performance management, or workforce planning processes, it is worth understanding what obligations may be coming.
What Employers Should Be Doing Now
You do not need to panic. But you do need to be informed and prepared. Here is what we recommend:
1. Expect more claims, and prepare accordingly.
The barrier to lodging a Fair Work claim has dropped significantly. Any dismissed employee with a phone can now have a completed application in minutes. That does not mean the claim will succeed, but it does mean you are more likely to receive one. Make sure your termination processes are documented, defensible and follow proper procedure every single time.
2. Get your documentation right before you need it.
Strong employment contracts, clear policies, documented performance management steps and properly conducted disciplinary processes are your best defence. When a claim lands, you want to be in a position to respond confidently with a clear paper trail.
3. Understand the new AI disclosure rules.
The mandatory disclosure requirement works both ways. If you are an employer responding to a claim and you use AI to help draft your response, the same obligations apply. Be transparent, check the facts, and verify any legal references. Do not submit anything you have not personally reviewed.
4. Do not assume a claim is meritless just because it looks AI-generated.
Some AI-assisted claims will have genuine substance behind them, even if the language looks templated. Treat every application seriously, respond on time, and get proper advice before making assumptions about the strength of the other side’s case.
5. Review how you use AI in your own business.
If you are using AI tools in recruitment, performance reviews, rostering, or any other employment-related process, take stock now. Understand what data these tools are collecting, how decisions are being made, and whether you are meeting your consultation and transparency obligations. This area is only going to attract more regulatory attention.
How Blackstone Can Help
At Blackstone Business Group, we work with employers to build the structures, documentation and processes that protect your business before a problem arises, and to guide you through it when one does.
If you have received a Fair Work claim (AI-generated or otherwise), need to review your employment contracts and policies, or want to make sure your termination processes are legally sound, we are here to help.
We are pro-employer, commercially practical, and we give you straight answers.
Call us on 07 3869 4743 or email advice@blackstonebg.com.au