Mr NATHAN HAGARTY (Leppington) (12:50): I speak in support of the Work Health and Safety Amendment (Digital Work Systems) Bill 2025. It is a sensible, measured reform, despite the bluster, bluff and hyperbole of those opposite. For anyone watching at home—the thousands and thousands of them—and for casual watchers of politics, there is no surer sign that an opposition is bereft of policy ideas and has nothing to offer than when its only tactic is to oppose with exaggeration and scaremongering. We see that today in relation to the bill. The bill does not reinvent the Work Health and Safety Act; it merely extends existing provisions so that they remain fit for purpose in the modern workplace.
I understand that those opposite struggle with the modern workplace because many of them are trapped in the 1950s and 1960s, and many have never been in a workplace. Many members on this side of the Chamber have been in workplaces and have spent many years representing, fighting and advocating on behalf of workers. We understand that a modern workplace, regardless of whether it is on a manufacturing shop floor or in the IT space, will deal with modern digital technology, including things like algorithms, digital surveillance and artificial intelligence—and when I speak about artificial intelligence, I am not speaking about those opposite. Those things increasingly shape how work is organised, monitored and managed. We only need to think about how quickly ChatGPT has become a thing over the past 18 months to two years. Before that, I do not think many people would have known what it was. Very quickly it has been adopted in a whole range of spaces. The same goes for other forms of AI, and that is increasingly the case in the workplace.
The bill seeks to amend the Work Health and Safety Act 2011 to recognise a simple reality: Digital work systems can themselves create risk, whether through unreasonable workloads, intrusive monitoring or automated decision-making, all of which can impact workers, their mental health, their safety and, most importantly, their job security. The bill does not create a new dispute regime. Section 142 of the Act already sets out dispute resolution procedures, including escalations to the Industrial Relations Commission. Those safeguards remain unchanged. What the bill does is ensure that when disputes arise in relation to the digital workplace, workers are not left without protection simply because the hazard is technological rather than physical.
Algorithms and artificial intelligence are not perfect. They are designed by humans, they are trained on human data and they inevitably reflect human bias, whether conscious or unconscious. When left unchecked, that can lead to deeply perverse outcomes. We have seen that before. While not related to workers and the workplace, Robodebt stands as one of the most disgraceful failures in public policy that this nation has ever seen. At the heart of it, the recipients of social services became victims due to the fact that the organisation blindly deferred its responsibilities to algorithms, with devastating consequences. The royal commission listed damning example after damning example. There were even instances of people committing suicide as a result of Robodebt. There was no check or balance, there was no human oversight and an algorithm was left to decide whether people needed to answer to the department. If it can happen in a Federal department, it can happen in any workplace in Australia, and we have a responsibility to protect workers.
In fact, we have seen similar dynamics in the Australian workplace. Woolworths proposed to introduce technologically driven performance management systems that imposed rigid, algorithmically set productivity targets upon warehouse workers. I stood with those workers during the industrial dispute at Erskine Park. What Woolworths sought to introduce was similar to what we have seen overseas in organisations like Amazon, where workers were treated in an inhumane way. In some examples, workers needed to get from one end of the warehouse to the other within a time frame the algorithm had calculated as the most efficient for shareholder profit, without any regard for workplace safety or for what a human was capable of doing within that time frame. If workers did not meet those targets, they would be disciplined and their job security would be at risk. The aim was to tie automated algorithms and data directly to individual workplace performance rather than relying on human judgement, checks and balances. The concern was not with the technology but with the absence of safeguards when technology is used to manage and discipline people.
Those risks are not confined to Australia. The example of Woolworths was effectively the canary in the coalmine. We only need to look abroad to see what happens when that kind of technology is left to run rampant. The Government has a responsibility to put laws in place before the technology gets out of hand. When technology develops quickly, and regulation and reform do not keep up, we see very perverse outcomes. We have implemented a social media ban. The cat got out of the bag. We have had to play catch-up. We will not do that when it comes to workers in this State.
In the United States, for example, large retail employers have deployed facial recognition and AI-enabled camera systems on shop floors, under the banner of loss prevention. In practice, workers—and particularly workers of colour—have been repeatedly flagged by automated systems as "suspicious". That has led to reprimands, investigations and disciplinary action, even when no misconduct occurred. In those cases, algorithmic suspicion replaced evidence, creating fear, stress and a breakdown of trust in the workplace. Under the bill, if anyone tries that here, unions have a right to ask questions and demand the underlying data that led to those conclusions. I would much rather that those sorts of examples did not arise, and hopefully the bill goes some way towards preventing, or at least mitigating the risks of, the perverse systems that companies in the United States and other jurisdictions are using.
We have also seen examples in warehouse environments overseas. Companies like Amazon use AI-driven monitoring systems to track the movement, the time on task and the productivity of workers. Workers have been automatically warned or terminated based on algorithmic assessments, often without meaningful human review. In one example I read, the algorithm determined that someone had yawned, therefore they were tired, therefore they were not working hard enough, therefore they should be disciplined. That is a disgraceful use of technology. As I said, if there are no checks and balances in place, unions should have the right to ask meaningful questions. They should have access to the data that has determined those judgements.
Regulators and researchers have warned that such systems discourage breaks, increase injury risk and create unsafe psychosocial pressure. I have given some examples from overseas. Those are not theoretical concerns; they are real‑life examples of technology outpacing workplace protections. The bill does not ban the technology. It does not prevent innovation. It applies the longstanding work health and safety principles of identifying risk, preventing harm and resolving disputes to the reality of the modern workplace. Workplaces change and technology evolves, but the duty to protect workers must remain steadfast, and it must keep pace with the times. That is why I commend the bill to the House, and I hope members opposite come to their senses and support it.

