The Future of Work with an Eye on New Technologies
Ben Kimber
Communications Adviser, Legislative Council, Parliament of Victoria
Striking a balance between employer expectations and employee rights is becoming increasingly complicated due to new technologies, including artificial intelligence (AI). Businesses are embracing systems that supposedly “boost productivity,” and a parliamentary inquiry into workplace surveillance has heard some are also using them to keep a closer eye on their workers.
Monitoring an employee’s brain
“Neurotech is technology that either directly monitors the brain or peripheral nervous system or directly stimulates it or does both,” legal academic and President of the Institute of Neurotechnology and Law Dr Allan McCay said.
“It is now possible to use hospital-grade neurotech to decode mental images or songs that a person is listening to from a person’s brainwaves, and even to decode intended speech … it seems reasonable [to assume], given the investment environment, that these monitoring and decoding capacities will start to become available in more portable devices.
“It also seems reasonable to assume that, in time, employers will have access to more capable brain-reading devices than are currently available and some may wish to employ them.”
Dr McCay said there are situations when an employer might reasonably want to use what he calls ‘workplace neurosurveillance’, for instance in the interests of safety, to stop fatigued operators of machinery from harming themselves or others.
“A second reason why they might want to engage in workplace neurosurveillance is related to productivity,” he said. “They might want to know about an employee’s attention levels in order to make them more productive.”
Dr McCay said there are several ethical issues relating to workplace neurosurveillance, including ‘mental privacy’, and urged law reformers to not just focus on existing technologies.
“I think we have to consider the future challenges and not just focus on the way things are now,” he said.
Business Council insists “no need for more regulation”
Business Council of Australia General Counsel Kat Eather argued for maintaining the status quo and no further regulation.
“While Victoria does not have a standalone law dealing with workplace surveillance, existing Commonwealth and state laws that deal with aspects or effects of workplace surveillance include the Occupational Health and Safety Act, the Fair Work Act, the Privacy Act and the Surveillance Devices Act 1999 (Victoria),” she said.
“The ability to monitor workplaces and interrogate actions that have occurred on an employer’s technology or communication systems and devices can be critical for a range of reasons, from worker safety and the security of equipment and premises to recording working hours and attendance to ensure wage compliance, that adequate breaks are taken and that employees are paid properly.
“It is essential that any move to further regulate workplace surveillance in Victoria does not impede the use of surveillance for those essential functions.”
Addressing the risks of harm
ARC Centre of Excellence for Automated Decision-Making and Society’s Dr Jake Goldenfein said regulating AI won’t necessarily result in a set of prescribed uses, but will instead put forward standards that resemble product safety law.
“What we need is principled sectoral regulation that says, for instance, if you are going to use a new digital system for workplace management, whether it is making managerial decisions, human resources decisions, task allocation, for instance, it should not cause harm to workers,” he said.
“To me this is the most basic principle. If you are going to introduce a surveillance system, it should not cause harm to workers, because what we are getting are reports of all kinds of harm.”
Some of these harms were identified as work intensification, loss of privacy, low morale, anxiety and stress.
Privacy and transparency
The handling of personal data collected through workplace surveillance is also being examined.
“One of the real challenges in this area is that there is very little transparency about what is being gathered and how it is being used,” National Tertiary Education Union representative Associate Professor Alysia Blackham said.
Dr Blackham said AI systems are being trained on huge amounts of data – potentially workers’ data – without their consent or knowledge.
“One of the real concerns we have put forward in our submission is that there is no restriction on employers on-selling workers’ data for a profit and commoditising it in that way, which is a major gap in the regulatory framework,” she said.
Artificial “Intelligence” lacks wisdom, consciousness
Australian Nursing and Midwifery Federation, Victorian Branch Professional Officer Alana Ginnivan said AI is not infallible, because it is a probability-based model.
“It does not understand the data. It does not contemplate, from our members’ experience, data obtained within a clinical healthcare setting,” she said. “The risk is that data obtained through workplace surveillance can be improperly used; the intent when it was collected was not AI modelling.
“AI is the way of the future; we do see that—but there are no obligations or safeguards in place to protect the workers or the patients and consumers.”
A representative for the Victorian Government told the inquiry the Department of Government Services is looking at the use of AI in the public service and the privacy provisions it has in place for inputting material into generative AI tools.
The Lower House Committee will prepare a report for Parliament in 2025. To read all of the public hearing transcripts and watch video snapshots, visit vicparl.news/wsi.