The Future of AI in the Workplace
By Dr Catriona Nguyen-Robertson MRSV
Artificial intelligence (AI) has been woven into the fabric of our lives. ‘It’s a revolution,’ says Dr Tien Huynh, an academic at RMIT University. ‘We can’t really stop it, but we can improve it’.
AI comes in different forms that are already integrated into everyday life. You might ask it to navigate the best route through traffic, tell you whether it will rain today, filter spam out of your email inbox, or select films and music that you might like. AI performs these tasks using machine-learning algorithms trained on copious amounts of data and examples.
OpenAI’s ChatGPT and Google’s Bard use large language models that digest huge quantities of data scraped from the web and infer relationships between words to generate their own text. Professor Dinh Nguyen, Research Director of the Department of Data Science and AI at Monash University, points out that, even though they are trained on trillions of words, they still ‘don’t understand anything’ – they simply recognise patterns. The most basic language model training involves predicting a word in a sequence: most commonly either next-token prediction (guessing the next word) or masked language modelling (guessing a hidden word within a phrase). The model learns to fill in the blank with the most statistically probable word given the context. In doing so, it learns to compose sentences on the fly as if it were human, which potentially makes these models more versatile than their AI “smart assistant” predecessors.
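The idea of “filling in the blank with the most statistically probable word” can be illustrated with a deliberately tiny sketch: a bigram counter that predicts the next word from a toy corpus. This is an illustration only – real large language models use neural networks trained on trillions of tokens, not simple counts – but the underlying task, next-token prediction, is the same.

```python
from collections import Counter, defaultdict

# A toy next-token predictor: count which word follows which in a tiny
# corpus, then predict the most frequent follower. The corpus and
# function names here are invented for illustration.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word (bigram counts).
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the statistically most probable next word, or None."""
    counts = followers[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("sat"))  # "on" - both sentences continue "sat on"
print(predict_next("on"))   # "the"
```

Even this trivial model “composes” plausible fragments without understanding anything – it only knows which words tend to follow which, which is the pattern-recognition point Dinh makes above.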
With a suite of generative AI tools at our disposal, a question arises: how will they change the way we work?
What can we use generative AI to do?
Generative AI is a type of AI that can create new material in whatever medium we ask for: text, images, video, audio, and 3D models. It sounds like the way of the future – and people have been scurrying to jump on the generative AI bandwagon. Since its launch last November, ChatGPT has become one of the fastest growing apps in history, only taking three months to reach over 100 million active users.
Multiple industries are using generative AI to their advantage. Most people have played with ChatGPT – even if simply to test it out. People use it to generate cover letters, speeches and emails, to provide advice, to assess code, and to analyse vast quantities of data, among other things. Creatives looking for inspiration for their books or poems have also asked ChatGPT to produce rough drafts to get them started. Ultimately, people are using ChatGPT and other generative AI bots to save time and lighten their cognitive load.
Many people liken this AI to a 24/7 personal assistant that does some of the heavy lifting and streamlines routine tasks. Entrepreneur String Nguyen, who generates enormous amounts of social media content, uses ChatGPT and similar programs every day. It proofreads her work, and she asks it for marketing strategies. Essentially, it acts as her virtual assistant and copy editor. Indeed, String is not the only business owner who has tested ChatGPT’s capacity as an assistant to reduce costs and free up time and brain space for strategic thinking and high-level decision-making.1,2
Could chatbots eventually automate the roles of human executive assistants, especially their more repetitious work, as well as other jobs that involve administrative work? We do not have an answer. However, Dinh believes in a future where, instead of simply divvying up tasks with other people, we also use AI to help share the load.
What if we rely on generative AI too much – especially school students?
As many students who write essay assignments have realised, ChatGPT can be quite a useful writing tool. Ask it to write an 800-word essay on Rosalind Franklin in the style of a year 10 student, and it will do it. Perhaps not perfectly, but it will put words on a page.
While some teachers are trying to crack down on the use of AI in education, concerned that these tools will eliminate critical thinking and writing skills, others encourage it. With the right guard rails in place to prevent cheating – and honestly, students haven’t needed AI to cheat in the past – generative AI could empower both students and educators.
These tools can provide access to vast amounts of information in a short timeframe, remove accessibility barriers by providing the option of interacting through text or speech, and can provide personalised educational content based on student needs. Sal Khan, CEO of Khan Academy, sees it as an opportunity to provide a personal tutor to every student or teaching assistant to every teacher.3
The average class size in Victoria is over 21 students at both primary and secondary levels,4 but students and educators could collaborate with AI tools to even out the student–teacher ratio. Khan Academy has developed Khanmigo, a “friendly AI-powered learning guide” that identifies mistakes in students’ work, answers questions, quizzes students, and, importantly, asks students to justify and share their reasoning to ensure that they have a solid understanding of concepts.
A survey from The University of Melbourne conducted across Semester 1 of this year (March-June) found that tertiary students use generative AI to brainstorm and refine ideas, summarise information, and provide language support, especially for students with English as an additional language.5 Fewer than one in ten student respondents had used generative AI to produce content that was submitted as all or part of an assignment, but many use it to at least get started and feel more supported in their learning. It can thus be used responsibly as a support tool, rather than something to be frowned upon.
Just as the education system adopted calculators and computers as learning tools in the past, AI is the next tool. These tools aren’t going away, and by banning them, educators would fail to prepare their students for the world they will graduate into. We need to learn how to use them well. Just as academics before me had to learn to find information in books and journals in libraries, I had to learn to use search engines to find what I was looking for. Students today need to learn how to work with generative AI tools to be part of this modern world.
What are the downsides of a reliance on generative AI?
I may have painted a picture of profound ways that generative AI will change work and education; however, there are certainly valid concerns around this technology. We need to understand its limitations and consider how to balance the risks with the rewards.
Inaccurate or biased information
When Tien asked her PhD students to try using ChatGPT to write an academic literature review, the result was intriguing: ChatGPT wrote something plausible about the field, except many of the references had been completely made up. A better-known example is a New York-based lawyer who faced a court hearing after relying on ChatGPT for research in a legal brief and submitting its fabricated citations without verifying them.6 These are the result of AI hallucinations: confident responses from AI that are, in fact, false. AI tools like ChatGPT are trained to predict the string of words that best matches your query. They lack the reasoning, however, to apply logic or to catch the factual inconsistencies they may be spitting out.
Moreover, machine-learning algorithms are only as good as the data they are trained on, so there is a risk that they provide inaccurate or biased information. ChatGPT’s training data gives it limited knowledge of the world after 2021 – what it knows beyond that is based largely on users’ input. As we use it more, it will learn more, but it can still make mistakes, especially because it is essentially trained by people, and people make mistakes and can be biased.
Privacy and ethical concerns
The technology that has been unleashed into society is – according to Sarah Roberts, Associate Professor of Information Studies at UCLA – an ‘unfettered experiment’ that not even the tech companies that created it can properly control.7 She is concerned about the lack of ethical consideration before ChatGPT and other generative AI tools were released. To conduct research in laboratories, scientists must apply for ethics approvals and adhere to strict ethics guidelines – but are tech companies doing this for products that go directly out into the world?
AI applications are revolutionising the way we create. But ultimately, these creations rely on ideas conceived by humans – humans who are not always given appropriate credit. Some professional artists, writers, musicians, and programmers fiercely object to the use of their creations as training data for generative AI tools that produce outputs competing with their work or making it redundant.8 These systems have often been trained on data harvested from the internet without attribution or compensation. Copyright lawsuits now underway in the US, including challenges to OpenAI’s Codex and to GitHub and OpenAI’s Copilot – both trained on billions of lines of open-source code – have substantial implications for the future of generative AI systems.9 If the plaintiffs prevail, generative AI systems will only be allowed to train on work that is in the public domain or appropriately licensed, which will affect everyone who integrates generative AI into their work – including those who use it for scientific research and education.
Generative AI also introduces privacy concerns through its ability to process personal data and generate potentially sensitive information, even when that information is supposedly anonymised. There are also people whose job it is to screen the content that AI algorithms are trained on – including horrific content that represents the worst of humanity. Richard Mathenge, a worker for OpenAI based in Nairobi, and his team taught the GPT model about explicit content.10 The goal was to train it to keep such content away from users – but to do so, they had to repeatedly view, read and categorise explicit text so that the model learned to recognise and avoid it. This work has been crucial for AI tools like ChatGPT and Bard, but it horrified the people who had to do it. We rarely discuss this human toll behind the “quality assurance” of these AI models.
A changing tide: where to from here?
‘We need to reinvent the way we’re going to work,’ says engineer Quan Pham. AI can do mundane tasks for us and process information much faster than our brains are capable of, freeing up our time to do more “human” things. Some jobs may disappear, but many will adapt, and new ones will be created as the workforce adopts the new technology.
A 2017 report estimated that there has been a gradual net gain of 15.8 million jobs as a direct result of the introduction of personal computing technologies.11 Consider the entire ICT industry that did not exist several decades ago – we are simply at the forefront of another wave.
ChatGPT and other generative AI tools could be used to create a baseline for our work, to which we then add our own voice. They may boost our productivity, but as Dinh says, ‘at the end of the day, you are still responsible for what you produce’. We shouldn’t be afraid of the technology, but we do need to learn about its limitations, and train people to use it appropriately before completely embracing it.
—
Keen to learn more about AI? Join us in person or online on the 19th of October as Dr Muneera Bano (CSIRO/Data61) presents “Reimagining Humanity in the Age of Generative AI”. For more information, visit rsv.org.au/events/generative-ai
References:
1. Madell, R. (2023, May 3). I use ChatGPT and it’s like having a 24/7 personal assistant for $20 a month. Here are 5 ways it’s helping me make more money. Business Insider. businessinsider.com/chatgpt-personal-assistant-saving-time-making-money-2023-5
2. Chen, B.X. (2023, March 29). How ChatGPT and Bard Performed as My Executive Assistants. The New York Times. nytimes.com/2023/03/29/technology/personaltech/ai-chatgpt-google-bard-assistant.html
3. TED. (2023, May 2). How AI Could Save (Not Destroy) Education | Sal Khan | TED. YouTube. youtube.com/watch?v=hJP5GqnTrNo
4. Department of Education. (2023). Class sizes 2022. Statistics on Victorian schools and teaching. vic.gov.au/statistics-victorian-schools-and-teaching
5. Skeat, J. & Ziebell, N. (2023, Jun 23). University students are using AI, but not how you think. Pursuit. pursuit.unimelb.edu.au/articles/university-students-are-using-ai-but-not-how-you-think
6. Carrick, D. & Kesteven, S. (2023, Jun 24). This US lawyer used ChatGPT to research a legal brief with embarrassing results. We could all learn from his error. ABC News. abc.net.au/news/2023-06-24/us-lawyer-uses-chatgpt-to-research-case-with-embarrassing-result/102490068
7. Tobin, G. (Reporter) & Roberts, S.T. (Guest). (2023). AI Rising [Television series episode]. In A. Donaldson (Producer), Four Corners. ABC Television.
8. Klein, N. (2023, May 8). AI machines aren’t ‘hallucinating.’ But their makers are. The Guardian. theguardian.com/commentisfree/2023/may/08/ai-machines-hallucinating-naomi-klein
9. Samuelson, P. (2023). Generative AI meets copyright. Science 381(6654), 158-161. DOI: 10.1126/science.adi0656
10. Kantrowitz, A. (2023, May 21). The Horrific Content a Kenyan Worker Had to See While Training ChatGPT. Slate. slate.com/technology/2023/05/openai-chatgpt-training-kenya-traumatic.html
11. McKinsey Global Institute. (2017). Jobs lost, jobs gained: workforce transitions in a time of automation. McKinsey & Company. mckinsey.com/~/media/mckinsey/industries/public%20and%20social%20sector/our%20insights/what%20the%20future%20of%20work%20will%20mean%20for%20jobs%20skills%20and%20wages/mgi-jobs-lost-jobs-gained-executive-summary-december-6-2017.pdf