Humans Learning Machine Learning
By Rosa Zwier
Education Programs Producer (Physics and Space Science) at Scienceworks, Museums Victoria
Computers are smarter than ever – more than I could have imagined as a child. How do you explain the difference between old-fashioned computer programming and machine learning to an eight-year-old? In 2020, that’s the task I found myself grappling with as a science communicator at Scienceworks.
It was a lonely Melbourne lockdown, and I was holed up in my bedroom on my laptop, messaging with various (terrible) AI chatbots to keep me company. These were the days before ChatGPT, and I was developing a workshop about AI for both 8 to 10-year-old children and their adults. I found myself reading about the Turing Test and the Chinese Room Argument, neural networks, and bias in AI.* As I learned more about machine learning, I found myself wondering: how could I explain these very complex concepts to very young people?
We’re now at a moment in time where this conversation is critical. AI is becoming a huge part of our world. But as with any new tool, we need to use it responsibly. We also need to make sure that young people who will grow up into a world shaped by AI understand what it is and how it works. Education is a key step in empowering a generation who will use AI daily, so that they can think critically and make informed choices.
Many parents, guardians, and teachers understand the importance of talking about this technology, but it didn't exist when we were children. We're still learning to understand it ourselves, while also learning how to teach it. At Scienceworks, we have a unique opportunity to lead conversations with educators, parents/guardians, and children about emerging technologies, bridging the gap between experts and the public.
Here’s some of what I learned from developing a workshop to teach eight-year-olds about AI – I hope it encourages you to learn more, or even to have your own conversations about machine learning and AI.
Engage playfully
There are plenty of fun and interactive tools that can help everyone, including young children, to understand what machine learning is.
To explain the difference between direct coding (writing a script that tells a computer exactly what to do) and machine learning (where the computer learns how to perform a task from training data), we unpacked the idea of trying to get a computer to recognise a handwritten number. How would you write instructions for recognising a number? Pose the question ‘how do you draw a “1”?’ to ten children and adults, draw exactly the lines they describe, and you’ll get a variety of shapes that don’t necessarily look like a “1”.
It would be surprisingly hard because everyone writes a little differently. But by showing a machine learning algorithm thousands of examples of handwritten numbers, we don’t even need to explain to the computer how it should identify them – it can learn. To demonstrate this, we used Neural Numbers, an AI that predicts the number you draw inside a box.1
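For readers who want to see the "learning from examples" idea in code, here is a toy sketch. The tiny hand-made pixel grids and the one-nearest-neighbour approach are invented for illustration – real systems like Neural Numbers use neural networks trained on thousands of scanned digits – but the principle is the same: no rules for what a "1" looks like are ever written down.

```python
# Toy illustration of learning from examples instead of writing rules.
# Each "drawing" is a 5x3 grid of pixels, flattened into 15 zeros and ones.

TRAINING_EXAMPLES = [
    # two ways people write "1"
    ([0, 1, 0,
      1, 1, 0,
      0, 1, 0,
      0, 1, 0,
      0, 1, 0], "1"),
    ([0, 0, 1,
      0, 0, 1,
      0, 0, 1,
      0, 0, 1,
      0, 0, 1], "1"),
    # two ways people write "7"
    ([1, 1, 1,
      0, 0, 1,
      0, 1, 0,
      0, 1, 0,
      1, 0, 0], "7"),
    ([1, 1, 1,
      0, 0, 1,
      0, 0, 1,
      0, 1, 0,
      0, 1, 0], "7"),
]

def classify(drawing):
    """Label a new drawing with the label of its most similar
    training example (a one-nearest-neighbour classifier)."""
    def similarity(example):
        pixels, _label = example
        # Count how many pixels agree with the new drawing.
        return sum(a == b for a, b in zip(pixels, drawing))
    _best_pixels, best_label = max(TRAINING_EXAMPLES, key=similarity)
    return best_label

# A "1" the program has never seen: slightly different strokes.
new_drawing = [0, 1, 0,
               0, 1, 0,
               0, 1, 0,
               0, 1, 0,
               0, 1, 0]
print(classify(new_drawing))  # → 1
```

The point of the sketch: we never tell the computer what makes a "1" a "1" – we only show it examples, and similarity to those examples does the rest.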
A favourite in the workshop was the game Quick, Draw!2 It’s a great game where you’re given a prompt and have to draw it while an AI guesses what the drawing is of. The AI learns from all the previous drawings by other players, which you can look at to see how it comes up with its guesses. It’s great fun, and just by playing it you can start to develop an intuition about what machine learning is.
Don’t shy away from big ideas
A great way to introduce philosophical ideas is by playing the game where a group of people in a circle add one word at a time to make a story. We played this with parents and children together, and it gave us a framework to discuss very complex topics in a simple way. We could talk about syntax versus language patterns versus intention – all in simpler words, of course.
From there, we could then move into a discussion about predictive text, and text generation. Your phone can suggest the next word for you based on what you’ve written before, but does that mean it understands what it’s saying? Is that intelligence?
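A word-at-a-time predictor can be sketched in a few lines. This is a deliberately crude version of predictive text – real phone keyboards use far more sophisticated language models – and the sample sentences are invented for illustration. Notice that the program "understands" nothing: it only counts which word most often followed the current one.

```python
# A minimal sketch of predictive text: suggest the next word based
# purely on which word most often followed the current one in past text.
from collections import Counter, defaultdict

history = (
    "i am going to the park "
    "i am going to the shops "
    "i am happy today"
).split()

# For each word, count which words have followed it.
followers = defaultdict(Counter)
for current, following in zip(history, history[1:]):
    followers[current][following] += 1

def suggest(word):
    """Return the most common follower of `word`, or None if unseen."""
    counts = followers.get(word)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

print(suggest("am"))     # → going
print(suggest("going"))  # → to
```

That the output can look sensible while the mechanism is pure counting is exactly the question the workshop raises: is that intelligence?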
We introduced the concept of ‘black box’ machine learning – a term for machine learning models that give a result or answer without us understanding how they arrived at it. We used the analogy of a mystery box covered in dials: feed in some numbers, like 3×3, and it spits out an answer. If the answer is wrong, it adjusts some dials, until eventually it starts getting the answer right. We don’t know exactly what happens inside, or what the dials do, but by iteratively adjusting and telling the box whether its output is right or wrong, we can eventually get to something that works. This is a simplification of how machine learning happens via neural networks, but kids can start to grasp the concept.
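The mystery-box analogy can be acted out in code. This sketch uses a single dial and simple "too low / too high" feedback, which is a big simplification of my own devising – real neural networks adjust millions of dials (weights) using gradients computed from many examples – but the feedback loop is the same idea.

```python
# The "mystery box" analogy: the box has one dial, and all we ever tell
# it is whether its answer to 3 x 3 was too low, too high, or right.

def mystery_box(dial, a, b):
    # Inside the box: we pretend we can't see this formula.
    return round(dial * a * b)

dial = 0.1      # start with the dial in an arbitrary position
target = 9      # the right answer to 3 x 3

for step in range(100):
    answer = mystery_box(dial, 3, 3)
    if answer == target:
        break               # the box finally gets it right
    elif answer < target:
        dial += 0.05        # nudge the dial up
    else:
        dial -= 0.05        # nudge the dial down

print(answer)  # → 9
```

We never look inside the box or work out what the dial "means" – we just keep nudging it based on right/wrong feedback until the output is correct, which is the intuition the workshop aims for.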
We also talked about bias in AI, through the lens of voice recognition. Many children have experienced talking to Siri, so discussing how tools like Siri can struggle with different accents was a really relatable way to show young kids that AI systems can be discriminatory depending on how they’re trained. From there, we could talk about citizen science projects like Mozilla Common Voice,3 which seeks to combat that bias.
Let kids use new technology
A key part of our workshop was a design challenge in which kids created their own AI. We used Google’s Teachable Machine,4 which lets you use machine learning to train an AI to recognise the difference between images, sounds, or poses. This is a fun and playful way that kids can not only use AI, but also take part in the process of creating AI themselves. It was a great culmination of their learning.
It might sound daunting to talk to kids about new and complicated technology. Will they get it, and will they care? But the children who attended our workshops were fascinated, and we stepped them through a journey of understanding some of the basic principles underpinning this new technology.
Remember that you don’t need to be an expert, either – I wasn’t an expert when I started developing these workshops. Learning is a beautiful and rich experience at any age, and there are plenty of great tools to help you do it either for yourself, or for any little ones you have around you. And it’s important – AI isn’t going away, and by educating young people, we can make sure these new technologies are used appropriately in the future.
–
*The Turing Test measures a machine’s ability to engage in conversation with a human without being detected as a machine; if it remains undetected, it is said to have demonstrated human-like intelligence.5
The Chinese Room Argument imagines a person alone in a room responding to Chinese characters slipped under the door. By following rules and patterns carefully enough, they may be able to return appropriate strings of characters without understanding Chinese, leading those outside to mistakenly believe there is a Chinese speaker in the room.6
References:
- Neural Numbers. I AM A.I. i-am.ai/talk-neural-numbers.html
- Quick, Draw! Google. quickdraw.withgoogle.com
- Common Voice by Mozilla. Mozilla. commonvoice.mozilla.org/en
- Teachable Machine. Google. teachablemachine.withgoogle.com
- Oppy, G., & Dowe, D. “The Turing Test”, The Stanford Encyclopedia of Philosophy (Winter 2021 Edition), Zalta, E.D. (ed.). plato.stanford.edu/archives/win2021/entries/turing-test/
- Cole, D. “The Chinese Room Argument”, The Stanford Encyclopedia of Philosophy (Summer 2023 Edition), Zalta, E.D. & Nodelman, U. (eds.), plato.stanford.edu/archives/sum2023/entries/chinese-room/