By Dr Muneera Bano
The advent of generative AI and large language models (LLMs) has marked a new era in human history, a period characterised by both innovation and concern.[1] More than a year since the introduction of tools such as ChatGPT, Bard, Midjourney, and DALL·E, these technologies have caused significant disruption across various fields. While they have sparked excitement about their potential, they have also raised fears regarding their impact on jobs – and even the future of humanity.[2,3]
Generative AI has revolutionised many sectors, such as healthcare, education, and retail. In medicine, AI algorithms are assisting in disease diagnosis and drug development.[4] Education is being transformed, as personalised learning experiences become more accessible through AI tutors.[5] This technology has also significantly impacted areas like customer service, where chatbots and virtual assistants have become increasingly sophisticated, offering more human-like interactions and supporting many customers simultaneously.
The underlying concerns
Despite these remarkable advancements, generative AI raises substantial concerns. Deepfake technology – a byproduct of generative AI – poses a significant threat to privacy, security, and truth in media. It can be misused to create convincing fake videos or audio recordings, potentially destabilising democratic processes, influencing elections, or causing personal harm.[6] The rise of AI-generated content also raises issues of copyright and intellectual property, as distinguishing between human and AI creations becomes increasingly challenging – especially where a model was trained on copyrighted material, sometimes without permission.
Humans possess a profound ability to interpret the meanings of words and pictures. It is a skill that extends far beyond simple recognition or replication, and is deeply rooted in our unique experiences, emotions, and cultural contexts. When we encounter language or imagery, we don’t just process the information at face value; we attach meaning to it, drawing from memories, societal norms, and emotional responses.
LLMs and generative AI, on the other hand, lack this intrinsic capacity for understanding. They can recognise patterns, predict word sequences, and generate images based on data, but they cannot grasp the underlying meanings, or the emotional and cultural significance that these words and images hold for different people. This difference is fundamental: while AI can mimic or recreate, it cannot experience or empathise, leaving a significant gap between artificial generation and human understanding.
A more subtle, yet potentially profound, impact of generative AI is the creation of a ‘synthetic reality loop’ within cyberspace. As the internet becomes saturated with AI-generated content, future AI models may be trained predominantly on data created by their predecessors, leading to a cycle of ‘hyper fake’ realities.[7] These synthetic data training loops would remove humans from the training process, and could detach AI from human experiences and perspectives, leading to the generation of content that echoes an increasingly artificial understanding of the world. The implications of such a loop are vast, potentially resulting in a digital landscape that is less reflective of the diversity of human experience and more a mirror of algorithmically generated perspectives.
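This feedback effect has been studied in the research literature under the name ‘model collapse’, and a toy simulation can make the intuition concrete. The sketch below fits a deliberately simple statistical ‘model’ (a Gaussian) to some data, then repeatedly retrains it only on samples drawn from its own previous generation, with no fresh human data; the spread of the data steadily collapses. The sample size and generation count are illustrative choices, not figures from the article.

```python
import numpy as np

# Toy illustration of a 'synthetic reality loop' (model collapse):
# each generation is fitted only to samples drawn from the previous
# generation, with no fresh human-created data added.

rng = np.random.default_rng(0)

n_samples = 20      # a small sample per generation exaggerates the effect
generations = 1000  # how many times the model retrains on its own output

# Generation zero: 'human' data drawn from a rich underlying distribution.
data = rng.normal(loc=0.0, scale=1.0, size=n_samples)
initial_std = data.std()

for _ in range(generations):
    # Fit the trivial 'model' (mean and spread) to the current data,
    # then replace the data entirely with the model's own samples.
    mu, sigma = data.mean(), data.std()
    data = rng.normal(loc=mu, scale=sigma, size=n_samples)

final_std = data.std()
print(f"initial spread: {initial_std:.3f}; "
      f"after {generations} synthetic generations: {final_std:.6f}")
```

Because each refit loses a little information about the tails of the distribution, the diversity of the data shrinks generation after generation until almost nothing of the original variety remains – a numerical analogue of the ‘hyper fake’ loop described above.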
Diversity and inclusion in AI
As we delve deeper into the age of generative AI, the significance of diversity and inclusion in these technologies cannot be overstated.[8] “Diversity and Inclusion in Artificial Intelligence refers to the ‘inclusion’ of humans with ‘diverse’ attributes and perspectives in the data, process, system, and governance of the AI ecosystem”.[9] When generative AI overlooks diversity and inclusion, the implications are both deep and wide-ranging.
Since its inception, AI development has predominantly been shaped by Western scientific methods, which often define intelligence in terms of logical reasoning, problem-solving, and data processing. However, this is just one of many perspectives on human intelligence, which differ across cultures and societies.
In other cultures, intelligence is often perceived through the lens of social harmony, emotional awareness, and holistic thinking. In various Indigenous communities in Africa, Asia, and Australia, intelligence might include communal knowledge, storytelling, and a deep connection with nature. These diverse understandings of intelligence reflect a rich picture of human experiences and wisdom that are largely absent in current AI models.
The impact of generative AI is also unevenly distributed across different economic backgrounds. People from lower-income groups or underrepresented communities often lack access to the latest technologies, including AI. This digital divide not only limits their ability to benefit from AI-driven advancements, but also means their data and perspectives are underrepresented in AI training data.
Could AI, with its roots deep in Western data, inadvertently become a tool of digital colonisation? As AI systems are trained mostly on data that is influenced by Western perspectives, there’s a risk of them acting like modern-day digital colonisers, spreading a uniform cultural narrative across diverse global landscapes. This isn’t just about data imbalance; it’s a story of cultural domination in a new, digital guise.
Envision a world where generative AI models, from language generators to image creators, operate with a narrow lens, shaped predominantly by a homogenous dataset. Such systems could struggle with understanding and representing the rich variety of global dialects, accents, and cultural contexts, leading to a skewed digital representation of human diversity.
They would risk becoming an echo chamber of limited perspectives, potentially alienating diverse communities by failing to acknowledge their unique narratives and experiences. This lack of diversity isn’t just a flaw in technology; it’s a catalyst for reinforcing stereotypes and deepening societal divides. The concern isn’t merely theoretical; it’s about the potential real-world impact of AI shaping perceptions, decisions, and cultures in non-Western societies, echoing historical colonial patterns.
However, this narrative is not yet set in stone. We stand at a crossroads, where AI has the potential to evolve into a multicultural mosaic, reflecting and respecting the richness of human diversity. It’s a call to action, urging a reshaping of AI development to embrace the vast array of human experiences and viewpoints, transforming AI from a potential digital coloniser into a tool that celebrates and elevates our global cultural diversity.
The potential of generative AI to truly revolutionise our digital experience lies in its ability to capture and reflect the vast spectrum of human diversity – in language, imagery, and thought. Inclusivity in generative AI isn’t just an ethical consideration; it is fundamental to the creation of advanced, equitable, and genuinely innovative AI systems.
Addressing these challenges requires a concerted effort to make AI development more inclusive. This includes diversifying the teams that design and build AI systems, ensuring they represent a wide range of cultural, socio-economic, and geographical backgrounds. It also involves being intentional about the data used to train these systems, actively seeking out and including data from underrepresented groups and regions.[10]
Participatory design, where communities are involved in the development process of AI tools intended for their use, can ensure that these technologies are adapted to their specific contexts and needs. Additionally, policies and frameworks should be established to ensure equitable access to AI technologies, so that the benefits of AI can be enjoyed by a broader spectrum of society.
The human responsibility in the age of AI
Ultimately, AI is a tool, and its future is in the hands of humanity. We, as a society, are responsible for how these technologies are developed, implemented, and governed. It is crucial to establish ethical frameworks and guidelines to ensure the responsible use of AI.[8] This responsibility includes addressing biases in AI, ensuring transparency in AI-driven decisions, and safeguarding against the misuse of AI technologies.
The innate human trait of curiosity, characterised by the desire to question and explore the unknown, remains a distinctive aspect of our intelligence, largely untouched by AI in its current form. While AI can process and generate responses based on vast datasets, it lacks the inherent curiosity that drives humans to seek knowledge beyond available information. Our desire to question arises from a deep-seated sense of wonder and a quest for meaning, elements that are fundamentally human and not easily replicable by AI, which operates within the confines of its programmed algorithms and existing data. AI, in its current data-dependent model, does not possess the spontaneous spark of curiosity that prompts humans to ask, “What if?” and “Why not?” – questions that have been the bedrock of human innovation and discovery.
Generative AI, with its ability to brilliantly replicate the works of Shakespeare or Picasso, essentially draws on the small portion of human intelligence that has been digitised and made available on the internet. However, despite its impressive mimicry, it lacks the originality and evolutionary depth of human intelligence. While AI draws from a subset of accumulated knowledge, human intelligence is the culmination of millions of years of evolution, encompassing not just information, but deep-seated creativity, emotions, and experiences that cannot be fully captured or recreated by algorithms.
As we reimagine humanity in the age of generative AI, it becomes evident that while these technologies offer incredible possibilities, they also bring forth significant challenges. Balancing the benefits of AI with ethical considerations, promoting diversity and inclusion in AI development, and acknowledging the limitations of AI in understanding the non-digital aspects of our world are crucial steps in this journey. By doing so, we can harness the power of AI to enhance human capabilities while preserving the essence of what makes us uniquely human – our diversity, our creativity, and our ability to experience the world beyond data and algorithms.
References
1. Bano, M., Hoda, R., Zowghi, D., & Treude, C. (2024). Large language models for qualitative research in software engineering: Exploring opportunities and challenges. Automated Software Engineering, 31(1), 8.
2. Ellingrud, K., & Sanghvi, S. (2023, September 21). Generative AI: How will it affect future jobs and workflows? McKinsey Global Institute. mckinsey.com/mgi/our-research/generative-ai-how-will-it-affect-future-jobs-and-workflows
3. Wroe, D. (2023, October 10). Artificial intelligence and the future of humanity. The Strategist. aspistrategist.org.au/artificial-intelligence-and-the-future-of-humanity/
4. Toma, A., Senkaiahliyan, S., Lawler, P. R., Rubin, B., & Wang, B. (2023). Generative AI could revolutionize health care — but not if control is ceded to big tech. Nature, 624(7990), 36–38. doi.org/10.1038/d41586-023-03803-y
5. Department of Education, Australian Government. (2023, December 1). The Australian Framework for Generative Artificial Intelligence (AI) in Schools. education.gov.au/schooling/announcements/australian-framework-generative-artificial-intelligence-ai-schools
6. Bano, M., Chaudhri, Z., & Zowghi, D. (2024). The role of generative AI in global diplomatic practices: A strategic framework. arxiv.org/abs/2401.05415
7. Keen, E. (2023, August 1). Gartner identifies top trends shaping the future of data science and machine learning. Gartner. gartner.com/en/newsroom/press-releases/2023-08-01-gartner-identifies-top-trends-shaping-future-of-data-science-and-machine-learning
8. Bano, M., Zowghi, D., & Gervasi, V. (2024). A vision for operationalising diversity and inclusion in AI. Responsible AI Engineering Workshop at ICSE '24. arxiv.org/abs/2312.06074
9. Zowghi, D., & da Rimini, F. (2023). Diversity and inclusion in artificial intelligence. arXiv preprint arXiv:2305.12728.
10. Shams, R. A., Zowghi, D., & Bano, M. (2023). AI and the quest for diversity and inclusion: A systematic literature review. AI and Ethics. doi.org/10.1007/s43681-023-00362-w