10 August 2023
The impact of artificial intelligence in higher education
Dr Ilia Protopapa, Antonis Kazouris and Kunal Prasad Damodare
How can AI-enabled data collection help address attainment gaps?
Artificial intelligence (AI), the capability of machines to exhibit aspects of human intelligence, is increasingly used in service industries and is a major source of innovation (Huang and Rust, 2021).
While critics argue that AI technology can embed biases and reinforce inequalities, in this blog we discuss the use of AI to address issues of diversity and inclusion, and more specifically the issues causing attainment gaps in higher education.
Advance HE defines the attainment gap as ‘…the difference in ‘top degrees’ – a First or 2:1 classification – awarded to different groups of students. The biggest differences are found by ethnic background’. To uncover students' perceptions of diversity, inclusion, and well-being, with the aim of addressing the issues causing these attainment gaps, we used AI technology to collect data.
Although AI takes many forms, in this blog we focus on AI as a research tool for collecting and analysing data, and explore its use to address issues behind the attainment gap at King's Business School.
In March 2023, we collaborated with Remesh, a platform that uses AI to facilitate real-time conversations with large groups of people, to design and launch an online discussion consisting of open-ended questions to King’s Business School students.
This project aimed to uncover students' perceptions of diversity, inclusion, and well-being. The anonymity provided by this technology allowed students to join the discussion without fear of judgement or repercussions, leading to the discovery of previously unexplored issues.
Exploring Different Types of AI and Criticisms Relating to Diversity and Inclusion
AI is a broad field that encompasses many different types of systems. Two types of AI systems are generative AI and AI for collecting data.
Generative AI refers to AI systems that can generate new content, such as images, music, or text, that resembles content created by humans. These systems use machine learning algorithms to learn patterns in existing data and then use those patterns to generate new content.
One example of generative AI is the GPT-3 language model developed by OpenAI. GPT-3 can generate human-like text in response to a given prompt, such as writing a news article or composing a poem (OpenAI, 2020). However, there are ethical considerations when using generative AI, such as the potential for bias and a lack of diversity in the training data, which can lead to the generation of discriminatory or offensive content.
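To make this concrete, the sketch below shows what prompting a generative text model can look like in Python. It is an illustration only, assuming the open-source Hugging Face transformers library and the small open "gpt2" model rather than GPT-3 itself; the prompt is an invented example and not part of this project.

```python
# Illustrative sketch of generative AI: prompting an open text-generation model.
# The model ("gpt2") and the prompt are examples only; GPT-3 itself is accessed
# through OpenAI's hosted API rather than run locally like this.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Universities can support student belonging by"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)

# The pipeline returns a list of dicts containing the generated continuation.
print(result[0]["generated_text"])
```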
AI for collecting data refers to AI systems that are used to collect and analyse data from various sources, such as social media, surveys, or customer feedback. These systems use natural language processing and machine learning algorithms to extract insights from unstructured data and provide actionable recommendations.
One example of AI for collecting data is the Remesh platform, which uses AI to facilitate real-time conversations with large groups of people. The platform collects and analyses data from these conversations to provide insights into customer preferences, opinions, and behaviours (Remesh, 2023).
In terms of ethical considerations for diversity and inclusion, it is important to ensure that AI systems are developed and trained on diverse and representative data to avoid perpetuating biases and discrimination. This includes considering factors such as race, gender, age, and socioeconomic status when collecting and analysing data.
Additionally, it is important to ensure that AI systems are accessible and inclusive for all users, including those with disabilities or who speak different languages (Buolamwini and Gebru, 2018).
Leveraging AI for data collection: the Remesh platform
Among the various AI platforms available, we chose Remesh for its unique capabilities in collecting qualitative data at a quantitative scale. Remesh is an AI-powered platform that facilitates real-time conversations with large groups of participants. By using natural language processing and machine learning algorithms, Remesh enables researchers to analyse and extract insights from these conversations efficiently.
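For illustration, the sketch below shows the general kind of analysis such platforms perform: grouping open-ended responses into themes at scale. It is not Remesh's actual pipeline; the use of scikit-learn, the sample responses, and the number of clusters are all assumptions made for this example.

```python
# Illustrative sketch (not Remesh's actual pipeline) of grouping open-ended
# responses into themes: vectorise the free text with TF-IDF, then cluster
# the responses with k-means.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

responses = [
    "I often feel lonely on campus",
    "It is hard to join social events because of the cost",
    "I worry about being judged if I share who I really am",
    "Group work makes me anxious about fitting in",
]  # in practice, thousands of anonymous responses

# Turn each response into a weighted word-frequency vector.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(responses)

# Group similar responses together; the cluster count is a choice the
# researcher would tune, here fixed at 2 for the toy example.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

for text, label in zip(responses, kmeans.labels_):
    print(label, text)
```

A researcher would then read the responses in each cluster to name the underlying theme, which is the step where human judgement remains essential.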
Benefits of AI in addressing the attainment gap
Our initiative to use AI for data collection yielded several significant benefits. Firstly, by providing an anonymous platform for students to express their thoughts and opinions, we overcame the barriers of social acceptance that often hinder open discussions in traditional focus groups. Students felt more comfortable sharing their experiences, allowing us to gain valuable insights into the challenges they face.
Secondly, the use of AI reduced the power dynamics that can arise when investigations are conducted by staff members. Students were able to engage in a peer-to-peer discussion, fostering a sense of trust and openness. This approach empowered students to express their true selves and provide insights that we may not have been able to collect through conventional methods.
Harnessing the Power of AI for a Diverse and Inclusive Future
The use of AI, exemplified by the Remesh platform, has proven to be a powerful tool in identifying issues causing the attainment gap in King's Business School. By leveraging AI for data collection, we were able to uncover previously unexplored issues and gain a deeper understanding of students' perceptions of diversity, inclusion, and wellbeing.
Our investigation surfaced key themes that we had not managed to identify through previous, traditional research methods:
- Students’ loneliness
- Students’ fear of fitting in because of financial restrictions
- Students’ fear of disclosure of their true self
Participants may have felt more comfortable discussing sensitive topics such as loneliness, fear of fitting in due to financial restrictions, and fear of disclosing their true selves through a machine interaction rather than with humans, for several reasons:
- Anonymity: Machine interactions, such as those facilitated by AI platforms like Remesh, provide a level of anonymity to participants. They can express their thoughts and feelings without the fear of judgement or social repercussions. Anonymity allows participants to share their experiences and emotions more openly and honestly, especially when discussing sensitive or stigmatized topics.
- Non-judgemental environment: Machines do not have personal biases, emotions, or preconceived notions. Participants may feel that they can freely express themselves without the fear of being judged or misunderstood. This non-judgemental environment can create a safe space for participants to discuss their feelings and concerns without the fear of negative consequences.
- Lack of social pressure: Interacting with machines eliminates social pressures that may exist in face-to-face interactions. Participants may feel more comfortable discussing personal and sensitive topics without the pressure to conform to social norms or expectations. They can freely express their thoughts and experiences without worrying about how others might perceive them.
- Enhanced privacy: Machine interactions provide a higher level of privacy compared to interactions with humans. Participants may feel more secure knowing that their personal information and responses are not directly linked to their identity. This increased privacy can encourage participants to share more personal and sensitive information.
- Reduced vulnerability: Sharing personal experiences and emotions can make individuals feel vulnerable. Interacting with machines can reduce this vulnerability, as participants perceive machines as non-threatening and less likely to exploit or misuse their personal information. This perception of reduced vulnerability can encourage participants to open up and discuss sensitive topics more freely.
It is important to note that these reasons are based on general observations and may vary depending on the specific context and individuals involved.
As we continue to embrace AI technology, it is crucial to remember the value of diversity and inclusion in higher education. AI has the potential to amplify diverse voices and bridge the gaps that exist within our institutions. We firmly believe in the power of AI to create a more inclusive and equitable future.
Dr Ilia Protopapa is a Lecturer in Marketing and Inclusive Education Partner at King’s Business School. Antonis Kazouris is a professional doctorate student in Future of Work at Birkbeck, University of London and, as an alumnus, occasionally partners with King's College London and LSE as an independent researcher. Kunal Prasad Damodare is a postgraduate Banking & Finance MSc student at King's Business School.
References
Bostrom, N., & Yudkowsky, E. (2014). The ethics of artificial intelligence. In The Cambridge Handbook of Artificial Intelligence (pp. 316-334). Cambridge University Press.
Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of the 1st Conference on Fairness, Accountability and Transparency, 77-91.
Huang, M. H., & Rust, R. T. (2021). A strategic framework for artificial intelligence in marketing. Journal of the Academy of Marketing Science, 49, 30-50.
OpenAI. (2020). GPT-3: Language models are few-shot learners. https://openai.com/research/language-models-are-few-shot-learners
Remesh. (2023). How it works. https://remesh.ai/how-it-works/