King’s guidance on generative AI for teaching, assessment and feedback
Supporting the adoption and integration of generative AI.
Whilst acknowledging the ongoing debates about the immediate and longer-term implications of generative Artificial Intelligence (AI), the resources on generative AI at King's start from the assumption that, as part of the wider higher education (HE) sector, we are past the point of choosing whether to integrate AI into teaching, assessment and feedback practices.
Despite legitimate and ongoing ethical and sustainability concerns, and a sense that assessment practices need to be modified swiftly and at scale, we need to consider how we will integrate these tools, how we might realise the opportunities they present and how, at the same time, we can remain adaptable and flexible enough to uphold the integrity of our programmes.
To ensure our students are able to complete their studies responsibly and with integrity, and are equipped to enter a world increasingly shaped by these emergent technologies, King's College London supports considered use of generative AI and is open to evolving teaching, assessment and feedback practices according to need and disciplinary differences.
On this page you will find an overview of key terminology, some contextual information about the tools themselves, their implications, and the direction of travel across the HE sector.
Despite understandable objections from those with years of experience and expertise, and ongoing controversies about the validity of ascribing 'intelligence' to any technology, 'AI' has settled in popular use as a shorthand for all tools that perform practical functions (e.g. gradual automation of aspects of driving, such as automatic braking, or robot-mediated surgical procedures), language functions (such as automated transcription, copywriting or sports commentary) and media functions (e.g. advertising algorithms on social media or auto-tuned audio).
For the purposes of this guidance and ongoing policy, it is useful at present to distinguish this broad-brush shorthand from the relatively new (or at least recently popularised) tools that can be termed 'Generative AI'. Given their recent availability and their likely impact on assessment practices at King's, these tools are the focus of the guidance that follows.
Generative AI includes existing and novel technologies that produce text and media from human 'prompts'. These tools use machine learning algorithms and are trained on scraped data (sometimes extraordinarily large datasets), which is drawn upon to generate outputs in different formats. So-called 'Large Language Models' (LLMs) (e.g. OpenAI's ChatGPT, Google's Gemini, Anthropic's Claude or Microsoft's Copilot) work by 'token prediction': they use probability to determine the most likely next words in a given string of text, based on the prompt input.
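To make the idea of token prediction concrete, the toy sketch below samples a 'next word' from a hand-written probability table. It is purely illustrative and is not how any production LLM works: real models use a neural network to score every token in a very large vocabulary rather than looking up a fixed table, and the example words and probabilities here are invented for demonstration.

```python
# Toy illustration of next-token prediction (NOT how a real LLM is implemented).
# A real model learns probabilities over a huge vocabulary; here we invent a tiny table.
import random

# Hypothetical probabilities of the next word, given the prompt so far.
next_word_probs = {
    "The cell membrane is composed of a lipid": {"bilayer": 0.92, "layer": 0.05, "matrix": 0.03},
    "Generative AI tools are trained on large": {"datasets": 0.7, "corpora": 0.2, "models": 0.1},
}

def predict_next_word(prompt: str) -> str:
    """Sample the next word in proportion to its probability."""
    probs = next_word_probs[prompt]
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights, k=1)[0]

print(predict_next_word("The cell membrane is composed of a lipid"))
# Usually prints "bilayer", but can occasionally print a less likely word:
# a toy analogue of why the same prompt can yield different (and sometimes wrong) outputs.
```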
OpenAI, the company responsible for both ChatGPT and DALL-E, also employs human moderators as part of a training regime known as Reinforcement Learning from Human Feedback (RLHF). Honed and finely tuned prompts (i.e. 'prompt engineering') can be used to set tone, style, audience, depth, breadth and other variables. It is worth noting that these tools are NOT search engines, though increasingly they are being coupled with or integrated into them (GPT-4 and DALL-E 3 are integrated with Microsoft Copilot, for example). Because prediction drives the outputs, text-based generative AI tools are prone to what is commonly referred to as 'hallucination': from one generation to the next, even given the same prompt, one output may be accurate and another erroneous, though notable improvements are evident with each iteration.
Because any generated text is built on a probability model, the text is not actually lifted or copied from anywhere but derived from the corpus of data on which the model was trained. This renders it - like a human-authored essay purchased from an 'essay mill' - undetectable by existing plagiarism software and, at the time of writing, very easily made undetectable to even the most advanced generative AI detection tools. Nevertheless, issues around how these tools are trained, and around copyright, remain very controversial, and some organisations have blocked companies from sourcing web-based data to train their models.
LLMs can also generate code and are already widely used for code error detection and correction. They can summarise documents, rewrite in different formats and generate written text in multiple forms, including tables, and, with an increasing range of generative AI models available, they can do much more than text production (e.g. image interpretation, chart production, data analysis, slide production and so on). Whilst the churning out of reams of text or code is what initially catches the eye, the productivity opportunities are often overlooked. For example, ChatGPT or Google's Gemini (formerly Bard) can generate a summary of a video, of a given length and tone, based only on its transcript, create jargon and definition lists, or generate quizzes based on a given text; a hedged illustration of this kind of use is sketched below. The natural-sounding language and fluency impress, especially on first use, and can easily convince users of an apparent intelligence and understanding that are simply not there; notably in chatbot interfaces, this leads users to ascribe human qualities to these tools.
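As a concrete illustration of the kind of productivity use described above (and of the prompt engineering mentioned earlier), the sketch below asks a chat model to summarise a transcript at a set length, tone and audience via the OpenAI Python SDK. It is a hedged example: the model name, the transcript variable and the prompt wording are illustrative assumptions, and the same result can be obtained simply by pasting the transcript into a chat interface.

```python
# Illustrative sketch only: summarising a transcript with the OpenAI Python SDK (openai>=1.0).
# The model name, prompt wording and transcript are assumptions for demonstration;
# an API key must be available in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

transcript = "..."  # the lecture or video transcript text would go here

response = client.chat.completions.create(
    model="gpt-4",  # illustrative; any available chat model could be used
    messages=[
        {"role": "system",
         "content": "You are a helpful teaching assistant writing for first-year undergraduates."},
        {"role": "user",
         "content": "Summarise the following transcript in roughly 150 words, in a neutral tone, "
                    "then list five key terms with one-line definitions:\n\n" + transcript},
    ],
)

print(response.choices[0].message.content)
```

Note how the prompt itself sets the length, tone and intended audience of the output; this is the same principle whether the request is typed into a chat window or sent through code.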
It is important to note that much guidance across the sector includes acknowledgements of limitations that, in such a fast-moving - and potentially lucrative - landscape, have already been overcome. For example, you may read that ChatGPT is not web-connected, that its training data stops in 2021, or that LLMs cannot cite genuine sources. Whilst this remains true of the 'free' version (GPT-3.5), the larger, paid model (GPT-4) and other similar tools are web-connected, can generate accurate references and do not have a 2021 training-data cut-off.
Image generation tools likewise use text-based prompts (increasingly supplemented with other images, such as photographs) to produce 'original' artistic or photo-realistic images. Like the LLMs, however, they depend on their training corpus and, as a result, one of the critical issues is that the outputs reflect biases in the training data. For instance, what do you notice about these generated images from the prompt 'A biochemistry professor at a UK university'?
Generative AI biases: A ‘Midjourney’ generated image, 30 June 2023, M. Compton
An ongoing and compounding issue, therefore, is that generated text or images will themselves become sources for future generative AI datasets, potentially buttressing and consolidating these biases. For an in-depth consideration of responsible AI education, see Bentley et al.'s (2023) framework.
The JISC Generative AI primer provides excellent further explanations and details about generative AI, with some examples of educational use; also look out for the King's short course on FutureLearn.
A recent poll (April 2023, Educause) revealed an increasingly optimistic disposition towards generative AI amongst higher education stakeholders: 67% of respondents (n=440) had already used generative AI for their work. In the same poll, however, only 34% of respondents reported institution-level policy updates.
A summary from The Evening Standard of some London universities suggests only limited distinctions in approach between institutions, with most linking ongoing policy modification to existing academic integrity policies and extending the definition of plagiarism to include the use of tools to gain an unfair advantage.
Despite highly publicised bans of some tools at national level (e.g. Italy, now revoked) or sector level (New York schools, also now revoked), in the UK there seems little appetite for a ban at institutional level, given the impossibility of enforcement. Whilst King's has taken the decision to disable the controversial AI detector in Turnitin - in line with the majority of UK HE institutions - the possibility of reliable detection tools remains a hope in many quarters.
As acknowledgement grows across HE that profound changes to what we teach, how we teach and the ways we assess are inevitable, so there is a growing realisation that this is a 'reality that we must embrace' and that we must develop appropriate AI literacies according to our disciplines (Simpson, Thanaraj & Durston, 2023). Some institutions have invested considerable resource in putting policy and practice foundations in place (such as guidance from Monash University), and sector organisations have also begun to collate useful guiding information (see, for example, QAA, Generative AI tool demos from JISC, the UNESCO quick start guide to ChatGPT, WonkHE articles and the HEPI summary).
Inequities already exist that King's has no, or only very limited, scope to address. Personal student wealth, for example, means that some students will have superior working environments and devices, and less need to work for subsistence. Because some unsupported generative AI tools carry access costs, King's staff should avoid compounding these inequities by designing work or assessments that require students to use unsupported tools or software not available from within King's, especially where there is a clear divide between paid products and free, advertising-supported or freemium ones.
Similarly, access to some tools varies according to geographic location, so such tools should never be a requirement of assessed work. Whilst we continue to explore the possibility of purchasing licences for some products, we must adhere to existing due diligence processes and, at present, suggest that licensing is managed at a local level according to disciplinary needs. As the AI landscape changes, some tools might require subscriptions; King's needs to plan for this, ensuring all students and staff have fair access to essential AI resources.
This is a swiftly moving landscape, but all staff and students at King's have access to text and image generation via Microsoft Copilot, so long as they log into their KCL Microsoft account. Our Enterprise licence means that use of Copilot comes with commercial data protection, making it a more secure alternative to other generative AI tools.
Please note that Microsoft are developing several 'Copilots'; the text and image generator is just one of these, and was formerly known as 'Microsoft Bing Chat'.
King's Vision 2029 commits us to refreshing our pedagogic practices and assessment methods in response to global challenges, changing employment landscapes, technological innovations and changes in student demographics and needs. Whilst we have profound questions to address, the answers to which will only become clear as the technology evolves, our focus will be on how the processes and outputs of student work are designed and considered. Typical proxies for evidencing thinking, such as the essay, may need radical rethinking. It is also true that embedding generative AI tools can elevate students' learning, sharpening critical thinking and exposing them to practical AI applications. However, we acknowledge that this rapidly evolving landscape carries significant development and training implications, all against a backdrop of profound changes resulting from the COVID-19 crisis.
Adapting these methods depends on individual disciplines, so whilst broad and generalised support will be made available, interventions at programme level may work better and faster. Faculty should have the liberty to create AI-integrated materials and assessments. We strongly encourage all programme and module leads to review assessment briefs and the guidance on academic integrity shared with students in light of the published guidance, notably the acknowledgement statements and definitions of acceptable use. Professional, statutory and regulatory bodies (PSRBs) will also need to be engaged as we navigate this transition, specifically concerning accreditation and essential requirements in terms of what is taught and how programmes are assessed.
Like all our research and teaching practices, the ways in which we develop stakeholder AI literacy, and the ways in which we respond, are likely to be subject to the many layers of formal and informal evaluation and scrutiny currently employed.
It is also vital that we include generative AI-focussed evaluation mechanisms that will help us make adjustments and find ways to generalise effective practices. We will seek student involvement in these processes from the start within the central working groups and committees, and pledge to engage students in ongoing discussions about governance and practice. King's is funding a number of generative AI research projects and hosting events focussed on evaluation and dissemination of interventions and pilot schemes.