King’s guidance on generative AI for teaching, assessment and feedback
Supporting the adoption and integration of generative AI.
This section outlines King's approach to the use of generative AI tools, which aligns with the principles formulated by the Russell Group. It underscores the collective recognition among educational institutions of the importance of developing AI literacy among students and staff, of using AI ethically, and of maintaining academic rigour and integrity.
King’s contributed to and subscribes to the Russell Group’s five principles on the use of generative AI tools in education:
1. Universities will support students and staff to become AI-literate.
2. Staff should be equipped to support students to use generative AI tools effectively and appropriately in their learning experience.
3. Universities will adapt teaching and assessment to incorporate the ethical use of generative AI and support equal access.
4. Universities will ensure academic rigour and integrity is upheld.
5. Universities will work collaboratively to share best practice as the technology and its application in education evolves.
Each of these has implications for King's staff and for their students. For this reason, governance has been framed with these statements in mind: the principles should be acknowledged as the framing and launch point for AI literacy support that enables staff and students to be critically reflective in their interactions with generative AI, and should inform any programme or assessment modifications we seek to make. Each of the principles is explained further on the Russell Group website.
At King’s we are committed to ensuring that both staff and students receive clear guidance on the ethical dimensions and data security issues of generative AI, and on how these relate to existing and evolving policy. In line with the Russell Group principles, the following clarifications to policy cover data privacy, potential bias, accountability for generated information and broader ethical issues. All use should be cognisant of existing King’s policy.
We are witnessing incredibly rapid change and a potential revolution in ways of working, yet a lack of transparency in the training of generative AI models, training-data biases apparent in outputs, intellectual property ownership disputes and data privacy concerns combine to present a complex ethical landscape. On the one hand we have an obligation to best support and prepare our students for what is ahead; at the same time we need to recognise the many unresolved (even unresolvable) issues and the concomitant implications for academic integrity and skills development.
Existing ethical codes apply to the adoption and use of generative AI. AI tools generate responses based on human-created data, and as such may replicate societal biases and stereotypes embedded in the information on which they were trained. Whilst companies hosting generative AI tools tend to claim that outputs are original and therefore not plagiarised, this is an area of ongoing complexity and dispute, and thus represents ongoing risk. Furthermore, some AI tool developers have outsourced reinforcement learning from human feedback (RLHF) to low-wage workers. Finally, the training of generative AI tools involves substantial carbon emissions and water consumption, with potentially profound environmental impacts.
Our goal should be to follow and champion ethical and critical practices that seek authentic, accurate and safe use of any generative AI tool, in ways that are sustainable and value user empowerment.
See Bentley et al.'s working paper for more detail on ethical implications and a broad framework for responsible use.
Even if an AI tool isn't explicitly trained on user inputs, the information that staff and students enter into the system carries potential risks to privacy and intellectual property. For this reason, it is important that King’s staff do not, without explicit permission, use unsupported tools such as ChatGPT to scrutinise student work. In addition, and despite the name of the leading company in this domain, there is great secrecy about how models are trained, the sources of the training data, and how customer data within products is used. If you are logged into your King’s account and use Microsoft Copilot, your inputs (‘prompts’) and outputs are not shared and your data cannot be used to train foundation models.
AI tools derive their data from various sources, some of which may be unreliable or incorrectly referenced. Moreover, unclear prompts or information can be misconstrued by the AI, leading to erroneous or out-of-date outputs. Users must therefore bear responsibility for the accuracy of the information produced by these tools in different contexts.
King’s position can be summed up by this line in the academic integrity policy:
Submitting text generated by technology/artificial intelligence as their own, without written permission from their department, is considered misconduct under the offences of third-party involvement or text manipulation, if it provides undue advantage or interferes with assessment of the student's own understanding.
The advent of widely available generative AI does little to change this in the short term, though as tools such as Microsoft Copilot become increasingly embedded within products we use daily, such as Microsoft Word, the blurring of authorship will present additional challenges. The ‘given’ that students’ own words, ideas and judgements provide the substantive core of summative submissions remains and should of course be reinforced, restated at intervals and unpicked at programme level with new students at the earliest opportunity.
The student guidance on academic honesty and integrity should be connected to discussions on appropriate use of generative AI.
The complication with generative AI is that detection tools are at best unreliable and, at worst, may present false positives. Misunderstandings about how large language models work have led academics at other institutions to use tools like ChatGPT to gauge student integrity, resulting in wide-scale false accusations. King’s took the decision not to enable the AI detection score in Turnitin due to concerns about its reliability and potential for false positives. As things stand, AI detection is not an option available to us. It is, however, a potentially lucrative market, so we should anticipate developments from vendors seeking to secure our business.
Therefore, much as with the mechanisms we have long used to address suspicions of contract cheating in submitted work, we are reliant on highly subjective measures. For this reason, it is imperative that students are never ‘accused’ of cheating or dishonesty, especially where the suspicion is that work has been created with generative AI.
However, whilst we hope that much will be done pre-emptively, through assessment design, engaging students in discussion about technology and ethics, and proffered support, we recommend that induction programmes, KEATS resources and assignment briefs reiterate that using generative AI to gain an unfair advantage, and/or without proper acknowledgement, falls within the academic misconduct guidance. Students suspected of using generative AI may, in the first instance, be invited to an investigatory meeting, with procedures aligned to existing academic misconduct procedures.
King's staff are encouraged to review their assessment briefs and to ensure clarity around acceptable use in that documentation. This could be done at programme, module or individual assessment level according to needs and ways of working within each department. See guidance on ‘defining appropriate use’ for further information.
Please note, at King's we will seek to define acceptable/fair use rather than trying to specify what is prohibited. An assessment is designed both to develop and to evaluate your progress, so it is never appropriate to submit chunks of text or other media duplicated from another source without clear acknowledgement. Because tools like ChatGPT generate text using a prediction model, they are not quotable sources and are not appropriate places to focus research.
King’s College London, unlike some other universities, does not require students to cite generative AI as an authoritative source in the reference list, for much the same reason that you would not be expected to cite a search engine or a student essay website, or to be over-dependent on synoptic, secondary source material. However, as we learn more about the capabilities and limitations of these tools, and as we work together to evolve our own critical AI literacies, we do expect you to be explicit in acknowledging your use of generative AI tools, such as the Large Language Models Microsoft Copilot (available via your KCL account), Google Gemini or ChatGPT, and of any other media generated through similar tools.
You should select one of the following two statements, complete it, and append it to your references or somewhere prominent within your submission. Please note that, so long as acknowledged use falls within the scope of appropriate use as defined in the assessment brief/guidance, this will not have any direct impact on the grades awarded.
The declarations in (2) below are an important reflective step and should be considered necessary.
1. I declare that no part of this submission has been generated by AI software. These are my own words.
Note. Using software for English grammar and spell checking is consistent with Statement 1.
[or]
2. I declare that parts of this submission include contributions from AI software, that this aligns with acceptable use as specified in the assignment brief/guidance, and that it is consistent with good academic practice. The content can still be considered my own words. I understand that as long as my use falls within the scope of appropriate use as defined in the assessment brief/guidance, this declaration will not have any direct impact on the grades awarded.
I acknowledge use of software to [include as appropriate]:
(i) Generate ideas or structure suggestions, assist with understanding core concepts, or support other substantial foundational and preparatory activity.
[insert AI tool(s) and links and/or how used]
(ii) Write, rewrite, rephrase and/or paraphrase part of this essay.
[insert AI tool(s) and links]
(iii) Generate some other aspect of the submitted assessment.
[insert AI tool(s) and links and/or brief details]
It is important to model open and honest practices for our students, who might reasonably expect us to declare where generative AI software has been used and how.
Suggested wording:
[name/s/programme or module team] acknowledge/s the use of [named generative AI software] to [define purposes]
In line with the Russell Group statement that universities will support students and staff to become AI-literate, King’s has a responsibility to provide support to all students on how to engage with generative AI during their time at King’s and beyond. King’s sees this as a key academic skill that students will need to develop to succeed in their studies and careers.
To enable this, an understanding of the development needs of students is key. Some of these needs will become evident quickly; others will emerge over the coming years. Some support, largely around responding to the King's-wide policy, will be provided at university level, and other support at faculty and/or department level. As with other academic skills, students' needs will vary. A key area of support that all programme and module leaders can provide is to ensure that all assessment guidance includes a clear statement about appropriate use (see guidance on ‘defining appropriate use’ for further information).
The existing providers of academic, digital and data literacy skills support (King’s Foundations, CTEL, Libraries and Collections, and the Centre for Doctoral Studies) will need to work closely with King’s Academy to provide coherent and comprehensive support and skills development provision to King's students.