King’s guidance on generative AI for teaching, assessment and feedback
Supporting the adoption and integration of generative AI.
Artificial Intelligence, or AI, refers broadly to any technology or system that can reason and/or adapt, sometimes on its own, to achieve certain goals. Examples include predictive text on a smartphone, which guesses what you will write next, and robots that can analyse their surroundings and then decide how to act autonomously within their environment.
AI systems can exist purely in the virtual world, in things like voice assistants (e.g. Siri or Google Home), image analysis tools, search engines, and technology that can recognise speech and faces.
AI can also be embedded in hardware and objects in the physical world around us, such as advanced robots, self-driving cars, drones, and devices connected to the Internet of Things.
Within AI, generative AI refers specifically to applications that are trained on vast amounts of data and learn to perform certain tasks, enabling them to create content such as text, code, images, video and audio. Generative AI tools (exemplified here but not necessarily endorsed by King's) include:
Microsoft Copilot, ChatGPT, Google Gemini and Claude are AI systems that can generate human-like text by predicting probable next words after being given a prompt. Writing prompts carefully in order to fine-tune the outputs is referred to as ‘prompt engineering’.
These tools are NOT databases of knowledge. They ‘predict’ combinations of plausible words; they should never be used instead of, but may be used alongside, conventional approaches to sourcing information.
Please note that Microsoft Copilot is available to all King's students with your KCL Microsoft login credentials. Make sure you are logged into your KCL account and switch ‘safe search’ to ‘moderate’ to use it.
DALL-E, Midjourney and Stable Diffusion are examples of AI systems that create images from text prompts by analysing vast datasets of image-text pairs and producing novel images.
Descript and Murf.ai are examples of AI tools that can generate human-like audio narration and conversation from text.
GitHub Copilot and TabNine are examples of AI coding assistants that suggest completions for code based on analysing large codebases. These are but a few of the thousands of tools available.
Please note that there are ongoing controversies and challenges across the globe about the use of text and images generated by AI, with many arguing that such use breaches copyright rules.
King's does not ban the use of any type of AI. It is increasingly part of the wider world and is changing many aspects of life, including the jobs you are in or will progress into. King’s is a signatory to the Russell Group principles on the use of generative AI tools in education.
We do, of course, expect students to critically engage with ideas and content produced by or with generative AI, to take ownership of outputs, and to adhere to academic integrity policies to ensure appropriate and ethical use of AI.
Above all, any work submitted must represent a genuine demonstration of your own work, skills and subject knowledge, adhere to the guidelines of the assessment task, and respect the university's values of academic integrity and honesty.
Many of the tools we are already familiar with have built-in AI elements, and this will increase rapidly over the coming months and years.
The key distinction between generative AI and, for example, a search engine is its ability to create completely novel content at scale, as opposed to just retrieving existing data. However, where outputs are fully AI-generated, they lack human understanding, critical engagement and precision, and often exhibit a bland style of expression that, whilst fluent and impressive at first glance, is often flawed. The tools may generate biased, incorrect or misleading information, so it is good practice to check information via other sources.
Generative AI can be extremely useful if used appropriately, but it is important to be aware of its limitations compared to human creativity and cognition. Used properly and strategically, it can augment our creativity and productivity. The key is developing the skills to harness it effectively as part of your own learning process, while maintaining full academic integrity through proper attribution and transparency.
Used properly, generative AI tools can:
We understand that common tools (such as grammar and spell checkers) are already embedded in the software you use for assignments and, unless measurement of spelling, for example, is itself an aspect of an assessment, we would not anticipate any issues. The same principle applies when considering whether AI is appropriate to use within your assessment.
You should be given clear guidance on what level of generative AI use is appropriate in any given assessment. Things are moving quickly, so if you are uncertain, please do ask. We have suggested four broad levels, which your Programme and Module leaders may adjust to the specifics of an assessment:
Level 1: Routine and established use of tools such as auto-transcription, spell checkers and grammar checkers.
Level 2: Use of generative AI for clearly delineated tasks, as appropriate/allowed/recommended.
Level 3: No specific restrictions on use, but with a requirement to track the key stages and tools utilised.
Level 4: AI use is a feature of the assessment itself, where the use of generative AI is a focal aspect of the assessment.
To restate: in no circumstance is it appropriate to use generated content verbatim without clear indication and acknowledgement, or to effectively outsource the writing task; this is just as inappropriate as getting someone else to write something and claiming it as your own.
We suggest three golden rules:
If you follow these three rules, your assessed outputs are unlikely to breach academic integrity guidance on the use of generative AI.
King’s College London, unlike some other universities, does not require students to reference generative AI as an authoritative source in the reference list, for much the same reason you would not be expected to cite a search engine or a student essay website, or to be over-dependent on synoptic, secondary source material.
However, as we learn more about the capabilities and limitations of these tools, and as we work together to develop our own critical AI literacies, we do expect you to be explicit in acknowledging your use of generative AI tools such as Microsoft Copilot (available via your KCL account), Google Gemini, ChatGPT, or any other generative AI tool.
King's requires students to acknowledge any use of generative AI tools in coursework by including a declaration statement along with your references. Please note that, so long as acknowledged use falls within the scope of appropriate use as defined in the assessment brief/guidance, it will not have any direct impact on the grades awarded. Unless alternative wording is suggested in an assessment brief, you should append one of the following:
1. I declare that no part of this submission has been generated by AI software. These are my own words.
Note. Using software for English grammar and spell checking is consistent with Statement 1.
[or]
2. I declare that parts of this submission include contributions from AI software, that this aligns with acceptable use as specified in the assignment brief/guidance, and that it is consistent with good academic practice. The content can still be considered my own words. I understand that as long as my use falls within the scope of appropriate use as defined in the assessment brief/guidance, this declaration will not have any direct impact on the grades awarded.
I acknowledge use of software to [include as appropriate]:
(i) Generate ideas or structure suggestions, assist with understanding core concepts, or support other substantial foundational and preparatory activity.
[insert AI tool(s) and links and/or how used]
(ii) Write, rewrite, rephrase and/or paraphrase part of this essay.
[insert AI tool(s) and links]
(iii) Generate some other aspect of the submitted assessment.
[insert AI tool(s) and links / include brief details]
Inappropriate use without attribution is considered academic misconduct. For further details please see the Academic Misconduct Policy and Academic Misconduct Procedure.
Potential penalties range from formal warnings and resubmission of the coursework to suspension or expulsion.
If inappropriate use is suspected, a student may be called to an academic integrity meeting (AIM) with academic staff to discuss authorship concerns and resolve any issues. The purpose of the AIM is to discuss the student’s assessment, and the following topics would be explored:
• How the student prepared the assessment task
• Feedback on the academic integrity concerns identified
• Any other circumstances impacting the student’s submission which they may wish to raise.
Staff members discuss the concerns with the student, and the student is given an opportunity to respond, share their perspective and reflect on their work process.
In line with the Russell Group statement that universities will support you to become AI-literate, King’s sees this as a key academic skill that you will need to develop to succeed in your studies and career.
The existing providers of academic, digital, and data literacy skills support (including King’s Foundations, CTEL, Libraries and Collections, and the Centre for Doctoral Studies) will work closely with King’s Academy to provide coherent and comprehensive support and skills development. You can also access the Generative AI in Higher Education FutureLearn course, which has contributions from staff and students from across King’s. If you have never accessed FutureLearn while at King’s, please make sure to sign up for your free FutureLearn Campus access first.