King’s guidance on generative AI in doctoral assessment
Guidance for research students, supervisors and examiners.
This guidance is designed to support research students, supervisors, and examiners by minimising, as far as possible, ambiguity around permitted uses of generative AI tools in doctoral assessment, whilst allowing for refinement in response to future feedback, developing norms, and further advances in AI technology. It should be read alongside the King’s guidance on generative AI for teaching, assessment and feedback, the King’s Framework for Postgraduate Research Awards, and related policies.
We are focused here on generative AI tools that produce text based on human prompts, such as ChatGPT, and their use within the thesis writing process by research degree students. For a fuller definition of “generative AI” and further context, please refer to Generative AI: context and definitions in the King’s guidance on generative AI.
A doctorate is both a research project and a degree awarded on the basis of an assessment – and as such, doctoral researchers are expected to be mindful of the principles of both research integrity and academic integrity.
Generative AI tools undoubtedly have the potential to play a transformative role in the research process. Norms and rules around the use of such tools will take time to develop, and are likely to vary widely depending on disciplinary context. Issues relating to confidentiality, copyright, data protection, and other elements of research integrity will vary in importance based on the nature of individual projects. As such, an attempt to create a single exhaustive guide to the use of generative AI tools throughout the doctoral research process would be ill-judged.
The use of AI tools as part of the research process – to analyse and draw insights from data, for example – is therefore largely beyond the scope of this guidance. At a minimum, such usage would, like any other element of research design, be clearly outlined by authors in a ‘Methods’ section, subject to ethical review where appropriate, and conducted in accordance with principles of research integrity as well as disciplinary norms. For doctoral research projects, examiners will scrutinise such aspects of the research design as they would any other element of a project.
The examination of the final thesis by examiners is a form of summative assessment. A fundamental principle of academic integrity at King’s is that any assessment submitted by a student must be their own work. As the King’s guidance on generative AI for teaching, assessment and feedback states:
"An assessment is designed to both develop and evaluate your progress so it is never appropriate to submit chunks of text or other media that are duplicated from another source without clear acknowledgement. Because tools like ChatGPT are generating text on a predication model they are not quotable sources and are not appropriate places to focus research."
As it is phrased in the King’s Framework for Postgraduate Research Awards, a doctoral thesis is a postgraduate researcher’s ‘own account of their investigations’ – research that they have undertaken under the supervision of their supervisory team. During the final oral examination, one of the responsibilities of the examiners is to establish to their own satisfaction that this criterion has been met, and that the thesis is genuinely the student’s own work.
Doctoral students should be particularly mindful of this assessment criterion when considering making use of generative AI tools. However, this does not mean that the use of such tools is prohibited. Just as current King’s guidance allows for limited use of proofreaders by doctoral students, the final thesis can still be considered the student’s own work provided that generative AI tools have been used appropriately.
Doctoral students can use generative AI tools in an assistive role to clarify their own writing and help ensure that their own meaning is not misrepresented due to the quality and standard of the English used. This may go beyond simply correcting spelling and basic grammar, and can involve rewriting, rephrasing and/or paraphrasing parts of the thesis.
However, generative AI tools should not be used to correct factual errors, and students must carefully review and edit any outputs from generative AI software before incorporating them into their thesis.
An approved reasonable adjustment may also involve the use of, for example, generative AI writing programmes, paraphrasing software, or machine translators.
Any use of generative AI that exceeds the scope of the preceding examples, or is not appropriately declared, risks compromising the student’s authorship of the work submitted, and may place them in breach of the Academic Misconduct Policy.
Generative AI tools can undoubtedly be used to create superficially plausible text that, if inserted into a thesis without revision or reflection, would be difficult or impossible to defend in an oral examination. For example, generative AI can quite easily produce a seemingly convincing literature review that is in fact incorrect, incomplete, or biased.
Pasting such a literature review directly into the thesis from an AI tool would constitute academic misconduct. However, even if such text was reworded to the extent that it could be considered the student’s own words, this would still constitute poor practice and risk an undesirable outcome in the oral examination.
To meet the criteria for the award of a doctorate, the thesis must demonstrate a clear contribution to knowledge – evidence of originality that forms a distinct contribution to the knowledge of the subject. It should also demonstrate a deep and synoptic understanding of the field of study, in part through its critical assessment of the relevant literature.
After submission, examiners assess whether the thesis meets these criteria, informed by discussions in the oral examination. Candidates may be asked, for example, how they identified and selected the literature discussed in the thesis, why particular sources were included or omitted, or how their findings constitute an original contribution to the field.
Where generative AI has been used to an extent that it compromises the student’s authorship of the thesis, a candidate is likely to find themselves unable to satisfactorily answer such questions. Moreover, even where a student’s authorship has not been compromised, they might still struggle to answer such questions to their examiners’ satisfaction if they have followed poor practice in their use of generative AI tools – for example, by relying exclusively on information provided by the tool without reference to other sources.
Many generative AI tools explicitly state that user inputs will subsequently be used to train future models. Content uploaded to such a tool could therefore appear in the outputs of other users’ queries, which raises potential privacy concerns where sensitive data is involved. In the case of a doctoral thesis, this could amount to putting your original contribution into the public domain prior to publication.
Before uploading any part of their research project into a generative AI tool, postgraduate research students must discuss potential confidentiality, copyright, data protection, and ethical considerations with their supervisory team, and receive approval to proceed.
This discussion should consider relevant University policies, any local faculty/department guidance, and any additional guidelines produced by Research Governance, Ethics and Integrity. Where applicable, funding body guidance should also be taken into account.
This discussion is necessary because the potential implications of uploading data into a generative AI tool can vary widely depending on the nature of the research. Consider, for example, a doctoral project analysing commercially sensitive data shared by an industry partner, one drawing on confidential patient records, or one based on interviews conducted under a promise of anonymity.
In each case, generative AI tools should be used with particular caution, or not at all, and it is possible that the supervisory team will be aware of relevant factors that the student is not. The terms of data sharing agreements or ethical approvals may, for example, explicitly prohibit the use of generative AI tools.
Note: At the time of writing, if you use Microsoft Copilot whilst logged in to your King’s College London Microsoft account, Copilot will not store your inputs or use them to train future models. This can mitigate some – but not necessarily all – of the privacy and confidentiality risks identified above.
This discussion should also leave the supervisory team satisfied that the planned use of generative AI tools would not breach the spirit of the 'own work' requirement. That is, that the doctoral student intends to use the tool to clarify their own writing and help ensure that their own meaning is not misrepresented due to the quality and standard of the English used, and that the thesis will remain a genuine account of the research they have undertaken.
Following this discussion, the terms of any supervisory approval should be documented in writing, either as part of the agreed minutes of a supervisory meeting, or in an email between the supervisory team and the student.
King’s College London, unlike some other universities, does not require students to cite generative AI as an authoritative source in the reference list, for much the same reason that you would not be expected to cite a search engine or a student essay website, or to rely heavily on synoptic, secondary source material.
However, as we learn more about the capabilities and limitations of these tools, and as we work together to evolve our own critical AI literacies, we do expect students to explicitly acknowledge their use of generative AI tools – large language models such as Microsoft Copilot (formerly Bing Chat, available via your KCL account), Google Bard or ChatGPT – and of any other media generated through similar tools.
This approach is consistent with King’s guidance on generative AI and assessment. It also aligns with the developing guidelines of academic publishers, in which authors are expected to be transparent about their use of generative AI tools, and to take full responsibility for the text they submit for publication. Under these guidelines, generative AI software cannot be listed as an author, because AI tools cannot take responsibility for the content of a submission; instead, the use of AI should be declared through an acknowledgement statement rather than through authorship or citation.
When postgraduate research students submit their thesis to the Research Degrees team, they are also required to submit a completed “RD2 Declaration” signed by the student and their supervisors. On this form, students will now be required to select one of the following two declarations:
I declare that no part of this thesis has been generated by AI software. These are my own words, and all references are cited accordingly.
Note: Using software for English grammar and spell checking is consistent with this statement.
I acknowledge the use of AI software to rewrite, rephrase and/or paraphrase parts of this thesis to ensure the quality and standard of the English used. I declare that my use of AI software is consistent with the research degree award criteria outlined in the Framework for Postgraduate Research Awards. This thesis is a genuine account of the research I have undertaken, and the content can still be considered my own words, with all references cited accordingly.
[Please name and insert links to specific AI tool(s) used]
Please note that both of these declarations describe permitted approaches to thesis authorship for King’s research degrees.
Find out more by reading some example scenarios, which have been designed to help clarify the guidance above. If you have suggestions for additional case studies, please contact doctoralstudies@kcl.ac.uk.