King’s guidance on generative AI for teaching, assessment and feedback
Supporting the adoption and integration of generative AI.
The following scenarios have been designed to help clarify King's generative AI guidance for doctoral students, supervisors and examiners.
If you have suggestions for additional case studies, please contact doctoralstudies@kcl.ac.uk.
Your supervisor has suggested that your discussion of research paradigms in your introduction needs to be clearer.
You are finding it difficult to articulate the difference between ontology and epistemology, and how they relate to your research project, so you ask ChatGPT to provide a concise definition of each term.
You’re pleased with its response, and don’t feel you can write clearer definitions yourself, so you paste them into your final thesis. When you submit your RD2 form, you declare that you have used AI software.
As alternative approaches, you could:
The student wishes to run some sections of their thesis through ChatGPT to ensure their argument is expressed as clearly as possible.
You are somewhat wary about the use of generative AI tools, as you are not very familiar with them, so you are unsure how best to advise the student.
When completing your RD2 form to accompany your submission, you see that you are required to declare whether or not you have used generative AI tools when writing your thesis.
This isn’t something you’ve really thought about before. You’ve certainly never used ChatGPT. But you did use Microsoft Word to write your thesis, and Word does often suggest correct spellings or different phrases while you are typing, which you sometimes accept and sometimes ignore. You also use Mendeley to organise your papers and references, which often suggests related papers that might be relevant to your project, some of which have proved very useful.
All of this leaves you feeling anxious and unsure which declaration to select on the form.
You first drafted this chapter early in your second year, and one section in particular contains a lot of factual information that you haven’t revisited since then. To reassure yourself that there are no silly inaccuracies in the text, you paste the section into ChatGPT and ask it to correct any errors.
You’re relieved when it identifies two incorrect dates and corrects a few other small details, and you amend the chapter accordingly.
Reading the thesis in preparation for the student’s oral examination, you have some concerns about one chapter in particular. The student cites an extensive range of relevant literature, but in doing so they sometimes seem to miss key nuances of certain papers. In some cases, there are outright errors that suggest a lack of understanding.
Although you haven’t engaged with it closely, you’re aware that there is a growing debate about the impact of generative AI on research integrity. Perhaps the student has used ChatGPT to generate their literature review, and this has caused the errors?
A colleague suggests uploading the section of the thesis into ChatGPT to see if it can detect any AI-generated text.
In the case study above, potential issues have been identified relating to the criterion that a thesis should 'demonstrate a deep and synoptic understanding of the field of study through its critical assessment of the relevant literature'. These issues are best addressed by the examiner asking questions focusing on this aspect of the thesis during the oral examination, giving the student the opportunity to demonstrate their knowledge and understanding.
The examiners would then use their judgement to determine whether this issue will influence the outcome of the oral examination (for example, they might request revisions to this section of the thesis).