College Guidance on the use of generative AI tools (e.g. ChatGPT)

The information outlined on this webpage was considered by Senate at its meeting on 1 March 2023.

Several natural language processing AI models, such as the generative AI ChatGPT, have come to prominence in recent months. These models represent a huge step forward in accessible AI and will develop substantially and quickly, likely becoming something we use frequently in our everyday lives.

For staff and students, these AI models present both opportunities for our education and risks to the integrity of our assessments.

Jisc has also recently released a primer on generative AI, which staff and students may find useful to consult.

A working group will be established to explore the development, opportunities and implications of these models; details of the group's work will be published on this webpage. Please check back for further updates.

 

Frequently asked questions

The perceived ability of these tools to ‘do our work for us’ has prompted concern about the implications for academic integrity should students submit AI-generated work as their own. The focus on problem-solving in STEMMB subjects and the range of Imperial’s assessment types limit these AI models’ capability to produce highly refined answers to our assessments, but their impact on quality assurance remains a concern.

These models have ingested information available on the internet. In the case of ChatGPT this ingestion only covers information prior to 2021, meaning it has little ‘knowledge’ of current events. The models display limited success in handling mathematical information and code, and because they generate text by prediction rather than retrieval, they often cannot distinguish accurate references from fabrications. However, it can be expected that their power and accuracy will continue to develop rapidly.
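
As a simplified illustration of the next-word prediction described above, the short Python sketch below builds a toy bigram model from a small sample text and uses it to continue a prompt. This is a minimal sketch of the general principle only, not of how ChatGPT itself is implemented; the sample text and names are invented for the example.

    from collections import Counter, defaultdict

    # Toy training text, invented for this illustration. A real model
    # ingests vast amounts of internet text rather than a few sentences.
    corpus = (
        "the model predicts the next word "
        "the model predicts the most likely word "
        "the next word follows the previous word"
    ).split()

    # Count how often each word follows each preceding word (a bigram model).
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict_next(word):
        """Return the word most often seen after `word` in the training text."""
        candidates = following.get(word)
        return candidates.most_common(1)[0][0] if candidates else None

    # Continue a prompt word by word, always choosing the most likely next word.
    word, generated = "the", ["the"]
    for _ in range(5):
        word = predict_next(word)
        if word is None:
            break
        generated.append(word)

    print(" ".join(generated))
    # Prints fluent-looking text that reflects only statistical patterns in
    # the training data, which is why such models can produce plausible
    # fabrications.

Real systems predict over far larger vocabularies and contexts, but the same principle explains both their fluency and their tendency to invent plausible-sounding details, such as references.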

Generative AI models can mimic human language in response to human-entered prompts. In some contexts, they can use such prompts to create detailed written responses that reflect general knowledge of the subject matter.

Further information on ChatGPT usage

Information for students

  • AI models are powerful and can be an effective way to check the quality of your written work, prompt new ideas, or generate simplified explanations of complex topics to support your learning.  
  • Submitting work or assessments created by someone or something else as if it were your own is plagiarism and a form of cheating; this includes AI-generated content. Please refer to the College’s Academic Misconduct Procedures for further information.
  • To maintain quality assurance, your department may choose to invite a random selection of students to an ‘authenticity interview’ on their submitted assessments. This means attending an oral examination on your submitted work, where you will be asked about the subject or how you approached the assignment, so that its authenticity can be verified. Being invited to an authenticity interview does not mean that there is any specific concern that you have submitted work that is not your own.
  • Natural language processing models work by predicting what text is most likely to follow previous text, based on the information they have ingested. They can therefore return incorrect or false information; for example, non-existent academic references.

Information for staff

AI such as this is likely to become a commonplace tool in our lives and careers. There is potential benefit in adopting an ‘educative’ approach to the use of AI-generated content in our educational programmes. This might include:

  • Strengthening academic integrity and plagiarism awareness training and reviewing the uptake of this training across all educational programmes and levels.  
  • Recognising the potential of AI models as tools to support students in raising the quality of their work. Using AI models to review the accuracy of work and to support idea generation may be considered valid uses of these tools, and representative of the role they are likely to play in students’ future careers.
  • Ensuring that assessments are developed to test and give credit for higher-order skills that AI cannot yet replicate, such as critical thinking and the synthesis of new ideas, and ensuring that the specific questions posed in assessments are refreshed regularly.
  • Exploring opportunities presented by the adoption of AI models to enhance the educational experience. For example, the use of AI as a tool for formative feedback. 
  • Considering the implications of this technology for the integrity of examinations that are sat remotely or that allow students to ‘bring their own devices’, and how such instances can be controlled.
  • To maintain quality assurance, a department may choose to invite a random selection of students to an ‘authenticity interview’ on their submitted assessments: an oral examination on the submitted work, in which students are asked about the subject or how they approached the assignment, so that its authenticity can be verified. Students should be selected at random, and it should be explained to them that an invitation does not mean there is any specific concern that they have submitted work that is not their own.
  • Familiarising yourself with the AI software available in your discipline, its strengths and weaknesses, the types of assessment it can solve, and whether the assessments you set are susceptible.
  • Not relying on AI detection software: this technology is unproven, and we may not have permission to upload students’ work to external sites.

Please note that the contents of this webpage are developing and subject to change.  

2 March 2023