Hinder and detect prohibited use of generative AI

By adapting the implementation of an exam, you can hinder the use of generative AI and increase the likelihood of detection. Design questions that generative AI has a hard time answering, and test them with an AI. Restrict the exam setting with a proctored exam, or complement the exam or project with oral presentations. You can also combine these actions with detection tools, although such tools are flawed. All approaches require you to set clear boundaries on what counts as prohibited use of generative AI.

Set clear boundaries to reduce accidental prohibited use

Communicate to your students what is and is not allowed when it comes to using generative AI. If your students do not know where the line is, chances are they will use AI in a way that is not allowed without realising it. Most students want to do the right thing; you as a teacher must help them by informing them.

Decide on your approach to generative AI and what you regard as prohibited use, and communicate that to your students, for instance by showing an example. Generally, it is not a good idea to set a percentage limit on how much AI-generated text or code is allowed. Instead, you might want to discuss at which stages of the work process use is allowed and at which it is not.

Create exams less sensitive to generative AI

It is possible to create examinations that are less sensitive to generative AI. The main point is not to base your assessment solely on the students' final product (such as an essay or program code), since the product is vulnerable to third-party assistance. If all or part of the product has been produced by an outsider, for example another person or an AI, the connection between process and product is broken. The product is then not the result of the student's learning process, and what is assessed is unrelated to the student's knowledge.

You can make efforts to minimize the risk of outside influence by adapting the content of the exam or the exam setting. These methods make exams less sensitive to cheating with generative AI and are presented in more detail below. For less vulnerable examination methods, read more about how to Prevent and discourage prohibited use of generative AI.

Adapt your questions and instructions

There are several examination methods that are less vulnerable to cheating than, for instance, written home exams or unsupervised computer exams. You can read more about these on the page Prevent and discourage prohibited use of generative AI.

However, it is not always possible to change the examination type, but it might be possible to adapt the wording, that is, how you ask the questions. Below are some tips on how you can adapt your questions. They are essential for written unsupervised exams, but useful for other types as well.

  • Create questions with answers based on, or applied to, local, specific contexts, such as information specific to your course. Answering these types of questions requires knowledge not available to outsiders, such as an AI.
  • Create exam questions that require students to reflect or reason around, for example, a part of the course literature. An AI cannot easily answer such questions since it does not have all the information required.
  • Avoid questions where the answer can easily be found on the internet, such as purely factual questions.
  • Ask your students to explain a relationship or a connection, such as “If you change variable x, what will happen with variable y?”. This requires the students to explain and motivate their answer using knowledge that is not easily accessible to an AI.

Test your questions with AI

When you have created a set of questions or a quiz, try using ChatGPT or another generative AI and ask it to solve the problems or answer the questions. If the answers would be useful for a student trying to cheat, you know that your exam is vulnerable to cheating with AI tools. To hinder this, you can either change the question wording or adapt the setting in which you conduct the exam.
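
If you have many questions, you can also test them in batch with a small script instead of pasting them into a chat window one by one. The sketch below is only an illustration: it assumes the OpenAI Python client and an API key in the OPENAI_API_KEY environment variable, the example questions and the model name are arbitrary placeholders, and any generative AI service your students might realistically use could be substituted.

    # Minimal sketch: send each exam question to a generative AI model and print
    # the answers, so you can judge whether they would help a cheating student.
    # Assumes the OpenAI Python client (pip install openai) and an API key in the
    # OPENAI_API_KEY environment variable; the model name is only an example.
    from openai import OpenAI

    client = OpenAI()

    exam_questions = [
        "If you change variable x, what will happen with variable y? Motivate your answer.",
        "Relate the lab results to the model discussed in lecture 4 of this course.",
    ]

    for question in exam_questions:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # example name; use whichever model you have access to
            messages=[{"role": "user", "content": question}],
        )
        print("Question:", question)
        print("AI answer:", response.choices[0].message.content)
        print("-" * 60)

Reading the collected answers side by side with your marking criteria gives a quick sense of which questions an AI handles well and which ones still require course-specific knowledge.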

Adapt the examination and the exam settings

There are several ways to hinder prohibited aid from generative AI when conducting an examination. You can control the setting and increase the level of monitoring, complement the exam with an oral presentation, or use continuous assessment of an ongoing process. This will both hinder the use of generative AI and increase the likelihood of detection.

Control the exam setting

The most common way to control the setting is to conduct a proctored examination. This can be done using proctoring software such as KTH Digicertus Exam or by having a proctor present during a written exam. The level of monitoring is high during a proctored examination, which hinders students from using generative AI. However, increased monitoring is not the only way; there are several other examination forms that are less vulnerable to generative AI.

Complement the exam orally

Instead of a proctored exam, you can complement the current exam with an oral presentation. For instance, complement a written home exam with oral presentations of 1-2 randomly selected questions. The students must be prepared to present all questions, and you will quickly notice if there is a discrepancy between their written and oral answers. It is also far less time-consuming than having the students orally present all exam questions.

Continuous assessment of an ongoing process

Have your students hand in several drafts so you can follow their process. Continuously assessing the progress makes it easier to spot prohibited aid from generative AI. This approach can be combined with short oral reports in which the students explain what they are doing and where they are having problems.

A few words on detection tools

Ever since generative AI became publicly available, teachers have requested methods to detect AI-generated material. Today there are many tools available online that aim to detect whether a text or piece of code is AI-generated. However, none of these detection tools are 100 % accurate; they produce both false positives and false negatives. Therefore, detection tools cannot be the only way to hinder the prohibited use of generative AI. Expecting a detection tool to solve the whole problem is unrealistic and will probably never be justified. Generative AI technology evolves fast, and detection tools that “work” today may not do so a month from now.

How detection tools work

Generative AI applications are typically probability-based text models: they generate text by repeatedly choosing a likely next word given the words that came before. Most humans do not write text this way, so there tends to be a statistical difference between human-written and AI-generated text. This difference is the basis of how most detection tools work. There are several problems with this approach. First, the difference is not always large, and shorter texts are harder to analyse. There are also several ways to sidestep and trick the detection tools. In addition, the probabilities of particular words change as the underlying models are continuously improved.
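
To make the idea concrete, here is a toy sketch of the principle, not a real detector. It builds a simple bigram model from a reference corpus and scores a text by how predictable each word is given the previous one; real tools use large language models and more refined statistics, but the underlying idea is similar.

    # Toy illustration of probability-based detection (not a real detector).
    # A text whose words are consistently easy to predict from the previous word
    # gets a high average log probability and would be flagged as possibly
    # AI-generated; human text is usually less predictable. Short texts give
    # noisy scores, which is one reason such tools are unreliable.
    import math
    from collections import Counter, defaultdict

    def train_bigram_model(corpus: str):
        """Count how often each word follows another, plus the vocabulary size."""
        words = corpus.lower().split()
        follow_counts = defaultdict(Counter)
        for prev, cur in zip(words, words[1:]):
            follow_counts[prev][cur] += 1
        return follow_counts, len(set(words))

    def average_log_probability(text: str, model) -> float:
        """Mean log probability of each word given the previous one."""
        follow_counts, vocab_size = model
        words = text.lower().split()
        log_probs = []
        for prev, cur in zip(words, words[1:]):
            counts = follow_counts.get(prev, Counter())
            total = sum(counts.values())
            # Add-one smoothing: unseen word pairs get a small, non-zero probability.
            prob = (counts[cur] + 1) / (total + vocab_size + 1)
            log_probs.append(math.log(prob))
        return sum(log_probs) / max(len(log_probs), 1)

    # Usage: scores closer to zero mean more predictable (more "AI-like") text.
    model = train_bigram_model("the exam tests what the student has learned during the course")
    print(average_log_probability("the student has learned during the course", model))
    print(average_log_probability("my cat interrupted the oral exam twice", model))

In this toy example, the text that closely follows the reference corpus scores higher (is more predictable) than the unrelated sentence, which is the kind of signal detection tools rely on, and also why they can misjudge both conventional human writing and lightly edited AI output.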

What to do if you suspect prohibited use of AI?

There are two types of actions to consider: pedagogical and disciplinary. Pedagogically, you can handle suspected use of AI as any other suspected case of cheating: let the student complete a complementary examination orally or in a controlled setting. To report a suspected disciplinary matter, email the contact person for student disciplinary matters at your school.

Further reading

The PriU group about assessment and examination methods. (2023). Promoting learning and preventing cheating. Report published 2023-03-31.

KTH's information on disciplinary matters.

Rudolph, J., Tan, S., & Tan, S. (2023). ChatGPT: Bullshit spewer or the end of traditional assessments in higher education? Journal of Applied Learning and Teaching, 6(1).