New Delhi: Exam graders could have trouble spotting answers generated by AI-based chatbots, researchers said after their study found that these answers not only went largely undetected but also received better grades than those written by real students.


On behalf of 33 'fake students', researchers at the University of Reading, UK, submitted answers generated by ChatGPT to the examinations system of the university's School of Psychology and Clinical Language Sciences.


The team found that 94 per cent of the AI-written answers went undetected. Further, about 83 per cent of the chatbot's answers secured better scores than real students' submissions.


"We found that within this system, 100 per cent AI-written submissions were virtually undetectable and very consistently gained grades better than real student submissions," the authors wrote in the study published in the journal PLoS ONE.


The researchers said their findings should be a "wake-up call" for educators across the world and called for the global education sector to evolve in the face of artificial intelligence.


"Many institutions have moved away from traditional exams to make assessment more inclusive. Our research shows it is of international importance to understand how AI will affect the integrity of educational assessments," lead researcher Peter Scarfe, an associate professor at the University of Reading, said.