The makers of popular plagiarism detection software are launching a tool that also detects if essays are created using artificial intelligence chatbots, triggering a debate among universities over whether to use the new system to identify student cheating.
Turnitin, which is already used by more than 10,000 educational institutions worldwide, is launching a service on Tuesday that it said can identify AI-generated text with 98 per cent confidence. By comparison, OpenAI, the Microsoft-backed maker of ChatGPT, has said its own AI text classifier correctly identifies AI-written text only 26 per cent of the time.
“Educators told us that being able to accurately detect AI written text is their first priority right now,” said Turnitin chief executive Chris Caren. “They need to be able to detect AI with very high certainty to assess the authenticity of a student’s work and determine how to best engage with them.”
The launch has proved contentious. Some institutions, including Cambridge and other members of the Russell Group, the body that represents leading UK universities, have said they will opt out of the new service, according to people familiar with the decision.
Universities are worried that the tool may falsely accuse students of cheating, that it involves handing student data to a private company, and that it discourages people from experimenting with new technologies such as generative AI.
“The concerns have been widely held,” one person familiar with its discussions said. The Russell Group declined to comment.
Those concerns have led UCISA, the UK membership body supporting technology in education, to work with Turnitin to ensure universities have the option to opt out of the feature temporarily.
C. Edward Watson, associate vice-president for curricular and pedagogical innovation at the American Association of Colleges and Universities, said there was also a “dubiousness” over the detection system given rapid developments in AI. “There’s a lot of disbelief that it can do the job well,” he said.
The popularity of ChatGPT, a system created by Microsoft-backed company OpenAI that can form arguments and write convincing swaths of text, has led to widespread concern that students will use the software to cheat on written assignments.
That has led to a debate among academics, higher education consultants and cognitive scientists across the world over how universities might develop new modes of assessment in response to the threat to academic integrity posed by AI.
Deborah Green, chief executive of UCISA, said she was concerned that Turnitin was launching its AI detection system with little warning to students as they prepared coursework and exams this summer.
While universities broadly welcomed the new tool, they needed time to assess it, she added. “We’ve had no opportunity to test it, so we simply don’t know what it does and doesn’t do.”
Charles Knight, assistant director at consultancy Advance HE, said lecturers were concerned that they would have no way to investigate why essays had been flagged as being written by AI.
In a single university an error rate of 1 per cent would mean hundreds of students wrongly accused of cheating, he added, with little recourse to appeal.
“It’s a black box,” he said. “We’ve got no idea what those results mean and we aren’t able to have a look at how the software came to those conclusions.”
Turnitin did not immediately respond to a request for comment on the concerns raised about the AI detection tool. But the company said in a statement about the tool’s launch that the technology had been “in development for years” and provided resources to “help the education community navigate and manage [it]”.