ChatGPT parent (hopefully that isn’t a literal description) OpenAI has unveiled a tool designed to tell its users whether a section of text was written by a human being or an AI. Because, to slightly paraphrase Jeff Goldblum, they were so preoccupied with whether they could [build a convincing writing AI], they didn’t stop to think if they should.
The new tool, called an AI classifier, is trained to “distinguish between text written by a human and text written by AIs from a variety of providers.” But, as with everything to do with AI, there’s a catch. For starters, it’s not 100% reliable.
It will occasionally misidentify human writers as AI, and it struggles with shorter pieces of copy. This makes sense. There's an entire South African game show (Noot vir Noot) built around identifying something from as little information as possible. Humans might manage the task in some specialised cases; artificial intelligence tends to need a fair amount of data to operate accurately. OpenAI's tool is also best suited to identifying English-language text. There are several other terms and conditions, but at least the tool exists.
OpenAI said that its tool "should not be used as a primary decision-making tool [emphasis in original], but instead as a complement to other methods of determining the source of a piece of text." Because that's another thing we have to do now that ChatGPT is a part of the world. There are several other lessons we're learning, more or less on the job, now that the AI writing service is part of this reality. We're sure it'll all work out fine. We're (pretty) sure we're all human. We'll adjust.