Binoculars is currently the most powerful detector of AI-generated text. According to the team behind the method, Binoculars detects AI text with over 90% accuracy.
Language model detectors are specialized algorithms for recognizing text produced by ChatGPT or other text generators. This task has become both more difficult and more necessary in recent years with the advent of powerful language models such as GPT-3 and GPT-4, as AI-generated text appears in more and more areas.
A detector developed by researchers from the University of Maryland, Carnegie Mellon University, New York University, and the ELLIS Institute & MPI for Intelligent Systems (Tübingen AI Center) in Germany achieves state-of-the-art performance in detecting LLM-generated text, which the team says even outperforms commercial vendors.
The team also cites a reason for using such detectors: to ensure the authenticity of content and protect the integrity of information in the digital space.
The core innovation of Binoculars is comparing the perplexity of a given text under two closely related language models. Perplexity measures how surprised a model is by a sequence of words; higher perplexity indicates less predictable or more complex text. By relating a text's perplexity to its cross-perplexity (how surprising one model's next-token predictions are to the other model), Binoculars can effectively distinguish between human- and machine-generated content.
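The scoring idea above can be reduced to a single ratio: perplexity divided by cross-perplexity. The sketch below computes that ratio from hypothetical per-token probability tables standing in for the outputs of two real language models; the vocabulary, the distributions, and the variable names are invented for illustration, and the real system's tuned decision threshold is omitted.

```python
import math

def log_ppl(token_ids, observer_probs):
    """Log perplexity: average negative log-probability that the observer
    model assigned to the tokens that actually appeared in the text."""
    n = len(token_ids)
    return -sum(math.log(dist[t]) for t, dist in zip(token_ids, observer_probs)) / n

def log_cross_ppl(performer_probs, observer_probs):
    """Cross-perplexity: the performer's full next-token distribution at each
    position, scored against the observer's log-probabilities and averaged."""
    n = len(performer_probs)
    total = -sum(
        q * math.log(p)
        for perf, obs in zip(performer_probs, observer_probs)
        for q, p in zip(perf, obs)
    )
    return total / n

def binoculars_score(token_ids, observer_probs, performer_probs):
    """Ratio of perplexity to cross-perplexity. Lower scores suggest
    machine-generated text: the observed tokens are predictable relative
    to what one model 'expects' of the other."""
    return log_ppl(token_ids, observer_probs) / log_cross_ppl(
        performer_probs, observer_probs
    )

# Toy example: a 3-token vocabulary and a 2-token text (token ids [0, 1]).
# Both probability tables are made up for illustration only.
OBSERVER = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]]
PERFORMER = [[0.6, 0.3, 0.1], [0.2, 0.7, 0.1]]
print(binoculars_score([0, 1], OBSERVER, PERFORMER))
```

In practice both probability tables come from forward passes of two closely related LLMs over the same text, and the resulting score is compared against a threshold calibrated on known human and machine text.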
Binoculars’ low error rate should prevent false positives
Binoculars shows impressive accuracy in recognizing machine-generated text: In tests, Binoculars correctly identified more than 90% of texts generated by modern language models such as ChatGPT, at a very low false positive rate of just 0.01%. This high accuracy is maintained even with text from different sources and in different styles and languages.
However, the use of LLM detectors such as Binoculars also raises important ethical issues, according to the team. While they can help protect against disinformation and preserve the authenticity of information, there is a risk that they could be misused or have unintended negative effects. For example, texts written by non-native speakers could be misclassified as machine-generated. However, the researchers believe that Binoculars in its current form offers greater reliability even in such cases.
The team has published Binoculars on GitHub.