AI Text Detectors: Amazing Sleuths or WHAT?
Last Updated on July 17, 2023 by Editorial Team
Author(s): Toluwani Aremu
Originally published on Towards AI.
Our thoughts are manifested through both the words we choose and how we express them. Writing, as an art form, utilizes language to creatively convey intricate ideas, emotions, and imagery. It serves as a direct reflection of our thoughts and our ability to articulate them. In today's digital age, we frequently share our thoughts through various types of written content on the internet, be it casual or formal. These writings form the basis for training language models that are widely used today.
Language models are machine learning models that leverage extensive text data to generate or predict text based on a given input or context. They have diverse applications, including auto-completion, machine translation, and, most notably, text generation using large language models (LLMs).
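To make the idea concrete, here is a minimal sketch of text generation in code. It assumes the Hugging Face transformers library and uses the small, open GPT-2 model purely as a stand-in for the much larger LLMs discussed in this article.

```python
# Minimal text-generation sketch using Hugging Face transformers.
# GPT-2 is used here only as a small, open stand-in for larger LLMs.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Writing, as an art form,"
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)

# The model continues the prompt with statistically likely next tokens.
print(outputs[0]["generated_text"])
```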
LLMs have revolutionized our ability to express ourselves, enhancing productivity and simplifying complex subjects. They have also fostered innovation and improved our performance and versatility. However, along with these benefits, generative language models also pose certain risks. They can propagate biased or harmful content, generate convincing fake news or propaganda, and potentially infringe on privacy rights by leveraging personal data used for training.
One particular concern is the potential for AI-generated content to be mistaken for human-written text, leading to issues like misinformation and academic fraudulence. Consequently, AI text detectors have been developed to address these concerns. In this article, we will explore the emergence and ethical implications of AI text detectors.
AI TEXT DETECTORS
AI Text Detectors are sophisticated tools that leverage machine learning and natural language processing to distinguish between human-written and AI-generated text. These detectors serve various purposes, including detecting plagiarism, spotting fake news, and safeguarding against fraudulent or malicious use of AI-generated content.
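As a rough illustration of how such a detector can work under the hood, here is a hedged sketch of one widely discussed heuristic: score how "predictable" a text is to a language model (its perplexity) and flag very predictable text as likely AI-generated. The GPT-2 model and the threshold below are illustrative assumptions, not the internals of any commercial tool.

```python
# Sketch of a perplexity-based detection heuristic.
# The model choice and the 50.0 threshold are illustrative assumptions only.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Using the inputs as labels gives the average next-token loss.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

def looks_ai_generated(text: str, threshold: float = 50.0) -> bool:
    # Lower perplexity = more predictable to the model = more "machine-like".
    return perplexity(text) < threshold
```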
There are a number of different AI Text Detectors available, each with its own strengths and weaknesses. Popular examples include Crossplag, Sapling, GPTZero, and OpenAI's own classifier. In the initial wave of ChatGPT's release, many AI text detection creators confidently asserted that their tools could accurately identify AI-generated text. However, these claims quickly lost credibility when several incidents demonstrated otherwise.
Despite these setbacks, AI Text Detectors continue to be employed in educational institutions to combat academic dishonesty (here is one such example). Now, let's delve into some studies conducted on these detectors.
IMPLICATIONS
Two noteworthy studies shed light on the effectiveness of AI text detectors. In one study, Sadasivan et al. [1] discovered that AI-generated texts could easily evade detection when lightly modified, even fooling detectors that employ advanced watermarking techniques. In other words, slightly altering the text generated by a language model is enough for a dishonest student to slip past one of these automated digital sleuths!
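For illustration, the sketch below imitates this kind of "light modification" attack: an AI-written passage is run through a paraphrasing model and then re-scored by a detector. Both model names are assumptions chosen for the example (a publicly hosted RoBERTa-based GPT-2 output detector and a T5-based paraphraser); the paper's actual attack is more sophisticated.

```python
# Sketch of a paraphrase-and-rescore evasion test in the spirit of Sadasivan et al.
# Both model names are illustrative assumptions, not the paper's exact setup.
from transformers import pipeline

detector = pipeline("text-classification", model="roberta-base-openai-detector")
paraphraser = pipeline("text2text-generation",
                       model="humarin/chatgpt_paraphraser_on_T5_base")

ai_text = ("The industrial revolution transformed economies by mechanizing "
           "production and enabling mass manufacturing of goods.")

# Lightly rewrite the passage with the paraphraser.
rewrite = paraphraser("paraphrase: " + ai_text,
                      max_new_tokens=64)[0]["generated_text"]

# Compare the detector's verdicts before and after the light rewrite.
print("original:   ", detector(ai_text))
print("paraphrased:", detector(rewrite))
```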
Another study by Liang et al. [2] examined the performance of these detectors in an educational context, particularly with respect to native and non-native English speakers. The study revealed that all seven detectors consistently misclassified writing samples by non-native English speakers as AI-generated, indicating a bias. The authors proposed prompting strategies to mitigate this bias; however, the very same strategies could also be used to help AI-generated text bypass the detectors entirely. These findings raise ethical concerns: detectors may penalize writers with limited linguistic range while failing to flag content produced by prompted AI models. In trying to solve one ethical issue, these detectors seem to have raised several more. The blind truly cannot see!
To validate these results, I conducted a separate study [3] focusing on various types of formal writing, including argumentative, descriptive, expository, and narrative essays [Github]. The initial experiments showed that AI text detectors performed better at identifying human-written essays but struggled to detect essays generated or enhanced by ChatGPT, corroborating Liang et al.'s findings. A simple prompt to evade them all! Notably, the detectors exhibited higher accuracy in identifying AI-generated or enhanced argumentative essays compared to other essay types.
Additionally, when I prompted ChatGPT to generate human-like versions of these essay types, the detectors failed to flag them, consistent with the findings of previous studies. I also introduced deliberate grammatical errors in localized sections of the generated essays to test the detectors' capability, and the detectors' performance was notably poor.
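A toy version of this kind of perturbation test might look like the sketch below: inject a few deliberate typos into a passage before feeding it back to a detector. The error types and counts here are simplified for illustration, not the exact procedure from the study.

```python
import random

def inject_errors(text: str, n_errors: int = 3, seed: int = 0) -> str:
    """Inject a few character-swap typos into random longer words."""
    rng = random.Random(seed)
    words = text.split()
    for _ in range(n_errors):
        i = rng.randrange(len(words))
        w = words[i]
        if len(w) > 3:
            j = rng.randrange(len(w) - 1)
            # Swap two adjacent characters to create a small typo.
            words[i] = w[:j] + w[j + 1] + w[j] + w[j + 2:]
    return " ".join(words)

sample = ("Argumentative essays require a clear thesis, supporting evidence, "
          "and a rebuttal of opposing viewpoints.")
print(inject_errors(sample))
```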
CAN SOMETHING GOOD STILL COME OUT OF THIS?
In summary, these studies underscore the formidable obstacles faced by AI text detectors in accurately discerning between human-generated and AI-generated texts. They also raise concerns about the suitability of these detectors for educational contexts. From a philosophical standpoint, the distinction between human and AI-generated texts is becoming increasingly blurred.
During the course of these experiments, I noted promising performance from GPTZero, which consistently outperformed the other detectors in most of the presented scenarios. I think the teams behind these detectors should explore collaborations with the organizations building the language models themselves in order to improve the quality of their tools. Through continuous improvement and collaboration, we can work towards a future where AI text detectors offer greater accuracy and resilience, enabling us to address the evolving landscape of text-based challenges in a more effective and responsible manner.
Nevertheless, it is essential to engage in broader discussions regarding the ethical ramifications of deploying ChatGPT content detectors and to exercise caution in their utilization within evaluative or educational settings. In educational contexts, alternative approaches beyond traditional assignments and homework may be advisable for evaluating students' capabilities.
Undoubtedly, AI, particularly language models, has established a firm presence and will continue to shape our future!
Read more:
- How Well Do AI Text Detectors Work? | Blog – hCaptcha
- Why detecting AI-generated text is so difficult (and what to do about it) | MIT Technology Review
- We pitted ChatGPT against tools for detecting AI-written text, and the results are troubling (theconversation.com)
- Most sites claiming to catch AI-written text fail spectacularly | TechCrunch
- AI-Detector Flags US Constitution as AI-Generated (analyticsvidhya.com)
REFERENCES
- Sadasivan, V. S., Kumar, A., Balasubramanian, S., Wang, W., & Feizi, S. (2023). Can AI-generated text be reliably detected? arXiv preprint arXiv:2303.11156. [Link]
- Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., & Zou, J. Y. (2023). GPT detectors are biased against non-native English writers. arXiv preprint arXiv:2304.02819. [Link]
- Aremu, T. (2023). Unlocking Pandora's Box: Unveiling the Elusive Realm of AI Text Detection. SSRN Electronic Journal, Available at SSRN 4470719. [Link]