Detecting AI-Created Content

ITS and Online Learning Services do not currently support or recommend any tool for the detection of AI-generated writing.

All available research to date indicates that such tools are easily fooled and may risk harm to students by incorrectly flagging human-authored writing as AI-generated.

Where we stand — February 2024

ITS and Online Learning Services are acutely aware that artificial intelligence tools like ChatGPT have been disruptive to the assessment methods used by many instructors. We have long been aware of the risk of ghostwriters and creative solutions for cheating on exams, but the ease with which these new AI tools can generate believable content has led to a sharp increase in questions about how to determine that the work submitted by students is original (Mills 2023). The detection of AI content is notoriously difficult (Edwards 2023; Heikkilä 2022).

To date, there are no tools that can reliably detect writing that is AI-generated (Weber-Wulff et al. 2023). Multiple independent studies have found that while some AI detectors can identify raw output copied and pasted directly from ChatGPT more than 90% of the time, even the most accurate detectors are easily fooled by very basic manipulation of AI-generated writing, such as light paraphrasing, rewording by a human, or a second pass through another AI writing tool. The rate of “false negatives” – AI-generated text incorrectly classified as human-written – approaches 50% in testing where the text is subject to these minor alterations before submission.

In July 2023, OpenAI, the company behind ChatGPT, disabled public access to its AI classifier tool (Kirchner et al. 2023). Their announcement of the change cited the classifier’s “low rate of accuracy,” even with text generated by their own ChatGPT service. While it was available, the classifier incorrectly identified human-written text as AI-generated – a “false positive” – in 9% of cases in OpenAI’s own analysis. Independent testing found even higher rates (Elkhatat, Elsaid, and Almeer 2023).

In the context of academic integrity, the risks of false positives are significant (Klee 2023; Fowler 2023). At a 9% false positive rate, for example, roughly nine of every 100 entirely human-written submissions would be wrongly flagged as AI-generated. Unreliable AI detection not only fails to improve academic integrity but may deepen existing inequalities. Non-native English speakers are flagged by AI detection tools at a disproportionate rate (Myers 2023). Tools with legitimate academic applications, such as Grammarly, which is particularly valuable for writers with dyslexia and other learning disabilities, also increase the likelihood that writing will be flagged by AI detectors (Shapiro 2018; Steere 2023).

What to do?

All this leaves instructors in a challenging position where the best recommendation on offer is to redesign their assessments. Redesigning assessments is difficult and time-consuming, and the new assessment methods often require more time to grade. Just as AI tools are beginning to make the process of writing faster and easier for everybody, it feels unfair that teachers of writing are forced to spend more of their own precious time addressing the downsides and potential misuse of these tools.

This change in the digital writing landscape has been foisted upon us suddenly and leaves us all scrambling to respond. Even so, these tools are available to learners and there is no way to prevent students from using them — the chat is already out of the bag, so to speak. Any response will consume our time and energy, so it is important our efforts are spent in ways that will genuinely address the problem. Time spent chasing false positives created by inadequate and biased tools is time wasted and puts at risk our relationship with our students. Our time is better spent adapting our teaching and assessments to reflect the changing landscape of writing technology.

The most optimistic stance in the face of this challenge is to recognize that the vast majority of our students are honest and deeply invested in genuine learning. As with all academic dishonesty, we should resist letting the actions of a few bad actors color our impression of our extraordinary student population. Still, the temptation remains for well-meaning students to use AI to cut what appear to them to be minor corners. Students are at risk of harm if they are not educated about the responsible use of these new technologies. If they use AI for coursework, one risk is that learners miss an opportunity to internalize important information or fail to master the topics at hand. Worse, an AI may feed them misinformation that they are unable to distinguish from research-backed conclusions. Either problem can lead to costly mistakes down the road (Brodkin 2023). AI is rapidly becoming pervasive in the world beyond the university’s walls, and students deserve to be taught how to use these tools thoughtfully and effectively.

One place to begin is to formulate a statement on the use of AI in your course and communicate it clearly to students. Syracuse University’s Provost Office has published guidance and boilerplate language to include in course syllabi (“Syllabus Recommendations - Center for Learning and Student Success – Syracuse University,” n.d.). Instructors around the world are contributing syllabus policies to a shared repository (Eaton 2023). Addressing the issue directly and discussing it openly can help students make responsible decisions about using AI in their coursework. Ideally, an instructor would be able to help students understand where AI can be helpful and harmful in their specific discipline — where it can help speed up work and generate ideas, and where it is likely to lead to faulty conclusions.

Units across campus will continue to provide forums for faculty to discuss the implications of AI and approaches to take in response. With no reliable detection tools on the horizon, these conversations, both on campus and off, represent our best avenue to authentic assessment of our students and their work (McMurtrie 2023; “Authentic Assessment,” n.d.).

Online Learning Services will continue to evaluate new teaching and learning technologies and remains available to consult with faculty on teaching and technology. ITS will continue to provide access to effective tools where they are available. In addition to technological considerations, the Center for Teaching and Learning Excellence has pedagogical and policy resources for instructors on strategies they might take to improve their assessments (CTLE, n.d.).

AI-created content detection and Turnitin

In April 2023, Turnitin released an AI writing detector. The tool was enabled in the Syracuse University Turnitin system as a preview, with no fees associated with its use during that period. Turnitin initially reported low rates of false positives, but those figures have since been called into question (Chechitelli 2023; D’Agostino 2023). The detector’s false negative rate was close to 40–50% in tests where AI-generated text was reworded by a human or by a separate AI paraphrasing tool (Weber-Wulff et al. 2023).

At the end of the free preview on December 31, 2023, Turnitin announced that it would begin charging an additional license fee for the AI detection tool. Given the concerns about its effectiveness, ITS elected not to license the AI content detection tool. We are not alone in this choice; multiple R1 universities have made similar decisions (Brown 2023; Coley 2023; “Known Issue – Turnitin AI Writing Detection Unavailable – Center for Instructional Technology | The University of Alabama” 2023).

We are also unable to recommend any alternative technological solution. None of the AI detection tools currently available online are accurate enough to provide credible evidence in academic integrity investigations. The risk of misleading results harming students who are acting in good faith is too great. ITS is committed to thorough and transparent vetting of any new tools that emerge in the future. If a reliable tool for AI detection becomes available, ITS will evaluate the tool and consider recommending it to the Syracuse University academic community.


Bibliography

“Authentic Assessment.” n.d. Center for Innovative Teaching and Learning. Accessed February 27, 2024. https://citl.indiana.edu/teaching-resources/assessing-student-learning/authentic-assessment/index.html.

Brodkin, Jon. 2023. “Lawyer Cited 6 Fake Cases Made up by ChatGPT; Judge Calls It ‘Unprecedented.’” Ars Technica. May 30, 2023. https://arstechnica.com/tech-policy/2023/05/lawyer-cited-6-fake-cases-made-up-by-chatgpt-judge-calls-it-unprecedented/.

Brown, Joseph. 2023. “Why You Can’t Find Turnitin’s AI Writing Detection Tool.” The Institute for Learning and Teaching. April 26, 2023. https://tilt.colostate.edu/why-you-cant-find-turnitins-ai-writing-detection-tool/.

Chechitelli, Annie. 2023. “AI Writing Detection Update from Turnitin’s Chief Product Officer.” May 23, 2023. https://www.turnitin.com/blog/ai-writing-detection-update-from-turnitins-chief-product-officer.

Coley, Michael. 2023. “Guidance on AI Detection and Why We’re Disabling Turnitin’s AI Detector.” Vanderbilt University. August 16, 2023. https://www.vanderbilt.edu/brightspace/2023/08/16/guidance-on-ai-detection-and-why-were-disabling-turnitins-ai-detector/.

CTLE. n.d. “CTLE and CLASS Tips and Strategies for Faculty and Instructors: What We Know About ChatGPT and Options for Responding - CTLE Resources - Answers.” Accessed February 27, 2024. https://su-jsm.atlassian.net/wiki/pages/viewpage.action?pageId=154894572.

D’Agostino, Susan. 2023. “Turnitin’s AI Detector: Higher-Than-Expected False Positives.” Inside Higher Ed. June 1, 2023. https://www.insidehighered.com/news/quick-takes/2023/06/01/turnitins-ai-detector-higher-expected-false-positives.

Eaton, Lance. 2023. “Syllabi Policies for AI Generative Tools.” Google Docs. January 16, 2023. https://docs.google.com/document/d/1RMVwzjc1o0Mi8Blw_-JUTcXv02b2WRH86vw7mi16W3U/edit?usp=embed_facebook.

Edwards, Benj. 2023. “Why AI Detectors Think the US Constitution Was Written by AI.” Ars Technica. July 14, 2023. https://arstechnica.com/information-technology/2023/07/why-ai-detectors-think-the-us-constitution-was-written-by-ai/.

Elkhatat, Ahmed M., Khaled Elsaid, and Saeed Almeer. 2023. “Evaluating the Efficacy of AI Content Detection Tools in Differentiating between Human and AI-Generated Text.” International Journal for Educational Integrity 19 (1): 17. https://doi.org/10.1007/s40979-023-00140-5.

Fowler, Geoffrey A. 2023. “Analysis | We Tested a New ChatGPT-Detector for Teachers. It Flagged an Innocent Student.” Washington Post, April 14, 2023. https://www.washingtonpost.com/technology/2023/04/01/chatgpt-cheating-detection-turnitin/.

Heikkilä, Melissa. 2022. “How to Spot AI-Generated Text.” MIT Technology Review. December 19, 2022. https://www.technologyreview.com/2022/12/19/1065596/how-to-spot-ai-generated-text/.

Kirchner, Jan Hendrik, Lama Ahmad, Scott Aaronson, and Jan Leike. 2023. “New AI Classifier for Indicating AI-Written Text.” January 31, 2023. https://openai.com/blog/new-ai-classifier-for-indicating-ai-written-text.

Klee, Miles. 2023. “She Was Falsely Accused of Cheating With AI — And She Won’t Be the Last.” Rolling Stone (blog). June 6, 2023. https://www.rollingstone.com/culture/culture-features/student-accused-ai-cheating-turnitin-1234747351/.

“Known Issue – Turnitin AI Writing Detection Unavailable – Center for Instructional Technology | The University of Alabama.” 2023. August 1, 2023. https://cit.ua.edu/known-issue-turnitin-ai-writing-detection-unavailable/.

McMurtrie, Beth. 2023. “How ChatGPT Has Shaped Teaching — So Far.” The Chronicle of Higher Education. December 21, 2023. https://www.chronicle.com/newsletter/teaching/2023-12-21.

Mills, Anna R. 2023. “Advice | ChatGPT Just Got Better. What Does That Mean for Our Writing Assignments?” The Chronicle of Higher Education. March 23, 2023. https://www.chronicle.com/article/chatgpt-just-got-better-what-does-that-mean-for-our-writing-assignments.

Myers, Andrew. 2023. “AI-Detectors Biased Against Non-Native English Writers.” May 15, 2023. https://hai.stanford.edu/news/ai-detectors-biased-against-non-native-english-writers.

Shapiro, Lisa Wood. 2018. “How Technology Helped Me Cheat Dyslexia.” Wired, June 18, 2018. https://www.wired.com/story/end-of-dyslexia/.

Steere, Elizabeth. 2023. “The Trouble With AI Writing Detection.” Inside Higher Ed. October 18, 2023. https://www.insidehighered.com/opinion/career-advice/teaching/2023/10/18/faculty-should-know-tools-students-use-beat-ai-detection.

“Syllabus Recommendations - Center for Learning and Student Success – Syracuse University.” n.d. Accessed February 27, 2024. https://class.syr.edu/academic-integrity/syllabus-recommendations/.

Weber-Wulff, Debora, Alla Anohina-Naumeca, Sonja Bjelobaba, Tomáš Foltýnek, Jean Guerrero-Dib, Olumide Popoola, Petr Šigut, and Lorna Waddington. 2023. “Testing of Detection Tools for AI-Generated Text.” International Journal for Educational Integrity 19 (1): 26. https://doi.org/10.1007/s40979-023-00146-z.


