Redefining cheating and embracing responsible use
“I am just going to have my students write their papers using paper and pencils in class. That way I know they did the work.” — Cori Schwarzrock, Spring 2022.

This is Part 2 in a series about AI in education. In the first part of this series (found here), I shared how generative AI is a game-changer for the field of education. I will be the first to admit that I did not embrace this technology. My initial reaction was to block it, ban it, and prohibit it. I wanted to get rid of all technology and revert to pencil and paper assessments. In my mind, that was the only way I could be sure that it was my students’ work — not the work of AI.
Writing is a key component of the classes that I teach. I use writing assignments to accurately measure my students’ progress and growth over time. These types of assessments require them not only to know the content but to apply it, to create meaning from what they have learned. When I began to play around with ChatGPT, I realized that this technology could write a better answer to my reading prompts than most people sitting in my classroom, including myself.
Based on what I have heard from other teachers, the fear that AI could replace genuine student effort — and undermine the authenticity of learning experiences — was a common initial response. As I dug into AI this summer, I realized that plagiarism is just the tip of the iceberg. AI is a complex and varied landscape — and one that requires careful navigation.
Is AI Cheating?

Google searches for AI detectors skyrocketed after the rollout of ChatGPT and, honestly, the fear of plagiarism is a valid concern. As a teacher, it is my job to accurately measure what my students do and don’t know. I can’t do that if AI is doing the “thinking” for my students.
Plagiarism: The act of presenting someone else’s work, ideas, or intellectual property as one’s own without proper attribution or permission.
Plagiarism is taking someone else’s ideas and passing them off as your own. Here’s the tricky part. ChatGPT generates unique text and products based on patterns that it has learned from the vast amount of data that it was trained on. Currently, work created by AI isn’t considered to have a “creator.” Therefore, it is not subject to intellectual property rights.

This makes AI-generated text really difficult to check. It also requires us to rethink what plagiarism is. Turns out, it’s not just black and white. Take a look at the above graphic provided by Matt Miller in his book, AI for Educators. Consider these questions:
- Which of these would you consider to be cheating?
- Where would you draw the line?
- How will you communicate this line to your students?
As educators, we need to shift our thinking about what constitutes cheating and what doesn’t. As I wrestled with this, I found this quote particularly helpful in directing my thought process:
“Replace the term ‘AI’ in this graphic with ‘classmate’ or ‘friend’ at school, and does your opinion/perspective change?” — Ken Shelton (Miller, 2023, p. 83)
NOTE: A word of caution concerning AI detectors: to date, I do not know of any reliable AI detectors. Please use them with caution, if at all. I would not accuse a student of cheating based solely on an assignment being flagged.
Here are some practical suggestions to help you navigate the Wild West of AI:

Develop Clear Course Policies: An ounce of prevention is worth a pound of cure. Developing clear policies about your expectations for the use of AI can prevent problems before they occur. It is likely that your school or district already has a plagiarism policy in place. For example, my district’s Academic Honesty policy clearly states that claiming credit for work that is not the product of one’s honest effort is cheating. We do not yet have a clear policy statement regarding the use of AI.
Therefore, I added the following clarification to my course syllabus: “Cheating also includes the use of AI to create work in any form without proper attribution.” I do not intend to ban AI from the classroom, but students need to know that they cannot simply copy and paste an answer from AI without attributing the source, just as they shouldn’t for an article or book. This shouldn’t be a “gotcha” moment. Clear communication of expectations is essential.
Have an Honest Discussion: I am a firm believer in giving my students the “why” behind decisions whenever possible. I will share the following with them when we discuss the use of AI.
- It is my job to help you learn and grow. I can’t do that if you use AI because the work being represented didn’t come directly from you.
- Learning is durable when you create it. It takes hard work — but that work pays off in the long run. Using AI to complete a homework assignment might earn you a good grade on that assignment, but it is unlikely that it will help you retain the information for other summative assessments.
- I am not an idiot. I am familiar with your writing from in-class activities. If it looks suspicious, I am going to ask you all about it.
- It’s important to learn how to be a good digital citizen. You will be interacting in these spaces for the rest of your life. It is necessary to build habits now that are accurate, ethical, and responsible.
Other Concerns: The ethical discussion regarding AI extends beyond plagiarism to encompass bias, privacy concerns, and barriers to learning.

- Bias: Have you ever heard the statement “garbage in, garbage out”? Basically, the quality of work we can expect from a technology depends on the quality of the information it draws from. Although ChatGPT pulls data from a variety of sources, there is no way for that data to encapsulate the entirety of the human experience. There will be gaps in the information available to the program.
- Algorithms: AI content is based on an algorithm. Algorithms are written by humans. Humans make errors. As W. Edwards Deming once said, “Every system is perfectly designed to get the results it gets” (Miller, 2023, p. 94). The system will produce what it was designed to produce. If there are errors in the coding, the content will have errors. If the data is biased or skewed, the content produced is skewed. If the data set is incomplete, the content produced will be incomplete.
- Media Literacy: The realization that AI’s output is a reflection of its training data, which inevitably contains gaps and potential biases, provides educators with the perfect opportunity to teach students about media literacy. Media literacy enables students to discern between credible and misleading sources, identify potential bias, and recognize potential misinformation or manipulation. Our students need to learn how to be informed and responsible consumers of media. AI provides us the opportunity to address these needs head-on.
- Privacy Concerns: Currently, users must be 18 or older to use ChatGPT (or 13 or older with parental approval). Many schools are even blocking students from accessing the site on school devices due to concerns about how student data is used. However, I think it is naive to assume that our students will not be able to access it inside our buildings and outside of them. I know I have a handy little app on my phone that gives me direct access to ChatGPT. I’m sure our students do too. In any case, all technology needs to be carefully vetted for student privacy concerns. You can find a copy of OpenAI’s privacy policy here.
- Barriers to Learning: The technological advancements brought by AI are key to creating an inclusive and equitable learning environment. Returning to paper and pencil assignments can create unnecessary barriers to student learning. It prevents students from utilizing accessibility tools. It prevents students from using other AI tools like Grammarly. It slows down the feedback loop and increases the time it takes to receive actionable feedback.
AI is the Future
It is our job to prepare students for the future — and their future will include AI. It is my position that to prepare them for the future we need to teach them how to utilize AI in a meaningful, productive, and ethical manner. In a world increasingly dominated by AI, we must view this technology as a strategic ally rather than a threat. By helping students comprehend AI’s potential and limitations, we can prepare them for a future where AI seamlessly integrates into the fabric of daily existence.
Arendt-Bunds, A. (2023, July 27). 8 ways to use generative AI for student assessment. Faculty Club. https://faculty-club.com/teaching-with-technology/8-ways-to-use-generative-ai-for-student-assessment/
Jimenez, L., & Boser, U. (2023, July 18). Future of testing in education: Artificial intelligence. Center for American Progress. https://www.americanprogress.org/article/future-testing-education-artificial-intelligence/
Luckin, R. (2023, July 14). Yes, AI could profoundly disrupt education. But maybe that’s not a bad thing. The Guardian. https://www.theguardian.com/commentisfree/2023/jul/14/ai-artificial-intelligence-disrupt-education-creativity-critical-thinking
Mah, C. (2023, February 26). How to use ChatGPT as an example machine. Cult of Pedagogy. https://www.cultofpedagogy.com/chatgpt-example-machine/
Miller, M. (2023). AI for educators: Learning strategies, teacher efficiencies, and a vision for an artificial intelligence future. Dave Burgess Consulting.
Phillips, V. (2023, June 8). Intelligent classrooms: What AI means for the future of education. Forbes. https://www.forbes.com/sites/vickiphillips/2023/06/07/intelligent-classrooms-what-ai-means-for-the-future-of-education/


