Davidbena. (2019). Gaping hole of cistern is licensed under the Creative Commons Attribution-Share Alike 4.0 International license. Wikimedia Commons.

8 Ways to Guide Students on the Potential Pitfalls of AI

Reading Time: 8 minutes

This post discusses possible motivators for why and how students may be (mis)using AI in their education, explores ways to encourage students to make wise decisions in their academic journey (and beyond), and considers approaches to framing messages about navigating AI pitfalls.

Note: the term (mis)use encompasses any use of AI that exceeds what is stated as permitted.


What are the potential pitfalls of student AI use?

AI tools come with academic, personal, and professional risks (Conestoga, 2024; Ippolito, 2023):

  • AI can make mistakes and perpetuate biases in its training data, leading to misinformation and stereotypes.
  • Overreliance on AI can result in students missing foundational learning stages, as they jump to advanced tasks without first mastering the basics through their own independent effort.
  • Using AI without proper citations or critical evaluation can lead to academic dishonesty and a lack of accountability.
  • Lack of awareness of potential privacy, security, and IP issues when using AI can lead to data security breaches or violations of intellectual property and copyright law.
  • Misuse of AI can lead to workplace problems, as students may lack the skills and knowledge required to succeed in their future careers.
  • AI use has broader economic, social, and environmental downsides that can negatively impact students' chosen industries, workplaces, and roles.

As one college faculty member has noted about how students are (mis)using AI:

“[M]ore and more students are using generative AI like ChatGPT or Copilot for translating, paraphrasing, and summarizing readings and for completing their writing tasks/assignments as well. However, they are using it to save time and copying/pasting the material directly without any ethical concerns (i.e., no citations or acknowledgement), or without performing any critical evaluation of the AI-generated writing to check the relevance to the assignment goals or learning outcomes. […] [S]tudents are jumping to the advanced stages of Bloom’s taxonomy without mastering the foundational stages. Therefore, students need to learn how to use AI technologies judiciously, as well as discriminate between ethical and unethical practices.” (Ahmed, F., 2024, EDEV0044 Final Assignment, p. 2. Used with permission.)

What motivates students to use AI?

Even when AI use is limited or forbidden, some students may choose to use AI in their learning tasks and assignments. Here are some of the motivators reported by students and faculty (Stripling, 2025). Which ones sound similar to what your students have said to you?

Difficult personal situations

Students want to achieve passing grades and feel immense pressure to succeed academically. They may see AI as a way to manage their heavy workloads by saving time. Many students face significant stress and panic about their grades, leading them to cheat to avoid failure. They may lack confidence in their abilities but don’t want to fail, so they use AI to maintain their academic standing. Some students may turn to AI if they don’t see the relevance or value of the assignment or if they feel they need to prioritize other coursework.

Shifts in opinions about cheating

AI tools are easy to access, convenient, and private, leading students to believe there is a low likelihood of getting caught. If peers appear to be cheating with AI, students may feel pressure to do the same just to keep up. And when the perceived consequences of cheating seem minimal or worth the risk, students are more likely to take that risk.

Knowledge and learning gaps

Students often misunderstand the risks and limitations of using AI for assignments. They may not fully understand the errors, hallucinations, biases, and incorrect information that AI can produce. They may believe in the “myths” of AI and what it can do, failing to recognize the mechanisms by which AI formulates outputs (Salvaggio, 2024).

Also, students may not appreciate the negative consequences of AI (mis)use for their learning, knowledge, and skills in the workplace. Students may underestimate the importance of learning foundational knowledge and skills on their own. In addition, disrupted learning during the pandemic has left many students with weaker skills, making them more reliant on AI to complete their work.

Messages about AI

Students may not realize they are misusing AI if they are unclear about which uses are permitted and which are not. Students receive different signals about AI use from professors, as permission levels may vary from course to course, making it hard to discern what is acceptable. External pressures from media and advertisements can convince students that using AI is acceptable, reducing their sense of accountability. Peers who use AI may also embolden them to do the same.

Students may decide to use AI based on emotional and/or short-term rather than logical/long-term thinking, even when they know that AI use is restricted or forbidden. Decades of psychology research have found that:

“Our emotional brain has a hard time imagining the future, even though our logical brain clearly sees the future consequences of our current actions” (Bradt, 2004).

How do we help students avoid AI pitfalls?

Ultimately, students will decide whether and how they will use AI to complete learning tasks and assignments, but you can provide valuable guidance that considers their motivations and tendency to make decisions based on short-term (rather than long-term) thinking. Here are 8 ideas to consider:

  1. Provide clear and transparent AI Use Guidelines at the course start and then again when introducing specific assignments, detailing what AI use constitutes an academic integrity violation. This strategy assists students who need clarity about appropriate/inappropriate use.
  2. Engage students in active learning activities to recognize AI’s limitations, myths, and risks, especially the issues or problems most related to your course’s content and assignments. This helps students understand the problems with AI outputs more deeply. Rather than simply “tell” students about the pitfalls of AI, have students explore course-related pitfalls themselves.
  3. Use real-world, industry-related examples to illustrate the negative legal, economic, and reputational consequences of improper AI use in the workplace. This strategy makes the risks tangible and relevant, helping students see the real-world implications of their actions, especially if they have limited prior experience in their chosen fields.
  4. Engage students in classroom decision-making exercises, such as scenario-based learning, to practice ethical decision-making about AI use. “Gaming out” situations in which fictional students use AI unethically helps students foresee, in a low-stakes setting, the potential problems and negative consequences of violating AI use expectations. The approach proactively encourages students to recognize the value of their independent effort for academic success and career readiness.
  5. Prompt students to reflect on how AI use may negatively affect others, including how others perceive them. For example, have them consider the negative impact of a “robot”-generated output when the audience expects the response to come from them as a human. This encourages students to consider the broader impact of their actions on their reputation and relationships.
  6. Help students manage stress, time, and workload for high-stakes assignments during class. Provide templates, peer feedback activities, tips for learning task completion, and additional support. This reduces the temptation to use AI and addresses some of the broader reasons that have created the conditions for AI misuse by offering practical support and reducing pressure.
  7. Demonstrate how to be an “active operator” of AI if allowed in your course. Show students how to access, prompt, re-prompt, and save Copilot outputs. Provide examples of verifying, critiquing, citing, and documenting its use for quality and academic integrity. Use AI as a collaborator or discussion partner, then reflect on how it responded. This modelling of responsible AI use can provide clear examples for students to follow. You can also provide instructions for students to follow and practice independently.
  8. Foster a supportive classroom environment where students feel comfortable discussing AI use. Help students know you value their authentic work by showing caring and compassion, encouraging curiosity, giving choice in following their interests, and asking for thoughts and questions about AI. This builds trust and encourages open communication, making students more likely to seek guidance and use AI responsibly.

Looking for some in-class activities to bring these ideas to life? View Conestoga’s AI Guidebook for Teaching: 6.1 Teaching About the Risks of AI for resources, links, and strategies to share and explore with your students.

How can framing your AI pitfall message make it more effective?

You can make your message more likely to be received and remembered by framing it deliberately. What you say, and how you say it, shows students what you prioritize in your class.

See these two AI pitfall messages below. Which one would appeal to your students, and why?

Negative (Loss, Fear-based) Framing

“AI tools can use biased training data to express stereotypes and make bad decisions, leading to disinformation and injustice that can worsen inequalities. Using AI for assignments can compromise your academic work while spreading harmful biases. Overreliance on AI might leave you lacking key knowledge and skills for the workplace. For these reasons, it is important to use AI responsibly (or not at all).”

Loss-framed messages emphasize the harms, risks, and negative consequences. They can be effective for individuals with low autonomy when short-term outcomes are conveyed, as they highlight immediate consequences (Churchill et al., 2016). However, these messages may induce fear and anxiety, which can be counterproductive if not balanced with supportive guidance. They are also less effective for individuals with high autonomy, who may respond better to positive reinforcement. Ignoring the topic of AI, or describing it only in negative terms, may affect how students perceive your credibility.

Positive (Gain, Benefit-based) Framing

“AI tools can help with your learning, but they also have serious limitations, and relying on them too much can hurt your growth. Sharing AI errors and misinformation can also hurt others. To use AI well, you need to think carefully and put in your own “human” effort so it supports your skills rather than replacing them. Let’s explore when AI can improve what you can contribute to your future workplace, and when you will shine in your role by not using AI at all.”

Gain-framed messages emphasize the benefits of responsible AI use and can motivate students by highlighting positive outcomes. These messages are particularly effective for fostering long-term behavioural change and encouraging students to see the value in ethical practices. However, they may not be as immediately impactful for students who are primarily motivated by avoiding negative consequences (Oxman et al., 2022). Acknowledging a balance of the benefits and risks of AI can improve your credibility with students.
Table 1: Two different ways to “frame” AI messaging.

Emotional messages can be more persuasive for students with low information literacy as they resonate personally and can drive immediate action (Stewart et al., 2018). Factual messages, on the other hand, are more effective for students with high information literacy, as they provide concrete evidence and logical reasoning. Balancing both messages can ensure the communication is impactful and informative.

References and Resources

Ahmed, F. (2024). EDEV0044 Final Assignment.

Bleakley, A., Jordan, A. B., Strasser, A. A., Lazovich, D., & Glanz, K. (2020). Testing general versus specific behavioral focus in messaging for the promotion of sun protection behaviors. Annals of Behavioral Medicine, 54(2), 108–118. https://doi.org/10.1093/abm/kaz032

Bradt, S. (2004, October 14). Brain takes itself on over immediate vs delayed gratification. The Harvard Gazette.

Churchill, S., Pavey, L., Jessop, D., & Sparks, P. (2016). Persuading people to drink less alcohol: The role of message framing, temporal focus and autonomy. Alcohol and Alcoholism, 51(6), 727–733.

Conestoga Library. (2024). Generative AI Toolkit for Students. Conestoga College.

Ippolito, J. (2023). IMPACT RISK acronym for AI downsides. Version 2.4.

Oxman, A. D., Fretheim, A., Lewin, S., Flottorp, S., Glenton, C., Helleve, A., Vestrheim, D. F., Iversen, B. G., & Rosenbaum, S. E. (2022). Health communication in and out of public health emergencies: To persuade or to inform? Health Research Policy and Systems, 20(28). https://doi.org/10.1186/s12961-022-00828-z

Salvaggio, E. (2024, August 29). Challenging the myths of generative AI. TechPolicy.Press.

Stewart Hoover, D., Wetter, D. W., Vidrine, D. J., Nguyen, N., Frank, S. G., Li, Y., Waters, A. J., Meade, C. D., & Vidrine, J. I. (2018). Enhancing smoking risk communications: The influence of health literacy and message content. Annals of Behavioral Medicine, 52(3), 204-215. https://doi.org/10.1093/abm/kax042

Stripling, J. (2025). The cheating vibe shift. The Chronicle of Higher Education.

AI Disclosure

Copilot was used to develop a summary of workshop content (EDV0309), integrate the provided notes, adjust the communication framing section, reorder sections, and provide an AI Disclosure statement. The post was heavily human-edited in multiple revisions following the outline produced by AI based on the workshop content. My detailed instructions guided Copilot’s contributions, and additional information was provided to ensure accuracy and relevance.


Elan Paulson

Elan Paulson, PhD, has been an educator in Ontario's higher education system since 2004. Before joining Conestoga as a Teaching and Learning Consultant, Elan was on the executive team at eCampusOntario. She previously served as Program Director and as an instructor in professional education programs at Western University's Faculty of Education. With a Master's in Educational Technology, Elan specializes in technology-enabled and collaborative learning to support diverse learners. She has also conducted research on faculty participation in communities of practice for professional learning and self-care.
