AI-Adapting Activities and Assignments: Part 1 of 2
This is the first of two posts considering “AI-adapting” activities and assessments. If the use of AI does not serve the learning goals of your course, you may wish to adapt your in-class activities and assessments of learning to minimize and discourage the use of AI by students.
Why Consider Restricting AI Use
Forbes and Brandauer (2023) describe how faculty can permit or restrict student use of AI in learning activities and assessments using three main categories: closed, conditional, and open. “Closed” is the most restrictive approach: no use of genAI is permitted, and any AI use is considered a violation of academic integrity or an instance of academic misconduct. Put another way, genAI use is not seen as collaboration but as “collusion” or an “unauthorized aid.” This prohibition covers both assistive AI (like Grammarly) and generative AI (like Copilot).
Why might you consider restricting AI use? You may do so when AI use does not serve the course’s learning goals. If AI can easily reproduce the work students are meant to produce as evidence of learning, it becomes difficult to distinguish whether that evidence is human- or AI-generated (Birks & Clare, 2023).
For example, in a Civil Engineering Technology program, a structural analysis course might require students to calculate the load-bearing capacities of beams and columns by hand. This hands-on approach enables students to develop their calculation and analytical skills. Using genAI to perform these calculations would interfere with students’ ability to demonstrate the learning outcomes on their own.
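To make the kind of manual work involved concrete, here is a minimal worked example of an allowable-stress bending check. The formula is standard mechanics of materials; the numerical values are illustrative assumptions, not figures from any particular course.

```latex
% Hypothetical worked example: allowable bending moment of a steel beam.
% sigma_allow (allowable bending stress) and S (elastic section modulus)
% are assumed values chosen for illustration only.
\[
M_{\text{allow}} = \sigma_{\text{allow}} \times S
                 = 165\ \text{MPa} \times 500 \times 10^{3}\ \text{mm}^{3}
                 = 82.5 \times 10^{6}\ \text{N}\cdot\text{mm}
                 = 82.5\ \text{kN}\cdot\text{m}
\]
```

A chatbot can produce a result like this instantly, which is exactly why a closed approach asks students to show each step by hand.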
For more information about how you may wish to restrict or permit the use of AI, see the related post, The Optional Use of GenAI in Assessments.
Activity and Assessment Elements to Review for the Closed Approach
If you take a “closed” approach to AI use in your course(s), you may wish to identify how aspects of your activities and assessments may make them “AI-vulnerable.” An assessment may be AI-vulnerable when it lacks measures to prevent or discourage the unauthorized use of AI, or when it is difficult to discern whether the product is AI- or human-generated.
Below are some questions you might consider as you review the “AI vulnerability” of your learning activities and assessments.
Can an AI output easily approximate what a human can do?
AI chatbots can quickly and effectively identify patterns, summarize data, report information derived from their training data sets, draft documents, and more. For these lower-order thinking and doing skills in particular, AI may have capabilities that match or exceed those of a novice learner.
Incorporating elements that make an AI output less viable can discourage AI use. In a Computer Programming course, students might be asked to make simple fixes to the code of a basic chatbot, a task that AI can easily do. To make this task less AI-vulnerable, the assignment could be modified to require students to extend the chatbot to handle more complex tasks or unexpected user inputs, as sketched below.
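As a rough sketch of what that modified assignment might look like, the starter code below is a minimal rule-based chatbot. All names and details here are hypothetical, not drawn from the original post: the point is that a "simple fix" version (say, correcting one typo) is trivial for AI, while handling unexpected inputs requires students to reason about edge cases.

```python
# Hypothetical starter code for the modified assignment: a minimal
# rule-based chatbot. The original "simple fix" task might only ask
# students to correct a typo; the AI-resistant version asks them to
# make the bot handle unexpected user inputs gracefully.

RESPONSES = {
    "hello": "Hi there! How can I help?",
    "hours": "We are open 9 a.m. to 5 p.m., Monday to Friday.",
    "bye": "Goodbye!",
}

def reply(user_input: str) -> str:
    """Return a canned response for a known keyword.

    Students extend this to handle inputs the naive version breaks on:
    empty strings, mixed case, punctuation, and keywords embedded in
    longer sentences.
    """
    text = user_input.strip().lower()
    if not text:
        return "I didn't catch that. Could you type something?"
    # Match keywords anywhere in the sentence, not only as exact input.
    for keyword, response in RESPONSES.items():
        if keyword in text:
            return response
    return "Sorry, I don't understand yet. Try asking about our hours."

if __name__ == "__main__":
    print(reply("Hello!"))                # greeting
    print(reply("What are your hours?"))  # hours response
    print(reply(""))                      # graceful fallback
```

Grading can then reward students’ reasoning about which edge cases matter and why, a process an AI-generated diff does not demonstrate.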
Is the perceived interest or relevance for students high?
If students are uninterested in the activity or perceive the work as irrelevant or unhelpful for achieving their personal and professional goals, they may be more likely to “offload” the cognitive effort to generative AI. The assessment could be tailored to align more closely with students’ professional goals by incorporating real-world applications of the course material to increase interest and relevance.
In a Mechanical Engineering Technology course, for example, students might be asked to write an essay on the history of mechanical engineering, a topic that may feel disconnected from their goal of designing modern machinery. To make this assignment more relevant and less AI-vulnerable, the essay prompt could be adjusted to ask students to discuss how historical mechanical engineering principles influence modern machinery design.
Is it a written-only submission?
Text-to-text chatbots have many different writing capabilities. Requiring multimedia elements may discourage AI use because producing them means integrating text, images, and sound to meet specific requirements.
In a Graphic Design course, for example, students might be asked to write a critique of a designer’s work, and AI may be able to generate a plausible written critique (especially if the designer is famous). To make this assignment more engaging and less AI-vulnerable, your assessment could ask students to represent their critiques visually, such as in a sketch or a digital design. Students may also be asked to present their work orally.
Is it directly related to in-class work?
Requiring students to incorporate in-class activities can minimize AI use, because it asks them to connect what they do in class without AI to what they produce in an assignment.
In a Business Administration course, for example, students might be asked to write a report on modern trade systems. To make this assignment more relevant and less AI-vulnerable, the report prompt could be adjusted to require students to incorporate specific vocabulary explained in class, examples from in-class work, or peer feedback provided in class.
Is it connected to specific organizations, current events, or community issues?
AI outputs tend to be general and vague; they can even hallucinate (e.g., create “fake” but plausible-sounding information). Using specific, local, and real contexts can discourage AI use because this strategy requires students to review specific information and materials to which AI may not have access.
In a Journalism course, for example, students might be asked to write a grant letter that does not need to reference actual companies, current events, or community issues. To make this assignment more relevant and less AI-vulnerable, the prompt could be adjusted to require those real, local references.
Is using the course materials optional?
Requiring direct quotations from and references to required class materials can reduce genAI use, because genAI may not have access to the specific resources assigned in the class. Nor can it “think” about which direct quotations best fit the assignment’s stated purpose.
In a Biology course, for example, students might be asked to write an essay on a chosen topic, with referencing specific course materials left optional. To make this assignment less AI-vulnerable, the essay prompt could be adjusted to require students to reference specific course materials in their essays using APA-style references.
Is only the final draft required?
Asking students to build their assignments in steps, stages, or drafts can minimize genAI use, because shifting the focus from a single final product to a developmental process makes the work more difficult to “simulate” with genAI.
For example, in a Communications course, students might be asked to submit a final draft of a book review without submitting any preliminary drafts. To make this assignment less AI-vulnerable, it could be modified to require students to submit preliminary drafts or outlines before the final draft. Students might also be required to incorporate in-class peer or faculty feedback on the draft as a core rubric item.
Are the expectations quite general?
Evaluation rubrics that require human capabilities to demonstrate mastery can discourage AI use. An assignment rubric that evaluates specific human capabilities, including understanding context, nuance, emotion, and audience, as well as personalization, complexity of thought, and plainspoken communication, will succeed where a formulaic, algorithmically derived AI output will not.
For example, in a Healthcare course, students might be asked to write a research paper graded with general rubric criteria that focus on clarity, organization, and grammar rather than on specific case analysis. To make this assignment less AI-vulnerable, the rubric could include more specific criteria, such as applying healthcare theories or analyzing particular cases. Other criteria that value human capabilities include creative thinking and new insights.
By assigning non-traditional assessments, incorporating real-time elements, and valuing the process as much as the product in your criteria, you can make your assessments less AI-vulnerable. While there is no one way to “AI-proof” an assignment for a closed approach, layering multiple strategies may help discourage AI use and achieve course learning goals.
Summary
Taking a “closed” approach may mean adapting your activities and assessments to make them less AI-vulnerable and so discourage AI use. Keep in mind that by prohibiting all use of AI, you also take on the work of monitoring and “policing” that restriction. Alternatively, you may decide to adopt a “conditional” or “open” approach to AI use by students, which is covered in Part 2 of this post series.
Regardless of your approach, it is important to communicate your expectations for students clearly and explicitly, in writing and orally in class, and refer students to appropriate academic integrity resources provided by the College.
References
Birks, D., & Clare, J. (2023). Linking artificial intelligence facilitated academic misconduct to existing prevention frameworks. International Journal of Educational Integrity, 19, Article 20.
Forbes, M., & Brandauer, J. (2023). What’s my stance on genAI in this class? Gettysburg College Johnson Center for Teaching and Learning.