
Conestoga’s Evaluation of Student Learning Policy is changing: Here’s what you need to know
Starting September 1st, 2025, the updated Evaluation of Student Learning Policy will bring changes to how assessments are structured in courses. These updates aim to ensure that assessments are both meaningful and manageable for students and faculty.
The highlights:
- Item 7. Typically, there are no more than seven (7) evaluations in a course, unless approved by the Dean.
- Item 8. No evaluation can be valued at less than 5% of the course grade, unless approved by the Dean.
The intention of these changes is to provide time between assessments and promote effective assessment strategies during Conestoga’s 14-week semester.
This post describes a 5-step process to begin revisions so that assessments align with the updated policy. You’ll also find information on where to go for support in making these revisions.
A 5-step process to begin revisions
1. Itemize assessments in your course.
It’s a good idea to start by looking at the evaluation plan in your course outline. If your course currently has more than seven evaluation items in total, it’s time to reassess and streamline. The same goes for any evaluation item valued at less than 5%.
When itemizing assessments, ask yourself:
- How many evaluation items are listed in the outline?
- How much is the lowest evaluation item worth?
- Are there additional evaluation components (e.g., ungraded assessments, scaffolded items) not reflected in this tally?
Table 1. Sample evaluation plan

| Evaluation Item | Breakdown | Weight |
| --- | --- | --- |
| Quizzes | 12 @ 1% each | 12% |
| Labs | 6 @ 5% each | 30% |
| Tests | 3 @ 10% each | 30% |
| Case Study | 1 @ 3% | 3% |
| Exam | 1 @ 25% | 25% |
| Totals | 23 items | 100% |
In the sample evaluation plan above (Table 1), for example, we can tally 23 evaluation items in total. The Case Study also stands out because of its 3% value. The number of evaluation items exceeds the new policy limit, so assessments will need to be combined or condensed. In the process, the low-value Case Study could be combined with another evaluation item in a scaffolded approach.
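The step-1 tally can also be sketched as a short script, a purely illustrative sanity check against the two policy limits (no more than seven items; no item under 5%). The item names and weights below simply mirror the sample plan and are not part of the policy itself:

```python
# Hypothetical sketch: checking a sample evaluation plan against the
# updated policy limits (at most 7 items; each item worth at least 5%).
# Names and weights are illustrative, mirroring the sample plan above.

plan = {
    "Quizzes": [1] * 12,   # 12 quizzes @ 1% each
    "Labs": [5] * 6,       # 6 labs @ 5% each
    "Tests": [10] * 3,     # 3 tests @ 10% each
    "Case Study": [3],     # 1 case study @ 3%
    "Exam": [25],          # 1 exam @ 25%
}

# Flatten the plan into (name, weight) pairs, one per evaluation item.
items = [(name, w) for name, weights in plan.items() for w in weights]
total_items = len(items)
total_weight = sum(w for _, w in items)
low_value = [(name, w) for name, w in items if w < 5]

print(f"Items: {total_items} (policy limit: 7)")   # Items: 23 (policy limit: 7)
print(f"Total weight: {total_weight}%")            # Total weight: 100%
print(f"Items under 5%: {len(low_value)}")         # Items under 5%: 13
```

The output makes the problem visible at a glance: 23 items against a limit of seven, with the twelve 1% quizzes and the 3% case study all falling below the 5% floor.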
2. Look at learning outcomes.
Consider overlap in the course learning outcomes (CLOs) being assessed. The same CLO(s) mapped to multiple evaluation items suggests over-assessment. For instance, since both the quizzes and labs in the example below (Table 2) assess CLOs 1, 2, and 3, you might combine these into fewer, more comprehensive assessments. This not only reduces the number of evaluations but also ensures that each CLO is assessed in a meaningful way.
Table 2. Sample evaluation plan with CLO mapping

| Evaluation Item | Breakdown | Weight | CLOs Assessed |
| --- | --- | --- | --- |
| Quizzes | 12 @ 1% each | 12% | CLOs 1, 2, 3 |
| Labs | 6 @ 5% each | 30% | CLOs 1, 2, 3 |
| Tests | 3 @ 10% each | 30% | CLOs 3, 4, 5 |
| Case Study | 1 @ 3% | 3% | CLOs 4, 5, 6 |
| Exam | 1 @ 25% | 25% | CLOs 4, 5, 6 |
| Totals | 23 items | 100% | |
Keep in mind that each CLO needs to be assessed at least twice, but both assessments can occur within the same evaluation item.
When reflecting on CLOs, ask yourself:
- How many times is each learning outcome assessed?
- Does each evaluation item align with a learning outcome?
- Is there repetition or significant overlap in CLOs being assessed?
- Is there “drift,” where the assessment aligns with the CLO in theory but not practice? Ensure the assessment as it’s delivered aligns with the learning outcome.
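Answering the first of these questions can be sketched programmatically too. The snippet below (a hypothetical illustration using the CLO mapping from Table 2, where each table row is treated as one evaluation item type) counts how often each CLO is assessed and flags any that fall short of two assessments:

```python
# Hypothetical sketch: counting how often each course learning outcome
# (CLO) is assessed, using the illustrative mapping from Table 2.
from collections import Counter

clo_map = {
    "Quizzes": [1, 2, 3],
    "Labs": [1, 2, 3],
    "Tests": [3, 4, 5],
    "Case Study": [4, 5, 6],
    "Exam": [4, 5, 6],
}

# Tally CLO coverage across all evaluation items.
coverage = Counter(clo for clos in clo_map.values() for clo in clos)

for clo in sorted(coverage):
    note = "" if coverage[clo] >= 2 else "  <- assessed fewer than twice"
    print(f"CLO {clo}: assessed by {coverage[clo]} evaluation item(s){note}")
```

Here every CLO is covered at least twice, so the coverage requirement is met; the overlap between quizzes and labs (both mapped to CLOs 1, 2, and 3) is the signal that those items are candidates for combining.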
3. Reflect on the purpose of assessments.
Next, think about what’s working and what isn’t in your current assessments, and what purpose each one serves.
Take stock of what assessments work well in their current form as indicated by:
- Authentic engagement: Are students interested in the assignment and motivated to complete it? Are you pleasantly surprised by their diligence and their attention to the assessment task?
- Signs of adjustment or improvement in learning: Do you notice students intentionally attempt to improve over time, as communicated by their questions, requests for support, and application of feedback?
- Creative submissions: Do you find students submit creative and original work?
- Richness of formative data, both for the learner and the faculty: Does the assessment give you and your students an accurate picture of the learning that is happening?
And what assessments might not be working so well as indicated by:
- High fail or high pass results (e.g., test questions, rubric criteria) captured in assessment data on eConestoga.
- Inauthentic or low engagement, suggested by academic integrity violations and incomplete submissions.
- Confusion around assignment expectations or difficulty following instructions, suggested by misaligned submissions or a need for significant support to complete the task.
In terms of purpose, consider if some evaluations are being used to monitor attendance or completion of readings. If so, these tasks might be better implemented as ungraded asynchronous, synchronous, or in-class activities.
Here it can be helpful to differentiate between graded and ungraded assessments to provide formative feedback without adding to the total number of evaluations. For example, use class time to prepare students for assessments, incorporating practice quizzes and peer feedback circles using the assignment rubric. This approach not only incentivizes attendance but also ensures that students are well-prepared for their evaluations.
Will students still come to class or do readings without grades?
Tying grades to attendance or readings doesn’t necessarily translate into meaningful learning or tell us much about student learning across a course. Instead, incentivize attendance and engagement by including assessment preparation at the heart of class time. Structure teaching and learning activities around assessments (including instructions, practice, problem solving, and feedback) through:
- Show and tell.
- Reflection on challenges and questions.
- Practice for an upcoming assessment, with feedback.
- Guided peer support.
- Demonstration of grading using the rubric.
- Workshops and debriefs.
4. Introduce scaffolding to combine evaluation items and add pedagogical value.
When restructuring your evaluations, look for areas to include scaffolded assessments: these are multiple assessments that build toward a final, high-stakes assessment. For example, you might replace ten quizzes with two larger quizzes and a midterm test, each worth a higher percentage of the grade. This approach not only aligns with the new policy but also supports students in developing their skills progressively. Scaffolded assessments provide opportunities for formative feedback, helping students improve before the final evaluation.
Another example of this approach is to take a higher-stakes, summative assessment and break it into smaller, thoughtful submissions that support student time management and allow for rich formative feedback along the way. Such an approach can contribute to greater success for learners on the final submission and a clearer demonstration of the learning outcomes.
An example in practice would be taking an existing 30% paper (or whatever assignment aligns with your program/vocation) and breaking it down into three distinct submissions: 1) a 5% outline document, 2) a 5% partial draft, and 3) a 20% final submission. This strategy allows you to provide feedback to students as they progress through the paper, creating an often much-improved final product. Moreover, the scaffolding of components generates a timeline for completion, so students are less likely to procrastinate. Within the context of the policy, this scaffolded paper would count as three assessments. If scaffolding brings your total number of evaluation items above seven, it can still be submitted as an acceptable evaluation plan for your Dean’s approval.
5. Emphasize pedagogical value.
Overall, the goal in revising the evaluation plan is to ensure assessments have pedagogical value, which means:
- Students have time to experiment, receive feedback, and adjust their approach across assessments to meet the expectations of learning.
- Assessment tasks provide valuable formative data on student learning to both the faculty and the student, who similarly gain insight into what most effectively supports learning.
What if I need an exception to policy updates?
The policy is not one size fits all. Flexibility in approach is necessary to recognize different assessment needs across programs. Consult with your Chair to discuss exceptions and the pedagogical value of such exceptions within the context of the course.
When course outlines are submitted for approval on the Employee Portal, any outline that includes more than seven evaluation items will be flagged for review by your Chair (you’ll also see the flag). It’s therefore important to discuss exceptions with your Chair in advance of the approval stage. Note that any requests for exceptions will also move up for approval from the Executive Dean or Dean after the Chair’s approval.
You have the option to include language under the “Notes” section on the course outline to explain to students any anomalies in the evaluation scheme. For example, if specific evaluation items are scaffolded, this can be communicated to students for transparency.
You might include a blurb that states something like:
“The essay in this course is scaffolded into three components, including a proposal (5%), draft (10%), and final submission (20%).”
“The midterm in this course includes a preparatory study sheet (5%) submitted one week in advance of the midterm date.”
Supports are available
Teaching and Learning is offering drop-in sessions, workshops, and individual consults for those currently modifying course outlines. Drop-in sessions will be supported by consultants from Curriculum and the Online Learning Centre to advise, respectively, on modifications to course learning outcomes and asynchronous courses or course shells. More information on the policy updates described in this post and answers to FAQs can also be found on SharePoint.
Don’t hesitate to get input to align your course with the updated Evaluation of Student Learning Policy and maintain a focus on meaningful and effective assessments. Remember, the goal is to provide students with a manageable workload that accurately reflects their learning and progress.