
Interpreting Student Course Evaluation Feedback


University Policy 4300, Student Evaluation of Faculty, requires that students be given the opportunity to provide feedback on their courses. End-of-course evaluations from students are a source of insight into students’ experience of a course. In interpreting this feedback, your aim is to first understand your students’ perceptions and experiences and then to use it to make informed decisions about changes you might make to improve student learning.

Student feedback is only one source of information for those decisions. It should primarily be used for formative assessment: an instructor’s ongoing, reflective process of gathering information about their teaching (e.g., classroom observations, mid-semester student feedback, and patterns in students’ performance on assignments), reviewing it in the context of the institution, students’ needs and abilities, course and program learning outcomes, and evidence-based practices for effective instruction, and using it to identify the instructor’s strengths in teaching as well as areas for growth and development.

It’s important to keep the following in mind when interpreting these course evaluations:

  • Student ratings are student perception data: Students are well-positioned to speak of their satisfaction with their experience in a course (e.g., the difficulty of content, engagement, or boredom) but are much less well-positioned to assess an instructor’s teaching quality, effectiveness, and breadth of knowledge and scholarship.
  • Student ratings are NOT faculty evaluations: Research has identified a number of factors that affect student evaluations of teaching (SET) that are out of an instructor’s control. Ratings can be biased by a number of variables such as an instructor’s gender, perceived attractiveness, ethnicity, and race. Evidence also suggests that course characteristics such as class size and type (core curriculum versus elective) affect SETs.
  • Student ratings are NOT measures of student learning: Research consistently fails to find evidence of a compelling correlation between measures of student learning and ratings of teaching quality and effectiveness.
  • Small differences in mean (average) ratings are common and not necessarily meaningful: Evidence points to problems with the statistical validity of SETs, such as the unequal distances between adjacent response options (e.g., the gap between “Excellent” and “Good” may not match the gap between “Poor” and “Very Poor”), which makes interpreting results problematic, especially when comparing means.
  • Single course evaluation results are just small pieces of a larger picture and should be viewed in that context, rather than in isolation: Part of that larger picture could include mid-term feedback from students, peer observations and evaluations, and self-evaluations in the form of a teaching portfolio or similar artifact.
  • Higher response rates yield more reliable information: Response rates of 60% or higher provide considerably more insight than lower rates.

Student feedback can help you identify the strengths of your approach to your class – in the eyes of your students – as well as the challenges students encountered.

All Full-Time Faculty: Remember to upload your student evaluation of teaching responses to your Faculty 180 account.

Guidelines for Interpreting Student Course Evaluation Feedback

Understand Your Students' Perceptions and Experiences

Take a Student-Centered Approach



Consider the following suggestions:

Start Here: Five Steps

Five Steps for Responding to Feedback from Students

Adapted from the University of Georgia’s Center for Teaching and Learning and Stanford University’s “Interpreting Feedback”.

  1. Reflect. Before you look at your student evaluation scores and comments, think about what you are expecting. What went well in your course? What are some areas for growth, development, or change?
  2. Read and React. Look at your collected scores and student comments, then allow yourself to have an emotional reaction – good or bad – in response to the data.
  3. Relax. Take a break. Process your emotions and prepare yourself to come back to the data with a fresh, more objective approach.
  4. Revisit. Return to your evaluations and process them with a view to understanding your students’ perceptions and experiences in your class.
  5. Respond. Make decisions about what you will and will not change in your course(s) and your teaching, based on the feedback your students have given you. Make some notes for yourself alongside your course materials, so that you can remember what you would like to incorporate and/or change next time. (See suggested strategies below).

1. Read: Understanding Likert-Scale Results

The evaluation forms use a Likert scale, the most widely used approach to scaling responses in survey research and the foundation of the student course-instructor feedback. For an accurate interpretation, be sure to keep the scale in mind while reading through your report. The scale is as follows:

  1. Strongly Disagree
  2. Disagree
  3. Neutral
  4. Agree
  5. Strongly Agree
  6. N/A – Cannot Rate [Not included in norm/average calculations]

In general, high scores (4 and above) can be interpreted as a student consensus indicating a strength. On the other hand, low scores (2 and below) point to an area that, according to student feedback, needs developmental focus. Small differences in mean (average) ratings are common and not necessarily meaningful. This simple indicator can help you identify the most critical elements and prioritize accordingly when putting your plan into action.
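As a concrete illustration, the per-question mean in your report is computed only over rated responses; “N/A – Cannot Rate” answers are excluded. The following is a minimal sketch with made-up response data, assuming N/A is recorded as code 6 per the scale above:

```python
# Hypothetical responses to one evaluation question, coded on the
# scale above (1 = Strongly Disagree ... 5 = Strongly Agree, 6 = N/A).
responses = [5, 4, 4, 3, 6, 5, 2, 6, 4]

# Drop N/A responses (code 6) before averaging, as the report does.
rated = [r for r in responses if r != 6]
mean_rating = sum(rated) / len(rated)

print(f"{len(rated)} rated responses, mean = {mean_rating:.2f}")
# -> 7 rated responses, mean = 3.86
```

Here a mean of 3.86 falls between “Neutral” and “Agree”: neither a clear strength (4+) nor an area needing immediate attention (2 and below).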

2. Read and Understand: Validity and Response Distributions

These evaluations reflect student perceptions, so their reliability increases with the number of responses. Contradictory written comments are not unusual. Base your improvement plan on the most representative results rather than on outlying responses.

You are shown response distributions because they give a better overall picture than mean scores alone. For example, it is often more meaningful to read a distribution as showing the percentage of students who rated an item “excellent or good” than to rely on the mean by itself.

Look for patterns: are the distributions consistent and in the ranges you expect? Are there unusual clusters, such as a “spike,” or a very high and very low grouping?
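Why distributions matter can be sketched with a short, made-up example: two courses with the same mean rating but very different response patterns. The polarized “very high and very low grouping” in the second course is invisible in the mean alone:

```python
from collections import Counter

# Hypothetical ratings for two courses on the 1-5 scale above.
course_a = [4, 4, 4, 4, 4, 4, 4, 4]      # consensus around "Agree"
course_b = [5, 5, 5, 5, 5, 5, 1, 1]      # polarized: mostly 5s, plus some 1s

for name, scores in [("course_a", course_a), ("course_b", course_b)]:
    mean = sum(scores) / len(scores)
    dist = dict(sorted(Counter(scores).items()))  # rating -> count
    print(f"{name}: mean = {mean:.1f}, distribution = {dist}")
# Both means are 4.0, but course_b's distribution shows a sub-group
# of strongly dissatisfied students that the mean hides.
```

A spike of low ratings like this is exactly the kind of cluster worth investigating through the written comments.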

A single mean score can be a few tenths of a point higher or lower simply because of the particular sample of students enrolled in a course from term to term. An increase or decrease of that size should not necessarily be interpreted as a significant change.

3. Read and Reflect: Understanding Student Comments

Once you have a general sense of what needs improvement, drill down to the related questions and consider them as distinct items. Students evaluated them as such, and they will suggest tangible steps and actions to incorporate into your development process.

It is common to concentrate on outliers or unique responses, but it is more useful to look for patterns and trends than to speculate about an isolated score.

Compare the comments of students who gave the course a very positive response with those who gave the course a less positive response to gain insight into the most important issues to address.

Look for themes in the comments. There may be a significant sub-group of students who could benefit from course changes or different approaches.

4. Revisit and Respond: Reflecting on Student Feedback

Using Student Feedback to Revise Your Course and Enhance Student Learning

  1. Consider which of these categories might need more attention:

Learning Goals:

      • Are students more likely to report achieving some learning goals than others? Are the goals well-articulated?

Student Learning: 

      • Does my own assessment of my teaching match my students’? If not, why not?
      • Are there issues to address in the course design or teaching methods?
      • Is my course attracting the students I expected, with appropriate interests and prior knowledge? If not, what adjustments in course description, learning goals, and materials might I consider?

Attendance and Engagement:

      • Are students attending the course regularly? Are students spending a reasonable amount of time on the course outside of class?

Instruction and Organization:

      • Is the course structure clear to students? Could it be revised?

Course Elements:

      • The section on course elements allows you to assess the relative usefulness to students of different course elements. Are some elements more effective than others?

