Students speak: teacher evaluations

Tuesday, December 12th, 2017 at 9:30 AM

Occasionally they appear before YouTube videos.  

“How likely are you to watch ‘Wonder Woman’?” one might ask.  

You’re encouraged to reply in strong agreement, in strong disagreement, or to remain neutral. They’re tucked into little holders at restaurants. Shoved in your face at select school events. They come digitally, on paper, sometimes projected across a screen. These surveys are all different — but they are united by one fact: they are often unexpected, much like professor evaluation day. 

In this continuing exploration, we turn to evaluations: how they differ among universities, and what students think of them. 

According to university officials, student evaluations of professors are an important tool for schools to measure teaching effectiveness. 

“Student evaluations (of professors) are only one tool I use to improve my teaching,” explained Mary Papakie, chairperson of the journalism and public relations department at Indiana University of Pennsylvania (IUP).

She continued: “We have peer observers (my colleagues are invited in to watch me teach and give me feedback), my chairperson does the same (I am the chairperson now, so I have to have an acting chairperson do that; but, regardless), and we have a wonderful organization on campus, the Center for Teaching Excellence and its Reflective Practice Program, that hosts workshops and introduces us to the latest and greatest practices at IUP. I take advantage of all the tools I have at my disposal to continuously improve my teaching.” 

The Pennsylvania State System of Higher Education (PASSHE) doesn’t mandate one specific method of student evaluation. As a result, each university has its own survey that its students use to evaluate their professors.  

“The content of the survey instrument is completely up to what is decided among the students, administration and faculty,” said Dr. Kenneth Mash, president of the Association of Pennsylvania State Colleges and Universities (APSCUF). “Consequently, the instrument is different on every campus. The primary goal is to measure students’ opinions of teaching effectiveness. The results of the survey are taken very seriously. Many professors also volunteer to be evaluated so that they can get student feedback to improve their courses.”

The method of evaluation can affect how many students take it, though, and thus how useful it is to university administrators.  

“The percentage of students submitting evaluations can vary considerably by course, especially for evaluations conducted online. In fact, online evaluations have often been found to result in lower response rates than those for paper-based evaluations,” said Barbara Lyman, provost and executive vice president at Shippensburg University.  

“At Shippensburg University, since fall 2012, the percentage of students participating in submitting evaluations each fall semester has ranged from 52 percent to 62 percent.”

In his study, “An Evaluation of Evaluations,” Philip Stark, professor of statistics at UC Berkeley, discussed how numeric-based questionnaires do not provide an effective overview. 

He argues: “SET (student evaluations of teaching) scores are ordinal categorical variables: The ratings fall in categories that have a natural order, from worst (1) to best (7). But the numbers are labels, not values. We could replace the numbers with descriptions and no information would be lost. The ratings might as well be ‘not at all effective,’ to ‘extremely effective.’”

Stark’s argument makes sense. After all, how can fewer than 50 percent of a professor’s students judge that professor’s teaching effectiveness through a set of rigid questions that allow little room for anecdote?  

IUP asked those same questions in 2015, when it moved from a traditional survey to a questionnaire that asks more direct questions about students’ experience in the classroom, allowing lengthier discussion of what went right and wrong that semester.  

Papakie added: “I definitely like the new instrument better than the old one. If you compare them side-by-side, you will see that the newer instrument encourages students to reflect on how much effort they, as students, put into the class as well. I like that.”

Stark also concluded: “SET may be reliable, in the sense that students often agree. But that’s an odd focus. We don’t expect instructors to be equally effective with students with different background, preparation, skill, disposition, maturity and ‘learning style.’ Hence, if ratings are extremely consistent, they probably don’t measure teaching effectiveness. If a laboratory instrument always gives the same reading when its inputs vary substantially, it’s probably broken.”

Students, however, don’t always see it that way. When a professor receives uniformly bad reviews, students often argue that something must be wrong with the professor.

Caroline Fruchter, a political science major at Shippensburg, said of reviewing an unpopular professor: “The department knows this is an unpopular teacher, so they should especially read our suggestions. We’re not looking to take him down with nasty personal comments, but just express how the class could benefit us better. We are spending tens of thousands of dollars on this school; we deserve to get the most out of it.” And administrations usually believe that students like Fruchter have a case to make.  

“IUP uses an evaluation instrument (a survey) that is the outcome of consultation among IUP administration, IUP faculty, and student government,” said Tim Moreland, provost and vice president of academic affairs at IUP. 

“The particular [instrument] that each campus used is designed on the campus with the administration, the students, and the faculty on that campus. That includes whether that instrument will be done in person or online,” said Mash.  

So, if the business of professor reviews is in the consumer’s hands, why do students feel otherwise? 

Jenna Wise, a journalism student at Shippensburg University, said: “To be honest, I do not know if my thoughts are being heard. Very little is shared with SU students about the evaluation process and how our evaluations are used. I think that providing students with more information would make students feel as if their voices are being heard, [and] would encourage more students to fill out the evaluations.”  

You walk into class, and for a second you think you’ve walked into another class because the professor standing at the podium isn’t your professor. Surveys are handed out, and assurances are made that they won’t be seen by your professor until final grades are posted. It’s your turn to grade the grader. 

No matter how unexpected, the survey that you hand back on student evaluation day — whether it’s full of well-thought-out responses or hurried answers — will be used to evaluate your professor.

Shayma Musa is the copy editor for The Spectator. She can be reached at
