Researchers Rate RateMyProfessors

Surprising news from The Chronicle of Higher Education on the possible utility of RateMyProfessors. The rating site, which is rather more widely used in the USA than in the UK, is generally regarded by universities with, at best, deep scepticism or, more likely, downright hostility. However, there is a different view:

new research out of the University of Wisconsin at Eau Claire suggests the popular service is a more useful barometer of instructor quality than you might think, at least in the aggregate. And the study, the latest of several indicating RateMyProfessors should not be dismissed, raises questions about how universities should deal with a site whose ratings have been factored into Forbes magazine’s college rankings and apparently even into some universities’ personnel evaluations.

“There is the possibility that people may feel legitimized to use the information in potentially dangerous ways,” says April Bleske-Rechek, an associate professor of psychology at Eau Claire, who is a co-author of the new study. They might, for example, give too much weight to comments on the site in deciding whether to hire someone or grant the person tenure.

RateMyProfessors, which debuted in 1999 and boasts over 10 million student-produced comments and ratings, calculates an instructor’s quality by averaging how the site’s users score the professor in two categories, “helpfulness” and “clarity.”
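The described calculation is just an average over the two category scores. A minimal sketch of that scheme (the exact formula is not published, so the per-user averaging and the one-decimal rounding here are assumptions for illustration):

```python
def overall_quality(ratings):
    """Average each student's "helpfulness" and "clarity" scores, then
    average across students -- the two-category scheme described above.
    `ratings` is a list of (helpfulness, clarity) pairs on a 1-5 scale."""
    per_student = [(h + c) / 2 for h, c in ratings]
    return round(sum(per_student) / len(per_student), 1)

# Three hypothetical student ratings of (helpfulness, clarity):
print(overall_quality([(4, 5), (3, 4), (5, 5)]))  # → 4.3
```

Because averaging is linear, averaging per student first and then across students gives the same result as averaging each category separately and then combining the two.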

Ms. Bleske-Rechek and her co-author, Amber Fritsch, a student at Eau Claire, described their study in “Student Consensus on RateMyProfessors.com,” a paper published this month in the journal Practical Assessment, Research & Evaluation.

In their study, they probed the reliability of the site’s ratings by focusing on the level of consensus among students for 366 instructors at their state university, each of whom had at least 10 evaluations.

The idea is that, if students rate professors based on idiosyncratic personal reactions—to a rude comment made in class, say—then it should take a lot of posts to reach a consensus. By contrast, if students are consistent in their ratings, then a consensus should emerge with a small number of evaluations.
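The statistical intuition here can be checked with a small simulation. In this hypothetical sketch, each rating is the professor's underlying quality plus rater noise (clamped to the 1-5 scale); idiosyncratic reactions correspond to high noise, consistency to low noise. The quality values and noise levels below are illustrative assumptions, not data from the study:

```python
import random
import statistics

def mean_rating_error(true_quality, rater_noise, n_raters, trials=2000, seed=0):
    """Average distance between the mean of n_raters simulated ratings and
    the professor's underlying quality, over many simulated professors."""
    rng = random.Random(seed)
    errors = []
    for _ in range(trials):
        # Each rating = true quality + Gaussian rater noise, clamped to 1-5.
        ratings = [min(5.0, max(1.0, rng.gauss(true_quality, rater_noise)))
                   for _ in range(n_raters)]
        errors.append(abs(statistics.mean(ratings) - true_quality))
    return statistics.mean(errors)

# Consistent raters (low noise) converge on the underlying score with few
# posts; idiosyncratic raters (high noise) need many more evaluations.
for noise in (0.5, 2.0):
    for n in (5, 10, 50):
        err = mean_rating_error(3.8, noise, n)
        print(f"noise={noise}  n={n:2d}  mean error={err:.2f}")
```

Running this shows the error shrinking quickly with a handful of raters when noise is low, which is the pattern the study treats as evidence of genuine consensus.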

Earlier studies of traditional paper-and-pencil evaluations have documented significant consensus with as few as 25 raters, Ms. Bleske-Rechek says. That’s one rationale for using online evaluations; people argue that you will get the same distribution of responses even if not everyone fills out the form.

Will academics come to love it? Unlikely. Will it replace more formal evaluation of teaching quality? I think not. Perhaps there is some way to go with this yet.