Evaluating University Internationalisation

On the benefits of evaluating international activity


A nice short article by Eva Egron-Polak, Secretary General of the International Association of Universities (IAU), published by the Academic Cooperation Association.

Egron-Polak argues that the current emphasis on evaluating internationalisation is beneficial: it means that international activity is being taken seriously and that all kinds of such activity are being fully and properly scrutinised. It also means that universities are continuing, quite properly, to debate their approach to internationalisation, in other words, to ask ‘why are we doing this?’.

Although she acknowledges that the terminology is difficult here (internationalisation has as many definitions as there are institutions), Egron-Polak argues that there is real value in the kind of assessment undertaken by the IAU through its Internationalization Strategies Advisory Service (ISAS), where “the aim is to know whether or not the internationalisation goals are being achieved; and if we fall short of that, why this is the case, and what is required to redress the situation”.

The overall outcomes of the ISAS programme are quite interesting:

And, despite the vastly dissimilar contextual realities in each university, each ISAS project still confirmed that the dominant understanding of internationalisation of higher education remains relatively narrow or only partial. Consequently, internationalisation tends to be implemented in a limited manner. And when institutions embark on an assessment, they are likely to focus on just a few, basic aspects, using a limited set of (usually quantitative) indicators, such as the number of international students on campus, the number of exchange partnerships, the teaching of foreign languages and the hosting of visitors from abroad. Despite the clear importance of these indicators of internationalisation, are they really a mark that the goals of internationalisation have been achieved? How much do they tell us about the impact of these actions on the learning that takes place? How well can the academic community reply to the ‘why’ questions that can be raised about these actions, particularly when they require institutional investment?

So it appears that many institutions still have quite a long way to go in developing a more comprehensive conceptualisation of internationalisation. This seems to me rather disappointing, but perhaps not entirely surprising: it takes time to move beyond the basic issues of student numbers and exchange agreements, so it is probably inevitable that some universities will be further down the road than others. In all cases, though, stepping back and asking the ‘why’ questions about different international activities does seem sensible.


Researchers Rate RateMyProfessors

Rating RateMyProfessors

Surprising news from The Chronicle of Higher Education on the possible utility of RateMyProfessors. The rating site, which is rather more widely used in the USA than in the UK, is generally regarded by universities with, at best, deep scepticism and, more likely, downright hostility. However, there is a different view:

new research out of the University of Wisconsin at Eau Claire suggests the popular service is a more useful barometer of instructor quality than you might think, at least in the aggregate. And the study, the latest of several indicating RateMyProfessors should not be dismissed, raises questions about how universities should deal with a site whose ratings have been factored into Forbes magazine’s college rankings and apparently even into some universities’ personnel evaluations.

“There is the possibility that people may feel legitimized to use the information in potentially dangerous ways,” says April Bleske-Rechek, an associate professor of psychology at Eau Claire, who is a co-author of the new study. They might, for example, give too much weight to comments on the site in deciding whether to hire someone or grant the person tenure.

RateMyProfessors, which debuted in 1999 and boasts over 10 million student-produced comments and ratings, calculates an instructor’s quality by averaging how the site’s users score the professor in two categories, “helpfulness” and “clarity.”

Ms. Bleske-Rechek and her co-author, Amber Fritsch, a student at Eau Claire, described their study in “Student Consensus on RateMyProfessors.com,” a paper published this month in the journal Practical Assessment, Research & Evaluation.

In their study, they probed the reliability of the site’s ratings by focusing on the level of consensus among students for 366 instructors at their state university, each of whom had at least 10 evaluations.

The idea is that, if students rate professors based on idiosyncratic personal reactions—to a rude comment made in class, say—then it should take a lot of posts to reach a consensus. By contrast, if students are consistent in their ratings, then a consensus should emerge with a small number of evaluations.

Earlier studies of traditional paper-and-pencil evaluations have documented significant consensus with as few as 25 raters, Ms. Bleske-Rechek says. That’s one rationale for using online evaluations; people argue that you will get the same distribution of responses even if not everyone fills out the form.

Will academics come to love it? Unlikely. Will it replace more formal evaluation of teaching quality? I think not. Perhaps there is some way to go with this yet.