Auditing University Rankings

Holding league tables to account?

The Chronicle reports that there are to be some new rules to govern university league tables. The proposals were announced recently in Paris at the Unesco Global Forum, “Rankings and Accountability in Higher Education: Uses and Misuses.” This Forum, which gave voice to many concerns about ranking methodologies, concluded with the adoption of a set of “Audit Rules”:

The new Ranking Audit Rules were adopted by the executive committee of the International Ranking Expert Group’s (or IREG’s) Observatory on Ranking and Excellence, which announced that it was working on an evaluation system, or audit, at its meeting last fall in Berlin.

“The purpose of an audit, conducted by independent academic teams, will be to verify if a ranking under review was done professionally, and observes good practices, providing students, their parents and employers with information allowing them to compare and assess programs offered by higher-education institutions,” according to the ranking group’s press release.

Rankings will be reviewed on a voluntary basis and any rankings organization can ask to be audited. Those that pass what the group describes as its “robust evaluation” will be able to certify their ranking system as “IREG approved.” The new audit system, for which the first results will be published this fall, is intended to “enhance the transparency of rankings, give users of rankings a tool to identify trustworthy rankings, and improve the quality of rankings.”

So, will the league table compilers submit themselves to this regime? And will the designation “IREG approved” have any currency?


An alternative global ranking of universities?

European project launched to develop a new international league table

Global Higher Ed reports on the European Commission’s decision to award a one-million-euro tender to a consortium of institutions to develop and test a global ranking of universities:


The successful bidder, the CHERPA network (the Consortium for Higher Education and Research Performance Assessment), is charged with developing a ranking system to overcome what the European Commission regards as the limitations of the Shanghai Jiao Tong and QS-Times Higher Education schemes. The final product is to be launched in 2011.

CHERPA comprises leading European institutions in the field, all of which have developed and offered rather different approaches to ranking over the past few years.

But will it fly as an alternative?

IREG, the International Observatory on Rankings, reports the details:

The European ranking system will be independent, “robust” and measure higher education’s core functions of research, teaching and outreach, says the tender’s terms of reference. It will cover all types of higher education institutions in and outside Europe – particularly in North America, Asia and Australia – and will enable comparisons and benchmarking of similar institutions at the institutional and field levels.

The basic approach underlying the project is to compare only institutions which are similar and comparable in terms of their missions and structures. Therefore the project is closely linked to the idea of a European classification (“mapping”) of higher education institutions developed by CHEPS. The feasibility study will include focused rankings on particular aspects of higher education at the institutional level (e.g., internationalization and regional engagement) on the one hand, and two field-based rankings for business and engineering programmes on the other hand.

The project will help institutions better position themselves and improve their development strategies, quality and performance. It will enable stakeholders, especially students, to make informed choices between institutions and programmes – which existing rankings do not do because they focus only on research and entire institutions.

The field-based rankings will each focus on a particular type of institution and will develop and test a set of indicators appropriate to these institutions. The rankings will be multi-dimensional and will – like the CHE ranking – use a grouping approach rather than simplistic league tables. In contrast to existing global rankings, the design will compare not only the research performance of institutions but will include teaching & learning as well as other aspects of university performance.

It will be interesting to see the outputs of this work, but it will be a huge challenge for the new model to become a credible alternative to the SJTU and THE world rankings.