Some problems with academic standards and comparability

HEPI has recently published an interesting brief report by Professor Roger Brown on the comparability of academic standards in higher education. Whilst there is periodic and reasonably predictable media interest in university standards, similar to the annual panic over the alleged decline in A level standards every August, academic standards remain one of the most misunderstood concepts in higher education. This lack of definitional clarity means that debates about standards are characterised by misconceptions and muddled thinking.

The HEPI report represents an attempt to address this problem. It is also a response to the 2009 IUSS Select Committee report which offered some staggeringly unhelpful and misinformed observations on universities but was also memorable for the challenge to the Vice-Chancellors of Oxford and Oxford Brookes Universities to compare the standards of degrees at their institutions.

When we took oral evidence, we asked the Vice-Chancellors of Oxford Brookes University and the University of Oxford whether upper seconds in history from their respective universities were equivalent. Professor Beer, Vice-Chancellor of Oxford Brookes, replied:

It depends what you mean by equivalent. I am sorry to quibble around the word but is it worth the same is a question that is weighted with too many social complexities. In terms of the way in which quality and standards are managed in the university I have every confidence that a 2:1 in history from Oxford Brookes is of a nationally recognised standard.

When asked the same question Dr Hood, Vice-Chancellor of the University of Oxford, responded:

We teach in very different ways between the two institutions and I think our curricula are different between the two institutions, so the question really is are we applying a consistent standard in assessing our students as to firsts, 2:1s, 2:2s et cetera? What I want to say in that respect is simply this, that we use external examiners to moderate our examination processes in all of our disciplinary areas at Oxford, and we take that external examination assessment very, very seriously. The external examiners’ reports after each round are submitted through our faculty boards, they are assessed and considered by the faculty boards, they are then assessed at the divisional board level and by the educational committee of the university. This is a process that goes on round the clock annually, so we would be comfortable that our degree classifications are satisfying an expectation of national norms.(1)

This attempt to sustain the really rather extraordinary proposition that all degrees represent the same standard of achievement by students regardless of the context or inputs did higher education no favours. The Vice-Chancellors and Roger Brown argue that the issue is not about comparability and, despite the contortions at the Committee, it is difficult not to agree with that proposition.

But where do we go from there? Is it simply a free for all? Do we just let market forces rule (if they don’t already – it is an employer’s market)? Brown suggests a number of steps intended to ensure a minimum level of achievement by all graduates. These graduate threshold standards would be intended to offer reassurance to all stakeholders that anyone with a degree had achieved at least a minimum level. Whilst performance above the minimum would vary among students and across institutions, this would be fine because at least minimum standards would be assured. This approach is very reminiscent of the recommendations made in the 1990s by the Higher Education Quality Council’s Graduate Standards Programme (GSP)(2). The GSP sought to establish just such a set of minimum threshold standards and to codify a set of attributes which would encapsulate ‘graduateness’. Interesting, thorough and academic, the GSP proposals didn’t take off.

Perhaps they are back on the agenda though. As part of its approach Brown proposes a number of steps:

• Publish learning outcomes
• Refine benchmark standards
• Establish external examiner networks
• Improve assessment practice
• Replace honours degree classification
• Clarify definitional problems, eg with ‘comparability’

It is difficult not to feel a certain amount of sympathy for this approach, which rightly recognises the fundamental futility of seeking to establish comparability of academic standards. Sustaining what has been described as the ‘polite myth’ of standards comparability, ie that a 2:1 in English from Cambridge is of the same standard as a 2:1 in the same subject from a newly constituted institution, given the differences in every input measure, is simply not credible. Yet this is what the sector traditionally argues, and it is rightly criticised for doing so both in Brown’s report and, despite all of its other errors, in the IUSS Select Committee report.

Many of the problems in dealing with standards arise from difficulties with definition and Brown rightly identifies the need to address this. However, at the heart of the current QAA quality architecture is the notion that greater explicitness is required about standards in order to give all stakeholders confidence in the security of standards. Brown seems to accept this in arguing the need for learning outcomes and benchmark statements. But there is really no alternative to accepting the need to trust the judgement of professionals and the range of proxies devised over many years to assure the legitimacy of their collective decisions.

National Vocational Qualifications (or NVQs, of which Alison Wolf has acerbically commented that they are ‘a great idea for other people’s children’(3)) and the extreme developments of the US learning by objectives movement sought to impose maximum explicitness and thereby to minimise the need for judgement. But attempts such as these to provide comprehensive explanations to students in advance both mislead and misrepresent reality and may, ultimately, endanger the standards they purport to uphold – the nature of learning is just not amenable to such detailed pre-specification. Moreover, explicitness about standards cannot, in itself, convince anyone that those standards are being achieved. There is no necessary correlation between description and understanding; this is simply a variant of a naming fallacy. Standards are not, and cannot be, conceived of in an academic context as pure, absolute, Platonic forms but are relative, context-dependent and contingent.

Martin Wolf, although referring to the challenges of HE expansion, highlights a related problem about comparability:

‘if 50 per cent of the generation are to go to university and degree standards are to be the same everywhere, either everybody at Oxford or Cambridge gets a first or vast numbers of students must fail to get a degree altogether’. (4)

Whilst Brown suggests we should seek to sustain the notion of comparability of standards, at least at the threshold level, it is not clear that there is value in this, even if it is feasible. So, where do we go from here? There is huge difficulty in comparing standards over time, between subjects and between institutions. They are different. There is no point in pretending otherwise. Establishing a threshold is not impossible and may well be helpful, but it is questionable whether it is worth it in a system where over 60% of students receive first class or upper second class degrees.

(1) Innovation, Universities, Science and Skills Committee, Students and Universities, Eleventh Report of Session 2008–09, Volume I, HC 170-I, 2009
(2) Higher Education Quality Council (1997), Graduate Standards Programme Final Report, London: HEQC
(3) Wolf, A (2002), Does Education Matter?, London: Penguin.
(4) Wolf, M (2002), ‘How to save the British Universities’, Singer and Friedlander Lecture, delivered at Magdalen College, Oxford, 26 September 2002.


External Examiner review (and quality and standards)

Universities UK is to undertake a review of external examining

A press release from Universities UK gives some background to the recently announced review of external examiners:

In his keynote speech at the Universities UK Annual Conference, President Professor Steve Smith announced that UUK, together with GuildHE and in collaboration with agencies such as the Quality Assurance Agency (QAA) and the Higher Education Academy (HEA), would lead a UK-wide review of external examiner arrangements. This review will seek to ensure that the system remains robust, recommending any improvements which would continue to support the comparability of academic standards and meet future challenges.

The Group, which will be chaired by a Vice-Chancellor (to be announced) and include representatives from across the sector, will address various issues, including:

  • The need to develop Terms of Reference for the role, to support consistency
  • Reinforcing the specific role of external examiners in ensuring appropriate and comparable standards
  • Analysing the level of support given by institutions to external examining, both financial and professional
  • Current and future challenges and changing practice (such as modularisation) and their implications for external examining
  • Comparing the UK system with international practice

After 12 months, the Group will produce a report, highlighting the immediate short-term improvements, as well as longer term challenges and how these should be addressed.

Meanwhile, HEFCE has just announced the outcome of a study on quality and standards which has been picked up by the BBC. Its recommendations include:

  • a review is needed of publicly available information provided by higher education institutions (HEIs) to meet the needs of students, parents, advisers and professionals
  • a complete review of the external examiner system should be undertaken
  • the degree classification system should be improved so that it better reflects student achievement.

Looks like there will be a bit more work, then, beyond external examiners, but these do not seem to be hugely challenging tasks (indeed they have been on the agenda for some time) and they reflect the conclusions of the HEFCE report that “There is no systemic failure in quality and standards in English higher education (HE), but there are issues needing to be addressed”.

This UUK external examiner review, supported by the HEFCE study, represents a speedy response to the recent (truly dreadful) report of the IUSS Select Committee. The IUSS report recommends the implementation of one of the 1997 Dearing recommendations, rejected at the time, on the creation of a national system of external examiners. It is to be hoped that the UUK review arrives at something sensible. (For anyone with a longish memory on these things it feels a bit like 1994-95 again and the Graduate Standards Programme and its reviews of external examining.)

Government needs to help league table compilers

The IUSS Committee’s recent report on students and universities is a most extraordinary document in all sorts of ways. One of the more entertaining propositions relates to university league tables, where the Committee accepts the existence of league tables (wisely, you might argue) and acknowledges the work that HEFCE has recently published. However, its take on such tables is somewhat different from that of many others, in that it suggests that as much data as possible be published in a way which facilitates the creation of league tables:

In our view, it is a case of acknowledging that league tables are a fact of life and we welcome the interest that HEFCE has taken in league tables and their impact on the higher education sector. We have not carried out an exhaustive examination of league tables but on the basis of the evidence we received we offer the following views, conclusions and recommendations as a contribution to the debate on league tables which HEFCE has sought to stimulate and to improve the value of the tables to, and usefulness for, students. We conclude that league tables are a permanent fixture and recommend that the Government seek to ensure that as much information is available as possible from bodies such as HEFCE and HESA, to make the data they contain meaningful, accurate and comparable. Where there are shortcomings in the material available we consider that the Government should explore filling the gap. We give two examples. First, the results from the National Student Survey are produced in a format which can be, and is, incorporated into league tables. It appears to us therefore that additional information or factors taken into account in the National Student Survey would flow through to, and assist those consulting, league tables. To assist people applying to higher education we recommend that the Government seek to expand the National Student Survey to incorporate factors which play a significant part in prospective applicants’ decisions— for example, the extent to which institutions encourage students to engage in non-curricula activities and work experience and offer careers advice. [Para 104]

Not only therefore is it proposed that current data be modified to make the league table compilers’ work easier, but that they should be provided with additional information where it is lacking. Thus:

Second, Professor Driscoll from Middlesex University considered that league tables neglected “the contribution that universities that have focused on widening participation, like Middlesex, make to raising skills and educational levels in this country”. In other words, the National Student Survey as presently constituted does not assess the “value added” offered by individual institutions. We recommend that the Government produce a metric to measure higher education institutions’ contribution to widening participation, use the metric to measure the contribution made by institutions and publish the results in a form which could be incorporated into university league tables. [Para 105]

League table compilers have struggled with this one for some time and will therefore appreciate such kind assistance from government.