Assessing research excellence in the Humanities – an Australian approach

Posted on 11 April 2010

When asked to explain what a librarian does, I've sometimes come back with the reply that we make lists: lists that facilitate access to the collections we curate and preserve. Other colleagues may balk at what might seem a blunt summation of our chief function and point to user education, or collection development and preservation, as more illustrative of our mission. Yet behind all these functions lies the goal of access to the stuff we gather and preserve, and to do that we make inventories of what we hold. The methods and standards we deploy to create those lists are defined in the basic job skill of cataloguing, whether that means metadata creation or scholarly bibliography.


The list also has another function beyond being a primary facilitator of access. Any exercise in collection development would be lost without it: you need to know what you have before you decide what's needed. It is therefore no surprise that an exercise like the Australian Research Council's ERA initiative, with a list of more than 20,000 ranked journals, would draw some notice. The initiative seeks to examine where Australian researchers are publishing and to rank the journals they choose to publish in. In effect it is a variation on the contentious Journal Impact Factor, specific to Australian-published research, with metrics based on citation analysis (via SCOPUS) and post-publication peer review (via a three-stage process between Research Evaluation Committee (REC) members and peer reviewers). The sustainability of a secondary ERA-style peer-review process may be tricky to negotiate; nonetheless, it establishes a precedent, and more efficient models may emerge.
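To make the citation side of that concrete, here is a minimal sketch of the kind of journal-level indicator such an exercise might compute: citations per paper benchmarked against a field's world average. The fields, counts and benchmark figures are invented for illustration only; they are not ERA or SCOPUS data, and ERA's actual methodology is considerably more involved.

```python
# Hedged sketch: a crude citation-based journal indicator of the kind an
# ERA-style exercise relies on. All numbers below are made up for illustration.

from dataclasses import dataclass

@dataclass
class Journal:
    title: str
    field: str
    papers: int       # papers published in the census period
    citations: int    # citations received by those papers

# Hypothetical world benchmarks: average citations per paper by field.
WORLD_BENCHMARK = {"History": 1.8, "Earth Sciences": 6.5}

def relative_citation_impact(journal: Journal) -> float:
    """Citations per paper, normalised by the field's world average (1.0 = world standard)."""
    cites_per_paper = journal.citations / journal.papers
    return cites_per_paper / WORLD_BENCHMARK[journal.field]

if __name__ == "__main__":
    j = Journal("Journal of Antipodean History", "History", papers=40, citations=96)
    print(f"{j.title}: {relative_citation_impact(j):.2f} x world standard")
```

Even this toy version shows why the sciences sit more comfortably with citation metrics than the humanities: the "world benchmark" per field only means something where citation behaviour is dense and fast, which is precisely what the HCA cluster lacks.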


The objectives of the exercise are to:
1. Establish an evaluation framework that gives government, industry, business and the wider community assurance of the excellence of research conducted in Australia’s institutions;

2. Provide a national stocktake of discipline-level areas of research strength and areas where there is opportunity for development in Australia’s higher education institutions;

3. Identify excellence across the full spectrum of research performance;

4. Identify emerging research areas and opportunities for further development; and

5. Allow for comparisons of Australia’s research nationally and internationally for all discipline areas.
The project identifies eight research clusters, with specific disciplines identified by Field of Research (FoR) codes.


The initiative deserves commendation for bravery in choosing Humanities and Creative Arts (HCA) along with Physical, Chemical and Earth Sciences (PCE) as the trial research clusters. The latter could draw on existing citation analyses that are neither suitable for, nor indeed available to, the former. Australian higher-education institutions provided data to ERA via SEER, an interface developed on top of the existing Australian institutional repository infrastructure.

Looking at the humanities in isolation, if we take the monograph as the 'gold standard' of humanities research, a search in Australian Research Online (an aggregation of open scholarly research outputs across Australia) returns 421 items of type 'book' in the date range 2008–2010. Within these results, seven are listed under the subject heading 'arts' (sadly, none of the records has a link to an open copy). I'm assuming that these items were submitted to ERA and that a hard copy of each monograph was examined as part of the peer-review analysis. I say brave because I imagine there is as much scepticism within the Humanities and Creative Arts research cluster in Australia as to the true object of the ERA exercise as there would be among its Irish counterpart.
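For readers curious about how that kind of filtered count might be run against an aggregator, here is a hedged sketch of harvesting Dublin Core records over OAI-PMH and keeping only book-type items in a date range. The endpoint URL is a placeholder, not the actual Australian Research Online interface, and a real harvest would also have to follow resumption tokens.

```python
# Hedged sketch: filtering an OAI-PMH Dublin Core feed for book-type records in
# a date range, the kind of query behind the figure quoted above. The endpoint
# is a placeholder; only the first page of results is handled (no resumption tokens).

import urllib.request
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

ENDPOINT = "https://example.org/oai"  # placeholder, not the real ARO endpoint

def harvest_books(endpoint: str, start: int = 2008, end: int = 2010):
    """Yield (title, date) for Dublin Core records typed 'book' within the year range."""
    url = f"{endpoint}?verb=ListRecords&metadataPrefix=oai_dc"
    with urllib.request.urlopen(url) as resp:
        tree = ET.parse(resp)
    for record in tree.iter(f"{OAI}record"):
        types = [t.text.lower() for t in record.iter(f"{DC}type") if t.text]
        dates = [d.text[:4] for d in record.iter(f"{DC}date") if d.text]
        titles = [t.text for t in record.iter(f"{DC}title") if t.text]
        if "book" in types and any(start <= int(y) <= end for y in dates if y.isdigit()):
            yield titles[0] if titles else "(untitled)", dates[0] if dates else "?"

if __name__ == "__main__":
    for title, year in harvest_books(ENDPOINT):
        print(year, title)
```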

By far the most important outcome of this process is that it has established an independent, national metric that tries to uncover quality in research. Based on citation analysis and independent peer review, the trial established a tiered journal assessment scale:

A* Typically an A* journal would be one of the best in its field or subfield in which to publish and would typically cover the entire field/subfield. Virtually all papers they publish will be of a very high quality. These are journals where most of the work is important (it will really shape the field) and where researchers boast about getting accepted. Acceptance rates would typically be low and the editorial board would be dominated by field leaders, including many from top institutions.

A The majority of papers in a Tier A journal will be of very high quality. Publishing in an A journal would enhance the author's standing, showing they have real engagement with the global research community and that they have something to say about problems of some significance. Typical signs of an A journal are low acceptance rates and an editorial board that includes a reasonable fraction of well-known researchers from top institutions.

B Tier B covers journals with a solid, though not outstanding, reputation. Generally, in a Tier B journal, one would expect only a few papers of very high quality. They are often important outlets for the work of PhD students and early career researchers. Typical examples would be regional journals with high acceptance rates, and editorial boards that have few leading researchers from top international institutions.

C Tier C includes quality, peer-reviewed journals that do not meet the criteria of the higher tiers.

These journal tiers support a five-point ERA Rating Scale (a sketch of how a unit's tier profile might be tallied follows the scale):

5. The Unit of Evaluation profile is characterised by evidence of outstanding performance well above world standard presented by the suite of indicators used for evaluation.

4. The Unit of Evaluation profile is characterised by evidence of performance above world standard presented by the suite of indicators used for evaluation.

3. The Unit of Evaluation profile is characterised by evidence of average performance at world standard presented by the suite of indicators used for evaluation.

2. The Unit of Evaluation profile is characterised by evidence of performance below world standard presented by the suite of indicators used for evaluation.

1. The Unit of Evaluation profile is characterised by evidence of performance well below world standard presented by the suite of indicators used for evaluation.

NA. Not assessed due to low volume. The number of research outputs does not meet the volume threshold standard for evaluation in ERA.
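As a small illustration of how the journal tiers might roll up into a profile for a Unit of Evaluation, here is a hedged sketch that tallies the share of a unit's journal articles falling in each tier. The journal-to-tier mapping and the article list are invented; ERA's actual evaluation combines such a profile with other indicators and REC judgement rather than reading a rating straight off the percentages.

```python
# Hedged sketch: tallying a Unit of Evaluation's journal-tier profile.
# The journal rankings and article list below are invented, not ERA data.

from collections import Counter

# Hypothetical slice of the ranked journal list: title -> tier.
JOURNAL_TIER = {
    "Journal of Antipodean History": "A*",
    "Regional Studies Quarterly": "B",
    "Occasional Papers in the Arts": "C",
}

def tier_profile(articles: list[str]) -> dict[str, float]:
    """Return the percentage of articles in each tier (A*, A, B, C, unranked)."""
    counts = Counter(JOURNAL_TIER.get(journal, "unranked") for journal in articles)
    total = sum(counts.values())
    return {tier: 100 * counts.get(tier, 0) / total
            for tier in ("A*", "A", "B", "C", "unranked")}

if __name__ == "__main__":
    unit_articles = [
        "Journal of Antipodean History",
        "Journal of Antipodean History",
        "Regional Studies Quarterly",
        "Occasional Papers in the Arts",
    ]
    for tier, share in tier_profile(unit_articles).items():
        print(f"{tier:>8}: {share:5.1f}%")
```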

This is an important and bold initiative, in which input from Australian libraries is a vital component supporting the institutional stakeholders. It's not pretty and it's not without its critics, but we have to take notice.


Searchable database of ERA ranked journals: http://lamp.infosys.deakin.edu.au/era/index.php

Physical, Chemical and Earth Sciences (PCE) and Humanities and Creative Arts (HCA) Clusters Evaluation Guidelines for the 2009 ERA Trial (PDF): http://www.arc.gov.au/pdf/ERA_Eval_Guide.pdf

Physical, Chemical and Earth Sciences (PCE) and Humanities and Creative Arts (HCA) Clusters ERA Indicator Benchmark Methodology (PDF): http://www.arc.gov.au/pdf/ERA_Indicator_Bench.pdf
