33. To enable institutions to be advised in advance of the funding weights to be employed by
each funding council, to prevent grade inflation and to guarantee the integrity of ratings, we propose
that panels should be given guidelines on the proportions of three star, two star and one star ratings
which should be awarded in the absence of evidence that the subject outperformed other subjects
when measured against international benchmarks. For example, the funding councils might decide
that x% of the work submitted should be rated as three star work. These guidelines should be the
same for each unit of assessment. A moderation process would be built into the panel structure to
establish whether a panel should be able to depart from the guidelines.
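The guideline check described in paragraph 33 amounts to a simple comparison of awarded proportions against guideline proportions. The sketch below is purely hypothetical: the tolerance, the guideline figures (standing in for the "x%" above) and the panel figures are all invented for illustration, and the real values would be set by the funding councils.

```python
# Hypothetical sketch of the proposed guideline check. All numbers are
# invented examples; the actual guideline proportions would be decided
# by the funding councils.

def needs_moderation(awarded, guideline, tolerance=0.05):
    """Return True if any awarded proportion departs from the guideline
    proportion by more than the tolerance (in either direction)."""
    return any(abs(awarded[band] - guideline[band]) > tolerance
               for band in guideline)

guideline = {"three star": 0.10, "two star": 0.30, "one star": 0.60}
panel_a = {"three star": 0.12, "two star": 0.29, "one star": 0.59}  # within tolerance
panel_b = {"three star": 0.25, "two star": 0.35, "one star": 0.40}  # more generous

print(needs_moderation(panel_a, guideline))  # False
print(needs_moderation(panel_b, guideline))  # True: refer for moderation
```

A panel whose ratings fell outside the tolerance would, under the proposal, have its grades confirmed through moderation rather than rejected outright.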
34. This would enable the funding councils to provide something for which there is an
overwhelming demand: clear information on the relationship between the scores achieved in the
assessment and the impact upon funding (see figure 4). It is clearly desirable, though not absolutely
necessary, that the weights employed by each funding council be consistent. Funding weights will
undoubtedly vary between subjects, there being a rationale for greater selectivity in subjects where
there is already a large amount of research of the highest class and where the costs of remaining
competitive are particularly high.
a. The output of the Research Quality Assessment should be a ‘quality profile’ indicating
the quantum of ‘one star’, ‘two star’ and ‘three star’ research in each submission. It will
not be the role of the assessment to reduce this profile to summary metrics or grades.
b. As a matter of principle, star ratings would not be given to named individuals, nor would
the profile be published if the submission was sufficiently small that individual
performance could be inferred from it.
c. Panels would be given guidelines on expected proportions of one star, two star and
three star ratings. These proportions should be the same for each unit of assessment. If
they awarded grades which were more or less generous than anticipated in the
guidelines, these grades would have to be confirmed through moderation.
d. The funding councils should provide institutions, in advance of the assessment, with
details of the relative value, in funding terms, of one star, two star and three star
research, and of research fundable through the Research Capacity Assessment. These ratios
might vary between disciplines.
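The relationship between a quality profile and funding described in paragraph 34 can be illustrated as a weighted sum of the volumes at each star rating. The weights and the example profile below are entirely hypothetical; the actual ratios would be set by each funding council and, as recommendation d notes, might vary between disciplines.

```python
# Hypothetical illustration (not part of the proposals): how a funding
# council might convert a quality profile into a weighted research
# volume for funding purposes. The weights and profile are invented.

def quality_weighted_volume(profile, weights):
    """Combine a quality profile (volume of research per star rating,
    e.g. in FTE staff) with funding weights to give a single weighted
    research volume."""
    return sum(profile.get(stars, 0.0) * weight
               for stars, weight in weights.items())

# Illustrative weights: three star research funded at four times the
# rate of one star research (purely hypothetical ratios).
weights = {"three star": 4.0, "two star": 2.0, "one star": 1.0}

# Illustrative quality profile for one submission (volumes in FTE staff).
profile = {"three star": 5.0, "two star": 10.0, "one star": 5.0}

volume = quality_weighted_volume(profile, weights)
print(volume)  # 5*4 + 10*2 + 5*1 = 45.0
```

Publishing such ratios in advance would give institutions the clear information on the funding consequences of assessment scores for which paragraph 34 identifies an overwhelming demand.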
A panel structure designed to ensure consistency
35. Even if panels are limited in the number of ‘star ratings’ they can award, there is still a need
for a mechanism to ensure consistency of practice in order to:
• ensure that panel criteria are allowed to diverge where there is a rationale for them to
do so, and are consistent where this has a bearing upon the integrity of the assessment
• ensure that panels adhere to their own criteria
• enable the process as a whole to provide feedback to the funding councils on areas of
research which are stronger or weaker than the norm in relation to international benchmarks.
36. We therefore propose a hierarchical panel structure, in which the recommendations of each
sub-panel would be signed off by a higher level body with a remit to ensure consistency of practice.
Our proposed structure is illustrated in figures 5 and 6.
a. There should be between 20 and 25 unit of assessment panels, supported by around 60
sub-panels. Panels and sub-panels should be supported by colleges of assessors with
experience of working in designated multidisciplinary ‘thematic’ areas.
b. Each panel should have a chair and a moderator who would sit on each sub-panel. The
role of the moderator would be to ensure consistency of practice across the sub-panels
within the unit of assessment.
c. Each panel should include a number of non-UK based researchers with experience of
the UK research system.
d. The moderators of adjacent panels should meet in five or six ‘super-panels’ whose role
would be to ensure consistency of practice between panels. These ‘super-panels’ should
be chaired by senior moderators: individuals with extensive experience of research assessment.
Respecting disciplinary differences
37. Whilst we are convinced that there is a need to ensure that panels’ practices are consistent
(or equivalent) where this is appropriate, we consider it equally important to define those aspects of
the assessment where greater sensitivity to disciplinary differences would enhance the reliability of
the results. We have therefore brought forward proposals which we believe will enable panels to
assess work in their fields in a way which is both more sensitive and more consistent.
a. The rule that each researcher may only submit up to four items of research output
should be abolished. Research Quality Assessment panels should have the freedom to
define their own limits on the number and/or size of research outputs associated with
each researcher or group.
b. Research Quality Assessment panels should ensure that their criteria statements enable
them to guarantee that practice-based and applicable research are assessed according
to criteria which reflect the characteristics of excellence in those types of research.
38. We propose to work alongside the research communities to develop a set of discipline-
specific performance indicators which could form the basis of the indicative bandings. These
bandings would be produced at least one year before the assessment. They would inform
institutions’ key strategic choice – which subject areas to submit for RQA and which for RCA. They
would also inform the decisions of the panels but would not bind the panels in any way.
39. The number of bands would be allowed to vary between subjects, reflecting the extent to
which different subject communities were prepared to accept performance indicators as
reliable indicators of quality.
a. The funding councils should work alongside the subject communities and the research
councils to develop discipline-specific performance indicators.
b. Performance against these indicators should be calculated a year prior to the exercise
and institutions advised of their performance relative to other institutions.
c. The weight placed upon these indicators, as well as their nature, should be allowed to
vary between panels.
40. It is important to define what is assessable as well as how it is assessed. As noted above,
institutions submitting to RQA would forfeit the right to submit staff from that area to the RCA – and
any funding consequent upon RCA results. The RAE defines the population of eligible researchers
in a unit of assessment, and uses this information to publish the proportion of staff submitted. Under
our proposals it will be important to ensure that this information is reliable, and to minimise the
scope for artificially defining less research active staff as belonging to a unit of assessment with
which they have little to do. This may well require stronger audit procedures.
41. The abolition of grades should lower the stakes for institutions. Top-rated researchers would
attract funding even if the average score was depressed by the inclusion of others in the return. We
are confident that this would reduce ‘games-playing’.
42. Nevertheless it remains important that RQA results present an accurate picture of the
strength of a department whilst at the same time providing the flexibility needed to protect teaching
staff from being pressurised into prioritising research. To this end, we propose that at least 80% of
staff in any sub-unit of assessment must be included in any RQA return.
43. The consequences of this are much less significant than they would have been had an 80%
minimum been introduced in RAE2001. In that exercise, the presence of less well regarded
researchers in the submission could depress the grade awarded. It would therefore affect the
recognition – and potentially the funding – received by the strong researchers present in the
submission. With the introduction of the quality profile this will not happen. The amount of high
quality research in the submission will be clearly visible, and we anticipate that the funding councils
will wish to reward this irrespective of the amount of less excellent work contained in the same
submission.
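The contrast drawn in paragraph 43 between a single averaged grade and the quality profile can be made concrete with invented numbers; the staffing figures and scores below are hypothetical and used only to show the averaging effect.

```python
# Hypothetical comparison (invented numbers): including weaker work in a
# submission halves an averaged grade, but leaves the visible volume of
# three star research in a quality profile untouched.

strong_staff = [3, 3, 3, 3, 3]   # five researchers rated three star
weak_staff = [0, 0, 0, 0, 0]     # five researchers with unrated work
submission = strong_staff + weak_staff

# Single-grade system: the average is dragged down by the weaker return.
average_score = sum(submission) / len(submission)
print(average_score)  # 1.5, versus 3.0 for the strong staff alone

# Quality profile: the amount of three star work is reported directly
# and is unaffected by the inclusion of the weaker work.
three_star_volume = submission.count(3)
print(three_star_volume)  # 5
```

This is why, under a quality profile, an 80% inclusion rule need not penalise the strong researchers in a submission.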
44. We also consider it to be extremely important to encourage institutions to make collaborative
submissions. We advise the funding councils to do everything possible to facilitate this.
45. It should be possible for research groups to be submitted as a single entity. This would enable
qualified staff whose research activity has contributed to important work for which they have not
been formally recognised to be included in the assessment.
a. Where an institution submits to Research Quality Assessment in a sub-unit of
assessment, all staff in that sub-unit should become ineligible for the Research Capacity
Assessment, even if they are not included in the Research Quality Assessment.
b. The funding councils should establish and promote a facility for work to be submitted as
the output of a group rather than an individual where appropriate.
c. The funding councils should consider what measures could be taken to make joint
submissions more straightforward for institutions.
d. Where an institution submits a sub-unit of assessment for Research Quality Assessment
no fewer than 80% of the qualified staff contracted to undertake research within the sub-
unit of assessment must be included in the submission.
e. All staff eligible to apply for grants from the research councils should be eligible for
submission to Research Quality Assessment.
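Recommendation d above (the 80% inclusion rule) amounts to a simple threshold test. The sketch below uses invented head-counts purely for illustration; how "qualified staff contracted to undertake research" would be counted in practice is a matter for the funding councils and their audit procedures.

```python
# Hypothetical check of the proposed 80% inclusion rule. Head-counts
# are invented examples.

def meets_staff_threshold(submitted, qualified, minimum=0.80):
    """Return True if the submitted head-count covers at least the
    required proportion of qualified research staff in the sub-unit
    of assessment."""
    if qualified == 0:
        return False
    return submitted / qualified >= minimum

print(meets_staff_threshold(16, 20))  # True  (exactly 80%)
print(meets_staff_threshold(15, 20))  # False (75%)
```

Because the quality profile reports the volume of excellent work directly, meeting this threshold by including less research-active staff would not depress the recognition given to the stronger researchers.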
46. In RAE2001 panels had the opportunity to consider statements on the research strategy and
environment underpinning each submission. Under our proposals, the elements contained within
that statement would be covered in the research competences assessment.
47. Notwithstanding the institutional competences assessment, we anticipate that panels would
benefit from receiving a research strategy statement. This would indicate the institution’s plans for
research at unit level. Panels would be able to see the institutional research strategy and ought to
be able to confirm whether the two documents were consistent with one another.
48. Panels would be asked to report on the strategies, indicating any they considered inadequate
or exemplary. It would be left to the funding councils to decide whether to act on these reports.
Each panel should consider a research strategy statement outlining the institution’s plans for
research at unit level.
Supporting emerging units
49. We have proposed that institutions be allowed to nominate a minimal number of emerging
units. They would have to demonstrate their commitment to developing research in these
departments to the level where it was genuinely competitive with top-rated research. The relevant
assessment panels would propose objective success criteria which would indicate that they were
catching up with leading departments. Progress against these criteria would be assessed at the mid-
point of the assessment period, by officers of the research assessment process.
There should be a facility for institutions to identify emerging units and a mechanism for
evaluating their progress after three years.
Links to other funding processes
50. We suggest to the funding councils that the results of the RQA could be used to identify
suitable candidates to compete for monies made available for the following purposes:
• ‘partnerships of excellence’, which would recognise the sharing of excellence with other
top-rated research units, and which are seen as a means of helping the funding councils to
promote such sharing
• third stream activities, to improve the interactions between HE and business, the public
sector and the wider community (which can be taken to include the voluntary sector and, of
course, society as a whole).
The funding councils should consider the extent to which data produced by the research
assessment process can be used to inform other funding processes, including third stream
funding and partnerships of excellence.
51. The recruitment of panel members was considered by institutions to be one of the less
transparent aspects of the exercise. In part, we believe this was because institutions did not engage
with recruitment as closely as with other parts of the process which were more salient to them. The
most transparent way to identify assessors would be to advertise each post. However, over 1,500
people contributed to the assessment of research in RAE2001, and the burden of open competition
for that number of people would, we believe, be excessive.
52. We therefore propose a package of measures intended to balance transparency with the
need to ensure that the burden is in proportion to the benefit obtained.
a. Job descriptions and person specifications should be produced for Research Quality
Assessment panel and sub-panel members, chairs and moderators as well as senior
moderators and the chair of the exercise. These should be published before steps are
taken to fill the posts.
b. Nominations of panel members should be sought from stakeholders in the same way as
in previous exercises.
c. The chairs’ and moderators’ posts for the main Research Quality Assessment panels
should be advertised, and candidates should be chosen by a selection panel, as should
the senior moderators.
d. Sub-panel chairs should be elected by the membership of the outgoing panels, from a
shortlist not necessarily confined to previous RAE panel members.
e. Panel members and sub-panel members should be chosen by sub-panel chairs, panel
chairs and moderators on the basis of their fit with the published job description and
person specification.
f. The funding councils should monitor and report upon the gender balance of sub-panel
members, sub-panel chairs, panel chairs, moderators and senior moderators.
53. The operational review of the RAE found that the RAE team itself performed admirably.
However, it could justly be said that the planning of the administrative support for RAE2001 failed to
anticipate the demands it would face.
54. The review’s conclusion is stark. Resources for RAE2001 were inadequate. Even if there
were no changes to the exercise, the central administration would require increased resources.
‘The pressures imposed by the timetable for the assessment phase, the workload on
key players (panel members and secretaries, the RAE team) and the demand for
several of the supporting services, ran a high risk of major disruption, though none
occurred. The same degree of dedication and commitment which all those involved
showed cannot be assumed for any similar further exercise. More staff (or funds to
outsource services) would be required; and all inputs should be realistically costed
and paid for.’
55. It follows that to provide an adequate and safe service to the same specification as RAE2001
would require more resources in real terms than the £5-6 million devoted to RAE2001.
56. Furthermore, we believe that there is scope to relieve the burden on the rest of the sector by
providing a more comprehensive service from the centre.
a. The research assessment administration should employ full-time panel secretaries who
would each work with several panels.
b. The senior moderators (see recommendation 6) should not be external to the RAE
administration in the same way as RAE panel chairs. They should be accountable for the
successful administration of the exercise as well as for its results, should be employed
by the funding councils, and should be in post at an early stage in the process.
‘An operational review of the 2001 RAE’ (Universitas 2003).
c. The funding councils ought to consider the burden imposed upon their staff and
resources by the need to support the RAE and ensure this is properly accounted for. In
some cases this may involve embedding functions within the RAE administration itself.
d. The funding councils should recognise that the cost of specialist advice is likely to be
greater in a future exercise than it was in RAE2001.
57. When the funding councils present their own proposals for research assessment for
consultation, these will need to be accompanied by a full assessment of their impact upon equality
of opportunity for all groups of staff, and the burden placed by the assessment upon institutions,
assessors and their own administrative capacity. The funding councils will also need to assure
themselves that proposals do not require panel members, the funding councils or their
employees to accept any unnecessary legal risks. There will, in addition, be a need to investigate
the behavioural consequences of our proposed reforms.
58. The strands of work mentioned above are necessary in order to enable the funding councils
to take fully informed decisions on the adoption of our recommendations once a full public
consultation has taken place. In addition, there are strands of work relating to the implementation of
the recommendations, which, while less relevant to that decision point, need to be progressed
urgently if, as some have suggested, the next assessment is to take place in 2007. In particular, the
funding councils may wish to move quickly to:
• identify units of assessment
• develop discipline-specific metrics
• develop templates for our proposed assessment of research competences.
The funding councils should undertake or commission further work in parallel with the consultation
on these proposals to ensure that proposals for research assessment taken as a whole:
a. do not create, encourage or facilitate discrimination on the grounds of age, sexual orientation,
political belief, disability, gender, race or religion
b. do not create any unnecessary legal risks for the funding councils or the panel members
c. do not create excessive or unnecessary burdens upon panel members, institutions or the
funding councils
d. are not likely to have behavioural effects which the funding councils consider unacceptable.
Additionally, the funding councils may wish to seek advice on the best ways of assessing
institutions’ policy and practice on equal opportunities, drawing upon the experience of others.
Interdependence of our proposals
59. Many of our proposals have been designed to complement one another. There is a risk that
an emphasis upon the impact of individual recommendations rather than the generality of the
proposals may lead to pressure to take forward a package which is incoherent. It is important
therefore that the implications of revisiting any one of our proposals for the feasibility of the
others are properly understood.
Should the funding councils ultimately decide to pursue some but not all of our
recommendations, members of the review team and the steering group should be
reconvened to advise on the feasibility of the revised package of reforms.
Figure 1: Sexennial research assessment
Figure 2: Institutional competences
Figure 3: Volume determination for funding
[Figure content: sexennial research assessment; collation of discipline-based metrics or other specific indicators; assessment of institutional competencies; strategy and metrics.]