Responsible research evaluation

Statement on Responsible Research Assessment

Overview

The University of Bristol has produced a Policy Statement and associated guidance on responsible research assessment, including the appropriate use of quantitative research metrics.

This Policy Statement builds on a number of prominent external initiatives in this area, including the San Francisco Declaration on Research Assessment (DORA), the Leiden Manifesto for Research Metrics and The Metric Tide report.

The Metric Tide report urged UK institutions to develop a statement of principles on the use of quantitative indicators in research management and assessment, where metrics should be considered in terms of:

  • Robustness - using the best available data.
  • Humility - recognising that quantitative evaluation can complement, but does not replace, expert assessment.
  • Transparency - keeping the collection of data and its analysis open to scrutiny.
  • Diversity - reflecting a multitude of research and researcher career paths.
  • Reflexivity - updating our use of metrics to take account of the effects that such measures have had.

These initiatives and the development of institutional policies are also supported or mandated by research funders in the UK (for example UKRI and the Wellcome Trust).

Our aim is to balance the benefits and limitations of metrics such as bibliometrics, to create a framework for responsible research assessment at the University of Bristol, and to suggest ways in which metrics can be used to deliver the ambitious vision for excellence in research and teaching embodied in the University of Bristol strategy.

Responsible use of metrics

We recognise that the University of Bristol is a dynamic and diverse university, and that no single metric or set of metrics can be applied universally across our institution. Many disciplines or departments do not use research metrics at all, because metrics are not appropriate in the context of their field. The University of Bristol recognises this and will not seek to impose the use of metrics in these cases.

This Policy Statement is deliberately broad and flexible to take account of the diversity of contexts, and is not intended to provide a comprehensive set of rules. To help put it into practice, we will provide an evolving set of guidance materials with more detailed discussion and examples of how these principles could be applied. The University of Bristol is committed to valuing research and researchers on their own merits, not on the basis of metrics.

Further, research 'excellence' and 'quality' are abstract concepts that are difficult to measure directly but are often inferred from metrics. Such superficial use of research metrics in evaluations can be misleading, and assessment can become unethical when metrics take precedence over expert judgement, because the complexities and nuances of research, and of a researcher's profile, cannot be fully quantified.

When metrics are applied irresponsibly in contexts such as hiring, promotion, and funding decisions, they can incentivise undesirable behaviours, such as chasing publications in journals with a high Journal Impact Factor (JIF) regardless of whether these are the most appropriate venues for publication, or discouraging the use of open research approaches such as preprints and data sharing.

Bibliometrics

Bibliometrics is a term describing the quantification of publications and their characteristics. It includes a range of approaches, such as the use of citation data to quantify the influence or impact of scholarly publications, and other approaches (known as altmetrics) that capture wider engagement across media, social media and other platforms.

When used in appropriate contexts, bibliometrics can provide valuable insights into aspects of research in some disciplines. However, bibliometrics are sometimes used uncritically, which can be problematic for researchers and research progress when used in inappropriate contexts.

For example, some bibliometrics have been commandeered for purposes beyond their original design; the JIF was originally developed to indicate a journal's average citations per article over a defined time period, but it is often used inappropriately as a proxy for the quality of individual articles. It is also important to recognise that some bibliometrics (e.g. the JIF) do not apply to some scholarly outputs (e.g. books and monographs).
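
To make this distinction concrete, here is a minimal sketch of the two-year JIF calculation, written in Python with invented figures for illustration. The JIF for year Y is the number of citations received in year Y by items published in years Y-1 and Y-2, divided by the number of citable items published in those two years:

    def journal_impact_factor(citations, citable_items):
        """Two-year JIF for year Y: citations received in year Y to items
        published in years Y-1 and Y-2, divided by the number of citable
        items published in those two years."""
        return citations / citable_items

    # Invented example: a journal published 100 citable items in 2022-23,
    # which received 250 citations during 2024.
    print(journal_impact_factor(250, 100))  # 2.5

Note that this is an average across a whole journal: citation distributions are typically highly skewed, with a few highly cited articles driving the mean, so the JIF says little about the citation performance, let alone the quality, of any individual article.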

Other quantitative metrics

Other quantitative metrics or indicators include grant income, numbers of postgraduate students or research staff, and so on. As with bibliometrics, these can provide useful information and insights, but they can also be misapplied. For example, grant income can reflect the ability to obtain competitive funding, but what is typical varies considerably across disciplines and across specific research questions or methodologies.

Moreover, grant income is better regarded as an input than an output: substantial grant income that does not lead to substantial knowledge generation (in the form of scientific insights, scholarly publications, impact, etc.) is arguably evidence of poor value for money or inefficiency. This illustrates why quantitative metrics should be used thoughtfully, in combination, and in a discipline-appropriate way.

Principles

The University of Bristol is committed to applying the following guiding principles where applicable (e.g. in hiring and promotion decisions):

  1. Quality, influence, and impact of research are abstract concepts that cannot be measured directly. There is no simple way to measure research quality, and quantitative approaches can only be interpreted as indirect proxies for it.
  2. Different fields have different perspectives of what characterises research quality, and different approaches for determining what constitutes a significant research output (for example, the relative importance of book chapters vs journal articles). All research outputs must be considered on their own merits, in an appropriate context that reflects the needs and diversity of research fields and outcomes, and takes into account cross- and multi-disciplinary working.
  3. Both quantitative and qualitative forms of research assessment have their benefits and limitations. Depending on the context, the value of different approaches must be considered and balanced. This is particularly important when dealing with a range of disciplines with different publication practices and citation norms. In fields where quantitative metrics are neither appropriate nor meaningful, the University of Bristol will not impose their use for assessment in that area.
  4. When making qualitative assessments, we should avoid making judgements based on external factors such as the reputation of authors, or of the journal or publisher of the work; the work itself is more important and must be considered on its own merits.
  5. Not all indicators are useful or informative, and none will suit all needs; moreover, metrics that are meaningful in some contexts can be misleading or meaningless in others. For example, in some fields or subfields, citation counts may give a rough indication of usage, but in others they are not useful at all.
  6. Avoid applying metrics to individual researchers, particularly metrics that do not account for individual variation or circumstances. For example, the h-index should not be used to directly compare individuals, because typical numbers of papers and citations differ dramatically among fields and at different career stages (see the sketch after this list).
  7. Ensure that metrics are applied at the correct scale for the subject of evaluation, and do not apply aggregate-level metrics to individual subjects, or vice versa (e.g. do not assess the quality of an individual paper based on the JIF of the journal in which it was published).
  8. Quantitative indicators should be selected from those that are widely used and easily understood to ensure that the process is transparent and they are being applied appropriately. Likewise, any quantitative goals or benchmarks must be open to scrutiny.
  9. If goals or benchmarks are expressed quantitatively, care should be taken to avoid the metric itself becoming the target of research activity at the expense of research quality itself.
  10. New and alternative metrics are continuously being developed to inform the reception, usage, and value of all types of research output. Any new or non-standard metric or indicator must be used and interpreted in keeping with the other principles listed here for more traditional metrics. Additionally, consider the sources and methods behind such metrics and whether they are vulnerable to being gamed, manipulated, or fabricated.
  11. Metrics (in particular bibliometrics) are available from a variety of services, with differing levels of coverage, quality and accuracy, and these aspects should be considered when selecting a source for data or metrics. Where necessary, such as in the evaluation of individual researchers, choose a source that allows records to be verified and curated to ensure records are comprehensive and accurate, or compare publication lists against data from University of Bristol systems.
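
As an illustration of principle 6, here is a minimal sketch of the h-index calculation, written in Python with invented citation counts. The same formula applied in fields with different citation norms produces systematically different values, which is why raw cross-field comparisons of individuals mislead:

    def h_index(citations):
        """h-index: the largest h such that h of the researcher's papers
        have at least h citations each."""
        h = 0
        for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Invented citation lists for two hypothetical researchers at a similar
    # career stage, in fields with very different citation norms:
    high_citation_field = [120, 85, 40, 33, 20, 15, 9]
    low_citation_field = [14, 9, 6, 5, 3, 2, 1]
    print(h_index(high_citation_field))  # 7
    print(h_index(low_citation_field))   # 4

Comparable productivity and influence within their own fields can thus yield very different h-indices, and the index also depends heavily on career length, which is one reason it should not be used to compare individuals directly.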

Guidance

When evaluating research (e.g. for hiring or promotion purposes), staff at the University of Bristol should ensure they apply the principles described above.

This statement is licensed under a Creative Commons Attribution 4.0 International Licence. Please attribute as ‘Developed from the UCL Statement on the Responsible Use of Metrics’.
