Research and Research Data Impact Metrics: Two New Resources

By Allan Barclay, Information Architecture Librarian at Ebling Library

This photo, “Forge Welding,” is copyright © 2012 Kevin Wood, used with no changes, and is made available under the Creative Commons Attribution-ShareAlike 2.0 Generic License.

Peer review has long been the primary tool for evaluating the quality of research, helping to ensure that only sound research articles are published in journals. Additional tools such as publication metrics (e.g. the Journal Impact Factor, the h-index) have attempted to bring a more quantitative approach to judging the quality of journals themselves, as have more recent approaches like altmetrics. Quality journals are assumed to have more impact than lesser ones (an assumption many dispute). The same forces driving evaluation of the broader impact of research (cost control, accountability to the public, verification of results, etc.) are now turning attention to the impact of sharing and re-using research data itself. It stands to reason that best practices for performing research will have implications for research data creation, management, and dissemination. The measurement of impact for both research articles and shared research datasets, however, is still far from a settled science.
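For readers unfamiliar with one of the metrics mentioned above, here is a minimal sketch of how an h-index can be computed from a list of per-article citation counts, using the standard definition (the largest h such that h articles each have at least h citations). The citation counts in the example are invented for illustration; real services compute this from their own citation databases.

```python
def h_index(citation_counts):
    """Largest h such that h articles each have at least h citations."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # at least `rank` articles have >= `rank` citations
        else:
            break
    return h

# Five articles with these (made-up) citation counts yield an h-index of 4.
print(h_index([10, 8, 5, 4, 3]))  # -> 4
```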

Research Impact

The Metric Tide report, just released in July 2015, is a product of the Higher Education Funding Council for England’s Independent Review of the Role of Metrics in Research Assessment and Management (which, in turn, was a product of the UK Research Excellence Framework). Some of the main findings and recommendations mirror issues that UW-Madison Research Data Services grapples with:

The need for transparency and openness in research data infrastructure.

The need for more investment in research information infrastructure (i.e. support for research on research, or the “science of science policy”).

The use of standardized identifiers such as digital object identifiers (DOIs) for research outputs. The report also calls for the use of researcher identifiers like ORCID and better institutional identifiers (see the sketch below).
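To make the identifier recommendation concrete, here is a minimal sketch of how the structure of an ORCID iD can be checked. It relies on the ISO 7064 MOD 11-2 check-digit scheme that ORCID publishes for the final character of an iD; the sample iD is the one ORCID uses in its own documentation, not a claim about any particular researcher.

```python
def orcid_check_digit(base_digits):
    """Compute the ISO 7064 MOD 11-2 check character for the first 15 digits of an ORCID iD."""
    total = 0
    for d in base_digits:
        total = (total + int(d)) * 2
    result = (12 - (total % 11)) % 11
    return "X" if result == 10 else str(result)

def is_plausible_orcid(orcid):
    """Check the 0000-0000-0000-000X shape and the trailing check character."""
    digits = orcid.replace("-", "")
    if len(digits) != 16 or not digits[:15].isdigit():
        return False
    return orcid_check_digit(digits[:15]) == digits[15]

# Example iD taken from ORCID's public documentation.
print(is_plausible_orcid("0000-0002-1825-0097"))  # -> True
```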

The research performed, the research data produced, and the evaluation of both require a more coherent infrastructure and culture than currently exist, and The Metric Tide outlines the problems along with some potential solutions. Much of the report is geared toward evaluating good research practices to allow for more efficient and effective policy-making, and while the report is UK-focused, it should be of value to all researchers and research managers. The report also reaffirms a common (and contentious) concern: that metrics should supplement rather than supplant qualitative tools like peer review.

Research Data Impact

More recently, attention has turned to evaluating the impact of releasing research data itself, beyond the research findings published in journal articles. As with evaluating research impact, metrics are being applied to measure the impact of the research datasets themselves.

How to Track the Impact of Research Data with Metrics, released in June 2015, is a new How-to Guide from the Digital Curation Centre. It provides a solid overview of impact measurement concepts, data citation, impact measurement tools and services, and more. It also offers suggestions for increasing the impact of your own research data by monitoring usage and adjusting strategies accordingly.

Institutions can benefit from data usage monitoring when they:

plan for and monitor the success of the infrastructure providing access to the data, in particular to gauge capacity requirements in storage, archival and network systems;

instigate promotional activities and celebrate data sharing and re-use successes by researchers at the institution;

create special collections around popular datasets;

meet funder requirements to safeguard data for the appropriate length of time since last use.

The use of metrics to measure research impact is already complicated and controversial; using metrics to judge the impact of research datasets themselves adds a new level of complexity. There are, however, new tools (and new uses for existing tools) geared specifically toward data. There are also potential hybrid approaches, such as using altmetrics for immediate impact and citations for longer-term impact (including citations of data papers). The How-to Guide is a valuable introduction to understanding (and using) these tools and techniques.
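As a purely conceptual sketch of the hybrid idea described above, the snippet below pairs short-term attention signals (altmetric mentions) with longer-term citation counts for a couple of hypothetical dataset DOIs. The names and numbers are invented; a real workflow would pull these figures from an altmetrics provider and a citation index.

```python
from dataclasses import dataclass

@dataclass
class DatasetImpact:
    doi: str                 # hypothetical dataset DOI
    altmetric_mentions: int  # near-term attention: tweets, blog posts, news stories
    citations: int           # longer-term impact: formal citations, incl. data papers

def hybrid_ranking(records):
    """Rank datasets by citations first, using altmetric attention as a tiebreaker."""
    return sorted(records, key=lambda r: (r.citations, r.altmetric_mentions), reverse=True)

demo = [
    DatasetImpact("10.xxxx/dataset-a", altmetric_mentions=42, citations=3),
    DatasetImpact("10.xxxx/dataset-b", altmetric_mentions=5, citations=11),
]
for r in hybrid_ranking(demo):
    print(f"{r.doi}: {r.citations} citations, {r.altmetric_mentions} mentions")
```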

References:

Alex Ball and Monica Duke (2015). ‘How to Track the Impact of Research Data with Metrics’. DCC How-to Guides. Edinburgh: Digital Curation Centre. Available online: http://www.dcc.ac.uk/resources/how-guides/track-data-impact-metrics (retrieved July 2015)

‘The Metric Tide: Report of the Independent Review of the Role of Metrics in Research Assessment and Management’ – http://www.hefce.ac.uk/pubs/rereports/Year/2015/metrictide/Title,104463,en.html (retrieved July 2015)

Alice Meadows (2015). ‘Lightening the Load: ORCID and Research Management’ – http://orcid.org/blog/2015/07/10/lightening-load-researchers-orcid-and-research-management (retrieved July 2015)