Research Papers

Evidence-based Nomenclature and Taxonomy of Research Impact Indicators

  • Mudassar Arsalan 1,†
  • Omar Mubin 1
  • Abdullah Al Mahmud 2
  • 1School of Computer, Data and Mathematical Sciences, Western Sydney University, Australia
  • 2Centre for Design Innovation, Swinburne University of Technology, Australia
†Mudassar Arsalan (E-mail: ).

Received date: 2020-01-28

  Revised date: 2020-05-04

  Accepted date: 2020-05-20

  Online published: 2020-06-12

Copyright

Copyright reserved © 2020

Abstract

Purpose: This study aims to classify research impact indicators based on their characteristics and scope. A concept of evidence-based nomenclature of research impact (RI) indicators has been introduced for generalization and transformation of scope.

Design/methodology/approach: Literature related to research impact assessment was collected and categorized into conceptual studies and applied case studies. One hundred and nineteen indicators were selected to prepare the classification and nomenclature. The nomenclature was developed based on the principle—“every indicator is a contextual-function to explain the impact”. Every indicator was disintegrated into three parts, i.e. Function, Domain, and Target Areas.

Findings: The main functions of research impact indicators express improvement (63%), recognition (23%), and creation/development (14%). The focus of research impact indicators in the literature leans towards the academic domain (59%), whereas the environment/sustainability domain is least considered (4%). As a result, research impact related to research aspects is reported the most (29%). Other target areas include systems and services, methods and procedures, networking, planning, policy development, economic aspects and commercialisation, etc.

Research limitations: This research was applied to 119 research impact indicators; the inclusion of additional indicators may change the results.

Practical implications: The plausible effect of nomenclature is a better organization of indicators with appropriate tags of functions, domains, and target areas. This approach also provides a framework of indicator generalization and transformation. Therefore, similar indicators can be applied in other fields and target areas with modifications.

Originality/value: The development of a nomenclature for research impact indicators is a novel approach in scientometrics. It follows the same lines as other scientific disciplines, such as biology and chemistry, where fundamental objects are classified according to common standards.

Cite this article

Mudassar Arsalan, Omar Mubin, Abdullah Al Mahmud. Evidence-based Nomenclature and Taxonomy of Research Impact Indicators[J]. Journal of Data and Information Science, 2020, 5(3): 33-56. DOI: 10.2478/jdis-2020-0018

1 Introduction

Research Impact (RI) is a broad topic in scientometrics, supporting the progress of science and monitoring the influence of efforts made by governments, institutions, societies, programs, and individual researchers. Several documented and popular RI assessment methods have been developed by individuals and organisations for evaluating the research of a particular programme or for general purposes. This intent has created diversity in evaluation methods, frameworks and scope. Some approaches focus only on impacts related to academic recognition and use, such as bibliometric measures. However, growing technology, educational networking, effective and targeted research strategies, and regular monitoring of RI are reducing the gap between research producers and consumers. As a result, the horizon of RI is expanding to cover other areas of impact, such as the economy, society, and the environment.
Many individuals and organizations have introduced measures and indicators for assessing RI. Nevertheless, due to the diversity in the nature and scale of RI, no single method is considered robust and complete (Vinkler, 2010). Therefore, new measures and indicators are introduced from time to time according to the interests and available resources of the method designers (Canadian Academy of Health Sciences, 2009). Additionally, the higher availability of national and international funding for the health sciences critically influences the science of RI assessment (Heller & de Melo-Martín, 2009): there are more indicators, measures, and frameworks for health-related research than for any other area of science. As a result, there is a considerable gap in the generalizability and transformability of health-related efforts to the rest of science.
At large, this study aims to discover the evidence-based diversity of RI indicators and to develop a method for organizing them. To this end, a nomenclature of RI indicators is developed based on a divide-and-rule principle, and a taxonomical analysis is presented based on the primary components of the nomenclature. This effort is a step towards a robust and inclusive RI assessment method. The concept of this paper was initially presented at the 17th International Conference on Scientometrics and Informetrics (ISSI 2019) in the form of a poster (Arsalan, Mubin, & Al Mahmud, 2019).

2 Review of literature

In the broader understanding, metrics and indicators are different. This is clear from the semantics: an “indicator” indicates the research impact, whereas metrics measure it. According to Vinkler (2010), an indicator should be read as a meaningful representative, which indicates the performance of a system as per its design objective. Metrics, on the other hand, provide additional quantitative information about the impact of a system (Lewison, 2003). However, the diversity of effects can make it problematic to measure all aspects quantitatively. Therefore, pragmatically, indicators have the potential to illustrate useful and broader impacts compared to metrics (Vinkler, 2010).
There are multiple reasons for the diversity of research impact indicators. For instance, Bennett et al. (2016) explained 10-point criteria for constructing research impact indicators from a technical and contextual point of view: indicators should be specific, validated, reliable, comparable, substantial, accessible, acceptable, appropriate, useable, and feasible. Consequently, such diverse criteria require a range of indicators to fulfil the conditions in diversified contexts. The Canadian Academy of Health Sciences (2009) explained another reason for this diversity, i.e. the strategy of indicator selection, which describes three basic principles.
  • The indicator should answer the specific question of evaluation.
  • The indicator should satisfy the level of aggregation.
  • The indicator should be read with other indicators to complement the strength of evaluation.
In other words, every indicator only explains the impact of research in minimal dimension, covers a very specific level of aggregation, and has a very limited power of defining the research impact (REF, 2012). As a result, we need a bundle of indicators that fulfil the strategic requirements of evaluators.
“Nomenclature” is a combination of two Latin words: “nomen”, meaning “name”, and “calare”, meaning “to call”. It is a scientific process, in any discipline, of assigning names to essential components according to predefined rules and standards (Hayat, 2014). Generally, these rules are outlined in the form of a classification scheme; therefore, for nomenclature, the classification system is highly significant. Longabaugh et al. (1983) introduced a problem-focused nomenclature in medical science, which is a coding system with a specific objective. They argued that the problem-focused approach provides better control for organization and problem management. A similar concept can be applied in any branch of science to organize objects with respect to a problem-focused classification system.
Classification and organization of research impact indicators are not new (Vinkler, 2010). However, the nomenclature or taxonomy approach is missing, and therefore standardization is lacking globally. Every research impact assessment effort organizes its indicators distinctly according to technical and contextual requirements. Nonetheless, based on context, the classification schemes of indicators can be arranged into four groups.
  • Impact Categories and Domains
  • Impact Time and Pathways
  • Impact in a Specific Dimension
  • Uncategorised
In many research impact assessment methods, the adopted organization of impact indicators is based on impact categories and domains. These methods are wide in scope and open to selecting indicators in any of their classes (Bernstein et al., 2006). The Payback framework for assessing the impact of health research, developed by Buxton and Hanney (1996) at the Health Economics Research Group, Brunel University, is a classical method in this group. It organizes the indicators into multi-dimensional categories including knowledge, research benefits, political and administrative benefits, health sector benefits, and broader economic benefits.
The second group, which follows impact time and pathways, is based on the concept of output and outcome. The difference between output and outcome was first explained by United Way of America (1996) in the form of logic modelling. This model explicitly defines inputs, processes and outputs in the form of resources, activities and products, respectively, whereas the outcome is a benefit to the population of interest. Weiss (2007) split the outcomes of health research into initial, intermediate, and long-term impacts. This time-bound approach represents a sequence or chain of effects. For instance, awareness of new research in the decision-making community is an initial outcome; that awareness can lead to a change in clinical practice as an intermediate outcome; and ultimately, the long-term outcome is the improvement in the health of patients.
The third approach is exclusive: many organizations and individual researchers are keen to know the impact of research in only one area, in depth. One example is the monetary value approach presented by Deloitte Access Economics (2011), in which all indicators and measures relate solely to the economic impacts of research. Finally, some methods are organization-specific, with a scoring system limited in scope and developed in a local context; they cannot be fitted into any of the above-mentioned structures, for instance, The Wellcome Trust’s Assessment Framework (Wellcome Trust, 2009), the Matrix Scoring System (Wiegers et al., 2015), and the Royal Netherlands Academy of Arts and Sciences approach (VSNU, KNAW, & NWO, 2009).
Although the organization of indicators within a research impact framework has been a mandatory part of every evaluation method, there is still a need to organize indicators based on criteria and rules. A classic example of diversity and heterogeneity can be seen in the REF (2012), where more than 100 indicators are applied based on subject domains and target areas of socio-economic interest. A mechanism is still needed whereby these indicators can be generalized and transformed on taxonomical structures.

3 Method

We systematically explored the literature databases, including Scopus, WebMD, ACM DL, IEEE Xplore, Web of Science and Google Scholar to collect research articles providing RI assessment indicators and methods. In many cases, organizations published the frameworks and guidelines in the form of technical reports; therefore, grey literature was also considered.
Multiple combinations of literature-search keywords and their synonyms were used. These include, but are not limited to, “research impact,” “research productivity,” “research quality,” “research impact indicators,” “research impact assessment,” “research impact assessment method,” “research impact assessment framework,” “scientometric indicators,” “bibliometric indicators,” “economic indicators,” “social indicators,” and “environmental indicators.” The purpose of using combinations of these keywords was to identify theoretical or applied studies related to research impact assessment. Theoretical or conceptual studies provided the constructs and mechanisms of research impact assessment methods, whereas applied studies demonstrated the assessment methods in the form of case studies. We also found some review articles that compared different RI assessment approaches; however, in this study, we focused mainly on the preparation of RI indicators. Because multiple combinations of keywords and databases were used, we found significant repetition of the same studies, which we removed with the help of EndNote software. In this study, we extracted indicators from the conceptual studies and used NVivo 12 software for annotation and coding. To decipher the nomenclature, indicators were disintegrated based on their lexical and conceptual structures, as discussed in the Results section. To improve the coding results, inter-coder reliability was assessed on 10% of the data, and conflicts were resolved through discussion.
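The paper does not name the inter-coder reliability statistic used; as an illustrative sketch only, a common choice for two coders is Cohen's kappa, shown here on hypothetical function codes (the labels and data below are assumptions, not the study's coding sheet):

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: agreement between two coders, corrected for chance."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed proportion of items where both coders assigned the same code
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a = Counter(coder_a)
    freq_b = Counter(coder_b)
    # Chance agreement if both coders labelled independently at their marginal rates
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical function codes assigned by two coders to the same five indicators
a = ["improvement", "creation", "recognition", "improvement", "creation"]
b = ["improvement", "creation", "improvement", "improvement", "creation"]
print(round(cohens_kappa(a, b), 2))  # → 0.67
```

Values near 1 indicate near-perfect agreement; disagreements (such as the third item above) would then be resolved through discussion, as described in the text.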
The cognitive structure of the nomenclature defined in this study is based on the principle “every indicator is a contextual-function to explain the impact”. The primary constructs of an indicator are function and context. Function refers to a “correspondence”, “dependence relation”, “rule”, “operation”, “formula” or “representation” as defined by Vinner and Dreyfus (1989). It explains the relationship between the two domains “research” and “impact”. In other words, impact (y) is a function of research (x), i.e. y = f(x). At large, in scientometric terms, the functional operation can be “improvement”, “recognition”, “reduction”, “replacement” etc. (see Table 1 for examples). An indicator is a subjective measure of a system-dependent phenomenon, which is always described in its contextual understanding by a system designer (Vinkler, 2010). Therefore, an indicator’s function is always applied in a specific context. For instance, in the indicator “improvement in patient care system”, the patient care system represents the context of the healthcare system, which is critically important for researchers, funders, institutes and support organisations related to the health sciences (Trochim et al., 2011).
Structure of Indicator: I = F + C
where F = Function and C = Context; in turn,
C = t + d
where t = Target Area and d = Impact Domain
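As a minimal sketch (not part of the original study), the decomposition I = F + C with C = t + d can be modelled as a small data structure; the example indicator, "improvement in patient care system" in the healthcare domain, is taken from the text:

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    """I = F + C, where the context C = t + d (target area + impact domain)."""
    function: str   # F: e.g. improvement, creation, recognition, replacement
    target: str     # t: e.g. citations, patient care, policy
    domain: str     # d: e.g. bibliometrics, healthcare, economy

    @property
    def context(self):
        # C = t + d
        return (self.target, self.domain)

    def describe(self):
        return f"{self.function} in {self.target} ({self.domain})"

i = Indicator("improvement", "patient care system", "healthcare")
print(i.describe())  # → improvement in patient care system (healthcare)
```

Tagging every indicator with such a (Function, Target, Domain) triple is what later enables the generalization and transformation discussed in the Results section.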
Table 1 Nomenclature of Indicator with Examples.
Functions (F)
Improvement / Addition / Reduction
This function of the indicator explains the addition to, or enhancement of, an existing phenomenon in quantitative or qualitative form. (Example: Improvement in economic gains such as increased employment or reduced health costs (Weiss, 2007))
Creation
This function of the indicator focuses on creativity in the form of the development of new knowledge, theory, technique, method, technology, approach, opportunity or workflow. (Example: Creation of prevention methods for clinical practice (Trochim et al., 2011))
Recognition
This function explains the recognition of effort as outstanding quality by peers or experts, such as awards, promotions, meritorious selection and work showcasing. The recognition can be of the research, the researcher or the research institute. (Example: Receiving an award on research (Kuruvilla et al., 2006))
Obsoleting / Replacing
This function describes a policy, law or regulation that obsoletes or disuses an existing phenomenon to avert future negative impacts. (Example: Change in law to obsolete an existing method of drug approval (Maliha, 2018))
Context (C)
Target (t)
Contextual targets in research impact science include knowledge, services, policies, laws, guidelines, systems, technologies, procedures, methods, frameworks, workflows, publications, patents, products, stakeholders, citations, literature gaps, intellectual challenges, scholarly issues, relationships, collaborations, and networks. These are the key areas, but each is usually partial in its contextual understanding.
Domain (d)
The contextual domain is the main area or field of interest of the indicator's system designer, such as health, education, economy, environment, academia, medical science, chemistry, history, or multidisciplinary fields. The main body of knowledge and the elaboration of indicators always come from the domain language. The domain is the main component of the indicator, which specialises the context and application of the indicator; however, the level of the domain is subject to the interest and perspective of the impact evaluator.

4 Results and discussion

4.1 Search outcome and identification of indicators

The literature search yielded more than one thousand studies (1,152) in which research impact was addressed in the form of theoretical papers, case studies and review articles. After excluding studies where research impact was assessed in case studies using a method developed elsewhere, only 36 conceptual studies remained. In these conceptual studies, we found more than 500 research impact indicators, from which we selected 119 for preparing the nomenclature (see Appendix 1).

4.2 Nomenclature

In many cases, an indicator is self-explanatory and well written in a proper construct-based structure, such as Development of mitigation methods for reducing environmental hazards and losses from natural disasters (Grant et al., 2010). However, similar to an algebraic expression, the constructs are sometimes obscured but well understood by the users. For instance, in Number of citations, the function and the contextual domain are missing, but the indicator is well recognized as Increased number of bibliometric citations, where the function is addition, the contextual target is citations, and the domain is bibliometrics.
This contextual nomenclature of indicators allows one to focus on context and function irrespective of the choice of words and lexical structure of the indicator. Additionally, it strengthens the idea of contextual generalisability, which is very helpful in extending the applications and scope of the indicators. For example, consider use of research in the development of medical technology (where Function = development/creation, Contextual Target = technology, and Contextual Domain = healthcare). This indicator can be generalised on a variable domain as use of research in the development of technology (where Function = development/creation, Contextual Target = technology, and Contextual Domain = variable [generalized]).
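The generalization and transformation described above can be sketched as two simple operations over a (Function, Target, Domain) triple. This is an illustrative assumption about representation, not the authors' implementation; the re-specialization to "agriculture" is likewise a hypothetical example:

```python
def generalize(indicator):
    """Replace the contextual domain with a variable placeholder ('*')."""
    function, target, _domain = indicator
    return (function, target, "*")

def transform(indicator, new_domain):
    """Re-specialize a (possibly generalized) indicator to a new domain."""
    function, target, _domain = indicator
    return (function, target, new_domain)

# "Use of research in the development of medical technology"
medical = ("creation", "technology", "healthcare")
generic = generalize(medical)            # → ('creation', 'technology', '*')
agri = transform(generic, "agriculture")
print(agri)  # → ('creation', 'technology', 'agriculture')
```

The point of the sketch is that function and target survive the transformation unchanged; only the contextual domain is swapped, which is what lets an indicator coined in health research be reused in another field.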

4.3 Taxonomical analysis

Among the analyzed indicators, most are functionally related to improvements in the current state of affairs (63%), mainly focused on future research, services and methods (Figure 1). Recognition of research (23%), in the form of bibliometric measures, rewards and other citations, is also considerably highlighted in the literature-based list of indicators. Creation and development (14%) is another prevailing influence of research, reflected in indicators mentioning the creation of new knowledge, techniques, research teams, drugs etc. More than half (59%) of the indicators attempt to explore impact in the academic domain (Figure 2), e.g. where and how is the research recognised? What knowledge, methods and collaborations are formed? What challenges, issues and gaps are addressed? Knowledge domains related to social systems and services are second in coverage (26%), focusing primarily on the healthcare, education and justice systems. Economic policies and services also have a good share (11%) of the literature-based indicators. Although, during the last two decades, the impact of research on improving the environment and sustainability has also emerged in various indicators, its representation is quite low (4%).
Figure 1. Evidence-based taxonomical characteristics of indicators, (A) Scale of indicators, (B) Complexity of indicators, (C) Functions of indicators, (D) Domains of indicators, and (E) Target areas of indicators.
Figure 2. Cross-constructs distribution of indicators characteristics, (A) Functional distribution of target areas in indicators, (B) Domain distribution of target areas in indicators, and (C) Functional distribution of domains in indicators.

4.4 Limitations of the study

In this study, 119 indicators were interpreted and coded for nomenclature and taxonomy; the inclusion of more indicators may change the classification results. Another aspect which may affect the outcome of the study is consistency in the interpretation and coding of indicators. Although consistency was improved by applying the inter-coder reliability method to 10% of the indicators, rule-based text-mining techniques may further improve the results.

5 Conclusion and future direction

This study categorized research impact indicators based on their characteristics and scope. Furthermore, a concept of evidence-based nomenclature of research impact indicators has been introduced to generalize and transform the indicators. For building the nomenclature and classification, one hundred and nineteen indicators were selected and coded in NVivo software. The nomenclature was developed based on the principle “every indicator is a contextual-function to explain the impact”. Every indicator was disintegrated into three parts (the essential ingredients of the nomenclature), i.e. Function, Domain, and Target Areas. It is observed in the literature that the primary functions of research impact indicators are improvement, recognition and creation/development. The focus of research impact indicators in the literature leans towards the academic domain, whereas the environment/sustainability domain is least considered. As a result, research impact related to research aspects is considered the most. Other target areas include systems and services, methods and procedures, networking, planning, policy development, economic aspects and commercialisation etc.
The study provided a novel approach in scientometrics for generalizability and transformability of research impact indicators. It explored the diversity of indicators and demonstrated the generalization based on fundamental constructs, i.e. function, domain and target area. As a result, a research impact indicator can be modified and applied to multiple research disciplines.

Author contributions

The first author is a PhD student under the supervision of the second author and co-supervision of the third author. Therefore, Mudassar Arsalan (M.Arsalan@westernsydney.edu.au) contributed to data collection, analysis and drafting of the paper, whereas Omar Mubin (O.Mubin@westernsydney.edu.au) and Abdullah Al Mahmud (aalmahmud@swin.edu.au) facilitated in conceptualization and reviewing the article.

Appendix 1. Indicators identified in literature

Ref Indicator Reference Measure Qualitative/Quantitative
1 Identification of research gaps, questions and new research dimension (Heller & de Melo-Martín, 2009; Kuruvilla, Mays, Pleasant, & Walt, 2006; W. M. Trochim, Marcus, Masse, Moser, & Weld, 2008; Weiss, 2007) Yes / NoŦ Qualitative
2 Development of a new technique for data collection and new data (Heller & de Melo-Martín, 2009; Sung et al., 2003) Yes / NoŦ Qualitative
3 Creation of a research method or extension of existing by involving new approach and technique (Kuruvilla et al., 2006; W. M. Trochim et al., 2008) Yes / NoŦ Qualitative
4 Defining the concept and subject vocabulary in a more comprehensive way (Mankoff, Brander, Ferrone, & Marincola, 2004; W. M. Trochim et al., 2008) Yes / NoŦ Qualitative
5 Formation of research groups and collaborate in multidimensional research (S. R. Hanney, Grant, Wooding, & Buxton, 2004; Heller & de Melo-Martín, 2009; Kuruvilla et al., 2006; W. M. Trochim et al., 2008) Yes / NoŦ Qualitative
6 Recruitment of skilled researchers (Heller & de Melo-Martín, 2009) How many researchers are recruited? Quantitative
7 Development of communities of science, new grant programmes; replication and new research (S. Hanney, Buxton, Green, Coulson, & Raftery, 2007) Yes / NoŦŦ Mixed
8 Effective planning and addressing future research (Gordon & Meadows, 1981) Yes / NoŦ Qualitative
9 Research capacity building for an individual or a group of researchers (Buxton & Hanney, 1996; Raftery, Hanney, Greenhalgh, Glover, & Blatch-Jones, 2016) How many researchers are trained? Quantitative
10 Preparing a better procedure for researchers induction (Heller & de Melo-Martín, 2009; Sung et al., 2003) Yes / NoŦ Qualitative
11 Improvement in ethical approval processes for better decisions and timeliness (Pober, Neuhauser, & Pober, 2001; Sung et al., 2003) Yes / NoŦ Qualitative
12 Formation of new research teams and projects (Pober et al., 2001) How many projects and teams are established? Quantitative
13 Successful completion of ongoing research with the achievement of set targets (Weiss, 2007) Yes / NoŦŦ Mixed
14 Retention of research team by involving in productivity and future research (Heller & de Melo-Martín, 2009; Kuruvilla et al., 2006; Nathan, 2002) How many members are retained? Quantitative
15 Advancement in numbers and quality of research and research teams (Nathan, 2002; Pober et al., 2001; W. M. Trochim et al., 2008; Weiss, 2007) Yes / NoŦŦ Mixed
16 Enhancement of research process, behaviour and procedural protocols (Heller & de Melo-Martín, 2009; Pober et al., 2001; Sung et al., 2003) Yes / NoŦ Qualitative
17 Recognition and leadership of researchers in the research domain (Kuruvilla et al., 2006; Pober et al., 2001) Yes / NoŦ Qualitative
18 Improvement of research communication between researchers and research organizations (Heller & de Melo-Martín, 2009; Mankoff et al., 2004) Yes / NoŦ Qualitative
19 Serving of research staff on a higher level in more advanced organizations at national and international level (Kuruvilla et al., 2006; Sung et al., 2003) Yes / NoŦ Qualitative
20 Improvement in research culture and overall environment (Heller & de Melo-Martín, 2009; Kessler & Glasgow, 2011; Mankoff et al., 2004; Pober et al., 2001; Sung et al., 2003) Yes / NoŦ Qualitative
21 Identification and overcoming of the research process constraints (Heller & de Melo-Martín, 2009; Pober et al., 2001) Yes / NoŦ Qualitative
22 Improved willingness and tangible measures for practice-based and applied research (Westfall, Mold, & Fagnan, 2007) Yes / NoŦ Qualitative
23 Development of improved analytical methods for existing data (Kessler & Glasgow, 2011; Kuruvilla et al., 2006; W. M. Trochim et al., 2008; Weiss, 2007) Yes / NoŦ Qualitative
24 Improvement in multi-disciplinary research methods (Kuruvilla et al., 2006) Yes / NoŦ Qualitative
25 Creation of methods for cross domains results in interpretation and synthesis (Kuruvilla et al., 2006; Pang et al., 2003) Yes / NoŦ Qualitative
26 Embracing the innovative methods for measuring the research outcome (Dougherty & Conway, 2008; W. M. Trochim et al., 2008) Yes / NoŦ Qualitative
27 Discovery of new or advanced research findings (Lavis, Ross, McLeod, & Gildiner, 2003; Mankoff et al., 2004) Yes / NoŦ Qualitative
28 Discovery of novel knowledge or innovative techniques (S. R. Hanney et al., 2004; Kalucy, Jackson-Bowers, McIntyre, & Reed, 2009; Lavis et al., 2003; W. Trochim, Kane, Graham, & Pincus, 2011) Yes / NoŦ Qualitative
29 Demonstration of an efficient way of treatment (Lavis et al., 2003; W. Trochim et al., 2011; Woolf, 2008) Yes / NoŦ Qualitative
30 Development of new research devices or products for better results (ARC, 2018; Kalucy et al., 2009; Lavis et al., 2003; Mankoff et al., 2004; Pang et al., 2003) Yes / NoŦŦ Mixed
31 Obtaining patents for new devices or products (ARC, 2018; Kuruvilla et al., 2006; Lavis et al., 2003; Lewison, 2003; Sarli, Dubinsky, & Holmes, 2010) How many patents are obtained? Quantitative
32 Identification or validation of new biomarkers for better healthcare (Lavis et al., 2003; Zerhouni, 2007) Yes / NoŦ Qualitative
33 Use of research outcomes and discoveries into the advancement of research related to animals and humans (Pober et al., 2001; Woolf, 2008; Zerhouni, 2007) Yes / NoŦ Qualitative
34 Receiving an award on research (Kuruvilla et al., 2006) How many awards are received? Quantitative
35 The increment in number and proportion of research grant submissions and awards (ARC, 2018; Lavis et al., 2003; Lewison, 2003; Weiss, 2007) What is the proportion of success of grant award? Quantitative
36 Increase in the quantity of publications in high ranking journals as a research outcome (Buxton & Hanney, 1996; Kuruvilla et al., 2006; Lewison, 2003; Pang et al., 2003; Weiss, 2007) How many publications are produced in high ranking journals? ŦŦŦ Quantitative
37 Increase in the total impact factor gained by publishing research in high ranking journals (ARC, 2018; Archambault & Lariviere, 2009; RAND Europe, 2006; Weiss, 2007) How much impact factor is gained? ŦŦŦ Quantitative
38 Increase in the conference papers and presentations organized on national or international levels. (ARC, 2018; Kalucy et al., 2009; Lewison, 2003) How many conference papers and presentations are produced? ŦŦŦ Quantitative
39 Increase in the number of citations of research outcome (ARC, 2018; Garfield, 2006; S. R. Hanney et al., 2004; Kuruvilla et al., 2006; RAND Europe, 2006; Weiss, 2007) How many citations are obtained? ŦŦŦ Quantitative
40 Increase in media appearance of researchers or research organizations for their findings and its relation to the public (Kuruvilla et al., 2006; Lewison, 2003) How many times appeared in media? Quantitative
41 Popularity and acceptance of research-based knowledge and techniques in masses (e.g. change in community-based health practice or education system) (Kalucy et al., 2009; Kuruvilla et al., 2006; Lewison, 2003; Pang et al., 2003; Weiss, 2007) Yes / NoŦ Qualitative
42 Participation of researchers as a member of the research journal editorial board or become a journal editor (Kuruvilla et al., 2006) Yes / NoŦ Qualitative
43 Dissemination and reach of research outcome to more audiences (Kalucy et al., 2009; Kuruvilla et al., 2006; Weiss, 2007) Yes / NoŦ Qualitative
44 IF2-Index (Boell & Wilson, 2010) Index Value ŦŦŦ Quantitative
45 h-Index (Hirsch, 2005) Index Value ŦŦŦ Quantitative
46 Contemporary h-Index (Sidiropoulos, Katsaros, & Manolopoulos, 2007) Index Value ŦŦŦ Quantitative
47 Individual h-Index (Harzing, 2010) Index Value ŦŦŦ Quantitative
48 Hi-Index (Zhai, Yan, & Zhu, 2013) Index Value ŦŦŦ Quantitative
49 H2-Index (Vanclay & Bornmann, 2012) Index Value ŦŦŦ Quantitative
50 M-Quotient (Hirsch, 2005) Index Value ŦŦŦ Quantitative
51 G-Index (Egghe, 2006) Index Value ŦŦŦ Quantitative
52 Y-Index (Fu & Ho, 2014) Index Value ŦŦŦ Quantitative
53 PRP-Index (Vinkler, 2014) Index Value ŦŦŦ Quantitative
54 IFQ2A index (Torres-Salinas, Moreno-Torres, Delgado-López-Cózar, & Herrera, 2011) Index Value ŦŦŦ Quantitative
55 DCI-Index (Järvelin & Persson, 2008) Index Value ŦŦŦ Quantitative
56 R-& AR-Indices (Jin, Liang, Rousseau, & Egghe, 2007) Index Value ŦŦŦ Quantitative
57 AHP Index (Wang, Wen, & Liu, 2016) Index Value ŦŦŦ Quantitative
58 Altmetric (A. E. Williams, 2017) Altmetric Attention Score Quantitative
59 STAR Metrics (Largent & Lane, 2012) Index Value Quantitative
60 ResearchGate-Score (Hoffmann, Lutz, & Meckel, 2016) Index Value ŦŦŦ Quantitative
61 Crown indicator (Moed, De Bruin, & Van Leeuwen, 1995) Index Value ŦŦŦ Quantitative
62 Societal Quality Score (Mostert, Ellenbroek, Meijer, van Ark, & Klasen, 2010) Index Value Quantitative
63 PlumX Metrics (Lindsay, 2016) Index Value ŦŦŦ Quantitative
64 Positive reviews of creative publications and performances (Grant, Brutscher, Kirk, Butler, & Wooding, 2010) Yes / NoŦ Qualitative
65 Non-academic publications in government reports (Penfield, Baker, Scoble, & Wykes, 2014) How many publications appear in government reports? Quantitative
66 Non-academic citations in government reports (Penfield et al., 2014) How many citations are made in government reports? Quantitative
67 Number of industrial contracts (ARC, 2018) How many industrial contracts are obtained? Quantitative
68 Amount of industrial and academic funding (ARC, 2018) How much funding is secured? Quantitative
69 Community awareness of research; Collaborative projects with end users (S. Hanney et al., 2007) Yes / NoŦ Qualitative
70 Facilitation of and participation in expert panels for research enquiries, external institutions, steering committees and advisory boards (S. Hanney et al., 2007) Yes / NoŦŦ Mixed
71 Use of research outcomes, discoveries or clinical trials as a best practice (Lewison, 2003; W. M. Trochim et al., 2008; Woolf, 2008) Yes / NoŦ Qualitative
72 Use of research outcome in efficiency and better performance of services (Woolf, 2008) Yes / NoŦ Qualitative
73 Provision of diversified and efficient intervention and treatment options for clinicians (Dougherty & Conway, 2008) Yes / NoŦ Qualitative
74 Improved client care (Heller & de Melo-Martín, 2009; Kuruvilla et al., 2006; Mankoff et al., 2004; Pang et al., 2003; Pober et al., 2001; W. Trochim et al., 2011; Weiss, 2007; Westfall et al., 2007) Yes / NoŦ Qualitative
75 Decrease in work-environment mistakes (Donaldson, Rutledge, & Ashley, 2004) What is the rate of decrease in work-environment mistakes? Quantitative
76 Increase in healthcare-improvement training provided by healthcare providers to support staff (S. R. Hanney et al., 2004; Mankoff et al., 2004; Pober et al., 2001; Sung et al., 2003) How many support staff are trained? Quantitative
77 Improvement in technologies and information systems for social applications (B. Haynes & A. Haines, 1998; Kuruvilla et al., 2006) Yes / NoŦ Qualitative
78 Increase in training development for system improvement (S. R. Hanney et al., 2004; Kuruvilla et al., 2006; Lewison, 2003; Mankoff et al., 2004; Pober et al., 2001; Sung et al., 2003) How many training programs are developed for healthcare improvement? Quantitative
79 Creation of prevention methods for clinical practice (Heller & de Melo-Martín, 2009; Kuruvilla et al., 2006; Mankoff et al., 2004; Pang et al., 2003; Pober et al., 2001; W. Trochim et al., 2011; Weiss, 2007; Westfall et al., 2007) Yes / NoŦ Qualitative
80 Adapting evidence-based practices (Donaldson et al., 2004; Dougherty & Conway, 2008; Grant, Cottrell, Cluzeau, & Fawcett, 2000; Kuruvilla et al., 2006; Westfall et al., 2007) Yes / NoŦ Qualitative
81 Improvement in patient outcomes (Donaldson et al., 2004; Dougherty & Conway, 2008; Lewison, 2003; Weiss, 2007) Yes / NoŦ Qualitative
82 Improvement in the health behaviours and enthusiasm of patients and the general public (Kuruvilla et al., 2006; Lewison, 2003; Woolf, 2008) Yes / NoŦ Qualitative
83 Development and promulgation of guidelines and policies (Dougherty & Conway, 2008; Grant et al., 2000; S. R. Hanney et al., 2004; Brian Haynes & Andrew Haines, 1998; Kuruvilla et al., 2006; Lewison, 2003; Pang et al., 2003; W. Trochim et al., 2011) Yes / NoŦ Qualitative
84 Progress in personalised healthcare based on individual circumstances, e.g. genetic sequencing (Mankoff et al., 2004; Zerhouni, 2007) Yes / NoŦ Qualitative
85 Strengthening of service-client relationship (Woolf, 2008) Yes / NoŦ Qualitative
86 Research outcome translation into medical practice for improvement (Dougherty & Conway, 2008; Kessler & Glasgow, 2011) Yes / NoŦ Qualitative
87 Strengthening human protection through improved policies and better procedures (Weiss, 2007) Yes / NoŦ Qualitative
88 Improvement in regulation for introducing advanced technologies, tools and techniques (Lewison, 2003) Yes / NoŦ Qualitative
89 Compliance with ethical guidelines in research (Kuruvilla et al., 2006; Weiss, 2007) Yes / NoŦ Qualitative
90 Development of community-based awareness (Sarli et al., 2010) Yes / NoŦ Qualitative
91 Betterment of policies, guidelines and reimbursement systems for service providers (Sarli et al., 2010) Yes / NoŦ Qualitative
92 Increased empowerment of service users (Kuruvilla et al., 2006) Yes / NoŦ Qualitative
93 Support of research outcome and information for political decision and policy-making (Buxton & Hanney, 1996; B. Haynes & A. Haines, 1998; Kalucy et al., 2009; Pang et al., 2003) Yes / NoŦ Qualitative
94 Improved public awareness about the environment and culture, Public behaviour change and advocacy; Increased literacy and numeracy rates (Grant et al., 2010; Raftery et al., 2016) Yes / NoŦ Qualitative
95 Improvement in health literacy of health users and patients (Kuruvilla et al., 2006; Pang et al., 2003) Yes / NoŦ Qualitative
96 Improvement in health status of health users and patients (Dougherty & Conway, 2008; Kuruvilla et al., 2006; Weiss, 2007) Yes / NoŦ Qualitative
97 Establishment of public health, education or any other social schemes for a region (Woolf, 2008) Yes / NoŦ Qualitative
98 The decrease in social disparities (Heller & de Melo-Martín, 2009; Kuruvilla et al., 2006; Zerhouni, 2007) Yes / NoŦ Qualitative
99 Improvement in inter-organizational coordination for betterment in social sector (Sarli et al., 2010) Yes / NoŦ Qualitative
100 Increase in planning efforts and program implementation related to social issues (Heller & de Melo-Martín, 2009; Woolf, 2008) Yes / NoŦ Qualitative
101 Improvement in health users' awareness of health research (Weiss, 2007) Yes / NoŦ Qualitative
102 Increased empowerment and knowledge of health users about health issues (Kuruvilla et al., 2006; Weiss, 2007) Yes / NoŦ Qualitative
103 Better communication and perception of health users about health risks (Kuruvilla et al., 2006; Pober et al., 2001; Weiss, 2007) Yes / NoŦ Qualitative
104 Expansion of health education, literacy and other social advantages (Kuruvilla et al., 2006) Yes / NoŦ Qualitative
105 Improvement in Occupational Health and Safety Environment (Raftery et al., 2016; V. Williams, Eiseman, Landree, & Adamson, 2009) Yes / NoŦ Qualitative
106 Disuse of a law, rendering an existing method of drug approval obsolete (Maliha, 2018) Yes / NoŦ Qualitative
107 Commercialization of new discovery, product or technology (Kuruvilla et al., 2006; Lavis et al., 2003; Woolf, 2008) Yes / NoŦ Qualitative
108 Improvement in cost-reduction techniques and cost-effectiveness (Kuruvilla et al., 2006) Yes / NoŦ Qualitative
109 Improvement in economic gains such as increased employment and reduced health costs (Aries & Sclar, 1998; S. R. Hanney et al., 2004; Kalucy et al., 2009; Kuruvilla et al., 2006; RAND Europe, 2006; Weiss, 2007) Yes / NoŦŦ Mixed
110 Development of new job opportunities and growth in the specific economic sector or geographical region (Aries & Sclar, 1998) Yes / NoŦŦ Mixed
111 Development of medicinal products and therapeutic procedures (S. Hanney et al., 2007; Sarli et al., 2010) How many products or procedures are developed? Quantitative
112 Improvement in the business environment, commercialization, technology incubation, products and processes (Buxton & Hanney, 1996) Yes / NoŦ Qualitative
113 Reduction in work loss due to illness and increased benefits from a healthy workforce (CAHS, 2009) Yes / NoŦ Qualitative
114 Increased royalties, employment and licences; creative works commissioned (Grant et al., 2010) Yes / NoŦŦ Mixed
115 Creation of new knowledge about sustainable development and environmental protection for a better future (Kuruvilla et al., 2006) Yes / NoŦ Qualitative
116 Improved environmental quality and sustainability (Engel-Cox, Van Houten, Phelps, & Rose, 2008) Yes / NoŦ Qualitative
117 Reduced emissions; regeneration or arrested degradation of natural resources (Raftery et al., 2016) Yes / NoŦŦ Mixed
118 Improved awareness of environmental impacts and legislation for protection (CAHS, 2009) Yes / NoŦ Qualitative
119 Development of mitigation methods for reducing environmental hazards and losses from natural disasters (Grant et al., 2010) Yes / NoŦŦ Mixed

Ŧ In case of Yes, a description and justification are needed.

ŦŦ In case of Yes, detailed case study with quantitative evidence is needed.

ŦŦŦ Bibliometric indicator.
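Several of the bibliometric indicators listed above (rows 45-57) are simple functions of an author's per-paper citation counts. As an illustration only (not tied to any particular bibliographic database, and using hypothetical citation data), the following sketch shows how the h-Index (Hirsch, 2005), the G-Index (Egghe, 2006) and the M-Quotient (h-index divided by career length, also Hirsch, 2005) can be computed:

```python
def h_index(citations):
    """h = largest h such that h papers each have at least h citations."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

def g_index(citations):
    """g = largest g such that the top g papers together have >= g^2 citations."""
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(cites, start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g

def m_quotient(citations, career_years):
    """m = h-index divided by years since first publication."""
    return h_index(citations) / career_years

# Hypothetical per-paper citation counts for one author
papers = [25, 8, 5, 3, 3, 1, 0]
print(h_index(papers))          # 3 (only 3 papers have >= 4 citations)
print(g_index(papers))          # 6 (top 6 papers total 45 >= 36 citations)
print(m_quotient(papers, 6))    # 0.5
```

The contrast between h and g on the same data illustrates why the table treats them as distinct indicators: the g-index rewards a few highly cited papers that the h-index ignores.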

References

ARC. (2018). Excellence in Research for Australia (ERA) 2018: Submission Guidelines. Retrieved from http://www.arc.gov.au/sites/default/files/filedepot/Public/ERA/ERA%202018/ERA%202018%20Submission%20Guidelines.pdf
Aries, N.R., & Sclar, E.D. (1998). The economic impact of biomedical research: a case study of voluntary institutions in the New York metropolitan region. J Health Polit Policy Law, 23(1), 175-193. Retrieved from https://www.ncbi.nlm.nih.gov/pubmed/9522285
Buxton, M., & Hanney, S. (1996). How can payback from health services research be assessed? J Health Serv Res Policy, 1(1), 35-43.
CAHS. (2009). Making an Impact: A Preferred Framework and Indicators to Measure Returns on Investment in Health Research. Ottawa, Canada.
Dougherty, D., & Conway, P.H. (2008). The “3T’s” road map to transform US health care: the “how” of high-quality care. Jama, 299(19), 2319-2321. Retrieved from https://jamanetwork.com/journals/jama/articlepdf/181916/jco80037_2319_2321.pdf
Garfield, E. (2006). Citation indexes for science. A new dimension in documentation through association of ideas. 1955. Int J Epidemiol, 35(5), 1123-1127; discussion 1127-1128. doi:10.1093/ije/dyl189
Gordon, M., & Meadows, A. (1981). The dissemination of findings of DHSS funded research. Primary Communications Research Centre, Leicester: University of Leicester.
Grant, J., Brutscher, P.-B., Kirk, S.E., Butler, L., & Wooding, S. (2010). Capturing Research Impacts: A Review of International Practice. Documented Briefing. Rand Corporation.
Grant, J., Cottrell, R., Cluzeau, F., & Fawcett, G. (2000). Evaluating “payback” on biomedical research from papers cited in clinical guidelines: applied bibliometric study. Bmj, 320(7242), 1107-1111. Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC27352/pdf/1107.pdf
Hanney, S., Buxton, M., Green, C., Coulson, D., & Raftery, J. (2007). An assessment of the impact of the NHS Health Technology Assessment Programme. Health Technol Assess, 11(53), iii-iv, ix-xi, 1-180.
Hanney, S.R., Grant, J., Wooding, S., & Buxton, M.J. (2004). Proposed methods for reviewing the outcomes of health research: the impact of funding by the UK’s ‘Arthritis Research Campaign’. Health Research Policy and Systems, 2(1), 4. Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC503400/pdf/1478-4505-2-4.pdf
Harzing, A.-W.K. (2010). The publish or perish book. Melbourne: Tarma Software Research.
Haynes, B., & Haines, A. (1998). Barriers and bridges to evidence based clinical practice. Bmj, 317(7153), 273-276. Retrieved from https://www.ncbi.nlm.nih.gov/pubmed/9677226
Heller, C., & de Melo-Martín, I. (2009). Clinical and Translational Science Awards: can they increase the efficiency and speed of clinical and translational research? Academic Medicine, 84(4), 424-432.
Kalucy, E.C., Jackson-Bowers, E., McIntyre, E., & Reed, R. (2009). The feasibility of determining the impact of primary health care research projects using the Payback Framework. Health Research Policy and Systems, 7(1), 11.
Lewison, G. (2003). Beyond outputs: new measures of biomedical research impact. Paper presented at the Aslib Proceedings.
Lindsay, J.M. (2016). PlumX from plum analytics: not just altmetrics. Journal of Electronic Resources in Medical Libraries, 13(1), 8-17.
Maliha, G. (2018). Obsolete to Useful to Obsolete Once Again: A History of Section 507 of the Food, Drug, and Cosmetic Act.
Moed, H., De Bruin, R., & Van Leeuwen, T. (1995). New bibliometric tools for the assessment of national research performance: Database description, overview of indicators and first applications. Scientometrics, 33(3), 381-422.
Nathan, D.G. (2002). Careers in translational clinical research—historical perspectives, future challenges. Jama, 287(18), 2424-2427. Retrieved from https://jamanetwork.com/journals/jama/articlepdf/194890/JCO20035.pdf
Pang, T., Sadana, R., Hanney, S., Bhutta, Z.A., Hyder, A.A., & Simon, J. (2003). Knowledge for better health: a conceptual framework and foundation for health research systems. Bull World Health Organ, 81(11), 815-820. Retrieved from https://www.ncbi.nlm.nih.gov/pubmed/14758408
RAND Europe. (2006). Measuring the benefits from research. Cambridge, England. Retrieved from https://www.rand.org/content/dam/rand/pubs/research_briefs/2007/RAND_RB9202.pdf
Sung, N.S., Crowley, W.F., Jr., Genel, M., Salber, P., Sandy, L., Sherwood, L.M., ... Rimoin, D. (2003). Central challenges facing the national clinical research enterprise. Jama, 289(10), 1278-1287. Retrieved from https://www.ncbi.nlm.nih.gov/pubmed/12633190
Westfall, J.M., Mold, J., & Fagnan, L. (2007). Practice-based research—“Blue Highways” on the NIH roadmap. Jama, 297(4), 403-406. Retrieved from https://jamanetwork.com/journals/jama/articlepdf/205216/jco60049_403_406.pdf
Williams, V., Eiseman, E., Landree, E., & Adamson, D. (2009). Demonstrating and Communicating Research Impact. Preparing NIOSH Programs for External Review. Retrieved from
1. Arsalan M., Mubin O., & Al Mahmud A. (2019). Evidence-based nomenclature and taxonomy of research impact indicators. In Proceedings of the 17th International Conference on Scientometrics and Informetrics (ISSI 2019), 2-5 September 2019, Sapienza University of Rome, Italy.
2. Bennett S., Reeve R., Muir K., Marjolin A., & Powell A. (2016). Orienting your journey: An approach for indicator assessment and selection. Retrieved from https://www.csi.edu.au/media/Orienting_Your_Journey_-_Change_Collection.pdf
3. Bernstein A., Hicks V., Borbey P., & Campbell T. (2006). A framework to measure the impact of investments in health research. Paper presented at the Blue Sky II conference, What Indicators for Science, Technology and Innovation Policies in the 21st Century.
4. Buxton M., & Hanney S. (1996). How can payback from health services research be assessed? Journal of Health Services Research & Policy, 1(1), 35-43.
5. Panel on Return on Investment in Health Research. (2009). Making an Impact: A Preferred Framework and Indicators to Measure Returns on Investment in Health Research. Canadian Academy of Health Sciences, Ottawa, ON, Canada.
6. Deloitte Access Economics. (2011). Returns on NHMRC funded Research and Development. Commissioned by the Australian Society for Medical Research, Sydney, Australia.
7. Grant J., Brutscher P.B., Kirk S.E., Butler L., & Wooding S. (2010). Capturing Research Impacts: A Review of International Practice. Documented Briefing. Rand Corporation, 92.
8. Hayat K. (2014). Nomenclature and its importance in microbiology. Retrieved from https://medimoon.com/2014/04/nomenclature-and-its-importance-in-microbiology/
9. Heller C., & de Melo-Martín I. (2009). Clinical and translational science awards: Can they increase the efficiency and speed of clinical and translational research? Academic Medicine, 84(4), 424-432.
10. Kuruvilla S., Mays N., Pleasant A., & Walt G. (2006). Describing the impact of health research: A Research Impact Framework. BMC Health Services Research, 6(1), 134. doi:10.1186/1472-6963-6-134
11. Lewison G. (2003). Beyond outputs: New measures of biomedical research impact. Aslib Proceedings: New Information Perspectives, 55(1/2), 32-42.
12. Longabaugh R., Fowler D.R., Stout R., & Kriebel G. (1983). Validation of a problem-focused nomenclature. Archives of General Psychiatry, 40(4), 453-461.
13. Maliha G. (2018). Obsolete to Useful to Obsolete Once Again: A History of Section 507 of the Food, Drug, and Cosmetic Act. Food and Drug Law Journal, 73(3), 405.
14. REF. (2012). Panel criteria and working methods. Retrieved from
15. Trochim W., Kane C., Graham M.J., & Pincus H.A. (2011). Evaluating translational research: A process marker model. Clinical and Translational Science, 4(3), 153-162. doi:10.1111/j.1752-8062.2011.00291.x
16. United Way of America. (1996). Measuring program outcomes: A practical approach. Retrieved from https://digitalcommons.unomaha.edu/slceeval/47
17. Vinkler P. (2010a). The Evaluation of Research by Scientometric Indicators. Elsevier.
18. Vinkler P. (2010b). Indicators are the essence of scientometrics and bibliometrics. Scientometrics, 85(3), 861-866. doi:10.1007/s11192-010-0159-y
19. Vinner S., & Dreyfus T. (1989). Images and definitions for the concept of function. Journal for Research in Mathematics Education, 356-366.
20. VSNU, KNAW, & NWO. (2009). Standard Evaluation Protocol 2009-2015: Protocol for research assessment in The Netherlands. Retrieved from https://www.knaw.nl/en/news/publications/standard-evaluation-protocol-sep-2009-2015
21. Weiss A.P. (2007). Measuring the impact of medical research: Moving from outputs to outcomes. American Journal of Psychiatry, 164(2), 206-214. doi:10.1176/ajp.2007.164.2.206
22. Wellcome Trust. (2009). How we are making a difference: Assessment framework report summary. Retrieved from https://wellcome.ac.uk/sites/default/files/wtx059577_0.pdf
23. Wiegers S.E., Houser S.R., Pearson H.E., Untalan A., Cheung J.Y., Fisher S.G., ..., & Feldman A.M. (2015). A metric-based system for evaluating the productivity of preclinical faculty at an academic medical center in the era of clinical and translational science. Clinical and Translational Science, 8(4), 357-361. Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5351026/pdf/CTS-8-357.pdf


Copyright © 2023 All rights reserved Journal of Data and Information Science
