Research Papers

Substantiality: A Construct Indicating Research Excellence to Measure University Research Performance

  • Masashi Shirabe 1,†
  • Amane Koizumi 2
  • 1Institute for Liberal Arts, Tokyo Institute of Technology, Oookayama 2-12-1 W9-77, Meguro, Tokyo 152-8550, Japan
  • 2Center for Novel Science Initiatives, National Institutes of Natural Sciences, 2nd Floor, Hulic Kamiyacho Building 4-3-13, Toranomon, Minato, Tokyo 105-0001, Japan
†Masashi Shirabe (E-mail: ).

Received date: 2021-03-03

  Revised date: 2021-07-04

  Accepted date: 2021-07-06

  Online published: 2021-07-15

Copyright

Copyright reserved © 2021

Abstract

Purpose: The research performance of universities and research institutes has often been evaluated and understood along two axes: “quantity” (i.e. size or volume) and “quality” (i.e. what we define here as a measure of excellence that is considered theoretically independent of size or volume, such as clarity in diamond grading). The purpose of this article is to introduce a third construct of research performance, named “substantiality” (“ATSUMI” in Japanese), and to demonstrate its importance in evaluating and understanding research universities.
Design/methodology/approach: We take a two-step approach to demonstrate the effectiveness of the proposed construct by showing that (1) some characteristics of research universities are not well captured by indicators based on the conventional constructs (“quantity” and “quality”), and (2) “substantiality” indicators can capture them. Furthermore, by showing through simple statistical analysis that “substantiality” indicators appear linked to the scores in university reputation rankings, we reveal additional benefits of the construct.
Findings: We propose a new construct named “substantiality” for measuring research performance. We show that indicators based on “substantiality” can capture important characteristics of research institutes. “Substantiality” indicators demonstrate their “predictive powers” on research reputation.
Research limitations: The concept of “substantiality” originated from the game of Go (IGO); therefore, the ease or difficulty of accepting the concept is culturally dependent. In other words, while it is easily accepted by people from Japan and other East Asian countries and regions, it might be difficult for researchers from other cultural regions to accept.
Practical implications: There is no simple solution to the challenge of evaluating research universities’ research performance. It is vital to combine different types of indicators to understand the excellence of research institutes. Substantiality indicators could be part of such a combination of indicators.
Originality/value: The authors propose a new construct named substantiality for measuring research performance. They show that indicators based on this construct can capture the important characteristics of research institutes.

Cite this article

Masashi Shirabe, Amane Koizumi. Substantiality: A Construct Indicating Research Excellence to Measure University Research Performance[J]. Journal of Data and Information Science, 2021, 6(4): 76-89. DOI: 10.2478/jdis-2021-0029

1 Introduction

Many indicators have been proposed and developed in the field of scientometrics, even when limited to those measuring research performance. Although typologies of indicators have been proposed (e.g. Costas, van Leeuwen, & Bordons, 2010; Kosten, 2016; Okubo, 1997; Russell & Rousseau, 2009; Wilsdon, 2015), little attention has been paid to the constructs underlying indicators. In this study, we therefore focus on the constructs of indicators for evaluating research universities.
By focusing on the numbers and amounts of research grants from Japan’s largest scientific research grant system (the KAKENHI system), Shirabe (2019) proposed a method for evaluating a university’s research performance and clarified its advantages. The superiority or inferiority of research performance (of researchers, research organizations, research outputs, research activities, and so forth) has often been evaluated and understood along two axes (Kutlača, 2015; Russell & Rousseau, 2009): “quantity” (i.e. size or volume; e.g. Hayati & Ebrahimy, 2009; Sahel, 2011) and “quality” (i.e. what we define here as a measure of excellence that is considered theoretically independent of size or volume, such as clarity in diamond grading) (① In the context of research evaluation by citation counts, it is often said that the word “impact” should be used instead of “quality”; impact can be measured well by “relative citation indicators” (Vinkler, 1988). As Russell and Rousseau (2009) also point out, strictly speaking, “impact” does not immediately imply “quality.”). Shirabe, however, advocated the use of “substantiality” (“ATSUMI” in Japanese, and “hòu dù” in Chinese) as a third construct alongside quantity and quality. Because the focus of that paper was the measurement of university research performance through research grants, the construct itself was not discussed. Here, we focus on the construct of substantiality for research indicators and demonstrate that this third construct is also important, especially for evaluating and understanding research organizations such as universities.
The construct we propose is based on the concept of the quantity of something with more than a certain quality. Schematically speaking, it is the multiplication or integration of quantity and quality, which we can regard as an expression of excellence (or of excellent players). That is, we can call a research group or organization a center of excellence (COE) when such excellent players accumulate into a “group/organization having substantiality.” Porter (1998), discussing clusters and “centers of excellence,” indicated that research capability is expected to stem from the accumulation of excellent players. We therefore use substantiality as a construct of the research capability of organizations.
Since the h-index (Hirsch, 2005) was introduced, several indicators of the same type, that is, indicators combining the quantitative and qualitative aspects of research outputs, have been proposed in the field of scientometrics. For example, Egghe (2006) proposed the g-index as an improved version of the h-index, and a number of related indicators have followed (Jin et al., 2007); these indices are gaining popularity for understanding research performance. More recently, scientometricians have focused on the “performance index (p-index), which was able to effectively combine size (quantity) and impact (quality) of scientific papers (Prathap, 2011).” Some indicators based on the substantiality concept we propose are similar in content to such performance indices. (② We mainly use the institutional h-index with a 5-year publication window as the representative “substantiality” indicator in this paper; we call it the institutional h5-index.) Moreover, Ye and Rousseau (2010) proposed a new indicator by decomposing the h-index into factors, and one of those factors (the h-core) can also be regarded as a proxy for the construct we propose in this study.
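For concreteness, the two indices mentioned above can be sketched in a few lines of Python. The citation counts in the example are hypothetical; this is an illustrative sketch of the standard definitions, not code used in the study.

```python
def h_index(citations):
    """h-index (Hirsch, 2005): the largest h such that h publications
    have at least h citations each."""
    cs = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(cs, start=1) if c >= rank)

def g_index(citations):
    """g-index (Egghe, 2006): the largest g such that the top g
    publications together have at least g**2 citations."""
    cs = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(cs, start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g

# Hypothetical citation counts for five publications
citations = [10, 8, 5, 4, 3]
print(h_index(citations))  # 4
print(g_index(citations))  # 5
```

The g-index rewards a few very highly cited papers more than the h-index does, which is why it is described as combining quantity and quality differently.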
However, the targets of observation and evaluation addressed by substantiality indicators differ from those of these and similar indicators. While research performance indicators are used for many different purposes (Kosten, 2016), substantiality indicators are used for understanding and evaluating the “accumulation of excellence (excellent players)” in order to estimate the performance of research organizations.
In addition, we often count input/output indicators (e.g. Russell & Rousseau, 2009) or other types of indicators as substantiality indicators insofar as they can be regarded as proxies for the accumulation of excellent players, which can hardly be measured in a direct way (unlike, say, measuring the volume of research output by the number of publications). As we discuss later, the number of highly cited researchers, the number and amount of competitive research grants, and so forth are regarded as substantiality indicators. In contrast, input indicators such as the number of staff members and the number of Ph.D. students (Kosten, 2016) are not normally used as performance indicators.
In other words, we propose a group of indicators to quantify excellent players accumulated in each research organization under the concept of substantiality and confirm its effectiveness. Substantiality indicators are an example of one way of combining metrics to provide a unique view on research performance, and these indicators are examined in this study.

2 Examining “substantiality” as a third construct of research performance

2.1 Approach

In this paper, the following procedure shows that substantiality can play an important role as a construct for measuring the research performance of research universities. First, through an analysis of the recent improvements in the reputation rankings of Tsinghua and Peking Universities, we show that there is an aspect of the research performance of research universities that cannot be well captured by the conventional indicators of quantity and quality. Second, we clarify the characteristics of the substantiality construct proposed in this paper using thought experiments and real data, and confirm that it can measure characteristics of research performance that are difficult to capture with quantity and quality indicators. Afterward, we confirm that the aforementioned reputation ranking problem can at least be explained consistently by substantiality indicators. Finally, we demonstrate the “predictive power” of substantiality indicators on the research reputation of universities.

2.2 Problems: Research university features hard to grasp through quantity and quality indicators

The subtitle of the announcement of the Times Higher Education (THE) World University Rankings 2019 conveyed China’s leap forward: “China is now home to the best university in Asia, while France’s Sorbonne University is the highest-ranked newcomer in the table.” The opening paragraph of the text likewise conveyed the success of Chinese universities, especially that of Tsinghua University. The development of Chinese universities is no longer a projection but a reality that has arrived.
In 2016, China produced the largest number of academic publications of any country (③ National Science Foundation, “Science and Engineering Indicators 2018,” [Online]. Available: https://www.nsf.gov/statistics/2018/nsb20181/.). Although China may not yet have matched the United States and many European countries in terms of the “quality” of academic publications, it is not regarded as an “academically developing country.” The same can be said for Chinese universities: they are not regarded as developing universities. Just several years ago, however, Tsinghua and Peking Universities were viewed as promising universities in development, while they are currently considered to be among the world’s top universities.
For example, in the reputation rankings published by Times Higher Education, which are based on questionnaire surveys of selected academics and researchers, Tsinghua and Peking Universities ranked 37th and 49th in research reputation, respectively, in 2011 (④ https://www.timeshighereducation.com/world-university-rankings/2011/reputation-ranking#!/page/0/length/50/sort_by/scores_research/sort_order/asc/cols/undefined (retrieved on 28/06/2021).). However, they rose significantly to 14th and 22nd, respectively, in 2017 (⑤ https://www.timeshighereducation.com/world-university-rankings/2017/reputation-ranking#!/page/0/length/50/sort_by/scores_research/sort_order/asc/cols/stats (retrieved on 28/06/2021).). In other words, during this period both universities came to be recognized as top universities in terms of research. It is necessary to understand what factors made these two universities become recognized as top research universities.
As previously mentioned, there are two types of commonly used research performance indicators. One type measures quantity, such as the number of publications; the other measures quality based on citations, such as the ratio of highly cited publications (e.g. top 1% most cited publications) and the field-weighted citation impact (FWCI) (⑥ Elsevier (2014) [Online]. “SciVal metrics guidebook.” Available: http://www.elsevier.com/__data/assets/pdf_file/0006/184749/scival-metrics-guidebook-v1_01-february2014.pdf (dead link; retrieved on 27/12/2020).). There is no doubt that both types of indicators are important, but each has its own limitations.
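For readers unfamiliar with FWCI, it can be sketched as the ratio of a publication's actual citations to the citations expected for publications of the same field, year, and document type, averaged over an institution's publications. The Python sketch below is illustrative only: the citation counts and expected baselines are hypothetical, and Elsevier's production definition involves further details (e.g. handling publications assigned to multiple fields).

```python
def fwci(actual_citations, expected_citations):
    """Field-Weighted Citation Impact, sketched: for each publication,
    divide its actual citations by the world-average citations expected
    for its field, year, and document type, then average the ratios.
    A value of 1.0 means exactly the world average."""
    ratios = [a / e for a, e in zip(actual_citations, expected_citations)]
    return sum(ratios) / len(ratios)

# Hypothetical: three papers with their field/year expected baselines
actual = [12, 3, 5]
expected = [6.0, 3.0, 10.0]
print(round(fwci(actual, expected), 2))  # 1.17
```

Because it normalizes each paper by its field baseline, FWCI is a per-publication average, which is exactly why it is theoretically independent of output volume.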
Considering that the surveys for the reputation rankings are conducted in the first quarter of each year, we studied the top 50 universities in terms of research reputation using the previous year’s quantity and quality indicators over a five-year publication window. Among the 36 universities ranked higher than Tsinghua University in 2011, 26 published less, while 35 had higher FWCIs than Tsinghua University; conversely, no university ranked lower than Tsinghua University had a lower FWCI. In 2017, 12 of the 13 universities ranked higher than Tsinghua University had fewer publications, and only one of the 13 had a lower FWCI. Meanwhile, although 34 of the 36 universities ranked lower than Tsinghua University published less, only four of the 36 had a poorer FWCI. Peking University is in a similar situation.
Accordingly, neither indicator can explain the significant improvement in the rankings of the two universities. Although these two indicators show that the research performance of each university greatly improved during this period, it is hard to say that the quantity and quality of research outputs were decisive for their recognition as top universities. Rather, the problem becomes easier to understand if we focus on the concept of substantiality, which we propose as a third construct indicating research excellence for evaluating university research performance.

2.3 Characterizing substantiality and substantiality indicators

We have demonstrated that there is a feature of research universities that can hardly be grasped by the traditional quantity and quality indicators alone.
Now let us compare the research performance in scientific field X between University A and University B, as shown in Figure 1A. Each blue circle represents a single publication, and the number inside is its citation count. One publication from University A was cited 52 times, but its other publications were not cited much. University B, in contrast, has fewer publications and no highly cited publication, but its publications are well cited overall. Which university has better research performance in scientific field X? Opinions may reasonably differ: there are reasons to think that University A, with its highly cited publication, has better research performance, while University B could be perceived as better for its good citation average despite having no highly cited publication.
Figure 1. Features of “substantiality” indicators.
However, quantity and quality indicators, such as the number of publications, the total number of citations, and average citation measures such as FWCI, show that University A is superior to University B. From these conventional metrics, University B’s consistent research capabilities are not evident.
Therefore, we believe that there is a need for a new way to evaluate rich and profound research capabilities, such as those of University B. We define this research capability as substantiality, accumulation of excellence (i.e. excellent players) to produce a certain volume of publications at a certain level of quality.
The Japanese word ATSUMI (⑦ ATSUMI is a term used in IGO game (a famous East Asian board game) and is an important concept to decide strategy and tactics in the game. Basically, the term means influence and strength of an arrangement of game pieces, that is, stones (Yokogawa, Nishino & Mizuno 1995). Some people say that the stones represent soldiers (i.e., players) in the game world.) directly means “thickness.” Beyond that, in expressions such as the ATSUMI of the starting line-ups in baseball or football, the word represents an especially competent or deep roster from which to expect more runs and goals. Here, we decided to use substantiality (ATSUMI) to express the accumulated capabilities or quantity of a certain (good) quality in relation to the research performance of universities.
For example, the ARWU ranking (the Shanghai Jiao Tong University ranking) uses the number of highly cited researchers, which we can consider a substantiality indicator because it indicates the quantity of human resources with certain abilities at each university. We also consider that the number of highly cited publications (e.g. the number of top 1% most cited publications) can be regarded as representing substantiality as estimated from outcomes.
In addition, as another substantiality indicator, we propose the h-index computed over an institution’s publications from the past five years (so that only recent history enters the substantiality measure). We call this the institutional h5-index. Applying it to Universities A and B (Figures 1A and 1B) by listing their publications in descending order of citations, the h5-index is three for University A and six for University B. Using this index, the stable and consistent character of University B’s research performance can be clearly captured.
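The computation just described can be sketched in a few lines of Python. The citation lists below are hypothetical stand-ins for Figures 1A and 1B, chosen so that University A's single highly cited paper yields an h5-index of three while University B's consistently cited papers yield six.

```python
def institutional_h5_index(citations):
    """h-index over an institution's publications from the last five
    years: the largest h such that h of those publications have at
    least h citations each."""
    cs = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(cs, start=1) if c >= rank)

# Hypothetical citation counts echoing Figures 1A and 1B:
# A has one highly cited paper; B is well cited across the board.
university_a = [52, 3, 3, 2, 1, 1, 0, 0]
university_b = [9, 8, 8, 7, 6, 6]
print(institutional_h5_index(university_a))  # 3
print(institutional_h5_index(university_b))  # 6
```

Note that University A has more total citations (62 versus 44), so quantity-style indicators favor A, while the h5-index surfaces B's accumulated breadth of well-cited work.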
Applying the institutional h5-index to actual university data, Figure 1C lists publications from Kyushu University and Shinshu University in Japan in descending order of citation counts, up to a total of 150 publications. Kyushu and Shinshu Universities have the same number of highly cited publications, but Kyushu University’s second-tier publications have more citations. In contrast, Shinshu University shows a higher FWCI. Although FWCI is an excellent indicator for evaluating average quality, it disadvantages Kyushu University, which produces a larger number of publications. Using the institutional h5-index, however, it becomes clear that Kyushu University has more substantiality than Shinshu University. As an independent third axis, the h5-index does not necessarily correlate with FWCI; in fact, their correlations differ by field of science (Table 1).
Table 1 Correlations between substantiality indicators and FWCI in Japanese National Universities.
Scopus ASJC field	Correlation: h5-index & FWCI	Correlation: # of top 1% most cited publications & FWCI
Multidisciplinary 0.08 0.13
Agricultural and Biological Sciences -0.01 0.05
Arts and Humanities 0.15 0.13
Biochemistry, Genetics and Molecular Biology 0.60 *** 0.43 ***
Business, Management and Accounting 0.42 *** 0.22 *
Chemical Engineering 0.45 *** 0.38 ***
Chemistry 0.29 ** 0.25 *
Computer Science 0.14 0.19
Decision Sciences 0.13 0.46 ***
Earth and Planetary Sciences 0.41 *** 0.26 *
Economics, Econometrics and Finance 0.14 0.07
Energy 0.58 *** 0.39 ***
Engineering 0.15 0.12
Environmental Science 0.43 *** 0.39 ***
Immunology and Microbiology 0.06 0.13
Materials Science 0.05 0.11
Mathematics 0.15 0.24 *
Medicine 0.62 *** 0.57 ***
Neuroscience 0.41 *** 0.27 *
Nursing 0.48 *** 0.40 ***
Pharmacology, Toxicology and Pharmaceutics 0.59 *** 0.39 ***
Physics and Astronomy 0.40 *** 0.26 *
Psychology 0.09 0.04
Social Sciences 0.19 0.23 *
Dentistry 0.22 * 0.16
Health Professions 0.23 * 0.26 *

* p<0.05; ** p<0.01; *** p<0.001

Furthermore, we calculated the rank correlation coefficients among the indicators for research universities in Japan and the world (Table 2). Although the three substantiality indicators are strongly correlated with the number of publications, their correlations with quality indicators are quite different from those of the number of publications. Accordingly, substantiality indicators are intermediate in nature between quantity and quality indicators. Thus, a substantiality indicator such as the institutional h5-index evaluates an aspect of research performance that is different from those evaluated by the conventional indicators.
Table 2 Correlations between quantity, substantiality, and quality indicators for world and Japanese research universities.
# of pubs FWCI % of top 1% most cited % of top 10% most cited h5-index # of top 1% most cited
FWCI 0.31
% of top 1% most cited 0.31 0.95
% of top 10% most cited 0.31 0.96 0.94
h5-index 0.89 0.65 0.66 0.66
# of top 1% most cited 0.86 0.68 0.69 0.69 0.99
# of top 10% most cited 0.92 0.60 0.60 0.61 0.98 0.98

All coefficients are significant at the 0.1% level of significance.
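A rank correlation matrix of the kind shown in Table 2 can be reproduced with standard tools. The sketch below uses hypothetical indicator values for five universities; the column names and numbers are illustrative, not the study's data.

```python
import pandas as pd

# Hypothetical indicator values for five universities (illustration only)
df = pd.DataFrame({
    "n_pubs":   [9500, 7200, 4100, 2600, 1200],
    "fwci":     [1.8, 1.5, 1.6, 1.2, 0.9],
    "h5_index": [210, 180, 150, 90, 60],
})

# Spearman rank correlations among the indicators, as in Table 2
print(df.corr(method="spearman").round(2))
```

Even in this toy example the quantity indicator and the substantiality indicator rank the universities identically, while FWCI reorders them, mirroring the paper's observation that substantiality sits between quantity and quality.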

Regarding reputation rankings, substantiality explains the leaps of the two aforementioned Chinese universities better than other indicators. The correlations of substantiality indicators (i.e. the h5-index and the number of top 1% most cited publications (Kutlača, 2015)) with research reputation scores among the top 50 universities are clearly higher than those of the number of publications and FWCI (Figure 1D). Between 2011 and 2017, in terms of the number of top 1% most cited publications, Tsinghua University rose from 40th to 20th and Peking University from 39th to 17th among the top 50 universities by research reputation. We can interpret these results as showing that the presence of the two universities in academic communities became more noticeable owing to their increased substantiality. We pursue this finding further in the next section.

2.4 “Predictive power” of substantiality indicators on research reputation

We have introduced the concept of substantiality and proposed indicators to evaluate the accumulation of excellence in research organizations. We showed that there is a feature of research organizations that is difficult to grasp with previous indicators or combinations thereof, and we clarified that substantiality indicators capture this feature reasonably well. Even if substantiality indicators capture such a feature well, however, the question of what benefits they bring still needs to be answered.
We demonstrate the “predictive power” of substantiality indicators on the research reputation of organizations. We focused on the relations between substantiality indicators and research reputation because we had the following hypothesis (Figure 2) (⑧ As a matter of fact, it is impossible to verify this hypothesis because world reputation rankings have been announced in just the past nine years, and their scores and detailed ranks have been disclosed for only the top 50 universities.). Excellence (excellent players) accumulated in a research organization can attract many resources, and its researchers produce excellent results by using such resources. Then, after a certain period, such outputs from the organization spread among other researchers worldwide and the reputation increases.
Figure 2. Schematic relationships between “substantiality” and reputation.
Reputation is used as an evaluation item in two famous world university rankings (the THE and QS rankings), and its weight is virtually the largest among their evaluation items. Given that research evaluation by peer review remains important (e.g. Hicks et al., 2015), the emphasis placed on research reputation in the university rankings can be understood as a large-scale version of peer review.
Because research reputation merely summarizes subjective evaluations obtained through a comparatively large-scale survey, it is difficult for anyone outside the ranking agencies to understand exactly what it means. Furthermore, it is not clear how to raise research reputation.
We therefore analyze the relations between research reputation in Times Higher Education’s world reputation rankings (2011-2018) (⑨ https://www.timeshighereducation.com/world-university-rankings/2018/reputation-ranking (retrieved on 28/12/2020).) and three types of research indicators: quantity, quality, and substantiality. As the quantity indicator, we use the number of publications. As quality indicators, we use FWCI, the percentage of top 1% most cited publications, and the percentage of top 10% most cited publications. Lastly, as substantiality indicators, we use the institutional h5-index (⑩ A variant of the h-index in which the publication window is limited to five years.), the number of top 1% most cited publications, and the number of top 10% most cited publications.
We calculated Spearman’s rank correlation coefficients (⑪ The distribution of research reputation scores is so skewed that we have to use rank correlation coefficients.) of the indicators with research reputations (⑫ The analysis is conducted for universities in the top 50 of the world reputation rankings because only those universities’ scores and detailed ranks were published.). Because we assumed a time lag in the relations between these indicators and research reputation, the rank correlation coefficients were calculated by shifting the measurement timing from zero to 10 years. For example, for the research reputations published in 2018 (measured in 2017), we calculated Spearman’s rank correlation coefficients against the bibliometric indicators from 2007 to 2017. We then calculated the average correlation coefficient for each indicator at each lag, as well as the averages of the coefficients for each indicator type (⑬ Although it is hard to say this average value is statistically meaningful, the result clearly shows the relation between each type of indicator and research reputation.). The results are shown in Figures 3 and 4.
Figure 3. Averages of Spearman’s rank correlation coefficient for each indicator.
Figure 4. Averages of Spearman’s rank correlation coefficients for each type of indicator.
As shown in these figures, the results are consistent with our hypotheses. Substantiality indicators are the most strongly correlated with research reputation, and older indicators tend to be more strongly correlated with it. The time lag may be longer than expected; given that reputation in everyday life often depends on past actions, however, this result is not surprising. In any case, as we cannot secure enough trials, we cannot push the hypotheses too far. Still, their relations with research reputation are suggestive for the practical use of substantiality indicators.
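The lagged-correlation procedure described above can be sketched as follows. The data, university names, and function names are hypothetical; the Spearman formula used is the classical one, which assumes no tied values.

```python
def spearman_rho(x, y):
    """Spearman's rank correlation via the classical formula
    rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)); assumes no tied values."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

def lagged_correlations(reputation, indicator, rep_year, max_lag=10):
    """Correlate one year's reputation scores with an indicator measured
    0..max_lag years earlier. Both arguments map year -> {university: value}."""
    rep = reputation[rep_year]
    out = {}
    for lag in range(max_lag + 1):
        ind = indicator[rep_year - lag]
        unis = sorted(set(rep) & set(ind))
        out[lag] = spearman_rho([rep[u] for u in unis],
                                [ind[u] for u in unis])
    return out

# Hypothetical data: three universities, 2018 reputation vs. earlier h5-index
reputation = {2018: {"U1": 90.0, "U2": 70.0, "U3": 50.0}}
h5 = {year: {"U1": 300 + year, "U2": 200, "U3": 100}
      for year in range(2008, 2019)}
print(lagged_correlations(reputation, h5, 2018, max_lag=3))
```

In the real analysis each lag would be averaged over the available ranking years (2011-2018) before comparing indicator types, as described in the text.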

3 Conclusion

Quantity and quality indicators are often used for evaluating university research performance. In this paper, however, we showed that a third construct of research excellence, substantiality, is also effective for characterizing excellent research universities. Substantiality indicators are based on the concept of the quantity of something with more than a certain quality. The number of highly cited publications, the institutional h-index, the Nature Index, the number of highly cited researchers, and so forth can be regarded as substantiality indicators.
Substantiality indicators enable us to capture the characteristics of universities that have been overlooked by previous indicators. In addition, they appear to be linked with reputation scores in reputation rankings, that is, researchers’ collective evaluation of university research capabilities.
There is no simple way to evaluate universities’ research performance. Therefore, it is vital to combine different types of indicators to understand the excellence of research institutes and determine a way to allocate grants. Substantiality indicators could form part of such a combination of indicators.

Acknowledgments

The authors are grateful to Prof. John Green and Dr. Marat Fatkhullin for their valuable comments. The authors would like to thank Elsevier and its Japanese team, especially, Kana Takasaka, for their bibliometric data provision and relevant support. This research was partly supported by JSPS KAKENHI (16H06580; 17K01173) and JST/RISTEX research funding program “Science of Science, Technology and Innovation Policy” (JPMJRX19B3).

Author contributions

Masashi Shirabe (shirabe.m.aa@m.titech.ac.jp): Conceived and designed the analysis, analyzed the data and wrote the paper. Amane Koizumi (a.koizumi@nins.jp): Conceived the analysis and wrote the paper.
[1]
Alonso, S., Cabrerizo, F.J., Herrera-Viedma, E. et al. (2010). hg-index: A new index to characterize the scientific output of researchers based on the h- and g-indices. Scientometrics, 82, 391-400.


[2]
Costas, R., van Leeuwen, T.N., & Bordons, M. (2010). A bibliometric classificatory approach for the study and assessment of research performance at the individual level: The effects of age on productivity and impact. JASIST, 61, 1564-1581.

[3]
Hayati, Z., & Ebrahimy, S. (2009). Correlation between quality and quantity in scientific production: A case study of Iranian organizations from 1997 to 2006. Scientometrics, 80, 625-636.


[4]
Hicks, D., Wouters, P., Waltman, L. et al. (2015). Bibliometrics: The Leiden Manifesto for research metrics. Nature, 520, 429-431.


[5]
Hirsch, J.E. (2005). An index to quantify an individual’s scientific research output. Proceedings of the National Academy of Sciences of the United States of America, 102, 16569-16572.

[6]
Hirsch, J.E. (2007). Does the h index have predictive power? Proceedings of the National Academy of Sciences of the United States of America, 104, 19193-19198.

[7]
Jin, B., Liang, L., Rousseau, R., et al. (2007). The R- and AR-indices: Complementing the h-index. Chinese Science Bulletin, 52, 855-863.


[8]
Kosten, J. (2016). A classification of the use of research indicators. Scientometrics, 108, 457-464.


[9]
Kutlača, D., Babić, D., Živković, L., et al. (2015). Analysis of quantitative and qualitative indicators of SEE countries scientific output. Scientometrics, 102, 247-265.


[10]
Okubo, Y. (1997). Bibliometric Indicators and Analysis of Research Systems: Methods and Examples. OECD Science, Technology and Industry Working Papers 1997/1, OECD Publishing.

[11]
Porter, M.E. (1998). Clusters and the new economics of competition. Harvard Business Review, 76, 77-90.


[12]
Prathap, G. (2011). Quasity, when quantity has a quality all of its own—Toward a theory of performance. Scientometrics, 88, 555-562.


[13]
Russell, J., & Rousseau, R. (2009). “Bibliometrics and institutional evaluation,” Science and Technology Policy - Volume II, UNESCO, 42-64.

[14]
Sahel, J. (2011). Quality versus quantity: Assessing individual research performance. Science Translational Medicine, 3, 84cm13.

[15]
Shirabe, M. (2019). Measurement of research capacity using disciplinary agglomeration indicators: National university “rankings” in Japan. In Proceedings of the 17th international society of scientometrics and informetrics conference, 316-321.

[16]
Vinkler, P. (1988). An attempt of surveying and classifying bibliometric indicators for scientometric purposes. Scientometrics, 13, 239-259.


[17]
Wilsdon, J. (2015). The Metric Tide: Independent Review of the Role of Metrics in Research Assessment and Management. Sage Publications.

[18]
Yokogawa, T., Nishino, J., & Mizuno, Y. (1995). Macroscopic understanding of the situations in GO. Proceedings of 1995 IEEE international conference on fuzzy systems, 31-32.

[19]
Ye, F.Y., & Rousseau, R. (2010). Probing the h-core: An investigation of the tail-core ratio for rank distributions. Scientometrics, 84, 431-439.

