Uncertain research country rankings. Should we continue producing uncertain rankings?

Alonso Rodríguez-Navarro
  • Departamento de Biotecnología-Biología Vegetal, Universidad Politécnica de Madrid, Avenida Puerta de Hierro 2, 28040, Madrid, Spain

Received date: 2025-03-21

Revised date: 2025-04-21

Accepted date: 2025-04-24

Online published: 2025-05-08

Abstract

Purpose: Citation-based assessments of countries’ research capabilities often misrepresent their ability to achieve breakthrough advancements. These assessments commonly classify Japan as a developing country, which contradicts its prominent scientific standing. The purpose of this study is to investigate the underlying causes of such inaccurate assessments and to propose methods for conducting more reliable evaluations.
Design/methodology/approach: The study evaluates the effectiveness of top-percentile citation metrics as indicators of breakthrough research. Using case studies of selected countries and research topics, the study examines how deviations from lognormal citation distributions impact the accuracy of these percentile indicators. A similar analysis is conducted using university data from the Leiden Ranking to investigate citation distribution deviations at the institutional level.
Findings: The study finds that inflated lower tails in citation distributions lead to undervaluation of research capabilities in advanced technological countries, as captured by some percentile indicators. Conversely, research-intensive universities exhibit the opposite trend: a reduced lower tail relative to the upper tail, which causes percentile indicators to overestimate their actual research capacity.
Research limitations: The findings are mathematical facts and are therefore self-evident.
Practical implications: The ratios of the number of papers in the global top 10% and top 1% by citation count to the total number of papers are commonly used to describe research performance. However, because citation patterns in individual countries and institutions deviate from the global pattern, these ratios can be misleading and lose their value as research indicators.
Originality/value: Size-independent research performance indicators, obtained as the ratios between paper counts in top percentiles and the total numbers of publications, are widely used by public and private institutions. This study demonstrates that the use of these ratios for research evaluations and country rankings can be highly misleading.
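
The mechanism described under Findings and Practical implications can be illustrated with a minimal simulation. The sketch below is hypothetical: the lognormal parameters and sample sizes are illustrative assumptions, not values taken from the paper. It builds a lognormal "global" citation distribution, then compares a country that follows the global pattern against one whose lower tail is inflated with additional lowly cited papers. Both countries place the same number of papers above the global top-10% threshold, yet the top-10%/total ratio of the second is diluted, undervaluing its research capacity exactly as described above.

```python
import numpy as np

rng = np.random.default_rng(42)

# "Global" citation counts, modeled as lognormal (a common assumption
# in scientometrics); the parameters are illustrative, not fitted.
world = rng.lognormal(mean=1.5, sigma=1.2, size=200_000)

# Global top-10% citation threshold.
p90 = np.quantile(world, 0.90)

# Country A: follows the global lognormal pattern.
country_a = rng.lognormal(mean=1.5, sigma=1.2, size=20_000)

# Country B: the same body of papers plus an inflated lower tail of
# lowly cited papers; its upper tail is unchanged.
low_tail = rng.lognormal(mean=0.3, sigma=0.8, size=6_000)
country_b = np.concatenate([country_a, low_tail])

for name, papers in [("A (lognormal)", country_a),
                     ("B (inflated lower tail)", country_b)]:
    n_top = int((papers >= p90).sum())
    print(f"Country {name}: top-10% papers = {n_top}, "
          f"Ptop10/total = {n_top / papers.size:.3f}")
```

With these assumed parameters, country B retains essentially the same count of top-10% papers as country A, but its size-independent ratio falls by roughly a quarter; a thinned lower tail would produce the opposite distortion, inflating the ratio, as the abstract reports for research-intensive universities.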

Cite this article

Alonso Rodríguez-Navarro. Uncertain research country rankings. Should we continue producing uncertain rankings? [J]. Journal of Data and Information Science, 0: 20250030-20250030. DOI: 10.2478/jdis-2025-0030
