Purpose: Interdisciplinary research has become a critical approach to addressing complex societal, economic, technological, and environmental challenges, driving innovation and integrating scientific knowledge. While interdisciplinarity indicators are widely used to evaluate research performance, the impact of classification granularity on these assessments remains underexplored.
Design/methodology/approach: This study investigates how different levels of classification granularity—macro, meso, and micro—affect the evaluation of interdisciplinarity in research institutes. Using a dataset of 262 institutes from four major German non-university organizations (FHG, HGF, MPG, WGL) from 2018 to 2022, we examine inconsistencies in interdisciplinarity across levels, analyze ranking changes, and explore the influence of institutional fields and research focus (applied vs. basic).
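The granularity effect described above can be illustrated with a common interdisciplinarity indicator from this literature, Rao-Stirling diversity (Stirling, 2007). The sketch below is purely illustrative, not the study's actual method: the institute's publication shares, the category labels, and the pairwise distances are all invented, and the point is only that aggregating fine-grained categories into broader fields changes the indicator's value.

```python
# Hedged sketch: Rao-Stirling diversity computed for one hypothetical
# institute at two classification granularities. All numbers are invented.

def rao_stirling(proportions, distance):
    """Delta = sum over ordered pairs (i, j), i != j, of p_i * p_j * d_ij."""
    cats = list(proportions)
    return sum(
        proportions[i] * proportions[j] * distance[(i, j)]
        for i in cats for j in cats if i != j
    )

# Micro level: four fine-grained categories (hypothetical shares).
p_micro = {"a1": 0.4, "a2": 0.2, "b1": 0.3, "b2": 0.1}
d_micro = {}
for i in p_micro:
    for j in p_micro:
        if i != j:
            # Assumed distances: categories within the same macro field
            # (same leading letter) are close (0.2), across fields far (0.9).
            d_micro[(i, j)] = 0.2 if i[0] == j[0] else 0.9

# Macro level: the same output aggregated into two broad fields.
p_macro = {"a": 0.6, "b": 0.4}
d_macro = {("a", "b"): 0.9, ("b", "a"): 0.9}

print(round(rao_stirling(p_micro, d_micro), 3))  # 0.476
print(round(rao_stirling(p_macro, d_macro), 3))  # 0.432
```

Even with identical publications, the micro-level value exceeds the macro-level one because within-field variety is invisible at the coarser level, which is why institute rankings can shift with the classification scheme.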
Findings: Our findings reveal significant inconsistencies in interdisciplinarity across classification levels, with rankings varying substantially. Notably, the Fraunhofer Society (FHG), which performs well at the macro level, drops markedly in the rankings at the meso and micro levels. Normalizing interdisciplinarity by research field confirms that these declines persist. The research focus of institutes, whether applied, basic, or mixed, does not significantly explain the observed ranking dynamics.
Research limitations: This study considers only the publication-based dimension of institutional interdisciplinarity; other dimensions remain unexplored.
Practical implications: The findings provide insights for policymakers, research managers, and scholars to better interpret interdisciplinarity metrics and support interdisciplinary research effectively.
Originality/value: This study underscores the critical role of classification granularity in interdisciplinarity assessment and emphasizes the need for standardized approaches to ensure robust and fair evaluations.