Research Papers

Lone Geniuses or One among Many? An Explorative Study of Contemporary Highly Cited Researchers

  • Dag W. Aksnes 1,†
  • Kaare Aagaard 2
  • 1Nordic Institute for Studies in Innovation Research and Education (NIFU), Oslo 0608, Norway
  • 2Danish Centre for Studies in Research & Research Policy, Dept. of Political Science & Government, Midtjylland 8000, Denmark
† Corresponding author: Dag W. Aksnes (E-mail: dag.w.aksnes@nifu.no).

Received date: 2020-12-11

  Revised date: 2021-01-26

  Accepted date: 2021-02-19

  Online published: 2021-03-08

Copyright

Copyright reserved © 2021

Abstract

Purpose: The ranking lists of highly cited researchers receive much public attention. In common interpretations, highly cited researchers are perceived to have made extraordinary contributions to science. Thus, the metrics of highly cited researchers are often linked to notions of breakthroughs, scientific excellence, and lone geniuses.
Design/methodology/approach: In this study, we analyze a sample of individuals who appear on Clarivate Analytics' Highly Cited Researchers list. The main purpose is to juxtapose the characteristics of their research performance against the claim that the list captures a small fraction of the researcher population that contributes disproportionately to extending the frontier and gaining—on behalf of society—knowledge and innovations that make the world healthier, richer, sustainable, and more secure.
Findings: The study reveals that the highly cited articles of the selected individuals generally have a very large number of authors. Thus, these papers seldom represent individual contributions but rather are the result of large collective research efforts conducted in research consortia. This challenges the common perception of highly cited researchers as individual geniuses who can be singled out for their extraordinary contributions. Moreover, the study indicates that a few of the individuals have not even contributed to highly cited original research but rather to reviews or clinical guidelines. Finally, the large number of authors of the papers implies that the ranking list is very sensitive to the specific method used for allocating papers and citations to individuals. In the "whole count" methodology applied by Clarivate Analytics, each author gets full credit for the papers regardless of the number of additional co-authors. The study shows that the ranking list would look very different using an alternative fractionalised methodology.
Research limitations: The study is based on a limited part of the total population of highly cited researchers.
Practical implications: It is concluded that "excellence" understood as highly cited encompasses very different types of research and researchers, many of which do not fit with dominant preconceptions.
Originality/value: The study develops further knowledge on highly cited researchers, addressing questions such as who becomes highly cited and what type of research benefits when excellence is defined in terms of citation scores and specific counting methods.

Cite this article

Dag W. Aksnes, Kaare Aagaard. Lone Geniuses or One among Many? An Explorative Study of Contemporary Highly Cited Researchers[J]. Journal of Data and Information Science, 2021, 6(2): 41-66. DOI: 10.2478/jdis-2021-0019

1 Introduction

1.1 Background

The drive for research excellence has become ever more pervasive since the turn of the millennium. Funders, policymakers, stakeholders, research institutions, journals, and individuals all strive for excellence as the holy grail of academic life (Lamont, 2009; Moore et al., 2017; van Leeuwen et al., 2003). As the research policies of numerous countries increasingly emphasise excellence, they are consequently developing evaluation systems to identify universities, research groups, and researchers that can be said to be “excellent” (Danell, 2011). However, in this process, the notion of excellence has become a “contested concept”. While praised by many, others now see it as an empty term that may do more harm than good for the conduct and impact of research. By rewarding certain approaches, themes, collaboration patterns, and types of researchers at the expense of others, it is argued—for example—that it reinforces existing hierarchies, undervalues local and contextualized knowledge, and limits diversity (Ferretti et al., 2018; Gallie, 1955; Moore et al., 2017; Stilgoe, 2014; Vazire, 2017).
One of the main challenges is that excellence as broadly understood is produced and defined in a multitude of sites and by an array of social actors and, therefore, is difficult to capture in a standardized and consistent manner (Lamont, 2009). It may look different across different fields of research, between different review contexts, and between various national policy contexts (Langfeldt et al., 2020). Therefore, the discomfort with the concept of excellence is particularly pronounced when it becomes standardized and whenever proposals are made to measure it. This dilemma is well known in numerous contexts: simplification is an inescapable avenue in any attempt to represent complex concepts with numbers.
In this context, citation indicators have been brought forward and increasingly used as a potential means to quantify and measure research excellence. Since the turn of the millennium, the bibliometric community has been examining indicators that reflect the top of the citation distribution, such as the number of “highly cited” or “top” articles (Aksnes, 2003; Bornmann, 2014; Tijssen, Visser, & van Leeuwen, 2002; van Leeuwen et al., 2003). Such metrics are often explicitly linked to notions of scientific excellence and to the assumption that highly cited papers represent scientific breakthroughs or research of particular importance. In science policy contexts, they are frequently called upon to legitimate policy interventions, funding and publishing choices, and hiring and promotion decisions (Ferretti et al., 2018; Wilsdon, 2015).
However, the interest in highly cited publications and researchers is not new. Originally, such analyses gained prominence through the work of Eugene Garfield, who produced lists of most-cited researchers over the years. In particular, Garfield had an interest in using citation data to forecast Nobel Prize winners by identifying a group of researchers he termed “of Nobel class” (Garfield & Welljams-Dorof, 1992).
Lately, new versions of such rankings and lists of highly cited scientists have received much public attention, exemplified by Clarivate Analytics' Highly Cited Researchers list (Web of Science Group, 2018) and Nature's Rising Star Index (Dequilettes et al., 2018), although the latter index mainly focuses on institutions (www.natureindex.com). The Highly Cited Researchers list from Clarivate Analytics can be considered an extension of Garfield's work in recognizing investigators whose citation records position them in the top strata of influence and impact. Underlying these lists is a perception of highly cited scientists as individual geniuses who can be singled out for their extraordinary contributions. As such, these lists often contain strong value-laden statements and interpretations. For example, it is stated, "The 2018 Highly Cited Researchers from Clarivate Analytics is a contribution to the identification of that small fraction of the researcher population that contributes disproportionately to extending the frontier and gaining for society knowledge and innovations that make the world healthier, richer, sustainable, and more secure" (Web of Science Group, 2018).
However, the question is how valid this perception is in the age of globalization, research collaboration, and team science (Wagner, 2008). What types of research and what types of researchers do such lists actually capture and do they correspond to the underlying understanding of highly cited researchers (HCRs) as lone geniuses who single-handedly extend the frontier of our knowledge and make the world a better place? In order to assess this question, an investigation was conducted of 150 HCRs from five different countries (Norway, Denmark, Sweden, the Netherlands, and the UK) who appear on the Clarivate Analytics’ Highly Cited Researchers list (2018). From this outset, the present study examines the following key questions:
a) What characterises the scientific performance of the HCRs in terms of productivity and citation rates? Here, we analyze their highly cited papers as well as their total publication output. More specifically, we assess the extent to which the HCRs are mass producers of publications or whether they have produced a more limited number of publications during the time period analysed.
b) To what extent do the highly cited papers of HCRs originate from research conducted in small research teams versus large research consortia? Here, we analyze the role of collaboration generally and international collaboration specifically. These issues are addressed using co-authorship data.
c) The ranking list of Clarivate Analytics is based on a so-called "whole count" methodology, where each author gets full credit for papers regardless of the number of additional co-authors. This also holds for the calculation of citation indicators. To what extent would an alternative methodology using fractionalised author credit influence the identification of HCRs? This question is linked to the analysis of collaboration and the results on the individual author contributions of the papers.
By examining these questions, this study helps to develop further knowledge on HCRs. Who becomes highly cited, and what type of research benefits when excellence is defined in terms of citation scores and specific counting methods? Based on these investigations, the study contributes to the ongoing critical discussions concerning the concept of excellence measured from a bibliometric perspective. In doing so, we highlight the tension between common perceptions of scientific excellence (e.g. a focus on major scientific breakthroughs and discoveries) and the fact that highly cited research typically involves a large number of authors.
The article proceeds in the following manner: Section 1.2 presents a short review of the literature on highly cited researchers and publications. Section 2 presents our methods and data. Section 3 outlines the results, and Section 4 contains the discussion and conclusion.

1.2 Brief literature review

Citation distributions are very skewed. Most publications obtain no or only a few citations, while a small proportion become very highly cited (Aksnes, Langfeldt, & Wouters, 2019; Seglen, 1992). This phenomenon has been well known since bibliometrics began flourishing as a research field over half a century ago (Price, 1965). Naturally, the skewed distribution has drawn a lot of attention—for example, regarding the causes that may explain it (Aksnes, 2003), its implications for science policy and for the use of citations as performance indicators (van Leeuwen et al., 2003), and its consequences for the statistical properties of citation indicators (Schmoch, 2020).
Highly cited papers are usually considered as contributions with particularly substantial scientific influence or impact. Various sorts of justifications for this association have been provided. The most basic relates to the reference practice of scientists (Aksnes et al., 2019). When writing a paper, researchers refer to prior studies that have been relevant or useful for their own research. Papers that are highly cited have accordingly been useful for many more subsequent studies than articles that are barely cited or not cited at all. This argument dates back to Robert K. Merton's norms of science, according to which scientists are obliged to cite the work they rely on and credit contributions by others (Merton, 1979). However, numerous studies have shown that the referencing process is also influenced by a multitude of other factors (Bornmann & Daniel, 2008), which implies that the association between high citation counts and scientific impact is complex. This is particularly the case when citation counts are interpreted as measures of second-order concepts, like scientific importance or quality. Quality is a multidimensional concept, consisting of different dimensions such as solidity, plausibility, originality, and scientific value. According to a review by Aksnes et al. (2019), there is little evidence that citations reflect these latter dimensions, while scientific impact may be considered a more appropriate interpretational term. Second, justifications have been provided by comparing citation metrics with peer judgements of scientific quality. Over the years, a large number of such studies have been conducted (for an overview see e.g. Aksnes et al., 2019; Wouters et al., 2015). These have shown that the relationship is not unambiguous and the correspondence reported has been moderate in most studies. This implies that the empirical support for claiming that citations reflect the same aspects of scientific quality as peer review judgements is limited.
A few comparative studies have specifically addressed the issue for highly cited papers. For example, in a survey among authors of highly cited Dutch papers, Tijssen, Visser, and van Leeuwen (2002) found that the authors' ratings of these papers were mixed. Although there was a strong positive correspondence between perceived scientific quality and citation counts for the large majority, a minority (15%) did not perceive the quality of their papers to be of international "world class" level. In a similar Norwegian study by Aksnes (2006), the majority (74%) of the highly cited papers were considered to represent major contributions by the authors themselves, while the remainder were rated as intermediate or minor contributions. Further, Porter et al. (1988) reported that only about one-third of the articles nominated by authors as their best were also their most-cited publications. These findings indicate that highly cited articles do not necessarily represent major scientific achievements, at least not according to the authors themselves. Presumably, the correspondence would have been even weaker if stronger superlatives, such as "breakthrough science" or "cutting-edge research", had been used to characterise these papers. Nevertheless, it is argued by Tijssen, Visser, and van Leeuwen (2002) that highly cited articles can be used as a valid measure of academic scientific excellence, but only at aggregated publication levels.
As noted above, the interest in highly cited publications and HCRs goes back to Eugene Garfield, who—over a long period of time—investigated this issue in a Nobel prize context (Garfield, 1986; 1992). He found that almost all Nobel laureates were very highly cited within their fields by publishing highly cited articles or citation classics. Findings like these may also be considered as a kind of validation, as there is apparently a strong correlation between high citation counts and the achievement of the world's most prestigious scientific award. However, Garfield also identified numerous HCRs who did not obtain the Nobel prize—for example, the authors of the most cited article ever (Lowry et al., 1951). Therefore, one may turn the question around and ask: Have all HCRs contributed to breakthrough science? According to Garfield, the answer to this question is no, as "[…] citation frequency by itself is not adequately indicative of outstanding and influential publication" (Garfield, 1986) and, similarly, "[…] it would be absurd to claim that a researcher deserves the Nobel Prize simply because he or she is a citation superstar" (Garfield, 1992).
Overall, it appears reasonable to conclude that there remains limited support for the claim that indicators of highly cited papers and researchers reflect scientific excellence. Nor is it clear how they relate to various aspects or conceptualizations of excellence. Such indicators have nevertheless increasingly been used in the context of research evaluation. Already in 2001, the European Commission applied highly cited papers as an indicator when comparing the research performance of countries in the European Union (EU) (European Commission, 2001), and there were scattered examples of such applications even earlier (Martin & Irvine, 1983; Plomp, 1994). Today, such indicators are used in a variety of contexts, including university rankings such as the Leiden and Shanghai rankings (the Academic Ranking of World Universities) (see https://www.leidenranking.com/information/indicators and http://www.shanghairanking.com/ARWU-Methodology-2019.html). The latter ranking relies specifically on Clarivate Analytics' Highly Cited Researchers list, where HCRs are used as one of the indicators of the "quality of faculty" (Li, 2016). The increasing interest in highly cited papers and researchers may also be illustrated by a simple search in the online version of the WoS database. Here, the number of annual articles with "highly cited" in the topic field increased from fewer than 10 around the turn of the century to over 200 in 2019 (search conducted by the author 11.19.20, limited to the core edition of Web of Science, covering the Science Citation Index Expanded (SCIE), the Social Sciences Citation Index (SSCI), and the Arts & Humanities Citation Index (AHCI)). In a large number of these studies, highly cited papers/researchers are explicitly considered as indicators of excellence (see, e.g. Basu, 2006; Bornmann, Wagner, & Leydesdorff, 2015; Tijssen & Winnink, 2018) or even as an "objective measure of research excellence" (Bonaccorsi et al., 2017).

2 Data and methods

The study is based on the 2018 edition of Clarivate Analytics' Highly Cited Researchers list (Web of Science Group, 2018). This list contains about 6,000 highly cited individuals (HCRs) in 21 fields of the sciences and social sciences. Underlying the list is a methodology where highly cited articles (HC-articles) are identified as a first step. These are defined as publications that rank in the top 1% by citations for field and publication year in the Web of Science. Then, HCRs are identified as authors who have multiple top 1% papers. The researchers are ranked within their field according to the number of such articles. The 2018 edition is based on publications from the 2006-2016 period.
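To make the selection logic concrete, the following is a minimal sketch (not Clarivate Analytics' actual pipeline) of how top 1% papers per field and publication year, and authors with multiple such papers, could be identified. The record structure and the thresholding rule are assumptions for illustration only.

```python
from collections import defaultdict

# Hypothetical records: (paper_id, field, year, citations, list of author ids).
papers = [
    ("p1", "Clinical Medicine", 2010, 540, ["a1", "a2", "a3"]),
    ("p2", "Clinical Medicine", 2010, 3, ["a4"]),
    # ... one entry per indexed publication
]

# 1. Determine the top-1% citation threshold within each (field, year) stratum.
by_stratum = defaultdict(list)
for _, field, year, cites, _ in papers:
    by_stratum[(field, year)].append(cites)

thresholds = {}
for stratum, cite_counts in by_stratum.items():
    ranked = sorted(cite_counts, reverse=True)
    k = max(1, round(len(ranked) * 0.01))  # number of papers in the top 1%
    thresholds[stratum] = ranked[k - 1]

# 2. Flag highly cited (HC) papers; each author receives full ("whole") credit.
hc_count = defaultdict(int)
for _, field, year, cites, authors in papers:
    if cites >= thresholds[(field, year)]:
        for author in authors:
            hc_count[author] += 1

# 3. Authors with multiple HC papers are candidate highly cited researchers.
hcr_candidates = {a: n for a, n in hc_count.items() if n >= 2}
print(hcr_candidates)
```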
The present study encompasses researchers from five countries: Denmark, the Netherlands, Norway, Sweden, and the UK. Different countries were selected in order to obtain geographical dispersion in the sample of individuals analysed and to avoid bias caused by the specific research profile of an individual country. The selected countries differ in profile and size as research nations, and their research is strongly integrated into the international research front. Therefore, they are well suited as cases for an explorative study. The specific countries were further selected because they form the empirical backbone of a larger multi-year comparative study focusing on notions of research quality (www.r-quest.no). However, the individual country dimension is not specifically analysed in the paper.
The total number of HCRs from the selected countries differs significantly—from 547 for the UK to 22 for Norway, while the Netherlands has 194 HCRs, Denmark 74, and Sweden 64 (counting researchers by their main institutional affiliation). In order to have a comparable sample, we selected 30 HCRs from each of the five countries. In order to reach 30 individuals for Norway as well, we added eight researchers from previous HCR lists. For the other countries, we selected a random sample of individuals while taking into account the relative field distribution of the researchers. In the HCR list, individuals have been identified within each of the 21 Essential Science Indicators (ESI) fields. This classification was used to obtain a sample of individuals that represents the field distribution of the specific populations. However, due to the limited number of individuals within each field, we have not addressed this dimension in the analysis.
As is evident, the study is based on a small proportion of the total population of HCRs and the sample is selective both in terms of countries and individuals included from each of them. The reason for this is that it is rather time consuming to collect the data required for the analysis. Although the small sample size is a limitation of the study, the results obtained reveal interesting patterns that are likely to have more general validity.
While the list of HCRs is publicly available, this does not hold for their publications. Hence, we had to identify the publication output of the HCRs by performing searches in the WoS database. We used the online version of WoS and downloaded the bibliographic details of the publications. We limited the searches to the 11-year period from 2006 to 2016, which corresponds to the time frame underlying the HCRs list. Only publications indexed in the WoS Core Collection were included, limited to the following publication types: articles, reviews, and letters.
In the searches, we applied different spelling variants of the author names (e.g. combinations of full names and initials) in order to identify each publication set as accurately and completely as possible. However, during this process, cases of homonyms appeared (different individuals with the same author name). A few of these issues were resolved by considering the bibliographic details of the publications (e.g. their field classification and author affiliations). In other cases, the homonym issues were impossible to resolve; we then substituted the individual in question with another person from the list. Although we cannot be sure that the publication sets identified exactly correspond to those underlying the HCR list, this source of error is likely to be of marginal importance considering the purpose of our study.
We then identified the publications that were among the top 1% highly cited articles—that is, publications that compared with other publications in the same year and in the same field belong to the top 1% most frequently cited. These publications are the basis for Clarivate Analytics’ identification of HCRs. Here, we relied on statistics from the WoS database at the Centre for Science and Technology Studies (CWTS), Leiden University. In this way, two publication sets for each individual were provided, their highly cited publications and their other (remaining) publications.
As part of the analysis, two types of publications were identified, which appeared particularly frequently in the sample of highly cited papers compared with the overall subset: review papers and guidelines. Typically, the latter papers are contributions containing recommendations on how to diagnose and treat a specific medical condition (Glenny et al., 2009). In the WoS database, each item is classified into a particular "document type" category, of which "review" is one. There is a lack of commonly accepted definitions of review articles, and the classification applied in the WoS database has been shown to be inaccurate (Blumel & Schniedermann, 2020; Harzing, 2013). In particular, this refers to a criterion where all items containing over 100 references are automatically classified as review papers. Therefore, we checked whether the database classification of the highly cited articles into document types appeared correct. A few papers of the document type "article" were reclassified as "review" and vice versa. The identification of guidelines involved a semi-automatic approach using "guideline" as a search term (searching in the titles and abstracts of the publications). The results were manually checked, and publications erroneously identified as guidelines were reclassified.
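The semi-automatic step can be thought of as a simple keyword filter followed by manual checking. A minimal sketch with hypothetical records (the field names and example titles are assumptions for illustration):

```python
import re

# Hypothetical bibliographic records with title and abstract fields.
records = [
    {"id": "p1", "title": "2016 guidelines for the management of heart failure", "abstract": "..."},
    {"id": "p2", "title": "A genome-wide association study of height", "abstract": "..."},
]

pattern = re.compile(r"\bguidelines?\b", flags=re.IGNORECASE)

# Flag candidate guideline papers; false positives are weeded out manually afterwards.
candidates = [r["id"] for r in records
              if pattern.search(r["title"]) or pattern.search(r.get("abstract", ""))]
print(candidates)  # -> ['p1']
```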
The analyses are carried out at the level of individuals. This means that one person counts as one unit in the analyses, regardless of their publication volume. In this way, we avoid the analyses being skewed towards the most prolific individuals. Further, various bibliometric indicators were calculated for each individual. First, output measures—that is, the number of publications over time and by publication type. Second, collaboration indicators based on co-authorship data and the number of authors. Third, international collaboration, using data on the institutional affiliations of the individuals. Fourth, the journal profile—that is, the journals in which they publish most frequently.
Table 1 provides an overview of the sample of HCRs analysed by field and country affiliation. The researchers are distributed across all fields. The largest category is cross-field research. This category comprises researchers whose highly cited papers are distributed across several other field categories.
Table 1 Overview of sample of HCRs analyzed by field and country.
Field Denmark Netherlands Norway Sweden UK Total
Agricultural Sciences 1 1 1 3
Biology & Biochemistry 3 1 3 7
Chemistry 1 1
Clinical Medicine 2 2 5 2 2 13
Computer Science 1 1 2
Cross-Field 14 10 6 16 10 56
Economics & Business 1 1 2
Engineering 3 2 1 6
Environment/Ecology 1 1 1 3
Geosciences 3 1 6 1 1 12
Materials Science 1 1 1 3
Mathematics 1 1
Microbiology 1 1
Molecular Biology & Genetics 3 2 3 8
Neuroscience & Behaviour 1 2 1 1 5
Pharmacology & Toxicology 1 1 2
Physics 1 1 2
Plant & Animal Science 1 2 2 1 6
Psychiatry/Psychology 1 2 1 4
Social Sciences, general 1 4 3 1 2 11
Space Science 1 1 2
Total 30 30 30 30 30 150

3 Results

In the following section, the results of the empirical examinations are presented. First, the scholarly outputs of the HCRs are investigated with the main emphasis on productivity and number of highly cited articles. Then the collaboration patterns of the researchers are analysed. Finally, we investigate the citation profile of the HCRs.

3.1 Characteristics of the scholarly output of HCRs

Generally, the HCRs are extremely productive. On average, each individual published 151 articles during the period 2006-2016, while the median researcher published 121 articles. The range of the production varies from a minimum value of 14 to a maximum value of 730 articles, as depicted in Figure 1.
Figure 1. Number of articles during the 2006-2016 period by highly cited individuals and the proportion of highly cited articles.
Overall, the HCRs have published 2,259 highly cited papers (top one percentile), which corresponds to an average of 17 highly cited papers per person. This implies that, on average, 16.5% of the publication output of each researcher is within the top percentile. This is almost 17 times higher than the "expected" value of 1%. Nevertheless, the large majority of the publications of the HCRs do not reach the highly cited threshold. The proportion of highly cited papers varies significantly across the individuals (cf. Figure 1). In the most extreme case, 69% of the publication output is within the top percentile. However, in general, the proportion of highly cited papers decreases as the total number of papers per HCR increases.
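As a minimal illustration of how such proportions can be computed per researcher, the sketch below uses made-up counts, not data from the study; it also shows that the mean of the individual shares generally differs from the share computed over the pooled output.

```python
# Hypothetical per-researcher counts: (total publications 2006-2016, highly cited publications).
researchers = [(120, 25), (310, 12), (45, 14), (730, 20)]

shares = [hc / total for total, hc in researchers]
mean_share = sum(shares) / len(shares)                      # average of the individual shares
pooled_share = sum(hc for _, hc in researchers) / sum(total for total, _ in researchers)

print(f"mean of individual shares: {mean_share:.1%}")       # 14.6% with these made-up numbers
print(f"share of the pooled output: {pooled_share:.1%}")    # 5.9% with these made-up numbers
# Under the top-1% definition, the "expected" share for an average researcher is 1%.
```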
It must be noted that there is a certain overlap in the sample, as 351 HC-papers have been authored by more than one of the individuals analysed. Accordingly, the number of unique highly cited papers is 2,060, which implies that 17% of the papers are attributed to more than one HCR. This indicates that some of the HCRs belong to the same research groups/networks.
Figure 2 provides further information on the distribution of HCRs according to the number of highly cited papers. While the average is 17 highly cited articles during the period, 6% of the researchers have five or fewer such articles and 14% have more than 25. Thus, for simplification, the researchers may be divided into two different groups: Researchers who do not regularly contribute to high-impact research (i.e. having only a few papers with high impact) and researchers who steadily contribute to high-impact research.
Figure 2. Number of highly cited articles during the 2006-2016 period by highly cited individuals. Relative distribution of highly cited researchers.
The annual publication output of each HCR was analyzed separately. The results reveal that almost all researchers have published at least one paper each year (Figure 3). The majority of the researchers have also published at least one highly cited paper annually—the proportions vary from 53% to 79%. Thus, many researchers have published highly cited papers over longer periods.
Figure 3. Proportion of highly cited researchers who have published each year and who have published highly cited articles.
As described above, contributions in terms of review papers and guidelines were identified. A total of 15% of the highly cited papers were review articles. Thus, this document type appears rather frequently in the set of highly cited papers. At the level of individuals, 7% of the HCRs had mainly or entirely published highly cited review papers. Thus, their presence on the list of HCRs is due to the publication of such papers, most of them having a large number of additional authors.
Guidelines, the large majority of which are clinical guidelines, also appeared rather frequently and accounted for 6% of the highly cited articles. This proportion would have been significantly higher if only papers in the field of medicine had been used as the denominator in the calculation. When analyzing the highly cited papers of the researchers, 5% of the individuals had mainly or entirely published guidelines.
The high frequency of guidelines is also evident when analyzing title words in the sample of highly cited papers. Here, "guidelines" is one of the most frequently appearing title words, along with words such as "meta-analysis" and "review", which primarily relate to review papers.
Not unexpectedly, we find that many of the highly cited papers have been published in high-impact journals. Table 2 provides an overview of the journals that account for the largest number of articles. At the top of the list we find Nature and Science, together with other very prestigious journals such as Proceedings of the National Academy of Sciences of the USA (PNAS), the New England Journal of Medicine, and the Lancet. However, it must be noted that not all articles have been published in high-impact journals, as illustrated by the presence of PLoS One, one of the largest journals in the world (Brainard, 2019). Table 2 also presents similar statistics for the other papers of the HCRs. This journal list looks rather different and mainly comprises field-specific journals.
Table 2 Overview of the journal profile of the HCRs and the 10 journals in which they publish most frequently: number of publications, highly cited articles, and other articles.
Highly cited articles Other articles
Journal Number of publications Journal Number of publications
Nature 250 PLoS One 518
Science 157 Atmospheric Chemistry and Physics 297
Nature Genetics 135 Nature Genetics 234
Proceedings of The National Academy of Sciences of the USA 99 Annals of the Rheumatic Diseases 207
European Heart Journal 80 European Heart Journal 206
New England Journal of Medicine 65 International Journal of Cancer 201
PLoS One 56 Astrophysical Journal 181
Lancet 52 Journal of the American College of Cardiology 146
Nucleic Acids Research 49 IEEE Transactions on Power Electronics 142
Nature Communications 47 PLoS Genetics 135

3.2 Collaboration profiles of the HCRs

Next, the highly cited articles of the HCRs have been analyzed using data on the number of co-authors. Generally, the highly cited papers have a very large number of authors, on average 59, and in the most extreme cases over 500. Papers with more than 500 authors account for 1.7% of the articles.
For each individual, we calculated the average number of authors of their highly cited papers. The distribution is illustrated in Figure 4. There are large variations in the distribution. Overall, 15% of the HCRs published highly cited papers with an average of over 100 authors, and 7% with 50-100 authors. Thus, these scientists tend to be members of large research consortia and their presence on the list can be attributed to such memberships. There is also a significant share of researchers who publish with very large research groups—20-50 authors (20% of the individuals). However, there are also researchers who have fewer co-authors on average. The highly cited papers of 9% of the HCRs have five or fewer authors.
Figure 4. Relative distribution of individuals by number of author groups: highly cited articles and other articles.
Figure 5 depicts a scatter plot where the average number of authors of the HC papers is compared with the average number of authors of the other papers of the HCRs. The trend line shows the relationship between the data points and is based on a linear least squares regression. Generally, there is a rather strong positive linear correlation (R² = 0.4), which implies that the collaboration profiles are similar: Authors publishing highly cited papers with many co-authors also tend to publish other papers with many co-authors. Nevertheless, certain individuals deviate from this pattern. The slope of the trend line is 3.6.
Figure 5. Distribution of individuals by average number of authors per article: highly cited articles and other articles.
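For readers who wish to reproduce this kind of trend line, a minimal sketch of a linear least squares fit; the per-individual averages below are made-up values, not the study's data.

```python
import numpy as np

# Hypothetical per-individual averages (made-up values):
# x = mean number of authors on the researcher's other papers
# y = mean number of authors on the researcher's highly cited papers
x = np.array([4.2, 8.0, 15.5, 30.0, 60.0, 6.5, 12.0])
y = np.array([10.0, 35.0, 50.0, 120.0, 200.0, 20.0, 55.0])

slope, intercept = np.polyfit(x, y, 1)   # ordinary least squares, degree-1 polynomial

y_hat = slope * x + intercept
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print(f"slope = {slope:.2f}, R^2 = {r_squared:.2f}")
```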
Figure 6 depicts the difference in the average number of authors between the highly cited papers and the other papers of each individual. For approximately one-third of the HCRs, there is a difference of over 15 authors. For this group, the collaboration pattern of the highly cited research differs significantly from that of their other research, suggesting that this research is of a different kind—for example, involving participation in research consortia.
Figure 6. Difference in the average number of authors for highly cited articles and other articles by individual.
Overall, there are 33,009 unique author names (last name and initials) contributing to the HC-papers, although the number of distinct individuals is probably somewhat lower due to different variants and spellings of author names. This confirms that, overall, the highly cited research of the HCRs is very far from representing individual achievements. Nevertheless, the HCRs appear as first or last author in 31% of the papers, suggesting that they may have had a key role in approximately one-third of the papers. This assumes conventions for authorship in which the first author has undertaken the largest portion of the research and the last author is the leader of the investigation—an assumption that is not likely to hold for all papers analysed (Waltman, 2012).
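A minimal sketch of how such figures can be derived from author bylines; the names and the first/last-author convention are assumptions for illustration only.

```python
# Hypothetical byline data: (author names in byline order, name of the HCR being analysed).
papers = [
    (["Smith J", "Aksnes DW", "Lee K"], "Aksnes DW"),
    (["Aksnes DW", "Brown T"], "Aksnes DW"),
    (["Chen L", "Garcia M", "Aagaard K", "Olsen P"], "Aagaard K"),
]

# Count distinct author-name strings across all highly cited papers.
unique_names = {name for authors, _ in papers for name in authors}

# Share of papers where the HCR is first or last author (a rough proxy for a key role).
key_role = sum(1 for authors, hcr in papers if authors[0] == hcr or authors[-1] == hcr)
share = key_role / len(papers)

print(len(unique_names), f"{share:.0%}")  # -> 8 33%
```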
Based on the data of the institutional affiliation of the authors of highly cited papers, international collaboration profiles were analysed. A large majority of highly cited papers involve international collaboration. Nevertheless, 22% of the papers did not have such collaboration and the authors were affiliated with only one country.
The distribution of HCRs according to average number of author-affiliated countries of their highly cited papers is presented in Figure 7. A large proportion of the researchers have published such papers with co-authors from numerous different countries. In total, 13% of the HCRs have an average of over 10 different countries contributing to their highly cited papers, while 27% have 5-10 countries. The group where the highly cited papers involve international collaboration to a small extent is merely 14% (1-2 countries). In conclusion, the publications of the HCRs tend to be the results of research involving scientists from several different countries.
Figure 7. Relative distribution of individuals by number of author affiliated countries: Highly cited articles and other articles.
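A minimal sketch of how the country counts behind this distribution can be derived from affiliation data; the country lists below are hypothetical.

```python
# Hypothetical papers of one HCR: each entry lists the countries of the author affiliations.
papers = [
    ["Norway", "Denmark", "UK", "Germany", "USA"],
    ["Norway"],
    ["Norway", "Sweden", "Netherlands"],
]

countries_per_paper = [len(set(countries)) for countries in papers]
avg_countries = sum(countries_per_paper) / len(papers)
intl_share = sum(1 for c in countries_per_paper if c > 1) / len(papers)

print(avg_countries)        # 3.0 distinct countries per paper on average
print(f"{intl_share:.0%}")  # 67% of the papers involve international collaboration
```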
In Figure 8, the profile of the highly cited papers is compared with the profile of the other papers for each individual. In general, the highly cited papers involve multilateral collaboration to a much larger extent than the other papers. The trend line shows the relationship between the data points and is based on a linear least squares regression; the slope of the line is 2.9.
Figure 8. Distribution of individuals* by average number of author affiliated countries: Highly cited articles and other articles.

*) One individual (outlier) is excluded from the figure for visibility reasons.

3.3 The citation profile of the HCRs

The ranking list of Clarivate Analytics is based on a so-called “whole count” methodology, where each researcher gets full credit for the papers regardless of the number of additional co-authors. This also holds for the calculation of citation indicators. Although this method remains widely used, the application of fractional contribution measures has become more common (Aksnes et al., 2019). In the simplest version, this implies that each unit of analysis would be credited with a fraction of an article based on the number of contributing authors.
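A minimal sketch contrasting the two counting methods for one researcher's highly cited papers; the citation and author counts are hypothetical.

```python
# Hypothetical highly cited papers of one researcher: (citations received, number of authors).
hc_papers = [(1200, 300), (450, 25), (300, 4)]

whole_count = sum(cites for cites, _ in hc_papers)
fractional_count = sum(cites / n_authors for cites, n_authors in hc_papers)

print(whole_count)                  # 1950: full credit regardless of co-authors
print(round(fractional_count, 1))   # 97.0: 1200/300 + 450/25 + 300/4
print(round(fractional_count / whole_count, 3))  # 0.05: fractionalisation removes ~95% of the credit
```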
The highly cited papers have been investigated in order to analyse the implications of applying these two methods. Here, the total number of citations to the highly cited papers has been counted using both whole and fractionalised methods. Then, an average for each individual has been calculated. The results are illustrated in Figure 9.
Figure 9. Total number of citations for the highly cited articles of each HCR-individual: whole and fractionalised calculations.
Overall, by using the fractionalised methodology, the citation numbers of the HC-papers are reduced by 89%. This ratio varies significantly across individuals. In the most extreme case, a researcher contributing to highly cited papers receiving over 12,000 citations would be credited with only 77 citations using the fractionalised methodology. This corresponds to a ratio of 0.006. On the other hand, there are a few researchers with a corresponding ratio of 0.4. Thus, it is evident that the use of whole versus fractionalised counts has very large implications when it comes to HCRs and, in turn, also on the ranking list and identification of HCRs.

4 Discussion and conclusion

This explorative study is based on a small proportion of the total population of HCRs that were selected from five countries. Thus, the extent of the general validity of our findings across the global science system as a whole remains an open question. Nevertheless, we expect that the patterns identified herein are likely to be rather representative and, thus, have more general relevance. Most importantly, we see no reason to expect that the HCRs of these particular countries should have distinctively different characteristics than the HCRs from other countries. Based on this assumption, we discuss the results of this paper and their implications.
This study has shown that the researchers who have been identified as highly cited by Clarivate Analytics are not only extraordinary when it comes to the publication of highly cited papers, they also tend to be extremely prolific. Thus, generally, a high production of publications appears to be a prerequisite for appearing on the list. Both in terms of productivity and contribution to highly cited papers, HCRs tend to be very different from the more ordinary or “mundane” researchers. For example, while the HCRs, on average, have published 13.7 articles annually, a previous Norwegian study revealed that a full professor contributed to 2.6 articles each year, on average (Piro, Aksnes, & Rorstad, 2013).
However, the analysis of scientific collaboration using co-authorship data also shows that the highly cited papers tend to have a very large number of authors. A considerable number of HCRs appear to be members of large research consortia (over 50 authors per paper) and their presence on the list is likely primarily related to such memberships. We estimate that this group accounts for over 20% of the researchers (cf. Figure 5). In addition, there are also categories of researchers where the HCR status can be attributed to research in large (10-20 authors per paper) or very large research groups (20-50 authors per paper)—both categories account for approximately 20% of the researchers. Hence, less than 10% of the researchers can ascribe their HCR-status to research performed in small research groups (1-5 researchers). As a consequence, the general picture is that in order to be highly cited according to the criteria given by Clarivate Analytics, it is important to contribute frequently to big science publications.
A central implication of these findings is that the perceptions of HCRs as “top talents” or “star scientists” with important individual contributions are likely to be misleading. When the highly cited works are the results of large collaborative efforts, it is problematic to attribute the achievements to individual members of the groups, as the individual contribution is likely to be limited. The perceptions of HCRs would better fit the category in which the achievements are related to work performed in small research groups, but this category is rather small and accounts for a very small proportion of the HCRs. Moreover, even in this case, it would require a few strong assumptions to equate high citation rates with individual genius. These findings are also in line with the viewpoints put forward by Simonton (2013) in a commentary in Nature: “Natural sciences have become so big, and the knowledge base so complex and specialized, that much of the cutting-edge work these days tends to emerge from large, well-funded collaborative teams involving many contributors”.
In contrast, the list of researchers of Clarivate Analytics can be perceived as being closely linked to the traditional view in the history and sociology of science focusing on the role of individual geniuses in scientific discovery. This perception has been (and still is) sustained through the celebration of individual scientists in awards such as the Nobel prize (Wuchty, Jones, & Uzzi, 2007). However, science has rarely been a strictly individual enterprise (Lariviere et al., 2015; Shapin, 1989). In particular, in the post-World War II era, the incidence of collaboration has grown, and "big science" has come to play an increasingly important role (Hallonsten, 2016). This development has been attributed to the increasing specialisation of science, where the complexities of research questions often require interdisciplinary approaches, as well as to new communication technologies (Wu, Wang, & Evans, 2019). In such contexts, it may often be difficult or even misleading to attribute major achievements or breakthroughs to individuals. Therefore, one may argue that any list of HCRs reproduces a concept of science and scientific progress that is fundamentally anachronistic.
Further, recent research has revealed that small and large teams contribute to progress in science in different ways, where small teams more often than large ones produce results that are disruptive to science by generating new directions for research (Wu, Wang, & Evans, 2019). According to the authors, this emphasizes the importance of developing science policies to support diverse team sizes. These results are consistent with another study showing that prize-winning papers of Nobel laureates are more likely to be written by fewer than three authors (Li et al., 2020). Contrasted with the findings in this study, this suggests that the research of the HCRs is of a particular kind and does not represent the entire spectrum of major scientific achievements.
When individuals are ranked by citation numbers, this may produce the impression that the researchers themselves have been cited. However, citations are given to publications and not to individuals. From this perspective, any exercise linking citations to individuals may be seen as inherently problematic, whether for providing ranking lists or conducting aggregate analyses, such as in a recently published article on top-cited scientists (Nielsen & Andersen, 2021). Since the ranking lists receive wide attention in society, they may strengthen the misconception of science described above. By focusing on and celebrating individuals, science as a collective endeavour is neglected. As shown in our study, behind the highly cited papers analysed we found no fewer than 33,000 other scientists who were involved but who have not been recognized.
In turn, this is also related to a central but hitherto unresolved methodological challenge. When publications, and the highly cited ones in particular, have a large number of authors, much depends on the selected counting method (Aksnes, Schneider, & Gunnarsson, 2012). The question is how individuals should be credited for multiple-author publications. This study revealed that the choice of whole versus fractional calculations has a large impact on the results. Some of the HCRs have received relatively few citations when measured fractionally. Thus, the ranking list of Clarivate Analytics would look very different when employing a fractionalized methodology, but such a list could also be argued to be problematic.
Both methods are commonly applied in bibliometric studies and they are often seen as complementary: The whole count yields the number of papers in which an author “participated”. A fractional count gives the number of papers that are “creditable” to the author, assuming that all authors made equal contributions to a co-authored paper and that all contributions add up to one (Moed, 2005). While these are the basic methods, numerous other counting methods—for example, by giving more credit to the first-authors—have been introduced in the literature (Gauffriau & Larsen, 2005). Moreover, various normalisation procedures and ways of accounting for networks have been suggested (see, e.g. Batagelj & Cerinšek, 2013; Leydesdorff & Park, 2017; Perianes-Rodriguez, Waltman, & van Eck, 2016). It is beyond the scope of this paper to discuss this issue further here, but we note that the issue is particularly urgent for highly cited papers, as these generally have a large number of authors. Therefore, the choice of counting methods would strongly influence the list of individuals who appear as highly cited, and another method would have yielded an alternative list (Docampo & Cram, 2019).
As described in the introduction, highly cited papers are often linked to notions of scientific breakthroughs/discoveries or research of particular importance. Our study has shown that for a part of the HCRs, their presence on the list can be attributed to the publication of review papers or clinical guidelines. This is not surprising, as it has been known for a long time that review articles, on average, are more cited than regular articles and are over-represented among highly cited papers (Aksnes, 2003; Glänzel & Czerwon, 1992). In addition, this challenges the interpretation of the HCRs as individuals who are contributing to scientific breakthroughs. Although review papers may have an important role in the scientific communication process, they typically sum up previous research on a particular topic and do not present new empirical findings. Therefore, it has also been argued that the inclusion of such papers invalidates the use of citations as performance indicators (Seglen, 1997). A similar argumentation might be used concerning clinical guidelines. The number of individuals whose HCR status is mainly related to publishing review papers or guidelines is limited; however, the inclusion of these people appears problematic, given the interpretational context of the HCR ranking.
Rather than arguing that the highly cited articles of the HCRs represent important breakthroughs, it appears more appropriate to claim that these papers in most cases provide results with general relevance or with applicability in numerous different fields (Aksnes, 2003). Because of this, they obtain citations from publications in many different research areas. This has implications for what type of research is identified through indicators based on highly cited publications and, in turn, for the notion of excellence associated with these indicators. Most importantly, it creates a disadvantage for important research with a more local or contextual scope. Moreover, articles with many authors are also likely to receive more citations simply as a consequence of the larger network associated with the work (Wallace, Lariviere, & Gingras, 2012).
In conclusion, our study has shown that “excellence” understood as highly cited encompasses very different types of research and researchers, of which many do not fit with the dominant preconceptions of excellence. In the light of our findings, it appears problematic to sustain the strong value-laden interpretations associated with the list of Clarivate Analytics. The early insights by Garfield that high citation frequency by itself is not indicative of outstanding research appear to have been lost on the way. Moreover, these lists contribute to creating a misconception of science and the scientific process by focusing on individuals and individual achievements, when in fact a very large number of other scientists have also contributed to their research. The HCRs may often not even have had a leading role in the research underlying the papers.
From a policy perspective, these findings also raise important questions regarding the dominant recognition and reward structures in the science system. As has been noted repeatedly in relation to the Nobel Prize, individual acclaim of what is essentially a team effort is an anachronistic way of recognizing scientists for their work. Rather, it may distort the nature of science, overlook many of its important contributors, and create incentives that are dysfunctional for scientific collaborations. Therefore, a key question is whether ranking lists such as Clarivate Analytics' Highly Cited Researchers list should be abandoned altogether and replaced by approaches that better capture and pay tribute to the collaborative efforts and achievements of teams. Instead of reinforcing a flawed reward system in which the winner takes all and the contributions of the many are neglected (Casadevall & Fang, 2013), we need methods to highlight and reward scientific contributions that more accurately reflect and underpin contemporary research practices.

Acknowledgements

The research was funded by the Research Council of Norway, grant number 256223 (the R-QUEST centre). We are thankful to the R-QUEST team for input and comments on the paper, in particular to Gunnar Sivertsen, who developed the original research idea, and to Thed van Leeuwen for providing bibliometric reference values.

Author contributions

Dag W. Aksnes (dag.w.aksnes@nifu.no): Conceptualization (Lead), Data curation (Lead), Formal analysis (Lead), Funding acquisition (Equal), Investigation (Lead), Methodology (Lead), Project administration (Lead); Kaare Aagaard (ka@ps.au.dk): Conceptualization (Supporting), Data curation (Supporting), Formal analysis (Supporting), Funding acquisition (Supporting), Investigation (Supporting), Methodology (Supporting), Project administration (Supporting).
[1]
Aksnes, D.W. (2003). Characteristics of highly cited papers. Research Evaluation, 12(3), 159-170.

[2]
Aksnes, D.W. (2006). Citation rates and perceptions of scientific contribution. Journal of the American Society for Information Science and Technology (JASIST), 57(2), 169-185.

[3]
Aksnes, D.W., Langfeldt, L., & Wouters, P. (2019). Citations, citation indicators, and research quality: An overview of basic concepts and theories. SAGE Open, 9(1). doi: 10.1177/2158244019829575

[4]
Aksnes, D.W., Schneider, J.W., & Gunnarsson, M. (2012). Ranking national research systems by citation indicators. A comparative analysis using whole and fractionalised counting methods. Journal of Informetrics, 6(1), 36-43. doi: 10.1016/j.joi.2011.08.002

[5]
Basu, A. (2006). Using ISI's ‘Highly Cited Researchers' to obtain a country level indicator of citation excellence. Scientometrics, 68(3), 361-375. doi: 10.1007/s11192-006-0117-x

[6]
Batagelj, V., & Cerinšek, M. (2013). On bibliographic networks. Scientometrics, 96(3), 845-864.

[7]
Blumel, C., & Schniedermann, A. (2020). Studying review articles in scientometrics and beyond: A research agenda. Scientometrics, 124(1), 711-728. doi: 10.1007/s11192-020-03431-7

[8]
Bonaccorsi, A., Cicero, T., Haddawy, P., & Hassan, S.U. (2017). Explaining the transatlantic gap in research excellence. Scientometrics, 110(1), 217-241. doi: 10.1007/s11192-016-2180-2

[9]
Bornmann, L. (2014). How are excellent (highly cited) papers defined in bibliometrics? A quantitative analysis of the literature. Research Evaluation, 23(2), 166-173. doi: 10.1093/reseval/rvu002

[10]
Bornmann, L., & Daniel, H.D. (2008). What do citation counts measure? A review of studies on citing behavior. Journal of Documentation, 64(1), 45-80. doi: 10.1108/00220410810844150

[11]
Bornmann, L., Wagner, C., & Leydesdorff, L. (2015). BRICS countries and scientific excellence: A bibliometric analysis of most frequently cited papers. Journal of the Association for Information Science and Technology, 66(7), 1507-1513. doi: 10.1002/asi.23333

[12]
Brainard, J. (2019). Open-access megajournals lose momentum. Science, 365(6458), 1067. doi: 10.1126/science.365.6458.1067

[13]
Casadevall, A., & Fang, F.C. (2013). Is the Nobel Prize good for science? The FASEB Journal, 27(12), 4682-4690.

[14]
Danell, R. (2011). Can the quality of scientific work be predicted using information on the author's track record? Journal of the American Society for Information Science and Technology, 62(1), 50-60. doi: 10.1002/asi.21454

[15]
Dequilettes, D., Garfinkel, S., Ge, B.H., Hugelius, G., Kim, J., Marchesan, S., … Thouvenin, O. (2018). The world at their feet. Nature, 561(7723), S10-S15. doi: 10.1038/d41586-018-06622-8

[16]
Docampo, D., & Cram, L. (2019). Highly cited researchers: A moving target. Scientometrics, 118(3), 1011-1025. doi: 10.1007/s11192-018-2993-2

[17]
European Commission. (2001). Key Figures 2001. Special edition. Indicators for benchmarking of national research policies. Brussels.

[18]
Ferretti, F., Pereira, A.G., Veertesy, D., & Hardeman, S. (2018). Research excellence indicators: Time to reimagine the ‘making of'? Science and Public Policy, 45(5), 731-741. doi: 10.1093/scipol/scy007

[19]
Gallie, W.B. (1955). Essentially Contested Concepts. Proceedings of the Aristotelian Society, 56, 167-198.

[20]
Garfield, E. (1986). Do Nobel Prize winners write citation classics? Current Contents, 23, 3-8.

[21]
Garfield, E. (1992). The 1991-Nobel prize winners were all citation superstars. Current Contents, 5, 3-9.

[22]
Garfield, E., & Welljams-Dorof, A. (1992). Of Nobel class: A citation perspective on high impact research authors. Theoretical Medicine, 13(2), 117-135.

[23]
Gauffriau, M., & Larsen, P.O. (2005). Counting methods are decisive for rankings based on publication and citation studies. Scientometrics, 64(1), 85-93.

[24]
Glänzel, W., & Czerwon, H.-J. (1992). What are highly cited publications? A method applied to German scientific papers, 1980-1989. Research Evaluation, 2(3), 135-141.

[25]
Glenny, A.M., Worthington, H.V., Esposito, M., & Nieri, M. (2009). What are clinical guidelines? European Journal of Oral Implantology, 2(2), 145-148.

[26]
Hallonsten, O. (2016). Big Science Transformed. Science, Politics and Organization in Europe and the United States: Palgrave Macmillan.

[27]
Harzing, A.W. (2013). Document categories in the ISI Web of Knowledge: Misunderstanding the Social Sciences? Scientometrics, 94(1), 23-34. doi: 10.1007/s11192-012-0738-1

[28]
Lamont, M. (2009). How professors think: Inside the curious world of academic judgment. Harvard University Press.

[29]
Langfeldt, L., Nedeva, M., Sorlin, S., & Thomas, D.A. (2020). Co-existing notions of research quality: A framework to study context-specific understandings of good research. Minerva, 58(1), 115-137. doi: 10.1007/s11024-019-09385-2

[30]
Lariviere, V., Gingras, Y., Sugimoto, C.R., & Tsou, A. (2015). Team size matters: Collaboration and scientific impact since 1900. Journal of the Association for Information Science and Technology, 66(7), 1323-1332. doi: 10.1002/asi.23266

[31]
Leydesdorff, L., & Park, H.W. (2017). Full and fractional counting in bibliometric networks. Journal of Informetrics, 11(1), 117-120. doi: 10.1016/j.joi.2016.11.007

[32]
Li, J.C., Yin, Y., Fortunato, S., & Wang, D.S. (2020). Scientific elite revisited: patterns of productivity, collaboration, authorship and impact. Journal of the Royal Society Interface, 17(165). doi: 10.1098/rsif.2020.0135

[33]
Li, J.T. (2016). What we learn from the shifts in highly cited data from 2001 to 2014? Scientometrics, 108(1), 57-82. doi: 10.1007/s11192-016-1958-6

[34]
Lowry, O.H., Rosebrough, N.J., Farr, A.L., & Randall, R.J. (1951). Protein measurement with the Folin phenol reagent. Journal of Biological Chemistry, 193, 265-275.

[35]
Martin, B.R., & Irvine, J. (1983). Assessing basic research: Some partial indicators of scientific progress in radio astronomy. Research Policy, 12, 61-90.

[36]
Merton, R.K. (1979). Foreword. In E. Garfield (Ed.), Citation indexing: Its theory and application in science, technology, and humanities. John Wiley & Sons.

[37]
Moed, H.F. (2005). Citation Analysis in Research Evaluation. Springer.

[38]
Moore, S., Neylon, C., Eve, M.P., O'Donnell, D.P., & Pattinson, D. (2017). “Excellence R Us”: university research and the fetishisation of excellence. Palgrave Communications, 3, 16105. doi: 10.1057/palcomms.2016.105

[39]
Nielsen, M.W., & Andersen, J.P. (2021). Global citation inequality is on the rise. Proceedings of The National Academy of Sciences of the USA (PNAS). 118 (7), e2012208118. https://doi.org/10.1073/pnas.2012208118

[40]
Perianes-Rodriguez, A., Waltman, L., & van Eck, N.J. (2016). Constructing bibliometric networks: A comparison between full and fractional counting. Journal of Informetrics, 10(4), 1178-1195.

[41]
Piro, F.N., Aksnes, D.W., & Rorstad, K. (2013). A macro analysis of productivity differences across fields: Challenges in the measurement of scientific publishing. Journal of the American Society for Information Science and Technology, 64(2), 307-320. doi: 10.1002/asi.22746

[42]
Plomp, R. (1994). The highly cited papers of professors as an indicator of a research group's scientific performance. Scientometrics, 29(3), 377-393.

[43]
Porter, A.L., Chubin, D.E., & Jin, X.Y. (1988). Citations and scientific progress: comparing bibliometric measures with scientist judgments. Scientometrics, 13(3-4), 103-124.

[44]
Price, D.J.d.S. (1965). Networks of scientific papers. Science, 149, 510-515.

[45]
Schmoch, U. (2020). Mean values of skewed distributions in the bibliometric assessment of research units. Scientometrics, 125, 925-935. doi: 10.1007/s11192-020-03476-8

[46]
Seglen, P.O. (1992). The skewness of science. Journal of the American Society for Information Science, 43(9), 628-638.

[47]
Seglen, P.O. (1997). Citations and journal impact factors: Questionable indicators of research quality. Allergy, 52(11), 1050-1056.

[48]
Shapin, S. (1989). The invisible technician. American Scientist, 77(6), 554-563.

[49]
Simonton, D.K. (2013). After Einstein: Scientific genius is extinct. Nature, 493(7434), 602. doi: 10.1038/493602a

[50]
Stilgoe, J. (2014). Against excellence. The Guardian.

[51]
Tijssen, R., & Winnink, J. (2018). Capturing ‘R&D excellence': Indicators, international statistics, and innovative universities. Scientometrics, 114(2), 687-699. doi: 10.1007/s11192-017-2602-9

[52]
Tijssen, R.J.W., Visser, M.S., & van Leeuwen, T.N. (2002). Benchmarking international scientific excellence: Are highly cited research papers an appropriate frame of reference? Scientometrics, 54(3), 381-397. doi: 10.1023/a:1016082432660

[53]
van Leeuwen, T.N., Visser, M.S., Moed, H.F., Nederhof, T.J., & van Raan, A.F.J. (2003). Holy grail of science policy: Exploring and combining bibliometric tools in search of scientific excellence. Scientometrics, 57(2), 257-280. doi: 10.1023/a:1024141819302

[54]
Vazire, S. (2017). Our obsession with eminence warps research. Nature, 547(7661), 7. doi: 10.1038/547007a

[55]
Wagner, C.S. (2008). The new invisible college: Science for development. Brookings Institution Press.

[56]
Wallace, M.L., Lariviere, V., & Gingras, Y. (2012). A small world of citations? The influence of collaboration networks on citation practices. Plos One, 7(3), e33339. doi: 10.1371/journal.pone.0033339

[57]
Waltman, L. (2012). An empirical analysis of the use of alphabetical authorship in scientific publishing. Journal of Informetrics, 6(4), 700-711. doi: 10.1016/j.joi.2012.07.008

[58]
Web of Science Group. (2018). Highly cited researchers. Identifying top talent in the sciences and social sciences. Retrieved from https://clarivate.com/tag/highly-cited-researchers/

[59]
Wilsdon, J. (2015). We need a measured approach to metrics. Nature, 523(7559), 129. doi: 10.1038/523129a

[60]
Wouters, P., Thelwall, M., Kousha, K., Waltman, L., de Rijcke, S., Rushforth, A., & Franssen, T. (2015). The Metric Tide: Literature Review (Supplementary Report I to the Independent Review of the Role of Metrics in Research Assessment and Management).

[61]
Wu, L.F., Wang, D.S., & Evans, J.A. (2019). Large teams develop and small teams disrupt science and technology. Nature, 566(7744), 378-382. doi: 10.1038/s41586-019-0941-9

[62]
Wuchty, S., Jones, B.F., & Uzzi, B. (2007). The increasing dominance of teams in production of knowledge. Science, 316(5827), 1036-1039. doi: 10.1126/science.1136099
