Research Paper

Evaluating grant proposals: lessons from using metrics as screening device

  • Katerina Guba†,
  • Alexey Zheleznov,
  • Elena Chechik
  • Center for Institutional Analysis of Science and Education, European University at Saint Petersburg, Gagarinskaya 6/1A, Saint Petersburg 191178, Russian Federation
†Katerina Guba (Email: kguba@eu.spb.ru; ORCID: 0000-0002-4677-5050).

Received date: 2022-12-19

  Revised date: 2023-03-08

  Accepted date: 2023-04-13

  Online published: 2023-04-23

Abstract

Purpose: This study examines the effects of using publication-based metrics for initial screening in the application process for project leaders. The key questions are whether a formal policy affects the allocation of funds to researchers with a better publication record and how the previous academic performance of principal investigators is related to future project results.

Design/methodology/approach: We compared two competitions held before and after the policy raised the publication threshold for principal investigators. We analyzed 9,167 papers published by 332 winners in physics and the social sciences and humanities (SSH), and 11,253 publications resulting from the funded projects.

Findings: We found that among physicists, even in the first period, grants tended to be allocated to prolific authors publishing in high-quality journals. In contrast, the SSH project grantees had been less prolific in publishing internationally in both periods; however, in the second period, the selection of grant recipients yielded better results in terms of awarding grants to more productive authors, regarding both the quantity and quality of publications. There was no evidence that this better selection of grant recipients resulted in better publication records during grant realization.

Originality: This study contributes to the discussion of formal policies that rely on metrics for the evaluation of grant proposals. The Russian case shows that such a policy may have a profound effect on changing the supply side of applicants, especially in disciplines that are less suited to metric-based evaluations. Despite the criticism directed at metrics, they might be a useful additional instrument in academic systems where professional expertise is corrupted and prevents the allocation of funds to prolific researchers.

Cite this article

Katerina Guba, Alexey Zheleznov, Elena Chechik. Evaluating grant proposals: lessons from using metrics as screening device[J]. Journal of Data and Information Science, 2023, 8(2): 66-92. DOI: 10.2478/jdis-2023-0010

1 Introduction

Researchers have recognized that grant funding plays a more significant role in supporting science than institutional funding (Auranen & Nieminen, 2010; Grimpe, 2012; Maisano et al., 2020; Wang et al., 2020). Grant funding is expected to trigger competition among scientists and thus lead to more significant academic results (Grimpe, 2012; Maisano et al., 2020). However, multiple studies have indicated that the effect of grant funding on research performance is relatively small (Fedderke & Goldschmidt, 2015; Hornbostel et al., 2009; Jacob & Lefgren, 2011; Maisano et al., 2020; Morillo, 2019; Paudel et al., 2020; Wang & Shapira, 2015). Investigators have partially attributed this to the peculiarities of the peer review process in proposal evaluation, which does not always lead to the selection of the best scientists (Fang et al., 2016; Graves et al., 2011; Gyorffy et al., 2018). Findings that the past scientometric performance of a proposal's leading scientist is the best predictor of future project output have prompted calls for a formal policy of using metrics in deciding which projects to fund (Fedderke & Goldschmidt, 2015). However, the suggestion to rely on quantitative performance indicators in evaluating grant proposals remains disputed, as precedents are rare and we do not have sufficient information on how such a policy may affect grant allocation. Can the process for evaluating applications be improved so that grants are awarded to more capable scientists? The present study contributes to the discussion by analyzing the effects of a formal policy that raised the publication threshold for principal investigators between two waves of grant competitions: the first wave of grants was awarded in 2014 and the second in 2017. We present an analysis of empirical evidence from the Russian Science Foundation (RSF), which has adopted publication metrics as an initial application threshold.
The question of whether using scientometric indicators changes academic evaluation results by allocating resources to top-performing researchers is especially relevant for developing countries, given deep distrust in academia and evidence of academic nepotism (Denisova-Schmidt, 2023; Sokolov, 2021). In this context, academic bureaucracies tend to treat scientometric indicators as 'hard' evidence that improves resource allocation by redistributing resources from corrupted academic elites to productive authors and organizations. However, on the academic periphery, the rationale of using metrics to provide better selection faces challenges related to weak ethical standards. There is a strong possibility that institutions and individuals respond strategically by working on indicators at the expense of the quality of research results. Evidence shows that Russian universities have faced difficulties in increasing their international publications; these challenges have led institutions to employ questionable strategies, including publishing in predatory journals (Guskov et al., 2018). Russia is among the eight countries with the largest numbers of publications in potentially predatory journals (Marina & Sterligov, 2021). The share of Russian output in Scopus that appeared in predatory journals peaked in 2015 and 2016, at 8.23% and 8.4% respectively, whereas in 2012 it was only 0.6% (Marina & Sterligov, 2021). Is it possible that, in the allocation of funds, metrics fail to act as a barrier against researchers who behave opportunistically by publishing in low-quality journals that are nevertheless indexed in international databases?
From the start of its operations, the Russian Science Foundation has relied on bibliometric indicators in proposal evaluations, using scientometric requirements as a barrier for applicants. In other words, to enter the grant competition, a scientist must meet a minimum publication threshold, which has increased markedly over the past five years. The changes were particularly dramatic for scientists in the social sciences and humanities (SSH), as national scientific journals (those not indexed in international databases) disappeared from the list of approved sources for meeting the threshold. Starting from 2017, for researchers in all scientific fields, only publications that appeared in sources indexed by Scopus or Web of Science counted as acceptable output. This policy change allows us to study whether the reliance on metrics in the project evaluation procedure has facilitated a more effective selection of the best-performing scientists. Given that both scholars with international publications and those with questionable publication records were eligible to apply for the grant competition, we can study whether the results of the selection process favored those who were successful in internationalization, or whether the emphasis on international databases in the scientometric requirements left the selection process unchanged.
Our study provides additional empirical evidence on the relationship between grant recipients' past academic performance and future research project results. We explore the following considerations in greater detail: 1) whether the policy in question has affected the allocation of funds to researchers with a better publication record; 2) how principal investigators' previous academic performance relates to future project results; 3) whether the link between previous and future performance increases as the role of publication-based metrics grows; and 4) whether differences in policy effects between disciplines can be observed. Our main empirical focus is on the change in the quality of the publication output of the two grant-holder groups. A change in the raw number of publications would merely signal a change in submission eligibility. A change in the quality of publication output, however, would allow us to conclude whether the raised scientometric requirements resulted in the selection of project leaders who were successful in publishing internationally or whether, instead, grants were distributed to authors who had indexed publications but were less prolific contributors to global science. We acknowledge that scientometric indicators used for research evaluation can have different effects in the social and natural sciences. Therefore, we compare the project leaders in two areas of science separately: physics and the social sciences and humanities (SSH).
Our main dataset comprises 209 winners in 2014 who had published 5,701 papers altogether in the five years before submitting their application, and 123 winners in 2017 who had published 3,466 papers within the same time span. We also obtained information from the list of publications resulting from each funded project located on the foundation's website (4,604 publications for the first wave and 6,649 publications for the second wave).

2 Relevant literature

The body of literature on the efficiency of public funding is extensive and has concentrated particularly on evaluating the results of funded projects, measured by the quality and quantity of academic research output. Much of the published research has focused on the analysis of various foundations by tracking the publications that resulted from supported projects (Gush et al., 2018; Gyorffy et al., 2020; Li & Agha, 2014). In studying foundations' performance in this respect, a common approach has been to compare the performance of scholars who have received grants with that of similar authors without a grant (Fedderke & Goldschmidt, 2015).
Empirical studies on individual foundations have shown contradictory results. An analysis of publications by grant recipients in New Zealand revealed an increase in publications of between 3 and 15%, and in citations of between 5 and 26% (Gush et al., 2018). Other researchers collected performance measures for those who received additional financial resources and those who did not (Fedderke & Goldschmidt, 2015). While the comparison indicated that funding tended to lead to higher performance, the increase in publications could be modest depending on scientists' previous achievements and research area. In their study of Italian scientists, Maisano et al. (2020) demonstrated that grant recipients' research performance did not significantly differ from that of researchers who did not receive a grant, at least in the short term. According to data on the Turkish foundation, grant funding did not lead to a significant uptick in publications and their subsequent citations (Tonta, 2018). Similarly, a study of the Russian Presidential Grants did not indicate an impact of funding on grant recipients' research performance in medicine (Saygitov, 2014). A similar lack of impact was found when comparing approved and rejected applicants to the German Research Foundation (Hornbostel et al., 2009) and the United States National Institutes of Health (Jacob & Lefgren, 2011). Even when an effect has been identified, it has often been relatively small (Morillo, 2019; Wang & Shapira, 2015).
The wide variance in research results may be due to the peculiarities of specific foundations, which differ in their selection procedures and general policies. National public funders often target not only citations and publications but also other strategic objectives (Gök et al., 2016). For example, a comparative study found that while Chinese foundation grants tended to result in a high number of publications, European Union grants tended to be less effective, which could be explained by their emphasis on projects' social impact (Wang et al., 2020). Moreover, in a context where the foundation is the principal funding source, the effect on research performance tends to be much stronger than in countries with less concentrated grant competitions. Gyorffy et al. (2020) pointed to a 47% increase in research output following the receipt of a basic foundation grant, which could be attributed to significantly less funding being available from other sources for unsuccessful applicants.
In terms of size, large grants have significant transaction costs and, therefore, tend not to be as effective as one might expect (Campbell et al., 2010; Clark & Llorens, 2012). A substantial number of empirical studies have indicated that distributing resources across smaller grants has yielded better performance on average than concentrating them in fewer and larger grants (Aagaard et al., 2020; Mongeon et al., 2016). Empirical research on the Center of Excellence programs in four Nordic countries suggested that larger grants exerted a limited impact on already prolific and distinguished groups, compared with less recognized groups that had few other funding options (Langfeldt et al., 2015). Such findings seemed to suggest that, under a policy objective of maximizing research output, giving smaller grants to more researchers represented a better course of action.
Another empirical strategy has been to focus on data at the country level. Research results regarding domestic and international grants for several European countries revealed a link between a combination of diverse grants, especially in complex EU programs, and the emergence of highly cited papers (Gök et al., 2016). For small countries in Eastern Europe, funding appeared to be a significant predictor of impact on citations (Gök et al., 2016). This effect could also be achieved through international collaborative research, where funding is a complementary mechanism (Yan et al., 2018; Zhao et al., 2018).
The efficiency of grant funding may be particularly dependent on the research field (Morillo, 2019). Resource distribution itself is not equal across different fields of science: European foundations have typically offered fewer opportunities to support SSH projects than those within the natural sciences and engineering (Grimpe, 2012). Research area has also turned out to be a significant factor in funded projects' publication performance (Yan et al., 2018). Data on public funding in South Africa revealed an associated increase in publications in the natural sciences, while in engineering and the social sciences, the number of publications and citations did not appear to grow relative to the control group (Fedderke & Goldschmidt, 2015). The limited impact of grant funding may be further explained by whether the grant evaluation process is organized so that funding is allocated to the most productive researchers (Technopolis, 2008). There are at least two reasons why grants may not be distributed among the most productive scientists, as explored below.
First, the most prolific authors may choose not to apply for grants, for various reasons. Grimpe (2012) demonstrated that even the most prominent competitions failed to attract the best scientists in Germany. Top-level scientists have been found to be less inclined to change their research plans to fit foundations' agendas (Laudel, 2006). In Italy, a study by Bertoni et al. (2021) revealed that while many of the country's less productive researchers applied for a grant program, many of the highly productive ones did not. The authors suggested that researchers tended to misestimate their position in the productivity distribution relative to the assigned threshold, leading the most productive not to apply for funding.
Second, allocation procedures may not be meritocratic, resulting in a failure to reward the best scientists. According to several studies, the review process has rarely provided reliable results in predicting future scientific productivity (Fang et al., 2016; Graves et al., 2011; Gyorffy et al., 2018). For example, Gyorffy et al. (2020) indicated that grant application reviewers’ scores were only minimally better than random parameters. They found that the best predictors of future research performance were rather the bibliometric indicators of the project leaders at proposal submission. Hence, studies have provided evidence for the promotion of metrics and the demotion of review panels in selecting grant recipients. Researchers have explicitly suggested that “the proposal evaluation process could be more evidence-based and shortened through the more intensive usage of past publication data” (Gyorffy et al., 2020). In this regard, it can be argued that the concentration of resources in the hands of highly productive scientists is more likely to provide a high return on investments in science (Maisano et al., 2020).
Due to publishing norms, bibliometrics is not appropriate for every field of research (Van Raan, 1998). Scientists have generally agreed, although with some reservations, that the use of metrics would benefit evaluation purposes in the natural sciences, but that for the social sciences and especially the humanities, metrics should be applied with discretion (Abramo et al., 2013; Batista et al., 2006; Hicks et al., 2004; Nederhof, 2006). Compared to the basic natural sciences, more SSH output has tended to be published in journals and as monographs in the national language, significantly limiting the number of potential citations (Mosbah-Natanson & Gingras, 2014). Accordingly, our study acknowledges that scientometric indicators for assessing scientific achievements can have different effects in the social and natural sciences. Therefore, we compare the project leaders who received grants in 2014 and 2017 in two areas of science separately: physics and the social sciences and humanities (SSH).
We propose that increased reliance on international databases would affect the social and natural sciences differently, given their different experience of publishing internationally. Bibliometric analysis shows that the natural sciences dominate the publications from post-Soviet countries that appear in indexed journals; in Russia, they account for 78% of all Scopus-indexed papers (Chankseliani et al., 2021). The share of the social sciences is less than 3% and has increased only slightly during the last decade (Chankseliani et al., 2021). Given that the social sciences and humanities have developed locally rather than globally, we expect to find differences in strategic publication behavior: SSH authors would be more inclined to choose a more guaranteed route to getting published in indexed journals.

3 Methods

Since 2014, the Russian Science Foundation has run several competitions annually. We focused on the competition named “Fundamental scientific and exploratory research led by research groups,” which in 2014 was the main competition for Russian scientists: winners of this call accounted for 78% of all RSF grants awarded that year. Until 2017, the RSF did not require project leaders to have published solely in sources indexed in Scopus or Web of Science; 2017 was the first year when such publications became necessary for an application, not only for natural scientists but also for scholars in the social sciences and humanities. Given that there were only several months between the call for applications and the submission deadline, scientists did not have enough time to change their publication behavior specifically to produce publications for the competition. However, it is well known that between 2014 and 2017, scientists increasingly began to publish in journals indexed in international citation databases. In this period, the emphasis on international publications permeated Russian science policy in different contexts, creating intense pressure on Russian scientists to change their publication behavior. (The use of metrics in resource allocation was initiated in the 2010s, when the government enhanced control over the education system to eliminate ineffective universities and increase the productivity of others (Sokolov et al., 2018). The state adopted a policy strategy with development programs and specific goals that universities had to fulfill in exchange for additional resources. Thus, in 2012 the government launched the Russian Academic Excellence Project (Project 5-100), which aimed to improve the international competitiveness of a handful of Russian universities heavily supported by the state. The key benchmarks in international rankings are the numbers of publications and citations indexed in international databases.) By 2017, the number of researchers with papers in indexed journals had increased, and only these scientists were eligible to participate in the grant competition. More specifically, we focused on whether the policy change resulted in the selection of project leaders with significant publication records, considering that researchers who did not publish in highly competitive international journals but instead chose Russian indexed journals or predatory sources were also eligible to apply.
Our data collection process began with assembling a list of all principal investigators who had received project funding from the Russian Science Foundation (RSF) in 2014 and 2017 (Figure 1 represents the main steps in data processing). Though the results of the RSF competitions were available on the foundation's website, they were not in machine-readable form, and the list only provided data on the funded projects. Only part of the information was publicly available (i.e., project leaders' names, project abstracts, and a list of publications resulting from the funded projects). Our analysis included two groups of grantees: 1) physicists and 2) social sciences and humanities (SSH) scholars.
Figure 1. Flow-chart of the data processing
The next step was to gather data on the listed principal investigators’ publications indexed on Scopus. We selected the Scopus database over Web of Science (WoS) as the former covers more Russian-language journals than the latter (Moed et al., 2018). Sterligov (2017) expressed the view that WoS sold “top quality,” while Scopus sold “top scope,” which could explain why the number of journals indexed on Scopus was almost twice as high as on WoS. Indeed, although the share of Russian papers grew on both databases during 2012-2016, there was a difference in the types of publications. While the coverage of proceedings expanded on both Scopus and WoS, the higher number of publications indexed on Scopus also reflected the inclusion of more Russian-language journals (Moed et al., 2018).
The observational window for our study spans the five years preceding the application year, as specified in the scientometric requirements for grant applicants. As mentioned, our data consisted of 209 grantees in 2014 who published 5,701 indexed papers from 2009 to 2013, and 123 grantees in 2017 who published 3,466 indexed papers from 2012 to 2016. We also obtained information from the list of publications resulting from each funded project located on the foundation's website (4,604 publications for the first wave and 6,649 publications for the second wave). (At the time of data collection, each project had a page on the foundation's site listing the publications that appeared as project results (https://www.rscf.ru/project/); this information is updated annually.)
For the main analysis, we restricted our sample to journal articles and omitted other categories, such as book chapters and conference reports. This decision stemmed from our empirical strategy, as we intended to gather comparable information on the quality of publications across different research areas. Furthermore, the articles were matched with the bibliometric characteristics of the journals in which they had been published. We assigned each journal to its respective quartile within the scientific field, based on the journal's rank according to CiteScore metrics. If a journal belonged to several categories, the quartile was taken from the category in which it was highest. Researchers have challenged the assumption that journal metrics are a strong indicator of paper quality (Larivière & Sugimoto, 2019). However, due to limited data availability, we relied on quartile rankings as the most easily accessible data for these publications. In this regard, we followed the strategy of Gyorffy et al. (2020), who obtained reliable results using quartile rankings in a study of Hungarian researchers.
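To make this matching step concrete, the following Python sketch shows one way a quartile could be assigned from CiteScore-based percentile ranks, keeping the best quartile when a journal belongs to several categories. The function names and the sample data are our own illustrations, not the authors' code.

```python
# Minimal sketch of assigning each journal its best quartile across subject
# categories, assuming a CiteScore-based percentile (0-100, higher is better)
# is known for every (journal, category) pair. Names and data are illustrative.

def rank_to_quartile(percentile: float) -> int:
    """Convert a CiteScore percentile into a quartile (Q1 is the best)."""
    if percentile >= 75:
        return 1
    if percentile >= 50:
        return 2
    if percentile >= 25:
        return 3
    return 4

def best_quartile(category_percentiles):
    """Keep the highest quartile (numerically smallest) across categories."""
    return min(rank_to_quartile(p) for p in category_percentiles)

# Hypothetical journals with their percentiles in each assigned category
journals = {
    "Journal A": [82.0, 64.5],   # Q1 in one category, Q2 in another -> Q1
    "Journal B": [48.0, 30.0],   # at best Q3 -> Q3
}

print({name: best_quartile(p) for name, p in journals.items()})
# {'Journal A': 1, 'Journal B': 3}
```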
To gauge research output, we gathered and stored the list of grantees' publications available on the RSF website (4,604 publications for the first wave and 6,649 publications for the second wave). This step allowed us to compare principal investigators' prior output with the publications that resulted from the funded projects. We chose to analyze only journal articles at this stage, as it was not possible to manually check each publication through Scopus (5,020 articles for the 2014 competition and 2,948 articles for the 2017 competition). As with the principal investigators' publication records, these publications were matched with the bibliometric characteristics of the journals in which the articles were published.
This study has several limitations stemming from the nature of the available data. The data did not allow for non-overlapping observation periods or an equal number of grant-holders in each period, given that different numbers of grants were awarded in 2014 and 2017. By focusing on later years, we could have avoided the overlap; however, we aimed to analyze the first year in which the bibliometric requirements were strengthened. Further, although we compared the results of selection procedures between two periods, the research design differs from a standard before-and-after study, which usually measures outcomes in one group of participants before an intervention and again afterward. In this study, the sample for each year consists of a different set of scientists (the winners of the 2014 and 2017 competitions, respectively), and the number of grant-holders in each period is uneven. To make the comparison possible, we provide both absolute and relative perspectives: raw numbers of the principal investigators' publications and citations, and size-independent indicators, including the share of papers in Q1 journals and the average number of publications per author.
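As a small illustration of these size-independent indicators, the sketch below computes the share of Q1 papers and the mean number of publications per principal investigator for two waves. The quartile counts reuse the SSH figures reported later in Table 2, whereas the per-investigator publication counts are invented placeholders.

```python
# Illustrative computation of the size-independent indicators used for the
# between-wave comparison: share of Q1 papers and mean publications per PI.

def share_q1(quartile_counts):
    """Share of Q1 papers among all papers with a known journal quartile."""
    total = sum(quartile_counts.values())
    return quartile_counts[1] / total if total else 0.0

def mean_pubs_per_pi(pubs_per_pi):
    """Average number of publications per principal investigator."""
    return sum(pubs_per_pi) / len(pubs_per_pi)

# SSH quartile counts from Table 2; per-PI counts below are invented examples
ssh_2014 = {"quartiles": {1: 30, 2: 13, 3: 57, 4: 144}, "pubs": [0, 1, 1, 2, 4]}
ssh_2017 = {"quartiles": {1: 50, 2: 30, 3: 65, 4: 61}, "pubs": [3, 4, 5, 6, 8]}

for year, wave in (("2014", ssh_2014), ("2017", ssh_2017)):
    print(year,
          f"Q1 share = {share_q1(wave['quartiles']):.1%}",
          f"mean pubs per PI = {mean_pubs_per_pi(wave['pubs']):.1f}")
```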

4 Results

4.1 Lead scientists’ research performance at submission

Our primary empirical strategy was to compare the two waves of grantees by contrasting the quality of their publication output between the two periods, separately for physics and SSH. In addition, we compared the quality of publication output in each period with the quality of the general Scopus output, i.e., all publications in physics and SSH by Russian authors in each period (2012-2014 and 2015-2017). The first group of scientists received their grants in 2014, when scientometric expectations were significantly lower, while the second group won a competition in 2017, after the new threshold had been applied. For the 2014 competition, project leaders in the natural sciences were required to have published at least three articles in indexed journals; in 2017, five papers were required. For scholars in SSH, the change was more dramatic: although the number of required publications remained the same, the RSF removed journals not indexed in international databases from its list of approved sources.
In the 2017 competition, noticeably fewer projects received funding than earlier. The number of grants decreased from 115 to 82 in physics, and from 94 to 41 in SSH, which is partially related to the launch of several additional competitions by the Russian Science Foundation. In 2014, there were 12,774 applications, compared with 4,345 in 2017; in 2014, winners of the call accounted for 78% of all RSF grants awarded that year, while in 2017 they accounted for only 26%. Unfortunately, we cannot identify the main cause of the diminished number of applications; it is related both to the launch of other competitions and to the introduction of metrics as a barrier.
Table 1 presents the number of papers grantees published in the five years before they received a grant. In the first period, not all SSH project leaders were required to have internationally indexed publications. However, in the second wave, 93% of scientists who received a grant in the social sciences had published in journals indexed on Scopus—compared to 54.3% in the first wave. For physicists, this percentage was consistently high: 100% of scientists were found to have published journal papers within the Scopus index.
Table 1. Descriptive statistics of principal investigators’ research performance in the five years before they received funding
                         Physics 2014        Physics 2017       SSH 2014           SSH 2017
                         (N projects=115)    (N projects=82)    (N projects=94)    (N projects=41)
All Scopus publications in the 5 years before submitting (N)
  Mean (SD)              41.5 (64.5)         33.4 (31.1)        2.6 (4.39)         4.95 (4.68)
  Median [Min, Max]      25 [2, 466]         25 [0, 153]        1 [0, 29]          4 [0, 27]
Journal papers indexed by Scopus (N)
  Mean (SD)              35.3 (61.8)         31.0 (29.2)        2.12 (4.03)        4.8 (4.7)
  Median [Min, Max]      21 [1, 450]         22 [0, 150]        0 [0, 28]          4 [0, 27]
Journal papers indexed by Scopus (%)
  Mean (SD)              82.2 (14.2)         91 (17.3)          39.3 (44.1)        89.4 (27.1)
  Median [Min, Max]      84.6 [41.7, 100]    96.3 [0, 100]      0 [0, 100]         100 [0, 100]
Conference papers indexed by Scopus (%)
  Mean (SD)              14.6 (13.5)         4.96 (9.16)        4.02 (16.7)        0.488 (3.12)
  Median [Min, Max]      12.5 [0, 52.9]      0 [0, 56.3]        0 [0, 100]         0 [0, 20]
Other Scopus papers (%)
  Mean (SD)              3.29 (6.02)         1.59 (2.89)        9.84 (22.8)        2.79 (8.97)
  Median [Min, Max]      1.16 [0, 50]        0 [0, 11.1]        0 [0, 100]         0 [0, 33.3]
In physics, the median principal investigator in our sample had 25 publications, without any differences between periods. The mean itself did vary, at 41 in 2014 and 33 in 2017, with an SD of 64 in 2014 and 31 in 2017. This variance could be explained by the presence of researchers with a strikingly long list of publications, which resulted from mega-collaborations. While SSH researchers tend to publish significantly less in Scopus-indexed sources, we noticed a positive trend in the number of publications between the two waves of competition: The median principal investigator in our sample had only one publication in 2014, but four in 2017 (Wilcoxon test, p-value = 1.909e-05, see Figure 2A).
Figure 2. Comparison of principal investigators’ publications between two waves of grant competitions.
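The between-wave comparisons reported here rely on a nonparametric rank test. The sketch below shows how such a comparison could be run in Python, assuming the per-investigator publication counts for each wave are available as lists; since the 2014 and 2017 winners are independent groups, the rank-sum (Mann-Whitney U) variant of the Wilcoxon test is used, and the counts shown are invented placeholders.

```python
# Sketch of the nonparametric comparison of per-PI publication counts between
# the two waves. The two groups are independent, so the Wilcoxon rank-sum
# (Mann-Whitney U) test applies; the counts below are invented placeholders.
from scipy.stats import mannwhitneyu

ssh_pubs_2014 = [0, 0, 1, 1, 1, 2, 3, 5, 7, 0]   # papers per PI, first wave
ssh_pubs_2017 = [2, 3, 4, 4, 4, 5, 6, 8, 10, 4]  # papers per PI, second wave

u_stat, p_value = mannwhitneyu(ssh_pubs_2014, ssh_pubs_2017,
                               alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.3g}")
```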
Table 2 shows that in the social sciences and humanities, journal articles had become more important in the second period in comparison with other publication types. In physics, the median principal investigator in our sample had 21 journal papers, with no significant differences between periods. In SSH, the median project leader had no journal papers in 2014, but four in 2017 (Wilcoxon test, p-value = 5.5e-07, Figure 2B). It is evident from Table 2 that, for SSH, the share of other types of publications (e.g., book chapters and reviews) decreased. In other words, the 2017 competition was won by SSH scholars who mainly published articles in internationally indexed journals.
Table 2. Principal investigators’ publications by journal quartiles (only journal articles included)
Journal quartile    Physics, first wave (%)    Physics, second wave (%)    SSH, first wave (%)    SSH, second wave (%)
Q1                  2563 (53.7)                1437 (52.4)                 30 (12.3)              50 (24.3)
Q2                  710 (14.9)                 452 (16.5)                  13 (5.4)               30 (14.6)
Q3                  800 (16.7)                 600 (21.9)                  57 (23.4)              65 (31.5)
Q4                  703 (14.7)                 253 (9.2)                   144 (59)               61 (29.6)
Total               4776 (100)                 2742 (100)                  244 (100)              206 (100)
Furthermore, we used the information on the journals' quartiles to analyze the quality of the project leaders' publications before proposal submission (Table 2). In comparison with the first wave of scientists, the second wave published papers in a more selective set of journals. Among the first wave, the SSH scholars published mainly in Q4 journals (59%), while the share of SSH Q4 publications decreased to 29.6% among the second wave. Most of the lost Q4 share went to Q1, Q2, and Q3 journals. Physicists also experienced an “overflow” from lower to higher quartiles. The results are significant for both research fields (Pearson chi2 = 26.6 for physics and 47 for the social sciences, p < 0.0001). This could be explained both by increases in the journals' quartiles and by a selection of grant recipients skewed in favor of those who had published in higher-quality journals. We found the same result by analyzing journal metrics, particularly the Percentile indicator, which reflects a journal's position within its research area. While physics journals did not grow in Percentile between the two periods, SSH showed more pronounced positive dynamics: the first wave's Percentile index reached 30% on average, while the second wave's reached 46% (Wilcoxon test, p = 3.3e-08).
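The quartile-shift comparison amounts to a chi-square test of independence on wave-by-quartile counts. The sketch below illustrates the procedure using the counts from Table 2 as input; it shows how such a test can be computed rather than reproducing the reported statistics exactly.

```python
# Chi-square test of independence between wave (2014 vs 2017) and journal
# quartile, using the principal investigators' article counts from Table 2.
from scipy.stats import chi2_contingency

# Rows: first wave, second wave; columns: Q1, Q2, Q3, Q4
physics = [[2563, 710, 800, 703],
           [1437, 452, 600, 253]]
ssh = [[30, 13, 57, 144],
       [50, 30, 65, 61]]

for field, table in (("Physics", physics), ("SSH", ssh)):
    chi2, p, dof, _expected = chi2_contingency(table)
    print(f"{field}: chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2g}")
```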
Given that the number of Russian journals in Scopus almost doubled over the periods analyzed, the growth in journal metrics does not necessarily indicate that researchers started to publish in more prestigious journals. Authors may have continued to publish in Russian-language journals, which over time began to show higher metrics as more such journals were indexed and their citation metrics rose as a result. In any case, the data revealed that the share of international publications grew in both fields. For principal investigators in physics, 71% of publications during the first period were non-local, a proportion rising to 79% in the second period. For recipients of social science grants, the same rate rose from 60% to 72%.
How could this shift in favor of more productive authors in the second period be explained? Could we attribute this relatively positive change to the raised scientometric barrier for grant applicants? We suggest that bibliometric indicators were a useful instrument, given our assumption that the pool of applicants might include scholars with publications in low-quality journals who were also eligible to apply for the competition. Although we do not have access to the pool of unsupported applications, we can analyze the Scopus publication output, which represents the articles and authors eligible to apply to the RSF. It is impossible to identify from this output which authors applied, but it offers an approximate picture of the number of eligible researchers and the quality of their output. Due to limitations of the SciVal module, we chose two three-year periods (2012-2014 and 2015-2017) instead of five-year periods. We suggest that publication output in these three-year periods provides an approximate description of the general population of researchers.
The SciVal module was used to analyze the distribution of publications by journal quartiles. In physics, we found 44,310 publications with a Russian affiliation published in 2012-2014, and 62,809 articles that appeared in 2015-2017. The data did not show evidence that this growth happened at the expense of overall quality: the share of Q4 articles was 34% in both 2012-2014 and 2015-2017, while the share of Q1 articles was 29% in the first period and 24% in the second. Compared with the general population, the principal investigators' output was of higher quality: more than 50% of their papers appeared in Q1 journals, while in the overall output this share was 29%.
In contrast to physics, the social sciences output tripled, from 8,292 publications in 2012-2014 to 27,901 in 2015-2017. The share of Q1 articles was 16% in 2012-2014 and 12.5% in 2015-2017, while the share of Q4 articles was 53% in the first period and 51% in the second. Although the overall quality of the general output did not change between periods, the raw number of papers increased dramatically, meaning that more authors became eligible to apply for the grants. In this context, the grants were awarded to more prolific researchers whose metrics were better than those of the general population. For example, 29.6% of the 2017 grantees' papers appeared in Q4 journals, while in the overall output this share was 51%.
To summarize, our results indicated that the group of research leaders in physics did not significantly change between the two waves of competition. Physicists already tended to perform highly in research before publication pressure was increased, and have since continued to publish in selective journals. SSH grantees displayed a much more noticeable improvement in research performance, regarding both the quantity and quality of publications. The number of SSH publications significantly increased in high-impact journals, which had long been a challenge for Russian SSH researchers.

4.2 Research results of funded projects

In physics, the median project in our sample resulted in 18 papers in the first wave and 19 papers in the second wave. The mean does not remarkably differ from the median, pointing to an absence of high variation in performance (Table 3). Articles published in Scopus journals comprised almost half of the output (the median share for physics was 50% in the first period and 63% in the second period). Almost all these articles appeared in non-Russian journals.
Table 3. Descriptive statistics on projects’ publication output between the two waves of grant competitions
                         Physics 2014        Physics 2017       SSH 2014           SSH 2017
                         (N projects=115)    (N projects=82)    (N projects=94)    (N projects=41)
All publications (N)
  Mean (SD)              24.5 (17.4)         21.9 (11.2)        46.2 (27.1)        56.2 (26.7)
  Median [Min, Max]      18 [3, 94]          19 [5, 67]         46 [5, 111]        49 [14, 111]
Journal papers indexed by Scopus (N)
  Mean (SD)              11 (7.83)           12 (5.61)          5.3 (4.46)         9 (6.12)
  Median [Min, Max]      9 [0, 43]           11 [3, 37]         4 [0, 18]          9 [0, 40]
Journal papers indexed by Scopus (%)*
  Mean (SD)              51.1 (24.8)         61.5 (23.3)        16.0 (17.4)        19.8 (14.9)
  Median [Min, Max]      50 [0, 100]         62.8 [9.09, 100]   10.2 [0, 80]       16.2 [0, 81.3]
Non-Russian Scopus journal papers (N)
  Mean (SD)              9.24 (6.79)         10.5 (5.66)        2.22 (2.75)        3.63 (3.09)
  Median [Min, Max]      8 [0, 41]           10 [1, 37]         1 [0, 14]          3.00 [0, 11]

* To calculate the percentage, we considered all journal articles as 100%.

It is evident here that SSH researchers reported significantly more papers as project results than projects in physics did. The median project amounted to 46 publications in the first period and 49 publications in the second. However, not all of these were indexed publications: the median share of indexed journal publications was 10% in the first period and 16% in the second. According to previous research, standard bibliometrics has tended to capture only a small fraction of SSH publication output (Donovan & Butler, 2007). In comparison with the “hard sciences,” more SSH output has been published in non-indexed formats (e.g., national journals, books, edited volumes, conference proceedings, and policy memos). Hence, physicists tend to publish significantly more international journal articles than SSH scholars, who included other types of publications in the project output.
In contrasting the two waves of competitions, we did not observe an increase in the number of publications produced as project results, in either physics or SSH (Wilcoxon test, p-value = 0.056, see Figure 3A). At the same time, while SSH scholars published considerably less often in indexed sources, the projects in the second wave produced, on average, more Scopus-indexed papers and more articles in international journals. The median SSH project resulted in 4 indexed journal publications in the first period and 9 in the second (Wilcoxon test, p-value = 2.3e-05, see Figure 3B).
Figure 3. Comparison of projects’ publication output between the two waves of grant competitions.
We further used the information on the journals' quartiles to indirectly estimate the quality of the project results (Table 4). The SSH publications from the first wave appeared mainly in Q4 journals (51%) while, in the second period, the share of Q4 publications decreased to 29%, with most of the lost Q4 share taken up by Q3 journals. Physicists showed noticeable growth in publications in Q1 journals. The results are significant for both research fields (Pearson chi2 = 70 for physics and 43.8 for the social sciences, p < 0.0001).
Table 4. Project publication output by quartiles (only journal articles included)
Journal quartile    Physics, first wave (%)    Physics, second wave (%)    SSH, first wave (%)    SSH, second wave (%)
Q1                  630 (49.8)                 579 (59)                    42 (8.4)               48 (13)
Q2                  298 (23.6)                 166 (16.9)                  93 (18.7)              72 (19.5)
Q3                  225 (17.8)                 137 (14)                    110 (22.1)             141 (38.2)
Q4                  112 (8.9)                  100 (10.2)                  253 (50.8)             108 (29.3)
Total               1265 (100)                 982 (100)                   498 (100)              369 (100)
To summarize, in the second wave of competitions, the projects in both research fields produced more articles published in high-impact international journals.
Lead scientists’ research performance and project publication output
Table 5 shows the descriptive statistics for principal investigators' publication records and project output. We limited this analysis to publications that appeared in Q1 journals. In physics, the mean shares of Q1 publications among project leaders' prior records and among project output are comparable (the percentage is calculated from publications that have a quartile). For project output, the share is slightly higher in the second wave: projects initiated in 2014 published on average 50% of their papers in Q1 journals, while the share of Q1 papers in the 2017 projects' output was 56%. We also see that most of these papers were published in non-Russian journals. SSH projects produced fewer papers in Q1 journals than the project leaders' prior performance would suggest: in the second wave, 16% of project leaders' pre-submission papers had appeared in Q1 journals, while only 11% of the project output was published in these highly selective journals. We detected a similar pattern in the share of non-Russian-language journals: while project leaders on average published 54% of their papers internationally, only 40.5% of project output appeared in non-Russian-language journals.
Table 5. Descriptive statistics of principal investigators’ research performance and project publication output
                         Physics 2014        Physics 2017       SSH 2014           SSH 2017
                         (N projects=115)    (N projects=82)    (N projects=94)    (N projects=41)
Q1 papers (N) before
  Mean (SD)              22.3 (58.4)         17.5 (22.5)        0.319 (0.845)      1.22 (2.72)
  Median [Min, Max]      11 [0, 422]         10.5 [0, 114]      0 [0, 5.00]        0 [0, 15]
Q1 papers (%) before
  Mean (SD)              40.7 (24.1)         48.4 (27.3)        6.82 (18.4)        16.2 (24.5)
  Median [Min, Max]      34 [0, 92.5]        50.9 [0, 100]      0 [0, 100]         0 [0, 87.5]
Q1 papers (N) after
  Mean (SD)              5.48 (5.77)         7.06 (5.44)        0.447 (0.875)      1.17 (1.99)
  Median [Min, Max]      4 [0, 38]           6 [0, 31]          0 [0, 4]           0 [0, 8]
Q1 papers (%) after*
  Mean (SD)              46.6 (29.9)         55.9 (27.7)        9.29 (21.0)        11.4 (18.3)
  Median [Min, Max]      50 [0, 100]         55.9 [0, 100]      0 [0, 100]         0 [0, 80]

* Percentage is calculated from all indexed publications.

Considering the scientometric parameters of principal investigators before proposal submission, in physics, the share of Q1 publications displays a significant correlation with the publication output measured during the grant period (correlation coefficient 0.37 in the first wave and 0.338 in the second wave, Figure 4). Due to the limited number of observations, we did not calculate the correlation for the social sciences and humanities.
Figure 4. Correlation between scientometric parameters of the principal investigators at proposal submission and project publications.
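A minimal sketch of how such a correlation could be computed is given below, assuming the per-project Q1 shares before submission and in the project output are available as paired arrays; the values are invented, and Pearson's r is used purely for illustration.

```python
# Sketch of correlating PIs' pre-submission Q1 share with the Q1 share of the
# resulting project output. Values are invented; Pearson's r is illustrative.
import numpy as np
from scipy.stats import pearsonr

q1_share_before = np.array([0.34, 0.51, 0.10, 0.72, 0.45, 0.60, 0.25, 0.80])
q1_share_after = np.array([0.40, 0.55, 0.20, 0.65, 0.50, 0.58, 0.30, 0.70])

r, p = pearsonr(q1_share_before, q1_share_after)
print(f"r = {r:.3f}, p = {p:.3g}")
```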
The significant correlation between the lead physicists' scientometric parameters and subsequent publication records (during the grant period) might be related to the dominance in physics of collaborations strongly led by the principal investigators. “Team science” is crucial in physics, where projects reflect the research priorities of the principal investigator; a long list of co-authored publications serves to show that they already have a team to conduct the research project. Meanwhile, the role of individual contributions is more significant in SSH (Wuchty et al., 2007; Birnholtz, 2008), which indicates that even in research groups, the principal investigator tends to have less influence over the research process. While research projects may be conducted as a collaboration of individual researchers around related research questions, they do not tend to develop into one integrated project. Hence, the principal investigator may have less influence over the project results. We tested this assumption by analyzing the share of project output published by principal investigators. The data show that, for both periods, principal investigators in physics authored on average 75% of the indexed journal articles, while project leaders in SSH authored significantly fewer papers: 30% of all publications and about 50% of indexed journal articles.

5 Discussion

In the modern era, peer review has dominated the evaluation of grant proposals, meaning that reviewers' scores tend to be crucial in deciding which proposals to fund (Azoulay & Li, 2020). However, research from recent years has called the efficiency of peer review into question by pointing out that it might add zero or even negative value if reviewers are biased, mistaken, or focused on different goals (Guthrie et al., 2018; Gyorffy et al., 2020; Li & Agha, 2014). Empirical research on possible links between peer-review scores and project outcomes has brought mixed results (Li & Agha, 2014; Luukkonen, 2012).
Recent empirical studies have posited that using scientific merit scores in evaluating grant proposals may yield better outcomes (Azoulay & Li, 2020; Fedderke & Goldschmidt, 2015; Gyorffy et al., 2020). Based on this evidence, should funders rely more on quantitative metrics such as publications and citations in addition to peer review? The RSF case is unique due to the foundation's formal policy of using publication metrics as an initial threshold in the application process, as it requires a project leader to have published a certain number of indexed papers before applying. To study the effects of tightening this policy, we compared two groups of grant-holders who participated in competitions before and after the policy raised the publication threshold for principal investigators. Although scientometric requirements have led some Russian institutions to employ questionable strategies, including publishing in predatory journals (Guskov et al., 2018), our study demonstrates that using metrics resulted in a more effective selection of project leaders. We showed that in Russia, with its problems with academic expertise, reliance on quantitative indicators yielded better results in terms of selecting more prolific authors.
First, we found that the selection among physicists had already been effective in the first period, as the grants tended to be allocated to prolific authors publishing in high-quality journals. In the second period, the grants were awarded to researchers with slightly better metrics than in the first wave of the grant competition. By contrast, the SSH project grantees had been less prolific in publishing internationally; their smaller number of publications in the most selective set of journals could be explained by the field's traditions and its limited journal coverage in international databases. At the same time, between the first and second periods, the selection of SSH grantees shifted in favor of researchers with more impressive publication records, in terms of both the quantity and quality of publications (operationalized in this research on the basis of journal quartiles).
Second, in physics, proposals submitted by investigators with better metrics tended to yield better publication records during grant realization, which could be attributed to the more prominent role of the project leader: project publications in this field are generally strongly shaped and authored by the principal investigator. We found that the correlation between the metrics of the principal investigators at proposal submission and project output did not differ between periods. In SSH, the limited number of observations did not allow us to conduct a correlational analysis. We assume that in this case the role of the project leader is less evident, as many project publications appeared without their participation as coauthors.
Using metric thresholds could change the supply side of those able to apply for funding. While this is less important for fields with many prolific authors, it is crucial for research fields where fewer authors publish internationally. For SSH, the use of metrics for initial filtering might change the supply side of eligible researchers, as the bibliometric threshold defines what type of scholarship counts as a publication through the lens of international citation databases. An exact answer requires data on unsupported applications; unfortunately, access to these is restricted. Our data provide only indirect confirmation of our suggestion by showing that between 2014 and 2017, the pool of authors eligible to apply increased dramatically, and that this increase happened at the expense of quality, given that a significant share of papers (more than 50%) was published in low-quality journals. In this context, however, the grants were awarded to researchers with better publication metrics.
Our findings indicate that formal policy may have profound effects on the social sciences and humanities, which have been considered less suitable for metric-based evaluations. We observed that the share of non-journal publications among new grantees decreased, while the share of English-language journal articles increased. Our results confirm the findings by Hammarfelt and de Rijcke (2015) that evaluation practices changed publication patterns in the humanities by increasing the most common type of output (i.e., English-language journal articles). We did not focus in detail on changes in publication content. However, like others, we suggested that format changes would require complying with the topics, perspectives, and methods supported by the major journals in a field (Gantman & Fernandez, 2016; Koch & Vanderstraeten, 2019; López Piñeiro & Hicks, 2014). Hence, this international path might lead to the suppression of certain research themes and styles, and to abandoning important roles which social scientists have come to play in their societies (Sokolov, 2019). In this respect, foundations employing broad categories such as “social sciences and humanities” may create a situation where the use of metrics privileges research fields where it is easier to publish internationally (e.g., cognitive psychology versus history).
The evaluation of grant applications is a challenging task for any funding organization. The process consumes human resources, requiring extensive use of experts (Gyorffy et al., 2020). In countries with a good pool of researchers, quantitative metrics might be used for the initial screening, which would enable peer reviewers to focus on distinguishing among top-performing scholars (Azoulay & Li, 2020; Mali et al., 2017). In other words, reliance on metrics at an early stage can speed up the evaluation process, allowing expertise to be used where it is most needed, “without wasting resources for proposals that are highly likely to be accepted because of their authors' recent publication activities as well as for proposals that are unlikely to be accepted due to extremely weak prior publication performance” (Gyorffy et al., 2020).
The promise of bibliometric indicators is not limited to their capacity to lessen the administrative burden of evaluating grant applications. More importantly, the use of bibliometric indicators can counteract problems with objectivity and fairness in peer review. Evidence suggests that even in contexts with a strong scientific ethos, negative social phenomena such as nepotism are present (Bornmann, 2011). Sandström and Hällsten (2008) showed that applicants with a reviewer affiliation received higher scores than applicants without one. Van den Besselaar (2012) demonstrated that being part of a council's inner circle increases the number of grants received compared with applicants at a greater distance.
In countries where academic expertise is corrupted, the objectivity of decisions in review panels is a much more acute issue. The post-Soviet grant system lacks transparency in the selection process and has the potential for corruption (Batygin, 2001). Peer review is effective only when there is no shortage of competent academics and the evaluation results are trusted. However, expert status can be obtained without confirmation by significant scientific achievements, making expert evaluations less legitimate. Severe flaws in academic integrity in Russia have been widely acknowledged and discussed throughout the scientific community, by state officials, and by the general public. In Russia, even an academic degree, the primary signal of academic quality, may fail to be reliable proof of academic achievement (Sokolov, 2021).
In this context, bibliometric indicators might be considered an instrument that contributes to “the fairness of research evaluations by presenting ‘objective’ information to a peer review that would otherwise depend more on the personal views and experiences of the scientists appointed as referees” (Južnič et al., 2010: 431). (In addition to nepotism and cronyism, several studies have pointed to inherent conservatism in peer review, as reviewers may intentionally or unintentionally oppose truly innovative or high-risk research (Luukkonen, 2012). For example, Luukkonen (2012) found conservatism even in foundations with the explicit aim of selecting excellent and groundbreaking research proposals. Given that groundbreaking research is by nature risky and controversial, foundations need some guarantee that selection procedures identify the most capable scientists with impressive past achievements, and in this regard scientometric indicators of past performance provide an additional source of information (Luukkonen, 2012). The application of bibliometric information for selecting scientists capable of conducting innovative research projects is beyond the scope of this paper; we only suggest that new bibliometric indicators might be helpful in identifying scientific innovations. For example, several researchers have proposed the Disruption index, which measures whether a paper breaks with the past and pushes science and technology in new directions (Wu et al., 2019; Park et al., 2023).) Metrics are perceived as trustworthy because they result from the review process of scholarly journals and publishers (Langfeldt et al., 2021). The editors and reviewers of international journals are external experts, highly competent compared with local academics and free from incentives to influence the results of grant allocation. Južnič et al. (2010) presented evidence of how bibliometric indicators, in the absence of sound international peer review, could prevent reviewers from favorably evaluating former students or colleagues if their publication and citation records indicated underperformance compared with other applicants.
It remains highly debatable whether the use of bibliometric indicators can address concerns about the objectivity of peer review and its ability to predict significant research results. While a survey of reviewers showed that they had already been using metrics in evaluating grant proposals (Langfeldt et al., 2021), our knowledge of the possible effects of such a policy has been insufficient, as foundations have rarely relied explicitly on metrics in the proposal evaluation process (Mali et al., 2017). Developing this practice into a formal policy would make it more legitimate, allowing rules on how to employ metrics properly to be elaborated and additional information on policy effects to be gathered.
Funding information
This work is supported by the Russian Science Foundation (Grant No. 21-78-10102).
Author contributions
Katerina Guba (kguba@eu.spb.ru): Conceptualization (Equal), Methodology (Equal), Supervision (Lead), Writing - original draft (Equal), Writing - review & editing (Lead); Alexey Zheleznov (azheleznov@eu.spb.ru): Conceptualization (Equal), Data curation (Equal), Investigation (Equal), Writing - original draft (Equal); Elena Chechik (echechik@eu.spb.ru): Conceptualization (Equal), Data curation (Equal), Formal analysis (Equal), Writing - original draft (Equal).
References
[1]
Aagaard K., Kladakis A., & Nielsen M. W. (2020). Concentration or dispersal of research funding? Quantitative Science Studies, 1(1), 117-149, from https://doi.org/10.1162/qss_a_00002.

[2]
Abramo G., Cicero T., & D’Angelo C. A. (2013). Individual research performance: A proposal for comparing apples to oranges. Journal of Informetrics, 7(2), 528-539, from https://doi.org/10.1016/j.joi.2013.01.013.


[3]
Auranen O., & Nieminen M. (2010). University research funding and publication performance—An international comparison. Research Policy, 39(6), 822-834, from https://doi.org/10.1016/j.respol.2010.03.003.


[4]
Azoulay P., & Li D. (2020). Scientific grant funding. In Innovation and Public Policy. University of Chicago Press.

[5]
Batista P. D., Campiteli M. G., & Kinouchi O. (2006). Is it possible to compare researchers with different scientific interests? Scientometrics, 68(1), 179-189, from https://doi.org/10.1007/s11192-006-0090-4.


[6]
Batygin G. S. (2001). The invisible border: Grant support and restructuring the scientific community in Russia. Intellectual News, 9(1), 70-74, from https://doi.org/10.1080/15615324.2001.10426712.


[7]
Beckert J. (2019). Shall I publish this auf Deutsch or in English? Sociologica, 13(1), 3-7, from https://doi.org/10.6092/issn.1971-8853/9378.

[8]
Bertoni M., Brunello G., Checchi D., & Rocco L. (2021). Where do I stand? Assessing researchers’ beliefs about their productivity. Journal of Economic Behavior & Organization, 185, 61-80, from https://doi.org/10.1016/j.jebo.2021.02.025.


[9]
Bornmann L. (2011). Peer review and bibliometric: potentials and problems. In: Shin, J., Toutkoushian, R., Teichler, U. (eds) University Rankings. The Changing Academy - The Changing Academic Profession in International Comparative Perspective, vol 3. Springer, Dordrecht, 145-164, from https://doi.org/10.1007/978-94-007-1116-7_8.

[10]
Campbell D., Picard-Aitken M., Côté G., Caruso J., Valentim R., Edmonds S., et al. (2010). Bibliometrics as a performance measurement tool for research evaluation: The case of research funded by the National Cancer Institute of Canada. American Journal of Evaluation, 31(1), 66-83, from https://doi.org/10.1177/1098214009354774.


[11]
Chankseliani M., Lovakov A., & Pislyakov V. (2021). A big picture: bibliometric study of academic publications from post-Soviet countries. Scientometrics, 126(10), 8701-8730, from https://doi.org/10.1007/s11192-021-04124-5.


[12]
Clark B. Y., & Llorens J. J. (2012). Investments in Scientific Research: Examining the funding threshold effects on scientific collaboration and variation by academic discipline. Policy Studies Journal, 40(4), 698-729, from https://doi.org/10.1111/j.1541-0072.2012.00470.x


[13]
Denisova-Schmidt E. V. (2023). Academic dishonesty at Russian universities: A historical overview. Universe of Russia, 32(1), 159-181, from https://doi.org/10.17323/1811-038X-2023-32-1-159-181

[14]
Donovan C., & Butler L. (2007). Testing novel quantitative indicators of research ‘quality’, esteem and ‘user engagement’: An economics pilot study. Research Evaluation, 16(4), 231-242, from https://doi.org/10.3152/095820207X257030


[15]
Fang F. C., Bowen A., & Casadevall A. (2016). NIH peer review percentile scores are poorly predictive of grant productivity. eLife, 5, e13323, from https://doi.org/10.7554/eLife.13323

[16]
Fedderke J. W., & Goldschmidt M. (2015). Does massive funding support of researchers work?: Evaluating the impact of the South African research chair funding initiative. Research Policy, 44(2), 467-482, from https://doi.org/10.1016/j.respol.2014.09.009


[17]
Gantman E. R., & Fernández Rodríguez C. J. (2016). Literature segmentation in management and organization studies: The case of Spanish-speaking countries (2000-10). Research Evaluation, 25(4), 461-471, from https://doi.org/10.1093/reseval/rvv031

[18]
Gläser J. (2004). Why are the most influential books in Australian sociology not necessarily the most highly cited ones? Journal of Sociology, 40(3), 261-282, from https://doi.org/10.1177/1440783304046370


[19]
Gök A., Rigby J., & Shapira P. (2016). The impact of research funding on scientific outputs: Evidence from six smaller European countries. Journal of the Association for Information Science and Technology, 67(3), 715-730, from https://doi.org/10.1002/asi.23406


[20]
Graves N., Barnett A. G., & Clarke P. (2011). Funding grant proposals for scientific research: Retrospective analysis of scores by members of grant review panel. BMJ, 343, d4797, from https://doi.org/10.1136/bmj.d4797

[21]
Grimpe C. (2012). Extramural research grants and scientists’ funding strategies: Beggars cannot be choosers? Research Policy, 41(8), 1448-1460, from https://doi.org/10.1016/j.respol.2012.03.004.


[22]
Gush J., Jaffe A., Larsen V., & Laws A. (2018). The effect of public funding on research output: The New Zealand Marsden Fund. New Zealand Economic Papers, 52(2), 227-248, from https://doi.org/10.1080/00779954.2017.1325921.


[23]
Guskov A. E., Kosyakov D. V., & Selivanova I. V. (2018). Boosting research productivity in top Russian universities: The circumstances of breakthrough. Scientometrics, 117(2), 1053-1080, from https://doi.org/10.1007/s11192-018-2890-8.


[24]
Guthrie S., Ghiga I., & Wooding S. (2018). What do we know about grant peer review in the health sciences? F1000Research, 6(1335), from https://doi.org/10.12688/f1000research.11917.2

[25]
Győrffy B., Herman P., & Szabó I. (2020). Research funding: Past performance is a stronger predictor of future scientific output than reviewer scores. Journal of Informetrics, 14(3), 101050, from https://doi.org/10.1016/j.joi.2020.101050.

[26]
Győrffy B., Nagy A. M., Herman P., & Török Á. (2018). Factors influencing the scientific performance of Momentum grant holders: An evaluation of the first 117 research groups. Scientometrics, 117(1), 409-426, from https://doi.org/10.1007/s11192-018-2852-1.


[27]
Hammarfelt B., & Rijcke S. de (2015). Accountability in context: Effects of research evaluation systems on publication practices, disciplinary norms, and individual working routines in the faculty of Arts at Uppsala University. Research Evaluation, 24(1), 63-77, from https://doi.org/10.1093/reseval/rvu029.


[28]
Hicks D., Tomizawa H., Saitoh Y., & Kobayashi S. (2004). Bibliometric techniques in the evaluation of federally funded research in the United States. Research Evaluation, 13(2), 76-86, from https://doi.org/10.3152/147154404781776446.


[29]
Hornbostel S., Böhmer S., Klingsporn B., Neufeld J., & Ins M von. (2009). Funding of young scientist and scientific excellence. Scientometrics, 79(1), 171-190, from https://doi.org/10.1007/s11192-009-0411-5.


[30]
Jacob B. A., & Lefgren L. (2011). The impact of research grant funding on scientific productivity. Journal of Public Economics, 95(9), 1168-1177.


[31]
Južnič P., Pečlin S., Žaucer M., Mandelj T., Pušnik M., & Demšar F. (2010). Scientometric indicators: peer-review, bibliometric methods and conflict of interests. Scientometrics, 85(2), 429-441, from https://doi.org/10.1007/s11192-010-0230-8


[32]
Koch T., & Vanderstraeten R. (2019). Internationalizing a national scientific community? Changes in publication and citation practices in Chile, 1976-2015. Current Sociology, 67(5), 723-741, from https://doi.org/10.1177/0011392118807514.


[33]
Langfeldt L., Benner M., Sivertsen G., Kristiansen E. H., Aksnes D. W., Borlaug S. B., et al. (2015). Excellence and growth dynamics: A comparative study of the Matthew effect. Science and Public Policy, 42(5), 661-675, from https://doi.org/10.1093/scipol/scu083


[34]
Langfeldt L., Reymert I., & Aksnes D. W. (2021). The role of metrics in peer assessments. Research Evaluation, 30(1), 112-126, from https://doi.org/10.1093/reseval/rvaa032.


[35]
Larivière V., & Sugimoto C. R. (2019). The Journal Impact Factor: A brief history, critique, and discussion of adverse effects. In W. Glänzel, H. F. Moed, U. Schmoch, & M. Thelwall (Eds.), Springer Handbooks. Springer Handbook of Science and Technology Indicators. Cham: Springer International Publishing, from https://doi.org/10.1007/978-3-030-02511-3_1.

[36]
Laudel G. (2006). The art of getting funded: How scientists adapt to their funding conditions. Science and Public Policy, 33(7), 489-504, from https://doi.org/10.3152/147154306781778777.

[37]
Li D., & Agha L. (2015). Big names or big ideas: Do peer-review panels select the best science proposals? Science, 348(6233), 434-438, from https://doi.org/10.1126/science.aaa0185


[38]
López Piñeiro C., & Hicks D. (2015). Reception of Spanish sociology by domestic and foreign audiences differs and has consequences for evaluation. Research Evaluation, 24(1), 78-89, from https://doi.org/10.1093/reseval/rvu030


[39]
Luukkonen T. (2012). Conservatism and risk-taking in peer review: Emerging ERC practices. Research Evaluation, 21(1), 48-60, from https://doi.org/10.1093/reseval/rvs001.


[40]
Maisano D. A., Mastrogiacomo L., & Franceschini F. (2020). Short-term effects of non-competitive funding to single academic researchers. Scientometrics, 123(3), 1261-1280, from https://doi.org/10.1007/s11192-020-03449-x.


[41]
Mali F., Pustovrh T., Platinovšek R., Kronegger L., & Ferligoj A. (2017). The effects of funding and co-authorship on research performance in a small scientific community. Science and Public Policy, 44(4), 486-496, from https://doi.org/10.1093/scipol/scw076.

[42]
Marina T., & Sterligov I. (2021). Prevalence of potentially predatory publishing in Scopus on the country level. Scientometrics, 126(6), 5019-5077, from https://doi.org/10.1007/s11192-021-03899-x


[43]
Moed H. F., Markusova V., & Akoev M. (2018). Trends in Russian research output indexed in Scopus and Web of Science. Scientometrics, 116(2), 1153-1180, from https://doi.org/10.1007/s11192-018-2769-8.


[44]
Mongeon P., Brodeur C., Beaudry C., & Larivière V. (2016). Concentration of research funding leads to decreasing marginal returns. Research Evaluation, 25(4), 396-404, from https://doi.org/10.1093/reseval/rvw007.

[45]
Morillo F. (2019). Collaboration and impact of research in different disciplines with international funding (from the EU and other foreign sources). Scientometrics, 120(2), 807-823, from https://doi.org/10.1007/s11192-019-03150-8.


[46]
Mosbah-Natanson S., & Gingras Y. (2014). The globalization of social sciences? Evidence from a quantitative analysis of 30 years of production, collaboration and citations in the social sciences (1980-2009). Current Sociology, 62(5), 626-646, from https://doi.org/10.1177/0011392113498866.


[47]
Najman J. M., & Hewitt B. (2003). The validity of publication and citation counts for Sociology and other selected disciplines. Journal of Sociology, 39(1), 62-80, from https://doi.org/10.1177/144078330303900106.


[48]
Nederhof A. J. (2006). Bibliometric monitoring of research performance in the Social Sciences and the Humanities: A Review. Scientometrics, 66(1), 81-100, from https://doi.org/10.1007/s11192-006-0007-2.


[49]
Park M., Leahey E., & Funk R. J. (2023). Papers and patents are becoming less disruptive over time. Nature, 613(7942), 138-144, from https://doi.org/10.1038/s41586-022-05543-x


[50]
Paudel P. K., Giri B., & Dhakal S. (2020). Is research in peril in Nepal? Publication trend and research quality from projects funded by the University Grants Commission-Nepal. Accountability in Research, 27(7), 444-456, from https://doi.org/10.1080/08989621.2020.1768374.

[51]
Sandström U., & Hällsten M. (2008). Persistent nepotism in peer-review. Scientometrics, 74(2), 175-189, from https://doi.org/10.1007/s11192-008-0211-3


[52]
Saygitov R. T. (2014). The impact of funding through the RF President’s Grants for young scientists (the field - medicine) on research productivity: A quasi-experimental study and a brief systematic review. PLOS ONE, 9(1), e86969, from https://doi.org/10.1371/journal.pone.0086969

[53]
Sokolov M. (2019). The sources of academic localism and globalism in Russian sociology: The choice of professional ideologies and occupational niches among social scientists. Current Sociology, 67(6), 818-837, from https://doi.org/10.1177/0011392118811392


[54]
Sokolov M. (2021). Can Russian Research Policy be Called Neoliberal? A Study in the Comparative Sociology of Quantification. Europe-Asia Studies, 73(6), 989-1009, from https://doi.org/10.1080/09668136.2021.1902945


[55]
Sterligov I. (2017). The monster ten you have never heard of: Top Russian scholarly megajournals. Higher Education in Russia and Beyond, 11, 11-13.

[56]
Tonta Y. (2018). Does monetary support increase the number of scientific papers? An interrupted time series analysis. Journal of Data and Information Science, 3(1), 19-39, from https://content.sciendo.com/view/journals/jdis/3/1/article-p19.xml.


[57]
van den Besselaar P. (2012). Selection committee membership: Service or self-service. Journal of Informetrics, 6(4), 580-585, from https://doi.org/10.1016/j.joi.2012.05.003


[58]
van Raan A. F. J. (1998). In matters of quantitative studies of science the fault of theorists is offering too little and asking too much. Scientometrics, 43(1), 129-139, from https://doi.org/10.1007/BF02458401.


[59]
Wang J., & Shapira P. (2015). Is there a relationship between research sponsorship and publication impact? An analysis of funding acknowledgments in nanotechnology papers. PLOS ONE, 10(2), e0117727, from https://doi.org/10.1371/journal.pone.0117727.

[60]
Wang L. L., Wang X. W., Piro F. N., & Philipsen N. (2020). The effect of competitive public funding on scientific output. Research Evaluation, 2020(September), 1-13, from https://doi.org/10.1093/reseval/rvaa023

[61]
Wuchty S., Jones B. F., & Uzzi B. (2007). The Increasing Dominance of Teams in Production of Knowledge. Science, 316(5827), 1036-1039, from https://doi.org/10.1126/science.1136099


[62]
Yan E., Wu C. J., & Song M. (2018). The funding factor: A cross-disciplinary examination of the association between research funding and citation impact. Scientometrics, 115(1), 369-384, from https://doi.org/10.1007/s11192-017-2583-8.


[63]
Zhao S. X., Lou W., Tan A. M., & Yu S. (2018). Do funded papers attract more usage? Scientometrics, 115(1), 153-168, from https://doi.org/10.1007/s11192-018-2662-5.

