Research Paper

Does Monetary Support Increase the Number of Scientific Papers? An Interrupted Time Series Analysis

  • Yaşar Tonta
  • Department of Information Management, Faculty of Letters, Hacettepe University, 06800 Beytepe, Ankara, Turkey

Corresponding author: Yaşar Tonta.

Online published: 2018-03-19


Open Access

Abstract

Purpose: One of the main indicators of scientific production is the number of papers published in scholarly journals. Turkey ranks 18th in the world based on the number of scholarly publications. The objective of this paper is to find out whether the monetary support program initiated in 1993 by the Turkish Scientific and Technological Research Council (TÜBİTAK) to incentivize researchers and increase the number, impact, and quality of international publications has been effective in doing so.

Design/methodology/approach: We analyzed some 390,000 publications with Turkish affiliations listed in the Web of Science (WoS) database between 1976 and 2015, along with about 157,000 supported ones between 1997 and 2015. We used the interrupted time series (ITS) analysis technique (also known as “quasi-experimental time series analysis” or “intervention analysis”) to test whether TÜBİTAK’s support program helped increase the number of publications. We defined an ARIMA(1,1,0) model for the ITS data and observed the impact of TÜBİTAK’s support program in 1994, 1997, and 2003 (one, four, and 10 years after its start, respectively). The majority of publications (93%) were full papers (articles), which were used as the experimental group, while other types of contributions functioned as the control group. We also carried out a multiple regression analysis.

Findings: TÜBİTAK’s support program has had a negligible effect on the increase in the number of papers with Turkish affiliations. Moreover, the number of other types of contributions continued to increase even though they were not well supported, suggesting that TÜBİTAK’s support program is probably not the main factor behind the increase in the number of papers with Turkish affiliations.

Research limitations: Interrupted time series analysis shows whether the “intervention” has had a significant effect on the dependent variable, but it does not explain what caused the increase in the number of papers if it was not the intervention. Moreover, no “event(s)” other than the “intervention” that might affect the time series data (e.g., an increase in the number of research personnel over the years) should occur during the period of analysis, a prerequisite that is beyond the control of the researcher.

Practical implications: TÜBİTAK’s “cash-for-publication” program did not seem to have a direct impact on the increase in the number of papers published by Turkish authors, suggesting that small payments are not much of an incentive for authors to publish more. It might be a better strategy to concentrate limited resources on a few high-impact projects rather than to disperse them to thousands of authors as “micropayments.”

Originality/value: Based on 25 years’ worth of payments data, this is perhaps one of the first large-scale studies showing that “cash-for-publication” policies or “piece rates” paid to researchers tend to have little or no effect on researchers’ productivity. The main finding of this paper has implications for countries where publication subsidies are used as an incentive to increase the number and quality of papers published in international journals. They should consider reviewing their existing support programs (usually based on bibliometric measures such as journal impact factors) and revising their reward policies.

Cite this article

Yaşar Tonta. Does Monetary Support Increase the Number of Scientific Papers? An Interrupted Time Series Analysis[J]. Journal of Data and Information Science, 2018, 3(1), 19-39. DOI: 10.2478/jdis-2018-0002

1 Introduction

The number of scholarly papers and citations thereto are indirect indicators of the level of scientific development of countries. The number of scholarly papers with Turkish affiliations listed in citation indexes has increased tremendously over the years, and Turkey ranks 18th in the world in terms of number of publications. Over 36,000 papers were published in 2015 alone, although their scientific impact in terms of the citations they gather is well below the world, European Union (EU), and OECD averages.
In 1993, the Turkish Scientific and Technological Research Council (TÜBİTAK) initiated a monetary support program (UBYT) to incentivize researchers and increase the number, impact, and quality of international publications authored by Turkish researchers. Considerable percentages of papers with Turkish affiliations were supported in the early years of this program, even though the rate of support has gradually decreased (to c. 30%) over the years due to the steep increase in the number of published papers with Turkish affiliations. As part of the program, some 157,000 publications (93% of which were papers/articles) were supported between 1997 and 2015. The amount of support paid for each paper has been determined on the basis of the impact factor of the journal in which it was published.
The total amount of support was about 124 million Turkish Liras (in 2015 current prices; equal to c. 35 million USD). The number of papers supported, the total number of publications, and the amount of support increased four-, 10- and 13-fold, respectively, during this period.
The support program has been in place for almost a quarter century. Yet, its impact had not previously been evaluated. TÜBİTAK asked us to evaluate the effectiveness of the program and provided the payment records of 157,000 supported publications. These records included, among other fields, journal information (name, year, class based on Journal Citation Reports’ subject categories), type of contribution (e.g., article, review), and the amount of support.
Based on the payment records provided, we analyzed the characteristics (i.e., impact factors) of the journals in which supported papers with Turkish affiliations appeared, studied the functioning of the support algorithm, and evaluated the effectiveness of the overall support program. Findings indicate that, owing to the skewed distributions of journal impact factors used to determine the amount of support, it is mostly the authors of mediocre papers published in journals with relatively low impact factors who have been supported. The existing support algorithm, moreover, does not seem to function as conceived.
This paper presents only the findings of the interrupted time series analysis, with a view to finding out whether the support program has had any impact on the increase in the number of papers with Turkish affiliations. It is organized as follows: The Literature Review section briefly discusses the findings of relevant studies, including those that provide some background on the Turkish case. The Data Sources and Method section describes the data used and provides information on interrupted time series analysis. The detailed findings are presented thereafter (Findings and Discussion) along with the limitations of the study. The paper ends with Conclusions.

2 Literature Review

Performance-based research funding systems (PRFSs) came into being in the 1980s. Based on rewarding outputs, the rationale of PRFSs is to provide more support to institutions (or individuals) with higher performances so that the ones with lower performances will strive to improve theirs in order to get more support (Herbst, 2007). Yet, it is not clear if PRFSs based on outputs and competition increase scientific productivity and the impact of outputs. In a relatively recent study comparing PRFSs and outputs of eight countries, countries with less competitive PRFSs such as Denmark turned out to be as effective as the ones with more competitive PRFSs such as the UK and Australia (Auranen & Nieminen, 2010). Some researchers drew attention to the potential “side effects” of PRFSs based on competition, as they tend to “homogenize” research outputs, discourage experiments using new approaches, and reward researchers playing “safe” even though their contributions may not have any societal impact (Geuna & Martin, 2003). The idea of increasing productivity on the basis of outputs and competition seems more complicated than decision-makers initially thought (Auranen & Nieminen, 2010). For instance, there appears to be some evidence (albeit with relatively small effect sizes) that China’s “cash-for-publication” policy tends to increase researchers’ productivity (Heywood, Wei, & Ye, 2011). Yet, such cash incentives for publications, which are in effect in China, South Korea, and Turkey, seem to increase the number of submissions but are negatively correlated with acceptance rates (Franzoni, Scellato, & Stephan, 2011).1

1 Based on the countries of the first authors of papers submitted to the journal Science between 2000 and 2009, some 6,228, 1,345, and 84 papers came from China, South Korea, and Turkey, respectively. Yet, only 93 papers from China (1.5%), 18 from South Korea (1.3%), and 3 from Turkey (3.6%) were accepted for publication during this period (Franzoni, Scellato, & Stephan, 2011). The numbers of submitted and accepted papers from the respective countries come from the Excel tables included in the Supporting Online Material of that article; we calculated the acceptance rates from the figures provided.
There are mainly two types of PRFSs in use: (1) those based on peer review or on informed peer review supported with bibliometric measures; and (2) those based solely on bibliometric measures such as journal impact factors. The UK’s Research Excellence Framework (REF) is the largest research assessment system in the world (De Boer et al., 2015). Based on peer review, it has been used since 1986 to distribute funds to research institutes and universities on the basis of their performances. Despite their shortcomings, PRFSs based solely on bibliometric measures are on the rise, as they are easier and less costly to apply than peer review as a “proxy” for assessing performance. They have therefore lately been preferred by an increasing number of countries.
PRFSs and publication support systems based on bibliometric measures generally use the number of papers published in refereed journals and their impact in terms of citations as the main criteria to determine the research institutes and researchers to be supported. Impact factors (IF) and article influence scores (AIS) of journals are the two most commonly used metrics.
Journal IF was originally proposed by the late Eugene Garfield (1972) to help librarians in their selection of journals for subscription. It is an indicator of the quality of a journal in general and measures the citation impact of an “average” paper published therein. It does not say anything about the quality of an individual paper in that journal or how many citations, if any, it would gather in a certain period of time after its publication (e.g., two years).
Citation distributions used to calculate the IFs of journals are quite skewed: a few papers published in a given journal get cited much more frequently while the majority go unnoticed or are rarely cited (Marx & Bornmann, 2013). This is the case even for the most prestigious journals with the highest IFs such as Nature (IF = 38) and Science (IF = 35); as many as 75% of the articles published in these journals get cited fewer times than their journal IFs indicate (Larivière et al., 2016, Table 2). Journal IFs vary by scientific discipline, too, as the number of researchers in each field, publication types (i.e., journal articles as opposed to books), and scholarly communication patterns tend to differ. In general, some 9%-10% of all the articles listed in the Web of Science collect 44% of the total number of citations (Albarrán et al., 2011). More importantly, there exists no positive relationship between the number of citations that an article gets and the IF of the journal in which it is published (Zhang, Rousseau, & Sivertsen, 2017), and a large body of literature detailing the shortcomings of the use of journal IFs as a performance measure is readily available (e.g., Casadevall & Fang, 2012; Glänzel & Moed, 2002; Marx & Bornmann, 2013; Seglen, 1997; van Raan, 2005; Wouters et al., 2015). Journal IFs are quite misleading in predicting the number of citations that any given article might get. Yet, rather than checking the citations to the papers of individual researchers, PRFSs based on bibliometric measures continue to use journal IFs to assess the performance of individuals. What follows are a few examples of PRFSs using journal IFs as a research assessment tool.
PRFSs are reviewed by several researchers (e.g., De Boer et al., 2015; European Commission, 2010; Geuna & Martin, 2003; Hicks, 2012). Most EU countries, Norway, USA, Australia, New Zealand, and China have some PRFSs in place. We provide a few examples of PRFSs that either solely use journal IF or use it in combination with peer review (excluding the ones based only on peer review such as REF in the UK).
Italy uses a PRFS where an expert panel decides whether to use citation analysis or peer review (or both) for each publication. Universities are ranked on the basis of a quality score consisting of citations and other journal metrics, which determine the amount of support each university gets. Some 30% of the research funds are distributed according to the outcome of this evaluation (Abramo, D’Angelo, & Di Costa, 2011; Abramo & D’Angelo, 2011, 2016).
Similarly, Spain uses a mixed system, although researchers are encouraged to publish in journals that are listed in the top quarters of JCR’s subject categories. Researchers who publish in such journals receive monetary support that ranges somewhere between 3% and 15% of their monthly salaries (Osuna, Cruz-Castro, & Sanz-Menéndez, 2011).
A number of countries, such as the Czech Republic, China, Finland, and Australia, use journal IF exclusively to support research institutes and individual researchers. Finland, for instance, linked journal IF directly to research support by legislation (Adam, 2002). Similarly, Australia and the Czech Republic make a direct linkage between research evaluation and funding by counting scholarly outputs and assigning a score to each on the basis of bibliometric measures. These scores are then used to determine the amount of monetary support, and papers that appear in refereed journals or in journals with relatively higher IFs get the highest scores (Butler, 2003; Butler, 2004; Good et al., 2015, Table 3). Norway has a similar system that weights journals on the basis of various criteria and groups them into three different journal lists (Schneider, 2009). China, on the other hand, uses journal IF most comprehensively: academic recruitment and promotion, university rankings (and the amount of research support universities get), and the support of Chinese journals listed in Chinese citation indexes all rely on journal IFs. The procedure seems to have been automated, as a researcher publishing in a journal with a certain IF knows how much support s/he will get. For instance, the author of a paper published in a journal with an IF higher than 15 receives 300,000 Yuan (c. 43,000 USD) (Shao & Shen, 2012)! However, the use and appropriateness of such formulaic approaches have been questioned lately, with a suggestion that China needs “to rethink its metrics- and citation-based research reward policies” (Teixeira da Silva, 2017).
Turkey is no exception: journal IFs have been considered an indicator of quality and used as an important criterion in academic promotions since the early 1990s. In addition to individual universities’ schemes, TÜBİTAK initiated a nationwide monetary support system based exclusively on journal IFs; journals classified under Q1, Q2, etc., in JCR’s subject categories have been used to determine the monetary compensation (Tonta, 2015). More recently (2016), the Turkish Higher Education Council (HEC) started a new support scheme based mostly on journal IFs, under which faculty members whose scores for academic activities (mostly publications) during the previous year are above a certain threshold receive an additional 10% to 15% on top of their regular monthly salaries throughout the year (Akademik, 2015).
It should be noted that performance-based research funding and publication support systems based on quantitative measures tend to have some adverse effects. Researchers seem to adjust to the requirements very easily and change their publication patterns and behaviors. Such systems are prone to “gaming,” too, and researchers become more “opportunistic” (e.g., publication “inflation”) and less ethical (e.g., “fake” citations) over time. Unintended consequences of PRFSs in several countries (e.g., Australia, the Czech Republic, and Spain) have been reported in the literature (Butler, 2003; Butler, 2004; Good et al., 2015; Osuna, Cruz-Castro, & Sanz-Menéndez, 2011; Tonta, 2014). For example, more papers tend to get published in journals with relatively lower IFs. A similar trend has also been observed in Turkey (Kamalski et al., 2017; Önder et al., 2008; Yurtsever et al., 2001, 2002). As Goodhart’s Law states, “When a measure becomes a target, it ceases to be a good measure.”
It should also be noted that the correlation between competitive PRFSs and research productivity is not clear-cut (Auranen & Nieminen, 2010). Excessive competition seems to reduce the time and energy otherwise expended on research. In this paper, we test whether TÜBİTAK’s publication support system has had an impact on the increase in the number of publications with Turkish affiliations listed in citation indexes.

3 Data Sources and Method

We performed a search on Web of Science (WoS) (December 19, 2016) to identify all the publications with Turkish affiliations listed in Science Citation Index (SCI), Social Sciences Citation Index (SSCI) and Arts & Humanities Citation Index (A&HCI) between 1976 and 2015. More than 390,000 records were retrieved, 81% of which were full papers (articles) while the rest were other types of publications (e.g., reviews, notes, and letters to the editor).
TÜBİTAK provided the payment data for about 157,000 supported publications (93% of which were papers). These records were first cleaned, then coded as either “full papers” (articles) or “other” types of publications, classified under various criteria (e.g., year, class of journal, amount of support paid), ranked and combined, if necessary.
We used MS Excel and SPSS 23 for the detailed analysis of data and prepared both WoS and TÜBİTAK records for interrupted time series analysis outlined below (Interrupted, 2013). (See Appendix A for time series data prepared for interrupted time series analysis.)
The interrupted time series (ITS) analysis technique (also known as quasi-experimental time series analysis or intervention analysis) is used in this paper to measure the impact of TÜBİTAK’s support program. ITS analysis measures whether an “event” occurring at a given point has an immediate or delayed effect on the time series data. For instance, an unexpected political development in a given country may increase exchange rates, or a terrorist attack may reduce the number of tourists. These “events” (called “interventions”) may be planned or unplanned. As ITS analysis is a quasi-experimental method, it is possible (by means of a control group) to verify whether the change occurred because of the intervention.
ITS analysis is based on the following statistical model:

$$Y_t = \beta_{\text{pre}} + \beta_{\text{post}} + e_t, \qquad (1)$$

where $Y_t$ represents the $t$'th observation in the time series, $\beta_{\text{pre}}$ and $\beta_{\text{post}}$ represent the levels of the series before and after the intervention, respectively, and $e_t$ is the error associated with $Y_t$. The null hypothesis

$$H_0\colon \beta_{\text{pre}} - \beta_{\text{post}} = 0, \qquad (2)$$

states that there is no statistically significant difference between the levels of the series before and after the intervention (i.e., the intervention has no impact on the dependent variable) (McDowall et al., 1980). It is assumed that the parameters of the time series model stay the same before and after the intervention and that no other events affecting the parameters take place. ITS analysis can be applied to both static and dynamic (“ergodic”) time series. The ARIMA model is used for non-stationary series whose arithmetic means, variances, and co-variances change over time. This model is expressed as ARIMA(p, d, q), where p, d, and q represent the orders of the autoregressive operator (AR), the integrated (differencing) operator (I), and the moving average operator (MA), respectively. If the time series is not stationary, it is first differenced (d times) to make its mean and variance constant over the period studied.
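To make the procedure concrete, the following is a minimal sketch (in Python with statsmodels, not the authors’ SPSS workflow) of an interrupted time series fit of this kind: an ARIMA(1,1,0) model with a time trend plus step and ramp intervention regressors dated at 1993. The paper counts are taken from Appendix A; the regressor coding is an illustrative assumption, not necessarily the exact design used in this study.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Annual numbers of papers with Turkish affiliations, 1976-2015 (Appendix A).
papers = np.array([
    216, 229, 272, 256, 343, 299, 315, 354, 420, 447,
    506, 588, 672, 829, 912, 1134, 1351, 1519, 1754, 2233,
    3359, 3844, 4460, 5201, 5462, 6684, 8985, 10662, 13199, 14194,
    15070, 17853, 19327, 21655, 22833, 23588, 25254, 26526, 27242, 28662],
    dtype=float)
years = np.arange(1976, 2016)

# Intervention regressors: a pre-existing time trend, a level shift (step)
# at the 1993 start of the program, and a slope change (ramp) thereafter.
trend = np.arange(1, len(years) + 1, dtype=float)
step = (years >= 1993).astype(float)
ramp = np.where(years >= 1993, years - 1992, 0).astype(float)
exog = np.column_stack([trend, step, ramp])

# ARIMA(1,1,0) with intervention regressors, in the spirit of Eqs. (1)-(2).
res = ARIMA(papers, exog=exog, order=(1, 1, 0)).fit()
print(res.summary())  # non-significant step/ramp terms would indicate
                      # that the intervention had no detectable effect
```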
We have WoS data on publications with Turkish affiliations (1976-2015) and data on publications supported by TÜBİTAK (1997-2015). The program (the “intervention”) started in 1993, and enough data points exist both before (1976-1992) and after (1993-2015) the intervention to apply ITS analysis to the time series data (Cochrane, 2002).
It is not always easy to determine exactly when a performance-based funding system is introduced in a given institution and how long it takes for the system to start to have some impact on the publication output of that institution (van den Besselaar, Heyman, & Sandström, 2017; Butler, 2017; Hicks, 2017). We took the date of the decision of TÜBİTAK’s Scientific Board to initiate the support program (June 12, 1993) as the starting date. As relatively few researchers benefited from the support program in its early years, we thought that the effect of the program might be observed with some delay (lag), and we therefore measured its delayed effect one (1994), four (1997), and 10 years (2003) after its start.
We have no data on papers (full articles) whose authors were not supported. However, the relatively small group of authors of other types of contributions can function as a control group, as only 3% of the total amount of support on average was set aside for such contributions even though 19% of publications were of this nature. The authors of other types of contributions were paid half of what the authors of full papers were, and a mere 1% of the support budget was allocated to them in 2013, for example.2 In other words, we can find out whether TÜBİTAK’s support program has had any impact on the increase in the number of papers by comparing it with that of other types of contributions. If the number of other types of contributions that were not well supported did not increase but the number of supported papers increased, we can deduce that the source of the impact was the support program. Conversely, if, despite the lack of support, the number of other types of contributions increased along with the number of papers receiving full monetary support, then the increase in the latter cannot be attributed to the program, suggesting that some factor(s) other than the support program may have played a role in this increase.

2 This percentage should ideally be 0 (zero) in order for these contributions to function as a true control group. Yet, we think that they can be used as a control group with some caution, and the generalization should be interpreted accordingly.

4 Findings and Discussion

The descriptive data about the number of papers and the total number of publications originating from Turkey are presented in Table 1 and Figure 1. The rate of increase is quite steep, especially from the 2000s onwards. This made Turkey one of the fastest-growing countries in the world in terms of number of papers in those years: it moved up quickly from 45th in 1983 to 25th in 1999 and to 18th in 2008, contributing 1.56% of the world’s overall scientific production.
Table 1 Number of publications with Turkish affiliations (1976-2015).
Year  Papers (N)  Papers (%)  Other (N)  Other (%)  Total (N)
1976  216  80  53  20  269
1977  229  72  91  28  320
1978  272  72  108  28  380
1979  256  71  106  29  362
1980  343  74  123  26  466
1981  299  73  110  27  409
1982  315  70  132  30  447
1983  354  72  141  28  495
1984  420  77  129  23  549
1985  447  76  145  24  592
1986  506  77  151  23  657
1987  588  77  174  23  762
1988  672  75  227  25  899
1989  829  80  209  20  1,038
1990  912  78  261  22  1,173
1991  1,134  80  290  20  1,424
1992  1,351  77  406  23  1,757
1993  1,519  76  482  24  2,001
1994  1,754  73  643  27  2,397
1995  2,233  72  885  28  3,118
1996  3,359  84  623  16  3,982
1997  3,844  83  796  17  4,640
1998  4,460  82  1,001  18  5,461
1999  5,201  83  1,078  17  6,279
2000  5,462  84  1,059  16  6,521
2001  6,684  84  1,271  16  7,955
2002  8,985  86  1,434  14  10,419
2003  10,662  84  1,978  16  12,640
2004  13,199  84  2,488  16  15,687
2005  14,194  83  2,877  17  17,071
2006  15,070  79  4,099  21  19,169
2007  17,853  80  4,414  20  22,267
2008  19,327  82  4,379  18  23,706
2009  21,655  82  4,627  18  26,282
2010  22,833  83  4,760  17  27,593
2011  23,588  82  5,325  18  28,913
2012  25,254  82  5,607  18  30,861
2013  26,526  79  7,200  21  33,726
2014  27,242  79  7,315  21  34,557
2015  28,662  79  7,530  21  36,192
Total / Avg.  318,709  81  74,727  19  393,436
Figure 1. Number of papers and total number of publications with Turkish affiliations (1976-2015).
A considerable percentage of these publications were supported by TÜBİTAK’s support program when it was first initiated in 1993. However, the support program seems not to have kept pace with the increase in papers, and the percentage of papers supported went down from about 70% in the early 2000s to below 30% in recent years (Table 2, Figure 2).
Table 2 Number of papers supported by TÜBİTAK (1997-2015).
Year # of papers supported by TÜBİTAK # of papers with Turkish affiliations (WoS) Percentage supported (%)
1997 2,247 3,844 58
1998 2,657 4,460 60
1999 3,088 5,201 59
2000 3,298 5,462 60
2001 4,216 6,684 63
2002 5,888 8,985 66
2003 7,517 10,662 71
2004 9,511 13,199 72
2005 7,036 14,194 50
2006 8,122 15,070 54
2007 10,551 17,853 59
2008 10,411 19,327 54
2009 11,554 21,655 53
2010 11,592 22,833 51
2011 9,574 23,588 41
2012 10,641 25,254 42
2013 10,203 26,526 38
2014 10,257 27,242 38
2015 8,014 28,662 28
Total 146,377 300,701 49
Figure 2. Number of papers listed in WoS with Turkish affiliations and supported by TÜBİTAK (1997-2015).
The detailed analysis of changes in TÜBİTAK’s support policies over the years is beyond the confines of this paper (see Tonta, 2017b). Instead, we concentrate on whether TÜBİTAK’s support program has actually played a role in the steep rate of increase of papers by Turkish researchers. The time path of the number of papers listed in the Web of Science (WoS) originating from Turkey between 1976 and 2015 is given below (Figure 3). The intervention point (1993) is marked on the graph. As there exists an increasing trend in the number of papers both before and after the intervention, we took the first difference of the time series (d = 1) to make it stationary. Consequently, the autocorrelation function (ACF) and partial autocorrelation function (PACF) of the time series stayed within the confidence intervals (Figure 4).
Figure 3. Time path of papers with Turkish affiliations (1976-2015).
Figure 4. Correlograms of autocorrelation (ACF) and partial autocorrelations (PACF) functions.
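As an illustration of this stationarity check, the following sketch (Python with statsmodels; a hypothetical re-implementation, not the SPSS procedure used in the study) differences the series once and draws correlograms like those in Figure 4, reusing the `papers` array from the earlier sketch.

```python
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

diff = np.diff(papers)  # first difference (d = 1) of the annual paper counts

# Correlograms of the differenced series (cf. Figure 4): bars staying within
# the confidence band suggest the differenced series is stationary.
fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(6, 6))
plot_acf(diff, ax=ax1, lags=15)
plot_pacf(diff, ax=ax2, lags=15)
plt.tight_layout()
plt.show()
```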
We then defined an ARIMA(1,1,0) model for the interrupted time series data and examined the impact of TÜBİTAK’s support program in 1994, 1997, and 2003 (one, four, and 10 years after its start, respectively). The test statistic of the ARIMA model shows that the defined model is suitable for the time series data (Χ² = 23.531, DF = 17, p = .133) (Table 3). The parameters of the ARIMA model (estimates, SE, t- and p-values) are given in Table 4. The ARIMA model did not produce statistically significant results (coefficient = .153, SE = .170, t = .899, p = .375). The coefficient for “Time series” in Table 4 gives the slope of the regression line before the intervention (14.051), which is used to take the existing trend in the data into account before calculating the effect of the intervention. The coefficient for “Before/after Support Program” represents the level of the series at the intervention point (the value on the y-axis when x equals 0) and is used to measure the effect of the intervention at later time points. The coefficient for “Effect” (29.091) gives the difference between the slopes before and after the intervention. By adding this difference to the pre-intervention slope (14.051), the post-intervention slope (43.142) is obtained (Interrupted, 2013).
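In symbols, the post-intervention slope reported above is simply the sum of the pre-intervention slope and the “Effect” coefficient:

$$\hat{\beta}_{\text{post}} = \hat{\beta}_{\text{pre}} + \hat{\beta}_{\text{effect}} = 14.051 + 29.091 = 43.142.$$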
Table 3 Test statistic (Ljung-Box).
Model: Number of papers (“Makale sayısı”), Model 1
Number of predictors: 3
Model fit (stationary R-squared): .607
Ljung-Box Q(18): statistic = 23.531, DF = 17, Sig. = .133
Number of outliers: 0
Table 4 ARIMA model parameters (dependent variable: number of papers, Model 1, no transformation).
Parameter | Estimate | SE | t | Sig.
Constant | -57.138 | 334.811 | -.171 | .866
AR Lag 1 | .153 | .170 | .899 | .375
Difference | 1 | - | - | -
Time series (Numerator, Lag 0) | 14.051 | 29.910 | .470 | .642
Before/after Support Program (Numerator, Lag 0) | 11.258 | 708.202 | .016 | .987
Effect (Numerator, Lag 0) | 29.091 | 36.715 | .792 | .434
In order to see the effect of the support program on the number of papers with Turkish affiliations, we continued with this model. The pre- and post-intervention slopes are the same for all analyses. It is possible to see the direct effect of the intervention on the number of papers with Turkish affiliations (Table 5). According to the model, an additional 564 papers were published in 1994 because of the support program. However, this effect is not statistically significant (p = .157). Nor did the delayed effect of the program materialize in later years: the additional numbers of papers attributable to the program were limited (651 papers in 1997 and 826 in 2003) and not statistically significant (p > .05). As the effect of the program was negligible, the formula of the effect of the intervention is not given.
Table 5 Values showing the delayed effect of TÜBİTAK’s support program.
Year Predicted increase SE t-value p-value
1994 563.633 390.084 1.446 .157
1997 651.241 431.129 1.510 .140
2003 825.784 571.279 1.446 .157
2015 1,174.941 947.761 1.240 .224
Despite the fact that other types of contributions (non-papers) received very little support during the period of analysis, their rate of increase is on a par with that of the generously supported papers (see Figure 5). The slopes of the linear regression lines for papers and non-papers are almost identical relative to their scales, with corresponding R² values (y = 738.01x − 1×10⁶, R² = 0.814 for papers; and y = 173.78x − 344,912, R² = 0.766 for non-papers). As a control group, the continuous increase in other types of publications seems to confirm the results of the interrupted time series analysis. For instance, some 4,000-7,000 other types of publications have been published annually in recent years, of which only a few hundred were supported. Yet, the number of other publications continues to increase regardless of support, suggesting that TÜBİTAK’s support program is probably not the main factor causing the increase in the number of papers with Turkish affiliations. The main finding of this paper is, to some extent, in line with the evidence that researchers with Turkish affiliations do not seem to attach much importance to TÜBİTAK’s “cash-for-publication” program (Yuret, 2017).
Figure 5. Rate of increase of papers and non-papers.
Note: The scales of the left and right y-axes are different. The y-axis on the left represents the number of papers while the one on the right represents the number of non-papers.
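The slope comparison can be reproduced from Table 1 alone; the following sketch (Python; an illustrative re-computation, not the Excel workflow used in the study) fits ordinary least-squares lines to the annual counts of papers and non-papers.

```python
import numpy as np

years = np.arange(1976, 2016)
# "Other" (non-paper) counts from Table 1, 1976-2015.
others = np.array([
    53, 91, 108, 106, 123, 110, 132, 141, 129, 145,
    151, 174, 227, 209, 261, 290, 406, 482, 643, 885,
    623, 796, 1001, 1078, 1059, 1271, 1434, 1978, 2488, 2877,
    4099, 4414, 4379, 4627, 4760, 5325, 5607, 7200, 7315, 7530],
    dtype=float)

# np.polyfit returns [slope, intercept]; `papers` is the array from the
# first sketch (Table 1 / Appendix A).
slope_p, icept_p = np.polyfit(years, papers, 1)
slope_o, icept_o = np.polyfit(years, others, 1)
print(f"papers:     y = {slope_p:.2f}x {icept_p:+.0f}")  # cf. y = 738.01x - 1e6
print(f"non-papers: y = {slope_o:.2f}x {icept_o:+.0f}")  # cf. y = 173.78x - 344,912
```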

5 Limitations of the Study

It should be noted, though, that interrupted time series analysis has some limitations. The assumption that no other “event” or “events” that might have affected the time series data occurred during the period of analysis is one of them. For example, the prerequisite of having papers published in journals listed in citation indexes for academic promotion may have triggered this increase, as more than 90% of research in Turkey is carried out in universities, and the number of academic personnel in universities has increased tremendously over the years. Moreover, in addition to the number of research personnel in universities, the number of papers may be increasing due to a number of other factors, such as the number of researchers per 10,000 population and the share of R&D expenditures in the gross domestic product (GDP). As indicated earlier, even though some positive correlation between PRFSs and the number of papers has been observed, this does not necessarily point to a strong causality between the two. As was the case in Spain (Osuna, Cruz-Castro, & Sanz-Menéndez, 2011), the number of papers with Turkish affiliations may continue to increase not because of TÜBİTAK’s support program but because of other factors such as the growth in and maturity of universities’ research systems, including academic personnel.
We should also note that interrupted time series analysis tells us whether or not the intervention has had a significant effect on the dependent variable, but it does not tell us what caused the increase in the number of papers if it was not the intervention. To find this out, we carried out a multiple regression analysis and observed fairly strong correlations between the number of papers with Turkish affiliations and the number of academic personnel as well as the number of supported papers. However, we decided not to report the results of the multiple regression analysis, as the Durbin-Watson statistic was rather small (0.921), probably indicating serial autocorrelation between variables and thereby making the results less reliable. This can to some extent be observed in Figure 2: the correlation between the number of papers with Turkish affiliations and the number of supported papers was positive and statistically significant between 1997 and 2006, whereas it was negative and not statistically significant between 2007 and 2015.
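As an illustration of this diagnostic, here is a minimal sketch (Python with statsmodels; a simplified, hypothetical variant with a single predictor, since the academic-personnel series is not reproduced in this paper) that regresses the WoS paper counts on the supported-paper counts from Table 2 and computes the Durbin-Watson statistic of the residuals.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

# Table 2, 1997-2015: papers supported by TÜBİTAK and papers in WoS.
supported = np.array([2247, 2657, 3088, 3298, 4216, 5888, 7517, 9511, 7036,
                      8122, 10551, 10411, 11554, 11592, 9574, 10641, 10203,
                      10257, 8014], dtype=float)
wos = np.array([3844, 4460, 5201, 5462, 6684, 8985, 10662, 13199, 14194,
                15070, 17853, 19327, 21655, 22833, 23588, 25254, 26526,
                27242, 28662], dtype=float)

ols = sm.OLS(wos, sm.add_constant(supported)).fit()
dw = durbin_watson(ols.resid)      # values well below 2 indicate positive
print(f"Durbin-Watson: {dw:.3f}")  # serial correlation in the residuals
```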
For a more definitive answer to the question of whether TÜBİTAK’s support program has had any effect on the increase in the number of papers with Turkish affiliations, a true control group is needed. In other words, the rate of increase of papers supported by TÜBİTAK needs to be compared with that of non-supported ones. Even such a comparison may not be sufficient to reveal the causality, should there be any, between TÜBİTAK’s support program and the steep increase in the number of papers with Turkish affiliations. For this, individual-level data for both TÜBİTAK-supported and non-supported papers are needed to see if the increase is due to the increased productivity of: (1) the same researchers benefiting from TÜBİTAK’s support program; (2) more researchers responding to TÜBİTAK’s cash incentives; (3) researchers who have not sought TÜBİTAK support for their papers in the past at all; or (4) a combination of some or all of the above.

6 Conclusions

As part of TÜBİTAK’s support program, the authors of over 157,000 publications received more than 124 million Turkish Liras (in 2015 current prices, c. 35 million USD) as monetary support between 1997 and 2015. Yet, two thirds of all payments were less than 826 liras (c. 230 USD). These “micropayments” might be one of the reasons why, according to the results of the interrupted time series analysis, the program did not seem to have a direct impact on the increase in the number of papers published by Turkish authors. It is likely that such small payments were not much of an incentive for authors to publish more.
We should point out that the objective of the support program is not to increase the number of papers per se but to increase their impact and quality, as stated in the By-Law of TÜBİTAK’s support program (TÜBİTAK, 2016). Some authors may find the small payments satisfactory. Yet, if such small payments do not help achieve the program’s objectives, precautions should be taken to correct this. The support program seems to have functioned as a mechanism to transfer small amounts of money to authors without any considerable improvement in the impact and quality of their papers. The transaction costs of such small payments should be borne in mind, as well as the opportunity costs of forgoing increases in the impact and quality of papers. For instance, it might be a better strategy to concentrate limited resources on a few high-impact projects rather than to disperse them every year as “pocket money” to the authors of some 10,000 papers that appear mostly in journals with relatively low impact factors. The sustainability of the existing support program should also be considered, and its impact should be monitored more often.
Such support programs should function as leverage to speed up the scientific and economic development of countries. A thorough study as to why the support program did not seem to function as intended should be carried out. After a comprehensive review of existing support programs, new policies should be instituted to increase the impact and quality of scientific papers originating from Turkey, and TÜBİTAK’s support program should be redesigned accordingly.
Based on 25 years’ worth of payments data, this is perhaps one of the first large-scale studies showing that “cash-for-publication” policies or “piece rates” paid to researchers tend to have little or no effect on researchers’ productivity. The main finding of this paper has implications for countries where publication subsidies are used as an incentive to increase the number and quality of papers published in international journals. They should consider reviewing their existing support programs (usually based on bibliometric measures such as journal impact factors) and revising their reward policies.
Appendix A. Time series data prepared for interrupted time series analysis (1976-2015)
Time series # of pubs # of papers Stage Impact Pre-int 1 Post-int 1 Pre-int 4 Post-int 4 Pre-int 10 Post-int 10 Pre-int 21 Post-int 21
1 269 216 0 0 1 0 1 0 1 0 1 0
2 320 229 0 0 2 0 2 0 2 0 2 0
3 380 272 0 0 3 0 3 0 3 0 3 0
4 362 256 0 0 4 0 4 0 4 0 4 0
5 466 343 0 0 5 0 5 0 5 0 5 0
6 409 299 0 0 6 0 6 0 6 0 6 0
7 447 315 0 0 7 0 7 0 7 0 7 0
8 495 354 0 0 8 0 8 0 8 0 8 0
9 549 420 0 0 9 0 9 0 9 0 9 0
10 592 447 0 0 10 0 10 0 10 0 10 0
11 657 506 0 0 11 0 11 0 11 0 11 0
12 762 588 0 0 12 0 12 0 12 0 12 0
13 899 672 0 0 13 0 13 0 13 0 13 0
14 1,038 829 0 0 14 0 14 0 14 0 14 0
15 1,173 912 0 0 15 0 15 0 15 0 15 0
16 1,424 1,134 0 0 16 0 16 0 16 0 16 0
17 1,757 1,351 0 0 17 0 17 0 17 0 17 0
18 2,001 1,519 0 0 18 0 18 0 18 0 18 0
19 2,397 1,754 1 19 19 0 22 -3 28 -9 40 -21
20 3,118 2,233 1 20 19 1 22 -2 28 -8 40 -20
21 3,982 3,359 1 21 19 2 22 -1 28 -7 40 -19
22 4,640 3,844 1 22 19 3 22 0 28 -6 40 -18
23 5,461 4,460 1 23 19 4 22 1 28 -5 40 -17
24 6,279 5,201 1 24 19 5 22 2 28 -4 40 -16
25 6,521 5,462 1 25 19 6 22 3 28 -3 40 -15
26 7,955 6,684 1 26 19 7 22 4 28 -2 40 -14
27 10,419 8,985 1 27 19 8 22 5 28 -1 40 -13
28 12,640 10,662 1 28 19 9 22 6 28 0 40 -12
29 15,687 13,199 1 29 19 10 22 7 28 1 40 -11
30 17,071 14,194 1 30 19 11 22 8 28 2 40 -10
31 19,169 15,070 1 31 19 12 22 9 28 3 40 -9
32 22,267 17,853 1 32 19 13 22 10 28 4 40 -8
33 23,706 19,327 1 33 19 14 22 11 28 5 40 -7
34 26,282 21,655 1 34 19 15 22 12 28 6 40 -6
35 27,593 22,833 1 35 19 16 22 13 28 7 40 -5
36 28,913 23,588 1 36 19 17 22 14 28 8 40 -4
37 30,861 25,254 1 37 19 18 22 15 28 9 40 -3
38 33,726 26,526 1 38 19 19 22 16 28 10 40 -2
39 34,557 27,242 1 39 19 20 22 17 28 11 40 -1
40 36,192 28,662 1 40 19 21 22 18 28 12 40 0

Note. “Pre-int”: Pre-intervention; “Post-int”: Post-intervention.

The author has declared that no competing interests exist.

[1]
Abramo, G., & D’Angelo, C.A. (2011). National-scale research performance assessment at the individual level. Scientometrics, 86(2), 347-364.

[2]
Abramo, G., & D’Angelo, C.A. (2016). Refrain from adopting the combination of citation and journal metrics to grade publications, as used in the Italian national research assessment exercise (VQR 2011-2014). Scientometrics, 109(3), 1-13.

[3]
Abramo, G., D’Angelo, C.A., & Di Costa, F. (2011). National research assessment exercises: A comparison of peer review and bibliometrics rankings. Scientometrics, 89, 929. https://doi.org/10.1007/s11192-011-0459-x

[4]
Adam, D. (2002). Citation analysis: The counting house. Nature, 415, 726-729.

[5]
Akademik Teşvik Ödeneği Yönetmeliği (By-law of Payment of Academic Incentive). (2015). Resmî Gazete.

[6]
Albarrán, P., Crespo, J.A., Ortuño, I., & Ruiz-Castillo, J. (2011). The skewness of science in 219 subfields and a number of aggregates. Scientometrics, 88(2), 385-397.

[7]
Auranen, O., & Nieminen, M. (2010). University research funding and publication performance—An international comparison. Research Policy, 39(6), 822-834.

[8]
Butler, L. (2003). Explaining Australia’s increased share of ISI publications—the effects of a funding formula based on publication counts. Research Policy, 32(1), 143-155.

[9]
Butler, L. (2004). What happens when funding is linked to publication counts? In H.F. Moed et al. (Eds.), Handbook of Quantitative Science and Technology Research: The Use of Publication and Patent Statistics in Studies of S&T Systems (pp. 389-405). Dordrecht: Kluwer.

[10]
Butler, L. (2017). Response to van den Besselaar et al.: What happens when the Australian context is misunderstood. Journal of Informetrics, 11(3), 919-922.

[11]
Casadevall, A., & Fang, F.C. (2012). Causes for the persistence of impact factor mania. mBio, 5(2). Retrieved on April 28, 2017.

[12]
Cochrane Effective Practice and Organisation of Care Review Group. (2002). Data Collection Checklist.

[13]
De Boer, H., Jongbloed, B.W.A., Benneworth, S., Cremonini, L., Kolster, R., Kottmann, A., ... & Vossensteyn, J.J. (2015). Performance-based Funding and Performance Agreements in Fourteen Higher Education Systems. Enschede: University of Twente.

[14]
European Commission (2010). Assessing Europe’s University-Based Research.

[15]
Franzoni, C., Scellato, G., & Stephan, P. (2011). Changing incentives to publish. Science, 333(6043), 702-703.

[16]
Garfield, E. (1972). Citation analysis as a tool in journal evaluation. Science, 178(4060), 471-479.

[17]
Geuna, A., & Martin, B. (2003). University research evaluation and funding: An international comparison. Minerva, 41(4), 277-304.

[18]
Good, B., Vermeulen, N., Tiefenthaler, B., & Arnold, E. (2015). Counting quality? The Czech performance-based research funding system. Research Evaluation, 24(2), 91-105.

[19]
Glänzel, W., & Moed, H.F. (2002). Journal impact measures in bibliometric research. Scientometrics, 53(2), 171-193.

[20]
Herbst, M. (2007). Financing public universities: The case of performance funding. Dordrecht: Springer.

[21]
Heywood, J.S., Wei, X., & Ye, G. (2011). Piece rates for professors. Economics Letters, 113(3), 285-287.

[22]
Hicks, D. (2012). Performance-based university research funding systems. Research Policy, 41(2), 251-261.

[23]
Hicks, D. (2017). What year? Difficulties in identifying the effect of policy on university output. Journal of Informetrics, 11(3), 933-936.

[24]
Interrupted time series analysis. (2013). Retrieved on April 28, 2017.

[25]
Kamalski, J., et al. (2017). World of Research 2015: Revealing Patterns and Archetypes in Scientific Research. Elsevier Analytic Services.

[26]
Larivière, V., Kiermer, V., MacCallum, C., ... & Curry, S. (2016). A simple proposal for the publication of journal citation distributions.

[27]
McDowall, D., McCleary, R., Meidinger, E.E., & Hay, R.A. (1980). Interrupted Time Series Analysis. Newbury Park: Sage.

[28]
Osuna, C., Cruz-Castro, L., & Sanz-Menéndez, L. (2011). Overturning some assumptions about the effects of evaluation systems on publication performance. Scientometrics, 86(3), 575-592.

[29]
Önder, C., Şevkli, M., Altınok, T., & Tavukçuoğlu, C. (2008). Institutional change and scientific research: A preliminary bibliometric analysis of institutional influences on Turkey’s recent social science publications. Scientometrics, 76(3), 543-560.

[30]
Schneider, J.W. (2009). An outline of the bibliometric indicator used for performance-based funding of research institutions in Norway. European Political Science, 8(3), 364-378.

[31]
Seglen, P.O. (1997, February 5). Why the impact factor of journals should not be used for evaluating research. British Medical Journal, 314(7079), 498-502.

[32]
Shao, J., & Shen, H. (2012). Research assessment: The overemphasized impact factor in China. Research Evaluation, 21(3), 199-203.

[33]
TÜBİTAK Türkiye Adresli Uluslararası Bilimsel Yayınları Teşvik (UBYT) Programı Uygulama Usul ve Esasları (Implementation Principles and Procedures of the TÜBİTAK Incentive Program for International Scientific Publications Originating from Turkey). (2016).

[34]
Teixeira da Silva, J.A. (2017). Does China need to rethink its metrics- and citations-based research reward policies? Scientometrics, 112(3), 1853-1857.

[35]
Tonta, Y. (2014). Use and misuse of bibliometric measures for assessment of academic performance, tenure and publication support. In the 77th Annual Meeting of the Association for Information Science and Technology, October 31 - November 5, 2014, Seattle, WA.

[36]
Tonta, Y. (2015). Support programs to increase the number of scientific publications using bibliometric measures: The Turkish case. In A.A. Salah et al. (Eds.), Proceedings of ISSI 2015 Istanbul: 15th International Society of Scientometrics and Informetrics Conference, Istanbul, Turkey, 29 June to 4 July, 2015 (pp. 767-777). İstanbul: Boğaziçi University.

[37]
Tonta, Y. (2017a). Does monetary support increase the number of scientific papers? An interrupted time series analysis. Paper presented at ISSI 2017: 16th International Scientometrics and Informetrics Conference, 16-20 October 2017, Wuhan University, Wuhan, China.

[38]
Tonta, Y. (2017b). TÜBİTAK Türkiye Adresli Uluslararası Bilimsel Yayınları Teşvik (UBYT) Programının Değerlendirilmesi (Evaluation of the TÜBİTAK Incentive Program for International Scientific Publications Originating from Turkey). Ankara: TÜBİTAK ULAKBİM.

[39]
van den Besselaar, P., Heyman, U., & Sandström, U. (2017). Perverse effects of output-based research funding? Butler’s Australian case revisited. Journal of Informetrics, 11(3), 905-918.

[40]
van Raan, A.F.J. (2005). Fatal attraction: Conceptual and methodological problems in the ranking of universities by bibliometric methods. Scientometrics, 62(1), 133-143.

[41]
Wouters, P., Thelwall, M., Kousha, K., Waltman, L., de Rijcke, S., Rushforth, A., & Franssen, T. (2015). The metric tide: Literature review (Supplementary report I to the independent review of the role of metrics in research assessment and management).

[42]
Yuret, T. (2017). Do researchers pay attention to publication subsidies? Journal of Informetrics, 11(2), 423-434.

[43]
Yurtsever, E., Gülgöz, S., Yedekçioğlu, Ö.A., & Tonta, M. (2001). Sosyal Bilimler Atıf Dizini’nde (SSCI) Türkiye: 1970-1999 (Turkey in the Social Sciences Citation Index (SSCI): 1970-1999). Ankara: Türkiye Bilimler Akademisi.

[44]
Yurtsever, E., Gülgöz, S., Yedekçioğlu, Ö.A., & Tonta, M. (2002). Sağlık Bilimleri, Mühendislik ve Temel Bilimlerde Türkiye’nin Uluslararası Atıf Dizinindeki Yeri: 1973-1999 (Turkey’s Place in the International Citation Indexes in Health Sciences, Engineering and Basic Sciences: 1973-1999). Ankara: Türkiye Bilimler Akademisi.

[45]
Zhang, L., Rousseau, R., & Sivertsen, G. (2017). Science deserves to be judged by its contents, not by its wrapping: Revisiting Seglen’s work on journal impact and research evaluation. PLoS ONE, 12(3), e0174205.
