
Examining “Salami slicing” publications as a side-effect of research performance evaluation: An empirical study

  • Ciriaco Andrea D’Angelo
  • Department of Engineering and Management, Tor Vergata University of Rome, Rome 00133, Italy
†Ciriaco Andrea D’Angelo.

Received date: 2024-06-17

Revised date: 2024-09-30

Accepted date: 2024-11-19

Online published: 2024-12-16

Abstract

Purpose: This study investigates whether publication-centric incentive systems, introduced through the National Scientific Accreditation (ASN: Abilitazione Scientifica Nazionale) for professorships in Italy in 2012, contribute to the adoption of “salami publishing” strategies among Italian academics.

Design/methodology/approach: A longitudinal bibliometric analysis was conducted on the publication records of over 25,000 Italian science professors to examine changes in publication output and the originality of their work following the implementation of the ASN.

Findings: The analysis revealed a significant increase in publication output after the ASN’s introduction, along with a concurrent decline in the originality of publications. However, no evidence was found linking these trends to increased salami slicing practices among the observed researchers.

Research limitations: Given the size of our observation field, we propose an innovative indirect approach based on the degree of originality of publications’ bibliographies. We know that bibliographic coupling cannot capture salami publications per se, but only topically related records. On the other hand, controlling for the author’s level of specialization in the period, we believe that a higher level of bibliographic coupling in their scientific output can signal a change in their strategy for disseminating research results. The relatively low R-squared values in our models (0.3-0.4) reflect the complexity of the phenomenon under investigation, revealing the presence of unmeasured factors influencing the outcomes; future research should explore additional variables or alternative models that might account for a greater proportion of the variability. Despite this limitation, the significant predictors identified in our analysis provide valuable insights into the key factors driving the observed outcomes.

Practical implications: The results of the study support those who argue that quantitative research assessment frameworks have had very positive effects and should not be dismissed: the side effects evoked by their critics do not appear in the empirical analyses.

Originality/value: This study provides empirical evidence of the impact of the ASN on publication behaviours, drawing on a large micro-level dataset and contributing to the broader discourse on the effects of quantitative research assessments on academic publishing practices.

Cite this article

Ciriaco Andrea D’Angelo. Examining “Salami slicing” publications as a side-effect of research performance evaluation: An empirical study[J]. Journal of Data and Information Science, 2025, 10(1): 74-100. DOI: 10.2478/jdis-2025-0005

1 Introduction

Some scholars hold that traditional metrics used for research performance evaluation have become less reliable in the changing landscape of academic publishing (Elton, 2004). The flood of publications, the spread of collaboration, the rise of cross- and self-citations, and other factors are said to have compromised the validity of traditional bibliometric indicators (Fire & Guestrin, 2019). Many scholars also argue that the institutionalization of quantitative research assessments has encouraged researchers to publish as many articles as possible, especially in those contexts where evaluation systems tend to reward quantity along with quality, particularly when institutions distribute incentives at the individual level (Auranen & Nieminen, 2010; Geuna & Martin, 2003; Moher et al., 2018; Larivière & Costas, 2016; Tonta, 2017). In this regard, many evoke Goodhart’s law, according to which “when a measure becomes a target, it ceases to be a good measure” (Goodhart, 1975).
In recent years, these positions have fuelled a series of initiatives aimed at warning research managers, practitioners, and policymakers about the limits and pitfalls of using bibliometric indicators to assess the research performance of individual scientists, groups, institutions, or entire countries (above all DORA and the Leiden Manifesto, http://www.leidenmanifesto.org/, last accessed on 30 September 2024). Recently, the Coalition for Advancing Research Assessment (CoARA) defined an “agreement” that binds signatories to ten commitments synthesizing a paradigm shift that sets “a common direction for changes in assessment practices for research, researchers and research organisations, with the goal to maximise the quality and impact of research”. The second of these commitments reads: “Base research assessment primarily on qualitative evaluation for which peer review is central, supported by responsible use of quantitative indicators.” The adjective “responsible” implicitly draws attention to an evidently “irresponsible” use currently being made of such indicators. The debate is by no means new, either in content or in context. In a 1997 paper, the UK anthropologist Marilyn Strathern, commenting on the fact that British universities were increasingly subject to national scrutiny of their teaching, research, and administrative competence, offered an “anthropological comment” on the proliferation of such practices: “In higher education the subject of audit is not so much the education of the students as the institutional provision for their education. Audit does more than monitor—it has a life of its own that jeopardizes the life it audits” (Strathern, 1997). In recent years, the debate has shifted to research, the other fundamental mission of academies and universities in general.
In 2017, a special section of the Journal of Informetrics was dedicated to examining the impact of metrics-based funding systems on the behaviour of scientists (Volume 11, Number 3). The discussion was initiated by a paper by van den Besselaar, Heyman, and Sandström (2017), who raised objections to Butler’s pioneering studies on the behavioural effects of the Australian funding formula (Butler, 2003a, 2003b). They particularly criticized Butler for not giving sufficient attention to the influence of a scientist’s production. The debate saw contributions from various scholars, including Aagaard and Schneider (2017), Gläser (2017), Hicks (2017), and Martin (2017). The concern is that this transformative impact of the evaluation practice not only undermines the objectives of continuous improvement that the evaluation underpins, but can even cause effects opposite to those the policymaker intends. In line with the “rational cheaters” model, employees are expected to foresee the outcomes of their actions and engage in opportunistic behaviour when the additional benefits outweigh the additional costs (Nagin et al., 2002). Building on this theory, numerous scholars have proposed and examined the adverse effects on the conduct of researchers and institutions resulting from the growing reliance on metrics in evaluation systems (de Rijcke et al., 2016; Fang et al., 2012; Jimenez-Contreras et al., 2003; Rafols et al., 2012; Seeber et al., 2019).
Particular attention has been paid to the effects that evaluation may have on researchers’ choices regarding their research agenda, as well as on how the results of their research activities are disseminated. According to some, the pressure exerted on individuals by evaluation systems, at the basis of the so-called “publish or perish” syndrome (Fanelli, 2010; Neill, 2008; van Dalen & Henkens, 2012), could induce behaviours that in some cases even become fraudulent. Extensive research and discussion have centered around the ethical considerations regarding scientific authorship and publication (Mukherjee, 2020). A thorough literature review spanning from 1945 to 2018 identified ten key ethical themes associated with authorship and publication (Hosseini & Gordijn, 2020). Among others, these themes encompass transgressions of authorship norms, the proliferation of irrelevant publications, plagiarism, self-plagiarism, and scientific fraud (Edwards & Roy, 2017; Hazelkorn, 2010; Honig & Bedi, 2012; Martin, 2013).
A quick scan of any bibliographic repertoire reveals that with a limited series of keywords, such as “data fabrication and falsification,” “plagiarism and self-plagiarism,” “research and scientific ethics or integrity,” and “research and scientific fraud or misconduct,” one can retrieve a literature corpus that shows an exponential trend. In particular, a query on Scopus reveals an annual average that goes from less than 900 articles in the period 2000-2004 to almost 7,600 in 2018-2022, a growth over five times greater than the increase in the total number of records in Scopus between the two periods. Of similar magnitude is the growth in the number of retracted articles and the case studies observed by independent observers such as “Retraction Watch” (Oransky & Marcus, 2012), recently created for this purpose. This is only the tip of the iceberg, which increasingly surfaces also in the judicial news that rages in the media, occasionally recounting the “stumbles” of famous scientists who have meanwhile moved into politics. On the other hand, organization theory shows that any incentive scheme implemented to maximize the performance of a complex system always generates side effects, which must be detected, evaluated, and managed with the aim of keeping them below a certain threshold that one can consider “physiological.”
In this work, we want to investigate a particular type of side effect of quantity-based research rewarding systems, called in jargon “salami slicing.” In the context of scientific research, this gastronomic metaphor denotes the practice of breaking down a single research study into multiple publications to maximize the number of publications without adding substantial new contributions. This strategy often involves publishing incremental or minor variations of the same research findings across multiple papers. Like other ethically questionable practices, salami slicing results in information duplication and a waste of time for the scholarly community.
Salami slicing generally involves distributing data across various publications sharing the same (or very similar) hypothesis, population, and methods, raising ethical concerns. However, there are situations where breaking down a research project into smaller units is warranted. Large-scale studies, such as clinical trials with extensive data, may benefit from this approach. Such studies often explore multiple questions and yield various outcomes. In these cases, dividing the comprehensive study into multiple publications based on distinct questions and outcomes is justifiable. Nevertheless, authors must transparently disclose that a specific publication is part of a larger research endeavour. It is important to note that multiple publications arising from a single research project are not inherently problematic. The key is whether each publication contributes significantly to the academic discourse and brings genuinely new insights. Needless to say, peer reviewers and editors play a crucial preventive role in detecting and avoiding salami slicing publications. They should be vigilant and compare submitted manuscripts with previously published literature.
The introduction of the so-called National Scientific Accreditation (ASN) for professorships in Italy in 2012 gives us the opportunity to provide empirical evidence of whether and to what extent publication-based incentive systems may induce salami slicing practices. The ASN is meant to conduct a quali-quantitative evaluation of the scientific profile of scholars aiming to be tenured as associate or full professors at Italian universities. Among the three quantitative indicators used in the ASN evaluation, one refers to the number of journal articles published in the last 10 years. This may have encouraged recourse to the “salami slicing” practice in order to surreptitiously inflate the size of one’s own scientific output as measured by the number of authored publications.
Detecting salami slicing in academic publications is challenging. Ideally, scholars in the field would look for significant overlap in content between multiple publications from the same author(s), checking whether the papers share similar methodologies, datasets, results, and discussions, and whether the contributions are too incremental or marginal. Unfortunately, such an approach is unviable for research purposes, especially for large-scale observations. Alternatively, or in support of peer review, one could utilize plagiarism detection tools to identify similarities between the texts of multiple papers. A shortcut would be to compare only the titles and abstracts or the references of related publications. If they are strikingly similar, it might suggest salami slicing.
In this work, we attempt to identify salami slicing by detecting overlaps in the references of an author’s publications. We conduct a longitudinal bibliometric analysis of the publications of each Italian professor in the sciences (over 25,000), accounting for individual and contextual variables that might moderate the opportunistic response to the ASN incentive scheme.
The manuscript is structured as follows. Section 2 reviews the literature on the core topic of this work, while Section 3 provides an overview of ASN for academic appointments in Italy. Section 4 presents the research hypotheses, methodology and dataset used for the analysis, while Section 5 shows and discusses the results obtained. Finally, Section 6 provides the author’s conclusions and comments.

2 Salami publications: a literature review

The interest in the phenomenon known as “salami slicing publications” dates back over a quarter of a century, when in a work on the subject, Tom Jefferson estimated the prevalence of “redundant publications” at 10 to 25% of the published literature. He concluded that “the scientific community at large and governments should take urgent steps to safeguard the public from the possible effects of fraudulent multiple publications” (Jefferson, 1998).
According to the Committee on Publication Ethics (COPE, 2019), “salami publication” occurs when papers cover the same population, methods, and question. Its detection involves expert peer judgment since it is not easy to demonstrate that research outcomes disseminated through several articles should have been presented in a smaller number of publications (Andreescu, 2013), as “we are dealing with shades of grey” (Norman & Griffiths, 2008). On the other hand, research advances incrementally, and while each new research article is expected to make a novel contribution, researchers often need to reuse or “recycle” some material: methods, background, hypotheses, etc. A survey by Hall, Moskovitz, and Pemberton (2018) reveals that a large majority of academic gatekeepers believe text recycling is allowable in some circumstances. According to Anson and Moskovitz (2021), “recycling” some amount of material is even “normative” in STEM research writing.
In recent years, there has been an intensification of editorial and publisher initiatives to address such issues. Their proliferation has attracted the attention of some scholars who have noted the existence of inconsistent terminology, where the same terms show different meanings, urging clarification and harmonization of a standard taxonomy related to text recycling in research writing (Bruton, 2014; de Vasconcelos & Roig, 2015; Horbach & Halffman, 2019; Moskovitz, 2019, 2021).
Certainly, classifying certain practices and their legitimacy a priori is not straightforward. A case of particular interest concerns the republication as a journal article of works previously presented at conferences (de Vasconcelos & Roig, 2015), a widespread case, especially in computer science, where journal policies vary widely (Zhang & Jia, 2013). A second case involves manuscripts published in multiple venues in different languages, mainly by non-English native scholars who want to push their outcomes into different scholarly channels to reach a wider audience (Teixeira da Silva, 2020).
Regarding journal submission policies, an empirical content analysis carried out by Ding, Nguyen, Gebel, Bauman, and Bero (2020) in health science disciplines reveals “an overall lack of explicit policies, inconsistency and confusion in definitions of bad practices, and lack of clearly defined consequences for non-compliance”.
Several scholars have even questioned the advisability of having defined (and applied) a standard such as the “least publishable unit,” that is, the minimum amount of information that can qualify as a publication in a peer-reviewed venue (Buddemeier, 1981; Refinetti, 1990; Roth, 1981). The definition of such a standard may have indirectly incentivized the splitting of research results into the smallest publishable units and, consequently, the practice of salami publishing (Cabbolet, 2016; DeWitt et al., 2013).
The evolution of the publishing market has certainly played a decisive role, with the rise of “predatory publishers” offering rapid publication with loose peer review, exploiting the pressure exerted on authors by the “publish or perish” environment (Harvey & Weinstein, 2017; Kassian & Melikhova, 2019).
Studies attempting to quantify the extent of the salami publishing phenomenon, while agreeing on trends, differ on the quantity/incidence and severity of the cases detected. Through extensive use of text-matching software called eTBLAST, Errami et al. surveyed seven million biomedical abstracts in Medline and discovered tens of thousands of highly similar articles, concluding that scientists are publishing more and more duplicate papers (Errami & Garner, 2008; Errami et al., 2008). On the contrary, Larivière and Gingras (2010) claimed that “the prevalence of duplicates is one out of 2,000 papers” and that the phenomenon does not affect all scholarly communities, being typically concentrated in medical science fields. Other authoritative scholars argue that ethical misconduct has a marginal effect on the advancement of science (Bar-Ilan & Halevi, 2018) and that salami publishing may not necessarily be a questionable research practice (Happell, 2016), since there can be valid and defensible arguments for a single research study generating multiple publications (Hicks & Berg, 2014). Others believe instead that it is an unacceptable practice undermining scientific integrity, with all the consequences this entails, considering that science is called upon to respond to increasingly complex and urgent global challenges. Along these lines, Kostoff et al. (2006) assert unequivocally that “paper inflation” is due in large part to the increasing use of metrics to evaluate research performance and to researchers’ motivation to maximize metrics. According to Amos (2014), unethical publishing practices cut across nations, but rates of plagiarism and duplicate publication are highest in Italy and Finland. Italy is a case in point, given the introduction in recent years of a number of national research assessment frameworks more or less extensively based on the use of bibliometric indicators. In the past, we investigated a number of issues related to side effects deriving from such assessment frameworks (Abramo & D’Angelo, 2023; Abramo et al., 2021; Abramo et al., 2023). In this study, we intend to add one more piece to the puzzle by checking for the presence of another side effect, “salami slicing publishing”, by national academic researchers.
Detecting such practices is, however, a formidable task on a large-scale population. What emerges from the literature is that detection is generally entrusted to three types of approaches. The most used refers to the “text similarity” of pairs of publications, analysed through ad hoc codes or plagiarism detection software (CrossCheck, Turnitin, Antiplagiat, eTBLAST, WCopyfind). Given the computational effort, these studies are based on small samples of hundreds to a few thousand publications. The other two are indirect, more properly bibliometric approaches, based on:
1) The analysis of “retractions”, a generally small subset of publications, from which to isolate cases tagged as “redundant/duplicate” (Amos, 2014; Bar-Ilan & Halevi, 2018; Chen et al., 2018; Wager & Williams, 2011; Zhang & Grieneisen, 2013).
2) The use of very “restrictive” rules on publication metadata: for example, Larivière and Gingras (2010) isolated duplicate papers, defined as those that share the same title, the same first author, and the same number of references. In this way, out of over 18 million articles indexed in the WoS over the 1980-2007 period, they found 4,918 occurrences of duplicate papers published in two different journals (a toy sketch of this rule follows the list).
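As a toy illustration (ours, not code from any of the cited studies), such a restrictive rule can be applied at scale by grouping records on the three metadata fields, keeping the scan linear in the number of records; all field names are hypothetical.

```python
# A toy sketch of the restrictive duplicate rule of Larivière and Gingras
# (2010): flag groups of records sharing the same title, first author,
# and number of references. Field names are hypothetical.
from collections import defaultdict

def find_duplicates(records: list[dict]) -> list[list[dict]]:
    """Return groups of records identical on the three metadata fields."""
    groups: dict[tuple, list[dict]] = defaultdict(list)
    for rec in records:
        key = (rec["title"].strip().lower(),
               rec["first_author"].strip().lower(),
               rec["n_references"])
        groups[key].append(rec)
    return [group for group in groups.values() if len(group) > 1]
```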
In all cases, these approaches probably underestimate the scale of the phenomenon. In this work, we propose a different approach, based on the analysis of bibliographies, assuming that the probability that two or more papers result from a salami slicing strategy depends on the level of similarity of their bibliographies. On the other hand, the objective of the work is not the exact identification of publications resulting from salami slicing, but the detection of an opportunistic response by researchers to research evaluation systems, through a proxy signalling a change in their behaviour.
Before moving on to a detailed description of the proposed method, we will illustrate the salient features of the analysis context, in particular, of the evaluation framework that could have induced the side-effect of interest.

3 The Italian scientific accreditation (ASN)

The procedure for appointing academic positions in Italy underwent significant changes following the introduction of Law 240 in 2010. This reform introduced the National Scientific Accreditation (ASN), which became a mandatory criterion for candidates seeking positions as associate or full professors at Italian universities. In 2011 and 2012, two regulations, the “Regulation on Awarding of National Scientific Accreditation” and the “Regulation of Criteria and Parameters for Evaluation of Candidates and Verification of Committee Member Qualifications”, were implemented to define the structure and functioning of the ASN. The timing of these regulations is key to understanding the empirical analysis, which will be discussed later.
Accreditation committees based their decisions on three bibliometric indicators of research performance. Reaching the threshold values for one, two, or all three of these indicators (as determined by the specific committees) became a prerequisite for obtaining accreditation, which was crucial for hiring or advancing to associate and full professor roles.
The Ministry of Education, Universities, and Research (MIUR) was tasked with issuing calls for 184 Accreditation Committees every two years. Each committee represented a Competition Sector (CS) created by grouping Scientific Disciplinary Sectors (SDSs). These SDSs, 370 in total, were used to classify academic fields and manage university faculties in Italy, ensuring that each professor was assigned to a specific discipline. Of the 184 CSs, 109 were classified as “bibliometric”, while the other 75 were considered “non-bibliometric”. For bibliometric CSs, committee members were selected randomly from the full professors in the CS who met all three bibliometric threshold values.
Applicants must declare their chosen CSs and academic ranks when submitting their applications. The call is now issued quarterly, giving applicants more frequent opportunities to apply. There is no restriction on the number of CSs a candidate can select, and candidates can simultaneously apply for both associate and full professorship. Applicants must submit a curriculum vitae, a list of their educational degrees, and their scientific publications.
For bibliometric CSs, the list of publications was used to assess three key metrics for candidates who applied in 2012:
· The number of journal articles published between 2002 and 2012, normalized for academic seniority, particularly for candidates with less than 10 years of experience.
· The total number of citations received by the candidate’s scientific output, also normalized for academic seniority.
· The contemporary h-index of the candidate’s overall scientific production (as defined by Sidiropoulos, Katsaros, & Manolopoulos, 2007).
These metrics are based on publications indexed in Scopus and the Web of Science (WoS). The public agency overseeing the ASN, known as ANVUR, utilized a detailed database of all publications by Italian professors until 2012 to calculate these metrics for both associate and full professors. ANVUR established publishing thresholds based on the median values to select committee members and evaluate ASN applicants. Failure to meet the required bibliometric thresholds resulted in the suspension of the evaluation process and rejection of the application. If candidates met the bibliometric criteria, the committees evaluated other elements of their curriculum vitae and publication history.
In the first call for accreditations in 2012, there were 59,148 applications, with 55.7% coming from individuals already holding faculty positions. Of the total number of applications, 30.5% were for full professor accreditation. Applications to “bibliometric” CSs accounted for about 62.2% of the total applications.

4 Methods

4.1 Research hypotheses

The evaluation criteria of the ASN, and in particular the presence of the indicator “number of articles in journals over the period 2002-2012”, may have led academics eager to be accredited, in order to progress from assistant to associate or from associate to full professor, to adopt a salami publication strategy. If this were the case, between the five-year period preceding the introduction of the ASN and the one following it, we should observe an increase in publications, and especially a change in some variable that could signal the adoption of such a strategy.
As mentioned, salami slicing involves manipulating research contributions and reusing/recycling some material: methods, background, hypotheses, text, etc. Detecting such practices is a formidable task on a large-scale population, and given the size of our observation field, we can only propose an indirect approach based on the degree of originality of publications’ bibliographies. This approach should reflect (albeit indirectly) the level of differentiation of the theoretical frameworks, hypotheses, and methodologies adopted in the research papers produced by a researcher. In particular, we refer to the so-called bibliographic coupling introduced by Kessler (1963) to indicate the case of two publications sharing an identical citation in their respective reference lists. Bibliographic coupling was introduced with the idea that it represents a probability that two works treat a related subject matter, an idea later taken up and developed by other scholars (Sen & Gan, 1983).
Of course, we are aware that salami slicing involves not only the overlapping of references; the objective of the work is not the exact identification of redundant publications, but a possible opportunistic response of researchers to research evaluation systems, through a proxy signalling a change in their behaviour.
For this purpose, consider two papers, X and Y, and their relevant reference lists. In the language of set theory, if Ref(X) denotes the set of papers in the reference list of X and Ref(Y) denotes the set of papers in the reference list of Y, then Ref(X)∩Ref(Y) is the set of papers belonging to both these two reference lists. If this set is non-empty, then X and Y are bibliographically coupled. The relative bibliographic coupling strength (RBCS) can be easily defined using the notation of set theory:
$\operatorname{RBCS}_{X, Y}=\frac{|\operatorname{Ref}(X) \cap \operatorname{Ref}(Y)|}{|\operatorname{Ref}(X) \cup \operatorname{Ref}(Y)|}$
This is essentially a classic Jaccard index (Jaccard, 1901), given by the ratio of shared references to the total number of distinct references in the two lists.
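To make the definition concrete, here is a minimal Python sketch (ours, not code from the study) computing RBCS as the Jaccard index of two reference sets; the reference identifiers are invented for illustration.

```python
# Relative bibliographic coupling strength: the Jaccard index
# |Ref(X) ∩ Ref(Y)| / |Ref(X) ∪ Ref(Y)| of two reference sets.
def rbcs(refs_x: set[str], refs_y: set[str]) -> float:
    union = refs_x | refs_y
    if not union:
        return 0.0  # both lists empty: no coupling (a convention we assume)
    return len(refs_x & refs_y) / len(union)

# Two papers sharing 2 of 5 distinct references -> RBCS = 0.4
paper_x = {"Kessler1963", "Jaccard1901", "Butler2003a"}
paper_y = {"Kessler1963", "Jaccard1901", "SenGan1983", "Hicks2012"}
print(rbcs(paper_x, paper_y))  # 0.4
```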
When comparing a set of “n” publications (with n>2), various approaches can be adopted. For instance, one can calculate the average value of RBCS over all pairs of publications in the set. Alternatively, one can compute the average Pearson coefficient between citation vectors, or the average cosine similarity between these vectors. Another option is to refer to the density of the citation graph whose “n” nodes represent the publications and whose edges represent their bibliographic coupling, “weighted” through the RBCS value. These approaches entail computational efforts that grow quadratically with the size of the publication set to be analyzed. Therefore, in our case, it is necessary to resort to a simplified approach: a Jaccard-like index defined as the ratio between the number of occurrences of references shared by the “n” publications and the total number of references. Consequently, we measure the degree of originality of publication bibliographies as:
$\mathrm{O}_{\mathrm{S}_{\mathrm{n}}}=\frac{\text { Number of unshared references }}{\text { Total number of references }}$
This indicator assumes a zero value if the bibliographies of the compared publications present only repeated references. Conversely, it assumes a value of 1 if the bibliographies are entirely distinct, that is, characterized by only unshared references.
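Below, a hedged Python sketch (ours) of the indicator in Equation [2], under the interpretation that a reference is “unshared” if it occurs in exactly one of the n bibliographies, while the denominator counts all reference occurrences; the example data are invented.

```python
# Originality of a set of bibliographies (Equation [2]): the share of
# reference occurrences that are not repeated across the n publications.
from collections import Counter

def originality(bibliographies: list[set[str]]) -> float:
    counts = Counter(ref for bib in bibliographies for ref in bib)
    total = sum(counts.values())  # total number of references (occurrences)
    if total == 0:
        return 1.0  # no references at all: treat as fully original (assumption)
    unshared = sum(1 for c in counts.values() if c == 1)
    return unshared / total

# Three papers all citing "Kessler1963", plus one unique reference each:
bibs = [{"Kessler1963", "A"}, {"Kessler1963", "B"}, {"Kessler1963", "C"}]
print(originality(bibs))  # 3 unshared out of 6 occurrences = 0.5
```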
At this point, we advance the following three hypotheses:
Hypothesis 1: After the introduction of the ASN, assistant and associate professors in bibliometric SDSs publish more works than in the past.
Hypothesis 2: After the introduction of the ASN, assistant and associate professors in bibliometric SDSs publish works with less differentiated bibliographies than in the past.
Hypothesis 3: Full professors and associate professors accredited under the first ASN stage, being subject to a weaker incentive, do not significantly vary their publishing strategy.
Our research method involves measuring variables related to these hypotheses in the two periods around the introduction of the ASN, namely the five-year period 2008-2012 and the subsequent five-year period (2013-2017). In the next section, we provide details of this research framework, which was inspired by a previous work by the author, investigating if and to what extent the ASN hinders national scholars from diversifying their research activities (Abramo et al., 2024). The present study also focuses on the ASN, but investigates a different kind of side effect, involving different research hypotheses drawn from different theoretical frameworks. Despite the methodological contiguity, rather than referring to Abramo, D’Angelo, and Di Costa (2024) for details, we deliberately report an extended description of both the dataset and the model, also to better highlight the variations, while of course acknowledging the presence of significant overlaps.

4.2 Dataset

According to the database of Italian professors maintained by the MIUR, at the end of 2012, there were 57,400 professors on staff at Italian universities, 35,200 of which belonged to bibliometric SDSs. We limited our field of observation to assistant, associate, and full professors falling in a bibliometric SDS (200 in all) in each year of the period 2008-2017 (26,171 in all).
We use the author name disambiguation algorithm developed by D’Angelo, Giuffrida, and Abramo (2011) for the construction of the bibliometric dataset, based on coupling the publications extracted from the Italian National Citation Report (I-NCR) by Clarivate Analytics with the MIUR database. This algorithm assigns an I-NCR publication to a given professor if the latter (a simplified sketch of these rules follows the list):
· has a name compatible with one of the authors of the publication;
· belongs to one of the recognized universities in the list of addresses indicated by the authors of the publication;
· belongs to an SDS compatible with the subject category (SC) of the publication;
· was on staff on 31 December of the year prior to the publication year.
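The sketch below (ours, greatly simplified with respect to the actual algorithm) illustrates how the four rules combine into a boolean filter; the record fields and the SDS-to-subject-category mapping are hypothetical placeholders.

```python
# Simplified filter for the four assignment rules of D'Angelo, Giuffrida,
# & Abramo (2011). All field names are hypothetical; the real algorithm
# also handles name variants, homonyms, and validation steps.
from dataclasses import dataclass
from datetime import date

@dataclass
class Professor:
    name_keys: set[str]        # normalized name variants, e.g. {"rossi m"}
    university: str
    sds: str                   # Scientific Disciplinary Sector code
    on_staff_since: date

@dataclass
class Publication:
    author_keys: set[str]      # normalized author names from the byline
    universities: set[str]     # institutions in the address list
    subject_categories: set[str]
    year: int

def assign(prof: Professor, pub: Publication,
           sds_to_sc: dict[str, set[str]]) -> bool:
    """True only if the publication passes all four compatibility rules."""
    name_ok = bool(prof.name_keys & pub.author_keys)
    address_ok = prof.university in pub.universities
    field_ok = bool(sds_to_sc.get(prof.sds, set()) & pub.subject_categories)
    staff_ok = prof.on_staff_since <= date(pub.year - 1, 12, 31)
    return name_ok and address_ok and field_ok and staff_ok
```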
Of the 26,171 total professors, 590 were unproductive throughout the ten-year period. The remaining 25,581 professors produced a total of 987,339 authorships and 492,587 unique publications. Table 1 shows the breakdown of the dataset of analysis by disciplinary area (UDA).
Table 1. Dataset of analysis.
UDA* No. of SDSs Professorsǂ Authorships (2008-2012) Publications (2008-2012) Authorships (2013-2017) Publications (2013-2017)
1 10 2,306 (38-33-30) 21,017 15,652 25,472 19,525
2 8 1,598 (38-39-23) 67,986 22,019 102,995 24,936
3 12 2,222 (45-34-21) 42,376 23,631 49,859 28,319
4 12 780 (43-36-21) 7,760 5,378 11,488 7,803
5 19 3,549 (47-30-23) 47,560 30,231 57,221 37,088
6 50 6,973 (47-31-21) 124,216 67,840 167,062 90,835
7 30 2,353 (42-33-25) 24,239 13,661 34,405 19,719
8 9 1,098 (35-36-29) 9,467 6,831 16,989 12,301
9 42 3,903 (35-35-30) 64,453 43,065 95,848 65,508
10 8 799 (41-32-27) 6,236 4,635 10,690 7,984
Total 200 25,581 (42-33-25) 415,310 211,103** 572,029 281,484**

* 1 - Mathematics and computer science, 2 - Physics, 3 - Chemistry, 4 - Earth sciences, 5 - Biology, 6 - Medicine, 7 - Agricultural and veterinary sciences, 8 - Civil engineering, 9 - Industrial and information engineering, 10 - Psychology.

ǂ Counts refer to academic ranks and SDSs as of 31/12/2012. In brackets, the share of assistant, associate and full professors, respectively.

** The value is lower than the column total due to publications authored by professors from different UDAs.

4.3 The econometric model

We use the same analytical strategy proposed by Abramo et al. (2024), fitting a linear random effects model with longitudinal data grouped into two 5-year periods (2008-2012 vs. 2013-2017).
We intend to test our research hypotheses by controlling for some covariates, some of which are time-invariant. Specifically, the response variables are obtained as follows. For each professor in the dataset, we consider: (Y1) their scientific output in each five-year period (number of publications), and (Y2) the originality of their bibliographies (according to Equation [2]).
Testing the research hypotheses means verifying whether there are significant variations in these variables around the introduction of the ASN, net of the effects of control variables expected to impact the dependent variables. These control variables include:
· The level of specialization of the professor in the period, measured by the specialization index, i.e. the share of their publications falling within their prevalent subject category.
· Academic rank, age, and gender of the professor.
Professors’ level of specialization certainly influences their new knowledge production intensity, as demonstrated by Abramo, D’Angelo, and Di Costa (2019). However, this variable also affects the originality of bibliographies because if they conduct highly focused research, they are more likely to cite the same works repeatedly. The same can be said for their personal traits, as reported by Abramo, D’Angelo, and Di Costa (2018).
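For illustration, a minimal sketch (ours) of the specialization index, assuming each publication carries a single subject category (in reality publications may have several, so this is a simplification):

```python
# Specialization index: share of a professor's publications falling in
# their prevalent (modal) subject category in the period.
from collections import Counter

def specialization_index(subject_categories: list[str]) -> float:
    if not subject_categories:
        raise ValueError("no publications in the period")
    counts = Counter(subject_categories)
    return counts.most_common(1)[0][1] / len(subject_categories)

# 6 of 10 publications in the modal category -> 0.6 (the median in Table 2)
print(specialization_index(["SC1"] * 6 + ["SC2"] * 3 + ["SC3"]))
```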
For Hypotheses (2 and 3) regarding the originality of publications, we also consider three additional control variables:
· The average number of colleagues co-authoring the professor’s publications. We expect co-authors to exercise social control as a deterrent to opportunistic behaviour attributable to “salami slicing”; therefore, the higher the number of co-authors, the smaller the opportunity for the professor to inappropriately slice their publications.
· The share of the professor’s publications resulting from international collaboration. Social proximity could favour collusive behaviour. The social distance at the international level is greater than at the domestic and intra-mural level, therefore offering less scope for opportunistic slicing of publications.
· The location of the professor’s university, as the tendency to opportunistic behaviour may also depend on the ethical code of the geographical context where the professor works.
Field effects are controlled through dummies associated with the SDS of the professor.
The target covariates, related to our research hypotheses, are:
· PostASN: a dummy variable representing the two five-year periods.
· StatusASN: a time-constant variable with four categories, according to the strength of the ASN incentive to change publication strategy, from low to high:
○ Associate professors accredited in ASN 2012: having obtained accreditation for full professor, these have no need for further participation in ASN.
○ Full professors: they do not participate in the ASN, but they can apply for joining evaluation committees; in this case, they must meet bibliometric thresholds, similar to but slightly different from those for accreditation candidates.
○ Assistant professors accredited in ASN 2012: having obtained accreditation for the role of associate professor, they remain motivated to participate in future ASN calls for full professors.
○ Assistant and associate professors not accredited in ASN 2012: professors who either did not participate or participated and failed and would therefore be motivated to participate in future ASN calls.
The descriptive statistics of the econometric model variables are shown in Table 2, and the corresponding correlation matrix in Table 3. The latter shows four values exceeding 0.3 in magnitude, specifically:
Table 2. Descriptive statistics for variables of the econometric model.
Time Variable Obs Min Max Mean Median Std Dev.
Varying Number of publications 51,162 0 619 19.3 12 31.4
Originality of bibliographies 49,586 0 1 0.730 0.749 0.199
Specialization index 49,695 0.111 1 0.616 0.6 0.224
Age 51,162 29 72 51.3 52 8.1
Rank
Assistant professor 51,162 0 1 0.366
Associate professor 51,162 0 1 0.371
Full professor 51,162 0 1 0.263
Average number of co-authors 49,695 1 3,007 23.1 6.4 170.1
Share of international publications 49,695 0 1 0.289 0.231 0.270
University location
North 51,162 0 1 0.435
Center 51,162 0 1 0.276
South 51,162 0 1 0.289
Constant Female 25,581 0 1 0.330 - -
ASN status
Full professor 25,581 0 1 0.239 - -
Accredited associate professor 25,581 0 1 0.107 - -
Accredited assistant professor 25,581 0 1 0.159 - -
Not accredited assistant or associate prof. 25,581 0 1 0.494 - -
Table 3. Correlation matrix of variables in the statistical model.
1 2 3 4 5 6 7 8 9
1 -
2 -0.112 -
3 -0.228 0.496 -
4 0.065 0.220 -0.064 -
5 -0.051 -0.030 -0.048 0.071 -
6 -0.008 -0.068 -0.003 0.026 -0.008 -
7 0.020 0.121 -0.009 -0.061 -0.165 -0.020 -
8 -0.034 -0.020 0.097 -0.168 -0.028 -0.045 0.320 -
9 -0.127 -0.036 0.270 -0.349 -0.289 -0.007 0.370 0.262 -
10 0.087 0.141 -0.161 0.260 -0.019 -0.011 -0.050 -0.250 -0.552

1 - Gender; 2 - Age; 3 - Academic rank; 4 - Status_ASN; 5 - Specialization Index; 6 - Geographical location; 7 - LN_co_authors; 8 - International; 9 - LN_publication; 10 - Originality of bibliographies.

· 0.496 for age vs. academic rank, indicating, as one would expect, that career progression is generally associated with a professor’s seniority and, consequently, age.
· 0.320 and 0.370 for the correlation between the number of co-authors of a professor and the share of their “international” publications and their total output, respectively.
· -0.552, confirming the existence of a negative correlation between the quantity of a professor’s publications and the originality of their bibliographies.
We estimate an OLS model using the “regress” command in STATA (version 12.0).
Given the typically skewed distribution, the variables “number of publications” and “average number of co-authors” were preliminarily log-transformed to improve the fitting of the OLS model.
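Purely as an illustration (the study uses STATA’s “regress”), the following sketch shows how a specification like Model 1 of Table 4 could be written in Python with statsmodels: log-transformed output regressed on the Post_ASN dummy and the controls, with SDS dummies for field effects and robust standard errors. The file name, the column names, and the use of log1p to accommodate zero-output professors are our assumptions.

```python
# Sketch of an OLS specification analogous to Model 1, Table 4.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("professors_panel.csv")        # hypothetical two-period panel
df["ln_pubs"] = np.log1p(df["n_publications"])  # log-transform skewed output

model = smf.ols(
    "ln_pubs ~ post_asn + female + age + C(rank) + specialization + C(sds)",
    data=df,
).fit(cov_type="HC1")                           # robust standard errors
print(model.summary())
```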

5 Results

From the data reported in Table 1, it can be observed that between the two five-year periods, the overall output produced by the 25,581 subjects in the dataset increased by 33.3% in terms of publications and 37.7% in terms of authorships, with the difference between the two data points due to the increase in the average number of authors per publication (from 1.98 to 2.04). Figure 1 displays a boxplot of the distribution of output variation between the two periods at the individual level for the 25,581 professors in the dataset. The data indicate that 16,458 professors (equivalent to 64.3% of the total) increased their output, ranging from a minimum of +0.9% to a maximum of +4,725%, with a mean value of +141.5%. Among these, 989 subjects (3.9% of the total) were unproductive in the period before 2012 but not afterward. In contrast, 29.5% of professors (7,554 in total) reduced their output, on average, by -38.9%. Among these, 478 subjects (1.9% of the total) became unproductive after 2012. Finally, professors who maintained a constant output between the two periods totalled 1,569, representing 6.1% of the total.
Figure 1. Distribution of the variation in the number of publications (2013-2017 vs. 2008-2012) authored by 25,581 Italian professors in the dataset.
Analyzing the data related to the indicator defined in Equation [2], Figure 2 shows an approximately normal distribution, slightly skewed to the left of the reference value of zero. In fact, in 49.8% of cases, a decrease in the level of originality of bibliographies between the two periods was recorded, with an average reduction of -0.142. Conversely, in 46.0% of cases, an increase was observed, with an average value of +0.134.
Figure 2. Distribution of the variation in bibliographies’ originality (2013-2017 vs. 2008-2012) of publications authored by the 25,581 Italian professors in the dataset.
The results are combined in the diagram shown in Figure 3. The data dispersion clearly indicates a negative correlation between the two dimensions, suggesting that an increase in output is associated with a reduction in the originality of bibliographies, and vice versa.
Figure 3. Variation of the number of publications and of bibliographies’ originality (2013-2017 vs. 2008-2012), for the 25,581 Italian professors in the dataset.*

Note: * 700 observations in the tail of the x-axis were omitted, to improve data visualization.

Averaging individual data by SDSs (Scientific Disciplinary Sectors) yields the dispersion shown in Figure 4, confirming a negative correlation between the two indicators. Specifically, the average publication intensity between the two periods increased in 196 out of the total 200 SDSs, while the originality of bibliographies increased in only 91.
Figure 4. Average variation of the number of publications and of bibliographies’ originality (2013-2017 vs. 2008-2012), for the 25,581 Italian professors in the dataset by SDSs.
Turning to the first of the research hypotheses formulated in Section 4.1, Table 4 presents the Ordinary Least Squares (OLS) estimates for the model with the number of authored publications in the five-year period as the response variable. The value in the second column for the variable Post_ASN unequivocally indicates that output increases in the second period compared to the first, net of the effect of all considered variables. In particular, there is a negative (and significant) effect of female gender (-0.1121), age (-0.0494), and specialization (-0.1203). As for academic rank, a positive and incremental effect is observed (+0.5615 for associates and +1.1598 for full professors, compared to assistant professors). The third column reports the estimates of a model in which the Post_ASN dummy is replaced by its interaction with academic rank. The coefficient values indicate that the output increase between the two periods applies to all three academic ranks, although it is slightly higher for full professors (+0.4835) than for associates (+0.4384) and assistants (+0.3791).
Table 4. Estimates of the main parameters of the OLS model for the analysis of the number of publications of Italian professors overall.
Model 1 Model 2
Post_ASN 0.4301*** (0.0077) -
Gender -0.1121*** (0.0082) -0.1117*** (0.0082)
Age -0.0494*** (0.0006) -0.0492*** (0.0006)
Academic rank (Assistant as baseline)
Associate 0.5615*** (0.0089) 0.5348*** (0.0123)
Full 1.1598*** (0.0115) 1.1072*** (0.0149)
Acad. Rank#Post_ASN
Assistant - 0.3791*** (0.0129)
Associate - 0.4384*** (0.0118)
Full - 0.4835*** (0.0143)
Specialization_Index -0.1203*** (0.0240) -0.1197*** (0.0240)
Constant 3.1897*** (0.0513) 3.1972*** (0.0512)
F(205, 50956) 148.54 147.18
Prob > F 0.0000 0.0000
R-squared 0.3597 0.3601
Root MSE 0.8092 0.8090

200 dummies for as many SDSs for controlling field effects.

Linear regression - Number of obs = 51,162.

Robust standard errors in brackets.

A specific analysis confirms that the increase in output around 2012 affected all disciplinary areas. In this regard, Table 5 presents OLS estimates, specifically for the Post_ASN coefficient, obtained by grouping professors into their respective UDAs. The greatest increase, net of the effects of the control variables, is observed in UDA 8 (Civil engineering) with +0.7974, followed by Psychology (UDA 10) with +0.7474. The smallest increase is recorded in Chemistry (UDA 3), with +0.2010. To some extent, these data suggest that the growth rate of publications correlates with the degree of internationalization of the area.
Table 5. Estimates of the PostASN parameter of the OLS model for the analysis of the number of publications of Italian professors, by UDA.
UDA* Obs Coef. Robust Std. Err. t P>t [95% Conf. Interval] R-squared
1 4,614 0.2788 0.0234 11.93 0.000 0.2330 0.3246 0.2482
2 3,195 0.2935 0.0393 7.46 0.000 0.2164 0.3707 0.2615
3 4,442 0.2010 0.0222 9.07 0.000 0.1576 0.2445 0.2541
4 1,564 0.5637 0.0389 14.5 0.000 0.4875 0.6400 0.3078
5 7,101 0.2456 0.0179 13.69 0.000 0.2104 0.2808 0.2804
6 13,942 0.4143 0.0157 26.43 0.000 0.3836 0.4451 0.3684
7 4,708 0.5535 0.0233 23.71 0.000 0.5077 0.5992 0.3503
8 2,192 0.7974 0.0350 22.81 0.000 0.7288 0.8660 0.3330
9 7,808 0.5851 0.0195 30 0.000 0.5468 0.6233 0.3432
10 1,596 0.7474 0.0438 17.05 0.000 0.6614 0.8334 0.3659

* 1 - Mathematics and computer science, 2 - Physics, 3 - Chemistry, 4 - Earth sciences, 5 - Biology, 6 - Medicine, 7 - Agricultural and veterinary sciences, 8 - Civil engineering, 9 - Industrial and information engineering, 10 - Psychology.

SDS dummies in each UDA, for controlling field effects.

In brief, the first hypothesis appears to be confirmed, though it is not only assistant and associate professors who, after the introduction of the ASN, publish more works than in the past, but also their full professor colleagues, already at the top of their careers. This trend is evident across all disciplines, albeit with noticeable variations. The control variables exhibit effects entirely in line with expectations.
Turning to the second hypothesis, Table 6 presents OLS estimates with the originality of bibliographies in professors’ scientific portfolios over the two investigated five-year periods as the dependent variable. The second column shows a negative and significant coefficient (-0.0153) for Post_ASN, indicating a reduction in the level of the response variable between the two periods. In contrast to the previous model, control variables such as age and the specialization index exhibit a positive effect, while gender loses its significance. The coefficients related to academic rank suggest that the originality of bibliographies tends to decrease with “hierarchical” status, and the third column indicates that originality decreases more for full professors (-0.0241) than for associates (-0.0164) and assistants (-0.0068).
Table 6. Estimates of the main parameters of the OLS model for the analysis of the originality of bibliographies of Italian professors’ publications.
Model 1 Model 2
Post_ASN -0.0153*** (0.0016) -
Gender 0.0024 (0.0017) 0.0023 (0.0017)
Age 0.0049*** (0.0001) 0.0048*** (0.0001)
Academic rank (Assistant as baseline)
Associate -0.0509*** (0.0019) -0.0466*** (0.0026)
Full -0.1034*** (0.0024) -0.0946*** (0.0031)
Acad. rank#Post_ASN
Assistant - -0.0068*** (0.0026)
Associate - -0.0164*** (0.0025)
Full - -0.0241*** (0.0029)
University location (South as baseline)
Center -0.0027 (0.0020) -0.0026 (0.0020)
North -0.0027 (0.0018) -0.0026 (0.0018)
Specialization_Index 0.0343*** (0.0041) 0.0345*** (0.0041)
International collab. intensity -0.0679*** (0.0037) -0.0678*** (0.0037)
Number of co-authors -0.0179*** (0.0016) -0.0178*** (0.0016)
Constant 0.6883*** (0.0107) 0.6869*** (0.0107)
F 130.74 (209, 49376) 129.78 (211, 49374)
Prob > F 0.0000 0.0000
R-squared 0.3271 0.3274
Root MSE 0.1637 0.1637

200 dummies for as many SDSs for controlling field effects.

Linear regression - Number of obs = 49,586.

Robust standard errors are in brackets.

Geographical location has a non-significant effect, while the level of specialization shows a significant positive coefficient (+0.0343), which is somewhat counterintuitive, as is the (negative) effect of the intensity of international collaboration and the number of co-authors. The originality of bibliographies seems to be negatively impacted by the increase in these variables and, conversely, positively influenced by the level of specialization.
Therefore, the second hypothesis also appears to be confirmed: after the introduction of the ASN, Italian professors in bibliometric SDSs have published works with less differentiated bibliographies than in the past. However, just as for output, this reduction in originality affects everyone, not just assistant and associate professors.
In order to test the third research hypothesis, we replace academic rank with the variable Status_ASN in the previous model. The results of the estimates are reported in Table 7 and only partially confirm the hypothesis that full professors and associate professors accredited under the first ASN stage, subject to a weaker incentive, do not significantly alter their publishing strategies. Indeed, the greatest reduction in originality concerns non-accredited professors (-0.0307), but the coefficient of the interaction with Post_ASN for full professors is also negative (-0.0148). Accredited professors, on the other hand, tend to increase the originality of the bibliographies of their works, with a slight differentiation between accredited assistants (+0.0093) and accredited associates (+0.0077).
Table 7. Estimates of the main parameters of the OLS model for the analysis of the originality of bibliographies of Italian professors’ publications based on their ASN Status.
Coef. Robust Std. Err. t P>t [95% Conf. Interval]
Gender 0.0013 0.0016 0.77 0.440 -0.0020 0.0013
Age 0.0029 0.0001 22.82 0.000 0.0026 0.0029
Status_ASN_n (Full as baseline)
Assistant_accredited 0.0021 0.0037 0.57 0.566 -0.0051 0.0021
Associate_accredited -0.0338 0.0037 -9.11 0.000 -0.0410 -0.0338
Not_accredited 0.0989 0.0029 34.15 0.000 0.0932 0.0989
Status_ASN_n#Post_ASN
Assistant_accredited 0.0093 0.0034 2.69 0.007 0.0025 0.0093
Associate_accredited 0.0077 0.0039 1.94 0.052 -0.0001 0.0077
Full professor -0.0148 0.0031 -4.81 0.000 -0.0208 -0.0148
Not accredited -0.0307 0.0023 -13.5 0.000 -0.0351 -0.0307
Specialization_Index 0.0271 0.0040 6.78 0.000 0.0193 0.0271
International collab. intensity -0.0564 0.0036 -15.74 0.000 -0.0635 -0.0564
Number of co-authors -0.0161 0.0015 -10.46 0.000 -0.0191 -0.0161
Constant 0.7005 0.0116 60.57 0.000 0.6778 0.7005

Linear regression - Number of obs = 49,586.

F(213, 49372) = 145.83; Prob > F = 0.0000; R-squared = 0.3500; Root MSE = 0.1609.

To delve deeper into the phenomenon, we conducted a specific investigation of the profile of those who showed a very significant change in output (either positive or negative) between the two periods. In particular, two cohorts of about 1,000 professors each were identified, as shown in Table 8.
Table 8. Composition of two subsets of the total population for analysis.
Set Subset 1 Subset 2 Total Obs
A - Professors registering a marked decrease in output between the two periods Unproductive after 2012, with at least 5 publications before (54 obs) Professors registering a decrease of at least 60% in their output after 2012 compared to the period before (960 obs) 1,014
B - Professors registering a marked increase in output between the two periods Unproductive before 2012, with at least 5 publications after (310 obs) Professors registering an output at least 5 times higher after 2012, compared to the period before (700 obs) 1,010
The frequency analysis of individual data for these two cohorts reveals:
· A statistically significant age difference, with a 95% confidence interval of [51.6-52.5] for the first subset (professors registering a marked decrease in output) and [48.3-49.2] for the second (professors registering a marked increase).
· A significant concentration of “not accredited” professors in both the first subset (+33.2% compared to the expected value) and the second (+54.8% compared to the expected value).
In essence, the “not accredited” cohort encompasses two types of individuals: i) older professors who are not increasing the intensity of their scientific activity and probably do not aim to obtain accreditation for career advancement, as they are likely approaching the end of their careers or lack the ambition/will/ability to enhance their scientific standing; ii) younger professors who are intensifying their scientific activity with the goal, presumably, of obtaining accreditation for career progression after 2012.
For the latter individuals, we re-estimated the model from Table 7. The new coefficients for the dummies associated with the interaction Status_ASN_n#Post_ASN are all negative and considerably larger in absolute value than those indicated in Table 7 for the entire population. In particular, the greatest reduction in the originality of bibliographies affects accredited associates (-0.262) and full professors (-0.253). The coefficient for the non-accredited is smaller in absolute value (-0.240), suggesting that this effect can hardly be attributed to the adoption of a salami strategy, but rather follows as a natural consequence of the expansion of output. Not surprisingly, for the entire population, the correlation (Pearson’s rho) between the two variables is negative both before (-0.567) and after 2012 (-0.544).
The OLS regressions used in this study rest on several assumptions. To ensure these assumptions hold in our empirical context, we performed a series of post-regression diagnostic checks, such as tests for multicollinearity (VIF) and residual normality. As for heteroscedasticity, we imposed robust standard errors. The results of these diagnostic tests confirm that the assumptions underlying OLS are adequately met. We also tried specifications alternative to OLS, e.g. a nonlinear probit model, obtaining a worse fit (evaluated on the basis of the Akaike Information Criterion).
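For illustration, a sketch (ours) of the multicollinearity check, computing variance inflation factors on a design matrix analogous to that of our models; the data file and column names follow the hypothetical panel of the earlier estimation sketch.

```python
# VIF diagnostic on the (hypothetical) design matrix of the model.
import pandas as pd
from patsy import dmatrix
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("professors_panel.csv")  # same hypothetical panel as above
X = dmatrix("post_asn + female + age + C(rank) + specialization",
            data=df, return_type="dataframe")
vif = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
    index=X.columns,
)
print(vif.round(2))  # values far above ~10 would flag multicollinearity
```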
We recognize that the R-squared values reported in our models are relatively low (0.3-0.4). This indicates that the models explain less than half of the variability in the dependent variables, reflecting the complexity of the phenomenon under investigation. While lower R-squared values are not uncommon in social science research, especially when dealing with human behaviour or institutional factors, they do highlight the need for caution when interpreting the results. These values suggest that there are other unmeasured factors influencing the outcomes, and future research should explore additional variables or alternative models that might account for a greater proportion of the variability. Despite this limitation, the significant predictors identified in our analysis provide valuable insights into the key factors driving the observed outcomes.

6 Discussion and conclusions

The enduring interest in the transformative effects induced by research evaluation systems and, more broadly, by the adoption of Performance-Based Research Funding Systems (Buckle et al., 2021; Hicks, 2012), along with the media attention, particularly in Italy, accompanying this historical phase of policy revision, has motivated us to empirically examine one of the most widely debated hypotheses on this topic: whether the implementation of publication-centric incentive systems contributes to the proliferation of salami slicing practices.
Before delving into the study’s findings and conclusions, it is essential to provide the contextual backdrop. In response to the global trend of New Public Management reforms and the evident necessity to enhance the competitiveness of the higher education system and national research infrastructure (Dal Molin et al., 2017), a government agency (ANVUR) was established in Italy 12 years ago, with the specific mandate of evaluating universities and national research organizations. As part of this initiative, several evaluation exercises have been launched. Specifically, the ASN process aims to certify the scientific profile of candidates aspiring to enter academia (or to advance in their careers) and has the peculiarity of being publication-centric.
As with any process of change, the introduction of such an exercise sparked opposing opinions, which over time coalesced into stronger movements reflecting the discontent of various stakeholders. It is undeniable that the methodological framework underlying the exercise draws little inspiration from the state of the art, as it must navigate the delicate balance between achieving accurate and reliable measures of candidates’ scientific profiles and accommodating “political” interests, budget constraints, and other influencing factors. Despite having completed the introduction and development phases of the “life cycle” paradigm, research evaluation in Italy is in danger of never entering the maturity stage. Indeed, campaigns against the use of quantitative methods in large-scale research evaluation have strongly called into question the use of bibliometric approaches to support the evaluation itself. Just recently, ANVUR signed the COARA agreement, committing (among other things) to “Base research assessment primarily on qualitative evaluation for which peer review is central, supported by responsible use of quantitative indicators”; in essence, ANVUR pledges to discontinue its current practices, acknowledging to some extent that it has engaged in what could be deemed an “irresponsible” use of bibliometrics.
In the author’s view, the issue appears surreal, at the very least, as COARA’s claims apparently lack scientific rigor. At the same time, subscriptions to the agreement are on the rise, involving even key players like ANVUR. Bibliometricians, who could potentially provide objective evidence to confirm or refute COARA’s claims, seem notably silent. The study presented in this paper (along with a number of others by the same author) was conceived in this scenario. It draws support from organization theory, which asserts that any incentive scheme generates side effects that must be managed to keep them below a “physiological” threshold.
Specifically, as mentioned earlier, the investigation focused on the potential occurrence of salami slicing practices by Italian researchers eager to be accredited for advancement in the academic hierarchy, within an accreditation framework based, among other criteria, on a bibliometric output indicator.
The findings are clear and give rise to intriguing considerations.
Following the introduction of the ASN, Italian professors have published more than before. This increase is not limited to the individuals most affected by ASN incentives (assistant and associate professors); it extends to full professors already at the top of their careers, possibly seeking to enhance their bibliometric profile only to qualify for ASN evaluation committees.
This increase in output parallels a reduction in the originality of the bibliographies in professors’ scientific portfolios. The reduction is most pronounced for “not accredited” professors but is also very marked for full professors, challenging the hypothesis that the trend is related to the size of the incentive provided by the ASN.
Analysing professors with a marked increase in scientific activity reveals a greater reduction in bibliography originality for accredited associate professors and full professors. The observed decline in originality therefore cannot be attributed to a salami strategy adopted in response to the ASN, as it is not correlated with the ASN incentive. While the ASN has stimulated the scientific production of Italian academics, the increase in publications is linked to a reduction in the originality of their works. Notably, these effects are more pronounced for cohorts that should have been stimulated less by the introduction of this assessment scheme. Consequently, attributing the combined effect of increased output and decreased originality to a potential salami strategy appears questionable, at the very least.
The author contends that it is natural for a scholar who increases his or her research activity and output to reuse more citations across manuscripts. Therefore, we argue against abandoning policy instruments like the ASN, which (despite claimed but unconfirmed side effects) have shown an evident effect on efficiency improvements according to rigorous empirical analyses. Instead, we suggest enhancing and refining these instruments in terms of approach, methodologies, and indicators, drawing on the current state of the art in evaluative scientometrics. While this study focuses exclusively on Italian academics, many of the structural features observed, such as centralized research evaluation, promotion systems, and funding frameworks, are common to other national research systems, particularly in Europe and parts of Latin America. For instance, countries with similar state-driven, centralized academic policies may share comparable dynamics of academic career progression and research performance pressure. However, caution should be exercised when generalizing these results, as differences in institutional cultures, funding levels, and academic labour markets across countries could lead to variations in outcomes. A more nuanced comparative analysis could further clarify the extent to which these findings hold in other national contexts.

Acknowledgments

I am indebted to the Centre for Science and Technology Studies (CWTS) at Leiden University for providing access to the in-house WoS database from which the data used in this study were extracted.
I would also like to express my sincere gratitude to Giovanni Abramo for his invaluable contribution to this manuscript. His thoughtful insights, constructive feedback, and meticulous attention to detail significantly enhanced both the study and the overall structure of the manuscript.
References

[1]
Aagaard K., & Schneider J. W. (2017). Some considerations about causes and effects in studies of performance-based research funding systems. Journal of Informetrics, 11(3), 923-926. DOI: 10.1016/j.joi.2017.05.018.

[2]
Abramo G., & D’Angelo C. A. (2023). The impact of Italian performance-based research funding systems on the intensity of international research collaboration. Research Evaluation, 32(1), 47-57. DOI: 10.1093/reseval/rvac026.

[3]
Abramo G., D’Angelo C. A., & Di Costa F. (2018). The effects of gender, age and academic rank on research diversification. Scientometrics, 114(2), 373-387. DOI: 10.1007/s11192-017-2529-1.

[4]
Abramo G., D’Angelo C. A., & Di Costa F. (2019). Diversification versus specialization in scientific research: which strategy pays off?. Technovation, 82, 51-57. DOI: 10.1016/j.technovation.2018.06.010.

[5]
Abramo G., D’Angelo C. A., & Di Costa F. (2023). The effect of bibliometric research performance assessment on the specialization vs diversification strategies of scientists. 19th ISSI Conference, Bloomington, Indiana-US.

[6]
Abramo G., D’Angelo C. A., & Di Costa F. (2024). Do research assessment systems have the potential to hinder scientists from diversifying their research pursuits? Scientometrics, 129, 5915-5935. DOI: 10.1007/s11192-024-04959-8.

[7]
Abramo G., D’Angelo C. A., & Grilli L. (2021). The effects of citation-based research evaluation schemes on self-citation behavior. Journal of Informetrics, 15(4), 101204. DOI: 10.1016/j.joi.2021.101204.

[8]
Amos K. A. (2014). The ethics of scholarly publishing: Exploring differences in plagiarism and duplicate publication across nations. Journal of the Medical Library Association, 102(2), 87-91. DOI: 10.3163/1536-5050.102.2.005.

[9]
Andreescu L. (2013). Self-plagiarism in academic publishing: The anatomy of a misnomer. Science and Engineering Ethics, 19(3), 775-797. DOI: 10.1007/s11948-012-9416-1.

[10]
Anson I. G., & Moskovitz C. (2021). Text recycling in STEM: A text-analytic study of recently published research articles. Accountability in Research, 28(6), 349-371. DOI: 10.1080/08989621.2020.1850284.

[11]
Auranen O., & Nieminen M. (2010). University research funding and publication performance - An international comparison. Research Policy, 39(6), 822-834. DOI: 10.1016/j.respol.2010.03.003.

[12]
Bar-Ilan J., & Halevi G. (2018). Temporal characteristics of retracted articles. Scientometrics, 116(3), 1771-1783. DOI: 10.1007/s11192-018-2802-y.

[13]
Bruton S. V. (2014). Self-Plagiarism and Textual Recycling: Legitimate Forms of Research Misconduct. Accountability in Research, 21(3), 176-197. DOI: 10.1080/08989621.2014.848071.

[14]
Buckle R. A., Creedy J., & Ball A. (2021). Fifteen years of a PBRFS in New Zealand: Incentives and outcomes. Australian Economic Review, 54(2), 208-230. DOI: 10.1111/1467-8462.12415.

[15]
Buddemeier R.W. (1981). Least publishable unit. Science, 212(4494), 494. DOI: 10.1126/science.212.4494.494.

[16]
Butler L. (2003a). Explaining Australia's increased share of ISI publications - the effects of a funding formula based on publication counts. Research Policy, 32(1), 143-155. DOI: 10.1016/S0048-7333(02)00007-0.

[17]
Butler L. (2003b). Modifying publication practices in response to funding formulas. Research Evaluation, 12(1), 39-46. DOI: 10.3152/147154403781776780.

[18]
Cabbolet M. J. T. F. (2016). The Least Interesting Unit: A new concept for enhancing one’s academic career opportunities. Science and Engineering Ethics, 22(6), 1837-1841. DOI: 10.1007/s11948-015-9736-z.

[19]
Chen W., Xing Q.-R., Wang H., & Wang T. (2018). Retracted publications in the biomedical literature with authors from mainland China. Scientometrics, 114(1), 217-227. DOI: 10.1007/s11192-017-2565-x.

[20]
COPE Committee on Publication Ethics. (2019). Salami publication. Retrieved September 30, 2024, from https://publicationethics.org/case/salami-publication.

[21]
D’Angelo, C. A., Giuffrida, C., & Abramo, G. (2011). A heuristic approach to author name disambiguation in bibliometrics databases for large-scale research assessments. Journal of the American Society for Information Science and Technology, 62(2), 257-269. DOI: 10.1002/asi.21460.

[22]
Dal Molin M., Turri M., & Agasisti T. (2017). New Public Management reforms in the Italian universities: Managerial tools, accountability mechanisms or simply compliance? International Journal of Public Administration, 40(3), 256-269. DOI: 10.1080/01900692.2015.1107737.

[23]
de Rijcke S., Wouters P. F., Rushforth A. D., Franssen T. P., & Hammarfelt B. (2016). Evaluation practices and effects of indicator use—a literature review. Research Evaluation, 25(2), 161-169. DOI: 10.1093/reseval/rvv038.

[24]
de Vasconcelos S. M. R., & Roig M. (2015). Prior publication and redundancy in contemporary science: Are Authors and Editors at the Crossroads?. Science and Engineering Ethics, 21(5), 1367-1378. DOI: 10.1007/s11948-014-9599-8.

[25]
DeWitt D. J., Ilyas I. F., Naughton J., & Stonebraker M. (2013). We are drowning in a sea of least publishable units (LPUs). In Proceedings of the ACM SIGMOD International Conference on Management of Data, 921-922. DOI: 10.1145/2463676.2465345.

[26]
Ding D., Nguyen B., Gebel K., Bauman A., & Bero L. (2020). Duplicate and salami publication: A prevalence study of journal policies. International Journal of Epidemiology, 49(1), 281-288. DOI: 10.1093/ije/dyz187.

[27]
Edwards M. A., & Roy S. (2017). Academic research in the 21st Century: Maintaining scientific integrity in a climate of perverse incentives and hypercompetition. Environmental Engineering Science, 34(1), 51-61. DOI: 10.1089/ees.2016.0223.

[28]
Elton L. (2004). Goodhart’s law and performance indicators in higher education. Evaluation and Research in Education, 1(2), 120-128. DOI: 10.1080/09500790408668312.

[29]
Errami M., & Garner H. (2008). A tale of two citations. Nature, 451(7177), 397-399. DOI: 10.1038/451397a.

[30]
Errami M., Hicks J. M., Fisher W., Trusty D., Wren J. D., Long T. C., & Garner H. R. (2008). Déjà vu - A study of duplicate citations in Medline. Bioinformatics, 24(2), 243-249. DOI: 10.1093/bioinformatics/btm574.

[31]
Fanelli D. (2010). Do pressures to publish increase scientists’ bias? An empirical support from US states data. PLoS ONE, 5, e10271. DOI: 10.1371/journal.pone.0010271.

[32]
Fang F. C., Steen R. G., & Casadevall A. (2012). Misconduct accounts for the majority of retracted scientific publications. Proceeding of the National Academy of Science, 109(42), 17028-17033. DOI: 10.1073/pnas.1212247109.

[33]
Fire M., & Guestrin C. (2019). Over-optimization of academic publishing metrics: Observing Goodhart’s Law in action. GigaScience, 8(6). DOI: 10.1093/gigascience/giz053.

[34]
Geuna A., & Martin B. R. (2003). University research evaluation and funding: An international comparison. Minerva, 41(4), 277-304. DOI: 10.1023/B:MINE.0000005155.70870.bd.

[35]
Gläser J. (2017). A fight on epistemological quicksand: Comment on the dispute between van den Besselaar et al. and Butler. Journal of Informetrics, 11(3), 927-932. DOI: 10.1016/j.joi.2017.05.019.

[36]
Goodhart C. A. E. (1975). Problems of Monetary Management: The U. K. Experience. Papers in Monetary Economics (Reserve Bank of Australia).

[37]
Hall S., Moskovitz C., & Pemberton M. A. (2018). Attitudes toward text recycling in academic writing across disciplines. Accountability in Research, 25(3), 142-169. DOI: 10.1080/08989621.2018.1434622.

[38]
Happell B. (2016). Salami: By the slice or swallowed whole? Applied Nursing Research, 30, 29-31. DOI: 10.1016/j.apnr.2015.08.011.

[39]
Harvey H.B., & Weinstein D.F. (2017). Predatory publishing: An emerging threat to the medical literature. Academic Medicine, 92(2), 150-151. DOI: 10.1097/ACM.0000000000001521.

[40]
Hazelkorn E. (2010). Pros and cons of research assessment. In World Social Science Report: Knowledge Divides (pp. 255-258). UNESCO Press.

[41]
Hicks D. (2012). Performance-based university research funding systems. Research Policy, 41(2), 251-261. DOI: 10.1016/j.respol.2011.09.007.

[42]
Hicks D. (2017). What year? Difficulties in identifying the effect of policy on university output. Journal of Informetrics, 11(3), 933-936. DOI: 10.1016/j.joi.2017.05.020.

[43]
Hicks R., & Berg J. A. (2014). Multiple publications from a single study: Ethical dilemmas. Journal of the American Association of Nurse Practitioners, 26(5), 233-235. DOI: 10.1002/2327-6924.12125.

[44]
Honig B., & Bedi A. (2012). The fox in the hen house: A critical examination of plagiarism among members of the academy of management. Academy of Management Learning and Education, 11(1), 101-123. DOI: 10.5465/amle.2010.0084.

[45]
Horbach S. P. J. M. S., & Halffman W. W. (2019). The extent and causes of academic text recycling or ‘self-plagiarism’. Research Policy, 48(2), 492-502. DOI: 10.1016/j.respol.2017.09.004.

[46]
Hosseini M., & Gordijn B. (2020). A review of the literature on ethical issues related to scientific authorship. Accountability in Research, 27(5), 284-324. DOI: 10.1080/08989621.2020.1750957.

[47]
Jaccard P. (1901). Étude comparative de la distribution florale dans une portion des Alpes et des Jura. Bulletin de la Société Vaudoise des Sciences Naturelles, 37(142), 547-579.

[48]
Jarić I. (2016). High time for a common plagiarism detection system. Scientometrics, 106(1), 457-459. DOI: 10.1007/s11192-015-1756-6.

[49]
Jefferson T. (1998). Redundant publication in biomedical sciences: Scientific misconduct or necessity? Science and Engineering Ethics, 4(2), 135-140. DOI: 10.1007/s11948-998-0043-9.

[50]
Jimenez-Contreras E., De Moya Anegon F., & Lopez-Cozar E. D. (2003). The evolution of research activity in Spain: The impact of the national commission for the evaluation of research activity (CNEAI). Research Policy, 32(1), 123-142. DOI: 10.1016/S0048-7333(02)00008-2.

[51]
Kassian A., & Melikhova L. (2019). Russian Science Citation Index on the WoS platform: A critical assessment. Journal of Documentation, 75(5), 1162-1168. DOI: 10.1108/JD-02-2019-0033.

[52]
Kessler M. M. (1963). Bibliographic coupling between scientific papers. American Documentation, 14(1), 10-25. DOI: 10.1002/asi.5090140103.

[53]
Kostoff R. N., Johnson D., Del Rio J. A., Bloomfield L. A., Shlesinger M. F., Malpohl G., & Cortes H. D. (2006). Duplicate publication and ‘paper inflation’ in the Fractals literature. Science and Engineering Ethics, 12(3), 543-554. DOI: 10.1007/s11948-006-0052-5.

[54]
Larivière V., & Costas R. (2016). How many is too many? On the relationship between research productivity and impact. PLoS ONE, 11(9), e0162709. DOI: 10.1371/journal.pone.0162709.

[55]
Larivière V., & Gingras Y. (2010). On the prevalence and scientific impact of duplicate publications in different scientific fields (1980-2007). Journal of Documentation, 66(2), 179-190. DOI: 10.1108/00220411011023607.

[56]
Martin B. R. (2013). Whither research integrity? Plagiarism, self-plagiarism and coercive citation in an age of research assessment. Research Policy, 42(5), 1005-1014. DOI: 10.1016/j.respol.2013.03.011.

[57]
Martin B. R. (2017). When social scientists disagree: Comments on the Butler-van den Besselaar debate. Journal of Informetrics, 11(3), 937-940. DOI: 10.1016/j.joi.2017.05.021.

[58]
Moher D., Naudet F., Cristea I. A., Miedema F., Ioannidis J. P., & Goodman S. N. (2018). Assessing scientists for hiring, promotion, and tenure. PLoS Biology, 16(3), e2004089. DOI: 10.1371/journal.pbio.2004089.

[59]
Moskovitz C. (2019). Text recycling in scientific writing. Science and Engineering Ethics, 25(3), 813-851. DOI: 10.1007/s11948-017-0008-y.

[60]
Moskovitz C. (2021). Standardizing terminology for text recycling in research writing. Learned Publishing, 34(3), 370-378. DOI: 10.1002/leap.1372.

[61]
Mukherjee A. (2020). Revisiting the ethical aspects in research publications. International Research Journal of Multidisciplinary Scope, 1(1), 27-29. DOI: 10.47857/irjms.2020.v01i01.005.

[62]
Nagin D. S., Rebitzer J. B., Sanders S., & Lowell J. T. (2002). Monitoring, motivation, and management: The determinants of opportunistic behaviour in a field experiment. American Economic Review, 92(4), 850-873. DOI: 10.1257/00028280260344498.

[63]
Neill U. S. (2008). Publish or perish, but at what cost? Journal of Clinical Investigation, 118(7), 2368. DOI: 10.1172/JCI36371.

[64]
Norman I., & Griffiths P. (2008). Duplicate publication and ‘salami slicing’: Ethical issues and practical solutions. International Journal of Nursing Studies, 45(9), 1257-1260. DOI: 10.1016/j.ijnurstu.2008.07.003.

[65]
Oransky I., & Marcus A. (2012). Retraction Watch. Retrieved September 30, 2024, from https://www.retractionwatch.wordpress.com.

[66]
Rafols I., Leydesdorff L., O’Hare A., Nightingale P., & Stirling A. (2012). How journal rankings can suppress interdisciplinary research: A comparison between innovation studies and business & management. Research Policy, 41(7), 1262-1282. DOI: 10.1016/j.respol.2012.03.015.

[67]
Refinetti R. (1990). In defense of the least publishable unit. The FASEB Journal, 4(1), 128-129. DOI: 10.1096/fasebj.4.1.2295373.

[68]
Rogerson A. M., & McCarthy G. (2017). Using internet based paraphrasing tools: Original work, patchwriting or facilitated plagiarism? International Journal for Educational Integrity, 13(1). DOI: 10.1007/s40979-016-0013-y.

[69]
Roth J. (1981). Least publishable unit. Science, 212(4494), 494. DOI: 10.1126/science.212.4494.494-a.

[70]
Seeber M., Cattaneo M., Meoli M., & Malighetti P. (2019). Self-citations as strategic response to the use of metrics for career decisions. Research Policy, 48(2), 478-491. DOI: 10.1016/j.respol.2017.12.004.

[71]
Sen S. K., & Gan S. K. (1983). A mathematical extension of the idea of bibliographic coupling and its applications. Annals of Library Science and Documentation, 30(2), 78-82.

[72]
Sidiropoulos A., Katsaros D., & Manolopoulos Y. (2007). Generalized Hirsch h-index for disclosing latent facts in citation networks. Scientometrics, 72, 253-280. DOI: 10.1007/s11192-007-1722-z.

[73]
Stack S. (2004). Gender, children and research productivity. Research in Higher Education, 45(8), 891-920. DOI: 10.1007/s11162-004-5953-z.

[74]
Strathern M. (1997). ‘Improving ratings’: Audit in the British University system. European Review, 5(3), 305-321. DOI: 10.1002/(SICI)1234-981X(199707)5:3<305::AID-EURO184>3.0.CO;2-4.

[75]
Šupak Smolčić V. (2013). Salami publication: Definitions and examples. Biochemia Medica, 23(3), 237-241. DOI: 10.11613/BM.2013.030.

[76]
Teixeira da Silva J. A. (2020). The ethics of publishing in two languages. Scientometrics, 123(1), 535-541. DOI: 10.1007/s11192-020-03363-2.

[77]
Tonta Y. (2017). Does monetary support increase the number of scientific papers? An interrupted time series analysis. Journal of Data and Information Science, 3(1), 19-39. DOI: 10.2478/jdis-2018-0002.

[78]
van Dalen H. P., & Henkens K. (2012). Intended and unintended consequences of a publish-or-perish culture: A worldwide survey. Journal of the American Society for Information Science and Technology, 63(7), 1282-1293. DOI: 10.1002/asi.22636.

[79]
van den Besselaar P., Heyman U., & Sandström U. (2017). Perverse effects of output-based research funding? Butler’s Australian case revisited. Journal of Informetrics, 11(3), 905-918. DOI: 10.1016/j.joi.2017.05.016.

[80]
Wager E., & Williams P. (2011). Why and how do journals retract articles? An analysis of Medline retractions 1988-2008. Journal of Medical Ethics, 37(9), 567-570. DOI: 10.1136/jme.2010.040964.

[81]
Zhang M., & Grieneisen M. L. (2013). The impact of misconduct on the published medical and non-medical literature, and the news media. Scientometrics, 96(2), 573-587. DOI: 10.1007/s11192-012-0920-5.

[82]
Zhang Y. H., & Jia X. (2012). A survey on the use of CrossCheck for detecting plagiarism in journal articles. Learned Publishing, 25(4), 292-307. DOI: 10.1087/20120408.

[83]
Zhang Y. H., & Jia X. (2013). Republication of conference papers in journals? Learned Publishing, 26(3), 189-196. DOI: 10.1087/20130307.
