Research Paper

Performance-based Research Funding in Denmark: The Adoption and Translation of the Norwegian Model(1)

  • Kaare Aagaard
  • Danish Centre for Studies in Research & Research Policy, Department of Political Science & Government, Aarhus University, Bartholins Allé 7, DK-8000 Aarhus C, Denmark
Corresponding author: Kaare Aagaard (E-mail: ). (1) This chapter draws on Aagaard (2011) and Schneider and Aagaard (2012), which presented in-depth analyses of the introduction of the Danish BFI in Danish-language publications. This chapter condenses and updates these analyses and makes them available for the first time to an international audience.

Received date: 2018-08-31

  Revised date: 2018-10-20

  Accepted date: 2018-10-25

  Online published: 2019-01-08


Open Access

Abstract

Purpose: The main goal of this study is to outline and analyze the Danish adoption and translation of the Norwegian Publication Indicator.

Design/methodology/approach: The study takes the form of a policy analysis drawing mainly on document analysis of policy papers, previously published studies and grey literature.

Findings: The study highlights a number of crucial factors that relate both to the Danish process and to the final Danish result, underscoring that the Danish BFI model is indeed a quite different system than its Norwegian counterpart. One consequence of these process and design differences is that the broader legitimacy of the Danish BFI today appears to be quite poor.

Reasons for this include: unclear and shifting objectives throughout the process; limited willingness to take ownership of the model among stakeholders; lack of communication throughout the implementation process and an apparent underestimation of the challenges associated with the use of bibliometric indicators.

Research limitations: The conclusions of the study are based on the author's interpretation of a long-drawn-out and complex process with many different stakeholders involved. The format of this article does not allow for detailed documentation of all elements, but further details can be provided upon request.

Practical implications: The analysis may feed into current policy discussions on the future of the Danish BFI.

Originality/value: Some elements of the present analysis have previously been published in Danish outlets, but this article represents the first publication on this issue targeting a broader international audience.

Cite this article

Kaare Aagaard. Performance-based Research Funding in Denmark: The Adoption and Translation of the Norwegian Model. Journal of Data and Information Science, 2018, 3(4): 20-30. DOI: 10.2478/jdis-2018-0018

1 Background and motivation

Funding constitutes one of the main channels through which authority is exercised over research. Changes in the design of funding systems can accordingly be expected to have significant effects on the production of scientific knowledge (Whitley, Gläser & Engwall, 2010), and a detailed understanding of the design and effects of national research funding mechanisms is therefore vital (Aagaard, 2017). This is not least the case in relation to performance-based research funding systems (PBRFS), which during recent decades have been introduced in a growing number of countries and which in most cases have been strongly contested (Hicks, 2012).
The Danish system is an interesting case in this respect. For at least four decades, a central issue on the Danish research policy agenda has been how to design a core funding system that takes more than student numbers and historical criteria into account in the allocation of resources. In line with general international trends, the funding of the Danish universities was from the post-WW2 years to the late 1970s almost totally dominated by core funding, which was initially distributed equally between research and teaching assignments (Aagaard, 2017). However, with the ever-growing student uptake, the political system became concerned that research priorities increasingly became side effects of policy decisions related to education. This led to a political demand for a more selective distribution of research funding. The first result of these discussions materialized in 1981 with the so-called budget reform, which introduced a clear separation between funding for teaching and funding for research. On the teaching side the reform paved the way for performance-based indicators of educational activities, the so-called Taximeter system, but on the research side it remained unclear how to replace student numbers and historical factors as key distribution criteria (Aagaard, 2011).
Despite continued discussions, further changes to the system were not implemented until the mid-1990s, when a minor corrective to the existing model was introduced after a lengthy and conflictual negotiation process. The new corrective took the form of a quantitative formula (known as the 50-40-10 model) which since 1997 has governed a marginal distribution of the core funding based on student activity, external funding and PhD production (Aagaard, 2011; Schneider & Aagaard, 2012). Until 2010 this 50-40-10 model functioned on an ad hoc basis with significant year-to-year variations in the amount of money distributed. Hence, the universities did not know in advance how much funding would be allocated or how the individual indicators would be weighted. Surprisingly, given the choice of indicators and the lack of transparency, the 50-40-10 model itself has rarely been debated, although substantial amounts of money have been reallocated through this mechanism over the years (Aagaard, 2011).

2 The process leading to the adoption of the Norwegian model

The political perception that the existing Danish core funding system was functioning inappropriately became even more pronounced after the turn of the century. It was particularly highlighted as problematic that the distribution of core funding between the universities was based on a historically conditioned distribution key, regardless of whether the quality and efficiency of the individual universities was high or low (Regeringen, 2005). It was therefore a central objective of the government to make sure that core funding for research should be distributed based on “quality” rather than on historical and quantity-oriented parameters, and that this “quality” should be systematically measured and evaluated (Regeringen, 2005). More precisely, the intention was that from 2007 onwards the universities should be assessed on their teaching, research and knowledge dissemination activities. The assessment should be carried out by an international and independent panel and should be made public (Regeringen, 2005). These ideas were formally launched in the Globalisation Strategy presented in 2006, although the planned introduction of a new model was postponed one year, from 2007 to 2008 (Regeringen, 2006).
As a result of various consultations and internal discussions among policymakers, administrators and stakeholders, it was, however, relatively soon decided to aim for an indicator-based model rather than the proposed panel-based one. Already at this stage, several key actors argued in favor of the Norwegian model as the one that would have the least adverse effects (Aagaard, 2011; Schneider & Aagaard, 2012). Hence, inspiration from the Norwegian model was included early on in the Danish process, but initially only as a limited element in several proposals for much more complex models (VTU, 2007d, 2008a, 2008b, 2008c, 2008d). The complexity of the models was primarily the result of an ambition to cover all the activities of the universities. This meant that a large number of overall indicators were included, some of which were further divided into sub-indicators. Moreover, a number of the proposed indicators were quite controversial, not least in relation to knowledge dissemination activities, where it was difficult to see how the measured parameters could work in practice without creating unintended consequences. In addition to the problems of high complexity, there was also basic uncertainty regarding which problem a new model was supposed to solve, how much money it should redistribute, what activities it should cover, and how these activities should be weighted in relation to each other. Finally, the use of indicators in the proposed models seemed in many cases mainly to reflect what was available and administratively manageable rather than what the political system initially wished to create incentives for (VTU, 2007a, 2007b, 2007d; Aagaard, 2011).
Regarding the redistribution of funding, it was first proposed that all core funding should be distributed based on a new performance-based model, but during the process the emphasis on this issue shifted from document to document (Aagaard, 2011). The perception was apparently that the design of the model and the amount of funds to be distributed through it were independent questions.
As a consequence of these problems, a number of key stakeholders grew increasingly sceptical during the initial phases of the process. To avoid the approaching deadlock, it was, after almost two years of conflict-ridden negotiations, instead suggested that the universities themselves should come up with an alternative model proposal (Aagaard, 2011). While this process also turned out to be challenging due to significant conflicts of interest between the research-intensive and the teaching-intensive universities (DTU, KU, & AU, 2008; CBS, AAU, & RUC, 2008), the institutions nevertheless managed to reach a compromise proposal which was presented in spring 2009 (Danske Universiteter, 2009). This proposal subsequently paved the way for the political decision taken on June 30th, 2009, almost four years after the process was initiated (VTU, 2009). The final political agreement was based almost entirely on the proposal of the Danish universities and took the form of an expanded 50-40-10 model, where the bibliometric research indicator (BFI), inspired by the Norwegian model, came in as an additional element. Where the previous model had three indicators: education (50%), external research funding (40%) and PhD production (10%), the new model now had four: education (45%), external research funding (20%), PhD production (10%) and the BFI (25%). The BFI, like the Norwegian model, was based on differentiated publication activity with two levels determined by a large number of field-specific expert groups. Unlike the Norwegian model, however, the BFI also included patents, doctoral dissertations and PhD dissertations (the PhD dissertations were later removed from the model again). Finally, as part of the reform it was decided that the indicator should only have funding consequences in relation to the distribution of “additional” core funding.
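To make the shift in weights concrete, the two allocation formulas can be written schematically (the symbols are introduced here purely for illustration and do not appear in the policy documents): letting $S$, $E$, $P$ and $B$ denote a university's share of total student activity, external research funding, PhD production and BFI points, respectively, its share $F$ of the performance-based funds is

$$F_{\text{old}} = 0.50\,S + 0.40\,E + 0.10\,P, \qquad F_{\text{new}} = 0.45\,S + 0.20\,E + 0.10\,P + 0.25\,B.$$

The BFI thus entered mainly at the expense of the external funding indicator, whose weight was halved from 40% to 20%, while the weights on education and PhD production were largely preserved.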

3 Differences between the Norwegian and the Danish model

Hence, on the one hand, the Danish BFI, which resulted from this elongated and conflictual process, was unambiguously inspired by the Norwegian model. On the other hand, and this is much less recognized in both the public and the scholarly debate, the final Danish model design differs from its Norwegian counterpart on a number of decisive, but partly hidden, points. The two models, which at a superficial glance look relatively identical, are in practice different in important respects. The process leading to the BFI thus meant that a number of solutions were chosen which to some extent violate the logic and transparency of the Norwegian model. In the following it is outlined how the Danish BFI deviates from the Norwegian model in at least four important respects. These relate to: 1) the lack of clear objectives; 2) uncertainty in relation to the redistributional effects of the model; 3) the choice of funding neutrality across the main scientific areas; and 4) the uncertainty related to the establishment of the documentation and data quality assurance system.

3.1 Lack of clear objectives

A fundamental problem related to both the process and the final result in the Danish case has been the lack of clear objectives for the introduction of the BFI. While the Norwegian model was designed to address specific Norwegian challenges, the purpose and rationale of the Danish model was contested and constantly shifting right from the beginning of the process. In addition, and contributing to this, the preparation of background material and underlying analyses was an incoherent, underprioritized and messy process. This lack of clear objectives and thorough preparation influenced the subsequent process in several ways. Firstly, the lack of clear objectives was a significant part of the explanation for the highly controversial process of designing and implementing the BFI, which in turn resulted in a lack of legitimacy for the model as a whole. Secondly, the lack of clear objectives meant that Denmark ended up with a model that neither the political system nor the research community really wished for, and a model which does not seem to address specific Danish challenges. While the Globalisation Strategy highlighted broader societally oriented factors as the most important ones to reward in a new model, agreement on how to measure such factors could not be reached. Hence, the fact that the process ended up with a model with a strong emphasis on traditional academic publishing rather than knowledge exchange, collaboration and societal impact did not reflect a political wish, but instead a realization that it was the only possible solution as the process played out (Aagaard, 2011). It is, however, important to emphasize that the model is intended to work not only as an incentive model, but also as an accountability mechanism. From this perspective, the BFI can be characterized as a model that enhances transparency and broader legitimacy toward the public at large in relation to the distribution of taxpayer money.

3.2 Lack of clear incentives

A second difference relates to the incentive structures of the two models. This issue is crucial for the design of a model of this type, since the risk of “unintended effects” is closely linked to the degree of redistribution. Where the Norwegian model was from the beginning designed as a marginal redistribution mechanism, this issue was much less clearly articulated in Denmark. As outlined above, this uncertainty characterized the design process, where very different proposals were launched, ranging from massive to marginal redistribution. It has, however, also characterized the process after the implementation, where the universities have had little chance of knowing how much money would be redistributed from year to year, as this amount has depended both on the infusion of new funds and on other mechanisms. Hence, the amount of money is not known in advance by the Danish universities, and it may in addition show significant fluctuations from year to year. The Norwegian bibliometric indicator, on the other hand, has consistently redistributed around 2% of the total funding for the university sector each year. There is thus a relatively predictable and marginal redistribution effect, making it possible for universities to navigate in relation to the model while also maintaining financial space to pursue other important objectives. The actual development of the funding effects in the Danish case is outlined in section 4 of this chapter.

3.3 Main area funding neutrality

As part of the compromise between the Danish universities it was also decided that the Danish model, in contrast to the Norwegian one, should be neutral in its redistributional effects across the main scientific areas, meaning that funding should only be reallocated within the main areas and not across them (Danske Universiteter, 2010). This meant that the previous relative distributions of core funding between the main areas should also be the basis for the allocation of funding from the BFI. This choice, however, contradicts the intention of the Norwegian model of comparability across disciplines and thus goes against the rationale for using a universal publication indicator instead of, for example, citation indicators within the areas with high coverage in the bibliometric databases. Hansen (2009, 2011) points to a further unintended effect of the main area neutrality: the value of a publication point differs from main area to main area. There is no direct correspondence between the main areas' share of core funding and their share of publication points. This means that main areas with a larger share of publication points than of core funding in fact receive a smaller grant per publication point than main areas with a smaller share of publication points than of core funding (Hansen, 2009). This main area neutrality also means that the BFI becomes conservative and undynamic for the university sector as a whole, as the ability to move funding between disciplines disappears. One could argue, however, that this was never the intention of the Norwegian model either.
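The mechanics behind Hansen's observation can be stated compactly (the notation is introduced here purely for illustration and is not taken from the BFI documentation): let $F$ be the total BFI pool, $s_a$ the fixed share of it assigned to main area $a$ under funding neutrality, $n_a$ the number of publication points produced in area $a$, and $N$ the total number of points across all areas. The value of one point in area $a$ is then

$$v_a = \frac{s_a F}{n_a} = \frac{s_a}{p_a}\cdot\frac{F}{N}, \qquad \text{where } p_a = \frac{n_a}{N}.$$

Whenever an area's share of points exceeds its share of funding ($p_a > s_a$), its points are worth less than the sector average $F/N$, and vice versa, which is exactly the discrepancy Hansen describes.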

3.4 Documentation system and quality assurance

Finally, in relation to the Norwegian model, it was not only central to create an indicator that could stimulate increased international publication activity and a model that could be applied to the system as a whole. It was also a crucial condition for the implementation that a common national research documentation system could be created in the same process, whether or not it was part of a redistribution mechanism. The main objective here was to ensure a high degree of transparency in relation to the sector's academic output. The construction of a reliable documentation system and the establishment of quality assurance mechanisms in relation to publication data did not, however, receive the same attention in Denmark. Such objectives were not given any particular weight throughout the Danish process and were not highlighted as a major reason for the introduction of the model. Hence, while data harvesting and calculation of points have indeed been implemented, the process has not worked without problems, and it has never become a truly well-functioning documentation system with transparency and systematic quality assurance of data (Schneider & Aagaard, 2012).

4 Funding implications

As outlined in section 3.2, the amount of funding redistributed through the Danish BFI varies from year to year. Where the model's economic reallocation effects in Norway have been well known and relatively stable over time, as mentioned in the previous section, the situation in Denmark has been quite different. It was characteristic of the BFI in its first years that the actual redistribution effects were very modest, but also that the amount was not known to the universities in advance and that there were fluctuations from year to year which could not be foreseen. The latter two points are obviously far from appropriate for an incentive model. In recent years, however, the trend has moved unambiguously towards more and more redistribution, as Table 1 below illustrates.
Table 1 Development in the allocation of core funding (million DKK and %).

                                         2010    2011    2012    2013    2014    2015    2016    2017    2018
Core funding (million DKK)              7,905   8,443   8,504   8,592   8,589   8,593   8,526   8,527   8,527
Performance-based share (million DKK)     320     594     680   1,045   1,182   1,326   1,480   1,656   2,090
Performance-based share (% of total)        4       7       8      12      14      15      17      19      25
BFI (million DKK)                          80   148.5     170  261.25   295.5   331.5     370     414   522.5
BFI (% of total)                            1       2       2       3       3       4       4       5       6

Source: Aagaard 2016

In 2010, the BFI effectively redistributed only DKK 30 million for the sector as a whole. In 2011, this amount had increased to about DKK 75-80 million. Considering that these funds would otherwise have been distributed among the universities according to the old 50-40-10 model, the actual redistribution on the basis of publication points was almost negligible. This has, however, changed quite drastically in the most recent years.
As presented above, the Danish BFI is part of a broader performance-based mechanism. As shown in Table 1, no less than a quarter of all core funding in 2018 is distributed according to this mechanism, and within it the BFI has a weight of 25%. This means that approximately 6% of the total amount of core funding in 2018 will be distributed on the basis of BFI points. By comparison, the corresponding share in the Norwegian system from 2017 is only approximately 1.6%. For the Danish system this figure was 1% in 2010 and 2% in 2011 and 2012, while the 2018 share is thus six times as high as in 2010 and almost four times as high as the current Norwegian level. The increase is driven by the fact that the already allocated basic research funds are reduced annually by 2%, which is then reinvested in the university sector via the performance model.
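The 2018 figure follows directly from the weights reported above and can be checked against Table 1:

$$0.25 \times 0.25 = 0.0625 \approx 6\%, \qquad \frac{522.5}{8{,}527} \approx 6.1\%.$$

That is, the BFI's 25% weight within the performance-based quarter of total core funding yields the roughly 6% of core funding that is distributed on the basis of BFI points.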
However, it is characteristic that this increasing redistribution has occurred without any particular public discussion of its possible consequences. Given that our actual knowledge about the real effects of the model is very limited, such discussions seem to be required. This is not least the case at a time of growing focus on the importance of both internal and external incentives for scientific misconduct and the spread of detrimental research practices. Viewed from this perspective, in a context of stagnant or declining total research funding, very low success rates in all research councils and foundations, a high level of competition for positions from the postdoc level and beyond, and a general performance-based assessment and reward culture, increased weight on PBRFS might amplify unintended dynamics in the science system. All else being equal, we must expect that the greater the proportion of funding allocated through this type of mechanism, the greater the risk that the incentives will have inappropriate behavioral consequences at both the institutional and the individual level (Aagaard, 2016). As we will return to in the next section, it is, however, not justified to single out the BFI as the sole driver of such unintended developments.

5 Experiences and effects

Most importantly, the article has highlighted a number of crucial factors that relate both to the Danish process and to the final Danish result, underscoring that the Danish BFI is indeed a quite different system than its Norwegian counterpart. One consequence of these process and design differences is that the broader legitimacy of the Danish BFI today appears to be quite poor. The reasons for this lack of legitimacy can most likely be found in the following factors: 1) the preparation and the design and implementation process were not handled well by the central authorities; 2) the objectives of introducing such a Danish model have been unclear and shifting throughout the process, and there has been limited willingness to take ownership of the model among stakeholders; and 3) there has been a general lack of communication throughout the implementation process and an apparent underestimation of the challenges associated with the use of bibliometric indicators.
The use of a publication-based indicator such as the Danish BFI may still be defended, but if so, it should be based on a number of arguments that have been almost absent in the Danish debate so far. This could be done, for example, by pointing to some of the potential positive effects of the Norwegian model, such as the possibility of creating increased general awareness of publishing behavior at all levels and in all areas, the availability of a significantly improved national publication database, and not least the greater visibility of the academic production of the humanities and the social sciences.

6 The future of the Danish BFI

The publish-or-perish phenomenon is by no means new, but there are indications that researchers today perceive a stronger pressure than previously, although such publication pressure may differ depending on career stage, field and other factors. Rather than seeing systems such as the Danish BFI as the main cause of this pressure, it is probably more reasonable to perceive the model as a symptom of stronger underlying dynamics. It therefore appears both right and wrong when critics express concern that the incentive structure of the BFI alone leads to inappropriate behavioral changes in relation to the values and norms of good scientific work and in relation to the versatile tasks that the universities are generally expected to perform in Danish society. Such general concerns appear justified on the one hand, but on the other hand the pressures can hardly be attributed to the BFI alone. Thus, the problem will hardly be solved by simply abandoning the indicator model. At the time of writing, however, the future of the BFI is highly uncertain. An expert committee has been commissioned to come up with new proposals, but so far no clear alternatives to the Norwegian model have materialized.

The author has declared that no competing interests exist.

[1]
Aagaard, K. (2011). Kampen om basismidlerne: Historisk institutionel analyse af basisbevillingsmodellens udvikling på universitetsområdet i Danmark.

[2]
Aagaard, K. (2016). Manglende debat om stigende præstationsbaseret finansiering af dansk forskning.

[3]
Aagaard, K. (2017). The evolution of a national research funding system: Transformative change through layering and displacement. Minerva, 55(6), 1-19.

[4]
Butler, L. (2010). Impacts of performance-based research funding systems: A review of the concerns and the evidence. SourceOECD Education & Skills, 41, 118-156. doi:10.1787/9789264094611-7-en.

[5]
CBS, AAU, & RUC (2008). Vedrørende fordeling af basismidler til universiteterne. Letter from CBS, AAU and RUC to the Minister of Science, October 10, 2008.

[6]
Danske Universiteter (2010). Bibliometrien skal være hovedområdeneutral. Memo.

[7]
DTU, KU, & AU (2008). Letter to the Minister of Science regarding distribution criteria, September 30, 2008.

[8]
Forsknings- og Innovationsstyrelsen (2009). Samlet notat om den bibliometriske forskningsindikator. Memo, October 22, 2009, Forsknings- og Innovationsstyrelsen.

[9]
Gläser, J., & Laudel, G. (2007). The social construction of bibliometric evaluations. In R. Whitley & J. Gläser (Eds.), The Changing Governance of the Sciences. doi:10.1007/978-1-4020-6746-4_5.

[10]
Hansen, L. (2009). Hvorfor er en humanistisk artikel mere værd end en sundhedsvidenskabelig? Blog post, October 12, 2009, on the blog Forskningsfrihed?

[11]
Hansen, L. (2011). CBS gør det godt i konkurrencen om forskningsmidler. Blog post, January 25, 2011, CBS Observer.

[12]
Hicks, D. (2012). Performance-based university research funding systems. Research Policy, 41(2), 251-261.

[13]
Regeringen (2005). Offentlig forskning - mere konkurrence og bedre kvalitet. Background memo for the Globalisation Council's discussion at the meeting of December 8-9, 2005. Regeringen.

[14]
Regeringen (2006). Fremgang, fornyelse, tryghed. Regeringens Globaliseringsstrategi.

[15]
Schneider, J.W., & Aagaard, K. (2012). Stor ståhej for ingenting: Den danske bibliometriske indikator. In K. Aagaard & N. Mejlgaard (Eds.), Dansk forskningspolitik efter årtusindskiftet (pp. 229-260). Aarhus: Aarhus Universitetsforlag.

[16]
Sivertsen, G. (2008). Experiences with a bibliometric model for performance based funding of research institutions. In J. Gorraiz & E. Schiebel (Eds.), Book of Abstracts, 10th International Science and Technology Indicators Conference, 17-20 September 2008, University of Vienna, pp. 126-128.

[17]
VTU (2007a). Definition og indsamling af indikatorer til ny kvalitetsfinansieringsmodel for basismidler. Universitets- og Bygningsstyrelsen, November 30, 2007.

[18]
VTU (2007b). Gennemgang af uddannelsesindikatorer til brug for fordeling af universiteternes basismidler. Memo, April 18, 2007. Universitets- og Bygningsstyrelsen.

[19]
VTU (2007c). Gennemgang af vidensspredningsindikatorer til brug for fordeling af universiteternes basismidler. Memo, April 27, 2007. Universitets- og Bygningsstyrelsen.

[20]
VTU (2007d). Udkast til en model for fordeling af basismidler efter kvalitet. Universitets- og Bygningsstyrelsen, September 11, 2007.

[21]
VTU (2008a). Udkast til en model for fordeling af basismidler efter kvalitet. Universitets- og Bygningsstyrelsen, January 2, 2008.

[22]
VTU (2008b). Model til fordeling af basismidler. Universitets- og Bygningsstyrelsen, January 23, 2008.

[23]
VTU (2008c). Notat om basismidler efter resultater. Universitets- og Bygningsstyrelsen, April 14, 2008.

[24]
VTU (2008d). Basismidler efter resultater. Universitets- og Bygningsstyrelsen, April 25, 2008.

[25]
VTU (2009). Aftale om basismidler efter resultat. Agreement between the government, Socialdemokraterne, Dansk Folkeparti and Det Radikale Venstre on a new model for the distribution of core funding, June 2009.
