Research Paper

Taking Comfort in Points: The Appeal of the Norwegian Model in Sweden

  • Björn Hammarfelt
  • Swedish School of Library and Information Science, University of Borås, Sweden
Corresponding author: Björn Hammarfelt.

Received date: 2018-09-07

Revised date: 2018-10-20

Accepted date: 2018-10-25

Online published: 2019-01-08


Open Access

Abstract

Purpose: The “Norwegian model” has become widely used for assessment and resource allocation purposes. This paper investigates why this model has become so widespread and influential.

Approach: A theoretical background is outlined in which the reduction of “uncertainty” is highlighted as a key feature of performance measurement systems. These theories are then drawn upon when revisiting previous studies of the Norwegian model, its use, and reactions to it in Sweden.

Findings: The empirical examples, which concern more formal use at the level of universities as well as responses from individual researchers, show how particular parts—especially the “publication indicator”—are employed in Swedish academia. The discussion posits that the attractiveness of the Norwegian model can largely be explained by its ability to reduce complexity and uncertainty, even in fields where traditional bibliometric measurement is less applicable.

Research limitations: The findings presented should be regarded as examples that can be used for discussion, and one should be careful not to interpret them as representative of broader sentiments and trends.

Implications: The sheer popularity of the Norwegian model, leading to its application in contexts for which it was not designed, can be seen as a major challenge for the future.

Originality: This paper offers a novel perspective on the Norwegian model by focusing on its general “appeal” rather than on its design, use, or (mis)use.

Cite this article

Björn Hammarfelt. Taking Comfort in Points: The Appeal of the Norwegian Model in Sweden. Journal of Data and Information Science, 2018, 3(4): 85-95. DOI: 10.2478/jdis-2018-0023

1 Introduction

The Norwegian model for performance-based resource allocation, first introduced in 2005, has now been adopted in a range of countries including Denmark, Finland and Flanders in Belgium (Sivertsen, 2016). In the wake of its popularity, researchers interested in evaluation and performance measurement have taken a further interest in how this model might affect researchers and their practices (Aagaard, Bloch, & Schneider, 2015; Aagaard, 2015; Hammarfelt & de Rijcke, 2015). On a general level, findings point to an increase in productivity at the national level in Norway, while there are concerns regarding how the model is used in local contexts. Yet the effects of performance-based research funding systems (PRFS) remain largely unresolved, and one of the main problems is to disentangle the effects of a PRFS from surrounding factors (for an in-depth discussion, see the special issue of the Journal of Informetrics; Waltman, 2017). In this paper, however, I will focus on a different but related question: how and why has the Norwegian model become so popular? The present paper thus attempts to explain why the Norwegian model has become so widespread and influential.
Obviously, the Norwegian model has been successful in getting the attention of stakeholders such as governments and university leaders, yet I suggest that its appeal among researchers is crucial for its success. Hence, the study focuses on the appeal of the model, and specifically on why it is attractive to department heads and (individual) researchers. The analysis builds on previously gathered data from a Swedish context in conjunction with earlier studies, and while the findings are country specific, it is suggested that the conclusions drawn are relevant for explaining the attractiveness of the model more generally. Importantly, this is not a study of diffusion that follows how the model has travelled into new contexts, nor should it be read as a review of the system as such. Strengths and weaknesses of the system have been listed elsewhere (see, for example, Schneider, 2009; Aagaard et al., 2015), yet how these should be weighed against each other remains an open question. Generally, however, it might be stated that the main strengths of the system, such as its simplicity and inclusiveness, can also be seen as drawbacks depending on one's perspective. Moreover, statements regarding the qualities of the Norwegian model are not easily generalizable, as national or local models often involve significant adjustments and adaptations, which is evident in the Swedish case. Hence, the usefulness and appropriateness of the Norwegian model should be understood in the context where it is applied. The model will be perceived rather differently when it is used at the national level to allocate resources to institutions than by the individual researcher who receives a bonus (or a pay raise) based on points in the model.
The appeal of the Norwegian model is analysed in two steps: first, a theoretical background is outlined in which the reduction of “uncertainty” is highlighted as a key feature of performance measurement systems. These theories are then drawn upon when revisiting previous studies of the Norwegian model, its use, and reactions to it in Sweden. In a concluding section, key insights are highlighted and implications for the future are discussed.

2 Uncertainty and assessment systems

Research is a highly uncertain activity. Generally, uncertainty is a key part of any knowledge-making activity, and attempts to decrease the level of uncertainty may result in a loss of creativity and novelty. Academic life is likewise filled with uncertainty; researchers have a high degree of freedom in deciding what to spend their time on, yet the relative independence of many academics results in insecurities regarding career possibilities and employment. For scholars interested in what has been labelled the “audit society” (Power, 1997) or the “evaluation society” (Dahler-Larsen, 2011), audits, assessment procedures and evaluation systems are all designed to limit risk and reduce uncertainty. Following this line of argument, it is clear that one of the main purposes of performance measurement and evaluation systems is to reduce uncertainty: for governments and taxpayers such systems serve the purpose of ensuring that resources are spent effectively, while at the same time they provide individual researchers with yardsticks and benchmarks through which they can assess their own performance in comparison to others. An effective method for decreasing the level of uncertainty is the reduction of possible choices. By limiting the number of options and outcomes, assessment procedures and measures ensure a level of assurance: “The tight, yet adjustable coupling between past, present and future behaviour with a numerical indicator is intended to eliminate uncertainty” (Nowotny, 2016). The greater the reduction of possibilities, the greater the reduction of uncertainty. This process, often called commensuration, involves “turning qualities into quantities on a shared metric” (Espeland & Sauder, 2016). Commensuration is needed for any assessment system to work effectively, as it is a prerequisite for comparison, and the Norwegian system effectively performs this task by turning publications into points. Thus, the “publication indicator” is a key feature of the system, and as we shall see, it is foremost this feature that has travelled into new contexts.
Another important characteristic of performance assessment systems is that they provide stability and predictability, meaning that assessment does not threaten to radically alter the balance of a system. Sudden fluctuations will have negative consequences for confidence in a particular system, and many national systems for performance-based allocation of research funds are designed to ensure that large variations from year to year are avoided. “Predictability” suggests that if units or individuals perform according to stipulated criteria, they will be rewarded as promised. Systems that are unstable and unpredictable will likely foster distrust.
A key quality of an assessment system is the degree to which it is deemed fair by all those involved, which in the case of a PRFS suggests that all researchers or units have the same opportunity to “score” well in the system. Finally, an important quality of a PRFS is its degree of transparency: how open and accessible are the mechanisms for resource allocation, and do the evaluated have a chance to influence the system? Evidently, the reduction of uncertainty is a central feature of assessment systems, and the features listed here—stability, predictability, transparency and fairness—are all important parts of a well-functioning system.

3 Adaptation and calibration: the Norwegian model(s) in Sweden

Unlike some neighbouring countries, Sweden has not adopted the Norwegian model as a nationwide system for allocating resources. The indicators for institutional funding are instead based on external funding and on publications and citations in Web of Science. Still, parts of the Norwegian model, especially the system for allocating points, are increasingly used and discussed in Swedish academia. What is important to note is that only one out of the three main components of the model is actually employed by Swedish universities. The three main components are: (1) a national and comprehensive database of publications, (2) a publication indicator, and (3) a performance-based funding model that reallocates resources (Sivertsen, 2016; 2018). In principle, Swedish universities make use of the second component, the “publication indicator”. The indicator and the list of accredited journals and publishers allow publications to be turned into points that can then be weighed and compared in various ways. It should be mentioned that Sweden has a national CRIS system, SwePub, but this system is not yet fully developed for bibliometric analysis.
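To make the mechanism concrete, the following is a minimal sketch of how such a publication indicator turns publications into points. The level weights follow the commonly reported original Norwegian scheme (1 and 3 points for journal articles on levels 1 and 2, 0.7 and 1 for book chapters, 5 and 8 for monographs; see Sivertsen, 2016), and the simple author-share fractionalisation corresponds to the model's original counting method; all identifiers in the code are illustrative rather than part of any actual implementation.

```python
# Minimal sketch of a Norwegian-style publication indicator.
# Level weights follow the commonly reported original scheme (Sivertsen,
# 2016); the simple author-share fractionalisation is the model's
# original counting method. All identifiers are illustrative.

POINTS = {
    ("journal_article", 1): 1.0,
    ("journal_article", 2): 3.0,
    ("book_chapter", 1): 0.7,
    ("book_chapter", 2): 1.0,
    ("monograph", 1): 5.0,
    ("monograph", 2): 8.0,
}

def publication_points(pub_type: str, level: int,
                       local_authors: int, total_authors: int) -> float:
    """Points credited to an institution for a single publication."""
    base = POINTS[(pub_type, level)]
    return base * (local_authors / total_authors)  # fractional counting

# Example: a level-2 journal article with one local author out of four
# yields 3.0 * 0.25 = 0.75 points for the institution.
print(publication_points("journal_article", 2, 1, 4))
```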
A survey conducted among bibliometricians at Swedish higher education institutions (HEIs) in 2014 found that 11 out of 27 universities used the “Norwegian model” or parts of it (Hammarfelt, Nelhans, Eklund, & Åström, 2016). The indicator is used for allocating resources either at the level of the university as a whole or in selected faculties. Often the Norwegian model is used in the social sciences and the humanities, but examples of other faculties using the indicator, such as the Faculty of Medicine at Umeå University, were also found (Table 1).
Table 1 The Norwegian model in Sweden (survey from 2014).

HEI | Area of application | Adaptations
Gothenburg University | Social sciences, humanities, computer science, pedagogy | 1. Whole counting
Linnaeus University | Whole university | 1. Points in the Norwegian system weighted against other universities; 2. Points for conference papers; 3. Points for journals in Web of Science that are not on the Norwegian list; 4. Fractionalised
Luleå University of Technology | Whole university | 1. Both the Norwegian and the “Danish list”; 2. Additional points for Web of Science indexed journals
Lund University | Economics | 1. Fractionalised
Mid Sweden University | Whole university | 1. Publications in Scopus and Web of Science are also given points
Stockholm University | Social sciences | 1. Fractionalised
Södertörn University | Whole university | 1. Fractionalised; 2. Local book series count as level 1; 3. Conference papers receive points based on publishing house
University of Halmstad | Whole university | 1. Gives points to publications outside the Norwegian list; 2. Publications on the Norwegian list receive extra points
University of Skövde | Whole university | 1. Gives points to publications outside the Norwegian list
Umeå University | Medical sciences | 1. Fractionalised; 2. Departments can suggest changes to the list of level 1 and 2 journals; 3. Doctoral theses receive 1 point
Uppsala University | Social sciences and humanities | 1. Fractionalised
Notably, none of the eleven universities using the Norwegian model does so without modifications. Many of the universities fractionalise the counts, as is done in the “original” model, yet they do not seem to use the same counting method. Others give points not only for journals on the Norwegian list, but also for journals indexed in Web of Science and Scopus, and for journals on the so-called Danish list (essentially a Danish adaptation of the Norwegian list). Several universities work with more inclusive systems where doctoral theses (Umeå University), as well as conference papers and monographs in the local book series (Södertörn University), earn points. University of Halmstad and University of Skövde give points to a wider array of publications, but their systems are clearly inspired by the Norwegian indicator. Linnaeus University, which has the most intricate system, weighs points against those of similar departments at other Swedish universities. At Linnaeus University individual researchers are rewarded, but only if their points are worth more than SEK 8,000; otherwise the points are credited to their department. Moreover, the top 20% of researchers, in terms of earned points, receive an extra bonus. Generally, parts of the Norwegian system—or rather the publication indicator and the journal list—are used together with other assessment procedures and data sources, and the channels recognised often extend to proceedings, dissertations, and local book series.
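The divergence in counting methods is not a mere technicality: the same publication list yields quite different totals under whole and fractional counting. The following sketch illustrates the difference with invented publication data (the points, local author counts, and total author counts are hypothetical).

```python
# Whole vs. fractional counting for the same (hypothetical) publication
# set. Each tuple holds: points at the given level, number of local
# authors, total number of authors. Data invented for illustration.

publications = [
    (3.0, 1, 4),  # level-2 journal article, 1 of 4 authors local
    (1.0, 2, 2),  # level-1 journal article, both authors local
    (5.0, 1, 1),  # level-1 monograph, sole local author
]

whole = sum(points for points, _, _ in publications)
fractional = sum(points * local / total
                 for points, local, total in publications)

print(f"Whole counting:      {whole:.2f} points")       # 9.00
print(f"Fractional counting: {fractional:.2f} points")  # 6.75
```

Under whole counting every participating institution claims the full points, so totals are not additive across institutions; fractionalisation keeps the overall sum constant, which is one rationale for its use in the original model.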
Moreover, the level at which the Norwegian model is applied is important to consider. At some universities, components from the Norwegian model are used to allocate funds across faculties, while at others they are employed within faculties to distribute resources to departments. At Luleå University of Technology and Linnaeus University, researchers are individually and directly compensated for having published in level two journals. Such individual use may have particularly visible consequences, as it directly affects working conditions and research priorities, and it is likely that such individual assessment has direct effects on publication practices.
Only at one institution, Umeå University, are researchers given the opportunity to influence the model by proposing journals and publishers that should be ranked. The other universities do not allow researchers to take part in the process of selecting and ranking publication outlets. This is an important difference from Norway, where researchers themselves are engaged in committees deciding on the inclusion of journals, a process which is also increasingly transparent. Thus, when only parts of the Norwegian model are imported, there is a risk that strengths of the system, such as local authority, engagement from researchers, and transparency, are reduced or lost. The possibility to influence the selection of ranked publication channels is of great importance for fields with a strong national orientation, in which local audiences and publication channels play a considerable role. This description fits many fields in the humanities and social sciences (SSH), and the next section will focus on how the Norwegian model has been received among Swedish scholars in these fields.

4 The Norwegian model among Swedish scholars

The Norwegian model is well known in Sweden and, as shown above, several faculties in the social sciences and humanities make use of it when allocating funds to departments or individuals. One of the first studies to look at how the model is used in the humanities was Hammarfelt and de Rijcke (2015), and the implementation of the model in a faculty of social sciences was studied by Edlund and Wedlin (2017). Here I will present a few insights from a recent survey of humanities scholars and social scientists in Sweden and Australia (Hammarfelt & Haddow, 2018; Haddow & Hammarfelt, to appear). Short free-text answers provided by Swedish respondents to this survey—which concerned metric use and publication patterns in general—will be used to provide insight into how humanities scholars view and use the Norwegian model. The findings presented should be regarded as examples that can be used for discussion, and one should be careful not to interpret them as representative of broader sentiments and trends. Rather, these comments ought to be seen as a first indication of how the Norwegian model (or list) is used and discussed by Swedish scholars.
Several respondents who mention the Norwegian system, or rather the Norwegian publication indicator, suggest that it influences their publication practices: “I’ve become more aware of the value attached to channels of publication and have adapted my publication practice accordingly, practically guidance given by the Norwegian list” (scholar in history and archaeology). The pressure to publish in ranked journals sometimes comes from above, from the faculty: “Thinking about classification of journal (Norwegian list) since this is very much stressed by the faculty” (art scholar), or from funding agencies: “The Norwegian list is often required when applying for research grants based on evaluation of earlier research” (educational researcher). More voluntary use, for example in job applications, is also mentioned: “I have used the Web of Science ranking as well as the Norwegian list ranking in my applications for associate professor”. In some contexts, points in the Norwegian system supposedly even affect salaries and career advancement: “The inclusion of the Norwegian list is closely attached to wage and promotion” (economist).
The Norwegian publication indicator is used in combination with other measures, such as impact factors or the h-index: “I have used Impact factor as a reason for internal funding (time to write the article); and have also used my articles published in the Norwegian list as a way of fulfilling departmental policy” (languages and literature scholar). In these remarks it becomes evident that Norwegian ‘points’ are part of a larger ecology of metrics and indicators: “Publication Points according to the Norwegian list and Google Scholars h-index (not accurate but I have too few ISI publications to use a ‘real’ h-index” (scholar in history and archaeology). Overall, then, it appears that the Norwegian list is used as one option among several that can be used to demonstrate ‘worth’. A feature of the Norwegian indicator which is especially important for scholars in the social sciences and humanities is that it values monographs and edited books: “I also prefer the Norwegian lists as it takes more in books” (economist).
When researchers express their views, it is often the negative consequences of assessment procedures that are in focus. However, the change in publication practices that many associate with the introduction of the Norwegian system is welcomed by some researchers (Hammarfelt & Haddow, 2018). Thus, it is not uncommon that the use of the Norwegian list is associated with a more general awareness regarding publication strategies: “More focus on high impact peer reviewed international journals. Less focus on chapters in books, although still publishing chapters in books at publishers with high impact/high scores in the Norwegian system” (humanities scholar, other). A researcher in ‘computer and information science’ expresses similar sentiments: “Shift towards a preference to (highly) ranked journals (Impact factor, listed on the Norwegian list) and less focus on more marginal publication outlets.” In many ways the Norwegian list is seen to encourage an already ongoing transition in publication practices: “Institution policy is to reward publication in journals that are listed in Web of Science or at least on the Norwegian list, and although I mostly published in such journals before as well I probably focus on them to an even greater extent now” (economist).
For others, and perhaps especially for younger researchers, the list reinforces a development towards an international audience, which may be at odds with their own views: “I am trying to combat the shift towards international publication by also writing in Swedish, but it is hard when you are at an early stage in your career: you need those Norwegian listed journal publications” (educational researcher). In some cases the ‘battle’ is already lost, as for this respondent, who presents the model as a ‘fait accompli’: “I have become aware of that I cannot escape the Norwegian system” (scholar in history and archaeology).
Overall, these glimpses from researchers in the SSH show a rather ambivalent response to the introduction of the Norwegian publication indicator. For some it merely emphasises current trends within their field, while for others it comes to represent a more radical, and less welcome, shift in publication practices. Similarly, while several respondents prefer points in the Norwegian model—not least because it covers books—others are less approving.

5 Discussion

As individuals and organizations we take comfort in points and numbers, as they are concrete manifestations of ‘performance’, and the attractiveness of the Norwegian model can largely be attributed to its capacity to effectively reduce uncertainty. Crucial for this ability is the ‘publication indicator’, through which publications in various forms can be turned into points, which can then be translated into recognition and resources. The indicator’s capacity to reduce uncertainty concerning the ‘value’ of publications in fields where few established bibliometric measures are applicable is especially important for understanding the appeal of the Norwegian model. In comparison with rival systems, such as indicators based on citation data, it also does well in terms of predictability and transparency (Ahlgren, Colliander & Persson, 2012). The possibility for researchers themselves to engage in the process of suggesting publication channels for level 1 or 2 is especially worth emphasizing, and this opportunity is likely to strengthen trust in the system. However, when only parts of the system—in most cases the ‘publication indicator’—are imported and adapted into another national context, crucial components of the model are lost. For example, if databases with complete and accurate coverage are missing, the system will work less well. Most problematic, however, is how the ‘publication indicator’ may start to live a life of its own when adapted and utilized in a context for which it was not designed. Such a development is evident in Sweden, where local models make use of distinct parts of the Norwegian model, yet with adaptations and additions that reflect their own needs. In many cases it would actually be more relevant to view these as ‘models inspired by the Norwegian one’ rather than as direct applications. Moreover, Swedish researchers are now evidently evaluated, even at the individual level, based on points in this model, and it appears that they themselves make use of it when selecting publication channels and when showcasing and assessing their work. Yet important parts of the model, such as fractionalisation of authorship and normalization between fields, are often not used at the local level. Additionally, Swedish researchers, unlike their Norwegian colleagues, are not engaged in selecting the journals and book publishers on the Norwegian list, and they therefore have little influence on the grading of publication outlets.
Consequently, the popularity of the Norwegian model can also be positioned as a problem; when something is popular and widely diffused, it easily becomes simplified and distorted. In a Swedish context it is evident that the Norwegian model is used and adapted in ways and contexts for which it was not designed. In principle, the “Norwegian publication indicator” now lives a life of its own, separated from the national system for which it was designed. Similar to other indicators, like the Journal Impact Factor, the “Norwegian model” now exists as one measure among others, separated from its original context and operating well beyond the control of its inventors. Moreover, it is likely that the “publication indicator” will have considerable influence in various contexts even if it is formally abandoned as part of a performance-based allocation system. An example of this phenomenon, what we may call the “after-life of indicators”, is the ERA list in Australia, which was formally used only for a short period, yet this ranked list of journals still plays an important role among researchers (Hammarfelt & Haddow, 2018). Previous studies have pointed to the problem of the “Norwegian publication indicator” being used in unintended ways (Aagaard, 2015; Hammarfelt et al., 2016), and actions such as “inter-institutional learning areas” have been implemented in order to limit inappropriate use (MLE on Performance-based Funding of Public Research Organisations, 2018). Such efforts are commendable, yet there is a risk that the Norwegian publication indicator has reached a level of popularity, and visibility, among researchers such that actions at the institutional level will not be sufficient. Paradoxically, the “success” of the Norwegian model may very well be its greatest problem.

The author has declared that no competing interests exist.

[1] Aagaard K. (2015). How incentives trickle down: Local use of a national bibliometric indicator system. Science and Public Policy, 42(5), 725-737.

[2] Aagaard K., Bloch C., & Schneider J.W. (2015). Impacts of performance-based research funding systems: The case of the Norwegian Publication Indicator. Research Evaluation, 24(2), 106-117.

[3] Ahlgren P., Colliander C., & Persson O. (2012). Field normalized citation rates, field normalized journal impact and Norwegian weights for allocation of university research funds. Scientometrics, 92(3), 767-780.

[4] Dahler-Larsen P. (2011). The evaluation society. Stanford University Press.

[5] Edlund P., & Wedlin L. (2017). Den kom flygande genom fönstret. Införandet av ett mätsystem för resursfördelning till forskning [It came flying through the window: The introduction of a measurement system for allocating resources to research]. In Wedlin, L. & Pallas, J. (Eds.), Det ostyrda universitetet: Perspektiv på styrning, autonomi och reform av svenska lärosäten (pp. 216-243). Göteborg: Makadam Förlag.

[6] Espeland W.N., & Sauder M. (2016). Engines of anxiety: Academic rankings, reputation, and accountability. Russell Sage Foundation.

[7] Haddow G., & Hammarfelt B. (to appear). Quality, impact and quantification: Indicators and metrics use by social scientists. Journal of the Association for Information Science and Technology.

[8] Hammarfelt B., & de Rijcke S. (2015). Accountability in context: Effects of research evaluation systems on publication practices, disciplinary norms, and individual working routines in the faculty of Arts at Uppsala University. Research Evaluation, 24(1), 63-77.

[9] Hammarfelt B., & Haddow G. (2018). Conflicting measures and values: How humanities scholars in Australia and Sweden use and react to bibliometric indicators. Journal of the Association for Information Science and Technology, 69(7), 924-935.

[10] Hammarfelt B., Nelhans G., Eklund P., & Åström F. (2016). The heterogeneous landscape of bibliometric indicators: Evaluating models for allocating resources at Swedish universities. Research Evaluation, 25(3), 292-305.

[11] MLE on Performance-based Funding of Public Research Organisations. European Commission. (2018). Retrieved August 24, 2018, from /en/policy-support-facility/mle-performance-based-funding-systems.

[12] Power M. (1997). The audit society: Rituals of verification. Oxford: Oxford University Press.

[13] Schneider J.W. (2009). An outline of the bibliometric indicator used for performance-based funding of research institutions in Norway. European Political Science, 8(3), 364-378.

[14] Sivertsen G. (2016). Publication-based funding: The Norwegian model. In M. Ochsner, S.E. Hug, & H.D. Daniel (Eds.), Research assessment in the humanities (pp. 79-90). Springer International Publishing.

[15] Sivertsen G. (2018). The Norwegian model in Norway. Journal of Data and Information Science, 3(4), 1-17.

[16] Waltman L. (2017). Special section on performance-based research funding systems. Journal of Informetrics, 11(3), 904.
