Research Paper

The F-measure for Research Priority

  • Ronald Rousseau,
  • University of Antwerp, Faculty of Social Sciences, B-2020 Antwerp, Belgium & KU Leuven, Facultair Onderzoekscentrum ECOOM, Naamsestraat 61, Leuven B-3000, Belgium
Corresponding author: Ronald Rousseau.

Online published: 2010-03-03

Open Access

Abstract

Purpose: In this contribution we continue our investigations related to the activity index (AI) and its formal analogs. We try to replace the AI by an indicator which is better suited for policy applications.

Design/methodology/approach: We point out that fluctuations in the value of the AI for a given country and domain are never the result of that country’s policy with respect to that domain alone because there are exogenous factors at play. For this reason we introduce the F-measure. This F-measure is nothing but the harmonic mean of the country’s share in the world’s publication output in the given domain and the given domain’s share in the country’s publication output.

Findings: The F-measure does not suffer from the problems the AI does.

Research limitations: The indicator is not yet fully tested in real cases.

R&D policy management: In policy considerations the AI is best replaced by the F-measure, as this measure can show the results of science policy measures (which the AI cannot, as it depends on exogenous factors).

Originality/value: We provide an original solution for a problem of which policy makers are not yet fully aware.

Cite this article

Ronald Rousseau. The F-measure for Research Priority. Journal of Data and Information Science, 2018, 3(1): 1-18. DOI: 10.2478/jdis-2018-0001

1 Introduction

In this contribution we continue our investigations (Rousseau & Yang, 2012) related to the activity index (AI) and its formal analogs. The activity index (AI) of country C with respect to a given domain D (and with respect to the world, W) over a given period P is defined as:
AI(C, D, W, P) = the country’s share in the world’s publication output during the period P in the given domain D divided by the country’s share in the world’s publication output during the same period P in all science domains.
(1)
We note, moreover, that publications are counted as retrieved in a given database. This index was introduced in informetrics by Frame (1977). We refer to this formulation as the basic activity index because, instead of the world, one might, for instance, consider the USA or China, and instead of a country one may consider a state or province. Clearly many other variants are imaginable. The basic activity index is said to characterize the relative research effort a country devotes to a given domain D. Stated otherwise, the AI gauges the share of a country's or region's publication activity in a given domain in its total publication output against the corresponding world standard. The lower bound of the AI is zero, while it has no upper bound. It is easy to show, see Equation (3) and, e.g., Schubert and Braun (1986), that the activity index can also be expressed as:
AI(C, D, W, P) = the given domain’s share in the country’s publication output during the period P divided by the given domain’s share in the world’s publication output during the same period P.
(2)
When the context is clear or when it does not matter we simply write AI. The mathematical framework of the AI, though with other meanings and sometimes slightly transformed, has been used in many contexts and under other names. In all cases one studies a nominal cross-classification table. Some of these variants, such as the attractivity index (replacing the term publication output by received citations in Equation (1)), the relative specialization index and the (relative) priority index, are discussed further on.
The AI and the attractivity index are classified by Vinkler (2010) among the contribution indicators, used to characterize the contribution or weight of a subsystem, such as a country, to the total system, e.g., the world.
Next we have a look at the constituent parts of the AI and introduce some notations. For simplicity we stay within the context of Equations (1) and (2), but recall that everything we show in the context of the basic activity index can also be said in other contexts. The criticisms we raise refer to the meaning of the mathematical formula, a ratio of ratios, but to make things precise we work mostly in the context of the basic activity index.
We consider the following parameters: OCD, OD, OC and OW, where, as a memory aid, the symbol O refers to the word output. Further:
OCD denotes the number of publications by country C in domain D during a given publication window;
OD denotes the total number of publications in the world in domain D during the same publication window;
OC denotes the number of publications - in all domains - by country C during the same publication window;
OW denotes the total number of publications in the world and in all domains during this publication window.
Then clearly, we have the following relations:
♦ 0 ≤ OCD ≤ OD ≤ OW; 0 ≤ OCD ≤ OC ≤ OW; and further:
OCD/OD is: the country’s share in the world’s publication output in the given domain D
OCD/OC is: the given domain’s share in the country’s publication output
OC/OW is: the country’s share in the world’s publication output in all science domains
OD/OW is: the given domain’s share in the world’s publication output
Finally we note that

AI(C, D, W, P) = (OCD/OD)/(OC/OW) = (OCD/OC)/(OD/OW) = (OCD · OW)/(OC · OD).
(3)
It is well-known that, assuming disjoint domains, a country cannot have an AI(D) value larger than one for all domains D (Rousseau, 2012).
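For readers who want to experiment, the definitions above can be written down directly. This is a minimal Python sketch (the names o_cd, o_d, o_c and o_w are our own shorthand for OCD, OD, OC and OW; the numbers come from Table 1 further on):

```python
def activity_index(o_cd: int, o_d: int, o_c: int, o_w: int) -> float:
    """Equation (1): (O_CD/O_D) divided by (O_C/O_W)."""
    return (o_cd / o_d) / (o_c / o_w)

def activity_index_eq2(o_cd: int, o_d: int, o_c: int, o_w: int) -> float:
    """Equation (2): (O_CD/O_C) divided by (O_D/O_W)."""
    return (o_cd / o_c) / (o_d / o_w)

# Both formulations reduce algebraically to (O_CD * O_W) / (O_C * O_D).
print(round(activity_index(200, 5_000, 12_000, 1_600_000), 2))      # 5.33
print(round(activity_index_eq2(200, 5_000, 12_000, 1_600_000), 2))  # 5.33
```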

2 A Short Literature Study

In this section we recall some articles that used or studied the activity index, the attractivity index or its variants, without trying to be exhaustive.
Thijs and Glänzel (2008) used the AI to describe the national profile of eight European countries’ research fields. Zhou, Thijs, and Glänzel (2009) studied the regions of China, including in their investigations the scientific production (where the AI plays a role), relative received citations (but they did not include the attractivity index), and regional R&D expenditure. Ramakrishnan and Thavamani (2015) used the basic activity index in a study of the contribution of India to the field of leptospirosis. Further, Sangam et al. (2017) show that the AI (they use the term relative priority index) depends on the used database. Concretely, they study hepatitis research and compare results obtained from data retrieved from PubMed, Web of Science (WoS), and a sub-database of the WoS consisting of fields in the life sciences.
Instead of the term AI, Nagpaul and Sharma (1995) use the term (relative) priority index, but with the same meaning as the AI. This terminology has also been used by Bhattacharya (1997) and in the already mentioned publication by Sangam et al. (2017). The revealed comparative advantage (RCA) or Balassa Index (Balassa, 1965) is an index used in international economics for calculating the relative advantage or disadvantage of a certain country in a certain class of goods or services as evidenced by trade flows. The RCA is defined as the proportion of the country's exports that are of the class under consideration divided by the proportion of world exports that are of that class. Mathematically this index has the same form as the AI. A comparative advantage is "revealed" if RCA > 1. If RCA is less than unity, the country is said to have a comparative disadvantage in the commodity or industry under consideration.
Next we turn our attention to studies that include some theoretical aspects or variations of the AI. First we mention that some authors prefer the AI multiplied by 100 and refer to this as the modified activity index (MAI), see e.g., Guan and Gao (2008). These authors studied the MAI for bioinformatics over the period 2000-2005 and observed that the MAI value (hence also the AI value) of China in this field doubled over the observed period. Chen and Xiao (2016) proposed the Keyword Activity Index (KAI) of a keyword in a given domain as:
KAI = (the share of the given domain in publications containing the given keyword)/(the share of the given domain in all publications).
Egghe and Rousseau (2002) place the activity index and the attractivity index within a larger abstract framework of relative indicators. Hu and Rousseau (2009) compare the research performance in biomedical fields of 10 selected Western and Asian countries. The results confirm that there are many differences in intra- and interdisciplinary scientific activities between the West and the East. In particular they found that in most biomedical fields Asian countries perform below world average. Stimulated by these experimental results they showed that the ratio of the attractivity index over the activity index, in a given domain and for a given country, can be expressed in terms of normalized mean citation rates (for the precise results we refer the reader to the original publication).
The relative specialization index (RSI), as used e.g., in Glänzel (2000) and Aksnes, van Leeuwen, and Sivertsen (2014), is defined as:

RSI = (AI - 1)/(AI + 1).
The RSI is a strict order preserving normalization of the AI. If AI = 0 then RSI = -1 and if AI increases to infinity, then RSI tends to 1. This transformation makes sure that values stay bounded between -1 and +1. This indicator but with Chinese universities instead of countries was used in Li, Miao, and Ding (2015). Besides comparisons with the world, they also performed comparisons with respect to China and with respect to leading universities in the world as reference group. Aksnes, van Leeuwen, and Sivertsen (2014) studied the impact on the RSI of the increased representation of China in the WoS. They choose the Netherlands as a case study to study this effect. We note that here two dynamic aspects are at play: the huge growth of China in terms of publications (described as “booming”) and the change of the WoS over time (possibly influenced by China). They concluded however that, although the influence of China is visible in the RSI for the Netherlands, and this especially in the last decade and in domains where these countries have opposite specializations, the basic research profile of the Netherlands as measured by the RSI remains the same. We note though that this is not a strictly mathematical result but rather a heuristic impression related to the stability of this index. Zhang, Rousseau, and Glänzel (2011) applied the RSI formula using document types instead of scientific domains. They find that the USA, Canada, and Australia are balanced cases, while the UK has the highest relative contribution in book reviews.
Stare and Kejžar (2014) point out that although +1 is indeed an upper bound for the RSI, this upper bound depends on the domain under study and as such can in practice be much lower than +1 (for a given domain). They show that for the period 2005-2009 and for the Natural Sciences, this upper bound is as low as 0.32. They conclude that the differences in maximum values of AI and RSI between scientific fields are so big that any conclusions based on analyses of these indices seem questionable. For this reason they propose another index, denoted as SAI (standardized AI), which rescales the AI using MAX(AI), the theoretical maximum value of AI given the real number of publications in the domain. The SAI takes values between 0 and 1, and when AI = 1, then SAI = 0.5.
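As an aside, the RSI's boundary behavior described above is easy to verify numerically. The sketch below uses the standard RSI transform of the AI, (AI - 1)/(AI + 1):

```python
def rsi(ai_value: float) -> float:
    """Strictly order-preserving map of AI from [0, infinity) onto [-1, 1)."""
    return (ai_value - 1) / (ai_value + 1)

print(rsi(0.0))   # -1.0: no activity at all
print(rsi(1.0))   # 0.0: activity exactly at the world standard
print(rsi(99.0))  # 0.98: large AI values are squeezed towards +1
```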

3 Reflections on the Meaning of the Activity Index

What is the meaning of the activity index? In (Rousseau, 2012) we stated that if the values of OCD, OD, and OC stay the same (and these are the parameters we are interested in), then AI(Y+1) may differ from AI(Y), the values of the activity index in the years Y+1 and Y, if there is an increase or decrease in OW, unrelated to country C or domain D. Concretely, the activity index of the USA in chemistry may increase just because China, or any other country, has an increase in articles on biology (leading to an increase in OW). Hence a change (increase or decrease) in the activity index can happen for reasons which have nothing to do with the country or the domain one is interested in. This observation is important for policy reasons, as fluctuations in the value of the AI for a given country and domain are never the result of that country's policy with respect to the domain D alone. For this reason we consider OCD, OC, and OD as endogenous factors (the factors of interest), while OW is considered an uncontrollable external, i.e., exogenous, factor.
Because of these remarks strange, i.e., counterintuitive, results may occur when calculating an AI. We provide two examples.
Example A. Suppose that a country is the leading country in the world, according to the activity index, in a particular domain. Then it is possible that another country becomes the leading one by publishing less in other domains. Consider Table 1. At the start the activity indices for countries 1 and 2 are respectively 5.33 and 4.48. When country 2 publishes 17,000 articles fewer in other domains the activity indices become respectively 5.28 and 5.34. Although this is a fictitious case, it clearly demonstrates that this indicator does not behave as intuitively expected and, worse, it does not measure what it (probably) is supposed to measure. The problem lies in the parameter OW.
Table 1 Calculations related to Example A; the indicator F is introduced further on.

        Original situation         New situation: country 2 publishes
                                   17,000 fewer articles in other domains
        Country 1   Country 2      Country 1   Country 2
OCD           200       1,400            200       1,400
OD          5,000       5,000          5,000       5,000
OC         12,000     100,000         12,000      83,000
OW      1,600,000   1,600,000      1,583,000   1,583,000
AI           5.33        4.48           5.28        5.34
F          0.0235      0.0267         0.0235      0.0318
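The numbers in Table 1 can be reproduced directly. A minimal sketch, writing AI as (OCD · OW)/(OC · OD) and F as 2·OCD/(OC + OD):

```python
def ai(o_cd, o_d, o_c, o_w):
    """AI written as (O_CD * O_W) / (O_C * O_D)."""
    return (o_cd * o_w) / (o_c * o_d)

def f_measure(o_cd, o_d, o_c):
    """Harmonic mean of o_cd/o_d and o_cd/o_c, i.e. 2*o_cd/(o_c + o_d)."""
    return 2 * o_cd / (o_c + o_d)

# Original situation of Table 1
print(round(ai(200, 5_000, 12_000, 1_600_000), 2))     # 5.33 (country 1 leads)
print(round(ai(1_400, 5_000, 100_000, 1_600_000), 2))  # 4.48 (country 2)

# Country 2 publishes 17,000 fewer articles in other domains: O_C and O_W shrink.
print(round(ai(200, 5_000, 12_000, 1_583_000), 2))     # 5.28 (country 1)
print(round(ai(1_400, 5_000, 83_000, 1_583_000), 2))   # 5.34 (country 2 now leads)

# The F-measure does not involve O_W, so country 1's F-value cannot change:
print(round(f_measure(200, 5_000, 12_000), 4))         # 0.0235 in both situations
```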
Example B. Next we provide another counterintuitive example, which comes from (Rousseau & Yang, 2012). This example is even more counterintuitive as there are no pure exogenous influences. It shows that if a country's activity in a domain (parameter OCD) increases and nothing else changes (the changes in the domain, country and world are only the result of the change introduced by the country and the domain under study), then it is possible that the AI decreases; similarly, if the activity decreases it is possible that the AI increases. Of course this again is a purely theoretical example, but it clearly shows the intrinsic problem with the AI-formula. Data and results are shown in Table 2.
Table 2 Data and calculations related to Example B.

        Basic   Increase in OCD   Decrease in OCD
OCD       190               200               180
OD        200               210               190
OC        200               210               190
OW        400               410               390
AI        1.9             1.859             1.945
F        0.95            0.9524            0.9474
These two examples clearly show that there are serious problems in the interpretation of the AI. Finally, we mention the following situation. Consider OCD, OC, OD, and OW in a particular year. The next year OCD, OC, and OD are exactly the same, but OW has increased. Comparing the AI for these two years we see that the numerator has stayed the same but the denominator has decreased. Consequently the AI-value has increased. Reflecting on this we see that, with respect to the world, the contributions of country C and of the domain D have decreased. Yet, according to the AI, the activity of C in D has increased! This result, too, is difficult to grasp.
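Both Example B and the closing scenario above can be checked with a few lines of code (a sketch; F is written as 2·OCD/(OC + OD)):

```python
def ai(o_cd, o_d, o_c, o_w):
    """AI written as (O_CD * O_W) / (O_C * O_D)."""
    return (o_cd * o_w) / (o_c * o_d)

def f_measure(o_cd, o_d, o_c):
    """Harmonic mean of o_cd/o_d and o_cd/o_c, i.e. 2*o_cd/(o_c + o_d)."""
    return 2 * o_cd / (o_c + o_d)

# Example B (Table 2): O_CD rises from 190 to 200; O_D, O_C and O_W rise with it.
print(round(ai(190, 200, 200, 400), 3))    # 1.9
print(round(ai(200, 210, 210, 410), 3))    # 1.859: AI drops although activity rose
print(round(f_measure(190, 200, 200), 4))  # 0.95
print(round(f_measure(200, 210, 210), 4))  # 0.9524: F rises, as expected

# Closing scenario: everything fixed, only O_W grows from 400 to 500.
print(ai(190, 200, 200, 500))              # 2.375: AI rises for purely exogenous reasons
print(f_measure(190, 200, 200))            # 0.95: F is unaffected by O_W
```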

4 A New Proposal: F-measure for Research Priority

Although the AI and its variants do have a meaning as relative (or even double relative) measures (Rousseau, 2012) we think that in many cases researchers are actually interested in another indicator.
The ratios OCD/OD, namely the country's share in the world's publication output in the given domain D, and OCD/OC, namely the given domain's share in the country's publication output, are the indicators in which one generally is interested. Working with OCD/OD and OCD/OC we form their harmonic mean, which conceptually is the same as the F-score with respect to Recall and Precision in information retrieval (Manning, Raghavan, & Schütze, 2008). This leads to the following indicator, which we propose instead of the activity index and its variants:

F(C, D, W, P) = 2 · (OCD/OD) · (OCD/OC) / ((OCD/OD) + (OCD/OC)) = 2 · OCD/(OC + OD).
(6)
We further write F(C, D, W, P) simply as F when C, D, W and P are assumed to be known. We already note that

0 ≤ F ≤ 1, (7)

where the minimum and the maximum value only occur in the uninteresting cases that OCD = 0, i.e., the country has no contribution in that particular domain, or that the country is the only one active in that particular domain and is, moreover, only active in that domain: OCD = OD = OC. So, from now on we assume the strict inequalities in (7). Being a mean, we have for each concrete case that

min(OCD/OD, OCD/OC) ≤ F ≤ max(OCD/OD, OCD/OC).
The value of the F-measure in a domain D for the whole world is 2 OD/(OD + OW) (put C = W, so that OCD = OD and OC = OW). The larger OD, the larger this world value. Of course one could divide the value for a country in a domain by the corresponding value for the world, but this would re-introduce the parameter OW. For this reason we prefer to consider the world value as a separate piece of information about the priority given by the whole world to this particular domain. We note that a special application of the F-score, the so-called feature F-measure, was used by Lamirel (2012) as an element in an unsupervised clustering method.
In (Rousseau & Yang, 2012) we investigated under which conditions an increase in OCD (or a decrease) would lead to an increase (decrease) in AI. Recall that we already know that this expected behavior does not always happen. Yet, we think that such an increase or decrease should not depend on other variables but should always happen. The next result shows that this is the case for the F-measure for research priority. Here and further on we exclude the trivial case that OCD = OD = OC.
Theorem 1.
1) If OCD increases then the F-measure increases (addition property).
2) If OCD decreases then the F-measure decreases (subtraction property).
Proof.
1) Let λ > 0. An increase of OCD by λ also increases OD and OC (and OW) by λ, so we have to show that

2(OCD + λ)/(OC + OD + 2λ) > 2 OCD/(OC + OD).

Cross-multiplying, this is equivalent to λ(OC + OD) > 2λ OCD, i.e., to OC + OD > 2 OCD. This last inequality obviously holds, as we excluded the case OCD = OD = OC.
2) Similarly, for OCD > λ > 0, we show that:

2(OCD - λ)/(OC + OD - 2λ) < 2 OCD/(OC + OD),

which again reduces to OC + OD > 2 OCD, proving the case of a decrease in OCD.
We further note the logical property that if OD and/or OC increases and OCD stays the same then F decreases.
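Theorem 1 can also be spot-checked by simulation. The sketch below uses the convention, implicit in the proof, that λ extra publications by the country in the domain also raise OD and OC by λ:

```python
import random

def f_measure(o_cd, o_d, o_c):
    """F = 2 * O_CD / (O_C + O_D)."""
    return 2 * o_cd / (o_c + o_d)

random.seed(0)
for _ in range(1000):
    o_cd = random.randint(1, 100)
    o_d = o_cd + random.randint(1, 1000)  # guarantees O_CD < O_D
    o_c = o_cd + random.randint(1, 1000)  # guarantees O_CD < O_C
    lam = random.randint(1, 50)
    before = f_measure(o_cd, o_d, o_c)
    # lam extra publications by C in D also raise O_D and O_C by lam
    after = f_measure(o_cd + lam, o_d + lam, o_c + lam)
    assert after > before                 # the addition property
print("addition property holds on all sampled cases")
```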
Reconsidering Examples A and B we calculate the F-measure in these cases and notice that for Example A country 2 already has a higher F-measure than country 1, while for Example B all counterintuitive results disappear (illustrating Theorem 1). Next we briefly discuss the notion of independence (Bouyssou & Marchant, 2011) in relation to the F-measure.
If S1 and S2 represent sets of publications then strict independence for an indicator J means that if J(S1) < J(S2) and one adds to S1 and to S2 the same publications, leading to sets S1' and S2', then still J(S1') < J(S2').
The indicator J is said to be relative independent if the independence property holds for sets S1 and S2 with the same number of elements. If one wants to stress the difference between independent and relative independent one may use the term absolute independent for the former.

Theorem 2 (Relative independence)

If countries C1 and C2 have the same number of publications, i.e., OC,1 = OC,2 = OC, if the relation F(C1,D,W,P) < F(C2,D,W,P) holds, and if we add the same number of publications, q > 0, in the domain D, to the output of these two countries, then still F(C1,D,W,P) < F(C2,D,W,P), where the notations C1 and C2 now refer to the same countries but with an increased number of publications in the domain D.
Proof. We know that

F(C1, D, W, P) = 2 OCD,1/(OC + OD) < F(C2, D, W, P) = 2 OCD,2/(OC + OD).

Hence OCD,1 < OCD,2. As both countries add q publications in the domain D, OC becomes OC + q for each country, while OD becomes OD + 2q. Now we have to show that:

2(OCD,1 + q)/(OC + OD + 3q) < 2(OCD,2 + q)/(OC + OD + 3q).

This is obvious as OCD,1 < OCD,2.
Note. The F-measure is not an absolute independent indicator. Indeed, consider the following example. Let OCD,1 = 2; OCD,2 = 3; OD = 88; OC,1 = 49 and OC,2 = 99. Then

F1 = 4/137 ≈ 0.029 < F2 = 6/187 ≈ 0.032.

If we now add one unit to OCD,1 and OCD,2 then we obtain the following values for the parameters: OCD,1 = 3; OCD,2 = 4; OD = 90; OC,1 = 50 and OC,2 = 100. The relation between the new F-values, denoted as F1' and F2', now becomes:

F1' = 6/140 ≈ 0.043 > F2' = 8/190 ≈ 0.042.
This shows that the F-measure for research priority is not an absolute independent measure.
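The two computations in the note above can be reproduced directly (a small sketch; F is written as 2·OCD/(OC + OD)):

```python
def f_measure(o_cd, o_d, o_c):
    """F = 2 * O_CD / (O_C + O_D)."""
    return 2 * o_cd / (o_c + o_d)

f1 = f_measure(2, 88, 49)   # 4/137
f2 = f_measure(3, 88, 99)   # 6/187
assert f1 < f2

# One extra publication in D for each country: O_D grows by 2.
f1_new = f_measure(3, 90, 50)   # 6/140
f2_new = f_measure(4, 90, 100)  # 8/190
assert f1_new > f2_new          # the order is reversed
print("absolute independence fails, as claimed")
```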
If the domain stays fixed, a ranking of countries (C1 and C2) according to the AI and to the F-measure may yield opposite results. Consider, indeed, the following example: let OCD,1 = 4; OD = 20; OC,1 = 14; OCD,2 = 3 and OC,2 = 10, where subscripts refer to the corresponding countries. Then AI1 = 4 OW/280 and AI2 = 3 OW/200, hence AI1 < AI2. Yet F1 = 8/34 ≈ 0.235 > F2 = 6/30 = 0.2.
Similarly, if the country is fixed then a ranking of domains according to AI and to the F-measure may yield opposite results. This remark is nothing but a confirmation that AI and F measure different properties. Only the second one is determined by endogenous factors and hence can be the direct result of an appropriate policy.

5 Further Mathematical Results

Next we answer the question: if OCD increases by a given percentage p, what is the influence on the other parameters?
We first consider the parameter OCD/OD: the country’s share in the world’s publication output in the given domain D.
Proposition 1. Let 0 < p < 1. Then an increase of 100p% in OCD leads to an increase of between 0 and 100p% in OCD/OD. In many realistic cases, i.e., OCD << OD, this increase is close to 100p%.
Proof. If OCD becomes OCD + OCD·p, then OD becomes OD + OCD·p and hence OCD/OD becomes (OCD + OCD·p)/(OD + OCD·p). Then:

(OCD + OCD·p)/(OD + OCD·p) = (OCD/OD) · (1 + p·R), with R = (OD - OCD)/(OD + p·OCD).

The factor R is strictly positive and smaller than 1, proving this result. If OCD/OD is small then R is close to 1 and the relative increase in OCD/OD is close to p (but always strictly smaller).
This proposition also holds for OCD/OC.
As the F-measure is an average, the proposition also holds here. For completeness' sake we calculate the value of the corresponding R parameter. If OCD becomes OCD + OCD·p, then F becomes 2 OCD(1 + p)/(OC + OD + 2p·OCD) = F · (1 + p·R), with

R = (OC + OD - 2 OCD)/(OC + OD + 2p·OCD) = (1 - F)/(1 + p·F),

which is again close to 1 if the F-measure is small and close to zero if F is close to 1. For small values of the F-measure an increase of OCD by 100p% leads to an increase of the F-measure by almost 100p%.
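The identity R = (1 - F)/(1 + p·F) can be confirmed numerically (a sketch with arbitrarily chosen, hypothetical counts):

```python
def f_measure(o_cd, o_d, o_c):
    """F = 2 * O_CD / (O_C + O_D)."""
    return 2 * o_cd / (o_c + o_d)

o_cd, o_d, o_c, p = 50, 2_000, 3_000, 0.10
f_old = f_measure(o_cd, o_d, o_c)  # F = 0.02 here
# O_CD grows by 100p%; O_D and O_C grow by the same absolute amount.
f_new = f_measure(o_cd * (1 + p), o_d + o_cd * p, o_c + o_cd * p)

r = (f_new / f_old - 1) / p        # the relative increase of F equals p * R
assert abs(r - (1 - f_old) / (1 + p * f_old)) < 1e-12
print(round(r, 4))                 # close to 1 because F is small here
```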
The F-measure, considered as a mathematical function, depends on two variables: x = OCD/OD (the country's share in the world's publication output in domain D) and y = OCD/OC (the domain's share in the country's publication output). As a function of x and y we have:

F(x, y) = 2xy/(x + y),

defined for x ≥ 0, y ≥ 0 and (x, y) ≠ (0, 0). We already note that F(x, x) = x.
Considering the parallel lines x + y = c, with c a strictly positive constant, we see that for points (x, y) on this line F(x, y) = 2x(c - x)/c. Hence when x + y = c, F(x, y) has the form of a parabola, taking the value zero for x = 0 and for x = c, i.e., y = 0. The top of such a parabola is obtained for x = c/2 = y, where it takes the value F(c/2, c/2) = c/2. From this analysis it follows that when either x or y is close to zero, the F-measure for research priority is also small. Figure 1 shows the function F(x, y) for x and y between 0 and 1. It also shows the F-values for points on x + y = 0.5 and on x + y = 1.
Figure 1. Graph of the function F(x,y); origin is nearest to the viewer.

6 A Real-world Example

As a real-world application we consider a table of publications in the Humanities, containing information on publications by Flemish researchers (Engels, Ossenblok, & Spruyt, 2012). These data, published as part of Table 1 in (Engels, Ossenblok, & Spruyt, 2012), came about as follows: In 2008 the Flemish government provided the legal framework for the construction of the Flemish Academic Bibliographic Database for the Social Sciences and Humanities ("Vlaams Academisch Bibliografisch Bestand voor de Sociale en Humane Wetenschappen" or "VABB-SHW" in short). This database provided the Flemish government with a useful tool to fine-tune the distribution of research funding over universities in Flanders. As a consequence it became possible for researchers to analyze changing publication patterns in the larger Flemish peer-reviewed literature (not just restricted to the WoS). Five publication types are included in the VABB-SHW:
(a) articles in journals;
(b) books as author;
(c) books as editor;
(d) articles or chapters in books;
(e) proceedings papers that are not part of special issues of journals or edited books.
In Table 3 a distinction is made between articles in journals included in the WoS and other ones, and similarly for proceedings papers, leading to seven types of publications. In the VABB-SHW all records are assigned to disciplines on the basis of the author(s) affiliation(s) with a SSH unit in which the author carries out research. For the Humanities one makes a distinction between the following disciplines: Archaeology; Art History (including Architecture and Arts); Communication Studies; History; Law; Linguistics; Literature; Philosophy (including History of Ideas); Theology (including Religious Studies). Finally, we mention that data in our Table 3 do not include the remainder category “Humanities-general.”
Table 3 Flemish Humanities publications (2000-2009) in the VABB.
Columns: journal articles (VABB-non-WoS / VABB-WoS), book chapters (VABB), edited books (VABB), monographs (VABB), proceedings papers (VABB-WoS / VABB-non-WoS), and row totals.

Disciplines             JA non-WoS  JA WoS  Book ch.  Ed. books  Monogr.  Proc. WoS  Proc. non-WoS   Total
Archaeology                    176     133        40          6       11         12             18     396
Art History                    295     150       135         38       12         22             28     680
Communication Studies          425     170        94         16        3         19              1     728
History                        773     193       233         52       28          0             19   1,298
Law                          4,018     144       320         89       55         11             20   4,657
Linguistics                    908     457       511        135       59         54             83   2,207
Literature                     631     143       376         87       36          0             31   1,304
Philosophy                     786     603       279         42       30         36              9   1,785
Theology                       610      85       410         85       53          1              4   1,248
Column totals                8,622   2,078     2,398        550      287        155            213  14,303
Next, in Table 4, we show AI-values for the data shown in Table 3. In this case AI-values refer to the relative preference of disciplines for certain publication types. Table 5 shows the corresponding F-values.
Table 4 Values according to the AI-formula for the data shown in Table 3 (columns as in Table 3).

Disciplines             JA non-WoS  JA WoS  Book ch.  Ed. books  Monogr.  Proc. WoS  Proc. non-WoS
Archaeology                  0.737   2.312     0.602      0.394    1.384      2.796          3.052
Art History                  0.720   1.518     1.184      1.453    0.879      2.985          2.765
Communication Studies        0.968   1.607     0.770      0.572    0.205      2.408          0.092
History                      0.988   1.023     1.071      1.042    1.075      0.000          0.983
Law                          1.431   0.213     0.410      0.497    0.589      0.218          0.288
Linguistics                  0.682   1.425     1.381      1.591    1.332      2.258          2.525
Literature                   0.803   0.755     1.720      1.735    1.376      0.000          1.596
Philosophy                   0.730   2.325     0.932      0.612    0.838      1.861          0.339
Theology                     0.811   0.469     1.960      1.771    2.116      0.074          0.215
Table 5 Values according to the F-measure for the data shown in Table 3 (columns as in Table 3).

Disciplines             JA non-WoS  JA WoS  Book ch.  Ed. books  Monogr.  Proc. WoS  Proc. non-WoS
Archaeology                  0.039   0.108     0.029      0.013    0.032      0.044          0.059
Art History                  0.063   0.109     0.088      0.062    0.025      0.053          0.063
Communication Studies        0.091   0.121     0.060      0.025    0.006      0.043          0.002
History                      0.156   0.114     0.126      0.056    0.035      0.000          0.025
Law                          0.605   0.043     0.091      0.034    0.022      0.005          0.008
Linguistics                  0.168   0.213     0.222      0.098    0.047      0.046          0.069
Literature                   0.127   0.085     0.203      0.094    0.045      0.000          0.041
Philosophy                   0.151   0.312     0.133      0.036    0.029      0.037          0.009
Theology                     0.124   0.051     0.225      0.095    0.069      0.001          0.005
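Tables 4 and 5 can be recomputed from Table 3 alone; here disciplines play the role of "countries" and publication types the role of "domains". A sketch:

```python
# Rows: disciplines; columns: the seven publication types of Table 3.
table = {
    "Archaeology":           [176, 133, 40, 6, 11, 12, 18],
    "Art History":           [295, 150, 135, 38, 12, 22, 28],
    "Communication Studies": [425, 170, 94, 16, 3, 19, 1],
    "History":               [773, 193, 233, 52, 28, 0, 19],
    "Law":                   [4018, 144, 320, 89, 55, 11, 20],
    "Linguistics":           [908, 457, 511, 135, 59, 54, 83],
    "Literature":            [631, 143, 376, 87, 36, 0, 31],
    "Philosophy":            [786, 603, 279, 42, 30, 36, 9],
    "Theology":              [610, 85, 410, 85, 53, 1, 4],
}
row_tot = {d: sum(r) for d, r in table.items()}                 # plays O_C
col_tot = [sum(r[j] for r in table.values()) for j in range(7)]  # plays O_D
grand = sum(row_tot.values())                                    # plays O_W

def ai(o_cd, o_d, o_c, o_w):
    return o_cd * o_w / (o_c * o_d)

def f_measure(o_cd, o_d, o_c):
    return 2 * o_cd / (o_c + o_d)

# Archaeology, non-WoS journal articles (column 0): matches Tables 4 and 5.
o_cd, o_c, o_d = table["Archaeology"][0], row_tot["Archaeology"], col_tot[0]
print(round(ai(o_cd, o_d, o_c, grand), 3))  # 0.737
print(round(f_measure(o_cd, o_d, o_c), 3))  # 0.039
```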
Next we calculate the correlation for each type of publication (ranks for the calculation of the Spearman correlation go from 1 to 9 as there are 9 disciplines) between the numbers of publications, their AI-values and their F-values. Results are shown in Table 6.
Table 6 Correlation values.

                                    Pearson                  Spearman
                                    PUB-AI  PUB-F   AI-F     PUB-AI  PUB-F   AI-F
Journal articles VABB-non-WoS        0.873   0.998  0.872     0.183   0.983  0.267
Journal articles VABB-WoS            0.521   0.964  0.704     0.431   0.470  0.750
Book chapters VABB                   0.599   0.922  0.852     0.633   0.933  0.783
Edited books VABB                    0.621   0.789  0.965     0.500   0.683  0.933
Monographs VABB                      0.461   0.645  0.961     0.233   0.533  0.867
Proceedings papers VABB-WoS          0.631   0.724  0.990     0.731   0.849  0.950
Proceedings papers VABB-Non-WoS      0.595   0.734  0.979     0.633   0.800  0.933

Note. PUB stands for number of publications

Generally, correlations between the numbers of publications and the AI-values are the lowest, while correlations for PUB-F and AI-F are roughly of the same level, the case of the Spearman rank-correlation between journal articles in non-WoS journals being an exception. The main lesson to be learned from this example is that numbers of published items per discipline per publication type, relative preference of disciplines for certain publication types (based on the AI-formula) and the corresponding F-measure are different, but to some extent correlated indicators.
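As a partial check on Table 6, the Pearson correlation between publication counts and F-values for non-WoS journal articles can be recomputed from the rounded values published in Tables 3 and 5 (with rounded inputs the result lands close to the reported 0.998):

```python
from math import sqrt

def pearson(xs, ys):
    """Plain Pearson product-moment correlation."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(var_x * var_y)

# Non-WoS journal articles: counts (Table 3) and F-values (Table 5), per discipline.
pub = [176, 295, 425, 773, 4018, 908, 631, 786, 610]
f_values = [0.039, 0.063, 0.091, 0.156, 0.605, 0.168, 0.127, 0.151, 0.124]
print(round(pearson(pub, f_values), 3))
```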

7 Discussion and Conclusion

The criticism of the AI-formula (in general) is not always valid. If in the original table row or column sums are fixed, the criticism does not hold. This is clarified in the Appendix.
Any average, including weighted averages, of OCD/OD and OCD/OC satisfies the addition property (Theorem 1). Because of the formal analogy with the F-score from information retrieval, and because it is generally agreed that when rates are involved one should use a harmonic mean, we choose this option. In this way we obtain the additional sensitivity benefit that if either OCD/OD (the country's share in the world's publication output in the given domain D) or OCD/OC (the given domain's share in the country's publication output) is small, the F-measure for research priority is also small. This property does not hold for an arithmetic mean. If deemed necessary one may even consider weighted harmonic means of OCD/OD and OCD/OC. Another sensitivity aspect relates to the parameters OD and OC. If one studies a large domain such as the Natural Sciences or Medicine, then the parameter OD, being much larger than the parameter OC for most countries and certainly for most universities, has the largest influence on the actual value of F. On the other hand, if one studies a small specialty then the parameter OC may have the biggest influence. However, we do not think that actual values of F are of importance, but rather changes in value and resulting changes in rankings between comparable units.
Although the AI and its mathematical equivalents, such as the attractivity index, or their monotone transformations such as the relative specialization index, can be used to characterize the contribution or weight of a subsystem to the total system, they can certainly not be used for science policy purposes. The number of publications by country C in domain D during a given publication window (OCD), the total number of publications in the world in domain D during the same publication window (OD) and the number of publications—in all domains—by country C during the same publication window (OC), can be considered as endogenous factors in a science policy model, while the total number of publications in the world and in all domains during this publication window (OW) is an exogenous factor. For this reason we propose the F-measure as a better and more sensitive policy indicator.

Appendix

The AI-formula can be calculated for any nominal cross-classification table. Focusing on one cell and using the earlier notation, this leads to Equation (10):

AI = (OCD/OD)/(OC/OW) = (OCD · OW)/(OC · OD).
(10)

One may observe that for the calculation of one specific AI-value the original table is reduced to a two by two table with cells a = OCD, b = OC - OCD, c = OD - OCD and d = OW - OC - OD + OCD. Using this reduction, Equation (10) can be rewritten using four values referring to non-overlapping sets:

AI = a(a + b + c + d)/((a + b)(a + c)).
If a cross-tabulation is such that row or column totals are fixed, then the examples showing the irrationality of the AI-index cannot be given. Indeed: when row or column totals are fixed, then what is added to one value (in one cell) must be deducted from another. In those cases the AI-formula has a clear meaning as a relative index and can be used in a rational analysis and for decision making.
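The two by two reduction is easy to verify numerically (a sketch reusing the counts of Table 1's country 1):

```python
def ai(o_cd, o_d, o_c, o_w):
    """AI = (O_CD * O_W) / (O_C * O_D)."""
    return o_cd * o_w / (o_c * o_d)

def ai_from_cells(a, b, c, d):
    """AI computed from the four non-overlapping cells of the 2x2 table."""
    total = a + b + c + d                  # equals O_W
    return a * total / ((a + b) * (a + c))  # a+b = O_C, a+c = O_D

o_cd, o_d, o_c, o_w = 200, 5_000, 12_000, 1_600_000
cells = (o_cd, o_c - o_cd, o_d - o_cd, o_w - o_c - o_d + o_cd)
assert abs(ai_from_cells(*cells) - ai(o_cd, o_d, o_c, o_w)) < 1e-9
print("both formulations agree")
```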

The authors have declared that no competing interests exist.

[1]
Aksnes, D.W., van Leeuwen, T.N., & Sivertsen, G. (2014). The effect of booming countries on changes in the relative specialization index (RSI) on country level. Scientometrics, 101(2), 1391-1401.

[2]
Balassa, B. (1965). Trade liberalisation and ‘revealed’ comparative advantage. Manchester School of Economic & Social Studies, 33, 99-123.

[3]
Bhattacharya, S. (1997). Cross-national comparison of frontier areas of research in physics using bibliometric indicators. Scientometrics, 40(3), 385-405.

[4]
Bouyssou, D., & Marchant, T. (2011). Ranking scientists and departments in a consistent manner. Journal of the American Society for Information Science and Technology, 62(9), 1761-1769.

[5]
Chen, G., & Xiao, L. (2016). Selecting publication keywords for domain analysis in bibliometrics: A comparison of three methods. Journal of Informetrics, 10(1), 212-223.

[6]
Egghe, L., & Rousseau, R. (2002). A general framework for general impact indicators. The Canadian Journal of Information and Library Science / La revue canadienne des sciences de l’information et de bibliothéconomie, 27(1), 29-48.

[7]
Engels, T.C.E., Ossenblok, T.L.B., & Spruyt, E.H.J. (2012). Changing publication patterns in the Social Sciences and Humanities, 2000-2009. Scientometrics, 93(2), 373-390.

[8]
Frame, J.D. (1977). Mainstream research in Latin America and the Caribbean. Interciencia, 2, 143-148.

[9]
Glänzel, W. (2000). Science in Scandinavia: A bibliometric approach. Scientometrics, 48(2), 121-150.

[10]
Guan, J.C., & Gao, X. (2008). Comparison and evaluation of Chinese research performance in the field of bioinformatics. Scientometrics, 75(2), 357-379.

[11]
Hu, X.J., & Rousseau, R. (2009). A comparative study of the difference in research performance in biomedical fields among selected Western and Asian countries. Scientometrics, 81(2), 475-491.

[12]
Lamirel, J.C. (2012). A new approach for automatizing the analysis of research topics dynamics: Application to optoelectronics research. Scientometrics, 93(1), 155-166.

[13]
Li, F., Miao, Y.J., & Ding, J. (2015). Tracking the development of disciplinary structure in China’s top research universities (1998-2013). Research Evaluation, 24(3), 312-324.

[14]
Manning, C.D., Raghavan, P., & Schütze, H. (2008). Introduction to information retrieval. Cambridge: Cambridge University Press.

[15]
Nagpaul, P.S., & Sharma, L. (1995). Science in the eighties: A typology of countries based on inter-field priorities. Scientometrics, 34(2), 263-283.

[16]
Ramakrishnan, J., & Thavamani, K. (2015). Indian contributions to the field of leptospirosis (2006-2013): A bibliometric study. Collnet Journal of Scientometrics and Information Management, 9(2), 235-249.

[17]
Rousseau, R. (2012). Thoughts about the activity index and its formal analogues. ISSI Newsletter, 8(4), 73-75.

[18]
Rousseau, R., & Yang, L.Y. (2012). Reflections on the activity index and related indicators. Journal of Informetrics, 6(3), 413-421.

[19]
Sangam, S.L., Arali, U.B., Patil, C.G., & Rousseau, R. (2017). Growth of the hepatitis literature over the period 1976-2015: What can the relative priority index teach us? Paper presented at the 13th International Conference on Webometrics, Informetrics, and Scientometrics (WIS) and 18th COLLNET Meeting, July 2017, Canterbury.

[20]
Schubert, A., & Braun, T. (1986). Relative indicators and relational charts for comparative assessment of publication output and citation impact. Scientometrics, 9(5-6), 281-291.

[21]
Stare, J., & Kejžar, N. (2014). On standardization of the activity index. Journal of Informetrics, 8(3), 503-507.

[22]
Thijs, B., & Glänzel, W. (2008). A structural analysis of publication profiles for the classification of European research institutes. Scientometrics, 74(2), 223-236.

[23]
Vinkler, P. (2010). The evaluation of research by scientometric indicators. Oxford: Chandos.

[24]
Zhang, L., Rousseau, R., & Glänzel, W. (2011). Document-type country profiles. Journal of the American Society for Information Science & Technology, 62(7), 1403-1411.

[25]
Zhou, P., Thijs, B., & Glänzel, W. (2009). Regional analysis on Chinese scientific output. Scientometrics, 81(3), 839-857.
