Research Paper

The Power-weakness Ratios (PWR) as a Journal Indicator: Testing the “Tournaments” Metaphor in Citation Impact Studies

  • Loet Leydesdorff¹, Wouter de Nooy¹ & Lutz Bornmann²
  • ¹ Amsterdam School of Communication Research, University of Amsterdam, Amsterdam 1001 NG, The Netherlands;
    ² Division for Science and Innovation Studies, Administrative Headquarters of the Max Planck Society, Munich 80539, Germany

Received date: 2016-06-10

Revised date: 2016-07-28

Online published: 2016-08-02

Acknowledgements

The authors thank Gangan Prathap for discussing the PWR method with them in detail.

Abstract

Purpose: Ramanujacharyulu developed the Power-weakness Ratio (PWR) for scoring tournaments. The PWR algorithm has been advocated (and used) for measuring the impact of journals. We show how such a newly proposed indicator can be tested empirically.
Design/methodology/approach: PWR values can be found by recursively multiplying the citation matrix by itself until convergence is reached in both the cited and citing dimensions; the quotient of these two values is defined as the PWR (a sketch of this iteration is given after the abstract). We study the effectiveness of PWR using journal ecosystems drawn from the Library and Information Science (LIS) set of the Web of Science (83 journals) as an example. Pajek is used to compute PWRs for the full set, and Excel for the computation in the case of the two smaller sub-graphs: (1) JASIST+ the seven journals that cite JASIST more than 100 times in 2012; and (2) MIS Quart+ the nine journals citing this journal to the same extent.
Findings: A test using the full set of 83 journals converged, but did not provide interpretable results. Further decomposition of this set into homogeneous sub-graphs shows that, like most other journal indicators, PWR can perhaps be used within homogeneous sets, but not across citation communities. We conclude that PWR does not work as a journal impact indicator: journal impact is not a tournament.
Research limitations: Journals that are not represented on the “citing” dimension of the matrix—for example, because they no longer appear, but are still registered as “cited” (e.g. ARIST)—distort the PWR ranking because of zeros or very low values in the denominator.
Practical implications: The association of “cited” with “power” and “citing” with “weakness” can be considered as a metaphor. In our opinion, referencing is an actor category and can be studied in terms of behavior, whereas “citedness” is a property of a document with an expected dynamics very different from that of “citing.” From this perspective, the PWR model is not valid as a journal indicator.
Originality/value: Arguments for using PWR are: (1) its symmetrical handling of the rows and columns in the asymmetrical citation matrix, (2) its recursive algorithm, and (3) its mathematical elegance. In this study, PWR is discussed and critically assessed.
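
The recursive procedure described under Design/methodology/approach can be made concrete in a few lines of code. The following Python/NumPy sketch is illustrative only: the function name, the parameters, and the convention that C[i, j] counts citations from journal i to journal j are our assumptions, not the paper's (the authors used Pajek and Excel).

    import numpy as np

    def power_weakness_ratio(C, n_iter=1000, tol=1e-12):
        """Ramanujacharyulu's PWR for a nonnegative citation matrix C
        (assumed convention: C[i, j] = citations from journal i to journal j)."""
        n = C.shape[0]
        power = np.ones(n)     # iterate in the "cited" (power) dimension
        weakness = np.ones(n)  # iterate in the "citing" (weakness) dimension
        for _ in range(n_iter):
            new_power = C.T @ power        # being cited: the "power" side
            new_weakness = C @ weakness    # citing: the "weakness" side
            new_power /= np.linalg.norm(new_power)        # normalize so the
            new_weakness /= np.linalg.norm(new_weakness)  # iteration converges
            if (np.allclose(new_power, power, atol=tol) and
                    np.allclose(new_weakness, weakness, atol=tol)):
                power, weakness = new_power, new_weakness
                break
            power, weakness = new_power, new_weakness
        # Journals with (near-)zero weakness, e.g. cited-only journals such
        # as ARIST (cf. Research limitations), make this quotient blow up.
        return power / weakness

On a strongly connected citation graph the two iterations settle on the leading eigenvectors of the matrix in its cited and citing orientations; “convergence in both dimensions” refers to this fixed point, and a journal's PWR is the element-wise quotient of the two converged scores.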


http://ir.las.ac.cn/handle/12502/8729

Cite this article

Loet Leydesdorff, Wouter de Nooy & Lutz Bornmann. The Power-weakness Ratios (PWR) as a Journal Indicator: Testing the “Tournaments” Metaphor in Citation Impact Studies [J]. Journal of Data and Information Science, 2016, 1(3): 6-26. DOI: 10.20309/jdis.201617

References

Bergstrom, C. (2007). Eigenfactor: Measuring the value and prestige of scholarly journals. College & Research Libraries News, 68, 314.
Blondel, V.D., Guillaume, J.L., Lambiotte, R., & Lefebvre, E. (2008). Fast unfolding of communities in large networks. Journal of Statistical Mechanics: Theory and Experiment, 8(10), P10008.
Brin, S., & Page, L. (1998). The anatomy of a large-scale hypertextual Web search engine. Computer Networks and ISDN Systems, 30(1-7), 107-117.
De Nooy, W., Mrvar, A., & Batagelj, V. (2011). Exploratory social network analysis with Pajek: Revised and expanded second edition. Cambridge: Cambridge University Press.
De Visscher, A. (2010). An index to measure a scientist's specific impact. Journal of the American Society for Information Science and Technology, 61(2), 310-318.
De Visscher, A. (2011). What does the g-index really measure? Journal of the American Society for Information Science and Technology, 62(11), 2290-2293.
Dong, S.B. (1977). A Block-Stodola eigensolution technique for large algebraic systems with nonsymmetrical matrices. International Journal for Numerical Methods in Engineering, 11(2), 247-267.
Franceschet, M. (2011). PageRank: Standing on the shoulders of giants. Communications of the ACM, 54(6), 92-101.
Garfield, E., & Sher, I.H. (1963). New factors in the evaluation of scientific literature through citation indexing. American Documentation, 14, 195-201.
Gingras, Y., & Larivière, V. (2011). There are neither “king” nor “crown” in scientometrics: Comments on a supposed “alternative” method of normalization. Journal of Informetrics, 5(1), 226-227.
Guerrero-Bote, V.P., & Moya-Anegón, F. (2012). A further step forward in measuring journals' scientific prestige: The SJR2 indicator. Journal of Informetrics, 6(4), 674-688.
Kamada, T., & Kawai, S. (1989). An algorithm for drawing general undirected graphs. Information Processing Letters, 31(1), 7-15.
Kleinberg, J.M. (1999). Authoritative sources in a hyperlinked environment. Journal of the ACM, 46(5), 604-632.
Leydesdorff, L. (2006). Can scientific journals be classified in terms of aggregated journal-journal citation relations using the Journal Citation Reports? Journal of the American Society for Information Science & Technology, 57(5), 601-613.
Leydesdorff, L. (2009). How are New citation-based journal indicators adding to the bibliometric toolbox? Journal of the American Society for Information Science and Technology, 60(7), 1327-1336.
Leydesdorff, L., & Bornmann, L. (2012). Percentile ranks and the integrated impact indicator (I3). Journal of the American Society for Information Science and Technology, 63(9), 1901-1902.
Leydesdorff, L., & Bornmann, L. (2016). The operationalization of “fields” as WoS Subject Categories (WCs) in evaluative bibliometrics: The cases of “Library and Information Science” and “Science & Technology Studies”. Journal of the Association for Information Science and Technology, 67(3), 707-714.
Leydesdorff, L., Bornmann, L., Mutz, R., & Opthof, T. (2011). Turning the tables in citation analysis one more time: Principles for comparing sets of documents. Journal of the American Society for Information Science and Technology, 62(7), 1370-1381.
Milojević, S., & Leydesdorff, L. (2013). Information Metrics (iMetrics): A research specialty with a socio-cognitive identity? Scientometrics, 95(1), 141-157.
Moed, H.F. (2010). Measuring contextual citation impact of scientific journals. Journal of Informetrics, 4(3), 265-277.
Narin, F. (1976). Evaluative bibliometrics: The use of publication and citation analysis in the evaluation of scientific activity. Washington, DC: National Science Foundation.
Nicolaisen, J., & Frandsen, T.F. (2008). The reference return ratio. Journal of Informetrics, 2(2), 128-135.
Opthof, T., & Leydesdorff, L. (2010). Caveats for the journal and field normalizations in the CWTS (“Leiden”) evaluations of research performance. Journal of Informetrics, 4(3), 423-430.
Pinski, G., & Narin, F. (1976). Citation influence for journal aggregates of scientific publications: Theory with application to the literature of physics. Information Processing and Management, 12(5), 297-312.
Prathap, G. (2014). The best team at IPL 2014 and EPL 2013-2014. Science Reporter, August, 44-47.
Prathap, G., & Nishy, P. (in preparation). A size-independent journal impact metric based on social-network analysis. Preprint available at https://www.academia.edu/7765183/A_size-independent_journal_impact_metric_based_on_social-network_analysis.
Prathap, G., Nishy, P., & Savithri, S. (in press). On the orthogonality of indicators of journal performance. Current Science.
Price, D.J. de Solla (1976). A general theory of bibliometric and other cumulative advantage processes. Journal of the American Society for Information Science, 27(5), 292-306.
Price, D.J. de Solla (1981). The analysis of square matrices of scientometric transactions. Scientometrics, 3(1), 55-63.
Rafols, I., Leydesdorff, L., O'Hare, A., Nightingale, P., & Stirling, A. (2012). How journal rankings can suppress interdisciplinary research: A comparison between innovation studies and business & management. Research Policy, 41(7), 1262-1282.
Ramanujacharyulu, C. (1964). Analysis of preferential experiments. Psychometrika, 29(3), 257-261.
Todeschini, R., Grisoni, F., & Nembri, S. (2015). Weighted power-weakness ratio for multi-criteria decision making. Chemometrics and Intelligent Laboratory Systems, 146, 329-336.
Waltman, L., Yan, E., & van Eck, N.J. (2011a). A recursive field-normalized bibliometric performance indicator: An application to the field of library and information science. Scientometrics, 89(1), 301-314.
Waltman, L., van Eck, N.J., van Leeuwen, T.N., Visser, M.S., & van Raan, A.F.J. (2011b). Towards a new crown indicator: Some theoretical considerations. Journal of Informetrics, 5(1), 37-47.
West, J.D., Bergstrom, T.C., & Bergstrom, C.T. (2010). The Eigenfactor metrics: A network approach to assessing scholarly journals. College and Research Libraries, 71(3), 236-244.
Wouters, P. (1999). The citation culture. Amsterdam: Unpublished Ph.D. Thesis, University of Amsterdam.
Yan, E., & Ding, Y. (2010). Weighted citation: An indicator of an article's prestige. Journal of the American Society for Information Science and Technology, 61(8), 1635-1643.
Yanovsky, V. (1981). Citation analysis significance of scientific journals. Scientometrics, 3(3), 223-233.
Zhirov, A., Zhirov, O., & Shepelyansky, D.L. (2010). Two-dimensional ranking of Wikipedia articles. The European Physical Journal B, 77(4), 523-531.