Research Paper

Measuring and Visualizing Research Collaboration and Productivity

  • Jon Garner 1,
  • Alan L. Porter 2,
  • Andreas Leidolf 3,
  • Michelle Baker 3, 4
Corresponding author: Alan L. Porter (E-mail: alan.porter@isye.gatech.edu).

Published online: 2018-03-19


Open Access

Abstract

Purpose: This paper presents findings of a quasi-experimental assessment to gauge the research productivity and degree of interdisciplinarity of research center outputs. Of special interest, we share an enriched visualization of research co-authoring patterns.

Design/methodology/approach: We compile publications by 45 researchers in each of 1) the iUTAH project, which we consider here to be analogous to a “research center,” 2) CG1— a comparison group of participants in two other Utah environmental research centers, and 3) CG2—a comparison group of Utah university environmental researchers not associated with a research center. We draw bibliometric data from Web of Science and from Google Scholar. We gather publications for a period before iUTAH had been established (2010-2012) and a period after (2014-2016). We compare these research outputs in terms of publications and citations thereto. We also measure interdisciplinarity using Integration scoring and generate science overlay maps to locate the research publications across disciplines.

Findings: We find that participation in the iUTAH project appears to increase research outputs (publications in the After period) and to increase research citation rates relative to the comparison group researchers (although CG1 research remains the most cited, as it was in the Before period). Most notably, participation in iUTAH markedly increases co-authoring among researchers—in general; for junior as well as senior faculty; for men and women; across organizations; and across disciplines.

Research limitations: The quasi-experimental design necessarily generates suggestive, not definitively causal, findings because of the imperfect controls.

Practical implications: This study demonstrates a viable approach for research assessment of a center or program for which random assignment of control groups is not possible. It illustrates use of bibliometric indicators to inform R&D program management.

Originality/value: New visualizations of researcher collaboration provide compelling comparisons of the extent and nature of social networking among target cohorts.

Cite this article

Jon Garner, Alan L. Porter, Andreas Leidolf, Michelle Baker. Measuring and Visualizing Research Collaboration and Productivity[J]. Journal of Data and Information Science, 2018, 3(1): 54-81. DOI: 10.2478/jdis-2018-0004

1 Introduction

To help evaluate the performance of a large interdisciplinary, cross-institutional research project, we addressed a challenging complex of attributes. The five-year “innovative Urban Transitions and Aridregion Hydro-sustainability” (iUTAH) project was initiated in 2012 with support from the US National Science Foundation’s (NSF) “Established Program to Stimulate Competitive Research” (EPSCoR). The iUTAH research proposal mandated a comprehensive research collaboration assessment in Year 5 that would “...analyze the publication and citation patterns of researchers at all Utah institutions to identify the level of interconnectedness before iUTAH is awarded and in year 4 of the award [to] indicate the connections between institutions, disciplines, and individuals that were stimulated by the iUTAH activities.”
In the past, we and others have developed and applied a variety of analytical and visualization tools to help analyze research attributes (Porter et al., 2010). Such measurement tools fall generally into the category of bibliometrics (De Bellis, 2009), with special note of:
♦ Measures of citation diversity to help gauge interdisciplinary research knowledge interchange (Zhang, Rousseau, & Glänzel, 2016), including “Integration scores” (Porter et al., 2007; 2008), Rao-Stirling diversity (Rafols & Meyer, 2010; Stirling, 2007), and Diffusion scores (Carley & Porter, 2012; Garner, Porter, & Newman, 2014)
♦ Cross-research domain knowledge interchange (Kwon et al., under review; Porter et al., 2013)
♦ Science overlay maps to visually represent the diversity of publication, citation, or citing sub-disciplinary involvement (Carley et al., under review; Leydesdorff et al., 2013; Leydesdorff & Rafols, 2009; Porter & Rafols, 2009; Rafols et al., 2010; Riopelle, Leydesdorff, & Jie, 2014)
♦ Social network analyses (de Nooy, Mrvar, & Batagelj, 2011), to observe collaboration patterns with visualizations (Garner et al., 2012; Leydesdorff, 2008)
Integration and Diffusion scoring, and the science overlay maps, use Web of Science Categories (WoSCs) as the basic unit of sub-discipline categorization (Leydesdorff & Bornmann, 2016). There are various approaches, and attendant issues (Glänzel & Schubert, 2003; Klavans & Boyack, 2009, 2016; Rafols & Leydesdorff, 2009), but the WoSCs offer suitable sub-disciplinary granularity to accord with a key National Academies report’s recommendations (National Academies, 2005). In our experience, granularity should match the study’s main objectives. Recently we have tuned journal assignments to focus on research knowledge interchange among Cognitive Sciences, Education Research, and associated Border Fields (Youtie et al., 2017). We have generated informative results that consolidated the 200+ WoSCs into four “meta-disciplines” to assess interest (citations) from the natural sciences to a US NSF social science program (Garner et al., 2013). Or, one can seek much finer grain, as Klavans and Boyack (2017) demonstrate using some 91,000 topics to predict grant funding prospects. For the iUTAH assessment, interest keyed on the extent to which Center participants collaborated across disciplines, for which the WoSCs provided a manageable categorization.
We have applied various of these research assessment tools, particularly to gauge interdisciplinarity, in assessments of US federal funding programs. These include Environmental Protection Agency Science to Achieve Results (STAR) projects (Porter et al., 2003), and the NSF Research and Evaluation on Education in Science & Engineering (REESE—Porter et al., 2013), Human and Social Dynamics (HSD—Garner et al., 2013), and Research Coordination Networks (RCN—Porter, Garner, & Crowl, 2012; Garner et al., 2012) programs.
What one cites in a paper depends on multiple factors, including relevance, disciplinary training, disciplinary norms, and awareness (cf. De Bellis, 2009). For instance, in a past bibliometric analysis, we observed stark differences in the influence of nanotechnology Environmental, Health, and Safety (EHS) research findings, as gauged by citation, within the larger nanotechnology research community (Youtie et al., 2011).
Turning to the assessment task at hand, we focus on connections among institutions, disciplines, and individuals that were possibly bolstered by iUTAH activities. iUTAH aims to strengthen science regarding management of the state’s water resources. The program brings together researchers, students, and stakeholders from multiple organizations and disciplines. A key aspiration is to generate collaborative, interdisciplinary research.
This paper reports on our efforts to combine and extend analytical and visualization tools to assess iUTAH’s contributions on multiple factors. We want to measure “connections” made across individuals, organizations, and disciplines, with express interest in variations by seniority (academic rank) and gender. We concentrate on research outputs and their impacts (in the form of citations); we do not address process data per se (i.e., no interviews or surveys of participants to document the nature of interactions).
We offer this study as a model of applying bibliometric tools to help assess the research outputs of a major center. It illustrates a quasi-experimental study design to provide practical comparisons that help gauge the effects of instituting the center. While one cannot generalize from a single such study, we offer it to suggest possible measurement approaches.

2 The Research Assessment

2.1 Design

The research assessment (Garner & Porter, 2017) sought to measure changes in collaboration practices and publication patterns catalyzed by iUTAH, as well as their impact, in terms of extent of citation. This paper emphasizes considerations in measuring and depicting the collaboration attributes.
To achieve suitable comparisons, we determined to implement a quasi-experimental design (Campbell & Stanley, 1963). This approach is modeled on randomized experimental designs, adapted to real-world possibilities (cf. Peck, 2016). Our approach is a “non-equivalent control group, before—after” design (Cook & Campbell, 1979). The intent is to generate a family of informative comparisons to document the effects of participation in iUTAH relating to research outputs and collaboration patterns. There are two comparative dimensions to this assessment design:
♦ Time—comparing Before to After metrics for iUTAH subjects (and for the comparison groups)
♦ Group—benchmarking iUTAH results against two suitable comparison groups.
For the temporal comparisons, we used 2010-2012 as the Before period and 2014-2016 as After. We set aside 2013 as ambiguous with respect to research publications that are apt to reflect participation in the iUTAH project.
Lacking a randomly assigned control group equivalent to iUTAH researchers, we worked to develop reasonable “comparison groups.” Our first comparison group consisted of participants in two Utah-based university centers comparable in terms of environmental science emphases. Comparison Group 1 (CG1) consists of researchers associated with Interdisciplinary/integrated Research Centers (IRCs) in Utah. We chose two research centers from which to draw this sample—the Ecology Center (EC) at Utah State University (USU) and the Global Change & Sustainability Center (GCSC) at the University of Utah (UU)—these being two major research universities involved in iUTAH. EC and GCSC were selected as prominent centers with commensurate (not identical) interests relating to environmental sciences. EC has some 70 participating faculty (with some overlap with iUTAH) and has been in operation since the 1960s. GCSC is newer and also has some of its faculty participating in iUTAH. Researchers participating in both iUTAH and EC or GCSC could be included in our sampling as iUTAH participants.
For our second comparison group (CG2), we sought individual researchers with similar disciplinary ties but not associated with iUTAH or CG1. For purposes of this article, CG2 is of less interest in that substantial collaboration among an arbitrary set of collegial researchers would not be expected to increase markedly in the time periods studied. We note CG2 here for completeness, and draw limited comparisons.
The general hypothesis is that participation in iUTAH increases collaboration and the degree of interdisciplinarity in the research of its members. Testing of the hypotheses of changed publication outputs, citations received, and especially, collaboration patterns centers on comparing the treatment and comparison groups, Before and After. The ideal pattern would show minimal change over time for the comparison group vs. an increase for the iUTAH subjects.
We selected, in a non-random fashion but without preconceived bias, a group of 45 tenure-track iUTAH researchers whose involvement spanned the length of the project. For practical purposes, these are nearly all the possible iUTAH participants, as the dozen or so others mostly have special attributes (e.g., different faculty status or more limited involvement). We then matched them with random samples of equal size drawn from CG1 and CG2, stratified according to institution, discipline, rank, and gender. The assessment report to iUTAH provides full details on how we composed the researcher samples.
In addition to USU and UU, the iUTAH sample also involved researchers at Brigham Young University (BYU) and several Primarily Undergraduate Institutions (PUIs): Utah Valley University; Weber State University; USU branch campuses; and Salt Lake Community College (SLCC). However, CG1 lacks PUI representation because those centers do not involve PUI researchers. Our intent was to do preliminary analyses on the PUIs in iUTAH and in CG2 to compare and determine further analytic strategies regarding this small number of researchers, because we hypothesized that PUI faculty may be more significantly impacted by participation in iUTAH.

2.2 Data

To address the assessment objectives, we compiled publication outputs in the form of abstract records gathered from two major research databases—Web of Science (WoS) and Google Scholar (GS). GS searches were conducted separately for each researcher (author) by first looking for a GS profile; if one existed, we extracted each article posted since 2010. If we did not find an author profile, then we searched for articles with that author name and extracted those. This was followed by manual checking to disambiguate author names, facilitated by using author affiliation information. Resulting records were formed into a single data set file by merging all 45 authors’ records into one GS set for each of the three groups (iUTAH, CG1, and CG2). Those files were pruned to remove duplicate records (e.g., co-authored records captured more than once).
WoS data were gathered and consolidated via corresponding processes to yield separate data sets for iUTAH, CG1, and CG2. We then merged the WoS and GS data sets for each research group, cleaned the resulting files by removing redundant information and formatting, and de-duplicated the cleaned files based on record (article) titles, removing the GS records where there was a corresponding WoS one.
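To make the record handling concrete, the following sketch shows one way to merge per-author GS and WoS record sets and de-duplicate on normalized titles, keeping the WoS version when both databases index a paper. This is an illustration only, not our production script; the field names and the normalization rule are assumptions.

    import re

    def normalize(title):
        """Lower-case and strip punctuation/extra spaces so near-identical titles match."""
        return re.sub(r"[^a-z0-9 ]", "", title.lower()).strip()

    def merge_and_dedupe(wos_records, gs_records):
        """Combine WoS and GS records; where titles collide, the WoS record is kept."""
        merged = {}
        for rec in gs_records:                      # load GS records first...
            merged[normalize(rec["title"])] = rec
        for rec in wos_records:                     # ...so WoS records overwrite GS duplicates
            merged[normalize(rec["title"])] = rec
        return list(merged.values())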
Of the 45 faculty members in each group, not everyone published in each period. Total counts for GS analyses are limited to papers not also indexed in WoS. GS publications are diverse—we extracted distinct source titles for 630 of 868 GS records for 2014-2016. Those are somewhat noisy (not as consistent or as clean as WoS fields of information). Nearly all are scholarly in nature. For instance, the five source titles occurring most frequently in these GS records are AGU Fall Meeting Abstracts, EGU General Assembly Conference Abstracts, 2014 GSA Annual Meeting in Vancouver, British Columbia, Proceedings of the Water Environment Federation, and Climate Dynamics.
Some general data attributes deserve noting. Average publication and citation values are of interest, but, as expected, those data tend to be highly skewed. Thus, analyses need to proceed with some caution. Publications range as high as 150 for one author. Of the 3,308 records (articles, etc.) in the full data set, 1,484 show no citations (but recall that the 2014-2016 papers have not had a great amount of time in which to be cited); one book has received 1,043 cites (a CG1 researcher co-authored it).
3 Publication and Citation Results

3.1 Research Output and Impact Comparisons

While this article focuses on discerning and depicting collaboration patterns, we first present the basic program research output and impact results to provide essential context (see Table 1). The first row shows how many of the 45 researchers in each group published one or more papers in the Before (2010-2012) and After (2014-2016) time periods. The next three rows show the publication counts—Total (from GS or WoS combined); Total from GS (excluding WoS duplicates); and Total from WoS.
Table 1. Publication and citation metrics of authors in three groups of environmental researchers from Utah, 2010-2016.

                                      2010-2012                 2014-2016
                                  iUTAH   CG1    CG2       iUTAH   CG1    CG2
# of group authors with papers      45     44     40         45     45     36
Total records                      702    636    293        788    666    295
Total from GS                      431    367    166        407    337    165
Total from WoS                     271    269    127        381    329    130
Average times cited               9.55  12.81  10.02       2.68   3.65   2.60
Average times cited, GS           6.49   7.33   6.62       1.13   4.06   1.75
Average times cited, WoS         14.42  20.49  14.46       3.81   5.13   3.52
Median times cited                   2      3      1          0      0      0
Median times cited, GS               0      0      0          0      0      0
Median times cited, WoS              9      9      4          2      2      1
Cites/year, WoS                   2.43   3.47   2.35       2.39   3.08   1.92
H-index                             35     42     22         19     22     14
H-index, GS                         26     26     15         13     11     10
H-index, WoS                        29     36     18         17     20     12
Integration score                0.535  0.457  0.428      0.523  0.489  0.468
Figure 1 plots the totals for each group, Before and After, for the WoS publications (the Total Records pattern is similar). Results support propositions that participating in a research center (iUTAH or one of the CG1 centers—EC or GCSC):
♦ Attracts more prolific researchers (more papers in the Before period than the individual CG2 researchers)
♦ Boosts research productivity (both iUTAH and CG1 groups increase productivity in the After period, but we note that the CG1 centers were already operating in the Before period)
iUTAH participation appears to boost publication in leading journals (i.e., those indexed by WoS) more than does participation in the CG1 centers—i.e., the iUTAH publication rate in the After period exceeds that for CG1.
Figure 1. Publications indexed by Web of Science for authors in three groups of researchers from Utah, 2010-2016.
Citation comparisons are more difficult to interpret (Table 1). Publications in the Before period have had much more time in which to accrue citations than those in the After period. Also, citation data are highly skewed, so averages are influenced heavily by relatively few highly cited papers. Furthermore, citation rates vary by field, so uneven disciplinary concentrations could favor one group over another. Inspection of Table S-1 shows general correspondence between the leading WoSCs of iUTAH and CG1 publications, so field-level citation propensities are likely not too different. However, the CG2 WoSC distribution differs more, and the CG2 publication counts are considerably less robust. Too much should not be made of the relative citation rate for CG2.
All that said, the general pattern shows CG1 publications to be more heavily cited, both Before and After. iUTAH and CG2 show relatively similar citation profiles. Again, interpretation of relative center participation impacts is not symmetric in that CG1 operated in both the 2010-2012 and 2014-2016 periods whereas iUTAH operated only in the 2014-2016 period.
The H-index is one approach to reduce the influence of extreme, high-value outliers (Hirsch, 2005). For a set of papers, it is the largest number h such that h of the papers have each received at least h citations. Traditionally, the H-index is applied to individuals; here, we adapt it to groups of individuals. Using this citation metric, CG1 generally leads, with iUTAH second.
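As a sketch of this group-level adaptation (an illustration, not the exact script used), the citation counts of all papers by a cohort's 45 researchers can be pooled and scanned for the largest h such that h papers each have at least h citations:

    def group_h_index(citation_counts):
        """H-index over the pooled per-paper citation counts of a cohort."""
        counts = sorted(citation_counts, reverse=True)
        h = 0
        for rank, cites in enumerate(counts, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # e.g. group_h_index([10, 8, 5, 4, 3, 1, 0]) returns 4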
Calculating citations on the basis of years since publication provides more comparable Before vs. After comparisons. Figure 2 shows average cites per year from publication through 2016. For each paper, we divided its times-cited count by the elapsed time from its publication year to mid-2016; for example, a 2011 paper with 12 cites yields 12/(2016.5 - 2011), or just over 2 cites per year. These per-paper rates are then averaged for each time period. Using this measure, CG1 retains its citation lead, but iUTAH shows a relative gain in the After period, compared to CG1 or CG2.
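The per-paper calculation can be written compactly; the sketch below assumes each record carries a publication year and a cumulative times-cited count, and uses mid-2016 (2016.5) as the census point, as in the example above.

    def avg_cites_per_year(records, census=2016.5):
        """Mean of per-paper citation rates: times cited / years since publication."""
        rates = [rec["cites"] / (census - rec["year"]) for rec in records]
        return sum(rates) / len(rates) if rates else 0.0

    # A 2011 paper with 12 cites contributes 12 / (2016.5 - 2011), about 2.2 cites/year.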
Figure 2. Average times cited per year since publication (based on Web of Science) for authors in three groups of researchers from Utah, 2010-2016.
We also broke out publication and citation activity by gender and rank (Garner & Porter, 2017), although this is not a focus of this paper. We do, nevertheless, note a few interesting results (see supplemental figures S-7a, b):
♦ iUTAH Assistant Professors published more in the After period, while Full and Associate Professors’ publications held relatively constant; this result supports an aim of the project to stimulate early-career faculty’s research.
♦ Within iUTAH, Full Professors’ papers tended to be more interdisciplinary than those of more junior faculty, in both the Before and After periods.

3.2 Interdisciplinarity

The bottom row of Table 1 presents Integration scores (Porter et al., 2007; Porter, Roessner, & Heberger, 2008). Those reflect the diversity of WoSCs cited by a given paper. A higher Integration score reflects greater 1) Variety (the cited journals being in different WoSCs), 2) Balance (rather than being heavily concentrated in one or a few WoSCs), and 3) Disparity (how distant those cited WoSCs are from each other based on 2015 journal-to-journal cross-citation propensities). An Integration score of 0 would indicate that all of a paper’s references that were indexed by WoS were in a single WoSC; a score approaching 1 would be extremely diverse (i.e., drawing on widespread sources of research knowledge).
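In the Rao-Stirling form on which Integration scoring builds (cf. Porter et al., 2007; Stirling, 2007), with $p_i$ the proportion of a paper's WoS-indexed references falling in WoSC $i$ and $s_{ij}$ the similarity between WoSCs $i$ and $j$ derived from journal cross-citation data, the score can be written as

    $I = 1 - \sum_{i,j} s_{ij}\, p_i\, p_j$

so that citing many mutually distant categories in balanced proportions drives $I$ toward 1, while citing a single category yields 0.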
iUTAH Integration scores are statistically significantly higher than those of each comparison group, in both the Before and After periods, at the 0.001 level by a one-tailed t-test. This suggests that researchers participating in iUTAH were already more inclined toward interdisciplinarity, rather than the iUTAH experience boosting the interdisciplinarity of its researchers’ papers.
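As an illustration of how such a comparison can be run (a sketch only, assuming the per-paper Integration scores of two groups are held in plain Python lists; the Welch unequal-variance variant and the SciPy 1.6+ one-sided option are our assumptions, not necessarily the exact procedure used):

    from scipy import stats

    def compare_integration(iutah_scores, cg_scores):
        """One-tailed Welch t-test: are iUTAH papers' Integration scores higher on average?"""
        t, p = stats.ttest_ind(iutah_scores, cg_scores,
                               equal_var=False, alternative="greater")
        return t, p

    # A p-value below 0.001 corresponds to the significance level reported above.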
We also sought to discern the variety of fields in which these researchers publish. Science overlay maps (Rafols, Porter, & Leydesdorff, 2010) provide a means to show the distribution of the WoSCs of their publication journals. We generated science overlay maps for each of our three groups, for both periods (available in the Supplemental Materials, along with Table S-1 tallying the number of papers in leading WoSCs by each group, in each period). The most striking observation is that research emphases held quite stable over time.
Figure 3 presents the science overlay map for iUTAH in the After period. These maps locate the 227 WoSCs based on general cross-citation patterns for WoS in 2015. We color code five “macro” domains—Ecology and Environmental Science and Technology; Chemistry and Physics; Engineering and Mathematics; Psychology and Social Sciences; and Biology and Medicine. The nodes showing prominently in the maps are those with the most publications by the iUTAH researchers—e.g., “Environmental sciences” is No. 1 for iUTAH in Figure 3. This map demonstrates that iUTAH researchers address not only multiple environmental specialties, but that their papers also reach out into a range of scientific, engineering, and social science domains. Given the iUTAH Center’s focus on better managing the state’s water resources, such diversity seems very suitable.
Figure 3. iUTAH publications overlaid on a science map based on Web of Science categories, 2014-2016.
Figure 4 shows the counterpart science overlay map for CG1 in the After period. CG1 publications also present a generally comparable profile of multiple environmental specialties (with more geoscience activity), plus considerable publication in other macro domains. CG2 shows a generally similar makeup, but with some different emphases. Science overlay maps for CG2 appear as Figures S-3 and S-4 in the Supplemental Materials. Table S-1 (also in the Supplemental Materials) presents paper counts by each group for both periods; this allows further exploration into specific WoSC activity. iUTAH, for 2014-2016, shows 58 or more papers in each of five different WoSCs (so, averaging more than one per each of the 45 researchers included)—environmental sciences, multidisciplinary geosciences, ecology, meteorology and atmospheric sciences, and water resources. We feel the tabular and graphical presentations of publication diversity across WoSCs complement each other. The next section delves into the research collaboration patterns underlying these research outputs.

4 Collaboration Patterns (Social Network Analyses)

Figures 5 and 6 convey the most compelling results of our research assessment—a marked increase in collaboration attributable to participation in iUTAH activities. These charts contain four quadrants, somewhat unequal in size and shape (separated by hand-drawn dashed lines), representing the institutions engaged—UU, USU, BYU, and PUIs. Nodes represent individual researchers, with larger nodes indicating more total publications (Google Scholar and WoS combined). Researcher positions are maintained consistently in both figures. Heavier lines indicate more co-authored publications. Versions of the figures for the report to the iUTAH Center include researcher names to provide explicit co-authoring information; here names are removed to protect identities.
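Construction of these co-author maps can be sketched as follows (a minimal illustration using networkx; the record fields and the restriction of edges to the 45 group members are assumptions, and the layout with hand-drawn institutional quadrants is not reproduced):

    from itertools import combinations
    import networkx as nx

    def build_coauthor_graph(records, group_members):
        """Node size = total publications; edge weight = number of co-authored papers."""
        g = nx.Graph()
        g.add_nodes_from(group_members, papers=0)
        for rec in records:
            members = sorted({a for a in rec["authors"] if a in group_members})
            for a in members:
                g.nodes[a]["papers"] += 1
            for a, b in combinations(members, 2):
                if g.has_edge(a, b):
                    g[a][b]["weight"] += 1
                else:
                    g.add_edge(a, b, weight=1)
        return g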
Figure 5. Co-Author map of iUTAH researchers for the Before period (2010-2012), separated by institution and discipline.
Figure 6. Co-Author map of iUTAH researchers for the After period (2014-2016), separated by institution and discipline.
Figure 5 maps co-authoring among the iUTAH researchers for 2010-2012 (this period precedes iUTAH activities that would affect research publication to any substantial degree). Interconnections across institutions are almost nonexistent.
Contrasting Figures 5 and 6, we note a substantial increase in networking within universities among many of the iUTAH participants (not all). This is pronounced for UU and USU, but less so for BYU and the PUIs. Even more dramatic is the large increase in connections between UU and USU colleagues in the After period.
Table 2 documents what is apparent in comparing Figures 5 and 6—marked increases in collaboration. This table tallies network statistics corresponding to Figures 5 and 6, as well as to Figures S-5, S-6, and S-7 (Supplemental Materials). Most striking is the confirmation of the visual patterns in Figures 5 and 6—iUTAH researchers increase their collaboration to a striking degree from Before to After. This is clear by inspection, with increases running more than 4-fold—note Average Degree (up from 1.2 to 5.4), Density (up from 0.03 to 0.12), and various tallies of links (e.g., total co-authorship links among the iUTAH researchers increase from 27 to 122).
Likewise, Table 2 comparisons between iUTAH After and CG1 After show large differences here, as also shown in comparing Figures 6 and 8. For one measure, total links in iUTAH After are 122 vs. 4 for CG1 After—i.e., heavy vs. minimal co-authoring among the respective groups of 45 researchers. Average degree or density comparisons are similarly extreme.
Table 2. Networking statistics within Utah researcher cohorts.

                                 iUTAH               CG1
                             Before   After     Before   After
Average degree                 1.2     5.422     0.133    0.178
Density                        0.027   0.123     0.003    0.004
Total links                     27      122        3        4
Links within discipline         12       63        3        3
Links across discipline         15       59        0        1
Links within rank               12       48
Links across rank               15       74
Links within gender             17       63
Links across gender             10       59
Links within university         24       14
Links across university          3      108
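The statistics in Table 2 follow directly from such a co-author graph; for the iUTAH After network, 45 nodes and 122 links reproduce the average degree of 5.42 and density of 0.123 shown above. A minimal sketch, assuming each node carries attributes such as “discipline,” “rank,” “gender,” and “university”:

    import networkx as nx

    def network_stats(g, attribute):
        """Average degree, density, and link tallies within/across a node attribute."""
        n, m = g.number_of_nodes(), g.number_of_edges()
        within = sum(1 for u, v in g.edges()
                     if g.nodes[u][attribute] == g.nodes[v][attribute])
        return {"average_degree": 2 * m / n,
                "density": nx.density(g),
                "total_links": m,
                "links_within": within,
                "links_across": m - within}

    # e.g. network_stats(build_coauthor_graph(records, group_members), "university")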
Figures 5 and 6 also color code the five disciplinary groupings within iUTAH. Focusing for a moment just on the USU and UU researchers:
♦ Of 20 USU faculty in our sample, most show multiple links to other researchers; only two show no collaborations with iUTAH colleagues on their publications in the After period.
♦ Of 13 UU faculty, about half show multiple links; only two do not show any collaborations with other iUTAH colleagues in the After period.
♦ We see heavy interconnections between ‘core’ researchers in UU and USU in the After period, implying a change attributable to participation in the iUTAH project.
♦ Regarding interdisciplinary connection density, the USU “core” of heavily interconnected researchers in the After period includes all five disciplines. The UU “core” taps four of the disciplines. Collectively, this supports an assertion of high interdisciplinary connection, not “silos” of disciplinary sub-groups.
♦ The cross-disciplinary interconnection appears much weaker for the four BYU and eight PUI participating researchers.
Figures 7 and 8 provide the counterpart CG1 comparisons. [We omit these analyses for CG2, as there is no reason to anticipate substantial collaboration among the individual researchers sampled into that comparison group.] We see surprisingly little collaboration in CG1, considering that these are researchers associated with integrated environmental research centers. Since those are single-university centers, we would not expect much UU-USU linkage. Also, BYU and the PUIs are not engaged in these centers. Still, given that GCSC and EC were active in both periods, the paucity of co-authoring contrasts sharply with the After period in which iUTAH was active (Figure 6).
Figure 7. Co-Author map of CG1 researchers for the Before period (2010-2012), separated by institution and discipline.
Supplemental Figures S-6a and S-6b display iUTAH researchers by gender; we did not observe remarkable differences in group engagement of men versus women. Figures S-7a and S-7b are counterparts of Figures 5 and 6; they color-code rank instead of discipline. The impression is that USU has been more successful than the other institutions in engaging its early-career faculty in research collaboration.

5 Discussion and Conclusions

Substantively, we describe the publications, and citations to those publications, emanating from the researchers actively involved in the iUTAH project. We are expressly interested in the cross-disciplinary nature of the research, and show that it, indeed, engages many disciplines.
Figure 8. Co-Author map of CG1 researchers for the After period (2014-2016), separated by institution and discipline.
Moreover, we have run several quasi-experimental comparisons of those iUTAH research outputs against other environmental center participants (CG1) and against a matched set of individual researchers at Utah universities (CG2). These indicate that the iUTAH cohort participants were highly productive (equivalent to CG1; higher than CG2 researchers) prior to engaging in iUTAH, and that their publication activity appears to have been increased by iUTAH involvement. CG1 publications in the 2010-2012 period attract somewhat more citations than the papers of researchers who would join iUTAH (this is the Before period) or of those in the CG2 group. In the After period, 2014-2016, the iUTAH publications show a moderate gain relative to the others.
We note certain limitations in these analyses. This is a quasi-experimental study, so comparisons are somewhat guarded in the absence of randomized control (which is not plausible in such a real-case environment). Given the realities of composing an interdisciplinary research organization, such a design is certainly in order [i.e., researchers are not amenable to being randomly assigned to join research centers or not]. We sought the best available comparisons, but have noted limitations; for instance, the CG1 centers were operating during the Before period, prior to iUTAH beginning work. WoS coverage of publications in the After period is not as complete as for the Before period, due to lags in indexing. More serious, as discussed, citations accrued by those publications are more severely truncated for the After period. Such limitations are reasonably compensated by having the cross-group comparisons of iUTAH to CG1 and CG2. Tallying citations is problematic in several regards, some already noted (e.g., field differences). Fixed citation windows offer advantages vis-à-vis citations per year. Citation rates change over time, and recent periods are apt to be less completely indexed by WoS.
Before vs. After citation comparisons are clouded by difficulties in adjusting fairly for citation propensities (Zhang et al., 2017). We use cites per year since publication, but see advantages in the alternative of counting cites in the 10 (or five) years post-publication. Hall et al. (2012) compared tobacco research center vs. individual grantee (National Institutes of Health R-01) publication rates. They too found that center researchers published more, but only after four years of project support (before that, they lagged). Further assessment might want to compare citations received on a year-by-year basis. Gathering data on the year in which each citing document was published would enable such comparisons. We did not have those data; they require capturing information on each citing document individually. In contrast, WoS readily provides consolidated “Times Cited” information within the abstract publication record, and that is what we used in this study.
One of us noted the apparent disconnect between collaboration and interdisciplinarity, as gauged by Integration scores. Presumably, cross-disciplinary collaboration offers a route to garner insights from the multiple disciplines represented; so Integration scores that measure diversity of references cited in a paper might be expected to increase. For our iUTAH researchers, collaboration within the 45 researchers escalates from Before to After, but Integration scores do not. As shown in the bottom row of Table 1, those scores do not change significantly. However, Integration scores do increase from Before to After for CG1 and CG2.
This lack of an apparent correlation between Integration scores of the papers and extent of within-group collaboration caused us to consider collaboration more broadly. While not the focus of this assessment that seeks to gauge change in within-group (the 45 iUTAH researchers) collaboration, one could examine collaboration generally. We introduce this here briefly to suggest future research potential in analyzing overall, as well as local, collaboration patterns. Table 3 offers some basic comparisons.
In brief, the After period papers of both center groups (iUTAH and CG1) show more authors per paper and more author affiliations per paper (affiliations are calculated at the organizational level; it was too difficult to measure departmental affiliations uniformly from the WoS records). The iUTAH increase in organizational affiliations is consistent with the expansion of cross-Utah organizational ties from Before to After (Table 2). Not an issue here, but we did run across articles with “mega” authoring (i.e., hundreds of authors on a paper) in preliminary searches. Were such papers in one’s data set, they would pose assessment challenges in discerning their degree of relationship (say, to Center engagement) and would introduce statistical oddities.
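A sketch of how the Table 3 tallies can be computed (assuming each record carries an author list and a list of organizational affiliations; the field names are illustrative):

    def collaboration_breadth(records):
        """Mean number of authors and of distinct organizational affiliations per paper."""
        n = len(records)
        authors = sum(len(rec["authors"]) for rec in records) / n
        orgs = sum(len(set(rec["affiliations"])) for rec in records) / n
        return {"authors_per_paper": authors, "affiliations_per_paper": orgs}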
To sum up, methodologically, we offer a multi-attribute suite of analytical and visual elements to help assess research outputs and impacts. Of particular interest are measures of research collaboration. We believe the combination of representations provides important, complementary perspectives. In particular, we find Figures 5 and 6 effective in capturing research networking attendant to interdisciplinary center activities. Without undue complexity, they graphically show changes from Before to After and, by comparison with counterpart figures for a comparison group, the differences between iUTAH and CG1. In a presentation to an “All-Hands” meeting of iUTAH participants (July 2017), they communicated effectively. These figures also show the nature of cross-organization and cross-disciplinary connections made among a group of researchers. [Supplemental figure variants do likewise for gender and rank.]
The “take-away” from this study for others is a model of multiple measures of research outputs (publication characteristics) and impacts (citation characteristics). These could be adapted to meet the assessment needs of other studies for which cross-disciplinarity and research collaboration are vital elements. In our view, Figures 5-8, the co-author maps, are the most novel contribution, offering a concise way to communicate several facets of collaboration. Perhaps this warrants consideration as a bibliometric assessment case study?

Author Contributions

Jon Garner (jon.garner@searchtech.com) performed most analyses and devised the novel graphics. Alan Porter (alan.porter@isye.gatech.edu, corresponding author) led the study design and the drafting. Andreas Leidolf (andreas.leidolf@usu.edu) and Michelle Baker (michelle.baker@usu.edu) led the sampling and contributed to design, analyses, review, and editing.

The authors have declared that no competing interests exist.

[1]
Campbell, D., & Stanley, J. (1963). Experimental and quasi-experimental designs for research. Chicago: Rand McNally.

[2]
Cook, T., & Campbell, D.T. (1979). Quasi-experimentation. New York: Houghton Mifflin.

[3]
Carley, S., & Porter, A.L. (2012). A forward diversity index. Scientometrics, 90(2), 407-427.

[4]
Carley, S., Porter, A.L., Rafols, I., & Leydesdorff, L. (under review). Visualization of disciplinary profiles: Enhanced science overlay maps. Journal of Data and Information Science.

[5]
De Bellis, N. (2009). Bibliometrics and citation analysis: From the Science Citation Index to cybermetrics. Lanham, MD: Scarecrow Press.

[6]
de Nooy, W., Mrvar, A., & Batagelj, V. (2011). Exploratory social network analysis with Pajek (2nd ed.). New York, NY: Cambridge University Press.

[7]
Garner, J., Porter, A.L., Borrego, M., Tran, E., & Teutonico, R. (2013). Facilitating social and natural science cross-disciplinarity: Assessing the Human and Social Dynamics program. Research Evaluation, 22(2), 134-144.

[8]
Garner, J., Porter, A.L., & Newman, N.C. (2014). Distance and velocity measures: Using citations to determine breadth and speed of research impact. Scientometrics, 100(3), 687-703.

[9]
Garner, J., Porter, A.L., Newman, N.C., & Crowl, T.A. (2012). Assessing research network and disciplinary engagement changes induced by an NSF program. Research Evaluation, 21(2), 89-104.

[10]
Glänzel, W., & Schubert, A. (2003). A new classification scheme of science fields and subfields designed for scientometric evaluation purposes. Scientometrics, 56(3), 357-367.

[11]
Hall, K.L., Stokols, D., Stipelman, B.A., Vogel, A.L., Feng, A., Masimore, B., Morgan, G., Moser, R.P., Marcus, S.E., & Berrigan, D. (2012). Assessing the value of team science: A study comparing center- and investigator-initiated grants. American Journal of Preventive Medicine, 42(2), 157-163.

[12]
Hirsch, J.E. (2005). An index to quantify an individual’s scientific research output. Proceedings of the National Academy of Sciences of the United States of America, 102(46), 16569-16572.

[13]
Klavans, R., & Boyack, K.W. (2009). Toward a consensus map of science. Journal of the American Society for Information Science and Technology, 60(3), 455-476.

[14]
Klavans, R., & Boyack, K.W. (2017). Which type of citation analysis generates the most accurate taxonomy of scientific and technical knowledge? Journal of the American Society for Information Science and Technology, 68(4), 984-998.

[15]
Kwon, S., Solomon, G.E.A., Youtie, J., & Porter, A.L. (under review). A measure of interdisciplinary knowledge flow between specific fields: Implications for impact and funding. PLoS One.

[16]
Leydesdorff, L. (2008). On the normalization and visualization of author co-citation data: Salton’s Cosine versus the Jaccard Index. Journal of the American Society for Information Science and Technology, 59(1), 77-85.

[17]
Leydesdorff, L., & Bornmann, L. (2016). The operationalization of “fields” as WoS subject categories (WCs) in evaluative bibliometrics: The cases of “library and information science” and “science & technology studies.” Journal of the American Society for Information Science and Technology, 67(3), 707-714.

[18]
Leydesdorff, L., Carley, S., & Rafols, I. (2013). Global maps of science based on the new Web-of-Science Categories. Scientometrics, 94(2), 589-593.

[19]
Leydesdorff, L., & Rafols, I. (2009). A global map of science based on the ISI subject categories. Journal of the American Society for Information Science and Technology, 60(2), 348-362.

[20]
National Academies, Committee on Facilitating Interdisciplinary Research and Committee on Science, Engineering, and Public Policy (COSEPUP) (2005). Facilitating interdisciplinary research. Washington, DC: National Academies Press.

[21]
Peck, L.R. (2016). Social experiments in practice: The what, why, when, where, and how of experimental design and analysis. New Directions for Evaluation, No. 152 (Winter). New York: Wiley.

[22]
Porter, A.L., Cohen, A.S., Roessner, J.D., & Perreault, M. (2007). Measuring researcher interdisciplinarity. Scientometrics, 72(1), 117-147.

[23]
Porter, A.L., Garner, J., & Crowl, T. (2012). The RCN (Research Coordination Network) experiment: Can we build new research networks? BioScience, 62, 282-288.

[24]
Porter, A.L., Newman, N.C., Myers, W., & Schoeneck, D. (2003). Projects and publications: Interesting patterns in U.S. Environmental Protection Agency research. Research Evaluation, 12(3), 171-182.

[25]
Porter, A.L., & Rafols, I. (2009). Is science becoming more interdisciplinary? Measuring and mapping six research fields over time. Scientometrics, 81(3), 719-745.

[26]
Porter, A.L., Roessner, J.D., & Heberger, A.E. (2008). How interdisciplinary is a given body of research? Research Evaluation, 17(4), 273-282.

[27]
Porter, A.L., Schoeneck, D.J., & Carley, S. (2013). Measuring the extent to which a research domain is self-contained. In Proceedings of the 14th International Conference on Scientometrics and Informetrics (ISSI 2013), July 15-19, Vienna, Austria.

[28]
Porter, A.L., Schoeneck, D.J., Roessner, D., & Garner, J. (2010). Practical research proposal and publication profiling. Research Evaluation, 19(1), 29-44.

[29]
Porter, A.L., Schoeneck, D.J., Solomon, G., Lakhani, H., & Dietz, J. (2013). Measuring and mapping interdisciplinarity: Research & Evaluation on Education in Science & Engineering (“REESE”) and STEM. In American Educational Research Association Annual Meeting, April 27-May 1, San Francisco.

[30]
Rafols, I. (2014). Knowledge integration and diffusion: Measures and mapping of diversity and coherence. In Ding, Y., Rousseau, R., & Wolfram, D. (Eds.), Measuring scholarly impact: Methods and practice (pp. 169-190). Berlin: Springer.

[31]
Rafols, I., & Leydesdorff, L. (2009). Content-based and algorithmic classifications of journals: Perspectives on the dynamics of scientific communication and indexer effects. Journal of the American Society for Information Science and Technology, 60(9), 1823-1835.

[32]
Rafols, I., & Meyer, M. (2010). Diversity and network coherence as indicators of interdisciplinarity: Case studies in bionanoscience. Scientometrics, 82(2), 263-287.

[33]
Rafols, I., Porter, A., & Leydesdorff, L. (2010). Science overlay maps: A new tool for research policy and library management. Journal of the American Society for Information Science and Technology, 61(9), 1871-1887.

[34]
Riopelle, K., Leydesdorff, L., & Jie, L. (2014). How to create an overlay map of science using the Web of Science.

[35]
Stirling, A. (2007). A general framework for analysing diversity in science, technology and society. Journal of the Royal Society Interface, 4(15), 707-719.

[36]
Wagner, C.S., Roessner, J.D., Bobb, K., Klein, J.T., Boyack, K.W., Keyton, J., Rafols, I., & Börner, K. (2011). Approaches to understanding and measuring interdisciplinary scientific research (IDR): A review of the literature. Journal of Informetrics, 5(1), 14-26.

[37]
Wang, J., Thijs, B., & Glänzel, W. (2014). Interdisciplinarity and impact: Distinct effects of variety, balance and disparity (December 22, 2014).

[38]
Yegros-Yegros, A., Amat, C.B., d’Este, P., Porter, A.L., & Rafols, I. (2010). Does interdisciplinary research lead to higher scientific impact? In Science and Technology Indicators (STI) Conference, September 8-11, Leiden, the Netherlands.

[39]
Yegros-Yegros, A., Rafols, I., & d’Este, P. (2015). Does interdisciplinary research lead to higher citation impact? The different effect of proximal and distal interdisciplinarity. PLOS ONE, 10(8).

[40]
Youtie, J., Porter, A.L., Shapira, P., Tang, L., & Benn, T. (2011). The use of environmental, health and safety research in nanotechnology research. Journal of Nanoscience and Nanotechnology, 11(1), 158-166.

[41]
Youtie, J., Solomon, G.E.A., Carley, S., Kwon, S., & Porter, A.L. (2017). Crossing borders: A citation analysis of connections between Cognitive Science and Educational research and the fields in between. Research Evaluation, 26(3), 242-255.

[42]
Zhang, J., Ning, Z., Bai, X., Kong, X., Zhou, J., & Xia, F. (2017). Exploring time factors in measuring the scientific impact of scholars. Scientometrics, 112(3), 1301-1321.

[43]
Zhang, L., Rousseau, R., & Glänzel, W. (2016). Diversity of references as an indicator for interdisciplinarity of journals: Taking similarity between subject fields into account. Journal of the American Society for Information Science and Technology, 67(5), 1257-1265.
