Special Collections
Questions & Data for Better Science and Innovation

What are the top questions for better science and innovation, and what data are required to answer them?

1. Which questions need to be answered to promote the development of science and technology?

More specifically, the journal welcomes:

* Perspectives on why these questions need to be answered

* Primary research on addressing these questions

* Reviews of a specific problem, the current status, and gaps in our knowledge

2. Which data are required to answer the above questions?

* Exploring the relationship between questions and data

* Providing and describing one or several datasets

* Comparing the quality of different data sources

* Presenting methods or tools that help to improve data quality


Guest Editors

* Yi Bu, Peking University

* Dongbo Shi, Shanghai Jiaotong University

* Zhesi Shen, National Science Library, CAS

* Ye Sun, University College London

* Yang Wang, Xi'an Jiaotong University

  • Research Papers
    Menghui Li, Fuyou Chen, Sichao Tong, Liying Yang, Zhesi Shen
    Journal of Data and Information Science. 2024, 9(2): 41-55. https://doi.org/10.2478/jdis-2024-0012

    Purpose: The notable increase in retracted papers has attracted considerable attention from diverse stakeholders. Various sources now offer information related to research integrity, including concerns voiced on social media, disclosed lists of paper mills, and retraction notices accessible through journal websites. Despite the availability of such resources, however, there is still no unified platform that consolidates this information, which hinders efficient searching and cross-referencing. It is therefore imperative to develop a comprehensive platform for retracted papers and related concerns. This article introduces “Amend,” a platform designed to integrate information on research integrity from diverse sources.

    Design/methodology/approach: The Amend platform consolidates concerns and lists of problematic articles sourced from social media platforms (e.g., PubPeer, For Better Science), retraction notices from journal websites, and citation databases (e.g., Web of Science, CrossRef). Moreover, Amend includes investigation and punishment announcements released by administrative agencies (e.g., NSFC, MOE, MOST, CAS). Each related paper is marked and can be traced back to its information source via a provided link. Furthermore, the Amend database incorporates various attributes of retracted articles, including citation topics, funding details, open access status, and more. The reasons for retraction are identified and classified as either academic misconduct or honest errors, with detailed subcategories provided for further clarity.

    Findings: Within the Amend platform, a total of 32,515 retracted papers indexed in SCI, SSCI, and ESCI between 1980 and 2023 were identified. Of these, 26,620 (81.87%) were associated with academic misconduct. The retraction rate stands at 6.64 per 10,000 articles (a toy computation of this rate follows the abstract). Notably, the retraction rate for non-gold open access articles differs significantly from that for gold open access articles, and this disparity has progressively widened over the years. Furthermore, the reasons for retraction have shifted from individual behaviors such as falsification, fabrication, plagiarism, and duplication to more organized, large-scale fraudulent practices, including paper mills, fake peer review, and Artificial Intelligence Generated Content (AIGC).

    Research limitations: The Amend platform may not fully capture all retracted and concerning papers, thereby impacting its comprehensiveness. Additionally, inaccuracies in retraction notices may lead to errors in tagged reasons.

    Practical implications: Amend provides an integrated platform for stakeholders to enhance monitoring, analysis, and research on academic misconduct issues. Ultimately, the Amend database can contribute to upholding scientific integrity.

    Originality/value: This study introduces a globally integrated platform for retracted and concerning papers, along with a preliminary analysis of the evolutionary trends in retracted papers.
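
    To make the figures above concrete, the following minimal sketch shows how a record in a platform like Amend might be shaped and how a retraction rate per 10,000 articles is computed. The field names and the derived total are illustrative assumptions, not Amend's actual schema; only the counts quoted in the comments (32,515 retracted papers, 26,620 misconduct cases, a rate of 6.64 per 10,000) come from the abstract.

    ```python
    from dataclasses import dataclass

    # Hypothetical record shape for one entry in a retraction database;
    # the field names are illustrative, not Amend's actual schema.
    @dataclass
    class RetractedPaper:
        doi: str
        source: str          # e.g. "PubPeer", "journal notice", "NSFC announcement"
        source_url: str      # link tracing the entry back to its information source
        gold_oa: bool        # gold open access or not
        reason: str          # e.g. "paper mill", "fake peer review", "honest error"
        misconduct: bool     # academic misconduct vs. honest error

    def retraction_rate_per_10k(retracted: int, total_indexed: int) -> float:
        """Retractions per 10,000 indexed articles."""
        return 10_000 * retracted / total_indexed

    # Figures from the abstract: 32,515 retracted papers at 6.64 per 10,000,
    # which implies roughly 49 million indexed articles over 1980-2023.
    retracted = 32_515
    total_indexed = round(10_000 * retracted / 6.64)
    print(f"rate: {retraction_rate_per_10k(retracted, total_indexed):.2f} per 10,000")
    print(f"misconduct share: {26_620 / retracted:.2%}")  # ~81.87% per the abstract
    ```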

  • Research Papers
    Manman Zhu, Xinyue Lu, Fuyou Chen, Liying Yang, Zhesi Shen
    Journal of Data and Information Science. 2024, 9(1): 11-36. https://doi.org/10.2478/jdis-2024-0003

    Purpose: Accurately assigning the document type of review articles in citation index databases like Web of Science (WoS) and Scopus is important. This study aims to investigate, on a large scale, the document type assignment of review articles in Web of Science, Scopus, and on publishers’ websites.

    Design/methodology/approach: A total of 27,616 papers from 160 journals across 10 review journal series indexed in SCI are analyzed. The document types of these papers, as labeled on the journals’ websites and as assigned by WoS and Scopus, are retrieved and compared to determine assignment accuracy and identify possible reasons for incorrect assignments. Document types labeled on the websites are further differentiated into explicit reviews and implicit reviews, based on whether the website directly indicates that a paper is a review.

    Findings: Overall, WoS and Scopus performed similarly, with an average precision of about 99% and recall of about 80%. However, performance differed between WoS and Scopus both across journal series and within the same series. For implicit reviews, the assignment accuracy of both databases dropped significantly, especially that of Scopus (a toy precision/recall computation follows this abstract).

    Research limitations: The document types used as the gold standard were based on the journal websites’ labels, which were not manually validated one by one. We studied only the labeling performance for review articles published in review journals during 2017-2018; whether these conclusions extend to review articles published in non-review journals, or to the current situation, remains unclear.

    Practical implications: This study provides a reference for the accuracy of document type assignment for review articles in WoS and Scopus, and the identified patterns for assigning implicit reviews may help improve labeling on journal websites, in WoS, and in Scopus.

    Originality/value: This study investigated the accuracy of document type assignment for reviews and identified some patterns of incorrect assignment.
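
    As a minimal sketch of the evaluation described above: treating the website label as the gold standard, precision is the share of database-assigned “review” tags that are truly reviews, and recall is the share of true reviews that the database tags as reviews. The function and toy sample below are illustrative assumptions, not the authors’ actual pipeline.

    ```python
    def review_precision_recall(papers):
        """papers: iterable of (gold_is_review, db_says_review) boolean pairs."""
        tp = sum(1 for gold, db in papers if gold and db)      # reviews tagged correctly
        fp = sum(1 for gold, db in papers if not gold and db)  # non-reviews tagged as reviews
        fn = sum(1 for gold, db in papers if gold and not db)  # reviews the database missed
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        return precision, recall

    # Toy sample: 8 true reviews, the database tags 7 of them and adds no false positives.
    sample = [(True, True)] * 7 + [(True, False)] + [(False, False)] * 2
    p, r = review_precision_recall(sample)
    print(f"precision={p:.0%}, recall={r:.0%}")  # precision=100%, recall=88%
    ```

    On the abstract’s figures, a precision near 99% with a recall near 80% would mean the databases rarely mislabel non-reviews as reviews but miss roughly one in five true reviews.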