1 Introduction
2 Theoretical framework for citation content-targeted evaluation
Figure 1. Diagram of the concept of citation content.
The framework covers academic contribution point extraction from citing sentences, citing-sentence evaluation analysis, contribution point knowledge distribution, and the calculation of statistical indicators for citing sentences. On this basis, the paper establishes CiteOpinion, an evidence-based analysis tool for academic evaluation.
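To make the flow of these stages concrete, a minimal runnable skeleton is sketched below; every identifier and stub body is hypothetical, since the paper does not publish CiteOpinion's API.

```python
# Hypothetical skeleton of the CiteOpinion analysis stages; every name
# and stub body is illustrative, since the tool's API is not published.
from collections import Counter

def extract_citing_sentences(texts):            # cf. Sec. 4.2
    # Stub: keep sentences containing a crude citation marker.
    return [s for t in texts for s in t.split(". ")
            if "[" in s or "et al" in s]

def sentiment_score(sentence):                  # cf. Sec. 4.3
    return 0.0                                  # stub: neutral

def recognize_contribution_points(sentences):   # cf. Sec. 4.4
    return sentences                            # stub: passthrough

def point_distribution(points):                 # cf. Sec. 4.5
    counts = Counter(points)
    return {p: n / len(points) for p, n in counts.items()}

sentences = extract_citing_sentences(
    ["Back-propagation [8] was a major breakthrough. It is widely used."])
print([(s, sentiment_score(s)) for s in sentences])
print(point_distribution(recognize_contribution_points(sentences)))
```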
3 Technical approach for citation content-targeted evaluation
3.1 Basic analysis logic
Table 1. Comparison between CiteOpinion and conventional quantitative analysis tools.

| | Conventional Quantitative Analysis Tools | CiteOpinion |
|---|---|---|
| Data object | Paper metadata | Citing sentences |
| Data granularity | Article | Sentence |
| Analysis focus | Statistical indicators | Text content mining |
| Result form | Relationship diagrams and data sheets | Evaluation evidence text, relationship diagrams, and data sheets |
3.2 Measuring academic contribution of cited papers based on citing sentences
Table 2. Sentiment categorization of citing sentences.

| Sentiment Category | Definition | Sentiment Score Range (E) |
|---|---|---|
| Positive | Expresses a commendatory, approving, or admiring attitude | 0 < E < 1 |
| Neutral | Brief statement or rephrasing, with no obvious expression of sentiment | E = 0 |
| Negative | Describes defects, shortcomings, or mistakes | -1 < E < 0 |
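As a worked illustration of Table 2's score ranges, a small helper mapping a score E to its category might look like this (a sketch, not CiteOpinion's code):

```python
def sentiment_category(e: float) -> str:
    """Map a sentiment score E to its Table 2 category."""
    if not -1 < e < 1:
        raise ValueError("E is expected to lie in (-1, 1)")
    if e > 0:
        return "positive"
    if e < 0:
        return "negative"
    return "neutral"

assert sentiment_category(0.42) == "positive"
assert sentiment_category(0.0) == "neutral"
assert sentiment_category(-0.3) == "negative"
```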
Figure 2. Measurement framework for the academic contribution of a representative paper.
4 Implementation scheme of citation content-targeted evaluation
4.1 System framework
Figure 3. CiteOpinion system framework.
4.2 Automatic recognition of citing sentences
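The recognizer itself is not described in this excerpt. As a minimal illustration of the task's input/output contract, below is a rule-based sketch that flags sentences containing common citation markers; a production system would likely use a trained model rather than these two patterns.

```python
import re

# Two common surface forms of in-text citations: numeric brackets such
# as "[8]" or "[8,9]", and author-year parentheses such as
# "(Rumelhart et al., 1986)". Multi-reference parentheses are not
# handled; this is a deliberately simple sketch.
CITATION_PATTERN = re.compile(
    r"\[\d+(?:\s*[,-]\s*\d+)*\]"
    r"|\([A-Z][\w'-]+(?: et al\.)?,? \d{4}[a-z]?\)"
)

def is_citing_sentence(sentence: str) -> bool:
    """Flag a sentence as citing if it carries a citation marker."""
    return bool(CITATION_PATTERN.search(sentence))

print(is_citing_sentence(
    "Many schemes, e.g., momentum [6] and Adam [18], have been proposed."))
# -> True
```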
4.3 Calculating sentiment scores of citing sentences
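CiteOpinion's scoring model is likewise not shown here. The toy lexicon-based sketch below produces a score E consistent with the open ranges in Table 2; the cue lists are invented for illustration only.

```python
# Toy lexicon-based scorer producing a score E in (-1, 1), consistent
# with Table 2. CiteOpinion's actual scoring model is not shown in this
# excerpt; the cue lists here are illustrative only.
POSITIVE_CUES = ("breakthrough", "significant", "influential", "popular")
NEGATIVE_CUES = ("defect", "shortcoming", "mistake", "fail")

def sentiment_score(sentence: str) -> float:
    words = [w.strip(".,;:()[]").lower() for w in sentence.split()]
    pos = sum(any(w.startswith(c) for c in POSITIVE_CUES) for w in words)
    neg = sum(any(w.startswith(c) for c in NEGATIVE_CUES) for w in words)
    hits = pos + neg
    return 0.0 if hits == 0 else (pos - neg) / (hits + 1)  # keeps |E| < 1

print(sentiment_score(
    "one of the most significant breakthroughs for training neural networks"))
# -> ~0.67 (positive)
```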
4.4 Recognizing academic contribution points of papers
Figure 4. Move structure category recognition model.
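The architecture behind Figure 4 is not reproducible from this excerpt. Purely as a baseline stand-in for the same classification task, one might fit a TF-IDF plus logistic-regression model over hypothetical move labels:

```python
# Classical baseline standing in for the move-structure recognizer of
# Fig. 4: TF-IDF features + logistic regression. The move labels and
# training sentences below are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_sentences = [
    "We describe a new learning procedure for networks of units.",
    "The method outperforms prior approaches on this benchmark.",
    "Earlier systems relied on hand-crafted features.",
]
train_moves = ["method", "result", "background"]  # hypothetical label set

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(train_sentences, train_moves)
print(clf.predict(["A new training procedure is proposed for deep nets."]))
```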
4.5 Knowledge transfer of contribution points
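As a hedged sketch of what transferring contribution points could involve, the snippet below aligns contribution points extracted from citing sentences with the contribution stated in the representative work (cf. Table 5 below), using TF-IDF cosine similarity; the paper's actual matching method is not given in this excerpt.

```python
# Illustrative "transfer" step: align contribution points found in
# citing sentences with the contribution stated in the representative
# work, using TF-IDF cosine similarity as a stand-in for the paper's
# (unspecified) matching method.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

stated = ("We describe a new learning procedure, back-propagation, "
          "for networks of neurone-like units.")
cited_points = [
    "Back-propagation is the central mechanism that allows "
    "neural network methods to learn.",
    "Many schemes have been proposed to optimize the learning "
    "rate in neural network training.",
]

vec = TfidfVectorizer().fit([stated] + cited_points)
sims = cosine_similarity(vec.transform([stated]),
                         vec.transform(cited_points))[0]
for point, sim in zip(cited_points, sims):
    print(f"{sim:.2f}  {point}")
```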
5 A case study: analysis of citing sentences in representative works of A.M. Turing Award laureates
5.1 Selection of representative works
Table 3. Representative papers of Geoffrey Hinton.

| # | Author | Representative Paper | Citing Papers |
|---|---|---|---|
| 1 | Hinton, Geoffrey | Deep learning | 7,630 |
| 2 | Hinton, Geoffrey | Reducing the dimensionality of data with neural networks | 4,509 |
| 3 | Hinton, Geoffrey | A fast learning algorithm for deep belief nets | 4,538 |
| 4 | Hinton, Geoffrey | Learning representations by back-propagating errors | 6,598 |
| 5 | Hinton, Geoffrey | Dropout: A Simple Way to Prevent Neural Networks from Overfitting | 4,142 |
5.2 Evidence-based results analysis
Table 4. Highlights of citing sentences praising the contribution point as a breakthrough.

| # | Highlight of Citing Sentence | Title of Citing Paper |
|---|---|---|
| 1 | The steepest descent algorithm, also known as the error backpropagation (EBP) algorithm [8,9], dispersed the dark clouds on the field of artificial neural networks and could be regarded as one of the most significant breakthroughs for training neural networks. | Application of Neural Networks to Automatic Load Frequency Control |
| 2 | In addition to the development of new ANN algorithms that were more neural-inspired (e.g. Hopfield networks), another major breakthrough that helped lead to a resurgence in neural network research was the rediscovery of the backpropagation technique (LeCun, 1985, Rumelhart et al., 1986, Werbos, 1990). | A historical survey of algorithms and hardware architectures for neural-inspired and neuromorphic computing applications |
| 3 | The next major breakthrough happened in late 80s with the invention of back-propagation and a gradient-based optimization algorithm to train a neural network with one or two hidden layers with any desired number of nodes (Rumelhart et al., 1986). | Meta-analysis of deep neural networks in remote sensing: A comparative study of mono-temporal classification to support vector machines |
Table 5 Comparison of contribution points of representative work and citing sentences. |
Contribution points mentioned by the author in the original representative work | We describe a new learning procedure, back-propagation, for networks of neurone-like units. |
---|---|
Contribution points mentioned in the citing sentences | 1. In MBGD, the learning rate is very important to the convergence speed and quality in training. Many different schemes, e.g., momentum [6], averaging [15], AdaGrad [16], RMSProp [17], Adam [18], etc., have been proposed to optimize the learning rate in neural network training. Adam may be the most popular one among them. [43.84%] |
2. As for the extrapolation, a smooth activation function that only acts on the hidden layer(s) is recommended. Back-propagation is the second part of the algorithm [37]. This is the central mechanism that allows neural network methods to “learn.” [42.36%] | |
3. The feedforward multilayer perceptron is one of the most popular types of ANNs; it was developed by Rumelhart et al. [23], and it is presented in Supplementary 1. This network also consists of an input layer, one or more hidden layers, and one output layer. [10.95%] |
Figure 5. Changes in major new research topics among the citing papers of the representative work.
Figure 6. Distribution of the representative work’s citing sentences across disciplines.