1 Introduction
2 Related work
2.1 Term function
Table 1 Classification of term function (Li, Cheng, & Lu, 2017).

Classification of term function | Authors
---|---
Head, goal, method, other | Kondo (2009)
Technology, effect | Nanba, Kondo, & Takezawa (2010)
Focus, technique, domain | Gupta & Manning (2012)
Technique, application | Tsai, Kundu, & Roth (2013)
Method, task, other | Huang & Wan (2013)
Domain-independent: research topic, research method; domain-related: case, tool, dataset, etc. | Cheng (2015)
2.2 Citation recommendation
2.2.1 Local citation recommendation
2.2.2 Global citation recommendation
3 Proposed approach
Figure 1. Framework of term function-based citation recommendation.
3.1 Analysis of paragraph organization patterns in related work sections
3.1.1 Term function classification scheme at the paragraph level
Table 2 Term functions in the related work section.

Category | Source | Description
---|---|---
Application | Tsai et al. (2013) | Describes existing applications of the core problem and method of this article
Dataset | Cheng (2015) | Describes datasets related to this article
Evaluation | Cheng (2015) | Describes evaluation methods related to this article
Method | Huang & Wan (2013) | Describes previous work related to the core method of the article
Method + Problem | New | Describes the core method of the article and introduces the problems it can be used to solve
Problem | Kondo et al. (2009) | Describes previous work related to the core research problem of the article
Problem + Method | New | Describes the core research problem of the article and introduces existing methods for the problem
Tool | Cheng (2015) | Describes related tools used in this article
Topic-irrelevant | New | Describes previous work that is not closely relevant to this article
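For illustration, the nine paragraph-level term functions above can be written down as a simple enumeration. This is only a minimal sketch to make the scheme concrete; the identifiers are our own and are not part of the original annotation tooling.

```python
from enum import Enum


class TermFunction(Enum):
    """Paragraph-level term functions used to label related-work paragraphs (Table 2)."""
    APPLICATION = "application"            # existing applications of the core problem/method
    DATASET = "dataset"                    # datasets related to the article
    EVALUATION = "evaluation"              # evaluation methods related to the article
    METHOD = "method"                      # previous work on the core method
    METHOD_PROBLEM = "method+problem"      # core method plus the problems it can solve
    PROBLEM = "problem"                    # previous work on the core research problem
    PROBLEM_METHOD = "problem+method"      # core problem plus existing methods for it
    TOOL = "tool"                          # tools used in the article
    TOPIC_IRRELEVANT = "topic-irrelevant"  # previous work that is not closely relevant
```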
3.1.2 Annotation
Figure 2. An annotation example of citation function at the paragraph level.
3.1.3 Analysis of paragraph organization patterns
Figure 3. Statistical analysis of the term function distribution in three fields.
3.2 Term function-based citation recommendation
3.2.1 Problem definition
3.2.2 Citation recommendation algorithms
3.2.3 Term function weighting-based recommendation models
Algorithm 1 Recommendation with baseline models or term function weighting-based recommendation models

Input: candidate paper list
Rank candidate papers by relevance score
For each candidate paper in ranked order, until the new recommendation list holds 30 papers:
  If year of candidate paper < year of original paper:
    Add candidate paper to the new recommendation list
  Else:
    Continue
  End if
Output: new recommendation list
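A minimal Python sketch of this procedure is given below. It assumes each candidate paper carries a publication year and a precomputed relevance score (from BM25 or the word2vec-based VSM, with or without term function weighting); the function and field names are illustrative, not the original implementation.

```python
from typing import Dict, List


def recommend(original_year: int,
              candidates: List[Dict],
              top_n: int = 30) -> List[Dict]:
    """Algorithm 1: rank candidates by relevance, keep only papers published
    before the original (citing) paper, and stop once top_n papers are kept."""
    ranked = sorted(candidates, key=lambda p: p["score"], reverse=True)
    recommendations = []
    for paper in ranked:
        if paper["year"] < original_year:   # only earlier papers can be cited
            recommendations.append(paper)
        if len(recommendations) == top_n:   # new recommendation list holds 30 papers
            break
    return recommendations
```

In the baseline runs the relevance score is the plain BM25 or W2V-VSM similarity; in the TFW runs it is presumably the same similarity reweighted according to term function, so the filtering and cutoff step above is shared by all four models.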
4 Experiments
4.1 Dataset
4.2 Experimental setup
4.3 Evaluation metrics
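Tables 3-5 report precision, recall, and F1 at cutoffs of 5, 10, 20, and 30 recommended papers. The sketch below shows how these top-k metrics can be computed for one query, assuming the ground truth is the set of references actually cited by the original paper; the function and variable names are illustrative.

```python
from typing import List, Set


def precision_recall_f1_at_k(recommended: List[str], cited: Set[str], k: int):
    """Top-k precision, recall, and F1 for one test paper.

    recommended: ranked list of recommended paper ids
    cited:       ids of the references actually cited by the original paper
    """
    top_k = recommended[:k]
    hits = sum(1 for paper_id in top_k if paper_id in cited)
    precision = hits / len(top_k) if top_k else 0.0
    recall = hits / len(cited) if cited else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1
```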
4.4 Performance comparison
Table 3 Recommendation performance in information extraction.

Metric | Run | Top 5 | Top 10 | Top 20 | Top 30
---|---|---|---|---|---
Precision | BM25 | 10.6% | 12.6% | 7.1% | 6.0%
Precision | W2V-VSM | 12.8% | 12.0% | 10.8% | 8.9%
Precision | TFW-BM25 | 13.6% | 16.6% | 9.6% | 7.4%
Precision | TFW-W2V-VSM | 16.4% | 14.9% | 13.7% | 13.0%
Recall | BM25 | 14.9% | 35.1% | 40.0% | 50.6%
Recall | W2V-VSM | 17.5% | 38.7% | 48.9% | 58.6%
Recall | TFW-BM25 | 19.0% | 46.4% | 53.6% | 61.9%
Recall | TFW-W2V-VSM | 23.3% | 48.7% | 57.0% | 66.8%
F1-score | BM25 | 12.4% | 18.5% | 12.1% | 10.7%
F1-score | W2V-VSM | 14.8% | 18.3% | 17.7% | 17.2%
F1-score | TFW-BM25 | 17.5% | 24.5% | 16.3% | 13.2%
F1-score | TFW-W2V-VSM | 19.3% | 22.4% | 22.1% | 21.8%
Table 4 Recommendation performance in sentiment analysis.

Metric | Run | Top 5 | Top 10 | Top 20 | Top 30
---|---|---|---|---|---
Precision | BM25 | 11.1% | 10.5% | 8.2% | 7.0%
Precision | W2V-VSM | 13.6% | 12.9% | 10.5% | 9.5%
Precision | TFW-BM25 | 12.9% | 13.0% | 9.1% | 8.2%
Precision | TFW-W2V-VSM | 16.7% | 15.5% | 13.8% | 11.0%
Recall | BM25 | 15.6% | 29.6% | 46.3% | 59.3%
Recall | W2V-VSM | 19.5% | 34.3% | 52.6% | 67.9%
Recall | TFW-BM25 | 18.1% | 36.7% | 51.5% | 69.6%
Recall | TFW-W2V-VSM | 27.7% | 41.0% | 59.2% | 73.8%
F1-score | BM25 | 13.0% | 15.5% | 13.9% | 12.5%
F1-score | W2V-VSM | 16.0% | 18.7% | 17.5% | 16.7%
F1-score | TFW-BM25 | 15.1% | 19.2% | 15.4% | 17.2%
F1-score | TFW-W2V-VSM | 20.8% | 22.5% | 22.4% | 19.0%
Table 5 Recommendation performance in recommender systems.

Metric | Run | Top 5 | Top 10 | Top 20 | Top 30
---|---|---|---|---|---
Precision | BM25 | 14.3% | 12.1% | 7.5% | 6.4%
Precision | W2V-VSM | 16.4% | 14.8% | 12.3% | 11.7%
Precision | TFW-BM25 | 17.1% | 15.7% | 10.7% | 8.3%
Precision | TFW-W2V-VSM | 21.7% | 19.9% | 16.2% | 15.2%
Recall | BM25 | 21.3% | 36.2% | 44.7% | 57.4%
Recall | W2V-VSM | 24.7% | 44.4% | 54.2% | 73.1%
Recall | TFW-BM25 | 25.5% | 46.8% | 63.8% | 74.5%
Recall | TFW-W2V-VSM | 26.8% | 50.1% | 68.9% | 77.8%
F1-score | BM25 | 17.1% | 18.1% | 12.8% | 11.5%
F1-score | W2V-VSM | 19.7% | 21.9% | 20.0% | 20.2%
F1-score | TFW-BM25 | 20.5% | 23.5% | 18.3% | 14.9%
F1-score | TFW-W2V-VSM | 24.0% | 28.5% | 26.2% | 25.2%