Lu, Jinghui
Research Output
Publication
Supervised and Unsupervised Text Mining for Grey Literature Screening
2021, Lu, Jinghui, 0000-0001-7149-6961
The increasing recognition of the value of Open Innovation (OI) and the Multi-actor Approach (MAA) in research and innovation activities highlights the need for an efficient and effective process for searching and extracting knowledge from a wide range of different sources: not only academic sources, but also practitioners and intermediaries such as businesses, advisors, policymakers and non-government organisations. While knowledge from academic sources can be accessed relatively easily through peer-reviewed publications, knowledge from other sources may be much more widely dispersed. This highlights the potential value of exploring and exploiting grey literature, information produced by organisations for which publishing and distribution is not the primary focus, to support research and innovation activities. However, this is not easy given the lack of structure in grey literature, as well as the potentially large amount of irrelevant material that is likely to be included in any grey literature collection. Machine-learning-based text mining approaches can therefore be used to facilitate the exploration and exploitation of grey literature, and thus to enhance research and innovation activities.

Although the agri-food sector is one of the most important sectors in Ireland, it underperforms other sectors in relation to innovation activities. This thesis therefore proposes using text mining approaches to advance research and innovation activities in the agri-food sector.

There are many challenges in applying text mining approaches to grey literature to support research and innovation activities. In this thesis we focus on two: using semi-supervised approaches to assist innovation scholars in grey literature screening, and using unsupervised corpus comparison to support grey literature content analysis.

To semi-automate grey literature screening, we reframe it as a problem of active learning for grey literature classification. First, we explore the most suitable text representation technique to use in active learning, as text representations play an important role in the performance of an active learning system. To this end, we conduct a benchmark experiment comparing the effectiveness of different text representations in the active learning context, focusing especially on recent high-performing transformer-based text representations. We also incorporate fine-tuning into active learning to further improve the performance of transformer-based text representations. Compared to other texts, grey literature is unstructured and often consists of long documents, so it is crucial to design a text representation that suits grey literature and that also works well in the active learning context, where labelled data is scarce. We therefore develop the Hierarchical BERT Model (HBM) and combine it with certainty sampling. Experiments demonstrate that HBM outperforms state-of-the-art methods when labelled data is scarce, and that it works well with certainty sampling to reduce the workload associated with screening grey literature.

For corpus comparison, we first compare the variants of Jensen-Shannon divergence (JSD) in the literature and identify JSD-Pechenick as the appropriate variant to use in corpus comparison. We then extend JSD-Pechenick to enable multi-corpus comparison. Lastly, we develop a Multi-corpus Topic-based Corpus Comparison (MTCC) approach by integrating topic modelling into corpus comparison. Based on these findings, we propose a pipeline that uses HBM with certainty sampling and MTCC to support innovation scholars in exploring and exploiting agri-food innovation-related grey literature datasets.
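The screening step described in this abstract can be read as a standard pool-based active learning loop whose sampling strategy picks the documents the current model is most certain about. The sketch below illustrates that loop under simplified assumptions: a scikit-learn logistic regression over pre-computed document vectors stands in for the Hierarchical BERT Model, and the function and variable names are hypothetical rather than taken from the thesis.

```python
# Minimal pool-based active learning loop with certainty sampling.
# Assumes documents are already embedded as fixed-length vectors
# (a stand-in for the HBM representations described above) and that
# labels are binary (0 = irrelevant, 1 = relevant), with both classes
# present in the initial seed set.
import numpy as np
from sklearn.linear_model import LogisticRegression

def certainty_sampling_loop(X_pool, y_pool, n_initial=10,
                            batch_size=10, n_rounds=5):
    rng = np.random.default_rng(0)
    labelled = list(rng.choice(len(X_pool), size=n_initial, replace=False))
    unlabelled = [i for i in range(len(X_pool)) if i not in labelled]

    clf = LogisticRegression(max_iter=1000)
    for _ in range(n_rounds):
        clf.fit(X_pool[labelled], y_pool[labelled])
        # Probability of the "relevant" class for every unlabelled document.
        probs = clf.predict_proba(X_pool[unlabelled])[:, 1]
        # Certainty sampling: queue the documents the model is MOST
        # confident are relevant for the screener to label next.
        order = np.argsort(-probs)[:batch_size]
        picked = [unlabelled[i] for i in order]
        labelled.extend(picked)
        unlabelled = [i for i in unlabelled if i not in picked]
    return clf, labelled
```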
Publication
Diverging Divergences: Examining Variants of Jensen Shannon Divergence for Corpus Comparison Tasks
2020-05-13, Lu, Jinghui, Henchion, Maeve, MacNamee, Brian
Jensen-Shannon divergence (JSD) is a distribution similarity measure widely used in natural language processing. In corpus comparison tasks, where keywords are extracted to reveal the divergence between different corpora (for example, social media posts from proponents of different views on a political issue), two variants of JSD have emerged in the literature. One of these uses a weighting based on the relative sizes of the corpora being compared. In this paper we argue that this weighting is unnecessary and, in fact, can lead to misleading results. We recommend that this weighted version not be used. We base this recommendation on an analysis of the JSD variants and on experiments showing how they affect corpus comparison results as the relative sizes of the corpora being compared change.
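To make the distinction concrete, the following sketch computes both the standard equal-weight JSD and a variant weighted by relative corpus size over toy word-frequency distributions. The formulation and the example numbers are illustrative assumptions, not reproductions of the paper's experiments.

```python
# Two JSD variants over word-frequency distributions of two corpora.
import numpy as np

def kl(p, q):
    # Kullback-Leibler divergence in bits; terms with p_i = 0 contribute 0.
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

def jsd(p, q, w1=0.5, w2=0.5):
    """Generalised JSD with mixture weights w1, w2 (w1 + w2 = 1).
    w1 = w2 = 0.5 gives the standard, unweighted JSD; setting the
    weights to the relative corpus sizes gives the weighted variant
    that the paper argues against."""
    m = w1 * p + w2 * q
    return w1 * kl(p, m) + w2 * kl(q, m)

# Toy word distributions from a large and a small corpus (hypothetical sizes).
p = np.array([0.5, 0.3, 0.2])   # large corpus
q = np.array([0.1, 0.3, 0.6])   # small corpus
n_p, n_q = 90_000, 10_000
print(jsd(p, q))                                          # unweighted
print(jsd(p, q, n_p / (n_p + n_q), n_q / (n_p + n_q)))    # size-weighted
```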
Publication
A Topic-Based Approach to Multiple Corpus Comparison
2019-12-06, Lu, Jinghui, Henchion, Maeve, MacNamee, Brian
Corpus comparison techniques are often used to compare different types of online media, for example social media posts and news articles. Most corpus comparison algorithms operate at a word level, and results are shown as lists of individual discriminating words, which makes identifying larger underlying differences between corpora challenging. Most corpus comparison techniques also work on pairs of corpora and do not extend easily to multiple corpora. To counter these issues, we introduce Multi-corpus Topic-based Corpus Comparison (MTCC), a corpus comparison approach that works at a topic level and that can compare multiple corpora at once. Experiments on multiple real-world datasets are carried out to demonstrate the effectiveness of MTCC and to compare the usefulness of different statistical discrimination metrics: the χ2 and Jensen-Shannon divergence metrics are shown to work well. Finally, we demonstrate the usefulness of reporting corpus comparison results via topics rather than individual words. Overall, we show that the topic-level MTCC approach can capture the differences between multiple corpora and present the results in a more meaningful and interpretable way than approaches that operate at a word level.
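As a rough illustration of the topic-level idea (not the authors' implementation), the sketch below fits a single topic model over the pooled corpora and scores each topic by how unevenly its probability mass is spread across the corpora, using the χ2 statistic mentioned in the abstract. The modelling choices (scikit-learn LDA, a simple per-topic contingency test) and all names are assumptions made for illustration.

```python
# Topic-level multi-corpus comparison sketch: shared topic model plus a
# per-topic chi-squared score of how unevenly each topic is used across corpora.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from scipy.stats import chi2_contingency

def discriminating_topics(corpora, n_topics=10):
    """corpora: dict mapping a corpus name to a list of document strings."""
    docs, labels = [], []
    for name, texts in corpora.items():
        docs.extend(texts)
        labels.extend([name] * len(texts))

    counts = CountVectorizer(stop_words="english").fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    doc_topics = lda.fit_transform(counts)            # documents x topics

    # Topic mass per corpus: rows = corpora, columns = topics.
    names = sorted(corpora)
    labels = np.array(labels)
    table = np.vstack([doc_topics[labels == n].sum(axis=0) for n in names])

    scores = []
    for t in range(n_topics):
        # Contingency table per topic: mass in topic t vs. mass elsewhere.
        observed = np.column_stack([table[:, t], table.sum(axis=1) - table[:, t]])
        chi2, _, _, _ = chi2_contingency(observed)
        scores.append((chi2, t))
    # Topics whose usage differs most across the corpora come first.
    return sorted(scores, reverse=True)
```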