  • Publication
    Leveraging BERT to Improve the FEARS Index for Stock Forecasting
    The Financial and Economic Attitudes Revealed by Search (FEARS) index reflects the attention and sentiment of public investors and is an important factor in predicting stock price returns. In this paper, we take the semantics of the FEARS search terms into account by leveraging Bidirectional Encoder Representations from Transformers (BERT), and then apply a self-attention deep learning model to the refined FEARS index for stock return prediction. We demonstrate the practical benefits of our approach by comparing it to baseline methods.
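    A minimal sketch of the kind of pipeline described above, assuming PyTorch and Hugging Face transformers; the model name, the mean pooling, and the regression head are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

class FearsReturnModel(nn.Module):
    """Encode FEARS search terms with BERT, pool them with self-attention,
    and regress a stock return (illustrative architecture only)."""

    def __init__(self, bert_name="bert-base-uncased", n_heads=4):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(bert_name)
        self.bert = AutoModel.from_pretrained(bert_name)
        hidden = self.bert.config.hidden_size
        self.self_attn = nn.MultiheadAttention(hidden, n_heads, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # predicted return

    def forward(self, search_terms):
        # Each search term ("recession", "gold prices", ...) becomes one vector:
        # the [CLS] embedding produced by BERT.
        enc = self.tokenizer(search_terms, padding=True, return_tensors="pt")
        term_vecs = self.bert(**enc).last_hidden_state[:, 0, :]   # (T, H)
        term_vecs = term_vecs.unsqueeze(0)                        # (1, T, H)
        # Self-attention lets semantically related terms reinforce each other.
        attended, _ = self.self_attn(term_vecs, term_vecs, term_vecs)
        pooled = attended.mean(dim=1)                             # (1, H)
        return self.head(pooled).squeeze(-1)                      # (1,)

model = FearsReturnModel()
pred = model(["recession", "gold prices", "bankruptcy"])
print(pred.shape)  # torch.Size([1])
```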
  • Publication
    Explainable Text-Driven Neural Network for Stock Prediction
    It has been shown that financial news drives fluctuations in stock prices. However, previous work on news-driven financial market prediction has focused only on predicting stock price movement, without providing an explanation. In this paper, we propose a dual-layer attention-based neural network to address this issue. In the first stage, we introduce a knowledge-based method to adaptively extract relevant financial news. We then use an input attention mechanism to give greater weight to the more influential news items and concatenate the day embeddings with the output of the news representation. Finally, we use an output attention mechanism to allocate different weights to different days according to their contribution to stock price movement. Thorough empirical studies based on the historical prices of several individual stocks demonstrate the superiority of the proposed method for stock price prediction compared to state-of-the-art methods.
      Scopus© Citations: 27
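    A hedged sketch of the dual-layer attention idea summarised above, assuming PyTorch; the dimensions, the pre-encoded news vectors, and the day-embedding construction are placeholders rather than the paper's exact design.

```python
import torch
import torch.nn as nn

class DualAttentionStockNet(nn.Module):
    """Input attention over each day's news, concatenation with a day
    embedding, output attention over days, then a movement classifier
    (a simplified stand-in for the architecture described above)."""

    def __init__(self, news_dim=128, day_dim=32):
        super().__init__()
        self.input_attn = nn.Linear(news_dim, 1)              # scores news items
        self.output_attn = nn.Linear(news_dim + day_dim, 1)   # scores days
        self.classifier = nn.Linear(news_dim + day_dim, 2)    # up / down logits

    def forward(self, news, day_emb):
        # news:    (days, items_per_day, news_dim) pre-encoded news vectors
        # day_emb: (days, day_dim) e.g. learned embeddings of each trading day
        w_news = torch.softmax(self.input_attn(news), dim=1)      # (D, N, 1)
        day_news = (w_news * news).sum(dim=1)                     # (D, news_dim)
        day_repr = torch.cat([day_news, day_emb], dim=-1)         # (D, news+day)
        w_day = torch.softmax(self.output_attn(day_repr), dim=0)  # (D, 1)
        summary = (w_day * day_repr).sum(dim=0)                   # (news+day,)
        return self.classifier(summary)

model = DualAttentionStockNet()
logits = model(torch.randn(5, 10, 128), torch.randn(5, 32))
print(logits)  # unnormalised up/down scores
```

    The two attention weight vectors are also what makes such a model inspectable: they indicate which news items and which days contributed most to the prediction.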
  • Publication
    MAEC: A Multimodal Aligned Earnings Conference Call Dataset for Financial Risk Prediction
    In the area of natural language processing, various financial datasets have informed recent research and analysis, including financial news, financial reports, social media, and audio data from earnings calls. We introduce MAEC, a new large-scale, multimodal, text-audio-paired earnings-call dataset based on S&P 1500 companies. We describe the main features of MAEC and how it was collected and assembled, paying particular attention to the text-audio alignment process used. We present the approach taken in this work as a suitable framework for processing similar forms of data in the future. The resulting dataset is more than six times larger than those currently available to the research community, and we discuss its potential in terms of current and future research challenges and opportunities. All resources of this work are available at https://github.com/Earnings-Call-Dataset/
      Scopus© Citations: 20
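    A purely illustrative sketch of what a sentence-level text-audio pairing can look like for an earnings call; the AlignedSegment structure, the sentences, and the timestamps are hypothetical and do not describe MAEC's actual file format or alignment tooling.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class AlignedSegment:
    sentence: str        # one transcript sentence from the call
    audio_start: float   # start time in the recording (seconds)
    audio_end: float     # end time in the recording (seconds)

def pair_text_with_audio(sentences: List[str],
                         timestamps: List[Tuple[float, float]]) -> List[AlignedSegment]:
    """Pair each transcript sentence with the (start, end) span an aligner
    reported for it, yielding text-audio training examples."""
    assert len(sentences) == len(timestamps)
    return [AlignedSegment(s, start, end)
            for s, (start, end) in zip(sentences, timestamps)]

pairs = pair_text_with_audio(
    ["Revenue grew twelve percent year over year.",
     "We expect margins to compress next quarter."],
    [(0.0, 3.4), (3.4, 6.9)])
print(pairs[0])
```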
  • Publication
    HTML: Hierarchical Transformer-based Multi-task Learning for Volatility Prediction
    The volatility forecasting task refers to predicting the amount of variability in the price of a financial asset over a certain period. It is an important mechanism for evaluating the risk associated with an asset and, as such, is of significant theoretical and practical importance in financial analysis. While classical approaches have framed this task as a time-series prediction one – using historical pricing as a guide to future risk forecasting – recent advances in natural language processing have seen researchers turn to complementary sources of data, such as analyst reports, social media, and even the audio data from earnings calls. This paper proposes a novel hierarchical, transformer, multi-task architecture designed to harness the text and audio data from quarterly earnings conference calls to predict future price volatility in the short and long term. This includes a comprehensive comparison to a variety of baselines, which demonstrates very significant improvements in prediction accuracy, in the range 17% - 49% compared to the current state-of-the-art. In addition, we describe the results of an ablation study to evaluate the relative contributions of each component of our approach and the relative contributions of text and audio data with respect to prediction accuracy.
      Scopus© Citations: 64
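    A simplified sketch of a hierarchical, multi-task text-audio model in the spirit of the abstract above, assuming PyTorch; the feature dimensions, the mean pooling, and the two volatility horizons are assumptions, not the published HTML architecture.

```python
import torch
import torch.nn as nn

class HierarchicalVolatilityModel(nn.Module):
    """Utterance-level fusion of text and audio features, a call-level
    Transformer encoder, and two regression heads for short- and long-term
    volatility (a simplified multi-task sketch)."""

    def __init__(self, text_dim=768, audio_dim=29, d_model=256, n_layers=2):
        super().__init__()
        self.fuse = nn.Linear(text_dim + audio_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                           batch_first=True)
        self.call_encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.short_head = nn.Linear(d_model, 1)  # e.g. short-horizon volatility
        self.long_head = nn.Linear(d_model, 1)   # e.g. long-horizon volatility

    def forward(self, text_feats, audio_feats):
        # text_feats:  (batch, utterances, text_dim)  per-sentence text vectors
        # audio_feats: (batch, utterances, audio_dim) per-sentence audio features
        x = self.fuse(torch.cat([text_feats, audio_feats], dim=-1))
        x = self.call_encoder(x)        # contextualise utterances across the call
        call_repr = x.mean(dim=1)       # pool utterances into one call vector
        return self.short_head(call_repr), self.long_head(call_repr)

model = HierarchicalVolatilityModel()
short, long_ = model(torch.randn(2, 40, 768), torch.randn(2, 40, 29))
print(short.shape, long_.shape)  # torch.Size([2, 1]) torch.Size([2, 1])
```

    Sharing one call-level encoder across both horizons is what makes the setup multi-task: the two heads are trained jointly against their respective volatility targets.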
  • Publication
    Multi-level Attention-Based Neural Networks for Distant Supervised Relation Extraction
    We propose a multi-level attention-based neural network for relation extraction based on the work of Lin et al. to alleviate the problem of wrong labelling in distant supervision. In this paper, we first adopt gated recurrent units to represent the semantic information. Then, we introduce a customized multi-level attention mechanism, which is expected to reduce the weights of noisy words and sentences. Experimental results on a real-world dataset show that our model achieves significant improvement on relation extraction tasks compared to both traditional feature-based models and existing neural network-based methods.
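    A compact sketch of word-level plus sentence-level (selective) attention over a bag of sentences, assuming PyTorch; the vocabulary size, relation count, and GRU configuration are illustrative choices, not the paper's exact model.

```python
import torch
import torch.nn as nn

class MultiLevelAttentionRE(nn.Module):
    """GRU sentence encoder with word-level attention, plus sentence-level
    selective attention over a bag of sentences for one entity pair
    (a simplified sketch in the spirit of Lin et al.-style selective attention)."""

    def __init__(self, vocab_size=10000, emb_dim=100, hidden=128, n_relations=53):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.word_attn = nn.Linear(2 * hidden, 1)   # down-weights noisy words
        self.sent_attn = nn.Linear(2 * hidden, 1)   # down-weights noisy sentences
        self.classifier = nn.Linear(2 * hidden, n_relations)

    def forward(self, bag):
        # bag: (sentences, max_len) token ids for one entity pair
        states, _ = self.gru(self.embed(bag))                      # (S, L, 2H)
        w_word = torch.softmax(self.word_attn(states), dim=1)      # (S, L, 1)
        sent_vecs = (w_word * states).sum(dim=1)                   # (S, 2H)
        w_sent = torch.softmax(self.sent_attn(sent_vecs), dim=0)   # (S, 1)
        bag_vec = (w_sent * sent_vecs).sum(dim=0)                  # (2H,)
        return self.classifier(bag_vec)                            # relation logits

model = MultiLevelAttentionRE()
logits = model(torch.randint(0, 10000, (4, 30)))  # a bag of 4 sentences
print(logits.shape)  # torch.Size([53])
```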