  • Publication
    Explainable Text-Driven Neural Network for Stock Prediction
    It has been shown that financial news drives fluctuations in stock prices. However, previous work on news-driven financial market prediction focused only on predicting stock price movement without providing an explanation. In this paper, we propose a dual-layer attention-based neural network to address this issue. In the initial stage, we introduce a knowledge-based method to adaptively extract relevant financial news. Then, we use an input attention mechanism to assign greater weight to the more influential news items and concatenate the day embeddings with the output of the news representation. Finally, we use an output attention mechanism to allocate different weights to different days according to their contribution to stock price movement. Thorough empirical studies based upon the historical prices of several individual stocks demonstrate the superiority of our proposed method in stock price prediction compared to state-of-the-art methods.
    Scopus© Citations: 27
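The dual-layer mechanism described above can be sketched with plain numpy: an input attention weights the news items within each day, the attended news vector is concatenated with a day embedding, and an output attention weights the days. All shapes, weight vectors, and random inputs below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
days, news_per_day, dim = 5, 4, 8

# Hypothetical news embeddings per trading day and per-day embeddings.
news = rng.normal(size=(days, news_per_day, dim))
day_emb = rng.normal(size=(days, dim))

# Input attention: weight the news items within each day.
w_in = rng.normal(size=(dim,))
alpha = softmax(news @ w_in, axis=1)               # (days, news_per_day)
news_repr = (alpha[..., None] * news).sum(axis=1)  # (days, dim)

# Concatenate day embeddings with the attended news representation.
day_repr = np.concatenate([day_emb, news_repr], axis=-1)  # (days, 2*dim)

# Output attention: weight days by their contribution to the movement.
w_out = rng.normal(size=(2 * dim,))
beta = softmax(day_repr @ w_out)                   # (days,)
context = beta @ day_repr                          # (2*dim,) final representation
```

A prediction head (e.g. a logistic layer over `context`) would then classify the price movement.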
  • Publication
    HTML: Hierarchical Transformer-based Multi-task Learning for Volatility Prediction
    The volatility forecasting task refers to predicting the amount of variability in the price of a financial asset over a certain period. It is an important mechanism for evaluating the risk associated with an asset and, as such, is of significant theoretical and practical importance in financial analysis. While classical approaches have framed this task as a time-series prediction one – using historical pricing as a guide to future risk forecasting – recent advances in natural language processing have seen researchers turn to complementary sources of data, such as analyst reports, social media, and even the audio data from earnings calls. This paper proposes a novel hierarchical, transformer, multi-task architecture designed to harness the text and audio data from quarterly earnings conference calls to predict future price volatility in the short and long term. This includes a comprehensive comparison to a variety of baselines, which demonstrates very significant improvements in prediction accuracy, in the range 17% - 49% compared to the current state-of-the-art. In addition, we describe the results of an ablation study to evaluate the relative contributions of each component of our approach and the relative contributions of text and audio data with respect to prediction accuracy.
    Scopus© Citations: 62
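The multi-task idea above can be illustrated in a minimal numpy sketch: a shared encoding of call segments feeds two separate regression heads, one per volatility horizon, trained with a weighted joint loss. The single self-attention pass, the fused text+audio vectors, and the targets are all stand-in assumptions, not the paper's hierarchical transformer.

```python
import numpy as np

rng = np.random.default_rng(1)
n_segments, dim = 12, 16   # one fused text+audio vector per call segment (illustrative)
segments = rng.normal(size=(n_segments, dim))

# A single self-attention pass stands in for the shared hierarchical encoder.
def self_attention(x):
    scores = x @ x.T / np.sqrt(x.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ x

encoded = self_attention(segments).mean(axis=0)   # pooled call representation

# Multi-task heads: separate linear regressors share the encoder output.
w_short, w_long = rng.normal(size=(dim,)), rng.normal(size=(dim,))
pred_short = encoded @ w_short   # e.g. short-horizon volatility
pred_long = encoded @ w_long     # e.g. long-horizon volatility

# Joint objective: a weighted sum of the per-horizon squared errors.
y_short, y_long = 0.2, 0.35      # made-up targets for the sketch
loss = 0.5 * (pred_short - y_short) ** 2 + 0.5 * (pred_long - y_long) ** 2
```

Sharing the encoder lets the short- and long-horizon tasks regularize each other, which is the usual motivation for a multi-task setup.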
  • Publication
    Multiresolution network models
    Many existing statistical and machine learning tools for social network analysis focus on a single level of analysis. Methods designed for clustering optimize a global partition of the graph, whereas projection-based approaches (e.g., the latent space model in the statistics literature) represent in rich detail the roles of individuals. Many pertinent questions in sociology and economics, however, span multiple scales of analysis. Further, many questions involve comparisons across disconnected graphs that will, inevitably be of different sizes, either due to missing data or the inherent heterogeneity in real-world networks. We propose a class of network models that represent network structure on multiple scales and facilitate comparison across graphs with different numbers of individuals. These models differentially invest modeling effort within subgraphs of high density, often termed communities, while maintaining a parsimonious structure between said subgraphs. We show that our model class is projective, highlighting an ongoing discussion in the social network modeling literature on the dependence of inference paradigms on the size of the observed graph. We illustrate the utility of our method using data on household relations from Karnataka, India. Supplementary material for this article is available online.
    Scopus© Citations: 10
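The "invest modeling effort within dense subgraphs, stay parsimonious between them" idea can be sketched as a two-level generator: a single sparse probability governs between-community ties, while a richer latent-distance model governs within-community ties. The specific link function and parameters below are illustrative assumptions, not the paper's exact model class.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

sizes = [30, 20]        # two communities of different sizes (hypothetical)
p_between = 0.02        # parsimonious between-community tie probability

n = sum(sizes)
labels = np.repeat(np.arange(len(sizes)), sizes)
latent = rng.normal(size=(n, 2))   # per-node latent positions

# Start with the sparse between-community rate everywhere ...
P = np.full((n, n), p_between)
# ... then overwrite each within-community block with a detailed
# latent-distance model (closer nodes connect more often).
for k in range(len(sizes)):
    idx = np.where(labels == k)[0]
    d = np.linalg.norm(latent[idx, None] - latent[None, idx], axis=-1)
    P[np.ix_(idx, idx)] = sigmoid(1.5 - d)

# Sample an undirected simple graph.
A = (rng.random((n, n)) < P).astype(int)
A = np.triu(A, 1)
A = A + A.T
```

Because the within-community blocks are modelled independently of the graph's total size, representations of communities can be compared across graphs with different numbers of individuals.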
  • Publication
    Multi-level Attention-Based Neural Networks for Distant Supervised Relation Extraction
    We propose a multi-level attention-based neural network for relation extraction based on the work of Lin et al. to alleviate the problem of wrong labelling in distant supervision. In this paper, we first adopt gated recurrent units to represent the semantic information. Then, we introduce a customized multi-level attention mechanism, which is expected to reduce the weights of noisy words and sentences. Experimental results on a real-world dataset show that our model achieves significant improvement on relation extraction tasks compared to both traditional feature-based models and existing neural network-based methods.
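The two attention levels can be sketched in numpy: word-level attention builds each sentence vector while down-weighting noisy words, then sentence-level attention builds the bag representation while down-weighting wrongly-labelled sentences. Random vectors stand in for the gated-recurrent-unit outputs; every shape and query vector below is an illustrative assumption.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(3)
n_sent, n_words, dim = 3, 6, 8   # a bag of sentences for one entity pair

# Stand-ins for GRU hidden states of each word in each sentence.
words = rng.normal(size=(n_sent, n_words, dim))

# Word-level attention: down-weight noisy words inside each sentence.
q_word = rng.normal(size=(dim,))
a_w = softmax(words @ q_word, axis=1)           # (n_sent, n_words)
sents = (a_w[..., None] * words).sum(axis=1)    # (n_sent, dim)

# Sentence-level attention: down-weight wrongly-labelled sentences in the bag.
q_sent = rng.normal(size=(dim,))
a_s = softmax(sents @ q_sent)                   # (n_sent,)
bag_repr = a_s @ sents                          # (dim,) relation representation
```

A classifier over `bag_repr` would then predict the relation for the entity pair, so a mislabelled sentence with low attention contributes little to the decision.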
  • Publication
    Leveraging BERT to Improve the FEARS Index for Stock Forecasting
    The Financial and Economic Attitudes Revealed by Search (FEARS) index reflects the attention and sentiment of public investors and is an important factor for predicting stock price returns. In this paper, we take into account the semantics of the FEARS search terms by leveraging Bidirectional Encoder Representations from Transformers (BERT), and further apply a self-attention deep learning model to the refined FEARS terms for stock return prediction. We demonstrate the practical benefits of our approach by comparing it against baseline methods.
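The pipeline can be sketched as: embed each search term, run self-attention across the term embeddings, and map a pooled vector to a return prediction. Random vectors stand in for the BERT embeddings (which would come from a pretrained model in practice), and the example terms and prediction head are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(4)
terms = ["recession", "bankruptcy", "unemployment", "gold price"]  # FEARS-style examples
dim = 8

# Stand-ins for per-term BERT embeddings.
E = rng.normal(size=(len(terms), dim))

# Self-attention across the term embeddings captures semantic interactions.
scores = E @ E.T / np.sqrt(dim)
ctx = softmax(scores) @ E                # (n_terms, dim)

# Pool and map to a scalar return prediction (illustrative head).
w = rng.normal(size=(dim,))
pred_return = ctx.mean(axis=0) @ w
```

Using contextual embeddings rather than raw term counts is what lets semantically related search terms reinforce each other in the index.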
  • Publication
    Generalized Random Dot Product graph
    The Random Dot Product model for social networks was introduced in Nickel (2007) and extended by Young and Scheinerman (2007), where asymptotic results such as degree distribution, clustering and diameter were derived in both the dense and sparse cases. Young and Scheinerman (2007) explored two generalizations of the model in the dense case and obtained similar asymptotic results. In this paper, we consider a generalization of the Random Dot Product model and derive its theoretical properties under the dense, sparse and intermediate cases. In particular, properties such as the size of the largest component and connectivity can be derived by applying recent results on inhomogeneous random graphs (Bollobás et al., 2007; Devroye and Fraiman, 2014).
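A Random Dot Product graph is simple to simulate: assign each node a latent vector and connect nodes independently with probability equal to the dot product of their vectors. The sketch below uses Dirichlet-distributed latents (a common choice that keeps dot products in [0, 1]) and a sparsity factor to move between the dense and sparse regimes; these specific choices are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(5)
n, d = 40, 3

# Latent positions on the probability simplex, so inner products lie in [0, 1].
X = rng.dirichlet(np.ones(d), size=n)

P = X @ X.T                       # edge probability = dot product of latents
np.fill_diagonal(P, 0.0)          # no self-loops

rho = 0.5                         # sparsity factor; rho -> 0 gives the sparse regime
A = (rng.random((n, n)) < rho * P).astype(int)
A = np.triu(A, 1)
A = A + A.T                       # undirected simple graph
```

Since edge probabilities vary node by node through the latent vectors, this is exactly the kind of inhomogeneous random graph to which the cited component-size and connectivity results apply.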