  • Publication
    An evolutionary algorithmic investigation of US corporate payout policy determination
    This Chapter examines cash dividends and share repurchases in the United States during the period 1990 to 2008. In the extant literature a variety of classical statistical methodologies have been adopted, foremost among them the method of panel regression modelling. Instead, in this Chapter, we have informed our model specifications and our coefficient estimates using a genetic program. Our model captures effects from a wide range of pertinent proxy variables related to the agency-cost-based life cycle theory, the signalling theory and the catering theory of corporate payout policy determination. In line with the extant literature, our findings indicate the predominant importance of the agency-cost-based life cycle theory. The adopted evolutionary algorithm approach also provides important new insights concerning the influence of firm size, the concentration of firm ownership and cash flow uncertainty with respect to corporate payout policy determination in the United States.
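As a rough illustration of the approach described above (evolving the model specification rather than fixing it in advance for a panel regression), here is a minimal sketch using the third-party gplearn library; the data, variable roles and settings are hypothetical stand-ins for the chapter's proxy variables, and the chapter does not specify this implementation.

```python
# Hypothetical sketch: symbolic regression over payout-policy proxy variables,
# using gplearn as an assumed GP implementation (not the chapter's own code).
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(0)
n = 500
# Purely synthetic stand-ins for proxy variables such as firm size,
# the retained-earnings ratio and cash-flow volatility.
X = rng.normal(size=(n, 3))
y = 0.4 * X[:, 0] + 0.2 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=n)

gp = SymbolicRegressor(
    population_size=1000,
    generations=20,
    function_set=("add", "sub", "mul", "div", "log", "sqrt"),
    parsimony_coefficient=0.001,   # penalise overly complex specifications
    random_state=0,
)
gp.fit(X, y)
print(gp._program)  # evolved model specification, constants included
```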
  • Publication
    Deep Evolution of Feature Representations for Handwritten Digit Recognition
    A training protocol for learning deep neural networks, called greedy layer-wise training, is applied to the evolution of a hierarchical, feed-forward Genetic Programming-based system for feature construction and object recognition. Results on a popular handwritten digit recognition benchmark clearly demonstrate that two layers of feature transformations improve generalisation compared to a single layer. In addition, we show that the proposed system outperforms several standard Genetic Programming systems that are based on hand-designed features and use different program representations and fitness functions.
      Scopus Citations: 11
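A minimal sketch of the greedy layer-wise idea, using gplearn's SymbolicTransformer and scikit-learn's small digits dataset as stand-ins (the paper's benchmark, program representation and fitness functions differ): evolve a first layer of feature transforms, freeze it, evolve a second layer on the first layer's outputs, then fit a classifier.

```python
# Hedged sketch of greedy layer-wise GP feature construction (stand-in parts).
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from gplearn.genetic import SymbolicTransformer

X, y = load_digits(return_X_y=True)
X = X[:, X.std(axis=0) > 0]                       # drop constant pixel columns
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

layer1 = SymbolicTransformer(n_components=10, generations=10, random_state=0)
F1_tr = layer1.fit_transform(X_tr, y_tr)          # layer 1: evolve, then freeze

layer2 = SymbolicTransformer(n_components=10, generations=10, random_state=1)
F2_tr = layer2.fit_transform(F1_tr, y_tr)         # layer 2: evolve on layer-1 outputs

clf = LogisticRegression(max_iter=1000).fit(F2_tr, y_tr)
acc = clf.score(layer2.transform(layer1.transform(X_te)), y_te)
print(f"test accuracy: {acc:.3f}")
```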
  • Publication
    A preliminary investigation of overfitting in evolutionary driven model induction: implications for financial modelling
    This paper investigates the effects of early stopping as a method to counteract overfitting in evolutionary data modelling using Genetic Programming. Early stopping has been proposed as a method to avoid model overtraining, which has been shown to lead to a significant degradation of out-of-sample performance. Assuming a performance metric that is to be maximised, the most widely used early-stopping criterion is the point in the learning process at which an unbiased estimate of the model's performance begins to decrease after a strictly monotonic increase through the earlier learning iterations. We conduct an initial investigation into the effects of early stopping on the performance of Genetic Programming in symbolic regression and financial modelling. Empirical results suggest that early stopping using the above criterion increases the extrapolation abilities of symbolic regression models, but is by no means the optimal training-stopping criterion in the case of a real-world financial dataset.
      Scopus Citations: 15
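A minimal sketch of the stopping criterion described above: training halts at the first iteration in which the validation (held-out) fitness falls after having risen through the earlier iterations. The learner interface (train_one_iteration, validation_fitness, snapshot) is a hypothetical placeholder, not an API prescribed by the paper.

```python
# Hypothetical sketch of the early-stopping criterion: stop at the first
# iteration where validation fitness decreases after increasing so far.
def train_with_early_stopping(learner, max_iters):
    best_model, best_fitness = None, float("-inf")
    prev_fitness = float("-inf")
    for _ in range(max_iters):
        learner.train_one_iteration()            # hypothetical interface
        fitness = learner.validation_fitness()   # unbiased estimate on held-out data
        if fitness < prev_fitness:               # first dis-improvement: stop here
            break
        prev_fitness = fitness
        if fitness > best_fitness:
            best_model, best_fitness = learner.snapshot(), fitness
    return best_model, best_fitness
```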
  • Publication
    Learning environment models in car racing using stateful genetic programming
    For computational intelligence to be useful in creating game-agent AI, we need to focus on methods that allow the creation and maintenance of models of the environment that the artificial agents inhabit. Maintaining a model allows an agent to plan its actions more effectively by combining immediate sensory information with memories acquired while operating in that environment. To this end, we propose a way to build environment models for non-player characters in car racing games using stateful Genetic Programming. A method is presented in which general-purpose 2-dimensional data structures are used to build a model of the racing track. Results demonstrate that model-building behaviour can be cooperatively coevolved with car-controlling behaviour in modular programs that make use of these models in order to navigate successfully around a racing track.
      Scopus Citations: 7
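A minimal sketch of what stateful primitives over a general-purpose 2-dimensional data structure might look like (hypothetical primitives; the paper's actual instruction set and coevolutionary setup are not reproduced here): the evolved program reads from and writes to a persistent grid that serves as the agent's model of the track.

```python
import numpy as np

# Hypothetical stateful primitives: a persistent 2-D grid acts as the agent's
# memory of the environment, surviving across program invocations.
class GridMemory:
    def __init__(self, width=32, height=32):
        self.cells = np.zeros((height, width))

    def write(self, x, y, value):
        self.cells[int(y) % self.cells.shape[0], int(x) % self.cells.shape[1]] = value
        return value                      # primitives return a value so they can nest

    def read(self, x, y):
        return self.cells[int(y) % self.cells.shape[0], int(x) % self.cells.shape[1]]

# A model-building module might store a sensed track property at the car's
# current grid cell each tick; the car-controlling module can then read cells
# ahead of the car when choosing steering and throttle actions.
memory = GridMemory()
memory.write(x=5, y=7, value=1.0)         # e.g. "track edge observed here"
lookahead = memory.read(x=6, y=7)
```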
  • Publication
    Guidelines for defining benchmark problems in Genetic Programming
    The field of Genetic Programming has recently seen a surge of attention to the fact that benchmarking and comparison of approaches are often done in non-standard ways, using poorly designed comparison problems. We raise some issues concerning the design of benchmarks, within the domain of symbolic regression, through experimental evidence. A set of guidelines is provided, aimed at the careful definition and use of artificial functions as symbolic regression benchmarks.
      Scopus Citations: 26
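As a concrete, hypothetical illustration of what a carefully defined artificial benchmark might look like, the sketch below states the target function, the variable range and the training/test sampling explicitly, so the problem can be reproduced exactly; the particular function shown is illustrative and not taken from the paper.

```python
import numpy as np

# Hypothetical benchmark definition: target function, variable range and
# train/test sampling are all stated explicitly so results are reproducible.
def target(x):
    return x**3 + x**2 + x                     # illustrative artificial function only

rng = np.random.default_rng(0)
x_train = rng.uniform(-1.0, 1.0, size=20)      # training points: 20 samples ~ U[-1, 1]
x_test = np.linspace(-1.0, 1.0, 1000)          # test points: dense grid on [-1, 1]
y_train, y_test = target(x_train), target(x_test)
```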
  • Publication
    Evolutionary learning of technical trading rules without data-mining bias
    In this paper we investigate the profitability of evolved technical trading rules when controlling for data-mining bias. For the first time in the evolutionary computation literature, a comprehensive test for a rule’s statistical significance using Hansen’s Superior Predictive Ability is explicitly taken into account in the fitness function, and multi-objective evolutionary optimisation is employed to drive the search towards individual rules with better generalisation abilities. Empirical results on a spot foreign-exchange market index suggest that increased out-of-sample performance can be obtained after accounting for data-mining bias effects in a multi-objective fitness function, as compared to a single-criterion fitness measure that considers solely the average return.
      Scopus Citations: 14
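A hedged sketch of the multi-objective fitness idea: each candidate rule is scored on its average return and on a data-snooping-adjusted p-value in the spirit of Hansen's Superior Predictive Ability test. The bootstrap below is a deliberately simplified stand-in (a one-sided test on mean excess return over a benchmark), not Hansen's test and not the paper's implementation.

```python
import numpy as np

def snooping_adjusted_pvalue(excess_returns, n_boot=1000, seed=0):
    """Simplified stand-in for an SPA-style test: one-sided bootstrap p-value
    for H0 'mean excess return over the benchmark <= 0'. Hansen's actual SPA
    test additionally handles many rules and serial dependence jointly."""
    rng = np.random.default_rng(seed)
    centred = excess_returns - excess_returns.mean()
    boot_means = np.array([
        rng.choice(centred, size=len(centred), replace=True).mean()
        for _ in range(n_boot)
    ])
    return float(np.mean(boot_means >= excess_returns.mean()))

def fitness(rule_returns, benchmark_returns):
    """Two objectives for a multi-objective EA (hypothetical interface):
    maximise mean return, minimise the data-snooping-adjusted p-value."""
    excess = rule_returns - benchmark_returns
    return float(rule_returns.mean()), snooping_adjusted_pvalue(excess)
```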
  • Publication
    Maximum margin decision surfaces for increased generalisation in evolutionary decision tree learning
    Decision tree learning is one of the most widely used and practical methods for inductive inference. We present a novel method that increases the generalisation of genetically induced classification trees, which employ linear discriminants as the partitioning function at each internal node. Genetic Programming is employed to search the space of oblique decision trees. At the end of the evolutionary run, a (1+1) Evolution Strategy is used to geometrically optimise the boundaries in the decision space, which are represented by the linear discriminant functions. The evolutionary optimisation concerns maximising the decision-surface margin, defined as the smallest distance between the decision surface and any of the samples. Initial empirical results of applying our method to a series of datasets from the UCI repository suggest that model generalisation benefits from the margin maximisation, and that the new method is a very competent approach to pattern classification compared to other learning algorithms.
      Scopus Citations: 11
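A minimal sketch of the margin-maximisation step for a single linear discriminant: the margin is the smallest distance from any sample to the hyperplane w·x + b = 0, and a (1+1) Evolution Strategy perturbs (w, b), keeping a mutant only if it enlarges the margin without moving any sample to the other side of the surface. Names and step sizes are illustrative, not the paper's.

```python
import numpy as np

def margin(w, b, X):
    """Smallest distance from any sample in X to the hyperplane w.x + b = 0."""
    return np.min(np.abs(X @ w + b)) / np.linalg.norm(w)

def maximise_margin(w, b, X, iters=1000, sigma=0.1, seed=0):
    """(1+1)-ES: accept a Gaussian perturbation of (w, b) only if the margin
    grows and no sample changes side of the decision surface."""
    rng = np.random.default_rng(seed)
    sides = np.sign(X @ w + b)
    best = margin(w, b, X)
    for _ in range(iters):
        w2 = w + sigma * rng.normal(size=w.shape)
        b2 = b + sigma * rng.normal()
        if np.array_equal(np.sign(X @ w2 + b2), sides) and margin(w2, b2, X) > best:
            w, b, best = w2, b2, margin(w2, b2, X)
    return w, b
```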
  • Publication
    Early stopping criteria to counteract overfitting in genetic programming
    Early stopping typically stops training the first time validation fitness disimproves. This may not be the best strategy, given that validation fitness can subsequently increase or decrease. We examine, on symbolic regression problems, the effects of stopping at points subsequent to the first disimprovement in validation fitness. Stopping points are determined using criteria that measure generalisation loss and training progress. Results suggest that these criteria can improve the generalisation ability of symbolic regression functions evolved using Grammar-based GP.
      Scopus Citations: 7
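The abstract does not give the exact formulae, so the sketch below is an assumed reading: Prechelt-style definitions of generalisation loss and training progress of the kind commonly used for such stopping criteria, written for error minimisation (lower is better). Thresholds are illustrative.

```python
# Assumed Prechelt-style criteria (the paper names "generalisation loss" and
# "training progress"; its exact definitions are not quoted here).
def generalisation_loss(val_errors):
    """GL(t): percentage by which the current validation error exceeds the best so far."""
    return 100.0 * (val_errors[-1] / min(val_errors) - 1.0)

def training_progress(train_errors, k=5):
    """P_k(t): how much average training error over the last k iterations still
    exceeds the best training error in that window (near 0 means converged)."""
    window = train_errors[-k:]
    return 1000.0 * (sum(window) / (k * min(window)) - 1.0)

def should_stop(train_errors, val_errors, gl_threshold=5.0, pq_threshold=1.0, k=5):
    gl = generalisation_loss(val_errors)
    tp = training_progress(train_errors, k)
    return gl > gl_threshold or (tp > 0 and gl / tp > pq_threshold)
```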
  • Publication
    Understanding Grammatical Evolution: Grammar Design
    (Springer, 2018-09-12)
    A frequently overlooked consideration when using Grammatical Evolution (GE) is grammar design. This is because there is an infinite number of grammars that can specify the same syntax. There are, however, certain aspects of grammar design that greatly affect the speed of convergence and the quality of solutions generated with GE. In this chapter, general guidelines for grammar design are presented. These are domain-independent and can be used when applying GE to any problem. An extensive analysis of their effect, together with results across a large set of experiments, is reported.
      Scopus Citations: 20
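To make the point about equivalent grammars concrete: the two hypothetical grammars below specify exactly the same expression syntax, yet they present GE's genotype-to-phenotype mapping with different numbers of non-terminals and choices per production, which is the kind of design decision the chapter's guidelines address. Neither grammar is taken from the chapter.

```python
# Two hypothetical grammars for the same arithmetic expression language.
# Grammar A spreads the syntax over several non-terminals; Grammar B flattens
# it into a single rule. Both generate identical expressions, but each codon
# in GE faces a different set of choices under each grammar.
GRAMMAR_A = """
<expr> ::= <expr> <op> <expr> | <var>
<op>   ::= + | - | * | /
<var>  ::= x | y
"""

GRAMMAR_B = """
<expr> ::= <expr> + <expr> | <expr> - <expr>
         | <expr> * <expr> | <expr> / <expr>
         | x | y
"""
```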