Learning Sequential and Parallel Runtime Distributions for Randomized Algorithms
Title: Learning Sequential and Parallel Runtime Distributions for Randomized Algorithms
Authors: Arbelaez, Alejandro
Permanent link: http://hdl.handle.net/10197/8043
Date: 8-Nov-2016
Abstract: In cloud systems, computation time can be rented by the hour and for a given number of processors. Accurate predictions of the behaviour of both sequential and parallel algorithms have therefore become an important issue, particularly for costly methods such as randomized combinatorial optimization tools. In this work, our objective is to use machine learning algorithms to predict the performance of sequential and parallel local search algorithms. In addition to the classical instance features used by other machine learning tools, we consider data on the sequential runtime distributions of a local search method. This allows us to predict the parallel computation time of a large class of instances with high accuracy, by learning the behaviour of the sequential version of the algorithm on a small number of instances. Experiments with three solvers on SAT and TSP instances indicate that our method works well, with a correlation coefficient of up to 0.85 for SAT instances and up to 0.95 for TSP instances.
Funding Details: Science Foundation Ireland
Type of material: Conference Publication
Publisher: IEEE
Keywords: Optimisation; Decision analytics
DOI: 10.1109/ICTAI.2016.0105
Language: en
Status of Item: Peer reviewed
Is part of: Proceedings of the 2016 IEEE 28th International Conference on Tools with Artificial Intelligence (ICTAI)
Conference Details: ICTAI 2016: 28th International Conference on Tools with Artificial Intelligence, San Jose, California, USA, 6-8 November 2016
Appears in Collections: Insight Research Collection
This item is available under the Attribution-NonCommercial-NoDerivs 3.0 Ireland licence. No item may be reproduced for commercial purposes. For other possible restrictions on use, please refer to the publisher's URL where this item is made available, or to notes contained in the item itself. Other terms may apply.
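The record does not describe the paper's learned predictor, but the standard relationship underpinning this line of work — that the runtime of K independent parallel restarts of a randomized local search is the minimum of K draws from the sequential runtime distribution — can be sketched as follows. This is an illustrative Monte Carlo estimator over an empirical sample of sequential runtimes; the function name and sampling scheme are assumptions for illustration, not the authors' method.

```python
import random

def empirical_parallel_runtime(seq_runtimes, k, n_samples=10000, seed=0):
    """Estimate the expected runtime of k independent parallel runs.

    Assuming each run draws its runtime independently from the
    sequential runtime distribution, the parallel runtime is the
    minimum of k draws; we average that minimum over many resamples
    of the empirical sequential sample.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        total += min(rng.choice(seq_runtimes) for _ in range(k))
    return total / n_samples

# Example: a heavy-tailed sequential sample; more parallel runs
# should drive the expected (minimum) runtime down.
seq = [1.0, 2.0, 4.0, 8.0]
print(empirical_parallel_runtime(seq, 1))  # roughly the sequential mean
print(empirical_parallel_runtime(seq, 4))  # substantially smaller
```

With k = 1 the estimate is just the mean of the sequential sample, and it decreases monotonically in expectation as k grows, which is why learning the sequential distribution on a few instances can be enough to predict parallel behaviour.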