Authors: Nicolau, Miguel; Agapitos, Alexandros; O'Neill, Michael; Brabazon, Anthony

Title: Guidelines for defining benchmark problems in Genetic Programming
Type: Conference Publication
Published in: IEEE Congress on Evolutionary Computation, CEC 2015, Sendai, Japan, May 25-28, 2015, Proceedings
Date issued: 2015-05-28
Date deposited: 2017-01-04
Handle: http://hdl.handle.net/10197/8249
DOI: 10.1109/CEC.2015.7257019
Language: en
Keywords: Genetic programming; Benchmarks; Symbolic regression; Regression; Model building
License: https://creativecommons.org/licenses/by-nc-nd/3.0/ie/

Abstract: The field of Genetic Programming has recently seen a surge of attention to the fact that benchmarking and comparison of approaches are often done in non-standard ways, using poorly designed comparison problems. We raise some issues concerning the design of benchmarks, within the domain of symbolic regression, through experimental evidence. A set of guidelines is provided, aiming towards the careful definition and use of artificial functions as symbolic regression benchmarks.

Rights: © 2015 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
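As an illustration of what "careful definition of an artificial function as a symbolic regression benchmark" can involve, the sketch below defines a classic artificial target (Koza-1, f(x) = x⁴ + x³ + x² + x) together with an explicit, seeded sampling scheme and an error measure. This is a hypothetical example for context only, not taken from the paper; the function choice, ranges, sample sizes, and seeds are all assumptions made here for demonstration.

```python
import math
import random

def koza1(x):
    """Artificial target function (the classic Koza-1 benchmark)."""
    return x**4 + x**3 + x**2 + x

def sample_points(n, lo, hi, seed):
    """Draw n uniform-random inputs in [lo, hi] with a fixed seed,
    so the benchmark instance is reproducible across experiments."""
    rng = random.Random(seed)
    return [rng.uniform(lo, hi) for _ in range(n)]

def rmse(model, xs, target=koza1):
    """Root-mean-square error of a candidate model against the target."""
    errors = [(model(x) - target(x)) ** 2 for x in xs]
    return math.sqrt(sum(errors) / len(errors))

# Training inputs inside [-1, 1]; test inputs from a wider range to
# probe extrapolation. Ranges and seeds are illustrative choices --
# exactly the kind of detail a benchmark definition should pin down.
train_xs = sample_points(20, -1.0, 1.0, seed=42)
test_xs = sample_points(20, -2.0, 2.0, seed=43)

# A deliberately crude candidate model, to show the error measure in use.
candidate = lambda x: x**2 + x
print(round(rmse(candidate, train_xs), 4))
```

The point of spelling out the sampling seeds, ranges, and error measure is that two groups running "the same" benchmark otherwise end up comparing results on different problem instances.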