WORKSHOPS

The presentation slides for this workshop can be found here.
In the era of explainable and interpretable AI, it is increasingly necessary to develop a deep understanding of how algorithms work and how new algorithms compare to existing ones, in terms of both strengths and weaknesses. For this reason, benchmarking plays a vital role in understanding algorithms' behavior. Even though benchmarking is a highly researched topic within the evolutionary computation community, there are still a number of open questions and challenges that should be explored:
- Most commonly used benchmarks are too small and cover only a part of the problem space;
- Benchmarks lack the complexity of real-world problems, making it difficult to transfer the knowledge learned on them into practice;
- We need to develop proper statistical analysis techniques that can be applied depending on the nature of the data;
- We need to develop user-friendly, openly accessible benchmarking software. This enables a culture of sharing resources, ensures reproducibility, and helps to avoid common pitfalls in benchmarking optimization techniques. As such, we need to establish new standards for benchmarking in evolutionary computation research so we can objectively compare novel algorithms and fully demonstrate where they excel and where they can be improved.
The topics of interest for this workshop include, but are not limited to:
- Performance measures for comparing algorithm behavior;
- Novel statistical approaches for analyzing empirical data (a small illustrative sketch follows this list);
- Selection of meaningful benchmark problems;
- Landscape analysis;
- Data mining approaches for understanding algorithm behavior;
- Transfer learning from benchmark experiences to real-world problems;
- Benchmarking tools for executing experiments and analysis of experimental results.
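To make the first two topics above more concrete, here is a minimal sketch of the kind of empirical comparison we have in mind. It is not tied to any particular benchmarking suite; the two toy optimizers, the sphere test function, the budget, and the number of repetitions are all illustrative assumptions.

```python
# Compare two toy optimizers on the sphere function across independent runs,
# then test whether their final fitness values differ with a non-parametric test.
import numpy as np
from scipy.stats import mannwhitneyu

def sphere(x):
    return float(np.sum(x ** 2))

def random_search(budget, dim, rng):
    best = np.inf
    for _ in range(budget):
        best = min(best, sphere(rng.uniform(-5, 5, dim)))
    return best

def one_plus_one_es(budget, dim, rng, sigma=0.5):
    x = rng.uniform(-5, 5, dim)
    fx = sphere(x)
    for _ in range(budget - 1):
        y = x + sigma * rng.standard_normal(dim)
        fy = sphere(y)
        if fy <= fx:            # accept non-worsening candidate solutions
            x, fx = y, fy
    return fx

rng = np.random.default_rng(42)
runs_rs = [random_search(1000, 10, rng) for _ in range(25)]
runs_es = [one_plus_one_es(1000, 10, rng) for _ in range(25)]

# Two-sided Mann-Whitney U test on the samples of final fitness values.
stat, p = mannwhitneyu(runs_rs, runs_es, alternative="two-sided")
print(f"median RS={np.median(runs_rs):.3g}, median ES={np.median(runs_es):.3g}, p={p:.3g}")
```

Even such a simple setup already raises the questions the workshop targets: which performance measure to report (final fitness, expected running time, anytime behavior), how many repetitions are needed, and which statistical test is appropriate for the resulting data.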
We particularly welcome position statements addressing or identifying open challenges in benchmarking optimization techniques. The schedule will be designed to encourage a high level of interactivity — expect a real workshop rather than (yet another) mini-conference!
Organisers (alphabetically):
- Thomas Bäck (LIACS, Leiden University, The Netherlands)
- Thomas Bartz-Beielstein (TH Cologne, Germany)
- Jakob Bossek (The University of Adelaide, Adelaide, Australia)
- Bilel Derbel (University of Lille, Lille, France)
- Carola Doerr (CNRS and Sorbonne University, Paris, France)
- Tome Eftimov (Stanford University, USA and Jožef Stefan Institute, Ljubljana, Slovenia)
- Pascal Kerschke (University of Münster, Germany)
- William La Cava (University of Pennsylvania, USA)
- Arnaud Liefooghe (University of Lille, France)
- Manuel López-Ibáñez (University of Manchester, UK)
- Boris Naujoks (TH Cologne, Germany)
- Pietro S. Oliveto (University of Sheffield, UK)
- Patryk Orzechowski (University of Pennsylvania, USA)
- Mike Preuss (LIACS, Leiden University, The Netherlands)
- Jérémy Rapin (Facebook AI Research, Paris, France)
- Ofer M. Shir (Tel-Hai College and Migal Institute, Israel)
- Olivier Teytaud (Facebook AI Research, Paris, France)
- Heike Trautmann (University of Münster, Germany)
- Ryan J. Urbanowicz (University of Pennsylvania, USA)
- Vanessa Volz (modl.ai, Copenhagen, Denmark)
- Markus Wagner (The University of Adelaide, Australia)
- Hao Wang (LIACS, Leiden University, The Netherlands)
- Thomas Weise (Institute of Applied Optimization, Hefei University, Hefei, China)
- Borys Wróbel (Adam Mickiewicz University, Poland)
- Aleš Zamuda (University of Maribor, Slovenia)
You can find more information about the workshop and its organizers at https://sites.google.com/view/benchmarking-network/home/activities/PPSN20.
The slides for this workshop can be found here.
Stochastic local search (SLS) algorithms are among the most powerful techniques for solving computationally hard problems in many areas of computer science, operational research and engineering. SLS techniques range from rather simple constructive and iterative improvement algorithms to general-purpose methods, also widely known as metaheuristics, such as ant colony optimisation, evolutionary computation, iterated local search, memetic algorithms, simulated annealing, tabu search and variable neighbourhood search.
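As a concrete illustration of this spectrum, the sketch below implements a basic iterated local search (local search, perturbation, acceptance) on a toy random QUBO instance. The instance, the first-improvement hill climber, and all parameter values are illustrative assumptions rather than a recommended design.

```python
# Minimal iterated local search (ILS) on a toy random QUBO: maximize x^T Q x
# over bit vectors x in {0,1}^N.
import random
import numpy as np

rng_np = np.random.default_rng(7)
N = 40
Q = rng_np.normal(size=(N, N))
Q = (Q + Q.T) / 2                      # symmetric random QUBO matrix

def fitness(bits):
    x = np.asarray(bits)
    return float(x @ Q @ x)

def local_search(bits):
    """First-improvement single-bit-flip hill climbing until no flip improves."""
    bits = bits[:]
    current = fitness(bits)
    improved = True
    while improved:
        improved = False
        for i in range(N):
            bits[i] ^= 1
            candidate = fitness(bits)
            if candidate > current:
                current = candidate
                improved = True
            else:
                bits[i] ^= 1           # undo the non-improving flip
    return bits, current

def perturb(bits, strength, rng):
    """Flip `strength` randomly chosen bits to escape the current local optimum."""
    bits = bits[:]
    for i in rng.sample(range(N), strength):
        bits[i] ^= 1
    return bits

def iterated_local_search(iterations=50, strength=4, seed=0):
    rng = random.Random(seed)
    current, current_fit = local_search([rng.randint(0, 1) for _ in range(N)])
    best, best_fit = current, current_fit
    for _ in range(iterations):
        candidate, cand_fit = local_search(perturb(current, strength, rng))
        if cand_fit >= current_fit:    # accept equal-or-better solutions
            current, current_fit = candidate, cand_fit
        if cand_fit > best_fit:
            best, best_fit = candidate, cand_fit
    return best, best_fit

best, best_fit = iterated_local_search()
print("best QUBO value found:", round(best_fit, 3))
```

Design questions such as the perturbation strength, the acceptance criterion, and the choice of local search are exactly the kind of engineering decisions the workshop is concerned with.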
The SLS Workshop solicits contributions dealing with any aspect of engineering stochastic local search algorithms. Typical, but not exclusive, topics of interest are:
- New algorithmic developments
- Automated design of SLS algorithms
- In-depth experimental studies of SLS algorithms
- Theoretical analysis of SLS behaviour and its impact on algorithm design
- Extensions to multi-objective optimisation
- Applications of SLS algorithms to real-world problems
Organisers’ bios
Holger H. Hoos is a full professor of machine learning at Leiden University (Netherlands) and Adjunct Professor of Computer Science at the University of British Columbia (Canada). His research interests are focused on empirical algorithmics with applications in artificial intelligence, bioinformatics and operations research. In particular, he works on automated algorithm design and on stochastic local search algorithms. Since 2015, he has been a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI); he is a past president of the Canadian Association for Artificial Intelligence and one of the initiators of CLAIRE, an initiative by the European AI community seeking to strengthen European excellence in AI.
Laetitia Jourdan is a full Professor in Computer Science at the University of Lille / CRIStAL. Her areas of research are modeling data mining tasks as combinatorial optimization problems, solving methods based on metaheuristics, incorporating learning into metaheuristics, and multiobjective optimization. She holds a PhD in combinatorial optimization from the University of Lille 1 (France). From 2004 to 2005, she was a research associate at the University of Exeter (UK). She is (co)author of more than 100 papers published in international journals, book chapters, and conference proceedings. She has organized several international conferences (LION 2015, MIC 2015, etc.) and is a review editor for Frontiers in Big Data. She has served as a member of the programme committees of the major conferences in her research domain (GECCO, CEC, PPSN, ...).
Marie-Eléonore Kessaci is an Associate Professor (maître de conférences) at the University of Lille, France. She holds a PhD in Computer Science from the University of Lille, France. From 2012 to 2013, she held a postdoctoral position at the Université Libre de Bruxelles. Her research interests focus on the knowledge-based design of metaheuristics in combinatorial optimization and include stochastic local search algorithms, hybrid methods, automatic algorithm configuration, parameter control and fitness landscape analysis. She has served as a member of the program committees of international conferences and workshops (LION, GECCO).
Thomas Stuetzle is a research director (directeur de recherches) of the Fund for Scientific Research (F.R.S.-FNRS) of Belgium's French Community. He holds a PhD from Darmstadt University of Technology, Germany (1998). Since 2016, he has been a Fellow of the Institute of Electrical and Electronics Engineers (IEEE). He has published more than 200 peer-reviewed scientific articles in the area of computational intelligence and operations research. He is an associate editor of the journals Applied Mathematics and Computation, Computational Intelligence, Evolutionary Computation, International Transactions in Operational Research and Swarm Intelligence, and serves on the editorial boards of seven other journals. He has edited or co-edited 22 books or proceedings of international conferences or workshops. He has served as a member of the programme committees of more than 85 international conferences or workshops over the last five years.
Nadarajen Veerapen is an Associate Professor (maître de conférences) at the University of Lille, France. Previously he was a research fellow at the University of Stirling in Scotland. He holds a PhD in Computer Science from the University of Angers, France. His research interests include local search, hybrid methods, automatic algorithm configuration, adaptive operator selection, local optima networks, search-based software engineering and visualisation. He has previously co-organised the workshop on Landscape-Aware Heuristic Search at PPSN 2016 and GECCO 2017-2019. He is the Electronic Media Chair for GECCO 2020 and has served as Publicity Chair for GECCO 2019 and as Student Affairs Chair for GECCO 2017 and 2018.
You can find more information about the workshop and its organizers at https://sites.google.com/view/sls2020/home.
In practical applications, multi-objective (MO) optimization is usually treated secondarily due to its rather deterrent complexity and difficulty (compared to single-objective (SO) optimization problems). Therefore, practitioners generally scalarize their problems, e.g., by optimizing weighted sums of the underlying objectives. A major reason for this behavior is the much lower tangibility of MO problems; it is extremely challenging to imagine interaction effects between two or more decision variables and two or more objectives simultaneously (let alone visualize them within a single plot). Even researchers usually limit themselves to visualizing only the Pareto fronts of MO problems, i.e., the image of the set of MO global optima. As a result, our "knowledge" about MO problems is highly influenced by our understanding of SO problems. For instance, it is well known that multimodality can be very challenging in SO optimization. Thus, for a long time, research simply inferred that such structures cause similar problems for optimizers in the MO setting. Consequently, such structures have regularly been considered in the design of MO benchmark problems. Yet, recent works have shown that multimodality might in fact even facilitate MO optimization.
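To make the weighted-sum scalarization mentioned above concrete, the following small sketch uses a purely illustrative bi-objective problem (not taken from any benchmark suite): f1(x) = x^2 and f2(x) = (x - 2)^2 on a one-dimensional decision space. Each weight vector turns the MO problem into a single-objective one, whose optimum is found here by brute force.

```python
# Weighted-sum scalarization of an illustrative convex bi-objective problem.
import numpy as np

def f1(x):
    return x ** 2

def f2(x):
    return (x - 2.0) ** 2

def scalarized_optimum(w1, w2, grid):
    """Minimize w1*f1 + w2*f2 by brute force over a grid of candidate solutions."""
    values = w1 * f1(grid) + w2 * f2(grid)
    return grid[np.argmin(values)]

grid = np.linspace(-1.0, 3.0, 4001)
for w1 in (1.0, 0.75, 0.5, 0.25, 0.0):
    w2 = 1.0 - w1
    x = scalarized_optimum(w1, w2, grid)
    print(f"w=({w1:.2f},{w2:.2f})  x*={x:.3f}  f=({f1(x):.3f},{f2(x):.3f})")
```

For this convex example, every Pareto-optimal trade-off is reachable by some weight vector; for problems whose Pareto fronts have concave regions, weighted sums cannot reach those regions at all, which is one more reason why the purely SO view of an MO problem can be misleading.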
In an attempt to reduce our knowledge deficit in this particular domain, the workshop shall provide a platform for researchers to actively exchange ideas that improve our understanding of (multimodal) MO continuous optimization problems. We therefore welcome contributions related to the following non-exclusive list of topics:
- Characteristics of continuous and combinatorial MO optimization problems.
- High-level landscape characteristics (such as ridges, plateaus, etc.) as well as exploratory landscape features.
- Empirical and theoretical results on the transferability of structural properties from SO to MO problems.
- Multiobjectivization strategies for SO problems.
- Techniques for visualizing landscapes of (multimodal) MO problems.
- Algorithms and/or algorithm building blocks (e.g., operators, selection mechanisms) that are capable of handling or exploiting discovered challenges in problem structures (such as multimodality, ill-conditioned landscapes, etc.).
- Consequences for the design of benchmark problems and evaluation of existing test problem suites.
Organisers’ bios
Christian Grimme (https://www.wi.uni-muenster.de/department/statistik/people/christian-grimme) is a postdoctoral researcher and associate professor in the "Information Systems and Statistics" group at the University of Münster (Germany). He received his diploma degree in Computer Science in 2006 and completed his PhD thesis on multi-objective optimization in 2012 at TU Dortmund University (Germany). In 2019, he received the venia legendi (habilitation) in "Information Systems" for his work on "Hybridization of Algorithmic Decision Support in Optimization and Data Analysis". His main research topics comprise evolutionary multi-objective optimization, with a special focus on the fundamentals of MOO and their application in algorithm development.
Pascal Kerschke (http://erc.is/p/kerschke) is a postdoctoral researcher and assistant professor in the "Information Systems and Statistics" group at the University of Münster (Germany). Prior to completing his PhD studies in 2017, he received an M.Sc. degree in Data Sciences (2013) from the Faculty of Statistics at TU Dortmund University (Germany).
His main research interests are algorithm selection (applied to continuous optimization and TSP problems), as well as Exploratory Landscape Analysis (ELA) for single- and multi-objective continuous (black-box) optimization problems. Furthermore, he is the main developer of related R packages such as "flacco" and "mogsa".
Heike Trautmann (http://erc.is/p/trautmann) is professor of Information Systems and Statistics at the University of Münster, Germany, and an expert on (multiobjective) evolutionary optimization, data science, exploratory landscape analysis, and automated algorithm selection. She received her PhD and her habilitation in Statistics at TU Dortmund University, Germany. She is an associate editor of the Evolutionary Computation journal. In 2017, she was appointed Pascal Professor at LIACS, Leiden University, and organized the Evolutionary Multi-Criterion Optimization (EMO) Conference in Münster. Currently, she is a director of the European Research Center for Information Systems (ERCIS) as well as Vice-Dean for International Affairs at the School of Business and Economics of the University of Münster.
Michael T.M. Emmerich (https://www.universiteitleiden.nl/en/staffmembers/michael-emmerich) is an associate professor at the Leiden Institute of Advanced Computer Science (LIACS), Leiden University, The Netherlands, where he leads the "Multidisciplinary Optimization and Decision Analysis (MODA)" research group. In 2005, he received his Dr. rer. nat. from the University of Dortmund for his work on "Single- and Multiobjective Evolutionary Design Optimization Assisted by Gaussian Random Fields" (supervisor: Prof. Dr. Hans-Paul Schwefel). He has also worked as a researcher for the Laboratory for Technical Turbomachinery at NTU Athens (Greece), ACCESS e.V. (materials science) at RWTH Aachen, the University of the Algarve (Faro), Instituto Superior Técnico (Lisbon), and the FOM Institute AMOLF (Amsterdam). His main research interests are the foundations of indicator-based multi-objective optimization and complex systems analysis, with applications in process engineering, drug discovery, and the biomedical sciences.
Hao Wang (https://www.universiteitleiden.nl/en/staffmembers/hao-wang) is a postdoctoral researcher at the Leiden Institute of Advanced Computer Science (LIACS), Leiden University, The Netherlands, where he is a member of the "Natural Computing" group. He received his master's degree in Computer Science from Leiden University in 2013 and obtained his PhD in Computer Science from the same university in 2018. His research interests are proposing, improving and analyzing stochastic optimization algorithms, especially evolution strategies and Bayesian optimization. In addition, he develops statistical machine learning algorithms for big and complex industrial data and aims at combining state-of-the-art optimization algorithms with data mining and machine learning techniques.
You can find more information about the workshop and its organizers at http://erc.is/go/mmmoo2020.
Research on Social Network Analysis (SNA) and Mining has grown exponentially in recent years. One reason for this increasing interest is that the application domain offers a particularly fertile ground for testing and developing advanced techniques to extract valuable information from online social networks. Current open problems, such as community detection in (static and dynamic) networks, analysis of information diffusion, sentiment analysis and opinion mining, or visualization, call for new techniques able to handle the huge complexity of the available information. Evolutionary algorithms (such as multi- and many-objective approaches, differential evolution, or evolution strategies) and swarm algorithms (such as ant colony optimization, particle swarm optimization, artificial bee colony, or bat algorithms, among many others) have been successfully applied to SNA problems. This workshop aims to be a cross-disciplinary venue that brings together experts, researchers and practitioners from several communities, including computer science, physics, mathematics, marketing, sociology, and psychology, to explore the use of evolutionary and bio-inspired models, techniques and algorithms for working with the data, knowledge, and patterns, often non-trivial or hidden, stored in social networks.
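As a small, purely illustrative example of how an evolutionary algorithm can be applied to one of the problems mentioned above (community detection), the sketch below evolves node-label vectors to maximize Newman's modularity on the Zachary karate club graph. It assumes the networkx package is available; the representation, operators and parameter values are arbitrary choices for illustration only.

```python
# A simple (mu+lambda) EA maximizing modularity for community detection.
import random
import networkx as nx
from networkx.algorithms.community import modularity

G = nx.karate_club_graph()
NODES = list(G.nodes())
K = 4                                  # maximum number of communities per individual

def to_partition(labels):
    """Turn a label vector into a list of non-empty node sets."""
    groups = {}
    for node, lab in zip(NODES, labels):
        groups.setdefault(lab, set()).add(node)
    return list(groups.values())

def fitness(labels):
    return modularity(G, to_partition(labels))

def mutate(labels, rng):
    child = labels[:]
    child[rng.randrange(len(child))] = rng.randrange(K)   # relabel one node
    return child

def evolve(pop_size=20, offspring=40, generations=100, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randrange(K) for _ in NODES] for _ in range(pop_size)]
    for _ in range(generations):
        children = [mutate(rng.choice(pop), rng) for _ in range(offspring)]
        pop = sorted(pop + children, key=fitness, reverse=True)[:pop_size]
    return pop[0]

best = evolve()
print("modularity:", round(fitness(best), 3), "communities:", len(to_partition(best)))
```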
Organisers’ bios
David Camacho – Technical University of Madrid, Spain. Email: camacho@upm.es
David Camacho is currently working as an Associate Professor in the Departamento de Sistemas Informáticos at the Universidad Politécnica de Madrid (Technical University of Madrid, Spain). His research interests include data mining, evolutionary computation (GA, GP), swarm intelligence (ACO, PSO, ABC), machine learning (clustering, hidden Markov models, classification, deep learning), and social network analysis (community finding problems, graph theory), among others. He has participated in, and led (as PI or coordinator), more than 40 nationally and internationally funded research projects (H2020, ISPF, DG, Erasmus+, etc.).
Javier del Ser – Tecnalia, Basque Center for Applied Mathematics (BCAM) & University of the Basque Country (UPV/EHU), Spain. Email: javier.delser@tecnalia.com
Javier Del Ser (M'07-SM'12) received his first PhD degree (cum laude) in Electrical Engineering from the University of Navarra (Spain) in 2006, and a second PhD degree (cum laude, extraordinary PhD prize) in Computational Intelligence from the University of Alcalá (Spain) in 2013. He is currently a Research Professor in Artificial Intelligence and leading scientist of the OPTIMA (Optimization, Modelling and Analytics) research area at TECNALIA, Spain. He is also an adjunct professor at the University of the Basque Country (UPV/EHU), an invited research fellow at the Basque Center for Applied Mathematics (BCAM), and a senior AI advisor at the technological start-up SHERPA.AI. He is also the coordinator of the Joint Research Lab between TECNALIA, UPV/EHU and BCAM, and the director of the TECNALIA Chair in Artificial Intelligence at the University of Granada (Spain). His research interests are in the design of Artificial Intelligence methods for data mining and optimization applied to problems emerging from Industry 4.0, Intelligent Transportation Systems, Smart Mobility, Logistics and Health, among others. He has published more than 280 scientific articles, co-supervised 10 PhD theses, edited 7 books, co-authored 9 patents, and participated in or led more than 40 research projects. He is an Associate Editor of tier-one journals in areas related to Data Science and Artificial Intelligence, such as Information Fusion, Swarm and Evolutionary Computation, and Cognitive Computation. He is an IEEE Senior Member and a recipient of the Bizkaia Talent prize for his research career.
More information about the workshop can be found on this webpage.
Over the years, Evolutionary Computation (EC) researchers have developed a plethora of optimization algorithms and investigative approaches for dealing with continuous optimization problems. Predominantly, these algorithms are tested on purely artificial problems, such as the well-known and frequently used BBOB or CEC test suites.
At the same time, the Machine Learning (ML) community faces many challenging practical optimization problems when fitting parameterized models to data. Unfortunately, however, there is little knowledge exchange between the two communities (EC and ML). Recent work shows how ML methods can actually benefit from incorporating EC strategies (see, for instance, https://arxiv.org/abs/1802.08842, https://arxiv.org/abs/1806.05695, or https://blog.openai.com/evolution-strategies/).
The aim of this workshop is to gain further insights into common ML problems, with the overall goal of improving the understanding of their specific structure, and to investigate how and when EC could be used to solve certain ML tasks. By discussing common ML problems in more detail, we want to reduce the gap between ML and EC. We therefore welcome contributions related to the following non-exclusive list of topics:
- What do the landscapes of ML problems look like?
- Which ML problems are of benign/malign nature for EC approaches (i.e., when should one consider incorporating EC into ML methods)?
- How similar are such problems to instances from popular EC benchmark suites?
In order to facilitate first analyses, we provide an initial set of ML problems and invite participants to contribute their insights into these problems. Possible approaches could consist of – but are definitely not limited to – applying exploratory landscape analysis, analyzing algorithm performances (with a focus on “why”), or investigating already existing problem solutions w.r.t. quality and/or diversity aspects.
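As one illustration of viewing an ML fitting task as a black-box optimization problem, the sketch below optimizes the mean squared error of a tiny linear model with a simple (1+1) evolution strategy. This is not one of the problems provided by the workshop; the synthetic data, the model, and the step-size rule are made-up assumptions for illustration.

```python
# An ML model-fitting task treated as a black-box objective and optimized with
# a simple (1+1) evolution strategy.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=100)

def loss(w):
    """Black-box view of the training objective: MSE of the linear model."""
    return float(np.mean((X @ w - y) ** 2))

def one_plus_one_es(objective, dim, budget=2000, sigma=0.3):
    x = rng.normal(size=dim)
    fx = objective(x)
    for _ in range(budget):
        candidate = x + sigma * rng.normal(size=dim)
        fc = objective(candidate)
        if fc <= fx:                   # greedy acceptance
            x, fx = candidate, fc
            sigma *= 1.1               # crude 1/5th-success-style step-size adaptation
        else:
            sigma *= 0.98
    return x, fx

w_es, f_es = one_plus_one_es(loss, dim=3)
print("ES solution:", np.round(w_es, 3), "loss:", round(f_es, 4))
```

Studying where such simple EC methods succeed or fail on real ML objectives, and how those landscapes compare to standard EC benchmarks, is precisely the kind of analysis the workshop invites.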
More information about this workshop: http://erc.is/go/umlop2020.
Organisers’ bios
Marcus Gallagher (http://staff.itee.uq.edu.au/marcusg/) is an Associate Professor in the School of Information Technology and Electrical Engineering at The University of Queensland. He received his PhD degree from The University of Queensland in 2000. His research interests are in artificial intelligence, more specifically optimization, metaheuristics and evolutionary computation, machine learning, and exploratory data analysis and visualization. He is particularly interested in understanding optimization problems, exploratory landscape analysis and the experimental evaluation of algorithms. He has previously organized workshops at conferences and has served as track Co-Chair (ENUM) for GECCO.
Pascal Kerschke (http://erc.is/p/kerschke) is a postdoctoral researcher and assistant professor in the "Information Systems and Statistics" group at the University of Münster (Germany). Prior to completing his PhD studies in 2017, he received an M.Sc. degree in Data Sciences (2013) from the Faculty of Statistics at TU Dortmund University (Germany).
His main research interests are algorithm selection (applied to continuous optimization and TSP problems), as well as Exploratory Landscape Analysis (ELA) for single- and multi-objective continuous (black-box) optimization problems. Furthermore, he is the main developer of related R packages such as "flacco" and "mogsa".
Mike Preuss (https://www.universiteitleiden.nl/en/staffmembers/mike-preuss) is an Assistant Professor at LIACS, the computer science institute of Universiteit Leiden in the Netherlands. Previously, he was with ERCIS (the information systems institute of WWU Münster, Germany), and before that with the Chair of Algorithm Engineering at TU Dortmund, Germany, where he received his PhD in 2013. His main research interests lie in the field of evolutionary algorithms for real-valued problems, namely multimodal and multiobjective optimization, and in computational intelligence and machine learning methods for computer games, especially procedural content generation (PCG) and real-time strategy (RTS) games.
Olivier Teytaud (https://www.lri.fr/~teytaud/) is a research scientist at Facebook. He has been working on numerical optimization in many real-world contexts: scheduling in power systems, water management, hyperparameter optimization for computer vision and natural language processing, and parameter optimization in reinforcement learning. He is currently the maintainer of the open-source derivative-free optimization platform of Facebook AI Research (to be released in November 2018), containing various flavors of evolution strategies, Bayesian optimization, sequential quadratic programming, Cobyla, Nelder-Mead, differential evolution and particle swarm optimization, as well as a platform of testbeds including games, reinforcement learning, hyperparameter tuning and real-world engineering problems.