TUTORIALS

Moving to the era of explainable AI, a comprehensive comparison of the performance of stochastic optimization algorithms has become an increasingly important task. One of the most common ways to do this is to apply statistical analyses, which require sound statistical knowledge to be applied properly. This tutorial will provide an overview of statistical approaches that can be used for analyzing algorithm performance, with special emphasis on issues that are often overlooked. We will show how these issues can be easily avoided by applying simple principles that lead to Deep Statistical Comparison. The tutorial will not be based on equations, but mainly on examples through which a deeper understanding of statistics will be achieved. Examples will be based on various comparison scenarios including single- and multi-objective optimization algorithms. The tutorial will end with a demonstration of a web-service-based framework for statistical comparison of stochastic optimization algorithms.
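
As a minimal sketch of the ranking idea behind Deep Statistical Comparison, the following Python snippet compares two algorithms whose per-problem results are checked as whole distributions before any ranking is made; the synthetic data, the two-sample Kolmogorov-Smirnov test, the significance level of 0.05, and the closing sign test are illustrative choices rather than the exact recipe presented in the tutorial.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    # Hypothetical data: best fitness of 30 runs on each of 10 problems.
    runs_a = [rng.normal(1.0, 0.2, size=30) for _ in range(10)]
    runs_b = [rng.normal(1.4, 0.2, size=30) for _ in range(10)]

    ranks_a, ranks_b = [], []
    for a, b in zip(runs_a, runs_b):
        # Compare whole distributions first; only if they differ
        # significantly do the mean values decide the per-problem ranking.
        _, p = stats.ks_2samp(a, b)
        if p >= 0.05:                       # indistinguishable: shared rank
            ranks_a.append(1.5); ranks_b.append(1.5)
        elif a.mean() < b.mean():           # minimization: lower is better
            ranks_a.append(1.0); ranks_b.append(2.0)
        else:
            ranks_a.append(2.0); ranks_b.append(1.0)

    # Omnibus step: a simple sign test over the problems that were decided.
    wins_a = sum(ra < rb for ra, rb in zip(ranks_a, ranks_b))
    decided = sum(ra != rb for ra, rb in zip(ranks_a, ranks_b))
    print(stats.binomtest(wins_a, decided, 0.5))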

Pareto optimization is a general optimization framework for solving single-objective optimization problems, based on multi-objective evolutionary optimization. The main idea is to transform a single-objective optimization problem into a bi-objective one, then employ a multi-objective evolutionary algorithm to solve it, and finally return the best feasible solution w.r.t. the original single-objective optimization problem from the generated non-dominated solution set. Pareto optimization has been shown to be a promising method for the subset selection problem, which has applications in diverse areas, including machine learning, data mining, natural language processing, computer vision, information retrieval, etc. The theoretical understanding of Pareto optimization has recently been significantly developed, showing its irreplaceability for subset selection. This tutorial will introduce Pareto optimization from scratch. We will show that it achieves the best-so-far theoretical and practical performance in several applications of subset selection. We will also introduce advanced variants of Pareto optimization for large-scale, noisy and dynamic subset selection.
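
To make the framework concrete, here is a compact Python sketch in the spirit of the POSS algorithm for subset selection, using maximum coverage as a stand-in objective; the toy instance, iteration budget, and bit-wise mutation rate are illustrative assumptions.

    import random

    def poss(subsets, k, iterations=20000, seed=0):
        """Pareto optimization for subset selection: maximize coverage of
        the union while minimizing the subset size, then return the best
        solution that respects the size constraint |S| <= k."""
        rng = random.Random(seed)
        n = len(subsets)

        def objectives(x):                 # (coverage, size) of a bit string
            covered = set()
            for i, bit in enumerate(x):
                if bit:
                    covered |= subsets[i]
            return (len(covered), sum(x))

        def weakly_dominates(p, q):        # maximize p[0], minimize p[1]
            return p[0] >= q[0] and p[1] <= q[1]

        archive = {(0,) * n: objectives((0,) * n)}
        for _ in range(iterations):
            parent = rng.choice(list(archive))
            child = tuple(b ^ (rng.random() < 1.0 / n) for b in parent)
            obj = objectives(child)
            if any(weakly_dominates(o, obj) for o in archive.values()):
                continue                   # child adds nothing new
            archive = {x: o for x, o in archive.items()
                       if not weakly_dominates(obj, o)}
            archive[child] = obj
        return max((o[0], x) for x, o in archive.items() if o[1] <= k)

    # Hypothetical instance: 6 candidate sets over a 10-element ground set.
    sets = [{0, 1, 2}, {2, 3}, {4, 5, 6}, {6, 7}, {8, 9}, {0, 5, 9}]
    print(poss(sets, k=3))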

Populations are at the heart of evolutionary algorithms (EAs). They provide the genetic variation which selection acts upon. A complete picture of EAs can only be obtained if we understand their population dynamics. A rich theory on runtime analysis of EAs has been developed over the last 20 years. This theory provides insights into how the performance of EAs depends on their parameter settings and the characteristics of the underlying fitness landscapes. Early studies were mostly concerned with EAs without populations, such as the (1+1) EA. This tutorial introduces recent techniques that enable runtime analysis of EAs with realistic populations. To illustrate the application of these techniques, we consider fundamental questions such as: When are populations necessary for efficient optimisation? What is the appropriate balance between exploration and exploitation and how does this depend on relationships between mutation and selection rates? What determines an EA’s tolerance for uncertainty?
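
As a toy empirical companion to such analyses, the sketch below (a hypothetical setup, not taken from the tutorial) measures how the number of fitness evaluations of a simple (mu+1) EA on OneMax changes with the parent population size mu:

    import random

    def mu_plus_one_ea(n=100, mu=10, max_evals=500_000, seed=1):
        """A bare-bones (mu+1) EA on OneMax: uniform parent selection,
        standard bit mutation with rate 1/n, and replacement of a worst
        individual whenever the offspring is at least as good."""
        rng = random.Random(seed)
        pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(mu)]
        fits = [sum(x) for x in pop]
        evals = mu
        while max(fits) < n and evals < max_evals:
            parent = rng.choice(pop)
            child = [b ^ (rng.random() < 1.0 / n) for b in parent]
            evals += 1
            f = sum(child)
            worst = fits.index(min(fits))
            if f >= fits[worst]:
                pop[worst], fits[worst] = child, f
        return evals

    for mu in (1, 10, 50):
        print(mu, mu_plus_one_ea(mu=mu))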

The aim of the tutorial is to first provide an introduction to evolutionary algorithm hyper-heuristics. An overview of evolutionary algorithm hyper-heuristics will be provided, examining each of the four types of hyper-heuristics. The EvoHyp library will be used to demonstrate the implementation of a genetic algorithm hyper-heuristic for the selection hyper-heuristic case studies and a genetic programming hyper-heuristic for the generation hyper-heuristics. Theoretical aspects of evolutionary algorithm hyper-heuristics will be presented. Challenges in the implementation of evolutionary algorithm hyper-heuristics will be highlighted. The use of hyper-heuristics for the automated design of evolutionary algorithms will be examined, as well as the application of evolutionary algorithm hyper-heuristics to the design of machine learning techniques. The tutorial will end with a discussion session on future directions in this field.
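
EvoHyp itself is a Java library; purely as a language-neutral illustration of the selection hyper-heuristic idea, the following hypothetical Python sketch lets low-level heuristics earn credit when they improve the incumbent solution and selects among them in proportion to that credit:

    import random

    def selection_hh(evaluate, start, heuristics, iters=2000, seed=0):
        """Score-based selection hyper-heuristic with improving-or-equal
        move acceptance; `heuristics` are low-level perturbation operators."""
        rng = random.Random(seed)
        x, fx = start, evaluate(start)
        scores = [1.0] * len(heuristics)
        for _ in range(iters):
            h = rng.choices(range(len(heuristics)), weights=scores)[0]
            y = heuristics[h](x, rng)
            fy = evaluate(y)
            if fy <= fx:                    # accept improving-or-equal moves
                if fy < fx:
                    scores[h] += 1.0        # reward the chosen heuristic
                x, fx = y, fy
        return x, fx

    def flip_one(x, rng):                   # low-level heuristic 1
        y = list(x); y[rng.randrange(len(y))] ^= 1
        return y

    def flip_three(x, rng):                 # low-level heuristic 2
        y = list(x)
        for i in rng.sample(range(len(y)), 3):
            y[i] ^= 1
        return y

    # Hypothetical demo: minimize the number of ones in a bit string.
    best, f = selection_hh(sum, [1] * 30, [flip_one, flip_three])
    print(f, best)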

One of the most challenging problems in solving optimization problems with evolutionary algorithms is the selection of the control parameters that adjust the behaviour of the algorithms. Suitable control parameter values need to be found, and their choice can have a significant impact on the performance of the algorithm. Different parameter settings can be optimal at different stages of the optimization process: at the beginning, in the exploration phase, one may want to increase the chance of finding the most promising regions of the search space, while later, in the exploitation phase, one may want to stay focused within the promising area. The ambition of this tutorial is to contribute towards a more systematic use of dynamic parameter settings. We survey existing techniques to automatically select control parameter values on the fly. We will discuss both theoretical and experimental results that demonstrate the unexploited potential of non-static parameter choices.
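
One of the oldest dynamic parameter-control schemes of this kind is the 1/5-th success rule for step sizes; the sketch below applies it in a (1+1) evolution strategy on the sphere function (the function, constants, and budget are illustrative):

    import random

    def one_plus_one_es(dim=10, iters=3000, seed=0):
        """(1+1) ES with the 1/5-th success rule: the mutation step size
        grows after a success and shrinks after a failure, so the success
        rate is steered towards roughly 1/5."""
        rng = random.Random(seed)
        x = [rng.uniform(-5, 5) for _ in range(dim)]
        fx = sum(v * v for v in x)           # sphere function
        sigma = 1.0
        for _ in range(iters):
            y = [v + sigma * rng.gauss(0, 1) for v in x]
            fy = sum(v * v for v in y)
            if fy < fx:
                x, fx = y, fy
                sigma *= 1.1                 # success: explore more boldly
            else:
                sigma /= 1.1 ** 0.25         # failure: become more careful
        return fx, sigma

    print(one_plus_one_es())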

In the classical setting, evolutionary algorithms (EAs) are used to compute a solution of high quality with respect to the objective function, or a set of trade-off solutions in the field of multi-objective optimization. Here, diversity preservation is usually introduced as a means to prevent premature convergence.

In many engineering applications it is beneficial to produce a set of solutions that is (1) of high quality and (2) diverse with respect to the search space or features of the given problem. Evolutionary diversity optimization enables the computation of a large variety of new and innovative solutions that are unlikely to be produced by traditional evolutionary computation methods. A related field of study is novelty search, where EAs are used to discover new solutions without focusing on explicit objectives as a driver for the search process, with the goal of exploring solutions that differ from those previously obtained.

Our tutorial will give a detailed overview of evolutionary diversity optimization in both theory and practice and highlight similarities and differences to novelty search.
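
To give a flavour of the approach in code, the following sketch (a hypothetical setup, not taken from the tutorial) evolves a small population of bit strings that must all keep a minimum fitness, while survival selection maximizes the mean pairwise Hamming distance:

    import random
    from itertools import combinations

    def mean_pairwise_hamming(pop):
        dists = [sum(a != b for a, b in zip(x, y))
                 for x, y in combinations(pop, 2)]
        return sum(dists) / len(dists)

    def diversity_ea(n=20, popsize=8, threshold=15, iters=3000, seed=0):
        """Quality constraint: at least `threshold` ones; among feasible
        offspring, keep whichever replacement maximizes diversity."""
        rng = random.Random(seed)
        pop = [[1] * n for _ in range(popsize)]        # trivially feasible
        for _ in range(iters):
            parent = rng.choice(pop)
            child = [b ^ (rng.random() < 1.0 / n) for b in parent]
            if sum(child) < threshold:
                continue                               # reject: quality too low
            best_div, best_i = mean_pairwise_hamming(pop), None
            for i in range(popsize):
                trial = pop[:i] + [child] + pop[i + 1:]
                d = mean_pairwise_hamming(trial)
                if d > best_div:
                    best_div, best_i = d, i
            if best_i is not None:
                pop[best_i] = child
        return pop, mean_pairwise_hamming(pop)

    pop, div = diversity_ea()
    print(round(div, 2))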

Whenever one performs optimization on a continuous problem, having at least a vague idea of its landscape’s structure is usually very beneficial, as this, for instance, allows one to select and/or configure a suitable, well-performing optimization algorithm. However, characterizing problem landscapes poses (at least) two challenges: (i) one wants to avoid spending major parts of the overall budget on the characterization of the landscape (as this would reduce the budget that is left for the optimizer), and (ii) the properties should be automatically computable. Over the last decade an entire research area – denoted Exploratory Landscape Analysis (ELA) – has developed around the topic of automated and feature-based problem characterization for continuous optimization problems.

Within this tutorial, we will introduce the general concepts of automated algorithm selection and ELA, give a detailed summary of the state of the art, and conclude with exemplary success stories of ELA in use.
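
As a taste of what such features look like, the sketch below hand-computes a few ELA-style numbers from a single random sample of a function; dedicated libraries such as flacco compute far richer feature sets, and the functions and sample sizes here are illustrative only.

    import numpy as np
    from scipy import stats

    def ela_features(fun, dim=5, n_samples=250, seed=0):
        """A few landscape features from one cheap design of experiments:
        statistics of the fitness distribution plus the classic
        fitness-distance correlation."""
        rng = np.random.default_rng(seed)
        X = rng.uniform(-5, 5, size=(n_samples, dim))
        y = np.apply_along_axis(fun, 1, X)
        dist = np.linalg.norm(X - X[np.argmin(y)], axis=1)
        return {
            "y_skewness": float(stats.skew(y)),
            "y_kurtosis": float(stats.kurtosis(y)),
            "fitness_distance_corr": float(np.corrcoef(y, dist)[0, 1]),
        }

    sphere = lambda x: float(np.sum(x ** 2))
    rastrigin = lambda x: float(10 * x.size
                                + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))
    print(ela_features(sphere))
    print(ela_features(rastrigin))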

Genetic Programming (GP) has been on the scene for around 25 years. Genetic Improvement (GI) is the new kid on the block. What does GI have to offer over GP? The operational difference is that GI deals with source code, rather than a simulation of code. In other words, GI operates directly on Java or C, for example, whereas GP typically operates on some subset defined by the function and terminal sets. Another fundamental difference is that GI starts with real-world software, whereas GP typically tries to evolve programs from scratch. These differences may seem unimportant, as we can still generate the same set of functions; however, this subtle difference opens up a vast number of new possibilities for research and makes GI attractive for industrial applications. Furthermore, we can optimise the physical properties of code, such as power consumption, size, bandwidth, and other non-functional properties, including execution time.

Many evolutionary and general-purpose search algorithms have been proposed for solving a broad range of single- and multi-objective combinatorial optimization problems. Despite their skillful design, it is still difficult to achieve a high-level fundamental understanding of why and when an algorithm is successful. From an engineering point of view, setting up a systematic and principled methodology to select or design an effective search algorithm is also a challenging issue which is attracting more and more attention from the research community. In this context, fitness landscape analysis is a well-established field for understanding the relation between the search space and algorithm(s) or their underlying components and parameters. In this tutorial, we expose and contrast the impact of fitness landscape geometries on the performance of search algorithms. We identify a sound and concise summary of features characterizing the structure of a problem instance. We also show the main methodologies to sample discrete search spaces and to predict the expected algorithm performance for unseen problems.
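
As one concrete example of such a feature, the sketch below estimates the random-walk autocorrelation of fitness under a one-bit-flip neighborhood, a classic ruggedness measure (the two toy objectives are hypothetical):

    import random

    def walk_autocorrelation(f, n, steps=5000, seed=0):
        """Lag-1 autocorrelation of fitness along a random walk; values
        near 1 indicate a smooth landscape, values near 0 a rugged one."""
        rng = random.Random(seed)
        x = [rng.randint(0, 1) for _ in range(n)]
        ys = []
        for _ in range(steps):
            ys.append(f(x))
            x[rng.randrange(n)] ^= 1        # step to a random neighbor
        m = sum(ys) / len(ys)
        var = sum((y - m) ** 2 for y in ys)
        cov = sum((ys[i] - m) * (ys[i + 1] - m) for i in range(len(ys) - 1))
        return cov / var

    smooth = lambda x: sum(x)                         # OneMax-like
    rugged = lambda x: sum(x) + 5 * (sum(x) % 2)      # parity term jumps
    print(round(walk_autocorrelation(smooth, 50), 3))
    print(round(walk_autocorrelation(rugged, 50), 3))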

Self-Organizing Migrating Algorithm (SOMA) belongs to the class of swarm intelligence techniques. SOMA is inspired by competitive-cooperative behavior, uses inherent self-adaptation of movement over the search space, and employs discrete perturbation mimicking the mutation process. SOMA performs well in both continuous and discrete domains. This tutorial presents a collection of principal findings from recent original research papers discussing new results on and with SOMA from scientific events such as CEC, GECCO, and SEMCCO. Current research trends in parameter control, discrete perturbation, novel improvements tested on widely accepted benchmarks, as well as new population strategies based on ensemble techniques, the “Pareto” principle, or the T3A scheme, followed by future research directions, will be delivered here. Finally, more than ten years of experience with SOMA, demonstrated on applications such as control engineering, cybersecurity, combinatorial optimization, and computer games, concludes the tutorial.
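
For readers new to SOMA, here is a minimal sketch of the AllToOne migration strategy on the sphere function; the control-parameter values (PathLength, Step, PRT) follow commonly cited defaults but are illustrative, and details such as when the PRT vector is regenerated differ between SOMA versions.

    import random

    def soma_all_to_one(f, dim=10, pop_size=20, migrations=50,
                        path_length=3.0, step=0.31, prt=0.3, seed=0):
        """Each individual jumps towards the current leader along a
        perturbed path and keeps the best position found on the way."""
        rng = random.Random(seed)
        pop = [[rng.uniform(-5, 5) for _ in range(dim)]
               for _ in range(pop_size)]
        fits = [f(x) for x in pop]
        for _ in range(migrations):
            leader = pop[fits.index(min(fits))]
            for i in range(pop_size):
                if pop[i] is leader:
                    continue
                # PRT vector: random mask choosing which coordinates move
                # (SOMA's discrete perturbation, akin to mutation).
                mask = [1 if rng.random() < prt else 0 for _ in range(dim)]
                best_x, best_f, t = pop[i], fits[i], step
                while t <= path_length:
                    cand = [x + (l - x) * t * m
                            for x, l, m in zip(pop[i], leader, mask)]
                    fc = f(cand)
                    if fc < best_f:
                        best_x, best_f = cand, fc
                    t += step
                pop[i], fits[i] = best_x, best_f
        return min(fits)

    sphere = lambda x: sum(v * v for v in x)
    print(soma_all_to_one(sphere))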

Evolutionary multi-objective optimization (EMO) has been a major research topic in evolutionary computation for many years. It has been generally accepted that a combination of evolutionary algorithms and traditional optimization methods should form the next generation of multi-objective (MO) solvers. As the name suggests, the basic idea of the decomposition-based technique is to transform the original complex problem into simplified subproblem(s) so as to facilitate the optimization. Decomposition methods have been well used and studied in traditional MO. MOEA/D, the multi-objective evolutionary algorithm based on decomposition, decomposes a multi-objective problem into a number of subtasks and then solves them in a collaborative manner. MOEA/D provides a very natural bridge between multi-objective evolutionary algorithms and traditional mathematical programming, and it has become a commonly used evolutionary algorithmic framework in recent years.

This tutorial will (i) introduce the basic principles of MOEA/D in comparison with the other two state-of-the-art EMO frameworks, i.e., the Pareto- and indicator-based frameworks; (ii) present a general overview of state-of-the-art MOEA/D variants and their applications; and (iii) discuss opportunities for further exploration.
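
A minimal sketch of the MOEA/D loop is given below, with uniform weight vectors, Tchebycheff scalarization, and mating plus replacement restricted to each subproblem's neighborhood; the ZDT1-like test problem and all parameter values are illustrative.

    import random

    def moead(n_sub=50, dim=10, t_nbrs=5, gens=150, seed=0):
        rng = random.Random(seed)

        def evaluate(x):                     # ZDT1-like bi-objective problem
            g = 1 + 9 * sum(x[1:]) / (dim - 1)
            return (x[0], g * (1 - (x[0] / g) ** 0.5))

        weights = [(i / (n_sub - 1), 1 - i / (n_sub - 1))
                   for i in range(n_sub)]
        # Neighbors of a subproblem = subproblems with the closest weights.
        nbrs = [sorted(range(n_sub), key=lambda j: abs(i - j))[:t_nbrs]
                for i in range(n_sub)]
        pop = [[rng.random() for _ in range(dim)] for _ in range(n_sub)]
        objs = [evaluate(x) for x in pop]
        z = [min(o[k] for o in objs) for k in (0, 1)]  # ideal point estimate

        def tcheby(obj, w):                  # Tchebycheff scalarization
            return max(w[k] * abs(obj[k] - z[k]) for k in (0, 1))

        for _ in range(gens):
            for i in range(n_sub):
                a, b = rng.sample(nbrs[i], 2)          # mate in neighborhood
                child = [min(1.0, max(0.0, (u + v) / 2 + rng.gauss(0, 0.05)))
                         for u, v in zip(pop[a], pop[b])]
                co = evaluate(child)
                z = [min(z[k], co[k]) for k in (0, 1)]
                for j in nbrs[i]:                      # collaborative update
                    if tcheby(co, weights[j]) < tcheby(objs[j], weights[j]):
                        pop[j], objs[j] = list(child), co
        return objs

    front = sorted(moead())
    print(front[0], front[len(front) // 2], front[-1])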

The evolutionary computation community has recently found that constraint handling techniques (CHTs) developed for single-objective optimization cannot be adapted for multiobjective optimization as easily as first believed. As a result, more and more research is now focusing on constraint handling in multiobjective optimization. This tutorial provides an overview of the state of the art in constrained multiobjective optimization with evolutionary algorithms. It starts with the motivation for dealing with constrained multiobjective optimization problems, gives their formal definition, and explains the CHTs developed for single-objective optimization. The major part is devoted to multiobjective evolutionary algorithms (MOEAs) capable of constraint handling. Various ways of incorporating CHTs into MOEAs are presented and the proposed taxonomies of the resulting algorithms are discussed. In particular, two recent approaches are analyzed: a generic framework for incorporating CHTs into MOEAs and the ensemble-based constraint handling for multiobjective optimization. The tutorial concludes with an overview of the available test problems and illustrative case studies from constrained multiobjective optimization.
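
As one small, widely used building block from this area, the sketch below implements a constraint-domination comparison in the spirit of Deb's feasibility rules; the encoding of solutions as (objective vector, total violation) pairs is an assumption for illustration.

    def constrained_dominates(a, b):
        """Feasible beats infeasible; among infeasible, less total violation
        wins; among feasible, ordinary Pareto dominance (minimization)."""
        (fa, va), (fb, vb) = a, b
        if va == 0 and vb > 0:
            return True
        if va > 0 and vb > 0:
            return va < vb
        if va > 0 and vb == 0:
            return False
        return (all(x <= y for x, y in zip(fa, fb))
                and any(x < y for x, y in zip(fa, fb)))

    s1 = ((1.0, 2.0), 0.0)     # feasible
    s2 = ((0.5, 3.0), 0.0)     # feasible, incomparable with s1
    s3 = ((0.1, 0.1), 0.7)     # infeasible despite good objectives
    print(constrained_dominates(s1, s3))   # feasible beats infeasible: True
    print(constrained_dominates(s1, s2))   # incomparable on the front: False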

New developments in Gray Box Optimization make it possible to construct new forms of evolutionary algorithms that do not use random mutation or random recombination. Instead, for certain classes of NP-hard problems (ranging from MAXSAT to the Traveling Salesman Problem), it is possible to exactly compute the location of improving moves (often in constant time), and to use highly efficient forms of greedy deterministic recombination. In some domains, this makes random mutation obsolete. Deterministic “Partition Crossover” can be applied to optimization problems such as MAXSAT and NK landscapes, as well as to the Traveling Salesman Problem (TSP). New innovations in this tutorial also include a review of “function transforms”, which can be used to convert problems with high nonlinearity into problems with k-bounded nonlinearity. This tutorial will also show how parameter optimization problems can be expressed as high-precision discrete optimization problems with k-bounded nonlinearity.
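
The heart of these methods is cheap, local bookkeeping; the sketch below maintains a score vector for a k-bounded pseudo-Boolean maximization problem, so that after each bit flip only the scores of variables sharing a subfunction with the flipped bit are recomputed (the tiny NK-style instance with random lookup tables is hypothetical):

    import random

    def graybox_hill_climber(n, subfns, seed=0):
        """f(x) is a sum of subfunctions, each reading few variables.
        score[i] stores f(x with bit i flipped) - f(x); after a flip only
        the scores of co-occurring variables change, so finding the next
        improving move does not require rescanning all n bits."""
        rng = random.Random(seed)
        x = [rng.randint(0, 1) for _ in range(n)]
        subfns_of = [[] for _ in range(n)]
        for sf in subfns:
            for v in sf[0]:
                subfns_of[v].append(sf)

        def score_of(i):
            s = 0.0
            for vars_, table in subfns_of[i]:
                cur = tuple(x[v] for v in vars_)
                flp = tuple(b ^ (v == i) for v, b in zip(vars_, cur))
                s += table[flp] - table[cur]
            return s

        score = [score_of(i) for i in range(n)]
        while True:
            improving = [i for i, s in enumerate(score) if s > 1e-12]
            if not improving:
                return x                            # local optimum reached
            i = rng.choice(improving)
            x[i] ^= 1
            touched = {u for vars_, _ in subfns_of[i] for u in vars_}
            for u in touched:                       # O(1) for bounded k
                score[u] = score_of(u)

    # Hypothetical instance: a ring-shaped NK-style landscape with k = 2.
    rng = random.Random(1)
    subfns = [((i, (i + 1) % 12),
               {(a, b): rng.random() for a in (0, 1) for b in (0, 1)})
              for i in range(12)]
    print(graybox_hill_climber(12, subfns))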

In particular for algorithms in AI, it is crucial to determine a configuration of parameter settings that achieves peak performance. Since an in-depth understanding of these high-dimensional, abstract black-box problems is often not available, manual tuning of parameter configurations is considered a tedious and error-prone task. Therefore, automated algorithm configuration (AC) systems were proposed that free users from this task, and they have demonstrated their practical applicability by achieving substantial speedups in a wide range of applications. Although applying any kind of optimization approach to AC might seem straightforward, AC systems face many challenges in practice, including complex configuration spaces and partially observed function values. In my tutorial, I will give an overview of (i) how to address these challenges, (ii) recent advances leading to even more efficient systems than ever before, (iii) applications in different domains, including AutoML, and (iv) open challenges.
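
To fix ideas, the following deliberately minimal sketch shows the bare AC loop: sample configurations, evaluate them on training instances, keep the best mean cost. Real AC systems such as SMAC or irace add racing, capping, and surrogate models on top of a loop like this one; the toy target algorithm and its parameters here are hypothetical.

    import random, statistics

    def random_search_ac(cost, space, instances, n_configs=100, seed=0):
        """Random-search configurator: uniform sampling over a box-shaped
        configuration space, full evaluation on every training instance."""
        rng = random.Random(seed)
        best_cfg, best_cost = None, float("inf")
        for _ in range(n_configs):
            cfg = {p: rng.uniform(lo, hi) for p, (lo, hi) in space.items()}
            mean_cost = statistics.mean(cost(cfg, inst) for inst in instances)
            if mean_cost < best_cost:
                best_cfg, best_cost = cfg, mean_cost
        return best_cfg, best_cost

    # Hypothetical target algorithm whose runtime-like cost depends on two
    # parameters and on the hardness of each instance.
    def toy_cost(cfg, hardness):
        return hardness * (1 + (cfg["temp"] - 0.3) ** 2
                           + (cfg["rate"] - 0.7) ** 2)

    space = {"temp": (0.0, 1.0), "rate": (0.0, 1.0)}
    cfg, c = random_search_ac(toy_cost, space, instances=[0.5, 1.0, 2.0])
    print({k: round(v, 2) for k, v in cfg.items()}, round(c, 3))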