CEC2019 Competition on Evolutionary Multitask Optimization  



OVERVIEW AND AIM

Humans possess a remarkable ability to manage and execute multiple tasks simultaneously, e.g., talking while walking. This desirable multitasking capability has inspired computational methodologies that tackle multiple tasks at the same time, leveraging the commonalities and differences across tasks to resolve the component tasks more effectively and efficiently than when dealing with them separately. As a well-known example, multitask learning [1] is a very active subfield of machine learning whereby multiple learning tasks are performed together using a shared model representation, such that the relevant information contained in related tasks can be exploited to improve the learning efficiency and generalization performance of task-specific models. Multitask optimization (MTO) [2]-[5] is a newly emerging research area in the field of optimization, which investigates how to effectively and efficiently tackle multiple optimization problems at the same time. In the multitasking scenario, solving one optimization problem may assist in solving other optimization problems (i.e., synergetic problem-solving) if these problems bear commonality and/or complementarity in terms of optimal solutions and/or fitness landscapes. As a simple example, if several problems share the same globally optimal solution but have distinct fitness landscapes, obtaining the global optimum of any one of them solves all the others as well. Recently, an evolutionary MTO paradigm, so-called evolutionary multitasking [3],[4], was proposed to make the best use of the potential of evolutionary algorithms (EAs) incorporating a unified solution representation space for MTO. As population-based optimizers, EAs feature the Darwinian survival-of-the-fittest principle and nature-inspired reproduction operations, which inherently promote implicit knowledge transfer across tasks during problem-solving.
The superiority of this novel evolutionary multitasking framework over the traditional practice of solving each task independently has been demonstrated on synthetic and real-world MTO problems using a multifactorial evolutionary algorithm (MFEA) [3],[5] developed under this framework. Evolutionary multitasking opens up new horizons for researchers in the field of evolutionary computation. It provides a promising means of dealing with the ever-increasing number, variety, and complexity of optimization tasks. More importantly, rapid advances in cloud computing could eventually turn optimization into an on-demand service hosted on the cloud. In such a case, a variety of optimization tasks would be executed simultaneously by the service engine, where evolutionary multitasking may harness the underlying synergy between multiple tasks to provide service consumers with faster and better solutions. Owing to the good response to this competition at CEC 2017, this year we continue to organize the MTO competition with the aim of promoting research advances in both the algorithmic and theoretical aspects of evolutionary MTO.

TEST SUITES

Single-objective and multi-objective continuous optimization have been intensively studied in the evolutionary optimization community, where many well-known test suites are available. As a preliminary attempt, we have designed two MTO test suites based on [6],[7] for single-objective and multi-objective continuous optimization tasks, respectively. The test suite for multitask single-objective optimization (MTSOO) contains nine standard MTO benchmark problems, ten complex MTO problems, and six 50-task MTO benchmark problems. Each standard and complex MTO problem consists of two single-objective continuous optimization tasks, while each 50-task MTO problem contains 50 single-objective continuous optimization tasks; the tasks bear certain commonality and complementarity in terms of the global optimum and the fitness landscape.
These MTO problems possess different degrees of latent synergy between their component tasks. The test suite for multitask multi-objective optimization (MTMOO) includes nine standard MTO benchmark problems, ten complex MTO problems, and six 50-task MTO benchmark problems. Each standard and complex MTO problem consists of two multi-objective continuous optimization tasks, while each 50-task MTO problem contains 50 multi-objective continuous optimization tasks; the tasks bear certain commonality and complementarity in terms of the Pareto-optimal solutions and the fitness landscape. These MTO problems likewise feature different degrees of latent synergy between their component tasks. All benchmark problems in these two test suites can be downloaded at http://www.bdsc.site/websites/MTO/index.html

COMPETITION PROTOCOL

Participants in this competition may target either or both of MTSOO and MTMOO, using all benchmark problems in the corresponding test suites described above for performance evaluation.

For the MTSOO test suite:

(1) Experimental settings

For each of the 25 benchmark problems in this test suite, an algorithm must be executed for 30 runs, where each run should employ different random seeds for the pseudo-random number generator(s) used in the algorithm. Note: it is prohibited to execute multiple sets of 30 runs and deliberately pick the best set. For all 2-task benchmark problems, the maximal number of function evaluations (maxFEs) used to terminate an algorithm in a run is set to 200,000, while maxFEs is set to 5,000,000 for all 50-task benchmark problems. In the multitasking scenario, one function evaluation means one calculation of the objective function value of any component task, without distinguishing between tasks. Note: the parameter setting of an algorithm is required to be identical for every benchmark problem in this test suite.
Participants are required to report the parameter setting used for each problem in their final submission to the competition. Please refer to "SUBMISSION GUIDELINE" for more details.

(2) Intermediate results to be recorded

When an algorithm is executed to solve a specific benchmark problem in a run, the best function error value (BFEV) achieved so far w.r.t. each component task of this problem should be recorded whenever the current number of function evaluations reaches one of the predefined values k*maxFEs/Z (k = 1, …, Z; Z = 100 for 2-task MTO problems and Z = 1000 for 50-task MTO problems). The BFEV is calculated as the difference between the best objective function value achieved so far and the globally optimal objective function value, which is known in advance. As a result, 100 BFEVs are recorded for every 2-task benchmark problem and 1000 BFEVs for every 50-task benchmark problem, w.r.t. each component task in each run. Intermediate results for each benchmark problem must be saved into separate ".txt" files: "MTOSOO_P1.txt", …, "MTOSOO_P9.txt" for the nine standard MTO benchmark problems; "MTOCSO_P1.txt", …, "MTOCSO_P10.txt" for the ten complex MTO problems; and "MTOMSO_P1.txt", …, "MTOMSO_P6.txt" for the six 50-task MTO benchmark problems. The data contained in each ".txt" file must conform to the following format:
where BFEV_{j,k}^i (i = 1, …, n; j = 1, …, 30; k = 1, …, m) stands for the BFEV w.r.t. the i-th component task obtained in the j-th run at the k-th predefined number of function evaluations. Note: n = 2 and m = 100 for 2-task benchmark problems, while n = 50 and m = 1000 for 50-task benchmark problems.
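As a concrete illustration, the checkpointing rule and the BFEV definition described above can be sketched in a few lines of Python. This is our own illustrative sketch, not official competition code; all function and variable names are assumptions.

```python
# Illustrative sketch (not official competition code) of the recording rule:
# a BFEV is logged whenever the evaluation counter reaches k*maxFEs/Z.

def checkpoints(max_fes, z):
    """FE counts at which a result must be recorded: k*maxFEs/Z for k = 1..Z."""
    return [k * max_fes // z for k in range(1, z + 1)]

def bfev(best_objective_so_far, global_optimum):
    """Best function error value: gap between best-so-far and known optimum."""
    return best_objective_so_far - global_optimum

# 2-task problems: maxFEs = 200,000 and Z = 100  -> a record every 2,000 FEs.
two_task_cps = checkpoints(200_000, 100)
# 50-task problems: maxFEs = 5,000,000 and Z = 1000 -> a record every 5,000 FEs.
fifty_task_cps = checkpoints(5_000_000, 1000)
```

Under this schedule, a single run on a 2-task problem yields exactly 100 recorded BFEVs per component task, and a run on a 50-task problem yields 1000, matching the file format described above.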
(3) Overall ranking criterion

To derive the overall ranking of the algorithms participating in the competition, we will take into account the performance of each algorithm on each component task of each benchmark problem under computational budgets varying from small to large. Specifically, we will treat each component task of each benchmark problem as one individual task, yielding a total of 338 individual tasks. For each algorithm to be ranked, the median BFEV over 30 runs will be calculated at each checkpoint (each corresponding to a different computational budget) for each of the 338 individual tasks. Based on these data, the overall ranking criterion will be defined. To avoid deliberate calibration of algorithms to the overall ranking criterion, we will release its formulation after the competition submission deadline.

For the MTMOO test suite:

(1) Experimental settings

For each of the 25 benchmark problems in this test suite, an algorithm must be executed for 30 runs, where each run should employ different random seeds for the pseudo-random number generator(s) used in the algorithm. Note: it is prohibited to execute multiple sets of 30 runs and deliberately pick the best set. For all 2-task benchmark problems, the maximal number of function evaluations (maxFEs) used to terminate an algorithm in a run is set to 200,000, while maxFEs is set to 5,000,000 for all 50-task benchmark problems. In the multitasking scenario, one function evaluation means one calculation of the values of all objective functions of any component task, without distinguishing between tasks. The parameter setting of an algorithm should be identical for every benchmark problem in this test suite. Note: participants are required to report the parameter setting used for each problem in their final submission to the competition. Please refer to "SUBMISSION GUIDELINE" for more details.
(2) Intermediate results to be recorded

When an algorithm is executed to solve a specific benchmark problem in a run, the inverted generational distance (IGD) value obtained so far w.r.t. each component task of this problem should be recorded whenever the current number of function evaluations reaches one of the predefined values k*maxFEs/Z (k = 1, …, Z; Z = 100 for 2-task MTO problems and Z = 1000 for 50-task MTO problems). IGD [8] is a commonly used performance metric in multi-objective optimization that evaluates the quality (convergence and diversity) of the currently obtained Pareto front by comparing it with the Pareto-optimal front, which is known in advance. As a result, 100 IGD values are recorded for every 2-task benchmark problem and 1000 IGD values for every 50-task benchmark problem, w.r.t. each component task in each run. Intermediate results for each benchmark problem must be saved into separate ".txt" files: "MTOMOO_P1.txt", …, "MTOMOO_P9.txt" for the nine standard MTO benchmark problems; "MTOCMO_P1.txt", …, "MTOCMO_P10.txt" for the ten complex MTO problems; and "MTOMMO_P1.txt", …, "MTOMMO_P6.txt" for the six 50-task MTO benchmark problems. The data contained in each ".txt" file must conform to the following format:
where IGD_{j,k}^i (i = 1, …, n; j = 1, …, 30; k = 1, …, m) stands for the IGD value w.r.t. the i-th component task obtained in the j-th run at the k-th predefined number of function evaluations. Note: n = 2 and m = 100 for 2-task benchmark problems, while n = 50 and m = 1000 for 50-task benchmark problems.
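For readers unfamiliar with the metric, a minimal sketch of IGD follows: the average Euclidean distance from each point of the known Pareto-optimal front to its nearest point in the obtained front. This is an illustrative implementation of the textbook definition, not the competition's evaluation code, and the function names are our own.

```python
import math

def igd(reference_front, obtained_front):
    """Inverted generational distance: mean Euclidean distance from each
    reference (Pareto-optimal) point to its nearest obtained point.
    Lower is better; 0 means every reference point is matched exactly."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return sum(min(dist(r, p) for p in obtained_front)
               for r in reference_front) / len(reference_front)
```

Because the average runs over the reference front, an obtained front that is both close to and well spread along the Pareto-optimal front scores low, which is why IGD captures convergence and diversity in a single number.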
(3) Overall ranking criterion

To derive the overall ranking of the algorithms participating in the competition, we will take into account the performance of each algorithm on each component task of each benchmark problem under computational budgets varying from small to large. Specifically, we will treat each component task of each benchmark problem as one individual task, yielding a total of 338 individual tasks. For each algorithm to be ranked, the median IGD value over 30 runs will be calculated at each checkpoint (each corresponding to a different computational budget) for each of the 338 individual tasks. Based on these data, the overall ranking criterion will be defined. To avoid deliberate calibration of algorithms to the overall ranking criterion, we will release its formulation after the competition submission deadline.

SUBMISSION GUIDELINE

This competition is organized together with the CEC'19 Special Session on Memetic Computing (http://www.memecs.org/mcw/SpecialSession_CEC2019.html). Interested participants are strongly encouraged to report their approaches and results in a paper submitted to this special session before the paper submission deadline (January 7, 2019). Further, please archive the following files into a single .zip file and send it to mtocompetition@gmail.com before the competition submission deadline (May 1, 2019):
Here, "param_SO.txt" and "param_MO.txt" contain the parameter settings of the algorithm for the MTSOO and MTMOO test suites, respectively, and "code.zip" contains the source code of the algorithm, which should allow the generation of reproducible results. If you would like to participate in the competition, please kindly inform us of your interest via email (mtocompetition@gmail.com) so that we can update you about any bug fixes and/or extensions of the deadline.

COMPETITION ORGANIZERS

Liang Feng
Kai Qin
Abhishek Gupta
Yuan Yuan
Yew-Soon Ong
Xu Chi

REFERENCES
[1] R. Caruana, "Multitask learning", Machine Learning, 28(1): 41-75, 1997.
