Multi-Objective Optimization via Equivariant Deep Hypervolume Approximation

Open Access
Publication date 05-10-2022
Edition v1
Number of pages 17
Publisher ArXiv
Organisations
  • Faculty of Science (FNWI) - Van 't Hoff Institute for Molecular Sciences (HIMS)
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
Optimizing multiple competing objectives is a common problem across science and industry. The inherent, inextricable trade-off between those objectives leads one to the task of exploring their Pareto front. A meaningful quantity for this purpose is the hypervolume indicator, which is used in Bayesian Optimization (BO) and Evolutionary Algorithms (EAs). However, the computational complexity of calculating the hypervolume scales unfavorably with an increasing number of objectives and data points, which restricts its use in those common multi-objective optimization frameworks. To overcome these restrictions, we propose to approximate the hypervolume function with a deep neural network, which we call DeepHV. For better sample efficiency and generalization, we exploit the fact that the hypervolume is scale-equivariant in each of the objectives as well as permutation-invariant w.r.t. both the objectives and the samples, by using a deep neural network that is equivariant w.r.t. the combined group of scalings and permutations. We evaluate our method against exact and approximate hypervolume methods in terms of accuracy, computation time, and generalization. We also apply and compare our methods to state-of-the-art multi-objective BO methods and EAs on a range of synthetic benchmark test cases. The results show that our methods are promising for such multi-objective optimization tasks.
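As a minimal illustration of the quantity the paper approximates, the sketch below computes the exact hypervolume in the two-objective case and checks the scale-equivariance property mentioned in the abstract. This is not the paper's DeepHV method or its exact-computation baseline; `hypervolume_2d` is a hypothetical helper, and maximization of both objectives w.r.t. a reference point is assumed.

```python
def hypervolume_2d(points, ref):
    """Exact hypervolume of a 2-objective point set (maximization)
    w.r.t. a reference point `ref` -- an illustrative sketch only."""
    # Keep only points that strictly dominate the reference point.
    pts = sorted(
        (p for p in points if p[0] > ref[0] and p[1] > ref[1]),
        key=lambda p: p[0],
        reverse=True,  # sweep from largest to smallest first objective
    )
    hv, best_y = 0.0, ref[1]
    for i, (x, y) in enumerate(pts):
        next_x = pts[i + 1][0] if i + 1 < len(pts) else ref[0]
        best_y = max(best_y, y)           # tallest point seen so far
        hv += (x - next_x) * (best_y - ref[1])  # area of this vertical slab
    return hv

front = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)]
print(hypervolume_2d(front, (0.0, 0.0)))  # -> 6.0

# Scale-equivariance: scaling one objective by c scales the hypervolume by c.
scaled = [(2 * x, y) for x, y in front]
print(hypervolume_2d(scaled, (0.0, 0.0)))  # -> 12.0
```

Permutation invariance w.r.t. the samples is also visible here: reordering `front` leaves the result unchanged, since the points are sorted internally.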
Document type Preprint
Note The latest version (2023) is also available on ArXiv.
Language English
Published at https://arxiv.org/abs/2210.02177v1