NORS Best Master Thesis 2021

Jakob Kallestad, former master's student at the Heuristic Design Lab

Heuristic Design Lab (University of Bergen)

Abstract of the thesis:

There exist many problem-specific heuristic frameworks for solving combinatorial optimization problems. These can perform well for specific use cases, but they often do not generalize well when applied to other problem domains. Metaheuristic frameworks offer a more general alternative that aims to apply across several problems, yet they can suffer from poor selection of low-level heuristics during the search. The adaptive layer of Adaptive Large Neighborhood Search (ALNS) is one example of a heuristic selection mechanism: it selects low-level heuristics based on their recent performance during the search. In this thesis, we propose a hyperheuristic selection framework that uses Deep Reinforcement Learning (Deep RL) to select heuristics during the search more efficiently than the adaptive layer of ALNS. Our framework combines the representational power of Deep Learning (DL) with the decision-making capability of Deep RL to process search states (which contain useful information about the search) and efficiently select a heuristic at each step of the search. We call the resulting framework Deep Reinforcement Learning Hyperheuristic (DRLH), a general framework for solving combinatorial optimization problems. Our experiments show that DRLH arrives at better heuristic selection strategies than ALNS and a simple Uniform Random Sampling (URS) framework, resulting in better solutions. Additionally, we show that DRLH is not negatively affected by having a large pool of heuristics to choose from, whereas ALNS is unable to work efficiently under these conditions.
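
To make the comparison concrete, below is a minimal, illustrative sketch (in Python) of the kind of hyperheuristic selection loop the abstract describes: at every search step a selection mechanism picks one low-level heuristic from the pool and applies it to the current solution. This is not the thesis implementation; the toy objective, the heuristics, the feature extractor, and the URS selector are hypothetical stand-ins, and a Deep RL policy or ALNS-style adaptive weights would simply replace the select function passed to the loop.

```python
# Illustrative sketch only, not the thesis code. Names such as extract_features,
# shrink_random, and swap_signs are hypothetical examples.
import random

def cost(solution):
    """Toy objective to be minimized: sum of absolute values."""
    return sum(abs(x) for x in solution)

def extract_features(solution, step, n_steps):
    """Toy search-state features: current cost and search progress."""
    return [cost(solution), step / n_steps]

# Two toy low-level heuristics that perturb a list of numbers.
def shrink_random(solution):
    s = list(solution)
    i = random.randrange(len(s))
    s[i] *= 0.5
    return s

def swap_signs(solution):
    s = list(solution)
    i = random.randrange(len(s))
    s[i] = -s[i]
    return s

HEURISTICS = [shrink_random, swap_signs]

def uniform_random_select(features):
    """URS baseline: every heuristic is equally likely; the state is ignored.
    An ALNS adaptive layer or a trained Deep RL policy would instead use
    recent performance or the state features to pick the index."""
    return random.randrange(len(HEURISTICS))

def hyperheuristic_search(initial, select, n_steps=1000):
    """Generic loop shared by URS, ALNS, and a learned policy:
    only the select function differs between the frameworks."""
    current = best = initial
    for step in range(n_steps):
        features = extract_features(current, step, n_steps)
        heuristic = HEURISTICS[select(features)]
        candidate = heuristic(current)
        if cost(candidate) < cost(best):
            best = candidate
        current = candidate  # acceptance criterion omitted for brevity
    return best

if __name__ == "__main__":
    start = [random.uniform(-10, 10) for _ in range(20)]
    result = hyperheuristic_search(start, uniform_random_select)
    print("URS best cost:", round(cost(result), 3))
```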