Daniel Lokshtanov: Kernelization: Theory of Polynomial Time Pre-processing
Suppose you are attacked by a mob of n bandits, and all you have for your own protection is a gun with k bullets in it. Luckily, the mobsters are standing still preparing to attack you, and the bullets go right through them, so that if some of them stand on a line, you can take out all of those in a single shot. Given the positions of all the n mobsters and the number k of bullets in your gun, can you take out all the attackers?
This morbid problem is known to be NP-complete, but what if you know that the number of bullets, k, is at most 6, just like in a classic revolver? A brute-force solution (try all possible ways of shooting at most 6 bullets) would still require O(n^12) time, which is infeasible even for small values of n. However, there is a simple reduction rule that will reduce the number of attackers to at most 36, in O(n) time. At this point, clever heuristics or exact solvers can easily solve the instance. This is a striking example of how clever preprocessing can make a world of difference.
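The reduction rule alluded to above can be sketched in code. The idea behind the standard kernel for this problem (known as Point Line Cover) is: if some line passes through more than k of the mobsters, any solution must shoot along that line, so we may remove those mobsters and spend one bullet. Once no such line remains, each of the at most k shots can cover at most k mobsters, so more than k² survivors means the instance is a NO. The sketch below is a naive illustration with hypothetical names (`kernelize`, `collinear`), not the O(n)-time implementation mentioned in the abstract:

```python
from itertools import combinations

def collinear(p, q, r):
    # Cross product is zero iff p, q, r lie on one line.
    return (q[0] - p[0]) * (r[1] - p[1]) == (q[1] - p[1]) * (r[0] - p[0])

def kernelize(points, k):
    """Shrink a Point Line Cover instance (points, k) to at most k*k points.

    Rule: a line through more than k points must be used by any solution,
    since covering its points with other lines would take more than k lines.
    Returns a reduced (points, k) pair, or None if the instance is a NO.
    """
    points = list(points)
    while k > 0:
        for p, q in combinations(points, 2):
            on_line = [r for r in points if collinear(p, q, r)]
            if len(on_line) > k:
                # Shoot along this line: remove its points, spend a bullet.
                points = [r for r in points if r not in on_line]
                k -= 1
                break
        else:
            break  # no line covers more than k points; rule exhausted
    # Each remaining line covers at most k points, so a YES-instance
    # now has at most k * k points left.
    if len(points) > k * k:
        return None
    return points, k
```

For example, with k = 2 and mobsters at (0,0), (1,0), (2,0), (5,5), the line through the first three is forced, leaving one mobster and one bullet.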
Preprocessing as a strategy for coping with hard problems is used in almost every practical implementation. We want to be able to make meaningful statements, that is, to prove theorems, about the quality of our preprocessing rules, or to be able to rule out the existence of "good" preprocessing rules for the problem in question. Kernelization is a mathematical framework for proving such theorems, i.e., theorems of the form
(A) "Problem X admits good preprocessing"
(B) "Problem X does not admit good preprocessing rules unless ... "
In this talk I will survey the field of Kernelization through simple examples like the mobster-shooting problem, and talk about some of the contributions to the field made by the Algorithms group at our department.