On this page you will become familiar with the problem.
Suppose you are the manager of a restaurant chain. You have no restaurants in Germany yet, but since the market looks very promising you want to open new restaurants in the 20 biggest German cities. However, you only have the money to build 5 new restaurants. Now you have to decide at which locations the restaurants should be built, so that the distance from the set of restaurants to each of those 20 cities is minimal. In other words, you want to pick locations in such a way that the maximum distance from a city to its nearest restaurant is minimized. You are looking at a 5-Center Problem in the two-dimensional Euclidean space!
The $k$-Center Problem asks for a cover of $n$ points by $k$ containers in such a way that the containers have a small radius. A container is uniquely defined by a center and a radius. The shape of a container is given by the metric used: in the case of the Euclidean metric, the containers are circular discs.
Let's look at the formal definition. Let $P \subset \mathbb{R}^n$ be a set of points and $c_i \in \mathbb{R}^n$, $i = 1,\ldots,k$, the centers of the containers. The $k$-Center Problem is then given by the minimization problem:
So for each point $p$ there exists a center $c_j$ such that the distance between $p$ and $c_j$ is less than or equal to $\rho$. Suppose we had a solution of the $k$-Center Problem. The value of the objective function is given by:
The $k$-Center Problem is $\mathbb{NP}$-complete, so there is no known efficient algorithm for solving it. For a small set of points and centers we can compute an exact solution, but the combinatorial explosion of possibilities causes an unacceptable runtime as the problem input grows. It is therefore necessary to calculate an approximation of the optimal solution. Unless $\mathbb{P} = \mathbb{NP}$, which has neither been proven nor disproven yet, there will be no efficient algorithm that gives us an optimal solution of the $k$-Center Problem.
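To illustrate that good approximations can nevertheless be computed quickly, here is a sketch of Gonzalez' farthest-first traversal, a classic 2-approximation for the Euclidean $k$-Center Problem. This is a standard textbook algorithm, not the Branch-and-Bound method presented on this page, and the function names are our own:

```python
import math

def farthest_first_centers(points, k):
    """Gonzalez' farthest-first traversal: a classic 2-approximation for
    the Euclidean k-Center Problem (illustration only, not this page's
    Branch-and-Bound algorithm). Centers are placed on input points."""
    centers = [points[0]]                     # start with an arbitrary point
    while len(centers) < k:
        # pick the point farthest from all chosen centers as the next center
        farthest = max(points,
                       key=lambda p: min(math.dist(p, c) for c in centers))
        centers.append(farthest)
    # radius of this cover: maximum distance of a point to its nearest center
    radius = max(min(math.dist(p, c) for c in centers) for p in points)
    return centers, radius
```

The returned radius is guaranteed to be at most twice the optimal radius, which makes this a useful quick baseline before running an exact method.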
Euclidean metric  
$\sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2}$  
One metric  
$\left| x_1 - x_2 \right| + \left| y_1 - y_2 \right|$  
Max metric  
$\max\{\left| x_1 - x_2 \right|, \left| y_1 - y_2 \right|\}$ 
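The three metrics above can be written directly as small Python functions for two points in the plane (the function names are ours, for illustration):

```python
import math

# The three metrics of this page, for two points p, q in the plane.
def euclidean(p, q):
    return math.sqrt((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)

def one_metric(p, q):          # also known as the Manhattan (taxicab) metric
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def max_metric(p, q):          # also known as the Chebyshev metric
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))
```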
Radius of your solution 
Cover all cities with the discs of the centers. Try to find good positions for the centers. You have to cover all cities!
There is no way back to the 'Play game' tab.
Radius of the approximate solution: 

Radius of your solution: 

Computation time: 
Approximated cover:  
Your cover: 
In this tab you can have a look at the approximate solution and the corresponding calculation steps.
The description of the algorithm provides more details on the algorithms applied in the different steps.
The Dilatation Problem is a simple optimization problem. Let $c_1, \ldots, c_k \in \mathbb{R}^n$ be centers and $P = \{p_1, \ldots, p_m\}$ cities (points). The Dilatation Problem asks for the city (point) $p^*$ that has the maximal distance to its nearest center:
$$p^* = \arg \max_{p \in P} \min_{j \in \{1,\ldots,k\}} \left\| p - c_j \right\|$$ $$\rho = \max_{p \in P} \min_{j \in \{1,\ldots,k\}} \left\| p - c_j \right\| = \min_{j \in \{1,\ldots,k\}} \left\| p^* - c_j \right\|$$So given are the set of cities and $k$ fixed centers. We are searching for a point and a radius $\rho$.
If we shift the centers and solve the Dilatation Problem again, we can compare both solutions and decide which positioning of the centers is better. The solution of the Dilatation Problem is therefore an upper bound for the solution of the $k$-Center Problem.
The algorithm for this problem is very simple: for each point $p \in P$ we search for the center nearest to $p$, and then we take the maximum over the distances of these pairs $(c_i, p_j)$. The complexity is $\mathcal{O}(m \cdot k)$.
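The $\mathcal{O}(m \cdot k)$ procedure can be sketched in a few lines of Python (function names are ours; any of the three metrics can be passed as `dist`):

```python
import math

def solve_dilatation(points, centers, dist=math.dist):
    """O(m*k) sketch of the Dilatation Problem: return the point p* with
    maximal distance to its nearest center, together with that radius rho."""
    p_star, rho = None, -1.0
    for p in points:
        nearest = min(dist(p, c) for c in centers)  # distance to nearest center
        if nearest > rho:
            p_star, rho = p, nearest
    return p_star, rho
```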
The One-Center Problem is the $k$-Center Problem with $k = 1$. We are asking for the center $c$ of a single container. The container has to cover all cities (points) $P = \{p_1, \ldots, p_m\}$. Let us define $$\phi(c) = \max_{p \in P}\left\| p - c \right\|.$$ This function is the solution of the Dilatation Problem for a single center $c$, so its value depends on the position of $c$. One can show that $\phi$ is a continuous and convex function, whichever of our three metrics we choose. The One-Center Problem asks for the minimum of this function.
Given is the set of cities and we are searching for a center $c$.
In the case of the Euclidean metric, a natural first guess for the minimizer of $\phi$ is the midpoint of the two points $p_i, p_j \in P$ that have the largest distance to each other. Note, however, that this midpoint is not always optimal: for an equilateral triangle the optimal center is the centroid, not the midpoint of an edge. A naive approach checks all pairs of points in $P$, which results in a quadratic runtime $\mathcal{O}(n^2)$, where $n$ is the number of points. Furthermore, this works only for the Euclidean metric.
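A quick sketch of this pair-checking approach (function names are ours). Keep in mind that the farthest-pair midpoint is only a heuristic: for an equilateral triangle it yields a covering radius larger than the optimal one, as the comments below point out.

```python
import math
from itertools import combinations

def midpoint_heuristic(points):
    """Farthest-pair midpoint heuristic for the Euclidean One-Center
    Problem: O(n^2) over all pairs. Caution: not always optimal -- for an
    equilateral triangle the optimal center is the centroid, not an edge
    midpoint."""
    p, q = max(combinations(points, 2), key=lambda pq: math.dist(*pq))
    center = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
    # radius actually needed to cover all points from this center
    radius = max(math.dist(center, r) for r in points)
    return center, radius
```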
Another approach is to approximate the nonlinear function $\phi$ near the optimal solution by linear functions. To this end we apply a Cutting-Plane Method. We first construct a linear container $H$ which covers all points in $P$; we call $H$ the approximation container. In each step we refine this container so that it becomes more like the container of the optimal solution. We choose suitable hyperplanes for this construction. At the beginning $H$ is a very simple container that just has to contain all points in $P$. Let $c_0$ be any center of our real container $Con$. A first upper bound is the value $\phi(c_0)$. We calculate a subgradient $\nabla \phi$ at $c_0$. This subgradient is simply a vector with special properties, and we can add it to $H$ as a refinement of the approximation container; so $\nabla \phi(c_0)$ is used for the cut. We then solve an $\mathcal{LP}$ that corresponds to $H$, which gives us a better center $c_1$ (the solution of the $\mathcal{LP}$) and a lower bound (the value of the objective function). In each step the approximation container is a superset of the container of the optimal solution, so we will never over-approximate.
Let $H' = \bigcap_{i=1}^{t} \{x : h_i^T x \leq 1\}$ be a linear container defined by $t$ hyperplanes ($H = \rho \cdot H'$), and let $p_1, \ldots, p_m \in P$. The $\mathcal{LP}$ looks as follows:
$$\min \rho$$ such that $$\rho + h_i^T c \geq \max_{1 \leq j \leq m} h_i^T p_j \quad \forall\, 1 \leq i \leq t$$$\rho$ and $c$ are the variables we are interested in. Let us look at a single constraint where $p_j$ is the point attaining the maximum. We can reformulate the constraint as follows:
$$\rho \geq h_i^T p_j - h_i^T c \iff h_i^T (p_j - c) \leq \rho$$This means: if we translate $p_j$ by the vector $-c$, the result has to lie inside the halfspace defined by $h_i^T x \leq \rho$. It follows that $P \subset c + \rho \cdot H'$ $\iff$ all constraints are fulfilled. Furthermore, one can show that if $g = \nabla\phi(c)$ is a subgradient at $c$, then $g$ is the outer normal vector of a (supporting) hyperplane of $c + \rho \cdot H'$. This proves that in each step all points of $P$ lie inside the approximation container and we never over-approximate.
The following figures may help to get a good intuition of the algorithm:
The procedure can be used for all metrics on this website, since for each of these metrics we can calculate a subgradient, and $\phi$ is a convex and continuous function.
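A full cutting-plane implementation needs an LP solver. As a simpler sketch that relies on the same ingredient, the subgradient of $\phi$, here is a plain subgradient-descent approximation of the Euclidean One-Center Problem. This is an illustration under assumed step sizes, not the cutting-plane method of this page, and it assumes at least two distinct points:

```python
import math

def phi_and_subgradient(c, points):
    """phi(c) = max distance from c to the points; in the Euclidean case a
    subgradient of phi at c is (c - p*) / ||c - p*|| for a farthest point p*.
    Assumes c does not coincide with the single farthest point."""
    p_star = max(points, key=lambda p: math.dist(c, p))
    rho = math.dist(c, p_star)
    g = ((c[0] - p_star[0]) / rho, (c[1] - p_star[1]) / rho)
    return rho, g

def one_center_subgradient(points, steps=2000):
    """Subgradient descent on phi with diminishing step sizes 1/sqrt(t)
    (an assumption of this sketch); returns the best center seen."""
    c = points[0]                         # start at an arbitrary point
    best_c, best_rho = c, phi_and_subgradient(c, points)[0]
    for t in range(1, steps + 1):
        rho, g = phi_and_subgradient(c, points)
        if rho < best_rho:
            best_c, best_rho = c, rho
        step = 1.0 / math.sqrt(t)
        c = (c[0] - step * g[0], c[1] - step * g[1])
    return best_c, best_rho
```

Because $\phi$ is convex, the best value seen converges to the optimal radius; the returned radius is always an upper bound on it.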
The $k$-Center Problem is reduced to the One-Center and the Dilatation Problem. We solve many instances of $k$ One-Center Problems, which represent the solution of a relaxed $k$-Center Problem instance. To this end we take only a subset of the cities $P$ into account and fix the assignment of the cities to the centers. We evaluate the solution, and if it is too far away from the lower bound, we try another assignment and continue. The most important factor is the assignment and the order of the different assignments. Furthermore, we reuse the calculated bounds of each solution for further calculations.
The algorithm that determines the order of the assignments and the reuse of the bounds is a Branch-and-Bound algorithm. Each node of the Branch-and-Bound tree is an instance of the relaxed $k$-Center Problem, i.e., a unique assignment of a subset of cities to centers. At the root, all containers are empty and the centers are at random positions. At the second level of the tree, the city that is farthest away from all centers is assigned to an arbitrary container. At the third level, the second-farthest city from the centers is assigned to a container, and so on. This order is crucial for taking only a small subset of cities into account: intuitively it makes sense to assign cities that are far apart to different containers, so that the containers are distributed over the whole map. Hence this order makes sense, and we obtain a relatively high lower bound early. The tree is traversed with the depth-first search strategy.
Since all containers are 'equal' as long as they are empty, they are indistinguishable from each other, and some combinations can be skipped. For example, the case where all containers are empty except container 1, which contains $p_1$, and the case where all containers are empty except container 2, which contains $p_1$, are indistinguishable.
Suppose we have an assignment of cities $P_1$ to the containers. In the next level we add one more unassigned city $p$ to $P_1$, so $P_2 := P_1 \cup \{p\}$. We choose the city $p$ that is farthest away from all containers and assign it to an arbitrary container. The solution of the relaxed $k$-Center Problem based on $P_1$ is a local lower bound for the $k$-Center Problem: if we do not change the assignment of $P_1$ but add more cities to any of the containers, we cannot improve. Since we consider all other assignments of $P_1$ at levels of the tree that are at most as deep, this is not a problem. Furthermore, the solution of the Dilatation Problem based on all points $P$ and the centers of the node is a local upper bound. The global upper bound is the minimum of all upper bounds calculated so far. If we are at a node where the local lower bound is larger than the global upper bound, we can stop and do not have to go deeper at this node: we can prune the subtree at this node. We continue until the global upper bound is close enough to a local lower bound and a required tolerance is reached.
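The bounding-and-pruning scheme can be sketched compactly if we use the max metric, where the One-Center Problem has a closed-form solution (the center of the bounding box). This is a simplified illustration with names of our choosing, not the website's implementation; it omits the farthest-first ordering of the cities and keeps only the symmetry breaking and the pruning rule:

```python
def one_center_max_metric(group):
    """Exact One-Center under the max metric: the center of the bounding
    box of the group, with radius half of the longer side."""
    xs = [p[0] for p in group]
    ys = [p[1] for p in group]
    c = ((min(xs) + max(xs)) / 2, (min(ys) + max(ys)) / 2)
    r = max(max(xs) - min(xs), max(ys) - min(ys)) / 2
    return c, r

def k_center_bnb(points, k):
    """Tiny exact Branch-and-Bound for the k-Center Problem under the max
    metric. The local lower bound of a node is the radius of the relaxed
    problem on the cities assigned so far; completed assignments update the
    incumbent (global upper bound); nodes whose lower bound reaches the
    incumbent are pruned."""
    best = {'rho': float('inf'), 'centers': None}
    groups = [[] for _ in range(k)]

    def lower_bound():
        return max((one_center_max_metric(g)[1] for g in groups if g),
                   default=0.0)

    def recurse(i):
        lb = lower_bound()
        if lb >= best['rho']:
            return                              # prune this subtree
        if i == len(points):                    # leaf: all cities assigned
            best['rho'] = lb
            best['centers'] = [one_center_max_metric(g)[0]
                               for g in groups if g]
            return
        tried_empty = False
        for g in groups:
            if not g:
                if tried_empty:                 # symmetry breaking:
                    continue                    # empty containers are equal
                tried_empty = True
            g.append(points[i])
            recurse(i + 1)
            g.pop()

    recurse(0)
    return best['centers'], best['rho']
```

Pruning is valid because assigning more cities to a container can never shrink its bounding box, so the local lower bound only grows along a branch.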
Look at the following example of an instance of a 2-Center Problem with the Euclidean metric. The white cities with the black border are assigned to containers.
The picture shows the solution of the relaxed 2-Center Problem with the local upper and lower bounds at a node of the Branch-and-Bound tree. The order is the order of the algorithm:
Level two (left exclusive branch): The local lower bound is 0. The marked point has the maximal distance and defines the local upper bound;
furthermore, this point will be assigned to container 1 or container 2 in the next level.
Level three (left branch): On the left the local upper bound is not equal to the local lower bound; on the right the opposite is the case.
On the right we have reached a leaf; on the left the marked point will be assigned to container 1 or 2 at the next level.
Level four (left branch): The local upper bound equals the local lower bound in both cases. The better of the two solutions, which defines the global upper bound, is the final solution.
You can observe that we only had to look at a subset of the points and at a very small number of assignments compared to the combinatorial possibilities. In this example we even computed the optimal solution in very few steps.
We use the subgradient to solve the One-Center Problem. The key idea is that we know that the optimal point $c$ lies on one side of the hyperplane defined by the subgradient, so all points on the other side are no longer of interest.
Let $\phi : \mathbb{R}^n \to \mathbb{R}$ be a convex function. A vector $g \in \mathbb{R}^n$ is called a subgradient of $\phi$ at $c_0$ if for all $c \in \mathbb{R}^n$
$$\phi(c) \geq \phi(c_0) + \langle g, c - c_0 \rangle$$where $\langle g, c - c_0 \rangle$ is the scalar product, in our case equal to $g^T (c - c_0)$. The subdifferential $\partial \phi$ at $c_0$ is the set of all subgradients of $\phi$ at $c_0$.
Let $\phi : \mathbb{R} \to \mathbb{R}, \phi : c \mapsto \left|c \right|$; then it follows: $$\partial \phi(c_0) = \begin{cases} \{-1\} & \quad \text{if } c_0 < 0\\ \left[-1,1\right] & \quad \text{if } c_0 = 0\\ \{1\} & \quad \text{if } c_0 > 0\\ \end{cases}$$
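For the function $\phi(c) = \max_{p \in P} \|p - c\|$ used on this page, a subgradient at $c_0$ in the Euclidean case is $(c_0 - p^*)/\|c_0 - p^*\|$, where $p^*$ is a farthest point. The subgradient inequality can be checked numerically; the snippet below (names and test points are our own) verifies it for many random $c$:

```python
import math
import random

# Numerical check of the subgradient inequality for phi(c) = max_p ||p - c||
# (Euclidean case): for a farthest point p*, g = (c0 - p*) / ||c0 - p*||
# is a subgradient, so phi(c) >= phi(c0) + <g, c - c0> for all c.

def phi(c, points):
    return max(math.dist(c, p) for p in points)

def subgradient(c0, points):
    p_star = max(points, key=lambda p: math.dist(c0, p))
    rho = math.dist(c0, p_star)
    return ((c0[0] - p_star[0]) / rho, (c0[1] - p_star[1]) / rho)

points = [(0.0, 0.0), (3.0, 0.0), (0.0, 4.0)]
c0 = (1.0, 1.0)
g = subgradient(c0, points)
random.seed(0)
for _ in range(1000):
    c = (random.uniform(-10, 10), random.uniform(-10, 10))
    lhs = phi(c, points)
    rhs = phi(c0, points) + g[0] * (c[0] - c0[0]) + g[1] * (c[1] - c0[1])
    assert lhs >= rhs - 1e-9
```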
Chair M9 of Technische Universität München conducts research in the fields of discrete mathematics, applied geometry and the optimization of mathematical problems. The problem and the corresponding algorithms presented here are a well-known example of the application of mathematics to a practical problem (and can be assigned to the field of combinatorial optimization). This page is meant to help the reader recognize the application of mathematics to real-life problems and to understand the simple mathematical solution methods used. The algorithms presented were chosen for their understandability; they do not represent fields of research of the chair.
To cite this page, please use the following information:
The calculation of the solution is still in progress. Please try again later.
The calculation of the solution has been aborted. Please return to the Create game tab and start a new game.