The maximum cut problem is one of those combinatorics problems that are deceptively easy to articulate and yet NP-annoying to solve. Given an undirected graph $G=(V,E)$, find a partition of $V$ into disjoint subsets $A$ and $B$ such that the set of edges in $E$ with one endpoint in $A$ and the other in $B$ (the "cut set") is as large as possible. There is also a weighted version, where you maximize the total weight of the cut set rather than its cardinality.
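For concreteness, here is one way to compute the size of a cut in R (the language I'll use later in this post). The edge-list representation, with one row per edge, is just a convenient choice for the sketches that follow; it is not necessarily how the notebook linked at the end stores the graph.

```r
# Size of the cut induced by a choice of A. 'n' is the number of
# vertices, 'edges' is a two-column matrix of vertex indices (one row
# per edge), and 'A' is a vector of the vertices assigned to set A.
cut_size <- function(n, edges, A) {
  inA <- logical(n)
  inA[A] <- TRUE
  # an edge is in the cut set when exactly one endpoint lies in A
  sum(xor(inA[edges[, 1]], inA[edges[, 2]]))
}
```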
Someone asked on OR Stack Exchange about a variant of the problem in which one of the sets has a specified size (which the author called a "budget"), i.e., a constraint $|A|=b$ for some given $b$. This generated various suggestions, including some heuristics. I had a couple of thoughts about possible heuristics, and in playing with some computational examples I found myself rather surprised by the results.
Mixed Integer Program
It is of course quite straightforward to write an integer or mixed-integer linear program for the problem. I'll assume that $V=\lbrace 1,\dots,N\rbrace$ and that $2\le b \le N-1$. Let variable $x_i$ be 1 if vertex $i\in V$ is in $A$ and 0 if it is in $B$, and let variable $y_{ij}$ be 1 if edge $(i,j)\in E$ is in the cut set ($i\in A$ and $j\in B$ or vice versa) and 0 if not ($i$ and $j$ both in $A$ or both in $B$).
Please allow me a slight digression here. The $y$ variables can be declared either continuous or binary. The combination of objective and constraints will force the values to 0 or 1 even if they are declared as continuous. Force of habit, dating back to when I was using very early generation software on not particularly powerful (by current standards) computers, has me declaring every variable as continuous unless it has to be integer. On the other hand, it has been suggested to me that declaring such variables integer-valued may help solvers tighten bounds or fix variables. That makes sense to me, but in this particular model I don't see anything that would suggest making $y$ integer would help the solver.
Digression over, we can write the MIP model as
\begin{alignat*}{1}
\max\sum_{(i,j)\in E}y_{ij}\\
\textrm{s.t. }\sum_{i\in V}x_{i} & =b\\
y_{ij} & \le x_{i}+x_{j}\quad\forall(i,j)\in E\\
y_{ij} & \le2-x_{i}-x_{j}\quad\forall(i,j)\in E\\
x_{i} & \in\left\{ 0,1\right\} \quad\forall i\in V\\
y_{ij} & \in\left[0,1\right]\quad\forall(i,j)\in E.
\end{alignat*}
The first constraint enforces the "budget", while the second and third constraints ensure that $y_{ij}=0$ if $i$ and $j$ are on the same side of the partition. The objective function will force $y_{ij}=1$ if $i$ and $j$ are on different sides. The MIP model is easy to solve for small instances and grows more difficult as the numbers of vertices and edges grow, and as $b$ gets closer to $N/2$ (since the number of possible partitions satisfying the budget increases).
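To make the model concrete, here is a sketch of how it might be assembled in R. My notebook solves the MIP with CPLEX, but to keep this sketch self-contained it uses the free GLPK solver via the Rglpk package instead, building the constraint matrix explicitly.

```r
library(Rglpk)

# Solve the budgeted max cut MIP. Variables are x[1..n] followed by
# y[1..m], where m is the number of edges.
solve_budgeted_maxcut <- function(n, edges, b) {
  m <- nrow(edges)
  nv <- n + m
  obj <- c(rep(0, n), rep(1, m))          # maximize the sum of the y's
  rows <- list(c(rep(1, n), rep(0, m)))   # budget row: sum_i x_i = b
  dirs <- "=="
  rhs <- b
  for (e in seq_len(m)) {
    i <- edges[e, 1]; j <- edges[e, 2]
    r1 <- r2 <- rep(0, nv)
    r1[c(i, j, n + e)] <- c(-1, -1, 1)    # y_e - x_i - x_j <= 0
    r2[c(i, j, n + e)] <- c( 1,  1, 1)    # y_e + x_i + x_j <= 2
    rows <- c(rows, list(r1, r2))
    dirs <- c(dirs, "<=", "<=")
    rhs <- c(rhs, 0, 2)
  }
  types <- c(rep("B", n), rep("C", m))    # y continuous, as discussed above
  bounds <- list(upper = list(ind = (n + 1):nv, val = rep(1, m)))
  Rglpk_solve_LP(obj, do.call(rbind, rows), dirs, rhs,
                 bounds = bounds, types = types, max = TRUE)
}
```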
Genetic Algorithm
Since the max cut problem is NP-hard, I suspect the budgeted version is as well, and in any case solving the MIP to optimality is likely to take too long (or use too much memory) in many cases. That led me to thinking about heuristics. One easy heuristic is to solve the MIP model but not to optimality, stopping at some arbitrary time or memory/node limit. That requires a MIP solver, though. Another possibility (among many) is a genetic algorithm.
I was fooling around with the problem in R, which has a very handy genetic algorithm library (named, rather suggestively, GA). Among other things, the GA library allows you to use a permutation as a "chromosome" (solution encoding). So we can define a candidate solution to be a permutation of $1,\dots,N$, and its fitness is then the size of the cut set defined by setting $A$ equal to the first $b$ elements of the permutation.
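A minimal sketch of that setup, reusing the cut_size helper from above (the stopping settings shown mirror the ones described in the next section):

```r
library(GA)

# Fitness of a permutation: the cut size obtained by letting A be the
# first b entries. N, b and edges are assumed to exist in the workspace.
perm_fitness <- function(perm) cut_size(N, edges, perm[1:b])

result <- ga(type = "permutation", fitness = perm_fitness,
             lower = 1, upper = N, maxiter = 10000, run = 200)
bestA <- result@solution[1, 1:b]   # one best-found choice of A
```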
Pairwise Swap
Another possibility that I suggested on OR SE was a simple pairwise swapping heuristic. Start by generating a random choice of $A$ (and, by implication, $B$) and calculate the size of the cut set. Now consider all pairs of vertices $i\in A$ and $j\in B$ and, for each pair, see what happens if you swap them (moving $i$ to $B$ and $j$ to $A$). If the size of the cut set increases, accept the swap; otherwise do not. This is a primitive example of a "neighborhood" search, and it can be used in fancier heuristics, including simulated annealing (where you occasionally accept a swap that actually makes the objective smaller, in order to escape a local optimum). To keep things simple, I suggested just doing pairwise swaps when the objective improved, with random restarts. That means assigning a time limit to the heuristic and, whenever you run out of improving swaps, starting fresh with a new random solution until time is exhausted.
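Here is a sketch of a single pass of the swapping heuristic from one random start; the restart loop and the time limit are omitted to keep it short.

```r
# One pass of the pairwise-swap heuristic from a random start. Reuses
# the cut_size helper defined earlier.
swap_pass <- function(n, edges, b) {
  inA <- logical(n)
  inA[sample.int(n, b)] <- TRUE          # random initial choice of A
  best <- cut_size(n, edges, which(inA))
  repeat {
    improved <- FALSE
    for (i in which(inA)) {
      for (j in which(!inA)) {
        inA[c(i, j)] <- c(FALSE, TRUE)   # trial swap: i -> B, j -> A
        trial <- cut_size(n, edges, which(inA))
        if (trial > best) {
          best <- trial
          improved <- TRUE
          break                          # rescan from the new partition
        }
        inA[c(i, j)] <- c(TRUE, FALSE)   # revert the swap
      }
      if (improved) break
    }
    if (!improved) break                 # no improving swap: local optimum
  }
  list(A = which(inA), cut = best)
}
```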
And the surprise is ...
I coded the MIP model (solved with CPLEX), the GA and the swapping heuristic in R and ran them on a few random test cases, keeping dimensions small enough that CPLEX could solve the MIP reasonably quickly. Since the swapping heuristic relies on a time limit to stop it, I set a limit of two minutes. For the GA, I set a limit of 10,000 generations (which it never reached, as I expected) and a stagnation limit of 200 generations (meaning that, after 200 consecutive generations with no improvement in the best objective value, it would terminate). Early experiments suggested that the GA could stagnate pretty quickly, so I used an "island" GA (in which several smaller populations on "islands" evolve independently, with periodic migrations from one island to another to freshen the "genetic pools" on the islands).
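For reference, the island variant can be invoked along the following lines using the GA package's gaisl() function; the island count and migration settings shown are illustrative, not necessarily the values that produced the results below.

```r
# Island GA variant using gaisl() from the GA package, with the same
# permutation fitness as before. Island and migration settings here are
# illustrative, not tuned values.
isl <- gaisl(type = "permutation", fitness = perm_fitness,
             lower = 1, upper = N,
             numIslands = 4, migrationRate = 0.1, migrationInterval = 10,
             maxiter = 10000, run = 200)
```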
My expectation was that the "sophisticated" GA would outperform the "primitive" swapping heuristic. Wrong! In the table below, I show the optimality gap (percent worse than the optimum, as computed by CPLEX) for the GA and the swap heuristic on six examples. Since the GA stagnates well before the two-minute limit I gave the swap heuristic, I also ran the GA with restarts (column "GA-R"), which restarted the GA with a new population each time it stagnated until the two-minute limit was reached. The restarts improved the GA performance somewhat, but as you can see the swapping heuristic beat it on every one of the six test cases (and found the optimum in four out of six).
| Nodes | Edges | Budget (b) | GA gap | GA-R gap | Swap gap |
|------:|------:|-----------:|-------:|---------:|---------:|
| 50 | 429 | 10 | 1.7% | 1.7% | 0.0% |
| 50 | 490 | 20 | 3.0% | 0.7% | 0.0% |
| 75 | 416 | 20 | 6.4% | 4.3% | 0.0% |
| 75 | 416 | 30 | 6.8% | 4.7% | 0.4% |
| 100 | 495 | 30 | 8.4% | 7.8% | 1.6% |
| 100 | 990 | 25 | 9.1% | 4.4% | 0.0% |
I don't expect the swapping heuristic to find the optimum this consistently on larger problems. Whether it would outperform the GA (with restarts) on tougher problems is an open question. Nonetheless, I take this as a reminder that sometimes simple heuristics can do a pretty good job.
My R notebook is available if you want to play with it. Fair warning: the notebook assumes you have CPLEX installed and loads seven R libraries (four for use with CPLEX, one for the GA, one to time execution of the heuristics and a separate one to set a time limit on the heuristics). If some of that does not interest you, you can of course edit out the related code and skip loading the corresponding libraries.