Quantum Outpost

Pre-registered benchmark · Reality Check #2

MaxCut: Goemans-Williamson vs QAOA

Random 3-regular graphs at n ∈ {8, 12, 16, 24}. Classical SDP with randomized rounding (Goemans-Williamson, α ≥ 0.878) versus QAOA at depths 1, 3, and 5. Pre-registered seeds; notebook run on a CPU simulator; we publish whichever direction the numbers point.

Pre-registered: 2026-04-23 · Run published: 2026-04-25 · Notebook on GitHub →

TL;DR

Goemans-Williamson wins at every problem size we tested. QAOA-1 averages an approximation ratio of ~0.70 (worse than GW's worst-case 0.878 guarantee). Warm-started QAOA-5 gets to ~0.91 on small instances but still trails GW's average of ~0.93, and the gap widens with size. No quantum advantage observed — consistent with the broader literature on near-term QAOA performance.

Pre-registration

Committed to git on 2026-04-23, before any runs.

Results

n     GW      QAOA-1   QAOA-3 (warm)   QAOA-5 (warm)
8     0.945   0.728    0.881           0.913
12    0.939   0.708    0.864           0.897
16    0.932   0.698    0.852           0.881
24    0.928   0.689    —               —

Mean approximation ratio across 25 instances per size. QAOA-3/5 omitted at n = 24 due to simulator runtime (exact statevector at 24 qubits is borderline at our depth).

Code (excerpt)

import cvxpy as cp
import networkx as nx
import numpy as np
import pennylane as qml
from scipy.optimize import minimize


def goemans_williamson(G, hyperplanes=200, seed=0):
    """SDP relaxation of MaxCut plus random-hyperplane rounding."""
    n = G.number_of_nodes()
    # Relaxation: maximise sum over edges of 0.5 * (1 - X_ij)
    # subject to X PSD with unit diagonal.
    X = cp.Variable((n, n), symmetric=True)
    constraints = [X >> 0] + [X[i, i] == 1 for i in range(n)]
    obj = sum(0.5 * (1 - X[i, j]) for i, j in G.edges())
    cp.Problem(cp.Maximize(obj), constraints).solve()
    # Recover unit vectors; the jitter keeps Cholesky stable when X
    # is only numerically PSD.
    V = np.linalg.cholesky(X.value + 1e-8 * np.eye(n))
    rng = np.random.default_rng(seed)
    best = -1
    for _ in range(hyperplanes):
        # Round: split the vectors with a random hyperplane through the origin.
        r = rng.normal(size=n)
        z = np.sign(V @ r)
        z[z == 0] = 1  # break exact ties deterministically
        cut = sum(z[i] != z[j] for i, j in G.edges())
        best = max(best, cut)
    return best


def maxcut_hamiltonian(G):
    # Cost Hamiltonian C = sum over edges of 0.5 * (I - Z_i Z_j);
    # its expectation value is the expected cut size.
    coeffs, obs = [], []
    for i, j in G.edges():
        coeffs += [0.5, -0.5]
        obs += [qml.Identity(i), qml.PauliZ(i) @ qml.PauliZ(j)]
    return qml.Hamiltonian(coeffs, obs)


def qaoa_circuit(G, p):
    n = G.number_of_nodes()
    dev = qml.device("default.qubit", wires=n)

    @qml.qnode(dev)
    def cost(theta):
        gammas, betas = theta[:p], theta[p:]
        for w in range(n):  # start in the uniform superposition
            qml.Hadamard(wires=w)
        for layer in range(p):
            for i, j in G.edges():  # cost layer: exp(-i * gamma * Z_i Z_j)
                qml.CNOT(wires=[i, j])
                qml.RZ(2 * gammas[layer], wires=j)
                qml.CNOT(wires=[i, j])
            for w in range(n):  # mixer layer: exp(-i * beta * X)
                qml.RX(2 * betas[layer], wires=w)
        return qml.expval(maxcut_hamiltonian(G))

    def neg(theta):  # we maximise the cut, so minimise the negation
        return -cost(theta)

    res = minimize(neg, np.array([0.4] * p + [0.3] * p), method="COBYLA",
                   options={"maxiter": 200})
    return -res.fun  # expected cut value at the optimised angles


def warm_start(G, p, gw_seed=0):
    """Initialize QAOA params from the Goemans-Williamson solution."""
    # Map the GW rounded bit-string into a small Ry rotation per qubit
    # and use the resulting state as the initial state for QAOA.
    # (Egger-Mareček-Woerner 2020 construction.)
    ...


# Run the experiment
sizes = [8, 12, 16, 24]
results = {n: {"gw": [], "qaoa1": [], "qaoa3w": [], "qaoa5w": []} for n in sizes}
for n in sizes:
    for seed in range(25):
        G = nx.random_regular_graph(3, n, seed=seed)
        # Exact optima; both helpers are defined in the full notebook.
        opt = brute_force_maxcut(G) if n <= 16 else gurobi_maxcut(G)
        results[n]["gw"].append(goemans_williamson(G, seed=seed) / opt)
        results[n]["qaoa1"].append(qaoa_circuit(G, p=1) / opt)
        if n < 24:
            results[n]["qaoa3w"].append(warm_start(G, p=3) / opt)
            results[n]["qaoa5w"].append(warm_start(G, p=5) / opt)

# Print mean approximation ratios
for n in sizes:
    print(n, {k: np.mean(v) if v else None for k, v in results[n].items()})

Full notebook with seeds, validation, and Gurobi license setup at the GitHub link above.
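The loop above calls brute_force_maxcut without showing it. One plausible stdlib-only version is sketched below; this is an assumption, not the notebook's helper, which presumably takes a networkx graph directly (pass `list(G.nodes()), list(G.edges())` to adapt):

```python
import itertools


def brute_force_maxcut(nodes, edges):
    """Exact MaxCut by exhaustive search; practical up to ~20 nodes."""
    best = 0
    # Fix the first node's side to halve the 2^n search space.
    for bits in itertools.product((0, 1), repeat=len(nodes) - 1):
        side = dict(zip(nodes[1:], bits))
        side[nodes[0]] = 0
        best = max(best, sum(side[u] != side[v] for u, v in edges))
    return best
```

At n = 16 this enumerates 2^15 bipartitions, which is why the notebook switches to Gurobi at n = 24.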

What this means

QAOA-1's average approximation ratio of ~0.70 matches the 0.6924 worst-case guarantee Farhi et al. (2014) derived analytically for depth-1 QAOA on 3-regular graphs. Goemans-Williamson is provably ≥ 0.878 in the worst case and averages closer to 0.93 on random 3-regular instances.

Warm starting (Egger-Mareček-Woerner 2020) lifts QAOA's performance substantially — at p = 5 we observe ~0.91 on small instances — but the depth needed to match GW grows with n, and noise on real hardware would erode the advantage further. There is no instance size in this benchmark where QAOA is the right tool.
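The state-preparation step of that warm start (elided in the excerpt above) can be sketched as follows. This is a minimal illustration of the Egger-Mareček-Woerner idea, not the notebook's exact code; the regularization parameter eps is a free choice:

```python
import numpy as np


def warm_start_angles(bits, eps=0.25):
    """Per-qubit RY angles for an Egger-Mareček-Woerner-style warm start.

    Each rounded GW bit b_i is regularized to c_i in (0, 1) so the QAOA
    mixer can still move it; theta_i = 2 * arcsin(sqrt(c_i)) then prepares
    RY(theta_i)|0>, which measures |1> with probability exactly c_i.
    (Sketch only; eps = 0.25 is an arbitrary illustrative value.)
    """
    c = np.where(np.asarray(bits) == 1, 1 - eps, eps)
    return 2 * np.arcsin(np.sqrt(c))
```

Each angle would seed a qml.RY on the corresponding wire in place of the Hadamard layer; the full construction also rotates the mixer to match the warm-started state.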

That said, QAOA's role isn't to beat GW on MaxCut. Where it might still matter: problems with no good classical SDP relaxation (some Max-k-SAT variants, fermionic optimization, problems embedded in quantum-native data). For MaxCut specifically — use Goemans-Williamson, or simulated annealing, or any of the existing commercial solvers (Gurobi, CPLEX). Reach for QAOA when you have a specific structural reason quantum should help, not as a default.
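For context, the simulated-annealing baseline mentioned above can be this small. A hypothetical stdlib-only sketch for integer-labelled graphs given as adjacency lists (schedule and step count are arbitrary choices, not tuned values from the notebook; for a networkx graph, pass `{v: list(G.neighbors(v)) for v in G}`):

```python
import math
import random


def sa_maxcut(adj, steps=20000, t0=2.0, t1=0.01, seed=0):
    """Single-flip simulated annealing for MaxCut on an adjacency-list graph."""
    rng = random.Random(seed)
    nodes = list(adj)
    spin = {v: rng.choice((-1, 1)) for v in nodes}
    edges = {(u, v) for u in adj for v in adj[u] if u < v}
    cut = sum(spin[u] != spin[v] for u, v in edges)
    best = cut
    for step in range(steps):
        t = t0 * (t1 / t0) ** (step / steps)  # geometric cooling schedule
        v = rng.choice(nodes)
        # Flipping v turns each cut incident edge uncut and vice versa.
        delta = sum(1 if spin[v] == spin[u] else -1 for u in adj[v])
        if delta >= 0 or rng.random() < math.exp(delta / t):
            spin[v] = -spin[v]
            cut += delta
            best = max(best, cut)
    return best
```

On the instance sizes in this benchmark, a loop like this finds optimal or near-optimal cuts in milliseconds, which is the bar any quantum approach has to clear.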

Caveats


This is the second entry in the QML Reality Check series. For the methodology and editorial stance, see Editorial Independence.

Weekly dispatch

Quantum, for people who already code.

One serious tutorial per week, plus the industry moves that actually matter. No hype, no hand-waving.

Free. Unsubscribe anytime. We will never sell your email.