12 May 2005

Quantum error correction fails only when you don't use it

This (or a more formal version) is going to get posted to the arXiv, unless one of my quantum compadres tells me why I shouldn't do it.

In "Quantum error correction fails for Hamiltonian models" (quant-ph/0411008), Alicki argues that when our controls are Hamiltonians of bounded strength (say the gate time is t_0), error correction fails. How badly does it fail? Well, suppose we want to protect a single logical qubit, and have M physical qubits, which are subject to independent depolarizing noise at rate lambda2. Then he says that every encode-protect-decode cycle of arbitrary length has fidelity no greater than exp(-M t_0 lambda2). First of all, I think there must be a mistake somewhere since increasing M shouldn't necessarily make the system decohere faster; after all you can always add qubits that you don't use. Worryingly, I can't find the mistake... Second of all, this doesn't mean that fault-tolerance, or even error-correction, don't work. His analysis (minus the math) is roughly as follows. You start and end with an unencoded qubit. The Hamiltonian has finite strength, so the qubit must be unencoded for the first O(t_0) time and the last O(t_0) time. During this time errors act on it.

This means the problem isn't that your Hamiltonians aren't infinitely fast. The problem is letting a qubit sit around unprotected by any code. In FTQC this doesn't happen: computations map one encoded state to another. Ergo, no problem.
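
Here is a classical caricature of that point, entirely my own and with made-up numbers: a bit left bare accumulates errors round after round, while the same noise acting on an encoded bit gets mopped up by correction, so staying encoded is what matters.

    import random

    # Classical caricature (not from either paper): each copy of a bit flips
    # with probability p per round. Compare leaving the bit bare with keeping
    # it in a 3-copy repetition code corrected by majority vote every round.
    p = 0.001
    rounds = 1000
    trials = 2000
    random.seed(0)

    def flip(bit):
        # Flip the bit with probability p.
        return bit ^ (random.random() < p)

    fail_bare = 0
    fail_encoded = 0
    for _ in range(trials):
        # Unprotected: the bare bit gets hit by noise every round.
        bit = 0
        for _ in range(rounds):
            bit = flip(bit)
        fail_bare += bit != 0

        # Protected: the same per-copy noise hits three copies, but a
        # majority vote restores the codeword after every round.
        copies = [0, 0, 0]
        for _ in range(rounds):
            copies = [flip(c) for c in copies]
            maj = int(sum(copies) >= 2)
            copies = [maj, maj, maj]
        fail_encoded += copies[0] != 0

    print("failure rate, bare bit:      ", fail_bare / trials)
    print("failure rate, stays encoded: ", fail_encoded / trials)

With these numbers the bare bit ends up close to randomized, while the encoded one fails only a fraction of a percent of the time.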

This argument could be better written, but hopefully the point is clear.

1 comment:

Anonymous said...

From what I read of the Alicki result, I agree totally.

In fact, I would go even further and argue that unencoded computation of ANY form (classical or quantum) does not exist (or we've never seen it). While we like to think our bits are perfect little pieces of information, they are nothing of the sort; they only appear so through the digitizing effect of majority voting.
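
To put a rough number on that digitizing effect (the per-copy flip probability below is made up; only the scaling matters): store one bit in n noisy copies and take a majority vote, and the logical error rate collapses very quickly with n.

    from math import comb

    # Probability that a majority vote over n copies gives the wrong answer,
    # when each copy flips independently with probability p (hypothetical).
    p = 1e-3

    def majority_failure(n, p):
        # More than half of the n copies must flip for the vote to fail.
        return sum(comb(n, k) * p**k * (1 - p)**(n - k)
                   for k in range(n // 2 + 1, n + 1))

    for n in (1, 3, 5, 7):
        print(f"{n} copies: logical error probability = {majority_failure(n, p):.1e}")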