Every digital communication system faces the same fundamental problem: noise corrupts data. Bits flip, signals fade, and interference intrudes. Error correction codes solve this by adding structured redundancy that enables the receiver to detect and fix errors without asking for retransmission.
The simplest approach is repetition: send each bit three times and take a majority vote at the receiver. This corrects any single-bit error within each triple, but it cuts the useful data rate to one-third of the channel. Richard Hamming, frustrated by error-prone punched-card readers at Bell Labs, invented a far more elegant solution in 1950.
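The repetition scheme fits in a few lines. A minimal sketch (the function names are illustrative, not part of the simulator):

```python
def encode_rep3(bits):
    """Repetition-3 encoder: send each bit three times."""
    return [b for b in bits for _ in range(3)]

def decode_rep3(coded):
    """Majority vote over each triple: 2-of-3 ones decodes to 1."""
    return [1 if sum(coded[i:i + 3]) >= 2 else 0
            for i in range(0, len(coded), 3)]
```

A single flipped bit in any triple is outvoted by the other two copies, so the decoder still recovers the original bit; two flips in the same triple, however, defeat the vote.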
Hamming(7,4) encodes 4 data bits into 7 code bits by inserting 3 parity bits at positions 1, 2, and 4 (the powers of two). Each parity bit covers a specific overlapping subset of the bit positions. When a single-bit error occurs during transmission, the receiver computes the syndrome: the result of re-checking all three parity equations. The syndrome is a 3-bit number whose value is exactly the position of the flipped bit (0 means no error). The receiver simply flips that bit to recover the original data.
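The encode and decode steps above can be sketched directly. This is a standard Hamming(7,4) layout (parity bits at codeword positions 1, 2, 4); the function names are illustrative:

```python
def hamming74_encode(d):
    """Encode 4 data bits as [p1, p2, d1, p3, d2, d3, d4] (positions 1..7)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4      # checks positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4      # checks positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4      # checks positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Recompute the parity checks; a nonzero syndrome names the bad position."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 0 = clean, else 1-based error position
    if syndrome:
        c[syndrome - 1] ^= 1          # correct the single flipped bit
    return [c[2], c[4], c[5], c[6]]   # extract the data bits
```

Because each bit position is covered by a unique combination of the three checks, the syndrome bits spell out the error position in binary, which is why the parity bits sit at the power-of-two positions.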
This simulator lets you transmit hundreds of data blocks through a noisy channel and compare three strategies: no coding (raw transmission), repetition coding, and Hamming coding. Watch as errors (shown in red) corrupt the transmitted bits, parity bits (shown in cyan) enable detection, and the decoder corrects the damage.
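The kind of comparison the simulator animates can also be run as a quick Monte Carlo experiment. A minimal sketch assuming a binary symmetric channel with flip probability `p` (function name and parameters are illustrative, not the simulator's API):

```python
import random

def simulate(p=0.05, n_bits=30_000, seed=1):
    """Compare decoded bit-error rates: raw vs. repetition-3 over a BSC."""
    rng = random.Random(seed)
    flip = lambda b: b ^ (rng.random() < p)   # channel flips a bit w.p. p
    data = [rng.randint(0, 1) for _ in range(n_bits)]
    # Raw: each bit crosses the channel once, errors go uncorrected.
    raw_errors = sum(flip(b) != b for b in data)
    # Repetition-3: three independent transmissions, majority vote.
    rep_errors = sum((flip(b) + flip(b) + flip(b) >= 2) != b for b in data)
    return raw_errors / n_bits, rep_errors / n_bits
```

With p = 0.05, the raw error rate stays near 5% while the repetition decoder fails only when two or more of the three copies flip, roughly 3p²(1-p) + p³ ≈ 0.7%.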
The deeper lesson is Shannon's channel coding theorem: for any noisy channel, there exist coding schemes that achieve arbitrarily low error rates at any transmission rate below the channel capacity. Hamming codes were an early, practical step toward that theoretical promise; modern codes such as LDPC and turbo codes come remarkably close to the Shannon limit.
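For the binary symmetric channel used in this kind of simulation, the capacity has a closed form, C = 1 - H(p), where H is the binary entropy function. A small sketch (function name is illustrative):

```python
from math import log2

def bsc_capacity(p):
    """Capacity in bits per channel use of a BSC with flip probability p."""
    if p in (0.0, 1.0):
        return 1.0                    # noiseless (or deterministically inverted)
    h = -p * log2(p) - (1 - p) * log2(1 - p)   # binary entropy H(p)
    return 1 - h
```

At p = 0.05 the capacity is about 0.71 bits per channel use, so Hamming(7,4)'s rate of 4/7 ≈ 0.57 sits below capacity, while repetition-3's rate of 1/3 gives away far more than the theorem requires.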