How fast can you communicate? Not how fast can you speak or type, but what is the fundamental physical limit on the rate of reliable information transfer through a noisy channel? Claude Shannon answered this question definitively in 1948 with the channel capacity theorem.
The Shannon-Hartley formula C = B·log₂(1 + S/N) is breathtakingly elegant. Channel capacity C (in bits per second) depends on just two physical parameters: the bandwidth B (how wide a frequency range the channel occupies) and the signal-to-noise ratio S/N (how strong the signal is relative to the noise). Double the bandwidth at a fixed SNR and you double the capacity. Double the SNR and, in the high-SNR regime, you gain roughly one extra bit per second per hertz of spectral efficiency.
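Both scaling claims fall straight out of the formula. A minimal sketch (the function name and parameter values are illustrative, not from the simulator):

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley capacity C = B * log2(1 + S/N), in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Doubling bandwidth at fixed SNR exactly doubles capacity.
c1 = shannon_capacity(1e6, 1000)   # 1 MHz channel at 30 dB SNR
c2 = shannon_capacity(2e6, 1000)
print(c2 / c1)                     # exactly 2.0

# At high SNR, doubling S/N adds close to log2(2) = 1 bit/s/Hz.
eff_a = shannon_capacity(1.0, 1000)   # spectral efficiency in bits/s/Hz
eff_b = shannon_capacity(1.0, 2000)
print(eff_b - eff_a)                  # approaches 1.0 as SNR grows
```

At low SNR the second rule breaks down: log₂(1 + S/N) ≈ (S/N)·log₂e there, so capacity grows linearly with signal power rather than logarithmically.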
The theorem has two parts, and both are essential. The achievability result says that for any rate R < C, there exists a coding scheme that achieves arbitrarily low error probability. The converse says that for any rate R > C, the error probability is bounded away from zero no matter how clever the code. Together, they establish C as a sharp threshold between possible and impossible.
This simulator visualizes both the theoretical limit and practical modulation schemes. The left panel shows how capacity scales with SNR, with the Shannon limit as a smooth curve and practical modulations as stepped lines below it. The gap between a modulation scheme and the Shannon curve represents the efficiency lost by using a finite constellation.
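The stepped lines have a simple origin: a constellation with M points carries at most log₂(M) bits per symbol, no matter how high the SNR climbs. A hedged sketch of that flat-cap model (it ignores coding gain and target error rates, and the modulation list is illustrative):

```python
import math

def shannon_eff(snr_db):
    """Shannon limit on spectral efficiency (bits/s/Hz) at a given SNR in dB."""
    return math.log2(1 + 10 ** (snr_db / 10))

# Each M-point constellation saturates at log2(M) bits/symbol, which is why
# it plots as a flat step under the smooth Shannon curve.
for snr_db in (10, 20, 30):
    limit = shannon_eff(snr_db)
    for name, m in (("QPSK", 4), ("16-QAM", 16), ("64-QAM", 64)):
        step = min(math.log2(m), limit)
        gap = limit - step
        print(f"{snr_db:3d} dB  {name:>6}: {step:.2f} bits/s/Hz "
              f"(gap to Shannon: {gap:.2f})")
```

Reading the output: at 30 dB the Shannon limit is nearly 10 bits/s/Hz, so even 64-QAM (6 bits/symbol) leaves a large gap, while at 10 dB the limit itself is the binding constraint for the denser constellations.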
The right panel shows the constellation diagram — the geometric representation of the modulation scheme. Each dot represents a possible transmitted symbol. At high SNR, the noise clouds around each point are tight and well-separated. As SNR decreases, the clouds expand and begin to overlap, making it impossible for the receiver to reliably distinguish between symbols. This is the geometric intuition behind the capacity limit: you can only pack as many distinguishable symbols as the noise allows.
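The overlapping-clouds picture can be made quantitative with a small Monte Carlo experiment: transmit QPSK symbols through additive Gaussian noise and count how often a minimum-distance receiver decides wrongly. This is a sketch under the stated normalization (unit symbol energy, SNR defined as Es/N₀), not the simulator's own code:

```python
import math
import random

def qpsk_ser(snr_db, n_symbols=50_000, seed=0):
    """Monte Carlo symbol error rate for QPSK over an AWGN channel,
    with minimum-distance (nearest-constellation-point) decisions."""
    rng = random.Random(seed)
    snr = 10 ** (snr_db / 10)
    a = math.sqrt(0.5)                  # unit-energy QPSK: (+-a, +-a)
    sigma = math.sqrt(1 / (2 * snr))    # per-dimension noise std dev
    errors = 0
    for _ in range(n_symbols):
        tx = (rng.choice((a, -a)), rng.choice((a, -a)))
        rx = (tx[0] + rng.gauss(0, sigma), tx[1] + rng.gauss(0, sigma))
        # For QPSK, nearest-point decision reduces to taking signs.
        decided = (math.copysign(a, rx[0]), math.copysign(a, rx[1]))
        errors += decided != tx
    return errors / n_symbols

print(qpsk_ser(3))    # low SNR: clouds overlap, errors are frequent
print(qpsk_ser(15))   # high SNR: clouds are tight, errors become rare
```

The two printed rates make the geometric story concrete: shrink the noise standard deviation relative to the spacing between constellation points and the error rate collapses toward zero.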