The Timing Hypothesis
In 1949, Donald Hebb postulated that synapses strengthen when pre- and postsynaptic neurons fire together. But 'together' was vague. In 1998, Guo-qiang Bi and Mu-ming Poo showed that precise, millisecond-scale timing is what matters: if a presynaptic spike precedes the postsynaptic spike by up to roughly 20 ms, the synapse potentiates; reverse the order and it depresses. This asymmetric window, the spike-timing-dependent plasticity (STDP) rule, gave Hebb's postulate a mathematical backbone.
The STDP Learning Window
The STDP curve plots the synaptic weight change ΔW against the spike-timing difference Δt = t_post − t_pre. For positive Δt (causal pairing), ΔW follows an exponential decay: ΔW = A+ × exp(−Δt/τ+). For negative Δt (anti-causal pairing), ΔW = −A− × exp(Δt/τ−). The amplitudes A+, A− and time constants τ+, τ− are tunable parameters that differ across brain regions and cell types. A slight bias toward depression (typically A−τ− > A+τ+, so the window integrates to a net-negative value) means uncorrelated inputs weaken on average, which drives competition among synapses.
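The two-sided exponential window above can be written directly as a function of Δt. This is a minimal sketch of the pair-based rule; the parameter values (A+ = 0.01, A− = 0.012, τ+ = τ− = 20 ms) are illustrative assumptions, not measurements from any particular preparation.

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change for a spike-timing difference
    dt = t_post - t_pre, in milliseconds. Parameter values are
    illustrative defaults, not experimental constants."""
    if dt > 0:
        # causal: pre fires before post -> potentiation
        return a_plus * math.exp(-dt / tau_plus)
    if dt < 0:
        # anti-causal: post fires before pre -> depression
        return -a_minus * math.exp(dt / tau_minus)
    return 0.0  # exactly simultaneous spikes: no change in this sketch

print(stdp_dw(10.0))   # positive (potentiation)
print(stdp_dw(-10.0))  # negative (depression)
```

With A− > A+ and equal time constants, the depression lobe integrates to a larger area than the potentiation lobe, giving the net-negative window described above.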
Self-Organization Without a Teacher
STDP enables unsupervised learning of temporal patterns. In a recurrent network, synapses that consistently participate in causal chains are reinforced, while decorrelated connections are pruned. This produces receptive fields, temporal sequences, and coincidence detection — hallmarks of cortical computation. Remarkably, STDP combined with lateral inhibition can perform competitive learning, clustering, and even independent component analysis without backpropagation.
Hardware Implementation
Neuromorphic engineers exploit STDP for on-chip learning. In Intel's Loihi, programmable spike traces at each synapse track recent activity and compute weight updates locally. Memristive devices offer an even more elegant solution: the conductance of a memristor naturally changes based on the timing of voltage pulses applied to its terminals, physically mimicking the STDP window. This makes memristor crossbar arrays promising substrates for large-scale, energy-efficient learning systems.
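The local, trace-based scheme used on neuromorphic chips can be sketched as follows: each synapse keeps an exponentially decaying trace of recent pre- and postsynaptic spikes, and every spike event updates the weight using only those two local variables. This is a simplified single-exponential sketch; the class name, parameters, and update form are assumptions for illustration, not Loihi's actual programming interface.

```python
import math

class TraceSynapse:
    """Event-driven STDP via decaying spike traces. Each update uses
    only state stored at the synapse, which is what makes the rule
    local and hardware-friendly. Parameter values are illustrative."""

    def __init__(self, w=0.5, a_plus=0.01, a_minus=0.012, tau=20.0):
        self.w, self.a_plus, self.a_minus, self.tau = w, a_plus, a_minus, tau
        self.x_pre = 0.0   # trace of recent presynaptic spikes
        self.x_post = 0.0  # trace of recent postsynaptic spikes
        self.t_last = 0.0  # time of the last processed event (ms)

    def _decay(self, t):
        # decay both traces from the last event to time t
        d = math.exp(-(t - self.t_last) / self.tau)
        self.x_pre *= d
        self.x_post *= d
        self.t_last = t

    def on_pre(self, t):
        # presynaptic spike: depress in proportion to the post trace
        self._decay(t)
        self.w -= self.a_minus * self.x_post
        self.x_pre += 1.0

    def on_post(self, t):
        # postsynaptic spike: potentiate in proportion to the pre trace
        self._decay(t)
        self.w += self.a_plus * self.x_pre
        self.x_post += 1.0

syn = TraceSynapse()
syn.on_pre(0.0)
syn.on_post(10.0)  # causal pair (dt = +10 ms) -> weight increases
print(syn.w)
```

For isolated spike pairs this trace scheme reproduces the exponential window exactly: a causal pair with gap Δt yields ΔW = A+ × exp(−Δt/τ), matching the curve in the learning-window section. A memristor realizes the same bookkeeping physically, with overlapping voltage pulses playing the role of the traces.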