Computation Through Dynamics
Traditional recurrent neural networks are notoriously difficult to train: gradients vanish or explode as they propagate through time. Reservoir computing sidesteps this entirely by fixing the recurrent connections and training only a simple linear readout. The reservoir, a random recurrent network of nonlinear nodes, transforms an input time series into a high-dimensional trajectory where complex temporal features become linearly separable. It is computation through dynamics rather than through learned recurrent weights.
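To make the division of labor concrete, here is a minimal sketch in Python with NumPy of the only training step reservoir computing requires: a ridge-regression fit of the linear readout, treating the reservoir states as a given feature matrix. The sizes, placeholder data, and regularization strength are illustrative assumptions, not a prescribed recipe.

    import numpy as np

    rng = np.random.default_rng(0)
    T, N = 1000, 100                   # timesteps, reservoir size (illustrative)
    X = rng.standard_normal((T, N))    # stand-in for collected reservoir states
    y = rng.standard_normal(T)         # stand-in for training targets

    beta = 1e-6                        # ridge regularization strength (assumed)
    # Closed-form ridge regression: W_out = (X^T X + beta I)^(-1) X^T y
    W_out = np.linalg.solve(X.T @ X + beta * np.eye(N), X.T @ y)

    y_pred = X @ W_out                 # the trained readout is a single linear map

No gradient ever flows through the recurrent weights; the fit is a single linear solve, which is what makes training fast and stable.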
The Echo State Network
Herbert Jaeger's echo state network (ESN) generates reservoir states via the leaky update x(t+1) = (1 − α)·x(t) + α·tanh(W_in·u(t) + W·x(t)), where α is the leak rate, W_in scales the input, and W is the fixed random recurrent matrix. The spectral radius ρ of W, the magnitude of its largest eigenvalue, sets how quickly past inputs fade from the state: as a rule of thumb the echo state property holds for ρ < 1, and memory lengthens as ρ approaches 1. Near the edge of chaos (ρ ≈ 1), the reservoir balances stability against sensitivity, and its information processing capacity is empirically maximized.
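A minimal sketch of that update loop, again assuming NumPy; the reservoir size, leak rate, spectral radius, and toy input are illustrative and untuned.

    import numpy as np

    rng = np.random.default_rng(42)
    N, alpha, rho = 200, 0.3, 0.95     # reservoir size, leak rate, spectral radius

    W_in = rng.uniform(-0.5, 0.5, size=N)            # fixed random input weights
    W = rng.standard_normal((N, N))
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))  # rescale W to spectral radius rho

    u = np.sin(0.1 * np.arange(1000))                # toy one-dimensional input
    x = np.zeros(N)
    states = np.empty((len(u), N))
    for t in range(len(u)):
        # x(t+1) = (1 - alpha) x(t) + alpha tanh(W_in u(t) + W x(t))
        x = (1 - alpha) * x + alpha * np.tanh(W_in * u[t] + W @ x)
        states[t] = x

Rescaling a random matrix by its largest eigenvalue magnitude is the standard way to hit a target spectral radius; nothing about W is trained afterward.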
Memory and Nonlinearity
A reservoir provides two complementary resources: fading memory (retaining recent inputs) and nonlinear mixing (creating complex feature combinations). Memory capacity, measured as the sum over delays of the squared correlation between a trained readout and the correspondingly delayed input, is bounded above by the number of nodes N. The two resources trade off against each other: a gently driven reservoir stays near the linear regime of tanh and preserves memory depth, while stronger driving (larger input scaling or spectral radius) pushes nodes into saturation, buying nonlinear processing power at the cost of memory.
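The following sketch measures memory capacity the standard way: drive a small reservoir with i.i.d. input, fit a separate linear readout for each delay k to reconstruct u(t − k), and sum the squared correlations. All sizes, the washout length, and the regularization strength are assumptions chosen for illustration.

    import numpy as np

    rng = np.random.default_rng(1)
    N, alpha, rho, T, washout = 100, 1.0, 0.9, 5000, 200

    W_in = rng.uniform(-0.1, 0.1, size=N)
    W = rng.standard_normal((N, N))
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))

    u = rng.uniform(-1, 1, size=T)      # i.i.d. input, as the MC definition assumes
    x = np.zeros(N)
    states = np.empty((T, N))
    for t in range(T):
        x = (1 - alpha) * x + alpha * np.tanh(W_in * u[t] + W @ x)
        states[t] = x

    X = states[washout:]                # discard initial transient
    mc = 0.0
    for k in range(1, 2 * N):           # delays well past the expected memory span
        target = u[washout - k : T - k]               # input delayed by k steps
        w = np.linalg.solve(X.T @ X + 1e-8 * np.eye(N), X.T @ target)
        r = np.corrcoef(X @ w, target)[0, 1]
        mc += r**2                                    # each delay contributes in [0, 1]

    print(f"memory capacity ~ {mc:.1f} (bounded above by N = {N})")

Each delay contributes at most 1 to the sum, so the total can never exceed N; in practice nonlinear reservoirs fall well short of the bound, which is the tradeoff described above.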
Physical Reservoirs
Any dynamical system with sufficient complexity can serve as a reservoir. Researchers have built reservoir computers from photonic ring cavities, a literal bucket of water, carbon nanotube networks, and spintronic oscillator arrays. Neuromorphic implementations using memristive devices are particularly promising: the inherent nonlinearity and memory of memristors supply reservoir dynamics natively, with no separate random weight matrix required. This points toward ultra-low-power edge devices that process sensor data in real time.