Computing at the Crossroads
The von Neumann bottleneck, the time and energy cost of shuttling data between processor and memory, dominates modern computing. Memristor crossbar arrays dissolve this boundary by storing synaptic weights as device conductances and computing directly through Ohm's law and Kirchhoff's current summation. Applying a vector of voltages across the array performs an entire matrix-vector multiplication in a single step, with latency essentially independent of matrix size (in practice bounded by RC settling time and parasitic wire resistance).
Ohm's Law as Computation
Each crossbar junction obeys I = V × G. When input voltages V₁...Vₙ are applied to word lines simultaneously, the current at each bit line j sums contributions from all rows: I_j = Σ V_i × G_ij. This is exactly a dot product — the core operation of neural network inference. The computation happens at the speed of electron transport, consuming only the ohmic power dissipated in the resistive devices.
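The physics can be mirrored in a few lines of NumPy. This is a toy sketch, not a device model: the conductance range (1 µS to 100 µS) and read voltages are illustrative assumptions, and the array dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4x3 crossbar: G[i, j] is the conductance (siemens) of the
# memristor at the junction of word line i and bit line j.
G = rng.uniform(1e-6, 1e-4, size=(4, 3))

# Read voltages applied simultaneously to the four word lines.
V = rng.uniform(0.0, 0.2, size=4)

# Kirchhoff summation: each bit-line current I_j = sum_i V_i * G_ij,
# i.e. the whole matrix-vector product in one "voltage application".
I = V @ G

# Same result computed junction by junction, as the hardware would.
I_explicit = np.array(
    [sum(V[i] * G[i, j] for i in range(4)) for j in range(3)]
)
assert np.allclose(I, I_explicit)
```

The point of the comparison is that the analog array evaluates all the per-junction products and the column sums concurrently; the explicit loop only exists here to show what the physics computes for free.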
Programming the Array
Writing weights into memristors requires voltage pulses exceeding the device's set/reset thresholds. Iterative write-verify schemes apply a pulse, read the conductance, and repeat until the target value is reached within tolerance. Multi-level conductance states (typically 4–8 bits) are achievable in HfO₂, TaOₓ, and phase-change devices, though variability increases with the number of levels.
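A write-verify loop can be sketched as follows. The device model is a deliberate simplification (an assumption, not physics): each SET pulse adds a noisy conductance increment, each RESET pulse subtracts one, and the step sizes, tolerance, and pulse limit are all illustrative.

```python
import random

random.seed(42)

class ToyMemristor:
    """Toy device: pulses nudge conductance with cycle-to-cycle noise."""

    def __init__(self, g=10e-6):
        self.g = g  # conductance in siemens

    def pulse(self, direction):
        # direction=+1 models a SET pulse, -1 a RESET pulse; the random
        # step size stands in for cycle-to-cycle variability.
        step = random.uniform(0.2e-6, 1.0e-6)
        self.g = max(1e-6, self.g + direction * step)

    def read(self):
        return self.g

def write_verify(dev, target, tol=1.0e-6, max_pulses=200):
    """Pulse, read, repeat until conductance is within tolerance."""
    for n in range(max_pulses):
        err = target - dev.read()
        if abs(err) <= tol:
            return n  # number of pulses needed to converge
        dev.pulse(+1 if err > 0 else -1)
    return max_pulses

dev = ToyMemristor()
pulses = write_verify(dev, target=25e-6)
assert abs(dev.read() - 25e-6) <= 1.0e-6
```

Keeping the maximum pulse-induced step no larger than the tolerance guarantees the loop cannot overshoot past the target band, which is one reason real write-verify schemes shrink pulse amplitudes as they approach the target.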
Scaling and Integration
Research prototypes have reached 256×256 crossbar dimensions with CMOS-integrated peripherals, and commercial analog in-memory chips such as Mythic's pursue the same dataflow using flash cells rather than memristors. Array tiling, connecting multiple smaller arrays through digital routers, scales the architecture to million-synapse networks. Combined with STDP-based on-chip learning, memristor crossbars promise a complete neuromorphic computing platform that learns and infers at the edge with milliwatt power budgets.
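The tiling idea reduces to block matrix multiplication: each physical tile computes a partial product, and digital logic accumulates the partials across tiles. A minimal sketch, where the 4×4 tile size and 8×8 weight matrix are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def tiled_mvm(G, V, tile=4):
    """Matrix-vector product as a sum of per-tile partial products.

    Each (r, c) block of G plays the role of one physical crossbar
    tile; the digital router accumulates its bit-line currents into
    the running column totals.
    """
    n_rows, n_cols = G.shape
    I = np.zeros(n_cols)
    for r in range(0, n_rows, tile):
        for c in range(0, n_cols, tile):
            I[c:c + tile] += V[r:r + tile] @ G[r:r + tile, c:c + tile]
    return I

# An 8x8 weight matrix split across four 4x4 tiles gives the same
# result as one monolithic array.
G = rng.uniform(1e-6, 1e-4, size=(8, 8))
V = rng.uniform(0.0, 0.2, size=8)
assert np.allclose(tiled_mvm(G, V), V @ G)
```

Because the per-tile accumulation is exact, tiling trades a single large analog operation for several smaller ones plus digital addition, which is what keeps IR drop and peripheral overhead manageable as networks grow.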