Memristor Crossbar Simulator: Analog Matrix Computing

16 MACs/cycle — 4×4 analog compute in one step

A 4×4 memristor crossbar performs a complete 16-element matrix-vector multiply in a single voltage application, consuming microwatts — orders of magnitude more efficient than digital multipliers.

Formula

I_j = Σ_i V_i × G_ij (Kirchhoff's current law at bit line j)
DR = G_max / G_min (dynamic range, i.e. the on/off ratio)
P_array = Σ_i Σ_j V_i² × G_ij (total read power, with bit lines held at virtual ground)
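
The three formulas can be checked numerically. This is a minimal NumPy sketch; the conductance matrix and read voltages are illustrative values in a typical RRAM range, not measured device data:

```python
import numpy as np

# Hypothetical 4x4 conductance matrix (microsiemens range, typical for RRAM).
# G[i, j] is the device at word line i, bit line j.
G = np.array([[50, 10, 80, 20],
              [30, 60, 15, 90],
              [70, 25, 40, 55],
              [12, 85, 33, 47]]) * 1e-6

V = np.array([0.2, 0.1, 0.15, 0.05])  # read voltages on the word lines (volts)

I = V @ G                        # I_j = sum_i V_i * G_ij (Kirchhoff summation)
DR = G.max() / G.min()           # dynamic range: on/off ratio across the array
P = float((V**2) @ G.sum(axis=1))  # P = sum_i sum_j V_i^2 * G_ij

print(I, DR, P)  # bit-line currents in amps; total read power ~13 microwatts
```

With these numbers the whole array dissipates about 13 µW during a read, consistent with the microwatt figure quoted above.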

Computing at the Crossroads

The von Neumann bottleneck — the energy cost of shuttling data between processor and memory — dominates modern computing. Memristor crossbar arrays dissolve this boundary by storing synaptic weights as device conductances and computing directly through Ohm's law and Kirchhoff's current summation. A single voltage application across the array performs an entire matrix-vector multiplication in constant time, regardless of matrix size.

Ohm's Law as Computation

Each crossbar junction obeys I = V × G. When input voltages V₁...Vₙ are applied to word lines simultaneously, the current at each bit line j sums contributions from all rows: I_j = Σ V_i × G_ij. This is exactly a dot product — the core operation of neural network inference. The computation happens at the speed of electron transport, consuming only the ohmic power dissipated in the resistive devices.
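
The equivalence is easy to demonstrate: one analog "voltage application" yields the same result as explicitly looping over all 16 multiply-accumulates in a digital pipeline. A sketch with randomly chosen (illustrative) conductances and voltages:

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 100e-6, size=(4, 4))  # programmed conductances (S)
V = rng.uniform(0.0, 0.2, size=4)           # word-line read voltages (V)

# Analog step: all 16 MACs happen concurrently via Kirchhoff summation.
I_analog = V @ G

# Digital reference: the same 16 MACs executed sequentially.
I_digital = np.zeros(4)
for j in range(4):            # each bit line j
    for i in range(4):        # sums contributions from every word line i
        I_digital[j] += V[i] * G[i, j]
```

Both paths compute the same dot products; the crossbar simply does them all in one physical step.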

Programming the Array

Writing weights into memristors requires voltage pulses exceeding the device's set/reset thresholds. Iterative write-verify schemes apply a pulse, read the conductance, and repeat until the target value is reached within tolerance. Multi-level conductance states (typically 4–8 bits) are achievable in HfO₂, TaOₓ, and phase-change devices, though variability increases with the number of levels.
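
The write-verify loop can be sketched against a toy device model. Here each SET pulse moves the conductance a noisy fraction of the remaining gap; the pulse-response constant and variability range are invented stand-ins for real hardware, not a calibrated device model:

```python
import numpy as np

def write_verify(g_target, g_init=1e-6, tol=0.02, max_pulses=100, seed=0):
    """Iteratively pulse a modeled memristor toward a target conductance.

    Toy model: each pulse closes ~30% of the gap to the target, with
    +/-30% cycle-to-cycle variability (hypothetical figures).
    """
    rng = np.random.default_rng(seed)
    g = g_init
    for pulse in range(1, max_pulses + 1):
        # Verify step: read the conductance, stop if within tolerance.
        if abs(g - g_target) / g_target <= tol:
            return g, pulse - 1
        step = 0.3 * (g_target - g)          # nominal pulse response
        g += step * rng.uniform(0.7, 1.3)    # cycle-to-cycle variability
    return g, max_pulses

g_final, n_pulses = write_verify(g_target=50e-6)
```

Because each pulse shrinks the gap multiplicatively, the loop converges in a handful of pulses even with variability, which is why write-verify is standard practice despite its write-time overhead.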

Scaling and Integration

Analog in-memory compute chips from industry (e.g., Mythic's flash-based matrix processors) and memristor research prototypes have reached 256×256 crossbar dimensions with CMOS-integrated peripherals. Array tiling — connecting multiple smaller arrays through digital routers — scales the architecture to million-synapse networks. Combined with STDP-based on-chip learning, memristor crossbars promise a complete neuromorphic computing platform that learns and infers at the edge within milliwatt power budgets.
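
Tiling can be sketched as partitioning a large weight matrix into physical-array-sized blocks, with each block performing one analog step and a digital adder accumulating the partial currents. A minimal sketch, assuming a 256×256 physical tile size:

```python
import numpy as np

TILE = 256  # hypothetical physical crossbar dimension

def tiled_mvm(G, V, tile=TILE):
    """Matrix-vector multiply split across tile x tile crossbar sub-arrays.

    Each sub-array computes a partial current vector in one analog step;
    a digital router/adder accumulates the partials per bit line.
    """
    n_rows, n_cols = G.shape
    I = np.zeros(n_cols)
    for r0 in range(0, n_rows, tile):
        for c0 in range(0, n_cols, tile):
            g_tile = G[r0:r0 + tile, c0:c0 + tile]
            v_tile = V[r0:r0 + tile]
            I[c0:c0 + tile] += v_tile @ g_tile  # one analog step per tile
    return I

rng = np.random.default_rng(1)
G = rng.uniform(1e-6, 100e-6, size=(512, 512))  # spans 4 tiles of 256x256
V = rng.uniform(0.0, 0.2, size=512)
```

The tiled result matches a monolithic multiply exactly; the cost of tiling is the digital accumulation and routing between sub-arrays.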

FAQ

What is a memristor crossbar array?

A crossbar array arranges memristive devices at the intersections of horizontal word lines and vertical bit lines. Each memristor stores a conductance value representing a synaptic weight. By applying input voltages to word lines and reading currents on bit lines, the array performs analog matrix-vector multiplication in O(1) time — the fundamental operation for neural network inference.

How does a memristor store data?

A memristor changes its electrical resistance based on the history of applied voltages. In metal-oxide types (e.g., HfO₂), voltage pulses create or dissolve conductive filaments of oxygen vacancies, switching between high and low resistance states. Intermediate states enable multi-bit storage for analog computing.

What is the advantage over digital computing?

Digital MACs require fetching weights from memory (the von Neumann bottleneck). Memristor crossbars perform computation where data is stored, eliminating data movement. A single crossbar array can achieve 10–100 TOPS/W, compared to 1–10 TOPS/W for digital accelerators.

What are the main challenges?

Device variability (cycle-to-cycle and device-to-device), limited endurance (~10⁶–10⁹ write cycles), sneak path currents in passive arrays, and IR drop in large arrays are the primary challenges. Active research addresses these through selector devices, error-correcting codes, and array tiling architectures.
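
The impact of device-to-device variability can be estimated with a quick Monte Carlo sketch. The 5% lognormal conductance spread below is a hypothetical figure for illustration, not a measured device parameter:

```python
import numpy as np

rng = np.random.default_rng(2)
G_ideal = rng.uniform(10e-6, 90e-6, size=(64, 64))  # target conductances
V = rng.uniform(0.0, 0.2, size=64)
I_ideal = V @ G_ideal

# Device-to-device variability modeled as a lognormal conductance spread
# (sigma = 5% is an assumed, illustrative value).
sigma = 0.05
errors = []
for _ in range(100):
    G_real = G_ideal * rng.lognormal(mean=0.0, sigma=sigma,
                                     size=G_ideal.shape)
    I_real = V @ G_real
    errors.append(np.abs(I_real - I_ideal) / np.abs(I_ideal))

mean_rel_error = float(np.mean(errors))
```

Because each bit-line current averages over 64 devices, independent per-device errors partially cancel, and the column-level error lands well below the per-device spread — one reason crossbar inference tolerates imperfect devices better than naive per-device accuracy figures suggest.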
