The Most Important Operation in Signal Processing
Convolution is everywhere. Every time you blur a photo, apply an audio equalizer, detect edges in an image, or run a convolutional layer in a neural network, you are performing convolution. It is the mathematical operation that describes how any linear time-invariant (LTI) system responds to any input, making it the universal language of signal processing.
Slide, Multiply, Sum
The intuition behind convolution is simple: flip the kernel, slide it across the input signal, and at each position multiply the overlapping values and add them up. Formally, (x ∗ h)[n] = Σₖ x[k] · h[n − k]. The result is a new signal in which each output point is a weighted sum of the input values around it, with the weights supplied by the kernel. This simulator animates the entire process so you can watch the output being constructed point by point.
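To make the slide-multiply-sum loop concrete, here is a minimal pure-Python sketch of direct convolution (not the simulator's own code); the function name and the choice of the "full" output length n + k − 1 are illustrative assumptions.

```python
def convolve(signal, kernel):
    """Direct 'full' convolution: output length is len(signal) + len(kernel) - 1."""
    n, k = len(signal), len(kernel)
    out = [0.0] * (n + k - 1)
    for i in range(len(out)):      # each output position
        for j in range(k):         # each kernel tap
            s = i - j              # the kernel flip is hidden in this index
            if 0 <= s < n:         # only where kernel and signal overlap
                out[i] += signal[s] * kernel[j]
    return out

print(convolve([1, 2, 3], [1, 0, -1]))  # [1.0, 2.0, 2.0, -2.0, -3.0]
```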
Kernels Shape the Output
Different kernels produce dramatically different effects. A box kernel (rectangular window) averages neighboring values equally, producing a moving average. A Gaussian kernel weights nearby values more heavily, creating smoother results without ringing. A derivative kernel (like [-1, 0, 1]) detects changes, turning flat regions into silence and edges into peaks.
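To see these differences side by side, here is a small numpy sketch; the step signal and the width-7, sigma = 1 Gaussian are arbitrary choices for illustration:

```python
import numpy as np

x = np.concatenate([np.zeros(8), np.ones(8), np.zeros(8)])  # a step up, then down

box = np.ones(5) / 5                  # equal weights: moving average
t = np.arange(-3, 4)
gauss = np.exp(-t**2 / 2)
gauss /= gauss.sum()                  # normalized Gaussian, sigma = 1
deriv = np.array([-1.0, 0.0, 1.0])    # responds only to change

for name, k in [("box", box), ("gauss", gauss), ("deriv", deriv)]:
    y = np.convolve(x, k, mode="same")
    print(f"{name:5s}", np.round(y, 2))
```

The box and Gaussian outputs smear the step's edges into ramps, while the derivative output is zero everywhere except at the two transitions, where it spikes.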
The Convolution Theorem: Speed Through Frequency
Direct convolution of two length-N signals requires O(N²) multiplications, which becomes impractical for large signals. The convolution theorem provides an elegant shortcut: transform both signals to the frequency domain with the FFT, multiply pointwise, and transform back, reducing the cost to O(N log N). One caveat: the FFT computes circular convolution, so both signals must first be zero-padded to at least the full output length N + K − 1. This is how practical convolution engines handle large kernels.
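A minimal numpy sketch of the theorem in action; fft_convolve is a hypothetical name for illustration, though real libraries expose the same idea (for example, scipy.signal.fftconvolve).

```python
import numpy as np

def fft_convolve(x, h):
    # Pad to the full linear-convolution length: without this padding,
    # pointwise multiplication of FFTs yields *circular* convolution.
    n = len(x) + len(h) - 1
    return np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(h, n), n)

rng = np.random.default_rng(0)
x = rng.standard_normal(4096)
h = np.ones(128) / 128
assert np.allclose(fft_convolve(x, h), np.convolve(x, h))  # matches direct result
```

For this length-4096 signal and 128-tap kernel, the frequency-domain route replaces roughly half a million multiply-adds with three FFTs and one pointwise product.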