Selecting What Matters
Every second, your brain is bombarded by millions of sensory signals — yet you experience a coherent, focused stream of consciousness. The attention network accomplishes this remarkable feat by selectively amplifying task-relevant information while suppressing irrelevant distractors. This simulator models the neural competition that underlies attentional selection, letting you explore how signal strength, distractor load, and top-down expectations shape perception.
The Biased Competition Framework
Desimone and Duncan's biased competition model proposes that multiple stimuli compete for limited neural representation. When two objects fall within the same receptive field, their neural responses are mutually suppressive. Attention resolves this competition by injecting a bias signal — stronger signals or top-down expectations tip the competition in favor of the target, producing winner-take-all selection.
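The mutual-suppression dynamic described above can be sketched numerically. The following is a minimal toy model, not the simulator's actual implementation: two rate units each receive an input drive, inhibit one another, and the target unit additionally receives a hypothetical top-down `bias` term. With equal inputs and no bias the competition settles into a mutually suppressed stalemate; a modest bias drives the distractor's activity to zero, i.e. winner-take-all.

```python
import numpy as np

def biased_competition(target_input, distractor_input, bias=0.0,
                       inhibition=0.8, dt=0.01, steps=2000):
    """Two mutually suppressive rate units; `bias` is a hypothetical
    top-down signal added to the target unit's drive (unit 0)."""
    r = np.zeros(2)  # firing rates: [target, distractor]
    drive = np.array([target_input + bias, distractor_input])
    for _ in range(steps):
        # each unit is excited by its own drive and suppressed by the other,
        # with rates rectified at zero
        suppression = inhibition * r[::-1]
        r += dt * (-r + np.maximum(drive - suppression, 0.0))
    return r

# equal inputs, no bias: both units survive at a suppressed level
print(biased_competition(1.0, 1.0, bias=0.0))
# a top-down bias tips the competition: the distractor is driven to zero
print(biased_competition(1.0, 1.0, bias=0.5))
```

The specific parameter values (inhibition strength 0.8, bias 0.5) are illustrative; what matters is that mutual inhibition plus a small asymmetry is enough to produce categorical selection.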
Top-Down vs Bottom-Up
Bottom-up attention is driven by stimulus salience — a bright flash or sudden motion captures attention automatically. Top-down attention is volitional, guided by goals and expectations stored in prefrontal cortex. The bias parameter β models this top-down influence: higher values simulate strong attentional templates that pre-activate target features, enabling efficient search even in cluttered visual fields.
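One way to see what the β parameter buys you is a toy visual-search model (a sketch under assumed parameterization, not the simulator's code): every item in the display gets Gaussian noise on a common baseline, the target alone is pre-activated by β, and the most active item is the one selected. Accuracy at high distractor loads then depends sharply on β.

```python
import numpy as np

def search_accuracy(beta, n_distractors, sigma=0.5, trials=10000, seed=0):
    """Hypothetical search model: each item's activation is baseline
    noise; the target (column 0) additionally receives the top-down
    template bias beta. The most active item wins selection."""
    rng = np.random.default_rng(seed)
    acts = rng.normal(0.0, sigma, size=(trials, n_distractors + 1))
    acts[:, 0] += beta  # pre-activation of the target's features
    return float(np.mean(acts.argmax(axis=1) == 0))

# weak vs strong attentional template across increasing clutter
for load in (1, 7, 31):
    print(load,
          search_accuracy(beta=0.2, n_distractors=load),
          search_accuracy(beta=1.5, n_distractors=load))
```

With a weak template (β = 0.2) accuracy collapses toward chance as clutter grows, while a strong template (β = 1.5) keeps search efficient, mirroring the claim about cluttered visual fields above.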
Neural Noise and Errors
Neural processing is inherently noisy — stochastic firing rates mean that distractor activity sometimes exceeds target activity by chance. The noise parameter σ controls this variability. When signal-to-noise ratio drops below a critical threshold, selection errors become frequent, modeling phenomena like inattentional blindness and change blindness that reveal the limits of human attention.
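The effect of σ on selection errors can be made concrete with a two-item race (again a sketch with assumed parameters): the target sits a fixed signal gap above the distractor, both are perturbed by Gaussian noise of standard deviation σ, and an error occurs whenever the noisy distractor beats the noisy target. For this simple case the error rate also has a closed form, P(error) = Φ(-gap / (σ√2)), which the simulation can be checked against.

```python
import numpy as np
from math import erf, sqrt

def error_rate(signal_gap, sigma, trials=200_000, seed=1):
    """Monte Carlo estimate: target is signal_gap above the distractor,
    both perturbed by Gaussian noise of sd sigma; an error is any trial
    where the noisy distractor exceeds the noisy target."""
    rng = np.random.default_rng(seed)
    target = signal_gap + sigma * rng.standard_normal(trials)
    distractor = sigma * rng.standard_normal(trials)
    return float(np.mean(distractor > target))

def error_rate_exact(signal_gap, sigma):
    """Closed form: the activation difference is N(gap, 2*sigma^2),
    so P(error) = Phi(-gap / (sigma * sqrt(2)))."""
    z = -signal_gap / (sigma * sqrt(2))
    return 0.5 * (1 + erf(z / sqrt(2)))

# error rate climbs as noise swamps the fixed signal gap
for sigma in (0.25, 0.5, 1.0, 2.0):
    print(sigma, round(error_rate(1.0, sigma), 3),
          round(error_rate_exact(1.0, sigma), 3))
```

Errors are vanishingly rare at high signal-to-noise ratio but approach chance as σ grows, which is the regime where lapses like inattentional blindness become likely in this kind of model.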