Particle Swarm Optimization Simulator: Swarm-Based Function Optimization

A swarm of 30 particles with w = 0.7 and c₁ = c₂ = 1.5 converges to within 0.0023 of the Rastrigin function's global minimum in about 40 iterations, navigating past multiple local optima along the way.

Formula

v_i ← w×v_i + c₁×r₁×(p_best_i − x_i) + c₂×r₂×(g_best − x_i) (velocity update)
x_i ← x_i + v_i (position update)
w(t) = w_max − (w_max − w_min) × t / t_max (linear inertia decay)

Here r₁ and r₂ are independent uniform random numbers in [0, 1], redrawn at every update; they keep the search stochastic.
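
A minimal sketch of these update rules in Python with NumPy, assuming positions and velocities are stored as 1-D arrays; update_particle and its defaults are illustrative names and values, not taken from the simulator's source.

import numpy as np

def update_particle(x, v, p_best, g_best, w=0.7, c1=1.5, c2=1.5, rng=np.random):
    """One velocity/position update for a single particle (minimization).

    x, v    : current position and velocity (1-D NumPy arrays)
    p_best  : best position this particle has visited so far
    g_best  : best position found by the whole swarm so far
    """
    r1 = rng.uniform(size=x.shape)  # fresh random weights, one per dimension
    r2 = rng.uniform(size=x.shape)
    v_new = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    x_new = x + v_new
    return x_new, v_new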

A Flock Searching for Food

Imagine a flock of birds searching for the richest feeding ground in a vast landscape. Each bird remembers the best spot it has personally found and can observe where the most successful flock-mate is feeding. By blending personal experience with social information, the flock collectively discovers optimal regions far faster than any individual could alone. PSO translates this intuition into mathematics, with particles replacing birds and fitness values replacing food quality.

The Velocity Update Equation

The core of PSO is a single elegant equation: each particle's velocity is updated as a weighted sum of three terms — inertia (continue current direction), cognitive pull (return to personal best), and social pull (move toward global best). Random coefficients r₁, r₂ add stochasticity that prevents premature convergence. The simulation visualizes velocity vectors as trails, showing how particles spiral around attractors and gradually tighten their orbits.
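
To see how the three pulls play out over a run, here is a rough sketch of the same update vectorized across a whole swarm, together with the personal-best and global-best bookkeeping; pso, sphere, and the default values are assumptions for illustration, not the simulator's actual code (the Rastrigin objective appears in a later section).

import numpy as np

def sphere(x):
    # Simple convex placeholder objective; swap in any function of a 1-D array.
    return float(np.sum(x ** 2))

def pso(objective, dim=2, n_particles=30, n_iter=100,
        bounds=(-5.12, 5.12), w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))       # initial positions
    v = rng.uniform(-(hi - lo), hi - lo, size=x.shape)     # initial velocities
    p_best = x.copy()
    p_best_val = np.array([objective(xi) for xi in x])
    g_best = p_best[np.argmin(p_best_val)].copy()
    g_best_val = float(p_best_val.min())

    for _ in range(n_iter):
        r1 = rng.uniform(size=x.shape)
        r2 = rng.uniform(size=x.shape)
        # inertia + cognitive pull + social pull
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
        x = x + v
        vals = np.array([objective(xi) for xi in x])
        improved = vals < p_best_val                        # update personal bests
        p_best[improved] = x[improved]
        p_best_val[improved] = vals[improved]
        if p_best_val.min() < g_best_val:                   # update the global best
            g_best_val = float(p_best_val.min())
            g_best = p_best[np.argmin(p_best_val)].copy()
    return g_best, g_best_val

best_x, best_f = pso(sphere)
print(best_x, best_f)   # best position found and its objective value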

Exploration Versus Exploitation

The inertia weight w is the master dial controlling exploration vs. exploitation. High inertia (w > 0.9) keeps particles moving fast, exploring broadly. Low inertia (w < 0.4) makes particles decelerate and cluster around known good solutions. The linearly decreasing inertia strategy — starting high and reducing over iterations — provides broad exploration early and precise exploitation late. The simulator lets you see this transition as the swarm transforms from a scattered cloud to a tight cluster.
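
A small sketch of that schedule follows (the same linear decay listed in the Formula section); w_max = 0.9 and w_min = 0.4 are common choices in the literature, not values read from this simulator.

def inertia(t, t_max, w_max=0.9, w_min=0.4):
    # Linearly decay the inertia weight from w_max down to w_min over t_max iterations.
    return w_max - (w_max - w_min) * t / t_max

# Early iterations keep w high (broad exploration); late iterations shrink it (exploitation).
for t in (0, 25, 50, 75, 100):
    print(t, inertia(t, t_max=100))   # 0.9, 0.775, 0.65, 0.525, 0.4

The pso sketch above uses a fixed w; computing w = inertia(t, n_iter) inside its loop instead gives the decaying behavior described here.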

Multimodal Landscapes

PSO's real power emerges on multimodal landscapes with many local optima, like the Rastrigin or Ackley functions. The social component can pull particles out of shallow local minima toward deeper global ones. However, premature convergence remains a risk: if the swarm collapses too early onto a local optimum, it may never escape. Strategies like multi-swarm PSO, constriction coefficients, and topology changes address this. This simulation uses the Rastrigin function to showcase both the algorithm's strengths and its failure modes.
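
The Rastrigin function itself is short to write down; a sketch is below (the 10·n offset plus the cosine ripples produce its grid of local minima around the global minimum f(0) = 0). The rastrigin name and the usage line are illustrative, and exact results will vary with the random seed.

import numpy as np

def rastrigin(x):
    # Global minimum f(0) = 0; the cosine term creates many regularly spaced local minima.
    x = np.asarray(x, dtype=float)
    return float(10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))

# Hypothetical usage with the pso() sketch from the velocity-update section:
# best_x, best_f = pso(rastrigin, dim=2, n_particles=30, n_iter=100)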

FAQ

What is particle swarm optimization?

Particle Swarm Optimization (PSO), proposed by Kennedy and Eberhart in 1995, simulates a flock of birds searching for food. Each particle has a position and velocity in the search space and adjusts its velocity based on its own best-known position (the cognitive component) and the swarm's best-known position (the social component). Over iterations the swarm collectively converges on optimal regions.

What do the PSO parameters mean?

Inertia weight w controls momentum: high w keeps particles moving in their current direction (exploration), while low w makes them more responsive to the attractors (exploitation). Cognitive coefficient c₁ pulls particles toward their personal best; social coefficient c₂ pulls them toward the global best. The balance c₁ ≈ c₂ ≈ 1.5 with w ≈ 0.7 is a robust default.

How does PSO compare to genetic algorithms?

PSO is simpler to implement (no crossover or mutation operators), has fewer parameters, and often converges faster on continuous optimization problems. Genetic algorithms are better for discrete/combinatorial problems. PSO particles share information directly through the global best, while GA individuals share information through selection and recombination.

What problems is PSO used for?

PSO is widely used for neural network training, antenna design, power system optimization, robotics path planning, and financial portfolio optimization. It excels on continuous multimodal functions where gradient methods fail. Variants like multi-objective PSO handle Pareto-optimal solutions.

Embed

<iframe src="https://homo-deus.com/lab/swarm-intelligence/particle-swarm/embed" width="100%" height="400" frameborder="0"></iframe>