Monte Carlo Simulation: Estimate Pi with Random Sampling


After 10,000 random points, the hit ratio gives π̂ ≈ 3.14 ± 0.03, demonstrating how random sampling can estimate deterministic quantities.

Formula

π̂ = 4 × (number of points with x² + y² ≤ 1) / N, for points drawn uniformly from the square enclosing the unit circle
Standard error: SE = σ/√N, where σ = 4√(p(1−p)) and p = π/4 is the hit probability
95% CI: π̂ ± 1.96 · SE
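As a concrete sketch of the formula (assuming the unit circle inscribed in the square [−1, 1]², with a hypothetical fixed seed for reproducibility), the estimate and its 95% confidence interval can be computed as:

```python
import math
import random

def estimate_pi(n, seed=0):
    """Estimate pi and a 95% confidence interval from n uniform samples."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if rng.uniform(-1, 1) ** 2 + rng.uniform(-1, 1) ** 2 <= 1)
    p_hat = hits / n                                 # estimated hit probability, ~ pi/4
    pi_hat = 4 * p_hat
    se = 4 * math.sqrt(p_hat * (1 - p_hat) / n)      # SE via the Bernoulli variance
    return pi_hat, pi_hat - 1.96 * se, pi_hat + 1.96 * se

pi_hat, lo, hi = estimate_pi(100_000)
print(pi_hat, lo, hi)
```

At N = 10,000 the SE works out to about 0.016, which matches the ±0.03 (≈ 1.96 · SE) quoted above.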

Random Sampling as Computation

Monte Carlo methods turn randomness into a computational tool. The core idea is deceptively simple: if you can frame a quantity as an expected value, you can estimate it by averaging random samples. Stanislaw Ulam conceived this approach while playing solitaire during the Manhattan Project — he realized that random deal simulations could estimate probabilities faster than combinatorial analysis. Today Monte Carlo drives financial derivatives pricing, particle physics simulations, and Bayesian inference.

Estimating Pi Geometrically

The classic demonstration inscribes a unit circle in a square. Points thrown uniformly at random land inside the circle with probability equal to the area ratio: π/4. Count the hits, multiply by 4, and you have a π estimate. This simulation shows each point — red for misses, cyan for hits — and updates the running estimate in real time. Watch the scatter plot fill in and the estimate converge toward 3.14159...
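A minimal sketch of the loop the simulation runs (hit/miss classification with a running estimate; the seed and sample count here are arbitrary, not the simulation's own):

```python
import random

def running_pi(n, seed=1):
    """Classify each point as a hit (inside circle) or miss; yield running estimates."""
    rng = random.Random(seed)
    hits = 0
    for i in range(1, n + 1):
        x, y = rng.uniform(-1, 1), rng.uniform(-1, 1)
        if x * x + y * y <= 1:      # hit: inside the inscribed unit circle
            hits += 1
        yield 4 * hits / i          # running estimate of pi after i points

estimates = list(running_pi(10_000))
print(estimates[-1])
```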

The 1/√N Convergence

Monte Carlo's convergence rate of 1/√N seems slow — 10,000 samples give only two decimal places of π. But this rate is dimension-independent. For a 100-dimensional integral, a grid with 10 points per dimension needs 10^100 evaluations; Monte Carlo needs on the order of 1/ε² samples for error ε, regardless of dimension. This immunity to the 'curse of dimensionality' makes Monte Carlo indispensable for high-dimensional problems in physics, finance, and machine learning.
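To make the dimension-independence concrete, here is a sketch (not part of the simulation) that applies the same hit-counting trick to the volume of the 10-dimensional unit ball, whose exact value is π⁵/120 ≈ 2.55. A grid with 10 points per axis would already need 10¹⁰ evaluations; the Monte Carlo cost depends only on the sample count:

```python
import random

def ball_volume_mc(dim, n, seed=2):
    """Monte Carlo estimate of the unit-ball volume in `dim` dimensions.
    Draw points uniformly from the cube [-1, 1]^dim and scale the hit
    fraction by the cube's volume, 2**dim."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if sum(rng.uniform(-1, 1) ** 2 for _ in range(dim)) <= 1)
    return (hits / n) * 2 ** dim

vol = ball_volume_mc(10, 300_000)   # exact value: pi**5 / 120 ≈ 2.55
print(vol)
```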

Beyond Naive Sampling

Variance reduction techniques turbocharge basic Monte Carlo. Stratified sampling partitions the domain to ensure even coverage; importance sampling concentrates effort where the integrand is large; antithetic variates pair negatively correlated samples so their errors partially cancel. These methods — visible in the convergence plot as faster error decay — are the bridge between textbook Monte Carlo and production-grade simulation systems used in quantitative finance and particle transport.
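As one illustration (a sketch, not the simulation's own code), stratified sampling for the π estimate places one uniform point in each cell of a k×k grid over the square, instead of k² fully random points. Only cells crossed by the circle's boundary contribute any noise, so the error decays faster than 1/√N:

```python
import random

def stratified_pi(k, seed=3):
    """Stratified Monte Carlo: one uniform point per cell of a k-by-k grid
    over [-1, 1]^2. Interior and exterior cells contribute zero variance."""
    rng = random.Random(seed)
    cell = 2.0 / k
    hits = 0
    for i in range(k):
        for j in range(k):
            x = -1 + (i + rng.random()) * cell   # uniform within cell (i, j)
            y = -1 + (j + rng.random()) * cell
            if x * x + y * y <= 1:
                hits += 1
    return 4 * hits / (k * k)

print(stratified_pi(100))   # 10,000 samples, far tighter than plain sampling
```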

FAQ

What is Monte Carlo simulation?

Monte Carlo simulation uses repeated random sampling to estimate numerical results. Named after the Monte Carlo casino by Stanislaw Ulam and John von Neumann during the Manhattan Project, it is used when analytical solutions are intractable — from nuclear physics to financial option pricing.

How does Monte Carlo estimate pi?

Inscribe a circle of radius 1 in a 2×2 square. Random points in the square land inside the circle with probability π/4 (the area ratio). So π ≈ 4 × (points inside circle / total points). As samples increase, the law of large numbers guarantees convergence to the true value of π.

How fast does Monte Carlo converge?

Monte Carlo error decreases as 1/√N — halving the error requires quadrupling the samples. This is dimension-independent, which is Monte Carlo's key advantage: for high-dimensional integrals, it outperforms grid-based methods whose cost grows exponentially with dimension.
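The 1/√N law can be checked directly from the Bernoulli variance of a single hit/miss sample (with hit probability p = π/4, a one-sample estimate 4·𝟙{hit} has standard deviation 4√(p(1−p))):

```python
import math

def std_error(n, p=math.pi / 4):
    """Predicted standard error of the pi estimate after n samples."""
    return 4 * math.sqrt(p * (1 - p) / n)

# quadrupling the samples halves the predicted error
print(std_error(10_000), std_error(40_000))
```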

What are variance reduction techniques?

Techniques like stratified sampling, importance sampling, antithetic variates, and control variates can dramatically reduce Monte Carlo variance without increasing sample count. These methods exploit problem structure to make each sample more informative.
