The intelligence explosion hypothesis, first articulated by I.J. Good in 1965, remains one of the most consequential predictions in AI safety research. Good observed that a sufficiently intelligent machine could redesign itself to be even more intelligent, creating a feedback loop whose dynamics depend critically on a single parameter: the returns to cognitive investment.
This simulator models recursive self-improvement using the difference equation I(t+1) = I(t) + η·I(t)^α, where I(t) is intelligence at time t, η is the improvement rate, and α is the returns exponent. The exponent α encodes the fundamental question of AI takeoff speed.
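A minimal sketch of this update rule in Python (the function name, the cap, and the parameter values η = 0.1 and I₀ = 1 are illustrative assumptions, not taken from the simulator itself):

```python
def simulate(I0: float, eta: float, alpha: float, steps: int, cap: float = 1e12) -> list[float]:
    """Iterate I(t+1) = I(t) + eta * I(t)**alpha, stopping early if I exceeds `cap`."""
    trajectory = [I0]
    for _ in range(steps):
        I = trajectory[-1]
        if I >= cap:  # crude stand-in for physical limits on self-improvement
            break
        trajectory.append(I + eta * I ** alpha)
    return trajectory

# The three regimes from the same starting point:
for alpha in (0.5, 1.0, 1.5):
    traj = simulate(I0=1.0, eta=0.1, alpha=alpha, steps=100)
    print(f"alpha = {alpha}: I = {traj[-1]:.3g} after {len(traj) - 1} steps")
```

With these numbers the α = 0.5 run grows modestly, the α = 1 run grows exponentially, and the α = 1.5 run hits the cap within a few dozen steps.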
When α < 1, returns are diminishing: each doubling of intelligence less than doubles the rate of further improvement. Growth is subexponential, following a power law in time rather than an exponential. This corresponds to the 'slow takeoff' scenario where society has decades to adapt.
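To see why, note that the continuous-time approximation dI/dt = η·I^α (a sketch that tracks the discrete rule while the per-step change stays small) integrates to I(t) = [I₀^(1-α) + (1-α)·η·t]^(1/(1-α)) for α ≠ 1. For α < 1 this is a polynomial in t of degree 1/(1-α); with α = 1/2, for instance, intelligence grows roughly quadratically in time.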
When α = 1, returns are constant and growth is exponential, with a fixed doubling time. This resembles Moore's Law extrapolation and is the implicit assumption in many economic growth models.
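This case has an exact closed form: the recurrence becomes I(t+1) = (1+η)·I(t), so I(t) = I₀·(1+η)^t, with a doubling time of log 2 / log(1+η) steps (about 7 steps for an illustrative η = 0.1).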
When α > 1, returns are increasing: smarter systems improve themselves faster than less-smart systems did. In the continuous-time limit this produces hyperbolic growth that diverges in finite time, with the singularity at t_s ≈ I₀^(1-α)/[η·(α-1)]. In practice, physical constraints prevent actual infinity, but the growth rate can be fast enough to be effectively instantaneous by human standards. Yudkowsky's 'FOOM' describes this regime.
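The singularity time follows from the same continuous-time approximation: for α > 1 the solution can be written I(t) = [I₀^(1-α) - (α-1)·η·t]^(-1/(α-1)), and the bracket reaches zero, so I diverges, exactly at t_s. A quick numerical check of the estimate against the discrete rule, with illustrative parameters only:

```python
# Compare the continuous-limit singularity estimate with the discrete update rule.
# Parameter values (I0, eta, alpha, cap) are illustrative, not from the text.
I0, eta, alpha, cap = 1.0, 0.1, 1.5, 1e12

t_s = I0 ** (1 - alpha) / (eta * (alpha - 1))  # predicted blow-up time in the continuous limit

I, steps = I0, 0
while I < cap:
    I += eta * I ** alpha
    steps += 1

print(f"continuous-limit singularity: t_s ~ {t_s:.1f}")
print(f"discrete steps to exceed a cap of {cap:.0e}: {steps}")
```

For these values t_s = 20; the discrete iteration lags somewhat, since a forward-difference step under-shoots convex growth, but it still blows past the cap on a comparable timescale.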
The critical insight is that the qualitative behavior changes discontinuously at α = 1. There is no smooth transition between 'manageable' and 'unmanageable' — the boundary is a phase transition. This is why the debate between slow and fast takeoff camps is so difficult to resolve empirically: small uncertainties in α map to qualitatively different futures.
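This sensitivity is easy to demonstrate numerically. The sketch below runs the same update rule with α just below, at, and just above the critical value; all parameter choices are illustrative assumptions:

```python
# Sweep alpha around the critical value 1 and compare outcomes over a fixed horizon.
# I0, eta, horizon, and cap are illustrative values, not taken from the text.
I0, eta, horizon, cap = 1.0, 0.1, 400, 1e30

for alpha in (0.95, 1.00, 1.05):
    I = I0
    hit_cap_at = None
    for t in range(1, horizon + 1):
        I += eta * I ** alpha
        if I >= cap:
            hit_cap_at = t
            break
    if hit_cap_at is None:
        print(f"alpha = {alpha:.2f}: I({horizon}) = {I:.3g}")
    else:
        print(f"alpha = {alpha:.2f}: exceeded the cap of {cap:.0e} at step {hit_cap_at}")
```

Over the same horizon the three runs end in qualitatively different places: polynomial-scale growth, exponential-scale growth, and a capped blow-up.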