Camera Calibration Simulator: Lens Distortion & Intrinsic Parameters

Simulator · Advanced · ~15 min

An 800 px focal length with k₁=-0.1 produces noticeable barrel distortion at image edges. Calibration corrects this, enabling sub-pixel reprojection accuracy for 3D reconstruction.

Formula

x_d = x(1 + k₁r² + k₂r⁴), where r² = x² + y² in normalized image coordinates (radial distortion model)
FOV = 2 × arctan(sensor_width / (2 × f))
K = [[f, 0, cx], [0, f, cy], [0, 0, 1]] (intrinsic matrix; assumes square pixels, so fx = fy = f, and zero skew)
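The two pinhole formulas above can be sketched in a few lines of NumPy. The 1024 px sensor width used in the check below is an assumption chosen because it reproduces the article's 65.2° figure for f = 800 px; the article itself does not state the image width.

```python
import numpy as np

def fov_deg(sensor_width_px: float, focal_px: float) -> float:
    """Horizontal field of view from the pinhole model: FOV = 2*arctan(w / (2f))."""
    return float(np.degrees(2.0 * np.arctan(sensor_width_px / (2.0 * focal_px))))

def intrinsic_matrix(f: float, cx: float, cy: float) -> np.ndarray:
    """3x3 intrinsic matrix K, assuming square pixels (fx = fy = f) and zero skew."""
    return np.array([[f, 0.0, cx],
                     [0.0, f, cy],
                     [0.0, 0.0, 1.0]])

# f = 800 px on an assumed 1024 px-wide sensor gives the ~65.2 deg FOV quoted above.
print(round(fov_deg(1024, 800), 1))  # 65.2
K = intrinsic_matrix(800, 512, 384)
```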

Why Calibration Matters

Every camera lens introduces geometric distortion — straight lines in the real world appear curved in images. For photogrammetry, where pixel measurements translate directly to 3D coordinates, uncorrected distortion propagates into systematic reconstruction errors. Camera calibration determines the mathematical model that maps world geometry to pixel coordinates, enabling corrections that achieve sub-pixel accuracy.

The Distortion Model

The Brown-Conrady model describes radial distortion as a polynomial function of distance from the image center: points farther from the principal point are displaced more. The k₁ coefficient dominates — negative values produce barrel distortion (wide-angle lenses), positive values create pincushion distortion (telephoto). Higher-order terms k₂ and k₃ capture subtle nonlinearities. This simulation visualizes how these coefficients warp a regular grid.
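The grid-warping the simulator visualizes can be reproduced directly from the polynomial model. This is a minimal sketch in NumPy (the 5×5 grid and the choice of normalized coordinates are illustrative, not taken from the simulator):

```python
import numpy as np

def distort(points: np.ndarray, k1: float, k2: float = 0.0, k3: float = 0.0) -> np.ndarray:
    """Apply Brown-Conrady radial distortion to N x 2 normalized image points."""
    r2 = np.sum(points**2, axis=1, keepdims=True)  # squared distance from principal point
    factor = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    return points * factor

# A regular 5x5 grid in normalized coordinates.
xs = np.linspace(-1, 1, 5)
grid = np.array([(x, y) for y in xs for x in xs])

barrel = distort(grid, k1=-0.1)                       # k1 < 0: barrel distortion
center_shift = np.linalg.norm(barrel[12] - grid[12])  # (0, 0): unmoved
corner_shift = np.linalg.norm(barrel[0] - grid[0])    # corner: displaced the most
```

With k₁ = −0.1 the corner at (−1, −1) has r² = 2, so it is scaled by 0.8 and pulled toward the center, while the central point does not move at all, matching the "points farther from the principal point are displaced more" behavior described above.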

Calibration Procedure

Zhang's calibration method revolutionized the field by requiring only a planar pattern (checkerboard) photographed from multiple angles. The algorithm detects corners with sub-pixel precision, estimates homographies between the pattern and each image, and solves a constrained optimization for all intrinsic parameters simultaneously. The resulting reprojection error quantifies calibration quality — values below 0.3 pixels indicate excellent calibration.
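The target-side setup for Zhang's method can be sketched as follows. The board geometry (9×6 inner corners, 25 mm squares) is an illustrative assumption; the OpenCV detection and solve calls are shown only as comments because they need real images to run.

```python
import numpy as np

# Planar 3D coordinates of a 9x6 inner-corner checkerboard, Z = 0 on the board plane.
# Board size and 25 mm square pitch are illustrative assumptions.
cols, rows, square_mm = 9, 6, 25.0
objp = np.zeros((rows * cols, 3), np.float32)
objp[:, :2] = np.mgrid[0:cols, 0:rows].T.reshape(-1, 2) * square_mm

# With OpenCV, each calibration image contributes one (objp, corners) pair:
#   ok, corners = cv2.findChessboardCorners(gray, (cols, rows))
#   corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
# and all pairs are solved jointly for the intrinsics and per-view extrinsics:
#   err, K, dist, rvecs, tvecs = cv2.calibrateCamera(
#       obj_pts, img_pts, gray.shape[::-1], None, None)
```

The returned `err` is the RMS reprojection error used to judge calibration quality, as described above.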

Self-Calibration in SfM

Modern structure-from-motion pipelines can estimate camera intrinsics alongside 3D structure, a process called self-calibration or auto-calibration. By exploiting geometric constraints observed across many images (such as the image of the absolute conic), the algorithm recovers focal length and distortion without any calibration target. This enables photogrammetric reconstruction from casual smartphone photos, putting 3D capture within anyone's reach.

FAQ

What is camera calibration?

Camera calibration determines the intrinsic parameters (focal length, principal point, distortion coefficients) that map 3D world points to 2D image pixels. Accurate calibration is essential for photogrammetric measurement — without it, lens distortion introduces systematic errors in 3D reconstruction.

What is radial distortion?

Radial distortion causes straight lines to appear curved in images. Barrel distortion (negative k₁) bows lines outward and is common in wide-angle lenses. Pincushion distortion (positive k₁) curves lines inward. The Brown-Conrady model uses polynomial coefficients k₁, k₂, k₃ to correct these effects.

How do you calibrate a camera?

The standard method uses images of a known calibration target (typically a checkerboard) from multiple angles. Zhang's method extracts corner points, estimates homographies, and solves for intrinsic and extrinsic parameters. OpenCV provides automated calibration pipelines.

What is reprojection error?

Reprojection error measures calibration quality by projecting known 3D points through the estimated camera model and comparing predicted pixel positions to observed ones. Sub-pixel reprojection error (< 0.5 px) indicates a well-calibrated camera suitable for precision photogrammetry.
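The metric itself is simple to compute once a camera model is in hand. A minimal sketch with an undistorted pinhole model and made-up values (the points, f = 800 px, and principal point are illustrative assumptions):

```python
import numpy as np

def project(points_3d: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Project camera-frame 3D points to pixels with a pinhole model (no distortion)."""
    uvw = (K @ points_3d.T).T
    return uvw[:, :2] / uvw[:, 2:3]

def rms_reprojection_error(observed_px, points_3d, K) -> float:
    """RMS pixel distance between observed points and model predictions."""
    residuals = observed_px - project(points_3d, K)
    return float(np.sqrt(np.mean(np.sum(residuals**2, axis=1))))

# Toy example: two points on the Z = 5 plane, f = 800 px, principal point (512, 384).
K = np.array([[800.0, 0.0, 512.0], [0.0, 800.0, 384.0], [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0, 5.0], [1.0, 0.5, 5.0]])
observed = project(pts, K) + np.array([[0.3, -0.4], [0.0, 0.0]])  # 0.5 px error on one point
print(rms_reprojection_error(observed, pts, K))  # ~0.354
```

A 0.5 px error on one of two points averages out to an RMS of about 0.35 px, which by the threshold quoted above would still count as well calibrated.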
