Why Calibration Matters
Every camera lens introduces geometric distortion — straight lines in the real world appear curved in images. For photogrammetry, where pixel measurements translate directly to 3D coordinates, uncorrected distortion propagates into systematic reconstruction errors. Camera calibration determines the mathematical model that maps world geometry to pixel coordinates, enabling corrections that achieve sub-pixel accuracy.
The Distortion Model
The Brown-Conrady model describes radial distortion as a polynomial in the squared distance r² from the principal point: points farther from the principal point are displaced more. The k₁ coefficient dominates: negative values produce barrel distortion (typical of wide-angle lenses), positive values produce pincushion distortion (typical of telephoto lenses). The higher-order terms k₂ and k₃ capture subtler nonlinearities toward the image periphery. This simulation visualizes how these coefficients warp a regular grid.
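As a minimal sketch of the radial term, the polynomial can be applied directly to a grid of points. This uses plain NumPy in normalized image coordinates; the function name and the k values are illustrative, not from any particular library:

```python
import numpy as np

def apply_radial_distortion(points, k1, k2=0.0, k3=0.0):
    """Displace normalized image points (N, 2) with the Brown-Conrady
    radial polynomial: p_distorted = p * (1 + k1*r^2 + k2*r^4 + k3*r^6),
    where r is the distance from the principal point (here the origin)."""
    r2 = np.sum(points**2, axis=1, keepdims=True)
    factor = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    return points * factor

# Warp a regular 5x5 grid of normalized coordinates in [-1, 1].
xs, ys = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
grid = np.stack([xs.ravel(), ys.ravel()], axis=1)

barrel = apply_radial_distortion(grid, k1=-0.2)     # points pulled inward
pincushion = apply_radial_distortion(grid, k1=0.2)  # points pushed outward
```

Note that the center point is unmoved (r = 0), while the corners move the most, which is exactly the behavior the grid visualization shows.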
Calibration Procedure
Zhang's calibration method revolutionized the field by requiring only a planar pattern (typically a checkerboard) photographed from multiple angles. The algorithm detects pattern corners with sub-pixel precision, estimates a homography between the pattern plane and each image, and solves a constrained optimization for all intrinsic parameters simultaneously. The resulting RMS reprojection error quantifies calibration quality; as a rule of thumb, values below 0.3 pixels indicate an excellent calibration.
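One core step of Zhang's method, estimating the homography between the planar pattern and an image, can be sketched with the direct linear transform (DLT). This is a self-contained NumPy illustration with a synthetic ground-truth homography; the helper name and the numbers are hypothetical, and a full pipeline (e.g. OpenCV's calibration routines) layers corner detection and nonlinear refinement on top of this step:

```python
import numpy as np

def estimate_homography(src, dst):
    """DLT: recover the 3x3 homography H with dst ~ H @ src
    from N >= 4 point correspondences, each given as (N, 2) arrays."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on H.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # H (flattened) is the null vector of A, found via SVD.
    _, _, vt = np.linalg.svd(np.asarray(A))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the arbitrary scale so H[2, 2] = 1

# Synthetic checkerboard corners (pattern plane, world units) and their
# projections under a known ground-truth homography.
H_true = np.array([[800.0, 2.0, 320.0],
                   [1.5, 780.0, 240.0],
                   [0.001, 0.002, 1.0]])
src = np.array([[x, y] for x in range(4) for y in range(4)], dtype=float)
homog = np.hstack([src, np.ones((len(src), 1))])
proj = homog @ H_true.T
dst = proj[:, :2] / proj[:, 2:3]

H_est = estimate_homography(src, dst)
```

In the real procedure, one such homography per view feeds the constrained optimization that recovers the shared intrinsics.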
Self-Calibration in SfM
Modern structure-from-motion pipelines can estimate camera intrinsics alongside 3D structure, a process called self-calibration or auto-calibration. By exploiting geometric constraints that hold across many images (such as the image of the absolute conic), the algorithm recovers focal length and distortion without any calibration target. This enables photogrammetric reconstruction from casual smartphone photos, democratizing 3D capture.
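The absolute-conic constraint can be stated concretely in standard multi-view geometry notation, with K the intrinsic matrix, ω the image of the absolute conic, and H∞ the homography induced by the plane at infinity between two views sharing the same intrinsics:

```latex
\omega = (K K^{\top})^{-1},
\qquad
\omega = H_{\infty}^{-\top}\, \omega \, H_{\infty}^{-1}
\quad \text{(constant intrinsics across views).}
```

Once enough such constraints pin down ω, the intrinsics K are recovered (up to scale) from the Cholesky factorization of ω⁻¹ = K Kᵀ.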