Point Cloud Simulator: 3D Surface Reconstruction Visualization

10,000 points — reconstructed surface model

A 10,000-point cloud with 5 mm of measurement noise still yields a recognizable 3D surface. Rotating the view reveals spatial structure that would be invisible in any single 2D projection.

Formula

ρ = N / A (point density per unit area)
SNR = (max_range - min_range) / σ (signal range relative to the noise standard deviation)
d_nn ≈ 0.5 · (A / N)^0.5 (mean nearest-neighbor distance for uniformly random points)
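The three formulas above can be sketched as a small calculator. This is a minimal illustration; the function name `point_cloud_stats` and the sample values (10,000 points over 100 m², a 2 m elevation range, 5 mm noise) are chosen here for demonstration.

```python
import math

def point_cloud_stats(n_points, area, z_range, sigma):
    """Basic quality metrics for a point cloud over a surveyed area.

    n_points : number of points N
    area     : surveyed area A (m^2)
    z_range  : max_range - min_range of the signal (m)
    sigma    : noise standard deviation (m)
    """
    density = n_points / area                 # rho = N / A
    snr = z_range / sigma                     # signal range vs. noise floor
    d_nn = 0.5 * math.sqrt(area / n_points)   # mean NN distance, uniform points
    return density, snr, d_nn

rho, snr, d_nn = point_cloud_stats(10_000, 100.0, 2.0, 0.005)
# rho = 100.0 points/m^2, snr = 400.0, d_nn = 0.05 m
```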

From Points to Surfaces

A point cloud is the raw geometric output of photogrammetric reconstruction — millions of individual 3D coordinates that collectively describe the shape of an object or landscape. Unlike mesh models, point clouds make no assumptions about surface connectivity, making them flexible but requiring further processing for visualization and measurement. Modern photogrammetry pipelines routinely produce clouds with billions of points from drone imagery alone.

Density and Detail

Point density determines the finest feature that can be resolved. At 100 points per square meter, you capture building outlines; at 1000 points, architectural details emerge; at 10,000+, you resolve individual bricks. This simulation lets you see how increasing point count progressively reveals surface structure, and how the relationship between density and noise level determines the effective resolution of the reconstruction.
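The density figures above translate directly into point spacing, which bounds the finest resolvable feature. A quick sweep (spacing taken as √(1/ρ), the characteristic distance between neighbors at density ρ):

```python
import math

# Resolvable feature size scales with point spacing: at density rho
# (points/m^2), neighboring points sit roughly sqrt(1/rho) apart, so
# features smaller than a few spacings cannot be resolved.
for rho in (100, 1_000, 10_000):
    spacing_cm = math.sqrt(1.0 / rho) * 100
    print(f"{rho:>6} pts/m^2 -> ~{spacing_cm:.1f} cm point spacing")
```

At 100 pts/m² the spacing is about 10 cm (building outlines), at 1,000 pts/m² about 3 cm (architectural details), and at 10,000 pts/m² about 1 cm (individual bricks), matching the progression described above.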

Noise and Filtering

Every measurement system introduces noise. In photogrammetry, matching errors, calibration imprecision, and image noise all contribute to scatter around the true surface. Statistical outlier removal identifies points too far from their neighbors, while moving least squares fitting produces smooth surfaces from noisy input. The signal-to-noise ratio quantifies whether meaningful geometry survives the noise floor.
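Statistical outlier removal can be sketched in a few lines of NumPy. This is a brute-force O(N²) illustration for small clouds, not a production implementation; the function name and parameters (`k`, `std_ratio`) are illustrative, though they mirror the convention used by common point-cloud libraries.

```python
import numpy as np

def statistical_outlier_removal(points, k=8, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbors
    exceeds the global mean by std_ratio standard deviations."""
    diffs = points[:, None, :] - points[None, :, :]   # all pairwise vectors
    dists = np.linalg.norm(diffs, axis=2)
    dists.sort(axis=1)
    mean_knn = dists[:, 1:k + 1].mean(axis=1)         # skip self-distance 0
    threshold = mean_knn.mean() + std_ratio * mean_knn.std()
    return points[mean_knn <= threshold]

rng = np.random.default_rng(0)
surface = rng.uniform(0, 1, size=(200, 3)) * [1, 1, 0.01]  # near-flat sheet
outliers = surface[:5] + [0.0, 0.0, 5.0]                   # points far above it
cleaned = statistical_outlier_removal(np.vstack([surface, outliers]))
```

The five displaced points sit far from their nearest neighbors, so their mean neighbor distance crosses the threshold and they are filtered out while the dense sheet survives.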

Visualization and Rendering

Displaying millions of 3D points efficiently requires level-of-detail techniques like octree-based point budgeting, where distant regions show fewer points while nearby areas render at full density. Point splatting assigns each point a disk radius that covers gaps between neighbors, creating the appearance of a continuous surface. Modern web viewers like Potree handle billions of points in real time through hierarchical streaming.
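The leaf-level operation behind octree point budgeting is spatial binning: keep one representative point per occupied cell. A minimal voxel-grid version in NumPy (the function name `voxel_downsample` is illustrative):

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Keep one representative point (the centroid) per occupied voxel --
    the per-cell reduction that octree LOD schemes apply at each level."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    n_voxels = inverse.max() + 1
    sums = np.zeros((n_voxels, 3))
    counts = np.zeros(n_voxels)
    np.add.at(sums, inverse, points)   # accumulate points per voxel
    np.add.at(counts, inverse, 1)
    return sums / counts[:, None]

rng = np.random.default_rng(1)
cloud = rng.uniform(0, 1, size=(10_000, 3))
coarse = voxel_downsample(cloud, 0.25)   # unit cube -> at most 4^3 = 64 voxels
```

Rendering a distant region from `coarse` instead of `cloud` cuts 10,000 points to at most 64 while preserving the overall shape; an octree simply repeats this at multiple cell sizes.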

FAQ

What is a point cloud?

A point cloud is a set of 3D coordinates (x, y, z) representing the external surface of an object or scene. Each point may also carry color (RGB) or intensity values. Point clouds are generated by LiDAR scanners, photogrammetry pipelines, or structured light systems.

How are point clouds generated from photos?

Multi-view stereo (MVS) algorithms match features across multiple photographs, triangulate 3D positions, and produce dense point clouds. The process typically follows structure-from-motion for camera poses, then dense matching for surface points.
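The triangulation step can be illustrated with the standard linear (DLT) method: each camera observation contributes two rows of a homogeneous system solved by SVD. A two-view sketch with assumed toy cameras (identity intrinsics, a unit stereo baseline):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.

    P1, P2 : 3x4 camera projection matrices
    x1, x2 : normalized image observations (x, y)
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)       # null-space vector of A
    X = Vt[-1]
    return X[:3] / X[3]               # dehomogenize

# Toy stereo pair: second camera offset 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.3, -0.2, 4.0])
h = np.append(X_true, 1.0)
x1 = (P1 @ h)[:2] / (P1 @ h)[2]       # project into each view
x2 = (P2 @ h)[:2] / (P2 @ h)[2]
X_est = triangulate(P1, P2, x1, x2)
```

A dense MVS pipeline runs this (with refinement and outlier checks) for every matched pixel across many views, which is where the millions of points come from.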

How do you reduce noise in point clouds?

Statistical outlier removal filters points whose distance to neighbors exceeds a threshold. Moving least squares (MLS) smoothing fits local surfaces to reduce random noise while preserving geometric features. Bilateral filtering preserves edges while smoothing flat regions.
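A first-order moving-least-squares step can be sketched as projecting each point onto the least-squares plane of its nearest neighbors. This is a simplified illustration (full MLS fits weighted polynomial surfaces); the function name `mls_smooth` and the brute-force neighbor search are for demonstration only.

```python
import numpy as np

def mls_smooth(points, k=8):
    """Project each point onto the least-squares plane of its k nearest
    neighbors -- a first-order moving-least-squares smoothing step."""
    out = np.empty_like(points)
    for i, p in enumerate(points):
        d = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(d)[:k]]          # includes the point itself
        centroid = nbrs.mean(axis=0)
        # Plane normal = direction of least variance among the neighbors.
        _, _, Vt = np.linalg.svd(nbrs - centroid)
        normal = Vt[-1]
        out[i] = p - np.dot(p - centroid, normal) * normal
    return out

rng = np.random.default_rng(2)
flat = rng.uniform(0, 1, size=(300, 3)) * [1, 1, 0]       # true surface z = 0
noisy = flat + rng.normal(0, 0.01, size=flat.shape)
smoothed = mls_smooth(noisy)
```

On this synthetic flat patch the out-of-plane scatter shrinks after projection, while the in-plane structure is untouched, which is the edge-preserving behavior the paragraph describes in miniature.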

What file formats store point clouds?

Common formats include PLY (Polygon File Format), LAS/LAZ (LiDAR standard), PCD (Point Cloud Library format), and E57 (ASTM standard). PLY and LAS are the most widely supported across software platforms.
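ASCII PLY is simple enough to write by hand, which makes it handy for debugging. A minimal writer for xyz-only clouds (binary PLY and per-point attributes like RGB are omitted for brevity):

```python
def write_ascii_ply(path, points):
    """Write an iterable of (x, y, z) tuples as a minimal ASCII PLY file."""
    points = list(points)
    with open(path, "w") as f:
        f.write("ply\n")
        f.write("format ascii 1.0\n")
        f.write(f"element vertex {len(points)}\n")
        f.write("property float x\n")
        f.write("property float y\n")
        f.write("property float z\n")
        f.write("end_header\n")
        for x, y, z in points:
            f.write(f"{x} {y} {z}\n")

write_ascii_ply("cloud.ply", [(0.0, 0.0, 0.0), (1.0, 0.5, 0.25)])
```

The resulting file opens directly in common viewers such as MeshLab or CloudCompare.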
