
From Drone Video to a 3D Gaussian Splat

This project takes drone video footage and turns it into a 3D Gaussian splat that can be rendered from new viewpoints. The implementation follows the practical, "canonical" pipeline that most 3DGS systems use in the real world:


Video → frames → COLMAP (camera poses + sparse points) → 3D Gaussian Splatting training → novel-view rendering.
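The very first step of that chain, turning video into frames, usually involves subsampling: consecutive video frames are nearly identical, so pipelines keep only a few frames per second before handing images to COLMAP. A minimal sketch of that subsampling logic (the function name and the 2 fps target are illustrative choices, not taken from the project):

```python
def sample_frame_indices(total_frames: int, video_fps: float, target_fps: float = 2.0) -> list[int]:
    """Pick evenly spaced frame indices so pose estimation sees ~target_fps frames/sec.

    Dense video frames are highly redundant; keeping a sparser, evenly spaced
    subset gives COLMAP enough parallax between views without bloating the
    reconstruction. (Hypothetical helper for illustration.)
    """
    step = max(1, round(video_fps / target_fps))
    return list(range(0, total_frames, step))

# Example: a 10-second clip at 30 fps, thinned to roughly 2 fps,
# yields 20 frames (every 15th frame: 0, 15, 30, ..., 285).
indices = sample_frame_indices(total_frames=300, video_fps=30.0, target_fps=2.0)
```

The selected indices would then be extracted (e.g. with ffmpeg or OpenCV) and written as still images for COLMAP's feature extraction and sparse reconstruction.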


This post focuses on what Gaussian splatting is, how it works, why it matters, and how the Kaggle pipeline reproduces it end-to-end.
