Targetless LiDAR-Camera Calibration with
Neural Gaussian Splatting

1 Seoul National University   2 Yonsei University  

arXiv 2025


Abstract

Accurate LiDAR-camera calibration is crucial for multi-sensor systems. However, traditional methods often rely on physical targets, which are impractical for real-world deployment. Moreover, even carefully calibrated extrinsics can degrade over time due to sensor drift or external disturbances, necessitating periodic recalibration. To address these challenges, we present Targetless LiDAR-Camera Calibration (TLC-Calib), which jointly optimizes sensor poses with a neural Gaussian-based scene representation. Reliable LiDAR points are frozen as anchor Gaussians to preserve global structure, while auxiliary Gaussians prevent local overfitting under noisy initialization. Our fully differentiable pipeline with photometric and geometric regularization achieves robust and generalizable calibration, consistently outperforming existing targetless methods on KITTI-360, Waymo, and FAST-LIVO2, and surpassing even the dataset-provided calibrations in rendering quality.
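
In rough equation form (our notation, not the paper's exact formulation), the joint optimization described above can be sketched as minimizing a photometric term plus a geometric regularizer over the LiDAR-to-camera extrinsic $T$ and the Gaussian scene parameters $\mathcal{G}$:

$$
T^{*},\,\mathcal{G}^{*} \;=\; \arg\min_{T,\,\mathcal{G}} \;\sum_{k} \mathcal{L}_{\mathrm{photo}}\big(\mathcal{R}(\mathcal{G};\, T,\, P_{k}),\, I_{k}\big) \;+\; \lambda\, \mathcal{L}_{\mathrm{geo}}\big(\mathcal{G},\, \mathcal{D}\big)
$$

where $\mathcal{R}$ renders the Gaussian scene into view $k$ with camera pose $P_{k}$, $I_{k}$ is the captured image, $\mathcal{D}$ is the aggregated LiDAR point cloud, and $\lambda$ balances the two terms.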

Overview

Figure 1. Overview of the TLC-Calib pipeline. After LiDAR scans are aggregated into a globally aligned point cloud, anchor Gaussians serve as fixed geometric references whose positions are never optimized, while auxiliary Gaussians act as learnable buffers around the anchors: they adapt to local geometry, guide extrinsic optimization through the photometric loss, and help the optimization avoid local minima. In addition, a rig-based optimization strategy jointly refines all cameras relative to the scene, ensuring consistent and stable calibration across views.
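
To make the roles of the two Gaussian sets concrete, the following is a minimal PyTorch sketch of the parameter layout implied by Figure 1, not the authors' implementation: the module name, the 6-DoF twist parameterization of the rig extrinsic, and all initialization constants are illustrative assumptions.

```python
# Hypothetical sketch: frozen anchor Gaussians, learnable auxiliary Gaussians,
# and a single learnable LiDAR-to-rig extrinsic optimized jointly with the scene.
import torch
import torch.nn as nn


class AnchorAuxScene(nn.Module):
    def __init__(self, lidar_points: torch.Tensor, n_aux_per_anchor: int = 2):
        super().__init__()
        # Anchor Gaussians: positions taken from reliable LiDAR points and frozen
        # (registered as a buffer, so they receive no gradient updates).
        self.register_buffer("anchor_xyz", lidar_points)              # (N, 3)
        # Auxiliary Gaussians: learnable positions seeded near each anchor.
        aux_init = lidar_points.repeat_interleave(n_aux_per_anchor, dim=0)
        self.aux_xyz = nn.Parameter(aux_init + 0.01 * torch.randn_like(aux_init))
        # Appearance/shape parameters stay learnable for both Gaussian sets.
        n_total = lidar_points.shape[0] + aux_init.shape[0]
        self.log_scales = nn.Parameter(torch.zeros(n_total, 3))
        self.colors = nn.Parameter(torch.rand(n_total, 3))
        # Rig extrinsic: one shared LiDAR-to-rig transform, parameterized as a
        # 6-DoF twist (axis-angle + translation) so it remains differentiable.
        self.extrinsic_twist = nn.Parameter(torch.zeros(6))

    def gaussian_centers(self) -> torch.Tensor:
        # Frozen anchors and learnable auxiliaries are rendered together.
        return torch.cat([self.anchor_xyz, self.aux_xyz], dim=0)


scene = AnchorAuxScene(lidar_points=torch.randn(1000, 3))
optimizer = torch.optim.Adam(
    [scene.aux_xyz, scene.log_scales, scene.colors, scene.extrinsic_twist], lr=1e-3
)
```

Keeping the anchors out of the optimizer is what preserves global scale and translation; only the auxiliary offsets, appearance terms, and the extrinsic twist are updated.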
The primary contributions of this work are as follows:
  1. We stabilize global scale and translation by designating reliable LiDAR points as anchor Gaussians that preserve the overall structure, and by introducing auxiliary Gaussians to regularize local geometry under noisy initialization.
  2. We combine a photometric loss with scale-consistent geometric constraints (see the sketch after this list), enabling robust alignment of the LiDAR and camera sensors across diverse environments and trajectories.
  3. We validate our approach on three real-world setups, including two autonomous driving datasets and one handheld setup with a solid-state LiDAR, demonstrating strong generalization and high calibration accuracy. The estimated extrinsics also provide improved rendering quality over the dataset-provided calibrations.
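
As a hedged sketch of the loss composition in contribution 2 (not the paper's exact formulation), one way to combine a photometric term with a scale-consistent geometric term is to normalize rendered and LiDAR depths before comparing them; the rendering inputs and the weight `lam_geo` below are placeholders.

```python
# Hypothetical loss sketch: photometric L1 plus a median-normalized depth term.
import torch
import torch.nn.functional as F


def calibration_loss(rendered_rgb, target_rgb, rendered_depth, lidar_depth, lam_geo=0.1):
    # Photometric term: L1 between the splatted image and the camera image.
    photo = F.l1_loss(rendered_rgb, target_rgb)

    # Scale-consistent geometric term: compare depths only where LiDAR is valid,
    # normalizing both by their median to remove global scale.
    valid = lidar_depth > 0
    r, l = rendered_depth[valid], lidar_depth[valid]
    geo = F.l1_loss(r / r.median().clamp(min=1e-6), l / l.median().clamp(min=1e-6))

    return photo + lam_geo * geo
```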


Progress of Optimization


*We sampled 4 images out of 40 per camera for visualization purposes.


Novel View Synthesis Results


Ours
Dataset Calib.


Qualitative Comparison of Novel View Synthesis


NVS Results


Qualitative Comparison of LiDAR-Camera Reprojection


Reprojection Results


BibTeX

@article{jung2025targetless,
  title     = {{Targetless LiDAR-Camera Calibration with Neural Gaussian Splatting}},
  author    = {Haebeom Jung and Namtae Kim and Jungwoo Kim and Jaesik Park},
  journal   = {arXiv preprint arXiv:2504.04597},
  year      = {2025}
}