Accurate LiDAR–camera calibration is crucial for multi-sensor systems. However, traditional methods often rely on physical targets, which are impractical for real-world deployment. Moreover, even carefully calibrated extrinsics can degrade over time due to sensor drift or external disturbances, necessitating periodic recalibration. To address these challenges, we present TLC-Calib, a targetless LiDAR–camera calibration method that jointly optimizes sensor poses with a neural Gaussian-based scene representation. Reliable LiDAR points are frozen as anchor Gaussians to preserve global structure, while auxiliary Gaussians prevent local overfitting under noisy initialization. Our fully differentiable pipeline with photometric and geometric regularization achieves robust and generalizable calibration, consistently outperforming existing targetless methods on the KITTI-360, Waymo, and Fast-LIVO2 datasets. In addition, it yields more consistent novel view synthesis results, reflecting improved extrinsic alignment.
For dynamic scenes, we perform per-scene calibration and use the resulting extrinsics for rendering, demonstrating that the method remains reasonably effective beyond static scenes. The presented results are rendered from interpolated test viewpoints.
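The anchor/auxiliary split can be illustrated with a deliberately simplified sketch. This is not the paper's pipeline (which optimizes SE(3) extrinsics against a rendered neural Gaussian scene with photometric and geometric losses); it is a 2D toy with a translation-only "extrinsic" and squared residuals, invented here purely to show why freezing anchors matters: the frozen anchor points pin the extrinsic estimate to the true offset, while the free auxiliary points absorb their own initialization noise without dragging the extrinsic away.

```python
import numpy as np

# Toy illustration (assumption: not the TLC-Calib implementation).
# Anchors are frozen; auxiliaries and the extrinsic `t` are optimized
# jointly by gradient descent on a squared photometric-style residual.
rng = np.random.default_rng(0)

anchors = rng.normal(size=(20, 2))                   # frozen anchor means
aux = anchors + rng.normal(scale=0.3, size=(20, 2))  # noisy auxiliary means
t_true = np.array([0.5, -0.2])                       # true 2D "extrinsic"
obs = anchors + t_true                               # camera-side targets

t = np.zeros(2)  # extrinsic estimate, optimized jointly with `aux`
lr = 0.1
for _ in range(300):
    r_anchor = anchors + t - obs  # anchors are NOT updated (frozen)
    r_aux = aux + t - obs
    # Gradients of the mean squared residuals w.r.t. t and aux.
    t -= lr * (r_anchor.mean(axis=0) + r_aux.mean(axis=0))
    aux -= lr * r_aux             # auxiliaries are free to move

print(np.round(t, 3))             # converges near t_true = [0.5, -0.2]
```

Because the anchors never move, their residual can vanish only when `t` equals the true offset; the auxiliaries instead converge to wherever best explains the observations under the current `t`, mirroring how auxiliary Gaussians soak up local noise.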
@article{jung2026targetless,
title = {{Targetless LiDAR-Camera Calibration with Neural Gaussian Splatting}},
author = {Jung, Haebeom and Kim, Namtae and Kim, Jungwoo and Park, Jaesik},
journal = {IEEE Robotics and Automation Letters (RA-L)},
year = {2026}
}