Targetless LiDAR-Camera Calibration with
Neural Gaussian Splatting

IEEE RA-L (ICRA 2026)

1 Seoul National University   2 Yonsei University  




Abstract

Accurate LiDAR-camera calibration is crucial for multi-sensor systems. However, traditional methods often rely on physical targets, which are impractical for real-world deployment. Moreover, even carefully calibrated extrinsics can degrade over time due to sensor drift or external disturbances, necessitating periodic recalibration. To address these challenges, we present Targetless LiDAR-Camera Calibration (TLC-Calib), a framework that jointly optimizes sensor poses with a neural Gaussian-based scene representation. Reliable LiDAR points are frozen as anchor Gaussians to preserve global structure, while auxiliary Gaussians prevent local overfitting under noisy initialization. Our fully differentiable pipeline with photometric and geometric regularization achieves robust and generalizable calibration, consistently outperforming existing targetless methods on the KITTI-360, Waymo, and Fast-LIVO2 datasets. In addition, it yields more consistent Novel View Synthesis results, reflecting improved extrinsic alignment.


Overview

Method overview
Figure 1. Overview of the TLC-Calib pipeline. After aggregating LiDAR scans into a globally aligned point cloud, anchor Gaussians serve as fixed geometric references (their positions are not optimized), while auxiliary Gaussians adapt to local geometry and guide extrinsic optimization through a photometric loss. Unlike the anchors, auxiliary Gaussians act as learnable buffers around them, helping the optimization avoid local minima. In addition, a camera rig optimization strategy jointly refines all cameras with respect to the scene, ensuring consistent and stable calibration across views.
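To make the joint optimization concrete, the minimal PyTorch-style sketch below mirrors the setup described above: anchor positions carry no gradient and stay frozen, auxiliary Gaussian positions and all appearance parameters remain learnable, and each camera in the rig carries its own pose correction refined through a photometric L1 loss. The renderer, learning rates, and pose parameterization here are illustrative placeholders, not the released implementation.

import torch

# Minimal sketch (not the released implementation): joint refinement of
# per-camera extrinsics and a Gaussian scene, with anchor positions frozen.

def pose_from_vec(xi):
    # Axis-angle rotation (Rodrigues' formula) plus a translation offset.
    omega, t = xi[:3], xi[3:]
    theta = omega.norm().clamp_min(1e-8)
    k = omega / theta
    zero = torch.zeros((), dtype=xi.dtype)
    K = torch.stack([torch.stack([zero, -k[2], k[1]]),
                     torch.stack([k[2], zero, -k[0]]),
                     torch.stack([-k[1], k[0], zero])])
    R = torch.eye(3, dtype=xi.dtype) + theta.sin() * K + (1 - theta.cos()) * (K @ K)
    return R, t

def render(xyz, rgb, R, t):
    # Hypothetical stand-in for a differentiable Gaussian-splatting rasterizer;
    # a real pipeline would rasterize the Gaussians into an image here.
    return rgb.mean() + (xyz @ R.T + t).mean()

# Anchor Gaussians: reliable LiDAR points; positions carry no gradient.
anchor_xyz = torch.randn(10_000, 3)                      # stand-in for aggregated LiDAR points
anchor_rgb = torch.rand(10_000, 3, requires_grad=True)   # appearance stays learnable
# Auxiliary Gaussians: fully learnable buffers scattered around the anchors.
aux_xyz = (anchor_xyz[:2_000] + 0.05 * torch.randn(2_000, 3)).requires_grad_()
aux_rgb = torch.rand(2_000, 3, requires_grad=True)
# One se(3)-style correction per camera in the rig, initialized at identity.
rig_xi = [torch.zeros(6, requires_grad=True) for _ in range(4)]

optimizer = torch.optim.Adam([
    {"params": [anchor_rgb, aux_rgb], "lr": 2.5e-3},
    {"params": [aux_xyz], "lr": 1.6e-4},
    {"params": rig_xi, "lr": 1.0e-4},    # illustrative learning rates
])

target_views = torch.rand(4)             # dummy photometric targets, one per camera

for step in range(50):
    optimizer.zero_grad()
    xyz = torch.cat([anchor_xyz, aux_xyz])   # anchors stay put; auxiliaries move
    rgb = torch.cat([anchor_rgb, aux_rgb])
    loss = 0.0
    for xi, target in zip(rig_xi, target_views):
        R, t = pose_from_vec(xi)
        loss = loss + (render(xyz, rgb, R, t) - target).abs()   # photometric (L1) term
    loss.backward()
    optimizer.step()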
The primary contributions of this paper are as follows:
  1. We ensure metric scene scale by designating reliable LiDAR points as anchor Gaussians to preserve overall scene structure, while auxiliary Gaussians regularize local geometry under challenging initialization.
  2. We integrate adaptive voxel control and Gaussian scale regularization to reduce redundant anchor Gaussians and suppress over-dominant anisotropic Gaussians that hinder optimization stability (an illustrative sketch follows this list).
  3. We validate our approach on three real-world setups, spanning two autonomous driving datasets and a handheld solid-state LiDAR setup, and demonstrate strong generalization, high calibration accuracy, and high rendering quality.
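As a rough illustration of contribution 2 above (the exact formulation is in the paper), the snippet below sketches voxel-based thinning that keeps a single anchor candidate per voxel, together with a scale penalty that discourages overly large, strongly anisotropic Gaussians; the voxel size, anisotropy cap, and penalty form are assumptions made for this sketch.

import torch

def voxel_subsample(points, voxel_size=0.1):
    # Keep one point per voxel to thin redundant anchor candidates.
    # The 0.1 m voxel size is an illustrative choice, not the paper's setting.
    vox = torch.floor(points / voxel_size).long()                # integer voxel index per point
    _, inverse = torch.unique(vox, dim=0, return_inverse=True)   # voxel id of each point
    order = torch.argsort(inverse)                               # group points by voxel id
    grouped = inverse[order]
    keep = torch.ones_like(grouped, dtype=torch.bool)
    keep[1:] = grouped[1:] != grouped[:-1]                       # first point of every voxel group
    return points[order[keep]]

def scale_regularizer(log_scales, max_ratio=10.0, max_scale=1.0):
    # Penalize over-dominant anisotropic Gaussians (illustrative form).
    # log_scales: (N, 3) per-axis log scales of the Gaussians.
    s = log_scales.exp()
    aniso = s.max(dim=1).values / s.min(dim=1).values.clamp_min(1e-6)
    too_stretched = torch.relu(aniso - max_ratio)                # anisotropy beyond the ratio cap
    too_large = torch.relu(s.max(dim=1).values - max_scale)      # absolute size beyond the cap
    return (too_stretched + too_large).mean()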


Progress of Joint Optimization


*We sampled 4 images out of 40 per camera for visualization purposes.


Novel View Synthesis Results


Renderings with our estimated extrinsics (Ours) compared against the dataset-provided calibration (Dataset Calib.).


Comparison of Novel View Synthesis


NVS Results
Figure 2. Qualitative comparison of Novel View Synthesis results on Waymo. Key improvements are highlighted with yellow boxes, and cropped patches show zoomed-in regions for clarity. The PSNR of each rendered image is shown in the top-right corner.


Comparison of LiDAR-Camera Reprojection


Reprojection Results
Figure 3. Qualitative evaluation of LiDAR-camera alignment on KITTI-360. LiDAR points are projected onto images using the calibration results estimated by each baseline method. Point colors indicate 3D distances from the LiDAR, ranging from red (near) to blue (far).
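Figure 3 follows the standard LiDAR-to-camera reprojection recipe. A minimal sketch is shown below, assuming a 4x4 extrinsic transform T_cam_lidar (LiDAR frame to camera frame) and a 3x3 intrinsic matrix K; the variable names and the 0.1 m near-plane cutoff are chosen here for illustration.

import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K, img_w, img_h):
    # Transform LiDAR points into the camera frame, then project with the pinhole model.
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])   # (N, 4) homogeneous
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]                           # (N, 3) in camera frame
    in_front = pts_cam[:, 2] > 0.1                                       # drop points behind the camera
    pts_cam = pts_cam[in_front]
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                                          # perspective division
    in_image = (uv[:, 0] >= 0) & (uv[:, 0] < img_w) & (uv[:, 1] >= 0) & (uv[:, 1] < img_h)
    # Range from the LiDAR origin, used to color points from red (near) to blue (far).
    rng = np.linalg.norm(points_lidar[in_front][in_image], axis=1)
    return uv[in_image], rng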


Ablation Study


Ablation Results
Figure 4. Robustness to initialization perturbations. We evaluate four difficulty levels, Easy, Medium, Hard, and Extreme, where the initial extrinsics are perturbed by up to (5°, 0.5 m), (10°, 1.0 m), (15°, 1.5 m), and (20°, 2.0 m), respectively. For each level, we report the calibration error (left axis, computed over successful runs) and the success rate (right axis) for (a) rotation and (b) translation. A run is considered successful if the final calibration error is within 1° for rotation and 20 cm for translation.
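For reference, the error metrics and the 1° / 20 cm success criterion used in Figure 4 can be computed as in the sketch below; the geodesic rotation-angle formula is a standard choice assumed here, not necessarily the paper's exact evaluation code.

import numpy as np

def calib_errors(R_est, t_est, R_gt, t_gt):
    # Rotation error: geodesic angle between estimated and ground-truth rotations (degrees).
    cos_angle = (np.trace(R_gt.T @ R_est) - 1.0) / 2.0
    rot_err_deg = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    # Translation error: Euclidean distance between estimated and ground-truth translations (meters).
    trans_err_m = np.linalg.norm(t_est - t_gt)
    return rot_err_deg, trans_err_m

def success_rate(errors, rot_thresh_deg=1.0, trans_thresh_m=0.20):
    # Fraction of runs within 1 degree and 20 cm, mirroring the criterion in Figure 4.
    ok = [(r <= rot_thresh_deg) and (t <= trans_thresh_m) for r, t in errors]
    return sum(ok) / len(ok)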


BibTeX

@article{jung2026targetless,
  title     = {{Targetless LiDAR-Camera Calibration with Neural Gaussian Splatting}},
  author    = {Jung, Haebeom and Kim, Namtae and Kim, Jungwoo and Park, Jaesik},
  journal   = {IEEE Robotics and Automation Letters (RA-L)},
  year      = {2026}
}