SemCal: Semantic LiDAR-Camera Calibration using Neural Mutual Information Estimator (Institutional Repository Document)

abstract

  • This paper proposes SemCal: an automatic, targetless, extrinsic calibration algorithm for a LiDAR-camera system using semantic information. We leverage a neural information estimator to estimate the mutual information (MI) between semantic labels extracted from each sensor's measurements, facilitating semantic-level data association. By combining a matrix exponential formulation of the $se(3)$ transformation with a kernel-based method that samples the camera measurement at projected LiDAR points, we formulate LiDAR-camera calibration as a novel differentiable objective function amenable to gradient-based optimization. We also introduce a semantic initial calibration method based on 2D MI-based image registration and a Perspective-n-Point (PnP) solver. To evaluate performance, we demonstrate the robustness of our method and quantitatively analyze its accuracy on a synthetic dataset. We further evaluate the algorithm qualitatively on an urban benchmark (KITTI-360) and an off-road benchmark (RELLIS-3D), using both hand-annotated ground-truth labels and labels predicted by state-of-the-art deep learning models, and show improvements over recent comparable calibration approaches.
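    The following is a minimal sketch (not the authors' code) of the two ingredients named in the abstract: a MINE-style neural lower bound on the mutual information between paired semantic labels, and a differentiable $se(3)$ matrix-exponential parameterization of the extrinsic transform. All names here (StatisticsNetwork, mine_lower_bound, se3_exp) are illustrative assumptions, not the paper's API.

    ```python
    import torch
    import torch.nn as nn


    class StatisticsNetwork(nn.Module):
        """Scores joint vs. marginal samples of (LiDAR label, camera label) pairs."""

        def __init__(self, dim_x: int, dim_y: int, hidden: int = 128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(dim_x + dim_y, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 1),
            )

        def forward(self, x, y):
            return self.net(torch.cat([x, y], dim=-1))


    def mine_lower_bound(T: StatisticsNetwork, x, y):
        """Donsker-Varadhan lower bound on I(X;Y): E_p[T] - log E_{p_x p_y}[exp(T)]."""
        joint = T(x, y).mean()
        # Shuffle y to approximate samples from the product of the marginals.
        y_shuffled = y[torch.randperm(y.shape[0])]
        n = torch.tensor(float(y.shape[0]))
        marginal = torch.logsumexp(T(x, y_shuffled), dim=0) - torch.log(n)
        return joint - marginal


    def se3_exp(xi: torch.Tensor) -> torch.Tensor:
        """Map a 6-vector twist xi = (rho, phi) to a 4x4 transform via the matrix
        exponential, keeping the objective differentiable in the calibration."""
        rho, phi = xi[:3], xi[3:]
        wx = torch.zeros(4, 4, dtype=xi.dtype)
        wx[0, 1], wx[0, 2] = -phi[2], phi[1]
        wx[1, 0], wx[1, 2] = phi[2], -phi[0]
        wx[2, 0], wx[2, 1] = -phi[1], phi[0]
        wx[:3, 3] = rho
        return torch.linalg.matrix_exp(wx)


    if __name__ == "__main__":
        # Toy usage: one-hot semantic labels for matched LiDAR/camera samples.
        x = torch.eye(5)[torch.randint(0, 5, (256,))]
        y = torch.eye(5)[torch.randint(0, 5, (256,))]
        T = StatisticsNetwork(5, 5)
        xi = torch.zeros(6, requires_grad=True)  # extrinsic twist to optimize
        print("MI lower bound:", mine_lower_bound(T, x, y).item())
        print("Extrinsic guess:\n", se3_exp(xi))
    ```

    In the paper's setting, the MI estimate would be computed between LiDAR semantic labels and camera semantic labels sampled at the projected point locations, and maximized jointly over the statistics network and the twist parameters; the toy data above only illustrates the shapes involved.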

altmetric score

  • 1.5

author list (cited authors)

  • Jiang, P., Osteen, P., & Saripalli, S.

citation count

  • 0

complete list of authors

  • Jiang, Peng; Osteen, Philip; Saripalli, Srikanth

Book Title

  • arXiv

publication date

  • September 2021