Recent learning-based calibration methods yield promising results in estimating parameters for wide field-of-view cameras from single images. Yet these end-to-end approaches are typically tethered to a single fixed camera model, which leads to several issues: i) lack of flexibility, as changing the camera model requires architectural changes and retraining; ii) reduced accuracy, since a single model limits the diversity of cameras represented in the training data; iii) restrictions in camera model selection, since learning-based methods require differentiable loss functions and, thus, undistortion equations with closed-form solutions. In response, we present a novel two-step calibration framework for radially symmetric cameras. Key to our approach is a specialized CNN that, given an input image, outputs an implicit camera representation (VaCR), mapping each image point to the direction of the 3D ray of light projecting onto it. The VaCR is then used in a subsequent robust non-linear optimization process to determine the camera parameters for any radially symmetric model provided as input. By disentangling the estimation of camera model parameters from the VaCR, which relies only on the assumption of radial symmetry, we overcome the main limitations of end-to-end approaches. Experimental results demonstrate the advantages of the proposed framework over state-of-the-art methods. Code is available at removed for blind review.
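The abstract does not spell out the second step, but the idea of fitting a radially symmetric model to a per-pixel ray map can be sketched as a robust non-linear least-squares problem. The sketch below is illustrative only: it assumes an odd-polynomial radial projection model theta(r) = c0*r + c1*r^3 + c2*r^5 (one common choice for radially symmetric cameras, not necessarily the paper's) and the function name `fit_radial_model` is hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_radial_model(radii, angles, degree=3):
    """Fit an odd polynomial theta(r) = sum_k c_k * r^(2k+1) to
    (image radius, ray angle) samples, e.g. derived from a per-pixel
    ray map such as the VaCR. Uses a robust soft-L1 loss so outlier
    ray predictions do not dominate the fit."""
    def residuals(coeffs):
        pred = sum(c * radii ** (2 * k + 1) for k, c in enumerate(coeffs))
        return pred - angles
    x0 = np.zeros(degree)
    x0[0] = 1.0  # initialize near the pinhole-like mapping theta = r
    return least_squares(residuals, x0, loss="soft_l1").x

# Synthetic check: recover coefficients of a known radial model
r = np.linspace(0.01, 1.0, 200)
true = np.array([1.0, -0.2, 0.05])
theta = sum(c * r ** (2 * k + 1) for k, c in enumerate(true))
coeffs = fit_radial_model(r, theta)
```

Because the model enters only through the residual function, swapping in a different radially symmetric parameterization requires changing just `residuals`, which is the flexibility the two-step design is after.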