Bi-Camera SLAM



In Bi-Camera SLAM, a second camera is added to the system, as in a stereo configuration.

The operation is as in the bearings-only case, except that observations from a second point of view are available. This differs from classical stereo systems in that the observations are truly bearings-only, so no precise a priori calibration of the extrinsic parameters of the cameras is needed. Instead, the same EKF core is used to estimate these extrinsic parameters accurately over time. Depending on the current precision of these parameters, full 3D initialization reaches variable depths. A critical disparity value is dynamically determined from these uncertainties. For disparities larger than the critical one, a full 3D initialization is performed using both views. For smaller disparities, a ray is initialized starting at the associated critical depth; this ray can easily reach hundreds of meters with very few hypotheses.
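To make the decision rule concrete, here is a minimal sketch in Python. It assumes a pinhole model with known focal length and lumps the pixel noise together with the current extrinsic uncertainty into a single disparity uncertainty; all names and parameter values below are illustrative, not taken from the actual implementation.

def critical_disparity(sigma_disparity_px, n_sigma=3.0):
    # Disparity below which triangulation is considered unreliable.
    # sigma_disparity_px lumps pixel noise and the current 1-sigma
    # extrinsic uncertainty projected into disparity space; both
    # shrink as the EKF refines the extrinsic parameters.
    return n_sigma * sigma_disparity_px

def initialize_landmark(disparity_px, focal_px, baseline_m, sigma_disparity_px):
    d_crit = critical_disparity(sigma_disparity_px)
    if disparity_px > d_crit:
        # Disparity is informative: full 3D initialization from both views.
        depth_m = focal_px * baseline_m / disparity_px
        return ("point", depth_m)
    # Disparity too small: initialize a ray starting at the critical depth.
    critical_depth_m = focal_px * baseline_m / d_crit
    return ("ray", critical_depth_m)

# Example with a 33 cm baseline and ~1000 px focal length:
print(initialize_landmark(8.0, 1000.0, 0.33, 1.0))   # ('point', 41.25)
print(initialize_landmark(1.5, 1000.0, 0.33, 1.0))   # ('ray', 110.0)

With these illustrative numbers the critical depth lands near 100 m, which matches the order of magnitude quoted in the next paragraph.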

In my experiments the two cameras are separated by 33 cm. At the beginning, the orientation of one camera with respect to the other is only coarsely known. As the filter runs, this precision increases. The depth observability rapidly reaches 100 m. Beyond this, rays are used to initialize landmarks. With just 3 hypotheses, one reaches a distance of 1.5 km.
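As a rough sanity check of those figures, the sketch below counts the ray members needed to span from the critical depth out to 1.5 km, assuming they are placed in geometric progression along the ray (the geometric ratio is my choice for illustration; the actual member placement may differ).

def ray_hypotheses(s_min_m, s_max_m, beta=4.0):
    # Place ray members at depths s_min, s_min*beta, s_min*beta^2, ...
    # until the target depth range is covered, and count them.
    depths = [s_min_m]
    while depths[-1] < s_max_m:
        depths.append(depths[-1] * beta)
    return len(depths), depths

# Starting at a ~100 m critical depth with a geometric ratio of 4,
# three members already reach 1.6 km, consistent with the 1.5 km figure.
n, depths = ray_hypotheses(100.0, 1500.0)
print(n, depths)   # 3 [100.0, 400.0, 1600.0]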

In this indoor experiment, only one ray is initialized, at the first frame (the first landmark seen). From then on, all landmarks are fully 3D initialized. See the Bi-Cam algorithm in operation in this video (.mov).

Next, I'm going to run the same algorithm in an outdoor experiment to see how it behaves.