===== Deep Virtual Stereo Odometry: Leveraging Deep Depth Prediction for Monocular Direct Sparse Odometry =====
**Contact:**
==== Abstract ====
Monocular visual odometry approaches that purely rely on geometric cues are prone to scale drift and require sufficient motion parallax in successive frames for motion estimation and 3D reconstruction. In this paper, we propose to leverage deep monocular depth prediction to overcome limitations of geometry-based monocular visual odometry. To this end, we incorporate deep depth predictions into Direct Sparse Odometry (DSO) as direct virtual stereo measurements. For depth prediction, we design a novel deep network that refines predicted depth from a single image in a two-stage process. We train our network in a semi-supervised way on photoconsistency in stereo images and on consistency with accurate sparse depth reconstructions from Stereo DSO. Our deep predictions outperform state-of-the-art approaches for monocular depth on the KITTI benchmark. Moreover, our Deep Virtual Stereo Odometry clearly exceeds previous monocular and deep-learning-based methods in accuracy. It even achieves performance comparable to state-of-the-art stereo methods, while relying on only a single camera.
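One part of the semi-supervised training signal mentioned above, photoconsistency between rectified stereo images, can be illustrated with a minimal NumPy sketch. The function name, single-channel images, nearest-neighbour warping, and plain L1 penalty are simplifying assumptions for illustration; a real implementation would use differentiable bilinear sampling inside the network together with the additional loss terms:

```python
import numpy as np

def photoconsistency_loss(left, right, disparity):
    """Warp the right image into the left view using the predicted
    per-pixel disparity, then compare against the actual left image.

    For rectified stereo, the left pixel (v, u) corresponds to the
    right pixel (v, u - d). Images are 2-D float arrays (H, W);
    disparity has the same shape. Illustrative sketch only.
    """
    h, w = left.shape
    rows = np.arange(h)[:, None].repeat(w, axis=1)
    cols = np.arange(w)[None, :].repeat(h, axis=0)
    # nearest-neighbour source column in the right image, clipped to bounds
    src = np.clip(np.round(cols - disparity).astype(int), 0, w - 1)
    warped = right[rows, src]
    # mean absolute photometric error between warped and observed left view
    return np.abs(left - warped).mean()
```

A correct disparity map yields a low photometric error (up to occlusions and image borders), so minimising this loss over the network's predicted disparities provides supervision without ground-truth depth.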
==== Results ====
We quantitatively evaluate our StackNet against other state-of-the-art monocular depth prediction methods on the publicly available KITTI dataset. For DVSO, we evaluate its tracking accuracy on the KITTI odometry benchmark against other state-of-the-art monocular as well as stereo visual odometry systems.
=== Monocular Depth Estimation ===
+ | |||
+ | {{: | ||
+ | |||
+ | {{: | ||
+ | |||
=== Monocular Visual Odometry ===
+ | |||
+ | {{: | ||
+ | |||
+ | {{: | ||
+ | |||
+ | {{: | ||
+ | |||
==== Download ====
Trajectories of DVSO on KITTI 00-10:
Depth estimations of StackNet on the test set of the Eigen split:
==== Publications ====