research:vslam:dso [2018/01/06 23:24] Rui Wang
<html><center><iframe width="640" height="360" src="//www.youtube.com/embed/C6-xwSOOdqQ" frameborder="0" allowfullscreen></iframe></center></html>

===== Abstract =====
**DSO** is a novel //direct// and //sparse// formulation for Visual Odometry.
It combines a fully direct probabilistic model (minimizing a photometric error) with consistent, joint optimization of all model parameters, including geometry (represented as inverse depth in a reference frame) and camera motion.
We thoroughly evaluate our method on three different datasets comprising several hours of video. The experiments show that the presented approach significantly outperforms state-of-the-art direct and indirect methods in a variety of real-world settings, both in terms of tracking accuracy and robustness.
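The core of the photometric model can be illustrated with a short sketch: a single pixel with known inverse depth is warped from the reference into the target frame, and the brightness difference is taken after DSO's affine brightness correction (e^a, b). This is a minimal illustrative Python sketch, not the released C++ implementation; the function names are made up here, and DSO itself evaluates an 8-pixel residual pattern with sub-pixel interpolation rather than a single nearest-neighbour lookup.

```python
import numpy as np

def project(K, K_inv, R, t, u_ref, inv_depth):
    """Warp a reference pixel into the target frame via a pinhole model
    (hypothetical helper for illustration)."""
    p_ref = K_inv @ np.array([u_ref[0], u_ref[1], 1.0]) / inv_depth  # back-project
    p_tgt = R @ p_ref + t                                            # rigid body motion
    u = K @ (p_tgt / p_tgt[2])                                       # re-project
    return u[:2]

def photometric_residual(I_ref, I_tgt, K, R, t, u_ref, inv_depth, a=0.0, b=0.0):
    """Single-pixel photometric error r = I_tgt[u'] - e^a * I_ref[u] - b,
    i.e. the brightness difference after warping, with an affine brightness model."""
    u_tgt = project(K, np.linalg.inv(K), R, t, u_ref, inv_depth)
    x, y = int(round(u_tgt[0])), int(round(u_tgt[1]))  # nearest-neighbour lookup
    return float(I_tgt[y, x]) - np.exp(a) * float(I_ref[u_ref[1], u_ref[0]]) - b
```

DSO minimizes the sum of such (robustly weighted) residuals jointly over the camera poses, affine brightness parameters, and inverse depths of all points in the active window.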

===== Dataset =====
Please see [[:data:datasets:mono-dataset | here]] for the TUM monoVO dataset, used for large parts of the evaluation and the above video.
It contains over 2h of video together with the corresponding evaluation/benchmarking metrics and tools.

===== Supplementary Material =====
Supplementary material with all ORB-SLAM and DSO results presented in the paper can be downloaded here: [[http://vision.in.tum.de/mono/supp_v2.zip | zip (2.7GB)]]. We further provide ready-to-use Matlab scripts to reproduce all plots in the paper from the above archive, which can be downloaded here: [[http://vision.in.tum.de/mono/evaluation_code_v2.zip | zip (30MB)]].

14.10.2016: We have updated the supplementary material with the fixed real-time results for ORB-SLAM, corresponding to the revised version of the paper.

===== Open-Source Code =====
The full source code is available on GitHub under GPLv3:
[[https://github.com/JakobEngel/dso | https://github.com/JakobEngel/dso]]

====== Stereo DSO: Large-Scale Direct Sparse Visual Odometry with Stereo Cameras ======

**Contact:** [[members:wangr]], [[members:cremers|Prof. Daniel Cremers]]

<html><center><iframe width="640" height="360" src="https://www.youtube.com/embed/BxTLhubqEKg" frameborder="0" allowfullscreen></iframe></center></html>

===== Abstract =====
**Stereo DSO** is a novel method for highly accurate real-time visual odometry estimation in large-scale environments from stereo cameras. It jointly optimizes all model parameters within the active window, including the intrinsic/extrinsic camera parameters of all keyframes and the depth values of all selected pixels. In particular, it integrates constraints from static stereo into the bundle adjustment pipeline of temporal multi-view stereo. Real-time optimization is realized by sampling pixels uniformly from image regions with sufficient intensity gradient. Fixed-baseline stereo resolves scale drift. It also reduces the sensitivity to large optical flow and to rolling shutter effects, which are known shortcomings of direct image alignment methods. Quantitative evaluation demonstrates that the proposed Stereo DSO outperforms existing state-of-the-art visual odometry methods in terms of both tracking accuracy and robustness. Moreover, our method delivers a more precise metric 3D reconstruction than previous dense/semi-dense direct approaches while providing a higher reconstruction density than feature-based methods.
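The selection step ("sampling pixels uniformly from image regions with sufficient intensity gradient") can be sketched as follows: partition the image into blocks and keep the strongest-gradient pixel of each block, provided it is sufficiently textured. This is a simplified Python illustration under assumed names (`select_pixels`, `block`, `grad_thresh` are made up here), not the adaptive point selector used in the released code.

```python
import numpy as np

def select_pixels(img, block=32, grad_thresh=10.0):
    """Pick at most one high-gradient pixel per block x block region,
    giving a spatially uniform yet texture-aware sampling."""
    gy, gx = np.gradient(img.astype(float))   # finite-difference image gradients
    mag = np.sqrt(gx**2 + gy**2)              # gradient magnitude
    picks = []
    H, W = img.shape
    for y0 in range(0, H - block + 1, block):
        for x0 in range(0, W - block + 1, block):
            patch = mag[y0:y0 + block, x0:x0 + block]
            dy, dx = np.unravel_index(np.argmax(patch), patch.shape)
            if patch[dy, dx] >= grad_thresh:  # keep only sufficiently textured points
                picks.append((y0 + dy, x0 + dx))
    return picks
```

Sampling per block keeps the selected points spread evenly over the image, which stabilizes the subsequent windowed bundle adjustment; blocks without texture (e.g. sky, road surface) contribute no points.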

===== Results =====
For this work we use the [[http://www.cvlibs.net/datasets/kitti/eval_odometry.php | KITTI Visual Odometry Benchmark]] and the Frankfurt sequence of the [[https://www.cityscapes-dataset.com/ | Cityscapes Dataset]] for evaluation. The full evaluation results can be found in the supplementary material of our ICCV 2017 paper. Below we show some representative results.