research:vslam [2015/05/20 00:22] Prof. Dr. Jörg Stückler
research:vslam [2020/10/15 14:44] schubdav
{{ :research:vslam:demo_slam_reduced_v2.mp4 |}}~~META:
tag=vslam
~~
====== Visual SLAM ======
In **S**imultaneous **L**ocalization **A**nd **M**apping (SLAM), we track the pose of the sensor while creating a map of the environment. Our group has a strong focus on direct methods which, in contrast to the classical pipeline of feature extraction and matching, directly optimize intensity errors for both tracking and mapping.

<html><center></html>{{:research:lsdslam:directvskp.png?500|}}<html></center><br></html>

Our experience covers various sensor modalities, such as monocular, stereo, and RGB-D cameras, as well as visual-inertial setups. Besides our research on new methods, we provide public datasets for evaluation.

If you are interested in the wider area of SLAM, chances are that our interests overlap with yours. We would be happy to hear from you!
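As a toy illustration of this direct principle, the sketch below recovers a 2-D translation between two images purely by minimizing the photometric error over candidate shifts, with no keypoint extraction or matching. It is a hedged, made-up example: real direct methods run Gauss-Newton over full camera poses on image pyramids rather than an exhaustive 2-D search.

```python
import numpy as np

def photometric_error(ref, tgt, dx, dy):
    """Mean squared intensity difference on the overlap when tgt is
    hypothesized to be ref shifted by (dx, dy) pixels."""
    h, w = ref.shape
    ys = np.arange(max(dy, 0), h + min(dy, 0))
    xs = np.arange(max(dx, 0), w + min(dx, 0))
    diff = tgt[np.ix_(ys, xs)] - ref[np.ix_(ys - dy, xs - dx)]
    return np.mean(diff ** 2)

def align_by_intensity(ref, tgt, max_shift=4):
    """Pick the shift with the lowest photometric error (toy version:
    exhaustive search instead of Gauss-Newton over camera poses)."""
    candidates = [(dx, dy)
                  for dx in range(-max_shift, max_shift + 1)
                  for dy in range(-max_shift, max_shift + 1)]
    return min(candidates, key=lambda s: photometric_error(ref, tgt, *s))

# Synthetic check: shift a random image and recover the shift.
rng = np.random.default_rng(0)
ref = rng.random((32, 32))
tgt = np.roll(ref, shift=(-2, 3), axis=(0, 1))  # dy = -2, dx = 3
print(align_by_intensity(ref, tgt))  # recovers (3, -2)
```

On the overlap region the shifted image matches the reference exactly, so the true shift attains zero error and wins the search.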
<html><video width="100%" autoplay loop muted controls poster="/_media/research/vslam/thumb.jpg">
<source src="/_media/research/vslam/demo_slam_reduced_v2.mp4" type="video/mp4" />
</video></html>

===== Direct SLAM for Monocular and Stereo Cameras =====

**[[research:vslam:lsdslam|LSD-SLAM]]** is a direct SLAM technique for monocular and stereo cameras. The camera is tracked using **direct image alignment**, while geometry is estimated in the form of **semi-dense depth maps**, obtained by **filtering** over many pixelwise stereo comparisons. We then build a **Sim(3) pose graph of keyframes**, which lets us build scale-drift-corrected, large-scale maps including loop closures.
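The filtering step can be pictured per pixel: each small-baseline stereo comparison yields a noisy inverse-depth observation, and successive observations are fused by precision weighting. The class below is a minimal sketch under an assumed Gaussian noise model; the names and numbers are illustrative and not taken from the LSD-SLAM implementation.

```python
import numpy as np

class InverseDepthFilter:
    """Per-pixel 1-D filter: fuse noisy inverse-depth observations
    (one per pixelwise stereo comparison) by precision weighting."""

    def __init__(self, d0, var0):
        self.d = d0      # current inverse-depth estimate
        self.var = var0  # current estimate variance

    def update(self, obs, obs_var):
        # Standard 1-D Kalman update with identity dynamics:
        # the posterior mean is the precision-weighted average.
        k = self.var / (self.var + obs_var)
        self.d += k * (obs - self.d)
        self.var *= (1.0 - k)
        return self.d, self.var

# Fuse 200 noisy observations of a pixel whose true inverse depth is 0.5.
rng = np.random.default_rng(1)
f = InverseDepthFilter(d0=1.0, var0=1.0)
for _ in range(200):
    f.update(0.5 + 0.05 * rng.standard_normal(), obs_var=0.05 ** 2)
# f.d is now close to 0.5, and f.var has shrunk far below the
# single-observation variance of 0.05**2.
```

With every update the variance shrinks, which is why pixels with many stereo comparisons end up with well-constrained depth while poorly observed pixels stay uncertain.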
<html><center><iframe width="640" height="360" src="//www.youtube.com/embed/GnuQzP3gty4" frameborder="0" allowfullscreen></iframe></center></html>

==== Contact ====
<memberlist>
<dokuwiki>
<filter>
<grps>^vslam$</grps>
</filter>
<user>cremers</user>
</dokuwiki>
</memberlist>
===== Direct SLAM for RGB-D Cameras =====

For **[[research:rgb-d_sensors_kinect|SLAM with RGB-D cameras]]** (RGB-D SLAM), we developed a method that also tracks the camera using **direct image alignment**. We optimize an **SE(3) pose graph of keyframes** to find a globally consistent trajectory and alignment of the images.

<html>
<iframe width="560" height="315" src="//www.youtube-nocookie.com/embed/jNbYcw_dmcQ" frameborder="0" allowfullscreen></iframe>
</html>

==== Related publications ====
<bibtex>
<keywords>slam</keywords>
<bytype>-1</bytype>
</bibtex>
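To make the pose-graph idea concrete, here is a hedged toy sketch in one dimension: keyframe "poses" are scalars, odometry edges and one loop closure are relative measurements, and the graph is solved in closed form by linear least squares. Actual RGB-D SLAM optimizes SE(3) poses with a nonlinear solver, but the node/edge structure is the same; all numbers below are made up for illustration.

```python
import numpy as np

def optimize_pose_graph_1d(n_poses, edges):
    """edges: (i, j, z) meaning 'measured x_j - x_i = z'.
    Pose 0 is pinned to the origin to remove the gauge freedom."""
    rows, rhs = [], []
    for i, j, z in edges:
        r = np.zeros(n_poses)
        r[j], r[i] = 1.0, -1.0   # residual: (x_j - x_i) - z
        rows.append(r)
        rhs.append(z)
    anchor = np.zeros(n_poses)   # extra row fixing x_0 = 0
    anchor[0] = 1.0
    rows.append(anchor)
    rhs.append(0.0)
    x, *_ = np.linalg.lstsq(np.vstack(rows), np.array(rhs), rcond=None)
    return x

# Drifting odometry says each step is 1.05, but a loop-closure edge
# measures the direct offset from pose 0 to pose 3 as 3.0.
edges = [(0, 1, 1.05), (1, 2, 1.05), (2, 3, 1.05), (0, 3, 3.0)]
x = optimize_pose_graph_1d(4, edges)
# x[3] ends up between the drifted 3.15 and the loop-closure 3.0,
# with the correction spread evenly along the trajectory.
```

The loop closure does not simply overwrite the last pose; least squares distributes the accumulated drift over all edges, which is exactly the effect pose-graph optimization has on a keyframe trajectory.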