members:wangr [2018/07/28 10:26] Rui Wang
  * ** Stereo DSO ** This video shows some results of our paper "Stereo DSO: Large-Scale Direct Sparse Visual Odometry with Stereo Cameras" accepted by ICCV 2017. ([[https://vision.in.tum.de/research/vslam/stereo-dso|Project Page]])
<html><center><iframe width="640" height="360" src="https://www.youtube.com/embed/A53vJO8eygw" frameborder="0" allowfullscreen></iframe></center></html>
<html><br /></html>
  * ** SLAM extension to Stereo DSO ** After the ICCV 2017 deadline, we extended our method to a full SLAM system with additional components for map maintenance, loop detection and loop closure. This further improves our performance on KITTI, as the plots in the video show. ([[https://vision.in.tum.de/research/vslam/stereo-dso|Project Page]])
<html><center><iframe width="640" height="360" src="https://www.youtube.com/embed/BxTLhubqEKg" frameborder="0" allowfullscreen></iframe></center></html>
<html><br /></html>
=== Deep Learning Boosted VO / SLAM ===
  * ** Deep Virtual Stereo Odometry (DVSO) ** In this project we design a novel deep network and train it in a semi-supervised way to predict depth maps from single images, and integrate the predicted depth map into DSO as a virtual stereo measurement. Despite being a monocular VO approach, DVSO achieves performance comparable to the state-of-the-art stereo methods. (Project Page coming soon)
<html><center><iframe width="640" height="360" src="https://www.youtube.com/embed/sLZOeC9z_tw" frameborder="0" allowfullscreen></iframe></center></html>
<html><br /></html>
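Conceptually, the "virtual stereo" idea can be sketched in a few lines (a simplified illustration under assumed names, not the system's actual code): a network-predicted depth map is converted into the disparity a stereo rig with focal length ''fx'' and baseline ''b'' would observe, and photometric residuals are formed against the virtual second view. The nearest-neighbour sampling below is an illustrative shortcut; a real direct method uses subpixel interpolation.

```python
import numpy as np

def depth_to_virtual_disparity(depth, fx, baseline):
    """Convert a predicted depth map (meters) into the disparity (pixels)
    a stereo camera with focal length fx and baseline would observe."""
    return fx * baseline / np.maximum(depth, 1e-6)

def virtual_stereo_residuals(left, virtual_right, depth, fx, baseline):
    """Photometric residuals between the left image and a virtual right
    view sampled at the disparity-shifted pixel positions
    (nearest-neighbour sampling, purely for brevity)."""
    h, w = left.shape
    disp = depth_to_virtual_disparity(depth, fx, baseline)
    # Shift each column by its (rounded) disparity, clamped to the image.
    xs = np.clip(np.arange(w)[None, :] - np.round(disp).astype(int), 0, w - 1)
    ys = np.arange(h)[:, None].repeat(w, axis=1)
    return left - virtual_right[ys, xs]
```

With a perfect depth prediction the residuals vanish, which is exactly the signal a direct VO backend can minimize.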
=== Camera Calibration ===
  * ** Online Photometric Calibration ** We have developed an approach for online photometric calibration, which recovers the exposure times of consecutive frames, the camera response function, and the camera vignetting factors in real time. Experiments show that our estimates converge to the ground truth after only a few seconds. Our approach can be used either offline for calibrating existing datasets, or online in combination with state-of-the-art direct visual odometry or SLAM pipelines. For more details please see our paper "Online Photometric Calibration of Auto Exposure Video for Realtime Visual Odometry and SLAM". ([[https://vision.in.tum.de/research/vslam/photometric-calibration|Project Page]])
<html><center><iframe width="640" height="360" src="https://www.youtube.com/embed/nQHMG0c6Iew" frameborder="0" allowfullscreen></iframe></center></html>
<html><br /></html>
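As background on what is being calibrated, the standard photometric image formation model is O = f(e · V(x) · B(x)): scene radiance B is attenuated by the vignette V, scaled by the exposure time e, and mapped through the camera response f. The sketch below (a simplified illustration with an assumed gamma-curve response, not the paper's estimation code) shows the forward model and the photometric correction that inverts it once the calibration is known.

```python
import numpy as np

def apply_photometric_model(radiance, exposure, vignette, response_gamma=2.2):
    """Forward model O = f(e * V(x) * B(x)); a gamma curve stands in for
    the camera response f, purely for illustration."""
    irradiance = np.clip(exposure * vignette * radiance, 0.0, 1.0)
    return irradiance ** (1.0 / response_gamma)

def undo_photometric_model(observed, exposure, vignette, response_gamma=2.2):
    """Photometric correction: invert the response, then divide out the
    vignette and exposure to recover the scene radiance B."""
    irradiance = observed ** response_gamma
    return irradiance / (exposure * np.maximum(vignette, 1e-6))
```

Online calibration estimates e, V and f jointly from the video itself; once recovered, corrected frames obey the brightness-constancy assumption that direct VO and SLAM rely on.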
==== Master Theses / IDP / Guided Research ====