Computer Vision Group
TUM School of Computation, Information and Technology
Technical University of Munich

DH3D: Deep Hierarchical 3D Descriptors for Robust Large-Scale 6DoF Relocalization


For relocalization in large-scale point clouds, we propose the first approach that unifies global place recognition and local 6DoF pose refinement. To this end, we design a Siamese network that jointly learns 3D local feature detection and description directly from raw 3D points. It integrates FlexConv and Squeeze-and-Excitation (SE) to ensure that the learned local descriptor captures multi-level geometric information and channel-wise relations. For detecting 3D keypoints we predict the discriminativeness of the local descriptors in an unsupervised manner. We generate the global descriptor by directly aggregating the learned local descriptors with an effective attention mechanism. In this way, local and global 3D descriptors are inferred in one single forward pass. Experiments on various benchmarks demonstrate that our method achieves competitive results for both global point cloud retrieval and local point cloud registration in comparison to state-of-the-art approaches. To validate the generalizability and robustness of our 3D keypoints, we demonstrate that our method also performs favorably without fine-tuning on registration of point clouds that were generated by a visual SLAM system.
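The attention-based aggregation described above can be illustrated with a minimal NumPy sketch: each local descriptor receives an attention weight, and the global descriptor is the normalized weighted sum. Note that this is a simplified stand-in; the scoring function here is a plain linear projection with random weights, not the learned attention network from the paper.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def aggregate_global_descriptor(local_desc, score_w):
    """Aggregate N local descriptors (N x d) into one global
    descriptor (d,) via attention weights, then L2-normalize."""
    scores = local_desc @ score_w       # (N,) attention logits per descriptor
    weights = softmax(scores)           # (N,) attention weights summing to 1
    g = weights @ local_desc            # (d,) attention-weighted sum
    return g / np.linalg.norm(g)        # unit-length global descriptor

rng = np.random.default_rng(0)
local_desc = rng.normal(size=(128, 64))  # 128 local descriptors, 64-dim (illustrative sizes)
score_w = rng.normal(size=64)            # stand-in linear scoring weights
g = aggregate_global_descriptor(local_desc, score_w)
print(g.shape)  # (64,)
```

Because the aggregation reuses the local descriptors, the local and global representations come out of the same forward pass, which is the key efficiency point of the hierarchical design.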


  • [11.11.2020] For ease of comparison, we have uploaded the numbers used to draw the plots in the paper (download).

ECCV Spotlight Presentation

The video includes audio.


If you find our work useful in your research, please consider citing:

    @inproceedings{du2020dh3d,
      title={DH3D: Deep Hierarchical 3D Descriptors for Robust Large-Scale 6DoF Relocalization},
      author={Du, Juan and Wang, Rui and Cremers, Daniel},
      booktitle={European Conference on Computer Vision (ECCV)},
      year={2020}
    }


Code and the pre-trained models can be accessed from the GitHub page.


Our model is mainly trained and tested on the LiDAR point clouds from the Oxford RobotCar dataset. To test the generalization capability, two extra datasets are used, namely ETH (LiDAR point clouds from two sequences, gazebo_winter and wood_autumn) and Oxford RobotCar Stereo DSO (point clouds generated by running Stereo DSO). As our method can be used for global place recognition (retrieval) and local pose refinement (regression), the corresponding datasets are denoted by "global" and "local" respectively. For more details on how the datasets are generated, please refer to the beginning of Section 4 in the main paper and Sections 4.2 and 4.4 in the supplementary material. For examples of how to train or test our model on these datasets, please refer to the GitHub page.

Dataset | Point Clouds | Ground Truth
Oxford RobotCar (local and global) | zip (3.1GB) | local_gt.pickle, global_gt.pickle
Oxford RobotCar (local) | zip (294MB) | gt.txt
Oxford RobotCar (global) | zip (740MB) | query_gt.pickle, reference_gt.pickle

Generalization Testing
ETH gazebo_winter (local) | zip (11MB) | gt.txt
ETH wood_autumn (local) | zip (16MB) | gt.txt
Oxford RobotCar Stereo DSO (local) | zip (44MB) | gt.txt
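The ground-truth files listed above are standard Python pickles. A minimal sketch for loading one is shown below; the internal structure of the released files is not documented on this page, so the dummy dict here is only a stand-in for demonstration, and you should inspect the real contents after loading.

```python
import os
import pickle
import tempfile

def load_ground_truth(path):
    """Load a ground-truth pickle file (e.g. global_gt.pickle)."""
    with open(path, "rb") as f:
        return pickle.load(f)

# Demonstration with a dummy pickle standing in for a downloaded file;
# replace `tmp` with your local path after downloading and unzipping.
dummy = {"queries": [0, 1, 2]}  # hypothetical contents, for illustration only
tmp = os.path.join(tempfile.mkdtemp(), "global_gt.pickle")
with open(tmp, "wb") as f:
    pickle.dump(dummy, f)

gt = load_ground_truth(tmp)
print(type(gt).__name__)  # dict
```

After loading a real file, `type(gt)` and (for a dict) `list(gt)[:5]` are quick ways to discover the stored structure.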

Qualitative Results

Results on LiDAR points of Oxford RobotCar

Other Materials



Conference and Workshop Papers
SOE-Net: A Self-Attention and Orientation Encoding Network for Point Cloud based Place Recognition (Y. Xia, Y. Xu, S. Li, R. Wang, J. Du, D. Cremers and U. Stilla), In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021. ([arxiv]) [bibtex] Oral Presentation
DH3D: Deep Hierarchical 3D Descriptors for Robust Large-Scale 6DoF Relocalization (J. Du, R. Wang and D. Cremers), In European Conference on Computer Vision (ECCV), 2020. ([project page][code][supplementary][arxiv]) [bibtex] [pdf] Spotlight Presentation
Stereo DSO: Large-Scale Direct Sparse Visual Odometry with Stereo Cameras (R. Wang, M. Schwörer and D. Cremers), In International Conference on Computer Vision (ICCV), 2017. ([supplementary][video][arxiv][project]) [bibtex] [pdf]

Informatik IX
Computer Vision Group

Boltzmannstrasse 3
85748 Garching
info@vision.in.tum.de
