Computer Vision Group
TUM School of Computation, Information and Technology
Technical University of Munich


News

26.02.2025

We have twelve papers accepted to CVPR 2025. Check our publication page for more details.

24.10.2024

LSD-SLAM received the ECCV 2024 Koenderink Award for standing the test of time.

03.07.2024

We have seven papers accepted to ECCV 2024. Check our publication page for more details.

09.06.2024

We are organizing GCPR / VMV 2024 this fall.

04.03.2024

We have twelve papers accepted to CVPR 2024. Check our publication page for more details.

Ganlin Zhang

PhD Student

Technical University of Munich
School of Computation, Information and Technology
Informatics 9
Boltzmannstrasse 3
85748 Garching
Germany

Fax: +49-89-289-17757
Office: 02.09.044
Mail: ganlin.zhang@tum.de

Personal homepage: https://ganlinzhang.xyz

Research Interests

3D Vision, Visual SLAM, Structure from Motion, 3D Reconstruction.

Open Research Projects

Sequential 3D Reconstruction with 3D Foundation Models and a Compact Scene Representation

Recent 3D foundation models (e.g., DUSt3R, VGGT) have demonstrated strong performance in reconstructing 3D scenes from RGB images. Follow-up works such as Spann3r and CUT3R have extended these approaches to sequential image data. However, most existing methods use per-frame point clouds as the scene representation, which leads to redundancy in overlapping regions.

This project aims to explore more compact scene representations, such as 3D Gaussian Splatting (3DGS), to reduce reconstruction redundancy and improve efficiency for sequential image data.
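
To make the redundancy issue concrete, the sketch below shows one possible way to merge per-frame point predictions into a single set of Gaussian primitives, skipping points that fall into regions already covered by earlier frames. This is a hypothetical illustration (the CompactGaussianScene class and its voxel-hash deduplication are assumptions for this sketch, not part of DUSt3R, VGGT, Spann3r, CUT3R, or any 3DGS codebase); a real system would additionally optimize scales, rotations, and opacities photometrically, as in 3DGS.

```python
# Minimal sketch: fuse per-frame point predictions into one compact set of
# Gaussian primitives, using a voxel hash to avoid re-adding already-covered
# regions. Hypothetical example code, not an existing implementation.
import torch


class CompactGaussianScene:
    def __init__(self, voxel_size: float = 0.05):
        self.voxel_size = voxel_size
        self.means = torch.empty(0, 3)   # Gaussian centers
        self.colors = torch.empty(0, 3)  # per-Gaussian RGB
        self.scales = torch.empty(0, 3)  # crude per-Gaussian scale initialization
        self._occupied = set()           # voxel keys already covered by a Gaussian

    def _voxel_keys(self, points: torch.Tensor):
        # Quantize points to integer voxel indices; used only for redundancy checks.
        return [tuple(v) for v in torch.floor(points / self.voxel_size).long().tolist()]

    def integrate_frame(self, points: torch.Tensor, colors: torch.Tensor) -> int:
        """Merge one frame's point prediction; returns the number of new Gaussians."""
        keep = []
        for i, key in enumerate(self._voxel_keys(points)):
            if key not in self._occupied:
                self._occupied.add(key)  # also deduplicates within the frame
                keep.append(i)
        if not keep:
            return 0
        idx = torch.tensor(keep)
        self.means = torch.cat([self.means, points[idx]])
        self.colors = torch.cat([self.colors, colors[idx]])
        # Initialize new Gaussians at roughly voxel size; a full pipeline would
        # refine scales/rotations/opacities against the input images.
        self.scales = torch.cat([self.scales, torch.full((len(keep), 3), self.voxel_size)])
        return len(keep)


if __name__ == "__main__":
    scene = CompactGaussianScene(voxel_size=0.05)
    for _ in range(3):  # three heavily overlapping "frames" of random points
        pts, cols = torch.rand(10_000, 3), torch.rand(10_000, 3)
        print("added", scene.integrate_frame(pts, cols), "of", len(pts), "points")
```

After the first frame, subsequent overlapping frames contribute far fewer primitives, which is the kind of compactness this project targets for sequential image data.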

Preferred Requirements

  • Strong research motivation, with an interest in producing publishable work.
  • Solid background in 3D computer vision and multi-view geometry, preferably with related project experience.
  • Proficiency in Python and PyTorch.
  • Prior experience with SLAM/SfM, 3DGS, or 3D foundation models is a plus.

If you are interested, please contact me via email with your CV and academic transcript.

Publications



Preprints
2024
GlORIE-SLAM: Globally Optimized RGB-only Implicit Encoding Point Cloud SLAM (G Zhang, E Sandström, Y Zhang, M Patel, L Van Gool and MR Oswald), In arXiv preprint arXiv:2403.19549, 2024. ([project], [code]) [bibtex] [arXiv:2403.19549]
Conference and Workshop Papers
2025
Back on Track: Bundle Adjustment for Dynamic Scene Reconstruction (W Chen, G Zhang, F Wimbauer, R Wang, N Araslanov, A Vedaldi and D Cremers), In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2025. ([project page]) [bibtex] [arXiv:2504.14516] Best Paper Candidate
Splat-SLAM: Globally Optimized RGB-only SLAM with 3D Gaussians (E Sandström, G Zhang, K Tateno, M Oechsle, M Niemeyer, Y Zhang, M Patel, L Van Gool, M Oswald and F Tombari), In Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR) Workshops, 2025. ([code]) [bibtex]
2023
Revisiting Rotation Averaging: Uncertainties and Robust Losses (G Zhang, V Larsson and D Barath), In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2023. ([code]) [bibtex] [arXiv:2303.05195]
