
Johannes Michael Meier
PhD Student
Technical University of Munich
School of Computation, Information and Technology
Informatics 9
Boltzmannstrasse 3
85748 Garching
Germany
Fax: +49-89-289-17757
Mail: J.Meier@tum.de
Brief Bio
I received my M.Sc. in Computer Science from the University of Tuebingen. I wrote my master's thesis at Bosch Research in Renningen, supervised by Prof. Dr. Andreas Geiger and Dr. Wieland Brendel of the Max Planck Institute Tuebingen. I am now a Ph.D. student in the Computer Vision Group at TUM, headed by Prof. Dr. Daniel Cremers. My research focuses on 3D object detection, tracking, semi-supervised learning, and domain adaptation.
Master's thesis topics
Are you excited about Computer Vision and Machine Learning? Are you interested in doing high-impact work and submitting it to top conferences?
Then apply for one of the Master’s theses below.
I aim to publish at top conferences such as CVPR, ICCV, and ECCV, and I tackle challenging computer vision problems identified jointly with Deepscenario, a highly innovative start-up in the automotive industry advised by Prof. Daniel Cremers (https://www.deepscenario.com/).
Specific thesis topics:
Master's thesis: Domain Adaptation for Monocular 3D Object Detection in Autonomous Driving
Monocular 3D object detection is a challenging task because it requires models to predict the location, dimensions, and rotation of objects from a single input image. Traditional autonomous driving datasets, such as KITTI, NuScenes, Waymo, and Rope3D, are captured from a car or traffic-camera perspective. This fixed viewpoint can limit the generalization ability of models trained on these datasets: for example, a model trained on car-view data may fail to accurately detect objects from a drone view. In this thesis, we want to answer the following research question: How can we generalize monocular 3D object detection models from a set of training perspectives to a perspective unseen during training?
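To make the task concrete, here is a minimal, illustrative sketch (not part of the thesis itself) of a common parameterization of a monocular 3D detection and of how the camera intrinsics tie the 3D prediction back to the image plane; the class name, field layout, and intrinsic values are assumptions chosen for illustration.

```python
# Illustrative sketch of a 3D detection output and pinhole projection of its center.
# All concrete values (intrinsics, box parameters) are assumed, not from the thesis.
from dataclasses import dataclass
import numpy as np

@dataclass
class Detection3D:
    location: np.ndarray    # object center (x, y, z) in camera coordinates, metres
    dimensions: np.ndarray  # (length, width, height) in metres
    yaw: float              # rotation about the vertical axis, radians
    score: float            # detection confidence

def project_center(det: Detection3D, K: np.ndarray) -> np.ndarray:
    """Project the 3D object center into pixel coordinates with a pinhole camera model."""
    uvw = K @ det.location      # homogeneous image coordinates
    return uvw[:2] / uvw[2]     # perspective division by depth gives (u, v)

# Example with illustrative intrinsics (focal length 720 px, principal point (640, 360)).
K = np.array([[720.0,   0.0, 640.0],
              [  0.0, 720.0, 360.0],
              [  0.0,   0.0,   1.0]])
car = Detection3D(location=np.array([2.0, 1.5, 20.0]),
                  dimensions=np.array([4.5, 1.8, 1.5]),
                  yaw=0.1, score=0.9)
print(project_center(car, K))   # pixel location of the predicted object center
```

The perspective of the training data enters exactly through this geometry: a detector trained only on car-view intrinsics and viewpoints sees a very different distribution of projected boxes than a drone-view camera would produce.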
Master's thesis: Generalized Monocular Depth Prediction for Autonomous Driving
Monocular depth prediction is a key challenge for autonomous driving. Current approaches either predict depth directly or rely on strong assumptions that require the input image to be taken from a specific perspective, which limits their generalization to other viewpoints. This thesis will investigate how to perform accurate depth prediction from any perspective, including car view, traffic view, and drone view.
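As a point of reference, the sketch below shows how a predicted per-pixel depth map is back-projected to a 3D point cloud with the standard pinhole model; the perspective dependence comes in through the camera intrinsics and pose. The intrinsics and depth values are assumed for illustration and are not part of the thesis description.

```python
# Illustrative back-projection of a depth map to camera-frame 3D points (pinhole model).
# Intrinsics and the constant depth map below are assumed example values.
import numpy as np

def backproject(depth: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Turn a per-pixel depth map of shape (H, W) into an (H*W, 3) point cloud."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    z = depth.reshape(-1)
    x = (u.reshape(-1) - K[0, 2]) * z / K[0, 0]
    y = (v.reshape(-1) - K[1, 2]) * z / K[1, 1]
    return np.stack([x, y, z], axis=1)

# Example with a constant 10 m depth map and illustrative intrinsics.
K = np.array([[720.0,   0.0, 640.0],
              [  0.0, 720.0, 360.0],
              [  0.0,   0.0,   1.0]])
points = backproject(np.full((720, 1280), 10.0), K)
print(points.shape)   # (921600, 3)
```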
Publications