Computer Vision Group
TUM School of Computation, Information and Technology
Technical University of Munich

Informatik IX
Computer Vision Group

Boltzmannstrasse 3
85748 Garching
info@vision.in.tum.de


News

24.10.2024

LSD SLAM received the ECCV 2024 Koenderink Award for standing the Test of Time.

03.07.2024

We have seven papers accepted to ECCV 2024. Check our publication page for more details.

09.06.2024

We are organizing GCPR / VMV 2024 this fall.

04.03.2024

We have twelve papers accepted to CVPR 2024. Check our publication page for more details.

18.07.2023

We have four papers accepted to ICCV 2023. Check out our publication page for more details.



Christian Tomani

PhD Student
Technical University of Munich

School of Computation, Information and Technology
Informatics 9
Boltzmannstrasse 3
85748 Garching
Germany

Tel: +49-89-289-17779
Fax: +49-89-289-17757
Office: 02.09.037
Mail: christian.tomani@in.tum.de

Brief Bio

Find me on LinkedIn and Google Scholar.

I am a PhD student at the Technical University of Munich at the Chair of Prof. Daniel Cremers. I received my Master's degree from TUM and my Bachelor's degree from Graz University of Technology, and I studied and conducted research at the University of Oxford, the University of California, Berkeley, and the University of Agder. I have worked as a research intern at Google, Meta, and Siemens.

Research Internships in Industry:

Meta (New York)
Paper: Uncertainty-Based Abstention in LLMs Improves Safety and Reduces Hallucinations (C Tomani, K Chaudhuri, I Evtimov, D Cremers and M Ibrahim), In arXiv preprint, 2024.

Google (San Francisco Bay Area)
Paper: Quality Control at Your Fingertips: Quality-Aware Translation Models (C Tomani, D Vilar, M Freitag, C Cherry, S Naskar, M Finkelstein, X Garcia and D Cremers), ACL, 2024.

Siemens (Munich)
Paper: Towards Trustworthy Predictions from Deep Neural Networks with Fast Adversarial Calibration (C Tomani and F Buettner), AAAI, 2021.

Research Visits in Academia:

University of Oxford
Machine Learning Group - Department of Engineering Science

University of California Berkeley
Artificial Intelligence Research Lab (BAIR) - Redwood Center for Theoretical Neuroscience

My Work

I am interested in developing reliable, robust, and reasoning-based large language models (LLMs) and multimodal models. In particular, I work on enhancing the reasoning ability, safety, and uncertainty awareness of generative models through pre-training and post-training (fine-tuning and alignment to human preferences), with the goal of building grounded world models and improving factuality and trustworthiness.

My work covers a broad spectrum of machine learning and deep learning topics. My projects include reliable, reasoning-based, safe, and uncertainty-aware models for in-domain, domain-shift, and out-of-domain (OOD) scenarios; natural language processing (NLP) and large language models (LLMs), in particular investigating reasoning capabilities and developing reliable LLMs; computer vision; time-series analysis with supervised and self-supervised learning algorithms; recurrent neural networks (RNNs) and Transformer architectures; attribution maps; and designing learning algorithms for generalization.

Publications

  • Quality-Aware Translation Models: Efficient Generation and Quality Estimation in a Single Model, ACL 2024.


  • Beyond In-Domain Scenarios: Robust Density-Aware Calibration, ICML 2023.


  • Parameterized Temperature Scaling for Boosting the Expressive Power in Post-Hoc Uncertainty Calibration, ECCV 2022.


  • Post-hoc Uncertainty Calibration for Domain Drift Scenarios, CVPR 2021, Oral Presentation.


  • Towards Trustworthy Predictions from Deep Neural Networks with Fast Adversarial Calibration, AAAI 2021.

To facilitate a widespread acceptance of AI systems guiding decision making in real-world applications, trustworthiness of deployed models is key. That is, it is crucial for predictive models to be uncertainty-aware and to yield well-calibrated (and thus trustworthy) predictions both for in-domain samples and under domain shift. Recent efforts to account for predictive uncertainty include post-processing steps for trained neural networks, Bayesian neural networks, and non-Bayesian alternatives such as ensembles and evidential deep learning. Here, we propose an efficient yet general modelling approach for obtaining well-calibrated, trustworthy probabilities for samples obtained after a domain shift. We introduce a new training strategy that combines an entropy-encouraging loss term with an adversarial calibration loss term and demonstrate that this results in well-calibrated and technically trustworthy predictions for a wide range of domain drifts. We comprehensively evaluate previously proposed approaches across data modalities, a large range of data sets (including sequence data), network architectures, and perturbation strategies. We observe that our modelling approach substantially outperforms existing state-of-the-art approaches, yielding well-calibrated predictions under domain drift.
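Several of the papers above revolve around calibration: a model is well-calibrated when its predicted confidence matches its empirical accuracy. A standard way to quantify this is the expected calibration error (ECE), which bins predictions by confidence and averages the gap between accuracy and confidence per bin. The sketch below is a minimal NumPy illustration of that metric, not code from any of the papers; the function and variable names are mine.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bin predictions by confidence, then take the weighted
    average of |accuracy - mean confidence| over the bins."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # weight by fraction of samples in bin
    return ece

# Toy example: 80% confidence, 80% empirical accuracy -> perfectly calibrated.
conf = np.array([0.8, 0.8, 0.8, 0.8, 0.8])
corr = np.array([1.0, 1.0, 1.0, 1.0, 0.0])
print(expected_calibration_error(conf, corr))  # 0.0
```

An overconfident model (say, 90% confidence at 50% accuracy) would score an ECE of about 0.4 on this metric; post-hoc calibration methods such as those in the publications below aim to drive this gap toward zero, also under domain shift.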



2024
Preprints
Uncertainty-Based Abstention in LLMs Improves Safety and Reduces Hallucinations (C Tomani, K Chaudhuri, I Evtimov, D Cremers and M Ibrahim), In arXiv preprint, 2024. [bibtex] [arXiv:2404.10960]
Conference and Workshop Papers
Quality-Aware Translation Models: Efficient Generation and Quality Estimation in a Single Model (C Tomani, D Vilar, M Freitag, C Cherry, S Naskar, M Finkelstein, X Garcia and D Cremers), In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL), 2024. [bibtex] [arXiv:2310.06707]
2023
Conference and Workshop Papers
Beyond In-Domain Scenarios: Robust Density-Aware Calibration (C Tomani, F Waseda, Y Shen and D Cremers), In Proceedings of the 40th International Conference on Machine Learning (ICML), 2023. [bibtex] [arXiv:2302.05118]
2022
Preprints
Challenger: Training with Attribution Maps (C Tomani and D Cremers), In arXiv preprint, 2022. [bibtex] [arXiv:2205.15094]
Conference and Workshop Papers
What Makes Graph Neural Networks Miscalibrated? (HHH Hsu, Y Shen, C Tomani and D Cremers), In NeurIPS, 2022. [code] [bibtex] [arXiv:2210.06391]
Parameterized Temperature Scaling for Boosting the Expressive Power in Post-Hoc Uncertainty Calibration (C Tomani, D Cremers and F Buettner), In European Conference on Computer Vision (ECCV), 2022. [bibtex] [arXiv:2102.12182]
2021
Conference and Workshop Papers
Post-hoc Uncertainty Calibration for Domain Drift Scenarios (C Tomani, S Gruber, ME Erdem, D Cremers and F Buettner), In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021. [bibtex] [arXiv:2012.10988] Oral Presentation
Towards Trustworthy Predictions from Deep Neural Networks with Fast Adversarial Calibration (C Tomani and F Buettner), In Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI), 2021. [bibtex] [arXiv:2012.10923]
