Computer Vision Group
TUM School of Computation, Information and Technology
Technical University of Munich

Informatik IX
Computer Vision Group

Boltzmannstrasse 3
85748 Garching
info@vision.in.tum.de

News

03.07.2024

We have seven papers accepted to ECCV 2024. Check our publication page for more details.

09.06.2024
GCPR / VMV 2024

We are organizing GCPR / VMV 2024 this fall.

04.03.2024

We have twelve papers accepted to CVPR 2024. Check our publication page for more details.

18.07.2023

We have four papers accepted to ICCV 2023. Check out our publication page for more details.

02.03.2023

CVPR 2023

We have six papers accepted to CVPR 2023. Check out our publication page for more details.



Christian Tomani

PhD Student, Technical University of Munich

School of Computation, Information and Technology
Informatics 9
Boltzmannstrasse 3
85748 Garching
Germany

Tel: +49-89-289-17779
Fax: +49-89-289-17757
Office: 02.09.037
Mail: christian.tomani@in.tum.de

Brief Bio

Find me on LinkedIn and Google Scholar.

I am a PhD student at the Technical University of Munich at the chair of Prof. Daniel Cremers. I received my Master's degree from TUM and my Bachelor's degree from Graz University of Technology, and I have studied and conducted research at the University of Oxford, the University of California, Berkeley, and the University of Agder. I have worked as a research intern at Google, Meta, and Siemens.

Research Internships in Industry:

Meta (New York)
Paper: Uncertainty-Based Abstention in LLMs Improves Safety and Reduces Hallucinations (C Tomani, K Chaudhuri, I Evtimov, D Cremers and M Ibrahim), In arXiv preprint, 2024.

Google (San Francisco Bay Area)
Paper: Quality Control at Your Fingertips: Quality-Aware Translation Models (C Tomani, D Vilar, M Freitag, C Cherry, S Naskar, M Finkelstein, X Garcia and D Cremers), ACL, 2024.

Siemens (Munich)
Paper: Towards Trustworthy Predictions from Deep Neural Networks with Fast Adversarial Calibration (C Tomani and F Buettner), AAAI, 2021.

Research Visits in Academia:

University of Oxford
Machine Learning Group - Department of Engineering Science

University of California Berkeley
Artificial Intelligence Research Lab (BAIR) - Redwood Center for Theoretical Neuroscience

My Work

I am interested in developing reliable, robust, and reasoning-capable large language models (LLMs) and multimodal models. In particular, I work on improving the reasoning ability, safety, and uncertainty awareness of generative models through pre-training and post-training (fine-tuning and alignment to human preferences), with the goal of building grounded world models and improving factuality and trustworthiness.

My work covers a broad spectrum of machine learning and deep learning topics. My projects include reliable, reasoning-based, safe, and uncertainty-aware models for in-domain, domain-shift, and out-of-domain (OOD) scenarios; natural language processing (NLP) and large language models (LLMs), including investigating reasoning capabilities and developing reliable LLMs; computer vision; time-series analysis with supervised and self-supervised learning algorithms; recurrent neural networks (RNNs) and Transformer architectures; attribution maps; and designing learning algorithms for generalization.
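To give a concrete flavor of the uncertainty-awareness theme: a model can abstain from answering when its predictive distribution is too uncertain. The sketch below uses mean per-token entropy with a hand-picked threshold; this is a simplified illustration of the general idea, not the exact method of any particular paper.

```python
import math

def predictive_entropy(probs):
    """Shannon entropy of a categorical next-token distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def should_abstain(token_distributions, threshold=1.0):
    """Abstain (decline to answer) when the mean per-token entropy
    of the model's output distributions exceeds a threshold.
    The threshold of 1.0 nats is an arbitrary choice for illustration."""
    entropies = [predictive_entropy(d) for d in token_distributions]
    mean_entropy = sum(entropies) / len(entropies)
    return mean_entropy > threshold

# A confident model concentrates mass on one token (low entropy):
confident = [[0.97, 0.01, 0.01, 0.01]] * 3
# An uncertain model spreads mass out (uniform over 4 tokens: ln 4 ≈ 1.39 nats):
uncertain = [[0.25, 0.25, 0.25, 0.25]] * 3

print(should_abstain(confident))  # False: answer
print(should_abstain(uncertain))  # True: abstain
```

In practice the uncertainty signal and the decision rule are learned rather than hand-set, but the abstain-above-a-threshold structure is the same.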

Publications

  • Quality-Aware Translation Models: Efficient Generation and Quality Estimation in a Single Model, ACL 2024.

Maximum-a-posteriori (MAP) decoding is the most widely used decoding strategy for neural machine translation (NMT) models. The underlying assumption is that model probability correlates well with human judgment, with better translations being assigned higher scores by the model. However, research has shown that this assumption does not always hold, and generation quality can be improved by decoding to optimize a utility function backed by a metric or quality-estimation signal, as is done by Minimum Bayes Risk (MBR) or quality-aware decoding. The main disadvantage of these approaches is that they require an additional model to calculate the utility function during decoding, significantly increasing the computational cost. In this paper, we propose to make the NMT models themselves quality-aware by training them to estimate the quality of their own output. Using this approach for MBR decoding, we can drastically reduce the size of the candidate list, resulting in a speed-up of two orders of magnitude. When applying our method to MAP decoding, we obtain quality gains similar or even superior to quality-reranking approaches, but with the efficiency of single-pass decoding.
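The quality-aware MBR idea in the abstract can be sketched as follows. Here the token-overlap utility and the hard-coded quality scores are stand-ins for what the paper actually uses (a learned neural metric, and quality estimates produced by the NMT model itself); the point of the sketch is the prune-then-MBR structure.

```python
def token_f1(hyp, ref):
    """Toy utility: token-level F1 overlap, standing in for a learned
    translation-quality metric (an assumption for illustration)."""
    hyp_t, ref_t = hyp.split(), ref.split()
    common = len(set(hyp_t) & set(ref_t))
    if common == 0:
        return 0.0
    p, r = common / len(hyp_t), common / len(ref_t)
    return 2 * p * r / (p + r)

def mbr_decode(candidates, quality_scores, keep=4, utility=token_f1):
    """Quality-aware MBR: first prune the candidate list using the model's
    own quality estimates, then pick the candidate with the highest expected
    utility against the remaining candidates (used as pseudo-references)."""
    # 1) Prune: keep only the `keep` candidates the model itself rates highest.
    ranked = sorted(zip(candidates, quality_scores), key=lambda x: -x[1])
    pruned = [c for c, _ in ranked[:keep]]
    # 2) MBR over the pruned list: average utility w.r.t. the other candidates.
    def expected_utility(c):
        others = [r for r in pruned if r is not c]
        return sum(utility(c, r) for r in others) / len(others)
    return max(pruned, key=expected_utility)

cands = ["the cat sat on the mat", "a cat sat on a mat",
         "the cat is on the mat", "dogs run fast", "cat mat"]
scores = [0.9, 0.85, 0.8, 0.2, 0.5]  # model's own quality estimates
print(mbr_decode(cands, scores, keep=3))  # "the cat sat on the mat"
```

Because MBR cost grows quadratically in the candidate-list size, shrinking the list with the model's own quality estimates is where the claimed two-orders-of-magnitude speed-up comes from.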

  • Beyond In-Domain Scenarios: Robust Density-Aware Calibration, ICML 2023.


  • Parameterized Temperature Scaling for Boosting the Expressive Power in Post-Hoc Uncertainty Calibration, ECCV 2022.


  • Post-hoc Uncertainty Calibration for Domain Drift Scenarios, CVPR 2021, Oral Presentation.
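Several of the calibration papers above build on temperature scaling, the standard post-hoc calibration baseline: divide the logits by a single scalar T fitted on held-out data, so an overconfident model's probabilities are softened without changing its predictions. A dependency-free sketch (grid search in place of the usual L-BFGS fit):

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax: T > 1 softens, T < 1 sharpens."""
    z = [l / T for l in logits]
    m = max(z)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

def fit_temperature(logit_sets, labels, grid=None):
    """Classic temperature scaling: pick the scalar T that minimizes
    negative log-likelihood on a held-out validation set. A grid search
    keeps this sketch self-contained; in practice T is fit by gradient
    descent or L-BFGS."""
    grid = grid or [0.25 + 0.25 * i for i in range(20)]  # 0.25 .. 5.0
    def nll(T):
        return -sum(math.log(softmax(z, T)[y])
                    for z, y in zip(logit_sets, labels))
    return min(grid, key=nll)

# An overconfident model: top probability ~0.96, but wrong 1 time in 4.
logits_val = [[4.0, 0.0, 0.0]] * 4
labels_val = [0, 0, 0, 1]
T = fit_temperature(logits_val, labels_val)
print(T)  # fitted T > 1: confidence is pulled down toward the accuracy
```

The papers above extend this baseline, e.g. by making the temperature input-dependent (parameterized temperature scaling) or by accounting for domain drift and density, where a single global T is no longer enough.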




2024
Preprints
Uncertainty-Based Abstention in LLMs Improves Safety and Reduces Hallucinations (C Tomani, K Chaudhuri, I Evtimov, D Cremers and M Ibrahim), In arXiv preprint, 2024. [bibtex] [arXiv:2404.10960]
Conference and Workshop Papers
Quality-Aware Translation Models: Efficient Generation and Quality Estimation in a Single Model (C Tomani, D Vilar, M Freitag, C Cherry, S Naskar, M Finkelstein, X Garcia and D Cremers), In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL), 2024. [bibtex] [arXiv:2310.06707]
2023
Conference and Workshop Papers
Beyond In-Domain Scenarios: Robust Density-Aware Calibration (C Tomani, F Waseda, Y Shen and D Cremers), In Proceedings of the 40th International Conference on Machine Learning (ICML), 2023. [bibtex] [arXiv:2302.05118]
2022
Preprints
Challenger: Training with Attribution Maps (C Tomani and D Cremers), In arXiv preprint, 2022. [bibtex] [arXiv:2205.15094]
Conference and Workshop Papers
What Makes Graph Neural Networks Miscalibrated? (HHH Hsu, Y Shen, C Tomani and D Cremers), In NeurIPS, 2022. [code] [bibtex] [arXiv:2210.06391]
Parameterized Temperature Scaling for Boosting the Expressive Power in Post-Hoc Uncertainty Calibration (C Tomani, D Cremers and F Buettner), In European Conference on Computer Vision (ECCV), 2022. [bibtex] [arXiv:2102.12182]
2021
Conference and Workshop Papers
Post-hoc Uncertainty Calibration for Domain Drift Scenarios (C Tomani, S Gruber, ME Erdem, D Cremers and F Buettner), In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021. [bibtex] [arXiv:2012.10988] Oral Presentation
Towards Trustworthy Predictions from Deep Neural Networks with Fast Adversarial Calibration (C Tomani and F Buettner), In Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-21), 2021. [bibtex] [arXiv:2012.10923]
