Vincent Fortuin

Research group leader in Machine Learning

Helmholtz AI

TU Munich

About me

Short bio: Vincent Fortuin is a tenure-track research group leader at Helmholtz AI in Munich, leading the group for Efficient Learning and Probabilistic Inference for Science (ELPIS), and a faculty member at the Technical University of Munich. He is also a Branco Weiss Fellow, a Fellow of the Konrad Zuse School of Excellence in Reliable AI, and affiliated with the Munich Center for Machine Learning. His research focuses on reliable and data-efficient AI approaches leveraging Bayesian deep learning, deep generative modeling, meta-learning, and PAC-Bayesian theory. Before moving to Munich, he completed his PhD in Machine Learning at ETH Zürich and was a Research Fellow at the University of Cambridge. He is a unit faculty member of ELLIS, a regular reviewer and area chair for all major machine learning conferences, an action editor for TMLR, and a co-organizer of the Symposium on Advances in Approximate Bayesian Inference (AABI) and the ICBINB initiative.

Long bio: I am a tenure-track research group leader at Helmholtz AI in Munich, leading the group for Efficient Learning and Probabilistic Inference for Science (ELPIS). I am also a faculty member at the Technical University of Munich, a Branco Weiss Fellow, a Fellow of the Konrad Zuse School of Excellence in Reliable AI, a unit faculty member of ELLIS, and affiliated with the Munich Center for Machine Learning. Moreover, I am a regular reviewer and area chair for all major machine learning conferences, and a co-organizer of the Symposium on Advances in Approximate Bayesian Inference (AABI) and the ICBINB initiative. My research focuses on the interface between deep learning and probabilistic modeling. I am particularly keen to develop models that are more reliable and data-efficient, following the Bayesian paradigm. To this end, I mostly work on finding better priors and more efficient inference techniques for Bayesian deep learning. Beyond that, I am also interested in generative AI, meta-learning, and PAC-Bayesian theory. My group pursues fundamental ML research that is clearly motivated by real-world problems, especially in scientific and biomedical applications. If you are interested in joining the group, check out our open positions.

Before starting my group in Munich, I was a postdoctoral researcher at the University of Cambridge, working in the Machine Learning Group with Richard Turner. I was also a Research Fellow at St. John’s College, where I was mentored by Zoubin Ghahramani, and my research was supported by a Postdoc.Mobility Fellowship from the Swiss National Science Foundation.

Even before that, I did my undergraduate studies in Molecular Life Sciences at the University of Hamburg, where I worked on phylogeny inference for quickly mutating virus strains with Andrew Torda. I then went to ETH Zürich to study Computational Biology and Bioinformatics (in a joint program with the University of Zürich), with a focus on systems biology and machine learning. My master’s studies were supported by an ETH Excellence Scholarship. My master’s thesis, supervised by Manfred Claassen, was on the application of deep learning to gene regulatory network inference, for which I received the Willi Studer Prize. During my master’s studies, I also spent some time in Jacob Hanna’s group at the Weizmann Institute of Science, working on multi-omics data analysis in stem cell research.

I then did my PhD in Computer Science at ETH Zürich under the supervision of Gunnar Rätsch and Andreas Krause, where I was a member of the Biomedical Informatics group as well as the ETH Center for the Foundations of Data Science. I was supported by a PhD fellowship from the Swiss Data Science Center and was also an ELLIS PhD student. During my PhD, I visited and worked with Stephan Mandt at the University of California, Irvine, and Richard Turner at the University of Cambridge. Moreover, I completed internships at Disney Research Zürich, working with Romann Weber on deep learning for natural language understanding in the Machine Intelligence and Data Science team; at Microsoft Research Cambridge, working with Katja Hofmann on uncertainty quantification in deep learning in the Game Intelligence team; and at Google Brain, working with Efi Kokiopoulou and Rodolphe Jenatton on uncertainty estimation and out-of-distribution detection in the Reliable Deep Learning team.

My Erdős–Bacon number is 6. To dive deeper into my academic heritage, check out my complete pedigree (reaching all the way back to Ibn Sina), powered by the amazing Mathematics Genealogy Project.

Interests
  • Bayesian deep learning
  • Deep generative AI
  • Meta-learning
  • PAC-Bayesian theory
Education
  • PhD in Machine Learning, 2021

    ETH Zürich

  • MSc in Computational Biology and Bioinformatics, 2017

    ETH Zürich

  • BSc in Molecular Life Sciences, 2015

    University of Hamburg

All Publications

Hodge-Aware Contrastive Learning
On the Challenges and Opportunities in Generative AI
Position Paper: Bayesian Deep Learning in the Age of Large-Scale AI
Shaving Weights with Occam's Razor: Bayesian Sparsification for Neural Networks Using the Marginal Likelihood
A primer on Bayesian neural networks: review and debates
Challenges and Perspectives in Deep Generative Modeling (Dagstuhl Seminar 23072)
Estimating optimal PAC-Bayes bounds with Hamiltonian Monte Carlo
Improving Neural Additive Models with Bayesian Principles
Incorporating Unlabelled Data into Bayesian Neural Networks
Promises and Pitfalls of the Linearized Laplace in Bayesian Optimization
Uncertainty in Graph Contrastive Learning with Bayesian Neural Networks
Understanding pathologies of deep heteroskedastic regression
Bayesian neural network priors revisited
Data augmentation in Bayesian neural networks and the cold posterior effect
Deep classifiers with label noise modeling and distance awareness
Invariance learning in deep neural networks with differentiable Laplace approximations
Meta-learning richer priors for VAEs
Neural Variational Gradient Descent
On Interpretable Reranking-Based Dependency Parsing Systems
Pathologies in Priors and Inference for Bayesian Transformers
Probing as quantifying inductive bias
Quantum Bayesian Neural Networks
Sparse MoEs meet Efficient Ensembles
A Bayesian Approach to Invariant Deep Neural Networks
Annealed Stein Variational Gradient Descent
BNNpriors: A library for Bayesian neural network inference with different prior distributions
Exact Langevin Dynamics with Stochastic Gradients
Factorized Gaussian Process Variational Autoencoders
On Stein variational neural network ensembles
PACOH: Bayes-optimal meta-learning with PAC-guarantees
PCA Subspaces Are Not Always Optimal for Bayesian Learning
Repulsive deep ensembles are Bayesian
Scalable Gaussian process variational autoencoders
Scalable marginal likelihood estimation for model selection in deep learning
Sparse Gaussian processes on discrete domains
T-DPSOM: An interpretable clustering method for unsupervised learning of patient health states
Conservative uncertainty estimation by fitting prior networks
GP-VAE: Deep probabilistic time series imputation
Sparse Gaussian process variational autoencoders
DPSOM: Deep probabilistic clustering with self-organizing maps
Meta-learning mean functions for Gaussian processes
SOM-VAE: Interpretable discrete representation learning on time series
InspireMe: learning sequence models for stories

Contact