Vincent Fortuin

Research group leader in Machine Learning

Helmholtz AI

About me

Short bio: Vincent Fortuin is a tenure-track research group leader at Helmholtz AI in Munich, leading the group for Efficient Learning and Probabilistic Inference for Science (ELPIS), and a faculty member at the Technical University of Munich. He is also a Branco Weiss Fellow and a Fellow of the Konrad Zuse School of Excellence in Reliable AI. His research focuses on reliable and data-efficient AI approaches leveraging Bayesian deep learning, deep generative modeling, meta-learning, and PAC-Bayesian theory. Before that, he did his PhD in Machine Learning at ETH Zürich and was a Research Fellow at the University of Cambridge. He is a unit faculty member of ELLIS, a regular reviewer and area chair for all major machine learning conferences, and a co-organizer of the Symposium on Advances in Approximate Bayesian Inference (AABI) and the ICBINB initiative.

Long bio: I am a tenure-track research group leader at Helmholtz AI in Munich, leading the group for Efficient Learning and Probabilistic Inference for Science (ELPIS). I am also a faculty member at the Technical University of Munich, a Branco Weiss Fellow, a Fellow of the Konrad Zuse School of Excellence in Reliable AI, and a unit faculty member of ELLIS. Moreover, I am a regular reviewer and area chair for all major machine learning conferences, and a co-organizer of the Symposium on Advances in Approximate Bayesian Inference (AABI) and the ICBINB initiative. My research focuses on the interface between deep learning and probabilistic modeling. I am particularly keen to develop models that are more reliable and data-efficient, following the Bayesian paradigm. To this end, I mostly try to find better priors and more efficient inference techniques for Bayesian deep learning. Apart from that, I am also interested in generative AI, meta-learning, and PAC-Bayesian theory. My group aims for fundamental ML research, but with a clear motivation from real-world problems, especially in scientific and biomedical applications. If you are interested in joining the group, check out our open positions.

Before starting my group in Munich, I was a postdoctoral researcher at the University of Cambridge, working in the Machine Learning Group with Richard Turner. I was also a Research Fellow at St. John’s College, where I was mentored by Zoubin Ghahramani, and my research was supported by a Postdoc.Mobility Fellowship from the Swiss National Science Foundation.

Even before that, I did my undergraduate studies in Molecular Life Sciences at the University of Hamburg, where I worked on phylogeny inference for quickly mutating virus strains with Andrew Torda. I then went to ETH Zürich to study Computational Biology and Bioinformatics (in a joint program with the University of Zürich), with a focus on systems biology and machine learning. My master’s studies were supported by an ETH Excellence Scholarship. My master’s thesis was about the application of deep learning to gene regulatory network inference under the supervision of Manfred Claassen, for which I received the Willi Studer Prize. During my master’s studies, I also spent some time in Jacob Hanna’s group at the Weizmann Institute of Science, working on multiomics data analysis in stem cell research. I then did my PhD in Computer Science at ETH Zürich under the supervision of Gunnar Rätsch and Andreas Krause, where I was a member of the Biomedical Informatics group as well as the ETH Center for the Foundations of Data Science. I was supported by a PhD fellowship from the Swiss Data Science Center and was also an ELLIS PhD student. During my PhD studies, I visited and worked with Stephan Mandt at the University of California, Irvine, and Richard Turner at the University of Cambridge. Moreover, I completed internships at Disney Research Zürich, working with Romann Weber on deep learning for natural language understanding in the Machine Intelligence and Data Science team; at Microsoft Research Cambridge, working with Katja Hofmann on uncertainty quantification in deep learning in the Game Intelligence team; and at Google Brain, working with Efi Kokiopoulou and Rodolphe Jenatton on uncertainty estimation and out-of-distribution detection in the Reliable Deep Learning team.

My Erdős–Bacon number is 6. To dive deeper into my academic heritage, check out my complete pedigree (reaching all the way back to Ibn Sina), powered by the amazing Mathematics Genealogy Project.

Interests
  • Bayesian deep learning
  • Deep generative AI
  • Meta-learning
  • PAC-Bayesian theory
Education
  • PhD in Machine Learning, 2021

    ETH Zürich

  • MSc in Computational Biology and Bioinformatics, 2017

    ETH Zürich

  • BSc in Molecular Life Sciences, 2015

    University of Hamburg

All Publications

Estimating optimal PAC-Bayes bounds with Hamiltonian Monte Carlo
A Primer on Bayesian Neural Networks: Review and Debates
Hodge-Aware Contrastive Learning
Uncertainty in Graph Contrastive Learning with Bayesian Neural Networks
Understanding Pathologies of Deep Heteroskedastic Regression
Improving Neural Additive Models with Bayesian Principles
Promises and Pitfalls of the Linearized Laplace in Bayesian Optimization
Incorporating Unlabelled Data into Bayesian Neural Networks
Invariance Learning in Deep Neural Networks with Differentiable Laplace Approximations
Data augmentation in Bayesian neural networks and the cold posterior effect
Deep Classifiers with Label Noise Modeling and Distance Awareness
Bayesian Neural Network Priors Revisited
Meta-learning richer priors for VAEs
Neural Variational Gradient Descent
On Interpretable Reranking-Based Dependency Parsing Systems
PAC-Bayesian Meta-Learning: From Theory to Practice
Pathologies in priors and inference for Bayesian transformers
Probing as Quantifying Inductive Bias
Quantum Bayesian Neural Networks
Sparse MoEs meet Efficient Ensembles
Sparse Gaussian Processes on Discrete Domains
A Bayesian Approach to Invariant Deep Neural Networks
BNNpriors: A library for Bayesian neural network inference with different prior distributions
Exact Langevin Dynamics with Stochastic Gradients
Factorized Gaussian Process Variational Autoencoders
On Stein Variational Neural Network Ensembles
PACOH: Bayes-Optimal Meta-Learning with PAC-Guarantees
PCA Subspaces Are Not Always Optimal for Bayesian Learning
Repulsive Deep Ensembles are Bayesian
Scalable Gaussian Process Variational Autoencoders
Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning
Annealed Stein Variational Gradient Descent
T-DPSOM: An Interpretable Clustering Method for Unsupervised Learning of Patient Health States
Conservative Uncertainty Estimation By Fitting Prior Networks
GP-VAE: Deep Probabilistic Time Series Imputation
Sparse Gaussian Process Variational Autoencoders
DPSOM: Deep Probabilistic Clustering with Self-Organizing Maps
Meta-Learning Mean Functions for Gaussian Processes
SOM-VAE: Interpretable Discrete Representation Learning on Time Series
InspireMe: Learning Sequence Models for Stories

Contact