Vincent Fortuin

Postdoctoral Researcher in Machine Learning

University of Cambridge

About me

I am a postdoctoral researcher at the University of Cambridge, working in the Machine Learning Group with Richard Turner. I am also a Research Fellow at St. John’s College and a Branco Weiss Fellow, and my research is supported by a Postdoc.Mobility Fellowship from the Swiss National Science Foundation. My research focuses on the interface between deep learning and probabilistic modeling. I am particularly keen to develop models that are more interpretable and data-efficient, following the Bayesian paradigm. To this end, I mostly work on finding better priors and more efficient inference techniques for Bayesian deep learning. Beyond that, I am also interested in deep generative modeling, meta-learning, and PAC-Bayesian theory.

I did my undergraduate studies in Molecular Life Sciences at the University of Hamburg, where I worked with Andrew Torda on phylogeny inference for quickly mutating virus strains. I then went to ETH Zürich to study Computational Biology and Bioinformatics (in a joint program with the University of Zürich), with a focus on systems biology and machine learning. My master’s studies were supported by an ETH Excellence Scholarship. My master’s thesis, supervised by Manfred Claassen, applied deep learning to gene regulatory network inference, for which I received the Willi Studer Prize. During my master’s studies, I also spent some time in Jacob Hanna’s group at the Weizmann Institute of Science, working on multi-omics data analysis in stem cell research.

I then did my PhD in Computer Science at ETH Zürich under the supervision of Gunnar Rätsch and Andreas Krause, as a member of the Biomedical Informatics group and the ETH Center for the Foundations of Data Science. I was supported by a PhD fellowship from the Swiss Data Science Center and was also an ELLIS PhD student. During my PhD, I visited and worked with Stephan Mandt at the University of California, Irvine, and Richard Turner at the University of Cambridge. Moreover, I completed internships at Disney Research Zürich, working with Romann Weber on deep learning for natural language understanding in the Machine Intelligence and Data Science team; at Microsoft Research Cambridge, working with Katja Hofmann on uncertainty quantification in deep learning in the Game Intelligence team; and at Google Brain, working with Efi Kokiopoulou and Rodolphe Jenatton on uncertainty estimation and out-of-distribution detection in the Reliable Deep Learning team. My Erdős–Bacon number is 6.

Interests
  • Bayesian deep learning
  • Deep generative modeling
  • Meta-learning
  • PAC-Bayesian theory
Education
  • PhD in Machine Learning, 2021
    ETH Zürich
  • MSc in Computational Biology and Bioinformatics, 2017
    ETH Zürich
  • BSc in Molecular Life Sciences, 2015
    University of Hamburg

All Publications

(2022). Bayesian Neural Network Priors Revisited. In ICLR.

PDF Code

(2022). Deep Classifiers with Label Noise Modeling and Distance Awareness. In AABI.

PDF Code

(2022). Neural Variational Gradient Descent. In AABI.

PDF

(2022). On Disentanglement in Gaussian Process Variational Autoencoders. In AABI.

PDF Code

(2022). Pathologies in priors and inference for Bayesian transformers. In AABI.

PDF

(2022). Priors in Bayesian Deep Learning: A Review. In International Statistical Review.

PDF

(2022). Probing as Quantifying the Inductive Bias of Pre-trained Representations. In ACL.

PDF

(2022). Quantum Bayesian Neural Networks. In AABI.

PDF Code

(2021). On the Choice of Priors in Bayesian Deep Learning. PhD thesis.

PDF

(2021). T-DPSOM: An Interpretable Clustering Method for Unsupervised Learning of Patient Health States. In ACM CHIL.

PDF

(2021). Annealed Stein Variational Gradient Descent. In AABI.

PDF

(2021). BNNpriors: A library for Bayesian neural network inference with different prior distributions. In Software Impacts.

PDF Code

(2021). Data augmentation in Bayesian neural networks and the cold posterior effect. arXiv preprint.

PDF

(2021). Exact Langevin Dynamics with Stochastic Gradients. In AABI.

PDF

(2021). Factorized Gaussian Process Variational Autoencoders. In AABI.

PDF Code

(2021). MGP-AttTCN: An Interpretable Machine Learning Model for the Prediction of Sepsis. In PLOS ONE.

PDF Code

(2021). Mixture-of-Experts Variational Autoencoder for clustering and generating from similarity-based representations. In PLOS Computational Biology.

PDF

(2021). A Bayesian Approach to Invariant Deep Neural Networks. In ICML workshop on Uncertainty and Robustness in Deep Learning.

PDF

(2021). PACOH: Bayes-Optimal Meta-Learning with PAC-Guarantees. In ICML.

PDF Code

(2021). Repulsive Deep Ensembles are Bayesian. In NeurIPS (spotlight).

PDF Code

(2021). Scalable Gaussian Process Variational Autoencoders. In AISTATS.

PDF Code

(2021). Scalable Gaussian Processes on Discrete Domains. In IEEE Access.

PDF

(2021). Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning. In ICML.

PDF Code

(2021). Sparse MoEs meet Efficient Ensembles. arXiv preprint.

PDF

(2021). On Stein Variational Neural Network Ensembles. arXiv preprint.

PDF

(2020). Conservative Uncertainty Estimation By Fitting Prior Networks. In ICLR.

PDF Code

(2020). GP-VAE: Deep Probabilistic Time Series Imputation. In AISTATS.

PDF Code

(2020). Sparse Gaussian Process Variational Autoencoders. arXiv preprint.

PDF Code

(2019). DPSOM: Deep Probabilistic Clustering with Self-Organizing Maps. In NeurIPS workshop on Machine Learning for Health.

PDF Code

(2019). Meta-Learning Mean Functions for Gaussian Processes. In NeurIPS workshop on Bayesian Deep Learning.

PDF

(2019). META^2: Memory-efficient taxonomic classification and abundance estimation for metagenomics with deep learning. In MLCB.

PDF Code

(2019). SOM-VAE: Interpretable Discrete Representation Learning on Time Series. In ICLR.

PDF Code

(2018). InspireMe: Learning Sequence Models for Stories. In AAAI.

PDF

(2018). On the Connection between Neural Processes and Gaussian Processes with Deep Kernels. In NeurIPS workshop on Bayesian Deep Learning.

PDF

Contact