Vincent Fortuin
Featured
Bayesian Neural Network Priors Revisited
We show that the empirical weight distributions of SGD-trained neural networks are heavy-tailed and correlated, and that incorporating these insights into Bayesian neural network priors can improve their performance and reduce the cold-posterior effect.
Vincent Fortuin, Adrià Garriga-Alonso, Florian Wenzel, Gunnar Rätsch, Richard E. Turner, Mark van der Wilk, Laurence Aitchison
PDF
Cite
Code
Priors in Bayesian Deep Learning: A Review
We provide a comprehensive review of recent advances in the choice of priors for Bayesian neural networks, variational autoencoders, and (deep) Gaussian processes.
Vincent Fortuin
PDF
Cite
PACOH: Bayes-Optimal Meta-Learning with PAC-Guarantees
We derive a novel PAC-Bayes bound for meta-learning with Bayesian models, which gives rise to a computationally efficient meta-learning method that outperforms existing approaches on a range of tasks, especially when the number of meta-tasks is small.
Jonas Rothfuss, Vincent Fortuin, Martin Josifoski, Andreas Krause
PDF
Cite
Code
Repulsive Deep Ensembles are Bayesian
We show that introducing a repulsive force between the members of a deep ensemble can improve the ensemble’s diversity and performance, especially when this force is applied in function space, and that it can also guarantee asymptotic convergence to the true Bayes posterior.
Francesco D'Angelo, Vincent Fortuin
PDF
Cite
Code
Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning
We show that a Laplace-Generalized-Gauss-Newton approximation to the marginal likelihood of Bayesian neural networks can effectively be used for model selection and can often discover better hyperparameter settings than cross-validation.
Alexander Immer, Matthias Bauer, Vincent Fortuin, Gunnar Rätsch, Mohammad Emtiyaz Khan
PDF
Cite
Code
GP-VAE: Deep Probabilistic Time Series Imputation
We show that using a Gaussian process prior in the latent space of a variational autoencoder can improve time series imputation performance, while still allowing for computationally efficient inference through a variational Gauss-Markov process.
Vincent Fortuin, Dmitry Baranchuk, Gunnar Rätsch, Stephan Mandt
PDF
Cite
Code
SOM-VAE: Interpretable Discrete Representation Learning on Time Series
We propose a novel version of the classical self-organizing map, which can be used as a structural prior in the latent space of a variational autoencoder, enabling interpretable representations of time series.
Vincent Fortuin, Matthias Hüser, Francesco Locatello, Heiko Strathmann, Gunnar Rätsch
PDF
Cite
Code