Vincent Fortuin
Bayesian Neural Network Priors Revisited
We show that the empirical weight distributions of SGD-trained neural networks are heavy-tailed and correlated, and that incorporating these insights into Bayesian neural network priors can improve their performance and reduce the cold-posterior effect.
Vincent Fortuin, Adrià Garriga-Alonso, Florian Wenzel, Gunnar Rätsch, Richard E. Turner, Mark van der Wilk, Laurence Aitchison
PDF
Code
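The heavy-tailedness claim is easy to illustrate with a toy sketch (my own, not the paper's code): samples from a Student-t prior, a standard heavy-tailed alternative to the usual Gaussian weight prior, show clearly positive excess kurtosis, while Gaussian samples sit near zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Gaussian prior samples vs. heavy-tailed Student-t prior samples.
# df=7 gives heavy tails with finite fourth moment (excess kurtosis = 2).
gauss = rng.normal(0.0, 1.0, size=n)
student_t = rng.standard_t(df=7, size=n)

def excess_kurtosis(x):
    """Fourth standardized moment minus 3 (0 for a Gaussian)."""
    x = x - x.mean()
    return (x**4).mean() / (x**2).mean() ** 2 - 3.0

print(f"Gaussian excess kurtosis:  {excess_kurtosis(gauss):.2f}")
print(f"Student-t excess kurtosis: {excess_kurtosis(student_t):.2f}")
```

A kurtosis check like this is one simple way to see whether trained weights look more Gaussian or more heavy-tailed.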
Invariance Learning in Deep Neural Networks with Differentiable Laplace Approximations
Alexander Immer, Tycho van der Ouderaa, Vincent Fortuin, Gunnar Rätsch, Mark van der Wilk
PDF
Code
PAC-Bayesian Meta-Learning: From Theory to Practice
Jonas Rothfuss, Martin Josifoski, Vincent Fortuin, Andreas Krause
PDF
Probing as Quantifying the Inductive Bias of Pre-trained Representations
Alexander Immer, Lucas Torroba Hennigen, Vincent Fortuin, Ryan Cotterell
PDF
PACOH: Bayes-Optimal Meta-Learning with PAC-Guarantees
We derive a novel PAC-Bayes bound for meta-learning with Bayesian models, which gives rise to a computationally efficient meta-learning method that outperforms existing approaches on a range of tasks, especially when the number of meta-tasks is small.
Jonas Rothfuss, Vincent Fortuin, Martin Josifoski, Andreas Krause
PDF
Code
Repulsive Deep Ensembles are Bayesian
We show that introducing a repulsive force between the members of a deep ensemble can improve the ensemble’s diversity and performance, especially when this force is applied in the function space, and that it can also guarantee asymptotic convergence to the true Bayes posterior.
Francesco D'Angelo, Vincent Fortuin
PDF
Code
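The repulsion idea can be sketched in one dimension (a toy of my own under simplified assumptions, not the paper's method): plain gradient descent collapses all ensemble members onto the same optimum, while adding an SVGD-style kernel repulsion term keeps them spread out.

```python
import numpy as np

rng = np.random.default_rng(1)

def loss_grad(w):
    # Gradient of the quadratic loss (w - 2)^2 / 2: without repulsion,
    # every ensemble member collapses to w = 2.
    return w - 2.0

def rbf_repulsion(w, bandwidth=0.5):
    # Gradient-based repulsive force from an RBF kernel between members:
    # each particle is pushed away from its neighbours (SVGD-style).
    diff = w[:, None] - w[None, :]                # pairwise differences
    k = np.exp(-diff**2 / (2 * bandwidth**2))     # RBF kernel matrix
    return (k * diff).sum(axis=1) / bandwidth**2  # points away from neighbours

w_plain = rng.normal(size=8)       # ensemble of 8 scalar "networks"
w_repulsive = w_plain.copy()
lr = 0.02
for _ in range(2000):
    w_plain -= lr * loss_grad(w_plain)
    w_repulsive -= lr * (loss_grad(w_repulsive) - rbf_repulsion(w_repulsive))

print(f"spread without repulsion: {w_plain.std():.4f}")
print(f"spread with repulsion:    {w_repulsive.std():.4f}")
```

The plain ensemble's spread shrinks to essentially zero, while the repulsive ensemble settles into a diverse configuration balancing attraction to the optimum against mutual repulsion.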
Scalable Gaussian Process Variational Autoencoders
Metod Jazbec, Matthew Ashman, Vincent Fortuin, Michael Pearce, Stephan Mandt, Gunnar Rätsch
PDF
Code
Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning
We show that a Laplace-Generalized-Gauss-Newton approximation to the marginal likelihood of Bayesian neural networks can effectively be used for model selection and can often discover better hyperparameter settings than cross-validation.
Alexander Immer, Matthias Bauer, Vincent Fortuin, Gunnar Rätsch, Mohammad Emtiyaz Khan
PDF
Code
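The principle of marginal-likelihood-based model selection can be shown in the simplest case where the evidence is exact rather than Laplace-approximated (a sketch of my own, not the paper's estimator): in Bayesian linear regression, maximizing the log evidence over the prior precision recovers a sensible setting without any validation split.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: y = X w* + noise, with weights drawn from N(0, 1),
# so the "true" prior precision is alpha = 1.
n, d = 200, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
sigma2 = 0.5**2
y = X @ w_true + rng.normal(scale=0.5, size=n)

def log_marginal_likelihood(X, y, alpha, sigma2):
    """Exact log evidence of Bayesian linear regression with prior
    w ~ N(0, I/alpha), i.e. y ~ N(0, X X^T / alpha + sigma2 I)."""
    n = len(y)
    C = X @ X.T / alpha + sigma2 * np.eye(n)
    _, logdet = np.linalg.slogdet(C)  # C is positive definite
    return -0.5 * (n * np.log(2 * np.pi) + logdet + y @ np.linalg.solve(C, y))

# Model selection: choose the prior precision with the highest evidence.
alphas = np.logspace(-3, 3, 13)
scores = [log_marginal_likelihood(X, y, a, sigma2) for a in alphas]
best = alphas[int(np.argmax(scores))]
print(f"selected prior precision: {best:.3g}")
```

For deep networks the evidence is intractable, which is where the Laplace-Generalized-Gauss-Newton approximation of the paper comes in; the selection loop itself looks just like this grid search (or its gradient-based analogue).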
T-DPSOM: An Interpretable Clustering Method for Unsupervised Learning of Patient Health States
Laura Manduchi, Matthias Hüser, Martin Faltys, Julia Vogt, Gunnar Rätsch, Vincent Fortuin
PDF
Conservative Uncertainty Estimation By Fitting Prior Networks
Kamil Ciosek, Vincent Fortuin, Ryota Tomioka, Katja Hofmann, Richard E. Turner
PDF
Code