Vincent Fortuin
Publications
Bayesian Neural Network Priors Revisited
We show that the empirical weight distributions of SGD-trained neural networks are heavy-tailed and correlated, and that incorporating these insights into Bayesian neural network priors can improve performance and reduce the cold-posterior effect. (An illustrative sketch follows this entry.)
Vincent Fortuin, Adrià Garriga-Alonso, Florian Wenzel, Gunnar Rätsch, Richard E. Turner, Mark van der Wilk, Laurence Aitchison
PDF · Cite · Code
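One compact way to express the paper's suggestion (the notation here is mine, not taken from the paper): trade the usual isotropic Gaussian weight prior for a correlated, heavy-tailed alternative such as a multivariate Student-t, whose covariance Σ and small degrees of freedom ν mirror the correlations and heavy tails observed in SGD-trained weights.

```latex
% Isotropic Gaussian prior -> correlated, heavy-tailed prior (illustrative):
p(\mathbf{w}) = \mathcal{N}\!\left(\mathbf{0},\, \sigma^2 \mathbf{I}\right)
\quad\longrightarrow\quad
p(\mathbf{w}) = t_\nu\!\left(\mathbf{0},\, \boldsymbol{\Sigma}\right),
\qquad \boldsymbol{\Sigma} \neq \sigma^2 \mathbf{I}, \quad \nu \text{ small (heavy tails)}.
```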
Data augmentation in Bayesian neural networks and the cold posterior effect
Seth Nabarro, Stoil Ganev, Adrià Garriga-Alonso, Vincent Fortuin, Mark van der Wilk, Laurence Aitchison
PDF · Cite
Invariance Learning in Deep Neural Networks with Differentiable Laplace Approximations
Alexander Immer, Tycho van der Ouderaa, Vincent Fortuin, Gunnar Rätsch, Mark van der Wilk
PDF · Cite · Code
On Interpretable Reranking-Based Dependency Parsing Systems
Florian Schottmann, Vincent Fortuin, Edoardo Ponti, Ryan Cotterell
PDF · Cite
Probing as Quantifying Inductive Bias
Alexander Immer, Lucas Torroba Hennigen, Vincent Fortuin, Ryan Cotterell
PDF · Cite
PACOH: Bayes-Optimal Meta-Learning with PAC-Guarantees
We derive a novel PAC-Bayes bound for meta-learning with Bayesian models, which gives rise to a computationally efficient meta-learning method that outperforms existing approaches on a range of tasks, especially when the number of meta-training tasks is small. (A sketch of the underlying bound follows this entry.)
Jonas Rothfuss, Vincent Fortuin, Martin Josifoski, Andreas Krause
PDF · Cite · Code
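For context, this is the classical single-task PAC-Bayes bound that this line of work builds on; the paper's meta-level bound is analogous, with a hyper-posterior over priors playing the role of Q. The constants below follow the standard McAllester-style statement, not necessarily the paper's exact bound.

```latex
% With probability at least 1 - \delta over an i.i.d. sample of size n:
\mathcal{L}(Q) \;\le\; \widehat{\mathcal{L}}(Q)
  \;+\; \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{2\sqrt{n}}{\delta}}{2n}}
```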
Repulsive Deep Ensembles are Bayesian
We show that introducing a repulsive force between the members of a deep ensemble can improve the ensemble’s diversity and performance, especially when this force is applied in function space, and that it can also guarantee asymptotic convergence to the true Bayes posterior. (A toy sketch follows this entry.)
Francesco D'Angelo, Vincent Fortuin
PDF · Cite · Code
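A minimal toy sketch of the kind of kernel-based repulsion the paper studies, written as a plain SVGD-style update on 2D particles rather than the paper's function-space variant for neural networks; the target, kernel bandwidth, and step sizes are all illustrative.

```python
import numpy as np

# Toy target: standard 2D Gaussian, so grad log p(x) = -x.
def grad_log_p(x):
    return -x

def kernel_terms(X, h=1.0):
    """RBF kernel matrix K and the SVGD repulsion term grad_{x_j} k(x_j, x_i)."""
    diffs = X[:, None, :] - X[None, :, :]                 # diffs[i, j] = x_i - x_j
    K = np.exp(-np.sum(diffs**2, axis=-1) / (2 * h**2))   # (n, n)
    rep = diffs / h**2 * K[:, :, None]                    # (n, n, d)
    return K, rep

def svgd_step(X, step=0.5, h=1.0):
    n = X.shape[0]
    K, rep = kernel_terms(X, h)
    drive = K @ grad_log_p(X) / n    # pulls members toward high density
    repulse = rep.sum(axis=1) / n    # pushes members apart (diversity)
    return X + step * (drive + repulse)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2)) * 3.0 + 5.0   # "ensemble members" as 2D particles
for _ in range(1000):
    X = svgd_step(X)
print(X.mean(axis=0), X.std(axis=0))       # roughly [0, 0] with order-1 spread
```

Without the `repulse` term this collapses to plain gradient ascent on log p, so all members drift to the mode; the kernel-gradient term is what keeps the ensemble spread out.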
Scalable Gaussian Process Variational Autoencoders
Metod Jazbec, Matthew Ashman, Vincent Fortuin, Michael Pearce, Stephan Mandt, Gunnar Rätsch
PDF · Cite · Code
Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning
We show that a Laplace-Generalized-Gauss-Newton approximation to the marginal likelihood of Bayesian neural networks can be used effectively for model selection and often discovers better hyperparameter settings than cross-validation. (A toy sketch follows this entry.)
Alexander Immer, Matthias Bauer, Vincent Fortuin, Gunnar Rätsch, Mohammad Emtiyaz Khan
PDF · Cite · Code
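A self-contained toy sketch of the underlying idea: use the Laplace approximation to the log marginal likelihood (evidence) of Bayesian logistic regression, where the GGN coincides with the exact Hessian, to select a prior precision. The data, optimizer, and hyperparameters below are illustrative, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                                            # toy inputs
y = (X @ np.array([1.5, -2.0]) + rng.normal(size=200) > 0).astype(float) # noisy labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_map(X, y, prior_prec, steps=2000, lr=0.1):
    """MAP weights via gradient descent on the negative log joint."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = X.T @ (sigmoid(X @ w) - y) + prior_prec * w
        w -= lr * grad / len(y)
    return w

def log_evidence_laplace(X, y, prior_prec):
    """Laplace approx.: log p(D) ~= log joint at MAP - 0.5 * log det(H / 2pi)."""
    d = X.shape[1]
    w = fit_map(X, y, prior_prec)
    p = np.clip(sigmoid(X @ w), 1e-12, 1 - 1e-12)
    log_joint = (y * np.log(p) + (1 - y) * np.log(1 - p)).sum() \
        - 0.5 * prior_prec * w @ w + 0.5 * d * np.log(prior_prec / (2 * np.pi))
    H = X.T @ (X * (p * (1 - p))[:, None]) + prior_prec * np.eye(d)  # GGN + prior
    return log_joint + 0.5 * d * np.log(2 * np.pi) - 0.5 * np.linalg.slogdet(H)[1]

# Model selection: pick the prior precision with the highest evidence,
# instead of running a cross-validation loop.
for lam in [0.01, 0.1, 1.0, 10.0]:
    print(f"prior precision {lam:>5}: log evidence {log_evidence_laplace(X, y, lam):.2f}")
```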
T-DPSOM: An Interpretable Clustering Method for Unsupervised Learning of Patient Health States
Laura Manduchi, Matthias Hüser, Martin Faltys, Julia Vogt, Gunnar Rätsch, Vincent Fortuin
PDF · Cite