Publications

(2024). Shaving Weights with Occam's Razor: Bayesian Sparsification for Neural Networks Using the Marginal Likelihood. arXiv preprint arXiv:2402.15978.

(2024). Position Paper: Bayesian Deep Learning in the Age of Large-Scale AI. arXiv preprint arXiv:2402.00809.

(2024). On the Challenges and Opportunities in Generative AI. arXiv preprint arXiv:2403.00025.

(2024). Hodge-Aware Contrastive Learning. International Conference on Acoustics, Speech, and Signal Processing.

(2023). Understanding pathologies of deep heteroskedastic regression. arXiv preprint arXiv:2306.16717.

(2023). Uncertainty in Graph Contrastive Learning with Bayesian Neural Networks. Fifth Symposium on Advances in Approximate Bayesian Inference.

(2023). Scalable PAC-Bayesian Meta-Learning via the PAC-Optimal Hyper-Posterior: From Theory to Practice. Journal of Machine Learning Research.

(2023). Promises and Pitfalls of the Linearized Laplace in Bayesian Optimization. Fifth Symposium on Advances in Approximate Bayesian Inference.

(2023). Incorporating Unlabelled Data into Bayesian Neural Networks. arXiv preprint arXiv:2304.01762.

(2023). Improving Neural Additive Models with Bayesian Principles. Fifth Symposium on Advances in Approximate Bayesian Inference.

(2023). Estimating optimal PAC-Bayes bounds with Hamiltonian Monte Carlo. arXiv preprint arXiv:2310.20053.

(2023). Challenges and Perspectives in Deep Generative Modeling (Dagstuhl Seminar 23072). Dagstuhl Reports.

(2023). A primer on Bayesian neural networks: review and debates. arXiv preprint arXiv:2309.16314.

(2022). Sparse MoEs meet Efficient Ensembles. Transactions on Machine Learning Research.

(2022). Quantum Bayesian Neural Networks. Fourth Symposium on Advances in Approximate Bayesian Inference.

(2022). Probing as quantifying inductive bias. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).

(2022). Priors in Bayesian deep learning: A review. International Statistical Review.

(2022). Pathologies in Priors and Inference for Bayesian Transformers. Fourth Symposium on Advances in Approximate Bayesian Inference.

(2022). On Interpretable Reranking-Based Dependency Parsing Systems. Swiss Text Analytics Conference.

(2022). On Disentanglement in Gaussian Process Variational Autoencoders. Fourth Symposium on Advances in Approximate Bayesian Inference.

(2022). Neural Variational Gradient Descent. Fourth Symposium on Advances in Approximate Bayesian Inference.

(2022). Meta-learning richer priors for VAEs. Fourth Symposium on Advances in Approximate Bayesian Inference.

(2022). Invariance learning in deep neural networks with differentiable Laplace approximations. Advances in Neural Information Processing Systems.

(2022). Deep classifiers with label noise modeling and distance awareness. Transactions on Machine Learning Research.

(2022). Data augmentation in Bayesian neural networks and the cold posterior effect. Uncertainty in Artificial Intelligence.

(2022). Bayesian neural network priors revisited. International Conference on Learning Representations.

(2021). T-DPSOM: An interpretable clustering method for unsupervised learning of patient health states. Proceedings of the Conference on Health, Inference, and Learning.

(2021). Sparse Gaussian processes on discrete domains. IEEE Access.

(2021). Scalable marginal likelihood estimation for model selection in deep learning. International Conference on Machine Learning.

(2021). Scalable Gaussian process variational autoencoders. International Conference on Artificial Intelligence and Statistics.

(2021). Repulsive deep ensembles are Bayesian. Advances in Neural Information Processing Systems.

(2021). PCA Subspaces Are Not Always Optimal for Bayesian Learning. NeurIPS 2021 Workshop on Distribution Shifts: Connecting Methods and Applications.

(2021). PACOH: Bayes-optimal meta-learning with PAC-guarantees. International Conference on Machine Learning.

(2021). On Stein variational neural network ensembles. arXiv preprint arXiv:2106.10760.

(2021). Mixture-of-Experts Variational Autoencoder for clustering and generating from similarity-based representations on single cell data. PLoS Computational Biology.

(2021). Factorized Gaussian Process Variational Autoencoders. Third Symposium on Advances in Approximate Bayesian Inference.

(2021). Exact Langevin Dynamics with Stochastic Gradients. Third Symposium on Advances in Approximate Bayesian Inference.

(2021). BNNpriors: A library for Bayesian neural network inference with different prior distributions. Software Impacts.

(2021). Annealed Stein Variational Gradient Descent. Third Symposium on Advances in Approximate Bayesian Inference.

(2021). A Bayesian Approach to Invariant Deep Neural Networks. arXiv preprint arXiv:2107.09301.

(2020). Sparse Gaussian process variational autoencoders. arXiv preprint arXiv:2010.10177.

(2020). GP-VAE: Deep probabilistic time series imputation. International Conference on Artificial Intelligence and Statistics.

(2020). Conservative uncertainty estimation by fitting prior networks. International Conference on Learning Representations.

(2019). SOM-VAE: Interpretable discrete representation learning on time series. International Conference on Learning Representations.

(2019). META²: Memory-efficient taxonomic classification and abundance estimation for metagenomics with deep learning. arXiv preprint arXiv:1909.13146.

(2019). Meta-learning mean functions for Gaussian processes. arXiv preprint arXiv:1901.08098.

(2019). DPSOM: Deep probabilistic clustering with self-organizing maps. arXiv preprint arXiv:1910.01590.

(2018). Supervised learning on synthetic data for reverse engineering gene regulatory networks from experimental time-series. bioRxiv.

(2018). On the connection between neural processes and Gaussian processes with deep kernels. Workshop on Bayesian Deep Learning, NeurIPS.

(2018). InspireMe: learning sequence models for stories. Proceedings of the AAAI Conference on Artificial Intelligence.
