Approximate Bayesian Neural Networks

Bayesian pragmatism provides useful tools to tackle overfitting, but it comes at a cost: exact Bayesian inference for a neural network is typically intractable. Bayesian deep learning remains a good choice for designing efficient methods, providing approximate solutions that combine approximate inference with a scalable optimization framework. However, the practical effectiveness of Bayesian neural networks is limited by the need to specify meaningful prior distributions and by the intractability of posterior inference.
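As a rough illustration of where the intractability comes from (standard Bayes' rule, not specific to this talk), the posterior over network weights $\theta$ given data $\mathcal{D}$ is

$$
p(\theta \mid \mathcal{D}) \;=\; \frac{p(\mathcal{D} \mid \theta)\, p(\theta)}{\int p(\mathcal{D} \mid \theta')\, p(\theta')\, \mathrm{d}\theta'},
$$

where the normalizing integral over the (typically millions of) weights is what makes exact inference intractable for neural networks.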

By Nadhir Hassen in Probabilistic Machine Learning, Approximate Inference, Bayesian optimization, Hyperparameter optimization

May 31, 2021

Date

December 31, 2021

Time

9:30 AM

Location

Montreal, Canada

Event

Description

We address these issues by attempting to demystify the relationship between approximate inference and optimization approaches through the generalized Gauss–Newton method. Bayesian deep learning yields good results when Gauss–Newton is combined with either a Laplace or a Gaussian variational approximation. Both methods compute a Gaussian approximation to the posterior; however, it remains unclear how each choice affects the underlying probabilistic model and the resulting posterior approximation. Both methods allow a rigorous analysis of how a particular model fails and make it possible to quantify its uncertainty.
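As a sketch of the kind of Gaussian approximation involved (the standard Laplace–GGN form; details in the talk may differ): around a trained weight vector $\theta_*$, the posterior is approximated by a Gaussian whose precision combines the generalized Gauss–Newton matrix with the prior precision,

$$
p(\theta \mid \mathcal{D}) \;\approx\; \mathcal{N}\!\big(\theta;\, \theta_*,\, \Sigma\big),
\qquad
\Sigma^{-1} \;\approx\; \sum_{n=1}^{N} J_n^\top \Lambda_n J_n \;+\; \delta I,
$$

where $J_n$ is the network Jacobian at the $n$-th input, $\Lambda_n$ is the per-example Hessian of the negative log-likelihood with respect to the network output, and $\delta$ is an (assumed isotropic) prior precision.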

Posted on:
May 31, 2021
Length:
1 minute read, 80 words
Categories:
Probabilistic Machine Learning, Approximate Inference, Bayesian optimization, Hyperparameter optimization
Tags:
Laplace Approximation, Gaussian Processes, Numerical Optimization
See Also:
Approximate Bayesian Optimisation for Neural Networks
Kronecker-factored approximation (KFAC) of the Laplace-GGN for Continual Learning
Orion-Asynchronous Distributed Hyperparameter Optimization