# Introduction to Posterior Inference

We should keep in mind that for all measurements, models, and discretizations we have uncertainties. For measurements, these typically appear in the form of measurement errors; model equations usually encompass only parts of the system we're interested in; and numerical simulations inherently introduce discretization errors. So a very important question to ask here is how sure we can be that an answer we obtain is the correct one. From a statistics viewpoint, we'd like to know the posterior probability distribution, i.e., the distribution over the different outcomes that are possible.

This admittedly becomes even more difficult in the context of machine learning: we're typically facing the task of approximating complex and unknown functions. From a probabilistic perspective, the standard process of training an NN yields a maximum likelihood estimate (MLE) of the parameters of the network. However, this MLE viewpoint does not take any of the uncertainties mentioned above into account: DL training is likewise a numerical optimization, and hence carries an inherent approximation error and uncertainty regarding the learned representation. Ideally, we should reformulate our learning problem such that it enables posterior inference, i.e., learn to produce the full output distribution. However, this turns out to be an extremely difficult task.
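
To make the distinction concrete, here is the standard formulation in a generic notation (added here for clarity, not taken from a specific derivation in this book): the MLE picks a single parameter vector, whereas posterior inference keeps a full distribution over parameters and marginalizes over it when making predictions.

$$
\theta_{\text{MLE}} = \underset{\theta}{\arg\max} \; p(\mathcal{D} \mid \theta)
\qquad \text{vs.} \qquad
p(\theta \mid \mathcal{D}) = \frac{p(\mathcal{D} \mid \theta)\, p(\theta)}{p(\mathcal{D})} ,
$$

$$
p(y \mid x, \mathcal{D}) = \int p(y \mid x, \theta)\; p(\theta \mid \mathcal{D}) \, d\theta .
$$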

This is where so-called Bayesian neural network (BNN) approaches come into play. They make a form of posterior inference possible by making assumptions about the probability distributions of the individual parameters of the network. Given a distribution over the parameters, we can evaluate the network multiple times to obtain different versions of the output, and in this way sample the distribution of the output.
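
The following minimal sketch illustrates this sampling idea in plain NumPy. The weight distributions here are made up purely for demonstration (in practice they would be learned), and the code is not taken from the book's actual implementation: each forward pass first draws a concrete set of weights, and the spread of the resulting outputs approximates the predictive distribution.

```python
import numpy as np

# Hypothetical single-layer "Bayesian" model: its weights are not fixed numbers
# but Gaussians given by (mean, std). These values are illustrative only.
rng = np.random.default_rng(42)
w_mean, w_std = np.array([0.8, -0.3]), np.array([0.1, 0.05])
b_mean, b_std = 0.2, 0.02

def sample_forward(x):
    """One stochastic forward pass: draw a set of weights, then evaluate the model."""
    w = rng.normal(w_mean, w_std)
    b = rng.normal(b_mean, b_std)
    return np.tanh(x @ w + b)

x = np.array([1.0, 2.0])  # a single input example

# Repeated evaluations give different outputs; together they sample the output distribution.
samples = np.array([sample_forward(x) for _ in range(1000)])
print(f"mean prediction: {samples.mean():.3f}, uncertainty (std): {samples.std():.3f}")
```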

Nonetheless, the task remains very challenging. Training a BNN is typically significantly more difficult than training a regular NN. However, this should come as no surprise, as we're trying to learn something fundamentally different here: a full probability distribution instead of a point estimate. (All previous chapters "just" dealt with learning such point estimates.)


## Introduction to Bayesian Neural Networks

TODO, integrate Maximilian's intro section here

## A practical example

As a first real example of posterior inference with BNNs, let's revisit the case of turbulent flows around airfoils from {doc}`supervised-airfoils`. However, in contrast to the point estimate learned there, we'll now aim to learn the full posterior.