# Bayesian Integration and a Normal Posterior

Implementation of Bayesian methods is complicated by the need for specialized numerical integration techniques unfamiliar to many statistical practitioners. We propose a simple sampling-resampling approach to estimating expectations that avoids these difficulties.
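As a minimal sketch of the sampling-resampling idea, the helper below (the name `sir_resample` and the toy data are illustrative assumptions, not from the text) reweights draws from the prior by the likelihood and resamples them to approximate draws from the posterior:

```python
import math
import random

def sir_resample(prior_draws, loglik, n_out, rng=random):
    """Sampling-importance-resampling: reweight prior draws by the
    likelihood and resample them to approximate the posterior."""
    logw = [loglik(th) for th in prior_draws]
    m = max(logw)                              # subtract max for numerical stability
    weights = [math.exp(lw - m) for lw in logw]
    return rng.choices(prior_draws, weights=weights, k=n_out)
```

With a Normal(0, 1) prior and four observations with known unit variance, the resampled mean lands close to the exact conjugate posterior mean, with no specialized integration required.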

A normal posterior is an estimator that gives progressively less weight to the prior information as the sample size increases, while its variance shrinks as data accumulate. When the model is correct, this precision-weighted combination often makes the posterior mean more accurate than the sample mean alone.

## Mean

In Bayesian statistics, a normal posterior can be used to approximate the probability distribution of an unknown parameter θ. The normal approximation works well in low-dimensional parameter spaces, and it can be helpful for debugging more complex methods by providing credible intervals that can be compared to those produced by simpler models.

To understand how the normal approximation works, consider a random sample from a normal distribution with unknown mean θ and known variance.

The sample gives us some information about θ, but the information it conveys is incomplete. The prior mean and the sample mean provide two signals, and the more precise a signal is, the more weight it receives. The resulting precision-weighted average is the mean of a normal posterior, and the greater the precision of the sample mean, the closer the posterior mean gets to it.

A uniform prior spreads prior plausibility over a fairly wide range of values, say [0, 0.7]. Even when you observe data suggesting that θ may be greater than 0.7, the posterior probability that it is will remain 0. This is because a prior density of 0 above 0.7 makes it impossible for the posterior to place any probability on such high values.

As a result, a normal approximation can be badly distorted near such a hard boundary, since it places probability mass outside the prior's support. To check it, we can use MCMC sampling, choosing a proposal that mixes the Markov chain quickly over the region where the posterior is positive. From the MCMC draws we can compute a 98% central credible interval, compare it with the interval implied by the normal approximation, and then decide whether the normal approximation is reasonable for the true posterior. If not, we can investigate ways to improve the approximation.
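The boundary effect above can be seen in a minimal random-walk Metropolis sketch (the function name `metropolis_trunc`, the data, and the step size are illustrative assumptions): with a Uniform(0, 0.7) prior, proposals above 0.7 are always rejected, so every posterior draw respects the truncation even when the data point well above it.

```python
import math
import random

def metropolis_trunc(ys, sigma, lo, hi, n_iter, step=0.02, seed=0):
    """Random-walk Metropolis for theta with a Uniform(lo, hi) prior and a
    Normal(theta, sigma) likelihood; proposals outside [lo, hi] get log
    posterior -inf and are always rejected."""
    rng = random.Random(seed)

    def logpost(th):
        if th < lo or th > hi:
            return -math.inf
        return sum(-0.5 * ((y - th) / sigma) ** 2 for y in ys)

    th = (lo + hi) / 2.0
    lp = logpost(th)
    chain = []
    for _ in range(n_iter):
        prop = th + rng.gauss(0.0, step)
        lpp = logpost(prop)
        if math.log(rng.random()) < lpp - lp:   # Metropolis accept/reject
            th, lp = prop, lpp
        chain.append(th)
    return chain
```

Sorting the retained draws and reading off the 1st and 99th percentiles gives the 98% central credible interval; its upper end sits at the 0.7 boundary, while a normal approximation centred on the sample mean would spill past it.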

## Variance

The variance of a Normal distribution is the square of its standard deviation, and its precision is the reciprocal of its variance. The mean of a sample of size n from a Normal distribution with standard deviation s therefore has variance s²/n and precision n/s². Thus, assuming a normal sampling model with a Normal prior of mean m and standard deviation t, a simple compromise between prior and likelihood is the conjugate Normal posterior with mean m′ = (m/t² + nȳ/s²) / (1/t² + n/s²), where ȳ is the sample mean, and precision 1/t² + n/s².
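This conjugate update is a one-liner in code. A minimal sketch (the function name `normal_posterior` is an illustrative assumption): precisions add, and the posterior mean is the precision-weighted average of the prior mean and the sample mean.

```python
def normal_posterior(m0, t0, ybar, sigma, n):
    """Conjugate Normal-Normal update with known sampling sd `sigma`.
    Returns the posterior mean and posterior standard deviation."""
    prec_prior = 1.0 / t0 ** 2          # prior precision 1/t^2
    prec_lik = n / sigma ** 2           # precision of the sample mean, n/s^2
    prec_post = prec_prior + prec_lik   # precisions add
    m_post = (prec_prior * m0 + prec_lik * ybar) / prec_post
    return m_post, (1.0 / prec_post) ** 0.5
```

For a Normal(0, 1) prior and four unit-variance observations averaging 1.025, the posterior mean is (0 + 4 × 1.025)/5 = 0.82 and the posterior standard deviation is √(1/5) ≈ 0.447.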

Given the prior distribution, you can use Bayes’ theorem to calculate a posterior probability density p(θ | y) ∝ p(y | θ) p(θ).
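When no conjugate form is available, Bayes’ theorem can be applied numerically on a grid. A minimal sketch (the function name `grid_posterior` is an illustrative assumption): evaluate log prior plus log likelihood at each grid point, exponentiate, and normalize so the masses sum to one.

```python
import math

def grid_posterior(ys, sigma, grid, log_prior):
    """Discretized Bayes' theorem: posterior mass proportional to
    prior times likelihood, evaluated on a grid of theta values."""
    logp = [log_prior(th) + sum(-0.5 * ((y - th) / sigma) ** 2 for y in ys)
            for th in grid]
    m = max(logp)                           # stabilize before exponentiating
    w = [math.exp(v - m) for v in logp]
    z = sum(w)
    return [v / z for v in w]               # normalized posterior masses
```

With a flat prior this reduces to the normalized likelihood, so the grid posterior mean matches the sample mean, which is a handy sanity check.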

Let’s take a real example from neuroscience: posterior cortical atrophy is an important marker for Alzheimer’s disease, but it can also be caused by other neurological conditions such as Lewy body dementia and corticobasal degeneration, or by prion diseases such as Creutzfeldt-Jakob disease. To investigate the impact of these different factors, we can apply a Bayesian approach to brain imaging data.

To calculate the posterior probability of a brain disorder, we need to estimate how many regions are damaged and how large the damage is, which requires analysing a large number of brain scans. This can be expensive, so it is often more practical to perform a single MCMC run. We can discard an initial burn-in of iterations, and then thin the chain by keeping 1 in every 10 of the remaining iterations, so a run of n iterations yields roughly n/10 retained draws. From these draws we can report the centre and width of a 98% central credible interval for the parameter of interest. This provides a useful summary of the uncertainty in our estimates.
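The burn-in, thinning, and interval summary described above can be sketched in a few lines (the function name `summarize_chain` is an illustrative assumption; the 98% level matches the text):

```python
def summarize_chain(chain, burn_in, thin, level=0.98):
    """Discard burn-in draws, thin the chain, and return the centre and
    width of a central credible interval at the given level."""
    kept = sorted(chain[burn_in::thin])
    alpha = (1.0 - level) / 2.0
    lo = kept[int(alpha * len(kept))]
    hi = kept[min(int((1.0 - alpha) * len(kept)), len(kept) - 1)]
    return (lo + hi) / 2.0, hi - lo
```

For a chain of standard-normal draws, the centre should be near 0 and the 98% width near 2 × 2.326 ≈ 4.65, which makes the summary easy to validate on synthetic output.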