CPB workshop continues, introduction to Bayesian inference.

McElreath discusses (slides):
- Bayes' theorem: P(θ|D) = Pr(D|θ) Pr(θ) / Pr(D)
- Philosophy of priors; uninformative priors
- Confidence intervals / credible intervals for free
- Computing the posterior: directly or by MCMC
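A minimal sketch of computing the posterior directly, via grid approximation (all numbers here are made-up illustration: 6 heads in 9 coin tosses, flat prior). The credible interval falls out of the normalized posterior with no extra machinery:

```python
import numpy as np

# Grid approximation of the posterior for a coin's heads probability theta.
# Hypothetical data: 6 heads in 9 tosses; uninformative (flat) prior.
theta = np.linspace(0, 1, 1001)          # grid of parameter values
prior = np.ones_like(theta)              # flat prior
likelihood = theta**6 * (1 - theta)**3   # Pr(D | theta) for 6 heads, 3 tails

# Bayes' theorem: posterior ∝ likelihood × prior; normalize to sum to 1.
unnorm = likelihood * prior
posterior = unnorm / unnorm.sum()

# A 95% credible interval "for free" from the posterior's CDF.
cdf = np.cumsum(posterior)
lo, hi = theta[np.searchsorted(cdf, [0.025, 0.975])]
print(theta[np.argmax(posterior)], lo, hi)
```

The posterior mode lands at 6/9 ≈ 0.667, as expected for a flat prior.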
Nice visual example of updating the prior as we add data:
[gist id="797795"]
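The same updating idea can be sketched numerically with a conjugate Beta-Binomial model: after each observation, today's posterior becomes tomorrow's prior, and the posterior variance shrinks as data accumulate. (The toss sequence below is made up for illustration.)

```python
# Conjugate Beta-Binomial updating: each observation turns the current
# posterior Beta(a, b) into the prior for the next observation.
a, b = 1.0, 1.0                 # Beta(1, 1) = flat prior
data = [1, 0, 1, 1, 0, 1]       # 1 = heads, 0 = tails (hypothetical)

for y in data:
    a += y                      # a heads observation increments a
    b += 1 - y                  # a tails observation increments b
    mean = a / (a + b)
    var = a * b / ((a + b)**2 * (a + b + 1))
    print(f"Beta({a:.0f}, {b:.0f}): mean={mean:.3f}, var={var:.4f}")
# The variance shrinks with every update: the posterior sharpens.
```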
King Markov and the chain islands.
Evaluating the chain: burn-in, autocorrelation. Thinning (saves memory).
Metropolis2.R demonstrates bad mixing.
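The Metropolis ideas above can be sketched as follows (a Python toy with a standard normal target, not the workshop's Metropolis2.R): a random-walk sampler with burn-in and thinning, run once with a reasonable proposal width and once with a far-too-narrow one to show why tiny steps mix badly.

```python
import numpy as np

def metropolis(n, width, seed=0):
    """Random-walk Metropolis targeting N(0, 1), uniform proposals of given width."""
    rng = np.random.default_rng(seed)
    log_p = lambda z: -0.5 * z * z          # log density of N(0, 1), up to a constant
    x, chain, accepted = 0.0, [], 0
    for _ in range(n):
        prop = x + rng.uniform(-width, width)
        # Accept with probability min(1, p(prop)/p(x)).
        if np.log(rng.uniform()) < log_p(prop) - log_p(x):
            x, accepted = prop, accepted + 1
        chain.append(x)
    return np.array(chain), accepted / n

chain, acc = metropolis(20000, width=2.5)
burned = chain[2000::10]                    # discard burn-in, thin every 10th draw
print(f"acceptance={acc:.2f}, mean={burned.mean():.2f}, sd={burned.std():.2f}")

_, acc_narrow = metropolis(20000, width=0.01)
print(f"tiny steps: acceptance={acc_narrow:.2f}, but the chain barely moves")
```

With a sensible width the thinned draws recover mean ≈ 0 and sd ≈ 1; with width 0.01 nearly every proposal is accepted, yet the chain crawls and autocorrelation is severe — high acceptance alone is not good mixing.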
Comparing better and worse proposal mechanisms motivates Gibbs sampling: propose each parameter from its full conditional distribution, so every proposal is accepted.
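A minimal Gibbs sketch (my own illustrative Python, distinct from Richard's R code below): for a bivariate normal with correlation rho, each coordinate's full conditional is known exactly, so each draw comes straight from it and nothing is ever rejected.

```python
import numpy as np

# Gibbs sampling for a bivariate normal with unit variances and correlation rho.
# Full conditionals: x | y ~ N(rho*y, 1 - rho^2), and symmetrically for y | x.
rho = 0.8
rng = np.random.default_rng(1)
x = y = 0.0
draws = []
for _ in range(20000):
    x = rng.normal(rho * y, np.sqrt(1 - rho**2))  # exact conditional draw: no reject step
    y = rng.normal(rho * x, np.sqrt(1 - rho**2))
    draws.append((x, y))

samples = np.array(draws)[1000:]                  # drop burn-in
print(np.corrcoef(samples.T)[0, 1])               # ≈ rho
```

The sampled correlation recovers rho; the price of Gibbs is that you must be able to derive and sample the full conditionals in the first place.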
Richard’s code:
[gist id="797782"]