The Bayesian method is the natural approach to inference, yet it is hidden from readers behind chapters of slow mathematical analysis. But mathematical analysis is only one way to “think Bayes”. With cheap computing power, we can now afford to take an alternative route via probabilistic programming.
Cam Davidson-Pilon wrote the book Bayesian Methods for Hackers as an introduction to Bayesian inference from a computational, understanding-first, mathematics-second point of view.
The book explains Bayesian principles with code and visuals. For instance:
```python
%matplotlib inline
from IPython.core.pylabtools import figsize
import numpy as np
from matplotlib import pyplot as plt
import scipy.stats as stats

figsize(11, 9)

dist = stats.beta
n_trials = [0, 1, 2, 3, 4, 5, 8, 15, 50, 500]
data = stats.bernoulli.rvs(0.5, size=n_trials[-1])
x = np.linspace(0, 1, 100)

for k, N in enumerate(n_trials):
    # integer division is required for subplot indices in Python 3
    sx = plt.subplot(len(n_trials) // 2, 2, k + 1)
    if k in [0, len(n_trials) - 1]:
        plt.xlabel("$p$, probability of heads")
    plt.setp(sx.get_yticklabels(), visible=False)
    heads = data[:N].sum()
    # posterior after observing N tosses with `heads` successes,
    # starting from a uniform Beta(1, 1) prior
    y = dist.pdf(x, 1 + heads, 1 + N - heads)
    plt.plot(x, y, label="observe %d tosses,\n %d heads" % (N, heads))
    plt.fill_between(x, 0, y, color="#348ABD", alpha=0.4)
    plt.vlines(0.5, 0, 4, color="k", linestyles="--", lw=1)
    leg = plt.legend()
    leg.get_frame().set_alpha(0.4)
    plt.autoscale(tight=True)

plt.suptitle("Bayesian updating of posterior probabilities",
             y=1.02, fontsize=14)
plt.tight_layout()
```
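The plotting loop above relies on Beta-Binomial conjugacy: with a uniform Beta(1, 1) prior on the heads probability and `heads` successes in `N` tosses, the posterior is Beta(1 + heads, 1 + N - heads). A minimal sketch of that update, without the plotting (the toss counts here are hypothetical, chosen for illustration):

```python
# Conjugate Beta-Binomial update: uniform Beta(1, 1) prior, then observe
# `heads` successes in `N` Bernoulli trials.
import scipy.stats as stats

N, heads = 50, 27  # hypothetical coin-toss data

# Posterior over p after the observations.
posterior = stats.beta(1 + heads, 1 + N - heads)

# Posterior mean is (1 + heads) / (2 + N); here 28/52 ≈ 0.538,
# already pulled close to the data's empirical frequency 27/50.
print(posterior.mean())
```

As `N` grows, the posterior concentrates around the true heads probability, which is exactly what the sequence of panels in the figure shows.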
The printed version of the book also features:

- An additional chapter on Bayesian A/B testing
- Updated examples
- Answers to the end-of-chapter questions
- Additional explanations and rewritten sections to aid the reader
If you’re interested in learning more about Bayesian analysis, I recommend these other books:
- Doing Bayesian Data Analysis: A tutorial with R, JAGS, and Stan
- Think Bayes: Bayesian Statistics in Python
- Practical Probabilistic Programming