Classical statistics provides ways to test hypotheses and estimate confidence intervals. Bayesian statistics provides methods to calculate probabilities associated with your hypotheses. The result is a posterior distribution that combines information from your data with your prior beliefs. The term "Bayesian" comes from Bayes's theorem, a single formula for obtaining the posterior distribution. It is the cornerstone of modern statistical methods.

In this chapter we introduce Bayesian statistics. We begin by considering the problem of learning about the population proportion (percentage) of all hurricanes that make landfall. We then consider the problem of estimating how many days it takes your manuscript to get accepted for publication. Again, we encourage you to follow along by typing the code into your R console.

Models that have skill at forecasting hurricane counts are more relevant to society if they include an estimate of the proportion that make landfall. Before examining the data, you hold a belief about the value of this proportion. You model your belief in terms of a prior distribution. Then, after examining some data, you update your belief about the proportion by computing the posterior distribution (Albert, 2009). This setting also allows you to predict the likely outcomes of a new sample taken from the population, for example, the proportion of landfalls for next year.

The use of the pronoun "you" focuses attention on the Bayesian viewpoint that probabilities are not objective features of the world; they express how strongly you personally believe that something is true. Said another way, probabilities are subjective and based on all the relevant information available to you. Here you think of a population consisting of past and future hurricanes in the North Atlantic. Then let π represent the proportion of this population that hit the United States at hurricane intensity. The value of π is unknown, and you are interested in learning about what its value could be.

Bayesian statistics requires that you represent your belief about the uncertainty in this percentage with a probability distribution. The distribution reflects your subjective prior opinion about plausible values of π. Before you examine a sample of hurricanes, you think about what the value of π might be.
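To make the prior-then-posterior idea concrete, here is a minimal R sketch using a beta density for π, which the keywords below anticipate. The prior shape parameters and the sample counts are purely illustrative assumptions, not values from the book's hurricane data.

```r
# Illustrative only: hypothetical beta prior for the landfall proportion pi.
a <- 3                     # assumed prior shape parameter (successes + 1)
b <- 4                     # assumed prior shape parameter (failures + 1)

# Hypothetical sample: 12 landfalling and 8 non-landfalling hurricanes.
s <- 12
f <- 8

# With a beta prior and a binomial likelihood, Bayes's theorem gives a
# beta posterior with updated shape parameters (conjugacy).
post_a <- a + s
post_b <- b + f

# Summarize your updated belief about pi.
post_mean <- post_a / (post_a + post_b)          # posterior mean of pi
ci <- qbeta(c(0.025, 0.975), post_a, post_b)     # 95% credible interval

post_mean
ci
```

Running this in your R console prints the posterior mean and a 95% credible interval for π; changing `a` and `b` shows how a stronger or weaker prior opinion shifts the posterior relative to the data.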
Keywords: Gibbs sampling, JAGS (Just Another Gibbs Sampler), MCMC (Markov chain Monte Carlo), beta density, credible interval, manuscript review, posterior distribution, predictive density, time-to-acceptance