Statistical Inference: An Integrated Bayesian/Likelihood Approach. By Murray Aitkin. Boca Raton, FL: Chapman & Hall/CRC, 2010. 254 pages. UK£57.99 (hardback). ISBN 978-1-4200-9343-8.

This is a stimulating book that should be of interest to Bayesians and statisticians with a general interest in statistical inference. It is not intended for complete beginners – Aitkin states accurately in the preface that he assumes considerable background in both Bayes and classical frequentist theory. It is also not intended to be a balanced review of different approaches to inference – Aitkin has a strong viewpoint, which he propounds with the aim of converting readers to his ideas. It is likely to be controversial, even heretical, to Bayesians. However, this is precisely why it is worth reading: in exploring the new ideas, whether we ultimately accept them or not, we gain a better understanding of the current orthodoxy.

The author’s primary purpose in writing this book is to describe his approach to comparing statistical models (which roughly equates to testing hypotheses about parameters in models); a secondary purpose is to present his views on Bayesian nonparametrics, in particular his approach to analysing sample survey data. The book begins with a brief overview of various approaches to statistical inference and then in the core Chapter 2 introduces Aitkin’s approach to comparing models. This approach is applied to standard normal theory problems (one- and two-sample comparisons of means, comparison of variances, regression and analysis of variance) in Chapters 3 and 5. Chapters 4 and 6 present Aitkin’s approach to survey data and Bayesian nonparametrics involving the multinomial distribution. The two themes come together in Chapter 7, which concerns testing goodness-of-fit: diagnostics are also mentioned in the chapter title but, in my view, this conveys the wrong impression. Chapter 8 discusses two-level variance component models (i.e. models for one-way arrays) and finite mixture models.

Aitkin’s approach to comparing two models (pp. 41–45) is to use the posterior probability that the likelihood ratio is greater than one. The likelihood ratio is a familiar starting point for comparing models, but instead of the frequentist approach of maximizing to eliminate nuisance parameters and then referring the ratio to its sampling distribution given the parameters, Aitkin suggests referring the likelihood ratio to its posterior distribution given the data. This means that ultimately the comparison is based on a posterior probability which, at least in regular problems, often turns out to be related to a frequentist P-value. Note that this is not the same as the heavily criticized (for ‘using the data twice’) posterior Bayes factor approach proposed earlier by Aitkin. Indeed, one of the interesting contributions of the book is the discussion of the use of Bayes factors – if not ‘from the inside’, at least from someone who has been thinking deeply about them for some time.
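The core idea can be sketched in a few lines for the simplest case: a normal mean with known variance and a flat prior. This is my own illustrative reconstruction, not code from the book; the data, seed and variable names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: n observations from N(mu, 1), variance known
n, mu_true = 50, 0.3
y = rng.normal(mu_true, 1.0, size=n)
ybar = y.mean()

# Under a flat prior on mu, the posterior is N(ybar, 1/n)
mu_draws = rng.normal(ybar, 1.0 / np.sqrt(n), size=100_000)

# Log-likelihood ratio of the point null H0: mu = mu0 against mu,
# written in terms of the sufficient statistic ybar:
#   log L(mu0) - log L(mu) = -(n/2) * ((ybar - mu0)^2 - (ybar - mu)^2)
mu0 = 0.0
log_lr = -0.5 * n * ((ybar - mu0) ** 2 - (ybar - mu_draws) ** 2)

# Aitkin-style evidence summary: the posterior probability
# that the likelihood ratio L(mu0)/L(mu) exceeds one
p_support_null = np.mean(log_lr > 0.0)
```

In this toy case the posterior probability that the likelihood ratio exceeds one coincides with the two-sided frequentist P-value of the usual z-statistic, which illustrates the connection noted above.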

Aitkin’s approach to sample surveys is to use the Bayesian bootstrap. This avoids specifying distributions for the survey variables and, as it is different for different sampling schemes, has a design-based feel to it. It is not clear to me at this stage how general the approach is and how it handles arbitrary sampling schemes, including informative sampling. Aitkin argues that the multinomial is a universal model that is always correct, but the covariance structure is restrictive and we may need to describe more structured populations; this latter point is discussed at the end of Chapter 4 (pp. 130–131). Moreover, additional normal models are needed for small-area problems (pp. 125–127). An important question about the approach concerns its motivation: why do we want to proceed in this way? To some extent, the answer may be that for some reason we want to use a Bayesian approach with a design-based flavour.
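For readers unfamiliar with it, the Bayesian bootstrap (Rubin, 1981) is easy to state: the population is modelled as multinomial on the observed support points, and with a flat Dirichlet prior the posterior weights are Dirichlet(1, …, 1). The following minimal sketch, with made-up data, shows posterior inference for a population mean; it is an illustration of the general technique, not of Aitkin's specific survey analyses.

```python
import numpy as np

rng = np.random.default_rng(1)

# A small illustrative sample of a survey variable (invented values)
y = np.array([2.1, 3.4, 0.7, 5.2, 4.1, 1.9, 2.8, 3.3])

# Bayesian bootstrap: posterior weights on the observed support
# points are Dirichlet(1, ..., 1); draw B weight vectors
B = 20_000
w = rng.dirichlet(np.ones(len(y)), size=B)  # shape (B, n)

# Posterior draws of the population mean: weighted averages of the data
mean_draws = w @ y

# A 95% posterior interval for the population mean
lo, hi = np.quantile(mean_draws, [0.025, 0.975])
```

No distributional form is assumed for the survey variable itself, which is what gives the approach its design-based flavour.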

The book contains some useful points that are known but ought to be better known, and it is useful to have a reference for them. These include the comment that a prior has a different role from the upper-level model in a multilevel model (p. 22) and the illustration that parametrization is very important in Bayesian analysis (pp. 24–34). I would have liked some discussion of another often-ignored issue, the importance of identifiability in Bayesian analysis. On the other hand, Aitkin is more sanguine than many in recommending improper priors, adopting an optimistically pragmatic interpretation (pp. 22–24) and living dangerously by treating them as limits of proper priors (e.g. p. 76) without warning less experienced readers of what can go wrong with this approach.

I have two more general points of difference with Aitkin. First, he calls the usual deviance the frequentist deviance, and minus twice the log-likelihood ratio used in his analysis the deviance rather than the Bayesian deviance. This is understandable in the context of the book but it is unfortunate and may lead to confusion in more general discussion: the meaning of deviance should no longer be taken as understood. Second, in the examples, distributions are presented graphically mostly by plots of cumulative distribution functions. This is not a good idea – to me they nearly all look the same. This is a familiar problem with distribution functions.

Most readers will not agree with everything in this book but it repays careful and thoughtful reading. I am pleased to have had the opportunity to read it.

Centre for Mathematics and its Applications,
The Australian National University
Aust. N. Z. J. Stat. 00(0), 2011, 1–2 doi: 10.1111/j.1467-842X.2011.00613.x

This book describes an approach to inference based on using the likelihood function as the primary measure of evidence for parameters and models. The emphasis on evidence rather than decision theory makes the book especially relevant to scientific investigations. It gives interesting and thoughtful comparisons with alternative approaches to inference, arguing that the one presented here has particular strengths. In place of Bayes factors to compare models, a strategy using the full posterior distribution of the likelihood is described. The book also shows that the approach provides a natural strategy for finite population inference. The author describes the overall result as providing a “general integrated Bayesian/likelihood analysis of statistical models”, to serve as an alternative to standard Bayesian inference and as a foundation “for a course sequence” in modern Bayesian theory. The very deep and solid inferential foundations the book lays support a matching, carefully thought-out and impressive superstructure, covering variance component models, finite mixtures, regression, ANOVA, complex survey designs, and other topics. It would provide a valuable and thought-provoking volume for advanced students studying the foundations of inference and their practical implications. It would make a particularly good book for a reading group.

David J. Hand
Mathematics Department, Imperial College
London SW7 2AZ, UK
International Statistical Review (2011), 79, 1, 114–143