School Seminars and Colloquia

Computing highly accurate parametric inference from discrete data

Statistics Seminar

by Chris J. Lloyd


Institution: University of Melbourne
Date: Tue 22nd November 2011
Time: 1:00 PM
Location: Room 213, Richard Berry Building, University of Melbourne

Abstract: In the context of discrete data, there is a very clean theory of what constitutes an exact P-value and an exact upper limit. Unfortunately, exact inference requires optimization over all nuisance parameters, which is not computationally feasible for most models. Recently, it has become clear that “parametric bootstrap” versions of these exact methods have almost exact properties, while avoiding optimization with respect to the nuisance parameters. On the other hand, computing these “parametric bootstrap” inferences still requires computation of order equal to the cardinality of the sample space, which is astronomical for most models. This suggests a Monte Carlo approximation.
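As a rough illustration of the idea (not the speaker's actual code), the sketch below computes a Monte Carlo “parametric bootstrap” P-value in which the nuisance parameter is fixed at its estimate under the null rather than maximized over; the two-sample binomial setup, sample sizes, and function names are assumptions chosen purely for illustration.

```python
import numpy as np

def bootstrap_pvalue(t_obs, stat, simulate, theta_hat, n_sim=10000, seed=0):
    """Monte Carlo approximation to a parametric-bootstrap P-value:
    the nuisance parameter is fixed at its null estimate theta_hat,
    and Pr(T >= t_obs | theta_hat) is estimated by simulation."""
    rng = np.random.default_rng(seed)
    count = 0
    for _ in range(n_sim):
        y = simulate(theta_hat, rng)    # draw a data set from the fitted null model
        count += stat(y) >= t_obs       # compare simulated statistic with observed
    return count / n_sim

# Illustrative use: testing p1 == p2 for two binomial samples,
# with the pooled proportion as the estimated nuisance parameter.
def stat(y, n1=20, n2=20):
    return y[0] / n1 - y[1] / n2

def simulate(p_pooled, rng, n1=20, n2=20):
    return np.array([rng.binomial(n1, p_pooled), rng.binomial(n2, p_pooled)])

y_obs = np.array([14, 8])
p_hat = y_obs.sum() / 40
print(bootstrap_pvalue(stat(y_obs), stat, simulate, p_hat))
```

In models of realistic size, looping over the full sample space in this way (or simulating naively at every parameter value of interest) is exactly the cost the abstract refers to, which motivates the importance sampling approach described next.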

I develop an importance sampling approach to computing both P-values and upper limits in the presence of nuisance parameters. A standard advantage is the variance reduction typically associated with importance sampling. The other advantage is that estimates of tail probabilities can be generated, as a function of all parameters, from a single set of importance samples; there is no need to smooth the simulation noise. In the case of upper limits, the new method easily outperforms the existing methods of Garthwaite and Buckland (1992) and Garthwaite and Jones (2009), which have only been developed for models with no nuisance parameters. Current code runs very reliably for binomial regression models with up to 20 parameters.
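A minimal sketch of the reweighting idea, again not the speaker's code: one set of samples drawn under a single parameter value is reused to estimate the tail probability at every value on a grid via likelihood-ratio weights. The one-parameter binomial model, the grid, and all names are assumptions for illustration only.

```python
import numpy as np
from scipy.stats import binom

def tail_prob_curve(t_obs, stat, n, p_sim, p_grid, n_sim=10000, seed=1):
    """Estimate Pr(T >= t_obs | p) for every p in p_grid from a single
    set of draws under p_sim, reweighting each draw by the likelihood
    ratio binom(n, p).pmf(y) / binom(n, p_sim).pmf(y)."""
    rng = np.random.default_rng(seed)
    y = rng.binomial(n, p_sim, size=n_sim)            # importance samples
    exceed = stat(y) >= t_obs                         # indicator of the tail event
    log_q = binom.logpmf(y, n, p_sim)                 # sampling (proposal) density
    estimates = []
    for p in p_grid:
        w = np.exp(binom.logpmf(y, n, p) - log_q)     # importance weights
        estimates.append(np.mean(w * exceed))
    return np.array(estimates)

# One simulation run gives the whole tail-probability curve in p,
# which can then be inverted to obtain an upper confidence limit.
probs = tail_prob_curve(t_obs=15, stat=lambda y: y, n=20,
                        p_sim=0.7, p_grid=np.linspace(0.4, 0.95, 12))
print(probs.round(3))
```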

For More Information: contact Mihee Lee, email: miheel@unimelb.edu.au