Beyond HC: More Sensitive Tests for Rare/Weak Alternatives
by Tom Porter
Abstract: Higher Criticism (HC) is a popular tool that can
be applied to many large-scale inference problems, such as signal detection, multiple comparisons, goodness-of-fit testing, feature selection, classification, and clustering.
Taken at face value, HC is a novel and
simple approach that seems to satisfy various lower-order asymptotic optimality criteria (such as attaining the problem-relevant ‘optimal’ phase diagram or detection boundary). However, the fundamental reason why HC should succeed, while other conceptually
similar approaches do not, is less clear.
In this talk, we will reveal some of this mystery
by establishing a new parametric mixture interpretation for HC (and the closely related Berk–Jones (1979) statistic) in the context of multiple comparisons, goodness-of-fit testing, and mixture detection. Our interpretation enables a further understanding
of when and why HC might be successful, or unsuccessful, in practice. In doing so, we will suggest a rule-of-thumb for its implementation that hints at when one should go ‘beyond HC’ into potentially more sophisticated methods.
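As a concrete point of reference (not material from the talk itself), the standard Donoho–Jin form of the HC statistic scans the standardised gaps between the sorted p-values and the uniform distribution. A minimal sketch follows; the search fraction `alpha0 = 0.5` and the clipping constant are conventional implementation choices, not prescribed by the abstract:

```python
import numpy as np

def higher_criticism(pvals, alpha0=0.5):
    """Donoho-Jin Higher Criticism statistic (illustrative sketch).

    pvals: p-values from n independent tests.
    alpha0: fraction of the smallest p-values searched over; 0.5 is a
    conventional default and a tuning choice, not prescribed here.
    """
    n = len(pvals)
    # sort, and clip away exact 0/1 to avoid dividing by zero below
    p = np.clip(np.sort(np.asarray(pvals, dtype=float)), 1e-12, 1 - 1e-12)
    i = np.arange(1, n + 1)
    # standardised gap between the empirical and uniform CDF at each p_(i)
    hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1 - p))
    k = max(1, int(alpha0 * n))
    return hc[:k].max()
```

For example, `higher_criticism([0.01, 0.2, 0.5, 0.9])` is roughly 4.82, driven by the unusually small first p-value; under a global null of uniform p-values the maximised statistic stays moderate.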
One of these sophisticated methods is an adaptive
score test that can be interpreted as a goodness-of-fit test based on the empirical moment generating function. We will show that the adaptive score test, and associated generalised likelihood ratio test, possess better higher-order asymptotic and empirical
properties than HC within a 1+o(1) neighbourhood of the detection boundary under sparse normal mixture local alternatives.
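The link to the empirical moment generating function can be sketched as follows. Under the sparse normal mixture (1 − ε)N(0, 1) + εN(μ, 1), the score in ε at ε = 0 for observation z is exp(μz − μ²/2) − 1, which has mean 0 and variance exp(μ²) − 1 under the null; averaging it is equivalent to centring and rescaling the empirical MGF of the z-values at μ. The talk's exact adaptive construction is not specified in the abstract, so the grid of μ values below is an illustrative assumption, not the authors' implementation:

```python
import numpy as np

def score_stat(z, mu):
    """Score statistic for (1-eps)N(0,1) + eps*N(mu,1) at eps = 0,
    standardised by its null variance exp(mu^2) - 1. Equivalent to a
    centred, rescaled empirical MGF of z evaluated at mu."""
    z = np.asarray(z, dtype=float)
    n = len(z)
    # likelihood-ratio score: mean 0 under the N(0,1) null
    lr = np.exp(mu * z - mu**2 / 2.0) - 1.0
    return lr.sum() / np.sqrt(n * np.expm1(mu**2))

def adaptive_score(z, mus=None):
    """Maximise the standardised score over a grid of candidate mu values.
    The grid below is an illustrative assumption, not the talk's choice."""
    if mus is None:
        mus = np.linspace(0.1, 3.0, 30)
    return max(score_stat(z, m) for m in mus)
```

Each `score_stat(z, mu)` is asymptotically standard normal under the null for fixed μ; the adaptive version pays a price for maximising over μ, which is where the higher-order comparisons with HC become relevant.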
For More Information: Joint work with Dr Michael Stewart (School of Mathematics
and Statistics, University of Sydney).