Statistics Papers

Document Type

Journal Article

Date of this Version

5-2006

Publication Source

The Annals of Statistics

Volume

34

Issue

1

Start Page

78

Last Page

91

DOI

10.1214/009053606000000155

Abstract

Let X|μ∼Np(μ,vxI) and Y|μ∼Np(μ,vyI) be independent p-dimensional multivariate normal vectors with common unknown mean μ. Based on only observing X=x, we consider the problem of obtaining a predictive density p̂(y|x) for Y that is close to p(y|μ) as measured by expected Kullback–Leibler loss. A natural procedure for this problem is the (formal) Bayes predictive density p̂U(y|x) under the uniform prior πU(μ)≡1, which is best invariant and minimax. We show that any Bayes predictive density will be minimax if it is obtained by a prior yielding a marginal that is superharmonic or whose square root is superharmonic. This yields wide classes of minimax procedures that dominate p̂U(y|x), including Bayes predictive densities under superharmonic priors. Fundamental similarities and differences with the parallel theory of estimating a multivariate normal mean under quadratic loss are described.
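Under the uniform prior πU(μ)≡1, the predictive density p̂U(y|x) is the Np(x,(vx+vy)I) density, and its Kullback–Leibler risk is constant in μ (it is best invariant), equal to (p/2)·log(1+vx/vy). The following is a minimal numerical sketch of that fact; the values of p, vx, vy, and mu are illustrative choices, not taken from the paper, and the closed-form KL expression used is the standard one for spherical Gaussians.

```python
import math
import random

# Illustrative problem dimensions and variances (not from the paper).
p, vx, vy = 5, 1.0, 2.0
mu = [0.7] * p          # arbitrary true mean; the KL risk does not depend on it
vw = vx + vy            # variance of the uniform-prior predictive density p_U(.|x)

def kl_loss(x):
    """KL( N_p(mu, vy I) || N_p(x, vw I) ): closed form for spherical Gaussians."""
    sq = sum((m - xi) ** 2 for m, xi in zip(mu, x))
    return 0.5 * (p * math.log(vw / vy) + p * vy / vw + sq / vw - p)

# Monte Carlo estimate of the risk: average KL loss over X ~ N_p(mu, vx I).
random.seed(0)
n = 20000
risk_mc = sum(
    kl_loss([random.gauss(m, math.sqrt(vx)) for m in mu]) for _ in range(n)
) / n

# Constant (best-invariant) risk of p_U: (p/2) * log(1 + vx/vy).
risk_exact = 0.5 * p * math.log(1.0 + vx / vy)
print(risk_mc, risk_exact)
```

The shrinkage results of the paper say this constant risk can be beaten everywhere by predictive densities built from superharmonic priors; the sketch above only verifies the baseline that those procedures dominate.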

Copyright/Permission Statement

The original published work is available at: https://projecteuclid.org/euclid.aos/1146576256#abstract

Keywords

Bayes rules, heat equation, inadmissibility, multiple shrinkage, multivariate normal, prior distributions, shrinkage estimation, superharmonic marginals, superharmonic priors, unbiased estimate of risk


Date Posted: 27 November 2017

This document has been peer reviewed.