On predictive density estimation with additional information

Research output: Contribution to journal › Article › peer-review

9 Scopus citations

Abstract

Based on independently distributed $X_1 \sim N_p(\theta_1, \sigma_1^2 I_p)$ and $X_2 \sim N_p(\theta_2, \sigma_2^2 I_p)$, we consider the efficiency of various predictive density estimators for $Y_1 \sim N_p(\theta_1, \sigma_Y^2 I_p)$, with the additional information $\theta_1 - \theta_2 \in A$ and known $\sigma_1^2, \sigma_2^2, \sigma_Y^2$. We provide improvements on benchmark predictive densities such as those obtained by plug-in, by maximum likelihood, or as minimum risk equivariant. Dominance results are obtained for $\alpha$-divergence losses and include Bayesian improvements for Kullback-Leibler (KL) loss in the univariate case ($p = 1$). An ensemble of techniques is exploited, including variance expansion, point estimation duality, and concave inequalities. Representations for Bayesian predictive densities, and in particular for $q_{\Pi_{U,A}}$ associated with a uniform prior for $\theta = (\theta_1, \theta_2)$ truncated to $\{\theta \in \mathbb{R}^{2p} : \theta_1 - \theta_2 \in A\}$, are established and are used for the Bayesian dominance findings. Finally and interestingly, these Bayesian predictive densities also relate to skew-normal distributions, as well as to new forms of such distributions.
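As a concrete illustration of the variance-expansion technique mentioned in the abstract, consider the simplest unrestricted univariate case (ignoring, for this sketch only, the additional information $\theta_1 - \theta_2 \in A$): for $X_1 \sim N(\theta_1, \sigma_1^2)$ and $Y_1 \sim N(\theta_1, \sigma_Y^2)$, the plug-in predictive density $N(X_1, \sigma_Y^2)$ is dominated under KL loss by the variance-expanded density $N(X_1, \sigma_Y^2 + \sigma_1^2)$, whose KL risk $\tfrac{1}{2}\log(1 + \sigma_1^2/\sigma_Y^2)$ is below the plug-in risk $\sigma_1^2/(2\sigma_Y^2)$ since $\log(1+x) \le x$. The Monte Carlo check below is a minimal sketch of this classical fact, not the paper's restricted-parameter estimators; all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma1, sigmaY = 1.0, 1.0
theta1 = 0.3  # KL risk is constant in theta1 in this unrestricted setting


def kl_normal(mu_p, var_p, mu_q, var_q):
    """KL( N(mu_p, var_p) || N(mu_q, var_q) ), vectorized over mu_q."""
    return 0.5 * (np.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)


n = 200_000
X1 = rng.normal(theta1, sigma1, size=n)

# Plug-in predictive density: N(X1, sigmaY^2)
risk_plugin = kl_normal(theta1, sigmaY**2, X1, sigmaY**2).mean()

# Variance-expanded predictive density: N(X1, sigmaY^2 + sigma1^2)
risk_expanded = kl_normal(theta1, sigmaY**2, X1, sigmaY**2 + sigma1**2).mean()

# Closed-form risks: sigma1^2/(2 sigmaY^2)  vs  0.5*log(1 + sigma1^2/sigmaY^2)
print(risk_plugin, risk_expanded)
```

With $\sigma_1 = \sigma_Y = 1$ the closed-form risks are $0.5$ (plug-in) versus $\tfrac{1}{2}\log 2 \approx 0.347$ (expanded), so the Monte Carlo averages should land near those values and confirm the dominance.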
Original language: English
Pages (from-to): 4209-4238
Number of pages: 30
Journal: Electronic Journal of Statistics
Volume: 12
Issue number: 2
DOIs
State: Published - Jan 1 2018

Keywords

  • α-divergence loss
  • Additional information
  • Bayes estimators
  • Dominance
  • Duality
  • Frequentist risk
  • Kullback-Leibler loss
  • Multivariate normal
  • Plug-in
  • Predictive density
  • Restricted parameter
  • Skew-normal
  • Variance expansion
