21 Jan, 2016

Admissibility (Continued)


  • Posterior means are not unbiased estimators unless the Bayes risk is zero (under squared-error loss)
  • Proof:
    • Assume there is a posterior mean that is unbiased, say $\delta^\pi(x)$. Then \(\delta^\pi(x) = E[\theta \mid x]\) (the posterior-mean property) and, by unbiasedness, \(E_{x \mid \theta}[\delta^\pi(x)] = \theta\)

Bayes risk: $\rho(\pi,\delta^\pi) = E_{x,\theta}[L(\theta,\delta^\pi(x))] = E_{\theta,x}[L(\theta,\delta^\pi(x))]$, i.e. the joint expectation can be iterated in either order. Combining the two orderings with the unbiasedness assumption shows this is possible only when $\rho(\pi,\delta^\pi) = 0$.
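
Filling in the cross-term argument (a sketch under squared-error loss, assuming the relevant second moments exist): conditioning on $x$ gives \(E_{\theta,x}[\theta\,\delta^\pi(x)] = E_x\big[E_{\theta\mid x}[\theta]\,\delta^\pi(x)\big] = E_x[\delta^\pi(x)^2],\) while conditioning on $\theta$ and using unbiasedness gives \(E_{\theta,x}[\theta\,\delta^\pi(x)] = E_\theta\big[\theta\,E_{x\mid\theta}[\delta^\pi(x)]\big] = E_\theta[\theta^2].\) Hence \(\rho(\pi,\delta^\pi) = E[(\delta^\pi(x)-\theta)^2] = E[\delta^\pi(x)^2] - 2E[\theta\,\delta^\pi(x)] + E[\theta^2] = 0,\) so an unbiased posterior mean can exist only in the degenerate case of zero Bayes risk.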

Comments on Admissibility

  • Bayes rules are “almost always” admissible, e.g., under:
    • proper priors
    • unique Bayes rule
    • continuous loss functions

Exchangeability

Infinite Exchangeability: $x_1, x_2, \dots$ is an infinitely exchangeable sequence of random variables if, for every $n$, the joint probability satisfies $P(x_1,\dots,x_n) = P(x_{\pi(1)},\dots,x_{\pi(n)})$ for any permutation $\pi$ of $\{1,\dots,n\}$.

Finite Exchangeability: $(x_1,\dots,x_n)$ is finitely exchangeable if, for that particular $n$, $P(x_1,\dots,x_n) = P(x_{\pi(1)},\dots,x_{\pi(n)})$ for any permutation $\pi$ of $\{1,\dots,n\}$.

Example (exchangeable but not independent)

There are $m$ balls in a box, $r$ of them numbered 1 and the rest numbered 0. Balls are drawn from the box without replacement; let $x_k$ be the number on the $k^{th}$ ball drawn. The resulting sequence $(x_1,\dots,x_n)$ is exchangeable, but the $x_k$’s are not independent, since each draw changes the composition of the box.
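
A minimal check of this example (a sketch; the values of $m$, $r$, and the test sequences below are arbitrary illustrative choices) computes exact sequence probabilities and confirms both properties: permutations of a sequence are equally likely, yet the draws are not independent.

```python
import itertools
from fractions import Fraction

# Urn with m balls, r of them labeled 1, drawn without replacement.
m, r = 5, 2  # arbitrary illustrative values

def sequence_prob(seq):
    """Exact probability of observing a particular 0/1 sequence of draws."""
    ones, zeros, remaining, p = r, m - r, m, Fraction(1)
    for x in seq:
        p *= Fraction(ones if x == 1 else zeros, remaining)
        if x == 1:
            ones -= 1
        else:
            zeros -= 1
        remaining -= 1
    return p

# Exchangeability: every permutation of a sequence has the same probability.
seq = (1, 0, 1)
assert all(sequence_prob(perm) == sequence_prob(seq)
           for perm in itertools.permutations(seq))

# Non-independence: P(x2 = 1 | x1 = 1) differs from P(x2 = 1).
p_x2_1 = sum(sequence_prob((x1, 1)) for x1 in (0, 1))
p_x1_1 = sum(sequence_prob((1, x2)) for x2 in (0, 1))
print("P(x2=1) =", p_x2_1)                               # 2/5
print("P(x2=1 | x1=1) =", sequence_prob((1, 1)) / p_x1_1)  # 1/4
```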

de Finetti’s Representation Theorem

Let $x_1, x_2,\dots$ be an infinitely exchangeable sequence of binary random variables with joint probability measure $\mathcal P$. Then there exists a distribution $Q$ such that \(p(x_1,\dots,x_n) = \displaystyle\int_0^1 \prod_{i=1}^n \theta^{x_i}(1-\theta)^{1-x_i}\, dQ(\theta),\) where \(Q(\theta) = \lim_{n\rightarrow \infty} \mathcal{P}(y_n/n \le \theta), \qquad y_n = x_1+\dots+x_n\)

  • Conditional on $\theta$, the $x_i$’s are i.i.d. Bernoulli($\theta$)
  • $\theta = \lim_{n\rightarrow\infty} y_n / n$ (SLLN)
  • $Q$ represents the beliefs about the limiting relative frequency of 1’s
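
A small numerical check of the representation (a sketch; the mixing distribution $Q = \text{Beta}(2,3)$ and the test sequence are arbitrary choices, not from the lecture): drawing $\theta \sim Q$ and then conditionally i.i.d. Bernoulli($\theta$) samples reproduces the joint probability given by the integral, and that probability is invariant under permutations of the sequence.

```python
import math
import random

def beta_fn(a, b):
    """Beta function B(a, b) via gamma functions."""
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

def joint_prob(xs, a, b):
    """p(x_1,...,x_n) = int_0^1 prod_i theta^{x_i} (1-theta)^{1-x_i} dQ(theta),
    in closed form when Q = Beta(a, b)."""
    s, n = sum(xs), len(xs)
    return beta_fn(a + s, b + n - s) / beta_fn(a, b)

def mc_joint_prob(xs, a, b, draws=200_000, seed=0):
    """Monte Carlo version of the same mixture:
    draw theta ~ Q, then x_i | theta i.i.d. Bernoulli(theta)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(draws):
        theta = rng.betavariate(a, b)
        sample = tuple(1 if rng.random() < theta else 0 for _ in xs)
        hits += (sample == tuple(xs))
    return hits / draws

a, b = 2.0, 3.0      # arbitrary mixing distribution Q = Beta(2, 3)
xs = (1, 0, 1, 1)    # an arbitrary binary sequence
print("integral form :", joint_prob(xs, a, b))       # 3/70 ~ 0.0429
print("Monte Carlo   :", mc_joint_prob(xs, a, b))
print("permuted      :", joint_prob((1, 1, 1, 0), a, b))  # same value
```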

General Representation Theorem

Let $x_1, x_2,\dots$ be an infinitely exchangeable sequence of real-valued random variables with joint probability measure $\mathcal P$. Then there exists a distribution $Q$ over distribution functions $F$ such that \(\mathcal{P}(x_1,\dots,x_n) = \displaystyle\int \prod_{i=1}^n F(x_i)\, dQ(F),\) where $Q(F) = \lim_{n\rightarrow\infty} \mathcal{P}(F_n)$ and $F_n$ is the empirical distribution defined as \(F_n(x) = \frac{1}{n}\sum_{i=1}^n I(x_i \le x)\)
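
As a concrete (and deliberately restricted) illustration of the hierarchical reading of this theorem, the sketch below uses a hypothetical choice of random $F$: a Normal($\mu$, 1) distribution with $\mu \sim N(0,1)$. Drawing $F$ first and then sampling i.i.d. from it yields an exchangeable real-valued sequence, and the empirical distribution $F_n$ can be evaluated directly.

```python
import random

def sample_exchangeable(n, seed=0):
    """Draw an exchangeable real-valued sequence hierarchically:
    first pick a distribution F (here: Normal(mu, 1) with mu ~ N(0, 1)),
    then draw x_1,...,x_n i.i.d. from F."""
    rng = random.Random(seed)
    mu = rng.gauss(0.0, 1.0)              # the random "F" is Normal(mu, 1)
    return [rng.gauss(mu, 1.0) for _ in range(n)]

def empirical_cdf(xs):
    """Return F_n(x) = (1/n) * sum_i I(x_i <= x)."""
    n = len(xs)
    return lambda x: sum(xi <= x for xi in xs) / n

xs = sample_exchangeable(1000)
F_n = empirical_cdf(xs)
print("F_n(0) =", F_n(0.0))   # for large n, F_n approaches the realised F
```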