Solved Examples of the Cramer-Rao Lower Bound


ELE 530 Theory of Detection and Estimation
April 9, 2012
Handout #9

Homework #4 Solutions

1. Rayleigh Samples. Suppose $Y_i$ are i.i.d. samples from $p_\theta(y)$, for $i = 1, \ldots, n$, where $\theta \in \mathbb{R}$ is unknown, and the family of distributions $p_\theta(y)$ is the Rayleigh family given by
\[
p_\theta(y) = \frac{y}{\theta} e^{-\frac{y^2}{2\theta}} u(y),
\]
where $u(y)$ is the unit step function. The distribution of the entire i.i.d. vector of samples $p_\theta(y^n)$ can be inferred from $p_\theta(y)$.
a. Is $p_\theta(y^n)$ an exponential family?
b. Find a complete sufficient statistic for $\theta$.
c. What is the minimum variance unbiased estimator (MVUE) for $\theta$?
d. Can the Cramer-Rao Bound be applied? If so, use it to obtain a lower bound on the variance of unbiased estimators for $\theta$. Is there an efficient estimator? (An unbiased estimator is efficient if it meets the Cramer-Rao bound.)
e. What is the maximum likelihood estimator for $\theta$?



Solution:

a. Yes, $p_\theta(y^n)$ is an exponential family. We can write it in the proper form as
\[
p_\theta(y^n) = \prod_{i=1}^n p_\theta(y_i) = \frac{1}{\theta^n} \left( \prod_{i=1}^n y_i \, u(y_i) \right) e^{-\frac{1}{2\theta} \sum_{i=1}^n y_i^2}.
\]

b. The parameter $-\frac{1}{2\theta}$, in the exponent of the formula for $p_\theta(y^n)$ given above, takes on values in the set $(-\infty, 0)$, which is an open rectangle in $\mathbb{R}^1$. Therefore, the statistic $\sum_{i=1}^n Y_i^2$ is complete. It is a complete sufficient statistic for $\theta$.

c. The second moment of a Rayleigh distributed random variable is $2\theta$. Therefore,
\[
\hat{\theta} = \frac{1}{2n} \sum_{i=1}^n Y_i^2
\]
is the only unbiased function of the complete sufficient statistic found in part b and is therefore the minimum variance unbiased estimator.

d. First we calculate the score function:
\[
s(\theta, Y^n) = \frac{\partial}{\partial \theta} \ln p_\theta(Y^n) = -\frac{n}{\theta} + \frac{1}{2\theta^2} \sum_{i=1}^n Y_i^2.
\]
From the above equation, we can see that the expected value of the score function is zero, and the Cramer-Rao Bound can be applied.






However, we will factor the score function further to show that there is an efficient estimator and to identify the Fisher information directly. (As a side note, once we have shown that an efficient estimator exists, we already know that the estimator from part c is the efficient estimator; but the efficient estimator will also be obvious from the factored score function itself.)
\[
s(\theta, Y^n) = \frac{n}{\theta^2} \left( \frac{1}{2n} \sum_{i=1}^n Y_i^2 - \theta \right).
\]
From the above factorization, we know that there is an efficient estimator, and we find that the Fisher information is
\[
I(\theta) = \frac{n}{\theta^2}.
\]
Therefore, the Cramer-Rao Lower Bound for all unbiased estimators is
\[
\operatorname{Var}(\hat{\theta}) \ge \frac{\theta^2}{n}.
\]

e. From the score function we also see that the maximum likelihood estimator is the same as the MVUE from part c. In Problem 4 we show that this is always the case whenever an efficient estimator exists.
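As a quick numerical sanity check, the following Monte Carlo sketch compares the empirical variance of $\hat{\theta}$ with the bound $\theta^2/n$; since the estimator is efficient, the two should agree. The true $\theta$, sample size, and trial count below are arbitrary choices.

```python
import numpy as np

# Monte Carlo sketch: the MVUE theta_hat = (1/(2n)) sum(Y_i^2) should be
# unbiased with variance equal to the CRLB theta^2 / n, since it is efficient.
# numpy's rayleigh(scale) has density (y/scale^2) exp(-y^2 / (2 scale^2)),
# so scale = sqrt(theta) matches p_theta(y) above.
rng = np.random.default_rng(0)
theta = 2.0        # arbitrary true parameter
n = 50             # samples per trial
trials = 100_000   # Monte Carlo repetitions

y = rng.rayleigh(scale=np.sqrt(theta), size=(trials, n))
theta_hat = (y ** 2).sum(axis=1) / (2 * n)

print("mean of estimator: ", theta_hat.mean())   # ~ theta (unbiased)
print("empirical variance:", theta_hat.var())    # ~ theta^2 / n
print("CRLB theta^2 / n:  ", theta ** 2 / n)
```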






2. Uniform Samples. Suppose $Y_i$ are i.i.d. samples $\sim \mathrm{Unif}[-\theta, \theta]$, for $i = 1, \ldots, n$, where $\theta > 0$ is unknown. The distribution of the entire i.i.d. vector of samples $p_\theta(y^n)$ can be inferred from $p_\theta(y)$.
a. Is $p_\theta(y^n)$ an exponential family?
b. Find a scalar sufficient statistic for $\theta$.
c. What is an unbiased estimator based on the sufficient statistic of part b?
d. Can the Cramer-Rao Bound be applied? If so, use it to obtain a lower bound on the variance of unbiased estimators for $\theta$. Is there an efficient estimator? (An unbiased estimator is efficient if it meets the Cramer-Rao bound.)
e. What is the maximum likelihood estimator for $\theta$?



Solution:

a. The family of distributions $p_\theta(y^n)$ is not an exponential family. The support of an exponential family must be the same for all $\theta$, and that is not the case with the uniform distribution we are dealing with.

b. We use the Neyman-Fisher Factorization Theorem to find a sufficient statistic:
\[
p_\theta(y^n) = \prod_{i=1}^n p_\theta(y_i) = \frac{1}{(2\theta)^n} \prod_{i=1}^n \mathbf{1}_{y_i \in [-\theta,\theta]} = \frac{1}{(2\theta)^n} \mathbf{1}_{\max_i |y_i| \le \theta}.
\]
Therefore, $T = \max_i |Y_i|$ is a sufficient statistic. We can show that $T$ is complete. First we derive the distribution of $T$. Notice that the $|Y_i|$ are independent and each uniformly distributed on $[0, \theta]$. Thus, by the formula for the density of the maximum,
\[
p_{T|\theta}(t) = n \left( F_{|Y|\,|\theta}(t) \right)^{n-1} p_{|Y|\,|\theta}(t) = n \left( \frac{t}{\theta} \right)^{n-1} \frac{1}{\theta} \, \mathbf{1}_{t \in [0,\theta]} = \frac{n}{\theta} \left( \frac{t}{\theta} \right)^{n-1} \mathbf{1}_{t \in [0,\theta]}.
\]
Now take any function $v(T)$ that has zero mean for all $\theta$ and notice the following chain






of implications:
\[
\int_0^\theta v(t) \, \frac{n}{\theta} \left( \frac{t}{\theta} \right)^{n-1} dt = 0 \quad \forall \theta > 0,
\]
\[
\frac{n}{\theta^n} \int_0^\theta v(t) \, t^{n-1} \, dt = 0 \quad \forall \theta > 0,
\]
\[
\int_0^\theta v(t) \, t^{n-1} \, dt = 0 \quad \forall \theta > 0,
\]
\[
v(t) \, t^{n-1} = 0 \quad \text{for almost every } t \ge 0,
\]
\[
v(t) = 0 \quad \text{for almost every } t > 0.
\]
The fourth implication follows by differentiating the third with respect to $\theta$, and it need only hold for $t$ almost everywhere, but that is actually all that we need. Therefore, $v(T)$ is zero with probability one, and $T$ is complete.

c. First we find the expected value of our sufficient statistic:
\[
E\,T = \int_0^\theta t \, \frac{n}{\theta} \left( \frac{t}{\theta} \right)^{n-1} dt = \int_0^\theta n \left( \frac{t}{\theta} \right)^n dt = \frac{n}{n+1} \theta.
\]
An unbiased estimator is $\hat{\theta} = \frac{n+1}{n} \max_i |Y_i|$. Since we showed that the statistic is complete, this estimator is the MVUE.

d. First we calculate the score function:
\[
s(\theta, Y^n) = \frac{\partial}{\partial \theta} \ln p_\theta(Y^n) =
\begin{cases}
-\frac{n}{\theta}, & \max_i |Y_i| < \theta, \\
\text{undefined}, & \text{elsewhere.}
\end{cases}
\]
Therefore,
\[
E\, s(\theta, Y^n) = -\frac{n}{\theta} \neq 0,
\]
and the Cramer-Rao Bound does not apply.

e. The score function is negative wherever it is defined, so the likelihood is decreasing in $\theta$ on the set where it is nonzero and is maximized at the smallest admissible value. Therefore,
\[
\hat{\theta}_{ML} = \max_i |Y_i|.
\]
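For contrast with Problem 1, a similar Monte Carlo sketch (again with arbitrary $\theta$, $n$, and trial count) illustrates why the failure of the regularity conditions matters: the variance of the MVUE here works out to $\theta^2/(n(n+2))$, well below the now-inapplicable expression $\theta^2/n$.

```python
import numpy as np

# Monte Carlo sketch for the uniform case: the regularity conditions fail,
# and the MVUE theta_hat = ((n+1)/n) max|Y_i| beats the (inapplicable)
# expression theta^2 / n; its variance is theta^2 / (n(n+2)).
rng = np.random.default_rng(0)
theta = 3.0        # arbitrary true parameter
n = 50             # samples per trial
trials = 100_000   # Monte Carlo repetitions

y = rng.uniform(-theta, theta, size=(trials, n))
theta_hat = (n + 1) / n * np.abs(y).max(axis=1)

print("mean of estimator: ", theta_hat.mean())            # ~ theta
print("empirical variance:", theta_hat.var())             # ~ theta^2/(n(n+2))
print("theta^2/(n(n+2)):  ", theta ** 2 / (n * (n + 2)))
print("theta^2/n (not a valid bound here):", theta ** 2 / n)
```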






3. MMSE vs. MVU and ML. Consider jointly Gaussian random variables $X$ and $Y$ with zero mean, unit variance, and correlation $E\,XY = \rho$.
a. What is the MMSE estimate of $X$ given $Y$?
b. Now treat $X$ like a parameter (ignore the marginal distribution on $X$ and just consider the conditional distribution $p(y|x)$). What is the minimum variance unbiased (MVU) estimate of $X$ given $Y$, and what is the maximum likelihood (ML) estimate of $X$ given $Y$?

Solution:

a.
\[
\hat{X}_{MMSE} = E(X|Y) = \rho Y.
\]

b. Notice that the family of distributions $p(y|x)$ is an exponential family:
\[
p(y|x) = \frac{1}{\sqrt{2\pi(1-\rho^2)}} \, e^{-\frac{(y-\rho x)^2}{2(1-\rho^2)}} = \left( \frac{1}{\sqrt{2\pi(1-\rho^2)}} \, e^{-\frac{y^2}{2(1-\rho^2)}} \right) e^{\frac{\rho x}{1-\rho^2} y} \, e^{-\frac{\rho^2 x^2}{2(1-\rho^2)}}.
\]
Therefore, $Y$ is complete. The expected value of $Y$ is $\rho X$. To make it unbiased we must divide by $\rho$:
\[
\hat{X}_{MVU} = \frac{Y}{\rho}.
\]
It is obvious from $p(y|x)$ that the maximum likelihood estimate for $X$ will be the one that causes the term $(y - \rho x)^2$ to be zero. Thus,
\[
\hat{X}_{ML} = \frac{Y}{\rho}.
\]
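As a small sketch of part b (with an arbitrary $\rho$ and an arbitrary fixed value of $x$), one can check numerically that $Y/\rho$ is unbiased for $x$ under $p(y|x)$:

```python
import numpy as np

# Sketch of part b: with X treated as a fixed parameter x, we have
# Y ~ N(rho * x, 1 - rho^2), and Y / rho should be unbiased for x.
rng = np.random.default_rng(0)
rho = 0.7          # arbitrary correlation
x = 1.5            # arbitrary fixed parameter value
trials = 100_000

y = rng.normal(loc=rho * x, scale=np.sqrt(1 - rho ** 2), size=trials)
x_hat = y / rho

print("mean of Y/rho:", x_hat.mean())   # ~ x (unbiased)
print("variance:     ", x_hat.var())    # ~ (1 - rho^2) / rho^2
```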






4. Efficiency of ML. Show that if an efficient unbiased estimator exists, then it is the maximum likelihood estimator. Consider the necessary and sufficient condition for an efficient estimator to exist, stated in Theorem 3.1 of Kay Vol. 1.

Solution: From Theorem 3.1 of Kay Vol. 1, concerning the Cramer-Rao lower bound, we know that an efficient estimator exists if and only if the score function can be factored as
\[
s(\theta, X) = \frac{\partial}{\partial \theta} \ln p_\theta(X) = I(\theta)\,(g(X) - \theta),
\]
for some function $g(X)$, which is the efficient estimator, and $I(\theta)$, which is the Fisher information. Choosing $\theta$ to maximize the likelihood is equivalent to maximizing the log-likelihood, so to find $\hat{\theta}_{ML}$ we can inspect the derivative of the log-likelihood with respect to $\theta$, which is precisely the score function. Notice that the Fisher information $I(\theta)$ is non-negative. Therefore, the score function is positive for $\theta < g(X)$ and negative for $\theta > g(X)$. This means that
\[
\hat{\theta}_{ML} = g(X) = \hat{\theta}_{MVU}.
\]
The maximum likelihood estimator is also unique, except in the degenerate case where $I(\theta) = 0$ on some non-empty open interval of $\theta$. However, this would mean that the family of distributions does not change over that interval, which is not a reasonable model to work with for parameter estimation.
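To connect this back to Problem 1, here is a small symbolic sketch (using SymPy, with the symbol $S$ standing in for $\sum_i y_i^2$) verifying that the Rayleigh score factors exactly as $I(\theta)(g - \theta)$ with $I(\theta) = n/\theta^2$ and $g = S/(2n)$:

```python
import sympy as sp

# Symbolic sketch: factor the Rayleigh score (Problem 1) into the form
# I(theta) * (g - theta) required by Theorem 3.1, with S = sum_i y_i^2.
theta, n = sp.symbols('theta n', positive=True)
S = sp.Symbol('S', positive=True)

# log-likelihood of the n-sample Rayleigh family, dropping terms free of theta
log_lik = -n * sp.log(theta) - S / (2 * theta)
score = sp.diff(log_lik, theta)

# I(theta) = n / theta^2 and g = S / (2n); the residual simplifies to zero
residual = sp.simplify(score - (n / theta ** 2) * (S / (2 * n) - theta))
print(residual)  # prints 0
```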