Estimating the Common Mean of k Normal Populations with Known Variance

Consider the problem of estimating the common mean of $k$ normal populations with known variances. We study the admissibility of the best linear risk-unbiased equivariant (BLRUE) estimator of the common mean of $k$ normal populations under the squared error and LINEX loss functions when the variances are known.


Introduction
Suppose we have $k$ independent populations, where the $i$th population is $N(\theta, \sigma_i^2)$, $i = 1, \dots, k$. The parameter $\theta$ is unknown, and the variances $\sigma_i^2 > 0$, $i = 1, \dots, k$, are all assumed to be known. Let $\bar{X}_i$, $i = 1, \dots, k$, be the means of independent random samples of sizes $n_i$ drawn from these populations, so that $\bar{X}_i \sim N(\theta, \sigma_i^2/n_i)$.

Combining two or more unbiased estimators of an unknown parameter $\theta$ in order to obtain a better unbiased estimator (in the sense of smaller risk) is a problem that often arises in statistics, for example when $k$ independent sets of measurements of the same quantity are available. The problem of estimating the common mean of two or more independent normal populations has received attention from several authors in the past. For references in this regard, see Graybill and Deal (1959), Sinha and Mouqadem (1989), and Pal and Sinha (1996) for a complete bibliography. See also Lehmann and Casella (1998, pp. 95-96), Sanjari Farsipour (1999), and Sanjari Farsipour and Asgharzadeh (2002) for further references and comments.

In Section 2, a class of risk-unbiased estimators which combine the sample means $\bar{X}_i$ is developed, and in Section 3 the region of admissibility of estimators of the form $\sum_{i=1}^{k} c_i \bar{X}_i + d$ is derived under the squared error loss function
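As a numerical illustration of combining independent unbiased estimators, the following minimal Python sketch draws replicated sample means and combines them with weights proportional to the precisions $n_i/\sigma_i^2$. All concrete numbers ($\theta$, the variances, the sample sizes) are assumed values for the demonstration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (assumed values): k = 3 populations with common
# mean theta and known variances sigma2, sampled n[i] times each.
theta = 5.0
sigma2 = np.array([1.0, 4.0, 9.0])
n = np.array([10, 10, 10])

def combined_mean(xbar, sigma2, n):
    """Combine independent sample means with weights proportional to
    the precisions n_i / sigma_i^2 (the weights sum to one)."""
    w = (n / sigma2) / np.sum(n / sigma2)
    return np.sum(w * xbar)

# Monte Carlo: draw many replicates of the vector of sample means.
reps = 20000
xbars = rng.normal(theta, np.sqrt(sigma2 / n), size=(reps, len(sigma2)))
w = (n / sigma2) / np.sum(n / sigma2)
combined = xbars @ w

# The combined estimator has variance 1 / sum(n_i / sigma_i^2), smaller
# than sigma_i^2 / n_i for every individual sample mean.
print(combined.var(), xbars[:, 0].var())
```

With these assumed numbers the combined estimator has theoretical variance $1/\sum_i n_i/\sigma_i^2 \approx 0.073$, versus $0.1$ for the most precise single sample mean, which is the sense in which combination yields a "better" unbiased estimator.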
$$L_1(\theta, \delta) = (\delta - \theta)^2, \qquad (1.1)$$

which is a symmetric loss function. In Section 4, the inadmissibility of estimators of the form $\sum_{i=1}^{k} c_i \bar{X}_i + d$ is studied under the LINEX loss function. In practice, the real loss function is often not symmetric, and overestimation can lead to more or less severe consequences than underestimation. Varian (1975) employed an asymmetric loss function, known as the LINEX loss, which was subsequently used extensively by Zellner (1986), Rojo (1987), Sadooghi-Alvandi and Nematollahi (1989), and others. In this regard, our next loss function is

$$L_2(\theta, \delta) = b\left(e^{a(\delta - \theta)} - a(\delta - \theta) - 1\right), \qquad (1.2)$$

where $a \neq 0$ and $b > 0$. The regions of admissibility and inadmissibility of estimators of the form $\sum_{i=1}^{k} c_i \bar{X}_i + d$ under this loss are obtained in the final sections.

Risk-Unbiased Equivariant Estimators
It is natural to restrict attention to estimators that are symmetric, or, to use the usual terminology, equivariant with respect to translation of the sample space, that is,

$$\delta(x + a) = \delta(x) + a \quad \text{for all } a, \qquad (2.1)$$

where $x = (x_1, x_2, \dots, x_k)$. An estimator satisfying (2.1) will be called equivariant under translation. An alternative impartiality restriction which is applicable to our problem is the condition of unbiasedness. Following Lehmann and Casella (1998), this leads to the risk-unbiased estimators (2.7) and (2.8). Obviously, the estimators (2.7) and (2.8) are location equivariant (see (2.1)), but their risks are complicated.
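The asymmetry of the LINEX loss can be seen numerically. The sketch below (plain Python, with the illustrative choice $a = 1$, $b = 1$; these values are assumptions for the demonstration) evaluates the loss for over- and underestimation errors of the same magnitude, and checks that for small errors the loss is approximately quadratic.

```python
import math

def linex(error, a, b=1.0):
    """LINEX loss b*(exp(a*e) - a*e - 1) for estimation error e = delta - theta."""
    return b * (math.exp(a * error) - a * error - 1.0)

a = 1.0
over = linex(+1.0, a)   # overestimate theta by one unit
under = linex(-1.0, a)  # underestimate theta by one unit
print(over, under)      # for a > 0, overestimation is penalized more

# For small |a*e| the loss behaves like the symmetric quadratic (a*e)^2 / 2.
print(linex(0.01, a), (a * 0.01) ** 2 / 2.0)
```

For $a > 0$ the loss rises roughly exponentially for overestimation and roughly linearly for underestimation; taking $a < 0$ reverses the asymmetry, which is why the sign of $a$ encodes which kind of error is more serious.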

Admissibility Results under Loss (1.1)
Consider the admissibility of an arbitrary linear function $\delta(c_1, \dots, c_k, d) = \sum_{i=1}^{k} c_i \bar{X}_i + d$ under the loss (1.1). The risk function of $\sum_{i=1}^{k} c_i \bar{X}_i + d$ with respect to the squared error loss (1.1) is

$$R(\theta, \delta) = \sum_{i=1}^{k} c_i^2 \frac{\sigma_i^2}{n_i} + \left[\left(\sum_{i=1}^{k} c_i - 1\right)\theta + d\right]^2. \qquad (3.1)$$

So, we have the following theorem.
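The squared error risk of a linear estimator $\sum_i c_i \bar{X}_i + d$ decomposes into a variance term plus a squared bias term, and this decomposition is easy to verify by simulation. In the sketch below, all concrete numbers ($\theta$, $d$, the $c_i$, the variances, and the sample sizes) are assumed illustrative values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed illustrative values: two populations with common mean theta,
# known variances sigma2, sample sizes n, and coefficients (c, d).
theta, d = 2.0, 0.3
sigma2 = np.array([1.0, 2.0])
n = np.array([5, 8])
c = np.array([0.4, 0.3])

# Analytic risk: variance term plus squared bias.
risk = np.sum(c**2 * sigma2 / n) + ((c.sum() - 1.0) * theta + d) ** 2

# Monte Carlo estimate of E[(estimator - theta)^2] for comparison.
xbars = rng.normal(theta, np.sqrt(sigma2 / n), size=(200000, 2))
mc_risk = np.mean((xbars @ c + d - theta) ** 2)
print(risk, mc_risk)
```

Note that the bias term depends on $\theta$ unless $\sum_i c_i = 1$, which is why the case $\sum_i c_i = 1$ (constant risk) plays a special role in the admissibility analysis that follows.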
(i) The case $0 \le c_i \le 1$, $i = 1, \dots, k$, and $0 \le \sum_{i=1}^{k} c_i < 1$ is considered first. If $c_i = 0$, $i = 1, \dots, k$, then $\delta(0, \dots, 0, d) = d$ is admissible, since it is the only estimator with zero risk at $\theta = d$. For finding the Bayes estimator of $\theta$, consider the normal prior with mean $\mu$ and variance $\tau^2$. The posterior distribution is then normal with mean and variance given by

$$\frac{\sum_{i=1}^{k} (n_i/\sigma_i^2)\,\bar{X}_i + \mu/\tau^2}{\sum_{i=1}^{k} n_i/\sigma_i^2 + 1/\tau^2} \quad\text{and}\quad \frac{1}{\sum_{i=1}^{k} n_i/\sigma_i^2 + 1/\tau^2},$$

respectively. It can be seen that the unique Bayes estimator is the posterior mean,

$$\delta_\tau(\bar{X}_1, \dots, \bar{X}_k) = \frac{\sum_{i=1}^{k} (n_i/\sigma_i^2)\,\bar{X}_i + \mu/\tau^2}{\sum_{i=1}^{k} n_i/\sigma_i^2 + 1/\tau^2}, \qquad (3.2)$$

and that the associated Bayes risk is finite; hence this estimator is admissible. It follows that $\delta(c_1, \dots, c_k, d)$ is admissible whenever $0 \le c_i \le 1$, $i = 1, \dots, k$, and $0 \le \sum_{i=1}^{k} c_i < 1$.

(ii) Next let $\sum_{i=1}^{k} c_i = 1$ and $d = 0$, and put

$$c_i' = \frac{n_i/\sigma_i^2}{\sum_{j=1}^{k} n_j/\sigma_j^2}, \qquad i = 1, \dots, k.$$

Note that if $\sum_{i=1}^{k} c_i = 1$ and $d = 0$, then from (3.1) the risk is constant in $\theta$:

$$R(\theta, \delta) = \sum_{i=1}^{k} c_i^2 \frac{\sigma_i^2}{n_i}. \qquad (3.3)$$

It can be shown that the risk (3.3) is minimized, subject to $\sum_{i=1}^{k} c_i = 1$, when $c_i = c_i'$, and hence $\sum_{i=1}^{k} c_i \bar{X}_i$ is inadmissible when $c_i \neq c_i'$.

To show that $\delta(c_1', \dots, c_k', 0)$ is admissible, the limiting Bayes method due to Blyth (1951) may be used. Suppose that $\delta = \delta(c_1', \dots, c_k', 0)$ is not admissible. Then there is an estimator $\delta^*$ such that $R(\theta, \delta^*) \le R(\theta, \delta)$ for all $\theta$, with strict inequality for at least some $\theta$. Now, $R(\theta, \delta)$ is a continuous function of $\theta$ for every $\delta$, so there exist $\varepsilon > 0$ and $\theta_0 < \theta_1$ such that $R(\theta, \delta^*) < R(\theta, \delta) - \varepsilon$ for all $\theta_0 < \theta < \theta_1$. Let $r_\tau^*$ be the average risk of $\delta^*$ with respect to the prior distribution $N(0, \tau^2)$, and let $r_\tau$ be the Bayes risk of the Bayes estimator (3.2) with respect to $N(0, \tau^2)$. Writing $P = \sum_{i=1}^{k} n_i/\sigma_i^2$, the average risk of $\delta$ is $\bar{r}_\tau = 1/P$, while $r_\tau = 1/(P + 1/\tau^2)$, so that

$$\frac{\bar{r}_\tau - r_\tau^*}{\bar{r}_\tau - r_\tau} \ge \frac{\varepsilon \int_{\theta_0}^{\theta_1} \frac{1}{\sqrt{2\pi}\,\tau}\, e^{-\theta^2/(2\tau^2)}\, d\theta}{\frac{1}{P} - \frac{1}{P + 1/\tau^2}} = \frac{\varepsilon\, P (P\tau^2 + 1)}{\sqrt{2\pi}\,\tau} \int_{\theta_0}^{\theta_1} e^{-\theta^2/(2\tau^2)}\, d\theta.$$

The integrand converges monotonically to $1$ as $\tau \to \infty$; hence, by the Lebesgue monotone convergence theorem, the integral converges to $\theta_1 - \theta_0$, while the factor in front tends to infinity, so the ratio converges to $\infty$. Thus, there exists $\tau_0 < \infty$ such that $r_{\tau_0}^* < r_{\tau_0}$, which contradicts the fact that $r_{\tau_0}$ is the Bayes risk for $N(0, \tau_0^2)$. It follows that $\delta(c_1', \dots, c_k', 0)$ is admissible.
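The claim that the constant risk of a weighted mean with $\sum_i c_i = 1$ is minimized by precision-proportional weights $c_i'$ can be checked numerically. The sketch below uses assumed illustrative variances and sample sizes and compares the risk at the precision weights with the risk at many random weight vectors on the simplex.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed illustrative values for the known variances and sample sizes.
sigma2 = np.array([1.0, 4.0, 2.0])
n = np.array([4, 6, 5])
prec = n / sigma2                 # precisions n_i / sigma_i^2
c_opt = prec / prec.sum()         # precision-proportional weights c_i'

def risk_convex(c):
    """Constant risk sum(c_i^2 sigma_i^2 / n_i) of sum(c_i Xbar_i), sum(c_i) = 1."""
    return np.sum(c**2 * sigma2 / n)

# Every random weight vector on the simplex does at least as badly.
trials = [risk_convex(rng.dirichlet(np.ones(3))) for _ in range(1000)]
print(min(trials) >= risk_convex(c_opt))

# At the optimum, the risk equals the reciprocal of the total precision.
print(risk_convex(c_opt), 1.0 / prec.sum())
```

The closed-form identity checked in the last line, minimum risk $= 1/\sum_i n_i/\sigma_i^2$, follows from minimizing the quadratic $\sum_i c_i^2 \sigma_i^2/n_i$ subject to $\sum_i c_i = 1$ with a Lagrange multiplier.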

The Inadmissibility Results under Loss (1.1)
To see what can be said about the other values of the $c_i$'s, $i = 1, \dots, k$, we shall now prove an inadmissibility result for linear estimators $\sum_{i=1}^{k} c_i \bar{X}_i + d$ which is quite general and, in particular, does not require the assumption of normality.
Theorem 4.1: The estimator $\sum_{i=1}^{k} c_i \bar{X}_i + d$ is inadmissible under squared error loss whenever one of the following conditions holds.

Thus, $\sum_{i=1}^{k} c_i \bar{X}_i + d$ is inadmissible in each of these cases.
Proof: The case $c_i = 0$, $i = 1, \dots, k$, is considered first. If $c_i = 0$, $i = 1, \dots, k$, then $\delta(0, \dots, 0, d) = d$ is admissible, since it is the only estimator with zero risk at $\theta = d$. Now consider the Bayes estimator when the prior distribution on $\theta$ is normal with mean $\mu$ and variance $\tau^2$. Then, using (3.2) in Zellner (1986), it follows that the unique Bayes estimator under the LINEX loss (1.2) is

$$\delta_\tau = -\frac{1}{a} \ln E\left[e^{-a\theta} \mid \bar{X}_1, \dots, \bar{X}_k\right] = \frac{\sum_{i=1}^{k} (n_i/\sigma_i^2)\,\bar{X}_i + \mu/\tau^2}{\sum_{i=1}^{k} n_i/\sigma_i^2 + 1/\tau^2} - \frac{a}{2}\,\frac{1}{\sum_{i=1}^{k} n_i/\sigma_i^2 + 1/\tau^2}.$$

If $\sum_{i=1}^{k} c_i = 1$, then the risk of $\delta(c_1, \dots, c_k, d)$, as is seen from (5.1), is given by

$$R(\theta, \delta) = \exp\left(a d + \frac{a^2}{2} \sum_{i=1}^{k} c_i^2 \frac{\sigma_i^2}{n_i}\right) - a d - 1. \qquad (5.3)$$

It can be shown that the risk (5.3) is minimized when $c_i = c_i'$ and $d = d' = -\frac{a}{2} \sum_{i=1}^{k} (c_i')^2 \sigma_i^2/n_i$, and hence in this case $\delta(c_1, \dots, c_k, d)$ is inadmissible when $c_i \neq c_i'$ or $d \neq d'$. To show that $\delta(c_1', \dots, c_k', d')$ is admissible, the limiting Bayes method may again be used. Suppose that $\delta(c_1', \dots, c_k', d')$ is not admissible; then there exists an estimator $\delta^*$ such that $R(\theta, \delta^*) \le R(\theta, \delta)$ for all $\theta$, with strict inequality for at least some $\theta$. By the continuity of $R(\theta, \delta)$, there exist $\varepsilon > 0$ and $\theta_0 < \theta_1$ such that $R(\theta, \delta^*) < R(\theta, \delta) - \varepsilon$ for all $\theta_0 < \theta < \theta_1$. Let $r_\tau^*$ be the average risk of $\delta^*$ with respect to the prior distribution $N(0, \tau^2)$. Then it can be shown, as in Section 3, that the resulting ratio of risk deficiencies tends to infinity as $\tau \to \infty$, which yields the same contradiction and establishes the admissibility of $\delta(c_1', \dots, c_k', d')$.
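The stationary point of the constant LINEX risk in the shift $d$ can be verified numerically: differentiating $\exp(ad + a^2 v/2) - ad - 1$ with respect to $d$ and setting it to zero gives $d' = -av/2$, where $v$ denotes the variance of the weighted mean. In the sketch below, the values of $a$ and $v$ are assumed for illustration and are not taken from the paper.

```python
import numpy as np

# Assumed illustrative values: LINEX parameter a and the variance v of
# the precision-weighted mean sum(c_i' Xbar_i).
a, v = 1.5, 0.25

def linex_risk(d):
    """Risk of sum(c_i' Xbar_i) + d under LINEX loss; constant in theta."""
    return np.exp(a * d + a**2 * v / 2.0) - a * d - 1.0

d_prime = -a * v / 2.0                # makes the exponent a*d + a^2 v/2 vanish
grid = np.linspace(-2.0, 2.0, 4001)   # brute-force check over a grid of shifts
d_best = grid[np.argmin(linex_risk(grid))]
print(d_prime, d_best, linex_risk(d_prime))
```

At $d'$ the exponent vanishes, so the minimal risk reduces to $a^2 v/2$; in particular, the optimal estimator under LINEX loss shrinks the weighted mean by a variance-dependent amount rather than leaving it unbiased.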

The Inadmissibility Results under Loss (1.2)
We shall now prove an inadmissibility result for linear estimators under the loss (1.2).