Bayesian Sequential Estimation of the Inverse of the Pareto Shape Parameter

The problem addressed is that of developing a sequential procedure for estimating the inverse of the shape parameter of the Pareto distribution under the squared loss, assuming that the shape parameter is the value of a random variable having a density function with compact support and that the cost per observation is one unit. A stopping time is proposed and a second-order asymptotic expansion is obtained for the Bayes regret incurred by the proposed procedure.


Introduction
Let X_1, …, X_n denote independent observations to be taken sequentially from the Pareto distribution with p.d.f.

f(x | θ) = θ x^{−(θ+1)},  x > 1,  (1)

where θ > 0 is the shape parameter. The quantity to be estimated is δ(θ) = 1/θ, the inverse of the shape parameter; the loss incurred in estimating δ(θ) by d is taken to be

L(θ, d) = a (d − δ(θ))²,  a > 0,  (2)

and the cost per observation is one unit. In a sequential investigation, the sample size n is not chosen in advance; instead, data are analyzed as they become available, and whether to stop taking observations is decided according to a stopping time t, say. That t is a stopping time means that t takes on the values 1, 2, … and has the properties that P{t < ∞} = 1 and that {t = n} ∈ F_n for each integer n ≥ 1, where F_n is the sigma-algebra generated by X_1, …, X_n. The advantage of using sequential methods in estimation or hypothesis-testing problems is that procedures can be constructed with a substantially smaller number of observations than equally reliable procedures based on a predetermined sample size.
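The transformation that drives the analysis is Y = ln X: if X has the density in (1), then Y has the Exponential distribution with parameter θ and mean δ(θ) = 1/θ. A quick simulation sketch (the parameter values and variable names are illustrative, not from the paper):

```python
import math
import random

random.seed(1)
theta = 2.0          # shape parameter (illustrative value)
n = 200_000

# Inverse-CDF sampling: if U ~ Uniform(0, 1), then U**(-1/theta) has the
# Pareto density theta * x**(-(theta + 1)) on (1, infinity), as in (1).
xs = [random.random() ** (-1.0 / theta) for _ in range(n)]
ys = [math.log(x) for x in xs]          # Y = ln X should be Exponential(theta)

mean_y = sum(ys) / n                    # should be near 1/theta = 0.5
var_y = sum((y - mean_y) ** 2 for y in ys) / n   # should be near 1/theta**2 = 0.25
```

The sample mean and variance of the Y_i match the Exponential(θ) values 1/θ and 1/θ² closely, as the conditional distribution theory requires.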
Throughout this paper, it is assumed that θ (the shape parameter) is the value of a random variable Θ having a (prior) density function ξ with compact support in (0, ∞), and the objective is to determine a stopping time t for which the Bayes regret (see (5) below) of the procedure (t, δ̂_t) is as small as possible for large a. In order to anticipate the nature of the stopping time t, it is necessary to find the best fixed sample size. So, let E_ξ denote expectation with respect to a probability measure P_ξ under which X_1, X_2, … are conditionally independent random variables with common p.d.f. (1), given Θ = θ, and Θ is a random variable having the (prior) density function ξ with compact support in (0, ∞). Let Y_i = ln X_i for i ≥ 1 and let

Ȳ_n = n^{−1}(Y_1 + … + Y_n),  (3)

the maximum likelihood estimate of δ(θ) = 1/θ based on X_1, …, X_n; under the squared loss (2), the Bayes estimate of δ(Θ) is the posterior mean δ̂_n = E_ξ{δ(Θ) | X_1, …, X_n}. Lemma 1 below states that if ξ is continuously differentiable on its compact support, then δ̂_n = Ȳ_n − n^{−1} E_ξ{ξ′(Θ)/ξ(Θ) | X_1, …, X_n}. Let E_θ denote conditional expectation, given Θ = θ. Since Y_1, …, Y_n are conditionally independent with common distribution the Exponential distribution with parameter θ, the risk incurred by estimating δ(θ) by (3) under the loss (2), plus the sampling cost, is a δ²(θ)/n + n. The Bayes risk E_ξ{a δ²(Θ)}/n + n is minimized by the fixed sample size n_a = (a E_ξ[δ²(Θ)])^{1/2}, with minimum value 2 n_a, and the excess over this minimum is the regret incurred by the procedure (t, δ̂_t). The Bayes regret can thus be written as

R(a) = E_ξ{a (δ̂_t − δ(Θ))² + t} − 2 (a E_ξ[δ²(Θ)])^{1/2}  (5)

for a > 0. Bayesian sequential estimation problems were studied by Bickel and Yahav (1969), Alvo (1977), Rasmussen (1980), Shapiro and Wardrop (1980), Woodroofe (1981, 1985), Tahir (1989), and Woodroofe and Hardwick (1990), among others.
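The fixed-sample benchmark can be made concrete. Treating E_ξ[δ²(Θ)] as a known constant, the Bayes risk a E_ξ[δ²(Θ)]/n + n is minimized at n_a = (a E_ξ[δ²(Θ)])^{1/2}, with minimum 2 n_a; the sketch below verifies this numerically (the numbers are illustrative, not values from the paper):

```python
import math

a = 10_000.0   # weight on the squared-error loss (illustrative)
d2 = 0.25      # stands in for E[delta(Theta)**2], e.g. Theta concentrated near 2

def fixed_sample_risk(n):
    """Bayes risk of the fixed-sample-size procedure: estimation risk plus cost."""
    return a * d2 / n + n

n_star = math.sqrt(a * d2)             # best fixed sample size, here 50.0
min_risk = fixed_sample_risk(n_star)   # equals 2 * sqrt(a * d2), here 100.0
```

Any other fixed sample size does strictly worse, which is why the stopping time below tries to track n_a adaptively.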
In Section 2, preliminary results for the analysis of the Bayes regret are obtained. The main result is presented in Section 3; it provides an asymptotic expansion for the Bayes regret. The proposed procedure can be used to estimate the mean loss for insurance policyholders or the mean insured value of homes in an optimal fashion. It specifies how many insurance policyholders or homes should be selected and provides an estimate of the population mean, based on this number. Note that the resulting estimate is δ̂_t, where t is given by (4).
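Since display (4) is not reproduced above, the following sketch assumes, purely for illustration, a stopping rule of the common form t = inf{n ≥ m : n ≥ a^{1/2} Ȳ_n}, which mimics the best fixed sample size with Ȳ_n in place of δ(θ); the names and parameter values are illustrative:

```python
import math
import random

random.seed(7)

def run_once(theta, a, m=2):
    """Sample Y_i = ln X_i sequentially and stop at the first n >= m with
    n >= sqrt(a) * Ybar_n.  This rule is an assumed stand-in for display (4),
    chosen to mimic the best fixed sample size with Ybar_n in place of delta."""
    total, n = 0.0, 0
    while True:
        n += 1
        total += math.log(random.random() ** (-1.0 / theta))  # Y ~ Exp(theta)
        ybar = total / n
        if n >= m and n >= math.sqrt(a) * ybar:
            return n, ybar

theta, a = 2.0, 10_000.0
runs = [run_once(theta, a) for _ in range(200)]
mean_t = sum(t for t, _ in runs) / len(runs)    # near sqrt(a) / theta = 50
mean_est = sum(e for _, e in runs) / len(runs)  # near delta(theta) = 0.5
```

With these values the stopping time concentrates near a^{1/2} δ(θ) = 50, in line with the fixed-sample benchmark above.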

Lemma 1
Let Ω denote the support of ξ and let Ȳ_n be as in (3). If ξ is continuously differentiable on Ω, then

E_ξ{δ(Θ) | F_n} = Ȳ_n − n^{−1} E_ξ{ξ′(Θ)/ξ(Θ) | F_n}

for n ≥ 1.

Proof:
The conditional density of Θ, given F_n, is proportional to θ^n e^{−θ n Ȳ_n} ξ(θ) on Ω. Integrating θ^n ξ(θ) e^{−θ n Ȳ_n} by parts over Ω, where the boundary terms vanish because ξ has compact support in (0, ∞), yields the stated identity. The lemma follows.
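For a concrete check of the identity above (as reconstructed here by integration by parts), take the smooth prior ξ(θ) ∝ e^{−θ} restricted to [0.5, 4], for which ξ′/ξ ≡ −1 on the support, so the identity predicts E_ξ{δ(Θ) | F_n} = Ȳ_n + 1/n. The sketch below evaluates both sides by numerical integration over a grid (all values illustrative):

```python
import math

# Illustrative smooth prior: xi(theta) proportional to exp(-theta) on [0.5, 4],
# so xi'(theta)/xi(theta) = -1 on the support.
lo, hi, grid = 0.5, 4.0, 4000
thetas = [lo + (hi - lo) * i / grid for i in range(grid + 1)]

n, ybar = 500, 0.5        # assumed data summary; S = n * Ybar_n
S = n * ybar

# Posterior density proportional to theta**n * exp(-theta * S) * xi(theta);
# shift log-weights by their maximum to avoid underflow.
logw = [n * math.log(th) - th * S - th for th in thetas]
shift = max(logw)
w = [math.exp(lw - shift) for lw in logw]

def post_mean(f):
    """Posterior expectation of f(Theta) by the trapezoidal rule."""
    num = sum((f(p) * wp + f(q) * wq) / 2.0
              for p, q, wp, wq in zip(thetas, thetas[1:], w, w[1:]))
    den = sum((wp + wq) / 2.0 for wp, wq in zip(w, w[1:]))
    return num / den

lhs = post_mean(lambda th: 1.0 / th)                  # E[delta(Theta) | data]
rhs = ybar - (1.0 / n) * post_mean(lambda th: -1.0)   # Ybar_n - (1/n) E[xi'/xi | data]
```

Both sides agree to within the quadrature error, at the value Ȳ_n + 1/n = 0.502.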

Lemma 2
Let Ω denote the support of ξ and let Ȳ_n be as in (3). If ξ is twice continuously differentiable on Ω, then

E_ξ{δ²(Θ) | F_n} = (n/(n−1)) Ȳ_n² − (2/(n−1)) E_ξ{δ(Θ) ξ′(Θ)/ξ(Θ) | F_n} − (n(n−1))^{−1} E_ξ{ξ″(Θ)/ξ(Θ) | F_n}

for n ≥ 2.

Proof:
Let x_1 > 0, …, x_n > 0 denote the observed values of X_1, …, X_n, respectively, and let s = ln x_1 + … + ln x_n. The conditional density of Θ, given the data, is then proportional to θ^n e^{−θ s} ξ(θ) on Ω. Integrating by parts twice, where the boundary terms vanish because ξ has compact support in (0, ∞), yields the stated identity. The lemma follows.
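The same grid computation checks the second-order identity: for the prior ξ(θ) ∝ e^{−θ} on [0.5, 4], ξ′/ξ ≡ −1 and ξ″/ξ ≡ 1, so the identity (as reconstructed here) reduces to E_ξ{δ²(Θ) | F_n} = (n/(n−1)) Ȳ_n² + (2/(n−1)) E_ξ{δ(Θ) | F_n} − (n(n−1))^{−1}. A sketch with illustrative values:

```python
import math

# Illustrative prior xi(theta) proportional to exp(-theta) on [0.5, 4]:
# xi'/xi = -1 and xi''/xi = 1 on the support.
lo, hi, grid = 0.5, 4.0, 4000
thetas = [lo + (hi - lo) * i / grid for i in range(grid + 1)]

n, ybar = 500, 0.5        # assumed data summary; S = n * Ybar_n
S = n * ybar
logw = [n * math.log(th) - th * S - th for th in thetas]
shift = max(logw)
w = [math.exp(lw - shift) for lw in logw]

def post_mean(f):
    """Posterior expectation of f(Theta) by the trapezoidal rule."""
    num = sum((f(p) * wp + f(q) * wq) / 2.0
              for p, q, wp, wq in zip(thetas, thetas[1:], w, w[1:]))
    den = sum((wp + wq) / 2.0 for wp, wq in zip(w, w[1:]))
    return num / den

lhs2 = post_mean(lambda th: 1.0 / th ** 2)     # E[delta(Theta)^2 | data]
mean_delta = post_mean(lambda th: 1.0 / th)    # E[delta(Theta) | data]
rhs2 = ((n / (n - 1.0)) * ybar ** 2
        + (2.0 / (n - 1.0)) * mean_delta
        - 1.0 / (n * (n - 1.0)))
```

The two sides again agree to within quadrature error, consistent with the double integration by parts.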
Let F_t denote the sigma-algebra generated by Y_1, …, Y_t. Lemma 1 and Lemma 2 imply that the Bayes regret in (5) becomes (6) for any a > 0, with the notation defined in (7).

Lemma 3
Let t be defined by (4) with m ≥ 2. Then (i) there exists δ_0 > 0 such that t ≥ δ_0 a^{1/2} w.p.1 (P_ξ) for any a > 0, and (ii) t → ∞ and δ̂_t → δ(Θ) w.p.1 (P_ξ) as a → ∞.

Proof:
Assertion (i) holds by the definitions of t and δ̂_t, since ξ has compact support in (0, ∞). To establish Assertion (ii), observe that, for any a > 0, t → ∞ w.p.1 (P_ξ) as a → ∞, by the definition of t. It follows that δ̂_t → δ(Θ) w.p.1 (P_ξ) as a → ∞, since the sequence δ̂_n, n ≥ 1, is a uniformly integrable martingale such that δ̂_n → δ(Θ) w.p.1 (P_ξ) as n → ∞.
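The martingale property of the posterior mean invoked in the proof can be verified in closed form for one illustrative prior: with ξ(θ) ∝ e^{−θ} (ignoring the truncation to a compact set), the integration-by-parts identity gives δ̂_n = (S_n + 1)/n with S_n = Y_1 + … + Y_n, and E_ξ[Y_{n+1} | F_n] = E_ξ[1/Θ | F_n] = δ̂_n, so E_ξ[δ̂_{n+1} | F_n] = δ̂_n exactly:

```python
# Closed-form martingale check for the posterior mean, under the illustrative
# prior xi(theta) proportional to exp(-theta), truncation to a compact set
# ignored: delta_hat_n = (S_n + 1) / n, and E[Y_{n+1} | F_n] = delta_hat_n, so
# E[delta_hat_{n+1} | F_n] = (S_n + 1 + delta_hat_n) / (n + 1) = delta_hat_n.

def delta_hat(S, n):
    return (S + 1.0) / n

gaps = []
for S, n in [(0.9, 2), (7.3, 12), (250.0, 500)]:
    lhs = delta_hat(S, n)                          # delta_hat_n
    rhs = (S + 1.0 + delta_hat(S, n)) / (n + 1.0)  # E[delta_hat_{n+1} | F_n]
    gaps.append(abs(lhs - rhs))

max_gap = max(gaps)   # zero up to floating-point rounding
```

The gap is zero up to rounding for every (S, n) pair, illustrating the Doob-martingale structure of the sequence of posterior means.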

The Main Result
The Bayes regret in (6) can be rewritten, for a > 0, in terms of U_t = E_ξ{δ²(Θ) | F_t}.

Theorem
Let t be defined by (4) with m ≥ 2 and let ū(a) be as in (7). If ξ is continuously differentiable on its compact support, then the Bayes regret admits a second-order asymptotic expansion as a → ∞.
The proof of the theorem requires Lemmas 4-6 below.

Lemma 4
Let t be defined by (4) with m ≥ 2. Then the stated asymptotic relation for t holds as a → ∞.

Proof:
By the definition of t, the defining inequality holds at t; so that, for some number η > 0, the required bound follows from the first assertion of Lemma 3. Thus, the lemma follows.
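The a^{1/2} growth of t asserted in Lemma 3(i) and refined here can be seen in simulation. As before, since display (4) is not reproduced, the sketch assumes the illustrative rule t = inf{n ≥ m : n ≥ a^{1/2} Ȳ_n}; the ratio t/a^{1/2} then settles near δ(θ):

```python
import math
import random

random.seed(11)

def stop_time(theta, a, m=2):
    """Illustrative stopping rule t = inf{n >= m : n >= sqrt(a) * Ybar_n},
    standing in for display (4)."""
    total, n = 0.0, 0
    while True:
        n += 1
        total += math.log(random.random() ** (-1.0 / theta))  # Y ~ Exp(theta)
        if n >= m and n >= math.sqrt(a) * (total / n):
            return n

theta = 2.0
ratios = {}
for a in (2_500.0, 40_000.0):
    ts = [stop_time(theta, a) for _ in range(200)]
    ratios[a] = (sum(ts) / len(ts)) / math.sqrt(a)   # should settle near 1/theta
```

For both values of a the average of t/a^{1/2} is close to δ(θ) = 0.5, consistent with first-order behavior of order a^{1/2}.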

Lemma 5

Proof:
Let x_1, …, x_n denote the observed values of X_1, …, X_n, respectively, and let θ̂_n denote the maximum likelihood estimate of θ, based on x_1, …, x_n, so that θ̂_n = 1/Ȳ_n. Then, by Lemma 2, the quantities U_n = E_ξ{δ²(Θ) | F_n} and V_n = E_ξ{ξ″(Θ)/ξ(Θ) | F_n} satisfy the identity stated there. This implies the required bound for any a > 0. Thus, the limit follows by the second assertion of Lemma 3 and the facts that U_n and V_n are martingales such that U_n → δ²(Θ) and V_n → ξ″(Θ)/ξ(Θ) w.p.1 (P_ξ) as n → ∞. Next, consider a Taylor expansion of δ(Θ) about θ̂_t, where θ*_t is a random variable between Θ and θ̂_t. Thus, the standardized quantity converges to the Standard Normal distribution as n → ∞ (see Bickel and Yahav (1969)).
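The normal limit invoked here is easy to check for the fixed-sample statistic: Ȳ_n is a mean of Exponential(θ) variables with mean 1/θ and variance 1/θ², so n^{1/2}(Ȳ_n − δ(θ))θ has mean 0 and variance 1 exactly and is approximately standard normal for moderate n. A simulation sketch with illustrative parameters:

```python
import math
import random

random.seed(3)
theta, n, reps = 2.0, 400, 5000
delta = 1.0 / theta

zs = []
for _ in range(reps):
    s = sum(math.log(random.random() ** (-1.0 / theta)) for _ in range(n))
    # sqrt(n) * (Ybar_n - delta) * theta has mean 0 and variance 1 exactly,
    # and is approximately standard normal for moderate n.
    zs.append(math.sqrt(n) * (s / n - delta) * theta)

mean_z = sum(zs) / reps
std_z = math.sqrt(sum((z - mean_z) ** 2 for z in zs) / reps)
```

The empirical mean and standard deviation of the standardized values are close to 0 and 1, as the central limit theorem requires.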

Lemma 6
Let t be defined by (4) with m ≥ 2 and let ū(a) be as in (7). If ξ is continuously differentiable on its compact support, then the stated convergence holds as a → ∞.

Proof:
The convergence follows from the second assertion of Lemma 3 and the fact that δ̂_t → δ(Θ) w.p.1 (P_ξ) as a → ∞. For any a > 0, the required bound holds by the first assertion of Lemma 3. It follows that U_t and V_t, a > 0, are uniformly integrable, since U_n and V_n, n ≥ 1, are uniformly integrable martingales. The lemma follows.

Proof of the Theorem
The theorem follows by taking the limit as a → ∞ in (6), since δ̂_t → δ(Θ) w.p.1 (P_ξ) as a → ∞ and U_t and V_t, a > 0, are uniformly integrable.