Some Characterization Results by Conditional Expectations and their Applications to Lindley-type Distributions

This paper establishes characterization results based on truncated expectations in both the continuous and the discrete case. Right and left truncation are considered and some general results are derived. Several known results on truncated moments, residual moments and residual partial moments are obtained as special cases. These results are then used to derive characterization results for Lindley-type distributions, which in turn provide new methods for estimating the unknown parameter of Lindley-type distributions and for assessing their goodness-of-fit. The results on Lindley-type distributions are applied to some real data sets.


Introduction
Characterizations of probability distributions are important issues in statistical inference. For example, Su and Huang (2000) studied characterizations of distributions based on conditional expectations. Recently, Nanda (2010) studied characterizations through the mean residual life and failure rate functions of absolutely continuous random variables. Ahmed (1991) characterized the beta, binomial and Poisson distributions by connecting conditional expectations with hazard rate functions. Laurent (1974) presented characterizations of distributions by truncated moments. Gupta and Gupta (1983) characterized distributions by the moments of residual life.
In this paper, we obtain some general results for the characterization of distributions, both in the continuous and in the discrete case. More specifically, we obtain characterization results based on knowledge of E[g(X) | X ≥ x] and E[g(X) | X ≤ x]. Results are provided to characterize distributions by means of truncated moments, residual moments, reversed residual moments and partial moments. We are particularly interested in characterizations of the continuous and discrete Lindley distributions.
A random variable X is said to have the Lindley distribution if its probability density function (p.d.f.) is given by

f(x) = β²(1 + x)e^{−βx}/(1 + β), x > 0, β > 0. (1)

The Lindley distribution was first introduced by Lindley (1958) and is now popular in modeling lifetime data. Recently, the Lindley distribution and its extensions have gained much attention in the statistical literature. We will not review this here and refer the reader to Al-Mutairi et al. (2013), Ghitany et al. (2008, 2011, 2013) and the references therein.
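As a quick numerical check of the density in (1), the following sketch (our own illustration; the parameter value β = 1.5 and the integration grid are arbitrary choices) verifies that f integrates to one and that the mean equals the known value (β + 2)/(β(1 + β)):

```python
import numpy as np

def lindley_pdf(x, beta):
    """Lindley density f(x) = beta^2 (1 + x) e^{-beta x} / (1 + beta), x > 0."""
    return beta**2 * (1.0 + x) * np.exp(-beta * x) / (1.0 + beta)

beta = 1.5
x = np.linspace(0.0, 60.0, 200001)   # mass beyond x = 60 is negligible for beta = 1.5
f = lindley_pdf(x, beta)

# trapezoidal rule for the total mass and the mean
dx = np.diff(x)
total = np.sum((f[1:] + f[:-1]) / 2.0 * dx)
mean = np.sum((x[1:] * f[1:] + x[:-1] * f[:-1]) / 2.0 * dx)

print(total)  # close to 1
print(mean)   # close to (beta + 2) / (beta * (beta + 1))
```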
Recently, Ahsanullah et al. (2017) provided two characterizations of Lindley distributions based on a relation between left (right) truncated moments and (reversed) failure rate function.
A random variable X is said to have the discrete Lindley distribution if its probability mass function (p.m.f.) is given as follows. The discrete Lindley distribution and its applications were introduced by Mazucheli and de Oliveira (2016). Another discrete Lindley distribution was studied by Gómez-Déniz and Calderín-Ojeda (2011).
In this paper, we give new characterizations of the above continuous/discrete Lindley distributions based on modified left/right truncated moments. Our new characterizations based on modified left truncated moments can be used to check graphically the goodness-of-fit of the continuous/discrete Lindley distributions. Also, our characterizations provide simple estimates of the parameter in these distributions.
The paper is organized as follows. In Section 2, we present some general results in the continuous case and give some examples. Section 3 contains similar results for the discrete case. Some characterizations of Lindley-type distributions are derived in Section 4. Two applications are presented in Section 5 to illustrate the importance of the characterization results for Lindley-type distributions.

Characterization Results -Continuous Case
In this section, we shall assume that X is a continuous random variable with absolutely continuous distribution function F(x), survival function F̄(x) = 1 − F(x), probability density function f(x), failure rate function r(x) = f(x)/F̄(x) and reversed failure rate function r̃(x) = f(x)/F(x). It is well known that the failure rate function and the reversed failure rate function each characterize the distribution; see, for example, Marshall and Olkin (2007).

Characterization through Truncated Conditional Expectation
Let g(x) be a differentiable function such that 0 < E[g(X)] < ∞. Also assume that g(t)F̄(t) → 0 as t → ∞. Then, writing ψ(x) = E[g(X) | X ≥ x] and integrating by parts, under the assumed conditions, it can be verified that (3) holds. We shall now obtain an expression for the failure rate in terms of the functions ψ(x) and g(x).
Equation (3) can be written in the form below. Differentiating the above equation and simplifying, we get the failure rate. Example 2.1 Characterization through truncated moments; see Navarro and Ruiz (1998).
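For the reader's convenience, the omitted derivation can be reconstructed in standard notation as follows (a sketch of the standard argument, not necessarily the authors' exact displays):

```latex
\begin{align*}
\psi(x) &:= E\bigl[g(X)\mid X \ge x\bigr]
         = \frac{1}{\bar F(x)}\int_x^{\infty} g(t)\,f(t)\,dt, \\
\psi(x)\bar F(x) &= \int_x^{\infty} g(t)\,f(t)\,dt
\;\Longrightarrow\;
\psi'(x)\bar F(x) - \psi(x)f(x) = -g(x)f(x), \\
r(x) &= \frac{f(x)}{\bar F(x)} = \frac{\psi'(x)}{\psi(x) - g(x)}.
\end{align*}
```

Thus ψ and g together determine the failure rate, and hence the distribution.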
Example 2.2 Characterization through residual moments. Assuming t^k F̄(t) → 0 as t → ∞ for all k and integrating by parts, we obtain the identity below. Differentiating the above equation and simplifying, we see that, in this case, two consecutive residual moments are needed to characterize the distribution.
Example 2.3 Characterization through partial moments. Differentiating k times with respect to x and simplifying, we find that one partial moment is enough to characterize the distribution. For more details, see Gupta and Gupta (1983).
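As a sketch of the partial-moment argument (our reconstruction, writing α_k for the kth partial moment):

```latex
\begin{align*}
\alpha_k(x) &:= E\bigl[(X-x)_+^{\,k}\bigr]
             = k\int_x^{\infty} (t-x)^{k-1}\,\bar F(t)\,dt, \\
\alpha_k'(x) &= -k\,\alpha_{k-1}(x), \qquad
\alpha_1'(x) = -\bar F(x),
\end{align*}
```

so a single partial moment α_1 already determines F̄, and hence the distribution.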
Example 2.4 Characterization through certain truncated conditional expectations. In this case, using (3), we have the expression below, which can be rewritten and then differentiated to obtain the result.

Characterization by Reversed Truncated Conditional Expectation
Let g(x) be a differentiable function such that 0 < E[g(X)] < ∞. Also assume that g(0) is finite and g(0)F(0) = 0. Then, writing ϕ(x) = E[g(X) | X ≤ x] and integrating by parts, under the assumed conditions, it can be verified that (9) holds. We shall now obtain an expression for the reversed failure rate in terms of the functions ϕ(x) and g(x).
Equation (9) can be written in the form below. Differentiating the above equation and simplifying, we get the reversed failure rate. Example 2.1R Characterization through reversed truncated moments. Example 2.2R Characterization through reversed residual moments. Assuming t^k F(t) → 0 as t → −∞ for all k and integrating by parts, it can be verified that the identity below holds. Differentiating the above equation and simplifying, we see that, in this case, two consecutive reversed residual moments are needed to characterize the distribution.
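Analogously to the left-truncated case, the reversed derivation can be sketched in standard notation as follows (our reconstruction):

```latex
\begin{align*}
\phi(x) &:= E\bigl[g(X)\mid X \le x\bigr]
         = \frac{1}{F(x)}\int_0^{x} g(t)\,f(t)\,dt, \\
\phi(x)F(x) &= \int_0^{x} g(t)\,f(t)\,dt
\;\Longrightarrow\;
\phi'(x)F(x) + \phi(x)f(x) = g(x)f(x), \\
\tilde r(x) &= \frac{f(x)}{F(x)} = \frac{\phi'(x)}{g(x) - \phi(x)}.
\end{align*}
```

Thus ϕ and g together determine the reversed failure rate, and hence the distribution.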
Example 2.3R Characterization through reversed partial moments. Differentiating k times with respect to x and simplifying, it can be verified that one reversed partial moment is enough to characterize the distribution. For more details, see Gupta and Gupta (1983).
Example 2.4R Characterization through certain reversed truncated conditional expectations. In this case, using (9), we have the expression below, which can be rewritten and then differentiated to obtain the result.

Characterization Results -Discrete Case
In this section, we shall assume that X is a discrete random variable defined on the non-negative integers with probability mass function p(x) = P(X = x) and survival function F̄(x) = P(X ≥ x).

Characterization through Truncated Conditional Expectation
In this case, we have the expression below. The above expression can also be written in an equivalent form. Since the above result is true for all x, we change x to x + 1 and subtract. This gives the probability mass function, where F̄(x) is given by (16).
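In standard notation, the discrete computation presumably runs as follows (our reconstruction, with p(x) = P(X = x) and ψ(x) = E[g(X) | X ≥ x]):

```latex
\begin{align*}
\psi(x)\bar F(x) &= \sum_{t=x}^{\infty} g(t)\,p(t), \\
g(x)\,p(x) &= \psi(x)\bar F(x) - \psi(x+1)\bar F(x+1), \\
r(x) &= \frac{p(x)}{\bar F(x)}
      = \frac{\psi(x) - \psi(x+1)}{g(x) - \psi(x+1)},
\end{align*}
```

where the last line uses F̄(x + 1) = F̄(x) − p(x).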
Remark 1. Gupta (1985) characterized a distribution by means of two consecutive factorial moments.
The failure rate function is given below. Remark 2. A similar result was obtained by Dimaki and Xekalaki (1996); however, there is an error in their result.
Example 3.1 Characterization through truncated moments. . . . Then, using (18), we have the result below. Example 3.2 Characterization through residual moments. This means that the kth residual moment at two consecutive points is needed to characterize a distribution.

Characterization through Reversed Truncated Conditional Expectation
In this case, we have the expression below. The above expression can also be written in an equivalent form. Since the above result is true for all x, we change x to x − 1 and subtract. This gives the desired identity. The reversed hazard rate function is then given as stated.
Example 3.1R Characterization through reversed truncated moments. Example 3.2R Characterization through reversed residual moments. Example 3.3R Characterization through certain reversed truncated conditional expectations. We shall now consider two cases for ϕ. (a) Assume g(x) = 0. In this case, for all k = 1, 2, . . ., x − 1, we have the stated relation. (b) Assume g(x) ≠ 0. In this case, we have the corresponding relation. Example 3.4R Characterization through reversed partial moments. In this case, g(s) = (s − x)^r and g(x) = 0. Therefore, using equation (29), for all m = 1, 2, . . ., x − 1, we have the stated result.

Characterization of Continuous Lindley Distribution

(i) Characterization through left truncated moments
Theorem 4.1.1. A continuous random variable X, with finite E(X^k), follows the Lindley distribution with parameter β if and only if, for all k = 1, 2, . . . and x > 0, the identity below holds. This characterization result can be obtained by choosing g as indicated; thus, using equation (8), we obtain the result of Ahsanullah et al. (2017), p. 6224.
(ii) Characterization through modified left truncated moments. Theorem 4.1.2. A continuous random variable X has the Lindley distribution with parameter β if and only if the identity below holds. This characterization result can be obtained by choosing g as indicated; thus, using equation (8), we obtain the result. We shall show the importance of this characterization result in the applications section.
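The computation behind Theorem 4.1.2 presumably takes the following form (our reconstruction, with g(t) = 1/(1 + t) and f the Lindley density (1)):

```latex
\begin{align*}
\int_x^{\infty} \frac{f(t)}{1+t}\,dt
  &= \frac{\beta^2}{1+\beta}\int_x^{\infty} e^{-\beta t}\,dt
   = \frac{\beta}{1+\beta}\,e^{-\beta x}, \\
\log \int_x^{\infty} \frac{f(t)}{1+t}\,dt
  &= \log\frac{\beta}{1+\beta} - \beta x,
\end{align*}
```

which is linear in x; this linearity is what the applications section exploits.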
(iii) Characterization through right truncated moments. Theorem 4.1.1R. A continuous random variable X, with finite E(X^k), follows the Lindley distribution with parameter β if and only if, for all k = 1, 2, . . ., the identity below holds. This characterization result can be obtained by choosing g as indicated; thus, using equation (14), we obtain the result of Ahsanullah et al. (2017), p. 6226.
(iv) Characterization through modified right truncated moments. Theorem 4.1.2R. A continuous random variable X has the Lindley distribution with parameter β if and only if the identity below holds. This characterization result can be obtained by choosing g as indicated; thus, using equation (14), we obtain the result.

Characterization of Discrete Lindley Distribution
(i) Characterization through left truncated moments. Theorem 4.2.1. A discrete random variable X follows the discrete Lindley distribution with parameter β if and only if, for all k = 1, 2, . . ., the identity below holds.

This characterization result can be obtained by choosing the function g given below.
Note that g(0) = 0 for all k = 1, 2, . . . . Now, using equation (21) with x = 0, for all m = 1, 2, . . ., we have the stated result. (ii) Characterization through modified left truncated moments. Theorem 4.2.2. A discrete random variable X has the discrete Lindley distribution with parameter β if and only if the identity below holds. In this case h(x) = (1 − e^{−β})e^{−βx} and g(x) = 1/(1 + x) ≠ 0. Thus, using equation (22), we obtain the result. We shall show the importance of this characterization result in the applications section.
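Under the stated choices h(x) = (1 − e^{−β})e^{−βx} and g(x) = 1/(1 + x), the relation behind Theorem 4.2.2 can be written as follows (our reconstruction, with p(t) = P(X = t)):

```latex
\log \sum_{t=x}^{\infty} \frac{p(t)}{1+t}
  \;=\; \log\bigl(1 - e^{-\beta}\bigr) \;-\; \beta x,
```

again linear in x, with slope −β and intercept log(1 − e^{−β}); these are exactly the quantities estimated from the regression line in the applications section.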

Application of Characterization of Lindley Distribution
It has been shown in Theorem 4.1.2 that X follows the Lindley distribution with parameter β if and only if equation (32) holds. A natural nonparametric estimator of the left-hand side of (32) is Y(x), defined below, where X_1, . . ., X_n is a random sample from the Lindley distribution and I(A) is the indicator function of the set A.
Now, based on the data {(x_i, Y(x_i)), i = 1, 2, . . ., n}, we can find the least squares regression line of the response Y(x) in terms of the predictor x. Using this least squares regression line, we can estimate the parameter β of the Lindley distribution.
The least squares regression line of y on x is given by ŷ(x) = −2.80 − 0.06x. The coefficient of determination R² of this regression line is 97.7%, indicating an excellent fit. Now, using the slope of this regression line, we have β̂ = 0.06. Also, using the intercept, we have log(β̂/(1 + β̂)) = −2.80, which implies β̂ = 0.06.
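The fitting procedure can be illustrated on simulated data. The sketch below is our own illustration, not the authors' code: it assumes the linear relation Y(x) = log(β/(1 + β)) − βx implied by Theorem 4.1.2, and simulates Lindley variates via the standard mixture representation (Exp(β) with probability β/(1 + β), otherwise Gamma(2, β)):

```python
import numpy as np

rng = np.random.default_rng(0)
beta_true, n = 1.5, 50_000

# Lindley(beta) as a mixture of Exp(beta) and Gamma(shape=2, rate=beta)
p = beta_true / (1.0 + beta_true)
is_exp = rng.random(n) < p
sample = np.where(is_exp,
                  rng.exponential(1.0 / beta_true, n),
                  rng.gamma(2.0, 1.0 / beta_true, n))

# Y(x) = log( (1/n) * sum_i (1/(1+X_i)) * I(X_i >= x) )
xs = np.linspace(0.0, 2.0, 21)
Y = np.array([np.log(np.mean((sample >= x) / (1.0 + sample))) for x in xs])

# least squares line: slope ~ -beta, intercept ~ log(beta/(1+beta))
slope, intercept = np.polyfit(xs, Y, 1)
beta_from_slope = -slope
beta_from_intercept = np.exp(intercept) / (1.0 - np.exp(intercept))
```

With β = 1.5, both estimates land close to the true value; in practice one plots (x_i, Y(x_i)) as in Figure 1 and reads β off the fitted line.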

Application of Characterization of Discrete Lindley Distribution
It has been shown in Theorem 4.2.2 that X follows the discrete Lindley distribution with parameter β if and only if equation (34) holds. A natural nonparametric estimator of the left-hand side of (34) is Z(x), defined below, where X_1, . . ., X_n is a random sample from the discrete Lindley distribution and I(A) is the indicator function of the set A.
Now, based on the data {(x_i, Z(x_i)), i = 1, 2, . . ., n}, we can find the least squares regression line of the response Z(x) in terms of the predictor x. Using this least squares regression line, we can estimate the parameter β of the discrete Lindley distribution.
For this data set, the least squares regression line of z on x is given by ẑ(x) = −0.34 − 1.26x. The coefficient of determination R² of this regression line is 99.7%, indicating an excellent fit.
Therefore, using the slope of this regression line, we have β̂ = 1.26. Also, using the intercept, we have log(1 − e^{−β̂}) = −0.34, which implies β̂ = 1.24.

Conclusions
In this paper, we have presented some general results based on conditional expectations, both in the continuous and in the discrete case. Some of the ingredients, in the continuous case, are available in the literature. We unify these results dealing with truncated moments, residual moments and residual partial moments. In addition, similar results are obtained in the case of reversed conditional expectations. Characterization results in the discrete case are also derived, for both right and left truncation. The characterization results provide new methods for estimating the unknown parameter of Lindley-type distributions and for assessing their goodness-of-fit. Some applications to real data sets are provided for Lindley-type distributions.
(iv) Characterization through modified right truncated moments. Theorem 4.2.2R. A discrete random variable X follows the discrete Lindley distribution with parameter β if and only if the stated identity holds.

Figure 1. Least squares regression line of Y(x) on x for Data set 1.

Figure 2. Least squares regression line of Z(x) on x for Data set 2.