Parametric Interval Estimation of the Geeta Distribution

It is well known in mathematical statistics that the sample mean is the point estimator of a population mean: a single number obtained from a random sample of specified size drawn from the population of interest, with its treatment depending on whether the population mean and variance are known or unknown. In interval estimation, the sample mean is instead accompanied by a margin of error, so that the unknown parameter is asserted to lie within a range of values with a stated degree of confidence. This paper investigates and obtains interval estimators of the unknown constants of the Geeta distribution through the construction of confidence intervals by the pivotal quantity method, the shortest-length confidence interval, unbiased confidence interval estimators, and Bayesian confidence interval estimators. The Geeta distribution is a new discrete distribution defined over all the positive integers, with two unknown parameters. The properties and characteristics of the Geeta distribution are reviewed: the existence of the mean, the variance, and the moment generating function, and the fact that the probabilities sum to unity. These are common properties of any probability mass function.


Introduction
Generally, in interval estimation we seek to construct a confidence interval that contains the population mean or population variance within a range of values, with a high confidence coefficient and the shortest possible interval length (Larson, 1934). The construction of these confidence intervals depends on whether the population mean is known or unknown, and similarly on whether the population variance is known or unknown.
Let X = (X₁, X₂, …, Xₙ) be a random sample from the normal distribution with mean μ and variance σ². Let X̄ = (1/n) Σᵢ Xᵢ and S² = (1/(n − 1)) Σᵢ (Xᵢ − X̄)² be the sample mean and sample variance of X, respectively (Traoré et al., 2018).
Let Z(X₁, X₂, …, Xₙ; θ) be a pivotal quantity, where X₁, X₂, …, Xₙ is a random sample from a distribution f(x; μ, σ²) (Hogg and Craig, 1956). The probability distribution considered here, the Geeta distribution, contains the unknown constants θ and β, which are therefore estimated through the construction of confidence intervals; this forms the subject of discussion in this paper.

The Geeta Distribution
The Geeta distribution is a newly introduced distribution with two unknown parameters; it is an L-shaped model belonging to the family of modified power series distributions (MPSD), the Lagrangian series distributions, and the location-parameter distributions. The Yule and Pareto distributions, which belong to the same MPSD family, have a single parameter each and therefore cannot adequately handle the large data sets that arise in modern applications. The Geeta distribution is very versatile in meeting the needs of modern complex data sets, an advantage attributable to the presence of its two unknown constants when compared with distributions of the same class. These constants carry much of the information in the model and can be estimated with standard estimation techniques.
The Geeta distribution is defined as a discrete random variable X over the set of all positive integers, with probability mass function

P(X = x) = (1/(βx − 1)) · C(βx − 1, x) · θ^(x−1) (1 − θ)^(βx−x), x = 1, 2, 3, …,

where 0 < θ < 1 and 1 ≤ β < θ⁻¹. The upper limit on β has been imposed for the existence of the mean. When β = 1 the model degenerates to a single point at x = 1.
The Geeta distribution has its maximum at x = 1 and is L-shaped for all values of θ and β. It may have a short tail, a long tail, or a heavy tail depending upon the values of θ and β. Its mean μ and variance σ² are given by

μ = (1 − θ)(1 − βθ)⁻¹ and σ² = (β − 1) θ (1 − θ)(1 − βθ)⁻³,

and from the formula (6) for the mean, substituting θ = (μ − 1)/(βμ − 1), it can also be expressed as a location-parameter probability distribution:

P(X = x) = (1/(βx − 1)) · C(βx − 1, x) · (μ − 1)^(x−1) [μ(β − 1)]^((β−1)x) (βμ − 1)^(1−βx), x = 1, 2, 3, …,

where μ is the mean and β > 1.
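As a quick numerical check of these formulas, the sketch below sums the probability mass function directly and compares the results with the closed-form mean and variance. It assumes an integer β so that the binomial coefficient is exact, and the parameter values θ = 0.3, β = 2 are illustrative only.

```python
from math import comb

def geeta_pmf(x, theta, beta):
    # Geeta probability mass function (Consul, 1990); integer beta is
    # assumed here so that math.comb is exact.
    n = beta * x - 1
    return comb(n, x) / n * theta ** (x - 1) * (1 - theta) ** ((beta - 1) * x)

theta, beta = 0.3, 2          # illustrative values with beta < 1/theta
xs = range(1, 400)            # truncated support; the tail is negligible
total = sum(geeta_pmf(x, theta, beta) for x in xs)
mean = sum(x * geeta_pmf(x, theta, beta) for x in xs)
var = sum(x * x * geeta_pmf(x, theta, beta) for x in xs) - mean ** 2

print(round(total, 6))        # ≈ 1, the probabilities sum to unity
print(round(mean, 4))         # ≈ (1 - theta)/(1 - beta*theta) = 1.75
print(round(var, 4))          # ≈ (beta - 1)*theta*(1 - theta)/(1 - beta*theta)**3 = 3.28125
```

Truncating the support at 400 is safe here because the tail decays geometrically for βθ < 1.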


Note that this form does not have an upper limit on μ. Consul (1990) has shown that the Geeta distribution (9) can be characterized by its variance,

σ² = μ(μ − 1)(βμ − 1)(β − 1)⁻¹,

together with the domain of X. It is clear from this expression for σ² that the variance is zero when μ = 1, that is, when the model reduces to a single point at x = 1.

Also,

∂σ²/∂β = −μ(μ − 1)²(β − 1)⁻² < 0.

Thus, the variance σ² decreases monotonically as β increases, and the smallest value of σ², attained in the limit of large β, becomes μ²(μ − 1). From this we conclude that, for a fixed mean, σ² has the range (μ²(μ − 1), ∞); the variance can therefore be less than the mean μ only when μ²(μ − 1) < μ, that is, when 1 < μ < (1 + √5)/2, while for larger μ the value of σ² is always larger than μ.

Bar-Diagram for the Geeta Model
The successive probabilities for the various values of x are easily computed, starting from P(X = 1) = (1 − θ)^(β−1) and applying the recurrence formula relating P(X = x + 1) to P(X = x). The probabilities of the Geeta distribution (13) were computed for μ = 1.2 to 5.2 varying by 0.2 and for values of β varying from 1.2 to 4.2 (with increments of 0.6), and bar-diagrams were drawn for all of them to examine the variations.
Twelve of these bar-diagrams are shown below, for the two typical values μ = 1.2 and μ = 5.2 and for β = 1.2, 1.8, 2.4, 3.0, 3.6, 4.2 corresponding to each value of μ.

Figure 1. Probabilities for values of X at μ = 1.2 for different values of β (Consul, 1990a)

Figure 2. Probabilities for values of X at μ = 5.2 for different values of β (Consul, 1990a)

It is clear from these graphs that Pr[X = 1] decreases as β increases and the probabilities for all other values of x increase, but the model always remains L-shaped. Thus the tail becomes heavier and longer with the increase in the value of β. There is a similar effect when the value of β is kept fixed and the value of μ is slowly increased: the value of Pr[X = 1] decreases and the probabilities for the other values of x increase as μ increases.
However, these changes with β occur at a much slower pace than the changes with μ, with the result that the Geeta probability model is more suitable and versatile than some other models for abundance data sets.
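The grid of probabilities behind such bar-diagrams can be reproduced with a short script. The sketch below uses the location-parameter (μ, β) form with θ = (μ − 1)/(βμ − 1) and evaluates the binomial coefficient through log-gamma functions so that non-integer β (such as β = 1.2) is allowed; the printed rows show the L-shape and the drop in Pr[X = 1] as μ grows.

```python
from math import exp, lgamma

def geeta_pmf(x, mu, beta):
    # Geeta probability in the location-parameter (mu, beta) form.
    # C(beta*x - 1, x) = Gamma(beta*x) / (Gamma(x+1) * Gamma(beta*x - x)),
    # evaluated in log space so non-integer beta is allowed.
    theta = (mu - 1) / (beta * mu - 1)
    log_coef = lgamma(beta * x) - lgamma(x + 1) - lgamma(beta * x - x)
    return (exp(log_coef) / (beta * x - 1)
            * theta ** (x - 1) * (1 - theta) ** ((beta - 1) * x))

# first few probabilities for a subset of the (mu, beta) grid
for mu in (1.2, 5.2):
    for beta in (1.2, 2.4, 4.2):
        probs = [geeta_pmf(x, mu, beta) for x in range(1, 7)]
        print(f"mu={mu}, beta={beta}:", " ".join(f"{p:.3f}" for p in probs))
```

Each printed row decreases from x = 1 onward, matching the L-shape visible in the bar-diagrams.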

Methodology
Parametric interval estimation is twofold: finding interval estimators, and then determining good, or optimum, interval estimators. In the preliminary stage a statistic is identified by taking a random sample of size n from the population. A statistic is a function of the sample observations, and many statistics can be obtained from the same population. The statistic is chosen as, for example, a sample mean or a sample proportion, and the confidence coefficient is obtained from the chosen significance level; for a 5% significance level the confidence coefficient is 95%, or 0.95. A statistic can be a biased or an unbiased estimator, and the properties of a good estimator inform the choice of statistic.

The construction of confidence intervals (CIs) proceeds by choosing appropriate z_{α/2} values, which are obtained from the standard normal tables (Arnab et al., 2017; Warisa et al., 2017). It depends largely on whether the population mean is known or unknown, and similarly on whether the population variance is known or unknown. There are other cases, such as constructing a CI for the difference of two means, where Student's t distribution is used (Suparat et al., 2017). An approximate confidence interval for a population mean can also be constructed for random variables that are not normally distributed in the population, relying on the central limit theorem for large sample sizes (n ≥ 30).

This paper is divided into sections in which the following are discussed: the shortest-length confidence interval, unbiased confidence intervals, and the Bayesian confidence interval.
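As a minimal illustration of the large-sample case described above, the sketch below builds the usual z-based 95% interval X̄ ± z_{α/2}·S/√n from a simulated sample; the data and parameter values are hypothetical.

```python
import math
import random
import statistics

random.seed(1)
data = [random.gauss(10, 2) for _ in range(100)]   # hypothetical sample, n >= 30

xbar = statistics.mean(data)
s = statistics.stdev(data)
z = 1.96                                           # z_{alpha/2} for alpha = 0.05
half_width = z * s / math.sqrt(len(data))
ci = (xbar - half_width, xbar + half_width)
print(f"95% CI for the mean: ({ci[0]:.3f}, {ci[1]:.3f})")
```

With n = 100 the central limit theorem makes this interval approximately valid even when the population is not normal.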

Confidence Interval Estimation
Definition 1: An interval estimate for a real-valued parameter θ based on a sample X ≡ (X₁, …, Xₙ) is a pair of functions L(X) and U(X) such that L(X) ≤ U(X) for all X, written [L(X), U(X)] (Wei Zhu, 2017).

Theorem 2.1: The confidence interval estimate of μ is given by P(L₁ ≤ μ ≤ L₂) = 1 − α, where L₁ and L₂ are the lower and upper limits, respectively.

Proof:
It follows from the central limit theorem that

Z = (X̄ − μ)/(σ/√n)

is an approximate pivotal quantity. Thus

P(L₁ ≤ μ ≤ L₂) = 1 − α,

where (1 − α) is the confidence coefficient, L₁ and L₂ are the lower and upper limits for the parameter μ, and [L₁, L₂] is the (1 − α)100% confidence interval for μ (Mood et al., 1963).

Definition 2:
A pivotal quantity is a function of the sample and the parameter of interest whose distribution is entirely known, in that it does not depend on the unknown parameter.
Let X₁, X₂, …, Xₙ be a random sample from a normal population with mean μ and variance σ²; that is, Xᵢ ~ N(μ, σ²), i = 1, …, n, with the Xᵢ independent and identically distributed.
The method of finding a confidence interval involves finding Z, the pivotal quantity, which is a function of the sample and the parameter to be estimated, as shown in the examples below (Arnab et al., 2017).

Example 1
When σ is known, the pivotal quantity is

Z = (X̄ − μ)/(σ/√n) ~ N(0, 1).

For the (1 − α)100% CI with α = 0.05, the 95% CI for μ is given by

X̄ ± z_{α/2} · σ/√n = X̄ ± 1.96 σ/√n.

Example 2

When σ is unknown, the pivotal quantity is

T = (X̄ − μ)/(S/√n) ~ t_{n−1}.

For the (1 − α)100% CI with α = 0.05, the 95% CI for μ is given by

X̄ ± t_{n−1, α/2} · S/√n,

where t_{n−1, α/2} is the Student's t value with (n − 1) degrees of freedom and α is the significance level.
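Both examples can be sketched numerically. The sample below is hypothetical; the z quantile comes from `statistics.NormalDist` and the value t_{24, 0.025} ≈ 2.064 is taken from the usual t tables.

```python
import math
from statistics import NormalDist, mean, stdev

# hypothetical measurements, n = 25
sample = [4.8, 5.1, 5.0, 4.7, 5.3, 4.9, 5.2, 5.0, 4.6, 5.4,
          5.1, 4.9, 5.0, 5.2, 4.8, 5.3, 4.7, 5.1, 5.0, 4.9,
          5.2, 4.8, 5.0, 5.1, 4.9]
n, xbar, s = len(sample), mean(sample), stdev(sample)

# Example 1: sigma known (taken here as 0.2), standard-normal pivot
sigma = 0.2
z = NormalDist().inv_cdf(0.975)                    # ≈ 1.96
ci_z = (xbar - z * sigma / math.sqrt(n), xbar + z * sigma / math.sqrt(n))

# Example 2: sigma unknown, Student's t pivot with n - 1 = 24 df
t = 2.064                                          # t_{24, 0.025} from the t tables
ci_t = (xbar - t * s / math.sqrt(n), xbar + t * s / math.sqrt(n))

print("z-interval:", ci_z)
print("t-interval:", ci_t)
```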
Definition 3: where (1-) does not depend on  , then the random interval   T and 2 T are called lower and upper coefficient limits respectively.The confidence interval estimators of the parameters  and  for the distribution given in (10).For small sample size n, confidence interval estimation is not possible because we do not have the Geeta distribution tables to refer to, hence large sample distributions are considered where the normal table is going to be used.The large sample distribution for maximum likelihood estimators is used to derive the confidence intervals for the unknown parameter.If the maximum likelihood estimator  ˆ of a parameter is for large sample size n, it is approximately normally distributed with mean  and variance 2  where

Shortest-Length Confidence Interval
Large-sample confidence intervals based on the maximum likelihood estimator will be shorter, on average, than intervals determined by any other estimator. From the solution of eq. (4), a = −b; thus the shortest-length interval is the symmetric one.
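The claim that the symmetric choice a = −b minimizes the length can be checked numerically. The sketch below scans lower cut points a for the standard normal pivot, picks b so that P(a ≤ Z ≤ b) = 0.95, and finds the minimizing pair; it lands on a ≈ −1.96, b ≈ 1.96.

```python
from statistics import NormalDist

nd = NormalDist()
coverage = 0.95

# For each lower cut point a, pick b with P(a <= Z <= b) = coverage and
# record the interval length b - a; the minimum should occur at a = -b.
best_a, best_len = None, float("inf")
for i in range(1801):
    a = -3.5 + i * 0.001              # scan a over [-3.5, -1.7]
    b = nd.inv_cdf(nd.cdf(a) + coverage)
    if b - a < best_len:
        best_a, best_len = a, b - a

print(f"shortest interval at a = {best_a:.3f}, length = {best_len:.3f}")
```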

Bayesian Confidence Interval
The Bayesian confidence interval estimate of θ is given by P(t₁ ≤ θ ≤ t₂ | x) = 1 − α, where t₁ and t₂ are the lower and upper limits of the Bayesian interval.
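A concrete sketch, under the assumption that β is known and a Beta prior is placed on θ: since the θ-likelihood of a Geeta sample is proportional to θ^(Σx−n)(1 − θ)^((β−1)Σx), a Beta(a₀, b₀) prior is conjugate and the posterior is Beta(a₀ + Σx − n, b₀ + (β − 1)Σx). The limits t₁ and t₂ are then equal-tailed posterior quantiles, approximated here on a grid; the data are hypothetical.

```python
import math

# hypothetical Geeta sample, with beta assumed known
sample = [1, 1, 2, 1, 3, 1, 1, 2, 1, 1, 4, 1, 2, 1, 1, 1, 2, 1, 1, 3]
beta, n, sx = 2, len(sample), sum(sample)

a0, b0 = 1.0, 1.0                       # Beta(1, 1) prior, i.e. uniform on theta
A = a0 + (sx - n)                       # posterior shape parameters
B = b0 + (beta - 1) * sx

# equal-tailed 95% credible interval via a grid approximation of the Beta CDF
N = 100000
w = []
for i in range(N):
    t = (i + 0.5) / N
    w.append(math.exp((A - 1) * math.log(t) + (B - 1) * math.log(1 - t)))
total = sum(w)
cum, t1, t2 = 0.0, None, None
for i, wi in enumerate(w):
    cum += wi
    if t1 is None and cum >= 0.025 * total:
        t1 = (i + 0.5) / N
    if t2 is None and cum >= 0.975 * total:
        t2 = (i + 0.5) / N
print(f"95% credible interval for theta: ({t1:.4f}, {t2:.4f})")
```

In practice the Beta quantiles would be taken from a statistics library; the grid is used here only to keep the sketch dependency-free.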

Results of the Interval Estimators
The following results were obtained.

Theorem 3.1.1: Using the pivotal quantity method, the confidence interval estimate of θ is expressed in terms of θ̂, the point estimator of θ obtained by the method of moments. From the distribution given in equation (5) we have μ = (1 − θ)(1 − βθ)⁻¹, and equating this mean to the sample mean X̄ and solving for θ gives

θ̂ = (X̄ − 1)/(βX̄ − 1).

Differentiating partially with respect to θ and simplifying yields the large-sample variance of θ̂, and hence the confidence interval estimate of θ. The limits are then found by solving the resulting pair of equations, which gives the interval [V₁, V₂], where V₁ and V₂ are the lower and upper limits.
Following the same procedure, the interval for the parameter β is obtained. The shortest-length confidence interval for the parameter is derived as follows: the distance (b − a) is minimized for a fixed coverage area when the density heights at the endpoints are equal, which for the symmetric standard normal pivot can be rewritten as a = −b. The length of this interval is therefore 2z_{α/2}.
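As an illustration of the moment-based interval for θ (with β known), the sketch below uses hypothetical summary statistics and the delta method, dθ/dμ = (β − 1)/(βμ − 1)², to approximate the standard error; V₁ and V₂ are the resulting limits.

```python
import math

# hypothetical summary statistics for a Geeta sample, beta known
beta, n, xbar, s2 = 2, 400, 1.52, 1.6

# method-of-moments estimate: invert mu = (1 - theta)/(1 - beta*theta)
theta_hat = (xbar - 1) / (beta * xbar - 1)

# delta method: Var(theta_hat) ≈ ((beta - 1)/(beta*xbar - 1)**2)**2 * s2 / n
se = (beta - 1) / (beta * xbar - 1) ** 2 * math.sqrt(s2 / n)
v1, v2 = theta_hat - 1.96 * se, theta_hat + 1.96 * se
print(f"theta_hat = {theta_hat:.4f}, 95% CI = ({v1:.4f}, {v2:.4f})")
```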

