Stress-Strength Reliability Model with the Exponentiated Weibull Distribution: Inferences and Applications

In this paper, we deal with the estimation of the reliability R = P(Y < X), where X, a unit's strength, and Y, a unit's stress, are independent exponentiated Weibull random variables. The maximum likelihood and Bayesian methods are used to make inferences about R. We obtain the Bayesian estimators using Lindley's procedure under the squared error and LINEX loss functions with gamma priors for the unknown model parameters. Asymptotic and bootstrap confidence intervals are obtained, and a credible interval for R is constructed via an empirical Bayesian procedure. For illustrative purposes, an analysis of real data sets is presented. Monte Carlo simulations are carried out to compare the performances of the different estimators.


Introduction
We consider inference on the reliability R = P(Y < X) of a system where X, a unit's strength, and Y, a unit's stress, are independent exponentiated Weibull random variables. In other words, R is the probability that a system is strong enough to overcome the stress imposed on it. The reliability parameter R is a measure of system performance. Birnbaum (1956) introduced the main idea of this area of research. The stress-strength model has wide applications in several fields. For example, in engineering, X can represent the strength of a system structure and Y the stress due to the environmental conditions imposed on it. Information about the mechanical reliability of a system design can be obtained through the stress-strength model prior to production, and this information can decrease production costs. As another example, in biology, R can be a measure of the difference between two populations. When X represents a treatment group and Y a control group, R is a measure of the treatment effect. For details, see Hauk et al. (2000), Reiser (2000) and Wellek (1993). Due to its practical importance, the estimation of R has attracted the attention of several authors, who have considered several distributions such as the exponential, normal, Weibull and generalized exponential. Among other works dealing with inferences about R are Mahdizadeh (2018), Sarhan et al. (2015), Rao et al. (2016), Jovanovic and Rajic (2014), Raqab et al. (2008), Weerahandi and Johnson (1992), Constantine et al. (1986) and Rezaei et al. (2010).

Our aim in this research is to focus on inferences for R = P(Y < X) when X and Y are two independent but not identically distributed random variables with the exponentiated Weibull (EW) distribution. We use several estimation methods: classical and Bayesian methods for point estimation, and asymptotic, bootstrap and credible intervals for interval estimation. The performances of the Bayes and non-Bayes methods are compared through the analysis of real data sets and through Monte Carlo simulations, by computing the mean squared errors of the different estimators and the average lengths and coverage probabilities of the different interval estimates. The exponentiated Weibull random variable has cumulative distribution function

F(x) = (1 − e^{−x^α})^θ, x > 0, α, θ > 0, (1)

and corresponding probability density function (pdf)

f(x) = αθ x^{α−1} e^{−x^α} (1 − e^{−x^α})^{θ−1}, x > 0, α, θ > 0. (2)

Here α and θ are shape parameters. We use the abbreviation EW(α, θ) to denote the exponentiated Weibull distribution with the density cited above. This distribution was introduced by Mudholkar and Srivastava (1993). The EW family includes many important distributions. For example, for θ = 1 it reduces to the Weibull distribution, and for α = 1 it reduces to the exponentiated exponential distribution. For α = 2, it represents the one-parameter Burr type-X distribution, also known as the generalized Rayleigh distribution. Furthermore, the EW distribution has a convenient distribution function that can be used quite adequately and effectively in analyzing several kinds of lifetime data. The article is organized as follows: In Section 2, we consider maximum likelihood estimation. In Section 3, we derive different confidence intervals for R. Section 4 proposes a Bayesian approximation technique to obtain Bayesian estimates of R. Section 5 adopts an empirical Bayesian procedure to obtain a credible interval for R.
Analysis of real data sets is given in Section 6. In Section 7, a simulation study is carried out, and Section 8 concludes the paper.

Now, we assume that X follows EW(α1, θ1) and Y follows EW(α2, θ2). Our quantity of interest is the reliability parameter R defined by

R = P(Y < X) = ∫_0^∞ F_Y(x) f_X(x) dx.

Using this form with equation (2), we get

R = θ1 α1 ∫_0^∞ x^{α1−1} e^{−x^{α1}} (1 − e^{−x^{α1}})^{θ1−1} (1 − e^{−x^{α2}})^{θ2} dx. (3)

Applying the generalized binomial series expansion (1 − z)^θ = Σ_{j=0}^∞ (−1)^j (θ choose j) z^j to the last two factors of the integrand, with some mathematical manipulation, R can finally be expressed as a double infinite series. Alternatively, if we assume that α1 = α2 = α, then the reliability parameter can be obtained in closed form as

R = θ1/(θ1 + θ2). (4)

This assumption may be associated with many practical situations. If θ1 = θ2, then R = 0.5; that is, X and Y are independent and identically distributed, and there is an equal chance that the strength exceeds the stress. Once θ1 and θ2 are estimated, the value of R is simply estimated using equation (4). We remark that although equation (4) does not contain α, the estimators of θ1 and θ2 are functions of α, and hence the estimator of R depends on α. However, once α is known or estimated, the estimators of θ1 and θ2 can be obtained, and so can the estimator of R.
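The closed form in equation (4) is easy to check numerically. The sketch below (entirely illustrative; the sampler and all names are our own) draws EW variates by inverse-transform sampling of the CDF in equation (1) and compares a Monte Carlo estimate of P(Y < X) against θ1/(θ1 + θ2):

```python
import numpy as np

def rew(n, alpha, theta, rng):
    """Draw n EW(alpha, theta) variates by inverse-transform sampling.

    F(x) = (1 - exp(-x**alpha))**theta, so
    F^{-1}(u) = (-log(1 - u**(1/theta)))**(1/alpha).
    """
    u = rng.uniform(size=n)
    return (-np.log(1.0 - u ** (1.0 / theta))) ** (1.0 / alpha)

rng = np.random.default_rng(42)
alpha, theta1, theta2 = 0.75, 3.0, 1.5      # common shape alpha, as in equation (4)
x = rew(200_000, alpha, theta1, rng)        # strengths
y = rew(200_000, alpha, theta2, rng)        # stresses
r_mc = np.mean(y < x)                       # Monte Carlo estimate of P(Y < X)
r_exact = theta1 / (theta1 + theta2)        # equation (4)
```

With 200,000 pairs the Monte Carlo estimate agrees with the closed form to roughly two decimal places, which is a quick sanity check on equation (4).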

Maximum Likelihood Estimation
Let x = {x1, x2, . . ., xn1} and y = {y1, y2, . . ., yn2} be two independent random samples taken from EW(α, θ1) and EW(α, θ2), respectively. The observed value xi represents the strength of the i-th component and the observed value yi represents the stress acting on it. Based on these observed samples, the likelihood function of α, θ1 and θ2 is

L(α, θ1, θ2) = ∏_{i=1}^{n1} αθ1 xi^{α−1} e^{−xi^α} ui^{θ1−1} ∏_{j=1}^{n2} αθ2 yj^{α−1} e^{−yj^α} vj^{θ2−1}. (5)

The log-likelihood function, l, is

l = (n1 + n2) ln α + n1 ln θ1 + n2 ln θ2 + (α − 1)(Σ_i ln xi + Σ_j ln yj) − Σ_i xi^α − Σ_j yj^α + (θ1 − 1) Σ_i ln ui + (θ2 − 1) Σ_j ln vj, (6)

where ui = 1 − e^{−xi^α} and vj = 1 − e^{−yj^α}, and the estimating equations are obtained by setting the partial derivatives of l with respect to θ1, θ2 and α equal to zero; the quantities p1, q1, p2 and q2 appearing in them are defined in equation (7). From equations (8) and (9), we obtain the ML estimators

θ̂1 = −n1 / Σ_{i=1}^{n1} ln ui,  θ̂2 = −n2 / Σ_{j=1}^{n2} ln vj, (10)

evaluated at α = α̂, where α̂ can be obtained as the solution of a nonlinear equation that can be rewritten in the fixed-point form α = g(α). (11) The ML estimate α̂ of α can be obtained from equation (11) by a simple iterative technique, α^{(i+1)} = g(α^{(i)}), where α^{(i)} is the i-th iterate of α. The iterations are stopped when the absolute value of α^{(i+1)} − α^{(i)} is sufficiently small. Once α̂ is obtained, we get θ̂1 and θ̂2 from equations (10), and hence the MLE of R is given by

R̂_M = θ̂1/(θ̂1 + θ̂2), (12)

on the basis of the invariance property of the MLE.
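The scheme above can be sketched in code. We do not reproduce the paper's fixed-point iteration g(α); instead, as an equivalent route, the sketch below maximizes the profile log-likelihood in α numerically, using the closed-form θ̂(α) = n/U(α) with U(α) = −Σ ln(1 − e^{−z^α}), which follows directly from the log-likelihood for fixed α. All function names and the search bounds are our own choices:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def rew(n, alpha, theta, rng):
    """EW(alpha, theta) variates via inverse-transform sampling."""
    u = rng.uniform(size=n)
    return (-np.log(1.0 - u ** (1.0 / theta))) ** (1.0 / alpha)

def theta_hat(z, alpha):
    # For fixed alpha, l is n*log(theta) - theta*U up to theta-free terms,
    # with U = -sum(log(1 - exp(-z**alpha))), so theta_hat = n / U.
    g = -np.expm1(-z ** alpha)          # 1 - exp(-z**alpha), numerically stable
    return len(z) / (-np.log(g).sum())

def profile_loglik(alpha, x, y):
    # Log-likelihood evaluated at (theta1_hat(alpha), theta2_hat(alpha), alpha).
    def part(z, t):
        g = -np.expm1(-z ** alpha)
        return (np.log(t * alpha) + (alpha - 1) * np.log(z)
                - z ** alpha + (t - 1) * np.log(g)).sum()
    return part(x, theta_hat(x, alpha)) + part(y, theta_hat(y, alpha))

def mle(x, y):
    res = minimize_scalar(lambda a: -profile_loglik(a, x, y),
                          bounds=(0.05, 10.0), method="bounded")
    a = res.x
    t1, t2 = theta_hat(x, a), theta_hat(y, a)
    return t1, t2, a, t1 / (t1 + t2)    # last entry is R_M, equation (12)

rng = np.random.default_rng(7)
x = rew(5000, 1.0, 2.0, rng)            # true (alpha, theta1) = (1, 2)
y = rew(5000, 1.0, 1.0, rng)            # true (alpha, theta2) = (1, 1)
t1, t2, a, r_m = mle(x, y)              # should land near (2, 1, 1, 2/3)
```

On simulated data with known parameters, the recovered estimates are close to the truth, which is a convenient way to validate an implementation of the estimating equations.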

Confidence Intervals
Although R̂_M can be obtained in explicit form, it is difficult to obtain its exact distribution. Hence, we mainly depend on the asymptotic distribution of R̂_M to construct an asymptotic confidence interval (ACI) for R. We also consider two different parametric bootstrap confidence intervals.

Asymptotic Confidence Interval
From the asymptotic distribution of γ̂ = (θ̂1, θ̂2, α̂)′, we derive the asymptotic distribution of R̂_M and hence obtain the ACI for R. The MLE of γ = (θ1, θ2, α)′ is asymptotically normal with mean the true γ and variance-covariance matrix I^{−1}(γ) = (a_ij(γ))^{−1}, where I^{−1}(γ) is the inverse of the Fisher information matrix I(γ) = −E(∂²l/∂γi ∂γj), i, j = 1, 2, 3. I(γ) is consistently estimated by I(γ̂), where γ̂ is the MLE of γ. The variance-covariance matrix can be written in terms of its elements as the inverse of the matrix whose elements a_ij, i, j = 1, 2, 3, are the negatives of the second derivatives of the log-likelihood function given by equation (6); here p1, q1, p2 and q2 are defined in equation (7). The MLE R̂_M = θ̂1/(θ̂1 + θ̂2), given by equation (12), is asymptotically normally distributed with mean R and a variance (Rao 1973) that is consistently estimated by the delta method, where J = a11 a22 a33 − a11 a23² − a22 a13². Remember that all the quantities in var(R̂_M) = σ²_R are computed at the MLEs of the parameters θ1, θ2 and α. Therefore, an asymptotic 100(1 − τ)% confidence interval (ACI) for R can be obtained as R̂_M ± z_{τ/2} σ̂_R, where z_k is the k-th upper quantile of the standard normal distribution. Such a confidence interval can be expected to perform well for large sample sizes. For small sample sizes, we adopt the bootstrap confidence intervals described in the following.
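The ACI needs the information matrix and the delta-method variance of R̂_M. When the analytic second derivatives a_ij are inconvenient, a purely numerical sketch (our own, not the paper's closed-form expressions) estimates the observed information by central finite differences and applies the delta method to R = θ1/(θ1 + θ2), whose gradient with respect to (θ1, θ2, α) is (θ2, −θ1, 0)/(θ1 + θ2)²:

```python
import numpy as np
from scipy.stats import norm

def loglik(params, x, y):
    t1, t2, a = params
    def part(z, t):
        g = -np.expm1(-z ** a)          # 1 - exp(-z**a), numerically stable
        return (np.log(t * a) + (a - 1) * np.log(z)
                - z ** a + (t - 1) * np.log(g)).sum()
    return part(x, t1) + part(y, t2)

def num_hessian(f, p, h=1e-4):
    """Central finite-difference Hessian of f at p."""
    p = np.asarray(p, dtype=float)
    k = p.size
    H = np.empty((k, k))
    for i in range(k):
        for j in range(k):
            def fh(si, sj):
                q = p.copy()
                q[i] += si * h
                q[j] += sj * h
                return f(q)
            H[i, j] = (fh(1, 1) - fh(1, -1) - fh(-1, 1) + fh(-1, -1)) / (4 * h * h)
    return H

def aci(x, y, mle, tau=0.05):
    t1, t2, a = mle
    cov = np.linalg.inv(-num_hessian(lambda p: loglik(p, x, y), np.array(mle)))
    grad = np.array([t2, -t1, 0.0]) / (t1 + t2) ** 2   # dR/d(theta1, theta2, alpha)
    r = t1 / (t1 + t2)
    half = norm.ppf(1 - tau / 2) * np.sqrt(grad @ cov @ grad)
    return r - half, r + half

rng = np.random.default_rng(3)
alpha, theta1, theta2 = 1.0, 2.0, 1.0              # true R = 2/3
u1 = rng.uniform(size=2000)
x = (-np.log(1 - u1 ** (1 / theta1))) ** (1 / alpha)
u2 = rng.uniform(size=2000)
y = (-np.log(1 - u2 ** (1 / theta2))) ** (1 / alpha)
# For brevity we plug in the true alpha; a full analysis would use the MLE of alpha.
t1 = len(x) / (-np.log(-np.expm1(-x ** alpha)).sum())
t2 = len(y) / (-np.log(-np.expm1(-y ** alpha)).sum())
lo, hi = aci(x, y, (t1, t2, alpha))
```

This numerical route is useful for checking an implementation of the analytic a_ij before trusting the reported interval.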

Bootstrap Confidence Intervals
In this section, we use the parametric bootstrap method suggested by Efron and Tibshirani (1998) to generate bootstrap samples of R, starting from the given independent random samples x and y obtained from EW(α, θ1) and EW(α, θ2), respectively. We employ the percentile bootstrap and Student's t bootstrap confidence intervals for R. The steps to construct the bootstrap confidence intervals for R are as follows:
Step 1. Given the random samples x = {x1, x2, . . ., xn1} and y = {y1, y2, . . ., yn2}, calculate α̂, θ̂1 and θ̂2.
Step 2. Generate a bootstrap sample x* from EW(α̂, θ̂1) and, independently, a bootstrap sample y* from EW(α̂, θ̂2).
Step 3. Calculate the same statistics α̂*, θ̂1* and θ̂2* as in Step 1 using the samples found in Step 2, and compute the bootstrap estimate of R using equation (12), say R̂*.
Step 4. Repeat Steps 2-3 N times, where N ≥ 1000, and arrange the bootstrap values R̂* in ascending order.
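The steps above can be sketched compactly for the percentile method. To keep the resampling loop simple, this sketch treats the common shape α as known (or already estimated), so that θ̂ = n/U is available in closed form; the function names are our own:

```python
import numpy as np

def rew(n, alpha, theta, rng):
    """EW(alpha, theta) variates via inverse-transform sampling."""
    u = rng.uniform(size=n)
    return (-np.log(1.0 - u ** (1.0 / theta))) ** (1.0 / alpha)

def theta_hat(z, alpha):
    # Closed-form MLE of theta when alpha is treated as known:
    # theta_hat = n / U with U = -sum(log(1 - exp(-z**alpha))).
    g = -np.expm1(-z ** alpha)
    return len(z) / (-np.log(g).sum())

def pboot_ci(x, y, alpha, n_boot=2000, tau=0.05, seed=0):
    """Percentile bootstrap CI for R = theta1 / (theta1 + theta2)."""
    rng = np.random.default_rng(seed)
    t1, t2 = theta_hat(x, alpha), theta_hat(y, alpha)   # Step 1
    rs = np.empty(n_boot)
    for b in range(n_boot):
        xb = rew(len(x), alpha, t1, rng)                # Step 2: parametric resample
        yb = rew(len(y), alpha, t2, rng)
        s1, s2 = theta_hat(xb, alpha), theta_hat(yb, alpha)
        rs[b] = s1 / (s1 + s2)                          # Step 3: bootstrap R*
    return np.quantile(rs, [tau / 2, 1 - tau / 2])      # Step 4: percentile bounds

rng0 = np.random.default_rng(11)
x = rew(200, 1.0, 2.0, rng0)    # true R = 2/3
y = rew(200, 1.0, 1.0, rng0)
lo, hi = pboot_ci(x, y, alpha=1.0)
```

The Student's t bootstrap interval would additionally standardize each R* by its estimated standard error before taking quantiles.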

Bayesian Estimation of R
In this section, the Bayes estimates of R are obtained. We assume that the parameters θ1, θ2 and α have independent gamma prior distributions, each with density function Π(γ) ∝ γ^{a−1} e^{−bγ}, γ > 0, for fixed values a, b > 0, where γ stands for a component of the parameter vector (θ1, θ2, α)′. The joint posterior density function of θ1, θ2 and α can then be obtained, where ui and vi are given in equation (6) and k^{−1} is the normalizing constant. The Bayes estimator of R under the squared error loss function is the posterior mean of R, given by equation (19). In view of the difficulty of evaluating the posterior expectation in equation (19) analytically, we employ Lindley's approximation method to approximate the ratio of integrals in equation (19) and so obtain the estimate of R, expanding about the ML estimators of α, θ1 and θ2.

Lindley's approximation: Lindley (1980) developed an approximate procedure to evaluate the ratio of two integrals such as the posterior mean of a function w(λ),

E(w(λ)|t) = ∫ w(λ) e^{q(λ)} dλ / ∫ e^{q(λ)} dλ, (20)

where q(λ) = l(λ) + ρ(λ), l(λ) is the logarithm of the likelihood function, ρ(λ) is the logarithm of the prior density of λ, and λ = (λ1, λ2, . . ., λr) is the vector of parameters. According to Lindley's approximation, E(w(λ)|t) is approximately estimated by the form in equation (21), where w = w(λ), wi = ∂w/∂λi, wij = ∂²w/∂λi ∂λj, lijk = ∂³l/∂λi ∂λj ∂λk, ρj = ∂ρ/∂λj, σij is the (i, j)-th element of the inverse of the matrix {−lij}, i, j, k = 1, 2, . . ., r, and λ̂ = (λ̂1, λ̂2, . . ., λ̂r) is the MLE of λ; that is, all these quantities are evaluated at the MLEs of the parameters. Consider the case of three parameters, that is, λ = (λ1, λ2, λ3); the posterior mean in equation (21) then reduces to the form in equation (22). In our case, we have λ = (θ1, θ2, α) and w = w(θ1, θ2, α) = R as given in equation (4). To apply Lindley's form in equation (22), we first obtain the elements σij of the inverse of the matrix {−lij}, i, j = 1, 2, 3.
From the log-likelihood function given in equation (6), we can obtain σij as follows, where aij, i, j = 1, 2, 3, are given by equations (13) and J is given in equation (14).
The quantities ρj and lijk, i, j, k = 1, 2, 3, are obtained next, where g1, g2, φi, ψi, p1 and q1 are given in equation (13). The Bayes estimators for R under the squared error loss function and the LINEX loss function, using Lindley's approximation, are obtained in what follows.
- Under the squared error loss function
The Bayes estimator for R, denoted by R̂_BSL, under the squared error loss function can be evaluated by the form where Φ = (1/2)t. All these values are evaluated at the MLEs of θ1, θ2 and α.
The values of w1, w2, w11, w22 and w12 can be obtained directly from equation (4) as

w1 = θ2/(θ1 + θ2)², w2 = −θ1/(θ1 + θ2)², w11 = −2θ2/(θ1 + θ2)³, w22 = 2θ1/(θ1 + θ2)³, w12 = (θ1 − θ2)/(θ1 + θ2)³.

- Under the LINEX loss function
The Bayes estimator for R, denoted by R̂_BLL, under the LINEX loss function can be evaluated by the corresponding form, keeping in mind that these values too are evaluated at the MLEs of θ1, θ2 and α.
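Lindley's expansion approximates the posterior expectations analytically. When posterior draws are available instead, both Bayes estimates have a simple Monte Carlo form: the posterior mean for squared error loss, and R̂ = −(1/c) ln E[e^{−cR} | data] for LINEX loss with parameter c (Zellner's standard result). The sketch below uses purely illustrative gamma posteriors with made-up shape/rate values, not the posteriors fitted to any data in this paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical Gamma(shape, rate) posteriors for theta1 and theta2
# (with gamma priors and alpha fixed, the theta-posteriors are conjugate gammas).
a1, b1, a2, b2 = 40.0, 20.0, 30.0, 30.0          # illustrative values only
t1 = rng.gamma(a1, 1.0 / b1, size=100_000)       # numpy uses scale = 1/rate
t2 = rng.gamma(a2, 1.0 / b2, size=100_000)
r = t1 / (t1 + t2)                               # posterior draws of R

r_sel = r.mean()                                 # Bayes estimate, squared error loss
c = 1.0                                          # LINEX shape parameter
r_linex = -np.log(np.mean(np.exp(-c * r))) / c   # Bayes estimate, LINEX loss
```

For c > 0 the LINEX estimate is pulled below the posterior mean (by Jensen's inequality), reflecting the asymmetric penalty on overestimation; this ordering is a handy check on any LINEX implementation.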

Credible Interval
We know that inference about R depends only on θ1 and θ2. However, since the estimators of θ1 and θ2 depend on α, the estimation of R can be accomplished as soon as α is estimated and becomes known. Based on the ML estimate of α from the observed samples, we employ the empirical Bayesian procedure suggested by Lindley (1969) and used by Awad and Gharaf (1986), who estimated the prior parameters of θ1 and θ2 empirically. From the likelihood function given in equation (5), one can see that U = Σ_{i=1}^{n1} ln(1 − e^{−xi^α})^{−1} and V = Σ_{i=1}^{n2} ln(1 − e^{−yi^α})^{−1} are sufficient statistics for θ1 and θ2, respectively. We assume that θ1 and θ2 have independent gamma priors, θ1 ∼ G(a1, b1) and θ2 ∼ G(a2, b2). The empirical Bayes procedure suggests taking the prior parameters as estimated from the observed samples. When we adopt these empirical priors, we get the posterior distributions θ1|x ∼ G(a1, b1) and θ2|y ∼ G(a2, b2), where a1 = 2n1 + 1, b1 = 2U and a2 = 2n2 + 1, b2 = 2V. Therefore, we can construct two independent chi-squared random variables Q1 = 2b1θ1 ∼ χ²(2a1) and Q2 = 2b2θ2 ∼ χ²(2a2), whose scaled ratio follows an F distribution and can be used as a pivotal quantity to obtain a 100(1 − τ)% CrI for R. The lower and upper bounds of this interval are obtained, respectively, from the upper and lower τ/2 quantiles of this F distribution. It is worth mentioning that, as expected, this interval performs very well in terms of its length compared with the confidence intervals of Section 3 when applied to the real data, as we will see in Section 6.
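Rather than working through the F-pivot algebra, the credible interval can be sketched by sampling the posteriors stated above directly (gamma with a rate parameter assumed; α treated as known for brevity, and all function names our own) and taking quantiles of the induced posterior draws of R:

```python
import numpy as np

def rew(n, alpha, theta, rng):
    """EW(alpha, theta) variates via inverse-transform sampling."""
    u = rng.uniform(size=n)
    return (-np.log(1.0 - u ** (1.0 / theta))) ** (1.0 / alpha)

def credible_interval(x, y, alpha, tau=0.05, n_draws=100_000, seed=5):
    rng = np.random.default_rng(seed)
    # Sufficient statistics U, V for theta1, theta2 (alpha treated as known).
    U = -np.log(-np.expm1(-x ** alpha)).sum()
    V = -np.log(-np.expm1(-y ** alpha)).sum()
    # Empirical Bayes posteriors stated in the text, rate parameterization:
    # theta1|x ~ G(2*n1 + 1, 2U), theta2|y ~ G(2*n2 + 1, 2V).
    t1 = rng.gamma(2 * len(x) + 1, 1.0 / (2 * U), size=n_draws)
    t2 = rng.gamma(2 * len(y) + 1, 1.0 / (2 * V), size=n_draws)
    r = t1 / (t1 + t2)
    return np.quantile(r, [tau / 2, 1 - tau / 2])

rng0 = np.random.default_rng(5)
x = rew(200, 1.0, 2.0, rng0)    # true R = 2/(2 + 1) = 2/3
y = rew(200, 1.0, 1.0, rng0)
lo, hi = credible_interval(x, y, alpha=1.0)
```

With enough draws this Monte Carlo interval agrees with the F-pivot interval, since both describe the same pair of gamma posteriors.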

Data Analysis
For illustration purposes, we present an analysis of real data on the strength of two types of fibers: (1) single carbon fiber data and (2) jute fiber data. We apply the estimation methods presented here for R.
(1) Single carbon fiber data
These results support that we cannot reject the null hypothesis that α1 = α2; hence the claim that the two shape parameters of the distributions of these data sets are equal is justified. Figures 1 and 2 depict the Q-Q plots for data sets 1 and 2. It is clear that the EW model fits both data sets quite well. This conclusion is also supported by the Kolmogorov-Smirnov (K-S) tests, where the K-S statistic values are 0.0843 and 0.0929 with associated p-values 0.6784 and 0.5959, respectively.
Based on the estimates θ̂1 and θ̂2, the ML estimate of R is R̂_M = 0.5711 and the bootstrap estimate is R̂_Boot = 0.5721. The ACI, p-boot CI and t-boot CI for R, at the 95% confidence level, and their lengths are reported in Table 1. To evaluate the Bayes estimates and the credible interval, small values (0.001) were assigned to the hyperparameters of the gamma prior densities, so that the vague prior information allows a meaningful comparison with the MLE of R. From the Bayes estimator formulas in equations (23) and (25), the Bayes estimates of R are R̂_BSL = 0.5704 and R̂_BLL = 0.5736. We note that the estimated value of R is greater than 0.5, implying that the carbon fibers of length 20 mm are stronger than the carbon fibers of length 50 mm. The 95% credible interval (CrI) for R, computed by the form given in equation (26), and its length are reported in Table 1. Note that the CrI is considerably shorter than the corresponding confidence intervals. For the bootstrap methods, the results are based on 5000 repeated samples.
(2) Jute fiber data
These data sets were presented and studied by Xie et al. (2009). The data represent the breaking strength of jute fiber at two different gauge lengths. The data sets are given as follows:
Data set 1 of length 10 mm: X (n1 = 30): 693.73, 704.66, 323.83, 778.17, 123.06, 637.66, 383.43, 151.48, 108.94, 50.16, 671.49, 183.16, 257.44, 727.23, 291.27, 101.15, 376.42, 163.40, 141.38, 700.74, 262.90, 353.24, 422.11, 43.93, 590.48, 212.13, 303.90, 506.60, 530.55, 177.25.
To check whether the EW distribution can be used to fit these data sets, we use Q-Q plots and K-S tests. The ML estimates for data sets 1 and 2 are α̂1 = 0.2703 and α̂2 = 0.2703, respectively, and hence the distributions of the two data sets can be taken to have the same shape parameter, α1 = α2 = α. The ML estimate of the common shape parameter is α̂ = 0.2681, and hence θ̂1 = 62.1842 and θ̂2 = 48.8899. The K-S statistic values are 0.1420 and 0.1376 with associated p-values 0.5341 and 0.5737, respectively. Therefore, one cannot reject the hypothesis that the data sets follow the EW distribution. Figures 3 and 4 show that the EW distribution fits the two data sets well. For the jute fiber data, under the same considerations for the Bayes estimates as in the single carbon fiber case, we get the following estimates of R: R̂_M = 0.5598, R̂_Boot = 0.5693, R̂_BSL = 0.5582 and R̂_BLL = 0.5725. We note that the estimated value of R is greater than 0.5, implying that the jute fiber of length 10 mm is stronger than the jute fiber of length 20 mm. The ACI, p-boot CI and t-boot CI, as well as the CrI, and their lengths are reported in Table 2. The results for the bootstrap methods are based on 5000 repeated samples. For the two data sets, the MLE and the Bayes estimators (under noninformative priors) perform quite similarly, while the credible interval is the shortest compared with the corresponding confidence intervals obtained by the other methods.
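The K-S check is easy to reproduce for jute data set 1, which is listed in full above. The sketch below tests it against the common-shape fit (α = 0.2681, θ1 = 62.1842) reported in the text, so the statistic need not match the 0.1420 value from the individually fitted model exactly:

```python
import numpy as np
from scipy.stats import kstest

# Jute fiber breaking strength, gauge length 10 mm (data set 1, n1 = 30).
x = np.array([693.73, 704.66, 323.83, 778.17, 123.06, 637.66, 383.43, 151.48,
              108.94, 50.16, 671.49, 183.16, 257.44, 727.23, 291.27, 101.15,
              376.42, 163.40, 141.38, 700.74, 262.90, 353.24, 422.11, 43.93,
              590.48, 212.13, 303.90, 506.60, 530.55, 177.25])

alpha, theta1 = 0.2681, 62.1842              # common-shape ML fit from the text

def ew_cdf(z, a=alpha, t=theta1):
    return (1.0 - np.exp(-z ** a)) ** t      # equation (1)

d, p = kstest(x, ew_cdf)                     # one-sample K-S against the fitted EW
```

A p-value well above 0.05 here is consistent with the paper's conclusion that the EW model cannot be rejected for this data set.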

Simulation Study
A simulation study is carried out to see how the different estimation methods perform for different values of R = P(Y < X) and different sample sizes. We generate 2000 X-samples from EW(α, θ1) and 2000 independent Y-samples from EW(α, θ2). We choose the sample sizes n1 = 10, 20, 35 and 50, combined with the same set of values for n2. The value of α is taken as 0.75 (and 1.5), with several different values of θ1 and θ2 chosen so that the reliability parameter R takes the values 0.25, 0.40, 0.50, 0.70 and 0.90. From each sample, we estimate α from equation (11) using a simple iterative algorithm, and we use this estimate of α to evaluate θ̂1 and θ̂2 from equations (10). Consequently, we get the MLE R̂_M of R. For Bayesian estimation under the squared error and LINEX loss functions, small values (0.001) are assigned to the hyperparameters of the gamma prior densities to allow a meaningful comparison with the MLE of R. We report the average mean squared errors (MSEs) of the different estimators in Tables 3 and 4. We compute the 95% confidence interval based on the asymptotic distribution of R̂_M, the bootstrap (p-boot and t-boot) confidence intervals, and the credible interval. The average lengths and coverage probabilities (CPs) at the 95% confidence level are reported in Tables 5 and 6.
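A compact version of this experiment can be run in a few lines. To keep the replication loop fast, the sketch below treats α as known (the full study re-estimates α in every replication), so it reproduces only the qualitative behavior of the MSE of R̂_M; all function names are our own:

```python
import numpy as np

def rew(n, alpha, theta, rng):
    """EW(alpha, theta) variates via inverse-transform sampling."""
    u = rng.uniform(size=n)
    return (-np.log(1.0 - u ** (1.0 / theta))) ** (1.0 / alpha)

def r_mle(x, y, alpha):
    # Closed-form theta-hats with alpha treated as known, then R_M (equation (12)).
    t1 = len(x) / (-np.log(-np.expm1(-x ** alpha)).sum())
    t2 = len(y) / (-np.log(-np.expm1(-y ** alpha)).sum())
    return t1 / (t1 + t2)

def mse_of_r_mle(n1, n2, alpha, theta1, theta2, reps=2000, seed=0):
    rng = np.random.default_rng(seed)
    r_true = theta1 / (theta1 + theta2)
    est = np.array([r_mle(rew(n1, alpha, theta1, rng),
                          rew(n2, alpha, theta2, rng), alpha)
                    for _ in range(reps)])
    return np.mean((est - r_true) ** 2)

mse20 = mse_of_r_mle(20, 20, 0.75, 3.0, 1.5)   # R = 2/3, small samples
mse50 = mse_of_r_mle(50, 50, 0.75, 3.0, 1.5)   # same R, larger samples
```

As the tables below report for the full study, the MSE falls as the sample sizes grow, consistent with the consistency of R̂_M.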
From the results in Tables 3 and 4, the following points are observed from this simulation:
- All estimators perform quite well in terms of the MSEs for all sample sizes.
- The ML estimator works well even with small sample sizes. This illustrates the consistency of all the estimators.
- The MSE of R̂_BSL is the smallest compared with those of the other estimators, especially for small sample sizes.
- The MSEs decrease as the sample size increases, for all methods and for the different values of R.
- For fixed sample sizes (say, (10, 10) or (35, 20)), as R increases the MSEs increase over the range 0 < R ≤ 0.5 and decrease over the range 0.5 < R ≤ 1.
- For small sample sizes, the MSEs of the different estimators in the case n1 ≠ n2 are smaller than the MSEs in the case n1 = n2.
Examining Tables 5 and 6, it is clear that:
- The average lengths of all intervals decrease as the sample size increases.
- The average lengths of the credible interval are smaller than those of the asymptotic and bootstrap confidence intervals for all values of R and all sample sizes.
- For fixed sample sizes, the average lengths of the different intervals increase as R increases over the range 0 < R ≤ 0.5, and decrease as R increases over the range 0.5 < R ≤ 1.
- For small sample sizes, the average lengths of the different intervals in the case n1 ≠ n2 are smaller than the lengths in the case n1 = n2.
- The coverage probabilities of the bootstrap confidence intervals preserve the nominal level even for small sample sizes.
- The coverage probabilities of the asymptotic confidence intervals are slightly lower than the nominal level.
- The coverage probabilities of the credible intervals, which are based on vague prior information, are lower than the nominal level.
- In brief, the bootstrap confidence intervals perform best among the intervals considered here in terms of coverage probability, while the credible interval is the best in terms of interval length.
Other simulation results were also obtained at α = 1.5 for the same sample sizes cited above. They are not reported here since they show a pattern similar to that of the results in Tables 3, 4, 5 and 6.

Conclusion
In this article, we studied Bayesian and non-Bayesian inference for the stress-strength parameter R = P(X > Y) when X and Y both follow the exponentiated Weibull distribution. We employed the maximum likelihood method to obtain the MLE of R. Since the exact distribution of R̂_M is difficult to obtain, we resorted to its asymptotic distribution to compute an asymptotic confidence interval. A parametric bootstrap procedure was conducted to evaluate the estimate of R, and different bootstrap confidence intervals were computed. We derived two Bayes estimates of R based on independent gamma priors, using Lindley's approximation procedure under the squared error and LINEX loss functions. We also derived a credible interval using the empirical Bayes method of Lindley (1969) and Awad and Gharaf (1986). The simulation results indicate that the Bayesian estimator under the squared error loss function performs best, even for small sample sizes.

Figure 1. Q-Q plot of the fitted EW distribution for data set 1 (single carbon fiber data)

Table 1. Confidence and credible intervals for R (single carbon fiber data)

Table 2. Confidence and credible intervals for R (jute fiber data)