M/M/1 Model with Unreliable Service

We define a new term, 'unreliable service,' and construct the corresponding embedded Markov chain for an M/M/1 queue operating under this protocol. Sufficient conditions for positive recurrence and a closed form of the stationary distribution are provided. Furthermore, we compute the probability generating function of the stationary queue length and the Laplace-Stieltjes transform of the stationary waiting time. In the course of the analysis, an interesting decomposition of both the queue length and the waiting time emerges. A number of known queueing models can be recovered from ours by taking limits of certain parameters.


Introduction
Queueing theory covers a large body of models, including queues with interruptions, breakdowns, batch arrivals, batch service, and the like. A typical assumption made for these models reads: at the moment of service time completion, the customer is assumed to have been served. This, however, need not be the case! Imagine, for example, that a service representative (server) has a message to communicate (in audible words) to an arriving client (customer) in a loud, noisy environment. The customer is expected to hear the message spoken by the server, but due to the noisy environment, the service may fail. That is, the person who was supposed to hear the message could not fully understand it. Under this scenario, it was not the server's fault; instead, the external interference caused the service to fail. This is the prevailing idea which motivated our work in this paper. Specifically, we seek to consider and analyze an M/M/1 queue with what we call 'unreliable service.' In this model, the term 'unreliable service' refers to the fact that the server may not always complete its service successfully. While service failure has been studied extensively in the literature, our model is different in that the failure is not due to the server itself by means of a 'breakdown,' nor is it due to the customer leaving the queue during the service time. Rather, the success or failure of a job is due to external forces and is entirely random. Furthermore, neither the customer nor the server knows whether a job has failed or succeeded until after the job's service time has been completed. Such a queue can arise in many different areas and fields; all that is necessary is for some sort of quality check to be performed after service. This quality check would look at some set of measurements with certain thresholds and would conclude that the service was either successful or not.
Another key aspect of our model is that it preserves the FCFS (First Come First Serve) discipline of the queue. Namely, when a customer's service fails, the customer does not lose its place in the queue, and the service is repeated until it is successful. Our approach utilizes an embedded Markov chain methodology, similar to that of Xu and Tian (Xu & Tian, 2009). It should be noted that one can construct an M/PH/1 queue with similar properties. However, such a model imposes an additional, undesirable restriction relating µ to β₁ + β₂ (Latouche & Ramaswami, 1999); see below for the definitions of these parameters.

Definitions
We begin by defining our process, state space, and parameters.
Definition 2.1. Let {N(t) | t ≥ 0} be the number of customers in the queue at time t, and let

S(t) = 1 immediately after service is rendered, and S(t) = 0 otherwise.

Then {(N(t), S(t)) | t ≥ 0} is a Markov process on the state space

Ω = {(0, 0)} ∪ {(n, s) : n ≥ 1, s ∈ {0, 1}}.

Define the following parameters:
• λ : the rate of the Poisson arrival process.
• µ : the rate of service, successful or not.
• β₁ : the rate of a successful service.
• β₂ : the rate of a failed service.
To visualize such a Markovian process, it is helpful to construct the state transition rate diagram.
Figure 1. Markovian state transition rate diagram.
Formally, we define a 'successful service' to be a transition (n, 1) −→ (n − 1, 0), which is represented in the state transition diagram as having rate β₁. Similarly, we define a 'failed service' to be a transition (n, 1) −→ (n, 0) with transition rate β₂. Accordingly, we can compute the probabilities of a 'successful' or 'failed' service explicitly by considering the transition probabilities of the embedded Markov chain.
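As an illustration, the embedded-chain jump probabilities out of a busy state (n, 1), together with the per-attempt success probability β₁/(β₁ + β₂), can be computed and Monte Carlo checked with a short pure-Python sketch; the parameter values below are illustrative, not taken from the paper.

```python
import random

# Illustrative parameters (not from the paper).
lam, mu, b1, b2 = 1.0, 5.0, 4.0, 1.0

# From a busy state (n, 1) the competing exponential clocks are an arrival
# (rate lam), a successful service (rate b1), and a failed service (rate b2),
# so the embedded jump chain leaves (n, 1) with probabilities proportional
# to those rates.
total = lam + b1 + b2
p_arrival = lam / total   # (n, 1) -> (n + 1, 1)
p_success = b1 / total    # (n, 1) -> (n - 1, 0)
p_failure = b2 / total    # (n, 1) -> (n, 0)

# Arrivals do not affect the service outcome, so a single attempt succeeds
# with probability b1 / (b1 + b2), and the number of attempts per customer
# is geometric with mean (b1 + b2) / b1.
p_attempt = b1 / (b1 + b2)
mean_attempts = (b1 + b2) / b1

# Monte Carlo cross-check: an attempt succeeds iff the Exp(b1) clock rings
# before the Exp(b2) clock.
random.seed(42)
trials = 200_000
wins = sum(random.expovariate(b1) < random.expovariate(b2)
           for _ in range(trials))
estimate = wins / trials
print(p_attempt, round(estimate, 3))
```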

Infinitesimal Matrix
Ordering the states as (0, 0), (1, 0), (1, 1), (2, 0), (2, 1), . . . , the process has the block-tridiagonal infinitesimal matrix

Q =
  [ B₁  B₂              ]
  [ B₀  A₁  A₀          ]
  [     A₂  A₁  A₀      ]
  [          ·   ·   ·  ]     (2)

where, reading the rates off Figure 1,

B₁ = (−λ),  B₂ = (λ, 0),  B₀ = (0, β₁)ᵀ,  A₀ = λI,

A₁ = [ −(λ+µ)       µ       ]        A₂ = [ 0    0 ]
     [   β₂    −(λ+β₁+β₂)   ],            [ β₁   0 ].
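To make the block structure concrete, the following pure-Python sketch assembles a finite truncation of Q directly from the transition rates described above. The rates and truncation level are illustrative; transitions past the truncation level are simply dropped, so every retained row still sums to zero.

```python
# Illustrative parameters and truncation level (not from the paper).
lam, mu, b1, b2 = 1.0, 5.0, 4.0, 1.0
K = 6

# States: (0, 0), then (n, 0), (n, 1) for n = 1..K.
states = [(0, 0)] + [(n, s) for n in range(1, K + 1) for s in (0, 1)]
idx = {st: i for i, st in enumerate(states)}
m = len(states)
Q = [[0.0] * m for _ in range(m)]

def add_rate(src, dst, rate):
    # Transitions past the truncation level are dropped entirely, so every
    # retained row of Q still sums to zero.
    if dst in idx:
        Q[idx[src]][idx[dst]] += rate
        Q[idx[src]][idx[src]] -= rate

for (n, s) in states:
    add_rate((n, s), (n + 1, s), lam)       # arrival
    if n >= 1 and s == 0:
        add_rate((n, 0), (n, 1), mu)        # service rendered
    if s == 1:
        add_rate((n, 1), (n - 1, 0), b1)    # successful service
        add_rate((n, 1), (n, 0), b2)        # failed service

row_sums = [sum(row) for row in Q]
print(m, max(abs(r) for r in row_sums))
```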

Positive Recurrence
Since the matrix Q has a block-tridiagonal structure, we have a QBD (quasi-birth-death) Markov process. Accordingly, we apply Theorem 1.5.1 from Neuts (Neuts, 1981) to prove the following lemma, which will be used to show positive recurrence and to find the stationary distribution explicitly.
Lemma 4.1. The irreducible, block-tridiagonal Markov process with infinitesimal matrix Q is positive recurrent if and only if:
• the minimal non-negative solution R of the quadratic matrix equation

A₀ + RA₁ + R²A₂ = 0

has sp(R) < 1, and
• there exists a positive vector (x₀, x₁) such that (x₀, x₁)B[R] = 0, where

B[R] = [ B₁  B₂        ]
       [ B₀  A₁ + RA₂  ],

and (x₀, x₁) is normalized by x₀ + x₁(I − R)⁻¹e = 1. The stationary distribution then satisfies π₀ = x₀ and πₙ = x₁Rⁿ⁻¹ for n ≥ 1.
Our lemma, unlike Theorem 1.5.1 (Neuts, 1981), is stated in terms of the infinitesimal matrix Q rather than a Markov chain transition probability matrix.
Proof. Let τ ≥ maxᵢ |qᵢᵢ| and define P = I + τ⁻¹Q. Then we have Pe = (I + τ⁻¹Q)e = Ie + τ⁻¹Qe = e, since Qe = 0; hence P is the stochastic transition probability matrix of a discrete-time Markov chain.
Theorem 1.5.1 from (Neuts, 1981) states that P, and consequently Q, is positive recurrent if and only if:
• the minimal non-negative solution R of the quadratic matrix equation

R = A′₀ + RA′₁ + R²A′₂

has sp(R) < 1, and
• there exists a positive vector (x₀, x₁) such that (x₀, x₁)B′[R] = (x₀, x₁), where A′₀, A′₁, A′₂ and B′[R] are the blocks of P corresponding to A₀, A₁, A₂ and B[R], with the stationary distribution satisfying πₙ = x₁Rⁿ⁻¹ for n ≥ 1.
To finish our proof, we must restate the conditions on P in terms of conditions on Q. Since P = I + τ⁻¹Q, the blocks satisfy A′₀ = τ⁻¹A₀, A′₁ = I + τ⁻¹A₁, and A′₂ = τ⁻¹A₂, so that R = A′₀ + RA′₁ + R²A′₂ is equivalent to A₀ + RA₁ + R²A₂ = 0. Similarly, B′[R] = I + τ⁻¹B[R], so that (x₀, x₁)B′[R] = (x₀, x₁) is equivalent to (x₀, x₁)B[R] = 0.
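The uniformization step in the proof, P = I + τ⁻¹Q, can be illustrated numerically on a small truncated generator. This is a sketch with illustrative parameters; the three states are (0, 0), (1, 0), (1, 1), and arrival transitions out of level 1 are dropped by the truncation so that rows sum to zero.

```python
# Illustrative parameters (not from the paper).
lam, mu, b1, b2 = 1.0, 5.0, 4.0, 1.0

# Truncated generator on states (0,0), (1,0), (1,1); the arrival transitions
# out of level 1 are dropped by the truncation, so rows still sum to zero.
Q = [
    [-lam, lam, 0.0],
    [0.0, -mu, mu],
    [b1, b2, -(b1 + b2)],
]

# Uniformization: with tau >= max |q_ii|, P = I + Q / tau is stochastic.
tau = max(-Q[i][i] for i in range(3))
P = [[(1.0 if i == j else 0.0) + Q[i][j] / tau for j in range(3)]
     for i in range(3)]
print(tau, P)
```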

The Quadratic Matrix Equation
Thanks to Lemma 4.1, we seek the minimal non-negative solution R to the quadratic matrix equation

A₀ + RA₁ + R²A₂ = 0.     (6)

There are many methods for solving such equations in the literature. Some are numerical in nature (Guo, 2014; S. Seo, J. Seo & H. Kim, 2014), while others are analytical for particular cases (Adan, Wessels & Zijm, 1993). However, purely analytical methods are generally preferred to numerical ones when they are feasible. In our case, we employ the direct method, whereby we solve the system of equations generated by equating the matrices entry by entry.
Writing R = (rᵢⱼ) entrywise, (6) can be restated as the following system of equations:

λ − (λ+µ)r₁₁ + β₂r₁₂ + β₁r₁₂(r₁₁ + r₂₂) = 0
µr₁₁ − (λ+β₁+β₂)r₁₂ = 0
−(λ+µ)r₂₁ + β₂r₂₂ + β₁(r₁₂r₂₁ + r₂₂²) = 0
λ + µr₂₁ − (λ+β₁+β₂)r₂₂ = 0     (7)

The analytical minimal non-negative solution to (7) is given by:

The Spectral Radius of R

At this point, we can compute the spectral radius of R explicitly and construct a more readily verifiable sufficient condition under which our model will be positive recurrent.
Corollary. By Lemma 4.1, the infinitesimal matrix Q given in equation (2) is positive recurrent if and only if λ(µ + β₁ + β₂) < µβ₁.
Proof. We compute the spectral radius of R by solving the scalar quadratic equation generated by det(R − ρI) = 0, yielding that the eigenvalues ρ₀, ρ₁ satisfy the following quadratic equation:

µβ₁ρ² − λ(λ + µ + β₁ + β₂)ρ + λ² = 0.     (9)

It is clear by inspection that the largest of these eigenvalues in (9) will contain the positive radical. Thus, by Lemma 4.1, Q is positive recurrent if and only if sp(R) < 1, which, upon evaluating the quadratic in (9) at ρ = 1, is equivalent to λ(µ + β₁ + β₂) < µβ₁. Intuitively, this says that the arrival rate times the mean total (repeated) service time per customer, (µ + β₁ + β₂)/(µβ₁), is less than one.
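As a numerical cross-check, the minimal solution R can be computed by the standard fixed-point iteration R ← −(A₀ + R²A₂)A₁⁻¹, after which sp(R) and det R are available. The sketch below assumes the QBD blocks read off Figure 1 and illustrative parameter values, and compares sp(R) < 1 with the mean-drift stability condition λ(µ + β₁ + β₂) < µβ₁.

```python
# Illustrative parameters (not from the paper).
lam, mu, b1, b2 = 1.0, 5.0, 4.0, 1.0

# QBD blocks as read off the transition rates (an assumption of this sketch).
A0 = [[lam, 0.0], [0.0, lam]]
A1 = [[-(lam + mu), mu], [b2, -(lam + b1 + b2)]]
A2 = [[0.0, 0.0], [b1, 0.0]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# -A1^{-1} for a 2x2 matrix, used by the fixed-point iteration.
d1 = A1[0][0] * A1[1][1] - A1[0][1] * A1[1][0]
neg_inv_A1 = [[-A1[1][1] / d1, A1[0][1] / d1],
              [A1[1][0] / d1, -A1[0][0] / d1]]

# Fixed-point iteration R <- -(A0 + R^2 A2) A1^{-1}, starting from R = 0,
# converges monotonically to the minimal non-negative solution.
R = [[0.0, 0.0], [0.0, 0.0]]
for _ in range(500):
    M = mul(mul(R, R), A2)
    R = mul([[A0[i][j] + M[i][j] for j in range(2)] for i in range(2)],
            neg_inv_A1)

tr = R[0][0] + R[1][1]
detR = R[0][0] * R[1][1] - R[0][1] * R[1][0]
disc = tr * tr - 4.0 * detR
sp = (max(abs(tr + disc ** 0.5), abs(tr - disc ** 0.5)) / 2.0
      if disc >= 0 else detR ** 0.5)

stable = lam * (mu + b1 + b2) < mu * b1  # mean-drift stability condition
print(round(sp, 6), round(detR, 6), stable)
```

Note that det R equals the product of the eigenvalues of R, so it can be checked against the identity ρ₀ρ₁ = λ²/(µβ₁).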

The Explicit form of R k
Proposition 7.1. Using the scalar-factored form of R in (8), we find a closed form for Rᵏ:
Proof. By mathematical induction: we show the claim is true for k = 1, assume it is true for arbitrary k, and then show it is true for k + 1.
Remark. Two substitutions were needed in this derivation, namely ρ₀ + ρ₁ = λ(λ + µ + β₁ + β₂)/(µβ₁) and ρ₀ρ₁ = λ²/(µβ₁). These can easily be verified from (9).
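The closed form for Rᵏ can be sanity-checked numerically: any 2×2 matrix with distinct eigenvalues ρ₀, ρ₁ satisfies the scalar-factored (Lagrange-Sylvester) identity used below. This sketch recomputes R by fixed-point iteration under the same illustrative assumptions as before (QBD blocks read off Figure 1, sample parameter values) and compares the identity against repeated multiplication.

```python
# Illustrative parameters; QBD blocks as read off the transition rates
# (assumptions of this sketch).
lam, mu, b1, b2 = 1.0, 5.0, 4.0, 1.0
A0 = [[lam, 0.0], [0.0, lam]]
A1 = [[-(lam + mu), mu], [b2, -(lam + b1 + b2)]]
A2 = [[0.0, 0.0], [b1, 0.0]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

d1 = A1[0][0] * A1[1][1] - A1[0][1] * A1[1][0]
neg_inv_A1 = [[-A1[1][1] / d1, A1[0][1] / d1],
              [A1[1][0] / d1, -A1[0][0] / d1]]
R = [[0.0, 0.0], [0.0, 0.0]]
for _ in range(500):
    M = mul(mul(R, R), A2)
    R = mul([[A0[i][j] + M[i][j] for j in range(2)] for i in range(2)],
            neg_inv_A1)

# Eigenvalues rho0 > rho1 of the 2x2 matrix R.
tr = R[0][0] + R[1][1]
detR = R[0][0] * R[1][1] - R[0][1] * R[1][0]
disc = (tr * tr - 4.0 * detR) ** 0.5
rho0, rho1 = (tr + disc) / 2.0, (tr - disc) / 2.0

def power_direct(k):
    P = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(k):
        P = mul(P, R)
    return P

def power_spectral(k):
    # R^k = (rho0^k (R - rho1 I) - rho1^k (R - rho0 I)) / (rho0 - rho1)
    return [[(rho0 ** k * (R[i][j] - rho1 * (i == j))
              - rho1 ** k * (R[i][j] - rho0 * (i == j))) / (rho0 - rho1)
             for j in range(2)] for i in range(2)]

k = 5
diff = max(abs(power_direct(k)[i][j] - power_spectral(k)[i][j])
           for i in range(2) for j in range(2))
print(rho0, rho1, diff)
```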

The Initial Terms of π
Next, we turn our attention to computing B[R] and a positive vector (x₀, x₁) such that (x₀, x₁)B[R] = 0:

We now seek to normalize the solution in order to generate the first three terms of π:

(π₀₀, π₁₀, π₁₁) = K(x₀, x₁),

where K is the normalizing constant.
Remark. We observe the condition given by the Corollary to Lemma 4.1 for positive recurrence:
Proof. To motivate the proof, we begin by noting that:

and consider:

We have:
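The boundary computation can be sketched numerically as well: form the 3×3 matrix B[R], take a left null vector, and normalize by x₀ + x₁(I − R)⁻¹e = 1. The block entries below (B₁ = (−λ), B₂ = (λ, 0), B₀ = (0, β₁)ᵀ) are as read off Figure 1, and the parameter values are illustrative.

```python
# Illustrative parameters; boundary blocks B1 = (-lam), B2 = (lam, 0),
# B0 = (0, b1)^T and QBD blocks as read off the transition rates
# (assumptions of this sketch).
lam, mu, b1, b2 = 1.0, 5.0, 4.0, 1.0
A0 = [[lam, 0.0], [0.0, lam]]
A1 = [[-(lam + mu), mu], [b2, -(lam + b1 + b2)]]
A2 = [[0.0, 0.0], [b1, 0.0]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

d1 = A1[0][0] * A1[1][1] - A1[0][1] * A1[1][0]
neg_inv_A1 = [[-A1[1][1] / d1, A1[0][1] / d1],
              [A1[1][0] / d1, -A1[0][0] / d1]]
R = [[0.0, 0.0], [0.0, 0.0]]
for _ in range(500):
    M = mul(mul(R, R), A2)
    R = mul([[A0[i][j] + M[i][j] for j in range(2)] for i in range(2)],
            neg_inv_A1)

# Assemble the 3x3 boundary matrix B[R] = [[B1, B2], [B0, A1 + R A2]].
RA2 = mul(R, A2)
B = [[-lam, lam, 0.0],
     [0.0, A1[0][0] + RA2[0][0], A1[0][1] + RA2[0][1]],
     [b1, A1[1][0] + RA2[1][0], A1[1][1] + RA2[1][1]]]

# A left null vector of B: the cross product of the first two columns of B
# is orthogonal to both, hence spans the left null space when B has rank 2.
c0 = [B[i][0] for i in range(3)]
c1 = [B[i][1] for i in range(3)]
x = [c0[1] * c1[2] - c0[2] * c1[1],
     c0[2] * c1[0] - c0[0] * c1[2],
     c0[0] * c1[1] - c0[1] * c1[0]]
if sum(x) < 0:
    x = [-v for v in x]

# Normalize by x0 + x1 (I - R)^{-1} e = 1 and read off the first terms of pi.
IR = [[1.0 - R[0][0], -R[0][1]], [-R[1][0], 1.0 - R[1][1]]]
dI = IR[0][0] * IR[1][1] - IR[0][1] * IR[1][0]
inv_e = [(IR[1][1] - IR[0][1]) / dI, (IR[0][0] - IR[1][0]) / dI]
mass = x[0] + x[1] * inv_e[0] + x[2] * inv_e[1]
pi00, pi10, pi11 = x[0] / mass, x[1] / mass, x[2] / mass
print(pi00, pi10, pi11)
```

With these sample rates the offered load λ(µ + β₁ + β₂)/(µβ₁) is 1/2, and indeed the computed π₀₀ comes out to 1/2.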

Little's Distributional Law
Since we have the P.G.F. of N, we may now employ Little's Distributional Law, named after John Little for his work in 1961 (Little, 1961) and proved in general by Keilson and Servi in 1988 (Keilson & Servi, 1988). Then:

Proof. While we refer the reader to Keilson and Servi (Keilson & Servi, 1988) for details, we give an elementary direct proof in line with our notation. Given the definition of the P.G.F. of N, we rewrite P(N = k) via total probability and obtain:
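Little's law E[N] = λE[W] can also be checked by simulation. The sketch below simulates the FCFS queue, assuming each customer's total server occupancy is a geometric number of attempts, each an Exp(µ) service followed by an Exp(β₁ + β₂) evaluation; the arrival-average number in system (PASTA) is then compared with λ times the mean sojourn time. Parameters are illustrative and the RNG is seeded.

```python
import bisect
import random

# Illustrative parameters (not from the paper); the system is stable here
# since lam * E[occupancy] = 0.5 < 1 under these values.
lam, mu, b1, b2 = 1.0, 5.0, 4.0, 1.0
random.seed(7)
n = 100_000

def occupancy():
    # Total server occupancy per customer: attempts repeat until success,
    # each attempt an Exp(mu) service followed by an Exp(b1 + b2) evaluation.
    total = 0.0
    while True:
        total += random.expovariate(mu) + random.expovariate(b1 + b2)
        if random.random() < b1 / (b1 + b2):
            return total

arrivals, t = [], 0.0
for _ in range(n):
    t += random.expovariate(lam)
    arrivals.append(t)

services = [occupancy() for _ in range(n)]
departures, prev = [], 0.0
for a, s in zip(arrivals, services):
    prev = max(a, prev) + s      # FCFS single server
    departures.append(prev)

mean_S = sum(services) / n
mean_T = sum(d - a for a, d in zip(arrivals, departures)) / n

# Number in system seen by each arrival (departures are sorted under FCFS);
# by PASTA this estimates the time-average number in system.
mean_N = sum(j - bisect.bisect_right(departures, a)
             for j, a in enumerate(arrivals)) / n

print(round(mean_S, 3), round(mean_T, 3), round(mean_N, 3))
```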
Proof. Using Theorem 8.2, we can find the L.S.T. of W explicitly: