Tail asymptotics for the \(M_1,M_2/G_1,G_2/1\) retrial queue with non-preemptive priority

Published in: Queueing Systems

Abstract

Stochastic networks with complex structure are key modelling tools for many important applications. In this paper, we consider a specific type of network: retrial queueing systems with priority. Queueing systems of this type are important in various applications, including telecommunication and computer networks managing big data. The system considered here receives two types of customers: Type-1 customers (waiting in a queue) have non-preemptive priority over Type-2 customers (waiting in an orbit) to receive service. For this system, we propose an exhaustive version of the stochastic decomposition approach, which is one of the main contributions of this paper, for the purpose of studying the asymptotic behaviour of the stationary tail probability of the number of customers in this retrial queue with two types of customers. Under the assumption that the service times of Type-1 customers have a regularly varying tail and the service times of Type-2 customers have a lighter tail, we obtain tail asymptotic properties for the numbers of customers in the queue and in the orbit, respectively, conditional on the server's status, in terms of the exhaustive stochastic decomposition results. These tail asymptotic results are new and constitute another main contribution of this paper. Tail asymptotic properties are important not only in their own right, but also as key tools for approximating performance metrics and constructing numerical algorithms.


References

  1. Artalejo, J.R., Dudin, A.N., Klimenok, V.I.: Stationary analysis of a retrial queue with preemptive repeated attempts. Oper. Res. Lett. 28, 173–180 (2001)

  2. Artalejo, J.R., Gómez-Corral, A.: Retrial Queueing Systems. Springer, Berlin (2008)

  3. Asmussen, S., Klüppelberg, C., Sigman, K.: Sampling at subexponential times, with queueing applications. Stoch. Process. Their Appl. 79(2), 265–286 (1999)

  4. Bingham, N.H., Goldie, C.M., Teugels, J.L.: Regular Variation. Cambridge University Press, Cambridge (1989)

  5. Borst, S.C., Boxma, O.J., Nunez-Queija, R., Zwart, A.P.: The impact of the service discipline on delay asymptotics. Perform. Eval. 54, 175–206 (2003)

  6. Boxma, O., Denisov, D.: Sojourn time tails in the single server queue with heavy-tailed service times. Queue. Syst. 69, 101–119 (2011)

  7. Boxma, O., Zwart, B.: Tails in scheduling. ACM Sigmetrics Perform. Eval. Rev. 34, 13–20 (2007)

  8. Choi, B.D., Chang, Y.: Single server retrial queues with priority calls. Math. Comput. Model. 30, 7–32 (1999)

  9. Dimitriou, I.: A mixed priority retrial queue with negative arrivals, unreliable server and multiple vacations. Appl. Math. Model. 37, 1295–1309 (2013)

  10. de Meyer, A., Teugels, J.L.: On the asymptotic behavior of the distributions of the busy period and service time in \(M/G/1\). J. Appl. Probab. 17, 802–813 (1980)

  11. Dudin, A.N., Lee, M.H., Dudina, O., Lee, S.K.: Analysis of priority retrial queue with many types of customers and servers reservation as a model of cognitive radio system. IEEE Trans. Commun. 65(1), 186–199 (2017)

  12. Embrechts, P., Klüppelberg, C., Mikosch, T.: Modelling Extremal Events for Insurance and Finance. Springer, Heidelberg (1997)

  13. Falin, G.I.: A survey of retrial queues. Queue. Syst. 7(2), 127–168 (1990)

  14. Falin, G.I., Artalejo, J.R., Martin, M.: On the single server retrial queue with priority customers. Queue. Syst. 14(3–4), 439–455 (1993)

  15. Feller, W.: An Introduction to Probability Theory and Its Applications, vol. II. Wiley, London (1971)

  16. Foss, S., Korshunov, D.: Sampling at a random time with a heavy-tailed distribution. Markov Process. Relat. Fields 6(4), 543–568 (2000)

  17. Foss, S., Korshunov, D., Zachary, S.: An Introduction to Heavy-Tailed and Subexponential Distributions. Springer, New York (2011)

  18. Gao, S.: A preemptive priority retrial queue with two classes of customers and general retrial times. Oper. Res. Int. J. 15, 233–251 (2015)

  19. Gómez-Corral, A.: Analysis of a single-server retrial queue with quasi-random input and nonpreemptive priority. Comput. Math. Appl. 43, 767–782 (2002)

  20. Grandell, J.: Mixed Poisson Processes. Chapman & Hall, London (1997)

  21. Kim, J., Kim, B.: A survey of retrial queueing systems. Ann. Oper. Res. 247(1), 3–36 (2016)

  22. Kim, J., Kim, J., Kim, B.: Regularly varying tail of the waiting time distribution in \(M/G/1\) retrial queue. Queue. Syst. 65(4), 365–383 (2010)

  23. Kim, J., Kim, B., Ko, S.-S.: Tail asymptotics for the queue size distribution in an \(M/G/1\) retrial queue. J. Appl. Probab. 44(4), 1111–1118 (2007)

  24. Lee, Y.: Discrete-time \(Geo^X/G/1\) queue with preemptive resume priority. Math. Comput. Model. 34, 243–250 (2001)

  25. Liu, B., Min, J., Zhao, Y.Q.: Refined tail asymptotic properties for the \(M^X/G/1\) retrial queue. Submitted. (arXiv:1801.02525) (2017)

  26. Liu, B., Zhao, Y.Q.: Analyzing retrial queues by censoring. Queue. Syst. 64(3), 203–225 (2010)

  27. Liu, B., Zhao, Y.Q.: Second order asymptotic properties for the tail probability of the number of customers in the \(M/G/1\) retrial queue. Submitted. (arXiv:1801.09607) (2017)

  28. Liu, B., Zhao, Y.Q.: Tail asymptotics for a retrial queue with Bernoulli schedule. Submitted. (arXiv:1804.00984) (2018)

  29. Liu, B., Wang, J., Zhao, Y.Q.: Tail asymptotics of the waiting time and the busy period for the \(M/G/1/K\) queues with subexponential service times. Queue. Syst. 76(1), 1–19 (2014)

  30. Liu, B., Wang, X., Zhao, Y.Q.: Tail asymptotics for \(M/M/c\) retrial queues with nonpersistent customers. Oper. Res. Int. J. 12(2), 173–188 (2012)

  31. Müller, A., Stoyan, D.: Comparison Methods for Stochastic Models and Risks. Wiley, London (2002)

  32. Phung-Duc, T.: Retrial queueing models: a survey on theory and applications. To appear in: Dohi, T., Ano, K., Kasahara, S. (eds.) Stochastic Operations Research in Business and Industry. World Scientific (http://infoshako.sk.tsukuba.ac.jp/tuan/papers/Tuan_chapter_ver3.pdf) (2017)

  33. Sutton, C., Jordan, M.I.: Bayesian inference for queueing networks and modeling of internet services. Ann. Appl. Stat. 5(1), 254–282 (2011)

  34. Walraevens, J., Claeys, D., Phung-Duc, T.: Asymptotics of queue length distributions in priority retrial queues. Perform. Eval. 127–128, 235–252 (2018)

  35. Wang, J.: On the single server retrial queue with priority subscribers and server break-downs. J. Syst. Sci. Complex 21, 304–315 (2008)

  36. Wu, J., Lian, Z.: A single-server retrial G-queue with priority and unreliable server under Bernoulli vacation schedule. Comput. Ind. Eng. 64, 84–93 (2013)

  37. Wu, J., Wang, J., Liu, Z.: A discrete-time Geo/G/1 retrial queue with preferred and impatient customers. Appl. Math. Model. 37, 2552–2561 (2013)

Acknowledgements

We thank the anonymous reviewer for her/his constructive comments and suggestions, which significantly improved the quality of this paper. This work was supported in part by the National Natural Science Foundation of China (Grant No. 71571002), the Research Project of Anhui Jianzhu University and a Discovery Grant from the Natural Sciences and Engineering Research Council of Canada (NSERC).

Corresponding author

Correspondence to Bin Liu.


Appendices

Appendix A: Definitions and useful results from the literature

Definition A.1

(for example, see Bingham, Goldie and Teugels [4]) A measurable function \(U:(0,\infty )\rightarrow (0,\infty )\) is regularly varying at \(\infty \) with index \(\sigma \in (-\infty ,\infty )\) (written \(U\in {\mathcal {R}}_{\sigma }\)) iff \(\lim _{t\rightarrow \infty }U(xt)/U(t)=x^{\sigma }\) for all \(x>0\). If \(\sigma =0\) we call U slowly varying, i.e., \(\lim _{t\rightarrow \infty }U(xt)/U(t)=1\) for all \(x>0\).
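Definition A.1 can be checked numerically for a concrete function. The following sketch (ours, not part of the paper) takes \(U(t)=t^{-2}\log t\), which lies in \({\mathcal {R}}_{-2}\) with slowly varying factor \(L(t)=\log t\), and evaluates the defining ratio at a very large t:

```python
import math

# Hypothetical test function: U(t) = t^(-2) * log(t) is regularly varying
# at infinity with index sigma = -2 and slowly varying factor L(t) = log(t).
def U(t: float) -> float:
    return t ** (-2.0) * math.log(t)

SIGMA = -2.0
T = 1e80  # emulates "t -> infinity"

# U(x t)/U(t) should be close to x^sigma for each fixed x > 0
ratios = {x: U(x * T) / U(T) for x in (0.5, 2.0, 10.0)}
for x, r in ratios.items():
    print(f"x = {x}: U(xt)/U(t) = {r:.6f}, x^sigma = {x ** SIGMA:.6f}")
```

The slowly varying factor cancels in the limit, which is why the ratios approach \(x^{\sigma }\) for every fixed x.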

For a distribution function F, denote \(\bar{F}{\mathop {=}\limits ^\mathrm{def}}1-F\) for the remainder of the paper.

Definition A.2

(for example, see Foss, Korshunov and Zachary [17]) A distribution F on \((0,\infty )\) belongs to the class of subexponential distributions (written \(F\in {\mathcal {S}}\)) if \(\lim _{t\rightarrow \infty }\overline{F^{*2}}(t)/\overline{F}(t)=2\), where \(F^{*2}\) denotes the 2-fold convolution of F with itself.
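The defining ratio in Definition A.2 can be observed numerically. The sketch below is our own illustration; the Pareto tail \(\bar{F}(t)=t^{-2}\) on \([1,\infty )\) is an arbitrary test choice. It computes \(\overline{F^{*2}}(t)\) by numerical integration and compares it with \(2\bar{F}(t)\):

```python
# Pareto tail F_bar(t) = t^(-2) on [1, inf): a standard heavy-tailed test case.
def F_bar(t: float) -> float:
    return min(1.0, t ** -2.0)

def f(y: float) -> float:
    """Density of F on [1, inf)."""
    return 2.0 * y ** -3.0

def conv_tail(t: float, n: int = 100_000) -> float:
    """P{X1 + X2 > t} = F_bar(t-1) + int_1^{t-1} F_bar(t-y) f(y) dy;
    the first term covers y >= t-1, where X2 > t-y holds surely."""
    step = (t - 2.0) / n
    ys = [1.0 + k * step for k in range(n + 1)]
    vals = [F_bar(t - y) * f(y) for y in ys]
    integral = step * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
    return F_bar(t - 1.0) + integral

t = 2000.0
ratio = conv_tail(t) / F_bar(t)   # should approach 2 as t grows
print(f"2-fold convolution tail over F_bar at t = {t}: {ratio:.4f}")
```

At finite t the ratio sits slightly above 2, with a correction of order \(1/t\) coming from the mean of F.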

Lemma A.1

(de Meyer and Teugels [10]) Under Assumption A1,

$$\begin{aligned} P\{T_{\alpha }>t\}\sim \frac{1}{(1-\rho _1)^{a_1+1}}\cdot t^{-a_1} L(t)\quad \text{ as } t\rightarrow \infty . \end{aligned}$$
(A.1)

The result (A.1) follows directly from the main theorem in [10].

Lemma A.2

(pp.580–581 in [12]) Let N be a r.v. with \(P\{N=k\}=(1-\sigma )\sigma ^{k-1}\), \(0<\sigma <1\), \(k\ge 1\), and \(\{Y_k\}_{k=1}^{\infty }\) be a sequence of non-negative, i.i.d. r.v.s having a common subexponential distribution F. Define \(S_n=\sum _{k=1}^n Y_k\). Then, \(P\{S_N > t\} \sim (1-F(t))/(1-\sigma )\) as \(t\rightarrow \infty \).
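Lemma A.2 can be verified numerically in a discrete setting. In the sketch below (ours; the power-law summand distribution and \(\sigma =0.5\) are arbitrary test choices), the compound-geometric distribution is computed exactly through the renewal-type recursion \(g=(1-\sigma )f+\sigma (f*g)\), which follows from the PGF identity \(G=(1-\sigma )f/(1-\sigma f)\):

```python
SIGMA = 0.5        # geometric parameter (test choice)
T = 1500           # compound distribution computed exactly up to T
K = 200_000        # truncation used to normalise Y's distribution

# Y is discrete with P{Y = k} proportional to k^(-3); its tail is
# ~ k^(-2)/2, regularly varying and hence subexponential.
Z = sum(k ** -3.0 for k in range(1, K + 1))
f = [0.0] * (T + 1)
for k in range(1, T + 1):
    f[k] = k ** -3.0 / Z

# g[n] = P{S_N = n}; the recursion is exact for n <= T because the
# convolution only ever uses indices <= n.
g = [0.0] * (T + 1)
for n in range(1, T + 1):
    acc = 0.0
    for k in range(1, n):
        acc += f[k] * g[n - k]
    g[n] = (1.0 - SIGMA) * f[n] + SIGMA * acc

t = 1500
tail_S = 1.0 - sum(g[: t + 1])
tail_Y = sum(k ** -3.0 / Z for k in range(t + 1, K + 1))
ratio = (1.0 - SIGMA) * tail_S / tail_Y   # should approach 1
print(f"(1-sigma) P{{S_N > t}} / F_bar(t) at t = {t}: {ratio:.4f}")
```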

Lemma A.3

(Proposition 3.1 in [3], or Theorem 3.1 in [16]) Let \(N_{\lambda }(t)\) be a Poisson process with rate \(\lambda \) and let T be a positive r.v. with distribution F, which is independent of \(N_{\lambda }(t)\). If \(\bar{F}(t)=P\{T>t\}\) is heavier than \(e^{-\sqrt{t}}\) as \(t\rightarrow \infty \), then \(P(N_{\lambda }(T)>j)\sim P\{T>j/\lambda \}\) as \(j\rightarrow \infty \).

Lemma A.3 holds for any distribution F with a regularly varying tail because it is heavier than \(e^{-\sqrt{t}}\) as \(t\rightarrow \infty \).
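This can also be seen numerically. The sketch below (ours; T Pareto with index 2 and \(\lambda =1\) are arbitrary test choices) evaluates \(P\{N_{\lambda }(T)>j\}=\int P\{\mathrm{Poisson}(\lambda u)>j\}\,\mathrm{d}F(u)\) by quadrature and compares it with \(P\{T>j/\lambda \}\):

```python
import math

LAM = 1.0        # Poisson rate (test choice)
A = 2.0          # Pareto index: P{T > t} = t^(-A) for t >= 1 (test choice)

def pois_sf(j: int, m: float) -> float:
    """P{Poisson(m) > j}, accumulating the pmf iteratively; for very large m
    the initial term underflows to 0 and the sf is correctly ~ 1."""
    term = math.exp(-m)
    cdf = term
    for k in range(1, j + 1):
        term *= m / k
        cdf += term
    return max(0.0, 1.0 - cdf)

def mixed_tail(j: int) -> float:
    """P{N_lambda(T) > j}: trapezoidal rule on a log-spaced grid, plus the
    region u > u_max where the Poisson sf is essentially 1."""
    u_max = 100.0 * j
    n = 3000
    us = [math.exp(i * math.log(u_max) / n) for i in range(n + 1)]
    vals = [pois_sf(j, LAM * u) * A * u ** (-A - 1.0) for u in us]
    integral = sum(0.5 * (vals[i] + vals[i + 1]) * (us[i + 1] - us[i])
                   for i in range(n))
    return integral + u_max ** -A

j = 300
ratio = mixed_tail(j) / ((j / LAM) ** -A)
print(f"P{{N(T) > j}} / P{{T > j/lambda}} at j = {j}: {ratio:.4f}")
```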

Lemma A.4

(p.181 in [20]) Let \(N_{\lambda }(t)\) be a Poisson process with rate \(\lambda \) and let T be a positive r.v. with distribution F, which is independent of \(N_{\lambda }(t)\). If \(\bar{F}(t) = P\{T>t\}\sim e^{-w t} t^{-h}L(t)\) as \(t\rightarrow \infty \) for \(w> 0\) and \(-\infty<h<\infty \), then

$$\begin{aligned} P(N_{\lambda }(T)>j)\sim \lambda (\lambda +w)^{h-1}\left( \frac{\lambda }{\lambda +w}\right) ^j j^{-h}L(j),\quad j\rightarrow \infty . \end{aligned}$$

Lemma A.5

(p.48 in [17]) Let F, \(F_1\) and \(F_2\) be distribution functions. Suppose that \(F\in {\mathcal {S}}\). If \(\bar{F}_i(t)/\bar{F}(t)\rightarrow c_i\) as \(t\rightarrow \infty \) for some \(c_i\ge 0, \; i=1,2\), then \(\overline{F_1*F_2}(t)/\bar{F}(t)\rightarrow c_1+c_2\) as \(t\rightarrow \infty \), where the notation \(F_1*F_2\) stands for the convolution of \(F_1\) and \(F_2\).

Lemma A.6

(pp.162–163 in [20]) Let N be a discrete non-negative integer-valued r.v. with mean value \(\mu _N\), and let \(\{Y_k\}_{k=1}^{\infty }\) be a sequence of non-negative i.i.d. r.v.s with mean value \(\mu _Y\). Define \(S_0\equiv 0\) and \(S_n=\sum _{k=1}^n Y_k\). If \(P\{Y_k>x\}\sim c_Y x^{-h} L(x) \) as \(x\rightarrow \infty \) and \(P\{N>m\}\sim c_N m^{-h}L(m)\) as \(m\rightarrow \infty \), where \(h> 1\), \(c_Y\ge 0\) and \(c_N\ge 0\), then \(P\{S_N > x\}\sim (c_N\mu _Y^h + \mu _N c_Y) x^{-h}L(x)\) as \(x\rightarrow \infty .\)

Remark A.1

It is a convention that in Lemma A.6, \(c_Y=0\) and \(c_N=0\) means that \(\lim _{x\rightarrow \infty }P\{Y_k>x\}/(x^{-h} L(x))=0\) and \(\lim _{m\rightarrow \infty }P\{N>m\}/(m^{-h} L(m))=0\), respectively.

The following two criteria, which are often used to verify that a function is completely monotone, are from Feller (see p.441 in [15]).

Criterion A.1 If \(\vartheta _1(\cdot )\) and \(\vartheta _2(\cdot )\) are completely monotone, so is their product \(\vartheta _1(\cdot )\vartheta _2(\cdot )\).

Criterion A.2 If \(\vartheta _3(\cdot )\) is completely monotone and \(\vartheta _4(\cdot )\) is a positive function with a completely monotone derivative \(\vartheta '_4(\cdot )\), then \(\vartheta _3(\vartheta _4(\cdot ))\) is completely monotone.

To prove Lemma C.2, we list some notation and results that will be used. Let F(x) be any distribution on \([0,\infty )\) with LST \(\phi (s)\). We denote the nth moment of F(x) by \(\phi _n\), \(n\ge 0\). It is well known that \(\phi _n<\infty \) iff

$$\begin{aligned} \phi (s)=\sum _{k=0}^{n}\frac{\phi _k}{k!}(-s)^k + o(s^n),\quad n\ge 0. \end{aligned}$$
(A.2)

Based on (A.2), we introduce the notation \(\phi _n(s)\) and \(\widehat{\phi }_n(s)\), defined by

$$\begin{aligned} \phi _n(s)&{\mathop {=}\limits ^\mathrm{def}}&(-1)^{n+1}\left\{ \phi (s)-\sum _{k=0}^{n}\frac{\phi _k}{k!}(-s)^k\right\} ,\quad n\ge 0, \end{aligned}$$
(A.3)
$$\begin{aligned} \widehat{\phi }_n(s)&{\mathop {=}\limits ^\mathrm{def}}&\phi _n(s)/s^{n+1},\quad n\ge 0. \end{aligned}$$
(A.4)

Lemma A.7

(pp.333–334 in [4]) Assume that \(n<d<n+1\), \(n\in \{0,1,2,\ldots \}\), then the following two statements are equivalent:

$$\begin{aligned} 1-F(t) \sim t^{-d} L(t),\quad t\rightarrow \infty ; \end{aligned}$$

and

$$\begin{aligned} \phi _n(s) \sim \frac{\Gamma (d-n)\Gamma (n+1-d)}{\Gamma (d)} s^{d}L(1/s),\quad s\downarrow 0. \end{aligned}$$

Lemma A.8

(Lemma 3.3 in [27]) Assume that \(n\in \{1,2,\ldots \}\), then the following two statements are equivalent:

$$\begin{aligned} 1-F(t)\sim t^{-n}L(t),\quad t\rightarrow \infty ; \end{aligned}$$
(A.5)

and

$$\begin{aligned} \lim _{s\downarrow 0}\frac{\widehat{\phi }_{n-1}(xs)-\widehat{\phi }_{n-1}(s)}{L(1/s)/(n-1)!}=-\log x, \quad \text{ for } \text{ all } x>0. \end{aligned}$$
(A.6)

In [27], Lemma A.8 was proved by applying Karamata’s theorem, the monotone density theorem and Theorem 3.9.1 presented in [4] (see, p.27, p.39 and pp.172–173, respectively).

Appendix B: Proofs for results in step 2

B.1 Proof of Proposition 3.1

By (2.14) and the definition of \(\alpha ^{(e)}(s)\), we can write \((1- h(u))/(1-u)= \lambda _2\alpha _1\cdot \alpha ^{(e)}(\lambda _2 - \lambda _2 u)\), which, together with (2.7), (2.2) and (2.11), gives

$$\begin{aligned} K_a(u)= & {} \frac{1-\rho _1}{p}\left[ q\cdot \frac{1-h(u)}{1-u} + p\right] =\rho _1 \alpha ^{(e)}(\lambda _2 - \lambda _2 u) +1-\rho _1, \end{aligned}$$
(B.1)

which leads to the stochastic decomposition given in (3.3) for the r.v. \(K_a\).

It follows from (2.8) that

$$\begin{aligned} K_b(z) = \beta ^{(e)}(\lambda - \lambda g(z))=\int _0^{\infty }\sum _{n=0}^{\infty }(g(z))^n\frac{(\lambda x)^n}{n!} e^{-\lambda x}\mathrm{d}F_{\beta }^{(e)}(x)=E\left( z^{ N_{\lambda ,X_g}(T_{\beta }^{(e)}) }\right) , \end{aligned}$$
(B.2)

which leads to the stochastic decomposition given in (3.4) for the r.v. \(K_b\).

It follows from (2.9) that

$$\begin{aligned} K_c(u)= & {} (1-\vartheta ) \left[ 1 - \frac{1- g(u)}{1-u}\cdot \frac{1-\beta _2(\lambda -\lambda g(u))}{1- g(u)} \right] ^{-1}\nonumber \\= & {} \frac{1-\vartheta }{ 1 - \vartheta \cdot K_a(u)\cdot \beta _2^{(e)}(\lambda - \lambda g(u)) }\nonumber \\= & {} 1-\vartheta +\vartheta \cdot \sum _{i=1}^{\infty }(1-\vartheta )\vartheta ^{i-1}\big (K_a(u)\cdot \beta _2^{(e)}(\lambda - \lambda g(u))\big )^i, \end{aligned}$$
(B.3)

which leads to the stochastic decomposition given in (3.5) for the r.v. \(K_c\).

Finally, (3.2) follows immediately from (2.6).

B.2 Proof of Proposition 3.2

We divide the proof of Proposition 3.2 into three parts.

B.2.1 \(S_{\beta _i}(z_1,z_2)\) are PGFs with detailed probabilistic interpretations

Following Definition 3.1, for the \(\mathrm{split}(N;c,1-c)\) it is easy to see that \((\sum _{k=1}^N X_k,N-\sum _{k=1}^N X_k)\) has the PGF

$$\begin{aligned} E \big (z_1^{\sum _{k=1}^N X_k} z_2^{N-\sum _{k=1}^N X_k} \big )= & {} \sum _{n=0}^{\infty } \left[ \prod _{k=1}^n E(z_1^{X_k} z_2^{1- X_k})\right] P\{N=n\}\nonumber \\= & {} \sum _{n=0}^{\infty } (cz_1+(1-c)z_2)^n P\{N=n\}, \end{aligned}$$
(B.4)

where \(\prod _{1}^0\equiv 1\).
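The identity (B.4) is the binomial theorem applied inside the expectation; the sketch below (ours; the truncated-Poisson law for N and the grid of \((z_1,z_2)\) points are arbitrary test choices) confirms it by direct enumeration:

```python
from math import comb, exp, factorial

C = 0.3          # marking probability (test choice)
NMAX = 40        # truncation level for N's support (test choice)

# P{N = n}: a renormalised Poisson(5) truncated at NMAX, purely a test law
w = [exp(-5.0) * 5.0 ** n / factorial(n) for n in range(NMAX + 1)]
W = sum(w)
pN = [x / W for x in w]

def lhs(z1: float, z2: float) -> float:
    """E[z1^B z2^(N-B)] with B | N=n ~ Binomial(n, C), by enumeration."""
    s = 0.0
    for n, p in enumerate(pN):
        inner = sum(comb(n, b) * C ** b * (1 - C) ** (n - b)
                    * z1 ** b * z2 ** (n - b) for b in range(n + 1))
        s += p * inner
    return s

def rhs(z1: float, z2: float) -> float:
    """The closed form E[(C z1 + (1-C) z2)^N] from (B.4)."""
    return sum(p * (C * z1 + (1 - C) * z2) ** n for n, p in enumerate(pN))

max_err = max(abs(lhs(z1, z2) - rhs(z1, z2))
              for z1 in (0.2, 0.7, 1.0) for z2 in (0.4, 0.9, 1.0))
print(f"max |LHS - RHS| over the test grid: {max_err:.2e}")
```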

Recalling (2.21), we can write, for \(i=1,2\),

$$\begin{aligned} S_{\beta _i}(z_1,z_2)=\beta _i^{(e)}(\lambda -\lambda _1 z_1-\lambda _2 z_2)=\int _0^{\infty }\sum _{n=0}^{\infty } (qz_1+pz_2)^n ((\lambda x)^n/n!) e^{-\lambda x}\mathrm{d}F_{\beta _i}^{(e)}(x), \end{aligned}$$
(B.5)

which leads to (3.13) by setting \(N=N_{\lambda }(T_{\beta _i}^{(e)})\) and \(c=q\) in (B.4).

B.2.2 \(M_1(z_1,z_2)\) is a PGF with a detailed probabilistic interpretation

It follows from (2.19) that

$$\begin{aligned} M_1(z_1,z_2)=\frac{1-\rho _1}{1-\rho _1 H_{\beta _1}(z_1,z_2)}=\sum _{n=0}^{\infty } (1-\rho _1) \rho _1^{n} \left( H_{\beta _1}(z_1,z_2)\right) ^n, \end{aligned}$$
(B.6)

where

$$\begin{aligned} H_{\beta _1}(z_1,z_2)= & {} \frac{1}{\rho _1}\cdot \frac{\beta _1(\lambda -\lambda _1 z_1-\lambda _2 z_2)-h(z_2)}{z_1-h(z_2)} \end{aligned}$$
(B.7)
$$\begin{aligned}= & {} \frac{1}{\rho _1}\cdot \frac{\beta _1(\lambda -\lambda _1 z_1-\lambda _2 z_2)-\beta _1(\lambda -\lambda _1 h(z_2)-\lambda _2 z_2)}{z_1-h(z_2)}\quad \hbox {(by (2.3))}.\nonumber \\ \end{aligned}$$
(B.8)

Clearly, by (B.6), \((M_{11},M_{12})\) can be regarded as a random sum of two-dimensional r.v.s, provided that \(H_{\beta _1}(z_1,z_2)\) is the PGF of a two-dimensional r.v. To verify this, we rewrite (B.8) as a power series. Note that

$$\begin{aligned} \beta _1(\lambda -\lambda q z_1-\lambda p z_2)= & {} \int _0^{\infty }\sum _{k=0}^{\infty }\frac{(\lambda (q z_1+p z_2)t)^k}{k!}\cdot e^{-\lambda t}\mathrm{d}F_{\beta _1}(t)\nonumber \\= & {} \beta _1(\lambda )+\sum _{k=1}^{\infty } b_{\beta _1,k} (q z_1+p z_2)^k, \end{aligned}$$
(B.9)

where \(b_{\beta _1,k}\) is given in (3.9). By (2.3) and (B.9),

$$\begin{aligned} h(z_2)=\beta _1(\lambda -\lambda q h(z_2)-\lambda p z_2)=\beta _1(\lambda )+\sum _{k=1}^{\infty } b_{\beta _1,k} (q h(z_2)+p z_2)^k. \end{aligned}$$
(B.10)

Substituting (B.9) and (B.10) into the numerator of the right-hand side of (B.8), we obtain

$$\begin{aligned} H_{\beta _1}(z_1,z_2)= & {} \frac{1}{\rho _1}\cdot \sum _{k=1}^{\infty } b_{\beta _1,k} \left( \frac{(q z_1+p z_2)^k-(q h(z_2)+p z_2)^k}{z_1-h(z_2)}\right) \nonumber \\= & {} \frac{q}{\rho _1}\cdot \sum _{k=1}^{\infty } b_{\beta _1,k} \sum _{i=1}^{k} (q z_1+p z_2)^{i-1}(q h(z_2)+p z_2)^{k-i}\nonumber \\= & {} \frac{q}{\rho _1}\cdot \sum _{k=1}^{\infty } k b_{\beta _1,k}\cdot D_{k}(z_1,z_2), \end{aligned}$$
(B.11)

where

$$\begin{aligned} D_{k}(z_1,z_2)= & {} \frac{1}{k}\sum _{i=1}^{k} (q z_1+p z_2)^{i-1}(q h(z_2)+p z_2)^{k-i}. \end{aligned}$$
(B.12)

Note that \(q h(z_2)+p z_2=g(z_2)\) and \(q z_1+p z_2\) are the PGFs of (one or two-dimensional) r.v.s. It follows from (B.12) that for \(k\ge 1\), \(D_{k}(z_1,z_2)\) is the PGF of a two-dimensional r.v. \((D_{k,1},D_{k,2})\), which can be constructed according to (3.6).

In addition,

$$\begin{aligned} \sum _{k=1}^{\infty } k b_{\beta _1,k}=\sum _{k=1}^{\infty } \int _0^{\infty }\frac{(\lambda t)^k}{(k-1)!}e^{-\lambda t}\mathrm{d}F_{\beta _1}(t)=\lambda \int _0^{\infty }t\mathrm{d}F_{\beta _1}(t)=\rho _1/q. \end{aligned}$$
(B.13)

Namely, \((q/\rho _1)\sum _{k=1}^{\infty }kb_{\beta _1,k}=1\), which, together with (B.11), implies that \(H_{\beta _1}(z_1,z_2)\) is the PGF of a two-dimensional r.v. \((H_{\beta _1,1},H_{\beta _1,2})\), which can be constructed according to (3.7).
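The normalisation (B.13) can be spot-checked in a case where \(b_{\beta _1,k}\) has a closed form. The sketch below (ours) takes an exponential service-time distribution with rate \(\mu \), purely as a test case, for which \(b_{\beta _1,k}=\big (\mu /(\lambda +\mu )\big )\big (\lambda /(\lambda +\mu )\big )^k\):

```python
LAM, MU_RATE = 1.3, 2.0   # hypothetical arrival and service rates (test choice)
r = LAM / (LAM + MU_RATE)

# Closed form for exponential service: b_k = (mu/(lam+mu)) * (lam/(lam+mu))^k,
# so sum_k k*b_k should equal lambda * beta_{1,1} = lambda / mu.
total = sum(k * (MU_RATE / (LAM + MU_RATE)) * r ** k for k in range(1, 2000))
expected = LAM / MU_RATE  # lambda * beta_{1,1}, since beta_{1,1} = 1/mu
print(f"sum_k k*b_k = {total:.10f}   lambda*beta_11 = {expected:.10f}")
```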

By (B.6), the two-dimensional r.v. \((M_{11},M_{12})\) can be constructed according to (3.14), which is a random sum of i.i.d. two-dimensional r.v.s \((H_{\beta _1,1}^{(i)},H_{\beta _1,2}^{(i)})\), \(i\ge 1\), each with the same PGF \(H_{\beta _1}(z_1,z_2)\).

B.2.3 \(M_2(z_1,z_2)\) is a PGF with a detailed probabilistic interpretation

Let

$$\begin{aligned} H_{\beta _2}(z_1,z_2)= & {} \frac{p}{q\rho _2}\cdot \frac{\beta _2(\lambda -\lambda _1 z_1-\lambda _2 z_2)-\beta _2(\lambda -\lambda _1 h(z_2)-\lambda _2 z_2)}{z_1-h(z_2)}. \end{aligned}$$
(B.14)

Using (B.14), (2.7) and (2.9), we can rewrite (2.20) as follows:

$$\begin{aligned} M_2(z_1,z_2)=\vartheta \cdot H_{\beta _2}(z_1,z_2)\cdot K_a(z_2)\cdot K_c(z_2)+ 1-\vartheta . \end{aligned}$$
(B.15)

Similarly to (B.11), we can derive (details omitted) from (B.14) that

$$\begin{aligned} H_{\beta _2}(z_1,z_2) =\frac{p}{\rho _2}\cdot \sum _{k=1}^{\infty } k b_{\beta _2,k}\cdot D_{k}(z_1,z_2), \end{aligned}$$
(B.16)

with \(b_{\beta _2,k}\) given in (3.10).

Similarly to (B.13), we can verify that \((p /\rho _2)\sum _{k=1}^{\infty }k b_{\beta _2,k}=1\), which, together with (B.16), implies that \(H_{\beta _2}(z_1,z_2)\) is the PGF of a two-dimensional r.v. \((H_{\beta _2,1},H_{\beta _2,2})\), which can be constructed according to (3.8).

By (B.15), the two-dimensional r.v. \((M_{21},M_{22})\) can be constructed according to (3.15).

Appendix C: Proofs of Lemmas 4.1 and 4.2 in step 3

In this section, we prove Lemma 4.1 and Lemma 4.2. Before proceeding, we collect the following facts, which will be used multiple times in our proofs.

Applying Karamata’s theorem (for example, p.28 in [4]), and using Assumption A1 and Lemma A.1, respectively, gives, as \(t\rightarrow \infty \),

$$\begin{aligned} P\{T_{\beta _1}^{(e)}>t\} \sim \frac{\lambda _1}{\rho _1(a_1-1)} \cdot t^{-a_1+1} L(t), \end{aligned}$$
(C.1)

and

$$\begin{aligned} P\{T_{\alpha }^{(e)}>t\} \sim \frac{1}{\alpha _1 (a_1-1)(1-\rho _1)^{a_1+1}} \cdot t^{-a_1+1} L(t). \end{aligned}$$
(C.2)
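Both (C.1) and (C.2) are instances of Karamata's theorem: integrating a regularly varying tail \(u^{-a}L(u)\) over \((t,\infty )\) gives \(t^{-a+1}L(t)/(a-1)\), so the equilibrium tail \((1/\mu )\int _t^{\infty }\bar{F}(u)\mathrm{d}u\) behaves like \(t^{-a+1}L(t)/(\mu (a-1))\). A numerical sketch (ours; the Pareto tail with \(a=2.5\) is an arbitrary test choice):

```python
import math

A_IDX = 2.5   # tail index (test choice); F_bar(u) = u^(-A_IDX) on [1, inf)

def F_bar(u: float) -> float:
    return min(1.0, u ** -A_IDX)

MU = 1.0 + 1.0 / (A_IDX - 1.0)   # mean of F: integral of F_bar over (0, inf)

def eq_tail(t: float, u_max: float = 1e6, n: int = 50_000) -> float:
    """Equilibrium tail (1/MU) * int_t^inf F_bar(u) du, via a log-spaced
    trapezoidal rule plus the exact power-law remainder beyond u_max."""
    us = [t * math.exp(i * math.log(u_max / t) / n) for i in range(n + 1)]
    vals = [F_bar(u) for u in us]
    integral = sum(0.5 * (vals[i] + vals[i + 1]) * (us[i + 1] - us[i])
                   for i in range(n))
    integral += u_max ** (1.0 - A_IDX) / (A_IDX - 1.0)
    return integral / MU

t = 50.0
predicted = t ** (1.0 - A_IDX) / (MU * (A_IDX - 1.0))   # Karamata asymptotics
ratio = eq_tail(t) / predicted
print(f"equilibrium tail / Karamata prediction at t = {t}: {ratio:.5f}")
```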

Applying Proposition 8.5 (p.181 in [20]) to the density \(\bar{F}_{\beta _2}(t)/\beta _{2,1}\) and using Assumption A2 gives, as \(t\rightarrow \infty \),

$$\begin{aligned} P\{T_{\beta _2}^{(e)}>t\}\sim & {} \left\{ \begin{array}{ll} \frac{\lambda _2}{\rho _2 (a_2-1) } \cdot t^{-a_2+1} L(t), &{} \text{ if } r=0,\\ \frac{\lambda _2}{\rho _2 r}\cdot e^{-rt} t^{-a_2} L(t), &{} \text{ if } r>0. \end{array} \right. \end{aligned}$$
(C.3)

Furthermore, since \(F_{\beta }(x)=q F_{\beta _1}(x)+p F_{\beta _2}(x)\) and based on Assumptions  A1 and A2, we have \(P\{T_{\beta }>t\}=qP\{T_{\beta _1}>t\} +p P\{T_{\beta _2}>t\}\sim q t^{-a_1} L(t)\) as \(t\rightarrow \infty \), from which Karamata’s theorem implies that

$$\begin{aligned} P\{T_{\beta }^{(e)}>t\}\sim & {} \frac{\lambda _1}{\rho (a_1-1) } \cdot t^{-a_1+1} L(t). \end{aligned}$$
(C.4)

C.1 Proof of Lemma 4.1

Recall (2.4), which relates the PGF of K to the PGF of \(R_0\). With this relationship, we first study the tail probability for K, which can be regarded as the sum of independent r.v.s \(K_a\), \(K_b\) and \(K_c\) (refer to (3.2)). By (3.3), (C.2) and applying Lemma A.3, we have

$$\begin{aligned} P\{K_a>j\}= & {} \rho _1 P\{N_{\lambda _2}(T_{\alpha }^{(e)})>j\} \sim \frac{\lambda _1\lambda _2^{a_1-1}}{(a_1-1) (1-\rho _1)^{a_1}} \cdot j^{-a_1+1} L(j),\quad j\rightarrow \infty .\nonumber \\ \end{aligned}$$
(C.5)

Recall (3.4), \(K_b=N_{\lambda , X_g}(T_{\beta }^{(e)}){\mathop {=}\limits ^\mathrm{d}}\sum _{i=1}^{N_{\lambda }(T_{\beta }^{(e)})} X_g^{(i)}\), where \(X_g^{(i)}\) has the common distribution \(X_g\). By (2.15), and then applying Lemma A.3 and using Lemma A.1, we know that

$$\begin{aligned} P\{X_g>j\} \sim qP\{N_{\lambda _2}(T_{\alpha })>j\}\sim q\lambda _2^{a_1}(1-\rho _1)^{-a_1-1} \cdot j^{-a_1} L(j). \end{aligned}$$

Similarly, applying Lemma A.3 and using (C.4), we have

$$\begin{aligned} P\{N_{\lambda }(T_{\beta }^{(e)})>j\} \sim \frac{q\lambda ^{a_1-1}}{(q\beta _{1,1}+p\beta _{2,1}) (a_1-1) } \cdot j^{-a_1+1} L(j), \end{aligned}$$

based on which, by (2.16) and applying Lemma A.6, we have,

$$\begin{aligned} P\{K_b>j\}\sim & {} \frac{\lambda _1 \lambda _2^{a_1-1}}{\rho (a_1-1)(1-\rho _1)^{a_1-1}} \cdot j^{-a_1+1} L(j), \quad j\rightarrow \infty . \end{aligned}$$
(C.6)

Next, we study \(P\{K_c>j\}\). By (3.5), we know that \(P\{K_c>j\}=\vartheta P\{\sum _{i=1}^{J}X_c^{(i)}>j\}\), where \(P(J=i)=(1-\vartheta )\vartheta ^{i-1}\), \(i\ge 1\), and \(X_c^{(i)}\) has the same distribution as that for \(X_c=K_a+N_{\lambda ,X_g}(T_{\beta _2}^{(e)})\). Note that \(N_{\lambda ,X_g}(T_{\beta _2}^{(e)}){\mathop {=}\limits ^\mathrm{d}}\sum _{i=1}^{N_{\lambda }(T_{\beta _2}^{(e)})} X_g^{(i)}\), where \(X_g^{(i)}\) has the common tail probability \(P\{X_g>j\}\sim Const\cdot j^{-a_1} L(j)\) and \(P\{N_{\lambda }(T_{\beta _2}^{(e)})>j\}\sim Const\cdot j^{-a_2+1}L(j)\). Therefore, by applying Lemma A.6 (and noticing that \(a_2 > a_1\) if \(r=0\) in Assumption A2), we have

$$\begin{aligned} P\{N_{\lambda , X_g}(T_{\beta _2}^{(e)})>j\} \sim Const\cdot \max \left( j^{-a_2+1}L(j),\, j^{-a_1}L(j)\right) =o(1)\cdot j^{-a_1+1} L(j).\nonumber \\ \end{aligned}$$
(C.7)

By (C.5), (C.7), applying Lemma A.2 and Lemma A.5, we have, as \(j\rightarrow \infty \),

$$\begin{aligned} P\{K_c>j\}\sim \frac{\vartheta }{1-\vartheta }P\{X_c>j\}= \frac{\vartheta }{1-\vartheta } P\{K_a+N_{\lambda , X_g}(T_{\beta _2}^{(e)})>j\}\sim \frac{\vartheta }{1-\vartheta } P\{K_a>j\},\nonumber \\ \end{aligned}$$
(C.8)

which, together with (C.5), (C.6) and (3.2), leads to

$$\begin{aligned} P\{K_a+K_c>j\}\sim & {} \frac{\lambda _1\lambda _2^{a_1-1}}{(1-\rho ) (a_1-1) (1-\rho _1)^{a_1-1}} \cdot j^{-a_1+1} L(j),\quad j\rightarrow \infty ,\nonumber \\ \end{aligned}$$
(C.9)
$$\begin{aligned} P\{K>j\}\sim & {} \frac{\lambda _1\lambda _2^{a_1-1}}{\rho (1-\rho )(a_1-1) (1-\rho _1)^{a_1-1}} \cdot j^{-a_1+1} L(j),\quad j\rightarrow \infty .\nonumber \\ \end{aligned}$$
(C.10)

C.2 Proof of Lemma 4.2

By Proposition 3.2, \(S_{\beta _1,1}{\mathop {=}\limits ^\mathrm{d}}N_{\lambda _1}(T_{\beta _1}^{(e)})\), \(S_{\beta _1,2}{\mathop {=}\limits ^\mathrm{d}}N_{\lambda _2}(T_{\beta _1}^{(e)})\), \(S_{\beta _2,1}{\mathop {=}\limits ^\mathrm{d}}N_{\lambda _1}(T_{\beta _2}^{(e)})\) and \(S_{\beta _2,2}{\mathop {=}\limits ^\mathrm{d}}N_{\lambda _2}(T_{\beta _2}^{(e)})\). By (C.1) and applying Lemma A.3, we obtain

$$\begin{aligned} P\{S_{\beta _1,1}>j\}\sim & {} \frac{\lambda _1^{a_1}}{\rho _1 (a_1-1) } \cdot j^{-a_1+1} L(j), \end{aligned}$$
(C.11)
$$\begin{aligned} P\{S_{\beta _1,2}>j\}\sim & {} \frac{\lambda _1\lambda _2^{a_1-1}}{\rho _1 (a_1-1) } \cdot j^{-a_1+1} L(j). \end{aligned}$$
(C.12)

By (C.3) and applying Lemma A.3 and Lemma A.4, we obtain

$$\begin{aligned} P\{S_{\beta _2,1}>j\}\sim & {} \left\{ \begin{array}{ll} \frac{\lambda _2\lambda _1^{a_2-1}}{\rho _2 (a_2-1)}\cdot j^{-a_2+1} L(j), &{} \text{ if } r=0,\\ \frac{\lambda _2\lambda _1 (\lambda _1+r)^{a_2-1}}{\rho _2 r} \cdot \left( \frac{\lambda _1}{\lambda _1+r}\right) ^j j^{-a_2} L(j), &{} \text{ if } r>0, \end{array} \right. \end{aligned}$$
(C.13)
$$\begin{aligned} P\{S_{\beta _2,2}>j\}\sim & {} \left\{ \begin{array}{ll} \frac{\lambda _2^{a_2}}{\rho _2 (a_2-1) }\cdot j^{-a_2+1} L(j), &{} \text{ if } r=0,\\ \frac{\lambda _2^2(\lambda _2+r)^{a_2-1}}{\rho _2 r}\cdot \left( \frac{\lambda _2}{\lambda _2+r}\right) ^j j^{-a_2} L(j), &{} \text{ if } r>0. \end{array} \right. \end{aligned}$$
(C.14)

Next, we will study the asymptotic tail probabilities of the r.v.s \(M_{ik},\ i,k=1,2\). By Proposition 3.2, we know that

$$\begin{aligned} P\{M_{1k}>j\}= & {} \rho _1 P\Big \{\sum _{i=1}^{J}H_{\beta _1,k}^{(i)}>j\Big \},\quad k=1,2, \end{aligned}$$
(C.15)
$$\begin{aligned} P\{M_{21}>j\}= & {} \vartheta P \{H_{\beta _2,1} >j \}, \end{aligned}$$
(C.16)
$$\begin{aligned} P\{M_{22}>j\}= & {} \vartheta P\{H_{\beta _2,2}+K_a+K_c >j \}. \end{aligned}$$
(C.17)

To proceed further, we need to study the tail probabilities of the r.v.s \(H_{\beta _i,k}\) for \(i,k=1,2\).

C.2.1 Tail asymptotics for \(H_{\beta _1,1}\) and \(H_{\beta _2,1}\)

Taking \(z_2\rightarrow 1\) in (B.8) and (B.14), we can write

$$\begin{aligned} E(z_1^{H_{\beta _1,1}})= & {} H_{\beta _1}(z_1,1)=\frac{1}{\rho _1}\cdot \frac{\beta _1(\lambda _1-\lambda _1 z_1)-1}{z_1-1}=\beta _1^{(e)}(\lambda _1-\lambda _1 z_1), \nonumber \\ \end{aligned}$$
(C.18)
$$\begin{aligned} E(z_1^{H_{\beta _2,1}})= & {} H_{\beta _2}(z_1,1)=\frac{p}{q\rho _2}\cdot \frac{\beta _2(\lambda _1-\lambda _1 z_1)-1}{z_1-1}=\beta _2^{(e)}(\lambda _1-\lambda _1 z_1).\nonumber \\ \end{aligned}$$
(C.19)

Therefore, \(H_{\beta _i,1}{\mathop {=}\limits ^\mathrm{d}}N_{\lambda _1}(T_{\beta _i}^{(e)}){\mathop {=}\limits ^\mathrm{d}}S_{\beta _i,1}\), \(i=1,2\), and

$$\begin{aligned} P\{H_{\beta _i,1}>j\}=P\{S_{\beta _i,1}>j\},\quad i=1,2, \end{aligned}$$
(C.20)

whose asymptotic tails are presented in (C.11) and (C.13), respectively.

C.2.2 Tail asymptotics for \(H_{\beta _1,2}\)

Unlike the other r.v.s discussed earlier, the r.v. \(H_{\beta _1,2}\) requires considerably more effort; its asymptotic tail behaviour will be presented in Proposition C.1. Before doing that, we first establish bounds on the tail probability of \(H_{\beta _1,2}\), which provide an intuitive understanding of its tail behaviour.

Taking \(z_1\rightarrow 1\) in (B.12) and (B.11), we have,

$$\begin{aligned} E(z_2^{D_{k,2}})=D_{k}(1,z_2)= & {} \frac{1}{k}\sum _{i=1}^{k}(q +p z_2)^{i-1}(q h(z_2)+p z_2)^{k-i}, \end{aligned}$$
(C.21)
$$\begin{aligned} E(z_2^{H_{\beta _1,2}})=H_{\beta _1}(1,z_2)= & {} \frac{q}{\rho _1}\cdot \sum _{k=1}^{\infty } k b_{\beta _1,k}\cdot D_{k}(1,z_2). \end{aligned}$$
(C.22)

It follows from (C.21) that, for \(k\ge 1\),

$$\begin{aligned} D_{k,2}{\mathop {=}\limits ^\mathrm{d}}\sum _{n=1}^{i-1}Y_n+\sum _{n=1}^{k-i}Z_n\quad \text{ with } \text{ probability } 1/k\quad \text{ for } i=1,2,\ldots ,k, \end{aligned}$$

where \(\{Y_n\}_{n=1}^{\infty }\) and \(\{Z_n\}_{n=1}^{\infty }\) are sequences of independent r.v.s, which are independent of each other, with \(Y_n\) and \(Z_n\) having PGFs \(q +p z_2\) and \(q h(z_2)+p z_2\), respectively.

We say that Y is stochastically smaller than Z, written as \(Y\le _{st}Z\), if \(P\{Y>t\}\le P\{Z>t\}\) for all t. It is easy to see that \(Y_{n_1}\le _{st}Z_{n_2}\) for all \(n_1,n_2\ge 1\). Define

$$\begin{aligned} D^L_{k,2}{\mathop {=}\limits ^\mathrm{d}}\sum _{n=1}^{k-1}Y_n\quad \text{ and } \quad D^U_{k,2}{\mathop {=}\limits ^\mathrm{d}}\sum _{n=1}^{k-1}Z_n. \end{aligned}$$

Then, by Theorem 1.2.17 (p.7 in [31]),

$$\begin{aligned} D^L_{k,2}\le _{st}D_{k,2}\le _{st}D^U_{k,2}. \end{aligned}$$
(C.23)

Furthermore, it follows from (C.22) that \(H_{\beta _1,2}{\mathop {=}\limits ^\mathrm{d}}D_{k,2}\), with probability \((q/\rho _1)k b_{\beta _1,k}\), for \(k\ge 1\).

Now, define the r.v.s \(H^L_{\beta _1,2}\) and \(H^U_{\beta _1,2}\) as follows:

$$\begin{aligned} H^L_{\beta _1,2}&{\mathop {=}\limits ^\mathrm{d}}&D^L_{k,2}\quad \text{ with } \text{ probability } (q/\rho _1)k b_{\beta _1,k}\ \text{ for } k\ge 1,\\ H^U_{\beta _1,2}&{\mathop {=}\limits ^\mathrm{d}}&D^U_{k,2}\quad \text{ with } \text{ probability } (q/\rho _1)k b_{\beta _1,k}\ \text{ for } k\ge 1. \end{aligned}$$

Then, by (C.23),

$$\begin{aligned} H_{\beta _1,2}^{L}\le _{st}H_{\beta _1,2}\le _{st}H_{\beta _1,2}^{U}. \end{aligned}$$
(C.24)

Note that \(H_{\beta _1,2}^{L}\) and \(H_{\beta _1,2}^{U}\) have the following PGFs:

$$\begin{aligned} E(z_2^{H_{\beta _1,2}^{L}})= & {} \frac{q}{\rho _1}\cdot \sum _{k=1}^{\infty } k b_{\beta _1,k}\cdot E(z_2^{D_{k,2}^{L}}) =\frac{q}{\rho _1}\cdot \sum _{k=1}^{\infty } k b_{\beta _1,k}\cdot (q +p z_2)^{k-1}, \nonumber \\ \end{aligned}$$
(C.25)
$$\begin{aligned} E(z_2^{H_{\beta _1,2}^{U}})= & {} \frac{q}{\rho _1}\cdot \sum _{k=1}^{\infty } k b_{\beta _1,k}\cdot E(z_2^{D_{k,2}^{U}}) =\frac{q}{\rho _1}\cdot \sum _{k=1}^{\infty } k b_{\beta _1,k}\cdot (qh(z_2) +p z_2)^{k-1}.\nonumber \\ \end{aligned}$$
(C.26)

Next, we study the asymptotic behaviour of \(P\{H_{\beta _1,2}^{L}>j\}\) and \(P\{H_{\beta _1,2}^{U}>j\}\). Let N be a r.v. with probability distribution \(P\{N=k\}=(q/\rho _1) k b_{\beta _1,k}\), \(k\ge 1\). Then, (C.25) and (C.26) can be written as


$$\begin{aligned} H_{\beta _1,2}^{L}{\mathop {=}\limits ^\mathrm{d}}\sum _{k=1}^{N-1} Y_k \quad \text{ and }\quad H_{\beta _1,2}^{U}{\mathop {=}\limits ^\mathrm{d}}\sum _{k=1}^{N-1} Z_k, \end{aligned}$$

where N is independent of both \(Z_k\) and \(Y_k\), \(k\ge 1\).

Then, it is immediately clear that

$$\begin{aligned} P\{N>m\}=(q/\rho _1)\sum _{k=m+1}^{\infty } k b_{\beta _1,k}=(q/\rho _1)\left[ m\overline{b}_{\beta _1,m+1}+\sum _{k=m+1}^{\infty } \overline{b}_{\beta _1,k}\right] , \end{aligned}$$
(C.27)

where \(\overline{b}_{\beta _1,k}=\sum _{n=k}^{\infty } b_{\beta _1,n}\).
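The second equality in (C.27) is summation by parts: \(\sum _{k=m+1}^{\infty } k b_{\beta _1,k}=m\overline{b}_{\beta _1,m+1}+\sum _{k=m+1}^{\infty }\overline{b}_{\beta _1,k}\). A quick numerical sanity check, with hypothetical geometric weights in place of \(b_{\beta _1,k}\) (the common factor \(q/\rho _1\) cancels on both sides):

```python
# Check the summation-by-parts identity behind (C.27):
#   sum_{k>m} k*b_k  ==  m*bbar_{m+1} + sum_{k>m} bbar_k,   bbar_k = sum_{n>=k} b_n.
# The weights b_k are hypothetical geometric ones, truncated far in the tail.
r = 0.5
K = 200                                                     # truncation point
b = [0.0] + [(1 - r) * r ** (k - 1) for k in range(1, K)]   # b_k, k >= 1
bbar = [sum(b[k:]) for k in range(K)]                       # bbar_k = sum_{n>=k} b_n
m = 5
lhs = sum(k * b[k] for k in range(m + 1, K))
rhs = m * bbar[m + 1] + sum(bbar[k] for k in range(m + 1, K))
assert abs(lhs - rhs) < 1e-12
```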

Using the definition of \(b_{\beta _1,n}\) in (3.9), and applying Lemma A.3, we know that \(\overline{b}_{\beta _1,k}=P\{N_{\lambda }(T_{\beta _1})>k-1\}\sim \lambda ^{a_1}k^{-a_1}L(k)\) as \(k\rightarrow \infty \), which, together with Proposition 1.5.10 in [4], implies that

$$\begin{aligned} P\{N>m\}\sim & {} \frac{a_1 q\lambda ^{a_1}}{\rho _1(a_1-1)}m^{-a_1+1}L(m)\quad \text{ as }\ m\rightarrow \infty . \end{aligned}$$
(C.28)

Recall the following three facts: (i) \(Y_k\) is a \(0\)–\(1\) r.v., so \(P\{Y_k>j\}=0\) for \(j\ge 1\); (ii) \(Z_k\) has the same probability distribution as \(X_g\) defined in (2.15), which implies that \(P\{Z_k>j\}=P \{X_g>j\}\sim Const\cdot j^{-a_1} L(j)\) as \(j\rightarrow \infty \); and (iii) \(E(Y_k)=p\) and \(E(Z_k)=E(X_g)=p/(1-\rho _1)<\infty \), given in (2.16). Then, by Lemma A.6, we know that

$$\begin{aligned} P\{H_{\beta _1,2}^{L}>j\}\sim & {} a_1\cdot \frac{\lambda _1 \lambda _2^{a_1-1}}{\rho _1(a_1-1)}\cdot j^{-a_1+1}L(j), \end{aligned}$$
(C.29)
$$\begin{aligned} P\{H_{\beta _1,2}^{U}>j\}\sim & {} \frac{a_1}{(1-\rho _1)^{a_1-1}}\cdot \frac{\lambda _1 \lambda _2^{a_1-1}}{\rho _1(a_1-1)}\cdot j^{-a_1+1}L(j). \end{aligned}$$
(C.30)

Remark C.1

It follows from (C.24) that \(P\{H_{\beta _1,2}^{L}>j\}\le P\{H_{\beta _1,2}>j\}\le P\{H_{\beta _1,2}^{U}>j\}\), whereas the asymptotic properties of \(P\{H_{\beta _1,2}^{L}>j\}\) and \(P\{H_{\beta _1,2}^{U}>j\}\) are given in (C.29) and (C.30), respectively. This suggests that \(P\{H_{\beta _1,2}>j\}\sim c\cdot \frac{\lambda _1 \lambda _2^{a_1-1}}{\rho _1(a_1-1)}\cdot j^{-a_1+1}L(j)\) as \(j\rightarrow \infty \) for some constant \(c\in \left( a_1, a_1/(1-\rho _1)^{a_1-1}\right) \). In Proposition C.1, we will verify that this assertion is true.

Proposition C.1

As \(j\rightarrow \infty \),

$$\begin{aligned} P\{H_{\beta _1,2}>j\}\sim \frac{1-\rho _1}{\rho _1} \left[ \frac{1}{(1-\rho _1)^{a_1}}-1\right] \cdot \frac{\lambda _1 \lambda _2^{a_1-1}}{\rho _1(a_1-1)}\cdot j^{-a_1+1}L(j). \end{aligned}$$
(C.31)

To prove this proposition, we need the following two lemmas (Lemma C.1 and Lemma C.2). Setting \(z_1=1\) in (B.7) and noting \(h(z_2)=\alpha (\lambda _2-\lambda _2 z_2)\), we obtain

$$\begin{aligned} E(z_2^{H_{\beta _1,2}})=H_{\beta _1}(1,z_2) =\frac{1}{\rho _1}\cdot \frac{\beta _1(\lambda _2-\lambda _2 z_2)-\alpha (\lambda _2-\lambda _2 z_2)}{1-\alpha (\lambda _2-\lambda _2 z_2)} =\gamma (\lambda _2-\lambda _2 z_2),\nonumber \\ \end{aligned}$$
(C.32)

where

$$\begin{aligned} \gamma (s)= & {} \frac{1}{\rho _1}\cdot \frac{\beta _1(s)-\alpha (s)}{1-\alpha (s)}. \end{aligned}$$
(C.33)

Lemma C.1

\(\gamma (s)\) is the LST of a probability distribution on \([0,\infty )\).

Proof

By Theorem 1 in [15] (see p.439), the lemma is true iff \(\gamma (0)=1\) and \(\gamma (s)\) is completely monotone, i.e., \(\gamma (s)\) possesses derivatives of all orders such that \((-1)^{n}\frac{d^n}{ds^n}\gamma (s)\ge 0\) for \(s> 0\), \(n\ge 0\). It is easy to check by (C.33) that \(\gamma (0)=1\). It remains to verify that \(\gamma (s)\) is completely monotone. By (C.33) and (2.10) and using a Taylor expansion, we have

$$\begin{aligned} \gamma (s)= & {} \frac{1}{\rho _1}\cdot \frac{\beta _1\big (s+\lambda _1- \lambda _1\big )- \beta _1(s+\lambda _1- \lambda _1 \alpha (s))}{1-\alpha (s)} \nonumber \\= & {} \frac{1}{\rho _1}\cdot \frac{1}{1-\alpha (s)}\left[ \sum _{n=0}^{\infty } \frac{\beta _1^{(n)}(s+\lambda _1)}{n!}(-\lambda _1)^n -\sum _{n=0}^{\infty } \frac{\beta _1^{(n)}(s+\lambda _1)}{n!} (-\lambda _1\alpha (s))^n\right] \nonumber \\= & {} \frac{1}{\rho _1}\cdot \sum _{n=1}^{\infty }\frac{(-\lambda _1)^n \beta _1^{(n)}(s+\lambda _1)}{n!}\Big (1 +\alpha (s)+\cdots + (\alpha (s))^{n-1}\Big ), \end{aligned}$$
(C.34)

where \(\beta _1^{(n)}(\cdot )\) represents the nth derivative of \(\beta _1(\cdot )\).

It is easy to check that, for \(n\ge 1\), both \((-1)^n\beta _1^{(n)}(s+\lambda _1)\) and \(1 +\alpha (s)+\cdots + (\alpha (s))^{n-1}\) are completely monotone, and so is their product (by Criterion A.1 in Appendix A). Therefore, \(\gamma (s)\) is completely monotone.\(\square \)

Remark C.2

Let \(T_{\gamma }\) be a r.v. whose probability distribution has the LST \(\gamma (s)\). Then, the expression \(E(z_2^{H_{\beta _1,2}})=\gamma (\lambda _2-\lambda _2 z_2)\) in (C.32) implies that \(H_{\beta _1,2}{\mathop {=}\limits ^\mathrm{d}}N_{\lambda _2}(T_{\gamma })\).

Lemma C.2

As \(t\rightarrow \infty \),

$$\begin{aligned} P\{T_{\gamma }>t\}\sim & {} \frac{1-\rho _1}{\rho _1} \left[ \frac{1}{(1-\rho _1)^{a_1}}-1\right] \frac{\lambda _1}{\rho _1(a_1-1)}\cdot t^{-a_1+1}L(t). \end{aligned}$$
(C.35)

Proof

First, let us rewrite (C.33) as

$$\begin{aligned} \gamma (s)=\frac{1}{\rho _1}-\frac{1}{\rho _1}\cdot \frac{1-\beta _1(s)}{s}\cdot \frac{s}{1-\alpha (s)}. \end{aligned}$$
(C.36)
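The rewriting (C.36) follows by adding and subtracting \(1-\alpha (s)\) in the numerator of (C.33). A symbolic check (assuming SymPy is available; \(\beta _1\) and \(\alpha \) are left as generic functions):

```python
# Symbolic check that (C.36) is an identical rewriting of (C.33).
# SymPy is assumed available; beta1 and alpha are generic function symbols.
import sympy as sp

s, rho1 = sp.symbols('s rho1', positive=True)
beta1 = sp.Function('beta1')(s)
alpha = sp.Function('alpha')(s)

gamma_def = (beta1 - alpha) / (rho1 * (1 - alpha))                        # (C.33)
gamma_rw = 1 / rho1 - (1 / rho1) * ((1 - beta1) / s) * (s / (1 - alpha))  # (C.36)
assert sp.simplify(gamma_def - gamma_rw) == 0
```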

In the following, we divide the proof of Lemma C.2 into two parts, depending on whether \(a_1\) (\(>1\)) is an integer or not.

Case 1: Non-integer \(a_1 >1\). Suppose that \(n<a_1<n+1\), \(n\in \{1,2,\ldots \}\). Since \(P\{T_{\beta _1}>t\}\sim t^{-a_1}L(t)\) and \((1-\rho _1)^{a_1+1}P\{T_{\alpha }>t\}\sim t^{-a_1}L(t)\), we know that \(\beta _{1,n}<\infty \), \(\beta _{1,n+1}=\infty \), \(\alpha _{n}<\infty \) and \(\alpha _{n+1}=\infty \). Define \(\beta _{1,n}(s)\) and \(\alpha _{n}(s)\) in a manner similar to that in (A.3). Therefore,

$$\begin{aligned} \frac{1-\beta _1(s)}{s}= & {} \beta _{1,1}+\sum _{k=2}^{n}\frac{\beta _{1,k}}{k!}(-s)^{k-1} +(-1)^{n}\frac{\beta _{1,n}(s)}{s}, \end{aligned}$$
(C.37)
$$\begin{aligned} \frac{1-\alpha (s)}{s}= & {} \alpha _{1}+\sum _{k=2}^{n}\frac{\alpha _{k}}{k!}(-s)^{k-1} +(-1)^{n}\frac{\alpha _{n}(s)}{s}. \end{aligned}$$
(C.38)

By Lemma A.7,

$$\begin{aligned} (1-\rho _1)^{a_1+1}\alpha _{n}(s)\ \sim \ \beta _{1,n}(s)\ \sim \ \frac{\Gamma (a_1-n)\Gamma (n+1-a_1)}{\Gamma (a_1)} s^{a_1}L(1/s),\quad s\downarrow 0. \end{aligned}$$
(C.39)

Furthermore, it follows from (C.38) that

$$\begin{aligned} \frac{s}{1-\alpha (s)}= & {} \frac{1/\alpha _1}{1+(1/\alpha _1)\sum _{k=2}^{n}\frac{\alpha _{k}}{k!}(-s)^{k-1} +(-1)^{n}(1/\alpha _1)\frac{\alpha _{n}(s)}{s}}\nonumber \\= & {} \frac{1}{\alpha _1} -\sum _{k=1}^{n-1}u_k s^k - (-1)^{n}\frac{\alpha _{n}(s)}{\alpha _1^2 s} +O(s^{n}), \end{aligned}$$
(C.40)

where \(u_1,u_2,\ldots ,u_{n-1}\) are constants. By (C.36), (C.37) and (C.40), we have

$$\begin{aligned} \gamma (s)= & {} 1+\sum _{k=1}^{n-1}e_k s^k +(-1)^{n} \cdot \frac{1}{\rho _1\alpha _1}\left[ \frac{\beta _{1,1}}{\alpha _1}\cdot \frac{\alpha _{n}(s)}{s}-\frac{\beta _{1,n}(s)}{s}\right] +O(s^{n}),\nonumber \\ \end{aligned}$$
(C.41)

where \(e_1,e_2,\ldots ,e_{n-1}\) are constants. Based on the above, we define \(\gamma _{n-1}(s)\) in a manner similar to that in (A.3). Applying (C.39), we have

$$\begin{aligned} \gamma _{n-1}(s)\sim & {} \frac{1}{\rho _1\alpha _1}\left[ \frac{\beta _{1,1}}{\alpha _1}\cdot \frac{\alpha _{n}(s)}{s}-\frac{\beta _{1,n}(s)}{s}\right] \nonumber \\\sim & {} \frac{\lambda _1}{\rho _1}\cdot \frac{1-\rho _1}{\rho _1} \left[ \frac{1}{(1-\rho _1)^{a_1}}-1\right] \cdot \frac{\Gamma (a_1-n)\Gamma (n+1-a_1)}{(a_1-1)\Gamma (a_1-1)} s^{a_1-1}L(1/s).\nonumber \\ \end{aligned}$$
(C.42)

Then, making use of Lemma A.7, we complete the proof of Lemma C.2 for non-integer \(a_1>1\).

Case 2: Integer \(a_{1}>1\). Suppose that \(a_1=n\in \{2,3,\ldots \}\). Since \(P\{T_{\beta _1}>t\}\sim t^{-n}L(t)\) and \((1-\rho _1)^{n+1}P\{T_{\alpha }>t\}\sim t^{-n}L(t)\), we know that \(\alpha _{n-1}<\infty \) and \(\beta _{1,n-1}<\infty \), but whether \(\alpha _{n}\) and \(\beta _{1,n}\) are finite remains undetermined; this is essentially determined by whether \(\int _x^{\infty }t^{-1}L(t)\,\mathrm{d}t\) converges. Define \(\widehat{\beta }_{1,n-1}(s)\) and \(\widehat{\alpha }_{n-1}(s)\) in a manner similar to that in (A.4). Then,

$$\begin{aligned} \frac{1-\beta _1(s)}{s}= & {} \beta _{1,1}+\sum _{k=2}^{n-1}\frac{\beta _{1,k}}{k!}(-s)^{k-1} +(-s)^{n-1}\widehat{\beta }_{1,n-1}(s), \end{aligned}$$
(C.43)
$$\begin{aligned} \frac{1-\alpha (s)}{s}= & {} \alpha _{1}+\sum _{k=2}^{n-1}\frac{\alpha _{k}}{k!}(-s)^{k-1} +(-s)^{n-1}\widehat{\alpha }_{n-1}(s). \end{aligned}$$
(C.44)

By Lemma A.8, we obtain, for \(x>0\),

$$\begin{aligned}&(1-\rho _1)^{n+1}\widehat{\alpha }_{n-1}(xs)-(1-\rho _1)^{n+1}\widehat{\alpha }_{n-1}(s)\nonumber \\&\quad \sim \ \widehat{\beta }_{1,n-1}(xs)-\widehat{\beta }_{1,n-1}(s)\ \sim \ -(\log x) L(1/s)/(n-1)!\quad \text{ as }\ s\downarrow 0.\nonumber \\ \end{aligned}$$
(C.45)

Furthermore, it follows from (C.44) that

$$\begin{aligned} \frac{s}{1-\alpha (s)}= & {} \frac{1/\alpha _1}{1+(1/\alpha _1)\sum _{k=2}^{n-1}\frac{\alpha _{k}}{k!}(-s)^{k-1} +(-s)^{n-1}(1/\alpha _1)\widehat{\alpha }_{n-1}(s)}\nonumber \\= & {} \frac{1}{\alpha _1} -\sum _{k=1}^{n-1}u'_k s^k - (-s)^{n-1}\frac{\widehat{\alpha }_{n-1}(s)}{\alpha _1^2} +O(s^{n}), \end{aligned}$$
(C.46)

where \(u'_1,u'_2,\ldots ,u'_{n-1}\) are constants. By (C.36), (C.43) and (C.46), we have

$$\begin{aligned} \gamma (s)= & {} 1+\sum _{k=1}^{n-1}e'_k s^k +(-1)^{n} s^{n-1}\cdot \frac{1}{\rho _1\alpha _1}\left[ \frac{\beta _{1,1}}{\alpha _1}\cdot \widehat{\alpha }_{n-1}(s)- \widehat{\beta }_{1,n-1}(s)\right] +O(s^{n}),\nonumber \\ \end{aligned}$$
(C.47)

where \(e'_1,e'_2,\ldots ,e'_{n-1}\) are constants. Based on this, we define \(\widehat{\gamma }_{n-2}(s)\) in a way similar to that in (A.4). Then,

$$\begin{aligned} \widehat{\gamma }_{n-2}(s)=(-1)^{n} e'_{n-1}+\frac{1}{\rho _1\alpha _1}\left[ \frac{\beta _{1,1}}{\alpha _1}\cdot \widehat{\alpha }_{n-1}(s)- \widehat{\beta }_{1,n-1}(s)\right] +O(s). \end{aligned}$$
(C.48)

It follows from (C.48) and (C.45) that

$$\begin{aligned} \lim _{s\downarrow 0}\frac{\widehat{\gamma }_{n-2}(xs)-\widehat{\gamma }_{n-2}(s)}{L(1/s)/(n-2)!}= & {} \frac{1}{\rho _1\alpha _1}\left[ \frac{\beta _{1,1}}{\alpha _1}\cdot \lim _{s\downarrow 0}\frac{\widehat{\alpha }_{n-1}(xs)-\widehat{\alpha }_{n-1}(s)}{(n-1)L(1/s)/(n-1)!}- \lim _{s\downarrow 0}\frac{\widehat{\beta }_{1,n-1}(xs)-\widehat{\beta }_{1,n-1}(s)}{(n-1)L(1/s)/(n-1)!} \right] \nonumber \\= & {} \frac{\lambda _1}{\rho _1}\frac{1-\rho _1}{\rho _1} \left[ \frac{1}{(1-\rho _1)^{n}}-1\right] \cdot \left( -\frac{1}{n-1}\log x\right) . \end{aligned}$$
(C.49)

Applying Lemma A.8, we complete the proof of Lemma C.2 for integer \(a_1=n\in \{2,3,\ldots \}\). \(\square \)

Proof of Proposition C.1

It follows directly from Remark C.2, Lemma C.2 and Lemma A.3. \(\square \)

Referring to Remark C.1, we know from (C.31) that \(c=\frac{1-\rho _1}{\rho _1}\left[ \frac{1}{(1-\rho _1)^{a_1}}-1\right] \). Now let us confirm that \(a_1<c<a_1/(1-\rho _1)^{a_1-1}\), which is equivalent to checking that \(a_1\rho _1(1-\rho _1)^{a_1-1}+(1-\rho _1)^{a_1}<1\) and \(a_1\rho _1+(1-\rho _1)^{a_1}>1\). This is true because \(a_1\rho _1(1-\rho _1)^{a_1-1}+(1-\rho _1)^{a_1}\) is decreasing in \(\rho _1\in (0,1)\), \(a_1\rho _1+(1-\rho _1)^{a_1}\) is increasing in \(\rho _1\in (0,1)\), and both expressions equal 1 at \(\rho _1=0\).
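The two inequalities can also be spot-checked numerically over a grid of hypothetical \((a_1,\rho _1)\) values:

```python
# Spot-check that c = (1-rho1)/rho1 * [1/(1-rho1)^a1 - 1] lies strictly between
# a1 and a1/(1-rho1)^(a1-1), for a grid of hypothetical parameter values.
for a1 in (1.5, 2.0, 3.7):
    for rho1 in (0.1, 0.5, 0.9):
        c = (1 - rho1) / rho1 * (1 / (1 - rho1) ** a1 - 1)
        lower = a1
        upper = a1 / (1 - rho1) ** (a1 - 1)
        assert lower < c < upper
```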

1.2.3 Tail asymptotics for \(H_{\beta _2,2}\)

As we shall see in the next subsection, our main results do not require a detailed asymptotic expression for \(P\{H_{\beta _2,2}>j\}\). It is enough to verify that it is \(o(1)\cdot j^{-a_1+1}L(j)\) as \(j\rightarrow \infty \).

Taking \(z_1\rightarrow 1\) in (B.16), we have

$$\begin{aligned} E(z_2^{H_{\beta _2,2}})=H_{\beta _2}(1,z_2)= & {} \frac{p}{\rho _2}\cdot \sum _{k=1}^{\infty } k b_{\beta _2,k}\cdot D_{k}(1,z_2). \end{aligned}$$
(C.50)

It follows from (C.50) that \(H_{\beta _2,2}{\mathop {=}\limits ^\mathrm{d}}D_{k,2}\), with probability \((p/\rho _2)k b_{\beta _2,k}\), for \(k\ge 1\). Define the r.v. \(H^U_{\beta _2,2}{\mathop {=}\limits ^\mathrm{d}}D^U_{k,2}\), with probability \((p/\rho _2)k b_{\beta _2,k}\), for \(k\ge 1\). Then, by (C.23), we have, \(H_{\beta _2,2}\le _{st}H_{\beta _2,2}^{U}\). Note that \(H_{\beta _2,2}^{U}\) has the PGF

$$\begin{aligned} E(z_2^{H_{\beta _2,2}^{U}})= & {} \frac{p}{\rho _2}\cdot \sum _{k=1}^{\infty } k b_{\beta _2,k}\cdot E(z_2^{D_{k,2}^{U}}) =\frac{p}{\rho _2}\cdot \sum _{k=1}^{\infty } k b_{\beta _2,k}\cdot (qh(z_2) +p z_2)^{k-1}.\nonumber \\ \end{aligned}$$
(C.51)

Let \(N^*\) be a r.v. with probability distribution \(P\{N^*=k\}=(p/\rho _2) k b_{\beta _2,k}\), \(k\ge 1\). Therefore, (C.51) implies \(H_{\beta _2,2}^{U}{\mathop {=}\limits ^\mathrm{d}}\sum _{k=1}^{N^*-1} Z_k\), where \(N^*\) is independent of \(Z_k\), \(k\ge 1\). Similar to (C.27), we can write

$$\begin{aligned} P\{N^*>m\}=(p/\rho _2)\left[ m\overline{b}_{\beta _2,m+1}+\sum _{k=m+1}^{\infty } \overline{b}_{\beta _2,k}\right] , \end{aligned}$$
(C.52)

where \(\overline{b}_{\beta _2,k}=\sum _{n=k}^{\infty } b_{\beta _2,n}\). By the definition of \(b_{\beta _2,n}\) given in (3.10) and applying Lemma A.3 and Lemma A.4, we know that \(\overline{b}_{\beta _2,k}=P\{N_{\lambda }(T_{\beta _2})>k-1\}=O(1)\cdot k^{-a_2}L(k)\) as \(k\rightarrow \infty \). Furthermore, by (C.52) and applying Proposition 1.5.10 in [4], we have

$$\begin{aligned} P\{N^*>m\}= & {} O(1)\cdot m^{-a_2+1}L(m)\quad \text{ as }\ m\rightarrow \infty . \end{aligned}$$
(C.53)

As pointed out in Sect. 2, \(P\{Z_k>j\}\sim Const\cdot j^{-a_1} L(j)\) as \(j\rightarrow \infty \). By Lemma A.6, we know \(P\{H_{\beta _2,2}^{U}>j\}= O(1)\cdot \max \left( j^{-a_2+1}L(j),j^{-a_1}L(j)\right) \) as \(j\rightarrow \infty \). Since \(P\{H_{\beta _2,2}>j\}\le P\{H_{\beta _2,2}^{U}>j\}\) and \(a_2>a_1\), we have

$$\begin{aligned} P\{H_{\beta _2,2}>j\}= & {} O(1)\cdot \max \left( j^{-a_2+1}L(j),j^{-a_1}L(j)\right) =o(1)\cdot j^{-a_1+1}L(j).\nonumber \\ \end{aligned}$$
(C.54)

After the above preparations, we now return to the proof of the tail asymptotic properties for \(M_{ik},\ i,k=1,2\).

By (C.15) and applying Lemma A.2, together with (C.20), we have

$$\begin{aligned} P\{M_{11}>j\}\sim & {} \frac{\rho _1}{1-\rho _1} P \{H_{\beta _1,1}>j \}=\frac{\rho _1}{1-\rho _1} P \{S_{\beta _1,1}>j \} \quad \hbox {(refer to (C.11))}, \nonumber \\ \end{aligned}$$
(C.55)
$$\begin{aligned} P\{M_{12}>j\}\sim & {} \frac{\rho _1}{1-\rho _1} P \{H_{\beta _1,2}>j \} \quad \hbox {(refer to (C.31))}. \end{aligned}$$
(C.56)

Immediately, from (C.16) and (C.20),

$$\begin{aligned} P\{M_{21}>j\}= & {} \vartheta P \{H_{\beta _2,1}>j \} =\vartheta P \{S_{\beta _2,1} >j \} \quad \hbox {(refer to (C.13))}.\nonumber \\ \end{aligned}$$
(C.57)

By (C.17) and applying Lemma A.5, together with (C.54),

$$\begin{aligned} P\{M_{22}>j\}= & {} \vartheta P\{H_{\beta _2,2}+K_a+K_c>j\}\sim \vartheta P \{K_a+K_c>j \} \quad \hbox {(refer to (C.9))}.\nonumber \\ \end{aligned}$$
(C.58)

Cite this article

Liu, B., Zhao, Y.Q. Tail asymptotics for the \(M_1,M_2/G_1,G_2/1\) retrial queue with non-preemptive priority. Queueing Syst 96, 169–199 (2020). https://doi.org/10.1007/s11134-020-09666-8
