Journal of Statistical Computation and Simulation, 2015, Vol. 85, No. 17, 3437–3450
ISSN: 0094-9655 (Print), 1563-5163 (Online)
DOI: 10.1080/00949655.2014.977904 (https://doi.org/10.1080/00949655.2014.977904)
Published online: 06 Nov 2014

Comparison of estimation methods for the Marshall–Olkin extended Lindley distribution

A.P.J. do Espirito Santoᵃ and J.
Mazucheliᵇ*

ᵃDepartamento de Estatística, Universidade Estadual Paulista, Presidente Prudente, SP, Brazil; ᵇDepartamento de Estatística, Universidade Estadual de Maringá, Maringá, PR, Brazil

(Received 25 February 2014; accepted 14 October 2014)

The aim of this paper is to compare estimates of the parameters of the Marshall–Olkin extended Lindley distribution obtained by six estimation methods: maximum likelihood, ordinary least-squares, weighted least-squares, maximum product of spacings, Cramér–von Mises and Anderson–Darling. The bias, the root mean-squared error, the average absolute difference between the true and estimated distribution functions, and the maximum absolute difference between the true and estimated distribution functions are used as comparison criteria. Although the maximum product of spacings method is not widely used, the simulation study shows that it is highly competitive with the maximum likelihood method.

Keywords: estimation methods; Lindley distribution; Marshall–Olkin family; Monte Carlo simulations

AMS Subject Classification: F1.1; F4.3

1. Introduction

New parameters can be introduced to expand a family of distributions in various ways.[1] Marshall and Olkin [2] introduced a general method for obtaining more flexible distributions by adding a new parameter to an existing family, called the Marshall–Olkin family of distributions. Starting with a baseline survival function S1(y | θ) of a continuous random variable Y, the Marshall–Olkin family has survival function

S(y | θ, α) = αS1(y | θ) / [1 − ᾱS1(y | θ)],  (1)

where −∞ < y < +∞, θ = (θ1, …, θp), ᾱ = 1 − α and α > 0 is called the tilt parameter. Clearly, when α = 1 we recover the baseline survival function S1(y | θ). In general, the addition of a tilt parameter makes the resulting distribution richer and more flexible for modelling data. The effect of the tilt parameter and the monotonicity of the hazard rate function were studied in [2–4].
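The transformation (1) applies to any baseline survival function. As an illustration (not part of the original paper), a minimal Python sketch that wraps an arbitrary baseline survival function; the function name marshall_olkin_sf is ours:

```python
import math

def marshall_olkin_sf(sf_baseline, alpha):
    """Wrap a baseline survival function S1 into the Marshall-Olkin
    survival function S(y) = alpha*S1(y) / (1 - (1 - alpha)*S1(y))."""
    if alpha <= 0:
        raise ValueError("the tilt parameter alpha must be positive")
    abar = 1.0 - alpha

    def sf(y):
        s1 = sf_baseline(y)
        return alpha * s1 / (1.0 - abar * s1)

    return sf

# Baseline: standard exponential, S1(y) = exp(-y), tilted with alpha = 2
sf = marshall_olkin_sf(lambda y: math.exp(-y), alpha=2.0)
```

Setting alpha = 1 returns the baseline unchanged, which is the α = 1 reduction noted above.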
In [5], the tilt parameter was taken as a random variable.

*Corresponding author. Email: jmazucheli@gmail.com
© 2014 Taylor & Francis

From Equation (1), the corresponding probability density function and hazard rate function are written, respectively, as

f(y | θ, α) = αf1(y | θ) / [1 − ᾱS1(y | θ)]²,  (2)

h(y | θ, α) = h1(y | θ) / [1 − ᾱS1(y | θ)],  (3)

where f1(y | θ) and h1(y | θ) are, respectively, the baseline probability density function and the baseline hazard rate function. An interesting property of the Marshall–Olkin family is geometric-extreme stability, as follows. If Yi, i = 1, 2, …, is a sequence of independent and identically distributed random variables with survival function S1(y | θ), and if N has a geometric distribution with probability mass function P(N = n) = α(1 − α)^{n−1} taking values in {1, 2, …}, independent of the Yi, then the random variables U = min{Y1, …, YN} and V = max{Y1, …, YN} have survival function (1) with 0 < α = p ≤ 1 and α = 1/p ≥ 1, respectively. Marshall and Olkin [2] also noted that the method has a stability property: if it is applied twice, nothing new is obtained the second time around.

Several papers dealing with the Marshall–Olkin extended family have appeared in the last few years. A literature review shows that more than 20 distributions have been used as baseline distributions. Beta: [6]; Birnbaum–Saunders: [7]; Burr: [8,9]; Exponential: [2,4,8,10–23]; Exponentiated exponential: [24,25]; Exponentiated log-normal: [26]; Exponentiated Weibull: [27]; Extreme-value: [8]; Fréchet: [8,11,28,29]; Gamma: [30]; Kumaraswamy: [11]; Lindley: [31]; Linear failure-rate: [32]; Logistic: [8]; Log-logistic: [33]; Lomax: [34–36]; Log-normal: [37]; Makeham: [38]; Normal: [39,40];
Pareto: [4,8,11,41,42]; Power series: [11]; q-Weibull: [43]; Semi-Burr: [9]; Semi-Weibull: [44]; Student-t: [45]; Uniform: [46]; Weibull: [2–4,8,11,15,20,47–52].

By considering the survival function of the one-parameter Lindley distribution, we obtain the Marshall–Olkin extended Lindley (MOEL) distribution, named the Lindley–Geometric distribution by Zakerzadeh and Mahmoudi,[53] with survival function

S(y | θ, α) = α(1 + θy/(1 + θ)) e^{−θy} / [1 − ᾱ(1 + θy/(1 + θ)) e^{−θy}],  (4)

where 0 < y < ∞, θ > 0, α > 0 and ᾱ = 1 − α. When α = 1 we recover the survival function of the one-parameter Lindley distribution. The one-parameter Lindley distribution was introduced by Lindley [54] (see also [55]) as a distribution that can be useful for analysing lifetime data, especially in stress–strength reliability applications. Ghitany et al. [56] studied the properties of the one-parameter Lindley distribution under a careful mathematical treatment. They also showed, in a numerical example, that the Lindley distribution provides a better fit than the exponential distribution. A two-parameter weighted Lindley distribution was proposed by Ghitany et al.[57] A generalized Lindley distribution, which includes the Lindley, exponential and gamma distributions as special cases, was proposed by Zakerzadeh and Dolati.[58] The one-parameter Lindley distribution in the competing-risks scenario was considered in [59].

The probability density and hazard rate functions of the MOEL distribution are, respectively,

f(y | θ, α) = αθ²(1 + y) e^{−θy} / {(1 + θ)[1 − ᾱ(1 + θy/(1 + θ)) e^{−θy}]²},  (5)

h(y | θ, α) = θ²(1 + y) / {(1 + θ + θy)[1 − ᾱ(1 + θy/(1 + θ)) e^{−θy}]}.  (6)

Note that f(0 | θ, α) = h(0 | θ, α) = θ²/[α(1 + θ)], f(∞ | θ, α) = 0 and h(∞ | θ, α) = θ. The probability density function (5) is decreasing if α ≤ 2θ²/(θ² + 1) and unimodal if α > 2θ²/(θ² + 1). Figure 1 illustrates the probability density function of the MOEL distribution for selected values of α and θ = 1.
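For reference, functions (4)–(6) are straightforward to evaluate numerically. An illustrative Python sketch (function names are ours, not the authors'), assuming NumPy:

```python
import numpy as np

def moel_sf(y, theta, alpha):
    # Survival function (4); S1 is the one-parameter Lindley survival function.
    s1 = (1.0 + theta * y / (1.0 + theta)) * np.exp(-theta * y)
    return alpha * s1 / (1.0 - (1.0 - alpha) * s1)

def moel_pdf(y, theta, alpha):
    # Density (5); f1 is the one-parameter Lindley density.
    s1 = (1.0 + theta * y / (1.0 + theta)) * np.exp(-theta * y)
    f1 = theta**2 / (1.0 + theta) * (1.0 + y) * np.exp(-theta * y)
    return alpha * f1 / (1.0 - (1.0 - alpha) * s1) ** 2

def moel_hazard(y, theta, alpha):
    # Hazard (6), computed as pdf / sf.
    return moel_pdf(y, theta, alpha) / moel_sf(y, theta, alpha)
```

At y = 0 these return f(0) = h(0) = θ²/[α(1 + θ)], and with α = 1 the density reduces to the Lindley density, matching the boundary values stated above.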
It is clear that for values of α close to 1 the curve resembles the one-parameter Lindley density, while as α → ∞ the curve tends to become symmetric. In Figure 2 we show the hazard rate function of the MOEL distribution for some values of α and θ = 1. From Ghitany et al.,[31] h(y | θ, α) can have increasing, decreasing–increasing and increasing–decreasing–increasing behaviour, while the hazard rate function of the one-parameter Lindley distribution has only increasing behaviour.

Figure 1. Density of the MOEL distribution for selected values of α and θ = 1.

Figure 2. Hazard rate function of the MOEL distribution for selected values of α and θ = 1.

The quantile function of the MOEL distribution is given by

F⁻¹(u) = −1 − 1/θ − (1/θ) W₋₁( (θ + 1) e^{−θ−1} (u − 1) / (1 − ᾱu) ),

where 0 < u < 1 and W₋₁(·) denotes the negative branch of the Lambert W function (i.e. the solution of the equation W(z) e^{W(z)} = z), which applies because (1 + θ + θy) > 1 and (u − 1)(θ + 1) e^{−θ−1}/(1 − ᾱu) ∈ (−1/e, 0).[31,60]

Using the series expansion

(1 − z)^{−w} = Σ_{j=0}^{∞} [Γ(w + j)/(Γ(w) j!)] z^j,  (7)

where |z| < 1 and w > 0, the probability density function (5) can be written as

f(y | θ, α) = [θ²/(θ + 1)] α(1 + y) e^{−θy} × Σ_{j=0}^{∞} (j + 1)(1 − α)^j (1 + θy/(θ + 1))^j e^{−θyj}.
(8)

By considering Equation (8) and applying the binomial expansion to (1 + θy/(θ + 1))^j, the rth moment of Y is given by

E(Y^r) = [θ²α/(θ + 1)] Σ_{j=0}^{∞} Σ_{i=0}^{j} (j choose i)(j + 1)(1 − α)^j (θ/(θ + 1))^i × {Γ(r + i + 1)/[θ(j + 1)]^{r+i+1}} (1 + (r + i + 1)/(θ(j + 1))).  (9)

The moment generating function of the MOEL distribution is given by

M_Y(t) = [θ²α/(θ + 1)] Σ_{k=0}^{∞} Σ_{j=0}^{∞} Σ_{i=0}^{j} (t^k/k!) (j choose i)(j + 1)(1 − α)^j (θ/(θ + 1))^i × {Γ(k + i + 1)/[θ(j + 1)]^{k+i+1}} (1 + (k + i + 1)/(θ(j + 1))).  (10)

Proposition 1.1 The mean of the MOEL distribution is given by

E(Y) = [θ²α/(θ + 1)] Σ_{j=0}^{∞} Σ_{i=0}^{j} [(j + 1)!/(j − i)!] (1 − α)^j (θ/(θ + 1))^i × {(i + 1)/[θ(j + 1)]^{i+2}} (1 + (i + 2)/(θ(j + 1))).  (11)

More statistical properties of the MOEL distribution are discussed in [31].

For any probability distribution, parameter estimation is of fundamental importance, although in general only the maximum likelihood estimation (MLE) method is considered. It is of interest to compare the MLE method with other estimation methods. In this paper, we consider five additional methods to estimate the parameters of the MOEL distribution: ordinary least-squares (OLS), weighted least-squares (WLS), maximum product of spacings (MPS), Cramér–von Mises (CM) and Anderson–Darling (AD). The main aim of this paper is to identify, for the MOEL distribution, the most efficient estimation method for different shape-parameter values and sample sizes.

In Section 2, we discuss the six estimation methods considered in this paper. The comparison of these methods in terms of bias, root mean-squared error, average absolute difference between the true and estimated distribution functions, and maximum absolute difference between the true and estimated distribution functions is presented in Section 3. Some concluding remarks in Section 4 finalize the paper.

2.
Estimation methods

In this section, considering the Marshall–Olkin model formulation, we describe the six methods used to obtain estimates of α and θ. For all methods, we consider the case where both α and θ are unknown, as in the simulation study presented in Section 3.

2.1. Maximum likelihood

Let y = (y1, …, yn) be a random sample of size n from the Marshall–Olkin extended distribution with parameters α and θ = (θ1, …, θp). From Equation (2), the likelihood and log-likelihood functions are written, respectively, as

L(θ, α | y) = Π_{i=1}^{n} f(yi | θ, α) = α^n Π_{i=1}^{n} f1(yi | θ)/[1 − ᾱS1(yi | θ)]²,  (12)

l(θ, α | y) = n log(α) + Σ_{i=1}^{n} log[f1(yi | θ)] − 2 Σ_{i=1}^{n} log[1 − ᾱS1(yi | θ)].  (13)

The maximum likelihood estimates θ̂_MLE and α̂_MLE of θ and α can be obtained numerically by maximizing the log-likelihood function (13). In this case, the log-likelihood function is maximized by solving numerically (∂/∂θj) l(θ, α | y) = 0 and (∂/∂α) l(θ, α | y) = 0 in θ and α, respectively, where

(∂/∂θj) l(θ, α | y) = Σ_{i=1}^{n} f′1j(yi | θ)/f1(yi | θ) + 2ᾱ Σ_{i=1}^{n} S′1j(yi | θ)/[1 − ᾱS1(yi | θ)],  (14)

(∂/∂α) l(θ, α | y) = n/α − 2 Σ_{i=1}^{n} S1(yi | θ)/[1 − ᾱS1(yi | θ)],  (15)

where f′1j(yi | θ) = (∂/∂θj) f1(yi | θ) and S′1j(yi | θ) = (∂/∂θj) S1(yi | θ), j = 1, …, p.

2.2. Ordinary least-squares

Let y1:n < y2:n < ⋯ < yn:n be the order statistics of a random sample of size n from a distribution with cumulative distribution function F(y). It is well known that

E[F(yi:n)] = i/(n + 1) and Var[F(yi:n)] = i(n − i + 1)/[(n + 1)²(n + 2)].  (16)

For the Marshall–Olkin extended distribution, the least-squares estimates θ̂_OLS and α̂_OLS of the parameters θ and α, respectively, are obtained by minimizing the function

S(θ, α | y) = Σ_{i=1}^{n} (F(yi:n | θ, α) − i/(n + 1))².
(17)

These estimates can also be obtained by solving the nonlinear equations:

Σ_{i=1}^{n} (F1(yi:n | θ)/[1 − ᾱS1(yi:n | θ)] − i/(n + 1)) Ψ1j(yi:n | θ, α) = 0,  (18)

Σ_{i=1}^{n} (F1(yi:n | θ)/[1 − ᾱS1(yi:n | θ)] − i/(n + 1)) Ψ2(yi:n | θ, α) = 0,  (19)

where

Ψ1j(yi:n | θ, α) = {[1 − ᾱS1(yi:n | θ)] F′1j + ᾱF1(yi:n | θ) S′1j}/[1 − ᾱS1(yi:n | θ)]²,  (20)

Ψ2(yi:n | θ, α) = −S1(yi:n | θ) F1(yi:n | θ)/[1 − ᾱS1(yi:n | θ)]²,  (21)

F′1j = (∂/∂θj) F1(yi:n | θ) and S′1j = (∂/∂θj) S1(yi:n | θ), j = 1, …, p.

2.3. Weighted least-squares

The weighted least-squares estimates θ̂_WLS and α̂_WLS of the parameters θ and α, respectively, are obtained by minimizing the function

W(θ, α | y) = Σ_{i=1}^{n} wi (F1(yi:n | θ)/[1 − ᾱS1(yi:n | θ)] − i/(n + 1))²,  (22)

where the weight wi is given by

wi = 1/Var[F(yi:n)] = (n + 1)²(n + 2)/[i(n − i + 1)].  (23)

These estimates can also be obtained by solving the nonlinear equations:

Σ_{i=1}^{n} [1/(i(n − i + 1))] (F1(yi:n | θ)/[1 − ᾱS1(yi:n | θ)] − i/(n + 1)) Ψ1j(yi:n | θ, α) = 0,  (24)

Σ_{i=1}^{n} [1/(i(n − i + 1))] (F1(yi:n | θ)/[1 − ᾱS1(yi:n | θ)] − i/(n + 1)) Ψ2(yi:n | θ, α) = 0,  (25)

where Ψ1j(yi:n | θ, α) and Ψ2(yi:n | θ, α) are given by Equations (20) and (21), respectively.

2.4. Maximum product of spacings

Cheng and Amin [61,62] introduced the maximum product of spacings (MPS) method as an alternative to MLE for parameter estimation in continuous univariate distributions. Ranneby [63] independently developed the same method as an approximation to the Kullback–Leibler measure of information. In what follows, let y1:n < y2:n < ⋯ < yn:n be an ordered random sample drawn from the Marshall–Olkin extended distribution. Define the uniform spacings of the sample as the quantities

D1 = F(y1:n | θ, α), D_{n+1} = 1 − F(yn:n | θ, α) and Di = F(yi:n | θ, α) − F(y_{(i−1):n} | θ, α), i = 2, …, n.

Note that there are (n + 1) spacings of the first order.
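The spacings D1, …, D_{n+1} just defined are first-order differences of the fitted cumulative distribution function over the ordered sample. A small illustrative sketch (assuming a vectorized cdf and NumPy; the function name is ours):

```python
import numpy as np

def uniform_spacings(y, cdf, *params):
    """D1 = F(y_(1)), D_{n+1} = 1 - F(y_(n)), and
    D_i = F(y_(i)) - F(y_(i-1)) for i = 2, ..., n."""
    u = cdf(np.sort(y), *params)  # F(y_{1:n}) <= ... <= F(y_{n:n})
    return np.diff(np.concatenate(([0.0], u, [1.0])))
```

By construction the n + 1 spacings are non-negative and sum to one, which is what makes their geometric mean a sensible estimation objective.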
Following [62], the maximum product of spacings method consists in finding the values of θ and α that maximize the geometric mean of the spacings, the MPS statistic, given by

G(θ, α | y) = (Π_{i=1}^{n+1} Di)^{1/(n+1)},  (26)

or, equivalently, its logarithm H = log(G). By considering 0 = F(y0:n | θ, α) < F(y1:n | θ, α) < ⋯ < F(yn:n | θ, α) < F(y_{(n+1):n} | θ, α) = 1, the quantity H = log(G) can be calculated as

H(θ, α | y) = [1/(n + 1)] Σ_{i=1}^{n+1} log(Di).  (27)

The estimates θ̂_MPS and α̂_MPS of θ and α can be found by solving, respectively in θ and α, the nonlinear equations:

(∂/∂θj) H(θ, α) = Σ_{i=1}^{n+1} (1/Di) Δ[(∂/∂θj) F(yi:n | θ, α)] = 0,  (28)

(∂/∂α) H(θ, α) = Σ_{i=1}^{n+1} (1/Di) Δ[(∂/∂α) F(yi:n | θ, α)] = 0,  (29)

where Δ is the first-order difference operator. Cheng and Amin [62] showed that maximizing H as a method of parameter estimation is as efficient as MLE, and that the MPS estimators are consistent under more general conditions than the MLE estimators.

2.5. Minimum distance methods

In this subsection, we present two estimation methods for θ and α based on the minimization of two well-known goodness-of-fit statistics. This class of statistics is based on the difference between the estimated cumulative distribution function and the empirical distribution function.[64,65]

2.5.1. Cramér–von Mises

The Cramér–von Mises estimates θ̂_CM and α̂_CM of the parameters are obtained by minimizing, with respect to θ and α, the function

C(θ, α | y) = 1/(12n) + Σ_{i=1}^{n} (F1(yi:n | θ)/[1 − ᾱS1(yi:n | θ)] − (2i − 1)/(2n))².  (30)

These estimates can also be obtained by solving the nonlinear equations:

Σ_{i=1}^{n} (F1(yi:n | θ)/[1 − ᾱS1(yi:n | θ)] − (2i − 1)/(2n)) Ψ1j(yi:n | θ, α) = 0,  (31)

Σ_{i=1}^{n} (F1(yi:n | θ)/[1 − ᾱS1(yi:n | θ)] − (2i − 1)/(2n)) Ψ2(yi:n | θ, α) = 0,  (32)

where Ψ1j(yi:n | θ, α) and Ψ2(yi:n | θ, α) are given in Equations (20) and (21), respectively.

2.5.2.
Anderson–Darling

The Anderson–Darling estimates θ̂_AD and α̂_AD of the parameters are obtained by minimizing, with respect to θ and α, the function

A(θ, α | y) = −n − (1/n) Σ_{i=1}^{n} (2i − 1) log{F(yi:n | θ, α)[1 − F(y_{n+1−i:n} | θ, α)]}.  (33)

These estimates can also be obtained by solving the nonlinear equations:

Σ_{i=1}^{n} (2i − 1) [Ψ1j(yi:n | θ, α)/F(yi:n | θ, α) − Ψ1j(y_{n+1−i:n} | θ, α)/(1 − F(y_{n+1−i:n} | θ, α))] = 0,  (34)

Σ_{i=1}^{n} (2i − 1) [Ψ2(yi:n | θ, α)/F(yi:n | θ, α) − Ψ2(y_{n+1−i:n} | θ, α)/(1 − F(y_{n+1−i:n} | θ, α))] = 0,  (35)

where Ψ1j(· | θ, α) and Ψ2(· | θ, α) are given in Equations (20) and (21), respectively. These two methods, together with five others, were used in [64,65] for parameter estimation of the generalized Pareto distribution and the three-parameter Weibull distribution, respectively.

3. Simulation studies

In this section we present the results of numerical experiments comparing the performance of the estimation methods discussed in the previous section. We took sample sizes n = 20, 50, 100 and 200 and α = 0.5, 0.8, 1.5, 3.0 and 5.0. The results were invariant with respect to θ, so we set θ = 1. For each of the 20 combinations, we generated B = 500,000 pseudo-random samples from the Marshall–Olkin extended Lindley distribution by solving F(y | θ, α) = u numerically, where u is a uniform (0, 1) pseudo-random number.[66] The estimates were obtained in Ox version 6.20,[67] using the MaxBFGS function.
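To mirror this set-up in Python rather than Ox (an illustration, not the authors' code; scipy is assumed), one can draw MOEL samples through the Lambert W form of the quantile function given in Section 1 and maximize the log-likelihood (13) numerically:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import lambertw

def moel_rvs(n, theta, alpha, rng):
    # Invert F(y) = u via the negative branch of the Lambert W function.
    u = rng.uniform(size=n)
    z = (theta + 1.0) * np.exp(-theta - 1.0) * (u - 1.0) / (1.0 - (1.0 - alpha) * u)
    return -1.0 - 1.0 / theta - lambertw(z, k=-1).real / theta

def negloglik(par, y):
    # Negative of the MOEL log-likelihood (13); Lindley baseline.
    theta, alpha = par
    if theta <= 0.0 or alpha <= 0.0:
        return np.inf
    s1 = (1.0 + theta * y / (1.0 + theta)) * np.exp(-theta * y)
    logf1 = 2.0 * np.log(theta) - np.log(1.0 + theta) + np.log1p(y) - theta * y
    return -(y.size * np.log(alpha) + logf1.sum()
             - 2.0 * np.log(1.0 - (1.0 - alpha) * s1).sum())

rng = np.random.default_rng(2014)
y = moel_rvs(200, theta=1.0, alpha=0.5, rng=rng)
fit = minimize(negloglik, x0=np.array([1.0, 1.0]), args=(y,), method="Nelder-Mead")
theta_hat, alpha_hat = fit.x
```

The same objective structure (a scalar criterion minimized over (θ, α)) applies to the OLS, WLS, MPS, CM and AD criteria of Section 2; only the function passed to the optimizer changes.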
For each estimate, we compute the bias, the root mean-squared error (RMSE), the average absolute difference between the true and estimated distribution functions (Dabs) and the maximum absolute difference between the true and estimated distribution functions (Dmax), respectively, as

Bias(θ̂) = (1/B) Σ_{i=1}^{B} (θ̂i − θ),   Bias(α̂) = (1/B) Σ_{i=1}^{B} (α̂i − α),  (36)

RMSE(θ̂) = [(1/B) Σ_{i=1}^{B} (θ̂i − θ)²]^{1/2},   RMSE(α̂) = [(1/B) Σ_{i=1}^{B} (α̂i − α)²]^{1/2},  (37)

Dabs = [1/(Bn)] Σ_{i=1}^{B} Σ_{j=1}^{n} |F(yij | θ, α) − F(yij | θ̂, α̂)|,  (38)

Dmax = (1/B) Σ_{i=1}^{B} max_j |F(yij | θ, α) − F(yij | θ̂, α̂)|.  (39)

In Tables 1–5, we show the calculated values of (36)–(39). The superscript on each value indicates the rank obtained by each method, and the row labelled Total shows the global rank for each method based on measures (36)–(39).

Table 1. Simulation results for θ = 1.0 and α = 0.5.

n    Qtd      MLE       MPS       OLS       WLS       CM        AD
20   Bias(θ)  0.2568⁵  −0.0802³   0.0264¹   0.0543²   0.2784⁶   0.1038⁴
     RMSE(θ)  0.6385²   0.5440¹   0.7550⁵   0.7306⁴   0.8798⁶   0.6431³
     Bias(α)  0.5174⁵   0.1072¹   0.3501²   0.3726⁴   0.7217⁶   0.3586³
     RMSE(α)  1.3186³   0.8030¹   1.4139⁴   1.5403⁵   1.9480⁶   1.1921²
     Dabs     0.0563³   0.0551¹   0.0573⁵   0.0567⁴   0.0586⁶   0.0561²
     Dmax     0.0935³   0.0879¹   0.0951⁵   0.0937⁴   0.1009⁶   0.0922²
     Total    21³       8¹        22⁴       23⁵       36⁶       16²
50   Bias(θ)  0.0964⁶  −0.0871⁴  −0.0148¹   0.0155²   0.0930⁵   0.0275³
     RMSE(θ)  0.3490²   0.3379¹   0.4704⁵   0.4182⁴   0.4946⁶   0.3875³
     Bias(α)  0.1654⁵  −0.0133¹   0.1022²   0.1134⁴   0.2151⁶   0.1118³
     RMSE(α)  0.4650²   0.3546¹   0.5673⁵   0.5265⁴   0.6670⁶   0.4760³
     Dabs     0.0353²   0.0350¹   0.0368⁵   0.0360⁴   0.0371⁶   0.0357³
     Dmax     0.0580²   0.0566¹   0.0618⁵   0.0598⁴   0.0634⁶   0.0590³
     Total    19³       9¹        23⁵       22⁴       35⁶       18²
100  Bias(θ)  0.0468⁵  −0.0693⁶  −0.0150³   0.0107¹   0.0410⁴   0.0121²
     RMSE(θ)  0.2347¹   0.2421²   0.3332⁵   0.2810⁴   0.3383⁶   0.2693³
     Bias(α)  0.0768⁵  −0.0302¹   0.0427²   0.0544⁴   0.0960⁶   0.0518³
     RMSE(α)  0.2713²   0.2349¹   0.3469⁵   0.3101⁴   0.3776⁶   0.2928³
     Dabs     0.0249¹   0.0249²   0.0263⁵   0.0255⁴   0.0263⁶   0.0253³
     Dmax     0.0407²   0.0405¹   0.0443⁵   0.0423⁴   0.0448⁶   0.0419³
     Total    16²       13¹       25⁵       21⁴       34⁶       17³
200  Bias(θ)  0.0231⁵  −0.0442⁶  −0.0084²   0.0084³   0.0196⁴   0.0060¹
     RMSE(θ)  0.1614¹   0.1676²   0.2334⁵   0.1920⁴   0.2346⁶   0.1879³
     Bias(α)  0.0372⁵  −0.0243²   0.0199¹   0.0287⁴   0.0457⁶   0.0252³
     RMSE(α)  0.1739²   0.1620¹   0.2303⁵   0.2002⁴   0.2404⁶   0.1936³
     Dabs     0.0176¹   0.0176²   0.0186⁵   0.0180⁴   0.0187⁶   0.0179³
     Dmax     0.0286¹   0.0287²   0.0315⁵   0.0298⁴   0.0316⁶   0.0296³
     Total    15¹       15¹       23⁴       23⁴       34⁶       16³

Table 2. Simulation results for θ = 1.0 and α = 0.8.

n    Qtd      MLE       MPS       OLS       WLS       CM        AD
20   Bias(θ)  0.1888⁵  −0.0916⁴  −0.0144²   0.0137¹   0.1906⁶   0.0655³
     RMSE(θ)  0.5133²   0.4526¹   0.6024⁵   0.5770⁴   0.6785⁶   0.5207³
     Bias(α)  0.7257⁵   0.1283¹   0.4359²   0.4754³   0.9372⁶   0.4954⁴
     RMSE(α)  1.9591⁴   1.1850¹   1.8712³   2.0616⁵   2.5606⁶   1.7316²
     Dabs     0.0568²   0.0559¹   0.0580⁵   0.0573⁴   0.0590⁶   0.0568³
     Dmax     0.0950³   0.0900¹   0.0968⁵   0.0954⁴   0.1018⁶   0.0941²
     Total    21³       9¹        22⁵       21³       36⁶       17²
50   Bias(θ)  0.0710⁵  −0.0821⁶  −0.0215³   0.0069¹   0.0651⁴   0.0196²
     RMSE(θ)  0.2874²   0.2871¹   0.3747⁵   0.3332⁴   0.3878⁶   0.3139³
     Bias(α)  0.2288⁵  −0.0279¹   0.1271²   0.1502³   0.2819⁶   0.1566⁴
     RMSE(α)  0.6746²   0.5188¹   0.7697⁵   0.7230⁴   0.9021⁶   0.6766³
     Dabs     0.0357²   0.0356¹   0.0373⁵   0.0364⁴   0.0375⁶   0.0362³
     Dmax     0.0592²   0.0583¹   0.0630⁵   0.0609⁴   0.0642⁶   0.0602³
     Total    18²       11¹       25⁵       20⁴       34⁶       18²
100  Bias(θ)  0.0350⁵  −0.0564⁶  −0.0122³   0.0074¹   0.0310⁴   0.0097²
     RMSE(θ)  0.1953¹   0.2016²   0.2608⁵   0.2248⁴   0.2642⁶   0.2176³
     Bias(α)  0.1071⁵  −0.0391¹   0.0572²   0.0748⁴   0.1284⁶   0.0741³
     RMSE(α)  0.3964²   0.3442¹   0.4743⁵   0.4335⁴   0.5150⁶   0.4169³
     Dabs     0.0252¹   0.0253²   0.0265⁵   0.0257⁴   0.0265⁶   0.0256³
     Dmax     0.0416²   0.0416¹   0.0449⁵   0.0430⁴   0.0452⁶   0.0426³
     Total    16²       13¹       25⁵       21⁴       34⁶       17³
200  Bias(θ)  0.0173⁵  −0.0345⁶  −0.0062³   0.0056²   0.0151⁴   0.0047¹
     RMSE(θ)  0.1352¹   0.1392²   0.1814⁵   0.1545⁴   0.1825⁶   0.1520³
     Bias(α)  0.0520⁵  −0.0301²   0.0273¹   0.0389⁴   0.0613⁶   0.0360³
     RMSE(α)  0.2551²   0.2372¹   0.3143⁵   0.2817⁴   0.3279⁶   0.2754³
     Dabs     0.0177¹   0.0178²   0.0187⁵   0.0181⁴   0.0187⁶   0.0181³
     Dmax     0.0293¹   0.0294²   0.0318⁵   0.0302⁴   0.0319⁶   0.0301³
     Total    15¹       15¹       24⁵       22⁴       34⁶       16³

Table 3. Simulation results for θ = 1.0 and α = 1.5.

n    Qtd      MLE       MPS       OLS       WLS       CM        AD
20   Bias(θ)  0.1329⁶  −0.0966⁴  −0.0362²  −0.0067¹   0.1239⁵   0.0407³
     RMSE(θ)  0.4009²   0.3674¹   0.4609⁵   0.4398⁴   0.5007⁶   0.4059³
     Bias(α)  1.2371⁵   0.1714¹   0.6473²   0.7603³   1.4537⁶   0.8431⁴
     RMSE(α)  3.4898⁵   1.9987¹   2.9325²   3.4810⁴   3.9899⁶   3.0004³
     Dabs     0.0573²   0.0568¹   0.0588⁵   0.0580⁴   0.0593⁶   0.0575³
     Dmax     0.0965³   0.0926¹   0.0986⁵   0.0971⁴   0.1024⁶   0.0961²
     Total    23⁵       9¹        21⁴       20³       35⁶       18²
50   Bias(θ)  0.0504⁵  −0.0686⁶  −0.0186³   0.0035¹   0.0463⁴   0.0144²
     RMSE(θ)  0.2303¹   0.2335²   0.2831⁵   0.2555⁴   0.2911⁶   0.2451³
     Bias(α)  0.3863⁵  −0.0421¹   0.2077²   0.2520³   0.4577⁶   0.2723⁴
     RMSE(α)  1.1813³   0.9059¹   1.2732⁵   1.2118⁴   1.4940⁶   1.1641²
     Dabs     0.0361¹   0.0363²   0.0377⁵   0.0367⁴   0.0377⁶   0.0365³
     Dmax     0.0605²   0.0601¹   0.0638⁵   0.0619⁴   0.0647⁶   0.0613³
     Total    17²       13¹       25⁵       20⁴       34⁶       17²
100  Bias(θ)  0.0247⁵  −0.0434⁶  −0.0095³   0.0046¹   0.0224⁴   0.0069²
     RMSE(θ)  0.1576¹   0.1616²   0.1956⁵   0.1741⁴   0.1982⁶   0.1701³
     Bias(α)  0.1795⁵  −0.0537¹   0.0948²   0.1253³   0.2074⁶   0.1274⁴
     RMSE(α)  0.6927²   0.5995¹   0.7768⁵   0.7282⁴   0.8434⁶   0.7100³
     Dabs     0.0255¹   0.0256²   0.0266⁵   0.0259⁴   0.0267⁶   0.0258³
     Dmax     0.0426¹   0.0426²   0.0452⁵   0.0436⁴   0.0456⁶   0.0434³
     Total    15²       14¹       25⁵       20⁴       34⁶       18³
200  Bias(θ)  0.0121⁵  −0.0263⁶  −0.0048³   0.0034²   0.0109⁴   0.0034¹
     RMSE(θ)  0.1096¹   0.1122²   0.1366⁵   0.1206⁴   0.1375⁶   0.1192³
     Bias(α)  0.0865⁵  −0.0430¹   0.0453²   0.0641⁴   0.0989⁶   0.0617³
     RMSE(α)  0.4455²   0.4126¹   0.5136⁵   0.4744⁴   0.5357⁶   0.4672³
     Dabs     0.0180¹   0.0181²   0.0188⁵   0.0183⁴   0.0188⁶   0.0182³
     Dmax     0.0300¹   0.0301²   0.0320⁵   0.0307⁴   0.0321⁶   0.0306³
     Total    15²       14¹       25⁵       22⁴       34⁶       16³

Table 4. Simulation results for θ = 1.0 and α = 3.0.

n    Qtd      MLE       MPS       OLS       WLS       CM        AD
20   Bias(θ)  0.0978⁶  −0.0928⁵  −0.0385³  −0.0111¹   0.0848⁴   0.0286²
     RMSE(θ)  0.3242³   0.3067¹   0.3564⁵   0.3440⁴   0.3763⁶   0.3236²
     Bias(α)  2.4791⁵   0.3098¹   1.1323²   1.4817³   2.5623⁶   1.6698⁴
     RMSE(α)  7.3190⁶   4.0607¹   5.1862²   6.8683⁴   6.9413⁵   5.8643³
     Dabs     0.0578¹   0.0580²   0.0595⁵   0.0587⁴   0.0595⁶   0.0581³
     Dmax     0.0980³   0.0951¹   0.0995⁵   0.0984⁴   0.1023⁶   0.0974²
     Total    24⁵       11¹       22⁴       20³       33⁶       16²
50   Bias(θ)  0.0370⁵  −0.0559⁶  −0.0148³   0.0018¹   0.0351⁴   0.0108²
     RMSE(θ)  0.1887¹   0.1909²   0.2196⁵   0.2022⁴   0.2259⁶   0.1963³
     Bias(α)  0.7454⁵  −0.0553¹   0.4061²   0.4891³   0.8704⁶   0.5379⁴
     RMSE(α)  2.3550⁴   1.7769¹   2.4298⁵   2.3343³   2.8668⁶   2.2813²
     Dabs     0.0363¹   0.0367²   0.0377⁵   0.0369⁴   0.0378⁶   0.0367³
     Dmax     0.0614²   0.0611¹   0.0640⁵   0.0623⁴   0.0648⁶   0.0619³
     Total    18³       13¹       25⁵       19⁴       34⁶       17²
100  Bias(θ)  0.0183⁵  −0.0341⁶  −0.0074³   0.0032¹   0.0172⁴   0.0054²
     RMSE(θ)  0.1298¹   0.1323²   0.1522⁵   0.1389⁴   0.1544⁶   0.1365³
     Bias(α)  0.3449⁵  −0.0799¹   0.1861²   0.2429³   0.3925⁶   0.2517⁴
     RMSE(α)  1.3613²   1.1675¹   1.4593⁵   1.3910⁴   1.5897⁶   1.3692³
     Dabs     0.0257¹   0.0259²   0.0267⁵   0.0260⁴   0.0267⁶   0.0259³
     Dmax     0.0432¹   0.0433²   0.0454⁵   0.0440⁴   0.0456⁶   0.0438³
     Total    15²       14¹       25⁵       20⁴       34⁶       18³
200  Bias(θ)  0.0093⁵  −0.0200⁶  −0.0035³   0.0027¹   0.0087⁴   0.0029²
     RMSE(θ)  0.0905¹   0.0921²   0.1068⁵   0.0968⁴   0.1075⁶   0.0959³
     Bias(α)  0.1675⁵  −0.0659¹   0.0908²   0.1252⁴   0.1885⁶   0.1235³
     RMSE(α)  0.8724²   0.8033¹   0.9604⁵   0.9060⁴   1.0034⁶   0.8966³
     Dabs     0.0182¹   0.0182²   0.0189⁵   0.0184⁴   0.0189⁶   0.0184³
     Dmax     0.0305¹   0.0306²   0.0321⁵   0.0311⁴   0.0322⁶   0.0310³
     Total    15²       14¹       25⁵       21⁴       34⁶       17³

Table 5. Simulation results for θ = 1.0 and α = 5.0.

n    Qtd      MLE       MPS       OLS       WLS       CM        AD
20   Bias(θ)  0.0840⁶  −0.0826⁵  −0.0382³  −0.0092¹   0.0646⁴   0.0244²
     RMSE(θ)  0.2862³   0.2700¹   0.3013⁵   0.2974⁴   0.3126⁶   0.2811²
     Bias(α)  4.4562⁶   0.6396¹   1.6651²   2.6590³   3.8004⁵   2.8700⁴
     RMSE(α)  13.4363⁶  7.4136¹   7.6407²   11.9698⁵  10.0108⁴  9.7100³
     Dabs     0.0582¹   0.0586³   0.0595⁶   0.0590⁴   0.0591⁵   0.0583²
     Dmax     0.0987³   0.0960¹   0.0993⁵   0.0988⁴   0.1012⁶   0.0978²
     Total    25⁵       12¹       23⁴       21³       30⁶       15²
50   Bias(θ)  0.0316⁵  −0.0488⁶  −0.0120³   0.0017¹   0.0310⁴   0.0097²
     RMSE(θ)  0.1676¹   0.1685²   0.1894⁵   0.1763⁴   0.1953⁶   0.1720³
     Bias(α)  1.2940⁵  −0.0454¹   0.7333²   0.8626³   1.5143⁶   0.9525⁴
     RMSE(α)  4.0846⁴   3.0191¹   4.1388⁵   3.9926³   4.9254⁶   3.9157²
     Dabs     0.0367¹   0.0370³   0.0379⁵   0.0371⁴   0.0379⁶   0.0369²
     Dmax     0.0620²   0.0618¹   0.0642⁵   0.0627⁴   0.0650⁶   0.0624³
     Total    18³       14¹       25⁵       19⁴       34⁶       16²
100  Bias(θ)  0.0156⁵  −0.0295⁶  −0.0061³   0.0027¹   0.0151⁴   0.0048²
     RMSE(θ)  0.1155¹   0.1170²   0.1317⁵   0.1216⁴   0.1338⁶   0.1199³
     Bias(α)  0.5920⁵  −0.1052¹   0.3319²   0.4215³   0.6741⁶   0.4402⁴
     RMSE(α)  2.3165³   1.9649¹   2.4381⁵   2.3316⁴   2.6687⁶   2.3055²
     Dabs     0.0259¹   0.0261²   0.0268⁶   0.0262⁴   0.0268⁵   0.0261³
     Dmax     0.0437¹   0.0437²   0.0455⁵   0.0443⁴   0.0458⁶   0.0441³
     Total    16²       14¹       26⁵       20⁴       33⁶       17³
200  Bias(θ)  0.0078⁵  −0.0173⁶  −0.0029³   0.0022¹   0.0076⁴   0.0025²
     RMSE(θ)  0.0806¹   0.0817²   0.0924⁵   0.0848⁴   0.0931⁶   0.0841³
     Bias(α)  0.2854⁵  −0.0936¹   0.1604²   0.2133³   0.3213⁶   0.2141⁴
     RMSE(α)  1.4725²   1.3465¹   1.5872⁵   1.5070⁴   1.6621⁶   1.4955³
     Dabs     0.0183¹   0.0184²   0.0189⁶   0.0185⁴   0.0189⁵   0.0184³
     Dmax     0.0308¹   0.0309²   0.0322⁵   0.0313⁴   0.0323⁶   0.0312³
     Total    15²       14¹       26⁵       20⁴       33⁶       18³
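For a single simulated sample, the inner terms of (38) and (39) can be computed as follows (illustrative Python, assuming a vectorized cdf; the function name is ours). Averaging these two quantities over the B replications gives Dabs and Dmax:

```python
import numpy as np

def dabs_dmax_terms(y, cdf, true_par, est_par):
    """Mean and maximum of |F(y_j | true) - F(y_j | estimate)|
    over one sample: the per-replication terms of (38) and (39)."""
    d = np.abs(cdf(y, *true_par) - cdf(y, *est_par))
    return d.mean(), d.max()
```
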
For example, in Table 1 with n = 20, the MPS method has superscript value 1, meaning that its rank sum over the measures (36)–(39) was the lowest among all methods.

4. Conclusions

In this paper we compared, through intensive simulation experiments, parameter estimation for the MOEL distribution using six methods, namely maximum likelihood, ordinary and weighted least-squares, maximum product of spacings, Cramér–von Mises and Anderson–Darling.

From the simulations we observed that, in all scenarios, the maximum product of spacings (MPS) method had the lowest overall rank. As the sample size increases, the maximum likelihood (MLE) method has the second lowest rank, sometimes equalling the MPS. Therefore, the MPS method can be regarded as the best method for estimating the parameters of the MOEL distribution; for large samples the MLE method is also good. An important observation is that for α > 1 and n = 20 the MLE had the second highest rank, better only than CM, since CM showed the highest rank in all cases. In general, the MPS method showed the lowest root mean-squared error. For the α parameter, RMSE(α) under MPS was the lowest for all α and n. For the θ parameter, RMSE(θ) under MPS was lower only for n = 20, 50 and α = 0.5, 0.8; otherwise, the MLE method obtained a better result.

Acknowledgements

A.P.J. do Espirito Santo and J. Mazucheli gratefully acknowledge financial support from the Coordination for the Improvement of Higher Education Personnel (CAPES) and the National Council for Scientific and Technological Development (CNPq). The authors also thank the Associate Editor and the referee for useful comments which enhanced the presentation of the paper.

References

[1] Gupta RD, Kundu D. Introduction of shape/skewness parameter(s) in a probability distribution. J Appl Statist Sci. 2009;7(2):153–171.
[2] Marshall AW, Olkin I.
A new method for adding a parameter to a family of distributions with application to the exponential and Weibull families. Biometrika. 1997;84(3):641–652.
[3] Gupta RC, Lvin S, Peng C. Estimating turning points of the failure rate of the extended Weibull distribution. Comput Statist Data Anal. 2010;54(4):924–934.
[4] Marshall AW, Olkin I. Life distributions: structure of nonparametric, semiparametric, and parametric families. Springer Series in Statistics. New York: Springer; 2007.
[5] Nanda AK, Das S. Stochastic orders of the Marshall–Olkin extended distribution. Commun Stat Theory Methods. 2012;8(2):295–302.
[6] Jose KK, Joseph A, Ristić MM. A Marshall–Olkin beta distribution and its applications. J Probab Statist Sci. 2009;7(2):173–186.
[7] Lemonte AJ. A new extension of the Birnbaum–Saunders distribution. Braz J Probab Statist. 2013;27(2):133–149.
[8] Jose KK. Marshall–Olkin family of distributions and their applications in reliability theory, time series modeling and stress-strength analysis. In: Proceedings of the 58th World Statistical Congress, International Statistical Institute, Dublin; 2011. p. 3918–3923.
[9] Jayakumar K, Mathew T. On a generalization to Marshall–Olkin scheme and its application to Burr type XII distribution. Statist Papers. 2008;49(3):421–439.
[10] Rao GS. Acceptance sampling plans from truncated life tests based on the Marshall–Olkin extended exponential distribution for percentiles. Braz J Probab Stat. 2013;27(2):117–132.
[11] Barreto-Souza W, Lemonte AJ, Cordeiro GM. General results for the Marshall–Olkin's family of distributions. An Acad Brasil Ciênc. 2013;85(1):3–21.
[12] Bdair OM. Different methods of estimation for Marshall–Olkin exponential distribution. J Appl Statist Sci. 2012;19(2):141–157.
[13] Pakyari R. Inference for the geometric extreme exponential distribution under progressive type II censoring. ISRN Probab Statist. 2012;2012:1–15.
[14] Louzada F, Roman M, Cancho VG. The complementary exponential geometric distribution: model, properties, and a comparison with its counterpart. Comput Statist Data Anal. 2011;55(8):2516–2524.
[15] Caroni C. Testing for the Marshall–Olkin extended form of the Weibull distribution. Statist Papers. 2010;51(2):325–336.
[16] Pakyari R. Discriminating between generalized exponential, geometric extreme exponential and Weibull distributions. J Statist Comput Simul. 2010;80(12):1403–1412.
[17] Gupta RC, Peng C. Estimating reliability in proportional odds ratio models. Comput Statist Data Anal. 2009;53(4):1495–1510.
[18] Salah MM, Raqab MZ, Ahsanullah M. Marshall–Olkin exponential distribution: moments of order statistics. J Appl Statist Sci. 2009;17(1):81–91.
[19] Rao GS, Ghitany ME, Kantam RRL. Reliability test plans for Marshall–Olkin extended exponential distribution. Appl Math Sci. 2009;3(55):2745–2755.
[20] Nadarajah S. Marshall and Olkin's distributions. Acta Appl Math. 2008;103(1):87–100.
[21] Adamidis K, Dimitrakopoulou T, Loukas S. On an extension of the exponential–geometric distribution. Statist Probab Lett. 2005;7(3):259–269.
[22] Marshall AW, Meza JC, Olkin I. Can data recognize its parent distribution? J Comput Graph Statist. 2001;10(3):555–580.
[23] Adamidis K, Loukas S. A lifetime distribution with decreasing failure rate. Statist Probab Lett. 1998;39(1):35–42.
[24] Louzada F, Marchi V, Carpenter J. The complementary exponentiated exponential geometric lifetime distribution. J Probab Statist. 2013;2013:1–12.
[25] Silva RB, Barreto-Souza W, Cordeiro GM. A new distribution with decreasing, increasing and upside-down bathtub failure rate. Comput Statist Data Anal. 2010;54(4):935–944.
[26] Gui W. A Marshall–Olkin power log-normal distribution and its applications to survival data. Int J Statist Probab. 2013;2(1):63–72.
[27] Mahmoudi E, Shiran M. Exponentiated Weibull–geometric distribution and its applications. arXiv:1206.4008v1; 2012.
[28] Krishna E, Jose KK, Alice T, Ristić MM. The Marshall–Olkin Fréchet distribution. Commun Statist Theory Methods. 2013;42(22):4091–4107.
[29] Krishna E, Jose KK, Ristić MM. Applications of Marshall–Olkin Fréchet distribution. Commun Statist Simul Comput. 2013;42(1):76–89.
[30] Ristić MM, Jose KK, Ancy J. A Marshall–Olkin gamma distribution and minification process. STARS: Stress Anxiety Res Soc. 2007;1:107–117.
[31] Ghitany ME, Al-Mutairi DK, Al-Awadhi FA, Al-Burais MM. Marshall–Olkin extended Lindley distribution and its application. Int J Appl Math. 2012;25(5):709–721.
[32] Ghitany ME, Kotz S. Reliability properties of extended linear failure-rate distributions. Probab Eng Inf Sci. 2007;21:441–450.
[33] Gui W. Marshall–Olkin extended log-logistic distribution and its application in minification processes. Appl Math Sci. 2013;7(80):3947–3961.
[34] Gupta RC, Ghitany ME, Al-Mutairi DK. Estimation of reliability from Marshall–Olkin extended Lomax distributions. J Statist Comput Simul. 2010;80(7–8):937–947.
[35] Srinivasa Rao G, Ghitany ME, Kantam RRL. Marshall–Olkin extended Lomax distribution: an economic reliability test plan. Int J Appl Math. 2009;22(1):139–148.
[36] Ghitany ME, Al-Awadhi FA, Alkhalfan LA. Marshall–Olkin extended Lomax distribution and its application to censored data. Commun Statist Theory Methods. 2007;36(9–12):1855–1866.
[37] García VJ, Gómez-Déniz E, Vázquez-Polo FJ. Marshall–Olkin family of heavy-tailed distributions which includes the lognormal one. Commun Statist Theory Methods. 2013. Available from: http://www.tandfonline.com/doi/abs/10.1080/03610926.2013.873132
[38] EL-Bassiouny AH, Abdo NF. Reliability properties of extended Makeham distributions. Comput Methods Sci Technol. 2009;15(2):143–149.
[39] Maiti SS, Dey M. Tilted normal distribution and its survival properties. J Data Sci. 2012;10:225–240.
[40] García VJ, Gómez-Déniz E, Vázquez-Polo FJ. A new skew generalization of the normal distribution: properties and applications. Comput Statist Data Anal. 2010;54(8):2021–2034.
[41] Ghitany ME. Marshall–Olkin extended Pareto distribution and its application. Int J Appl Math. 2005;18(1):17–31.
[42] Thomas A, Jose KK. Marshall–Olkin Pareto processes. Far East J Theor Stat. 2003;9(2):117–132.
[43] Jose KK, Naik SR, Ristić MM. Marshall–Olkin q-Weibull distribution and max–min processes. Statist Papers. 2010;51:837–851.
[44] Alice T, Jose KK. Marshall–Olkin semi-Weibull minification processes. Recent Adv Statist Theory Appl. 2005;1:6–17.
[45] Rubio FJ, Steel MFJ. On the Marshall–Olkin transformation as a skewing mechanism. Comput Statist Data Anal. 2012;56(7):2251–2257.
[46] Jose KK, Krishna E. Marshall–Olkin extended uniform distribution. ProbStat Forum. 2011;4(October):78–88.
[47] Cordeiro GM, Lemonte AJ. On the Marshall–Olkin extended Weibull distribution. Statist Papers. 2013;54(2):333–353.
[48] Tojeiro C, Louzada F, Roman M, Borges P. The complementary Weibull Geometric distribution. J Statist Comput Simul. 2014;84(6):1345–1362.
[49] Srivastava AK, Kumar V. Software reliability data analysis with Marshall–Olkin extended Weibull model using MCMC method for noninformative set of priors. Int J Comput Appl. 2011;18(4):31–39.
[50] Barreto-Souza W, de Morais AL, Cordeiro GM. The Weibull–Geometric distribution. J Statist Comput Simul. 2011;81(5):645–657.
[51] Zhang T, Xie M. Failure data analysis with extended Weibull distribution. Commun Statist Simul Comput. 2007;36(3):579–592.
[52] Ghitany ME, Al-Hussaini EK, Al-Jarallah RA. Marshall–Olkin extended Weibull distribution and its application to censored data. J Appl Stat. 2005;32(10):1025–1034.
[53] Zakerzadeh H, Mahmoudi E. A new two parameter lifetime distribution: model and properties. arXiv:1204.4248v1; 2012.
[54] Lindley DV. Fiducial distributions and Bayes’ theorem. J R Statist Soc Ser B Methodol. 1958;20:102–107.
[55] Lindley DV. Introduction to probability and statistics from a Bayesian viewpoint, Part II: inference. New York: Cambridge University Press; 1965.
[56] Ghitany ME, Atieh B, Nadarajah S. Lindley distribution and its application. Math Comput Simul. 2008;78(4):493–506.
[57] Ghitany ME, Alqallaf F, Al-Mutairi DK, Husain HA. A two-parameter weighted Lindley distribution and its applications to survival data. Math Comput Simul. 2011;81:1190–1201.
[58] Zakerzadeh H, Dolati A. Generalized Lindley distribution. J Math Extension. 2009;3:13–25.
[59] Mazucheli J, Achcar JA. The Lindley distribution applied to competing risks lifetime data. Comput Methods Programs Biomed. 2011;104(2):189–192.
[60] Jodrá P. Computer generation of random variables with Lindley or Poisson–Lindley distribution via the Lambert W function. Math Comput Simul. 2010;81(4):851–859.
[61] Cheng RCH, Amin NAK. Maximum product-of-spacings estimation with applications to the lognormal distribution. Cardiff: University of Wales IST, Department of Mathematics; 1979. Report no. 1.
[62] Cheng RCH, Amin NAK. Estimating parameters in continuous univariate distributions with a shifted origin. J R Statist Soc Ser B. 1983;45(3):394–403.
[63] Ranneby B. The maximum spacing method. An estimation method related to the maximum likelihood method. Scand J Statist Theory Appl. 1984;11(2):93–112.
[64] Luceño A. Maximum likelihood vs. maximum goodness of fit estimation of the three-parameter Weibull distribution. J Statist Comput Simul. 2008;78(10):941–949.
[65] Luceño A. Fitting the generalized Pareto distribution to data using maximum goodness-of-fit estimators. Comput Statist Data Anal. 2006;51(2):904–917.
[66] Devroye L. Non-uniform random variate generation. New York: Springer; 1986.
[67] Doornik JA. Object-oriented matrix programming using Ox. 3rd ed. London: Timberlake Consultants Press and Oxford; 2007.