Hindawi Publishing Corporation
Mathematical Problems in Engineering
Volume 2012, Article ID 565894, 15 pages
doi:10.1155/2012/565894

Research Article

A Bayesian Analysis of Spectral ARMA Model

Manoel I. Silvestre Bezerra,1 Fernando Antonio Moala,1 and Yuzo Iano2
1 DEST, FCT, Unesp, Presidente Prudente 19060-900, SP, Brazil
2 DECOM, FEEC, Unicamp, Campinas 13083-852, SP, Brazil

Correspondence should be addressed to Manoel I. Silvestre Bezerra, manoel@fct.unesp.br

Received 22 November 2011; Revised 9 April 2012; Accepted 12 April 2012

Academic Editor: Kwok W. Wong

Copyright © 2012 Manoel I. Silvestre Bezerra et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Bezerra et al. (2008) proposed a new method, based on the Yule-Walker equations, to estimate the ARMA spectral model. In this paper, a Bayesian approach is developed for this model using the noninformative prior proposed by Jeffreys (1967). The Bayesian computations are carried out via Markov chain Monte Carlo (MCMC) simulation, and characteristics of the marginal posterior distributions, such as Bayes estimators and credible intervals for the parameters of the ARMA model, are derived. Both methods are also compared with the traditional least squares and maximum likelihood approaches, and a numerical illustration with two examples of the ARMA model is presented to evaluate the performance of the procedures.

1. Introduction

The spectral estimation of autoregressive moving average (ARMA) processes is a topic of interest in several applied areas, for example, engineering and econometrics [1-4]. Different methods have been studied, comprising two main approaches: (i) the optimal methods, such as maximum likelihood estimation, which are computationally difficult to deal with [1, 3, 4]; (ii) the suboptimal methods based on the Yule-Walker equations, which employ linear equations in the parameter estimation [2-8]. Another type of suboptimal method, proposed by [9], estimates the AR and MA parameters simultaneously using the reformulated Steiglitz-McBride least-squares algorithm.

The estimation methods for the ARMA process have been widely studied, especially the separate estimation of parameters, where the Yule-Walker equations are used to estimate the AR parameters and the MA parameters are then estimated through the Durbin method [3, 4]. Among these methods, the best known are the modified Yule-Walker equations (MYWE) and the least squares Yule-Walker (LSYW) method. Moreover, these methods provide similar results for ARMA spectral estimation. One of the methods used to make the comparisons is that proposed by [8].

Bayesian inference has also been used successfully in practical problems. In Bayesian inference, the unknown parameter is considered a random variable and is therefore assigned a probability distribution (the prior), which is specified from past data or from expert opinion. The main objective of this paper is to show the viability of the Bayesian approach to estimating the parameters of the ARMA process, considering the noninformative prior proposed by Jeffreys [10].
Thus, a comparison between the classical and Bayesian estimation methods is carried out by comparing the estimates of the parameters of the ARMA model and, consequently, the estimates of the power spectrum. The mean estimates of the parameters and their respective standard deviations are obtained from chains generated by MCMC simulation algorithms.

The paper is organized as follows. In Section 2, the ARMA model and its main features are presented. Section 3 is devoted to the separate estimation methods using Yule-Walker equations and presents the development of the proposed method, based on the methods described in that section. In Section 4, we present a Bayesian analysis considering the Jeffreys and independent normal prior distributions for the parameters. Section 5 deals with applications using Monte Carlo simulations for two numerical examples.

2. The ARMA Spectral Model

Consider $x_t$ an ARMA stationary random process of order $(p, q)$, defined by the following difference equation:

$$\sum_{i=0}^{p} \phi_i x_{t-i} = \sum_{j=0}^{q} \theta_j \varepsilon_{t-j}, \qquad \text{assuming } \phi_0 = 1, \quad 1 \le t \le N, \quad (2.1)$$

where $\phi_i$, $i = 0, 1, \ldots, p$, and $\theta_j$, $j = 0, 1, \ldots, q$, are the parameters of the process and $\varepsilon_t$ is a white noise with mean zero and variance $\sigma^2$. Thus, the vector of parameters to be estimated is defined by

$$\beta = \left[\phi_1, \ldots, \phi_p, \theta_1, \ldots, \theta_q, \sigma^2\right]. \quad (2.2)$$

We define the power spectral density of the ARMA process in the $z$-plane, from the system function, by

$$S(z) = \sigma^2 \frac{B(z)\, B^*(1/z^*)}{A(z)\, A^*(1/z^*)}, \quad (2.3)$$

where $A(z) = \prod_{i=1}^{p} (1 - p_i z^{-1})$ and $A^*(1/z^*) = \prod_{i=1}^{p} (1 - p_i^* z)$, with $p_i$, $i = 1, \ldots, p$, the poles, and $B(z) = \prod_{i=1}^{q} (1 - q_i z^{-1})$ and $B^*(1/z^*) = \prod_{i=1}^{q} (1 - q_i^* z)$, with $q_i$, $i = 1, \ldots, q$, the zeros. As usual, the star denotes complex conjugation.

The ARMA power spectral density in the frequency domain is obtained by substituting $z = \exp(j 2\pi f)$ into (2.3), resulting in

$$S(f) = \sigma^2 \frac{\left|B(f)\right|^2}{\left|A(f)\right|^2} = \sigma^2 \frac{\left|\sum_{k=0}^{q} \theta_k \exp(-j 2\pi f k)\right|^2}{\left|1 + \sum_{k=1}^{p} \phi_k \exp(-j 2\pi f k)\right|^2}. \quad (2.4)$$

The function $S(f)$ is the Fourier transform of the autocorrelation function $r_k = E[x_t x_{t-k}]$; it is nonnegative and periodic in frequency with period 1 Hz, and the process is assumed to be band limited to $\pm 0.5$ Hz [1-3].

The problem of ARMA spectral estimation consists initially of selecting a suitable model, so that we can estimate the vector of parameters of the process and then replace the estimated values in the spectral density function. This parametrization of the function $S(f)$ is obtained by using the vector of parameters $\beta$.
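To make the parametrization in (2.4) concrete, the sketch below evaluates the ARMA power spectral density on a grid of normalized frequencies for a given parameter vector. It is an illustrative implementation, not code from the paper; the function and variable names are our own, and the coefficients in the usage example are those of the ARMA(4,2) model used later in Example 5.1.

```python
import numpy as np

def arma_psd(phi, theta, sigma2, freqs):
    """Evaluate the ARMA power spectral density (2.4) on a grid of
    normalized frequencies (band limited to +-0.5).

    phi    : AR coefficients [phi_1, ..., phi_p] (phi_0 = 1 implicit)
    theta  : MA coefficients [theta_1, ..., theta_q] (theta_0 = 1 implicit)
    sigma2 : white-noise variance
    """
    a = np.concatenate(([1.0], np.asarray(phi)))    # denominator polynomial A(f)
    b = np.concatenate(([1.0], np.asarray(theta)))  # numerator polynomial B(f)
    psd = np.empty(len(freqs))
    for n, f in enumerate(freqs):
        za = np.exp(-2j * np.pi * f * np.arange(len(a)))
        zb = np.exp(-2j * np.pi * f * np.arange(len(b)))
        psd[n] = sigma2 * abs(b @ zb) ** 2 / abs(a @ za) ** 2
    return psd

# Usage: spectrum of the ARMA(4,2) model of Example 5.1
freqs = np.linspace(0.0, 0.5, 256)
S = arma_psd(phi=[-0.52, 1.018, -0.255, 0.24],
             theta=[-0.337, 0.810], sigma2=1.0, freqs=freqs)
S_db = 10.0 * np.log10(S)   # the figures in the paper plot the spectra in dB
```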
3. Estimation Methods Using Yule-Walker Equations

There are several methods which use Yule-Walker equations in the separate spectral estimation of the ARMA model, as mentioned in the introduction. The best-known methods are the modified Yule-Walker equations (MYWE) and the least squares modified Yule-Walker (LSYW), which uses more than $p$ linear equations in the parameter estimation. In both methods, the errors can be weighted in order to stabilize the variance. Moreover, these methods provide similar results for ARMA spectral estimation.

3.1. Least-Squares Yule-Walker Method

This method is an attempt to reduce the variance of the MYWE estimator and improve the quality of its estimates. It is known that the autocorrelation function of the ARMA process satisfies [6]

$$r_k + \sum_{i=1}^{p} \phi_i r_{k-i} = 0, \qquad k \ge q + 1. \quad (3.1)$$

Starting from the MYWE, we first describe the least squares method of the modified Yule-Walker equations (LSYW) for the estimation of the parameters of the AR part, given that the expanded Yule-Walker equations for the ARMA process can be represented as [2-4, 6]

$$\mathbf{r} = -\mathbf{R}\Phi, \quad (3.2)$$

where $\mathbf{R}$ is the expanded autocorrelation matrix of dimension $(M - q) \times p$, given by

$$\mathbf{R} = \begin{bmatrix} r_q & r_{q-1} & \cdots & r_{q-p+1} \\ r_{q+1} & r_q & \cdots & r_{q-p+2} \\ \vdots & \vdots & \ddots & \vdots \\ r_{M-1} & r_{M-2} & \cdots & r_{M-p} \end{bmatrix}, \quad (3.3)$$

and $\mathbf{r}$ is the $(M - q) \times 1$ autocorrelation vector given by

$$\mathbf{r} = \left[r_{q+1}, r_{q+2}, \ldots, r_M\right]^T. \quad (3.4)$$

Consistent estimates of the AR parameter vector $\Phi = [\phi_1, \ldots, \phi_p]^T$ can be obtained from expression (3.2). As $M - q > p$, there are more equations than unknowns, so an error vector $\mathbf{e}$ of size $(M - q) \times 1$ is introduced to account for the estimation of the autocorrelations:

$$\hat{\mathbf{r}} = -\hat{\mathbf{R}}\Phi + \mathbf{e}, \quad (3.5)$$

where $\hat{\mathbf{r}}$ and $\hat{\mathbf{R}}$ are the estimators of $\mathbf{r}$ and $\mathbf{R}$, respectively. It is convenient to use the unbiased autocorrelation estimates $\hat{r}_k$ to build $\hat{\mathbf{r}}$ and $\hat{\mathbf{R}}$, so that the bias of the approximation error is null. Thus, the method of least squares can be applied to find the vector which minimizes the sum of squared errors, that is, [2, 6]

$$S = \sum_{k=q+1}^{M} |e_k|^2 \quad (3.6)$$

or, in matrix form,

$$S = \mathbf{e}^T \mathbf{e}. \quad (3.7)$$

Substituting the error equation

$$\mathbf{e} = \hat{\mathbf{r}} + \hat{\mathbf{R}}\Phi \quad (3.8)$$

into (3.7), we obtain

$$S = \left(\hat{\mathbf{r}} + \hat{\mathbf{R}}\Phi\right)^T \left(\hat{\mathbf{r}} + \hat{\mathbf{R}}\Phi\right). \quad (3.9)$$

Expanding the products in (3.9) gives

$$S = \Phi^T \hat{\mathbf{R}}^T \hat{\mathbf{R}} \Phi + \hat{\mathbf{r}}^T \hat{\mathbf{R}} \Phi + \Phi^T \hat{\mathbf{R}}^T \hat{\mathbf{r}} + \hat{\mathbf{r}}^T \hat{\mathbf{r}}. \quad (3.10)$$

Setting the partial derivative $\partial S / \partial \Phi = 0$ in expression (3.10),

$$\frac{\partial S}{\partial \Phi} = 2\hat{\mathbf{R}}^T \hat{\mathbf{R}} \Phi + 2\hat{\mathbf{R}}^T \hat{\mathbf{r}} = 0, \quad (3.11)$$

we obtain the estimator $\hat{\Phi}$ of the AR parameters, which is given by

$$\hat{\Phi} = -\left(\hat{\mathbf{R}}^T \hat{\mathbf{R}}\right)^{-1} \hat{\mathbf{R}}^T \hat{\mathbf{r}}. \quad (3.12)$$

The matrix $\hat{\mathbf{R}}^T \hat{\mathbf{R}}$ is usually positive definite and Hermitian, but the estimator $\hat{\Phi}$ is not guaranteed to be minimum phase [2, 3]. The estimation of the AR parameters of the ARMA model through the LSYW method can be interpreted as an application of the covariance method to the set of values $\{\hat{r}_{q-p+1}, \hat{r}_{q-p+2}, \ldots, \hat{r}_M\}$ [2, 3]. Moreover, the MA parameters of the process can then be obtained by applying the method of Durbin [3]. The performance of the LSYW, with $p + q < M < N$, is usually better than that of the modified Yule-Walker equations method (MYWE), which uses only $p$ equations ($M = p + q$), but the results also depend on the kind of spectrum to be estimated [2, 3, 5].

3.2. The Method Using an AR Filter in the MA Estimates

The new method for separate estimation, proposed by [8], consists of performing a new AR filtering of the signal $x_t$ using the estimates of the MA parameters obtained with the Durbin method, thus determining a new signal $\hat{x}_t$, as shown in Figure 1 and according to the expression

$$\hat{x}_t = x_t - \hat{\theta}_1 \hat{x}_{t-1} - \cdots - \hat{\theta}_q \hat{x}_{t-q}. \quad (3.13)$$

Figure 1: Filtering of the signal through an AR filter so as to obtain a new estimate (input $x_t$, filter $1/Y(z)$, output $\hat{x}_t$).

In Figure 1, $H(z) = 1/Y(z)$ is the system transfer function, with $Y(z) = 1 + \hat{\theta}_1 z^{-1} + \cdots + \hat{\theta}_q z^{-q}$. This procedure is like repeating the second step of the Durbin method. This is the key idea of the separate estimation method [3, 6]. It is known that an AR($\infty$) process (of large order) can approximate the MA process [1-4]. In this method, the new signal estimate $\hat{x}_t$ is used to obtain a new AR estimate through the LSYW method, in order to subsequently determine a new MA estimate through the Durbin method, as illustrated in the sketch below.
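The following sketch assembles the expanded Yule-Walker system from sample autocorrelations, solves the least-squares problem (3.12) for the AR parameters, and applies the AR (all-pole) filtering of (3.13). It is our own illustrative code under the equations stated above, not the authors' implementation; the Durbin step for the MA estimates is only indicated, and zero initial conditions are assumed for the recursive filter.

```python
import numpy as np

def sample_autocorr(x, max_lag):
    """Unbiased sample autocorrelations r_0, ..., r_max_lag."""
    x = np.asarray(x) - np.mean(x)
    n = len(x)
    return np.array([np.sum(x[k:] * x[:n - k]) / (n - k) for k in range(max_lag + 1)])

def lsyw_ar(r, p, q, M):
    """Least-squares modified Yule-Walker AR estimate, equation (3.12).

    r : autocorrelations r_0, ..., r_M.  Returns phi_hat = [phi_1, ..., phi_p].
    """
    rr = lambda k: r[abs(k)]                      # symmetry r_{-k} = r_k
    # Build R ((M - q) x p) and r ((M - q) x 1) as in (3.3)-(3.4)
    R = np.array([[rr(q + row - col) for col in range(p)] for row in range(M - q)])
    rv = np.array([rr(k) for k in range(q + 1, M + 1)])
    # phi_hat = -(R^T R)^{-1} R^T r, solved here via least squares for stability
    phi_hat, *_ = np.linalg.lstsq(R, -rv, rcond=None)
    return phi_hat

def ar_filter(x, theta_hat):
    """All-pole filtering of x_t by 1/Y(z), equation (3.13), zero initial state."""
    q = len(theta_hat)
    x_new = np.zeros(len(x))
    for t in range(len(x)):
        acc = x[t]
        for i in range(1, q + 1):
            if t - i >= 0:
                acc -= theta_hat[i - 1] * x_new[t - i]
        x_new[t] = acc
    return x_new
# The filtered signal x_new would then be fed back into lsyw_ar and a Durbin
# step (not shown) to reestimate the AR and MA parameters, as in the LSYWS method.
```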
The key idea of the proposed method is to obtain a new signal, from the AR filtering, which provides better estimates when the parameters of the ARMA process are reestimated.

4. Bayesian Estimation

In a Bayesian framework, the inference is based on the posterior distribution of the parameters $\beta$, denoted by $p(\beta \mid x)$, which, in turn, is used for inferences and decisions involving $\beta$. The posterior distribution $p(\beta \mid x)$ is obtained from the combination of the information provided by a prior distribution $f(\beta)$ and the information supplied by the data through the likelihood $L(\beta \mid x)$. Thus, using Bayes' theorem, the posterior distribution is given by

$$p(\beta \mid x) \propto f(\beta)\, L(\beta \mid x). \quad (4.1)$$

The prior distribution represents the knowledge or uncertainty about the parameter $\beta$ before the experiment is run, and the posterior distribution describes the updated information about $\beta$ after the data $x$ are observed.

For an ARMA($p, q$) model, we need to estimate the parameters $\beta$ and $\sigma$, where $\beta = (\phi_1, \ldots, \phi_p, \theta_1, \ldots, \theta_q)$. The likelihood function proposed in [11] for the observations $X$ given the vector of parameters $(\beta, \sigma)$ can be written as

$$L(\beta, \sigma \mid X) \propto \frac{1}{\sigma^n} \exp\left\{-\frac{1}{2\sigma^2} \sum_{t=1}^{n} \left(x_t - \mu_t\right)^2\right\}, \quad (4.2)$$

where

$$\mu_1 = \sum_{i=1}^{p} \phi_i x_{1-i} + \sum_{i=1}^{q} \theta_i \varepsilon_{1-i},$$

$$\mu_t = \sum_{i=1}^{p} \phi_i x_{t-i} + \sum_{i=1}^{t-1} \theta_i \left(x_{t-i} - \mu_{t-i}\right) + \sum_{i=t}^{q} \theta_i \varepsilon_{t-i}, \qquad t = 2, \ldots, q,$$

$$\mu_t = \sum_{i=1}^{p} \phi_i x_{t-i} + \sum_{i=1}^{q} \theta_i \left(x_{t-i} - \mu_{t-i}\right), \qquad t = q + 1, \ldots, n. \quad (4.3)$$

The likelihood above combined with the prior distribution results in the posterior distribution given by

$$p(\beta, \sigma \mid x) \propto \frac{1}{\sigma^n} \exp\left\{-\frac{1}{2\sigma^2} \sum_{t=1}^{n} \varepsilon_t^2\right\} f(\beta), \quad (4.4)$$

where $\varepsilon_t = x_t - \mu_t$. The values of the variables $x_{i-p}$, $i = 1, \ldots, p$, and of the error terms $\varepsilon_{i-q}$, $i = 1, \ldots, q$, are arbitrarily fixed.

To proceed with the Bayesian analysis, it is necessary to specify a prior distribution over the parameter space. Different prior distributions can be used in our study according to the available information. If prior information on the parameters is unavailable, then the initial uncertainty about the parameters can be quantified with a noninformative prior distribution. This amounts to including in the analysis just the information provided by the data. Therefore, for the ARMA coefficients, we assume that little is known about these parameters, so that a uniform distribution can be used as a prior distribution. We also assume a priori that the components of $\beta$ are independent, and hence the overall prior for $\beta$ is the product of the priors of its components. Besides, a conventional noninformative prior distribution for the model variance $\sigma^2$ can be represented by

$$f\left(\sigma^2\right) \propto \frac{1}{\sigma^2}. \quad (4.5)$$

Therefore, the joint prior distribution of $\beta$ and $\sigma^2$ has the form

$$f\left(\beta, \sigma^2\right) \propto \frac{1}{\sigma^2}, \quad (4.6)$$

where $-\infty < \beta_i < \infty$ and $0 < \sigma^2 < \infty$.

Other prior specifications could also be considered, such as independent informative normal distributions for the components of $\beta$, that is, $\beta_j \sim N(\mu, \sigma_\beta^2)$, $j = 1, \ldots, k$, with mean $\mu$ and variance $\sigma_\beta^2$ specified a priori, and an informative prior such as the inverse gamma distribution for the parameter $\sigma^2$, that is, $\sigma^2 \sim IG(a_\sigma, b_\sigma)$, with hyperparameters $a_\sigma$ and $b_\sigma$ known. Thus, the joint prior for $\beta$ and $\sigma^2$ would be given by

$$f\left(\beta, \sigma^2\right) \propto f(\beta)\, f\left(\sigma^2\right). \quad (4.7)$$
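For reference, a minimal sketch of the log of the joint posterior (4.4) under the noninformative prior (4.6), with the conditional means computed recursively as in (4.3). Fixing the presample values $x_0, \ldots, x_{1-p}$ and $\varepsilon_0, \ldots, \varepsilon_{1-q}$ at zero is one possible choice of the "arbitrarily fixed" values mentioned above, and all function names are ours.

```python
import numpy as np

def conditional_means(x, phi, theta):
    """Recursive computation of mu_t as in (4.3), with the presample
    values x_0, ..., x_{1-p} and eps_0, ..., eps_{1-q} fixed at zero."""
    n, p, q = len(x), len(phi), len(theta)
    mu = np.zeros(n)
    eps = np.zeros(n)                      # eps_t = x_t - mu_t
    for t in range(n):
        m = 0.0
        for i in range(1, p + 1):          # AR part (presample x's are zero)
            if t - i >= 0:
                m += phi[i - 1] * x[t - i]
        for i in range(1, q + 1):          # MA part driven by past residuals
            if t - i >= 0:
                m += theta[i - 1] * eps[t - i]
        mu[t] = m
        eps[t] = x[t] - m
    return mu, eps

def log_posterior(x, phi, theta, sigma2):
    """Log of (4.4) with the noninformative prior f(beta, sigma^2) ~ 1/sigma^2."""
    n = len(x)
    _, eps = conditional_means(x, phi, theta)
    log_lik = -0.5 * n * np.log(sigma2) - 0.5 * np.sum(eps ** 2) / sigma2
    log_prior = -np.log(sigma2)            # flat prior on the ARMA coefficients
    return log_lik + log_prior
```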
To obtain the marginal posterior distribution of each parameter of the model, one needs to solve integrals involving the joint posterior density that are not analytically tractable and for which standard integral approximations can perform poorly. In this case, we make use of Markov chain Monte Carlo (MCMC) approaches to carry out the Bayesian posterior inference. Specifically, we run an algorithm that simulates a long chain of draws from the posterior distribution and base inferences on posterior summaries of the parameters, or of functionals of the parameters, calculated from the samples. MCMC is essentially Monte Carlo integration using Markov chains, and constructing such a Markov chain is not difficult.

We first describe the Metropolis-Hastings algorithm. This algorithm is due to [12] and is a generalization of the method first proposed by [13]. Let $g(\gamma)$ be the distribution of interest. Suppose that, at time $t$, $\gamma_{t+1}$ is chosen by first sampling a candidate point $\eta$ from a proposal distribution $q(\cdot \mid \gamma_t)$. The candidate $\eta$ is then accepted with probability

$$\alpha\left(\gamma_t, \eta\right) = \min\left(1, \frac{g(\eta)\, q\left(\gamma_t \mid \eta\right)}{g\left(\gamma_t\right)\, q\left(\eta \mid \gamma_t\right)}\right). \quad (4.8)$$

If the candidate point is accepted, the next state becomes $\gamma_{t+1} = \eta$. If it is rejected, then $\gamma_{t+1} = \gamma_t$ and the chain does not move. The proposal distribution is arbitrary and, provided the chain is irreducible and aperiodic, the equilibrium distribution of the chain will be $g(\gamma)$.

After obtaining a random sample from the MCMC algorithm for each component of $\beta$, it is important to investigate issues such as convergence and mixing, to determine whether the sample can reasonably be treated as a set of random realizations from the target posterior distribution. Looking at marginal trace plots, besides formal procedures, is the simplest way to examine the output. In this way, all the values of the chain have marginal distribution given by the equilibrium distribution. For more details of MCMC and the variety of ways to construct these chains, see, for example, [14-16].
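The acceptance rule (4.8) can be written compactly as follows. This is a generic random-walk sketch for an arbitrary (log) target density, provided only to make the algorithm concrete; it is not the authors' code, and the Gaussian random-walk proposal is our own choice.

```python
import numpy as np

def metropolis_hastings(log_target, gamma0, proposal_scale, n_iter, rng=None):
    """Generic random-walk Metropolis-Hastings sampler for a target g(gamma).

    With a symmetric Gaussian proposal, q(gamma_t | eta) = q(eta | gamma_t),
    so the ratio in (4.8) reduces to g(eta) / g(gamma_t).
    """
    rng = np.random.default_rng() if rng is None else rng
    gamma = np.atleast_1d(np.asarray(gamma0, dtype=float))
    chain = np.empty((n_iter, gamma.size))
    log_g = log_target(gamma)
    for t in range(n_iter):
        eta = gamma + proposal_scale * rng.standard_normal(gamma.size)  # candidate
        log_g_eta = log_target(eta)
        # accept with probability min(1, g(eta)/g(gamma_t)), computed in logs
        if np.log(rng.uniform()) < log_g_eta - log_g:
            gamma, log_g = eta, log_g_eta
        chain[t] = gamma
    return chain
```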
Table 1: Classical estimates of the AR and MA parameters for the ARMA(4,2) model (standard deviations in parentheses); the column headings are the true parameter values. N = 256.

                        AR parameters                              MA parameters
Method       −0.520     1.018      −0.255     0.240      −0.337     0.810
MYWE         −0.8394    0.8236     −0.2752    0.2145     −0.2712    0.6068
             (2.1903)   (1.8329)   (0.2346)   (0.2539)   (0.4057)   (0.3770)
LSYW         −0.4057    0.7724     −0.2002    0.2252     −0.3132    0.6867
             (0.2012)   (0.2699)   (0.1018)   (0.0809)   (0.3061)   (0.2234)
LSYWS        −0.5952    1.0085     −0.3210    0.2405     −0.3888    0.8040
             (0.4266)   (0.3257)   (0.2659)   (0.1406)   (0.3061)   (0.1698)
MLE          −0.3760    0.7283     −0.1764    0.2167     −0.1954    0.5418
             (0.4133)   (0.4455)   (0.1545)   (0.1163)   (0.4117)   (0.4204)

We now describe the MCMC implementation used in our time series framework. We generate a posterior Monte Carlo sample by simulating a Markov chain described as follows:

(i) choose starting values $\beta^{(0)} = (\beta_0^{(0)}, \beta_1^{(0)}, \ldots, \beta_k^{(0)})$ and $\sigma_0^2$; at step $i + 1$, we draw a new sample $(\beta_{i+1}, \sigma_{i+1}^2)$ conditional on the current sample $(\beta_i, \sigma_i^2)$ using the following proposal distributions and the usual Metropolis-Hastings acceptance rule;

(ii) sample a candidate for $\beta_{i+1}$, denoted by $\beta_{\text{prop}}$, from the multivariate normal distribution $N_k(\beta_i, V)$, where $V$ is a diagonal matrix;

(iii) the candidate $\beta_{\text{prop}}$ is accepted with a probability given by the Metropolis ratio

$$\alpha\left(\beta_i, \beta_{\text{prop}}\right) = \min\left\{1, \frac{p\left(\beta_{\text{prop}}, \sigma_i^2 \mid X\right)}{p\left(\beta_i, \sigma_i^2 \mid X\right)}\right\}; \quad (4.9)$$

(iv) sample a candidate for the new variance $\sigma_{i+1}^2$ from the inverse gamma distribution $IG\left(d, (d-1)\,\sigma_i^2\right)$;

(v) because this proposal is not symmetric, the acceptance probability is

$$\alpha\left(\sigma_i^2, \sigma_{\text{prop}}^2\right) = \min\left\{1, \frac{IG\left(\sigma_i^2\right)\, p\left(\beta_{i+1}, \sigma_{\text{prop}}^2 \mid X\right)}{IG\left(\sigma_{\text{prop}}^2\right)\, p\left(\beta_{i+1}, \sigma_i^2 \mid X\right)}\right\}. \quad (4.10)$$

The proposal distribution parameters $V$ and $d$ were chosen to obtain good mixing of the chains.
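A compact sketch of one sweep of the sampler described in steps (i)-(v), reusing the log-posterior sketched in Section 4. The multivariate normal and inverse gamma proposals follow the description above; the helper names, the use of scipy for the inverse gamma density, and the way the coefficient vector is split are our own assumptions.

```python
import numpy as np
from scipy.stats import invgamma

def mcmc_sweep(beta, sigma2, prop_sd, d, log_post, rng):
    """One iteration of the blocked Metropolis-Hastings scheme of steps (ii)-(v).

    beta     : current ARMA coefficients (phi_1..phi_p, theta_1..theta_q)
    sigma2   : current innovation variance
    prop_sd  : diagonal of V^{1/2}, random-walk proposal scales for beta
    d        : shape of the inverse gamma proposal IG(d, (d - 1) * sigma2)
    log_post : function (beta, sigma2) -> log posterior value
    """
    # (ii)-(iii): multivariate normal random walk on beta; the proposal is
    # symmetric, so the Metropolis ratio (4.9) uses posterior values only.
    beta_prop = beta + prop_sd * rng.standard_normal(len(beta))
    if np.log(rng.uniform()) < log_post(beta_prop, sigma2) - log_post(beta, sigma2):
        beta = beta_prop
    # (iv)-(v): inverse gamma proposal for sigma^2; it is not symmetric, so the
    # acceptance ratio (4.10) carries the proposal densities as a correction.
    s2_prop = invgamma.rvs(a=d, scale=(d - 1) * sigma2, random_state=rng)
    log_ratio = (log_post(beta, s2_prop) - log_post(beta, sigma2)
                 + invgamma.logpdf(sigma2, a=d, scale=(d - 1) * s2_prop)
                 - invgamma.logpdf(s2_prop, a=d, scale=(d - 1) * sigma2))
    if np.log(rng.uniform()) < log_ratio:
        sigma2 = s2_prop
    return beta, sigma2

# Example of wiring it to the log_posterior sketch from Section 4, assuming
# beta = (phi_1, ..., phi_p, theta_1, ..., theta_q):
# log_post = lambda b, s2: log_posterior(x, b[:p], b[p:], s2)
```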
Table 2: Bayesian estimates of the AR and MA parameters for the ARMA(4,2) model (standard deviations in parentheses); the column headings are the true parameter values. N = 256.

                              AR parameters                              MA parameters
Method             −0.520     1.018      −0.255     0.240      −0.337     0.810
Bayes non          −0.5953    1.1073     −0.2650    0.2148     −0.4052    0.8759
                   (0.0282)   (0.0291)   (0.0372)   (0.0386)   (0.0440)   (0.0469)
Empirical Bayes    −0.5950    1.1065     −0.2645    0.2142     −0.4050    0.8754
                   (0.0281)   (0.0290)   (0.0372)   (0.0389)   (0.0439)   (0.0425)

Table 3: Classical estimates of the AR parameters for the ARMA(4,4) model (standard deviations in parentheses); the column headings are the true parameter values. N = 256; M = 20.

Method       0.1        1.66       0.093      0.8649
MYWE         0.1361     1.6707     0.1254     0.8644
             (0.1688)   (0.1179)   (0.1179)   (0.0646)
LSYW         0.1054     1.6438     0.0978     0.8432
             (0.0299)   (0.0483)   (0.0323)   (0.0472)
LSYWS        0.1038     1.6537     0.0995     0.8522
             (0.0706)   (0.0528)   (0.0600)   (0.0555)
MLE          0.1001     1.6414     0.0942     0.8482
             (0.0275)   (0.0395)   (0.0363)   (0.0404)

5. Numerical Illustration

In this section, we describe the Monte Carlo simulations [17] used to obtain the estimates of the parameters and of the power spectral density for each of the ARMA processes cited below: (1) ARMA(4,2) [18] and (2) ARMA(4,4) [19]. The simulations and programs were implemented in the MATLAB and R softwares.

The methods used in the simulations are (1) the modified Yule-Walker equations (MYWE); (2) the least squares modified Yule-Walker equations (LSYW); (3) the least squares modified Yule-Walker equations with AR filtering (LSYWS, the proposed method); (4) the maximum likelihood estimator (MLE); (5) the Bayesian estimator considering the noninformative Jeffreys prior and independent normal priors for the parameters.

In order to evaluate the performance of the methods, the random data have been generated from the same seed, which changes with the sequences $i = 1, \ldots, B$, where $B$ is the number of replications in each simulation. We carried this out using $N = 256$ observations and $B = 30$ replications. The random numbers are generated from a standard Gaussian distribution (white noise) with mean zero and variance one, and a signal of the ARMA process is then generated. The modified covariance method was the criterion used in the Durbin method [3], the large AR process order was chosen using the criterion described by [20], and the number of equations, $M$, was chosen in accordance with the spectral characteristics to be analyzed. On the other hand, for the spectrum with poles far from the unit circle (UC), as in Example 5.1, which has one pole near and another far from the unit circle, a small number of equations $M$ was used.

Table 4: Classical estimates of the MA parameters for the ARMA(4,4) model (standard deviations in parentheses); the column headings are the true parameter values. N = 256; M = 20.

Method       0.0226     0.8175     0.0595     0.0764
MYWE         −0.0244    0.8848     −0.0020    0.1291
             (0.2514)   (0.2227)   (0.2106)   (0.2222)
LSYW         −0.0649    0.8739     −0.0137    0.0982
             (0.1701)   (0.2205)   (0.2112)   (0.2342)
LSYWS        −0.0681    0.8879     −0.0224    0.1070
             (0.1914)   (0.2236)   (0.2262)   (0.2403)
MLE          0.0241     0.8014     0.0653     0.0760
             (0.0594)   (0.0785)   (0.0721)   (0.1186)

Table 5: Bayesian estimates of the AR parameters for the ARMA(4,4) model (standard deviations in parentheses); the column headings are the true parameter values. N = 256.

Method             0.1        1.66       0.093      0.8649
Bayes non          0.0967     1.6029     0.0875     0.8102
                   (0.0151)   (0.0157)   (0.0187)   (0.0189)
Empirical Bayes    0.0824     1.5964     0.0739     0.8121
                   (0.0154)   (0.0206)   (0.0215)   (0.0231)

Table 6: Bayesian estimates of the MA parameters for the ARMA(4,4) model (standard deviations in parentheses); the column headings are the true parameter values. N = 256.

Method             0.0226     0.8175     0.0595     0.0764
Bayes non          0.0151     0.7556     0.0518     0.0525
                   (0.0424)   (0.0508)   (0.0478)   (0.0698)
Empirical Bayes    0.0077     0.7554     0.0517     0.0677
                   (0.0418)   (0.0604)   (0.0616)   (0.0613)

In the first example, the ARMA(4,2) model was used with $M = 10$ and $L = 85$ for $N = 256$, and in the second example, we used the ARMA(4,4) model with $M = 20$ and $L = 125$ for $N = 256$. In the figures, the dashed curve is the theoretical spectrum and the solid curve is the average of the estimates.

Example 5.1. The ARMA(4,2) model presented in [18] is given by

$$x_t - 0.52 x_{t-1} + 1.018 x_{t-2} - 0.255 x_{t-3} + 0.24 x_{t-4} = \varepsilon_t - 0.337 \varepsilon_{t-1} + 0.810 \varepsilon_{t-2}. \quad (5.1)$$

Example 5.2. The ARMA(4,4) model presented in [19] is given by

$$x_t + 0.1 x_{t-1} + 1.66 x_{t-2} + 0.093 x_{t-3} + 0.8649 x_{t-4} = \varepsilon_t + 0.0226 \varepsilon_{t-1} + 0.8175 \varepsilon_{t-2} + 0.0595 \varepsilon_{t-3} + 0.0764 \varepsilon_{t-4}. \quad (5.2)$$

5.1. Discussion

Firstly, we consider a comparative analysis among the classical methods applied to the ARMA(4,2) model fitted to the data of Example 5.1. Table 1 shows that the LSYWS method provided better average parameter estimates than the other methods, although with a slightly larger variability than the LSYW method for the AR part. The LSYW method produces intermediate estimates, but with a performance superior to that of the MYWE and MLE methods. These facts can also be observed through the average estimates of the power spectra in Figures 2 and 3, which show that the best estimate is indeed provided by the LSYWS method proposed by [8].

Figure 2: Spectral density—ARMA(4,2)—classical methods: (a) MYWE; (b) LSYW.

Comparing the classical methods with the Bayesian methods studied in this paper, Table 2 shows that the parameter estimates from both Bayesian methods are similar to those of the LSYWS method, but with a smaller variability (see Figure 4). By using the noninformative Jeffreys prior, we obtained a slightly better estimate than with the empirical prior, most likely because the prior mean given by the MLE does not produce such good results.

Tables 3 and 4 present the results of the estimation methods MYWE, LSYW, LSYWS, and MLE applied to the ARMA(4,4) model from Example 5.2 (see Figures 5 and 6). In this case, the average estimates of the parameters were approximately equal for all the considered methods. Only the standard deviations of the mean estimates showed a difference with respect to the Bayesian standard deviations (see Tables 5 and 6). The standard deviation obtained with the Jeffreys prior had a lower value than those of the other methods, being only slightly smaller than the standard deviation obtained by the MLE method.
Figure 3: Spectral density—ARMA(4,2)—classical methods: (a) LSYWS; (b) MLE.

Figure 4: Spectral density—ARMA(4,2)—Bayesian methods: (a) noninformative Bayes; (b) empirical Bayes.

Figure 5: Spectral density—ARMA(4,4)—classical methods: (a) LSYWS; (b) MLE.

Figure 6: Spectral density—ARMA(4,4)—classical methods: (a) MYWE; (b) LSYW.

Figure 7: Spectral density—ARMA(4,4)—Bayesian methods: (a) noninformative Bayes; (b) empirical Bayes.

(Each figure panel shows the spectral density in dB versus frequency.)

This analysis can also be confirmed by the plots of the power spectrum estimates (see Figures 6 and 7). In this case, the similarity may be due to the fact that the ARMA(4,4) model has a stable power spectrum and more parameters to be estimated than the previous case.

6. Conclusions

In this paper, we describe a Bayesian approach using the noninformative prior distribution proposed by Jeffreys [10] to estimate the spectral density of the ARMA model. This Bayesian approach is compared with the non-Bayesian LSYWS and LSYW methods, both based on the Yule-Walker equations and the least-squares method. Other methods proposed in the literature, such as the MLE and the MYWE (modified Yule-Walker equations), are also compared. Two ARMA models with different orders were introduced to illustrate the proposed methodologies and to examine their performances. The study showed that the Bayesian approaches produced more accurate estimates than the other approaches for the ARMA(4,2) model, while similar results were found for the ARMA(4,4) model. We therefore conclude that the Bayesian approach is justified for less stable ARMA models, whereas for more stable power spectra any of the procedures could be chosen. The Bayesian approach also provides a better fit of the spectrum than the other approaches for any order of the ARMA model. Finally, the choice of approach becomes irrelevant for a moderately large sample size. In addition, generally speaking, it was seen that the results depend on the spectrum to be estimated.

Acknowledgments

This work is supported by Capes, RH-TVD, CNPq, Faepex/Unicamp, and FUNDUNESP.

References

[1] G. E. P. Box, G. M. Jenkins, and G. C. Reinsel, Time Series Analysis: Forecasting and Control, Prentice Hall, Englewood Cliffs, NJ, USA, 3rd edition, 1994.
[2] S. L. Marple Jr., Digital Spectral Analysis with Applications, Prentice Hall Signal Processing Series, Prentice Hall, Englewood Cliffs, NJ, USA, 1987.
[3] S. M. Kay, Modern Spectral Estimation: Theory and Applications, Prentice-Hall, Englewood Cliffs, NJ, USA, 1988.
[4] C. W. Therrien, Discrete Random Signals and Statistical Signal Processing, Prentice-Hall, Englewood Cliffs, NJ, USA, 1992.
[5] J. A. Cadzow, "High performance spectral estimation—a new ARMA method," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 28, no. 5, pp. 524–529, 1980.
[6] B. Friedlander and B. Porat, "The modified Yule-Walker method of ARMA spectral estimation," IEEE Transactions on Aerospace and Electronic Systems, vol. 20, no. 2, pp. 158–173, 1984.
[7] M. I. S. Bezerra, "Precision of Yule-Walker methods for the ARMA spectral model," in Proceedings of the IASTED Conference on Circuits, Signals, and Systems, pp. 54–59, Clearwater Beach, Fla, USA, 2004.
[8] M. I. S. Bezerra, Y. Iano, and M. H. Tarumoto, "Evaluating some Yule-Walker methods with the maximum-likelihood estimator for the spectral ARMA model," Tendências em Matemática Aplicada e Computacional, vol. 9, no. 2, pp. 175–184, 2008.
[9] B. L. Jackson, "Frequency-domain Steiglitz-McBride method for least-squares IIR filter design, ARMA modeling, and periodogram smoothing," IEEE Signal Processing Letters, vol. 15, pp. 49–52, 2008.
[10] H. Jeffreys, Theory of Probability, Oxford University Press, London, UK, 3rd edition, 1967.
[11] J. M. Marriot, N. M. Spencer, and A. N. Pettit, "Bayesian approach to selecting covariates for prediction," Tech. Rep., Trent University, 1996.
[12] W. K. Hastings, "Monte Carlo sampling methods using Markov chains and their applications," Biometrika, vol. 57, pp. 97–109, 1970.
[13] N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller, "Equation of state calculations by fast computing machines," Journal of Chemical Physics, vol. 21, pp. 1087–1091, 1953.
[14] A. F. M. Smith and G. O. Roberts, "Bayesian computation via the Gibbs sampler and related Markov chain Monte Carlo methods," Journal of the Royal Statistical Society, Series B, vol. 55, no. 1, pp. 3–23, 1993.
[15] A. E. Gelfand and A. F. M. Smith, "Sampling-based approaches to calculating marginal densities," Journal of the American Statistical Association, vol. 85, no. 410, pp. 398–409, 1990.
[16] W. R. Gilks, S. Richardson, and D. J. Spiegelhalter, Markov Chain Monte Carlo in Practice, Chapman & Hall, London, UK, 1996.
[17] D. N. Politis, "Computer-intensive methods in statistical analysis," IEEE Signal Processing Magazine, vol. 15, pp. 39–55, 1998.
[18] A. S. W. Batista, Métodos de estimação dos parâmetros dos modelos ARMA para análise espectral, M.S. thesis in Electrical Engineering, FEE, UNICAMP, 1992.
[19] R. L. Moses, V. Simonyte, P. Stoica, and T. Söderström, "An efficient linear method for ARMA spectral estimation," International Journal of Control, vol. 59, no. 2, pp. 337–356, 1994.
[20] P. M. T. Broersen, "Finite sample criteria for autoregressive order selection," IEEE Transactions on Signal Processing, vol. 48, no. 12, pp. 3550–3558, 2000.