
## Florence Merlevède, Magda Peligrad, and Sergey Utev

Print publication date: 2019

Print ISBN-13: 9780198826941

Published to Oxford Scholarship Online: April 2019

DOI: 10.1093/oso/9780198826941.001.0001



# Linear Processes

Chapter:
12 Linear Processes
Source:
Functional Gaussian Approximation for Dependent Structures
Publisher:
Oxford University Press
DOI:10.1093/oso/9780198826941.003.0012

# Abstract and Keywords

Here we apply different methods to establish the Gaussian approximation to linear statistics of a stationary sequence, including stationary linear processes, near-stationary processes, and discrete Fourier transforms of a strictly stationary process. More precisely, we analyze the asymptotic behavior of the partial sums associated with a short-memory linear process and prove, in particular, that if a weak limit theorem holds for the partial sums of the innovations then a related result holds for the partial sums of the linear process itself. We then move to linear processes with long memory and obtain the CLT under various dependence structures for the innovations by analyzing the asymptotic behavior of linear statistics. We also deal with the invariance principle for causal linear processes or for linear statistics with weakly associated innovations. The last section deals with discrete Fourier transforms, proving, via martingale approximation, central limit behavior at almost all frequencies under almost no condition except a regularity assumption.

In this chapter, we apply different methods to establish the Gaussian approximation to linear statistics of a stationary sequence {ξ‎i}, called the innovation sequence. Define

$Display mathematics$

1. (i) We first treat stationary linear processes, when $a_{ik} = a_{i-k}$.

2. (ii) We then move to near-stationary processes, when the sums $b_{i,n}=\sum_{k=1}^{n}a_{ik}$ do not vary too much with respect to i (more precisely, when they satisfy the near linearity property (12.9)).

3. (iii) Finally, we treat special sums; namely, discrete Fourier transforms that are of the form

$Display mathematics$

# 12.1 Linear Processes with Short Memory

In this section we shall discuss when limiting properties of stationary processes are preserved under infinite linear transforms.

Let $(ξi)i∈Z$ be a strictly stationary sequence with $E(|ξ0|)<∞$ and $E(ξ0)=0$ and let $I$ be its invariant σ‎–field. Define

$Display mathematics$
(12.1)

The following notation is useful in various parts of the section:

$Display mathematics$

We begin with a representation of Sn in terms of partial sums and illustrate its applications to the CLT and FCLT.

We first notice that by interchanging finite and infinite sum

$Display mathematics$

where

$Display mathematics$

Since for each finite j, $S_{n,j}(\xi)$ and $S_n(\xi)$ are identically distributed, this suggests the following intuitive approximation

$Display mathematics$

To make this approximation rigorous we shall use the following simple lemma:

Lemma 12.1 Let $∑i∈Z|ai|<∞$ . Assume that for a sequence of positive constants bn the following conditions hold:

$Display mathematics$

Then,

$Display mathematics$

Remark 12.2 This lemma shows that if a weak limit theorem holds for the innovations, i.e. if $S_n(\xi)/b_n\to_d L$, then a related result holds for the partial sums of the linear process with the same normalization, i.e. $S_n(X)/b_n\to_d AL$.

Proof Notice that

$Display mathematics$

and thus, by the triangle inequality,

$Display mathematics$

It remains to notice that by stationarity

$Display mathematics$

The result follows by letting $n→∞$ followed by $M→∞.$
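Lemma 12.1 is easy to see in action numerically. The sketch below simulates a short-memory linear process and checks pathwise that $S_n(X)-AS_n(\xi)$ is of smaller order than $b_n=\sqrt{n}$; the two-sided filter $a_j=2^{-|j|}$ and the Gaussian innovations are our own illustrative choices, not taken from the text.

```python
# Numerical sketch of Lemma 12.1 (illustration only; the filter a_j = 2^{-|j|}
# and the i.i.d. Gaussian innovations are our own choices).
import math, random

random.seed(0)
K, n = 40, 10000                       # filter truncation and sample size (arbitrary)
a = {j: 0.5 ** abs(j) for j in range(-K, K + 1)}
A = sum(a.values())                    # A = sum_j a_j (close to 3 for this filter)

xi = [random.gauss(0.0, 1.0) for _ in range(n + 2 * K + 1)]
xi_at = lambda t: xi[t + K]            # xi_t for t = -K, ..., n + K

# X_i = sum_j a_j xi_{i-j}; S_n(X) and S_n(xi) are the partial sums
S_X = sum(sum(a[j] * xi_at(i - j) for j in a) for i in range(1, n + 1))
S_xi = sum(xi_at(i) for i in range(1, n + 1))

# Lemma 12.1: (S_n(X) - A * S_n(xi)) / b_n -> 0 in probability, with b_n = sqrt(n)
ratio = abs(S_X - A * S_xi) / math.sqrt(n)
print(ratio)
```

For a run like this the printed ratio is far below the O(1) fluctuations of $S_n(\xi)/\sqrt{n}$, in line with the lemma.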

By using the same approach, our next proposition allows us to compare the maximum of the partial sums of the innovations with the maximum of the partial sums of the linear process with short memory (i.e. $\sum_i|a_i|<\infty$).

Proposition 12.3 Let $∑i∈Z|ai|<∞$ . Assume the representation (12.1) is satisfied and in addition, there is a constant C > 0 and a sequence of positive reals $bn→∞$ such that for all n,

$Display mathematics$
(12.2)

and

$Display mathematics$
(12.3)

Then,

$Display mathematics$
(12.4)

If the innovations are assumed to be in $L^p$, p ≥ 1, (12.2) is replaced by $E\max_{1\le j\le n}|S_j(\xi)|^p\le Cb_n^p$ and the convergence in (12.3) holds in $L^p$, then

$Display mathematics$

Proof The proof is similar to that of Lemma 12.1. Note that, for any positive integer M less than n,

$Display mathematics$

For M fixed, the second term II divided by b_n converges to 0 in probability as $n\to\infty$ by condition (12.3).

Next, by Markov's inequality and stationarity, for any ε > 0,

$Display mathematics$

which converges to 0 when $M→∞.$ So the first part of the proposition follows by letting first $n→∞$ followed by $M→∞.$ The proof of the second part of the proposition is done by using similar arguments.

Theorem 12.4 Let $\sum_{i\in Z}|a_i|<\infty$. Assume that representation (12.1) and condition (12.2) are satisfied. Moreover, assume that the innovations satisfy the invariance principle $\{b_n^{-1}S_{[nt]}(\xi), t\in[0,1]\}\to_d \eta W$ in D([0, 1]) as $n\to\infty$, where η is $I$-measurable and W is a standard Brownian motion on [0, 1] independent of $I$. Then the linear process also satisfies the invariance principle, i.e. $\{b_n^{-1}S_{[nt]}(X), t\in[0,1]\}\to_d \eta AW$ in D([0, 1]) as $n\to\infty$.

Proof Notice that the convergence in probability in (12.3) follows from the invariance principle $\{b_n^{-1}S_{[nt]}(\xi), t\in[0,1]\}\to_d \eta W$ in D([0, 1]) as $n\to\infty$, since the modulus of continuity converges to 0 in probability. All the conditions of Proposition 12.3 are then satisfied, which implies the conclusion of the theorem.

From Theorem 12.4 we easily derive the following useful consequence.

Corollary 12.5 ( $Lp$ –invariance principle.) Let $∑i∈Z|ai|<∞$ . Assume the representation (12.1) holds and p ≥ 1. Then,

$Display mathematics$

Discussion

Theorem 12.4 and its Corollary 12.5 work for many dependent structures, such as those surveyed in Peligrad (1986), Doukhan (1994), Bradley (2007) and Merlevède, Peligrad and Utev (2006). Various invariance principles can be extended from the original sequence to the linear process with short memory. Here we mention some traditional and also some recently developed dependence conditions on innovations whose partial sums satisfy both a maximal inequality and the invariance principle, so that Theorem 12.4 and its Corollary 12.5 apply.

For instance, let us assume that $(\xi_i)_{i\in Z}$ is a stationary ergodic sequence with $E(\xi_0^2)<\infty$ and $E(\xi_0)=0$ and let $F_k=\sigma(\xi_i, i\le k)$. For all the structures below, the family $(\max_{1\le k\le n}(S_k(\xi))^2/n)_{n\ge1}$ is uniformly integrable and the conclusion of Corollary 12.5 holds with $b_n=\sqrt{n}$ and with p = 2. Moreover, since $(\xi_i)_{i\in Z}$ is assumed to be ergodic, there is a non-negative constant σ such that η = σ.

1. (i) Hannan (1979) (see also its extension to Hilbert space in Dedecker and Merlevède (2003)):

$Display mathematics$

where $Pk(X)=E(X|Fk)−E(X|Fk−1)$ is the projection operator (see Theorem 4.17).

2. (ii) Newman and Wright (1981): $(ξi)i∈Z$ is an associated sequence (i.e. such that any two coordinatewise non-decreasing functions of any finite subcollection of the ξ‎i’s (of finite variance) are non-negatively correlated) which satisfies in addition

$Display mathematics$

3. (iii) Doukhan, Massart and Rio (1994):

$Display mathematics$

where Q denotes the càdlàg inverse of the function $t→P(|ξ0|>t)$ and (α‎(k))k≥0 is the sequence of strong mixing coefficients associated with $(ξi)i∈Z$ (see Theorem 6.39).

4. (iv) Dedecker and Rio (2000):

$Display mathematics$

(See Theorem 4.18).

5. (v) Peligrad and Utev (2005), by developing Maxwell and Woodroofe (2000),

$Display mathematics$

(See Theorem 4.16).

6. (vi) Peligrad, Utev and Wu (2007), which guarantees the $Lp$ -invariance principle, p ≥ 2.

(p.350)

$Display mathematics$

Comments

1. (a) If $E(\xi_0^2)<\infty$ and $b_n\ge\sqrt{n}$ then condition (12.3) automatically holds.

2. (b) The set of indices $Z$ can be replaced by $Z^d$, where d is a positive integer, allowing for the treatment of random fields.

3. (c) A natural extension is to consider innovations with values in functional spaces, which also facilitates the study of estimation and forecasting problems for several classes of continuous-time processes (see Bosq (2000)). The linear processes are still defined by formula (12.1), with the difference that now the innovations $(\xi_k)_{k\in Z}$ take values in a separable Hilbert space H and the sequence of constants is replaced by a sequence $\{a_k\}_{k\in Z}$ of bounded linear operators from H to H. Merlevède, Peligrad and Utev (1997) treated the problem of the central limit theorem for this case under the summability condition

$Display mathematics$

where $\|a_j\|_{L(H)}$ denotes the usual operator norm. It was discovered that, if this condition is not satisfied, then the central limit theorem fails even in the case of independent innovations. The approach developed in that paper shows that the central limit theorem results stated there can be strengthened to the invariance principle (some results in this direction for strongly mixing sequences are established in Merlevède (2003)).

4. (d) In all the examples (i)–(iv) the variance of partial sums is linear in n.

# 12.2 Functional CLT using Coboundary Decomposition

Here we shall establish a functional CLT by using the coboundary decomposition.

Lemma 12.6 Assume that the sequence of innovations $(ξj)j∈Z$ is strictly stationary, centered with finite second moment and has a bounded spectral density. Assume in addition that the sequence of constants (ak)k≥0 satisfies the following two conditions

$Display mathematics$

For any $k∈Z$ , define the causal linear process

$Display mathematics$

Then there is a stationary sequence $(\nu_k)_k$ of square-integrable real-valued random variables such that, for any $\ell\in Z$, the following coboundary decomposition holds:

$Display mathematics$
(12.5)

Proof Let us introduce the stationary sequence

$Display mathematics$

This sequence is well defined in $L^2$ under the conditions of the lemma. With this definition of ν, it is immediate that (12.5) is satisfied.

As an immediate consequence we obtain:

Theorem 12.7 Assume that the sequences of innovations and of constants satisfy the conditions of Lemma 12.6. Moreover, assume that the innovations satisfy the invariance principle $\{b_n^{-1}S_{[nt]}(\xi), t\in[0,1]\}\to_d \eta W$ in D([0, 1]) as $n\to\infty$, where η is $I$-measurable and W is a standard Brownian motion on [0, 1] independent of $I$. Suppose also that

$Display mathematics$

Then the linear process also satisfies the invariance principle, i.e. ${bn−1S[nt](X),t∈[0,1]}→dηAW$ in D([0, 1]) as $n→∞$ .

Proof The functional CLT follows from the corresponding one for the innovations by noticing that, for any t ∈ [0, 1],

$Display mathematics$

# 12.3 Toward Linear Processes with Long Memory

We consider in this section linear processes $X_j=\sum_{k\in Z}a_{j-k}\xi_k$. In many situations the process $\{X_j, j\in Z\}$ is well defined under weaker conditions than in (12.1). For instance, when the innovations form an i.i.d. sequence of centered, square-integrable random variables, the necessary and sufficient condition for the existence of X_0 in the almost sure sense is

$Display mathematics$
(12.6)

(implied by $∑|ai|<∞$ ). So, whenever X0 is well defined we can represent the partial sums as a linear statistic


$Display mathematics$

where $b_{k,n} := a_{1-k} + \cdots + a_{n-k}$.

It should be noted that in this case the variance of the partial sums might not be asymptotically linear in n. As a matter of fact, if the innovations are i.i.d. centered with finite second moment, the variance of S_n can range from a constant to close to n². When the variance is not asymptotically linear in n we shall refer to the case as one of long memory.
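The identity $S_n=\sum_{k}b_{k,n}\xi_k$ with $b_{k,n}=a_{1-k}+\cdots+a_{n-k}$ can be checked numerically; the sketch below does so for a causal filter with four nonzero coefficients and finitely supported innovations (both our own illustrative choices, so that all the sums are finite).

```python
# Sketch checking S_n = sum_j X_j = sum_k b_{k,n} xi_k with
# b_{k,n} = a_{1-k} + ... + a_{n-k}; filter and innovations are illustrative.
import random

random.seed(1)
m, n, K = 3, 10, 30
a = [1.0, 0.5, 0.25, 0.125]            # causal filter a_0..a_m, zero elsewhere
a_at = lambda t: a[t] if 0 <= t <= m else 0.0

xi = {k: random.uniform(-1, 1) for k in range(-K, n + 1)}   # finitely many innovations

# left-hand side: S_n = X_1 + ... + X_n with X_j = sum_k a_{j-k} xi_k
S_n = sum(sum(a_at(j - k) * xi[k] for k in xi) for j in range(1, n + 1))

# right-hand side: the linear statistic sum_k b_{k,n} xi_k
b = {k: sum(a_at(i - k) for i in range(1, n + 1)) for k in xi}
S_lin = sum(b[k] * xi[k] for k in xi)

diff = abs(S_n - S_lin)
print(diff)    # zero up to rounding
```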

We shall first obtain the central limit theorem for linear statistics under various dependence structures; this will give the CLT for the classes of linear processes satisfying condition (12.6). The invariance principle is much more delicate in the long-memory case. The discussion of the invariance principle will follow the section on the central limit theorem.

## 12.3.1 CLT for Linear Statistics with Dependent Innovations via Martingale Approximation

Let $(ξi)i∈Z$ be a strictly stationary sequence of centered real-valued random variables with finite second moment. The aim of this section is to study the asymptotic behavior of linear statistics of the type

$Display mathematics$
(12.7)

where ${bk,n,k∈Z}$ is a triangular array of numerical constants satisfying $∑k∈Zbk,n2<∞$ for any n. Note that the linear statistic Sn is properly defined if and only if the stationary sequence of innovations $(ξi)i∈Z$ has a bounded spectral density.

The main result of this section is the following:

Theorem 12.8 Let $(ξi)i∈Z$ be a strictly stationary and ergodic sequence of real-valued random variables with finite second moment, centered at expectations, adapted to a stationary filtration $(Fi)i∈Z$ and satisfying

$Display mathematics$
(12.8)

Let ${bk,n,k∈Z}$ be a triangular array of numerical constants such that

$Display mathematics$
(12.9)

Let S_n be defined by (12.7). Then $(\xi_i)_{i\in Z}$ has a continuous spectral density f,

$Display mathematics$
(12.10)

where N is a standard normal variable.

Comment 12.9 This result is due to Peligrad and Utev (2006). We note that if we do not assume the sequence $(ξi)i∈Z$ to be ergodic then it can be proven that there is a non-negative random variable η‎2 measurable with respect to $I$ (the invariant σ‎-field) such that $n−1E((∑k=1nξk)2|F0)→η2$ in $L1$ as $n→∞$ and $E(η2)=2πf(0)$ . In this situation, the second part of (12.10) has to be modified as follows: $bn−1Sn→dηN$ as $n→∞$ , where N is a standard normal variable independent of η‎. For a complete proof of this comment, we refer the reader to the proof of Theorem 1 in Peligrad and Utev (2006).

As a corollary of Theorem 12.8, we can derive the following central limit theorem for the partial sums associated with a linear process ${Xk,k∈Z}$ defined by

$Display mathematics$
(12.11)

when $∑j∈Zaj2<∞$ and $(ξi)i∈Z$ satisfies the conditions of Theorem 12.8.

Corollary 12.10 Let $(ξi)i∈Z$ be as in Theorem 12.8 and $(ak)k∈Z$ be a sequence of real numbers such that $∑j∈Zaj2<∞$ . Let

$Display mathematics$
(12.12)

and assume that $bn2→∞$ as $n→∞.$ Let ${Xk}k∈Z$ be defined by (12.11). Then

$Display mathematics$

where N is a standard normal variable and f is the spectral density of $(ξi)i∈Z$ .

Indeed, recall the representation

$Display mathematics$

where $b_{k,n} := a_{1-k} + \cdots + a_{n-k}$. Observe that, by the Cauchy–Schwarz inequality,

$Display mathematics$

and so the series

$Display mathematics$

which is certainly finite if $∑i∈Zai2<∞$ . To end the proof of the corollary, observe that

$Display mathematics$

Comment 12.11 Condition (12.8) is satisfied under various dependence conditions. We mention here only that it holds if $(\xi_i)_{i\in Z}$ satisfies Hannan's condition (4.29), namely:

$Display mathematics$

where we recall that $P0(⋅)=E(⋅|F0)−E(⋅|F−1)$ . Indeed, since $E(ξ0|F−∞)=0$ a.s., we have the following representation:

$Display mathematics$

By stationarity, $\|P_n(\xi_0)\|_2 = \|P_{n+k}(\xi_k)\|_2$ for any $k\in Z$. Next, $P_i(\xi_0)$ and $P_j(\xi_k)$ are uncorrelated for $i\ne j$, implying that

$Display mathematics$

As a consequence

$Display mathematics$

Therefore,

$Display mathematics$

Whence, under Hannan’s condition, we derive that $limj→∞Γj=0$ , proving the validity of condition (12.8). We refer to the paper by Peligrad and Utev (2006) for other dependence conditions implying condition (12.8).
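For a concrete instance, AR(1) innovations $\xi_k=\rho\xi_{k-1}+\varepsilon_k$ with i.i.d. unit-variance $\varepsilon_k$ satisfy Hannan's condition (here $\|P_0(\xi_k)\|_2=\rho^k$, which is summable), and for them $2\pi f(0)=(1-\rho)^{-2}$. The sketch below (our own illustration, not from the text) checks the normalization $\mathrm{Var}(S_n)/n\to 2\pi f(0)$ directly from the autocovariances.

```python
# AR(1) check of the normalization in (12.10): Var(S_n)/n -> 2*pi*f(0).
rho = 0.6
gamma = lambda k: rho ** abs(k) / (1 - rho ** 2)   # AR(1) autocovariance, unit-variance noise

def var_Sn_over_n(n):
    # Var(S_n)/n = gamma(0) + 2 * sum_{k=1}^{n-1} (1 - k/n) * gamma(k)
    return gamma(0) + 2 * sum((1 - k / n) * gamma(k) for k in range(1, n))

limit = 1 / (1 - rho) ** 2                         # 2*pi*f(0) for this model
print(var_Sn_over_n(5000), limit)                  # the two values nearly coincide
```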

Proof of Theorem 12.8 Since by assumption $\sum_{k=0}^{\infty}|E[\xi_k\xi_0]|=\Gamma_0<\infty$, by Lemma 1.5, $(\xi_i)_{i\in Z}$ has a continuous spectral density and the first part of (12.10) holds. The proof of the second part of (12.10) will be divided into two steps. In step 1, we prove that, under the additional assumption that $(\xi_i)_{i\in Z}$ forms a sequence of martingale differences, the conclusion of Theorem 12.8 is satisfied. In the second step, with the help of step 1, we use a martingale approximation to show that Theorem 12.8 holds in its full generality under condition (12.8).

Step 1. In this step, we shall assume that $(ξi)i∈Z$ is a strictly stationary and ergodic sequence of martingale differences with finite second moment, and prove that, in this case $Sn→dN(0,E(ξ02))$ .

We shall apply Corollary 2.32 to the triangular array of martingale differences $(dn,j)j∈Z,n≥1$ where, for any $j∈Z$ and any n ≥ 1,

$Display mathematics$

Note first that, for any positive integer n, $bn−2∑j∈ZE(bj,nξj)2=E(ξ02)$ . The conclusion of the theorem will then follow from Corollary 2.32 if one can prove that the triangular array ${dn,j,j∈Z}n≥1$ satisfies the conditions (2.44) and (2.45).

To verify (2.44), that is: $bn−1E(supk∈Z|bk,nξk|)→0$ as $n→∞$ , it is enough to check the Lindeberg condition: For any ε‎ > 0,

$Display mathematics$
(12.13)

But, by stationarity, for any ε‎ > 0,

$Display mathematics$

We then conclude that (12.13) will hold if $bn−1maxj∈Z|bj,n|→0$ . This latter condition holds under condition (12.9) as stated in the following lemma about sequences, whose proof will be given at the end of the proof of the theorem.

Lemma 12.12 Under (12.9), $limn→∞bn−1maxk∈Z|bk,n|=0$ .

It remains to show that the triangular array ${dn,j,j∈Z}n≥1$ satisfies condition (2.45). So we shall show in what follows that the following convergence holds:

$Display mathematics$
(12.14)

The approach to this proof is to make blocks of variables and replace each variable in a block by the average of the variables forming the block. The following lemma, stated in Hilbert-space language, is convenient.

Lemma 12.13 Let $\ell^2$ be the Hilbert space of double sequences $x=\{x_j\}_{j\in Z}$ with the norm $\|x\|_2^2=\sum_j|x_j|^2$. Set also $\|x\|_1=\sum_j|x_j|$. Let the translation operator be denoted by $Tx(j)=x_{j+1}$. Given a fixed positive integer p, denote by $I_k$ the set of integers $I_k=\{p(k-1)+1,\dots,kp\}$ and associate the sequence $\{(A_px)_j\}_{j\in Z}$ based on the average of the terms in a block:

$Display mathematics$

In addition, define $(x^2)_k=x_k^2$. Then, for any positive integers j and p,

$Display mathematics$

In particular, assume that we have a sequence of elements $x(n)\in\ell^2$ such that $\|x(n)\|_2=1$ and $\|x(n)-Tx(n)\|_2\to0$ as $n\to\infty$. Then, for any positive integers j and p,

$Display mathematics$

as $n→∞$ .

Proof The proof requires easy algebraic manipulations and is left to the reader.

With the help of this lemma, let us prove (12.14). Fix a positive integer p and make small blocks of normalized sums of consecutive random variables. Define

$Display mathematics$

and decompose the sum in (12.14) in the following way

$Display mathematics$

Notice first that $Σkptk,n=bn2$ and as a consequence, by stationarity and the $L1$ ergodic theorem (see Theorem 1.3), the following convergence holds uniformly in n

$Display mathematics$

On the other hand,

$Display mathematics$

by Lemma 12.13. This ends the proof of (12.14) and then of Step 1.

Step 2. We show now that Theorem 12.8 holds in its full generality. So, we assume from now on that $(ξi)i∈Z$ is a strictly stationary and ergodic sequence of centered real-valued random variables with finite second moment that satisfies condition (12.8).

This step is based on a blocking procedure and then on an approximation of the sums of the variables in blocks by martingale differences. As before, let p be a fixed positive integer and denote $I_k=\{(k-1)p+1,\dots,kp\}$. The $I_k$'s are thus blocks of consecutive integers of size p and $Z=\cup_{k=-\infty}^{\infty}I_k$. Let

$Display mathematics$

We start with the following decomposition

$Display mathematics$

We shall show first that $B_{n,2}$ is negligible for the convergence in distribution. As noticed at the beginning of the proof, $(\xi_i)_{i\in Z}$ has a continuous spectral density, and by the second inequality in part (i) of Lemma 1.5, the variance of $B_{n,2}$ is bounded by

$Display mathematics$

On the other hand, notice that, by Lemma 12.13,

$Display mathematics$

which proves that

$Display mathematics$

To analyze $B_{n,1}$ we denote the weighted sum in a block of size p by

$Display mathematics$

Then $Y_k(p)$ is $G_k$-measurable. Define

$Display mathematics$

Obviously $V_k(p)$ is a stationary sequence of martingale differences and $Y_k(p)=Z_k(p)+V_k(p)$. It follows that $B_{n,1}$ can be decomposed into a linear process with stationary martingale-difference innovations and another one involving $Z_k(p)$.

We shall show first that the term involving $Zk(p)$ is negligible for the convergence in distribution in the sense that

$Display mathematics$
(12.15)

Let $δn,k=pck,n$ . Observe that $δn2:=∑k∈Zδn,k2≤1$ . In addition,

$Display mathematics$

and, by Lemma 12.13 and the construction,

$Display mathematics$

Hence,

$Display mathematics$

Moreover,

$Display mathematics$

Hence, since (12.9) is assumed, the sequence {δ‎n, k} satisfies condition (1.11). Therefore, according to Lemma 1.5, part (iii), we deduce that

$Display mathematics$

where $f^{(p)}(x)$ denotes the spectral density of the process $\{Z_k(p)\}_{k\in Z}$. On the other hand, since

$Display mathematics$

in order to establish (12.15) it is enough to show that

$Display mathematics$

First, we observe that, for any k ≥ 1,

$Display mathematics$

By the triangle inequality and condition (12.8), obviously

$Display mathematics$

To complete the proof we have to show that the remaining linear process involving the martingale differences satisfies the desired CLT. Denote

$Display mathematics$

Since the coefficients $δn,k=pck,n$ satisfy (1.11), by step 1, it follows that for any p fixed,

$Display mathematics$

In order to complete the proof, by Theorem 1.10 we have only to establish that

$Display mathematics$

With this aim, we notice that, by stationarity,

$Display mathematics$

Since $\Gamma_0<\infty$,

$Display mathematics$

Therefore, the proof will be complete if one can prove that

$Display mathematics$
(12.16)

But, by stationarity,

$Display mathematics$

proving (12.16) by taking into account condition (12.8). This ends the proof of step 2. To end the proof of Theorem 12.8, it remains to prove Lemma 12.12.

Proof of Lemma 12.12 Let m be a fixed positive integer. Note first that if $i_0$ is such that $|b_{i_0,n}|=\max_{k\in Z}|b_{k,n}|$ then, writing $i_0\equiv j_0\ (\mathrm{mod}\ m)$ with $j_0\in\{0,\dots,m-1\}$, it follows that

$Display mathematics$

It follows that

$Display mathematics$

Therefore, the lemma will follow if we can prove that

$Display mathematics$
(12.17)

With this aim, observe that, for any j = 0, …, m − 1,

$Display mathematics$

implying that

$Display mathematics$

Therefore,

$Display mathematics$

which proves (12.17) by taking into account condition (12.9).
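Lemma 12.12 is also easy to visualize numerically. The sketch below builds the short-memory weights $b_{k,n}=a_{1-k}+\cdots+a_{n-k}$ for the filter $a_j=2^{-j}$, $j\ge0$ (our own illustrative choice), and watches $\max_k|b_{k,n}|/b_n$ decay, roughly like $n^{-1/2}$ here.

```python
# Numerical sketch of Lemma 12.12 for the causal filter a_j = 2^{-j} (illustrative).
import math

def max_ratio(n, m=60):
    # weights b_{k,n} = a_{1-k} + ... + a_{n-k} = sum of a_t over t in [1-k, n-k]
    a = [2.0 ** (-j) for j in range(m + 1)]
    pref = [0.0]
    for v in a:
        pref.append(pref[-1] + v)                  # pref[t] = a_0 + ... + a_{t-1}
    def b(k):
        lo, hi = max(0, 1 - k), min(m, n - k)
        return pref[hi + 1] - pref[lo] if lo <= hi else 0.0
    bs = [b(k) for k in range(1 - m, n + 1)]       # all nonzero weights
    bn = math.sqrt(sum(v * v for v in bs))
    return max(abs(v) for v in bs) / bn

ratios = [max_ratio(n) for n in (100, 400, 1600)]
print(ratios)    # decreasing, consistent with max_k |b_{k,n}| = o(b_n)
```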

# 12.4 Invariance Principle for Linear Processes

We move now to explore the invariance principle for the partial sums associated with a causal linear process defined by

$Display mathematics$
(12.18)

Assume that X0 exists, is in $L2$ and is defined as before. Let us associate with the partial sums the following process in D([0, 1])

$Display mathematics$

We would like to find the limiting distribution of this process when a CLT is available. It should be noted that if {W_n(t), t ∈ [0, 1]} converges weakly to a standard Brownian motion, then necessarily $\sigma_n^2=nh(n)$ where h(n) is a slowly varying function (i.e. $\sigma_n^2$ is regularly varying with exponent 1). This is so since for t ∈ [0, 1] fixed we have $S_{[nt]}/\sigma_n\to_d N(0,t)$ and, in addition, taking t = 1, $S_n^2/\sigma_n^2$ is uniformly integrable (by the convergence of moments theorem), implying $\sigma_{[nt]}^2/\sigma_n^2\to t$.

A natural conjecture is that if we assume that the innovations (ξ‎k) are i.i.d. centered with finite second moment $E(ξ02)>0$ , $∑j=0∞aj2<∞$ and $σn2=nh(n)$ with h(n) a slowly varying function, then {Wn(t), t ∈ [0, 1]} converges weakly to a standard Brownian motion. This conjecture however has a negative answer.

## 12.4.1 Construction of the Counterexample

Let us first give a characterization for the variance of the partial sums of a linear process with i.i.d. innovations to be regularly varying with exponent 1 (see Definition 12.17).

Lemma 12.14 Let $(ξi,i∈Z)$ be a sequence of i.i.d. centered real-valued r.v.’s with finite second moment $E[ξ12]=σ2>0$ . In addition, let (ai, i ≥ 0) be a sequence of real numbers such that $∑i≥0ai2<∞$ . Then we consider the causal linear process defined by (12.18). Let $bn=a0+⋯+an$ and assume that

$Display mathematics$
(12.19)

and

$Display mathematics$
(12.20)

Then $\big(\sum_{k=0}^{n-1}b_k^2\big)^{-1}\sigma_n^2\to\sigma^2$ and $\sigma_n^2=nh(n)$ where h(n) is a slowly varying function.

Remark 12.15 Notice that Wu and Woodroofe (2004) pointed out that (12.20) is a necessary and sufficient condition in order for (4.86) to hold for the sequence $(Xk)k∈Z$ .

Proof Notice that

$Display mathematics$

and then

$Display mathematics$
(12.21)

This shows that $\big(\sum_{k=0}^{n-1}b_k^2\big)^{-1}\sigma_n^2\to\sigma^2$. Let now $F_k=\sigma(\xi_\ell, \ell\le k)$ and note that

$Display mathematics$

Then $E[(E(S_n|F_0))^2]=\sigma^2\sum_{j=0}^{\infty}(b_{n+j}-b_j)^2$. Notice that our conditions imply

$Display mathematics$

The result follows by applying Proposition 4.30.

The next example shows that conditions (12.19) and (12.20) are not sufficient to ensure that linear processes $(X_k)_{k\in Z}$, as defined above, satisfy the weak invariance principle.

Proposition 12.16 There exist a sequence of i.i.d. innovations {ξ‎i}, centered with positive and finite second moments and a sequence (ai, i ≥ 0) of real numbers, satisfying $∑i≥0ai2<∞$ , such that the linear process $(Xk)k∈Z$ defined by (12.18) satisfies $σn2$ = nh(n) with h(n) slowly varying and such that the weak invariance principle does not hold.

Proof of Proposition 12.16 Our example is inspired by the construction of examples in Herrndorf (1983) and by the paper of Wu and Woodroofe (2004). Let us define two sequences $\{a_n, n\ge0\}$ and $\{a'_n, n\ge0\}$ as follows:

$Display mathematics$

and

$Display mathematics$

Let now $(ξi,i∈Z)$ be a sequence of independent, identically distributed and symmetric random variables such that

$Display mathematics$
(12.22)

Define now two linear processes:

$Display mathematics$

Denote $σn2=E(∑k=1nXk)2$ and $σn′2=E(∑k=1nXk′)2$ . Let $bn=a0+⋯+an$ . Since

$Display mathematics$

it follows that both of the linear processes ${Xk,k∈Z}$ and ${Xk′,k∈Z}$ satisfy the conditions of Lemma 12.14.

Now observe that

$Display mathematics$

we get $\sum_{k=1}^{n}X'_{k+2}=\sum_{k=1}^{n}X_k+\xi_{n+2}-\xi_2$. It follows that $\sigma_n'^2\sim\sigma_n^2$. In addition, according to (12.22), classical computations yield that, for every ε > 0,

$Display mathematics$
(12.23)

As a consequence, the sequences of processes $\{\sigma_n^{-1}\sum_{i=1}^{[nt]}X_i, t\in[0,1]\}$ and $\{(\sigma_n')^{-1}\sum_{i=1}^{[nt]}X'_i, t\in[0,1]\}$ cannot both satisfy the weak invariance principle. Indeed, if for instance the sequence $\{\sigma_n^{-1}\sum_{i=1}^{[nt]}X_i, t\in[0,1]\}$ satisfies the weak invariance principle, then necessarily for every ε > 0, $P(\max_{1\le i\le n}|X_i|\ge\varepsilon\sigma_n)\to0$ as $n\to\infty$, and consequently, from (12.23),

$Display mathematics$

Thus, for linear processes, the weak invariance principle cannot hold without assumptions additional to the conditions of Lemma 12.14.

## 12.4.2 Finite-dimensional Distributions

In this subsection we shall describe the behavior of the finite-dimensional distributions of the process

$Display mathematics$

where the sequence of innovations, constants and notations are as in Corollary 12.10.

Definition 12.17 We say that a positive sequence $(vn2)n≥1$ is regularly varying with exponent β‎ > 0 if, for any t ∈[0, 1],

$Display mathematics$
(12.24)

Definition 12.18 A Gaussian process is called a standard fractional Brownian motion on [0, 1] with Hurst index α‎ ∈ (0, 1) if it has the following covariance structure: For any s, t ∈ [0, 1],

$Display mathematics$
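Definition 12.18 pins down the standard fBm covariance, $\mathrm{Cov}(W_H(s),W_H(t))=\tfrac12(s^{2H}+t^{2H}-|t-s|^{2H})$. The sketch below evaluates this kernel on a grid and confirms by a Cholesky factorization that it is a valid (positive definite) covariance; the grid size and the choice H = 0.75 are arbitrary.

```python
# Standard fractional-Brownian-motion covariance, checked for positive
# definiteness on a small grid via a hand-rolled Cholesky factorization.
import math

def fbm_cov(s, t, H):
    # Cov(W_H(s), W_H(t)) = (s^{2H} + t^{2H} - |t - s|^{2H}) / 2
    return 0.5 * (s ** (2 * H) + t ** (2 * H) - abs(t - s) ** (2 * H))

H, N = 0.75, 20
grid = [(i + 1) / N for i in range(N)]
C = [[fbm_cov(s, t, H) for t in grid] for s in grid]

# Cholesky C = L L^T succeeds (positive diagonal) iff C is positive definite
L = [[0.0] * N for _ in range(N)]
for i in range(N):
    for j in range(i + 1):
        acc = C[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
        L[i][j] = math.sqrt(acc) if i == j else acc / L[j][j]

var_at_1 = fbm_cov(1.0, 1.0, H)
print(var_at_1)    # unit variance at t = 1 for the standard fBm
```

Note that H = 1/2 recovers Brownian motion: the kernel then reduces to min(s, t).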

Theorem 12.19 Assume the innovations and the constants satisfy the conditions of Corollary 12.10. Let β ∈]0, 2] and assume that $b_n^2$ is regularly varying with exponent β. Then the finite-dimensional distributions of $\{b_n^{-1}S_{[nt]}, t\in[0,1]\}$ converge to the corresponding ones of $\sqrt{2\pi f(0)}\,W_H$, where $W_H$ is the standard fractional Brownian motion with Hurst index H = β/2.

Example 1 Let us consider the linear process Xk defined by

$Display mathematics$
(12.25)

where 0 < d < 1/2, B is the lag operator, and $(\xi_i)_{i\in Z}$ is a strictly stationary sequence satisfying the conditions of Theorem 12.8. Then Theorem 12.19 applies with β = 2d + 1, since $a_k\sim\kappa_d k^{d-1}$ for some $\kappa_d>0$.
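For Example 1, the coefficients of $(1-B)^{-d}$ are $a_k=\Gamma(k+d)/(\Gamma(d)\Gamma(k+1))$, which gives the stated asymptotics with $\kappa_d=1/\Gamma(d)$ (a standard fact about the fractional-differencing expansion; the value d = 0.3 below is an arbitrary choice).

```python
# Fractional-differencing coefficients a_k of (1-B)^{-d} via the standard
# recursion, checked against the asymptotics a_k ~ k^{d-1} / Gamma(d).
import math

d = 0.3
a = [1.0]
for k in range(1, 20001):
    a.append(a[-1] * (k - 1 + d) / k)      # a_k = a_{k-1} * (k - 1 + d) / k

k = 20000
scaled = a[k] * k ** (1 - d) * math.gamma(d)
print(scaled)    # close to 1, i.e. a_k ~ k^{d-1}/Gamma(d)
```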

Example 2 Now, if we consider the following choice of $(a_k)_{k\ge0}$: $a_0=1$ and $a_i=(i+1)^{-\alpha}-i^{-\alpha}$ for i ≥ 1 with α ∈]0, 1/2[, then the theorem also applies. Indeed, for this choice, $b_n^2\sim\kappa_\alpha n^{1-2\alpha}$, where $\kappa_\alpha$ is a positive constant depending on α.

Example 3 For the selection $a_i\sim i^{-\alpha}\ell(i)$, where ℓ is a slowly varying function at infinity and 1/2 < α < 1, we have $b_n^2\sim\kappa_\alpha n^{3-2\alpha}\ell^2(n)$ (see for instance Relations (12) in Wang et al. (2001)), where $\kappa_\alpha$ is a positive constant depending on α.

Example 4 Finally, if $a_i\sim i^{-1/2}(\log i)^{-\alpha}$ for some α > 1/2, then $b_n^2\sim n^2(\log n)^{1-2\alpha}/(2\alpha-1)$ (see again Relations (12) in Wang et al. (2001)). Hence (12.24) is satisfied with β = 2.
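A quick consistency check for Example 2 (reading its coefficients as $a_0=1$, $a_i=(i+1)^{-\alpha}-i^{-\alpha}$): the weights telescope to $b_{k,n}=(n-k+1)^{-\alpha}$ for $1\le k\le n$, so the squared sum over these k grows like $n^{1-2\alpha}/(1-2\alpha)$, consistent with the exponent β = 1 − 2α (the weights for k ≤ 0 contribute at the same order). The sketch below verifies both facts for α = 1/4.

```python
# Telescoping check for Example 2's weights (our reading of the coefficients).
alpha = 0.25

def a(t):
    if t == 0:
        return 1.0
    return (t + 1) ** (-alpha) - t ** (-alpha) if t > 0 else 0.0

n, k = 2000, 700                                  # any 1 <= k <= n works here
b_kn = sum(a(i - k) for i in range(1, n + 1))     # b_{k,n} = a_{1-k} + ... + a_{n-k}
err = abs(b_kn - (n - k + 1) ** (-alpha))         # telescoping identity
print(err)

s = sum(j ** (-2 * alpha) for j in range(1, n + 1))
ratio = s / (n ** (1 - 2 * alpha) / (1 - 2 * alpha))
print(ratio)    # approaches 1 as n grows
```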

Proof of Theorem 12.19 To prove the convergence of the finite-dimensional distributions, we shall apply the Cramér–Wold device. Let m be a positive integer, let $0<t_1<\cdots<t_m\le1$, and set $n_k=[nt_k]$ for $1\le k\le m$. For $\lambda_1,\dots,\lambda_m\in R$, notice that

$Display mathematics$
(12.26)

where $b_{j,n}=a_{1-j}+\cdots+a_{n-j}$ for all $j\in Z$, and $b_n^2=\sum_{j\in Z}b_{j,n}^2$.

We shall apply Theorem 12.8 to the linear process $∑j∈ZBj,nξj$ where

$Display mathematics$
(12.27)

As a first step we calculate the limit over n of the following quantity

$Display mathematics$

For any 1 ≤ k ≤ m, by using the fact that for any two real numbers A and B we have $A(A+B)=2^{-1}(A^2+(A+B)^2-B^2)$, we get that

$Display mathematics$

Now, by using condition (12.24), we derive that, for any 1 ≤ km,

$Display mathematics$
(12.28)

It follows from (12.28) that

$Display mathematics$
(12.29)

Moreover,

$Display mathematics$

since $\sum_{j\in Z}(b_{j,n_k}-b_{j-1,n_k})^2\le4\sum_{i\in Z}a_i^2$ and $b_n\to\infty$. Therefore, the conditions of Theorem 12.8 being satisfied, using (12.29), we can deduce that

$Display mathematics$

where

$Display mathematics$

ending the proof of the convergence of the finite-dimensional distributions.

## 12.4.3 Tightness

We shall comment on the functional CLT only for i.i.d. innovations.

Theorem 12.20 Let $(a_i)_{i\in Z}$ be in $\ell^2$ and let $(\xi_i)_{i\in Z}$ be i.i.d. centered real-valued random variables with $\|\xi_0\|_2=1$. Let β ∈]0, 2] and assume that $\sigma_n^2$ (the variance of $S_n$) is regularly varying with exponent β. If β ∈]1, 2] then the process $\{\sigma_n^{-1}S_{[nt]}, t\in[0,1]\}$ converges in distribution in D([0, 1]) to $W_H$, where $W_H$ is a standard fractional Brownian motion with Hurst index H = β/2. If β ∈]0, 1] and we assume in addition that $\|\xi_0\|_q<\infty$ for some real q > 2/β, then the process $\{\sigma_n^{-1}S_{[nt]}, t\in[0,1]\}$ converges in distribution in D([0, 1]) to $W_H$.

Proof By Theorem 12.19, it is enough to prove tightness. With this aim, we apply Corollary 1.22. By the Khintchine–Burkholder inequality applied to linear statistics of i.i.d. variables ${ξi}i∈Z$ , we get that, if $∥ξ0∥q<∞$ for some q ≥ 2, then

$Display mathematics$

where $b_{j,n}=a_{1-j}+\cdots+a_{n-j}$. Recall that $\sigma_n^2=n^{\beta}h(n)$ where h is slowly varying. Therefore, by Corollary 1.22, tightness follows by taking either q > 2/β if β ∈]0, 1] or q = 2 if β ∈]1, 2].

# 12.5 IP for Linear Statistics with Weakly Associated Innovations

## 12.5.1 The Case of Asymptotically Negative Dependent Innovations

From Corollary 9.14, we can prove the following result:

Corollary 12.21 Let $\xi=(\xi_k)_{k\in Z}$ be an $L^2$-stationary sequence of real-valued centered random variables with a continuous spectral density function f on (−π, π], satisfying the asymptotic negative dependence condition (9.2) and such that $\{\xi_k^2\}$ is a uniformly integrable family. Assume in addition that $\|\xi_0\|_2=1$. Consider a triangular array of non-negative numerical constants $\{b_{k,n}, k\in Z\}$ satisfying condition (12.9). Define the triangular array $\{X_{k,n}, k\in Z\}_{n\ge1}$ by

$Display mathematics$

For 0 ≤ t ≤ 1 we set

$Display mathematics$

and Wn(0) = 0, $Wn(t)=∑i=−∞kn(t)Xi,n.$ Then ${Wn(t),t∈[0,1]}→dV$ in D([0, 1]) where $V=√(2πf(0))W$ with W the standard Brownian motion. In particular

$Display mathematics$

where $N∼N(0,1)$ .

Proof of Corollary 12.21 To prove the corollary, we shall take into account Comment 9.17 and apply Corollary 9.14 to the triangular array ${Xk,n,k∈Z}n≥1$ . Since the sequence $(ξk)k∈Z$ is asymptotically negative dependent and the coefficients bk, n are non-negative, the triangular array ${Xk,n,k∈Z}n≥1$ is also asymptotically negative dependent. Next, for any ε‎ > 0,

$Display mathematics$

where $δn=bn−1maxk∈Z|bk,n|$ . But by Lemma 12.12, since (12.9) is assumed, $limn→∞δn=0$ . This convergence, together with the uniform integrability of ${ξk2}$ entail that ${Xk,n,k∈Z}n≥1$ satisfies the Lindeberg condition.

To apply Corollary 9.14 (see also Remark 9.15), it remains to show that for all 0 ≤ a < b

$Display mathematics$
(12.30)

(p.368) Note that

$Display mathematics$

Using the fact that $(ξk)k∈Z$ is $L2$ -stationary with a continuous spectral density function f on (−π‎, π‎] and proceeding as in the proof of point (iii) of Lemma 1.5, we infer that (12.30) will hold if one can prove that, for any $i∈Z$ ,

$Display mathematics$
(12.31)

where

$Display mathematics$

To prove the convergence (12.31), note that for any $i∈Z$ ,

$Display mathematics$

Hence,

$Display mathematics$

The last term on the right-hand side goes to zero as $n→∞$ by condition (12.9). To show that the first term also goes to zero as $n→∞$ , observe that, since the triangular array $(Xj,n)j∈Z$ has been proved to satisfy the Lindeberg condition, we have $supi∈ZE(Xi,n2)→0$ . This implies that for any t ∈ [0, 1], $∑i≤kn(t)E[Xi,n2]=bn−2∑k=−∞kn(t)bk,n2→t$ . Hence the first term on the right-hand side goes to zero as $n→∞$ . This ends the proof of (12.31) and then of (12.30).

## 12.5.2 The Case of Long-Range Dependent Statistics of Stationary Perturbed Determinantal Point Processes

We start by proving the asymptotic normality of linear statistics when the innovation process is a stationary perturbed determinantal point process.

(p.369) Corollary 12.22 Let $X=(Xi,i∈Z)$ be a stationary determinantal point process and let $Y=(Yi,i∈Z)$ be a non-degenerate stationary Gaussian sequence with positive continuous spectral density. Assume that X and Y are independent. Also, consider a measurable function h(x, y) defined on ${0,1}×R$ and such that $E[h2(X1,Y1)]<∞$ . Define the stationary sequence $ξ=(ξi,i∈Z)$ by $ξi=h(Xi,Yi)−E[h(Xi,Yi)]$ , $i∈Z$ . Let ${bk,n,k∈Z}$ be a triangular array of non-negative numerical constants satisfying (12.9). Then ξ‎ has a continuous spectral density f on (−π‎, π‎] and

$Display mathematics$
(12.32)

where $bn2=∑k∈Zbk,n2$ and $N∼N(0,1)$ .

Proof of Corollary 12.22 By (10.3), recall that

$Display mathematics$

where υ‎i = (h(1, Yi)−h(0, Yi))+, ψ‎i = (−h(1, Yi)+h(0, Yi))+ and ζ‎i = h(0, Yi). For any $u=(u1,u2,u3)∈R3$ , let

$Display mathematics$

and define the process $Uu:=(Ui(u))i∈Z$ by

$Display mathematics$
(12.33)

The fact that ξ‎ has a continuous spectral density f on (−π‎, π‎] comes from Lemma 12.23 (the proof of which follows) by applying it with u = (1, −1, 1).

Lemma 12.23 Let $X=(Xi,i∈Z)$ , $Y=(Yi,i∈Z)$ and h(⋅, ⋅) be as in Corollary 12.22. Then, for $u=(u1,u2,u3)∈R3$ , the stationary process $U=(Ui(u),i∈Z)$ defined by (12.33) has a continuous spectral density fu on (−π‎, π‎].

Let us prove now the convergence in distribution of $bn−1∑k∈Zbk,nξk$ . With this aim, we first recall that, for any $u∈R+3$ , $limn→∞rn(Uu)=0$ (see (10.4)). This convergence, together with Lemma 12.23 and Corollary 12.21, proves that, for any $u∈R+3$ ,

$Display mathematics$
(12.34)

where fu is the spectral density of Uu. Now, by using the same arguments as those developed in the proof of Theorem 9.18 (and in particular Property 9.19), it follows that the convergence (12.34) also holds for any $u∈R3$ . Therefore taking u = (1, −1, 1), the convergence (12.32) follows. To end the proof of the corollary, it remains to prove Lemma 12.23.

(p.370) Proof of Lemma 12.23 The lemma will follow from Theorem 1.7 if one can prove that, for any $u∈R3$ ,

$Display mathematics$
(12.35)

the supremum being taken over all pairs of non-empty, finite, disjoint sets Q, S $⊂Z$ satisfying $d(Q,S)=minq∈Q,s∈S|q−s|≥n$ .

With this aim, we first notice that since $X=(Xi)i∈Z$ and $Y=(Yi)i∈Z$ are independent,

$Display mathematics$
(12.36)

But, setting $y=(yi)i∈Q∪S$ ,

$Display mathematics$

where b(yi) = u1(h(1, yi)−h(0, yi))+ + u2(−h(1, yi)+h(0, yi))+. With definition (1.17), we derive that, for any pair (Q, S) of non-empty, finite, disjoint sets in $Z$ satisfying d(Q, S) ≥ n,

$Display mathematics$

implying that

$Display mathematics$

Therefore, by the stationarity of Y,

$Display mathematics$
(12.37)

(p.371) On the other hand, note that, by the stationarity of X,

$Display mathematics$

Therefore

$Display mathematics$

implying that, for any pair (Q, S) of non-empty, finite, disjoint sets of $Z$ satisfying d(Q, S) ≥ n,

$Display mathematics$

Since Y has a positive and continuous spectral density, according to Fact 10.2, $ρ1*(Y)<1$ and then $r1′(Y)<1$ which implies by Lemma 7.9 and stationarity that

$Display mathematics$

where $c=(1+r1′(Y))/(1−r1′(Y))<∞$ . Therefore,

$Display mathematics$
(12.38)

So, overall, starting from (12.36) and taking into account (12.37) and (12.38), it follows that there exists a positive finite constant K such that, for any n ≥ 1,

$Display mathematics$
(12.39)

By Fact 10.1, X is negatively dependent. Hence, by taking into account Remark 9.25, it follows that

$Display mathematics$

(p.372) and then X has a continuous spectral density. Therefore, by Theorem 1.7, $κn(X)→0$ as $n→∞$ . On the other hand, since Y is a Gaussian process with positive continuous spectral density, by Fact 10.2, $ρn*(Y)→0$ as $n→∞$ . Taking into account these considerations in (12.39), it follows that (12.35) holds. This ends the proof of the lemma (and then the proof of Corollary 12.22).

Corollary 12.22 gives the following central limit theorem for linear processes constructed with non-negative numerical constants and generated by asymptotically negative dependent innovations.

Corollary 12.24 Suppose $X=(Xk)k∈Z$ and $Y=(Yk)k∈Z$ satisfy the conditions of Corollary 12.22. Let h(x, y) be a measurable function defined on ${0,1}×R$ and such that $E[h2(X1,Y1)]<∞$ . Define the stationary sequence $ξ=(ξi,i∈Z)$ by $ξi=h(Xi,Yi)−E[h(Xi,Yi)]$ , $i∈Z$ . Let ${aj,j∈Z}$ be a non-negative sequence such that $∑j∈Zaj2<∞$ . Let

$Display mathematics$

If $bn→∞$ then

$Display mathematics$

where $N∼N(0,1)$ and f is the spectral density of Z.

Proof of Corollary 12.24 It suffices to notice that $Sn=∑k∈Zbk,nξk$ and to apply Corollary 12.22. Indeed, since $∑j∈Zaj2<∞$ , $bn2<∞$ . On the other hand, $∑k∈Z(bk,n−bk−1,n)2≤∑j∈Zaj2$ , which shows that the second part of condition (12.9) holds since $bn2→∞$ and $∑j∈Zaj2<∞$ .
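To make the weight construction concrete, here is a small numerical sketch (entirely my own; the coefficient choice aj = (1+j)−0.8 for j ≥ 0 and the truncation levels are arbitrary, not from the text). It forms bk,n = a1−k + … + an−k and checks that bn grows while δn = bn−1 maxk bk,n, the quantity controlled by Lemma 12.12 under (12.9), shrinks.

```python
import numpy as np

# Square-summable non-negative coefficients: a_j = (1+j)^{-0.8} for j >= 0, 0 otherwise.
# This is an arbitrary illustrative choice, not taken from the book.
J = 10_000                                   # truncation level for the illustration
aj = (1.0 + np.arange(J)) ** -0.8
A = np.concatenate(([0.0], np.cumsum(aj)))   # A[m] = a_0 + ... + a_{m-1}

def b_kn(k, n):
    """b_{k,n} = sum_{i=1}^{n} a_{i-k}; terms with i < k vanish since a_j = 0 for j < 0."""
    lo, hi = max(1, k) - k, n - k + 1        # the a-indices i-k run over [lo, hi)
    lo, hi = min(lo, J), min(hi, J)
    return A[hi] - A[lo] if hi > lo else 0.0

for n in [100, 400, 1600]:
    b = np.array([b_kn(int(k), n) for k in np.arange(-10 * n, n + 1)])
    bn = np.sqrt((b ** 2).sum())             # b_n^2 = sum_k b_{k,n}^2
    delta_n = b.max() / bn                   # should tend to 0 (Lemma 12.12)
    print(n, round(float(bn), 1), round(float(delta_n), 4))
```

In such a run one should see bn increase with n while δn decreases, in line with (12.9).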

Comment The central limit theorem for triangular arrays of linear statistics of the following form

$Display mathematics$

where $Lj:=Lj(n)$ are sets and $(Xi)i∈Z$ is a determinantal point process, was established in Soshnikov (2002). It is an open question whether the central limit theorem holds for long-range dependent statistics of perturbed determinantal processes for arbitrary sequences (aj) with $∑aj2<∞$ .

# 12.6 Discrete Fourier Transform and Periodogram

An important application of martingale theory is the analysis of the periodogram. Given a stochastic process $(Xj)j∈Z$ , the periodogram is defined as

(p.373)

$Display mathematics$

where $i=√−1$ is the imaginary unit. The periodogram is related to the discrete Fourier transform

$Display mathematics$
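As a computational aside (my own sketch, not from the text), both objects are one line of NumPy each; here the periodogram is taken in the form n−1|Sn(θ)|2 used in Remark 12.26 below.

```python
import numpy as np

def dft_stats(x, theta):
    """Return S_n(theta) = sum_{k=1}^n x_k e^{ik theta} and the periodogram n^{-1}|S_n(theta)|^2."""
    n = len(x)
    k = np.arange(1, n + 1)
    S = np.sum(x * np.exp(1j * k * theta))
    return S, np.abs(S) ** 2 / n

# White-noise example: f(theta) = 1/(2*pi), so n^{-1} E|S_n(theta)|^2 -> 2*pi*f(theta) = 1.
rng = np.random.default_rng(0)
S, I = dft_stats(rng.standard_normal(4096), theta=1.0)
print(I)
```

Averaging the periodogram over independent paths recovers 2πf(θ), in agreement with (12.41) below.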

In this section we consider a strictly stationary ergodic sequence $(Xk)k∈Z$ of centered real-valued random variables with finite second moments, adapted to a non-decreasing filtration $(Fk)k∈Z$ , and we shall impose the following regularity condition:

$Display mathematics$
(12.40)

where $F−∞=∩n∈ZFn$ is the tail sigma field.

## 12.6.1 A CLT for Almost All Frequencies

We present below a central limit theorem for almost all frequencies, obtained by Peligrad and Wu (2010). In Theorem 12.25, the parameter θ‎ lies in [0, 2π‎], endowed with the Borel sigma algebra and the Lebesgue measure λ‎.

Theorem 12.25 Let $(Xk)k∈Z$ be a strictly stationary ergodic sequence of centered real-valued random variables with finite second moments, adapted to a non-decreasing filtration $(Fk)k∈Z$ , and such that (12.40) is satisfied. Then, for almost all θ‎ ∈ [0, 2π‎], the following convergence holds:

$Display mathematics$
(12.41)

where f is the spectral density of $(Xk)k∈Z$ . Furthermore,

$Display mathematics$
(12.42)

where N1(θ‎) and N2(θ‎) are independent identically distributed normal random variables with mean 0 and variance π‎f(θ‎).

An implication of this result is the limiting distribution of the periodogram:

Remark 12.26 As a consequence of Theorem 12.25, for sequences satisfying (12.40), the periodogram n−1|Sn(θ‎)|2 is asymptotically distributed as π‎f(θ‎)χ‎2(2) for almost all θ‎ ∈ [0, 2π‎], where χ‎2(2) is the chi-squared distribution with 2 degrees of freedom.
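The bivariate convergence (12.42) is easy to probe by simulation. The sketch below (my own; the sample sizes and the frequency θ = 0.7 are arbitrary choices) uses i.i.d. N(0, 1) innovations, for which f ≡ 1/(2π) and hence each limiting component has variance πf(θ) = 1/2.

```python
import numpy as np

# Monte Carlo check of (12.42) for i.i.d. N(0,1) innovations: f(theta) = 1/(2*pi),
# so Re S_n(theta)/sqrt(n) and Im S_n(theta)/sqrt(n) should each be close to
# N(0, pi*f(theta)) = N(0, 1/2), and asymptotically independent.
rng = np.random.default_rng(0)
n, reps, theta = 1024, 2000, 0.7
e = np.exp(1j * np.arange(1, n + 1) * theta)

samples = np.array([np.sum(rng.standard_normal(n) * e) for _ in range(reps)])
re, im = samples.real / np.sqrt(n), samples.imag / np.sqrt(n)

print(re.var(), im.var())          # both should be near 1/2
print(np.corrcoef(re, im)[0, 1])   # should be near 0
```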

(p.374) The proof of Theorem 12.25 is a combination of martingale techniques with results from ergodic theory and harmonic analysis.

Some preparatory materials. From harmonic analysis we shall use the first three facts below. The last one comes from ergodic theory.

Fact 1 Carleson Theorem (Carleson 1966): If (ak)k≥0 are real numbers such that $∑k=0∞ak2<∞,$ then $∑j=1najeijθ$ converges, as $n→∞$ , λ‎-almost surely on [0, 2π‎].

Fact 2 Hunt and Young (1974): There is a constant C such that

$Display mathematics$

Fact 3 Fejér–Lebesgue Theorem (cf Bary, 1964, p. 139 or Theorem 15.7 in Champeney, 1989). If g(θ‎) is integrable on [0, 2π‎], with Fourier coefficients

$Display mathematics$

then, denoting

$Display mathematics$

we have

$Display mathematics$
(12.43)
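For readers who want to see Fejér summation numerically, here is a short sketch (entirely my own; the step function g and all grid sizes are arbitrary choices). It computes the Cesàro means of the partial Fourier sums, which the Fejér–Lebesgue theorem guarantees converge to g at λ-almost every point.

```python
import numpy as np

def fourier_coeff(g, j, m=4096):
    """Riemann-sum approximation of g_j = (2*pi)^{-1} integral of g(t) e^{-ijt} over [0, 2*pi]."""
    t = 2 * np.pi * np.arange(m) / m
    return np.mean(g(t) * np.exp(-1j * j * t))

def fejer_mean(g, theta, n):
    """Cesaro (Fejer) mean: sum over |j| <= n of (1 - |j|/(n+1)) g_j e^{ij theta}."""
    js = np.arange(-n, n + 1)
    w = 1.0 - np.abs(js) / (n + 1)            # triangular Fejer weights
    c = np.array([fourier_coeff(g, j) for j in js])
    return float(np.real(np.sum(w * c * np.exp(1j * js * theta))))

g = lambda t: (t < np.pi).astype(float)       # a step function on [0, 2*pi)
print(fejer_mean(g, 1.0, 200))                # close to g(1.0) = 1 away from the jumps
```

Away from the two jumps of g, the Fejér means approach the value of g, with no Gibbs oscillation.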

Fact 4 The next proposition gives the strong law of large numbers for the discrete Fourier transform. We notice that we need only the variables to be identically distributed to get an almost sure result. We refer the reader to Proposition 30 and Lemma 32 in Cuny, Merlevède and Peligrad (2013) and to Zhang (2017) for more general results of this type.

In the sequel λ‎ denotes the Lebesgue measure on [0, 2π‎].

Proposition 12.27 Let (Xk)k≥0 denote a sequence of identically distributed random variables on $(Ω,K,P),$ with finite first moment. Then, for λ‎-almost all θ‎ in [0, 2π‎],

$Display mathematics$

where $Sn(θ)=∑k=1neikθXk$ .

(p.375)

Proof We shall use a truncation argument. Define

$Display mathematics$

Then,

$Display mathematics$

By the Borel–Cantelli lemma applied on Ω‎, $P(Xn≠Yn i.o.)=0$ . Therefore, for all θ‎ in [0, 2π‎],

$Display mathematics$

The proposition thus reduces to showing that, for almost all θ‎ in [0, 2π‎], we have $Sn*(θ)/n→0,$ $P$ -a.s.

In order to prove it, note that, by using the Kronecker lemma, it is enough to show that, for almost all θ‎ in [0, 2π‎],

$Display mathematics$
(12.44)

To prove it, we use Carleson’s (1966) theorem. Clearly,

$Display mathematics$

for some positive constant c. The latter relation implies that there is $Ω′⊂Ω$ with $P(Ω′)=1,$ such that, for all $ω∈Ω′$ ,

$Display mathematics$

By Carleson’s (1966) theorem, for such ω‎,

$Display mathematics$

Denote now

$Display mathematics$

(p.376) By Fubini’s theorem, in the product space [0, 2π‎] ×Ω‎, we have $λ×P(A)=1$ . Again by Fubini’s theorem, (12.44) follows. The proof is complete.
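The proposition only requires a finite first moment, which a quick simulation can illustrate (my own sketch; the centered Pareto-type innovations, which have finite mean but infinite variance, and the frequency θ = 2.0 are arbitrary choices).

```python
import numpy as np

# S_n(theta)/n -> 0 for i.i.d. innovations with only a finite first moment.
# X_k = centered Lomax(1.5): E|X_0| < infinity but Var(X_0) = infinity.
rng = np.random.default_rng(0)
alpha = 1.5
mean = 1.0 / (alpha - 1.0)                   # mean of a Lomax(alpha) variable

theta = 2.0
for n in [10**3, 10**4, 10**5]:
    x = rng.pareto(alpha, size=n) - mean     # centered heavy-tailed innovations
    k = np.arange(1, n + 1)
    S = np.sum(x * np.exp(1j * k * theta))
    print(n, abs(S) / n)                     # should decrease toward 0
```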

Some preparatory lemmas. For $k∈Z$ we define the projection operator by

$Display mathematics$
(12.45)

Lemma 12.28 Let

$Display mathematics$

Under (12.40), for λ‎-almost all θ‎, we have

$Display mathematics$
(12.46)

Proof By (12.40) we have $∑k∈Z∥Pk(X0)∥22=∥X0∥22<∞$ , whence $∑k∈Z|P0(Xk)|2<∞$ $P$ -almost surely. Therefore, by Carleson’s (1966) theorem, for almost all ω‎, $∑1≤k≤neikθP0(Xk)$ converges λ‎-almost surely. Denote the limit by D0 = D0(θ‎). We now consider the set

$Display mathematics$

and notice that almost all sections for ω‎ fixed have Lebesgue measure 0. So by Fubini’s theorem the set A has measure 0 in the product space and therefore, again by Fubini’s theorem, almost all sections for θ‎ fixed have probability 0. It follows that for almost all θ‎, $P0(Sn(θ))→D0$ almost surely under $P$ . Next, by the maximal inequality in Hunt and Young (1974), there is a constant C such that

$Display mathematics$

and then we integrate

$Display mathematics$

Therefore,

$Display mathematics$

Since $|P0(Sn(θ))|≤supn|P0(Sn(θ))|$ and the latter is square integrable for almost all θ‎, by the Lebesgue dominated convergence theorem we have that P0(Sn(θ‎)) converges in $L2$ .

(p.377) Lemma 12.29 Let $g(θ)=(2π)−1E|D0(θ)|2$ . For all $j∈Z$ , we have

$Display mathematics$
(12.47)

where cj = cov(X0, Xj) for all $j∈Z$ . So $(cj)j∈Z$ are the Fourier coefficients of g implying that g = f. Additionally, for almost all θ‎,

$Display mathematics$
(12.48)

Proof As before, let

$Display mathematics$

It follows that

$Display mathematics$

Since

$Display mathematics$

for 0 ≤ j ≤ n,

$Display mathematics$
(12.49)

and 0 otherwise (i.e. if j < 0 or j > n). Since $Xj=∑l∈ZPl(Xj)$ , by orthogonality of martingale differences and stationarity we have that

$Display mathematics$

Combining the latter equality with (12.49) we obtain

$Display mathematics$

(p.378) We know that, by (12.46), for λ‎-almost all θ‎,

$Display mathematics$

By the Lebesgue dominated convergence theorem, as in the proof of Lemma 12.28, (12.47) follows in view of the Hunt and Young maximal inequality since $supn|P0(Tn(θ))|$ is integrable.

Now we prove (12.48). By stationarity, we have

$Display mathematics$

Namely, $E|Sn(θ)|2/n$ is the Cesàro average of the sums $∑j=−llcjeijθ$ . Note that $∥D0(θ)∥22$ is integrable over [0, 2π‎]. Therefore, by the Fejér–Lebesgue theorem (Fact 3 above), relation (12.48) holds for λ‎-almost all θ‎ ∈ [0, 2π‎].

Proof of Theorem 12.25 The first assertion of Theorem 12.25 is just Lemma 12.29. We now prove (12.42).

Step 1. The construction of a martingale differences sequence. Define the projector operator by (12.45). Then we construct

$Display mathematics$

Then, by Lemma 12.28 for λ‎-almost all θ‎,

$Display mathematics$
(12.50)

Note that

$Display mathematics$

Therefore, by the contractive property of the conditional expectation,

$Display mathematics$

For λ‎-almost all θ‎, we then construct a sequence of stationary martingale differences (Dk(θ‎))k≥1, given by

$Display mathematics$

(p.379) Step 2. Martingale approximation. By step 1 we know that there is a set $Γ1⊂[0,2π]$ , such that $λ(Γ1)=2π$ and for all $θ∈Γ1$ the martingale

$Display mathematics$

is well defined in $L2$ . We show now that, for almost all θ‎,

$Display mathematics$
(12.51)

To this end, note that $Sn(θ)−E(Sn(θ)|F0)$ and $E(Sn(θ)|F0)$ are orthogonal, and we have

$Display mathematics$
(12.52)

Next, for $θ∈Γ1$ , by the orthogonality of martingale differences, the stationarity and Lemma 12.28, we have that

$Display mathematics$
(12.53)

Hence,

$Display mathematics$
(12.54)

But $∥Mn(θ)∥22=n∥D0(θ)∥22$ and, by Lemma 12.29, for almost all θ‎, $limn→∞n−1∥Sn(θ)∥22=∥D0(θ)∥22$ . These considerations together with (12.52) and (12.54) entail that, for almost all θ‎,

$Display mathematics$

This implies (12.51) by taking into account (12.53) and the decomposition

$Display mathematics$

(p.380) Step 3. The CLT for the approximating martingale. Here we shall consider θ‎ in a set of measure 2π‎ where all the functions make sense.

By Theorem 1.9, it remains only to prove the central limit theorem for the complex-valued martingale:

$Display mathematics$

As a matter of fact we shall provide a central limit theorem for the real part and imaginary part and show that in the limit they are independent. By the Cramér–Wold device, we then only have to show that, for any reals a and b,

$Display mathematics$

With this aim, we shall apply Theorem 2.29. Clearly, since D0(θ‎) is square integrable and $(Dk(θ))k∈Z$ is stationary,

$Display mathematics$

It remains to verify

$Display mathematics$

or equivalently if a2 + b2 = 1,

$Display mathematics$
(12.55)

For convenience we write Dk = Dk(θ‎) and Dk = Ak + iBk . So,

$Display mathematics$

By using basic trigonometric formulas, it follows that, if a2 + b2 = 1,

$Display mathematics$

(p.381) By stationarity and the ergodic theorem,

$Display mathematics$

On the other hand, by Proposition 12.27, for almost all θ‎,

$Display mathematics$

By these arguments, (12.55) follows and it remains to apply Lemma 12.29 to obtain the result.

## 12.6.2 Examples

We present several examples of processes for which the conclusions of Theorem 12.25 hold.

Clearly condition (12.40) is satisfied if the left tail sigma field $F−∞$ is trivial. These processes are called regular (see Chapter 2, vol. 1 in Bradley, 2007).

Example 1 (Mixing sequences.)

Assume that $(Xk)k∈Z$ is a strictly stationary sequence of real-valued r.v.’s. If the sequence is additionally strong mixing (so $limn→∞α(n)=0$ where the strong mixing coefficients α‎(n) have been defined in Section 5.1), the tail sigma field is trivial; see Claim 2.17a in Bradley (2007). Examples of this type include Harris recurrent Markov chains. Now if we denote by (ρ‎(n))n≥1 the sequence of maximal correlation coefficients associated with $(Xk)k∈Z$ (see again Section 5.1 for the definition), and if $limn→∞ρ(n)<1$ , then the tail sigma field is also trivial; see Proposition 5.6 in Bradley (2007).

Example 2 (Functions of i.i.d. random variables.)

Let $(εk)k∈Z$ be a sequence of i.i.d. random variables and consider Xn = f(ε‎k, k ≤ n). These are regular processes and therefore Theorem 12.25 applies. Examples include linear processes, functions of linear processes and iterated random functions (Wu and Woodroofe, 2004), among others. For example, let $Xn=∑j=0∞ajεn−j$ , where the ε‎j’s are i.i.d. with mean 0 and variance 1 and the aj are real coefficients with $∑j=0∞aj2<∞$ . In this case Xn is well defined and, by Lemmas 12.28 and 12.29, the spectral density is

$Display mathematics$

Example 3 (Reversible Markov chains.)

As in Chapter 14, we consider $(Xj)j∈Z$ a strictly stationary, centered, ergodic and reversible Markov chain with values in a measurable space. By computation (14.4),

$Display mathematics$

If we assume $cn→0$ then the conclusion of Theorem 12.25 holds.