Convolution-Based Unit Root Processes: A Simulation Approach

In this paper we present a modified version of a unit root process using a convolution-based technique. This methodology exploits the properties of copula functions. The application of copula functions to stochastic processes (in particular to Markov processes) was recently described in the book by Cherubini, Gobbi, Mulinacci and Romagnoli (2012). Our contribution relies on the application of a particular family of copulas, which are generated by the convolution operator, to the design of time series processes. From this point of view, the paper contributes to the literature modeling time series with copulas (Chen & Fan, 2006; Chen, Wu, & Yi, 2009; Cherubini & Gobbi, 2013). While this literature builds on the pioneering paper by Darsow, Nguyen and Olsen (DNO, 1992) on the link between copula functions and Markov processes, our paper exploits the concept of convolution-based copulas to define a new version of the unit root process. Beyond the Markov property, there is a long-standing and extremely vast literature on the fact that most of the changes of the processes, those that are called innovations, are not predictable on the basis of past information (Samuelson, 1963; 1973; Fama, 1965). In financial markets the natural representation of this concept is to assert that log-prices of assets follow a random walk, which is, in fact, a unit root process. Technically, this process is characterized by innovations that are permanent and independent of the level of the process. The same random walk hypothesis spread into the literature in the field of macroeconomics in the 1980s, starting with the seminal paper by Nelson and Plosser (1982). Based on the first unit root tests, due to Dickey and Fuller (1979; 1981), Nelson and Plosser found that most US macroeconomic time series included a random walk component, that is, a shock that is independent and persistent.
In this paper we propose an extension to this approach, which allows for dependent innovations, and for nonlinear dependence between the innovation and the value of the process in the previous period. This is our modified version of the unit root process. The dependence structure is modelled by a copula function and the distribution of the process for all t is obtained by applying the C-convolution technique (Cherubini, Mulinacci, & Romagnoli, 2011), as described in section 3. The choice of the family of copulas changes the probabilistic properties of the new process. In order to simplify the computational aspects, in this paper we concentrate on Gaussian copulas, for which a closed form of the C-convolution is available. In this framework, we propose a C-convolution-based unit root process, C-UR(1), characterized by a negative correlation between the innovation and the value of the process in the previous period. We investigate the stationarity property of this new process by a simulation experiment.


Introduction
The plan of the paper is as follows. In section 2 we present the standard linear autoregressive model and the unit root case. In section 3 we introduce our modified version of the unit root process based on the concept of C-convolution. In section 4 we describe the simulation algorithm and we discuss the results. Section 5 concludes.

The Standard AR(1) Process
We begin by briefly describing the properties of the celebrated autoregressive process of order 1, AR(1). The definition is the following.

Definition 1 (AR(1)). The discrete time stochastic process (Y_t)_t is a first order autoregressive process, AR(1), if

Y_t = ϕY_{t−1} + ε_t,

where ϕ is a real number and (ε_t)_t is a sequence of i.i.d. random variables, i.e., (ε_t)_t is a white noise process. Moreover, Y_{t−1} is independent of ε_t.
In other words, a stochastic process Y_t is an autoregressive process if the value at time t depends linearly on its own previous value and on a stochastic term (an imperfectly predictable term); thus the model takes the form of a stochastic difference equation. The notation AR(1) indicates an autoregressive model of order 1. The constraints on the autoregressive parameter for the model to remain wide-sense stationary are well known. In particular, the process is wide-sense stationary if |ϕ| < 1, since it is obtained as the output of a stable filter whose input is white noise. Conversely, the condition |ϕ| ≥ 1 identifies the case where the process is not stationary. Figure 1 displays some examples of trajectories of a stationary AR(1) process for some values of the parameter. The mean-reverting property appears clearly from the figure. Furthermore, the absence of any kind of trend is more evident for small values of the autoregressive parameter. Stationarity ensures that the mean, µ = E[Y_t], and the variance, V_t² = Var(Y_t), are constant for all t. In particular, it is known that µ = 0 and V² = σ_ε²/(1 − ϕ²), where σ_ε² is the variance of the innovations (see Hamilton (1994) for more details). The autocovariance function γ_k depends only on the lag k and is given by γ_k = ϕ^k σ_ε²/(1 − ϕ²), whereas the autocorrelation function (ACF), ρ_k, has the form ρ_k = ϕ^k. Notice that the ACF of a weakly stationary AR(1) process decays exponentially with rate ϕ. Figure 2 shows the theoretical autocorrelation function for some values of the parameter. If the parameter assumes values close to 1, the decline of the ACF is much slower. For a detailed discussion on autoregressive processes we refer the reader to the manuals of Hamilton (1994) and of Brockwell and Davis (1991).
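The behavior described above is easy to verify by simulation; the following is a minimal sketch (our own illustration, not code from the paper) that simulates an AR(1) path and estimates its ACF, whose sample values should decay roughly like ϕ^k:

```python
import numpy as np

def simulate_ar1(phi, n, sigma_eps=1.0, y0=0.0, rng=None):
    """Simulate one trajectory of the AR(1) process Y_t = phi * Y_{t-1} + eps_t."""
    rng = np.random.default_rng() if rng is None else rng
    y = np.empty(n + 1)
    y[0] = y0
    eps = rng.normal(0.0, sigma_eps, size=n)
    for t in range(1, n + 1):
        y[t] = phi * y[t - 1] + eps[t - 1]
    return y

def sample_acf(y, max_lag):
    """Sample autocorrelation function at lags 1, ..., max_lag."""
    y = np.asarray(y) - np.mean(y)
    denom = np.dot(y, y)
    return np.array([np.dot(y[:-k], y[k:]) / denom for k in range(1, max_lag + 1)])

y = simulate_ar1(0.5, 100_000, rng=np.random.default_rng(42))
acf = sample_acf(y, 3)  # should be close to 0.5, 0.25, 0.125
```

For |ϕ| < 1 the trajectory is mean reverting, and the estimated ACF reproduces the exponential decay ρ_k = ϕ^k discussed above.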

The Unit Root Case
In this paper we are particularly interested in the unit root case, i.e., when the autoregressive parameter ϕ = 1. The definition of a unit root process is the following.

Definition 2 (I(1)). The discrete time stochastic process (Y_t)_t is a unit root process if

Y_t = Y_{t−1} + ε_t,

where (ε_t)_t is a white noise process. We denote such a process by I(1). Moreover, Y_{t−1} is independent of ε_t.
Observe that a unit root process is a random walk. As mentioned in the previous section, since the autoregressive parameter is equal to 1, the I(1) process is not stationary, as we can also infer by observing figure 3, which reports some simulated examples of paths of a unit root process. We can observe that the trajectories are not stationary in their means, as we would expect if they were constant over time. As regards the variance, and more in general all higher-order moments, it depends on t. In particular, by repeated substitutions we can write Y_t = Y_0 + Σ_{i=1}^{t} ε_i, so that V_t² = Var(Y_t) = tσ_ε², which approaches infinity when t tends to infinity.
We can investigate the behavior of the state variable Y_t and of its standard deviation V_t with a Monte Carlo simulation. In particular, we generate 5000 trajectories of 250 points of an I(1) process with initial condition Y_0 = 0 under the assumption that the ε_t are i.i.d. N(0, σ_ε); each simulated path (ỹ_t)_{t=1,…,250} is a realization of the I(1) process, whereas if we fix t = t_0 we have 5000 realizations of the state variable at time t_0, (ỹ_{t_0}^{(i)})_{i=1,…,5000}. Figure 4 (panel (a)) reports the estimated probability density function of (ỹ_t^{(i)})_{i=1,…,5000} for increasing values of t. We see that the dispersion of the distribution of ỹ_t increases as t increases, signalling that the process is not stationary in variance. Moreover, figure 4 (panel (b)) displays the standard deviation, say Ṽ_t, of a realization (ỹ_t^{(i)})_{i=1,…,5000} for increasing values of t. As expected, Ṽ_t is monotone increasing.
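The Monte Carlo experiment described above can be sketched as follows (a numpy-based illustration of ours; the paper's own code is not given). The cross-sectional standard deviation at each t should track the theoretical value V_t = σ_ε√t:

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, sigma_eps = 5000, 250, 1.0

# Each path is a cumulative sum of i.i.d. N(0, sigma_eps) innovations, with Y_0 = 0.
eps = rng.normal(0.0, sigma_eps, size=(n_paths, n_steps))
paths = np.cumsum(eps, axis=1)

# Cross-sectional standard deviation of the state variable at each time t = 1, ..., 250.
v_hat = paths.std(axis=0)
# Under the I(1) model V_t = sigma_eps * sqrt(t), so v_hat[-1] should be close to sqrt(250).
```

The monotone growth of `v_hat` reproduces the non-stationarity in variance shown in figure 4, panel (b).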
The theoretical autocorrelations of an I(1) process tend to one asymptotically for any lag k, but the sample autocorrelations may decline rather fast even in large samples (Hassler, 1994). The average ACF up to lag k = 20 over the 5000 simulated trajectories of our random experiment is reported in figure 5. Clearly, the inspection of this ACF is not sufficient to detect the presence of a unit root. Several tests for the presence of unit roots are available in the literature (Dickey, 1976; Dickey & Fuller, 1979; 1981).

The Convolution Based Unit Root Process
In this section we introduce a modified version of the standard unit root process based on the notion of C-convolution introduced by Cherubini, Mulinacci and Romagnoli (2011). The C-convolution was originally introduced to determine the distribution function of a sum of two dependent and continuous random variables X and Y. The dependence structure between X and Y is described by a copula function C_{X,Y}. Conversely, given two distribution functions F_X and F_Y and a suitable bivariate function C_{X,Y}, we may build a joint distribution for (X, Y). The requirements to be met by this function are that: i) it must be grounded (C(u, 0) = C(0, v) = 0); ii) it must have uniform marginals (C(u, 1) = u and C(1, v) = v); iii) it must be 2-increasing. The one-to-one relationship that results between copula functions and joint distributions is known as Sklar's theorem. See Nelsen (2006) and Joe (1997) for a detailed discussion on copulas.
The C-convolution technique links the marginal distributions of X and Y and their dependence structure, given by a copula, so as to determine the probability distribution of the sum X + Y. The seminal paper is that of Cherubini, Mulinacci and Romagnoli (2011), where we may find the concept of convolution-based copulas. If X and Y are two real-valued random variables with copula C_{X,Y} and continuous marginals F_X and F_Y, then the distribution function of the sum X + Y is

F_{X+Y}(z) = ∫₀¹ D₁C_{X,Y}(w, F_Y(z − F_X^{−1}(w))) dw,

where D₁C_{X,Y} denotes the partial derivative of the copula with respect to its first argument. The choice of the copula function affects the probabilistic behavior of the distribution of the sum (for a detailed discussion on this topic see the book of Cherubini, Gobbi, Mulinacci & Romagnoli (2012)). Some of the most used copula functions are the Gaussian copula, the Clayton copula, the Frank copula and the Gumbel copula. The Gaussian copula is constructed from a bivariate normal distribution over R² by using the probability integral transform. For a given correlation coefficient ρ, the Gaussian copula with parameter ρ can be written as

C(u, v; ρ) = Φ₂(Φ^{−1}(u), Φ^{−1}(v); ρ),

where Φ₂ is the bivariate standard normal distribution with correlation coefficient ρ and Φ is the standard normal distribution. The Clayton copula is an asymmetric Archimedean copula, exhibiting greater dependence in the negative tail than in the positive. Its functional form is

C(u, v; θ) = (u^{−θ} + v^{−θ} − 1)^{−1/θ},

where the parameter θ assumes positive values, θ ∈ (0, +∞). The Frank copula is a symmetric copula defined as

C(u, v; θ) = −(1/θ) ln(1 + (e^{−θu} − 1)(e^{−θv} − 1)/(e^{−θ} − 1)),

where the parameter θ is a real number, θ ∈ R.
The Gumbel copula is an asymmetric Archimedean copula, exhibiting greater dependence in the positive tail than in the negative. This copula is given by

C(u, v; θ) = exp(−((−ln u)^θ + (−ln v)^θ)^{1/θ}),

where θ ∈ [1, +∞). It is important to notice that the C-convolution has a closed form if and only if the marginal distributions are Gaussian and the copula linking them is the Gaussian copula (see Cherubini, Gobbi, Mulinacci & Romagnoli, 2012). For computational purposes, in this paper we only consider that case. The reader can find some examples of C-convolution with Clayton and Frank copulas in the book of Cherubini, Gobbi, Mulinacci & Romagnoli (2012).
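The closed-form claim can be checked numerically: under a Gaussian copula, D₁C(u, v) = Φ((Φ^{−1}(v) − ρΦ^{−1}(u))/√(1 − ρ²)), and the C-convolution of two standard normal marginals should coincide with the N(0, 2 + 2ρ) distribution. A minimal numerical sketch (our own illustration, assuming scipy is available; function names are ours):

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def cconv_cdf(z, F_X_inv, F_Y, rho):
    """C-convolution CDF of X + Y under a Gaussian copula with parameter rho:
    F_{X+Y}(z) = integral over w in (0,1) of D1 C(w, F_Y(z - F_X^{-1}(w))) dw."""
    def integrand(w):
        v = F_Y(z - F_X_inv(w))
        v = min(max(v, 1e-12), 1 - 1e-12)  # keep Phi^{-1}(v) finite
        # Gaussian copula: D1 C(u, v) = Phi((Phi^{-1}(v) - rho * Phi^{-1}(u)) / sqrt(1 - rho^2))
        return norm.cdf((norm.ppf(v) - rho * norm.ppf(w)) / np.sqrt(1.0 - rho**2))
    val, _ = quad(integrand, 0.0, 1.0, limit=200)
    return val

# Sanity check against the closed form: with N(0, 1) marginals and Gaussian copula
# with parameter rho, the sum X + Y is N(0, 2 + 2 * rho).
rho, z = -0.25, 0.5
numeric = cconv_cdf(z, norm.ppf, norm.cdf, rho)
exact = norm.cdf(z / np.sqrt(2.0 + 2.0 * rho))
```

The agreement between `numeric` and `exact` illustrates why, in the Gaussian case, the recursion for the distribution of the process can be carried out in closed form.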
Here, we are interested in how to use the C-convolution to model stochastic processes. As shown in Cherubini, Gobbi and Mulinacci (2016), the C-convolution can be applied iteratively to recover the distribution of the level of a process from the distributions of its increments and their dependence on the current level. The process (Y_t)_t is called the C-convolution based process. This methodology may be applied to define a new version of the unit root process I(1), Y_t = Y_{t−1} + ε_t, when Y_{t−1} and ε_t are not independent, as in the standard case, but linked by some copula C. This is our modified version of an I(1) process.
Notice that if the copula C is the independence copula, that is C(u, v) = uv, the C-convolution coincides with the standard convolution and we obtain the standard I(1) process. In this section we consider a C-convolution-based unit root process, C-UR(1), obtained by imposing a dependence structure between Y_{t−1} and ε_t. The distribution of Y_t = Y_{t−1} + ε_t is given by the C-convolution between the distribution of Y_{t−1}, F_{t−1}, and the distribution of ε_t, H_t. Suppose that Y_0 has distribution F_0.
Then, the distribution of Y_t is obtained recursively as

F_t(y) = ∫₀¹ D₁C(w, H_t(y − F_{t−1}^{−1}(w))) dw.

We are now ready to introduce the definition of our modified version of an I(1) process. In particular we have the following.

Definition 3 (C-UR(1)). The discrete time stochastic process (Y_t)_t is a C-convolution based unit root process, C-UR(1), if
• the functional form is that of a unit root process, Y_t = Y_{t−1} + ε_t;
• there exists a dependence structure between the state variable at time t−1, Y_{t−1}, and the innovation ε_t; moreover, this dependence structure is described by a copula function, C, with a time-invariant parameter.
Remark 1. Patton (2005; 2006) introduced the notion of conditional copulas, which allows one to define a time-varying dependence structure. In other words, the parameter of the copula function depends on the time t while remaining within the same family of copulas.

Now, we propose our algorithm (Alg1) to simulate trajectories from a C-UR(1) process. The input is given by a sequence of distributions of the innovations ε_t, which for the sake of simplicity we assume stationary, H_t = H, and Gaussian: H ∼ N(0, σ_ε). Moreover, we assume a stationary Gaussian dependence structure, C(u, v; ρ). We also set Y_0 = 0. We describe a procedure to generate one iteration of an n-step trajectory. We generate 5000 trajectories of 250 points. We can think of daily trajectories, so that 250 points refer to a calendar year. The number of trajectories has been chosen according to the computational resources available. Without loss of generality we set σ_ε = 1, and we select three different levels of negative correlation: ρ = −5%, ρ = −10% and ρ = −25%.
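Since the C-convolution of a Gaussian level and a Gaussian innovation through a Gaussian copula is again Gaussian, the simulation step reduces to drawing ε_t from its conditional normal distribution given Y_{t−1}. The sketch below is our own reconstruction under these assumptions (the paper's Alg1 listing is not reproduced here); in it, the variance of the level follows V_t² = V_{t−1}² + σ_ε² + 2ρσ_εV_{t−1}:

```python
import numpy as np

def simulate_cur1(n_steps, rho, sigma_eps=1.0, rng=None):
    """Simulate one C-UR(1) path with Gaussian innovations and a Gaussian copula
    with parameter rho between Y_{t-1} and eps_t.  Because everything is Gaussian,
    (Y_{t-1}, eps_t) is bivariate normal, so
        eps_t | Y_{t-1} = y  ~  N(rho * sigma_eps * y / V_{t-1}, sigma_eps^2 * (1 - rho^2))."""
    rng = np.random.default_rng() if rng is None else rng
    y = np.empty(n_steps + 1)
    y[0], v = 0.0, 0.0                      # Y_0 = 0, hence V_0 = 0
    for t in range(1, n_steps + 1):
        if v == 0.0:                        # first step: eps_1 independent of a constant Y_0
            eps = rng.normal(0.0, sigma_eps)
        else:
            mean = rho * sigma_eps * y[t - 1] / v
            eps = rng.normal(mean, sigma_eps * np.sqrt(1.0 - rho**2))
        y[t] = y[t - 1] + eps
        # Variance recursion of the level under the Gaussian assumptions above.
        v = np.sqrt(v**2 + sigma_eps**2 + 2.0 * rho * sigma_eps * v)
    return y
```

With ρ = −25% the cross-sectional standard deviation of the simulated paths settles near a constant level, in contrast to the σ_ε√t growth of the standard I(1) case.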

Results
We now describe the results of our simulations. Figure 6 shows some simulated trajectories for each level of correlation. We can observe that, as the correlation increases in absolute value, the dynamics of the trajectories appears more stationary both in mean and in variance. If we compare this figure with figure 3, the effect is even more clear. We notice a mean-reverting effect which becomes stronger as the negative correlation increases. The behavior of the autocorrelations of our C-UR(1) is also interesting. Table 1 reports the autocorrelation function for the first 20 lags for an I(1) process and for our C-UR(1) process with correlation levels from −3% to −25%. The impact of negative correlation is clear. If in the case of low negative correlation the decline of the autocorrelations is in fact identical (with ρ = −3%) or very similar (with ρ = −5%) to that of an I(1) process, when ρ is −10% or −25% the situation drastically changes. In particular, a negative correlation larger in absolute value than 20% virtually eliminates serial correlation while the process remains a unit root. For example, in the case of ρ = −10% the autocorrelations are very close to those of a standard AR(1) process with autoregressive parameter ϕ around 0.94, whereas in the case of ρ = −25% they are very similar to those of a standard AR(1) process with ϕ around 0.84, as we can see in figure 7.
Figure 8 compares the dynamics of the autocorrelations of an I(1) process with those of a C-UR(1) process with negative correlation from −5% to −25%. As regards the variance of the state variable, figures 9 and 10 show the impact of the negative correlation. More precisely, figure 9 reports the behavior of the standard deviation V_t as a function of the time t. The convergence towards a constant level (given by equation 7) is faster as the negative correlation increases. If ρ = −25%, the convergence to the limit value is immediate and the variance of the state variable is constant over time, as in stationary processes. Figure 10 emphasizes this aspect, showing that the dispersion is the same for both instants of time considered if ρ = −25%. The linear relationship between the time t and the variance disappears.
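Under the Gaussian assumptions of our simulation design, the standard deviation of the level evolves as V_t² = V_{t−1}² + σ_ε² + 2ρσ_εV_{t−1}, whose fixed point is V* = −σ_ε/(2ρ), finite only for negative ρ. A small sketch of ours iterating this recursion (the function name is illustrative, not from the paper):

```python
import numpy as np

def variance_limit_path(rho, sigma_eps=1.0, n_steps=250):
    """Iterate the variance recursion V_t^2 = V_{t-1}^2 + sigma^2 + 2*rho*sigma*V_{t-1}
    from V_0 = 0 and return the path together with the fixed point -sigma / (2 * rho)."""
    v = np.empty(n_steps + 1)
    v[0] = 0.0
    for t in range(1, n_steps + 1):
        v[t] = np.sqrt(v[t - 1]**2 + sigma_eps**2 + 2.0 * rho * sigma_eps * v[t - 1])
    return v, -sigma_eps / (2.0 * rho)
```

For ρ = −25% the limit is 2σ_ε and the recursion reaches it within a few dozen steps, while for milder negative correlations (e.g. ρ = −5%) the limit is larger and the convergence correspondingly slower, consistent with the behavior reported in figure 9.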

Conclusion
In this paper we propose a convolution-based approach to the simulation of a modified version of a unit root process, which we call the C-convolution-based unit root process, C-UR(1). The idea is that once the distribution of the innovations is specified, and the dependence structure between innovations and levels of the process is chosen, the distribution of the process can be automatically recovered. The variance of this new process converges to a constant level, and this convergence is faster as the correlation becomes more negative. The autocorrelation function rapidly decays towards zero as soon as the correlation is around −20%. For these reasons, the model is well suited to address problems of persistent and unpredictable shocks, beyond the standard paradigm of linear models.

Figure 3. Examples of trajectories of a unit root process.

Figure 4. (a) Probability density function of the state variable of a simulated unit root process; (b) standard deviation of the state variable of a simulated unit root process.

Table 1. Comparison among autocorrelation values for different choices of the correlation coefficient.