The roots of wavelet analysis go back to the early 19th century, when Joseph Fourier studied the heat equation; his work laid the foundation for the Fourier transform (FT). However, the FT requires conditions such as stationarity of the data set, and it only gives the frequency information of a time series, not the time information simultaneously. Consequently, wavelet analysis was introduced and developed to overcome these shortcomings of the FT on financial data.
Properties of wavelets
There are two different wavelet functions, namely the father wavelet $\phi$ and the mother wavelet $\psi$. The following conditions must hold for the father wavelet function:

$$\int \phi(t)\,dt = 1, \qquad \int \phi(t)^2\,dt = 1 \tag{1}$$
And for the mother wavelet:

$$\int \psi(t)\,dt = 0, \qquad \int \psi(t)^2\,dt = 1 \tag{2}$$
Finally, we have an orthogonality condition:

$$\int \phi(t)\,\psi(t)\,dt = 0 \tag{3}$$
Furthermore, we can choose our wavelet to behave according to a specific shape. There are discrete (like the Haar wavelet), symmetric (like the Mexican hat wavelet), near-symmetric (such as symmlets) and asymmetric wavelets (like daublets).
The corresponding father and mother functions of the Haar filter are defined by:

$$\phi(t) = \begin{cases} 1 & 0 \le t < 1 \\ 0 & \text{otherwise} \end{cases}
\qquad
\psi(t) = \begin{cases} 1 & 0 \le t < \tfrac{1}{2} \\ -1 & \tfrac{1}{2} \le t < 1 \\ 0 & \text{otherwise} \end{cases} \tag{4}$$
To give an idea of what these wavelets look like, the father and mother wavelets of the Haar wavelet are plotted in the figure below.
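As a minimal sketch, the Haar father and mother functions can be written down directly and the three conditions above checked numerically (the function names `haar_father` and `haar_mother` are our own, chosen for illustration):

```python
import numpy as np

def haar_father(t):
    """Haar father (scaling) function: 1 on [0, 1), 0 elsewhere."""
    return np.where((t >= 0) & (t < 1), 1.0, 0.0)

def haar_mother(t):
    """Haar mother wavelet: 1 on [0, 1/2), -1 on [1/2, 1), 0 elsewhere."""
    return np.where((t >= 0) & (t < 0.5), 1.0,
                    np.where((t >= 0.5) & (t < 1), -1.0, 0.0))

# Check the conditions numerically on a fine grid over [0, 1).
# A power-of-two grid keeps the breakpoints 0, 1/2 and 1 exact.
t = np.linspace(0, 1, 2**16, endpoint=False)
dt = t[1] - t[0]
print(np.sum(haar_father(t)) * dt)                   # -> 1.0 (integrates to one)
print(np.sum(haar_mother(t)) * dt)                   # -> 0.0 (integrates to zero)
print(np.sum(haar_father(t) * haar_mother(t)) * dt)  # -> 0.0 (orthogonality)
```

The same grid-based check works for any other discretely sampled wavelet pair.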
The discrete wavelet transform (DWT) multiresolution analysis (MRA) allows the decomposition of a time series into its constituent multiresolution (frequency) components. Given a time series $x(t)$ with a certain number of observations $N$, its wavelet multiresolution representation is given by:

$$x(t) = \sum_k s_{J,k}\,\phi_{J,k}(t) + \sum_k d_{J,k}\,\psi_{J,k}(t) + \sum_k d_{J-1,k}\,\psi_{J-1,k}(t) + \dots + \sum_k d_{1,k}\,\psi_{1,k}(t) \tag{5}$$
Or, written more compactly:

$$x(t) = \sum_k s_{J,k}\,\phi_{J,k}(t) + \sum_{j=1}^{J} \sum_k d_{j,k}\,\psi_{j,k}(t) \tag{6}$$
One can notice that equation 6 consists mainly of two parts. The first part embodies the smooth component of the wavelet representation; we will write it as $S_J(t)$. The other part deals with the detail components of the analysis for $j = 1, \dots, J$; we will use $D_j(t)$ as notation. From this, equations 5 and 6 can be written as follows:

$$x(t) = S_J(t) + D_J(t) + D_{J-1}(t) + \dots + D_1(t) \tag{7}$$
In these equations, $J$ represents the number of multiresolution levels (or frequencies), $k$ indexes the translations (its range is determined by the length of the filter), $s_{J,k}$ and $d_{j,k}$ are the wavelet coefficients, and $\phi_{J,k}$ and $\psi_{j,k}$ are the father and mother wavelet functions, respectively. These wavelet functions are obtained from the father and mother functions through scaling and translation as follows:

$$\phi_{J,k}(t) = 2^{-J/2}\,\phi\!\left(\frac{t - 2^J k}{2^J}\right), \qquad \psi_{j,k}(t) = 2^{-j/2}\,\psi\!\left(\frac{t - 2^j k}{2^j}\right) \tag{8}$$
while the coefficients of the wavelet functions are obtained with the following formulas:

$$s_{J,k} \approx \int \phi_{J,k}(t)\,x(t)\,dt, \qquad d_{j,k} \approx \int \psi_{j,k}(t)\,x(t)\,dt, \quad j = 1, \dots, J \tag{9}$$
Equations 5, 6 and 7 show that the original series can be decomposed into different components, each defined in the time domain and representing the fluctuation of the original time series in a specific frequency band. In particular, the first couple of wavelet detail components (small $j$'s) represent the higher-frequency characteristics; these capture the short-term dynamics of the time series. On the other hand, the last couple of wavelet detail components (large $j$'s) represent the lower-frequency movements, which capture the long-term dynamics of the time series. Finally, the lowest frequency is captured by the smooth component, which usually represents the trend of the time series.
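A minimal sketch of such a decomposition, assuming the Haar filter and a series whose length is a power of two (the function `haar_mra` is our own illustrative implementation of the pyramid algorithm, not a library routine):

```python
import numpy as np

def haar_mra(x):
    """Haar DWT multiresolution analysis of a series whose length is a power of 2.

    Returns (details, smooth): details[j] is the time-domain detail component
    D_{j+1} and smooth is S_J, so that x == sum(details) + smooth.
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    details, a = [], x
    while a.size > 1:
        d = (a[0::2] - a[1::2]) / 2.0   # detail coefficients at this level
        a = (a[0::2] + a[1::2]) / 2.0   # smooth (approximation) coefficients
        block = n // d.size             # time span covered by one coefficient
        signs = np.tile(np.repeat([1.0, -1.0], block // 2), d.size)
        details.append(np.repeat(d, block) * signs)  # map back to time domain
    smooth = np.repeat(a, n)            # a is now a single value: the mean
    return details, smooth

x = np.array([4.0, 2.0, 5.0, 7.0, 1.0, 3.0, 8.0, 6.0])
details, smooth = haar_mra(x)
# details[0] is the highest-frequency component D_1; smooth is the trend S_3.
print(np.allclose(sum(details) + smooth, x))  # -> True: components add up to x
```

The key property of equation 7 is visible in the last line: the detail and smooth components add back up to the original series exactly.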
The first detail component $D_1$ captures oscillations with a period between 2 and 4 months, the second detail component $D_2$ captures oscillations between 4 and 8 months, while detail components $D_3$, $D_4$, $D_5$ and $D_6$ capture oscillations with a period of 8-16, 16-32, 32-64 and 64-128 months, respectively. Finally, the smooth component $S_6$ captures oscillations with a period longer than 128 months (about 10.7 years). To give an idea of how these decomposed signals behave, we plotted the original time series of the US equity premium and its MODWTmra decomposition in the figure below.
Visually, the first detail components display unstable behaviour, while the higher detail and smooth components appear to show trends in the data.
Forecasting based on wavelet decomposition
So why would we want to decompose our variables of interest into different time frequencies? When we decompose the dependent variable of our time series, we can exploit its characteristics at each specific frequency. For example, when we decompose stock market returns, we can find different patterns that predict different outcomes. Essentially, forecasting based on wavelet decomposition forecasts the returns in each of the decomposed frequencies and subsequently aggregates these forecasts into the ultimate prediction of the stock return. Numerous studies have found that this method outperforms the historical average benchmark as well as other models. Thus, next time you want to forecast your portfolio returns, try this method to increase your returns.
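A minimal sketch of the decompose-forecast-aggregate scheme, under strong simplifying assumptions: simulated data in place of the equity premium, a plain AR(1) forecast per component, and our own Haar MRA (`haar_mra`) instead of the MODWT used above. A real application would also need to handle the boundary effects of the transform to avoid look-ahead bias:

```python
import numpy as np

def haar_mra(x):
    """Haar DWT multiresolution analysis (series length must be a power of 2)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    details, a = [], x
    while a.size > 1:
        d = (a[0::2] - a[1::2]) / 2.0
        a = (a[0::2] + a[1::2]) / 2.0
        block = n // d.size
        signs = np.tile(np.repeat([1.0, -1.0], block // 2), d.size)
        details.append(np.repeat(d, block) * signs)
    return details, np.repeat(a, n)

def ar1_forecast(z):
    """One-step-ahead forecast from a least-squares AR(1) fit around the mean."""
    z = np.asarray(z, dtype=float)
    m = z.mean()
    den = np.dot(z[:-1] - m, z[:-1] - m)
    if den < 1e-12:                 # constant component (e.g. the smooth trend)
        return m
    phi = np.dot(z[:-1] - m, z[1:] - m) / den
    return m + phi * (z[-1] - m)

# Toy monthly "return" series (simulated, purely illustrative).
rng = np.random.default_rng(42)
x = 0.005 + rng.normal(0.0, 0.02, 128)

# Forecast each frequency component separately, then aggregate.
details, smooth = haar_mra(x)
forecast = sum(ar1_forecast(d) for d in details) + ar1_forecast(smooth)
print(forecast)  # a single one-step-ahead return forecast
```

The component models are interchangeable: the point of the scheme is only that each frequency band gets its own forecast before the results are summed.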
This article is written by Sam Ansari