Introduction

Time series data has been studied for a long time. A time series is a series of data points indexed in time order, typically a sequence taken at equally spaced, discrete time intervals. The most common fields involving time series analysis are signal processing, econometrics, mathematical finance, and weather prediction. The figure below, a snapshot of SENSEX data taken from the Google Finance tool, shows the nature of time-series data.

Example of Time Series Data

Techniques of Time Series Analysis

Traditional time series analysis involves auto-regressive techniques (ARIMA, SARIMA, VARIMA, etc.). These techniques can model both univariate and multivariate time series data. A sequence is univariate when a scalar data point is observed at each time step; a multivariate sequence has a vector data point at each time step. Traditional time series analysis is comparatively easier when applied to univariate data, while modeling multivariate time series brings more complications, one of them being visualization. Modern machine learning and deep learning techniques can model time series efficiently and are widely used. Recurrent Neural Networks (RNNs) are readily applicable to sequential data, and this blog explains one such application. Apart from the structured data mentioned above, these techniques have been successfully applied to text and images. Gated Recurrent Units (GRUs) and Long Short-Term Memory networks (LSTMs) have a wide range of applications in sequential data analysis. Understanding LSTM Networks, a blog post by Chris Olah, is a good place to learn about LSTMs.

Spatio-Temporal Sequence

What if the data point at each time step is a matrix with spatial dependencies? Can we apply the same techniques to this data? Perhaps we can reshape the matrix into a single long vector and apply the usual time series techniques. But then the question arises: can we still capture the spatial dependency present in the data? This blog intends to answer this question briefly.
First, let us understand what we mean by Spatio-temporal data. To answer this, let's take the example of a video. Conceptually, a video is a set of images (frames) passing with time. In the most basic video, 30 such frames pass in one second; such videos are said to be at 30 fps. When we run a video in 2x mode, we simply run the same set of frames at 60 fps. Now that the concept of a video is clear, let us look at each frame. Mathematically, an image is a matrix A of pixel values a_ij, where each pixel value represents colour intensity and can take an integer value from 0 to 255. These pixel values have spatial dependencies between them; combined with the temporal ordering of the frames, this makes a video a perfect example of Spatio-temporal data.

Unlike a video, where pixels take values only in a specific range, Spatio-temporal data such as meteorological maps of climatic variables can take any real value. Pressure maps, heat maps, rainfall measurements, and wind-speed records are Spatio-temporal in nature. Traditional analysis methods cannot be used to analyze and model such data directly. We need a way to present the data so that the model can capture patterns in both the spatial and temporal domains simultaneously.
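To make the structure concrete, here is a minimal sketch of how a short greyscale video can be held as a NumPy array, and what is lost when each frame is flattened into a long vector. The shapes and variable names here are illustrative assumptions, not data from the original post.

```python
import numpy as np

# A hypothetical 2-second clip at 30 fps: 60 frames of 40x40 greyscale pixels.
video = np.random.randint(0, 256, size=(60, 40, 40), dtype=np.uint8)
print(video.shape)       # (60, 40, 40): (time, height, width)

# Flattening each frame gives a multivariate series of 1600-length vectors,
# but the neighbourhood relationships between pixels are no longer explicit.
flattened = video.reshape(60, -1)
print(flattened.shape)   # (60, 1600)
```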

How to model Spatio-Temporal Data

In their work titled Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting, presented at the Conference on Neural Information Processing Systems (NIPS) 2015, the authors introduced a new machine learning approach to handle such data. They used this technique to forecast very short-term precipitation over Hong Kong from radar maps.

The technique, known as Convolutional LSTM, is a small and efficient modification of the conventional fully connected LSTM. Convolutional LSTMs, or convLSTMs, use the convolution operation to capture spatial dependency within the traditionally temporal pattern-learning LSTM.

The mathematical equations are similar to those of the traditional LSTM network, with subtle changes. In an LSTM, we learn weights and biases during backward propagation while optimizing a loss function; convLSTM networks instead learn convolutional filters. The linear combination of the weights and the input x is replaced with a convolution operation, denoted by ∗. The equation for the forget gate layer is shown below, where σ is the sigmoid function, ∗ is convolution, and ∘ is the element-wise (Hadamard) product:

f_t = σ(W_xf ∗ X_t + W_hf ∗ H_(t−1) + W_cf ∘ C_(t−1) + b_f)
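A minimal NumPy/SciPy sketch of that forget gate can make the change from dot products to convolutions concrete. All shapes, filter sizes, and variable names below are illustrative assumptions.

```python
import numpy as np
from scipy.signal import convolve2d

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative 5x5 single-channel input frame, hidden state, and cell state.
X_t, H_prev, C_prev = (np.random.rand(5, 5) for _ in range(3))
# Learned parameters (random here): 3x3 filters for the convolved terms,
# a 5x5 weight for the Hadamard term, and a scalar bias.
W_xf, W_hf = np.random.randn(3, 3), np.random.randn(3, 3)
W_cf, b_f = np.random.randn(5, 5), 0.0

# f_t = sigmoid(W_xf * X_t + W_hf * H_(t-1) + W_cf o C_(t-1) + b_f),
# where '*' is convolution and 'o' is the element-wise product.
f_t = sigmoid(convolve2d(X_t, W_xf, mode='same')
              + convolve2d(H_prev, W_hf, mode='same')
              + W_cf * C_prev
              + b_f)
print(f_t.shape)  # (5, 5): one gate value per spatial location
```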

Implementation of convLSTM

Convolutional LSTM can be implemented using Keras in Python. Keras has a ConvLSTM2D layer which can be used to model such data after proper pre-processing.
One simple implementation is shown below.
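The sketch below is a minimal tf.keras version of such a network; the filter count, kernel size, activation, loss, and optimizer are illustrative assumptions rather than the post's exact settings.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import ConvLSTM2D, Conv2D

model = Sequential()
# Hidden ConvLSTM layer: learns spatio-temporal filters over the input sequence.
# Each sample is (time_steps, height, width, channels); Keras adds the batch axis.
model.add(ConvLSTM2D(filters=16, kernel_size=(3, 3), padding='same',
                     return_sequences=False,
                     input_shape=(None, 40, 40, 1)))
# Final convolution collapses the hidden state into the predicted t+1 frame.
model.add(Conv2D(filters=1, kernel_size=(3, 3), padding='same',
                 activation='sigmoid'))  # assumes pixel values scaled to [0, 1]
model.compile(loss='binary_crossentropy', optimizer='adam')
model.summary()
```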

The above network has one hidden ConvLSTM layer. The last layer is a convolution layer that produces the future frame (the t+1 instance) from data that is 40x40 at each time step. The input to this network is five-dimensional (#samples, #time_steps, height, width, #channels), so each individual time step is a 4-dimensional slice. The output of the network is a single frame of 40x40.
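As a usage illustration, continuing from the model above, the network can be fed a batch of such 5-dimensional sequences and asked for the next frame. The random data and training settings here are assumptions for demonstration only.

```python
import numpy as np

# Hypothetical training data: 100 sequences of 10 frames, each 40x40, 1 channel.
X = np.random.rand(100, 10, 40, 40, 1).astype('float32')
y = np.random.rand(100, 40, 40, 1).astype('float32')  # the frame at t+1

model.fit(X, y, batch_size=8, epochs=2, validation_split=0.1)

next_frame = model.predict(X[:1])
print(next_frame.shape)  # (1, 40, 40, 1): a single predicted 40x40 frame
```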

Conclusion

In this age of data explosion, we have a large volume and variety of data available to us, and it is not always simple to analyze. Spatio-temporal data is one such kind. With the increase in computing power and the availability of complex neural networks, we can now model such data. We have learnt that a network known as convLSTM is especially helpful for modelling it and gaining insights, and we have also seen how to implement such a model. Once the data is properly pre-processed, convLSTM networks can be used to predict, classify, or forecast such data.

Posted by Saurabh Katariyar