
How To Make STGNNs Capable of Forecasting Long-term Multivariate Time Series Data?

Last Updated on July 28, 2022 by Editorial Team

Author(s): Reza Yazdanfar

Originally published on Towards AI, the World's Leading AI and Technology News and Media Company. If you are building an AI-related product or service, we invite you to consider becoming an AI sponsor. At Towards AI, we help scale AI and technology startups. Let us help you unleash your technology to the masses.

Time series forecasting (TSF) is vital in all industries, from energy to healthcare, and researchers have made significant advances through the development of TSF models. Because time series patterns and the relationships between series matter, analysis that accounts for long-term dependencies in the dataset is a must. This article is about designing a new model, built on an existing one, that handles long-term dependencies and produces segment-level representations. That model is STEP: an STGNN (Spatial-Temporal Graph Neural Network) Enhanced by a Pre-training model.

STEP:

First of all, Do Not Be Confused:

  • Spatial-temporal graph data = multivariate time series

Here, the data used (traffic flow) is time series data recorded by sensors on the road.

Did you see those two patterns in Figure 1 above?

Answer: there are two repeating patterns: daily and weekly periodicities.
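One quick way to check for such periodicities in your own data is lag autocorrelation: a strong correlation at the daily and weekly lags, and a much weaker one at an arbitrary lag, confirms the two patterns. A small sketch on a synthetic series (288 five-minute samples per day is an assumption typical of traffic datasets, not taken from this paper):

```python
import numpy as np

def autocorr(x, lag):
    """Pearson correlation between a series and its lagged copy."""
    x = np.asarray(x, dtype=float)
    return np.corrcoef(x[:-lag], x[lag:])[0, 1]

# Synthetic "traffic" series over four weeks: a daily cycle (period 288),
# a weekly cycle (period 288 * 7), and some noise.
rng = np.random.default_rng(0)
t = np.arange(288 * 7 * 4)
series = (np.sin(2 * np.pi * t / 288)
          + 0.5 * np.sin(2 * np.pi * t / (288 * 7))
          + 0.1 * rng.standard_normal(t.size))

print(autocorr(series, 288))       # high: daily periodicity
print(autocorr(series, 288 * 7))   # high: weekly periodicity
print(autocorr(series, 72))        # much lower: no cycle at this lag
```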

First, STGNN is the abbreviation of "Spatial-Temporal Graph Neural Network", for those who don't know it or know it only vaguely.

STGNNs = Sequential Networks + Graph Neural Networks (GNNs)

We use GNNs to deal with the relationships between time series, and sequential models to capture the patterns within each time series. By combining the two, we can achieve outstanding results. But, as researchers say, there is no free lunch: powerful models demand complicated architectures, so (in most cases) the computational cost rises linearly or quadratically with the input length. Also, don't forget the size of our time series, which is usually considerable. As a result, STGNNs, like other such models, can only look at small input windows when forecasting, and this reliance on small windows makes the model unreliable.
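To make the "sequential + graph" combination concrete, here is a toy NumPy sketch, my own illustration rather than the paper's architecture: each step mixes information spatially with a normalized adjacency matrix, then temporally with a simple recurrent hidden state. All names, sizes, and weights are made up.

```python
import numpy as np

def graph_conv(X, A_hat, W):
    """One graph-convolution step: aggregate neighbor features, then project.
    X: (N, F) node features, A_hat: (N, N) normalized adjacency, W: (F, F_out)."""
    return np.tanh(A_hat @ X @ W)

def stgnn_step(X_t, H, A_hat, Wg, Wh):
    """One spatial-temporal step: spatial mixing via graph convolution,
    temporal mixing via a recurrent update of the hidden state H."""
    spatial = graph_conv(X_t, A_hat, Wg)
    return np.tanh(spatial + H @ Wh)  # new hidden state, (N, F_out)

# Toy run: 4 sensors on a line graph, 3 input features, 5 time steps.
rng = np.random.default_rng(0)
N, F, Fo, T = 4, 3, 8, 5
A = np.eye(N) + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
A_hat = A / A.sum(axis=1, keepdims=True)  # row-normalized adjacency
Wg, Wh = rng.standard_normal((F, Fo)), rng.standard_normal((Fo, Fo))

H = np.zeros((N, Fo))
for t in range(T):
    X_t = rng.standard_normal((N, F))  # stand-in for sensor readings at step t
    H = stgnn_step(X_t, H, A_hat, Wg, Wh)
print(H.shape)  # (4, 8)
```

Note how the hidden state only ever sees one window of recent inputs; this is exactly the small-window limitation described above.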

Problem: 1. STGNNs can't capture long-term dependencies.

2. The dependency graph is often missing.

Solution: STEP (STGNN Enhanced by a scalable time-series Pre-training model)

  • a modified version of STGNNs

Illustration:

Two initiatives:

1. Proposing TSFormer, a Transformer-based block with an autoencoder (encoder-decoder) structure, trained as an unsupervised model. TSFormer is able to capture long-term dependencies.

2. Proposing a graph structure learner to learn the dependency graph.

After proposing these two, we just need to weld them into a joint model, and that is the final solution. That's it! Sounds easy? Let's make it as simple as possible. 😉

Let’s see the proposed architecture:

As you can see from Figure 2, the model includes two phases:

phase 1) pre-training

Figure 3

The scheme is a masked autoencoding model for time series data, built on Transformer blocks (TSFormer). This model is able to capture long-term dependencies and produce segment-level representations that carry valuable information.
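As a sketch of the masking step only (the Transformer encoder and decoder are omitted, and the patch length and mask ratio here are illustrative assumptions, not the paper's settings): the series is cut into non-overlapping patches, most patches are hidden, the encoder sees only the visible ones, and the reconstruction loss is computed on the masked positions.

```python
import numpy as np

def mask_patches(series, patch_len, mask_ratio, rng):
    """Split a series into non-overlapping patches and mask a random subset.
    Returns all patches, a boolean mask (True = masked), and the visible patches."""
    P = len(series) // patch_len
    patches = series[:P * patch_len].reshape(P, patch_len)
    mask = np.zeros(P, dtype=bool)
    mask[rng.choice(P, size=int(round(mask_ratio * P)), replace=False)] = True
    return patches, mask, patches[~mask]

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20, 2016))  # one week of 5-minute samples
patches, mask, visible = mask_patches(series, patch_len=12, mask_ratio=0.75, rng=rng)
print(patches.shape, mask.sum(), visible.shape)  # (168, 12) 126 (42, 12)

# The decoder would reconstruct the masked patches; the loss is taken
# only over masked positions (zeros stand in for real decoder output here).
recon = np.zeros_like(patches)
loss = np.abs(recon[mask] - patches[mask]).mean()
```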

phase 2) forecasting

Figure 4

In this phase, the pre-trained model from the previous phase (which captures long-term dependencies) is used to enhance the downstream STGNN. Additionally, a discrete and sparse graph learner is designed in case the pre-defined graph is missing.

That's the model in general. Now, let's dive deeper into the details of these two phases:

1. The Pre-Training Phase

Using a pre-trained model here is motivated by the rising interest in (and, of course, results of) applying them in NLP projects. Though pre-trained models are widely adopted in NLP (which is also sequential data), time series differ in some respects. You can read the full description in my previous article: "How to Design a Pre-training Model (TSFormer) For Time Series?"

2. The Forecasting Phase

The input here is divided into P non-overlapping patches of length L. Our TSFormer produces a representation S_i for each input patch in the forecasting phase. One characteristic of STGNNs is that they take only the newest patch as input. Therefore, we enhance the STGNN with the representations produced by TSFormer.
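To make the shapes concrete, here is a toy sketch (with made-up dimensions, not the paper's) of augmenting the newest patch with a per-sensor history representation before it enters the STGNN:

```python
import numpy as np

# Hypothetical shapes: N sensors, P non-overlapping patches of length L,
# and a d-dimensional TSFormer representation per patch.
N, P, L, d = 4, 168, 12, 32
rng = np.random.default_rng(0)
history = rng.standard_normal((N, P, L))  # raw patches per sensor
reprs = rng.standard_normal((N, P, d))    # stand-in for TSFormer outputs S_i

last_patch = history[:, -1, :]            # what a plain STGNN would see
# STEP-style enhancement: append the long-history representation
# of the newest patch to the STGNN's input features.
enhanced_input = np.concatenate([last_patch, reprs[:, -1, :]], axis=-1)
print(enhanced_input.shape)  # (4, 44): L + d features per sensor
```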

STEP From ZERO | The Process

Graph structure learning

Problem) most STGNNs depend on a pre-defined graph, which is unavailable or not good enough in most cases. Also, learning the graph structure (the relationships between nodes i and j of the time series) jointly with the STGNN leads to great complexity.

Solution) pre-trained TSFormer

Interpretation) Proposing a discrete sparse graph. How? 1. graph regularization to fit the supervised information; 2. a kNN graph to constrain sparsity. Its formulation is summarized below:
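A minimal sketch of the kNN sparsification idea (the cosine similarity measure and the value of k here are my own illustrative choices): each node keeps edges only to its k most similar neighbors, which bounds the graph's density.

```python
import numpy as np

def knn_graph(H, k):
    """Build a kNN adjacency from node representations H of shape (N, d):
    keep an edge i -> j only if j is among the k nodes most similar to i."""
    Hn = H / np.linalg.norm(H, axis=1, keepdims=True)
    sim = Hn @ Hn.T                  # cosine similarity between all node pairs
    np.fill_diagonal(sim, -np.inf)   # exclude self-loops
    A = np.zeros_like(sim)
    nbrs = np.argsort(-sim, axis=1)[:, :k]
    for i, js in enumerate(nbrs):
        A[i, js] = 1.0
    return A

rng = np.random.default_rng(0)
H = rng.standard_normal((6, 16))  # stand-in for per-node representations
A = knn_graph(H, k=2)
print(A.sum(axis=1))  # every row keeps exactly k = 2 edges
```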

Downstream spatial-temporal graph neural network

problem) usual STGNNs' input: last patch + dependency graph

solution) STEP (which adds the input patch's representation to the input)

interpretation) As we discussed in my previous article, "How to Design a Pre-training Model (TSFormer) For Time Series?", TSFormer captures long-term dependencies; consequently, it makes the representation H rich in information. Also, WaveNet is selected as our backbone, which helps capture multivariate time series properly. But how? It blends graph convolution with dilated convolution. Consequently, our forecasts are supported by WaveNet's output latent, hidden representations. How? By using an MLP.
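To see why dilated convolution helps with long histories, here is a minimal NumPy sketch of a 1-D dilated causal convolution; this is not WaveNet itself, just the core operation. Stacking such layers with doubled dilations (1, 2, 4, ...) grows the receptive field exponentially in depth, which is how a WaveNet-style backbone covers long windows cheaply.

```python
import numpy as np

def dilated_causal_conv(x, w, dilation):
    """1-D dilated causal convolution: the output at time t depends only
    on x[t], x[t - dilation], x[t - 2*dilation], ... (never the future)."""
    y = np.zeros(len(x))
    for t in range(len(x)):
        for k in range(len(w)):
            idx = t - k * dilation
            if idx >= 0:
                y[t] += w[k] * x[idx]
    return y

x = np.arange(8, dtype=float)
# Kernel [1, -1] with dilation 2 computes the difference x[t] - x[t-2].
y = dilated_causal_conv(x, np.array([1.0, -1.0]), dilation=2)
print(y)  # [0. 1. 2. 2. 2. 2. 2. 2.]
```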

Q) If you look at the forecasting-phase architecture, you'll see two streams going into the Spatial-Temporal Graph NN block. So, how can we manage that?

A) by using Eq. 7:

In the end, the forecasts are made by an MLP:

The output of the downstream STGNN:

That's the end of this STGNN modification. Hope you enjoyed it. The rest is the results on real-world datasets.

Results:

data:

The model is trained on three traffic speed datasets from three regions in the USA:

  1. METR-LA
  2. PEMS-BAY
  3. PEMS04
Table 1. Statistics of datasets

Metrics:

  1. MAE (Mean Absolute Error)
  2. RMSE (Root Mean Squared Error)
  3. MAPE (Mean Absolute Percentage Error)
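These three metrics are straightforward to implement; a minimal NumPy version (assuming the targets contain no zeros, which MAPE requires):

```python
import numpy as np

def mae(y, yhat):
    """Mean Absolute Error."""
    return np.abs(y - yhat).mean()

def rmse(y, yhat):
    """Root Mean Squared Error."""
    return np.sqrt(((y - yhat) ** 2).mean())

def mape(y, yhat):
    """Mean Absolute Percentage Error, in percent. Assumes y has no zeros."""
    return np.abs((y - yhat) / y).mean() * 100

y = np.array([10.0, 20.0, 40.0])
yhat = np.array([12.0, 18.0, 44.0])
print(mae(y, yhat))   # 2.666... = (2 + 2 + 4) / 3
print(rmse(y, yhat))  # 2.828... = sqrt((4 + 4 + 16) / 3)
print(mape(y, yhat))  # 13.333... = (0.2 + 0.1 + 0.1) / 3 * 100
```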

The End

The source is this.

You can contact me on Twitter here or LinkedIn here. Finally, if you have found this article interesting and useful, you can follow me on Medium to see more articles from me.


How To Make STGNNs Capable of Forecasting Long-term Multivariate Time Series Data? was originally published in Towards AI on Medium, where people are continuing the conversation by highlighting and responding to this story.


