A Practical Approach to Time Series Forecasting with APDTFlow

Last Updated on February 10, 2025 by Editorial Team

Author(s): Yotam Braun

Originally published on Towards AI.

https://github.com/yotambraun/APDTFlow/tree/main

Introduction

Forecasting time series data is quite different from handling a typical regression or classification task. With sequential dependencies, seasonal effects, and non-stationary behavior, these datasets demand a modeling approach that truly understands time. Researchers have explored a variety of approaches over the years, from classical statistical methods to deep learning architectures, to tackle these challenges.

We built APDTFlow specifically to address these challenges. It combines multiple cutting-edge techniques into a modular system, giving you the freedom to explore various modeling options while keeping the process transparent and flexible: no rigid black boxes here.

GitHub – yotambraun/APDTFlow: APDTFlow: A Modular Forecasting Framework for Time Series Data

APDTFlow: A Modular Forecasting Framework for Time Series Data – yotambraun/APDTFlow

github.com

Why a New Approach to Time Series Forecasting?

Many standard forecasting methods assume that each feature is independent or ignore the sequential nature of the data entirely. While some methods capture long‑term trends, they often miss out on short‑term fluctuations; others are so specialized that extending them becomes a challenge.

Here’s how we designed APDTFlow to overcome these challenges:

  • We break down the input into multiple scales, capturing both broad trends and fine details.
  • We use Neural ODEs to model the smooth evolution of hidden states over time, ensuring continuity in our forecasts.
  • Our process is split into clear, manageable steps, from decomposition right through to prediction, so you always know what’s happening behind the scenes.
  • And with built-in tools for data processing, flexible cross-validation, evaluation metrics, and a CLI, APDTFlow is ready for both experimental and real-world applications.

Core Components of APDTFlow

APDTFlow is constructed from several key modules that work together to produce reliable forecasts.

1. Multi‑Scale Decomposition

We start by decomposing the input series into multiple scales. This lets the model capture broad, slow‑moving trends as well as the quicker, seasonal fluctuations that can occur in real data:

  • Global trends: Slow‑changing components that represent the overall direction.
  • Local fluctuations: The short‑term variations that might reflect seasonal patterns or sudden shifts.
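
APDTFlow’s actual decomposition lives inside the library, but the idea can be sketched with moving-average smoothing at a few window sizes. This is a hypothetical illustration, not the package’s code:

import torch
import torch.nn.functional as F

def decompose(series, window_sizes=(3, 12)):
    """Split a 1-D series into smoothed components at several scales,
    plus the fine-grained residual the smooth scales miss. Illustrative only."""
    x = series.view(1, 1, -1)                      # (batch, channels, time)
    components = []
    for w in window_sizes:
        smoothed = F.avg_pool1d(x, kernel_size=w, stride=1, padding=w // 2)
        smoothed = smoothed[..., : x.size(-1)]     # trim any padding overhang
        components.append(smoothed.squeeze())
    residual = series - components[-1]             # short-term detail
    return components, residual

components, residual = decompose(torch.randn(100))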

2. Hierarchical Neural Dynamics

For every scale, we use Neural ODEs to capture the smooth evolution of hidden states. This approach not only handles irregular time steps gracefully but also adapts as the underlying dynamics shift over time:

  • Handling Irregularities: Interpolating between observations in datasets with irregular time steps.
  • Modeling Non‑Stationarity: Capturing gradual changes in the dynamics over time.
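
APDTFlow’s internal dynamics module isn’t reproduced here, but the underlying idea, introduced by Chen et al. (2018, see References), can be sketched with the torchdiffeq library: a small network parameterizes the derivative of the hidden state, and an ODE solver evaluates that state at arbitrary, possibly irregular, time stamps.

import torch
import torch.nn as nn
from torchdiffeq import odeint  # pip install torchdiffeq

class ODEFunc(nn.Module):
    """A small MLP parameterizing the derivative dh/dt = f(t, h)."""
    def __init__(self, hidden_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hidden_dim, 64),
            nn.Tanh(),
            nn.Linear(64, hidden_dim),
        )

    def forward(self, t, h):
        return self.net(h)

func = ODEFunc()
h0 = torch.zeros(1, 16)                  # initial hidden state
t = torch.tensor([0.0, 0.4, 1.3, 2.0])   # irregularly spaced observation times
trajectory = odeint(func, h0, t)         # hidden state at each requested time
print(trajectory.shape)                  # torch.Size([4, 1, 16])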

3. Probabilistic Scale Fusion

Once the different scales have been processed, their latent representations are combined using a probabilistic fusion mechanism. This step:

  • Integrates Multi‑Resolution Information: Merges insights from various scales.
  • Quantifies Uncertainty: Provides a measure of confidence in the predictions, which is crucial for risk-sensitive applications.
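
One standard way to fuse per-scale Gaussian latents is inverse-variance weighting, where the scales the model is more certain about contribute more. This sketch is an assumption chosen for illustration, not necessarily APDTFlow’s exact mechanism:

import torch

# Hypothetical per-scale latent means and log-variances (3 scales, 16 dims)
means = torch.randn(3, 16)
logvars = torch.randn(3, 16)

precisions = torch.exp(-logvars)                 # 1 / sigma^2 per scale
fused_var = 1.0 / precisions.sum(dim=0)
fused_mean = fused_var * (precisions * means).sum(dim=0)
fused_logvar = torch.log(fused_var)              # kept as an uncertainty estimate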

4. Time‑Aware Transformer Decoder

The final step is decoding the fused latent state into forecasts. A Transformer‑based decoder is used here because:

  • Temporal Order is Preserved: Positional encodings ensure that the sequential order of the data is taken into account.
  • Complex Dependencies are Captured: The self‑attention mechanism in transformers effectively models long‑term dependencies.
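
As background for the first point, the fixed sinusoidal positional encodings from the original Transformer paper look like this (APDTFlow may use a different encoding scheme):

import math
import torch

def sinusoidal_positional_encoding(seq_len, d_model):
    """Fixed positional encodings from 'Attention Is All You Need'."""
    position = torch.arange(seq_len).unsqueeze(1).float()
    div_term = torch.exp(
        torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model)
    )
    pe = torch.zeros(seq_len, d_model)
    pe[:, 0::2] = torch.sin(position * div_term)
    pe[:, 1::2] = torch.cos(position * div_term)
    return pe

# Inject temporal order into a sequence of latent vectors before attention
latent = torch.randn(12, 32)   # (time steps, model dim)
latent = latent + sinusoidal_positional_encoding(12, 32)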

For more details on the model components, check out the models documentation.

Additional Functionalities

But there’s more to APDTFlow than just the forecasting engine. We’ve also integrated practical features to help you work with real-world data, from flexible cross-validation to a user-friendly CLI.

Cross‑Validation Factory

Getting the evaluation right for time series data means you need to split your data thoughtfully, which is why we built in flexible cross-validation tools. The TimeSeriesCVFactory in APDTFlow supports:

  • Rolling Splits: Emulating a moving window over time.
  • Expanding Splits: Increasing the training set gradually while keeping the validation set size fixed.
  • Blocked Splits: Dividing the data into contiguous segments for training and testing.

Example Code

from apdtflow.cv_factory import TimeSeriesCVFactory
from torch.utils.data import Dataset

class SampleDataset(Dataset):
    def __init__(self, length=100):
        self.data = list(range(length))

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        return self.data[idx]

dataset = SampleDataset()
cv_factory = TimeSeriesCVFactory(dataset, method="rolling", train_size=40, val_size=10, step_size=10)
splits = cv_factory.get_splits()
print("Cross-Validation Splits:", splits)

Evaluation Framework

For consistent performance comparisons, APDTFlow includes tools to compute standard metrics such as:

  • Mean Squared Error (MSE)
  • Mean Absolute Error (MAE)
  • Root Mean Squared Error (RMSE)
  • Mean Absolute Percentage Error (MAPE)
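
For reference, here is a minimal sketch of how two of these metrics are conventionally defined (not the library’s internal implementation):

import torch

def rmse(pred, target):
    """Root Mean Squared Error: penalizes large errors quadratically."""
    return torch.sqrt(torch.mean((pred - target) ** 2))

def mape(pred, target, eps=1e-8):
    """Mean Absolute Percentage Error: scale-independent, reported in percent."""
    return torch.mean(torch.abs((target - pred) / (target.abs() + eps))) * 100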

Example Code for Evaluation

metrics = model.evaluate(test_loader, device, metrics=["MSE", "MAE", "RMSE", "MAPE"])
print("Evaluation Metrics:", metrics)

Command‑Line Interface (CLI)

To simplify operations, APDTFlow includes a CLI that allows you to train models and run inference directly from the terminal. This is especially useful for quick tests or for integrating forecasting into a larger workflow.

Example CLI Commands

# Train a model
apdtflow train --csv_file path/to/dataset.csv --date_col DATE --value_col VALUE --T_in 12 --T_out 3 --num_scales 3 --filter_size 5 --hidden_dim 16 --batch_size 16 --learning_rate 0.001 --num_epochs 15 --checkpoint_dir ./checkpoints

# Run inference using a saved checkpoint
apdtflow infer --csv_file path/to/dataset.csv --date_col DATE --value_col VALUE --T_in 12 --T_out 3 --checkpoint_path ./checkpoints/APDTFlow_checkpoint.pt --batch_size 16

Training and Inference Demonstration

Here’s how you can train the APDTFlow model on your data and then use it to make predictions.

Training Example

import torch
from torch.utils.data import DataLoader
from apdtflow.data import TimeSeriesWindowDataset
from apdtflow.models.apdtflow import APDTFlow

# Prepare the dataset
dataset = TimeSeriesWindowDataset(
    csv_file="path/to/dataset.csv",
    date_col="DATE",
    value_col="VALUE",
    T_in=12,   # Number of historical time steps
    T_out=3    # Forecast horizon
)
train_loader = DataLoader(dataset, batch_size=16, shuffle=True)

# Initialize the APDTFlow model
model = APDTFlow(
    num_scales=3,
    input_channels=1,
    filter_size=5,
    hidden_dim=16,
    output_dim=1,
    forecast_horizon=3
)

# Set device and train the model
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
model.train_model(
    train_loader=train_loader,
    num_epochs=15,
    learning_rate=0.001,
    device=device
)

Inference Example

import torch
from torch.utils.data import DataLoader
from apdtflow.data import TimeSeriesWindowDataset
from apdtflow.models.apdtflow import APDTFlow

# Prepare the dataset for inference
test_dataset = TimeSeriesWindowDataset(
    csv_file="path/to/dataset.csv",
    date_col="DATE",
    value_col="VALUE",
    T_in=12,
    T_out=3
)
test_loader = DataLoader(test_dataset, batch_size=16, shuffle=False)

# Initialize the model and load a checkpoint
model = APDTFlow(
    num_scales=3,
    input_channels=1,
    filter_size=5,
    hidden_dim=16,
    output_dim=1,
    forecast_horizon=3
)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
checkpoint_path = "path/to/checkpoint.pt"
model.load_state_dict(torch.load(checkpoint_path, map_location=device))

# Grab a batch of historical windows to forecast from
# (assumes the dataset yields (input_window, target) pairs)
new_x, _ = next(iter(test_loader))
new_x = new_x.to(device)

# Run inference
predictions, pred_logvars = model.predict(new_x, forecast_horizon=3, device=device)
print("Predictions:", predictions)

Additional Forecasting Approaches

While APDTFlow is the primary model, the framework also includes other forecasting approaches. Depending on your dataset and needs, you may prefer one of these alternatives.

TransformerForecaster

The TransformerForecaster leverages the Transformer architecture to capture long-range dependencies via self‑attention. It is well-suited for modeling complex temporal interactions over extended sequences.

Example Code

import torch
from torch.utils.data import DataLoader
from apdtflow.data import TimeSeriesWindowDataset
from apdtflow.models.transformer_forecaster import TransformerForecaster

dataset = TimeSeriesWindowDataset(
    csv_file="path/to/dataset.csv",
    date_col="DATE",
    value_col="VALUE",
    T_in=12,
    T_out=3
)
train_loader = DataLoader(dataset, batch_size=16, shuffle=True)

model = TransformerForecaster(
    input_dim=1,
    model_dim=32,
    num_layers=2,
    nhead=4,
    forecast_horizon=3
)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

TCNForecaster

The TCNForecaster employs Temporal Convolutional Networks (TCNs) to capture local temporal patterns efficiently. TCNs use dilated convolutions, which allow them to cover a large receptive field without requiring many layers.
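
As a quick illustration of the receptive-field arithmetic (independent of APDTFlow’s own TCN implementation), note how few layers are needed to see far into the past:

import torch
import torch.nn as nn

# With kernel size k and dilations 1, 2, 4, ..., a stack of dilated
# convolutions covers 1 + (k - 1) * sum(dilations) time steps.
k = 5
dilations = [1, 2, 4, 8]
print(1 + (k - 1) * sum(dilations))   # 61 steps seen by only four layers

# A single dilated convolution layer (non-causal here, for brevity):
conv = nn.Conv1d(in_channels=32, out_channels=32, kernel_size=k, dilation=4)
x = torch.randn(16, 32, 100)          # (batch, channels, time)
print(conv(x).shape)                  # torch.Size([16, 32, 84])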

Example Code

import torch
from torch.utils.data import DataLoader
from apdtflow.data import TimeSeriesWindowDataset
from apdtflow.models.tcn_forecaster import TCNForecaster

dataset = TimeSeriesWindowDataset(
    csv_file="path/to/dataset.csv",
    date_col="DATE",
    value_col="VALUE",
    T_in=12,
    T_out=3
)
train_loader = DataLoader(dataset, batch_size=16, shuffle=True)

model = TCNForecaster(
    input_channels=1,
    num_channels=[32, 32],
    kernel_size=5,
    forecast_horizon=3
)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

EnsembleForecaster

The EnsembleForecaster combines predictions from multiple models (such as APDTFlow, TransformerForecaster, and TCNForecaster) to improve overall robustness and accuracy. This approach can be especially useful when different models capture different aspects of the time series.

Example Code

import torch
from apdtflow.models.apdtflow import APDTFlow
from apdtflow.models.transformer_forecaster import TransformerForecaster
from apdtflow.models.tcn_forecaster import TCNForecaster
from apdtflow.models.ensemble_forecaster import EnsembleForecaster

model1 = APDTFlow(num_scales=3, input_channels=1, filter_size=5, hidden_dim=16, output_dim=1, forecast_horizon=3)
model2 = TransformerForecaster(input_dim=1, model_dim=32, num_layers=2, nhead=4, forecast_horizon=3)
model3 = TCNForecaster(input_channels=1, num_channels=[32, 32], kernel_size=5, forecast_horizon=3)

ensemble_model = EnsembleForecaster(models=[model1, model2, model3])

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
ensemble_model.to(device)
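
Conceptually, the simplest combination strategy is to average the member forecasts. This toy sketch illustrates the idea; the actual combination logic inside EnsembleForecaster may differ:

import torch

# One forecast per member model: (num_models, batch, horizon)
member_preds = torch.stack([torch.randn(16, 3) for _ in range(3)])
ensemble_pred = member_preds.mean(dim=0)   # simple unweighted average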

Experimentation and Results

APDTFlow has been evaluated across multiple forecasting horizons using a rolling-window cross‑validation approach. Comparative studies against models like TransformerForecaster, TCNForecaster, and various ensemble strategies have shown competitive performance in terms of validation loss.

Validation Loss Comparison

Figure 1: This plot illustrates how the average validation loss changes with the forecast horizon for each model.

Performance vs. Forecast Horizon

Figure 2: This chart shows how each model holds up as you try to forecast further into the future, letting you see which ones stay reliable over time.

Example Forecast

Figure 3: In this plot, the historical input (blue), the actual future values (dashed orange), and the model’s prediction (dotted line) are shown.

Getting Started with APDTFlow

APDTFlow is available on PyPI, so you can install it quickly:

pip install apdtflow

If you’d like to explore or modify the source code, the project is hosted on GitHub. To get started, clone the repository and install it in editable mode:

git clone https://github.com/yotambraun/APDTFlow.git
cd APDTFlow
pip install -e .

Comprehensive documentation is available in the docs directory, covering topics from data processing and model configuration to cross-validation and evaluation.

Conclusion

We designed APDTFlow as a modular system that combines state-of-the-art deep learning with an emphasis on clarity and flexibility, making it easier to tailor the approach to your unique forecasting needs. By breaking down the series into multiple scales, modeling continuous dynamics, and providing practical tools for evaluation and deployment, it serves as a versatile solution for both research and production.

If you’re looking to experiment with new forecasting strategies or refine your current models, APDTFlow provides a solid foundation. If you’d like to take a closer look at the code and learn more about how APDTFlow works, check out our GitHub repository.

I hope you found this article helpful. If you know someone who might benefit from it, please feel free to pass it along.

If you enjoyed this post, please give it a clap. Feel free to follow me on Medium for more articles!

References

Time Series Forecasting Fundamentals:

  • Hyndman, R.J., & Athanasopoulos, G. (2018). Forecasting: Principles and Practice.
    This online textbook is a great reference for classical time series forecasting methods and provides context for why modern approaches are needed when dealing with non‑stationarity and complex patterns.

Neural ODEs and Continuous Dynamics:

  • Ricky T. Q. Chen, Yulia Rubanova, Jesse Bettencourt, David Duvenaud (2018). Neural Ordinary Differential Equations.
    This paper introduced Neural ODEs and has been influential in the deep learning community. It’s considered a seminal work for understanding continuous modeling of hidden states, which is directly relevant to how APDTFlow models continuous dynamics in time series data.

Transformer Architectures for Time Series:

  • Bryan Lim, Sercan O. Arik, Nicolas Loeff, Tomas Pfister (2021). Temporal Fusion Transformers for Interpretable Multi-horizon Time Series Forecasting.
    This paper presents an advanced approach that combines transformers with other techniques for forecasting. It’s highly regarded for its emphasis on interpretability and performance, making it a useful point of comparison for APDTFlow’s transformer-based decoder.


Published via Towards AI
