
Beyond the Buzz: The Complex Reality of Graph Foundation Models

Last Updated on September 9, 2025 by Editorial Team

Author(s): Amin Assareh, PhD

Originally published on Towards AI.

Cover image generated by the author with AI tools

After a year away from the AI conference circuit — thanks to a major life event — I made my comeback last month with a doubleheader: ICML followed by KDD. I’ve been in AI since way before it was “cool”, but the pace of change in the past several years makes me feel like I’m cramming for an exam that never ends. Conference spotlights used to linger on a hot topic for years; now they shift so fast you’d think they were on TikTok.

This time, the buzzword echoing through coffee lines and poster sessions was Graph Foundation Models (GFMs). The broader AI crowd — especially those still riding the high of generative AI — was giddy about bringing the “foundation model” playbook to graph-structured data. One even whispered, eyes wide, that this could be “another BERT moment.”

The graph research veterans, however, weren’t quite buying the hype. Sure, they enjoyed seeing graphs back in the limelight — but the expressions on their faces as GenAI folks pitched GFMs? Somewhere between polite smile and visible cringe.

That reaction wasn't resistance, though. It reflected the real complexities that come with applying a paradigm born from sequential data (like text) to the intricate, non-Euclidean world of graphs.

What is a Graph Foundation Model?

At its core, a Graph Foundation Model (GFM) is a machine learning model pre-trained on extensive graph data, designed to be adapted for diverse downstream graph tasks. This mirrors the success of other foundation models in AI, where large-scale pre-training enables versatile adaptation.

A GFM typically works in two phases:

  • Pre-training — learning patterns, structures, and relationships from massive, diverse graph datasets without focusing on any single task.
  • Adaptation — fine-tuning, prompting, or in-context learning for specific downstream tasks, such as molecular property prediction, community detection in social networks, or traffic forecasting.

The Workflow of Graph Foundation Models. Figure adapted from [2], Graph Foundation Models: A Comprehensive Survey.
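
To make the two phases concrete, here is a minimal sketch in PyTorch, assuming a toy one-layer graph encoder, a masked-feature pre-training objective, and a random graph. None of this reflects a specific published GFM; the names (TinyGraphEncoder, task_head), the objective, and the dimensions are purely illustrative.

```python
import torch
import torch.nn as nn

class TinyGraphEncoder(nn.Module):
    """Toy backbone: one round of mean-neighbor aggregation followed by an MLP."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * in_dim, hid_dim), nn.ReLU(), nn.Linear(hid_dim, hid_dim)
        )

    def forward(self, x, adj):
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        neigh = adj @ x / deg                       # mean over neighbors
        return self.mlp(torch.cat([x, neigh], dim=-1))

# Phase 1: self-supervised pre-training (masked-feature reconstruction, no labels)
encoder, decoder = TinyGraphEncoder(16, 32), nn.Linear(32, 16)
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

x = torch.randn(100, 16)                            # toy node features
adj = (torch.rand(100, 100) < 0.05).float()         # toy adjacency matrix
mask = torch.rand(100) < 0.15                       # hide 15% of the nodes

for _ in range(10):
    x_in = x.clone()
    x_in[mask] = 0.0                                # zero out masked node features
    h = encoder(x_in, adj)
    loss = ((decoder(h[mask]) - x[mask]) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Phase 2: adaptation -- keep the pre-trained encoder, attach a downstream task head
task_head = nn.Linear(32, 4)                        # e.g., 4-class node classification
logits = task_head(encoder(x, adj))                 # fine-tune on downstream labels
```

The point is the division of labor: the encoder is trained once without task labels, and only a small task head (plus optional fine-tuning of the encoder) changes per downstream task.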

Recent surveys propose a modular view of GFMs, decomposing them into three building blocks:

  • Backbones (e.g., Graph Transformers, GNNs, LLMs, or hybrids),
  • Pre-training strategies (contrastive, generative, predictive objectives),
  • Adaptation mechanisms (fine-tuning, prompt tuning, test-time adaptation).
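
As a concrete illustration of the pre-training block, the sketch below implements a contrastive (InfoNCE-style) objective between two augmented views of the same graph. The embeddings, temperature, and augmentation story are assumptions for demonstration only; generative (e.g., masked reconstruction) and predictive objectives would slot into the same place.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, temperature=0.5):
    """InfoNCE-style loss: node i in view 1 should match node i in view 2."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature        # (N, N) cosine-similarity logits
    targets = torch.arange(z1.size(0))        # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

# Two augmented "views" of the same graph (e.g., after edge dropping or feature
# masking), already embedded by the backbone encoder:
z_view1 = torch.randn(100, 32)
z_view2 = torch.randn(100, 32)
loss = contrastive_loss(z_view1, z_view2)
```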

In short: if LLMs are foundation models for text, GFMs aspire to be the same for relational data.

The GenAI Enthusiasm: A Natural Progression

For anyone who’s seen the impact of Large Language Models (LLMs) or foundation models in vision, the idea of a GFM feels like a natural next step. Foundation models, by definition, are trained on vast datasets and then adapted to a wide range of tasks.

The logic is simple: if LLMs can learn the nuances of language from massive text corpora and generalize across countless NLP tasks, why can’t GFMs do the same for graphs? Imagine a single model trained across the world’s social networks, molecules, and knowledge graphs, then fine-tuned for anything from drug discovery to fraud detection. This “one model, many tasks” vision is incredibly appealing.

Signatures of Graph Foundation Models

What truly makes a foundation model different from a large model isn’t just its scale, but the characteristic properties it exhibits when trained across vast, diverse datasets. For GFMs, researchers highlight several emerging signatures:

  • Scaling Laws & Emergence — As seen in LLMs, new abilities may only appear at scale. GFMs could eventually demonstrate reasoning over complex structures, in-context learning, or even zero-shot adaptation to unseen tasks. Understanding how these behaviors emerge with more data and parameters is central to their study.
  • Homogenization Across Tasks — One of the boldest ambitions is to unify diverse graph problems — node classification, link prediction, graph classification, even graph generation — under a single modeling paradigm. Just as most NLP tasks reduce to predicting the next token, the hope is that graph tasks can be reframed into a common framework. Achieving this, however, is far more challenging given the structural heterogeneity of graphs.
  • Transferability Theories — The long-term promise of GFMs lies in reusability. Transfer can happen within a task (e.g., adapting a model from one citation network to another) or across domains and tasks (e.g., moving from molecular property prediction to social network analysis). Theoretical work is beginning to formalize when such transfer is possible — and where it might fail.

Together, these signatures suggest that the future of GFMs is not only about bigger models, but about discovering the principles that make foundation models truly universal.

Graph Transformers: The Backbone of GFMs

If GFMs are the vision, Graph Transformers (GTs) are the leading architectural candidate to make them real. Unlike Graph Neural Networks (GNNs), which rely on localized message passing, GTs empower each node to attend directly to all other nodes. This global attention helps overcome classic GNN bottlenecks like locality bias, over-smoothing, and over-squashing. Where GNNs ‘pass messages’ between neighbors, GTs let every node ‘talk to’ every other node directly.
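
The contrast is easiest to see side by side. The sketch below, in plain PyTorch, pairs a toy mean-aggregation message-passing layer with a toy global-attention layer; neither corresponds to a specific published architecture, and the weights and graph are random placeholders.

```python
import torch
import torch.nn.functional as F

def gnn_layer(x, adj, w):
    """Message passing: each node only aggregates features from its neighbors."""
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
    return torch.relu((adj @ x / deg) @ w)

def graph_transformer_layer(x, wq, wk, wv):
    """Global attention: every node attends to every other node."""
    q, k, v = x @ wq, x @ wk, x @ wv
    attn = F.softmax(q @ k.t() / k.size(-1) ** 0.5, dim=-1)   # dense (N, N) matrix
    return attn @ v

N, d = 50, 16
x = torch.randn(N, d)
adj = (torch.rand(N, N) < 0.1).float()          # toy adjacency
out_gnn = gnn_layer(x, adj, torch.randn(d, d))
out_gt = graph_transformer_layer(x, torch.randn(d, d), torch.randn(d, d), torch.randn(d, d))
```

The dense (N, N) attention matrix in graph_transformer_layer is also where the quadratic cost discussed under the hurdles below comes from.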

Recent work also shows that Graph Transformers cannot simply borrow the vanilla Transformer design from NLP. They must integrate graph-specific signals — for example, positional encodings that capture distances or spectral information — so the model respects graph topology. This hybrid approach blends global expressiveness with structural awareness, making GTs particularly promising for domains where long-range interactions matter: drug discovery, protein folding, fraud detection, recommendation systems, and knowledge graph reasoning.
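
One widely used example of such a graph-specific signal is the Laplacian eigenvector positional encoding, sketched below. Normalization choices, the number of eigenvectors k, and sign/basis ambiguities are handled differently across papers, so treat this as an illustrative version rather than a canonical recipe.

```python
import torch

def laplacian_positional_encoding(adj, k=8):
    """Return k non-trivial eigenvectors of the symmetric normalized Laplacian."""
    deg = adj.sum(dim=1)
    d_inv_sqrt = torch.diag(deg.clamp(min=1.0).pow(-0.5))
    lap = torch.eye(adj.size(0)) - d_inv_sqrt @ adj @ d_inv_sqrt
    _, eigvecs = torch.linalg.eigh(lap)         # eigenvalues in ascending order
    return eigvecs[:, 1:k + 1]                  # drop the first (trivial) eigenvector

adj = (torch.rand(50, 50) < 0.1).float()
adj = ((adj + adj.t()) > 0).float()             # symmetrize so the Laplacian is valid
pe = laplacian_positional_encoding(adj)         # (50, 8) positional features
# pe can be concatenated with (or added to) node features before the attention layers.
```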

But surveys caution that GTs are not the only path. Enhanced GNNs, LLM-based tokenization approaches, and hybrid architectures remain strong candidates. GFMs may not converge on a single “Transformer of graphs,” but on a plurality of designs optimized for different contexts.

The Graph Community’s Nuance: Acknowledging the Hurdles

The graph research community views GFMs with both excitement and caution. Several hurdles stand out:

  • The Non-Euclidean Nature of Graphs — Unlike text, graphs are irregular, domain-specific, and task-diverse. Creating a unified training objective is far more difficult.
  • Unproven Emergent Abilities — Scaling GNNs is not enough; emergent properties like reasoning remain largely unverified.
  • Homogenization Challenges — Graph tasks differ structurally (node vs. edge vs. graph-level), making task unification tricky.
  • Computational Complexity — GTs scale quadratically with nodes; billion-scale graphs remain intractable.
  • Data Scarcity — Unlike text or images, we lack large, diverse, open graph datasets. Many are domain-specific or noisy.
  • Architectural Uncertainty — No canonical “graph transformer” exists; structural bias and robustness remain open questions.
  • The Enduring Strength of GNNs — Enhanced GNNs often match or outperform GTs with far greater efficiency.

Surveys also point to evaluation gaps: current benchmarks focus on accuracy but neglect robustness, trustworthiness, and generalization across domains.

The Path Forward

Despite the hurdles, GFMs remain one of AI’s most exciting frontiers. Several threads are emerging that could guide GFMs from concept to reality:

  • New Architectures — Beyond Transformers: Mamba, linear attention, hybrids.
  • Unified Pre-training & Adaptation — Transferable pretext tasks, prompt-based methods, RLHF, and knowledge distillation.
  • Data Curation — Large, diverse, high-quality graph datasets.
  • Robust Evaluation — New benchmarks for generalization, trustworthiness, and scalability.
  • Taxonomy & Specialization — Distinguishing universal GFMs (cross-domain), task-specific GFMs, and domain-specific GFMs (molecules, knowledge graphs, temporal graphs).
  • Safety & Interpretability — Addressing privacy, hallucination, fairness, and explainability.


The buzz around Graph Foundation Models at ICML and KDD wasn’t just hype. It reflected an evolving frontier — where the bold vision of GenAI enthusiasts meets the grounded caution of graph researchers. The future of GFMs will likely be shaped by a balance of ambition, rigor, and collaboration across domains.

References

Surveys:

  1. Graph Foundation Models: Concepts, Opportunities and Challenges — arXiv (2025)
  2. Graph Foundation Models: A Comprehensive Survey — arXiv (2025)
  3. Towards Graph Foundation Models — The Web Conference 2024
  4. Towards Graph Foundation Models: A Survey and Beyond — arXiv (2023)
  5. A Survey on Self-Supervised Graph Foundation Models: Knowledge-Based Perspective — arXiv (2024)

Graph Transformers:

  1. Attending to Graph Transformers — arXiv (2024)
  2. Can Classic GNNs Be Strong Baselines for Graph-level Tasks? — arXiv (2025)
  3. A Survey of Graph Transformers: Architectures, Theories and Applications — arXiv (2025)
  4. Introduction to Graph Transformers — kumo.ai

Applications & Extensions:

  1. Graph Foundation Models for Recommendation: A Comprehensive Survey — arXiv (2025)
  2. Learning Generalities Across Graphs via Task-trees — OpenReview (2025)
  3. Towards Foundation Models for Knowledge Graph Reasoning — OpenReview (2024)
  4. Awesome Foundation Models on Graphs — GitHub
