Prediction, Generation, or Inference? Matching Your Goal to the Right Data Tool

Author(s): Bushra Anjum, Ph.D.

Originally published on Towards AI.

Large Language Models (LLMs) are making headlines every day. At the same time, traditional machine learning (ML) and statistical methods are firmly holding their ground and continue to be used widely. So, which one should you use, and when? In this article, we try to make that decision a little clearer.

We would like to say upfront that the decision is not about following tech hype or showing blind loyalty to one method. The goal is to make intentional and informed decisions depending on your data, goals, and constraints. Here are four questions to get you started on this journey.

Note: This article focuses on the modeling techniques for textual data; multi-modal data and modeling capabilities are beyond the scope of this discussion.

Image generated by Canva AI Image Generator: Data scientist evaluating LLM, ML and Statistics as options

Q1: Is your data structured or unstructured, bounded or unbounded?

Let’s define the terms before we get into a deeper discussion.

What do we mean by structured data? This data has a predefined format and a well-defined schema. Tabular data is the most common example of structured data, with neatly organized rows and columns where each column has a specific meaning.

What do we mean by bounded data? This data has predefined limits on its size, scope (limited permissible permutations of words), or duration (generated over a few days or weeks). This is in contrast to unbounded textual data that is often free-form and evolving.

LLMs are free-form language wizards, but struggle with structure

LLMs truly shine when working with unbounded unstructured data. That is, they are great for tasks that require deep semantic understanding of human language. For example, when it comes to analyzing customer reviews, summarizing documents, interpreting reports, or responding to free-form queries, LLMs are the superior choice.

This ability to understand, generate, and translate unbounded unstructured text comes from training transformer neural networks on massive text corpora. These networks are exceptional at processing sequential text and developing a long-range probabilistic understanding of the patterns, grammar, context, and nuances of language. Thus, LLMs are fundamentally probabilistic models, designed to predict the most likely next word based on prior context.

LLMs, however, are less adept at dealing with structured data. Why? Because it is not sequential. LLMs are trained to treat data as a sequence of tokens (words), not as structured matrices. So when tabular data is flattened to be used as input, the inherent structure and relationships become ambiguous.
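To make the flattening problem concrete, here is a minimal Python sketch with a made-up table: once the rows are serialized into a prompt string, the schema the model sees is just another run of tokens.

```python
# Minimal sketch: a tidy table becomes one long token sequence once
# serialized for an LLM prompt. The data and columns are hypothetical.
import pandas as pd

transactions = pd.DataFrame({
    "customer_id": [101, 102],
    "amount_usd": [49.99, 1200.00],
    "is_fraud": [0, 1],
})

# Here, column types, ranges, and row/column relationships are explicit.
print(transactions.dtypes)

# Serialized for a prompt, the same data is just a string of tokens;
# the model must re-infer which values belong to which column.
prompt_text = "\n".join(
    ", ".join(f"{col}={row[col]}" for col in transactions.columns)
    for _, row in transactions.iterrows()
)
print(prompt_text)
```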

LLMs excel at both ingesting and generating open-ended natural language content where the inputs and expected outputs are diverse, variable in length, and not easily categorized. LLMs are less adept with non-language data, however.

ML and statistics remain the gold standard for understanding structured data

When data is organized in a structured format of rows and columns with clearly defined features (such as customer records, financial transactions, website analytics, or industrial sensor readings), traditional ML models and statistical techniques are the most effective and efficient tools. These methods are built to explicitly identify patterns, correlations, and predictive signals between structured features like numerical values, categorical labels, and boolean flags.
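As a rough illustration, a standard scikit-learn workflow captures this kind of tabular prediction in a few lines; the file name, features, and label below are hypothetical placeholders.

```python
# A minimal sketch of a traditional ML model on structured (tabular) features.
# The dataset and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("customer_records.csv")           # hypothetical file
X = df[["age", "tenure_months", "monthly_spend"]]  # numerical features
y = df["churned"]                                  # binary label

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print("Held-out accuracy:", model.score(X_test, y_test))
```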

In addition to structured data, bounded unstructured data, i.e., data limited in scope or length, such as sets of comments, reviews, answers, and descriptions, can often be handled effectively by simpler natural language processing (NLP) pipelines and pattern matching using machine learning algorithms. We will look at some examples in the next section.

When the goal is not just to predict an outcome but to understand the underlying drivers and relationships between variables, statistical techniques are invaluable, especially in low-data environments. These techniques are designed for robust inference from limited samples.

Q2: What is your end goal: prediction, generation or inference?

Let’s take some of the data nuances discussed in the previous section and turn them into concrete genre recommendations. First, some definitions:

Prediction is the process of forecasting future outcomes or classifying unknown data points based on historical data.

Generation refers to creating new and original content, such as text, summaries, or responses to questions.

Inference involves drawing conclusions about the underlying relationships and causal drivers within a population based on a sample of data. It focuses on understanding why an outcome occurs.

Below are some time-tested, well-researched and well-established use cases for each of the three tools covered in this article.

Use LLMs for content generation, intelligent summarization, personalized customer experiences, knowledge management, semantic search, and extracting structured data from unstructured text.

Why? Because an LLM’s core strength is the ability to generate coherent and contextually relevant text.

Can we use ML instead? If the data is bounded and the language patterns are relatively stable, then yes, some of the more classic NLP techniques, such as TF-IDF, bag-of-words models, topic modeling, and named entity recognition, can be applicable.
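As a rough sketch of such a pipeline, with made-up reviews and labels, TF-IDF features feeding a simple linear classifier often go a long way for bounded text:

```python
# A sketch of a classic NLP pipeline for bounded text (e.g., short reviews):
# TF-IDF features feeding a linear classifier. Examples and labels are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = [
    "great product, works as described",
    "terrible quality, broke after a day",
    "fast shipping and solid build",
    "not worth the price",
]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(reviews, labels)
print(clf.predict(["broke immediately, very poor quality"]))
```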

Use ML for high fidelity predictive modeling, automated classification, recommendation engines and real-time anomaly detection.

Why? ML algorithms are engineered for uncovering statistical relationships between rows and columns of data. They excel at detecting patterns, trends and correlations within structured datasets.

Can we use LLMs instead? Maybe, in some limited ways (e.g., few-shot prompting for simple classifications, if efficiency is secondary). LLMs lack the predictive precision and computational efficiency of a well-trained ML model.
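For illustration, a few-shot classification prompt might look like the sketch below; the OpenAI-style client and the model name are assumptions, so adapt it to whichever LLM API you use.

```python
# A sketch of few-shot classification via prompting.
# Assumes the openai Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

few_shot_prompt = """Classify the sentiment of each ticket as POSITIVE or NEGATIVE.

Ticket: "The new dashboard is fantastic, thanks!"
Sentiment: POSITIVE

Ticket: "Checkout has been broken for two days."
Sentiment: NEGATIVE

Ticket: "I keep getting logged out and support never replies."
Sentiment:"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": few_shot_prompt}],
    temperature=0,
)
print(response.choices[0].message.content)
```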

Use statistics for causal and Bayesian inference, hypothesis testing, time-series analysis and simple regressions, especially in data-limited scenarios.

Why? These formulae and tests are designed to generate generalizable inferences from limited samples while quantifying the uncertainty around them.

Q3: How much data do you have?

In general, the more data we have, the better value we can get out of it. However, the availability of clean and high-quality data is not always a given.

One of the main strengths of statistics is that the discipline is designed to work with limited data. We start with a data “sample” from which we infer characteristics about the entire population or try to understand the causal relationships between a few well-defined features of this population. If you have limited data availability and the goal is to better understand the entire domain using a few observations, while also quantifying uncertainty, statistical techniques are your friend.

Another benefit of statistics is that while ML models are prone to overfitting when training data is limited, statistical methods are more robust in such situations because of their simpler calculations and fewer parameters. For example, using a t-test to compare groups with limited data is far more accurate and effective than using a random forest to classify the same groups.
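As a small illustration with made-up measurements, a Welch's t-test quantifies both the difference between two groups and the uncertainty around it from just a handful of observations each:

```python
# A sketch of comparing two small groups with a t-test instead of a classifier.
# The measurements are made-up numbers for illustration.
from scipy import stats

control   = [12.1, 11.8, 12.5, 11.9, 12.3]
treatment = [13.0, 12.8, 13.4, 12.9, 13.1]

t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)  # Welch's t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests a real difference between the groups, with the
# uncertainty quantified explicitly, even with only five observations per group.
```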

LLMs also do reasonably well in certain scenarios when learning from a very small number of examples (called few-shot learning). This is a critical advantage in real-world scenarios where large, labeled datasets are expensive or unavailable. The advantage stems from the LLM’s extensive training, which allows it to leverage its vast, generalized “world knowledge” to fill in the blanks and compensate for the lack of task-specific data.

Image by redgreystock on Freepik

Q4: Is explainability or repeatability mission-critical for your application?

These two criteria often go hand in hand, i.e., can you explain why a decision was made, and can you guarantee that the same input will always yield the same decision?

Explainability

Explainability is often a critical requirement, especially in heavily regulated domains such as finance (e.g., credit scoring), healthcare (e.g., disease diagnosis), and legal applications. For these use cases, statistical models and interpretable ML models are the best choice.

Unlike LLMs, which function as opaque “black boxes”, statistical models (like linear and logistic regression, and ANOVA) and traditional ML models (like decision trees and simple rule-based systems) offer clear reasoning paths that can be audited. Even more complex ML models can be made interpretable via well-established explainable AI methods such as SHAP and LIME. It is important to remember, though, that while ML models are excellent at identifying complex correlations, they are not designed to infer causation, which can be a key factor in strategic decision-making.
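As a rough sketch, applying SHAP to a tree-based model takes only a few lines; the `model` and `X_test` variables are assumed to come from a previously trained tabular pipeline, such as the random forest sketched earlier.

```python
# A sketch of post-hoc explainability with SHAP on a tree ensemble.
# Assumes `model` (e.g., a RandomForestClassifier) and `X_test` already exist.
import shap

explainer = shap.TreeExplainer(model)        # tailored to tree-based models
shap_values = explainer.shap_values(X_test)  # per-feature contribution to each prediction

# Summarize which features drive the model's predictions overall.
shap.summary_plot(shap_values, X_test)
```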

In contrast, LLMs distribute their internal reasoning across billions of parameters, making it nearly impossible to trace a specific decision back to its inputs in a human-understandable way. There have been some research efforts, notably by Anthropic, to track the group of neurons that contribute to the decision-making of a particular request. Such research is still in its very early stages, however, and lacks general-purpose usability. At the time of writing, LLMs are unsuitable for applications where trust, explainability, and accountability are primary requirements.

Determinism and Repeatability

LLMs, given their probabilistic nature, are inherently non-deterministic. This is not a bug, but a feature allowing for creative, fluent, human-like responses. This, however, also means that asking the same question multiple times can produce different, though often similar-sounding, answers.

LLMs offer some control over their sampling process (the way they pick tokens to generate a response). For example, setting the model’s “temperature” parameter close to zero makes the output more predictable. Similarly, adjusting other hyperparameters, such as “top-p” (nucleus sampling) or “top-k”, can help limit unexpected outputs. However, these methods cannot guarantee determinism, as at the end of the day, you are still sampling a text corpus for the next best word in sequence.
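For example, with an open-weights model served through Hugging Face transformers (the small model name below is just a placeholder), greedy decoding or a low temperature narrows the sampling, though the caveats above still apply.

```python
# A sketch of nudging generation toward more predictable output.
# "gpt2" is a small placeholder model for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=10,
    do_sample=False,  # greedy decoding: the closest to deterministic behavior
    # or sample with tighter settings: do_sample=True, temperature=0.2, top_p=0.9, top_k=50
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```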

In addition to sampling, there is also the issue of the inherent imprecision of the floating-point arithmetic that produces the probabilities in the first place. All the internal calculations within LLMs are performed using floating-point numbers. These calculations happen in parallel, and the hardware on which they run (GPUs, TPUs), the order in which they finish, and the way their results are combined across billions of parameters introduce rounding errors that can produce different probability distributions for the same set of words.

If your application requires deterministic behavior, i.e., consistently producing the same output given the same input without random variation, relying on statistical formulas and interpretable ML models is preferable.

Wrapping up: Think in terms of trade-offs

In the fast-moving world of AI, critical thinking matters. LLMs are powerful tools, particularly for tasks related to natural language. But they are not a panacea. For structured or numerical data, or for tasks requiring transparency and explainability, traditional ML and statistics remain the preferred choice. Thus, when choosing between tools, ask yourself:

  1. What type of data am I working with: structured or unstructured? Bounded or unbounded?
  2. What outcome do I need: prediction, generation, inference?
  3. How much data do I have?
  4. How important is explainability or repeatability?

By considering these questions and options carefully, you can match the right tools to the task and are far more likely to build data systems that are performant, stable and trusted.

In many real-world applications, a hybrid approach offers the best performance and flexibility. As we wrap up this article, we encourage the reader to view LLMs, ML, and statistics not as competitors but as complementary parts of a broader data toolbox. For example, one common hybrid method involves using an LLM to extract structured features from free-form text, which are then fed into a traditional ML model for high-accuracy prediction or classification. Hybrid approaches deserve a deeper discussion of their own; in a follow-up to this article, we will explore how they can unlock even greater value.
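As a rough sketch of that extraction-then-prediction pattern (the client, model name, and feature schema are all assumptions for illustration), the LLM produces a fixed set of structured features that an ordinary, auditable classifier then consumes.

```python
# A sketch of the hybrid pattern: an LLM turns free-form text into structured
# features, which a traditional ML model then consumes. The client, model name,
# and feature schema are assumptions; real code would also validate the JSON.
import json

from openai import OpenAI
from sklearn.linear_model import LogisticRegression

client = OpenAI()

def extract_features(ticket_text: str) -> dict:
    """Ask the LLM for a fixed, structured set of features as JSON."""
    prompt = (
        "Return only JSON with keys urgency (1-5), mentions_refund (true/false), "
        f"and word_count (int) for this support ticket:\n{ticket_text}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return json.loads(resp.choices[0].message.content)

# Downstream, the extracted features feed an ordinary, auditable ML model.
tickets = ["Refund not processed, very urgent!", "Just saying thanks, great support."]
rows = [extract_features(t) for t in tickets]
X = [[r["urgency"], int(r["mentions_refund"]), r["word_count"]] for r in rows]
y = [1, 0]  # hypothetical escalation labels
clf = LogisticRegression().fit(X, y)
```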

Acknowledgement: My sincere thanks to Wes Adams, Anna Ransbotham-Cole, Trevor Rollins and Charlie Hanley for their valuable feedback as I finalized this write-up.

Note: An earlier version of this work appeared in BuiltIn.
