
The Algorithmic Experience (AX)

Last Updated on September 4, 2025 by Editorial Team

Author(s): Alexandra Lebleu

Originally published on Towards AI.

Co-written by Yerko Ortiz ✍️

Users are aware when they tap a button or scroll through an app, but few realize that behind the scenes, algorithms are quietly shaping their experience.

A survey from Norway (a country with near-universal internet and smartphone adoption) found that 41% of people were completely unaware of algorithms’ role in their digital lives. From recruitment platforms shaping career opportunities to dating apps influencing relationships, their influence is greater than we often realize.

Human-Computer Interaction (HCI) and User Experience (UX) research have spent decades refining interfaces, polishing accessibility, usability, and navigation to near-perfection. Yet traditional HCI and UX frameworks rarely consider how users perceive, interpret, or are influenced by the algorithms that increasingly shape our digital experiences.

In response, a growing body of research has started to explore algorithmic experience or “AX”, focusing on understanding user–algorithm interactions.

The spectrum

Consider the features we interact with daily: finding a contact in our phone or hitting Ctrl+Z in a text editor. These systems rely heavily on algorithms, yet we experience them as predictable, reliable tools. Users face no friction.

These are deterministic algorithms. Given the same input, they always produce the same output. Search for “Mom” in your contacts, and Mom appears. Hit undo, and the last action reverses. Outcomes are consistent, so users rarely question why something happened or feel uncertain about the process.
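The contact-search example above can be sketched in a few lines. This is a minimal, hypothetical implementation (the contact list and matching rule are invented for illustration); the point is that identical input always produces identical output, so the system matches the user's mental model.

```python
# A deterministic algorithm: a simple contact search.
# Same query in, same result out -- every time.

CONTACTS = ["Alice", "Bob", "Mom", "Maria", "Moe"]

def search_contacts(query: str) -> list[str]:
    """Return contacts whose name starts with the query (case-insensitive)."""
    q = query.lower()
    return sorted(name for name in CONTACTS if name.lower().startswith(q))

# Run it twice: the output never changes.
print(search_contacts("Mo"))  # ['Moe', 'Mom']
print(search_contacts("Mo"))  # ['Moe', 'Mom']
```

Because nothing in the function depends on hidden state or randomness, users never need to wonder why a result appeared.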

In contrast, many of today’s digital products are shaped by opaque, complex algorithms. Their outcomes can be hard for users to predict or fully understand, creating potential friction and experiences that may feel unpredictable from the user’s perspective. These are black-box algorithms.
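To see why black-box systems feel different, consider a toy recommender. Real feeds are far more complex, but many mix a relevance score with exploration noise, so the same user state can surface different items on each refresh. The item names and scores below are invented for the sketch.

```python
import random

ITEMS = {"cat video": 0.9, "news clip": 0.7, "cooking reel": 0.6, "ad": 0.4}

def recommend(n: int = 2) -> list[str]:
    """Rank items by relevance plus random exploration noise."""
    noisy = {item: score + random.uniform(0, 0.5) for item, score in ITEMS.items()}
    return sorted(noisy, key=noisy.get, reverse=True)[:n]

# Two calls with identical input can return different feeds.
print(recommend())
print(recommend())
</i>```

From the user's side, nothing visible changed between the two calls, yet the output did. That gap between identical input and varying output is exactly what makes these systems feel opaque.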

This unpredictability introduces a new dynamic between users and technology, one that traditional HCI and UX frameworks don’t capture.

The algorithm hates me

If you’ve scrolled social media lately, you’ve probably seen those “the algorithm hates me” posts. Behind the memes lies a real pattern: we inherently assign emotions and motivations to lines of code.

This isn’t new. Since the 1990s the Computers Are Social Actors (CASA) paradigm has shown that we instinctively treat computers like social beings, even forming emotional attachments. We unconsciously apply social rules, expectations, and behaviors to computers and other technologies.

We anthropomorphize technology, which can make unintended outcomes feel personal, even adversarial.

Our relationship with algorithms is complicated. We rely on them daily, often without realizing it, craving their convenience while simultaneously distrusting their results.

Users often exhibit algorithm aversion when they know algorithms are involved, even when algorithms clearly outperform human judgment. For example, in 2023, about two-thirds of Americans said they would not want to apply for a job if AI were used to help make hiring decisions.

René Magritte | Vengeance (1936)

Unpredictability, trust, and misaligned goals

Traditional UX is built around predictability: tap a button, get the expected result, in line with user expectations and satisfaction. Products driven by “black-box” algorithms can disrupt this logic and users’ mental models, as they often optimize for goals such as engagement or revenue rather than user satisfaction.

For non-tech savvy users especially, this variability can come across as unreliability, weakening trust in technology. Identical inputs may yield different outputs, leaving people uncertain about which results to rely on, or whether they should rely on them at all.

Influence without awareness
There is also an ethical dimension to consider: algorithms can shape not only what we see but also how we think. Black-box models can be susceptible to bias, as they may reproduce human biases present in their training data or embedded in their design.
These systems can subtly influence our preferences through repeated exposure to curated content, often without our conscious awareness. They create echo chambers that reinforce existing beliefs while filtering out diverse perspectives, thereby strengthening biases and limiting critical thinking and intellectual growth.
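The echo-chamber mechanism described above is a feedback loop, and it can be illustrated with a toy simulation (topic names, weights, and the click model are all invented for the sketch): the feed samples topics in proportion to past clicks, so whatever the user happened to click early gets shown, and clicked, more and more.

```python
import random

def next_feed(click_counts: dict[str, int], size: int = 5) -> list[str]:
    """Sample feed topics proportionally to past clicks (plus 1 for smoothing)."""
    topics = list(click_counts)
    weights = [click_counts[t] + 1 for t in topics]
    return random.choices(topics, weights=weights, k=size)

random.seed(0)  # fixed seed so the run is reproducible
clicks = {"politics A": 0, "politics B": 0, "sports": 0}
for _ in range(50):
    feed = next_feed(clicks)
    clicks[random.choice(feed)] += 1  # the user clicks one item per feed

# Exposure concentrates on whichever topics attracted early clicks.
print(clicks)
```

No single step is malicious; the narrowing emerges purely from the curate-on-clicks loop.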

Consider this: fewer than 1% of people venture beyond Google’s first page, handing the ranking algorithm enormous control over the information we encounter.

Source — Design Ethics and the Limits of the Ethical Designer

Path forward

Early AX research aims to design more human-centered experiences. By emphasizing transparency, user control, and literacy, we can build digital experiences that are clear, equitable, and genuinely empowering for all.

Digital Literacy
Studies show significant gaps in algorithmic understanding, particularly among older adults, low-income individuals, and those with less education. These groups may trust outputs blindly, even when results are biased. Universal design principles and targeted education can help ensure everyone benefits from technology while preventing unfair exclusion.

Transparency & explainability
Help users understand what data is used, why decisions are made, and the reasoning behind them. This doesn’t require exposing complex code, only being more transparent about the process. Transparency allows users to spot potential errors or biases in algorithmic decisions.
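Such lightweight transparency can be sketched as attaching a plain-language reason to each recommendation, derived from the signal that drove it, without exposing the model itself. The signal names and reason strings below are hypothetical.

```python
def explain_recommendation(item: str, signals: dict[str, float]) -> str:
    """Return the item with a human-readable reason based on its strongest signal."""
    top_signal = max(signals, key=signals.get)
    reasons = {
        "watched_similar": "you watched similar videos",
        "followed_creator": "you follow this creator",
        "trending_region": "it is trending in your region",
    }
    return f"Recommended '{item}' because {reasons.get(top_signal, 'of your activity')}."

print(explain_recommendation("cooking reel",
                             {"watched_similar": 0.8, "trending_region": 0.3}))
# → Recommended 'cooking reel' because you watched similar videos.
```

The user learns which of their behaviors mattered most, which is often enough to spot an error ("I never watch cooking videos") and contest it.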

Research shows that explainability fosters trust: Stanford students who saw how their grades were calculated trusted the algorithm more, even with lower scores.

The field of Explainable Artificial Intelligence (XAI) explores methods that allow human users to comprehend and trust the output produced by machine learning algorithms.

User control
Provide meaningful ways for users to influence or override algorithmic outputs. Even minimal interventions such as TikTok’s “Not Interested” button can significantly restore user agency and system trust.
More sophisticated approaches include letting users adjust algorithmic parameters directly.
If a black-box model makes wrong decisions or produces inaccurate or harmful outputs, it can be difficult to adjust the model or provide feedback.
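A "not interested" control in the spirit of TikTok's button (the real mechanism is not public, so this is purely an assumed design) can be sketched as an explicit down-weighting of a topic in the user's profile, so future rankings reflect the override.

```python
def not_interested(profile: dict[str, float],
                   topic: str, penalty: float = 0.5) -> dict[str, float]:
    """Return an updated interest profile with the topic's weight reduced."""
    updated = dict(profile)  # copy: don't mutate the caller's profile
    updated[topic] = round(max(0.0, updated.get(topic, 0.0) - penalty), 2)
    return updated

profile = {"cooking": 0.9, "sports": 0.6}
profile = not_interested(profile, "sports")
print(profile)  # {'cooking': 0.9, 'sports': 0.1}
```

Even this minimal intervention gives users a causal lever: they act, and the system's future behavior visibly changes in response.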

The Effects of Explainability and User Control on Algorithmic Transparency: The Moderating Role of Algorithmic Literacy (2024)

As we become more dependent on digital products and algorithms become increasingly ubiquitous, the field of Algorithmic Experience (AX) will help us improve these interactions and examine how these systems shape choices, trust, and behavior.

Collaboratively written by Yerko Ortiz Mora and Alexandra Lebleu
