From Interface to Behavior: The New UX Engineering

Last Updated on April 2, 2026 by Editorial Team

Author(s): Yelpin Sergey

Originally published on Towards AI.

Agentic UX is the next step in the evolution of interfaces. Services are learning to listen to the user, understand intent, and act on their own — moving beyond familiar buttons and forms.

This article explores what agentic interaction is, what skills designers now need, how to design system behavior, what mistakes to avoid, and how to integrate the AX approach into your workflow.


Traditionally, a UX designer was responsible for the visual mechanics of interaction: where to place a button, how a user fills out a form, and in what order screens appear.

The main goal was to make the path clear and manageable, so the user would not get lost, feel overloaded, or be left wondering what to do next.

Designers built the rhythm of the interface: what appears on screen, when, and with what emphasis. They managed attention like a director manages lighting and movement on stage.

Today, this work does not disappear — but it is supplemented by a new focus: designing the behavior of agent-based systems.

Where there used to be a button — there is now dialogue.
Where there were forms — there are now intentions.

The user no longer looks for what to click — they express an intent, and the system responds with an action.

This is how Agentic UX (AX) takes shape: an interaction model where the primary object of design is not the screen, but the behavior of the system.

1. What Is an Agent in UX

An agent is not a chatbot with prewritten answers.

It’s a digital performer that understands user intent, clarifies details, and acts on its own. It doesn’t wait for clicks — it collaborates.

In the past, the user followed a path of “select → fill out → confirm,” but now the agent performs these steps autonomously, asking only for what truly matters.

An agent represents a new layer of UX — one where interaction is built not through buttons, but through meaning and context.

An agent can exist inside an application, as part of a website, or as a standalone service. Its defining quality is that it drives the scenario rather than waiting for the user to initiate action.

Examples of existing agentic solutions:

- Work and productivity systems:

  • Microsoft Copilot — creates documents, emails, and summaries directly from chat, leveraging Microsoft Graph context and connected services (Outlook, Excel, Teams).
  • Google Duet AI — writes emails, builds presentations, and formats reports based on textual descriptions.
  • Notion AI / Agents (3.0) — add tasks, update databases, and execute multi-step workflows while preserving contextual memory.

- E‑commerce and consumer services:

  • Amazon Rufus — a search assistant that answers questions like “What’s a good gift for a 5-year-old?”, analyzes reviews, and builds tailored recommendations.
  • Shopify Sidekick — a merchant assistant that analyzes a store, writes product descriptions, selects relevant items, and even configures necessary plugins.
  • Instacart “Ask Instacart” — helps users find groceries and adds them to the cart based on the meaning of the request.

- Design tools:

  • Figma AI / Figma Make — turns ideas into layouts, creating interface structures directly from text descriptions.
  • Adobe Photoshop (Firefly) — understands commands like “remove background” or “add light” and executes them automatically.
  • Canva Magic Studio — designs visuals and copy in a unified style based on the described task.

(Image: Figma AI “First Draft”)

- Development and coding:

  • GitHub Copilot Workspace — understands code, builds plans, fixes errors, and prepares pull requests.
  • Claude — Computer Use — an agent with “screen and cursor” capabilities that can click and type directly within the interface.
  • OpenAI Operator — performs actions on web pages such as scrolling, filling out forms, and completing purchases, essentially “working on behalf of the user.”
  • Netlify + ChatGPT — a “prompt-to-action” example: the agent receives a text description of a website and deploys a project on Netlify.

2. Agentic UX as the New Interaction Engineering

In agentic UX, designers are no longer creating interfaces in the traditional sense — they are constructing system behavior: how the agent understands a task, clarifies details, and responds with actions.

Agentic UX is not about visual composition, but about orchestrating meanings and reactions. Where the “user journey” was once a path across screens, it is now a scenario of mutual understanding between human and system.

2.1 A New Object of Design — The Meaning Loop

UX transforms into a behavioral loop: intent → interpretation → action → feedback → new intent.

Each turn of this loop can be designed just like animation or interface logic was before.

The designer’s challenge is to preserve the natural flow so the agent doesn’t seem “alien” or “smarter than necessary”.

For example:

When a user says, “Book a table for tomorrow,” the agent may clarify details like time, location, and preferences.

But the designer decides where to stop clarifying — to keep the conversation natural and prevent it from becoming an interrogation.

→ In the end, the designer controls not the screens, but the level of initiative the system demonstrates.
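The clarification loop above can be sketched as code. This is a minimal illustration under invented names, not a real agent framework: `interpret`, `next_step`, and the `max_clarifications` budget are all assumptions that show where the designer’s “stop clarifying” rule lives.

```python
# Minimal sketch of the behavioral loop: intent -> interpretation -> action.
# All names here are illustrative assumptions, not a real agent framework.

def interpret(utterance: str) -> dict:
    """Naively map an utterance to an intent with open (None) slots."""
    if "book a table" in utterance.lower():
        return {"action": "book_table", "date": "tomorrow", "time": None}
    return {"action": "unknown"}

def next_step(intent: dict, asked: int, max_clarifications: int = 1) -> str:
    """Clarify a missing slot, or act once the question budget is spent.
    The designer's "where to stop clarifying" decision is this one check."""
    missing = [slot for slot, value in intent.items() if value is None]
    if missing and asked < max_clarifications:
        return f"clarify:{missing[0]}"
    return "act"

intent = interpret("Book a table for tomorrow")
print(next_step(intent, asked=0))  # clarify:time -- one question is allowed
print(next_step(intent, asked=1))  # act -- budget spent, fall back to defaults
```

Raising `max_clarifications` makes the agent more thorough but more interrogative; where to set it is a design decision, not a model property.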

2.2 Behavioral Directing

An agent should behave as if it is part of the user’s context, not just a static interface. Agents now have tone, pauses, hesitation, and empathy — all of which become new tools for UX design.

The UX designer is now a director of reactions:

  • how the agent responds to an error,
  • how it expresses uncertainty,
  • how it shifts initiative back and forth.

In the past, an interface might display “404 error.” Now, the agent says, “It seems that event doesn’t exist. Would you like to create a new one?”

This is no longer just text — it’s an act of interaction, carefully planned in tone and delivery.

3. Quick Guide: How to Design Agent Behavior

  1. Define the point of intent (what the user wants).
  2. Script the agent’s reaction (what it does, what it clarifies).
  3. Adjust initiative (when the agent takes over, when it returns control to the user).
  4. Add feedback and pauses (short confirmations, clarifications, emotional cues).
  5. Check the trust balance (if the agent is too smart, the user loses control; if too neutral, the sense of live interaction fades).

→ Why: To make the interaction feel alive and predictable — like a conversation, not a static interface.
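The five steps above can be treated as a reviewable artifact rather than tribal knowledge. A hedged sketch, assuming a simple dictionary structure with invented field names:

```python
# The five design steps as a checkable scenario structure (names invented).
scenario = {
    "intent": "publish a post tonight",                       # 1. point of intent
    "reaction": "Should I schedule it for 7 PM as usual?",    # 2. scripted reaction
    "initiative": "ask_first",                                # 3. initiative setting
    "feedback": "Scheduled for 7 PM. Want to edit it first?", # 4. feedback and pauses
    "trust_check": "user can cancel or change the time",      # 5. trust balance
}

REQUIRED = {"intent", "reaction", "initiative", "feedback", "trust_check"}

def scenario_complete(s: dict) -> bool:
    """A designed behavior is reviewable only when all five steps are stated."""
    return REQUIRED <= s.keys() and all(s[k] for k in REQUIRED)

print(scenario_complete(scenario))  # True
```

Writing scenarios this way makes gaps visible in review: a behavior with no stated trust check fails before it ever reaches the user.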

4. The Language of Interaction and Prompt Engineering

Agentic UX is closely connected to the emerging discipline of prompt engineering. Just as prompt engineering for visual models teaches us to describe an image by structuring meaning, AX design describes system behavior in the same way. The prompt becomes a specification for how the system should understand, act, and respond — so designers shape user experiences not just through interface elements, but through structured language and intent-first architecture.

In prompt engineering, designers guide AI not through interfaces, but through text.
Words become a compositional tool, where each element of a phrase sets the frame, lighting, tone, and context of the outcome.

In agentic interfaces, this principle works the same way: the prompt becomes a scenario for interaction, and the UX designer turns into an architect of meaning transitions.

They aren’t designing “what appears on screen,” but what the system should understand, clarify, and do in response.

Prompting is the design language for AI, and Agentic UX is the behavioral grammar for systems that are already fluent in that language.
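To make this concrete, here is what a prompt-as-behavior-specification might look like. The section names and rules below are assumptions for illustration, not a standard format:

```python
# An illustrative behavior spec: each line is a design decision about
# understanding, clarification, action, initiative, or tone.
BEHAVIOR_SPEC = """\
Role: scheduling assistant inside a calendar app.
Understand: extract {person, date, time} from the user's request.
Clarify: ask only for slots you cannot infer; at most one question per turn.
Act: create the event, then confirm it in a single sentence.
Initiative: never invite attendees or move other events without confirmation.
Tone: brief and neutral; offer one next step, not a menu of options.
"""

# The spec reads like interaction design, not code: it constrains behavior
# the same way a wireframe once constrained layout.
print("Initiative:" in BEHAVIOR_SPEC)  # True
```

Note that only one of these six lines is about wording; the rest govern behavior, which is exactly the shift the AX approach describes.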

5. The AX Blueprint Method: How to Design Behavior

AX Blueprint is a methodology for designing system behavior in dialog-based UX. It defines the logic by which an agent identifies intent, manages context, makes decisions, and communicates the outcome.

Designers are no longer building screens — they’re constructing a behavioral loop, ensuring each step of the system feels natural, predictable, and meaningful.

https://youtu.be/Ku9wgYkZ8eg

1. Intent — recognizing the user’s task

The first layer of AX Blueprint is intent recognition. The designer’s job is to ensure the agent correctly understands what the user wants to accomplish, even if the phrasing is incomplete or conversational.

Examples:

User: “Schedule a meeting with Alex for tomorrow.”
Agent: “What time should I schedule it?”

User: “Publish a post tonight.”
Agent: “Should I schedule it for 7 PM as usual?”

The agent should clarify only what’s genuinely relevant for the action. The UX designer sets which parameters the agent can interpret independently and where clarification is necessary.

→ The goal is to reduce cognitive load for the user, while maintaining a sense of control and transparency.

2. Memory — using context consciously

The agent should remember the user’s past actions, but apply them thoughtfully and transparently. Context helps to speed up repetitive scenarios, but shouldn’t feel intrusive.

Example:

User: “Make a report like yesterday, but for October.”
Agent: “Okay, same format, just filtered for October?”

→ Principle: Memory serves speed and convenience, but never violates privacy or the predictability of system behavior.
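The transparency principle can be sketched in code: the agent reuses stored context but restates it in its reply, so nothing it remembered stays hidden. The session structure and function names here are invented for illustration.

```python
# Sketch of conscious context reuse: merge remembered defaults with the new
# request, and surface what was reused back to the user.

def apply_context(request: dict, session: dict) -> tuple[dict, str]:
    prior = session.get("last_report", {})
    stated = {k: v for k, v in request.items() if v is not None}
    merged = {**prior, **stated}  # explicit values override remembered ones
    # Restate the reused context -- memory must stay predictable, not hidden.
    note = (f"Okay, same format ({merged['format']}), "
            f"just filtered for {merged['filter']}?")
    return merged, note

session = {"last_report": {"format": "pdf", "filter": "September"}}
merged, note = apply_context({"filter": "October", "format": None}, session)
print(note)  # Okay, same format (pdf), just filtered for October?
```

The key design move is the `note`: the agent names what it carried over, turning memory from a hidden state into part of the conversation.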

3. Decision — regulating agent initiative

An agent can take the initiative, but only within clear boundaries. The UX designer sets the limits of autonomy: when the system can act on its own, when confirmation is needed, and when offering a choice is preferable.

Examples:

User: “Order supplies, like last time.”
Agent: “Same items and supplier — should I place the order?”

User: “Prepare a report for the client.”
Agent: “I’ll gather data for last week and send a draft. Is that okay?”

→ The designer’s task is to manage the system’s initiative, preserving a sense of partnership rather than replacing the user’s actions.
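These boundaries can be written down as explicit policy rather than trusted to the model. A sketch with invented action names and thresholds:

```python
# Hypothetical autonomy policy: when to confirm, act-and-report, or offer choices.
# Action names and the cost threshold are illustrative assumptions.

def decision_mode(action: str, cost: float, reversible: bool) -> str:
    """Return how much initiative the agent may take for this action."""
    if not reversible or cost > 100:
        return "confirm"         # "Same items and supplier -- place the order?"
    if action in {"draft_report", "reorder_supplies"}:
        return "act_and_report"  # known, safe territory: do it, then report
    return "offer_choice"        # unfamiliar action: present options instead

print(decision_mode("reorder_supplies", cost=40, reversible=True))  # act_and_report
print(decision_mode("book_flight", cost=40, reversible=False))      # confirm
```

Keeping the policy outside the model means the designer, not the prompt of the day, decides where partnership ends and overreach begins.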

4. Feedback — communicating action and returning control

The final layer is feedback. The system’s response should be informative, concise, and allow quick correction of the result.

Examples:

Agent: “The draft is ready. Would you like to review it before sending?”
Agent: “The event has been created. Add a description now or later?”

This feedback loop keeps the user informed and maintains control of the interaction.

How to apply AX Blueprint:

  1. Describe the user’s task in the form of a short dialogue — from intent to outcome.
  2. Break the scenario into four layers: Intent, Memory, Decision, Feedback.
  3. Check each step for naturalness and predictability: where the agent understands, where it clarifies, where it acts, and where it informs.
  4. Ensure the user remains the focus — the system helps, but never forces decisions.

AX Blueprint changes the designer’s role: from interface architect to architect of behavior and context.

Design is no longer limited to visuals — it becomes a scenario of interaction, where every system action is intentional and explainable.

6. Typical Mistakes in Agent Interfaces

  • The agent takes initiative without the user’s permission.
  • It overloads the user with clarifications, turning the interaction into an “interrogation.”
  • Its feedback is excessive or fails to explain the result.
  • The user loses the sense of control.

Conclusion

Agentic UX transforms the very nature of interfaces. Rather than constructing the user’s journey, we design the system’s behavior within that journey.

The visual layer remains, but no longer at the center. Now, the focus is on how the system understands intent, clarifies context, and chooses when to act.

For designers, this is a shift from layout to direction: from pixel to meaning, from click to dialogue.

UX becomes a scenario of interaction between user and agent.

Thank you!

Published via Towards AI


Note: Article content contains the views of the contributing authors and not Towards AI.