

Qualities of Great AI Experiences

Last Updated on January 25, 2024 by Editorial Team

Author(s): Elaine

Originally published on Towards AI.

For Designers interested in AI, Managers developing processes around AI, Organizations adopting AI into user experiences, & anyone curious about the truth in designing great AI experiences

While researching Design & AI at Carnegie Mellon and designing ML products at Apple, I conducted 4 experiments to find out what makes great AI experiences.

The learnings were remarkable, revealing the truth about designing for AI. They came from 65 students learning about AI, hobbyists interested in AI, professors pioneering research in AI, and experienced professionals working on many of the AI products we read about in headlines.

Notable AI Examples: Consumer, Enterprise, Dev Tools, Robotics

TL;DR: What We Already Know

When working with AI, several foundational challenges emerge in practice. These challenges are well documented in research:

  1. There’s a learning curve to get started pitching designs.
  2. Understanding model inputs and outputs is a complex process.
  3. Designers struggle to innovate with AI.
  4. Designers tend to rely on ML experts as a proxy to understand inference systems, but ML expertise is scarce.
  5. Designers tend to come up with imaginative ideas that can’t be built; Engineers and Data Scientists tend to come up with ideas nobody wants… or are still in search of a problem to solve.

Learn more in Hype vs. Reality: Foundational Challenges when Designing with AI

Most research today focuses on sensitizing methods — ways to help practitioners better understand AI mechanisms and capabilities in order to work with it more effectively.

Limited research investigates how people perceive and experience pleasurable AI, and how this understanding can inform the design of products.

What’s New?

A new frame for approaching AI experience design.

Something interesting emerged from research. Today, most tools supporting product teams working with AI focus on sensitizing people to what the technology can realistically achieve.

These take many forms: experiential systems that let people play with AI to get a sense of what it can do; educational courses and knowledge-sharing resources within organizations; and design guidance from Google’s People + AI Guidebook, Apple’s Human Interface Guidelines for ML, Microsoft’s Guidelines for Human-AI Interaction, articles from a handful of AI startups, etc. There are also many low-code, no-code ML creator tools, such as Apple’s Create ML, Google’s Teachable Machine, AutoML, AutoDraw, Microsoft’s Lobe.ai, Semantic Kernel, and more.

These tools make it easier for people to work with AI. With the exception of Apple’s Human Interface Guidelines for ML, however, they do not tell you how to think differently about AI — particularly through the lens of aesthetics and human experience.

The design of great AI products emphasizes their experiential qualities.

Qualities of Great AI Experiences

Throughout history, design innovation has consistently emerged from a foundation of material understanding. Designers navigate the creative process through meticulous study and manipulation of materials — pushing boundaries, challenging conventions, crafting novel assemblies of already known materials, and then presenting unique points of view in highly visual and tangible forms, ultimately revealing to the world new ideas of what’s possible with great artistry.

AI is another design material. Its materiality is just not well known.

Research shows that even experienced Designers struggle when envisioning AI — knowing what it can reasonably do, what forms it can take, and how to apply it to the right human problems. Design has historically worked with known materials; the inner workings of AI, however, are generally unknown.

Extrapolating from academic & industry research over the past decade, and supported by Experiments 1–4 in a later section, a place emerges where design can have a unique impact.

This is not the technical layer. It is not the process, since repeatable success is still unclear. And it is not design patterns, since patterns imply a level of maturity AI has not yet reached. Instead, this is a fundamental practice of material study, observation, & critique — already native to the design discipline. It’s a place where design can work independently and be valuable.

By identifying the qualities of great AI experiences, Designers and AI Developers may hone their sensitivity toward what truly delightful and innovative experiences look and feel like.

By leveraging the unique materiality of AI, Designers and AI Developers may be better positioned to envision new forms of human-AI interaction.

Below is a curated list of the qualities of remarkable AI experiences — at your disposal when conceptualizing when and why AI can uniquely add value to human experiences.

These qualities were informed by discussions with the Authors and Professors of HCI & AI Design research at CMU. They were also identified through reviewing AI products in the market, my reflections from designing ML products in the industry, and findings from 4 Design Experiments described in the next section.

Great AI Experiences are Beyond Expectations
Great AI Experiences Anticipate Needs
Great AI Experiences Engage Imagination
Great AI Experiences are Optional

Great AI experiences are beyond expectations. They anticipate needs. They engage imagination. They are often optional, giving control to the user, with appropriate fallbacks built in.

Not every quality will apply to every AI experience in every situation. But together they offer a starting point for describing the experiential qualities only AI products can deliver: experiences that offer remarkable convenience, augmentation, simplicity, even unexpectedness. And they feel like magic.

4 Research Experiments

These experiments reveal the truth about envisioning with AI, aspirations for AI, and why people struggle.

Experiment 1 shows challenges Designers and product teams face in practice when working with AI.

Experiment 1

Interviews with 16 AI practitioners, from startups to large tech companies. Some people have switched between multiple AI companies. Many work on products we read about in the news today.

This study uncovered challenges practitioners face in the early stages of product development when working with AI, highlighting unclear processes and differing design approaches.

16 Participants from Design, Product, and Engineering
  1. Designers claim they don’t know AI/ML technical details, but can provide examples of what it can do, usually with stories and metaphors.
  2. There’s no consistent definition of AI. Everyone described AI at a different level of abstraction, from high level,
    “Working with AI is like trying to solve problems by answering questions using data and data methodologies” (E3),
    to highly specific,
    “generative AI compares images against each other in ways that humans cannot, to the point that it’s able to understand what exactly each thing is in its own way, and then create it” (D2).
  3. Perception of AI/ML depends on the products people work on. This generally did not translate to knowing how AI should be treated across other applications, like enterprise vs. consumer AI. There are exceptions, such as design teams working on AI/ML tools, research teams, and leadership who more often consider broader implications.
  4. People have different expectations for accuracy & appropriateness. Transparency, explainability, and high accuracy are crucial in enterprise products and in situations with risk. Opinions varied for consumer products. Some people are more tolerant of AI failures than others. Proactive AI experiences such as recommendations were impressive to some people, but annoying to others.
  5. People reported encountering misconceptions about AI regularly. Internally, it’s hard to communicate about AI across interdisciplinary teams without examples. Externally, even when customers were shown examples, they care about outcomes, not technology:
    “I interview users and creators day in and day out. The average person has no idea what AI is, how it works, truly. They just want to have a good experience” (D3).
  6. Understanding technology is a common challenge. Designers and Product Managers in this study mentioned the need to understand technology capabilities and limitations reasonably well in order to teach their broader team. AI Developers mentioned the ways they explain AI vary greatly in level of abstraction, depending on how much knowledge others already have.
  7. There’s no repeatable process. The most commonly cited path to successful AI products is active matchmaking between human needs and technology capabilities, and active co-creation between designers, engineers, and others who can bring ideas to life.

Experiment 2 demonstrated great potential for design & engineering to work together to generate interesting AI use cases, even with variable technical knowledge. It explored this potential through collaborative ideation with AI.

Experiment 2

AI ideation workshops with 7 groups of Design & Engineering students. Each group was asked to brainstorm many AI experiences and then develop a storyboard for the most promising one. To scope the exercise, they were given three parameters: a human emotion, an interaction method, and an AI capability.

  1. People desired more examples of AI. Participants struggled to think of many AI examples. Only a few generic examples of precedent came to mind — recommender systems, content generation, chatbots, voice assistants, and autopilot. When asked to critique and draw inspiration from known AI examples, participants struggled to describe (1) their technical attributes and (2) why those experiences are fundamentally compelling.
  2. How to justify building new AI products? Most groups simply assumed their ideas would be difficult to build. Many struggled to think of the right hardware and software systems needed to support their designs. Considering feasibility, students were uncertain how to justify value and costs.
  3. What’s possible today? In a post-activity survey, all students claimed that it was at least “somewhat challenging” to envision the future. They wished for a better understanding of the current state of AI and trends to ground their ideas in reality.
  4. AI seems too abstract to make tangible, at first. Participants affirmed the workshop helped them come up with ideas, especially grounded in a human-centered perspective with the human emotion prompt. AI became more approachable to ideate with throughout the session.

The challenges students faced were generally consistent with those faced by experienced Designers and practitioners. Although feasibility was unclear, the ideas were pleasurable and imaginative. Ultimately, students came up with many novel human-AI interactions in the form of embodied AI, AI-enabled IoT, and device interfaces.

Comment or message me if you want the workshop template.

The ideation workshop yielded valuable insights, but did not reflect broader opinions about AI desirability and appropriateness.

Experiment 3 represents a sampling of public opinion. People envisioned AI as a collaborator, a companion, something that simplifies and supports life.

Experiment 3

I asked 23 people about their needs & desires for AI: undergraduate students, master’s students, professors, researchers, & friends working on AI products.

Interestingly, all needs people expressed were aspirational rather than critical.

  1. Desirability. People wanted AI that can make their lives simpler, streamline activities and workflows, enhance experiences, or just make something feel more enjoyable.
  2. Types of task support.
    (1) Help with routines — this could be something as simple as waking up, doing the laundry, or weather reminders
    (2) Find information — reducing the time it takes to find something or get to the insights people ultimately want
    (3) Support decisions — when there are too few or too many options, and when there is too little or too much information
    (4) Support lifestyle — help with staying active, healthy, on time, etc.
    (5) Provide entertainment — making tasks feel more enjoyable, and experiences more pleasurable
  3. Form factor. Embodied AI was most commonly described. People described situations where AI is embedded in life regardless of form factor. But when asked to specifically describe a form, people associated AI with an object, device, something tangible.
  4. Role of AI in life. Everyone envisioned AI to assist and enhance life. Everyone expressed they did not want AI to compete with decision-making or creativity. Everyone envisioned their ideal “AI” as context-aware — helping with functional tasks that improve quality of life, integrated into the various contexts of daily life.

The use of AI is hard to detect. So what types of AI experiences can Designers and AI Developers notice? What does their critique of these AI experiences tell us about the current stage of AI development?

Findings reveal difficulties, and some interesting patterns.

Experiment 4

AI is embedded in many products and services we interact with daily. But general awareness about AI is low. Only 27% of Americans say they interact with AI at least several times a day. 28% think they interact with AI about once a day or several times a week. 44% think they do not regularly interact with AI at all (Pew Research, 2023).

15 participants documented 55 AI features they encountered throughout a week. All participants have some prior experience working with AI. They were sent a daily reminder to submit a diary entry by answering multiple-choice and free-form text entry questions for each AI capability they found.

Some follow-ups were scheduled after the study to dive deeper into participants’ responses.

15 Participants who have worked with AI

Overall Findings

The #1 reported challenge is knowing whether experiences have AI. Is it a heuristics-based rule system? Is it actually IoT and sensors?

Even when sent a daily reminder to document AI experiences, many participants found it difficult. The original ask was to document 5 AI experiences, later modified to at least 3 AI experiences based on this feedback.

Most AI documented were from consumer products, with a handful from enterprise products. All documented features had obvious, visual interface elements. Almost none were AI capabilities that operate in the background.

Interestingly, while people in Experiment 4 documented only AI in interfaces, the desired role of AI was reported as interface-agnostic, embedded in life mostly through physical objects (see Experiment 3).

Designers and Engineers think differently about AI improvements. When asked to critique a specific AI feature, Designers thought about experience enhancements & use case expansion. Engineers thought about use case depth, increasing the accuracy and reliability of current solutions to make them more useful.

Human-AI Interaction. Designers frequently identified handoff points between AI and humans — when AI should execute, and when humans should take control. Engineers were generally open to automating everything.

Critiquing AI value is rare. Participants said the study allowed them to think more critically about whether AI is truly intended for the benefit of the user or the business. Participants said they rarely critiqued existing AI experiences as holistically as the study’s questions probed them — from technology, design, customer, business, appropriateness, risk, and innovation opportunity perspectives.

Detailed Results

Value of AI

Participants expressed confidence in AI enabling convenience, improving efficiency, enhancing experiences, & providing new capabilities.

Experience & Business Value

Expectations for AI

The majority of AI features documented were from consumer products. Expectations for accuracy were highly variable. On average, people expect that AI needs to work at least 65% of the time to be valuable. Tolerance of AI not meeting expectations was 47%.

This suggests for consumer products, moderate AI performance can still be useful.

Failures and Risks

Top concerns when AI fails were privacy, fairness, safety and performance.

Designers tend to question information sources, skeptical of when AI can & should be trusted. Engineers generally assume better technology would bypass these issues — resulting in better experiences, fewer failures, less unpredictability.

Appropriateness and Desirability

From most to least desirable: AI that is reactive to user inputs and preferences (29.1%), has contextual awareness (29.1%), has domain expertise (20%), and has limited memory (18.2%).

AI with self-awareness was ranked last by 63.4% of participants.

Conclusion

Findings were broad, but each of the 4 experiments showed patterns regarding the aspects of AI that Designers and AI Developers respond to.

Experiment 1 revealed experienced AI practitioners are optimistic about AI enabling unique experiences, but they face tactical challenges.

Experiment 2 demonstrated great potential for design & engineering students to generate interesting new human-AI interactions based on their own experiences, even without deep technical expertise.

Experiment 3 showed people have aspirational rather than critical needs for AI — from supporting mundane tasks to enabling extraordinary experiences that make activities feel more enjoyable.

Experiment 4 showed AI is difficult to notice and hard to critique. Designers tend to think more about AI appropriateness; Engineers think more about technology improvement & enablement.

The main contribution of this piece is identifying a place where design can be independently valuable when working with AI.

Contribution

These experiments offer a rare sampling of how people work with AI, what they notice about it, and what they desire from it. The experiments revealed a lot of optimism about using AI as a design material (Experiments 1, 2). Ideating with AI also led to many imaginative, aspirational ideas about future intelligent experiences (Experiments 2, 3).

However, AI is hard to notice and critique (Experiment 4). Designers and AI Developers struggled to know whether AI was used at all. They rarely identified non-obvious examples of AI features beyond visible elements in user interfaces. Interestingly, they were also unable to identify numerous compelling AI experiences, despite well-known examples where AI has historically captivated people upon first discovery:

Examples: Facial recognition, chatbots, driverless vehicles, autonomous robots, voice assistants, self-optimizing heating and cooling systems, smart appliances, healthcare diagnostics, interactive art exhibits, fraud detection, predictive maintenance in industrial machinery, weather forecasts, car crash and fall detection, and more.

Across these known examples of AI, a pattern becomes apparent. With AI, even ordinary experiences could be designed into extraordinary ones. With AI, even a little intelligence paired with the right human problem to solve can be greatly valuable.

By starting to see AI through its unique experiential qualities, Designers and AI Developers may begin to develop greater sensitivity towards what makes for truly great AI experiences.

Thank you to my advisors at CMU, Professor John Zimmerman and Professor Bruce Hanington, everyone who provided feedback and participated in my research, & colleagues and friends who have shared valuable insights and conversation with me about design and AI/ML.

Elaine designs human-AI interactions for robotics, with experience in AI/ML consumer and enterprise products, ML tools, & research in interaction design.

Thank you for reading 👏🏼
Comment with what you’d like to know more about.


Published via Towards AI
