
Words Matter: Are Language Barriers Driving Quiet Failures in AI?

Author(s): Kris Naleszkiewicz

Originally published on Towards AI.

The AI revolution is upon us, transforming how we work, live, and interact with the world.

Yup. We know. We’ve all heard.

The media loves to cover spectacular successes and failures.

But what about the quiet failures? The stalled projects. The initiatives that never quite get off the ground. Not because the technology doesn’t work — but because something more human gets in the way.

Is this a familiar situation? You’re discussing an AI solution with a client. They’re excited. You’re excited. The initial meetings go great. “Let’s bring in more stakeholders!” they say. Soon, you’ve got the infrastructure team involved, five additional use cases under consideration, and another business executive at the table. The energy is palpable.

Everyone sees the potential.

And then… the fighting starts. Or maybe not fighting exactly, but something more subtle. Resistance. Friction. Suddenly, a project that had everyone thrilled hits a wall. Why? Everyone was excited. Everyone saw the value. What changed?

The answer might surprise you: it’s language.

Not Python or Java or SQL — but the everyday language we use to talk about AI. We have no shortage of technical challenges, but the most unexpected roadblocks to AI adoption often stem from a fundamental shift in how we need to work together. AI isn’t just the ‘new electricity’ of our age — it’s forcing unprecedented collaboration between groups that previously operated in comfortable silos.

When straightforward terms like “performance,” “explainability,” and “risk” carry such different meanings across teams, it’s no wonder some AI projects struggle to gain traction. These concepts form the foundation for discussing, evaluating, and implementing AI systems, but their meanings shift depending on who’s using them. This linguistic flexibility isn’t just a communication challenge — it’s a window into deeper questions about professional identity, authority, and the changing nature of expertise in an AI-augmented workplace.

As we introduce increasingly complex technical terminology around AI, these fundamental translation gaps only widen, creating invisible barriers that technical solutions alone cannot address.

Setting the Stage

We have all heard “AI is the new electricity,” but what that comparison misses is that when electricity transformed manufacturing, it didn’t just change how things were powered — it fundamentally restructured how people worked together.

The same thing is happening with AI but more broadly. Electricity mainly required engineers and operators to collaborate. AI? It’s forcing everyone to work together in unprecedented ways.

AI Enthusiasm Slipping Away. Image generated in DALL-E by author.

Data scientists need domain experts to understand the problems they’re solving. Business leaders need technical teams to understand the possibilities and limitations. Front-line workers need to collaborate with both groups to ensure solutions work in the real world.

And here’s the kicker — none of these groups are particularly good at talking to each other. Not because they don’t want to, but because they’ve never had to — at least not at this depth.

When Silos Crumble

Think about traditional technology implementations. You had clear handoffs: Business teams defined requirements, technical teams built solutions, and users learned to adapt. Everyone stayed in their lane and spoke their own language, and things mostly worked out.

AI doesn’t play that game.

When data scientists build a model, they need to understand the business context — not just surface-level requirements. When business teams deploy AI solutions, they need to understand more than just features and benefits — they need to grasp concepts like model drift and edge cases. And users? They’re not just learning new interfaces; they’re learning to collaborate with AI systems in ways that fundamentally change how they work.

This isn’t just cross-functional collaboration — it’s forced interdependence. And it’s causing friction in unexpected places.

LendAssist: An Illustrative Example

Let’s introduce LendAssist, an LLM-based mortgage lending assistant that we will use to illustrate this new reality.

On paper, it’s straightforward. An AI system designed to streamline mortgage lending decisions, reduce processing time, and improve accuracy. LendAssist’s struggles highlight a critical challenge in AI adoption: seemingly straightforward terms can have radically different meanings for different stakeholders, leading to miscommunication and misunderstanding.

What constitutes “performance” might be completely different for the data scientist building the product, the loan officer working with the product, or a customer interacting with the product.

Similarly, “explainability” can have varying levels of depth and complexity depending on the audience.

And “risk” can encompass a variety of issues and concerns, from technical failures to ethical dilemmas and job displacement.

In the following sections, we’ll explore these three key areas where language barriers arise.

Expertise Paradox in AI Adoption

Before we dive into specific challenges with LendAssist, let’s discuss the expertise paradox, a fundamental tension that underlies them all.

When LendAssist was first introduced, something unexpected happened. The strongest resistance didn’t come from technophobes or change-resistant employees — it came from the experienced loan officers and underwriters. The experts whose knowledge the system was designed to augment became its biggest skeptics.

Why? The rapid rise of AI presents a unique challenge for experts in traditional fields. It’s like suddenly finding yourself in a world where the game’s rules have changed, and your hard-earned expertise might not translate as seamlessly as you’d hoped.

This expertise paradox is a psychological and organizational hurdle that often gets overlooked in the excitement of AI adoption. Traditional tech leaders feel threatened by the need to start over as learners. Subject matter experts struggle with AI systems that challenge their domain expertise. There is a tension between deep knowledge of traditional systems and the need to adapt to AI-driven approaches.

Organizations often face a delicate balancing act. They need to leverage their existing experts’ valuable experience while embracing AI’s transformative potential. This creates tension and uncertainty as teams grapple with integrating traditional knowledge with AI capabilities.

Through my work with AI implementations, I’ve noticed a consistent pattern in how experts respond to this challenge. It typically manifests as three competing pressures I’ve started mapping out to help teams understand what’s happening.

Maintaining Credibility: “I still know what I’m doing”

Experts feel intense pressure to demonstrate that their knowledge remains relevant and valuable. I’ve watched seasoned loan officers, for instance, struggle to show how their years of experience still matter when an AI system seems to make decisions in milliseconds.

Embracing Change: “I need to adapt to AI”

At the same time, these experts recognize they need to evolve. This isn’t just about learning new tools — it’s about fundamentally rethinking how they apply their expertise. I’ve seen loan officers transform from decision-makers to decision interpreters, but this shift rarely comes easily.

Preserving Value: “My experience matters”

Perhaps most importantly, experts need to find ways to show how their experience enhances AI capabilities rather than being replaced by them. The most successful transitions I’ve observed happen when experts can clearly see how their knowledge makes the AI better, not obsolete.

The key to successful AI adoption is finding a balance between these three corners. Experts need to acknowledge the limitations of their existing knowledge, embrace the learning process, and find ways to leverage AI to enhance their expertise rather than viewing it as a threat.

Despite these challenges, there are inspiring examples of experts successfully navigating the expertise paradox. These individuals embrace AI as a tool to augment their expertise and guide others in adapting to AI-driven approaches.

GenAI Rollouts by Maturity. (McKinsey, 2025)

This could explain a puzzling trend in AI adoption. A McKinsey survey completed in November 2024 and published in January 2025 in Superagency in the Workplace: Empowering people to unlock AI’s full potential found that while one-quarter of executives have defined a GenAI roadmap, just over half remain stuck in the “draft being refined” stage.

The technical capabilities exist, but organizations struggle with the human side of implementation. As technology continues evolving at breakneck speed, roadmaps must be built to evolve — but we should recognize that many of the barriers aren’t technical at all.

These invisible psychological and organizational traps repeatedly derail even the most promising AI initiatives.

Performance — A Multifaceted Challenge

The data science team is ecstatic. LendAssist’s new fraud detection model boasts a 98% accuracy rate in their meticulously crafted testing environment. Champagne corks pop, high-fives are exchanged, and LinkedIn posts are drafted. But the celebration is short-lived. The operations team pushes back, overwhelmed by a 30% increase in false positives that clog their workflows.

Meanwhile, the IT infrastructure team grapples with the model’s insatiable appetite for computing resources.

And the business leaders, well, they’re left wondering why those key performance indicators (KPIs) haven’t budged an inch.

Welcome to the performance paradox of AI adoption, where impressive technical achievements often clash with the messy realities of real-world implementation.

Performance in AI is a chameleon, adapting its meaning depending on who’s using the word. To truly understand this multifaceted challenge, we need to dissect “performance” through the lens of different stakeholders:

Business Performance: The language of executives and shareholders focuses on the bottom line. Does LendAssist increase revenue? Does it reduce costs? Does it improve customer satisfaction and retention? Does it boost market share?

Technical Performance: This is the domain of data scientists and engineers who are focused on metrics and algorithms. How accurate is LendAssist’s risk assessment model? What’s its precision and recall? How does it compare to traditional credit scoring methods in terms of AUC and F1-score? (A short sketch after this list shows how these metrics can diverge.)

Operational Performance: This is the realm of IT and operations teams concerned with utilization, efficiency, and scalability. How fast does LendAssist process loan applications? How much computing power does it consume? Can it handle peak loads without crashing? How easily does it integrate with existing systems?

Human Performance: This is the often-overlooked dimension, focusing on the impact of AI on human workers. Does LendAssist make loan officers more productive? Does it reduce errors and improve decision-making? Does it enhance job satisfaction or create anxiety and resistance?
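To make the gap between these definitions concrete, here is a minimal Python sketch using purely hypothetical confusion-matrix counts (they are invented for illustration and are not LendAssist’s actual figures). It shows how a model can post a headline accuracy near 98% while most of its fraud flags are false positives that become manual reviews for the operations team.

```python
# Hypothetical confusion-matrix counts for an imbalanced fraud-detection setting.
# These numbers are illustrative only; they are not LendAssist's real figures.
tp = 80      # fraudulent applications correctly flagged
fn = 20      # fraudulent applications missed
fp = 900     # legitimate applications incorrectly flagged (each one becomes a manual review)
tn = 49_000  # legitimate applications correctly passed through

total = tp + fn + fp + tn

accuracy = (tp + tn) / total            # the headline number the lab celebrates
precision = tp / (tp + fp)              # share of flags that are actually fraud
recall = tp / (tp + fn)                 # share of fraud that gets caught
f1 = 2 * precision * recall / (precision + recall)

print(f"accuracy:  {accuracy:.3f}")     # ~0.982, i.e., "98% accurate"
print(f"precision: {precision:.3f}")    # ~0.082, most flags are false alarms
print(f"recall:    {recall:.3f}")       # 0.800
print(f"f1:        {f1:.3f}")           # ~0.148
print(f"manual reviews per {total:,} applications: {tp + fp:,}")
```

The exact numbers don’t matter; the point is that technical, operational, business, and human performance are measured on different scales, and a single metric like accuracy says almost nothing about review workload, infrastructure cost, or the KPIs other stakeholders care about.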

But performance challenges are just the beginning.

When different groups can’t even agree on what “good performance” means, how do they explain their decisions to each other — or, more importantly, to customers?

This brings us to an even thornier challenge: the crisis of explainability.

Explainability — The Black Box Dilemma

A loan officer sits across from a client who’s just been denied a mortgage by LendAssist. The client, understandably bewildered, asks, ‘Why?’ The loan officer, with 20 years of experience explaining such decisions, finds herself staring blankly at the screen, unable to provide a clear answer. This isn’t just about a declined mortgage — it’s about a fundamental shift in professional authority, a moment where human expertise collides with the opacity of AI.

Explainable AI (XAI) is no longer a luxury; it’s required to maintain trust, ensure responsible AI development, and navigate the evolving landscape of professional expertise.

However, “explainability” itself has layers of meaning for different stakeholders, too.

Technical Explainability Challenge: “Our model shows high feature importance for these variables…” This might satisfy data scientists, but it leaves business users and clients in the dark. How does LendAssist’s technical team explain the model’s risk assessment to the data science team in a technically sound and understandable way? (A small sketch after this list illustrates the gap.)

Process Explainability Challenge: “But how does this translate to our existing underwriting workflow?” Integrating AI into established processes requires explaining how it interacts with human decision-making. How does the data science team explain LendAssist’s integration into the loan approval process to the loan officers and underwriters, clarifying how it augments their existing expertise?

Decision Explainability Challenge: “How do we explain this to the customer?” Building trust with clients requires clear, understandable explanations of AI-driven decisions. How do loan officers explain LendAssist’s loan denial decision to the client in a transparent and empathetic way without resorting to technical jargon?

Impact Explainability Challenge: “What does this mean for our business and regulatory compliance?” Understanding the broader implications of AI decisions is crucial for responsible adoption. How do executives explain LendAssist’s impact on loan origination volume, risk mitigation, and compliance to stakeholders and regulators in an informative and persuasive way?
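To ground the gap between technical and decision explainability, here is a minimal, hypothetical sketch in Python. The synthetic data, feature names, and logistic-regression model are invented for illustration and are not LendAssist’s actual implementation; the point is that the same model produces one artifact for the data science conversation (feature weights) and needs a very different one for the conversation with a customer (a plain-language reason).

```python
# A hypothetical sketch of "layered" explanations for a loan-approval model.
# Data, feature names, and model are invented; this is not how LendAssist works internally.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["debt_to_income", "credit_history_years", "late_payments"]

# Synthetic applicants: approval gets less likely with high DTI and late payments.
X = rng.normal(size=(500, 3))
y = ((-1.5 * X[:, 0] + 1.0 * X[:, 1] - 2.0 * X[:, 2] + rng.normal(size=500)) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Technical explainability: feature weights may satisfy the data scientists...
for name, coef in zip(features, model.coef_[0]):
    print(f"{name:>22}: weight {coef:+.2f}")

# Decision explainability: the loan officer needs a plain-language reason instead.
applicant = np.array([[2.0, -0.5, 1.5]])        # high DTI, thin history, recent late payments
prob = model.predict_proba(applicant)[0, 1]
contributions = model.coef_[0] * applicant[0]   # each feature's pull toward approval
main_factor = features[int(np.argmin(contributions))]
print(f"\nApproval probability: {prob:.0%}")
print(f"Largest factor working against this application: {main_factor}")
```

Neither output is wrong; they simply answer different questions, which is exactly why a single “explanation” rarely satisfies every stakeholder.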

Explainability isn’t just about understanding — it’s about authority.

When professionals can’t explain why decisions are made in their own domain, they lose not just control but their role as knowledge authorities. This can lead to resistance, fear of obsolescence, and difficulty integrating AI into existing workflows.

Risk — Navigating Uncertainty

The CTO champions LendAssist as the future of lending, painting a picture of streamlined workflows and data-driven decisions.

The compliance team, however, sees looming regulatory disasters haunted by visions of biased algorithms and data breaches.

Middle managers envision organizational chaos, with confused employees and disrupted workflows.

Loan officers on the front lines of client interaction fear professional extinction: being replaced by an emotionless algorithm that spits out loan approvals and denials with cold, hard efficiency.

Same technology, radically different risk landscapes.

However, these surface-level conflicts mask a deeper pattern that reveals how organizations and individuals process the fundamental changes AI brings.

Hidden Psychology of Risk When Talking about AI

We can break down this complex risk perception into four distinct levels:

Level 1: “What if it doesn’t work?” (Technical Risk) This is the most immediate and obvious concern. Will LendAssist’s AI models be accurate and reliable? Will the system be secure against cyberattacks? Will it comply with relevant regulations? But beneath these technical anxieties lies a deeper fear: losing control over familiar processes. When compliance officers obsess over LendAssist’s error rates, they often express anxiety about shifting from rule-based to probability-based decision-making. They’re grappling with the uncertainty inherent in AI systems, where outcomes aren’t always predictable or easily explained.

Level 2: “What if it works too well?” (Operational Risk) This is where things get interesting. As AI proves its capabilities, concerns shift from technical failures to operational disruptions. How will LendAssist impact the daily work of loan officers and underwriters? Will it disrupt existing processes and create confusion? Will it lead to job losses? But the real fear here is more personal: Will AI erode the value of human skills and experience? When loan officers worry about LendAssist processing applications too quickly, they’re asking, “Will speed make my experience irrelevant?” They’re grappling with the potential for AI to diminish their role and authority in the lending process.

Level 3: “What if it works differently than we expect?” (Strategic Risk) This level delves into the broader implications of AI adoption. Will LendAssist have unintended consequences? Will it disrupt the competitive landscape? Will it create new ethical dilemmas? But the underlying fear is about professional identity. When managers resist LendAssist’s recommendations, they are often protecting their identity as decision-makers more than questioning the AI’s judgment. They’re grappling with the potential for AI to redefine their roles and responsibilities, challenging their authority and expertise.

Level 4: “What if it changes who we are?” (Identity Risk) This is the deepest and most existential level of risk perception. Will LendAssist fundamentally change how we work and interact with each other? Will it alter our understanding of expertise and professional identity? Will it reshape our values and beliefs about the role of technology in our lives? This is where the fear of obsolescence truly takes hold. When senior underwriters label LendAssist “too risky,” they’re expressing fear about transitioning from decision-makers to decision-validators. They’re grappling with the potential for AI to transform their sense of self-worth and professional purpose.

The way technical and identity risks become intertwined makes AI risk assessment particularly challenging. When a loan officer says, “LendAssist’s risk models aren’t reliable enough,” they might be expressing fear of losing their ability to make judgment calls or anxiety about their role in the organization changing.

The more organizations focus on addressing technical risks, the more they might inadvertently amplify identity risks by suggesting that human judgment is secondary to AI capabilities. As AI systems like LendAssist become more capable, they don’t just present technical risks — they force us to reconsider what it means to be an expert in an AI-augmented world.

These layered challenges might seem insurmountable when viewed through a purely technical lens. After all, how do you solve a “technical” problem when the real issue lies in professional identity? How do you address “performance” concerns when different stakeholders define success in fundamentally different ways?

What I’ve found is that acknowledging these language barriers is the first crucial step toward overcoming them. When we recognize that resistance to AI adoption often stems from communication gaps rather than technological limitations, we open up new paths forward.

The Path Forward: A Practical Perspective

Once you recognize these language barriers, they become surprisingly manageable. We’re not just dealing with technical challenges — we’re dealing with translation challenges. We need to become multilingual in the different ways our stakeholders talk about and understand AI.

The organizations I’ve seen succeed with AI adoption aren’t just technically sophisticated — they’re linguistically sophisticated.

They create a shared vocabulary that respects different perspectives. They recognize expertise transitions as a core part of implementation and build bridges between technical and professional languages. They value communication skills as much as technical skills.

Conclusion

This isn’t just another factor to consider in AI adoption — it’s often the factor determining ‘go’ or ‘no go’ decisions.

The good news? While technical challenges typically require significant resources, language barriers can be addressed through awareness and intentional communication. We’re all figuring this out together, but recognizing how language shapes AI adoption has been one of the most potent insights for me. It’s changing how I approach projects, how I work with stakeholders, and, most importantly, how I help organizations navigate the fundamental changes AI brings to professional expertise.

The choice isn’t between technical excellence and human understanding — it’s about building bridges between them.

And sometimes, those bridges start with something as simple as recognizing that we might mean different things when we say “performance,” “explainability,” or “risk.”

Further reading and citation

Why AI Projects Fail and How They Can Succeed. www.rand.org

Keep Your AI Projects on Track. hbr.org

Superagency in the workplace: Empowering people to unlock AI’s full potential. www.mckinsey.com


Published via Towards AI
