
Words Matter: Are Language Barriers Driving Quiet Failures in AI?
Author(s): Kris Naleszkiewicz
Originally published on Towards AI.
The AI revolution is upon us, transforming how we work, live, and interact with the world.
Yup. We know. We've all heard.
The media loves to cover spectacular successes and failures.
But what about the quiet failures? The stalled projects. The initiatives that never quite get off the ground. Not because the technology doesn't work, but because something more human gets in the way.
Is this a familiar situation? You're discussing an AI solution with a client. They're excited. You're excited. The initial meetings go great. "Let's bring in more stakeholders!" they say. Soon, you've got the infrastructure team involved, five additional use cases under consideration, and another business executive at the table. The energy is palpable.
Everyone sees the potential.
And then... the fighting starts. Or maybe not fighting exactly, but something more subtle. Resistance. Friction. Suddenly, a project that had everyone thrilled hits a wall. Why? Everyone was excited. Everyone saw the value. What changed?
The answer might surprise you: it's language.
Not Python or Java or SQL, but the everyday language we use to talk about AI. We have no shortage of technical challenges, but the most unexpected roadblocks to AI adoption often stem from a fundamental shift in how we need to work together. AI isn't just the "new electricity" of our age; it's forcing unprecedented collaboration between groups that previously operated in comfortable silos.
When straightforward terms like "performance," "explainability," and "risk" carry such different meanings across teams, it's no wonder some AI projects struggle to gain traction. These concepts form the foundation for discussing, evaluating, and implementing AI systems, but their meanings shift depending on who's using them. This linguistic flexibility isn't just a communication challenge; it's a window into deeper questions about professional identity, authority, and the changing nature of expertise in an AI-augmented workplace.
As we introduce increasingly complex technical terminology around AI, these fundamental translation gaps only widen, creating invisible barriers that technical solutions alone cannot address.
Setting the Stage
We have all heard "AI is the new electricity," but what that comparison misses is that when electricity transformed manufacturing, it didn't just change how things were powered; it fundamentally restructured how people worked together.
The same thing is happening with AI, but more broadly. Electricity mainly required engineers and operators to collaborate. AI? It's forcing everyone to work together in unprecedented ways.
Data scientists need domain experts to understand the problems they're solving. Business leaders need technical teams to understand the possibilities and limitations. Front-line workers need to collaborate with both groups to ensure solutions work in the real world.
And here's the kicker: none of these groups are particularly good at talking to each other. Not because they don't want to, but because they've never had to, at least not at this depth.
When Silos Crumble
Think about traditional technology implementations. You had clear handoffs: Business teams defined requirements, technical teams built solutions, and users learned to adapt. Everyone stayed in their lane and spoke their own language, and things mostly worked out.
AI doesn't play that game.
When data scientists build a model, they need to understand the business context, not just surface-level requirements. When business teams deploy AI solutions, they need to understand more than just features and benefits; they need to grasp concepts like model drift and edge cases. And users? They're not just learning new interfaces; they're learning to collaborate with AI systems in ways that fundamentally change how they work.
This isn't just cross-functional collaboration; it's forced interdependence. And it's causing friction in unexpected places.
LendAssist: An Illustrative Example
Let's introduce LendAssist, an LLM-based mortgage lending assistant that we will use to illustrate this new reality.
On paper, it's straightforward: an AI system designed to streamline mortgage lending decisions, reduce processing time, and improve accuracy. Yet LendAssist's struggles highlight a critical challenge in AI adoption: seemingly straightforward terms can have radically different meanings for different stakeholders, leading to miscommunication and misunderstanding.
What constitutes "performance" for the data scientist building the product might be completely different for the loan officer working with the product or the customer interacting with it.
Similarly, "explainability" can have varying levels of depth and complexity depending on the audience.
And "risk" can encompass a variety of issues and concerns, from technical failures to ethical dilemmas and job displacement.
In the following sections, we'll explore these three key areas where language barriers arise.
Expertise Paradox in AI Adoption
Before we dive into specific challenges with LendAssist, let's discuss the expertise paradox, a fundamental tension that underlies them all.
When LendAssist was first introduced, something unexpected happened. The strongest resistance didn't come from technophobes or change-resistant employees; it came from the experienced loan officers and underwriters. The experts whose knowledge the system was designed to augment became its biggest skeptics.
Why? The rapid rise of AI presents a unique challenge for experts in traditional fields. It's like suddenly finding yourself in a world where the game's rules have changed, and your hard-earned expertise might not translate as seamlessly as you'd hoped.
This expertise paradox is a psychological and organizational hurdle that often gets overlooked in the excitement of AI adoption. Traditional tech leaders feel threatened by the need to start over as learners. Subject matter experts struggle with AI systems that challenge their domain expertise. And there is tension between deep knowledge of traditional systems and the need to adapt to AI-driven approaches.
Organizations often face a delicate balancing act. They need to leverage their existing experts' valuable experience while embracing AI's transformative potential. This creates tension and uncertainty as teams grapple with integrating traditional knowledge with AI capabilities.
Through my work with AI implementations, I've noticed a consistent pattern in how experts respond to this challenge. It typically manifests as three competing pressures I've started mapping out to help teams understand what's happening.
Maintaining Credibility: "I still know what I'm doing"
Experts feel intense pressure to demonstrate that their knowledge remains relevant and valuable. I've watched seasoned loan officers, for instance, struggle to show how their years of experience still matter when an AI system seems to make decisions in milliseconds.
Embracing Change: "I need to adapt to AI"
At the same time, these experts recognize they need to evolve. This isn't just about learning new tools; it's about fundamentally rethinking how they apply their expertise. I've seen loan officers transform from decision-makers to decision interpreters, but this shift rarely comes easily.
Preserving Value: "My experience matters"
Perhaps most importantly, experts need to find ways to show how their experience enhances AI capabilities rather than being replaced by them. The most successful transitions I've observed happen when experts can clearly see how their knowledge makes the AI better, not obsolete.
The key to successful AI adoption is finding a balance between these three pressures. Experts need to acknowledge the limitations of their existing knowledge, embrace the learning process, and find ways to leverage AI to enhance their expertise rather than viewing it as a threat.
Despite these challenges, there are inspiring examples of experts successfully navigating the expertise paradox. These individuals embrace AI as a tool to augment their expertise and guide others in adapting to AI-driven approaches.
This could explain a puzzling trend in AI adoption. A McKinsey survey completed in November 2024, published in January 2025, and discussed in Superagency in the Workplace: Empowering people to unlock AI's full potential found that while one-quarter of executives have defined a GenAI roadmap, just over half remain stuck in the "draft being refined" stage.
The technical capabilities exist, but organizations struggle with the human side of implementation. As technology continues evolving at breakneck speed, roadmaps must be built to evolve, but we should recognize that many of the barriers aren't technical at all.
These invisible psychological and organizational traps repeatedly derail even the most promising AI initiatives.
Performance: A Multifaceted Challenge
The data science team is ecstatic. LendAssist's new fraud detection model boasts a 98% accuracy rate in their meticulously crafted testing environment. Champagne corks pop, high-fives are exchanged, and LinkedIn posts are drafted. But the celebration is short-lived. The operations team pushes back, overwhelmed by a 30% increase in false positives that clog their workflows.
Meanwhile, the IT infrastructure team grapples with the model's insatiable appetite for computing resources.
And the business leaders, well, they're left wondering why those key performance indicators (KPIs) haven't budged an inch.
Welcome to the performance paradox of AI adoption, where impressive technical achievements often clash with the messy realities of real-world implementation.
Performance in AI is a chameleon, adapting its meaning depending on who's using the word. To truly understand this multifaceted challenge, we need to dissect "performance" through the lens of different stakeholders:
Business Performance: The language of executives and shareholders focuses on the bottom line. Does LendAssist increase revenue? Does it reduce costs? Does it improve customer satisfaction and retention? Does it boost market share?
Technical Performance: This is the domain of data scientists and engineers who are focused on metrics and algorithms. How accurate is LendAssist's risk assessment model? What are its precision and recall? How does it compare to traditional credit scoring methods on AUC and F1-score? (A small sketch after this list shows how these metrics can diverge.)
Operational Performance: This is the realm of IT and operations teams concerned with utilization, efficiency, and scalability. How fast does LendAssist process loan applications? How much computing power does it consume? Can it handle peak loads without crashing? How easily does it integrate with existing systems?
Human Performance: This is the often-overlooked dimension, focusing on the impact of AI on human workers. Does LendAssist make loan officers more productive? Does it reduce errors and improve decision-making? Does it enhance job satisfaction or create anxiety and resistance?
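To make the technical-versus-operational gap concrete, here is a minimal sketch in Python (using scikit-learn on invented, illustrative numbers, not LendAssist's actual figures) of how a fraud model can post an impressive accuracy score while still flooding reviewers with false alarms:

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Invented, illustrative data: 10,000 applications, 2% of which are fraudulent.
rng = np.random.default_rng(seed=0)
y_true = np.zeros(10_000, dtype=int)
y_true[:200] = 1  # 200 truly fraudulent applications

# A hypothetical model that catches most fraud but also flags legitimate applications.
y_pred = y_true.copy()
false_positive_idx = rng.choice(np.flatnonzero(y_true == 0), size=150, replace=False)
missed_fraud_idx = rng.choice(np.flatnonzero(y_true == 1), size=20, replace=False)
y_pred[false_positive_idx] = 1  # legitimate applications flagged as fraud
y_pred[missed_fraud_idx] = 0    # fraud the model misses

print(f"Accuracy:  {accuracy_score(y_true, y_pred):.3f}")   # ~0.983: the data science headline
print(f"Precision: {precision_score(y_true, y_pred):.3f}")  # ~0.545: nearly half of all flags are false alarms
print(f"Recall:    {recall_score(y_true, y_pred):.3f}")     # 0.900: most fraud is caught
print(f"F1 score:  {f1_score(y_true, y_pred):.3f}")         # ~0.679: the balance of the two
```

The data science team's headline number and the operations team's false-alarm workload come from the same confusion matrix; each stakeholder is simply reading different cells of it.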
But performance challenges are just the beginning.
When different groups can't even agree on what "good performance" means, how do they explain their decisions to each other or, more importantly, to customers?
This brings us to an even thornier challenge: the crisis of explainability.
Explainability: The Black Box Dilemma
A loan officer sits across from a client who's just been denied a mortgage by LendAssist. The client, understandably bewildered, asks, "Why?" The loan officer, with 20 years of experience explaining such decisions, finds herself staring blankly at the screen, unable to provide a clear answer. This isn't just about a declined mortgage; it's about a fundamental shift in professional authority, a moment where human expertise collides with the opacity of AI.
Explainable AI (XAI) is no longer a luxury; it's required to maintain trust, ensure responsible AI development, and navigate the evolving landscape of professional expertise.
However, "explainability" itself has layers of understanding for different stakeholders, too.
Technical Explainability Challenge: "Our model shows high feature importance for these variables..." This might satisfy data scientists, but it leaves business users and clients in the dark. How does LendAssist's technical team explain the model's risk assessment to the data science team in a technically sound and understandable way? (A sketch of this kind of output follows this list.)
Process Explainability Challenge: "But how does this translate to our existing underwriting workflow?" Integrating AI into established processes requires explaining how it interacts with human decision-making. How does the data science team explain LendAssist's integration into the loan approval process to the loan officers and underwriters, clarifying how it augments their existing expertise?
Decision Explainability Challenge: "How do we explain this to the customer?" Building trust with clients requires clear, understandable explanations of AI-driven decisions. How do loan officers explain LendAssist's loan denial decision to the client in a transparent and empathetic way without resorting to technical jargon?
Impact Explainability Challenge: "What does this mean for our business and regulatory compliance?" Understanding the broader implications of AI decisions is crucial for responsible adoption. How do executives explain LendAssist's impact on loan origination volume, risk mitigation, and compliance to stakeholders and regulators in an informative and persuasive way?
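To ground the technical layer, here is a minimal sketch in Python (scikit-learn on synthetic data, with hypothetical feature names rather than LendAssist's real model) that produces the kind of feature-importance output a data science team would point to, and that the other three challenges then have to translate:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Hypothetical features a lending model might use.
feature_names = ["debt_to_income", "credit_history_months", "loan_to_value", "recent_inquiries"]

# Synthetic stand-in data: 1,000 applications with an approve/deny label
# driven mostly by debt_to_income and loan_to_value.
rng = np.random.default_rng(seed=42)
X = rng.normal(size=(1_000, 4))
y = ((0.8 * X[:, 0] + 0.6 * X[:, 2] + 0.2 * rng.normal(size=1_000)) > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Permutation importance: how much does performance drop when each feature is shuffled?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean), key=lambda pair: -pair[1]):
    print(f"{name:>22}: {score:.3f}")
```

A ranked list of importance scores like this is a perfectly good answer inside the data science team; the process, decision, and impact challenges above are about what the loan officer, the customer, and the regulator get told instead.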
Explainability isn't just about understanding; it's about authority.
When professionals can't explain why decisions are made in their own domain, they lose not just control but their role as knowledge authorities. This can lead to resistance, fear of obsolescence, and difficulty integrating AI into existing workflows.
Risk: Navigating Uncertainty
The CTO champions LendAssist as the future of lending, painting a picture of streamlined workflows and data-driven decisions.
The compliance team, however, sees looming regulatory disasters, haunted by visions of biased algorithms and data breaches.
Middle managers envision organizational chaos, with confused employees and disrupted workflows.
Loan officers on the front lines of client interaction fear professional extinction: being replaced by an emotionless algorithm that spits out loan approvals and denials with cold, hard efficiency.
Same technology, radically different risk landscapes.
However, these surface-level conflicts mask a deeper pattern that reveals how organizations and individuals process the fundamental changes AI brings.
Hidden Psychology of Risk When Talking about AI
We can break down this complex risk perception into four distinct levels:
Level 1: "What if it doesn't work?" (Technical Risk) This is the most immediate and obvious concern. Will LendAssist's AI models be accurate and reliable? Will the system be secure against cyberattacks? Will it comply with relevant regulations? But beneath these technical anxieties lies a deeper fear: losing control over familiar processes. When compliance officers obsess over LendAssist's error rates, they often express anxiety about shifting from rule-based to probability-based decision-making. They're grappling with the uncertainty inherent in AI systems, where outcomes aren't always predictable or easily explained.
Level 2: "What if it works too well?" (Operational Risk) This is where things get interesting. As AI proves its capabilities, concerns shift from technical failures to operational disruptions. How will LendAssist impact the daily work of loan officers and underwriters? Will it disrupt existing processes and create confusion? Will it lead to job losses? But the real fear here is more personal: Will AI erode the value of human skills and experience? When loan officers worry about LendAssist processing applications too quickly, they're asking, "Will speed make my experience irrelevant?" They're grappling with the potential for AI to diminish their role and authority in the lending process.
Level 3: "What if it works differently than we expect?" (Strategic Risk) This level delves into the broader implications of AI adoption. Will LendAssist have unintended consequences? Will it disrupt the competitive landscape? Will it create new ethical dilemmas? But the underlying fear is about professional identity. When managers resist LendAssist's recommendations, they are often protecting their identity as decision-makers more than questioning the AI's judgment. They're grappling with the potential for AI to redefine their roles and responsibilities, challenging their authority and expertise.
Level 4: "What if it changes who we are?" (Identity Risk) This is the deepest and most existential level of risk perception. Will LendAssist fundamentally change how we work and interact with each other? Will it alter our understanding of expertise and professional identity? Will it reshape our values and beliefs about the role of technology in our lives? This is where the fear of obsolescence truly takes hold. When senior underwriters label LendAssist "too risky," they're expressing fear about transitioning from decision-makers to decision-validators. They're grappling with the potential for AI to transform their sense of self-worth and professional purpose.
The way technical and identity risks intertwine makes AI risk assessment particularly challenging. When a loan officer says, "LendAssist's risk models aren't reliable enough," they might be expressing fear of losing their ability to make judgment calls or anxiety about their role in the organization changing.
The more organizations focus on addressing technical risks, the more they might inadvertently amplify identity risks by suggesting that human judgment is secondary to AI capabilities. As AI systems like LendAssist become more capable, they don't just present technical risks; they force us to reconsider what it means to be an expert in an AI-augmented world.
These layered challenges might seem insurmountable when viewed through a purely technical lens. After all, how do you solve a "technical" problem when the real issue lies in professional identity? How do you address "performance" concerns when different stakeholders define success in fundamentally different ways?
What I've found is that acknowledging these language barriers is the first crucial step toward overcoming them. When we recognize that resistance to AI adoption often stems from communication gaps rather than technological limitations, we open up new paths forward.
The Path Forward: A Practical Perspective
Once you recognize these language barriers, they become surprisingly manageable. We're not just dealing with technical challenges; we're dealing with translation challenges. We need to become multilingual in the different ways our stakeholders talk about and understand AI.
The organizations I've seen succeed with AI adoption aren't just technically sophisticated; they're linguistically sophisticated.
They create a shared vocabulary that respects different perspectives. They recognize expertise transitions as a core part of implementation and build bridges between technical and professional languages. They value communication skills as much as technical skills.
Conclusion
This isn't just another factor to consider in AI adoption; it's often the factor determining "go" or "no go" decisions.
The good news? While technical challenges typically require significant resources, language barriers can be addressed through awareness and intentional communication. We're all figuring this out together, but recognizing how language shapes AI adoption has been one of the most potent insights I've gained. It's changing how I approach projects, how I work with stakeholders, and, most importantly, how I help organizations navigate the fundamental changes AI brings to professional expertise.
The choice isn't between technical excellence and human understanding; it's about building bridges between them.
And sometimes, those bridges start with something as simple as recognizing that we might mean different things when we say "performance," "explainability," or "risk."
Further Reading and Citations
Why AI Projects Fail and How They Can Succeed, RAND Corporation (www.rand.org)
Keep Your AI Projects on Track, Harvard Business Review (hbr.org)
Superagency in the workplace: Empowering people to unlock AI's full potential, McKinsey & Company (www.mckinsey.com)
Published via Towards AI