5 Surprising Use Cases Where AI Fails — Avoid Using LLMs Here!
Last Updated on September 27, 2024 by Editorial Team
Author(s): Kamran Khan
Originally published on Towards AI.
5 Areas Where AI Falls Short
Photo by BoliviaInteligente on Unsplash

AI, specifically large language models (LLMs), has changed how we interact with technology. The output is impressive in almost every area, yet even the most advanced models have clear blind spots.
For all its promise, I have run into some truly remarkable scenarios where AI fails to impress: creative tasks that demand genuine emotional insight, or complex problem-solving in real-time situations.
These are the places where you might rely on AI and end up dissatisfied. In this article, I will walk through five surprising use cases where AI fails to meet the mark, and why it is good to know when not to use it.
Photo by Julien L on Unsplash

One of the most surprising limitations I found is that AI cannot truly understand complex emotions.
It can parse and produce text that sounds emotionally intelligent, but it does not have the depth of human empathy.
For example, when people express subtle feelings such as sadness or personal struggle, the AI-generated response can sound mechanical or tone-deaf.
I caught myself doing that many times when writing passionate content, where what…