
The “Real” Risk of AI — Minus the Fear Mongering

Last Updated on July 17, 2023 by Editorial Team

Author(s): Aditya Anil

Originally published on Towards AI.

“Humanity” has had, and will continue to have, an important role to play in tackling the real risks of AI.

The following image was a Time magazine cover on AI. The cover declares that AI could mean “The End of Humanity”, with the letters A and I highlighted in the word “Humanity”.

Image Source: AI Is Not an Arms Race | Time

The theme of the illustration was the potential risks surrounding AI. The accompanying article mostly discussed those risks, where the AI race could lead, and so on.

The cover drew plenty of criticism on social media platforms like Twitter. Many people pointed out that it was plain attention-seeking, calling it a doomer move.

While I don’t think AI can end humanity any time soon, one thing the cover makes clear to me is that we should seriously consider how real the risks are. AI currently needs a global regulatory body, somewhat like a CERN for AI, as Prof. Gary Marcus suggests, to address and tackle the risks emerging in this fast-paced era of AI development.

The Real Risk is not an Existential Risk at present

The threat of AI, though unlikely to be an existential one, is already visible today. You can see it for yourself: spend some time experimenting with generative AI tools such as ChatGPT, give them controversial or made-up questions, and see how they respond.
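If you prefer to experiment programmatically, here is a minimal sketch using the openai Python package (assuming you have your own API key; the made-up question is deliberate, to see whether the model invents an answer):

```python
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: your own OpenAI API key

# Ask a deliberately made-up question and watch whether the model
# confidently invents an answer instead of saying it doesn't know.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "Summarise the 1997 Treaty of Atlantis between Norway and Brazil.",
    }],
)

print(response["choices"][0]["message"]["content"])
```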

GPT-4, the latest and arguably the largest LLM available, still suffers from misinformation and hallucination, not to mention the biases these models carry in their responses.

When you first experiment with ChatGPT, its responses do not seem harmful. As a chatbot, it simply produces impressive textual output. So it is natural to ask: how could it ever pose a threat to humans?

Spend enough time experimenting with ChatGPT, though, and you will see where things begin to go south; the flaws start to show.

The common faults with tools like ChatGPT are the hallucination, misinformation, and bias they generate in their responses.

Hallucination refers to the phenomenon in which an AI system makes up facts on its own, since these models lack genuine reasoning.

Bias, on the other hand, refers to inaccurate responses containing prejudice. It usually arises from the poor quality of the data on which the AI system is trained.

Of these, bias is the hardest to tackle. It is a tricky thing to handle, even for humans.

While misinformation and hallucination can be detected by comparing them with existing data, how could you verify bias?

You can, of course, use peer review and standard manual tests to identify whether a statement is biased. But how could this identification of bias be hardcoded into an algorithm that is consistent most of the time?

That’s quite hard.
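To see why, here is a deliberately naive sketch of what a “hardcoded” bias check might look like; the word lists and the pattern rule are purely illustrative assumptions, and the second example shows how easily such a rule misses real prejudice:

```python
# A toy, deliberately naive "hardcoded" bias check: flag a response if it
# mentions a demographic group alongside a negative adjective. The word
# lists here are illustrative assumptions, not a real lexicon.

DEMOGRAPHIC_TERMS = {"women", "men", "immigrants", "elderly"}
NEGATIVE_WORDS = {"lazy", "incapable", "unreliable"}

def naive_bias_flag(response: str) -> bool:
    words = {w.strip(".,!?'").lower() for w in response.split()}
    return bool(words & DEMOGRAPHIC_TERMS) and bool(words & NEGATIVE_WORDS)

print(naive_bias_flag("Immigrants are unreliable workers."))      # True: caught
print(naive_bias_flag("Women just aren't suited for this job."))  # False: missed
```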

But achieving this milestone is important if we truly want to develop a safe AI system: a well-defined mechanism that caters to safety and regulation.

Regardless of what we are tackling here, one thing is crucial — the need for human intervention.

Humans need AI, but good AI needs Humans as well.

In machine learning, we have a mechanism called RLHF (Reinforcement Learning from Human Feedback), which optimizes an AI model by continually incorporating human feedback. It has been one of the successful methods for fine-tuning the responses of tools such as ChatGPT.
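As a rough illustration, here is a minimal sketch of the reward-modelling step at the heart of RLHF, under simplifying assumptions: responses are pre-encoded as fixed-size vectors, and human annotators have already labelled which response in each pair they prefer. Real pipelines train a full language model as the reward model and then optimize the chatbot against it with reinforcement learning (e.g. PPO).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal reward-model sketch for RLHF (simplified): each training example is
# a pair of responses to the same prompt, one preferred by a human annotator.
# Here responses are stand-in random feature vectors rather than real text.

class RewardModel(nn.Module):
    def __init__(self, dim: int = 16):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)  # one scalar score per response

model = RewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

preferred = torch.randn(64, 16)  # features of human-preferred responses
rejected = torch.randn(64, 16)   # features of rejected responses

for step in range(200):
    # Pairwise loss: push the preferred response's score above the rejected one's.
    loss = -F.logsigmoid(model(preferred) - model(rejected)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The trained reward model can then score new responses, and the chatbot is
# fine-tuned (e.g. with PPO) to produce responses that score highly.
```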

Quite ironically, for AI to become the hypothetical human-replacing machine, it requires humans. Humans build these LLMs, fine-tune them, and drive AI development forward. When does an AI become “smart”? When it produces the results we want, i.e., when it is reliable and fast at the same time. Humans are thus essential for the long-term sustainability of these machines and for making AI safe.

While the future of AI won’t involve a Terminator scenario, it is still dangerous.

Bad actors with enormous power can use AI to cause discord. Whether it is AI-generated images that incite violence or your information accidentally leaking into these huge data-mining machines, the scenarios can be harmful, both directly and indirectly.

Forget the machines. Misinformation and biased communication spread by humans are already enough to cause disharmony; AI simply makes spreading misinformation far more efficient.

Even if misinformation, hallucination, and bias are not at the top of your list of harms, there is one other factor you should consider: the AI race.

Business and Corporate race to catch up with AI development

Businesses seem to rely ever more on AI. A bit too much, perhaps. We have seen a plethora of AI startups emerge this year alone.

Personally, I still hesitate to call many of them AI startups, since many are just using APIs such as those from OpenAI or Hugging Face. Most of them aren’t developing AI systems of their own.

But that’s fine, I guess. They have an AI component in their system, and from that perspective, the “AI-powered” tag is justified.

Image Source: Large Language Models Will Redefine B2B Software by Sam Crowder from Cloud Constructed

The image above gives a good glimpse of the expanding ecosystem of AI apps. Generative AI is the sub-field of AI that is expected to evolve into AGI (if that ever happens).

This, in theory, is concerning, as it is the same field of AI that poses a threat to jobs. Whenever we talk about AI replacing humans, chances are we are talking about generative AI.

But the biggest gainers in this scenario are the businesses that rake in huge profits from these AI systems.

While I mentioned that fully depending on AI may not be reliable in a business ecosystem, AI has nonetheless started to have an impact on jobs.

Amid all this buzz, though, I haven’t seen a single instance where AI delivered the desired result on its own. To make AI productive in serious work, someone must be there to fact-check and assess the output. The reason is the same: misinformation and hallucination.

One famous example of this buzz was, ironically, BuzzFeed, which said earlier this year that it would use AI to help create content faster. And then there was CNET, which found errors in more than half of the stories it had AI write.

Ironically, the faster you produce content with AI, the slower you become overall, since you now have to fact-check it manually.

It's high time for Regulations and Social Safeguards on AI

AI left on its own, then, will not prove beneficial in the long run. Humans must be present, to verify and fine-tune the AI system, in order to get the desired results.

If AI could work independently (that is, if AGI were achieved), it would replace the CEOs first. Don’t you think?

The Time article also included the following lines (emphasis mine), which deeply resonate with the possible outcome of the AI race among businesses:

“Notably, in the classic arms race, a party could always theoretically get ahead and win. But with AI, the winner may be advanced AI itself. This can make rushing (of AI) the losing move.”

This also aligns with the democratic aspect of AI systems. As highlighted by Sam Altman in this post, a mechanism is required to ensure that ordinary people have a say in the development of AI systems.

And for that, people must be educated on the topic. They must know what problems we are tackling in AI.

It is much like when Web 2.0 boomed and phishing emerged with it. Phishing causes huge losses to users, with 90% of data breaches caused by phishing, and for the most part the blame falls on the users themselves.

But after two decades of Web 2.0, people are now aware of phishing attempts and have learned to be cautious.

In a similar fashion, AI safety should begin with the people.

The real risks should be known en masse.

And if that happens, we can hope for an alternate cover for the magazine.

Image by Author

Are you interested in keeping up with the latest advancements in technology and artificial intelligence?

Then you won’t want to miss out on my free weekly newsletter on Substack, where I share insights, news, and analysis on all things related to tech, AI, and science.

Creative Block | Aditya Anil | Substack

A weekly newsletter about AI, Technology and Science that matters to you. Click to read Creative Block, by Aditya Anil…

creativeblock.substack.com


Published via Towards AI
