
Taking the Intelligence out of AI

Last Updated on July 24, 2023 by Editorial Team

Author(s): Dr. Adam Hart

Originally published on Towards AI.

When we look at clouds or other ambiguous media, we sometimes see images form out of them. This effect, known as pareidolia, is the brain's effort to impose a pattern on what has no pattern, to make sense of something that does not make sense.

The rabbit in the moon is another example of the same phenomenon. It lends a non-scientific allure to the grey satellite that orbits this blue and white marble in fuliginous space.

We spoke before about human resistance to an AI-dominated future, where a small community of talented scientists with variable ethics and human-replacement motives purports to develop all manner of black-box algos for ‘noble’ reasons. But we suggest those reasons amount to hubris, much like Fringe’s Dr. Walter Bishop’s efforts, which disrupted the stability of spacetime. From noble reasons, bad things happen.

What is disturbing about this trajectory is not that market forces are rewarding such innovations, not even that they can be weaponized, but that many of these classes of technical simulacra are inserting themselves into a network of relations that many human minds are unable to perceive as non-real.

Part of this is the pareidolia that is a facet of being human; part of it is the tendency to anthropomorphize technological advances that mimic some part of human agency in order to make sense of something foreign, something ambiguous, something new.

Boston Dynamics made a very smart move in using a dog as a metaphor for their robotic technology. A dog is not human, not the equal of a human [1]; a dog is a partner and a friend. A well-trained dog will not harm you, but enhance your life. You will incorporate it into your network because it is safe. A well-trained dog will incorporate itself into your pack, but will not rule the pack.

For the other technologies whose UX seeks to mimic human agency (whether it is declared a mimic or not does not matter), humans are pareidolically inferring meaning where there is none, sense where there is no sense, matter where there is no matter. We can’t help it; it is inside all of us, but we must overcome it.

What may also be happening is that, unlike the dog, whose status is always below ours, the deepfake voices and images, the chatbots, talking sex dolls, voice assistants, AI artists, and the rest of the tech golems are engaged in a network of relations as an equal partner, a teacher, a superior.

When Mr. Lee Sedol recently retired from Go, a game he had played professionally for 24 years, it was reported that he said:

“Frankly, I had sensed kind of a defeat even before the start of the matches against AlphaGo. People from Google’s DeepMind Technologies looked very confident from the beginning.”

and

“Even if I become the number one, there is an entity that cannot be defeated,”

Apart from the fact that a sole man was pitted against a team of humans with serious levels of research funding (funding that later enabled AlphaGo Zero to teach itself from scratch), and that a former European champion lent his services to DeepMind, likely with the aim of taking Lee Sedol out, the necessary problem is not with the AI algo itself, but with the network of relations that were in play during this ‘conflict’.

A human has been ‘defeated’. This is incorrect. He was not defeated, because he was not in conflict with an entity; he was not in a conflict at all. He was engaged with an algorithm seeking a preset optimized outcome. AlphaGo is not an anthropomorphized Go player; it is code, bloody complex code.

Was this ‘defeat’ simply a dog-and-pony show to demonstrate to the world how smart DeepMind is, so they can start to turn a profit for Alphabet and scare their competitors? It was never disclosed how much Lee Sedol was paid to play AlphaGo. We can’t know. Is it at least plausible that he retired because he is sitting on a pile of DeepMind cash? What in fact was his motive to play a machine? Does this mean that no one plays Go anymore, just as no one plays chess because of IBM’s Deep Blue? No.

All any Go player needs to do is drop the hubris, refuse to play any machine, and render the machine worthless. It’s pretty boring to play a game you can never win. When I was a kid, I had a Kasparov chess computer. I quickly lost interest.

And the machine hasn’t made ‘an achievement’ per se; it has simply reached an optimized goal by following a stochastic deep-neural-network policy trained through reinforcement-learning rewards and penalties. That is a more accurate way to view the technology.
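To make that point concrete, here is a minimal sketch (mine, not DeepMind’s) of the reward-and-penalty loop at the heart of reinforcement learning: tabular Q-learning on a hypothetical five-cell world. AlphaGo’s real pipeline adds deep networks, self-play, and tree search, but the principle is the same: the program nudges numbers toward a preset optimized outcome. Nothing in it thinks.

```python
# Toy illustration only (not DeepMind's code): tabular Q-learning,
# the simplest form of the reward/penalty optimization described above.
import random

N_STATES = 5            # a tiny 1-D world: reach the rightmost cell
ACTIONS = [-1, +1]      # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# one number per (state, action) pair -- the entire "mind" of the agent
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # epsilon-greedy choice: mostly exploit, occasionally explore
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        # reward at the goal, a small penalty for every other step
        reward = 1.0 if next_state == N_STATES - 1 else -0.1
        # the core update: nudge Q toward reward + discounted future value
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                       - Q[(state, action)])
        state = next_state

# the "learned" policy: +1 (move right) at every state --
# an optimized goal reached, not an act of thinking
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})
```

Scale the table of numbers up to millions of network weights and the optimization gets vastly more capable, but it never becomes anything other than this: arithmetic chasing a reward signal.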

We all used to know that any technology is neutral and is a tool. Is this still the case?

I think that no matter how advanced the tech gets, the network of relations entangled when humans interact with these advanced technologies is the necessary problem to be solved, not the technologies themselves, which are not an ‘entity’, do not have a ‘life’, and can never ‘think’. They can certainly appear to ‘reason’ their way to an optimized goal, but it is not reasoning; it is simply a new class of computerization solving a narrow albeit complex problem.

If we drop the fear-mongering about AI and put the excellently thought-out yet unlikely dystopic views of Dr. Nick Bostrom aside, the necessary problem to solve is to reestablish control: remove software agents from the way we construct our personal and ideological networks of relations; remove the word intelligence from AI; cease anthropomorphic thinking and pareidolic sense-making where there is no sense to be made; reject the philosophy of physicalism as unethical; ensure that humans remain in a superordinate relation to the artificial; and realize that this emergent tech is a tool like any other, put to service in the long-running human story about partisan ownership and power over capital.

Footnote

1. Peter Singer, of course, may disagree on this point, and I respect his ethics. Code can never respect anything. It may have logic that mimics respect, but it can’t actually respect.

Originally published at https://curiousnews.tech.
