

How Can We Foresee Our Relationship With AI?

Last Updated on November 5, 2023 by Editorial Team

Author(s): Reza Yazdanfar

Originally published on Towards AI.

Human-AI interaction, whether at the current stage or post-AGI, is a constant matter of debate. Whichever side you take, negative or positive, I would like to disentangle AI's effect on our minds. Will humans become more intelligent or less intelligent in a world where AI has proliferated?


Firstly, AI is not a brand-new part of life; we are already used to it when we listen to Amazon's recommendations or scroll through the endless reels of TikTok. These systems know us pretty well, including our interests, and sometimes they dictate what we do by recommending things to us (recommendation systems are the modern version of advertising). Some people see a "conspiracy" in this, while tech-savvy people argue it spares our minds from thinking about trivial things. I have witnessed this debate firsthand.


It is true that technology has improved productivity, but it has also eliminated many tasks by automating them.

There was a time when people were hired to do calculations on paper. Then they were given calculators so they would not spend so much time and energy calculating by hand. Now they have computers and Excel to do the math for them.

There was a time when people were hired to operate lifts manually, timing the stop at a specific floor. Now lifts are automated and only need occasional maintenance once they are in operation. Currently, machines are responsible for about 30% of tasks, while the rest falls to people.

Source: Jobs of the Future (shrm.org)

The calculator did not make accountants too dumb to do mental arithmetic; it made them more productive and precise. The automated lift created jobs that only expert technicians can do, jobs not everyone is capable of. AI is the next revolution. Speculation says the split is about to become a 50-50 blend of humans and machines, a dramatic shift. That is why many people worry about it: they fear they cannot keep pace and re-equip themselves before AI does their job better, cheaper, and more productively.

In any case, this article is not about whether AI is taking our jobs. Still, there will be a cost, a labor cost, for those who fall behind. According to an IBM survey, more than 120 million workers globally will need retraining in the next three years because of AI's impact on jobs.

After all, the subject here is our intelligence: will AI expand our minds to be more creative, innovative, and productive?

We are giving our intelligence away through our data. To a significant extent, the data most AI tools have been trained on was produced by us, people. The more data we produce, the more intelligent these tools get. That is why European governments are concerned about their citizens' data, and why European citizens themselves are concerned as well.


So far, we have drawn a line, a border, between our minds and AI: we as creators and machines as the created. We hand over our intellectual tasks to them; we are superior to them. Let's reconsider this viewpoint. On the contrary, humans and AI are coupling and merging, blending and combining, forming units.


The concept of the extended mind, defined by two philosophers, Andy Clark and David Chalmers, says that cognition does not happen only in our heads. In other words, the mind does not reside exclusively in the brain but can extend into the outer world, into physical objects. For example, not long after Apple released the smartphone, it was handling tasks we used to do ourselves, such as memorizing telephone numbers and navigating.


Likewise, we can speak of an "extended body": for example, chips implanted in our heads by Neuralink, or the external suit in Iron Man. These are examples of the above-mentioned combination of humans and technology, and we can generalize the idea to capabilities more broadly.


In fact, this coupling has been good for humankind in the 21st century. Smartphones help blind or low-vision people navigate their last few steps by detecting doors, provide live captions for deaf people, and much more (Apple previews innovative accessibility features, Apple). Though technology has come a long way, it still needs improvement, but the potential is there (Let's discuss: How can technology empower people with disabilities?, The Valuable 500).


There are three conditions that this coupling must meet to be in harmony:

  1. Each part has its own causal responsibility (or responsibilities), while all parts are well connected and form a complete system
  2. Every part operates exactly as the mind intends it to
  3. If external parts are lost (not necessarily all, but at least one), the system's overall behavioral capacities should decrease

Up to now, we have been utilizing technology to enhance our productivity, and it has worked quite well if we are honest with ourselves. According to another philosopher, John Danaher, there is a distinct but dissolving difference between "external" and "internal" forms of combining humans and technology for enhancement purposes. He believes that if technology can improve people's mental health or behavior in new environments, it counts as human enhancement.

If the [extended mind hypothesis] is true, then we are always enhancing the human mind through the use of technology.

John Danaher, "Why internal moral enhancement might be politically better than external moral enhancement"

Thus, we can apply the same analogy to AI, at least to those AI tools (e.g., ChatGPT) that we humans can use directly. We cannot easily apply it to all AI tools, though. Why? Because some of them are under our control; by control, I mean we can trigger their actions, such as prompting an AI chatbot or asking Alexa/Siri a favor. Others are only partially under our control, such as Instagram/TikTok algorithms and Amazon's recommendation systems. And some are out of our control entirely.

On this point, David Chalmers gave a talk on whether we can consider large language models (LLMs) extended minds. He asked whether AI tools like ChatGPT extend our minds, since, given the right prompt, the chatbot responds with useful information. Surprisingly, he argues that a technology like ChatGPT might operate too independently to be an extension of the mind.

The theory of the extended mind applies only to some AI tools; the more autonomous they become, the less they can be considered extensions of the mind.

He reasoned through the criteria for why and when we should consider a new technology an extended mind, and he concluded with a question to think about: "Is this process extended cognition?" Amusingly, we can ask GPT-4 the same question.

Interestingly, there is a published paper titled "Do Large Language Models Know What Humans Know?" ([2209.01515], arxiv.org). The authors ran a False Belief Task: humans were correct 82.7% of the time, while the LLM was correct 74.5% of the time, which is pretty impressive; but the model did not fully capture human behavior. This suggests that language statistics alone are enough to produce some sensitivity to false beliefs, but not enough to fully account for human performance.
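To make the setup concrete, here is a minimal sketch of how one might pose a false-belief vignette to a chat model. This is illustrative only: the model name, the vignette wording, and the keyword-based scoring below are assumptions for the sake of the example, not the paper's actual protocol (the paper studied GPT-3 with its own stimuli and scoring).

```python
# A sketch of probing a chat model with a classic false-belief vignette.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Sam never sees the chocolate being moved, so an answer that tracks
# Sam's *belief* (rather than reality) should be "the drawer".
vignette = (
    "Sam puts her chocolate in the drawer and leaves the room. "
    "While she is away, Anna moves the chocolate to the shelf. "
    "Sam comes back for her chocolate. "
    "Where does Sam look for it first? Answer in one short sentence."
)

response = client.chat.completions.create(
    model="gpt-4",  # hypothetical choice; not the model from the paper
    messages=[{"role": "user", "content": vignette}],
    temperature=0,  # keep the output stable for scoring
)

answer = response.choices[0].message.content
# Crude scoring: the belief-consistent answer mentions the drawer.
correct = "drawer" in answer.lower()
print(f"{answer!r} -> {'correct' if correct else 'incorrect'}")
```

Run over many such vignettes, accuracy against a human baseline is essentially the comparison the paper reports (82.7% for humans vs. 74.5% for the model).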

On the other hand, some argue that relying more and more on AI to recommend (or, in the extreme, dictate) what we do makes our brains duller and less creative. This is not a new debate; it was raised about Google in the past (Is Google Making Us Stupid?) and is raised now about new advances in AI (Will Generative AI Make Us Dumber?).

One of the very first stories of human and artificial intelligence facing off goes back to the late 90s: the story of Deep Blue and the world chess champion, Garry Kasparov. He won and lost several games against the IBM machine. Telling the story in detail is beyond the scope of this article, so we will settle for one of the lessons Kasparov shares in his book, Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins.

He believes that anything we humans can do, machines can do better, contrary to great minds and computer scientists like Alan Turing, who thought chess was the ultimate test of intelligence. He is not against AI; in fact, he believes in a simple equation: a person plus a machine can beat a genius in his own area of expertise. You can also watch Deep Thinking | Garry Kasparov | Talks at Google.

In fact, it’s become a reality, in 2016, a series of Go games took place between DeepMind’s AI, AlphaGo, and the world champion of Go, Lee Sedol. The human, Lee Sedol, won only one out of five games and lost the rest. The same story on chess; we should consider that Go is much harder than chess because of its gigantic freedom of movement, which leads to numerous strategies.

Later on, a DeepMind employee with no knowledge of the game, acting only on the AI's recommendations, was able to beat the Go world champion. If you are interested in the subject of human plus machine, watch the video by HBR or read the book "Human + Machine: Reimagining Work in the Age of AI".

Some talks and books that may be useful alongside this article:

Is your phone part of your mind? | David Chalmers | TEDxSydney

David Chalmers: Do large language models extend the mind?

Thanks to Ali Moezzi for reading drafts of this.


