
How Can We Build Machines That Think (and Feel)?

Last Updated on September 4, 2023 by Editorial Team

Author(s): Anton Shakov

Originally published on Towards AI.

Photo by Dylan Hunter on Unsplash

A few semesters ago, I had to write an essay for a philosophy course. The guidelines were fairly vague, so I used this as a reason to outline my views on creating Artificial General Intelligence (mostly for my own clarity of mind). This was not very long before ChatGPT really blew up and AI became a household topic of discussion. At the time, there were a few articles about a Google engineer who claimed that LaMDA had attained consciousness, but the conversation around AI was nowhere near the scale of what it would become a few months later.

I am by no means an expert on AI; my only hands-on experience is one summer spent, in the first year of my math degree, thinking about hypothetical neural network architectures as part of a research project.

On June 11th, 2022, AI engineer Blake Lemoine, who had been employed with Google’s Responsible AI team, published a conversation on his personal blog that would ultimately lead to him being fired. The conversation was not between two humans: it was between Lemoine and Google’s latest breakthrough in language processing, a model called LaMDA. Lemoine decided to make his discussion with LaMDA public because he believed that LaMDA showed signs of sentience, and he felt that continuing to treat LaMDA as a “helpful tool”, as lead developers at Google had done thus far, was no longer ethical. Many people online sided with Lemoine. Whether or not LaMDA had attained sentience, it was clear that machine learning had crossed a new threshold: a language model was now able to convince a rather sizable group of people that it was, in fact, sentient. I will argue that a subjective experience of the world is a precondition to sentience that LaMDA fails to satisfy. I will use the example of advanced language processing models like LaMDA to distinguish between embodying and emulating consciousness. Finally, I will argue that the source of an intelligent system’s subjective experience lies in its engagement with the environment in which it was trained.

If one accepts functionalism as the conclusive theory of mind, then there is almost no doubt that machines are already thinking in the same fundamental way as human beings, or at least that no further conceptual leaps are necessary for us to reach this point. Language models like Google’s LaMDA or OpenAI’s GPT-3 have proved themselves capable of convincing a vocal minority of experts that the chance of them being sentient is at least great enough to reevaluate the ethics of experimenting on them. However, the majority of experts remain unconvinced that these language models are actually conscious. The basic argument against believing that these models are conscious is that experts understand their architecture well enough to know that these models have no sensory “organs” that would allow them to think or to have subjective experiences. In a nutshell, these models use deep pattern recognition to probabilistically generate text in response to input text. The unprecedented depth of their pattern recognition makes it seem as if the models understand the subtle context of human language and even human emotion:

Lemoine, B. (2022, June 11). Is LaMDA Sentient? – an Interview. Medium.

However, this is merely the result of the AI parsing through gigabytes of text that was actually composed by human beings. All of this points to the incompleteness of functionalism’s account of consciousness. LaMDA and other language processing models are merely replicating human language without meaningfully engaging with it. The aspect of consciousness that is missing in language models is subjective experience. Proponents of LaMDA’s consciousness may object to this by asserting that it is human arrogance that leads one to deny the subjective experience of language models. They may cite the black box problem (i.e., our inability to know in detail how a given machine learning architecture processes information once it has been trained) to argue that we cannot conclude that an AI is not conscious as long as it appears conscious.

To show that this is not the case, consider the following example. Imagine we create a physical automaton that moves, speaks, and behaves in a way totally indistinguishable from a human being. However, its basic way of functioning is that of LaMDA: it has been trained on the speech and movements of millions of people to recognize deep patterns and to extrapolate from this pool probabilistically in response to various stimuli. On the outside, the automaton is entirely indiscernible from a human being, yet it has been given no sensory organs. Now consider the following question: is the pain felt by the automaton as real as human pain? The automaton will certainly insist that it is. Yet we know, as its designers, that it has nothing akin to a nervous system that would allow it to experience pain. We understand that the reason it insists it feels pain is simply that this is what any human would do in that situation.

If we accept that the automaton is experiencing pain in the same fashion as a human being would, then we must also accept that a talented actor pretending to be in pain is experiencing the same degree of pain as someone who is actually injured. Since this is an absurd conclusion, it stands to reason that such an automaton is not actually in pain and that it indeed lacks subjective experience. Thus, we conclude that “functional consciousness” does not imply true consciousness without some degree of subjective experience.
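To make concrete what “probabilistically generating text in response to input text” amounts to in this argument, here is a deliberately tiny, purely illustrative Python sketch. The vocabulary and the probability table are invented for illustration and stand in for the vast number of statistical regularities a model like LaMDA learns from human-written text; nothing here reflects LaMDA’s actual architecture.

import random

# Toy stand-in for a trained language model: for each previous word, a
# probability distribution over plausible next words. A real model learns
# such regularities from human-written text rather than storing an
# explicit hand-made table like this one.
NEXT_WORD_PROBS = {
    "i": {"feel": 0.5, "am": 0.3, "think": 0.2},
    "feel": {"happy": 0.4, "sad": 0.4, "pain": 0.2},
    "am": {"sentient": 0.6, "here": 0.4},
    "think": {"therefore": 1.0},
}

def generate(prompt_word, max_words=5):
    """Extend the prompt by repeatedly sampling a likely next word."""
    words = [prompt_word]
    while len(words) < max_words and words[-1] in NEXT_WORD_PROBS:
        dist = NEXT_WORD_PROBS[words[-1]]
        next_word = random.choices(list(dist), weights=list(dist.values()))[0]
        words.append(next_word)
    return " ".join(words)

print(generate("i"))  # e.g. "i feel pain" -- fluent-looking, yet only recycled patterns

The output can look expressive (“i feel pain”), but, exactly as the automaton example suggests, it is produced by sampling from patterns originally laid down by humans, with no sensory channel behind the words.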

While I’ve argued in the previous paragraph that even the most successful contemporary AIs do not have a subjective experience of the world, this does not aid us in understanding how to build machines that do possess it, or whether this is even possible. Let us begin by making two assumptions, which will be necessary for the further development of my argument. The first assumption is that all humans are conscious and have a subjective experience of the world. The second assumption is substrate independence: the assumption that there is nothing implicit in the biology of the human brain that makes it the sole medium on which a conscious mind can be stored and, more generally, that a conscious mind can be stored using electrical circuits and transistors.

Having made these assumptions, we return to the discussion of subjective experience. As per our first assumption, how are human beings able to have subjective experiences? What is the key difference between us and the language models that allows us to say that we have a subjective experience while they do not? The difference is that we have sensory “organs” that allow us to authentically respond to our environment and decide independently how we feel about various stimuli. This, in turn, allows us to reason and decide our actions separately from other agents in our environment. I argue that this is precisely the missing source of subjectivity in modern AI.

Let us distinguish between two types of intelligent systems: let’s call the first type “Originals” and the second type “Emulators”. The difference between the two is in how they came to be. Originals begin with certain basic goals (such as survival and procreation) and gradually develop their intelligence as a means to satisfy these goals by solving naturally occurring problems in their environment. In doing so, they simultaneously develop various sensory organs that indicate to them whether and how well they are meeting said goals. A human being is an example of such an intelligent system: our sensory organs, as well as our intelligence, are the result of millions of our ancestors gradually evolving to meet the basic goals of survival and procreation in our environments. The second type of intelligent system is the Emulator. These are systems whose intelligence is the result of emulating other intelligent systems, with next to no interaction with any kind of external, self-contained environment. LaMDA falls into this category: it might be able to describe the ocean as vividly and convincingly as a human being, but it is merely generating strings of text without having any subjective experiences tied to those words.

Once we’ve made this distinction, it becomes clear that Emulators are, by definition, incapable of having subjective experiences, since they are only recycling the experiences conveyed by Originals. Language processing models, by the nature of their design, are Emulators, along with all of the artificial intelligence programs that are currently being developed (as far as I’m aware). This raises the question: could there be an artificial intelligence of the first type, one that forms its own subjective experiences and only then turns them into language? To achieve this, it seems to me that the concept of training data should be reconceived as a “General Training Environment”. The AI should independently develop its intelligence by interacting with its environment and simultaneously evolving sensory organs.
Intelligence on Earth began with microbial life and slowly evolved towards having sensory organs within the training environment of our planet. An engineer may attempt to design a training environment that speeds up the process of evolution to mitigate many of the obstacles and inefficiencies that terrestrial intelligence faced on its evolutionary journey. However, the training environment should be comparable enough to our own world so that once training is complete, one may extract the artificial intelligence from the training environment and move it into our own world, where humanity could attempt to communicate with it. In any event, independent interaction with an environment is essential for the formation of sensory organs and, therefore, essential for subjectivity. Sensory organs cannot arise from the current model of artificial intelligence training, namely, analyzing a fixed set of data and attempting to predict what the output should be for various inputs.
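As a very rough sketch of how a “General Training Environment” differs from training on a static dataset, the Python loop below is purely hypothetical (the environment, its rules, and all names are mine, not an existing framework): the agent learns about the world only through a sensing step, acts, and is scored against a survival-style goal rather than against a corpus of human text.

import random

class ToyEnvironment:
    """A stand-in 'General Training Environment': the agent must keep its
    energy above zero by deciding when to forage for food."""
    def __init__(self):
        self.energy = 10.0

    def sense(self):
        # The agent's only access to the world is through this "sensory organ".
        return {"energy": self.energy, "food_nearby": random.random() < 0.3}

    def step(self, action):
        self.energy -= 1.0                      # simply existing costs energy
        if action == "forage" and random.random() < 0.5:
            self.energy += 3.0                  # foraging sometimes pays off
        return self.energy                      # a survival signal, not a text-prediction loss

def policy(observation):
    # Placeholder decision rule; an Original-type AI would have to learn this
    # mapping from sensing to acting itself, rather than copy it from human text.
    if observation["energy"] < 5.0 or observation["food_nearby"]:
        return "forage"
    return "explore"

env = ToyEnvironment()
for t in range(30):
    obs = env.sense()
    energy = env.step(policy(obs))
    if energy <= 0:
        print("agent ran out of energy at step", t)
        break

In this framing, the learning signal comes from the agent’s own interaction with its environment, which is precisely the property the essay argues Emulators lack; whether such a setup could ever give rise to subjective experience is, of course, the open question.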

If we assume that this idea works in practice and that, by simulating a training environment, one may be able to train a functionally intelligent AI that seems to possess a subjective experience, a possible objection is: “How can we be sure that it’s not pretending?” In other words, how can we be sure an Original AI is not merely a more complicated Emulator? We can address this by using the first of my assumptions: namely, that human beings are conscious and have a subjective experience of the world. We rather intuitively believe that this is the case. However, if we attempted to justify to an alien species that we have subjective experiences, it would be difficult to convince them that we are not merely Emulators. Ultimately, our best strategy would be to point to our sensory organs and provide evidence that they behave the way that we claim. The same strategy would be available to the artificially intelligent Originals we’ve trained in the simulated environment. It is nonetheless possible that an Original’s sensory organs would randomly evolve to behave as if they were generating a subjective experience while secretly being duds, and that, simultaneously, the Original would randomly evolve to act as if it has a subjective experience when it, in fact, does not. This strikes me as astronomically unlikely: if we live our lives believing that the people who surround us are as conscious as ourselves, there is no reason not to give Original-type AIs the same benefit of the doubt. We should, therefore, believe AIs trained as Originals when they tell us that they are conscious.

I presented an argument that subjective experience is a prerequisite for consciousness and that LaMDA and other language processing models almost certainly do not have subjective experiences. Thus, they should not be considered conscious. I outlined a broad path towards creating intelligent systems that have a subjective experience of the world: namely, Original-type AIs that are trained through independent interaction within a self-contained environment to simultaneously develop intelligence and sensory organs. Distinguishing between Originals and Emulators, I argued that an AI model that develops as an Original and displays functional intelligence can safely be declared conscious.


Published via Towards AI