Can ChatGPT think?
Last Updated on January 30, 2023 by Editorial Team
Author(s): Dor Meir
Originally published on Towards AI.
An answer from Leibowitz, Yovell, and ChatGPT
- The question
- The easy answer
- The mind-brain problem
- Epiphenomenalism: mind as a side-effect of brain
- Functionalism: mind as software of the brain
- How can we tell if ChatGPT thinks?
The question
The image above is a rather rich one, isn't it? I was never skilled at writing Dall-E prompts that'll generate decent images. What I did here was to ask ChatGPT to "write a Dall-E text-to-image prompt of ChatGPT thinking, and make it sit in the same position as the famous Le Penseur statue". I then took the text result and punched it into Dall-E, and got back this fine image.
This is just another example of how advanced ChatGPT's communication level is. It makes me wonder: if ChatGPT is that complex in its calculations, if its learned weights capture so much of human context, could this be a sign of its human-like intelligence? Does this count as thinking in a somewhat similar way to how we think? Roughly speaking, it's just statistics and learning from examples. But don't we humans also learn from examples? I know my toddler does...
If ChatGPT's trained weights capture so much of human context, could it be it has a human-like intelligence? Does this count as thinking, similar in a way to how we think?
First and foremost, let's make a proper introduction. Here are ChatGPT's own words about "itself":
So ChatGPT claims it understands human-like language. Although it shows a remarkable chat capability, NLP researchers say it has many flaws in its so-called "understanding". Here are a few notable ones:
- ChatGPT doesn't really "know what it knows". It doesn't even know what "knowing" is. All it does is guess the next token (= word), and this next-token guess may be based on well-founded acquired knowledge, or it may be a complete guess.
- ChatGPT doesn't have a coherent and complete worldview: it has no way of knowing that different news stories it was trained on all describe the same thing.
- ChatGPT has no notion of time: it can concurrently "believe" that "Obama is the current president of the US", "Trump is the current president of the US", and "Biden is the current president of the US" are all valid statements.
I'll add that if you use it long enough, sooner or later, you'll see it spill out some rubbish, but it'll always be articulated with immense confidence and in a nice and eloquent way. If you say in response, "are you sure?", it'll ask for your forgiveness and then either fix its mistake or just repeat it all over again. Nonetheless, most of us can think of a person we're familiar with who chats in a similar manner...
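The "guess the next token" behavior from the list above can be sketched with a toy model. The snippet below is only an illustration under a big simplification: it uses raw bigram counts over whole words instead of the learned neural weights over subword tokens that a real large language model uses. It shows how always emitting the most likely continuation produces fluent-looking text with no worldview behind it:

```python
from collections import Counter, defaultdict

# Tiny "training corpus" of mutually contradictory sentences,
# echoing the Obama/Trump/Biden example above.
corpus = (
    "the president of the us is biden . "
    "the president of the us is trump . "
    "the president of the us is obama ."
).split()

# Count, for each word, how often every other word follows it.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def generate(word, steps=6):
    """Greedily emit the most frequent follower at each step."""
    out = [word]
    for _ in range(steps):
        if word not in followers:
            break
        word = followers[word].most_common(1)[0][0]  # always the top guess
        out.append(word)
    return " ".join(out)

print(generate("the"))
```

The model stitches locally plausible fragments together with full "confidence": it never notices that its three source sentences conflict, because nothing in it represents the world, only which word tends to follow which.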
Even with these flaws, ChatGPT has an astonishing Q&A capability. Could it be evidence of the ability of ChatGPT and other large language models to think, even at a very low and limited level of thinking? In my humble opinion, this question should not be laid alone at the doorstep of NLP researchers or machine learning experts. It ought to be handed over also to philosophers, since it brings up more basic philosophical questions: in what way are we thinking? And how can we tell if another being is thinking?
Despite its flaws, ChatGPT has an astonishing Q&A capability. Could it be evidence of its ability to think, even at a very limited level? This question brings up philosophical questions: in what way are we thinking? And how can we tell if another being is thinking?
The easy answer
One way to know what the being in front of us thinks is to simply ask it:
That's just too easy. I'm positive ChatGPT was dictated to answer this, just like it was dictated to answer anything that even smells like financial consultation: "It is not appropriate or ethical for me, as a language model, to give investment advice."
Notice, though, that when asked if it can think, ChatGPT answered, "I do not possess consciousness or self-awareness". Besides being rather ironic (who is "I"?), this also teaches us that being able to think is correlated, in ChatGPT's training data, with consciousness. We will soon see how these terms connect to the subject in question.
The mind-brain problem
Since asking ChatGPT proved futile, we'll resort, at least for now, to philosophy. I'll give ChatGPT the honor of introducing the philosophers:
A "small" mistake here: Yoram Yovell is alive and well. And another detail omitted: Yovell is Leibowitz's grandson. More importantly, in 2005, Yovell replied to his grandfather's 1974 paper explaining the fundamentals of the psycho-physical problem, better known as the mind-brain problem. The two essays were combined, along with an essay by the Nobel prize winner Kahneman, into a Hebrew book whose title translates to "Mind and Brain". Although thinking algorithms are not the book's primary subject, it contains more than a few discussions of the possibility of an AI or a supercomputer developing a mind, and a thinking ability.
As I said above, the main topic of the book is the psychophysical problem. Let's ask ChatGPT for an explanation of the problem:
Indeed. We don't know, and some say we will never know, how our mental experiences reflect and affect the physical state of our brain, and vice versa: how exactly does the brain affect the mind? Yeshayahu Leibowitz defined it this way:
How does an event in the public domain of physical spacetime emerge as an event of consciousness in the private domain?
We don't know, and some say we will never know, how our mental experiences affect the physical state of our brain, and vice versa: how the brain affects the mind.
Leibowitz elaborated on four popular solutions to the mind-brain problem: interactionism, parallelism, epiphenomenalism, and identity theory. Don't worry about all the isms; we'll soon explain the most relevant one in plain English. But first, let's see if ChatGPT is familiar with these terms:
The last sentence reveals a key question every solution of the mind-brain problem ought to answer:
Are mind and brain separate entities (dualism), or are they essentially the same thing (monism)?
Out of the four solutions to the mind-brain problem, we'll focus on the one that's most compatible with the option of ChatGPT thinking and developing a mind of its own. Its name is epiphenomenalism.
Epiphenomenalism: mind as a side-effect of brain
The term "epiphenomenalism" comes from the Greek "epi", meaning "upon" or "on top of", and "phenomenon", meaning "appearance" or "manifestation". "Epiphenomenalism" is used to describe this theory because it suggests that mental states are like an "after-effect", or a "manifestation on top of", the physical brain activity. Advocates of epiphenomenalism are monists who presume there is only one vector of influence between the brain and the mind, and its direction starts in the brain and ends in the mind.
According to Leibowitz, some of the people who believe in epiphenomenalism, or the mind as just a side effect of the physical states in the brain, also argue that computers might develop a mind, as a side effect of their physical activity.
Some of the people who believe in epiphenomenalism as a solution to the mind-brain problem also presume that computers might develop a mind, as a side effect of their physical activity.
We'll now elaborate on this point. Advocates of epiphenomenalism believe the entirety of functional relations in the net of billions of neurons and hundreds of billions of synapses is the physical basis for the mind. Moreover, some see the electronic computer as a model of the brain. In their view, since processes in the computer (or algorithm) imitate the brain's thought processes, we can expect a computer (or algorithm) with a huge number of functional units and a complex level of internal relations to develop subjective self-awareness and human-like thinking.
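To make "functional units and internal relations" concrete, here is a minimal sketch of artificial neurons, the kind of unit that modern neural networks stack by the billions. This is an illustration only, not a claim about how biological neurons work; the weights and inputs below are arbitrary numbers chosen for the example:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial "functional unit": a weighted sum of its inputs,
    a bias, and a sigmoid nonlinearity standing in for a firing rate."""
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-activation))

# Two units feeding a third: the smallest possible "net of
# functional relations". Scaling this up by many orders of
# magnitude is what the epiphenomenalist view described above
# expects to yield mind-like behavior.
hidden_a = neuron([0.5, 0.9], [1.2, -0.7], 0.1)
hidden_b = neuron([0.5, 0.9], [-0.4, 0.8], 0.0)
output = neuron([hidden_a, hidden_b], [0.6, 0.6], -0.5)
print(output)  # a single number between 0 and 1, nothing more
```

Each unit here is pure arithmetic; whatever "internal relations" the net has exist only in how the numbers feed one another.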
At first sight, the question we should ask here is whether an algorithm does act in a similar way to the brain. However, even if it does, a more basic question arises: can we even say a human brain can think? Because if not, it doesn't matter whether the algorithm's neural-net architecture is identical to the brain's system of neurons; neither of the two possesses a mind and the ability to think.
If the mind is a side effect of the brain, and an algorithm imitates the brain, we expect the algorithm to develop human-like thinking. But can we even say a human brain can think? Because if not, an algorithm obviously can't think.
Can OUR brain think?
Leibowitz recognizes that the public, and sometimes even the academic literature, use the phrase "the thinking brain", and even talk about the computer as a "digital brain" or a "thinking machine". Nevertheless, he emphatically asserts:
Thinking is not done by the brain itself, but by the owner of the brain!
I find this argument similar, in a way, to one by David Hume, one of philosophy's giants and the author of the monumental A Treatise of Human Nature. Hume argues that even the coldest and most calculated person acts solely out of her emotions. According to Hume, emotion is what makes the will, and will is what motivates human activity. That is, without the emotions (or mental activity) of the brain's owner, no will will develop, and no activity will occur. Getting back to Leibowitz, one might say a brain without the owner of the brain activating it is like a calculator with no one pressing the buttons.
According to David Hume, even the most calculated person acts solely out of her emotions. Emotions make the will, and will motivates activity. One might say a brain without the owner of the brain activating it is like a calculator with no one pressing the buttons.
Leibowitz further states that the secondariness of the computer (and, for our matter, of the algorithm) is apparent: the algorithm only performs a physical task, and it takes a human with consciousness and intelligence to give the physical task a logical meaning. ChatGPT uses electric pulses to generate words and logical sentences, but they only represent logical relations. The actual logical relations exist merely in the mind of the thinking person.
ChatGPT uses electric pulses to generate words and logical sentences, but they only represent logical relations. The actual logical relations exist merely in the mind of the thinking person.
One might say that the algorithm doesn't think, just like the ensemble of instruments that performs physical activities together does not play music. The air vibrations coming out of the instruments create a harmony only in a musical consciousness.
Leibowitz's arguments conclude:
The thinking person stands at the beginning and at the end of an algorithm's system of processes, which we might call "the mental activities of the algorithm".
The person stands at the beginning, since the programming and training of an algorithm are not possible without the thinking activity of the person. At the same time, the person stands at the end of the algorithm since, without an intelligent being interpreting the activity, it is nothing but physical activity. Thus, in accordance with Leibowitz's argument, no mental activity can be attributed to an algorithm alone. It's only we, the algorithm's users, who do the thinking.
The thinking person stands both at the beginning and at the end of the so-called "mental operations of the algorithm". Thus, no mental activity can be attributed to an algorithm alone. It's only we, the algorithm's users, who do the thinking.
Epiphenomenalism: experimental evidence
This is all fine, but what does actual science have to say about epiphenomenalism? Yoram Yovell enriched his grandfather's argument about the mind-as-a-side-effect approach with some experimental evidence. Yovell portrayed an experiment conducted by the neuropsychologist Benjamin Libet of the University of California, San Francisco. In this experiment, volunteers were instructed to raise a finger whenever they "feel like" doing so. During that time, they were also looking at a type of watch with one hand spinning rapidly, and the electrical activity of their brains was recorded in a non-invasive way using electrodes.
The first result of the experiment was astonishing: about a third of a second before a person became conscious of her will to raise her finger, her brain already "knew" she was going to have that will! Some researchers viewed that as evidence that there is no free will, since consciousness (or mind) is just a product of unconscious processes in the brain. Or, in other words, the mind is only a side effect of the brain, as epiphenomenalism suggests. And if epiphenomenalism is correct, maybe a consciousness CAN naturally grow as a side effect of the physical operation of complex algorithms such as ChatGPT.
A third of a second before a person became conscious of her will, her brain already "knew" she was going to have that will. Some viewed that as evidence that there is no free will. Or, in other words, the mind is just a side effect of the brain, as epiphenomenalism suggests.
Against that statement lies our very strong feeling that we do have free will. Thus, for epiphenomenalism to be correct, our sense of free will needs to be a complete delusion. Leibowitz argued in a similar manner about the free-will illusion: when we say free will is an illusion, we already assume there's a mind, separate from the matter, that experiences the illusion, which is a purely mental phenomenon. By doing so, we fundamentally contradict epiphenomenalism since, according to Leibowitz and Yovell, it is a monistic-materialistic approach that assumes no separation between the mind and the brain.
Leibowitz's philosophical contradiction of epiphenomenalism fits well with the second result of the experiment. Libet also found that there was an even smaller time frame, about a tenth of a second, in which the volunteers could have decided not to raise their fingers. This "veto call" didn't have any evidence of preceding brain activity.
Libet deduced from this finding that the veto call is decided, in practice, only as part of the consciousness. To Libet, this meant that there is free will: a free will to not do something. If this is, in fact, the case, then the mind is not just a side effect of the brain, as it can also affect the physical state of the brain. This, evidently, leads to the conclusion that epiphenomenalism does not hold, and that if ChatGPT has a mind, it is not just a side effect of its neural net activity.
There was a tenth of a second in which the volunteers decided not to raise their finger, without any preceding brain activity. This might mean there IS free will: a free will to not do something. Consequently, epiphenomenalism does not hold, and if ChatGPT has a mind, it is not just a side effect of its neural net activity.
Functionalism: mind as software of the brain
Philosophically and empirically, it seems as if epiphenomenalism fails to solve the mind-brain problem. As a result, it also fails to support ChatGPT's ability to think. However, Leibowitz wrote about epiphenomenalism in the 1970s. Lately, the ever-growing use of computers has given rise to a new and interesting approach to the mind-brain problem: functionalism, or the mind as the software of the brain.
Yovell states that many philosophers and neuroscientists are proponents of functionalism. They presume that the mind is an emergent property of the brain, and as such, the mental phenomenon starts to appear when large groups of neurons, connected by many synapses, are activated in some way.
Proponents of functionalism suggest the mind is an emergent property of the brain, and as such, the mental phenomenon starts to appear when large groups of neurons, connected by many synapses, are activated in some way.
But this sounds like epiphenomenalism, doesn't it? Despite the similarity, the two are not the same:
Functionalism suggests the mind is not a "thing", as argued by Descartes and Spinoza, but a process. For instance, if we want to know what a block of metal is, we need to know its material, weight, shape, etc. But if we want to know what a watch is, knowing all these properties is not enough. We have to know what it does, what its role or function is.
The mind, for functionalism, is the process that occurs within the thing, i.e., the processes that occur inside the brain. In other words, the mind is the brain's functionality. More accurately, the material level is the brain, the hardware of the computer, and the mind is the functional level of that hardware, meaning a process built of a series of events, determined by the rules of the "software" operating the brain.
One reason functionalism is fairly popular is that it is a non-reductionist approach, i.e., mental activity does not reduce to physical activity. In this, it elegantly avoids the problems of identity theory: if the mind and brain are the same, they must translate completely into each other, and yet we clearly can't tell how the mind translates into the brain, because that's the definition of the mind-brain problem! However, if the mind is a process, the mind is just what we feel at a given time, and it doesn't need to have a direct connection to a physical event in the brain.
Functionalism suggests the mind is not a "thing" but a process. If we want to know what a watch is, knowing its properties is not enough. We must know what it does, its function or role. The processes that occur inside the brain, the brain's functionality: that's the mind. As a process, the mind is just what we feel at a given time, and it doesn't need to have a direct connection to a physical event in the brain.
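Functionalism's key move, identifying the mind with function rather than with the stuff that realizes it, is often illustrated by "multiple realizability": the same function can run on very different "hardware". Here is a small, hypothetical analogy in code: two structurally different procedures realize exactly the same function (sorting), so at the functional level they are indistinguishable even though the underlying processes differ:

```python
def bubble_sort(xs):
    """One "realization" of sorting: repeated adjacent swaps."""
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

def merge_sort(xs):
    """A completely different "realization": recursive splitting and merging."""
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    out = []
    while left and right:
        out.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return out + left + right

# Different internal processes, identical function: at the
# functional level, the two are "the same thing".
data = [3, 1, 4, 1, 5, 9, 2, 6]
assert bubble_sort(data) == merge_sort(data) == sorted(data)
```

By analogy, the functionalist says that what makes a mind is the function being realized, not whether the substrate is neurons or silicon. This is only an analogy for the philosophical claim, not an argument for it.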
Another reason functionalism gained popularity is that, according to Yovell and Leibowitz, it is monistic and materialistic. In a way, it might be the best combination of dualism and monism: on the one hand, it assumes, as in monism, that the world has only matter, a rather attractive assumption for anybody holding to the scientific method. On the other hand, as in dualism, it assumes reality has two levels, both of which are necessary to understand a person. In that respect, dualism better describes the relationship we feel our mind and body truly have.
Considering the above, functionalism allows for ChatGPT to be able to think: while ChatGPT's material level is the algorithm's trained weights and architecture, ChatGPT's mind is the functional level of that hardware, its ability to chat with us humans.
Alas, the simple but harsh Leibowitzian argument still holds: a function only exists when a thinking person interprets it as a function. In Leibowitz's view, a watch that is defined as "something that shows the time" assumes that someone with a mind and the ability to think is interpreting the time that the watch is showing. Now, we've just argued that the function of chatting is what makes ChatGPT's mind and ability to think. Taking the Leibowitzian argument into account, ChatGPT doesn't actually chat with us; we, thinking humans, interpret its output as chatting, and so ChatGPT's mind and ability to think exist merely in our thoughts. It's all just hardware.
It seems as if functionalism allows for ChatGPT to be able to think: while ChatGPT's material level is the algorithm's trained weights, ChatGPT's mind is its ability to chat with us humans. Alas, the Leibowitzian argument still holds: we, thinking humans, interpret its output as chatting, and so ChatGPT's mind, and ability to think, exist merely in our thoughts. It's all just hardware.
How can we tell if ChatGPT thinks?
But maybe we've made a mistake. Perhaps there's another approach to the mind-brain problem that we didn't consider, or that has not been discovered yet, and ChatGPT does have a mind and an ability to think. Even so, the question almost immediately surfaces of how we can know an algorithm has produced a mind. This question is better known as the problem of other minds.
The immediate answer is that we can never know for sure whether the person or algorithm in front of us has a mind. This answer comes directly from the definition of consciousness, which exists solely in the private domain and is an unmediated fact of its owner, who experiences it "from within". Any other answer is only a deduction, however intuitive and empathetic, that comes from looking "from the outside".
How, then, can we deduce that the other side has a mind? We believe the other side has a mind if it behaves and reacts the way we behave and react. Or, in other words, we know there's "someone" on the other side if it passes the famous Turing test.
Even if ChatGPT has a mind and can think, how can we tell it has one? We can never know for sure, but we can deduce it if ChatGPT behaves and reacts the same way we do, meaning if it passes the Turing test.
ChatGPT is right this time: passing the Turing test is not that hard, and it certainly does not mean the other side has a mind. It's quite an easy task for us to impersonate a different gender or age, especially when the only interaction between the two sides is masked by a keyboard and a screen. However, some say in response that the mere ability to impersonate someone else, including this someone's thoughts and way of expression, is a significant human capability, evidence of human-like intelligence.
Another known test, Yovell claims, exposes the fallacy in the Turing test. It is called the Chinese room.
The Chinese room thought experiment suggests an algorithm can produce meaningful responses without truly understanding them. Accordingly, Yovell, a clinical therapist, says that any therapist knows that understanding and giving meaning are both human qualities, i.e., evidence of a thinking mind. Without them, one cannot make sense of the human mind, let alone take care of it. Yovell then argues that humans understand and experience the meaning of words such as "love", "mother", or "god" in a way that is hard to imagine an algorithm can imitate.
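Searle's Chinese room can be caricatured in a few lines of code: the "person in the room" just matches incoming symbols against a rule book and copies out the reply, so the responses look meaningful even though nothing in the system understands them. The question-and-answer pairs below are invented purely for illustration:

```python
# The "rule book": a mapping from input symbols to output symbols.
# Whoever follows it needs no grasp of what the symbols mean.
RULE_BOOK = {
    "what is love?": "Love is a deep feeling of affection.",
    "who is your mother?": "My mother is the one who raised me.",
    "does god exist?": "That is a question people have debated for ages.",
}

def chinese_room(message):
    # Pure symbol manipulation: look up the input, copy out the reply.
    # No meaning is involved at any step.
    return RULE_BOOK.get(message.lower(), "I do not have a rule for that.")

print(chinese_room("What is love?"))
```

ChatGPT is, of course, vastly more sophisticated than a lookup table, but the philosophical point Yovell leans on is the same: producing the right symbols is not, by itself, evidence of understanding.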
The Chinese room exposes the fallacy in the Turing test: an algorithm can produce meaningful responses without truly understanding them. Understanding is evidence of a thinking mind, and it's hard to believe an algorithm can understand the meaning of "love", "mother", or "god".
Nevertheless, Yovell wrote the essay at a time when ChatGPT was but a dream. Can ChatGPT truly understand these words? Is ChatGPT's level of understanding of these words evidence of its mind and ability to think?
I'll leave it for you to decide:
Feel free to share your feedback and contact me on LinkedIn.
Thank you for reading, and good luck!