Do Large Language Models Have Minds Like Ours?
Originally published on Towards AI.
Intellectual spats between generative linguists and machine learning researchers have neglected the most interesting components of human language use.
By Vincent J. Carchidi
Do large language models (LLMs) use language creatively? Much has been written recently over whether LLMs generate text sufficiently novel to be considered "creative" or merely synthesize human-generated content without a distinctive contribution of their own. It is one dimension of a highly complex debate unfolding over the nature of both LLMs and human intelligence.
This saga has seen contributions from thinkers in a diversity of disciplines, including computer science, robotics, cognitive science, philosophy, and even national security. A notable flashpoint is linguist Noam Chomsky's fiery critique of ChatGPT and LLMs in The New York Times. This controversial piece illuminates stark divides between scientific approaches to the nature of the human mind, natural and artificial intelligence (AI), and how engineering makes use (or doesn't) of these notions.
Chomsky's NYT piece spurred tremendous debate on this subject, as he highlighted his belief that "Intelligence consists not only of creative conjectures but also of creative criticism." The discourse that has sprung up in the wake of this and other pieces surrounds familiar arguments about the utility of generative linguistics, the role of cognitive science in AI, and even broader matters such as emergent theory-of-mind capabilities in LLMs.
I find myself frustrated and baffled. This is good, because otherwise I may not have written this article. But the reasons are not stellar: Chomsky's rigid communication style has prevented him from leveraging some of the fascinating features of his own linguistic work in a direct and explicit manner to assess LLMs' capabilities. Conversely, machine learning researchers have so thoroughly indulged in the euphoria of the field's recent (and real) advancements that they frequently lack the will to ask whether human cognition is as straightforward as it seems.
I attempt to remedy this here. Where Chomsky's approach to the mind, and the tradition of generative linguistics broadly, has been brought into AI, the focus has fallen intensely on familiar arguments like the poverty of the stimulus and the innateness of linguistic knowledge or principles. I instead highlight what is known in the rationalist tradition in philosophy and cognitive science as the "creative aspect of language use," or CALU.
CALU, referring to the stimulus-free, unbounded, and appropriate and coherent ways in which humans use language, offers a three-pronged test for the existence of a mind similar to our own. It is inextricably connected to human intellectual nature and our means of creativity.
Somehow, despite several AI-induced intellectual spasms lately, the only sustained discussion of CALU and LLMs/AI appears to be in a Machine Learning Street Talk video on Noam Chomsky (disclaimer: I have no affiliation with MLST). Relatedly, Mohamad Aboufoul alludes to Chomsky's views on determinism and free will.
Whether LLMs are creative in the sense that human beings, upon reflection, consider themselves to be is one of the most important questions interested individuals can ask at the current moment. An understanding of "true" or "genuine" creativity informs a diversity of views related to human nature, the significance of human effort and output, AI ethics, the nature of current and possible AI systems, and the contours of human-machine interaction in the near future. Creativity is not everything, but if one is interested in AI, then one ought to know where one stands on it.
Through this lens, we ought to assess the linguistic creativity of LLMs, and with it the question of whether we are interacting with minds like our own. We begin with a breakdown of CALU and its relevance for AI, followed by an application of its three components to LLMs like ChatGPT, and conclude with some thoughts on the significance for humans of an AI system reproducing CALU.
Table of Contents
- What Is the Creative Aspect of Language Use?
- Why Does CALU Even Matter for AI?
- Do LLMs Exhibit CALU?
- What Would the Artificial Reproduction of CALU Mean for Humans?
What Is the Creative Aspect of Language Use?
CALU takes what is vigorously debated in AI and linguistics (the syntactic and semantic novelty of human language use) and situates it in a broader, though more subtle, perspective. It is an observation and description of how language is used by human individuals; CALU is not a theory or even an explanatory framework of how language is put to use by humans in concrete settings. (This mirrors the competence-performance distinction often employed in the cognitive sciences, but this should not distract the reader.)
CALU is inextricably bound up with human thought. This creative use of language is, in fact, ordinary: it is the ability, as Chomsky puts it, to "form new statements which express new thoughts and which are appropriate to new situations." Philosopher James McGilvray notes that this idea is traceable back to Descartes, who believed that, taken together, the three components of CALU are "a test of having a mind 'as we do.'" CALU, in this sense, is not intelligence per se, but a fundamental feature of human cognition, shaping the intellectual character of the species.
The ability to form new linguistic expressions in a manner that is causally detached from the circumstances of their use and transmit them to others who find them intelligible and complementary to their own thoughts underwrites the most mundane and the richest of human creations. "This," Charles Kreidler writes, "is just what happens when the architect envisions a building not yet erected, the composer puts together a concerto that is still to be played, a writer devises a story about imaginary people doing imaginary things…"
Creative language use is thus broken down as follows (drawn from McGilvray's description here):
· Stimulus Freedom: The use of a particular linguistic expression cannot be causally traced back to any external or internal circumstance. "Language use might be prompted by but is not causally tied to and determined by current external or internal circumstance."
· Unbounded: There is no limit on the number or kinds of sentences that are produced either in thought or in speech, including in any specific circumstance. Human linguistic production is not only novel but innovative.
· Appropriate and Coherent to Circumstance: Despite the unbounded and stimulus-free nature of language use, it is nonetheless appropriate for any given circumstance, fictional or otherwise. Uncaused remarks are produced without limit, yet they remain appropriate to the situation that elicits them.
Critically, these three uses of "vocabulary items and syntactic rules," as linguist Mark Baker points out, must be simultaneous. Language use would not be creative if we only generated an unbounded set of thoughts or speech; it would not be creative if we thought or spoke in a stimulus-free fashion but incoherently and within fixed bounds; and it would not be creative to simply utter a few words that are appropriate to a situation but not unbounded and stimulus-free. Only together do these features of language use make it creative. Only together do they indicate the presence of a mind like our own. (A toy sketch of this conjunction follows.)
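For readers who think in code, the simultaneity requirement can be expressed as a bare conjunction. This is only an illustrative sketch of the test's logical form, not an operational procedure; nobody knows how to actually compute these predicates, and the function name and parameters here are my own invention:

```python
# Illustrative only: CALU's three-pronged test has the logical form of a
# conjunction. The predicates themselves are not computable by any known
# procedure; this sketch merely encodes the simultaneity requirement.
def exhibits_calu(stimulus_free: bool, unbounded: bool, appropriate: bool) -> bool:
    # Any single property alone, or any pair, is insufficient;
    # only all three together indicate a mind like our own.
    return stimulus_free and unbounded and appropriate
```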
Much more can be said, but consider this point before we proceed to language use by LLMs: while one may draw from CALU conclusions about the relationship between semantics and syntax, and about a host of overlapping cognitive, psychological, and social matters, one does not need to be a generative linguist to recognize that CALU exists.
Why Does CALU Even Matter for AI?
CALU offers a set of criteria by which the existence of a mind can be determined. This ability is remarkable. As I have written elsewhere, it is frankly "ironic that our ordinary use of language possesses a quality so remarkable but that so few of us are prepared to acknowledge it." I depart from the respectable and understandable view articulated here by computer scientist Pedro Domingos that human creativity is given too much credit, owing more to a simple cutting-and-pasting process than to some pie-in-the-sky slice of humanity.
Domingos, interestingly, highlights his own experience as a musician as anecdotal evidence that human creativity is a rather mechanistic process (presumably, to say nothing of its enjoyment). I imagine that Domingos' use of his personal intuitions in this regard is not uncommon in assessments of AI systems like LLMs. We readily believe we understand ourselves and what it means for humans to be creative. We naturally believe we can transfer this understanding to LLMs at will. But, as the need to expound on anthropomorphizing shows, our understanding of human intelligence is frequently deficient upon closer examination. Accusations of goalpost-shifting in laying out criteria for "true" human-like intelligence charitably reflect the realization that we all lack an understanding of human intellectual nature (rather than the uncharitable suggestion that one's opponents are acting in bad faith).
Indeed, Chomsky's own approach to the study of language and mind is couched in an oft-neglected philosophy that rejects the use of commonsense intuitions and concepts in scientific inquiry. He frequently likens the proper study of the mind to the development of physics, highlighting the latter's centuries-long difficulties with commonsense notions of motion and causality. A certain skepticism of simply accepting what is right before our eyes pervades generativist writing. It is in this context that CALU emerges as a phenomenon visible only to those willing to take a sufficiently refined lens to the problem of human language use, recognizing the "crucial if obscure difference" embedded in the observation that "discourse is not a series of random utterances but fits the situation that evokes it but does not cause it."
Large Language Models may present the most challenging example of human-like language use by non-humans in the species' history. I am innately resistant to hype and exaggeration in AI, but I know of no comparable example in the history of human invention that exhibits as human-like a use of syntactic structures as LLMs like ChatGPT.
Indeed, perhaps surprisingly to some today, automating linguistic creativity has been an intermittent fixation of rationalists and generativists of diverse stripes, beginning with Descartes. Each of the works on CALU cited in this piece thus far alludes to the inability of machines to replicate stimulus-free, unbounded, and appropriate linguistic thoughts and expressions.
Do LLMs Exhibit CALU?
The question before us is this: Do Large Language Models reproduce CALU? An affirmative answer implies that certain LLMs possess minds sufficiently like ours; a negative answer implies that they do not.
To answer our question, we consider each criterion in detail:
· Stimulus Freedom: LLMs are engaged through prompts. Human users input strings of human language, and the program returns a response (a minimal sketch of this prompt-response loop follows this list). Claims regarding LLMs' intellectual capabilities hinge on these programs responding as requested by human users in a direct and appeasing manner.
LLMs' responses can be tied to an identifiable stimulus. Even the internal message tags that LLMs like Bing AI use (inclusive of the assistant's "inner monologue") are inextricably linked to the user's input. OpenAI reports experimenting with GPT-4's autonomous self-replicating capabilities (with unfortunate media framing), which we may consider an interesting, if indirect, attempt at reproducing CALU. But GPT-4 was ultimately ineffective in this context, even with some success at tricking a TaskRabbit user. Nothing about the "simulated" means by which GPT-4 was tested here suggests its output was stimulus-free. [Judgment: Stimulus-Constrained.]
· Unbounded: It appears that LLMs, including GPT-3.5 (ChatGPT), GPT-4 (ChatGPT Plus, Bing AI), and Bard, among others, are capable of producing an unlimited number and variety of sentences for any given context. This is a magnificent achievement. It is also the focus of intense ongoing debates in linguistics as to what it means for Chomsky's approach to syntax and the generative school broadly. For our purposes, whether LLMs do this by "knowing" or "understanding" the abstract rules of human grammar or by statistically settling on a too-perfect mimicry of them (if there is a difference here) is not directly relevant. The fact is that their syntactic output is unbounded.
Just as important, however, is that this output is strictly "verbal": there is not yet sufficient reason to believe any kind of linguistic thought is occurring. In the same vein, LLMs produce novel linguistic outputs but do not appear to be innovative in the free yet constrained sense that human beings are. (For example, no LLM has yet written this article on CALU and LLMs, and my attempts to elicit one through prompts of various kinds have returned inaccurate and/or middling results.) The syntactic combinations LLMs produce are novel and limitless but not particularly innovative. Rather than advancing discourse, they seem to excel at capturing what already exists through limitless linguistic expressions (perhaps this is why the significance of LLM-powered chatbots in popular discourse is sometimes downgraded from autonomous superintelligences to helpful collaborators and finally to occasionally useful apps). [Judgment: Syntactically unbounded, semantically bounded.]
· Appropriate and Coherent to Circumstance: On the surface, it seems that LLMs produce linguistic expressions that are coherent and appropriate to the circumstances of their use. This is difficult to probe, not only because LLMs are frequently built with guardrails that restrict their outputs ("As an AI language model, I do not…"), but also because it is difficult to know exactly what counts as appropriate. An accepted condition is whether one's interlocutor judges the responses to be appropriate. As Chomsky puts it, language use is "recognized as appropriate by other participants…who might have reacted in similar ways and whose thoughts, evoked by this discourse, correspond to those of the speaker." Even here, however, because of our tendency to anthropomorphize, we impose coherence onto LLM-powered chatbots' answers even when there may be none (we do the same with people, too, but we do not deny that their thoughts can and do correspond with ours).
I must be anecdotal here, though I know I am not alone in this experience: when I interact with conversational AIs, I have never felt as though there was a mutual correspondence of thought occurring between prompt and response. Even creative prompts that yield interesting results are interesting in the same way that happening upon a unique Wikipedia page is interesting. Wide-ranging, rich conversations in which my human interlocutor's responses flick from subject to subject with mutually intelligible relevance do not happen, in my experience, with chatbots. Even correct answers to queries have an air of mechanical appeasement, not correspondence with my own thoughts. For example, ChatGPT, Bing, and Bard each return responses that seem appropriate to the topic of CALU and its relationship to AI but, on even moderately close analysis, fail to produce linguistic content of an appropriate nature over an extended conversation. Note that such coherence, though still difficult to pin down, is more precise than one prominent study's use of the term, which at times seems to equate coherence with grammaticality and semantics. [Judgment: Undetermined, leaning towards frequently inappropriate to circumstances.]
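To make the prompt-response loop concrete, here is a minimal sketch, assuming the openai Python package (the 0.x API current as of this writing), a placeholder API key, and an illustrative prompt of my own choosing; it does not reproduce any experiment discussed above. Sampling at a temperature above zero returns a different continuation on each call, illustrating the unbounded variety of outputs, while every token remains conditioned on the prompt, which is precisely the stimulus-bound character noted in the first judgment:

```python
# A minimal sketch of the prompt-response loop discussed above.
# Assumes the openai Python package (0.x API, mid-2023) and an API key;
# the model name and prompt are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

PROMPT = "Describe a building that has not yet been erected."

# Sampling with temperature > 0 yields a different continuation on each
# call: unbounded variety, yet every token is conditioned on the prompt.
for _ in range(3):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": PROMPT}],
        temperature=1.0,
    )
    print(response["choices"][0]["message"]["content"])
```

However varied the three outputs, each is causally downstream of the prompt; nothing in this loop is stimulus-free in CALU's sense.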
Overall Judgment: Large Language Models do not reproduce CALU. They thus fail, on these terms, to prove they possess minds like our own.
What Would the Artificial Reproduction of CALU Mean for Humans?
If the term "artificial general intelligence," or AGI, is meant to describe an AI system that possesses intellectual capabilities comparable to those of humans, then CALU must be relevant to identifying its existence. On this test alone, LLMs are neither minds like ours nor AGI.
It is strange, however, that this concept has not been made more explicit by either generativists or their detractors. It is a phenomenon whose mere existence depends only on an acceptance of readily observable and describable features of human language use that do not cohere exclusively with generative theories. Generativists' own reluctance to carry commonsense intuitions into scientific inquiry has something to offer here.
CALU is central to human intellectual nature and will thus continue to be central to our assessments of future AI systems. We evidently consider the matter of creativity an intimate one, yet we have often settled for imprecision and passion in our assessments of it. Much of this, I assume, rests with either the euphoria or the fear that future AI systems could match or exceed our own creative efforts. Much of this, in turn, may rest on convictions individuals hold about human nature.
This thought process is a mistake. It is an understandable mistake, but one owing to the wildly disconnected and overhyped intersection of AI research and a litany of human arts and sciences. If CALU were to be reproduced by an AI system, this might be considered an achievement so momentous that current discourse, tilting between euphoria and doom, simply evades rather than captures its significance. A language model that actually exhibits CALU would be no more a threat to my significance than the existence of human writers better than myself (and, with apologies to Eliezer Yudkowsky, I have no desire to take over the world).
A better way forward is to temporarily calm our passions in the service of bridging divides between scientific and engineering approaches to the mind. Generative linguistics, as noted, is a notable flashpoint in this intersection of worlds, especially with Steven Piantadosi's LLM-driven critique of Chomsky's approach to language. But it is a shame to see rich stocks of wisdom on both sides become oversimplified. To echo computer scientist Walid Saba's sentiments here, we should stare advancements in AI in the face while remaining humble about the complexity and utter sophistication of the human mind.
References:
[1] M. Baker, The Creative Aspect of Language Use and Nonbiological Nativism (2008), Oxford University Press
[2] V.J. Carchidi, Do submarines swim? Methodological dualism and anthropomorphizing AlphaGo (2022), AI & Society
[3] N. Chomsky, Language and Problems of Knowledge (1988), MIT Press
[4] N. Chomsky, Cartesian Linguistics (2009), Cambridge University Press
[5] N. Chomsky, The Mysteries of Nature: How Deeply Hidden? (2009), The Journal of Philosophy
[6] K. Duggar, T. Scarfe and W. Saba, #78 - Prof. NOAM CHOMSKY (Special Edition) [Video] (2022), Machine Learning Street Talk
[7] C.W. Kreidler, Introducing English Semantics (1998), Routledge
[8] J. McGilvray, Chomsky on the Creative Aspect of Language Use and Its Implications for Lexical Semantic Studies (2011), Cambridge University Press
[9] J. McGilvray, Cognitive Science: What Should It Be? (2017), Cambridge University Press