
Taming AI Hype: A Human Perspective

Last Updated on May 9, 2023 by Editorial Team

Author(s): Vincent Carchidi

 

Originally published on Towards AI.

Source: Image by Kyle Sung on Unsplash. The Stata Center’s design is bizarre and counter-intuitive, like the argument here.


Artificial Intelligence should widen, not narrow, our understanding of humanity. It should act as a bridge between the human arts and sciences; an end-in-itself and a means to our human re-orientation.

By Vincent J. Carchidi

Noam Chomsky and Logan Roy are rarely mentioned in the same breath. The world’s most famous linguist and the main antagonist of the HBO series Succession do not appear to have much in common — one is known outside of linguistic circles for his anti-capitalist activism, and the other is the fictional owner of a major media corporation modeled loosely on the real-life Murdoch family. Moreover, it is unlikely that anyone has ever used these two figures as foils for taming hype and misinterpretation in artificial intelligence (AI).

But the scientific work of Chomsky and the methods Scottish actor Brian Cox uses to become Roy serve as useful bridges between the human arts and sciences. They hold unexpected lessons for achieving a simple but elusive goal: taming AI hype and misinterpretation while simultaneously widening our understanding of humanity.

Achieving this goal matters for a simple reason, articulated bluntly by Reece Rogers: “Understanding the difference between human intelligence and machine intelligence is becoming crucial as the hype surrounding AI crescendos to the heavens.”

Doing so, however, is not so simple. ChatGPT and other Large Language Models (LLMs) have fed a popular dynamic in which any claim of genuine advancement in AI is perceived as a cheapening of human intellectual capabilities, while defenses of human intelligence tend to take on an air of defensiveness against those same advancements. In linguistics, ironically, Chomsky’s approach to human language acquisition and linguistic cognition has become the topic of energetic debate (as Steven Pinker noted in 2016, enthusiastic linguistics researchers have purportedly “slain the king” for 50 years).

We can avoid this frustrating dynamic. AI can be not merely an end-in-itself but a means to widen and deepen our understanding of humanity. AI compels individuals to doubt their abilities, question their distinctiveness, and default into familiar analytic frames of debate that reinforce rather than challenge their existential anxieties. But it is for these reasons that the pursuit of AI is, in part, a human orientation — it reflects the character of the human species. Taming AI hype can succeed, then, only if we bring the full artistic, philosophical, and scientific capabilities of humanity to bear on this challenge.

To achieve this, we exit familiar frames of reference and explore the topic of AI and humanity through two entirely unrelated figures: Noam Chomsky and Logan Roy. (There will be no Succession spoilers).

The guiding spirit is an ancient dictum recently espoused by linguist Elliot Murphy: In Sterquilinis Invenitur, translated as “in filth, it will be found.” As Murphy interprets this: “What you are searching for the most will be found in the place you least want to look.”

We begin by asking two separate questions: What makes Logan Roy possible? What makes science possible? Framing the pursuit of art and science in this way — by searching for enabling foundations — allows us to pinpoint the specific mindsets needed to achieve each. In doing so, we uncover commonalities across the arts and sciences that will be increasingly useful for interpreting advancements in AI in the coming years. We close by locating this pursuit, and AI itself, in the broader social and political contexts in which its various applications are being deployed, and by asking what this means for our efforts going forward.

Table of Contents

· What Makes Logan Roy Possible?

· What Makes Science Possible?

· Why Must It Be So Difficult?

· Compelled to Re-Create Ourselves

What Makes Logan Roy Possible?

In the show Succession, Logan Roy, played by Brian Cox, sits atop the wealthy Roy family as the CEO of a major media company which his three children vie to control following his abdication. One of the most compelling aspects of the character is that Roy seems to dislike his children, expressing disdain for them, even jealousy of them, presumably for being born into wealth and comfort, unlike Logan himself.

Upon reading the pilot episode’s script, Cox recounts asking the show’s writer: “Does he love his kids?” To which the writer responded, perhaps surprisingly, affirmatively. Cox learned an interesting lesson about Logan Roy from this answer: “That’s all you really need to know about Logan. Whatever terrible things he does, however awful he is, it comes from this bedrock of wanting the best for his children.” He strikingly describes Logan as a character who “does villainous things but [is] not really a villain.”

Cox’s comments on Logan Roy’s desire for the best for his children — alongside his viciousness towards them — amount to an interpretation of human behavior, and a counter-intuitive one at that. It is, in one sense, an explanation: from a foundation of love, a man of remarkable, shocking shortcomings emotionally abuses his children. He does awful things to them because he loves them while failing to express this love appropriately. Brian Cox becomes Logan Roy with this explanation in mind.

Cox’s acting method of placing himself non-judgmentally into the inner life of a vulgar individual to find the “bedrock” that makes the character behave in a predictable but bizarre fashion is not all that different from the mindset needed to engage in scientific thinking and in the interpretation of technological innovation.

This may sound like a ludicrous claim, a rhetorical flourish of sorts. But the foundational step that enables the maturing of scientific disciplines and the most impactful technological innovations is a highly specific shift in one’s mindset.

Cox’s explanation is counter-intuitive — a man who “does villainous things but [is] not really a villain” is not an intuitive concept. Nor is it familiar to think of love and parenthood as having anything to do with the jealous, cruel, and short-tempered Logan Roy. Yet, that is what enables Logan Roy to exist in such a multidimensional fashion and for Brian Cox to become him temporarily. This willingness to accept counter-intuitive, bizarre, and downright painful ideas — and to temporarily place oneself in the mindsets requiring these analytic frames — is a prerequisite for engaging with the scientific and technological marvels of our time.

What Makes Science Possible?

“Science” calls to mind quantitative analyses, cold-minded inquiry, and an ambitious piercing into the unknown. “Science” is comfort derived from relative certainty and precision.

Science does, to be sure, offer some level of relative certainty and precision, with comfort perhaps happily resulting from some of its discoveries — but only once it has matured. The question is: what makes science possible?

Science is the long-term result of individuals accepting a simple fact: physical reality is bizarre. It refuses to “[comport] with commonsense intuition.” Physical reality is rude to our intuitions about how the world and its constituent objects ought to work together; so engrained is this acceptance in the success of physics as a mature scientific discipline that we find it suspicious when aspects of reality — like the discovery of a distant astronomical phenomenon’s effect on the planet — make too much sense.

Pre-scientific philosophers, like René Descartes, took up the burden of creating new sciences of the natural world because they recognized — in some domains — that “what previously reigned supreme in the court of ideas — commonsense ideas about how parts of the world interact — rather than explaining everything, in fact, explained very little. The real work began once intuitive explanations came to be seen as obstacles.”

Noam Chomsky sums up the mindset repeatedly in his scientific and philosophical work: the “modern scientific revolution began with a willingness to be puzzled about things that seemed entirely simple and obvious.” The seeds of this ingeniously simple mindset were planted centuries ago in the observations of several figures, most prominently Galileo Galilei. Of both the “closest elemental substances” and “more remote celestial things,” we may determine “their location, motion, shape, size, opacity, mutability, generation, and dissolution,” but never will these properties yield insight into their “true essences” — an end achieved, he hoped, only through the “divine Artificer.”

Although sometimes seen as a tragic figure, one who endured a measure of literary embarrassment in his time, Galileo had begun the process of science, in which we set out “to reshape and to re-form our intellect itself.” Isaac Newton, born the year of Galileo’s death, was left with this fledgling scientific mindset. Newton departed from the “mechanical philosophy” he was intellectually reared on by positing the existence of a “force of a universal gravity extending through space.” Our intuitions do not present us with a world in which objects can act upon one another at a great distance — and yet this is the “revolution” that Newton foisted upon the scientific world.

Chomsky writes, correctly and repeatedly, in his recent work that “Galileo and others allowed themselves to be puzzled about the phenomena of nature…and it was quickly discovered that many of our beliefs are senseless and our intuitions often wrong.” The nature of the world is bizarre, and disciplines like physics have the benefit of centuries of maturation behind them to make this fact acceptable. The human capacity for language should be studied, Chomsky argues, with the same standard of inquiry employed in natural sciences like physics, a discipline he frequently likens to a properly conceived science of the mind.

Chomsky has repeatedly invoked the thought experiment of a Martian scientist studying the development of a human organism with sufficiently advanced technology of their own. The Martian scientist sees the routine and miraculous human acquisition of this thing called “language” not because, as linguist Edwin Battistella suggests, it is an “unbiased observer” but precisely because it is equipped with a different set of cognitive faculties that render its intuitions about development, thought, and communication distinct from our own.

It is our commonsense ideas about the world and our intelligence — our intuitions that inform our decisions and our theorizing — that stand in the way of achieving our scientific ends. Why would we expect AI’s road to be any different?

Why Must It Be So Difficult?

When Chomsky invokes the Martian scientist to explain how distinctive the human capacity for language is while exposing our misleading commonsense ideas about its development, he is engaging in a mindset not unlike Brian Cox’s to become Logan Roy. Each takes a familiar character — language and a villainous corporate actor — and searches for their enabling foundations. These enabling foundations — whether they exist in the human mind or the histories and dynamics between individuals — are characterized by striking counter-intuitiveness. They are not familiar and do not comport nicely with commonsense notions of how minds or people work. They, indeed, respect these characters so immensely as to avoid molding them into something that does make intuitive sense. Finally, once these counter-intuitive enabling foundations are found, they are used to interpret and explain the behavior of these characters; their outputs, limits, and possibilities are given new dimensions.

This is the mindset with which we ought to approach AI. It is how we can tame AI hype while simultaneously respecting this technology and widening our understanding of humanity.

But…something is missing. What could it be?

We appreciate the acting method of Brian Cox in becoming Logan Roy in part because it is so distinctive and effective — not everyone could do what he has done, nor interpret the character with the same depth he has given him. We respect the maturation of disciplines like physics because we recognize that this field of study has worked, and its success is not owed to just anything — its intellectual lineage is highly specific and evolutionary, adapting to the bizarre demands of physical reality as we become able to meet them.

Whatever description one gives this — Chomsky prefers a “willingness to be puzzled,” whereas physicist Steven Weinberg characterized it as a student-teacher dynamic in which the scientist “enters into a relationship with nature, as a pupil with a teacher, and gradually learns its underlying laws” — it is clear that science is not just anything, to be done at any time, by anyone, of any mindset.

The scientific mindset is, at first, a painful one. We do not want it so much as we want to leap to the gleaming successes of the natural sciences and their diverse technological offshoots. Accepting the bizarre nature of the reality “out there” is easier said than done. Doing so when the reality in question is one’s human nature — and the nature of the artificial intelligence designed, however accurately, in one’s image — is viscerally undesirable.

Evidence of this undesirability is currently flooding the discourse on AI. Intuitive and familiar analytic frames of inquiry are embedded in arguments from all corners.

Talk of the “emergent” capabilities of Large Language Models has compelled human researchers and observers to make several startling claims, including that these models now possess a theory of mind, that they can self-improve, and that they have mastered the syntactic and semantic structure of human language, among many others. The now-regretful deep learning pioneer Geoffrey Hinton is frightened of AI in a post-generative world. Why? In part, as one might expect from the man whose burden it was to convince the world of deep learning’s miracles, because of a vague association of near-future AI systems with “superintelligence” and “hyper-intelligent robots” deployed by the likes of Vladimir Putin. It is better, it appears, to be scared of one’s creation in a way that aligns with the analytic frames through which one comfortably views the world than to accept the discomfort of more limiting alternatives.

Even the “scaling is all you need” trope — which has gone a bit quiet with Sam Altman’s splash of cold water regarding GPT-5 — is perhaps a prime example of crudely simplifying intelligence for the sake of intuitive appeal.

But we should not feel compelled to bend our mind to AI because of our awe for the latter or our discomfort with the mysterious nature of the former. The pain that comes with acquiring a proper scientific mindset is inevitable and, as Chomsky himself has pointed out on occasion, nearly an informal prerequisite for explanatory theories to attain a depth meriting the name “science.” Interpretations of technologies like AI-enabled systems will force us to adopt a mindset of much the same type.

But with AI we have two choices: we can indulge in the familiar and intuitive analytic frames of debate, or we can begin the hard work of unifying and expanding the human arts and sciences with this remarkable, awe-inspiring technology.

Compelled to Re-Create Ourselves

American neuroscientist Erik Hoel wondered in March 2023 “if there will…be a wave of depression as people see how cheap cognitive abilities really are…everyone on earth just got a little bit smaller, a little bit less useful.”

Only an intellectual could make such a remark. AI’s reception by humanity will be determined by several factors but in no small part by the existing analytic frames used to evaluate social arrangements. Those with grievances against institutions in the liberal, particularly American, world are likely to see AI-induced job displacement as a reason to advance, not weaken, their political objectives. As I wrote two weeks before ChatGPT was released in November 2022, “the threat to liberalism lies in individuals’ perceptions as lesser parts of a greater whole.”

But perception is not everything. My initial perception tells me that Logan Roy is a cruel, vindictive man whose lust for money and power drives his every decision. But my perception here is flimsy and uninspired. It takes the interpretation Brian Cox brings to the character and the script to transform it.

So, too, for science and technology. Interpretations of AI will hinge on our willingness to “confront our continued existence for what it is.” This position respects the challenge that AI presents us with — a challenge to our individual and collective identities. It respects the challenge by accepting that things have already changed in and outside of AI.

Machine learning researchers sometimes view AI as an event that will shape (or is currently shaping) social arrangements and interpersonal dynamics in a decisive, clear-cut fashion. But this is not how human technology is integrated into human society.

The reader must forgive my American-centric view, but AI will instead interact with the phenomena that currently plague social dynamics. America has, for example, an “intimacy problem,” with Surgeon General Vivek Murthy reporting that loneliness is now a public health epidemic whose effects are on a par with smoking 15 cigarettes daily. The Covid-19 virus — which remains with us — presented social and political institutions with a challenge to our identities that we have largely failed to meet. The aftermath speaks for itself. This is to say nothing of the social toxins that existed before its arrival. The Republican National Committee (RNC), far from using AI to move beyond our current state of affairs, made its AI-enabled debut with a deepfake of Joe Biden and Kamala Harris set in a hypothetical scenario in which China invades Taiwan.

It is within these and related contexts that AI is currently being deployed, rather than the over-intellectualized fantasies of some commentators.

Now is the time to tame AI hype in a way that aligns with the richness of human potential. Arguing over whether human brains are like neural networks, or whether generative AI is “general” AI, is played out — it hasn’t brought about cohesion so far, and it will not do so in the future.

Instead, we should take a new approach to taming AI hype that respects these emerging technologies while unifying and expanding the human arts and sciences. The lessons that can be mined from the scientific and artistic work of people like Noam Chomsky and Brian Cox — as horribly flawed and hypocritical as the former may be — are directly relevant to AI research, development, and implementation.

The time for compartmentalization has passed.

References:

[1] N. Chomsky, Language and Problems of Knowledge (1988), MIT Press

[2] N. Chomsky, Naturalism and Dualism in the Study of Language and Mind (1994), International Journal of Philosophical Studies

[3] N. Chomsky, The Mysteries of Nature: How Deeply Hidden? (2009), The Journal of Philosophy

[4] N. Chomsky, Poverty of Stimulus: Unfinished Business (2012), Studies in Chinese Linguistics

[5] N. Chomsky, What Kind of Creatures Are We? (2015), Columbia University Press

[6] I. B. Cohen, Revolution in Science (1985), Harvard University Press

[7] B. Cox, Putting the Rabbit in the Hat (2022), Grand Central Publishing

[8] D. Garber, On the Frontlines of the Scientific Revolution: How Mersenne Learned to Love Galileo (2004), Perspectives on Science

[9] J. Huang et al., Large Language Models Can Self-Improve (2022), arXiv

[10] M. Kosinski, Theory of Mind May Have Spontaneously Emerged in Large Language Models (2023), arXiv

[11] A. Koyré, Galileo and the Scientific Revolution of the Seventeenth Century (1943), The Philosophical Review

[12] J. McGilvray, Cognitive Science: What Should It Be? (2017), Cambridge University Press

[13] L. Olschki, The Scientific Personality of Galileo (1942), Bulletin of the History of Medicine

[14] S. Piantadosi, Modern Language Models Refute Chomsky’s Approach to Language (2023), LingBuzz

[15] J. Wei et al., Emergent Abilities of Large Language Models (2022), arXiv

[16] S. Weinberg, Reflections of a Working Scientist (1974), Daedalus

 
