What Is Innateness and Does It Matter for Artificial Intelligence? (Part 1)


Originally published on Towards AI.

Source: Wikimedia Commons licensed under the Creative Commons Attribution-Share Alike 4.0 International license.

The question of innateness, in biology and artificial intelligence, is critical to the future of human-like AI. This two-part deep dive into the concept and its application may help clear the air.

By Vincent J. Carchidi

Introduction

Not much good comes out of Twitter these days, but a recent Twitter thread started by artificial intelligence (AI) expert Gary Marcus touched on a fascinating subject: the concept of “innateness” in biological organisms and its relevance for AI. The full tweet reads: “clear evidence of Innateness — in ants. The contemporary hostility to Innateness in the ML crowd is going to look foolish in hindsight when we figure this all out.”


Other experts’ and researchers’ reactions fell along pre-existing lines drawn over recent years in AI research. Thomas Dietterich asked, “How does declaring something to be innate help us do AI science?”, adding that “The whole idea of pre-training is exactly to create a network with good innate representations.” Judea Pearl chimed in, retorting that “Innateness is…a warning that some knowledge, external to the data, external also to any pre-training, must be encoded in a system to achieve a certain performance.” Rodney Brooks referred to end-to-end learning as a “fetish,” while Dietterich, in response, argued that Large Language Models (LLMs) demonstrate that “automated learning can work at internet scale to build a very broad and general system.”

It is not exactly the ideal exchange over innateness and intelligence one could imagine. But Dietterich’s original question is worth asking and answering: Does the concept of innateness help AI researchers?

The question of innateness is critical to future research aiming to design human-like AI systems. While the insertion of this concept into the field has often been messy and contentious, it is simply unavoidable: AI, for the foreseeable future, aims to resemble at least some of the higher-order cognitive abilities of humans. Yet questions about human cognitive nature are often addressed in the field with oversimplification, outdated ideas, or outright dismissal of the notion that how humans achieve their feats matters for machine intelligence.

This article presents a deep dive into innateness in biology and AI that aims to clear some of the air and inform the debate on all sides of the aisle. Because the subject is chock-full of multidisciplinary information, this piece is broken up into two parts. The first part is devoted to setting up the background to innateness in humans, structured in a way that avoids familiar frames of analysis in the field. I thus provide a perspective on an innate basis for moral cognition in humans, a topic that, unlike language, has not been caught up in excessive hype of late.

This allows us, in the second part, to bootstrap our way into a discussion of innateness in AI. I home in on two examples in gameplaying AI research — the Go-playing agent AlphaGo Zero and the Diplomacy-playing agent Cicero — and relate the concept of innateness in biology to these agents. Both high-level and lower-level overviews of these systems’ architectures are provided. While some background knowledge is assumed, the content is written to be accessible to a general audience.

Ultimately, these two parts will illustrate how the pursuit of human-like AI, or even Artificial General Intelligence, is directly relevant to innateness.

Table of Contents

Part 1:

· Biological Innateness in Moral Cognition

  • How to Get Interested in Innateness
  • Moral Cognition’s Innate Basis
  • Implications

Part 2:

· Innateness in Artificial Intelligence

  • Gameplaying AIs as Windows Into Artificial General Intelligence
  • Implications

· Conclusion

Biological Innateness in Moral Cognition

The debate over both the robustness of the concept of innateness and whether humans possess innate cognitive endowments is a ripe 2,000-odd years old by now. I do not expect to settle the debate here, nor to provide a conversation-ending definition of the term. But a personal perspective is a useful way to get our feet wet before working up to innateness in biology and AI.

How to Get Interested in Innateness

A significant amount of intellectual energy in my undergraduate and graduate years was spent arguing in favor of the idea that human moral psychology has a powerful, domain-specific, and innate basis in the mind — that is, a specialized cognitive system designated for the generation of moral intuitions, possessing independence from other systems of the mind, like the visual or auditory systems, though constantly required to interface with them in the course of ordinary human behavior. This was unusual, as I studied political science, a discipline that tends to not consider these issues in the detail characteristic of the cognitive and neurosciences.

My motivation came from frustration: I was taking a course on international organizations when the subject of the United Nations’ 1948 Universal Declaration of Human Rights came up, a document that serves in many ways as the foundation of the international human rights activism and law that sprang up in the decades following its adoption. International Relations scholars (a subfield of political science) have many ways of framing this document’s creation and subsequent uses — some focusing on the anti-colonial movements of the first half of the twentieth century and the collapse of empires, others on ideational influences stretching back decades or centuries through political activism and religious traditions, and others still on the distribution of power across countries. Morality — meaning the moral judgments of individuals, the moral norms of particular cultures, and the moral traditions of groups and countries — underwrites much of this.

However, interesting as all that is, one thing irked me: these scholars would write learned tomes on the historical origin of this or that human right in the Universal Declaration, how one religious tradition’s moral values overlap with another’s (or don’t), even how the definition of “human” was re-shaped in the first half of the twentieth century, leading to a concomitant expansion of moral concern. But virtually all of them made the same implicit assumption: that morality could be understood in purely social scientific terms.

This seemed implausible to me on its face. Thousands of pages can be written on the evolution of moral norms and customs that find themselves bound up in an international declaration. But how on earth do individuals develop a capacity to generate moral intuitions in the first place? Answering that question, it seems, necessarily shapes how we answer questions related to the development of international human rights norms and laws.

Moral Cognition’s Innate Basis

While moral psychology has since pervaded international relations, much of the debate over innateness remains insufficiently fleshed out. Innateness can mean too many things, too often getting wrapped up in debates about biological determinism.

Instead, to understand human morality, we should employ the same methodological techniques we use to understand any other physical or cognitive system of the body. We thus distinguish between the ability to generate moral intuitions and judgments, on the one hand, and how this ability is used, on the other: the distinction between competence and performance employed throughout the cognitive sciences. Doing this allows us to strip away much of the messiness of social and political life to explain an ability — moral competence — that developmentally healthy humans possess. Our target of explanation in moral psychology, properly conceived, is moral competence, leaving performance for another day.

As legal scholar and philosopher Matthias Mahlmann writes in a recent, major work on moral cognition and human rights: “Nevertheless, the performance of this capacity, the final evaluation of an action can be biased — for example, by the interests of the evaluating person. Consequently, such influences need to be factored out of the analysis if we are to properly study the cognitive competence in question, which is not an easy thing, particularly in empirical work” (p. 403). One can imagine why this is so difficult to do in the social sciences, especially, where the “final evaluation of an action” is frequently bound to empirically irrelevant factors. One can also, as we discuss below, imagine why this is difficult in AI, where systems require some level of human input to probe their competencies.

So, we begin the analysis with the understanding that human beings can intuitively frame the social world in moral terms. As philosopher Susan Dwyer put it: “Moral evaluations, like permissibility judgments and attributions of responsibility, simply cannot get started if we do not already ‘see’ the world in terms of agents, patients, and consequences” (p. 248). But how should we understand this capacity? What explains this ability to “see” the world in these terms?

The next step is to ask what the fundamental properties of these moral evaluations look like. But remember: we cannot simply pick our favorite examples of good or evil and start from there. Nor, furthermore, can we use established ethical taxonomies — like the ethic of autonomy or the ethic of community — as our starting point, as prominent researchers like Jonathan Haidt and Craig Joseph do, because these simply neglect the most basic nature of moral evaluations.

What do I mean by this?

The truth is that moral judgment is deceptive — it is an intimate experience, familiar to most individuals throughout their lives, oriented toward emotionally charged social situations and institutional arrangements. But once we have narrowed our focus to moral competence and stripped away the irrelevant factors, moral judgment has “seemingly innocuous” properties with “far-reaching consequences,” as legal scholar John Mikhail puts it. These properties, drawn from Mikhail’s description here (pp. 45–46), are as follows:

(1) Novelty: The moral judgments individuals produce bear no “point-for-point relationship” to any judgments they have produced in the past or encountered from others.
Elaboration: While moral judgments may seem similar on the surface, they are elicited by circumstances that are entirely novel to the individual. A judgment that one person is wrong to strike another unprovoked sounds familiar, but the people involved, the surrounding environment, the actions they take, and so on, are unique. Such a judgment is, then, novel.

(2) Unboundedness: An individual can produce, in principle, an unlimited number of moral judgments; there is no limit on the number or kinds of moral judgments an individual can make, save for non-moral constraints like memory, time, etc.
Elaboration: An individual does not merely produce verbalized judgments about situations. Rather, each judgment presupposes a mental representation of circumstances that are entirely novel to the individual. A judgment that one person is wrong to strike another unprovoked is not “just a judgment”; it depends on a mental representation of the specific configuration of people, actions, and other variables. The unbounded nature of moral judgment lies in the fact that these judgments, and the mental representations they presuppose, can be generated without limit.

Taken together, the “far-reaching consequences” become apparent. When we attempt to make sense of the ability to produce novel moral judgments on an unbounded scale, we realize that the “finite storage capacity of the brain” rules out the possibility that the brain simply recruits a pre-sorted list of mental representations to produce them. “Instead,” Mikhail writes, “her brain must contain, with respect to moral judgment, something more complex: some kind of cognitive system, perhaps characterizable in terms of principles or rules, that can construct or generate the unlimited number and variety of representations her exercise of moral judgment presupposes.” (Mikhail also uses the language of “a recipe or program of some sort,” which the AI researcher may find more plausible.)
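To make the inference concrete, here is a minimal sketch in Python of the contrast Mikhail is drawing: a finite set of rules operating over structured representations of actions can evaluate indefinitely many novel cases, which no pre-stored list of judgments could cover. The Action fields and the single rule below are hypothetical illustrations, loosely echoing the battery prohibition discussed later; they are not Mikhail’s actual moral grammar.

```python
from dataclasses import dataclass

# Hypothetical structured representation of an action. The fields are
# illustrative stand-ins for the variables a moral evaluation might track.
@dataclass(frozen=True)
class Action:
    agent: str           # who acts
    patient: str         # who is acted upon
    contact: bool        # does the act involve physical contact?
    intended_harm: bool  # is harm the agent's goal or means?
    consented: bool      # did the patient consent?

def judge(action: Action) -> str:
    """One toy rule, roughly in the spirit of a battery prohibition:
    intentional harmful contact without consent is impermissible."""
    if action.contact and action.intended_harm and not action.consented:
        return "impermissible"
    return "permissible"

# Any novel configuration of agents, patients, and circumstances can be
# evaluated; no pre-sorted list of past judgments is consulted.
print(judge(Action("Alice", "Bob", contact=True, intended_harm=True, consented=False)))  # impermissible
print(judge(Action("Carol", "Dan", contact=True, intended_harm=False, consented=True)))  # permissible
```

The finite program covers an unbounded input space because each judgment is generated rather than retrieved, which is the whole force of the “finite storage” argument.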

Implications

Lo and behold, the argument for morality’s innate basis in the human mind comes into view. This basis is domain-specific (specialized for the evaluation of moral worth) and innate: basic moral competence, the ability to morally frame and evaluate the world, is unlearned. The specific ways in which humans morally evaluate the world, furthermore, are also unlearned (meaning our moral faculty is not recruited for just any reason, such as, in Mahlmann’s example, assigning virtue to a tree because an apple dropped into the hands of a hungry person).

While it is not a logical impossibility that such an ability could be learned, it would strain credulity to posit “learning” or “brain-environment interaction” as the primary mechanism responsible for the development of one’s moral competence. While conceptions of innateness vary greatly, it is not at all uncommon to find neuroscientists saying the following: “The superiority of human cognitive learning and understanding compared with existing deep network models may largely result from the much richer and complex innate structures incorporated in the human cognitive system” (p. 693).

By working, then, with the basic properties of moral judgment identified above, we should expect experimental inquiry to uncover complex representations underlying them. Indeed, individuals can be found to impose complex legal and philosophical principles on novel situations intuitively, without needing to apply the principles systematically and consciously, and without formal training in them. Mikhail found, for example, that children responding to classic moral dilemmas posed by developmental psychologists in experimental settings employ an intuitive prohibition corresponding to the legal conception of harmful battery. He thus postulates “an acute sensitivity to the purposeful harmful battery as a property of the human mind” (p. 780). This we may characterize as innate moral knowledge: specified in advance of experience, emerging reliably in the course of biological development. Later research by Sydney Levine, Mikhail, and Alan M. Leslie finds initial experimental support for the idea that individuals evaluating novel actions infer others’ intentions in part by imposing a “presumption of innocence” on other agents.

This picture of the mind is complex — it presumes that “moral intuitions can be understood as the output of a computational process performed over structured mental representations of human action,” as these authors note elsewhere. Experiments probing participants’ intuitions about moral dilemmas using “act trees” supported a long-running theme throughout certain areas of cognitive science: that the mental representations that moral intuitions presuppose are not “exceedingly simple” and cannot be captured “in terms of heuristics and biases” (p. 31).

(Note that it could have turned out, upon experimental inquiry, that moral intuitions’ mental representations were quite simplistic. Were this the case, we might still posit an innate basis for morality, but perhaps not a dedicated cognitive system.)
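To give a feel for what an “act tree” might look like as a data structure, here is a minimal sketch under stated assumptions: the node fields, the role labels, and the harm check are hypothetical illustrations, not the stimuli or representations used in Levine, Mikhail, and Leslie’s experiments.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical node in an "act tree": a structured representation linking
# an agent's means to their ends and side effects.
@dataclass
class ActNode:
    description: str
    role: Optional[str] = None  # "end", "means", or "side effect"
    children: List["ActNode"] = field(default_factory=list)

def harm_as_means(node: ActNode) -> bool:
    """Walk the tree and ask whether any harmful step figures as a means
    to the agent's end, rather than as a mere side effect."""
    harmful = "struck" in node.description
    if harmful and node.role == "means":
        return True
    return any(harm_as_means(child) for child in node.children)

# A bystander throws a switch, diverting a runaway trolley: the death on
# the side track is represented as a side effect, not a means.
bystander = ActNode("throw the switch", role="means", children=[
    ActNode("turn the trolley onto the side track", role="means", children=[
        ActNode("the man on the side track is struck", role="side effect"),
    ]),
    ActNode("save the five people on the main track", role="end"),
])

print(harm_as_means(bystander))  # False: harm is not a means here
```

The structural point is that moral evaluation reads properties off the representation (where harm sits in the means-end structure), not off surface features of the scenario.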

All of this goes to show that biological innateness is more than a matter of which systems of the mind are endowed by genetics in advance of experience and to what extent they are pre-specified with domain-specific knowledge. It also shows that such capacities interface with one another productively and without the conscious awareness of individuals. Causal representations of human action, moral intuition, linguistic intuition, and theory of mind are all operative, in some fashion, in the seemingly simple matter of verbally deeming a novel action unjust or immoral.

When dealing with the matter of innateness, then, we understand the following as paramount:

First, positing an innate basis for a human capacity like moral judgment is not straightforward: the most basic properties of moral judgments are deceptively complex, requiring patience and attention to detail to identify.

Second, once those properties are identified, it takes conscious, deliberate effort to tease out their conceptual significance and articulate the broad contours of morality’s innate basis in the mind. In this case, the cognitive system is characterized by rules, principles, or concepts encoded with some moral knowledge.

Third, this cognitive system must interface productively with others in the mind, including causal representations of human actions, theory of mind, linguistic cognition, visual cognition, and so on.

Finally, what innateness reveals about human moral judgments is that the intuitions on which they are founded are principled and rely on structured mental representations.

The tricky part is remembering how to conceptualize competence and performance. When we study moral competence, we study it in isolation from the rest of the mind. But we understand that, in ordinary life (and right now), the systems of the mind must routinely interface with one another in productive, dynamic ways. This methodological technique of abstracting the cognitive system away from concrete human behavior is most difficult when dealing with cognitive functions like moral judgment or linguistic cognition, though we readily employ it elsewhere: the concept of an “immunocompromised” individual only makes sense, for example, if we assume a single, idealized immune system that all human beings possess.

With all this in the background, we turn to the creation of artificial minds and how lessons in biological innateness can aid research programs to that end in Part 2.

References

[1] J. Donnelly, Universal Human Rights in Theory and Practice (2013), Cornell University Press.

[2] S. Dwyer, How Good Is the Linguistic Analogy? (2006), The Innate Mind: Vol. 2.

[3] M. Finnemore, The Purpose of Intervention (2003), Cornell University Press.

[4] A. Getachew, Worldmaking After Empire (2019), Princeton University Press.

[5] J. Haidt and C. Joseph, The Moral Mind: How Five Sets of Innate Intuitions Guide the Development of Many Culture-Specific Virtues, and Perhaps Even Modules (2008), The Innate Mind: Vol. 3.

[6] M. Keck and K. Sikkink, Activists Beyond Borders (2014), Cornell University Press.

[7] S. Levine, A.M. Leslie, and J. Mikhail, The Mental Representation of Human Action (2018), Cognitive Science.

[8] S. Levine, J. Mikhail, and A.M. Leslie, Presumed Innocent? How Tacit Assumptions of Intentional Structure Shape Moral Judgment (2018), Journal of Experimental Psychology: General.

[9] M. Mahlmann, Mind and Rights (2023), Cambridge University Press.

[10] J. Mikhail, Any Animal Whatever? (2014), Ethics.

[11] J. Mikhail, Elements of Moral Cognition (2011), Cambridge University Press.

[12] C. Reus-Smit, Individual Rights and the Making of the International System (2013), Cambridge University Press.

[13] D. Traven, Law and Sentiment in International Politics (2021), Cambridge University Press.

