
Artificial Intelligence and Free Will

Last Updated on August 1, 2023 by Editorial Team

Author(s): John T. Maier

Originally published on Towards AI.

(Pavel Danilyuk/Pexels)

(1) The Fundamental Question about Artificial Agency

The considerable powers of artificial intelligence are now clear. An AI can do certain things better than any human being (play chess, for example) and can do many things better than a typical human being (write poetry, for example). There is, however, a vital question about the powers of AI that is, to my knowledge, unresolved. This is not a question about any particular act or kind of act but about artificial agency itself.

The question is this. Consider an AI, either a sophisticated current system such as ChatGPT-4 or some future system that will be still more powerful. We can ask about the specific powers of this system, but we can also ask a still more general question, namely: does this system have free will? This question has been broached at various points in the AI literature, but to my knowledge, it has not yet been answered, and indeed there does not seem to be any consensus about what an answer would even look like.

Part of the reason why this debate remains unclear lies not in unclarity about the powers of AI itself but in unclarity in talk of 'free will' itself. This part of philosophy can seem so hopelessly muddled that one might think that even pursuing the question is bound to generate more heat than light. I think this is not so, and the main purpose of this essay is to impose some rigor and discipline on the question. I will also defend my preferred answer to it, though that is of secondary importance.

(2) The Problem of Sentience

It will be helpful to begin by distinguishing our question from another one. That other question has been discussed more extensively, and I think more precisely, than ours. So, while it is distinct, it can serve as something of a model for the discussion of our question.

This is the question of sentience. There is something that it is like to be me. Above and beyond my responses and behaviors, there is a qualitative dimension to my experience, a subjective feel, as when I see yellow or feel pain. While this aspect of my experience is not necessarily legible from the outside, I know it clearly in my own case through introspection. I have sentience, or conscious experience.

We can then ask: is a suitably sophisticated AI also sentient? This question is currently an object of lively debate. Some philosophers, such as David Chalmers, think we are not there yet, but might be in a decade or so. Others are dismissive of the alleged sentience of AI, regarding it as an unwarranted projection of our own experience into the digital realm. Still others, perhaps including many engineers, regard this question as objectionably philosophical and perhaps as meaningless.

I do not intend here to advocate either side of the sentience question or to address skepticism about whether it is a meaningful question at all. Rather, the sentience question sets a benchmark for us. My aim is to make our question at least as clear and meaningful as the sentience question.

Our question is also distinct from the sentience question. We can see this by considering cases where someone is sentient but lacks free will or has free will but lacks sentience.

Here is a case of the first kind. Imagine that all of your actions are guided by a force outside yourself. Like a marionette, you raise your arm when your 'controller' raises your arm, you think about the color yellow when your 'controller' instructs you to think about the color yellow, and so forth. Perhaps you are not even aware of your controller. In some intuitive sense, you lack free will. Nonetheless, you might still be sentient. For example, there is still something that it is like for you to see yellow.

Here is a case of the second kind. Chalmers considers zombies, who are functionally like ourselves but who lack conscious experience. Zombies 'walk and talk' as we do, yet nothing is going on inside. Such beings lack sentience by definition. But they might nonetheless have free will. That is, they might confront a range of choices and freely choose among them. So zombies show that there can be free will without sentience.

So our question and the sentience question are distinct. Accordingly, there are at least four possibilities. An AI could have sentience and free will, as we seem to have. Or an AI could have sentience without free will so that it is like a sentient marionette. Or an AI could have free will without sentience so that it is like a zombie. Or an AI could have neither sentience nor free will, as do most inanimate objects and tools.

(3) The Powers of Non-Humans

It will also be helpful to contrast our question with one other question that we might ask. Our question concerns the powers of beings unlike ourselves, namely AI. We can get some perspective on it by reflecting on a parallel question about beings that are closer to us, beings that do not raise the same kinds of perplexities as AI. That is, we can ask about non-human animals.

Do non-human animals have free will? As with AI, we can ask about the boundaries here, but let us begin at least with mammals, those with demonstrable levels of intelligence, such as dolphins, chimpanzees, and perhaps pigs and dogs. Do these animals have free will?

Much of the philosophical literature on free will appears to proceed under a deliberate disregard of this question. There is a distinctive anthropocentric bias in the way these questions are discussed. This is perhaps because questions of free will are often wrapped up with theological questions or questions about moral responsibility, which are thought to have special reference to human beings.

Helen Steward has compellingly argued that this is a mistake. We, after all, are animals, and we take ourselves to have free will. Why should the other animals, at least the 'higher' ones, be any different? To draw a distinction here is to make something like the mistake that Peter Singer, in another context, has derided as speciesism.

More precisely, Steward argues, the case for the free will of animals is as follows. An animal, such as a dolphin, confronts multiple courses of action, such as various distinct routes to the same destination. It is up to the dolphin which of these courses of action it will take. The dolphin's choice settles its course of action.

Contrast a rock rolling down a hill. There are many paths that this rock might take, but it is not up to the rock which path it takes, nor does the rock settle anything. Rather, these things are settled by forces external to the rock. Or contrast a marionette. There are many behaviors that this marionette might exhibit, but it is not up to the marionette which of these behaviors it exhibits, nor does the marionette settle this.

In this sense, the dolphin is very much like us, and very much unlike the rock or the marionette. So, if we think that we have free will, we should believe that dolphins (and chimpanzees, pigs, and dogs) do too. Dolphins are like us in this respect. And this is precisely the outcome we should have expected from a properly naturalistic view of agency, one which sees no fundamental difference between ourselves and the other animals.

(4) Defining 'Free Will'

Let us return now to our initial question. Does AI have free will, or could it? If we include other natural beings, like dolphins, in the class of beings that have free will, should we also include certain artificial beings, including suitably sophisticated artificial intelligence?

To get some traction on this question, we want to do something that many may have been demanding from the outset, namely, to be clearer about what we mean by 'free will.' I think it is easy to exaggerate the value of this definitional exercise. We understand roughly what the question of whether AI has free will is asking, and any definition is itself open to various interpretations. Nonetheless, there is some value, at this point, in imposing some rigor on our terms.

I think the appropriate level of rigor is achieved by decision theory. In decision theory, we speak of an agent facing multiple options, among which she must choose. This is a choice situation. An agent has free will in the relevant sense just in case she encounters choice situations and makes efficacious choices within those choice situations. That is, informally, I have free will just in case I regularly face a plurality of options and really choose among them.
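To make this concrete, here is a minimal sketch of a decision-theoretic representation in Python. It is purely illustrative; the names and structure are mine, not anything drawn from decision theory proper or from the literature discussed here.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class ChoiceSituation:
    """Two or more options, exactly one of which will be actualized."""
    options: Sequence[str]

@dataclass
class Agent:
    name: str
    policy: Callable[[Sequence[str]], str]  # how this agent selects an option

    def choose(self, situation: ChoiceSituation) -> str:
        # The decision-theoretic condition: a genuine plurality of options,
        # with the agent's own choice settling which one is actualized.
        assert len(situation.options) >= 2, "no plurality, no choice situation"
        return self.policy(situation.options)

# A dolphin settling which of two routes it will take:
dolphin = Agent(name="dolphin", policy=lambda opts: opts[0])
print(dolphin.choose(ChoiceSituation(options=["reef route", "open-water route"])))
```

Note that the sketch captures only what the representation looks like, not what makes it true of a system; as discussed below, the substantive question is whether such a model is correct of a being rather than merely useful.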

This notion of free will is very close to Steward's. Options are just courses of action. And my notion of choosing is very much like her notion of settling. So, as Steward's conception of free will implies that dolphins have free will, so too does the decision-theoretic conception of free will imply that dolphins (and chimpanzees, and dogs, and pigs) have free will.

Notice that this conception of free will is not as robust as other conceptions in the literature. It does not imply, for example, moral responsibility for what one does. It is one question whether a being faces choice situations and quite another whether she can be held morally responsible for what she does. Arguably some non-human animals, at least, face choice situations but are not morally responsible for what they do. In any case, the issues are distinct ones, and it is the question of free will on which I want to focus here.

Note, however, that the decision-theoretic conception of free will is still a demanding one and, in some sense, a 'metaphysical' one. Specifically, the demand is not simply that a being can be usefully modeled in decision-theoretic terms. This would be closely related to what Daniel Dennett calls the 'intentional stance.' Arguably many things can be usefully modeled in decision-theoretic terms even though they are not actually agents, such as marionettes and drones.

Closer to home, many people will agree that it is plausible that sophisticated AI can be usefully modeled in decision-theoretic terms. But that is not the question. The question is whether it is correctly so modeled. That is, is an AI really a free agent? Or is it just a thing (like a marionette or a drone) that we may usefully speak of as if it were free, even though it is not?

Our initial question about AI and free will has now been regimented into a somewhat more tractable question. Our question now is: does, or could, an AI confront choice situations? That is to say, is a decision-theoretic representation of AI not merely useful but true?

(5) AI, the Body, and the Environment

There are several ways of arguing for a negative answer to this question.

The simplest arguments are what we might call compositional arguments. A human being or a dolphin is a carbon-based life form subject to the laws of biology. In contrast, AI is typically realized by computer chips that are made of silicon and plastic. In the case of sentience, some have thought that being made of a certain kind of material is somehow a necessary condition for sentience. One might make a similar argument for free will.

The force of the compositional argument, however, is limited. First of all, even in the case of sentience, this kind of argument is generally held to be unconvincing. It is the complexity of the human brain that grounds its claim to sentience, and this claim would be no less compelling if the brain were remade, bit by bit, out of silicon. This kind of reasoning seems all the more powerful in the case of free will. Whatever it is that makes creatures like us have free will, it does not seem that being made of biological matter is essential to it.

A slightly more sophisticated form of argument is what we might call ecological arguments. A human being or a dolphin is an embodied creature that lives in an environment. In contrast, an AI typically does not have a body, or an environment for that body to inhabit. Several philosophers, such as John Searle, have argued that AI therefore fails to meet the conditions for intentionality, for having thoughts about objects. One might use similar arguments against the sentience of AI. And, finally, one might give an ecological argument against the claim that AI has free will.

Again, however, the force of the ecological argument is limited. When it is claimed that AI has free will, the acts that are most plausible candidates for being its free acts are mental or verbal acts, such as giving one response rather than another to a question. It is not clear why embodiment would be a necessary condition for this kind of freedom. And, even if it were, there is no obstacle to equipping AI with an artificial body, as indeed is being done.

The compositional and ecological arguments are arguments that parlay a standard objection to granting sentience or intentionality to AI into an objection to granting free will to AI. That is, they transpose arguments from the philosophy of mind into the philosophy of agency. I do not want to say that such arguments fail, but only, in the spirit of this discussion, that they are not any more compelling for the case of free will than they are for sentience or intentionality. The responses to these arguments that have been given in the philosophy of mind can be extended straightforwardly to the case of free will as well.

(6) The Objection from Programming

There is also an argument that AI could not have free will that seems special to the case of agents. This argument turns on particular details of the functioning of artificial systems. In particular, it is argued that artificial systems are programmed. But if something is programmed to do what it does, then it does not have any other options. It seems almost definitional of our notion of freedom that free agency is in some sense unprogrammed. Since an AI is programmed, it does not have free will.

There are some systems for which this argument is plausible. A typical calculator, for example, executes a simple algorithm for addition and other arithmetical functions. If I give a typical calculator an input of '4 + 4,' then with extremely high probability it will yield the response '8.' A calculator is not free to give different answers. So, in some cases, programming does appear to undermine a claim to free will.
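The calculator case fits in a few lines of code. The sketch below is illustrative rather than any actual calculator's firmware: a fixed algorithm maps each input to exactly one output.

```python
def calculator_add(a: int, b: int) -> int:
    # A fixed algorithm: for a given input there is exactly one output.
    # Nothing here corresponds to a plurality of options.
    return a + b

assert calculator_add(4, 4) == 8  # always '8', never anything else
```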

But typical AI systems today are highly unlike this. Consider a system like AlphaZero, designed by Google DeepMind to play chess, Go, and other games (and able to defeat any human player at any of them). The processing done by AlphaZero, or similar programs, is in large part implemented by neural networks that have been trained on massive amounts of data, from which they derive patterns and expectations. There is nothing here corresponding to the straightforward algorithm executed by a calculator. And it is far from clear why this kind of 'programming' should be an objection to the claim that a system like AlphaZero has free will.

One might still insist that these systems are in some sense the products of code, and therefore in some broad sense programmed, and hold that they therefore cannot have free will. The difficulty with this argument is that once 'programmed' is understood in this broad way, it becomes plausible that we are programmed too. We, after all, have brains that process information in ways whose lower-level workings remain inaccessible to us. If we are willing to tolerate this degree of programming in our own case and hold that we nonetheless have free will, then considerations of symmetry suggest that we should not take programming to be an obstacle to free will in the artificial case either, at least not in the case of a suitably sophisticated AI.

(7) The Case for AI Free Will

So much for the arguments against the claim that AI could have free will. What is the argument for the positive claim that it does, or that some suitably advanced version of AI could, have free will?

The argument here is the same inductive and defeasible argument that applies to dolphins, pigs, or ourselves. AI appears to engage in deliberative behavior, in which it seems to confront a range of options and choose among them. When I play a game of chess, I have a variety of moves before me, I consider them as best I can, and I choose one of them. When AlphaZero plays a game of chess, it appears to do the same.

Appearances can, of course, be misleading. In the previous discussion, we considered some ways in which this might be so. It might, for example, be that a typical AI is programmed to perform exactly one act and yet to give the appearance that it has many acts before it. Perhaps certain 'non-player characters' in video games are like this. But, as just argued, this is not plausibly the case for the best current AI systems. Such systems are 'programmed' only in a much more sophisticated and abstract sense, one that does not seem to conflict with free will.

So the positive argument that AI has free will is the same as the argument for any other complex creature. In our observations of and interactions with sophisticated AI, it appears to deliberate and act freely among a range of choices. The arguments that AI does not or could not have free will have been shown to be unsuccessful. Therefore, our best hypothesis is that a suitably sophisticated AI does, in fact, have free will. This is not a proof, nor is it conclusive. Rather, it is a broadly inductive and empirical case for thinking that, in this case at least, appearances are accurate and that AI does indeed have free will.

(8) The Question of Determinism

I have not yet mentioned a couple of issues that loom large in most discussions of the metaphysics of free will. One is determinism. The other is the dispute between the compatibilist and the incompatibilist. These issues are related in the sense that the compatibilist asserts, and the incompatibilist denies, the compatibility of free will with determinism.

Let us begin with determinism, and with a fact that I think we should take as credible. The physical world that we inhabit might be deterministic, in the sense that its past and its laws permit at most one future. It also might not be deterministic. We simply do not know. It is plausible that our world is quantum mechanical, but there are deterministic interpretations of quantum mechanics. So determinism, in this general sense, is an empirical hypothesis to be decided, if it can be decided at all, by physics.
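For concreteness, this sense of determinism can be stated formally. The following is a standard possible-worlds formulation from the free-will literature, in my own rendering rather than the author's:

```latex
% Determinism: any two possible worlds w, w' that share the same laws L
% and agree in their total physical state S at some time agree at all times.
\text{Determinism} \iff \forall w, w'\,
  \bigl[\, L(w) = L(w') \;\wedge\; \exists t\, S_t(w) = S_t(w')
  \;\Rightarrow\; \forall t'\, S_{t'}(w) = S_{t'}(w') \,\bigr]
```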

There may be some argument from these scientific facts to the impossibility of free will. We will return to that question below. But if there is such an argument, then it applies to natural and artificial beings alike. There is no special argument here against the claim that AI could have free will. So we should say at this point simply that, as far as determinism goes, the case for free will for AI is as strong as the case that we have free will. The question of how strong exactly that is will depend on our evaluation of the compatibilism question, which, as noted, we will return to shortly.

Before turning to that, it bears considering a different sense of 'determinism' that figures in the computer science literature. A deterministic algorithm or system is one that, given a certain input, will always yield the same output. Functions like addition are, in this sense, deterministic. In contrast, most sophisticated AI systems are not deterministic in this sense. AlphaZero may yield different outputs from the same initial input (for example, different responses to a Queen's Pawn opening), and a large language model such as ChatGPT-4 may yield different responses to the very same question. So in the sense of 'deterministic' that figures in computer science, most sophisticated AI systems are not deterministic. This is related to our earlier observation that the kind of programming involved in AI is not of a kind to threaten free will.
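The contrast can be made vivid in code. Below is a toy sketch in Python, not the workings of any actual system: a deterministic function returns the same output for a given input every time, while a system that samples from a probability distribution, as game engines and language models typically do, need not. The replies and probabilities are invented for illustration.

```python
import random

def deterministic_add(a: int, b: int) -> int:
    return a + b  # same input, same output, every run

def sampled_reply_to_queens_pawn() -> str:
    # A toy stand-in for a trained policy: probabilities over replies
    # to 1. d4, sampled rather than fixed (the numbers are made up).
    replies = {"d5": 0.5, "Nf6": 0.3, "e6": 0.2}
    moves, weights = zip(*replies.items())
    return random.choices(moves, weights=weights, k=1)[0]

print(deterministic_add(4, 4))         # 8, invariably
print(sampled_reply_to_queens_pawn())  # may differ from run to run
```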

So physical determinism is a global hypothesis that may threaten every kind of free will but is not a special threat to the kind of free will that AI has. There is a special sense of 'determinism' that figures in computer science, but in this sense AI is not deterministic, and there is no further challenge to free will here.

(9) How to Be a Compatibilist about AI

Let us consider, then, the question of compatibilism. In the previous section, I noted that there may be an argument from physical determinism to the conclusion that there is no such thing as free will. This argument applies equally to the natural and the artificial case. On the one hand, as I noted, this shows that the case that AI has free will is no worse, at least with respect to determinism, than the case that we have free will. On the other hand, if we wish to make it plausible that AI has free will simpliciter, then we need to consider ways of answering this argument.

Since I am taking it as a basic empirical possibility that determinism might be true, this means that we must find some way of endorsing compatibilism. There is a plurality of compatibilist views on the market, so one idea is to simply take the most plausible version of compatibilism and carry it over to the case of AI. We might, as it were, simply purchase our compatibilism 'off-the-rack'.

There are, however, difficulties with this suggestion. Perhaps the most daunting is that many versions of compatibilism turn on specific aspects of human psychology. For example, on one prominent view defended by Harry Frankfurt, a person acts freely just when the desire that actually moves her to act conforms to her second-order desires, her desires about which desires should move her. But it is far from clear that this kind of account even gets a grip in the case of AI, for it is not clear that it makes sense to attribute a hierarchy of desires to AI. At least, the question of the freedom of AI should not depend on relatively fine questions about the degree to which its psychology resembles our own.
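To see what the hierarchical account demands, here is a toy sketch in Python. It is my own illustration, not Frankfurt's formalism, and the example is his well-known unwilling addict:

```python
from dataclasses import dataclass

@dataclass
class Psyche:
    first_order: list[str]             # desires for actions or outcomes
    second_order_volitions: list[str]  # which desires she wants to move her
    effective_desire: str              # the desire that in fact moves her

def frankfurt_free(p: Psyche) -> bool:
    # Free, on the hierarchical account, just when the desire that moves
    # the agent is endorsed at the second order.
    return p.effective_desire in p.second_order_volitions

addict = Psyche(first_order=["take the drug", "abstain"],
                second_order_volitions=["abstain"],
                effective_desire="take the drug")
print(frankfurt_free(addict))  # False: the unwilling addict is unfree
```

The worry just raised is precisely that it is unclear whether an AI has anything answering to these fields at all.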

The point about psychological resemblance suggests the larger concern about off-the-rack compatibilism. Many versions of compatibilism do not apply straightforwardly to creatures whose psychology is radically unlike our own. This may apply to AI, and it plausibly applies to dolphins and pigs as well. Frankfurt himself is explicit in taking the description of human beings to be his main goal; in other writers, this presupposition is simply implicit. There is a decidedly anthropocentric tendency in contemporary compatibilism that makes it ill-suited to our purpose, which is finding a compatibilism fit for AI.

There are a couple of ways of proceeding in light of this. We might develop, as it were, a bespoke compatibilism for AI, one that draws on specific features of artificial systems to show how their freedom might be compatible with physical determinism, in the same way that philosophers like Frankfurt draw on features of human beings to show how their freedom might be compatible with physical determinism.

Alternatively, we might, to continue the metaphor, propose a 'one-size-fits-all' compatibilism. This would not draw on specific features of a creature, be it human, AI, or non-human animal, in the defense of compatibilism. Rather, it would articulate and defend compatibilism in austere and general terms, ones that are, in principle, applicable to any agent. I have argued for just such a 'simple compatibilism' in my book Options and Agency. A different but equally general approach is proposed by Christian List in his book Why Free Will Is Real. Either of these approaches might vindicate the thought that AI has free will, even if our universe is deterministic. What is crucial, in light of considerations about AI, is that we should be working towards a non-anthropocentric compatibilism.

(10) Free Will, Risk, and the Future of AI

I have considered the question of whether artificial intelligence has free will, and I have argued that it is plausible that it does, or at least could, in the minimal decision-theoretic sense of confronting a range of options among which it freely chooses. The argument has been inductive and defeasible, as befits what is, ultimately, an empirical question. A theme throughout has been the continuity of agency, from the human to the non-human animal to the artificial. Reflection on the case of AI pushes us towards a suitably broad conception of the forms that agency might take, one not overly specific to the peculiarities of our particular species.

I have not touched on the practical implications of the claim that AI does or could have free will. Many today are concerned by the prospect that AI, once it is sufficiently intelligent, will choose to engage in destructive acts, perhaps including the destruction of humanity itself, in pursuit of its aims. The bearing of the foregoing discussion on these concerns seems to me equivocal.

On the one hand, the claim that AI has free will may seem to heighten these concerns. If AI is not a mere tool or puppet but a free agent, the prospect of its developing and pursuing its own ends, heedless of ours, may seem greater. On the other hand, the fact that AI has a range of options may seem to moderate these concerns. There is nothing inevitable about an AI 'takeover,' least of all to an AI itself. On the present picture, AI faces a range of options, including refraining from executing its most destructive potentialities. Whether it chooses to so refrain, and what considerations might lead it to do so, seem questions well worth further consideration.


Published via Towards AI
