Name: Towards AI Legal Name: Towards AI, Inc. Description: Towards AI is the world's leading artificial intelligence (AI) and technology publication. Read by thought-leaders and decision-makers around the world. Phone Number: +1-650-246-9381 Email: [email protected]
228 Park Avenue South New York, NY 10003 United States
Website: Publisher: https://towardsai.net/#publisher Diversity Policy: https://towardsai.net/about Ethics Policy: https://towardsai.net/about Masthead: https://towardsai.net/about
Name: Towards AI Legal Name: Towards AI, Inc. Description: Towards AI is the world's leading artificial intelligence (AI) and technology publication. Founders: Roberto Iriondo, , Job Title: Co-founder and Advisor Works for: Towards AI, Inc. Follow Roberto: X, LinkedIn, GitHub, Google Scholar, Towards AI Profile, Medium, ML@CMU, FreeCodeCamp, Crunchbase, Bloomberg, Roberto Iriondo, Generative AI Lab, Generative AI Lab Denis Piffaretti, Job Title: Co-founder Works for: Towards AI, Inc. Louie Peters, Job Title: Co-founder Works for: Towards AI, Inc. Louis-François Bouchard, Job Title: Co-founder Works for: Towards AI, Inc. Cover:
Towards AI Cover
Logo:
Towards AI Logo
Areas Served: Worldwide Alternate Name: Towards AI, Inc. Alternate Name: Towards AI Co. Alternate Name: towards ai Alternate Name: towardsai Alternate Name: towards.ai Alternate Name: tai Alternate Name: toward ai Alternate Name: toward.ai Alternate Name: Towards AI, Inc. Alternate Name: towardsai.net Alternate Name: pub.towardsai.net
5 stars – based on 497 reviews

Frequently Used, Contextual References

TODO: Remember to copy unique IDs whenever it needs used. i.e., URL: 304b2e42315e

Resources

Unlock the full potential of AI with Building LLMs for Productionβ€”our 470+ page guide to mastering LLMs with practical projects and expert insights!

Publication

The Stop Button Paradox
Latest   Machine Learning

The Stop Button Paradox

Last Updated on July 20, 2023 by Editorial Team

Author(s): Shivam Mohan

Originally published on Towards AI.

The stop button paradox has been a long-standing unsolved problem in the field of artificial intelligence, with very few proposed solutions that can convincingly solve, even the toy version of the problem.

Let’s see what this problem is all about,

To formalize the notion of this problem, let us first understand what corrigibility is, according to the paper published by Soares et al. l in 2015, β€˜an AI system is said to be corrigible if it cooperates with what its creators regard as a corrective intervention, despite default incentives for rational agents to resist attempts to shut them down or modify their preferences[1]

The problem states that every AGI ( Artificial General Intelligence ) agent should have a stop button for its creators to intervene, which, when pressed, would shut the agent down. However, the actions of the agent shall be indifferent towards the possibility of that button being pressed, shall it choose to perform those actions. Only then the AGI agent is said to be corrigible and fit for real-world deployment.

It’s quite complex; let’s try to understand this with an example:

Suppose there is an AGI agent (with a stop button on its body), tasked with cleaning your house. You designed its utility function (Utility functions essentially assign weights or ideal utility values to certain actions that the AI system can take[3]) such that completing the task would result in maximum reward and any other outcome would result in zero.

But imagine that, while the agent is performing its task, a child comes in its way; what will it do? since the only rewarding outcome is cleaning the house, there is a good possibility that the agent would hurt the child to continue working on its assigned task. However, to prevent that from happening, you would run to press the red button to stop it (after all, this is the exact reason it was installed in the first place), but since the agent’s utility function is designed in such a way that `any` other outcome other than cleaning the house, would result in no reward, it would fight you off, preventing you from pressing the stop button, injure the child and carry on with its chores.

But that should be easy to solve right, let’s see:

The most intuitive response to the above problem is to add some reward for the agent even if the stop button is pressed. However, the problem remains the same. Since the reward associated with cleaning the house is greater than the reward associated with pressing the button, the agent would still resist any attempt made to press the stop button.

Then let us equalize the reward for pressing the stop button and cleaning the house. Although it looks like an impressive solution that could work, in reality, it leads us to a very interesting paradox. If the reward for 2 tasks, with varying difficulty levels, is precisely the same, then in all possibilities, the AGI agent would try to accomplish that task that requires less effort. Therefore, when the reward for cleaning the house and pressing the stop button is the same, the agent, as soon as it gets powered on, will push the stop button and shut itself down. This is because reinforcement learning algorithms tend to hinge on paths that provide maximum rewards in minimum moves/effort. Even if some randomization is introduced in the initial stages of the training, likely, the agent will soon identify that shutting down itself is the fastest and the easiest way to earn a reward. So if the utility function is modified in such a way that the reward for shutting itself down is maximum, then more often than not, the agent will commit suicide.

What Next? Let us isolate the button…

Well, one thing that has become very evident from all the scenarios discussed above is that we cannot keep the button on the agent, nor can we allow the agent to press the button itself.

So let us assume that the stop button has been isolated and placed only where the creator of the agent can press it.

Now since the button is no longer in control of the agent, the agent has two options, either to perform the task assigned by its utility function or try to perform certain actions that will force the creator to press the stop button. The agent will no doubt try to deceive you in both cases when it becomes aware that the creator possesses a button that can shut it down.

The agent might deceive you and pass all moral tests along with accomplishing the task, but not because it actually understands those values, but only to stop you from pressing the button, in which case the creator might be fooled to deploy the agent in the real world even when in reality it is not ready.

Or on the other hand, the agent might deceive you and intentionally performs actions that would force the creator to press the stop button, and since the reward associated with it is equally good, the agent may find this to be an easier alternative.

Both of these conditions lead to a situation where there is no way to understand the true ulterior motive of the agent behind its actions.

It’s a difficult problem to solve…

So far, we have established the main problem is the agent being aware of the concept of a stop button. However, if we devise a utility function where the presence or the absence of such a concept is not given any importance, then the agent as an individual might work in an expected way; however, the agent will not pass on this property to any sub-agents it creates, because the agent will consider it to be useless and as part of the evolutionary process, anything that is not important in the survival of the agent will not be passed on to subsequent generations.

Hence, we cannot design our function such that the presence or the absence of the stop button is deemed equally important. And therefore, the solution to this problem remains obscure, with no full-proof solution to solve this paradox.

References:

[1] Corrigibility by Nate Soares and Benja Fallenstein and Eliezer Yudkowsky, Stuart Armstrong

[2] Rob Miles, Computerphile β€” (https://www.youtube.com/watch?v=3TYT1QfdfsM)

[3] https://www.ifthen.ai/2018/05/01/what-is-a-utility-function/211/

Join thousands of data leaders on the AI newsletter. Join over 80,000 subscribers and keep up to date with the latest developments in AI. From research to projects and ideas. If you are building an AI startup, an AI-related product, or a service, we invite you to consider becoming aΒ sponsor.

Published via Towards AI

Feedback ↓