
Learn to Schedule Communication between Cooperative Agents

Last Updated on July 20, 2023 by Editorial Team

Author(s): Sherwin Chen

Originally published on Towards AI.

A novel architecture for communication scheduling in multi-agent environments

Photo by Pavan Trikutam on Unsplash

Introduction

In multi-agent environments, one way to improve coordination is to enable the agents to communicate with each other in a distributed manner and behave as a group. In this article, we discuss a multi-agent reinforcement learning framework called SchedNet, proposed by Kim et al. at ICLR 2019, in which agents learn how to schedule communication, how to encode messages, and how to act upon received messages.

Problem Setup

We consider cooperative multi-agent scenarios in which the agents are situated in a partially observable environment. We formulate such scenarios as a multi-agent sequential decision-making problem in which all agents share the goal of maximizing the same discounted sum of rewards. Since communication between agents has to be scheduled over a shared medium, we impose two restrictions on medium access:

  1. Bandwidth constraint: each agent can pass only an L-bit message to the medium at each time step.
  2. Contention constraint: the agents share the communication medium, so only K out of the n agents can broadcast their messages at each time step.

We now formalize MARL using a DEC-POMDP (Decentralized Partially Observable Markov Decision Process), a generalization of the MDP that allows distributed control by multiple agents who may be incapable of observing the global state. We describe a DEC-POMDP by a tuple ⟨S, A, r, P, Ω, O, γ⟩, where:

  • s ∈ S is the environment state, which is not available to the agents
  • aᵢ ∈ A and oᵢ ∈ Ω are the action and observation of agent i ∈ N
  • r: S × A^N → R is the reward function shared by all agents
  • P: S × A^N → S is the transition function
  • O: S × N → Ω is the emission/observation probability
  • γ denotes the discount factor
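To make the notation concrete, here is a minimal sketch of the DEC-POMDP tuple as a plain Python container. The field names are ours for illustration only; they do not come from the paper or its official implementation.

    # Minimal sketch of the DEC-POMDP tuple <S, A, r, P, Omega, O, gamma>.
    # Field names are illustrative only.
    from dataclasses import dataclass
    from typing import Any, Callable

    @dataclass
    class DecPOMDP:
        states: Any            # S: environment state space (hidden from the agents)
        actions: Any           # A: per-agent action space
        reward: Callable       # r(s, joint_action) -> float, shared by all agents
        transition: Callable   # P(s, joint_action) -> next state
        observations: Any      # Omega: per-agent observation space
        emission: Callable     # O(s, agent_index) -> observation
        gamma: float           # discount factor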

SchedNet

Overview

Figure 1: Architecture of SchedNet with two agents. Each agent has its own observations and networks, which are not shared with the other agents. Bold-face fonts highlight aggregate notations over multiple agents.

Before diving into details, let us first take a quick look at the architecture (Figure 1) to get an overview of what is going on. At each time step, each agent receives its observation and passes it to a weight generator and an encoder, which produce a weight value w and a message m, respectively. All weight values are then sent to a central scheduler, which determines which agents' messages are scheduled for broadcast via a schedule vector c = [cᵢ]ₙ, cᵢ ∈ {0, 1}. The message center aggregates all messages along with the schedule vector c and then broadcasts the selected messages to all agents. Finally, each agent takes an action based on these messages and its own observation.
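To make this data flow concrete, here is a minimal sketch of one SchedNet time step with toy linear modules and the Top(k) scheduler described later. All module names and sizes, and the choice to zero out unscheduled messages rather than drop them (to keep tensor shapes fixed), are our simplifications, not the official implementation.

    # Sketch of one SchedNet time step (toy shapes; not the official implementation).
    import torch
    import torch.nn as nn

    n_agents, obs_dim, msg_bits, n_actions, k = 3, 8, 4, 5, 2

    weight_gen = nn.ModuleList([nn.Linear(obs_dim, 1) for _ in range(n_agents)])
    encoder    = nn.ModuleList([nn.Linear(obs_dim, msg_bits) for _ in range(n_agents)])
    selector   = nn.ModuleList([nn.Linear(obs_dim + n_agents * msg_bits, n_actions)
                                for _ in range(n_agents)])

    obs = torch.randn(n_agents, obs_dim)                                  # o_i per agent
    w   = torch.cat([weight_gen[i](obs[i]) for i in range(n_agents)])     # weights w_i
    m   = torch.stack([encoder[i](obs[i]) for i in range(n_agents)])      # messages m_i

    # Scheduler: Top(k) over the weights -> schedule vector c in {0, 1}^n
    c = torch.zeros(n_agents)
    c[torch.topk(w, k).indices] = 1.0

    # Message center: keep only the scheduled messages (zeroed here to keep shapes fixed)
    broadcast = (m * c.unsqueeze(1)).flatten()

    # Action selectors: each agent acts on its observation plus the broadcast
    logits  = [selector[i](torch.cat([obs[i], broadcast])) for i in range(n_agents)]
    actions = [torch.distributions.Categorical(logits=l).sample() for l in logits]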

As we will see next, SchedNet trains all of its components through the critic, following the centralized training and distributed execution framework.

Weight Generator

Let's start with the weight generator. The weight generator takes an observation as input and outputs a weight value, which is then used by the scheduler to schedule messages. We train the weight generator through the critic by maximizing the action-value function Q(s, w). To get a better sense of what is going on, view the weight generator as a deterministic policy network and absorb all parts other than the critic into the environment. The weight generator and the critic then form a DDPG structure, in which the weight generator is responsible for answering the question: "what weight should I generate to maximize the environment rewards from here on?" As a result, we have the following objective:

The objective for the weight generator, where bold-face fonts highlight aggregate notations over multiple agents, as in Figure 1.

It is essential to distinguish s from o; s is the environment state, while o is the observation from the viewpoint of each agent.
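Since the weight generator and the critic form a DDPG-style pair, the update amounts to ascending the critic's estimate of Q(s, w). The snippet below is only an illustration of that gradient step with stand-in modules and toy shapes; the critic itself would be trained separately with TD targets.

    # Sketch of the DDPG-style update for the weight generators (stand-in modules).
    import torch
    import torch.nn as nn

    n_agents, obs_dim, state_dim = 3, 8, 16
    weight_gen = nn.ModuleList([nn.Linear(obs_dim, 1) for _ in range(n_agents)])
    critic_q   = nn.Linear(state_dim + n_agents, 1)       # stand-in for Q(s, w)
    wg_optim   = torch.optim.Adam(weight_gen.parameters(), lr=1e-3)

    state = torch.randn(state_dim)          # global state s, available during training only
    obs   = torch.randn(n_agents, obs_dim)  # per-agent observations o

    w = torch.cat([weight_gen[i](obs[i]) for i in range(n_agents)])   # aggregate weights
    q = critic_q(torch.cat([state, w]))                               # Q(s, w)

    wg_loss = -q.mean()        # ascend on Q(s, w) by descending on -Q(s, w)
    wg_optim.zero_grad()
    wg_loss.backward()
    wg_optim.step()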

Scheduler

Back when we described the problem setup, we imposed two constraints on the communication process. The bandwidth limitation L can easily be enforced by restricting the size of the message m. We now focus on imposing the contention constraint K in the scheduling part.

The scheduler adopts a simple weight-based algorithm, called WSA (Weight-based Scheduling Algorithm), to select K agents. We consider two proposals from the paper:

  1. Top(k): selecting the top k agents in terms of their weight values
  2. Softmax(k): computing a softmax value for each agent i based on its weight value, and then randomly selecting k agents according to these softmax values

The WSA module outputs a schedule vector c = [cᵢ]ₙ, cᵢ ∈ {0, 1}, where each cᵢ determines whether agent i's message is scheduled for broadcast or not.
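A minimal sketch of the two WSA variants is given below. The function names are ours, and the Softmax(k) variant is one reasonable reading of the paper: sample k distinct agents with probabilities proportional to the softmax of their weights.

    # Sketch of the two WSA variants (function names are ours).
    import numpy as np

    def top_k_schedule(weights, k):
        """Top(k): schedule the k agents with the largest weight values."""
        c = np.zeros(len(weights), dtype=int)
        c[np.argsort(weights)[-k:]] = 1
        return c

    def softmax_k_schedule(weights, k, rng=None):
        """Softmax(k): sample k distinct agents, weighted by softmax(weights)."""
        rng = rng or np.random.default_rng()
        p = np.exp(weights - np.max(weights))
        p /= p.sum()
        chosen = rng.choice(len(weights), size=k, replace=False, p=p)
        c = np.zeros(len(weights), dtype=int)
        c[chosen] = 1
        return c

    w = np.array([0.7, 0.1, 0.5])
    print(top_k_schedule(w, k=2))       # [1 0 1]
    print(softmax_k_schedule(w, k=2))   # random, biased toward larger weights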

Message Encoder, Message Center, and Action Selector

The message encoder encodes observations to produce a message m. The message center aggregates all messages m and selects which messages to broadcast based on c. The resulting message m ⊗ c is the concatenation of all selected messages. For example, if m = [000, 010, 111] and c = [101], the final message to broadcast is m ⊗ c = [000111]. Each agent's action selector then chooses an action based on this message and its observation.
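The aggregation step above can be sketched in a few lines, using plain strings for the bit messages:

    # Sketch of the message center: concatenate only the scheduled messages (m (x) c).
    def aggregate(messages, schedule):
        """Concatenate the messages of scheduled agents, in agent order."""
        return "".join(m for m, c in zip(messages, schedule) if c == 1)

    print(aggregate(["000", "010", "111"], [1, 0, 1]))   # -> "000111"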

We train the message encoders and action selectors via an on-policy algorithm, with the state-value function V(s) serving as the critic. The gradient of its objective is

where π denotes the aggregate network of the encoder and the action selector, and V is trained with the following objective
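The two objectives above describe a standard actor-critic step: the encoder/action-selector network is updated with the advantage-weighted log-probability of the chosen action, and V(s) is regressed toward a TD target. Under that reading, a minimal sketch with stand-in modules and toy shapes looks like this; it only illustrates the training signal and is not the official implementation.

    # Sketch of the on-policy (actor-critic) update for encoder + action selector.
    import torch
    import torch.nn as nn

    input_dim, state_dim, n_actions, gamma = 20, 16, 5, 0.99
    actor  = nn.Linear(input_dim, n_actions)   # stand-in for encoder + action selector
    critic = nn.Linear(state_dim, 1)           # stand-in for V(s)
    optim  = torch.optim.Adam(list(actor.parameters()) + list(critic.parameters()), lr=1e-3)

    x, s, s_next = torch.randn(input_dim), torch.randn(state_dim), torch.randn(state_dim)
    reward = torch.tensor(1.0)

    dist   = torch.distributions.Categorical(logits=actor(x))
    action = dist.sample()

    td_target = reward + gamma * critic(s_next).detach()
    advantage = td_target - critic(s)

    actor_loss  = -(dist.log_prob(action) * advantage.detach()).mean()
    critic_loss = advantage.pow(2).mean()     # regress V(s) toward the TD target

    optim.zero_grad()
    (actor_loss + critic_loss).backward()
    optim.step()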

Discussion

Two Different Training Procedures?

Kim et al. train the weight generators and action selectors using different methods but with the same data source. Specifically, they train the weight generators using a deterministic policy-gradient algorithm (an off-policy method), while simultaneously training the action selectors using a stochastic policy-gradient algorithm (an on-policy method). This could be problematic in practice, since the stochastic policy-gradient method could diverge when trained with off-policy data. The official implementation ameliorates this problem by using a small replay buffer, which, however, may impair the performance of the on-policy component.

We could bypass this problem by reparameterizing the critic so that it takes the state s and the actions a₁, a₂, … as inputs and outputs the corresponding Q-value. In this way, both components are trained with off-policy methods. Another conceivable way is to separate the training process from environment interaction if one insists on stochastic policy-gradient methods. Note that it is not enough to simply separate the policy training, since updates to the weight generator change the environment's state distribution.
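A minimal sketch of such a reparameterized critic is shown below; it simply concatenates the global state with one-hot encodings of all agents' actions. This is our illustration of the suggestion above, not anything from the paper or its implementation.

    # Sketch of a critic reparameterized as Q(s, a_1, ..., a_n).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    n_agents, state_dim, n_actions = 3, 16, 5

    class JointActionCritic(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(state_dim + n_agents * n_actions, 64),
                nn.ReLU(),
                nn.Linear(64, 1),
            )

        def forward(self, state, actions):
            # actions: (n_agents,) integer actions, one-hot encoded before concatenation
            one_hot = F.one_hot(actions, n_actions).float().flatten()
            return self.net(torch.cat([state, one_hot]))

    critic = JointActionCritic()
    q = critic(torch.randn(state_dim), torch.randint(0, n_actions, (n_agents,)))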

References

Daewoo Kim, Sangwoo Moon, David Hostallero, Wan Ju Kang, Taeyoung Lee, Kyunghwan Son, and Yung Yi. 2019. "Learning to Schedule Communication in Multi-Agent Reinforcement Learning." ICLR, 1–17.


Published via Towards AI
