Actor Critic — Deep Reinforcement Learning
Author(s): Sarvesh Khetan
Originally published on Towards AI.
I introduced the problem statement here, where we are trying to build an agent capable of playing the Atari game Pong. Before reading this article, I would suggest building up foundational knowledge by reading about policy gradient techniques, since this article is a continuation of those techniques!
In policy gradient techniques we saw how to use baselines to train the policy model. There we understood that a baseline is simply a reference value that helps us decide whether an action taken in a particular state is good or bad compared to that reference.
But there the baseline was fixed. Can we do better? Researchers thought: what if we take the baseline to be the average reward over the different possible actions in that state?
We already saw here that the average reward over the different possible actions in a state is nothing but the value function. Hence researchers concluded that they could use the value function as the baseline!
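Written out in standard notation (the exact derivation is in the policy gradient article), the baseline update then becomes

\nabla_\theta J(\theta) \;\approx\; \mathbb{E}\big[\nabla_\theta \log \pi_\theta(a_t \mid s_t)\,\big(G_t - V(s_t)\big)\big]

where G_t is the discounted reward collected from time step t onwards, and the difference G_t − V(s_t) (the advantage) tells us whether the chosen action did better or worse than the average action in that state.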
Value Network (Critic)
To calculate the value function we can use the value neural network that we saw earlier here in the Q-learning article. There we had no means to train that network, but here we do, as follows.
This network can be trained with a loss that pushes the predicted value of a state towards the discounted reward actually observed from that state onwards.
But how will you calculate this discounted reward without a policy? We will use the current, still-suboptimal policy given by the policy network below to collect rewards and compute this discounted reward value.
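As a concrete illustration, here is a minimal PyTorch sketch of such a value network and its training loss. The framework, layer sizes, and names are my own assumptions rather than the article's exact setup, and the target used here is the Monte-Carlo discounted return collected by running the current policy.

```python
import torch
import torch.nn as nn

class Critic(nn.Module):
    """Value network: maps a state to a scalar estimate V(s)."""
    def __init__(self, state_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state):
        return self.net(state).squeeze(-1)


def discounted_returns(rewards, gamma=0.99):
    """G_t = r_t + gamma * r_{t+1} + gamma^2 * r_{t+2} + ... for one episode."""
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.insert(0, g)
    return torch.tensor(returns)


def critic_loss(critic, states, rewards, gamma=0.99):
    """Mean squared error between predicted V(s_t) and the observed return G_t."""
    return nn.functional.mse_loss(critic(states), discounted_returns(rewards, gamma))
```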
Policy Network (Actor)
Once we have the value function we can use it to train the policy network. The policy network architecture looks the same as we saw here; the only change is that we now use the value network as the baseline.
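For concreteness, here is a minimal PyTorch sketch of such a policy network; the layer sizes and names are illustrative assumptions, and for Pong the raw frames would first need to be preprocessed into a flat vector (or the linear layers swapped for convolutional ones).

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Policy network: maps a state to a probability distribution over actions."""
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),   # one logit per action
        )

    def forward(self, state):
        # softmax policy pi(a | s), returned as a Categorical distribution
        return torch.distributions.Categorical(logits=self.net(state))
```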
You can use any of the following algorithms to train this policy network (all of these algorithms were discussed and derived in the policy gradient article).
Actor Critic Algorithm
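To tie the two networks together, here is a sketch of one training episode of the actor-critic algorithm, reusing the hypothetical Actor, Critic, and discounted_returns pieces sketched above. It assumes a Gymnasium-style environment (reset/step) and uses Monte-Carlo returns; the article's figures may present a slightly different variant (e.g. one-step TD targets).

```python
import torch
import torch.nn as nn

def train_one_episode(env, actor, critic, actor_opt, critic_opt, gamma=0.99):
    # 1) Play one episode with the current (still suboptimal) policy
    states, actions, rewards = [], [], []
    state, _ = env.reset()
    done = False
    while not done:
        state_t = torch.as_tensor(state, dtype=torch.float32)
        action = actor(state_t).sample()
        state, reward, terminated, truncated, _ = env.step(action.item())
        done = terminated or truncated
        states.append(state_t)
        actions.append(action)
        rewards.append(reward)

    states_t = torch.stack(states)
    actions_t = torch.stack(actions)
    returns = discounted_returns(rewards, gamma)       # G_t for every visited state

    # 2) Critic update: regress V(s_t) towards the observed discounted return G_t
    value_loss = nn.functional.mse_loss(critic(states_t), returns)
    critic_opt.zero_grad()
    value_loss.backward()
    critic_opt.step()

    # 3) Actor update: policy gradient with the critic's value as the baseline
    advantages = returns - critic(states_t).detach()   # do not backprop through the baseline
    log_probs = actor(states_t).log_prob(actions_t)
    policy_loss = -(log_probs * advantages).mean()
    actor_opt.zero_grad()
    policy_loss.backward()
    actor_opt.step()
```

In words: play an episode with the current actor, fit the critic to the observed returns, then push the actor towards actions whose return beat the critic's baseline.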
Combined Actor Critic Model
Above we trained two separate neural networks, i.e. the Policy Network (also called the ACTOR) and the Value Network (also called the CRITIC). Later, researchers thought that instead of training two separate networks we could combine both of them into one, since this would reduce duplicate computation and could also improve the accuracy of the system thanks to the shared weights! Hence the new combined network looks as follows.
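Here is a minimal sketch of what such a combined network could look like in PyTorch: one shared trunk feeding two heads, a policy head for the actor and a value head for the critic. Layer sizes and names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ActorCritic(nn.Module):
    """Single network with a shared trunk and separate actor / critic heads."""
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.shared = nn.Sequential(                       # features reused by both heads
            nn.Linear(state_dim, hidden), nn.ReLU(),
        )
        self.policy_head = nn.Linear(hidden, n_actions)    # actor: action logits
        self.value_head = nn.Linear(hidden, 1)             # critic: V(s)

    def forward(self, state):
        features = self.shared(state)
        dist = torch.distributions.Categorical(logits=self.policy_head(features))
        value = self.value_head(features).squeeze(-1)
        return dist, value
```

With shared weights there is typically a single optimizer, and the actor and critic losses from before are simply summed (often with a small entropy bonus) before one backward pass.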