
Exploring and Exploiting the Racetrack
Author(s): Denny Loevlie
Originally published on Towards AI.
Solving Sutton and Barto's racetrack problem using reinforcement learning.
This post covers a solution and extension to the racetrack problem from Chapter 5 of Reinforcement Learning by Sutton and Barto. If you would like to read the problem and attempt it yourself, you can find it in the free online version of the book here. All the code needed to replicate the results in this post can be found at this GitHub repository: https://github.com/loevlie/Reinforcement_Learning_Tufts/tree/main/RaceTrack_Monte_Carlo.
Monte Carlo (MC) control methods are computationally expensive because they rely on extensive sampling. However, unlike dynamic programming (DP) methods, MC does not assume the agent has perfect knowledge of the environment's dynamics, making it more flexible in uncertain or complex scenarios. With MC methods, the agent finishes an entire episode before updating the policy. This is advantageous from a theoretical point of view because the return (the sum of future discounted rewards) following each state-action pair can be computed exactly from the rewards actually observed during that episode, giving an unbiased sample of the value being estimated.
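To make this concrete, here is a minimal sketch of an every-visit Monte Carlo update applied to one completed episode. The names Q, returns_count, and gamma are illustrative placeholders and are not taken from the linked repository:

```python
from collections import defaultdict

# Illustrative globals (not from the repo):
gamma = 1.0                       # discount factor; 1.0 treats the task as undiscounted
Q = defaultdict(float)            # action-value estimates, keyed by (state, action)
returns_count = defaultdict(int)  # number of returns averaged into each estimate

def update_from_episode(episode):
    """episode: list of (state, action, reward) tuples, in the order visited."""
    G = 0.0
    # Walk the episode backwards so the return G accumulates in a single pass.
    for state, action, reward in reversed(episode):
        G = gamma * G + reward                   # return following this step
        returns_count[(state, action)] += 1
        n = returns_count[(state, action)]
        Q[(state, action)] += (G - Q[(state, action)]) / n   # incremental mean
```

Because the update only needs the recorded (state, action, reward) sequence, no model of the environment's transition probabilities is required.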
The racetrack problem from Reinforcement Learning by Sutton and Barto motivates getting to the finish line by providing a constant reward of -1 at every step of the episode and sending the agent back to the start any time it runs off the track.
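For reference, a simplified and hypothetical environment step function with this reward structure might look like the sketch below; track, START_CELLS, and FINISH_CELLS are assumed placeholders rather than the representation used in the repository:

```python
import random

def step(position, velocity, acceleration, track, START_CELLS, FINISH_CELLS):
    """One environment transition: every step costs -1, and leaving the
    track sends the car back to a random start cell with zero velocity."""
    # Update velocity components, keeping each in the range [0, 4].
    vx = min(max(velocity[0] + acceleration[0], 0), 4)
    vy = min(max(velocity[1] + acceleration[1], 0), 4)
    new_position = (position[0] + vx, position[1] + vy)

    if new_position in FINISH_CELLS:              # crossed the finish line
        return new_position, (vx, vy), -1, True
    if new_position not in track:                 # ran off the track
        return random.choice(START_CELLS), (0, 0), -1, False
    return new_position, (vx, vy), -1, False      # ordinary step
```

Since every step is penalized equally, the only way for the agent to improve its return is to reach the finish line in fewer steps, which is what drives it to learn fast racing lines rather than merely safe ones.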