Reinforcement Learning: Introducing Deep Q* Networks — Part 6
Author(s): Tan Pengshi Alvin
Originally published on Towards AI.
An adjusted framework combining Deep Q-Networks with a trainable exploration heuristic and supervision
You may have heard of Project Q*, an idea leaked from OpenAI in 2023 that is rumoured to represent a major breakthrough in the research toward Artificial General Intelligence (AGI). While nobody knows what the project entails, I stumbled upon an idea inspired by the name ‘Q-star’: combining my previous knowledge of Q-Learning with my current foray into search algorithms, in particular the A* Search algorithm.
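(A brief aside for readers unfamiliar with A* Search: it repeatedly expands the frontier node with the lowest estimated total cost, f(n) = g(n) + h(n), where g(n) is the cost accumulated from the start to node n and h(n) is a heuristic estimate of the remaining cost from n to the goal. It is this idea of guiding search with a heuristic estimate that loosely connects to value estimation in Q-Learning.)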
While I do not claim to have understood the meaning behind Project Q* (in fact, far from it), this article reports a new model, which I will henceforth call Deep Q* Networks, that demonstrates a significant improvement in efficiency over the vanilla Deep Q-Network widely used in the field of Reinforcement Learning. This article is a continuation (Part 6) of this series of explorations in Reinforcement Learning from scratch; introductions to Q-Learning and Deep Q-Networks can be found in the previous articles in the series:
Introducing the Temporal Difference family of iterative techniques to solve the Markov Decision Process (pub.towardsai.net)
Reinforcement Learning with continuous state spaces and gradient descent techniques (pub.towardsai.net)
Of note, the Deep Q-Network applies the epsilon-greedy approach…
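As a refresher on that baseline, below is a minimal sketch of the standard epsilon-greedy action selection used by the vanilla Deep Q-Network. The function name and the toy Q-values are illustrative only; this shows the standard rule, not the Deep Q* modification introduced in this article.

```python
import numpy as np

rng = np.random.default_rng(0)

def epsilon_greedy_action(q_values: np.ndarray, epsilon: float) -> int:
    """With probability epsilon take a random action, otherwise act greedily."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))  # explore: uniform random action
    return int(np.argmax(q_values))              # exploit: best current Q-estimate

# Example: Q-estimates for 4 actions in some state
q = np.array([0.1, 0.5, -0.2, 0.3])
print(epsilon_greedy_action(q, epsilon=0.1))  # usually action 1, occasionally random
```

A small epsilon mostly exploits the learned Q-values while still occasionally exploring; annealing epsilon from a high value to a low one over the course of training is a common refinement.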