Modern Reinforcement Learning: Actor-Critic Methods
How to Implement Cutting-Edge Artificial Intelligence Research Papers in the OpenAI Gym Using the PyTorch Framework
What you’ll learn
- How to code policy gradient methods in PyTorch
- How to code Deep Deterministic Policy Gradients (DDPG) in PyTorch
- How to code Twin Delayed Deep Deterministic Policy Gradients (TD3) in PyTorch
- How to code actor critic algorithms in PyTorch
- How to implement cutting edge artificial intelligence research papers in Python
Requirements
- Understanding of college-level calculus
- Prior courses in reinforcement learning
- Ability to code deep neural networks independently
In this advanced course on deep reinforcement learning, you will learn how to implement policy gradient, actor-critic, deep deterministic policy gradient (DDPG), and twin delayed deep deterministic policy gradient (TD3) algorithms in a variety of challenging environments from the OpenAI Gym.
The course begins with a practical review of the fundamentals of reinforcement learning, including topics such as:
- The Bellman Equation
- Markov Decision Processes
- Monte Carlo Prediction
- Monte Carlo Control
- Temporal Difference Prediction TD(0)
- Temporal Difference Control with Q Learning
We then move straight into coding our first agent: a blackjack-playing artificial intelligence. From there, we progress to teaching an agent to balance the cart pole using Q-learning.
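The temporal-difference control method behind that cart pole agent is the tabular Q-learning update. As a minimal sketch (the table sizes and hyperparameters here are illustrative placeholders, not the course's actual code):

```python
# Sketch of the tabular Q-learning update used in temporal difference control:
# Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
# State/action indices and alpha/gamma values are illustrative assumptions.
import numpy as np

def q_learning_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.99):
    """Apply one Q-learning step to the table Q in place and return it."""
    td_target = reward + gamma * np.max(Q[next_state])   # bootstrap off the greedy action
    Q[state, action] += alpha * (td_target - Q[state, action])
    return Q

# Toy usage: a 2-state, 2-action table, one rewarding transition.
Q = np.zeros((2, 2))
Q = q_learning_update(Q, state=0, action=1, reward=1.0, next_state=1)
```

With an all-zero table, a reward of 1.0 moves `Q[0, 1]` to `alpha * 1.0 = 0.1`, which is the one-step TD correction in its simplest form.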
After mastering the fundamentals, the pace quickens, and we move straight into an introduction to policy gradient methods. We cover the REINFORCE algorithm and use it to teach an artificial intelligence to land on the moon in the lunar lander environment from the OpenAI Gym. Next, we progress to coding the one-step actor-critic algorithm to beat the lunar lander again.
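The core of REINFORCE is weighting the log-probability of each chosen action by the return that followed it. A hedged PyTorch sketch, where the network shape (8 inputs, 4 discrete actions, lunar-lander-like) and the learning rate are assumptions rather than the course's actual implementation:

```python
# Sketch of the REINFORCE policy-gradient step: maximize E[G_t * log pi(a_t|s_t)]
# by minimizing its negative. Network sizes and lr are illustrative assumptions.
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 4))
optimizer = torch.optim.Adam(policy.parameters(), lr=5e-4)

def reinforce_step(states, actions, returns):
    """One gradient step on a batch of (state, action, return) samples."""
    logits = policy(states)
    log_probs = torch.distributions.Categorical(logits=logits).log_prob(actions)
    loss = -(returns * log_probs).mean()   # negative so gradient descent ascends the objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage with dummy data: a batch of 3 transitions.
states = torch.randn(3, 8)
actions = torch.tensor([0, 2, 1])
returns = torch.tensor([1.0, 0.5, -0.2])
loss_value = reinforce_step(states, actions, returns)
```

The one-step actor-critic variant covered next replaces the Monte Carlo return with a bootstrapped TD error from a learned value function, trading variance for bias.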
With the fundamentals out of the way, we move on to our harder projects: implementing deep reinforcement learning research papers. We will start with Deep Deterministic Policy Gradients (DDPG), an algorithm for teaching robots to excel at a variety of continuous control tasks.
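One detail from the DDPG paper worth previewing is the "soft" target-network update, which slowly tracks the learned networks to stabilize training. A minimal sketch, assuming placeholder network sizes and a small tau (the paper uses values on the order of 0.001; the exact value here is illustrative):

```python
# Sketch of DDPG's soft target update: theta_target <- tau*theta + (1-tau)*theta_target.
# Network shapes and tau are illustrative assumptions, not the course's code.
import torch
import torch.nn as nn

def soft_update(target, source, tau=0.005):
    """Polyak-average source parameters into the target network in place."""
    with torch.no_grad():
        for t_param, s_param in zip(target.parameters(), source.parameters()):
            t_param.mul_(1.0 - tau).add_(tau * s_param)

critic = nn.Linear(4, 1)          # stand-in for a full critic network
target_critic = nn.Linear(4, 1)
target_critic.load_state_dict(critic.state_dict())  # start targets equal
soft_update(target_critic, critic)
```

Because the target starts as an exact copy, this first update leaves it unchanged; during training, it keeps the target lagging smoothly behind the online network.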
Finally, we implement a state-of-the-art artificial intelligence algorithm: Twin Delayed Deep Deterministic Policy Gradients (TD3). This algorithm sets a new benchmark for performance in robotic control tasks, and we will demonstrate world-class performance in the Bipedal Walker environment from the OpenAI Gym.
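TD3's central trick is the clipped double-Q target: take the minimum of two critics' estimates and smooth the target action with clipped noise, which counters the overestimation bias of a single critic. A hedged sketch, where the state/action dimensions and noise scales are illustrative assumptions:

```python
# Sketch of TD3's clipped double-Q target with target policy smoothing.
# Critic shapes, noise_std, and noise_clip are illustrative assumptions.
import torch
import torch.nn as nn

q1 = nn.Linear(6, 1)   # critic 1: maps concatenated (state, action) to a value
q2 = nn.Linear(6, 1)   # critic 2: independent estimate of the same quantity

def td3_target(reward, next_state, next_action, done, gamma=0.99,
               noise_std=0.2, noise_clip=0.5):
    """Compute the bootstrap target y = r + gamma * (1 - done) * min(Q1, Q2)."""
    with torch.no_grad():
        noise = (torch.randn_like(next_action) * noise_std).clamp(-noise_clip, noise_clip)
        smoothed = (next_action + noise).clamp(-1.0, 1.0)   # target policy smoothing
        sa = torch.cat([next_state, smoothed], dim=1)
        q_min = torch.min(q1(sa), q2(sa))                   # clipped double-Q
        return reward + gamma * (1.0 - done) * q_min

# Dummy batch of 2 transitions: 4-dim states, 2-dim actions.
y = td3_target(reward=torch.ones(2, 1), next_state=torch.randn(2, 4),
               next_action=torch.zeros(2, 2), done=torch.zeros(2, 1))
```

The "delayed" part of TD3, updating the actor less frequently than the critics, is the third of the paper's three modifications and is covered alongside this target in the implementation lectures.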
By the end of the course, you will know the answers to the following fundamental questions in Actor-Critic methods:
- Why should we bother with actor critic methods when deep Q learning is so successful?
- Can the advances in deep Q learning be used in other fields of reinforcement learning?
- How can we solve the explore-exploit dilemma with a deterministic policy?
- How does overestimation bias arise in actor-critic methods?
- How do we deal with the inherent errors in deep neural networks?
This course is for the highly motivated and advanced student. To succeed, you must have prior course work in all the following topics:
- College level calculus
- Reinforcement learning
- Deep learning
The pace of the course is brisk, but the payoff is that you will come out knowing how to read cutting edge research papers and turn them into functional code as quickly as possible.
Who this course is for:
- Advanced students of artificial intelligence who want to implement state of the art academic research papers