Introduction


In Unit 6, we learned about Advantage Actor Critic (A2C), a hybrid architecture combining value-based and policy-based methods that helps to stabilize the training by reducing the variance with an Actor that controls how our agent behaves (policy-based method) and a Critic that measures how good the taken action is (value-based method).
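As a quick refresher, here is a minimal sketch of such a hybrid network in PyTorch (the class name, layer sizes, and shared body are illustrative assumptions, not the course's exact implementation):

```python
import torch
import torch.nn as nn


class ActorCritic(nn.Module):
    """Minimal actor-critic sketch: one shared body, two heads (illustrative)."""

    def __init__(self, obs_dim: int, n_actions: int, hidden_size: int = 64):
        super().__init__()
        # Shared feature extractor (assumed architecture)
        self.body = nn.Sequential(nn.Linear(obs_dim, hidden_size), nn.Tanh())
        # Actor head: action logits that define the policy (policy-based part)
        self.actor = nn.Linear(hidden_size, n_actions)
        # Critic head: scalar state-value estimate (value-based part)
        self.critic = nn.Linear(hidden_size, 1)

    def forward(self, obs: torch.Tensor):
        features = self.body(obs)
        return self.actor(features), self.critic(features)
```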

Today we’ll learn about Proximal Policy Optimization (PPO), an architecture that improves our agent’s training stability by avoiding policy updates that are too large. To do that, we use a ratio that indicates the difference between our current and old policy, and we clip this ratio to a specific range $[1 - \epsilon, 1 + \epsilon]$. For example, with $\epsilon = 0.2$, the ratio is kept between 0.8 and 1.2.

Doing this ensures that our policy update will not be too large and that training is more stable.
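To make the idea concrete, here is a minimal sketch of the clipped surrogate loss in PyTorch (the function name, tensor shapes, and default $\epsilon$ are illustrative assumptions; it assumes the log-probabilities and advantages are computed elsewhere):

```python
import torch


def ppo_clipped_loss(new_log_probs: torch.Tensor,
                     old_log_probs: torch.Tensor,
                     advantages: torch.Tensor,
                     epsilon: float = 0.2) -> torch.Tensor:
    """Clipped surrogate objective sketch (inputs are per-step tensors of shape [batch])."""
    # Ratio between the current and the old policy: r_t = pi_theta(a_t|s_t) / pi_theta_old(a_t|s_t)
    ratio = torch.exp(new_log_probs - old_log_probs)

    # Unclipped surrogate vs. surrogate with the ratio clipped to [1 - epsilon, 1 + epsilon]
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - epsilon, 1.0 + epsilon) * advantages

    # PPO takes the pessimistic (minimum) term; we negate it to get a loss to minimize
    return -torch.min(unclipped, clipped).mean()
```

Because the pessimistic term is the one that is optimized, updates that would push the ratio outside the clipped range stop receiving gradient, which is exactly what keeps the policy update small.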

This Unit is in two parts:

Environments
These are the environments you're going to use to train your agents: VizDoom and GodotRL.

Sounds exciting? Let’s get started! 🚀