Let’s Play the Chrome Dino Game with Reinforcement Learning! 🎉
Reinforcement learning has been one of my favorite areas of interest for a while. This is a project I built some time ago while learning the fundamentals of RL.
I believe the OpenAI Gym library offers an excellent way to standardize environments for RL agents. While there are many ready-to-use Gym environments available for learning and testing, you don’t fully understand how they work until you build your own custom Gym environment ⚙️
Creating your own environment helps you grasp the core concepts behind RL.
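To make the idea concrete, here is a minimal, dependency-free sketch of the interface a custom Gym environment implements (`reset`, `step`, observations, rewards, a done flag). The class name, action set, and fake game state are all illustrative — the real project wires these methods to the browser instead:

```python
import random

class DinoEnvSketch:
    """Illustrative skeleton of a Gym-style environment.

    A real gym.Env subclass would also declare observation_space and
    action_space; here the game state is faked so the sketch runs anywhere.
    """
    ACTIONS = ("noop", "jump", "duck")  # hypothetical action set

    def reset(self):
        # Start a new episode and return the initial observation.
        self.score = 0
        return self._observe()

    def step(self, action):
        assert 0 <= action < len(self.ACTIONS)
        # A real env would send the action to the game and grab a new frame.
        self.score += 1
        reward = 1.0                         # reward the agent for surviving
        done = random.random() < 0.05        # stand-in for a game-over signal
        return self._observe(), reward, done, {"score": self.score}

    def _observe(self):
        # Stand-in for a preprocessed screenshot of the game.
        return [0.0] * 8
```

The agent loop then just alternates `env.step(action)` calls until `done` is True — exactly the contract Stable Baselines3 expects.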
On the other hand, Stable Baselines3 offers PyTorch implementations of popular RL algorithms like PPO and DQN. The best part is that Gym environments are fully compatible with Stable Baselines3, making it easy to benchmark different models and compare their performance.
I'm open-sourcing this project as a helpful starting point for anyone interested in learning how to:
* Build a custom RL environment using the OpenAI Gym library
* Train RL agents using Stable Baselines3
* Use the Chrome DevTools Protocol for direct communication between a Python script and the Chrome browser. This is especially useful if you're interested in web scraping or browser automation (another one of my all-time favorite topics 🤩 )
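On the last point: CDP is just JSON messages over a WebSocket (Chrome exposes the endpoint when started with `--remote-debugging-port=9222`). A small sketch of building two such messages — the method names are real CDP methods, and `Runner.instance_.crashed` is the Dino game's own global, but the helper function is mine:

```python
import json

def cdp_message(msg_id, method, params=None):
    """Build a Chrome DevTools Protocol message as a JSON string.

    Real usage sends this over the browser's WebSocket endpoint, listed
    at http://localhost:9222/json when remote debugging is enabled.
    """
    return json.dumps({"id": msg_id, "method": method, "params": params or {}})

# Press the spacebar in the page -- the Dino game's "jump" input.
jump = cdp_message(1, "Input.dispatchKeyEvent",
                   {"type": "keyDown", "key": " ", "code": "Space"})

# Evaluate JavaScript in the page, e.g. to read the game-over flag.
state = cdp_message(2, "Runtime.evaluate",
                    {"expression": "Runner.instance_.crashed"})
```

Each response comes back with the same `id`, so the script can match replies to requests — no Selenium or driver layer needed.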
Also, this project uses image preprocessing with Sobel edge detection, a basic feature extraction technique commonly used in image processing pipelines and as input preprocessing for deep neural networks.
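For reference, Sobel edge detection is just two 3x3 convolutions (horizontal and vertical gradients) combined into a magnitude. A self-contained NumPy sketch, not the project's exact preprocessing code:

```python
import numpy as np

def sobel_edges(img):
    """Gradient magnitude via the two 3x3 Sobel kernels (valid convolution)."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)  # horizontal gradient
    ky = kx.T                                  # vertical gradient
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros_like(gx)
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)

# A vertical step edge: dark on the left, bright on the right.
img = np.zeros((5, 6))
img[:, 3:] = 1.0
edges = sobel_edges(img)  # strong response only along the step
```

The payoff for RL is that the edge map throws away flat background pixels, so the agent's observations highlight exactly what matters: the dino and the obstacles.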
I've also included pre-trained model checkpoints saved every 100,000 timesteps, up to 1 million timesteps. If you'd like to test the project without training from scratch, you can simply load and use one of these pre-trained models.
I hope this project helps someone learn something new and exciting!
shanaka95/AIDino