This model was trained with Proximal Policy Optimization (PPO) using a custom multi-layer LSTM policy.
- **Training Data:** Custom sequence dataset
- **Algorithm:** Proximal Policy Optimization (PPO) with a custom LSTM policy
- **Library:** Stable-Baselines3
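
Below is a minimal sketch of how such a setup might look. Note that recurrent PPO is not part of core Stable-Baselines3; it is provided by the companion package sb3-contrib as `RecurrentPPO`. The environment, LSTM sizes, and training budget below are illustrative assumptions, not the exact configuration used for this model.

```python
import gymnasium as gym
from sb3_contrib import RecurrentPPO

# Placeholder environment; the custom sequence dataset used for this
# model is not public, so a standard env stands in for illustration.
env = gym.make("CartPole-v1")

# "MlpLstmPolicy" is sb3-contrib's built-in recurrent actor-critic policy.
# The LSTM size and layer count are assumptions matching the card's
# description of a multi-layer LSTM.
model = RecurrentPPO(
    "MlpLstmPolicy",
    env,
    policy_kwargs=dict(
        lstm_hidden_size=128,  # assumed hidden size
        n_lstm_layers=2,       # assumed layer count ("multi-layer")
    ),
    verbose=1,
)

model.learn(total_timesteps=100_000)  # assumed training budget
model.save("ppo_lstm_model")
```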