mnaylor/ppo-LunarLander-simple

Tags: Reinforcement Learning · stable-baselines3 · LunarLander-v2 · deep-reinforcement-learning · Eval Results
ppo-LunarLander-simple (508 kB, 1 contributor, 2 commits)
Latest commit: ba9efd9 "Commit with first PPO model" by mnaylor, almost 3 years ago
  • simple_ppo_lunar_lander
    Commit with first PPO model almost 3 years ago
  • .gitattributes
    1.48 kB
    initial commit almost 3 years ago
  • README.md
    928 Bytes
    Commit with first PPO model almost 3 years ago
  • config.json
    14.3 kB
    Commit with first PPO model almost 3 years ago
  • replay.mp4
    198 kB
    Commit with first PPO model almost 3 years ago
  • results.json
    164 Bytes
    Commit with first PPO model almost 3 years ago
  • simple_ppo_lunar_lander.zip (see the loading sketch below)
    147 kB
    Pickle imports: no problematic imports detected
    Commit with first PPO model almost 3 years ago
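
The simple_ppo_lunar_lander.zip file above is, per the repo tags, a stable-baselines3 PPO checkpoint for LunarLander-v2. Below is a minimal sketch of how such a checkpoint could be downloaded and evaluated. It assumes the huggingface_sb3 helper, stable-baselines3, and a Gym install with Box2D are available, and that the zip is a standard SB3 save file; none of this is stated in the listing itself.

```python
# Minimal sketch (not from the model card): download this repo's PPO checkpoint
# and roll it out on LunarLander-v2. Assumes huggingface_sb3, stable-baselines3,
# and gym[box2d] are installed; repo and file names come from the listing above.
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Fetch simple_ppo_lunar_lander.zip from the Hub and load it as a PPO model.
checkpoint_path = load_from_hub(
    repo_id="mnaylor/ppo-LunarLander-simple",
    filename="simple_ppo_lunar_lander.zip",
)
model = PPO.load(checkpoint_path)

# Evaluate the policy for a few episodes and report the mean episodic return.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```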