# SAC + HER Agent for PandaPickAndPlace-v3
This repository contains a **Soft Actor-Critic (SAC)** agent trained with **Hindsight Experience Replay (HER)** to solve the [PandaPickAndPlace-v3](https://panda-gym.readthedocs.io/en/latest/environments/pickandplace.html) environment from [Panda-Gym](https://github.com/qgallouedec/panda-gym).
The training was done using [Stable-Baselines3](https://stable-baselines3.readthedocs.io/) and uploaded to the Hugging Face Hub.
---
## Model Details
- **Algorithm:** SAC (Soft Actor-Critic) + HER
- **Environment:** `PandaPickAndPlace-v3`
- **Training Steps:** 800k
- **Library:** [Stable-Baselines3](https://stable-baselines3.readthedocs.io/)
- **Replay Buffer:** HER with `future` strategy
- **Device:** Trained on GPU (`cuda`)
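The `future` goal-selection strategy listed above can be illustrated without any RL dependencies: for each stored transition, HER adds extra copies whose desired goal is replaced by the achieved goal of a later step in the same episode, so even failed episodes yield transitions with reward signal. A minimal pure-Python sketch (the `her_future_relabel` helper and transition layout are illustrative simplifications, not SB3's internal replay-buffer code; sampling here includes the current step so the final transition still has a candidate goal):

```python
import random

def her_future_relabel(episode, k=4, rng=random):
    """For each transition, sample up to k goals from achieved goals
    observed at the same or a later step ('future' strategy) and
    return the relabeled extra transitions."""
    relabeled = []
    for t, (obs, action, achieved_goal, desired_goal) in enumerate(episode):
        future_steps = list(range(t, len(episode)))
        for _ in range(k):
            j = rng.choice(future_steps)
            new_goal = episode[j][2]  # achieved goal of a (possibly) later step
            # Sparse reward: 0 if the achieved goal matches the new goal, else -1
            reward = 0.0 if achieved_goal == new_goal else -1.0
            relabeled.append((obs, action, new_goal, reward))
    return relabeled

# Toy episode: (obs, action, achieved_goal, desired_goal)
episode = [((0,), 0, "g0", "target"),
           ((1,), 1, "g1", "target"),
           ((2,), 2, "g2", "target")]
extra = her_future_relabel(episode, k=2)
print(len(extra))  # k=2 relabeled transitions per original transition -> 6
```

In SB3 this mechanism is enabled by passing `replay_buffer_class=HerReplayBuffer` with `goal_selection_strategy="future"` to the `SAC` constructor.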
---
## Evaluation Results
The agent was evaluated for **10 episodes**:
Mean reward = XXX.XX ± YYY.YY

*Placeholder values; replace XXX.XX and YYY.YY with the actual evaluation results.*
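The reported numbers follow the convention of SB3's `evaluate_policy`, which returns the mean and standard deviation of the total reward across evaluation episodes. Given per-episode returns, they reduce to (the return values below are hypothetical, for illustration only):

```python
import statistics

# Hypothetical total returns from 10 evaluation episodes
episode_returns = [-8.0, -12.0, -6.0, -15.0, -9.0, -7.0, -11.0, -10.0, -13.0, -9.0]

mean_reward = statistics.mean(episode_returns)
# evaluate_policy reports np.std, i.e. the population standard deviation
std_reward = statistics.pstdev(episode_returns)
print(f"Mean reward = {mean_reward:.2f} ± {std_reward:.2f}")
```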
---
## Usage
You can directly load this trained agent from the Hugging Face Hub and run it inside the `PandaPickAndPlace-v3` environment.
```python
import gymnasium as gym
from stable_baselines3 import SAC
from huggingface_sb3 import load_from_hub
# Download model from Hugging Face Hub
repo_id = "mustafataha5/sac-her-PandaPickAndPlace-v3-800k" # your repo
filename = "sac_her_checkpoint_800000_steps.zip" # uploaded file
# This will download the model from HF Hub
model_path = load_from_hub(repo_id, filename)
model = SAC.load(model_path)
# Create the environment
env = gym.make("PandaPickAndPlace-v3", render_mode="human")
# Run one episode
obs, _ = env.reset()
terminated, truncated = False, False
while not (terminated or truncated):
    action, _ = model.predict(obs, deterministic=True)
    # With render_mode="human", the window updates automatically on each step
    obs, reward, terminated, truncated, info = env.step(action)
env.close()
```
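To evaluate beyond a single rollout, a success rate can be tallied from the `is_success` flag that panda-gym tasks report in `info` (assuming that key is present; `success_rate` below is an illustrative helper, not part of any library):

```python
def success_rate(final_infos):
    """Fraction of episodes whose final `info` dict reports success.
    `final_infos` is the list of info dicts observed at each episode's end."""
    if not final_infos:
        return 0.0
    return sum(bool(info.get("is_success", False)) for info in final_infos) / len(final_infos)

# Example with hypothetical episode outcomes
infos = [{"is_success": True}, {"is_success": False},
         {"is_success": True}, {"is_success": True}]
print(success_rate(infos))  # 0.75
```

Collect the last `info` of each episode in the loop above, then call `success_rate` on the resulting list.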
---
## Files inside this repo
- `sac_her_checkpoint_800000_steps.zip` – The trained SAC + HER model checkpoint
- `README.md` – This file
---
## Acknowledgements
- [Stable-Baselines3](https://stable-baselines3.readthedocs.io/)
- [Panda-Gym](https://github.com/qgallouedec/panda-gym)
- [Hugging Face Hub](https://huggingface.co/)
---
## Maintainer
Mustafa Taha