---
library_name: stable-baselines3
tags:
- PandaPickAndPlace-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TQC
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: PandaPickAndPlace-v3
      type: PandaPickAndPlace-v3
    metrics:
    - type: mean_reward
      value: -6.30 +/- 1.79
      name: mean_reward
      verified: false
---
# **TQC** Agent playing **PandaPickAndPlace-v3**
This is a trained model of a **TQC** agent playing **PandaPickAndPlace-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
The snippet below builds the environment, configures TQC with hindsight experience replay (HER), and logs training metrics to Weights & Biases. Note that `TQC` ships in [sb3-contrib](https://github.com/Stable-Baselines-Team/stable-baselines3-contrib) rather than in stable-baselines3 itself; the `wandb.init` call is filled in here so the snippet is self-contained.
```python
import gymnasium as gym
import panda_gym  # noqa: F401 -- importing registers the Panda* envs with gymnasium
import wandb
from sb3_contrib import TQC
from stable_baselines3 import HerReplayBuffer
from wandb.integration.sb3 import WandbCallback

# Create the goal-conditioned pick-and-place environment
env_id = "PandaPickAndPlace-v3"
env = gym.make(env_id)

# Start a W&B run; sync_tensorboard mirrors the SB3 TensorBoard logs
wandb_run = wandb.init(project=env_id, sync_tensorboard=True)

# TQC with hindsight experience replay (HER)
model = TQC(
    policy="MultiInputPolicy",
    env=env,
    batch_size=2048,
    gamma=0.95,
    learning_rate=1e-4,
    train_freq=64,
    gradient_steps=64,
    tau=0.05,
    replay_buffer_class=HerReplayBuffer,
    replay_buffer_kwargs=dict(
        n_sampled_goal=4,
        goal_selection_strategy="future",
    ),
    policy_kwargs=dict(
        net_arch=[512, 512, 512],
        n_critics=2,
    ),
    tensorboard_log=f"runs/{wandb_run.id}",
)

# Train for 1M timesteps, streaming metrics to W&B
model.learn(1_000_000, progress_bar=True, callback=WandbCallback(verbose=2))
wandb_run.finish()
```
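After training, the policy can be saved, reloaded, and sanity-checked with stable-baselines3's `evaluate_policy` helper. A minimal sketch (the `tqc-PandaPickAndPlace-v3` filename is illustrative, not the artifact name from this run):
```python
from stable_baselines3.common.evaluation import evaluate_policy

# Round-trip the trained policy to disk (hypothetical filename)
model.save("tqc-PandaPickAndPlace-v3")
model = TQC.load("tqc-PandaPickAndPlace-v3", env=env)

# Average episodic return over 10 evaluation episodes
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```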
Weights & Biases charts: https://wandb.ai/patonw/PandaPickAndPlace-v3/runs/w7lzlwnx/workspace?workspace=user-patonw