---
license: mit
library_name: marl-ppo-suite
tags:
  - reinforcement-learning
  - starcraft-mac
  - smacv2
  - mappo
  - zerg_10_vs_10
  - smacv2_zerg_10_vs_10
model-index:
  - name: MAPPO on smacv2_zerg_10_vs_10
    results:
      - task:
          type: reinforcement-learning
          name: StarCraft Multi-Agent Challenge v2
        dataset:
          name: zerg_10_vs_10
          type: smacv2
        metrics:
          - name: win-rate
            type: win_rate
            value: 0.484
          - name: mean-reward
            type: mean_reward
            value: 15.33
          - name: mean-ep-length
            type: mean_episode_length
            value: 29.2
---

# MAPPO on smacv2_zerg_10_vs_10

*10 M environment steps · 21.49 h wall-clock · seed 1*

This is a trained model of a **MAPPO** agent playing `smacv2_zerg_10_vs_10`.
The model was produced with the open-source [marl-ppo-suite](https://github.com/legalaspro/marl-ppo-suite) training code.

## Usage – quick evaluation / replay

```bash
# 1. install the codebase (directly from GitHub)
pip install "marl-ppo-suite @ git+https://github.com/legalaspro/marl-ppo-suite"

# 2. get the weights & config from HF
wget https://huggingface.co/<repo-id>/resolve/main/final-torch.model
wget https://huggingface.co/<repo-id>/resolve/main/config.json

# 3-a. generate a StarCraft II replay file (1 episode, saved to the StarCraft replay folder)
marl-train --mode render --model final-torch.model --config config.json --render_episodes 1

# 3-b. additionally render a video from the captured RGB frames
marl-train --mode render --model final-torch.model --config config.json --render_episodes 1 --render_mode rgb_array
```
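
To sanity-check the downloaded weights outside the CLI, you can open the checkpoint directly in Python. This is a minimal sketch, assuming `final-torch.model` is a standard `torch.save` checkpoint; the exact key layout depends on the marl-ppo-suite version used for training.

```python
# Inspect the downloaded checkpoint (assumed to be a torch.save artifact).
import torch

# weights_only=False lets recent PyTorch unpickle a full checkpoint dict;
# only use it for checkpoints from a source you trust.
checkpoint = torch.load("final-torch.model", map_location="cpu", weights_only=False)

# If the checkpoint is a dict, its top-level keys typically name the
# actor/critic state dicts and any optimizer state saved with them.
if isinstance(checkpoint, dict):
    print(list(checkpoint.keys()))
else:
    print(type(checkpoint))
```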

## Files

- `final-torch.model` – PyTorch checkpoint
- `replay.mp4` – gameplay of the final policy
- `config.json` – training config
- `tensorboard/` – full logs
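
If you prefer a programmatic download over `wget`, the same files can be fetched with the official `huggingface_hub` client. A minimal sketch; `<repo-id>` is the same placeholder for this repository's id as in the commands above:

```python
# Fetch the checkpoint and training config via the Hugging Face Hub client.
from huggingface_hub import hf_hub_download

# "<repo-id>" is a placeholder for this repository's id (e.g. "user/model").
model_path = hf_hub_download(repo_id="<repo-id>", filename="final-torch.model")
config_path = hf_hub_download(repo_id="<repo-id>", filename="config.json")
print(model_path, config_path)
```

After downloading the `tensorboard/` directory, the training curves can be browsed locally with `tensorboard --logdir tensorboard/`.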

## Hyper-parameters

```json
{
  "clip_param": 0.05,
  "data_chunk_length": 10,
  "entropy_coef": 0.01,
  "fc_layers": 2,
  "gae_lambda": 0.95,
  "gamma": 0.99,
  "hidden_size": 64,
  "lr": 0.0005,
  "n_steps": 200,
  "num_mini_batch": 1,
  "ppo_epoch": 5,
  "reward_norm_type": "efficient",
  "seed": 1,
  "state_type": "AS",
  "use_reward_norm": true,
  "use_rnn": true,
  "use_value_norm": false,
  "value_norm_type": "welford"
}
```
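
These values are also shipped in `config.json`, so they can be reused programmatically, for example to re-launch training with identical settings. A minimal sketch, assuming `config.json` mirrors the hyper-parameter block above:

```python
# Load the training configuration that accompanies the checkpoint.
import json

with open("config.json") as f:
    config = json.load(f)

# Confirm a couple of the settings listed above, e.g. the PPO clipping
# range and whether a recurrent policy was used.
print(config["clip_param"], config["use_rnn"])
```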