An APPO model trained on the doom_deathmatch_bots environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
Code
GitHub repos (give them a star if you find them useful):
- https://github.com/hishamcse/Advanced-DRL-Renegades-Game-Bots
- https://github.com/hishamcse/DRL-Renegades-Game-Bots
- https://github.com/hishamcse/Robo-Chess
Kaggle Notebook:
Downloading the model
After installing Sample-Factory, download the model with:
python -m sample_factory.huggingface.load_from_hub -r hishamcse/doom_deathmatch_bots
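If you want the checkpoint placed directly in the train_dir that the commands below expect, the loader also accepts a destination directory (a sketch using the -d option from Sample-Factory's Hugging Face integration docs; adjust the path to your setup):
python -m sample_factory.huggingface.load_from_hub -r hishamcse/doom_deathmatch_bots -d ./train_dir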
Using the model
To run the model after download, use the enjoy script corresponding to this environment (for VizDoom environments this is sf_examples.vizdoom.enjoy_vizdoom):
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_deathmatch_bots --train_dir=./train_dir --experiment=doom_deathmatch_bots
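For headless evaluation you can combine the enjoy script with a few optional flags; the sketch below (the episode count and video settings are illustrative, not required) runs ten episodes without an on-screen window and saves a replay video:
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_deathmatch_bots --train_dir=./train_dir --experiment=doom_deathmatch_bots --max_num_episodes=10 --no_render --save_video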
You can also upload models to the Hugging Face Hub using the same script with the --push_to_hub
flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
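As a sketch of the upload step (the target repository name below is just a placeholder; substitute your own Hub username and repo), pushing a model back to the Hub looks like:
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_deathmatch_bots --train_dir=./train_dir --experiment=doom_deathmatch_bots --max_num_episodes=10 --no_render --push_to_hub --hf_repository=<your_username>/doom_deathmatch_bots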
Training with this model
To continue training with this model, use the train script corresponding to this environment (for VizDoom environments this is sf_examples.vizdoom.train_vizdoom):
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_deathmatch_bots --train_dir=./train_dir --experiment=doom_deathmatch_bots --restart_behavior=resume --train_for_env_steps=10000000000
Note that you may have to adjust --train_for_env_steps to a suitably high number, as the experiment will resume from the step count at which it previously concluded.
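For example (the numbers here are purely illustrative), if the original run already reached roughly 10B environment steps, resuming with --train_for_env_steps=10000000000 would stop almost immediately; raising the budget continues training for additional steps:
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_deathmatch_bots --train_dir=./train_dir --experiment=doom_deathmatch_bots --restart_behavior=resume --train_for_env_steps=20000000000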
Evaluation results
- mean_reward on doom_deathmatch_bots (self-reported): 1.80 +/- 1.72