Zickl/llama32-1b-iterative-dpo

Tags: PEFT · Safetensors · English · dpo · iterative-dpo · self-rewarding · preference-learning · lora
Papers: arXiv:2401.10020 · arXiv:2305.18290
License: llama3.2
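The page itself carries no usage snippet. A minimal sketch of loading this LoRA adapter with `peft` and `transformers` is below; note that the base checkpoint is an assumption (the page does not name it), and `load_iterative_dpo_model` is a hypothetical helper name, not part of this repo.

```python
def load_iterative_dpo_model(base_id="meta-llama/Llama-3.2-1B-Instruct"):
    """Load the base model and apply the Zickl/llama32-1b-iterative-dpo adapter.

    base_id is an assumption: the card does not state which Llama 3.2 1B
    checkpoint the adapter was trained against.
    """
    # Imports are deferred so the helper can be defined without the
    # libraries installed; loading still requires peft and transformers.
    from peft import PeftModel
    from transformers import AutoModelForCausalLM, AutoTokenizer

    adapter_id = "Zickl/llama32-1b-iterative-dpo"
    tokenizer = AutoTokenizer.from_pretrained(adapter_id)
    base = AutoModelForCausalLM.from_pretrained(base_id)
    model = PeftModel.from_pretrained(base, adapter_id)  # merges in the 13.6 MB LoRA weights lazily
    return tokenizer, model
```

Calling `load_iterative_dpo_model()` downloads both the base model and the adapter from the Hub, so it needs network access (and, for gated Llama weights, an accepted license and auth token).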
Files and versions

llama32-1b-iterative-dpo: 30.9 MB, 1 contributor, 3 commits (branch main)
Latest commit 7524a37 (verified, 16 days ago) by Zickl: "Upload README.md with huggingface_hub"
.gitattributes             1.57 kB    Upload Iterative DPO model (2 iterations)   16 days ago
README.md                  3.23 kB    Upload README.md with huggingface_hub       16 days ago
adapter_config.json        1.01 kB    Upload Iterative DPO model (2 iterations)   16 days ago
adapter_model.safetensors  13.6 MB    Upload Iterative DPO model (2 iterations)   16 days ago
chat_template.jinja        3.83 kB    Upload Iterative DPO model (2 iterations)   16 days ago
special_tokens_map.json    325 Bytes  Upload Iterative DPO model (2 iterations)   16 days ago
tokenizer.json             17.2 MB    Upload Iterative DPO model (2 iterations)   16 days ago
tokenizer_config.json      50.6 kB    Upload Iterative DPO model (2 iterations)   16 days ago
training_args.bin          6.8 kB     Upload Iterative DPO model (2 iterations)   16 days ago

training_args.bin is a pickle file. Detected pickle imports (11):
transformers.trainer_utils.HubStrategy, trl.trainer.dpo_config.DPOConfig,
transformers.trainer_utils.IntervalStrategy, transformers.trainer_utils.SchedulerType,
transformers.training_args.OptimizerNames, trl.trainer.dpo_config.FDivergenceType,
accelerate.utils.dataclasses.DistributedType, transformers.trainer_utils.SaveStrategy,
transformers.trainer_pt_utils.AcceleratorConfig, torch.device, accelerate.state.PartialState
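Those pickle imports mean training_args.bin holds a serialized `DPOConfig` object rather than plain tensors, so recent PyTorch versions (which default `torch.load` to `weights_only=True`) will refuse to read it as-is. A hedged sketch of loading it anyway, assuming you trust the file's source; `load_training_args` is a hypothetical helper name:

```python
def load_training_args(path="training_args.bin"):
    """Deserialize the pickled TRL DPOConfig saved alongside the adapter.

    weights_only=False runs full unpickling, which can execute arbitrary
    code, so only use it on files from a source you trust.
    """
    # torch is imported lazily so the helper can be defined without it.
    import torch

    return torch.load(path, weights_only=False)
```

The safer default, when you only need the hyperparameters, is to read them from the repo's README or adapter_config.json instead of unpickling the training arguments.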