Quantization made by Richard Erkhov.
Phoenix_DPO_60B - GGUF
- Model creator: https://huggingface.co/cloudyu/
- Original model: https://huggingface.co/cloudyu/Phoenix_DPO_60B/
| Name | Quant method | Size |
| --- | --- | --- |
| Phoenix_DPO_60B.Q2_K.gguf | Q2_K | 20.86GB |
| Phoenix_DPO_60B.IQ3_XS.gguf | IQ3_XS | 23.26GB |
| Phoenix_DPO_60B.IQ3_S.gguf | IQ3_S | 24.56GB |
| Phoenix_DPO_60B.Q3_K_S.gguf | Q3_K_S | 24.51GB |
| Phoenix_DPO_60B.IQ3_M.gguf | IQ3_M | 25.2GB |
| Phoenix_DPO_60B.Q3_K.gguf | Q3_K | 27.23GB |
| Phoenix_DPO_60B.Q3_K_M.gguf | Q3_K_M | 27.23GB |
| Phoenix_DPO_60B.Q3_K_L.gguf | Q3_K_L | 29.59GB |
| Phoenix_DPO_60B.IQ4_XS.gguf | IQ4_XS | 30.58GB |
| Phoenix_DPO_60B.Q4_0.gguf | Q4_0 | 31.98GB |
| Phoenix_DPO_60B.IQ4_NL.gguf | IQ4_NL | 32.27GB |
| Phoenix_DPO_60B.Q4_K_S.gguf | Q4_K_S | 32.22GB |
| Phoenix_DPO_60B.Q4_K.gguf | Q4_K | 34.14GB |
| Phoenix_DPO_60B.Q4_K_M.gguf | Q4_K_M | 34.14GB |
| Phoenix_DPO_60B.Q4_1.gguf | Q4_1 | 35.49GB |
| Phoenix_DPO_60B.Q5_0.gguf | Q5_0 | 39.0GB |
| Phoenix_DPO_60B.Q5_K_S.gguf | Q5_K_S | 39.0GB |
| Phoenix_DPO_60B.Q5_K.gguf | Q5_K | 40.12GB |
| Phoenix_DPO_60B.Q5_K_M.gguf | Q5_K_M | 40.12GB |
| Phoenix_DPO_60B.Q5_1.gguf | Q5_1 | 42.51GB |
| Phoenix_DPO_60B.Q6_K.gguf | Q6_K | 46.47GB |
| Phoenix_DPO_60B.Q8_0.gguf | Q8_0 | 60.18GB |
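The sketch below shows one way to load and run one of these quants locally. It assumes the llama-cpp-python bindings (not mentioned in this card) and uses the Q4_K_M file from the table above; the file path, context size, and GPU offload setting are placeholders to adjust for your hardware.

```python
# Minimal sketch: run a downloaded GGUF quant with llama-cpp-python (assumed library).
from llama_cpp import Llama

llm = Llama(
    model_path="Phoenix_DPO_60B.Q4_K_M.gguf",  # any quant from the table above
    n_ctx=4096,        # context window; reduce if memory is tight
    n_gpu_layers=-1,   # offload all layers to GPU if a GPU build is installed
)

out = llm("Explain direct preference optimization in one sentence.", max_tokens=128)
print(out["choices"][0]["text"])
```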
Original model description:
- license: other
- tags: yi, moe
- license_name: yi-license
- license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE
This is a DPO fine-tuned MoE model with 60B parameters.
DPO Trainer
TRL supports the DPO Trainer for training language models from preference data, as described in the paper Direct Preference Optimization: Your Language Model is Secretly a Reward Model by Rafailov et al., 2023.
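As a rough illustration of the training setup described above, here is a minimal DPO sketch using TRL's DPOTrainer. The base model path, dataset, and hyperparameters are placeholders, not the authors' actual recipe, and some argument names vary across TRL releases (e.g. `processing_class` vs the older `tokenizer`).

```python
# Minimal DPO fine-tuning sketch with TRL; dataset and hyperparameters are illustrative only.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_model = "path/to/base-model"  # placeholder; the actual base model is not stated here
model = AutoModelForCausalLM.from_pretrained(base_model)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Preference data with "prompt", "chosen", and "rejected" columns.
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

args = DPOConfig(
    output_dir="phoenix-dpo",
    beta=0.1,                        # strength of the preference loss
    per_device_train_batch_size=1,
)
trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    processing_class=tokenizer,      # named `tokenizer=` in older TRL releases
)
trainer.train()
```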
A GGUF version is available at cloudyu/Phoenix_DPO_60B_gguf.