---
library_name: transformers
license: apache-2.0
datasets:
- argilla/dpo-mix-7k
language:
- en
---
# Phi2-PRO

*phi2-pro* is a fine-tuned version of **[microsoft/phi-2](https://huggingface.co/microsoft/phi-2)** on the **[argilla/dpo-mix-7k](https://huggingface.co/datasets/argilla/dpo-mix-7k)**
preference dataset, using *Odds Ratio Preference Optimization (ORPO)*. The model was trained for 1 epoch.
## 🔥 LazyORPO
This model was trained using **[LazyORPO](https://colab.research.google.com/drive/19ci5XIcJDxDVPY2xC1ftZ5z1kc2ah_rx?usp=sharing)**, a Colab notebook that makes the training
process much easier. It is based on the [ORPO paper](https://huggingface.co/papers/2403.07691).
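
For reference, here is a minimal sketch of what an equivalent ORPO run could look like with TRL's `ORPOTrainer`. This is not the training script used for this model (that was the LazyORPO notebook), and the hyperparameters are illustrative assumptions, except for the single epoch stated above; it also assumes the dataset is in the prompt/chosen/rejected format the trainer expects.

```python
# Illustrative sketch only: the model was trained via the LazyORPO notebook.
# The ORPOConfig values (beta, batch size) are assumptions, not the real settings.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

# Start from the base model and the preference dataset named in this card
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2", trust_remote_code=True)
dataset = load_dataset("argilla/dpo-mix-7k", split="train")

config = ORPOConfig(
    output_dir="phi2-pro",
    num_train_epochs=1,             # the card states 1 epoch
    beta=0.1,                       # weight of the odds-ratio term (assumed)
    per_device_train_batch_size=2,  # assumed; tune to available memory
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```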

#### What is ORPO?
Odds Ratio Preference Optimization (ORPO) is a method for training LLMs that combines SFT and alignment into a single objective (loss function), achieving state-of-the-art results; a sketch of the combined objective follows the list below.
Some highlights of this technique are:
* 🧠 Reference model-free, and therefore memory-friendly
* Replaces SFT+DPO/PPO with a single method (ORPO)
* ORPO outperforms SFT and SFT+DPO on Phi-2, Llama 2, and Mistral
* Mistral-ORPO achieves 12.20% on AlpacaEval 2.0, 66.19% on IFEval, and 7.32 on MT-Bench, outperforming Hugging Face's Zephyr Beta
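
Concretely, a minimal sketch of that combined objective, following the paper's formulation (the tensor names here are illustrative; the `log_p_*` values would be token-averaged log-probabilities of each response under the policy being trained):

```python
import torch
import torch.nn.functional as F

def orpo_loss(nll_chosen, log_p_chosen, log_p_rejected, lam=0.1):
    """Sketch of the ORPO objective: SFT loss plus a weighted odds-ratio term.

    nll_chosen:      token-averaged negative log-likelihood of the chosen response
    log_p_chosen:    token-averaged log-probability of the chosen response
    log_p_rejected:  token-averaged log-probability of the rejected response
    lam:             weight of the odds-ratio term (lambda in the paper)
    """
    # odds(y|x) = p / (1 - p), so in log space: log-odds = log p - log(1 - p)
    log_odds_chosen = log_p_chosen - torch.log1p(-torch.exp(log_p_chosen))
    log_odds_rejected = log_p_rejected - torch.log1p(-torch.exp(log_p_rejected))
    # L_OR pushes the chosen response to have higher odds than the rejected one
    l_or = -F.logsigmoid(log_odds_chosen - log_odds_rejected)
    # Single objective: standard SFT loss plus the odds-ratio penalty,
    # with no reference model needed anywhere (hence memory-friendly)
    return (nll_chosen + lam * l_or).mean()
```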
#### 💻 Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Run everything on the GPU by default
torch.set_default_device("cuda")

# Load the fine-tuned model and its tokenizer
model = AutoModelForCausalLM.from_pretrained("abideen/phi2-pro", torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("abideen/phi2-pro", trust_remote_code=True)

# Tokenize the prompt; the attention mask is omitted for single-sequence generation
inputs = tokenizer('''
"""
Write a detailed analogy between mathematics and a lighthouse.
"""''', return_tensors="pt", return_attention_mask=False)

# Generate up to 200 tokens and decode the result
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
## Evaluation
### COMING SOON