Model Card for mayacinka/OrpoLlama-3-8B

A quick fine-tune of Llama 3 8B with ORPO, demonstrating that the model can be fine-tuned in only 2 hours. Thanks to Maxime Labonne's notebook:

https://colab.research.google.com/drive/1eHNWg9gnaXErdAa8_mcvjMupbSS6rDvi?usp=sharing

  • Number of training samples from the dataset: 1,500 out of 40K
  • Hardware Type: NVIDIA L4
  • Hours of training: 2
  • Cloud Provider: Google Colab
  • Model size: 8.03B params
  • Tensor type: FP16 (Safetensors)
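Since the card only links the notebook, here is a minimal sketch of what such a run can look like with TRL's ORPOTrainer and QLoRA (so the 8B model fits on a single 24 GB L4, as in Maxime Labonne's notebook). The dataset name and all hyperparameters below are placeholders, not the exact values used for this model, and the API reflects TRL as of roughly v0.8.

```python
# Hypothetical ORPO fine-tuning sketch; dataset name and hyperparameters
# are illustrative placeholders.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from trl import ORPOConfig, ORPOTrainer

BASE = "meta-llama/Meta-Llama-3-8B"

tokenizer = AutoTokenizer.from_pretrained(BASE)
tokenizer.pad_token = tokenizer.eos_token  # Llama 3 ships without a pad token

# 4-bit quantization so the 8B model fits on a 24 GB L4.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    BASE, quantization_config=bnb_config, device_map="auto"
)

# Preference dataset with prompt/chosen/rejected columns (placeholder name);
# the card uses 1,500 of the 40K available samples.
dataset = load_dataset("org/preference-dataset-40k", split="train")
dataset = dataset.shuffle(seed=42).select(range(1500))

peft_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

orpo_args = ORPOConfig(
    output_dir="OrpoLlama-3-8B",
    beta=0.1,                       # weight of the odds-ratio preference term
    learning_rate=8e-6,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,
    max_length=1024,
    max_prompt_length=512,
    num_train_epochs=1,
)

trainer = ORPOTrainer(
    model=model,
    args=orpo_args,
    train_dataset=dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
)
trainer.train()
trainer.save_model("OrpoLlama-3-8B")
```

ORPO optimizes the supervised and preference objectives jointly in a single pass, which is what makes a short 2-hour run like this feasible without a separate reference model.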
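Because the weights ship as FP16 safetensors, loading with plain transformers should work; a minimal sketch (prompt and generation settings are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mayacinka/OrpoLlama-3-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the FP16 checkpoint
    device_map="auto",          # requires the accelerate package
)

prompt = "Explain ORPO in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```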

Dataset used to train mayacinka/OrpoLlama-3-8B