Convert to SafeTensors format for security compliance
- README.md +38 -0
- model.safetensors +3 -0
README.md
ADDED
@@ -0,0 +1,38 @@
+---
+license: apache-2.0
+base_model: Qwen/Qwen3-0.6B-Base
+tags:
+- dpo
+- fdpo
+- math
+- code
+- qwen3
+- reasoning
+datasets:
+- albertfares/MNLP_M3_dpo_dataset
+language:
+- en
+pipeline_tag: text-generation
+---
+
+# MNLP M3 fDPO Model (69k samples)
+
+This model is a fine-tuned version of [Qwen/Qwen3-0.6B-Base](https://huggingface.co/Qwen/Qwen3-0.6B-Base) trained with **filtered Direct Preference Optimization (fDPO)** on the [MNLP M3 DPO dataset](https://huggingface.co/datasets/albertfares/MNLP_M3_dpo_dataset).
+
+## Model Details
+
+- **Base Model**: Qwen/Qwen3-0.6B-Base
+- **Training Method**: fDPO (filtered Direct Preference Optimization; objective sketched below)
+- **Dataset**: MNLP M3 mixed dataset (~69k samples)
+- **Format**: SafeTensors
+
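+fDPO, as the name suggests, applies a quality filter to the preference pairs on top of standard DPO training; the exact filtering criterion is not described in this card. For reference, the underlying DPO objective it builds on is:
+
+$$
+\mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}\Big[\log \sigma\Big(\beta \log \frac{\pi_\theta(y_w\mid x)}{\pi_{\mathrm{ref}}(y_w\mid x)} - \beta \log \frac{\pi_\theta(y_l\mid x)}{\pi_{\mathrm{ref}}(y_l\mid x)}\Big)\Big]
+$$
+
+where \\(y_w\\) and \\(y_l\\) are the chosen and rejected completions for prompt \\(x\\), \\(\beta\\) controls the strength of the KL penalty against the reference, and \\(\pi_{\mathrm{ref}}\\) is the frozen reference policy (here presumably the base model).
+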
+## Usage
+
+```python
+from transformers import AutoModelForCausalLM, AutoTokenizer
+
+model = AutoModelForCausalLM.from_pretrained("albertfares/MNLP_M3_dpo_model_69k")
+tokenizer = AutoTokenizer.from_pretrained("albertfares/MNLP_M3_dpo_model_69k")
+```
+
+The weights are stored in SafeTensors format, which, unlike pickle-based `.bin` checkpoints, cannot execute arbitrary code on load and is typically faster to load.
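+
+Continuing from the loading snippet above, a minimal generation sketch (the prompt and decoding settings are illustrative, not taken from this card):
+
+```python
+# Tokenize a prompt and decode a short completion
+inputs = tokenizer("Question: What is 12 * 7?\nAnswer:", return_tensors="pt")
+outputs = model.generate(**inputs, max_new_tokens=64)
+print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+```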
model.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:56fff24a6c3c9cd9c34a571d55e94b404063a45ad1b06c31fcc3ed27dc0a81d7
+size 1503300296