Upload folder using huggingface_hub
- 0000100_adapters.safetensors +3 -0
- 0000200_adapters.safetensors +3 -0
- 0000300_adapters.safetensors +3 -0
- 0000400_adapters.safetensors +3 -0
- 0000500_adapters.safetensors +3 -0
- 0000600_adapters.safetensors +3 -0
- 0000700_adapters.safetensors +3 -0
- 0000800_adapters.safetensors +3 -0
- 0000900_adapters.safetensors +3 -0
- 0001000_adapters.safetensors +3 -0
- 0001100_adapters.safetensors +3 -0
- 0001200_adapters.safetensors +3 -0
- 0001300_adapters.safetensors +3 -0
- 0001400_adapters.safetensors +3 -0
- 0001500_adapters.safetensors +3 -0
- 0001600_adapters.safetensors +3 -0
- 0001700_adapters.safetensors +3 -0
- 0001800_adapters.safetensors +3 -0
- 0001900_adapters.safetensors +3 -0
- 0002000_adapters.safetensors +3 -0
- README.md +45 -0
- adapter_config.json +15 -0
- adapters.safetensors +3 -0
0000100_adapters.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:74822c67005604faf5e041094d50dd21a9d120dc43aba190d80a323145702fbc
+size 12589908
0000200_adapters.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:92988b8ec0df0cbd65b9737b33d961f58e5dc742d2c218b62acd89ef0a30422e
+size 12589908
0000300_adapters.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f67d97bfaa6859c3f1889f9a49330d29e0bcb58abe1fc22994150e3d8fc4054b
+size 12589908
0000400_adapters.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0756017ce09adc0525b69903d1dc157488c3f238fb21189bbdaf3aacfb3bd620
+size 12589908
0000500_adapters.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6f3ba01d303bb894a52fea57001aa54a20ba2d1f0d31f56dcfc88361c539f3a5
+size 12589908
0000600_adapters.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:aebe6c6a0f9e621af02b0923bc7f669706fabbc7836c92614ad3bffcd04dfcfa
+size 12589908
0000700_adapters.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9e090d1216c56f5f5ae01099cf563ff9eed8a0210697551f76a3ed45c4b17fc7
+size 12589908
0000800_adapters.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c4b393700a1a75ba5c3ab38d5f7059b1bfe762bdcc4ff9046edbe5bd40385092
+size 12589908
0000900_adapters.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:66944a8e053fd49a81b58d638f6760bad516c55e427b832521752468b8755c68
+size 12589908
0001000_adapters.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6639d5cd36c24bc2f3a4fcc715cce9c37112e62c3ab65f63e8d11c9e92ecb526
+size 12589908
0001100_adapters.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b657e17839e8e90a949d0835b109a5c8a9e621703ed7e0bedc17e945ae212de3
+size 12589908
0001200_adapters.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7edba5d51fa79461d4ecf7883bd4f60cab9b7cd6da4914279938207b07bc1116
+size 12589908
0001300_adapters.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9a6256bba3e0b06e21fe96d8c55cf1e32c61dc2f82ad0c20b5347670ab3e1ed6
+size 12589908
0001400_adapters.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ce45e6798a51070893cd0f3b01d62f368bf9732c974aa8be0d9eb716f4a43ee8
+size 12589908
0001500_adapters.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:416f48f5c6105aa213b782c97d050f34c97fdb59b474babcaee40f55ea7372ff
+size 12589908
0001600_adapters.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c02cf311e5ec765e269a137060c5a4300df1f7aa300952b62357f8ad038bd06f
+size 12589908
0001700_adapters.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f2f40cb889fc90899c71b65dc0ecd283316be82966cce89e446ac44653b8de98
+size 12589908
0001800_adapters.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b990a17260b508912678ba20b557c3b55bbed7a76e285a95e87312c976646cde
+size 12589908
0001900_adapters.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b12d9f8be33daae0d2515295a8bd3902e17a073403a47905c9e8480baae69bf7
+size 12589908
0002000_adapters.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:414d073a2137b5b0199f7bbc4ba3a2bf2101c7dabcc33054fa49cdbf017f916a
+size 12589908
README.md
ADDED
@@ -0,0 +1,45 @@
+---
+license: mit
+base_model: microsoft/Phi-3-mini-4k-instruct
+tags:
+- fine-tuned
+- lora
+- mlx
+---
+
+# Fine-tuned Phi-3 Mini Model
+
+This model is a fine-tuned version of microsoft/Phi-3-mini-4k-instruct using LoRA (Low-Rank Adaptation) and MLX.
+
+## Model Details
+
+- **Base Model**: microsoft/Phi-3-mini-4k-instruct
+- **Fine-tuning Method**: LoRA with MLX
+- **Model Size**: 252.1 MB
+
+## Usage
+
+```python
+from mlx_lm import load, generate
+
+# Load the model
+model, tokenizer = load("didierlopes/phi-3-mini-4k-instruct-ft-on-my-blog")
+
+# Generate text
+prompt = "<|system|>\nYou are a helpful assistant.<|end|>\n<|user|>\nHello!<|end|>\n<|assistant|>"
+response = generate(model, tokenizer, prompt, max_tokens=100)
+print(response)
+```
+
+## Training
+
+This model was fine-tuned using the MLX framework with LoRA adapters. The training process involved:
+
+1. Data preprocessing and validation
+2. LoRA configuration and setup
+3. Fine-tuning with a custom training loop
+4. Adapter fusion into the base model
+
+## License
+
+This model is released under the MIT license.
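The Training section of the README above lists the workflow only at a high level. Below is a minimal sketch of how such a run could be reproduced with the mlx_lm command-line entry points; the data directory, iteration count, and output paths are illustrative assumptions rather than values taken from this upload, and exact flags can differ between mlx_lm versions.

```python
import subprocess

# Hypothetical reproduction of the LoRA fine-tuning step with mlx_lm.
# "data/" is assumed to hold train.jsonl / valid.jsonl; 2000 iterations
# matches the 0000100..0002000 checkpoint names in this upload.
subprocess.run([
    "python", "-m", "mlx_lm.lora",
    "--model", "microsoft/Phi-3-mini-4k-instruct",
    "--train",
    "--data", "data",
    "--iters", "2000",
    "--adapter-path", "adapters",
], check=True)

# Fuse the trained adapters back into the base model (step 4 in the README).
subprocess.run([
    "python", "-m", "mlx_lm.fuse",
    "--model", "microsoft/Phi-3-mini-4k-instruct",
    "--adapter-path", "adapters",
    "--save-path", "fused_model",
], check=True)
```

The numbered 0000100 through 0002000 checkpoint files in this commit are consistent with the trainer saving adapter snapshots every 100 iterations of a 2000-iteration run.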
adapter_config.json
ADDED
@@ -0,0 +1,15 @@
+{
+  "num_layers": 32,
+  "lora_layers": 32,
+  "lora_parameters": {
+    "rank": 16,
+    "scale": 20.0,
+    "dropout": 0.1,
+    "keys": [
+      "self_attn.q_proj",
+      "self_attn.k_proj",
+      "self_attn.v_proj",
+      "self_attn.o_proj"
+    ]
+  }
+}
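adapter_config.json records the LoRA setup: rank 16, scale 20.0, and dropout 0.1 applied to the q/k/v/o attention projections across all 32 layers. As an alternative to the fused model shown in the README, the sketch below loads the base model with these adapters applied at load time; it assumes the adapter files from this repository have been downloaded into a local adapters/ directory and that the installed mlx_lm version supports the adapter_path argument.

```python
from mlx_lm import load, generate

# Sketch: apply the uploaded LoRA adapters to the base model at load time.
# "adapters/" is assumed to contain adapter_config.json and
# adapters.safetensors from this repository.
model, tokenizer = load(
    "microsoft/Phi-3-mini-4k-instruct",
    adapter_path="adapters",
)

prompt = (
    "<|system|>\nYou are a helpful assistant.<|end|>\n"
    "<|user|>\nHello!<|end|>\n<|assistant|>"
)
print(generate(model, tokenizer, prompt, max_tokens=100))
```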
adapters.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:414d073a2137b5b0199f7bbc4ba3a2bf2101c7dabcc33054fa49cdbf017f916a
+size 12589908
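The checkpoint entries in this commit are Git LFS pointers (version, oid, size) rather than the weight files themselves. After downloading the resolved file, it can be checked against its pointer; a minimal sketch for the final adapters.safetensors, assuming it sits in the current directory:

```python
import hashlib
import os

# Expected values copied from the adapters.safetensors LFS pointer above.
expected_oid = "414d073a2137b5b0199f7bbc4ba3a2bf2101c7dabcc33054fa49cdbf017f916a"
expected_size = 12589908
path = "adapters.safetensors"  # assumed local download location

# Hash the file in chunks so large checkpoints do not need to fit in memory.
digest = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        digest.update(chunk)

assert os.path.getsize(path) == expected_size, "size mismatch"
assert digest.hexdigest() == expected_oid, "sha256 mismatch"
print("adapters.safetensors matches its LFS pointer")
```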