Update README.md
README.md CHANGED
@@ -1,3 +1,46 @@
- ---
- license: mit
- ---
---
license: mit
license_link: >-
  https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- nlp
- code
---

## Model Summary

This repo provides GGUF-format files for Phi-3-Mini-128K-Instruct.

*For more details, check out the original model at [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct).*

Phi-3-Mini-128K-Instruct is a 3.8B-parameter, lightweight, state-of-the-art open model trained with the Phi-3 datasets, which include both synthetic data and filtered, publicly available website data, with a focus on high-quality and reasoning-dense properties.
The model belongs to the Phi-3 family, Mini version, and comes in two variants, [4K](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct), which is the context length (in tokens) each can support.
The model underwent a post-training process that incorporates both supervised fine-tuning and direct preference optimization to ensure precise instruction adherence and robust safety measures.
When assessed against benchmarks testing common sense, language understanding, math, code, long context, and logical reasoning, Phi-3-Mini-128K-Instruct showcased robust, state-of-the-art performance among models with fewer than 13 billion parameters.

Resources and Technical Documentation:

+ [Phi-3 Microsoft Blog](https://aka.ms/phi3blog-april)
+ [Phi-3 Technical Report](https://aka.ms/phi3-tech-report)
+ [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai)
+ [Phi-3 on Hugging Face](https://aka.ms/phi3-hf)
+ Phi-3 ONNX: [4K](https://aka.ms/phi3-mini-4k-instruct-onnx) and [128K](https://aka.ms/phi3-mini-128k-instruct-onnx)

This repo provides the following GGUF files for the Phi-3-Mini-128K-Instruct model:

| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| [Phi-3-mini-128k-instruct-Q4_K_M.gguf](https://huggingface.co/AlessandroW/Phi-3-mini-128k-instruct-gguf/blob/main/Phi-3-mini-128k-instruct-Q4_K_M.gguf) | Q4_K_M | 4 | 2.39 GB | medium, balanced quality - recommended |
| [Phi-3-mini-128k-instruct-f16.gguf](https://huggingface.co/AlessandroW/Phi-3-mini-128k-instruct-gguf/blob/main/Phi-3-mini-128k-instruct-f16.gguf) | None | 16 | 7.2 GB | minimal quality loss |

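One quick way to try a quantized file is with `llama-cpp-python` together with `huggingface_hub`. The snippet below is a minimal sketch, not part of the original card: it assumes both packages are installed (`pip install llama-cpp-python huggingface_hub`) and uses a small `n_ctx` for illustration; raise it (up to 128K) as your memory allows.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the recommended Q4_K_M quantization from this repo.
model_path = hf_hub_download(
    repo_id="AlessandroW/Phi-3-mini-128k-instruct-gguf",
    filename="Phi-3-mini-128k-instruct-Q4_K_M.gguf",
)

# Load the model; n_ctx is kept small here for illustration.
llm = Llama(model_path=model_path, n_ctx=4096)

# Recent llama-cpp-python releases pick up the chat template embedded in the
# GGUF metadata, so the high-level chat API can be used directly.
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain what a GGUF file is in one sentence."}],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```
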
### License

The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/LICENSE).

## Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties’ policies.