---
license: mit
license_link: >-
  https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: microsoft/Phi-3-mini-128k-instruct
tags:
- nlp
- code
---

## Model Summary

This repo provides GGUF-format files for Phi-3-Mini-128K-Instruct.

*For more details check out the original model at [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct).*


The Phi-3-Mini-128K-Instruct is a 3.8 billion-parameter, lightweight, state-of-the-art open model trained on the Phi-3 datasets, which include both synthetic data and filtered publicly available website data, with an emphasis on high-quality and reasoning-dense properties. The model belongs to the Phi-3 family; the Mini version comes in two variants, 4K and 128K, which denote the context length (in tokens) the model can support.

After initial training, the model underwent a post-training process that involved supervised fine-tuning and direct preference optimization to enhance its ability to follow instructions and adhere to safety measures. When evaluated against benchmarks that test common sense, language understanding, mathematics, coding, long-term context, and logical reasoning, the Phi-3 Mini-128K-Instruct demonstrated robust and state-of-the-art performance among models with fewer than 13 billion parameters.

Resources and Technical Documentation:

+ [Phi-3 Microsoft Blog](https://aka.ms/phi3blog-april)
+ [Phi-3 Technical Report](https://aka.ms/phi3-tech-report)
+ [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai)
+ [Phi-3 on Hugging Face](https://aka.ms/phi3-hf)
+ Phi-3 ONNX: [4K](https://aka.ms/phi3-mini-4k-instruct-onnx) and [128K](https://aka.ms/phi3-mini-128k-instruct-onnx)

This repo provides GGUF files and Llamafiles ([`d228e01d`](https://github.com/Mozilla-Ocho/llamafile/tree/d228e01d70a7b91bf04dbf63428646f3f173b888)) for the Phi-3 Mini-128K-Instruct model.

| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| [Phi-3-mini-128k-instruct-Q4_K_M.gguf](https://huggingface.co/AlessandroW/Phi-3-mini-128k-instruct-gguf/blob/main/Phi-3-mini-128k-instruct-Q4_K_M.gguf) | Q4_K_M | 4 | 2.39 GB | medium, balanced quality - recommended | 
| [Phi-3-mini-128k-instruct-Q4_K_M.llamafile](https://huggingface.co/AlessandroW/Phi-3-mini-128k-instruct-gguf/blob/main/Phi-3-mini-128k-instruct-Q4_K_M.llamafile) | Q4_K_M | 4 | 2.4 GB | medium, balanced quality - recommended | 
| [Phi-3-mini-128k-instruct-f16.gguf](https://huggingface.co/AlessandroW/Phi-3-mini-128k-instruct-gguf/blob/main/Phi-3-mini-128k-instruct-f16.gguf) | None | 16 | 7.64 GB | minimal quality loss |
| [Phi-3-mini-128k-instruct-f16.llamafile](https://huggingface.co/AlessandroW/Phi-3-mini-128k-instruct-gguf/blob/main/Phi-3-mini-128k-instruct-f16.llamafile) | None | 16 | 7.65 GB | minimal quality loss |

*Note:* When using the llamafile version make sure to specify the context size, e.g., `./Phi-3-mini-128k-instruct-Q4_K_M.llamafile -c 0 -p "your prompt"`.
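As a minimal usage sketch (assuming the files have been downloaded to the current directory; on Linux/macOS the llamafile must first be made executable, and the exact llama.cpp binary name and flags depend on your build):

```shell
# Mark the llamafile as executable (Linux/macOS)
chmod +x Phi-3-mini-128k-instruct-Q4_K_M.llamafile

# Run it; -c 0 tells the runtime to use the model's full trained context length
./Phi-3-mini-128k-instruct-Q4_K_M.llamafile -c 0 -p "Write a haiku about quantization."

# Alternatively, load the plain GGUF file with llama.cpp's CLI,
# here with an explicit 4096-token context window
./main -m Phi-3-mini-128k-instruct-Q4_K_M.gguf -c 4096 -p "Write a haiku about quantization."
```

The llamafile bundles the model weights and the inference runtime into a single portable executable, so no separate llama.cpp installation is needed; the plain GGUF file requires a llama.cpp build (or a compatible runtime) on the host.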

### License

The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/LICENSE).