Update README.md
README.md
CHANGED
@@ -11,9 +11,10 @@ language:
 pipeline_tag: text-generation
 ---
 
-#
+# Information
+## Here are things you should be aware of when using PARM models (Pinkstack Accuracy Reasoning Models) 🧀
 
-This
+This PARM is based on Phi 3.5 mini, which has received extra training so that its outputs resemble o1 Mini; we trained with [this](https://huggingface.co/datasets/gghfez/QwQ-LongCoT-130K-cleaned) dataset.
 
 To use this model, you must use a service which supports the GGUF file format.
 Additionally, this is the prompt template; it uses the Phi-3 template.
@@ -25,7 +26,9 @@ Additionally, this is the prompt template; it uses the Phi-3 template.
 {{ end }}<|assistant|>
 {{ .Response }}<|end|>
 
-Highly recommended to be used with a system prompt.
+Highly recommended to be used with a system prompt.
+
+This model has been tested inside of MSTY with 8,192 max token output and a 32,000-token context. Less context is completely fine too, but the model does like to use a lot of tokens for reasoning.
 
 - **Developed by:** Pinkstack
 - **License:** apache-2.0