Pinkstack committed · verified · Commit 753f712 · 1 Parent(s): 20de593

Update README.md

Files changed (1): README.md (+6 −3)

README.md CHANGED
@@ -11,9 +11,10 @@ language:
 pipeline_tag: text-generation
 ---
 
-# Uploaded model
+# Information
+## Here are things you should be aware of when using PARM models (Pinkstack Accuracy Reasoning Models) 🧀
 
-This model has gotten extra training parameters, We trained with [this](https://huggingface.co/datasets/gghfez/QwQ-LongCoT-130K-cleaned) dataset.
+This PARM is based on Phi 3.5 mini and has received additional fine-tuning so that its outputs resemble o1 Mini's. We trained with [this](https://huggingface.co/datasets/gghfez/QwQ-LongCoT-130K-cleaned) dataset.
 
 To use this model, you must use a service which supports the GGUF file format.
 Additionally, this is the prompt template; it uses the Phi-3 template.
@@ -25,7 +26,9 @@ Additionally, this is the prompt template; it uses the Phi-3 template.
 {{ end }}<|assistant|>
 {{ .Response }}<|end|>
 
-Highly recommended to be used with a system prompt.
+Using a system prompt is highly recommended.
+
+This model has been tested inside MSTY with a max token output of 8,192 and a context of 32,000. Less context is completely fine too, but the model does like to use a lot of tokens for reasoning.
 
 - **Developed by:** Pinkstack
 - **License:** apache-2.0
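
The template fragment in the diff uses Ollama's Modelfile syntax for the Phi-3 chat format. The same prompt string can be assembled by hand when calling a GGUF runtime directly. A minimal sketch in Python, assuming the standard Phi-3 special tokens (`<|system|>`, `<|user|>`, `<|assistant|>`, `<|end|>`); the helper name `build_phi3_prompt` is our own, not part of the model card:

```python
# Sketch: build a Phi-3-style prompt string by hand, mirroring the
# template shown in the README diff. The system block is optional,
# but the card recommends always supplying a system prompt.
def build_phi3_prompt(user_message: str, system_prompt: str = "") -> str:
    parts = []
    if system_prompt:
        parts.append(f"<|system|>\n{system_prompt}<|end|>")
    parts.append(f"<|user|>\n{user_message}<|end|>")
    # The model generates its reply after this tag, ending with <|end|>.
    parts.append("<|assistant|>")
    return "\n".join(parts)

prompt = build_phi3_prompt(
    "Why is the sky blue?",
    system_prompt="You are a careful reasoner.",
)
print(prompt)
```

Any service that accepts raw prompts for GGUF models should work with a string shaped like this; chat frontends such as MSTY apply the template for you.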