---
tags:
- text-generation-inference
- transformers
- unsloth
- gguf
- reasoning
license: apache-2.0
language:
- en
pipeline_tag: text-generation
---
# Information
Here are some things you should be aware of when using PARM models (Pinkstack Accuracy Reasoning Models) 🧀

This PARM is based on Phi 3.5 mini, which received additional fine-tuning so that its outputs resemble those of o1 Mini. We trained it with this dataset.

To use this model, you need a service that supports the GGUF file format. Additionally, it uses the Phi-3 prompt template:
```
{{ if .System }}<|system|> {{ .System }}<|end|> {{ end }}{{ if .Prompt }}<|user|> {{ .Prompt }}<|end|> {{ end }}<|assistant|> {{ .Response }}<|end|>
```
Using a system prompt is highly recommended.
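If your serving stack expects a raw prompt string rather than a chat template, the Phi-3 template above can be applied manually. Below is a minimal sketch in Python; the function name is ours, and it assumes the `<|system|>`, `<|user|>`, `<|assistant|>`, and `<|end|>` special tokens shown in the template.

```python
def format_phi3_prompt(prompt: str, system: str = "") -> str:
    """Build a Phi-3-style prompt string following the template above.

    Mirrors the template's conditionals: the system block is emitted
    only when a system prompt is provided, and the string ends with
    <|assistant|> so the model generates the response next.
    """
    parts = []
    if system:
        parts.append(f"<|system|> {system}<|end|> ")
    parts.append(f"<|user|> {prompt}<|end|> ")
    parts.append("<|assistant|>")
    return "".join(parts)


# Example usage:
print(format_phi3_prompt("What is 2 + 2?", system="You are a careful reasoner."))
```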
This model has been tested inside of MSTY with a maximum token output of 8,192 and a context window of 32,000. Less context is completely fine too, but the model does like to use a lot of tokens for reasoning.
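For instance, if you serve the GGUF file through Ollama, a minimal Modelfile might look like the sketch below. The file name is a placeholder, and the parameter values simply mirror the tested settings mentioned above; adjust them to your setup.

```
# Hypothetical GGUF file name -- replace with your actual download
FROM ./parm-phi-3.5-mini.gguf

TEMPLATE """{{ if .System }}<|system|> {{ .System }}<|end|> {{ end }}{{ if .Prompt }}<|user|> {{ .Prompt }}<|end|> {{ end }}<|assistant|> {{ .Response }}<|end|>"""

# Context window and output budget matching the tested configuration
PARAMETER num_ctx 32000
PARAMETER num_predict 8192
PARAMETER stop <|end|>
```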
- Developed by: Pinkstack
- License: apache-2.0
- Finetuned from model : unsloth/phi-3.5-mini-instruct-bnb-4bit
This model was trained with Unsloth and Hugging Face's TRL library.
Used this model? Don't forget to leave a like :)