GGUF v2 quantizations of KnutJaegersberg/deacon-3b. The model is based on conceptofmind/Open-LLongMA-3b, so you will need to set linear rope_scaling to 0.25.
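As a minimal sketch of what that setting looks like in practice, the snippet below loads one of these GGUF files with llama-cpp-python and passes the linear RoPE scale of 0.25. The filename and the `n_ctx` value are illustrative assumptions, not taken from this repository.

```python
# Minimal sketch: loading a GGUF quantization with linear RoPE scaling of 0.25.
# Assumes llama-cpp-python is installed; the filename and n_ctx are illustrative
# guesses, not metadata from this repository.
from llama_cpp import Llama

llm = Llama(
    model_path="deacon-3b.Q4_K_M.gguf",  # hypothetical filename
    rope_freq_scale=0.25,                # linear rope_scaling of 0.25
    n_ctx=8192,                          # assumed extended context window
)
```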
Prompt Example:

```
### System:
You are an AI assistant. User will you give you a task. Your goal is to complete the task as faithfully as you can. While performing the task think step-by-step and justify your steps.
### Instruction:
How do you fine tune a large language model?
### Response:
```
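Continuing the sketch above, the template can be assembled into a single prompt string and passed to the `llm` object created earlier. The stop sequence and sampling parameters are assumptions for illustration, not documented behaviour of this model.

```python
# Sketch of using the prompt template with the llm object from the sketch above.
# The stop sequence and max_tokens value are assumptions.
system = (
    "You are an AI assistant. User will you give you a task. Your goal is to "
    "complete the task as faithfully as you can. While performing the task "
    "think step-by-step and justify your steps."
)
instruction = "How do you fine tune a large language model?"

prompt = (
    f"### System:\n{system}\n"
    f"### Instruction:\n{instruction}\n"
    f"### Response:\n"
)

output = llm(prompt, max_tokens=512, stop=["### Instruction:"])
print(output["choices"][0]["text"])
```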