---
language: en
tags:
- zabbix
- fine-tuning
- lora
- text-generation
license: apache-2.0
finetuned_from: unsloth/DeepSeek-R1-Distill-Llama-8B
---
DeepSeek-R1-Distill-Llama-8B-ZABBIX-BIT
Model Overview:
This model is a fine-tuned version of DeepSeek-R1-Distill-Llama-8B, specialized for technical questions about Zabbix monitoring, alerting, and performance optimization. It uses chain-of-thought reasoning to produce detailed, step-by-step answers.
Fine-Tuning Details:
- Base Model: DeepSeek-R1-Distill-Llama-8B
- Fine-Tuning Method: Supervised Fine-Tuning (SFT) using LoRA (Low-Rank Adaptation)
- LoRA Configuration:
  - Rank (r): 16
  - Target Modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
  - LoRA Alpha: 16
  - Dropout: 0 (no dropout applied)
- Gradient Checkpointing: Enabled (via Unsloth optimizations)
- Quantization: 4-bit quantization for improved memory efficiency
- Dataset: Fine-tuned on a custom JSON dataset (questions_with_cot_and_answers.json) containing Zabbix-related questions, chain-of-thought explanations, and corresponding answers
- Training Framework: SFTTrainer from the TRL library together with Unsloth's fast inference optimizations (a sketch of this setup follows below)
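The exact training script is not included in this card, but the configuration above maps onto Unsloth and TRL roughly as in the minimal sketch below. Values such as max_seq_length, batch size, learning rate, epoch count, and the dataset_text_field name are illustrative assumptions rather than the actual training settings, and the arguments follow the older TRL SFTTrainer API that accepts a tokenizer directly.
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Load the base model in 4-bit precision with Unsloth's optimizations.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/DeepSeek-R1-Distill-Llama-8B",
    max_seq_length=2048,  # assumption: not stated in this card
    load_in_4bit=True,    # matches the 4-bit quantization above
)

# Attach LoRA adapters using the configuration listed above.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
    lora_dropout=0,
    use_gradient_checkpointing="unsloth",  # Unsloth's gradient checkpointing
)

# The custom dataset of questions, chain-of-thought traces, and answers.
dataset = load_dataset(
    "json",
    data_files="questions_with_cot_and_answers.json",
    split="train",
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # assumption: depends on the dataset schema
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,  # illustrative hyperparameters
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        num_train_epochs=1,
        output_dir="outputs",
    ),
)
trainer.train()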
Intended Use Cases:
This model is designed to support IT professionals and system administrators in:
- Answering complex Zabbix configuration and performance optimization questions
- Providing detailed, reasoning-based responses for technical troubleshooting
- Enhancing educational content and technical support regarding Zabbix environments
Usage Example:
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_name = "aman-ph/DeepSeek-R1-Distill-Llama-8B-ZABBIX-BIT"

# Load the tokenizer and model; device_map="auto" places the weights on a GPU when one is available.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto",
)

prompt = """Below is an instruction that describes a task, paired with an input that provides further context.
Write a response that appropriately completes the request.
Before answering, think carefully about the question and create a step-by-step chain of thoughts to ensure a logical and accurate response.
### Instruction:
You are a Zabbix expert with advanced knowledge in monitoring, alerting, and performance optimization.
Please answer the following Zabbix-related question.
### Question:
What is the best way to configure a proxy group in Zabbix?
### Response:
<think>"""

# Tokenize on the model's device and pass the attention mask through to generate().
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
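Because the prompt ends with an opening <think> tag, the model emits its chain of thought first and the final answer after it. If only the answer is needed, the reasoning block can be stripped in post-processing (a minimal sketch, assuming the model closes its reasoning with a </think> tag):
text = tokenizer.decode(outputs[0], skip_special_tokens=True)
# Keep only what follows the reasoning block, if the closing tag is present.
answer = text.split("</think>", 1)[1].strip() if "</think>" in text else text
print(answer)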
Model tree for aman-ph/DeepSeek-R1-Distill-Llama-8B-ZABBIX-BIT:
- Base model: deepseek-ai/DeepSeek-R1-Distill-Llama-8B
- Finetuned from: unsloth/DeepSeek-R1-Distill-Llama-8B