Aeshp/deepseekR1_tunedchat
This model is a fine-tuned version of deepseek-ai/DeepSeek-R1-Distill-Llama-8B, loaded through Unsloth as the 4-bit quantized checkpoint unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit. It was trained on the following customer-service and general-chat datasets:
- taskydata/baize_chatbot
- MohammadOthman/mo-customer-support-tweets-945k
- bitext/Bitext-customer-support-llm-chatbot-training-dataset
Training was performed in three stages, and the final weights were merged into the base model and pushed here, so the model loads directly without separate adapters. At 8B parameters it remains a lightweight model to run.
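For quick use, here is a minimal inference sketch, assuming the merged weights load like any causal LM through the standard transformers API; the prompt and generation settings are illustrative only:

```python
# A minimal inference sketch, assuming the merged weights load like any
# causal LM with the standard transformers API. Prompt and generation
# settings are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Aeshp/deepseekR1_tunedchat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",  # requires accelerate; use .to("cuda") otherwise
)

messages = [
    {"role": "user", "content": "My order hasn't arrived yet. What should I do?"}
]
# The DeepSeek-R1 distill checkpoints ship a chat template in the tokenizer.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Strip the prompt tokens and decode only the newly generated reply.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```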
📝 License
This model is released under the MIT license, allowing free use, modification, and further fine-tuning.
💡 How to Fine-Tune Further
All code and instructions for further fine-tuning, inference, and pushing to the Hugging Face Hub are available in the open-source GitHub repository:
https://github.com/Aeshp/deepseekR1finetune
- You can fine-tune this model on your own domain-specific data.
- Please adjust hyperparameters and dataset size as needed.
- Example scripts and notebooks are provided for both base-model and checkpoint-based fine-tuning; a minimal sketch is shown after this list.
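As a starting point, here is a hedged fine-tuning sketch using Unsloth with TRL's SFTTrainer. The dataset choice, LoRA settings, and hyperparameters below are placeholders, not the author's actual configuration; see the GitHub repository for the real scripts.

```python
# A hedged fine-tuning sketch using Unsloth with TRL's SFTTrainer. The
# dataset, LoRA settings, and hyperparameters are placeholders, not the
# author's actual configuration. (SFTTrainer argument names vary across
# TRL versions.)
from unsloth import FastLanguageModel
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Load the merged checkpoint in 4-bit for memory-efficient training.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Aeshp/deepseekR1_tunedchat",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small set of weights is updated.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Any instruction/chat dataset works; this one appears in the card above.
dataset = load_dataset(
    "bitext/Bitext-customer-support-llm-chatbot-training-dataset",
    split="train",
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="instruction",  # replace with your formatted text column
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        max_steps=100,
        output_dir="outputs",
    ),
)
trainer.train()

# After training, adapters can be merged and saved, e.g.:
# model.save_pretrained_merged("merged", tokenizer, save_method="merged_16bit")
```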
⚠️ Notes
- The model may sometimes hallucinate, as is common with LLMs.
- For best results, fine-tune on a large, high-quality dataset to avoid overfitting.
📚 References
Hugging Face Models
- deepseek-ai/DeepSeek-R1-Distill-Llama-8B
- deepseek-ai/DeepSeek-R1
- meta-llama/Meta-Llama-3-8B
- unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit
Datasets
- taskydata/baize_chatbot
- MohammadOthman/mo-customer-support-tweets-945k
- bitext/Bitext-customer-support-llm-chatbot-training-dataset
GitHub Repositories
- Aeshp/deepseekR1finetune: https://github.com/Aeshp/deepseekR1finetune
For all usage instructions, fine-tuning guides, and code, please see the GitHub repository.