# TinyLlama-1.1B-qlora-mango
A fine-tuned version of the TinyLlama-1.1B model, trained with QLoRA on the UltraChat-200k prompt-response dataset.
## Model Details
- Base Model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
- Tuning Method: QLoRA (Quantized Low-Rank Adaptation)
- Use Case: Instruction-following / Chatbot generation
- Tokenizer: TinyLlama tokenizer
- Framework: Hugging Face Transformers
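QLoRA fine-tuning of this kind is typically set up with bitsandbytes 4-bit quantization plus a PEFT LoRA adapter. The configuration below is an illustrative sketch only: the rank, alpha, dropout, and target modules are assumptions, not the actual hyperparameters used to train this model.

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit NF4 quantization of the frozen base weights (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

# Low-rank adapters trained on top of the quantized model.
# All values here are illustrative, not the model's actual settings.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```

The base model would be loaded with `quantization_config=bnb_config` and wrapped with `peft.get_peft_model(model, lora_config)` before training.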
## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("abhinavm16104/TinyLlama-1.1B-qlora-mango")
model = AutoModelForCausalLM.from_pretrained("abhinavm16104/TinyLlama-1.1B-qlora-mango")

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

# TinyLlama-Chat expects the Zephyr-style chat format shown below.
prompt = "<|user|>\nTell me something about mangoes.</s>\n<|assistant|>"
print(pipe(prompt, max_new_tokens=128)[0]["generated_text"])
```
## Example Prompt

```text
<|user|>
Tell me something about mangoes.</s>
<|assistant|>
Mangoes are a type of fruit that originated in Southeast Asia and are now grown in many parts of the world...
```
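The chat format above can be built with a small helper. This is a minimal sketch: the template string follows TinyLlama-Chat's Zephyr-style format, and the function name is illustrative, not part of the model's API.

```python
def build_prompt(user_message: str) -> str:
    """Wrap a user message in TinyLlama-Chat's Zephyr-style chat template."""
    return f"<|user|>\n{user_message}</s>\n<|assistant|>"

# Produces the prompt shown in the example above.
print(build_prompt("Tell me something about mangoes."))
```

For multi-turn or more robust formatting, `tokenizer.apply_chat_template(...)` from Transformers can build this string from a list of role/content messages instead.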
## Citation

If you use TinyLlama-1.1B-qlora-mango in your work, please cite the author:

```bibtex
@misc{tinyllama-1.1B-qlora-mango,
  author = {Abhinav Mangalore},
  title  = {TinyLlama-1.1B-qlora-mango},
  year   = {2025},
  url    = {https://huggingface.co/abhinavm16104/TinyLlama-1.1B-qlora-mango}
}
```