# 🧠 Model Card: Mistral_Posttrain_SFT
This model is a fine-tuned version of `mistralai/Mistral-7B-v0.1`, trained with supervised fine-tuning (SFT) on domain-specific instruction prompts. It is intended for general-purpose text generation tasks such as question answering, summarization, and story generation.
## 🧾 Model Details

### Model Description
- Model type: Causal language model (decoder-only)
- Architecture: Transformer-based (Mistral)
- Languages: English
- Fine-tuned on: Instruction-style conversational data
- Library: Hugging Face `transformers`
- Fine-tuned from: `mistralai/Mistral-7B-v0.1`
- Shared by: @yuvrajpant56
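
The exact prompt template used during SFT is not documented in this card. As a rough, hypothetical sketch, an instruction-style prompt for a model of this kind might be assembled as follows; the `### Instruction:` / `### Response:` markers are illustrative assumptions, not the model's confirmed format:

```python
# Hypothetical prompt wrapper -- the actual SFT template is not documented,
# so these section markers are an assumption for illustration only.
def build_prompt(instruction: str) -> str:
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

prompt = build_prompt("Summarize the water cycle in two sentences.")
```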
## Model Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "yuvrajpant56/Mistral_Posttrain_SFT"

# Load the tokenizer and model weights from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Tokenize a prompt and generate up to 100 new tokens (greedy decoding by default)
inputs = tokenizer("Explain gravity in simple terms.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
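
Greedy decoding (the default above) is deterministic; for more varied output you can enable sampling. Continuing from the snippet above, the parameter values below are illustrative defaults, not settings tuned or recommended for this model:

```python
# Sampled decoding: illustrative values, not tuned for this model.
outputs = model.generate(
    **inputs,
    max_new_tokens=100,
    do_sample=True,    # sample from the token distribution instead of greedy argmax
    temperature=0.7,   # soften the logits before sampling
    top_p=0.9,         # nucleus sampling: keep the top 90% of probability mass
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```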