🧠 Model Card: Mistral_Posttrain_SFT

This model is a fine-tuned version of mistralai/Mistral-7B-v0.1, trained with supervised fine-tuning (SFT) on domain-specific, instruction-style prompts. It is intended for general-purpose text generation tasks such as question answering, summarization, and story generation.


🧾 Model Details

Model Description

  • Model type: Causal language model (decoder-only)
  • Architecture: Transformer-based (Mistral)
  • Model size: 7.24B parameters (weights stored as F32 safetensors)
  • Languages: English
  • Fine-tuned on: Instruction-style conversational data (see the prompt sketch after this list)
  • Library: Hugging Face transformers
  • Finetuned from: mistralai/Mistral-7B-v0.1
  • Shared by: @yuvrajpant56
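
The card does not document the exact prompt template used during SFT. As an illustration only, an instruction-style prompt could be assembled along these lines; the `### Instruction:` / `### Response:` markers are an assumption, not the confirmed training format:

```python
# Hypothetical instruction-style prompt builder. The actual template used for
# fine-tuning is not documented in this card, so adjust the markers to match
# the format of your data.
def build_prompt(instruction: str) -> str:
    return (
        "### Instruction:\n"
        f"{instruction}\n\n"
        "### Response:\n"
    )

prompt = build_prompt("Summarize the water cycle in two sentences.")
```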

🚀 Model Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "yuvrajpant56/Mistral_Posttrain_SFT"

# Load the tokenizer and model weights from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Tokenize a prompt and generate up to 100 new tokens
inputs = tokenizer("Explain gravity in simple terms.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
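
Because the weights are stored in full precision (F32), loading the 7.24B-parameter model as above takes roughly 29 GB of memory. As a minimal sketch (assuming a CUDA GPU and the accelerate package are available), the checkpoint can instead be loaded in half precision and placed on the GPU automatically:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "yuvrajpant56/Mistral_Posttrain_SFT"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Half-precision load; device_map="auto" (provided by accelerate) places layers
# on the available GPU(s) and roughly halves memory use versus F32.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

inputs = tokenizer("Explain gravity in simple terms.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```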

