# Hinglish Fine-tuned Conversational Model
This model is fine-tuned on Hinglish conversation data using LoRA adapters. It's designed to respond to queries in Hinglish (a mix of Hindi and English).
## Model Details
- Base model: facebook/opt-350m
- Fine-tuning: LoRA adapters (a typical adapter setup is sketched after this list)
- Training dataset: one-thing/chatbot_arena_conversations_hinglish
- Language: Hinglish (Hindi-English code-mixed)
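The training hyperparameters for this model are not published. As a rough reference, the sketch below shows how LoRA adapters are typically attached to `facebook/opt-350m` with the `peft` library; the rank, alpha, dropout, and target modules are illustrative assumptions, not the values used for this model.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

# Illustrative LoRA settings -- the actual values used for this model
# were not published
lora_config = LoraConfig(
    r=8,                                  # low-rank dimension (assumed)
    lora_alpha=16,                        # scaling factor (assumed)
    lora_dropout=0.05,                    # adapter dropout (assumed)
    target_modules=["q_proj", "v_proj"],  # attention projections in OPT
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trained
```

Training only the low-rank adapter matrices keeps the trainable parameter count to a small fraction of the 350M base weights, which is why the repository ships adapters rather than a full model.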
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel, PeftConfig

# Load the adapter configuration to find the base model
config = PeftConfig.from_pretrained("Subh775/hinglish-finetuned-demo")

# Load the base model and tokenizer
base_model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

# Attach the LoRA adapters to the base model
model = PeftModel.from_pretrained(base_model, "Subh775/hinglish-finetuned-demo")

# Prepare input
prompt = "Human: Explain what is an Artificial Neural Network?\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt")

# Generate with sampling parameters tuned to reduce repetition
outputs = model.generate(
    **inputs,
    max_new_tokens=100,       # Cap the length of the generated continuation
    min_new_tokens=10,        # Force generating at least some new tokens
    temperature=0.86,         # Add some randomness
    top_p=0.9,                # Nucleus sampling
    no_repeat_ngram_size=3,   # Avoid repeating trigrams
    repetition_penalty=1.5,   # Penalize repetition more heavily
    do_sample=True,           # Use sampling instead of greedy decoding
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
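Note that the decoded output includes the prompt itself. A common follow-up, sketched below, is to keep only the newly generated tokens and, for deployment, to merge the adapter weights into the base model with `peft`'s `merge_and_unload()`; the output directory name here is just an example.

```python
# Keep only the newly generated tokens (everything after the prompt)
prompt_length = inputs["input_ids"].shape[1]
reply = tokenizer.decode(outputs[0][prompt_length:], skip_special_tokens=True)
print(reply)

# Optional: merge the LoRA weights into the base model for PEFT-free
# inference (returns a plain transformers model)
merged_model = model.merge_and_unload()
merged_model.save_pretrained("hinglish-merged")  # example path
```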
## Limitations
- The model is fine-tuned on a specific dataset and may not generalize to all Hinglish dialects or topics.
- It works best for conversational queries similar to those in the training data.