---
license: mit
datasets:
- Abhishekcr448/Hinglish-Everyday-Conversations-1M
language:
- hi
- en
base_model:
- google/gemma-3-1b-it
pipeline_tag: text-generation
tags:
- hinglish
library_name: keras-hub
---
# RLM_hingu
RLM_hingu is a fine-tuned version of the Gemma-3 1B Instruct model, adapted for casual Hinglish (Hindi-English) conversation using the `keras-nlp` framework. It is designed for lightweight conversational tasks in Hinglish and is optimized for the JAX backend for efficiency.
## Model Overview
- **Base model:** `gemma3_instruct_1b`
- **Library:** `keras-nlp`
- **Backend:** JAX (recommended for best performance; see the snippet below)
- **Sampling method:** Top-K (k=10)
- **Use case:** conversational Hinglish response generation
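
Since Keras 3 picks its backend from the `KERAS_BACKEND` environment variable, it has to be set before Keras (or `keras-nlp`) is imported. A minimal sketch, assuming Keras 3 and the `jax` package are installed:

```python
import os

# Must run before any keras / keras_nlp import; otherwise the default backend is used.
os.environ["KERAS_BACKEND"] = "jax"

from keras_nlp.models import Gemma3CausalLM  # imported only after the backend is set
```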
## Usage
```python
from keras_nlp.models import Gemma3CausalLM
from keras_nlp.samplers import TopKSampler

# Load the fine-tuned weights from the Hugging Face Hub.
model = Gemma3CausalLM.from_preset("hf://rudrashah/RLM_hingu")

# Use Top-K sampling with k=10, as listed in the overview above.
model.compile(sampler=TopKSampler(k=10))

# Build the prompt in the Question/Answer format used during fine-tuning.
template = "Question:\n{question}\n\nAnswer:\n{answer}"
prompt = template.format(
    question="Rudra acha ladka hai?",
    answer="",
)

output = model.generate(prompt, max_length=256)
print(output)
```
Example output:

```
Question:
Rudra acha ladka hai?

Answer:
haan, sabse best hai.
```
To run RLM_hingu, install `keras-nlp` with the JAX backend, paste the code above into a Python session, and run it; the first call downloads the model weights from the Hub, so initial generation may take a moment.
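
For repeated queries, the prompt formatting and answer extraction can be wrapped in a small helper. This is a hypothetical convenience function, not part of the released model; it relies on the fact that `generate` echoes the prompt, and simply keeps the text after the `Answer:` marker defined by the template above.

```python
def ask_hinglish(model, question, max_length=256):
    """Format a question with the fine-tuning template and return only the answer text."""
    template = "Question:\n{question}\n\nAnswer:\n{answer}"
    prompt = template.format(question=question, answer="")
    output = model.generate(prompt, max_length=max_length)
    # The generated text includes the prompt, so keep only what follows "Answer:".
    return output.split("Answer:\n", 1)[-1].strip()

# Example (illustrative question):
print(ask_hinglish(model, "Aaj ka plan kya hai?"))
```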