|
--- |
|
license: other |
|
license_name: gemma-terms-of-use |
|
license_link: https://ai.google.dev/gemma/terms |
|
base_model: google/gemma-2b |
|
datasets: |
|
- ravithejads/samvaad-hi-filtered |
|
- Telugu-LLM-Labs/telugu_teknium_GPTeacher_general_instruct_filtered_romanized |
|
- Telugu-LLM-Labs/telugu_alpaca_yahma_cleaned_filtered_romanized |
|
- abhinand/tamil-alpaca |
|
- Tensoic/airoboros-3.2_kn |
|
- Tensoic/gpt-teacher_kn |
|
- VishnuPJ/Alpaca_Instruct_Malayalam |
|
- Tensoic/Alpaca-Gujarati |
|
- HydraIndicLM/punjabi_alpaca_52K |
|
- HydraIndicLM/bengali_alpaca_dolly_67k |
|
- OdiaGenAI/Odia_Alpaca_instructions_52k |
|
- yahma/alpaca-cleaned |
|
language: |
|
- te |
|
- en |
|
- ta |
|
- ml |
|
- hi |
|
- kn |
|
- gu |
|
- bn |
|
- pa |
|
- or |
|
library_name: transformers |
|
pipeline_tag: text-generation |
|
--- |
|
|
|
# Indic-gemma-2b-finetuned-sft-Navarasa |
|
|
|
This model is based on [google/gemma-2b](https://huggingface.co/google/gemma-2b) and has been LoRA fine-tuned on instruction datasets in 9 Indian languages and English:
|
|
|
1. #### Hindi - [ravithejads/samvaad-hi-filtered](https://huggingface.co/datasets/ravithejads/samvaad-hi-filtered), [HydraIndicLM/hindi_alpaca_dolly_67k](https://huggingface.co/datasets/HydraIndicLM/hindi_alpaca_dolly_67k) (sampled)
|
2. #### Telugu - [Telugu-LLM-Labs/yahma_alpaca_cleaned_telugu_filtered_and_romanized](https://huggingface.co/datasets/Telugu-LLM-Labs/yahma_alpaca_cleaned_telugu_filtered_and_romanized), [Telugu-LLM-Labs/teknium_GPTeacher_general_instruct_telugu_filtered_and_romanized](https://huggingface.co/datasets/Telugu-LLM-Labs/teknium_GPTeacher_general_instruct_telugu_filtered_and_romanized) |
|
3. #### Tamil - [abhinand/tamil-alpaca](https://huggingface.co/datasets/abhinand/tamil-alpaca) |
|
4. #### Kannada - [Tensoic/airoboros-3.2_kn](https://huggingface.co/datasets/Tensoic/airoboros-3.2_kn), [Tensoic/gpt-teacher_kn](https://huggingface.co/datasets/Tensoic/gpt-teacher_kn) |
|
5. #### Malayalam - [VishnuPJ/Alpaca_Instruct_Malayalam](https://huggingface.co/datasets/VishnuPJ/Alpaca_Instruct_Malayalam) |
|
6. #### Gujarati - [Tensoic/Alpaca-Gujarati](https://huggingface.co/datasets/Tensoic/Alpaca-Gujarati) |
|
7. #### Punjabi - [HydraIndicLM/punjabi_alpaca_52K](https://huggingface.co/datasets/HydraIndicLM/punjabi_alpaca_52K) |
|
8. #### Bengali - [HydraIndicLM/bengali_alpaca_dolly_67k](https://huggingface.co/datasets/HydraIndicLM/bengali_alpaca_dolly_67k) (alpaca filtered)
|
9. #### Odia - [OdiaGenAI/Odia_Alpaca_instructions_52k](https://huggingface.co/datasets/OdiaGenAI/Odia_Alpaca_instructions_52k), [OdiaGenAI/gpt-teacher-roleplay-odia-3k](https://huggingface.co/datasets/OdiaGenAI/gpt-teacher-roleplay-odia-3k) |
|
10. #### English - [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned) |
|
|
|
The model was fine-tuned using the [unsloth](https://github.com/unslothai/unsloth) library, and we provide inference code that uses it for faster inference. Alternatively, you can use the Hugging Face Transformers library for inference.
|
|
|
# Training Details: |
|
|
|
The model was trained on approximately 500K instruction samples; a minimal sketch of a comparable LoRA training setup is shown after the list below.
|
1. GPU: 1 A100, 80GB |
|
2. Time: 36.5 Hours |
|
3. Platform: [E2E Networks](https://www.e2enetworks.com/) |
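
The full training scripts are in the code repository linked at the end of this card. For orientation, here is a minimal sketch of a comparable LoRA SFT run with unsloth and trl; the hyperparameters, target modules, and the single English dataset used here are illustrative assumptions, not the exact configuration behind this model.

```python3
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

max_seq_length = 2048

# Load the base model with unsloth.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "google/gemma-2b",
    max_seq_length = max_seq_length,
    dtype = None,
    load_in_4bit = False,
)

# Attach LoRA adapters; rank, alpha and target modules here are assumptions.
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    lora_alpha = 16,
    lora_dropout = 0,
    bias = "none",
    use_gradient_checkpointing = True,
)

# Build training text in the prompt format described below
# (only the English alpaca-cleaned subset is used in this sketch).
prompt = """### Instruction:
{}

### Input:
{}

### Response:
{}"""

def to_text(example):
    text = prompt.format(example["instruction"], example["input"], example["output"])
    return {"text": text + tokenizer.eos_token}

dataset = load_dataset("yahma/alpaca-cleaned", split="train").map(to_text)

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",
    max_seq_length = max_seq_length,
    args = TrainingArguments(
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,
        learning_rate = 2e-4,
        num_train_epochs = 1,
        bf16 = True,
        logging_steps = 50,
        output_dir = "outputs",
    ),
)
trainer.train()
```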
|
# Installation |
|
|
|
`!pip install "unsloth[colab-ampere] @ git+https://github.com/unslothai/unsloth.git"`
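
If you only plan to use the Hugging Face inference path below, the unsloth install is not required; the following standard packages are enough (an unpinned, suggested set, not an official requirements list; add `bitsandbytes` if you enable 4-bit loading):

`!pip install transformers peft accelerate`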
|
|
|
# Input Text Format |
|
|
|
``` |
|
### Instruction: {instruction} |
|
|
|
### Input: {input} |
|
|
|
### Response: {response}
|
``` |
|
|
|
# Inference With Unsloth |
|
|
|
```python3 |
|
from unsloth import FastLanguageModel |
|
import torch |
|
max_seq_length = 2048 |
|
dtype = None # None for auto detection. Float16 for Tesla T4, V100, Bfloat16 for Ampere+ |
|
load_in_4bit = False |
|
model, tokenizer = FastLanguageModel.from_pretrained( |
|
model_name = "Telugu-LLM-Labs/Indic-gemma-2b-finetuned-sft-Navarasa", |
|
max_seq_length = max_seq_length, |
|
dtype = dtype, |
|
load_in_4bit = load_in_4bit, |
|
device_map="auto" |
|
) |
|
FastLanguageModel.for_inference(model) # Enable native 2x faster inference |
|
|
|
input_prompt = """ |
|
### Instruction: |
|
{} |
|
|
|
### Input: |
|
{} |
|
|
|
### Response: |
|
{}""" |
|
|
|
input_text = input_prompt.format( |
|
"Tranlsate following sentence to Hindi.", # instruction |
|
"This model is developed by Telugu LLM Labs", # input |
|
"", # output - leave this blank for generation! |
|
) |
|
|
|
inputs = tokenizer([input_text], return_tensors = "pt").to("cuda") |
|
|
|
outputs = model.generate(**inputs, max_new_tokens = 300, use_cache = True) |
|
response = tokenizer.batch_decode(outputs) |
|
``` |
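
The decoded output still contains the prompt. A small post-processing step (an addition here, not part of the original card) keeps only the generated answer:

```python3
# `response` from the snippet above is a list with one decoded string;
# keep only the text after the "### Response:" marker.
answer = response[0].split("### Response:")[-1].strip()
print(answer)
```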
|
|
|
# Inference with HuggingFace |
|
|
|
```python3 |
|
from peft import AutoPeftModelForCausalLM |
|
from transformers import AutoTokenizer |
|
|
|
model = AutoPeftModelForCausalLM.from_pretrained( |
|
"Telugu-LLM-Labs/Indic-gemma-2b-finetuned-sft-Navarasa", |
|
load_in_4bit = False,

device_map = "auto",

token = hf_token  # your Hugging Face access token
|
) |
|
tokenizer = AutoTokenizer.from_pretrained("Telugu-LLM-Labs/Indic-gemma-2b-finetuned-sft-Navarasa") |
|
|
|
input_prompt = """ |
|
### Instruction: |
|
{} |
|
|
|
### Input: |
|
{} |
|
|
|
### Response: |
|
{}""" |
|
|
|
input_text = input_prompt.format( |
|
"Tranlsate following sentence to Hindi.", # instruction |
|
"This model is developed by Telugu LLM Labs", # input |
|
"", # output - leave this blank for generation! |
|
) |
|
|
|
inputs = tokenizer([input_text], return_tensors = "pt").to("cuda") |
|
|
|
outputs = model.generate(**inputs, max_new_tokens = 300, use_cache = True) |
|
response = tokenizer.batch_decode(outputs)[0] |
|
``` |
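
`AutoPeftModelForCausalLM` loads the LoRA adapter on top of the gemma-2b base weights at runtime. If you prefer a standalone checkpoint instead, the adapter can be merged into the base weights with peft's `merge_and_unload`; the output directory name below is just an example.

```python3
# Merge the LoRA adapter into the base weights and save a standalone model.
merged_model = model.merge_and_unload()
merged_model.save_pretrained("Indic-gemma-2b-Navarasa-merged")
tokenizer.save_pretrained("Indic-gemma-2b-Navarasa-merged")
```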
|
|
|
Refer to the [blog post](https://ravidesetty.medium.com/introducing-indic-gemma-7b-2b-instruction-tuned-model-on-9-indian-languages-navarasa-86bc81b4a282) for sample inputs and outputs.
|
|
|
Please check our [Code Repository](https://github.com/TeluguLLMLabs/Indic-gemma-7b-Navarasa) for training and inference scripts.
|
|
|
# Developers: |
|
|
|
The model is a collaborative effort by [Ravi Theja](https://twitter.com/ravithejads) and [Ramsri Goutham](https://twitter.com/ramsri_goutham). Feel free to DM either of us if you have any questions. |