# 🧠 LoRA Fine-Tuned Adapter for Malayalam ASR

This repository contains a LoRA (Low-Rank Adaptation) fine-tuned adapter for the base model [Whisper-small](https://huggingface.co/openai/whisper-small). The adapter is trained on Malayalam speech data.
## 🧠 Base Model

- Base Model: whisper-small
- Model Type: Seq2Seq
- LoRA Library: PEFT (Hugging Face)
## 🏋️ Training Details

- Framework: PyTorch + 🤗 Transformers + 🤗 PEFT
- LoRA Config (see the sketch below):
  - `r = 32`
  - `lora_alpha = 64`
  - `lora_dropout = 0.05`
  - `bias = "none"`
  - `task_type = None`
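For reference, a minimal sketch of how these hyperparameters could be expressed as a PEFT `LoraConfig`. Note that `target_modules` is an assumption (the `q_proj`/`v_proj` attention projections are a common choice for Whisper) and is not stated in this card:

```python
from peft import LoraConfig

# Sketch only: mirrors the hyperparameters listed above.
lora_config = LoraConfig(
    r=32,
    lora_alpha=64,
    lora_dropout=0.05,
    bias="none",
    task_type=None,
    target_modules=["q_proj", "v_proj"],  # assumed; not confirmed by this card
)
```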
## 🧪 How to Use

```python
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from peft import PeftModel

# Load base model and processor (Whisper is a speech Seq2Seq model,
# not a causal LM, so AutoModelForCausalLM does not apply here)
base_model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
processor = WhisperProcessor.from_pretrained("openai/whisper-small")

# Attach the LoRA adapter to the base model
model = PeftModel.from_pretrained(base_model, "abhiramivs/malayalam_asr_model")
```
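Once the adapter is loaded, transcription follows the usual Whisper pipeline. A minimal sketch, assuming 16 kHz mono audio and a recent 🤗 Transformers version that accepts `language`/`task` in `generate`; the file path is a placeholder:

```python
import torch
import librosa

# Placeholder path; any 16 kHz mono Malayalam clip works
audio, _ = librosa.load("sample_malayalam.wav", sr=16000)
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")

model.eval()
with torch.no_grad():
    generated_ids = model.generate(
        input_features=inputs.input_features,
        language="ml",       # Malayalam
        task="transcribe",
    )

print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```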