πŸ”§ LoRA Fine-Tuned Adapter for Malayalam Speech Recognition

This repository contains a LoRA (Low-Rank Adaptation) fine-tuned adapter for the base model [openai/whisper-small](https://huggingface.co/openai/whisper-small). The adapter was trained on Malayalam speech data.


🧠 Base Model

[openai/whisper-small](https://huggingface.co/openai/whisper-small)
πŸ‹οΈ Training Details

  • Framework: PyTorch + πŸ€— Transformers + πŸ€— PEFT
  • LoRA Config:
    • r = 32
    • lora_alpha = 64
    • lora_dropout = 0.05
    • bias = "none"
    • `task_type = None

πŸ§ͺ How to Use

```python
from transformers import WhisperProcessor, WhisperForConditionalGeneration
from peft import PeftModel

# Load the base model and processor (Whisper is an encoder-decoder speech
# model, so AutoModelForCausalLM/AutoTokenizer do not apply here)
base_model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
processor = WhisperProcessor.from_pretrained("openai/whisper-small")

# Attach the LoRA adapter
model = PeftModel.from_pretrained(base_model, "abhiramivs/malayalam_asr_model")
```