# whisper-large-v3-encoder

## How to use

```python
from transformers.models.whisper.modeling_whisper import WhisperEncoder
from transformers import AutoFeatureExtractor
import torch
import librosa

# Load the standalone encoder in half precision on GPU.
encoder = WhisperEncoder.from_pretrained(
    'huseinzol05/whisper-large-v3-encoder',
    torch_dtype=torch.float16,
).cuda()

# The feature extractor from the original model converts raw audio
# into log-mel spectrogram features.
feature_extractor = AutoFeatureExtractor.from_pretrained('openai/whisper-large-v3')
y, sr = librosa.load('audio.mp3', sr=feature_extractor.sampling_rate)

inputs = feature_extractor(y, return_tensors='pt', sampling_rate=feature_extractor.sampling_rate)
inputs['input_features'] = inputs['input_features'].to(torch.float16).cuda()
outputs = encoder(**inputs)  # outputs.last_hidden_state holds the encoder states
```
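As a quick sanity check on the tensor shapes, the sketch below works through the arithmetic of Whisper's standard front end: the feature extractor pads or truncates audio to a 30-second window at 16 kHz and computes a log-mel spectrogram with a hop length of 160 samples, and large-v3 uses 128 mel bins with an encoder hidden size of 1280 (the second convolution has stride 2, halving the sequence length). The numbers are pure arithmetic, not a model run.

```python
# Shape arithmetic for whisper-large-v3's front end (30 s window at 16 kHz).
sampling_rate = 16_000
chunk_length_s = 30        # feature extractor pads/truncates to 30 s
hop_length = 160           # STFT hop used for the log-mel spectrogram
num_mel_bins = 128         # large-v3 uses 128 mel bins (earlier Whispers: 80)
d_model = 1280             # encoder hidden size for the large models

n_samples = sampling_rate * chunk_length_s   # 480_000 samples
n_frames = n_samples // hop_length           # 3_000 mel frames
n_states = n_frames // 2                     # second conv has stride 2 -> 1_500

print((1, num_mel_bins, n_frames))  # input_features shape
print((1, n_states, d_model))       # last_hidden_state shape
```

So for a single clip, `input_features` arrives as `(1, 128, 3000)` and the encoder returns a `(1, 1500, 1280)` hidden-state tensor.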