## Inference

The model can be used directly (without a language model) as follows:

Using the HuggingSound library:
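A minimal sketch, assuming the `huggingsound` package is installed and supports this checkpoint; the audio path is a placeholder:

```python
from huggingsound import SpeechRecognitionModel

# load the fine-tuned checkpoint through HuggingSound
model = SpeechRecognitionModel("gymeee/demo_code_switching")

# transcribe one or more audio files (replace with your own paths)
transcriptions = model.transcribe(["speech.wav"])
print(transcriptions)
```

Writing your own inference script with the Transformers library: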

```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
import torch
import torchaudio

# load model and processor
processor = Wav2Vec2Processor.from_pretrained("gymeee/demo_code_switching")
model = Wav2Vec2ForCTC.from_pretrained("gymeee/demo_code_switching")

# load speech and resample to the 16 kHz rate expected by Wav2Vec2
speech_array, sampling_rate = torchaudio.load("speech.wav")
if sampling_rate != 16_000:
    speech_array = torchaudio.functional.resample(speech_array, sampling_rate, 16_000)

# tokenize
input_values = processor(
    speech_array[0], sampling_rate=16_000, return_tensors="pt", padding="longest"
).input_values  # Batch size 1

# retrieve logits
with torch.no_grad():
    logits = model(input_values).logits

# take argmax and decode (greedy CTC decoding, no language model)
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)

print(transcription)
```