Part of the SLU Models collection: SSL models fine-tuned for the spoken intent classification task.
This model is a fine-tuned version of facebook/wav2vec2-base on the FSC dataset for the intent classification task.
It achieves the following results on the test set:
This model builds on Facebook's base Wav2Vec2 model, pretrained on 16 kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16 kHz.
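If your recordings were captured at a different rate, resample them before running the feature extractor. A minimal sketch using librosa (the file path is a placeholder):

import librosa

# Load the file at its native sampling rate (placeholder path)
audio_array, native_sr = librosa.load("path_to_audio.wav", sr=None)

# Resample to the 16 kHz expected by Wav2Vec2, if needed
if native_sr != 16000:
    audio_array = librosa.resample(audio_array, orig_sr=native_sr, target_sr=16000)

Note that librosa.load(..., sr=16000), as used in the example below, performs this resampling automatically.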
Intent Classification (IC) assigns utterances to predefined classes that capture the speaker's intent. The dataset used here is Fluent Speech Commands (FSC), where each utterance is tagged with three intent labels: action, object, and location.
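The label set the checkpoint actually predicts is stored in the standard transformers configuration, so you can inspect it without downloading the model weights. A minimal sketch, assuming the usual id2label mapping was filled in during fine-tuning:

import pprint
from transformers import AutoConfig

# Download only the configuration, not the model weights
config = AutoConfig.from_pretrained("alkiskoudounas/wav2vec2-base-fsc")

# id2label maps each output index to its intent class
pprint.pprint(config.id2label)
print("Number of intent classes:", len(config.id2label))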
You can use the model directly in the following manner:
import torch
import librosa
from transformers import AutoModelForAudioClassification, AutoFeatureExtractor

# Load an audio file, resampled to 16 kHz
audio_array, sr = librosa.load("path_to_audio.wav", sr=16000)

# Load model and feature extractor
model = AutoModelForAudioClassification.from_pretrained("alkiskoudounas/wav2vec2-base-fsc")
feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base")

# Extract features
inputs = feature_extractor(audio_array.squeeze(), sampling_rate=feature_extractor.sampling_rate, padding=True, return_tensors="pt")

# Compute logits
with torch.no_grad():
    logits = model(**inputs).logits

# Map the most likely class index to its intent label
predicted_class_id = torch.argmax(logits, dim=-1).item()
print(model.config.id2label[predicted_class_id])
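The feature extractor can also pad several waveforms to a common length (which is what padding=True is for), so batched inference works the same way. A minimal batched sketch; the file paths are placeholders:

import torch
import librosa
from transformers import AutoModelForAudioClassification, AutoFeatureExtractor

model = AutoModelForAudioClassification.from_pretrained("alkiskoudounas/wav2vec2-base-fsc")
feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base")

# Placeholder paths: replace with your own recordings
paths = ["utterance_1.wav", "utterance_2.wav"]
waveforms = [librosa.load(p, sr=16000)[0] for p in paths]

# Pad the batch to the longest waveform and build a single tensor
inputs = feature_extractor(waveforms, sampling_rate=16000, padding=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Softmax over classes, then take the most likely intent per utterance
probs = torch.softmax(logits, dim=-1)
for path, prob in zip(paths, probs):
    top_id = int(prob.argmax())
    print(path, model.config.id2label[top_id], float(prob[top_id]))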
@inproceedings{koudounas2025unlearning,
title={"Alexa, can you forget me?" Machine Unlearning Benchmark in Spoken Language Understanding},
author={Koudounas, Alkis and Savelli, Claudio and Giobergia, Flavio and Baralis, Elena},
booktitle={Proc. Interspeech 2025},
year={2025},
}