arXiv:2506.18623

Efficient and Generalizable Speaker Diarization via Structured Pruning of Self-Supervised Models

Published on Jun 23, 2025

AI-generated summary

Compressing self-supervised speaker diarization models with structured pruning and knowledge distillation yields up to an 80% reduction in model size and 4x faster inference without performance loss.

Abstract

Self-supervised learning (SSL) models such as WavLM have brought substantial improvements to speaker diarization by providing rich contextual representations. However, the high computational and memory costs of these models hinder their deployment in real-time and resource-constrained scenarios. In this work, we present a comprehensive study on compressing SSL-based diarization models through structured pruning guided by knowledge distillation. Building upon our previous work, we extend the analysis to include pruning objectives based on multiply-accumulate operations (MACs), investigate module-wise and progressive pruning strategies, and examine the impact of training data quantity. Experimental results show that our method reduces model size by up to 80% without degrading performance, achieving up to 4x faster inference on a single GPU. We further perform large-scale evaluations on a diverse compound dataset comprising eight public diarization corpora, where our best pruned model achieves state-of-the-art performance across most conditions. Additionally, we show strong generalization to the CHiME-6 dataset, attaining performance comparable to the third-place system in the CHiME-7 challenge without any domain adaptation. All models and code are publicly released to support reproducibility and future research.
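To make the pruning-plus-distillation recipe concrete, below is a minimal PyTorch sketch, not the paper's released code. It assumes a hard-concrete (L0-relaxation) gate per prunable FFN channel, an MSE distillation loss against a frozen teacher, and a penalty that pushes the expected fraction of surviving channels (a rough proxy for MACs) toward a budget. `HardConcreteGate`, `GatedFFN`, and `macs_target` are illustrative names, not identifiers from the paper.

```python
# Sketch: structured pruning guided by knowledge distillation.
# Assumption: hard-concrete gates (Louizos et al.'s L0 relaxation) over
# FFN channels stand in for the paper's prunable units.

import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class HardConcreteGate(nn.Module):
    """One learnable gate per prunable unit (e.g. an FFN hidden channel)."""

    def __init__(self, n_units, beta=0.5, gamma=-0.1, zeta=1.1):
        super().__init__()
        self.log_alpha = nn.Parameter(torch.zeros(n_units))
        self.beta, self.gamma, self.zeta = beta, gamma, zeta

    def forward(self):
        if self.training:
            # Stochastic relaxation: sample, then stretch and clip to [0, 1].
            u = torch.rand_like(self.log_alpha).clamp(1e-6, 1 - 1e-6)
            s = torch.sigmoid((u.log() - (1 - u).log() + self.log_alpha) / self.beta)
        else:
            s = torch.sigmoid(self.log_alpha / self.beta)
        return (s * (self.zeta - self.gamma) + self.gamma).clamp(0.0, 1.0)

    def expected_open(self):
        # Differentiable P(gate > 0); drives the sparsity/MACs penalty.
        return torch.sigmoid(self.log_alpha - self.beta * math.log(-self.gamma / self.zeta))


class GatedFFN(nn.Module):
    """Feed-forward block whose hidden channels can be pruned as groups."""

    def __init__(self, d_model=768, d_hidden=3072):
        super().__init__()
        self.fc1 = nn.Linear(d_model, d_hidden)
        self.fc2 = nn.Linear(d_hidden, d_model)
        self.gate = HardConcreteGate(d_hidden)

    def forward(self, x):
        return self.fc2(F.gelu(self.fc1(x)) * self.gate())


teacher = GatedFFN()                            # stands in for the full frozen SSL model
student = GatedFFN()
student.load_state_dict(teacher.state_dict())   # student starts from teacher weights
teacher.eval()
for p in teacher.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(student.parameters(), lr=1e-4)
macs_target = 0.5                               # hypothetical budget: keep ~50% of FFN MACs

for step in range(100):
    x = torch.randn(8, 50, 768)                 # fake frame-level features
    with torch.no_grad():
        t_out = teacher(x)
    s_out = student(x)

    distill = F.mse_loss(s_out, t_out)          # match the teacher's representations
    macs_frac = student.gate.expected_open().mean()
    budget = F.relu(macs_frac - macs_target)    # penalize only when over budget

    loss = distill + 10.0 * budget
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In a full implementation the gates would presumably span all prunable structures of the SSL encoder (attention heads, FFN dimensions) across every layer, and units whose gates collapse to zero would be physically removed after training, which is what turns the sparsity into real memory savings and inference speedups.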

