arXiv:2507.07867

Re-Bottleneck: Latent Re-Structuring for Neural Audio Autoencoders

Published on Jul 10 · Submitted by dbralios on Jul 11

AI-generated summary

A Re-Bottleneck framework modifies pre-trained autoencoders to enforce specific latent structures, improving performance in diverse downstream applications.

Abstract
Neural audio codecs and autoencoders have emerged as versatile models for audio compression, transmission, feature extraction, and latent-space generation. However, a key limitation is that most are trained to maximize reconstruction fidelity, often neglecting the specific latent structure necessary for optimal performance in diverse downstream applications. We propose a simple, post-hoc framework to address this by modifying the bottleneck of a pre-trained autoencoder. Our method introduces a "Re-Bottleneck", an inner bottleneck trained exclusively through latent-space losses to instill user-defined structure. We demonstrate the framework's effectiveness in three experiments. First, we enforce an ordering on latent channels without sacrificing reconstruction quality. Second, we align latents with semantic embeddings, analyzing the impact on downstream diffusion modeling. Third, we introduce equivariance, ensuring that a filtering operation on the input waveform directly corresponds to a specific transformation in the latent space. Ultimately, our Re-Bottleneck framework offers a flexible and efficient way to tailor representations of neural audio models, enabling them to seamlessly meet the varied demands of different applications with minimal additional training.
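
The recipe described in the abstract is compact enough to sketch. Below is a minimal, illustrative PyTorch sketch: a frozen outer autoencoder, a small trainable inner bottleneck, and purely latent-space losses. The module names (FrozenAudioAE, ReBottleneck), the 1x1-convolution inner maps, and the nested-dropout-style ordering loss are assumptions for illustration, not the paper's exact architecture or objectives.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in for a pre-trained audio autoencoder; in practice this would be
# an existing codec/autoencoder checkpoint, kept frozen throughout.
class FrozenAudioAE(nn.Module):
    def __init__(self, latent_dim=64):
        super().__init__()
        self.encoder = nn.Conv1d(1, latent_dim, kernel_size=16, stride=8, padding=4)
        self.decoder = nn.ConvTranspose1d(latent_dim, 1, kernel_size=16, stride=8, padding=4)
        for p in self.parameters():
            p.requires_grad = False  # the outer autoencoder is never updated

    def encode(self, x):
        return self.encoder(x)

# Inner "Re-Bottleneck": small trainable maps into and out of a
# re-structured latent space, trained only with latent-space losses.
class ReBottleneck(nn.Module):
    def __init__(self, latent_dim=64):
        super().__init__()
        self.re_encode = nn.Conv1d(latent_dim, latent_dim, kernel_size=1)
        self.re_decode = nn.Conv1d(latent_dim, latent_dim, kernel_size=1)

    def forward(self, z):
        s = self.re_encode(z)        # structured latent
        return s, self.re_decode(s)  # and its reconstruction of z

ae, rb = FrozenAudioAE(), ReBottleneck()
opt = torch.optim.Adam(rb.parameters(), lr=1e-4)

x = torch.randn(8, 1, 4096)            # toy batch of waveforms
with torch.no_grad():
    z = ae.encode(x)                   # frozen outer encoder

s, z_hat = rb(z)
recon_loss = F.mse_loss(z_hat, z)      # preserve the original latent

# Illustrative structure loss (channel ordering): reconstruct z from only
# the first k channels of s, nested-dropout style, for a random cutoff k.
k = torch.randint(1, s.shape[1] + 1, ()).item()
s_trunc = torch.cat([s[:, :k], torch.zeros_like(s[:, k:])], dim=1)
order_loss = F.mse_loss(rb.re_decode(s_trunc), z)

opt.zero_grad()
(recon_loss + order_loss).backward()
opt.step()
```

Because only the inner maps receive gradients, training the Re-Bottleneck is cheap relative to retraining the autoencoder itself, which is the efficiency argument the abstract makes.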

Community

Paper submitter

Re-Bottleneck is a framework that efficiently restructures the latent spaces of frozen, pre-trained neural audio autoencoders. It enables the post-hoc imposition of desired properties like channel ordering, semantic alignment, and transformation equivariance, improving downstream tasks such as text-to-audio generation while preserving reconstruction quality.
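
To make the equivariance property concrete, here is a hedged continuation of the sketch above (reusing the ae and rb objects defined there): the objective asks that low-pass filtering the input waveform correspond to a fixed, user-chosen transform of the re-structured latent. The moving-average lowpass filter and the channel-masking latent_transform are illustrative stand-ins, not the paper's specific choices.

```python
import torch
import torch.nn.functional as F

# Continues the sketch above: `ae` (frozen autoencoder) and `rb`
# (trainable Re-Bottleneck) are the objects defined there.

def lowpass(x, kernel_size=9):
    # Simple moving-average low-pass filter on the waveform.
    weight = torch.full((1, 1, kernel_size), 1.0 / kernel_size)
    return F.conv1d(x, weight, padding=kernel_size // 2)

def latent_transform(s):
    # Assumed target transform: zero out the top half of the channels.
    # Any user-chosen latent-space operation could take its place.
    mask = torch.ones_like(s)
    mask[:, s.shape[1] // 2 :] = 0.0
    return s * mask

x = torch.randn(8, 1, 4096)
with torch.no_grad():
    z = ae.encode(x)
    z_filt = ae.encode(lowpass(x))

s, _ = rb(z)
s_filt, _ = rb(z_filt)

# Equivariance: filtering the input should match transforming the latent.
equiv_loss = F.mse_loss(s_filt, latent_transform(s))
# ...added to the latent reconstruction loss during Re-Bottleneck training.
```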
