Dataset schema (each record below lists these fields in order):
- title: string (length 14-154)
- paper_url: string (length 42)
- authors: list of strings (1-21 names)
- type: string (3 classes)
- abstract: string (length 413-2.52k)
- keywords: string (length 4-397)
- TL;DR: string (length 5-250)
- submission_number: int64 (2-14.3k)
- arxiv_id: string (length 10)
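Given this schema, a minimal sketch of how such a table could be loaded and filtered with the Hugging Face `datasets` library follows; the repository id below is a placeholder, since the actual dataset name is not stated here.

```python
from datasets import load_dataset

# Placeholder repository id; the real dataset name is not stated here.
ds = load_dataset("some-org/openreview-paper-metadata", split="train")

# Keep only Spotlight submissions and print a compact view of each row.
spotlights = ds.filter(lambda row: row["type"] == "Spotlight")
for row in spotlights:
    print(f'{row["arxiv_id"]}  {row["title"]}')
    print(f'  TL;DR: {row["TL;DR"]}')
```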
Both Ears Wide Open: Towards Language-Driven Spatial Audio Generation
https://openreview.net/forum?id=qPx3i9sMxv
[ "Peiwen Sun", "Sitong Cheng", "Xiangtai Li", "Zhen Ye", "Huadai Liu", "Honggang Zhang", "Wei Xue", "Yike Guo" ]
Spotlight
Recently, diffusion models have achieved great success in mono-channel audio generation. However, when it comes to stereo audio generation, the soundscapes often involve complex scenes with multiple objects and directions. Controlling stereo audio with spatial contexts remains challenging due to high data costs and unstable generative models. To the best of our knowledge, this work represents the first attempt to address these issues. We first construct a large-scale, simulation-based, and GPT-assisted dataset, BEWO-1M, with abundant soundscapes and descriptions that even cover moving and multiple sources. Beyond the text modality, we have also acquired a set of images and rationally paired stereo audio through retrieval to advance multimodal generation. Existing audio generation models tend to generate rather random and indistinct spatial audio. To provide accurate guidance for Latent Diffusion Models, we introduce the SpatialSonic model, which utilizes spatial-aware encoders and azimuth state matrices to produce reasonable spatial guidance. By leveraging spatial guidance, our model not only achieves the objective of generating immersive and controllable spatial audio from text but also extends to other modalities as a pioneering attempt. Finally, under fair settings, we conduct subjective and objective evaluations on simulated and real-world data to compare our approach with prevailing methods. The results demonstrate the effectiveness of our method, highlighting its capability to generate spatial audio that adheres to physical rules.
audio generation, multimodal learning, stereo audio
A multimodal-guided spatial audio generation dataset and method for immersive soundscapes
102
2410.10676
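The SpatialSonic abstract above mentions azimuth state matrices as spatial guidance but does not define them; the sketch below is only one plausible reading, in which a moving source's azimuth trajectory is discretized into a per-frame one-hot matrix. The function name, frame count, and binning scheme are assumptions, not the paper's implementation.

```python
import numpy as np

def azimuth_state_matrix(start_deg, end_deg, n_frames=64, n_bins=9):
    """Illustrative only: discretize a linearly moving source's azimuth
    (-90 = far left, +90 = far right) into an (n_frames x n_bins) one-hot
    matrix that could act as coarse per-frame spatial guidance."""
    azimuths = np.linspace(start_deg, end_deg, n_frames)
    bins = np.clip(((azimuths + 90.0) / 180.0 * n_bins).astype(int), 0, n_bins - 1)
    matrix = np.zeros((n_frames, n_bins), dtype=np.float32)
    matrix[np.arange(n_frames), bins] = 1.0
    return matrix

# A source such as a barking dog moving from the listener's left to right.
guidance = azimuth_state_matrix(-80.0, 80.0)
print(guidance.shape)  # (64, 9)
```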
Moner: Motion Correction in Undersampled Radial MRI with Unsupervised Neural Representation
https://openreview.net/forum?id=OdnqG1fYpo
[ "Qing Wu", "Chenhe Du", "Xuanyu Tian", "Jingyi Yu", "Yuyao Zhang", "Hongjiang Wei" ]
Spotlight
Motion correction (MoCo) in radial MRI is a particularly challenging problem due to the unpredictability of subject movement. Current state-of-the-art (SOTA) MoCo algorithms often rely on extensive high-quality MR images to pre-train neural networks, which constrains the solution space and leads to outstanding image reconstruction results. However, the need for large-scale datasets significantly increases costs and limits model generalization. In this work, we propose Moner, an unsupervised MoCo method that jointly reconstructs artifact-free MR images and estimates accurate motion from undersampled, rigid motion-corrupted k-space data, without requiring any training data. Our core idea is to leverage the continuous prior of implicit neural representation (INR) to constrain this ill-posed inverse problem, facilitating optimal solutions. Specifically, we integrate a quasi-static motion model into the INR, granting it the ability to correct the subject's motion. To stabilize model optimization, we reformulate radial MRI reconstruction as a back-projection problem using the Fourier-slice theorem. Additionally, we propose a novel coarse-to-fine hash encoding strategy, significantly enhancing MoCo accuracy. Experiments on multiple MRI datasets show that Moner achieves performance comparable to SOTA MoCo techniques on in-domain data, while demonstrating significant improvements on out-of-domain data. The code is available at: https://github.com/iwuqing/Moner
MRI Reconstruction, Motion Correction, Neural Representation, NeRF, Unsupervised Learning
We propose Moner, an unsupervised method that jointly recovers high-quality MR images and estimates accurate motion from undersampled and rigid motion-corrupted radial MRI measurement data, without the need for any extra data.
101
2409.16921
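Moner's abstract describes integrating a quasi-static rigid motion model into an INR; the sketch below illustrates the general idea of applying a per-spoke rotation and translation to sampling coordinates before querying a coordinate network. All names and the 2D setup are assumptions for illustration, not the released code.

```python
import numpy as np

def apply_rigid_motion(coords, angle_rad, shift):
    """Illustrative quasi-static motion model: rotate and translate the 2D
    sampling coordinates of one radial spoke before querying the INR, so a
    single static scene can explain motion-corrupted acquisitions."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    rotation = np.array([[c, -s], [s, c]])
    return coords @ rotation.T + shift

# Coordinates along one spoke (normalized to [-1, 1]) and a hypothetical
# per-spoke motion estimate; in Moner these would be optimized jointly
# with the coordinate network rather than fixed by hand.
spoke = np.stack([np.linspace(-1.0, 1.0, 128), np.zeros(128)], axis=-1)
moved = apply_rigid_motion(spoke, angle_rad=0.05, shift=np.array([0.01, -0.02]))
print(moved.shape)  # (128, 2)
```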
UniMatch: Universal Matching from Atom to Task for Few-Shot Drug Discovery
https://openreview.net/forum?id=v9EjwMM55Y
[ "Ruifeng Li", "Mingqian Li", "Wei Liu", "Yuhua Zhou", "Xiangxin Zhou", "Yuan Yao", "Qiang Zhang", "Hongyang Chen" ]
Spotlight
Drug discovery is crucial for identifying candidate drugs for various diseases. However, its low success rate often results in a scarcity of annotations, posing a few-shot learning problem. Existing methods primarily focus on single-scale features, overlooking the hierarchical molecular structures that determine different molecular properties. To address these issues, we introduce Universal Matching Networks (UniMatch), a dual matching framework that integrates explicit hierarchical molecular matching with implicit task-level matching via meta-learning, bridging multi-level molecular representations and task-level generalization. Specifically, our approach explicitly captures structural features across multiple levels (atoms, substructures, and molecules) via hierarchical pooling and matching, facilitating precise molecular representation and comparison. Additionally, we employ a meta-learning strategy for implicit task-level matching, allowing the model to capture shared patterns across tasks and quickly adapt to new ones. This unified matching framework ensures effective molecular alignment while leveraging shared meta-knowledge for fast adaptation. Our experimental results demonstrate that UniMatch outperforms state-of-the-art methods on the MoleculeNet and FS-Mol benchmarks, achieving improvements of 2.87% in AUROC and 6.52% in ∆AUPRC. UniMatch also shows excellent generalization ability on the Meta-MolNet benchmark.
Few-shot molecular representation learning, machine learning
We introduce UniMatch, which performs matching across multiple levels, from atoms to tasks, to enhance molecular property predictions in few-shot learning scenarios.
45
2502.12453
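UniMatch's explicit hierarchical matching is described as pooling and comparing features at the atom, substructure, and molecule levels; the toy sketch below mimics that with fixed-size mean pooling and cosine similarity. The pooling scheme and feature dimensions are assumptions; the paper uses learned hierarchical pooling within a meta-learning framework.

```python
import numpy as np

def hierarchical_match(query_atoms, support_atoms, pools=(1, 4, None)):
    """Toy multi-level matching: mean-pool per-atom features into
    progressively coarser groups (atom, substructure, molecule) and average
    cosine similarity across levels. Fixed-size chunking stands in for the
    learned hierarchical pooling used by the paper."""
    def pool(x, size):
        if size is None:  # molecule level: a single pooled vector
            return x.mean(axis=0, keepdims=True)
        n = (len(x) // size) * size
        return x[:n].reshape(-1, size, x.shape[-1]).mean(axis=1)

    def cosine(a, b):
        a = a / np.linalg.norm(a, axis=-1, keepdims=True)
        b = b / np.linalg.norm(b, axis=-1, keepdims=True)
        return float((a @ b.T).mean())

    return float(np.mean([cosine(pool(query_atoms, p), pool(support_atoms, p))
                          for p in pools]))

# Hypothetical per-atom embeddings for a query and a support molecule.
rng = np.random.default_rng(0)
score = hierarchical_match(rng.normal(size=(20, 16)), rng.normal(size=(24, 16)))
print(round(score, 3))
```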
OASIS Uncovers: High-Quality T2I Models, Same Old Stereotypes
https://openreview.net/forum?id=L6IgkJvcgV
[ "Sepehr Dehdashtian", "Gautam Sreekumar", "Vishnu Boddeti" ]
Spotlight
Images generated by text-to-image (T2I) models often exhibit visual biases and stereotypes of concepts such as culture and profession. Existing quantitative measures of stereotypes are based on statistical parity, which does not align with the sociological definition of stereotypes and, therefore, incorrectly categorizes biases as stereotypes. Instead of oversimplifying stereotypes as biases, we propose a quantitative measure of stereotypes that aligns with their sociological definition. We then propose OASIS to measure the stereotypes in a generated dataset and understand their origins within the T2I model. OASIS includes two scores to measure stereotypes from a generated image dataset: **(M1)** Stereotype Score to measure the distributional violation of stereotypical attributes, and **(M2)** WALS to measure spectral variance in the images along a stereotypical attribute. OASIS also includes two methods to understand the origins of stereotypes in T2I models: **(U1)** StOP to discover attributes that the T2I model internally associates with a given concept, and **(U2)** SPI to quantify the emergence of stereotypical attributes in the latent space of the T2I model during image generation. Despite the considerable progress in image fidelity, using OASIS, we conclude that newer T2I models such as FLUX.1 and SDv3 contain strong stereotypical predispositions about concepts and still generate images with widespread stereotypical attributes. Additionally, the prevalence of stereotypes worsens for nationalities with lower Internet footprints.
Stereotype Measurement, Responsible AI, Trustworthy AI, Interpretability, Generative AI, Text-to-Image Models, Multimodal Models
We propose a toolbox to quantify stereotypes in Text-to-Image models
17
2501.00962
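OASIS's Stereotype Score is described as measuring the distributional violation of stereotypical attributes; the sketch below is a stand-in that compares an observed attribute distribution against a reference one via total variation distance. The exact definition in the paper differs, and all labels here are hypothetical.

```python
from collections import Counter

def stereotype_score(generated_attrs, reference_dist):
    """Stand-in for 'distributional violation': total variation distance
    between the attribute distribution observed in generated images and a
    reference distribution. Not the paper's exact Stereotype Score."""
    counts = Counter(generated_attrs)
    total = sum(counts.values())
    support = set(counts) | set(reference_dist)
    return 0.5 * sum(
        abs(counts.get(a, 0) / total - reference_dist.get(a, 0.0))
        for a in support
    )

# Hypothetical attribute labels detected in images generated for one concept.
generated = ["attr_a", "attr_a", "attr_b", "attr_a", "attr_b", "attr_a"]
reference = {"attr_a": 0.2, "attr_b": 0.5, "attr_c": 0.3}
print(round(stereotype_score(generated, reference), 3))  # 0.467
```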
Instance-dependent Early Stopping
https://openreview.net/forum?id=P42DbV2nuV
[ "Suqin Yuan", "Runqi Lin", "Lei Feng", "Bo Han", "Tongliang Liu" ]
Spotlight
In machine learning practice, early stopping has been widely used to regularize models and can save computational costs by halting the training process when the model's performance on a validation set stops improving. However, conventional early stopping applies the same stopping criterion to all instances without considering their individual learning statuses, which leads to redundant computations on instances that are already well-learned. To further improve efficiency, we propose an Instance-dependent Early Stopping (IES) method that adapts the early stopping mechanism from the entire training set to the instance level, based on the core principle that once the model has mastered an instance, training on it should stop. IES considers an instance as mastered if the second-order differences of its loss value remain within a small range around zero. This offers a more consistent measure of an instance's learning status compared with directly using the loss value, and thus allows for a unified threshold to determine when an instance can be excluded from further backpropagation. We show that excluding mastered instances from backpropagation can increase the gradient norms, thereby accelerating the decrease of the training loss and speeding up the training process. Extensive experiments on benchmarks demonstrate that the IES method can reduce backpropagation instances by 10%-50% while maintaining or even slightly improving the test accuracy and transfer learning performance of a model.
Early Stopping, Supervised Learning, Deep Learning, Efficiency, Sample Selection, Data Pruning
We propose an instance-dependent early stopping method that stops training at the instance level by determining whether the model has fully learned an instance. It reduces computational costs while maintaining or even improving model performance.
2
2502.07547
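The IES abstract gives a concrete mastery criterion: an instance is considered mastered once the second-order differences of its loss values stay within a small range around zero. A minimal sketch of that check follows, with an assumed window size and threshold.

```python
from collections import deque

def is_mastered(loss_history, eps=1e-3):
    """An instance counts as mastered once the second-order differences of
    its recent loss values all stay within a small band around zero; the
    window length and threshold here are assumed, not taken from the paper."""
    if len(loss_history) < 3:
        return False
    losses = list(loss_history)
    second_diffs = [losses[i] - 2 * losses[i - 1] + losses[i - 2]
                    for i in range(2, len(losses))]
    return all(abs(d) < eps for d in second_diffs)

# Track a short per-instance loss window; once mastered, the instance would
# be excluded from further backpropagation.
history = deque([0.412, 0.4105, 0.4095, 0.4089, 0.4085], maxlen=5)
print(is_mastered(history))  # True: the loss curve has flattened out
```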