Abstract
We introduce Feasible Learning (FL), a sample-centric learning paradigm where models are trained by solving a feasibility problem that bounds the loss for each training sample. In contrast to the ubiquitous Empirical Risk Minimization (ERM) framework, which optimizes for average performance, FL demands satisfactory performance on every individual data point. Since any model that meets the prescribed performance threshold is a valid FL solution, the choice of optimization algorithm and its dynamics play a crucial role in shaping the properties of the resulting solutions. In particular, we study a primal-dual approach which dynamically re-weights the importance of each sample during training. To address the challenge of setting a meaningful threshold in practice, we introduce a relaxation of FL that incorporates slack variables of minimal norm. Our empirical analysis, spanning image classification, age regression, and preference optimization in large language models, demonstrates that models trained via FL can learn from data while displaying improved tail behavior compared to ERM, with only a marginal impact on average performance.
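The abstract describes training as a feasibility problem, per-sample loss constraints ℓ_i(θ) ≤ ε, solved via primal-dual dynamics that re-weight samples. Below is a minimal sketch of one such primal-dual step, assuming a PyTorch classifier with a cross-entropy loss; the function name, the threshold `epsilon`, the dual step size `dual_lr`, and the multiplier tensor `lam` are illustrative assumptions, not the paper's exact implementation (which also includes a slack-variable relaxation not shown here).

```python
import torch
import torch.nn.functional as F

def primal_dual_step(model, opt, x, y, idx, lam, epsilon, dual_lr=0.01):
    """One primal-dual step on the Lagrangian sum_i lam_i * (loss_i - epsilon):
    gradient descent in the model parameters, projected gradient ascent in the
    per-sample multipliers lam (one entry per training point, indexed by idx).
    """
    # Per-sample losses and constraint violations loss_i - epsilon.
    per_sample_loss = F.cross_entropy(model(x), y, reduction="none")
    violation = (per_sample_loss - epsilon).detach()

    # Primal update: weighted loss, with the multipliers as sample weights.
    opt.zero_grad()
    (lam[idx] * per_sample_loss).mean().backward()
    opt.step()

    # Dual update: ascend on the multipliers, then project onto lam_i >= 0.
    with torch.no_grad():
        lam[idx] = torch.clamp(lam[idx] + dual_lr * violation, min=0.0)

# Usage (hypothetical): lam = torch.ones(len(train_set)) gives every sample
# equal initial weight; batches must carry their dataset indices idx.
```

The multipliers behave as dynamic sample weights: λ_i grows while sample i's loss exceeds ε and decays toward zero once the constraint is met, which is the per-sample re-weighting the abstract refers to.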
Community
Similar papers recommended by the Semantic Scholar API:
- An Efficient Unsupervised Framework for Convex Quadratic Programs via Deep Unrolling (2024)
- Differentiable Convex Optimization Layers in Neural Architectures: Foundations and Perspectives (2024)
- A Hessian-informed hyperparameter optimization for differential learning rate (2025)
- Marvel: Accelerating Safe Online Reinforcement Learning with Finetuned Offline Policy (2024)
- Representation and Regression Problems in Neural Networks: Relaxation, Generalization, and Numerics (2024)
- EDoRA: Efficient Weight-Decomposed Low-Rank Adaptation via Singular Value Decomposition (2025)
- Towards Simple and Provable Parameter-Free Adaptive Gradient Methods (2024)