DivMerge: A divergence-based model merging method for multi-tasking
Abstract
Multi-task learning (MTL) is often achieved by merging datasets before fine-tuning, but the growing availability of fine-tuned models has led to new approaches such as model merging via task arithmetic. A major challenge in this setting is task interference, which worsens as the number of tasks increases. We propose a method that merges models trained on different tasks into a single model, maintaining strong performance across all tasks. Our approach leverages Jensen-Shannon divergence to guide the merging process without requiring additional labelled data, and automatically balances task importance. Unlike existing methods, our approach remains robust as the number of tasks grows and consistently outperforms prior work.
Community
This paper introduces a new model merging method that preserves high performance on each task as the number of merged tasks increases. The key idea is to use Jensen-Shannon divergence to drive the optimization of the interpolation weights, which removes the need for labeled data and automatically balances task importance. Unlike previous methods, this approach remains robust and effective as the number of tasks grows, consistently outperforming existing techniques.
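The summary above describes optimizing interpolation weights over task vectors by minimizing Jensen-Shannon divergence between the merged model's predictions and each task model's predictions on unlabeled data. A minimal NumPy sketch of that idea follows; the linear classifiers, the function names (`merge`, `merging_loss`), and the simple averaging of per-task divergences are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(z):
    # Row-wise softmax with the usual max-shift for numerical stability.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def js_divergence(p, q, eps=1e-12):
    # Jensen-Shannon divergence between row-wise distributions p and q:
    # JS(p, q) = 0.5 * KL(p || m) + 0.5 * KL(q || m), with m = (p + q) / 2.
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * (np.log(a + eps) - np.log(b + eps)), axis=-1)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def merge(base, task_params, weights):
    # Task-arithmetic style merge: theta = base + sum_i w_i * (theta_i - base).
    merged = base.copy()
    for w, p in zip(weights, task_params):
        merged = merged + w * (p - base)
    return merged

def merging_loss(base, task_params, weights, X):
    # Average JS divergence between the merged model's predictions and each
    # task model's predictions on *unlabeled* inputs X -- no labels needed.
    merged = merge(base, task_params, weights)
    total = 0.0
    for p in task_params:
        total += js_divergence(softmax(X @ merged), softmax(X @ p)).mean()
    return total / len(task_params)

# Toy usage: two "task models" perturbing a shared base, weights chosen by
# a coarse grid search over the divergence objective (a stand-in for the
# paper's actual optimizer, which is not specified here).
rng = np.random.default_rng(0)
base = rng.normal(size=(8, 4))
task_params = [base + rng.normal(scale=0.5, size=base.shape) for _ in range(2)]
X = rng.normal(size=(32, 8))

grid = np.linspace(0.0, 1.0, 11)
best_w, best_loss = None, np.inf
for w1 in grid:
    for w2 in grid:
        loss = merging_loss(base, task_params, [w1, w2], X)
        if loss < best_loss:
            best_w, best_loss = (w1, w2), loss
print("best weights:", best_w, "loss:", best_loss)
```

Because the objective only compares predictive distributions, the search runs on raw unlabeled inputs, which matches the paper's claim that no additional labeled data is required.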
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- DisTaC: Conditioning Task Vectors via Distillation for Robust Model Merging (2025)
- Tensorized Clustered LoRA Merging for Multi-Task Interference (2025)
- Rethinking Layer-wise Model Merging through Chain of Merges (2025)
- Task-Based Flexible Feature Distillation for LLMs (2025)
- ICM-Fusion: In-Context Meta-Optimized LoRA Fusion for Multi-Task Adaptation (2025)
- Align, Don't Divide: Revisiting the LoRA Architecture in Multi-Task Learning (2025)
- Efficient Multi-Source Knowledge Transfer by Model Merging (2025)