MemAgent: Reshaping Long-Context LLM with Multi-Conv RL-based Memory Agent
Abstract
A novel agent workflow, MemAgent, enhances long-text processing by reading text in segments, updating memory with an overwrite strategy, and optimizing end to end with reinforcement learning, achieving strong performance on tasks far longer than its training context.
Despite improvements from length extrapolation, efficient attention, and memory modules, handling infinitely long documents with linear complexity and without performance degradation during extrapolation remains the ultimate challenge in long-text processing. We directly optimize for long-text tasks in an end-to-end fashion and introduce a novel agent workflow, MemAgent, which reads text in segments and updates the memory using an overwrite strategy. We extend the DAPO algorithm to support training via independent-context multi-conversation generation. MemAgent demonstrates superb long-context capabilities: trained with an 8K context on 32K-length text, it extrapolates to a 3.5M-token QA task with less than 5% performance loss and achieves over 95% accuracy on the 512K RULER test.
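The segment-reading, memory-overwriting workflow described in the abstract can be pictured as a simple streaming loop. The Python sketch below is an illustration under stated assumptions, not the authors' implementation: the function names (`chunk`, `answer_long_document`), the prompt wording, and the generic `llm` callable are all hypothetical.

```python
# Minimal sketch of a MemAgent-style read-and-overwrite loop.
# All names and prompts here are hypothetical illustrations.

def chunk(text: str, size: int) -> list[str]:
    """Split the document into fixed-size segments."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def answer_long_document(llm, document: str, question: str,
                         segment_size: int = 4096) -> str:
    memory = ""  # memory carried across segments
    for segment in chunk(document, segment_size):
        # Each segment is processed in an independent context: the model
        # sees only (question, current memory, new segment) and rewrites
        # the memory from scratch (overwrite strategy), so compute grows
        # linearly with document length regardless of total size.
        memory = llm(
            f"Question: {question}\n"
            f"Current memory: {memory}\n"
            f"New segment: {segment}\n"
            "Rewrite the memory, keeping only information needed "
            "to answer the question."
        )
    # The final answer is produced from the compressed memory alone.
    return llm(f"Question: {question}\nMemory: {memory}\nAnswer:")
```

Because each segment-level call is an independent conversation, the loop maps naturally onto the independent-context multi-conversation generation used to extend DAPO: each memory update is a separate rollout whose reward derives from the final answer.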