ICLR.cc/2025/Conference
Ql7msQBqoF
5 Re-evaluation and MCTS Search: After edits are applied, the KB is re-evaluated, generating new feedback and a reward score. This score guides a Monte Carlo Tree Search (MCTS) to explore different states of the KB, iterating through steps 1-4 to progressively refine the KB and improve the system's overall performance.
Figure 1 illustrates our technique applied to the ARKS Pony domain (Su et al., 2024a), where a knowledge base (KB) for the low-resource programming language Pony supports a natural-language-to-code task. Due to Pony's rarity, language models often generate code that fails to compile. To address this, we use the Pony compiler as an expert to provide feedback in the form of compile errors.
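The compiler-as-expert loop consumes raw diagnostics like those shown in Figure 1. A minimal sketch of turning such output into structured feedback, assuming the `file:line:col: message` format visible in the figure (the function name and record type are ours, not part of MAC-CAFE):

```python
import re
from typing import NamedTuple

class CompileError(NamedTuple):
    file: str
    line: int
    col: int
    message: str

# Pony compiler diagnostics in Figure 1 look like "main.pony:15:16: <message>".
# This pattern is an assumption inferred from that error text.
_ERR = re.compile(r"^(?P<file>\S+\.pony):(?P<line>\d+):(?P<col>\d+):\s*(?P<msg>.+)$")

def parse_pony_errors(compiler_output: str) -> list[CompileError]:
    """Turn raw compiler output into structured expert feedback records."""
    errors = []
    for raw in compiler_output.splitlines():
        m = _ERR.match(raw.strip())
        if m:
            errors.append(CompileError(m["file"], int(m["line"]), int(m["col"]), m["msg"]))
    return errors
```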
I0To0G5J7g

5 EXPERIMENTS
Results: The first plot in Figure 3 presents our results on the simulated LanguageTable domain, where orange markers represent policy performance after Stage 1, and blue markers represent policy performance after Stage 2. As can be observed, across all dataset sizes (10%, 20%, 80%), our proposed self-improvement procedure leads to significant improvements in success rates (a minimum 1.5x performance boost), with remarkable sample-efficiency in terms of the number of episodes (less than 2% extra episodes collected in Stage 2). As an example, by training a 10% data Stage 1 policy with 1% additional episodes in Stage 2, we obtain policies that outperform both the 20% and 80% data Stage 1 policies. Furthermore, as evidenced by Figure 8 left (Appendix D), our Stage 2 process is stable and reproducible across random seeds, with the individual blue markers representing individual experiments tightly packed together.

5.1.2 REAL-WORLD LANGUAGETABLE

The significant sample-efficiency and robustness of our results suggest that our self-improvement procedure may indeed be applicable to real-world robotics. To this end, we apply our two-stage fine-tuning framework to the real-world LanguageTable domain, in two settings using 20% and 80% of the real-world LanguageTable dataset (Lynch et al., 2023). As in the simulated setting, we apply our Stage 2 process on the Block2Block subset of tasks. Experiments are run for approximately 20 hours each, with one human operator monitoring and periodically resetting 3-4 LanguageTable robot stations simultaneously. For details on the real-world LanguageTable experimentation protocol, we refer the interested reader to Appendix C. We run the 80% data experiment once using 3 robot stations, and run the 20% data experiment twice, once with 3 and once with 4 robot stations.
As described in Section 3.2, success detection for episode termination is performed automatically by our system, and the sole responsibility of the human operator is to monitor the robots and periodically reset the blocks on the stations.

5.1.3 SIMULATED ALOHA SINGLE INSERTION TASK

We also validate our proposed fine-tuning framework on a second robot embodiment, the bimanual Aloha manipulation platform (Zhao et al., 2023; Aldaco et al., 2024). We designed and collected data for a bimanual insertion task, where the left gripper must pick up a socket, and the right gripper must pick up a peg and insert that peg into the socket. Figure 6 presents a visualization of this task, with videos available on our supplementary materials website. Due to the single-task nature, much smaller imitation datasets, much more complex observations, and 70-dim action space, this presents a challenging setting for further validation of our proposed process. For details on the task and how the datasets were created, we refer to Appendix B.3. We create 3 imitation dataset sizes of 5K, 10K, and 15K trajectories. We apply our two-stage process on the 5K and 10K dataset sizes, and report results for supervised learning on the 15K dataset as well to better situate the numbers. The only differences in methodology compared to the LanguageTable domain are the following: 1) To initialize the Stage 2 policy checkpoint we do not take the best validation checkpoint, as we saw that further training the supervised policy led to substantially improved performance. 2) Since the exact success state is difficult to observe from the robot camera observations, we add a small positive constant to the reward function when the robot reaches a successful state. Our task and collected data will be open-sourced in an upcoming contribution to the Aloha simulation repository (Aldaco et al., 2024).

Results: Figure 3, middle, presents our results.
As can be seen, policies trained with 5K+2.5K episodes (Stage 1 + Stage 2) outperform policies trained with 10K imitation episodes (Stage 1 only, RT-2), and rival the success rate of those trained with 15K supervised episodes (Stage 1 only, RT-2).
Ql7msQBqoF

1 INTRODUCTION
To leverage expert or oracle feedback, we propose MAC-CAFE, a Multi-Actor, Centralized-Critic Architecture for Feedback-driven Editing. Our contributions are as follows:

1. Introduction of Feedback-Driven KB Editing: We present MAC-CAFE, a novel framework that refines the KB using structured edits based on expert feedback. This approach allows for direct, document-level updates without requiring access to LLM parameters, making it applicable to both white-box and black-box LLMs.
2. Multi-Actor, Centralized Critic Architecture: We design a multi-agent reinforcement learning framework where each actor is responsible for a specific document, and a centralized critic coordinates updates based on a global reward signal. This architecture ensures that document-level edits are consistent and contribute to the overall accuracy of the RAG system.
3. Parameterized Action Space for Document Editing: We propose a parameterized action space for each document-specific actor, enabling fine-grained control over edits, additions, and deletions within each document. This structured action space allows the actors to perform precise modifications based on expert feedback, resulting in a refined KB that better supports the RAG system.
4. Definition and Evaluation of KB Characteristics: We define desirable characteristics for KB refinement, including coherence, completeness, and generalizability, and introduce corresponding metrics to quantitatively assess these properties. These metrics provide a systematic way to measure the effectiveness of KB updates.
5. Empirical Evaluation and Performance Gains: We demonstrate that MAC-CAFE significantly improves the accuracy and reliability of the QA system in a variety of settings. Through extensive experiments, we show that incorporating expert feedback into document-level edits leads to a substantial reduction in error rates and enhances the KB's ability to support accurate answer generation.
This paper is organized as follows: Section 2 reviews relevant prior work, while Section 3 presents an illustrative example to introduce and explain our approach. Section 4 details the proposed methodology, and Section 5 outlines the desired characteristics for the edited KB along with metrics for evaluation. Section 6 describes the experimental setup, and finally, Section 7 reports the results.
2 RELATED WORK
The MAC-CAFE framework addresses a key limitation of current RAG systems: the inability to dynamically update Knowledge Bases (KBs) without retraining or altering model parameters. Our work draws from research in Retrieval-Augmented Generation (RAG), Continual Learning, Model Editing, and feedback-driven prompt optimization, incorporating insights from Multi-Agent Reinforcement Learning (MARL) to propose an effective solution for KB editing.
Retrieval Augmented Generation (RAG): RAG systems enhance LMs by retrieving relevant knowledge from a KB based on the input query and appending it to the context, thereby addressing the limitations of standalone LMs that lack sufficient context and produce inaccurate answers (Chen et al., 2017; Khandelwal et al., 2020; Guu et al., 2020; Izacard et al., 2022; Shi et al., 2023). These systems dynamically construct contexts from unstructured KBs without modifying the LM's internal parameters. MAC-CAFE further enhances RAG systems by refining the KB itself based on feedback, ensuring more accurate and up-to-date information.
Continual Learning: Continual Learning (CL) methods address the challenge of updating LMs in non-stationary environments by ensuring that new information is learned without forgetting previously acquired knowledge (Jin et al., 2022; Xu et al., 2023; Padmanabhan et al., 2023; Akyürek et al., 2024). These methods are often computationally intensive and require large-scale retraining, making them less suitable for scenarios requiring frequent updates or minimal computational resources. MAC-CAFE, by contrast, leverages expert feedback to perform direct edits to the KB, avoiding the need for extensive retraining.
Knowledge Editing: Knowledge Editing approaches fall into two categories: Model Editing, which modifies the LM parameters directly, and Input Editing, which updates the knowledge supplied to the model. While Model Editing efficiently alters specific facts using specialized secondary models or by altering parameters (De Cao et al., 2021b; Meng et al., 2023), it struggles to ensure consistent updates across contexts (Onoe et al., 2023; Hua et al., 2024). In contrast, Input Editing modifies the KB itself, enabling updates to be reflected in outputs without changing model parameters (Madaan et al., 2022; Wang et al., 2024a;b; Li et al., 2024). MAC-CAFE builds on input editing techniques by leveraging expert feedback to refine the KB systematically, ensuring more accurate and consistent responses.
Prompt Optimization: With the advent of LMs, some recent works approximate gradients in text-based environments using LMs (Pryzant et al., 2023; Wang et al., 2023; Juneja et al., 2024; Gupta et al., 2024) for optimizing task prompts. MAC-CAFE is inspired by these approaches and generates textual reflections, similar to MetaReflection (Gupta et al., 2024) and Shinn et al. (2023), as proxies for gradients. It provides actionable guidance for document updates without the need for differentiable models. Additionally, MAC-CAFE adopts clustering strategies for feedback aggregation from works like UniPrompt (Juneja et al., 2024), ensuring that actors receive coherent and non-redundant instructions.
Multi-Agent Reinforcement Learning (MARL): Multi-agent reinforcement learning (MARL) has been applied to various domains, with early research focusing on tabular methods (Busoniu et al., 2008; Canese et al., 2021; Gronauer & Diepold, 2022) and later expanding to deep learning techniques for high-dimensional inputs (Tampuu et al., 2017; Leibo et al., 2017). Studies have explored independent Q-learning (Tan, 1993), agent communication (Foerster et al., 2016; Das et al., 2017), and centralized training with decentralized execution (Gupta et al., 2017). However, most of these approaches do not address the critical challenge of multi-agent credit assignment. Actor-critic methods have been introduced to overcome this limitation by employing centralized critics with decentralized actors (Foerster et al., 2018; Iqbal & Sha, 2019; Wang et al., 2021; Chen et al., 2023). MAC-CAFE extends such actor-critic frameworks to operate directly on textual content, using the centralized critic to decompose feedback into actionable textual gradients for each document-specific actor.
In the next section, we provide an example to illustrate the KB editing problem, while also providing an overview of MAC-CAFE.
Simulate Task: Given an array of integers nums, write a function that returns the number of good pairs. A pair (i, j) is called good if nums[i] == nums[j] and i < j.
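For reference, the task above has a short closed-form solution; this Python version is ours for illustration (the paper's setting generates it in Pony):

```python
from collections import Counter

def num_identical_pairs(nums: list[int]) -> int:
    """Count pairs (i, j) with i < j and nums[i] == nums[j].

    A value occurring k times contributes k*(k-1)/2 good pairs.
    """
    return sum(k * (k - 1) // 2 for k in Counter(nums).values())
```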
Knowledge Base / Feedback:
main.pony:15:16: call is not partial but the method is - a question mark is required after this call (if nums(i) == nums(j) then)
main.pony:15:27: call is not partial but the method is - a question mark is required after this call (if nums(i) == nums(j) then)
main.pony:11:42: function signature is not marked as partial but the function body can raise an error (fun numIdenticalPairs(nums: Array[I32]): I32 =>)

Reflection and Selection:
Reflection: The output code contains the following errors: 1. The array access calls nums(i) and nums(j) are not marked as partial, but they should be, as the apply function in the Array class is partial and can throw errors. The corrected code is: nums(i)? == nums(j)?. To further illustrate the fix, consider the following simplified example. Code with error: fun get(arr: Array[I32], index: USize): I32 ? => arr(index). Fixed code: fun get(arr: Array[I32], index: USize): I32 ? => arr(index)? ...
Selection: builtin-array.md. Reasoning: This file contains information about the Array class, including apply...
Expert (Compiler) Feedback / Aggregate Document Gradients: Document-wise Partial Gradient: builtin-array.md is a KB document detailing the use of the methods and functions of the Array class...
Collected selection reasoning for builtin-array.md: Method Availability and Error Handling: The current file lacks explicit guidance on the partiality of methods and the use of the ? operator. The omission can lead developers to... The file should accurately reflect the partiality of methods like apply and update...
Generate Edits
Edit: Add Section. When working with arrays in Pony, it is important to understand that some methods are partial, meaning they may not succeed in all cases and might raise an error. To safely handle these errors, Pony provides the ? operator... The apply method on arrays is partial and will raise an error if the index is out of bounds...
4 METHODOLOGY
We will start by describing a typical Retrieval-Augmented Generation (RAG) system over unstructured Knowledge Bases.
Errors in such systems can arise from multiple components: 1) the LLM B might fail to reason correctly over the provided information, 2) the retriever R might not select the right set of relevant documents from K, or 3) the knowledge base K itself might contain incorrect or incomplete information. We assume an expert is monitoring the system, identifying when answers are incorrect, determining which component is at fault, and providing feedback on why the answer is incorrect and what the correct answer must be.
Figure 1: Example of MAC-CAFE in the ARKS Pony scenario

4.1 PROBLEM FORMULATION

We are provided with a training set T = {(q_i, o_i, c_i, f_i)}_{i=1}^{l}, where q_i is a user query, o_i is the RAG system's answer, c_i is the correct answer, and f_i is optional expert feedback on incorrect answers. We also assume access to a scoring function g, which compares o_i and c_i to output a score. The objective is to optimize the knowledge base K to maximize the average score over all queries in the training set:

K^* = \arg\max_{K} \frac{1}{|T|} \sum_{(q_i, o_i, c_i, f_i) \in T} g(B(q_i, \Gamma(q_i, K)), c_i) \quad (1)

In the next section, we show how such an objective can be seen as a state search problem.

4.2 KNOWLEDGE BASE EDITING AS STATE SEARCH

In our problem setting, the Knowledge Base (K) is defined as a collection of documents K = {D_i}_{i=1}^{n}. We assume each document consists of a number of chunks of text and can be represented as D_i = [c_{ij}]. The state s ∈ S of the system is represented by the current configuration of the KB, i.e., the content of all documents in K.
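Equation 1's inner objective, evaluating a fixed KB over the training set, can be sketched as follows; the retriever Γ, generator B, and scorer g are toy stand-ins supplied by the caller, not MAC-CAFE components:

```python
from typing import Callable

def kb_score(K: dict[str, str],
             train: list[tuple[str, str]],                 # pairs (query q_i, correct answer c_i)
             retrieve: Callable[[str, dict], list[str]],   # Gamma(q, K) -> retrieved documents
             generate: Callable[[str, list[str]], str],    # B(q, docs) -> answer
             g: Callable[[str, str], float]) -> float:
    """Average score (1/|T|) * sum g(B(q, Gamma(q, K)), c) over the training set."""
    if not train:
        return 0.0
    return sum(g(generate(q, retrieve(q, K)), c) for q, c in train) / len(train)
```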
This work focuses on scenarios where incorrect answers result from issues in the Knowledge Base (K). Our goal is to improve K by addressing mistakes in K and filling in missing information based on expert feedback, thus enhancing the RAG system's performance on future queries.
Given a query q_i and a set of retrieved documents Γ(q_i, K), the LLM B generates an answer o_i. When errors arise due to incomplete or incorrect information in the retrieved documents, our goal is to identify the optimal configuration of K that improves the accuracy of the system's responses. Thus, we define our state search problem as finding the best state s^* of the KB.
State Space: The state space S encompasses all possible configurations of the KB. Each state s corresponds to a particular set of document contents, represented as s = {D_i}_{i=1}^{n}, where D_i denotes the content of document i and n is the number of documents in K. The state s captures the overall structure and content of the KB at any given point. We set s_0 = K.
State Transition Function: The state transition function T(s, u) defines how the KB changes in response to the action u taken by the agent. Each action contains modifications to one or more documents within the KB, resulting in a new KB configuration. The state transition is formalized as s' = T(s, u), where s' is the new state of the KB after applying u.
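A minimal sketch of the transition T(s, u), treating the state as a document-name to content mapping and the action as per-document replacements (a simplification of the paper's diff-based actions):

```python
def transition(state: dict[str, str], action: dict) -> dict[str, str]:
    """s' = T(s, u): apply per-document modifications without mutating s.

    `action` maps a document name to its new content, or to None to delete it;
    documents not mentioned in the action are carried over unchanged.
    """
    new_state = dict(state)
    for doc, content in action.items():
        if content is None:
            new_state.pop(doc, None)
        else:
            new_state[doc] = content
    return new_state
```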
Action Space: The action space A consists of a list of diffs d_i, one per document D_i; essentially, u = [d_i]_{i=1}^{|K|}.

Environment: We model the environment simply as a "patch" function that takes the diffs generated by the agent and patches the KB to produce the new state.

Optimization Objective: Following Equation 1, our objective is to find the optimal state s^* of the KB that maximizes the overall performance of the RAG system, as measured by a global reward function R. The optimization problem is formulated as:

R(s) = \frac{1}{|T|} \sum_{(q_i, o_i, c_i, f_i) \in T} g(B(q_i, \Gamma(q_i, s)), c_i), \qquad s^* = \arg\max_{s \in S} R(s) \quad (2)

where R(s) represents the cumulative reward of the KB state s, reflecting its ability to support accurate and complete responses for a set of queries.
Monte Carlo Tree Search: We employ Monte Carlo Tree Search (MCTS), similar to PROMPTAGENT (Wang et al., 2023), to search for the optimal state s^*. However, this introduces several challenges: (1) The search space of all possible KB edits is vastly larger than that of standard prompt edits typically explored in the literature (Pryzant et al., 2023; Wang et al., 2023; Juneja et al., 2024; Gupta et al., 2024), making exhaustive search infeasible. (2) Generating actions and subsequent states, as done in methods like PROMPTAGENT, is difficult in the KB editing context since fitting the entire KB into the prompt of a language model is impractical. Despite advancements in handling long contexts (Wang et al., 2020; Kitaev et al., 2020; Press et al., 2022; Su et al., 2024b), these models often struggle to leverage extensive contexts effectively (Liu et al., 2024). (3) Finally, the LM would need to output the entire edited KB, which is challenging due to the inherent difficulty LMs face in generating long, coherent outputs (Bai et al., 2024).
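The search loop itself can be sketched as a compact UCT-style MCTS; here `propose` abstracts the editing agent that generates candidate KB states, and, in place of random rollouts, a node is scored by directly re-evaluating its state, mirroring the paper's use of KB re-evaluation as the reward. All names are ours:

```python
import math
import random

class _Node:
    """Search-tree node holding one KB state (any Python value)."""
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.visits, self.value = [], 0, 0.0

def mcts(root_state, propose, reward, iters=200, width=3, c=1.4):
    """UCT search: propose(state) -> candidate successor states,
    reward(state) -> score in [0, 1]. Returns the most-visited child
    state of the root."""
    root = _Node(root_state)
    for _ in range(iters):
        node = root
        # Selection: descend by UCT while the node is fully expanded.
        while node.children and len(node.children) >= width:
            node = max(node.children,
                       key=lambda ch: ch.value / (ch.visits + 1e-9)
                       + c * math.sqrt(math.log(node.visits + 1) / (ch.visits + 1e-9)))
        # Expansion: attach one not-yet-explored successor, if any.
        if node.visits > 0:
            seen = {repr(ch.state) for ch in node.children}
            fresh = [s for s in propose(node.state) if repr(s) not in seen]
            if fresh:
                child = _Node(random.choice(fresh), parent=node)
                node.children.append(child)
                node = child
        # Evaluation (direct re-scoring) and backpropagation.
        r = reward(node.state)
        while node is not None:
            node.visits += 1
            node.value += r
            node = node.parent
    best = max(root.children, key=lambda ch: ch.visits, default=root)
    return best.state
```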
The reward function R(s) is derived from the expert feedback on the system's generated answers and captures improvements in terms of correctness, coherence, and completeness of the information in the KB. By optimizing for s^*, we ensure that the final state of the KB maximizes the overall accuracy and effectiveness of the RAG system, rather than focusing on an intermediate sequence of state transitions. In summary, the state search formulation defines the problem of finding the optimal state s^* of the KB that maximizes the system's performance. This approach enables us to make targeted, feedback-driven edits to the KB and achieve a refined, high-quality knowledge base that better supports accurate answer generation.

4.3.1 REWARD SIGNAL

For a given query q_i and the generated answer o_i, the expert provides feedback (c_i, f_i) that includes a ground truth answer c_i and qualitative expert feedback f_i on any errors. The global reward signal is derived from c_i via the scoring function g (see Equation 2).

4.3.2 KB EDITING AGENT

To effectively incorporate expert feedback, we employ a multi-actor, centralized critic architecture.
To address these challenges, we decouple the KB edits by isolating document-level modifications based on the required updates. Since individual documents can be large, we further break down the edits into manageable sections, enabling a structured editing mechanism that focuses on specific portions of a document at a time. In the next section, we introduce MAC-CAFE, an agent designed to efficiently perform these structured edits based on feedback.
The proposed approach, MAC-CAFE, is designed to enhance a RAG system by refining the underlying Knowledge Base (K) using expert feedback. Our approach employs a multi-actor, centralized critic architecture, where each actor is responsible for making updates to a specific document within K, and a centralized critic uses global feedback to coordinate these updates. The objective is to iteratively improve K such that the overall accuracy of the RAG system is maximized.
Figure 2: MAC-CAFE multi-actor, centralized critic architecture: On receiving a reward from the environment, the critic generates a reflection over the failures to calculate the textual gradient ∇. The critic uses this reflection to select the documents responsible for the error and proceeds to assign credit to the actors in the form of document-wise reflections. The actors then proceed to iteratively edit the documents. All the document-wise edits are then pooled to define the KB edit.
Centralized Critic: The centralized critic, denoted as C, is responsible for evaluating the overall performance of the RAG system based on the global reward signal r derived from expert feedback. The critic analyzes the feedback received given the current state s of K. The critic's analysis is then used to provide tailored reflections to each actor, guiding document updates.
The centralized critic aggregates the reward signal across multiple queries to generate a holistic evaluation of K:

R(s) = \frac{1}{|T|} \sum_{(q_i, o_i, c_i, f_i) \in T} g(B(q_i, \Gamma(q_i, s)), c_i) \quad (3)

To generate feedback for the documents, the critic needs to take the gradient of this reward with respect to each document:

\partial_j = \frac{\partial R(s)}{\partial D_j} = \frac{1}{|T|} \sum_{(q_i, o_i, c_i, f_i) \in T} \frac{\partial}{\partial D_j} g(B(q_i, \Gamma(q_i, s)), c_i) \quad (4)

Actors: Each document D_i ∈ K is managed by a distinct actor, A_i, which is modeled as a ReACT agent (Yao et al., 2023) responsible for making structured edits to its document. Each actor operates independently, receiving reflections from the centralized critic on how to modify the content of D_i = [c_{ij}]. The actors need only update these chunks as needed. The set of possible actions includes:

• EditChunk(j, t_j), where j indicates which chunk c_{ij} of D_i to modify, and t_j is the updated content for the chunk.
• AddChunk(n_j, t_j), where n_j indicates the name of the new chunk, and t_j is the content for the chunk.
• DeleteChunk(j), where j specifies which chunk c_{ij} of D_i to remove.
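The three chunk-level actions can be sketched over a document represented as a mapping from chunk names to text; the action names come from the paper, while the tuple encoding and name-based chunk indexing are our assumptions:

```python
def apply_action(doc: dict[str, str], action: tuple) -> dict[str, str]:
    """Apply one actor action to a document {chunk_name: chunk_text}.

    Supported actions (names from the paper, encoding ours):
      ("EditChunk", name, new_text), ("AddChunk", name, text), ("DeleteChunk", name)
    Returns a new document; the input is not mutated.
    """
    kind, name = action[0], action[1]
    out = dict(doc)
    if kind == "EditChunk":
        if name not in out:
            raise KeyError(f"no chunk named {name!r} to edit")
        out[name] = action[2]
    elif kind == "AddChunk":
        out[name] = action[2]
    elif kind == "DeleteChunk":
        out.pop(name, None)
    else:
        raise ValueError(f"unknown action {kind!r}")
    return out
```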
This parameterized action space allows the actors to perform precise edits within the document, ensuring that the refinement process is both flexible and context-specific. Each actor leverages its local state s_i and the document-specific feedback from the critic to produce a sequence of structured edits, ensuring that modifications are consistent and contribute towards enhancing the document's relevance and completeness.
6 EXPERIMENTAL SETUP
For evaluating under this setting, we leverage the ARKS-ScipyM and ARKS-TensorflowM datasets from ARKS and the CLARK-News dataset from ERASE (Li et al., 2024). The ARKS datasets consist of data science problems sourced from the DS-1000 dataset (Lai et al., 2022), which are to be solved using artificially perturbed versions of the scipy and tensorflow libraries respectively, while referring to the original unperturbed documentation. Similar to Pony and Ring, we use the execution accuracy on a test bench as the success metric and use the compiler outcome as expert feedback. We also follow a similar approach for data splitting.
4 METHODOLOGY
Figure 2 illustrates the environmental interaction of the actor-critic model. Following methodologies in prior works (Pryzant et al., 2023; Juneja et al., 2024; Gupta et al., 2024), we use LLMs to generate an overall text gradient ∇ over each failing example. The critic first identifies and selects which documents in Γ(q_i, s) are responsible for any inaccuracies in o_i. Reflections are then generated for these documents based on the correct answer, the expert feedback, and the text gradient. However, as shown in Equation 4, we need to aggregate these reflections across all queries. Instead of a simple concatenation, we adopt a clustering approach similar to Juneja et al. (2024), producing generalized reflections that effectively capture the core insights from multiple queries. These aggregated reflections can effectively be considered the partial textual gradient ∂_j with respect to the document. These partial gradients are provided as feedback to the document-specific actor A_j, which then performs the actions to edit the specific document.
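The aggregation step can be sketched with a greedy token-overlap clustering; the paper clusters reflections with an LLM following UniPrompt, so this Jaccard-based grouping is our simplification, not the actual method:

```python
def cluster_reflections(reflections: list[str], threshold: float = 0.5) -> list[list[str]]:
    """Greedily group reflections whose word sets overlap (Jaccard >= threshold
    against the first member of a cluster). Each cluster can then be summarized
    into a single aggregated reflection for the document-specific actor."""
    clusters: list[list[str]] = []
    for text in reflections:
        words = set(text.lower().split())
        for cluster in clusters:
            seed = set(cluster[0].lower().split())
            if words and len(words & seed) / len(words | seed) >= threshold:
                cluster.append(text)
                break
        else:
            clusters.append([text])
    return clusters
```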
The ReACT agent utilizes these reflections and iteratively generates a trajectory t = a_0, a_1, a_2, ..., a_n of edit actions to the document until the errors are resolved or the knowledge gaps are filled. This controlled editing process improves the accuracy of the RAG system by ensuring that the KB contains up-to-date and relevant information. After the completion of the actor runs, we generate the edit diffs d_i for each document and pool them to generate the KB edit action u = [d_i]_{i=1}^{|K|}. However, there might be many ways to edit a KB, and we may want the edited KB to satisfy some desirable characteristics. In the next section, we discuss what those desirable characteristics could be and how we might measure them.
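Pooling the per-document edits into a single KB edit u can be sketched with the standard library's `difflib`; the unified-diff representation is our choice, as the paper does not fix a diff format:

```python
import difflib

def kb_edit(old_kb: dict[str, str], new_kb: dict[str, str]) -> dict[str, str]:
    """u = [d_i]: one unified diff per document that changed (added, edited,
    or deleted); untouched documents produce no diff and are omitted."""
    diffs = {}
    for name in sorted(set(old_kb) | set(new_kb)):
        old = old_kb.get(name, "").splitlines(keepends=True)
        new = new_kb.get(name, "").splitlines(keepends=True)
        patch = "".join(difflib.unified_diff(old, new, fromfile=name, tofile=name))
        if patch:
            diffs[name] = patch
    return diffs
```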
5 EVALUATING KNOWLEDGE BASE EDITING QUALITY
A Knowledge Base should be complete with respect to a task: it should contain all the information necessary to assist the RAG system in solving the task at hand. Given the open-ended nature of tasks that typical RAG agents are designed for, it is hard to quantify a closed-form metric of completeness. That said, an ideal Knowledge Base editing system should at least be able to incorporate as much external feedback as possible.
Further, it would be highly undesirable for a Knowledge Base to help the RAG system on only a small subset of tasks. Given the tendency of data-driven techniques to overfit the training distribution, it is important that knowledge base edits generalize to unseen examples.
Lastly, given the semantic and textual nature of the Knowledge Base, it is important that its documents are coherent and consistent throughout. This not only makes a document interpretable for human consumption, it also helps reduce in-context noise during LLM inference, which has been shown to affect LLM performance (Liu et al., 2024).
6 EXPERIMENTAL SETUP
6.1 BASELINE

While there is a rich body of work in the areas of knowledge editing and prompt optimization, to the best of our knowledge, MAC-CAFE is the first work targeting the feedback-driven textual Knowledge Base Editing problem. Therefore, to perform a holistic evaluation of MAC-CAFE we implement PROMPTAGENT-E, an extension of PROMPTAGENT (Wang et al., 2023) for the KB editing task. PROMPTAGENT formulates prompt optimization as a strategic planning problem using Monte Carlo Tree Search (MCTS). At a high level, our baseline approach PROMPTAGENT-E creates a separate PROMPTAGENT-style agent to optimize each specific document in the KB. To minimize spurious edits in the Knowledge Base, we restrict PROMPTAGENT-E to only optimize documents that were part of the retrievals for more than 2 training samples. After identifying the best nodes for each of the document-wise runs, we put them back in the knowledge base to generate the new version of the KB. In contrast to MAC-CAFE, PROMPTAGENT-E can be seen as a collection of document-wise Independent Actor-Critic models (Foerster et al., 2017). We present in-depth comparisons between PROMPTAGENT-E and MAC-CAFE in Section 7.

6.2 DATASETS

Knowledge Base Editing can be useful in scenarios where the KB is 1. Incomplete, or 2. Incorrect. We evaluate MAC-CAFE on 5 datasets spanning these different settings.
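The retrieval-frequency restriction for PROMPTAGENT-E (only documents retrieved for more than 2 training samples are optimized) can be sketched as:

```python
from collections import Counter

def editable_documents(retrievals: list[list[str]], min_samples: int = 3) -> set[str]:
    """retrievals[i] lists the documents retrieved for training sample i.

    A document qualifies if it appeared in the retrievals of at least
    `min_samples` samples (counted once per sample, even if retrieved
    as several chunks).
    """
    counts = Counter(doc for docs in retrievals for doc in set(docs))
    return {doc for doc, n in counts.items() if n >= min_samples}
```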
ICLR.cc/2025/Conference
Ql7msQBqoF
48
6 EXPERIMENTAL SETUP
INCOMPLETE KNOWLEDGE BASE

We adapt two code generation datasets from ARKS (Su et al., 2024a), namely ARKS-Pony and ARKS-Ring. The datasets consist of LeetCode problems and their solutions in the low-resource languages Pony and Ring, respectively. Each datapoint is supplemented with the corresponding language documentation, with execution accuracy as the success metric and execution failures as feedback to the system. Given that these languages do not appear prominently in LLM pre-training data, the performance of code generation RAG agents on these datasets depends significantly on the quality of the Knowledge Base. However, because these languages have smaller communities, their documentation is not as well maintained and often lacks critical information. For evaluation on these datasets, we split them into train, eval, and test splits as specified in Table 3. To ensure a good representation of failure cases during training, we first execute the RAG pipeline on the entire dataset and divide the failures at random in a 1:1:2 ratio across train, eval, and test, respectively. All datapoints with a successful execution match are put in the test split. We use the compiler feedback from the executions as the expert feedback to the MAC-CAFE system.
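The failure-driven splitting procedure above can be sketched as follows. This is a minimal sketch, not the authors' code: `run_rag` is a hypothetical stand-in for executing the RAG pipeline and checking execution success, and the 1:1:2 ratio is implemented as quarters of the shuffled failure set.

```python
import random

def make_splits(datapoints, run_rag, seed=0):
    """Split datapoints as described: failures are divided at random in a
    1:1:2 ratio across train/eval/test; all successes go to the test split.
    `run_rag(d)` is a hypothetical callable returning True on success."""
    rng = random.Random(seed)
    failures = [d for d in datapoints if not run_rag(d)]
    successes = [d for d in datapoints if run_rag(d)]
    rng.shuffle(failures)
    k = len(failures) // 4          # 1:1:2 ratio -> one quarter each for train/eval
    train, eval_ = failures[:k], failures[k:2 * k]
    test = failures[2 * k:] + successes
    return train, eval_, test
```

Seeding the shuffle keeps the splits reproducible across runs.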
ICLR.cc/2025/Conference
Ql7msQBqoF
49
6 EXPERIMENTAL SETUP
INCORRECT KNOWLEDGE BASE
ICLR.cc/2025/Conference
Ql7msQBqoF
50
6 EXPERIMENTAL SETUP
Figure 3: Data splits

INCORRECT KNOWLEDGE BASE. While fact retrieval is one of the most popular use cases of RAG systems, the evolving nature of information requires us to keep knowledge bases up to date. To simulate such dynamic factual knowledge updates we use the CLARK-news dataset from Erase (Li et al., 2024), which contains questions and their respective answers extracted from Wikidata at different timestamps. Each timestamp is characterized by the set of articles that were added to the data at that time. For our evaluation, we pool all the questions whose answers changed for the first time at a given timestamp and split them across train, eval, and test splits in a 1:1:2 ratio (Table 3).

6.3 EVALUATION METRICS

In Section 5 we discussed the desirable properties of a Knowledge Base edit. We leverage these properties to design 3 metrics for the KB Editing problem, as follows:

Completeness: We use the train-set accuracy to estimate the degree of expert feedback incorporated into the learnt Knowledge Base.
ICLR.cc/2025/Conference
DnBjhWLVU1
42
5 CONCLUSION
In this paper, we introduced a novel method to recover the plasticity of neural networks. The proposed method, Soft Weight Rescaling, scales down the weights in proportion to the rate of weight growth. This approach prevents unbounded weight growth, a key factor behind various issues in deep learning. Through a series of experiments on standard image classification benchmarks, including warm-start and continual learning settings, SWR consistently outperformed existing weight regularization and re-initialization methods.
ICLR.cc/2025/Conference
Ql7msQBqoF
52
6 EXPERIMENTAL SETUP
Generalization: To estimate the degree of generalization of our Knowledge Base edits, we use the held-out test set accuracy.

6.4 SYSTEM CONFIGURATIONS

MCTS parameters: We use the Upper Confidence bounds applied to Trees (UCT) algorithm for selecting expansion nodes, enabling effective exploration and exploitation of the KB state space. For our experiments, we set a maximum search depth of 3, an expansion width of 3, and a maximum of 5 iterations. The UCT exploration constant is set to 2.5. These parameters were chosen to balance computational cost against the need for adequate exploration. A depth of 3 ensures that the search can explore sufficient variations in the KB states without unnecessary expansion, while an expansion width of 3 allows a moderate number of candidate states to be evaluated at each step. Similarly, 5 iterations provide enough opportunity to refine the state search, and the UCT constant of 2.5 encourages sufficient exploration in early stages while converging towards high-reward states in later stages. For unstructured data, the documents are chunked every 50 lines and the chunks are then edited.
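The UCT selection rule with the stated exploration constant of 2.5 can be sketched as below. This is an illustrative sketch under simplified bookkeeping (each child is a hypothetical (total value, visit count) pair), not the paper's implementation.

```python
import math

def uct_score(child_value, child_visits, parent_visits, c=2.5):
    """UCT score with the exploration constant used in the experiments (2.5).
    Unvisited children get an infinite score so each is expanded once."""
    if child_visits == 0:
        return float("inf")
    exploit = child_value / child_visits
    explore = c * math.sqrt(math.log(parent_visits) / child_visits)
    return exploit + explore

def select_child(children, parent_visits):
    """children: list of (total_value, visit_count) pairs; returns the index
    of the child maximizing the UCT score."""
    return max(range(len(children)),
               key=lambda i: uct_score(children[i][0], children[i][1],
                                       parent_visits))
```

With c = 2.5, a rarely visited child can outrank a well-explored higher-value child, which matches the stated emphasis on early exploration.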
ICLR.cc/2025/Conference
Ql7msQBqoF
53
6 EXPERIMENTAL SETUP
Coherence: To quantify the degree of coherence of the KB, we first calculate a document-wise coherence score using G-Eval (Liu et al., 2023) with GPT4-1106-PREVIEW as the judge model. The G-Eval prompt assigns a 1-5 score to the diff of changes with respect to the original document, checking the thematic similarity of the diff. We pool all the edited documents for a KB edit and average their respective coherence scores to define the KB coherence metric.
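The pooling step reduces to a simple average over per-document judge scores; a minimal sketch, where `judge` is a hypothetical stand-in for the G-Eval call:

```python
def kb_coherence(edited_docs, judge):
    """KB coherence = mean of document-wise coherence scores over all edited
    documents. `edited_docs` is a list of (original, edited) text pairs and
    `judge(original, edited)` is assumed to return a 1-5 G-Eval-style score."""
    scores = [judge(orig, new) for orig, new in edited_docs]
    return sum(scores) / len(scores)
```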
ICLR.cc/2025/Conference
Ql7msQBqoF
54
6 EXPERIMENTAL SETUP
RAG System: For our evaluations, we set up a generic RAG system that uses embedding similarity for semantic retrieval. Additionally, in line with prior works such as Zhang et al. (2023) for coding-related tasks, we use an iterative retrieval setup wherein we first generate code using naive retrieval and then query the database again with both the question and the generated code to improve the quality of retrieval before generating the final result.
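The two-pass retrieval loop can be sketched as follows; `retrieve` and `generate` are hypothetical stand-ins for the system's retriever and code-generating LLM.

```python
def iterative_rag(question, retrieve, generate):
    """Iterative retrieval: draft an answer from naive retrieval, then
    re-query the KB with question + draft code before the final generation."""
    draft_docs = retrieve(question)                    # pass 1: naive retrieval
    draft_code = generate(question, draft_docs)        # draft generation
    refined_docs = retrieve(question + "\n" + draft_code)  # pass 2: code-aware query
    return generate(question, refined_docs)            # final answer
```

The second query carries concrete identifiers from the draft code, which tends to surface more relevant documentation than the natural-language question alone.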
ICLR.cc/2025/Conference
Ql7msQBqoF
55
6 EXPERIMENTAL SETUP
LLM configs: We use OPENAI-TEXT-EMBEDDING-3-LARGE as the embedding model with a dimension size of 3072 and use cosine similarity as the embedding-match metric for ranking. To account for the 8191-token max input limit, we create document chunks of at most 7500 tokens. For the reasoning model, we use GPT4-1106-PREVIEW with a temperature of 0. Since LLMs are known to perform poorly with longer context input (Liu et al., 2024), we cap the max token budget for retrievals at 18000 tokens and remove lower-ranked retrievals to fit this token budget.
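The budget-trimming step can be sketched as below. This is a hedged sketch: `count_tokens` is a stand-in tokenizer (a real system would use the model's tokenizer), and higher-ranked retrievals are assumed to come first.

```python
def trim_to_budget(ranked_docs, count_tokens, budget=18000):
    """Keep the highest-ranked retrievals whose cumulative token count fits
    the 18k-token retrieval budget; lower-ranked retrievals are dropped."""
    kept, used = [], 0
    for doc in ranked_docs:          # ranked_docs sorted by descending rank
        cost = count_tokens(doc)
        if used + cost > budget:
            break                    # everything below this rank is discarded
        kept.append(doc)
        used += cost
    return kept
```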
ICLR.cc/2025/Conference
Ql7msQBqoF
56
7 RESULTS
[Tables 1 and 2: per-dataset accuracy (Acc) and standard deviation (σ) on Ring, Pony, SciPy, Tensorflow, and CLARK-news for Base KB, PROMPTAGENT-E, and MAC-CAFE.]

7.1 COMPLETENESS AND GENERALIZATION

We observe consistent improvements over the PROMPTAGENT-E baseline in completeness and generalizability scores, with MAC-CAFE achieving approximately 2x performance gains on the Ring and Pony datasets. However, feedback incorporation remains limited, likely due to suboptimal retrieval or limited document-query associations hindering generalization. MAC-CAFE also demonstrates higher generalizability and lower variance, attributed to its structured and focused document edits that enhance coherence.
ICLR.cc/2025/Conference
Ql7msQBqoF
58
7 RESULTS
Table 1: Comparison of generalization performance of MAC-CAFE and baselines on various datasets
ICLR.cc/2025/Conference
Ql7msQBqoF
59
7 RESULTS
Table 2: Comparison of the Completeness metric for MAC-CAFE and baselines on various datasets

7.2 MAC-CAFE MAKES HIGH QUALITY COHERENT EDITS

As seen in Table 3, MAC-CAFE produces edits with a coherence score of 4 or higher for most datasets. For KBs that need long-term maintenance (like the language and code documentation in the ARKS datasets), MAC-CAFE makes more coherent edits than the baseline. This is especially true for long documents, as seen in the ARKS Pony dataset. For news-article-like datasets such as CLARK-news with factual edits, incoherency is naturally induced when the facts of the article are changed. For instance, an article on the coronation of a king loses coherency when it is updated to add information about the coronation of a new king.
ICLR.cc/2025/Conference
Ql7msQBqoF
60
8 CONCLUSION
We introduced MAC-CAFE, a novel framework for refining Knowledge Bases (KBs) in Retrieval-Augmented Generation (RAG) systems using a multi-actor, centralized-critic architecture. MAC-CAFE enables efficient KB updates without retraining or altering model parameters by leveraging feedback-driven structured edits and textual gradients.
ICLR.cc/2025/Conference
Ql7msQBqoF
61
8 CONCLUSION
Our approach achieved superior performance in preserving knowledge base (KB) coherence, consistency, and completeness, resulting in improved RAG system responses. Nonetheless, there remains considerable potential for further advancement. Future work will focus on improving along these three metrics to raise system performance even further.
ICLR.cc/2025/Conference
Ql7msQBqoF
62
8 CONCLUSION
Table 3: Comparison of the Coherence metric for MAC-CAFE and baselines on various datasets. Scores range from 1 to 5; higher is better.
ICLR.cc/2025/Conference
NHe6guO3l6
1
Title
ACHIEVING EXACT FEDERATED UNLEARNING WITHIMPROVED POST-UNLEARNING PERFORMANCE
ICLR.cc/2025/Conference
NHe6guO3l6
2
Abstract
Federated learning is a machine learning paradigm that allows multiple clients to train an aggregated model by sharing model updates with a central server without sharing their data. Even though the data is not shared, it can indirectly influence the aggregated model via the shared model updates. In many real-life scenarios, we need to completely remove a client's influence (unlearning) from the aggregated model, e.g., for competitive clients who want to remove their influence from the aggregated model after leaving the coalition to ensure other clients do not benefit from their contributions. Influence removal is also needed when an adversarial client negatively affects the aggregated model. Though the aggregated model can be retrained from scratch to ensure exact unlearning (completely removing the client's influence from the aggregated model), it performs poorly just after the unlearning, which is undesirable during deployment. To overcome this challenge, this paper proposes federated unlearning algorithms that ensure exact unlearning while achieving better post-unlearning performance. Our experimental results on different real datasets validate the performance of the proposed algorithms.
ICLR.cc/2025/Conference
TIxiwxd4iD
1
Title
BANGLAGITI: A NOVEL DATASET FOR BANGLA MUSIC GENRE CLASSIFICATION WITH A COMPREHENSIVE ANALYSIS
ICLR.cc/2025/Conference
hWF0HH8Rr9
57
5 CONCLUSION
Lukas Biewald. Experiment tracking with weights and biases, 2020. URL https://www.wandb.com/.
ICLR.cc/2025/Conference
hWF0HH8Rr9
58
5 CONCLUSION
Software available from wandb.com.
ICLR.cc/2025/Conference
NHe6guO3l6
3
1 INTRODUCTION
An individual user may have insufficient data to train a state-of-the-art machine learning model. Yet, we can significantly improve model performance by leveraging the combined data from multiple users. Federated learning (FL) (Zhang et al., 2021) is one of the most prevalent paradigms for such collaboration today, especially in sectors with strong privacy demands such as finance and healthcare (Li et al., 2020; Xu et al., 2021). In the FL setting, collaborative clients train local models on their own data, and a central server model is obtained by aggregating these local model updates over multiple communication rounds. FL is well-suited for many commercial applications as it eliminates the need to share users' private data during training. For example, multiple companies from the same industrial sector (e.g., banking, insurance, or healthcare) often possess diverse user data. To leverage all available data, these companies can collaborate and train a more accurate model using suitable FL algorithms (Aledhari et al., 2020).
ICLR.cc/2025/Conference
NHe6guO3l6
4
1 INTRODUCTION
Although FL algorithms do not directly access users' data, the aggregated model is still influenced by the local models trained on each client's data. When a client leaves the collaboration, it is necessary to update the aggregated model to remove the influence of its data, a process known as federated unlearning (FU).¹ For example, a company leaving a collaboration of many companies may demand the removal of its contributions to ensure its competitors do not benefit from them. FU techniques are also desirable to remove the influence of adversarial clients, i.e., an adversary that behaves like a client and degrades model performance by contributing contaminated updates (Fang et al., 2020). Additionally, the development of FU techniques facilitates the exercise of the right to be forgotten formalized in many regional or governmental data regulations such as GDPR (2016) and CCPA (2018).
ICLR.cc/2025/Conference
NHe6guO3l6
5
1 INTRODUCTION
We can trivially achieve FU by retraining the collaboration from scratch without the target client's data (Liu et al., 2023). Despite its simplicity, the new server model suffers from low performance as it is restarted with random initialization. As a result, it slows down the deployment of the unlearned model, as training large models over a collaboration of many users can be time-consuming. (¹This differs from the typical FL setting, where clients may be intermittently active or inactive during the training process.) Due to
ICLR.cc/2025/Conference
NHe6guO3l6
6
1 INTRODUCTION
these shortcomings, it is natural to consider the following question: How can we guarantee exact federated unlearning while ensuring better post-unlearning performance?

This paper proposes two novel methods for achieving exact FU with improved post-unlearning performance. The first method, Bi-Models Training (BMT) (Section 3.1), preserves isolated copies of local models and reuses clients' existing knowledge residing in these models during unlearning for better aggregation. Despite being unlearning-friendly, these local models fail to capture the joint influence of multiple clients on the global model. Training the power set of clients can capture all possible joint influences of the clients but is computationally expensive and may lead to double influence, where a client affects multiple sub-FL models. As a result, we propose the second method, Multi-Models Training (MMT) (Section 3.2), which trains each sub-FL model on disjoint subsets of clients to avoid double influence and aggregates the best sub-FL models upon unlearning to achieve improved initialization of the aggregated model. We empirically justify the effectiveness of BMT and MMT through multiple experiments on real-world vision and language datasets (Section 4).
ICLR.cc/2025/Conference
NHe6guO3l6
7
1.1 RELATED WORKS
In this section, we review the work relevant to our problem setting, especially in federated learning, machine unlearning, and federated unlearning.
ICLR.cc/2025/Conference
NHe6guO3l6
8
1.1 RELATED WORKS
Federated Learning (FL). FL emerges from the industrial needs to train centralized models onlarge, decentralized data residing on users’ device (McMahan et al., 2017) and is particularly favoredin sectors requiring strong privacy guarantees, such as finance and health care (Li et al., 2020; Xuet al., 2021). Based on the characteristics of the decentralized data, Yang et al. (2019) divided FL intothree categories: horizontal FL, vertical FL, and federated transfer learning. To optimize the federatedmodels, McMahan et al. (2017) proposed the FedAvg algorithm that averages local updates fromcontributing clients and works well on independent and identically distributed (i.i.d.) data. However,as real-world data is often heterogeneous (e.g., users with different demographics), subsequent workshave proposed new methods that target model architecture or algorithm design to alleviate model driftthat can degrade model performance (Zeng et al., 2023; Mu et al., 2023; Idrissi et al., 2021; Li et al.,2021; Karimireddy et al., 2020). We refer the readers to Zhang et al. (2021) for a detailed survey ofvarious works covering different settings of federated learning.
ICLR.cc/2025/Conference
NHe6guO3l6
9
1.1 RELATED WORKS
Machine Unlearning (MU). MU aims to remove the influence of a selected subset of data fromthe trained ML model. Based on the guarantee of removal, MU methods are broadly categorizedinto exact unlearning and approximate unlearning (Nguyen et al., 2022; Wang et al., 2024). Inexact unlearning, we aim for an identical model to one that would have been obtained by retrainingwithout that data to be erased. Retraining is a method that trivially achieves exact unlearning but iscomputationally expensive with large models and datasets. Existing works can exactly unlearn forsupport vector machines (Cauwenberghs & Poggio, 2000), k-means (Ginart et al., 2019), randomforests (Brophy & Lowd, 2021). Bourtoule et al. (2021) partitions the entire training data set into afew disjoint subsets and trains one base model with each of these subsets. Since each base modelis only trained with a subset of the original training data, the performance may be sub-optimal.Approximate unlearning aims for a model whose distribution closely resembles that of the retrainedmodel. Guo et al. (2020) proposed a certified removal method to approximately unlearn linear modelby Newton-like update. Nguyen et al. (2020) minimizes the KL divergence between the approximateposterior of the unlearned model and the retrained model under the variational inference framework.
ICLR.cc/2025/Conference
NHe6guO3l6
17
3.1 BI-MODELS TRAINING
Initialization. The central server starts the standard FL training process by randomly initializing the global aggregated model. This randomly initialized global model is then shared with all clients. Each client updates the global model using its local training data and then shares the model update (updated model or gradients) with the central server. In addition to the standard initialization of any FL training process, each client keeps a copy of the locally updated global model² (i.e., the local model). Since the initial global model is randomly initialized, these local models are, by design, isolated from the influence of other clients' training data.
ICLR.cc/2025/Conference
Yd5MHVIKLk
40
4.1 MAIN RESULTS AND ANALYSIS
Table 2: GPT-4V evaluation of final generated images and final prompts after adjustments/changes. The results show that MuLan remains very effective under various adjustments of prompts during generation.
ICLR.cc/2025/Conference
NHe6guO3l6
10
1.1 RELATED WORKS
Federated Unlearning (FU). Many recent works adapt machine unlearning to federated learning settings (Liu et al., 2020; Wang et al., 2021; Gong et al., 2021). Liu et al. proposed FedEraser, which uses historical updates from the server and local calibration training on the unlearned client. The federated unlearning protocol proposed in this work can be used to unlearn an arbitrary subset of clients without any constraint on the type of data each client possesses, and it requires no participation of the unlearned client. Wang et al. proposed a channel-pruning-based method to selectively forget a specific class from the trained ML model. Such an approach has limited scope, as it is impractical to assume that each participant in the FL setting possesses precisely one class of data. Gong et al. are concerned with the setting where no centralized party/server is present, which does not apply to the centralized FL setting. In terms of exact federated unlearning, Xiong et al. (2023) and Tao et al. (2024) use quantization and sampling strategies, respectively, to obtain a checkpoint during FL training where the unlearned client's data has not made a quantifiable impact and use it as initialization for model retraining, thus speeding up the retraining process. On the other hand, Qiu et al. (2023) proposed to cluster the clients, train a few intermediate FL models, and then obtain the global FL model through one-shot aggregation. At the unlearning stage, only the intermediate FL model containing the unlearning client is retrained (hence reducing the retraining cost). Our proposed method touches on both ideas and uses the aggregation of a few sub-FL models to obtain a good initialization for much more efficient retraining. The way we obtain our sub-FL models trades off between computation budget and post-unlearning performance and plays an essential role in ensuring effectiveness.
ICLR.cc/2025/Conference
NHe6guO3l6
11
2 PROBLEM SETTING
Federated Learning. This paper considers the centralized federated learning (FL) setting with a trusted central server and multiple clients. In this setting, the central server shares an aggregated model with the clients; each client then trains this model on its dataset and sends model updates (weights or gradients) to the central server, which aggregates these updates to get a better aggregated model. In our setting, we assume that the number of clients participating in the FL process varies over time. Let C_t denote the set of participating clients at the beginning of FL communication round t. An FL communication round (communication round for brevity) represents one cycle of model sharing by the central server with the clients followed by receiving their updates to the aggregated model.
ICLR.cc/2025/Conference
NHe6guO3l6
12
2 PROBLEM SETTING
Each client c ∈ C_t has a training dataset D_{c,t} with n_{c,t} labeled samples, where each sample is drawn from the distribution ν_c over X × Y. Here, X represents the input space, and Y represents the label space. The learning model is denoted by h_θ : X → Y for model parameters θ ∈ R^d, where d is the number of model parameters. The loss incurred by the learning model h_θ on a sample (x, y) ∈ X × Y is denoted by l(h_θ(x), y), which can be the root mean squared error (for regression problems) or the cross-entropy loss (for classification problems).
ICLR.cc/2025/Conference
NHe6guO3l6
13
2 PROBLEM SETTING
After communication round t, the loss incurred by client c for model parameters θ is the average loss of the model h_θ on the samples in D_{c,t}, defined by $f_{c,t}(\theta) := \frac{1}{n_{c,t}} \sum_{s=1}^{n_{c,t}} l(h_\theta(x_{c,s}), y_{c,s})$, where (x_{c,s}, y_{c,s}) is the s-th sample in D_{c,t}. The central server aims to find a learning model with the minimum average loss for each client. The server achieves this by finding a model θ that minimizes the average clients' loss weighted by their respective numbers of samples, which is given by solving the following optimization problem in communication round t:

$$\operatorname*{argmin}_{\theta} \ \frac{1}{n_t} \sum_{c \in C_t} n_{c,t} f_{c,t}(\theta) = \frac{1}{n_t} \sum_{c \in C_t} \sum_{s=1}^{n_{c,t}} l(h_\theta(x_{c,s}), y_{c,s}), \qquad (1)$$

where $n_t = \sum_{c \in C_t} n_{c,t}$. Since the clients cannot share their local data D_{c,t} with the server (due to communication or privacy constraints), the optimization problem given in Eq. (1) must be solved in a federated manner using a suitable FL algorithm (e.g., FedAvg (McMahan et al., 2017)).
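A minimal sketch of the sample-weighted server-side aggregation used by FedAvg-style algorithms for Eq. (1); model parameters are simplified to plain lists of floats, and the dict layout of `updates` is an assumption for illustration.

```python
def fedavg_aggregate(updates):
    """Weighted average of client parameters, each client weighted by its
    sample count n_c / n_t as in Eq. (1). `updates` maps client id to a
    (theta_c, n_c) pair, with theta_c a list of floats."""
    total = sum(n for _, n in updates.values())          # n_t
    dim = len(next(iter(updates.values()))[0])
    agg = [0.0] * dim
    for theta, n in updates.values():
        for i, w in enumerate(theta):
            agg[i] += (n / total) * w                    # (n_c / n_t) * theta_c
    return agg
```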
ICLR.cc/2025/Conference
NHe6guO3l6
14
2 PROBLEM SETTING
Exact Federated Unlearning. Suppose client c's influence must be completely removed from the aggregated model. Exact federated unlearning is the process of completely removing the influence of client c's training data from the aggregated model, resulting in a model that is equivalent to a model trained without the training data of client c. However, the aggregated model resulting from retraining without the data of client c may perform poorly in the initial rounds, which is undesirable when these models are deployed in practice. Therefore, our goal is to design methods that ensure exact federated unlearning while leading to an aggregated model with as high accuracy as possible.
ICLR.cc/2025/Conference
NHe6guO3l6
15
3 EXACT FEDERATED UNLEARNING METHODS
Due to the multiple communication rounds of FL training, it becomes impossible to completely remove a client's data influence from the trained aggregated model directly. Therefore, the most straightforward way to achieve exact federated unlearning is to restart the federated learning process from scratch with the remaining clients. This method of retraining the aggregated model from scratch is called retraining from scratch (RfS) (Bourtoule et al., 2021; Liu et al., 2023). Although RfS is a simple method, the new model may have very low accuracy in the initial rounds after unlearning compared to the aggregated model before unlearning, due to restarting the FL process with random initialization of the aggregated model. Such performance reduction of the aggregated model may not be desirable during deployment in practice, particularly in critical applications such as healthcare (Prayitno et al., 2021; Dhade & Shirke, 2024) and finance (Long et al., 2020). This shortcoming of RfS raises a natural question: How can we guarantee exact federated unlearning while ensuring better post-unlearning performance? To answer this question, we propose two novel methods for achieving exact federated unlearning that completely remove the client's influence while giving better post-unlearning performance than RfS.
ICLR.cc/2025/Conference
NHe6guO3l6
16
3.1 BI-MODELS TRAINING
To get a better-performing aggregated model post-unlearning, we must design a new FL training process that allows exact federated unlearning while having a better-than-random initialization. One way to achieve better initialization is to design methods that can exploit the remaining clients' existing knowledge. To do this, we propose a method named Bi-Models Training (BMT) that can be incorporated into any existing federated learning framework. The main idea of BMT is to have an additional model for each client that is only trained on its data, making these models unaffected by other clients' training data. We refer to this model as the local model. We use the term global model to refer to the aggregated model, which is trained using all clients' data and used for deployment. Next, we discuss how BMT can be incorporated into the different stages of any existing federated learning framework (as depicted in Fig. 1), namely: Initialization, FL Training, Unlearning, and New Client joining the FL process, whose details are given as follows.
ICLR.cc/2025/Conference
Yd5MHVIKLk
41
4.1 MAIN RESULTS AND ANALYSIS
[Table 2 fragment: Spatial 28.57%, Overall 73.06%]
ICLR.cc/2025/Conference
NHe6guO3l6
18
3.1 BI-MODELS TRAINING
FL Training. After receiving the first model updates, the central server aggregates them to get the aggregated global model as per the underlying FL algorithm (McMahan et al., 2017; Shlezinger et al., 2020; Zhang et al., 2021). In each subsequent communication round, each client receives the updated global model from the central server and then trains it using its training data. After updating the global model, each client shares the model update with the central server. Besides the standard FL training process, each client also updates its local model using its training data.
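One BMT communication round on the client side can be sketched as follows; `train_step` is a hypothetical stand-in for a single client's local training routine.

```python
def bmt_client_round(global_model, local_model, data, train_step):
    """BMT client step: update the received global model on local data (this
    update is sent to the server), and separately update the client's
    isolated local model on the same data."""
    update_for_server = train_step(global_model, data)  # standard FL update
    new_local_model = train_step(local_model, data)     # extra BMT bookkeeping
    return update_for_server, new_local_model
```

The local model never sees other clients' updates, which is exactly what makes it safe to reuse at unlearning time.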
ICLR.cc/2025/Conference
NHe6guO3l6
19
3.1 BI-MODELS TRAINING
Unlearning. Let c be the client whose influence needs to be completely removed from the global model after a communication round t, and let C_{t,r} be the set of remaining clients, i.e., C_{t,r} = C_t \ {c}. The central server first discards the current global model and requests each client to share its current copy of the local model. Once the central server receives the local models from all remaining clients, it aggregates them to get the new initialization for the global model as per the underlying
ICLR.cc/2025/Conference
NHe6guO3l6
20
3.1 BI-MODELS TRAINING
Figure 1: Bi-Models Training (BMT) in the different stages of any federated learning framework.
ICLR.cc/2025/Conference
NHe6guO3l6
21
3.1 BI-MODELS TRAINING
FL algorithm; e.g., for FedAvg, the central server performs weighted aggregation of the remaining clients' local models, where each client's weight is proportional to its amount of training data. Our extensive experimental results (in Section 4) show that the resulting initialized global model performs better than the random model initialization used in RfS. Lastly, the central server restarts the FL training process with the newly initialized global model, which is completely free from the influence of the unlearned client's data.

²The locally updated global model in the first communication round is the same as the model that is a copy
ICLR.cc/2025/Conference
NHe6guO3l6
22
3.1 BI-MODELS TRAINING
of the initial global model and trained on client’s training data.
ICLR.cc/2025/Conference
NHe6guO3l6
23
3.1 BI-MODELS TRAINING
New Client. When a new client wants to join the ongoing FL collaboration, the central server waits until the end of the ongoing communication round. Once it is over, the central server starts the FL training process with the new client by sharing the current global model, which the new client then updates using its training data before sharing the model update with the central server. Apart from this, the central server also shares the randomly initialized global model with the new client, who updates it; this then acts as the local model of the new client for subsequent rounds. Other clients do not influence this local model, as the initial global model is randomly initialized.
ICLR.cc/2025/Conference
NHe6guO3l6
24
3.1 BI-MODELS TRAINING
In summary, BMT maintains two models for each client: global and local. All clients train their local models on their own data in isolation, whereas the global model is trained using the underlying FL training protocol. To completely remove a client's influence from the global model, the central server first discards the global model and then uses the local models of the remaining clients to re-initialize the global model, which is further updated via FL training. This process ensures that BMT, by design, guarantees exact federated unlearning. Further, using the remaining clients' local models leads to an initialization of the global model that is already influenced by the remaining clients to some extent, leading to better performance than RfS, as corroborated by our experiments in Section 4. The price for this improved post-unlearning performance is the cost of pre-training the local models in advance. Such a trade-off is worthwhile for applications that require exact unlearning and an unlearned model with good performance as quickly as possible for deployment.
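The BMT unlearning step reduces to filtering out the unlearned client and aggregating what remains; a minimal sketch, where `aggregate` is the underlying FL aggregation rule (e.g., sample-weighted FedAvg) and the dict inputs are illustrative assumptions:

```python
def bmt_unlearn(local_models, sample_counts, unlearned, aggregate):
    """BMT unlearning: discard the global model and re-initialize it from the
    remaining clients' isolated local models (never touched by the unlearned
    client's data), with weights proportional to training-set sizes."""
    remaining = {c: (local_models[c], sample_counts[c])
                 for c in local_models if c != unlearned}
    return aggregate(remaining)   # new global-model initialization
```

Because the inputs are the isolated local models, the re-initialized global model provably carries no trace of the unlearned client, giving exact unlearning by construction.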
ICLR.cc/2025/Conference
NHe6guO3l6
25
3.2 MULTI-MODELS TRAINING
The key insight of the previous section is that BMT achieves a better initial global model because the initialization is influenced by the clients' local models. However, a local model only contains the influence of an individual client and carries no joint influence of multiple clients. Since all clients influence the global model, we should capture the joint influence of different clients and then use it to get a better initialization of the global model. To capture the joint influence, we can train FL models using only a subset of clients. We refer to these FL models as sub-FL models. Formally, a sub-FL model is an FL model trained via the FL protocol using a subset of clients, where the size of the subset varies from 2 to N − 1. One could train all possible sub-FL models (the power set of clients, excluding the global model) to capture the influence of all possible interactions of different subsets of clients. However, this approach is not computationally feasible, as the number of sub-FL models increases exponentially with the number of clients (i.e., 2^n − n − 2 for n clients). Training arbitrary sub-FL models also leads to a situation of double influence, which is defined as follows:

Definition 1. Let S_i be the set of clients whose data are used in training the i-th sub-FL model. Sub-models i and j lead to double influence if S_i ∩ S_j ≠ ∅, S_i \ S_j ≠ ∅, and S_j \ S_i ≠ ∅.
ICLR.cc/2025/Conference
NHe6guO3l6
26
3.2 MULTI-MODELS TRAINING
When one client's data is used to train two sub-FL models, it can lead to double influence if both are also trained using data from different clients; e.g., if one is trained on clients {1, 2} and another on clients {1, 3}, client 1's data is used in both sub-FL models, hence the double influence.
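Definition 1 translates directly into a set predicate; a minimal sketch:

```python
def double_influence(s_i, s_j):
    """Definition 1: sub-FL models with client sets s_i and s_j exhibit
    double influence iff the sets overlap but neither contains the other."""
    s_i, s_j = set(s_i), set(s_j)
    return bool(s_i & s_j) and bool(s_i - s_j) and bool(s_j - s_i)
```

Disjoint sets and proper superset/subset pairs, the two configurations MMT allows, both make the predicate false.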
To avoid double influence, sub-FL models should be trained on disjoint subsets of clients, or the set of clients used to train one sub-FL model should be a proper superset of the set used for another. One way to achieve this is to organize sub-FL models in a hierarchical tree structure. In this tree, the root node represents the global model, the leaf nodes correspond to the local models, and intermediate nodes represent sub-FL models, with each child node having a set of clients disjoint from those of its siblings. As we move from the root node toward the leaf nodes, each sub-FL model branches into further subsets, maintaining either disjoint or superset relations, thus ensuring a clear and systematic flow of influence throughout the hierarchy. We refer to this hierarchical tree structure as an influence tree. After unlearning a client, we aggregate the sub-FL models with higher influence (those influenced by a larger number of clients) and the local models to obtain the initialization for the global model. The fewer models that need to be aggregated, the more joint influence of the remaining clients is retained in the initialization of the global model.
This relationship inspires our proposed metric, the influence degradation score, which measures how good an influence tree is. Next, we formally define the influence degradation score.
Definition 2 (Influence Degradation Score (IDS)). Let T be any influence tree structure. The influence degradation score for T, denoted by s(T), is defined as the average number of sub-FL and local models that are aggregated to obtain the initial global model after unlearning any client.
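Under the reading that unlearning a client requires aggregating every sibling subtree along the root-to-leaf path, the IDS of Definition 2 can be sketched as follows. The nested-list tree encoding (leaves are client ids, internal nodes are lists of children) and the helper names are our own illustration, not the paper's implementation:

```python
def models_to_aggregate(tree, client):
    """Count the sub-FL/local models merged to re-initialize the global
    model after unlearning `client`: at every level on the root-to-leaf
    path, all sibling subtrees are kept and aggregated."""
    if not isinstance(tree, list):                 # leaf node (a client id)
        return 0 if tree == client else None
    for child in tree:
        sub = models_to_aggregate(child, client)
        if sub is not None:
            return sub + (len(tree) - 1)           # keep every sibling subtree
    return None                                    # client not in this subtree

def ids(tree, unlearn_probs):
    """Definition 2: expected number of aggregated models, weighted by
    each client's unlearning probability."""
    return sum(p * models_to_aggregate(tree, c)
               for c, p in unlearn_probs.items())

# Balanced binary tree over 4 equally likely clients: unlearning any
# client forces aggregating 2 models (one sibling subtree per level).
tree = [[1, 2], [3, 4]]
print(ids(tree, {c: 0.25 for c in (1, 2, 3, 4)}))  # 2.0
```

For binary trees with uniform probabilities, this reduces to the average leaf depth, which is why Huffman coding becomes relevant in the next result.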
Though the tree structure, by design, eliminates double influence, it is not obvious which tree structure yields the lowest IDS when clients have different likelihoods of requesting unlearning (as the probability of requesting unlearning may vary across clients). Since our goal is to construct an influence tree with minimum IDS, the following result shows that the binary influence tree constructed using Huffman coding has the lowest IDS among all n-ary influence tree structures, where n > 2.
Theorem 1. Given an n-ary influence tree T, there exists a binary influence tree T2 with smaller IDS, i.e., s(T2) < s(T). Let pc be the unlearning probability of client c. Then, Huffman coding with n symbols representing the clients and weights {pc} for c = 1, . . . , n gives the optimal binary influence tree, i.e., s(THuffman) ≤ s(T2) for any binary influence tree T2 over the same group of clients.
With Theorem 1, we can use Huffman coding (Huffman, 1952) to construct the influence tree with the lowest IDS among all influence trees. In some real-life applications, clients' unlearning probabilities may be unknown. In such cases, we can assume that each client is equally likely to be unlearned and hence has the same unlearning probability. We show the influence tree for 8 clients with equal unlearning probabilities in Fig. 3a. A client (a leaf node) influences a sub-FL model if there is a path from that sub-FL model to the leaf node representing the client. We next propose a method named Multi-Models Training (MMT) that uses the sub-FL models to obtain a better initialization for the global model. MMT can be easily incorporated into the different stages of any existing federated learning framework (as depicted in Fig. 2); its details are given as follows.
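A minimal sketch of the Huffman construction behind Theorem 1, using Python's standard `heapq`. The nested-list tree encoding (leaves as client ids) is an assumption for illustration; the point is that clients with higher unlearning probability end up closer to the root, so their removal invalidates fewer sub-FL models:

```python
import heapq
import itertools

def huffman_influence_tree(unlearn_probs):
    """Build the binary influence tree via Huffman coding over the
    clients' unlearning probabilities (Theorem 1)."""
    counter = itertools.count()  # tie-breaker so equal weights never compare nodes
    heap = [(p, next(counter), client) for client, p in unlearn_probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, left = heapq.heappop(heap)    # two least-likely subtrees
        p2, _, right = heapq.heappop(heap)
        # Internal node = a sub-FL model trained on the union of the
        # clients in its two (disjoint) child subtrees.
        heapq.heappush(heap, (p1 + p2, next(counter), [left, right]))
    return heap[0][2]

# Client "a" is the most likely to request unlearning, so Huffman
# coding places it directly under the root: unlearning it leaves the
# entire remaining subtree usable for re-initialization.
tree = huffman_influence_tree({"a": 0.5, "b": 0.25, "c": 0.15, "d": 0.1})
print(tree)  # ['a', ['b', ['d', 'c']]]
```

This mirrors the classical optimal prefix-code argument: s(T) for a binary tree is the expected leaf depth, which Huffman coding minimizes.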
Initialization. As in BMT, the central server starts the standard FL training process by randomly initializing the global aggregated model. This randomly initialized global model is then shared with all clients. Each client updates the global model using its local training data and then shares the model update (updated model or gradients) with the central server. Each client also keeps a copy of the locally updated global model. In contrast to BMT, MMT additionally initializes each sub-FL model using the model updates of the clients associated with that sub-FL model.
FL Training. After receiving the first model updates, the central server aggregates them to obtain the aggregated global and sub-FL models. In each subsequent communication round, each client receives the updated global model and its sub-FL models from the central server and then trains them on its training data. After updating these models, each client shares the global and its sub-FL model updates with the central server. In addition, each client also updates its local model.
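As an illustration of the server-side aggregation step, the sketch below performs FedAvg-style weighted parameter averaging over a set of models; the dict-of-lists parameter representation and the function name are hypothetical, not the paper's implementation:

```python
def aggregate(models, weights=None):
    """Weighted parameter averaging (FedAvg-style) over model dicts,
    where each dict maps a parameter name to a flat list of floats."""
    weights = weights or [1 / len(models)] * len(models)
    return {
        name: [sum(w * m[name][i] for m, w in zip(models, weights))
               for i in range(len(models[0][name]))]
        for name in models[0]
    }

# Re-initialization example: after unlearning a client, the server
# averages the surviving sub-FL models (here two of them, equally
# weighted) instead of starting from a random global model.
sub_fl_a = {"layer": [1.0, 2.0]}   # e.g., trained on clients {1, 2}
sub_fl_b = {"layer": [3.0, 4.0]}   # e.g., trained on clients {3, 4}
init = aggregate([sub_fl_a, sub_fl_b], weights=[0.5, 0.5])
print(init)  # {'layer': [2.0, 3.0]}
```

In practice the weights would typically reflect how many clients (or how much data) contributed to each surviving model.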
Figure 2: Multi-Models Training (MMT) in the different stages of any federated learning framework.

Figure 3: (a) Influence tree (b) Affected nodes (c) Initialization (d) New influence tree

After unlearning a client, the remaining influence tree may no longer have the lowest IDS for the remaining clients. This leaves two options: create a new influence tree while reusing the earlier sub-FL models as much as possible (as shown in Fig. 3c), or keep using the existing influence tree, which may no longer be optimal but retains the sub-FL models trained over time. Lastly, the central server restarts the FL training process with the newly initialized global and sub-FL (if any) models (as shown in Fig. 3d), which are completely free from the influence of the unlearned client's data.