| venue | paper_openreview_id | paragraph_idx | section | content |
|---|---|---|---|---|
ICLR.cc/2025/Conference
|
MGZyUtaYUb
| 65
|
10 5
|
Time
|
ICLR.cc/2025/Conference
|
MGZyUtaYUb
| 66
|
20 8
|
5.3 COMBINING CONSTRUCTION AND IMPROVEMENT: BEST OF BOTH WORLDS?
|
ICLR.cc/2025/Conference
|
MGZyUtaYUb
| 67
|
20 8
|
While constructive policies can build solutions in seconds, their performance is often limited, even with advanced decoding schemes such as sampling or augmentations. On the other hand, improvement methods are more suitable for larger computing budgets. We benchmark models on TSP with 50 nodes: the AR constructive method POMO (Kwon et al., 2020) and the improvement methods DACT (Ma et al., 2021) and NeuOpt (Ma et al., 2024). In the original implementation, DACT and NeuOpt started from a randomly constructed solution. To further demonstrate the flexibility of RL4CO, we show that bootstrapping improvement methods with constructive ones enhances convergence speed. Fig. 5 shows that bootstrapping with a pre-trained POMO policy significantly enhances the convergence speed. To further investigate the performance, we report the Primal Integral (PI) (Berthold, 2013; Vidal, 2022; Thyssens et al., 2023), which evaluates the evolution of solution quality over time. Improvement methods alone, such as DACT and NeuOpt, achieve 2.99 and 2.26 respectively, while sampling from POMO achieves 0.08. This shows that the "area under the curve" can be better for constructive methods even if their final solution is worse. Bootstrapping with POMO then improves DACT and NeuOpt to 0.08 and 0.04 respectively, showing the benefits of modularity and hybridization of different components.
|
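The Primal Integral discussed above can be computed from a trace of incumbent solutions over time. Below is a minimal, hypothetical sketch (not RL4CO's actual implementation; the function name and signature are ours): it treats the incumbent's relative gap as a step function of time and integrates it, so faster early progress yields a smaller value even if the final solution is not the best.

```python
import numpy as np

def primal_integral(times, costs, best_known, t_max=None):
    """Area under the relative-gap curve of the incumbent solution over time.

    times: instants at which a new incumbent was found (sorted, starting at 0)
    costs: incumbent objective values at those instants
    best_known: best known (or optimal) objective value, used as reference
    """
    times = np.asarray(times, dtype=float)
    gaps = (np.asarray(costs, dtype=float) - best_known) / best_known
    if t_max is None:
        t_max = times[-1]
    total = 0.0
    for i in range(len(times)):
        # each incumbent's gap holds until the next improvement (or t_max)
        t_end = times[i + 1] if i + 1 < len(times) else t_max
        total += gaps[i] * (t_end - times[i])
    return total / t_max
```

Under this convention, a method that reaches a small gap almost immediately scores a low PI, matching the "area under the curve" intuition in the text.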
ICLR.cc/2025/Conference
|
MGZyUtaYUb
| 68
|
6 DISCUSSION
|
Limitations and Future Directions While RL4CO is an efficient and modular library specialized in CO problems, it might not be suitable for other tasks due to a number of area-specific optimizations, and we do not expect it to seamlessly integrate with, for instance, OpenAI Gym wrappers without some modifications. Another limitation of the library is its scope so far, namely RL. Eventually, creating a new library to support supervised methods as a comprehensive "AI4CO" codebase could benefit the whole NCO community. We additionally identify in Foundation Models for CO and related scalable architectures a promising area of future research to overcome generalization issues across tasks and distributions, for which we provided some early clues in Appendix E.8.
|
ICLR.cc/2025/Conference
|
MGZyUtaYUb
| 69
|
6 DISCUSSION
|
Long-term Plans RL4CO is an active library that has already garnered much attention from the community, with over 400 stars on GitHub. We thank the community contributors who have helped us build RL4CO. Our long-term plan is to become the go-to RL-for-CO benchmark library. For this purpose, we created a community Slack workspace (link available upon acceptance) that has attracted more than 200 researchers. We are committed to helping resolve issues and questions from the community and are actively engaged in discussion. It is our hope that our work will ultimately benefit the NCO field with new ideas and collaborations. More details are available in Appendix A.
|
ICLR.cc/2025/Conference
|
MGZyUtaYUb
| 70
|
6 DISCUSSION
|
Figure 5: Bootstrapping improvement with constructive methods.
|
ICLR.cc/2025/Conference
|
D2EdWRWEQo
| 16
|
2 BACKGROUND
|
Building upon TFEP, Wirnsberger et al. (2020) introduced Learned Free Energy Perturbation (LFEP), using normalizing flows to learn the mapping M. Instead of relying on hand-crafted transformations, LFEP learns M by optimizing a neural network to maximize the overlap between the transformed distribution B′ and the target distribution B. This approach provides a data-driven way to enhance free energy estimations without the need for explicit intermediate states or extensive physical intuition.
|
ICLR.cc/2025/Conference
|
MGZyUtaYUb
| 71
|
7 CONCLUSION
|
This paper introduces RL4CO, a modular, flexible, and unified Reinforcement Learning (RL) for Combinatorial Optimization (CO) benchmark. We provide a comprehensive taxonomy from environments to policies and RL algorithms that translates from theory to practice at the software level. Our benchmark library aims to fill the gap in unifying implementations in RL for CO by utilizing several best practices, with the goal of providing researchers and practitioners with a flexible starting point for NCO research. We provide several experimental results with insights and discussions that can help identify promising research directions. We hope that our open-source library will provide a solid starting point for NCO researchers to explore new avenues and drive advancements. We warmly welcome researchers and practitioners to actively participate and contribute to RL4CO.
|
ICLR.cc/2025/Conference
|
4ftMNGeLsz
| 1
|
Title
|
FEDGO: FEDERATED ENSEMBLE DISTILLATION WITH GAN-BASED OPTIMALITY
|
ICLR.cc/2025/Conference
|
4ftMNGeLsz
| 2
|
Abstract
|
For federated learning in practical settings, a significant challenge is the considerable diversity of data across clients. To tackle this data heterogeneity issue, federated ensemble distillation has been recognized as effective. Federated ensemble distillation requires an unlabeled dataset on the server, which could either be an extra dataset the server already possesses or a dataset generated by training a generator through a data-free approach. Then, it proceeds by generating pseudo-labels for the unlabeled data based on the predictions of client models and training the server model using this pseudo-labeled dataset. Consequently, the efficacy of ensemble distillation hinges on the quality of these pseudo-labels, which, in turn, poses a challenge of appropriately assigning weights to client predictions for each data point, particularly in scenarios with data heterogeneity. In this work, we suggest a provably near-optimal weighting method for federated ensemble distillation, inspired by theoretical results in generative adversarial networks (GANs). Our weighting method utilizes client discriminators, trained at the clients based on a generator distributed from the server and their own datasets. Our comprehensive experiments on various image classification tasks illustrate that our method significantly improves the performance over baselines, under various scenarios with and without an extra server dataset. Furthermore, we provide an extensive analysis of the additional communication cost, privacy leakage, and computational burden caused by our weighting method.
|
ICLR.cc/2025/Conference
|
4ftMNGeLsz
| 3
|
1 INTRODUCTION
|
Federated learning (FL) (McMahan et al., 2017) has received substantial attention in both industry and academia as a promising distributed learning approach. It enables numerous clients to collaboratively train a global model without sharing their private data. A major concern in deploying FL in practice is severe data heterogeneity across clients. In the real world, it is probable that clients possess non-IID (not independent and identically distributed) data distributions. It is known that data heterogeneity results in unstable convergence and performance degradation (Li et al., 2019; Wang et al., 2020b; Li & Zhan, 2021; Kairouz et al., 2021; Huang et al., 2023; Karimireddy et al., 2020).
|
ICLR.cc/2025/Conference
|
4ftMNGeLsz
| 4
|
1 INTRODUCTION
|
To address the data heterogeneity issue, various approaches have been taken, including regularizing the objectives of the client models (Karimireddy et al., 2020; Li et al., 2020; Liang et al., 2019; Yao et al., 2021; Mendieta et al., 2022), normalizing features or weights (Dong et al., 2022; Kim et al., 2023), utilizing past round models (Yao et al., 2021; Wang et al., 2023b), sharing feature information (Dai et al., 2023; Yang et al., 2024; Tang et al., 2024), introducing personalized layers (Huang et al., 2023), and learning the average input-output relation of client models through ensemble distillation (Chang et al., 2019; Gong et al., 2021; Deng et al., 2023; Sattler et al., 2020; Lin et al., 2020; Cho et al., 2022; Xing et al., 2022; Park et al., 2024; Wang et al., 2023a; Tang et al., 2022; Zhang et al., 2022; 2023b). In particular, the last approach, federated ensemble distillation, has recently gained significant attention for its effectiveness in mitigating data heterogeneity and for its advantage of being effectively applicable to heterogeneous client models. It requires an unlabeled dataset at the server, for which pseudo-labels are created based on client predictions. By training on this pseudo-labeled dataset at the server, the server distills the knowledge from the clients. This additional dataset can be either public (Chang et al., 2019; Gong et al., 2021; Deng et al., 2023; Sattler et al., 2020), held only by the server due to its exceptional data collection capability (Lin et al., 2020; Cho et al., 2022; Xing et al., 2022; Park et al., 2024), or obtained through a data-free approach (Wang et al., 2023a; Tang et al., 2022; Zhang et al., 2022; 2023b). Note that the performance of ensemble distillation depends on the quality of the pseudo-labels, which ultimately translates into a problem of appropriately assigning weights to client predictions for each data point, particularly in situations of data heterogeneity.
In this research stream of federated ensemble distillation, early studies like FedDF (Lin et al., 2020) applied uniform weighting. Subsequently, algorithms such as Fed-ET (Cho et al., 2022), FedHKT (Deng et al., 2023), FedDS (Park et al., 2024), and DaFKD (Wang et al., 2023a) emerged, which utilize metrics like variance, entropy, and the judgement of a client discriminator as indicators of confidence in client predictions for weighting. However, analysis regarding the rationale behind optimal weighting remains scarce.
• We propose FedGO: Federated Ensemble Distillation with GAN-based Optimality. Our algorithm incorporates a novel weighting method using the client discriminators that are trained at the clients based on the generator distributed from the server.
• The optimality of our proposed weighting method is theoretically justified. We define an optimal model ensemble and show that a knowledge-distilled model from an optimal model ensemble achieves the optimal performance, within an inherent gap due to the difference between the spanned hypothesis class of ensemble models and the hypothesis class of a single model. Then, based on the theoretical result for vanilla GAN (Goodfellow et al., 2014), we show that our weighting method using client discriminators constitutes an optimal model ensemble.
• We experimentally demonstrate significant improvements of FedGO over existing research both in final performance and convergence speed on multiple image datasets (CIFAR-10/100, ImageNet100). In particular, we demonstrate performance across various scenarios, including cases where the server holds an unlabeled dataset different from the client datasets and where the server does not hold an unlabeled dataset and hence some data-free approaches are taken. Furthermore, we provide a comprehensive analysis of communication cost, privacy leakage, and computational burden for the proposed method.
|
ICLR.cc/2025/Conference
|
4ftMNGeLsz
| 5
|
1 INTRODUCTION
|
In this paper, we suggest a novel weighting method for federated ensemble distillation that outperforms previous methods (Fig. 1), with theoretically justified optimality based on some results in generative adversarial networks (GANs) (Goodfellow et al., 2014). Our main contributions are summarized in the following:
|
ICLR.cc/2025/Conference
|
4ftMNGeLsz
| 6
|
1 INTRODUCTION
|
Figure 1: A toy example of decision boundaries of aggregated models. Each point represents data, and its color represents the label. The background color represents the decision boundary of each model in the RGB channels. The oracle decision boundary, shown by the black lines, corresponds to the x-axis and y-axis. For aggregated models, we consider the parameter-averaged model (McMahan et al., 2017) and ensemble-distilled models using uniform weighting (Lin et al., 2020), variance weighting (Cho et al., 2022), entropy weighting (Deng et al., 2023; Park et al., 2024), domain-aware weighting (Wang et al., 2023a), and ours. Detailed settings are provided in Appendix E.1.
|
ICLR.cc/2025/Conference
|
4ftMNGeLsz
| 7
|
2 SYSTEM MODEL AND RELATED WORK
|
Federated Learning In federated learning, the goal is to cooperatively train a global model based on data distributed among K clients, by exchanging models between a server and the clients.
|
ICLR.cc/2025/Conference
|
4ftMNGeLsz
| 8
|
2 SYSTEM MODEL AND RELATED WORK
|
We focus on classification tasks in this paper. Let X denote the data domain and y denote the labeling function that outputs the label of the data x ∈ X. A model f(·; θ) is parameterized by θ ∈ Θ, where Θ is the set of model parameters, and H = {h | h(·) = f(·; θ), θ ∈ Θ} denotes the class of parameterized models. For a distribution q on X, h∗_q denotes the expected loss minimizer on q, i.e., h∗_q ≜ arg min_{h∈H} L_q(h), where L_q(h) = E_q[l(h(x), y(x))] and l is the loss function. Client k possesses a (labeled) dataset S_k of n_k data points, sampled i.i.d. over X according to distribution p_k. Then p = Σ_{k=1}^K π_k · p_k, where π_k = n_k / Σ_{k′=1}^K n_{k′}, is the average of the client data distributions. The objective of federated learning is given as follows:

min_{h∈H} L_p(h) = min_{h∈H} E_p[l(h(x), y(x))] (1)
= min_{h∈H} Σ_{k=1}^K π_k · E_{p_k}[l(h(x), y(x))] = min_{h∈H} Σ_{k=1}^K π_k · L_{p_k}(h). (2)

In each communication round t, a subset A_t of clients downloads the current server model and trains it based on S_k with the objective of minimizing L_{p_k}(h). Then it sends the trained model to the server. The server aggregates these client models to update the server model, and the aforementioned procedure is repeated at the next communication round. For the aggregation of client models at the server, the FedAVG algorithm (McMahan et al., 2017) constructs the server model parameter θ^t_s for round t as the average of the model parameters θ^t_k for k ∈ A_t received in round t (line 7 of Algorithm 1). When the client data distributions are homogeneous, each p_k is the same as p and hence L_{p_k} is the same as L_p. However, when the client data distributions are heterogeneous, L_{p_k} and L_p are not the same, leading to a significant degradation in the convergence rate of FedAVG to the global optimum (Li et al., 2019).
|
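The FedAVG aggregation in line 7 of Algorithm 1 (a data-size-weighted average of client parameters) can be sketched as follows. This is an illustrative NumPy version operating on flattened parameter vectors, with a function name of our choosing, not any particular library's API.

```python
import numpy as np

def fedavg_aggregate(client_params, client_sizes):
    """FedAVG server update: theta_s = sum_k (n_k / sum_i n_i) * theta_k."""
    sizes = np.asarray(client_sizes, dtype=float)
    weights = sizes / sizes.sum()                       # pi_k restricted to the sampled clients
    stacked = np.stack([np.asarray(p, dtype=float) for p in client_params])
    return (weights[:, None] * stacked).sum(axis=0)     # weighted parameter average
```

A client holding three times more data pulls the average three times harder, which is exactly what makes FedAVG sensitive to heterogeneous client distributions.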
ICLR.cc/2025/Conference
|
4ftMNGeLsz
| 9
|
2 SYSTEM MODEL AND RELATED WORK
|
In the following, we introduce federated ensemble distillation using an unlabeled dataset on the server to address client data heterogeneity.
|
ICLR.cc/2025/Conference
|
4ftMNGeLsz
| 18
|
12 dH△H(pk, p) + λk + O
|
In Goodfellow et al. (2014), the authors showed that the output of an optimal discriminator against a fixed generator can be expressed in terms of the distributions of real and fake images.

Theorem 1. (Goodfellow et al., 2014, Proposition 1) For a fixed generator G, let p_g and p_data denote the density functions of the generated distribution by G and the real data distribution, respectively. Then the output of an optimal discriminator D for input data x is given as follows:

D(x) = p_data(x) / (p_data(x) + p_g(x)). (5)

Using the above result, we develop a method of assigning weights to client predictions in Section 3.
|
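Theorem 1 can be checked pointwise: given the two density values at x, the optimal discriminator output and its odds recover the density ratio. A small illustrative sketch (the function names are ours, not from the paper's code):

```python
def optimal_discriminator(p_data_x, p_g_x):
    """Goodfellow et al. (2014), Prop. 1: D*(x) = p_data(x) / (p_data(x) + p_g(x))."""
    return p_data_x / (p_data_x + p_g_x)

def odds(d):
    """Odds of a discriminator output in (0, 1); for D*, odds(D*(x)) = p_data(x) / p_g(x)."""
    return d / (1.0 - d)
```

For example, p_data(x) = 0.3 and p_g(x) = 0.1 give D*(x) = 0.75 with odds 3.0, i.e., exactly the density ratio that the weighting method in Section 3 exploits.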
ICLR.cc/2025/Conference
|
4ftMNGeLsz
| 10
|
2 SYSTEM MODEL AND RELATED WORK
|
Federated Ensemble Distillation To address client data heterogeneity, there has been a line of research on federated ensemble distillation using an unlabeled dataset on the server. This unlabeled dataset may either be available from the outset (Lin et al., 2020; Cho et al., 2022; Deng et al., 2023; Park et al., 2024) or produced through a generator trained as a part of FL by taking a data-free approach (Rasouli et al., 2020; Guerraoui et al., 2020; Li et al., 2022; Wang et al., 2023c; Fan & Liu, 2020; Behera et al., 2022; Hardy et al., 2019; Xiong et al., 2023; Zhang et al., 2021; 2023a; Wang et al., 2023a; Zhang et al., 2022; 2023b). With the unlabeled dataset, the server model undergoes additional training to learn the average input-output relationship of the client models.
|
ICLR.cc/2025/Conference
|
4ftMNGeLsz
| 11
|
2 SYSTEM MODEL AND RELATED WORK
|
Algorithm 1 describes this federated ensemble distillation, when the client and server model structures are the same. Here σ represents the softmax function, and KL denotes the Kullback-Leibler divergence. If the model output already includes the softmax activation, then the softmax function is omitted in lines 10 and 11. After averaging client model parameters in line 7, the performance of the server model depends on the quality of the pseudo-labels, as the server model undergoes additional training with those pseudo-labels. Moreover, the quality of the pseudo-labels ỹ(·) relies on designing the weighting function w_k(·), which determines the weighting of client k's output. Therefore, designing a better-performing ensemble distillation during the server update ultimately boils down to designing a better-performing weighting function.
|
ICLR.cc/2025/Conference
|
4ftMNGeLsz
| 12
|
2 SYSTEM MODEL AND RELATED WORK
|
For the weighting function, FedDF (Lin et al., 2020) uses uniform weights for each client, i.e., w_k(x) = 1/|A_t| for all k in A_t. Subsequently, algorithms assigning higher weights to the outputs of more confident clients have been proposed. In Fed-ET (Cho et al., 2022), higher weights are assigned to models with larger output logit variance, i.e., w_k(x) = Var(f(x; θ^t_k)) / Σ_{i∈A_t} Var(f(x; θ^t_i)). FedHKT (Deng et al., 2023) and FedDS (Park et al., 2024) allocate higher weights to models with smaller output softmax entropy, i.e., w_k(x) = exp(−Entropy(σ(f(x; θ^t_k)))/τ) / Σ_{i∈A_t} exp(−Entropy(σ(f(x; θ^t_i)))/τ), where τ is the temperature parameter. In DaFKD (Wang et al., 2023a), while training a global generator and client discriminators at each round, ensemble distillation is performed on an unlabeled dataset generated by the global generator, assigning higher weights to models with larger discriminator outputs, i.e., w_k(x) = D^t_k(x) / Σ_{i∈A_t} D^t_i(x), where D^t_k is client k's discriminator against the global generator at round t.
|
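The three confidence-based weighting rules above can be contrasted in a few lines. This is an illustrative sketch on per-client logit vectors for a single data point; it mirrors the formulas in the text, not any released implementation, and the helper names are ours.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def variance_weights(client_logits):
    """Fed-ET style: weight proportional to the variance of each client's logits."""
    v = np.array([np.var(l) for l in client_logits])
    return v / v.sum()

def entropy_weights(client_logits, tau=1.0):
    """FedHKT/FedDS style: higher weight for lower softmax entropy (temperature tau)."""
    ents = np.array([-(p * np.log(p + 1e-12)).sum()
                     for p in (softmax(np.asarray(l)) for l in client_logits)])
    w = np.exp(-ents / tau)
    return w / w.sum()

def discriminator_weights(d_outputs):
    """DaFKD style: weight proportional to each client's discriminator output."""
    d = np.asarray(d_outputs, dtype=float)
    return d / d.sum()
```

All three normalize a per-client confidence score over the sampled clients; they differ only in which score they treat as confidence.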
ICLR.cc/2025/Conference
|
4ftMNGeLsz
| 13
|
2 SYSTEM MODEL AND RELATED WORK
|
For theoretical aspects, generalization bounds of an ensemble model are presented in Lin et al. (2020); Cho et al. (2022); Wang et al. (2023a) for a binary classification task under ℓ1 loss. For n_k = n_K for all k, a generalization bound for an ensemble model with fixed weights α_1, ..., α_K with Σ_k α_k = 1 is given as follows (Lin et al., 2020; Cho et al., 2022): for any δ ∈ (0, 1), the following holds with probability 1 − δ:

L_p(Σ_{k=1}^K α_k h∗_{p̂_k}) ≤ Σ_{k=1}^K α_k · [L_{p̂_k}(h∗_{p̂_k}) + 1/2 d_{H△H}(p_k, p) + λ_k + O(·)] (3)

Algorithm 1 Federated learning with K clients for T communication rounds, with ensemble distillation exploiting an unlabeled dataset on the server. Client k possesses n_k data points, and the fraction C of clients participates in each communication round. f(·; θ) stands for the model with parameter θ, and µ stands for the step size.
Require: Client labeled datasets {S_k}_{k=1}^K, server unlabeled dataset U, initial server model f(·, θ^0_s) with parameter θ^0_s
1: Initialize server model f(·, θ^0_s)
2: for communication round t = 1 to T do
3:   A_t ← sample ⌊C · K⌋ clients
4:   parfor client k ∈ A_t do
5:     θ^t_k ← ClientUpdate(θ^{t−1}_s, S_k)  ▷ Gradient update of θ^{t−1}_s with S_k
6:   end parfor
7:   θ^t_s ← Σ_{k∈A_t} (n_k / Σ_{i∈A_t} n_i) · θ^t_k
8:   for server train epoch e = 1 to E_s do
9:     for unlabeled minibatch u ∈ U do
10:      ỹ(u) ← σ(Σ_{k∈A_t} w_k(u) · f(u; θ^t_k))  ▷ Label as a weighted sum of client predictions
11:      θ^t_s ← θ^t_s − µ · ∇_{θ^t_s} KL(ỹ(u), σ(f(u; θ^t_s)))  ▷ Ensemble distillation
12:    end for
13:  end for
14: end for
15: return f(·, θ^T_s)
|
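The server-side distillation step of Algorithm 1 (lines 10-11: weighted pseudo-labeling, then a KL-minimizing update) can be illustrated numerically. This is a hypothetical NumPy sketch of one minibatch element with the gradient step left abstract; names are ours.

```python
import numpy as np

def pseudo_label(client_probs, weights):
    """Line 10: y~(u) = weighted sum of client prediction vectors (assumed softmaxed)."""
    client_probs = np.asarray(client_probs, dtype=float)   # shape (K, num_classes)
    w = np.asarray(weights, dtype=float)[:, None]          # shape (K, 1)
    return (w * client_probs).sum(axis=0)

def kl_divergence(p, q, eps=1e-12):
    """Line 11 loss: KL(y~ || server prediction), driven to zero by the server update."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float((p * np.log((p + eps) / (q + eps))).sum())
```

With convex weights, the pseudo-label stays a valid probability vector, and the KL term vanishes exactly when the server prediction matches the ensemble's label.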
ICLR.cc/2025/Conference
|
4ftMNGeLsz
| 14
|
12 dH△H(pk, p) + λk + O
|
Here, p̂_k is the empirical distribution obtained by sampling n_k data points i.i.d. according to p_k, d_{H△H} denotes the discrepancy between two distributions, λ_k = inf_h (L_{p_k}(h) + L_p(h)), and τ_H is the growth function bounded by a polynomial of the VC-dimension of H.
|
ICLR.cc/2025/Conference
|
4ftMNGeLsz
| 15
|
12 dH△H(pk, p) + λk + O
|
On the other hand, a generalization bound for a weighted ensemble model with weight function w_k(x) = D^t_k(x) / Σ_{i∈A_t} D^t_i(x) is given as follows (Wang et al., 2023a): for any δ ∈ (0, 1) and σ > 0, the following holds with probability 1 − δ:

L_p(Σ_{k=1}^K w_k · h∗_{p̂_k}) ≤ (K + 1) · Σ_{k=1}^K (1/K) · L_{p̂_k}(h∗_{p̂_k}) + · · · . (4)

The above bounds relate the loss of an ensemble model (the LHS of (3) and (4)) to the average empirical loss of the client models (the first term on the RHS of (3) and (4)). The proofs of these bounds rely on results in domain adaptation theory for binary classification [Shalev-Shwartz & Ben-David, Theorem 6.11; Ben-David et al., Lemma 3]. Note that the bound (3) assumes a fixed weight per client irrelevant to data points, hence there is a lack of analysis for assigning varying weights per data point. The bound (4) assumes a specific weighting function of each data point, but it is too loose because it becomes vacuous as K increases. Consequently, guidance on determining appropriate weights for each client per data point is limited. Moreover, in federated ensemble distillation, our ultimate interest is in the loss of the server model, knowledge-distilled from the ensemble model. Note that the hypothesis class of ensemble models is in general larger than that of single models, and hence there exists an inherent gap between the losses of an ensemble model and the knowledge-distilled model. However, the above bounds do not provide an analysis of this gap.
|
ICLR.cc/2025/Conference
|
4ftMNGeLsz
| 16
|
12 dH△H(pk, p) + λk + O
|
In Section 3.1, we define an optimal model ensemble and show that the server model knowledge-distilled from an optimal model ensemble achieves the optimal loss within the gap arising from the distillation step, which depends on the inherent difference between the hypothesis classes of the server model and the ensemble model, along with the distribution discrepancy between the average client distribution p and the distribution p_s of unlabeled data on the server.
|
ICLR.cc/2025/Conference
|
4ftMNGeLsz
| 17
|
12 dH△H(pk, p) + λk + O
|
Generative Adversarial Network Generative adversarial networks (GANs) are a class of powerful generative models composed of a generator and a discriminator (Goodfellow et al., 2014; Gulrajani et al., 2017; Radford et al., 2015; Chen et al., 2016; Zhu et al., 2017; Choi et al., 2018; Karras et al., 2019). They are trained in an unsupervised manner, requiring no class labels. The discriminator aims to distinguish between real images from the dataset and fake images generated by the generator. Meanwhile, the generator strives to produce images that can fool the discriminator.
|
ICLR.cc/2025/Conference
|
4ftMNGeLsz
| 19
|
3 PROPOSED METHOD
|
In this section, we propose a weighting method for federated ensemble distillation. First, theoretical results are presented in Section 3.1. In Section 3.1.1, we define an optimal model ensemble and give a bound on the loss of the server model knowledge-distilled from an optimal model ensemble. Next, in Section 3.1.2, we propose a client weighting method to construct an optimal model ensemble, based on Theorem 1. In Section 3.2, we introduce our FedGO algorithm, leveraging the theoretical results. We note that a generalization bound of an ensemble model with our proposed weighting method comparable with (3) is provided in Appendix C.

3.1 THEORETICAL RESULTS

3.1.1 ENSEMBLE DISTILLATION WITH OPTIMAL MODEL ENSEMBLE

We first define an optimal model ensemble.

Definition 1. For K clients, the ensemble of their models and weight functions {(h_k, w_k)}_{k=1}^K is said to be an optimal model ensemble if the following holds:

L_p(Σ_{k=1}^K w_k · h_k) = E_p[l(Σ_{k=1}^K w_k(x) · h_k(x), y(x))] ≤ min_{h∈H} L_p(h) = L_p(h∗_p). (6)

We remind that the objective of federated learning is to train a model that minimizes the expected loss over the average client distribution p, as shown in (1). If {(h_k, w_k)}_{k=1}^K is an optimal model ensemble, its expected loss over p is less than or equal to the minimum expected loss over p achievable by a single model, i.e., min_{h∈H} L_p(h).
|
ICLR.cc/2025/Conference
|
4ftMNGeLsz
| 20
|
3 PROPOSED METHOD
|
However, we cannot guarantee that a knowledge-distilled model from an optimal model ensemble would be optimal, i.e., achieve min_{h∈H} L_p(h), due to the following two reasons: 1) the ensemble model Σ_{k=1}^K w_k · h_k may lie outside the hypothesis class H of a single model, and 2) the distribution used for knowledge distillation (the distribution p_s of unlabeled data on the server) can be different from p. In the following theorem, we present a bound on the expected loss over p of a single model by taking these factors into account. For two hypotheses h, h′ ∈ H and a distribution q over X, the expected difference between h and h′ over q, denoted L_q(h, h′), is defined as follows:

L_q(h, h′) ≜ E_q[l(h(x), h′(x))]. (7)

Theorem 2. (Informal) Let H̄ ≜ {Σ_{k=1}^K w_k · h_k | h_j ∈ H, w_j : X → [0, 1], j = 1, · · · , K, x ∈ X} be the spanned hypothesis class, p_s be a distribution on X, and {(h_k, w_k)}_{k=1}^K be an ensemble of client models and weight functions. Then for any h ∈ H, the following holds:

L_p(h) ≤ L_p(Σ_{k=1}^K w_k · h_k) + L_{p_s}(h, Σ_{k=1}^K w_k · h_k) + 1/2 d_{H̄△H̄}(p, p_s).
|
ICLR.cc/2025/Conference
|
4ftMNGeLsz
| 21
|
12 d ¯H△ ¯H(p, ps).
|
Corollary 1 demonstrates the power of an optimal model ensemble. If an optimal model ensemble is constituted, the difference between the expected loss of the server model over p and the minimum expected loss L_p(h∗_p) = min_{h∈H} L_p(h) is bounded by the distillation loss, which depends on the inherent difference between the hypothesis class H and the spanned hypothesis class H̄, along with the distribution discrepancy between p and p_s. In the next subsection, we propose a weighting method to constitute an optimal model ensemble.

3.1.2 CLIENT WEIGHTING FOR OPTIMAL MODEL ENSEMBLE

Let us assume that the server has models {h∗_{p_k}}_{k=1}^K trained by clients based on their respective data distributions {p_k}_{k=1}^K. In the following theorem, we present weight functions {w_k}_{k=1}^K such that the ensemble of {(h∗_{p_k}, w_k)}_{k=1}^K constitutes an optimal model ensemble.

Theorem 3. Let the loss function l be convex. Define the client weight functions {w∗_k}_{k=1}^K as follows:
|
ICLR.cc/2025/Conference
|
4ftMNGeLsz
| 22
|
12 d ¯H△ ¯H(p, ps).
|
w∗_k(x) ≜ n_k · p_k(x) / Σ_{i=1}^K n_i · p_i(x) = π_k · p_k(x) / Σ_{i=1}^K π_i · p_i(x). (10)

Then, the ensemble {(h∗_{p_k}, w∗_k)}_{k=1}^K is an optimal model ensemble, i.e., L_p(Σ_k w∗_k · h∗_{p_k}) ≤ L_p(h∗_p).
|
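With access to the true densities (which Theorem 3 assumes but clients cannot share in practice), the optimal weights of (10) are a simple normalized product of dataset size and density at x. An illustrative sketch with a function name of our choosing:

```python
import numpy as np

def theorem3_weights(densities_at_x, n_points):
    """Eq. (10): w*_k(x) = n_k p_k(x) / sum_i n_i p_i(x)."""
    scores = np.asarray(n_points, dtype=float) * np.asarray(densities_at_x, dtype=float)
    return scores / scores.sum()
```

If two clients have equal density at x but client 2 holds three times the data, client 2 receives weight 0.75: the weighting reflects each client's proportion of "having" x.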
ICLR.cc/2025/Conference
|
4ftMNGeLsz
| 23
|
12 d ¯H△ ¯H(p, ps).
|
Theorem 3 follows from some manipulations based on the convexity of the loss and the definitions of w∗_k's and h∗_{p_k}'s, and its full proof is provided in Appendix B.
|
ICLR.cc/2025/Conference
|
4ftMNGeLsz
| 24
|
12 d ¯H△ ¯H(p, ps).
|
Theorem 3 demonstrates that for a data point x, weighting according to each client's proportion of having x constitutes an optimal model ensemble. However, even though weighting each client according to Theorem 3 constitutes an optimal model ensemble, it is not feasible without knowing the data distribution p_k of each client. Theorem 4 addresses this issue based on Theorem 1 and provides hints on how to implement an optimal model ensemble.

Definition 2. (Odds) For ϕ ∈ (0, 1), its odds value Φ is defined as Φ(ϕ) = ϕ / (1 − ϕ).

Theorem 4. For a fixed generator G with generating distribution p_g, let D_k be an optimal discriminator for generator G and client k's distribution p_k. Assume that D_k outputs a value over (0, 1) using a sigmoid activation function, and let Φ_k(x) ≜ Φ(D_k(x)). Then, for x ∈ supp(p_g), the following holds:
|
ICLR.cc/2025/Conference
|
4ftMNGeLsz
| 25
|
12 d ¯H△ ¯H(p, ps).
|
n_k · Φ_k(x) / Σ_{i=1}^K n_i · Φ_i(x) = π_k · p_k(x) / Σ_{i=1}^K π_i · p_i(x) = w∗_k(x). (11)

Theorem 4 is a direct consequence of Theorem 1, because Φ_k(x) = p_k(x) / p_g(x) from Theorem 1. Theorem 4 indicates that once the server receives the optimal discriminators {D_k}_{k=1}^K trained by the clients, it can use those discriminators to calculate the weights for an optimal model ensemble. Note that the generator G only needs to generate a wide distribution capable of producing sufficiently diverse samples. Therefore, one can use an off-the-shelf generator pretrained on a large dataset.

3.2 PROPOSED ALGORITHM: FEDGO

By leveraging the theoretical results in Section 3.1, we propose FedGO, which constitutes an optimal model ensemble and performs knowledge distillation. The main technical novelty of FedGO lies in implementing the optimal weighting function w∗_k using client discriminators, which is a versatile technique that can be integrated into both of the following scenarios with/without an extra server dataset.

(S1) The server holds an extra unlabeled dataset.
(S2) The server holds no unlabeled dataset, thus a data-free approach is needed.
|
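Theorem 4 makes the weights of (10) computable from discriminator outputs alone. A minimal sketch under the theorem's assumptions (each D_k is the optimal discriminator against a shared generator; the function name is illustrative):

```python
def fedgo_weights(disc_outputs, n_points):
    """Eq. (11): w*_k(x) = n_k Phi_k(x) / sum_i n_i Phi_i(x), with Phi(d) = d / (1 - d)."""
    odds = [d / (1.0 - d) for d in disc_outputs]          # Phi_k(x) = p_k(x) / p_g(x)
    scores = [n * o for n, o in zip(n_points, odds)]
    total = sum(scores)
    return [s / total for s in scores]
```

With equal n_k, discriminator outputs 0.5 and 0.75 (odds 1 and 3) give weights 0.25 and 0.75: the density-ratio weighting of Theorem 3 is recovered without ever observing p_k, because the shared p_g(x) cancels in the normalization.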
ICLR.cc/2025/Conference
|
4ftMNGeLsz
| 26
|
12 d ¯H△ ¯H(p, ps).
|
For completeness, let us describe how the FedGO algorithm can be adapted depending on the cases (S1) and (S2). FedGO largely consists of two stages: pre-FL and main-FL. In the pre-FL stage, the server and the clients exchange the generator and the discriminators. First, the server obtains a generator through one of the following three methods, and distributes the generator to the clients.

(G1) Train a generator with an unlabeled dataset on the server, which is possible under (S1).
(G2) Load an off-the-shelf generator pretrained on a sufficiently rich dataset.
(G3) Train a generator through an FL approach, e.g., using FedGAN (Rasouli et al., 2020).
|
ICLR.cc/2025/Conference
|
4ftMNGeLsz
| 27
|
12 d ¯H△ ¯H(p, ps).
|
After receiving the generator, each client trains its own discriminator based on its dataset and sends the discriminator to the server.
|
ICLR.cc/2025/Conference
|
4ftMNGeLsz
| 28
|
12 d ¯H△ ¯H(p, ps).
|
The main-FL stage operates according to Algorithm 1, except that the server assigns weights for pseudo-labeling according to (11) using the client discriminators. For the server unlabeled dataset U used for distillation, which we call the distillation dataset, we consider the following cases:

(D1) Use the same dataset held by the server, which is possible under (S1).
(D2) Produce a distillation dataset using the generator from (G2).
(D3) Produce a distillation dataset using the generator from (G3).
|
ICLR.cc/2025/Conference
|
4ftMNGeLsz
| 29
|
12 d ¯H△ ¯H(p, ps).
|
A comprehensive analysis of the additional communication cost, privacy leakage, and computational burden according to the methods for obtaining the generator and distillation set is provided in Table 1, which shows the trade-off among the methods. In particular, an extra dataset at the server makes the communication cost and the client-side privacy and computational burden negligible, at the expense of server-side privacy leakage. In the absence of a server dataset, the use of an off-the-shelf generator makes all the burdens negligible, but it can be challenging to secure an off-the-shelf generator whose generation distribution is similar to the client data distribution. Lastly, the data-free approach (G3)+(D3) does not require an extra server dataset or an external generator, but it increases the communication burden and the privacy and computational burden on the client side.
|
ICLR.cc/2025/Conference
|
4ftMNGeLsz
| 30
|
12 d ¯H△ ¯H(p, ps).
|
A detailed description of FedGO and the explanation for Table 1 can be found in Appendices D and G, respectively.
|
ICLR.cc/2025/Conference
|
4ftMNGeLsz
| 31
|
12 d ¯H△ ¯H(p, ps).
|
Extra Server Dataset
|
ICLR.cc/2025/Conference
|
4ftMNGeLsz
| 32
|
12 d ¯H△ ¯H(p, ps).
|
Table 1: A comprehensive analysis of the additional communication burden, privacy leakage, and computational burden caused by the proposed weighting method, compared to FedAVG.
|
ICLR.cc/2025/Conference
|
4ftMNGeLsz
| 33
|
(1/2) d_{H̄△H̄}(p, p_s).
|
Method | Generator Preparation | Communication Cost | Privacy Leakage (Server-side) | Privacy Leakage (Client-side) | Client-side Computational Burden
(G1)+(D1) | Negligible | Negligible | Non-negligible | Negligible | Negligible
(G2)+(D2) | Negligible | Negligible | Negligible | Negligible | Negligible
(G3)+(D3) | Non-negligible | Non-negligible | Negligible | Non-negligible | Non-negligible
|
ICLR.cc/2025/Conference
|
4ftMNGeLsz
| 34
|
(1/2) d_{H̄△H̄}(p, p_s).
|
Distillation Dataset
|
ICLR.cc/2025/Conference
|
4ftMNGeLsz
| 35
|
4 EXPERIMENTAL RESULTS
|
In this section, we present the experimental results. All experimental results were obtained using five different random seeds, and the reported results are presented as the mean ± standard deviation.

4.1 EXPERIMENTAL SETTING

Datasets and FL Setup We employed the datasets CIFAR-10/100 (Krizhevsky et al., 2009) (MIT license) and downsampled ImageNet100 (ImageNet100 dataset; Chrabaszcz et al., 2017). Unless specified otherwise, the entire client dataset corresponds to half of the specified client dataset (half for each class), and each client dataset is sampled from the entire client dataset according to Dirichlet(α), akin to the setups in Lin et al. (2020); Cho et al. (2022). α is set to 0.1 and 0.05 to represent data-heterogeneous scenarios. The server dataset corresponds to half of the specified server dataset (half for each class) without labels. If not otherwise specified, the server dataset and the client datasets partition the same dataset disjointly. We considered 20 and 100 clients (20 clients if not specified otherwise), assuming that 40% of the clients participate in each communication round.
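A Dirichlet(α) label-skew split of the kind described above can be sketched as follows. The helper name and the remainder handling are illustrative assumptions, not the authors' exact procedure; Dirichlet proportions are drawn per class via normalized Gamma samples.

```python
import random

def dirichlet_partition(labels, num_clients, alpha, seed=0):
    """Sketch of a Dirichlet(alpha) label-skew split, akin to the setup
    of Lin et al. (2020): for each class, the class's samples are divided
    among clients according to Dirichlet(alpha) proportions."""
    rng = random.Random(seed)
    client_idx = [[] for _ in range(num_clients)]
    for cls in sorted(set(labels)):
        idx = [i for i, y in enumerate(labels) if y == cls]
        rng.shuffle(idx)
        # Dirichlet(alpha) proportions via normalized Gamma(alpha, 1) samples
        g = [rng.gammavariate(alpha, 1.0) for _ in range(num_clients)]
        total = sum(g)
        props = [x / total for x in g]
        start = 0
        for k in range(num_clients):
            # the last client takes the remainder to avoid rounding gaps
            end = len(idx) if k == num_clients - 1 \
                else start + int(props[k] * len(idx))
            client_idx[k].extend(idx[start:end])
            start = end
    return client_idx
```

A smaller α concentrates each class's mass on fewer clients, producing the higher heterogeneity referred to in the text.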
|
ICLR.cc/2025/Conference
|
4ftMNGeLsz
| 36
|
4 EXPERIMENTAL RESULTS
|
Models and Baselines For the architecture, we employed ResNet-18 (He et al., 2016) with batch normalization layers (Ioffe & Szegedy, 2015). For baselines, we considered the vanilla FedAVG (McMahan et al., 2017) and FedProx (Li et al., 2020), which do not perform ensemble distillation, and FedDF (Lin et al., 2020), FedGKD+ (Yao et al., 2021), and DaFKD (Wang et al., 2023a), which incorporate ensemble distillation. For comparison with other weighting methods, we considered the variance-based weighting method of Cho et al. (2022), the entropy-based methods of Deng et al. (2023) and Park et al. (2024), and the domain-aware method of Wang et al. (2023a), described in Section 2. As an upper bound on the performance, we also compared with central training, which trains the server model directly using the entire client dataset. FedGO and DaFKD require image generators and discriminators. For the generator, we considered the three approaches (G1), (G2), and (G3) in Section 3.2. For (G1) and (G3), we adopted the model architecture and training method proposed in WGAN-GP (Gulrajani et al., 2017). For (G2), we employed StyleGAN-XL (Sauer et al., 2022), pretrained on ImageNet (Krizhevsky et al., 2012). Unless specified otherwise, we assume (G1). For discriminators, we utilized a 4-layer CNN. More experimental details are provided in Appendix E.2.

4.2 RESULTS

Test Accuracy and Convergence Speed Table 2 shows the test accuracy of the server model and Table 3 presents the communication rounds required for the server model to achieve a target accuracy (Acctarget) for the first time, for the baselines and FedGO, on the CIFAR-10/100 and ImageNet100 datasets. Our FedGO algorithm exhibits the smallest performance gap from central training and the fastest convergence speed across all the datasets and data heterogeneity settings.
|
ICLR.cc/2025/Conference
|
4ftMNGeLsz
| 37
|
4 EXPERIMENTAL RESULTS
|
For CIFAR-10 with α = 0.1, our FedGO algorithm shows a performance improvement of over 7%p compared to the baselines. However, we observe a diminishing gain for CIFAR-100 and ImageNet100. We argue in Appendix F.1 that this is not due to a marginal improvement in FedGO's ensemble performance, but rather due to a larger distillation loss, as the server model struggles more to keep up with the performance of the ensemble model.
|
ICLR.cc/2025/Conference
|
4ftMNGeLsz
| 38
|
4 EXPERIMENTAL RESULTS
|
Comparison of Weighting Methods Figure 2 shows the ensemble test accuracy along communication rounds on the CIFAR-10 dataset, according to weighting methods. We evaluated ensemble test accuracy to compare the efficacy of each method in generating pseudo-labels. For the baseline weighting methods, we considered the uniform (Lin et al., 2020), the variance-based (Cho et al., 2022), the entropy-based (Deng et al., 2023; Park et al., 2024), and the domain-aware (Wang et al., 2023a) methods. For a fair comparison, all the baselines follow the same steps except for the weighting methods. The effectiveness of our weighting method is demonstrated by its ensemble test accuracy outperforming all the other weighting methods over all communication rounds.
|
ICLR.cc/2025/Conference
|
4ftMNGeLsz
| 39
|
4 EXPERIMENTAL RESULTS
|
Results with 100 Clients Figure 3 shows (a) the test accuracy of the server model, (b) the test accuracy of the ensemble model, and (c) the test loss of the ensemble model during the training process for K = 100 clients on the CIFAR-10 dataset with α = 0.05. The latter two measures were evaluated only for algorithms incorporating ensemble distillation. FedGO achieves a test accuracy of 69.52%, which is slightly lower than the 72.35% with 20 clients (Table 2). In comparison, FedAVG, FedProx, FedDF, FedGKD+, and DaFKD show significant performance drops to 33.40%, 35.07%, 44.36%, 45.44%, and 59.62%, respectively. This demonstrates that even in settings with a large number of clients, FedGO exhibits robust performance compared to the baselines.
|
ICLR.cc/2025/Conference
|
4ftMNGeLsz
| 40
|
4 EXPERIMENTAL RESULTS
|
In terms of the test accuracy and the test loss of the ensemble model, FedGO consistently demonstrates superior performance across all rounds compared to the baseline algorithms. Furthermore, unlike the baseline algorithms, whose test loss initially decreases but then becomes unstable and increases from early rounds, FedGO's loss converges with small deviation.
|
ICLR.cc/2025/Conference
|
4ftMNGeLsz
| 41
|
4 EXPERIMENTAL RESULTS
|
Table 2: Server test accuracy (%) of our FedGO and baselines on three image datasets at the 100-th communication round. A smaller α indicates higher heterogeneity.

More Results In Appendix F, we provide more experimental results. We report the ensemble test accuracy of the baselines and FedGO, demonstrating a larger improvement compared to test accuracy. We also provide results for cases where the server dataset is different from the client datasets, as well as for data-free approaches when no server dataset is available, showing significant performance gains over the baselines. Additionally, we report the performance of FedGO with a reduced server dataset and various discriminator training epochs, showing that even with only 20% of the server dataset, FedGO achieves a performance gain of 15%p over FedAVG. Furthermore, FedGO outperforms the baselines even with significantly fewer discriminator training epochs.
|
ICLR.cc/2025/Conference
|
4ftMNGeLsz
| 42
|
4 EXPERIMENTAL RESULTS
|
Table 3: The number of communication rounds to achieve a test accuracy of at least Acctarget.
|
ICLR.cc/2025/Conference
|
4ftMNGeLsz
| 43
|
4 EXPERIMENTAL RESULTS
|
Figure 2: Ensemble test accuracy (%) of FedGO and other baseline weighting methods over communication rounds on CIFAR-10 with α = 0.1 and α = 0.05.
|
ICLR.cc/2025/Conference
|
4ftMNGeLsz
| 44
|
4 EXPERIMENTAL RESULTS
|
FedGO with a Pretrained Generator If there exists a pretrained generator capable of generating sufficiently diverse data, the server can distribute the pretrained generator to clients instead of training a generator from scratch using the server's unlabeled dataset, which corresponds to the case (G2) in Section 3.2. This approach has the advantage of saving the server's computing resources required for training a generator.
|
ICLR.cc/2025/Conference
|
4ftMNGeLsz
| 45
|
4 EXPERIMENTAL RESULTS
|
In Appendix G, a comprehensive analysis of communication costs, privacy, and computational costsfor FedGO and baselines is provided.
|
ICLR.cc/2025/Conference
|
4ftMNGeLsz
| 46
|
4 EXPERIMENTAL RESULTS
|
Figure 3: Server test accuracy (%), test accuracy of the ensemble model (%), and test loss of the ensemble model of our FedGO and baselines for 100 clients on the CIFAR-10 dataset with α = 0.05.
|
ICLR.cc/2025/Conference
|
4ftMNGeLsz
| 47
|
4 EXPERIMENTAL RESULTS
|
Table 4 reports the performance of FedGO for various datasets with α = 0.05, when using a generator trained with the server's unlabeled dataset versus using a generator pretrained on ImageNet (Krizhevsky et al., 2012). We observe that utilizing the pretrained generator results in superior performance on CIFAR-10 and ImageNet100, whereas performance remains the same for CIFAR-100. A key factor contributing to the performance enhancement seems to be the larger model structure of the pretrained generator and its training with a richer dataset. This enhances the generalization performance of the client discriminators, enabling optimal weighting even for test data. However, since the assumption of Theorem 4 does not hold for x ∈ supp(p) \ supp(pg), the portion of data for which an optimal weighting is guaranteed decreases as the portion of p's support not covered by pg increases, potentially leading to performance degradation. We note that ImageNet100 is a subset of ImageNet, and ImageNet includes the classes of CIFAR-10 except deer. However, there are several classes of CIFAR-100 not included in ImageNet, which could possibly result in no performance gain.
|
ICLR.cc/2025/Conference
|
4ftMNGeLsz
| 48
|
4 EXPERIMENTAL RESULTS
|
Table 4: Server test accuracy (%) of our FedGO with a generator trained with the unlabeled dataset on the server (Scratch) and with an off-the-shelf generator pretrained on ImageNet (Pretrained) on three image datasets with α = 0.05.
|
ICLR.cc/2025/Conference
|
4ftMNGeLsz
| 49
|
5 CONCLUSION
|
We proposed the FedGO algorithm, which effectively addresses the challenge of client data heterogeneity. Our algorithm was developed based on a theoretical analysis of optimal ensemble distillation, and various experimental results demonstrated its high performance and fast convergence rate under various scenarios with and without an extra server dataset. Due to the page limit, the limitations and broader impact of our work are provided in Appendices H and I, respectively.
|
ICLR.cc/2025/Conference
|
kBybSUskz7
| 1
|
Title
|
REINFORCEMENT LEARNING AND HEURISTICS FOR HARDWARE-EFFICIENT CONSTRAINED CODE DESIGN
|
ICLR.cc/2025/Conference
|
kBybSUskz7
| 2
|
Abstract
|
Constrained codes enhance reliability in high-speed communication systems and optimize bit efficiency when working with non-binary data representations (e.g., three-level ternary symbols). A key challenge in their design is minimizing the hardware complexity of the translation logic that encodes and decodes data. We introduce a reinforcement learning (RL)-based framework, augmented by a custom L1 similarity-based heuristic, to design hardware-efficient translation logic, navigating the vast solution space of codeword assignments. By modeling the task as a bipartite graph matching problem and using logic synthesis tools to evaluate hardware complexity, our RL approach outperforms human-derived solutions and generalizes to various code types. Finally, we analyze the learned policies to extract insights into high-performing strategies.
|
ICLR.cc/2025/Conference
|
kBybSUskz7
| 3
|
1 INTRODUCTION
|
Reinforcement learning (RL) has been successfully applied to numerous tasks in chip design. Works such as DRiLLS (Hosny et al., 2020) and Retrieval-Guided RL (Goliaei et al., 2024) have demonstrated RL's ability to optimize tasks like logic synthesis, which involves converting high-level hardware descriptions into optimized gate-level representations to minimize circuit complexity and improve performance. Similarly, Mirhoseini et al. (2021) introduced a graph-based RL methodology to optimize circuit placement, significantly reducing layout generation time compared to traditional methods.
|
ICLR.cc/2025/Conference
|
kBybSUskz7
| 4
|
1 INTRODUCTION
|
Building on recent advances, we address a distinct challenge: constrained code design. Constrained codes restrict sequences or patterns in communication and data storage, ensuring they meet specific rules (e.g., avoiding certain bit patterns). Unlike typical digital design tasks where the logical specifications are fixed (e.g., an adder), our approach applies RL to determine valid and efficient codeword assignments that adhere to these constraints. Instead of simply optimizing existing logic, RL is used to dynamically generate the assignments in a lookup table structure, which presents a unique challenge for RL agents.
|
ICLR.cc/2025/Conference
|
kBybSUskz7
| 5
|
1 INTRODUCTION
|
Constrained codes are crucial for ensuring data reliability and efficiency in many systems. Run-Length Limited (RLL) codes (e.g., 8b10b) are implemented in standards like PCIe, USB, and Ethernet to prevent signal degradation by limiting long runs of similar bits (PCI-SIG, 2019). Similarly, Data-Bus Inversion (DBI) is used in memory interfaces like HBM and DDR to reduce power consumption (Hollis, 2009). Beyond these established uses, we propose applying constrained codes to further compress ultra-quantized AI models. Recent research on low-precision LLMs (Ma et al., 2024) shows promise for 1.58-bit (ternary) precision models. Constrained codes could enhance this by combining multiple weights (e.g., ternary symbols) into more efficient encodings.
|
ICLR.cc/2025/Conference
|
kBybSUskz7
| 7
|
1 INTRODUCTION
|
Figure 1: RLL Constrained Code Illustration

These functions are typically implemented as digital circuits, where logic complexity affects latency, power, and cost. From our encoder definition, there are 2^n input sequences to map to valid codewords. Let v be the number of valid codewords, where v ≥ 2^n, and we must select a subset of size 2^n from v. The number of possible mappings is C(v, 2^n) · (2^n)!. A straightforward approach is to define f and g by choosing one of these assignments and implementing it using a lookup table (LUT). Synthesis tools, however, cannot optimize these input mappings, leaving this critical step to manual methods, which are often inefficient and time-consuming. With a selected mapping, synthesis tools can then convert these mappings into hardware, providing critical metrics such as gate count, area, and delay.
|
ICLR.cc/2025/Conference
|
kBybSUskz7
| 8
|
1 INTRODUCTION
|
output, while the decoder, g : {0, 1}^{n+k} → {0, 1}^n, performs the inverse mapping, ensuring that g(f(x)) = x for any input x. Figure 1 illustrates the encoder-decoder structure for an RLL code.
|
ICLR.cc/2025/Conference
|
kBybSUskz7
| 9
|
1 INTRODUCTION
|
We demonstrate this process using Maximum Transition Avoidance (MTA) coding from GDDR6x memory, which maps binary data to PAM-4 symbols {-3, -1, +1, +3} and avoids maximum transitions (-3 to +3) (Sudhakaran et al., 2021). The MTA code uses n = 7, k = 1 and has v = 139 valid codewords, making exhaustive synthesis of all C(v, 2^n) · (2^n)! mappings impractical. Fig. 2 shows the gate counts from 1M randomized mappings versus a manually designed solution.
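The transition-avoidance constraint can be counted with a simple transfer-matrix sketch. Note that this counts only words with no forbidden step inside a single 4-symbol codeword; it ignores constraints across codeword boundaries and other MTA rules, so it does not reproduce the paper's v = 139.

```python
def count_mta_words(num_symbols=4):
    """Count length-`num_symbols` PAM-4 words with no -3 <-> +3 step
    inside the word. Simplified sketch: cross-codeword constraints
    are ignored, so the result differs from the paper's v = 139."""
    symbols = [-3, -1, 1, 3]
    counts = {s: 1 for s in symbols}  # one-symbol words
    for _ in range(num_symbols - 1):
        # a symbol t may follow s unless {s, t} is the forbidden pair
        counts = {
            t: sum(c for s, c in counts.items() if {s, t} != {-3, 3})
            for t in symbols
        }
    return sum(counts.values())
```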
|
ICLR.cc/2025/Conference
|
kBybSUskz7
| 10
|
1 INTRODUCTION
|
It becomes evident that there is a significant gap between purely random assignments and a hand-crafted solution. While hand-crafted designs can achieve better results, this process is both time-consuming and unscalable for larger problem instances without a clear algorithm. The vast solution space and inefficiency of random search highlight the need for automated optimization of codeword assignments.
|
ICLR.cc/2025/Conference
|
kBybSUskz7
| 11
|
1 INTRODUCTION
|
We propose an RL framework to automate exploration of the codeword assignment space. By framing the problem as a combinatorial optimization task, the RL agent iteratively learns optimal mappings using feedback on metrics like gate count, area, and latency. To improve efficiency, we incorporate a custom L1 similarity-based heuristic to prioritize promising mappings early. An external simulator evaluates the quality of each assignment, helping the agent navigate the large search space more effectively.
|
ICLR.cc/2025/Conference
|
kBybSUskz7
| 12
|
2 BACKGROUND & RELATED WORK
|
Our codeword selection and assignment problem can be formulated as a bipartite graph matching problem, where the goal is to assign input sequences to codewords while satisfying specific constraints. Formally, a bipartite graph G = (X, Y, E) consists of two disjoint sets of nodes: X, representing the unrestricted domain of input sequences (locations), and Y, representing the restricted domain of encoded outputs (codewords). Each node in X corresponds to a potential input bit sequence, while each node in Y corresponds to a valid encoded sequence. The edges E ⊆ X × Y represent valid assignments between input sequences and codewords, defining the possible mappings in the problem space.
|
ICLR.cc/2025/Conference
|
kBybSUskz7
| 13
|
2 BACKGROUND & RELATED WORK
|
encoder
|
ICLR.cc/2025/Conference
|
kBybSUskz7
| 14
|
2 BACKGROUND & RELATED WORK
|
Figure 2: Gate-Count Histogram of 1M Randomized LUT Mappings for the MTA 7-8 Code, with LUT Verilog Code Snippet

This bipartite graph fully specifies our encoder and decoder functions f, g. Although this particular problem is simple enough to explore the full solution space, consisting of C(9, 8) · 8! = 362,880 unique LUT constructions, as n and n + k scale, the solution space explodes, giving rise to a wide range of logic complexity results.
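The mapping count C(v, 2^n) · (2^n)! can be checked directly; the helper name below is ours.

```python
import math

def num_lut_mappings(v, n):
    """Number of ways to choose and order 2**n codewords out of v valid
    ones: C(v, 2**n) * (2**n)!, i.e. the falling factorial v!/(v-2**n)!."""
    m = 2 ** n
    return math.comb(v, m) * math.factorial(m)
```

For the 3b2s example (v = 9, n = 3) this gives the 362,880 constructions quoted above; for MTA (v = 139, n = 7) the count is astronomically larger, which is why exhaustive synthesis is impractical.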
|
ICLR.cc/2025/Conference
|
kBybSUskz7
| 15
|
2 BACKGROUND & RELATED WORK
|
Figure 3 shows the framework flow, where the RL agent selects actions (mapping inputs to codewords), and the environment returns rewards based on synthesis metrics. This approach outperforms random search and classical optimization techniques.
|
ICLR.cc/2025/Conference
|
kBybSUskz7
| 16
|
2 BACKGROUND & RELATED WORK
|
Figure 3: High-level RL Framework for our Bipartite Matching Codeword Assignment Problem
|
ICLR.cc/2025/Conference
|
kBybSUskz7
| 17
|
2 BACKGROUND & RELATED WORK
|
To help illustrate this, consider a PAM-3 (Pulse-Amplitude Modulation) code that converts binary data to ternary symbols (-1, 0, +1). PAM-3 encoders require ν = 3^s > 2^n, where s represents the number of PAM-3 symbols, each consisting of 2 bits, hence s = (n + k)/2. A simple encoder has n = 3, k = 1, s = 2. There are ν = 9 codewords, which yields 1 unassigned code since 2^n = 8. Figure 4 shows a bipartite graph formulation for the PAM-3 3b2s code. Note that the green colored nodes (Y) do not contain the "10" symbol.
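These parameter relations can be sanity-checked in a few lines; the helper below is our sketch for codes like the 3b2s example, not part of the paper.

```python
def pam3_params(n, k):
    """Sanity-check sketch for PAM-3 code parameters: s symbols of 2 bits
    each encode n data bits, requiring 3**s > 2**n valid codewords."""
    assert (n + k) % 2 == 0, "n + k must be even: each symbol carries 2 bits"
    s = (n + k) // 2
    v = 3 ** s
    assert v > 2 ** n, "not enough ternary codewords"
    return s, v, v - 2 ** n  # symbols, codewords, unassigned codewords
```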
|
ICLR.cc/2025/Conference
|
kBybSUskz7
| 18
|
2 BACKGROUND & RELATED WORK
|
Navigating large discrete solution spaces is a well-known challenge in combinatorial optimization. In the remainder of this section, we provide an overview of traditional and emerging approaches, highlighting their applications and limitations in solving our codeword selection and assignment problem.
|
ICLR.cc/2025/Conference
|
kBybSUskz7
| 19
|
2 BACKGROUND & RELATED WORK
|
Figure 4: Bipartite Graph Representation of Location to Code Assignments for Simple 3b2s PAM-3 Code
|
ICLR.cc/2025/Conference
|
kBybSUskz7
| 20
|
2.1 CLASSICAL COMBINATORIAL OPTIMIZATION ALGORITHMS
|
For combinatorial optimization problems, Simulated Annealing (SA) (Kirkpatrick et al., 1983) and Monte Carlo Tree Search (MCTS) (Browne et al., 2012) are proven techniques. SA employs probabilistic exploration of solution spaces by allowing occasional moves away from gradients to escape local minima. While effective, SA can be slow to converge. In contrast, MCTS incrementally builds a search tree through random sampling and refines its exploration based on promising branches. Although successful in various combinatorial optimization and game-playing tasks, MCTS was deemed impractical for our problem due to the large branching factor and computational overhead.
|
ICLR.cc/2025/Conference
|
kBybSUskz7
| 31
|
3.2 L1 SIMILARITY & HEURISTICS
|
During our exploration of RL-based methods, we found that incorporating domain-specific heuristics was crucial in guiding the agent toward optimal solutions. Specifically, we used an L1 similarity metric on the location and codeword binary data to assess the benefit of assigning codewords to locations (see Algorithm 1). This heuristic proved to be a strong predictor of optimal assignments, improving performance and convergence. We also applied it with non-RL approaches such as greedy search and simulated annealing.
|
ICLR.cc/2025/Conference
|
kBybSUskz7
| 32
|
3.2 L1 SIMILARITY & HEURISTICS
|
To illustrate the intuition behind our method, we present a simple example from the PAM-3 3b2s code. We show the first four locations (000, 001, 010, 011) and the assignments to valid 4-bit PAM-3 codewords while avoiding restricted symbols (10). At the fourth step, we compute the L1 distances from the current location to previous locations and do the same for all available codewords to previously assigned codes. By choosing 0101, we reduce the logic for assignments: y[2] = x[1] and y[0] = x[0], resulting in no gates being required (just pass-through).
|
ICLR.cc/2025/Conference
|
kBybSUskz7
| 21
|
2.2 LINEAR PROGRAMMING & SAT/SMT
|
Linear optimization techniques, including Linear Programming (LP) (Kuhn, 1955; Jonker & Volgenant, 1987), Integer Linear Programming (ILP) (Cunningham, 1976), and network flow algorithms (Edmonds & Karp, 1972), have been successfully applied to bipartite assignment and matching problems. These approaches rely on linear formulations and relaxations of the problem, seeking to optimize assignments between two sets (e.g., tasks and workers, or locations and resources) while minimizing or maximizing a given cost function. In the standard assignment problem, given two sets X and Y and a cost matrix C, where C_ij represents the cost of assigning element i ∈ X to element j ∈ Y, the objective is to minimize the total cost of assignment: min Σ_{i∈X} Σ_{j∈Y} C_ij x_ij, subject to the constraints: Σ_{j∈Y} x_ij = 1 for all i ∈ X, and Σ_{i∈X} x_ij = 1 for all j ∈ Y, with x_ij ∈ {0, 1}. The LP relaxation relaxes this constraint to x_ij ∈ [0, 1], making the problem tractable. These approaches assume that the assignment costs are readily available or precomputed, which is infeasible given our need to invoke the synthesis tool for cost evaluations. In contrast, our RL framework samples potential assignments incrementally, allowing it to learn which assignments lead to better outcomes while minimizing the number of synthesis evaluations required.
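For intuition, the standard assignment problem above can be solved by brute force on tiny instances. This sketch is illustrative only; LP relaxations or the Hungarian algorithm scale far better, and it assumes (unlike our setting) that the cost matrix C is fully precomputed.

```python
from itertools import permutations

def min_cost_assignment(C):
    """Brute-force solver for the standard assignment problem:
    C[i][j] is the cost of assigning i in X to j in Y; returns the
    minimizing permutation and its total cost."""
    n = len(C)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        cost = sum(C[i][perm[i]] for i in range(n))
        if cost < best_cost:
            best_perm, best_cost = perm, cost
    return best_perm, best_cost
```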
|
ICLR.cc/2025/Conference
|
kBybSUskz7
| 22
|
2.2 LINEAR PROGRAMMING & SAT/SMT
|
In addition to linear optimization, the problem of codeword selection and logic complexity minimization has also been approached using SAT/SMT solvers. Recent work (Anonymous, 2024) has demonstrated their potential for small- to medium-sized codes. Rather than using a traditional LUT or matching approach, this work focused on generating an efficient circuit structure (e.g., Sum of Products) while asserting constraints on the total number of minterms (i.e., gates). While the authors do show their methods attain solutions competitive with human solutions for the MTA code, they faced challenges scaling to larger codes.
|
ICLR.cc/2025/Conference
|
kBybSUskz7
| 23
|
2.3 NEURAL METHODS FOR COMBINATORIAL OPTIMIZATION, CODE DESIGN, AND LOGIC
|
SYNTHESIS

Over the past several years, there has been considerable progress in Neural Combinatorial Optimization (NCO) using RL and Graph Neural Networks (GNNs). For example, Georgiev & Liò (2021) apply RL and GNNs to solve bipartite matching through a flow-based formulation, while Dwivedi et al. (2021) extend these approaches to problems like the Traveling Salesman Problem (TSP) and bipartite matching. Manchanda et al. (2023) further enhance NCO techniques by integrating meta-learning with RL to improve generalization across problem distributions. However, all these approaches compute assignment costs inline, contrasting with our method, which relies on external evaluations for cost calculations.
|
ICLR.cc/2025/Conference
|
kBybSUskz7
| 24
|
2.3 NEURAL METHODS FOR COMBINATORIAL OPTIMIZATION, CODE DESIGN, AND LOGIC
|
Related to the design of codes, Liao et al. (2020) and Miloslavskaya et al. (2024) use RL to construct polar codes for wireless communication. Their goal is to improve error-correction performance by framing bit selection as a sequential decision-making task. In contrast, our objective focuses on minimizing hardware logic complexity, which introduces different challenges. Furthermore, their method computes costs inline. Finally, Qin et al. (2023) use RL to develop fountain codes.
|
ICLR.cc/2025/Conference
|
kBybSUskz7
| 25
|
2.3 NEURAL METHODS FOR COMBINATORIAL OPTIMIZATION, CODE DESIGN, AND LOGIC
|
As noted in the Introduction, recent work such as DRiLLS and related frameworks have successfully applied RL to synthesis optimization tasks. These approaches focus on optimizing fixed logical structures like adders and multipliers by adjusting synthesis parameters. In contrast, our work tackles the problem of optimizing constrained code design, where the RL agent determines the logical structure itself, offering a distinct set of challenges and opportunities for optimization.
|
ICLR.cc/2025/Conference
|
kBybSUskz7
| 26
|
3 METHODS & RL FRAMEWORK
|
As outlined in Fig. 3, our RL framework models the codeword assignment problem as a Markov Decision Process (MDP). The state comprises the current codeword assignments and synthesis tool metrics, and the action space includes potential codeword selections. The reward function evaluates hardware complexity, aiming to minimize metrics such as gate count, area, and power, guided by feedback from logic synthesis tools.
|
ICLR.cc/2025/Conference
|
kBybSUskz7
| 27
|
3 METHODS & RL FRAMEWORK
|
We formally define our objective function in Eqn. 1, where A represents the bipartite graph assignments, X and Y are the nodes in the unrestricted and restricted domains respectively, and α, β, γ are all weighting terms based on common QoR metrics including area, delay, and power. The notation X × Y represents all the possible assignments in the Cartesian product.
|
ICLR.cc/2025/Conference
|
kBybSUskz7
| 28
|
3 METHODS & RL FRAMEWORK
|
max_{A⊆X×Y} QoR(A) = min_{A⊆X×Y} [α Area(A) + β Latency(A) + γ Power(A)]. (1)
|
ICLR.cc/2025/Conference
|
kBybSUskz7
| 29
|
3.1 REINFORCEMENT LEARNING ALGORITHMS
|
To solve the bipartite matching problem using RL, we considered the type of learning approach (online vs. offline), the environment details (model-based vs. model-free), and the choice between value-based and policy-based networks. Additionally, our environment is purely deterministic. Given the same inputs, constraints, and parameters, the synthesis tools will consistently produce the same output, meaning each state-action pair will always lead to the same next state. Below, we outline the rationale behind our choices:

1. Online vs. Offline Learning: We opted for online learning, where the agent interacts with the environment in real-time, dynamically adapting and refining its policy based on live feedback. This approach is particularly suitable for our problem given the fast solve times of the synthesis tools.

2. Model-Based vs. Model-Free: Despite the deterministic environment, we chose model-free methods, which learn directly from interactions rather than relying on a pre-built model. Logic optimization has a highly non-linear solution space, which makes creating accurate performance models difficult. Given our fast solve times, we prioritized model-free methods for their simplicity and direct application. Model-free methods like Double-DQN, PPO, and distributional RL variants such as C51 (Bellemare et al., 2017) allowed us to focus on optimizing the policy without the additional complexity of maintaining an accurate model.

3. Value-Based vs. Policy-Based Network: We explored both value-based methods, such as Double DQN and C51, as well as policy-based methods, including A2C and PPO. We found the value-based methods to perform quite well with hyperparameter tuning with regard to both attained solutions and stability. Though we expected PPO to perform quite well due to its balance between exploitation and exploration, its performance was always worse than Double DQN.
|
ICLR.cc/2025/Conference
|
kBybSUskz7
| 30
|
3.1 REINFORCEMENT LEARNING ALGORITHMS
|
As detailed in Section 4, most algorithms achieved similar final solutions in terms of value, but they differed in terms of convergence rates, stability, and other performance characteristics.
|
ICLR.cc/2025/Conference
|
kBybSUskz7
| 33
|
3.2 L1 SIMILARITY & HEURISTICS
|
Figure 5: L1 Similarity Intuition
|
ICLR.cc/2025/Conference
|
kBybSUskz7
| 35
|
3.3 NETWORK ARCHITECTURE
|
Algorithm 1 L1 Similarity Algorithm
Input: locs: location vectors; codes: code vectors; curr_loc: current location index; assigned_locs: assigned locations; assigned_codes: assigned codes
Output: final_sim: normalized scores
1: Step 1: L1 between current location and assigned locations:
2: sim_loc ← L1(locs[curr], locs[assgn])
3: norm_loc ← softmax(sim_loc)
4: Step 2: L1 between available and assigned codes:
5: sim_codes ← L1(codes, codes[assgn])
6: Step 3: Weight code similarity by location similarity:
7: weight_sim ← sim_codes × norm_loc
8: Step 4: Apply mask to penalize unavailable codes:
9: mask_sim ← apply_mask(weight_sim)
10: Step 5: Normalize similarity scores for final code assignment:
11: final_sim ← softmax(mask_sim)

The final layers are fully connected layers which integrate the embeddings with the L1 similarity calculation. The resulting Q-values guide the agent's assignment decisions, leveraging both learned attention from the GAT and historical embedding similarities. We demonstrate the benefit of each portion of the architecture through ablation studies in Section 4.
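Algorithm 1 can be sketched in a few lines of Python. This is our reading of the pseudocode, not the authors' implementation: in particular, the sign convention (negating L1 distances so that smaller distance means higher similarity) and the large-negative masking value are assumptions.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def l1(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def l1_similarity_scores(locs, codes, curr, assigned_locs, assigned_codes,
                         available):
    """Sketch of Algorithm 1: score candidate codes for the current
    location by similarity to previously assigned (location, code) pairs."""
    # Step 1: similarity of the current location to each assigned location
    norm_loc = softmax([-l1(locs[curr], locs[j]) for j in assigned_locs])
    # Steps 2-3: code-to-assigned-code similarity, weighted by norm_loc
    weighted = [
        sum(w * -l1(code, codes[c])
            for w, c in zip(norm_loc, assigned_codes))
        for code in codes
    ]
    # Step 4: mask out unavailable (already-used or invalid) codes
    masked = [w if i in available else -1e9 for i, w in enumerate(weighted)]
    # Step 5: normalize to a distribution over candidate codes
    return softmax(masked)
```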
|
ICLR.cc/2025/Conference
|
kBybSUskz7
| 36
|
3.4 EPISODE PROGRESSION
|
An episode involves sequentially assigning code nodes to each location node, from location 0 to 2^n − 1. In our experiments with various ordering schemes, order had little effect on the output, so we used the simplest scheme. At each step i, the external synthesis tool is called, incorporating the i assignments made so far along with any "don't care" conditions. The synthesis tool then provides feedback, which we integrate into the reward function and state information. Fig. 7 illustrates the graph state at various stages of the episode. Initially, the graph contains mostly virtual edges, indicating possible assignments. As the episode progresses, these virtual edges are replaced by actual assignments, ending in a state where all location nodes are mapped to code nodes.
|
ICLR.cc/2025/Conference
|
kBybSUskz7
| 37
|
3.5 REWARD DESIGN
|
We designed reward functions that encourage the agent to optimize assignments based on gate count reduction. For value-based methods like DQN, the reward at step i is R_i = α · (γ + GC_{i−1} − GC_i)^β, where GC_i is the gate count at step i, α adjusts the magnitude, β controls sensitivity, and γ sets a baseline for reduction.
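The per-step reward is a one-liner; the default constants below are illustrative placeholders, not the paper's tuned values.

```python
def dqn_step_reward(gc_prev, gc_curr, alpha=1.0, beta=1.0, gamma=0.0):
    """Per-step reward R_i = alpha * (gamma + GC_{i-1} - GC_i) ** beta:
    reductions in gate count between consecutive synthesis calls
    yield positive reward."""
    return alpha * (gamma + gc_prev - gc_curr) ** beta
```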
|
ICLR.cc/2025/Conference
|
kBybSUskz7
| 38
|
3.5 REWARD DESIGN
|
For policy-gradient methods like PPO, we compute post-episode rewards based on final synthesis results: R_i = 1 − Σ_m 1/2^{d_m}, where m ranges over the minterms including assignment i, and d_m is the degree of minterm m, rewarding more shared terms in the final synthesis. Both structures incentivize codeword assignments that minimize gate count through shared logic terms.
|
ICLR.cc/2025/Conference
|
kBybSUskz7
| 39
|
3.5 REWARD DESIGN
|
Figure 6 shows our network architecture, which leverages a 3-layer Multi-Layer Perceptron (MLP). The MLP transforms the binary location and code nodes into embeddings with 256 dimensions, where the final layer outputs a 256-dimensional vector. For more details on the MLP architecture, please see the Appendix. While the MLP alone achieves competitive results, we experimented with a Graph Attention Network (GAT). The GAT operates on location and code nodes, learning attention coefficients. We found that the GAT accelerates convergence, but remains optional, as L1 similarity combined with the MLP still performs well over longer training episodes, as we will show in Sec. 4.
Figure 6: Network architecture: MLP with fully connected layers; graph-attention layer shown as optional.
Figure 7: Graph progression during an episode for the 3T5b code (n = 5, k = 1, ν = 27). Black edges represent bipartite assignments; purple dotted lines are the virtual edges from the current location to all available code words. Left: start of episode; center: middle of episode; right: end of episode after all locations have been assigned.
4 RESULTS
The experiments were conducted on GPUs using an NVIDIA DGX-2 system with the PyTorch container image (nvcr.io/nvidia/pytorch:24.01-py3). We evaluate the RL algorithms discussed in Section 3 and provide ablation studies against baseline heuristics, including the L1 similarity feature. To demonstrate the generalizability of our framework, we show results for a number of different constrained codes: the MTA 7b8b code mentioned in Section 1, 5s8b, a larger trinary-to-binary code, and an 8b9b code which mitigates crosstalk in high-speed links (Sudhakaran & Newcomb, 2016). While most ablation studies focus on the MTA 7-8 bit code, we show comparative results for all of the codes.
4.1 HYPERPARAMETER AND ALGORITHMIC COMPARISONS
We start by evaluating the impact of different hyperparameters and learning approaches on the performance of our Double-DQN models on the MTA code. The results are presented in Fig. 8, which highlights performance across sweeps of the learning rate and the target network update rate.
The first row illustrates a sweep over the learning rate α, with values α = 1e−5, 7e−5, 1e−4, and 7e−4, while keeping the target network update rate constant (T = 50). Higher learning rates (green, purple) converge faster, but the 7e−4 case exhibits wild fluctuations. We also noticed that very low learning rates converged more slowly (blue). In the second row, we explore the effect of varying the target network update rate T = 1, 5, 20, 50 while keeping α constant (α = 7e−5). As shown, faster target rates (brown) yield higher fluctuations. For the remainder of the Double-DQN cases, we chose the control case (black) parameters (α = 7e−5, T = 50).
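For context on what the target network update rate T controls, the Double-DQN bootstrap target decouples action selection (online network) from action evaluation (target network, refreshed every T steps). This is a minimal, self-contained sketch of that target computation under assumed function signatures, not the paper's implementation.

```python
def double_dqn_target(reward, next_state, q_online, q_target,
                      discount=0.99, done=False):
    """Double-DQN bootstrap target.

    q_online / q_target: callables mapping a state to a list of Q-values
    per action. The argmax action is chosen with the online network but
    evaluated with the (slowly updated) target network, which reduces
    the overestimation bias of vanilla DQN.
    """
    if done:
        return reward
    qs = q_online(next_state)
    a_star = max(range(len(qs)), key=qs.__getitem__)   # argmax via online net
    return reward + discount * q_target(next_state)[a_star]
```

Smaller T keeps the target network closer to the online network, which explains the higher fluctuation observed at the fastest update rates above.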
Figure 8: Double-DQN results across selected hyperparameter sweeps. Top row: learning rate sweep (T = 50); bottom row: target network update rate sweep (α = 7e−5).

… fluctuation, albeit not visible in the rolling minimum plots. In subplot (b) we compare the PPO algorithm across various entropy and learning rate parameters. Over more episodes, the PPO runs eventually attained similar minimum gate counts, outside of the case with the 1e−5 learning rate. In subplot (c) we show the Double-DQN algorithm with different reward linear scaling factors (α). Double-DQN reached the lowest minimum gate count (57), the fastest time to attain it, and the lowest variation. Moving ahead, we focus on Double-DQN with α = 1.0.