venue (stringclasses, 5 values) | paper_openreview_id (stringclasses, 342 values) | paragraph_idx (int64, 1–314) | section (stringlengths, 2–2.38k) | content (stringlengths, 1–33.1k, may be null)
ICLR.cc/2025/Conference | kBybSUskz7 | ¶46 | 4.1 HYPERPARAMETER AND ALGORITHMIC COMPARISONS

Next, we compare the rolling minimum gate count across several RL algorithms. In Fig. 9(a), we present the distributional RL algorithm (C51) with a varying number of quantiles: c = 7, 9, 11, 25. While the final minimum gate counts converge similarly, lower quantile cases exhibited a lot more
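The rolling-minimum metric plotted in Fig. 9 is a running best-so-far over per-episode gate counts. A minimal sketch (the episode data below is illustrative, not taken from the paper's experiments):

```python
from itertools import accumulate

def rolling_minimum(gate_counts):
    """Best-so-far (rolling minimum) gate count after each episode."""
    return list(accumulate(gate_counts, min))

# Illustrative per-episode gate counts, not actual experimental data.
episode_counts = [120, 95, 110, 88, 91, 80, 85]
print(rolling_minimum(episode_counts))  # [120, 95, 95, 88, 88, 80, 80]
```

Because the curve is monotonically non-increasing, it isolates the quality of the best solution found so far from the episode-to-episode variance of the policy.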
ICLR.cc/2025/Conference | veiSkPqIXm | ¶22 | 0 Acc′(t)dt)2dt

$\mathrm{rank}_F = \frac{1}{m}\sum_{i=1}^{m} \mathrm{rank}_i$

We count the average ranks at the 6 t-value settings for each scenario, where m = 6, and the overall average ranks across n datasets, where m = 6 × n. $\mathrm{rank}_i$ is the rank of a prompt learning method in the i-th setting. Additionally, we will re-rank the methods to determine the final rank based on the results of the Friedman ranking.
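The averaging-then-re-ranking step described above can be sketched in a few lines; the method names and rank values here are hypothetical, only the m = 6 settings-per-scenario shape comes from the text:

```python
def average_rank(ranks):
    """Mean rank over m settings: rank_F = (1/m) * sum(rank_i)."""
    return sum(ranks) / len(ranks)

def final_ranking(method_ranks):
    """Re-rank methods by their average rank (Friedman-style); lower is better."""
    avg = {name: average_rank(r) for name, r in method_ranks.items()}
    ordered = sorted(avg, key=avg.get)
    return {name: i + 1 for i, name in enumerate(ordered)}

# Hypothetical ranks of three prompt-learning methods over m = 6 settings.
ranks = {
    "A": [1, 2, 1, 1, 2, 1],
    "B": [2, 1, 3, 2, 1, 2],
    "C": [3, 3, 2, 3, 3, 3],
}
print(final_ranking(ranks))  # {'A': 1, 'B': 2, 'C': 3}
```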
ICLR.cc/2025/Conference | kBybSUskz7 | ¶47 | 4.2 NETWORK ARCHITECTURE ABLATION STUDIES

We experimented with the GNN and L1 similarity components to evaluate their impact on both performance and resource usage, as GNNs typically add extra computational and memory demands (Hamilton et al., 2017). As shown in Fig. 10, the model without the GNN (green) still managed to learn and approach a comparable gate count, but it took far longer to converge (15K vs. 2K episodes with the GNN). Additionally, the memory footprint of the GNN model was much higher, requiring 1177 MB compared to 136 MB for the non-GNN version. The L1 similarity heuristic also proves critical: without it (red), the model struggles and plateaus at a much higher minimum gate count, showing that domain-specific knowledge dramatically improves performance. Thus, integrating both the GNN and L1 similarity achieves faster convergence and better solutions in this setting, but the ablation without the GNN shows promise and merits further study to keep the memory footprint down when scaling to larger codes.
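The paper does not spell out the L1 similarity heuristic's implementation here. For equal-length bit-vector codewords, the L1 distance reduces to the Hamming distance, so one plausible sketch of the underlying measure (an assumption, not the authors' code) is:

```python
def l1_distance(a, b):
    """L1 distance between two equal-length bit vectors.

    For 0/1 entries this coincides with the Hamming distance, so a
    lower value suggests more shareable terms between two codewords.
    """
    assert len(a) == len(b), "codewords must have equal length"
    return sum(abs(x - y) for x, y in zip(a, b))

print(l1_distance([0, 1, 1, 0], [0, 1, 0, 1]))  # 2
```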
ICLR.cc/2025/Conference | kBybSUskz7 | ¶48 | 4.3 RL PERFORMANCE AGAINST HEURISTICS ACROSS CODES

Now we compare the performance of our RL framework across the three codes against heuristics guided by the same L1 similarity algorithm. Figure 11 shows the rolling minimum gate count of our best-performing RL configuration against the simulated-annealing approach guided by the L1 similarity algorithm. Subplot (a) shows the MTA code, followed by the 8b9b code in (b) and the 5s8b code in (c). In all three cases, there is an appreciable improvement of RL over the guided heuristics.
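For context, a simulated-annealing baseline like the one referenced above follows a standard accept/reject loop; this is a generic sketch under the usual Boltzmann acceptance rule, not the authors' implementation, and the toy cost and neighbor function are purely illustrative:

```python
import math
import random

def simulated_annealing(state, cost, neighbor, t0=1.0, cooling=0.95,
                        steps=200, seed=0):
    """Generic simulated-annealing loop (a sketch, not the paper's baseline).

    `neighbor` proposes a candidate (in the paper's setting it would be
    guided by the L1 similarity heuristic); worse moves are accepted
    with probability exp(-delta / t).
    """
    rng = random.Random(seed)
    cur, cur_cost = state, cost(state)
    best, best_cost = cur, cur_cost
    t = t0
    for _ in range(steps):
        cand = neighbor(cur, rng)
        c = cost(cand)
        if c < cur_cost or rng.random() < math.exp((cur_cost - c) / t):
            cur, cur_cost = cand, c
            if c < best_cost:
                best, best_cost = cand, c
        t *= cooling
    return best, best_cost

# Toy problem: minimize |x - 7| over integers with +/-1 neighbor moves.
best, cost_val = simulated_annealing(0, lambda x: abs(x - 7),
                                     lambda x, r: x + r.choice([-1, 1]))
print(best, cost_val)
```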
ICLR.cc/2025/Conference | kBybSUskz7 | ¶49 | 4.4 LEARNING FROM THE RL AGENT'S BEST TRAJECTORIES

To gain insight into the RL agent's selection algorithms, we looked at the trajectories of a few cases in one of our double DQN training runs. Figure 12 illustrates the solution trajectories, highlighting the gate count increase per episode step for three solutions: a poor policy at the beginning of training, a mediocre solution in the middle of training, and the best solution achieved during training. For each point, the gate count for both encoder and decoder is plotted across assignment steps, allowing us to visualize the trajectory of complexity.
ICLR.cc/2025/Conference | kBybSUskz7 | ¶50 | 4.4 LEARNING FROM THE RL AGENT'S BEST TRAJECTORIES

The most notable observation is the nearly flat portions of the trajectories between steps 32 and 95 in the best solutions (blue, orange) across each code. This suggests that the learned policy effectively minimizes additional complexity during these steps, largely through extensive term sharing. Notably, the agent appears to replicate rules for location-to-codeword mappings during this segment, taking advantage of the changes in the most significant bits (MSBs) of the locations. Specifically, the agent efficiently assigns similar codes where the MSBs differ (32–63 = "01" and 64–95 = "10"), resulting in minimal complexity increase. However, when the MSBs change to "11" for locations 96–127, corresponding to a restriction in the MTA code (where the first symbol cannot be "11"), the agent is forced to introduce additional complexity to adhere to this rule. In contrast, the mid-level and initial solutions show steadily increasing gate counts, reflecting suboptimal policies with little to no term sharing.
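The MSB blocks described above can be read directly off the location index; a small sketch, assuming 7-bit locations as implied by the 0–127 range:

```python
def msb_pair(location):
    """Top two bits of a 7-bit location index (0-127) as a bit string."""
    assert 0 <= location <= 127
    return format((location >> 5) & 0b11, "02b")

# Locations 32-63 share MSBs "01", 64-95 share "10", 96-127 share "11".
print(msb_pair(40), msb_pair(70), msb_pair(100))  # 01 10 11
```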
ICLR.cc/2025/Conference | kBybSUskz7 | ¶51 | 4.4 LEARNING FROM THE RL AGENT'S BEST TRAJECTORIES

Figure 9: Rolling Minimum Gate Count Across RL Algorithms and Parameter Sweeps. (a) Distributional RL (b) PPO (c) Double-DQN
ICLR.cc/2025/Conference | kBybSUskz7 | ¶52 | 4.4 LEARNING FROM THE RL AGENT'S BEST TRAJECTORIES

Figure 10: Ablation Study: Removing GNN, L1 Similarity

We expanded this analysis to all three codes. Figure 13 demonstrates the cumulative gate count progression for these codes, paired with visualizations of the location and codeword assignments for the best solution found by the agent. To better illustrate the learned term-sharing patterns, we have highlighted key block structures in green within the rightmost black-and-white visualizations. These blocks reveal where the agent replicates specific assignment rules to optimize term sharing, particularly in sections where the most significant bits differ. This visualized replication indicates the agent's ability to generalize its approach across different constrained codeword mappings. Across all codes, the agent consistently discovers policies that minimize gate count through careful and structured assignments.