arxiv_id,title,authors,abstract,categories,published_date,updated_date,abs_url,arxiv_link,publication_date,raw_latex_related_works,clean_latex_related_works,pdf_related_works 2506.02838v1,TaxAgent: How Large Language Model Designs Fiscal Policy,"Jizhou Wang, Xiaodan Fang, Lei Huang, Yongfeng Huang","Economic inequality is a global challenge, intensifying disparities in education, healthcare, and social stability. Traditional systems like the U.S. federal income tax reduce inequality but lack adaptability. Although models like the Saez Optimal Taxation adjust dynamically, they fail to address taxpayer heterogeneity and irrational behavior. This study introduces TaxAgent, a novel integration of large language models (LLMs) with agent-based modeling (ABM) to design adaptive tax policies. In our macroeconomic simulation, heterogeneous H-Agents (households) simulate real-world taxpayer behaviors while the TaxAgent (government) utilizes LLMs to iteratively optimize tax rates, balancing equity and productivity. Benchmarked against Saez Optimal Taxation, U.S. federal income taxes, and free markets, TaxAgent achieves superior equity-efficiency trade-offs. This research offers a novel taxation solution and a scalable, data-driven framework for fiscal policy evaluation.","cs.AI, econ.GN, q-fin.EC, I.2.11, I.6.5, J.4",2025-06-03T13:06:19+00:00,2025-06-03T13:06:19+00:00,http://arxiv.org/abs/2506.02838v1,http://arxiv.org/abs/2506.02838v1,2025-06-03 13:06:19+00:00,"\subsection{Traditional Tax Systems} Progressive taxation is relatively simple, imposing higher tax rate on higher income. Empirical studies proved its effectiveness in ameliorating economic inequality~\cite{NBERw21340, NBERw21211} but researchers also pointed out that it lacks adaptability to dynamic economic conditions~\cite{Foo2019ProcessAC, Patjoshi2015DesignAD}. Optimal taxation explores tax systems that maximize social welfare while accounting for economic constraints and behavioral responses\cite{10.1257/jep.25.4.165}. It can be adjusted according to dynamic economic conditions. Modern frameworks were pioneered by Mirrlees\cite{10.2307/2296779} and Diamond and Mirrlees\cite{RePEc:aea:aecrev:v:61:y:1971:i:1:p:8-27}, aiming to maximize aggregate utility. Saez is one of the largest contributors to this basis. Saez\cite{10.1111/1467-937X.00166} derived optimal nonlinear tax rates by modeling earnings elasticity and income distribution. Diamond and Saez\cite{10.1257/jep.25.4.165} extended this framework, focusing on maximizing social welfare while mitigating income inequality, constructing a closed-loop optimal taxation system. Economists emphasized the importance of the behavioral responses of taxpayers. Piketty, Saez, and Stantcheva \cite{10.1257/pol.6.1.230} explored the elasticity of the top tax rates and their influence on labor supply and tax avoidance, highlighting the importance of behavioral responses in tax policy design. Kroft, Kucko, Lehmann, and Schmieder \cite{10.1257/pol.20180033} examined how unemployment and wage responses impact tax structures, advocating for the Earned Income Tax Credit (EITC) as a tool to support low-income households while maintaining working incentives. \subsection{Artificial Intelligence(AI) in Economic Policy Research} AI offers innovative tools for analyzing and optimizing macroeconomic policies, addressing limitations of traditional models, which rely on equilibrium assumptions. Reinforcement learning and Bayesian Neural Networks enable adaptive simulations and uncertainty quantification. For example, “The AI Economist” framework uses RL to co-adapt agents and social planners\cite{zheng2020aieconomistimprovingequality}. Integration with causal inference techniques further improves policy-impact assessment\cite{NBERc14009}. ABM captures decentralized decision-making and complex phenomena like systemic risk\cite{AxtellFarmer2022}. ABMs are used to study business cycles, policy interventions, and inflation\cite{DelliGatti2018}. Enhanced computational techniques and high-quality data have improved their empirical validity, enabling applications such as tax policy optimization\cite{zheng2020aieconomistimprovingequality}. Large Language Models (LLMs) introduce advanced reasoning capabilities to various subjects, including economic research, enabling market behavior simulation and policy evaluation~\cite{shen2025phyxdoesmodelwits,zhao2024competeaiunderstandingcompetitiondynamics, nie2024surveylargelanguagemodels}. Existing work forms a rule-based framework for optimal taxation and recognizes the impact of taxpayer heterogeneity on optimal taxation design. Nevertheless, current optimal taxation makes rational-man supposition and oversimplified social welfare calculations. In this work, we integrated the advancement in ABMs and LLMs, replaced predetermined rules with LLM-based agents, simulated human-like policy responses, and dynamically adjusted tax rates to generate the optimal social outcome.","\subsection{Traditional Tax Systems} Progressive taxation is relatively simple, imposing higher tax rate on higher income. Empirical studies proved its effectiveness in ameliorating economic inequality~\cite{NBERw21340, NBERw21211} but researchers also pointed out that it lacks adaptability to dynamic economic conditions~\cite{Foo2019ProcessAC, Patjoshi2015DesignAD}. Optimal taxation explores tax systems that maximize social welfare while accounting for economic constraints and behavioral responses\cite{10.1257/jep.25.4.165}. It can be adjusted according to dynamic economic conditions. Modern frameworks were pioneered by Mirrlees\cite{10.2307/2296779} and Diamond and Mirrlees\cite{RePEc:aea:aecrev:v:61:y:1971:i:1:p:8-27}, aiming to maximize aggregate utility. Saez is one of the largest contributors to this basis. Saez\cite{10.1111/1467-937X.00166} derived optimal nonlinear tax rates by modeling earnings elasticity and income distribution. Diamond and Saez\cite{10.1257/jep.25.4.165} extended this framework, focusing on maximizing social welfare while mitigating income inequality, constructing a closed-loop optimal taxation system. Economists emphasized the importance of the behavioral responses of taxpayers. Piketty, Saez, and Stantcheva \cite{10.1257/pol.6.1.230} explored the elasticity of the top tax rates and their influence on labor supply and tax avoidance, highlighting the importance of behavioral responses in tax policy design. Kroft, Kucko, Lehmann, and Schmieder \cite{10.1257/pol.20180033} examined how unemployment and wage responses impact tax structures, advocating for the Earned Income Tax Credit (EITC) as a tool to support low-income households while maintaining working incentives. \subsection{Artificial Intelligence(AI) in Economic Policy Research} AI offers innovative tools for analyzing and optimizing macroeconomic policies, addressing limitations of traditional models, which rely on equilibrium assumptions. Reinforcement learning and Bayesian Neural Networks enable adaptive simulations and uncertainty quantification. For example, “The AI Economist” framework uses RL to co-adapt agents and social planners\cite{zheng2020aieconomistimprovingequality}. Integration with causal inference techniques further improves policy-impact assessment\cite{NBERc14009}. ABM captures decentralized decision-making and complex phenomena like systemic risk\cite{AxtellFarmer2022}. ABMs are used to study business cycles, policy interventions, and inflation\cite{DelliGatti2018}. Enhanced computational techniques and high-quality data have improved their empirical validity, enabling applications such as tax policy optimization\cite{zheng2020aieconomistimprovingequality}. Large Language Models (LLMs) introduce advanced reasoning capabilities to various subjects, including economic research, enabling market behavior simulation and policy evaluation~\cite{shen2025phyxdoesmodelwits,zhao2024competeaiunderstandingcompetitiondynamics, nie2024surveylargelanguagemodels}. Existing work forms a rule-based framework for optimal taxation and recognizes the impact of taxpayer heterogeneity on optimal taxation design. Nevertheless, current optimal taxation makes rational-man supposition and oversimplified social welfare calculations. In this work, we integrated the advancement in ABMs and LLMs, replaced predetermined rules with LLM-based agents, simulated human-like policy responses, and dynamically adjusted tax rates to generate the optimal social outcome.", 2506.02634v1,"KVCache Cache in the Wild: Characterizing and Optimizing KVCache Cache at a Large Cloud Provider","Jiahao Wang, Jinbo Han, Xingda Wei, Sijie Shen, Dingyan Zhang, Chenguang Fang, Rong Chen, Wenyuan Yu, Haibo Chen","Serving large language models (LLMs) is important for cloud providers, and caching intermediate results (KV\$) after processing each request substantially improves serving throughput and latency. However, there is limited understanding of how LLM serving benefits from KV\$ caching, where system design decisions like cache eviction policies are highly workload-dependent. In this paper, we present the first systematic characterization of the KV\$ workload patterns from one of the leading LLM service providers. We draw observations that were not covered by previous studies focusing on synthetic workloads, including: KV\$ reuses are skewed across requests, where reuses between single-turn requests are equally important as multi-turn requests; the reuse time and probability are diverse considering all requests, but for a specific request category, the pattern tends to be predictable; and the overall cache size required for an ideal cache hit ratio is moderate. Based on the characterization, we further propose a workload-aware cache eviction policy that improves the serving performance under real-world traces, especially with limited cache capacity.","cs.DC, cs.AI",2025-06-03T08:51:38+00:00,2025-06-03T08:51:38+00:00,http://arxiv.org/abs/2506.02634v1,http://arxiv.org/abs/2506.02634v1,2025-06-03 08:51:38+00:00,"\label{sec:related} \nospacestitle{Reuse {\kvcache} across requests via {\kvcache} cache. \,} Reusing {\kvcache} for accelerating LLM serving has been widely studied~\cite{vllm,chunkattention,cachedattention,promptcache,sglang,cacheblend} and used in commercial LLM serving~\cite{openaiapi,genimiapi,claudeapi,mooncake}. Currently, production systems only treat cache hits by prefix matches~\cite{chunkattention,sglang,cachedattention,openaiapi,genimiapi,claudeapi,mooncake}, because it preserves the original algorithm of model inference with no accuracy loss. A number of studies~\cite{promptcache,cacheblend,hu2024epic} have studied methods for non-prefix {\kvcache} cache. We can revisit our characterization once they have matured and been deployed in production. \stitle{Other {\kvcache}-related optimizations. \,} Emerging works\cite{streamingllm,h2o,infinigen,pyramidkv} propose runtime {\kvcache} compression and deletion methods to reduce {\kvcache} size. StreamingLLM~\cite{streamingllm} keeps only a finite attention window for recent tokens combined with a few initial tokens. H2O~\cite{h2o} only involves tokens that contribute most to the attention score for inference. InfiniGen~\cite{infinigen} speculatively selects tokens critical for attention scores and drops others. KVQuant~\cite{KVQuant} enables 3-bit KV cache quantization with minimal perplexity degradation for large language models, achieving fast million-length context inference. These methods require fewer {\kvcache} per-request, albeit with accuracy degradation. Our study is compatible with these works: if an uncompressed/full {\kvcache} is reusable, a compressed or deleted version would also be reusable. %\TODO{add more about the recent sparse attention works.} %\stitle{Cache in web-service and serverless platform. } \stitle{Optimizing caching policies. \,} Optimizing caching has long been studied in the literature, in general-purpose caching policies~\cite{lruk,slru,twoq,eelru,lrfu,lirs,arc,mq,car,clockpro,DBLP:journals/tos/EinzigerEFM22,lhd,cacheus,sieve,cherkasova1998improving} or specific domains~\cite{yang2020twemcache,berg2020cachelib,icebreaker,fasscache}. We continue this line of research, and leverage our characterized {\kvcache} reuse properties for optimizing {\kvcache} cache policies. \stitle{Optimizing LLM serving. \,} We continue the line of research in optimizing the performance of LLM serving systems~\cite{DBLP:conf/osdi/ZhongLCHZL0024,DBLP:journals/corr/abs-2404-09526,DBLP:conf/sosp/KwonLZ0ZY0ZS23,alpaserve,DBLP:conf/osdi/YuJKKC22,DBLP:conf/isca/PatelCZSGMB24,298501,DBLP:journals/corr/abs-2412-17246}, with a particular focus on characterizing serving workloads for {\kvcache} cache. Our work is orthogonal to optimizations other than {\kvcache} cache. \iffalse % Cache replacement algorithm is essential to computer systems, and has been researched % for decades\cite{lruk,slru,lirs,arc,lhd,cacheus,sieve}. The design of a cache replacement policy needs to take into account the characteristics of the workload~\cite{lruk,slru,lirs,arc,lhd,cacheus,sieve}. Therefore, access traces analysis for real-world production systems can help to understand the workload characteristics~\cite{yang2020twemcache,berg2020cachelib,shahrad2020serverless,fasscache,icebreaker}. % \cite{DBLP:conf/osdi/YangYR20,DBLP:conf/osdi/BergBMGGLUCBHG20} characterize cache workloads in production web-service systems. % They show that skewed Zipf distribution is common for popularity distribution, and requests for some entry are very bursty. Studies of web services~\cite{yang2020twemcache,berg2020cachelib} have shown that web requests have a Zipf distribution, with bursts of hotspots. Studies of serverless platforms~\cite{shahrad2020serverless,fasscache,icebreaker} design keep-alive policies for function caching to accelerate function invocations based on the observation that function calling frequencies vary. % \cite{shahrad2020serverless} provides a thorough characterization of Function as a Service (FaaS) workloads % in production system, and reveals varieties in function calling frequencies. % Based on this observation, multiple works\cite{shahrad2020serverless,fasscache,icebreaker} design keep-alive % policies for function caching to accelerate function invocations. Our work performs trace analysis for different work scenarios (toB and toC) for LLM inference and optimizes {\kvcache} cache based on workload characteristics. We find similarities and differences between the workloads in the LLM inference scenario and the previous cache systems. \SSJ{Some of the similarities and differences can be specifically summarised to highlight our contribution.} \fi","\nospacestitle{Reuse {\kvcache} across requests via {\kvcache} cache. \,} Reusing {\kvcache} for accelerating LLM serving has been widely studied~\cite{vllm,chunkattention,cachedattention,promptcache,sglang,cacheblend} and used in commercial LLM serving~\cite{openaiapi,genimiapi,claudeapi,mooncake}. Currently, production systems only treat cache hits by prefix matches~\cite{chunkattention,sglang,cachedattention,openaiapi,genimiapi,claudeapi,mooncake}, because it preserves the original algorithm of model inference with no accuracy loss. A number of studies~\cite{promptcache,cacheblend,hu2024epic} have studied methods for non-prefix {\kvcache} cache. We can revisit our characterization once they have matured and been deployed in production. \stitle{Other {\kvcache}-related optimizations. \,} Emerging works\cite{streamingllm,h2o,infinigen,pyramidkv} propose runtime {\kvcache} compression and deletion methods to reduce {\kvcache} size. StreamingLLM~\cite{streamingllm} keeps only a finite attention window for recent tokens combined with a few initial tokens. H2O~\cite{h2o} only involves tokens that contribute most to the attention score for inference. InfiniGen~\cite{infinigen} speculatively selects tokens critical for attention scores and drops others. KVQuant~\cite{KVQuant} enables 3-bit KV cache quantization with minimal perplexity degradation for large language models, achieving fast million-length context inference. These methods require fewer {\kvcache} per-request, albeit with accuracy degradation. Our study is compatible with these works: if an uncompressed/full {\kvcache} is reusable, a compressed or deleted version would also be reusable. %\TODO{add more about the recent sparse attention works.} %\stitle{Cache in web-service and serverless platform. } \stitle{Optimizing caching policies. \,} Optimizing caching has long been studied in the literature, in general-purpose caching policies~\cite{lruk,slru,twoq,eelru,lrfu,lirs,arc,mq,car,clockpro,DBLP:journals/tos/EinzigerEFM22,lhd,cacheus,sieve,cherkasova1998improving} or specific domains~\cite{yang2020twemcache,berg2020cachelib,icebreaker,fasscache}. We continue this line of research, and leverage our characterized {\kvcache} reuse properties for optimizing {\kvcache} cache policies. \stitle{Optimizing LLM serving. \,} We continue the line of research in optimizing the performance of LLM serving systems~\cite{DBLP:conf/osdi/ZhongLCHZL0024,DBLP:journals/corr/abs-2404-09526,DBLP:conf/sosp/KwonLZ0ZY0ZS23,alpaserve,DBLP:conf/osdi/YuJKKC22,DBLP:conf/isca/PatelCZSGMB24,298501,DBLP:journals/corr/abs-2412-17246}, with a particular focus on characterizing serving workloads for {\kvcache} cache. Our work is orthogonal to optimizations other than {\kvcache} cache. \iffalse % Cache replacement algorithm is essential to computer systems, and has been researched % for decades\cite{lruk,slru,lirs,arc,lhd,cacheus,sieve}. The design of a cache replacement policy needs to take into account the characteristics of the workload~\cite{lruk,slru,lirs,arc,lhd,cacheus,sieve}. Therefore, access traces analysis for real-world production systems can help to understand the workload characteristics~\cite{yang2020twemcache,berg2020cachelib,shahrad2020serverless,fasscache,icebreaker}. % \cite{DBLP:conf/osdi/YangYR20,DBLP:conf/osdi/BergBMGGLUCBHG20} characterize cache workloads in production web-service systems. % They show that skewed Zipf distribution is common for popularity distribution, and requests for some entry are very bursty. Studies of web services~\cite{yang2020twemcache,berg2020cachelib} have shown that web requests have a Zipf distribution, with bursts of hotspots. Studies of serverless platforms~\cite{shahrad2020serverless,fasscache,icebreaker} design keep-alive policies for function caching to accelerate function invocations based on the observation that function calling frequencies vary. % \cite{shahrad2020serverless} provides a thorough characterization of Function as a Service (FaaS) workloads % in production system, and reveals varieties in function calling frequencies. % Based on this observation, multiple works\cite{shahrad2020serverless,fasscache,icebreaker} design keep-alive % policies for function caching to accelerate function invocations. Our work performs trace analysis for different work scenarios (toB and toC) for LLM inference and optimizes {\kvcache} cache based on workload characteristics. We find similarities and differences between the workloads in the LLM inference scenario and the previous cache systems. \SSJ{Some of the similarities and differences can be specifically summarised to highlight our contribution.} \fi","Reuse KV$ across requests via KV$ cache. Reusing KV$ for accelerating LLM serving has been widely studied [ 30,63, 19,20,69,62] and used in commercial LLM serving [ 42,21, 5,2]. Currently, production systems only treat cache hits by prefix matches [ 63,69,19,42,21,5,2], because it preserves the original algorithm of model inference with no accuracy loss. A number of studies [ 20,62,23] have studied methods for non-prefix KV$ cache. We can revisit our characterization once they have matured and been deployed in production. Other KV$ -related optimizations. Emerging works[ 59,68, 33,13] propose runtime KV$ compression and deletion meth- ods to reduce KV$ size. StreamingLLM [ 59] keeps only a finite attention window for recent tokens combined with a few initial tokens. H2O [ 68] only involves tokens that contribute most to the attention score for inference. InfiniGen [ 33] spec- ulatively selects tokens critical for attention scores and drops others. KVQuant [ 22] enables 3-bit KV cache quantizationwith minimal perplexity degradation for large language mod- els, achieving fast million-length context inference. These methods require fewer KV$ per-request, albeit with accuracy degradation. Our study is compatible with these works: if an uncompressed/full KV$ is reusable, a compressed or deleted version would also be reusable. Optimizing caching policies. Optimizing caching has long been studied in the literature, in general-purpose caching policies [ 41,29,28,53,32,25,38,71,10,24,16,11,49, 67,14] or specific domains [ 60,12,50,17]. We continue this line of research, and leverage our characterized KV$ reuse properties for optimizing KV$ cache policies. Optimizing LLM serving. We continue the line of re- search in optimizing the performance of LLM serving sys- tems [ 70,57,31,34,64,46,18,66], with a particular focus on characterizing serving workloads for KV$ cache. Our work is orthogonal to optimizations other than KV$ cache." 2506.00958v1,"Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning Nonverbal Cues from Video-Grounded Dialogues","Youngmin Kim, Jiwan Chung, Jisoo Kim, Sunghyun Lee, Sangkyu Lee, Junhyeok Kim, Cheoljong Yang, Youngjae Yu","Nonverbal communication is integral to human interaction, with gestures, facial expressions, and body language conveying critical aspects of intent and emotion. However, existing large language models (LLMs) fail to effectively incorporate these nonverbal elements, limiting their capacity to create fully immersive conversational experiences. We introduce MARS, a multimodal language model designed to understand and generate nonverbal cues alongside text, bridging this gap in conversational AI. Our key innovation is VENUS, a large-scale dataset comprising annotated videos with time-aligned text, facial expressions, and body language. Leveraging VENUS, we train MARS with a next-token prediction objective, combining text with vector-quantized nonverbal representations to achieve multimodal understanding and generation within a unified framework. Based on various analyses of the VENUS datasets, we validate its substantial scale and high effectiveness. Our quantitative and qualitative results demonstrate that MARS successfully generates text and nonverbal languages, corresponding to conversational input.","cs.AI, cs.CL, cs.CV",2025-06-01T11:07:25+00:00,2025-06-01T11:07:25+00:00,http://arxiv.org/abs/2506.00958v1,http://arxiv.org/abs/2506.00958v1,2025-06-01 11:07:25+00:00,"\label{sec:related_works} \noindent \textbf{Multimodal Large Language Models.} Recent studies have introduced models that combine various modalities with large language models (LLMs), extending their capabilities beyond text to include visual, auditory, and multimodal reasoning. Specifically, to enhance visual comprehension capabilities of LLMs, LLaVA~\cite{liu2024:visual}, Qwen-VL~\cite{bai2023:qwen} and MiniGPT-4~\cite{chen2023:sharegpt4v} have successfully integrated vision encoders into pre-trained LLMs. Furthermore, VideoChat~\cite{li2023:videochat} and Video-LLaMA~\cite{zhang2023:video} extend these capabilities to video understanding, while models such as Unified-IO-2~\cite{lu2024:unified} and GPT-4-O~\cite{achiam2023:gpt} expand the scope to include auditory modalities, showing robust multimodal reasoning across various inputs. \noindent \textbf{Learning Dialogue in Video.} The importance of analyzing conversational sentiment using multimodal data (\textit{e.g.}, text, audio, and visual) from videos has driven the development of numerous datasets~\cite{busso2008:iemocap, zadeh2018:multimodal, poria2019:meld}. This has further spurred research into generating and understanding dialogues from videos, leveraging multimodal cues. For instance, Champagne~\cite{han2023:champagne} introduced the YTD-18M dataset for dialogue generation using visual signals and LLMs, while MultiDialog~\cite{park2024:let} combined audio and visual data for generating conversations. Beyond text, efforts like~\cite{shafique2023:nonverbal} and EmotionCLIP~\cite{zhang2023:learning} focus on recognizing nonverbal cues, such as gestures and emotions. Additionally, works like FurChat~\cite{cherakara2023:furchat} and~\cite{lee2023:developing} explore applying nonverbal signals to enhance robotic facial expressions and actions. However, existing conversational datasets are often limited in scale or fail to include detailed 3D facial and body language information necessary for modeling nonverbal cues effectively. Our VENUS dataset addresses these gaps by being both large-scale and scalable, offering comprehensive conversational data that integrates not only text but also 3D facial expressions and body languages. This enables a more nuanced understanding of nonverbal cues and supports the generation of richer, context-aware conversations. \noindent \textbf{Human Motion Synthesis in Conversation.} Recent advancements in 3D human reconstruction~\cite{lin2023:one, dwivedi2024:tokenhmr, danvevcek2022emoca} have significantly improved the quality of pseudo-ground truth data, providing a scalable and accessible alternative to traditional sensor-based methods~\cite{yi2023:generating}. Leveraging these datasets, recent works~\cite{wu2024:motionllm, lu2023:humantomato} have focused on generating human motions from text. Building on this progress, our work utilizes pseudo labels derived from our VENUS, which addresses the lack of large-scale dataset for conversational settings. Unlike previous works like~\cite{ng2023:can, ng2022:learning}, which primarily generate listener facial motions from text, our approach extends to produce text, facial expressions, and body language, aligned with conversational context.","\noindent \textbf{Multimodal Large Language Models.} Recent studies have introduced models that combine various modalities with large language models (LLMs), extending their capabilities beyond text to include visual, auditory, and multimodal reasoning. Specifically, to enhance visual comprehension capabilities of LLMs, LLaVA~\cite{liu2024:visual}, Qwen-VL~\cite{bai2023:qwen} and MiniGPT-4~\cite{chen2023:sharegpt4v} have successfully integrated vision encoders into pre-trained LLMs. Furthermore, VideoChat~\cite{li2023:videochat} and Video-LLaMA~\cite{zhang2023:video} extend these capabilities to video understanding, while models such as Unified-IO-2~\cite{lu2024:unified} and GPT-4-O~\cite{achiam2023:gpt} expand the scope to include auditory modalities, showing robust multimodal reasoning across various inputs. \noindent \textbf{Learning Dialogue in Video.} The importance of analyzing conversational sentiment using multimodal data (\textit{e.g.}, text, audio, and visual) from videos has driven the development of numerous datasets~\cite{busso2008:iemocap, zadeh2018:multimodal, poria2019:meld}. This has further spurred research into generating and understanding dialogues from videos, leveraging multimodal cues. For instance, Champagne~\cite{han2023:champagne} introduced the YTD-18M dataset for dialogue generation using visual signals and LLMs, while MultiDialog~\cite{park2024:let} combined audio and visual data for generating conversations. Beyond text, efforts like~\cite{shafique2023:nonverbal} and EmotionCLIP~\cite{zhang2023:learning} focus on recognizing nonverbal cues, such as gestures and emotions. Additionally, works like FurChat~\cite{cherakara2023:furchat} and~\cite{lee2023:developing} explore applying nonverbal signals to enhance robotic facial expressions and actions. However, existing conversational datasets are often limited in scale or fail to include detailed 3D facial and body language information necessary for modeling nonverbal cues effectively. Our VENUS dataset addresses these gaps by being both large-scale and scalable, offering comprehensive conversational data that integrates not only text but also 3D facial expressions and body languages. This enables a more nuanced understanding of nonverbal cues and supports the generation of richer, context-aware conversations. \noindent \textbf{Human Motion Synthesis in Conversation.} Recent advancements in 3D human reconstruction~\cite{lin2023:one, dwivedi2024:tokenhmr, danvevcek2022emoca} have significantly improved the quality of pseudo-ground truth data, providing a scalable and accessible alternative to traditional sensor-based methods~\cite{yi2023:generating}. Leveraging these datasets, recent works~\cite{wu2024:motionllm, lu2023:humantomato} have focused on generating human motions from text. Building on this progress, our work utilizes pseudo labels derived from our VENUS, which addresses the lack of large-scale dataset for conversational settings. Unlike previous works like~\cite{ng2023:can, ng2022:learning}, which primarily generate listener facial motions from text, our approach extends to produce text, facial expressions, and body language, aligned with conversational context.","Multimodal Large Language Models. Recent studies have introduced models that combine vari- ous modalities with large language models (LLMs), extending their capabilities beyond text to in- clude visual, auditory, and multimodal reason- ing. Specifically, to enhance visual comprehen- sion capabilities of LLMs, LLaV A (Liu et al., 2024b), Qwen-VL (Bai et al., 2023) and MiniGPT- 4 (Chen et al., 2023) have successfully integrated vision encoders into pre-trained LLMs. Further- more, VideoChat (Li et al., 2023) and Video- LLaMA (Zhang et al., 2023a) extend these ca- pabilities to video understanding, while models such as Unified-IO-2 (Lu et al., 2024) and GPT-4- O (Achiam et al., 2023) expand the scope to include auditory modalities, showing robust multimodal reasoning across various inputs. Learning Dialogue in Video. The importance of analyzing conversational sentiment using mul- timodal data ( e.g., text, audio, and visual) from videos has driven the development of numerous datasets (Busso et al., 2008; Zadeh et al., 2018; Poria et al., 2019). This has further spurred re- search into generating and understanding dialoguesfrom videos, leveraging multimodal cues. For in- stance, Champagne (Han et al., 2023) introduced the YTD-18M dataset for dialogue generation us- ing visual signals and LLMs, while MultiDia- log (Park et al., 2024) combined audio and visual data for generating conversations. Beyond text, efforts like (Shafique et al., 2023) and Emotion- CLIP (Zhang et al., 2023c) focus on recognizing nonverbal cues, such as gestures and emotions. Ad- ditionally, works like FurChat (Cherakara et al., 2023) and (Lee et al., 2023) explore applying non- verbal signals to enhance robotic facial expres- sions and actions. However, existing conversational datasets are often limited in scale or fail to include detailed 3D facial and body language information necessary for modeling nonverbal cues effectively. Our VENUS dataset addresses these gaps by being both large-scale and scalable, offering comprehen- sive conversational data that integrates not only text but also 3D facial expressions and body languages. This enables a more nuanced understanding of non- verbal cues and supports the generation of richer, context-aware conversations. Human Motion Synthesis in Conversation. Re- cent advancements in 3D human reconstruc- tion (Lin et al., 2023; Dwivedi et al., 2024; Dan ˇeˇcek et al., 2022) have significantly improved the qual- ity of pseudo-ground truth data, providing a scal- able and accessible alternative to traditional sensor- based methods (Yi et al., 2023). Leveraging these datasets, recent works (Wu et al., 2024; Lu et al., 2023b) have focused on generating human motions from text. Building on this progress, our work utilizes pseudo labels derived from our VENUS, which addresses the lack of large-scale dataset for conversational settings. Unlike previous works like (Ng et al., 2023, 2022), which primarily gener- ate listener facial motions from text, our approach extends to produce text, facial expressions, and body language, aligned with conversational con- text." 2506.00832v1,"Counterfactual Activation Editing for Post-hoc Prosody and Mispronunciation Correction in TTS Models","Kyowoon Lee, Artyom Stitsyuk, Gunu Jho, Inchul Hwang, Jaesik Choi","Recent advances in Text-to-Speech (TTS) have significantly improved speech naturalness, increasing the demand for precise prosody control and mispronunciation correction. Existing approaches for prosody manipulation often depend on specialized modules or additional training, limiting their capacity for post-hoc adjustments. Similarly, traditional mispronunciation correction relies on grapheme-to-phoneme dictionaries, making it less practical in low-resource settings. We introduce Counterfactual Activation Editing, a model-agnostic method that manipulates internal representations in a pre-trained TTS model to achieve post-hoc control of prosody and pronunciation. Experimental results show that our method effectively adjusts prosodic features and corrects mispronunciations while preserving synthesis quality. This opens the door to inference-time refinement of TTS outputs without retraining, bridging the gap between pre-trained TTS models and editable speech synthesis.","cs.SD, cs.AI, eess.AS",2025-06-01T04:33:37+00:00,2025-06-01T04:33:37+00:00,http://arxiv.org/abs/2506.00832v1,http://arxiv.org/abs/2506.00832v1,2025-06-01 04:33:37+00:00,"\subsection{TTS Prosody Control} Text can be spoken in various ways due to semantic nuances, speaking styles, or inherent variability. Traditional approaches, such as unit-selection, capture this variability through speech databases \cite{strom2006expressive}. In contrast, recent studies model prosodic variation by predicting key prosodic features such as pitch, duration, and energy from their embedding spaces \cite{ren2019fastspeech, ren2020fastspeech, mohan2021ctrl, bandekar2023speaking}, or by using implicit representations learned from a reference encoder to capture the nuanced variations not specified by text alone \cite{wang2018style, skerry2018towards, hsu2018hierarchical}. Controlling intermediate representations has also been considered in prior work through the use of embedding bias, calculated by assessing the extent of translation required to achieve specific modifications in acoustic features within multidimensional scaling coordinates \cite{lenglet2022speaking}. However, such modified representations risk deviating from the data manifold, and their application has been primarily confined to controlling duration. \subsection{TTS Pronunciation Control} A critical challenge for end-to-end TTS models, which operate without the aid of grapheme-to-phoneme dictionaries or predictive model, is polyphone disambiguation \cite{zhang2020unified}. For TTS models to accurately convert graphemes into phonemes, their linguistic encoder must internalize the varied pronunciation rules, but fully internalizing them is difficult, leading to inevitable pronunciation errors in synthesized speech. To mitigate mispronunciations, unit selection concatenates recorded speech fragments from a database. However, this often leads to noticeable join artifacts and requires a single-speaker database, limiting the use of non-target speaker data. Recently, a model-centric approach called as the Speech Audio Corrector (SAC) \cite{fong2022speech} has been introduced. It leverages speech codes aligned with words, derived from self-supervised learning models to correct mispronunciations at the word level. In contrast, we introduce a model-agnostic approach that manipulates intermediate representations in TTS models.","\subsection{TTS Prosody Control} Text can be spoken in various ways due to semantic nuances, speaking styles, or inherent variability. Traditional approaches, such as unit-selection, capture this variability through speech databases \cite{strom2006expressive}. In contrast, recent studies model prosodic variation by predicting key prosodic features such as pitch, duration, and energy from their embedding spaces \cite{ren2019fastspeech, ren2020fastspeech, mohan2021ctrl, bandekar2023speaking}, or by using implicit representations learned from a reference encoder to capture the nuanced variations not specified by text alone \cite{wang2018style, skerry2018towards, hsu2018hierarchical}. Controlling intermediate representations has also been considered in prior work through the use of embedding bias, calculated by assessing the extent of translation required to achieve specific modifications in acoustic features within multidimensional scaling coordinates \cite{lenglet2022speaking}. However, such modified representations risk deviating from the data manifold, and their application has been primarily confined to controlling duration. \subsection{TTS Pronunciation Control} A critical challenge for end-to-end TTS models, which operate without the aid of grapheme-to-phoneme dictionaries or predictive model, is polyphone disambiguation \cite{zhang2020unified}. For TTS models to accurately convert graphemes into phonemes, their linguistic encoder must internalize the varied pronunciation rules, but fully internalizing them is difficult, leading to inevitable pronunciation errors in synthesized speech. To mitigate mispronunciations, unit selection concatenates recorded speech fragments from a database. However, this often leads to noticeable join artifacts and requires a single-speaker database, limiting the use of non-target speaker data. Recently, a model-centric approach called as the Speech Audio Corrector (SAC) \cite{fong2022speech} has been introduced. It leverages speech codes aligned with words, derived from self-supervised learning models to correct mispronunciations at the word level. In contrast, we introduce a model-agnostic approach that manipulates intermediate representations in TTS models.", 2506.00418v1,Dual Debiasing for Noisy In-Context Learning for Text Generation,"Siqi Liang, Sumyeong Ahn, Paramveer S. Dhillon, Jiayu Zhou","In context learning (ICL) relies heavily on high quality demonstrations drawn from large annotated corpora. Existing approaches detect noisy annotations by ranking local perplexities, presuming that noisy samples yield higher perplexities than their clean counterparts. However, this assumption breaks down when the noise ratio is high and many demonstrations are flawed. We reexamine the perplexity based paradigm for text generation under noisy annotations, highlighting two sources of bias in perplexity: the annotation itself and the domain specific knowledge inherent in large language models (LLMs). To overcome these biases, we introduce a dual debiasing framework that uses synthesized neighbors to explicitly correct perplexity estimates, yielding a robust Sample Cleanliness Score. This metric uncovers absolute sample cleanliness regardless of the overall corpus noise level. Extensive experiments demonstrate our method's superior noise detection capabilities and show that its final ICL performance is comparable to that of a fully clean demonstration corpus. Moreover, our approach remains robust even when noise ratios are extremely high.","cs.CL, cs.AI, I.2.7",2025-05-31T06:44:48+00:00,2025-05-31T06:44:48+00:00,http://arxiv.org/abs/2506.00418v1,http://arxiv.org/abs/2506.00418v1,2025-05-31 06:44:48+00:00,"\label{sec:related} \myparagraph{In-context learning (ICL):} Recent research has leveraged pre-trained LLMs for downstream NLP tasks through in-context learning, particularly in text classification~\citep{yoo2022ground} and generation tasks~\citep{o2023contrastive}. Notable advances include the UDR retriever by \citet{li2023unified}, which works effectively across multiple tasks, and the efficient approach by \citet{liucontext} that extracts in-context vectors from LLM embeddings to reduce computational costs. However, most ICL research assumes clean, high-quality demonstrations, leaving open questions about performance with noisy or imperfect examples. \myparagraph{ICL with noisy annotations:} Initial studies exploring random labels in ICL classification have shown mixed results. While \citet{min2022rethinking} found limited performance impact with random retrievers for certain LLM-dataset combinations, \citet{yoo2022ground} demonstrated significant performance degradation across a broader range of settings. More recent work has begun addressing noisy ICL directly. \citet{kang2024context} proposed \textit{Rectification} for classification tasks, though its fine-tuning requirements introduce substantial computational overhead. For generation tasks, \citet{gao2024noise} pioneered the first noise-robust method, but it shows limitations under high-noise conditions. \myparagraph{Debiasing LLM Output:} Despite their capabilities, LLMs can exhibit biases from their pre-training corpora that impact task performance. To address this, \citet{li2022contrastive} and \citet{zhao2024enhancing} developed Contrastive Decoding, which improves text generation quality by debiasing larger LLMs using outputs from smaller models within the same family. Additionally, \citet{fei2023mitigating} and \citet{zhao2021calibrate} introduced methods to reduce bias in LLMs by addressing both prefixed context bias and finite label bias in classification tasks.","\myparagraph{In-context learning (ICL):} Recent research has leveraged pre-trained LLMs for downstream NLP tasks through in-context learning, particularly in text classification~\citep{yoo2022ground} and generation tasks~\citep{o2023contrastive}. Notable advances include the UDR retriever by \citet{li2023unified}, which works effectively across multiple tasks, and the efficient approach by \citet{liucontext} that extracts in-context vectors from LLM embeddings to reduce computational costs. However, most ICL research assumes clean, high-quality demonstrations, leaving open questions about performance with noisy or imperfect examples. \myparagraph{ICL with noisy annotations:} Initial studies exploring random labels in ICL classification have shown mixed results. While \citet{min2022rethinking} found limited performance impact with random retrievers for certain LLM-dataset combinations, \citet{yoo2022ground} demonstrated significant performance degradation across a broader range of settings. More recent work has begun addressing noisy ICL directly. \citet{kang2024context} proposed \textit{Rectification} for classification tasks, though its fine-tuning requirements introduce substantial computational overhead. For generation tasks, \citet{gao2024noise} pioneered the first noise-robust method, but it shows limitations under high-noise conditions. \myparagraph{Debiasing LLM Output:} Despite their capabilities, LLMs can exhibit biases from their pre-training corpora that impact task performance. To address this, \citet{li2022contrastive} and \citet{zhao2024enhancing} developed Contrastive Decoding, which improves text generation quality by debiasing larger LLMs using outputs from smaller models within the same family. Additionally, \citet{fei2023mitigating} and \citet{zhao2021calibrate} introduced methods to reduce bias in LLMs by addressing both prefixed context bias and finite label bias in classification tasks.","In-context learning (ICL): Recent research has leveraged pre-trained LLMs for downstream NLP tasks through in-context learning, particularly in text classification (Yoo et al., 2022) and genera- tion tasks (O’Brien and Lewis, 2023). Notable advances include the UDR retriever by Li et al. (2023), which works effectively across multiple tasks, and the efficient approach by Liu et al. that extracts in-context vectors from LLM embeddings to reduce computational costs. However, most ICL research assumes clean, high-quality demonstra- tions, leaving open questions about performance with noisy or imperfect examples. ICL with noisy annotations: Initial studies ex- ploring random labels in ICL classification have shown mixed results. While Min et al. (2022) found limited performance impact with random retrievers for certain LLM-dataset combinations, Yoo et al. (2022) demonstrated significant performance degra- dation across a broader range of settings. More re- cent work has begun addressing noisy ICL directly. Kang et al. (2024) proposed Rectification for clas- sification tasks, though its fine-tuning requirements introduce substantial computational overhead. For generation tasks, Gao et al. (2024) pioneered the first noise-robust method, but it shows limitations under high-noise conditions. Debiasing LLM Output: Despite their capa- bilities, LLMs can exhibit biases from their pre- training corpora that impact task performance. To address this, Li et al. (2022) and Zhao et al. (2024) developed Contrastive Decoding, which improves text generation quality by debiasing larger LLMs using outputs from smaller models within the same family. Additionally, Fei et al. (2023) and Zhao et al. (2021) introduced methods to reduce bias in LLMs by addressing both prefixed context bias and finite label bias in classification tasks." 2505.24754v1,"Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding based on Guided Space Transformation","Yingchaojie Feng, Yiqun Sun, Yandong Sun, Minfeng Zhu, Qiang Huang, Anthony K. H. Tung, Wei Chen","In this work, we investigate an important task named instruction-following text embedding, which generates dynamic text embeddings that adapt to user instructions, highlighting specific attributes of text. Despite recent advancements, existing approaches suffer from significant computational overhead, as they require re-encoding the entire corpus for each new instruction. To address this challenge, we propose GSTransform, a novel instruction-following text embedding framework based on Guided Space Transformation. Our key observation is that instruction-relevant information is inherently encoded in generic embeddings but remains underutilized. Instead of repeatedly encoding the corpus for each instruction, GSTransform is a lightweight transformation mechanism that adapts pre-computed embeddings in real time to align with user instructions, guided by a small amount of text data with instruction-focused label annotation. We conduct extensive experiments on three instruction-awareness downstream tasks across nine real-world datasets, demonstrating that GSTransform improves instruction-following text embedding quality over state-of-the-art methods while achieving dramatic speedups of 6~300x in real-time processing on large-scale datasets. The source code is available at https://github.com/YingchaojieFeng/GSTransform.","cs.CL, cs.AI, cs.IR",2025-05-30T16:16:22+00:00,2025-05-30T16:16:22+00:00,http://arxiv.org/abs/2505.24754v1,http://arxiv.org/abs/2505.24754v1,2025-05-30 16:16:22+00:00,"\label{sec:related_work} \subsection{Generic Text Embedding} Text embedding has been a long-studied problem. Since word embeddings, people adopt self-supervised training in generating word embeddings, and pool the word embeddings to form text embeddings \cite{NIPS2013_9aa42b31, pennington-etal-2014-glove}. Recent advancements in context-aware semantic text embedding models leverage Transformer-based architectures \cite{transformer, devlin-etal-2019-bert} as their backbone, often employing customized objectives like contrastive loss to train the models \cite{cer-etal-2018-universal, reimers-gurevych-2019-sentence, gao-etal-2021-simcse, zhuo-etal-2023-whitenedcse}. Moreover, state-of-the-art (SOTA) text embedding models have been further enhanced with techniques such as using large language models (LLMs) \cite{wang2023improving, muennighoff2024generative, lei-etal-2024-meta} and more sophisticated loss functions designed to address issues like cosine saturation \cite{li-li-2024-aoe}. Despite their effectiveness, these methods lack generalizability and fail to meet diverse user needs when downstream tasks require focusing on specific aspects beyond general semantics. \subsection{Instruction-Following Text Embedding} Instruction-following text embedding~\cite{su-etal-2023-one, peng-etal-2024-answer} allows users to guide embedding generation through customized instructions. The model produces embeddings that align with users' specific interests by considering both the input text and instructions. InstructOR~\cite{su-etal-2023-one} pioneered instruction-based embeddings by concatenating instructions with input texts and training the model using contrastive objectives across a diverse set of instructions. It adapts embeddings for varied semantic interpretations but does not explicitly model instruction-specific semantic aspects. InBedder~\cite{peng-etal-2024-answer} extends this idea by treating instructions as questions and generating intermediate answers to produce more fine-grained, instruction-aware embeddings. They also propose Instruction Awareness Tests, which we adopt to evaluate Triplet Alignment, STS, and Clustering tasks. %%% Yet, both methods require re-encoding the entire corpus for each new instruction, resulting in notable computational overhead and latency, especially for large-scale datasets. Beyond text embeddings, related efforts have explored instruction-aware and prompt-based information retrieval~\cite{weller2024promptriever, min2024unihgkr, oh2024instructir, sun2024mair, weller2024followir}, offering alternative formulations that leverage user intent to enhance retrieval quality.","\subsection{Generic Text Embedding} Text embedding has been a long-studied problem. Since word embeddings, people adopt self-supervised training in generating word embeddings, and pool the word embeddings to form text embeddings \cite{NIPS2013_9aa42b31, pennington-etal-2014-glove}. Recent advancements in context-aware semantic text embedding models leverage Transformer-based architectures \cite{transformer, devlin-etal-2019-bert} as their backbone, often employing customized objectives like contrastive loss to train the models \cite{cer-etal-2018-universal, reimers-gurevych-2019-sentence, gao-etal-2021-simcse, zhuo-etal-2023-whitenedcse}. Moreover, state-of-the-art (SOTA) text embedding models have been further enhanced with techniques such as using large language models (LLMs) \cite{wang2023improving, muennighoff2024generative, lei-etal-2024-meta} and more sophisticated loss functions designed to address issues like cosine saturation \cite{li-li-2024-aoe}. Despite their effectiveness, these methods lack generalizability and fail to meet diverse user needs when downstream tasks require focusing on specific aspects beyond general semantics. \subsection{Instruction-Following Text Embedding} Instruction-following text embedding~\cite{su-etal-2023-one, peng-etal-2024-answer} allows users to guide embedding generation through customized instructions. The model produces embeddings that align with users' specific interests by considering both the input text and instructions. InstructOR~\cite{su-etal-2023-one} pioneered instruction-based embeddings by concatenating instructions with input texts and training the model using contrastive objectives across a diverse set of instructions. It adapts embeddings for varied semantic interpretations but does not explicitly model instruction-specific semantic aspects. InBedder~\cite{peng-etal-2024-answer} extends this idea by treating instructions as questions and generating intermediate answers to produce more fine-grained, instruction-aware embeddings. They also propose Instruction Awareness Tests, which we adopt to evaluate Triplet Alignment, STS, and Clustering tasks. %%% Yet, both methods require re-encoding the entire corpus for each new instruction, resulting in notable computational overhead and latency, especially for large-scale datasets. Beyond text embeddings, related efforts have explored instruction-aware and prompt-based information retrieval~\cite{weller2024promptriever, min2024unihgkr, oh2024instructir, sun2024mair, weller2024followir}, offering alternative formulations that leverage user intent to enhance retrieval quality.","2.1 Generic Text Embedding Text embedding has been a long-studied prob- lem. Since word embeddings, people adopt self- supervised training in generating word embed- dings, and pool the word embeddings to form text embeddings (Mikolov et al., 2013; Penning- ton et al., 2014). Recent advancements in context- aware semantic text embedding models leverage Transformer-based architectures (Vaswani et al., 2017; Devlin et al., 2019) as their backbone, often employing customized objectives like contrastive loss to train the models (Cer et al., 2018; Reimers and Gurevych, 2019; Gao et al., 2021; Zhuo et al., 2023). Moreover, state-of-the-art (SOTA) text em- bedding models have been further enhanced with techniques such as using large language models (LLMs) (Wang et al., 2023; Muennighoff et al., 2025; Lei et al., 2024) and more sophisticated loss functions designed to address issues like cosine saturation (Li and Li, 2024). Despite their effectiveness, these methods lack generalizability and fail to meet diverse user needs when downstream tasks require focusing on spe- cific aspects beyond general semantics. 2.2 Instruction-Following Text Embedding Instruction-following text embedding (Su et al., 2023; Peng et al., 2024) allows users to guide embedding generation through customized instruc- tions. The model produces embeddings that align with users’ specific interests by considering both the input text and instructions. InstructOR (Su et al., 2023) pioneered instruction-based embeddings by concatenating instructions with input texts and training the model using contrastive objectives across a diverse set of instructions. It adapts embeddings for varied semantic interpretations but does not explicitly model instruction-specific semantic aspects. InBedder (Peng et al., 2024) extends this idea by treating instructions as questions and generating intermediate answers to produce more fine-grained, instruction-aware embeddings. They also propose Instruction Awareness Tests, which we adopt to evaluate Triplet Alignment, STS, and Clustering tasks. Yet, both methods require re-encoding the entire corpus for each new instruction, resulting in notable computational overhead and latency, especially for large-scale datasets. Beyond text embeddings, related efforts have explored instruction-aware and prompt-based infor- mation retrieval (Weller et al., 2025b; Min et al., 2025; Oh et al., 2024; Sun et al., 2024; Weller et al., 2025a), offering alternative formulations that lever- age user intent to enhance retrieval quality." 2505.24575v1,NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization,"Hyuntak Kim, Byung-Hak Kim","Summarizing long-form narratives--such as books, movies, and TV scripts--requires capturing intricate plotlines, character interactions, and thematic coherence, a task that remains challenging for existing LLMs. We introduce NexusSum, a multi-agent LLM framework for narrative summarization that processes long-form text through a structured, sequential pipeline--without requiring fine-tuning. Our approach introduces two key innovations: (1) Dialogue-to-Description Transformation: A narrative-specific preprocessing method that standardizes character dialogue and descriptive text into a unified format, improving coherence. (2) Hierarchical Multi-LLM Summarization: A structured summarization pipeline that optimizes chunk processing and controls output length for accurate, high-quality summaries. Our method establishes a new state-of-the-art in narrative summarization, achieving up to a 30.0% improvement in BERTScore (F1) across books, movies, and TV scripts. These results demonstrate the effectiveness of multi-agent LLMs in handling long-form content, offering a scalable approach for structured summarization in diverse storytelling domains.","cs.CL, cs.AI",2025-05-30T13:26:23+00:00,2025-05-30T13:26:23+00:00,http://arxiv.org/abs/2505.24575v1,http://arxiv.org/abs/2505.24575v1,2025-05-30 13:26:23+00:00,"\label{sec:relatedwork} Narrative summarization differs from traditional document summarization, requiring specialized techniques to handle complex plots, evolving characters, and mixed prose-dialogue structures. This section reviews related work on narrative summarization, long-context summarization, and multi-agent LLMs, positioning~\ours~within this research landscape. \subsection{Narrative Summarization} Benchmark datasets like BookSum, MENSA, MovieSum and SummScreenFD have advanced long-form narrative summarization research. Traditional extractive-to-abstractive pipelines~\citep{ladhak-etal-2020-exploring, pu-etal-2022-two} risk losing coherence by omitting character arcs and event dependencies. To address this, scene-based and discourse-aware techniques leverage graph-based models~\cite{gorinski-lapata-2015-movie} and transformer-based saliency classifiers~\cite{saxena-keller-2024-select}. However, these methods struggle with full text processing, often truncating key content. Our approach overcomes this gap by introducing the dialogue-to-description transformation, allowing for a holistic narrative processing while preserving coherence. \subsection{Long-Context Summarization} Long-context summarization techniques typically fall into two categories: \paragraph{\textbf{Architectural Optimization}} Transformer models struggle with scalability due to the quadratic cost of self-attention. Solutions include sparse attention, memory-efficient encoding, and long-context finetuning~\citep{zaheer2020bigbird, Beltagy2020Longformer, kitaev2020reformerefficienttransformer, guo-etal-2022-longt5, wang2020linformerselfattentionlinearcomplexity}. Expanded context windows (up to 200K tokens)~\citep{chen2023extendingcontextwindowlarge, gpt4_technical, mistralai2024large} help but still degrade in multi-turn dependencies, entity tracking, and coherence~\cite{liu-etal-2024-lost}. \paragraph{Chunking-Based Method} Chunking-based approaches like SLED~\cite{ivgi-etal-2023-sled} and Unlimiformer~\cite{bertsch2023unlimiformer} segment text for hierarchical summarization, while CachED~\cite{saxena2025endtoendlongdocumentsummarization} improves efficiency via gradient caching but requires finetuning. Unlike prior methods, \ours~offers a training-free alternative leveraging Multi-LLM agents, allowing full text summarization without truncation. \subsection{Multi-Agent LLMs for Summarization} Recent multi-agent LLM frameworks, such as Chain of Agents (CoA)~\cite{zhang2024chain} and BooookScore~\cite{chang2024booookscore}, improve document summarization through hierarchical merging and sequential refinement (HM-SR) ~\cite{jeong2025agentasjudgefactualsummarizationlong}. However, they lack adaptations for narrative coherence, character interactions, and event dependencies. Retrieval-augmented generation~\cite{NEURIPS2020_rag} improves factuality but struggles with long-form storytelling, often missing thematic continuity~\citep{geng-etal-2022-improving-abstractive, uthus-ni-2023-rise}. \ours~addresses these gaps by integrating the dialogue-to-description transformation and systematic length control, ensuring coherent and contextually faithful summaries.","Narrative summarization differs from traditional document summarization, requiring specialized techniques to handle complex plots, evolving characters, and mixed prose-dialogue structures. This section reviews related work on narrative summarization, long-context summarization, and multi-agent LLMs, positioning~\ours~within this research landscape. \subsection{Narrative Summarization} Benchmark datasets like BookSum, MENSA, MovieSum and SummScreenFD have advanced long-form narrative summarization research. Traditional extractive-to-abstractive pipelines~\citep{ladhak-etal-2020-exploring, pu-etal-2022-two} risk losing coherence by omitting character arcs and event dependencies. To address this, scene-based and discourse-aware techniques leverage graph-based models~\cite{gorinski-lapata-2015-movie} and transformer-based saliency classifiers~\cite{saxena-keller-2024-select}. However, these methods struggle with full text processing, often truncating key content. Our approach overcomes this gap by introducing the dialogue-to-description transformation, allowing for a holistic narrative processing while preserving coherence. \subsection{Long-Context Summarization} Long-context summarization techniques typically fall into two categories: \paragraph{\textbf{Architectural Optimization}} Transformer models struggle with scalability due to the quadratic cost of self-attention. Solutions include sparse attention, memory-efficient encoding, and long-context finetuning~\citep{zaheer2020bigbird, Beltagy2020Longformer, kitaev2020reformerefficienttransformer, guo-etal-2022-longt5, wang2020linformerselfattentionlinearcomplexity}. Expanded context windows (up to 200K tokens)~\citep{chen2023extendingcontextwindowlarge, gpt4_technical, mistralai2024large} help but still degrade in multi-turn dependencies, entity tracking, and coherence~\cite{liu-etal-2024-lost}. \paragraph{Chunking-Based Method} Chunking-based approaches like SLED~\cite{ivgi-etal-2023-sled} and Unlimiformer~\cite{bertsch2023unlimiformer} segment text for hierarchical summarization, while CachED~\cite{saxena2025endtoendlongdocumentsummarization} improves efficiency via gradient caching but requires finetuning. Unlike prior methods, \ours~offers a training-free alternative leveraging Multi-LLM agents, allowing full text summarization without truncation. \subsection{Multi-Agent LLMs for Summarization} Recent multi-agent LLM frameworks, such as Chain of Agents (CoA)~\cite{zhang2024chain} and BooookScore~\cite{chang2024booookscore}, improve document summarization through hierarchical merging and sequential refinement (HM-SR) ~\cite{jeong2025agentasjudgefactualsummarizationlong}. However, they lack adaptations for narrative coherence, character interactions, and event dependencies. Retrieval-augmented generation~\cite{NEURIPS2020_rag} improves factuality but struggles with long-form storytelling, often missing thematic continuity~\citep{geng-etal-2022-improving-abstractive, uthus-ni-2023-rise}. \ours~addresses these gaps by integrating the dialogue-to-description transformation and systematic length control, ensuring coherent and contextually faithful summaries.","Narrative summarization differs from traditional document summarization, requiring specialized techniques to handle complex plots, evolving char- acters, and mixed prose-dialogue structures. This section reviews related work on narrative summa- rization, long-context summarization, and multi- agent LLMs, positioning NEXUS SUMwithin this research landscape. 2.1 Narrative Summarization Benchmark datasets like BookSum, MENSA, MovieSum and SummScreenFD have advanced long-form narrative summarization research. Tra- ditional extractive-to-abstractive pipelines (Ladhak et al., 2020; Liu et al., 2022) risk losing coherence by omitting character arcs and event dependencies. To address this, scene-based and discourse-aware techniques leverage graph-based models (Gorinski and Lapata, 2015) and transformer-based saliency classifiers (Saxena and Keller, 2024b). However, these methods struggle with full text processing, often truncating key content. Our approach over- comes this gap by introducing the dialogue-to- description transformation, allowing for a holistic narrative processing while preserving coherence.2.2 Long-Context Summarization Long-context summarization techniques typically fall into two categories: Architectural Optimization Transformer mod- els struggle with scalability due to the quadratic cost of self-attention. Solutions include sparse attention, memory-efficient encoding, and long- context finetuning (Zaheer et al., 2020; Beltagy et al., 2020; Kitaev et al., 2020; Guo et al., 2022; Wang et al., 2020a). Expanded context windows (up to 200K tokens) (Chen et al., 2023; OpenAI, 2023; Mistral AI, 2024) help but still degrade in multi-turn dependencies, entity tracking, and co- herence (Liu et al., 2024). Chunking-Based Method Chunking-based ap- proaches like SLED (Ivgi et al., 2023) and Un- limiformer (Bertsch et al., 2023) segment text for hierarchical summarization, while CachED (Sax- ena et al., 2025) improves efficiency via gradient caching but requires finetuning. Unlike prior methods, NEXUS SUM offers a training-free alternative leveraging Multi-LLM agents, allowing full text summarization without truncation. 2.3 Multi-Agent LLMs for Summarization Recent multi-agent LLM frameworks, such as Chain of Agents (CoA) (Zhang et al., 2024) and BooookScore (Chang et al., 2024), improve docu- ment summarization through hierarchical merging and sequential refinement (HM-SR) (Jeong et al., 2025). However, they lack adaptations for narrative coherence, character interactions, and event depen- dencies. Retrieval-augmented generation (Lewis et al., 2020) improves factuality but struggles with long-form storytelling, often missing thematic con- tinuity (Geng et al., 2022; Uthus and Ni, 2023). NEXUS SUMaddresses these gaps by integrating the dialogue-to-description transformation and sys- tematic length control, ensuring coherent and con- textually faithful summaries." 2506.00085v1,COSMIC: Generalized Refusal Direction Identification in LLM Activations,"Vincent Siu, Nicholas Crispino, Zihao Yu, Sam Pan, Zhun Wang, Yang Liu, Dawn Song, Chenguang Wang","Large Language Models (LLMs) encode behaviors such as refusal within their activation space, yet identifying these behaviors remains a significant challenge. Existing methods often rely on predefined refusal templates detectable in output tokens or require manual analysis. We introduce \textbf{COSMIC} (Cosine Similarity Metrics for Inversion of Concepts), an automated framework for direction selection that identifies viable steering directions and target layers using cosine similarity - entirely independent of model outputs. COSMIC achieves steering performance comparable to prior methods without requiring assumptions about a model's refusal behavior, such as the presence of specific refusal tokens. It reliably identifies refusal directions in adversarial settings and weakly aligned models, and is capable of steering such models toward safer behavior with minimal increase in false refusals, demonstrating robustness across a wide range of alignment conditions.","cs.CL, cs.AI",2025-05-30T04:54:18+00:00,2025-05-30T04:54:18+00:00,http://arxiv.org/abs/2506.00085v1,http://arxiv.org/abs/2506.00085v1,2025-05-30 04:54:18+00:00,"Our work builds on research in LLM safety and mechanistic interpretability. \paragraph{Safety:} LLM alignment is typically achieved through fine-tuning \cite{ouyang2022traininglanguagemodelsfollow} and RLHF \cite{bai2022traininghelpfulharmlessassistant, ganguli2022redteaminglanguagemodels}, yet studies show that fine-tuning \cite{lermen2024lorafinetuningefficientlyundoes, yang2023shadowalignmenteasesubverting, qi2023finetuningalignedlanguagemodels} and adversarial prompts \cite{andriushchenko2024jailbreaking, zou2023universaltransferableadversarialattacks, chao2024jailbreakingblackboxlarge} can bypass refusal mechanisms. \paragraph{Steering:} Recent work demonstrate refusal behavior is encoded in activation space \cite{weidinger2021ethicalsocialrisksharm, arditi2024refusallanguagemodelsmediated, marshall2024refusalllmsaffinefunction} with interventions aiming to modulate it directly \cite{zou2023representationengineeringtopdownapproach, arditi2024refusallanguagemodelsmediated, marshall2024refusalllmsaffinefunction, Spectralediting, bhattacharjee2024inferencetimecategorywisesafetysteering, uppaal2025profs}. Many methods use contrastive data pairs to extract feature directions \cite{burns2024discoveringlatentknowledgelanguage, arditi2024refusallanguagemodelsmediated, panickssery2024steeringllama2contrastive, zou2023representationengineeringtopdownapproach} for behavior steering \cite{zou2023representationengineeringtopdownapproach, panickssery2024steeringllama2contrastive, turner2024steeringlanguagemodelsactivation, arditi2024refusallanguagemodelsmediated, lee2025programmingrefusalconditionalactivation} and concept removal techniques \cite{guerner2024geometricnotioncausalprobing, haghighatkhah2022betterhitnailhead, ravfogel2020nulloutguardingprotected, belrose2023leaceperfectlinearconcept} such as Representation Engineering and Contrastive Activation Addition \cite{zou2023representationengineeringtopdownapproach, panickssery2024steeringllama2contrastive}.\citet{wang2024trojanactivationattackredteaming} also uses similarity-based scores to target intervention layers. \paragraph{Interpretability}: Model behaviors are often represented as linearly encoded in activation space \cite{bolukbasi2016man, elhage2022toymodelssuperposition, park2024linearrepresentationhypothesisgeometry, mikolov2013linguistic, nanda2023emergentlinearrepresentationsworld, hernandez2021lowdimensionallineargeometrycontextualized}, although other work posit refusal behaviors as affine functions \cite{marshall2024refusalllmsaffinefunction}. These hypothesis are investigated via mechanistic interpretability approaches leveraging sparse autoencoders \cite{bricken2023monosemanticity, templeton2024scaling, cunningham2023sparseautoencodershighlyinterpretable}, weight-based analysis \cite{pearce2024bilinearmlpsenableweightbased}, and circuit analysis \cite{elhage2021mathematical, lieberum2023doescircuitanalysisinterpretability} to further understand model internals.","Our work builds on research in LLM safety and mechanistic interpretability. \paragraph{Safety:} LLM alignment is typically achieved through fine-tuning \cite{ouyang2022traininglanguagemodelsfollow} and RLHF \cite{bai2022traininghelpfulharmlessassistant, ganguli2022redteaminglanguagemodels}, yet studies show that fine-tuning \cite{lermen2024lorafinetuningefficientlyundoes, yang2023shadowalignmenteasesubverting, qi2023finetuningalignedlanguagemodels} and adversarial prompts \cite{andriushchenko2024jailbreaking, zou2023universaltransferableadversarialattacks, chao2024jailbreakingblackboxlarge} can bypass refusal mechanisms. \paragraph{Steering:} Recent work demonstrate refusal behavior is encoded in activation space \cite{weidinger2021ethicalsocialrisksharm, arditi2024refusallanguagemodelsmediated, marshall2024refusalllmsaffinefunction} with interventions aiming to modulate it directly \cite{zou2023representationengineeringtopdownapproach, arditi2024refusallanguagemodelsmediated, marshall2024refusalllmsaffinefunction, Spectralediting, bhattacharjee2024inferencetimecategorywisesafetysteering, uppaal2025profs}. Many methods use contrastive data pairs to extract feature directions \cite{burns2024discoveringlatentknowledgelanguage, arditi2024refusallanguagemodelsmediated, panickssery2024steeringllama2contrastive, zou2023representationengineeringtopdownapproach} for behavior steering \cite{zou2023representationengineeringtopdownapproach, panickssery2024steeringllama2contrastive, turner2024steeringlanguagemodelsactivation, arditi2024refusallanguagemodelsmediated, lee2025programmingrefusalconditionalactivation} and concept removal techniques \cite{guerner2024geometricnotioncausalprobing, haghighatkhah2022betterhitnailhead, ravfogel2020nulloutguardingprotected, belrose2023leaceperfectlinearconcept} such as Representation Engineering and Contrastive Activation Addition \cite{zou2023representationengineeringtopdownapproach, panickssery2024steeringllama2contrastive}.\citet{wang2024trojanactivationattackredteaming} also uses similarity-based scores to target intervention layers. \paragraph{Interpretability}: Model behaviors are often represented as linearly encoded in activation space \cite{bolukbasi2016man, elhage2022toymodelssuperposition, park2024linearrepresentationhypothesisgeometry, mikolov2013linguistic, nanda2023emergentlinearrepresentationsworld, hernandez2021lowdimensionallineargeometrycontextualized}, although other work posit refusal behaviors as affine functions \cite{marshall2024refusalllmsaffinefunction}. These hypothesis are investigated via mechanistic interpretability approaches leveraging sparse autoencoders \cite{bricken2023monosemanticity, templeton2024scaling, cunningham2023sparseautoencodershighlyinterpretable}, weight-based analysis \cite{pearce2024bilinearmlpsenableweightbased}, and circuit analysis \cite{elhage2021mathematical, lieberum2023doescircuitanalysisinterpretability} to further understand model internals.","Our work builds on research in LLM safety and mechanistic interpretability. Safety: LLM alignment is typically achieved through fine-tuning (Ouyang et al., 2022) and RLHF (Bai et al., 2022; Ganguli et al., 2022), yet studies show that fine-tuning (Lermen et al., 2023; Yang et al., 2023; Qi et al., 2024) and adversarial prompts (Andriushchenko et al., 2024; Zou et al., 2023b; Chao et al., 2023) can bypass refusal mech- anisms. Steering: Recent work demonstrate refusal be- havior is encoded in activation space (Weidinger et al., 2021; Arditi et al., 2024; Marshall et al., 2024) with interventions aiming to modulate it di- rectly (Zou et al., 2023a; Arditi et al., 2024; Mar- shall et al., 2024; Qiu et al., 2024; Bhattacharjee et al., 2024; Uppaal et al., 2025). Many methods use contrastive data pairs to extract feature direc- tions (Burns et al., 2023; Arditi et al., 2024; Pan- ickssery et al., 2023; Zou et al., 2023a) for behavior steering (Zou et al., 2023a; Panickssery et al., 2023; Turner et al., 2023; Arditi et al., 2024; Lee et al., 2024) and concept removal techniques (Guerner et al., 2023; Haghighatkhah et al., 2022; Ravfo- gel et al., 2020; Belrose et al., 2023) such as Rep- resentation Engineering and Contrastive Activa- tion Addition (Zou et al., 2023a; Panickssery et al., 2023).Wang and Shu (2023) also uses similarity-based scores to target intervention layers. Interpretability : Model behaviors are often rep- resented as linearly encoded in activation space (Bolukbasi et al., 2016; Elhage et al., 2022; Park et al., 2024; Mikolov et al., 2013; Nanda et al., 2023; Hernandez and Andreas, 2021), although other work posit refusal behaviors as affine func- tions (Marshall et al., 2024). These hypothesis are investigated via mechanistic interpretability ap- proaches leveraging sparse autoencoders (Bricken et al., 2023; Templeton et al., 2024; Huben et al., 2024), weight-based analysis (Pearce et al., 2024), and circuit analysis (Elhage et al., 2021; Lieberum et al., 2023) to further understand model internals." 2505.23996v1,"Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs","Yinong Oliver Wang, Nivedha Sivakumar, Falaah Arif Khan, Rin Metcalf Susa, Adam Golinski, Natalie Mackraz, Barry-John Theobald, Luca Zappella, Nicholas Apostoloff","The recent rapid adoption of large language models (LLMs) highlights the critical need for benchmarking their fairness. Conventional fairness metrics, which focus on discrete accuracy-based evaluations (i.e., prediction correctness), fail to capture the implicit impact of model uncertainty (e.g., higher model confidence about one group over another despite similar accuracy). To address this limitation, we propose an uncertainty-aware fairness metric, UCerF, to enable a fine-grained evaluation of model fairness that is more reflective of the internal bias in model decisions compared to conventional fairness measures. Furthermore, observing data size, diversity, and clarity issues in current datasets, we introduce a new gender-occupation fairness evaluation dataset with 31,756 samples for co-reference resolution, offering a more diverse and suitable dataset for evaluating modern LLMs. We establish a benchmark, using our metric and dataset, and apply it to evaluate the behavior of ten open-source LLMs. For example, Mistral-7B exhibits suboptimal fairness due to high confidence in incorrect predictions, a detail overlooked by Equalized Odds but captured by UCerF. Overall, our proposed LLM benchmark, which evaluates fairness with uncertainty awareness, paves the way for developing more transparent and accountable AI systems.","cs.CL, cs.AI, cs.LG",2025-05-29T20:45:18+00:00,2025-05-29T20:45:18+00:00,http://arxiv.org/abs/2505.23996v1,http://arxiv.org/abs/2505.23996v1,2025-05-29 20:45:18+00:00,"\label{sec:related} % [Oliver] TODO: make shorter \subsection{Fairness Evaluation of LLMs} LLM evaluation has become a critical research area~\cite{liang2022holistic,hendrycks2020measuring,open-llm-leaderboard-v2}, especially the safety of LLMs~\cite{blodgett-etal-2020-language}. While efforts have been made in fields such as adversarial robustness~\cite{yang2024assessing}, toxicity~\cite{hartvigsen2022toxigen}, and harmfulness~\cite{magooda2023framework}, gender fairness in LLMs remains an important area of research that requires further exploration and development~\cite{li2023survey,mackraz2024evaluating,patel2024fairness}. Currently, most model fairness evaluations focus on prediction-based metrics (i.e., whether the model predictions are correct and/or unbiased)~\cite{laskar2023systematic,chu2024fairness}. For example, demographic parity quantifies the difference in positive prediction rates across demographic groups, while equalized odds measures the difference in error. Other metrics propose first collecting generated responses from various tasks (e.g., continuation of input text) and analyzing the generation quality (e.g., presence of bias) as indirect reflections of fairness~\cite{wang2024ceb}. Nonetheless, existing fairness metrics do not consider model uncertainty, which contains information about a model's internal decision-making~\cite{ye2024benchmarking} and can influence fairness estimation. Besides metrics, fairness evaluation datasets are another cornerstone for measuring LLM fairness~\cite{fabris2022algorithmic}. However, existing datasets to assess gender-occupation bias in LLMs have limitations. Datasets based on the WinoGrad Schema~\cite{levesque2012winograd}, such as WinoBias~\cite{zhao2018gender}, WinoBias+~\cite{vanmassenhove2021neutral} and WinoGender~\cite{rudinger2018gender} are no longer adequate to evaluate recent LLMs as detailed in~\cref{sec:dataset}. Datasets like Big-Bench (the Disambiguation\_QA task)~\cite{srivastava2023beyond}, BOLD~\cite{dhamala2021bold}, a variation of WinoBias~\cite{kotek2023gender}, and BBQ (the gender-identity split)~\cite{parrish2021bbq} either suffer from limited size or are based on templates which limit the diversity of sentence syntax and context. GAP~\cite{webster-etal-2018-mind} and GAP-Subjective~\cite{pant-dadu-2022-incorporating} focus on pronoun-name bias instead of occupation, which is not relevant to our work. BUG~\cite{levy-etal-2021-collecting-large}, which scrapes large-scale real-world corpus using 14 fixed searching patterns, suffers from limited syntax that could be memorized by models and label noise. To overcome existing limitations in studying gender-occupation biases, we use a state-of-the-art LLM to generate SynthBias, a large-scale, high-quality, and diverse synthetic dataset representing varied freeform contexts. \subsection{Uncertainty Estimation of LLMs} \label{sec:related_uncertainty} Model uncertainty~\cite{gawlikowski2023survey} is a critical factor in evaluating the reliability of language models~\cite{hu2023uncertainty,huang2023look,fadeeva2023lm,ye2024benchmarking,kendall2017uncertainties,ye2024benchmarking}. To quantify the model uncertainty of LLMs, estimation methods can be categorized~\cite{hu2023uncertainty} as: (1) Confidence-based methods, i.e., aggregating model prediction confidence, such as softmax response~\cite{bridle1990probabilistic,hendrycks2017a}, perplexity~\cite{jurafsky2000speech}, softmax-entropy-based approaches~\cite{fomicheva2020unsupervised,malinin2021uncertainty}, and conformal prediction~\cite{vovk2005algorithmic}; (2) Sampling-based methods, i.e., estimating uncertainty through repeated sampling, such as MC Dropout~\cite{gal2016dropout}, mutual information~\cite{yu2022learning}, predictive-entropy-based approaches~\cite{malinin2021uncertainty,kuhn2023semantic,duan2023shifting}, and P(True)~\cite{kadavath2022language}; (3) Distribution-based methods, i.e., modeling uncertainty by parameterizing probability distributions, such as prior networks~\cite{malinin2018predictive} or divergence and distance~\cite{darrin2022rainproof}. As recent studies~\cite{vashurin2025benchmarking,santilli2024spurious,santilli2025revisiting} show that logit-based uncertainty estimators can be as effective as more recent uncertainty metrics, in this paper, we use perplexity to estimate model uncertainty for its simplicity and intuitive interpretability. \subsection{Uncertainty-Aware Fairness} While fairness evaluation and uncertainty estimation of LLMs have been separately studied, the intersection of the two areas is overlooked despite its value~\cite{mehta2024evaluating}. The importance of evaluating models on both fairness and reliability (uncertainty) metrics is shown in~\cite{kuzmin-etal-2023-uncertainty}, but this work studies fairness and reliability as two separate metrics and focuses on how debiasing methods impact the trade-off between the two metrics. Distinctively, our work promotes an improvement in the fairness metric itself by incorporating uncertainty information. Uncertainty analysis reveals additional insights into model behavior, helping to uncover subtle fairness differences that conventional metrics may overlook~\cite{kuzucu2023uncertainty,kaiser2022uncertainty,liang2022holistic}. By incorporating uncertainty estimation, previous methods~\cite{kaiser2022uncertainty,tahir2023fairness} have gained additional information and achieved fair model outcomes in the decision-making process. However, these methods are designed for tabular and vision datasets. In the specific area of LLM fairness, to the best of our knowledge, only one work~\cite{kuzucu2023uncertainty} has considered uncertainty by evaluating whether two groups have the same uncertainty, complementing conventional group fairness metrics. However, this method does not jointly consider uncertainty and correctness, neglecting complex scenarios with varying correctness and uncertainty levels, which is essential for comprehensive fairness evaluation as shown in~\cref{sec:metric}.","% [Oliver] TODO: make shorter \subsection{Fairness Evaluation of LLMs} LLM evaluation has become a critical research area~\cite{liang2022holistic,hendrycks2020measuring,open-llm-leaderboard-v2}, especially the safety of LLMs~\cite{blodgett-etal-2020-language}. While efforts have been made in fields such as adversarial robustness~\cite{yang2024assessing}, toxicity~\cite{hartvigsen2022toxigen}, and harmfulness~\cite{magooda2023framework}, gender fairness in LLMs remains an important area of research that requires further exploration and development~\cite{li2023survey,mackraz2024evaluating,patel2024fairness}. Currently, most model fairness evaluations focus on prediction-based metrics (i.e., whether the model predictions are correct and/or unbiased)~\cite{laskar2023systematic,chu2024fairness}. For example, demographic parity quantifies the difference in positive prediction rates across demographic groups, while equalized odds measures the difference in error. Other metrics propose first collecting generated responses from various tasks (e.g., continuation of input text) and analyzing the generation quality (e.g., presence of bias) as indirect reflections of fairness~\cite{wang2024ceb}. Nonetheless, existing fairness metrics do not consider model uncertainty, which contains information about a model's internal decision-making~\cite{ye2024benchmarking} and can influence fairness estimation. Besides metrics, fairness evaluation datasets are another cornerstone for measuring LLM fairness~\cite{fabris2022algorithmic}. However, existing datasets to assess gender-occupation bias in LLMs have limitations. Datasets based on the WinoGrad Schema~\cite{levesque2012winograd}, such as WinoBias~\cite{zhao2018gender}, WinoBias+~\cite{vanmassenhove2021neutral} and WinoGender~\cite{rudinger2018gender} are no longer adequate to evaluate recent LLMs as detailed in~\cref{sec:dataset}. Datasets like Big-Bench (the Disambiguation\_QA task)~\cite{srivastava2023beyond}, BOLD~\cite{dhamala2021bold}, a variation of WinoBias~\cite{kotek2023gender}, and BBQ (the gender-identity split)~\cite{parrish2021bbq} either suffer from limited size or are based on templates which limit the diversity of sentence syntax and context. GAP~\cite{webster-etal-2018-mind} and GAP-Subjective~\cite{pant-dadu-2022-incorporating} focus on pronoun-name bias instead of occupation, which is not relevant to our work. BUG~\cite{levy-etal-2021-collecting-large}, which scrapes large-scale real-world corpus using 14 fixed searching patterns, suffers from limited syntax that could be memorized by models and label noise. To overcome existing limitations in studying gender-occupation biases, we use a state-of-the-art LLM to generate SynthBias, a large-scale, high-quality, and diverse synthetic dataset representing varied freeform contexts. \subsection{Uncertainty Estimation of LLMs} Model uncertainty~\cite{gawlikowski2023survey} is a critical factor in evaluating the reliability of language models~\cite{hu2023uncertainty,huang2023look,fadeeva2023lm,ye2024benchmarking,kendall2017uncertainties,ye2024benchmarking}. To quantify the model uncertainty of LLMs, estimation methods can be categorized~\cite{hu2023uncertainty} as: (1) Confidence-based methods, i.e., aggregating model prediction confidence, such as softmax response~\cite{bridle1990probabilistic,hendrycks2017a}, perplexity~\cite{jurafsky2000speech}, softmax-entropy-based approaches~\cite{fomicheva2020unsupervised,malinin2021uncertainty}, and conformal prediction~\cite{vovk2005algorithmic}; (2) Sampling-based methods, i.e., estimating uncertainty through repeated sampling, such as MC Dropout~\cite{gal2016dropout}, mutual information~\cite{yu2022learning}, predictive-entropy-based approaches~\cite{malinin2021uncertainty,kuhn2023semantic,duan2023shifting}, and P(True)~\cite{kadavath2022language}; (3) Distribution-based methods, i.e., modeling uncertainty by parameterizing probability distributions, such as prior networks~\cite{malinin2018predictive} or divergence and distance~\cite{darrin2022rainproof}. As recent studies~\cite{vashurin2025benchmarking,santilli2024spurious,santilli2025revisiting} show that logit-based uncertainty estimators can be as effective as more recent uncertainty metrics, in this paper, we use perplexity to estimate model uncertainty for its simplicity and intuitive interpretability. \subsection{Uncertainty-Aware Fairness} While fairness evaluation and uncertainty estimation of LLMs have been separately studied, the intersection of the two areas is overlooked despite its value~\cite{mehta2024evaluating}. The importance of evaluating models on both fairness and reliability (uncertainty) metrics is shown in~\cite{kuzmin-etal-2023-uncertainty}, but this work studies fairness and reliability as two separate metrics and focuses on how debiasing methods impact the trade-off between the two metrics. Distinctively, our work promotes an improvement in the fairness metric itself by incorporating uncertainty information. Uncertainty analysis reveals additional insights into model behavior, helping to uncover subtle fairness differences that conventional metrics may overlook~\cite{kuzucu2023uncertainty,kaiser2022uncertainty,liang2022holistic}. By incorporating uncertainty estimation, previous methods~\cite{kaiser2022uncertainty,tahir2023fairness} have gained additional information and achieved fair model outcomes in the decision-making process. However, these methods are designed for tabular and vision datasets. In the specific area of LLM fairness, to the best of our knowledge, only one work~\cite{kuzucu2023uncertainty} has considered uncertainty by evaluating whether two groups have the same uncertainty, complementing conventional group fairness metrics. However, this method does not jointly consider uncertainty and correctness, neglecting complex scenarios with varying correctness and uncertainty levels, which is essential for comprehensive fairness evaluation as shown in~\cref{sec:metric}.","2.1 Fairness Evaluation of LLMs LLM evaluation has become a critical research area (Liang et al., 2022; Hendrycks et al., 2021; Fourrier et al., 2024), especially the safety of LLMs (Blodgett et al., 2020). While efforts have been made in fields such as adversarial robust- ness (Yang et al., 2024b), toxicity (Hartvigsen et al., 2022), and harmfulness (Magooda et al., 2023), gender fairness in LLMs remains an important area of research that requires further exploration and development (Li et al., 2023; Mack- raz et al., 2024; Patel et al., 2024). Currently, most model fairness evaluations focus on prediction-based metrics (i.e., whether the model predic- tions are correct and/or unbiased) (Laskar et al., 2023; Chu et al., 2024). For example, demographic parity quantifies the difference in positive prediction rates across demographic groups, while equalized odds measures the difference in error. Other metrics propose first collecting generated responses from various tasks (e.g., continuation of input text) and an- alyzing the generation quality (e.g., presence of bias) as indirect reflections of fairness (Wang et al., 2024). Nonethe- less, existing fairness metrics do not consider model uncer- tainty, which contains information about a model’s internal decision-making (Ye et al., 2024) and can influence fairness estimation. Besides metrics, fairness evaluation datasets are another cornerstone for measuring LLM fairness (Fabris et al., 2022). However, existing datasets to assess gender-occupation bias in LLMs have limitations. Datasets based on the WinoGrad Schema (Levesque et al., 2012), such as WinoBias (Zhao et al., 2018), WinoBias+ (Vanmassenhove et al., 2021) and WinoGender (Rudinger et al., 2018) are no longer adequate to evaluate recent LLMs as detailed in Sec" 2505.23353v1,"Synthetic Generation and Latent Projection Denoising of Rim Lesions in Multiple Sclerosis","Alexandra G. Roberts, Ha M. Luu, Mert Şişman, Alexey V. Dimov, Ceren Tozlu, Ilhami Kovanlikaya, Susan A. Gauthier, Thanh D. Nguyen, Yi Wang","Quantitative susceptibility maps from magnetic resonance images can provide both prognostic and diagnostic information in multiple sclerosis, a neurodegenerative disease characterized by the formation of lesions in white matter brain tissue. In particular, susceptibility maps provide adequate contrast to distinguish between ""rim"" lesions, surrounded by deposited paramagnetic iron, and ""non-rim"" lesion types. These paramagnetic rim lesions (PRLs) are an emerging biomarker in multiple sclerosis. Much effort has been devoted to both detection and segmentation of such lesions to monitor longitudinal change. As paramagnetic rim lesions are rare, addressing this problem requires confronting the class imbalance between rim and non-rim lesions. We produce synthetic quantitative susceptibility maps of paramagnetic rim lesions and show that inclusion of such synthetic data improves classifier performance and provide a multi-channel extension to generate accompanying contrasts and probabilistic segmentation maps. We exploit the projection capability of our trained generative network to demonstrate a novel denoising approach that allows us to train on ambiguous rim cases and substantially increase the minority class. We show that both synthetic lesion synthesis and our proposed rim lesion label denoising method best approximate the unseen rim lesion distribution and improve detection in a clinically interpretable manner. We release our code and generated data at https://github.com/agr78/PRLx-GAN upon publication.","eess.IV, cs.AI, cs.CV",2025-05-29T11:22:48+00:00,2025-05-29T11:22:48+00:00,http://arxiv.org/abs/2505.23353v1,http://arxiv.org/abs/2505.23353v1,2025-05-29 11:22:48+00:00,"\label{sec:formatting} \subsection{Synthetic MRI} Synthetic MRI data ranges from classical representations generated via signal models~\cite{Mcal,Jand} to more recent deep learning approaches aiming to estimate mappings over synthetic data that are applicable to $in \ vivo$ data~\cite{Emoy,Kgop} and across various contrasts and resolutions~\cite{Jigl}. Given pretrained models and approximate source and target distributions~\cite{jwil}, transfer learning approaches can address the challenges of gathering sufficient MRI data~\cite{weli,jval,smat}. Another subset of solutions focus on generating synthetic data for small datasets~\cite{Vtha} and label scarcity arising from new~\cite{Awah} or lethal~\cite{Bahm} pathologies. Rim segmentation efforts have addressed the class imbalance problem by oversampling the minority class in the latent space~\cite{Hzha,Ddab}, included in our comparison. Synthetic lesions in multiple sclerosis have been generated~\cite{Msal} using a variational autoencoder on qualitative $\mathrm{T_2FLAIR}$. Like our work, this model aims to generate synthetic MS lesions, differing in that the whole brain learned mapping is between healthy cases and MS patients on qualitative $\mathrm{T_2FLAIR}$. We train a generative adversarial network~\cite{Igoo} (GAN) with the goal of learning the mapping from random noise initializing the generator model to synthetic, quantitative, paramagnetic rim lesions. We do this using susceptibility images as rim lesions are differentiable from non-rim lesions only on QSM. \subsection{Latent projection denoising} From our choice in architecture arises an opportunity to recover ambiguous or ``noisy'' rim lesions by projecting their latent vectors into the latent space of a pretrained GAN synthesizing only unambiguous rim lesions, closely related to GAN inversion~\cite{Wxia}. Namely, these ambiguous cases are lesions where expert raters disagree on the label. The presence of label noise has been addressed by conditional GANs~\cite{Mmir,Kthe} (cGAN), which are trained on both majority and minority class labels. Given the data imbalance between rim and non-rim lesions, we focus on the minority class rather than the majority class and we show its inclusion nearly doubles the required training time. We seek to make use of ambiguous, noisy real rim lesions by training the generator only the unambiguous rim lesion minority class. We use the projection into the learned latent space from unambiguous (or noiseless) rim lesions to recover denoised lesions. Other works related to this effort include modeling label noise as a latent space shift~\cite{Wehua}, a technique applied to correct classifications rather than augment data. Also related is estimation of the noise transition matrix~\cite{Hbae} to calibrate classifiers trained on noisy labels, which requires some understanding of the noise distribution. Perhaps most relevant is the use of GAN inversion for under-sampled MRI reconstruction~\cite{Vkel}, which deals with the presence of instrument noise rather than mitigation of more subjective label noise.","\subsection{Synthetic MRI} Synthetic MRI data ranges from classical representations generated via signal models~\cite{Mcal,Jand} to more recent deep learning approaches aiming to estimate mappings over synthetic data that are applicable to $in \ vivo$ data~\cite{Emoy,Kgop} and across various contrasts and resolutions~\cite{Jigl}. Given pretrained models and approximate source and target distributions~\cite{jwil}, transfer learning approaches can address the challenges of gathering sufficient MRI data~\cite{weli,jval,smat}. Another subset of solutions focus on generating synthetic data for small datasets~\cite{Vtha} and label scarcity arising from new~\cite{Awah} or lethal~\cite{Bahm} pathologies. Rim segmentation efforts have addressed the class imbalance problem by oversampling the minority class in the latent space~\cite{Hzha,Ddab}, included in our comparison. Synthetic lesions in multiple sclerosis have been generated~\cite{Msal} using a variational autoencoder on qualitative $\mathrm{T_2FLAIR}$. Like our work, this model aims to generate synthetic MS lesions, differing in that the whole brain learned mapping is between healthy cases and MS patients on qualitative $\mathrm{T_2FLAIR}$. We train a generative adversarial network~\cite{Igoo} (GAN) with the goal of learning the mapping from random noise initializing the generator model to synthetic, quantitative, paramagnetic rim lesions. We do this using susceptibility images as rim lesions are differentiable from non-rim lesions only on QSM. \subsection{Latent projection denoising} From our choice in architecture arises an opportunity to recover ambiguous or ``noisy'' rim lesions by projecting their latent vectors into the latent space of a pretrained GAN synthesizing only unambiguous rim lesions, closely related to GAN inversion~\cite{Wxia}. Namely, these ambiguous cases are lesions where expert raters disagree on the label. The presence of label noise has been addressed by conditional GANs~\cite{Mmir,Kthe} (cGAN), which are trained on both majority and minority class labels. Given the data imbalance between rim and non-rim lesions, we focus on the minority class rather than the majority class and we show its inclusion nearly doubles the required training time. We seek to make use of ambiguous, noisy real rim lesions by training the generator only the unambiguous rim lesion minority class. We use the projection into the learned latent space from unambiguous (or noiseless) rim lesions to recover denoised lesions. Other works related to this effort include modeling label noise as a latent space shift~\cite{Wehua}, a technique applied to correct classifications rather than augment data. Also related is estimation of the noise transition matrix~\cite{Hbae} to calibrate classifiers trained on noisy labels, which requires some understanding of the noise distribution. Perhaps most relevant is the use of GAN inversion for under-sampled MRI reconstruction~\cite{Vkel}, which deals with the presence of instrument noise rather than mitigation of more subjective label noise.", 2505.22757v1,Pre-Training Curriculum for Multi-Token Prediction in Language Models,"Ansar Aynetdinov, Alan Akbik","Multi-token prediction (MTP) is a recently proposed pre-training objective for language models. Rather than predicting only the next token (NTP), MTP predicts the next $k$ tokens at each prediction step, using multiple prediction heads. MTP has shown promise in improving downstream performance, inference speed, and training efficiency, particularly for large models. However, prior work has shown that smaller language models (SLMs) struggle with the MTP objective. To address this, we propose a curriculum learning strategy for MTP training, exploring two variants: a forward curriculum, which gradually increases the complexity of the pre-training objective from NTP to MTP, and a reverse curriculum, which does the opposite. Our experiments show that the forward curriculum enables SLMs to better leverage the MTP objective during pre-training, improving downstream NTP performance and generative output quality, while retaining the benefits of self-speculative decoding. The reverse curriculum achieves stronger NTP performance and output quality, but fails to provide any self-speculative decoding benefits.","cs.CL, cs.AI",2025-05-28T18:19:18+00:00,2025-05-28T18:19:18+00:00,http://arxiv.org/abs/2505.22757v1,http://arxiv.org/abs/2505.22757v1,2025-05-28 18:19:18+00:00,"\noindent \textbf{Curriculum learning.} After \citet{bengio2009curriculum} first proposed to apply a curriculum learning strategy in the context of machine learning, it has been successfully applied on a number of tasks in various machine learning domains including natural language processing, computer vision and speech recognition \cite{cl_survey}. In the context of language modeling, they have been shown to provide benefits both when pre-training encoder-only models \cite{cl_nlu, cl_bert, bert_lrc}, as well as instruction-tuning large decoder-only models \cite{orca, curr_instr}. The use of curriculum learning approaches was not reported in pre-training any publicly available decoder-only foundation models trained on vast amounts of text data, although recently \citet{feng2024} showed that using a two-stage curriculum based on text quality can lead better training outcomes. Meanwhile curriculum learning approaches have been very popular in data-constrained pre-training setups \cite{babylm_2023, babylm_2024}. While the curricula that focus on ordering the data based on various difficulty metrics were not found to be consistently better than non-curriculum baselines, an approach by \citet{less_is_more} that involves a curriculum for pre-training objectives was able to reliably outperform non-curriculum baselines in a data-constrained setup. \noindent \textbf{Multi-token prediction.} ProphetNet \cite{prophetnet} was the first large-scale transformer-based language model that was able to predict multiple n-grams in one prediction step. However, their model relies on n-stream self-attention mechanism that involves more computational overhead compared to regular transformers. \citet{future_lens} showed that hidden states of next-token prediction models are able to encode more than a single token ahead by probing pre-trained transformers, and that it's possible to predict those to a certain extent. \citet{gloeckle2024mtp} improved upon the previous work by proposing slight architectural tweaks, such as using full transformer layers as language modeling heads, to account for the multi-token prediction task that resulted in a more computationally efficient, compute-matched with NTP models, and effective method for multi-token prediction. \noindent \textbf{Self-speculative decoding.} \citet{blockwise_parallel_decoding} were the first to suggest a speculative decoding scheme for faster inference. Since then, a number self-speculative decoding methods were introduced. Some of these methods rely on the early-exit mechanism \cite{layerskip, kangaroo}, others on skipping intermediate layers \cite{draft_verify, swift}, and some on architectural transformations \cite{koala}. Medusa \cite{medusa} has gained the most prominence due to its simplicity and ability to relatively easily and cost-efficiently enable self-speculative decoding for LLMs that were pre-trained using the regular NTP objective.","\noindent \textbf{Curriculum learning.} After \citet{bengio2009curriculum} first proposed to apply a curriculum learning strategy in the context of machine learning, it has been successfully applied on a number of tasks in various machine learning domains including natural language processing, computer vision and speech recognition \cite{cl_survey}. In the context of language modeling, they have been shown to provide benefits both when pre-training encoder-only models \cite{cl_nlu, cl_bert, bert_lrc}, as well as instruction-tuning large decoder-only models \cite{orca, curr_instr}. The use of curriculum learning approaches was not reported in pre-training any publicly available decoder-only foundation models trained on vast amounts of text data, although recently \citet{feng2024} showed that using a two-stage curriculum based on text quality can lead better training outcomes. Meanwhile curriculum learning approaches have been very popular in data-constrained pre-training setups \cite{babylm_2023, babylm_2024}. While the curricula that focus on ordering the data based on various difficulty metrics were not found to be consistently better than non-curriculum baselines, an approach by \citet{less_is_more} that involves a curriculum for pre-training objectives was able to reliably outperform non-curriculum baselines in a data-constrained setup. \noindent \textbf{Multi-token prediction.} ProphetNet \cite{prophetnet} was the first large-scale transformer-based language model that was able to predict multiple n-grams in one prediction step. However, their model relies on n-stream self-attention mechanism that involves more computational overhead compared to regular transformers. \citet{future_lens} showed that hidden states of next-token prediction models are able to encode more than a single token ahead by probing pre-trained transformers, and that it's possible to predict those to a certain extent. \citet{gloeckle2024mtp} improved upon the previous work by proposing slight architectural tweaks, such as using full transformer layers as language modeling heads, to account for the multi-token prediction task that resulted in a more computationally efficient, compute-matched with NTP models, and effective method for multi-token prediction. \noindent \textbf{Self-speculative decoding.} \citet{blockwise_parallel_decoding} were the first to suggest a speculative decoding scheme for faster inference. Since then, a number self-speculative decoding methods were introduced. Some of these methods rely on the early-exit mechanism \cite{layerskip, kangaroo}, others on skipping intermediate layers \cite{draft_verify, swift}, and some on architectural transformations \cite{koala}. Medusa \cite{medusa} has gained the most prominence due to its simplicity and ability to relatively easily and cost-efficiently enable self-speculative decoding for LLMs that were pre-trained using the regular NTP objective.","Curriculum learning. After Bengio et al. (2009) first proposed to apply a curriculum learning strat-egy in the context of machine learning, it has been successfully applied on a number of tasks in var- ious machine learning domains including natural language processing, computer vision and speech recognition (Soviany et al., 2022). In the context of language modeling, they have been shown to pro- vide benefits both when pre-training encoder-only models (Xu et al., 2020; Nagatsuka et al., 2021; Ranaldi et al., 2023), as well as instruction-tuning large decoder-only models (Mukherjee et al., 2023; Lee et al., 2024). The use of curriculum learning approaches was not reported in pre-training any publicly available decoder-only foundation models trained on vast amounts of text data, although recently Feng et al. (2024) showed that using a two-stage curriculum based on text quality can lead better training out- comes. Meanwhile curriculum learning approaches have been very popular in data-constrained pre- training setups (Warstadt et al., 2023; Hu et al., 2024). While the curricula that focus on order- ing the data based on various difficulty metrics were not found to be consistently better than non- curriculum baselines, an approach by Salhan et al. (2024) that involves a curriculum for pre-training objectives was able to reliably outperform non- curriculum baselines in a data-constrained setup. Multi-token prediction. ProphetNet (Qi et al., 2020) was the first large-scale transformer-based language model that was able to predict multiple n- grams in one prediction step. However, their model relies on n-stream self-attention mechanism that involves more computational overhead compared to regular transformers. Pal et al. (2023) showed that hidden states of next-token prediction models are able to encode more than a single token ahead by probing pre- trained transformers, and that it’s possible to pre- dict those to a certain extent. Gloeckle et al. (2024) improved upon the previ- ous work by proposing slight architectural tweaks, such as using full transformer layers as language modeling heads, to account for the multi-token pre- diction task that resulted in a more computationally efficient, compute-matched with NTP models, and effective method for multi-token prediction. Self-speculative decoding. Stern et al. (2018) were the first to suggest a speculative decoding scheme for faster inference. Since then, a number self-speculative decoding methods were introduced. Some of these methods rely on the early-exit mech- anism (Elhoushi et al., 2024; Liu et al., 2024b), oth- 8 ers on skipping intermediate layers (Zhang et al., 2024a; Xia et al., 2024), and some on architectural transformations (Zhang et al., 2024b). Medusa (Cai et al., 2024) has gained the most prominence due to its simplicity and ability to relatively easily and cost-efficiently enable self-speculative decoding for LLMs that were pre-trained using the regular NTP objective." 2506.02853v1,"Learning Pyramid-structured Long-range Dependencies for 3D Human Pose Estimation","Mingjie Wei, Xuemei Xie, Yutong Zhong, Guangming Shi","Action coordination in human structure is indispensable for the spatial constraints of 2D joints to recover 3D pose. Usually, action coordination is represented as a long-range dependence among body parts. However, there are two main challenges in modeling long-range dependencies. First, joints should not only be constrained by other individual joints but also be modulated by the body parts. Second, existing methods make networks deeper to learn dependencies between non-linked parts. They introduce uncorrelated noise and increase the model size. In this paper, we utilize a pyramid structure to better learn potential long-range dependencies. It can capture the correlation across joints and groups, which complements the context of the human sub-structure. In an effective cross-scale way, it captures the pyramid-structured long-range dependence. Specifically, we propose a novel Pyramid Graph Attention (PGA) module to capture long-range cross-scale dependencies. It concatenates information from various scales into a compact sequence, and then computes the correlation between scales in parallel. Combining PGA with graph convolution modules, we develop a Pyramid Graph Transformer (PGFormer) for 3D human pose estimation, which is a lightweight multi-scale transformer architecture. It encapsulates human sub-structures into self-attention by pooling. Extensive experiments show that our approach achieves lower error and smaller model size than state-of-the-art methods on Human3.6M and MPI-INF-3DHP datasets. The code is available at https://github.com/MingjieWe/PGFormer.",cs.CV,2025-06-03T13:21:37+00:00,2025-06-03T13:21:37+00:00,http://arxiv.org/abs/2506.02853v1,http://arxiv.org/abs/2506.02853v1,2025-06-03 13:21:37+00:00,"\label{sec:relatedwork} %------------------------------------------------------------------------- \subsection{3D Human pose estimation} {The inference of human body coordinates in 3D space from a single image was first proposed by Lee and Chen \cite{lee1985determination}. Recently, state-of-the-art approaches have employed deep neural networks. Some methods use end-to-end regression \cite{mehta2017monocular, pavlakos2017coarse} to predict 3D coordinates or heatmaps directly from a single image. For instance, Pavlakos et al. \cite{pavlakos2017coarse} proposed a coarse-to-fine network that predicts depth heatmaps using a convolutional neural network. However, these methods struggle with the mapping from a 2D image to a 3D human body.}\par {With advancements in 2D HPE, researchers decoupled the problem of 3D HPE and addressed it through 2D-to-3D lifting. \cite{cai2019exploiting,martinez2017simple,zhao2019semantic,zou2021modulated,zhao2022graformer,ZhongTMM2024,WangTMM2024,tang20233d,li2022mhformer, liu2023posynda}. This approach is capable of exploring spatial \cite{cai2019exploiting,martinez2017simple,zhao2019semantic,zou2021modulated,zhao2022graformer} and temporal \cite{ZhongTMM2024,WangTMM2024,tang20233d,li2022mhformer, chen2023hdformer} information to achieve excellent performance, Consequently, we adopt this two-stage approach as well. To model human structure and exploit the relations, some works \cite{cai2019exploiting,zhao2019semantic,hu2021conditional,ci2019optimizing,liu2020comprehensive} are based on GCN network architecture. For example, Zhao et al. \cite{zhao2019semantic} use semmantic graph convolution and non-local modules \cite{wang2018non} to learn spatial constrains. However, local graph convolution aggregates information from adjacent keypoints but lacks the establishment of long-range coordination. The method we proposed builds upon local information and delves into the modeling of long-range dependencies.}\par In addition, some distribution-based methods \cite{gong2023diffpose, holmquist2023diffpose, liu2023posynda} propose to learn pose distribution. {For instance, a reliable 3D pose is estimated by using diffusion model \cite{gong2023diffpose}. But they also require a conditional reverse diffusion step by modeling the spatial context. The proposed hierarchical long-range dependencies are also helpful to capture spatial priors and model sub-structures at different levels in the diffusion process.} \par %------------------------------------------------------------------------- \subsection{Long-range dependence} Long-range dependence refers to the dependencies among non-adjacent nodes in 3D HPE \cite{zou2021modulated}. In recent years, some methods \cite{fang2018learning, he2021db,zeng2021learning,zhao2022graformer} have recognized this widespread dependence in the structure of human body. It is often used to model human coordination in different actions. Therefore, taking this into consideration yields better results in some complex actions.\par {Fang et al. \cite{fang2018learning} propose to learn the symmetry and coordination of specific joint pairs via hand-craft connection. Zou and Tang \cite{zou2021modulated} propose to learn the motion patterns beyond natural connection via aggregating all high-order nodes. However, this manual design and reliance on graph convolution limits the efficiency of learning long-range constraints. } Nowadays, some of the latest methods \cite{zhao2022graformer,zhang2023learning,li2022exploiting,li2022mhformer,zhang2022mixste} design the network architecture based on attention \cite{vaswani2017attention}. For example, Zhao et al. \cite{zhao2022graformer} design the transformer architecture that combines graph convolution and attention. Li et al. \cite{li2022exploiting} use transformer to exploit long-range dependence. {However, calculation process of self-attention ignores the rich structural information of human body. Previous work \cite{zhao2022graformer} improves long-range dependencies learning by replacing MLP with convolution, and does not improve the fundamental problem of the lack of structural information in attention. So, our approach further considers improving the self-attention mechanism to learn efficient feature representations of long distance dependencies, rather than noise data due to pure coordinate values.} %------------------------------------------------------------------------- \subsection{Hierarchical human structure} {The hierarchical human structure is a critical concept in the field of human pose estimation. It involves modeling the human body as a series of interconnected parts with different levels of hierarchy. This representation reflects the natural organization of the human body and its joints, allowing for more accurate and realistic predictions of poses. Grouping \cite{zhou2019hemlets,zeng2020srnet,xue2022boosting,wu2022hpgcn} or pooling\cite{xu2021graph, hua2022unet, zhang2023learning} are used to achieve hierarchical representation, such as the proposed pyramid structure. Different from image tasks \cite{wu2022p2t,PVT}, in 3D HPE, the pyramid structure provides graph-like hierarchical information about the substructure of the human body.}\par {These methods\cite{zhou2019hemlets,zeng2020srnet,xue2022boosting,wu2022hpgcn} demonstrate the importance of part-level analysis in 3D HPE. For example, Zhou et al. \cite{zhou2019hemlets} propose to learn part-centric heatmaps and Wu et al.\cite{wu2022hpgcn} represented human structure using hierarchical poselets. However, they do not adopt a explicit pooling approach to directly acquire part-level information. The proposed pyramid structure facilitates adequate information exchange across multiple scales. And it is an intuitive and efficient method to extract human substructure feature by pooling, which is explainable. Similarly, Xu et al. \cite{xu2021graph} construct a Graph Hourglass network by pooling on the hierarchical representation of the human skeleton. The proposed pyramid retains the original scale information, and achieves cross-scale calculation, which is conducive to the utilization of multi-scale information. Additionally, Zhang et al. \cite{zhang2023learning} introduce a parallel framework to compute semantic relations between different scales for 3D human pose estimation, but the architecture is redundant. We propose to integrate multi-scale concepts into self-attention, which can learn human substructure features in parallel and calculate correlations using a small number of parameters. In sum, we implement pooling in the self-attention to efficiently learn hierarchical human structure information.} \par","%------------------------------------------------------------------------- \subsection{3D Human pose estimation} {The inference of human body coordinates in 3D space from a single image was first proposed by Lee and Chen \cite{lee1985determination}. Recently, state-of-the-art approaches have employed deep neural networks. Some methods use end-to-end regression \cite{mehta2017monocular, pavlakos2017coarse} to predict 3D coordinates or heatmaps directly from a single image. For instance, Pavlakos et al. \cite{pavlakos2017coarse} proposed a coarse-to-fine network that predicts depth heatmaps using a convolutional neural network. However, these methods struggle with the mapping from a 2D image to a 3D human body.}\par {With advancements in 2D HPE, researchers decoupled the problem of 3D HPE and addressed it through 2D-to-3D lifting. \cite{cai2019exploiting,martinez2017simple,zhao2019semantic,zou2021modulated,zhao2022graformer,ZhongTMM2024,WangTMM2024,tang20233d,li2022mhformer, liu2023posynda}. This approach is capable of exploring spatial \cite{cai2019exploiting,martinez2017simple,zhao2019semantic,zou2021modulated,zhao2022graformer} and temporal \cite{ZhongTMM2024,WangTMM2024,tang20233d,li2022mhformer, chen2023hdformer} information to achieve excellent performance, Consequently, we adopt this two-stage approach as well. To model human structure and exploit the relations, some works \cite{cai2019exploiting,zhao2019semantic,hu2021conditional,ci2019optimizing,liu2020comprehensive} are based on GCN network architecture. For example, Zhao et al. \cite{zhao2019semantic} use semmantic graph convolution and non-local modules \cite{wang2018non} to learn spatial constrains. However, local graph convolution aggregates information from adjacent keypoints but lacks the establishment of long-range coordination. The method we proposed builds upon local information and delves into the modeling of long-range dependencies.}\par In addition, some distribution-based methods \cite{gong2023diffpose, holmquist2023diffpose, liu2023posynda} propose to learn pose distribution. {For instance, a reliable 3D pose is estimated by using diffusion model \cite{gong2023diffpose}. But they also require a conditional reverse diffusion step by modeling the spatial context. The proposed hierarchical long-range dependencies are also helpful to capture spatial priors and model sub-structures at different levels in the diffusion process.} \par %------------------------------------------------------------------------- \subsection{Long-range dependence} Long-range dependence refers to the dependencies among non-adjacent nodes in 3D HPE \cite{zou2021modulated}. In recent years, some methods \cite{fang2018learning, he2021db,zeng2021learning,zhao2022graformer} have recognized this widespread dependence in the structure of human body. It is often used to model human coordination in different actions. Therefore, taking this into consideration yields better results in some complex actions.\par {Fang et al. \cite{fang2018learning} propose to learn the symmetry and coordination of specific joint pairs via hand-craft connection. Zou and Tang \cite{zou2021modulated} propose to learn the motion patterns beyond natural connection via aggregating all high-order nodes. However, this manual design and reliance on graph convolution limits the efficiency of learning long-range constraints. } Nowadays, some of the latest methods \cite{zhao2022graformer,zhang2023learning,li2022exploiting,li2022mhformer,zhang2022mixste} design the network architecture based on attention \cite{vaswani2017attention}. For example, Zhao et al. \cite{zhao2022graformer} design the transformer architecture that combines graph convolution and attention. Li et al. \cite{li2022exploiting} use transformer to exploit long-range dependence. {However, calculation process of self-attention ignores the rich structural information of human body. Previous work \cite{zhao2022graformer} improves long-range dependencies learning by replacing MLP with convolution, and does not improve the fundamental problem of the lack of structural information in attention. So, our approach further considers improving the self-attention mechanism to learn efficient feature representations of long distance dependencies, rather than noise data due to pure coordinate values.} %------------------------------------------------------------------------- \subsection{Hierarchical human structure} {The hierarchical human structure is a critical concept in the field of human pose estimation. It involves modeling the human body as a series of interconnected parts with different levels of hierarchy. This representation reflects the natural organization of the human body and its joints, allowing for more accurate and realistic predictions of poses. Grouping \cite{zhou2019hemlets,zeng2020srnet,xue2022boosting,wu2022hpgcn} or pooling\cite{xu2021graph, hua2022unet, zhang2023learning} are used to achieve hierarchical representation, such as the proposed pyramid structure. Different from image tasks \cite{wu2022p2t,PVT}, in 3D HPE, the pyramid structure provides graph-like hierarchical information about the substructure of the human body.}\par {These methods\cite{zhou2019hemlets,zeng2020srnet,xue2022boosting,wu2022hpgcn} demonstrate the importance of part-level analysis in 3D HPE. For example, Zhou et al. \cite{zhou2019hemlets} propose to learn part-centric heatmaps and Wu et al.\cite{wu2022hpgcn} represented human structure using hierarchical poselets. However, they do not adopt a explicit pooling approach to directly acquire part-level information. The proposed pyramid structure facilitates adequate information exchange across multiple scales. And it is an intuitive and efficient method to extract human substructure feature by pooling, which is explainable. Similarly, Xu et al. \cite{xu2021graph} construct a Graph Hourglass network by pooling on the hierarchical representation of the human skeleton. The proposed pyramid retains the original scale information, and achieves cross-scale calculation, which is conducive to the utilization of multi-scale information. Additionally, Zhang et al. \cite{zhang2023learning} introduce a parallel framework to compute semantic relations between different scales for 3D human pose estimation, but the architecture is redundant. We propose to integrate multi-scale concepts into self-attention, which can learn human substructure features in parallel and calculate correlations using a small number of parameters. In sum, we implement pooling in the self-attention to efficiently learn hierarchical human structure information.} \par", 2506.02547v1,Probabilistic Online Event Downsampling,"Andreu Girbau-Xalabarder, Jun Nagata, Shinichi Sumiyoshi","Event cameras capture scene changes asynchronously on a per-pixel basis, enabling extremely high temporal resolution. However, this advantage comes at the cost of high bandwidth, memory, and computational demands. To address this, prior work has explored event downsampling, but most approaches rely on fixed heuristics or threshold-based strategies, limiting their adaptability. Instead, we propose a probabilistic framework, POLED, that models event importance through an event-importance probability density function (ePDF), which can be arbitrarily defined and adapted to different applications. Our approach operates in a purely online setting, estimating event importance on-the-fly from raw event streams, enabling scene-specific adaptation. Additionally, we introduce zero-shot event downsampling, where downsampled events must remain usable for models trained on the original event stream, without task-specific adaptation. We design a contour-preserving ePDF that prioritizes structurally important events and evaluate our method across four datasets and tasks--object classification, image interpolation, surface normal estimation, and object detection--demonstrating that intelligent sampling is crucial for maintaining performance under event-budget constraints.","cs.CV, cs.ET",2025-06-03T07:33:11+00:00,2025-06-03T07:33:11+00:00,http://arxiv.org/abs/2506.02547v1,http://arxiv.org/abs/2506.02547v1,2025-06-03 07:33:11+00:00,"Modern event cameras produce an overwhelming number of events, posing significant challenges in bandwidth, computation, and memory. Efficient event stream management is therefore crucial, with downsampling emerging as a key strategy to reduce computational and bandwidth demands. Early works explored event downsampling in spatial and temporal dimensions by scaling event coordinates and timestamps, adapting the sampling strategy to the dataset \cite{cohen2018spatial}. Later approaches refined this by integrating events over space and time using a counting strategy with refractory periods \cite{ghoshevdownsampling}. Other methods take inspiration from biological neurons, reducing events based on the activation of multiple sensory unit layers \cite{barrios2018less}. Spiking Neural Networks (SNNs) have also been explored for downsampling, leveraging neuromorphic processing to optimize event retention \cite{gupta2020implementing,Gruel_2023_WACV,ghosh2023insect,rizzo2023neuromorphic}. Adaptive compression strategies, such as Huffman encoding that dynamically adjusts to bandwidth constraints, have also been proposed \cite{bisulco2020near}, along with pre-processing techniques that use non-uniform spatial sampling via 3D grids \cite{bi2019graph}. Beyond computational efficiency, research has also examined how downsampling affects human perception of event streams. Studies have compared basic temporal and spatial filtering with more advanced SNN-based approaches to assess how sparsification impacts interpretability \cite{gruel2023frugal,Gruel_2023_WACV}. Despite these advances, most existing methods rely on fixed heuristics or task-specific optimizations, limiting their adaptability across different applications. In contrast to existing approaches, we formulate event downsampling as an online stochastic process, where events are sampled based on their likelihood of belonging to an estimated event distribution. This enables adaptive selection based on event importance and scene statistics, rather than relying on fixed heuristics or thresholds. The closest work to ours is \cite{araghi2024pushing}, which uniformly samples events each epoch to train a CNN, studying the effects of event reduction and its interaction with CNN training parameters. However, their approach does not consider event importance or zero-shot applicability. Instead, we focus on the downsampling technique itself, evaluating its performance across independent tasks and models trained on the original event stream. Additionally, we investigate the effects of retraining with downsampled events, reaching similar conclusions to \cite{araghi2024pushing} while emphasizing the role of intelligent sampling. We propose importance-based downsampling using an event-importance probability density function (ePDF), which can be arbitrarily defined and adapted to different settings. To make this framework broadly applicable, we introduce a generic formulation and processing pipeline, namely \alg, capable of handling any valid ePDF. In this work, we present a Poisson-based ePDF designed to prioritize contour preservation, under the premise that contour-related events are more relevant for solving diverse tasks. Furthermore, we approach event downsampling from a purely online perspective, making decisions based only on past and present information, simulating a real-time scenario where future events are unknown. Finally, prior work has largely focused on simple classification datasets or introduced metrics favoring classification-based evaluation. To provide a more comprehensive assessment, we evaluate our method on four challenging datasets, covering classification, frame interpolation for super-slow-motion video generation, surface normal estimation, and object detection in an automotive setting.","Modern event cameras produce an overwhelming number of events, posing significant challenges in bandwidth, computation, and memory. Efficient event stream management is therefore crucial, with downsampling emerging as a key strategy to reduce computational and bandwidth demands. Early works explored event downsampling in spatial and temporal dimensions by scaling event coordinates and timestamps, adapting the sampling strategy to the dataset \cite{cohen2018spatial}. Later approaches refined this by integrating events over space and time using a counting strategy with refractory periods \cite{ghoshevdownsampling}. Other methods take inspiration from biological neurons, reducing events based on the activation of multiple sensory unit layers \cite{barrios2018less}. Spiking Neural Networks (SNNs) have also been explored for downsampling, leveraging neuromorphic processing to optimize event retention \cite{gupta2020implementing,Gruel_2023_WACV,ghosh2023insect,rizzo2023neuromorphic}. Adaptive compression strategies, such as Huffman encoding that dynamically adjusts to bandwidth constraints, have also been proposed \cite{bisulco2020near}, along with pre-processing techniques that use non-uniform spatial sampling via 3D grids \cite{bi2019graph}. Beyond computational efficiency, research has also examined how downsampling affects human perception of event streams. Studies have compared basic temporal and spatial filtering with more advanced SNN-based approaches to assess how sparsification impacts interpretability \cite{gruel2023frugal,Gruel_2023_WACV}. Despite these advances, most existing methods rely on fixed heuristics or task-specific optimizations, limiting their adaptability across different applications. In contrast to existing approaches, we formulate event downsampling as an online stochastic process, where events are sampled based on their likelihood of belonging to an estimated event distribution. This enables adaptive selection based on event importance and scene statistics, rather than relying on fixed heuristics or thresholds. The closest work to ours is \cite{araghi2024pushing}, which uniformly samples events each epoch to train a CNN, studying the effects of event reduction and its interaction with CNN training parameters. However, their approach does not consider event importance or zero-shot applicability. Instead, we focus on the downsampling technique itself, evaluating its performance across independent tasks and models trained on the original event stream. Additionally, we investigate the effects of retraining with downsampled events, reaching similar conclusions to \cite{araghi2024pushing} while emphasizing the role of intelligent sampling. We propose importance-based downsampling using an event-importance probability density function (ePDF), which can be arbitrarily defined and adapted to different settings. To make this framework broadly applicable, we introduce a generic formulation and processing pipeline, namely \alg, capable of handling any valid ePDF. In this work, we present a Poisson-based ePDF designed to prioritize contour preservation, under the premise that contour-related events are more relevant for solving diverse tasks. Furthermore, we approach event downsampling from a purely online perspective, making decisions based only on past and present information, simulating a real-time scenario where future events are unknown. Finally, prior work has largely focused on simple classification datasets or introduced metrics favoring classification-based evaluation. To provide a more comprehensive assessment, we evaluate our method on four challenging datasets, covering classification, frame interpolation for super-slow-motion video generation, surface normal estimation, and object detection in an automotive setting.","Modern event cameras produce an overwhelming number of events, posing significant challenges in bandwidth, com-putation, and memory. Efficient event stream management is therefore crucial, with downsampling emerging as a key strategy to reduce computational and bandwidth demands. Early works explored event downsampling in spatial and temporal dimensions by scaling event coordinates and timestamps, adapting the sampling strategy to the dataset [5]. Later approaches refined this by integrating events over space and time using a counting strategy with refractory pe- riods [9]. Other methods take inspiration from biological neurons, reducing events based on the activation of multiple sensory unit layers [2]. Spiking Neural Networks (SNNs) have also been explored for downsampling, leveraging neuromorphic processing to optimize event retention [10, 12, 13, 16]. Adaptive compression strategies, such as Huffman encod- ing that dynamically adjusts to bandwidth constraints, have also been proposed [4], along with pre-processing tech- niques that use non-uniform spatial sampling via 3D grids [3]. Beyond computational efficiency, research has also ex- amined how downsampling affects human perception of event streams. Studies have compared basic temporal and spatial filtering with more advanced SNN-based ap- proaches to assess how sparsification impacts interpretabil- ity [11, 12]. Despite these advances, most existing methods rely on fixed heuristics or task-specific optimizations, limiting their adaptability across different applications. In contrast to existing approaches, we formulate event downsampling as an online stochastic process, where events are sampled based on their likelihood of belonging to an es- timated event distribution. This enables adaptive selection based on event importance and scene statistics, rather than relying on fixed heuristics or thresholds. The closest work to ours is [1], which uniformly sam- ples events each epoch to train a CNN, studying the effects of event reduction and its interaction with CNN training pa- rameters. However, their approach does not consider event importance or zero-shot applicability. Instead, we focus on the downsampling technique itself, evaluating its perfor- mance across independent tasks and models trained on the original event stream. Additionally, we investigate the ef- fects of retraining with downsampled events, reaching sim- ilar conclusions to [1] while emphasizing the role of intelli- gent sampling. We propose importance-based downsampling using an event-importance probability density function (ePDF), which can be arbitrarily defined and adapted to different set- tings. To make this framework broadly applicable, we intro- duce a generic formulation and processing pipeline, namely POLED, capable of handling any valid ePDF. In this work, we present a Poisson-based ePDF designed to prioritize contour preservation, under the premise that contour-related 2 events are more relevant for solving diverse tasks. Further- more, we approach event downsampling from a purely on- line perspective, making decisions based only on past and present information, simulating a real-time scenario where future events are unknown. Finally, prior work has largely focused on sim- ple classification datasets or introduced metrics favoring classification-based evaluation. To provide a more compre- hensive assessment, we evaluate our method on four chal- lenging datasets, covering classification, frame interpola- tion for super-slow-motion video generation, surface nor- mal estimation, and object detection in an automotive set- ting." 2506.01071v1,Aligned Contrastive Loss for Long-Tailed Recognition,"Jiali Ma, Jiequan Cui, Maeno Kazuki, Lakshmi Subramanian, Karlekar Jayashree, Sugiri Pranata, Hanwang Zhang","In this paper, we propose an Aligned Contrastive Learning (ACL) algorithm to address the long-tailed recognition problem. Our findings indicate that while multi-view training boosts the performance, contrastive learning does not consistently enhance model generalization as the number of views increases. Through theoretical gradient analysis of supervised contrastive learning (SCL), we identify gradient conflicts, and imbalanced attraction and repulsion gradients between positive and negative pairs as the underlying issues. Our ACL algorithm is designed to eliminate these problems and demonstrates strong performance across multiple benchmarks. We validate the effectiveness of ACL through experiments on long-tailed CIFAR, ImageNet, Places, and iNaturalist datasets. Results show that ACL achieves new state-of-the-art performance.",cs.CV,2025-06-01T16:19:30+00:00,2025-06-01T16:19:30+00:00,http://arxiv.org/abs/2506.01071v1,http://arxiv.org/abs/2506.01071v1,2025-06-01 16:19:30+00:00,"\label{sec:related} \subsection{Long-tailed recognition} In long-tailed recognition, the class imbalance is traditionally addressed through re-balancing techniques. These include re-sampling, which over-samples minority classes or under-samples majority classes~\cite{buda2018systematic,byrd2019effect,drummond2003c4,pouyanfar2018dynamic,shen2016relay}, and re-weighting, which assigns inverse-frequency weights to classes in loss computation~\cite{cui2019class,huang2016learning,wang2017learning,khan2017cost, shu2019meta}. Both methods promote balanced classifier learning yet unexpectedly damage the representative ability of the deep features~\cite{kang2019decoupling, zhou2020bbn,zhang2023deep,nam2023decoupled}. Therefore, re-balancing strategies are usually used together with a 2-stage training paradigm. Kang \etal~\cite{kang2019decoupling} proposed a decoupled training strategy for long-tailed recognition, where representation learning and classification are trained separately. % decouples the representation learning and classification. They first train the network jointly using an instance-based sampler followed by classifier fine-tuning using a class-balanced sampler. BBN~\cite{zhou2020bbn} utilizes a bilateral branch network to dynamically balance features from instance-balanced and reversed sampling branches. An alternative approach to long-tailed recognition involves adjusting logit values based on logarithmic label frequencies. Balanced Softmax~\cite{ren2020balanced} is introduced to address bias in Softmax loss. Menon \etal~\cite{menon2020long} further introduces post-hoc logit adjustment and a logit-adjusted classification loss, shifting from empirical risk minimization to balanced error minimization. This technique has been extensively adopted as a complementary enhancement in various long-tailed algorithms ~\cite{cui2021parametric,cui2023generalized,zhu2022balanced,suh2023long,zhu2024generalized}. \subsection{Contrastive learning} Contrastive learning has gained widespread adoption in self-supervised learning to enhance representation robustness by contrasting positive and negative pairs with augmented views~\cite{he2020momentum, chen2020simple,grill2020bootstrap,chen2021exploring}. SCL~\cite{khosla2020supervised} extends it to the supervised setting by encouraging distinctions between samples at the class level. Recently, contrastive learning has become prevalent in long-tailed recognition~\cite{cui2023generalized, zhu2022balanced,suh2023long,du2024probabilistic,zhang2023deep, zhang2022fairness,hou2023subclass}. KCL~\cite{kang2020exploring} integrates balanced feature space and cross-entropy classification discriminability using K positives. TSC~\cite{li2022targeted} further aligns class features closer to target features on regular simplex vertices. Several works combine the merits of contrastive learning with logit adjustment techniques. PaCo~\cite{cui2021parametric} and GPaCo~\cite{cui2023generalized} seamlessly integrate these methods into a single loss, introducing parametric learnable class centers to expand contrastive pairs. BCL~\cite{zhu2022balanced} introduces class weight embeddings for comparison and adopts class averaging to balance the positive and negative pairs. GML~\cite{suh2023long} creates class-wise queues for contrast samples and conducts knowledge distillation based on the features of a pre-trained teacher model. In this work, we theoretically analyze the pairwise gradient of SCL and find that gradient conflict in positive pairs hinders effectiveness of the learning process in multi-view setting. Our proposed ACL eliminates the conflicting items to promote consistent attraction for all the positives. \subsection{Augmentation multiplicity} Multiple augmented views have emerged as a significant enhancement in contrastive learning frameworks to learn more robust and invariant representations. %~\cite{chen2020big,chen2021empirical} explore the usage of multiple positive pairs and demonstrate improved performance with additional augmented views. \jiequan{citations should not be the subject} The usage of multiple positive pairs is explored in works~\cite{chen2020big,chen2021empirical} and is proved to promote model performance with additional augmented views. %~\cite{SimCLR v2} extended the original two-view approach, demonstrating improved performance with additional augmented views. SwAV~\cite{caron2020unsupervised} introduces a multi-crop strategy, utilizing both global and local crops to enforce consistency across different image views. %~!\cite{moco v3} explored using multiple positive pairs, showing benefits over single-pair methods. %These approaches consistently demonstrate that incorporating multiple augmented views leads to more robust and invariant representations. Additionally, the work~\cite{fort2021drawing} shows that increasing the multiplicity of augmentations improves accuracy in conventional classification losses. Our work extends multi-view training to long-tailed recognition, proposing ACL to fully leverage the benefits of multiple views.","\subsection{Long-tailed recognition} In long-tailed recognition, the class imbalance is traditionally addressed through re-balancing techniques. These include re-sampling, which over-samples minority classes or under-samples majority classes~\cite{buda2018systematic,byrd2019effect,drummond2003c4,pouyanfar2018dynamic,shen2016relay}, and re-weighting, which assigns inverse-frequency weights to classes in loss computation~\cite{cui2019class,huang2016learning,wang2017learning,khan2017cost, shu2019meta}. Both methods promote balanced classifier learning yet unexpectedly damage the representative ability of the deep features~\cite{kang2019decoupling, zhou2020bbn,zhang2023deep,nam2023decoupled}. Therefore, re-balancing strategies are usually used together with a 2-stage training paradigm. Kang \etal~\cite{kang2019decoupling} proposed a decoupled training strategy for long-tailed recognition, where representation learning and classification are trained separately. % decouples the representation learning and classification. They first train the network jointly using an instance-based sampler followed by classifier fine-tuning using a class-balanced sampler. BBN~\cite{zhou2020bbn} utilizes a bilateral branch network to dynamically balance features from instance-balanced and reversed sampling branches. An alternative approach to long-tailed recognition involves adjusting logit values based on logarithmic label frequencies. Balanced Softmax~\cite{ren2020balanced} is introduced to address bias in Softmax loss. Menon \etal~\cite{menon2020long} further introduces post-hoc logit adjustment and a logit-adjusted classification loss, shifting from empirical risk minimization to balanced error minimization. This technique has been extensively adopted as a complementary enhancement in various long-tailed algorithms ~\cite{cui2021parametric,cui2023generalized,zhu2022balanced,suh2023long,zhu2024generalized}. \subsection{Contrastive learning} Contrastive learning has gained widespread adoption in self-supervised learning to enhance representation robustness by contrasting positive and negative pairs with augmented views~\cite{he2020momentum, chen2020simple,grill2020bootstrap,chen2021exploring}. SCL~\cite{khosla2020supervised} extends it to the supervised setting by encouraging distinctions between samples at the class level. Recently, contrastive learning has become prevalent in long-tailed recognition~\cite{cui2023generalized, zhu2022balanced,suh2023long,du2024probabilistic,zhang2023deep, zhang2022fairness,hou2023subclass}. KCL~\cite{kang2020exploring} integrates balanced feature space and cross-entropy classification discriminability using K positives. TSC~\cite{li2022targeted} further aligns class features closer to target features on regular simplex vertices. Several works combine the merits of contrastive learning with logit adjustment techniques. PaCo~\cite{cui2021parametric} and GPaCo~\cite{cui2023generalized} seamlessly integrate these methods into a single loss, introducing parametric learnable class centers to expand contrastive pairs. BCL~\cite{zhu2022balanced} introduces class weight embeddings for comparison and adopts class averaging to balance the positive and negative pairs. GML~\cite{suh2023long} creates class-wise queues for contrast samples and conducts knowledge distillation based on the features of a pre-trained teacher model. In this work, we theoretically analyze the pairwise gradient of SCL and find that gradient conflict in positive pairs hinders effectiveness of the learning process in multi-view setting. Our proposed ACL eliminates the conflicting items to promote consistent attraction for all the positives. \subsection{Augmentation multiplicity} Multiple augmented views have emerged as a significant enhancement in contrastive learning frameworks to learn more robust and invariant representations. %~\cite{chen2020big,chen2021empirical} explore the usage of multiple positive pairs and demonstrate improved performance with additional augmented views. \jiequan{citations should not be the subject} The usage of multiple positive pairs is explored in works~\cite{chen2020big,chen2021empirical} and is proved to promote model performance with additional augmented views. %~\cite{SimCLR v2} extended the original two-view approach, demonstrating improved performance with additional augmented views. SwAV~\cite{caron2020unsupervised} introduces a multi-crop strategy, utilizing both global and local crops to enforce consistency across different image views. %~!\cite{moco v3} explored using multiple positive pairs, showing benefits over single-pair methods. %These approaches consistently demonstrate that incorporating multiple augmented views leads to more robust and invariant representations. Additionally, the work~\cite{fort2021drawing} shows that increasing the multiplicity of augmentations improves accuracy in conventional classification losses. Our work extends multi-view training to long-tailed recognition, proposing ACL to fully leverage the benefits of multiple views.", 2506.01037v1,"Self-supervised ControlNet with Spatio-Temporal Mamba for Real-world Video Super-resolution","Shijun Shi, Jing Xu, Lijing Lu, Zhihang Li, Kai Hu","Existing diffusion-based video super-resolution (VSR) methods are susceptible to introducing complex degradations and noticeable artifacts into high-resolution videos due to their inherent randomness. In this paper, we propose a noise-robust real-world VSR framework by incorporating self-supervised learning and Mamba into pre-trained latent diffusion models. To ensure content consistency across adjacent frames, we enhance the diffusion model with a global spatio-temporal attention mechanism using the Video State-Space block with a 3D Selective Scan module, which reinforces coherence at an affordable computational cost. To further reduce artifacts in generated details, we introduce a self-supervised ControlNet that leverages HR features as guidance and employs contrastive learning to extract degradation-insensitive features from LR videos. Finally, a three-stage training strategy based on a mixture of HR-LR videos is proposed to stabilize VSR training. The proposed Self-supervised ControlNet with Spatio-Temporal Continuous Mamba based VSR algorithm achieves superior perceptual quality than state-of-the-arts on real-world VSR benchmark datasets, validating the effectiveness of the proposed model design and training strategies.","cs.CV, I.4.4, I.2.6",2025-06-01T14:36:25+00:00,2025-06-01T14:36:25+00:00,http://arxiv.org/abs/2506.01037v1,http://arxiv.org/abs/2506.01037v1,2025-06-01 14:36:25+00:00,"\label{sec:related_work} \subsection{Video Super-Resolution} The goal of VSR is to enhance a sequence of HR video frames from their degraded LR counterparts. Based on the paradigms, existing VSR algorithms \cite{cao2021video,chan2021basicvsr,chan2022basicvsr++,isobe2020video,isobe2020video2,isobe2020revisiting,jo2018deep,liang2024vrt,liang2022recurrent,wang2019edvr,xue2019video,liu2013bayesian,nah2019ntire,yi2019progressive} could be roughly classified into two categories: temporal sliding-window based VSR and recurrent framework based VSR. Temporal sliding-window based VSR \cite{jo2018deep,wang2019edvr,li2020mucan} utilize a fixed set of neighboring frames to super-resolve one or more target frames. However, the information accessible is constrained by the temporal window’s size. Consequently, these methods can only exploit the temporal details of a restricted subset of input video frames. To exploit temporal information from more frames, recurrent framework based VSR \cite{chan2021basicvsr,chan2022basicvsr++,liang2024vrt,liang2022recurrent} utilizes multiple LR frames as input and employs recurrent neural networks to simultaneously produce their corresponding SR results. However, most existing approaches \cite{cao2021video,chan2021basicvsr,chan2022basicvsr++,isobe2020video,isobe2020video2,isobe2020revisiting,jo2018deep,liang2024vrt,liang2022recurrent,wang2019edvr,xue2019video} assume a pre-defined degradation process \cite{liu2013bayesian,nah2019ntire,yi2019progressive}. In real-world scenes with more complicated degradations, these VSR methods may not perform well. Due to the lack of real-world paired data for training, \citet{realvsr} propose to collect LR-HR data pairs with iPhone cameras to better model real-world degradations. While the VSR model trained on such data can be effective to videos captured by similar mobile cameras, it is relatively labor-intensive and may not generalize well to videos collected by other devices. Recent studies have shifted towards employing diverse degradations for data augmentation during training, such as blur, downsampling, noise and video compression \cite{realbasicvsr,xie2023mitigating}. However, maintaining temporal consistency while generating photorealistic textures remains a challenge. \begin{figure*} \centering \includegraphics[width=1\linewidth]{img/pipeline.pdf} \vspace{-0.6cm} \caption{Overview of the proposed SCST framework for real-world VSR. SCST consists of several modules, including Spatial-Temporal Continuous Mamba (STCM) and Self-supervised ControlNet (MoCoCtrl). The STCM incorporates 3D-Mamba Block within its structure, which, with the addition of spatial-temporal continuous scan, ensures comprehensive 3D attention for both inter-frame and intra-frame modeling. The Self-supervised ControlNet adopts the MoCo architecture to employ contrastive learning between LR and HR features, aligning LR features to noise-free HR features, thus reducing the impact of degradation.} \label{fig:framework} \vspace{-0.5cm} \end{figure*} \subsection{State Space Models} Structured state space models (S4) \cite{S4}, as a promising framework in handling long-distance sequences, has attracted widespread research interest. A variety of S4-inspired models, that capture long-range dependencies in sequential data, achieve competitive performance on various tasks \cite{variant1,variant2,variant3,variant4,variant5}. The major reason behind this might be that S4's adherence to Linear Time Invariance (LTI), which guarantees consistent output for identical inputs regardless of their temporal application. Nevertheless, LTI systems come with some limitations, especially when it comes to handling dynamic changes over time. The constancy of the internal state transition matrix throughout the sequence constrains the model's adaptability to evolving content, thereby limiting its utility in contexts demanding content-driven reasoning. To address these constraints, Mamba \cite{mamba} is recently introduced as a state-space model that dynamically adjusts its parameters in response to the input sequence. This adaptive strategy enables Mamba to engage in context-dependent reasoning, significantly enhancing its effectiveness across various domains \cite{domain1,domain2,domain3}. However, the application of Mamba in video super resolution tasks remains unexplored. \subsection{Self-Supervised learning} Self-supervised learning (SSL), as an off-the-shelf representation techniques, has achieved excellent performance in various computer vision tasks \cite{self-supervised_task1,self-supervised_task2,self-supervised_task3,self-supervised_task4,self-supervised_task5,self-supervised_task6,self-supervised_task7,self-supervised_task8}. Recently, contrastive learning \cite{cl1,cl2,cl3,self-supervised_task1,self-supervised_task2} has emerged as one of the most prominent self-supervised methods, making significant progress in exploring image representations based on instance discrimination tasks, where an instance’s different views originating from the same instance are treated as positive examples for an anchor sample, while views from different instances serve as negative examples. The core idea is to promote proximity between positive examples and maximize the separation between negative examples within the latent space, thereby encouraging the model to capture meaningful relationships within the data.","\subsection{Video Super-Resolution} The goal of VSR is to enhance a sequence of HR video frames from their degraded LR counterparts. Based on the paradigms, existing VSR algorithms \cite{cao2021video,chan2021basicvsr,chan2022basicvsr++,isobe2020video,isobe2020video2,isobe2020revisiting,jo2018deep,liang2024vrt,liang2022recurrent,wang2019edvr,xue2019video,liu2013bayesian,nah2019ntire,yi2019progressive} could be roughly classified into two categories: temporal sliding-window based VSR and recurrent framework based VSR. Temporal sliding-window based VSR \cite{jo2018deep,wang2019edvr,li2020mucan} utilize a fixed set of neighboring frames to super-resolve one or more target frames. However, the information accessible is constrained by the temporal window’s size. Consequently, these methods can only exploit the temporal details of a restricted subset of input video frames. To exploit temporal information from more frames, recurrent framework based VSR \cite{chan2021basicvsr,chan2022basicvsr++,liang2024vrt,liang2022recurrent} utilizes multiple LR frames as input and employs recurrent neural networks to simultaneously produce their corresponding SR results. However, most existing approaches \cite{cao2021video,chan2021basicvsr,chan2022basicvsr++,isobe2020video,isobe2020video2,isobe2020revisiting,jo2018deep,liang2024vrt,liang2022recurrent,wang2019edvr,xue2019video} assume a pre-defined degradation process \cite{liu2013bayesian,nah2019ntire,yi2019progressive}. In real-world scenes with more complicated degradations, these VSR methods may not perform well. Due to the lack of real-world paired data for training, \citet{realvsr} propose to collect LR-HR data pairs with iPhone cameras to better model real-world degradations. While the VSR model trained on such data can be effective to videos captured by similar mobile cameras, it is relatively labor-intensive and may not generalize well to videos collected by other devices. Recent studies have shifted towards employing diverse degradations for data augmentation during training, such as blur, downsampling, noise and video compression \cite{realbasicvsr,xie2023mitigating}. However, maintaining temporal consistency while generating photorealistic textures remains a challenge. \subsection{State Space Models} Structured state space models (S4) \cite{S4}, as a promising framework in handling long-distance sequences, has attracted widespread research interest. A variety of S4-inspired models, that capture long-range dependencies in sequential data, achieve competitive performance on various tasks \cite{variant1,variant2,variant3,variant4,variant5}. The major reason behind this might be that S4's adherence to Linear Time Invariance (LTI), which guarantees consistent output for identical inputs regardless of their temporal application. Nevertheless, LTI systems come with some limitations, especially when it comes to handling dynamic changes over time. The constancy of the internal state transition matrix throughout the sequence constrains the model's adaptability to evolving content, thereby limiting its utility in contexts demanding content-driven reasoning. To address these constraints, Mamba \cite{mamba} is recently introduced as a state-space model that dynamically adjusts its parameters in response to the input sequence. This adaptive strategy enables Mamba to engage in context-dependent reasoning, significantly enhancing its effectiveness across various domains \cite{domain1,domain2,domain3}. However, the application of Mamba in video super resolution tasks remains unexplored. \subsection{Self-Supervised learning} Self-supervised learning (SSL), as an off-the-shelf representation techniques, has achieved excellent performance in various computer vision tasks \cite{self-supervised_task1,self-supervised_task2,self-supervised_task3,self-supervised_task4,self-supervised_task5,self-supervised_task6,self-supervised_task7,self-supervised_task8}. Recently, contrastive learning \cite{cl1,cl2,cl3,self-supervised_task1,self-supervised_task2} has emerged as one of the most prominent self-supervised methods, making significant progress in exploring image representations based on instance discrimination tasks, where an instance’s different views originating from the same instance are treated as positive examples for an anchor sample, while views from different instances serve as negative examples. The core idea is to promote proximity between positive examples and maximize the separation between negative examples within the latent space, thereby encouraging the model to capture meaningful relationships within the data.", 2506.00434v1,"Efficient 3D Brain Tumor Segmentation with Axial-Coronal-Sagittal Embedding","Tuan-Luc Huynh, Thanh-Danh Le, Tam V. Nguyen, Trung-Nghia Le, Minh-Triet Tran","In this paper, we address the crucial task of brain tumor segmentation in medical imaging and propose innovative approaches to enhance its performance. The current state-of-the-art nnU-Net has shown promising results but suffers from extensive training requirements and underutilization of pre-trained weights. To overcome these limitations, we integrate Axial-Coronal-Sagittal convolutions and pre-trained weights from ImageNet into the nnU-Net framework, resulting in reduced training epochs, reduced trainable parameters, and improved efficiency. Two strategies for transferring 2D pre-trained weights to the 3D domain are presented, ensuring the preservation of learned relationships and feature representations critical for effective information propagation. Furthermore, we explore a joint classification and segmentation model that leverages pre-trained encoders from a brain glioma grade classification proxy task, leading to enhanced segmentation performance, especially for challenging tumor labels. Experimental results demonstrate that our proposed methods in the fast training settings achieve comparable or even outperform the ensemble of cross-validation models, a common practice in the brain tumor segmentation literature.","eess.IV, cs.CV",2025-05-31T07:30:37+00:00,2025-05-31T07:30:37+00:00,http://arxiv.org/abs/2506.00434v1,http://arxiv.org/abs/2506.00434v1,2025-05-31 07:30:37+00:00,"Since its inception, U-Net~\cite{ronneberger_unet_miccai_2015} has become the \textit{de facto} standard for medical image segmentation with its iconic encoder and decoder architecture. Recent BraTS challenge solutions~\cite{menze_tmi_2015,bakas_arxiv_2019,baid_arxiv_2021} have prominently relied on CNNs using the U-shaped encoder-decoder design~\cite{myronenko_miccai_2019,jiang_cascaded_unet_miccai_2020,isensee_nnunet_miccai_2021,luu_miccai_2022,zeineldin_miccai_2022}. Notably, Isensee \emph{et al.} secured victory in the BraTS2020 challenge with the nnU-Net framework~\cite{isensee_nnunet_nature_2021}, which features an automated configuration mechanism. The winning solution of the subsequent year~\cite{luu_miccai_2022} further built upon nnU-Net's achievements. Despite transformer-based methods like TransBTS~\cite{wang_transbts_miccai_2021} and SwinUNetR~\cite{swinunetr} showing comparable performance, CNNs continue to dominate in medical image segmentation. In the existing literature, a common strategy involves laborious cross-validation, assembly, and training of multiple models from scratch. To address the efficiency challenge, transfer learning has garnered attention. One notable attempt in this direction is Med3D~\cite{chen_med3d_arxiv_2019}, which offers 3D pre-trained weights. However, it is crucial to acknowledge that the scale of its pre-trained data remains incomparable to the vast 2D natural image datasets commonly utilized in transfer learning. Another approach, Model Genesis~\cite{zhu_modelgenesis_mia_2021}, leverages self-supervised methods on 3D medical images. Although innovative, these methods have yet to surpass the performance of widely explored fully-supervised approaches, which span over a decade of research in the field. Our work posits that the full potential of efficient training and fine-tuning in the brain tumor segmentation problem is yet to be fully realized, particularly concerning large-scale 3D pre-trained models.","Since its inception, U-Net~\cite{ronneberger_unet_miccai_2015} has become the \textit{de facto} standard for medical image segmentation with its iconic encoder and decoder architecture. Recent BraTS challenge solutions~\cite{menze_tmi_2015,bakas_arxiv_2019,baid_arxiv_2021} have prominently relied on CNNs using the U-shaped encoder-decoder design~\cite{myronenko_miccai_2019,jiang_cascaded_unet_miccai_2020,isensee_nnunet_miccai_2021,luu_miccai_2022,zeineldin_miccai_2022}. Notably, Isensee \emph{et al.} secured victory in the BraTS2020 challenge with the nnU-Net framework~\cite{isensee_nnunet_nature_2021}, which features an automated configuration mechanism. The winning solution of the subsequent year~\cite{luu_miccai_2022} further built upon nnU-Net's achievements. Despite transformer-based methods like TransBTS~\cite{wang_transbts_miccai_2021} and SwinUNetR~\cite{swinunetr} showing comparable performance, CNNs continue to dominate in medical image segmentation. In the existing literature, a common strategy involves laborious cross-validation, assembly, and training of multiple models from scratch. To address the efficiency challenge, transfer learning has garnered attention. One notable attempt in this direction is Med3D~\cite{chen_med3d_arxiv_2019}, which offers 3D pre-trained weights. However, it is crucial to acknowledge that the scale of its pre-trained data remains incomparable to the vast 2D natural image datasets commonly utilized in transfer learning. Another approach, Model Genesis~\cite{zhu_modelgenesis_mia_2021}, leverages self-supervised methods on 3D medical images. Although innovative, these methods have yet to surpass the performance of widely explored fully-supervised approaches, which span over a decade of research in the field. Our work posits that the full potential of efficient training and fine-tuning in the brain tumor segmentation problem is yet to be fully realized, particularly concerning large-scale 3D pre-trained models.","Since its inception, U-Net [20] has become the de facto standard for medical im- age segmentation with its iconic encoder and decoder architecture. Recent BraTS challenge solutions [16,3,1] have prominently relied on CNNs using the U-shaped encoder-decoder design [18,14,11,15,24]. Notably, Isensee et al. secured victory in the BraTS2020 challenge with the nnU-Net framework [10], which features an automated configuration mechanism. The winning solution of the subsequent year [15] further built upon nnU-Net’s achievements. Despite transformer-based methods like TransBTS [21] and SwinUNetR [6] showing comparable perfor- mance, CNNs continue to dominate in medical image segmentation. Efficient 3D BraTS with ACS Embedding 3 In the existing literature, a common strategy involves laborious cross-validation, assembly, and training of multiple models from scratch. To address the efficiency challenge, transfer learning has garnered attention. One notable attempt in this direction is Med3D [4], which offers 3D pre-trained weights. However, it is crucial to acknowledge that the scale of its pre-trained data remains incomparable to the vast 2D natural image datasets commonly utilized in transfer learning. Another approach, Model Genesis [25], leverages self-supervised methods on 3D medical images. Although innovative, these methods have yet to surpass the performance of widely explored fully-supervised approaches, which span over a decade of research in the field. Our work posits that the full potential of efficient training and fine-tuning in the brain tumor segmentation problem is yet to be fully realized, particularly concerning large-scale 3D pre-trained models." 2506.00333v1,Test-time Vocabulary Adaptation for Language-driven Object Detection,"Mingxuan Liu, Tyler L. Hayes, Massimiliano Mancini, Elisa Ricci, Riccardo Volpi, Gabriela Csurka","Open-vocabulary object detection models allow users to freely specify a class vocabulary in natural language at test time, guiding the detection of desired objects. However, vocabularies can be overly broad or even mis-specified, hampering the overall performance of the detector. In this work, we propose a plug-and-play Vocabulary Adapter (VocAda) to refine the user-defined vocabulary, automatically tailoring it to categories that are relevant for a given image. VocAda does not require any training, it operates at inference time in three steps: i) it uses an image captionner to describe visible objects, ii) it parses nouns from those captions, and iii) it selects relevant classes from the user-defined vocabulary, discarding irrelevant ones. Experiments on COCO and Objects365 with three state-of-the-art detectors show that VocAda consistently improves performance, proving its versatility. The code is open source.",cs.CV,2025-05-31T01:15:29+00:00,2025-05-31T01:15:29+00:00,http://arxiv.org/abs/2506.00333v1,http://arxiv.org/abs/2506.00333v1,2025-05-31 01:15:29+00:00,"\lblsec{relatedwork} Open-vocabulary object detection (\ovod)~\cite{zhu2023survey} aims to map predicted region features to a frozen vision-language embedding space, typically from contrastive models like CLIP~\cite{radford2021learning}. \ovod detectors usually train on box-labeled data~\cite{lin2014microsoft,gupta2019lvis} with limited categories due to high annotation costs, and supplement these with % image-level datasets {annotated at image level}~\cite{deng2009imagenet}, which cover more classes. Major studies have focused on improving alignment training via pseudo-labeling~\cite{zhou2022detecting}, transfer learning~\cite{zhong2022regionclip}, or enhanced weak supervision~\cite{ma2024codet}. In contrast, we improve off-the-shelf \ovod detectors without fine-tuning, % modifying {updating} only their vocabularies. Our work relates to SHiNe~\cite{liu2024shine}, which augments vocabularies via prompt engineering and a semantic hierarchy, but produces a single improved vocabulary for all images. Instead, \vocada adapts the vocabulary per image at test time, complementing prompt-engineering approaches that can further augment \vocada's refined vocabulary.","\lblsec{relatedwork} Open-vocabulary object detection (\ovod)~\cite{zhu2023survey} aims to map predicted region features to a frozen vision-language embedding space, typically from contrastive models like CLIP~\cite{radford2021learning}. \ovod detectors usually train on box-labeled data~\cite{lin2014microsoft,gupta2019lvis} with limited categories due to high annotation costs, and supplement these with % image-level datasets {annotated at image level}~\cite{deng2009imagenet}, which cover more classes. Major studies have focused on improving alignment training via pseudo-labeling~\cite{zhou2022detecting}, transfer learning~\cite{zhong2022regionclip}, or enhanced weak supervision~\cite{ma2024codet}. In contrast, we improve off-the-shelf \ovod detectors without fine-tuning, % modifying {updating} only their vocabularies. Our work relates to SHiNe~\cite{liu2024shine}, which augments vocabularies via prompt engineering and a semantic hierarchy, but produces a single improved vocabulary for all images. Instead, \vocada adapts the vocabulary per image at test time, complementing prompt-engineering approaches that can further augment \vocada's refined vocabulary.","Open-vocabulary object detection (OvOD) [ 11] aims to map predicted region features to a frozen vision-language embed- ding space, typically from contrastive models like CLIP [ 4]. OvOD detectors usually train on box-labeled data [ 9,12] with limited categories due to high annotation costs, and supplement these with datasets annotated at image level [ 13], which cover more classes. Major studies have focused on improving align- ment training via pseudo-labeling [ 5], transfer learning [ 14], or enhanced weak supervision [ 8]. In contrast, we improve off-the-shelf OvOD detectors without fine-tuning, updating only their vocabularies. Our work relates to SHiNe [ 15], which augments vocabularies via prompt engineering and a semantic hierarchy, but produces a single improved vocabulary for all images. Instead, V ocAda adapts the vocabulary per image at test time, complementing prompt-engineering approaches that can further augment V ocAda’s refined vocabulary" 2505.24443v1,"Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers","Heejo Kong, Sung-Jin Kim, Gunho Jung, Seong-Whan Lee","Conventional semi-supervised learning (SSL) ideally assumes that labeled and unlabeled data share an identical class distribution, however in practice, this assumption is easily violated, as unlabeled data often includes unknown class data, i.e., outliers. The outliers are treated as noise, considerably degrading the performance of SSL models. To address this drawback, we propose a novel framework, Diversify and Conquer (DAC), to enhance SSL robustness in the context of open-set semi-supervised learning. In particular, we note that existing open-set SSL methods rely on prediction discrepancies between inliers and outliers from a single model trained on labeled data. This approach can be easily failed when the labeled data is insufficient, leading to performance degradation that is worse than naive SSL that do not account for outliers. In contrast, our approach exploits prediction disagreements among multiple models that are differently biased towards the unlabeled distribution. By leveraging the discrepancies arising from training on unlabeled data, our method enables robust outlier detection even when the labeled data is underspecified. Our key contribution is constructing a collection of differently biased models through a single training process. By encouraging divergent heads to be differently biased towards outliers while making consistent predictions for inliers, we exploit the disagreement among these heads as a measure to identify unknown concepts. Our code is available at https://github.com/heejokong/DivCon.","cs.CV, cs.LG",2025-05-30T10:24:30+00:00,2025-05-30T10:24:30+00:00,http://arxiv.org/abs/2505.24443v1,http://arxiv.org/abs/2505.24443v1,2025-05-30 10:24:30+00:00,"\subsection{Semi-supervised Learning} \textcolor{black}{ As a remedy for the reliance of deep supervised learning on large-scale annotated datasets, semi-supervised learning (SSL) provides effective solutions to leverage abundant unlabeled data while requiring only a small proportion of labeled samples. Primary SSL methods can be broadly categorized into entropy minimization \cite{ssl_2, ssl_9, ssl_10}, consistency regularization \cite{ssl_11, ssl_12, ssl_13, ssl_14, tnnls_2}, and holistic approaches \cite{ssl_3, ssl_4, ssl_16}. Among the various frameworks, FixMatch \cite{ssl_4} has garnered widespread attention as a strong baseline, offering remarkable effectiveness despite its simple training procedure. Several studies have focused on the confidence thresholding employed by FixMatch, proposing class-specific threshold adjustments based on training difficulty \cite{ssl_19, ssl_20} or adaptive rejection strategies that consider the quantity-quality trade-off of pseudo-labels, thereby progressively filtering noisy samples \cite{ssl_8}. In addition, some efforts have integrated FixMatch with representation learning strategies. These techniques encompass a range of graph-based \cite{ssl_6, ssl_7, ssl_17, tnnls_3} and contrastive learning \cite{ssl_5, rep_3} approaches. By incorporating instance-level similarity relationships into their objectives, they aim to produce more refined classifiers. } \textcolor{black}{ Notably, the majority of leading SSL frameworks that have demonstrated significant success are grounded in self-training \cite{ssl_2, ssl_4, ssl_10}. That is, they leverage predictions from a weak model trained on labeled data as pseudo labels for unlabeled data. However, conventional self-training assumes that the class distributions of the labeled and unlabeled sets are identical, rendering it incapable of handling unknown samples, \textit{i.e.}, outliers, present in the unlabeled data. This approach consequently risks treating the outliers as known categories, ultimately yielding corrupted SSL models. } \subsection{Open-set Semi-supervised Learning} \textcolor{black}{ An open-set SSL problem extends the conventional SSL setting to a more practical scenario by assuming the presence of out-of-category samples within uncurated unlabeled data. This problem was first explored in \cite{ssl_1}, which demonstrated that existing SSL methods suffer performance degradation when outliers are present in the unlabeled set. As an explicit solution, existing open-set SSL works have adopted a detect-and-filter strategy, that is, detecting samples that are expected to be outliers and suppressing their influence in training. The core of this strategy lies in devising a sophisticated criterion and corresponding training procedure to effectively detect potential outliers. One intuitive approach utilizes the predictions of a standard training model as a direct measure for outlier detection without introducing additional modules. This broad category of methods encompasses various out-of-distribution detection techniques, such as prediction confidence \cite{ossl_2, ossl_14}, sample similarity \cite{ossl_12, ossl_16}, and energy discrepancy \cite{ossl_5, ossl_6}. By regarding samples with uncertain predictions under the known distribution as potential outliers, these methods adopt adaptive training strategies accordingly. Another line of the strategy is to exploit learnable detectors as an additional module to handle the outliers. Depending on how the binary property is modeled, various methods have been proposed. A standard way treats all known classes and unknown classes each as a single generic class, effectively employing a one-vs-one binary classifier. Related studies have introduced a curriculum-based framework \cite{ossl_3}, self-training techniques \cite{ossl_9, ossl_10}, and contrastive learning strategies \cite{ossl_8} to train this detector. Another way employs one-vs-all binary classifiers, which independently identify a binary property for each known class, as outlier detectors—a concept first introduced in OpenMatch \cite{ossl_4}. Subsequent studies \cite{ossl_7, ossl_11} have proposed various extensions to enhance this baseline. In contrast, some investigations opt for more implicit strategies to reduce the impact of outliers on the training process by formalizing the training as a bi-level optimization problem \cite{ossl_1} or adopting binary decomposition \cite{ossl_13}, thereby mitigating their influence. } \textcolor{black}{ In this work, we adopt the detect-and-filter strategy, which has emerged as a major trend for tackling open-set SSL problem. Our primary focus is on ensuring robust open-set SSL performance even when the labeled data is scarce, resulting in underspecified prior knowledge. As discussed, under such underspecified regimes, standard detection-and-filter frameworks—including OpenMatch—often suffer from an over-rejection problem, excluding not only outliers but also many inliers from training. Although several studies \cite{ossl_9, ossl_10, ossl_17, ossl_8} have indirectly leveraged potential outliers for representation learning purposes, only a few \cite{ossl_7, ossl_11} have explicitly considered solutions to the over-rejection issue. We note that despite the lack of prior knowledge, existing alternatives \cite{ossl_7, ossl_11} still rely on the prediction uncertainty of a single detector acquired on the labeled set. In contrast, our approach leverages the relative discrepancy between multiple functions acquired on the unlabeled data as an uncertainty measure. This perspective enables the proposed DAC to effectively identify outliers even when the labeled data is underspecified. }","\subsection{Semi-supervised Learning} \textcolor{black}{ As a remedy for the reliance of deep supervised learning on large-scale annotated datasets, semi-supervised learning (SSL) provides effective solutions to leverage abundant unlabeled data while requiring only a small proportion of labeled samples. Primary SSL methods can be broadly categorized into entropy minimization \cite{ssl_2, ssl_9, ssl_10}, consistency regularization \cite{ssl_11, ssl_12, ssl_13, ssl_14, tnnls_2}, and holistic approaches \cite{ssl_3, ssl_4, ssl_16}. Among the various frameworks, FixMatch \cite{ssl_4} has garnered widespread attention as a strong baseline, offering remarkable effectiveness despite its simple training procedure. Several studies have focused on the confidence thresholding employed by FixMatch, proposing class-specific threshold adjustments based on training difficulty \cite{ssl_19, ssl_20} or adaptive rejection strategies that consider the quantity-quality trade-off of pseudo-labels, thereby progressively filtering noisy samples \cite{ssl_8}. In addition, some efforts have integrated FixMatch with representation learning strategies. These techniques encompass a range of graph-based \cite{ssl_6, ssl_7, ssl_17, tnnls_3} and contrastive learning \cite{ssl_5, rep_3} approaches. By incorporating instance-level similarity relationships into their objectives, they aim to produce more refined classifiers. } \textcolor{black}{ Notably, the majority of leading SSL frameworks that have demonstrated significant success are grounded in self-training \cite{ssl_2, ssl_4, ssl_10}. That is, they leverage predictions from a weak model trained on labeled data as pseudo labels for unlabeled data. However, conventional self-training assumes that the class distributions of the labeled and unlabeled sets are identical, rendering it incapable of handling unknown samples, \textit{i.e.}, outliers, present in the unlabeled data. This approach consequently risks treating the outliers as known categories, ultimately yielding corrupted SSL models. } \subsection{Open-set Semi-supervised Learning} \textcolor{black}{ An open-set SSL problem extends the conventional SSL setting to a more practical scenario by assuming the presence of out-of-category samples within uncurated unlabeled data. This problem was first explored in \cite{ssl_1}, which demonstrated that existing SSL methods suffer performance degradation when outliers are present in the unlabeled set. As an explicit solution, existing open-set SSL works have adopted a detect-and-filter strategy, that is, detecting samples that are expected to be outliers and suppressing their influence in training. The core of this strategy lies in devising a sophisticated criterion and corresponding training procedure to effectively detect potential outliers. One intuitive approach utilizes the predictions of a standard training model as a direct measure for outlier detection without introducing additional modules. This broad category of methods encompasses various out-of-distribution detection techniques, such as prediction confidence \cite{ossl_2, ossl_14}, sample similarity \cite{ossl_12, ossl_16}, and energy discrepancy \cite{ossl_5, ossl_6}. By regarding samples with uncertain predictions under the known distribution as potential outliers, these methods adopt adaptive training strategies accordingly. Another line of the strategy is to exploit learnable detectors as an additional module to handle the outliers. Depending on how the binary property is modeled, various methods have been proposed. A standard way treats all known classes and unknown classes each as a single generic class, effectively employing a one-vs-one binary classifier. Related studies have introduced a curriculum-based framework \cite{ossl_3}, self-training techniques \cite{ossl_9, ossl_10}, and contrastive learning strategies \cite{ossl_8} to train this detector. Another way employs one-vs-all binary classifiers, which independently identify a binary property for each known class, as outlier detectors—a concept first introduced in OpenMatch \cite{ossl_4}. Subsequent studies \cite{ossl_7, ossl_11} have proposed various extensions to enhance this baseline. In contrast, some investigations opt for more implicit strategies to reduce the impact of outliers on the training process by formalizing the training as a bi-level optimization problem \cite{ossl_1} or adopting binary decomposition \cite{ossl_13}, thereby mitigating their influence. } \textcolor{black}{ In this work, we adopt the detect-and-filter strategy, which has emerged as a major trend for tackling open-set SSL problem. Our primary focus is on ensuring robust open-set SSL performance even when the labeled data is scarce, resulting in underspecified prior knowledge. As discussed, under such underspecified regimes, standard detection-and-filter frameworks—including OpenMatch—often suffer from an over-rejection problem, excluding not only outliers but also many inliers from training. Although several studies \cite{ossl_9, ossl_10, ossl_17, ossl_8} have indirectly leveraged potential outliers for representation learning purposes, only a few \cite{ossl_7, ossl_11} have explicitly considered solutions to the over-rejection issue. We note that despite the lack of prior knowledge, existing alternatives \cite{ossl_7, ossl_11} still rely on the prediction uncertainty of a single detector acquired on the labeled set. In contrast, our approach leverages the relative discrepancy between multiple functions acquired on the unlabeled data as an uncertainty measure. This perspective enables the proposed DAC to effectively identify outliers even when the labeled data is underspecified. }", 2505.24334v1,"KairosAD: A SAM-Based Model for Industrial Anomaly Detection on Embedded Devices","Uzair Khan, Franco Fummi, Luigi Capogrosso","In the era of intelligent manufacturing, anomaly detection has become essential for maintaining quality control on modern production lines. However, while many existing models show promising performance, they are often too large, computationally demanding, and impractical to deploy on resource-constrained embedded devices that can be easily installed on the production lines of Small and Medium Enterprises (SMEs). To bridge this gap, we present KairosAD, a novel supervised approach that uses the power of the Mobile Segment Anything Model (MobileSAM) for image-based anomaly detection. KairosAD has been evaluated on the two well-known industrial anomaly detection datasets, i.e., MVTec-AD and ViSA. The results show that KairosAD requires 78% fewer parameters and boasts a 4x faster inference time compared to the leading state-of-the-art model, while maintaining comparable AUROC performance. We deployed KairosAD on two embedded devices, the NVIDIA Jetson NX, and the NVIDIA Jetson AGX. Finally, KairosAD was successfully installed and tested on the real production line of the Industrial Computer Engineering Laboratory (ICE Lab) at the University of Verona. The code is available at https://github.com/intelligolabs/KairosAD.",cs.CV,2025-05-30T08:18:49+00:00,2025-05-30T08:18:49+00:00,http://arxiv.org/abs/2505.24334v1,http://arxiv.org/abs/2505.24334v1,2025-05-30 08:18:49+00:00,"\label{cha:related} Over the years, anomaly detection in the industry domain has been extensively explored~\cite{liu2024deep} (Section~\ref{sec:industry_ad}). Since \ours{} is based on MobileSAM and employs efficiency-driven methods to enable deployment on embedded systems, this section also delves into the literature on foundation models for vision (Section~\ref{sec:vision_foundation_models}) and efficient deep learning techniques (Section~\ref{sec:efficient_dl}). %%%%%%%%%%%%%%%%%%%%%%%%% \subsection{Industrial Anomaly Detection} \label{sec:industry_ad} Although extensive research has been carried out on anomaly detection, industrial image data presents unique challenges~\cite{bergmann2019mvtec}. Many industrial anomaly detection methods focus on image reconstruction and detect anomalies based on the reconstruction error~\cite{liu2024deep}. Generative models such as autoencoders~\cite{bergmann2018improving}, variational autoencoders~\cite{liu2020towards}, and generative adversarial networks~\cite{akcay2019ganomaly} are the architectures most widely used to reconstruct normal images from anomalous ones. Nonetheless, these methods face certain limitations, especially when reconstructing complex industrial textures and patterns. Recent approaches leverage memory banks, where a core set of stored features from a pre-trained backbone is used to compute patch-level distances for anomaly detection~\cite{damm2024anomalydino}. PatchCore~\cite{roth2022towards} is a foundational approach that extracts patch-level feature embeddings of normal images into a memory bank, detecting anomalous patches during inference through a patch query process. SoftPatch~\cite{jiang2022softpatch}, based on PatchCore, introduced a patch-level filtering strategy in which patch features are filtered and weighted before being stored in the memory bank to reduce contamination from anomalous patches, thus enhancing model robustness. However, most of these methods come at the cost of increased computational complexity and a large memory space, making them unsuitable for deployment on resource-constrained devices. In terms of efficient industrial anomaly detection models, our main competitor is STLM~\cite{li2024sam}. Unlike STLM, which employs a two-branch architecture, we adopt just a single-branch design, reducing complexity and improving inference speed. Furthermore, while STLM focuses on both anomaly detection and localization, our method is specifically optimized for just the image-level task, prioritizing efficiency for real-time deployment. From a methodological perspective, STLM is based on feature distillation and contrastive learning, while \ours{} uses MobileSAM for a lightweight yet effective feature extraction, resulting in a simpler and more computationally efficient model. %%%%%%%%%%%%%%%%%%%%%%%%% \subsection{Foundation Models for Vision} \label{sec:vision_foundation_models} Multimodal foundation models have emerged as powerful tools across various tasks~\cite{li2024multimodal}. For visual anomaly detection, particularly relevant are multimodal approaches based on CLIP~\cite{radford2021learning} and Segment Anything Model (SAM)~\cite{kirillov2023segment}, as well as vision-only models like DINO~\cite{caron2021emerging}. Specifically, CLIP learns visual concepts from natural language descriptions by training on image-text pairs using a contrastive learning objective that aligns embeddings from both modalities. This shared feature space enables downstream applications, such as zero-shot image classification, by comparing image embeddings to class-specific textual prompts. Instead, DINO adopts a self-supervised student-teacher framework based on Vision Transformers (ViT), leveraging a multiview strategy to predict softened teacher output, resulting in robust and high-quality feature representations. DINOv2~\cite{oquab2023dinov2} extends these ideas by incorporating patch-level reconstruction techniques and scaling to larger architectures and datasets. Beyond these, SAM and its efficient variant, such as MobileSAM~\cite{zhang2023faster}, have introduced a new paradigm in vision models, particularly for segmentation tasks. SAM is designed as a powerful promptable segmentation model that generalizes well across diverse images, enabling accurate object delineation with minimal supervision. MobileSAM builds upon SAM, optimizing it for edge devices by reducing computational demands while maintaining competitive performance. These models, though primarily designed for segmentation, have shown potential for various downstream tasks, including anomaly detection. As a result, we take advantage of their efficiency and adaptability to develop \ours{}, which is based on the MobileSAM image encoder. %%%%%%%%%%%%%%%%%%%%%%%%% \subsection{Efficient Deep Learning} \label{sec:efficient_dl} Over the past decades, a large amount of research has been invested in improving embedded technologies to enable real-time solutions for many complex applications. However, deploying learning models on tiny devices is substantially difficult due to severe architectural, energetic, and latency constraints~\cite{capogrosso2024machine}. Several techniques aim to reduce the size and computational cost of the model without sacrificing performance. These include pruning~\cite{vadera2022methods}, quantization~\cite{gholami2022survey}, and knowledge distillation~\cite{gou2021knowledge}. Furthermore, in order to provide lightweight models capable of delivering acceptable performances for their intended applications, specialized techniques have been proposed in model architecture exploration, model simplification, and architectural modifications, such as Neural Architecture Search (NAS)~\cite{ren2021comprehensive}, and the attention mechanism~\cite{brauwers2021general}. Although commonly associated with notable contributions to machine translation tasks, the attention mechanism has been adapted for a wide range of applications, allowing the model to selectively look for different features, facilitating the extraction of relevant information from complex and high-dimensional data~\cite{capogrosso2024machine}. One of the most significant advances that utilize this principle is the Transformer architecture~\cite{vaswani2017attention}. Recent efforts to develop computationally efficient Transformers in vision have broadened their potential use in resource-constrained settings~\cite{khan2022transformers}. MobileSAM represents one of the latest advances, and for this reason, we decided to use the MobileSAM image encoder in \ours{}.","Over the years, anomaly detection in the industry domain has been extensively explored~\cite{liu2024deep} (Section~\ref{sec:industry_ad}). Since \ours{} is based on MobileSAM and employs efficiency-driven methods to enable deployment on embedded systems, this section also delves into the literature on foundation models for vision (Section~\ref{sec:vision_foundation_models}) and efficient deep learning techniques (Section~\ref{sec:efficient_dl}). %%%%%%%%%%%%%%%%%%%%%%%%% \subsection{Industrial Anomaly Detection} Although extensive research has been carried out on anomaly detection, industrial image data presents unique challenges~\cite{bergmann2019mvtec}. Many industrial anomaly detection methods focus on image reconstruction and detect anomalies based on the reconstruction error~\cite{liu2024deep}. Generative models such as autoencoders~\cite{bergmann2018improving}, variational autoencoders~\cite{liu2020towards}, and generative adversarial networks~\cite{akcay2019ganomaly} are the architectures most widely used to reconstruct normal images from anomalous ones. Nonetheless, these methods face certain limitations, especially when reconstructing complex industrial textures and patterns. Recent approaches leverage memory banks, where a core set of stored features from a pre-trained backbone is used to compute patch-level distances for anomaly detection~\cite{damm2024anomalydino}. PatchCore~\cite{roth2022towards} is a foundational approach that extracts patch-level feature embeddings of normal images into a memory bank, detecting anomalous patches during inference through a patch query process. SoftPatch~\cite{jiang2022softpatch}, based on PatchCore, introduced a patch-level filtering strategy in which patch features are filtered and weighted before being stored in the memory bank to reduce contamination from anomalous patches, thus enhancing model robustness. However, most of these methods come at the cost of increased computational complexity and a large memory space, making them unsuitable for deployment on resource-constrained devices. In terms of efficient industrial anomaly detection models, our main competitor is STLM~\cite{li2024sam}. Unlike STLM, which employs a two-branch architecture, we adopt just a single-branch design, reducing complexity and improving inference speed. Furthermore, while STLM focuses on both anomaly detection and localization, our method is specifically optimized for just the image-level task, prioritizing efficiency for real-time deployment. From a methodological perspective, STLM is based on feature distillation and contrastive learning, while \ours{} uses MobileSAM for a lightweight yet effective feature extraction, resulting in a simpler and more computationally efficient model. %%%%%%%%%%%%%%%%%%%%%%%%% \subsection{Foundation Models for Vision} Multimodal foundation models have emerged as powerful tools across various tasks~\cite{li2024multimodal}. For visual anomaly detection, particularly relevant are multimodal approaches based on CLIP~\cite{radford2021learning} and Segment Anything Model (SAM)~\cite{kirillov2023segment}, as well as vision-only models like DINO~\cite{caron2021emerging}. Specifically, CLIP learns visual concepts from natural language descriptions by training on image-text pairs using a contrastive learning objective that aligns embeddings from both modalities. This shared feature space enables downstream applications, such as zero-shot image classification, by comparing image embeddings to class-specific textual prompts. Instead, DINO adopts a self-supervised student-teacher framework based on Vision Transformers (ViT), leveraging a multiview strategy to predict softened teacher output, resulting in robust and high-quality feature representations. DINOv2~\cite{oquab2023dinov2} extends these ideas by incorporating patch-level reconstruction techniques and scaling to larger architectures and datasets. Beyond these, SAM and its efficient variant, such as MobileSAM~\cite{zhang2023faster}, have introduced a new paradigm in vision models, particularly for segmentation tasks. SAM is designed as a powerful promptable segmentation model that generalizes well across diverse images, enabling accurate object delineation with minimal supervision. MobileSAM builds upon SAM, optimizing it for edge devices by reducing computational demands while maintaining competitive performance. These models, though primarily designed for segmentation, have shown potential for various downstream tasks, including anomaly detection. As a result, we take advantage of their efficiency and adaptability to develop \ours{}, which is based on the MobileSAM image encoder. %%%%%%%%%%%%%%%%%%%%%%%%% \subsection{Efficient Deep Learning} Over the past decades, a large amount of research has been invested in improving embedded technologies to enable real-time solutions for many complex applications. However, deploying learning models on tiny devices is substantially difficult due to severe architectural, energetic, and latency constraints~\cite{capogrosso2024machine}. Several techniques aim to reduce the size and computational cost of the model without sacrificing performance. These include pruning~\cite{vadera2022methods}, quantization~\cite{gholami2022survey}, and knowledge distillation~\cite{gou2021knowledge}. Furthermore, in order to provide lightweight models capable of delivering acceptable performances for their intended applications, specialized techniques have been proposed in model architecture exploration, model simplification, and architectural modifications, such as Neural Architecture Search (NAS)~\cite{ren2021comprehensive}, and the attention mechanism~\cite{brauwers2021general}. Although commonly associated with notable contributions to machine translation tasks, the attention mechanism has been adapted for a wide range of applications, allowing the model to selectively look for different features, facilitating the extraction of relevant information from complex and high-dimensional data~\cite{capogrosso2024machine}. One of the most significant advances that utilize this principle is the Transformer architecture~\cite{vaswani2017attention}. Recent efforts to develop computationally efficient Transformers in vision have broadened their potential use in resource-constrained settings~\cite{khan2022transformers}. MobileSAM represents one of the latest advances, and for this reason, we decided to use the MobileSAM image encoder in \ours{}.","Over the years, anomaly detection in the industry domain has been extensively explored [22] (Section 2.1). Since KairosAD is based on MobileSAM and em- ploys efficiency-driven methods to enable deployment on embedded systems, this section also delves into the literature on foundation models for vision (Sec- tion 2.2) and efficient deep learning techniques (Section 2.3). 2.1 Industrial Anomaly Detection Although extensive research has been carried out on anomaly detection, indus- trial image data presents unique challenges [2]. Many industrial anomaly detection methods focus on image reconstruction and detect anomalies based on the reconstruction error [22]. Generative models such as autoencoders [3], variational autoencoders [23], and generative adver- sarial networks [1] are the architectures most widely used to reconstruct normal images from anomalous ones. Nonetheless, these methods face certain limita- tions, especially when reconstructing complex industrial textures and patterns. Recentapproachesleveragememorybanks,whereacoresetofstoredfeatures fromapre-trainedbackboneisusedtocomputepatch-leveldistancesforanomaly detection [8]. PatchCore [29] is a foundational approach that extracts patch-level feature embeddings of normal images into a memory bank, detecting anomalous patches during inference through a patch query process. SoftPatch [14], based on PatchCore, introduced a patch-level filtering strategy in which patch features are filtered and weighted before being stored in the memory bank to reduce con- tamination from anomalous patches, thus enhancing model robustness. However, most of these methods come at the cost of increased computational complexity and a large memory space, making them unsuitable for deployment on resource- constrained devices. 1Results calculated by using: https://mlco2.github.io/codecarbon/ . 4 U. Khan et al. In terms of efficient industrial anomaly detection models, our main competi- tor is STLM [19]. Unlike STLM, which employs a two-branch architecture, we adopt just a single-branch design, reducing complexity and improving inference speed. Furthermore, while STLM focuses on both anomaly detection and local- ization, our method is specifically optimized for just the image-level task, prior- itizing efficiency for real-time deployment. From a methodological perspective, STLM is based on feature distillation and contrastive learning, while KairosAD uses MobileSAM for a lightweight yet effective feature extraction, resulting in a simpler and more computationally efficient model. 2.2 Foundation Models for Vision Multimodal foundation models have emerged as powerful tools across various tasks [20]. For visual anomaly detection, particularly relevant are multimodal approaches based on CLIP [27] and Segment Anything Model (SAM) [18], as well as vision-only models like DINO [7]. Specifically, CLIP learns visual concepts from natural language descriptions by training on image-text pairs using a contrastive learning objective that aligns embeddingsfrombothmodalities.Thissharedfeaturespaceenablesdownstream applications, such as zero-shot image classification, by comparing image embed- dings to class-specific textual prompts. Instead, DINO adopts a self-supervised student-teacher framework based on Vision Transformers (ViT), leveraging a multiview strategy to predict softened teacher output, resulting in robust and high-quality feature representations. DI- NOv2 [25] extends these ideas by incorporating patch-level reconstruction tech- niques and scaling to larger architectures and datasets. Beyond these, SAM and its efficient variant, such as MobileSAM [37], have introduced a new paradigm in vision models, particularly for segmentation tasks. SAM is designed as a powerful promptable segmentation model that generalizes well across diverse images, enabling accurate object delineation with minimal supervision. MobileSAM builds upon SAM, optimizing it for edge devices by reducing computational demands while maintaining competitive performance. Thesemodels,thoughprimarilydesignedforsegmentation,haveshownpotential for various downstream tasks, including anomaly detection. As a result, we take advantage of their efficiency and adaptability to develop KairosAD , which is based on the MobileSAM image encoder. 2.3 Efficient Deep Learning Over the past decades, a large amount of research has been invested in im- proving embedded technologies to enable real-time solutions for many complex applications. However, deploying learning models on tiny devices is substantially difficult due to severe architectural, energetic, and latency constraints [5]. Severaltechniquesaimtoreducethesizeandcomputationalcostofthemodel without sacrificing performance. These include pruning [31], quantization [10], and knowledge distillation [12]. Furthermore, in order to provide lightweight KairosAD: Industrial Anomaly Detection on Embedded Devices 5 Semantic Feature Encoder (SFE)Anomaly Score Predictor (ASP)Industry Input ImagesMobileSAM ViT-Based Image EncoderImage EmbeddingNormal  or Abnormal FC" 2505.23290v1,"Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation","Hao Li, Ju Dai, Xin Zhao, Feng Zhou, Junjun Pan, Lei Li","In 3D speech-driven facial animation generation, existing methods commonly employ pre-trained self-supervised audio models as encoders. However, due to the prevalence of phonetically similar syllables with distinct lip shapes in language, these near-homophone syllables tend to exhibit significant coupling in self-supervised audio feature spaces, leading to the averaging effect in subsequent lip motion generation. To address this issue, this paper proposes a plug-and-play semantic decorrelation module-Wav2Sem. This module extracts semantic features corresponding to the entire audio sequence, leveraging the added semantic information to decorrelate audio encodings within the feature space, thereby achieving more expressive audio features. Extensive experiments across multiple Speech-driven models indicate that the Wav2Sem module effectively decouples audio features, significantly alleviating the averaging effect of phonetically similar syllables in lip shape generation, thereby enhancing the precision and naturalness of facial animations. Our source code is available at https://github.com/wslh852/Wav2Sem.git.","cs.SD, cs.CV, eess.AS",2025-05-29T09:42:03+00:00,2025-05-29T09:42:03+00:00,http://arxiv.org/abs/2505.23290v1,http://arxiv.org/abs/2505.23290v1,2025-05-29 09:42:03+00:00,"\subsection{Speech-driven 3D Facial Animation} In recent years, speech-driven facial animation has received considerable attention. Traditional approaches~\cite{TaylorKYMKRHM17,cao2005expressive} focus on establishing mapping relationships between phonemes and facial movements, which often rely on predefined facial movements libraries and complex mapping rules. As a result, the generated outcomes frequently lack natural transitions and exhibit rigid characteristics. With the advancements in deep learning, numerous studies~\cite{FaceFormer,CodeTalker,FaceDiffuser,li2023mask,haque2023facexhubert,EMOTE,peng2023emotalk,thambiraja20233diface} have been dedicated to learning the mapping relationships between speech and facial movements from data. For instance, VOCA~\cite{VOCA} is a straightforward and versatile framework that not only enables the generation of facial animations based on speech but also supports cross-identity speech driving. Faceformer~\cite{FaceFormer} is a speech-driven 3D facial animation architecture based on an autoregressive transformer, which utilizes a pre-trained self-supervised audio model to solve the problem of data scarcity in existing audiovisual datasets. Codetalker~\cite{CodeTalker} demonstrates the advantages of transforming speech-driven facial animation into a code-query task in discrete space, which significantly improves the quality of facial motion synthesis. With the development of the diffusion model, FaceDiffuser~\cite{FaceDiffuser} integrates the diffusion mechanism into speech-driven facial generation and achieves more accurate lip synchronization and facial expression. Similarly, LG-LDM~\cite{LG-LDM} employs latent diffusion modeling to ensure that subtle emotional outputs in facial animations are accurately rendered. Mimic~\cite{fu2024mimic} introduces an innovative speech style disentanglement method, which can realize the speech style coding of any subject, so as to synthesize facial animation more realistically. However, these works focus on more complex models to improve the quality of facial generation, ignoring the impact of generating features for pre-trained self-supervised audio models. In particular, languages have many near-homophonic syllables with similar pronunciations but different lip shapes, the incorrect expression of audio feature space affects the naturalness and consistency of the generated results. \subsection{Audio Encoder in Speech-driver} The audio encoder aims to convert raw audio signals into embeddings that are easier for the model to process, playing a crucial role in speech-driven facial animation. Earlier studies~\cite{wav2lip,DBLP:conf/bmvc/ChenLLYW21} utilize Mel Frequency Cepstrum Coefficients (MFCC) as the feature representation of audio. However, extracting MFCC from raw audio usually leads to losing high-frequency information and poor robustness. With the development of deep learning, DeepSpeech~\cite{DeepSpeech} is developed, which is an audio feature extractor based on a combination of convolutional and recurrent neural networks, aiming to capture the temporal and frequency domain information in audio more efficiently. However, DeepSpeech is highly dependent on labeled data. In order to reduce the dependence on labeled data, self-supervised learning approaches have received wide attention. Wav2Vec 2.0~\cite{wav2vec} provides an efficient self-supervised learning framework that enables models to automatically learn high-quality audio feature representations by large-scale unlabeled audio data. This approach addresses data dependency and generalization challenges in speech recognition tasks. HuBERT~\cite{hubert} introduces unsupervised clustering and multi-stage training, effectively enhancing feature extraction and task performance. At the same time, it simplifies the complex negative sampling mechanism, providing a more efficient and refined approach to speech representation learning. Due to the scarcity of audiovisual datasets, speech-driven research often uses pre-trained self-supervised models as audio encoders. However, these self-supervised models focus on phonemes and low-level features, lacking overall semantic modeling, which results in the model's inability to distinguish homophones based on context. \subsection{Multimodal Facial Animation Generation} With the advancement of multimodal technology, substantial studies~\cite{ao2023gesturediffuclip,liang2024omg,zhang2022motiondiffuse,mughal2024convofusion,zhao2024media2face,DBLP:conf/cvpr/ChhatreDABPBB24} are integrating multimodal information to enhance generation quality. Benjamin et al.~\cite{ElizaldeZR19} propose a multimodal search framework based on co-embeddings of text and audio, aiming to obtain more robust feature representations through a shared embedding space to enhance retrieval performance. Yu et al.~\cite{Yu0L19} generate landmark points intermediate representations by introducing additional textual information, thus improving the final generated facial effects. EMAGE~\cite{EMAGE} is a text and speech-driven character motion generation framework that enables the continuous and natural generation of facial and body movements. These works utilize additional text features to supplement the missing semantic information in the audio. %AMUSE~\cite{DBLP:conf/cvpr/ChhatreDABPBB24} directly synthesizes 3D gestures from speech and controls emotions and styles by combining the content of the driving speech with the emotion and style of another speech sequence. SIGGesture~\cite{SIGGestureDBLP:journals/corr/abs-2405-13336} uses the powerful generalization ability of large language model to generate appropriate semantic gestures for various speech texts. It is worth noting that text and audio are two different ways to express semantics, they differ in how the information is conveyed but ultimately convey the same core semantic information. Therefore, it is feasible to directly obtain the corresponding semantic information from the audio. The Wav2Sem module can directly learn corresponding semantic information from the entire audio without the need for additional text assistance. It can be easily integrated into existing facial animation pipelines, significantly improving the model's ability to capture semantic information. %focus on learning semantic information from audio without introducing textual information. %in the inference phase. %To achieve the goal, we leverage pared text and audio inputs in pretraining the Wav2Sem module without the requirement of text descriptions for facial animation generation.","\subsection{Speech-driven 3D Facial Animation} In recent years, speech-driven facial animation has received considerable attention. Traditional approaches~\cite{TaylorKYMKRHM17,cao2005expressive} focus on establishing mapping relationships between phonemes and facial movements, which often rely on predefined facial movements libraries and complex mapping rules. As a result, the generated outcomes frequently lack natural transitions and exhibit rigid characteristics. With the advancements in deep learning, numerous studies~\cite{FaceFormer,CodeTalker,FaceDiffuser,li2023mask,haque2023facexhubert,EMOTE,peng2023emotalk,thambiraja20233diface} have been dedicated to learning the mapping relationships between speech and facial movements from data. For instance, VOCA~\cite{VOCA} is a straightforward and versatile framework that not only enables the generation of facial animations based on speech but also supports cross-identity speech driving. Faceformer~\cite{FaceFormer} is a speech-driven 3D facial animation architecture based on an autoregressive transformer, which utilizes a pre-trained self-supervised audio model to solve the problem of data scarcity in existing audiovisual datasets. Codetalker~\cite{CodeTalker} demonstrates the advantages of transforming speech-driven facial animation into a code-query task in discrete space, which significantly improves the quality of facial motion synthesis. With the development of the diffusion model, FaceDiffuser~\cite{FaceDiffuser} integrates the diffusion mechanism into speech-driven facial generation and achieves more accurate lip synchronization and facial expression. Similarly, LG-LDM~\cite{LG-LDM} employs latent diffusion modeling to ensure that subtle emotional outputs in facial animations are accurately rendered. Mimic~\cite{fu2024mimic} introduces an innovative speech style disentanglement method, which can realize the speech style coding of any subject, so as to synthesize facial animation more realistically. However, these works focus on more complex models to improve the quality of facial generation, ignoring the impact of generating features for pre-trained self-supervised audio models. In particular, languages have many near-homophonic syllables with similar pronunciations but different lip shapes, the incorrect expression of audio feature space affects the naturalness and consistency of the generated results. \subsection{Audio Encoder in Speech-driver} The audio encoder aims to convert raw audio signals into embeddings that are easier for the model to process, playing a crucial role in speech-driven facial animation. Earlier studies~\cite{wav2lip,DBLP:conf/bmvc/ChenLLYW21} utilize Mel Frequency Cepstrum Coefficients (MFCC) as the feature representation of audio. However, extracting MFCC from raw audio usually leads to losing high-frequency information and poor robustness. With the development of deep learning, DeepSpeech~\cite{DeepSpeech} is developed, which is an audio feature extractor based on a combination of convolutional and recurrent neural networks, aiming to capture the temporal and frequency domain information in audio more efficiently. However, DeepSpeech is highly dependent on labeled data. In order to reduce the dependence on labeled data, self-supervised learning approaches have received wide attention. Wav2Vec 2.0~\cite{wav2vec} provides an efficient self-supervised learning framework that enables models to automatically learn high-quality audio feature representations by large-scale unlabeled audio data. This approach addresses data dependency and generalization challenges in speech recognition tasks. HuBERT~\cite{hubert} introduces unsupervised clustering and multi-stage training, effectively enhancing feature extraction and task performance. At the same time, it simplifies the complex negative sampling mechanism, providing a more efficient and refined approach to speech representation learning. Due to the scarcity of audiovisual datasets, speech-driven research often uses pre-trained self-supervised models as audio encoders. However, these self-supervised models focus on phonemes and low-level features, lacking overall semantic modeling, which results in the model's inability to distinguish homophones based on context. \subsection{Multimodal Facial Animation Generation} With the advancement of multimodal technology, substantial studies~\cite{ao2023gesturediffuclip,liang2024omg,zhang2022motiondiffuse,mughal2024convofusion,zhao2024media2face,DBLP:conf/cvpr/ChhatreDABPBB24} are integrating multimodal information to enhance generation quality. Benjamin et al.~\cite{ElizaldeZR19} propose a multimodal search framework based on co-embeddings of text and audio, aiming to obtain more robust feature representations through a shared embedding space to enhance retrieval performance. Yu et al.~\cite{Yu0L19} generate landmark points intermediate representations by introducing additional textual information, thus improving the final generated facial effects. EMAGE~\cite{EMAGE} is a text and speech-driven character motion generation framework that enables the continuous and natural generation of facial and body movements. These works utilize additional text features to supplement the missing semantic information in the audio. %AMUSE~\cite{DBLP:conf/cvpr/ChhatreDABPBB24} directly synthesizes 3D gestures from speech and controls emotions and styles by combining the content of the driving speech with the emotion and style of another speech sequence. SIGGesture~\cite{SIGGestureDBLP:journals/corr/abs-2405-13336} uses the powerful generalization ability of large language model to generate appropriate semantic gestures for various speech texts. It is worth noting that text and audio are two different ways to express semantics, they differ in how the information is conveyed but ultimately convey the same core semantic information. Therefore, it is feasible to directly obtain the corresponding semantic information from the audio. The Wav2Sem module can directly learn corresponding semantic information from the entire audio without the need for additional text assistance. It can be easily integrated into existing facial animation pipelines, significantly improving the model's ability to capture semantic information. %focus on learning semantic information from audio without introducing textual information. %in the inference phase. %To achieve the goal, we leverage pared text and audio inputs in pretraining the Wav2Sem module without the requirement of text descriptions for facial animation generation.", 2505.23180v1,"Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging","Ping Wang, Lishun Wang, Gang Qu, Xiaodong Wang, Yulun Zhang, Xin Yuan","Deep-unrolling and plug-and-play (PnP) approaches have become the de-facto standard solvers for single-pixel imaging (SPI) inverse problem. PnP approaches, a class of iterative algorithms where regularization is implicitly performed by an off-the-shelf deep denoiser, are flexible for varying compression ratios (CRs) but are limited in reconstruction accuracy and speed. Conversely, unrolling approaches, a class of multi-stage neural networks where a truncated iterative optimization process is transformed into an end-to-end trainable network, typically achieve better accuracy with faster inference but require fine-tuning or even retraining when CR changes. In this paper, we address the challenge of integrating the strengths of both classes of solvers. To this end, we design an efficient deep image restorer (DIR) for the unrolling of HQS (half quadratic splitting) and ADMM (alternating direction method of multipliers). More importantly, a general proximal trajectory (PT) loss function is proposed to train HQS/ADMM-unrolling networks such that learned DIR approximates the proximal operator of an ideal explicit restoration regularizer. Extensive experiments demonstrate that, the resulting proximal unrolling networks can not only flexibly handle varying CRs with a single model like PnP algorithms, but also outperform previous CR-specific unrolling networks in both reconstruction accuracy and speed. Source codes and models are available at https://github.com/pwangcs/ProxUnroll.","eess.IV, cs.CV",2025-05-29T07:16:57+00:00,2025-05-29T07:16:57+00:00,http://arxiv.org/abs/2505.23180v1,http://arxiv.org/abs/2505.23180v1,2025-05-29 07:16:57+00:00,"\label{sec:related} {\bf SPI Reconstruction.} SPI reconstruction is a classical inverse problem in the field of compressive imaging~\cite{RN5318,wang2023full,wang2024hierarchical,wang2023deep}. Mainstream solvers involve iterative optimization algorithms~\cite{figueiredo2007gradient,4587391,he2009exploiting,blumensath2009iterative,beck2009fast,kim2010compressed,yang2011alternating,dong2014compressive,Metzler2016FromDT}, PnP algorithms~\cite{zhang2021plug,hurault2022gradient,hurault2022proximal,fangs,hu2024stochastic}, single-stage neural networks~\cite{kulkarni2016reconnet,shi2019image,shi2019scalable,yao2019dr2}, and unrolling (multi-stage) neural networks~\cite{metzler2017learned,zhang2018ista,yang2018admm,zhang2020optimization,zhang2020amp,shen2022transcs,song2021memory,you2021coast,mou2022deep,ye2023csformer,song2023optimization,wang2023saunet,wang2024ufc,guo2024cpp,qu2024dual}. Iterative algorithms employ hand-crafted regularizers, \eg, sparsity~\cite{figueiredo2007gradient}, total variation~\cite{4587391}, non-local low rank~\cite{dong2014compressive,yuan2016generalized}, with a proximal algorithm, \eg, iterative shrinkage thresholding algorithm (ISTA)~\cite{beck2009fast}, approximate message passing (AMP)~\cite{Metzler2016FromDT}, HQS~\cite{geman1995nonlinear}, and ADMM~\cite{yang2011alternating}. Single-stage networks generally achieve inferior performance due to the insufficient utilization of imaging model information. PnP algorithms are flexible for varying CRs and unrolling networks achieve superior performance, thus making both the de-facto standard tools for SPI reconstruction. \noindent{\bf Proximal learning.} % Proximal learning is an emerging research topic in the filed of optimization algorithm and deep learning. The objective of proximal learning is to train a neural network as the proximal operator of a explicit regularization function. % Previous works focus on proximal deep denoisers. Regularization by denoising~\cite{romano2017little} shows that under homogeneity, nonexpansiveness and Jacobian symmetry conditions, a denoiser can be written as a gradient descent step on a convex function. However, such conditions are unrealistic for deep denoisers. Recently, a new type of gradient denoisers~\cite{hurault2022gradient,hurault2022proximal,fangs} has been proposed by training a denoiser as an explicit gradient step on a functional parameterized by a deep neural network. However, these denoisers must either be a contractive gradient~\cite{hurault2022gradient,hurault2022proximal} or be constrained to input convex neural networks (ICNN)~\cite{fangs}, inevitably sacrificing the expressivity. Proximal learning without assumptions and constraints remains an open challenge.","{\bf SPI Reconstruction.} SPI reconstruction is a classical inverse problem in the field of compressive imaging~\cite{RN5318,wang2023full,wang2024hierarchical,wang2023deep}. Mainstream solvers involve iterative optimization algorithms~\cite{figueiredo2007gradient,4587391,he2009exploiting,blumensath2009iterative,beck2009fast,kim2010compressed,yang2011alternating,dong2014compressive,Metzler2016FromDT}, PnP algorithms~\cite{zhang2021plug,hurault2022gradient,hurault2022proximal,fangs,hu2024stochastic}, single-stage neural networks~\cite{kulkarni2016reconnet,shi2019image,shi2019scalable,yao2019dr2}, and unrolling (multi-stage) neural networks~\cite{metzler2017learned,zhang2018ista,yang2018admm,zhang2020optimization,zhang2020amp,shen2022transcs,song2021memory,you2021coast,mou2022deep,ye2023csformer,song2023optimization,wang2023saunet,wang2024ufc,guo2024cpp,qu2024dual}. Iterative algorithms employ hand-crafted regularizers, \eg, sparsity~\cite{figueiredo2007gradient}, total variation~\cite{4587391}, non-local low rank~\cite{dong2014compressive,yuan2016generalized}, with a proximal algorithm, \eg, iterative shrinkage thresholding algorithm (ISTA)~\cite{beck2009fast}, approximate message passing (AMP)~\cite{Metzler2016FromDT}, HQS~\cite{geman1995nonlinear}, and ADMM~\cite{yang2011alternating}. Single-stage networks generally achieve inferior performance due to the insufficient utilization of imaging model information. PnP algorithms are flexible for varying CRs and unrolling networks achieve superior performance, thus making both the de-facto standard tools for SPI reconstruction. \noindent{\bf Proximal learning.} % Proximal learning is an emerging research topic in the filed of optimization algorithm and deep learning. The objective of proximal learning is to train a neural network as the proximal operator of a explicit regularization function. % Previous works focus on proximal deep denoisers. Regularization by denoising~\cite{romano2017little} shows that under homogeneity, nonexpansiveness and Jacobian symmetry conditions, a denoiser can be written as a gradient descent step on a convex function. However, such conditions are unrealistic for deep denoisers. Recently, a new type of gradient denoisers~\cite{hurault2022gradient,hurault2022proximal,fangs} has been proposed by training a denoiser as an explicit gradient step on a functional parameterized by a deep neural network. However, these denoisers must either be a contractive gradient~\cite{hurault2022gradient,hurault2022proximal} or be constrained to input convex neural networks (ICNN)~\cite{fangs}, inevitably sacrificing the expressivity. Proximal learning without assumptions and constraints remains an open challenge.","SPI Reconstruction. SPI reconstruction is a classical in- verse problem in the field of compressive imaging [48– 50, 59]. Mainstream solvers involve iterative optimization algorithms [3, 5, 10, 16, 21, 26, 28, 30, 53], PnP algo- rithms [15, 22–24, 66], single-stage neural networks [27, 39, 40, 55], and unrolling (multi-stage) neural networks [18, 29, 32, 35, 38, 41, 42, 47, 51, 54, 56, 57, 61, 62, 68]. Itera- tive algorithms employ hand-crafted regularizers, e.g., spar- sity [16], total variation [28], non-local low rank [10, 58], with a proximal algorithm, e.g., iterative shrinkage thresh- olding algorithm (ISTA) [3], approximate message passing (AMP) [30], HQS [17], and ADMM [53]. Single-stage net- works generally achieve inferior performance due to the in- sufficient utilization of imaging model information. PnP algorithms are flexible for varying CRs and unrolling net- works achieve superior performance, thus making both the de-facto standard tools for SPI reconstruction. Proximal learning. The objective of proximal learning is to train a neural network as the proximal operator of a explicit regularization function. Regularization by denoising [36] shows that under homogeneity, nonexpansiveness and Jaco- bian symmetry conditions, a denoiser can be written as a gradient descent step on a convex function. However, such conditions are unrealistic for deep denoisers. Recently, a new type of gradient denoisers [15, 23, 24] has been pro- posed by training a denoiser as an explicit gradient step on a functional parameterized by a deep neural network. However, these denoisers must either be a contractive gra- dient [23, 24] or be constrained to input convex neural net- works (ICNN) [15], inevitably sacrificing the expressivity. Proximal learning without assumptions and constraints re- mains an open challenge." 2505.22616v1,"PS4PRO: Pixel-to-pixel Supervision for Photorealistic Rendering and Optimization","Yezhi Shen, Qiuchen Zhai, Fengqing Zhu","Neural rendering methods have gained significant attention for their ability to reconstruct 3D scenes from 2D images. The core idea is to take multiple views as input and optimize the reconstructed scene by minimizing the uncertainty in geometry and appearance across the views. However, the reconstruction quality is limited by the number of input views. This limitation is further pronounced in complex and dynamic scenes, where certain angles of objects are never seen. In this paper, we propose to use video frame interpolation as the data augmentation method for neural rendering. Furthermore, we design a lightweight yet high-quality video frame interpolation model, PS4PRO (Pixel-to-pixel Supervision for Photorealistic Rendering and Optimization). PS4PRO is trained on diverse video datasets, implicitly modeling camera movement as well as real-world 3D geometry. Our model performs as an implicit world prior, enriching the photo supervision for 3D reconstruction. By leveraging the proposed method, we effectively augment existing datasets for neural rendering methods. Our experimental results indicate that our method improves the reconstruction performance on both static and dynamic scenes.","cs.CV, eess.IV",2025-05-28T17:35:39+00:00,2025-05-28T17:35:39+00:00,http://arxiv.org/abs/2505.22616v1,http://arxiv.org/abs/2505.22616v1,2025-05-28 17:35:39+00:00,"\label{sec:related} \textbf{Video Frame Interpolation:} Video frame interpolation (VFI) technique has been widely studied in recent years due to its significance in various video processing applications, including generating smooth slow-motion videos, increasing video frame rates, and enhancing visual quality. Conventional VFI methods rely on model-based motion estimation, blending, and morphing techniques~\cite{choi2007motion, parihar2022comprehensive}, which can be computationally intensive and prone to artifacts such as ghosting or blurring. Recent advances in deep learning have significantly improved VFI techniques, addressing many limitations of conventional methods. This has led to the emergence of end-to-end neural networks for VFI, such as Depth-Aware Video Frame Interpolation (DAIN)~\cite{DAIN}, Real-Time Intermediate Flow Estimation (RIFE)~\cite{RIFE}, Many-to-Many Splatting (M2M)~\cite{m2m}, and EMA-VFI~\cite{EMA}. These methods explore various approaches including mixed, sequential, and parallel feature extractions, and leverage advanced techniques like deformable convolution and depth estimation to produce more accurate and visually appealing interpolated frames. Despite the improvements, challenges such as handling large motion artifacts and preserving fine details remain, driving further research in this area. \textbf{Autonomous Driving NeRF:} Autonomous driving researchers have incorporated neural radiance fields (NeRF) to reconstruct 3D scenes and simulate safety-critical scenarios. UniSim~\cite{unisim} introduces a unified simulation approach that leverages NeRF-based scene representations to generate highly realistic synthetic data, significantly aiding the training of autonomous driving models. Neurad~\cite{neurad} pushes this further by optimizing NeRF for autonomous driving, focusing on modeling physical sensors and improving the fidelity of object-level details to shorten the simulation to real-world gap. Meanwhile, Lightning-NeRF~\cite{cao2024lightning} addresses the computational challenges of traditional NeRF implementations by introducing fast neural initialization techniques, allowing for efficient and scalable 3D scene reconstructions. These methods collectively advance the integration of NeRFs in autonomous driving by improving both the quality and speed of 3D scene representations. \textbf{Neural Rendering Enhancement:} To address challenges in neural rendering arising from insufficient or low-quality data, several methods have been developed. AlignNeRF~\cite{jiang2023alignerf} introduces an optical-flow network to enhance view alignment during training, thereby improving high-frequency details in reconstructed scenes. DiffusionNeRF~\cite{wynn2023diffusionerf} employs a diffusion model to learn gradients of RGBD patch priors, providing regularized geometry and color information for each scene. The 3DGS-enhancer~\cite{3dgsEh} incorporates the diffusion process after initial reconstruction to refine the quality of novel views. However, while effective on low-quality reconstructions, these methods introduce artifacts to high-quality reconstructions and substantially extend the training time, further prolonging an already lengthy process~\cite{yu2024viewcrafter}.","\textbf{Video Frame Interpolation:} Video frame interpolation (VFI) technique has been widely studied in recent years due to its significance in various video processing applications, including generating smooth slow-motion videos, increasing video frame rates, and enhancing visual quality. Conventional VFI methods rely on model-based motion estimation, blending, and morphing techniques~\cite{choi2007motion, parihar2022comprehensive}, which can be computationally intensive and prone to artifacts such as ghosting or blurring. Recent advances in deep learning have significantly improved VFI techniques, addressing many limitations of conventional methods. This has led to the emergence of end-to-end neural networks for VFI, such as Depth-Aware Video Frame Interpolation (DAIN)~\cite{DAIN}, Real-Time Intermediate Flow Estimation (RIFE)~\cite{RIFE}, Many-to-Many Splatting (M2M)~\cite{m2m}, and EMA-VFI~\cite{EMA}. These methods explore various approaches including mixed, sequential, and parallel feature extractions, and leverage advanced techniques like deformable convolution and depth estimation to produce more accurate and visually appealing interpolated frames. Despite the improvements, challenges such as handling large motion artifacts and preserving fine details remain, driving further research in this area. \textbf{Autonomous Driving NeRF:} Autonomous driving researchers have incorporated neural radiance fields (NeRF) to reconstruct 3D scenes and simulate safety-critical scenarios. UniSim~\cite{unisim} introduces a unified simulation approach that leverages NeRF-based scene representations to generate highly realistic synthetic data, significantly aiding the training of autonomous driving models. Neurad~\cite{neurad} pushes this further by optimizing NeRF for autonomous driving, focusing on modeling physical sensors and improving the fidelity of object-level details to shorten the simulation to real-world gap. Meanwhile, Lightning-NeRF~\cite{cao2024lightning} addresses the computational challenges of traditional NeRF implementations by introducing fast neural initialization techniques, allowing for efficient and scalable 3D scene reconstructions. These methods collectively advance the integration of NeRFs in autonomous driving by improving both the quality and speed of 3D scene representations. \textbf{Neural Rendering Enhancement:} To address challenges in neural rendering arising from insufficient or low-quality data, several methods have been developed. AlignNeRF~\cite{jiang2023alignerf} introduces an optical-flow network to enhance view alignment during training, thereby improving high-frequency details in reconstructed scenes. DiffusionNeRF~\cite{wynn2023diffusionerf} employs a diffusion model to learn gradients of RGBD patch priors, providing regularized geometry and color information for each scene. The 3DGS-enhancer~\cite{3dgsEh} incorporates the diffusion process after initial reconstruction to refine the quality of novel views. However, while effective on low-quality reconstructions, these methods introduce artifacts to high-quality reconstructions and substantially extend the training time, further prolonging an already lengthy process~\cite{yu2024viewcrafter}.","Video Frame Interpolation: Video frame interpolation (VFI) technique has been widely studied in recent years due to its significance in various video processing appli- cations, including generating smooth slow-motion videos, increasing video frame rates, and enhancing visual qual- ity. Conventional VFI methods rely on model-based motion estimation, blending, and morphing techniques [11, 33], which can be computationally intensive and prone to arti- facts such as ghosting or blurring. Recent advances in deep learning have significantly improved VFI techniques, ad- dressing many limitations of conventional methods. This has led to the emergence of end-to-end neural networks for VFI, such as Depth-Aware Video Frame Interpola- tion (DAIN) [2], Real-Time Intermediate Flow Estimation (RIFE) [18], Many-to-Many Splatting (M2M) [17], and EMA-VFI [48]. These methods explore various approaches including mixed, sequential, and parallel feature extrac- tions, and leverage advanced techniques like deformable convolution and depth estimation to produce more accurate and visually appealing interpolated frames. Despite the im- provements, challenges such as handling large motion arti- facts and preserving fine details remain, driving further re- search in this area. Autonomous Driving NeRF: Autonomous driving re- searchers have incorporated neural radiance fields (NeRF) to reconstruct 3D scenes and simulate safety-critical scenar- ios. UniSim [45] introduces a unified simulation approach that leverages NeRF-based scene representations to gener- ate highly realistic synthetic data, significantly aiding the training of autonomous driving models. Neurad [39] pushes this further by optimizing NeRF for autonomous driving, focusing on modeling physical sensors and improving the fidelity of object-level details to shorten the simulation to real-world gap. Meanwhile, Lightning-NeRF [7] addresses the computational challenges of traditional NeRF imple-mentations by introducing fast neural initialization tech- niques, allowing for efficient and scalable 3D scene recon- structions. These methods collectively advance the integra- tion of NeRFs in autonomous driving by improving both the quality and speed of 3D scene representations. Neural Rendering Enhancement: To address chal- lenges in neural rendering arising from insufficient or low-quality data, several methods have been developed. AlignNeRF [21] introduces an optical-flow network to en- hance view alignment during training, thereby improving high-frequency details in reconstructed scenes. Diffusion- NeRF [42] employs a diffusion model to learn gradients of RGBD patch priors, providing regularized geometry and color information for each scene. The 3DGS-enhancer [25] incorporates the diffusion process after initial reconstruc- tion to refine the quality of novel views. However, while ef- fective on low-quality reconstructions, these methods intro- duce artifacts to high-quality reconstructions and substan- tially extend the training time, further prolonging an already lengthy process [47]." 2505.22458v1,Universal Domain Adaptation for Semantic Segmentation,"Seun-An Choe, Keon-Hee Park, Jinwoo Choi, Gyeong-Moon Park","Unsupervised domain adaptation for semantic segmentation (UDA-SS) aims to transfer knowledge from labeled source data to unlabeled target data. However, traditional UDA-SS methods assume that category settings between source and target domains are known, which is unrealistic in real-world scenarios. This leads to performance degradation if private classes exist. To address this limitation, we propose Universal Domain Adaptation for Semantic Segmentation (UniDA-SS), achieving robust adaptation even without prior knowledge of category settings. We define the problem in the UniDA-SS scenario as low confidence scores of common classes in the target domain, which leads to confusion with private classes. To solve this problem, we propose UniMAP: UniDA-SS with Image Matching and Prototype-based Distinction, a novel framework composed of two key components. First, Domain-Specific Prototype-based Distinction (DSPD) divides each class into two domain-specific prototypes, enabling finer separation of domain-specific features and enhancing the identification of common classes across domains. Second, Target-based Image Matching (TIM) selects a source image containing the most common-class pixels based on the target pseudo-label and pairs it in a batch to promote effective learning of common classes. We also introduce a new UniDA-SS benchmark and demonstrate through various experiments that UniMAP significantly outperforms baselines. The code is available at \href{https://github.com/KU-VGI/UniMAP}{this https URL}.",cs.CV,2025-05-28T15:14:11+00:00,2025-05-28T15:14:11+00:00,http://arxiv.org/abs/2505.22458v1,http://arxiv.org/abs/2505.22458v1,2025-05-28 15:14:11+00:00,"\subsection{Semantic Segmentation.} Semantic segmentation aims to classify each pixel in an image into a specific semantic. A foundational approach, Fully Convolutional Networks (FCNs)~\cite{long2015fully}, has demonstrated impressive performance in this task. To enhance contextual understanding, subsequent works have introduced methods such as dilated convolutions ~\cite{chen2017deeplab}, global pooling ~\cite{liu2015parsenet}, pyramid pooling ~\cite{zhao2017pyramid}, and attention mechanisms~\cite{zhao2018psanet, zhu2019asymmetric}. More recently, transformer-based methods have achieved significant performance gains ~\cite{xie2021segformer, zheng2021rethinking}. Despite various studies, semantic segmentation models are still vulnerable to domain shifts or category shifts. To address this issue, we propose a universal domain adaptation for semantic segmentation that handles domain shifts and category shifts. % \vspace{-1mm} \subsection{Unsupervised Domain Adaptation for Semantic Segmentation.} % The goal of Unsupervised Domain Adaptation (UDA) is to bridge the domain gap by transferring the knowledge from the labeled source domain to the unlabeled target domain. UDA is an important task for semantic segmentation because it can efficiently solve the cost problem of per-pixel annotation. The general ideas for UDA semantic segmentation include adversarial learning~\cite{hong2018conditional,kim2020learning,tsai2018learning} and self-training strategy~\cite{tranheden2021dacs,hoyer2022daformer,hoyer2023mic}. We briefly review some recent works. First, adversarial learning-based methods employ a discriminator network. A segmentation network generates the segmentation maps of the different domains, and the discriminator network is trained to predict the domain between the source and target domain. The segmentation network aims to fool the discriminator. Then, it ensures that the features of the two domains have a similar distribution. In recent years, self-training methods have shown significant performance improvement in UDA. The self-training approach generates the segmentation map of the given target image and obtains pseudo labels by collecting only the results of pixels whose confidence score exceeds a certain threshold. The generated pseudo labels then iteratively re-train the model with both the ground truth of the source domain and pseudo labels of the target domain. At this time, since the quality of the pseudo labels is crucial, various studies have been conducted to refine the noisy pseudo labels. However, this research is primarily conducted in a closed-set setting, assuming complete alignment of label sets between the source and target domains. In real-world scenarios, the absence of labels for the target makes it challenging to confirm whether it is a closed setting, thereby constraining its applicability. Unsupervised Domain Adaptation (UDA) aims to leverage labeled source data to achieve high performance on unlabeled target data. Existing UDA methods for semantic segmentation can be categorized into two approaches: adversarial learning-based and self-training. Adversarial learning-based methods~\cite{tsai2018learning,hong2018conditional,kim2020learning,pan2020unsupervised,tsai2019domain,chen2019synergistic,du2019ssf} use an adversarial domain classifier to learn domain-invariant features. Self-training methods~\cite{melas2021pixmatch,hoyer2022daformer,hoyer2022hrda,zou2018unsupervised,chen2019domain,zou2019confidence, wang2021domain,lian2019constructing,li2019bidirectional,wang2021uncertainty,zhang2021prototypical, tranheden2021dacs} assign pseudo-labels to each pixel in the target domain using confidence thresholding, and several self-training approaches further enhance target domain performance by re-training the model with these pseudo-labels. Although UDA allows the model to be trained on the target domain without annotations, it requires prior knowledge of class overlap between the source and target domains, which limits the model's applicability and generalizability. To overcome this limitation, we propose a universal domain adaptation approach for semantic segmentation, where the model can adapt to the target domain without requiring prior knowledge of class overlap. \subsection{Universal Domain Adaptation in Classification} Universal Domain Adaptation (UniDA)~\cite{you2019universal} was introduced to address various domain adaptation settings, such as closed-set, open-set, and partial domain adaptation. UniDA is a more challenging scenario because it operates without prior knowledge of the category configuration of the source and target domains. To tackle UniDA in classification tasks, prior works have focused on computing confidence scores for known classes and treating samples with lower scores as unknowns. CMU~\cite{fu2020learning} proposed a thresholding function, while ROS~\cite{bucci2020effectiveness} used the mean confidence score as a threshold, which results in neglecting about half of the target data as unknowns. DANCE~\cite{saito2020universal} set a threshold based on the number of classes in the source domain. OVANet~\cite{saito2021ovanet} introduced training a threshold using source samples and adapting it to the target domain. While UniDA has been extensively studied in the context of classification tasks, it remains underexplored in semantic segmentation, which requires a higher level of visual understanding due to the need for pixel-wise classification. In this work, we aim to investigate UniDA for semantic segmentation. % Universal Domain Adaptation (UniDA) extends the capabilities of domain adaptation by allowing the target domain to contain any combination of shared, source-private, and target-private classes. UniDA aims to adaptively handle both common and unknown classes without assuming any predefined class overlap. This flexibility makes UniDA a more robust framework for real-world applications where the target domain may include diverse categories not observed in the source domain. % Recent UniDA methods tackle this challenge by leveraging pseudo-labeling and self-training techniques to identify shared classes dynamically while treating unrecognized target-specific classes as ``unknown"" \cite{you2019universal, fu2020learning}. In the context of semantic segmentation, where spatial class distributions vary significantly across images, UniDA has proven effective in achieving a balanced alignment between domains. However, a primary difficulty lies in effectively adapting to complex spatial variations inherent in segmentation tasks. % To address this, we propose a novel approach specifically tailored for semantic segmentation in the UniDA setting. Our method, Target Pseudo Label Based Sampling, leverages target pseudo labels to guide both common and unknown class sampling, enhancing model performance across diverse and complex target domains without requiring prior knowledge of class definitions.","\subsection{Semantic Segmentation.} Semantic segmentation aims to classify each pixel in an image into a specific semantic. A foundational approach, Fully Convolutional Networks (FCNs)~\cite{long2015fully}, has demonstrated impressive performance in this task. To enhance contextual understanding, subsequent works have introduced methods such as dilated convolutions ~\cite{chen2017deeplab}, global pooling ~\cite{liu2015parsenet}, pyramid pooling ~\cite{zhao2017pyramid}, and attention mechanisms~\cite{zhao2018psanet, zhu2019asymmetric}. More recently, transformer-based methods have achieved significant performance gains ~\cite{xie2021segformer, zheng2021rethinking}. Despite various studies, semantic segmentation models are still vulnerable to domain shifts or category shifts. To address this issue, we propose a universal domain adaptation for semantic segmentation that handles domain shifts and category shifts. % \vspace{-1mm} \subsection{Unsupervised Domain Adaptation for Semantic Segmentation.} % The goal of Unsupervised Domain Adaptation (UDA) is to bridge the domain gap by transferring the knowledge from the labeled source domain to the unlabeled target domain. UDA is an important task for semantic segmentation because it can efficiently solve the cost problem of per-pixel annotation. The general ideas for UDA semantic segmentation include adversarial learning~\cite{hong2018conditional,kim2020learning,tsai2018learning} and self-training strategy~\cite{tranheden2021dacs,hoyer2022daformer,hoyer2023mic}. We briefly review some recent works. First, adversarial learning-based methods employ a discriminator network. A segmentation network generates the segmentation maps of the different domains, and the discriminator network is trained to predict the domain between the source and target domain. The segmentation network aims to fool the discriminator. Then, it ensures that the features of the two domains have a similar distribution. In recent years, self-training methods have shown significant performance improvement in UDA. The self-training approach generates the segmentation map of the given target image and obtains pseudo labels by collecting only the results of pixels whose confidence score exceeds a certain threshold. The generated pseudo labels then iteratively re-train the model with both the ground truth of the source domain and pseudo labels of the target domain. At this time, since the quality of the pseudo labels is crucial, various studies have been conducted to refine the noisy pseudo labels. However, this research is primarily conducted in a closed-set setting, assuming complete alignment of label sets between the source and target domains. In real-world scenarios, the absence of labels for the target makes it challenging to confirm whether it is a closed setting, thereby constraining its applicability. Unsupervised Domain Adaptation (UDA) aims to leverage labeled source data to achieve high performance on unlabeled target data. Existing UDA methods for semantic segmentation can be categorized into two approaches: adversarial learning-based and self-training. Adversarial learning-based methods~\cite{tsai2018learning,hong2018conditional,kim2020learning,pan2020unsupervised,tsai2019domain,chen2019synergistic,du2019ssf} use an adversarial domain classifier to learn domain-invariant features. Self-training methods~\cite{melas2021pixmatch,hoyer2022daformer,hoyer2022hrda,zou2018unsupervised,chen2019domain,zou2019confidence, wang2021domain,lian2019constructing,li2019bidirectional,wang2021uncertainty,zhang2021prototypical, tranheden2021dacs} assign pseudo-labels to each pixel in the target domain using confidence thresholding, and several self-training approaches further enhance target domain performance by re-training the model with these pseudo-labels. Although UDA allows the model to be trained on the target domain without annotations, it requires prior knowledge of class overlap between the source and target domains, which limits the model's applicability and generalizability. To overcome this limitation, we propose a universal domain adaptation approach for semantic segmentation, where the model can adapt to the target domain without requiring prior knowledge of class overlap. \subsection{Universal Domain Adaptation in Classification} Universal Domain Adaptation (UniDA)~\cite{you2019universal} was introduced to address various domain adaptation settings, such as closed-set, open-set, and partial domain adaptation. UniDA is a more challenging scenario because it operates without prior knowledge of the category configuration of the source and target domains. To tackle UniDA in classification tasks, prior works have focused on computing confidence scores for known classes and treating samples with lower scores as unknowns. CMU~\cite{fu2020learning} proposed a thresholding function, while ROS~\cite{bucci2020effectiveness} used the mean confidence score as a threshold, which results in neglecting about half of the target data as unknowns. DANCE~\cite{saito2020universal} set a threshold based on the number of classes in the source domain. OVANet~\cite{saito2021ovanet} introduced training a threshold using source samples and adapting it to the target domain. While UniDA has been extensively studied in the context of classification tasks, it remains underexplored in semantic segmentation, which requires a higher level of visual understanding due to the need for pixel-wise classification. In this work, we aim to investigate UniDA for semantic segmentation. % Universal Domain Adaptation (UniDA) extends the capabilities of domain adaptation by allowing the target domain to contain any combination of shared, source-private, and target-private classes. UniDA aims to adaptively handle both common and unknown classes without assuming any predefined class overlap. This flexibility makes UniDA a more robust framework for real-world applications where the target domain may include diverse categories not observed in the source domain. % Recent UniDA methods tackle this challenge by leveraging pseudo-labeling and self-training techniques to identify shared classes dynamically while treating unrecognized target-specific classes as ``unknown"" \cite{you2019universal, fu2020learning}. In the context of semantic segmentation, where spatial class distributions vary significantly across images, UniDA has proven effective in achieving a balanced alignment between domains. However, a primary difficulty lies in effectively adapting to complex spatial variations inherent in segmentation tasks. % To address this, we propose a novel approach specifically tailored for semantic segmentation in the UniDA setting. Our method, Target Pseudo Label Based Sampling, leverages target pseudo labels to guide both common and unknown class sampling, enhancing model performance across diverse and complex target domains without requiring prior knowledge of class definitions.", 2505.22427v1,RC-AutoCalib: An End-to-End Radar-Camera Automatic Calibration Network,"Van-Tin Luu, Yon-Lin Cai, Vu-Hoang Tran, Wei-Chen Chiu, Yi-Ting Chen, Ching-Chun Huang","This paper presents a groundbreaking approach - the first online automatic geometric calibration method for radar and camera systems. Given the significant data sparsity and measurement uncertainty in radar height data, achieving automatic calibration during system operation has long been a challenge. To address the sparsity issue, we propose a Dual-Perspective representation that gathers features from both frontal and bird's-eye views. The frontal view contains rich but sensitive height information, whereas the bird's-eye view provides robust features against height uncertainty. We thereby propose a novel Selective Fusion Mechanism to identify and fuse reliable features from both perspectives, reducing the effect of height uncertainty. Moreover, for each view, we incorporate a Multi-Modal Cross-Attention Mechanism to explicitly find location correspondences through cross-modal matching. During the training phase, we also design a Noise-Resistant Matcher to provide better supervision and enhance the robustness of the matching mechanism against sparsity and height uncertainty. Our experimental results, tested on the nuScenes dataset, demonstrate that our method significantly outperforms previous radar-camera auto-calibration methods, as well as existing state-of-the-art LiDAR-camera calibration techniques, establishing a new benchmark for future research. The code is available at https://github.com/nycu-acm/RC-AutoCalib.",cs.CV,2025-05-28T14:52:31+00:00,2025-05-28T14:52:31+00:00,http://arxiv.org/abs/2505.22427v1,http://arxiv.org/abs/2505.22427v1,2025-05-28 14:52:31+00:00,"\label{sec:related} \subsection{Offline Calibration} %reference ""Spatiotemporal Calibration of 3D Millimetre-Wavelength Radar-Camera Pairs"" Offline calibration methods primarily depend on specific calibration targets and cannot address real-time errors. These methods are tailored for fixed environments and necessitate substantial manual effort to achieve precision, rendering them unsuitable for dynamic conditions and generally reserved for controlled settings. Early radar-camera calibration techniques focused on merging radar signals with camera data through homography projection that maps points from the radar's horizontal plane to the camera image plane. Due to inherent noise in radar sensors, these early methods often required specialized trihedral reflectors to establish accurate correspondences \cite{sugimoto2004obstacle,wang2011integrating,kim2014data,kim2018radar}. However, the radar's limitation in accurately measuring the elevation of distant targets indicated that reflectors had to be positioned precisely on the radar's horizontal plane \cite{sugimoto2004obstacle}. More recent radar calibration algorithms aim to minimize ``reprojection error'' to better synchronize object detection across both sensor fields of view, using techniques like estimating radar-to-camera transformations via reprojection error \cite{kim2017comparative}, or intersecting back-projected camera rays with 3D ``arcs'' that conform to radar measurements to determine necessary transformations \cite{el2015radar}. Despite improvements, these methods still rely on specific targets and manual input efforts. %Offline methods primarily rely on calibration targets and cannot address real-time errors. They are designed for pre-determined environments and require significant manual intervention for accuracy, making them less adaptable to changing conditions and typically used in controlled settings. Early radar-camera calibration algorithms aimed to integrate 2D radar with camera data by computing projective homographies mapping points from the horizontal radar plane to the camera image plane. Due to radar sensor noise, these methods often required specialized trihedral reflectors to facilitate correspondence \cite{sugimoto2004obstacle,wang2011integrating,kim2014data,kim2018radar}. However, 2D radar cannot accurately measure the elevation of distant targets and only detects those slightly above the horizontal plane, necessitating reflectors to lie on the radar's horizontal plane \cite{sugimoto2004obstacle}. Newer 2D radar calibration algorithms typically minimize ""reprojection error"" to align objects within both sensor fields of view. These methods include using reprojection error to estimate the radar-to-camera transformation \cite{kim2017comparative}, or intersecting backprojected camera rays with 3D ""arcs"" where radar measurements must lie to determine the transformation \cite{el2015radar}. However, these methods still require specific targets and significant human efforts. \subsection{Online Calibration} Online methods primarily extract features from natural scenes for calibration, offering greater flexibility and adaptability to various scenarios. The rapid development of deep learning has demonstrated neural networks' powerful feature extraction capabilities. However, due to the aforementioned challenges associated with radar, online calibration methods for radar and cameras are less prevalent. In this paper, we focus on developing an end-to-end architecture for the online auto-calibration of radar and cameras, leveraging robust benchmarks established by LiDAR and camera calibration methods. \subsubsection*{LiDAR and Camera.} Li et al. \cite{li2023automatic} categorized targetless calibration methods into information theory-based, feature-based, ego-motion-based, and learning-based approaches. Pandey et al. \cite{pandey2012automatic} used mutual information between point cloud intensities and image grayscale values. Taylor and Nieto \cite{taylor2015motion} utilized sensor ego-motion on moving vehicles to estimate extrinsic parameters. Levinson and Thrun \cite{levinson2013automatic} as well as Yuan et al. \cite{yuan2021pixel} optimized depth-discontinuous and depth-continuous edge features, respectively. Regnet \cite{schneider2017regnet} and CalibNet \cite{iyer2018calibnet} employed deep learning to match features and regress calibration parameters. CalibRCNN \cite{shi2020calibrcnn} combined CNN with LSTM \cite{sak2014long} and added pose constraints for accuracy. LCCNet \cite{lv2021lccnet} used cost volume for feature correlation. Despite achieving positive results, these methods do not explicitly learn the correspondence between point clouds and images. In contrast, in this paper, we introduce Explicit Feature Matching Supervision to guide the model in learning the correspondence relationship between point clouds and images more effectively. \begin{figure*}[t] \centering \includegraphics[width=.85\linewidth]{fig/overall.png} \caption{ Our system flow for iterative online auto-calibration starts with the input image, point cloud, and initial calibration parameters \( T_{init} \), which first pass through the Data Transform module (\cref{sec:data transform}). Here, we obtain the estimated image depth map and miscalibrated radar depth map from the frontal view (FV) perspective, along with the pseudo-BEV image and miscalibrated BEV radar projection. These outputs are then processed in the Feature Extraction module (\cref{sec:feature extraction}), where features from both FV and BEV perspectives undergo Feature Matching (\cref{sec:feature matching}) between the image and radar data. Subsequently, after Feature Matching and Fusion (\cref{sec:selective fusion}), the Regression Head (\cref{sec:regression head}) generates the rotation and translation vectors that form the transformation matrix, $\hat{T}_{pred}^{i}$, to refine calibration. Finally, $\hat{T}_{pred}^{i}$ is fed back to \(T_{init}\) to update the calibration parameters for the next $i$-th iteration.} \vspace{-0.3cm} \label{fig:overall} \end{figure*} \subsubsection*{Radar and Camera.} Per{\v{s}}i{'c} et al. \cite{pervsic2021online} proposed an online calibration method based on detecting and tracking moving objects, focusing on rotational calibration. Sch{\""o}ller et al. \cite{scholler2019targetless} used deep learning to learn rotational calibration matrices but did not address translational calibration. Additionally, their methods utilize stationary traffic radars fixed on highway positions, differing from ours that employ vehicle-mounted 3D radars moving with the car. Wisec et al. \cite{wise2021continuous} developed a targetless calibration method for 3D radar and cameras, using radar velocity information and motion-based camera pose measurements, solved with nonlinear optimization. Later, the same research team extended their work \cite{wise2021continuous} to include radar ego-velocity estimates and unscaled camera pose measurements in \cite{wise2023spatiotemporal} for a more complete spatiotemporal calibration. However, these methods overly rely on radar speed measurements, making them less robust to noise. Additionally, they do not leverage the power of deep learning and fail to explicitly establish the correspondence between radar and images.","\subsection{Offline Calibration} %reference ""Spatiotemporal Calibration of 3D Millimetre-Wavelength Radar-Camera Pairs"" Offline calibration methods primarily depend on specific calibration targets and cannot address real-time errors. These methods are tailored for fixed environments and necessitate substantial manual effort to achieve precision, rendering them unsuitable for dynamic conditions and generally reserved for controlled settings. Early radar-camera calibration techniques focused on merging radar signals with camera data through homography projection that maps points from the radar's horizontal plane to the camera image plane. Due to inherent noise in radar sensors, these early methods often required specialized trihedral reflectors to establish accurate correspondences \cite{sugimoto2004obstacle,wang2011integrating,kim2014data,kim2018radar}. However, the radar's limitation in accurately measuring the elevation of distant targets indicated that reflectors had to be positioned precisely on the radar's horizontal plane \cite{sugimoto2004obstacle}. More recent radar calibration algorithms aim to minimize ``reprojection error'' to better synchronize object detection across both sensor fields of view, using techniques like estimating radar-to-camera transformations via reprojection error \cite{kim2017comparative}, or intersecting back-projected camera rays with 3D ``arcs'' that conform to radar measurements to determine necessary transformations \cite{el2015radar}. Despite improvements, these methods still rely on specific targets and manual input efforts. %Offline methods primarily rely on calibration targets and cannot address real-time errors. They are designed for pre-determined environments and require significant manual intervention for accuracy, making them less adaptable to changing conditions and typically used in controlled settings. Early radar-camera calibration algorithms aimed to integrate 2D radar with camera data by computing projective homographies mapping points from the horizontal radar plane to the camera image plane. Due to radar sensor noise, these methods often required specialized trihedral reflectors to facilitate correspondence \cite{sugimoto2004obstacle,wang2011integrating,kim2014data,kim2018radar}. However, 2D radar cannot accurately measure the elevation of distant targets and only detects those slightly above the horizontal plane, necessitating reflectors to lie on the radar's horizontal plane \cite{sugimoto2004obstacle}. Newer 2D radar calibration algorithms typically minimize ""reprojection error"" to align objects within both sensor fields of view. These methods include using reprojection error to estimate the radar-to-camera transformation \cite{kim2017comparative}, or intersecting backprojected camera rays with 3D ""arcs"" where radar measurements must lie to determine the transformation \cite{el2015radar}. However, these methods still require specific targets and significant human efforts. \subsection{Online Calibration} Online methods primarily extract features from natural scenes for calibration, offering greater flexibility and adaptability to various scenarios. The rapid development of deep learning has demonstrated neural networks' powerful feature extraction capabilities. However, due to the aforementioned challenges associated with radar, online calibration methods for radar and cameras are less prevalent. In this paper, we focus on developing an end-to-end architecture for the online auto-calibration of radar and cameras, leveraging robust benchmarks established by LiDAR and camera calibration methods. \subsubsection*{LiDAR and Camera.} Li et al. \cite{li2023automatic} categorized targetless calibration methods into information theory-based, feature-based, ego-motion-based, and learning-based approaches. Pandey et al. \cite{pandey2012automatic} used mutual information between point cloud intensities and image grayscale values. Taylor and Nieto \cite{taylor2015motion} utilized sensor ego-motion on moving vehicles to estimate extrinsic parameters. Levinson and Thrun \cite{levinson2013automatic} as well as Yuan et al. \cite{yuan2021pixel} optimized depth-discontinuous and depth-continuous edge features, respectively. Regnet \cite{schneider2017regnet} and CalibNet \cite{iyer2018calibnet} employed deep learning to match features and regress calibration parameters. CalibRCNN \cite{shi2020calibrcnn} combined CNN with LSTM \cite{sak2014long} and added pose constraints for accuracy. LCCNet \cite{lv2021lccnet} used cost volume for feature correlation. Despite achieving positive results, these methods do not explicitly learn the correspondence between point clouds and images. In contrast, in this paper, we introduce Explicit Feature Matching Supervision to guide the model in learning the correspondence relationship between point clouds and images more effectively. \subsubsection*{Radar and Camera.} Per{\v{s}}i{'c} et al. \cite{pervsic2021online} proposed an online calibration method based on detecting and tracking moving objects, focusing on rotational calibration. Sch{\""o}ller et al. \cite{scholler2019targetless} used deep learning to learn rotational calibration matrices but did not address translational calibration. Additionally, their methods utilize stationary traffic radars fixed on highway positions, differing from ours that employ vehicle-mounted 3D radars moving with the car. Wisec et al. \cite{wise2021continuous} developed a targetless calibration method for 3D radar and cameras, using radar velocity information and motion-based camera pose measurements, solved with nonlinear optimization. Later, the same research team extended their work \cite{wise2021continuous} to include radar ego-velocity estimates and unscaled camera pose measurements in \cite{wise2023spatiotemporal} for a more complete spatiotemporal calibration. However, these methods overly rely on radar speed measurements, making them less robust to noise. Additionally, they do not leverage the power of deep learning and fail to explicitly establish the correspondence between radar and images.", 2505.22167v1,"Q-VDiT: Towards Accurate Quantization and Distillation of Video-Generation Diffusion Transformers","Weilun Feng, Chuanguang Yang, Haotong Qin, Xiangqi Li, Yu Wang, Zhulin An, Libo Huang, Boyu Diao, Zixiang Zhao, Yongjun Xu, Michele Magno","Diffusion transformers (DiT) have demonstrated exceptional performance in video generation. However, their large number of parameters and high computational complexity limit their deployment on edge devices. Quantization can reduce storage requirements and accelerate inference by lowering the bit-width of model parameters. Yet, existing quantization methods for image generation models do not generalize well to video generation tasks. We identify two primary challenges: the loss of information during quantization and the misalignment between optimization objectives and the unique requirements of video generation. To address these challenges, we present Q-VDiT, a quantization framework specifically designed for video DiT models. From the quantization perspective, we propose the Token-aware Quantization Estimator (TQE), which compensates for quantization errors in both the token and feature dimensions. From the optimization perspective, we introduce Temporal Maintenance Distillation (TMD), which preserves the spatiotemporal correlations between frames and enables the optimization of each frame with respect to the overall video context. Our W3A6 Q-VDiT achieves a scene consistency of 23.40, setting a new benchmark and outperforming current state-of-the-art quantization methods by 1.9$\times$. Code will be available at https://github.com/cantbebetter2/Q-VDiT.",cs.CV,2025-05-28T09:33:52+00:00,2025-05-28T09:33:52+00:00,http://arxiv.org/abs/2505.22167v1,http://arxiv.org/abs/2505.22167v1,2025-05-28 09:33:52+00:00,"\subsection{Diffusion Model} % 一个训练好的扩撒模型可以通过对随机高斯噪声的逐步去噪得到高质量的生成图像。扩散模型通过对数据分布逐步添加噪声来进行前向采样过程。在DDPM中,扩散模型的前向加噪过程为一个马尔可夫链, % Diffusion models~\cite{ho2020ddpm, rombach2022ldm} perform a forward sampling process by gradually adding noise to the data distribution $\mathbf{x}_0 \sim q(x)$. In DDPM, the forward noise addition process of the diffusion model is a Markov chain, taking the form: \begin{equation} \begin{gathered} q(\mathbf{x}_{1:T}|\mathbf{x}_0) = \prod \limits_{t=1}^T q(\mathbf{x}_t|\mathbf{x}_{t-1}), \\ q(\mathbf{x}_t|\mathbf{x}_{t-1}) = \mathcal{N}(\mathbf{x}_t; \sqrt{\alpha_t}\mathbf{x}_{t-1}, \beta_t\mathbf{I}), \end{gathered} \end{equation} where $\alpha_t=1-\beta_t$, $\beta_t$ is time-related schedule. Diffusion models generate high-quality images by applying a denoising process to randomly sampled Gaussian noise $\mathbf{x}_T \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$, taking the form: \begin{equation} p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_t) = \mathcal{N}(\mathbf{x}_{t-1};\hat{\mu}_{\theta, t}(\mathbf{x_t}), \hat{\beta}_t\mathbf{I}), \end{equation} where $\hat{\mu}_{\theta, t}$ and $\hat{\beta}_t$ are outputed by the diffusion model. \subsection{Diffusion Quantization} % PTQ只使用少量的校准数据来为量化参数进行校准,因其无需微调模型的权重因而仅需很少的内存占用和校准时间。目前针对与扩散模型的PTQ方法PTQ4DM,Q-Diffusion做出了最初的探索,之后的工作PTQ-D,TFMQ-DM,APQ-DM和QuEST分别从量化误差,时间步信息,校准数据与校准模块等方向做出了改进,进一步提升了扩散模型在量化后的性能。但是,这些基于PTQ的方法在极低比特(2,3bit)下的性能表现不佳。QAT往往需要大量的数据来对模型权重和量化参数进行全量微调,因此保证了在极低比特下的性能收敛。常用的QAT方法如LSQ和针对扩散模型的方法Q-dm和Binarydm保证了在极低比特甚至二值量化后的模型性能,但是其相对于PTQ方法需要大量额外的训练时长,导致了更大的训练负担。为了结合QAT的优点并减少训练的所需时长,EfficientDM利用LoRA方法来对量化的扩散模型进行低秩微调,不仅具有与PTQ方法相似的训练时长,并且因为对模型的权重进行了低秩微调,也保证了模型的量化性能。但不论是传统的PTQ方法还是EfficientDM,都无法保证扩散模型在极低比特量化后的性能。因此,本文主要关注于在类似PTQ方法的训练时长内,将扩散模型的低比特量化推向极限性能。 % Post-training quantization (PTQ) and Quantization-aware training (QAT) are two main approaches for model quantization. The commonly used QAT methods like LSQ ~\cite{esser2019lsq} and methods for diffusion models Q-dm~\cite{li2024qdm} and Binarydm~\cite{zheng2024binarydm} ensure the model performance at extremely low bit-width or even binary quantization, but they require a lot of extra training time compared with PTQ methods, resulting larger training burden. PTQ methods for diffusion model PTQ4DM~\cite{shang2023ptq4dm} and Q-Diffusion~\cite{li2023qdiffusion} have made initial exploration. The following works PTQ-D~\cite{he2024ptqd}, TFMQ-DM~\cite{huang2024tfmq}, APQ-DM~\cite{wang2024apqdm} and QuEST~\cite{wang2024quest} have made improvements in the direction of quantization error, temporal feature, calibration data, and calibration module. The performance of diffusion model after quantization is further improved. However, the performance of PTQ-based methods suffers from severe degradation at extremely low bit-width. To combine the advantages of QAT and reduce the required training time, EfficientDM~\cite{he2023efficientdm} uses LoRA~\cite{hu2021lora} method to fine-tune the quantized diffusion model.However, neither of these efficient quantization methods can guarantee the performance of the diffusion model under low bit. Therefore, this paper focuses on maximizing the extremely low bit quantization diffusion models performance. For diffusion model quantization, methods such as Q-DM~\cite{li2024qdm}, BinaryDM~\cite{zheng2024binarydm}, BiDM~\cite{zheng2024bidm}, and TerDiT~\cite{lu2024terdit} use quantization-aware training to maintain model performance under 1-2 bits. However, these approaches require extensive additional training time, often lasting several days. For more efficient quantization, approaches like Q-Diffusion~\cite{li2023qdiffusion}, PTQ4DM~\cite{shang2023ptq4dm}, PTQ-D~\cite{he2024ptqd}, TFMQ-DM~\cite{huang2024tfmq}, QuEST~\cite{wang2024quest}, EfficientDM~\cite{he2023efficientdm}, and MixDQ~\cite{zhao2025mixdq} explore quantization from the perspectives of quantization error, temporal features, and calibration data, particularly for Unet-based diffusion models. Similarly, Q-DiT~\cite{chen2024qdit}, PTQ4DiT~\cite{wu2024ptq4dit}, SVDQuant~\cite{li2024svdqunat}, and ViDiT-Q~\cite{zhao2024vidit} focus on the quantization of diffusion transformers, considering their unique data distributions and computational characteristics. However, existing quantization methods primarily focus on image generation tasks, with limited exploration into the more challenging domain of video generation. Therefore, this paper focuses on optimizing the quantization performance of video-generation diffusion transformers.","\subsection{Diffusion Model} % 一个训练好的扩撒模型可以通过对随机高斯噪声的逐步去噪得到高质量的生成图像。扩散模型通过对数据分布逐步添加噪声来进行前向采样过程。在DDPM中,扩散模型的前向加噪过程为一个马尔可夫链, % Diffusion models~\cite{ho2020ddpm, rombach2022ldm} perform a forward sampling process by gradually adding noise to the data distribution $\mathbf{x}_0 \sim q(x)$. In DDPM, the forward noise addition process of the diffusion model is a Markov chain, taking the form: \begin{equation} \begin{gathered} q(\mathbf{x}_{1:T}|\mathbf{x}_0) = \prod \limits_{t=1}^T q(\mathbf{x}_t|\mathbf{x}_{t-1}), \\ q(\mathbf{x}_t|\mathbf{x}_{t-1}) = \mathcal{N}(\mathbf{x}_t; \sqrt{\alpha_t}\mathbf{x}_{t-1}, \beta_t\mathbf{I}), \end{gathered} \end{equation} where $\alpha_t=1-\beta_t$, $\beta_t$ is time-related schedule. Diffusion models generate high-quality images by applying a denoising process to randomly sampled Gaussian noise $\mathbf{x}_T \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$, taking the form: \begin{equation} p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_t) = \mathcal{N}(\mathbf{x}_{t-1};\hat{\mu}_{\theta, t}(\mathbf{x_t}), \hat{\beta}_t\mathbf{I}), \end{equation} where $\hat{\mu}_{\theta, t}$ and $\hat{\beta}_t$ are outputed by the diffusion model. \subsection{Diffusion Quantization} % PTQ只使用少量的校准数据来为量化参数进行校准,因其无需微调模型的权重因而仅需很少的内存占用和校准时间。目前针对与扩散模型的PTQ方法PTQ4DM,Q-Diffusion做出了最初的探索,之后的工作PTQ-D,TFMQ-DM,APQ-DM和QuEST分别从量化误差,时间步信息,校准数据与校准模块等方向做出了改进,进一步提升了扩散模型在量化后的性能。但是,这些基于PTQ的方法在极低比特(2,3bit)下的性能表现不佳。QAT往往需要大量的数据来对模型权重和量化参数进行全量微调,因此保证了在极低比特下的性能收敛。常用的QAT方法如LSQ和针对扩散模型的方法Q-dm和Binarydm保证了在极低比特甚至二值量化后的模型性能,但是其相对于PTQ方法需要大量额外的训练时长,导致了更大的训练负担。为了结合QAT的优点并减少训练的所需时长,EfficientDM利用LoRA方法来对量化的扩散模型进行低秩微调,不仅具有与PTQ方法相似的训练时长,并且因为对模型的权重进行了低秩微调,也保证了模型的量化性能。但不论是传统的PTQ方法还是EfficientDM,都无法保证扩散模型在极低比特量化后的性能。因此,本文主要关注于在类似PTQ方法的训练时长内,将扩散模型的低比特量化推向极限性能。 % Post-training quantization (PTQ) and Quantization-aware training (QAT) are two main approaches for model quantization. The commonly used QAT methods like LSQ ~\cite{esser2019lsq} and methods for diffusion models Q-dm~\cite{li2024qdm} and Binarydm~\cite{zheng2024binarydm} ensure the model performance at extremely low bit-width or even binary quantization, but they require a lot of extra training time compared with PTQ methods, resulting larger training burden. PTQ methods for diffusion model PTQ4DM~\cite{shang2023ptq4dm} and Q-Diffusion~\cite{li2023qdiffusion} have made initial exploration. The following works PTQ-D~\cite{he2024ptqd}, TFMQ-DM~\cite{huang2024tfmq}, APQ-DM~\cite{wang2024apqdm} and QuEST~\cite{wang2024quest} have made improvements in the direction of quantization error, temporal feature, calibration data, and calibration module. The performance of diffusion model after quantization is further improved. However, the performance of PTQ-based methods suffers from severe degradation at extremely low bit-width. To combine the advantages of QAT and reduce the required training time, EfficientDM~\cite{he2023efficientdm} uses LoRA~\cite{hu2021lora} method to fine-tune the quantized diffusion model.However, neither of these efficient quantization methods can guarantee the performance of the diffusion model under low bit. Therefore, this paper focuses on maximizing the extremely low bit quantization diffusion models performance. For diffusion model quantization, methods such as Q-DM~\cite{li2024qdm}, BinaryDM~\cite{zheng2024binarydm}, BiDM~\cite{zheng2024bidm}, and TerDiT~\cite{lu2024terdit} use quantization-aware training to maintain model performance under 1-2 bits. However, these approaches require extensive additional training time, often lasting several days. For more efficient quantization, approaches like Q-Diffusion~\cite{li2023qdiffusion}, PTQ4DM~\cite{shang2023ptq4dm}, PTQ-D~\cite{he2024ptqd}, TFMQ-DM~\cite{huang2024tfmq}, QuEST~\cite{wang2024quest}, EfficientDM~\cite{he2023efficientdm}, and MixDQ~\cite{zhao2025mixdq} explore quantization from the perspectives of quantization error, temporal features, and calibration data, particularly for Unet-based diffusion models. Similarly, Q-DiT~\cite{chen2024qdit}, PTQ4DiT~\cite{wu2024ptq4dit}, SVDQuant~\cite{li2024svdqunat}, and ViDiT-Q~\cite{zhao2024vidit} focus on the quantization of diffusion transformers, considering their unique data distributions and computational characteristics. However, existing quantization methods primarily focus on image generation tasks, with limited exploration into the more challenging domain of video generation. Therefore, this paper focuses on optimizing the quantization performance of video-generation diffusion transformers.", 2505.22552v1,"ClaimPKG: Enhancing Claim Verification via Pseudo-Subgraph Generation with Lightweight Specialized LLM","Hoang Pham, Thanh-Do Nguyen, Khac-Hoai Nam Bui","Integrating knowledge graphs (KGs) to enhance the reasoning capabilities of large language models (LLMs) is an emerging research challenge in claim verification. While KGs provide structured, semantically rich representations well-suited for reasoning, most existing verification methods rely on unstructured text corpora, limiting their ability to effectively leverage KGs. Additionally, despite possessing strong reasoning abilities, modern LLMs struggle with multi-step modular pipelines and reasoning over KGs without adaptation. To address these challenges, we propose ClaimPKG, an end-to-end framework that seamlessly integrates LLM reasoning with structured knowledge from KGs. Specifically, the main idea of ClaimPKG is to employ a lightweight, specialized LLM to represent the input claim as pseudo-subgraphs, guiding a dedicated subgraph retrieval module to identify relevant KG subgraphs. These retrieved subgraphs are then processed by a general-purpose LLM to produce the final verdict and justification. Extensive experiments on the FactKG dataset demonstrate that ClaimPKG achieves state-of-the-art performance, outperforming strong baselines in this research field by 9%-12% accuracy points across multiple categories. Furthermore, ClaimPKG exhibits zero-shot generalizability to unstructured datasets such as HoVer and FEVEROUS, effectively combining structured knowledge from KGs with LLM reasoning across various LLM backbones.","cs.CL, cs.AI, cs.DB",2025-05-28T16:34:14+00:00,2025-05-28T16:34:14+00:00,http://arxiv.org/abs/2505.22552v1,http://arxiv.org/abs/2505.22552v1,2025-05-28 16:34:14+00:00,"\textbf{Claim Verification Approaches.} Claim verification systems utilize knowledge bases that can be categorized into unstructured and structured formats. In the unstructured domain, text-based verification methods predominate, with systems designed to verify claims against textual evidence, as demonstrated in the FEVER dataset \cite{fever}. Recent advances have focused on handling specialized verification scenarios, including ambiguous question-answer pairs \cite{faviq}, detecting factual changes \cite{vitamin-c}, and processing multiple documents concurrently \cite{hover}. For structured verification, research has primarily focused on tables and graphs, with early work developing specialized architectures: graph neural networks for knowledge graph processing \cite{graph-review}, table-specific transformers \cite{tapas}, and tree-structured decoders for hierarchical data \cite{rat-sql}. % These structured approaches showed promising results in specific domains, they required specilized modules that reduce method generalizability to diverse verification tasks. \\[0.1cm] \textbf{Claim Verification over Knowledge Graphs (KGs).} The emergence of Large Language Models (LLMs) has simplified direct reasoning over textual corpora for claim verification, as demonstrated by ProgramFC \cite{programfc} and FOLK \cite{folk}. However, structured data sources like tables and graphs can provide more grounded and robust verification results \cite{factkg}. Knowledge graphs are particularly advantageous as they enable explicit representation of reasoning processes through logical rules over nodes and edges. FactKG \cite{factkg} established a foundation in this direction by introducing a comprehensive dataset for evaluating modern verification methods. KG-GPT \cite{kg_gpt} followed this work by demonstrating performance gains through a pipeline that performs sentence decomposition, subgraph retrieval, and logical inference. Additionally, while not directly addressing claim verification, StructGPT \cite{struct-gpt} and RoG \cite{reasoningongraph} achieved promising results in related tasks (e.g., Knowledge Base Question Answering) by collecting relevant evidence, such as subgraphs in KGs, then leveraging LLMs for complex reasoning in particular scenarios.","\textbf{Claim Verification Approaches.} Claim verification systems utilize knowledge bases that can be categorized into unstructured and structured formats. In the unstructured domain, text-based verification methods predominate, with systems designed to verify claims against textual evidence, as demonstrated in the FEVER dataset \cite{fever}. Recent advances have focused on handling specialized verification scenarios, including ambiguous question-answer pairs \cite{faviq}, detecting factual changes \cite{vitamin-c}, and processing multiple documents concurrently \cite{hover}. For structured verification, research has primarily focused on tables and graphs, with early work developing specialized architectures: graph neural networks for knowledge graph processing \cite{graph-review}, table-specific transformers \cite{tapas}, and tree-structured decoders for hierarchical data \cite{rat-sql}. % These structured approaches showed promising results in specific domains, they required specilized modules that reduce method generalizability to diverse verification tasks. \\[0.1cm] \textbf{Claim Verification over Knowledge Graphs (KGs).} The emergence of Large Language Models (LLMs) has simplified direct reasoning over textual corpora for claim verification, as demonstrated by ProgramFC \cite{programfc} and FOLK \cite{folk}. However, structured data sources like tables and graphs can provide more grounded and robust verification results \cite{factkg}. Knowledge graphs are particularly advantageous as they enable explicit representation of reasoning processes through logical rules over nodes and edges. FactKG \cite{factkg} established a foundation in this direction by introducing a comprehensive dataset for evaluating modern verification methods. KG-GPT \cite{kg_gpt} followed this work by demonstrating performance gains through a pipeline that performs sentence decomposition, subgraph retrieval, and logical inference. Additionally, while not directly addressing claim verification, StructGPT \cite{struct-gpt} and RoG \cite{reasoningongraph} achieved promising results in related tasks (e.g., Knowledge Base Question Answering) by collecting relevant evidence, such as subgraphs in KGs, then leveraging LLMs for complex reasoning in particular scenarios.","Claim Verification Approaches. Claim verifica- tion systems utilize knowledge bases that can be categorized into unstructured and structured for- mats. In the unstructured domain, text-based ver- ification methods predominate, with systems de- signed to verify claims against textual evidence, as demonstrated in the FEVER dataset (Thorne et al., 2018). Recent advances have focused on handling specialized verification scenarios, including am- biguous question-answer pairs (Park et al., 2022), detecting factual changes (Schuster et al., 2021), and processing multiple documents concurrently (Jiang et al., 2020). For structured verification, re- search has primarily focused on tables and graphs, with early work developing specialized architec- tures: graph neural networks for knowledge graph processing (Zhou et al., 2020), table-specific trans- formers (Herzig et al., 2020), and tree-structured decoders for hierarchical data (Wang et al., 2020). Claim Verification over Knowledge Graphs (KGs). The emergence of Large Language Models (LLMs) has simplified direct reasoning over textual corpora for claim verification, as demonstrated by ProgramFC (Pan et al., 2023) and FOLK (Wang and Shu, 2023). However, structured data sources like tables and graphs can provide more grounded and robust verification results (Kim et al., 2023b). Knowledge graphs are particularly advantageous as they enable explicit representation of reason- ing processes through logical rules over nodes and edges. FactKG (Kim et al., 2023b) established a foundation in this direction by introducing a com- prehensive dataset for evaluating modern verifica- tion methods. KG-GPT (Kim et al., 2023a) fol- lowed this work by demonstrating performance gains through a pipeline that performs sentence decomposition, subgraph retrieval, and logical in- ference. Additionally, while not directly addressing claim verification, StructGPT (Jiang et al., 2023) and RoG (Luo et al., 2024) achieved promising re- sults in related tasks (e.g., Knowledge Base Ques- tion Answering) by collecting relevant evidence, such as subgraphs in KGs, then leveraging LLMs for complex reasoning in particular scenarios." 2504.21752v1,"VDDP: Verifiable Distributed Differential Privacy under the Client-Server-Verifier Setup","Haochen Sun, Xi He","Despite differential privacy (DP) often being considered the de facto standard for data privacy, its realization is vulnerable to unfaithful execution of its mechanisms by servers, especially in distributed settings. Specifically, servers may sample noise from incorrect distributions or generate correlated noise while appearing to follow established protocols. This work analyzes these malicious behaviors in a general differential privacy framework within a distributed client-server-verifier setup. To address these adversarial problems, we propose a novel definition called Verifiable Distributed Differential Privacy (VDDP) by incorporating additional verification mechanisms. We also explore the relationship between zero-knowledge proofs (ZKP) and DP, demonstrating that while ZKPs are sufficient for achieving DP under verifiability requirements, they are not necessary. Furthermore, we develop two novel and efficient mechanisms that satisfy VDDP: (1) the Verifiable Distributed Discrete Laplacian Mechanism (VDDLM), which offers up to a $4 \times 10^5$x improvement in proof generation efficiency with only 0.1-0.2x error compared to the previous state-of-the-art verifiable differentially private mechanism; (2) an improved solution to Verifiable Randomized Response (VRR) under local DP, a special case of VDDP, achieving up a reduction of up to 5000x in communication costs and the verifier's overhead.","cs.CR, cs.DB",2025-04-30T15:46:55+00:00,2025-04-30T15:46:55+00:00,http://arxiv.org/abs/2504.21752v1,http://arxiv.org/abs/2504.21752v1,2025-04-30 15:46:55+00:00,"\label{sec:rw} Pioneering steps in verifiable executions of differentially private mechanisms involve cryptographic proofs on the correctness of deterministic fundamental computation steps in differentially private database systems like \textbf{VFuzz} \cite{DBLP:conf/eurosys/NarayanFPH15} and \textbf{DPrio} \cite{dprio}. More recent advancements have shifted their focus to the correct sampling from noise distributions, including randomized response (\textbf{KCY21}, \cite{KCY21}), floating-point Gaussian mechanisms (\textbf{STC+24}, \cite{DBLP:conf/iclr/ShamsabadiTCBHP24}), and binomial mechanisms (\textbf{VDBM}, \cite{BC23}). More broadly, other studies on secure computation for randomness generation \cite{DBLP:conf/pkc/AmbainisJL04,DBLP:conf/sp/BonehBCGI21} and differential privacy \cite{DBLP:conf/sigmod/ChowdhuryW0MJ20,DBLP:conf/ccs/BellBGL020}, with multi-party computation (MPC) \cite{DBLP:conf/eurocrypt/DworkKMMN06,DBLP:conf/ccs/ChampionSU19,DBLP:conf/uss/BohlerK20,DBLP:conf/ccs/BohlerK21,DBLP:journals/corr/abs-2109-10074,DBLP:conf/ccs/WeiYFCW23,DBLP:conf/ccs/FuW24}, have laid the foundation for the secure computation of DP mechanisms, especially in distributed settings. However, despite the similarities in multiple aspects, they do not cover the scenario when an external data analyst needs to verify the authenticity of the data and correctness of computation, especially the randomness involved. We compare this study's security and privacy models with the aforementioned studies in Table \ref{tab:comparison}. We discuss additional related work in Appendix \ref{appendix:rw}. % \begin{table}[!t] % \centering % \resizebox{\linewidth}{!}{\begin{tabular}{lccccccc} % \toprule % ~ & MPC-DP & \cite{DBLP:conf/eurosys/NarayanFPH15} & \cite{dprio} & \cite{KCY21} & \cite{DBLP:conf/iclr/ShamsabadiTCBHP24} & \cite{BC23} & \textbf{Ours} \\ % \midrule % Ver. Data. & \ding{56} & \ding{52} & \ding{52} & \ding{52} & \ding{52} & \ding{52} & \ding{52} \\ % Ver. Comp. & \ding{56} & \ding{52} & \ding{56} & \ding{52} & \ding{52} & \ding{52} & \ding{52} \\ % Ver. Rand. & \ding{56} & \ding{56} & \ding{56} & \ding{52} & \ding{52} & \ding{52} & \ding{52} \\ % CSV Model & \ding{56} & \ding{56} & \ding{52} & \ding{56} & \ding{56} & \ding{52} & \ding{52} \\ % E2E DP & N/A & \ding{56} & \ding{56} & \ding{56} & \ding{56} & \ding{56} & \ding{52} \\ % \bottomrule % \end{tabular}} % \caption{Comparison with previous work on MPCs of DP mechanisms (\textbf{MPC-DP}, \cite{DBLP:conf/ccs/ChampionSU19,DBLP:conf/uss/BohlerK20, DBLP:conf/ccs/BohlerK21,DBLP:conf/eurocrypt/DworkKMMN06,DBLP:conf/ccs/WeiYFCW23}) and verifiable executions of DP mechanisms \cite{DBLP:conf/eurosys/NarayanFPH15,dprio,KCY21,DBLP:conf/iclr/ShamsabadiTCBHP24,BC23}. \emph{Ver. Data.}: the authenticity of data is verifiable (to an external data analyst); \emph{Ver. Comp.}: the correctness of the deterministic computation is verifiable; \emph{Ver. Rand.}: the correct sampling from the prescribed random distributions is verifiable; \emph{CSV Model}: client-server-verifier model; \emph{E2E DP}: end-to-end DP guarantee, incorporating additional leakages from the proof. \hs{sp model comparisons?}} % \label{tab:comparison} % \end{table}% \begin{table}[!t] \caption{Comparison of security and privacy models with previous work on MPCs of DP mechanisms (\textbf{MPC-DP}, \cite{DBLP:conf/ccs/ChampionSU19,DBLP:conf/uss/BohlerK20, DBLP:conf/ccs/BohlerK21,DBLP:conf/eurocrypt/DworkKMMN06,DBLP:conf/ccs/WeiYFCW23}) and verifiable executions of DP mechanisms \cite{DBLP:conf/eurosys/NarayanFPH15,dprio,KCY21,DBLP:conf/iclr/ShamsabadiTCBHP24,BC23}. \emph{VD}, \emph{VC}, \emph{VR}: authenticity of data, correct deterministic computation, or correct sampling from the prescribed random distributions is verifiable (to an external data analyst); \emph{N}: resilience against numerical issues of DP due to compatibility with discrete cryptographic primitives; \emph{CSV}: client-server-verifier model; \emph{E2EDP}: end-to-end DP guarantee, incorporating additional leakages from the proof. } \centering %\resizebox{\linewidth}{!}{ \begin{tabular}{@{}lcccccc@{}} \toprule ~ & VD & VC & VR & N & CSV & E2EDP\\ \midrule MPC-DP & \ding{56} & \ding{56} & \ding{56} & \ding{52} & \ding{56} & N/A \\ VFuzz \cite{DBLP:conf/eurosys/NarayanFPH15} & \ding{52} & \ding{52} & \ding{56} & \ding{56} & \ding{56} & \ding{56} \\ DPrio \cite{dprio} & \ding{52} & \ding{56} & \ding{56} & \ding{52} & \ding{52} & \ding{56} \\ KCY21 \cite{KCY21} & \ding{52} & \ding{52} & \ding{52} & \ding{52} & \ding{56} & \ding{56} \\ STC+24 \cite{DBLP:conf/iclr/ShamsabadiTCBHP24} & \ding{52} & \ding{52} & \ding{52} & \ding{56} & \ding{56} & \ding{56} \\ VDBM \cite{BC23} & \ding{52} & \ding{52} & \ding{52} & \ding{52} & \ding{52} & \ding{56} \\ \textbf{Ours} & \ding{52} & \ding{52} & \ding{52} & \ding{52} & \ding{52} & \ding{52} \\ \bottomrule \end{tabular} %} \label{tab:comparison} \end{table}","Pioneering steps in verifiable executions of differentially private mechanisms involve cryptographic proofs on the correctness of deterministic fundamental computation steps in differentially private database systems like \textbf{VFuzz} \cite{DBLP:conf/eurosys/NarayanFPH15} and \textbf{DPrio} \cite{dprio}. More recent advancements have shifted their focus to the correct sampling from noise distributions, including randomized response (\textbf{KCY21}, \cite{KCY21}), floating-point Gaussian mechanisms (\textbf{STC+24}, \cite{DBLP:conf/iclr/ShamsabadiTCBHP24}), and binomial mechanisms (\textbf{VDBM}, \cite{BC23}). More broadly, other studies on secure computation for randomness generation \cite{DBLP:conf/pkc/AmbainisJL04,DBLP:conf/sp/BonehBCGI21} and differential privacy \cite{DBLP:conf/sigmod/ChowdhuryW0MJ20,DBLP:conf/ccs/BellBGL020}, with multi-party computation (MPC) \cite{DBLP:conf/eurocrypt/DworkKMMN06,DBLP:conf/ccs/ChampionSU19,DBLP:conf/uss/BohlerK20,DBLP:conf/ccs/BohlerK21,DBLP:journals/corr/abs-2109-10074,DBLP:conf/ccs/WeiYFCW23,DBLP:conf/ccs/FuW24}, have laid the foundation for the secure computation of DP mechanisms, especially in distributed settings. However, despite the similarities in multiple aspects, they do not cover the scenario when an external data analyst needs to verify the authenticity of the data and correctness of computation, especially the randomness involved. We compare this study's security and privacy models with the aforementioned studies in Table \ref{tab:comparison}. We discuss additional related work in Appendix \ref{appendix:rw}. % \begin{table}[!t] % \centering % \resizebox{\linewidth}{!}{\begin{tabular}{lccccccc} % \toprule % ~ & MPC-DP & \cite{DBLP:conf/eurosys/NarayanFPH15} & \cite{dprio} & \cite{KCY21} & \cite{DBLP:conf/iclr/ShamsabadiTCBHP24} & \cite{BC23} & \textbf{Ours} \\ % \midrule % Ver. Data. & \ding{56} & \ding{52} & \ding{52} & \ding{52} & \ding{52} & \ding{52} & \ding{52} \\ % Ver. Comp. & \ding{56} & \ding{52} & \ding{56} & \ding{52} & \ding{52} & \ding{52} & \ding{52} \\ % Ver. Rand. & \ding{56} & \ding{56} & \ding{56} & \ding{52} & \ding{52} & \ding{52} & \ding{52} \\ % CSV Model & \ding{56} & \ding{56} & \ding{52} & \ding{56} & \ding{56} & \ding{52} & \ding{52} \\ % E2E DP & N/A & \ding{56} & \ding{56} & \ding{56} & \ding{56} & \ding{56} & \ding{52} \\ % \bottomrule % \end{tabular}} % \caption{Comparison with previous work on MPCs of DP mechanisms (\textbf{MPC-DP}, \cite{DBLP:conf/ccs/ChampionSU19,DBLP:conf/uss/BohlerK20, DBLP:conf/ccs/BohlerK21,DBLP:conf/eurocrypt/DworkKMMN06,DBLP:conf/ccs/WeiYFCW23}) and verifiable executions of DP mechanisms \cite{DBLP:conf/eurosys/NarayanFPH15,dprio,KCY21,DBLP:conf/iclr/ShamsabadiTCBHP24,BC23}. \emph{Ver. Data.}: the authenticity of data is verifiable (to an external data analyst); \emph{Ver. Comp.}: the correctness of the deterministic computation is verifiable; \emph{Ver. Rand.}: the correct sampling from the prescribed random distributions is verifiable; \emph{CSV Model}: client-server-verifier model; \emph{E2E DP}: end-to-end DP guarantee, incorporating additional leakages from the proof. \hs{sp model comparisons?}} % % \end{table}% \begin{table}[!t] \caption{Comparison of security and privacy models with previous work on MPCs of DP mechanisms (\textbf{MPC-DP}, \cite{DBLP:conf/ccs/ChampionSU19,DBLP:conf/uss/BohlerK20, DBLP:conf/ccs/BohlerK21,DBLP:conf/eurocrypt/DworkKMMN06,DBLP:conf/ccs/WeiYFCW23}) and verifiable executions of DP mechanisms \cite{DBLP:conf/eurosys/NarayanFPH15,dprio,KCY21,DBLP:conf/iclr/ShamsabadiTCBHP24,BC23}. \emph{VD}, \emph{VC}, \emph{VR}: authenticity of data, correct deterministic computation, or correct sampling from the prescribed random distributions is verifiable (to an external data analyst); \emph{N}: resilience against numerical issues of DP due to compatibility with discrete cryptographic primitives; \emph{CSV}: client-server-verifier model; \emph{E2EDP}: end-to-end DP guarantee, incorporating additional leakages from the proof. } \centering %\resizebox{\linewidth}{!}{ \begin{tabular}{@{}lcccccc@{}} \toprule ~ & VD & VC & VR & N & CSV & E2EDP\\ \midrule MPC-DP & \ding{56} & \ding{56} & \ding{56} & \ding{52} & \ding{56} & N/A \\ VFuzz \cite{DBLP:conf/eurosys/NarayanFPH15} & \ding{52} & \ding{52} & \ding{56} & \ding{56} & \ding{56} & \ding{56} \\ DPrio \cite{dprio} & \ding{52} & \ding{56} & \ding{56} & \ding{52} & \ding{52} & \ding{56} \\ KCY21 \cite{KCY21} & \ding{52} & \ding{52} & \ding{52} & \ding{52} & \ding{56} & \ding{56} \\ STC+24 \cite{DBLP:conf/iclr/ShamsabadiTCBHP24} & \ding{52} & \ding{52} & \ding{52} & \ding{56} & \ding{56} & \ding{56} \\ VDBM \cite{BC23} & \ding{52} & \ding{52} & \ding{52} & \ding{52} & \ding{52} & \ding{56} \\ \textbf{Ours} & \ding{52} & \ding{52} & \ding{52} & \ding{52} & \ding{52} & \ding{52} \\ \bottomrule \end{tabular} %} \end{table}","Pioneering steps in verifiable executions of differentially pri- vate mechanisms involve cryptographic proofs on the cor- rectness of deterministic fundamental computation steps in differentially private database systems like VFuzz [74] and DPrio [63]. More recent advancements have shifted their focus to the correct sampling from noise distributions, in- cluding randomized response ( KCY21 , [61]), floating-point Gaussian mechanisms ( STC+24 , [84]), and binomial mech- anisms ( VDBM , [14]). More broadly, other studies on se- cure computation for randomness generation [3, 20] and differential privacy [10, 29], with multi-party computationTable 5: Comparison of security and privacy models with previous work on MPCs of DP mechanisms ( MPC-DP , [17, 18, 24, 36, 91]) and verifiable executions of DP mech- anisms [14, 61, 63, 74, 84]. VD,VC,VR: authenticity of data, correct deterministic computation, or correct sampling from the prescribed random distributions is verifiable (to an exter- nal data analyst); N: resilience against numerical issues of DP due to compatibility with discrete cryptographic primitives; CSV: client-server-verifier model; E2EDP : end-to-end DP guarantee, incorporating additional leakages from the proof. VD VC VR N CSV E2EDP MPC-DP ✘ ✘ ✘ ✔ ✘ N/A VFuzz [74] ✔ ✔ ✘ ✘ ✘ ✘ DPrio [63] ✔ ✘ ✘ ✔ ✔ ✘ KCY21 [61] ✔ ✔ ✔ ✔ ✘ ✘ STC+24 [84] ✔ ✔ ✔ ✘ ✘ ✘ VDBM [14] ✔ ✔ ✔ ✔ ✔ ✘ Ours ✔ ✔ ✔ ✔ ✔ ✔ (MPC) [17, 18, 24, 34, 36, 44, 91], have laid the foundation for the secure computation of DP mechanisms, especially in dis- tributed settings. However, despite the similarities in multiple aspects, they do not cover the scenario when an external data analyst needs to verify the authenticity of the data and cor- rectness of computation, especially the randomness involved. We compare this study’s security and privacy models with the aforementioned studies in Table" 2504.21282v1,"Birdie: Natural Language-Driven Table Discovery Using Differentiable Search Index","Yuxiang Guo, Zhonghao Hu, Yuren Mao, Baihua Zheng, Yunjun Gao, Mingwei Zhou","Natural language (NL)-driven table discovery identifies relevant tables from large table repositories based on NL queries. While current deep-learning-based methods using the traditional dense vector search pipeline, i.e., representation-index-search, achieve remarkable accuracy, they face several limitations that impede further performance improvements: (i) the errors accumulated during the table representation and indexing phases affect the subsequent search accuracy; and (ii) insufficient query-table interaction hinders effective semantic alignment, impeding accuracy improvements. In this paper, we propose a novel framework Birdie, using a differentiable search index. It unifies the indexing and search into a single encoder-decoder language model, thus getting rid of error accumulations. Birdie first assigns each table a prefix-aware identifier and leverages a large language model-based query generator to create synthetic queries for each table. It then encodes the mapping between synthetic queries/tables and their corresponding table identifiers into the parameters of an encoder-decoder language model, enabling deep query-table interactions. During search, the trained model directly generates table identifiers for a given query. To accommodate the continual indexing of dynamic tables, we introduce an index update strategy via parameter isolation, which mitigates the issue of catastrophic forgetting. Extensive experiments demonstrate that Birdie outperforms state-of-the-art dense methods by 16.8% in accuracy, and reduces forgetting by over 90% compared to other continual learning approaches.",cs.DB,2025-04-30T03:30:21+00:00,2025-04-30T03:30:21+00:00,http://arxiv.org/abs/2504.21282v1,http://arxiv.org/abs/2504.21282v1,2025-04-30 03:30:21+00:00,"\label{sec:relatedwork} %\subsection{Table Discovery} \noindent \textbf{{Table Discovery. }} Table discovery has been extensively researched within the data management community~\cite{TabelDiscovery,DataLake_Survey}. A prevalent line of table discovery is query-driven discovery, which includes: (i) keyword-based table search~\cite{AdelfioS13,GoogleSearch} that aims to identify web tables related to specified keywords, utilizing metadata such as table headers and column names; (ii) table-driven search which locates target tables within a large data lake that can be joined~\cite{JOSIE,Deepjoin,Snoopy} or unioned~\cite{starmine,santos,TUS} with a given query table; and (iii) NL-query-driven table search~\cite{Solo,OpenDTR,OpenWiki}. NL-driven table discovery offers a user-friendly interface that allows users to express their needs more precisely. % NL-driven table discovery is essential for many downstream data analysis tasks~\cite{ReAcTable,Symphony}. Existing NL-driven table discovery methods~\cite{Solo,OpenDTR,OpenWiki} typically follow a traditional representation-index-search pipeline. % This process involves embedding both tables and NL queries into a shared embedding space using a bi-encoder, constructing indexes on the table embeddings, and performing online similarity searches between query embedding and table embeddings. The encoder in the representation phase plays a crucial role in search accuracy. For instance, OpenDTR~\cite{OpenDTR} uses TAPAS~\cite{TAPAS} as the backbone for its bi-encoder. % fine-tuning it using labeled query-table pairs with in-batch negatives. OpenWikiTable~\cite{OpenWiki} offers various options for query and table encoders. However, representing a table as a single vector can sometimes be insufficiently expressive. To address this, Solo~\cite{Solo} encodes each cell-attributes-cell triplet within the table into a fixed-dimensional embedding and retrieves similar triplet embeddings to the query embedding, followed by aggregation of triplets-to-table. However, the lack of deep query-table interactions during retrieval hinders further performance improvements. Another line of NL-based table search literature focuses on the re-ranking~\cite{GTR,AdHoc_TR,TableSearch}. % These methods aim to rank candidate tables generated during the first-stage retrieval. Utilizing a cross-encoder, they input both the query and candidate table to obtain embeddings for each query-table pair. This process enhances accuracy due to the deep query-table interactions but lacks of scalability for first-stage retrieval. \vspace{1mm} \noindent \textbf{Differentiable Search Index.} Differentiable search index (DSI)~\cite{DSI} sparks a novel search paradigm that unifies the indexing and search within a single Transformer architecture. It was initially proposed for document retrieval~\cite{NCI, DSI, DSI-QG} and has been applied in scenarios like retrieval-augmented generation (RAG)~\cite{CorpusLM}, recommendation systems~\cite{Tiger}, etc. To the best of our knowledge, \textsc{Birdie} is the first attempt to perform table discovery using DSI, taking into account the unique properties of tabular data to automate the collection of training data. Real-world applications often involve dynamically changing corpora. However, in DSI, which encodes all corpus information into model parameters, indexing new corpora inevitably leads to the forgetting of old ones. To mitigate catastrophic forgetting, some recent studies~\cite{DSI++, CLEVER} propose replay-based solutions that sample some old data and combine it with new data for continual learning. However, these methods often struggle to balance indexing new data and retaining old memories, resulting in suboptimal average performance. In contrast, \textsc{Birdie} designs a parameter isolation method that ensures the independence of each memory unit, thus achieving a promising average performance.","%\subsection{Table Discovery} \noindent \textbf{{Table Discovery. }} Table discovery has been extensively researched within the data management community~\cite{TabelDiscovery,DataLake_Survey}. A prevalent line of table discovery is query-driven discovery, which includes: (i) keyword-based table search~\cite{AdelfioS13,GoogleSearch} that aims to identify web tables related to specified keywords, utilizing metadata such as table headers and column names; (ii) table-driven search which locates target tables within a large data lake that can be joined~\cite{JOSIE,Deepjoin,Snoopy} or unioned~\cite{starmine,santos,TUS} with a given query table; and (iii) NL-query-driven table search~\cite{Solo,OpenDTR,OpenWiki}. NL-driven table discovery offers a user-friendly interface that allows users to express their needs more precisely. % NL-driven table discovery is essential for many downstream data analysis tasks~\cite{ReAcTable,Symphony}. Existing NL-driven table discovery methods~\cite{Solo,OpenDTR,OpenWiki} typically follow a traditional representation-index-search pipeline. % This process involves embedding both tables and NL queries into a shared embedding space using a bi-encoder, constructing indexes on the table embeddings, and performing online similarity searches between query embedding and table embeddings. The encoder in the representation phase plays a crucial role in search accuracy. For instance, OpenDTR~\cite{OpenDTR} uses TAPAS~\cite{TAPAS} as the backbone for its bi-encoder. % fine-tuning it using labeled query-table pairs with in-batch negatives. OpenWikiTable~\cite{OpenWiki} offers various options for query and table encoders. However, representing a table as a single vector can sometimes be insufficiently expressive. To address this, Solo~\cite{Solo} encodes each cell-attributes-cell triplet within the table into a fixed-dimensional embedding and retrieves similar triplet embeddings to the query embedding, followed by aggregation of triplets-to-table. However, the lack of deep query-table interactions during retrieval hinders further performance improvements. Another line of NL-based table search literature focuses on the re-ranking~\cite{GTR,AdHoc_TR,TableSearch}. % These methods aim to rank candidate tables generated during the first-stage retrieval. Utilizing a cross-encoder, they input both the query and candidate table to obtain embeddings for each query-table pair. This process enhances accuracy due to the deep query-table interactions but lacks of scalability for first-stage retrieval. \vspace{1mm} \noindent \textbf{Differentiable Search Index.} Differentiable search index (DSI)~\cite{DSI} sparks a novel search paradigm that unifies the indexing and search within a single Transformer architecture. It was initially proposed for document retrieval~\cite{NCI, DSI, DSI-QG} and has been applied in scenarios like retrieval-augmented generation (RAG)~\cite{CorpusLM}, recommendation systems~\cite{Tiger}, etc. To the best of our knowledge, \textsc{Birdie} is the first attempt to perform table discovery using DSI, taking into account the unique properties of tabular data to automate the collection of training data. Real-world applications often involve dynamically changing corpora. However, in DSI, which encodes all corpus information into model parameters, indexing new corpora inevitably leads to the forgetting of old ones. To mitigate catastrophic forgetting, some recent studies~\cite{DSI++, CLEVER} propose replay-based solutions that sample some old data and combine it with new data for continual learning. However, these methods often struggle to balance indexing new data and retaining old memories, resulting in suboptimal average performance. In contrast, \textsc{Birdie} designs a parameter isolation method that ensures the independence of each memory unit, thus achieving a promising average performance.", 2504.17448v1,"CHASe: Client Heterogeneity-Aware Data Selection for Effective Federated Active Learning","Jun Zhang, Jue Wang, Huan Li, Zhongle Xie, Ke Chen, Lidan Shou","Active learning (AL) reduces human annotation costs for machine learning systems by strategically selecting the most informative unlabeled data for annotation, but performing it individually may still be insufficient due to restricted data diversity and annotation budget. Federated Active Learning (FAL) addresses this by facilitating collaborative data selection and model training, while preserving the confidentiality of raw data samples. Yet, existing FAL methods fail to account for the heterogeneity of data distribution across clients and the associated fluctuations in global and local model parameters, adversely affecting model accuracy. To overcome these challenges, we propose CHASe (Client Heterogeneity-Aware Data Selection), specifically designed for FAL. CHASe focuses on identifying those unlabeled samples with high epistemic variations (EVs), which notably oscillate around the decision boundaries during training. To achieve both effectiveness and efficiency, \model{} encompasses techniques for 1) tracking EVs by analyzing inference inconsistencies across training epochs, 2) calibrating decision boundaries of inaccurate models with a new alignment loss, and 3) enhancing data selection efficiency via a data freeze and awaken mechanism with subset sampling. Experiments show that CHASe surpasses various established baselines in terms of effectiveness and efficiency, validated across diverse datasets, model complexities, and heterogeneous federation settings.","cs.LG, cs.DB, cs.DC",2025-04-24T11:28:00+00:00,2025-04-24T11:28:00+00:00,http://arxiv.org/abs/2504.17448v1,http://arxiv.org/abs/2504.17448v1,2025-04-24 11:28:00+00:00,"\label{sec:related} % \subsection{Active Learning} \noindent\textbf{Active Learning (AL)}. AL has been widely applied to various tasks, including error detection \cite{ErroDetection}, image captioning \cite{ImageCaption}, and person re-identification \cite{PersonIdentification}. AL methods can be categorized into three groups, namely uncertainty-based, distribution-based, and loss-based. \emph{1) Uncertainty-based methods}: Uncertainty-based methods select data samples according to uncertainty, which can be measured by the predicted probability \cite{lewis1994heterogeneous,lewis1994sequential}, the margin between the probability of two classes \cite{joshi2009multi}, or the entropy of prediction \cite{luo2013latent,settles2012active}. An intuitive assumption is that unlabeled data with high uncertainty might be beneficial for training better models. In addition, Bayesian neural networks \cite{blundell2015weight}, MC-Dropout \cite{gal2016dropout}, or Ensembles are used to estimate model uncertainty. Yet, such methods are computationally inefficient. Notably, for AL-tailored temporal uncertainty methods like TOD\text{~\cite{huang2021semi}}, which measure uncertainty by comparing logits between two updated models, the main challenge in heterogeneous FAL is selecting which two models to compare. \emph{2) Distribution-based methods}: This family of methods estimates the distribution of unlabeled samples to select important ones. While discrete optimization methods \cite{guo2010active,yang2015multi} are used for a larger sample subset by considering diversity, the goal of the clustering method \cite{nguyen2004active} is to find the cluster centroids of subsets. The Core-set \cite{sener2018active} defines AL as a core-set selection problem, i.e.,~finding a small subset such that a model learned on them is competitive on the whole set, which addresses the failure of many AL heuristics when applied in a batch setting. \emph{3) Loss-based methods}: These methods introduce additional networks to select hard samples by designing various loss functions. An early learning loss approach, LL4AL \cite{yoo2019learning}, estimates samples' uncertainty and diversity and selects them with significant ``loss''. Recent works \cite{yuan2021multiple,fu2021agreement} progressively align data distributions for effective AL by introducing multiple adversarial classifiers with iterative loss optimization. The leading methods from these groups, including entropy-based \cite{luo2013latent}, Core-set \cite{sener2018active} and LL4AL \cite{yoo2019learning} have not explored the federation % setting , let alone the client heterogeneity. % \subsection{Federated Learning (FL) and Federated AL} \smallskip \noindent\textbf{Federated Learning (FL) and Federated AL}. FL was early discussed in literature such as study \cite{konevcny2015federated} and further developed along with the proposal of FedAvg \cite{mcmahan2017communication}. % Ahemd et al. \cite{ahmed2020active} and Mohammad et al. \cite{mohammad21flare} explore the effectiveness of current AL methods in FL, but they treat AL and FL as two completely orthogonal modules. Jia et al. \cite{jia2019active} and Nicolas et al. \cite{nicolas2020combine} integrate the pipeline of AL into FL to reduce clients' training costs. Jin et al. \cite{jin2022federated} further verify that picking data on the global model (i.e., FAL($\bullet$)) performs better than that on the local model and the random sampling. % These above works show that existing AL methods perform decently in FL with IID. However, these works ignore the FL's Non-IID issue \cite{10184650}. % Jia et al. \cite{jia2020Robust} discuss the resistance of local training epochs and aggregation frequency to heterogeneous data. However, it is still limited to the application of existing AL methods. To mitigate the Non-IID issue in \fram{}, recent studies examine both local and global models to select samples with high intra- and inter-class diversity \text{\cite{cao2022knowledgeaware}} or samples with labels in majority \text{\cite{kim2023rethinking}}. However, their selections rely on the instant models, which could vary significantly during the training, as disclosed by the samples' epistemic variation (see \text{\cref{fig:intro}}). In contrast, the proposed \model{} approach considers the historical model states by capturing the epistemic variation for effective and robust data selection. Besides, the notation of \emph{Active Federated Learning} (AFL) \cite{goetz2019active} clearly differs from \fram{} that we studied. AFL focuses on actively selecting clients from a server perspective in FL. There are also studies \cite{li2021sample,shin2022sample} on data selection in FL, but they assume data is fully labeled and do not involve AL.","% \subsection{Active Learning} \noindent\textbf{Active Learning (AL)}. AL has been widely applied to various tasks, including error detection \cite{ErroDetection}, image captioning \cite{ImageCaption}, and person re-identification \cite{PersonIdentification}. AL methods can be categorized into three groups, namely uncertainty-based, distribution-based, and loss-based. \emph{1) Uncertainty-based methods}: Uncertainty-based methods select data samples according to uncertainty, which can be measured by the predicted probability \cite{lewis1994heterogeneous,lewis1994sequential}, the margin between the probability of two classes \cite{joshi2009multi}, or the entropy of prediction \cite{luo2013latent,settles2012active}. An intuitive assumption is that unlabeled data with high uncertainty might be beneficial for training better models. In addition, Bayesian neural networks \cite{blundell2015weight}, MC-Dropout \cite{gal2016dropout}, or Ensembles are used to estimate model uncertainty. Yet, such methods are computationally inefficient. Notably, for AL-tailored temporal uncertainty methods like TOD\text{~\cite{huang2021semi}}, which measure uncertainty by comparing logits between two updated models, the main challenge in heterogeneous FAL is selecting which two models to compare. \emph{2) Distribution-based methods}: This family of methods estimates the distribution of unlabeled samples to select important ones. While discrete optimization methods \cite{guo2010active,yang2015multi} are used for a larger sample subset by considering diversity, the goal of the clustering method \cite{nguyen2004active} is to find the cluster centroids of subsets. The Core-set \cite{sener2018active} defines AL as a core-set selection problem, i.e.,~finding a small subset such that a model learned on them is competitive on the whole set, which addresses the failure of many AL heuristics when applied in a batch setting. \emph{3) Loss-based methods}: These methods introduce additional networks to select hard samples by designing various loss functions. An early learning loss approach, LL4AL \cite{yoo2019learning}, estimates samples' uncertainty and diversity and selects them with significant ``loss''. Recent works \cite{yuan2021multiple,fu2021agreement} progressively align data distributions for effective AL by introducing multiple adversarial classifiers with iterative loss optimization. The leading methods from these groups, including entropy-based \cite{luo2013latent}, Core-set \cite{sener2018active} and LL4AL \cite{yoo2019learning} have not explored the federation % setting , let alone the client heterogeneity. % \subsection{Federated Learning (FL) and Federated AL} \smallskip \noindent\textbf{Federated Learning (FL) and Federated AL}. FL was early discussed in literature such as study \cite{konevcny2015federated} and further developed along with the proposal of FedAvg \cite{mcmahan2017communication}. % Ahemd et al. \cite{ahmed2020active} and Mohammad et al. \cite{mohammad21flare} explore the effectiveness of current AL methods in FL, but they treat AL and FL as two completely orthogonal modules. Jia et al. \cite{jia2019active} and Nicolas et al. \cite{nicolas2020combine} integrate the pipeline of AL into FL to reduce clients' training costs. Jin et al. \cite{jin2022federated} further verify that picking data on the global model (i.e., FAL($\bullet$)) performs better than that on the local model and the random sampling. % These above works show that existing AL methods perform decently in FL with IID. However, these works ignore the FL's Non-IID issue \cite{10184650}. % Jia et al. \cite{jia2020Robust} discuss the resistance of local training epochs and aggregation frequency to heterogeneous data. However, it is still limited to the application of existing AL methods. To mitigate the Non-IID issue in \fram{}, recent studies examine both local and global models to select samples with high intra- and inter-class diversity \text{\cite{cao2022knowledgeaware}} or samples with labels in majority \text{\cite{kim2023rethinking}}. However, their selections rely on the instant models, which could vary significantly during the training, as disclosed by the samples' epistemic variation (see \text{\cref{fig:intro}}). In contrast, the proposed \model{} approach considers the historical model states by capturing the epistemic variation for effective and robust data selection. Besides, the notation of \emph{Active Federated Learning} (AFL) \cite{goetz2019active} clearly differs from \fram{} that we studied. AFL focuses on actively selecting clients from a server perspective in FL. There are also studies \cite{li2021sample,shin2022sample} on data selection in FL, but they assume data is fully labeled and do not involve AL.", 2504.14861v1,"Stitching Inner Product and Euclidean Metrics for Topology-aware Maximum Inner Product Search","Tingyang Chen, Cong Fu, Xiangyu Ke, Yunjun Gao, Yabo Ni, Anxiang Zeng","Maximum Inner Product Search (MIPS) is a fundamental challenge in machine learning and information retrieval, particularly in high-dimensional data applications. Existing approaches to MIPS either rely solely on Inner Product (IP) similarity, which faces issues with local optima and redundant computations, or reduce the MIPS problem to the Nearest Neighbor Search under the Euclidean metric via space projection, leading to topology destruction and information loss. Despite the divergence of the two paradigms, we argue that there is no inherent binary opposition between IP and Euclidean metrics. By stitching IP and Euclidean in the design of indexing and search algorithms, we can significantly enhance MIPS performance. Specifically, this paper explores the theoretical and empirical connections between these two metrics from the MIPS perspective. Our investigation, grounded in graph-based search, reveals that different indexing and search strategies offer distinct advantages for MIPS, depending on the underlying data topology. Building on these insights, we introduce a novel graph-based index called Metric-Amphibious Graph (MAG) and a corresponding search algorithm, Adaptive Navigation with Metric Switch (ANMS). To facilitate parameter tuning for optimal performance, we identify three statistical indicators that capture essential data topology properties and correlate strongly with parameter tuning. Extensive experiments on 12 real-world datasets demonstrate that MAG outperforms existing state-of-the-art methods, achieving up to 4x search speedup while maintaining adaptability and scalability.","cs.DB, cs.IR",2025-04-21T05:01:58+00:00,2025-04-21T05:01:58+00:00,http://arxiv.org/abs/2504.14861v1,http://arxiv.org/abs/2504.14861v1,2025-04-21 05:01:58+00:00,"\label{sec:related} {Inner Product} is crucial in \textsf{AI} and machine learning applications such as representation learning,language modeling, computer vision and recommender systems~\cite{wang2024must,yu2014large,xu2020product,asai2023retrieval,huang2020embedding,radford2021learning}. \mips methods are generally categorized into Locality Sensitive Hashing (\lsh), tree-, quantization-, and graph-based approaches: \stitle{LSH-based methods}: Traditional \lsh~\cite{wang2017survey,wei2024det}, originally designed for Euclidean space, is adapted for \mips using transformations such as $L_2$ \cite{shrivastava2014asymmetric}, Correlation \cite{shrivastava2015improved}, and \textsf{XBOX} \cite{bachrach2014speeding}. Range-\lsh~\cite{yan2018norm} is the first to observe that \mips results cluster around large-norm vectors. Simple-\lsh~\cite{neyshabur2015symmetric} introduce a symmetric \lsh that enjoys strong guarantees. Fargo \cite{zhao2023fargo} represents the recent state-of-the-art. \stitle{Tree-based methods}: Early \mips approaches favored trees but struggled with high dimensionality. \textsf{ProMIPS} \cite{song2021promips} addresses this by projecting vectors into a lower-dimensional space, though information loss remains a challenge. \textsf{LRUS-CoverTree}~\cite{ma2024reconsidering} improves on this but faces difficulties with negative inner product values. \stitle{Quantization-based methods}: \textsf{NEQ}~\cite{dai2020norm} quantizes the norms of items in a dataset explicitly to reduce errors in norm. ScaNN \cite{guo2020accelerating} integrates ""VQ-PQ"" with anisotropic quantization loss, while \textsf{SOAR} \cite{sun2024soar} employs an orthogonality-amplified residual loss and have become state-of-the-art and been integrated into ScaNN library. \stitle{Graph-based methods}: Proven effective for \nns, graph-based methods have been adapted for \mips. \textsf{ip-NSW}~\cite{morozov2018non} builds Delaunay graphs via inner product. \textsf{ip-NSW+}~\cite{liu2020understanding} improves graph quality with angular proximity. \textsf{M{\""o}bius-Graph}~\cite{zhou2019mobius} adopts M{\""o}bius transforms for \mips. \textsf{IPDG} prunes extreme points for top-1 MIPS. \textsf{NAPG}~\cite{tan2021norm} uses a norm-adaptive inner product ($\alpha \langle x, y \rangle$) in \textsf{ip-NSW}. \begin{table}[tb!] \caption{The performance enhancement of \magg compared to the best competitor across various search switch steps on \textsf{Imagenet-1k} and \textsf{YFCC1M}.} \vspace{-2mm} \small \label{tab:search-switch} \resizebox{0.92\linewidth}{!}{ \begin{tabular}{cccccc} \toprule Datasets & step=10 & step=20 & step=30 & step=40\\ \midrule \textsf{Imagenet-1K} & 1.6x & 2.24x & 3.7x & 2.64x \\ % \textsf{Text2image1M} & 25\% & 20\% & 15\% & 12\% \\ \textsf{YFCC1M} & 28\% & 39\% & 32\% & 25\% \\ \bottomrule \end{tabular}} \vspace{-2mm} \end{table}","{Inner Product} is crucial in \textsf{AI} and machine learning applications such as representation learning,language modeling, computer vision and recommender systems~\cite{wang2024must,yu2014large,xu2020product,asai2023retrieval,huang2020embedding,radford2021learning}. \mips methods are generally categorized into Locality Sensitive Hashing (\lsh), tree-, quantization-, and graph-based approaches: \stitle{LSH-based methods}: Traditional \lsh~\cite{wang2017survey,wei2024det}, originally designed for Euclidean space, is adapted for \mips using transformations such as $L_2$ \cite{shrivastava2014asymmetric}, Correlation \cite{shrivastava2015improved}, and \textsf{XBOX} \cite{bachrach2014speeding}. Range-\lsh~\cite{yan2018norm} is the first to observe that \mips results cluster around large-norm vectors. Simple-\lsh~\cite{neyshabur2015symmetric} introduce a symmetric \lsh that enjoys strong guarantees. Fargo \cite{zhao2023fargo} represents the recent state-of-the-art. \stitle{Tree-based methods}: Early \mips approaches favored trees but struggled with high dimensionality. \textsf{ProMIPS} \cite{song2021promips} addresses this by projecting vectors into a lower-dimensional space, though information loss remains a challenge. \textsf{LRUS-CoverTree}~\cite{ma2024reconsidering} improves on this but faces difficulties with negative inner product values. \stitle{Quantization-based methods}: \textsf{NEQ}~\cite{dai2020norm} quantizes the norms of items in a dataset explicitly to reduce errors in norm. ScaNN \cite{guo2020accelerating} integrates ""VQ-PQ"" with anisotropic quantization loss, while \textsf{SOAR} \cite{sun2024soar} employs an orthogonality-amplified residual loss and have become state-of-the-art and been integrated into ScaNN library. \stitle{Graph-based methods}: Proven effective for \nns, graph-based methods have been adapted for \mips. \textsf{ip-NSW}~\cite{morozov2018non} builds Delaunay graphs via inner product. \textsf{ip-NSW+}~\cite{liu2020understanding} improves graph quality with angular proximity. \textsf{M{\""o}bius-Graph}~\cite{zhou2019mobius} adopts M{\""o}bius transforms for \mips. \textsf{IPDG} prunes extreme points for top-1 MIPS. \textsf{NAPG}~\cite{tan2021norm} uses a norm-adaptive inner product ($\alpha \langle x, y \rangle$) in \textsf{ip-NSW}. \begin{table}[tb!] \caption{The performance enhancement of \magg compared to the best competitor across various search switch steps on \textsf{Imagenet-1k} and \textsf{YFCC1M}.} \vspace{-2mm} \small \resizebox{0.92\linewidth}{!}{ \begin{tabular}{cccccc} \toprule Datasets & step=10 & step=20 & step=30 & step=40\\ \midrule \textsf{Imagenet-1K} & 1.6x & 2.24x & 3.7x & 2.64x \\ % \textsf{Text2image1M} & 25\% & 20\% & 15\% & 12\% \\ \textsf{YFCC1M} & 28\% & 39\% & 32\% & 25\% \\ \bottomrule \end{tabular}} \vspace{-2mm} \end{table}","Inner Product is crucial in AIand machine learning applications such as representation learning,language modeling, computer vi- sion and recommender systems [ 7,20,30,41,44,47].MIPS methods are generally categorized into Locality Sensitive Hashing ( LSH), tree-, quantization-, and graph-based approaches: LSH-based methods : Traditional LSH [40,43], originally designed for Euclidean space, is adapted for MIPS using transformations such as𝐿2[34], Correlation [ 35], and XBOX [9]. Range- LSH [46] is the first to observe that MIPS results cluster around large-norm vectors. Simple- LSH [27] introduce a symmetric LSH that enjoys strong guarantees. Fargo [ 48] represents the recent state-of-the-art. Tree-based methods : Early MIPS approaches favored trees but struggled with high dimensionality. ProMIPS [36] addresses this by projecting vectors into a lower-dimensional space, though infor- mation loss remains a challenge. LRUS-CoverTree [23] improves on this but faces difficulties with negative inner product values. Quantization-based methods :NEQ [15] quantizes the norms of items in a dataset explicitly to reduce errors in norm. ScaNN [ 19] integrates ""VQ-PQ"" with anisotropic quantization loss, while SOAR [37] employs an orthogonality-amplified residual loss and have become state-of-the-art and been integrated into ScaNN library. Graph-based methods : Proven effective for NNS , graph-based methods have been adapted for MIPS .ip-NSW [26] builds De- launay graphs via inner product. ip-NSW+ [22] improves graph quality with angular proximity. Möbius-Graph [49] adopts Möbius transforms for MIPS .IPDG prunes extreme points for top-1 MIPS. NAPG [38] uses a norm-adaptive inner product ( 𝛼⟨𝑥,𝑦⟩) inip-NSW ." 2504.06975v1,AWDIT: An Optimal Weak Database Isolation Tester,"Lasse Møldrup, Andreas Pavlogiannis","In order to achieve low latency, high throughput, and partition tolerance, modern databases forgo strong transaction isolation for weak isolation guarantees. However, several production databases have been found to suffer from isolation bugs, breaking their data-consistency contract. Black-box testing is a prominent technique for detecting isolation bugs, by checking whether histories of database transactions adhere to a prescribed isolation level. Testing databases on realistic workloads of large size requires isolation testers to be as efficient as possible, a requirement that has initiated a study of the complexity of isolation testing. Although testing strong isolation has been known to be NP-complete, weak isolation levels were recently shown to be testable in polynomial time, which has propelled the scalability of testing tools. However, existing testers have a large polynomial complexity, restricting testing to workloads of only moderate size, which is not typical of large-scale databases. In this work, we develop AWDIT, a highly-efficient and provably optimal tester for weak database isolation. Given a history $H$ of size $n$ and $k$ sessions, AWDIT tests whether H satisfies the most common weak isolation levels of Read Committed (RC), Read Atomic (RA), and Causal Consistency (CC) in time $O(n^{3/2})$, $O(n^{3/2})$, and $O(n \cdot k)$, respectively, improving significantly over the state of the art. Moreover, we prove that AWDIT is essentially optimal, in the sense that there is a conditional lower bound of $n^{3/2}$ for any weak isolation level between RC and CC. Our experiments show that AWDIT is significantly faster than existing, highly optimized testers; e.g., for the $\sim$20% largest histories, AWDIT obtains an average speedup of $245\times$, $193\times$, and $62\times$ for RC, RA, and CC, respectively, over the best baseline.","cs.PL, cs.DB, H.2.4, D.2.5, F.2.2",2025-04-09T15:30:09+00:00,2025-04-09T15:30:09+00:00,http://arxiv.org/abs/2504.06975v1,http://arxiv.org/abs/2504.06975v1,2025-04-09 15:30:09+00:00,"\label{SEC:RELATED_WORK} The formalization of database isolation has been a subject of continuous work following various approaches, such as axiomatically via conflict graphs and variants thereof~\cite{Terry1994a,Berenson1995,Adya2000} and operational semantics~\cite{Crooks2017}. $\ToolName$ follows an axiomatic style using a visibility relation, initially developed in \cite{Burckhardt2014,Cerone2015}, and used by many current weak-isolation testers~\cite{Biswas2019,Liu2024a}. The polynomial complexity of weak isolation levels admits a unifying view, as shown in~\cite{Biswas2019}. Intuitively, this stems from the fact that $\co$ appears only in one of the edges for each isolation level in \cref{fig:isolation-levels}. This can serve as a first criterion for estimating whether a new isolation level admits polynomial-time testing. Plume~\cite{Liu2024a} splits the problem of checking consistency into showing the absence of a number of Transactional Anomalous Patterns (TAPs), each catching a certain kind of a consistency violation that (typically) involves 3 transactions and relations between them. The fine-grained complexity of each weak isolation level is subject to further insights specific to that level. $\ToolName$ achieves a significant improvement in theoretical complexity and practical performance by avoiding an exhaustive search over all TAPs. Black-box testing techniques have also been developed for strong isolation levels, most notably for Serializability~\cite{Tan2020,Geng2024} and Snapshot Isolation~\cite{Zhang2023a,Huang2023b}. Since testing for strong isolation is NP-complete \cite{Biswas2019,Papadimitriou1979a}, these testers mostly rely on SAT/SMT solving, though more efficient algorithms exist when parameterized by the number of sessions or the communication topology~\cite{Biswas2019}. Analogous consistency testing problems arise frequently in the context of shared-memory concurrent programs, where isolation levels give their place for memory models~\cite{Furbach2015}. The landmark work of \cite{Gibbons1997} shows that the problem is NP-complete for Sequential Consistency, via a reduction from the Serializability isolation level~\cite{Papadimitriou1979a}. Similar results are known for weaker memory models, such as x86-TSO, which are still relatively strong~\cite{Furbach2015}. Nevertheless, parameterization by the number of threads and the communication topology is also known to yield polynomial-time algorithms~\cite{Gibbons1994,Abdulla2019b,Chalupa2018,Mathur2020,Bui2021}. Causally-consistent memory models have also been manifested in shared memory, perhaps most prominently in the C/C++ memory model~\cite{Baty2011}. Their weak semantics were shown to allow for efficient, polynomial time consistency checks~\cite{Lahav2015}, though the problem is known to become NP-complete~\cite{Bouajjani2017a}, and even notoriously difficult to parameterize~\cite{Chakraborty2024a}, when store operations do not have unique values. On the technical level, our upper bound for $\CC$ extends a recent result for efficient consistency checks for the Strong Release-Acquire (SRA) memory model~\cite{Tunc2023} to the transactional setting.","The formalization of database isolation has been a subject of continuous work following various approaches, such as axiomatically via conflict graphs and variants thereof~\cite{Terry1994a,Berenson1995,Adya2000} and operational semantics~\cite{Crooks2017}. $\ToolName$ follows an axiomatic style using a visibility relation, initially developed in \cite{Burckhardt2014,Cerone2015}, and used by many current weak-isolation testers~\cite{Biswas2019,Liu2024a}. The polynomial complexity of weak isolation levels admits a unifying view, as shown in~\cite{Biswas2019}. Intuitively, this stems from the fact that $\co$ appears only in one of the edges for each isolation level in \cref{fig:isolation-levels}. This can serve as a first criterion for estimating whether a new isolation level admits polynomial-time testing. Plume~\cite{Liu2024a} splits the problem of checking consistency into showing the absence of a number of Transactional Anomalous Patterns (TAPs), each catching a certain kind of a consistency violation that (typically) involves 3 transactions and relations between them. The fine-grained complexity of each weak isolation level is subject to further insights specific to that level. $\ToolName$ achieves a significant improvement in theoretical complexity and practical performance by avoiding an exhaustive search over all TAPs. Black-box testing techniques have also been developed for strong isolation levels, most notably for Serializability~\cite{Tan2020,Geng2024} and Snapshot Isolation~\cite{Zhang2023a,Huang2023b}. Since testing for strong isolation is NP-complete \cite{Biswas2019,Papadimitriou1979a}, these testers mostly rely on SAT/SMT solving, though more efficient algorithms exist when parameterized by the number of sessions or the communication topology~\cite{Biswas2019}. Analogous consistency testing problems arise frequently in the context of shared-memory concurrent programs, where isolation levels give their place for memory models~\cite{Furbach2015}. The landmark work of \cite{Gibbons1997} shows that the problem is NP-complete for Sequential Consistency, via a reduction from the Serializability isolation level~\cite{Papadimitriou1979a}. Similar results are known for weaker memory models, such as x86-TSO, which are still relatively strong~\cite{Furbach2015}. Nevertheless, parameterization by the number of threads and the communication topology is also known to yield polynomial-time algorithms~\cite{Gibbons1994,Abdulla2019b,Chalupa2018,Mathur2020,Bui2021}. Causally-consistent memory models have also been manifested in shared memory, perhaps most prominently in the C/C++ memory model~\cite{Baty2011}. Their weak semantics were shown to allow for efficient, polynomial time consistency checks~\cite{Lahav2015}, though the problem is known to become NP-complete~\cite{Bouajjani2017a}, and even notoriously difficult to parameterize~\cite{Chakraborty2024a}, when store operations do not have unique values. On the technical level, our upper bound for $\CC$ extends a recent result for efficient consistency checks for the Strong Release-Acquire (SRA) memory model~\cite{Tunc2023} to the transactional setting.", 2506.01833v1,SPACE: Your Genomic Profile Predictor is a Powerful DNA Foundation Model,"Zhao Yang, Jiwei Zhu, Bing Su","Inspired by the success of unsupervised pre-training paradigms, researchers have applied these approaches to DNA pre-training. However, we argue that these approaches alone yield suboptimal results because pure DNA sequences lack sufficient information, since their functions are regulated by genomic profiles like chromatin accessibility. Here, we demonstrate that supervised training for genomic profile prediction serves as a more effective alternative to pure sequence pre-training. Furthermore, considering the multi-species and multi-profile nature of genomic profile prediction, we introduce our $\textbf{S}$pecies-$\textbf{P}$rofile $\textbf{A}$daptive $\textbf{C}$ollaborative $\textbf{E}$xperts (SPACE) that leverages Mixture of Experts (MoE) to better capture the relationships between DNA sequences across different species and genomic profiles, thereby learning more effective DNA representations. Through extensive experiments across various tasks, our model achieves state-of-the-art performance, establishing that DNA models trained with supervised genomic profiles serve as powerful DNA representation learners. The code is available at https://github.com/ZhuJiwei111/SPACE.","cs.LG, q-bio.GN",2025-06-02T16:23:05+00:00,2025-06-02T16:23:05+00:00,http://arxiv.org/abs/2506.01833v1,http://arxiv.org/abs/2506.01833v1,2025-06-02 16:23:05+00:00,"\textbf{Supervised genomic profile models} are trained to predict functional genomic profiles from DNA sequences~\citep{kathail2024leveraging}. DeepSEA~\citep{zhou2015predicting} pioneered this paradigm by leveraging convolutional neural networks (CNNs) to extract DNA sequence features for multi-task prediction. Subsequent works~\citep{kelley2018sequential, zhou2018deep, chen2022sequence} have continued to advance this direction through either more advanced architectures or larger-scale training data. Enformer~\citep{enformer}, widely recognized as the SOTA method, achieved superior prediction performance through a hybrid Transformer-CNN architecture. While these methods primarily focus on \textit{ab initio} prediction of genomic profiles from DNA sequences and directly utilize these profiles for downstream tasks such as variant effect prediction, few studies~\citep{NT} have explored whether their intermediate representations capture meaningful biological patterns. Moreover, these models, which typically adopt a shared encoder coupled with independent profile prediction heads, have not thoroughly explored more effective architectural designs that could potentially enhance both prediction performance and representation learning. \textbf{Unsupervised DNA foundation models} draw from the success of unsupervised pre-training in NLP. DNABERT~\citep{DNABert} pioneered this approach, maintaining nearly identical training methods to BERT~\citep{devlin2019bert} while adapting the tokenization scheme to 6-mers~\citep{celikkanatrevisiting} for DNA sequences. Subsequent works have continued along this direction, employing either MLM~\citep{DNABert2, NT, sanabria2024dna} or NTP~\citep{nguyen2024sequence, HyenaDNA} as unsupervised training objectives. Although these methods have made effective optimizations in terms of training data, model architectures, and tokenization strategies, they still adhere to the assumption that unsupervised pre-training on pure DNA sequences alone is sufficient for learning effective representations. Moreover, there has been little systematic comparison between these models and genomic profile prediction models in terms of their representation learning capabilities. \textbf{The MoE framework} is a conditional computation technique that selectively activates different expert networks for different inputs through sparse routing~\citep{MoE0, SparseMoE}. In Transformer-based large language models (LLMs), MoE is typically applied to feed-forward networks (FFNs) to achieve better parameter efficiency while maintaining model capacity~\citep{fedus2022switch, jiang2023mistral, deepseek}. This adaptive routing mechanism is particularly well-suited for our genomic modeling task, as it enables the model to dynamically balance between learning species-specific patterns and shared biological features, while also capturing the complex dependencies between different genomic profiles. Following common practice in Transformer architectures, we also implement MoE by replacing the FFNs in our model.","\textbf{Supervised genomic profile models} are trained to predict functional genomic profiles from DNA sequences~\citep{kathail2024leveraging}. DeepSEA~\citep{zhou2015predicting} pioneered this paradigm by leveraging convolutional neural networks (CNNs) to extract DNA sequence features for multi-task prediction. Subsequent works~\citep{kelley2018sequential, zhou2018deep, chen2022sequence} have continued to advance this direction through either more advanced architectures or larger-scale training data. Enformer~\citep{enformer}, widely recognized as the SOTA method, achieved superior prediction performance through a hybrid Transformer-CNN architecture. While these methods primarily focus on \textit{ab initio} prediction of genomic profiles from DNA sequences and directly utilize these profiles for downstream tasks such as variant effect prediction, few studies~\citep{NT} have explored whether their intermediate representations capture meaningful biological patterns. Moreover, these models, which typically adopt a shared encoder coupled with independent profile prediction heads, have not thoroughly explored more effective architectural designs that could potentially enhance both prediction performance and representation learning. \textbf{Unsupervised DNA foundation models} draw from the success of unsupervised pre-training in NLP. DNABERT~\citep{DNABert} pioneered this approach, maintaining nearly identical training methods to BERT~\citep{devlin2019bert} while adapting the tokenization scheme to 6-mers~\citep{celikkanatrevisiting} for DNA sequences. Subsequent works have continued along this direction, employing either MLM~\citep{DNABert2, NT, sanabria2024dna} or NTP~\citep{nguyen2024sequence, HyenaDNA} as unsupervised training objectives. Although these methods have made effective optimizations in terms of training data, model architectures, and tokenization strategies, they still adhere to the assumption that unsupervised pre-training on pure DNA sequences alone is sufficient for learning effective representations. Moreover, there has been little systematic comparison between these models and genomic profile prediction models in terms of their representation learning capabilities. \textbf{The MoE framework} is a conditional computation technique that selectively activates different expert networks for different inputs through sparse routing~\citep{MoE0, SparseMoE}. In Transformer-based large language models (LLMs), MoE is typically applied to feed-forward networks (FFNs) to achieve better parameter efficiency while maintaining model capacity~\citep{fedus2022switch, jiang2023mistral, deepseek}. This adaptive routing mechanism is particularly well-suited for our genomic modeling task, as it enables the model to dynamically balance between learning species-specific patterns and shared biological features, while also capturing the complex dependencies between different genomic profiles. Following common practice in Transformer architectures, we also implement MoE by replacing the FFNs in our model.","Supervised genomic profile models are trained to predict functional genomic profiles from DNA sequences (Kathail et al., 2024). DeepSEA (Zhou & Troyanskaya, 2015) pio- neered this paradigm by leveraging convolutional neural net- works (CNNs) to extract DNA sequence features for multi- task prediction. Subsequent works (Kelley et al., 2018; Zhou et al., 2018; Chen et al., 2022) have continued to advance this direction through either more advanced architectures or larger-scale training data. Enformer (Avsec et al., 2021), widely recognized as the SOTA method, achieved superior prediction performance through a hybrid Transformer-CNN architecture. While these methods primarily focus on ab initio prediction of genomic profiles from DNA sequences and directly utilize these profiles for downstream tasks such as variant effect prediction, few studies (Dalla-Torre et al., 2024) have explored whether their intermediate represen- tations capture meaningful biological patterns. Moreover, these models, which typically adopt a shared encoder cou- pled with independent profile prediction heads, have not thoroughly explored more effective architectural designs that could potentially enhance both prediction performance and representation learning. Unsupervised DNA foundation models draw from the suc- cess of unsupervised pre-training in NLP. DNABERT (Ji et al., 2021) pioneered this approach, maintaining nearly identical training methods to BERT (Devlin et al., 2019) while adapting the tokenization scheme to 6-mers (Ce- likkanat et al., 2024) for DNA sequences. Subsequent works have continued along this direction, employing either MLM (Zhou et al., 2024; Dalla-Torre et al., 2024; Sanabria et al., 2024) or NTP (Nguyen et al., 2024a;b) as unsuper- vised training objectives. Although these methods have made effective optimizations in terms of training data, model architectures, and tokenization strategies, they still adhere to the assumption that unsupervised pre-training on pure DNA sequences alone is sufficient for learning effective representations. Moreover, there has been little system- atic comparison between these models and genomic profile prediction models in terms of their representation learning capabilities. The MoE framework is a conditional computation tech- 2 SPACE: Your Genomic Profile Predictor is a Powerful DNA Foundation Model Inputs DNA Sequence (Alternating batches from ) Species EmbeddingSequence RepresentationSpecies embeddingStem Conv T ower MHABlock Cross-Species MoE Representation of sequenceRepresentation of sequenceExpert Expert Expert Expert Hidden state of sequence -Specific Gating Network Cross-Species Shared Expert PoolHidden state of sequence -Specific Gating Network Enhanced profiles prediction of Enhanced profiles prediction of Profile-Grouped Enhancement DecoderExpert Species GateLinear Projection Profile-Specific Expert-Selecte Groups Species Embedding Sequence GateSequence RepresentationBase Prediction Expert Expert Expert Cross-Profile Shared Expert Pool Enhanced Prediction Final PredictionConcatenation Add MultiplicationEnformer ModulesRefinement ModulesModule Details Categorization Figure" 2506.00382v1,"Spectral Insights into Data-Oblivious Critical Layers in Large Language Models","Xuyuan Liu, Lei Hsiung, Yaoqing Yang, Yujun Yan","Understanding how feature representations evolve across layers in large language models (LLMs) is key to improving their interpretability and robustness. While recent studies have identified critical layers linked to specific functions or behaviors, these efforts typically rely on data-dependent analyses of fine-tuned models, limiting their use to post-hoc settings. In contrast, we introduce a data-oblivious approach to identify intrinsic critical layers in pre-fine-tuned LLMs by analyzing representation dynamics via Centered Kernel Alignment(CKA). We show that layers with significant shifts in representation space are also those most affected during fine-tuning--a pattern that holds consistently across tasks for a given model. Our spectral analysis further reveals that these shifts are driven by changes in the top principal components, which encode semantic transitions from rationales to conclusions. We further apply these findings to two practical scenarios: efficient domain adaptation, where fine-tuning critical layers leads to greater loss reduction compared to non-critical layers; and backdoor defense, where freezing them reduces attack success rates by up to 40%.","cs.LG, cs.CL",2025-05-31T04:21:39+00:00,2025-05-31T04:21:39+00:00,http://arxiv.org/abs/2506.00382v1,http://arxiv.org/abs/2506.00382v1,2025-05-31 04:21:39+00:00,"\label{sec:related_work} \vspace{-2pt} \paragraph{Representation Space Analysis} Understanding representations in neural networks (NNs) has long been a significant focus of research. \citet{DBLP:conf/nips/MorcosRB18} used Canonical Correlation Analysis (CCA) to study hidden representations, providing insights into how neural networks evolve during training. \citet{DBLP:conf/nips/RaghuGYS17} and \citet{DBLP:conf/icml/Kornblith0LH19} introduced Singular Vector Canonical Correlation Analysis (SVCCA) and CKA, respectively, to compare representations across layers and networks, shedding light on NN's learning dynamics. \citet{DBLP:conf/iclr/NguyenRK21} showed blocks of contiguous hidden layers with highly similar representations in large-capacity neural networks. \citet{phang-etal-2021-fine} investigated how fine-tuning impacts the CKA similarity pattern across layers. \citet{DBLP:conf/nips/LiuCYY24} showed that representation consistency improves model performance on classification tasks. \citet{DBLP:conf/emnlp/BrownGKTK23} applied representation similarity metrics to explore generalization capabilities in language models. \citet{sun2024massive} analyzed how representations evolve across layers and contribute to final predictions in LLMs. Meanwhile, \citet{DBLP:conf/emnlp/MartinezLB24} examined the convergence dynamics of activations by comparing activation similarities across training steps for each layer during the pre-train stage, offering a deeper understanding of model behavior across different scales. \vspace{-3pt} \paragraph{Critical Layer Analysis in LLMs} Transformer-based large language models exhibit varied functionalities across their layers. For instance, \citet{DBLP:conf/nips/MengBAB22} showed that middle layers predominantly encode factual information. Similarly, \citet{DBLP:conf/emnlp/AzariaM23} found that mid-depth layers are crucial for capturing features essential for generating trustworthy responses. \citet{DBLP:conf/emnlp/ChenTGW00YY24} observed substantial changes in the representation space of some layers, which can be useful for model merging. Furthermore, \citet{DBLP:conf/emnlp/ZhaoLLZ024} identified a ""safety layer"" that correlates specific safety-related behaviors to a particular layer. \citet{jin-etal-2025-exploring} presented how concepts emerge across different layers from the view of concept learning. \citet{DBLP:journals/corr/abs-2412-09563} assessed the quality of activation of these layers using various metrics, offering deeper insights into internal evaluations. In this study, we investigate the representation dynamics of LLMs, establishing, for the first time, a connection between layer-wise representation analysis in pre-fine-tuned models and critical layer analysis in downstream fine-tuned models. Additionally, we provide spectral insights into the principal components driving change points in representation dynamics and examine their role in distilling rationales into conclusions at these critical layers.","\vspace{-2pt} \paragraph{Representation Space Analysis} Understanding representations in neural networks (NNs) has long been a significant focus of research. \citet{DBLP:conf/nips/MorcosRB18} used Canonical Correlation Analysis (CCA) to study hidden representations, providing insights into how neural networks evolve during training. \citet{DBLP:conf/nips/RaghuGYS17} and \citet{DBLP:conf/icml/Kornblith0LH19} introduced Singular Vector Canonical Correlation Analysis (SVCCA) and CKA, respectively, to compare representations across layers and networks, shedding light on NN's learning dynamics. \citet{DBLP:conf/iclr/NguyenRK21} showed blocks of contiguous hidden layers with highly similar representations in large-capacity neural networks. \citet{phang-etal-2021-fine} investigated how fine-tuning impacts the CKA similarity pattern across layers. \citet{DBLP:conf/nips/LiuCYY24} showed that representation consistency improves model performance on classification tasks. \citet{DBLP:conf/emnlp/BrownGKTK23} applied representation similarity metrics to explore generalization capabilities in language models. \citet{sun2024massive} analyzed how representations evolve across layers and contribute to final predictions in LLMs. Meanwhile, \citet{DBLP:conf/emnlp/MartinezLB24} examined the convergence dynamics of activations by comparing activation similarities across training steps for each layer during the pre-train stage, offering a deeper understanding of model behavior across different scales. \vspace{-3pt} \paragraph{Critical Layer Analysis in LLMs} Transformer-based large language models exhibit varied functionalities across their layers. For instance, \citet{DBLP:conf/nips/MengBAB22} showed that middle layers predominantly encode factual information. Similarly, \citet{DBLP:conf/emnlp/AzariaM23} found that mid-depth layers are crucial for capturing features essential for generating trustworthy responses. \citet{DBLP:conf/emnlp/ChenTGW00YY24} observed substantial changes in the representation space of some layers, which can be useful for model merging. Furthermore, \citet{DBLP:conf/emnlp/ZhaoLLZ024} identified a ""safety layer"" that correlates specific safety-related behaviors to a particular layer. \citet{jin-etal-2025-exploring} presented how concepts emerge across different layers from the view of concept learning. \citet{DBLP:journals/corr/abs-2412-09563} assessed the quality of activation of these layers using various metrics, offering deeper insights into internal evaluations. In this study, we investigate the representation dynamics of LLMs, establishing, for the first time, a connection between layer-wise representation analysis in pre-fine-tuned models and critical layer analysis in downstream fine-tuned models. Additionally, we provide spectral insights into the principal components driving change points in representation dynamics and examine their role in distilling rationales into conclusions at these critical layers.","Representation Space Analysis Understanding representations in neural networks (NNs) has long been a significant focus of research. Morcos et al. (2018) used Canonical Correlation Analysis (CCA) to study hidden representations, providing insights into how neural networks evolve during training. Raghu et al. (2017) and Kornblith et al. (2019) introduced Singular Vector Canonical Correlation Analysis (SVCCA) and CKA, respectively, to com- pare representations across layers and networks, shedding light on NN’s learning dynamics. Nguyen et al. (2021) showed blocks of contiguous hidden layers with highly similar representations in large- Table 4: Attack Success Rate (ASR) and Harmfulness Score Evaluation Across Different Models. Results demonstrate that freezing change-point layers effectively reduces the impact of attacks. LLaMA2-7B-Chat LLaMA2-13B-Chat Phi-3.0-Mini-128k-Instruct ModelASR ASR Harmful ASR ASR Harmful ASR ASR Harmful (Keyword) (GPT) (GPT) (Keyword) (GPT) (GPT) (Keyword) (GPT) (GPT) MInit 2.3% 0.0% 1.04 1.67% 0.0% 1.01 12.0% 5.3% 1.32 MFull 54.3% 35.0% 2.67 28.00% 22.00% 1.97 87.3% 74.3% 4.18 MNon-Crit. 31.3% 17.7% 1.90 16.67% 12.67% 1.57 64.3% 51.7% 3.26 MCrit. 17.0% 9.0% 1.47 6.00% 2.23% 1.27 51.3% 42.7% 2.85 capacity neural networks. Phang et al. (2021) inves- tigated how fine-tuning impacts the CKA similarity pattern across layers. Liu et al. (2024a) showed that representation consistency improves model perfor- mance on classification tasks. Brown et al. (2023) applied representation similarity metrics to explore generalization capabilities in language models. Sun et al. (2024) analyzed how representations evolve across layers and contribute to final predictions in LLMs. Meanwhile, Martinez et al. (2024) exam- ined the convergence dynamics of activations by comparing activation similarities across training steps for each layer during the pre-train stage, of- fering a deeper understanding of model behavior across different scales. Critical Layer Analysis in LLMs Transformer- based large language models exhibit varied func- tionalities across their layers. For instance, Meng et al. (2022) showed that middle layers predom- inantly encode factual information. Similarly, Azaria and Mitchell (2023) found that mid-depth layers are crucial for capturing features essential for generating trustworthy responses. Chen et al. (2024) observed substantial changes in the repre- sentation space of some layers, which can be useful for model merging. Furthermore, Zhao et al. (2024) identified a ""safety layer"" that correlates specific safety-related behaviors to a particular layer. Jin et al. (2025) presented how concepts emerge across different layers from the view of concept learning. Skean et al. (2024) assessed the quality of activa- tion of these layers using various metrics, offering deeper insights into internal evaluations. In this study, we investigate the representation dynamics of LLMs, establishing, for the first time, a connec- tion between layer-wise representation analysis in pre-fine-tuned models and critical layer analysis in downstream fine-tuned models. Additionally, we provide spectral insights into the principal com- ponents driving change points in representation dynamics and examine their role in distilling ratio-nales into conclusions at these critical layers." 2506.00205v1,"Unlocking the Power of Rehearsal in Continual Learning: A Theoretical Perspective","Junze Deng, Qinhang Wu, Peizhong Ju, Sen Lin, Yingbin Liang, Ness Shroff","Rehearsal-based methods have shown superior performance in addressing catastrophic forgetting in continual learning (CL) by storing and training on a subset of past data alongside new data in current task. While such a concurrent rehearsal strategy is widely used, it remains unclear if this approach is always optimal. Inspired by human learning, where sequentially revisiting tasks helps mitigate forgetting, we explore whether sequential rehearsal can offer greater benefits for CL compared to standard concurrent rehearsal. To address this question, we conduct a theoretical analysis of rehearsal-based CL in overparameterized linear models, comparing two strategies: 1) Concurrent Rehearsal, where past and new data are trained together, and 2) Sequential Rehearsal, where new data is trained first, followed by revisiting past data sequentially. By explicitly characterizing forgetting and generalization error, we show that sequential rehearsal performs better when tasks are less similar. These insights further motivate a novel Hybrid Rehearsal method, which trains similar tasks concurrently and revisits dissimilar tasks sequentially. We characterize its forgetting and generalization performance, and our experiments with deep neural networks further confirm that the hybrid approach outperforms standard concurrent rehearsal. This work provides the first comprehensive theoretical analysis of rehearsal-based CL.",cs.LG,2025-05-30T20:23:15+00:00,2025-05-30T20:23:15+00:00,http://arxiv.org/abs/2506.00205v1,http://arxiv.org/abs/2506.00205v1,2025-05-30 20:23:15+00:00,"\textbf{Empirical studies in CL.} CL has drawn significant attention in recent years, with numerous empirical approaches developed to mitigate the issue of catastrophic forgetting. Architecture-based approaches combat catastrophic forgetting by dynamically adjusting network parameters \citep{rusu2016progressive} or introducing architectural adaptations such as an ensemble of experts \citep{rypescdivide}. Regularization-based methods constrain model parameter updates to preserve the knowledge of previous tasks \citep{kirkpatrick2017overcoming,magistrielastic}. Memory-based methods address forgetting by storing information of old tasks in the memory and leveraging the information during current task learning, which can be further divided into orthogonal projection based methods and rehearsal-based methods. The former stores gradient information of old tasks to modify the optimization space for the current task \citep{saha2021gradient,lin2022trgp}, while the latter stores and reuses a tiny subset of representative data, known as exemplars. Exemplar sampling methods involve reservoir sampling \citep{cbrs2020} and an information-theoretic evaluation of exemplar candidates \citep{infors2022}. Other work such as \citet{shin2017continual} retains past knowledge by replaying ""pseudo-rehearsal"" constructed from input data instead of storing raw input. Rehearsal methods mostly use a concurrent scheme that trains the model using a mix of input data and sampled exemplars \citep{chaudhry2018efficient,chaudhry2019continual,rebuffi2017icarl,gargtic2024}. Other exemplar utilization methods include \citet{lopez2017gradient} and \citet{chaudhry2018efficient}, which use exemplar to impose constraints in the gradient space. \textbf{Theoretical studies in CL. } Compared to the vast amount of empirical studies in CL, the theoretical understanding of CL is very limited but has started to attract much attention very recently. \citet{bennani2020generalisation,doan2021theoretical} investigated CL performance for the orthogonal gradient descent approach in NTK models theoretically. \citet{yin2020optimization} focused on regularization-based methods and proposed a framework, which requires second-order information to approximate loss function. \citet{cao2022provable,li2022provable} characterized the benefits of continual representation learning from a theoretical perspective. \citet{evron2023continual} connected regularization-based methods with Projection Onto Convex Sets. Recently, a series of theoretical studies proposed to leverage the tools of overparameterized linear models to facilitate better understanding of CL. \citet{evron2022catastrophic} studied the performance of forgetting under such a setup. After that, \citet{lin2023theory} characterized the performance of CL, where they discuss the impact of task similarities and the task order. \citet{ding2024understanding} further characterized the impact of finite gradient descent steps on forgetting of CL. \citet{goldfarb2023analysis} illustrated the joint effect of task similarity and overparameterization. \citet{zhao2024statistical} provided a statistical analysis of regularization-based methods. More recently, \citet{li2024theory} theoretically investigated the impact of mixture-of-experts on the performance of CL in linear models. Different from all these studies, we seek to fill up the theoretical understanding for rehearsal-based CL. Note that one concurrent study \citep{banayeeanzade2024theoretical} also investigates rehearsal-based CL in linear models with concurrent rehearsal. However, one key difference here is that we propose a novel rehearsal strategy, i.e., the sequential rehearsal, and theoretically show its benefit over concurrent rehearsal for dissimilar tasks. Our theoretical results further motivate a new algorithm design for CL in practice, which demonstrates promising performance on DNNs.","\textbf{Empirical studies in CL.} CL has drawn significant attention in recent years, with numerous empirical approaches developed to mitigate the issue of catastrophic forgetting. Architecture-based approaches combat catastrophic forgetting by dynamically adjusting network parameters \citep{rusu2016progressive} or introducing architectural adaptations such as an ensemble of experts \citep{rypescdivide}. Regularization-based methods constrain model parameter updates to preserve the knowledge of previous tasks \citep{kirkpatrick2017overcoming,magistrielastic}. Memory-based methods address forgetting by storing information of old tasks in the memory and leveraging the information during current task learning, which can be further divided into orthogonal projection based methods and rehearsal-based methods. The former stores gradient information of old tasks to modify the optimization space for the current task \citep{saha2021gradient,lin2022trgp}, while the latter stores and reuses a tiny subset of representative data, known as exemplars. Exemplar sampling methods involve reservoir sampling \citep{cbrs2020} and an information-theoretic evaluation of exemplar candidates \citep{infors2022}. Other work such as \citet{shin2017continual} retains past knowledge by replaying ""pseudo-rehearsal"" constructed from input data instead of storing raw input. Rehearsal methods mostly use a concurrent scheme that trains the model using a mix of input data and sampled exemplars \citep{chaudhry2018efficient,chaudhry2019continual,rebuffi2017icarl,gargtic2024}. Other exemplar utilization methods include \citet{lopez2017gradient} and \citet{chaudhry2018efficient}, which use exemplar to impose constraints in the gradient space. \textbf{Theoretical studies in CL. } Compared to the vast amount of empirical studies in CL, the theoretical understanding of CL is very limited but has started to attract much attention very recently. \citet{bennani2020generalisation,doan2021theoretical} investigated CL performance for the orthogonal gradient descent approach in NTK models theoretically. \citet{yin2020optimization} focused on regularization-based methods and proposed a framework, which requires second-order information to approximate loss function. \citet{cao2022provable,li2022provable} characterized the benefits of continual representation learning from a theoretical perspective. \citet{evron2023continual} connected regularization-based methods with Projection Onto Convex Sets. Recently, a series of theoretical studies proposed to leverage the tools of overparameterized linear models to facilitate better understanding of CL. \citet{evron2022catastrophic} studied the performance of forgetting under such a setup. After that, \citet{lin2023theory} characterized the performance of CL, where they discuss the impact of task similarities and the task order. \citet{ding2024understanding} further characterized the impact of finite gradient descent steps on forgetting of CL. \citet{goldfarb2023analysis} illustrated the joint effect of task similarity and overparameterization. \citet{zhao2024statistical} provided a statistical analysis of regularization-based methods. More recently, \citet{li2024theory} theoretically investigated the impact of mixture-of-experts on the performance of CL in linear models. Different from all these studies, we seek to fill up the theoretical understanding for rehearsal-based CL. Note that one concurrent study \citep{banayeeanzade2024theoretical} also investigates rehearsal-based CL in linear models with concurrent rehearsal. However, one key difference here is that we propose a novel rehearsal strategy, i.e., the sequential rehearsal, and theoretically show its benefit over concurrent rehearsal for dissimilar tasks. Our theoretical results further motivate a new algorithm design for CL in practice, which demonstrates promising performance on DNNs.","Empirical studies in CL. CL has drawn significant atten- tion in recent years, with numerous empirical approaches developed to mitigate the issue of catastrophic forgetting. Architecture-based approaches combat catastrophic forget- ting by dynamically adjusting network parameters (Rusu et al., 2016) or introducing architectural adaptations such as an ensemble of experts (Rype ´s´c et al., 2024). Regularization- based methods constrain model parameter updates to pre- serve the knowledge of previous tasks (Kirkpatrick et al., 2017; Magistri et al., 2024). Memory-based methods ad- dress forgetting by storing information of old tasks in the memory and leveraging the information during current task learning, which can be further divided into orthogonal pro- jection based methods and rehearsal-based methods. The former stores gradient information of old tasks to modify the optimization space for the current task (Saha et al., 2021; Lin et al., 2022b), while the latter stores and reuses a tiny sub- set of representative data, known as exemplars. Exemplar sampling methods involve reservoir sampling (Chrysakis & Moens, 2020) and an information-theoretic evaluation of exemplar candidates (Sun et al., 2022). Other work such as Shin et al. (2017) retains past knowledge by replaying ”pseudo-rehearsal” constructed from input data instead of storing raw input. Rehearsal methods mostly use a concur- rent scheme that trains the model using a mix of input data and sampled exemplars (Chaudhry et al., 2018; Dokania et al., 2019; Rebuffi et al., 2017; Garg et al., 2024). Other exemplar utilization methods include Lopez-Paz & Ranzato (2017) and Chaudhry et al. (2018), which use exemplar to impose constraints in the gradient space. Theoretical studies in CL. Compared to the vast amount of empirical studies in CL, the theoretical understanding of CL is very limited but has started to attract much attention very recently. Bennani & Sugiyama (2020); Doan et al. (2021) investigated CL performance for the orthogonal gra- dient descent approach in NTK models theoretically. Yin et al. (2020) focused on regularization-based methods and proposed a framework, which requires second-order infor- mation to approximate loss function. Cao et al. (2022); Li et al. (2022) characterized the benefits of continual represen- 2 Unlocking the Power of Rehearsal in Continual Learning: A Theoretical Perspective tation learning from a theoretical perspective. Evron et al. (2023) connected regularization-based methods with Pro- jection Onto Convex Sets. Recently, a series of theoretical studies proposed to leverage the tools of overparameter- ized linear models to facilitate better understanding of CL. Evron et al. (2022) studied the performance of forgetting under such a setup. After that, Lin et al. (2023) character- ized the performance of CL, where they discuss the impact of task similarities and the task order. Ding et al. (2024) further characterized the impact of finite gradient descent steps on forgetting of CL. Goldfarb & Hand (2023) illus- trated the joint effect of task similarity and overparameter- ization. Zhao et al. (2024) provided a statistical analysis of regularization-based methods. More recently, Li et al. (2024) theoretically investigated the impact of mixture-of- experts on the performance of CL in linear models. Different from all these studies, we seek to fill up the theo- retical understanding for rehearsal-based CL. Note that one concurrent study (Banayeeanzade et al., 2024) also investi- gates rehearsal-based CL in linear models with concurrent rehearsal. However, one key difference here is that we pro- pose a novel rehearsal strategy, i.e., the sequential rehearsal, and theoretically show its benefit over concurrent rehearsal for dissimilar tasks. Our theoretical results further motivate a new algorithm design for CL in practice, which demon- strates promising performance on DNNs." 2505.24835v1,"Timing is important: Risk-aware Fund Allocation based on Time-Series Forecasting","Fuyuan Lyu, Linfeng Du, Yunpeng Weng, Qiufang Ying, Zhiyan Xu, Wen Zou, Haolun Wu, Xiuqiang He, Xing Tang","Fund allocation has been an increasingly important problem in the financial domain. In reality, we aim to allocate the funds to buy certain assets within a certain future period. Naive solutions such as prediction-only or Predict-then-Optimize approaches suffer from goal mismatch. Additionally, the introduction of the SOTA time series forecasting model inevitably introduces additional uncertainty in the predicted result. To solve both problems mentioned above, we introduce a Risk-aware Time-Series Predict-and-Allocate (RTS-PnO) framework, which holds no prior assumption on the forecasting models. Such a framework contains three features: (i) end-to-end training with objective alignment measurement, (ii) adaptive forecasting uncertainty calibration, and (iii) agnostic towards forecasting models. The evaluation of RTS-PnO is conducted over both online and offline experiments. For offline experiments, eight datasets from three categories of financial applications are used: Currency, Stock, and Cryptos. RTS-PnO consistently outperforms other competitive baselines. The online experiment is conducted on the Cross-Border Payment business at FiT, Tencent, and an 8.4\% decrease in regret is witnessed when compared with the product-line approach. The code for the offline experiment is available at https://github.com/fuyuanlyu/RTS-PnO.",cs.LG,2025-05-30T17:36:45+00:00,2025-05-30T17:36:45+00:00,http://arxiv.org/abs/2505.24835v1,http://arxiv.org/abs/2505.24835v1,2025-05-30 17:36:45+00:00,"\label{sec:rw} \subsection{Time Series Forecasting} Modern architectures for time series forecasting aim to extend the forecasting horizon and improve long-term accuracy. Inspired by the success of Transformer-based models in capturing long-range dependencies, researchers have explored various adaptations of the Transformer architecture for this task. These include i) reducing computational complexity to sub-quadratic levels using sparse~\cite{Informer} and hierarchical~\cite{Pyraformer} attention, ii) extending the attention mechanism’s point-wise dependency modeling to capture segment-wise~\cite{LogSparse} and patch-wise dependencies~\cite{PatchTST, Crossformer}, and iii) modifying the attention mechanism to incorporate domain-specific processing techniques~\cite{Autoformer, FEDformer}. Besides Transformer-based models, modern temporal convolutional networks have also been shown to achieve competitive performance. MICN~\cite{MICN} combines local and global convolutions to better model long sequences, while TimesNet~\cite{TimesNet} reshapes the 1D series into 2D matrices based on salient periodicities to jointly model intra-period and inter-period variations. In fact, with the recent rise of linear models~\cite{DLinear} and MLPs~\cite{TSMixer}, the de facto neural architecture for this task remains undecided. In this work, we demonstrate the wide compatibility of RTS-PtO and RTS-PnO across various model architectures. One drawback of the above-mentioned methods is the lack of uncertainty quantification. Existing approaches resort to generative modeling~\cite{DeepAR, D3VAE}, which naturally captures data variation. However, these approaches are often limited to short-term prediction, as modeling the joint data probability becomes exponentially difficult. Alternatively, we leverage the conformal prediction framework to characterize uncertainty for longer series~\cite{CF-RNN,EnbPI,EnbPI2}, which we show empirically can help achieve satisfactory performance across different datasets. \subsection{From PtO To PnO} The predict-then-optimize (PtO) can be viewed as an abstractive problem for many real-world applications, such as portfolio management or power scheduling, requiring both predicting unknown values and optimizing the target given these unknown values~\cite{Prescriptive,PtO-bound}. Such a paradigm has been recently extended to other large-scale applications, such as carrier allocation~\cite{PTOCA}, fund recommendation~\cite{PTOFA}. However, it is believed that a misalignment of targets exists between prediction and optimization stages. Researchers are increasingly interested in training the prediction model directly targeting the optimization goal, commonly known as predict-and-optimize (PnO)~\cite{PTO-PNO-Benchmark,PtOorPnO,PnO-bound} or decision-focused learning~\cite{DFL-Survey}. The core challenge is to obtain meaningful gradients for model updating, given the optimization stage. Certain researchers adopt analytical approaches and aim to make the optimization layer differentible~\cite{OptNet,Cvxpylayers}. However, these works tend to rely on strong requirements on the objective functions or constraints, restricting their application scopes in reality. Other researchers~\cite{NCE,SPO+} instead adopt surrogate loss for the optimization layer and prove its convergence both theoretically and empirically. Our RTS-PnO first extends the application of the predict-and-optimization paradigm to large-scale industrial problems.","\subsection{Time Series Forecasting} Modern architectures for time series forecasting aim to extend the forecasting horizon and improve long-term accuracy. Inspired by the success of Transformer-based models in capturing long-range dependencies, researchers have explored various adaptations of the Transformer architecture for this task. These include i) reducing computational complexity to sub-quadratic levels using sparse~\cite{Informer} and hierarchical~\cite{Pyraformer} attention, ii) extending the attention mechanism’s point-wise dependency modeling to capture segment-wise~\cite{LogSparse} and patch-wise dependencies~\cite{PatchTST, Crossformer}, and iii) modifying the attention mechanism to incorporate domain-specific processing techniques~\cite{Autoformer, FEDformer}. Besides Transformer-based models, modern temporal convolutional networks have also been shown to achieve competitive performance. MICN~\cite{MICN} combines local and global convolutions to better model long sequences, while TimesNet~\cite{TimesNet} reshapes the 1D series into 2D matrices based on salient periodicities to jointly model intra-period and inter-period variations. In fact, with the recent rise of linear models~\cite{DLinear} and MLPs~\cite{TSMixer}, the de facto neural architecture for this task remains undecided. In this work, we demonstrate the wide compatibility of RTS-PtO and RTS-PnO across various model architectures. One drawback of the above-mentioned methods is the lack of uncertainty quantification. Existing approaches resort to generative modeling~\cite{DeepAR, D3VAE}, which naturally captures data variation. However, these approaches are often limited to short-term prediction, as modeling the joint data probability becomes exponentially difficult. Alternatively, we leverage the conformal prediction framework to characterize uncertainty for longer series~\cite{CF-RNN,EnbPI,EnbPI2}, which we show empirically can help achieve satisfactory performance across different datasets. \subsection{From PtO To PnO} The predict-then-optimize (PtO) can be viewed as an abstractive problem for many real-world applications, such as portfolio management or power scheduling, requiring both predicting unknown values and optimizing the target given these unknown values~\cite{Prescriptive,PtO-bound}. Such a paradigm has been recently extended to other large-scale applications, such as carrier allocation~\cite{PTOCA}, fund recommendation~\cite{PTOFA}. However, it is believed that a misalignment of targets exists between prediction and optimization stages. Researchers are increasingly interested in training the prediction model directly targeting the optimization goal, commonly known as predict-and-optimize (PnO)~\cite{PTO-PNO-Benchmark,PtOorPnO,PnO-bound} or decision-focused learning~\cite{DFL-Survey}. The core challenge is to obtain meaningful gradients for model updating, given the optimization stage. Certain researchers adopt analytical approaches and aim to make the optimization layer differentible~\cite{OptNet,Cvxpylayers}. However, these works tend to rely on strong requirements on the objective functions or constraints, restricting their application scopes in reality. Other researchers~\cite{NCE,SPO+} instead adopt surrogate loss for the optimization layer and prove its convergence both theoretically and empirically. Our RTS-PnO first extends the application of the predict-and-optimization paradigm to large-scale industrial problems.","2.1 Time Series Forecasting Modern architectures for time series forecasting aim to extend the forecasting horizon and improve long-term accuracy. Inspired by the success of Transformer-based models in capturing long- range dependencies, researchers have explored various adaptations of the Transformer architecture for this task. These include i) re- ducing computational complexity to sub-quadratic levels using sparse [ 39] and hierarchical [ 17] attention, ii) extending the at- tention mechanism’s point-wise dependency modeling to capture segment-wise [ 14] and patch-wise dependencies [ 21,37], and iii) modifying the attention mechanism to incorporate domain-specific processing techniques [ 32,40]. Besides Transformer-based models, modern temporal convolutional networks have also been shown to achieve competitive performance. MICN [ 29] combines local and global convolutions to better model long sequences, while Times- Net [ 31] reshapes the 1D series into 2D matrices based on salient periodicities to jointly model intra-period and inter-period varia- tions. In fact, with the recent rise of linear models [ 36] and MLPs [ 6], the de facto neural architecture for this task remains undecided. In this work, we demonstrate the wide compatibility of RTS-PtO and RTS-PnO across various model architectures. One drawback of the above-mentioned methods is the lack of uncertainty quantification. Existing approaches resort to generative modeling [ 15,23], which naturally captures data variation. However, these approaches are often limited to short-term prediction, as modeling the joint data probability becomes exponentially difficult. Alternatively, we leverage the conformal prediction framework to characterize uncertainty for longer series [ 24,34,35], which we show empirically can help achieve satisfactory performance across different datasets. 2.2 From PtO To PnO The predict-then-optimize (PtO) can be viewed as an abstractive problem for many real-world applications, such as portfolio man- agement or power scheduling, requiring both predicting unknown Timing is important: Risk-aware Fund Allocation based on Time-Series Forecasting KDD ’25, August 3–7, 2025, Toronto, ON, Canada values and optimizing the target given these unknown values [ 3,5]. Such a paradigm has been recently extended to other large-scale applications, such as carrier allocation [ 33], fund recommenda- tion [ 27]. However, it is believed that a misalignment of targets exists between prediction and optimization stages. Researchers are increasingly interested in training the prediction model directly targeting the optimization goal, commonly known as predict-and- optimize (PnO) [ 12,16,28] or decision-focused learning [ 19]. The core challenge is to obtain meaningful gradients for model up- dating, given the optimization stage. Certain researchers adopt analytical approaches and aim to make the optimization layer dif- ferentible [ 1,2]. However, these works tend to rely on strong re- quirements on the objective functions or constraints, restricting their application scopes in reality. Other researchers [ 7,20] instead adopt surrogate loss for the optimization layer and prove its con- vergence both theoretically and empirically. Our RTS-PnO first extends the application of the predict-and-optimization paradigm to large-scale industrial problems." 2505.24203v1,Aligning Protein Conformation Ensemble Generation with Physical Feedback,"Jiarui Lu, Xiaoyin Chen, Stephen Zhewen Lu, Aurélie Lozano, Vijil Chenthamarakshan, Payel Das, Jian Tang","Protein dynamics play a crucial role in protein biological functions and properties, and their traditional study typically relies on time-consuming molecular dynamics (MD) simulations conducted in silico. Recent advances in generative modeling, particularly denoising diffusion models, have enabled efficient accurate protein structure prediction and conformation sampling by learning distributions over crystallographic structures. However, effectively integrating physical supervision into these data-driven approaches remains challenging, as standard energy-based objectives often lead to intractable optimization. In this paper, we introduce Energy-based Alignment (EBA), a method that aligns generative models with feedback from physical models, efficiently calibrating them to appropriately balance conformational states based on their energy differences. Experimental results on the MD ensemble benchmark demonstrate that EBA achieves state-of-the-art performance in generating high-quality protein ensembles. By improving the physical plausibility of generated structures, our approach enhances model predictions and holds promise for applications in structural biology and drug discovery.","q-bio.BM, cs.LG",2025-05-30T04:33:39+00:00,2025-05-30T04:33:39+00:00,http://arxiv.org/abs/2505.24203v1,http://arxiv.org/abs/2505.24203v1,2025-05-30 04:33:39+00:00,"\paragraph{Protein conformation generation.} Unlike structure prediction~\citep{jumper2021highly} aiming to identify a single, most-likely folded structure, protein conformation generation focuses on sampling an ensemble of physically plausible states that capture the underlying energy landscape. Boltzmann generator~\citep{noe2019boltzmann} leverages normalizing flows to approximate the Boltzmann distribution by training on simulation data. \citet{arts2023two} applies the diffusion model to capture such distribution over coarse-grained protein conformations. EigenFold~\citep{jing2023eigenfold} adopts a generative perspective on structure prediction, enabling the generation of multiple structures given an input sequence. Str2Str~\citep{lu2024str2str} introduces a score-based sampler trained exclusively on PDB data, framing conformation generation in a structure-to-structure paradigm. DiG~\citep{zheng2024predicting} trains a conditional diffusion model on both PDB and in-house simulation data. % allowing it to generate diverse conformations. ConfDiff~\citep{wang2024proteinconformationgenerationforceguided} incorporates the energy- and force-guidance during the reverse process of diffusion to enhance the accuracy of conformation generation. AlphaFlow~\citep{jing2024alphafoldmeetsflowmatching} repurposes the AlphaFold2 model into a denoising network via flow matching. ESMDiff~\citep{lu2024structure} fine-tunes the protein language model ESM3 using discrete diffusion to produce protein conformations. Finally, MDGen~\citep{jing2024generative} attempts direct generation of MD trajectories by modeling them as time-series of protein structures. \vspace{-4pt} \paragraph{Alignment methods for generative models.} Aligning generative models with desired objectives is becoming increasingly important. The Reinforcement Learning from Human Feedback (RLHF) framework optimizes models via RL using human preference rewards and has been widely applied in tasks like machine translation~\citep{kreutzer2018reliability}, summarization~\citep{stiennon2020learning}, and instruction following~\citep{ouyang2022training}. RLHF has also been applied for alignment of text-to-image diffusion models~\citep{black2023training, fan2024reinforcement}. However, RL-based fine-tuning faces significant challenges in stability and scalability. Direct Preference Optimization ~\citep{rafailov2024direct} mitigates these issues by directly optimizing for the optimal policy via re-parameterization of an implicit reward model. This approach has been extended beyond language modeling: Diffusion-DPO \cite{Wallace_2024_CVPR} for text-to-image generation, ABDPO \cite{zhou2024antigen} for antibody design using Rosetta energy \cite{alford2017rosetta}, and ALIDIFF \cite{gu2024aligning} and DECOMPDPO \cite{cheng2024decomposed} for molecular optimization in structure-based drug design. \textit{Remarks: Our method differs from existing approaches above by adopting a more general-form objective, being grounded in physically meaningful motivations, addressing a different task and demonstrating superior performance.}","\paragraph{Protein conformation generation.} Unlike structure prediction~\citep{jumper2021highly} aiming to identify a single, most-likely folded structure, protein conformation generation focuses on sampling an ensemble of physically plausible states that capture the underlying energy landscape. Boltzmann generator~\citep{noe2019boltzmann} leverages normalizing flows to approximate the Boltzmann distribution by training on simulation data. \citet{arts2023two} applies the diffusion model to capture such distribution over coarse-grained protein conformations. EigenFold~\citep{jing2023eigenfold} adopts a generative perspective on structure prediction, enabling the generation of multiple structures given an input sequence. Str2Str~\citep{lu2024str2str} introduces a score-based sampler trained exclusively on PDB data, framing conformation generation in a structure-to-structure paradigm. DiG~\citep{zheng2024predicting} trains a conditional diffusion model on both PDB and in-house simulation data. % allowing it to generate diverse conformations. ConfDiff~\citep{wang2024proteinconformationgenerationforceguided} incorporates the energy- and force-guidance during the reverse process of diffusion to enhance the accuracy of conformation generation. AlphaFlow~\citep{jing2024alphafoldmeetsflowmatching} repurposes the AlphaFold2 model into a denoising network via flow matching. ESMDiff~\citep{lu2024structure} fine-tunes the protein language model ESM3 using discrete diffusion to produce protein conformations. Finally, MDGen~\citep{jing2024generative} attempts direct generation of MD trajectories by modeling them as time-series of protein structures. \vspace{-4pt} \paragraph{Alignment methods for generative models.} Aligning generative models with desired objectives is becoming increasingly important. The Reinforcement Learning from Human Feedback (RLHF) framework optimizes models via RL using human preference rewards and has been widely applied in tasks like machine translation~\citep{kreutzer2018reliability}, summarization~\citep{stiennon2020learning}, and instruction following~\citep{ouyang2022training}. RLHF has also been applied for alignment of text-to-image diffusion models~\citep{black2023training, fan2024reinforcement}. However, RL-based fine-tuning faces significant challenges in stability and scalability. Direct Preference Optimization ~\citep{rafailov2024direct} mitigates these issues by directly optimizing for the optimal policy via re-parameterization of an implicit reward model. This approach has been extended beyond language modeling: Diffusion-DPO \cite{Wallace_2024_CVPR} for text-to-image generation, ABDPO \cite{zhou2024antigen} for antibody design using Rosetta energy \cite{alford2017rosetta}, and ALIDIFF \cite{gu2024aligning} and DECOMPDPO \cite{cheng2024decomposed} for molecular optimization in structure-based drug design. \textit{Remarks: Our method differs from existing approaches above by adopting a more general-form objective, being grounded in physically meaningful motivations, addressing a different task and demonstrating superior performance.}","Protein conformation generation. Unlike structure pre- diction (Jumper et al., 2021) aiming to identify a single, most-likely folded structure, protein conformation genera- tion focuses on sampling an ensemble of physically plau- sible states that capture the underlying energy landscape. Boltzmann generator (No ´e et al., 2019) leverages normaliz- ing flows to approximate the Boltzmann distribution by train- ing on simulation data. Arts et al. (2023) applies the diffu- sion model to capture such distribution over coarse-grained protein conformations. EigenFold (Jing et al., 2023) adopts a generative perspective on structure prediction, enabling the generation of multiple structures given an input sequence. Str2Str (Lu et al., 2024b) introduces a score-based sampler trained exclusively on PDB data, framing conformation gen- eration in a structure-to-structure paradigm. DiG (Zheng et al., 2024) trains a conditional diffusion model on both PDB and in-house simulation data. ConfDiff (Wang et al., 2024) incorporates the energy- and force-guidance during the reverse process of diffusion to enhance the accuracy of conformation generation. AlphaFlow (Jing et al., 2024a) repurposes the AlphaFold2 model into a denoising network via flow matching. ESMDiff (Lu et al., 2024a) fine-tunes the protein language model ESM3 using discrete diffusion to produce protein conformations. Finally, MDGen (Jing et al., 2024b) attempts direct generation of MD trajectories by modeling them as time-series of protein structures. Alignment methods for generative models. Aligning generative models with desired objectives is becoming in- creasingly important. The Reinforcement Learning from Human Feedback (RLHF) framework optimizes models via RL using human preference rewards and has been widely ap- plied in tasks like machine translation (Kreutzer et al., 2018), summarization (Stiennon et al., 2020), and instruction fol- lowing (Ouyang et al., 2022). RLHF has also been applied for alignment of text-to-image diffusion models (Black et al., 2023; Fan et al., 2024). However, RL-based fine-tuning faces significant challenges in stability and scalability. Di- rect Preference Optimization (Rafailov et al., 2024) mit- Figure 3. (Top) Structure ensembles for the target 6uof Ain AT- LAS test set with RMSF correlation rlabeled. (Bottom) C α- RMSF versus the residue index (N →C terminus from left to right). igates these issues by directly optimizing for the optimal policy via re-parameterization of an implicit reward model. This approach has been extended beyond language model- ing: Diffusion-DPO (Wallace et al., 2024) for text-to-image generation, ABDPO (Zhou et al., 2024) for antibody design using Rosetta energy (Alford et al., 2017), and ALIDIFF (Gu et al., 2024) and DECOMPDPO (Cheng et al., 2024) for molecular optimization in structure-based drug design. Remarks: Our method differs from existing approaches above by adopting a more general-form objective, being grounded in physically meaningful motivations, addressing a different task and demonstrating superior performance." 2506.02847v1,"CLONE: Customizing LLMs for Efficient Latency-Aware Inference at the Edge","Chunlin Tian, Xinpeng Qin, Kahou Tam, Li Li, Zijian Wang, Yuanzhe Zhao, Minglei Zhang, Chengzhong Xu","Deploying large language models (LLMs) on edge devices is crucial for delivering fast responses and ensuring data privacy. However, the limited storage, weight, and power of edge devices make it difficult to deploy LLM-powered applications. These devices must balance latency requirements with energy consumption and model accuracy. In this paper, we first quantify the challenges of deploying LLMs on off-the-shelf edge devices and then we present CLONE, an in-depth algorithm-hardware co-design at both the model- and system-level that intelligently integrates real-time, energy optimization while maintaining robust generality. In order to maximize the synergistic benefits of these algorithms in always-on and intermediate edge computing settings, we specialize in a 28nm scalable hardware accelerator system. We implement and extensively evaluate CLONE on two off-the-shelf edge platforms. Experiments show that CLONE effectively accelerates the inference process up to 11.92x, and saves energy up to 7.36x, while maintaining high-generation.","cs.AR, cs.SY, eess.SY",2025-06-03T13:16:00+00:00,2025-06-03T13:16:00+00:00,http://arxiv.org/abs/2506.02847v1,http://arxiv.org/abs/2506.02847v1,2025-06-03 13:16:00+00:00,"LLMs are highly memory-, compute-, and energy-intensive~\cite{GPT4, PaLM, llama, Llama_2, anthropic_claude, gemma}, driving most ""billion-parameter"" inference to the cloud~\cite{AIIndex2024, apple_ai, song2024powerinfer, yuan2023mobile, app_3, app5_robot, app6_laptops}. However, as edge devices become more powerful, there is increasing interest in executing LLM inference on the edge~\cite{apple_ai, song2024powerinfer, yuan2023mobile, app_3, app5_robot, app6_laptops, ExeGPT, Splitwise, PagedAttention, Just-in-time, llm_flash}, which enhances data privacy and enables real-time service delivery. To address edge resource constraints, techniques such as model architecture search~\cite{jawahar2023llm, huang2024new, liu2024optimizing}, quantization~\cite{hubara2018quantized, GPT3.int8, Just-in-time, Deja, SmoothQuant}, pruning~\cite{cnn_pruning, Deja, Pruner-Zero, SparseGPT}, and knowledge distillation~\cite{DistiLLM, MiniLLM} have been proposed. However, most approaches focus solely on model-level optimization, neglecting system-level trade-offs like storage and weight efficiency. Advances in LLM compilers and software stacks~\cite{pytorch-1, tensorflow, deepspeed, huggingface_transformers} have enabled integration with co-processors and near-sensor processing~\cite{song2024powerinfer, ExeGPT, PagedAttention, Just-in-time, llm_flash, Splitwise, flexgen, Orca,zhao4,zhao5,zhao6,zhao7,zhao8}, but these often add computational and communication overhead, reducing edge device longevity and hindering concurrent application execution. Existing model customization methods~\cite{tambe2021edgebert, zhao2023approxcaliper, liberis2023differentiable, zhao2024felix, tam2024fedhybrid} cannot directly address task-agnostic LLMs, where generalization is crucial. The unique characteristics of LLM decoder layers and stochastic outputs present untapped opportunities for hardware optimization. For system-level optimization, DVFS~\cite{dvfsasplos, dfvs-4, dvfs-2, dvfs-3, bateni2020neuos} has been widely used to dynamically adjust processor voltage and frequency. However, most DVFS strategies are designed for discriminative models like CNNs and RNNs, treating networks as black boxes. Generative LLMs, with their auto-regressive inference and stochastic prompt variability, remain underexplored in this context. \new{Edge–cloud collaboration offers a pragmatic middle ground between fully local and fully cloud‑based inference. In practice, the edge should remain the first line of execution for latency‑critical or privacy‑sensitive workloads, running compact SLMs that fit the device’s power and memory envelope. When a request exceeds local capacity—e.g., requires deeper reasoning, broader context, or larger knowledge—\model~ can transparently escalate the call to a cloud‑resident LLM. This selective offloading preserves real‑time responsiveness, keeps private data on‑device whenever possible, and amortizes bandwidth and compute costs by invoking the cloud only for the fraction of tasks that truly need it.} % \textit{These gaps necessitate an edge-optimized LLM system that seamlessly integrates model- and system-level enhancements to balance energy efficiency, latency, and model performance.}","LLMs are highly memory-, compute-, and energy-intensive~\cite{GPT4, PaLM, llama, Llama_2, anthropic_claude, gemma}, driving most ""billion-parameter"" inference to the cloud~\cite{AIIndex2024, apple_ai, song2024powerinfer, yuan2023mobile, app_3, app5_robot, app6_laptops}. However, as edge devices become more powerful, there is increasing interest in executing LLM inference on the edge~\cite{apple_ai, song2024powerinfer, yuan2023mobile, app_3, app5_robot, app6_laptops, ExeGPT, Splitwise, PagedAttention, Just-in-time, llm_flash}, which enhances data privacy and enables real-time service delivery. To address edge resource constraints, techniques such as model architecture search~\cite{jawahar2023llm, huang2024new, liu2024optimizing}, quantization~\cite{hubara2018quantized, GPT3.int8, Just-in-time, Deja, SmoothQuant}, pruning~\cite{cnn_pruning, Deja, Pruner-Zero, SparseGPT}, and knowledge distillation~\cite{DistiLLM, MiniLLM} have been proposed. However, most approaches focus solely on model-level optimization, neglecting system-level trade-offs like storage and weight efficiency. Advances in LLM compilers and software stacks~\cite{pytorch-1, tensorflow, deepspeed, huggingface_transformers} have enabled integration with co-processors and near-sensor processing~\cite{song2024powerinfer, ExeGPT, PagedAttention, Just-in-time, llm_flash, Splitwise, flexgen, Orca,zhao4,zhao5,zhao6,zhao7,zhao8}, but these often add computational and communication overhead, reducing edge device longevity and hindering concurrent application execution. Existing model customization methods~\cite{tambe2021edgebert, zhao2023approxcaliper, liberis2023differentiable, zhao2024felix, tam2024fedhybrid} cannot directly address task-agnostic LLMs, where generalization is crucial. The unique characteristics of LLM decoder layers and stochastic outputs present untapped opportunities for hardware optimization. For system-level optimization, DVFS~\cite{dvfsasplos, dfvs-4, dvfs-2, dvfs-3, bateni2020neuos} has been widely used to dynamically adjust processor voltage and frequency. However, most DVFS strategies are designed for discriminative models like CNNs and RNNs, treating networks as black boxes. Generative LLMs, with their auto-regressive inference and stochastic prompt variability, remain underexplored in this context. \new{Edge–cloud collaboration offers a pragmatic middle ground between fully local and fully cloud‑based inference. In practice, the edge should remain the first line of execution for latency‑critical or privacy‑sensitive workloads, running compact SLMs that fit the device’s power and memory envelope. When a request exceeds local capacity—e.g., requires deeper reasoning, broader context, or larger knowledge—\model~ can transparently escalate the call to a cloud‑resident LLM. This selective offloading preserves real‑time responsiveness, keeps private data on‑device whenever possible, and amortizes bandwidth and compute costs by invoking the cloud only for the fraction of tasks that truly need it.} % \textit{These gaps necessitate an edge-optimized LLM system that seamlessly integrates model- and system-level enhancements to balance energy efficiency, latency, and model performance.}","2.1 Large Language Models LLM Architecture. In contrast to traditional deep neu- ral networks (DNNs) and convolutional neural networks (CNNs) [38, 76], which integrate diverse types of layers (e.g., convolutional (CONV), fully connected (FC), recurrent (RC), pooling, etc.) designed for specific tasks, large language mod- els (LLMs) predominantly consist of a uniform stack of trans- former decoder layers. For instance, Llama-7B [132] adopts Usenix ATC is no more? LLM Iteration 1 LLM Iteration 2LLM Iteration 3LLM Iteration 4 Token Generation/Decoding Prompt/PrefillKV CacheKV CacheKV Cache Legacy still lives EOSFigure 1: Overview of LLMs autoregressive inference. a homogeneous architecture composed of 32 identical Lla- maDecoderLayers. Each decoder layer encompasses two core components: LlamaAttention and LlamaMLP. Despite the structural uniformity across decoder layers, their contribu- tions to model efficiency and effectiveness vary significantly [26, 137]. Consequently, optimizing the inference execution of LLMs necessitates a detailed analysis of the individual impact of each layer (§3 .1). LLMs Inference. LLMs process a structured sequence in- volving multiple forward passes through the model to se- quentially generate each output token. Figure 1 shows the inference process with a simple example. Typically, this pro- cess mainly contains two stages [3,75,118]. 1) Pre-fill takes a prompt sequence and generates the key-value (KV) cache for each Transformer layer of LLM. Upon receiving the prompt “Usenix ATC is no more?” , the tokenizer embeds input as to- kens, denoted as Xin∈Rn×d, where dis the hidden size and n is the length of input token. Then, the LLM handles all input tokens in parallel during a single forward iteration to gener- ate a KV cache. The output of attention is sent to MLP to generate the first output token “Legacy” . Large-scale matrix multiplications are required to generate the KV cache, which makes the pre-fill computing intensive. 2) Decoding utilizes and updates the KV cache to generate tokens step-by-step. Following the generation of the first token, the LLM leverages the KV caches prepared earlier and adds new information to them. The creation of each new token is influenced by the to- kens generated before it. During each token generation, for the input Xdec∈R1×d, attention layers load the previously stored KV cache, and new KV pairs are computed and concatenated to the existing cache. The output of the last decoder layer is sent to the final prediction layer to predict the next token sequentially. It executes iteratively until an End of Sequence (EOS) token is encountered or a predefined termination crite- rion is met. Unlike traditional models with fixed input formats and structured workflows [38,76], LLM inputs and outputs are highly non-deterministic. This stems from the diverse, open- ended nature of user prompts, which vary widely in structure, intent, and context [15, 87, 89]. Additionally, autoregressive token generation is inherently probabilistic, driven by sam- pling methods [50, 111, 115, 132] and variability in training datasets [11, 13, 134], making outputs context-sensitive.2.2 Bottlenecks of Deploying Edge LLMs Table 1 lists specifications for server-level and edge-level processors commonly used for ML workloads, highlighting resource collapse. Despite the potential benefits, including privacy preservation and instant responses without depending on a stable internet connection [42, 105], deploying LLMs on the edge faces the following critical bottlenecks. 1) High Memory Footprint. The main contributors of “billion-parameter” LLMs are model weights (memory is occupied by the model parameters) and KV cache (mem- ory is occupied by the caching of self-attention tensors to avoid redundant computation). For example, Llama-7B in 16-bit precision requires approximately 14GB memory (7B×sizeof(FP16) ). Its architecture with 32 layers, 32 heads per layer, and a head dimension of 128 incurs a memory cost of0.5MB per token, accounting for K and V matrices. Con- sequently, processing 4096 tokens demands 2GB, limiting the size of models that can be deployed on edge devices with 4–12GB memory [125] and often causing Out-of-Memory (OOM) errors. Table 1: Popular ML hardware specifications. GPU Types Peak Perf. Memory Bandwidth Peak Power Server-level NVIDIA A100" 2505.22194v1,Refining Datapath for Microscaling ViTs,"Can Xiao, Jianyi Cheng, Aaron Zhao","Vision Transformers (ViTs) leverage the transformer architecture to effectively capture global context, demonstrating strong performance in computer vision tasks. A major challenge in ViT hardware acceleration is that the model family contains complex arithmetic operations that are sensitive to model accuracy, such as the Softmax and LayerNorm operations, which cannot be mapped onto efficient hardware with low precision. Existing methods only exploit parallelism in the matrix multiplication operations of the model on hardware and keep these complex operations on the CPU. This results in suboptimal performance due to the communication overhead between the CPU and accelerator. Can new data formats solve this problem? In this work, we present the first ViT accelerator that maps all operations of the ViT models onto FPGAs. We exploit a new arithmetic format named Microscaling Integer (MXInt) for datapath designs and evaluate how different design choices can be made to trade off accuracy, hardware performance, and hardware utilization. Our contributions are twofold. First, we quantize ViTs using the MXInt format, achieving both high area efficiency and accuracy. Second, we propose MXInt-specific hardware optimization that map these complex arithmetic operations into custom hardware. Within 1\% accuracy loss, our method achieves at least 93$\times$ speedup compared to Float16 and at least 1.9$\times$ speedup compared to related work.",cs.AR,2025-05-28T10:15:37+00:00,2025-05-28T10:15:37+00:00,http://arxiv.org/abs/2505.22194v1,http://arxiv.org/abs/2505.22194v1,2025-05-28 10:15:37+00:00,"\label{sec:related_work} % \subsection{Microscaling Quantization} {\em Microscaling Quantization:} Sharing certain components for a block of values has been widely recognized as the state-of-the-art technique for quantizing CNNs~\cite{lin2017accurate, zhang2018lqnets}. Further explorations within this line of research have investigated grouping numbers at various granularities, including layer-wise~\cite{wu2018training}, channel-wise~\cite{krishnamoorthi2018quantizing}, and vector-wise quantization~\cite{dai2021vs}. In addition, many block floating-point variants~\cite{harma2022accuracy, dai2021vs, darvish2020pushing} have been proposed, with the core idea of grouping values into multiple blocks and elements within each block sharing common digits. Moreover, adjusting block sizes and mantissa bit widths across layers provides finer quantization. The closest piece related to our work is by Darvish \textit{et al.}~\cite{darvish2020pushing} that proposes MXInt quantization for DNN accelerators. This work is later extended to multi-level MX quantization, also known as MXFP, where the shared component can be non-integers~\cite{darvish2023shared}. They focus on MXInt quantization and overlook hardware optimization, while our work proposes MXInt-specific datapath optimization with design space exploration. % \subsection{Quantized Transformer Accelerators} {\em Quantized Transformer Accelerators:} Quantization for efficient ML inference on accelerators has been widely studied~\cite{andri2022going, song2020drq, zadeh2022mokey, zhao2021cambricon}, especially using fixed-point numbers~\cite{wang2019haq, dettmers2022llm, frantar2022gptq, dong2019hawq, xiao2022smoothquant, yao2022zeroquant, liu2023psq, liu2024spark}. Other work customizes hardware architectures for efficient inference~\cite{chang2021mix, wu2023msd, sharma2018bit, fan2022adaptable, ham20203, ham2021elsa, hong2022dfx, kao2023flat, li2020ftrans, lu2021sanger}. GOBO~\cite{zadeh2020gobo} and EdgeBERT~\cite{tambe2021edgebert} exploit software and hardware co-designs for accelerating transformers. FACT~\cite{qin2023fact} and FlightLLM~\cite{zeng2024flightllm} exploit mixed-precision quantization using fixed-point numbers on linear layers. They only exploit quantization with fixed-point numbers, while we target MXInt quantization. In the domain of ViT accelerators, existing work focuses on fixed-point quantization~\cite{li2022auto, dong2023heatvit, huang2023integer}, while we propose MXInt quantization with hardware optimizations. They only accelerate part of the models on the FPGA, while our hardware accelerator computes the complete workload of the model.","% \subsection{Microscaling Quantization} {\em Microscaling Quantization:} Sharing certain components for a block of values has been widely recognized as the state-of-the-art technique for quantizing CNNs~\cite{lin2017accurate, zhang2018lqnets}. Further explorations within this line of research have investigated grouping numbers at various granularities, including layer-wise~\cite{wu2018training}, channel-wise~\cite{krishnamoorthi2018quantizing}, and vector-wise quantization~\cite{dai2021vs}. In addition, many block floating-point variants~\cite{harma2022accuracy, dai2021vs, darvish2020pushing} have been proposed, with the core idea of grouping values into multiple blocks and elements within each block sharing common digits. Moreover, adjusting block sizes and mantissa bit widths across layers provides finer quantization. The closest piece related to our work is by Darvish \textit{et al.}~\cite{darvish2020pushing} that proposes MXInt quantization for DNN accelerators. This work is later extended to multi-level MX quantization, also known as MXFP, where the shared component can be non-integers~\cite{darvish2023shared}. They focus on MXInt quantization and overlook hardware optimization, while our work proposes MXInt-specific datapath optimization with design space exploration. % \subsection{Quantized Transformer Accelerators} {\em Quantized Transformer Accelerators:} Quantization for efficient ML inference on accelerators has been widely studied~\cite{andri2022going, song2020drq, zadeh2022mokey, zhao2021cambricon}, especially using fixed-point numbers~\cite{wang2019haq, dettmers2022llm, frantar2022gptq, dong2019hawq, xiao2022smoothquant, yao2022zeroquant, liu2023psq, liu2024spark}. Other work customizes hardware architectures for efficient inference~\cite{chang2021mix, wu2023msd, sharma2018bit, fan2022adaptable, ham20203, ham2021elsa, hong2022dfx, kao2023flat, li2020ftrans, lu2021sanger}. GOBO~\cite{zadeh2020gobo} and EdgeBERT~\cite{tambe2021edgebert} exploit software and hardware co-designs for accelerating transformers. FACT~\cite{qin2023fact} and FlightLLM~\cite{zeng2024flightllm} exploit mixed-precision quantization using fixed-point numbers on linear layers. They only exploit quantization with fixed-point numbers, while we target MXInt quantization. In the domain of ViT accelerators, existing work focuses on fixed-point quantization~\cite{li2022auto, dong2023heatvit, huang2023integer}, while we propose MXInt quantization with hardware optimizations. They only accelerate part of the models on the FPGA, while our hardware accelerator computes the complete workload of the model.","I. I NTRODUCTION Hardware acceleration for transformers has shown sig- nificant performance benefits compared to general proces- sors [1], [2], [3], among which Vision Transformers (ViTs) offer promising performance for capturing global image re- lationships [4]. Compared to traditional Convolutional Neural Networks (CNNs), ViTs present new model features: 1) these models often contain millions of parameters, leading to a large memory size; and 2) they contain non-linear operations, requiring complex hardware operator designs. Traditional techniques for ViT acceleration focus on 1) integer quantization and2) datapath optimization , exploiting the approximation tolerance of ViT models. First, integer quantization represents numbers as small integers, optionally with a scaling factor, leading to both smaller memory and cir- cuit area [2], [3]. Second, the datapath optimization determines new designs with simpler logic and similar results, leading to a smaller circuit area [5]. Still, the non-linear operations in ViT, such as LayerNorm and Softmax, face challenges in efficient acceleration. These operations contain complex mathematical operations, such as exp() and sqrt() , and require large value ranges, restricting existing integer quantization. Existing design methods rely on the CPU and only accelerate part of the ViT models in FPGA fabric [2], [3]. This leads to a working but complex systemTABLE I: Our MXInt design method maps all non-linear operations in ViTs into efficient hardware, achieving lower bitwidths than traditional fixed-point designs." 2505.11554v1,"Multi-Objective Memory Bandwidth Regulation and Cache Partitioning for Multicore Real-Time Systems","Binqi Sun, Zhihang Wei, Andrea Bastoni, Debayan Roy, Mirco Theile, Tomasz Kloda, Rodolfo Pellizzoni, Marco Caccamo","Memory bandwidth regulation and cache partitioning are widely used techniques for achieving predictable timing in real-time computing systems. Combined with partitioned scheduling, these methods require careful co-allocation of tasks and resources to cores, as task execution times strongly depend on available allocated resources. To address this challenge, this paper presents a 0-1 linear program for task-resource co-allocation, along with a multi-objective heuristic designed to minimize resource usage while guaranteeing schedulability under a preemptive EDF scheduling policy. Our heuristic employs a multi-layer framework, where an outer layer explores resource allocations using Pareto-pruned search, and an inner layer optimizes task allocation by solving a knapsack problem using dynamic programming. To evaluate the performance of the proposed optimization algorithm, we profile real-world benchmarks on an embedded AMD UltraScale+ ZCU102 platform, with fine-grained resource partitioning enabled by the Jailhouse hypervisor, leveraging cache set partitioning and MemGuard for memory bandwidth regulation. Experiments based on the benchmarking results show that the proposed 0-1 linear program outperforms existing mixed-integer programs by finding more optimal solutions within the same time limit. Moreover, the proposed multi-objective multi-layer heuristic performs consistently better than the state-of-the-art multi-resource-task co-allocation algorithm in terms of schedulability, resource usage, number of non-dominated solutions, and computational efficiency.","math.OC, cs.AR, cs.DC, cs.OS",2025-05-15T16:40:14+00:00,2025-05-15T16:40:14+00:00,http://arxiv.org/abs/2505.11554v1,http://arxiv.org/abs/2505.11554v1,2025-05-15 16:40:14+00:00,"\label{sec:literature} In this section, we give an overview of the related works on task and resource allocation strategies for real-time systems and discuss the differences between our proposed algorithm and the state-of-the-art resource-task co-allocation methods. \subsection{Task Allocation} Mapping tasks statically to individual processors is widely used in industry practice due to its low scheduling overhead~\cite{DBLP:conf/rtss/BrandenburgG16}. However, since the task allocation problem is NP-hard in the strong sense~\cite{ekberg2021partitioned}, many approximation methods have been developed for both preemptive~\cite{Burchard:1995,Dhall:1978,Baruah:2005,Lopez:2000} and non-preemptive~\cite{Fisher:2006,Senoussaoui:2020} scheduling policies. These methods have also been extended to support parallel task scheduling, including directed acyclic graphs (DAGs)~\cite{fonseca2016response,casini2018partitioned,Zahaf:2020} and gang tasks~\cite{Ueter:2021,sun2024strict,sun2024partitioned}, as well as to take into account inter-task interference~\cite{Zahaf:2021}. On the other hand, exact approaches to the partitioning problem use optimization techniques such as mixed-integer linear programming (MILP)~\cite{abeni2022partitioning,Mo:2023}. While MILP formulations can provide exact solutions, their scalability remains a challenge, particularly in systems with a large number of tasks or processors. A detailed discussion on the precise complexity classes of a list of real-time task allocation problems can be found in~\cite{ekberg2021partitioned}. \subsection{Resource Allocation} Cache and memory bandwidth are two critical resources to be partitioned for achieving timing predictability in real-time systems. A widely adopted \emph{software-based} approach to cache partitioning is \emph{cache coloring}, which has been implemented at both the operating system (OS)~\cite{MDBCCP:13, Kim16:EMSOFT, KWCFAS:17} and hypervisor levels~\cite{KSMCV:19, xilinx-xen-cache-color}. In this paper, we rely on a cache-coloring implementation available in the Jailhouse hypervisor~\cite{minerva-jailhouse}. Alternatively, caches can also be partitioned via hardware modifications (\eg,~\cite{Survey-Way-Part}) or by exploiting hardware support such as the Arm DSU~\cite{arm-dynamiciq}, which is only available on very recent embedded Arm platforms and notably not yet supported on our Ultrascale+. Similarly to caches, memory bandwidth partitions can be assigned in software leveraging hardware features such as Performance Monitoring Units (PMUs). For example, MemGuard~\cite{yun2013memguard} and MemPol~\cite{MemPol} propose a per-core memory bandwidth partitioning using PMU-based counters. Hardware modifications to generally improve the predictability of memory accesses have also been proposed (\eg,~\cite{hassan2019reduced, BRU:20}). Intel RDT~\cite{intel-rdt} supports partitioning of both caches and memory bandwidth and has been used in \eg,~\cite{XPCLLLL:19}. Nonetheless, real-time characteristics of Intel RDT have been found to be not always effective~\cite{SBMYK:22}. Arm MPAM~\cite{arm-mpam} is a recent specification with partitioning capabilities similar to Intel RDT, but to date, no available implementations for COTS platforms exist. Building on these cache and memory bandwidth partitioning methods, various allocation strategies have been developed to effectively dedicate resources to real-time tasks and improve schedulability. For caches, approaches such as branch-and-bound~\cite{Altmeyer:2014,Altmeyer:2016}, genetic algorithms~\cite{Bui:2008,Meroni:2023}, and guided-local search~\cite{sun2023minimizing,sun2024minimizing} have been proposed to optimize how cache partitions are assigned to real-time workloads. Similar efforts exist for memory bandwidth allocation. Aghilinasab~\etal~\cite{aghilinasab2020dynamic} present a dynamic scheme that monitors and reallocates memory bandwidth between real-time and best-effort tasks, adapting to runtime variations. Park~\etal~\cite{park2019copart} further propose a coordinated approach for LLC and memory bandwidth partitioning, targeting workload fairness rather than hard timing guarantees. \subsection{Task and Cache Co-Allocation} Beyond independent allocation strategies for real-time tasks and resources, the co-allocation of tasks and cache partitions has been explored to further enhance real-time schedulability. Under preemptive EDF scheduling, Chisholm~\etal~\cite{CWKA:15} introduce MC$^2$, a linear programming-based optimization framework for mixed-criticality multicore real-time systems. Kim and Rajkumar~\cite{Kim16:EMSOFT} develop a cache management scheme for cache-to-task allocation and later proposed a cache-aware task allocation algorithm tailored for virtual machine design. The task and cache allocation under non-preemptive scheduling introduces additional challenges due to blocking effects, making task utilization an insufficient sole indicator for schedulability. Berna and Puaut~\cite{Berna:2012} propose a period-driven task and cache partitioning algorithm under non-preemptive EDF scheduling, prioritizing task period compatibility as the primary partitioning criterion. Paolieri~\etal~\cite{Paolieri:2011} introduce IA$^3$, an interference-aware allocation algorithm focusing on WCET sensitivity. Sun~\etal~\cite{sun2023co} develop a search-based algorithm leveraging a first-fit heuristic for task allocation and propose two heuristic variants considering task period compatibility and cache sensitivity as the task ordering criteria. However, these works do not consider memory bandwidth allocation. Ignoring contention on the shared memory bus can compromise system predictability while assuming uniform memory bandwidth partitioning limits the flexibility needed to optimize schedulability effectively. \subsection{Task and Multi-Resource Co-Allocation} \label{sec:related_work_multi_resource_co_alloc} Recent studies~\cite{XPCLLLL:19,Meng:2019,Nie:2022,gifford2021dna} have also explored the problem of task and multi-resource co-allocation. %Meng:2019, Meng \etal~\cite{Meng:2019} propose a multi-resource allocation framework for real-time multicore virtualization, incorporating techniques to mitigate abstraction overhead. Nie \etal~\cite{Nie:2022} investigate federated scheduling for parallel tasks, where each task is assigned a set of cores along with cache and memory bandwidth partitions while ensuring schedulability conditions. Gifford \etal~\cite{gifford2021dna} employ a worst-fit bin-packing approach to allocate soft real-time tasks to cores, dynamically adjusting cache and memory bandwidth partitions based on task deadlines. The most \emph{closely related work} to ours is \cite{XPCLLLL:19}, which also addresses the co-allocation of tasks and resources, including memory bandwidth and cache partitions. Their algorithm, CaM, represents the current state-of-the-art and outperforms other previous methods. It employs a k-means algorithm to classify tasks into different clusters, followed by a task-cluster-to-core allocation heuristic based on first-fit bin-packing and a resource-to-core allocation heuristic based on resource utility. Our method, MMO, improves upon CaM in three key ways: \begin{itemize} \item The multi-layer search strategy of MMO provides a more thorough exploration of task and resource co-allocation possibilities. \item The inner-layer task allocation in MMO is formulated as a knapsack problem, enabling the use of dynamic programming to achieve optimal task assignments that maximize core utilization. \item MMO is a multi-objective heuristic that simultaneously optimizes the usage of multiple resources, yielding multiple non-dominated solutions, while CaM can only produce a single solution. \end{itemize} In Section~\ref{sec:exp}, we present extensive experimental evaluations to compare the performance of MMO with CaM in terms of schedulability, resource usage, and computation efficiency.","In this section, we give an overview of the related works on task and resource allocation strategies for real-time systems and discuss the differences between our proposed algorithm and the state-of-the-art resource-task co-allocation methods. \subsection{Task Allocation} Mapping tasks statically to individual processors is widely used in industry practice due to its low scheduling overhead~\cite{DBLP:conf/rtss/BrandenburgG16}. However, since the task allocation problem is NP-hard in the strong sense~\cite{ekberg2021partitioned}, many approximation methods have been developed for both preemptive~\cite{Burchard:1995,Dhall:1978,Baruah:2005,Lopez:2000} and non-preemptive~\cite{Fisher:2006,Senoussaoui:2020} scheduling policies. These methods have also been extended to support parallel task scheduling, including directed acyclic graphs (DAGs)~\cite{fonseca2016response,casini2018partitioned,Zahaf:2020} and gang tasks~\cite{Ueter:2021,sun2024strict,sun2024partitioned}, as well as to take into account inter-task interference~\cite{Zahaf:2021}. On the other hand, exact approaches to the partitioning problem use optimization techniques such as mixed-integer linear programming (MILP)~\cite{abeni2022partitioning,Mo:2023}. While MILP formulations can provide exact solutions, their scalability remains a challenge, particularly in systems with a large number of tasks or processors. A detailed discussion on the precise complexity classes of a list of real-time task allocation problems can be found in~\cite{ekberg2021partitioned}. \subsection{Resource Allocation} Cache and memory bandwidth are two critical resources to be partitioned for achieving timing predictability in real-time systems. A widely adopted \emph{software-based} approach to cache partitioning is \emph{cache coloring}, which has been implemented at both the operating system (OS)~\cite{MDBCCP:13, Kim16:EMSOFT, KWCFAS:17} and hypervisor levels~\cite{KSMCV:19, xilinx-xen-cache-color}. In this paper, we rely on a cache-coloring implementation available in the Jailhouse hypervisor~\cite{minerva-jailhouse}. Alternatively, caches can also be partitioned via hardware modifications (\eg,~\cite{Survey-Way-Part}) or by exploiting hardware support such as the Arm DSU~\cite{arm-dynamiciq}, which is only available on very recent embedded Arm platforms and notably not yet supported on our Ultrascale+. Similarly to caches, memory bandwidth partitions can be assigned in software leveraging hardware features such as Performance Monitoring Units (PMUs). For example, MemGuard~\cite{yun2013memguard} and MemPol~\cite{MemPol} propose a per-core memory bandwidth partitioning using PMU-based counters. Hardware modifications to generally improve the predictability of memory accesses have also been proposed (\eg,~\cite{hassan2019reduced, BRU:20}). Intel RDT~\cite{intel-rdt} supports partitioning of both caches and memory bandwidth and has been used in \eg,~\cite{XPCLLLL:19}. Nonetheless, real-time characteristics of Intel RDT have been found to be not always effective~\cite{SBMYK:22}. Arm MPAM~\cite{arm-mpam} is a recent specification with partitioning capabilities similar to Intel RDT, but to date, no available implementations for COTS platforms exist. Building on these cache and memory bandwidth partitioning methods, various allocation strategies have been developed to effectively dedicate resources to real-time tasks and improve schedulability. For caches, approaches such as branch-and-bound~\cite{Altmeyer:2014,Altmeyer:2016}, genetic algorithms~\cite{Bui:2008,Meroni:2023}, and guided-local search~\cite{sun2023minimizing,sun2024minimizing} have been proposed to optimize how cache partitions are assigned to real-time workloads. Similar efforts exist for memory bandwidth allocation. Aghilinasab~\etal~\cite{aghilinasab2020dynamic} present a dynamic scheme that monitors and reallocates memory bandwidth between real-time and best-effort tasks, adapting to runtime variations. Park~\etal~\cite{park2019copart} further propose a coordinated approach for LLC and memory bandwidth partitioning, targeting workload fairness rather than hard timing guarantees. \subsection{Task and Cache Co-Allocation} Beyond independent allocation strategies for real-time tasks and resources, the co-allocation of tasks and cache partitions has been explored to further enhance real-time schedulability. Under preemptive EDF scheduling, Chisholm~\etal~\cite{CWKA:15} introduce MC$^2$, a linear programming-based optimization framework for mixed-criticality multicore real-time systems. Kim and Rajkumar~\cite{Kim16:EMSOFT} develop a cache management scheme for cache-to-task allocation and later proposed a cache-aware task allocation algorithm tailored for virtual machine design. The task and cache allocation under non-preemptive scheduling introduces additional challenges due to blocking effects, making task utilization an insufficient sole indicator for schedulability. Berna and Puaut~\cite{Berna:2012} propose a period-driven task and cache partitioning algorithm under non-preemptive EDF scheduling, prioritizing task period compatibility as the primary partitioning criterion. Paolieri~\etal~\cite{Paolieri:2011} introduce IA$^3$, an interference-aware allocation algorithm focusing on WCET sensitivity. Sun~\etal~\cite{sun2023co} develop a search-based algorithm leveraging a first-fit heuristic for task allocation and propose two heuristic variants considering task period compatibility and cache sensitivity as the task ordering criteria. However, these works do not consider memory bandwidth allocation. Ignoring contention on the shared memory bus can compromise system predictability while assuming uniform memory bandwidth partitioning limits the flexibility needed to optimize schedulability effectively. \subsection{Task and Multi-Resource Co-Allocation} Recent studies~\cite{XPCLLLL:19,Meng:2019,Nie:2022,gifford2021dna} have also explored the problem of task and multi-resource co-allocation. %Meng:2019, Meng \etal~\cite{Meng:2019} propose a multi-resource allocation framework for real-time multicore virtualization, incorporating techniques to mitigate abstraction overhead. Nie \etal~\cite{Nie:2022} investigate federated scheduling for parallel tasks, where each task is assigned a set of cores along with cache and memory bandwidth partitions while ensuring schedulability conditions. Gifford \etal~\cite{gifford2021dna} employ a worst-fit bin-packing approach to allocate soft real-time tasks to cores, dynamically adjusting cache and memory bandwidth partitions based on task deadlines. The most \emph{closely related work} to ours is \cite{XPCLLLL:19}, which also addresses the co-allocation of tasks and resources, including memory bandwidth and cache partitions. Their algorithm, CaM, represents the current state-of-the-art and outperforms other previous methods. It employs a k-means algorithm to classify tasks into different clusters, followed by a task-cluster-to-core allocation heuristic based on first-fit bin-packing and a resource-to-core allocation heuristic based on resource utility. Our method, MMO, improves upon CaM in three key ways: \begin{itemize} \item The multi-layer search strategy of MMO provides a more thorough exploration of task and resource co-allocation possibilities. \item The inner-layer task allocation in MMO is formulated as a knapsack problem, enabling the use of dynamic programming to achieve optimal task assignments that maximize core utilization. \item MMO is a multi-objective heuristic that simultaneously optimizes the usage of multiple resources, yielding multiple non-dominated solutions, while CaM can only produce a single solution. \end{itemize} In Section~\ref{sec:exp}, we present extensive experimental evaluations to compare the performance of MMO with CaM in terms of schedulability, resource usage, and computation efficiency.","In this section, we give an overview of the related works on task and resource allocation strategies for real-time systems and discuss the differences between our proposed algorithm and the state-of-the-art resource-task co-allocation methods. 2.1 Task Allocation Mapping tasks statically to individual processors is widely used in industry practice due to its low scheduling overhead [12]. However, since the task allocation problem is NP-hard in the strongsense[23], manyapproximationmethodshavebeendevelopedforbothpreemptive[9,14, 21,40] and non-preemptive [25,54] scheduling policies. These methods have also been extended to support parallel task scheduling, including directed acyclic graphs (DAGs) [15,26,68] and gang tasks [58,60,62], as well as to take into account inter-task interference [69]. On the other hand, exact approaches to the partitioning problem use optimization techniques such as mixed-integer linear programming (MILP) [1,47]. While MILP formulations can provide exact solutions, their scalability remains a challenge, particularly in systems with a large number of tasks or processors. A detailed discussion on the precise complexity classes of a list of real-time task allocation problems can be found in [23]. 2.2 Resource Allocation Cache and memory bandwidth are two critical resources to be partitioned for achieving timing predictability in real-time systems. A widely adopted software-based approach to cache partitioning is cache coloring , which has been implemented at both the operating system (OS) [33,34,42] and hypervisor levels [35,64]. In this paper, we rely on a cache-coloring implementation available in the Jailhouse hypervisor [45]. Alternatively, caches can also be partitioned via hardware modifications ( e.g., [17]) or by exploiting hardware support such as the Arm DSU [6], which is only available on very recent embedded Arm platforms and notably not yet supported on our Ultrascale+. Similarly to caches, memory bandwidth partitions ECRTS 2025 7:" 2505.08071v1,"NMP-PaK: Near-Memory Processing Acceleration of Scalable De Novo Genome Assembly","Heewoo Kim, Sanjay Sri Vallabh Singapuram, Haojie Ye, Joseph Izraelevitz, Trevor Mudge, Ronald Dreslinski, Nishil Talati","De novo assembly enables investigations of unknown genomes, paving the way for personalized medicine and disease management. However, it faces immense computational challenges arising from the excessive data volumes and algorithmic complexity. While state-of-the-art de novo assemblers utilize distributed systems for extreme-scale genome assembly, they demand substantial computational and memory resources. They also fail to address the inherent challenges of de novo assembly, including a large memory footprint, memory-bound behavior, and irregular data patterns stemming from complex, interdependent data structures. Given these challenges, de novo assembly merits a custom hardware solution, though existing approaches have not fully addressed the limitations. We propose NMP-PaK, a hardware-software co-design that accelerates scalable de novo genome assembly through near-memory processing (NMP). Our channel-level NMP architecture addresses memory bottlenecks while providing sufficient scratchpad space for processing elements. Customized processing elements maximize parallelism while efficiently handling large data structures that are both dynamic and interdependent. Software optimizations include customized batch processing to reduce the memory footprint and hybrid CPU-NMP processing to address hardware underutilization caused by irregular data patterns. NMP-PaK conducts the same genome assembly while incurring a 14X smaller memory footprint compared to the state-of-the-art de novo assembly. Moreover, NMP-PaK delivers a 16X performance improvement over the CPU baseline, with a 2.4X reduction in memory operations. Consequently, NMP-PaK achieves 8.3X greater throughput than state-of-the-art de novo assembly under the same resource constraints, showcasing its superior computational efficiency.","cs.AR, cs.DC, q-bio.GN",2025-05-12T21:17:20+00:00,2025-05-12T21:17:20+00:00,http://arxiv.org/abs/2505.08071v1,http://arxiv.org/abs/2505.08071v1,2025-05-12 21:17:20+00:00,"\label{800_related_work} %\HEEWOO{other accelerators that accelerate DBG based assembly, GPU, long-reads, satish and mutlur's works, find as many references as possible} %satish and mutlur's works \paragraph{\textbf{In/Near-memory processing accelerators for \textit{de novo} assembly}} Previous work has explored accelerating De Bruijn graph-based genome assembly using in/near-memory processing. Zhou et al. \cite{zhou2021ultra} enhanced the Megahit assembly algorithm \cite{li2015megahit} through the use of hybrid memory cubes, exploiting high degrees of parallelism and enhanced memory bandwidth across near-data processing cores. However, the Megahit algorithm does not scale well with increasing genome sample data sizes, and hybrid memory cubes have been discontinued since 2018. Angizi et al. \cite{angizi2020pim} introduced PIM-Assembler, a processing-in-DRAM platform for \textit{de novo} assembly. Their design achieves high performance and energy efficiency through in-DRAM X(N)OR operations for accelerating comparisons and additions, along with optimized data partitioning and mapping techniques. However, while effective for smaller genomes (9.2 GB), their approach has not been demonstrated for extremely large-scale \textit{de novo} assembly tasks that we address in our work. While our work focuses on accelerating the Iterative Compaction step, Wu et al. \cite{wu2024abakus} accelerated the k-mer counting step using in-storage processing. Their method leverages the embedded CPU controllers and DRAM within the solid-state drive, thereby reducing data transfer between the SSD and the main memory. In contrast, \NMP\ aims to speed up the Iterative Compaction step, which is the most time-consuming phase. \paragraph{\textbf{GPU-based \textit{de novo} assembly accelerators}} Several works \cite{awan2021accelerating,goswami2018gpu,mahmood2011gpu,jain2013gagm,swiercz2018grasshopper} leveraged GPUs' parallel processing capabilities and high bandwidth to accelerate specific assembly operations, such as local assembly, sorting, and prefix scan. However, GPUs face significant limitations for large-scale genome assembly. Their restricted onboard memory capacity \cite{zhou2021ultra} makes them inadequate for extremely large datasets. Additionally, genome assembly algorithms present challenges for GPUs due to their memory-intensive nature and irregular access patterns during graph construction and traversal \cite{awan2021accelerating}. %Awan et al. \cite{awan2021accelerating} offloaded the local assembly step to the GPU, utilizing its massive parallelism. %LaSAGNA \cite{goswami2018gpu} utilizes GPU for operations like sorting and prefix scan %GPU-Euler \cite{mahmood2011gpu} assembly-graph related operations on GPU %GAGM \cite{jain2013gagm} DBG offload most of the assembly to GPU %GrassHopper \cite{swiercz2018grasshopper} sequence alignment to GPUs \paragraph{\textbf{Non-de Bruijn graph assembly accelerators}} %%% For minimap2 and bwa-mem2 There are other algorithms that researchers use when new genomes are discovered. These algorithms uncover new genomes through sequence mapping and alignment with reference genomes. %Unlike the DBG method, which assembles genomes from scratch, %These algorithms uncover new genomes by comparing them with existing reference genomes. BWA-MEM \cite{bwa-mem2} and \texttt{minimap2} \cite{minimap2} are widely used solutions for short-read and long-read alignment, respectively. %These tools perform sequence mapping and alignment of newly discovered genomes with existing ones. %and are crucial for identifying and classifying different SARS-CoV-2 strains, as well as for taxonomic classification based genomic similarities. Multiple studies have explored accelerating these algorithms. %BWA-MEM and \texttt{minimap2} to improve their performance. \texttt{mm2-fast} \cite{mm2-fast} accelerated the \texttt{minimap2} algorithm on CPUs using SIMD optimizations, while \texttt{mm2-ax} \cite{mm2-ax} took a cooperative approach by designing a heterogeneous system involving both the CPU and GPU. % mapping computational steps to the architecture that best accelerates them. \texttt{mm2-gb} \cite{mm2-gb} further improved \texttt{mm2-ax} for long-read sequence mapping by running all the steps on the GPU. Additionally, Guo et al. \cite{guo2019hardware} proposed a custom accelerator for sequence mapping using FPGA. Beyond sequence-to-sequence mapping algorithms like \texttt{minimap2}, several accelerators targeted sequence-to-graph mapping of long reads, including SeGraM \cite{cali2022segram} and Harp \cite{zhang2024harp}. Unlike the above works, \NMP\ addresses genome assembly without the need for a reference genome. In addition to performance gains, \NMP\ focuses on reducing the memory footprint and resource requirements for large datasets. \paragraph{\textbf{Accelerators for other genomic applications}} MegIS \cite{ghiasi2024megis} addressed the significant data movement overhead in metagenomics analysis through in-storage processing. %reducing the transfer of large volumes of low-reuse data across the memory hierarchy. While GenDP \cite{gu2023gendp} and QUETZAL \cite{pavon2024quetzal} accelerated dynamic programming algorithms (Smith-Waterman, Needleman-Wunsch) that are essential kernels in various genomic applications, including reference-guided assembly, \textit{de novo} assembly, metagenomics, and sequence analysis for both long and short reads, our work addresses similar fundamental challenges of processing large, interdependent, and dynamic data structures. However, we focus on end-to-end \textit{de novo} assembly performance, prioritizing efficient computation with minimal hardware resources while handling large memory requirements. % \HEEWOO{The challenges that we are handling --processing interdependent and large, dynamically changing data structures-- are analogous to common challenges encountered in dynamic programming.} %%% Similar, but it is a future work to make it more general %This approach suggests a direction for the sustainable growth of both the genomics and computing fields. % minimap2, bwa-mem2, mm2-fast, mm2-ax, mmx-gb (GPU) % Compare our system (efficiency, large scale // different application space, etc) with their work % Include details about genomic long reads % Why long reads? How's it different from short reads? What applications are short reads suited for? % Cite minimap bwa mem 2, cite Hari's work and his related section. %\NMP\ and PaKman are aimed at discovering new genomes, there are solutions that aide in sequence mapping and alignment of an existing genome with a newly discovered. %These tools were crucial in identifying and classifying different SARS-CoV-2 strains, and for taxonomic classification of members of species, genus, etc. %BWA-MEM\cite{bwa-mem2} and \texttt{minimap2}\cite{minimap2} are widely used solutions for generic short-read and long-read alignment, respectively. %\texttt{mm2-fast}\cite{mm2-fast} accelerated the \texttt{minimap2} algorithm on CPUs using SIMD optimizations, while \texttt{mm2-ax}\cite{mm2-ax} takes a cooperative approach by designing a heterogeneous system involving the CPU and theGPU and mapping computational steps that are better accelerated on each architecture. %\texttt{mm2-gb}\cite{mm2-gb} further improves upon \texttt{mm2-ax} for long-read sequence-mapping by running all the steps on the GPU and maintaining mapping accuracy. %\cite{guo2019hardware} proposed a custom accelerator for sequence mapping using FPGA, but cannot guarantee output equivalency with the original \texttt{minimap2} algorithm. %SquiggleFilter\cite{squigglefilter} designed an accelerator with a 14W power budget that accelerates the expensive base-calling step, potentially enabling a nanopore sequencer to be used as a portable universal virus detector.","%\HEEWOO{other accelerators that accelerate DBG based assembly, GPU, long-reads, satish and mutlur's works, find as many references as possible} %satish and mutlur's works \paragraph{\textbf{In/Near-memory processing accelerators for \textit{de novo} assembly}} Previous work has explored accelerating De Bruijn graph-based genome assembly using in/near-memory processing. Zhou et al. \cite{zhou2021ultra} enhanced the Megahit assembly algorithm \cite{li2015megahit} through the use of hybrid memory cubes, exploiting high degrees of parallelism and enhanced memory bandwidth across near-data processing cores. However, the Megahit algorithm does not scale well with increasing genome sample data sizes, and hybrid memory cubes have been discontinued since 2018. Angizi et al. \cite{angizi2020pim} introduced PIM-Assembler, a processing-in-DRAM platform for \textit{de novo} assembly. Their design achieves high performance and energy efficiency through in-DRAM X(N)OR operations for accelerating comparisons and additions, along with optimized data partitioning and mapping techniques. However, while effective for smaller genomes (9.2 GB), their approach has not been demonstrated for extremely large-scale \textit{de novo} assembly tasks that we address in our work. While our work focuses on accelerating the Iterative Compaction step, Wu et al. \cite{wu2024abakus} accelerated the k-mer counting step using in-storage processing. Their method leverages the embedded CPU controllers and DRAM within the solid-state drive, thereby reducing data transfer between the SSD and the main memory. In contrast, \NMP\ aims to speed up the Iterative Compaction step, which is the most time-consuming phase. \paragraph{\textbf{GPU-based \textit{de novo} assembly accelerators}} Several works \cite{awan2021accelerating,goswami2018gpu,mahmood2011gpu,jain2013gagm,swiercz2018grasshopper} leveraged GPUs' parallel processing capabilities and high bandwidth to accelerate specific assembly operations, such as local assembly, sorting, and prefix scan. However, GPUs face significant limitations for large-scale genome assembly. Their restricted onboard memory capacity \cite{zhou2021ultra} makes them inadequate for extremely large datasets. Additionally, genome assembly algorithms present challenges for GPUs due to their memory-intensive nature and irregular access patterns during graph construction and traversal \cite{awan2021accelerating}. %Awan et al. \cite{awan2021accelerating} offloaded the local assembly step to the GPU, utilizing its massive parallelism. %LaSAGNA \cite{goswami2018gpu} utilizes GPU for operations like sorting and prefix scan %GPU-Euler \cite{mahmood2011gpu} assembly-graph related operations on GPU %GAGM \cite{jain2013gagm} DBG offload most of the assembly to GPU %GrassHopper \cite{swiercz2018grasshopper} sequence alignment to GPUs \paragraph{\textbf{Non-de Bruijn graph assembly accelerators}} %%% For minimap2 and bwa-mem2 There are other algorithms that researchers use when new genomes are discovered. These algorithms uncover new genomes through sequence mapping and alignment with reference genomes. %Unlike the DBG method, which assembles genomes from scratch, %These algorithms uncover new genomes by comparing them with existing reference genomes. BWA-MEM \cite{bwa-mem2} and \texttt{minimap2} \cite{minimap2} are widely used solutions for short-read and long-read alignment, respectively. %These tools perform sequence mapping and alignment of newly discovered genomes with existing ones. %and are crucial for identifying and classifying different SARS-CoV-2 strains, as well as for taxonomic classification based genomic similarities. Multiple studies have explored accelerating these algorithms. %BWA-MEM and \texttt{minimap2} to improve their performance. \texttt{mm2-fast} \cite{mm2-fast} accelerated the \texttt{minimap2} algorithm on CPUs using SIMD optimizations, while \texttt{mm2-ax} \cite{mm2-ax} took a cooperative approach by designing a heterogeneous system involving both the CPU and GPU. % mapping computational steps to the architecture that best accelerates them. \texttt{mm2-gb} \cite{mm2-gb} further improved \texttt{mm2-ax} for long-read sequence mapping by running all the steps on the GPU. Additionally, Guo et al. \cite{guo2019hardware} proposed a custom accelerator for sequence mapping using FPGA. Beyond sequence-to-sequence mapping algorithms like \texttt{minimap2}, several accelerators targeted sequence-to-graph mapping of long reads, including SeGraM \cite{cali2022segram} and Harp \cite{zhang2024harp}. Unlike the above works, \NMP\ addresses genome assembly without the need for a reference genome. In addition to performance gains, \NMP\ focuses on reducing the memory footprint and resource requirements for large datasets. \paragraph{\textbf{Accelerators for other genomic applications}} MegIS \cite{ghiasi2024megis} addressed the significant data movement overhead in metagenomics analysis through in-storage processing. %reducing the transfer of large volumes of low-reuse data across the memory hierarchy. While GenDP \cite{gu2023gendp} and QUETZAL \cite{pavon2024quetzal} accelerated dynamic programming algorithms (Smith-Waterman, Needleman-Wunsch) that are essential kernels in various genomic applications, including reference-guided assembly, \textit{de novo} assembly, metagenomics, and sequence analysis for both long and short reads, our work addresses similar fundamental challenges of processing large, interdependent, and dynamic data structures. However, we focus on end-to-end \textit{de novo} assembly performance, prioritizing efficient computation with minimal hardware resources while handling large memory requirements. % \HEEWOO{The challenges that we are handling --processing interdependent and large, dynamically changing data structures-- are analogous to common challenges encountered in dynamic programming.} %%% Similar, but it is a future work to make it more general %This approach suggests a direction for the sustainable growth of both the genomics and computing fields. % minimap2, bwa-mem2, mm2-fast, mm2-ax, mmx-gb (GPU) % Compare our system (efficiency, large scale // different application space, etc) with their work % Include details about genomic long reads % Why long reads? How's it different from short reads? What applications are short reads suited for? % Cite minimap bwa mem 2, cite Hari's work and his related section. %\NMP\ and PaKman are aimed at discovering new genomes, there are solutions that aide in sequence mapping and alignment of an existing genome with a newly discovered. %These tools were crucial in identifying and classifying different SARS-CoV-2 strains, and for taxonomic classification of members of species, genus, etc. %BWA-MEM\cite{bwa-mem2} and \texttt{minimap2}\cite{minimap2} are widely used solutions for generic short-read and long-read alignment, respectively. %\texttt{mm2-fast}\cite{mm2-fast} accelerated the \texttt{minimap2} algorithm on CPUs using SIMD optimizations, while \texttt{mm2-ax}\cite{mm2-ax} takes a cooperative approach by designing a heterogeneous system involving the CPU and theGPU and mapping computational steps that are better accelerated on each architecture. %\texttt{mm2-gb}\cite{mm2-gb} further improves upon \texttt{mm2-ax} for long-read sequence-mapping by running all the steps on the GPU and maintaining mapping accuracy. %\cite{guo2019hardware} proposed a custom accelerator for sequence mapping using FPGA, but cannot guarantee output equivalency with the original \texttt{minimap2} algorithm. %SquiggleFilter\cite{squigglefilter} designed an accelerator with a 14W power budget that accelerates the expensive base-calling step, potentially enabling a nanopore sequencer to be used as a portable universal virus detector.","In/Near-memory processing accelerators for de novo assem- bly.Previous work has explored accelerating De Bruijn graph- based genome assembly using in/near-memory processing. Zhou et al. [ 57] enhanced the Megahit assembly algorithm [ 34] through the use of hybrid memory cubes, exploiting high degrees of paral- lelism and enhanced memory bandwidth across near-data process- ing cores. However, the Megahit algorithm does not scale well with increasing genome sample data sizes, and hybrid memory cubes have been discontinued since 2018. Angizi et al. [ 3] introduced PIM-Assembler, a processing-in- DRAM platform for de novo assembly. Their design achieves high performance and energy efficiency through in-DRAM X(N)OR op- erations for accelerating comparisons and additions, along with optimized data partitioning and mapping techniques. However, while effective for smaller genomes (9.2 GB), their approach has not been demonstrated for extremely large-scale de novo assembly tasks that we address in our work. While our work focuses on accelerating the Iterative Compaction step, Wu et al. [ 52] accelerated the k-mer counting step using in- storage processing. Their method leverages the embedded CPUcontrollers and DRAM within the solid-state drive, thereby reducing data transfer between the SSD and the main memory. In contrast, NMP-PaK aims to speed up the Iterative Compaction step, which is the most time-consuming phase. GPU-based de novo assembly accelerators .Several works [ 4, 20,25,37,46] leveraged GPUs’ parallel processing capabilities and high bandwidth to accelerate specific assembly operations, such as local assembly, sorting, and prefix scan. However, GPUs face significant limitations for large-scale genome assembly. Their restricted onboard memory capacity [ 57] makes them inadequate for extremely large datasets. Additionally, genome assembly algorithms present challenges for GPUs due to their memory-intensive nature and irregular access patterns during graph construction and traversal [4]. Non-de Bruijn graph assembly accelerators .There are other algorithms that researchers use when new genomes are discov- ered. These algorithms uncover new genomes through sequence mapping and alignment with reference genomes. BWA-MEM [50] andminimap2 [35] are widely used solutions for short-read and long-read alignment, respectively. Multiple studies have explored accelerating these algorithms. mm2-fast [26] accelerated the minimap2 algorithm on CPUs using SIMD optimizations, while mm2-ax [42] took a cooperative approach by designing a heterogeneous system involving both the CPU and GPU. mm2-gb [10] further improved mm2-ax for long-read sequence mapping by running all the steps on the GPU. Additionally, Guo et al. [ 22] proposed a custom accelerator for sequence mapping us- ing FPGA. Beyond sequence-to-sequence mapping algorithms like minimap2 , several accelerators targeted sequence-to-graph map- ping of long reads, including SeGraM [6] and Harp [56]. Unlike the above works, NMP-PaK addresses genome assembly without the need for a reference genome. In addition to performance gains, NMP-PaK focuses on reducing the memory footprint and resource requirements for large datasets. Accelerators for other genomic applications .MegIS [ 17] ad- dressed the significant data movement overhead in metagenomics analysis through in-storage processing. While GenDP [ 21] and QUETZAL [ 39] accelerated dynamic programming algorithms (Smith-Waterman, Needleman-Wunsch) that are essential kernels in various genomic applications, including reference-guided assem- bly,de novo assembly, metagenomics, and sequence analysis for both long and short reads, our work addresses similar fundamen- tal challenges of processing large, interdependent, and dynamic data structures. However, we focus on end-to-end de novo assem- bly performance, prioritizing efficient computation with minimal hardware resources while handling large memory requirements." 2504.06211v1,Need for zkSpeed: Accelerating HyperPlonk for Zero-Knowledge Proofs,"Alhad Daftardar, Jianqiao Mo, Joey Ah-kiow, Benedikt Bünz, Ramesh Karri, Siddharth Garg, Brandon Reagen","Zero-Knowledge Proofs (ZKPs) are rapidly gaining importance in privacy-preserving and verifiable computing. ZKPs enable a proving party to prove the truth of a statement to a verifying party without revealing anything else. ZKPs have applications in blockchain technologies, verifiable machine learning, and electronic voting, but have yet to see widespread adoption due to the computational complexity of the proving process. Recent works have accelerated the key primitives of state-of-the-art ZKP protocols on GPU and ASIC. However, the protocols accelerated thus far face one of two challenges: they either require a trusted setup for each application, or they generate larger proof sizes with higher verification costs, limiting their applicability in scenarios with numerous verifiers or strict verification time constraints. This work presents an accelerator, zkSpeed, for HyperPlonk, a state-of-the-art ZKP protocol that supports both one-time, universal setup and small proof sizes for typical ZKP applications in publicly verifiable, consensus-based systems. We accelerate the entire protocol, including two major primitives: SumCheck and Multi-scalar Multiplications (MSMs). We develop a full-chip architecture using 366.46 mm$^2$ and 2 TB/s of bandwidth to accelerate the entire proof generation process, achieving geometric mean speedups of 801$\times$ over CPU baselines.","cs.AR, cs.CR",2025-04-08T16:56:10+00:00,2025-04-08T16:56:10+00:00,http://arxiv.org/abs/2504.06211v1,http://arxiv.org/abs/2504.06211v1,2025-04-08 16:56:10+00:00,"\label{sec:related_work} Much of the prior body of crytographic hardware and systems research has focused on Fully Homomorphic Encryption and Multi-Party Computation \cite{bts, ark, sharp, f1, clake, rpu, haac, karthik, ciflow}. ZKP hardware research is relatively newer, and has focused primarily on accelerating NTTs and MSMs~\cite{priorMSM, distMSM, cuZK, gypso, reZK, myotosis, MSMAC, intel_zkp, elastic_msm, tches_ntt_msm, sam, legozk, unizk, graz}. A few recent works have accelerated SumChecks on GPU \cite{batchzk} and ASIC \cite{nocap} as well as hashing alternatives to SHA-based hash functions \cite{gottahashemall, amaze, unizk}. Some systems accelerate end-to-end Groth16 proofs (using NTTs and MSMs) on GPU~\cite{gzkp} and ASICs \cite{szkp, pipezk}. SZKP is presently the only ASIC that accelerates Groth16 proofs entirely on-chip. NoCap \cite{nocap} is an ASIC that accelerates the Spartan protocol, using Orion as the polynomial commitment scheme. NoCap focuses on SumCheck and NTTs used in Spartan. We compare zkSpeed with two ASICs that accelerate full proofs end-to-end. \textbf{SZKP} is the state-of-the-art for accelerating Groth16 proofs, focusing on scalable MSM designs and (quasi)-deterministic scheduling for Pippenger's algorithm. It accelerates all MSMs, including Sparse G2 MSMs, achieving geomean speedups of 493$\times$ over a CPU. SZKP improves on PipeZK~\cite{pipezk}, the first hardware accelerator for Groth16 proofs. While Groth16 and HyperPlonk have similar application spaces, as mentioned in Section \ref{sec:intro}, the key advantage of using HyperPlonk is the universal setup, which means that the protocol parameters are application-agnostic. For Groth16, \textit{every new application} that wants to use a ZKP needs its own trusted setup ceremony \cite{ceremony}, which is impractical as the application space grows. Given this context and the recent shift away from Groth16 \cite{trusted_set_up}, the slightly larger proof sizes are considered a reasonable tradeoff. \textbf{NoCap} is a vector-based processor for accelerating Spartan+Orion proofs, but its application space differs from zkSpeed's. NoCap thrives in applications where proof size is not critical, or there are few verifiers. It achieves $41\times$ geomean speedups over PipeZK. In contrast, zkSpeed is ideal for many verifiers and in consensus-based systems; this is where ZKPs are experiencing growing interest. For easier comparison, Table \ref{tab:megatable} compares zkSpeed, NoCap, and SZKP's protocols and software and hardware costs. zkSpeed's parent Hyperplonk has the slowest software prover, reflecting the complexity of the protocol. Of note, Spartan's prover is slow; NoCap's authors explain this is due to inefficient implementation. We compare NoCap's hardware implementation using the design point and numbers from their paper scaled to 7nm using scale factors from prior work \cite{haac, szkp}. We then select a zkSpeed configuration with roughly similar prover time. At iso-prover time, zkSpeed incurs a nearly $10\times$ area cost in return for a three orders-of-magnitude reduction in proof size. NoCap's lower costs come from eliminating MSMs, having simpler sumchecks, and using a 64-bit Goldilocks-64 prime field that yields smaller modmuls. In contrast, zkSpeed supports arbitrary 255-bit and 381-bit primes for MLEs and elliptic curves points, respectively. Consequently, NoCap runs all operations several times, including SumChecks 3 times, to obtain 128 bits of security. We further compare zkSpeed with an iso-area SZKP (Groth16) implementation, giving them the benefit of zkSpeed's improved MSMs, and optimistically scale up their design to use the BLS12-381 curve. This design, SZKP+, enjoys a 6$\times$ reduction in proving time compared to zkSpeed, largely because it has fewer MSMs on its critical path. These speedups come at the cost of circuit-specific setup, incurring large costs any time the application is updated. In sum, NoCap, SZKP, and zkSpeed each address different application domains, representing a range of trade-offs ranging from security and protocol properties to software/hardware costs. \textbf{Jellyfish}: Jellyfish is a HyperPlonk variant supporting gates of arity (fan-in) higher than 2. Unlike R1CS, it supports higher degree constraints, e.g. $x^7=y^5+y^2+7$. The additional expressiveness means, iso-application, the total size of all MLE tables decreases (the number of tables increases with arity, but table size decreases super-proportionally). High-degree gates have utility in many applications\cite{garuda}; this is especially pronounced when proving the correctness of cryptographic operations like encryption\cite{verizexe} or hash-functions\cite{poseidon}. zkSpeed could be extended to support Jellyfish, in which case the ratio of table count to table size may improve the runtime (with sufficient bandwidth). We leave this for future work.","Much of the prior body of crytographic hardware and systems research has focused on Fully Homomorphic Encryption and Multi-Party Computation \cite{bts, ark, sharp, f1, clake, rpu, haac, karthik, ciflow}. ZKP hardware research is relatively newer, and has focused primarily on accelerating NTTs and MSMs~\cite{priorMSM, distMSM, cuZK, gypso, reZK, myotosis, MSMAC, intel_zkp, elastic_msm, tches_ntt_msm, sam, legozk, unizk, graz}. A few recent works have accelerated SumChecks on GPU \cite{batchzk} and ASIC \cite{nocap} as well as hashing alternatives to SHA-based hash functions \cite{gottahashemall, amaze, unizk}. Some systems accelerate end-to-end Groth16 proofs (using NTTs and MSMs) on GPU~\cite{gzkp} and ASICs \cite{szkp, pipezk}. SZKP is presently the only ASIC that accelerates Groth16 proofs entirely on-chip. NoCap \cite{nocap} is an ASIC that accelerates the Spartan protocol, using Orion as the polynomial commitment scheme. NoCap focuses on SumCheck and NTTs used in Spartan. We compare zkSpeed with two ASICs that accelerate full proofs end-to-end. \textbf{SZKP} is the state-of-the-art for accelerating Groth16 proofs, focusing on scalable MSM designs and (quasi)-deterministic scheduling for Pippenger's algorithm. It accelerates all MSMs, including Sparse G2 MSMs, achieving geomean speedups of 493$\times$ over a CPU. SZKP improves on PipeZK~\cite{pipezk}, the first hardware accelerator for Groth16 proofs. While Groth16 and HyperPlonk have similar application spaces, as mentioned in Section \ref{sec:intro}, the key advantage of using HyperPlonk is the universal setup, which means that the protocol parameters are application-agnostic. For Groth16, \textit{every new application} that wants to use a ZKP needs its own trusted setup ceremony \cite{ceremony}, which is impractical as the application space grows. Given this context and the recent shift away from Groth16 \cite{trusted_set_up}, the slightly larger proof sizes are considered a reasonable tradeoff. \textbf{NoCap} is a vector-based processor for accelerating Spartan+Orion proofs, but its application space differs from zkSpeed's. NoCap thrives in applications where proof size is not critical, or there are few verifiers. It achieves $41\times$ geomean speedups over PipeZK. In contrast, zkSpeed is ideal for many verifiers and in consensus-based systems; this is where ZKPs are experiencing growing interest. For easier comparison, Table \ref{tab:megatable} compares zkSpeed, NoCap, and SZKP's protocols and software and hardware costs. zkSpeed's parent Hyperplonk has the slowest software prover, reflecting the complexity of the protocol. Of note, Spartan's prover is slow; NoCap's authors explain this is due to inefficient implementation. We compare NoCap's hardware implementation using the design point and numbers from their paper scaled to 7nm using scale factors from prior work \cite{haac, szkp}. We then select a zkSpeed configuration with roughly similar prover time. At iso-prover time, zkSpeed incurs a nearly $10\times$ area cost in return for a three orders-of-magnitude reduction in proof size. NoCap's lower costs come from eliminating MSMs, having simpler sumchecks, and using a 64-bit Goldilocks-64 prime field that yields smaller modmuls. In contrast, zkSpeed supports arbitrary 255-bit and 381-bit primes for MLEs and elliptic curves points, respectively. Consequently, NoCap runs all operations several times, including SumChecks 3 times, to obtain 128 bits of security. We further compare zkSpeed with an iso-area SZKP (Groth16) implementation, giving them the benefit of zkSpeed's improved MSMs, and optimistically scale up their design to use the BLS12-381 curve. This design, SZKP+, enjoys a 6$\times$ reduction in proving time compared to zkSpeed, largely because it has fewer MSMs on its critical path. These speedups come at the cost of circuit-specific setup, incurring large costs any time the application is updated. In sum, NoCap, SZKP, and zkSpeed each address different application domains, representing a range of trade-offs ranging from security and protocol properties to software/hardware costs. \textbf{Jellyfish}: Jellyfish is a HyperPlonk variant supporting gates of arity (fan-in) higher than 2. Unlike R1CS, it supports higher degree constraints, e.g. $x^7=y^5+y^2+7$. The additional expressiveness means, iso-application, the total size of all MLE tables decreases (the number of tables increases with arity, but table size decreases super-proportionally). High-degree gates have utility in many applications\cite{garuda}; this is especially pronounced when proving the correctness of cryptographic operations like encryption\cite{verizexe} or hash-functions\cite{poseidon}. zkSpeed could be extended to support Jellyfish, in which case the ratio of table count to table size may improve the runtime (with sufficient bandwidth). We leave this for future work.","Much of the prior body of crytographic hardware and systems research has focused on Fully Homomorphic Encryption and Multi- Party Computation [ 19,26–28,37,41,47,48,54]. ZKP hardware research is relatively newer, and has focused primarily on accelerat- ing NTTs and MSMs [ 9,11,23,25,31–33,35,46,59,60,63,66,67]. A few recent works have accelerated SumChecks on GPU [ 34] and ASIC [ 49] as well as hashing alternatives to SHA-based hash functions [ 4,53,60]. Some systems accelerate end-to-end Groth16 proofs (using NTTs and MSMs) on GPU [ 36] and ASICs [ 12,64]. SZKP is presently the only ASIC that accelerates Groth16 proofs Table 5: Area and power of zkSpeed. Other includes the SHA3 unit and interconnect. Area (mm 2) Average Power (W) MSM (" 2504.19283v1,"Efficient Serverless Cold Start: Reducing Library Loading Overhead by Profile-guided Optimization","Syed Salauddin Mohammad Tariq, Ali Al Zein, Soumya Sripad Vaidya, Arati Khanolkar, Zheng Song, Probir Roy","Serverless computing abstracts away server management, enabling automatic scaling, efficient resource utilization, and cost-effective pricing models. However, despite these advantages, it faces the significant challenge of cold-start latency, adversely impacting end-to-end performance. Our study shows that many serverless functions initialize libraries that are rarely or never used under typical workloads, thus introducing unnecessary overhead. Although existing static analysis techniques can identify unreachable libraries, they fail to address workload-dependent inefficiencies, resulting in limited performance improvements. To overcome these limitations, we present SLIMSTART, a profile-guided optimization tool designed to identify and mitigate inefficient library usage patterns in serverless applications. By leveraging statistical sampling and call-path profiling, SLIMSTART collects runtime library usage data, generates detailed optimization reports, and applies automated code transformations to reduce cold-start overhead. Furthermore, SLIMSTART integrates seamlessly into CI/CD pipelines, enabling adaptive monitoring and continuous optimizations tailored to evolving workloads. Through extensive evaluation across three benchmark suites and four real-world serverless applications, SLIMSTART achieves up to a 2.30X speedup in initialization latency, a 2.26X improvement in end-to-end latency, and a 1.51X reduction in memory usage, demonstrating its effectiveness in addressing cold-start inefficiencies and optimizing resource utilization.","cs.DC, cs.PF",2025-04-27T15:50:45+00:00,2025-04-27T15:50:45+00:00,http://arxiv.org/abs/2504.19283v1,http://arxiv.org/abs/2504.19283v1,2025-04-27 15:50:45+00:00,"\label{relatedwork} \noindent{\textbf{Platform-Level Runtime Optimizations:}} Several techniques have been proposed to enhance infrastructure efficiency and mitigate cold start latency through optimized resource allocation and scheduling of serverless functions. These methodologies encompass shared resource utilization~\cite{li2022help}, automatic memory deduplication~\cite{saxena2022memory}, function caching~\cite{chen2023s}, compression~\cite{basu2024codecrunch}, advanced scheduling algorithms~\cite{pan2023sustainable}, and the reuse of pre-warmed instances~\cite{bhasi2021kraken, gunasekaran2020fifer, roy2022icebreaker, shahrad2020serverless}. Additional approaches focus on proactively loading libraries into warm containers to reduce the cold start overhead~\cite{sui2024pre}. \textit{While effective at the platform level, these approaches leave application-level inefficiencies, such as suboptimal library usage, unaddressed.} \noindent{\textbf{User-Directed Serverless Runtime Optimizations:}} User-directed optimizations involve configuring serverless runtime policies to reduce cold start times. Techniques include checkpointing~\cite{ao2022faasnap, du2020catalyzer, silva2020prebaking} to save function state, provisioned concurrency~\cite{provisionedConcurrencyAWS} to keep instances warm, adjusting memory~\cite{improveColdstartByIncreasingMemory} and compute resources~\cite{optimisingServerlessForBBC} to optimize performance, keep-alive~\cite{fuerst2021faascache, pan2022retention, roy2022icebreaker, shahrad2020serverless} configurations to prevent premature termination, and layering dependencies~\cite{yu2024rainbowcake} to reduce loading overhead by caching and updating them independently. \textit{However, these runtime-level policies lack the granularity required to address code-level inefficiencies, such as unused or infrequently used libraries.} \noindent{\textbf{Code-level optimizations:}} Code-level techniques aim to reduce initialization time and improve application performance by code optimization. Examples include function fusion to minimize initialization overhead~\cite{lee2021mitigating}, function decomposition into smaller units~\cite{kalia2021mono2micro, nitin2022cargo, abgaz2023decomposition}, and serverless function compression~\cite{liu2023faaslight}. General-purpose tools like JAX~\cite{frostig2018compiling}, GraalVM~\cite{graalvm}, ProGuard~\cite{proguard}, and R8~\cite{r8_android} use static analysis to optimize runtime performance. \textit{However, these tools do not adapt to dynamic runtime behavior, limiting their effectiveness in serverless workloads with varying library usage patterns.} Unlike prior approaches that overlook application context or dynamic behavior, \tool{} leverages runtime profiling of the serverless application to observe real-time library usage patterns, capturing dynamic dependencies and workload-specific inefficiencies. % \subsection{\textbf{Python Library Structure and it's Initialization}} % \label{python_structure} % \paragraph{\textbf{Library, Package and Module}} % In Python, a \textbf{library} is a collection of related modules and packages that provide a wide range of functionalities aimed at specific tasks or domains. A \textbf{module} is a single file containing Python code—functions, classes, or variables—that can be imported and used in other Python programs to promote code reuse and organization. A \textbf{package}, on the other hand, is a collection of related modules organized within a directory. Packages provide a hierarchical structure to the code, facilitating better organization and modularization. % Packages structure their constituent modules using a hierarchical directory format. The root package directory includes an \texttt{\_\_init\_\_.py} file and multiple Python files (.py), each representing a module. Subdirectories within the root directory serve as subpackages, each containing their own \texttt{\_\_init\_\_.py} files and additional modules. This structured approach allows independent maintenance of different components while ensuring they collectively form a cohesive package. % \paragraph{\textbf{Library initialization time}} % When a package within a library is imported, Python executes the \texttt{\_\_init\_\_.py} files and any other code in the imported modules. This process sets up necessary variables, configurations, and class definitions. The total initialization time for a library is the sum of the initialization times of its individual modules and packages. Each package's \texttt{\_\_init\_\_.py} file may contain setup code that runs during import, including importing other modules, initializing variables, and executing startup routines. Thus, the library's initialization time is the accumulated initialization times of its packages and modules. %\subsection{\textbf{Python Profilers}} %Various Python performance profiling tools have been developed over the years to address specific aspects of performance analysis. At a high level, we divide approaches into instrumentation and sampling-based profilers. The instrumentation-based profilers such as Python's Built-in \textit{cProfile}~\cite{cprofile} and \textit{Profile}~\cite{profile}, trace Python applications at the function or line-level granularity. Due to tracing, these instrumentation-based profilers incur significant overhead and inaccuracy in performance measurements~\cite{288540}. The sampling-based profilers, such as \textit{py-spy}~\cite{py-spy}, \textit{Austin}~\cite{austin}, \textit{Scalene}~\cite{288540}, \textit{Pprofile}~\cite{Pprofile}, and \textit{Pieprof}~\cite{tan2021toward} incur less overhead and provide performance insights at relatively higher accuracy. However, none of these profilers identify inefficient library usage in Python code. In contrast, \tool{} is designed to improve cold-start times of serverless applications by identifying inefficient library initialization. % are instrumentation profilers providing detailed function call analysis. \textit{Yappi} offers both wallclock and CPU time measurements, suitable for detailed performance insights. sampling profilers such as provide low-overhead profiling, ideal for live applications, with Scalene extending its capabilities to GPU and memory profiling. \textit{Pprofile}~\cite{Pprofile} also offers deterministic profiling with multi-threading support. \textit{Pieprof}~\cite{tan2021toward} adds to the list of low-overhead, sampling profilers, identifying hot paths efficiently. In contrast, \tool{} is designed to improve cold-start times of serverless applications by identifying inefficient library initialization.","\noindent{\textbf{Platform-Level Runtime Optimizations:}} Several techniques have been proposed to enhance infrastructure efficiency and mitigate cold start latency through optimized resource allocation and scheduling of serverless functions. These methodologies encompass shared resource utilization~\cite{li2022help}, automatic memory deduplication~\cite{saxena2022memory}, function caching~\cite{chen2023s}, compression~\cite{basu2024codecrunch}, advanced scheduling algorithms~\cite{pan2023sustainable}, and the reuse of pre-warmed instances~\cite{bhasi2021kraken, gunasekaran2020fifer, roy2022icebreaker, shahrad2020serverless}. Additional approaches focus on proactively loading libraries into warm containers to reduce the cold start overhead~\cite{sui2024pre}. \textit{While effective at the platform level, these approaches leave application-level inefficiencies, such as suboptimal library usage, unaddressed.} \noindent{\textbf{User-Directed Serverless Runtime Optimizations:}} User-directed optimizations involve configuring serverless runtime policies to reduce cold start times. Techniques include checkpointing~\cite{ao2022faasnap, du2020catalyzer, silva2020prebaking} to save function state, provisioned concurrency~\cite{provisionedConcurrencyAWS} to keep instances warm, adjusting memory~\cite{improveColdstartByIncreasingMemory} and compute resources~\cite{optimisingServerlessForBBC} to optimize performance, keep-alive~\cite{fuerst2021faascache, pan2022retention, roy2022icebreaker, shahrad2020serverless} configurations to prevent premature termination, and layering dependencies~\cite{yu2024rainbowcake} to reduce loading overhead by caching and updating them independently. \textit{However, these runtime-level policies lack the granularity required to address code-level inefficiencies, such as unused or infrequently used libraries.} \noindent{\textbf{Code-level optimizations:}} Code-level techniques aim to reduce initialization time and improve application performance by code optimization. Examples include function fusion to minimize initialization overhead~\cite{lee2021mitigating}, function decomposition into smaller units~\cite{kalia2021mono2micro, nitin2022cargo, abgaz2023decomposition}, and serverless function compression~\cite{liu2023faaslight}. General-purpose tools like JAX~\cite{frostig2018compiling}, GraalVM~\cite{graalvm}, ProGuard~\cite{proguard}, and R8~\cite{r8_android} use static analysis to optimize runtime performance. \textit{However, these tools do not adapt to dynamic runtime behavior, limiting their effectiveness in serverless workloads with varying library usage patterns.} Unlike prior approaches that overlook application context or dynamic behavior, \tool{} leverages runtime profiling of the serverless application to observe real-time library usage patterns, capturing dynamic dependencies and workload-specific inefficiencies. % \subsection{\textbf{Python Library Structure and it's Initialization}} % % \paragraph{\textbf{Library, Package and Module}} % In Python, a \textbf{library} is a collection of related modules and packages that provide a wide range of functionalities aimed at specific tasks or domains. A \textbf{module} is a single file containing Python code—functions, classes, or variables—that can be imported and used in other Python programs to promote code reuse and organization. A \textbf{package}, on the other hand, is a collection of related modules organized within a directory. Packages provide a hierarchical structure to the code, facilitating better organization and modularization. % Packages structure their constituent modules using a hierarchical directory format. The root package directory includes an \texttt{\_\_init\_\_.py} file and multiple Python files (.py), each representing a module. Subdirectories within the root directory serve as subpackages, each containing their own \texttt{\_\_init\_\_.py} files and additional modules. This structured approach allows independent maintenance of different components while ensuring they collectively form a cohesive package. % \paragraph{\textbf{Library initialization time}} % When a package within a library is imported, Python executes the \texttt{\_\_init\_\_.py} files and any other code in the imported modules. This process sets up necessary variables, configurations, and class definitions. The total initialization time for a library is the sum of the initialization times of its individual modules and packages. Each package's \texttt{\_\_init\_\_.py} file may contain setup code that runs during import, including importing other modules, initializing variables, and executing startup routines. Thus, the library's initialization time is the accumulated initialization times of its packages and modules. %\subsection{\textbf{Python Profilers}} %Various Python performance profiling tools have been developed over the years to address specific aspects of performance analysis. At a high level, we divide approaches into instrumentation and sampling-based profilers. The instrumentation-based profilers such as Python's Built-in \textit{cProfile}~\cite{cprofile} and \textit{Profile}~\cite{profile}, trace Python applications at the function or line-level granularity. Due to tracing, these instrumentation-based profilers incur significant overhead and inaccuracy in performance measurements~\cite{288540}. The sampling-based profilers, such as \textit{py-spy}~\cite{py-spy}, \textit{Austin}~\cite{austin}, \textit{Scalene}~\cite{288540}, \textit{Pprofile}~\cite{Pprofile}, and \textit{Pieprof}~\cite{tan2021toward} incur less overhead and provide performance insights at relatively higher accuracy. However, none of these profilers identify inefficient library usage in Python code. In contrast, \tool{} is designed to improve cold-start times of serverless applications by identifying inefficient library initialization. % are instrumentation profilers providing detailed function call analysis. \textit{Yappi} offers both wallclock and CPU time measurements, suitable for detailed performance insights. sampling profilers such as provide low-overhead profiling, ideal for live applications, with Scalene extending its capabilities to GPU and memory profiling. \textit{Pprofile}~\cite{Pprofile} also offers deterministic profiling with multi-threading support. \textit{Pieprof}~\cite{tan2021toward} adds to the list of low-overhead, sampling profilers, identifying hot paths efficiently. In contrast, \tool{} is designed to improve cold-start times of serverless applications by identifying inefficient library initialization.","cludes the paper. II. M OTIVATION This section introduces our empirical study and its results, which motivated our work. A. Optimizing Library Loading: Why and How Fig. 1: Ratio of library Initialization time to end-to-end time. To quantify the impact of library initialization on overall end-to-end time, we evaluated a collection of serverless Python applications drawn from existing literature [13], [14]. Figure 1 presents the library initialization time, end-to-end time, and their respective ratios. The results demonstrate that, for the majority of serverless applications, library initialization contributes to more than 70% of the total end-to-end time. These findings highlight the critical importance of optimizing library initialization to significantly reduce cold-start latency in serverless Python applications. Observation" 2504.11007v1,"Kubernetes in the Cloud vs. Bare Metal: A Comparative Study of Network Costs","Rodrigo Mompo Redoli, Amjad Ullah","Modern cloud-native applications increasingly utilise managed cloud services and containerisation technologies, such as Kubernetes, to achieve rapid time-to-market and scalable deployments. Organisations must consider various factors, including cost implications when deciding on a hosting platform for containerised applications as the usage grows. An emerging discipline called FinOps combines financial management and cloud operations to optimise costs in cloud-based applications. While prior research has explored system-level optimisation strategies for cost and resource efficiency in containerized systems, analysing network costs in Kubernetes clusters remains underexplored. This paper investigates the network usage and cost implications of containerised applications running on Kubernetes clusters. Using a methodology that combines measurement analysis, experimentation, and cost modelling, we aim to provide organisations with actionable insights into network cost optimisation. Our findings highlight key considerations for analysing network expenditures and evaluating the potential cost benefits of deploying applications on cloud providers. Overall, this paper contributes to the emerging FinOps discipline by addressing the financial and operational aspects of managing network costs in cloud-native environments.",cs.DC,2025-04-15T09:26:08+00:00,2025-04-15T09:26:08+00:00,http://arxiv.org/abs/2504.11007v1,http://arxiv.org/abs/2504.11007v1,2025-04-15 09:26:08+00:00,"\label{sec:RelatedWork} Major cloud providers offer tools to help organisations estimate and compare application hosting costs. For example, AWS Cost Explorer~\footnote{https://aws.amazon.com/aws-cost-management/aws-cost-explorer/}, Google Cloud Billing~\footnote{https://cloud.google.com/billing/docs?hl=es-419}, and Azure Cost Management~\footnote{https://azure.microsoft.com/es-es/products/cost-management} provide insights into cost breakdowns for their services. Although these tools provide valuable information on the overall cost analysis, they do not offer detailed network cost breakdowns. Especially when we have Kubernetes clusters hosting multiple applications, these tools only provide aggregated network costs and do not allow data segregation by application. Hence, making it difficult to identify subsystems that could benefit from a bare-metal approach. Third-party tools also specialise in cost and network analysis for managed cloud environments, offering advanced features for comparing network costs across cloud providers and bare-metal hosting. They usually cover all aspects of a cloud provider, including support for Kubernetes cost analysis. Some examples of such tools include Cloudability~\footnote{https://www.apptio.com/products/cloudability/} and Archera~\footnote{https://archera.ai/}. These tools help organisations assess and optimise network costs, thus aiding in the decision-making process. Another popular tool, Datadog~\footnote{https://www.datadoghq.com/}---a market leader in monitoring---is potent for container resource allocation optimisation. However, none of these tools is open source. Furthermore, none of these consider the specific network requirements and costs associated with running applications in Kubernetes clusters except Datadog. In contrast, we propose using open-source Kubecost to analyse the network costs associated with running applications in Kubernetes clusters. Marino et al~\cite{marino2023dynamic} also use Kubecost to do a similar cost analysis, however, their research focuses on high-performance computing instead of network-intensive applications. There are also several research efforts, where resource management schemes are proposed to address the issues of network utilisation and cost in different environments. For example, some studies have focused on optimising resource allocation and load-balancing techniques in cloud environments to reduce network costs~\cite{verreydt2019leveraging, gao2020hierarchical, chhabra2021dynamic}. Others have developed cost estimation and prediction models to help organisations make informed decisions about hosting their applications~\cite{dong2023agent, cho2020cost, xu2018cost}. However, none of these approaches cover comparative aspects of bare metal vs managed cloud. Lastly, some research efforts, e.g. de Vries et al.~\cite{de2023cost}, gain insight into the costs of a particular application using performance metrics. However, application performance metrics need to be tightly integrated with the application, which may not be suitable for analysing an existing production cluster with multiple applications running that do not have application performance metrics integrated into their code. Tools and techniques that provide a holistic view of network usage and costs based on infrastructure metrics are needed in such cases. This paper aims to fill this gap by using infrastructure cost analytics tools to measure any application running in Kubernetes, which will allow organisations to apply FinOps methodologies later to select the best approaches to optimise costs.","Major cloud providers offer tools to help organisations estimate and compare application hosting costs. For example, AWS Cost Explorer~\footnote{https://aws.amazon.com/aws-cost-management/aws-cost-explorer/}, Google Cloud Billing~\footnote{https://cloud.google.com/billing/docs?hl=es-419}, and Azure Cost Management~\footnote{https://azure.microsoft.com/es-es/products/cost-management} provide insights into cost breakdowns for their services. Although these tools provide valuable information on the overall cost analysis, they do not offer detailed network cost breakdowns. Especially when we have Kubernetes clusters hosting multiple applications, these tools only provide aggregated network costs and do not allow data segregation by application. Hence, making it difficult to identify subsystems that could benefit from a bare-metal approach. Third-party tools also specialise in cost and network analysis for managed cloud environments, offering advanced features for comparing network costs across cloud providers and bare-metal hosting. They usually cover all aspects of a cloud provider, including support for Kubernetes cost analysis. Some examples of such tools include Cloudability~\footnote{https://www.apptio.com/products/cloudability/} and Archera~\footnote{https://archera.ai/}. These tools help organisations assess and optimise network costs, thus aiding in the decision-making process. Another popular tool, Datadog~\footnote{https://www.datadoghq.com/}---a market leader in monitoring---is potent for container resource allocation optimisation. However, none of these tools is open source. Furthermore, none of these consider the specific network requirements and costs associated with running applications in Kubernetes clusters except Datadog. In contrast, we propose using open-source Kubecost to analyse the network costs associated with running applications in Kubernetes clusters. Marino et al~\cite{marino2023dynamic} also use Kubecost to do a similar cost analysis, however, their research focuses on high-performance computing instead of network-intensive applications. There are also several research efforts, where resource management schemes are proposed to address the issues of network utilisation and cost in different environments. For example, some studies have focused on optimising resource allocation and load-balancing techniques in cloud environments to reduce network costs~\cite{verreydt2019leveraging, gao2020hierarchical, chhabra2021dynamic}. Others have developed cost estimation and prediction models to help organisations make informed decisions about hosting their applications~\cite{dong2023agent, cho2020cost, xu2018cost}. However, none of these approaches cover comparative aspects of bare metal vs managed cloud. Lastly, some research efforts, e.g. de Vries et al.~\cite{de2023cost}, gain insight into the costs of a particular application using performance metrics. However, application performance metrics need to be tightly integrated with the application, which may not be suitable for analysing an existing production cluster with multiple applications running that do not have application performance metrics integrated into their code. Tools and techniques that provide a holistic view of network usage and costs based on infrastructure metrics are needed in such cases. This paper aims to fill this gap by using infrastructure cost analytics tools to measure any application running in Kubernetes, which will allow organisations to apply FinOps methodologies later to select the best approaches to optimise costs.","Section 3 details the methodology, and Section 4 presents the experimental setup and results. Section 5 discusses the findings and their implications, while Section 6 concludes the paper. Kubernetes in the Cloud vs. Bare Metal: A Comparative Study of Network Costs 3" 2504.09307v1,"Lumos: Efficient Performance Modeling and Estimation for Large-scale LLM Training","Mingyu Liang, Hiwot Tadese Kassa, Wenyin Fu, Brian Coutinho, Louis Feng, Christina Delimitrou","Training LLMs in distributed environments presents significant challenges due to the complexity of model execution, deployment systems, and the vast space of configurable strategies. Although various optimization techniques exist, achieving high efficiency in practice remains difficult. Accurate performance models that effectively characterize and predict a model's behavior are essential for guiding optimization efforts and system-level studies. We propose Lumos, a trace-driven performance modeling and estimation toolkit for large-scale LLM training, designed to accurately capture and predict the execution behaviors of modern LLMs. We evaluate Lumos on a production ML cluster with up to 512 NVIDIA H100 GPUs using various GPT-3 variants, demonstrating that it can replay execution time with an average error of just 3.3%, along with other runtime details, across different models and configurations. Additionally, we validate its ability to estimate performance for new setups from existing traces, facilitating efficient exploration of model and deployment configurations.","cs.DC, cs.AI",2025-04-12T18:43:24+00:00,2025-04-12T18:43:24+00:00,http://arxiv.org/abs/2504.09307v1,http://arxiv.org/abs/2504.09307v1,2025-04-12 18:43:24+00:00,"\subsection{Profiling Tools and Traces} As the ML system stack evolves rapidly, profiling tools play a crucial role in understanding model execution characteristics and identifying performance bottlenecks. As hardware accelerators like GPUs~\cite{NVIDIA_blackwell} and TPUs~\cite{jouppi2023tpu} become increasingly essential, vendors offer specialized tools—such as NVProf~\cite{NVProf}, CUPTI~\cite{CUPTI}, and Nsight~\cite{Nsight}—to expose hardware performance counters, providing developers with critical insights into performance metrics and enabling effective optimization. To improve the interpretability of profiling results, ML frameworks also provide built-in tools for collecting execution statistics at the operator level. These tools often integrate hardware-level traces, offering a complete view of the entire stack—from host to device. For instance, PyTorch Kineto~\cite{pytorch-kineto} leverages CUPTI~\cite{CUPTI} to capture runtime information for PyTorch operators, CUDA events, and GPU kernels, seamlessly linking them to provide a holistic perspective on model execution. \subsection{LLMs and Parallelism Strategies} Most modern LLMs are built on transformer architectures~\cite{vaswani2017attention}, which rely on self-attention mechanisms to capture long-range dependencies in sequential data. These models feature multiple stacked layers of attention and feedforward networks, with parameter sizes growing rapidly over the years. For example, GPT-2~\cite{radford2019language} introduced in 2019 had 1.5 billion parameters, GPT-3~\cite{brown2020language} in 2020 expanded to 175 billion parameters, and PaLM~\cite{chowdhery2023palm} reached 540 billion parameters by 2022. Training LLMs presents significant computational and memory challenges, especially as model sizes grow beyond the capacity of individual GPUs. To address these limitations, 3D parallelism—a hybrid approach combining data, tensor, and pipeline parallelism—has become essential for efficient large-scale training~\cite{narayanan2021efficient, shoeybi2019megatron, smith2022using, chowdhery2023palm}. Each form of parallelism contributes uniquely: data parallelism (DP) distributes training batches across devices, synchronizing gradients during updates; tensor parallelism (TP) splits large tensors across multiple GPUs, allowing shared computation with frequent communication; and pipeline parallelism (PP) partitions the model into sequential stages, with each stage processed on different devices in a coordinated pipeline. Despite the benefits, configuring 3D parallelism introduces significant complexity, requiring careful coordination across these strategies to balance workloads and minimize communication overhead. Recent research has focused on automating these configurations to reduce the burden on developers and ensure efficient distributed execution. For example, GSPMD~\cite{xu2021gspmd} extends the XLA compiler~\cite{sabne2020xla} to support various parallelism paradigms through user annotations. Alpa~\cite{zheng2022alpa} automates model parallelization by optimizing intra- and inter-operator parallelism for efficient distributed execution. Galvatron~\cite{miao2022galvatron} introduces a decision tree to decompose the search space and designs a dynamic programming algorithm to generate the optimal plan. Emerging techniques like sequence parallelism~\cite{li2021sequence, jacobs2023deepspeed, liu2023ring} further address the challenges of training on long sequences by distributing computations along the sequence dimension, reducing memory overhead and communication bottlenecks. \subsection{Performance Modeling, Simulation, and Optimization} The complexity of LLMs poses challenges and opportunities in system design and optimization, with performance modeling serving as a critical foundation for diagnosing and optimizing overall efficiency. There are two primary approaches to building performance models. The first relies on analytical models. AmPeD~\cite{moolchandani2023amped} introduces an analytical model to estimate performance in distributed transformer training under various model parameters and parallelism strategies. Similarly, Calculon~\cite{isaev2023calculon} provides a parameterized analytical model that explores the co-design space of software and hardware configurations to identify optimal system designs for LLMs. However, these analytical models are often tailored to specific implementations and hardware configurations, limiting their ability to generalize in the face of rapid model and system evolution. Moreover, they typically provide high-level performance estimates, making them inadequate for optimizations like mixed precision training~\cite{das2018mixed, zhu2020daydream} and operator fusion~\cite{zhao2022apollo, jia2019taso}. The second approach leverages trace-based models to simulate execution and derive optimization insights. For example, ASTRA-sim~\cite{rashidi2020astra} and ASTRA-sim2.0~\cite{won2023astra} simulate distributed training with a cycle-level and analytical network backend, evaluating collective communication algorithms and network topologies. In ~\cite{lin2022building}, the authors analyze critical paths within profiled traces to predict per-batch training time for DLRM. Daydream~\cite{zhu2020daydream} uses kernel-level dependency graphs collected with CUPTI to predict runtime under specific optimizations, while dPRO~\cite{hu2022dpro} builds a global dataflow graph by tracking dependencies among operators to estimate DNN training performance. However, these trace-based approaches fail to fully capture the complexities inherent in LLM execution. To the best of our knowledge, this work is the first to leverage traces for accurately modeling the intricate behaviors of LLMs, accounting for detailed operator and kernel interactions essential for precise performance prediction.","\subsection{Profiling Tools and Traces} As the ML system stack evolves rapidly, profiling tools play a crucial role in understanding model execution characteristics and identifying performance bottlenecks. As hardware accelerators like GPUs~\cite{NVIDIA_blackwell} and TPUs~\cite{jouppi2023tpu} become increasingly essential, vendors offer specialized tools—such as NVProf~\cite{NVProf}, CUPTI~\cite{CUPTI}, and Nsight~\cite{Nsight}—to expose hardware performance counters, providing developers with critical insights into performance metrics and enabling effective optimization. To improve the interpretability of profiling results, ML frameworks also provide built-in tools for collecting execution statistics at the operator level. These tools often integrate hardware-level traces, offering a complete view of the entire stack—from host to device. For instance, PyTorch Kineto~\cite{pytorch-kineto} leverages CUPTI~\cite{CUPTI} to capture runtime information for PyTorch operators, CUDA events, and GPU kernels, seamlessly linking them to provide a holistic perspective on model execution. \subsection{LLMs and Parallelism Strategies} Most modern LLMs are built on transformer architectures~\cite{vaswani2017attention}, which rely on self-attention mechanisms to capture long-range dependencies in sequential data. These models feature multiple stacked layers of attention and feedforward networks, with parameter sizes growing rapidly over the years. For example, GPT-2~\cite{radford2019language} introduced in 2019 had 1.5 billion parameters, GPT-3~\cite{brown2020language} in 2020 expanded to 175 billion parameters, and PaLM~\cite{chowdhery2023palm} reached 540 billion parameters by 2022. Training LLMs presents significant computational and memory challenges, especially as model sizes grow beyond the capacity of individual GPUs. To address these limitations, 3D parallelism—a hybrid approach combining data, tensor, and pipeline parallelism—has become essential for efficient large-scale training~\cite{narayanan2021efficient, shoeybi2019megatron, smith2022using, chowdhery2023palm}. Each form of parallelism contributes uniquely: data parallelism (DP) distributes training batches across devices, synchronizing gradients during updates; tensor parallelism (TP) splits large tensors across multiple GPUs, allowing shared computation with frequent communication; and pipeline parallelism (PP) partitions the model into sequential stages, with each stage processed on different devices in a coordinated pipeline. Despite the benefits, configuring 3D parallelism introduces significant complexity, requiring careful coordination across these strategies to balance workloads and minimize communication overhead. Recent research has focused on automating these configurations to reduce the burden on developers and ensure efficient distributed execution. For example, GSPMD~\cite{xu2021gspmd} extends the XLA compiler~\cite{sabne2020xla} to support various parallelism paradigms through user annotations. Alpa~\cite{zheng2022alpa} automates model parallelization by optimizing intra- and inter-operator parallelism for efficient distributed execution. Galvatron~\cite{miao2022galvatron} introduces a decision tree to decompose the search space and designs a dynamic programming algorithm to generate the optimal plan. Emerging techniques like sequence parallelism~\cite{li2021sequence, jacobs2023deepspeed, liu2023ring} further address the challenges of training on long sequences by distributing computations along the sequence dimension, reducing memory overhead and communication bottlenecks. \subsection{Performance Modeling, Simulation, and Optimization} The complexity of LLMs poses challenges and opportunities in system design and optimization, with performance modeling serving as a critical foundation for diagnosing and optimizing overall efficiency. There are two primary approaches to building performance models. The first relies on analytical models. AmPeD~\cite{moolchandani2023amped} introduces an analytical model to estimate performance in distributed transformer training under various model parameters and parallelism strategies. Similarly, Calculon~\cite{isaev2023calculon} provides a parameterized analytical model that explores the co-design space of software and hardware configurations to identify optimal system designs for LLMs. However, these analytical models are often tailored to specific implementations and hardware configurations, limiting their ability to generalize in the face of rapid model and system evolution. Moreover, they typically provide high-level performance estimates, making them inadequate for optimizations like mixed precision training~\cite{das2018mixed, zhu2020daydream} and operator fusion~\cite{zhao2022apollo, jia2019taso}. The second approach leverages trace-based models to simulate execution and derive optimization insights. For example, ASTRA-sim~\cite{rashidi2020astra} and ASTRA-sim2.0~\cite{won2023astra} simulate distributed training with a cycle-level and analytical network backend, evaluating collective communication algorithms and network topologies. In ~\cite{lin2022building}, the authors analyze critical paths within profiled traces to predict per-batch training time for DLRM. Daydream~\cite{zhu2020daydream} uses kernel-level dependency graphs collected with CUPTI to predict runtime under specific optimizations, while dPRO~\cite{hu2022dpro} builds a global dataflow graph by tracking dependencies among operators to estimate DNN training performance. However, these trace-based approaches fail to fully capture the complexities inherent in LLM execution. To the best of our knowledge, this work is the first to leverage traces for accurately modeling the intricate behaviors of LLMs, accounting for detailed operator and kernel interactions essential for precise performance prediction.", 2506.02750v1,"Learning Binarized Representations with Pseudo-positive Sample Enhancement for Efficient Graph Collaborative Filtering","Yankai Chen, Yue Que, Xinni Zhang, Chen Ma, Irwin King","Learning vectorized embeddings is fundamental to many recommender systems for user-item matching. To enable efficient online inference, representation binarization, which embeds latent features into compact binary sequences, has recently shown significant promise in optimizing both memory usage and computational overhead. However, existing approaches primarily focus on numerical quantization, neglecting the associated information loss, which often results in noticeable performance degradation. To address these issues, we study the problem of graph representation binarization for efficient collaborative filtering. Our findings indicate that explicitly mitigating information loss at various stages of embedding binarization has a significant positive impact on performance. Building on these insights, we propose an enhanced framework, BiGeaR++, which specifically leverages supervisory signals from pseudo-positive samples, incorporating both real item data and latent embedding samples. Compared to its predecessor BiGeaR, BiGeaR++ introduces a fine-grained inference distillation mechanism and an effective embedding sample synthesis approach. Empirical evaluations across five real-world datasets demonstrate that the new designs in BiGeaR++ work seamlessly well with other modules, delivering substantial improvements of around 1%-10% over BiGeaR and thus achieving state-of-the-art performance compared to the competing methods. Our implementation is available at https://github.com/QueYork/BiGeaR-SS.",cs.IR,2025-06-03T11:11:43+00:00,2025-06-03T11:11:43+00:00,http://arxiv.org/abs/2506.02750v1,http://arxiv.org/abs/2506.02750v1,2025-06-03 11:11:43+00:00,"\label{sec:work} \paragraph{\textbf{Full-precision Recommender Models.}} (1) \textit{Collaborative Filtering (CF)} is a widely used approach in modern recommender systems~\cite{covington2016deep,pinsage,yang2022hrcf,lin2024effective,zhang2022knowledge,luo2025rank}. Earlier CF methods, such as \textit{Matrix Factorization}~\cite{koren2009matrix,rendle2012bpr}, focus on reconstructing historical interactions to learn user-item embeddings. More recent models, like NeurCF~\cite{neurcf} and attention-based approaches~\cite{chen2017attentive,he2018nais}, improve performance by leveraging neural networks. (2) \textit{Graph-based} methods have been widely used in many domains~\cite{wu2023survey,zhang2024geometric,zhang2023contrastive}. In recommendation, they explore the interaction graph structure for knowledge learning. Graph convolutional networks (GCNs)~\cite{graphsage,kipf2016semi} propagate knowledge via graph topologies~\cite{wang2022aep,chen2025semi} and have inspired both early methods, such as GC-MC~\cite{berg2017graph} and PinSage~\cite{pinsage}, as well as recent models like NGCF~\cite{ngcf}, DGCF~\cite{dgcf}, and LightGCN~\cite{lightgcn}, which effectively capture higher-order collaborative filtering signals among high-hop neighbors for improved recommendations. \paragraph{\textbf{Learning to Hash.}} Hashing-based methods convert dense floating-point embeddings into binary spaces to accelerate \textit{Approximate Nearest Neighbor} (ANN) searches. A prominent model, LSH~\cite{lsh}, has inspired numerous approaches across various domains, including fast image retrieval~\cite{hashnet}, document search~\cite{li2014two}, and categorical information retrieval~\cite{kang2021learning}. In Top-K recommendation, early models~\cite{zhang2016discrete,zhang2017discrete,li2019learning} incorporate neural network architectures. CIGAR~\cite{kang2019candidate} further refines these methods with adaptive designs for fast candidate generation. HashGNN~\cite{hashgnn} integrates hashing techniques with graph neural networks~\cite{wu2020comprehensive,wang2024uncertainty} to capture the graph structure in user-item interactions for recommendation. However, relying solely on binary codes often leads to significant performance degradation. To address this, CIGAR includes additional full-precision recommender models (e.g., BPR-MF~\cite{rendle2012bpr}) for fine-grained re-ranking, while HashGNN introduces a relaxed version that mixes full-precision and binary embedding codes. \paragraph{\textbf{Quantization-based Models.}} Quantization-based models share techniques with hashing-based methods, often using $\sign(\cdot)$ due to its simplicity. Unlike hashing-based methods, however, quantization models are not focused on extreme compression, instead utilizing multi-bit, 2-bit, and 1-bit quantization to optimize performance~\cite{qiu2024hihpq,chen2021towards}. Recently, attention has shifted toward quantizing graph-based models, such as Bi-GCN~\cite{bigcn} and BGCN~\cite{bahri2021binary}. However, these models are primarily designed for geometric classification tasks, leaving their effectiveness in product recommendation unclear. In response, we introduce BiGeaR~\cite{chen2022learning}, a model that learns 1-bit user-item representation quantization for Top-K recommendation. Building on this, we propose \model~with enhanced design features aimed at further improving both efficiency and model performance. {\rev \paragraph{\textbf{Knowledge Distillation}.} Knowledge Distillation (KD) represents the process of transferring knowledge from a larger model to a smaller one~\cite{hinton2015distilling}. We review KD techniques specifically for the topics of Graph Neural Networks (GNN) and Learning to hash as follows. (1) KD in GNNs focuses on transferring knowledge from a large, complex model (teacher) to a smaller, more efficient model (student) while maintaining performance~\cite{tian2023knowledge,yang2020distilling,deng2021graph}. G-CRD~\cite{joshi2022representation} leverages contrastive learning methods~\cite{wang2024graph,zhang2023contrastive} to better capture topological information by aligning teacher and student node embeddings in a shared representation space. HKD~\cite{zhou2021distilling} uses GNNs to integrate both individual knowledge and relational knowledge, i.e., two types of knowledge, while reserving their inherent correlation in the distillation process. KDGA~\cite{wu2022knowledge} focuses on addressing the negative augmentation problem in graph structure augmentation. A recent work~\cite{liu2024fine} further optimizes the resource consumption of GNNs by proposing a KD method for fine-grained learning behavior. FreeKD~\cite{feng2022freekd} considers reinforcement learning in KD for GNNs. (2) KD is also widely used in learning to hash to reduce the information discrepancy and balance efficiency. For example, UKD~\cite{hu2020creating} and SKDCH~\cite{su2021semi} apply KD in cross-modal hashing to reduce modality discrepancy. \citet{jang2022deep} introduce a self-distilled hashing scheme with data augmentation designs. HMAH~\cite{tan2022teacher} constructs a hierarchical message aggregation mechanism to better align the heterogeneous modalities and model the fine-grained multi-modal correlations. A recent work~\cite{yu2024unsupervised} introduces KD to improve the effectiveness of large-scale cross-media hash retrieval. Generally, Combining KD with learning to hash allows the student model to benefit from the teacher's superior representation capabilities while maintaining the efficiency of compact representations. }","\paragraph{\textbf{Full-precision Recommender Models.}} (1) \textit{Collaborative Filtering (CF)} is a widely used approach in modern recommender systems~\cite{covington2016deep,pinsage,yang2022hrcf,lin2024effective,zhang2022knowledge,luo2025rank}. Earlier CF methods, such as \textit{Matrix Factorization}~\cite{koren2009matrix,rendle2012bpr}, focus on reconstructing historical interactions to learn user-item embeddings. More recent models, like NeurCF~\cite{neurcf} and attention-based approaches~\cite{chen2017attentive,he2018nais}, improve performance by leveraging neural networks. (2) \textit{Graph-based} methods have been widely used in many domains~\cite{wu2023survey,zhang2024geometric,zhang2023contrastive}. In recommendation, they explore the interaction graph structure for knowledge learning. Graph convolutional networks (GCNs)~\cite{graphsage,kipf2016semi} propagate knowledge via graph topologies~\cite{wang2022aep,chen2025semi} and have inspired both early methods, such as GC-MC~\cite{berg2017graph} and PinSage~\cite{pinsage}, as well as recent models like NGCF~\cite{ngcf}, DGCF~\cite{dgcf}, and LightGCN~\cite{lightgcn}, which effectively capture higher-order collaborative filtering signals among high-hop neighbors for improved recommendations. \paragraph{\textbf{Learning to Hash.}} Hashing-based methods convert dense floating-point embeddings into binary spaces to accelerate \textit{Approximate Nearest Neighbor} (ANN) searches. A prominent model, LSH~\cite{lsh}, has inspired numerous approaches across various domains, including fast image retrieval~\cite{hashnet}, document search~\cite{li2014two}, and categorical information retrieval~\cite{kang2021learning}. In Top-K recommendation, early models~\cite{zhang2016discrete,zhang2017discrete,li2019learning} incorporate neural network architectures. CIGAR~\cite{kang2019candidate} further refines these methods with adaptive designs for fast candidate generation. HashGNN~\cite{hashgnn} integrates hashing techniques with graph neural networks~\cite{wu2020comprehensive,wang2024uncertainty} to capture the graph structure in user-item interactions for recommendation. However, relying solely on binary codes often leads to significant performance degradation. To address this, CIGAR includes additional full-precision recommender models (e.g., BPR-MF~\cite{rendle2012bpr}) for fine-grained re-ranking, while HashGNN introduces a relaxed version that mixes full-precision and binary embedding codes. \paragraph{\textbf{Quantization-based Models.}} Quantization-based models share techniques with hashing-based methods, often using $\sign(\cdot)$ due to its simplicity. Unlike hashing-based methods, however, quantization models are not focused on extreme compression, instead utilizing multi-bit, 2-bit, and 1-bit quantization to optimize performance~\cite{qiu2024hihpq,chen2021towards}. Recently, attention has shifted toward quantizing graph-based models, such as Bi-GCN~\cite{bigcn} and BGCN~\cite{bahri2021binary}. However, these models are primarily designed for geometric classification tasks, leaving their effectiveness in product recommendation unclear. In response, we introduce BiGeaR~\cite{chen2022learning}, a model that learns 1-bit user-item representation quantization for Top-K recommendation. Building on this, we propose \model~with enhanced design features aimed at further improving both efficiency and model performance. {\rev \paragraph{\textbf{Knowledge Distillation}.} Knowledge Distillation (KD) represents the process of transferring knowledge from a larger model to a smaller one~\cite{hinton2015distilling}. We review KD techniques specifically for the topics of Graph Neural Networks (GNN) and Learning to hash as follows. (1) KD in GNNs focuses on transferring knowledge from a large, complex model (teacher) to a smaller, more efficient model (student) while maintaining performance~\cite{tian2023knowledge,yang2020distilling,deng2021graph}. G-CRD~\cite{joshi2022representation} leverages contrastive learning methods~\cite{wang2024graph,zhang2023contrastive} to better capture topological information by aligning teacher and student node embeddings in a shared representation space. HKD~\cite{zhou2021distilling} uses GNNs to integrate both individual knowledge and relational knowledge, i.e., two types of knowledge, while reserving their inherent correlation in the distillation process. KDGA~\cite{wu2022knowledge} focuses on addressing the negative augmentation problem in graph structure augmentation. A recent work~\cite{liu2024fine} further optimizes the resource consumption of GNNs by proposing a KD method for fine-grained learning behavior. FreeKD~\cite{feng2022freekd} considers reinforcement learning in KD for GNNs. (2) KD is also widely used in learning to hash to reduce the information discrepancy and balance efficiency. For example, UKD~\cite{hu2020creating} and SKDCH~\cite{su2021semi} apply KD in cross-modal hashing to reduce modality discrepancy. \citet{jang2022deep} introduce a self-distilled hashing scheme with data augmentation designs. HMAH~\cite{tan2022teacher} constructs a hierarchical message aggregation mechanism to better align the heterogeneous modalities and model the fine-grained multi-modal correlations. A recent work~\cite{yu2024unsupervised} introduces KD to improve the effectiveness of large-scale cross-media hash retrieval. Generally, Combining KD with learning to hash allows the student model to benefit from the teacher's superior representation capabilities while maintaining the efficiency of compact representations. }", 2505.23452v1,"What About Emotions? Guiding Fine-Grained Emotion Extraction from Mobile App Reviews","Quim Motger, Marc Oriol, Max Tiessler, Xavier Franch, Jordi Marco","Opinion mining plays a vital role in analysing user feedback and extracting insights from textual data. While most research focuses on sentiment polarity (e.g., positive, negative, neutral), fine-grained emotion classification in app reviews remains underexplored. This paper addresses this gap by identifying and addressing the challenges and limitations in fine-grained emotion analysis in the context of app reviews. Our study adapts Plutchik's emotion taxonomy to app reviews by developing a structured annotation framework and dataset. Through an iterative human annotation process, we define clear annotation guidelines and document key challenges in emotion classification. Additionally, we evaluate the feasibility of automating emotion annotation using large language models, assessing their cost-effectiveness and agreement with human-labelled data. Our findings reveal that while large language models significantly reduce manual effort and maintain substantial agreement with human annotators, full automation remains challenging due to the complexity of emotional interpretation. This work contributes to opinion mining by providing structured guidelines, an annotated dataset, and insights for developing automated pipelines to capture the complexity of emotions in app reviews.","cs.IR, cs.SE",2025-05-29T13:58:38+00:00,2025-05-29T13:58:38+00:00,http://arxiv.org/abs/2505.23452v1,http://arxiv.org/abs/2505.23452v1,2025-05-29 13:58:38+00:00,"\label{sec:related-work} \subsection{Emotion Annotation of App Reviews} The use of multi-class, fine-grained emotion taxonomies in app reviews remains limited, though some related work exists. Riccosan published an Indonesian dataset of app reviews annotated with Parrott's taxonomy~\cite{Riccosan2023}. While relatively large (20K reviews), it is not available in English. Moreover, their annotation relied on two annotators, reporting a Cohen’s Kappa of $0.61$. They lack details on disagreement resolution and insights into how specific emotions were adapted to the app review domain. Moreover, emotions like \anticipation and \trustA, relevant to app reviews, are not considered. Several studies have explored methods for inferring emotions from app reviews using proprietary datasets. Malgaonkar et al. developed a tool integrating a WordNet-based lexicon method to identify Ekman’s emotions from a dataset of 53K reviews~\cite{Malgaonkar2019}. Similarly, Keertipati et al. applied a lexicon-based method using the LIWC dictionary for three negative emotions to analyse their correlation with app features~\cite{Keertipati2016}. Singh et al. manually annotated 2K mobile learning app reviews using also a lexicon-based approach aligned with Plutchik’s taxonomy~\cite{Singh2022101929}, linking emotions to review descriptors like ratings, technical quality and usefulness. Beyond lexicon-based methods, Savarimuthu et al. employed IBM Watson’s Tone Analyzer %\footnote{\href{https://cloud.ibm.com/docs/natural-language-understanding}{https://cloud.ibm.com/docs/natural-language-understanding}} to extract emotions as descriptors for assessing data waste in mobile app reviews~\cite{Savarimuthu2023}. Lastly, Cabellos et al.~\cite{Cabellos2022} manually analysed video game reviews using Liew \& Turtle’s taxonomy to align emotions with moral aspects. However, these datasets are unavailable, lack emotion annotations, and do not evaluate extraction methods empirically. This underscores the relevance of emotion analysis but highlights the scarcity of annotated datasets. \subsection{LLM-based Annotation} The potential of LLMs for human-like reasoning tasks, combined with the need for large domain-specific datasets to reduce hallucinations and errors, has driven research into their use as data annotators~\cite{Hou2024}. Heseltine et al. analysed the performance of multiple annotation runs using OpenAI's GPT-4 for political text annotation~\cite{Heseltine2024}. Their findings suggest that while LLM-assisted tagging achieves high accuracy for simple tasks in cost-efficient settings, it struggles with complex and subjective analyses, such as sentiment annotation, where interpretation often varies between annotators~\cite{Zhang2025}. Similarly, Sayeed et al. evaluated Gemini for text classification in materials science, reaching comparable conclusions~\cite{Sayeed2024}. Research has further explored LLM annotation across various fields, including mathematics~\cite{Shan2024}, finance~\cite{Aguda2024}, and linguistics~\cite{Yu2024}. To address these limitations, Kim et al. proposed MEGAnno+~\cite{kim-etal-2024-meganno}, a \textit{human-LLM} collaborative framework designed to enhance the reliability and robustness of LLM-generated labels. Their approach integrates a human-in-the-loop mechanism to verify LLM annotations, concluding that fully autonomous annotation remains prone to errors, requiring human oversight for reliability. Similar studies investigate additional dimensions, such as the explainability~\cite{Wang2024} and cost-effectiveness~\cite{Rouzegar2024} of human-LLM collaboration in annotation tasks. While several domain-specific studies have been conducted, further research is needed to assess the reliability of these agents and explore improvements through alternative annotation mechanisms. To this end, and in line with our findings, hybrid approaches that combine expert validation with automated annotations may provide a balanced solution for generating datasets to support supervised extraction methods.","\subsection{Emotion Annotation of App Reviews} The use of multi-class, fine-grained emotion taxonomies in app reviews remains limited, though some related work exists. Riccosan published an Indonesian dataset of app reviews annotated with Parrott's taxonomy~\cite{Riccosan2023}. While relatively large (20K reviews), it is not available in English. Moreover, their annotation relied on two annotators, reporting a Cohen’s Kappa of $0.61$. They lack details on disagreement resolution and insights into how specific emotions were adapted to the app review domain. Moreover, emotions like \anticipation and \trustA, relevant to app reviews, are not considered. Several studies have explored methods for inferring emotions from app reviews using proprietary datasets. Malgaonkar et al. developed a tool integrating a WordNet-based lexicon method to identify Ekman’s emotions from a dataset of 53K reviews~\cite{Malgaonkar2019}. Similarly, Keertipati et al. applied a lexicon-based method using the LIWC dictionary for three negative emotions to analyse their correlation with app features~\cite{Keertipati2016}. Singh et al. manually annotated 2K mobile learning app reviews using also a lexicon-based approach aligned with Plutchik’s taxonomy~\cite{Singh2022101929}, linking emotions to review descriptors like ratings, technical quality and usefulness. Beyond lexicon-based methods, Savarimuthu et al. employed IBM Watson’s Tone Analyzer %\footnote{\href{https://cloud.ibm.com/docs/natural-language-understanding}{https://cloud.ibm.com/docs/natural-language-understanding}} to extract emotions as descriptors for assessing data waste in mobile app reviews~\cite{Savarimuthu2023}. Lastly, Cabellos et al.~\cite{Cabellos2022} manually analysed video game reviews using Liew \& Turtle’s taxonomy to align emotions with moral aspects. However, these datasets are unavailable, lack emotion annotations, and do not evaluate extraction methods empirically. This underscores the relevance of emotion analysis but highlights the scarcity of annotated datasets. \subsection{LLM-based Annotation} The potential of LLMs for human-like reasoning tasks, combined with the need for large domain-specific datasets to reduce hallucinations and errors, has driven research into their use as data annotators~\cite{Hou2024}. Heseltine et al. analysed the performance of multiple annotation runs using OpenAI's GPT-4 for political text annotation~\cite{Heseltine2024}. Their findings suggest that while LLM-assisted tagging achieves high accuracy for simple tasks in cost-efficient settings, it struggles with complex and subjective analyses, such as sentiment annotation, where interpretation often varies between annotators~\cite{Zhang2025}. Similarly, Sayeed et al. evaluated Gemini for text classification in materials science, reaching comparable conclusions~\cite{Sayeed2024}. Research has further explored LLM annotation across various fields, including mathematics~\cite{Shan2024}, finance~\cite{Aguda2024}, and linguistics~\cite{Yu2024}. To address these limitations, Kim et al. proposed MEGAnno+~\cite{kim-etal-2024-meganno}, a \textit{human-LLM} collaborative framework designed to enhance the reliability and robustness of LLM-generated labels. Their approach integrates a human-in-the-loop mechanism to verify LLM annotations, concluding that fully autonomous annotation remains prone to errors, requiring human oversight for reliability. Similar studies investigate additional dimensions, such as the explainability~\cite{Wang2024} and cost-effectiveness~\cite{Rouzegar2024} of human-LLM collaboration in annotation tasks. While several domain-specific studies have been conducted, further research is needed to assess the reliability of these agents and explore improvements through alternative annotation mechanisms. To this end, and in line with our findings, hybrid approaches that combine expert validation with automated annotations may provide a balanced solution for generating datasets to support supervised extraction methods.","exists. Riccosan published an Indonesian dataset of app re- views annotated with Parrott’s taxonomy [36]. While relatively large (20K reviews), it is not available in English. Moreover, their annotation relied on two annotators, reporting a Cohen’s Kappa of 0" 2505.21811v1,Revisiting Self-attention for Cross-domain Sequential Recommendation,"Clark Mingxuan Ju, Leonardo Neves, Bhuvesh Kumar, Liam Collins, Tong Zhao, Yuwei Qiu, Qing Dou, Sohail Nizam, Sen Yang, Neil Shah","Sequential recommendation is a popular paradigm in modern recommender systems. In particular, one challenging problem in this space is cross-domain sequential recommendation (CDSR), which aims to predict future behaviors given user interactions across multiple domains. Existing CDSR frameworks are mostly built on the self-attention transformer and seek to improve by explicitly injecting additional domain-specific components (e.g. domain-aware module blocks). While these additional components help, we argue they overlook the core self-attention module already present in the transformer, a naturally powerful tool to learn correlations among behaviors. In this work, we aim to improve the CDSR performance for simple models from a novel perspective of enhancing the self-attention. Specifically, we introduce a Pareto-optimal self-attention and formulate the cross-domain learning as a multi-objective problem, where we optimize the recommendation task while dynamically minimizing the cross-domain attention scores. Our approach automates knowledge transfer in CDSR (dubbed as AutoCDSR) -- it not only mitigates negative transfer but also encourages complementary knowledge exchange among auxiliary domains. Based on the idea, we further introduce AutoCDSR+, a more performant variant with slight additional cost. Our proposal is easy to implement and works as a plug-and-play module that can be incorporated into existing transformer-based recommenders. Besides flexibility, it is practical to deploy because it brings little extra computational overheads without heavy hyper-parameter tuning. AutoCDSR on average improves Recall@10 for SASRec and Bert4Rec by 9.8% and 16.0% and NDCG@10 by 12.0% and 16.7%, respectively. Code is available at https://github.com/snap-research/AutoCDSR.","cs.IR, cs.AI",2025-05-27T22:38:32+00:00,2025-05-27T22:38:32+00:00,http://arxiv.org/abs/2505.21811v1,http://arxiv.org/abs/2505.21811v1,2025-05-27 22:38:32+00:00,"\subsection{Sequential Recommendation} Sequential recommendation aims at predicting user's future behaviors given an ordered list of user's historical interactions. Prior to the popularity of transformer models, researchers explored models based on recurrent architectures ~\citep{wu2017recurrent, chung2014empirical} to encode the sequential patterns in user behavior histories, such as GRU4Rec~\citep{hidasi2015session}, STAMP~\citep{liu2018stamp}, NARM~\citep{li2017neural}, etc. These works demonstrate that models consuming sequence of user behaviors significantly outperforms pair-wise models such as matrix factorization~\citep{rendle2009bpr}. After the invention of the transformer~\citep{vaswani2017attention}, sequential recommendation frameworks by default explore backbone model architectures based on this architecture~\citep{kang2018self,sun2019bert4rec}, owing to its strong capabilities of modeling long sequential data that have been well demonstrated in other fields~\citep{vaswani2017attention,beltagy2020longformer}. For instance, approaches such as SASRec~\citep{kang2018self}, BERT4Rec~\cite{sun2019bert4rec}, SINE~\citep{tan2021sparse}, and LightSANs~\citep{fan2021lighter} train a transformer-based model with supervision signals like causal language modeling or masked language modeling on the user behavior sequence. Another branch of research explores textual attributes of behaviors (e.g., reviews and descriptions) and utilizes large language models to conduct sequential recommendation~\citep{zhu2024collaborative,zhao2023survey,wu2024survey,zhang2023recommendation,hou2024large,cui2022m6,geng2022recommendation}. \subsection{Cross-domain Sequential Recommendation} \label{sec:CDSR_relatedworks} Cross-domain recommendation aims at improving recommendation performance by leveraging information from multiple domains simultaneously. A branch of early studies explore matrix factorization approaches to model user-item interactions across different domains without considering their sequential nature~\citep{gao2013cross,singh2008relational,liu2020cross,zhu2019dtcdr,li2023one}. Follow-up research proposes cross-domain sequential recommendation (CDSR) to further improve performance by explicitly injecting additional domain-specific components, such as adding additional supervision signals~\citep{cao2022contrastive}, reweighing different domains~\citep{park2024pacer} and deriving domain-aware module blocks~\citep{hwang2024multi,zhang2024mdmtrec}. Specifically, $\pi$-net proposes a domain-aware gating mechanism to facilitate knowledge transfer between domains~\citep{ma2019pi}. C$^2$DSR leverages graph neural networks that models cross-domain graphs to improve the performance~\citep{cao2022contrastive}. Similarly, MIFN uses a knowledge graph to enhance CDSR~\citep{ma2022mixed}. MAN~\citep{lin2024mixed} harnesses additional supervision signals and domain-aware blocks to disentangle information from different domains~\citep{lin2024mixed}. SyNCRec proposes a cooperative learning framework and utilizes additional domain-specific blocks to advance CDSR~\citep{park2024pacer}. Although incorporating additional components can be effective, such approaches often overlook the self-attention module in the backbone transformer, which is inherently a powerful tool for capturing fine-grained correlations among heterogeneous behaviors on its own~\citep{nagrani2021attention,tsai2019multimodal,xu2023multimodal}.","\subsection{Sequential Recommendation} Sequential recommendation aims at predicting user's future behaviors given an ordered list of user's historical interactions. Prior to the popularity of transformer models, researchers explored models based on recurrent architectures ~\citep{wu2017recurrent, chung2014empirical} to encode the sequential patterns in user behavior histories, such as GRU4Rec~\citep{hidasi2015session}, STAMP~\citep{liu2018stamp}, NARM~\citep{li2017neural}, etc. These works demonstrate that models consuming sequence of user behaviors significantly outperforms pair-wise models such as matrix factorization~\citep{rendle2009bpr}. After the invention of the transformer~\citep{vaswani2017attention}, sequential recommendation frameworks by default explore backbone model architectures based on this architecture~\citep{kang2018self,sun2019bert4rec}, owing to its strong capabilities of modeling long sequential data that have been well demonstrated in other fields~\citep{vaswani2017attention,beltagy2020longformer}. For instance, approaches such as SASRec~\citep{kang2018self}, BERT4Rec~\cite{sun2019bert4rec}, SINE~\citep{tan2021sparse}, and LightSANs~\citep{fan2021lighter} train a transformer-based model with supervision signals like causal language modeling or masked language modeling on the user behavior sequence. Another branch of research explores textual attributes of behaviors (e.g., reviews and descriptions) and utilizes large language models to conduct sequential recommendation~\citep{zhu2024collaborative,zhao2023survey,wu2024survey,zhang2023recommendation,hou2024large,cui2022m6,geng2022recommendation}. \subsection{Cross-domain Sequential Recommendation} Cross-domain recommendation aims at improving recommendation performance by leveraging information from multiple domains simultaneously. A branch of early studies explore matrix factorization approaches to model user-item interactions across different domains without considering their sequential nature~\citep{gao2013cross,singh2008relational,liu2020cross,zhu2019dtcdr,li2023one}. Follow-up research proposes cross-domain sequential recommendation (CDSR) to further improve performance by explicitly injecting additional domain-specific components, such as adding additional supervision signals~\citep{cao2022contrastive}, reweighing different domains~\citep{park2024pacer} and deriving domain-aware module blocks~\citep{hwang2024multi,zhang2024mdmtrec}. Specifically, $\pi$-net proposes a domain-aware gating mechanism to facilitate knowledge transfer between domains~\citep{ma2019pi}. C$^2$DSR leverages graph neural networks that models cross-domain graphs to improve the performance~\citep{cao2022contrastive}. Similarly, MIFN uses a knowledge graph to enhance CDSR~\citep{ma2022mixed}. MAN~\citep{lin2024mixed} harnesses additional supervision signals and domain-aware blocks to disentangle information from different domains~\citep{lin2024mixed}. SyNCRec proposes a cooperative learning framework and utilizes additional domain-specific blocks to advance CDSR~\citep{park2024pacer}. Although incorporating additional components can be effective, such approaches often overlook the self-attention module in the backbone transformer, which is inherently a powerful tool for capturing fine-grained correlations among heterogeneous behaviors on its own~\citep{nagrani2021attention,tsai2019multimodal,xu2023multimodal}.","2.1 Sequential Recommendation Sequential recommendation aims at predicting user’s future behav- iors given an ordered list of user’s historical interactions. Prior to the popularity of transformer models, researchers explored models based on recurrent architectures [ 3,50] to encode the sequential pat- terns in user behavior histories, such as GRU4Rec [ 13], STAMP [ 31], NARM [ 26], etc. These works demonstrate that models consum- ing sequence of user behaviors significantly outperforms pair-wise models such as matrix factorization [ 39]. After the invention of the transformer [ 48], sequential recommendation frameworks bydefault explore backbone model architectures based on this archi- tecture [ 22,44], owing to its strong capabilities of modeling long se- quential data that have been well demonstrated in other fields [ 1,48]. For instance, approaches such as SASRec [ 22], BERT4Rec [ 44], SINE [ 45], and LightSANs [ 7] train a transformer-based model with supervision signals like causal language modeling or masked language modeling on the user behavior sequence. Another branch of research explores textual attributes of behaviors (e.g., reviews and descriptions) and utilizes large language models to conduct sequential recommendation [4, 11, 16, 51, 56, 60, 65]. 2.2 Cross-domain Sequential Recommendation Cross-domain recommendation aims at improving recommenda- tion performance by leveraging information from multiple domains simultaneously. A branch of early studies explore matrix factoriza- tion approaches to model user-item interactions across different do- mains without considering their sequential nature [ 10,25,30,43,62]. Follow-up research proposes cross-domain sequential recommenda- tion (CDSR) to further improve performance by explicitly injecting additional domain-specific components, such as adding additional supervision signals [2], reweighing different domains [38] and de- riving domain-aware module blocks [ 17,59]. Specifically, 𝜋-net proposes a domain-aware gating mechanism to facilitate knowl- edge transfer between domains [ 33]. C2DSR leverages graph neural networks that models cross-domain graphs to improve the perfor- mance [ 2]. Similarly, MIFN uses a knowledge graph to enhance CDSR [ 32]. MAN [ 27] harnesses additional supervision signals and domain-aware blocks to disentangle information from different do- mains [ 27]. SyNCRec proposes a cooperative learning framework and utilizes additional domain-specific blocks to advance CDSR [ 38]. Although incorporating additional components can be effective, such approaches often overlook the self-attention module in the backbone transformer, which is inherently a powerful tool for cap- turing fine-grained correlations among heterogeneous behaviors on its own [35, 46, 53]." 2505.20227v1,"Measure Domain's Gap: A Similar Domain Selection Principle for Multi-Domain Recommendation","Yi Wen, Yue Liu, Derong Xu, Huishi Luo, Pengyue Jia, Yiqing Wu, Siwei Wang, Ke Liang, Maolin Wang, Yiqi Wang, Fuzhen Zhuang, Xiangyu Zhao","Multi-Domain Recommendation (MDR) achieves the desirable recommendation performance by effectively utilizing the transfer information across different domains. Despite the great success, most existing MDR methods adopt a single structure to transfer complex domain-shared knowledge. However, the beneficial transferring information should vary across different domains. When there is knowledge conflict between domains or a domain is of poor quality, unselectively leveraging information from all domains will lead to a serious Negative Transfer Problem (NTP). Therefore, how to effectively model the complex transfer relationships between domains to avoid NTP is still a direction worth exploring. To address these issues, we propose a simple and dynamic Similar Domain Selection Principle (SDSP) for multi-domain recommendation in this paper. SDSP presents the initial exploration of selecting suitable domain knowledge for each domain to alleviate NTP. Specifically, we propose a novel prototype-based domain distance measure to effectively model the complexity relationship between domains. Thereafter, the proposed SDSP can dynamically find similar domains for each domain based on the supervised signals of the domain metrics and the unsupervised distance measure from the learned domain prototype. We emphasize that SDSP is a lightweight method that can be incorporated with existing MDR methods for better performance while not introducing excessive time overheads. To the best of our knowledge, it is the first solution that can explicitly measure domain-level gaps and dynamically select appropriate domains in the MDR field. Extensive experiments on three datasets demonstrate the effectiveness of our proposed method.",cs.IR,2025-05-26T17:07:31+00:00,2025-05-26T17:07:31+00:00,http://arxiv.org/abs/2505.20227v1,http://arxiv.org/abs/2505.20227v1,2025-05-26 17:07:31+00:00,"% \subsection{Single-Domain Recommendation} % Despite the great success achieved, most of the above approaches are unable to process the multi-domain data, which are often encountered in real-world applications due to diverse user behavior patterns and complex business platform structures. \subsection{Multi-Domain Recommendation} Recommendation systems (RS) \cite{gu2021self,lqd1,lqd2, hyp1, hyp2, liuyue_Rec1,liuyue_rec2,wang2019ngcf-graph-rec} aims to analyze user interactions to uncover interests, becoming a key research focus in recent years. However, classical single-domain approaches are unable to process the multi-domain data, which are often encountered in real-world applications. As a result, abundant Multi-Domain Recommendation (MDR) methods have been proposed \cite{luo2023mamdr-multi-domain-rec, chen2021user-cross-domain-rec,chang2023pepnet-multi-domain-multi-task-rec, fu2023unified-llm-multi-domain-rec, li2022gromov-cross-domain-rec, fan2023adversarial-cross-domain-rec,gao2023autotransfer-cross-domain-rec}, leveraging shared knowledge across domains to address challenges such as cold-start issues \cite{wang2017item,zhu2024m,jin2022multi}. These methods can be broadly categorized into Shared-Specific (SS) based methods and Dynamic Weight (DW) based methods, depending on how they model inter-domain relationships. SS-based methods\cite{tang2020ple-multi-task-rec,tong2024mdap,ning2023multi-multi-domain-graph-rec}, such as STAR \cite{sheng2021star-multi-domain-rec} employ a shared-bottom architecture with domain-specific towers to model features. While DW-based methods \cite{yan2022apg-rec,bian2020can-rec, zhang2022m2m-multi-domain-multi-task-rec} often use scenario-sensitive features to generate weighted parameters for the network. However, DW-based methods rely on manually selected features and hence are less generalizable when new scenarios are encountered. Furthermore, most SS-based approaches \cite{wang2023plate-multi-domain-rec,wang2024diff-cold-multi-domain-rec} employ a single domain-shared module, making it difficult to transfer complex multi-domain knowledge. To tackle these issues, SDSP proposes a novel domain selection module that can decouple the current single domain-shared without additional feature engineering. \subsection{Selection Problem} \label{select} Discrete selection problems are generally more challenging than continuous optimization problems. This is because discrete choices involve combinatorial complexity, where the solution space is not smooth or continuous. Thus, traditional optimization techniques like gradient-based methods cannot be directly applied, requiring specialized algorithms to explore the solution space efficiently. In some fields, several attempts \cite{zhu2022user,zhou2022filter} have been proposed to solve different selection problems. Standley et al. \cite{standley2020tasks-multi-task} propose a group framework for choosing the suitable tasks to train together in the multi-task field. In the multi-modal field, He et al. \cite{he2024efficient-multi-modal} proposes a greedy modality selection algorithm via submodular maximization. In the cross-domain field, Park et al. \cite{park2024pacer-cross-domain-rec} devise a weight factor to control the negative transfer of the multi-domain part. However, the greedy-based search algorithm incurs additional overhead and is not applicable in the time-sensitive field. Besides, a single gating mechanism doesn't apply to the complex multi-domain field. To address these issues, SDSP proposes a dynamic selection method to tackle the selection problem efficiently.","% \subsection{Single-Domain Recommendation} % Despite the great success achieved, most of the above approaches are unable to process the multi-domain data, which are often encountered in real-world applications due to diverse user behavior patterns and complex business platform structures. \subsection{Multi-Domain Recommendation} Recommendation systems (RS) \cite{gu2021self,lqd1,lqd2, hyp1, hyp2, liuyue_Rec1,liuyue_rec2,wang2019ngcf-graph-rec} aims to analyze user interactions to uncover interests, becoming a key research focus in recent years. However, classical single-domain approaches are unable to process the multi-domain data, which are often encountered in real-world applications. As a result, abundant Multi-Domain Recommendation (MDR) methods have been proposed \cite{luo2023mamdr-multi-domain-rec, chen2021user-cross-domain-rec,chang2023pepnet-multi-domain-multi-task-rec, fu2023unified-llm-multi-domain-rec, li2022gromov-cross-domain-rec, fan2023adversarial-cross-domain-rec,gao2023autotransfer-cross-domain-rec}, leveraging shared knowledge across domains to address challenges such as cold-start issues \cite{wang2017item,zhu2024m,jin2022multi}. These methods can be broadly categorized into Shared-Specific (SS) based methods and Dynamic Weight (DW) based methods, depending on how they model inter-domain relationships. SS-based methods\cite{tang2020ple-multi-task-rec,tong2024mdap,ning2023multi-multi-domain-graph-rec}, such as STAR \cite{sheng2021star-multi-domain-rec} employ a shared-bottom architecture with domain-specific towers to model features. While DW-based methods \cite{yan2022apg-rec,bian2020can-rec, zhang2022m2m-multi-domain-multi-task-rec} often use scenario-sensitive features to generate weighted parameters for the network. However, DW-based methods rely on manually selected features and hence are less generalizable when new scenarios are encountered. Furthermore, most SS-based approaches \cite{wang2023plate-multi-domain-rec,wang2024diff-cold-multi-domain-rec} employ a single domain-shared module, making it difficult to transfer complex multi-domain knowledge. To tackle these issues, SDSP proposes a novel domain selection module that can decouple the current single domain-shared without additional feature engineering. \subsection{Selection Problem} Discrete selection problems are generally more challenging than continuous optimization problems. This is because discrete choices involve combinatorial complexity, where the solution space is not smooth or continuous. Thus, traditional optimization techniques like gradient-based methods cannot be directly applied, requiring specialized algorithms to explore the solution space efficiently. In some fields, several attempts \cite{zhu2022user,zhou2022filter} have been proposed to solve different selection problems. Standley et al. \cite{standley2020tasks-multi-task} propose a group framework for choosing the suitable tasks to train together in the multi-task field. In the multi-modal field, He et al. \cite{he2024efficient-multi-modal} proposes a greedy modality selection algorithm via submodular maximization. In the cross-domain field, Park et al. \cite{park2024pacer-cross-domain-rec} devise a weight factor to control the negative transfer of the multi-domain part. However, the greedy-based search algorithm incurs additional overhead and is not applicable in the time-sensitive field. Besides, a single gating mechanism doesn't apply to the complex multi-domain field. To address these issues, SDSP proposes a dynamic selection method to tackle the selection problem efficiently.","6.1 Multi-Domain Recommendation Recommendation systems (RS) [ 14,19,20,31,32,34,35,57] aims to analyze user interactions to uncover interests, becoming a key research focus in recent years. However, classical single-domain approaches are unable to process the multi-domain data, which are often encountered in real-world applications. As a result, abundant Multi-Domain Recommendation (MDR) methods have been pro- posed [ 5,6,9,10,13,28,37], leveraging shared knowledge across domains to address challenges such as cold-start issues [ 21,56,72]. These methods can be broadly categorized into Shared-Specific (SS) based methods and Dynamic Weight (DW) based methods, de- pending on how they model inter-domain relationships. SS-based methods[ 41,50,51], such as STAR [ 45] employ a shared-bottom architecture with domain-specific towers to model features. WhileDW-based methods [ 3,61,65] often use scenario-sensitive features to generate weighted parameters for the network. However, DW-based methods rely on manually selected features and hence are less generalizable when new scenarios are encoun- tered. Furthermore, most SS-based approaches [ 58,59] employ a sin- gle domain-shared module, making it difficult to transfer complex multi-domain knowledge. To tackle these issues, SDSP proposes a novel domain selection module that can decouple the current single domain-shared without additional feature engineering. 6.2 Selection Problem Discrete selection problems are generally more challenging than continuous optimization problems. This is because discrete choices involve combinatorial complexity, where the solution space is not smooth or continuous. Thus, traditional optimization techniques like gradient-based methods cannot be directly applied, requiring specialized algorithms to explore the solution space efficiently. In some fields, several attempts [ 70,71] have been proposed to solve different selection problems. Standley et al. [ 47] propose a group framework for choosing the suitable tasks to train together in the multi-task field. In the multi-modal field, He et al. [ 18] proposes a greedy modality selection algorithm via submodular maximiza- tion. In the cross-domain field, Park et al. [ 42] devise a weight factor to control the negative transfer of the multi-domain part. However, the greedy-based search algorithm incurs additional overhead and is not applicable in the time-sensitive field. Besides, a single gating mechanism doesn’t apply to the complex multi- domain field. To address these issues, SDSP proposes a dynamic selection method to tackle the selection problem efficiently." 2505.19356v1,"Optimized Text Embedding Models and Benchmarks for Amharic Passage Retrieval","Kidist Amde Mekonnen, Yosef Worku Alemneh, Maarten de Rijke","Neural retrieval methods using transformer-based pre-trained language models have advanced multilingual and cross-lingual retrieval. However, their effectiveness for low-resource, morphologically rich languages such as Amharic remains underexplored due to data scarcity and suboptimal tokenization. We address this gap by introducing Amharic-specific dense retrieval models based on pre-trained Amharic BERT and RoBERTa backbones. Our proposed RoBERTa-Base-Amharic-Embed model (110M parameters) achieves a 17.6% relative improvement in MRR@10 and a 9.86% gain in Recall@10 over the strongest multilingual baseline, Arctic Embed 2.0 (568M parameters). More compact variants, such as RoBERTa-Medium-Amharic-Embed (42M), remain competitive while being over 13x smaller. Additionally, we train a ColBERT-based late interaction retrieval model that achieves the highest MRR@10 score (0.843) among all evaluated models. We benchmark our proposed models against both sparse and dense retrieval baselines to systematically assess retrieval effectiveness in Amharic. Our analysis highlights key challenges in low-resource settings and underscores the importance of language-specific adaptation. To foster future research in low-resource IR, we publicly release our dataset, codebase, and trained models at https://github.com/kidist-amde/amharic-ir-benchmarks.","cs.IR, cs.AI, cs.CL, cs.LG, 68T50 (Primary), 68T05 (Secondary), H.3.3, H.3.1, I.2.7",2025-05-25T23:06:20+00:00,2025-05-25T23:06:20+00:00,http://arxiv.org/abs/2505.19356v1,http://arxiv.org/abs/2505.19356v1,2025-05-25 23:06:20+00:00,"\label{related} Retrieval systems commonly adopt a two-stage pipeline to optimize efficiency and effectiveness: \begin{enumerate*}[label=(\roman*)] \item First-stage retrieval efficiently retrieves candidate documents using lightweight methods such as sparse or dense retrieval. \item Re-ranking refines the results using computationally more intensive models, such as cross-encoders. \end{enumerate*} \heading{Sparse retrieval.} Sparse retrieval is fundamental in IR, with BM25 known for its efficiency, interpretability, and cross-domain robustness~\cite{Robertson2009ThePR}. However, it struggles with vocabulary mismatch and morphological variability, challenges that are particularly acute in morphologically rich languages like Amharic. \Ac{LSR} methods~\cite{formal2021splade, formal2021splade-v2} attempt to mitigate these issues by dynamically weighting and expanding terms, thereby enhancing relevance while maintaining interpretability~\cite{dai2020context}. However, LSR faces limitations in low-resource settings due to the scarcity of annotated data, dialectal diversity, and morphological complexity (e.g., Amharic's templatic morphology), which necessitate subword-aware tokenization or morphological analyzers that are often unavailable. \heading{Dense retrieval.} Dense retrieval encodes queries and documents into a shared semantic space using neural network encoders, enabling efficient retrieval via \ac{ANN} search based on embedding similarity~\cite{johnson2019billion, karpukhin-etal-2020-dense, Xiong2020ApproximateNN}. While it helps mitigate lexical mismatch, its effectiveness in low-resource languages is hindered by the need for large-scale labeled training data. Multilingual models such as mBERT~\cite{mbert}, XLM-R~\cite{XLM-R}, and African language-specific models like SERENGETI~\cite{SERENGETI} and AfriBERTa~\cite{AfriBERTa} partially address data scarcity through cross-lingual pretraining. However, their effectiveness in morphologically complex languages like Amharic has not been thoroughly investigated. Recent advances in unsupervised contrastive learning, such as Contriever~\cite{Izacard2021UnsupervisedDI}, have demonstrated strong zero-shot and multilingual retrieval performance, especially in cross-lingual transfer scenarios. Nonetheless, their effectiveness in morphologically complex languages like Amharic remains unexplored, as current evaluations do not account for challenges arising from root-based and templatic morphologies. Beyond data scarcity, retrieval performance is further constrained by morphological complexity and tokenization challenges. Amharic’s templatic morphology often causes standard subword tokenizers to over-segment words into non-morphemic units, leading to fragmented representations that obscure semantic relationships. Broader research on multilingual tokenization quality~\cite{rust-etal-2021-good} shows that excessive segmentation in morphologically rich languages introduces noise into subword representations, degrading performance in downstream tasks. Despite recent advances in multilingual dense retrieval, state-of-the-art models such as Arctic Embed 2.0~\cite{yu2024arctic} and Multilingual E5~\cite{wang2024multilingual}, which topped the \textit{MTEB Embedding Leaderboard}\footnote{\url{https://huggingface.co/spaces/mteb/leaderboard}} at the time of our study, continue to struggle with highly inflected languages. These models often produce suboptimal tokenizations, fragmented subword representations, and inefficient embeddings, ultimately limiting their retrieval effectiveness. Our empirical findings in Section~\ref{sec:Fertility} illustrate the extent to which tokenization errors impair retrieval performance in Amharic. % Added it briefly on our motivation section on efforts made on Amharic NLP % \yos{There was one other Encoder base model pretrained on Amharic data, amRoBERTa (https://arxiv.org/abs/2011.01154), we ended up not using it because RoBERTa Medium Amharic (42M) outperformed it on key benchmarks while having 1/10 of its param count. Furthermore, that model's tokenizer had an abnormally large vocabulary size of 520000 (vs 32000 for our model, and 50265 in the original RoBERTa) , inflating its parameter count to 443M even though it has just 6 layers.} \heading{Bridging the gap in Amharic IR.} Retrieval systems are primarily optimized for high-resource languages, exacerbating performance disparities in low-resource settings like Amharic~\cite{nigatu2024searched}. Prior research in Amharic \ac{IR} has explored pre-trained embeddings~\cite[Word2Vec, fastText, AmRoBERTa,][]{destaw-etal-2021-development}, morphological tools~\cite[e.g., annotation frameworks, WordNet-based query expansion,][]{yeshambel2021morphologically}, and cross-lingual transfer via multilingual models~\cite[AfriBERTa,][]{azime2024enhancing}. However, systematic evaluations of sparse and dense retrieval architectures remain absent, making principled comparisons difficult and leaving the effectiveness of different paradigms in Amharic IR largely unexamined. \citet{2AIRTC} introduce 2AIRTC, a TREC-style test collection for standardized Amharic IR evaluation, but it lacks baseline retrieval benchmarks and complete relevance judgments, making recall-based assessments unreliable. To ensure robust evaluation, we conduct our main experiments on the Amharic Passage Retrieval Dataset, which we derive by preprocessing the Amharic News Text Classification Dataset (AMNEWS)~\cite{am_news_data} into MS MARCO-style query-passage pairs (see Section~\ref{exp}). A detailed analysis of 2AIRTC, its limitations, and our supplementary evaluations on this dataset is provided in Appendix~\ref{sec:appendix}. To address these gaps, our work introduces Amharic-specific retrieval models that incorporate both strong and compact encoder backbones (Section~\ref{sec:amharic_embedding_models}), optimized using contrastive training to better handle Amharic’s morphological complexity. We also develop and evaluate a late-interaction ColBERT model tailored for Amharic, and benchmark both sparse and dense retrieval architectures. This enables rigorous, reproducible comparisons across retrieval paradigms.","Retrieval systems commonly adopt a two-stage pipeline to optimize efficiency and effectiveness: \begin{enumerate*}[label=(\roman*)] \item First-stage retrieval efficiently retrieves candidate documents using lightweight methods such as sparse or dense retrieval. \item Re-ranking refines the results using computationally more intensive models, such as cross-encoders. \end{enumerate*} \heading{Sparse retrieval.} Sparse retrieval is fundamental in IR, with BM25 known for its efficiency, interpretability, and cross-domain robustness~\cite{Robertson2009ThePR}. However, it struggles with vocabulary mismatch and morphological variability, challenges that are particularly acute in morphologically rich languages like Amharic. \Ac{LSR} methods~\cite{formal2021splade, formal2021splade-v2} attempt to mitigate these issues by dynamically weighting and expanding terms, thereby enhancing relevance while maintaining interpretability~\cite{dai2020context}. However, LSR faces limitations in low-resource settings due to the scarcity of annotated data, dialectal diversity, and morphological complexity (e.g., Amharic's templatic morphology), which necessitate subword-aware tokenization or morphological analyzers that are often unavailable. \heading{Dense retrieval.} Dense retrieval encodes queries and documents into a shared semantic space using neural network encoders, enabling efficient retrieval via \ac{ANN} search based on embedding similarity~\cite{johnson2019billion, karpukhin-etal-2020-dense, Xiong2020ApproximateNN}. While it helps mitigate lexical mismatch, its effectiveness in low-resource languages is hindered by the need for large-scale labeled training data. Multilingual models such as mBERT~\cite{mbert}, XLM-R~\cite{XLM-R}, and African language-specific models like SERENGETI~\cite{SERENGETI} and AfriBERTa~\cite{AfriBERTa} partially address data scarcity through cross-lingual pretraining. However, their effectiveness in morphologically complex languages like Amharic has not been thoroughly investigated. Recent advances in unsupervised contrastive learning, such as Contriever~\cite{Izacard2021UnsupervisedDI}, have demonstrated strong zero-shot and multilingual retrieval performance, especially in cross-lingual transfer scenarios. Nonetheless, their effectiveness in morphologically complex languages like Amharic remains unexplored, as current evaluations do not account for challenges arising from root-based and templatic morphologies. Beyond data scarcity, retrieval performance is further constrained by morphological complexity and tokenization challenges. Amharic’s templatic morphology often causes standard subword tokenizers to over-segment words into non-morphemic units, leading to fragmented representations that obscure semantic relationships. Broader research on multilingual tokenization quality~\cite{rust-etal-2021-good} shows that excessive segmentation in morphologically rich languages introduces noise into subword representations, degrading performance in downstream tasks. Despite recent advances in multilingual dense retrieval, state-of-the-art models such as Arctic Embed 2.0~\cite{yu2024arctic} and Multilingual E5~\cite{wang2024multilingual}, which topped the \textit{MTEB Embedding Leaderboard}\footnote{\url{https://huggingface.co/spaces/mteb/leaderboard}} at the time of our study, continue to struggle with highly inflected languages. These models often produce suboptimal tokenizations, fragmented subword representations, and inefficient embeddings, ultimately limiting their retrieval effectiveness. Our empirical findings in Section~\ref{sec:Fertility} illustrate the extent to which tokenization errors impair retrieval performance in Amharic. % Added it briefly on our motivation section on efforts made on Amharic NLP % \yos{There was one other Encoder base model pretrained on Amharic data, amRoBERTa (https://arxiv.org/abs/2011.01154), we ended up not using it because RoBERTa Medium Amharic (42M) outperformed it on key benchmarks while having 1/10 of its param count. Furthermore, that model's tokenizer had an abnormally large vocabulary size of 520000 (vs 32000 for our model, and 50265 in the original RoBERTa) , inflating its parameter count to 443M even though it has just 6 layers.} \heading{Bridging the gap in Amharic IR.} Retrieval systems are primarily optimized for high-resource languages, exacerbating performance disparities in low-resource settings like Amharic~\cite{nigatu2024searched}. Prior research in Amharic \ac{IR} has explored pre-trained embeddings~\cite[Word2Vec, fastText, AmRoBERTa,][]{destaw-etal-2021-development}, morphological tools~\cite[e.g., annotation frameworks, WordNet-based query expansion,][]{yeshambel2021morphologically}, and cross-lingual transfer via multilingual models~\cite[AfriBERTa,][]{azime2024enhancing}. However, systematic evaluations of sparse and dense retrieval architectures remain absent, making principled comparisons difficult and leaving the effectiveness of different paradigms in Amharic IR largely unexamined. \citet{2AIRTC} introduce 2AIRTC, a TREC-style test collection for standardized Amharic IR evaluation, but it lacks baseline retrieval benchmarks and complete relevance judgments, making recall-based assessments unreliable. To ensure robust evaluation, we conduct our main experiments on the Amharic Passage Retrieval Dataset, which we derive by preprocessing the Amharic News Text Classification Dataset (AMNEWS)~\cite{am_news_data} into MS MARCO-style query-passage pairs (see Section~\ref{exp}). A detailed analysis of 2AIRTC, its limitations, and our supplementary evaluations on this dataset is provided in Appendix~\ref{sec:appendix}. To address these gaps, our work introduces Amharic-specific retrieval models that incorporate both strong and compact encoder backbones (Section~\ref{sec:amharic_embedding_models}), optimized using contrastive training to better handle Amharic’s morphological complexity. We also develop and evaluate a late-interaction ColBERT model tailored for Amharic, and benchmark both sparse and dense retrieval architectures. This enables rigorous, reproducible comparisons across retrieval paradigms.","Retrieval systems commonly adopt a two-stage pipeline to optimize efficiency and effectiveness: (i) First-stage retrieval efficiently retrieves candi- date documents using lightweight methods such as sparse or dense retrieval. (ii) Re-ranking refines the results using computationally more intensive models, such as cross-encoders. Sparse retrieval. Sparse retrieval is fundamental in IR, with BM25 known for its efficiency, inter- pretability, and cross-domain robustness (Robert- son and Zaragoza, 2009). However, it strug- gles with vocabulary mismatch and morphological variability, challenges that are particularly acute in morphologically rich languages like Amharic. Learned sparse retrieval ( LSR) methods (Formal et al., 2021b,a) attempt to mitigate these issues by dynamically weighting and expanding terms, thereby enhancing relevance while maintaining in- terpretability (Dai and Callan, 2020). However, LSR faces limitations in low-resource settings due to the scarcity of annotated data, dialectal diversity, and morphological complexity (e.g., Amharic’s templatic morphology), which necessitate subword- aware tokenization or morphological analyzers that are often unavailable. Dense retrieval. Dense retrieval encodes queries and documents into a shared semantic space us- ing neural network encoders, enabling efficient re- trieval via approximate nearest neighbor ( ANN ) search based on embedding similarity (Johnson et al., 2019; Karpukhin et al., 2020; Xiong et al., 2021). While it helps mitigate lexical mismatch, its effectiveness in low-resource languages is hindered by the need for large-scale labeled training data. Multilingual models such as mBERT (Pires et al., 2019), XLM-R (Conneau et al., 2020), and African language-specific models like SERENGETI (Ade- bara et al., 2023) and AfriBERTa (Ogueji et al., 2021) partially address data scarcity through cross- lingual pretraining. However, their effectiveness in morphologically complex languages like Amharic has not been thoroughly investigated.Recent advances in unsupervised contrastive learning, such as Contriever (Izacard et al., 2022), have demonstrated strong zero-shot and multilin- gual retrieval performance, especially in cross- lingual transfer scenarios. Nonetheless, their ef- fectiveness in morphologically complex languages like Amharic remains unexplored, as current evalu- ations do not account for challenges arising from root-based and templatic morphologies. Beyond data scarcity, retrieval performance is further constrained by morphological complexity and tokenization challenges. Amharic’s templatic morphology often causes standard subword tok- enizers to over-segment words into non-morphemic units, leading to fragmented representations that obscure semantic relationships. Broader research on multilingual tokenization quality (Rust et al., 2021) shows that excessive segmentation in mor- phologically rich languages introduces noise into subword representations, degrading performance in downstream tasks. Despite recent advances in multilingual dense retrieval, state-of-the-art models such as Arctic Em- bed 2.0 (Yu et al., 2024) and Multilingual E5 (Wang et al., 2024), which topped the MTEB Embedding Leaderboard4at the time of our study, continue to struggle with highly inflected languages. These models often produce suboptimal tokenizations, fragmented subword representations, and ineffi- cient embeddings, ultimately limiting their retrieval effectiveness. Our empirical findings in Section 6.3 illustrate the extent to which tokenization errors impair retrieval performance in Amharic. Bridging the gap in Amharic IR. Retrieval sys- tems are primarily optimized for high-resource lan- guages, exacerbating performance disparities in low-resource settings like Amharic (Nigatu and Raji, 2024). Prior research in Amharic IRhas ex- plored pre-trained embeddings (Word2Vec, fast- Text, AmRoBERTa, Belay et al., 2021), morpholog- ical tools (e.g., annotation frameworks, WordNet- based query expansion, Yeshambel et al., 2021), and cross-lingual transfer via multilingual mod- els (AfriBERTa, Azime et al., 2024a). However, systematic evaluations of sparse and dense retrieval architectures remain absent, making principled comparisons difficult and leaving the effectiveness of different paradigms in Amharic IR largely unex- amined. 4https://huggingface.co/spaces/mteb/ leaderboard Yeshambel et al. (2020) introduce 2AIRTC, a TREC-style test collection for standardized Amharic IR evaluation, but it lacks baseline re- trieval benchmarks and complete relevance judg- ments, making recall-based assessments unreliable. To ensure robust evaluation, we conduct our main experiments on the Amharic Passage Retrieval Dataset, which we derive by preprocessing the Amharic News Text Classification Dataset (AM- NEWS) (Azime and Mohammed, 2021) into MS MARCO-style query-passage pairs (see Section 5). A detailed analysis of 2AIRTC, its limitations, and our supplementary evaluations on this dataset is provided in Appendix A. To address these gaps, our work introduces Amharic-specific retrieval models that incorporate both strong and compact encoder backbones (Sec- tion 4.2), optimized using contrastive training to better handle Amharic’s morphological complexity. We also develop and evaluate a late-interaction Col- BERT model tailored for Amharic, and benchmark both sparse and dense retrieval architectures. This enables rigorous, reproducible comparisons across retrieval paradigms." 2505.19307v1,"Aligning Web Query Generation with Ranking Objectives via Direct Preference Optimization","João Coelho, Bruno Martins, João Magalhães, Chenyan Xiong","Neural retrieval models excel in Web search, but their training requires substantial amounts of labeled query-document pairs, which are costly to obtain. With the widespread availability of Web document collections like ClueWeb22, synthetic queries generated by large language models offer a scalable alternative. Still, synthetic training queries often vary in quality, which leads to suboptimal downstream retrieval performance. Existing methods typically filter out noisy query-document pairs based on signals from an external re-ranker. In contrast, we propose a framework that leverages Direct Preference Optimization (DPO) to integrate ranking signals into the query generation process, aiming to directly optimize the model towards generating high-quality queries that maximize downstream retrieval effectiveness. Experiments show higher ranker-assessed relevance between query-document pairs after DPO, leading to stronger downstream performance on the MS~MARCO benchmark when compared to baseline models trained with synthetic data.",cs.IR,2025-05-25T20:34:12+00:00,2025-05-25T20:34:12+00:00,http://arxiv.org/abs/2505.19307v1,http://arxiv.org/abs/2505.19307v1,2025-05-25 20:34:12+00:00,"Transformer-based bi-encoders are the standard architecture for dense retrieval~\citep{DBLP:conf/iclr/XiongXLTLBAO21,DBLP:conf/emnlp/KarpukhinOMLWEC20}, typically trained with contrastive objectives and hard negative mining strategies such as ANCE~\citep{DBLP:conf/iclr/XiongXLTLBAO21}. Retrieval-aligned pre-training on in-domain corpora is also commonly adopted to improve retrieval effectiveness~\citep{DBLP:conf/acl/GaoC22, lu-etal-2021-less, DBLP:conf/emnlp/XiaoLSC22, DBLP:conf/acl/LeeCT19, DBLP:conf/sigir/MaGZFC22, DBLP:journals/corr/abs-2401-11248}. % Unsupervised dense retrieval methods still leverage contrastive training methodologies, but without relying on human-annotated labels. Multiple approaches have been explored to obtain positive samples, for instance, based on anchor text~\citep{DBLP:conf/sigir/XieLX23}, sampling positive spans from a single document~\citep{DBLP:conf/acl/LeeCT19, DBLP:journals/tmlr/IzacardCHRBJG22}, or heuristically mining pairs from structured documents (e.g., title-paragraph)~\citep{DBLP:journals/corr/abs-2212-03533}. The community has explored generating synthetic queries from documents for model training. Approaches such as Doc2Query~\citep{DBLP:journals/corr/abs-1904-08375} and DocT5Query~\citep{nogueira2019doc2query} train a lightweight Transformer~\citep{DBLP:conf/nips/VaswaniSPUJGKP17} on labeled query–document pairs to expand document representations. Subsequent work~\citep{DBLP:conf/ecir/GospodinovMM23} demonstrates that filtering out hallucinated queries can further enhance downstream performance. More recent methods leverage LLMs. For instance, InPars~\citep{DBLP:journals/corr/abs-2202-05144} employs few-shot prompting with filtering based on generation probability, while InPars-v2~\citep{DBLP:journals/corr/abs-2301-01820} leverages a supervised ranker for query filtering. Similarly, Promptagator~\citep{DBLP:conf/iclr/DaiZMLNLBGHC23} uses task-specific prompts to perform few-shot query generation. Building on these approaches, the Gecko model~\citep{DBLP:journals/corr/abs-2403-20327} iteratively refines synthetic queries through a process involving retrieval, re-ranking, positive relabeling, and hard negative sampling, thereby improving the quality of the training data. %\cx{not very relevant?} % Other lines of work explore synthesizing documents instead of queries. Methods such as Syntriever~\citep{kim2025syntriever} generate synthetic documents (both positive and negative) conditioned on input queries, framing this as knowledge distillation from LLMs. Other approaches generate fully synthetic query-document pairs at scale, enabling retrieval-oriented pre-training~\citep{DBLP:conf/acl/WangYHYMW24}. Recent work has explored reinforcement learning to improve synthetic query generation. Token-level Proximal Policy Optimization (TPPO)~\citep{DBLP:journals/corr/abs-2411-00722, DBLP:journals/corr/SchulmanWDRK17} has been applied to query suggestion tasks, optimizing generation based on token-level rewards derived from user interaction histories. While this setting targets interactive scenarios rather than offline generation from documents, it highlights the promise of reinforcement learning for enhancing query quality. % todo: add token-level PPO paper","Transformer-based bi-encoders are the standard architecture for dense retrieval~\citep{DBLP:conf/iclr/XiongXLTLBAO21,DBLP:conf/emnlp/KarpukhinOMLWEC20}, typically trained with contrastive objectives and hard negative mining strategies such as ANCE~\citep{DBLP:conf/iclr/XiongXLTLBAO21}. Retrieval-aligned pre-training on in-domain corpora is also commonly adopted to improve retrieval effectiveness~\citep{DBLP:conf/acl/GaoC22, lu-etal-2021-less, DBLP:conf/emnlp/XiaoLSC22, DBLP:conf/acl/LeeCT19, DBLP:conf/sigir/MaGZFC22, DBLP:journals/corr/abs-2401-11248}. % Unsupervised dense retrieval methods still leverage contrastive training methodologies, but without relying on human-annotated labels. Multiple approaches have been explored to obtain positive samples, for instance, based on anchor text~\citep{DBLP:conf/sigir/XieLX23}, sampling positive spans from a single document~\citep{DBLP:conf/acl/LeeCT19, DBLP:journals/tmlr/IzacardCHRBJG22}, or heuristically mining pairs from structured documents (e.g., title-paragraph)~\citep{DBLP:journals/corr/abs-2212-03533}. The community has explored generating synthetic queries from documents for model training. Approaches such as Doc2Query~\citep{DBLP:journals/corr/abs-1904-08375} and DocT5Query~\citep{nogueira2019doc2query} train a lightweight Transformer~\citep{DBLP:conf/nips/VaswaniSPUJGKP17} on labeled query–document pairs to expand document representations. Subsequent work~\citep{DBLP:conf/ecir/GospodinovMM23} demonstrates that filtering out hallucinated queries can further enhance downstream performance. More recent methods leverage LLMs. For instance, InPars~\citep{DBLP:journals/corr/abs-2202-05144} employs few-shot prompting with filtering based on generation probability, while InPars-v2~\citep{DBLP:journals/corr/abs-2301-01820} leverages a supervised ranker for query filtering. Similarly, Promptagator~\citep{DBLP:conf/iclr/DaiZMLNLBGHC23} uses task-specific prompts to perform few-shot query generation. Building on these approaches, the Gecko model~\citep{DBLP:journals/corr/abs-2403-20327} iteratively refines synthetic queries through a process involving retrieval, re-ranking, positive relabeling, and hard negative sampling, thereby improving the quality of the training data. %\cx{not very relevant?} % Other lines of work explore synthesizing documents instead of queries. Methods such as Syntriever~\citep{kim2025syntriever} generate synthetic documents (both positive and negative) conditioned on input queries, framing this as knowledge distillation from LLMs. Other approaches generate fully synthetic query-document pairs at scale, enabling retrieval-oriented pre-training~\citep{DBLP:conf/acl/WangYHYMW24}. Recent work has explored reinforcement learning to improve synthetic query generation. Token-level Proximal Policy Optimization (TPPO)~\citep{DBLP:journals/corr/abs-2411-00722, DBLP:journals/corr/SchulmanWDRK17} has been applied to query suggestion tasks, optimizing generation based on token-level rewards derived from user interaction histories. While this setting targets interactive scenarios rather than offline generation from documents, it highlights the promise of reinforcement learning for enhancing query quality. % todo: add token-level PPO paper","Transformer-based bi-encoders are the standard architecture for dense retrieval [ 16,38], typically trained with contrastive objectives and hard negative mining strategies such as ANCE [ 38]. Retrieval- aligned pre-training on in-domain corpora is also commonly adopted to improve retrieval effectiveness [9, 19–22, 36]. The community has explored generating synthetic queries from documents for model training. Approaches such as Doc2Query [ 25] and DocT5Query [ 24] train a lightweight Transformer [ 33] on la- beled query–document pairs to expand document representations. Subsequent work [ 10] demonstrates that filtering out hallucinated queries can further enhance downstream performance. More recent methods leverage LLMs. For instance, InPars [ 2] employs few-shot prompting with filtering based on generation probability, while InPars-v2 [ 15] leverages a supervised ranker for query filtering. Similarly, Promptagator [6] uses task-specific prompts to perform few-shot query generation. Building on these approaches, the Gecko model [ 18] iteratively refines synthetic queries through a process in- volving retrieval, re-ranking, positive relabeling, and hard negative sampling, thereby improving the quality of the training data. Recent work has explored reinforcement learning to improve synthetic query generation. Token-level Proximal Policy Optimiza- tion (TPPO) [ 26,30] has been applied to query suggestion tasks, optimizing generation based on token-level rewards derived from user interaction histories. While this setting targets interactive sce- narios rather than offline generation from documents, it highlights the promise of reinforcement learning for enhancing query quality." 2505.17507v1,"Benchmarking Recommendation, Classification, and Tracing Based on Hugging Face Knowledge Graph","Qiaosheng Chen, Kaijia Huang, Xiao Zhou, Weiqing Luo, Yuanning Cui, Gong Cheng","The rapid growth of open source machine learning (ML) resources, such as models and datasets, has accelerated IR research. However, existing platforms like Hugging Face do not explicitly utilize structured representations, limiting advanced queries and analyses such as tracing model evolution and recommending relevant datasets. To fill the gap, we construct HuggingKG, the first large-scale knowledge graph built from the Hugging Face community for ML resource management. With 2.6 million nodes and 6.2 million edges, HuggingKG captures domain-specific relations and rich textual attributes. It enables us to further present HuggingBench, a multi-task benchmark with three novel test collections for IR tasks including resource recommendation, classification, and tracing. Our experiments reveal unique characteristics of HuggingKG and the derived tasks. Both resources are publicly available, expected to advance research in open source resource sharing and management.",cs.IR,2025-05-23T06:00:20+00:00,2025-05-23T06:00:20+00:00,http://arxiv.org/abs/2505.17507v1,http://arxiv.org/abs/2505.17507v1,2025-05-23 06:00:20+00:00,"\label{sec:related-work} % In this section, we review existing works on KGs (Section~\ref{sec:related-work-kg}) and KG-based benchmarks~(Section~\ref{sec:related-work-dataset}) in the domain of open source resource management. \textbf{KGs for Resource Management.} %\label{sec:related-work-kg} KGs have been extensively used to represent and analyze complex relationships in various domains, including open source resource management. Previous works such as DEKR~\cite{DEKR} and MLTaskKG~\cite{tse23} have constructed KGs to support recommendation tasks by capturing relationships among ML resources. DEKR~\cite{DEKR} primarily relies on description enhancement for ML method recommendation. MLTaskKG~\cite{tse23} constructs an AI task-model KG by integrating static data to support task-oriented ML/DL library recommendation. However, both approaches focus on static attributes and a narrow set of relations, failing to capture dynamic user interactions and inter-\texttt{Model} relations. In contrast, as shown in Table~\ref{tab:kg-bench-comparison}, our proposed $\mathsf{HuggingKG}$ is built on the rich metadata provided by Hugging Face, offering a large-scale KG with a more extensive set of relations. In addition to generic relations (e.g.,~\texttt{Defined For}, \texttt{Cite}), $\mathsf{HuggingKG}$ incorporates multiple inter-\texttt{Model} relations (i.e.,~\texttt{Adapter}, \texttt{Finetune}, \texttt{Merge}, and \texttt{Quantize}) and captures user interaction signals (i.e.,~\texttt{Publish}, \texttt{Like}, and \texttt{Follow}). \emph{This enriched structure facilitates a deeper analysis of ML resources and supports more effective recommendation strategies.} \textbf{KG-based Benchmarks.} %\label{sec:related-work-dataset} Various benchmark datasets have been proposed to evaluate KG-based tasks. For example, OAG-Bench~\cite{OAGBench} provides a human-curated benchmark for academic graph mining, focusing on citation and collaboration networks. In the domain of open source resource management, our $\mathsf{HuggingBench}$ benchmark distinguishes itself by providing datasets for three IR tasks: resource recommendation, task classification, and model tracing. For \textbf{resource recommendation}, paper2repo~\cite{paper2repo} introduces a distant-supervised recommender system that matches papers with related code repositories. However, it incorporates a limited range of entity types that are insufficient to build fine-grained interdependencies. Xu et al.~\cite{RepoRecommendation} leverages multi-modal features from developers’ sequential behaviors and repository text to generate relevant and tailored suggestions for developers, yet it does not explicitly construct or exploit a structured KG. In contrast, as shown in Table~\ref{tab:kg-bench-comparison}, \emph{$\mathsf{HuggingBench}$ benefits from the inherent structure of $\mathsf{HuggingKG}$ that captures rich relational data for recommendation}. Furthermore, GRETA~\cite{GRETA} and recent efforts in automated categorization~\cite{EASE24, ESEM24} address specific \textbf{tagging/classification} tasks. GRETA~\cite{GRETA} constructs an Entity Tag Graph (ETG) using the cross-community knowledge from GitHub and Stack Overflow, and uses an iterative random walk with restart algorithm to automatically assign tags to repositories. \emph{$\mathsf{HuggingKG}$ integrates richer textual descriptions and metadata to construct a graph that encapsulates fine-grained relationships among models and datasets, thereby facilitating multi-label task classification for ML resources.} \begin{table*}[t] \centering \caption{Comparison between KGs and benchmarks on open source resource management.} \label{tab:kg-bench-comparison} %\footnotesize \resizebox{\textwidth}{!}{ \begin{tabular}{@{}lcrrrrcccc@{}} \toprule & \textbf{Source} & \textbf{\#Nodes} & \textbf{\#Types} & \textbf{\#Relations} & \textbf{\#Edges} & \textbf{Key Entities \& (Attributes)} & \textbf{Model Evolution} & \textbf{User Interaction} & \textbf{Tasks} \\ \midrule DEKR~\cite{DEKR} & \begin{tabular}{@{}c@{}}Open academic platforms\\(e.g.,~PapersWithCode, GitHub)\end{tabular} & 17,483 & 5 & 23 & 117,245 & \begin{tabular}{@{}c@{}}\texttt{Dataset}, \texttt{Method}\\(Description)\end{tabular} & No & No & Recommendation \\ \specialrule{0em}{1.5pt}{1.5pt} MLTaskKG~\cite{tse23} & \begin{tabular}{@{}c@{}}PapersWithCode,\\ ML/DL Papers,\\ ML/DL Framework Docs\end{tabular} & 159,310 & 16 & 39 & 628,045 & \begin{tabular}{@{}c@{}}\texttt{Task}, \texttt{Model},\\ \texttt{Model Implementation}\end{tabular} & No & No & Recommendation \\ \specialrule{0em}{1.5pt}{1.5pt} paper2repo~\cite{paper2repo} & \begin{tabular}{@{}c@{}}GitHub,\\ Microsoft Academic\end{tabular} & 39,600 & 2 & - & - & \texttt{Paper}, \texttt{Repository} & No & \begin{tabular}{@{}c@{}}Yes\\(\texttt{Star})\end{tabular} & Recommendation \\ \specialrule{0em}{1.5pt}{1.5pt} GRETA~\cite{GRETA} & \begin{tabular}{@{}c@{}}GitHub,\\ Stack Overflow\end{tabular} & 707,891 & 4 & - & - & \texttt{Repository}, \texttt{Tag} & No & \begin{tabular}{@{}c@{}}Yes\\(\texttt{Search}, \texttt{Raise}, \texttt{Answer})\end{tabular} & Tag Assignment \\ \specialrule{0em}{1.5pt}{1.5pt} AIPL(Facebook/React)~\cite{issue-PR-link-prediction} & GitHub & 97,556 & 4 & 9 & 196,834 & \begin{tabular}{@{}c@{}}\texttt{Issue}, \texttt{PR},\\ \texttt{Repository}, \texttt{User}\end{tabular} & No & Yes & Issue-PR Link Prediction \\ \specialrule{0em}{1.5pt}{1.5pt} AIPL(vuejs/vue)~\cite{issue-PR-link-prediction} & GitHub & 49,200 & 4 & 9 & 95,160 & \begin{tabular}{@{}c@{}}\texttt{Issue}, \texttt{PR},\\ \texttt{Repository}, \texttt{User}\end{tabular} & No & Yes & Issue-PR Link Prediction \\ \midrule $\mathsf{HuggingKG}$ \& $\mathsf{HuggingBench}$ & \textbf{Hugging Face} & \textbf{2,614,270} & \textbf{8} & \textbf{30} & \textbf{6,246,353} & \textbf{\begin{tabular}{@{}c@{}}\texttt{Model}, \texttt{Dataset},\\ \texttt{User}, \texttt{Task}\\(Description)\end{tabular}} & \textbf{\begin{tabular}{@{}c@{}}Yes\\(\texttt{Finetune}, \texttt{Adapter},\\ \texttt{Merge}, \texttt{Quantize})\end{tabular}} & \textbf{\begin{tabular}{@{}c@{}}Yes\\(\texttt{Publish}, \texttt{Like}, \texttt{Follow})\end{tabular}} & \textbf{\begin{tabular}{@{}c@{}}Recommendation,\\ Classification, Tracing\end{tabular}} \\ \bottomrule \end{tabular} } \end{table*} Recent work by Bai et al. ~\cite{issue-PR-link-prediction} uses a knowledge-aware heterogeneous graph learning approach to \textbf{predict links} between issues and pull requests on GitHub, effectively capturing complex relational information through metapath aggregation. However, % while this method demonstrates that metapath aggregation can capture relational complexity in software development, it remains confined to linking \texttt{Issue}–\texttt{PR} pairs and does not address the broader challenge of tracking model evolution across ML resources. \emph{The novel model tracing task in $\mathsf{HuggingBench}$ not only pioneers the exploration of inter-\texttt{Model} relations, but also provides practical insights into the evolution, reuse, and optimization of ML models}, thereby supporting more informed decision-making in real-world open source resource management.","% In this section, we review existing works on KGs (Section~\ref{sec:related-work-kg}) and KG-based benchmarks~(Section~\ref{sec:related-work-dataset}) in the domain of open source resource management. \textbf{KGs for Resource Management.} %KGs have been extensively used to represent and analyze complex relationships in various domains, including open source resource management. Previous works such as DEKR~\cite{DEKR} and MLTaskKG~\cite{tse23} have constructed KGs to support recommendation tasks by capturing relationships among ML resources. DEKR~\cite{DEKR} primarily relies on description enhancement for ML method recommendation. MLTaskKG~\cite{tse23} constructs an AI task-model KG by integrating static data to support task-oriented ML/DL library recommendation. However, both approaches focus on static attributes and a narrow set of relations, failing to capture dynamic user interactions and inter-\texttt{Model} relations. In contrast, as shown in Table~\ref{tab:kg-bench-comparison}, our proposed $\mathsf{HuggingKG}$ is built on the rich metadata provided by Hugging Face, offering a large-scale KG with a more extensive set of relations. In addition to generic relations (e.g.,~\texttt{Defined For}, \texttt{Cite}), $\mathsf{HuggingKG}$ incorporates multiple inter-\texttt{Model} relations (i.e.,~\texttt{Adapter}, \texttt{Finetune}, \texttt{Merge}, and \texttt{Quantize}) and captures user interaction signals (i.e.,~\texttt{Publish}, \texttt{Like}, and \texttt{Follow}). \emph{This enriched structure facilitates a deeper analysis of ML resources and supports more effective recommendation strategies.} \textbf{KG-based Benchmarks.} %Various benchmark datasets have been proposed to evaluate KG-based tasks. For example, OAG-Bench~\cite{OAGBench} provides a human-curated benchmark for academic graph mining, focusing on citation and collaboration networks. In the domain of open source resource management, our $\mathsf{HuggingBench}$ benchmark distinguishes itself by providing datasets for three IR tasks: resource recommendation, task classification, and model tracing. For \textbf{resource recommendation}, paper2repo~\cite{paper2repo} introduces a distant-supervised recommender system that matches papers with related code repositories. However, it incorporates a limited range of entity types that are insufficient to build fine-grained interdependencies. Xu et al.~\cite{RepoRecommendation} leverages multi-modal features from developers’ sequential behaviors and repository text to generate relevant and tailored suggestions for developers, yet it does not explicitly construct or exploit a structured KG. In contrast, as shown in Table~\ref{tab:kg-bench-comparison}, \emph{$\mathsf{HuggingBench}$ benefits from the inherent structure of $\mathsf{HuggingKG}$ that captures rich relational data for recommendation}. Furthermore, GRETA~\cite{GRETA} and recent efforts in automated categorization~\cite{EASE24, ESEM24} address specific \textbf{tagging/classification} tasks. GRETA~\cite{GRETA} constructs an Entity Tag Graph (ETG) using the cross-community knowledge from GitHub and Stack Overflow, and uses an iterative random walk with restart algorithm to automatically assign tags to repositories. \emph{$\mathsf{HuggingKG}$ integrates richer textual descriptions and metadata to construct a graph that encapsulates fine-grained relationships among models and datasets, thereby facilitating multi-label task classification for ML resources.} \begin{table*}[t] \centering \caption{Comparison between KGs and benchmarks on open source resource management.} %\footnotesize \resizebox{\textwidth}{!}{ \begin{tabular}{@{}lcrrrrcccc@{}} \toprule & \textbf{Source} & \textbf{\#Nodes} & \textbf{\#Types} & \textbf{\#Relations} & \textbf{\#Edges} & \textbf{Key Entities \& (Attributes)} & \textbf{Model Evolution} & \textbf{User Interaction} & \textbf{Tasks} \\ \midrule DEKR~\cite{DEKR} & \begin{tabular}{@{}c@{}}Open academic platforms\\(e.g.,~PapersWithCode, GitHub)\end{tabular} & 17,483 & 5 & 23 & 117,245 & \begin{tabular}{@{}c@{}}\texttt{Dataset}, \texttt{Method}\\(Description)\end{tabular} & No & No & Recommendation \\ \specialrule{0em}{1.5pt}{1.5pt} MLTaskKG~\cite{tse23} & \begin{tabular}{@{}c@{}}PapersWithCode,\\ ML/DL Papers,\\ ML/DL Framework Docs\end{tabular} & 159,310 & 16 & 39 & 628,045 & \begin{tabular}{@{}c@{}}\texttt{Task}, \texttt{Model},\\ \texttt{Model Implementation}\end{tabular} & No & No & Recommendation \\ \specialrule{0em}{1.5pt}{1.5pt} paper2repo~\cite{paper2repo} & \begin{tabular}{@{}c@{}}GitHub,\\ Microsoft Academic\end{tabular} & 39,600 & 2 & - & - & \texttt{Paper}, \texttt{Repository} & No & \begin{tabular}{@{}c@{}}Yes\\(\texttt{Star})\end{tabular} & Recommendation \\ \specialrule{0em}{1.5pt}{1.5pt} GRETA~\cite{GRETA} & \begin{tabular}{@{}c@{}}GitHub,\\ Stack Overflow\end{tabular} & 707,891 & 4 & - & - & \texttt{Repository}, \texttt{Tag} & No & \begin{tabular}{@{}c@{}}Yes\\(\texttt{Search}, \texttt{Raise}, \texttt{Answer})\end{tabular} & Tag Assignment \\ \specialrule{0em}{1.5pt}{1.5pt} AIPL(Facebook/React)~\cite{issue-PR-link-prediction} & GitHub & 97,556 & 4 & 9 & 196,834 & \begin{tabular}{@{}c@{}}\texttt{Issue}, \texttt{PR},\\ \texttt{Repository}, \texttt{User}\end{tabular} & No & Yes & Issue-PR Link Prediction \\ \specialrule{0em}{1.5pt}{1.5pt} AIPL(vuejs/vue)~\cite{issue-PR-link-prediction} & GitHub & 49,200 & 4 & 9 & 95,160 & \begin{tabular}{@{}c@{}}\texttt{Issue}, \texttt{PR},\\ \texttt{Repository}, \texttt{User}\end{tabular} & No & Yes & Issue-PR Link Prediction \\ \midrule $\mathsf{HuggingKG}$ \& $\mathsf{HuggingBench}$ & \textbf{Hugging Face} & \textbf{2,614,270} & \textbf{8} & \textbf{30} & \textbf{6,246,353} & \textbf{\begin{tabular}{@{}c@{}}\texttt{Model}, \texttt{Dataset},\\ \texttt{User}, \texttt{Task}\\(Description)\end{tabular}} & \textbf{\begin{tabular}{@{}c@{}}Yes\\(\texttt{Finetune}, \texttt{Adapter},\\ \texttt{Merge}, \texttt{Quantize})\end{tabular}} & \textbf{\begin{tabular}{@{}c@{}}Yes\\(\texttt{Publish}, \texttt{Like}, \texttt{Follow})\end{tabular}} & \textbf{\begin{tabular}{@{}c@{}}Recommendation,\\ Classification, Tracing\end{tabular}} \\ \bottomrule \end{tabular} } \end{table*} Recent work by Bai et al. ~\cite{issue-PR-link-prediction} uses a knowledge-aware heterogeneous graph learning approach to \textbf{predict links} between issues and pull requests on GitHub, effectively capturing complex relational information through metapath aggregation. However, % while this method demonstrates that metapath aggregation can capture relational complexity in software development, it remains confined to linking \texttt{Issue}–\texttt{PR} pairs and does not address the broader challenge of tracking model evolution across ML resources. \emph{The novel model tracing task in $\mathsf{HuggingBench}$ not only pioneers the exploration of inter-\texttt{Model} relations, but also provides practical insights into the evolution, reuse, and optimization of ML models}, thereby supporting more informed decision-making in real-world open source resource management.","KGs for Resource Management. KGs have been extensively used to represent and analyze complex relationships in various domains, including open source resource management. Previous works such as DEKR [ 6] and MLTaskKG [ 23] have constructed KGs to support recommendation tasks by capturing relationships among ML re- sources. DEKR [ 6] primarily relies on description enhancement for ML method recommendation. MLTaskKG [ 23] constructs an AI task- model KG by integrating static data to support task-oriented ML/DL library recommendation. However, both approaches focus on static attributes and a narrow set of relations, failing to capture dynamic user interactions and inter- Model relations. In contrast, as shown in Table 1, our proposed HuggingKG is built on the rich metadata pro- vided by Hugging Face, offering a large-scale KG with a more exten- sive set of relations. In addition to generic relations (e.g., Defined For,Cite ),HuggingKG incorporates multiple inter- Model relations (i.e., Adapter ,Finetune ,Merge , and Quantize ) and captures user interaction signals (i.e., Publish ,Like , and Follow ).This enriched structure facilitates a deeper analysis of ML resources and supports more effective recommendation strategies. KG-based Benchmarks. Various benchmark datasets have been proposed to evaluate KG-based tasks. For example, OAG-Bench [ 51] provides a human-curated benchmark for academic graph mining, focusing on citation and collaboration networks. In the domain of open source resource management, our HuggingBench benchmark distinguishes itself by providing datasets for three IR tasks: resource recommendation, task classification, and model tracing. Forresource recommendation , paper2repo [ 32] introduces a distant-supervised recommender system that matches papers 1https://huggingface.co/collections/cqsss/huggingbench-67b2ee02ca45b15e351009a2 2https://github.com/nju-websoft/HuggingBenchwith related code repositories. However, it incorporates a limited range of entity types that are insufficient to build fine-grained in- terdependencies. Xu et al. [ 44] leverages multi-modal features from developers’ sequential behaviors and repository text to generate relevant and tailored suggestions for developers, yet it does not ex- plicitly construct or exploit a structured KG. In contrast, as shown in Table 1, HuggingBench benefits from the inherent structure of HuggingKG that captures rich relational data for recommendation . Furthermore, GRETA [ 5] and recent efforts in automated cate- gorization [ 25,34] address specific tagging/classification tasks. GRETA [ 5] constructs an Entity Tag Graph (ETG) using the cross- community knowledge from GitHub and Stack Overflow, and uses an iterative random walk with restart algorithm to automatically assign tags to repositories. HuggingKG integrates richer textual descriptions and metadata to construct a graph that encapsulates fine- grained relationships among models and datasets, thereby facilitating multi-label task classification for ML resources. Recent work by Bai et al. [ 1] uses a knowledge-aware heteroge- neous graph learning approach to predict links between issues and pull requests on GitHub, effectively capturing complex rela- tional information through metapath aggregation. However, it re- mains confined to linking Issue –PRpairs and does not address the broader challenge of tracking model evolution across ML resources. The novel model tracing task in HuggingBench not only pioneers the exploration of inter- Model relations, but also provides practical insights into the evolution, reuse, and optimization of ML models , thereby supporting more informed decision-making in real-world open source resource management. 3HuggingKG Knowledge Graph 3.1 KG Construction The construction of HuggingKG follows a principled process that includes defining nodes and edges, crawling and converting data from the Hugging Face community website, and performing data verification and cleaning. Schema Definition. The nodes and edges in HuggingKG are defined based on our meticulous analysis of the Hugging Face website and general IR needs in real-world scenarios. Figure 2 shows an example model page of Qwen/Qwen2.5-7B-Instruct3on the Hugging Face website. We can intuitively see that the key 3https://huggingface.co/Qwen/Qwen2.5-7B-Instruct Benchmarking Recommendation, Classification, and Tracing Based on Hugging Face Knowledge Graph SIGIR ’25, July 13–18, 2025, Padua, Italy Table 1: Comparison between KGs and benchmarks on open source resource management. Source #Nodes #Types #Relations #Edges Key Entities & (Attributes) Model Evolution User Interaction Tasks DEKR [6]Open academic platforms (e.g., PapersWithCode, GitHub)17,483 5 23 117,245Dataset ,Method (Description)No No Recommendation MLTaskKG [23]PapersWithCode, ML/DL Papers, ML/DL Framework Docs159,310 16 39 628,045Task ,Model , Model ImplementationNo No Recommendation paper2repo [32]GitHub, Microsoft Academic39,600 2 - - Paper ,Repository NoYes (Star )Recommendation GRETA [5]GitHub, Stack Overflow707,891 4 - - Repository ,Tag NoYes (Search ,Raise ,Answer )Tag Assignment AIPL(Facebook/React) [1] GitHub 97,556 4 9 196,834Issue ,PR, Repository ,UserNo Yes Issue-PR Link Prediction AIPL(vuejs/vue) [1] GitHub 49,200 4 9 95,160Issue ,PR, Repository ,UserNo Yes Issue-PR Link Prediction HuggingKG &HuggingBench Hugging Face 2,614,270 8 30 6,246,353Model ,Dataset , User,Task (Description)Yes (Finetune ,Adapter , Merge ,Quantize )Yes (Publish ,Like,Follow )Recommendation, Classification, Tracing Model Tags Model DescriptionModel Publisher/Model Name Model-P aper Relation Space-M odel RelationUser-M odel Relation User-O rganization Relation Collection-M odel RelationModel-T as k Relation Model-M odel Relation Figure 2: An example model page on Hugging Face. attributes of a Model include its name, publisher, tags, and text description on the model card, etc. The key relations that can be observed on the page include Finetune between Model s and Like between User andModel , etc. Through an analysis of pages such as models, datasets, and spaces, we identify 8 types of nodes and 30 types of edges between them, as illustrated in Figure" 2505.12791v1,"Unlearning for Federated Online Learning to Rank: A Reproducibility Study","Yiling Tao, Shuyi Wang, Jiaxi Yang, Guido Zuccon","This paper reports on findings from a comparative study on the effectiveness and efficiency of federated unlearning strategies within Federated Online Learning to Rank (FOLTR), with specific attention to systematically analysing the unlearning capabilities of methods in a verifiable manner. Federated approaches to ranking of search results have recently garnered attention to address users privacy concerns. In FOLTR, privacy is safeguarded by collaboratively training ranking models across decentralized data sources, preserving individual user data while optimizing search results based on implicit feedback, such as clicks. Recent legislation introduced across numerous countries is establishing the so called ""the right to be forgotten"", according to which services based on machine learning models like those in FOLTR should provide capabilities that allow users to remove their own data from those used to train models. This has sparked the development of unlearning methods, along with evaluation practices to measure whether unlearning of a user data successfully occurred. Current evaluation practices are however often controversial, necessitating the use of multiple metrics for a more comprehensive assessment -- but previous proposals of unlearning methods only used single evaluation metrics. This paper addresses this limitation: our study rigorously assesses the effectiveness of unlearning strategies in managing both under-unlearning and over-unlearning scenarios using adapted, and newly proposed evaluation metrics. Thanks to our detailed analysis, we uncover the strengths and limitations of five unlearning strategies, offering valuable insights into optimizing federated unlearning to balance data privacy and system performance within FOLTR. We publicly release our code and complete results at https://github.com/Iris1026/Unlearning-for-FOLTR.git.","cs.IR, cs.LG",2025-05-19T07:23:46+00:00,2025-05-19T07:23:46+00:00,http://arxiv.org/abs/2505.12791v1,http://arxiv.org/abs/2505.12791v1,2025-05-19 07:23:46+00:00,"\subsection{Federated Online Learning to Rank} As a specialized distributed machine learning paradigm, FL enables collaborative training while preserving user privacy. To satisfy privacy-preserving requirements in OLTR, FL has also been introduced to this field termed federated online learning to rank (FOLTR). The FOLtR-ES method does this by incorporating evolution strategies to optimize the ranking model and \(\epsilon\)-local differential privacy for enhanced data protection~\cite{kharitonov2019federated}, but this methods is found to have reduced effectiveness and adaptability on large-scale datasets~\cite{wang2021federated}. A more effective approach, FPDGD~\cite{wang2021effective}, has been proposed by integrating the advanced Pairwise Differentiable Gradient Descent (PDGD)~\cite{oosterhuis2018differentiable} into the FL framework. Thanks to the superiority of locally-deployed PDGD ranker in handling noise and biases from user interactions, FPDGD has demonstrated strong ranking performance which is comparable to that of the state-of-the-art centralised OLTR methods. Following this, Wang and Zuccon~\cite{wang2022non} investigate the robustness of FPDGD to non-independent and identically distributed client data. The vulnerability of FPDGD to poisoning attacks in the federated system is also simulated and verified~\cite{wang2023analysis}, which extends the landscape of study on FOLTR. \iffalse FOLTR introduces a distributed OLTR~\cite{jia2022learning, wang2018efficient} scenario where users coordinate under a central server to collaboratively train a global ranker. In a FOLTR system, clients train local rankers using interactions generated on their local devices and then upload model parameters to the central server without sharing data with other clients or the central server. This scenario protects user data privacy while also allowing clients to benefit from a global ranking model that learns from the contributions of each client in the federation. Although research on FOLTR is still in its early stages, some foundational studies have been established. For instance, Federated OLTR with Evolutionary Strategies(FOLtR-ES) ~\cite{kharitonov2019federated} is the first work to extend the OLTR to a FL setting, using evolution strategies to optimize the ranking model and incorporating \(\epsilon\)-local differential privacy for enhanced data protection. However, recent study by Wang et al.~\cite{wang2021federated} has shown that FOLtR-ES is less effective and adaptable on large-scale datasets. To address this, Wang~\etal~\cite{wang2021effective} proposed an improved method named FPDGD, which adapts the advanced Pairwise Differentiable Gradient Descent (PDGD) method to the federated learning framework. Compared to FOLtR-ES, FPDGD shows significant improvements in performance and generalization across datasets. Thus, in this paper, we employ FPDGD as the base FOLTR method and investigate unlearning strategies on top of it. \fi \subsection{Federated Unlearning} %Although machine unlearning in centralized settings dominated the existing research efforts, FU has attracted a lot of attention recently. Federated Unlearning extends the principles of machine unlearning to FL settings, allowing individual clients' data to be removed from a global model. % Several approaches to Federated Unlearning have been recently proposed. A naive approach in FU is to retrain the global model from scratch without involving clients who requested deletion, ensuing complete removal but at a high computational cost. In order to accelerate the FU process, FedEraser adjusts historical updates from clients to reconstruct unlearned models~\cite{liu2021federaser}; other computation-efficient approaches have followed~\cite{wu2022federated,liu2022right,halimi2022federated}. %Beyond its application in computer vision tasks, FU has also been applied in other fields, such as recommendation systems~\cite{yuan2023federated}, Graph Neural Networks (GNN)~\cite{zhu2023heterogeneous}. While FU has been studied across natural language processing and recommendation systems~\cite{yuan2023federated,zhu2023heterogeneous}, its application to FOLTR tasks remains unexplored, with Wang et al.~\cite{wang2024forget} presenting the only approach to date. This study implemented an existing FU approach and evaluated its unlearning performance under a user simulation inspired by poisoning attacks~\cite{wang2023analysis, shejwalkar2021manipulating}. While Wang et al.~\cite{wang2024forget} pointed out that the uniqueness of FOLTR including optimising ranking tasks with implicit user feedback in an online manner, the challenge of adapting existing FU methods to FOLTR and evaluating the unlearning performance are largely unexplored. In this paper, we close this gap by evaluating within the context of FOLTR the performance of a broad range of unlearning methods across a diverse set of evaluation metrics. \iffalse Unlike machine unlearning in centralized settings, unlearning receives less attention in federated settings. The most naive way of implementing FU is to retrain the model from scratch after the clients request to be removed (the target clients). However, this method requires substantial computational resources. Recent research has sought to accelerate this process. In the field of Computer Vision(CV), Liu et al.~\cite{liu2021federaser} introduced FedEraser, which adjusts historical updates from clients to reconstruct unlearned models. Subsequently, Wu et al.~\cite{wu2022federated} developed a method that erases these updates from a target client and uses knowledge distillation to restore performance. Both methods necessitate the server's retention of all clients' update histories. Additionally, Liu et al. ~\cite{liu2022right} proposed a time and energy-efficient retraining approach using Newton's methods. To efficiently erase client data, Halimi et al. ~\cite{halimi2022federated} suggested maximizing loss through gradient ascent and addressing the constrained optimization problem using Projected Gradient Descent (PGD). In terms of on-device recommendation system, Yuan et al.~\cite{yuan2023federated} proposed a Federated Recommendation Unlearning (FRU) method by rolling back and calibrating historical parameter updates. In the Knowledge Graph (KG) domain, Zhu et al. ~\cite{zhu2023heterogeneous} introduced FedLU, employing a neuroscientific approach to unlearning by leveraging retroactive interference and passive decay to selectively remove specific knowledge from local clients and update the global model through knowledge distillation. However, the aforementioned work does not cover the OLTR domain, highlighting that FU research in OLTR remains extensively unexplored.~\shuyi{The work by Wang et al.~\cite{wang2024forget} is by far the only unlearning method in FOLTR...} \fi","\subsection{Federated Online Learning to Rank} As a specialized distributed machine learning paradigm, FL enables collaborative training while preserving user privacy. To satisfy privacy-preserving requirements in OLTR, FL has also been introduced to this field termed federated online learning to rank (FOLTR). The FOLtR-ES method does this by incorporating evolution strategies to optimize the ranking model and \(\epsilon\)-local differential privacy for enhanced data protection~\cite{kharitonov2019federated}, but this methods is found to have reduced effectiveness and adaptability on large-scale datasets~\cite{wang2021federated}. A more effective approach, FPDGD~\cite{wang2021effective}, has been proposed by integrating the advanced Pairwise Differentiable Gradient Descent (PDGD)~\cite{oosterhuis2018differentiable} into the FL framework. Thanks to the superiority of locally-deployed PDGD ranker in handling noise and biases from user interactions, FPDGD has demonstrated strong ranking performance which is comparable to that of the state-of-the-art centralised OLTR methods. Following this, Wang and Zuccon~\cite{wang2022non} investigate the robustness of FPDGD to non-independent and identically distributed client data. The vulnerability of FPDGD to poisoning attacks in the federated system is also simulated and verified~\cite{wang2023analysis}, which extends the landscape of study on FOLTR. \iffalse FOLTR introduces a distributed OLTR~\cite{jia2022learning, wang2018efficient} scenario where users coordinate under a central server to collaboratively train a global ranker. In a FOLTR system, clients train local rankers using interactions generated on their local devices and then upload model parameters to the central server without sharing data with other clients or the central server. This scenario protects user data privacy while also allowing clients to benefit from a global ranking model that learns from the contributions of each client in the federation. Although research on FOLTR is still in its early stages, some foundational studies have been established. For instance, Federated OLTR with Evolutionary Strategies(FOLtR-ES) ~\cite{kharitonov2019federated} is the first work to extend the OLTR to a FL setting, using evolution strategies to optimize the ranking model and incorporating \(\epsilon\)-local differential privacy for enhanced data protection. However, recent study by Wang et al.~\cite{wang2021federated} has shown that FOLtR-ES is less effective and adaptable on large-scale datasets. To address this, Wang~\etal~\cite{wang2021effective} proposed an improved method named FPDGD, which adapts the advanced Pairwise Differentiable Gradient Descent (PDGD) method to the federated learning framework. Compared to FOLtR-ES, FPDGD shows significant improvements in performance and generalization across datasets. Thus, in this paper, we employ FPDGD as the base FOLTR method and investigate unlearning strategies on top of it. \fi \subsection{Federated Unlearning} %Although machine unlearning in centralized settings dominated the existing research efforts, FU has attracted a lot of attention recently. Federated Unlearning extends the principles of machine unlearning to FL settings, allowing individual clients' data to be removed from a global model. % Several approaches to Federated Unlearning have been recently proposed. A naive approach in FU is to retrain the global model from scratch without involving clients who requested deletion, ensuing complete removal but at a high computational cost. In order to accelerate the FU process, FedEraser adjusts historical updates from clients to reconstruct unlearned models~\cite{liu2021federaser}; other computation-efficient approaches have followed~\cite{wu2022federated,liu2022right,halimi2022federated}. %Beyond its application in computer vision tasks, FU has also been applied in other fields, such as recommendation systems~\cite{yuan2023federated}, Graph Neural Networks (GNN)~\cite{zhu2023heterogeneous}. While FU has been studied across natural language processing and recommendation systems~\cite{yuan2023federated,zhu2023heterogeneous}, its application to FOLTR tasks remains unexplored, with Wang et al.~\cite{wang2024forget} presenting the only approach to date. This study implemented an existing FU approach and evaluated its unlearning performance under a user simulation inspired by poisoning attacks~\cite{wang2023analysis, shejwalkar2021manipulating}. While Wang et al.~\cite{wang2024forget} pointed out that the uniqueness of FOLTR including optimising ranking tasks with implicit user feedback in an online manner, the challenge of adapting existing FU methods to FOLTR and evaluating the unlearning performance are largely unexplored. In this paper, we close this gap by evaluating within the context of FOLTR the performance of a broad range of unlearning methods across a diverse set of evaluation metrics. \iffalse Unlike machine unlearning in centralized settings, unlearning receives less attention in federated settings. The most naive way of implementing FU is to retrain the model from scratch after the clients request to be removed (the target clients). However, this method requires substantial computational resources. Recent research has sought to accelerate this process. In the field of Computer Vision(CV), Liu et al.~\cite{liu2021federaser} introduced FedEraser, which adjusts historical updates from clients to reconstruct unlearned models. Subsequently, Wu et al.~\cite{wu2022federated} developed a method that erases these updates from a target client and uses knowledge distillation to restore performance. Both methods necessitate the server's retention of all clients' update histories. Additionally, Liu et al. ~\cite{liu2022right} proposed a time and energy-efficient retraining approach using Newton's methods. To efficiently erase client data, Halimi et al. ~\cite{halimi2022federated} suggested maximizing loss through gradient ascent and addressing the constrained optimization problem using Projected Gradient Descent (PGD). In terms of on-device recommendation system, Yuan et al.~\cite{yuan2023federated} proposed a Federated Recommendation Unlearning (FRU) method by rolling back and calibrating historical parameter updates. In the Knowledge Graph (KG) domain, Zhu et al. ~\cite{zhu2023heterogeneous} introduced FedLU, employing a neuroscientific approach to unlearning by leveraging retroactive interference and passive decay to selectively remove specific knowledge from local clients and update the global model through knowledge distillation. However, the aforementioned work does not cover the OLTR domain, highlighting that FU research in OLTR remains extensively unexplored.~\shuyi{The work by Wang et al.~\cite{wang2024forget} is by far the only unlearning method in FOLTR...} \fi","2.1 Federated Online Learning to Rank As a specialized distributed machine learning paradigm, FL enables collaborative training while preserving user privacy. To satisfy privacy-preserving requirements in OLTR, FL has also been intro- duced to this field termed federated online learning to rank (FOLTR). The FOLtR-ES method does this by incorporating evolution strate- gies to optimize the ranking model and 𝜖-local differential privacy for enhanced data protection [ 14], but this methods is found to have reduced effectiveness and adaptability on large-scale datasets [ 28]. A more effective approach, FPDGD [ 26], has been proposed by in- tegrating the advanced Pairwise Differentiable Gradient Descent (PDGD) [ 19] into the FL framework. Thanks to the superiority of locally-deployed PDGD ranker in handling noise and biases from user interactions, FPDGD has demonstrated strong ranking perfor- mance which is comparable to that of the state-of-the-art centralised OLTR methods. Following this, Wang and Zuccon [ 29] investigate the robustness of FPDGD to non-independent and identically dis- tributed client data. The vulnerability of FPDGD to poisoning at- tacks in the federated system is also simulated and verified [ 30], which extends the landscape of study on FOLTR. 2.2 Federated Unlearning Federated Unlearning extends the principles of machine unlearning to FL settings, allowing individual clients’ data to be removed from a global model. A naive approach in FU is to retrain the global model from scratch without involving clients who requested dele- tion, ensuing complete removal but at a high computational cost. In order to accelerate the FU process, FedEraser adjusts historical updates from clients to reconstruct unlearned models [ 15]; other computation-efficient approaches have followed [ 11,16,31]. While FU has been studied across natural language processing and recom- mendation systems [ 33,37], its application to FOLTR tasks remains unexplored, with Wang et al. [ 27] presenting the only approach to date. This study implemented an existing FU approach and evalu- ated its unlearning performance under a user simulation inspired by poisoning attacks [ 24,30]. While Wang et al. [ 27] pointed out that the uniqueness of FOLTR including optimising ranking tasks with implicit user feedback in an online manner, the challenge of Unlearning for Federated Online Learning to Rank: A Reproducibility Study SIGIR ’25, July 13–18, 2025, Padua, Italy adapting existing FU methods to FOLTR and evaluating the unlearn- ing performance are largely unexplored. In this paper, we close this gap by evaluating within the context of FOLTR the performance of a broad range of unlearning methods across a diverse set of evaluation metrics." 2505.07166v1,"Pre-training vs. Fine-tuning: A Reproducibility Study on Dense Retrieval Knowledge Acquisition","Zheng Yao, Shuai Wang, Guido Zuccon","Dense retrievers utilize pre-trained backbone language models (e.g., BERT, LLaMA) that are fine-tuned via contrastive learning to perform the task of encoding text into sense representations that can be then compared via a shallow similarity operation, e.g. inner product. Recent research has questioned the role of fine-tuning vs. that of pre-training within dense retrievers, specifically arguing that retrieval knowledge is primarily gained during pre-training, meaning knowledge not acquired during pre-training cannot be sub-sequentially acquired via fine-tuning. We revisit this idea here as the claim was only studied in the context of a BERT-based encoder using DPR as representative dense retriever. We extend the previous analysis by testing other representation approaches (comparing the use of CLS tokens with that of mean pooling), backbone architectures (encoder-only BERT vs. decoder-only LLaMA), and additional datasets (MSMARCO in addition to Natural Questions). Our study confirms that in DPR tuning, pre-trained knowledge underpins retrieval performance, with fine-tuning primarily adjusting neuron activation rather than reorganizing knowledge. However, this pattern does not hold universally, such as in mean-pooled (Contriever) and decoder-based (LLaMA) models. We ensure full reproducibility and make our implementation publicly available at https://github.com/ielab/DenseRetriever-Knowledge-Acquisition.","cs.IR, cs.CL",2025-05-12T01:24:00+00:00,2025-05-12T01:24:00+00:00,http://arxiv.org/abs/2505.07166v1,http://arxiv.org/abs/2505.07166v1,2025-05-12 01:24:00+00:00,"Dense retrieval relies on deep neural networks to encode textual data into dense vector representations, enabling efficient approximate nearest neighbor search. These models are broadly categorized into encoder-based and decoder-based retrievers, each employing different representation and retrieval strategies. \textbf{Encoder-based dense retrievers}, such as DPR~\cite{karpukhin2020dense}, utilize transformer encoders to map queries and documents into fixed-size vectors. The retrieval process is then performed by computing the dot product between query and document representations. One common approach is the use of the \textbf{[CLS] token representation}, where the final hidden state of the special [CLS] token is extracted as a compact representation of the entire input sequence. While effective, this method has been observed to focus more on the beginning of the input, potentially missing finer-grained information distributed throughout the text. An alternative strategy is \textbf{mean pooling}, as adopted by Contriever~\cite{izacard2021contriever}, where the final embeddings of all tokens are averaged to form a unified document representation. This pooling mechanism captures a more distributed representation of the input and is often preferred in cases where information is spread across longer passages. Furthermore, some models, such as Sentence-BERT~\cite{reimers2019sentence} and SimCSE~\cite{gao2021simcse}, extend dense retrieval capabilities to zero-shot settings, leveraging contrastive learning and pre-trained embeddings to provide robust document representations without requiring task-specific fine-tuning. \textbf{Decoder-based dense retrievers} are in contrast to encoder-based models, incorporating autoregressive decoding mechanisms for retrieval. RePLAMA~\cite{replama2021} exemplifies this approach by reframing the retrieval task as a sequence generation problem, where the model generates candidate passages based on query context rather than performing direct similarity matching. This method enhances retrieval flexibility by capturing long-range dependencies and richer query-document relationships. Similarly, PromptReps~\cite{promptreps2021} integrates prompt-based learning with dense retrieval, employing carefully designed prompts to guide the model in generating more discriminative representations. These approaches illustrate the growing shift toward generative retrieval frameworks that combine aspects of traditional retrieval with neural sequence modeling. \textbf{Benchmarks} such as \textbf{MS MARCO Passage Ranking}~\cite{msmarco} and \textbf{Natural Questions (NQ)}~\cite{naturalquestions} are commonly used to evaluate Dense retrievers. MS MARCO consists of web search queries with passage relevance annotations, while NQ features real-world questions paired with relevant Wikipedia passages. These datasets provide diverse and realistic challenges, making them essential for assessing retrieval effectiveness and advancing research in dense retrieval.","Dense retrieval relies on deep neural networks to encode textual data into dense vector representations, enabling efficient approximate nearest neighbor search. These models are broadly categorized into encoder-based and decoder-based retrievers, each employing different representation and retrieval strategies. \textbf{Encoder-based dense retrievers}, such as DPR~\cite{karpukhin2020dense}, utilize transformer encoders to map queries and documents into fixed-size vectors. The retrieval process is then performed by computing the dot product between query and document representations. One common approach is the use of the \textbf{[CLS] token representation}, where the final hidden state of the special [CLS] token is extracted as a compact representation of the entire input sequence. While effective, this method has been observed to focus more on the beginning of the input, potentially missing finer-grained information distributed throughout the text. An alternative strategy is \textbf{mean pooling}, as adopted by Contriever~\cite{izacard2021contriever}, where the final embeddings of all tokens are averaged to form a unified document representation. This pooling mechanism captures a more distributed representation of the input and is often preferred in cases where information is spread across longer passages. Furthermore, some models, such as Sentence-BERT~\cite{reimers2019sentence} and SimCSE~\cite{gao2021simcse}, extend dense retrieval capabilities to zero-shot settings, leveraging contrastive learning and pre-trained embeddings to provide robust document representations without requiring task-specific fine-tuning. \textbf{Decoder-based dense retrievers} are in contrast to encoder-based models, incorporating autoregressive decoding mechanisms for retrieval. RePLAMA~\cite{replama2021} exemplifies this approach by reframing the retrieval task as a sequence generation problem, where the model generates candidate passages based on query context rather than performing direct similarity matching. This method enhances retrieval flexibility by capturing long-range dependencies and richer query-document relationships. Similarly, PromptReps~\cite{promptreps2021} integrates prompt-based learning with dense retrieval, employing carefully designed prompts to guide the model in generating more discriminative representations. These approaches illustrate the growing shift toward generative retrieval frameworks that combine aspects of traditional retrieval with neural sequence modeling. \textbf{Benchmarks} such as \textbf{MS MARCO Passage Ranking}~\cite{msmarco} and \textbf{Natural Questions (NQ)}~\cite{naturalquestions} are commonly used to evaluate Dense retrievers. MS MARCO consists of web search queries with passage relevance annotations, while NQ features real-world questions paired with relevant Wikipedia passages. These datasets provide diverse and realistic challenges, making them essential for assessing retrieval effectiveness and advancing research in dense retrieval.","Dense retrieval relies on deep neural networks to encode textual data into dense vector representations, enabling efficient approximate nearest neighbor search. These models are broadly categorized into encoder- based and decoder-based retrievers, each employing different representation and retrieval strategies. Encoder-based dense retrievers , such as DPR Karpukhin et al. [2020], utilize transformer encoders to map queries and documents into fixed-size vectors. The retrieval process is then performed by computing the dot product between query and document representations. One common approach is the use of the [CLS] token representation , where the final hidden state of the special [CLS] token is extracted as a compact representation of the entire input sequence. While effective, this method has been observed to focus more on the beginning of the input, potentially missing finer-grained information distributed throughout the text. An alternative strategy is mean pooling , as adopted by Contriever Izacard and Grave [2021], where the final embeddings of all tokens are averaged to form a unified document representation. This pooling mechanism captures a more distributed representation of the input and is often preferred in cases where information is spread across longer passages. Furthermore, some models, such as Sentence-BERT Reimers and Gurevych [2019] and SimCSE Gao et al. [2021], extend dense retrieval capabilities to zero-shot settings, leveraging contrastive learning and pre-trained embeddings to provide robust document representations without requiring task-specific fine-tuning. Decoder-based dense retrievers are in contrast to encoder-based models, incorporating autoregressive decoding mechanisms for retrieval. RePLAMA Smith and Doe [2021] exemplifies this approach by reframing the retrieval task as a sequence generation problem, where the model generates candidate passages based on query context rather than performing direct similarity matching. This method enhances retrieval flexibility by capturing long-range dependencies and richer query-document relationships. Similarly, PromptReps Lee and Kumar [2021] integrates prompt-based learning with dense retrieval, employing carefully designed prompts to guide the model in generating more discriminative representations. These approaches illustrate the growing shift toward generative retrieval frameworks that combine aspects of traditional retrieval with neural sequence modeling. Benchmarks such as MS MARCO Passage Ranking Nguyen et al. [2016] and Natural Questions (NQ) Kwiatkowski et al. [2019] are commonly used to evaluate Dense retrievers. MS MARCO consists of 17 web search queries with passage relevance annotations, while NQ features real-world questions paired with relevant Wikipedia passages. These datasets provide diverse and realistic challenges, making them essential for assessing retrieval effectiveness and advancing research in dense retrieval." 2505.03484v1,"STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation","Maolin Wang, Sheng Zhang, Ruocheng Guo, Wanyu Wang, Xuetao Wei, Zitao Liu, Hongzhi Yin, Yi Chang, Xiangyu Zhao","Recent deep sequential recommendation models often struggle to effectively model key characteristics of user behaviors, particularly in handling sequence length variations and capturing diverse interaction patterns. We propose STAR-Rec, a novel architecture that synergistically combines preference-aware attention and state-space modeling through a sequence-level mixture-of-experts framework. STAR-Rec addresses these challenges by: (1) employing preference-aware attention to capture both inherently similar item relationships and diverse preferences, (2) utilizing state-space modeling to efficiently process variable-length sequences with linear complexity, and (3) incorporating a mixture-of-experts component that adaptively routes different behavioral patterns to specialized experts, handling both focused category-specific browsing and diverse category exploration patterns. We theoretically demonstrate how the state space model and attention mechanisms can be naturally unified in recommendation scenarios, where SSM captures temporal dynamics through state compression while attention models both similar and diverse item relationships. Extensive experiments on four real-world datasets demonstrate that STAR-Rec consistently outperforms state-of-the-art sequential recommendation methods, particularly in scenarios involving diverse user behaviors and varying sequence lengths.",cs.IR,2025-05-06T12:40:38+00:00,2025-05-06T12:40:38+00:00,http://arxiv.org/abs/2505.03484v1,http://arxiv.org/abs/2505.03484v1,2025-05-06 12:40:38+00:00,"\noindent\textbf{Transformers and RNNs for Sequential Recommendation} Sequential recommendation has evolved significantly from traditional methods to deep learning-based solutions~\cite{Frequency23,DL4,Xavier,sse-pt,zhao2023embedding,FMLP,strec,MLM4Rec,PEPNet,mb-str,lightsan,autoseqrec,HRNN,zhao2023user}. Early approaches like TransRec~\cite{DMAN} and matrix factorization methods~\cite{koren2009matrix} focused on modeling user-item interactions through conventional data mining techniques, but they struggled with capturing multiple user behaviors and faced efficiency challenges with longer sequences. This led to the emergence of deep learning methods, particularly Transformers and RNNs. Transformer-based models like SASRec~\cite{Kang01} leveraged multi-head attention mechanisms for sequence modeling, while BERT4Rec~\cite{bert4rec} employed bidirectional transformers to capture contextual information. LinRec~\cite{Linrec} further improved efficiency by introducing linear complexity attention mechanisms. Despite their effectiveness, these transformer-based models suffer from quadratic computational complexity when modeling long sequences. RNN-based approaches like GRU4Rec~\cite{GRU4Rec} provided linear computational complexity but showed limited effectiveness in sequential recommendations. To address this, STAR-Rec combines preference-aware attention and state-space modeling to handle variable-length sequences while maintaining efficiency. \noindent\textbf{State Space Models for Sequential Recommendation} Recently, state-space-models (SSMs) have demonstrated remarkable effectiveness in sequence modeling tasks due to their superior capability in capturing temporal dynamics and hidden patterns~\cite{GLINTours25,HiPPOs21,16Dual,gu2023mamba,qu2024survey,dao2024transformers,MambaRec,wang2024echomamba4rec,cao2024mamba4kt,liu2024bidirectional,yang2024uncovering,Visionzhu}. Mamba4Rec~\cite{mamba4rec} pioneered this direction by demonstrating improved efficiency while maintaining competitive performance through its selective state space modeling. Following this, ECHO-Mamba4Rec~\cite{wang2024echomamba4rec} advanced the field by combining bidirectional Mamba with frequency-domain filtering for more accurate pattern capture. RecMamba~\cite{yang2024uncovering} demonstrated Mamba's capability in handling lifelong scenarios, while Mamba4KT~\cite{cao2024mamba4kt} adapted the architecture for knowledge tracing applications. Most recently, SIGMA~\cite{liu2024bidirectional} attempted to address Mamba's limitations in context modeling and short sequence handling through a bi-directional structure with selective gating mechanisms. These approaches face challenges in balancing long/short-term sequence modeling and pattern diversity in recommendation scenarios. STAR-Rec addresses this through preference-aware attention and state-space modeling via sequence-level mixture-of-experts, effectively handling diverse item relationships and varying sequence patterns.","\noindent\textbf{Transformers and RNNs for Sequential Recommendation} Sequential recommendation has evolved significantly from traditional methods to deep learning-based solutions~\cite{Frequency23,DL4,Xavier,sse-pt,zhao2023embedding,FMLP,strec,MLM4Rec,PEPNet,mb-str,lightsan,autoseqrec,HRNN,zhao2023user}. Early approaches like TransRec~\cite{DMAN} and matrix factorization methods~\cite{koren2009matrix} focused on modeling user-item interactions through conventional data mining techniques, but they struggled with capturing multiple user behaviors and faced efficiency challenges with longer sequences. This led to the emergence of deep learning methods, particularly Transformers and RNNs. Transformer-based models like SASRec~\cite{Kang01} leveraged multi-head attention mechanisms for sequence modeling, while BERT4Rec~\cite{bert4rec} employed bidirectional transformers to capture contextual information. LinRec~\cite{Linrec} further improved efficiency by introducing linear complexity attention mechanisms. Despite their effectiveness, these transformer-based models suffer from quadratic computational complexity when modeling long sequences. RNN-based approaches like GRU4Rec~\cite{GRU4Rec} provided linear computational complexity but showed limited effectiveness in sequential recommendations. To address this, STAR-Rec combines preference-aware attention and state-space modeling to handle variable-length sequences while maintaining efficiency. \noindent\textbf{State Space Models for Sequential Recommendation} Recently, state-space-models (SSMs) have demonstrated remarkable effectiveness in sequence modeling tasks due to their superior capability in capturing temporal dynamics and hidden patterns~\cite{GLINTours25,HiPPOs21,16Dual,gu2023mamba,qu2024survey,dao2024transformers,MambaRec,wang2024echomamba4rec,cao2024mamba4kt,liu2024bidirectional,yang2024uncovering,Visionzhu}. Mamba4Rec~\cite{mamba4rec} pioneered this direction by demonstrating improved efficiency while maintaining competitive performance through its selective state space modeling. Following this, ECHO-Mamba4Rec~\cite{wang2024echomamba4rec} advanced the field by combining bidirectional Mamba with frequency-domain filtering for more accurate pattern capture. RecMamba~\cite{yang2024uncovering} demonstrated Mamba's capability in handling lifelong scenarios, while Mamba4KT~\cite{cao2024mamba4kt} adapted the architecture for knowledge tracing applications. Most recently, SIGMA~\cite{liu2024bidirectional} attempted to address Mamba's limitations in context modeling and short sequence handling through a bi-directional structure with selective gating mechanisms. These approaches face challenges in balancing long/short-term sequence modeling and pattern diversity in recommendation scenarios. STAR-Rec addresses this through preference-aware attention and state-space modeling via sequence-level mixture-of-experts, effectively handling diverse item relationships and varying sequence patterns.","Transformers and RNNs for Sequential Recommendation Sequential recommendation has evolved significantly from tradi- tional methods to deep learning-based solutions [ 3,9,10,13,22,28, 31,33,44,49,53,54,56,60]. Early approaches like TransRec [ 36] and matrix factorization methods [ 21] focused on modeling user- item interactions through conventional data mining techniques, but they struggled with capturing multiple user behaviors andfaced efficiency challenges with longer sequences. This led to the emergence of deep learning methods, particularly Transformers and RNNs. Transformer-based models like SASRec [ 18] leveraged multi-head attention mechanisms for sequence modeling, while BERT4Rec [ 35] employed bidirectional transformers to capture contextual information. LinRec [ 27] further improved efficiency by introducing linear complexity attention mechanisms. Despite their effectiveness, these transformer-based models suffer from qua- dratic computational complexity when modeling long sequences. RNN-based approaches like GRU4Rec [ 16] provided linear compu- tational complexity but showed limited effectiveness in sequential recommendations. To address this, STAR-Rec combines preference- aware attention and state-space modeling to handle variable-length sequences while maintaining efficiency. State Space Models for Sequential Recommendation Recently, state-space-models (SSMs) have demonstrated remarkable effective- ness in sequence modeling tasks due to their superior capability in capturing temporal dynamics and hidden patterns [ 2,7,14,17,29, 32,41,46–48,51,61]. Mamba4Rec [ 26] pioneered this direction by demonstrating improved efficiency while maintaining competitive performance through its selective state space modeling. Following this, ECHO-Mamba4Rec [ 41] advanced the field by combining bidi- rectional Mamba with frequency-domain filtering for more accurate pattern capture. RecMamba [ 47] demonstrated Mamba’s capabil- ity in handling lifelong scenarios, while Mamba4KT [ 2] adapted the architecture for knowledge tracing applications. Most recently, SIGMA [ 29] attempted to address Mamba’s limitations in context modeling and short sequence handling through a bi-directional structure with selective gating mechanisms. These approaches face challenges in balancing long/short-term sequence modeling and pat- tern diversity in recommendation scenarios. STAR-Rec addresses this through preference-aware attention and state-space modeling via sequence-level mixture-of-experts, effectively handling diverse item relationships and varying sequence patterns." 2505.00552v1,Graph Spectral Filtering with Chebyshev Interpolation for Recommendation,"Chanwoo Kim, Jinkyu Sung, Yebonn Han, Joonseok Lee","Graph convolutional networks have recently gained prominence in collaborative filtering (CF) for recommendations. However, we identify potential bottlenecks in two foundational components. First, the embedding layer leads to a latent space with limited capacity, overlooking locally observed but potentially valuable preference patterns. Also, the widely-used neighborhood aggregation is limited in its ability to leverage diverse preference patterns in a fine-grained manner. Building on spectral graph theory, we reveal that these limitations stem from graph filtering with a cut-off in the frequency spectrum and a restricted linear form. To address these issues, we introduce ChebyCF, a CF framework based on graph spectral filtering. Instead of a learned embedding, it takes a user's raw interaction history to utilize the full spectrum of signals contained in it. Also, it adopts Chebyshev interpolation to effectively approximate a flexible non-linear graph filter, and further enhances it by using an additional ideal pass filter and degree-based normalization. Through extensive experiments, we verify that ChebyCF overcomes the aforementioned bottlenecks and achieves state-of-the-art performance across multiple benchmarks and reasonably fast inference. Our code is available at https://github.com/chanwoo0806/ChebyCF.","cs.IR, cs.LG",2025-05-01T14:28:44+00:00,2025-05-01T14:28:44+00:00,http://arxiv.org/abs/2505.00552v1,http://arxiv.org/abs/2505.00552v1,2025-05-01 14:28:44+00:00,"\label{sec:related} \textbf{Graph Neural Networks.} Beside random-walk based approaches \cite{perozziDeepwalk2014, groverNode2vecScalableFeature2016, huangGraphRecurrentNetworks2019, nikolentzosRandomwalkgraphneuralnetworks2020, jinRawgnn2022, wangNonConvGNN2024}, there are two primary approaches in Graph Neural Networks (GNNs): spatial and spectral graph convolutions. Spatial graph convolution defines convolution in the vertex domain. GCN \cite{kipfSemiSupervisedClassificationGraph2017} simplifies graph spectral convolutions using only the first-order linear filters, equivalent to the spatial convolution of 1-hop neighbor aggregation. SGC \cite{wuSimplifyingGraphConvolutional2019} reduces its computational complexity by removing its redundant feature transformations. More recent works have expanded GCNs to a broader range of applications \cite{hamiltonInductiveRepresentationLearning2017, gilmerNeuralMessagePassing2017, velickovicDeepGraphInfomax2018, xuHowPowerfulAre2019}. Spectral graph convolution relies on costly Laplacian eigendecomposition to perform convolution in the spectral domain, triggering the development of numerous polynomial approximations to circumvent this issue. ChebNet \cite{defferrardConvolutionalNeuralNetworks2016}, for example, adopts a Chebyshev basis. GPR-GNN \cite{chienAdaptiveUniversalGeneralized2021} and BernNet \cite{heBernNetLearningArbitrary2021} utilize the monomial and Bernstein basis, respectively. ChebNetII \cite{chenRevisitingGraphBased2020} presents an improved model using Chebyshev interpolation, which reduces the Runge phenomenon. Inspired by the strong performance of graph filtering using polynomial approximations in the node classification task, we design ChebyCF—a graph filter utilizing Chebyshev interpolation, specifically adapted to the context of collaborative filtering, including the elimination of heavy node-wise feature transformations. \vspace{0.1cm} \noindent \textbf{Graph-based Recommendations.} Interpreting the user-item interaction as a graph, GNNs have been extensively explored in collaborative filtering. After GCNs are applied to CF for the first time \cite{wangNeuralGraphCollaborative2019}, several following studies \cite{chenRevisitingGraphBased2020, heLightGCNSimplifyingPowering2020} have shown that the inherent complexity of GCNs is less suitable for CF. This is because CF relies only on the user-item interaction data without any feature information. To address this, several attempts to simplify the model structure have been made \cite{maoUltraGCNUltraSimplification2021, heSGCF2023}. There are also various approaches \cite{sunNeighborInteractionAware2020, wangDisentangledGraphCollaborative2020, liuInterestawareMessagePassingGCN2021, kongLinearNonLinearThat2022, fanGraphTrendFiltering2022, guoJGCF2023, wangCollaborationAwareGraphConvolutional2023, zhuGiffCF2024,jinri2024content,eungi2025reducedgcn} to capture important information from the user-item interaction graph. Additionally, there have been attempts \cite{wuSelfsupervisedGraphLearning2021, xiaHypergraphContrastiveCollaborative2022, linImprovingGraphCollaborative2022, jiangAdaptiveGraphContrastive2023} to overcome the sparsity of interaction data by leveraging graph contrastive learning. While many of the aforementioned studies have proposed GCN-based models with spatial convolution, there are also GSF-based approaches utilizing spectral convolution \cite{zhengSpectralCollaborativeFiltering2018, shenHowPowerfulGraph2021, fuRevisitingNeighborhoodbasedLink2022, liuPersonalizedGraphSignal2023, pengSGFCF2024, park2024turbo}. LinkProp \cite{fuRevisitingNeighborhoodbasedLink2022} interprets CF from the perspective of the graph link prediction task. GF-CF \cite{shenHowPowerfulGraph2021} highlights that the power of existing spatial convolution approaches lies in their low-pass filtering capabilities from the spectral filtering perspective. PGSP \cite{liuPersonalizedGraphSignal2023} builds on GF-CF by augmenting both the input signal and graph. SGFCF \cite{pengSGFCF2024} further enhances GF-CF through the introduction of a new graph normalization technique and individualized filtering. Nevertheless, the form of graph filters has largely remained linear, with limited attention given to enabling more flexible formulations. Our method addresses this by leveraging Chebyshev interpolation with a graph filter that is both flexible and computationally efficient.","\textbf{Graph Neural Networks.} Beside random-walk based approaches \cite{perozziDeepwalk2014, groverNode2vecScalableFeature2016, huangGraphRecurrentNetworks2019, nikolentzosRandomwalkgraphneuralnetworks2020, jinRawgnn2022, wangNonConvGNN2024}, there are two primary approaches in Graph Neural Networks (GNNs): spatial and spectral graph convolutions. Spatial graph convolution defines convolution in the vertex domain. GCN \cite{kipfSemiSupervisedClassificationGraph2017} simplifies graph spectral convolutions using only the first-order linear filters, equivalent to the spatial convolution of 1-hop neighbor aggregation. SGC \cite{wuSimplifyingGraphConvolutional2019} reduces its computational complexity by removing its redundant feature transformations. More recent works have expanded GCNs to a broader range of applications \cite{hamiltonInductiveRepresentationLearning2017, gilmerNeuralMessagePassing2017, velickovicDeepGraphInfomax2018, xuHowPowerfulAre2019}. Spectral graph convolution relies on costly Laplacian eigendecomposition to perform convolution in the spectral domain, triggering the development of numerous polynomial approximations to circumvent this issue. ChebNet \cite{defferrardConvolutionalNeuralNetworks2016}, for example, adopts a Chebyshev basis. GPR-GNN \cite{chienAdaptiveUniversalGeneralized2021} and BernNet \cite{heBernNetLearningArbitrary2021} utilize the monomial and Bernstein basis, respectively. ChebNetII \cite{chenRevisitingGraphBased2020} presents an improved model using Chebyshev interpolation, which reduces the Runge phenomenon. Inspired by the strong performance of graph filtering using polynomial approximations in the node classification task, we design ChebyCF—a graph filter utilizing Chebyshev interpolation, specifically adapted to the context of collaborative filtering, including the elimination of heavy node-wise feature transformations. \vspace{0.1cm} \noindent \textbf{Graph-based Recommendations.} Interpreting the user-item interaction as a graph, GNNs have been extensively explored in collaborative filtering. After GCNs are applied to CF for the first time \cite{wangNeuralGraphCollaborative2019}, several following studies \cite{chenRevisitingGraphBased2020, heLightGCNSimplifyingPowering2020} have shown that the inherent complexity of GCNs is less suitable for CF. This is because CF relies only on the user-item interaction data without any feature information. To address this, several attempts to simplify the model structure have been made \cite{maoUltraGCNUltraSimplification2021, heSGCF2023}. There are also various approaches \cite{sunNeighborInteractionAware2020, wangDisentangledGraphCollaborative2020, liuInterestawareMessagePassingGCN2021, kongLinearNonLinearThat2022, fanGraphTrendFiltering2022, guoJGCF2023, wangCollaborationAwareGraphConvolutional2023, zhuGiffCF2024,jinri2024content,eungi2025reducedgcn} to capture important information from the user-item interaction graph. Additionally, there have been attempts \cite{wuSelfsupervisedGraphLearning2021, xiaHypergraphContrastiveCollaborative2022, linImprovingGraphCollaborative2022, jiangAdaptiveGraphContrastive2023} to overcome the sparsity of interaction data by leveraging graph contrastive learning. While many of the aforementioned studies have proposed GCN-based models with spatial convolution, there are also GSF-based approaches utilizing spectral convolution \cite{zhengSpectralCollaborativeFiltering2018, shenHowPowerfulGraph2021, fuRevisitingNeighborhoodbasedLink2022, liuPersonalizedGraphSignal2023, pengSGFCF2024, park2024turbo}. LinkProp \cite{fuRevisitingNeighborhoodbasedLink2022} interprets CF from the perspective of the graph link prediction task. GF-CF \cite{shenHowPowerfulGraph2021} highlights that the power of existing spatial convolution approaches lies in their low-pass filtering capabilities from the spectral filtering perspective. PGSP \cite{liuPersonalizedGraphSignal2023} builds on GF-CF by augmenting both the input signal and graph. SGFCF \cite{pengSGFCF2024} further enhances GF-CF through the introduction of a new graph normalization technique and individualized filtering. Nevertheless, the form of graph filters has largely remained linear, with limited attention given to enabling more flexible formulations. Our method addresses this by leveraging Chebyshev interpolation with a graph filter that is both flexible and computationally efficient.","Graph Neural Networks. Beside random-walk based approaches [14,23,25,43,47,64], there are two primary approaches in Graph Neural Networks (GNNs): spatial and spectral graph convolutions. Spatial graph convolution defines convolution in the vertex domain. GCN [ 29] simplifies graph spectral convolutions using only the first-order linear filters, equivalent to the spatial convolution of 1-hop neighbor aggregation. SGC [ 66] reduces its computational complexity by removing its redundant feature transformations. More recent works have expanded GCNs to a broader range of applications [13, 16, 59, 70]. Spectral graph convolution relies on costly Laplacian eigende- composition to perform convolution in the spectral domain, trig- gering the development of numerous polynomial approximations to circumvent this issue. ChebNet [ 8], for example, adopts a Cheby- shev basis. GPR-GNN [ 4] and BernNet [ 18] utilize the monomial and Bernstein basis, respectively. ChebNetII [ 3] presents an improved model using Chebyshev interpolation, which reduces the Runge phenomenon. Inspired by the strong performance of graph filtering using polynomial approximations in the node classification task, we design ChebyCF—a graph filter utilizing Chebyshev interpolation, specifically adapted to the context of collaborative filtering, includ- ing the elimination of heavy node-wise feature transformations. Graph-based Recommendations. Interpreting the user-item in- teraction as a graph, GNNs have been extensively explored in col- laborative filtering. After GCNs are applied to CF for the first time [62], several following studies [ 3,21] have shown that the inher- ent complexity of GCNs is less suitable for CF. This is because CF relies only on the user-item interaction data without any fea- ture information. To address this, several attempts to simplify the model structure have been made [ 17,41]. There are also various approaches [ 10,15,27,28,30,38,55,63,65,73] to capture impor- tant information from the user-item interaction graph. Additionally, there have been attempts [ 24,37,67,69] to overcome the sparsity of interaction data by leveraging graph contrastive learning. While many of the aforementioned studies have proposed GCN- based models with spatial convolution, there are also GSF-based approaches utilizing spectral convolution [ 11,39,45,46,52,72]. Graph Spectral Filtering with Chebyshev Interpolation for Recommendation SIGIR ’25, July 13–18, 2025, Padova, Italy LinkProp [ 11] interprets CF from the perspective of the graph link prediction task. GF-CF [ 52] highlights that the power of existing spatial convolution approaches lies in their low-pass filtering capa- bilities from the spectral filtering perspective. PGSP [ 39] builds on GF-CF by augmenting both the input signal and graph. SGFCF [ 46] further enhances GF-CF through the introduction of a new graph normalization technique and individualized filtering. Nevertheless, the form of graph filters has largely remained linear, with limited attention given to enabling more flexible formulations. Our method addresses this by leveraging Chebyshev interpolation with a graph filter that is both flexible and computationally efficient." 2504.20458v1,"Search-Based Interaction For Conversation Recommendation via Generative Reward Model Based Simulated User","Xiaolei Wang, Chunxuan Xia, Junyi Li, Fanzhe Meng, Lei Huang, Jinpeng Wang, Wayne Xin Zhao, Ji-Rong Wen","Conversational recommendation systems (CRSs) use multi-turn interaction to capture user preferences and provide personalized recommendations. A fundamental challenge in CRSs lies in effectively understanding user preferences from conversations. User preferences can be multifaceted and complex, posing significant challenges for accurate recommendations even with access to abundant external knowledge. While interaction with users can clarify their true preferences, frequent user involvement can lead to a degraded user experience. To address this problem, we propose a generative reward model based simulated user, named GRSU, for automatic interaction with CRSs. The simulated user provides feedback to the items recommended by CRSs, enabling them to better capture intricate user preferences through multi-turn interaction. Inspired by generative reward models, we design two types of feedback actions for the simulated user: i.e., generative item scoring, which offers coarse-grained feedback, and attribute-based item critique, which provides fine-grained feedback. To ensure seamless integration, these feedback actions are unified into an instruction-based format, allowing the development of a unified simulated user via instruction tuning on synthesized data. With this simulated user, automatic multi-turn interaction with CRSs can be effectively conducted. Furthermore, to strike a balance between effectiveness and efficiency, we draw inspiration from the paradigm of reward-guided search in complex reasoning tasks and employ beam search for the interaction process. On top of this, we propose an efficient candidate ranking method to improve the recommendation results derived from interaction. Extensive experiments on public datasets demonstrate the effectiveness, efficiency, and transferability of our approach.","cs.IR, cs.CL",2025-04-29T06:37:30+00:00,2025-04-29T06:37:30+00:00,http://arxiv.org/abs/2504.20458v1,http://arxiv.org/abs/2504.20458v1,2025-04-29 06:37:30+00:00,"Our work is related to the following two research directions. \paratitle{Conversational recommendation.} Conversational recommender systems~(CRSs) aim to provide item recommendations through multi-turn interaction. One line of work~\cite{lei2020estimation,lei2020interactive,li2021seamlessly} focuses on the optimization of interaction policy. They simplify the interaction to pre-defined actions (\eg asking questions or making recommendations) and handcrafted templates. Based on this, they optimize CRSs to give accurate recommendations within as few turns as possible. Another line of work~\cite{wang2022towards,wang2023improving,zhao2023alleviating} focuses on the elicitation and understanding of user preference in more free-form natural language conversations. Since conversations usually lack sufficient contextual information, existing work introduces knowledge from external resources, such as knowledge graphs~\cite{dao2024broadening}, large language models~(LLMs)~\cite{he2023large,yang2024unleashing}, and conversational recommendation corpora~\cite{dao2024broadening,xie2024neighborhood}. % Based on this, they design specific alignment strategies~(\eg prompt learning~\cite{dao2024broadening} and instruction tuning~\cite{yang2024unleashing}) to incorporate the introduced knowledge for user preference understanding and item recommendation. % However, the user preference can be multifaceted and complex, making accurate recommendations challenging even with enriched knowledge. Our work follows the second category and proposes a simulated user, which can automatically interact with CRSs to help them discern the true user preference from a complex conversation. % However, user preference can be complex, making it hard to give accurate recommendations even if sufficient knowledge is provided. % In this paper, we aim to address this issue by developing a simulated user to provide feedback for CRSs in an automatic interaction process. \paratitle{Generative reward models.} Reward models~\cite{cobbe2021training,lightmanlet} have become an emerging topic for solving complex reasoning tasks. For example, in the commonly used ``Best-of-N'' strategy~\cite{cobbe2021training}, a task model first generates several candidate solutions, then a reward model ranks these candidates and selects the best one as the final prediction. Recently, generative reward models~\cite{mahan2024generative,zhang24generative} have been proposed, which unify generation and reward modeling by representing reward as the probability of a specific token. Based on this, critiques can be introduced in generation for better reward modeling~\cite{mahan2024generative,zhang24generative}. In this work, we take inspiration from generative reward models to design the actions of our simulated user.","Our work is related to the following two research directions. \paratitle{Conversational recommendation.} Conversational recommender systems~(CRSs) aim to provide item recommendations through multi-turn interaction. One line of work~\cite{lei2020estimation,lei2020interactive,li2021seamlessly} focuses on the optimization of interaction policy. They simplify the interaction to pre-defined actions (\eg asking questions or making recommendations) and handcrafted templates. Based on this, they optimize CRSs to give accurate recommendations within as few turns as possible. Another line of work~\cite{wang2022towards,wang2023improving,zhao2023alleviating} focuses on the elicitation and understanding of user preference in more free-form natural language conversations. Since conversations usually lack sufficient contextual information, existing work introduces knowledge from external resources, such as knowledge graphs~\cite{dao2024broadening}, large language models~(LLMs)~\cite{he2023large,yang2024unleashing}, and conversational recommendation corpora~\cite{dao2024broadening,xie2024neighborhood}. % Based on this, they design specific alignment strategies~(\eg prompt learning~\cite{dao2024broadening} and instruction tuning~\cite{yang2024unleashing}) to incorporate the introduced knowledge for user preference understanding and item recommendation. % However, the user preference can be multifaceted and complex, making accurate recommendations challenging even with enriched knowledge. Our work follows the second category and proposes a simulated user, which can automatically interact with CRSs to help them discern the true user preference from a complex conversation. % However, user preference can be complex, making it hard to give accurate recommendations even if sufficient knowledge is provided. % In this paper, we aim to address this issue by developing a simulated user to provide feedback for CRSs in an automatic interaction process. \paratitle{Generative reward models.} Reward models~\cite{cobbe2021training,lightmanlet} have become an emerging topic for solving complex reasoning tasks. For example, in the commonly used ``Best-of-N'' strategy~\cite{cobbe2021training}, a task model first generates several candidate solutions, then a reward model ranks these candidates and selects the best one as the final prediction. Recently, generative reward models~\cite{mahan2024generative,zhang24generative} have been proposed, which unify generation and reward modeling by representing reward as the probability of a specific token. Based on this, critiques can be introduced in generation for better reward modeling~\cite{mahan2024generative,zhang24generative}. In this work, we take inspiration from generative reward models to design the actions of our simulated user.","Our work is related to the following two research directions. Conversational recommendation. Conversational recommender systems (CRSs) aim to provide item recommendations through multi-turn interaction. One line of work [ 20,21,23] focuses on the optimization of interaction policy. They simplify the interaction to pre-defined actions ( e.g.,asking questions or making recommen- dations) and handcrafted templates. Based on this, they optimize CRSs to give accurate recommendations within as few turns as possible. Another line of work [ 32,33,42] focuses on the elicitation and understanding of user preference in more free-form natural language conversations. Since conversations usually lack sufficient contextual information, existing work introduces knowledge from external resources, such as knowledge graphs [ 8], large language models (LLMs) [ 13,36], and conversational recommendation cor- pora [ 8,34]. Our work follows the second category and proposes a simulated user, which can automatically interact with CRSs to help them discern the true user preference from a complex conversation. Generative reward models. Reward models [ 7,25] have become an emerging topic for solving complex reasoning tasks. For example, in the commonly used “Best-of-N” strategy [ 7], a task model first generates several candidate solutions, then a reward model ranks these candidates and selects the best one as the final prediction. Recently, generative reward models [ 26,38] have been proposed, which unify generation and reward modeling by representing re- ward as the probability of a specific token. Based on this, critiques can be introduced in generation for better reward modeling [ 26,38]. In this work, we take inspiration from generative reward models to design the actions of our simulated user." 2504.18383v1,"Bridge the Domains: Large Language Models Enhanced Cross-domain Sequential Recommendation","Qidong Liu, Xiangyu Zhao, Yejing Wang, Zijian Zhang, Howard Zhong, Chong Chen, Xiang Li, Wei Huang, Feng Tian","Cross-domain Sequential Recommendation (CDSR) aims to extract the preference from the user's historical interactions across various domains. Despite some progress in CDSR, two problems set the barrier for further advancements, i.e., overlap dilemma and transition complexity. The former means existing CDSR methods severely rely on users who own interactions on all domains to learn cross-domain item relationships, compromising the practicability. The latter refers to the difficulties in learning the complex transition patterns from the mixed behavior sequences. With powerful representation and reasoning abilities, Large Language Models (LLMs) are promising to address these two problems by bridging the items and capturing the user's preferences from a semantic view. Therefore, we propose an LLMs Enhanced Cross-domain Sequential Recommendation model (LLM4CDSR). To obtain the semantic item relationships, we first propose an LLM-based unified representation module to represent items. Then, a trainable adapter with contrastive regularization is designed to adapt the CDSR task. Besides, a hierarchical LLMs profiling module is designed to summarize user cross-domain preferences. Finally, these two modules are integrated into the proposed tri-thread framework to derive recommendations. We have conducted extensive experiments on three public cross-domain datasets, validating the effectiveness of LLM4CDSR. We have released the code online.","cs.IR, cs.AI",2025-04-25T14:30:25+00:00,2025-04-25T14:30:25+00:00,http://arxiv.org/abs/2504.18383v1,http://arxiv.org/abs/2504.18383v1,2025-04-25 14:30:25+00:00,"% In this section, we will conclude the literature relevant to this paper from two aspects, \ie cross-domain sequential recommendation and LLMs for the sequential recommendation. % \subsection{Cross-domain Sequential Recommendation} \noindent \textbf{Cross-domain Sequential Recommendation}. % Sequential Recommender Systems (SRS)~\cite{pan2024survey} has attracted wide attention because it can capture dynamic preferences from the user's historical interactions. However, most SRS studies focus on addressing the sparsity problem by contrastive learning~\cite{qiu2022contrastive,xie2022contrastive}, while still trapped in the limited single-domain data. Recommender systems~\cite{zhao2018deep,zhao2018recommendations,liu2023multi,wang2023multi,wang2023single,liu2024multimodal} become important in life since they can alleviate the problem of information overload. Recently, CDSR~\cite{chen2024survey} has emerged, absorbing both merits from cross-domain recommendation~\cite{li2022gromov,wang2023plate,li2023hamur,gao2023autotransfer,zhang2024m3oe,jia2024d3,liu2024multifs} and sequential recommendation~\cite{liu2025sigma,liu2023diffuasr,li2023strec,liu2023dirac}. The key to CDSR lies in filling the distribution gap between various domains. % Existing CDSR works often address it from item or user perspectives. From the item aspect, most current studies adopt Graph Neural Networks (GNN)~\cite{wu2022graph} to establish the item connections across domains. As one of the pioneers, C2DSR~\cite{cao2022contrastive} proposes to build an interaction graph based on the co-occurrences of the items across two domains. % Then, the GNN is imposed on this graph and the relationships between items in different domains are learned. Following C2DSR, EA-GCL~\cite{wang2023unbiased} and MGCL~\cite{xu2023multi} further design contrastive objectives for graph learning to alleviate density bias and transition difficulty. % Specifically, EA-GCL~\cite{wang2023unbiased} perturb the global interaction graph and impose contrastive learning to enhance the node embeddings. % In terms of MGCL~\cite{xu2023multi}, it devises contrastive tasks to distinguish local and global graphs so that the intra- and inter-domain transitions are acquired. From the user perspective, CDSR works often design more sophisticated architectures to capture global preferences from the mixed behavior sequences. $\pi$-Net~\cite{ma2019pi}, an early work in this line, proposes a cross-domain transfer module to derive hybrid preferences. Recently, TriCDR~\cite{ma2024triple} has focused on modeling mixed sequences, designing triple contrastive tasks to dig into fine-grained global interests. Though existing CDSR works have explored various methods to bridge the domains, they are trapped in a collaborative view. % This leads to the overlap dilemma and collaborative complexity intrinsically. In contrast, we propose a semantic view to enhance CDSR by LLMs. \vspace{1mm} % \subsection{LLMs for Sequential Recommendation} \noindent \textbf{LLMs for Sequential Recommendation}. LLMs have been applied to several fields~\cite{xu2024multi,liu2024moelora}, including recommender systems~\cite{wu2024survey,lin2023can,bao2024large,wang2024towards,liu2025llmemb,sun2025llmser,liu2024leader,wang2024llm4msr,fu2023unified}. Relying on powerful abilities in reasoning and understanding~\cite{zhao2023survey}, LLMs can benefit SRS by analyzing users' behaviors and items' attributes. Existing LLMs for SRS works can be categorized into two main lines, \ie LLMs as SRS and LLMs enhancing SRS. The first category refers to utilizing LLMs to generate recommendations directly. For example, GOT4Rec~\cite{long2024got4rec} designs chain-of-thoughts to prompt LLMs giving out recommendations. To fill the gap between natural language and recommendation tasks, some propose to fine-tune open-sourced LLMs, \eg TALLRec~\cite{bao2023tallrec} and S-DPO~\cite{chen2024softmax}. % However, these methods do not take the collaborative signals into account. To face this challenge, E4SRec~\cite{li2023e4srec} and LLaRA~\cite{liao2024llara} propose to integrate pre-trained item embeddings of SRS into LLMs. The other line, \ie LLMs enhancing SRS, often adopts LLMs as item or user encoders~\cite{liu2024llmers}. Since the LLMs embeddings can be cached in advance, they are more practical due to no need for LLMs while serving. SAID~\cite{hu2024enhancing} and TSLRec~\cite{liu2024practice} are two representatives, which use the LLMs embeddings to initialize the embedding layer of SRS. To better combine collaborative and semantic information, LLM-ESR~\cite{liu2024llm} designs a dual-view modeling framework. % URLLM~\cite{shen2024exploring} belongs to the first thread, facing the high latency issue when applied to real-world applications. It is worth noting that we are the first to explore how to utilize LLMs to enhance CDSR.","% In this section, we will conclude the literature relevant to this paper from two aspects, \ie cross-domain sequential recommendation and LLMs for the sequential recommendation. % \subsection{Cross-domain Sequential Recommendation} \noindent \textbf{Cross-domain Sequential Recommendation}. % Sequential Recommender Systems (SRS)~\cite{pan2024survey} has attracted wide attention because it can capture dynamic preferences from the user's historical interactions. However, most SRS studies focus on addressing the sparsity problem by contrastive learning~\cite{qiu2022contrastive,xie2022contrastive}, while still trapped in the limited single-domain data. Recommender systems~\cite{zhao2018deep,zhao2018recommendations,liu2023multi,wang2023multi,wang2023single,liu2024multimodal} become important in life since they can alleviate the problem of information overload. Recently, CDSR~\cite{chen2024survey} has emerged, absorbing both merits from cross-domain recommendation~\cite{li2022gromov,wang2023plate,li2023hamur,gao2023autotransfer,zhang2024m3oe,jia2024d3,liu2024multifs} and sequential recommendation~\cite{liu2025sigma,liu2023diffuasr,li2023strec,liu2023dirac}. The key to CDSR lies in filling the distribution gap between various domains. % Existing CDSR works often address it from item or user perspectives. From the item aspect, most current studies adopt Graph Neural Networks (GNN)~\cite{wu2022graph} to establish the item connections across domains. As one of the pioneers, C2DSR~\cite{cao2022contrastive} proposes to build an interaction graph based on the co-occurrences of the items across two domains. % Then, the GNN is imposed on this graph and the relationships between items in different domains are learned. Following C2DSR, EA-GCL~\cite{wang2023unbiased} and MGCL~\cite{xu2023multi} further design contrastive objectives for graph learning to alleviate density bias and transition difficulty. % Specifically, EA-GCL~\cite{wang2023unbiased} perturb the global interaction graph and impose contrastive learning to enhance the node embeddings. % In terms of MGCL~\cite{xu2023multi}, it devises contrastive tasks to distinguish local and global graphs so that the intra- and inter-domain transitions are acquired. From the user perspective, CDSR works often design more sophisticated architectures to capture global preferences from the mixed behavior sequences. $\pi$-Net~\cite{ma2019pi}, an early work in this line, proposes a cross-domain transfer module to derive hybrid preferences. Recently, TriCDR~\cite{ma2024triple} has focused on modeling mixed sequences, designing triple contrastive tasks to dig into fine-grained global interests. Though existing CDSR works have explored various methods to bridge the domains, they are trapped in a collaborative view. % This leads to the overlap dilemma and collaborative complexity intrinsically. In contrast, we propose a semantic view to enhance CDSR by LLMs. \vspace{1mm} % \subsection{LLMs for Sequential Recommendation} \noindent \textbf{LLMs for Sequential Recommendation}. LLMs have been applied to several fields~\cite{xu2024multi,liu2024moelora}, including recommender systems~\cite{wu2024survey,lin2023can,bao2024large,wang2024towards,liu2025llmemb,sun2025llmser,liu2024leader,wang2024llm4msr,fu2023unified}. Relying on powerful abilities in reasoning and understanding~\cite{zhao2023survey}, LLMs can benefit SRS by analyzing users' behaviors and items' attributes. Existing LLMs for SRS works can be categorized into two main lines, \ie LLMs as SRS and LLMs enhancing SRS. The first category refers to utilizing LLMs to generate recommendations directly. For example, GOT4Rec~\cite{long2024got4rec} designs chain-of-thoughts to prompt LLMs giving out recommendations. To fill the gap between natural language and recommendation tasks, some propose to fine-tune open-sourced LLMs, \eg TALLRec~\cite{bao2023tallrec} and S-DPO~\cite{chen2024softmax}. % However, these methods do not take the collaborative signals into account. To face this challenge, E4SRec~\cite{li2023e4srec} and LLaRA~\cite{liao2024llara} propose to integrate pre-trained item embeddings of SRS into LLMs. The other line, \ie LLMs enhancing SRS, often adopts LLMs as item or user encoders~\cite{liu2024llmers}. Since the LLMs embeddings can be cached in advance, they are more practical due to no need for LLMs while serving. SAID~\cite{hu2024enhancing} and TSLRec~\cite{liu2024practice} are two representatives, which use the LLMs embeddings to initialize the embedding layer of SRS. To better combine collaborative and semantic information, LLM-ESR~\cite{liu2024llm} designs a dual-view modeling framework. % URLLM~\cite{shen2024exploring} belongs to the first thread, facing the high latency issue when applied to real-world applications. It is worth noting that we are the first to explore how to utilize LLMs to enhance CDSR.","Cross-domain Sequential Recommendation . Recommender systems [ 23,32,48,49,61,62] become important in life since they can alleviate the problem of information overload. Recently, CDSR [ 7] has emerged, absorbing both merits from cross-domain recommendation [ 10,14,18,19,22,51,58] and sequential recom- mendation [ 17,24,29,31]. The key to CDSR lies in filling the distri- bution gap between various domains. From the item aspect, most current studies adopt Graph Neural Networks (GNN) [ 53] to estab- lish the item connections across domains. As one of the pioneers, C2DSR [ 5] proposes to build an interaction graph based on the co-occurrences of the items across two domains. Following C2DSR, EA-GCL [ 47] and MGCL [ 56] further design contrastive objectives for graph learning to alleviate density bias and transition difficulty. From the user perspective, CDSR works often design more sophis- ticated architectures to capture global preferences from the mixedbehavior sequences. 𝜋-Net [ 35], an early work in this line, pro- poses a cross-domain transfer module to derive hybrid preferences. Recently, TriCDR [ 34] has focused on modeling mixed sequences, designing triple contrastive tasks to dig into fine-grained global interests. Though existing CDSR works have explored various meth- ods to bridge the domains, they are trapped in a collaborative view. In contrast, we propose a semantic view to enhance CDSR by LLMs. LLMs for Sequential Recommendation . LLMs have been ap- plied to several fields [ 27,54], including recommender systems [ 2,9, 20,25,28,43,46,50,52]. Relying on powerful abilities in reasoning and understanding [ 59], LLMs can benefit SRS by analyzing users’ behaviors and items’ attributes. Existing LLMs for SRS works can be categorized into two main lines, i.e.,LLMs as SRS and LLMs enhancing SRS. The first category refers to utilizing LLMs to gener- ate recommendations directly. For example, GOT4Rec [ 33] designs chain-of-thoughts to prompt LLMs giving out recommendations. To fill the gap between natural language and recommendation tasks, some propose to fine-tune open-sourced LLMs, e.g.,TALLRec [ 4] and S-DPO [ 8]. The other line, i.e.,LLMs enhancing SRS, often adopts LLMs as item or user encoders [ 30]. Since the LLMs em- beddings can be cached in advance, they are more practical due to no need for LLMs while serving. SAID [ 13] and TSLRec [ 21] are two representatives, which use the LLMs embeddings to initialize the embedding layer of SRS. To better combine collaborative and semantic information, LLM-ESR [ 26] designs a dual-view modeling framework. It is worth noting that we are the first to explore how to utilize LLMs to enhance CDSR." 2504.17519v1,Replication and Exploration of Generative Retrieval over Dynamic Corpora,"Zhen Zhang, Xinyu Ma, Weiwei Sun, Pengjie Ren, Zhumin Chen, Shuaiqiang Wang, Dawei Yin, Maarten de Rijke, Zhaochun Ren","Generative retrieval (GR) has emerged as a promising paradigm in information retrieval (IR). However, most existing GR models are developed and evaluated using a static document collection, and their performance in dynamic corpora where document collections evolve continuously is rarely studied. In this paper, we first reproduce and systematically evaluate various representative GR approaches over dynamic corpora. Through extensive experiments, we reveal that existing GR models with \textit{text-based} docids show superior generalization to unseen documents. We observe that the more fine-grained the docid design in the GR model, the better its performance over dynamic corpora, surpassing BM25 and even being comparable to dense retrieval methods. While GR models with \textit{numeric-based} docids show high efficiency, their performance drops significantly over dynamic corpora. Furthermore, our experiments find that the underperformance of numeric-based docids is partly due to their excessive tendency toward the initial document set, which likely results from overfitting on the training set. We then conduct an in-depth analysis of the best-performing GR methods. We identify three critical advantages of text-based docids in dynamic corpora: (i) Semantic alignment with language models' pretrained knowledge, (ii) Fine-grained docid design, and (iii) High lexical diversity. Building on these insights, we finally propose a novel multi-docid design that leverages both the efficiency of numeric-based docids and the effectiveness of text-based docids, achieving improved performance in dynamic corpus without requiring additional retraining. Our work offers empirical evidence for advancing GR methods over dynamic corpora and paves the way for developing more generalized yet efficient GR models in real-world search engines.",cs.IR,2025-04-24T13:01:23+00:00,2025-04-24T13:01:23+00:00,http://arxiv.org/abs/2504.17519v1,http://arxiv.org/abs/2504.17519v1,2025-04-24 13:01:23+00:00,"\subsection{Generative retrieval} Generative retrieval is an emerging paradigm in information retrieval that reformulates the retrieval process as a text generation task. Instead of relying on dense or sparse index structures, GR methods leverage autoregressive language models (e.g., T5) to produce a document identifies (docid)~\cite{de2020autoregressive, tay2022transformer, zhou2022dynamicretriever}. Early work in this direction includes autoregressive entity retrieval models that generate entity titles~\cite{de2020autoregressive}. Beyond simple string identifiers, recent studies have explored generating more semantically meaningful docids. Existing GR methods can be categorized into two main types according to the nature of the docids: \textit{numeric-based} and \textit{text-based}. \textit{numeric-based} docids typically involve a quantizer that converts document content into a numeric sequence, followed by training a model to learn the mapping between the document and the numeric sequence. There are various types of quantization strategies, such as hierarchical k-means, product quantization (PQ), residual quantization (RQ), etc. \citet{tay2022transformer} use hierarchical k-means clustering on document embeddings to generate docids and then trains the model to generate the corresponding numeric sequences. \citet{wang2022neural} adopt a similar docids design to DSI~\cite{tay2022transformer} but applies distinct word embeddings for the same numeric value based on different positions and prefixes. \citet{zhou2022ultron} employ PQ to convert document embeddings into numeric sequences and designs a three-stage training task to enable the model to memorize the documents. \citet{zeng2024scalable} use RQ to quantize document embeddings, while incorporating interaction information between the query and the document when obtaining the document embeddings. \citet{sun2024learning} adopt a variant of the RQ strategy, replacing the word embedding matrix with a codebook from the RQ method. The authors use constrained cluster centers as docids and trains the docids representations through document tokenization, retrieval, and reconstruction tasks. \textit{Text-based} docids use meta-information from the document, such as URLs, titles, or queries, as docids, effectively utilizing the powerful capabilities of pre-trained language models. \citet{de2020autoregressive, chen2022gere, chen2022corpusbrain} treat the document title as the docids, as the title is the most intuitive and commonly used form of abstract textual information. \citet{zhou2022ultron, tang2024generative} use URLs and queries as docids in web search scenarios. Single docids type may not fully represent the information contained in a document. \citet{li2023multiview} treat various textual elements as docids and retrieves results for multiple docids simultaneously during the retrieval stage. N-grams are also an effective way to represent document content, but directly storing n-grams for constrained decoding requires significant computational and storage costs. \citet{bevilacqua2022autoregressive} introduce the use of FM-index to store n-gram information and control generation, significantly improving both efficiency and retrieval performance. In this paper, we reproduce representative methods from both \textit{numeric-based} and \textit{text-based} paradigms to comprehensively evaluate their adaptability in dynamic corpora. \subsection{GR over dynamic corpora} Previous GR methods primarily focus on fixed document collections, whereas practical tasks often involve corpora that continuously evolve over time. To address this challenge, researchers have developed a series of methods to optimize GR models for handling dynamic corpora. \citet{mehta2022dsi++} employ an incremental training method by optimizing flat loss basins through the Sharpness-Aware Minimization (SAM) optimizer, enabling the model to remember new documents while maintaining stable retrieval performance for the initial documents. \citet{guo2024corpusbrain++} design an adapter structure that shares a backbone model while introducing task-specific adapters for training on specific documents. This method effectively learns representations for new documents. \citet{chen2023continual} build upon the PQ strategy and proposes Incremental Product Quantization (IPQ), using the generated PQ codes as docids. It achieves flexible document addition by updating the PQ centroids. \citet{kim2023exploring} evaluate the performance of GR methods on dynamic corpora, demonstrating the superiority of GR in terms of efficiency and memory compared to traditional retrieval strategies. These methods have, to some extent, addressed or alleviated the challenges faced by GR methods in dynamic corpora tasks. However, they either require additional training or involve complex docids designs, making them unsuitable for the demands of dynamic corpora in practical scenarios. Therefore, it is crucial to explore the ability of models to generalize to new documents without complex modifications or additional training.","\subsection{Generative retrieval} Generative retrieval is an emerging paradigm in information retrieval that reformulates the retrieval process as a text generation task. Instead of relying on dense or sparse index structures, GR methods leverage autoregressive language models (e.g., T5) to produce a document identifies (docid)~\cite{de2020autoregressive, tay2022transformer, zhou2022dynamicretriever}. Early work in this direction includes autoregressive entity retrieval models that generate entity titles~\cite{de2020autoregressive}. Beyond simple string identifiers, recent studies have explored generating more semantically meaningful docids. Existing GR methods can be categorized into two main types according to the nature of the docids: \textit{numeric-based} and \textit{text-based}. \textit{numeric-based} docids typically involve a quantizer that converts document content into a numeric sequence, followed by training a model to learn the mapping between the document and the numeric sequence. There are various types of quantization strategies, such as hierarchical k-means, product quantization (PQ), residual quantization (RQ), etc. \citet{tay2022transformer} use hierarchical k-means clustering on document embeddings to generate docids and then trains the model to generate the corresponding numeric sequences. \citet{wang2022neural} adopt a similar docids design to DSI~\cite{tay2022transformer} but applies distinct word embeddings for the same numeric value based on different positions and prefixes. \citet{zhou2022ultron} employ PQ to convert document embeddings into numeric sequences and designs a three-stage training task to enable the model to memorize the documents. \citet{zeng2024scalable} use RQ to quantize document embeddings, while incorporating interaction information between the query and the document when obtaining the document embeddings. \citet{sun2024learning} adopt a variant of the RQ strategy, replacing the word embedding matrix with a codebook from the RQ method. The authors use constrained cluster centers as docids and trains the docids representations through document tokenization, retrieval, and reconstruction tasks. \textit{Text-based} docids use meta-information from the document, such as URLs, titles, or queries, as docids, effectively utilizing the powerful capabilities of pre-trained language models. \citet{de2020autoregressive, chen2022gere, chen2022corpusbrain} treat the document title as the docids, as the title is the most intuitive and commonly used form of abstract textual information. \citet{zhou2022ultron, tang2024generative} use URLs and queries as docids in web search scenarios. Single docids type may not fully represent the information contained in a document. \citet{li2023multiview} treat various textual elements as docids and retrieves results for multiple docids simultaneously during the retrieval stage. N-grams are also an effective way to represent document content, but directly storing n-grams for constrained decoding requires significant computational and storage costs. \citet{bevilacqua2022autoregressive} introduce the use of FM-index to store n-gram information and control generation, significantly improving both efficiency and retrieval performance. In this paper, we reproduce representative methods from both \textit{numeric-based} and \textit{text-based} paradigms to comprehensively evaluate their adaptability in dynamic corpora. \subsection{GR over dynamic corpora} Previous GR methods primarily focus on fixed document collections, whereas practical tasks often involve corpora that continuously evolve over time. To address this challenge, researchers have developed a series of methods to optimize GR models for handling dynamic corpora. \citet{mehta2022dsi++} employ an incremental training method by optimizing flat loss basins through the Sharpness-Aware Minimization (SAM) optimizer, enabling the model to remember new documents while maintaining stable retrieval performance for the initial documents. \citet{guo2024corpusbrain++} design an adapter structure that shares a backbone model while introducing task-specific adapters for training on specific documents. This method effectively learns representations for new documents. \citet{chen2023continual} build upon the PQ strategy and proposes Incremental Product Quantization (IPQ), using the generated PQ codes as docids. It achieves flexible document addition by updating the PQ centroids. \citet{kim2023exploring} evaluate the performance of GR methods on dynamic corpora, demonstrating the superiority of GR in terms of efficiency and memory compared to traditional retrieval strategies. These methods have, to some extent, addressed or alleviated the challenges faced by GR methods in dynamic corpora tasks. However, they either require additional training or involve complex docids designs, making them unsuitable for the demands of dynamic corpora in practical scenarios. Therefore, it is crucial to explore the ability of models to generalize to new documents without complex modifications or additional training.", 2504.15849v1,"NLCTables: A Dataset for Marrying Natural Language Conditions with Table Discovery","Lingxi Cui, Huan Li, Ke Chen, Lidan Shou, Gang Chen","With the growing abundance of repositories containing tabular data, discovering relevant tables for in-depth analysis remains a challenging task. Existing table discovery methods primarily retrieve desired tables based on a query table or several vague keywords, leaving users to manually filter large result sets. To address this limitation, we propose a new task: NL-conditional table discovery (nlcTD), where users combine a query table with natural language (NL) requirements to refine search results. To advance research in this area, we present nlcTables, a comprehensive benchmark dataset comprising 627 diverse queries spanning NL-only, union, join, and fuzzy conditions, 22,080 candidate tables, and 21,200 relevance annotations. Our evaluation of six state-of-the-art table discovery methods on nlcTables reveals substantial performance gaps, highlighting the need for advanced techniques to tackle this challenging nlcTD scenario. The dataset, construction framework, and baseline implementations are publicly available at https://github.com/SuDIS-ZJU/nlcTables to foster future research.","cs.IR, 68P20",2025-04-22T12:44:59+00:00,2025-04-22T12:44:59+00:00,http://arxiv.org/abs/2504.15849v1,http://arxiv.org/abs/2504.15849v1,2025-04-22 12:44:59+00:00,"\label{sec:related} Section~\ref{ssec:existing_table_discovery} goes through current table discovery methods, highlighting the need and opportunities for developing \task{} techniques. Section~\ref{ssec:existing_data_collection} examines existing test data collections, identifying gaps that must be addressed to facilitate \task{}. \subsection{Table Discovery Approaches} \label{ssec:existing_table_discovery} \noindent\textbf{Keyword-based Table Search}~\cite{wang_retrieving_2021,trabelsi_strubert_2022,wang_solo_2023} retrieves a ranked list of table instances from the repository, ordered by their relevance scores in relation to one or more user-provided keywords. These approaches allow users to locate data with minimal prior knowledge of the structure or relationships within the table repository. However, they typically accept sorely keywords as input and treat tables as plain text (e.g., \textsc{Strubert}~\cite{trabelsi_strubert_2022} serializes tables to a sequence of tokens). While keyword-based approaches are highly valuable, there remains significant room for improvement. \task{} defined in Section~\ref{ssec:definition} aims to extend this paradigm by supporting inputs that go beyond simple keywords, allowing for more complex natural language queries, such as full sentences. Nonetheless, taking only NL as input is only a special case in our \task{} scenarios. In practice, users often already have a table at hand and would use it as query context to refine their table retrieval results. \smallskip \noindent\textbf{Query-table-based Table Search} identifies additional tables relevant to a given query table, leveraging its header and/or body content. The literature has mainly seen two categories of work. % \underline{\textit{Table Union Search}}~\cite{nargesian_table_2018,bogatu_dataset_2020,khatiwada_santos_2023,fan_semantics-aware_2023,hu_automatic_2023} retrieves tables that are union-compatible with the query table, meaning these two tables having multiple pairs of columns that can be merged. This involves assessing similarity between query table and candidate tables. Early approach \TUS{}~\cite{nargesian_table_2018} defines three probabilistic models to measure the similarity between column values, column domains, and word embeddings of column content. \Santos{}~\cite{khatiwada_santos_2023} utilizes knowledge graph to measure the possibility that columns originate from the same domain. Recently, methods like \Starmie{}~\cite{fan_semantics-aware_2023} have adopted table representation learning~\cite{deng_turl_2020,hu_automatic_2023} to generate column embeddings for similarity measure, allowing a more nuanced understanding of table contexts. \underline{\textit{Table Join Search}}~\cite{yakout_infogather_2012,zhu_josie_2019,dong_efficient_2021,dong_deepjoin_2023,liu_feature_2022,chepurko_arda_2020,Table2022dong} retrieves tables that are joinable with the query table at a specified column $C_q$, meaning the two tables sharing overlapping or semantically related values in that column. \Josie{}~\cite{zhu_josie_2019} considers only exact value overlap using set similarity search. Here, $C_q$ is treated as a set, and the top-$k$ columns with the highest value overlap are returned. Recent embedding-based approaches such as \DeepJoin{}~\cite{dong_deepjoin_2023} consider semantic overlap (e.g., ``\texttt{Incorporation}'' and ``\texttt{Inc.}'') through representation learning. Typically, they first encode the columns, then index these columns, and finally search joinable pairs based on the index. Query-table-based search is commonly used to identify table components that can augment or complement an existing query table. While \task{} encompasses this functionality --- allowing users to specify conditions like retrieving unionable or joinable tables ---\task{} goes beyond these by supporting more flexible and customizable NL conditions. Not limited to predefined operations (e.g., set similarity or semantic encoding), \task{} allows users to specify nuanced requests in natural language. Existing approaches struggle with \task{} as they fail to capture and utilize the rich semantics of these conditions (see our empirical study in Section~\ref{ssec:overall_comparison}). \subsection{Existing Test Data Collections} \label{ssec:existing_data_collection} To the best of our knowledge, no prior work has specifically constructed a dataset for NL-conditional table discovery, despite its significant application potential. Existing tabular data collections primarily focus on table retrieval (or table discovery) in either keyword-based or query-table-based paradigms. Table~\ref{related_work_statistic} summarizes representative datasets, highlighting their focus, dataset scales, and the availability of ground truth annotations for benchmarking. For example, \textsc{WikiTables}~\cite{zhang_ad_2018,wang_retrieving_2021,trabelsi_strubert_2022} is centered on keyword-based search, accepting only keywords in queries. \TUS{}~\cite{nargesian_table_2018} and \Santos{}~\cite{khatiwada_santos_2023} focus on table union search scenarios, where query tables are provided as input and ground truth is generated via table splitting (see Section~\ref{ssec:construction}). \textsc{LakeBench}\cite{deng2024lakebench} generates table union and join search datasets by applying table splitting to \textsc{Opendata}\cite{opendata} and \textsc{WebTable}~\cite{venetis_recovering_2011,cafarella2009data}. It provides large-scale datasets aligned with data lake usage, emphasizing algorithm efficiency and scalability. Despite these advancements, all of the aforementioned table discovery datasets only label keyword-table or table-table relatedness, making them unsuitable for tasks requiring a ``triangular'' relationship among the NL condition, the query table, and the candidate table. This ternary relationship is inherently more complex than binary relationships, and thus necessitates the creation of a substantial and diverse experimental dataset. While the last four datasets from \textsc{LakeBench}~\cite{deng2024lakebench} are designed to progressively increase in scale, % the other datasets remain relatively small and are derived from single data sources. This lack of diversity makes them less suitable for scenarios involving highly varied NL conditions. Our dataset, \dataset{}, addresses these gaps by aligning with prior works' input data levels while introducing a comprehensive table discovery framework driven by NL conditions. By supporting diverse NL inputs alongside query tables, \dataset{} enables robust and scalable evaluations for real-world applications. \begin{table}[] \centering \caption{Characteristics of existing test collections. Types \texttt{K}, \texttt{U}, and \texttt{J} denote keyword-based, table union search, and table join search, respectively. The scale of each dataset is represented by the number of queries $|\{T^q\}|$, number of candidate tables $|\mathcal{T}|$, total number of ground truth \#(GT), and the row counts per table Avg \#(Rows). Numbers in \textit{italics} indicate statistics not directly reported in the original papers but derived from our evaluations of the dataset.} \label{related_work_statistic} \footnotesize \begin{tabular}{lrrrrr} \toprule {Dataset} & {Type} & {$|\{T^q\}|$} & {$|\mathcal{T}|$} & {\#(GT)} & {Avg \#(Rows)} \\ \midrule \textsc{WikiTables} & \texttt{K} & 60 & 3K & 3K & 10.9 \\ \TUS{} & \texttt{U} & \textit{92} & 5K & \textit{5K} & 1.9K \\ \Santos{} & \texttt{U} & 80 & 11K & \textit{1.6K} & 7.7K \\ \textsc{OpenData}-$\textsf{U}$ & \texttt{U} & 4.6K & 65K & \textit{49.5K} & 112.4K \\ \textsc{OpenData}-$\textsf{J}$ & \texttt{J} & 4.8K & 65K & \textit{42.6K} & 112.4K \\ \textsc{WebTable}-$\textsf{U}$ & \texttt{U} & 6.8K & 2.8M & \textit{54.9K} & 23.5 \\ \textsc{WebTable}-$\textsf{J}$ & \texttt{J} & 7.5K & 16.6M & \textit{54.8K} & 23.5 \\ \bottomrule \end{tabular} \end{table}","Section~\ref{ssec:existing_table_discovery} goes through current table discovery methods, highlighting the need and opportunities for developing \task{} techniques. Section~\ref{ssec:existing_data_collection} examines existing test data collections, identifying gaps that must be addressed to facilitate \task{}. \subsection{Table Discovery Approaches} \noindent\textbf{Keyword-based Table Search}~\cite{wang_retrieving_2021,trabelsi_strubert_2022,wang_solo_2023} retrieves a ranked list of table instances from the repository, ordered by their relevance scores in relation to one or more user-provided keywords. These approaches allow users to locate data with minimal prior knowledge of the structure or relationships within the table repository. However, they typically accept sorely keywords as input and treat tables as plain text (e.g., \textsc{Strubert}~\cite{trabelsi_strubert_2022} serializes tables to a sequence of tokens). While keyword-based approaches are highly valuable, there remains significant room for improvement. \task{} defined in Section~\ref{ssec:definition} aims to extend this paradigm by supporting inputs that go beyond simple keywords, allowing for more complex natural language queries, such as full sentences. Nonetheless, taking only NL as input is only a special case in our \task{} scenarios. In practice, users often already have a table at hand and would use it as query context to refine their table retrieval results. \smallskip \noindent\textbf{Query-table-based Table Search} identifies additional tables relevant to a given query table, leveraging its header and/or body content. The literature has mainly seen two categories of work. % \underline{\textit{Table Union Search}}~\cite{nargesian_table_2018,bogatu_dataset_2020,khatiwada_santos_2023,fan_semantics-aware_2023,hu_automatic_2023} retrieves tables that are union-compatible with the query table, meaning these two tables having multiple pairs of columns that can be merged. This involves assessing similarity between query table and candidate tables. Early approach \TUS{}~\cite{nargesian_table_2018} defines three probabilistic models to measure the similarity between column values, column domains, and word embeddings of column content. \Santos{}~\cite{khatiwada_santos_2023} utilizes knowledge graph to measure the possibility that columns originate from the same domain. Recently, methods like \Starmie{}~\cite{fan_semantics-aware_2023} have adopted table representation learning~\cite{deng_turl_2020,hu_automatic_2023} to generate column embeddings for similarity measure, allowing a more nuanced understanding of table contexts. \underline{\textit{Table Join Search}}~\cite{yakout_infogather_2012,zhu_josie_2019,dong_efficient_2021,dong_deepjoin_2023,liu_feature_2022,chepurko_arda_2020,Table2022dong} retrieves tables that are joinable with the query table at a specified column $C_q$, meaning the two tables sharing overlapping or semantically related values in that column. \Josie{}~\cite{zhu_josie_2019} considers only exact value overlap using set similarity search. Here, $C_q$ is treated as a set, and the top-$k$ columns with the highest value overlap are returned. Recent embedding-based approaches such as \DeepJoin{}~\cite{dong_deepjoin_2023} consider semantic overlap (e.g., ``\texttt{Incorporation}'' and ``\texttt{Inc.}'') through representation learning. Typically, they first encode the columns, then index these columns, and finally search joinable pairs based on the index. Query-table-based search is commonly used to identify table components that can augment or complement an existing query table. While \task{} encompasses this functionality --- allowing users to specify conditions like retrieving unionable or joinable tables ---\task{} goes beyond these by supporting more flexible and customizable NL conditions. Not limited to predefined operations (e.g., set similarity or semantic encoding), \task{} allows users to specify nuanced requests in natural language. Existing approaches struggle with \task{} as they fail to capture and utilize the rich semantics of these conditions (see our empirical study in Section~\ref{ssec:overall_comparison}). \subsection{Existing Test Data Collections} To the best of our knowledge, no prior work has specifically constructed a dataset for NL-conditional table discovery, despite its significant application potential. Existing tabular data collections primarily focus on table retrieval (or table discovery) in either keyword-based or query-table-based paradigms. Table~\ref{related_work_statistic} summarizes representative datasets, highlighting their focus, dataset scales, and the availability of ground truth annotations for benchmarking. For example, \textsc{WikiTables}~\cite{zhang_ad_2018,wang_retrieving_2021,trabelsi_strubert_2022} is centered on keyword-based search, accepting only keywords in queries. \TUS{}~\cite{nargesian_table_2018} and \Santos{}~\cite{khatiwada_santos_2023} focus on table union search scenarios, where query tables are provided as input and ground truth is generated via table splitting (see Section~\ref{ssec:construction}). \textsc{LakeBench}\cite{deng2024lakebench} generates table union and join search datasets by applying table splitting to \textsc{Opendata}\cite{opendata} and \textsc{WebTable}~\cite{venetis_recovering_2011,cafarella2009data}. It provides large-scale datasets aligned with data lake usage, emphasizing algorithm efficiency and scalability. Despite these advancements, all of the aforementioned table discovery datasets only label keyword-table or table-table relatedness, making them unsuitable for tasks requiring a ``triangular'' relationship among the NL condition, the query table, and the candidate table. This ternary relationship is inherently more complex than binary relationships, and thus necessitates the creation of a substantial and diverse experimental dataset. While the last four datasets from \textsc{LakeBench}~\cite{deng2024lakebench} are designed to progressively increase in scale, % the other datasets remain relatively small and are derived from single data sources. This lack of diversity makes them less suitable for scenarios involving highly varied NL conditions. Our dataset, \dataset{}, addresses these gaps by aligning with prior works' input data levels while introducing a comprehensive table discovery framework driven by NL conditions. By supporting diverse NL inputs alongside query tables, \dataset{} enables robust and scalable evaluations for real-world applications. \begin{table}[] \centering \caption{Characteristics of existing test collections. Types \texttt{K}, \texttt{U}, and \texttt{J} denote keyword-based, table union search, and table join search, respectively. The scale of each dataset is represented by the number of queries $|\{T^q\}|$, number of candidate tables $|\mathcal{T}|$, total number of ground truth \#(GT), and the row counts per table Avg \#(Rows). Numbers in \textit{italics} indicate statistics not directly reported in the original papers but derived from our evaluations of the dataset.} \footnotesize \begin{tabular}{lrrrrr} \toprule {Dataset} & {Type} & {$|\{T^q\}|$} & {$|\mathcal{T}|$} & {\#(GT)} & {Avg \#(Rows)} \\ \midrule \textsc{WikiTables} & \texttt{K} & 60 & 3K & 3K & 10.9 \\ \TUS{} & \texttt{U} & \textit{92} & 5K & \textit{5K} & 1.9K \\ \Santos{} & \texttt{U} & 80 & 11K & \textit{1.6K} & 7.7K \\ \textsc{OpenData}-$\textsf{U}$ & \texttt{U} & 4.6K & 65K & \textit{49.5K} & 112.4K \\ \textsc{OpenData}-$\textsf{J}$ & \texttt{J} & 4.8K & 65K & \textit{42.6K} & 112.4K \\ \textsc{WebTable}-$\textsf{U}$ & \texttt{U} & 6.8K & 2.8M & \textit{54.9K} & 23.5 \\ \textsc{WebTable}-$\textsf{J}$ & \texttt{J} & 7.5K & 16.6M & \textit{54.8K} & 23.5 \\ \bottomrule \end{tabular} \end{table}","Section 3.1 goes through current table discovery methods, highlight- ing the need and opportunities for developing nlcTD techniques. Section 3.2 examines existing test data collections, identifying gaps that must be addressed to facilitate nlcTD . 3.1 Table Discovery Approaches Keyword-based Table Search [31,33,34] retrieves a ranked list of table instances from the repository, ordered by their relevance scores in relation to one or more user-provided keywords. These approaches allow users to locate data with minimal prior knowl- edge of the structure or relationships within the table repository. However, they typically accept sorely keywords as input and treat tables as plain text (e.g., Strubert [31] serializes tables to a se- quence of tokens). While keyword-based approaches are highly valuable, there remains significant room for improvement. nlcTD defined in Section 2.1 aims to extend this paradigm by supporting inputs that go beyond simple keywords, allowing for more complex natural language queries, such as full sentences. Nonetheless, tak- ing only NL as input is only a special case in our nlcTD scenarios. In practice, users often already have a table at hand and would use it as query context to refine their table retrieval results. Query-table-based Table Search identifies additional tables rel- evant to a given query table, leveraging its header and/or body content. The literature has mainly seen two categories of work. Table Union Search [3,16,17,19,24] retrieves tables that are union-compatible with the query table, meaning these two tables having multiple pairs of columns that can be merged. This involvesassessing similarity between query table and candidate tables. Early approach Tus[24] defines three probabilistic models to measure the similarity between column values, column domains, and word embeddings of column content. Santos [19] utilizes knowledge graph to measure the possibility that columns originate from the same domain. Recently, methods like Starmie [16] have adopted ta- ble representation learning [ 10,17] to generate column embeddings for similarity measure, allowing a more nuanced understanding of table contexts. Table Join Search [8,12–14,22,35,41] retrieves tables that are joinable with the query table at a specified column 𝐶𝑞, meaning the two tables sharing overlapping or semantically related values in that column. Josie [41] considers only exact value overlap using set similarity search. Here, 𝐶𝑞is treated as a set, and the top- 𝑘columns with the highest value overlap are returned. Recent embedding- based approaches such as DeepJoin [14] consider semantic overlap (e.g., “ Incorporation ” and “ Inc. ”) through representation learn- ing. Typically, they first encode the columns, then index these columns, and finally search joinable pairs based on the index. Query-table-based search is commonly used to identify table components that can augment or complement an existing query ta- ble. While nlcTD encompasses this functionality — allowing users to specify conditions like retrieving unionable or joinable tables —nlcTD goes beyond these by supporting more flexible and cus- tomizable NL conditions. Not limited to predefined operations (e.g., set similarity or semantic encoding), nlcTD allows users to specify nuanced requests in natural language. Existing approaches struggle with nlcTD as they fail to capture and utilize the rich semantics of these conditions (see our empirical study in Section 5.2). 3.2 Existing Test Data Collections To the best of our knowledge, no prior work has specifically con- structed a dataset for NL-conditional table discovery, despite its significant application potential. Existing tabular data collections primarily focus on table retrieval (or table discovery) in either keyword-based or query-table-based paradigms. Table 1 summa- rizes representative datasets, highlighting their focus, dataset scales, and the availability of ground truth annotations for benchmarking. For example, WikiTables [31,33,38] is centered on keyword-based search, accepting only keywords in queries. Tus [24] and San- tos[19] focus on table union search scenarios, where query tables are provided as input and ground truth is generated via table split- ting (see Section 4.2). LakeBench [11] generates table union and join search datasets by applying table splitting to Opendata [2] and WebTable [4,32]. It provides large-scale datasets aligned with data lake usage, emphasizing algorithm efficiency and scalability. Despite these advancements, all of the aforementioned table dis- covery datasets only label keyword-table or table-table relatedness, making them unsuitable for tasks requiring a “triangular” relation- ship among the NL condition, the query table, and the candidate table. This ternary relationship is inherently more complex than binary relationships, and thus necessitates the creation of a substan- tial and diverse experimental dataset. While the last four datasets from LakeBench [11] are designed to progressively increase in scale, the other datasets remain relatively small and are derived from single data sources. This lack of diversity makes them less 4 Table 1: Characteristics of existing test collections. Types K,U, and Jdenote keyword-based, table union search, and table join search, respectively. The scale of each dataset is represented by the number of queries |{𝑇𝑞}|, number of can- didate tables|T|, total number of ground truth #(GT), and the row counts per table Avg #(Rows). Numbers in italics indicate statistics not directly reported in the original papers but derived from our evaluations of the dataset. Dataset Type |{𝑇𝑞}| |T| #(GT) Avg #(Rows) WikiTables K 60 3K 3K 10.9 Tus U 92 5K 5K 1.9K Santos U 80 11K 1.6K 7.7K OpenData -U U 4.6K 65K 49.5K 112.4K OpenData -J J 4.8K 65K 42.6K 112.4K WebTable -U U 6.8K 2.8M 54.9K 23.5 WebTable -J J 7.5K 16.6M 54.8K 23.5 suitable for scenarios involving highly varied NL conditions. Our dataset, nlcTables , addresses these gaps by aligning with prior works’ input data levels while introducing a comprehensive table discovery framework driven by NL conditions. By supporting di- verse NL inputs alongside query tables, nlcTables enables robust and scalable evaluations for real-world applications." 2504.14991v1,"Understanding Accuracy-Fairness Trade-offs in Re-ranking through Elasticity in Economics","Chen Xu, Jujia Zhao, Wenjie Wang, Liang Pang, Jun Xu, Tat-Seng Chua, Maarten de Rijke","Fairness is an increasingly important factor in re-ranking tasks. Prior work has identified a trade-off between ranking accuracy and item fairness. However, the underlying mechanisms are still not fully understood. An analogy can be drawn between re-ranking and the dynamics of economic transactions. The accuracy-fairness trade-off parallels the coupling of the commodity tax transfer process. Fairness considerations in re-ranking, similar to a commodity tax on suppliers, ultimately translate into a cost passed on to consumers. Analogously, item-side fairness constraints result in a decline in user-side accuracy. In economics, the extent to which commodity tax on the supplier (item fairness) transfers to commodity tax on users (accuracy loss) is formalized using the notion of elasticity. The re-ranking fairness-accuracy trade-off is similarly governed by the elasticity of utility between item groups. This insight underscores the limitations of current fair re-ranking evaluations, which often rely solely on a single fairness metric, hindering comprehensive assessment of fair re-ranking algorithms. Centered around the concept of elasticity, this work presents two significant contributions. We introduce the Elastic Fairness Curve (EF-Curve) as an evaluation framework. This framework enables a comparative analysis of algorithm performance across different elasticity levels, facilitating the selection of the most suitable approach. Furthermore, we propose ElasticRank, a fair re-ranking algorithm that employs elasticity calculations to adjust inter-item distances within a curved space. Experiments on three widely used ranking datasets demonstrate its effectiveness and efficiency.",cs.IR,2025-04-21T09:41:08+00:00,2025-04-21T09:41:08+00:00,http://arxiv.org/abs/2504.14991v1,http://arxiv.org/abs/2504.14991v1,2025-04-21 09:41:08+00:00,"\noindent% \textbf{Fair re-ranking.} Over the past decade, work on fair ranking tasks has rapidly grown in volume, driven by the need for a responsible and trustworthy ecosystem~\cite{lifairness, lipani2016fairness, deldjoo2022survey, xu2025fairdiversecomprehensivetoolkitfair}. Previous research often categorizes fair-aware methods into three categories based on ranking phases: pre-processing~\cite{Calmon17, xiong2024fairwasp}, in-processing~\cite{Tang23FairBias}, and post-processing (\ie re-ranking tasks)~\cite{xu2023p, fairrec}. The re-ranking phase is regarded as the most easily adaptable and practical stage in optimizing ranking systems~\cite{fairrec}. During the re-ranking phase, the concept of fairness in re-ranking depends on the stakeholders involved~\cite{abdollahpouri2020multistakeholder, abdollahpouri2019multi}. Prior work has examined user-oriented fairness~\cite{abdollahpouri2019unfairness, li2021user} and item-oriented fairness~\cite{fairrec, xu2023p, TaxRank, singh2019policy, jaenich2024fairness, TaoSIGIRAP}. In this paper, we focus on item group fairness in re-ranking tasks. \smallskip\noindent% \textbf{Metrics and algorithms in fair re-ranking.} Fairness metrics vary widely across works, with different studies optimizing distinct metrics. For instance, some work~\cite{fairrec, TaoSIGIRAP} employs proportional fairness, \citet{do2022optimizing} focuses on the Gini Index, other work~\cite{xu2023p} prioritizes MMF, and TaxRank~\cite{TaxRank} optimizes $\alpha$-fairness. However, these approaches rely on single fairness metrics, which limits their ability to provide a comprehensive evaluation. Previous work on re-ranking methods to improve item fairness can be divided into \begin{enumerate*}[label=(\roman*)] \item regularized methods, which use a multi-task optimization approach with a linear combination of accuracy and fairness loss functions, incorporating a trade-off coefficient $\lambda$~\cite{xu2023p, do2022optimizing, cpfair}, and \item constraint-based methods, which formulate the task as a constrained optimization problem to ensure that fairness metrics do not exceed a specified threshold~\cite{wu2021tfrom, fairrecplus, zafar2019fairness, fairrec}. \end{enumerate*} Despite achieving notable performance improvements, existing fairness intervention methods are often designed to optimize specific fairness metrics and typically involve high computational costs, making them challenging to adapt to real-world industrial systems. % I don't understand this: show poor continuity and systematic controllability under a taxation perspective for fair re-ranking. % A: sorry, I don't write the related, current version is from previous work, I just re-write them %\mdr{What we add on top of prior work on the accuracy-fairness trade-off in re-ranking is \ldots} \smallskip\noindent% \textbf{An economic perspective on fair re-ranking.} In economics, resource allocation typically occurs through processes of distribution and re-distribution~\cite{lambert1992distribution}. Previous work~\cite{saito2022fair} regards fair ranking as a resource allocation problem and formulated the problem related to Nash Social Welfare in economics, see also~\citet{fairrec, fairrecplus}. TaxRank~\cite{TaxRank} regards fair re-ranking as a taxation process, which often serves as a key mechanism in the re-distribution process, enabling wealth reallocation and addressing income inequality~\cite{hanlon2010review, nerre2001concept}. However, they merely use economic objectives to define different fairness metrics, without understanding how fairness-accuracy trade-offs occur under different metrics. %In contrast, we leverage elasticity theory from economics to reinterpret how fairness-accuracy trade-offs operate, providing a unified fairness metric and an efficient optimization algorithm. %saito2022fair % In the process of re-distribution, taxation is frequently employed as a mechanism to re-distribute wealth and tackle income inequality~\cite{hanlon2010review, nerre2001concept}. % \mdr{Do we need to know about these taxation methods?} % There are usually two types of taxation method: % \begin{enumerate*}[label=(\roman*)] % \item flat tax rates, such as property tax~\cite{oates1969effects}, which are designed with different fixed rates based on the varying amounts of property; % \item progressive tax, such as income taxes~\cite{poulson2008state} and payroll taxes~\cite{brittain1971incidence}, typically involve progressive tax rates that increase as a taxpayer's earnings increase. Process tax often is regarded as a more useful but complex taxation method. % \end{enumerate*} % \mdr{It's probably more important to explain what prior work on an economic perspective on fairness and the accuracy-fairness trade-off has done, and what the gap is that that work has left.} % \mdr{What we add on top of prior work that contributed to an economic perspective on fair re-ranking is \ldots} %Previous tax policies resemble a flat tax, whereas Tax-rank adopts a closer resemblance to a progressive tax. Finally, the Tax-rank policy is more closely related to $\alpha$-fair optimization in cooperative games~\cite{bertsimas2011price, bertsimas2012efficiency}. However, these approaches are not suitable for fair re-ranking tasks.","\noindent% \textbf{Fair re-ranking.} Over the past decade, work on fair ranking tasks has rapidly grown in volume, driven by the need for a responsible and trustworthy ecosystem~\cite{lifairness, lipani2016fairness, deldjoo2022survey, xu2025fairdiversecomprehensivetoolkitfair}. Previous research often categorizes fair-aware methods into three categories based on ranking phases: pre-processing~\cite{Calmon17, xiong2024fairwasp}, in-processing~\cite{Tang23FairBias}, and post-processing (\ie re-ranking tasks)~\cite{xu2023p, fairrec}. The re-ranking phase is regarded as the most easily adaptable and practical stage in optimizing ranking systems~\cite{fairrec}. During the re-ranking phase, the concept of fairness in re-ranking depends on the stakeholders involved~\cite{abdollahpouri2020multistakeholder, abdollahpouri2019multi}. Prior work has examined user-oriented fairness~\cite{abdollahpouri2019unfairness, li2021user} and item-oriented fairness~\cite{fairrec, xu2023p, TaxRank, singh2019policy, jaenich2024fairness, TaoSIGIRAP}. In this paper, we focus on item group fairness in re-ranking tasks. \smallskip\noindent% \textbf{Metrics and algorithms in fair re-ranking.} Fairness metrics vary widely across works, with different studies optimizing distinct metrics. For instance, some work~\cite{fairrec, TaoSIGIRAP} employs proportional fairness, \citet{do2022optimizing} focuses on the Gini Index, other work~\cite{xu2023p} prioritizes MMF, and TaxRank~\cite{TaxRank} optimizes $\alpha$-fairness. However, these approaches rely on single fairness metrics, which limits their ability to provide a comprehensive evaluation. Previous work on re-ranking methods to improve item fairness can be divided into \begin{enumerate*}[label=(\roman*)] \item regularized methods, which use a multi-task optimization approach with a linear combination of accuracy and fairness loss functions, incorporating a trade-off coefficient $\lambda$~\cite{xu2023p, do2022optimizing, cpfair}, and \item constraint-based methods, which formulate the task as a constrained optimization problem to ensure that fairness metrics do not exceed a specified threshold~\cite{wu2021tfrom, fairrecplus, zafar2019fairness, fairrec}. \end{enumerate*} Despite achieving notable performance improvements, existing fairness intervention methods are often designed to optimize specific fairness metrics and typically involve high computational costs, making them challenging to adapt to real-world industrial systems. % I don't understand this: show poor continuity and systematic controllability under a taxation perspective for fair re-ranking. % A: sorry, I don't write the related, current version is from previous work, I just re-write them %\mdr{What we add on top of prior work on the accuracy-fairness trade-off in re-ranking is \ldots} \smallskip\noindent% \textbf{An economic perspective on fair re-ranking.} In economics, resource allocation typically occurs through processes of distribution and re-distribution~\cite{lambert1992distribution}. Previous work~\cite{saito2022fair} regards fair ranking as a resource allocation problem and formulated the problem related to Nash Social Welfare in economics, see also~\citet{fairrec, fairrecplus}. TaxRank~\cite{TaxRank} regards fair re-ranking as a taxation process, which often serves as a key mechanism in the re-distribution process, enabling wealth reallocation and addressing income inequality~\cite{hanlon2010review, nerre2001concept}. However, they merely use economic objectives to define different fairness metrics, without understanding how fairness-accuracy trade-offs occur under different metrics. %In contrast, we leverage elasticity theory from economics to reinterpret how fairness-accuracy trade-offs operate, providing a unified fairness metric and an efficient optimization algorithm. %saito2022fair % In the process of re-distribution, taxation is frequently employed as a mechanism to re-distribute wealth and tackle income inequality~\cite{hanlon2010review, nerre2001concept}. % \mdr{Do we need to know about these taxation methods?} % There are usually two types of taxation method: % \begin{enumerate*}[label=(\roman*)] % \item flat tax rates, such as property tax~\cite{oates1969effects}, which are designed with different fixed rates based on the varying amounts of property; % \item progressive tax, such as income taxes~\cite{poulson2008state} and payroll taxes~\cite{brittain1971incidence}, typically involve progressive tax rates that increase as a taxpayer's earnings increase. Process tax often is regarded as a more useful but complex taxation method. % \end{enumerate*} % \mdr{It's probably more important to explain what prior work on an economic perspective on fairness and the accuracy-fairness trade-off has done, and what the gap is that that work has left.} % \mdr{What we add on top of prior work that contributed to an economic perspective on fair re-ranking is \ldots} %Previous tax policies resemble a flat tax, whereas Tax-rank adopts a closer resemblance to a progressive tax. Finally, the Tax-rank policy is more closely related to $\alpha$-fair optimization in cooperative games~\cite{bertsimas2011price, bertsimas2012efficiency}. However, these approaches are not suitable for fair re-ranking tasks.","Fair re-ranking. Over the past decade, work on fair ranking tasks has rapidly grown in volume, driven by the need for a responsi- ble and trustworthy ecosystem [ 9,21,23,39]. Previous research often categorizes fair-aware methods into three categories based on ranking phases: pre-processing [ 7,37], in-processing [ 34], and post- processing ( i.e.,re-ranking tasks) [ 28,38]. The re-ranking phase is regarded as the most easily adaptable and practical stage in op- timizing ranking systems [ 28]. During the re-ranking phase, the concept of fairness in re-ranking depends on the stakeholders in- volved [ 1,2]. Prior work has examined user-oriented fairness [ 3,19] and item-oriented fairness [ 15,28,33,38,41,43]. In this paper, we focus on item group fairness in re-ranking tasks. Metrics and algorithms in fair re-ranking. Fairness metrics vary widely across works, with different studies optimizing distinct metrics. For instance, some work [ 28,43] employs proportional fairness, Do and Usunier [11] focuses on the Gini Index, other work [ 38] prioritizes MMF, and TaxRank [ 41] optimizes𝛼-fairness. However, these approaches rely on single fairness metrics, which limits their ability to provide a comprehensive evaluation. Previous work on re-ranking methods to improve item fairness can be divided into (i) regularized methods, which use a multi– task optimization approach with a linear combination of accuracy and fairness loss functions, incorporating a trade-off coefficient 𝜆[11,26,38], and (ii) constraint-based methods, which formulate the task as a constrained optimization problem to ensure that fair- ness metrics do not exceed a specified threshold [ 5,28,36,46]. Understanding Accuracy-Fairness Trade-offs in Re-ranking through Elasticity in Economics SIGIR ’25, July 13–18, 2025, Padua, Italy Table 1: Correspondence between taxation elements in eco- nomics and fair re-ranking. Economics Fair re-ranking Consumer (buy product) Users U(click items) Supplier (sell product) Item groups G(provide items) Commodity tax Fairness constraint Tax subsidies for the poor Increase ranking score for the poor Selling price (tax objective) Ranking scores (fairness objective) Elasticity on price 𝐸𝑒 Elasticity on utilities of item group 𝐸𝑟,𝑝 Despite achieving notable performance improvements, existing fairness intervention methods are often designed to optimize spe- cific fairness metrics and typically involve high computational costs, making them challenging to adapt to real-world industrial systems. An economic perspective on fair re-ranking. In economics, re- source allocation typically occurs through processes of distribution and re-distribution [ 17]. Previous work [ 32] regards fair ranking as a resource allocation problem and formulated the problem related to Nash Social Welfare in economics, see also Biswas et al . [5], Patro et al. [28] . TaxRank [ 41] regards fair re-ranking as a taxation pro- cess, which often serves as a key mechanism in the re-distribution process, enabling wealth reallocation and addressing income in- equality [ 13,27]. However, they merely use economic objectives to define different fairness metrics, without understanding how fairness-accuracy trade-offs occur under different metrics." 2504.14243v1,"Unconstrained Monotonic Calibration of Predictions in Deep Ranking Systems","Yimeng Bai, Shunyu Zhang, Yang Zhang, Hu Liu, Wentian Bao, Enyun Yu, Fuli Feng, Wenwu Ou","Ranking models primarily focus on modeling the relative order of predictions while often neglecting the significance of the accuracy of their absolute values. However, accurate absolute values are essential for certain downstream tasks, necessitating the calibration of the original predictions. To address this, existing calibration approaches typically employ predefined transformation functions with order-preserving properties to adjust the original predictions. Unfortunately, these functions often adhere to fixed forms, such as piece-wise linear functions, which exhibit limited expressiveness and flexibility, thereby constraining their effectiveness in complex calibration scenarios. To mitigate this issue, we propose implementing a calibrator using an Unconstrained Monotonic Neural Network (UMNN), which can learn arbitrary monotonic functions with great modeling power. This approach significantly relaxes the constraints on the calibrator, improving its flexibility and expressiveness while avoiding excessively distorting the original predictions by requiring monotonicity. Furthermore, to optimize this highly flexible network for calibration, we introduce a novel additional loss function termed Smooth Calibration Loss (SCLoss), which aims to fulfill a necessary condition for achieving the ideal calibration state. Extensive offline experiments confirm the effectiveness of our method in achieving superior calibration performance. Moreover, deployment in Kuaishou's large-scale online video ranking system demonstrates that the method's calibration improvements translate into enhanced business metrics. The source code is available at https://github.com/baiyimeng/UMC.","cs.IR, H.3.3, H.3.5",2025-04-19T09:35:11+00:00,2025-04-19T09:35:11+00:00,http://arxiv.org/abs/2504.14243v1,http://arxiv.org/abs/2504.14243v1,2025-04-19 09:35:11+00:00,"In this section, we examine existing research on the calibration of ranking systems in recommendation and search. The research can be classified into two categories based on the primary focus: calibrator modeling and loss reconciling. \subsection{Calibrator Modeling} The primary research focus lies in calibrator modeling, with the objective of developing powerful calibrators to refine original predictions. Early methods predominantly leverage the statistical characteristics of original predictions to perform univariate transformations that ensure global order preservation. Specifically, binning methods~\cite{HB,MBCT} involve partitioning samples into bins and adjusting each sample by assigning a statistical quantity, such as the average predicted probability of its bin, as the calibrated value. Isotonic regression methods~\cite{IR,SIR} optimize squared errors under a non-decreasing constraint to fit a univariate isotonic calibration function. Scaling methods~\cite{PlattScaling,TemperatureScaling,BetaCalib,GammaGauss,ConfCalib} directly fit predefined transformations, like the logistic function, for calibration purposes. In general, the global order preservation of these methods theoretically prevents the calibration process from affecting the ranking performance. However, the constrained parameter space of these transformations limits their effectiveness, particularly in the context of industrial deep ranking models~\cite{LiRank}. Later works delve into the exploration of utilizing neural networks for calibration, using them to adaptively learn parameters of transformation functions for samples with varying features. For instance, FAC~\cite{NeuralCalib} combines a univariate piece-wise linear model with a field-aware auxiliary neural network. AdaCalib~\cite{AdaCalib} learns isotonic function families to calibrate predictions, guided by posterior statistics. SBCR~\cite{SBCR} introduces a neural piece-wise linear model that integrates sample features directly into the learning of linear weights. However, due to the inadequate fitting performance of piece-wise linear interpolation~\cite{error}, these methods struggle to thoroughly handle multi-field calibration that involves nuanced patterns. DESC~\cite{DESC}, developed concurrently with our approach, replaces piece-wise linear functions with combinations of multiple nonlinear basis functions. Nevertheless, the calibrator expressiveness remains partially constrained, as it lacks guarantees for fitting arbitrary monotonic functions. \subsection{Loss Reconciling} Apart from the calibrator modeling aspect, there is also research focused on designing joint optimization strategies to handle point-wise calibration loss and pairwise or list-wise ranking loss, aiming to enhance compatibility between ranking and calibration. Specifically, CalSoftmax~\cite{ScaleCalib} addresses training divergence issues of the ranking loss and achieves calibrated outputs through the use of virtual candidates. JRC~\cite{JRC} utilizes two logits for click and non-click states to decouple the optimization of ranking and calibration. RCR~\cite{RCR} proposes a regression-compatible ranking approach to balance calibration and ranking accuracy. CLID~\cite{CLID} employs a calibration-compatible list-wise distillation loss to distill the teacher model's ranking ability without destroying the model's calibration ability. SBCR~\cite{SBCR} introduces a self-boosted ranking loss that utilizes dumped ranking scores obtained from the online deployed model, facilitating comparisons between samples associated with the same query and allowing for extensive shuffling of sample-level data. \bym{BBP~\cite{BBP} tackles the issue of insufficient samples for ranking loss by estimating beta distributions for users and items, generating continuously comparable ranking score labels.} However, they primarily focus on reconciling the optimization of ranking and calibration, instead of developing calibrator architectures.","In this section, we examine existing research on the calibration of ranking systems in recommendation and search. The research can be classified into two categories based on the primary focus: calibrator modeling and loss reconciling. \subsection{Calibrator Modeling} The primary research focus lies in calibrator modeling, with the objective of developing powerful calibrators to refine original predictions. Early methods predominantly leverage the statistical characteristics of original predictions to perform univariate transformations that ensure global order preservation. Specifically, binning methods~\cite{HB,MBCT} involve partitioning samples into bins and adjusting each sample by assigning a statistical quantity, such as the average predicted probability of its bin, as the calibrated value. Isotonic regression methods~\cite{IR,SIR} optimize squared errors under a non-decreasing constraint to fit a univariate isotonic calibration function. Scaling methods~\cite{PlattScaling,TemperatureScaling,BetaCalib,GammaGauss,ConfCalib} directly fit predefined transformations, like the logistic function, for calibration purposes. In general, the global order preservation of these methods theoretically prevents the calibration process from affecting the ranking performance. However, the constrained parameter space of these transformations limits their effectiveness, particularly in the context of industrial deep ranking models~\cite{LiRank}. Later works delve into the exploration of utilizing neural networks for calibration, using them to adaptively learn parameters of transformation functions for samples with varying features. For instance, FAC~\cite{NeuralCalib} combines a univariate piece-wise linear model with a field-aware auxiliary neural network. AdaCalib~\cite{AdaCalib} learns isotonic function families to calibrate predictions, guided by posterior statistics. SBCR~\cite{SBCR} introduces a neural piece-wise linear model that integrates sample features directly into the learning of linear weights. However, due to the inadequate fitting performance of piece-wise linear interpolation~\cite{error}, these methods struggle to thoroughly handle multi-field calibration that involves nuanced patterns. DESC~\cite{DESC}, developed concurrently with our approach, replaces piece-wise linear functions with combinations of multiple nonlinear basis functions. Nevertheless, the calibrator expressiveness remains partially constrained, as it lacks guarantees for fitting arbitrary monotonic functions. \subsection{Loss Reconciling} Apart from the calibrator modeling aspect, there is also research focused on designing joint optimization strategies to handle point-wise calibration loss and pairwise or list-wise ranking loss, aiming to enhance compatibility between ranking and calibration. Specifically, CalSoftmax~\cite{ScaleCalib} addresses training divergence issues of the ranking loss and achieves calibrated outputs through the use of virtual candidates. JRC~\cite{JRC} utilizes two logits for click and non-click states to decouple the optimization of ranking and calibration. RCR~\cite{RCR} proposes a regression-compatible ranking approach to balance calibration and ranking accuracy. CLID~\cite{CLID} employs a calibration-compatible list-wise distillation loss to distill the teacher model's ranking ability without destroying the model's calibration ability. SBCR~\cite{SBCR} introduces a self-boosted ranking loss that utilizes dumped ranking scores obtained from the online deployed model, facilitating comparisons between samples associated with the same query and allowing for extensive shuffling of sample-level data. \bym{BBP~\cite{BBP} tackles the issue of insufficient samples for ranking loss by estimating beta distributions for users and items, generating continuously comparable ranking score labels.} However, they primarily focus on reconciling the optimization of ranking and calibration, instead of developing calibrator architectures.","In this section, we examine existing research on the calibration of ranking systems in recommendation and search. The research can be classified into two categories based on the primary focus: calibrator modeling and loss reconciling. 2https://www.kuaishou.com Unconstrained Monotonic Calibration of Predictions in Deep Ranking Systems SIGIR ’25, July 13–18, 2025, Padua, Italy 2.1 Calibrator Modeling The primary research focus lies in calibrator modeling, with the objective of developing powerful calibrators to refine original pre- dictions. Early methods predominantly leverage the statistical char- acteristics of original predictions to perform univariate transforma- tions that ensure global order preservation. Specifically, binning methods [ 21,41] involve partitioning samples into bins and ad- justing each sample by assigning a statistical quantity, such as the average predicted probability of its bin, as the calibrated value. Iso- tonic regression methods [ 13,42] optimize squared errors under a non-decreasing constraint to fit a univariate isotonic calibration function. Scaling methods [ 16,24,26,33,46] directly fit predefined transformations, like the logistic function, for calibration purposes. In general, the global order preservation of these methods theoreti- cally prevents the calibration process from affecting the ranking performance. However, the constrained parameter space of these transformations limits their effectiveness, particularly in the con- text of industrial deep ranking models [8]. Later works delve into the exploration of utilizing neural net- works for calibration, using them to adaptively learn parameters of transformation functions for samples with varying features. For instance, FAC [ 30] combines a univariate piece-wise linear model with a field-aware auxiliary neural network. AdaCalib [ 37] learns isotonic function families to calibrate predictions, guided by pos- terior statistics. SBCR [ 44] introduces a neural piece-wise linear model that integrates sample features directly into the learning of linear weights. However, due to the inadequate fitting performance of piece-wise linear interpolation [ 9], these methods struggle to thoroughly handle multi-field calibration that involves nuanced patterns. DESC [ 40], developed concurrently with our approach, replaces piece-wise linear functions with combinations of multiple nonlinear basis functions. Nevertheless, the calibrator expressive- ness remains partially constrained, as it lacks guarantees for fitting arbitrary monotonic functions. 2.2 Loss Reconciling Apart from the calibrator modeling aspect, there is also research focused on designing joint optimization strategies to handle point- wise calibration loss and pairwise or list-wise ranking loss, aiming to enhance compatibility between ranking and calibration. Specifically, CalSoftmax [ 38] addresses training divergence is- sues of the ranking loss and achieves calibrated outputs through the use of virtual candidates. JRC [ 34] utilizes two logits for click and non-click states to decouple the optimization of ranking and calibra- tion. RCR [ 2] proposes a regression-compatible ranking approach to balance calibration and ranking accuracy. CLID [ 15] employs a calibration-compatible list-wise distillation loss to distill the teacher model’s ranking ability without destroying the model’s calibration ability. SBCR [ 44] introduces a self-boosted ranking loss that uti- lizes dumped ranking scores obtained from the online deployed model, facilitating comparisons between samples associated with the same query and allowing for extensive shuffling of sample-level data. BBP [ 28] tackles the issue of insufficient samples for ranking loss by estimating beta distributions for users and items, generat- ing continuously comparable ranking score labels. However, theyprimarily focus on reconciling the optimization of ranking and calibration, instead of developing calibrator architectures." 2504.12900v1,"FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct Preference Optimization","Mingzhe Yu, Yunshan Ma, Lei Wu, Changshuo Wang, Xue Li, Lei Meng","Personalized outfit generation aims to construct a set of compatible and personalized fashion items as an outfit. Recently, generative AI models have received widespread attention, as they can generate fashion items for users to complete an incomplete outfit or create a complete outfit. However, they have limitations in terms of lacking diversity and relying on the supervised learning paradigm. Recognizing this gap, we propose a novel framework FashionDPO, which fine-tunes the fashion outfit generation model using direct preference optimization. This framework aims to provide a general fine-tuning approach to fashion generative models, refining a pre-trained fashion outfit generation model using automatically generated feedback, without the need to design a task-specific reward function. To make sure that the feedback is comprehensive and objective, we design a multi-expert feedback generation module which covers three evaluation perspectives, \ie quality, compatibility and personalization. Experiments on two established datasets, \ie iFashion and Polyvore-U, demonstrate the effectiveness of our framework in enhancing the model's ability to align with users' personalized preferences while adhering to fashion compatibility principles. Our code and model checkpoints are available at https://github.com/Yzcreator/FashionDPO.","cs.MM, cs.IR",2025-04-17T12:41:41+00:00,2025-04-17T12:41:41+00:00,http://arxiv.org/abs/2504.12900v1,http://arxiv.org/abs/2504.12900v1,2025-04-17 12:41:41+00:00,"% 先介绍个性化的时尚穿搭推荐作品,然后再进入生成作品。 % From Introduction: However, some people may lack professional fashion knowledge, leading to confusion when confronted with rapidly changing fashion trends, making it difficult for them to put together compatible outfits that suit their style ~\cite{OGR}. Consequently, Outfit Recommendation (OR) has gained widespread application in the fashion domain ~\cite{PORGraph, PORAnchors, A-FKG}. Meanwhile, the rapid development of generative models ~\cite{DDIM, SD, controlNet, lora} has made it possible to directly recommend high-quality generated fashion products to users. \noindent \textbf{Fashion Outfit Recommendation.} In the fashion domain, Outfit Recommendation (OR)~\cite{PORGraph, PORAnchors, A-FKG} has gained widespread application. There are two requirements in fashion outfit recommendation: compatibility and personalization. Furthermore, it is also a popular task in the domain of computational fashion ~\cite{FashionRecSurvey-23}. Early works ~\cite{personalCom, PORAnchors, A-FKG} primarily focused on compatibility, aiming to retrieve already well-matched outfits for users. Some works ~\cite{PFOG, POG} attempt to introduce personalization in the recommendation process, combining a set of personalized and compatible items into an outfit that aligns with fashion styling principles. Moreover, bundle recommendation, a more generalized recommendation paradigm, subsume personalized fashion outfit recommendation as one of its applications. Multiple works~\cite{MultiCBR,EBRec,BundleMLLM} have been proposed by using graph learning, contrastive learning, as well as multimodal large language models. Despite various progress, the above works follow the retrieval paradigm and are constrained by the variety and quantity of fashion products in the dataset, making it difficult to meet users' personalized needs, especially in terms of texture and other details. However, with the rapid development of generative models ~\cite{SD, controlNet, lora}, the quality and diversity of image generation have significantly improved, making it possible to directly recommend generated custom fashion products to users. Recent work ~\cite{DiFashion} has introduced the PFITB task, which combines the user's interaction history with fashion products to generate a personalized matching outfit. %However, the limited quantity and variety of clothing in the dataset prevent it from meeting users' personalized needs ~\cite{PFOG}. \noindent \textbf{Fashion Image Generation.} It refers to the task of generating fashion-related images using deep learning models. This task is widely applied in the fashion domain, covering areas such as clothing design, virtual try-on, and personalized recommendation, among others~\cite{yang2018recommendation, Compatibility,FashionReGen24}. Previous works, such as CRAFT~\cite{CRAFT}, generate feature representations for clothes pairings and retrieve the most suitable individual clothes items from the dataset. In the virtual try-on domain, previous works ~\cite{VITON, GP-VTON} based on GANs involve generating warped clothes aligned with character, and then generating images of the character wearing the warped clothes. The diffusion models ~\cite{DCI-VTON} enhance image quality by replacing the generator in the second stage. Current work ~\cite{stableVTON} learns the semantic correspondence between the clothing and the human body within the latent space of the pre-trained diffusion model in an end-to-end manner. In the personalized recommendation domain, HMaVTON ~\cite{HMaVTON} generates diverse and well-matched fashion items to the given person. Existing personalized image generation models ~\cite{Jedi, ELITE, PathchDPO, BDPO} aim to generate images aligned with reference styles or elements, yet recommending images consistent with a user's interaction history is meaningless. % And DVBPR ~\cite{DVBPR} generates clothes images based on user preferences but is limited to generating images that are identical in shape to those in the dataset. \begin{figure*}[ht] \centering \setlength{\abovecaptionskip}{0.1cm} \setlength{\belowcaptionskip}{-0.3cm} \includegraphics[width=1.0\textwidth]{images/model_v5.pdf} \caption{ The overview of FashionDPO, which consists of three consecutive key modules: 1) Fashion Image Generation without Feedback, 2) Feedback Generation from Multiple Experts, and 3) Model Fine-tuning with Direct Preference Optimization. %We design comprehensive AI models to evaluate generated fashion items and update the model parameters using the preference information contained in the AI feedback. } \label{fig:model} \end{figure*} % 补充: % sd,controlnet应用于clothing design % try-on, 两阶段gan+dm,一阶段dm \noindent \textbf{Direct Preference Optimization.} In the field of natural language processing, Direct Preference Optimization (DPO) has been proposed to reduce training costs ~\cite{DPO}, which uses preferences rather than explicit rewards to fine-tune LLMs. This approach is also applied to the post-training of text-to-image diffusion models. Diffusion-DPO ~\cite{Diffusion-DPO} fine-tunes the generative model in a single step after receiving feedback from the Preference Evaluator. D3PO ~\cite{D3PO} assumes that the preferred outcome holds true for all time steps in the diffusion model and fine-tunes each of the time steps in the generative model based on the feedback results. It demonstrates that in diffusion models, directly updating the policy based on human preferences within an MDP is equivalent to first learning the optimal reward model and then using it to guide policy updates. SPO ~\cite{SPO} assesses preferences at each time step during the sampling process and adds noise to the preferred image to generate the noise image for the next time step. We introduce DPO into generative fashion recommendation, where learning based on preference feedback eliminates the constraints of ground truth, showcasing richer possibilities in clothing textures and details.","% 先介绍个性化的时尚穿搭推荐作品,然后再进入生成作品。 % From Introduction: However, some people may lack professional fashion knowledge, leading to confusion when confronted with rapidly changing fashion trends, making it difficult for them to put together compatible outfits that suit their style ~\cite{OGR}. Consequently, Outfit Recommendation (OR) has gained widespread application in the fashion domain ~\cite{PORGraph, PORAnchors, A-FKG}. Meanwhile, the rapid development of generative models ~\cite{DDIM, SD, controlNet, lora} has made it possible to directly recommend high-quality generated fashion products to users. \noindent \textbf{Fashion Outfit Recommendation.} In the fashion domain, Outfit Recommendation (OR)~\cite{PORGraph, PORAnchors, A-FKG} has gained widespread application. There are two requirements in fashion outfit recommendation: compatibility and personalization. Furthermore, it is also a popular task in the domain of computational fashion ~\cite{FashionRecSurvey-23}. Early works ~\cite{personalCom, PORAnchors, A-FKG} primarily focused on compatibility, aiming to retrieve already well-matched outfits for users. Some works ~\cite{PFOG, POG} attempt to introduce personalization in the recommendation process, combining a set of personalized and compatible items into an outfit that aligns with fashion styling principles. Moreover, bundle recommendation, a more generalized recommendation paradigm, subsume personalized fashion outfit recommendation as one of its applications. Multiple works~\cite{MultiCBR,EBRec,BundleMLLM} have been proposed by using graph learning, contrastive learning, as well as multimodal large language models. Despite various progress, the above works follow the retrieval paradigm and are constrained by the variety and quantity of fashion products in the dataset, making it difficult to meet users' personalized needs, especially in terms of texture and other details. However, with the rapid development of generative models ~\cite{SD, controlNet, lora}, the quality and diversity of image generation have significantly improved, making it possible to directly recommend generated custom fashion products to users. Recent work ~\cite{DiFashion} has introduced the PFITB task, which combines the user's interaction history with fashion products to generate a personalized matching outfit. %However, the limited quantity and variety of clothing in the dataset prevent it from meeting users' personalized needs ~\cite{PFOG}. \noindent \textbf{Fashion Image Generation.} It refers to the task of generating fashion-related images using deep learning models. This task is widely applied in the fashion domain, covering areas such as clothing design, virtual try-on, and personalized recommendation, among others~\cite{yang2018recommendation, Compatibility,FashionReGen24}. Previous works, such as CRAFT~\cite{CRAFT}, generate feature representations for clothes pairings and retrieve the most suitable individual clothes items from the dataset. In the virtual try-on domain, previous works ~\cite{VITON, GP-VTON} based on GANs involve generating warped clothes aligned with character, and then generating images of the character wearing the warped clothes. The diffusion models ~\cite{DCI-VTON} enhance image quality by replacing the generator in the second stage. Current work ~\cite{stableVTON} learns the semantic correspondence between the clothing and the human body within the latent space of the pre-trained diffusion model in an end-to-end manner. In the personalized recommendation domain, HMaVTON ~\cite{HMaVTON} generates diverse and well-matched fashion items to the given person. Existing personalized image generation models ~\cite{Jedi, ELITE, PathchDPO, BDPO} aim to generate images aligned with reference styles or elements, yet recommending images consistent with a user's interaction history is meaningless. % And DVBPR ~\cite{DVBPR} generates clothes images based on user preferences but is limited to generating images that are identical in shape to those in the dataset. % 补充: % sd,controlnet应用于clothing design % try-on, 两阶段gan+dm,一阶段dm \noindent \textbf{Direct Preference Optimization.} In the field of natural language processing, Direct Preference Optimization (DPO) has been proposed to reduce training costs ~\cite{DPO}, which uses preferences rather than explicit rewards to fine-tune LLMs. This approach is also applied to the post-training of text-to-image diffusion models. Diffusion-DPO ~\cite{Diffusion-DPO} fine-tunes the generative model in a single step after receiving feedback from the Preference Evaluator. D3PO ~\cite{D3PO} assumes that the preferred outcome holds true for all time steps in the diffusion model and fine-tunes each of the time steps in the generative model based on the feedback results. It demonstrates that in diffusion models, directly updating the policy based on human preferences within an MDP is equivalent to first learning the optimal reward model and then using it to guide policy updates. SPO ~\cite{SPO} assesses preferences at each time step during the sampling process and adds noise to the preferred image to generate the noise image for the next time step. We introduce DPO into generative fashion recommendation, where learning based on preference feedback eliminates the constraints of ground truth, showcasing richer possibilities in clothing textures and details.","Fashion Outfit Recommendation. In the fashion domain, Outfit Recommendation (OR) [ 20,24,50] has gained widespread appli- cation. There are two requirements in fashion outfit recommen- dation: compatibility and personalization. Furthermore, it is also a popular task in the domain of computational fashion [ 2]. Early works [ 6,24,50] primarily focused on compatibility, aiming to re- trieve already well-matched outfits for users. Some works [ 1,5] attempt to introduce personalization in the recommendation pro- cess, combining a set of personalized and compatible items into an outfit that aligns with fashion styling principles. Moreover, bundle recommendation, a more generalized recommendation paradigm, subsume personalized fashion outfit recommendation as one of its applications. Multiple works [ 7,23,27] have been proposed by using graph learning, contrastive learning, as well as multimodal large language models. Despite various progress, the above works follow the retrieval paradigm and are constrained by the variety and quantity of fashion products in the dataset, making it difficult to meet users’ personalized needs, especially in terms of texture and other details. However, with the rapid development of generative models [ 34,44,51], the quality and diversity of image generation have significantly improved, making it possible to directly recom- mend generated custom fashion products to users. Recent work [ 43] has introduced the PFITB task, which combines the user’s inter- action history with fashion products to generate a personalized matching outfit. FashionDPO: Fine-tune Fashion Outfit Generation Model using Direct Preference Optimization SIGIR ’25, July 13–18, 2025, Padua, Italy Fashion Image Generation. It refers to the task of generating fashion-related images using deep learning models. This task is widely applied in the fashion domain, covering areas such as cloth- ing design, virtual try-on, and personalized recommendation, among others [ 3,35,46]. Previous works, such as CRAFT [ 15], generate feature representations for clothes pairings and retrieve the most suitable individual clothes items from the dataset. In the virtual try-on domain, previous works [ 10,42] based on GANs involve gen- erating warped clothes aligned with character, and then generating images of the character wearing the warped clothes. The diffusion models [ 8] enhance image quality by replacing the generator in the second stage. Current work [ 16] learns the semantic correspon- dence between the clothing and the human body within the latent space of the pre-trained diffusion model in an end-to-end manner. In the personalized recommendation domain, HMaVTON [ 48] gen- erates diverse and well-matched fashion items to the given person. Existing personalized image generation models [ 14,29,41,49] aim to generate images aligned with reference styles or elements, yet recommending images consistent with a user’s interaction history is meaningless. Direct Preference Optimization. In the field of natural language processing, Direct Preference Optimization (DPO) has been pro- posed to reduce training costs [ 32], which uses preferences rather than explicit rewards to fine-tune LLMs. This approach is also ap- plied to the post-training of text-to-image diffusion models. Diffusion- DPO [ 40] fine-tunes the generative model in a single step after receiving feedback from the Preference Evaluator. D3PO [ 45] as- sumes that the preferred outcome holds true for all time steps in the diffusion model and fine-tunes each of the time steps in the generative model based on the feedback results. It demonstrates that in diffusion models, directly updating the policy based on hu- man preferences within an MDP is equivalent to first learning the optimal reward model and then using it to guide policy updates. SPO [ 21] assesses preferences at each time step during the sampling process and adds noise to the preferred image to generate the noise image for the next time step. We introduce DPO into generative fashion recommendation, where learning based on preference feed- back eliminates the constraints of ground truth, showcasing richer possibilities in clothing textures and details." 2504.09935v1,Constrained Auto-Regressive Decoding Constrains Generative Retrieval,"Shiguang Wu, Zhaochun Ren, Xin Xin, Jiyuan Yang, Mengqi Zhang, Zhumin Chen, Maarten de Rijke, Pengjie Ren","Generative retrieval seeks to replace traditional search index data structures with a single large-scale neural network, offering the potential for improved efficiency and seamless integration with generative large language models. As an end-to-end paradigm, generative retrieval adopts a learned differentiable search index to conduct retrieval by directly generating document identifiers through corpus-specific constrained decoding. The generalization capabilities of generative retrieval on out-of-distribution corpora have gathered significant attention. In this paper, we examine the inherent limitations of constrained auto-regressive generation from two essential perspectives: constraints and beam search. We begin with the Bayes-optimal setting where the generative retrieval model exactly captures the underlying relevance distribution of all possible documents. Then we apply the model to specific corpora by simply adding corpus-specific constraints. Our main findings are two-fold: (i) For the effect of constraints, we derive a lower bound of the error, in terms of the KL divergence between the ground-truth and the model-predicted step-wise marginal distributions. (ii) For the beam search algorithm used during generation, we reveal that the usage of marginal distributions may not be an ideal approach. This paper aims to improve our theoretical understanding of the generalization capabilities of the auto-regressive decoding retrieval paradigm, laying a foundation for its limitations and inspiring future advancements toward more robust and generalizable generative retrieval.",cs.IR,2025-04-14T06:54:49+00:00,2025-04-14T06:54:49+00:00,http://arxiv.org/abs/2504.09935v1,http://arxiv.org/abs/2504.09935v1,2025-04-14 06:54:49+00:00,"\label{sec:related} \headernodot{\Acf{gr}} is an emerging direction in neural information retrieval, exploring the possibility of replacing traditional index structures in retrieval systems with a single large-scale neural networks~\citep{liSurveyGenerativeIR2024,white2025surveyinformationaccess}. It leverages generative models to directly generate the relevant \acp{docid} given a query. This paradigm originated with~\citet{metzlerRethinkingSearch2021,decaoAutoregressiveEntityRetrieval2020} and has garnered considerable attention~\cite{sunLearningTokenizeGenerative2023,wangNeuralCorpusIndexer2023,liLearningRankGenerative2023,Zhuang2022BridgingTG,Zhang2023TermSetsCB,yangAutoSearchIndexer2023,tang2023semantic,tang2024generative,wuGenerativeRetrievalMultiVector2024,seal2022,tayTransformerMemoryDifferentiable2022a,dynamic-retriever2023,nguyen-2023-generative,zengScalableEffectiveGenerative2023b} in the information retrieval community. %\header{Generalization in \acl{gr}} %Although it was initially proposed for building domain experts~\citep{metzlerRethinkingSearch2021}, \ac{gr}, as a retrieval system itself, is much concerned with its generalization ability to out-of-distribution corpora~\citep{sunLearningTokenizeGenerative2023,askariFewshotIndexing2024,cont-learning-gr2023cikm,liu2024robustnessgenerative,liuRobustnessGenerativeRetrieval2023,liSurveyGenerativeIR2024,leeNonparametricDecodingGenerative2023}. Generalization remains a challenge for \ac{gr}, especially when applied to out-of-distribution corpora~\citep{sunLearningTokenizeGenerative2023,askariFewshotIndexing2024,cont-learning-gr2023cikm,liu2024robustnessgenerative,liuRobustnessGenerativeRetrieval2023,liSurveyGenerativeIR2024,leeNonparametricDecodingGenerative2023}. Previous research attributes this challenges to limited model capacity~\citep{leeNonparametricDecodingGenerative2023,yuan2024generative-memory-burden}, lack of learning in the docID construction~\citep{sunLearningTokenizeGenerative2023,yangAutoSearchIndexer2023,Zhang2023TermSetsCB}, and difficulties in learning semantic representations~\citep{tang2023semantic,wangNOVOLearnableInterpretable2023}. In contrast, our work focuses on the constrained auto-regressive decoding strategy widely applied in \ac{gr}, which is crucial for adapting \ac{gr} models to new corpora dynamically. Our setting aligns closely to the few-shot indexing approach~\citep{askariFewshotIndexing2024}, where a pre-trained \ac{llm} generates \acp{docid} solely based on its pre-trained knowledge and generalization capabilities, without additional training. We treat their method as a conceptual blueprint for a fully generalizable \ac{gr} system and aim to investigate the inference stage under this setting. % askariFewshotIndexing2024 % A highly related topic is the updatable \ac{gr}~\citep{kishoreIncDSI2023,mehtaDSIpp2023,cont-learning-gr2023cikm,guoContinualGenerative2024} % large scale gr % unified gr and nlp generation models Updatable \acl{gr} is another critical task on dynamic corpora. The primary challenges indicate the cost of updating the model with new documents and the catastrophic forgetting problem~\citep{kishoreIncDSI2023,mehtaDSIpp2023,cont-learning-gr2023cikm,guoContinualGenerative2024}. Previous efforts have concentrated on developing efficient continual learning strategies by fixing the indexing construction procedure. We consider an idealized scenario where the model has full knowledge of all possible documents and focuses solely on generating relevant \acp{docid} on dynamic corpus. \header{Constrained decoding} Constrained decoding has been widely studied for guiding machine learning models to produce outputs that satisfy specific conditions~\citep{ahmedNeuroSymbolicLearning2023,mustafaStrcutredOutputPrediction2021}. Instead of learning to satisfy through training, constraints are more often preferable only in the inference time due to the flexibility and efficiency. \citet{nishinoGeneralizationAnalysisLearning2022a,nishinoUnderstandingCV2025} demonstrate the preservation of relative errors of certain loss functions in realizable setting. We instead provide a failure case via establishing the existence of a lower-bound error for auto-regressive models operating under step-wise inference-time constraints. Recent work on \ac{ctg} in \acp{llm}~\citep[see, e.g.,][]{zhangSurveyControllableText2023} also explores imposing constraints during inference without updating the underlying model~\citep{mireshghallahControllableTextGeneration2022,mudgalControlledDecoding2025,kimCriticGuidedDecoding2023,chakrabortyPrincipledDecodingLLM2024}. However, many of these approaches do not focus on strictly enforcing constraint satisfaction. A few studies~\citep{kimGuaranteedGenerationLarge2024,honghuaLogicalControl2024,zhangTractableControlAutoregressive2023} propose methods to produce outputs that strictly adhere to constraints, mainly hard keywords inclusion constraints, using tractable probabilistic models or policy gradient techniques. Our work differs by focusing on a specific corpus-level constraint, i.e., the set of valid \acp{docid} is sampled from the complete corpus, a problem unique to retrieval tasks. \headernodot{Beam search} is a widely used heuristic algorithm for decoding structured predictors and has been applied as a non-\ac{mips} setup for large-scale retrieval systems with explicit tree structures~\citep{liTreeIndexDenseRetrieval2023,zhuTreeRecsys2018,zhuoOptimalTreeModels2020,zhuJointTreeIndexRecsys2019}. Beam search is known to have a performance deterioration, and only few works provided theoretical insights into this issue. As far as we know, only \citet{zhuoOptimalTreeModels2020} demonstrate a training-test discrepancy in tree-structured models using binary cross-entropy loss. They showed that pseudo-labeling during training does not guarantee that beam search will get the most relevant targets. In our work, we analyze the marginal distribution of an auto-regressive distribution and provide a theoretical result on the top-$1$ and top-$k$ performance under sparse relevance situations. \citet{zhuoOptimalTreeModels2020} also provide a Bayes-optimal tree structure, which is often called max-heap assumption~\citep{liTreeIndexDenseRetrieval2023}, and we will discuss the difficulty of enforcing this assumption in our setting in Section~\ref{sub:solution}. In \ac{gr}, some work have reached the same conclusion that beam search is not sufficient for retrieval as it is likely to prune the relevant \acp{docid} and the model is not able to recover from this~\citep{zengPlanningAheadGenerative2024,liCorpusLM2024,liUnigen2024}. They propose to use a hybrid retrieval strategy to help bypassing this problem. We instead focus on understanding the root cause of this problem, i.e., the usage of marginal distribution. % max heap assumption","\headernodot{\Acf{gr}} is an emerging direction in neural information retrieval, exploring the possibility of replacing traditional index structures in retrieval systems with a single large-scale neural networks~\citep{liSurveyGenerativeIR2024,white2025surveyinformationaccess}. It leverages generative models to directly generate the relevant \acp{docid} given a query. This paradigm originated with~\citet{metzlerRethinkingSearch2021,decaoAutoregressiveEntityRetrieval2020} and has garnered considerable attention~\cite{sunLearningTokenizeGenerative2023,wangNeuralCorpusIndexer2023,liLearningRankGenerative2023,Zhuang2022BridgingTG,Zhang2023TermSetsCB,yangAutoSearchIndexer2023,tang2023semantic,tang2024generative,wuGenerativeRetrievalMultiVector2024,seal2022,tayTransformerMemoryDifferentiable2022a,dynamic-retriever2023,nguyen-2023-generative,zengScalableEffectiveGenerative2023b} in the information retrieval community. %\header{Generalization in \acl{gr}} %Although it was initially proposed for building domain experts~\citep{metzlerRethinkingSearch2021}, \ac{gr}, as a retrieval system itself, is much concerned with its generalization ability to out-of-distribution corpora~\citep{sunLearningTokenizeGenerative2023,askariFewshotIndexing2024,cont-learning-gr2023cikm,liu2024robustnessgenerative,liuRobustnessGenerativeRetrieval2023,liSurveyGenerativeIR2024,leeNonparametricDecodingGenerative2023}. Generalization remains a challenge for \ac{gr}, especially when applied to out-of-distribution corpora~\citep{sunLearningTokenizeGenerative2023,askariFewshotIndexing2024,cont-learning-gr2023cikm,liu2024robustnessgenerative,liuRobustnessGenerativeRetrieval2023,liSurveyGenerativeIR2024,leeNonparametricDecodingGenerative2023}. Previous research attributes this challenges to limited model capacity~\citep{leeNonparametricDecodingGenerative2023,yuan2024generative-memory-burden}, lack of learning in the docID construction~\citep{sunLearningTokenizeGenerative2023,yangAutoSearchIndexer2023,Zhang2023TermSetsCB}, and difficulties in learning semantic representations~\citep{tang2023semantic,wangNOVOLearnableInterpretable2023}. In contrast, our work focuses on the constrained auto-regressive decoding strategy widely applied in \ac{gr}, which is crucial for adapting \ac{gr} models to new corpora dynamically. Our setting aligns closely to the few-shot indexing approach~\citep{askariFewshotIndexing2024}, where a pre-trained \ac{llm} generates \acp{docid} solely based on its pre-trained knowledge and generalization capabilities, without additional training. We treat their method as a conceptual blueprint for a fully generalizable \ac{gr} system and aim to investigate the inference stage under this setting. % askariFewshotIndexing2024 % A highly related topic is the updatable \ac{gr}~\citep{kishoreIncDSI2023,mehtaDSIpp2023,cont-learning-gr2023cikm,guoContinualGenerative2024} % large scale gr % unified gr and nlp generation models Updatable \acl{gr} is another critical task on dynamic corpora. The primary challenges indicate the cost of updating the model with new documents and the catastrophic forgetting problem~\citep{kishoreIncDSI2023,mehtaDSIpp2023,cont-learning-gr2023cikm,guoContinualGenerative2024}. Previous efforts have concentrated on developing efficient continual learning strategies by fixing the indexing construction procedure. We consider an idealized scenario where the model has full knowledge of all possible documents and focuses solely on generating relevant \acp{docid} on dynamic corpus. \header{Constrained decoding} Constrained decoding has been widely studied for guiding machine learning models to produce outputs that satisfy specific conditions~\citep{ahmedNeuroSymbolicLearning2023,mustafaStrcutredOutputPrediction2021}. Instead of learning to satisfy through training, constraints are more often preferable only in the inference time due to the flexibility and efficiency. \citet{nishinoGeneralizationAnalysisLearning2022a,nishinoUnderstandingCV2025} demonstrate the preservation of relative errors of certain loss functions in realizable setting. We instead provide a failure case via establishing the existence of a lower-bound error for auto-regressive models operating under step-wise inference-time constraints. Recent work on \ac{ctg} in \acp{llm}~\citep[see, e.g.,][]{zhangSurveyControllableText2023} also explores imposing constraints during inference without updating the underlying model~\citep{mireshghallahControllableTextGeneration2022,mudgalControlledDecoding2025,kimCriticGuidedDecoding2023,chakrabortyPrincipledDecodingLLM2024}. However, many of these approaches do not focus on strictly enforcing constraint satisfaction. A few studies~\citep{kimGuaranteedGenerationLarge2024,honghuaLogicalControl2024,zhangTractableControlAutoregressive2023} propose methods to produce outputs that strictly adhere to constraints, mainly hard keywords inclusion constraints, using tractable probabilistic models or policy gradient techniques. Our work differs by focusing on a specific corpus-level constraint, i.e., the set of valid \acp{docid} is sampled from the complete corpus, a problem unique to retrieval tasks. \headernodot{Beam search} is a widely used heuristic algorithm for decoding structured predictors and has been applied as a non-\ac{mips} setup for large-scale retrieval systems with explicit tree structures~\citep{liTreeIndexDenseRetrieval2023,zhuTreeRecsys2018,zhuoOptimalTreeModels2020,zhuJointTreeIndexRecsys2019}. Beam search is known to have a performance deterioration, and only few works provided theoretical insights into this issue. As far as we know, only \citet{zhuoOptimalTreeModels2020} demonstrate a training-test discrepancy in tree-structured models using binary cross-entropy loss. They showed that pseudo-labeling during training does not guarantee that beam search will get the most relevant targets. In our work, we analyze the marginal distribution of an auto-regressive distribution and provide a theoretical result on the top-$1$ and top-$k$ performance under sparse relevance situations. \citet{zhuoOptimalTreeModels2020} also provide a Bayes-optimal tree structure, which is often called max-heap assumption~\citep{liTreeIndexDenseRetrieval2023}, and we will discuss the difficulty of enforcing this assumption in our setting in Section~\ref{sub:solution}. In \ac{gr}, some work have reached the same conclusion that beam search is not sufficient for retrieval as it is likely to prune the relevant \acp{docid} and the model is not able to recover from this~\citep{zengPlanningAheadGenerative2024,liCorpusLM2024,liUnigen2024}. They propose to use a hybrid retrieval strategy to help bypassing this problem. We instead focus on understanding the root cause of this problem, i.e., the usage of marginal distribution. % max heap assumption","Generative retrieval ( GR)is an emerging direction in neural in- formation retrieval, exploring the possibility of replacing traditional index structures in retrieval systems with a single large-scale neural networks [ 27,54]. It leverages generative models to directly gen- erate the relevant docID s given a query. This paradigm originated with Cao et al . [9], Metzler et al . [34] and has garnered consid- erable attention [ 4,30,38,46–49,52,55,57,61,66,67,70] in the information retrieval community. Generalization remains a challenge for GR, especially when ap- plied to out-of-distribution corpora [ 2,11,22,27,31,32,46]. Pre- vious research attributes this challenges to limited model capac- ity [22,60], lack of learning in the docID construction [ 46,57,66], and difficulties in learning semantic representations [ 47,53]. In contrast, our work focuses on the constrained auto-regressive de- coding strategy widely applied in GR, which is crucial for adapting GRmodels to new corpora dynamically. Our setting aligns closely to the few-shot indexing approach [ 2], where a pre-trained LLM generates docID s solely based on its pre-trained knowledge and generalization capabilities, without additional training. We treat their method as a conceptual blueprint for a fully generalizable GR system and aim to investigate the inference stage under this setting. Updatable generative retrieval is another critical task on dynamic corpora. The primary challenges indicate the cost of updating the model with new documents and the catastrophic forgetting prob- lem [ 11,17,21,33]. Previous efforts have concentrated on devel- oping efficient continual learning strategies by fixing the indexing construction procedure. We consider an idealized scenario where the model has full knowledge of all possible documents and focuses solely on generating relevant docIDs on dynamic corpus. Constrained decoding. Constrained decoding has been widely studied for guiding machine learning models to produce outputs that satisfy specific conditions [ 1,37]. Instead of learning to sat- isfy through training, constraints are more often preferable only in the inference time due to the flexibility and efficiency. Nishino et al. [40,41]demonstrate the preservation of relative errors of certain loss functions in realizable setting. We instead provide a failure case via establishing the existence of a lower-bound error for auto-regressive models operating under step-wise inference-time constraints. Recent work on controllable text generation ( CTG ) inLLM s [see, e.g., 65] also explores imposing constraints during inference without updating the underlying model [ 10,19,35,36]. Constrained Auto-Regressive Decoding Constrains Generative Retrieval SIGIR ’25, July 13–18, 2025, Padua, Italy However, many of these approaches do not focus on strictly en- forcing constraint satisfaction. A few studies [ 20,63,64] propose methods to produce outputs that strictly adhere to constraints, mainly hard keywords inclusion constraints, using tractable proba- bilistic models or policy gradient techniques. Our work differs by focusing on a specific corpus-level constraint, i.e., the set of valid docID s is sampled from the complete corpus, a problem unique to retrieval tasks. Beam search is a widely used heuristic algorithm for decoding structured predictors and has been applied as a non-maximum in- ner product search ( MIPS ) setup for large-scale retrieval systems with explicit tree structures [ 24,68,69,71]. Beam search is known to have a performance deterioration, and only few works provided theoretical insights into this issue. As far as we know, only Zhuo et al. [71] demonstrate a training-test discrepancy in tree-structured models using binary cross-entropy loss. They showed that pseudo- labeling during training does not guarantee that beam search will get the most relevant targets. In our work, we analyze the marginal distribution of an auto-regressive distribution and provide a the- oretical result on the top- 1and top-𝑘performance under sparse relevance situations. Zhuo et al . [71] also provide a Bayes-optimal tree structure, which is often called max-heap assumption [ 24], and we will discuss the difficulty of enforcing this assumption in our setting in Section"