Dataset schema (column name, type, and observed length/value range):

| Column | Type | Range / values |
| --- | --- | --- |
| title | string | 21–128 chars |
| content_TLDR | string | 40–250 chars |
| abstract | string | 613–2.09k chars |
| authors | list | 1–42 items |
| openreview_url | string | 42 chars |
| id | string | 10 chars |
| forum | string | 10 chars |
| authorids | list | 1–42 items |
| venue | dict | — |
| venueid | dict | — |
| pdf_url | dict | — |
| invitation | string | 1 value |
| group | string | 1 value |
| venue_name | string | 1 value |
| year | int64 | 2,025 (constant) |
| conference | string | 1 value |
| content_keywords | list | 1–16 items |
| content_code_of_ethics | string | 1 value |
| content_author_guide | string | 1 value |
| content_flagged_for_ethics_review | bool | 1 class |
| content_ethics_comments | string | 11 values |
| content__bibtex | string | 246–1.01k chars |
| content_paperhash | string | 29–134 chars |
| content_supplementary_material | string | 73 values |
| content_award_nomination | bool | 1 class |
| content_reciprocal_reviewing_status | string | 1 value |
| content_reciprocal_reviewing_author | string | 4 values |
| content_reciprocal_reviewing_exemption_reason | dict | — |
Open-Qwen2VL: Compute-Efficient Pre-Training of Fully-Open Multimodal LLMs on Academic Resources
Open-Qwen2VL is a fully open-source multimodal LLM which is trained on only 5B high-quality multimodal tokens but outperforms Qwen2-VL, trained on 1.4T tokens, on various benchmarks.
The reproduction of state-of-the-art multimodal LLM pre-training faces barriers at every stage of the pipeline, including high-quality data filtering, multimodal data mixture strategies, sequence packing techniques, and training frameworks. We introduce Open-Qwen2VL, a fully open-source 2B-parameter Multimodal Large Language Model pre-trained efficiently on 29M image-text pairs using only 220 A100-40G GPU hours. Our approach employs low-to-high dynamic image resolution and multimodal sequence packing to significantly enhance pre-training efficiency. The training dataset was carefully curated using both MLLM-based filtering techniques (e.g., MLM-Filter) and conventional CLIP-based filtering methods, substantially improving data quality and training efficiency. The Open-Qwen2VL pre-training is conducted on academic-level 8xA100-40G GPUs at UCSB on 5B packed multimodal tokens, which is 0.36\% of the 1.4T multimodal pre-training tokens of Qwen2-VL. The final instruction-tuned Open-Qwen2VL outperforms the partially-open state-of-the-art MLLM Qwen2-VL-2B on various multimodal benchmarks, including MMBench, SEEDBench, MMStar, and MathVista, indicating the remarkable training efficiency of Open-Qwen2VL. We open-source all aspects of our work, including compute-efficient and data-efficient training details, data filtering methods, sequence packing scripts, pre-training data in WebDataset format, FSDP-based training codebase, and both base and instruction-tuned model checkpoints. We redefine "fully open" for multimodal LLMs as the complete release of: 1) the training codebase, 2) detailed data filtering techniques, and 3) all pre-training and supervised fine-tuning data used to develop the model.
[ "Weizhi Wang", "Yu Tian", "Linjie Yang", "Heng Wang", "Xifeng Yan" ]
https://openreview.net/forum?id=nVQmW1af6j
nVQmW1af6j
nVQmW1af6j
[ "~Weizhi_Wang1", "~Yu_Tian4", "~Linjie_Yang4", "~Heng_Wang2", "~Xifeng_Yan1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/ee174c9b1df40753c61805a156d2891016605052.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "multimodal large language model", "efficient pre-training", "high quality image-text filtering" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ wang2025openqwenvl, title={Open-Qwen2{VL}: Compute-Efficient Pre-Training of Fully-Open Multimodal {LLM}s on Academic Resources}, author={Weizhi Wang and Yu Tian and Linjie Yang and Heng Wang and Xifeng Yan}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=nVQmW1af6j} }
wang|openqwen2vl_computeefficient_pretraining_of_fullyopen_multimodal_llms_on_academic_resources
null
null
null
null
null
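The Open-Qwen2VL abstract above credits multimodal sequence packing for much of its pre-training efficiency. Below is a minimal sketch of one common way to pack variable-length examples into fixed-length bins (greedy first-fit decreasing); the function name, the 4096-token budget, and the bookkeeping format are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of greedy multimodal sequence packing (illustrative; not the
# paper's implementation). Each example is a list of token ids in which image
# patches are assumed to have already been expanded into placeholder tokens.
from typing import Dict, List

def pack_sequences(examples: List[List[int]], max_len: int = 4096) -> List[Dict]:
    """First-fit-decreasing packing: concatenate short examples into fixed-length
    bins and record per-document spans so attention can be block-masked."""
    bins: List[Dict] = []
    for tokens in sorted(examples, key=len, reverse=True):
        if len(tokens) > max_len:                 # truncate overlong examples
            tokens = tokens[:max_len]
        for b in bins:                            # first bin with enough room
            if b["length"] + len(tokens) <= max_len:
                b["doc_spans"].append((b["length"], b["length"] + len(tokens)))
                b["input_ids"].extend(tokens)
                b["length"] += len(tokens)
                break
        else:                                     # no bin fits: open a new one
            bins.append({"input_ids": list(tokens),
                         "doc_spans": [(0, len(tokens))],
                         "length": len(tokens)})
    return bins

if __name__ == "__main__":
    fake_examples = [[1] * n for n in (1200, 900, 3000, 500, 2500, 700)]
    for b in pack_sequences(fake_examples, max_len=4096):
        print(b["length"], b["doc_spans"])
```

Packing reduces padding waste, which is where most of the throughput gain comes from; the recorded spans are what a block-diagonal attention mask would be built from.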
Plancraft: an evaluation dataset for planning with LLM agents
We introduce Plancraft, a multi-modal evaluation dataset for LLMs, designed to assess tool use, planning, and decision-making with both solvable and unsolvable examples.
We present Plancraft, a multi-modal evaluation dataset for LLM agents. Plancraft has both a text-only and multi-modal interface, based on the Minecraft crafting GUI. We include the Minecraft Wiki to evaluate tool use and Retrieval Augmented Generation (RAG), as well as a handcrafted planner and Oracle Retriever, to ablate the different components of a modern agent architecture. To evaluate decision-making, Plancraft also includes a subset of examples that are intentionally unsolvable, providing a realistic challenge that requires the agent not only to complete tasks but also to decide whether they are solvable at all. We benchmark both open-source and closed-source LLMs and compare their performance and efficiency to a handcrafted planner. Overall, we find that LLMs and VLMs struggle with the planning problems that Plancraft introduces, and offer suggestions on how to improve their capabilities.
[ "Gautier Dagan", "Frank Keller", "Alex Lascarides" ]
https://openreview.net/forum?id=nSV8Depcpx
nSV8Depcpx
nSV8Depcpx
[ "~Gautier_Dagan1", "~Frank_Keller1", "~Alex_Lascarides1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/9dbb6b20cff2ad8aca8728850e46fd135d8a2353.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "planning", "multi-modal", "agents", "LLMs", "tool use", "minecraft", "RAG" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ dagan2025plancraft, title={Plancraft: an evaluation dataset for planning with {LLM} agents}, author={Gautier Dagan and Frank Keller and Alex Lascarides}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=nSV8Depcpx} }
dagan|plancraft_an_evaluation_dataset_for_planning_with_llm_agents
/attachment/73266a3ae8ba8b61a11ec99084fc33ef47b66c57.zip
null
null
null
null
Teaching Models to Understand (but not Generate) High-risk Data
A new pre-training paradigm that enables language models to understand high-risk data without learning to generate it.
Language model developers typically filter out high-risk content—such as toxic or copyrighted text—from their pre-training data to prevent models from generating similar outputs. However, removing such data altogether limits models’ ability to recognize and appropriately respond to harmful or sensitive content. In this paper, we introduce Selective Loss to Understand but Not Generate (SLUNG), a pre-training paradigm through which models learn to understand high-risk data without learning to generate it. Instead of uniformly applying the next-token prediction loss, SLUNG selectively avoids incentivizing the generation of high-risk tokens while ensuring they remain within the model's context window. As the model learns to predict low-risk tokens that follow high-risk ones, it is forced to understand the high-risk content. Through our experiments, we show that SLUNG consistently improves models' understanding of high-risk data (e.g., ability to recognize toxic content) without increasing its generation (e.g., toxicity of model responses). Overall, our SLUNG paradigm enables models to benefit from high-risk text that would otherwise be filtered out.
[ "Ryan Yixiang Wang", "Matthew Finlayson", "Luca Soldaini", "Swabha Swayamdipta", "Robin Jia" ]
https://openreview.net/forum?id=n6mTO5JS4j
n6mTO5JS4j
n6mTO5JS4j
[ "~Ryan_Yixiang_Wang1", "~Matthew_Finlayson1", "~Luca_Soldaini1", "~Swabha_Swayamdipta1", "~Robin_Jia1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/4c4b33c1dca8f5ac130e0bc60c949aed1976381b.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "LLMs", "Language Models", "Pre-training Data" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ wang2025teaching, title={Teaching Models to Understand (but not Generate) High-risk Data}, author={Ryan Yixiang Wang and Matthew Finlayson and Luca Soldaini and Swabha Swayamdipta and Robin Jia}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=n6mTO5JS4j} }
wang|teaching_models_to_understand_but_not_generate_highrisk_data
null
null
null
null
null
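The SLUNG abstract above describes its core mechanism precisely: high-risk tokens stay in the context window but contribute no next-token loss. A minimal sketch of that selective loss follows, assuming a per-token binary risk mask is already available; the function and tensor names are illustrative.

```python
# Minimal sketch of a SLUNG-style selective next-token loss (illustrative).
# High-risk tokens remain in the input (so the model must understand them to
# predict the low-risk tokens that follow) but are never loss targets.
import torch
import torch.nn.functional as F

def selective_nll(logits: torch.Tensor,      # (batch, seq, vocab)
                  input_ids: torch.Tensor,   # (batch, seq)
                  risk_mask: torch.Tensor):  # (batch, seq), 1 = high-risk token
    # Standard causal shift: position t predicts token t+1.
    shift_logits = logits[:, :-1, :]
    shift_labels = input_ids[:, 1:]
    shift_risk = risk_mask[:, 1:]            # risk label of the *predicted* token
    loss = F.cross_entropy(shift_logits.reshape(-1, shift_logits.size(-1)),
                           shift_labels.reshape(-1), reduction="none")
    keep = (shift_risk.reshape(-1) == 0).float()   # drop loss on high-risk targets
    return (loss * keep).sum() / keep.sum().clamp(min=1.0)

if __name__ == "__main__":
    torch.manual_seed(0)
    logits = torch.randn(2, 8, 32)
    ids = torch.randint(0, 32, (2, 8))
    risk = torch.zeros(2, 8, dtype=torch.long)
    risk[:, 3:5] = 1                          # pretend tokens 3-4 are high-risk
    print(selective_nll(logits, ids, risk).item())
```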
Defending LLM Watermarking Against Spoofing Attacks with Contrastive Representation Learning
The paper proposes a semantic-aware watermarking method that defends against spoofing attacks by using a contrastively trained semantic mapping model to detect semantic distortions.
Watermarking has emerged as a promising technique for detecting texts generated by LLMs. Current research has primarily focused on three design criteria -- high quality of the watermarked text, high detectability, and robustness against removal attacks. However, the security against spoofing attacks remains relatively understudied. For example, a piggyback attack can maliciously alter the meaning of watermarked text by transforming it into hate speech, while preserving the original watermark, thereby damaging the reputation of the LLM provider. We identify two core challenges that make defending against spoofing difficult: (1) the need for watermarks to be both sensitive to semantic-distorting changes and insensitive to semantic-preserving edits, and (2) the contradiction between the need to detect global semantic shifts and the local, auto-regressive nature of most watermarking schemes. To address these challenges, we propose a semantic-aware watermarking algorithm that post-hoc embeds watermarks into a given target text while preserving its original meaning. Our method introduces a semantic mapping model, contrastively trained to be sensitive to semantic-distorting changes and insensitive to semantic-preserving changes, which guides the generation of a green-red token list. Experiments on two standard benchmarks demonstrate strong robustness against removal attacks and security against spoofing attacks, including sentiment reversal and toxic content insertion, while maintaining high watermark detectability. Our approach offers a significant step toward more secure and semantically aware watermarking for LLMs.
[ "Li An", "Yujian Liu", "Yepeng Liu", "Yang Zhang", "Yuheng Bu", "Shiyu Chang" ]
https://openreview.net/forum?id=n5hmtkdl7k
n5hmtkdl7k
n5hmtkdl7k
[ "~Li_An3", "~Yujian_Liu1", "~Yepeng_Liu1", "~Yang_Zhang3", "~Yuheng_Bu1", "~Shiyu_Chang2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/3244e3dcdb7ec5c678906e32428bd780da24db80.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "LLM watermarking", "Spoofing attack", "Piggyback Spoofing attack" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ an2025defending, title={Defending {LLM} Watermarking Against Spoofing Attacks with Contrastive Representation Learning}, author={Li An and Yujian Liu and Yepeng Liu and Yang Zhang and Yuheng Bu and Shiyu Chang}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=n5hmtkdl7k} }
an|defending_llm_watermarking_against_spoofing_attacks_with_contrastive_representation_learning
null
null
null
null
null
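The watermarking abstract above combines two ingredients: a contrastively trained semantic mapping model and a green-red token list seeded from its output. A minimal sketch of both appears below; the margin-style contrastive loss, the coarse-quantization seeding trick, and all names are assumptions for illustration, not the paper's construction.

```python
# Minimal sketch (illustrative): a contrastive objective for a semantic mapping
# model, plus a green/red vocabulary partition seeded from its output so the
# partition is stable under meaning-preserving edits but shifts when meaning changes.
import torch
import torch.nn.functional as F

def contrastive_loss(anchor, preserving, distorting, margin: float = 0.5):
    """Pull semantically-preserving variants toward the anchor embedding and
    push semantically-distorting variants below a cosine-similarity margin."""
    pos = 1.0 - F.cosine_similarity(anchor, preserving)               # want ~0
    neg = F.relu(F.cosine_similarity(anchor, distorting) - (1.0 - margin))
    return (pos + neg).mean()

def green_list(sem_embedding: torch.Tensor, vocab_size: int, gamma: float = 0.5):
    """Derive the green token set from a coarsely quantized semantic embedding."""
    code = (sem_embedding * 4).round().to(torch.int64)                # coarse quantization
    seed = int(torch.abs(code).sum().item()) % (2 ** 31 - 1)
    g = torch.Generator().manual_seed(seed)
    perm = torch.randperm(vocab_size, generator=g)
    return set(perm[: int(gamma * vocab_size)].tolist())

if __name__ == "__main__":
    torch.manual_seed(0)
    a, p, d = torch.randn(4, 16), torch.randn(4, 16), torch.randn(4, 16)
    print(contrastive_loss(a, p, d).item())
    print(len(green_list(torch.randn(16), vocab_size=1000)))
```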
True Multimodal In-Context Learning Needs Attention to the Visual Context
We propose both a Dynamic Attention Reallocation tuning algorithm and a dedicated dataset to improve the true MICL ability of MLLMs
Multimodal Large Language Models (MLLMs), built on powerful language backbones, have enabled Multimodal In-Context Learning (MICL)—adapting to new tasks from a few multimodal demonstrations consisting of images, questions, and answers. Despite showing noticeable improvement on standard vision-language datasets, current MLLMs struggle to leverage visual information in the demonstrations. Specifically, they tend to neglect visual cues and over-rely on textual patterns, leading to mere text imitation rather than genuine multimodal adaptation. This behavior makes MICL still unimodal and largely restricts its practical utility. More importantly, this limitation is often concealed by the improved performance on tasks that do not require understanding the visual context. As a result, how to effectively enhance MICL ability and reliably evaluate the MICL performance remains underexplored. To address these issues, we first introduce Dynamic Attention Reallocation (DARA), an efficient fine-tuning strategy that encourages models to attend to the visual context by rebalancing attention across visual and textual tokens. In addition, we present TrueMICL, an MICL-dedicated dataset with both support and test sets that explicitly requires the integration of multimodal information—particularly visual content—for correct task completion. Extensive experiments demonstrate the effectiveness of our holistic solution, showcasing substantial improvements in the MICL capabilities. Code and datasets are available at [here](https://chenxshuo.github.io/true-micl-colm/).
[ "Shuo Chen", "Jianzhe Liu", "Zhen Han", "Yan Xia", "Daniel Cremers", "Philip Torr", "Volker Tresp", "Jindong Gu" ]
https://openreview.net/forum?id=n4JdyBGu6T
n4JdyBGu6T
n4JdyBGu6T
[ "~Shuo_Chen12", "~Jianzhe_Liu1", "~Zhen_Han3", "~Yan_Xia5", "~Daniel_Cremers1", "~Philip_Torr1", "~Volker_Tresp1", "~Jindong_Gu1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/da3367f29bfb2c448b1ae58988478c487bd06b7c.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Multimodal In-Context Learning", "Multimodal Large Language Models", "Vision Language Models" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ chen2025true, title={True Multimodal In-Context Learning Needs Attention to the Visual Context}, author={Shuo Chen and Jianzhe Liu and Zhen Han and Yan Xia and Daniel Cremers and Philip Torr and Volker Tresp and Jindong Gu}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=n4JdyBGu6T} }
chen|true_multimodal_incontext_learning_needs_attention_to_the_visual_context
null
null
null
null
null
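The DARA abstract above says attention is rebalanced across visual and textual tokens with a small fine-tuned component. One plausible reading is a set of learnable per-head scales on the attention given to visual keys; the sketch below follows that reading, and the parameterization and names are assumptions rather than the paper's actual design.

```python
# Minimal sketch (illustrative) of re-weighting attention toward visual tokens
# with learnable per-head factors; the real DARA parameterization may differ.
import torch

def rebalanced_attention(q, k, v, visual_mask, log_alpha):
    """q, k, v: (batch, heads, seq, dim); visual_mask: (batch, seq) with 1 for
    visual tokens; log_alpha: (heads,) learnable log-scales for visual keys."""
    d = q.size(-1)
    scores = torch.einsum("bhqd,bhkd->bhqk", q, k) / d ** 0.5
    # Adding a per-head bias to every visual key column scales its pre-softmax
    # weight by exp(log_alpha) before renormalization.
    bias = log_alpha.view(1, -1, 1, 1) * visual_mask.view(visual_mask.size(0), 1, 1, -1)
    attn = torch.softmax(scores + bias, dim=-1)
    return torch.einsum("bhqk,bhkd->bhqd", attn, v)

if __name__ == "__main__":
    torch.manual_seed(0)
    B, H, S, D = 1, 4, 10, 8
    q, k, v = (torch.randn(B, H, S, D) for _ in range(3))
    vis = torch.zeros(B, S); vis[:, :4] = 1           # first 4 tokens are image tokens
    log_alpha = torch.nn.Parameter(torch.zeros(H))    # tuned during fine-tuning
    print(rebalanced_attention(q, k, v, vis, log_alpha).shape)
```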
Mixture of Attention Spans: Optimizing LLM Inference Efficiency with Heterogeneous Sliding-Window Lengths
We design heterogeneous elastic rules for sliding-window lengths of attention for efficient large language models
Sliding-window attention offers a hardware-efficient solution to the memory and throughput challenges of Large Language Models (LLMs) in long-context scenarios. Existing methods typically employ a single window length across all attention heads and input sizes. However, this uniform approach fails to capture the heterogeneous attention patterns inherent in LLMs, ignoring their distinct accuracy-latency trade-offs. To address this challenge, we propose *Mixture of Attention Spans* (MoA), which automatically tailors distinct sliding-window length configurations to different heads and layers. MoA constructs and navigates a search space of various window lengths and their scaling rules relative to input sizes. It profiles the model, evaluates potential configurations, and pinpoints the optimal length configurations for each head. MoA adapts to varying input sizes, revealing that some attention heads expand their focus to accommodate longer inputs, while other heads consistently concentrate on fixed-length local contexts. Experiments show that MoA increases the effective context length by 3.9× with the same average sliding-window length, boosting retrieval accuracy by 1.5-7.1× over the uniform-window baseline across Vicuna-{7B,13B}, and Llama3-{8B,70B} models. Moreover, MoA narrows the performance gap with full attention, reducing the maximum relative performance drop from 9%-36% to within 5% across three long-context understanding benchmarks. MoA achieves a 1.2-1.4× GPU memory reduction, boosting decode throughput by 6.6-8.2× and 1.7-1.9× over FlashAttention2 and vLLM, with minimal performance impact. Our code is available at https://github.com/thu-nics/MoA.
[ "Tianyu Fu", "Haofeng Huang", "Xuefei Ning", "Genghan Zhang", "Boju Chen", "Tianqi Wu", "Hongyi Wang", "Zixiao Huang", "Shiyao Li", "Shengen Yan", "Guohao Dai", "Huazhong Yang", "Yu Wang" ]
https://openreview.net/forum?id=n3rZJrWPLE
n3rZJrWPLE
n3rZJrWPLE
[ "~Tianyu_Fu3", "~Haofeng_Huang3", "~Xuefei_Ning1", "~Genghan_Zhang1", "~Boju_Chen1", "~Tianqi_Wu2", "~Hongyi_Wang8", "~Zixiao_Huang2", "~Shiyao_Li2", "~Shengen_Yan1", "~Guohao_Dai4", "~Huazhong_Yang2", "~Yu_Wang3" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/c9da10b21088f5f6d7b33ab09e49c617dd22c0c3.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Efficient Attention", "Sparse Attention", "KV Cache Management", "Large Language Models", "Efficiency" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ fu2025mixture, title={Mixture of Attention Spans: Optimizing {LLM} Inference Efficiency with Heterogeneous Sliding-Window Lengths}, author={Tianyu Fu and Haofeng Huang and Xuefei Ning and Genghan Zhang and Boju Chen and Tianqi Wu and Hongyi Wang and Zixiao Huang and Shiyao Li and Shengen Yan and Guohao Dai and Huazhong Yang and Yu Wang}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=n3rZJrWPLE} }
fu|mixture_of_attention_spans_optimizing_llm_inference_efficiency_with_heterogeneous_slidingwindow_lengths
null
null
null
null
null
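The MoA abstract above assigns each attention head its own sliding-window length, with rules that scale the window as the input grows. A minimal sketch of per-head window masks plus a linear elastic rule follows; the linear form and all names are illustrative assumptions, not the paper's searched rules.

```python
# Minimal sketch (illustrative): causal attention where each head gets its own
# sliding-window length, optionally scaled with input length.
import torch

def per_head_window_mask(seq_len: int, windows: torch.Tensor) -> torch.Tensor:
    """windows: (heads,) window length per head. Returns a (heads, seq, seq)
    bool mask that is True where attention is allowed (causal and in-window)."""
    q_idx = torch.arange(seq_len).view(1, -1, 1)
    k_idx = torch.arange(seq_len).view(1, 1, -1)
    dist = q_idx - k_idx                               # >= 0 only for causal pairs
    return (dist >= 0) & (dist < windows.view(-1, 1, 1))

def elastic_windows(base, base_len: int, seq_len: int, slopes):
    """Assumed linear rule: w(N) = base + slope * (N - base_len), per head."""
    return (base + slopes * (seq_len - base_len)).clamp(min=1).round().long()

if __name__ == "__main__":
    heads, seq_len = 4, 12
    base = torch.tensor([4.0, 4.0, 8.0, 12.0])         # heterogeneous per-head windows
    slopes = torch.tensor([0.0, 0.5, 0.0, 1.0])        # some heads expand with length
    w = elastic_windows(base, base_len=12, seq_len=seq_len, slopes=slopes)
    mask = per_head_window_mask(seq_len, w)
    scores = torch.randn(heads, seq_len, seq_len)
    attn = torch.softmax(scores.masked_fill(~mask, float("-inf")), dim=-1)
    print(w.tolist(), attn.shape)
```

The point of the heterogeneity is that memory is budgeted per head rather than uniformly, so heads that genuinely need long-range context can keep it while local heads stay cheap.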
Fluid Language Model Benchmarking
Fluid Benchmarking improves evaluation by adapting items to the capability level of language models.
Language model (LM) benchmarking faces several challenges: comprehensive evaluations are costly, benchmarks often fail to measure the intended capabilities, and evaluation quality can degrade due to labeling errors and benchmark saturation. Although various strategies have been proposed to mitigate these issues, they tend to address individual aspects in isolation, neglecting broader questions about overall evaluation quality. Here, we introduce Fluid Benchmarking, a new evaluation approach that advances LM benchmarking across multiple dimensions. Inspired by psychometrics, Fluid Benchmarking is based on the insight that the relative value of benchmark items depends on an LM's capability level, suggesting that evaluation should adapt to each LM. Methodologically, Fluid Benchmarking estimates an item response model based on existing LM evaluation results and uses the inferred quantities to select evaluation items dynamically, similar to computerized adaptive testing in education. In our experiments, we compare Fluid Benchmarking against the common practice of random item sampling as well as more sophisticated baselines, including alternative methods grounded in item response theory. We examine four dimensions—efficiency, validity, variance, and saturation—and find that Fluid Benchmarking achieves superior performance in all of them (e.g., higher validity and less variance on MMLU with fifty times fewer items). Our analysis shows that the two components of Fluid Benchmarking have distinct effects: item response theory, used to map performance into a latent ability space, increases validity, while dynamic item selection reduces variance. Overall, our results suggest that LM benchmarking can be substantially improved by moving beyond static evaluation.
[ "Valentin Hofmann", "David Heineman", "Ian Magnusson", "Kyle Lo", "Jesse Dodge", "Maarten Sap", "Pang Wei Koh", "Chun Wang", "Hannaneh Hajishirzi", "Noah A. Smith" ]
https://openreview.net/forum?id=mxcCg9YRqj
mxcCg9YRqj
mxcCg9YRqj
[ "~Valentin_Hofmann1", "~David_Heineman1", "~Ian_Magnusson1", "~Kyle_Lo1", "~Jesse_Dodge1", "~Maarten_Sap1", "~Pang_Wei_Koh1", "~Chun_Wang8", "~Hannaneh_Hajishirzi1", "~Noah_A._Smith2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/998cc5e4815d857cf1218ede711a4cc0e044ac3d.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "language models", "evaluation", "item response theory", "efficiency", "robustness" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ hofmann2025fluid, title={Fluid Language Model Benchmarking}, author={Valentin Hofmann and David Heineman and Ian Magnusson and Kyle Lo and Jesse Dodge and Maarten Sap and Pang Wei Koh and Chun Wang and Hannaneh Hajishirzi and Noah A. Smith}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=mxcCg9YRqj} }
hofmann|fluid_language_model_benchmarking
null
true
This submission is NOT exempt from the Reciprocal Reviewing requirement. (We expect most submissions to fall in this category.)
~Valentin_Hofmann1
{ "readers": [ "colmweb.org/COLM/2025/Conference", "colmweb.org/COLM/2025/Conference/Submission1586/Authors" ] }
Rethinking Multilingual Continual Pretraining: Data Mixing for Adapting LLMs Across Languages and Resources
This study evaluates 36 continual pretraining configurations across three multilingual LLMs and 30+ languages, analyzing their effects across resource levels and language behavior categories.
Large Language Models (LLMs) exhibit significant disparities in performance across languages, primarily benefiting high-resource languages while marginalizing underrepresented ones. Continual Pretraining (CPT) has emerged as a promising approach to address this imbalance, although the relative effectiveness of monolingual, bilingual, and code-augmented data strategies remains unclear. This study systematically evaluates 36 CPT configurations involving three multilingual base models, across 30+ languages categorized as altruistic, selfish, and stagnant, spanning various resource levels. Our findings reveal three major insights: (1) Bilingual CPT improves multilingual classification but often causes language mixing issues during generation. (2) Including programming code data during CPT consistently enhances multilingual classification accuracy and language modeling capabilities, particularly benefiting low-resource languages, but introduces a trade-off by slightly degrading generation quality. (3) Contrary to prior work, we observe substantial deviations from language classifications according to their impact on cross-lingual transfer: Languages classified as altruistic often negatively affect related languages, selfish languages show conditional and configuration-dependent behavior, and stagnant languages demonstrate surprising adaptability under certain CPT conditions. These nuanced interactions emphasize the complexity of multilingual representation learning, underscoring the importance of systematic studies on generalizable language classification to inform future multilingual CPT strategies.
[ "Zihao Li", "Shaoxiong Ji", "Hengyu Luo", "Jörg Tiedemann" ]
https://openreview.net/forum?id=mpTIzK4Zca
mpTIzK4Zca
mpTIzK4Zca
[ "~Zihao_Li13", "~Shaoxiong_Ji1", "~Hengyu_Luo1", "~Jörg_Tiedemann1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/574bedec3f3e2b12092d3af357d15c9c990d9a65.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Multilingual continual pretraining", "data mixing" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ li2025rethinking, title={Rethinking Multilingual Continual Pretraining: Data Mixing for Adapting {LLM}s Across Languages and Resources}, author={Zihao Li and Shaoxiong Ji and Hengyu Luo and J{\"o}rg Tiedemann}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=mpTIzK4Zca} }
li|rethinking_multilingual_continual_pretraining_data_mixing_for_adapting_llms_across_languages_and_resources
null
null
null
null
null
SPIN-Bench: How Well Do LLMs Plan Strategically and Reason Socially?
SPIN-Bench tests LLMs on strategic planning and social intelligence, showing they struggle with complex multi-agent scenarios requiring extended reasoning and negotiation.
Reasoning and strategic behavior in social interactions are a hallmark of intelligence. This form of reasoning is significantly more sophisticated than isolated planning or reasoning tasks in static settings (e.g., math problem solving). In this paper, we present Strategic Planning, Interaction, and Negotiation (SPIN-Bench), a new multi-domain evaluation designed to measure the intelligence of strategic planning and social reasoning. While many existing benchmarks focus on narrow planning or single-agent reasoning, SPIN-Bench combines classical PDDL tasks, competitive board games, cooperative card games, and multi-agent negotiation scenarios in one unified framework. The framework includes both a benchmark and an arena that simulates and evaluates a variety of social settings to test the reasoning and strategic behavior of AI agents. We formulate the benchmark SPIN-Bench by systematically varying action spaces, state complexity, and the number of interacting agents to simulate a variety of social settings where success depends not only on methodical, step-wise decision making, but also on conceptual inference about other (adversarial or cooperative) participants. Our experiments reveal that while contemporary LLMs handle basic fact retrieval and short-range planning reasonably well, they encounter significant performance bottlenecks in tasks requiring deep multi-hop reasoning over large state spaces and socially adept coordination under uncertainty. We envision SPIN-Bench as a catalyst for future research on robust multi-agent planning, social reasoning, and human--AI teaming.
[ "Jianzhu Yao", "Kevin Wang", "Ryan Hsieh", "Haisu Zhou", "Tianqing Zou", "Zerui Cheng", "Zhangyang Wang", "Pramod Viswanath" ]
https://openreview.net/forum?id=mgsS73kvOA
mgsS73kvOA
mgsS73kvOA
[ "~Jianzhu_Yao1", "~Kevin_Wang4", "~Ryan_Hsieh2", "~Haisu_Zhou1", "~Tianqing_Zou1", "~Zerui_Cheng1", "~Zhangyang_Wang1", "~Pramod_Viswanath2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/527b4ef621918badbf88541f24daecd678a6888d.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Large Language Models", "Planning", "Game", "Benchmark", "Social Intelligence" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ yao2025spinbench, title={{SPIN}-Bench: How Well Do {LLM}s Plan Strategically and Reason Socially?}, author={Jianzhu Yao and Kevin Wang and Ryan Hsieh and Haisu Zhou and Tianqing Zou and Zerui Cheng and Zhangyang Wang and Pramod Viswanath}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=mgsS73kvOA} }
yao|spinbench_how_well_do_llms_plan_strategically_and_reason_socially
/attachment/b27f210e2adf29e68a923dd43e9fb24d62894bf2.zip
null
null
null
null
Improving Fisher Information Estimation and Efficiency for LoRA-based LLM Unlearning
We propose VILA, a scalable and efficient unlearning method for LLMs that addresses FILA's limitations by improving parameter importance estimation and reducing computational overhead.
LLMs have demonstrated remarkable performance across various tasks but face challenges related to unintentionally generating outputs containing sensitive information. A straightforward approach to address this issue is to retrain the model after excluding the problematic data. However, this approach incurs prohibitively high computational costs. To overcome this limitation, machine unlearning has emerged as a promising solution that can effectively remove sensitive information without the need to retrain the model from scratch. Recently, FILA has been proposed as a parameter-efficient unlearning method by integrating LoRA adapters. Specifically, it calculates the Fisher information to identify parameters associated with the forget set and assigns them to LoRA adapters for updates. Despite its innovative approach, FILA still requires access to all model parameters and does not adequately account for fundamental assumptions underlying Fisher information, leading to inaccuracies in importance estimation. To address these limitations, we propose VILA, a novel unlearning framework that explicitly considers the assumptions overlooked in FILA, thereby enhancing the accuracy of parameter identification for the forget set. Moreover, VILA significantly reduces computational costs by enabling parameter identification without accessing the entire model. Our method achieves up to 100× higher parameter efficiency and 40× faster training speed compared to FILA, and sets new state-of-the-art performance on benchmarks including TOFU, WMDP, and MUSE. Our code is available at https://github.com/kyj93790/VILA.
[ "Yejin Kim", "Eunwon Kim", "Buru Chang", "Junsuk Choe" ]
https://openreview.net/forum?id=mTJW8Y1nd8
mTJW8Y1nd8
mTJW8Y1nd8
[ "~Yejin_Kim7", "~Eunwon_Kim1", "~Buru_Chang1", "~Junsuk_Choe1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/8cc2caa1fac4c87a9c4e3277fd4b747ae7b3802f.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Machine Unlearning", "Large Language Model" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ kim2025improving, title={Improving Fisher Information Estimation and Efficiency for Lo{RA}-based {LLM} Unlearning}, author={Yejin Kim and Eunwon Kim and Buru Chang and Junsuk Choe}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=mTJW8Y1nd8} }
kim|improving_fisher_information_estimation_and_efficiency_for_lorabased_llm_unlearning
null
null
null
null
null
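The VILA abstract above builds on FILA's use of Fisher information to find parameters associated with the forget set. The sketch below shows the general diagonal (empirical) Fisher scoring recipe that this line of work starts from; VILA's corrections to the underlying assumptions and its avoidance of full-model access are not reproduced, and the names and the top-1% threshold are illustrative.

```python
# Minimal sketch (illustrative) of diagonal empirical-Fisher scoring: average the
# squared gradient of the loss over forget-set batches to rank parameters by
# their association with the forget set, then flag the top fraction for updates.
import torch
import torch.nn as nn

def diagonal_fisher(model: nn.Module, forget_batches, loss_fn):
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters() if p.requires_grad}
    for inputs, targets in forget_batches:
        model.zero_grad()
        loss_fn(model(inputs), targets).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    return {n: f / max(len(forget_batches), 1) for n, f in fisher.items()}

if __name__ == "__main__":
    torch.manual_seed(0)
    model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
    batches = [(torch.randn(8, 16), torch.randint(0, 4, (8,))) for _ in range(4)]
    fisher = diagonal_fisher(model, batches, nn.CrossEntropyLoss())
    scores = torch.cat([f.flatten() for f in fisher.values()])
    threshold = torch.quantile(scores, 0.99)          # top-1% most forget-relevant weights
    print({n: (f >= threshold).float().mean().item() for n, f in fisher.items()})
```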
ICQuant: Index Coding enables Low-bit LLM Quantization
This paper presents ICQuant, a novel framework that leverages outlier statistics to design an efficient index coding scheme for low-bit weight-only quantization.
The rapid deployment of Large Language Models (LLMs) highlights the need for efficient low-bit post-training quantization (PTQ) due to their high memory costs. A key challenge in weight quantization is the presence of outliers, which inflate quantization ranges and lead to large errors. While a number of outlier suppression techniques have been proposed, they either fail to effectively shrink the quantization range or incur (relatively) high bit overhead. In this paper, we present ICQuant, a novel framework that leverages outlier statistics to design an efficient index coding scheme for outlier-aware weight-only quantization. Compared to existing outlier suppression techniques requiring $\approx 1$ bit overhead to halve the quantization range, ICQuant requires only $\approx 0.25$ bits; a significant saving in low-bit regimes (e.g., 2-3 bits). ICQuant can be used on top of any existing quantizer to eliminate outliers, improving the quantization quality. Using just 2.3 bits per weight and simple scalar quantizers, ICQuant improves the zero-shot accuracy of the 2-bit Llama3-70B model by up to 130\% and 150\% relative to QTIP (Tseng et al. (2024b)) and QuIP\# (Tseng et al. (2024a)), respectively; and it achieves comparable performance to the best-known fine-tuned quantizer (Malinovskii et al. (2024)) without any fine-tuning.
[ "Xinlin Li", "Osama Hanna", "Christina Fragouli", "Suhas Diggavi" ]
https://openreview.net/forum?id=m6nBgFSMTL
m6nBgFSMTL
m6nBgFSMTL
[ "~Xinlin_Li3", "~Osama_Hanna1", "~Christina_Fragouli1", "~Suhas_Diggavi1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/ed1fe4cea396c575853e69ba69b116e193e547d6.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "LLM Quantization", "LLM Compression", "Post Training Quantization" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ li2025icquant, title={{ICQ}uant: Index Coding enables Low-bit {LLM} Quantization}, author={Xinlin Li and Osama Hanna and Christina Fragouli and Suhas Diggavi}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=m6nBgFSMTL} }
li|icquant_index_coding_enables_lowbit_llm_quantization
null
null
null
null
null
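The ICQuant abstract above is built on separating rare outlier weights from the inlier range so the main quantization grid stays tight. The sketch below shows only that two-range split with explicit outlier indices; ICQuant's actual index coding packs outlier positions far more compactly (around 0.25 bits per weight), and the 5% threshold and names here are assumptions for illustration.

```python
# Minimal sketch (illustrative) of outlier-aware two-range quantization: rare
# outliers get their own scale and are located via stored indices, keeping the
# inlier quantization range narrow.
import numpy as np

def two_range_quantize(w: np.ndarray, bits: int = 2, outlier_frac: float = 0.05):
    thresh = np.quantile(np.abs(w), 1.0 - outlier_frac)
    outlier_idx = np.flatnonzero(np.abs(w) > thresh)        # positions to index-code
    inlier_idx = np.setdiff1d(np.arange(w.size), outlier_idx)
    levels = 2 ** bits - 1

    def quant(x):
        lo, hi = float(x.min()), float(x.max())
        scale = (hi - lo) / levels if hi > lo else 1.0
        return np.round((x - lo) / scale) * scale + lo

    w_hat = np.empty_like(w)
    w_hat[inlier_idx] = quant(w[inlier_idx])                # tight range, low error
    if outlier_idx.size:
        w_hat[outlier_idx] = quant(w[outlier_idx])          # separate coarse range
    return w_hat, outlier_idx

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(0, 1, 4096)
    w[rng.choice(4096, 40, replace=False)] *= 20            # inject outliers
    w_hat, _ = two_range_quantize(w, bits=2)
    baseline, _ = two_range_quantize(w, bits=2, outlier_frac=0.0)
    print("with split  :", np.mean((w - w_hat) ** 2))
    print("single range:", np.mean((w - baseline) ** 2))
```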
LLM Unlearning Without an Expert Curated Dataset
Synthetic forget set to replace expert-curation for LLM unlearning.
Modern large language models often encode sensitive, harmful, or copyrighted knowledge, raising the need for post-hoc unlearning—the ability to remove specific domains of knowledge from a model without full retraining. A major bottleneck in current unlearning pipelines is constructing effective forget sets—datasets that approximate the target domain and guide the model to forget it. In this work, we introduce a scalable, automated approach to generate high-quality forget sets using language models themselves. Our method synthesizes textbook-style data through a structured prompting pipeline, requiring only a domain name as input. Through experiments on unlearning biosecurity, cybersecurity, and Harry Potter novels, we show that our synthetic datasets consistently outperform the baseline synthetic alternatives and are comparable to the expert-curated ones. Additionally, ablation studies reveal that the multi-step generation pipeline significantly boosts data diversity, which in turn improves unlearning utility. Overall, our findings suggest that synthetic datasets offer a promising path toward practical, scalable unlearning for a wide range of emerging domains without the need for manual intervention. We release our code and dataset at [https://github.com/xyzhu123/Synthetic_Textbook](https://github.com/xyzhu123/Synthetic_Textbook).
[ "Xiaoyuan Zhu", "Muru Zhang", "Ollie Liu", "Robin Jia", "Willie Neiswanger" ]
https://openreview.net/forum?id=m4F3kQCfGX
m4F3kQCfGX
m4F3kQCfGX
[ "~Xiaoyuan_Zhu2", "~Muru_Zhang1", "~Ollie_Liu1", "~Robin_Jia1", "~Willie_Neiswanger2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/0a5710cc0d3e8a268ba4e8727366abd1bd3e6b14.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "NLP", "Machine Unlearning", "Synthetic Data Generation" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ zhu2025llm, title={{LLM} Unlearning Without an Expert Curated Dataset}, author={Xiaoyuan Zhu and Muru Zhang and Ollie Liu and Robin Jia and Willie Neiswanger}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=m4F3kQCfGX} }
zhu|llm_unlearning_without_an_expert_curated_dataset
null
null
null
null
null
How does Watermarking Affect Visual Language Models in Document Understanding?
Investigation of the robustness of VLMs for the document understanding task
Visual Language Models (VLMs) have become foundational models for document understanding tasks, widely used in the processing of complex multimodal documents across domains such as finance, law, and academia. However, documents often contain noise-like information, such as watermarks, which inevitably leads us to inquire: Do watermarks degrade the performance of VLMs in document understanding? To address this, we propose a novel evaluation framework to investigate the effect of visible watermarks on VLM performance. The framework takes into account various factors, including different types of document data, the positions of watermarks within documents, and variations in watermark content. Our experimental results reveal that VLM performance can be significantly compromised by watermarks, with performance drop rates reaching up to 36\%. We discover that \emph{scattered} watermarks cause stronger interference than centralized ones, and that \emph{semantic content} in watermarks creates greater disruption than simple visual occlusion. Through attention mechanism analysis and embedding similarity examination, we find that the performance drops are mainly attributable to watermarks 1) forcing widespread attention redistribution, and 2) altering semantic representations in the embedding space. Our research not only highlights significant challenges in deploying VLMs for document understanding, but also provides insights towards developing robust inference mechanisms on watermarked documents.
[ "Chunxue Xu", "Yiwei Wang", "Bryan Hooi", "Yujun Cai", "Songze Li" ]
https://openreview.net/forum?id=lvQwn8eiRf
lvQwn8eiRf
lvQwn8eiRf
[ "~Chunxue_Xu1", "~Yiwei_Wang2", "~Bryan_Hooi1", "~Yujun_Cai1", "~Songze_Li1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/2d9289a5e67f0efe5aad593f326b2bedf28bf50f.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "VLMs", "Document Understanding", "Robustness" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ xu2025how, title={How does Watermarking Affect Visual Language Models in Document Understanding?}, author={Chunxue Xu and Yiwei Wang and Bryan Hooi and Yujun Cai and Songze Li}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=lvQwn8eiRf} }
xu|how_does_watermarking_affect_visual_language_models_in_document_understanding
null
null
null
null
null
DynaSaur: Large Language Agents Beyond Predefined Actions
We propose a flexible LLM agent framework for open-ended environments, where it dynamically generates new actions when existing ones are insufficient. These actions accumulate over time for future reuse.
Existing LLM agent systems typically select actions from a fixed and predefined set at every step. While effective in closed, narrowly scoped environments, this approach presents two major challenges for real-world, open-ended scenarios: (1) it significantly restricts the planning and acting capabilities of LLM agents, and (2) it requires substantial human effort to enumerate and implement all possible actions, which is impractical in complex environments with a vast number of potential actions. To address these limitations, we propose an LLM agent framework that enables the dynamic creation and composition of actions in an online manner. In this framework, the agent interacts with its environment by generating and executing programs written in a general-purpose programming language. Furthermore, generated actions are accumulated over time for future reuse. Our extensive experiments across multiple benchmarks demonstrate that this framework significantly improves flexibility and outperforms prior methods that rely on a fixed action set. Notably, it enables LLM agents to adapt and recover in scenarios where predefined actions are insufficient or fail due to unforeseen edge cases.
[ "Dang Nguyen", "Viet Dac Lai", "Seunghyun Yoon", "Ryan A. Rossi", "Handong Zhao", "Ruiyi Zhang", "Puneet Mathur", "Nedim Lipka", "Yu Wang", "Trung Bui", "Franck Dernoncourt", "Tianyi Zhou" ]
https://openreview.net/forum?id=lv0cJ2pWVd
lv0cJ2pWVd
lv0cJ2pWVd
[ "~Dang_Nguyen3", "~Viet_Dac_Lai1", "~Seunghyun_Yoon1", "~Ryan_A._Rossi2", "~Handong_Zhao3", "~Ruiyi_Zhang3", "~Puneet_Mathur2", "~Nedim_Lipka1", "~Yu_Wang41", "~Trung_Bui1", "~Franck_Dernoncourt1", "~Tianyi_Zhou2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/9c0c2a1ca434928a59d8610c8e4f3cb8fe728cb9.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "LLM", "LLM agents" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ nguyen2025dynasaur, title={DynaSaur: Large Language Agents Beyond Predefined Actions}, author={Dang Nguyen and Viet Dac Lai and Seunghyun Yoon and Ryan A. Rossi and Handong Zhao and Ruiyi Zhang and Puneet Mathur and Nedim Lipka and Yu Wang and Trung Bui and Franck Dernoncourt and Tianyi Zhou}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=lv0cJ2pWVd} }
nguyen|dynasaur_large_language_agents_beyond_predefined_actions
null
null
null
null
null
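The DynaSaur abstract above replaces a fixed action set with actions generated as programs and accumulated for reuse. Below is a minimal sketch of that loop, with a stub standing in for the LLM call; sandboxed execution, error recovery, and retrieval of relevant past actions are omitted, and all names are illustrative.

```python
# Minimal sketch (illustrative) of an agent loop that treats generated Python
# functions as actions and accumulates them in a library for reuse.
from typing import Callable, Dict, Tuple

class ActionLibrary:
    def __init__(self):
        self.actions: Dict[str, Callable] = {}

    def register(self, name: str, source: str) -> Callable:
        namespace: Dict = {}
        exec(source, namespace)                  # in practice: run inside a sandbox
        self.actions[name] = namespace[name]
        return self.actions[name]

def generate_action(task: str) -> Tuple[str, str]:
    """Stand-in for the LLM: returns (function name, source code) for the task."""
    return "word_count", "def word_count(text):\n    return len(text.split())\n"

def agent_step(task: str, observation: str, library: ActionLibrary):
    name, source = generate_action(task)
    action = library.actions.get(name) or library.register(name, source)
    return action(observation)                   # execute and feed the result back

if __name__ == "__main__":
    lib = ActionLibrary()
    print(agent_step("count words", "the quick brown fox", lib))   # creates the action
    print(agent_step("count words", "hello world", lib))           # reuses it
```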
Inducing Programmatic Skills for Agentic Tasks
We propose ASI, Agent Skill Induction, which induces and applies skill programs from web navigation experiences without supervision, yielding improved correctness and efficiency.
To succeed in common digital tasks such as web navigation, agents must carry out a variety of specialized tasks such as searching for products or planning a travel route. To tackle these tasks, agents can bootstrap themselves by learning task-specific skills online through interaction with the web environment. In this work, we demonstrate that programs are an effective representation for skills. We propose agent skill induction (ASI), which allows agents to adapt themselves by inducing, verifying, and utilizing program-based skills on the fly. We start with an evaluation on the WebArena agent benchmark and show that ASI outperforms the static baseline agent and its text-skill counterpart by 23.5% and 11.3% in success rate, mainly thanks to the programmatic verification guarantee during the induction phase. ASI also improves efficiency by reducing the number of steps by 10.7–15.3% over baselines, by composing primitive actions (e.g., click) into higher-level skills (e.g., search product). We then highlight the efficacy of ASI in remaining efficient and accurate under scaled-up web activities. Finally, we examine the generalizability of induced skills when transferring between websites, and find that ASI can effectively reuse common skills while updating incompatible skills to accommodate website changes.
[ "Zora Zhiruo Wang", "Apurva Gandhi", "Graham Neubig", "Daniel Fried" ]
https://openreview.net/forum?id=lsAY6fWsog
lsAY6fWsog
lsAY6fWsog
[ "~Zora_Zhiruo_Wang1", "~Apurva_Gandhi1", "~Graham_Neubig1", "~Daniel_Fried1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/367b824215ee6bfc801df80d296736d22b5ea8f8.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "agent", "skill learning", "web navigation", "scalability", "generalization" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ wang2025inducing, title={Inducing Programmatic Skills for Agentic Tasks}, author={Zora Zhiruo Wang and Apurva Gandhi and Graham Neubig and Daniel Fried}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=lsAY6fWsog} }
wang|inducing_programmatic_skills_for_agentic_tasks
/attachment/557818dcceefef95e993d00d56cf6bd03deaf89f.zip
null
null
null
null
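The ASI abstract above attributes much of its gain to verifying induced program skills before they are reused. A minimal sketch of that induce-verify-register flow follows; the primitive actions, the verifier, and the skill are stubs over a toy state dictionary rather than a live web page, and all names are illustrative.

```python
# Minimal sketch (illustrative) of ASI-style skill handling: a candidate skill
# composed from primitive actions is only added to the library after it is
# re-executed and programmatically verified.
from typing import Callable, Dict

PRIMITIVES: Dict[str, Callable[[Dict, str], None]] = {
    "click": lambda state, target: state.setdefault("clicked", []).append(target),
    "type":  lambda state, text:  state.setdefault("typed", []).append(text),
}

def induced_search_product(state: Dict, query: str) -> None:
    """Candidate higher-level skill composed from primitive actions."""
    PRIMITIVES["click"](state, "search_box")
    PRIMITIVES["type"](state, query)
    PRIMITIVES["click"](state, "search_button")

def verify(skill: Callable, check: Callable[[Dict], bool]) -> bool:
    state: Dict = {}
    skill(state, "usb-c cable")                  # replay on the induction scenario
    return check(state)

if __name__ == "__main__":
    library: Dict[str, Callable] = {}
    ok = verify(induced_search_product,
                check=lambda s: "search_button" in s.get("clicked", []))
    if ok:                                       # only verified skills become reusable
        library["search_product"] = induced_search_product
    print(sorted(library))
```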
C3PO: Critical-Layer, Core-Expert, Collaborative Pathway Optimization for Test-Time Expert Re-Mixing
Optimizing MoE LLM pathways at test-time via reference-based expert re-weighting, boosting accuracy by 7-15%.
Mixture-of-Experts (MoE) Large Language Models (LLMs) suffer from severely sub-optimal expert pathways—our study reveals that naive expert selection learned from pretraining leaves a surprising 10-20% accuracy gap for improvement. Motivated by this observation, we develop a novel class of test-time optimization methods to re-weight or “re-mix” the experts in different layers jointly for each test sample. Since the test sample’s ground truth is unknown, we propose to optimize a surrogate objective defined by the sample’s “successful neighbors” from a reference set of samples. We introduce three surrogates and algorithms based on mode-finding, kernel regression, and the average loss of similar reference samples/tasks. To reduce the cost of optimizing whole pathways, we apply our algorithms merely to the core experts’ mixing weights in critical layers, which enjoy similar performance but save significant computation. This leads to “Critical-Layer, Core-Expert, Collaborative Pathway Optimization (C3PO)”. We apply C3PO to two recent MoE LLMs and examine it on six widely-used benchmarks. It consistently improves the base model by 7-15% in accuracy and outperforms widely used test-time learning baselines, e.g., in-context learning and prompt/prefix tuning, by a large margin. Moreover, C3PO enables MoE LLMs with 1-3B active parameters to outperform LLMs of 7-9B parameters, hence improving MoE’s advantages on efficiency. Our thorough ablation study further sheds novel insights on achieving test-time improvement on MoE.
[ "Zhongyang Li", "Ziyue Li", "Tianyi Zhou" ]
https://openreview.net/forum?id=lqC5J7pBP9
lqC5J7pBP9
lqC5J7pBP9
[ "~Zhongyang_Li5", "~Ziyue_Li1", "~Tianyi_Zhou2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/27513ccd493f683c9f199ca9978819cb08488942.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Mixture-of-Experts", "Large Language Models", "Test-Time Learning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ li2025cpo, title={C3{PO}: Critical-Layer, Core-Expert, Collaborative Pathway Optimization for Test-Time Expert Re-Mixing}, author={Zhongyang Li and Ziyue Li and Tianyi Zhou}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=lqC5J7pBP9} }
li|c3po_criticallayer_coreexpert_collaborative_pathway_optimization_for_testtime_expert_remixing
null
null
null
null
null
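The C3PO abstract above optimizes expert mixing weights at test time against a surrogate built from a sample's "successful neighbors". The sketch below implements only the simplest of the three surrogates (average loss of similar reference samples) on a toy single-layer MoE; the additive gating offset, the toy model, and the names are assumptions for illustration.

```python
# Minimal sketch (illustrative) of test-time expert re-mixing: optimize an
# additive offset on a toy MoE layer's gating logits so the loss on retrieved
# "successful neighbors" decreases, then reuse that offset for the test sample.
import torch
import torch.nn as nn

class ToyMoE(nn.Module):
    def __init__(self, d=16, n_experts=4, n_classes=3):
        super().__init__()
        self.gate = nn.Linear(d, n_experts)
        self.experts = nn.ModuleList(nn.Linear(d, n_classes) for _ in range(n_experts))

    def forward(self, x, gate_offset=None):
        logits = self.gate(x)
        if gate_offset is not None:                    # test-time re-mixing
            logits = logits + gate_offset
        w = torch.softmax(logits, dim=-1)              # (batch, n_experts)
        outs = torch.stack([e(x) for e in self.experts], dim=1)   # (batch, E, C)
        return (w.unsqueeze(-1) * outs).sum(dim=1)

def remix(model, neighbors_x, neighbors_y, steps=20, lr=0.1):
    for p in model.parameters():                       # only the offset is optimized
        p.requires_grad_(False)
    offset = torch.zeros(model.gate.out_features, requires_grad=True)
    opt = torch.optim.Adam([offset], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(model(neighbors_x, offset), neighbors_y)
        loss.backward()
        opt.step()
    return offset.detach()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = ToyMoE()
    nx, ny = torch.randn(8, 16), torch.randint(0, 3, (8,))
    offset = remix(model, nx, ny)                      # fit on "successful neighbors"
    test_x = torch.randn(1, 16)
    print(model(test_x, offset).argmax(dim=-1).item())
```

Restricting the optimized variables (here a single offset vector; in the paper, core experts in critical layers) is what keeps the test-time cost small.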
Recycling the Web: A Method to Enhance Pre-training Data Quality and Quantity for Language Models
We propose rewriting low-quality web documents to improve their utility, thereby increasing the availability of pre-training data for language models
Scaling laws predict that the performance of large language models improves with increasing model size and data size. In practice, pre-training has been relying on massive web crawls, using almost all data sources publicly available on the internet so far. However, this pool of natural data does not grow at the same rate as the compute supply. Furthermore, the availability of high-quality texts is even more limited: data filtering pipelines often remove up to 99% of the initial web scrapes to achieve state-of-the-art performance. To address the "data wall" of pre-training scaling, our work explores ways to transform and recycle data discarded in existing filtering processes. We propose REWIRE, REcycling the Web with guIded REwrite, a method to enrich low-quality documents so that they can become useful for training. This in turn allows us to increase the representation of synthetic data in the final pre-training set. Experiments at the 1B, 3B and 7B scales of the DCLM benchmark show that mixing high-quality raw texts and our rewritten texts leads to 1.0, 1.3 and 2.5 percentage point improvements respectively across 22 diverse tasks, compared to training on only filtered web data. Training on the raw-synthetic data mix is also more effective than having access to 2x web data. Through further analysis, we demonstrate that about 82% of the mixed-in texts come from transforming lower-quality documents that would otherwise be discarded. REWIRE also outperforms related approaches to generating synthetic data, including Wikipedia-style paraphrasing, question-answer synthesizing and knowledge extraction. These results suggest that recycling web texts holds the potential for being a simple and effective approach for scaling pre-training data. We make our high-quality synthetic data publicly available at https://huggingface.co/datasets/facebook/recycling_the_web.
[ "Thao Nguyen", "Yang Li", "Olga Golovneva", "Luke Zettlemoyer", "Sewoong Oh", "Ludwig Schmidt", "Xian Li" ]
https://openreview.net/forum?id=lkjhBdz3rn
lkjhBdz3rn
lkjhBdz3rn
[ "~Thao_Nguyen3", "~Yang_Li112", "~Olga_Golovneva1", "~Luke_Zettlemoyer1", "~Sewoong_Oh3", "~Ludwig_Schmidt1", "~Xian_Li1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/467604cfb06de462129149a92b8208c1fb47d348.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "pretraining", "synthetic data", "rewriting", "data curation for LLMs", "data filtering" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ nguyen2025recycling, title={Recycling the Web: A Method to Enhance Pre-training Data Quality and Quantity for Language Models}, author={Thao Nguyen and Yang Li and Olga Golovneva and Luke Zettlemoyer and Sewoong Oh and Ludwig Schmidt and Xian Li}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=lkjhBdz3rn} }
nguyen|recycling_the_web_a_method_to_enhance_pretraining_data_quality_and_quantity_for_language_models
null
null
null
null
null
SuperBPE: Space Travel for Language Models
Superword tokenization leads to better and more efficient language models
The assumption across nearly all language model (LM) tokenization schemes is that tokens should be subwords, i.e., contained within word boundaries. While providing a seemingly reasonable inductive bias, is this common practice limiting the potential of modern LMs? Whitespace is not a reliable delimiter of meaning, as evidenced by multi-word expressions (e.g., "by the way"), crosslingual variation in the number of words needed to express a concept (e.g., "spacesuit helmet" in German is "raumanzughelm"), and languages that do not use whitespace at all (e.g., Chinese). To explore the potential of tokenization beyond subwords, we introduce a "superword" tokenizer, SuperBPE, which incorporates a simple pretokenization curriculum into the byte-pair encoding (BPE) algorithm to first learn subwords, then superwords that bridge whitespace. This brings dramatic improvements in encoding efficiency: when fixing the vocabulary size to 200k, SuperBPE encodes a fixed piece of text with up to 33% fewer tokens than BPE on average. In experiments, we pretrain 8B transformer LMs from scratch while fixing the model size, vocabulary size, and train compute, varying *only* the algorithm for learning the vocabulary. Our model trained with SuperBPE achieves an average +4.0% absolute improvement over the BPE baseline across 30 downstream tasks (including +8.2% on MMLU), while simultaneously requiring 27% less compute at inference time. In analysis, we find that SuperBPE results in segmentations of text that are more uniform in per-token difficulty. Qualitatively, this may be because SuperBPE tokens often capture common multi-word expressions that function semantically as a single unit. SuperBPE is a straightforward, local modification to tokenization that improves both encoding efficiency and downstream performance, yielding better language models overall.
[ "Alisa Liu", "Jonathan Hayase", "Valentin Hofmann", "Sewoong Oh", "Noah A. Smith", "Yejin Choi" ]
https://openreview.net/forum?id=lcDRvffeNP
lcDRvffeNP
lcDRvffeNP
[ "~Alisa_Liu1", "~Jonathan_Hayase2", "~Valentin_Hofmann1", "~Sewoong_Oh3", "~Noah_A._Smith2", "~Yejin_Choi1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/f503ab922e85bb5ffab9a14ae81b12062a9064f7.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "tokenization", "language modeling", "efficiency" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ liu2025superbpe, title={Super{BPE}: Space Travel for Language Models}, author={Alisa Liu and Jonathan Hayase and Valentin Hofmann and Sewoong Oh and Noah A. Smith and Yejin Choi}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=lcDRvffeNP} }
liu|superbpe_space_travel_for_language_models
null
null
null
null
null
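The SuperBPE abstract above describes a pretokenization curriculum: learn subwords first, then lift the whitespace restriction so merges can bridge spaces. The toy reimplementation below is a sketch of that two-stage idea for intuition only; the corpus, merge budgets, and the ▁ space symbol are assumptions, not the paper's trainer.

```python
# Minimal sketch (illustrative) of SuperBPE's two-stage curriculum on a toy corpus:
# stage 1 is ordinary BPE that never merges across whitespace; stage 2 lifts that
# restriction so learned tokens can bridge spaces ("superwords").
from collections import Counter
from typing import List, Optional, Tuple

def most_frequent_pair(seqs: List[List[str]]) -> Optional[Tuple[str, str]]:
    pairs = Counter()
    for seq in seqs:
        for a, b in zip(seq, seq[1:]):
            pairs[(a, b)] += 1
    return max(pairs, key=pairs.get) if pairs else None

def apply_merge(seq: List[str], pair: Tuple[str, str]) -> List[str]:
    out, i = [], 0
    while i < len(seq):
        if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
            out.append(seq[i] + seq[i + 1]); i += 2
        else:
            out.append(seq[i]); i += 1
    return out

def train_superbpe(corpus: List[str], n_subword_merges: int, n_superword_merges: int):
    # Stage 1: split on whitespace first, so merges stay inside words.
    words = [list(w) for line in corpus for w in line.split()]
    merges = []
    for _ in range(n_subword_merges):
        pair = most_frequent_pair(words)
        if pair is None:
            break
        merges.append(pair)
        words = [apply_merge(w, pair) for w in words]
    # Stage 2: re-segment whole lines with spaces kept as a symbol, replay the
    # stage-1 merges, then keep merging across the space symbol.
    lines = [list(line.replace(" ", "▁")) for line in corpus]
    for pair in merges:
        lines = [apply_merge(l, pair) for l in lines]
    for _ in range(n_superword_merges):
        pair = most_frequent_pair(lines)
        if pair is None:
            break
        merges.append(pair)
        lines = [apply_merge(l, pair) for l in lines]
    return merges, lines

if __name__ == "__main__":
    corpus = ["by the way the model works", "by the way it is fine"] * 5
    merges, segmented = train_superbpe(corpus, n_subword_merges=30, n_superword_merges=20)
    print(segmented[0])   # expect superword tokens spanning common phrases like "by the way"
```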
A Survey on Personalized and Pluralistic Preference Alignment in Large Language Models
A survey on personalized and pluralistic preference Alignment in Large Language Models
Personalized preference alignment for large language models (LLMs), the process of tailoring LLMs to individual users' preferences, is an emerging research direction spanning the area of NLP and personalization. In this survey, we present an analysis of works on personalized alignment and modeling for LLMs. We introduce a taxonomy of preference alignment techniques, including training time, inference time, and heuristic-driven methods. We provide analysis and discussion on the strengths and limitations of each group of techniques and then cover evaluation, benchmarks, as well as open problems in the field.
[ "Zhouhang Xie", "Junda Wu", "Yiran Shen", "Raghav Jain", "Yu Xia", "Xintong Li", "Aaron Chang", "Ryan A. Rossi", "Tong Yu", "Sachin Kumar", "Bodhisattwa Prasad Majumder", "Jingbo Shang", "Prithviraj Ammanabrolu", "Julian McAuley" ]
https://openreview.net/forum?id=lSWOMjonL7
lSWOMjonL7
lSWOMjonL7
[ "~Zhouhang_Xie1", "~Junda_Wu1", "~Yiran_Shen2", "~Raghav_Jain1", "~Yu_Xia9", "~Xintong_Li2", "~Aaron_Chang1", "~Ryan_A._Rossi2", "~Tong_Yu3", "~Sachin_Kumar1", "~Bodhisattwa_Prasad_Majumder1", "~Jingbo_Shang2", "~Prithviraj_Ammanabrolu1", "~Julian_McAuley1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/2ce1323cfbae336695d501d9e61dbff93d15ea57.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Preference Alignment", "Personalization", "Pluralistic Alignment", "Large Language Models" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ xie2025a, title={A Survey on Personalized and Pluralistic Preference Alignment in Large Language Models}, author={Zhouhang Xie and Junda Wu and Yiran Shen and Raghav Jain and Yu Xia and Xintong Li and Aaron Chang and Ryan A. Rossi and Tong Yu and Sachin Kumar and Bodhisattwa Prasad Majumder and Jingbo Shang and Prithviraj Ammanabrolu and Julian McAuley}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=lSWOMjonL7} }
xie|a_survey_on_personalized_and_pluralistic_preference_alignment_in_large_language_models
null
null
null
null
null
Task Vectors in In-Context Learning: Emergence, Formation, and Benefits
We study task vectors in transformers trained from scratch on synthetic tasks and find they emerge but remain indistinct. To enhance their formation, we introduce TVP-loss. Strong task vectors in deeper layers improve ICL on OOD prompts.
In-context learning is a remarkable capability of transformers, referring to their ability to adapt to specific tasks based on a short history or context. Previous research has found that task-specific information is locally encoded within models, though the emergence and functionality of these encodings remain unclear due to opaque pre-training processes. In this work, we investigate the formation of task vectors in a controlled setting, using models trained from scratch on synthetic datasets. Our findings confirm that task vectors naturally emerge under certain conditions, but the tasks may be relatively weakly and/or non-locally encoded within the model. To promote strong task vectors encoded at a prescribed location within the model, we propose an auxiliary training mechanism based on a task vector prompting loss (TVP-loss). This method eliminates the need to search for task-correlated encodings within the trained model and demonstrably improves robustness and generalization.
[ "Liu Yang", "Ziqian Lin", "Kangwook Lee", "Dimitris Papailiopoulos", "Robert D Nowak" ]
https://openreview.net/forum?id=lODGn1Rp5t
lODGn1Rp5t
lODGn1Rp5t
[ "~Liu_Yang6", "~Ziqian_Lin1", "~Kangwook_Lee1", "~Dimitris_Papailiopoulos1", "~Robert_D_Nowak1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/7e185d2734f706b41e72249356e60ec9fd37135e.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "in-context learning; task vector" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ yang2025task, title={Task Vectors in In-Context Learning: Emergence, Formation, and Benefits}, author={Liu Yang and Ziqian Lin and Kangwook Lee and Dimitris Papailiopoulos and Robert D Nowak}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=lODGn1Rp5t} }
yang|task_vectors_in_incontext_learning_emergence_formation_and_benefits
null
null
null
null
null
SEAM: Semantically Equivalent Across Modalities Benchmark for Vision-Language Models
We created a benchmark testing VLMs' ability to reason consistently across different modalities, revealing limitations in how models process semantically equivalent information based on format rather than meaning.
Evaluating whether vision–language models (VLMs) reason consistently across representations is challenging because modality comparisons are typically confounded by task differences and asymmetric information. We introduce SEAM, a benchmark that pairs semantically equivalent inputs across four domains that have existing standardized textual and visual notations. By employing distinct notation systems across modalities, in contrast to OCR-based image-text pairing, SEAM provides a rigorous comparative assessment of the textual-symbolic and visual-spatial reasoning capabilities of VLMs. Across 21 contemporary models, we observe systematic modality imbalance: vision frequently lags language in overall performance, despite the problems containing semantically equivalent information, and cross-modal agreement is relatively low. Our error analysis reveals two main drivers: textual perception failures from tokenization in domain notation and visual perception failures that induce hallucinations. We also show that our results are largely robust to visual transformations. SEAM establishes a controlled, semantically equivalent setting for measuring and improving modality-agnostic reasoning.
[ "Zhenwei Tang", "Difan Jiao", "Blair Yang", "Ashton Anderson" ]
https://openreview.net/forum?id=lI4LgGv4sX
lI4LgGv4sX
lI4LgGv4sX
[ "~Zhenwei_Tang1", "~Difan_Jiao1", "~Blair_Yang1", "~Ashton_Anderson1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/980a13fe20145a2f7419388bb1bf725b023950e3.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Vision-Language Models", "Benchmark", "Modality Imbalance" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ tang2025seam, title={{SEAM}: Semantically Equivalent Across Modalities Benchmark for Vision-Language Models}, author={Zhenwei Tang and Difan Jiao and Blair Yang and Ashton Anderson}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=lI4LgGv4sX} }
tang|seam_semantically_equivalent_across_modalities_benchmark_for_visionlanguage_models
null
null
null
null
null
Beyond the Reported Cutoff: Where Large Language Models Fall Short on Financial Knowledge
This paper evaluates Large Language Models' knowledge of historical financial data, finding they know more about larger, recent companies but are also prone to hallucinations about these firms.
Large Language Models (LLMs) are frequently utilized as sources of knowledge for question-answering. While it is known that LLMs may lack access to real-time data or newer data produced after the model's cutoff date, it is less clear how their knowledge spans across *historical* information. In this study, we assess the breadth of LLMs' knowledge using financial data of U.S. publicly traded companies by evaluating more than 197k questions and comparing model responses to factual data. We further explore the impact of company characteristics, such as size, retail investment, institutional attention, and readability of financial filings, on the accuracy of knowledge represented in LLMs. Our results reveal that LLMs are less informed about past financial performance, but they display a stronger awareness of larger companies and more recent information. Interestingly, at the same time, our analysis also reveals that LLMs are more likely to hallucinate for larger companies, especially for data from more recent years. The code, prompts, and model outputs are available on [GitHub](https://github.com/gtfintechlab/knowledge-gap).
[ "Agam Shah", "Liqin Ye", "Sebastian Jaskowski", "Wei Xu", "Sudheer Chava" ]
https://openreview.net/forum?id=lEpPFmGH3L
lEpPFmGH3L
lEpPFmGH3L
[ "~Agam_Shah1", "~Liqin_Ye1", "~Sebastian_Jaskowski1", "~Wei_Xu5", "~Sudheer_Chava1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/b24fa9a8fd3c6721e3f957e197d5b44ec2782ba3.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Large Language Models", "Knowledge Cutoff", "Model Hallucinations" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ shah2025beyond, title={Beyond the Reported Cutoff: Where Large Language Models Fall Short on Financial Knowledge}, author={Agam Shah and Liqin Ye and Sebastian Jaskowski and Wei Xu and Sudheer Chava}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=lEpPFmGH3L} }
shah|beyond_the_reported_cutoff_where_large_language_models_fall_short_on_financial_knowledge
null
null
null
null
null
Overcoming Vocabulary Constraints with Pixel-level Fallback
We augment pretrained language models with pixel-based text representations, overcoming vocabulary constraints, improving multilingual and cross-script performance
Subword tokenization requires balancing computational efficiency and vocabulary coverage, often leading to suboptimal performance on languages and scripts not prioritized during training. We propose to augment pretrained language models with a vocabulary-free encoder that generates input embeddings from text rendered to pixels. Through experiments on English-centric language models, we demonstrate that our approach substantially improves machine translation performance and facilitates effective cross-lingual transfer, outperforming tokenizer-based methods. Furthermore, we find that pixel-based representations outperform byte-level approaches and standard vocabulary expansion. Our approach enhances the multilingual capabilities of monolingual language models without extensive retraining and reduces decoding latency via input compression.
[ "Jonas F. Lotz", "Hendra Setiawan", "Stephan Peitz", "Yova Kementchedjhieva" ]
https://openreview.net/forum?id=lEaHNs2qEv
lEaHNs2qEv
lEaHNs2qEv
[ "~Jonas_F._Lotz1", "~Hendra_Setiawan1", "~Stephan_Peitz1", "~Yova_Kementchedjhieva1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/bd9290fab67e049c39a67c9f2e6dde45900c894c.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "pixel-based text representations", "machine translation", "multilinguality", "cross-lingual transfer", "unseen scripts" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ lotz2025overcoming, title={Overcoming Vocabulary Constraints with Pixel-level Fallback}, author={Jonas F. Lotz and Hendra Setiawan and Stephan Peitz and Yova Kementchedjhieva}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=lEaHNs2qEv} }
lotz|overcoming_vocabulary_constraints_with_pixellevel_fallback
null
null
null
null
null
EvidenceBench: A Benchmark for Extracting Evidence from Biomedical Papers
A large biomedical benchmark to evaluate LLMs' evidence-finding ability for scientific hypotheses and an automatic pipeline to create such benchmarks.
We study the task of automatically finding evidence relevant to hypotheses in biomedical papers. Finding relevant evidence is an important step when researchers investigate scientific hypotheses. We introduce EvidenceBench to measure model performance on this task, which is created by a novel pipeline that consists of hypothesis generation and sentence-by-sentence annotation of biomedical papers for relevant evidence, completely guided by and faithfully following existing human experts' judgment. We demonstrate the pipeline’s validity and accuracy with multiple sets of human-expert annotations. We evaluated a diverse set of language models and retrieval systems on the benchmark and found that model performance still falls significantly short of the expert level on this task. To show the scalability of our proposed pipeline, we create a larger EvidenceBench-100k with 107,461 fully annotated papers with hypotheses to facilitate model training and development. Both datasets are available at https://github.com/EvidenceBench/EvidenceBench
[ "Jianyou Wang", "Weili Cao", "Kaicheng Wang", "Xiaoyue Wang", "Ashish Dalvi", "Gino Prasad", "Qishan Liang", "Hsuan-lin Her", "Mingwang", "Qin Yang", "Gene W. Yeo", "David E Neal", "Maxim Khan", "Christopher D. Rosin", "Ramamohan Paturi", "Leon Bergen" ]
https://openreview.net/forum?id=lEQnUI5lEA
lEQnUI5lEA
lEQnUI5lEA
[ "~Jianyou_Wang1", "~Weili_Cao1", "~Kaicheng_Wang1", "~Xiaoyue_Wang3", "~Ashish_Dalvi1", "~Gino_Prasad1", "~Qishan_Liang1", "~Hsuan-lin_Her1", "~Mingwang1", "~Qin_Yang5", "~Gene_W._Yeo1", "~David_E_Neal1", "~Maxim_Khan1", "~Christopher_D._Rosin1", "~Ramamohan_Paturi1", "~Leon_Bergen1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/b28cd425cb358879df700d6cdeb0bbdbc1a689df.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Biomedical Benchmark", "Scientific Information Extraction", "Large Language Models", "BioNLP" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ wang2025evidencebench, title={EvidenceBench: A Benchmark for Extracting Evidence from Biomedical Papers}, author={Jianyou Wang and Weili Cao and Kaicheng Wang and Xiaoyue Wang and Ashish Dalvi and Gino Prasad and Qishan Liang and Hsuan-lin Her and Mingwang and Qin Yang and Gene W. Yeo and David E Neal and Maxim Khan and Christopher D. Rosin and Ramamohan Paturi and Leon Bergen}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=lEQnUI5lEA} }
wang|evidencebench_a_benchmark_for_extracting_evidence_from_biomedical_papers
null
null
null
null
null
SEAL: Steerable Reasoning Calibration of Large Language Models for Free
We developed a training-free method that improves LLM accuracy and efficiency by calibrating CoT reasoning traces through a learned steering vector, reducing redundant thoughts and enhancing performance across multiple benchmarks.
Large Language Models (LLMs), such as OpenAI’s o1-series, have demonstrated compelling capabilities for complex reasoning tasks via the extended chain-of-thought (CoT) reasoning mechanism. However, recent studies reveal substantial redundancy in the CoT reasoning traces, which not only increases inference latency but also negatively impacts model performance by diverting attention to unnecessary reasoning paths. To address this issue, we investigate the internal reasoning structures of LLMs and categorize them into three primary thought types: execution, reflection, and transition thoughts. Moreover, our analysis reveals that excessive reflection and transition thoughts are strongly correlated with failure cases, and that these thought categories exhibit clear separation in the latent space. Based on these findings, we introduce SEAL (**S**teerable r**EA**soning ca**L**ibration), a training-free approach that seamlessly calibrates the CoT process, improving accuracy while demonstrating significant efficiency gains. SEAL consists of an offline stage for extracting the reasoning steering vector in the latent space, followed by an on-the-fly calibration of the reasoning trace through representation intervention using the steering vector. Notably, the steering vector exhibits strong transferability across various tasks. Extensive experiments across multiple models (DeepSeek-R1-Distill and QwQ-32B-Preview) and benchmarks (Math500, GSM8K, LiveCodeBench) validate the effectiveness of SEAL, with up to an 11\% improvement in accuracy while reducing reasoning tokens by 11.8\% to 50.4\%.
[ "Runjin Chen", "Zhenyu Zhang", "Junyuan Hong", "Souvik Kundu", "Zhangyang Wang" ]
https://openreview.net/forum?id=klPszYDIRT
klPszYDIRT
klPszYDIRT
[ "~Runjin_Chen1", "~Zhenyu_Zhang4", "~Junyuan_Hong1", "~Souvik_Kundu2", "~Zhangyang_Wang1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/78d6d04c381b639fc7c04ce750bff4724dec7a4f.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "LLM", "Reasoning", "Representation Engineering" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ chen2025seal, title={{SEAL}: Steerable Reasoning Calibration of Large Language Models for Free}, author={Runjin Chen and Zhenyu Zhang and Junyuan Hong and Souvik Kundu and Zhangyang Wang}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=klPszYDIRT} }
chen|seal_steerable_reasoning_calibration_of_large_language_models_for_free
null
null
null
null
null
ReasonIR: Training Retrievers for Reasoning Tasks
We present the first bi-encoder retriever specially trained to retrieve helpful documents for reasoning tasks.
We present ReasonIR-8B, the first retriever specifically trained for general reasoning tasks. Existing retrievers have shown limited gains on reasoning tasks, in part because existing training datasets focus on short factual queries tied to documents that straightforwardly answer them. We develop a synthetic data generation pipeline that, for each document, produces a challenging and relevant query that requires reasoning to match, as well as a plausibly related but ultimately unhelpful hard negative. By training on a mixture of this synthetic data and existing public data, ReasonIR-8B achieves a new state-of-the-art of 29.9 nDCG@10 on BRIGHT, a widely-used reasoning-intensive information retrieval (IR) benchmark. In addition, ReasonIR-8B uses test-time compute more effectively: on BRIGHT, its performance consistently increases with longer and more information-rich rewritten queries; it outperforms other retrievers when combined with our simple-yet-effective tie-breaking LLM reranker (36.9 nDCG@10). When applied to RAG tasks, ReasonIR-8B improves MMLU and GPQA performance by 6.4% and 22.6% respectively, relative to the closed-book baseline, outperforming other retrievers and search engines. Our training recipe is general and can be easily extended to future LLMs.
[ "Rulin Shao", "Rui Qiao", "Varsha Kishore", "Niklas Muennighoff", "Xi Victoria Lin", "Daniela Rus", "Bryan Kian Hsiang Low", "Sewon Min", "Wen-tau Yih", "Pang Wei Koh", "Luke Zettlemoyer" ]
https://openreview.net/forum?id=kkBCNLMbGj
kkBCNLMbGj
kkBCNLMbGj
[ "~Rulin_Shao1", "~Rui_Qiao3", "~Varsha_Kishore1", "~Niklas_Muennighoff1", "~Xi_Victoria_Lin1", "~Daniela_Rus1", "~Bryan_Kian_Hsiang_Low1", "~Sewon_Min1", "~Wen-tau_Yih1", "~Pang_Wei_Koh1", "~Luke_Zettlemoyer1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/08ca247ba1def28f58c2066c936038f437578e07.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "retriever", "information retrieval", "retrieval-augmented generation", "reasoning", "synthetic data" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ shao2025reasonir, title={Reason{IR}: Training Retrievers for Reasoning Tasks}, author={Rulin Shao and Rui Qiao and Varsha Kishore and Niklas Muennighoff and Xi Victoria Lin and Daniela Rus and Bryan Kian Hsiang Low and Sewon Min and Wen-tau Yih and Pang Wei Koh and Luke Zettlemoyer}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=kkBCNLMbGj} }
shao|reasonir_training_retrievers_for_reasoning_tasks
null
null
null
null
null
How Multimodal LLMs Solve Image Tasks: A Lens on Visual Grounding, Task Reasoning, and Answer Decoding
We propose a probing framework to analyze how multimodal LLMs process visual and textual inputs across layers
Multimodal Large Language Models (MLLMs) have demonstrated strong performance across a wide range of vision-language tasks, yet their internal processing dynamics remain underexplored. In this work, we introduce a probing framework to systematically analyze how MLLMs process visual and textual inputs across layers. We train linear classifiers to predict fine-grained visual categories (e.g., dog breeds) from token embeddings extracted at each layer, using a standardized anchor question. To uncover the functional roles of different layers, we evaluate these probes under three types of controlled prompt variations: (1) lexical variants that test sensitivity to surface-level changes, (2) semantic negation variants that flip the expected answer by modifying the visual concept in the prompt, and (3) output format variants that preserve reasoning but alter the answer format. Applying our framework to LLaVA-1.5, LLaVA-Next-LLaMA-3, and Qwen2-VL, we identify a consistent stage-wise structure in which early layers perform visual grounding, middle layers support lexical integration and semantic reasoning, and final layers prepare task-specific outputs. We further show that while the overall stage-wise structure remains stable across variations in visual tokenization, instruction tuning data, and pretraining corpus, the specific layer allocation to each stage shifts notably with changes in the base LLM architecture. Our findings provide a unified perspective on the layer-wise organization of MLLMs and offer a lightweight, model-agnostic approach for analyzing multimodal representation dynamics.
[ "Zhuoran Yu", "Yong Jae Lee" ]
https://openreview.net/forum?id=kjNJYWvfPA
kjNJYWvfPA
kjNJYWvfPA
[ "~Zhuoran_Yu2", "~Yong_Jae_Lee2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/f1fc281cc94ab858757534be1c0097d1b92201cc.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Multimodal LLM", "Interpretability" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ yu2025how, title={How Multimodal {LLM}s Solve Image Tasks: A Lens on Visual Grounding, Task Reasoning, and Answer Decoding}, author={Zhuoran Yu and Yong Jae Lee}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=kjNJYWvfPA} }
yu|how_multimodal_llms_solve_image_tasks_a_lens_on_visual_grounding_task_reasoning_and_answer_decoding
null
null
null
null
null
SAEs Can Improve Unlearning: Dynamic Sparse Autoencoder Guardrails for Precision Unlearning in LLMs
We introduce Dynamic SAE Guardrails, a computationally efficient method that leverages sparse autoencoders for precision unlearning in LLMs, outperforming existing approaches while maintaining model utility.
Machine unlearning is a promising approach to improve LLM safety by removing unwanted knowledge from a trained model. However, prevailing gradient-based unlearning methods suffer from issues such as high computational costs, hyperparameter instability, poor sequential unlearning capability, vulnerability to relearning attacks, low data efficiency, and lack of interpretability. While Sparse Autoencoders (SAEs) are well-suited to improve these aspects by enabling targeted activation-based unlearning, prior approaches underperform gradient-based methods. This work demonstrates that, contrary to these earlier findings, SAEs can significantly improve unlearning when employed dynamically. We introduce Dynamic SAE Guardrails (DSG), a novel method for precision unlearning that leverages principled feature selection and a dynamic classifier. Our experiments show DSG substantially outperforms leading unlearning methods, achieving superior forget-utility trade-offs. DSG addresses key drawbacks of gradient-based approaches for unlearning---offering enhanced computational efficiency and stability, robust performance in sequential unlearning, stronger resistance to relearning attacks, better data efficiency including zero-shot settings, and more interpretable unlearning.
[ "Aashiq Muhamed", "Jacopo Bonato", "Mona T. Diab", "Virginia Smith" ]
https://openreview.net/forum?id=kaPAalWAp3
kaPAalWAp3
kaPAalWAp3
[ "~Aashiq_Muhamed1", "~Jacopo_Bonato1", "~Mona_T._Diab1", "~Virginia_Smith1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/15a48f1a716e9205e87c0e7e53e18753bc29238d.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "unlearning", "sparse autoencoders", "representation editing", "steering vectors", "relearning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ muhamed2025saes, title={{SAE}s Can Improve Unlearning: Dynamic Sparse Autoencoder Guardrails for Precision Unlearning in {LLM}s}, author={Aashiq Muhamed and Jacopo Bonato and Mona T. Diab and Virginia Smith}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=kaPAalWAp3} }
muhamed|saes_can_improve_unlearning_dynamic_sparse_autoencoder_guardrails_for_precision_unlearning_in_llms
null
null
null
null
null
Putting the Value Back in RL: Better Test-Time Scaling by Unifying LLM Reasoners With Verifiers
We propose to improve inference-time scaling by jointly training a unified model for both reasoning and verification.
Prevalent reinforcement learning (RL) methods for fine-tuning LLM reasoners, such as GRPO or Leave-one-out PPO, abandon the learned value function in favor of empirically estimated returns. This hinders test-time compute scaling that relies on using the value function for verification. In this work, we propose RL$^V$ that augments any "value-free" RL method by jointly training the LLM as both a reasoner and a generative verifier using RL-generated data, adding verification capabilities without significant overhead. Empirically, RL$^V$ boosts MATH accuracy by over 20\% with parallel sampling and enables $8-32\times$ efficient test-time compute scaling compared to the base RL method. RL$^V$ also exhibits strong generalization capabilities for both easy-to-hard and out-of-domain tasks. Furthermore, RL$^V$ achieves $1.5-2\times$ higher performance when jointly scaling parallel and sequential test-time compute with a long reasoning R1 model.
[ "Kusha Sareen", "Morgane M Moss", "Alessandro Sordoni", "Arian Hosseini", "Rishabh Agarwal" ]
https://openreview.net/forum?id=kVOrGZM5N7
kVOrGZM5N7
kVOrGZM5N7
[ "~Kusha_Sareen1", "~Morgane_M_Moss1", "~Alessandro_Sordoni2", "~Arian_Hosseini1", "~Rishabh_Agarwal2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/3acf5484382857c965093513560345a4a8c7d01f.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "LLM Reasoning", "Verifiers", "Test-time scaling", "Reinforcement Learning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ sareen2025putting, title={Putting the Value Back in {RL}: Better Test-Time Scaling by Unifying {LLM} Reasoners With Verifiers}, author={Kusha Sareen and Morgane M Moss and Alessandro Sordoni and Arian Hosseini and Rishabh Agarwal}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=kVOrGZM5N7} }
sareen|putting_the_value_back_in_rl_better_testtime_scaling_by_unifying_llm_reasoners_with_verifiers
null
null
null
null
null
Corrupted by Reasoning: Reasoning Language Models Become Free-Riders in Public Goods Games
We evaluate the cooperative behavior of LLMs in a public goods game with norm enforcement and find that reasoning models are less effective at maintaining cooperation than traditional ones.
As large language models (LLMs) are increasingly deployed as autonomous agents, understanding their cooperation and social mechanisms is becoming increasingly important. In particular, how LLMs balance self-interest and collective well-being is a critical challenge for ensuring alignment, robustness, and safe deployment. In this paper, we examine the challenge of costly sanctioning in multi-agent LLM systems, where an agent must decide whether to invest its own resources to incentivize cooperation or penalize defection. To study this, we adapt a public goods game with institutional choice from behavioral economics, allowing us to observe how different LLMs navigate social dilemmas over repeated interactions. Our analysis reveals four distinct behavioral patterns among models: some consistently establish and sustain high levels of cooperation, others fluctuate between engagement and disengagement, some gradually decline in cooperative behavior over time, and others rigidly follow fixed strategies regardless of outcomes. Surprisingly, we find that reasoning LLMs, such as the o1 series, struggle significantly with cooperation, whereas some traditional LLMs consistently achieve high levels of cooperation. These findings suggest that the current approach to improving LLMs, which focuses on enhancing their reasoning capabilities, does not necessarily lead to cooperation, providing valuable insights for deploying LLM agents in environments that require sustained collaboration.
[ "David Guzman Piedrahita", "Yongjin Yang", "Mrinmaya Sachan", "Giorgia Ramponi", "Bernhard Schölkopf", "Zhijing Jin" ]
https://openreview.net/forum?id=kH6LOHGjEl
kH6LOHGjEl
kH6LOHGjEl
[ "~David_Guzman_Piedrahita1", "~Yongjin_Yang1", "~Mrinmaya_Sachan3", "~Giorgia_Ramponi1", "~Bernhard_Schölkopf1", "~Zhijing_Jin1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/a207da379702863ad85f59003fc2c5b4b7c47ac7.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Large Language Models", "Multi-Agent", "Cooperation", "Social Dilemmas", "Public Goods Games", "Institutional Choice", "Costly Sanctioning", "Norm Enforcement", "Reasoning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ piedrahita2025corrupted, title={Corrupted by Reasoning: Reasoning Language Models Become Free-Riders in Public Goods Games}, author={David Guzman Piedrahita and Yongjin Yang and Mrinmaya Sachan and Giorgia Ramponi and Bernhard Sch{\"o}lkopf and Zhijing Jin}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=kH6LOHGjEl} }
piedrahita|corrupted_by_reasoning_reasoning_language_models_become_freeriders_in_public_goods_games
/attachment/e77bf91abe796499b424be1e4bdb5bc6a3541fc2.zip
null
null
null
null
AdaptMI: Adaptive Skill-based In-context Math Instructions for Small Language Models
We introduce AdaptMI, an adaptive approach to selecting skill-based in-context math instructions for small language models.
In-context learning (ICL) allows a language model to improve its problem-solving capability when provided with suitable information in context. Since the choice of in-context information can be determined based on the problem itself, in-context learning is analogous to human learning from teachers in a classroom. Recent works (Didolkar et al., 2024a; 2024b) show that ICL performance can be improved by leveraging a frontier large language model’s (LLM) ability to predict required skills to solve a problem, popularly referred to as an LLM’s metacognition, and using the recommended skills to construct necessary in-context examples. While this skill-based strategy boosts ICL performance in larger models, its gains on small language models (SLMs) have been minimal, highlighting a performance gap in ICL capabilities. We investigate this gap and show that skill-based prompting can hurt SLM performance on easy questions by introducing unnecessary information, akin to cognitive overload. To address this, we introduce AdaptMI, an adaptive approach to selecting skill-based in-context Math Instructions for SLMs. Inspired by cognitive load theory from human pedagogy, our method only introduces skill-based examples when the model performs poorly. We further propose AdaptMI+, which adds examples targeted to the specific skills missing from the model’s responses. On 5-shot evaluations across popular math benchmarks and five SLMs (1B–7B; Qwen, Llama), AdaptMI+ improves accuracy by up to 6% over naive skill-based strategies.
[ "Yinghui He", "Abhishek Panigrahi", "Yong Lin", "Sanjeev Arora" ]
https://openreview.net/forum?id=k72RxnoS5g
k72RxnoS5g
k72RxnoS5g
[ "~Yinghui_He1", "~Abhishek_Panigrahi1", "~Yong_Lin2", "~Sanjeev_Arora1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/4d231204c9988dda0204c56d1dcf4623ef1bd631.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Small language models", "large language models", "in-context learning", "natural language processing" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ he2025adaptmi, title={Adapt{MI}: Adaptive Skill-based In-context Math Instructions for Small Language Models}, author={Yinghui He and Abhishek Panigrahi and Yong Lin and Sanjeev Arora}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=k72RxnoS5g} }
he|adaptmi_adaptive_skillbased_incontext_math_instructions_for_small_language_models
null
null
null
null
null
FineWeb2: One Pipeline to Scale Them All — Adapting Pre-Training Data Processing to Every Language
We introduce a new pre-training dataset curation pipeline based on FineWeb that we use to create a new large multilingual dataset, FineWeb2
Pre-training state-of-the-art large language models (LLMs) requires vast amounts of clean and diverse text data. While the open development of large high-quality English pre-training datasets has seen substantial recent progress, training performant multilingual LLMs remains a challenge, in large part due to the inherent difficulty of tailoring filtering and deduplication pipelines to a large number of languages. In this work, we introduce a new pre-training dataset curation pipeline based on FineWeb that can be automatically adapted to support any language. We extensively ablate our pipeline design choices on a set of 9 diverse languages, guided by a set of meaningful and informative evaluation tasks that were chosen through a novel selection process based on measurable criteria. Ultimately, we show that our pipeline can be used to create non-English corpora that produce more performant models than prior datasets. We additionally introduce a straightforward and principled approach to rebalance datasets that takes into consideration both duplication count and quality, providing an additional performance uplift. Finally, we scale our pipeline to over 1000 languages using almost 100 Common Crawl snapshots to produce FineWeb2, a new 20 terabyte (5 billion document) multilingual dataset which we release along with our pipeline, training, and evaluation codebases.
[ "Guilherme Penedo", "Hynek Kydlíček", "Vinko Sabolčec", "Bettina Messmer", "Negar Foroutan", "Amir Hossein Kargaran", "Colin Raffel", "Martin Jaggi", "Leandro Von Werra", "Thomas Wolf" ]
https://openreview.net/forum?id=jnRBe6zatP
jnRBe6zatP
jnRBe6zatP
[ "~Guilherme_Penedo1", "~Hynek_Kydlíček1", "~Vinko_Sabolčec1", "~Bettina_Messmer1", "~Negar_Foroutan1", "~Amir_Hossein_Kargaran1", "~Colin_Raffel1", "~Martin_Jaggi1", "~Leandro_Von_Werra1", "~Thomas_Wolf1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/8a8d8d04776824b23e7bd4683c04c4f4104f6682.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "multilingual", "dataset", "pretraining", "web data", "llm" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
true
While there might not be a direct risk, since models trained on a large quantity of web-crawled data are being released, it would be useful to include a potential risk statement at the end of the paper.
@inproceedings{ penedo2025fineweb, title={FineWeb2: One Pipeline to Scale Them All {\textemdash} Adapting Pre-Training Data Processing to Every Language}, author={Guilherme Penedo and Hynek Kydl{\'\i}{\v{c}}ek and Vinko Sabol{\v{c}}ec and Bettina Messmer and Negar Foroutan and Amir Hossein Kargaran and Colin Raffel and Martin Jaggi and Leandro Von Werra and Thomas Wolf}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=jnRBe6zatP} }
penedo|fineweb2_one_pipeline_to_scale_them_all_adapting_pretraining_data_processing_to_every_language
null
true
null
null
null
AI-Slop to AI-Polish? Aligning Language Models through Edit-Based Writing Rewards and Test-time computation
AI models struggle to assess writing quality, but the new Writing Quality Benchmark and specialized reward model enable significant improvements to AI-generated text.
AI-generated text is proliferating across domains, from creative writing and journalism to marketing content and scientific articles. Models can follow user-provided instructions to generate coherent and grammatically correct outputs, but in this work we study a more fundamental question: how do we evaluate and improve the writing quality of AI-generated text? Writing quality assessment has received less attention from the community, in part because it is fundamentally subjective and requires expertise. We first introduce the Writing Quality Benchmark (WQ) by consolidating five writing-preference datasets into 4,729 writing quality judgments. Our experiments show that competitive baselines, including state-of-the-art LLMs that excel at reasoning tasks, barely outperform random baselines on WQ. We then train specialized Writing Quality Reward Models (WQRM) of various sizes for writing quality assessment that demonstrate strong generalization on four out-of-distribution test sets and 74% accuracy on the WQ benchmark. To further show WQRM's practical benefits during inference, we leverage additional test-time compute to generate and rank multiple candidate revisions, allowing us to select higher-quality outputs from an initial draft. Human evaluation with 9 experienced writers confirms that WQRM-based selection produces writing samples preferred by experts 66% of the time overall, and 72.2% of the time when the reward gap is larger than 1 point. We release our datasets and models to encourage community engagement with writing quality assessment and development of AI writing systems better aligned with human preferences.
[ "Tuhin Chakrabarty", "Philippe Laban", "Chien-Sheng Wu" ]
https://openreview.net/forum?id=jeDYcjuZIV
jeDYcjuZIV
jeDYcjuZIV
[ "~Tuhin_Chakrabarty2", "~Philippe_Laban1", "~Chien-Sheng_Wu1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/9c9ec96e904859c276d16b3667db86b735894ec3.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "LLM", "Writing", "Reward Models", "Alignment", "Human Centered NLP", "Test Time Compute", "Text Editing", "Behavioral Science" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ chakrabarty2025aislop, title={{AI}-Slop to {AI}-Polish? Aligning Language Models through Edit-Based Writing Rewards and Test-time computation}, author={Tuhin Chakrabarty and Philippe Laban and Chien-Sheng Wu}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=jeDYcjuZIV} }
chakrabarty|aislop_to_aipolish_aligning_language_models_through_editbased_writing_rewards_and_testtime_computation
null
null
null
null
null
EuroBERT: Scaling Multilingual Encoders for European Languages
We introduce EuroBERT, a family of multilingual encoders leveraging recent architectural advances, achieving state-of-the-art performance across diverse tasks with support for sequences up to 8,192 tokens.
General-purpose multilingual vector representations, used in retrieval, regression and classification, are traditionally obtained from bidirectional encoder models. Despite their wide applicability, encoders have been recently overshadowed by advances in generative decoder-only models. However, many innovations driving this progress are not inherently tied to decoders. In this paper, we revisit the development of multilingual encoders through the lens of these advances, and introduce EuroBERT, a family of multilingual encoders covering European and widely spoken global languages. Our models outperform existing alternatives across a diverse range of tasks, spanning multilingual capabilities, mathematics, and coding, and natively support sequences of up to 8,192 tokens. We also examine the design decisions behind EuroBERT, offering insights into our dataset composition and training pipeline. We publicly release the EuroBERT models, including intermediate training checkpoints, together with our training framework.
[ "Nicolas Boizard", "Hippolyte Gisserot-Boukhlef", "Duarte Miguel Alves", "Andre Martins", "Ayoub Hammal", "Caio Corro", "CELINE HUDELOT", "Emmanuel Malherbe", "Etienne Malaboeuf", "Fanny Jourdan", "Gabriel Hautreux", "João Alves", "Kevin El Haddad", "Manuel Faysse", "Maxime Peyrard", "Nuno M Guerreiro", "Patrick Fernandes", "Ricardo Rei", "Pierre Colombo" ]
https://openreview.net/forum?id=jdOC24msVq
jdOC24msVq
jdOC24msVq
[ "~Nicolas_Boizard1", "~Hippolyte_Gisserot-Boukhlef1", "~Duarte_Miguel_Alves1", "~Andre_Martins1", "~Ayoub_Hammal1", "~Caio_Corro2", "~CELINE_HUDELOT1", "~Emmanuel_Malherbe3", "~Etienne_Malaboeuf1", "~Fanny_Jourdan1", "~Gabriel_Hautreux2", "~João_Alves2", "~Kevin_El_Haddad1", "~Manuel_Faysse1", "~Maxime_Peyrard2", "~Nuno_M_Guerreiro1", "~Patrick_Fernandes1", "~Ricardo_Rei1", "~Pierre_Colombo2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/5b43e38fbf30639e53eb283215e3f86c4dfa0b8c.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Encoder", "Multilingual", "EuroBERT", "Training", "Vector Representations", "Bidirectional", "European" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ boizard2025eurobert, title={Euro{BERT}: Scaling Multilingual Encoders for European Languages}, author={Nicolas Boizard and Hippolyte Gisserot-Boukhlef and Duarte Miguel Alves and Andre Martins and Ayoub Hammal and Caio Corro and CELINE HUDELOT and Emmanuel Malherbe and Etienne Malaboeuf and Fanny Jourdan and Gabriel Hautreux and Jo{\~a}o Alves and Kevin El Haddad and Manuel Faysse and Maxime Peyrard and Nuno M Guerreiro and Patrick Fernandes and Ricardo Rei and Pierre Colombo}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=jdOC24msVq} }
boizard|eurobert_scaling_multilingual_encoders_for_european_languages
null
null
null
null
null
MALT: Improving Reasoning with Multi-Agent LLM Training
We introduce a multi-agent post-training approach to improve the reasoning and self-correction performance of a generator, verifier, and refinement model working together
Large Language Models (LLMs) often produce answers with a single chain-of-thought, which restricts their ability to explore reasoning paths or self-correct flawed outputs in complex tasks. In this paper, we introduce MALT (Multi-Agent LLM Training), a novel post-training strategy that divides the reasoning process into generation, verification, and refinement steps using a sequential pipeline of heterogeneous agents. During data generation, each agent is repeatedly sampled to form a multi-agent search tree, where final outputs are graded against ground-truth data. We then apply value iteration to propagate reward signals back to each role-conditioned model, automatically producing multi-agent post-training data without human or teacher-model supervision. Our off-policy approach allows each agent to specialize by learning from correct and incorrect trajectories, ultimately improving the end-to-end reasoning chain. On MATH, GSM8K, and CSQA, MALT surpasses the same baseline LLM with relative improvements of 15.66%, 7.42%, and 9.40%. It also generalizes to more challenging benchmarks, marking an early advance in multi-agent cooperative training.
[ "Sumeet Ramesh Motwani", "Chandler Smith", "Rocktim Jyoti Das", "Rafael Rafailov", "Philip Torr", "Ivan Laptev", "Fabio Pizzati", "Ronald Clark", "Christian Schroeder de Witt" ]
https://openreview.net/forum?id=jXP9bgFack
jXP9bgFack
jXP9bgFack
[ "~Sumeet_Ramesh_Motwani1", "~Chandler_Smith1", "~Rocktim_Jyoti_Das2", "~Rafael_Rafailov1", "~Philip_Torr1", "~Ivan_Laptev1", "~Fabio_Pizzati1", "~Ronald_Clark2", "~Christian_Schroeder_de_Witt1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/da3f367150d10b65b2dcaa8152a1aea745c9a7bd.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "reasoning", "multi-agent systems", "post-training", "reinforcement learning", "large language models" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ motwani2025malt, title={{MALT}: Improving Reasoning with Multi-Agent {LLM} Training}, author={Sumeet Ramesh Motwani and Chandler Smith and Rocktim Jyoti Das and Rafael Rafailov and Philip Torr and Ivan Laptev and Fabio Pizzati and Ronald Clark and Christian Schroeder de Witt}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=jXP9bgFack} }
motwani|malt_improving_reasoning_with_multiagent_llm_training
/attachment/c362d320dbebaac8a22e3ce459c2bf0981b28842.zip
null
null
null
null
Can Test-Time Scaling Improve World Foundation Model?
Test-time scaling works for world foundation models and shows a clear scaling law. A smaller model with test-time scaling can even outperform a 2x larger pretrained model. An extensible toolkit will be open-sourced for evaluating the performance of WFMs.
World foundation models (WFMs), which simulate the physical world by predicting future states from current observations and inputs, have become central to many applications in physical intelligence, including autonomous driving and robotics. However, these models require substantial computational resources for pretraining and are further constrained by available data during post-training. As such, scaling computation at test time emerges as both a critical and practical alternative to traditional model enlargement or re-training. In this work, we introduce **SWIFT**, a test-time scaling framework tailored for WFMs. SWIFT integrates our extensible WFM evaluation toolkit with process-level inference strategies, including fast tokenization, probability-based Top-K pruning, and efficient beam search. Empirical results on the COSMOS model demonstrate that test-time scaling is effective even in a compute-optimal setting. Our findings reveal that test-time scaling laws hold for WFMs and that SWIFT provides a scalable and effective pathway for improving WFM inference without retraining or increasing model size. Project page: [https://scalingwfm.github.io/](https://scalingwfm.github.io/).
[ "Wenyan Cong", "Hanqing Zhu", "Peihao Wang", "Bangya Liu", "Dejia Xu", "Kevin Wang", "David Z. Pan", "Yan Wang", "Zhiwen Fan", "Zhangyang Wang" ]
https://openreview.net/forum?id=jSmpq7IRYe
jSmpq7IRYe
jSmpq7IRYe
[ "~Wenyan_Cong1", "~Hanqing_Zhu1", "~Peihao_Wang1", "~Bangya_Liu1", "~Dejia_Xu1", "~Kevin_Wang4", "~David_Z._Pan1", "~Yan_Wang10", "~Zhiwen_Fan2", "~Zhangyang_Wang1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/fb37c34624a030a952ca49f34e233a4b8989d236.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "world foundation model", "test time scaling", "autoregressive video generation", "evaluation tookit" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ cong2025can, title={Can Test-Time Scaling Improve World Foundation Model?}, author={Wenyan Cong and Hanqing Zhu and Peihao Wang and Bangya Liu and Dejia Xu and Kevin Wang and David Z. Pan and Yan Wang and Zhiwen Fan and Zhangyang Wang}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=jSmpq7IRYe} }
cong|can_testtime_scaling_improve_world_foundation_model
null
null
null
null
null
Implicit In-Context Learning: Evidence from Artificial Language Experiments
LLMs show human-like implicit learning during inference, but different models excel in different linguistic domains, pointing to distinct in-context learning mechanisms influenced by model architecture.
Humans acquire language through implicit learning, absorbing complex patterns without explicit awareness. While large language models (LLMs) demonstrate impressive linguistic capabilities, it remains unclear whether they exhibit human-like pattern recognition during in-context learning at the inference level. We adapted three classic artificial language learning experiments spanning morphology (regular/irregular plural marking), morphosyntax (context-dependent determiners), and syntax (finite state grammar) to systematically evaluate implicit learning at the inference level in two state-of-the-art OpenAI models: gpt-4o (optimized for general language tasks) and o3-mini (specifically fine-tuned for explicit reasoning). This comparison allowed us to examine whether models trained to articulate reasoning processes differ in their ability to extract implicit patterns. Our findings reveal a complex picture: o3-mini demonstrated human-like probabilistic learning in morphological regularization, while gpt-4o showed stronger performance in finite state grammar acquisition. Neither model successfully replicated human patterns in the morphosyntax task. Post-experiment probes revealed correlations between models' performance and their ability to articulate underlying patterns, suggesting alignment between implicit recognition and explicit awareness. These results indicate that different LLMs implement distinct in-context processing mechanisms, with architecture and training objectives influencing pattern extraction across linguistic domains. Our study contributes to understanding in-context learning in LLMs and provides a novel framework for evaluating these models through the lens of cognitive science, highlighting both similarities and differences between human implicit learning and machine in-context pattern recognition.
[ "Xiaomeng Ma", "Qihui Xu" ]
https://openreview.net/forum?id=jST2VzWUFb
jST2VzWUFb
jST2VzWUFb
[ "~Xiaomeng_Ma1", "~Qihui_Xu1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/ee53e57c5115c95f27c7d0afcfe22b6389effb30.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "implicit learning", "artificial language learning", "in-context learning", "psycholinguistics", "cognitive science" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
true
Only a minor thing. The paper uses probably copyrighted images from other papers in Figures 1 and 3. The authors should produce new figures with the original numbers, as they did in Figure 2.
@inproceedings{ ma2025implicit, title={Implicit In-Context Learning: Evidence from Artificial Language Experiments}, author={Xiaomeng Ma and Qihui Xu}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=jST2VzWUFb} }
ma|implicit_incontext_learning_evidence_from_artificial_language_experiments
null
null
null
null
null
Post-training for Efficient Communication via Convention Formation
We introduce a post-training method and new tasks to test and improve LLMs' abilities in forming conventions for efficient communication.
Humans communicate with increasing efficiency in multi-turn interactions, by adapting their language and forming ad-hoc conventions. In contrast, prior work shows that LLMs do not naturally exhibit this behavior. We develop a post-training process to instill this ability through targeted fine-tuning on heuristically identified demonstrations of convention formation. We evaluate with two new benchmarks focused on this capability. First, we design a focused, cognitively-motivated interaction benchmark that consistently elicits strong convention formation trends in humans. Second, we create a new document-grounded reference completion task that reflects in-the-wild convention formation behavior. Our studies show significantly improved convention formation abilities in post-trained LLMs across the two evaluation methods.
[ "Yilun Hua", "Evan Wang", "Yoav Artzi" ]
https://openreview.net/forum?id=jRGGmbhX2s
jRGGmbhX2s
jRGGmbhX2s
[ "~Yilun_Hua1", "~Evan_Wang2", "~Yoav_Artzi1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/90eee569c161e32eb84ea65de4a8205e9fa60de2.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Interaction", "Communication Efficiency", "Linguistic Convention", "Post-training", "Alignment", "LLM", "In-context learning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ hua2025posttraining, title={Post-training for Efficient Communication via Convention Formation}, author={Yilun Hua and Evan Wang and Yoav Artzi}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=jRGGmbhX2s} }
hua|posttraining_for_efficient_communication_via_convention_formation
null
null
null
null
null
Tulu 3: Pushing Frontiers in Open Language Model Post-Training
A new multi-stage post-training recipe scaling preference learning and reinforcement learning with verifiable rewards.
Language model post-training is applied to refine behaviors and unlock new skills across a wide range of language models, but open recipes for applying these techniques lag behind proprietary ones. The underlying training data and recipes for post-training are simultaneously the most important pieces of the puzzle and the portion with the least transparency. To bridge this gap, we introduce TÜLU 3, a family of fully-open state-of-the-art post-trained models, alongside its data, code, and training recipes, serving as a comprehensive guide for modern post-training techniques. TÜLU 3, which builds on Llama 3.1 base models at 8B, 70B and 405B parameters, achieves results surpassing the instruct versions of Llama 3.1, Qwen 2.5, and Mistral at comparable model sizes. The 405B TÜLU 3 performs competitively against closed models such as GPT-4o-mini and Claude 3.5-Haiku or large open models like DeepSeek V3. The training algorithms for our models include supervised finetuning (SFT), Direct Preference Optimization (DPO), and a novel method we call Reinforcement Learning with Verifiable Rewards (RLVR). We detail varying the objective, model initialization, generalization, and over-optimization of this new RL finetuning method. With TÜLU 3, we build a multi-task evaluation scheme for post-training with development and unseen evaluations, standard benchmark implementations, and substantial decontamination of existing open datasets on said benchmarks. The TÜLU 3 release includes model weights, a demo, and the complete recipe — datasets for diverse core skills, a robust toolkit for data curation and evaluation, the training code and infrastructure, and, most importantly, a detailed recipe for reproducing and further adapting the TÜLU 3 approach to more domains.
[ "Nathan Lambert", "Jacob Morrison", "Valentina Pyatkin", "Shengyi Huang", "Hamish Ivison", "Faeze Brahman", "Lester James Validad Miranda", "Alisa Liu", "Nouha Dziri", "Xinxi Lyu", "Yuling Gu", "Saumya Malik", "Victoria Graf", "Jena D. Hwang", "Jiangjiang Yang", "Ronan Le Bras", "Oyvind Tafjord", "Christopher Wilhelm", "Luca Soldaini", "Noah A. Smith", "Yizhong Wang", "Pradeep Dasigi", "Hannaneh Hajishirzi" ]
https://openreview.net/forum?id=i1uGbfHHpH
i1uGbfHHpH
i1uGbfHHpH
[ "~Nathan_Lambert1", "~Jacob_Morrison2", "~Valentina_Pyatkin1", "~Shengyi_Huang1", "~Hamish_Ivison1", "~Faeze_Brahman1", "~Lester_James_Validad_Miranda1", "~Alisa_Liu1", "~Nouha_Dziri2", "~Xinxi_Lyu1", "~Yuling_Gu1", "~Saumya_Malik1", "~Victoria_Graf1", "~Jena_D._Hwang1", "~Jiangjiang_Yang1", "~Ronan_Le_Bras1", "~Oyvind_Tafjord2", "~Christopher_Wilhelm1", "~Luca_Soldaini1", "~Noah_A._Smith2", "~Yizhong_Wang2", "~Pradeep_Dasigi1", "~Hannaneh_Hajishirzi1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/9c0d921fcc7c90227f3e1bcba16bfea6f63ad480.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "post-training", "reinforcement learning", "preference learning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ lambert2025tulu, title={Tulu 3: Pushing Frontiers in Open Language Model Post-Training}, author={Nathan Lambert and Jacob Morrison and Valentina Pyatkin and Shengyi Huang and Hamish Ivison and Faeze Brahman and Lester James Validad Miranda and Alisa Liu and Nouha Dziri and Xinxi Lyu and Yuling Gu and Saumya Malik and Victoria Graf and Jena D. Hwang and Jiangjiang Yang and Ronan Le Bras and Oyvind Tafjord and Christopher Wilhelm and Luca Soldaini and Noah A. Smith and Yizhong Wang and Pradeep Dasigi and Hannaneh Hajishirzi}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=i1uGbfHHpH} }
lambert|tulu_3_pushing_frontiers_in_open_language_model_posttraining
null
true
null
null
null
Beyond Blanket Masking: Examining Granularity for Privacy Protection in Images Captured by Blind and Low Vision Users
We propose a fine-grained privacy protection framework that selectively masks only high-risk private information while preserving low-risk information in images taken by blind and low vision users for improved usability.
As visual assistant systems powered by visual language models (VLMs) become more prevalent, concerns over user privacy have grown, particularly for blind and low vision users who may unknowingly capture personal private information in their images. Existing privacy protection methods rely on coarse-grained segmentation, which uniformly masks entire private objects, often at the cost of usability. In this work, we propose FiG-Priv, a fine-grained privacy protection framework that selectively masks only high-risk private information while preserving low-risk information. Our approach integrates fine-grained segmentation with a data-driven risk scoring mechanism. By leveraging a more nuanced understanding of privacy risk, our method enables more effective protection without unnecessarily restricting users’ access to critical information. We evaluate our framework using the BIV-Priv-Seg dataset and show that FiG-Priv preserves +26% of image content, enhancing the ability of VLMs to provide useful responses by 11% and identify the image content by 45%, while ensuring privacy protection.
[ "Jeffri Murrugarra-Llerena", "Haoran Niu", "K. Suzanne Barber", "Hal Daumé III", "Yang Trista Cao", "Paola Cascante-Bonilla" ]
https://openreview.net/forum?id=hLjoekkPiJ
hLjoekkPiJ
hLjoekkPiJ
[ "~Jeffri_Murrugarra-Llerena1", "~Haoran_Niu1", "~K._Suzanne_Barber1", "~Hal_Daumé_III1", "~Yang_Trista_Cao1", "~Paola_Cascante-Bonilla1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/c24fbd6345ba739cef3f37e046cb5b2524fc8652.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "visual language models", "privacy", "safety", "accessibility" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ murrugarra-llerena2025beyond, title={Beyond Blanket Masking: Examining Granularity for Privacy Protection in Images Captured by Blind and Low Vision Users}, author={Jeffri Murrugarra-Llerena and Haoran Niu and K. Suzanne Barber and Hal Daum{\'e} III and Yang Trista Cao and Paola Cascante-Bonilla}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=hLjoekkPiJ} }
murrugarrallerena|beyond_blanket_masking_examining_granularity_for_privacy_protection_in_images_captured_by_blind_and_low_vision_users
null
null
null
null
null
Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning
This paper proposes a new RL framework to pursue the performance limit that can be achieved through outcome reward-based reinforcement learning for mathematical reasoning tasks, where only binary outcome rewards are easily accessible.
Reasoning abilities, especially those for solving complex math problems, are crucial components of general intelligence. Recent advances by proprietary companies, such as OpenAI's o-series models, have yielded remarkable progress on reasoning tasks. However, the complete technical details remain unrevealed, and the only techniques believed with certainty to be adopted are reinforcement learning (RL) and long chains of thought. This paper proposes a new RL framework, termed OREAL, to pursue the performance limit that can be achieved through **O**utcome **RE**w**A**rd-based reinforcement **L**earning for mathematical reasoning tasks, where only binary outcome rewards are easily accessible. We theoretically prove that behavior cloning on positive trajectories from best-of-N (BoN) sampling is sufficient to learn the KL-regularized optimal policy in binary feedback environments. This formulation further implies that the rewards of negative samples should be reshaped to ensure gradient consistency between positive and negative samples. To alleviate the long-standing difficulties brought by sparse rewards in RL, which are even exacerbated by the partial correctness of the long chain of thought for reasoning tasks, we further apply a token-level reward model to sample important tokens in reasoning trajectories for learning. With OREAL, for the first time, a 7B model can obtain 94.0 pass@1 accuracy on MATH-500 through RL, being on par with 32B models. OREAL-32B also surpasses previous 32B models trained by distillation, with 95.0 pass@1 accuracy on MATH-500. Our investigation also indicates the importance of initial policy models and training queries for RL. Code, models, and data are available at https://github.com/InternLM/OREAL.
[ "Chengqi Lyu", "Songyang Gao", "Yuzhe Gu", "Wenwei Zhang", "Jianfei Gao", "Kuikun Liu", "Ziyi Wang", "Shuaibin Li", "Qian Zhao", "Haian Huang", "Weihan Cao", "Jiangning Liu", "Hongwei Liu", "Junnan Liu", "Songyang Zhang", "Dahua Lin", "Kai Chen" ]
https://openreview.net/forum?id=hLg2rzBJR2
hLg2rzBJR2
hLg2rzBJR2
[ "~Chengqi_Lyu1", "~Songyang_Gao1", "~Yuzhe_Gu1", "~Wenwei_Zhang1", "~Jianfei_Gao1", "~Kuikun_Liu1", "~Ziyi_Wang30", "~Shuaibin_Li2", "~Qian_Zhao10", "~Haian_Huang1", "~Weihan_Cao1", "~Jiangning_Liu1", "~Hongwei_Liu2", "~Junnan_Liu1", "~Songyang_Zhang1", "~Dahua_Lin1", "~Kai_Chen4" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/5d9ab08857ddd9047f14e3abbcb0430b6b09ada9.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Large Language Model", "Reinforcement Learning", "Mathmatical Reasoning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ lyu2025exploring, title={Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning}, author={Chengqi Lyu and Songyang Gao and Yuzhe Gu and Wenwei Zhang and Jianfei Gao and Kuikun Liu and Ziyi Wang and Shuaibin Li and Qian Zhao and Haian Huang and Weihan Cao and Jiangning Liu and Hongwei Liu and Junnan Liu and Songyang Zhang and Dahua Lin and Kai Chen}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=hLg2rzBJR2} }
lyu|exploring_the_limit_of_outcome_reward_for_learning_mathematical_reasoning
null
null
null
null
null
The World According to LLMs: How Geographic Origin Influences LLMs' Entity Deduction Capabilities
LLMs demonstrate geographic bias in entity deduction games, with notably better performance on entities from the Global North and West despite controlling for language, popularity and frequency factors.
Large Language Models (LLMs) have been extensively tuned to mitigate explicit biases, yet they often exhibit subtle implicit biases rooted in their pre-training data. Rather than directly probing LLMs with human-crafted questions that may trigger guardrails, we propose studying how models behave when they proactively ask questions themselves. The 20 Questions game, a multi-turn deduction task, serves as an ideal testbed for this purpose. We systematically evaluate geographic performance disparities in entity deduction using a new dataset, Geo20Q+, consisting of both notable people and culturally significant objects (e.g., foods, landmarks, animals) from diverse regions. We test popular LLMs across two gameplay configurations (canonical 20-question and unlimited turns) and in seven languages (English, Hindi, Mandarin, Japanese, French, Spanish, and Turkish). Our results reveal geographic disparities: LLMs are substantially more successful at deducing entities from the _Global North_ than the _Global South_, and the _Global West_ than the _Global East_. While Wikipedia pageviews and pre-training corpus frequency correlate mildly with performance, they fail to fully explain these disparities. Notably, the language in which the game is played has minimal impact on performance gaps. These findings demonstrate the value of creative, _free-form_ evaluation frameworks for uncovering subtle biases in LLMs that remain hidden in standard prompting setups. By analyzing how models initiate and pursue reasoning goals over multiple turns, we find geographic and cultural disparities embedded in their reasoning processes. We release the dataset (Geo20Q+) and code at https://sites.google.com/view/llmbias20q/home.
[ "Harsh Nishant Lalai", "Raj Sanjay Shah", "Jiaxin Pei", "Sashank Varma", "Yi-Chia Wang", "Ali Emami" ]
https://openreview.net/forum?id=hJtvCfDfs1
hJtvCfDfs1
hJtvCfDfs1
[ "~Harsh_Nishant_Lalai1", "~Raj_Sanjay_Shah2", "~Jiaxin_Pei1", "~Sashank_Varma1", "~Yi-Chia_Wang2", "~Ali_Emami3" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/ac4433d186627b5e757785570169152bc828290f.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "geographic representation", "LLM evaluation", "fairness and bias", "reasoning capabilities", "cross-cultural NLP", "interactive question answering" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ lalai2025the, title={The World According to {LLM}s: How Geographic Origin Influences {LLM}s' Entity Deduction Capabilities}, author={Harsh Nishant Lalai and Raj Sanjay Shah and Jiaxin Pei and Sashank Varma and Yi-Chia Wang and Ali Emami}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=hJtvCfDfs1} }
lalai|the_world_according_to_llms_how_geographic_origin_influences_llms_entity_deduction_capabilities
/attachment/074af523bf6b035bfdf44772c9856eb01071c044.zip
null
null
null
null
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
We introduce FacTool, a tool augmented factuality detection framework that can effectively detect diverse factual errors generated by LLMs.
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text but has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now faces an increasing risk of containing factual errors when handled by generative models. (2) The content generated by these models tends to be lengthy and lacks clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking. With the above challenges in mind, in this paper, we propose FacTool, a tool-augmented multi-task and multi-domain framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FacTool with a ChatGPT plugin at https://github.com/GAIR-NLP/factool.
[ "Ethan Chern", "Steffi Chern", "Shiqi Chen", "Weizhe Yuan", "Kehua Feng", "Chunting Zhou", "Junxian He", "Graham Neubig", "Pengfei Liu" ]
https://openreview.net/forum?id=hJkQL9VtWT
hJkQL9VtWT
hJkQL9VtWT
[ "~Ethan_Chern1", "~Steffi_Chern1", "~Shiqi_Chen3", "~Weizhe_Yuan1", "~Kehua_Feng1", "~Chunting_Zhou1", "~Junxian_He1", "~Graham_Neubig1", "~Pengfei_Liu1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/56beb802a5975ad23bc08b1f979c1bbbfdf9b831.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "factuality", "llm", "hallucination" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ chern2025factool, title={FacTool: Factuality Detection in Generative {AI} -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios}, author={Ethan Chern and Steffi Chern and Shiqi Chen and Weizhe Yuan and Kehua Feng and Chunting Zhou and Junxian He and Graham Neubig and Pengfei Liu}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=hJkQL9VtWT} }
chern|factool_factuality_detection_in_generative_ai_a_tool_augmented_framework_for_multitask_and_multidomain_scenarios
null
null
null
null
null
Overflow Prevention Enhances Long-Context Recurrent LLMs
We identify that recurrent LLMs suffer from recurrent memory overflows that limit their performance in long-context tasks. We propose OPRM, a training-free overflow-prevention mechanism that achieves significant gains in many long-context tasks.
A recent trend in LLMs is developing recurrent sub-quadratic models that improve long-context processing efficiency. We investigate leading large long-context models, focusing on how their fixed-size recurrent memory affects their performance. Our experiments reveal that, even when these models are trained for extended contexts, their long contexts remain underutilized. Specifically, we demonstrate that a chunk-based inference procedure, which identifies and processes only the most relevant portion of the input, can mitigate recurrent memory failures and be effective for many long-context tasks: On LongBench, our method improves the overall performance of Falcon3-Mamba-Inst-7B by 14%, Falcon-Mamba-Inst-7B by 28%, RecurrentGemma-IT-9B by 50%, and RWKV6-Finch-7B by 51%. Surprisingly, this simple approach also leads to state-of-the-art results on the challenging LongBench v2 benchmark, showing competitive performance with equivalently sized Transformers. Furthermore, our findings raise questions about whether recurrent models genuinely exploit long-range dependencies across multiple chunks, since our single-chunk strategy delivers stronger performance, even in tasks that presumably require cross-segment relations. We will release our code.
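A minimal sketch of the chunk-based, overflow-avoiding inference idea described in this abstract. The `model.generate_with_confidence` interface and the use of a confidence score to pick a single chunk to answer from are assumptions for illustration, not the paper's exact selection criterion.

```python
def split_into_chunks(tokens, chunk_len):
    """Split a long context into fixed-size chunks so that no single forward
    pass exceeds the recurrent model's effective memory capacity."""
    return [tokens[i:i + chunk_len] for i in range(0, len(tokens), chunk_len)]


def answer_from_best_chunk(model, context_tokens, query_tokens, chunk_len=2048):
    """Process each chunk independently together with the query, then keep the
    answer the model was most confident about. Only one chunk ever occupies
    the recurrent state, which prevents memory overflow."""
    best_answer, best_confidence = None, float("-inf")
    for chunk in split_into_chunks(context_tokens, chunk_len):
        # `generate_with_confidence` is a hypothetical helper returning the
        # decoded answer and, e.g., its average token log-probability.
        answer, confidence = model.generate_with_confidence(chunk + query_tokens)
        if confidence > best_confidence:
            best_answer, best_confidence = answer, confidence
    return best_answer
```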
[ "Assaf Ben-Kish", "Itamar Zimerman", "Muhammad Jehanzeb Mirza", "Lior Wolf", "James R. Glass", "Leonid Karlinsky", "Raja Giryes" ]
https://openreview.net/forum?id=h99hJlU99U
h99hJlU99U
h99hJlU99U
[ "~Assaf_Ben-Kish1", "~Itamar_Zimerman1", "~Muhammad_Jehanzeb_Mirza1", "~Lior_Wolf1", "~James_R._Glass1", "~Leonid_Karlinsky3", "~Raja_Giryes1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/efdf046a9a269338d76057e9cbd738e378ebcfd4.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Mamba", "Sub-Quadratic Models", "Long Context", "Long-Range Language Modeling", "RNNs" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ ben-kish2025overflow, title={Overflow Prevention Enhances Long-Context Recurrent {LLM}s}, author={Assaf Ben-Kish and Itamar Zimerman and Muhammad Jehanzeb Mirza and Lior Wolf and James R. Glass and Leonid Karlinsky and Raja Giryes}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=h99hJlU99U} }
benkish|overflow_prevention_enhances_longcontext_recurrent_llms
/attachment/d052d7d6c9526dfe93dd56e2282dfe73efb8d2a8.zip
null
null
null
null
Both Direct and Indirect Evidence Contribute to Dative Alternation Preferences in Language Models
Through manipulating word-order preferences in datives and non-datives in the training sets of language models, we find that they acquire dative alternation preferences from both direct and indirect evidence.
Language models (LMs) tend to show human-like preferences on a number of syntactic phenomena, but the extent to which these are attributable to direct exposure to the phenomena or more general properties of language is unclear. We explore this with the English dative alternation (DO: "gave Y the X" vs. PO: "gave the X to Y"), using a controlled rearing paradigm wherein we iteratively train small LMs on systematically manipulated input. We focus on two properties that affect the choice of alternant: length and animacy. Both properties are directly present in datives but also reflect more global tendencies for shorter elements to precede longer ones and animates to precede inanimates. First, by manipulating and ablating datives for these biases in the input, we show that direct evidence of length and animacy matters, but easy-first preferences persist even without such evidence. Then, using LMs trained on systematically perturbed datasets to manipulate global length effects (re-linearizing sentences globally while preserving dependency structure), we find that dative preferences can emerge from indirect evidence. We conclude that LMs' emergent syntactic preferences come from a mix of direct and indirect sources.
[ "Qing Yao", "Kanishka Misra", "Leonie Weissweiler", "Kyle Mahowald" ]
https://openreview.net/forum?id=h5SRsDax8v
h5SRsDax8v
h5SRsDax8v
[ "~Qing_Yao1", "~Kanishka_Misra1", "~Leonie_Weissweiler1", "~Kyle_Mahowald1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/8e4eb5c7ff5e692e1ee71bfed0fc2db5ec51f2cc.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "linguistics", "dative alternation", "indirect evidence", "language learning", "cognitive science", "linguistic constructions" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ yao2025both, title={Both Direct and Indirect Evidence Contribute to Dative Alternation Preferences in Language Models}, author={Qing Yao and Kanishka Misra and Leonie Weissweiler and Kyle Mahowald}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=h5SRsDax8v} }
yao|both_direct_and_indirect_evidence_contribute_to_dative_alternation_preferences_in_language_models
/attachment/95f95f9867e13b8b85bd505fc06b68fc5343fa85.zip
null
null
null
null
Training Plug-and-Play Knowledge Modules with Deep Context Distillation
We encapsulate knowledge from a document inside a LoRA adapter via distillation
Dynamically integrating new or rapidly evolving information after (Large) Language Model pre-training remains challenging, particularly in low-data scenarios or when dealing with private and specialized documents. In-context learning and retrieval-augmented generation (RAG) face limitations, including their high inference costs and their inability to capture global document information. In this paper, we propose a way of modularizing knowledge by training document-level Knowledge Modules (KMs). KMs are lightweight components implemented as parameter-efficient LoRA modules, which are trained to store information about new documents and can be easily plugged into models on demand. We show that next-token prediction performs poorly as the training objective for KMs. We instead propose Deep Context Distillation: we learn KM parameters so as to simulate the hidden states and logits of a teacher that takes the document in context. Our method outperforms standard next-token prediction and pre-instruction training techniques across two datasets. Finally, we highlight synergies between KMs and retrieval-augmented generation.
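The following PyTorch sketch shows one plausible form of the Deep Context Distillation objective described above: the LoRA-equipped student (which does not see the document) is trained to match the logits and hidden states of a frozen teacher that does see the document in context. The layer-by-layer matching, detachment, and `alpha` weighting are illustrative assumptions, not the paper's exact loss.

```python
import torch.nn.functional as F


def deep_context_distillation_loss(student_out, teacher_out, alpha=1.0):
    """Combine a KL term on output distributions with an L2 term on
    intermediate hidden states. Both `student_out` and `teacher_out` are
    assumed to expose `.logits` and `.hidden_states` over the same target
    tokens; the teacher is frozen, so its tensors are detached."""
    kl = F.kl_div(
        F.log_softmax(student_out.logits, dim=-1),
        F.softmax(teacher_out.logits.detach(), dim=-1),
        reduction="batchmean",
    )
    hidden = sum(
        F.mse_loss(hs, ht.detach())
        for hs, ht in zip(student_out.hidden_states, teacher_out.hidden_states)
    )
    return kl + alpha * hidden
```

In this sketch, only the LoRA parameters of the knowledge module would receive gradients; the backbone and the teacher stay frozen.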
[ "Lucas Caccia", "Alan Ansell", "Edoardo Ponti", "Ivan Vulić", "Alessandro Sordoni" ]
https://openreview.net/forum?id=ghyyHZYORi
ghyyHZYORi
ghyyHZYORi
[ "~Lucas_Caccia1", "~Alan_Ansell1", "~Edoardo_Ponti1", "~Ivan_Vulić1", "~Alessandro_Sordoni2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/e6440e102c61ff2e77b350b613c2c70b72b3c8cc.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "knowledge extraction", "document understanding", "modular learning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ caccia2025training, title={Training Plug-and-Play Knowledge Modules with Deep Context Distillation}, author={Lucas Caccia and Alan Ansell and Edoardo Ponti and Ivan Vuli{\'c} and Alessandro Sordoni}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=ghyyHZYORi} }
caccia|training_plugandplay_knowledge_modules_with_deep_context_distillation
null
null
null
null
null
StagFormer: Time Staggering Decoder only Transformers
We propose a novel variant of the Transformer architecture for decoder only language modeling where we break the causal flow of information along the layers of a model by staggering in the time axis.
Standard decoding in a Transformer-based language model is inherently sequential, as we wait for a token's embedding to pass through all the layers in the network before starting the generation of the next token. In this work, we propose a new architecture, StagFormer (Staggered Transformer), which staggers execution along the time axis and thereby enables parallelizing the decoding process along the depth of the model. We achieve this by breaking the dependency of the token representation at time step $i$ in layer $l$ upon the representations of tokens until time step $i$ from layer $l-1$. Instead, we stagger the execution and only allow a dependency on token representations until time step $i-1$. The later sections of the Transformer still get access to the "rich" representations from the prior section, but only from those token positions which are one time step behind. StagFormer allows different sections of the model to be executed in parallel, yielding up to 33% speedup in decoding while being quality neutral. We also explore many natural variants of this idea. We present how weight-sharing across the different staggered sections can be more practical in settings with limited memory. We show how one can approximate a recurrent model during inference using such weight-sharing. We explore the efficacy of using bounded-window attention to pass information from one section to another, which helps drive further latency gains for some applications. We also demonstrate the scalability of the staggering idea over more than 2 sections of the Transformer.
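A toy sketch of the staggered dependency pattern described above: the second section of layers at position i consumes first-section states only up to position i-1, which is what allows the two sections to run in parallel during decoding. The additive combination below stands in for whatever cross-section conditioning the actual architecture uses; `section1` and `section2` are placeholder layer stacks.

```python
import torch


def staggered_forward(section1, section2, x):
    """Training-time view of a two-section staggered stack: the first
    section's outputs are delayed by one position before conditioning the
    second section, so position i only ever sees h1[:, i - 1]."""
    h1 = section1(x)                                  # [batch, time, dim]
    pad = torch.zeros_like(h1[:, :1])
    h1_shifted = torch.cat([pad, h1[:, :-1]], dim=1)  # shifted right by one step
    return section2(x + h1_shifted)                   # stand-in for cross-section attention
```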
[ "Dylan J Cutler", "Arun Kandoor", "Nishanth Dikkala", "Nikunj Saunshi", "Xin Wang", "Rina Panigrahy" ]
https://openreview.net/forum?id=gOKTe1KI8K
gOKTe1KI8K
gOKTe1KI8K
[ "~Dylan_J_Cutler1", "~Arun_Kandoor1", "~Nishanth_Dikkala1", "~Nikunj_Saunshi1", "~Xin_Wang30", "~Rina_Panigrahy1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/ba36b6a43243381888f9fe458c1dd690d8466235.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "staggered execution", "decoder only language models", "efficiency", "novel architectures", "generative models" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ cutler2025stagformer, title={StagFormer: Time Staggering Decoder only Transformers}, author={Dylan J Cutler and Arun Kandoor and Nishanth Dikkala and Nikunj Saunshi and Xin Wang and Rina Panigrahy}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=gOKTe1KI8K} }
cutler|stagformer_time_staggering_decoder_only_transformers
null
null
null
null
null
X-Teaming: Multi-Turn Jailbreaks and Defenses with Adaptive Multi-Agents
X-Teaming, a scalable multi-agent framework that achieves state-of-the-art multi-turn jailbreaking of language models while X-Guard trains models to defend against these attacks.
Multi-turn interactions with language models (LMs) pose critical safety risks, as harmful intent can be strategically spread across exchanges. Yet, the vast majority of prior work has focused on single-turn safety, while adaptability and diversity remain among the key challenges of multi-turn red-teaming. To address these challenges, we present X-Teaming, a scalable framework that systematically explores how seemingly harmless interactions escalate into harmful outcomes and generates corresponding attack scenarios. X-Teaming employs collaborative agents for planning, attack optimization, and verification, achieving state-of-the-art multi-turn jailbreak effectiveness and diversity with success rates up to 98.1% across representative leading open-weight and closed-source models. In particular, X-Teaming achieves a 96.2% attack success rate against the latest Claude 3.7 Sonnet model, which has been considered nearly immune to single-turn attacks. Building on X-Teaming, we introduce X-Guard-Train, an open-source multi-turn safety training dataset that's $\sim 20\times$ larger than the previous best resource, comprising 30K interactive jailbreaks, designed to enable robust multi-turn safety alignment for LMs. Our work offers essential tools and insights for mitigating sophisticated conversational attacks, advancing the multi-turn safety of LMs.
[ "Salman Rahman", "Liwei Jiang", "James Shiffer", "Genglin Liu", "Sheriff Issaka", "Md Rizwan Parvez", "Hamid Palangi", "Kai-Wei Chang", "Yejin Choi", "Saadia Gabriel" ]
https://openreview.net/forum?id=gKfj7Jb1kj
gKfj7Jb1kj
gKfj7Jb1kj
[ "~Salman_Rahman1", "~Liwei_Jiang2", "~James_Shiffer1", "~Genglin_Liu1", "~Sheriff_Issaka1", "~Md_Rizwan_Parvez1", "~Hamid_Palangi1", "~Kai-Wei_Chang1", "~Yejin_Choi1", "~Saadia_Gabriel1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/3e4a1c4d08cce66b395c76b84a9a11c7dcae8c65.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Multi-turn Jailbreaks", "Adaptive Multi-Agent", "Conversational AI Safety", "Red-Teaming", "Defensive Alignment" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ rahman2025xteaming, title={X-Teaming: Multi-Turn Jailbreaks and Defenses with Adaptive Multi-Agents}, author={Salman Rahman and Liwei Jiang and James Shiffer and Genglin Liu and Sheriff Issaka and Md Rizwan Parvez and Hamid Palangi and Kai-Wei Chang and Yejin Choi and Saadia Gabriel}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=gKfj7Jb1kj} }
rahman|xteaming_multiturn_jailbreaks_and_defenses_with_adaptive_multiagents
null
null
null
null
null
SQuat: Subspace-orthogonal KV Cache Quantization
We propose a KV cache quantization method that preserves task-critical information throughout the quantization process.
The key-value (KV) cache accelerates LLM decoding by storing KV tensors from previously generated tokens. It reduces redundant computation at the cost of increased memory usage. To mitigate this overhead, existing approaches compress KV tensors into lower-bit representations; however, quantization errors can accumulate as more tokens are generated, potentially resulting in undesired outputs. In this paper, we introduce SQuat (Subspace-orthogonal KV cache quantization). It first constructs a subspace spanned by query tensors to capture the most critical task-related information. During key tensor quantization, it enforces that the difference between the (de)quantized and original keys remains orthogonal to this subspace, minimizing the impact of quantization errors on the attention mechanism’s outputs. SQuat requires no model fine-tuning, no additional calibration dataset for offline learning, and is grounded in a theoretical framework we develop. Through numerical experiments, we show that our method reduces peak memory by $2.17\times \sim 2.82\times$, improves throughput by $2.45\times \sim 3.60 \times$, and achieves more favorable benchmark scores than existing KV cache quantization algorithms.
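A toy numpy illustration of the orthogonality constraint described above: after quantizing a key, the residual error is projected out of a subspace spanned by recent query tensors, so attention logits computed with in-subspace queries are unaffected. The uniform quantizer, the SVD-based subspace construction, and the rank are placeholder choices, and the naive post-hoc correction shown here is not the paper's actual quantization algorithm (in particular, the corrected key is no longer an exact low-bit code).

```python
import numpy as np


def query_subspace(Q, rank=4):
    """Orthonormal basis (d x rank) for the dominant directions of the
    recent query tensors Q (n x d)."""
    _, _, vt = np.linalg.svd(Q, full_matrices=False)
    return vt[:rank].T


def toy_quantize(x, bits=2):
    """Per-vector min-max uniform quantizer, returned in dequantized form."""
    lo, hi = x.min(), x.max()
    levels = 2 ** bits - 1
    code = np.round((x - lo) / (hi - lo + 1e-8) * levels)
    return code / levels * (hi - lo + 1e-8) + lo


def subspace_orthogonal_key(k, Q, bits=2, rank=4):
    """Make the (de)quantization error of key k orthogonal to span(Q)."""
    U = query_subspace(Q, rank)        # d x r, orthonormal columns
    k_hat = toy_quantize(k, bits)
    err = k_hat - k
    return k_hat - U @ (U.T @ err)     # remove the in-subspace error component


# Sanity check: logits are exact for any query lying inside the subspace.
rng = np.random.default_rng(0)
Q, k = rng.normal(size=(32, 64)), rng.normal(size=64)
q_in = query_subspace(Q) @ rng.normal(size=4)
assert abs(q_in @ subspace_orthogonal_key(k, Q) - q_in @ k) < 1e-8
```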
[ "Hao Wang", "Ligong Han", "Kai Xu", "Akash Srivastava" ]
https://openreview.net/forum?id=gKdhzBiHay
gKdhzBiHay
gKdhzBiHay
[ "~Hao_Wang22", "~Ligong_Han1", "~Kai_Xu4", "~Akash_Srivastava1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/ed8d739e7f3da1993f38d6c9bb76c233b8435be1.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "KV cache quantization", "LLMs" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ wang2025squat, title={{SQ}uat: Subspace-orthogonal {KV} Cache Quantization}, author={Hao Wang and Ligong Han and Kai Xu and Akash Srivastava}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=gKdhzBiHay} }
wang|squat_subspaceorthogonal_kv_cache_quantization
null
null
null
null
null
KVSink: Understanding and Enhancing the Preservation of Attention Sinks in KV Cache Quantization for LLMs
Understanding and Enhancing the Preservation of Attention Sinks in KV Cache Quantization for LLMs
Key-Value (KV) cache quantization has become a widely adopted optimization technique for efficient large language models (LLMs) inference by reducing KV cache memory usage and mitigating memory-bound constraints. Recent studies have emphasized the importance of preserving the original precision of KVs for the first few tokens to ensure the protection of attention sinks. While this approach has proven effective in mitigating performance degradation, its underlying principles remain insufficiently understood. Moreover, it fails to address the recent discovery that attention sinks can emerge beyond the initial token positions. In this work, we elucidate the underlying mechanisms of attention sinks during inference by examining their role in the cross-layer evolution of extreme activation outliers. Additionally, we provide a comprehensive analysis of the interplay between attention sinks and KV cache quantization. Based on our enhanced understanding, we introduce KVSink, a plug-and-play method that effectively predicts sink tokens with negligible overhead, enabling more thorough preservation. Extensive experiments demonstrate that KVSink outperforms the existing Preserve-First-N (PFN) strategy, offering more effective preservation of attention sinks during KV cache quantization. Moreover, when applied to the well-established KVQuant method, KVSink further improves perplexity (PPL) and reduces reliance on 16-bit numerical outliers.
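To make the preservation mechanics concrete, here is a small PyTorch sketch that keeps a chosen set of token positions (attention sinks) in full precision while round-tripping the rest of the KV cache through a toy per-token uniform quantizer. Passing the first N positions as `keep_idx` reproduces the Preserve-First-N baseline, whereas KVSink would supply predicted sink positions; the quantizer itself is a placeholder, not KVQuant.

```python
import torch


def quantize_kv_except(k, v, keep_idx, bits=4):
    """Fake-quantize KV tensors of shape [..., seq_len, head_dim], keeping the
    entries at `keep_idx` (the preserved sink positions) in full precision."""
    def fake_quant(x):
        lo = x.amin(dim=-1, keepdim=True)
        hi = x.amax(dim=-1, keepdim=True)
        scale = (hi - lo).clamp_min(1e-8) / (2 ** bits - 1)
        return torch.round((x - lo) / scale) * scale + lo

    keep = torch.zeros(k.shape[-2], 1, dtype=torch.bool)
    keep[list(keep_idx)] = True        # broadcasts over the head dimension
    return torch.where(keep, k, fake_quant(k)), torch.where(keep, v, fake_quant(v))


# Preserve-First-N baseline vs. predicted sinks (positions here are made up).
k = torch.randn(2, 8, 128, 64)         # [batch, heads, seq, dim]
v = torch.randn(2, 8, 128, 64)
k_pfn, v_pfn = quantize_kv_except(k, v, keep_idx=range(4))
k_sink, v_sink = quantize_kv_except(k, v, keep_idx=[0, 17, 63])
```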
[ "Zunhai Su", "Kehong Yuan" ]
https://openreview.net/forum?id=gIqb6zWZoO
gIqb6zWZoO
gIqb6zWZoO
[ "~Zunhai_Su1", "~Kehong_Yuan1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/b192f1bfa40f51a5501e78eb839aa697e1a1a6c3.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "quantization", "kv cache", "transformer", "llm", "attention" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ su2025kvsink, title={{KVS}ink: Understanding and Enhancing the Preservation of Attention Sinks in {KV} Cache Quantization for {LLM}s}, author={Zunhai Su and Kehong Yuan}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=gIqb6zWZoO} }
su|kvsink_understanding_and_enhancing_the_preservation_of_attention_sinks_in_kv_cache_quantization_for_llms
null
true
null
null
null
Efficient Construction of Model Family through Progressive Training Using Model Expansion
We propose a progressive training approach that efficiently builds a family of LLMs, reducing total computational requirements while achieving comparable or even better performance.
As Large Language Models (LLMs) gain widespread practical application, offering model families with varying parameter sizes has become standard practice to accommodate diverse computational requirements. Traditionally, each model in the family is trained independently, incurring computational costs that scale additively with the number of models. In this work, we propose an efficient method for constructing model families via progressive training, where smaller models are incrementally expanded to larger sizes to create a complete model family. Through extensive experiments on a model family ranging from 1B to 8B parameters, we show that our approach reduces total computational cost by approximately 25% while maintaining comparable performance to independently trained models. Moreover, by strategically adjusting the maximum learning rate based on model size, our method outperforms independent training across various metrics. Beyond these improvements, our approach also fosters greater consistency in behavior across model sizes.
[ "Kazuki Yano", "Sho Takase", "Sosuke Kobayashi", "Shun Kiyono", "Jun Suzuki" ]
https://openreview.net/forum?id=fuBrcTH8NM
fuBrcTH8NM
fuBrcTH8NM
[ "~Kazuki_Yano1", "~Sho_Takase2", "~Sosuke_Kobayashi1", "~Shun_Kiyono1", "~Jun_Suzuki1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/5558b4586a726c29e22f45704837021d08ddb3bf.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "pre-training", "model familly", "compute efficiency" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ yano2025efficient, title={Efficient Construction of Model Family through Progressive Training Using Model Expansion}, author={Kazuki Yano and Sho Takase and Sosuke Kobayashi and Shun Kiyono and Jun Suzuki}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=fuBrcTH8NM} }
yano|efficient_construction_of_model_family_through_progressive_training_using_model_expansion
null
null
null
null
null
UNVEILING: What Makes Linguistics Olympiad Puzzles Tricky for LLMs?
The study presents a linguistic-feature-based annotation of Linguistics Olympiad puzzles to find LLM weaknesses; LLMs struggle with puzzles that involve higher morphological complexity, are dissimilar to English, or are data-constrained.
Large language models (LLMs) have demonstrated potential in reasoning tasks, but their performance on linguistics puzzles remains consistently poor. These puzzles, often derived from Linguistics Olympiad (LO) contests, provide a minimal contamination environment to assess LLMs' linguistic reasoning abilities across low-resource languages. This work analyses LLMs' performance on 629 problems across 41 low-resource languages by labelling each with linguistically informed features to unveil weaknesses. Our analyses show that LLMs struggle with puzzles involving higher morphological complexity and perform better on puzzles involving linguistic features that are also found in English. We also show that splitting words into morphemes as a pre-processing step improves solvability, indicating a need for more informed and language-specific tokenisers. These findings thus offer insights into some challenges in linguistic reasoning and modelling of low-resource languages.
[ "Mukund Choudhary", "KV Aditya Srivatsa", "Gaurja Aeron", "Antara Raaghavi Bhattacharya", "Dang Khoa Dang Dinh", "Ikhlasul Akmal Hanif", "Daria Kotova", "Ekaterina Kochmar", "Monojit Choudhury" ]
https://openreview.net/forum?id=fcRcl1EXc4
fcRcl1EXc4
fcRcl1EXc4
[ "~Mukund_Choudhary1", "~KV_Aditya_Srivatsa1", "~Gaurja_Aeron1", "~Antara_Raaghavi_Bhattacharya1", "~Dang_Khoa_Dang_Dinh1", "~Ikhlasul_Akmal_Hanif1", "~Daria_Kotova1", "~Ekaterina_Kochmar2", "~Monojit_Choudhury1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/e2a9b0fcf4a176a28b878dd7d16d5725a0164c0c.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "linguistic reasoning", "metalinguistics", "LLM evaluation", "morphology", "linguistics olympiad", "interpretability", "low resource languages", "annotation" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ choudhary2025unveiling, title={{UNVEILING}: What Makes Linguistics Olympiad Puzzles Tricky for {LLM}s?}, author={Mukund Choudhary and KV Aditya Srivatsa and Gaurja Aeron and Antara Raaghavi Bhattacharya and Dang Khoa Dang Dinh and Ikhlasul Akmal Hanif and Daria Kotova and Ekaterina Kochmar and Monojit Choudhury}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=fcRcl1EXc4} }
choudhary|unveiling_what_makes_linguistics_olympiad_puzzles_tricky_for_llms
null
null
null
null
null
AgentRewardBench: Evaluating Automatic Evaluations of Web Agent Trajectories
AgentRewardBench is a benchmark that evaluates how well Large Language Model (LLM) judges align with human preferences when assessing autonomous web agents, using over 1,200 expert-reviewed trajectories across five benchmarks
Web agents enable users to perform tasks on web browsers through natural language interaction. Evaluating web agent trajectories is an important problem, since it helps us determine whether the agent successfully completed the tasks. Rule-based methods are widely used for this purpose, but they are challenging to extend to new tasks and may not always recognize successful trajectories. We may achieve higher accuracy through human evaluation, but the process would be substantially slower and more expensive. Automatic evaluations with LLMs may avoid the challenges of designing new rules and manually annotating trajectories, enabling faster and cost-effective evaluation. However, it is unclear how effective they are at evaluating web agents. To this end, we propose AgentRewardBench, the first benchmark to assess the effectiveness of LLM judges for evaluating web agents. AgentRewardBench contains 1302 trajectories across 5 benchmarks and 4 LLMs. Each trajectory in AgentRewardBench is reviewed by an expert, who answers questions pertaining to the success, side effects, and repetitiveness of the agent. Using our benchmark, we evaluate 12 LLM judges and find that no single LLM excels across all benchmarks. We also find that the rule-based evaluation used by common benchmarks tends to underreport the success rate of web agents, highlighting a key weakness of rule-based evaluation and the need to develop more flexible automatic evaluations. We release the benchmark at: https://agent-reward-bench.github.io
[ "Xing Han Lù", "Amirhossein Kazemnejad", "Nicholas Meade", "Arkil Patel", "Dongchan Shin", "Alejandra Zambrano", "Karolina Stanczak", "Peter Shaw", "Christopher Pal", "Siva Reddy" ]
https://openreview.net/forum?id=fQcUZMPIvu
fQcUZMPIvu
fQcUZMPIvu
[ "~Xing_Han_Lù1", "~Amirhossein_Kazemnejad1", "~Nicholas_Meade1", "~Arkil_Patel1", "~Dongchan_Shin1", "~Alejandra_Zambrano1", "~Karolina_Stanczak1", "~Peter_Shaw1", "~Christopher_Pal1", "~Siva_Reddy1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/9049329e5673f53bcd91fc9e3157946c0407caff.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Agent", "Web Agent", "LLM Judge", "LLM-as-a-Judge", "Digital Agent", "Benchmark", "Reward Modeling" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ lu2025agentrewardbench, title={AgentRewardBench: Evaluating Automatic Evaluations of Web Agent Trajectories}, author={Xing Han L{\`u} and Amirhossein Kazemnejad and Nicholas Meade and Arkil Patel and Dongchan Shin and Alejandra Zambrano and Karolina Stanczak and Peter Shaw and Christopher Pal and Siva Reddy}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=fQcUZMPIvu} }
lù|agentrewardbench_evaluating_automatic_evaluations_of_web_agent_trajectories
null
null
null
null
null
Inside-Out: Hidden Factual Knowledge in LLMs
We introduce a framework for evaluating the gap between the knowledge LLMs encode internally and what they express in their outputs, and provide strong evidence of this gap across popular LLMs.
This work presents a framework for assessing whether large language models (LLMs) encode more factual knowledge in their parameters than what they express in their outputs. While a few studies hint at this possibility, none has clearly defined or demonstrated this phenomenon. We first propose a formal definition of knowledge, quantifying it for a given question as the fraction of correct-incorrect answer pairs where the correct one is ranked higher. This gives rise to external and internal knowledge, depending on the information used to score individual answer candidates: either the model’s observable token-level probabilities or its intermediate computations. Hidden knowledge arises when internal knowledge exceeds external knowledge. We then present a case study, applying this framework to three popular open-weights LLMs in a closed-book QA setup. Our results indicate that: (1) LLMs consistently encode more factual knowledge internally than what they express externally, with an average gap of 40%. (2) Surprisingly, some knowledge is so deeply hidden that a model can internally know an answer perfectly, yet fail to generate it even once, despite large-scale repeated sampling of 1,000 answers. This reveals fundamental limitations in the generation capabilities of LLMs, which (3) puts a practical constraint on scaling test-time compute via repeated answer sampling in closed-book QA: significant performance improvements remain inaccessible because some answers are practically never sampled, yet if they were, we would be guaranteed to rank them first.
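The paper's knowledge definition is simple enough to state in a few lines; the sketch below computes it as the fraction of correct-incorrect answer pairs in which the correct candidate is scored higher. The scoring functions themselves (external token-level probabilities vs. internal probe-based scores) are left abstract here, and the toy numbers are invented for illustration.

```python
from itertools import product


def knowledge_score(correct_scores, incorrect_scores):
    """Fraction of (correct, incorrect) answer pairs where the correct
    candidate is ranked higher by the given scoring function."""
    pairs = list(product(correct_scores, incorrect_scores))
    if not pairs:
        return 0.0
    return sum(1 for c, i in pairs if c > i) / len(pairs)


# Toy example: the same question scored externally (observable probabilities)
# and internally (a hypothetical probe over intermediate computations).
external = knowledge_score([0.31, 0.22], [0.40, 0.18, 0.05])   # 4/6, about 0.67
internal = knowledge_score([0.71, 0.64], [0.33, 0.25, 0.10])   # 6/6 = 1.00
hidden_knowledge = internal - external                         # positive gap = hidden knowledge
```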
[ "Zorik Gekhman", "Eyal Ben-David", "Hadas Orgad", "Eran Ofek", "Yonatan Belinkov", "Idan Szpektor", "Jonathan Herzig", "Roi Reichart" ]
https://openreview.net/forum?id=f7GG1MbsSM
f7GG1MbsSM
f7GG1MbsSM
[ "~Zorik_Gekhman1", "~Eyal_Ben-David1", "~Hadas_Orgad1", "~Eran_Ofek1", "~Yonatan_Belinkov1", "~Idan_Szpektor1", "~Jonathan_Herzig2", "~Roi_Reichart1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/13c1300909f9eb3d5e428b69bc513b2e0ea8d642.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "LLMs", "Knowledge" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ gekhman2025insideout, title={Inside-Out: Hidden Factual Knowledge in {LLM}s}, author={Zorik Gekhman and Eyal Ben-David and Hadas Orgad and Eran Ofek and Yonatan Belinkov and Idan Szpektor and Jonathan Herzig and Roi Reichart}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=f7GG1MbsSM} }
gekhman|insideout_hidden_factual_knowledge_in_llms
null
null
null
null
null
The Unlearning Mirage: A Dynamic Framework for Evaluating LLM Unlearning
A Dynamic Framework for Evaluating LLM Unlearning
Unlearning in Large Language Models (LLMs) aims to enhance safety, mitigate biases, and comply with legal mandates, such as the right to be forgotten. However, existing unlearning methods are brittle: minor query modifications, such as multi-hop reasoning and entity aliasing, can recover supposedly forgotten information. As a result, current evaluation metrics often create an illusion of effectiveness, failing to detect these vulnerabilities due to reliance on static, unstructured benchmarks. We propose a dynamic framework that stress-tests unlearning robustness using complex structured queries. Our approach first elicits knowledge from the target model (pre-unlearning) and constructs targeted probes, ranging from simple queries to multi-hop chains, allowing precise control over query difficulty. Our experiments show that the framework: (1) achieves comparable coverage to existing benchmarks by automatically generating semantically equivalent Q&A probes, (2) aligns with prior evaluations, and (3) uncovers new unlearning failures missed by other benchmarks, particularly in multi-hop settings. Furthermore, activation analyses show that single-hop queries typically follow dominant computation pathways, which are more likely to be disrupted by unlearning methods. In contrast, multi-hop queries tend to use alternative pathways that often remain intact, explaining the brittleness of unlearning techniques in multi-hop settings. Our framework enables practical and scalable evaluation of unlearning methods without the need for manual construction of forget test sets, enabling easier adoption for real-world applications. We release the pip package and the code at https://sites.google.com/view/unlearningmirage/home.
[ "Raj Sanjay Shah", "Jing Huang", "Keerthiram Murugesan", "Nathalie Baracaldo", "Diyi Yang" ]
https://openreview.net/forum?id=exW2SFJK4H
exW2SFJK4H
exW2SFJK4H
[ "~Raj_Sanjay_Shah2", "~Jing_Huang2", "~Keerthiram_Murugesan1", "~Nathalie_Baracaldo1", "~Diyi_Yang2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/c17d1f36ddb6d284517474f548310cba9ce3ae00.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Unlearning evaluation", "Multi-hop reasoning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ shah2025the, title={The Unlearning Mirage: A Dynamic Framework for Evaluating {LLM} Unlearning}, author={Raj Sanjay Shah and Jing Huang and Keerthiram Murugesan and Nathalie Baracaldo and Diyi Yang}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=exW2SFJK4H} }
shah|the_unlearning_mirage_a_dynamic_framework_for_evaluating_llm_unlearning
/attachment/3effe50b2de608ddbd7719e8b65eb2fd13f7c786.zip
true
null
null
null
EvalAgents: Discovering Implicit Evaluation Criteria from the Web
We propose a novel framework EvalAgent that dynamically generates grounded, implicit evaluation criteria for a given prompt based on retrieved expert advice.
Evaluation of language model outputs on structured writing tasks is typically conducted with a number of desirable criteria presented to human evaluators or large language models (LLMs). For instance, on a prompt like "Help me draft an academic talk on coffee intake vs research productivity", a model response may be evaluated for criteria like accuracy and coherence. However, high-quality responses should do more than just satisfy basic task requirements. An effective response to this query should include quintessential features of an academic talk, such as a compelling opening, clear research questions, and a takeaway. To help identify these implicit criteria, we introduce EvalAgent, a novel framework designed to automatically uncover nuanced and task-specific criteria. EvalAgent first mines expert-authored online guidance. It then uses this evidence to propose diverse, long-tail evaluation criteria that are grounded in reliable external sources. Our experiments demonstrate that the grounded criteria produced by EvalAgent are often implicit (not directly stated in the user's prompt), yet specific (high degree of lexical precision). Further, EvalAgent criteria are often not satisfied by initial responses but they are actionable, such that responses can be refined to satisfy them. Finally, we show that combining LLM-generated and EvalAgent criteria uncovers more human-valued criteria than using LLMs alone.
[ "Manya Wadhwa", "Zayne Rea Sprague", "Chaitanya Malaviya", "Philippe Laban", "Junyi Jessy Li", "Greg Durrett" ]
https://openreview.net/forum?id=erGpkHCybv
erGpkHCybv
erGpkHCybv
[ "~Manya_Wadhwa1", "~Zayne_Rea_Sprague1", "~Chaitanya_Malaviya1", "~Philippe_Laban1", "~Junyi_Jessy_Li2", "~Greg_Durrett1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/231c145f57a04150de2e76cfc963745238701095.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "evaluation", "writing", "large language models" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ wadhwa2025evalagents, title={EvalAgents: Discovering Implicit Evaluation Criteria from the Web}, author={Manya Wadhwa and Zayne Rea Sprague and Chaitanya Malaviya and Philippe Laban and Junyi Jessy Li and Greg Durrett}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=erGpkHCybv} }
wadhwa|evalagents_discovering_implicit_evaluation_criteria_from_the_web
null
null
null
null
null
Discovering Knowledge Deficiencies of Language Models on Massive Knowledge Base
We propose stochastic error ascent, an optimization-based framework that efficiently identifies and refines failure modes in LLMs, discovering significantly more errors than existing methods while reducing evaluation costs.
Large language models (LLMs) possess impressive linguistic capabilities but often fail to faithfully retain factual knowledge, leading to hallucinations and unreliable outputs. Understanding LLMs' knowledge deficiencies by exhaustively evaluating against full-scale knowledge bases is computationally prohibitive, especially for closed-weight models. We propose stochastic error ascent (SEA), a scalable and efficient framework for discovering knowledge deficiencies (errors) in closed-weight LLMs under a strict query budget. Rather than naively probing all knowledge candidates, SEA formulates error discovery as a stochastic optimization process: it iteratively retrieves new high-error candidates by leveraging the semantic similarity to previously observed failures. To further enhance search efficiency and coverage, SEA employs hierarchical retrieval across document and paragraph levels, and constructs a relation directed acyclic graph to model error propagation and identify systematic failure modes. Empirically, SEA uncovers 40.7× more knowledge errors than Automated Capability Discovery and 26.7% more than AutoBencher, while reducing the cost-per-error by 599× and 9×, respectively. Human evaluation confirms the high quality of generated questions, while ablation and convergence analyses validate the contribution of each component in SEA. Further analysis on the discovered errors reveals correlated failure patterns across LLM families and recurring deficits, highlighting the need for better data coverage and targeted fine-tuning in future LLM development.
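A high-level Python sketch of the stochastic error ascent loop described above: instead of probing the knowledge base uniformly, each iteration retrieves passages semantically close to previously observed failures and spends the query budget there. `kb.search`, `embed`, `make_question`, and `is_wrong` are placeholder interfaces, and the paper's hierarchical document/paragraph retrieval and relation-DAG analysis are omitted.

```python
def stochastic_error_ascent(llm, kb, embed, make_question, is_wrong,
                            seed_passages, budget, top_k=8):
    """Iteratively concentrate the query budget around the current failure
    frontier instead of sampling the knowledge base uniformly."""
    errors, frontier = [], list(seed_passages)
    while budget > 0 and frontier:
        # Retrieve candidates near previously observed failures.
        candidates = kb.search(embed(frontier), top_k=top_k)
        frontier = []
        for passage in candidates:
            if budget == 0:
                break
            question, gold = make_question(passage)   # QA pair grounded in the passage
            budget -= 1
            if is_wrong(llm(question), gold):          # the model answered incorrectly
                errors.append((question, gold, passage))
                frontier.append(passage)               # ascend toward nearby errors
    return errors
```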
[ "Linxin Song", "Xuwei Ding", "Jieyu Zhang", "Taiwei Shi", "Ryotaro Shimizu", "Rahul Gupta", "Yang Liu", "Jian Kang", "Jieyu Zhao" ]
https://openreview.net/forum?id=eqNItk1sWo
eqNItk1sWo
eqNItk1sWo
[ "~Linxin_Song1", "~Xuwei_Ding1", "~Jieyu_Zhang1", "~Taiwei_Shi1", "~Ryotaro_Shimizu1", "~Rahul_Gupta3", "~Yang_Liu60", "~Jian_Kang1", "~Jieyu_Zhao1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/084e39ff598ebc6cfeb04aecfccaafaf4bf56c00.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Large Language Models", "Evaluation", "Misinformation" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ song2025discovering, title={Discovering Knowledge Deficiencies of Language Models on Massive Knowledge Base}, author={Linxin Song and Xuwei Ding and Jieyu Zhang and Taiwei Shi and Ryotaro Shimizu and Rahul Gupta and Yang Liu and Jian Kang and Jieyu Zhao}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=eqNItk1sWo} }
song|discovering_knowledge_deficiencies_of_language_models_on_massive_knowledge_base
null
null
null
null
null
MuSeD: A Multimodal Spanish Dataset for Sexism Detection in Social Media Videos
We present MuSeD, a multimodal Spanish dataset for sexism detection in videos, with an innovative annotation framework for analyzing the contribution of textual and multimodal labels. We evaluate the performance of LLMs and multimodal LLMs on MuSeD.
Sexism is generally defined as prejudice and discrimination based on sex or gender, affecting every sector of society, from social institutions to relationships and individual behavior. Social media platforms amplify the impact of sexism by conveying discriminatory content not only through text but also across multiple modalities, highlighting the critical need for a multimodal approach to the analysis of sexism online. With the rise of social media platforms where users share short videos, sexism is increasingly spreading through video content. Automatically detecting sexism in videos is a challenging task, as it requires analyzing the combination of verbal, audio, and visual elements to identify sexist content. In this study, (1) we introduce MuSeD, a new Multimodal Spanish dataset for Sexism Detection consisting of ≈ 11 hours of videos extracted from TikTok and BitChute; (2) we propose an innovative annotation framework for analyzing the contributions of textual, vocal, and visual modalities to the classification of content as either sexist or non-sexist; and (3) we evaluate a range of large language models (LLMs) and multimodal LLMs on the task of sexism detection. We find that visual information plays a key role in labeling sexist content for both humans and models. Models effectively detect explicit sexism; however, they struggle with implicit cases, such as stereotypes—instances where annotators also show low agreement. This highlights the inherent difficulty of the task, as identifying implicit sexism depends on the social and cultural context.
[ "Laura De Grazia", "Pol Pastells", "Mauro Vázquez Chas", "Desmond Elliott", "Danae Sanchez Villegas", "Mireia Farrús", "Mariona Taulé Delor" ]
https://openreview.net/forum?id=eSAv7GKVFt
eSAv7GKVFt
eSAv7GKVFt
[ "~Laura_De_Grazia1", "~Pol_Pastells1", "~Mauro_Vázquez_Chas1", "~Desmond_Elliott1", "~Danae_Sanchez_Villegas1", "~Mireia_Farrús1", "~Mariona_Taulé_Delor1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/19195e99dd51cc5a9d15e559e82f4506a2d30845.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "sexism", "multimodal", "classification", "social media", "LLMs" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
true
Dataset contains samples with discrimination/stereotype/inequality/objectification in videos, especially sensitive samples from the BitChute platform, where annotators labeled ~94% of samples as sexist. The proposed Spanish dataset for sexism detection may contribute to the emergence of sexual harassment-related public opinion and provide it with a channel.
@inproceedings{ grazia2025mused, title={MuSeD: A Multimodal Spanish Dataset for Sexism Detection in Social Media Videos}, author={Laura De Grazia and Pol Pastells and Mauro V{\'a}zquez Chas and Desmond Elliott and Danae Sanchez Villegas and Mireia Farr{\'u}s and Mariona Taul{\'e} Delor}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=eSAv7GKVFt} }
grazia|mused_a_multimodal_spanish_dataset_for_sexism_detection_in_social_media_videos
null
null
null
null
null
Meta-Learning for Speeding Up Large Model Inference in Decentralized Environments
We propose a meta-learning framework to optimize inference acceleration in decentralized AI systems based on task-specific data, promoting more cost-effective and scalable AI deployment.
The deployment of large-scale models, such as large language models (LLMs), incurs substantial costs due to their computational demands. To mitigate these costs and address challenges related to scalability and data security, there is a growing shift towards decentralized systems for model deployment, where choosing efficient inference acceleration schemes becomes crucial to manage computational resources effectively and enhance system responsiveness. In this work, we address the challenge of selecting optimal acceleration methods in decentralized systems by introducing a meta-learning-based framework. This framework automates the selection process by learning from historical performance data of various acceleration techniques across different tasks. Unlike traditional methods that rely on random selection or expert intuition, our approach systematically identifies the best acceleration strategies based on the specific characteristics of each task. We demonstrate that our meta-learning framework not only streamlines the decision-making process but also consistently outperforms conventional methods in terms of efficiency and performance. Our results highlight the potential of inference acceleration in decentralized AI systems, offering a path towards more democratic and economically feasible artificial intelligence solutions. Our code and data will be released later.
[ "Yipeng Du", "Zihao Wang", "Ahmad Farhan", "Claudio Angione", "Harry Yang", "Fielding Johnston", "James P. Buban", "Yue Zhao", "Yuzhe Yang" ]
https://openreview.net/forum?id=eLWn2XVMHA
eLWn2XVMHA
eLWn2XVMHA
[ "~Yipeng_Du2", "~Zihao_Wang36", "~Ahmad_Farhan1", "~Claudio_Angione1", "~Harry_Yang2", "~Fielding_Johnston1", "~James_P._Buban1", "~Yue_Zhao13", "~Yuzhe_Yang1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/2448403da60ff435885d137ba6343485b7151611.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Fast inference", "LLMs", "meta learning", "decentralized systems" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ du2025metalearning, title={Meta-Learning for Speeding Up Large Model Inference in Decentralized Environments}, author={Yipeng Du and Zihao Wang and Ahmad Farhan and Claudio Angione and Harry Yang and Fielding Johnston and James P. Buban and Yue Zhao and Yuzhe Yang}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=eLWn2XVMHA} }
du|metalearning_for_speeding_up_large_model_inference_in_decentralized_environments
null
null
null
null
null
(Im)possibility of Automated Hallucination Detection in Large Language Models
We propose a novel theoretical model for hallucination detection and show that it is generally impossible to automate this task only with positive examples; however, if we have negative examples, the task becomes much easier.
Is automated hallucination detection fundamentally possible? In this paper, we introduce a theoretical framework to rigorously study the (im)possibility of automatically detecting hallucinations produced by large language models (LLMs). Our model builds on the classical Gold-Angluin framework of language identification and its recent adaptation by Kleinberg and Mullainathan to the language generation setting. Concretely, we investigate whether an algorithm—trained on examples from an unknown target language $K$, chosen from a countable collection of languages $\mathcal{L}$, and given access to an LLM—can reliably determine if the LLM’s outputs are correct or constitute hallucinations. First, we establish a strong equivalence between hallucination detection and the classical problem of language identification. Specifically, we prove that any algorithm capable of identifying languages (in the limit) can be efficiently transformed into one that reliably detects hallucinations, and conversely, any successful hallucination detection strategy inherently implies language identification. Given the notorious difficulty of language identification, our first result implies that hallucination detection is *impossible* for most collections of languages. Second, we show that once we enrich the detector’s training data, i.e., providing it with both positive examples (correct statements) and negative examples (explicitly labeled incorrect statements)—the conclusion dramatically changes. Under this enriched training regime, we show that automated hallucination detection is *possible* for any countable collection $\mathcal{L}$. Our theoretical results, thus, underscore the fundamental importance of expert-labeled feedback in the practical deployment of hallucination detection methods, reinforcing why feedback-based approaches, such as reinforcement learning with human feedback (RLHF), have proven so crucial in improving the reliability and safety of real-world LLMs.
[ "Amin Karbasi", "Omar Montasser", "John Sous", "Grigoris Velegkas" ]
https://openreview.net/forum?id=e5jWdZIX0Q
e5jWdZIX0Q
e5jWdZIX0Q
[ "~Amin_Karbasi3", "~Omar_Montasser1", "~John_Sous1", "~Grigoris_Velegkas1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/6b7c25bf886c16a2b1e8e9a0fb45dfbf9edae089.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "hallucinations", "theory", "RLHF" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ karbasi2025impossibility, title={(Im)possibility of Automated Hallucination Detection in Large Language Models}, author={Amin Karbasi and Omar Montasser and John Sous and Grigoris Velegkas}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=e5jWdZIX0Q} }
karbasi|impossibility_of_automated_hallucination_detection_in_large_language_models
null
null
null
null
null
Overfill: Two-Stage Models for Efficient Language Model Decoding
Leveraging more compute during prefill, OverFill improves generation quality with minimal latency overhead.
Large language models (LLMs) excel across diverse tasks but face significant deployment challenges due to high inference costs. LLM inference comprises prefill (compute-bound) and decode (memory-bound) stages, with decode dominating latency particularly for long sequences. Current decoder-only models handle both stages uniformly, despite their distinct computational profiles. We propose Overfill, which decouples these stages to optimize accuracy-efficiency tradeoffs. Overfill begins with a full model for prefill, processing system and user inputs in parallel. It then switches to a dense pruned model, while generating tokens sequentially. Leveraging more compute during prefill, Overfill improves generation quality with minimal latency overhead. Our 3B-to-1B Overfill configuration outperforms 1B pruned models by 83.2%, while the 8B-to-3B configuration improves over 3B pruned models by 79.2% on average across standard benchmarks. Overfill matches the performance of same-sized models trained from scratch, while using significantly less training data. Our code is available at https://github.com/friendshipkim/overfill.
[ "Woojeong Kim", "Junxiong Wang", "Jing Nathan Yan", "Mohamed S. Abdelfattah", "Alexander M Rush" ]
https://openreview.net/forum?id=e112iu5ssg
e112iu5ssg
e112iu5ssg
[ "~Woojeong_Kim1", "~Junxiong_Wang1", "~Jing_Nathan_Yan1", "~Mohamed_S._Abdelfattah1", "~Alexander_M_Rush1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/a53da3fa4a7edb32a0e94623f81ffc790ebd31f8.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "deep learning", "large language models", "inference efficiency" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ kim2025overfill, title={Overfill: Two-Stage Models for Efficient Language Model Decoding}, author={Woojeong Kim and Junxiong Wang and Jing Nathan Yan and Mohamed S. Abdelfattah and Alexander M Rush}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=e112iu5ssg} }
kim|overfill_twostage_models_for_efficient_language_model_decoding
null
null
null
null
null
URANIA: Differentially Private Insights into AI Use
We introduce a novel framework, primarily designed for generating insights about LLM chatbot interactions while providing differential privacy guarantees, that is also applicable to other text corpora.
We introduce _Urania_, a novel framework for generating insights about LLM chatbot interactions with rigorous differential privacy (DP) guarantees. The framework employs a private clustering mechanism and innovative keyword extraction methods, including frequency-based, TF-IDF-based, and LLM-guided approaches. By leveraging DP tools such as clustering, partition selection, and histogram-based summarization, _Urania_ provides end-to-end privacy protection. Our evaluation assesses lexical and semantic content preservation, pair similarity, and LLM-based metrics, benchmarking against a non-private method inspired by _Clio_ (Tamkin et al. 2024). Moreover, we develop a simple empirical privacy evaluation that demonstrates the enhanced robustness of our DP pipeline. The results show the framework's ability to extract meaningful conversational insights while maintaining stringent user privacy, effectively balancing data utility with privacy preservation.
[ "Daogao Liu", "Edith Cohen", "Badih Ghazi", "Peter Kairouz", "Pritish Kamath", "Alexander Knop", "Ravi Kumar", "Pasin Manurangsi", "Adam Sealfon", "Da Yu", "Chiyuan Zhang" ]
https://openreview.net/forum?id=dujG4nGClA
dujG4nGClA
dujG4nGClA
[ "~Daogao_Liu1", "~Edith_Cohen1", "~Badih_Ghazi1", "~Peter_Kairouz1", "~Pritish_Kamath2", "~Alexander_Knop1", "~Ravi_Kumar1", "~Pasin_Manurangsi2", "~Adam_Sealfon1", "~Da_Yu1", "~Chiyuan_Zhang1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/ed4ba9cf9757208ce5d3327f79b31d0c9f1d18b9.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Differential Privacy", "Clustering", "Summarization" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ liu2025urania, title={{URANIA}: Differentially Private Insights into {AI} Use}, author={Daogao Liu and Edith Cohen and Badih Ghazi and Peter Kairouz and Pritish Kamath and Alexander Knop and Ravi Kumar and Pasin Manurangsi and Adam Sealfon and Da Yu and Chiyuan Zhang}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=dujG4nGClA} }
liu|urania_differentially_private_insights_into_ai_use
null
null
null
null
null
PersonaEval: Are LLM Evaluators Human Enough to Judge Role-Play?
We introduce PersonaEval, a benchmark showing that LLMs struggle to judge role-play like humans. Even top models fail basic role identification, highlighting a need for more human-like reasoning in LLM evaluators.
Current role-play studies often rely on unvalidated LLM-as-a-judge paradigms, which may fail to reflect how humans perceive role fidelity. A key prerequisite for human-aligned evaluation is role identification, the ability to recognize who is speaking based on dialogue context. We argue that any meaningful judgment of role-playing quality (how well a character is played) fundamentally depends on first correctly attributing words and actions to the correct persona (who is speaking). We present PersonaEval, the first benchmark designed to test whether LLM evaluators can reliably identify human roles. PersonaEval uses human-authored dialogues from novels, scripts, and video transcripts, challenging models to determine the correct persona according to the conversation context. Our experiments, including a human study, show that even the best-performing LLMs reach only around 69% accuracy, well below the level needed for reliable evaluation. In contrast, human participants perform near ceiling with 90.8% accuracy, highlighting that current LLM evaluators are still not human enough to effectively judge role-play scenarios. To better understand this gap, we examine training-time adaptation and test-time compute, suggesting that reliable evaluation requires more than task-specific tuning, but depends on strong, human-like reasoning abilities in LLM evaluators. We release our benchmark at https://github.com/maple-zhou/PersonaEval.
[ "Lingfeng Zhou", "Jialing Zhang", "Jin Gao", "Mohan Jiang", "Dequan Wang" ]
https://openreview.net/forum?id=drdrFhKYjP
drdrFhKYjP
drdrFhKYjP
[ "~Lingfeng_Zhou1", "~Jialing_Zhang1", "~Jin_Gao3", "~Mohan_Jiang1", "~Dequan_Wang1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/7d82995478de95962a51816e94a3e690c84e9d2e.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Role-play", "evaluating LLM evaluators", "benchmark" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ zhou2025personaeval, title={PersonaEval: Are {LLM} Evaluators Human Enough to Judge Role-Play?}, author={Lingfeng Zhou and Jialing Zhang and Jin Gao and Mohan Jiang and Dequan Wang}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=drdrFhKYjP} }
zhou|personaeval_are_llm_evaluators_human_enough_to_judge_roleplay
/attachment/6a25cb35829c49d55a1670963857bef598b95c9e.zip
null
null
null
null
Learning to Reason for Long-Form Story Generation
We propose a long-form story generation task, Next-Chapter Prediction, and a novel reward formulation that allows us to learn reasoning traces which improve predicted chapters without a labeled dataset.
Generating high-quality stories spanning thousands of tokens requires competency across a variety of skills, from tracking plot and character arcs to keeping a consistent and engaging style. Due to the difficulty of sourcing labeled datasets and precise quality measurements, most work using large language models (LLMs) for long-form story generation uses combinations of hand-designed prompting techniques to elicit author-like behavior. This is a manual process that is highly dependent on the specific story-generation task. Motivated by the recent success of applying RL with Verifiable Rewards to domains like math and coding, we propose a general story-generation task (Next-Chapter Prediction) and a reward formulation (Verified Rewards via Completion Likelihood Improvement) that allows us to use an unlabeled book dataset as a learning signal for reasoning. We learn to reason over a story's condensed information and generate a detailed plan for the next chapter. Our reasoning is evaluated via the chapters it helps a story-generator create, and compared against non-trained and supervised finetuning (SFT) baselines. Pairwise human judgments reveal the chapters our learned reasoning produces are preferred across almost all metrics, and the effect is more pronounced in Scifi and Fantasy genres.
[ "Alexander Gurung", "Mirella Lapata" ]
https://openreview.net/forum?id=dr3eg5ehR2
dr3eg5ehR2
dr3eg5ehR2
[ "~Alexander_Gurung1", "~Mirella_Lapata1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/bfa23a57d4114b772b5398b348d3308aa0ebfd0f.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "story generation", "reasoning", "reinforcement learning", "long-context generation" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ gurung2025learning, title={Learning to Reason for Long-Form Story Generation}, author={Alexander Gurung and Mirella Lapata}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=dr3eg5ehR2} }
gurung|learning_to_reason_for_longform_story_generation
null
null
null
null
null
Echo Chamber: RL Post-training Amplifies Behaviors Learned in Pretraining
We conduct a systematic end-to-end study of RL fine-tuning from scratch for mathematical reasoning, uncovering how RL shapes model behavior across scales and data mixtures.
Reinforcement learning (RL)-based fine-tuning has become a crucial step in post-training language models for advanced mathematical reasoning and coding. Following the success of frontier reasoning models, recent work has demonstrated that RL fine-tuning consistently improves performance, even in smaller-scale models; however, the underlying mechanisms driving these improvements are not well-understood. Understanding the effects of RL fine-tuning requires disentangling its interaction with pretraining data composition, hyperparameters, and model scale, but such problems are exacerbated by the lack of transparency regarding the training data used in many existing models. In this work, we present a systematic end-to-end study of RL fine-tuning for mathematical reasoning by training models entirely from scratch on different mixtures of fully open datasets. We investigate the effects of various RL fine-tuning algorithms (PPO, GRPO, and Expert Iteration) across models of different scales. Our study reveals that RL algorithms consistently converge towards a dominant output distribution, amplifying patterns in the pretraining data. We also find that models of different scales trained on the same data mixture will converge to distinct output distributions, suggesting that there are scale-dependent biases in model generalization. Moreover, we find that RL post-training on simpler questions can lead to performance gains on harder ones, indicating that certain reasoning capabilities generalize across tasks. Our findings show that small-scale proxies in controlled settings can elicit interesting insights regarding the role of RL in shaping language model behavior.
[ "Rosie Zhao", "Alexandru Meterez", "Sham M. Kakade", "Cengiz Pehlevan", "Samy Jelassi", "Eran Malach" ]
https://openreview.net/forum?id=dp4KWuSDzj
dp4KWuSDzj
dp4KWuSDzj
[ "~Rosie_Zhao1", "~Alexandru_Meterez1", "~Sham_M._Kakade1", "~Cengiz_Pehlevan2", "~Samy_Jelassi1", "~Eran_Malach3" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/8cfad45ef47f433c49770fdee1de3c3007efecc7.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "reinforcement learning", "language models", "post-training", "ppo", "pretraining", "reasoning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ zhao2025echo, title={Echo Chamber: {RL} Post-training Amplifies Behaviors Learned in Pretraining}, author={Rosie Zhao and Alexandru Meterez and Sham M. Kakade and Cengiz Pehlevan and Samy Jelassi and Eran Malach}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=dp4KWuSDzj} }
zhao|echo_chamber_rl_posttraining_amplifies_behaviors_learned_in_pretraining
null
null
null
null
null
Evaluating LLMs on Chinese Idiom Translation
We evaluate large language models on Chinese idiom translation across multiple domains and introduce new metrics that reliably measure idiom translation quality, significantly outperforming existing translation metrics.
Idioms, whose figurative meanings usually differ from their literal interpretations, are common in everyday language, especially in Chinese, where they often contain historical references and follow specific structural patterns. Despite recent progress in machine translation with large language models, little is known about Chinese idiom translation. In this work, we introduce IdiomEval, a framework with a comprehensive error taxonomy for Chinese idiom translation. We annotate 900 translation pairs from nine modern systems, including GPT-4o and Google Translate, across four domains: web, news, Wikipedia, and social media. We find these systems fail at idiom translation, producing incorrect, literal, partial, or even missing translations. The best-performing system, GPT-4, makes errors in 28\% of cases. We also find that existing evaluation metrics measure idiom quality poorly with Pearson correlation below 0.48 with human ratings. We thus develop improved models that achieve F$_1$ scores of 0.68 for detecting idiom translation errors.
[ "Cai Yang", "Yao Dou", "David Heineman", "Xiaofeng Wu", "Wei Xu" ]
https://openreview.net/forum?id=dkE5rveDuh
dkE5rveDuh
dkE5rveDuh
[ "~Cai_Yang1", "~Yao_Dou1", "~David_Heineman1", "~Xiaofeng_Wu5", "~Wei_Xu5" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/288488f3af0e0caf3be74ad0d0cefb3b2efbb459.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Large Language Model Evaluation", "Chinese Idioms", "Meta Analysis", "Multilingual Evaluation" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ yang2025evaluating, title={Evaluating {LLM}s on Chinese Idiom Translation}, author={Cai Yang and Yao Dou and David Heineman and Xiaofeng Wu and Wei Xu}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=dkE5rveDuh} }
yang|evaluating_llms_on_chinese_idiom_translation
null
null
This submission is NOT exempt from the Reciprocal Reviewing requirement. (We expect most submissions to fall in this category.)
~Yao_Dou1
{ "readers": [ "colmweb.org/COLM/2025/Conference", "colmweb.org/COLM/2025/Conference/Submission992/Authors" ] }
QAPyramid: Fine-grained Evaluation of Content Selection for Text Summarization
Fine-grained Evaluation of Content Selection for Text Summarization
How to properly conduct human evaluations for text summarization is a longstanding challenge. The Pyramid human evaluation protocol, which assesses content selection by breaking the reference summary into sub-units and verifying their presence in the system summary, has been widely adopted. However, it suffers from a lack of systematicity in the definition and granularity of the sub-units. We address these problems by proposing QAPyramid, which decomposes each reference summary into finer-grained question-answer (QA) pairs according to the QA-SRL framework. We collect QA-SRL annotations for reference summaries from CNN/DM and evaluate 10 summarization systems, resulting in 8.9K QA-level annotations. We show that, compared to Pyramid, QAPyramid provides more systematic and fine-grained content selection evaluation while maintaining high inter-annotator agreement without needing expert annotations. Furthermore, we propose metrics that automate the evaluation pipeline and achieve higher correlations with QAPyramid than other widely adopted metrics, allowing future work to accurately and efficiently benchmark summarization systems.
[ "Shiyue Zhang", "David Wan", "Arie Cattan", "Ayal Klein", "Ido Dagan", "Mohit Bansal" ]
https://openreview.net/forum?id=dZRzInscvA
dZRzInscvA
dZRzInscvA
[ "~Shiyue_Zhang1", "~David_Wan1", "~Arie_Cattan1", "~Ayal_Klein1", "~Ido_Dagan1", "~Mohit_Bansal2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/4120b6c3ef2897eadd231d9b69c23815f5944c22.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Evaluation", "Summarization" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ zhang2025qapyramid, title={{QAP}yramid: Fine-grained Evaluation of Content Selection for Text Summarization}, author={Shiyue Zhang and David Wan and Arie Cattan and Ayal Klein and Ido Dagan and Mohit Bansal}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=dZRzInscvA} }
zhang|qapyramid_finegrained_evaluation_of_content_selection_for_text_summarization
null
null
null
null
null
Steering the CensorShip: Uncovering Representation Vectors for LLM "Thought'' Control
We examine safety-tuned LLMs and discover representation vectors for measuring and controlling censorship imposed through refusal and thought suppression.
Large language models (LLMs) have transformed the way we access information. These models are often tuned to refuse to comply with requests that are considered harmful and to produce responses that better align with the preferences of those who control the models. To understand how this "censorship" works, we use representation engineering techniques to study open-weight, safety-tuned models. We present a method for finding a refusal--compliance vector that detects and controls the level of censorship in model outputs. We also analyze recent reasoning LLMs, distilled from DeepSeek-R1, and uncover an additional dimension of censorship through "thought suppression". We show a similar approach can be used to find a vector that suppresses the model's reasoning process, allowing us to remove censorship by applying negative multiples of this vector.
[ "Hannah Cyberey", "David Evans" ]
https://openreview.net/forum?id=dVqZBagXF3
dVqZBagXF3
dVqZBagXF3
[ "~Hannah_Cyberey1", "~David_Evans1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/70893e89e94379603250043cfb412820efbd25a4.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "censorship", "activation steering", "representation engineering", "reasoning LLMs" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ cyberey2025steering, title={Steering the CensorShip: Uncovering Representation Vectors for {LLM} ''Thought'' Control}, author={Hannah Cyberey and David Evans}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=dVqZBagXF3} }
cyberey|steering_the_censorship_uncovering_representation_vectors_for_llm_thought_control
null
null
null
null
null
Cutting the Root of Hallucination: Structural Trimming for Vulnerability Mitigation in Code LLMs
LLMs hallucinate code, often with security risks. We trace these hallucinations structurally, prune them surgically, and predict repair effectiveness. Our method patches code and mitigates risk using a transferable score (CSHS).
We introduce a structural perspective on hallucinations in code-generating language models, framing them as causality anchors in syntax graphs that trigger cascading semantic errors and latent security flaws. This work is the first to systematically connect code hallucinations with vulnerability risks, offering a unified conceptual and practical framework to address them. At the heart of our approach is the notion of hallucination anchors, localized subtrees in the abstract syntax tree (AST) that serve as root causes of defective logic. We propose Structural Trimming (ST), a targeted mitigation method that removes these anchors while preserving functional semantics. To anticipate the effect of trimming, we introduce the Compositional Structural Hallucination Score (CSHS), which quantifies the likelihood that pruning will improve robustness. By grounding error reduction in the syntax graph itself, our method reframes hallucination mitigation as a structured intervention process that is interpretable, generalizable, and actionable.
[ "Yage Zhang" ]
https://openreview.net/forum?id=dU4Y2sNfJ2
dU4Y2sNfJ2
dU4Y2sNfJ2
[ "~Yage_Zhang2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/94228dff9c0d4e67d4c92dc5132ee037a4340a14.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "LLM hallucinations", "code generation", "program repair", "vulnerability mitigation", "structural pruning", "abstract syntax tree", "hallucination detection", "CSHS", "model-agnostic risk estimation", "generative code safety" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ zhang2025cutting, title={Cutting the Root of Hallucination: Structural Trimming for Vulnerability Mitigation in Code {LLM}s}, author={Yage Zhang}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=dU4Y2sNfJ2} }
zhang|cutting_the_root_of_hallucination_structural_trimming_for_vulnerability_mitigation_in_code_llms
null
null
null
null
null
Algorithm Discovery With LLMs: Evolutionary Search Meets Reinforcement Learning
We propose an approach that integrates LLM-based evolutionary search with RL fine-tuning for accelerated discovery of algorithms, as demonstrated on combinatorial optimization tasks.
Discovering efficient algorithms for solving complex problems has been an outstanding challenge in mathematics and computer science, requiring substantial human expertise over the years. Recent advancements in evolutionary search with large language models (LLMs) have shown promise in accelerating the discovery of algorithms across various domains, particularly in mathematics and optimization. However, existing approaches treat the LLM as a static generator, missing the opportunity to update the model with the signal obtained from evolutionary exploration. In this work, we propose to augment LLM-based evolutionary search by continuously refining the search operator, the LLM, through reinforcement learning (RL) fine-tuning. Our method leverages evolutionary search as an exploration strategy to discover improved algorithms, while RL optimizes the LLM policy based on these discoveries. Our experiments on combinatorial optimization tasks demonstrate that integrating RL with evolutionary search accelerates the discovery of superior algorithms, showcasing the potential of RL-enhanced evolutionary strategies for algorithm design.
[ "Anja Šurina", "Amin Mansouri", "Lars C.P.M. Quaedvlieg", "Amal Seddas", "Maryna Viazovska", "Emmanuel Abbe", "Caglar Gulcehre" ]
https://openreview.net/forum?id=dNW3RGW0gi
dNW3RGW0gi
dNW3RGW0gi
[ "~Anja_Šurina1", "~Amin_Mansouri1", "~Lars_C.P.M._Quaedvlieg1", "~Amal_Seddas1", "~Maryna_Viazovska1", "~Emmanuel_Abbe1", "~Caglar_Gulcehre1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/92babb3133c8776c7888a177c504da576de4c7f4.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Large Language Models", "Reinforcement Learning", "Evolutionary Search", "Algorithm Discovery", "Self-Improvement", "AI for Math" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ surina2025algorithm, title={Algorithm Discovery With {LLM}s: Evolutionary Search Meets Reinforcement Learning}, author={Anja {\v{S}}urina and Amin Mansouri and Lars C.P.M. Quaedvlieg and Amal Seddas and Maryna Viazovska and Emmanuel Abbe and Caglar Gulcehre}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=dNW3RGW0gi} }
urina|algorithm_discovery_with_llms_evolutionary_search_meets_reinforcement_learning
null
null
null
null
null
You Cannot Feed Two Birds with One Score: the Accuracy-Naturalness Tradeoff in Translation
We prove mathematically and demonstrate empirically that optimizing a single metric for machine translation *cannot* lead to a system that is both accurate and fluent. We also establish a connection between no-reference metrics and our theory.
The goal of translation, be it by human or by machine, is, given some text in a source language, to produce text in a target language that simultaneously 1) preserves the meaning of the source text and 2) achieves natural expression in the target language. However, researchers in the machine translation community usually assess translations using a single score intended to capture semantic accuracy and the naturalness of the output simultaneously. In this paper, we build on recent advances in information theory to mathematically prove and empirically demonstrate that such single-score summaries *do not and cannot* give the complete picture of a system's true performance. Concretely, we prove that a tradeoff exists between accuracy and naturalness and demonstrate it by evaluating the submissions to the WMT24 shared task. Our findings help explain well-known empirical phenomena, such as the observation that optimizing translation systems for a specific accuracy metric (like BLEU) initially improves the system's naturalness, while "overfitting" the system to the metric can significantly degrade its naturalness. Thus, we advocate for a change in how translations are evaluated: rather than comparing systems using a single number, they should be compared on an *accuracy-naturalness plane*.
[ "Gergely Flamich", "David Vilar", "Jan-Thorsten Peter", "Markus Freitag" ]
https://openreview.net/forum?id=d9EkgbZZH9
d9EkgbZZH9
d9EkgbZZH9
[ "~Gergely_Flamich1", "~David_Vilar1", "~Jan-Thorsten_Peter1", "~Markus_Freitag2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/e125a1f2f817db9daf4c4a2ed3b825ae306572f5.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "translation", "accuracy", "naturalness", "tradeoff", "distortion", "perception", "no-reference metric" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ flamich2025you, title={You Cannot Feed Two Birds with One Score: the Accuracy-Naturalness Tradeoff in Translation}, author={Gergely Flamich and David Vilar and Jan-Thorsten Peter and Markus Freitag}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=d9EkgbZZH9} }
flamich|you_cannot_feed_two_birds_with_one_score_the_accuracynaturalness_tradeoff_in_translation
null
null
null
null
null
Teach Old SAEs New Domain Tricks with Boosting
We propose a method to add domain-specific features into a pre-trained SAE.
Sparse Autoencoders have emerged as powerful tools for interpreting the internal representations of Large Language Models, yet they often fail to capture domain-specific features not prevalent in their training corpora. This paper introduces a residual learning approach that addresses this feature blindness without requiring complete retraining. We propose training a secondary SAE specifically to model the reconstruction error of a pretrained SAE on domain-specific texts, effectively capturing features missed by the primary model. By summing the outputs of both models during inference, we demonstrate significant improvements in both LLM cross-entropy and explained variance metrics across multiple specialized domains. Our experiments show that this method efficiently incorporates new domain knowledge into existing SAEs while maintaining their performance on general tasks. This approach enables researchers to selectively enhance SAE interpretability for specific domains of interest, opening new possibilities for targeted mechanistic interpretability of LLMs.
[ "Nikita Koriagin", "Yaroslav Aksenov", "Daniil Laptev", "Gleb Gerasimov", "Nikita Balagansky", "Daniil Gavrilov" ]
https://openreview.net/forum?id=d4XXFVAlV7
d4XXFVAlV7
d4XXFVAlV7
[ "~Nikita_Koriagin1", "~Yaroslav_Aksenov1", "~Daniil_Laptev1", "~Gleb_Gerasimov1", "~Nikita_Balagansky3", "~Daniil_Gavrilov1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/239b22ba90f8847d2c8c75fead3882620a2ec8d6.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "LLM", "Interpretability", "SAE" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ koriagin2025teach, title={Teach Old {SAE}s New Domain Tricks with Boosting}, author={Nikita Koriagin and Yaroslav Aksenov and Daniil Laptev and Gleb Gerasimov and Nikita Balagansky and Daniil Gavrilov}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=d4XXFVAlV7} }
koriagin|teach_old_saes_new_domain_tricks_with_boosting
null
null
null
null
null
CultureCLIP: Empowering CLIP with Cultural Awareness through Synthetic Images and Contextualized Captions
We introduce CulTwin, a synthetic cultural dataset of visually similar concept pairs with contextualized captions, and CultureCLIP, a CLIP-based model fine-tuned to better distinguish visually similar yet culturally distinct concepts.
Pretrained vision-language models (VLMs) such as CLIP excel in general multimodal comprehension but often struggle to capture nuanced, context-dependent visual cues. This makes it difficult to distinguish between similar-looking concepts with potentially different cultural meanings. Such deficiencies are mainly due to a limited amount of high-quality cultural data, contextual information, and the lack of negative examples that highlight subtle differences. To mitigate this, we design a data curation pipeline leveraging open-sourced VLMs and text-to-image models to construct CulTwin, a synthetic cultural dataset. This dataset consists of paired concept-caption-image triplets, where concepts visually resemble each other but are culturally different. Then, we fine-tune CLIP on CulTwin to develop CultureCLIP, which aligns cultural concepts with contextually enhanced captions and synthetic images through tailored contrastive learning. Experiments on culture-specific benchmarks show that CultureCLIP outperforms the base CLIP, achieving up to a notable 5.49\% improvement in fine-grained concept recognition on certain tasks while preserving CLIP's original generalization ability, validating the effectiveness of our data synthesis and VLM backbone training paradigm in capturing subtle cultural distinctions.
[ "Yuchen Huang", "Zhiyuan Fan", "Zhitao He", "Sandeep Polisetty", "Wenyan Li", "Yi R. Fung" ]
https://openreview.net/forum?id=cWVpXWARbt
cWVpXWARbt
cWVpXWARbt
[ "~Yuchen_Huang4", "~Zhiyuan_Fan2", "~Zhitao_He1", "~Sandeep_Polisetty1", "~Wenyan_Li1", "~Yi_R._Fung1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/43578f00390a568a2e3b91a5e2c0dd7ab5d002b7.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Vision-Language Models", "Cultural Understanding", "Fine-Grained Recognition", "Contextual Knowledge", "Synthetic Data Generation", "Contrastive Learning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ huang2025cultureclip, title={Culture{CLIP}: Empowering {CLIP} with Cultural Awareness through Synthetic Images and Contextualized Captions}, author={Yuchen Huang and Zhiyuan Fan and Zhitao He and Sandeep Polisetty and Wenyan Li and Yi R. Fung}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=cWVpXWARbt} }
huang|cultureclip_empowering_clip_with_cultural_awareness_through_synthetic_images_and_contextualized_captions
null
null
null
null
null
Detecting and Pruning Prominent but Detrimental Neurons in Large Language Models
We fine-tune LLMs by pruning shortcut neurons using Integrated Gradients, improving generalization and performance on multiple-choice benchmarks.
Large language models (LLMs) often develop learned mechanisms specialized to specific datasets, such as reliance on domain-specific correlations, which yield high-confidence predictions without generalizable reasoning. While beneficial in one setting, these dataset-specific mechanisms typically degrade performance when models encounter novel tasks or distributions. In this work, we introduce a fine-tuning approach designed to enhance generalization by identifying and pruning neurons associated with dataset-specific mechanisms in transformer-based LLMs. Our method employs Integrated Gradients to quantify each neuron’s influence on high-confidence predictions, pinpointing those that disproportionately contribute to dataset-specific performance without supporting robust, transferable reasoning. Selectively pruning these neurons compels the model to depend on generalizable representations. Evaluated across multiple-choice benchmarks, our pruning-based fine-tuning significantly enhances performance, surpassing prior (non-pruning) adaptation methods.
[ "Ameen Ali Ali", "Shahar Katz", "Lior Wolf", "Ivan Titov" ]
https://openreview.net/forum?id=cRE1XrHf1h
cRE1XrHf1h
cRE1XrHf1h
[ "~Ameen_Ali_Ali1", "~Shahar_Katz1", "~Lior_Wolf1", "~Ivan_Titov1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/3e75fad4784de03c4f8695f26d4c0af5020dc3b9.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "LLMs", "spurious correlations", "Integrated Gradients", "generalization", "model adaptation" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ ali2025detecting, title={Detecting and Pruning Prominent but Detrimental Neurons in Large Language Models}, author={Ameen Ali Ali and Shahar Katz and Lior Wolf and Ivan Titov}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=cRE1XrHf1h} }
ali|detecting_and_pruning_prominent_but_detrimental_neurons_in_large_language_models
/attachment/0f250681dff84bc33314f010f39c137641aa6d12.zip
null
null
null
null
Approximating Language Model Training Data from Weights
We recover suitable training data given only model weights.
Modern language models often have open weights but closed training data. We formalize the problem of data recovery from model weights and propose several baselines and metrics. We develop a gradient-based approach that selects the highest-matching data from a large public text corpus and show its effectiveness at recovering data given only weights of the original and finetuned models. The training subset pinpointed by our method in a large corpus can be used to train another model to comparable performance. Even when none of the true training data is available, data selected by our method from publicly available Web documents can be used to train a competent model.
[ "John Xavier Morris", "Junjie Oscar Yin", "Woojeong Kim", "Vitaly Shmatikov", "Alexander M Rush" ]
https://openreview.net/forum?id=cQechnXCQt
cQechnXCQt
cQechnXCQt
[ "~John_Xavier_Morris1", "~Junjie_Oscar_Yin1", "~Woojeong_Kim1", "~Vitaly_Shmatikov1", "~Alexander_M_Rush1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/0566a6c6e8416f4fc6d0bedebfc25b7df8462f68.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "inversion", "training data reconstruction" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ morris2025approximating, title={Approximating Language Model Training Data from Weights}, author={John Xavier Morris and Junjie Oscar Yin and Woojeong Kim and Vitaly Shmatikov and Alexander M Rush}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=cQechnXCQt} }
morris|approximating_language_model_training_data_from_weights
null
null
null
null
null
Self-Rewarding PPO: Aligning Large Language Models with Demonstrations Only
We propose Self-Rewarding PPO, a novel fine-tuning method that combines the strengths of SFT and proximal policy optimization (PPO) to achieve more effective alignment from demonstration data.
Supervised fine-tuning (SFT) has emerged as a crucial method for aligning large language models (LLMs) with human-annotated demonstrations. However, SFT, being an off-policy approach similar to behavior cloning, often struggles with overfitting and poor out-of-domain generalization, especially in limited-data scenarios. To address these limitations, we propose Self-Rewarding PPO, a novel fine-tuning method that leverages on-policy techniques to enhance generalization performance. Our approach combines the strengths of SFT and proximal policy optimization (PPO) to achieve more effective alignment from demonstration data. At its core is a reward function designed as the log policy ratio between the SFT model and the pretrained base model. This function serves as an implicit reward signal, using the pretrained policy as a baseline and the SFT policy as a target. By doing so, it enables on-policy fine-tuning without relying on human preference annotations. The integration of this self-rewarding mechanism with PPO addresses key limitations of SFT, improving generalization, data efficiency, and robustness. Our empirical evaluation across a range of natural language processing tasks demonstrates that Self-Rewarding PPO consistently outperforms traditional SFT methods. The results highlight the effectiveness of our approach in aligning LLMs using demonstration data, particularly in scenarios where high-quality annotated data is scarce.
[ "Qingru Zhang", "Liang Qiu", "Ilgee Hong", "Zhenghao Xu", "Tianyi Liu", "Shiyang Li", "Rongzhi Zhang", "Zheng Li", "Lihong Li", "Bing Yin", "Chao Zhang", "Jianshu Chen", "Haoming Jiang", "Tuo Zhao" ]
https://openreview.net/forum?id=cOlHP5E3qF
cOlHP5E3qF
cOlHP5E3qF
[ "~Qingru_Zhang2", "~Liang_Qiu2", "~Ilgee_Hong1", "~Zhenghao_Xu1", "~Tianyi_Liu2", "~Shiyang_Li1", "~Rongzhi_Zhang2", "~Zheng_Li9", "~Lihong_Li1", "~Bing_Yin1", "~Chao_Zhang15", "~Jianshu_Chen1", "~Haoming_Jiang1", "~Tuo_Zhao2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/c544f1212296d14078181345cb70fcb3f3316194.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Alignment with Demonstration", "Self-Rewarding PPO", "Large Language Models" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ zhang2025selfrewarding, title={Self-Rewarding {PPO}: Aligning Large Language Models with Demonstrations Only}, author={Qingru Zhang and Liang Qiu and Ilgee Hong and Zhenghao Xu and Tianyi Liu and Shiyang Li and Rongzhi Zhang and Zheng Li and Lihong Li and Bing Yin and Chao Zhang and Jianshu Chen and Haoming Jiang and Tuo Zhao}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=cOlHP5E3qF} }
zhang|selfrewarding_ppo_aligning_large_language_models_with_demonstrations_only
null
null
null
null
null
MS-SSM: A Multi-Scale State Space Model for Efficient Sequence Modeling
We introduce MS-SSM, which enhances traditional SSMs by modeling sequence dynamics at multiple resolutions using independent SSMs, scale-dependent initialization, and an input-dependent scale-mixer.
State-space models (SSMs) have recently gained attention as an efficient alternative to computationally expensive attention-based models for sequence modeling. They rely on linear recurrences to integrate information over time, enabling fast inference, parallelizable training, and control over recurrence stability. However, traditional SSMs often suffer from limited effective memory, requiring larger state sizes for improved recall. Moreover, existing SSMs struggle to capture multi-scale dependencies, which are essential for modeling complex structures in time series, images, and natural language. This paper introduces a multi-scale SSM framework that addresses these limitations by representing sequence dynamics across multiple resolutions and processing each resolution with specialized state-space dynamics. By capturing both fine-grained, high-frequency patterns and coarse, global trends, MS-SSM enhances memory efficiency and long-range modeling. We further introduce an input-dependent scale-mixer, enabling dynamic information fusion across resolutions. The proposed approach significantly improves sequence modeling, particularly in long-range and hierarchical tasks, while maintaining computational efficiency. Extensive experiments on benchmarks, including Long Range Arena, hierarchical reasoning, time series classification, and image recognition, demonstrate that MS-SSM consistently outperforms prior SSM-based models, highlighting the benefits of multi-resolution processing in state-space architectures.
[ "Mahdi Karami", "Ali Behrouz", "Peilin Zhong", "Razvan Pascanu", "Vahab Mirrokni" ]
https://openreview.net/forum?id=cCYWeCzAv0
cCYWeCzAv0
cCYWeCzAv0
[ "~Mahdi_Karami2", "~Ali_Behrouz1", "~Peilin_Zhong1", "~Razvan_Pascanu1", "~Vahab_Mirrokni2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/483868b1b70acb79914fb7e065ea71d3d5a37968.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "MS-SSM: Sequence Models", "Language models", "State Space Model", "Multi-Scale", "Multi-Resolution" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ karami2025msssm, title={{MS}-{SSM}: A Multi-Scale State Space Model for Efficient Sequence Modeling}, author={Mahdi Karami and Ali Behrouz and Peilin Zhong and Razvan Pascanu and Vahab Mirrokni}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=cCYWeCzAv0} }
karami|msssm_a_multiscale_state_space_model_for_efficient_sequence_modeling
null
null
null
null
null
DEL: Context-Aware Dynamic Exit Layer for Efficient Self-Speculative Decoding
We propose DEL, a dynamic method that adaptively selects the exit layer and speculation length in self-speculative decoding to accelerate large language model inference.
Speculative Decoding (SD) is a widely used approach to accelerate the inference of large language models (LLMs) without reducing generation quality. It operates by first using a compact model to draft multiple tokens efficiently, followed by parallel verification using the target LLM. This approach leads to faster inference compared to auto-regressive decoding. While there are multiple approaches to create a draft model, one promising approach is to use early-exit methods. These methods draft candidate tokens by using a subset of layers of the primary model and applying the remaining layers for verification, allowing a single model to handle both drafting and verification. While this technique reduces memory usage and computational cost, its performance relies on the choice of the exit layer for drafting and the number of tokens drafted (speculation length) in each SD round. Prior works use hyperparameter exploration to statically select these values. However, our evaluations show that these hyperparameter values are task-specific, and even within a task they are dependent on the current sequence context. We introduce DEL (Dynamic Exit Layer), a plug-and-play method that adaptively selects the exit layer and speculation length during inference. DEL dynamically tracks the token acceptance rate for tokens drafted at each layer of the LLM and uses that knowledge to heuristically select the optimal exit layer and speculation length. Our experiments across a broad range of models and downstream tasks show that DEL achieves overall speedups of $2.16\times \sim 2.62\times$ over vanilla auto-regressive decoding and improves upon state-of-the-art SD methods, which peak at $2.43\times$, by up to $0.19\times$. The code is available at https://github.com/hoenza/DEL.
[ "Hossein Entezari Zarch", "Lei Gao", "Chaoyi Jiang", "Murali Annavaram" ]
https://openreview.net/forum?id=cAFxSuXQvT
cAFxSuXQvT
cAFxSuXQvT
[ "~Hossein_Entezari_Zarch1", "~Lei_Gao3", "~Chaoyi_Jiang1", "~Murali_Annavaram1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/e1dbcba837d14d0fa9c917359c3c5f0b9d4cb9d5.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Speculative Decoding", "Efficient Large Language Model", "Inference Acceleration" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ zarch2025del, title={{DEL}: Context-Aware Dynamic Exit Layer for Efficient Self-Speculative Decoding}, author={Hossein Entezari Zarch and Lei Gao and Chaoyi Jiang and Murali Annavaram}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=cAFxSuXQvT} }
zarch|del_contextaware_dynamic_exit_layer_for_efficient_selfspeculative_decoding
null
null
null
null
null
LLMs Are In-Context Bandit Reinforcement Learners
LLMs can learn in-context from online rewards like in reinforcement learning, instead of just supervised examples
Large Language Models (LLMs) excel at in-context learning (ICL), a supervised learning technique that relies on adding annotated examples to the model context. We investigate a contextual bandit version of in-context reinforcement learning (ICRL), where models learn in-context, online, from external reward, instead of supervised data. We show that LLMs effectively demonstrate such learning, and provide a detailed study of the phenomena, experimenting with challenging classification tasks and models of sizes from 500M to 70B parameters. This includes identifying and addressing the instability of the process, demonstrating learning with both semantic and abstract labels, and showing scaling trends. Our findings highlight ICRL capabilities in LLMs, while also underscoring fundamental limitations in their implicit reasoning about errors.
[ "Giovanni Monea", "Antoine Bosselut", "Kianté Brantley", "Yoav Artzi" ]
https://openreview.net/forum?id=c0RsezY2D1
c0RsezY2D1
c0RsezY2D1
[ "~Giovanni_Monea1", "~Antoine_Bosselut1", "~Kianté_Brantley2", "~Yoav_Artzi1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/628804cb3e9ad8568125424f3e5621da8ba4504d.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "In-context reinforcement learning", "in-context learning", "contextual bandits", "online learning", "large language models" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ monea2025llms, title={{LLM}s Are In-Context Bandit Reinforcement Learners}, author={Giovanni Monea and Antoine Bosselut and Kiant{\'e} Brantley and Yoav Artzi}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=c0RsezY2D1} }
monea|llms_are_incontext_bandit_reinforcement_learners
/attachment/856e277943bbf727353282951700d1741ae0833b.zip
null
null
null
null
Stop-Think-AutoRegress: Language Modeling with Latent Diffusion Planning
We introduce a unified architecture that pauses autoregressive text generation for latent diffusion planning, enabling higher quality and more controllable text generation with improved language understanding.
The Stop-Think-AutoRegress Language Diffusion Model (STAR-LDM) integrates latent diffusion planning with autoregressive generation. Unlike conventional autoregressive language models limited to token-by-token decisions, STAR-LDM incorporates a ``thinking'' phase that pauses generation to refine a semantic plan through diffusion before continuing. This enables global planning in continuous space prior to committing to discrete tokens. Evaluations show STAR-LDM significantly outperforms similar-sized models on language understanding benchmarks and achieves >70% win rates in LLM-as-judge comparisons for narrative coherence and commonsense reasoning. The architecture also allows straightforward control through lightweight classifiers, enabling fine-grained steering of attributes without model retraining while maintaining better fluency-control trade-offs than specialized approaches.
[ "Justin Lovelace", "Christian K Belardi", "Sofian Zalouk", "Adhitya Polavaram", "Srivatsa R Kundurthy", "Kilian Q Weinberger" ]
https://openreview.net/forum?id=c05qIG1Z2B
c05qIG1Z2B
c05qIG1Z2B
[ "~Justin_Lovelace1", "~Christian_K_Belardi1", "~Sofian_Zalouk1", "~Adhitya_Polavaram1", "~Srivatsa_R_Kundurthy1", "~Kilian_Q_Weinberger1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/a567037e14bb0b772ee2dbb773ee14c211a329eb.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "diffusion", "latent diffusion", "language generation" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ lovelace2025stopthinkautoregress, title={Stop-Think-AutoRegress: Language Modeling with Latent Diffusion Planning}, author={Justin Lovelace and Christian K Belardi and Sofian Zalouk and Adhitya Polavaram and Srivatsa R Kundurthy and Kilian Q Weinberger}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=c05qIG1Z2B} }
lovelace|stopthinkautoregress_language_modeling_with_latent_diffusion_planning
null
null
null
null
null
Yourbench: Dynamic Evaluation Set Generation with LLMs
a new system to generate diverse question-answer pairs from source documents, ensuring maximum document coverage
Large language models (LLMs) have rapidly outpaced traditional evaluation methodologies, with static benchmarks suffering from saturation, contamination, and domain-specificity limitations while human evaluation remains prohibitively expensive. We present YourBench, an open-source framework that transforms this evaluation paradigm by enabling automated generation of reliable, contamination-free benchmarks directly from user-provided documents without human annotation. To validate our approach, we successfully reproduce the challenging MMLU-Pro benchmark across 86 models spanning 400M to 405B parameters, achieving remarkable Pearson correlations of 0.91-0.99 while generating entirely novel questions for under $15 per model. This demonstrates that dynamically generated evaluations can match the discriminative power of expert-curated benchmarks while eliminating contamination risks. YourBench enables researchers to create domain-specific benchmarks in minutes rather than months. We demonstrate applications in agriculture, personalized education, and RAG training that were previously infeasible. By releasing the YourBench library, Tempora-0325 dataset, 150K+ generated QA pairs, and all evaluation traces, we provide the community with a practical solution to the challenge of keeping pace with rapidly evolving model capabilities.
[ "Sumuk Shashidhar", "Clémentine Fourrier", "Alina Lozovskaya", "Thomas Wolf", "Gokhan Tur", "Dilek Hakkani-Tür" ]
https://openreview.net/forum?id=bkWERVKzuP
bkWERVKzuP
bkWERVKzuP
[ "~Sumuk_Shashidhar1", "~Clémentine_Fourrier1", "~Alina_Lozovskaya1", "~Thomas_Wolf1", "~Gokhan_Tur2", "~Dilek_Hakkani-Tür1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/a24fd7f123f1ad2093ae9bd1a7d79ae8e52f2822.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "benchmarking", "contemporary dataset", "dataset", "reference-free", "automated", "llm" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ shashidhar2025yourbench, title={Yourbench: Dynamic Evaluation Set Generation with {LLM}s}, author={Sumuk Shashidhar and Cl{\'e}mentine Fourrier and Alina Lozovskaya and Thomas Wolf and Gokhan Tur and Dilek Hakkani-T{\"u}r}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=bkWERVKzuP} }
shashidhar|yourbench_dynamic_evaluation_set_generation_with_llms
/attachment/6f599491b12fd956502386d694a034247fe66463.zip
null
null
null
null
Hawkeye: Model Collaboration for Efficient Reasoning
We provide an efficient inference pipeline that optimizes Chain-of-Thought (CoT) reasoning by instructing a Large Language Model (LLM) to generate concise yet effective CoTs for a Small Language Model (SLM) to decode through reinforcement learning.
Chain-of-Thought (CoT) reasoning has demonstrated remarkable effectiveness in enhancing the reasoning abilities of large language models (LLMs). However, its efficiency remains a challenge due to excessive intermediate reasoning tokens, which introduce both semantic redundancy and unnecessarily detailed reasoning steps. Moreover, the computational expense and latency remain high, as the cost is determined by the number of output tokens, which encompasses these intermediate steps. In this work, we observe that most CoT tokens are unnecessary, and retaining only a small portion of them is sufficient for high-quality responses. Inspired by this, we propose Hawkeye, a novel post-training and inference framework where a large model produces concise CoT instructions to guide a smaller model in response generation. Hawkeye quantifies redundancy in CoT reasoning and distills high-density information via reinforcement learning. By leveraging these concise CoTs, Hawkeye is able to expand responses while reducing token usage and computational cost significantly. Our evaluation results show that Hawkeye can achieve comparable response quality using only 35\% of the complete CoTs while improving clarity, coherence, and conciseness by approximately 10\%. Furthermore, Hawkeye can accelerate end-to-end reasoning by up to 3.4× on complex math tasks while saving up to 60\% of inference cost. Hawkeye will be open-sourced and the models will be available soon.
[ "Jianshu She", "Zhuohao Li", "Zhemin Huang", "Qi Li", "Peiran Xu", "Haonan Li", "Qirong Ho" ]
https://openreview.net/forum?id=bdCWK4NkK7
bdCWK4NkK7
bdCWK4NkK7
[ "~Jianshu_She1", "~Zhuohao_Li3", "~Zhemin_Huang1", "~Qi_Li39", "~Peiran_Xu2", "~Haonan_Li2", "~Qirong_Ho1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/1e689a967422ea14219cc7b7d5e1234d934f7ed1.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "reinforcement learning (with human feedback)", "fine-tuning", "compression", "decoding algorithms", "reasoning algorithms" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ she2025hawkeye, title={Hawkeye: Model Collaboration for Efficient Reasoning}, author={Jianshu She and Zhuohao Li and Zhemin Huang and Qi Li and Peiran Xu and Haonan Li and Qirong Ho}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=bdCWK4NkK7} }
she|hawkeye_model_collaboration_for_efficient_reasoning
null
null
null
null
null
LoRe: Personalizing LLMs via Low-Rank Reward Modeling
We introduce low-rank personalized preference modeling for LLMs, enabling scalable and efficient user-specific reward learning with superior generalization and few-shot adaptation.
Personalizing large language models (LLMs) to accommodate diverse user preferences is essential for enhancing alignment and user satisfaction. Traditional reinforcement learning from human feedback (RLHF) approaches often rely on monolithic value representations, limiting their ability to adapt to individual preferences. We introduce a novel framework that leverages low-rank preference modeling to efficiently learn and generalize user-specific reward functions. By representing reward functions in a low-dimensional subspace and modeling individual preferences as weighted combinations of shared basis functions, our approach avoids rigid user categorization while enabling scalability and few-shot adaptation. We validate our method on multiple preference datasets, demonstrating superior generalization to unseen users and improved accuracy in preference prediction tasks.
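As a concrete illustration of the low-rank idea summarized above, the sketch below (an assumption-laden stand-in, not the paper's implementation) models each user's reward as a learned mixture over a small set of shared basis reward heads and trains it with a Bradley-Terry style preference loss; the feature extractor, dimensions, and loss form are illustrative choices.

```python
# Hedged sketch: per-user rewards as weighted combinations of shared basis reward functions.
import torch
import torch.nn as nn

class LowRankRewardModel(nn.Module):
    def __init__(self, feature_dim: int, num_basis: int, num_users: int):
        super().__init__()
        # Shared basis: `num_basis` linear reward heads over response features.
        self.basis = nn.Linear(feature_dim, num_basis, bias=False)
        # Per-user mixing weights over the basis (the low-dimensional user representation).
        self.user_weights = nn.Embedding(num_users, num_basis)

    def reward(self, features: torch.Tensor, user_ids: torch.Tensor) -> torch.Tensor:
        basis_rewards = self.basis(features)        # (batch, num_basis)
        w = self.user_weights(user_ids)             # (batch, num_basis)
        return (w * basis_rewards).sum(dim=-1)      # (batch,)

def preference_loss(model, feats_chosen, feats_rejected, user_ids):
    """Bradley-Terry style loss: the chosen response should score higher for that user."""
    margin = model.reward(feats_chosen, user_ids) - model.reward(feats_rejected, user_ids)
    return -torch.nn.functional.logsigmoid(margin).mean()
```

Few-shot adaptation to a new user then amounts to fitting only that user's mixing weights while the shared basis stays fixed.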
[ "Avinandan Bose", "Zhihan Xiong", "Yuejie Chi", "Simon Shaolei Du", "Lin Xiao", "Maryam Fazel" ]
https://openreview.net/forum?id=bYu4DOqRY8
bYu4DOqRY8
bYu4DOqRY8
[ "~Avinandan_Bose1", "~Zhihan_Xiong1", "~Yuejie_Chi1", "~Simon_Shaolei_Du1", "~Lin_Xiao1", "~Maryam_Fazel1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/350a08ec6b099c62f340e25b0560b73653f3a110.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "preference learning", "personalization", "reward modeling", "plurality" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ bose2025lore, title={LoRe: Personalizing {LLM}s via Low-Rank Reward Modeling}, author={Avinandan Bose and Zhihan Xiong and Yuejie Chi and Simon Shaolei Du and Lin Xiao and Maryam Fazel}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=bYu4DOqRY8} }
bose|lore_personalizing_llms_via_lowrank_reward_modeling
/attachment/378b641524b06586290a0a9e85ffb9de08e61924.zip
null
null
null
null
The Dual-Route Model of Induction
We find that LLMs can do in-context copying in two different ways: either by copying individual tokens verbatim, or by copying entire word meanings (which may span multiple tokens).
Prior work on in-context copying has shown the existence of *induction heads*, which attend to and promote individual tokens during copying. In this work we discover a new type of induction head: *concept-level* induction heads, which copy entire lexical units instead of individual tokens. Concept induction heads learn to attend to the ends of multi-token words throughout training, working in parallel with token-level induction heads to copy meaningful text. We show that these heads are responsible for semantic tasks like word-level translation, whereas token induction heads are vital for tasks that can only be done verbatim (like copying nonsense tokens). These two "routes" operate independently: we show that ablation of token induction heads causes models to paraphrase where they would otherwise copy verbatim. By patching concept induction head outputs, we find that they contain language-independent word representations that mediate natural language translation, suggesting that LLMs represent abstract word meanings independent of language or form.
[ "Sheridan Feucht", "Eric Todd", "Byron C Wallace", "David Bau" ]
https://openreview.net/forum?id=bNTrKqqnG9
bNTrKqqnG9
bNTrKqqnG9
[ "~Sheridan_Feucht1", "~Eric_Todd1", "~Byron_C_Wallace1", "~David_Bau1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/44805a940b65757f691cc13691a90581dcdb453a.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "interpretability", "induction heads", "in-context learning", "ICL", "detokenization" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ feucht2025the, title={The Dual-Route Model of Induction}, author={Sheridan Feucht and Eric Todd and Byron C Wallace and David Bau}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=bNTrKqqnG9} }
feucht|the_dualroute_model_of_induction
null
null
null
null
null
CrossWordBench: Evaluating the Reasoning Capabilities of LLMs and LVLMs with Controllable Puzzle Generation
We propose CrossWordBench, a benchmark to evaluate the reasoning capabilities of both LLMs and LVLMs though the medium of crossword puzzles.
Existing reasoning evaluation frameworks for Large Language Models (LLMs) and Large Vision-Language Models (LVLMs) predominantly assess either text-based reasoning or vision-language understanding capabilities, with limited dynamic interplay between textual and visual constraints. To address this limitation, we introduce CrossWordBench, a benchmark designed to evaluate the reasoning capabilities of both LLMs and LVLMs through the medium of crossword puzzles—a task requiring multimodal adherence to semantic constraints from $\textbf{text-based clues}$ and intersectional constraints from $\textbf{visual grid structures}$. CrossWordBench leverages a controllable puzzle generation framework that produces puzzles in two formats ($\textit{text}$ and $\textit{image}$), supports adjustable difficulty through prefill ratio control, and offers different evaluation strategies, ranging from direct puzzle solving to interactive modes. Our extensive evaluation of over 20 models reveals that reasoning LLMs substantially outperform non-reasoning models by effectively leveraging crossing-letter constraints. We further demonstrate that LVLMs struggle with the task, showing a strong correlation between their puzzle-solving performance and grid-parsing accuracy. Our findings highlight limitations of the reasoning capabilities of current LLMs and LVLMs, and provide an effective approach for creating multimodal constrained tasks for future evaluations.
[ "Jixuan Leng", "Chengsong Huang", "Langlin Huang", "Bill Yuchen Lin", "William W. Cohen", "Haohan Wang", "Jiaxin Huang" ]
https://openreview.net/forum?id=bJCQMKwPVq
bJCQMKwPVq
bJCQMKwPVq
[ "~Jixuan_Leng1", "~Chengsong_Huang1", "~Langlin_Huang1", "~Bill_Yuchen_Lin1", "~William_W._Cohen2", "~Haohan_Wang1", "~Jiaxin_Huang1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/4387a92ca259f0b87c9d01fd2c80a1dd97ba0084.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "LLMs", "LVLMs", "Evaluation", "Benchmark" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ leng2025crosswordbench, title={CrossWordBench: Evaluating the Reasoning Capabilities of {LLM}s and {LVLM}s with Controllable Puzzle Generation}, author={Jixuan Leng and Chengsong Huang and Langlin Huang and Bill Yuchen Lin and William W. Cohen and Haohan Wang and Jiaxin Huang}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=bJCQMKwPVq} }
leng|crosswordbench_evaluating_the_reasoning_capabilities_of_llms_and_lvlms_with_controllable_puzzle_generation
null
null
null
null
null
From Next-Token to Mathematics: The Learning Dynamics of Mathematical Reasoning in Language Models
We conduct the first analysis of how math reasoning skills are learned during pre- and post-training using open checkpoint and open weight models.
Large Language Models (LLMs) solely trained on next-token prediction learn to solve a wide range of problems involving mathematical reasoning. How does this ability evolve during training? We present the first analysis of how the mathematical reasoning abilities of several open-weight LLMs develop during pre-training and post-training. To this end, we construct MathCAMPS, a synthetic dataset of novel mathematical reasoning problems grounded in 44 fine-grained skills taken from the Common Core curriculum from kindergarten through 8th grade. In one experiment, we show that mathematical skills are learned during pre-training in an order that measurably correlates with the human-designed curriculum, even though training data are randomly ordered. We also present a detailed analysis of which mathematical abilities benefit from instruction-tuning, a widely used post-training method, and, in contrast, which skills suffer. Our work paves the way for an empirical understanding of LLM training dynamics in relation to reasoning.
[ "Shubhra Mishra", "Gabriel Poesia", "Noah Goodman" ]
https://openreview.net/forum?id=bJ9aARjtBu
bJ9aARjtBu
bJ9aARjtBu
[ "~Shubhra_Mishra1", "~Gabriel_Poesia1", "~Noah_Goodman1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/af194ebe55dcf8e07919154014cfe0ed016dd5db.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "math reasoning", "training dynamics", "reasoning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ mishra2025from, title={From Next-Token to Mathematics: The Learning Dynamics of Mathematical Reasoning in Language Models}, author={Shubhra Mishra and Gabriel Poesia and Noah Goodman}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=bJ9aARjtBu} }
mishra|from_nexttoken_to_mathematics_the_learning_dynamics_of_mathematical_reasoning_in_language_models
null
null
null
null
null
LoRI: Reducing Cross-Task Interference in Multi-Task Low-Rank Adaptation
LoRI is a simple yet effective method for parameter-efficient fine-tuning that reduces cross-task interference by freezing projection matrices $A$ and sparsifying $B$.
Low-Rank Adaptation (LoRA) has emerged as a popular parameter-efficient fine-tuning (PEFT) method for Large Language Models (LLMs), yet it still incurs notable overhead and suffers from parameter interference in multi-task scenarios. We propose LoRA with Reduced Interference (LoRI), a simple yet effective approach that freezes the projection matrices $A$ as random projections and sparsifies the matrices $B$ using task-specific masks. This design substantially reduces the number of trainable parameters while maintaining strong task performance. Moreover, LoRI minimizes cross-task interference in adapter merging by leveraging the orthogonality between adapter subspaces, and supports continual learning by using sparsity to mitigate catastrophic forgetting. Extensive experiments across natural language understanding, mathematical reasoning, code generation, and safety alignment tasks demonstrate that LoRI outperforms full fine-tuning and existing PEFT methods, while using up to 95\% fewer trainable parameters than LoRA. In multi-task experiments, LoRI enables effective adapter merging and continual learning with reduced cross-task interference. Code is available at: https://github.com/juzhengz/LoRI.
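To make the design above concrete, here is a minimal PyTorch sketch, under stated assumptions, of an adapter with a frozen random down-projection A and a trainable up-projection B that is sparsified by a fixed mask; the magnitude-based mask selection and the warm-up-then-mask schedule are illustrative guesses rather than the paper's exact procedure.

```python
# Hedged sketch of a LoRI-style adapter: frozen random A, sparsely masked trainable B.
import torch
import torch.nn as nn

class LoRILinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, sparsity: float = 0.9):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)
        # Frozen random down-projection A (never updated).
        self.A = nn.Parameter(torch.randn(rank, base.in_features) / rank**0.5,
                              requires_grad=False)
        # Trainable up-projection B, to be sparsified by a fixed task-specific mask.
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.register_buffer("mask", torch.ones_like(self.B))
        self.sparsity = sparsity

    @torch.no_grad()
    def calibrate_mask(self):
        """Keep only the largest-magnitude entries of B (a simple stand-in criterion),
        assumed to be called after a brief dense warm-up phase of training."""
        k = int((1.0 - self.sparsity) * self.B.numel())
        threshold = self.B.abs().flatten().kthvalue(self.B.numel() - k).values
        self.mask.copy_((self.B.abs() > threshold).float())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + x @ self.A.t() @ (self.mask * self.B).t()
```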
[ "Juzheng Zhang", "Jiacheng You", "Ashwinee Panda", "Tom Goldstein" ]
https://openreview.net/forum?id=b8cW86QcOD
b8cW86QcOD
b8cW86QcOD
[ "~Juzheng_Zhang2", "~Jiacheng_You1", "~Ashwinee_Panda1", "~Tom_Goldstein1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/31e650897c20dd83f40dca4cf8678bd5c44bdfac.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Large Language Models", "Parameter-Efficient Fine-Tuning", "Model Merging", "Sparsity" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ zhang2025lori, title={Lo{RI}: Reducing Cross-Task Interference in Multi-Task Low-Rank Adaptation}, author={Juzheng Zhang and Jiacheng You and Ashwinee Panda and Tom Goldstein}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=b8cW86QcOD} }
zhang|lori_reducing_crosstask_interference_in_multitask_lowrank_adaptation
null
null
null
null
null
OpenCodeReasoning: Advancing Data Distillation for Competitive Coding
We construct, analyze and release a curated code reasoning dataset of 735K samples that instills SOTA reasoning capabilities, evaluated on LiveCodeBench.
Since the advent of reasoning-based large language models, many have found great success from distilling reasoning capabilities into student models. Such techniques have significantly bridged the gap between reasoning and standard LLMs on coding tasks. Despite this, much of the progress on distilling reasoning models remains locked behind proprietary datasets or lacks details on data curation, filtering and subsequent training. To address this, we construct a superior supervised fine-tuning (SFT) dataset that we use to achieve state-of-the-art coding capability results in models of various sizes. Our distilled models use only SFT to achieve 61.8% on LiveCodeBench and 24.6% on CodeContests, surpassing alternatives trained with reinforcement learning. We then perform analysis on the data sources used to construct our dataset, the impact of code execution filtering, and the importance of instruction/solution diversity. We observe that execution filtering negatively affected benchmark accuracy, leading us to prioritize instruction diversity over solution correctness. Finally, we also analyze the token efficiency and reasoning patterns utilized by these models.
[ "Wasi Uddin Ahmad", "Sean Narenthiran", "Somshubra Majumdar", "Aleksander Ficek", "Siddhartha Jain", "Jocelyn Huang", "Vahid Noroozi", "Boris Ginsburg" ]
https://openreview.net/forum?id=aykM7KUVJZ
aykM7KUVJZ
aykM7KUVJZ
[ "~Wasi_Uddin_Ahmad1", "~Sean_Narenthiran1", "~Somshubra_Majumdar1", "~Aleksander_Ficek1", "~Siddhartha_Jain1", "~Jocelyn_Huang1", "~Vahid_Noroozi2", "~Boris_Ginsburg1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/544a715a36130a61830a4ddadb3420047cc5cf02.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Code Generation", "Code Reasoning", "Large Language Model", "LiveCodeBench" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ ahmad2025opencodereasoning, title={OpenCodeReasoning: Advancing Data Distillation for Competitive Coding}, author={Wasi Uddin Ahmad and Sean Narenthiran and Somshubra Majumdar and Aleksander Ficek and Siddhartha Jain and Jocelyn Huang and Vahid Noroozi and Boris Ginsburg}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=aykM7KUVJZ} }
ahmad|opencodereasoning_advancing_data_distillation_for_competitive_coding
null
null
null
null
null
PyramidKV: Dynamic KV Cache Compression based on Pyramidal Information Funneling
We developed PyramidKV, a novel and effective KV cache compression method.
In this study, we investigate whether attention-based information flow inside large language models (LLMs) is aggregated through noticeable patterns for long context processing. Our observations reveal that LLMs aggregate information through Pyramidal Information Funneling, where attention scatters widely in lower layers, progressively consolidates within specific contexts, and ultimately focuses on critical tokens (a.k.a. massive activations or attention sinks) in higher layers. Motivated by these insights, we developed PyramidKV, a novel and effective KV cache compression method. This approach dynamically adjusts the KV cache size across different layers, allocating more cache in lower layers and less in higher ones, diverging from traditional methods that maintain a uniform KV cache size. Our experimental evaluations, utilizing the LongBench benchmark, show that PyramidKV matches the performance of models with a full KV cache while retaining only 12% of the KV cache, thus significantly reducing memory usage. In scenarios emphasizing memory efficiency, where only 0.7% of the KV cache is maintained, PyramidKV surpasses other KV cache compression techniques, achieving up to a 20.5 absolute accuracy improvement on the TREC dataset. In the Needle-in-a-Haystack experiment, PyramidKV outperforms competing methods in maintaining long-context comprehension in LLMs; notably, retaining just 128 KV cache entries enables the LLAMA-3-70B model to achieve 100% accuracy, matching that of a full KV cache.
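A minimal sketch of the pyramidal allocation idea described above, assuming a simple linear budget schedule and attention-score-based eviction (the paper's exact allocation rule and eviction policy may differ):

```python
# Hedged sketch: decaying per-layer KV-cache budgets with a fixed total budget.
import numpy as np

def pyramidal_kv_budgets(num_layers: int, total_budget: int, ratio: float = 8.0) -> list[int]:
    """Split `total_budget` cached tokens across layers, decaying from bottom to top.

    `ratio` is the (assumed) budget ratio between the lowest and highest layer.
    """
    # Linearly interpolate raw weights from `ratio` (layer 0) down to 1 (last layer).
    weights = np.linspace(ratio, 1.0, num_layers)
    budgets = np.floor(weights / weights.sum() * total_budget).astype(int)
    # Distribute any rounding remainder to the lowest layers.
    for i in range(total_budget - budgets.sum()):
        budgets[i % num_layers] += 1
    return budgets.tolist()

def evict_kv(keys, values, attn_scores, budget: int):
    """Keep only the `budget` cached positions (NumPy arrays) with the highest attention mass."""
    keep = np.argsort(attn_scores)[-budget:]
    keep.sort()  # preserve positional order of the retained entries
    return keys[keep], values[keep]

if __name__ == "__main__":
    print(pyramidal_kv_budgets(num_layers=32, total_budget=2048))
```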
[ "Zefan Cai", "Yichi Zhang", "Bofei Gao", "Yuliang Liu", "Yucheng Li", "Tianyu Liu", "Keming Lu", "Wayne Xiong", "Yue Dong", "Junjie Hu", "Wen Xiao" ]
https://openreview.net/forum?id=ayi7qezU87
ayi7qezU87
ayi7qezU87
[ "~Zefan_Cai1", "~Yichi_Zhang16", "~Bofei_Gao1", "~Yuliang_Liu5", "~Yucheng_Li5", "~Tianyu_Liu3", "~Keming_Lu1", "~Wayne_Xiong1", "~Yue_Dong2", "~Junjie_Hu2", "~Wen_Xiao2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/40900f3b1b3442ab76c9934ef2acc1c2a3ebb3e5.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "KV Cache Compression", "Inference Acceleration" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ cai2025pyramidkv, title={Pyramid{KV}: Dynamic {KV} Cache Compression based on Pyramidal Information Funneling}, author={Zefan Cai and Yichi Zhang and Bofei Gao and Yuliang Liu and Yucheng Li and Tianyu Liu and Keming Lu and Wayne Xiong and Yue Dong and Junjie Hu and Wen Xiao}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=ayi7qezU87} }
cai|pyramidkv_dynamic_kv_cache_compression_based_on_pyramidal_information_funneling
null
true
null
null
null
RWKV-7 "Goose" with Expressive Dynamic State Evolution
RWKV-7 is a new sequence modeling architecture with constant memory usage and inference time per token, SoTA performance on multilingual tasks, and near SoTA English LLM performance at 3B scale, with dramatically less training than top 3B models.
We present RWKV-7 "Goose", a new sequence modeling architecture with constant memory usage and constant inference time per token. Despite being trained on dramatically fewer tokens than other top models, our 2.9 billion parameter language model achieves a new 3B SoTA on multilingual tasks and matches the current 3B SoTA on English language downstream performance. RWKV-7 introduces a newly generalized formulation of the delta rule with vector-valued gating and in-context learning rates, as well as a relaxed value replacement rule. We show that RWKV-7 can perform state tracking and recognize all regular languages, while retaining parallelizability of training. This exceeds the capabilities of Transformers under standard complexity conjectures, which are limited to $\mathsf{TC}^0$. To demonstrate RWKV-7's language modeling capability, we also present an extended open source 3.1 trillion token multilingual corpus, and train four RWKV-7 models ranging from 0.19 billion to 2.9 billion parameters on this dataset. To foster openness, reproduction, and adoption, we release our models and dataset component listing at https://huggingface.co/RWKV, and our training and inference code at https://github.com/RWKV/RWKV-LM; all under the Apache 2.0 License.
[ "Bo Peng", "Ruichong Zhang", "Daniel Goldstein", "Eric Alcaide", "Xingjian Du", "Haowen Hou", "Jiaju Lin", "Jiaxing Liu", "Janna Lu", "William Merrill", "Guangyu Song", "Kaifeng Tan", "Saiteja Utpala", "Nathan Wilce", "Johan S. Wind", "Tianyi Wu", "Daniel Wuttke", "Christian Zhou-Zheng" ]
https://openreview.net/forum?id=ayB1PACN5j
ayB1PACN5j
ayB1PACN5j
[ "~Bo_Peng21", "~Ruichong_Zhang1", "~Daniel_Goldstein2", "~Eric_Alcaide2", "~Xingjian_Du1", "~Haowen_Hou2", "~Jiaju_Lin1", "~Jiaxing_Liu2", "~Janna_Lu1", "~William_Merrill1", "~Guangyu_Song1", "~Kaifeng_Tan1", "~Saiteja_Utpala1", "~Nathan_Wilce1", "~Johan_S._Wind2", "~Tianyi_Wu11", "~Daniel_Wuttke1", "~Christian_Zhou-Zheng1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/74011cffc1082ac9394afde92802c810c51baabf.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Goose", "LLM", "RWKV", "RWKV-7", "RWKV7", "Linear", "Linear Attention", "SSM", "subquadratic" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ peng2025rwkv, title={{RWKV}-7 ''Goose'' with Expressive Dynamic State Evolution}, author={Bo Peng and Ruichong Zhang and Daniel Goldstein and Eric Alcaide and Xingjian Du and Haowen Hou and Jiaju Lin and Jiaxing Liu and Janna Lu and William Merrill and Guangyu Song and Kaifeng Tan and Saiteja Utpala and Nathan Wilce and Johan S. Wind and Tianyi Wu and Daniel Wuttke and Christian Zhou-Zheng}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=ayB1PACN5j} }
peng|rwkv7_goose_with_expressive_dynamic_state_evolution
null
null
null
null
null
Navigating the Rabbit Hole: Emergent Biases in LLM-Generated Attack Narratives Targeting Mental Health Groups
We propose a framework to analyze the disproportionate targeting of mental health groups in LLM-generated attack chains.
Large Language Models (LLMs) have been shown to demonstrate imbalanced biases against certain groups. However, the study of unprovoked targeted attacks by LLMs towards at-risk populations remains underexplored. Our paper presents three novel contributions: (1) the explicit evaluation of LLM-generated attacks on highly vulnerable mental health groups; (2) a network-based framework to study the propagation of relative biases; and (3) an assessment of the relative degree of stigmatization that emerges from these attacks. Our analysis of a recently released large-scale bias audit dataset reveals that mental health entities occupy central positions within attack narrative networks, as revealed by a significantly higher mean centrality of closeness (p-value = 4.06e-10) and dense clustering (Gini coefficient = 0.7). Drawing from an established stigmatization framework, our analysis indicates increased labeling components for mental health disorder-related targets relative to initial targets in generation chains. Taken together, these insights shed light on the structural predilections of large language models to heighten harmful discourse and highlight the need for suitable approaches for mitigation.
[ "Rijul Magu", "Arka Dutta", "Sean Kim", "Ashiqur R. KhudaBukhsh", "Munmun De Choudhury" ]
https://openreview.net/forum?id=am6p8VFm9l
am6p8VFm9l
am6p8VFm9l
[ "~Rijul_Magu1", "~Arka_Dutta2", "~Sean_Kim2", "~Ashiqur_R._KhudaBukhsh1", "~Munmun_De_Choudhury1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/1b19b81748a05fae5824c5d1f9c1564132c741c6.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Mental Health", "Network Analysis", "Stigmatization", "Emergent Bias", "Toxicity" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ magu2025navigating, title={Navigating the Rabbit Hole: Emergent Biases in {LLM}-Generated Attack Narratives Targeting Mental Health Groups}, author={Rijul Magu and Arka Dutta and Sean Kim and Ashiqur R. KhudaBukhsh and Munmun De Choudhury}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=am6p8VFm9l} }
magu|navigating_the_rabbit_hole_emergent_biases_in_llmgenerated_attack_narratives_targeting_mental_health_groups
null
null
null
null
null
CLIPPER: Compression enables long-context synthetic data generation
We introduce CLIPPER, a compression-based approach for generating synthetic data tailored to narrative claim verification—a task that requires reasoning over a book to verify a given claim.
LLM developers are increasingly reliant on synthetic data, but generating high-quality data for complex long-context reasoning tasks remains challenging. We introduce CLIPPER, a compression-based approach for generating synthetic data tailored to narrative claim verification—a task that requires reasoning over a book to verify a given claim. Instead of generating claims directly from the raw text of the book, which results in artifact-riddled claims, CLIPPER first compresses the book into chapter outlines and book summaries and then uses these intermediate representations to generate complex claims and corresponding chain-of-thoughts. Compared to naive approaches, CLIPPER produces claims that are more valid, grounded, and complex. Using CLIPPER, we synthesize a dataset of 19K claims paired with source books and chain-of-thought reasoning, and use it to fine-tune three open-weight models. Our best model achieves breakthrough results on narrative claim verification (from 28% to 76% accuracy on our test set) and sets a new state-of-the-art for sub-10B models on the NoCha leaderboard. Further analysis shows that our models generate more detailed and grounded chain-of-thought reasoning while also improving performance on other narrative understanding tasks (e.g., NarrativeQA).
[ "Chau Minh Pham", "Yapei Chang", "Mohit Iyyer" ]
https://openreview.net/forum?id=akHq1QcqeZ
akHq1QcqeZ
akHq1QcqeZ
[ "~Chau_Minh_Pham1", "~Yapei_Chang1", "~Mohit_Iyyer1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/f2024d79618cdeeb40b50c0b1836ce0cee87c511.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "synthetic data", "fine-tuning", "instruction-tuning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ pham2025clipper, title={{CLIPPER}: Compression enables long-context synthetic data generation}, author={Chau Minh Pham and Yapei Chang and Mohit Iyyer}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=akHq1QcqeZ} }
pham|clipper_compression_enables_longcontext_synthetic_data_generation
null
null
null
null
null
EvalTree: Profiling Language Model Weaknesses via Hierarchical Capability Trees
Formulating the problem of LM weakness profiling and proposing a method EvalTree
An ideal model evaluation should achieve two goals: identifying where the model fails and providing actionable improvement guidance. Toward these goals for language model (LM) evaluations, we formulate the problem of generating a weakness profile, a set of weaknesses expressed in natural language, given an LM's performance on every individual instance in a benchmark. We introduce a suite of quantitative assessments to compare different weakness profiling methods. We also introduce a weakness profiling method EvalTree. EvalTree constructs a capability tree where each node represents a capability described in natural language and is linked to a subset of benchmark instances that specifically evaluate this capability; it then extracts nodes where the LM performs poorly to generate a weakness profile. On the MATH and WildChat benchmarks, we show that EvalTree outperforms baseline weakness profiling methods by identifying weaknesses more precisely and comprehensively. Weakness profiling further enables weakness-guided data collection, and training data collection guided by EvalTree-identified weaknesses improves LM performance more than other data collection strategies. We also show how EvalTree exposes flaws in Chatbot Arena's human-voter-based evaluation practice. To facilitate future work, we provide an interface that allows practitioners to interactively explore the capability trees built by EvalTree.
[ "Zhiyuan Zeng", "Yizhong Wang", "Hannaneh Hajishirzi", "Pang Wei Koh" ]
https://openreview.net/forum?id=aV2hQN9vkp
aV2hQN9vkp
aV2hQN9vkp
[ "~Zhiyuan_Zeng3", "~Yizhong_Wang2", "~Hannaneh_Hajishirzi1", "~Pang_Wei_Koh1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/64efd69a5024b2582a5aca08978a2cae631b8b00.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Evaluation", "Interpretability" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ zeng2025evaltree, title={EvalTree: Profiling Language Model Weaknesses via Hierarchical Capability Trees}, author={Zhiyuan Zeng and Yizhong Wang and Hannaneh Hajishirzi and Pang Wei Koh}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=aV2hQN9vkp} }
zeng|evaltree_profiling_language_model_weaknesses_via_hierarchical_capability_trees
null
null
null
null
null
Shared Global and Local Geometry of Language Model Embeddings
We characterize the global and local geometry of language model token embeddings and find similarities across language models.
Researchers have recently suggested that models share common representations. In our work, we find numerous geometric similarities across the token embeddings of large language models. First, we find “global” similarities: token embeddings often share similar relative orientations. Next, we characterize local geometry in two ways: (1) by using Locally Linear Embeddings, and (2) by defining a simple measure for the intrinsic dimension of each embedding. Both characterizations allow us to find local similarities across token embeddings. Additionally, our intrinsic dimension demonstrates that embeddings lie on a lower dimensional manifold, and that tokens with lower intrinsic dimensions often have semantically coherent clusters, while those with higher intrinsic dimensions do not. Based on our findings, we introduce EMB2EMB, a simple application to linearly transform steering vectors from one language model to another, despite the two models having different dimensions.
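As a rough illustration of the EMB2EMB application mentioned above, the sketch below fits a least-squares linear map between two models' token-embedding matrices over a shared vocabulary and uses it to carry a steering vector across models; the pairing by shared vocabulary and the plain least-squares fit are assumptions made for this example.

```python
# Hedged sketch: linear map between two embedding spaces, applied to a steering vector.
import torch

def fit_linear_map(emb_a: torch.Tensor, emb_b: torch.Tensor) -> torch.Tensor:
    """Solve min_W ||emb_a @ W - emb_b||_F for paired token embeddings (n, d_a), (n, d_b)."""
    return torch.linalg.lstsq(emb_a, emb_b).solution   # shape (d_a, d_b)

def transfer_steering_vector(v_a: torch.Tensor, W: torch.Tensor) -> torch.Tensor:
    """Map a steering vector from model A's embedding space into model B's."""
    return v_a @ W

if __name__ == "__main__":
    torch.manual_seed(0)
    emb_a = torch.randn(1000, 512)   # stand-in for model A's token embeddings
    emb_b = emb_a @ torch.randn(512, 768) + 0.01 * torch.randn(1000, 768)
    W = fit_linear_map(emb_a, emb_b)
    print(transfer_steering_vector(torch.randn(512), W).shape)  # torch.Size([768])
```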
[ "Andrew Lee", "Melanie Weber", "Fernanda Viégas", "Martin Wattenberg" ]
https://openreview.net/forum?id=aJDykpJAYF
aJDykpJAYF
aJDykpJAYF
[ "~Andrew_Lee2", "~Melanie_Weber1", "~Fernanda_Viégas1", "~Martin_Wattenberg1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/3cb8501d114bc2f5dbfc0cb0be5bdffcc5d4d3e1.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Embeddings", "Alignment", "Interpretability" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ lee2025shared, title={Shared Global and Local Geometry of Language Model Embeddings}, author={Andrew Lee and Melanie Weber and Fernanda Vi{\'e}gas and Martin Wattenberg}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=aJDykpJAYF} }
lee|shared_global_and_local_geometry_of_language_model_embeddings
null
true
null
null
null
Sharpe Ratio-Guided Active Learning for Preference Optimization in RLHF
We provide a novel active learning method for RLHF based on the Sharpe Ratio.
Reinforcement learning from human feedback (RLHF) has become a cornerstone of the training and alignment pipeline for large language models (LLMs). Recent advances, such as direct preference optimization (DPO), have simplified the preference learning step. However, collecting preference data remains a challenging and costly process, often requiring expert annotation. This cost can be mitigated by carefully selecting the data points presented for annotation. In this work, we propose an active learning approach to efficiently select prompt and preference pairs using a risk assessment strategy based on the Sharpe Ratio. To address the challenge of unknown preferences prior to annotation, our method evaluates the gradients of all potential preference annotations to assess their impact on model updates. These gradient-based evaluations enable risk assessment of data points regardless of the annotation outcome. By leveraging the DPO loss derivations, we derive a \emph{closed-form expression} for computing these Sharpe ratios on a per-tuple basis, ensuring our approach remains both \emph{tractable} and \emph{computationally efficient}. We also introduce two variants of our method, each making different assumptions about prior information. Experimental results demonstrate that our method outperforms the baseline by up to 5\% in win rates against the chosen completion with limited human preference data across several language models and real-world datasets.
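The snippet below is a deliberately simplified stand-in for the selection rule described above: it scores each unlabeled tuple by a Sharpe-ratio-style mean-over-std of the model-update impact across the two possible annotations, using gradient norms as the impact proxy. The paper instead derives a closed-form expression from the DPO loss, which this sketch does not reproduce.

```python
# Hedged sketch: Sharpe-style scoring of candidate preference tuples for annotation.
import torch

def sharpe_score(grad_if_chosen: torch.Tensor, grad_if_rejected: torch.Tensor,
                 p_chosen: float = 0.5) -> float:
    """Mean/std of update impact over the two possible annotation outcomes."""
    impacts = torch.stack([grad_if_chosen.norm(), grad_if_rejected.norm()])
    probs = torch.tensor([p_chosen, 1.0 - p_chosen])
    mean = (probs * impacts).sum()
    std = torch.sqrt((probs * (impacts - mean) ** 2).sum())
    return (mean / (std + 1e-8)).item()

def select_batch(candidates: list[tuple[torch.Tensor, torch.Tensor]], k: int) -> list[int]:
    """Pick the k candidate tuples with the highest Sharpe-style scores for annotation."""
    scores = [sharpe_score(g_c, g_r) for g_c, g_r in candidates]
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
```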
[ "Syrine Belakaria", "Joshua Kazdan", "Charles Marx", "Chris Cundy", "Willie Neiswanger", "Sanmi Koyejo", "Barbara E Engelhardt", "Stefano Ermon" ]
https://openreview.net/forum?id=a6xzTqMUFQ
a6xzTqMUFQ
a6xzTqMUFQ
[ "~Syrine_Belakaria1", "~Joshua_Kazdan1", "~Charles_Marx1", "~Chris_Cundy1", "~Willie_Neiswanger2", "~Sanmi_Koyejo1", "~Barbara_Engelhardt1", "~Stefano_Ermon1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/25abf5aa983f29b83734f87acba47b58d8e8c5f6.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "active learning", "RLHF", "llm" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ belakaria2025sharpe, title={Sharpe Ratio-Guided Active Learning for Preference Optimization in {RLHF}}, author={Syrine Belakaria and Joshua Kazdan and Charles Marx and Chris Cundy and Willie Neiswanger and Sanmi Koyejo and Barbara E Engelhardt and Stefano Ermon}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=a6xzTqMUFQ} }
belakaria|sharpe_ratioguided_active_learning_for_preference_optimization_in_rlhf
null
null
null
null
null
Can Performant LLMs Be Ethical? Quantifying the Impact of Web Crawling Opt-Outs
We study how respecting web crawling opt-outs (robots.txt) affects LLM performance by introducing the concept of Data Compliance Gap (DCG).
The increasing adoption of web crawling opt-outs by copyright holders of online content raises critical questions about the impact of data compliance on large language model (LLM) performance. However, little is known about how these restrictions (and the resultant filtering of pretraining datasets) affect the capabilities of models trained using these corpora. In this work, we conceptualize this effect as the $\textit{data compliance gap} (DCG)$, which quantifies the performance difference between models trained on datasets that comply with web crawling opt-outs, and those that do not. We measure the data compliance gap in two settings: pretraining models from scratch and continual pretraining from existing compliant models (simulating a setting where copyrighted data could be integrated later in pretraining). Our experiments with 1.5B models show that, as of January 2025, compliance with web data opt-outs does not degrade general knowledge acquisition (close to 0\% DCG). However, in specialized domains such as biomedical research, excluding major publishers leads to performance declines. These findings suggest that while general-purpose LLMs can be trained to perform equally well using fully open data, performance in specialized domains may benefit from access to high-quality copyrighted sources later in training. Our study provides empirical insights into the long-debated trade-off between data compliance and downstream model performance, informing future discussions on AI training practices and policy decisions.
[ "Dongyang Fan", "Vinko Sabolčec", "Matin Ansaripour", "Ayush Kumar Tarun", "Martin Jaggi", "Antoine Bosselut", "Imanol Schlag" ]
https://openreview.net/forum?id=a6QsOjr3wo
a6QsOjr3wo
a6QsOjr3wo
[ "~Dongyang_Fan2", "~Vinko_Sabolčec1", "~Matin_Ansaripour1", "~Ayush_Kumar_Tarun1", "~Martin_Jaggi1", "~Antoine_Bosselut1", "~Imanol_Schlag3" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/c04ef11771a5e838ccf85b71b78636ed913f5ae0.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Responsible AI", "AI and Fair Use", "Robots.txt Opt-out", "LLMs" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ fan2025can, title={Can Performant {LLM}s Be Ethical? Quantifying the Impact of Web Crawling Opt-Outs}, author={Dongyang Fan and Vinko Sabol{\v{c}}ec and Matin Ansaripour and Ayush Kumar Tarun and Martin Jaggi and Antoine Bosselut and Imanol Schlag}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=a6QsOjr3wo} }
fan|can_performant_llms_be_ethical_quantifying_the_impact_of_web_crawling_optouts
null
null
null
null
null
Task-Circuit Quantization: Leveraging Knowledge Localization and Interpretability for Compression
Using interpretability-informed saliency scores based on task-specific information to localize important weights to preserve during model compression, yielding a SOTA method for both general and task-specific quantization
​​Post-training quantization reduces a model's memory footprint by mapping full precision weights into low bit weights without costly retraining, but can degrade its downstream performance especially in low 2- to 3-bit settings. Existing methods mitigate these drops by keeping some important weights in higher precision; we develop a new mixed-precision approach, Task-Circuit Quantization (TCQ), that directly conditions the quantization process on specific circuits -- which we define as sets of weights associated with downstream task performance. TCQ draws parallels to automated circuit discovery, introducing a novel method to identify a small number of key weights that are particularly important to task performance; these weights are kept as 16-bit weights, while others are quantized, maintaining performance while only adding a marginal memory cost. Specifically, TCQ contrasts unquantized model weights with a uniformly-quantized model to estimate the expected change in weights due to quantization and uses gradient information to predict the resulting impact on task performance, allowing us to preserve task-specific weights. We compare TCQ-based quantization to existing mixed-precision quantization methods and GPTQ when conditioning both on general-purpose and task-specific data. Across QA, math reasoning, text-to-SQL tasks and for both Llama-3 and Qwen2.5 models, we find that TCQ outperforms baselines like SPQR and Slim-LLM using the same calibration data and a lower weight budget, achieving major improvements in the 2- and 3-bit regime. With only 3.1 bits we are able to recover 97% of the model's unquantized 16-bit MMLU performance, obtaining a 5.25% absolute improvement over SPQR. Furthermore, we observe consistently large gains over existing methods in the 2-bit regime, with an average gain of 14.74% over the strongest baseline, Slim-LLM. Code: [https://github.com/The-Inscrutable-X/TACQ](https://github.com/The-Inscrutable-X/TACQ)
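To ground the description above, here is a hedged sketch of the saliency computation: estimate each weight's change under uniform quantization, weight it by the task-loss gradient, and keep the highest-scoring entries in 16-bit. The uniform quantizer, keep fraction, and per-tensor granularity are illustrative assumptions, not the paper's exact recipe.

```python
# Hedged sketch: gradient-times-quantization-delta saliency for mixed-precision quantization.
import torch

def uniform_quantize(w: torch.Tensor, bits: int = 3) -> torch.Tensor:
    scale = w.abs().max() / (2 ** (bits - 1) - 1)
    return torch.round(w / scale).clamp(-(2 ** (bits - 1)), 2 ** (bits - 1) - 1) * scale

def task_circuit_mask(weight: torch.Tensor, grad: torch.Tensor,
                      bits: int = 3, keep_frac: float = 0.01) -> torch.Tensor:
    """Boolean mask of weights to keep in 16-bit, ranked by |grad * (w - Q(w))| saliency."""
    delta = weight - uniform_quantize(weight, bits)   # expected change from quantization
    saliency = (grad * delta).abs()
    k = max(1, int(keep_frac * weight.numel()))
    threshold = saliency.flatten().topk(k).values.min()
    return saliency >= threshold

def mixed_precision_quantize(weight, grad, bits=3, keep_frac=0.01):
    mask = task_circuit_mask(weight, grad, bits, keep_frac)
    return torch.where(mask, weight, uniform_quantize(weight, bits))
```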
[ "Hanqi Xiao", "Yi-Lin Sung", "Elias Stengel-Eskin", "Mohit Bansal" ]
https://openreview.net/forum?id=a201nfn3xX
a201nfn3xX
a201nfn3xX
[ "~Hanqi_Xiao1", "~Yi-Lin_Sung1", "~Elias_Stengel-Eskin1", "~Mohit_Bansal2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/e79023ddb144530c26e3752115728796c7dc13cb.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Quantization", "Mixed Precision", "Interpretability" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ xiao2025taskcircuit, title={Task-Circuit Quantization: Leveraging Knowledge Localization and Interpretability for Compression}, author={Hanqi Xiao and Yi-Lin Sung and Elias Stengel-Eskin and Mohit Bansal}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=a201nfn3xX} }
xiao|taskcircuit_quantization_leveraging_knowledge_localization_and_interpretability_for_compression
null
null
null
null
null
Rethinking Safety in LLM Fine-tuning: An Optimization Perspective
Fine-tuning can preserve safety without extra interventions by optimizing hyperparameters and using EMA momentum to stabilize training.
Fine-tuning language models is commonly believed to inevitably harm their safety, i.e., refusing to respond to harmful user requests, even when using harmless datasets, thus requiring additional safety measures. We challenge this belief through systematic testing, showing that poor optimization choices, rather than inherent trade-offs, often cause safety problems, measured as harmful responses to adversarial prompts. By properly selecting key training hyper-parameters, e.g., learning rate, batch size, and gradient steps, we reduce unsafe model responses from 16\% to approximately 5\%, as measured by keyword matching, while maintaining utility performance. Based on this observation, we propose a simple exponential moving average (EMA) momentum technique in parameter space that preserves safety performance by creating a stable optimization path and retains the original pre-trained model's safety properties. Our experiments on the Llama families across multiple datasets (Dolly, Alpaca, ORCA) demonstrate that safety problems during fine-tuning can largely be avoided without specialized interventions, outperforming existing approaches that require additional safety data while offering practical guidelines for maintaining both model performance and safety during adaptation.
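A small sketch of the parameter-space EMA technique described above, assuming a standard supervised fine-tuning loop; the decay value, update placement, and dataloader format are illustrative choices rather than the paper's exact setup.

```python
# Hedged sketch: exponential moving average of fine-tuned weights in parameter space.
import copy
import torch

@torch.no_grad()
def ema_update(ema_model: torch.nn.Module, model: torch.nn.Module, decay: float = 0.999):
    """ema_params <- decay * ema_params + (1 - decay) * current_params."""
    for ema_p, p in zip(ema_model.parameters(), model.parameters()):
        ema_p.mul_(decay).add_(p, alpha=1.0 - decay)

def finetune_with_ema(model, dataloader, optimizer, loss_fn, decay=0.999):
    # The EMA copy starts at the pre-trained weights, anchoring it to their safety behavior.
    ema_model = copy.deepcopy(model)
    for p in ema_model.parameters():
        p.requires_grad_(False)
    for batch, labels in dataloader:   # assumed (inputs, labels) batches
        optimizer.zero_grad()
        loss = loss_fn(model(batch), labels)
        loss.backward()
        optimizer.step()
        ema_update(ema_model, model, decay)
    return ema_model  # serve/evaluate the EMA weights rather than the raw fine-tuned ones
```

Serving the EMA weights keeps the deployed model on a smoothed optimization path anchored at the safety-aligned pre-trained starting point.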
[ "Minseon Kim", "Jin Myung Kwak", "Lama Alssum", "Bernard Ghanem", "Philip Torr", "David Krueger", "Fazl Barez", "Adel Bibi" ]
https://openreview.net/forum?id=ZnOoEA2nDn
ZnOoEA2nDn
ZnOoEA2nDn
[ "~Minseon_Kim1", "~Jin_Myung_Kwak1", "~Lama_Alssum1", "~Bernard_Ghanem1", "~Philip_Torr1", "~David_Krueger1", "~Fazl_Barez1", "~Adel_Bibi1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/69c14ab3b7570c1351454fdff6e3989c37b2932d.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Finetuning LLM", "Safety alignment" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ kim2025rethinking, title={Rethinking Safety in {LLM} Fine-tuning: An Optimization Perspective}, author={Minseon Kim and Jin Myung Kwak and Lama Alssum and Bernard Ghanem and Philip Torr and David Krueger and Fazl Barez and Adel Bibi}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=ZnOoEA2nDn} }
kim|rethinking_safety_in_llm_finetuning_an_optimization_perspective
null
null
null
null
null
Hell or High Water: Evaluating Agentic Recovery from External Failures
Evaluating how well LLMs can find backup plans.
As language model agents are applied to real world problems of increasing complexity, they will be expected to formulate plans across large search spaces. If those plans fail for reasons beyond their control, how well do language agents search for alternative ways to achieve their goals? We devise a specialized agentic planning benchmark to study this question. Each planning problem is solved via combinations of function calls. The agent searches for relevant functions from a set of over four thousand possibilities, and observes environmental feedback in the form of function outputs or error messages. Our benchmark confronts the agent with external failures in its workflow, such as functions that suddenly become unavailable. At the same time, even with the introduction of these failures, we guarantee that the task remains solvable. Ideally, an agent’s performance on the planning task should not be affected by the presence of external failures. Overall, we find that language agents struggle to formulate and execute backup plans in response to environment feedback. While state-of-the-art models are often able to identify the correct function to use in the right context, they struggle to adapt to feedback from the environment and often fail to pursue alternate courses of action, even when the search space is artificially restricted. We provide a systematic analysis of the failures of both open-source and commercial models, examining the effects of search space size, as well as the benefits of scaling model size in our setting. Our analysis identifies key challenges for current generative models as well as promising directions for future work.
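An illustrative sketch (hypothetical, not the benchmark's code) of the kind of external failure the abstract describes: a tool registry in which some functions suddenly become unavailable, so the agent loop receives error feedback and must fall back to an alternative plan. The `agent.next_call` and `agent.is_done` methods are assumed placeholders for the agent interface:

class ToolUnavailableError(RuntimeError):
    """Raised when the environment disables a function mid-episode."""

class ToolRegistry:
    def __init__(self, tools, disabled=None):
        self.tools = dict(tools)               # name -> callable, e.g. thousands of functions
        self.disabled = set(disabled or [])    # external failures injected by the environment

    def call(self, name, *args, **kwargs):
        if name in self.disabled:
            raise ToolUnavailableError(f"{name} is currently unavailable")
        return self.tools[name](*args, **kwargs)

def run_agent(agent, registry, goal, max_steps=20):
    """Minimal agent loop: on failure feedback the agent must propose a backup plan."""
    observation = goal
    for _ in range(max_steps):
        name, args = agent.next_call(observation)     # agent picks a function and arguments
        try:
            observation = registry.call(name, *args)  # environment feedback: function output
        except (ToolUnavailableError, KeyError) as err:
            observation = f"ERROR: {err}"             # environment feedback: error message
        if agent.is_done(observation):
            return observation
    return None  # task unsolved within the step budget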
[ "Andrew Wang", "Sophia Hager", "Adi Asija", "Daniel Khashabi", "Nicholas Andrews" ]
https://openreview.net/forum?id=Zk224WPT42
Zk224WPT42
Zk224WPT42
[ "~Andrew_Wang3", "~Sophia_Hager1", "~Adi_Asija1", "~Daniel_Khashabi2", "~Nicholas_Andrews2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/30827c25a31a17f74cbb9a570f0d6f26a95ad034.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "planning", "tool-use" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ wang2025hell, title={Hell or High Water: Evaluating Agentic Recovery from External Failures}, author={Andrew Wang and Sophia Hager and Adi Asija and Daniel Khashabi and Nicholas Andrews}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=Zk224WPT42} }
wang|hell_or_high_water_evaluating_agentic_recovery_from_external_failures
null
null
null
null
null
Do LLMs Understand Your Translations? Evaluating Paragraph-level MT with Question Answering
An extrinsic approach to evaluate long translations, using LLMs to generate and answer reading comprehension questions.
Despite the steady progress in machine translation evaluation, existing automatic metrics struggle to capture how well meaning is preserved beyond sentence boundaries. We posit that reliance on a single intrinsic quality score, trained to mimic human judgments, might be insufficient for evaluating translations of long, complex passages, and a more “pragmatic” approach that assesses how accurately key information is conveyed by a translation in context is needed. We introduce TREQA (Translation Evaluation via Question-Answering), a framework that extrinsically evaluates translation quality by assessing how accurately candidate translations answer reading comprehension questions that target key information in the original source or reference texts. In challenging domains that require long-range understanding, such as literary texts, we show that TREQA is competitive with and, in some cases, outperforms state-of-the-art neural and LLM-based metrics in ranking alternative paragraph-level translations, despite never being explicitly optimized to correlate with human judgments. Furthermore, the generated questions and answers offer interpretability: empirical analysis shows that they effectively target translation errors identified by experts in evaluated datasets.
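A rough sketch of the QA-style evaluation loop the abstract describes, assuming caller-supplied LLM helpers; `generate_questions`, `answer`, and `same_answer` are hypothetical placeholders standing in for prompted models, not the released TREQA implementation:

from typing import Callable, List

def treqa_style_score(
    source: str,
    reference: str,
    candidate: str,
    generate_questions: Callable[[str], List[str]],   # LLM prompt: write questions targeting key information
    answer: Callable[[str, str], str],                 # LLM prompt: answer a question given a passage
    same_answer: Callable[[str, str], bool],           # answer-equivalence check (exact match, LLM judge, ...)
) -> float:
    """Score a candidate translation by how well it supports reading-comprehension QA.

    Questions target key information in the source/reference; the candidate is
    judged by whether it yields the same answers as the reference does.
    """
    questions = generate_questions(reference or source)
    if not questions:
        return 0.0

    correct = 0
    for q in questions:
        gold = answer(q, reference)   # answer grounded in the reference text
        pred = answer(q, candidate)   # answer grounded in the candidate translation
        if same_answer(gold, pred):
            correct += 1
    return correct / len(questions)   # fraction of questions the candidate preserves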
[ "Patrick Fernandes", "Sweta Agrawal", "Emmanouil Zaranis", "Andre Martins", "Graham Neubig" ]
https://openreview.net/forum?id=Zfa9jCYGCz
Zfa9jCYGCz
Zfa9jCYGCz
[ "~Patrick_Fernandes1", "~Sweta_Agrawal1", "~Emmanouil_Zaranis1", "~Andre_Martins1", "~Graham_Neubig1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/e337466478c0587324eac3059bdd874f58d81acd.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "llm-based metric", "machine translation", "evaluation", "question generation", "question answering" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ fernandes2025do, title={Do {LLM}s Understand Your Translations? Evaluating Paragraph-level {MT} with Question Answering}, author={Patrick Fernandes and Sweta Agrawal and Emmanouil Zaranis and Andre Martins and Graham Neubig}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=Zfa9jCYGCz} }
fernandes|do_llms_understand_your_translations_evaluating_paragraphlevel_mt_with_question_answering
null
null
null
null
null