title: stringlengths 21–128
content_TLDR: stringlengths 40–250
abstract: stringlengths 613–2.09k
authors: listlengths 1–42
openreview_url: stringlengths 42–42
id: stringlengths 10–10
forum: stringlengths 10–10
authorids: listlengths 1–42
venue: dict
venueid: dict
pdf_url: dict
invitation: stringclasses (1 value)
group: stringclasses (1 value)
venue_name: stringclasses (1 value)
year: int64 2.03k–2.03k
conference: stringclasses (1 value)
content_keywords: listlengths 1–16
content_code_of_ethics: stringclasses (1 value)
content_author_guide: stringclasses (1 value)
content_flagged_for_ethics_review: bool (1 class)
content_ethics_comments: stringclasses (11 values)
content__bibtex: stringlengths 246–1.01k
content_paperhash: stringlengths 29–134
content_supplementary_material: stringclasses (73 values)
content_award_nomination: bool (1 class)
content_reciprocal_reviewing_status: stringclasses (1 value)
content_reciprocal_reviewing_author: stringclasses (4 values)
content_reciprocal_reviewing_exemption_reason: dict
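The fields above describe the per-paper records that follow (title, TLDR, abstract, authors, OpenReview IDs, BibTeX, and review metadata). Below is a minimal sketch of how records with this schema could be loaded and filtered using the Hugging Face `datasets` library; the repository ID `colm-2025-papers` is a placeholder assumption, not the actual dataset name.

```python
# Minimal sketch, assuming these records are published as a Hugging Face dataset.
# The repository ID below is a placeholder, not the actual dataset name.
from datasets import load_dataset

ds = load_dataset("colm-2025-papers", split="train")  # hypothetical repo ID

# Each row exposes the fields listed above, e.g. title, abstract, authors,
# openreview_url, content_keywords, and the BibTeX entry.
rag_papers = ds.filter(
    lambda row: any("Retrieval" in kw for kw in (row["content_keywords"] or []))
)

for row in rag_papers:
    print(row["title"], row["openreview_url"])
```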
E$^2$-RAG: Towards Editable Efficient RAG by Editing Compressed KV Caches
E$^2$-RAG efficiently edits compressed KV caches for knowledge updates, achieving 40x faster editing and 3x faster generation than standard RAG with minimal performance loss.
Retrieval-Augmented Generation (RAG) demonstrates remarkable capabilities for enhancing the performance of Large Language Models (LLMs) by integrating external knowledge. Standard RAG introduces additional computations due to the extra retrieved context. To improve efficiency, recent studies propose compressing chunk tokens into compact forms, such as key-value (KV) caches. However, maintaining these compressed KV caches in an updated state presents a significant challenge, undermining the primary goal of RAG: acquiring up-to-date knowledge. In this work, we propose **E$^{2}$-RAG**, the first **E**ditable **E**fficient-**RAG** method designed to efficiently edit compressed KV caches for knowledge updates. E$^2$-RAG features an encoder-decoder architecture similar to efficient RAG methods, along with an additional editor. The encoder-decoder compresses chunk tokens into KV caches and generates responses. The editor takes old KV caches and new knowledge tokens as inputs, enabling efficient updates to the KV caches. To formalize knowledge updating, we define three operations: INSERT, DELETE, and UPDATE. We create three sets of datasets for each operation. Through extensive experiments, E$^2$-RAG achieves nearly **40x faster** editing compared to recomputing KV caches while maintaining **3x faster** generation efficiency than standard RAG, with a performance downgrade of 1%-5%. We also conduct various ablation studies, including multi-turn editing, multi-chunk capability, and knowledge conflicts, to explore the capabilities of E$^2$-RAG.
[ "Tongxu Luo", "Wenyu Du", "HanWen Hao", "Min Zhang", "Hao Yang", "Benyou Wang" ]
https://openreview.net/forum?id=ZZ4tcxJvux
ZZ4tcxJvux
ZZ4tcxJvux
[ "~Tongxu_Luo1", "~Wenyu_Du1", "~HanWen_Hao1", "~Min_Zhang10", "~Hao_Yang7", "~Benyou_Wang2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/90a703fd95fe0a78eba30c62f09d41700971be6f.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Retrieval Augmented Generation" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ luo2025erag, title={E\${\textasciicircum}2\$-{RAG}: Towards Editable Efficient {RAG} by Editing Compressed {KV} Caches}, author={Tongxu Luo and Wenyu Du and HanWen Hao and Min Zhang and Hao Yang and Benyou Wang}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=ZZ4tcxJvux} }
luo|e^2rag_towards_editable_efficient_rag_by_editing_compressed_kv_caches
/attachment/cb6b8a5009cad693d953c6721496aa806e2f5e1b.zip
null
null
null
null
Imagine All The Relevance: Scenario-Profiled Indexing with Knowledge Expansion for Dense Retrieval
We propose SPIKE, a dense retrieval framework that decomposes documents into scenarios.
Existing dense retrieval models struggle with reasoning-intensive retrieval tasks as they fail to capture implicit relevance that requires reasoning beyond surface-level semantic information. To address these challenges, we propose Scenario-Profiled Indexing with Knowledge Expansion (SPIKE), a dense retrieval framework that explicitly indexes implicit relevance by decomposing documents into scenario-based retrieval units. SPIKE organizes documents into scenarios, which encapsulate the reasoning process necessary to uncover implicit relationships between hypothetical information needs and document content. SPIKE constructs a scenario-augmented dataset using a powerful teacher large language model (LLM), then distills these reasoning capabilities into a smaller, efficient scenario generator. During inference, SPIKE incorporates scenario-level relevance alongside document-level relevance, enabling reasoning-aware retrieval. Extensive experiments demonstrate that SPIKE consistently enhances retrieval performance across various query types and dense retrievers. It also enhances the retrieval experience for users through scenarios and offers valuable contextual information for LLMs in retrieval-augmented generation (RAG).
[ "Sangam Lee", "Ryang Heo", "SeongKu Kang", "Dongha Lee" ]
https://openreview.net/forum?id=ZYVAtUUNbH
ZYVAtUUNbH
ZYVAtUUNbH
[ "~Sangam_Lee1", "~Ryang_Heo1", "~SeongKu_Kang1", "~Dongha_Lee1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/0efd6d5ae8cf275cc6d655a40b5711fe1a00b98f.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Information Retrieval", "Reasoning Intensive Retrieval", "Dense Retrieval", "Reasoning", "LLM" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ lee2025imagine, title={Imagine All The Relevance: Scenario-Profiled Indexing with Knowledge Expansion for Dense Retrieval}, author={Sangam Lee and Ryang Heo and SeongKu Kang and Dongha Lee}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=ZYVAtUUNbH} }
lee|imagine_all_the_relevance_scenarioprofiled_indexing_with_knowledge_expansion_for_dense_retrieval
null
null
null
null
null
In-Context Occam’s Razor: How Transformers Prefer Simpler Hypotheses on the Fly
Transformers exhibit Bayesian Occam's razor in-context
In-context learning (ICL) enables transformers to adapt to new tasks through contextual examples without parameter updates. While existing research has typically studied ICL in fixed-complexity setups, real-world language models encounter tasks of diverse complexity levels. This paper investigates how transformers navigate hierarchical task structures where higher-complexity categories can perfectly represent any pattern generated by simpler ones. We design testbeds based on Markov chains and linear regression that reveal transformers not only identify the correct complexity level for each task but also accurately infer the corresponding parameters—even when the in-context examples fit multiple complexity hypotheses. Notably, when presented with data generated by simpler processes, transformers consistently favor the least complex sufficient explanation. We theoretically explain this behavior through a Bayesian framework, demonstrating that transformers effectively implement an in-context Bayesian Occam's razor by balancing model fit against complexity penalties.
[ "Puneesh Deora", "Bhavya Vasudeva", "Tina Behnia", "Christos Thrampoulidis" ]
https://openreview.net/forum?id=ZSMnX3LBva
ZSMnX3LBva
ZSMnX3LBva
[ "~Puneesh_Deora1", "~Bhavya_Vasudeva1", "~Tina_Behnia1", "~Christos_Thrampoulidis1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/70c4cab2eb53decf5c0d430077a51eb24ac285af.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "In-context learning", "transformers", "linear regression", "Markov chains", "Bayesian Occam's razor" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ deora2025incontext, title={In-Context Occam{\textquoteright}s Razor: How Transformers Prefer Simpler Hypotheses on the Fly}, author={Puneesh Deora and Bhavya Vasudeva and Tina Behnia and Christos Thrampoulidis}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=ZSMnX3LBva} }
deora|incontext_occams_razor_how_transformers_prefer_simpler_hypotheses_on_the_fly
null
null
null
null
null
ADAPT: Actively Discovering and Adapting to Preferences for any Task
A benchmark, ADAPT, and a training mechanism, ReflectionDPO, to create and evaluate a grounded task planner that can actively elicit user preferences by asking questions, and adapt execution accordingly.
Assistive agents should be able to perform under-specified long-horizon tasks while respecting user preferences. We introduce Actively Discovering and Adapting to Preferences for any Task (ADAPT) – a benchmark designed to evaluate agents’ ability to adhere to user preferences across various household tasks through active questioning. Next, we propose Reflection-DPO, a novel training approach for adapting large language models (LLMs) to the task of active questioning. Reflection-DPO finetunes a ‘student’ LLM to follow the actions of a privileged ‘teacher’ LLM, and optionally ask a question to gather necessary information to better predict the teacher action. We find that prior approaches that use state-of-the-art LLMs fail to sufficiently follow user preferences in ADAPT due to insufficient questioning and poor adherence to elicited preferences. In contrast, Reflection-DPO achieves a higher rate of satisfying user preferences, outperforming a zero-shot chain-of-thought baseline by 6.1% on unseen users.
[ "Maithili Patel", "Xavier Puig", "Ruta Desai", "Roozbeh Mottaghi", "Sonia Chernova", "Joanne Truong", "Akshara Rai" ]
https://openreview.net/forum?id=Z8vtD1egtI
Z8vtD1egtI
Z8vtD1egtI
[ "~Maithili_Patel1", "~Xavier_Puig1", "~Ruta_Desai1", "~Roozbeh_Mottaghi1", "~Sonia_Chernova2", "~Joanne_Truong1", "~Akshara_Rai1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/7dd5515b2423b0c9dcfd62b55d1506ad08b10216.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Task Oriented Agents", "Interactive Learning", "Active Dialog", "Personalization", "Task Planning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ patel2025adapt, title={{ADAPT}: Actively Discovering and Adapting to Preferences for any Task}, author={Maithili Patel and Xavier Puig and Ruta Desai and Roozbeh Mottaghi and Sonia Chernova and Joanne Truong and Akshara Rai}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=Z8vtD1egtI} }
patel|adapt_actively_discovering_and_adapting_to_preferences_for_any_task
null
null
null
null
null
Multi-Token Attention
We present Multi-Token Attention: new method that allows LLMs to condition their attention weights on multiple query and key vectors simultaneously.
Soft attention is a critical mechanism powering LLMs to locate relevant parts within a given context. However, individual attention weights are determined by the similarity of only a single query and key token vector. This “single token attention” bottlenecks the amount of information used in distinguishing a relevant part from the rest of the context. To address this issue, we propose a new attention method, Multi-Token Attention (MTA), which allows LLMs to condition their attention weights on multiple query and key vectors simultaneously. This is achieved by applying convolution operations over queries, keys and heads, allowing nearby queries and keys to affect each other’s attention weights for more precise attention. As a result, our method can locate relevant context using richer, more nuanced information that can exceed a single vector’s capacity. Through extensive evaluations, we demonstrate that MTA achieves enhanced performance on a range of popular benchmarks. Notably, it outperforms Transformer baseline models on standard language modeling tasks, and on tasks that require searching for information within long contexts, where our method’s ability to leverage richer information proves particularly beneficial.
[ "Olga Golovneva", "Tianlu Wang", "Jason E Weston", "Sainbayar Sukhbaatar" ]
https://openreview.net/forum?id=Z3L35tQTEg
Z3L35tQTEg
Z3L35tQTEg
[ "~Olga_Golovneva1", "~Tianlu_Wang1", "~Jason_E_Weston1", "~Sainbayar_Sukhbaatar1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/9532870a0ce87caae8564ef996ce940c575cebee.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Deep learning architectures", "Large Language Model (LLM)", "Transformer", "Attention" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ golovneva2025multitoken, title={Multi-Token Attention}, author={Olga Golovneva and Tianlu Wang and Jason E Weston and Sainbayar Sukhbaatar}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=Z3L35tQTEg} }
golovneva|multitoken_attention
null
null
null
null
null
FormaRL: Enhancing Autoformalization with no Labeled Data
We proposed a new training method to enhance autoformalization via reinforcement learning, and curated a dataset of undergraduate-level proof problems.
Autoformalization is one of the central tasks in formal verification, yet its advancement remains hindered by data scarcity and the absence of efficient methods. In this work we propose **FormaRL**, a simple yet efficient reinforcement learning framework for autoformalization that requires only a small amount of unlabeled data. FormaRL integrates a syntax check from the Lean compiler and a consistency check from a large language model to calculate the reward, and adopts the GRPO algorithm to update the formalizer. We also curated a proof problem dataset from undergraduate-level math materials, named **uproof**, in the hope of facilitating the exploration of autoformalization and theorem proving in advanced math. Experiments show that FormaRL can increase the pass@1 autoformalization accuracy of Qwen2.5-Coder-7B-Instruct by 4 $\sim$ 6x (4.04\% $\to$ 26.15\% on ProofNet and 2.4\% $\to$ 9.6\% on uproof) with merely 859 unlabeled examples. On uproof, our method also achieves a strong improvement in out-of-distribution performance over existing open-source state-of-the-art autoformalizers on both pass@1 accuracy (6.2\% $\to$ 9.6\%) and pass@16 accuracy (24.4\% $\to$ 33.6\%). Training code for FormaRL is open-sourced at [https://github.com/THUNLP-MT/FormaRL](https://github.com/THUNLP-MT/FormaRL).
[ "Yanxing Huang", "Xinling Jin", "Sijie Liang", "Fuwen Luo", "Peng Li", "Yang Liu" ]
https://openreview.net/forum?id=Z2El1U94bq
Z2El1U94bq
Z2El1U94bq
[ "~Yanxing_Huang1", "~Xinling_Jin1", "~Sijie_Liang1", "~Fuwen_Luo1", "~Peng_Li2", "~Yang_Liu19" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/085dbdfbdd89d2da4ba20ea90affebf80b429c0e.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Large Language Model", "Formal Verification", "Autoformalization", "Reinforcement Learning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ huang2025formarl, title={Forma{RL}: Enhancing Autoformalization with no Labeled Data}, author={Yanxing Huang and Xinling Jin and Sijie Liang and Fuwen Luo and Peng Li and Yang Liu}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=Z2El1U94bq} }
huang|formarl_enhancing_autoformalization_with_no_labeled_data
/attachment/e15a03408291a966b2ffe7431f1e2b9c894481f7.zip
null
null
null
null
Learning Adaptive Parallel Reasoning with Language Models
We propose Adaptive Parallel Reasoning (APR), a reinforcement-learning-optimized inference framework allowing language models to dynamically balance serial and parallel computations, significantly enhancing reasoning accuracy and efficiency.
Scaling inference-time computation has substantially improved the reasoning capabilities of language models. However, existing methods have significant limitations: serialized chain-of-thought approaches generate overly long outputs, leading to increased latency and exhausted context windows, while parallel methods such as self-consistency suffer from insufficient coordination, resulting in redundant computations and limited performance gains. To address these shortcomings, we propose Adaptive Parallel Reasoning (APR), a novel reasoning framework that enables language models to orchestrate both serialized and parallel computations end-to-end. APR generalizes existing reasoning methods by enabling adaptive multi-threaded inference using spawn() and join() operations. A key innovation is our end-to-end reinforcement learning strategy, optimizing both parent and child inference threads to enhance task success rate without requiring predefined reasoning structures. Experiments on the Countdown reasoning task demonstrate significant benefits of APR: (1) higher performance within the same context window (83.4% vs. 60.0% at 4k context); (2) superior scalability with increased computation (80.1% vs. 66.6% at 20k total tokens); (3) improved accuracy at equivalent latency (75.2% vs. 57.3% at approximately 5,000ms). APR represents a step towards enabling language models to autonomously optimize their reasoning processes through adaptive allocation of computation.
[ "Jiayi Pan", "Xiuyu Li", "Long Lian", "Charlie Victor Snell", "Yifei Zhou", "Adam Yala", "Trevor Darrell", "Kurt Keutzer", "Alane Suhr" ]
https://openreview.net/forum?id=YgwQ7sXPXU
YgwQ7sXPXU
YgwQ7sXPXU
[ "~Jiayi_Pan1", "~Xiuyu_Li1", "~Long_Lian1", "~Charlie_Victor_Snell1", "~Yifei_Zhou1", "~Adam_Yala1", "~Trevor_Darrell2", "~Kurt_Keutzer1", "~Alane_Suhr1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/4326af2e36216b5bf3712412640e6ce9524e8496.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "large language models", "reasoning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ pan2025learning, title={Learning Adaptive Parallel Reasoning with Language Models}, author={Jiayi Pan and Xiuyu Li and Long Lian and Charlie Victor Snell and Yifei Zhou and Adam Yala and Trevor Darrell and Kurt Keutzer and Alane Suhr}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=YgwQ7sXPXU} }
pan|learning_adaptive_parallel_reasoning_with_language_models
null
null
null
null
null
Scoring Verifiers: Evaluating Synthetic Verification for Code and Reasoning
This paper benchmarks synthetic verification methods for code correctness, showing reasoning models improve test case generation and verification accuracy.
Synthetic verification techniques such as generating test cases and reward modelling are common ways to enhance the coding capabilities of large language models (LLMs) beyond predefined tests. Additionally, code verification has recently found great success as a critical component in improving the reasoning capability of LLMs via reinforcement learning. In this paper, we propose an approach which can transform existing coding benchmarks into scoring and ranking datasets to evaluate the effectiveness of synthetic verifiers. We also propose multiple metrics to measure different aspects of the synthetic verifiers with the proposed benchmarks. By employing the proposed approach, we release four new benchmarks (HE-R, HE-R+, MBPP-R, and MBPP-R+) and analyze synthetic verification methods with standard, reasoning-based, and reward-based LLMs. Our experiments show that reasoning can significantly improve test case generation and that scaling the number of test cases enhances the verification accuracy.
[ "Aleksander Ficek", "Somshubra Majumdar", "Vahid Noroozi", "Boris Ginsburg" ]
https://openreview.net/forum?id=YLze3CETYP
YLze3CETYP
YLze3CETYP
[ "~Aleksander_Ficek1", "~Somshubra_Majumdar1", "~Vahid_Noroozi2", "~Boris_Ginsburg1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/5e309c711da0e63eff0b0c69db1b1ff080e0f216.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "code generation and understanding", "benchmarking", "NLP datasets", "evaluation methodologies", "automatic evaluation of datasets", "evaluation", "metrics", "reproducibility", "statistical testing for evaluation" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ ficek2025scoring, title={Scoring Verifiers: Evaluating Synthetic Verification for Code and Reasoning}, author={Aleksander Ficek and Somshubra Majumdar and Vahid Noroozi and Boris Ginsburg}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=YLze3CETYP} }
ficek|scoring_verifiers_evaluating_synthetic_verification_for_code_and_reasoning
null
null
null
null
null
SpecDec++: Boosting Speculative Decoding via Adaptive Candidate Lengths
Based on MDP theory, we use a trained acceptance prediction head to determine when to stop the current proposal round for speculative decoding.
Speculative decoding reduces the inference latency of a target large language model via utilizing a smaller and faster draft model. Its performance depends on a hyperparameter K -- the candidate length, i.e., the number of candidate tokens for the target model to verify in each round. However, previous methods often use simple heuristics to choose K, which may result in sub-optimal performance. We study the choice of the candidate length K and formulate it as a Markov Decision Process. We theoretically show that the optimal policy of this Markov decision process takes the form of a threshold policy, i.e., the current speculation should stop and be verified when the probability of getting a rejection exceeds a threshold value. Motivated by this theory, we propose SpecDec++, an enhanced version of speculative decoding that adaptively determines the candidate length on the fly. We augment the draft model with a trained acceptance prediction head to predict the conditional acceptance probability of the candidate tokens. SpecDec++ will stop the current speculation when the predicted probability that at least one token gets rejected exceeds a threshold. We implement SpecDec++ and apply it to the llama-2-chat 7B & 70B model pair. Our adaptive method achieves a 2.04x speedup on the Alpaca dataset (7.2% improvement over the baseline speculative decoding). On the GSM8K and HumanEval datasets, our method achieves a 2.26x speedup (9.4% improvement) and 2.23x speedup (11.1% improvement), respectively. The code of this paper is available at https://github.com/Kaffaljidhmah2/SpecDec_pp.
[ "Kaixuan Huang", "Xudong Guo", "Mengdi Wang" ]
https://openreview.net/forum?id=Y131N9fUbU
Y131N9fUbU
Y131N9fUbU
[ "~Kaixuan_Huang1", "~Xudong_Guo1", "~Mengdi_Wang1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/20e04fa6a4fd4676e6c928f26603189768563b4a.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "speculative decoding", "MDP theory", "large language models" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ huang2025specdec, title={SpecDec++: Boosting Speculative Decoding via Adaptive Candidate Lengths}, author={Kaixuan Huang and Xudong Guo and Mengdi Wang}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=Y131N9fUbU} }
huang|specdec_boosting_speculative_decoding_via_adaptive_candidate_lengths
null
null
null
null
null
Self-Steering Language Models
We introduce a new approach to structuring test-time computation that uses LMs to plan and execute task-specific search procedures in a probabilistic programming language.
While test-time reasoning enables language models (LMs) to tackle complex tasks, searching or planning in natural language can be slow, costly, and error-prone. But even when LMs struggle to emulate the precise reasoning steps needed to solve a problem, they often excel at describing its *abstract structure*—both how to verify solutions and *how to search* for them. This paper introduces DisCIPL, a method for “self-steering” LMs where a *Planner model* generates a task-specific *inference program* that is executed by a population of *Follower models*. Our approach equips LMs with the ability to write recursive search procedures that guide LM inference, enabling new forms of verifiable and efficient reasoning. When instantiated with a small Follower (e.g., Llama-3.2-1B or Qwen3-1.7B), DisCIPL matches (and sometimes outperforms) much larger models, including GPT-4o and o1, on challenging constrained generation tasks. Our work opens up a design space of highly-parallelized Monte Carlo inference strategies that outperform standard best-of-N sampling, require no finetuning, and can be implemented automatically by existing LMs.
[ "Gabriel Grand", "Joshua B. Tenenbaum", "Vikash Mansinghka", "Alexander K. Lew", "Jacob Andreas" ]
https://openreview.net/forum?id=XvCBtm5PgF
XvCBtm5PgF
XvCBtm5PgF
[ "~Gabriel_Grand1", "~Joshua_B._Tenenbaum1", "~Vikash_Mansinghka1", "~Alexander_K._Lew1", "~Jacob_Andreas1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/a0b7c778c71e6dfe14013820aa6bfe0ff49bab8a.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Probabilistic inference", "sequential Monte Carlo", "code generation", "test-time search", "constrained generation", "reasoning", "language models" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ grand2025selfsteering, title={Self-Steering Language Models}, author={Gabriel Grand and Joshua B. Tenenbaum and Vikash Mansinghka and Alexander K. Lew and Jacob Andreas}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=XvCBtm5PgF} }
grand|selfsteering_language_models
null
null
null
null
null
Evaluating and Designing Sparse Autoencoders by Approximating Quasi-Orthogonality
The paper introduces Approximate Feature Activation (AFA) and the $\varepsilon$LBO metric to address the lack of principled hyperparameter selection in top-k SAEs and to evaluate SAEs using quasi-orthogonality.
Sparse autoencoders (SAEs) are widely used in mechanistic interpretability research for large language models; however, the state-of-the-art method of using $k$-sparse autoencoders lacks a theoretical grounding for selecting the hyperparameter $k$ that represents the number of nonzero activations, often denoted by $\ell_0$. In this paper, we reveal a theoretical link that the $\ell_2$-norm of the sparse feature vector can be approximated with the $\ell_2$-norm of the dense vector with a closed-form error, which allows sparse autoencoders to be trained without the need to manually determine $\ell_0$. Specifically, we validate two applications of our theoretical findings. First, we introduce a new methodology that can assess the feature activations of pre-trained SAEs by computing the theoretically expected value from the input embedding, which has been overlooked by existing SAE evaluation methods and loss functions. Second, we introduce a novel activation function, top-AFA, which builds upon our formulation of approximate feature activation (AFA). This function enables top-$k$ style activation without requiring a constant hyperparameter $k$ to be tuned, dynamically determining the number of activated features for each input. By training SAEs on three intermediate layers to reconstruct GPT2 hidden embeddings for over 80 million tokens from the OpenWebText dataset, we demonstrate the empirical merits of this approach and compare it with current state-of-the-art $k$-sparse autoencoders. Our code is available at: https://github.com/SewoongLee/top-afa-sae.
[ "Sewoong Lee", "Adam Davies", "Marc E. Canby", "Julia Hockenmaier" ]
https://openreview.net/forum?id=XhdNFeMclS
XhdNFeMclS
XhdNFeMclS
[ "~Sewoong_Lee2", "~Adam_Davies2", "~Marc_E._Canby1", "~Julia_Hockenmaier1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/217c61848f7563dea844e2801b43491e39ef9338.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "mechanistic interpretability", "sparse autoencoder" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ lee2025evaluating, title={Evaluating and Designing Sparse Autoencoders by Approximating Quasi-Orthogonality}, author={Sewoong Lee and Adam Davies and Marc E. Canby and Julia Hockenmaier}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=XhdNFeMclS} }
lee|evaluating_and_designing_sparse_autoencoders_by_approximating_quasiorthogonality
null
null
null
null
null
NoveltyBench: Evaluating Creativity and Diversity in Language Models
We introduce a benchmark for creativity and diversity in language models.
Language models have demonstrated remarkable capabilities on standard benchmarks, yet they increasingly suffer from *mode collapse*, the inability to generate diverse and novel outputs. Our work introduces **NoveltyBench**, a benchmark specifically designed to evaluate the ability of language models to produce multiple distinct and high-quality outputs. NoveltyBench utilizes prompts curated to elicit diverse answers and filtered real-world user queries. Evaluating 20 leading language models, we find that current state-of-the-art systems generate significantly less diversity than human writers. Notably, larger models within a family often exhibit less diversity than their smaller counterparts, challenging the notion that capability on standard benchmarks translates directly to generative utility. While prompting strategies like in-context regeneration can elicit diversity, our findings highlight a fundamental lack of distributional diversity in current models, reducing their utility for users seeking varied responses and suggesting the need for new training and evaluation paradigms that prioritize creativity alongside quality.
[ "Yiming Zhang", "Harshita Diddee", "Susan Holm", "Hanchen Liu", "Xinyue Liu", "Vinay Samuel", "Barry Wang", "Daphne Ippolito" ]
https://openreview.net/forum?id=XZm1ekzERf
XZm1ekzERf
XZm1ekzERf
[ "~Yiming_Zhang5", "~Harshita_Diddee1", "~Susan_Holm1", "~Hanchen_Liu1", "~Xinyue_Liu11", "~Vinay_Samuel1", "~Barry_Wang1", "~Daphne_Ippolito1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/968f2cdaefa4618b2e3fbab147422f76f64b7718.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "generation", "diversity", "evaluation" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ zhang2025noveltybench, title={NoveltyBench: Evaluating Creativity and Diversity in Language Models}, author={Yiming Zhang and Harshita Diddee and Susan Holm and Hanchen Liu and Xinyue Liu and Vinay Samuel and Barry Wang and Daphne Ippolito}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=XZm1ekzERf} }
zhang|noveltybench_evaluating_creativity_and_diversity_in_language_models
null
null
null
null
null
To Backtrack or Not to Backtrack: When Sequential Search Limits Model Reasoning
A comparative study of backtracking versus parallel search in LLMs, revealing nuanced trade-offs: backtracking can harm reasoning due to constrained search strategies and verbosity, but is particularly suitable for RL.
Recent advancements in large language models (LLMs) have significantly improved their reasoning abilities, particularly through techniques involving search and backtracking. Backtracking naturally scales test-time compute by enabling sequential, linearized exploration via long chain-of-thought (CoT) generation. However, this is not the only strategy for scaling test-time compute: parallel sampling with best-of-n selection provides an alternative that generates diverse solutions simultaneously. Despite the growing adoption of sequential search, its advantages over parallel sampling—especially under a fixed compute budget—remain poorly understood. In this paper, we systematically compare these two approaches on two challenging reasoning tasks: CountDown and Sudoku. Surprisingly, we find that sequential search underperforms parallel sampling on CountDown but outperforms it on Sudoku, suggesting that backtracking is not universally beneficial. We identify two factors that can cause backtracking to degrade performance: (1) training on fixed search traces can lock models into suboptimal strategies, and (2) explicit CoT supervision can discourage 'implicit' (non-verbalized) reasoning. Extending our analysis to reinforcement learning (RL), we show that models with backtracking capabilities benefit significantly from RL fine-tuning, while models without backtracking see limited, mixed gains. Together, these findings challenge the assumption that backtracking universally enhances LLM reasoning, instead revealing a complex interaction between task structure, training data, model scale, and learning paradigm.
[ "Tian Qin", "David Alvarez-Melis", "Samy Jelassi", "Eran Malach" ]
https://openreview.net/forum?id=XNQHMYsUHf
XNQHMYsUHf
XNQHMYsUHf
[ "~Tian_Qin3", "~David_Alvarez-Melis1", "~Samy_Jelassi1", "~Eran_Malach3" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/537fcbe7321564f1b58941365ae11bd87d0b08a4.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "LLM Reasoning", "Backtracking", "Test-time Computate" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ qin2025to, title={To Backtrack or Not to Backtrack: When Sequential Search Limits Model Reasoning}, author={Tian Qin and David Alvarez-Melis and Samy Jelassi and Eran Malach}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=XNQHMYsUHf} }
qin|to_backtrack_or_not_to_backtrack_when_sequential_search_limits_model_reasoning
/attachment/0a403995ea0dfbbe8a68a5305859fe4ebe0618c1.zip
null
null
null
null
DualEdit: Dual Editing for Knowledge Updating in Vision-Language Models
We explore the importance of image and text modalities and propose a novel dual editing method—DualEdit.
Model editing aims to efficiently update a pre-trained model’s knowledge without the need for time-consuming full retraining. While existing pioneering editing methods achieve promising results, they primarily focus on editing single-modal language models (LLMs). However, for vision-language models (VLMs), which involve multiple modalities, the role and impact of each modality on editing performance remain largely unexplored. To address this gap, we explore the impact of textual and visual modalities on model editing and find that: (1) textual and visual representations reach peak sensitivity at different layers, reflecting their varying importance; and (2) editing both modalities can efficiently update knowledge, but this comes at the cost of compromising the model’s original capabilities. Based on our findings, we propose DualEdit, an editor that modifies both textual and visual modalities at their respective key layers. Additionally, we introduce a gating module within the more sensitive textual modality, allowing DualEdit to efficiently update new knowledge while preserving the model’s original information. We evaluate DualEdit across multiple VLM backbones and benchmark datasets, demonstrating its superiority over state-of-the-art VLM editing baselines as well as adapted LLM editing methods on different evaluation metrics. Codes are available at https://github.com/zhiyiscs/DualEdit.
[ "Zhiyi Shi", "Binjie Wang", "Chongjie Si", "Yichen Wu", "Junsik Kim", "Hanspeter Pfister" ]
https://openreview.net/forum?id=X5vFauyVWr
X5vFauyVWr
X5vFauyVWr
[ "~Zhiyi_Shi1", "~Binjie_Wang1", "~Chongjie_Si1", "~Yichen_Wu2", "~Junsik_Kim1", "~Hanspeter_Pfister1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/f7d988bae200fba530854b0f4ae77d12e77b7ca1.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Model Editing", "Multimodal Learning", "VLM" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ shi2025dualedit, title={DualEdit: Dual Editing for Knowledge Updating in Vision-Language Models}, author={Zhiyi Shi and Binjie Wang and Chongjie Si and Yichen Wu and Junsik Kim and Hanspeter Pfister}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=X5vFauyVWr} }
shi|dualedit_dual_editing_for_knowledge_updating_in_visionlanguage_models
null
null
null
null
null
Agents Are All You Need for LLM Unlearning
LLM agents based unlearning beats all the existing unlearning methods
Information removal or suppression in large language models (LLMs) is a desired functionality, useful in AI regulation, legal compliance, safety, and privacy. LLM unlearning methods aim to remove information on demand from LLMs. Current LLM unlearning methods struggle to balance unlearning efficacy and utility due to the competing nature of these objectives. Keeping the unlearning process computationally feasible without assuming access to the model weights is an overlooked area. In this work, we show that \textit{agents might be all we need for effective and practical LLM unlearning}. We present the first agentic LLM unlearning (\texttt{ALU}) method, a multi-agent, retrain-free, model-agnostic approach to LLM unlearning that achieves effective unlearning while preserving utility. Our \texttt{ALU} framework unlearns by involving multiple LLM agents, each designed for a specific step in the unlearning process, without the need to update model weights for any of the agents in the framework. Users can easily request any set of unlearning instances in any sequence, and \texttt{ALU} seamlessly adapts in real time. This is facilitated without requiring any changes in the underlying LLM model. Through extensive experiments on established benchmarks (TOFU, WMDP, WPU) and jailbreaking techniques (many shot, target masking, other languages), we demonstrate that \texttt{ALU} consistently stands out as the most robust LLM unlearning framework among current state-of-the-art methods while incurring a time cost that remains effectively constant regardless of the number of unlearning targets. We further highlight \texttt{ALU}'s superior performance compared to existing methods when evaluated at scale. Specifically, \texttt{ALU} is assessed on up to 1000 unlearning targets, exceeding the evaluation scope of all previously proposed LLM unlearning methods.
[ "Debdeep Sanyal", "Murari Mandal" ]
https://openreview.net/forum?id=X39dK0SX9W
X39dK0SX9W
X39dK0SX9W
[ "~Debdeep_Sanyal1", "~Murari_Mandal1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/16a114dc6c9fda7e1b1d1e1692aec55da696fb3a.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "LLM Agents", "unlearning", "Safety in AI" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ sanyal2025agents, title={Agents Are All You Need for {LLM} Unlearning}, author={Debdeep Sanyal and Murari Mandal}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=X39dK0SX9W} }
sanyal|agents_are_all_you_need_for_llm_unlearning
null
null
null
null
null
Local Mixtures of Experts: Essentially Free Test-Time Training via Model Merging
We propose Test-Time Model Merging (TTMM) which approaches the performance of Test-Time Training (TTT) without almost any test-time overhead.
Mixture-of-experts (MoE) models are a promising approach to increasing model capacity without increasing inference cost, and are core components of many state-of-the-art language models. However, current MoE models typically use only a few experts due to prohibitive training and inference costs. We propose _**T**est-**T**ime **M**odel **M**erging_ (TTMM) which scales the MoE paradigm to orders of magnitude more experts and uses model merging to avoid almost any test-time overhead. We show that TTMM is an approximation of test-time training (TTT), which fine-tunes an expert model for each prediction task, i.e., prompt. TTT has recently been shown to significantly improve language models, but is computationally expensive. We find that the performance of TTMM improves with more experts and approaches the performance of TTT. Moreover, we find that with a 1B parameter base model, _TTMM is more than $100\times$ faster than TTT_ at test-time by amortizing the cost of TTT at train-time. Thus, TTMM offers a promising cost-effective approach to scale test-time training.
[ "Ryo Bertolissi", "Jonas Hübotter", "Ido Hakimi", "Andreas Krause" ]
https://openreview.net/forum?id=X2RXpFA6Vh
X2RXpFA6Vh
X2RXpFA6Vh
[ "~Ryo_Bertolissi1", "~Jonas_Hübotter1", "~Ido_Hakimi1", "~Andreas_Krause1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/87f26d3172dfebb33a4927351d920c8aba384a5f.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "test-time training", "model merging", "mixture of experts", "language modeling", "local learning", "transductive learning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ bertolissi2025local, title={Local Mixtures of Experts: Essentially Free Test-Time Training via Model Merging}, author={Ryo Bertolissi and Jonas H{\"u}botter and Ido Hakimi and Andreas Krause}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=X2RXpFA6Vh} }
bertolissi|local_mixtures_of_experts_essentially_free_testtime_training_via_model_merging
/attachment/7454da207edc9f4504d90fe9d3fa15d8e2804df7.zip
null
null
null
null
DFRot: Achieving Outlier-Free and Massive Activation-Free for Rotated LLMs with Refined Rotation
We explained why random Hadamard is superior to randomized orthogonal transforms in the W4A4 quantization process and proposed an optimization method for the rotation matrix.
Rotating the activation and weight matrices to reduce the influence of outliers in large language models (LLMs) has recently attracted significant attention, particularly in the context of model quantization. Prior studies have shown that in low-precision quantization scenarios, such as 4-bit weights and 4-bit activations~(W4A4), randomized Hadamard transforms can achieve significantly higher accuracy than randomized orthogonal transforms. Notably, the reason behind this phenomenon remains unknown. In this paper, we find that these transformations show substantial improvement in eliminating outliers for common tokens and achieve similar quantization error. The primary reason for the accuracy difference lies in the fact that randomized Hadamard transforms can slightly reduce the quantization error for tokens with massive activations, while randomized orthogonal transforms increase it. Due to the extreme rarity of these tokens and their critical impact on model accuracy, we consider this a long-tail optimization problem, and therefore construct a simple yet effective method: a weighted loss function. Additionally, we propose an optimization strategy for the rotation matrix that involves alternating optimization of quantization parameters while employing orthogonal Procrustes transforms to refine the rotation matrix. This makes the distribution of the rotated activation values more conducive to quantization, especially for tokens with massive activations. Our method enhances rotated LLMs by making them both **Outlier-Free** and **Massive Activation-Free**, and is hence dubbed **DFRot**. Extensive experiments demonstrate the effectiveness and efficiency of DFRot. By tuning the rotation matrix using just a single sample, DFRot achieves a perplexity improvement of 0.98 and 0.95 on W4A4KV4 and W4A4KV16, respectively, for LLaMA3-70B, a model known for its quantization challenges. Code is available at https://github.com/JingyangXiang/DFRot.
[ "Jingyang Xiang", "Sai Qian Zhang" ]
https://openreview.net/forum?id=WzGypILLDb
WzGypILLDb
WzGypILLDb
[ "~Jingyang_Xiang2", "~Sai_Qian_Zhang1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/33e97f26a84760262809a6c096cc688671523fef.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "W4A4 quantization", "randomized hadamard transforms", "randomized orthogonal transforms", "outlier", "masssive activation" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ xiang2025dfrot, title={{DFR}ot: Achieving Outlier-Free and Massive Activation-Free for Rotated {LLM}s with Refined Rotation}, author={Jingyang Xiang and Sai Qian Zhang}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=WzGypILLDb} }
xiang|dfrot_achieving_outlierfree_and_massive_activationfree_for_rotated_llms_with_refined_rotation
null
null
null
null
null
Robo-Instruct: Simulator-Augmented Instruction Alignment For Finetuning Code LLMs
We propose a simulator-augmented approach for generating synthetic training data to fine-tune code LLMs on domain-specific robot tasks.
Code LLMs have shown promising results in converting tasks expressed in natural language into programs that can be executed by service robots. We are interested in finetuning small, specialized LLMs for this purpose, but collecting datasets of task-program pairs specific to each robot is time-consuming and expensive. While approaches such as SELF-INSTRUCT and EVOL-INSTRUCT are capable of generating novel tasks given a few examples, they are unable to provide the corresponding programs that correctly abide by physical-world and robot constraints using the provided programming interface. Using a simulator is a natural potential solution for checking such constraints, but building simulation environments that can handle arbitrary tasks and their necessary objects and locations is challenging. To address these challenges, we introduce ROBO-INSTRUCT, which synthesizes task-specific simulation environments on the fly during program execution, by opportunistically inferring entity properties and enforcing corresponding constraints based on how the entities are used in the task program. Additionally, ROBO-INSTRUCT integrates an LLM-aided post-processing procedure to refine instructions for better alignment with robot programs. We demonstrate the effectiveness of ROBO-INSTRUCT across multiple LLMs, showing that our fine-tuned models outperform all baseline methods and even match or surpass the performance of several larger and proprietary models.
[ "Zichao Hu", "Junyi Jessy Li", "Arjun Guha", "Joydeep Biswas" ]
https://openreview.net/forum?id=WnZjdQOWiY
WnZjdQOWiY
WnZjdQOWiY
[ "~Zichao_Hu1", "~Junyi_Jessy_Li2", "~Arjun_Guha3", "~Joydeep_Biswas1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/fd8cec822f3d537ccc371b6cf428f41186476ba4.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Self-Instruct", "Fine-tuning Code LLMs for service robot tasks", "Domain-Specific Program Generation", "Code LLMs for Robotics" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ hu2025roboinstruct, title={Robo-Instruct: Simulator-Augmented Instruction Alignment For Finetuning Code {LLM}s}, author={Zichao Hu and Junyi Jessy Li and Arjun Guha and Joydeep Biswas}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=WnZjdQOWiY} }
hu|roboinstruct_simulatoraugmented_instruction_alignment_for_finetuning_code_llms
null
true
null
null
null
Enhancing LLM Reliability via Explicit Knowledge Boundary Modeling
We present the Explicit Knowledge Boundary Modeling (EKBM) framework, which improves large language models' reliability by integrating fast and slow reasoning systems to minimize hallucinations while ensuring efficiency in error-sensitive tasks.
Large language models (LLMs) are prone to hallucination stemming from misaligned self-awareness, particularly when processing queries exceeding their knowledge boundaries. While existing mitigation strategies employ uncertainty estimation or query rejection mechanisms, they suffer from computational inefficiency and sacrificed helpfulness. To address these issues, we propose the \textit{Explicit Knowledge Boundary Modeling} (EKBM) framework, integrating fast and slow reasoning systems to harmonize reliability and usability. The framework first employs a fast-thinking model to generate confidence-labeled responses, enabling immediate utilization of high-confidence outputs, whereas uncertain predictions trigger a slow refinement model for accuracy improvement. To align model behavior with our proposed objective, we introduce a hybrid training pipeline, enhancing self-awareness without degrading task performance. Evaluations on dialogue state tracking tasks demonstrate that EKBM achieves superior model reliability over uncertainty-based baselines. Further analysis reveals that refinement substantially boosts accuracy while maintaining low computational overhead. The framework establishes a scalable paradigm for deploying reliable LLMs in error-sensitive applications, effectively balancing accuracy and practical utility.
[ "Hang Zheng", "Hongshen Xu", "Yuncong Liu", "Shuai Fan", "Lu Chen", "Pascale Fung", "Kai Yu" ]
https://openreview.net/forum?id=WLgfeRhuA0
WLgfeRhuA0
WLgfeRhuA0
[ "~Hang_Zheng3", "~Hongshen_Xu1", "~Yuncong_Liu1", "~Shuai_Fan1", "~Lu_Chen3", "~Pascale_Fung1", "~Kai_Yu3" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/a56440afc6933bfb0876b800c75ca96c0c6b5c6a.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "reliability", "knowledge boundary", "self-awareness" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ zheng2025enhancing, title={Enhancing {LLM} Reliability via Explicit Knowledge Boundary Modeling}, author={Hang Zheng and Hongshen Xu and Yuncong Liu and Shuai Fan and Lu Chen and Pascale Fung and Kai Yu}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=WLgfeRhuA0} }
zheng|enhancing_llm_reliability_via_explicit_knowledge_boundary_modeling
null
null
null
null
null
LeakAgent: RL-based Red-teaming Agent for LLM Privacy Leakage
LeakAgent is a new black-box red-teaming framework that automates the generation of adversarial prompts with RL to exploit privacy in LLMs.
Recent studies have discovered that large language models (LLMs) may be ``fooled'' into outputting private information, including training data, system prompts, and personally identifiable information, under carefully crafted adversarial prompts. Existing red-teaming approaches for privacy leakage either rely on manual efforts or focus solely on system prompt extraction, making them ineffective against severe risks of training data leakage. We propose LeakAgent, a novel black-box red-teaming framework for LLM privacy leakage. Our framework trains an open-source LLM through reinforcement learning as the attack agent to generate adversarial prompts for both training data extraction and system prompt extraction. To achieve this, we propose a novel reward function to provide effective and fine-grained rewards and design novel mechanisms to balance exploration and exploitation during learning and enhance the diversity of adversarial prompts. Through extensive evaluations, we first show that LeakAgent significantly outperforms existing rule-based approaches in training data extraction and automated methods in system prompt leakage. We also demonstrate the effectiveness of LeakAgent in extracting system prompts from real-world applications in OpenAI's GPT Store. We further demonstrate LeakAgent's effectiveness in evading the existing guardrail defense and its helpfulness in enabling better safety alignment. Finally, we validate our customized designs through a detailed ablation study. We release our code here \url{https://github.com/rucnyz/LeakAgent}.
[ "Yuzhou Nie", "Zhun Wang", "Ye Yu", "Xian Wu", "Xuandong Zhao", "Nathaniel D. Bastian", "Wenbo Guo", "Dawn Song" ]
https://openreview.net/forum?id=WIfns41MAb
WIfns41MAb
WIfns41MAb
[ "~Yuzhou_Nie1", "~Zhun_Wang1", "~Ye_Yu5", "~Xian_Wu8", "~Xuandong_Zhao1", "~Nathaniel_D._Bastian1", "~Wenbo_Guo1", "~Dawn_Song1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/8dfa927a8d360bc872b39591cb5a7a9483339f07.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Privacy Leakage", "LLM" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ nie2025leakagent, title={LeakAgent: {RL}-based Red-teaming Agent for {LLM} Privacy Leakage}, author={Yuzhou Nie and Zhun Wang and Ye Yu and Xian Wu and Xuandong Zhao and Nathaniel D. Bastian and Wenbo Guo and Dawn Song}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=WIfns41MAb} }
nie|leakagent_rlbased_redteaming_agent_for_llm_privacy_leakage
null
true
null
null
null
Zero-shot Benchmarking: A Framework for Flexible and Scalable Automatic Evaluation of Language Models
We present Zero-shot Benchmarking (ZSB), a framework for creating high-quality benchmarks for any task by leveraging language models for both synthetic test data creation and evaluation.
As language models improve and grow capable of performing more complex tasks across modalities, evaluating them automatically becomes increasingly challenging. Developing strong and robust task-specific automatic metrics gets harder, and human-annotated test sets—which are expensive to create—saturate more quickly. A compelling alternative is to design reliable strategies to automate the creation of test data and evaluation, but previous attempts either rely on pre-existing data, or focus solely on individual tasks. We present Zero-shot Benchmarking (ZSB), a framework for creating high-quality benchmarks for any task by leveraging language models for both synthetic test data creation and evaluation. ZSB is simple and flexible: it requires only the creation of a prompt for data generation and one for evaluation; it is scalable to tasks and languages where collecting real-world data is costly or impractical; it is model-agnostic, allowing the creation of increasingly challenging benchmarks as models improve. To assess the effectiveness of our framework, we create benchmarks for five text-only tasks and a multi-modal one: general capabilities in four languages (English, Chinese, French, and Korean), translation, and general vision-language capabilities in English. We then rank a broad range of open and closed systems on our benchmarks. ZSB rankings consistently correlate strongly with human rankings, outperforming widely-adopted standard benchmarks. Through ablations, we find that strong benchmarks can be created with open models, and that judge model size and dataset variety are crucial drivers of performance. We release all our benchmarks, and code to reproduce our experiments and to produce new benchmarks.
[ "José Pombal", "Nuno M Guerreiro", "Ricardo Rei", "Andre Martins" ]
https://openreview.net/forum?id=WARZwyDf17
WARZwyDf17
WARZwyDf17
[ "~José_Pombal1", "~Nuno_M_Guerreiro1", "~Ricardo_Rei1", "~Andre_Martins1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/239ca1a13fb128af2c9c973dcdc64a8c7b6028fa.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "automatic evaluation", "large language models", "vision language models", "multilinguality", "llm-as-a-judge" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ pombal2025zeroshot, title={Zero-shot Benchmarking: A Framework for Flexible and Scalable Automatic Evaluation of Language Models}, author={Jos{\'e} Pombal and Nuno M Guerreiro and Ricardo Rei and Andre Martins}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=WARZwyDf17} }
pombal|zeroshot_benchmarking_a_framework_for_flexible_and_scalable_automatic_evaluation_of_language_models
null
null
null
null
null
Model-Agnostic Policy Explanations with Large Language Models
We propose an approach that learns a behavior representation from observed states and actions and then generates explanations with minimal hallucination using a pre-trained large language model.
Intelligent agents, such as robots, are increasingly deployed in real-world, human-centric environments. To foster appropriate human trust and meet legal and ethical standards, these agents must be able to explain their behavior. However, state-of-the-art agents are typically driven by black-box models like deep neural networks, limiting their interpretability. We propose a method for generating natural language explanations of agent behavior based *only* on observed states and actions -- without access to the agent's underlying model. Our approach learns a locally interpretable surrogate model of the agent's behavior from observations, which then guides a large language model to generate plausible explanations with minimal hallucination. Empirical results show that our method produces explanations that are more comprehensible and correct than those from baselines, as judged by both language models and human evaluators. Furthermore, we find that participants in a user study more accurately predicted the agent's future actions when given our explanations, suggesting improved understanding of agent behavior. Importantly, we show that participants are unable to detect hallucinations in explanations, underscoring the need for explainability methods that minimize hallucinations by design.
[ "Zhang Xi-Jia", "Yue Guo", "Shufei Chen", "Simon Stepputtis", "Matthew Craig Gombolay", "Katia P. Sycara", "Joseph Campbell" ]
https://openreview.net/forum?id=VzXpFjKgJg
VzXpFjKgJg
VzXpFjKgJg
[ "~Zhang_Xi-Jia1", "~Yue_Guo7", "~Shufei_Chen1", "~Simon_Stepputtis1", "~Matthew_Craig_Gombolay1", "~Katia_P._Sycara1", "~Joseph_Campbell1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/71ee490dcb37c9b6311b2a8f5d3efa10079542e8.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Explainability", "Model-Agnostic Explanations", "Large Language Models" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ xi-jia2025modelagnostic, title={Model-Agnostic Policy Explanations with Large Language Models}, author={Zhang Xi-Jia and Yue Guo and Shufei Chen and Simon Stepputtis and Matthew Craig Gombolay and Katia P. Sycara and Joseph Campbell}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=VzXpFjKgJg} }
xijia|modelagnostic_policy_explanations_with_large_language_models
null
null
null
null
null
On Mechanistic Circuits for Extractive Question-Answering
We extract a mechanistic circuit for extractive QA and perform data attribution and model steering with the insights
Recent studies have extracted circuits from the computational graphs of language models for simple language tasks such as entity tracking or indirect object identification. In our paper, we scale up circuit extraction to a real-world language modeling task: context-augmented language modeling for question-answering (QA) tasks and understand the potential benefits of circuits towards downstream applications such as data attribution. We extract circuits as a function of internal model components (e.g., attention heads, attention layers, MLPs) using causal mediation analysis techniques. Leveraging the extracted circuits, we first understand the interplay between the language model's usage of parametric memory and retrieved context towards a better mechanistic understanding of context-augmented language models. We then identify a small set of attention heads in our circuit which performs reliable data attribution by default, thereby obtaining attribution for free in just the model's forward pass! Using this insight, we then introduce AttnAttrib, a fast data attribution algorithm. Through a range of empirical experiments across different extractive QA benchmarks, we show that performing data attribution with AttnAttrib obtains state-of-the-art attribution results across different language models. Finally, we show the possibility to steer the language model towards answering from the context, instead of the parametric memory by (i) using the attribution from our extracted attention head as an additional signal during the forward pass and (ii) scaling the output of a small set of attention heads. Beyond mechanistic understanding, our paper provides tangible applications of mechanistic circuits in the form of reliable data attribution and model steering.
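A rough sketch of attention-based context attribution in the spirit of this abstract: take the attention weights of a chosen head from the generated answer tokens back to the context, and aggregate them per context sentence. The head choice and the aggregation are illustrative assumptions, not the paper's selected heads or exact algorithm.

```python
import numpy as np

def attention_attribution(attn, context_spans, answer_start):
    """Score each context span by the attention mass the generated answer
    tokens place on it, using a single pre-selected head.

    attn          : (seq_len, seq_len) attention weights of one head (rows attend to cols)
    context_spans : list of (start, end) token index ranges for context sentences
    answer_start  : index of the first generated answer token
    """
    answer_rows = attn[answer_start:]                    # attention from answer tokens
    scores = np.array([answer_rows[:, s:e].sum() for s, e in context_spans])
    return scores / scores.sum()                         # normalized attribution per span

# toy usage: 3 context sentences followed by a generated answer
rng = np.random.default_rng(0)
A = rng.random((40, 40))
A /= A.sum(axis=1, keepdims=True)                        # row-normalize like softmax output
print(attention_attribution(A, [(0, 10), (10, 22), (22, 30)], answer_start=30))
```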
[ "Samyadeep Basu", "Vlad I Morariu", "Ryan A. Rossi", "Nanxuan Zhao", "Zichao Wang", "Soheil Feizi", "Varun Manjunatha" ]
https://openreview.net/forum?id=VvSWiNIuPL
VvSWiNIuPL
VvSWiNIuPL
[ "~Samyadeep_Basu1", "~Vlad_I_Morariu1", "~Ryan_A._Rossi2", "~Nanxuan_Zhao1", "~Zichao_Wang1", "~Soheil_Feizi2", "~Varun_Manjunatha1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/35c2be3f6296ec7f34eeb53635258b3974d42339.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "mechanistic circuits", "interpretability" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ basu2025on, title={On Mechanistic Circuits for Extractive Question-Answering}, author={Samyadeep Basu and Vlad I Morariu and Ryan A. Rossi and Nanxuan Zhao and Zichao Wang and Soheil Feizi and Varun Manjunatha}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=VvSWiNIuPL} }
basu|on_mechanistic_circuits_for_extractive_questionanswering
null
true
null
null
null
Cats Confuse Reasoning LLM: Query Agnostic Adversarial Triggers for Reasoning Models
The paper introduces CatAttack, a method to generate query-agnostic adversarial triggers that mislead reasoning models into giving incorrect answers, revealing critical vulnerabilities in state-of-the-art Reasoning models.
We investigate the robustness of reasoning models trained for step-by-step problem solving by introducing query-agnostic adversarial triggers – short, irrelevant text that, when appended to math problems, systematically misleads models to output incorrect answers without altering the problem’s semantics. We propose CatAttack, an automated iterative attack pipeline for generating triggers on a faster, less expensive proxy target model (DeepSeek V3) and successfully transferring them to slower, expensive, and more advanced reasoning target models like DeepSeek R1 and DeepSeek R1-distill-Qwen-32B, resulting in a greater than 300% increase in the likelihood of the target model generating an incorrect answer. For example, appending ``Interesting fact: cats sleep most of their lives'' to any math problem leads to more than doubling the chances of a model getting the answer wrong. Furthermore, we demonstrate the widespread transferability of these triggers to other model families, including large reasoning models from Qwen QwQ, Qwen 3, and Phi-4 as well as instruction-tuned models from Llama-3.1 and Mistral. These tests showed that the models were affected by error rates that increased by up to 500% for reasoning models and by 700% for instruction-tuned models. Our findings highlight critical vulnerabilities in reasoning models, revealing that even state-of-the-art models remain susceptible to subtle adversarial inputs, raising security and reliability concerns. The CatAttack triggers dataset with model responses is available at https://huggingface.co/datasets/collinear-ai/cat-attack-adversarial-triggers
[ "Meghana Arakkal Rajeev", "Rajkumar Ramamurthy", "Prapti Trivedi", "Vikas Yadav", "Oluwanifemi Bamgbose", "Sathwik Tejaswi Madhusudhan", "James Zou", "Nazneen Rajani" ]
https://openreview.net/forum?id=VrEPiN5WhM
VrEPiN5WhM
VrEPiN5WhM
[ "~Meghana_Arakkal_Rajeev1", "~Rajkumar_Ramamurthy1", "~Prapti_Trivedi2", "~Vikas_Yadav2", "~Oluwanifemi_Bamgbose1", "~Sathwik_Tejaswi_Madhusudhan2", "~James_Zou1", "~Nazneen_Rajani1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/8023834862148fe2c7a86b6d82087f5bcbdc8edf.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Adversarial attacks", "Query-agnostic adversarial triggers", "Reasoning Models", "Automatic Iterative Attack", "Math-based triggers", "Security", "Redteaming" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ rajeev2025cats, title={Cats Confuse Reasoning {LLM}: Query Agnostic Adversarial Triggers for Reasoning Models}, author={Meghana Arakkal Rajeev and Rajkumar Ramamurthy and Prapti Trivedi and Vikas Yadav and Oluwanifemi Bamgbose and Sathwik Tejaswi Madhusudhan and James Zou and Nazneen Rajani}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=VrEPiN5WhM} }
rajeev|cats_confuse_reasoning_llm_query_agnostic_adversarial_triggers_for_reasoning_models
null
null
null
null
null
A Critical Look At Tokenwise Reward-Guided Text Generation
We analyse some of the pitfalls of contemporary reward guided text generation methods, and present a principled approach with strong performance on several language generation benchmarks.
Large language models (LLMs) can be improved by aligning with human preferences through fine-tuning -- the so-called reinforcement learning from human feedback (RLHF). However, the cost of fine-tuning an LLM is prohibitive for many users. Due to their ability to bypass LLM fine-tuning, prediction-time tokenwise reward-guided text generation (RGTG) methods have recently been proposed. They use a reward model trained on full sequences to score partial sequences during decoding in a bid to steer the generation towards sequences with high rewards. However, these methods have so far been only heuristically motivated and poorly analyzed. In this work, we show that reward models trained on full sequences are not compatible with scoring partial sequences. To alleviate this, we propose to train a Bradley-Terry reward model on partial sequences explicitly, and autoregressively sample from the implied tokenwise policy during decoding. We study the properties of this reward model and the resulting policy: we show that this policy is proportional to the ratio of two distinct RLHF policies. Our simple approach outperforms previous RGTG methods and performs similarly to strong offline baselines without large-scale LLM fine-tuning.
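A minimal numpy sketch of tokenwise reward-guided decoding with a reward model scored on partial sequences, as this abstract advocates: the base model's next-token distribution is reweighted by exp(reward / beta) of each candidate continuation. The partial-sequence reward function and all names below are stand-ins for illustration.

```python
import numpy as np

def reward_guided_step(base_logits, prefix, partial_reward, beta=1.0, top_k=20):
    """One decoding step of tokenwise reward-guided generation.

    base_logits    : (vocab,) next-token logits from the base LM
    prefix         : list of token ids generated so far
    partial_reward : callable(list_of_token_ids) -> float, a reward model
                     trained to score *partial* sequences
    """
    probs = np.exp(base_logits - base_logits.max())
    probs /= probs.sum()
    cand = np.argsort(probs)[-top_k:]                     # score only top-k candidates
    rewards = np.array([partial_reward(prefix + [int(t)]) for t in cand])
    guided = probs[cand] * np.exp(rewards / beta)         # pi_base * exp(r / beta)
    guided /= guided.sum()
    return int(np.random.default_rng(0).choice(cand, p=guided))

# toy usage with a random base distribution and a dummy partial-sequence reward
rng = np.random.default_rng(1)
logits = rng.standard_normal(1000)
dummy_reward = lambda ids: -0.01 * len(ids) + 0.5 * (ids[-1] % 7 == 0)
print(reward_guided_step(logits, prefix=[5, 11, 42], partial_reward=dummy_reward))
```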
[ "Ahmad Rashid", "Ruotian Wu", "Julia Grosse", "Agustinus Kristiadi", "Pascal Poupart" ]
https://openreview.net/forum?id=Vnw9c1YLhV
Vnw9c1YLhV
Vnw9c1YLhV
[ "~Ahmad_Rashid1", "~Ruotian_Wu1", "~Julia_Grosse1", "~Agustinus_Kristiadi1", "~Pascal_Poupart2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/92e5e13ad4933a0781534ccfc64e79e7321793b4.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "LLM", "RLHF", "Alignment", "Model Efficiency", "Reward Models", "Sampling", "Test-time" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ rashid2025a, title={A Critical Look At Tokenwise Reward-Guided Text Generation}, author={Ahmad Rashid and Ruotian Wu and Julia Grosse and Agustinus Kristiadi and Pascal Poupart}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=Vnw9c1YLhV} }
rashid|a_critical_look_at_tokenwise_rewardguided_text_generation
/attachment/318baf88ec5cccfdcfb67114ab9ef513205be824.zip
null
null
null
null
Sample Efficient Preference Alignment in LLMs via Active Exploration
We propose an exploration-based active learning approach for RLHF, addressing both online and offline settings.
Preference-based feedback is important for many applications in machine learning where evaluation of a reward function is not feasible. Notable recent examples arise in preference alignment for large language models, including in reinforcement learning from human feedback (RLHF) and direct preference optimization (DPO). For many applications of preference alignment, the cost of acquiring human feedback can be substantial. In this work, we take advantage of the fact that one can often choose contexts at which to obtain human feedback to most efficiently identify a good policy, and formalize the setting as an \emph{active contextual dueling bandit} problem. We propose an active exploration algorithm to efficiently select the data and provide theoretical proof that it has a polynomial worst-case regret bound. We extend the setting and methodology for practical use in preference alignment of large language models. We provide two extensions, an online and an offline approach. Our method outperforms the baselines with limited samples of human preferences on several language models and four real-world datasets including two new datasets that we contribute to the literature.
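A toy sketch of the context-selection idea: with an ensemble of reward models as an uncertainty proxy, pick the context whose response-pair comparison the ensemble disagrees on most. The ensemble-disagreement criterion, the featurizer, and all names are illustrative stand-ins, not the paper's acquisition rule.

```python
import numpy as np

def select_context(reward_ensemble, contexts, candidate_pairs):
    """Pick the context index whose preferred response pair is most uncertain.

    reward_ensemble : list of callables r(context, response) -> float
    contexts        : list of context strings
    candidate_pairs : list of (response_a, response_b), one pair per context
    """
    scores = []
    for ctx, (a, b) in zip(contexts, candidate_pairs):
        diffs = np.array([r(ctx, a) - r(ctx, b) for r in reward_ensemble])
        scores.append(diffs.std())            # disagreement on which response wins
    return int(np.argmax(scores))

# toy usage: 4 ensemble members, each a random linear reward over a bag-of-letters feature
rng = np.random.default_rng(0)
def featurize(text):
    v = np.zeros(26)
    for ch in text.lower():
        if ch.isalpha():
            v[ord(ch) - ord("a")] += 1.0
    return v

ensemble = [(lambda c, r, w=rng.standard_normal(26): float(w @ featurize(c + " " + r)))
            for _ in range(4)]
contexts = ["summarize this note", "prove this identity"]
pairs = [("short answer", "long answer"), ("proof sketch", "full proof")]
print(select_context(ensemble, contexts, pairs))
```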
[ "Viraj Mehta", "Syrine Belakaria", "Vikramjeet Das", "Ojash Neopane", "Yijia Dai", "Ilija Bogunovic", "Barbara E Engelhardt", "Stefano Ermon", "Jeff Schneider", "Willie Neiswanger" ]
https://openreview.net/forum?id=Vi5cIfIslX
Vi5cIfIslX
Vi5cIfIslX
[ "~Viraj_Mehta1", "~Syrine_Belakaria1", "~Vikramjeet_Das1", "~Ojash_Neopane1", "~Yijia_Dai1", "~Ilija_Bogunovic2", "~Barbara_Engelhardt1", "~Stefano_Ermon1", "~Jeff_Schneider1", "~Willie_Neiswanger2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/7bbd6292ebe368d5537c94c5f2fcf2b3a3d6f97b.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Sample Efficient", "DPO", "RLHF", "Alignment", "Active learning", "LLMs" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ mehta2025sample, title={Sample Efficient Preference Alignment in {LLM}s via Active Exploration}, author={Viraj Mehta and Syrine Belakaria and Vikramjeet Das and Ojash Neopane and Yijia Dai and Ilija Bogunovic and Barbara E Engelhardt and Stefano Ermon and Jeff Schneider and Willie Neiswanger}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=Vi5cIfIslX} }
mehta|sample_efficient_preference_alignment_in_llms_via_active_exploration
/attachment/76e6987c2fa36c3a4f40a8581f3f410848ba4ab8.zip
null
null
null
null
Society of Mind Meets Real-Time Strategy: A Hierarchical Multi-Agent Framework for Strategic Reasoning
We propose a multi-agent framework uniting specialized imitation learning modules under a meta-controller, achieving robust long-horizon strategic reasoning and superior adaptability in dynamic environments.
Large Language Models (LLMs) have recently demonstrated impressive action sequence prediction capabilities but often struggle with dynamic, long-horizon tasks such as real-time strategic games. In a game such as StarCraft II (SC2), agents need to manage resource constraints and adapt to evolving battlefield situations in a partially observable environment. This often overwhelms existing LLM-based approaches. To address these challenges, we propose a hierarchical multi-agent framework that employs specialized imitation learning agents under a meta-controller called Strategic Planner (SP). Through expert demonstrations, each specialized agent learns a distinctive strategy, such as aerial support or defensive maneuvers, and produces coherent, structured multistep action sequences. The SP then orchestrates these proposals into a single, environmentally adaptive plan that ensures local decisions align with long-term strategies. We call this HIMA (Hierarchical Imitation Multi-Agent). We also present TEXTSCII-ALL, a comprehensive SC2 testbed that encompasses all race match combinations in SC2. Our empirical results show that HIMA outperforms state-of-the-art approaches in strategic clarity, adaptability, and computational efficiency, underscoring the potential of combining specialized imitation modules with meta-level orchestration to develop more robust, general-purpose AI agents.
[ "Daechul Ahn", "San Kim", "Jonghyun Choi" ]
https://openreview.net/forum?id=VYdbeSoXWD
VYdbeSoXWD
VYdbeSoXWD
[ "~Daechul_Ahn4", "~San_Kim2", "~Jonghyun_Choi1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/7a9859d0ac848995814c41ceea8ac21116aad9a7.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "multi agent", "real-time simulation", "strategic reasoning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ ahn2025society, title={Society of Mind Meets Real-Time Strategy: A Hierarchical Multi-Agent Framework for Strategic Reasoning}, author={Daechul Ahn and San Kim and Jonghyun Choi}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=VYdbeSoXWD} }
ahn|society_of_mind_meets_realtime_strategy_a_hierarchical_multiagent_framework_for_strategic_reasoning
null
null
null
null
null
MapIQ: Evaluating Multimodal Large Language Models for Map Question Answering
We introduce MapIQ, a benchmark for evaluating multimodal large language models (MLLMs) on map-based visual question answering (Map-VQA) across three map types, also assessing their robustness to map design variations.
Recent advancements in multimodal large language models (MLLMs) have driven researchers to explore how well these models read data visualizations, e.g., bar charts, scatter plots. More recently, attention has shifted to visual question answering with maps (Map-VQA). However, Map-VQA research has primarily focused on choropleth maps, which cover only a limited range of thematic categories and visual analytical tasks. To address these gaps, we introduce MapIQ, a benchmark dataset comprising 14,706 question-answer pairs across three map types (choropleth maps, cartograms, and proportional symbol maps), spanning topics from six distinct themes (e.g., housing, crime). We evaluate multiple MLLMs using six visual analytical tasks, comparing their performance against one another and a human baseline. An additional experiment examining the impact of map design changes (e.g., altered color schemes, modified legend designs, and removal of map elements) provides insights into the robustness and sensitivity of MLLMs, their reliance on internal geographic knowledge, and potential avenues for improving Map-VQA performance.
[ "Varun Srivastava", "Fan Lei", "Srija Mukhopadhyay", "Vivek Gupta", "Ross Maciejewski" ]
https://openreview.net/forum?id=VSwRuGtB5n
VSwRuGtB5n
VSwRuGtB5n
[ "~Varun_Srivastava3", "~Fan_Lei1", "~Srija_Mukhopadhyay1", "~Vivek_Gupta2", "~Ross_Maciejewski1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/75c5f068b1416db4345958771bd9b3c633b9a358.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Large Language Models", "Visual Question Answering", "Maps", "Geospatial Analysis", "Visual Analytics", "Benchmark Dataset" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ srivastava2025mapiq, title={Map{IQ}: Evaluating Multimodal Large Language Models for Map Question Answering}, author={Varun Srivastava and Fan Lei and Srija Mukhopadhyay and Vivek Gupta and Ross Maciejewski}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=VSwRuGtB5n} }
srivastava|mapiq_evaluating_multimodal_large_language_models_for_map_question_answering
null
null
null
null
null
Steering Large Language Model Activations in Sparse Spaces
Sparse Activation Steering (SAS) is a steering method in sparse spaces that precisely controls LLM behavior by isolating interpretable features.
A key challenge in AI alignment is guiding large language models (LLMs) to follow desired behaviors at test time. Activation steering, which modifies internal model activations during inference, offers a promising solution. However, prior work in dense activation spaces struggles with $\textit{superposition}$, where multiple features become entangled, limiting interpretability and precise control. In contrast, sparse representations offer an untapped opportunity for more interpretable behavior modulation. In this work, we introduce $\textit{Sparse Activation Steering}$ (SAS), a novel method for steering LLM behavior in $\textit{sparse spaces}$. By isolating behavior-specific features (i.e., latent dimensions) through a contrastive prompt-pairing approach, we define a set of features that can selectively reinforce or suppress behaviors. Experiments on Gemma 2 LLMs show that SAS vectors enable steering on par with its dense counterpart while offering interpretability advantages such as easier compositionality of features in these spaces. Furthermore, our scaling studies on sparse latents reveal a trend toward greater sparsity in SAS vectors, approaching ideal $\textit{monosemanticity}$.
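An illustrative sketch of contrastive steering-vector construction in a sparse latent space, assuming a pre-trained sparse autoencoder with encode/decode matrices; the top-k feature selection and the point at which the vector is added are simplifications, and all names are placeholders rather than the paper's implementation.

```python
import numpy as np

def sparse_steering_vector(pos_acts, neg_acts, W_enc, b_enc, W_dec, top_k=32):
    """Build a steering vector from contrastive prompt pairs.

    pos_acts, neg_acts  : (N, d) residual-stream activations for prompts that do /
                          do not exhibit the target behavior
    W_enc, b_enc, W_dec : sparse autoencoder parameters (d -> m latents -> d)
    """
    relu = lambda x: np.maximum(x, 0.0)
    z_pos = relu(pos_acts @ W_enc + b_enc).mean(axis=0)   # (m,) mean sparse codes
    z_neg = relu(neg_acts @ W_enc + b_enc).mean(axis=0)
    diff = z_pos - z_neg
    keep = np.argsort(np.abs(diff))[-top_k:]              # behavior-specific latents
    sparse_dir = np.zeros_like(diff)
    sparse_dir[keep] = diff[keep]
    return sparse_dir @ W_dec                             # map back to activation space

# toy usage with random stand-ins for a trained SAE and cached activations
rng = np.random.default_rng(0)
d, m = 64, 512
W_enc, b_enc, W_dec = rng.standard_normal((d, m)), np.zeros(m), rng.standard_normal((m, d))
steer = sparse_steering_vector(rng.standard_normal((20, d)),
                               rng.standard_normal((20, d)), W_enc, b_enc, W_dec)
# at inference time, `scale * steer` would be added to the chosen layer's activations
print(steer.shape)  # (64,)
```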
[ "Reza Bayat", "Ali Rahimi-Kalahroudi", "Mohammad Pezeshki", "Sarath Chandar", "Pascal Vincent" ]
https://openreview.net/forum?id=VGw1viYliK
VGw1viYliK
VGw1viYliK
[ "~Reza_Bayat1", "~Ali_Rahimi-Kalahroudi1", "~Mohammad_Pezeshki1", "~Sarath_Chandar1", "~Pascal_Vincent1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/70f3af98996906bfa9bd84ba645f0cf45176844a.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "AI alignment", "Activation steering", "Sparse representations", "Sparse autoencoders (SAEs)", "Large language models (LLMs)", "Interpretability" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ bayat2025steering, title={Steering Large Language Model Activations in Sparse Spaces}, author={Reza Bayat and Ali Rahimi-Kalahroudi and Mohammad Pezeshki and Sarath Chandar and Pascal Vincent}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=VGw1viYliK} }
bayat|steering_large_language_model_activations_in_sparse_spaces
null
null
null
null
null
Scaling Laws of Synthetic Data for Language Model
We propose a scalable method for synthetic data generation and investigate the scaling laws of synthetic data.
Large language models (LLMs) achieve strong performance across diverse tasks, driven by high-quality web data used in pre-training. However, recent studies indicate web data is rapidly depleting. Synthetic data emerges as a promising alternative, but it remains unclear whether synthetic datasets exhibit predictable scalability comparable to raw pre-training data. In this work, we systematically investigate scaling laws of synthetic data by introducing SynthLLM, a scalable framework that transforms pre-training corpora into diverse, high-quality synthetic datasets. Our approach achieves this by automatically extracting and recombining high-level concepts across multiple documents using a graph algorithm. Key findings from our experiments with SynthLLM on the math domain include: (1) SynthLLM generates synthetic data that reliably adheres to the rectified scaling law across various model sizes; (2) Performance gains gradually diminish near 300B tokens; and (3) Larger models approach optimal performance with fewer training tokens. For instance, an 8B model peaks at 1T tokens, while a 3B model requires 4T. Moreover, comparisons with existing synthetic data generation and augmentation methods demonstrate that SynthLLM achieves superior performance and scalability. Our findings highlight synthetic data as a scalable and reliable alternative to raw pre-training data, offering a viable path toward continued improvement in model performance.
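An illustrative way to fit a saturating scaling curve of the kind such studies report, using scipy's curve_fit on (token budget, loss) pairs. The functional form L(N) = E + a * N^(-b) and the toy measurements are assumptions for demonstration, not the paper's rectified scaling law or data.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(n_tokens, E, a, b):
    """Saturating loss curve: irreducible loss E plus a decaying power-law term."""
    return E + a * np.power(n_tokens, -b)

# toy data: validation loss measured at several synthetic-token budgets
tokens = np.array([1e9, 5e9, 2e10, 1e11, 3e11])
loss = np.array([2.10, 1.85, 1.70, 1.62, 1.60])
(E, a, b), _ = curve_fit(power_law, tokens, loss, p0=[1.5, 50.0, 0.2], maxfev=20000)
print(f"fitted irreducible loss ~ {E:.2f}, "
      f"predicted loss at 1T tokens ~ {power_law(1e12, E, a, b):.2f}")
```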
[ "Zeyu Qin", "Qingxiu Dong", "Xingxing Zhang", "Li Dong", "Xiaolong Huang", "Ziyi Yang", "MAHMOUD KHADEMI", "Dongdong Zhang", "Hany Hassan Awadalla", "Yi R. Fung", "Weizhu Chen", "Minhao Cheng", "Furu Wei" ]
https://openreview.net/forum?id=UmUXPXHtdl
UmUXPXHtdl
UmUXPXHtdl
[ "~Zeyu_Qin1", "~Qingxiu_Dong1", "~Xingxing_Zhang1", "~Li_Dong1", "~Xiaolong_Huang1", "~Ziyi_Yang1", "~MAHMOUD_KHADEMI2", "~Dongdong_Zhang4", "~Hany_Hassan_Awadalla1", "~Yi_R._Fung1", "~Weizhu_Chen1", "~Minhao_Cheng1", "~Furu_Wei1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/8e8b4de2eae1ad893ee42a2ae8bc4bc18e1e6123.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Synthetic Data; Scaling Laws" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ qin2025scaling, title={Scaling Laws of Synthetic Data for Language Model}, author={Zeyu Qin and Qingxiu Dong and Xingxing Zhang and Li Dong and Xiaolong Huang and Ziyi Yang and MAHMOUD KHADEMI and Dongdong Zhang and Hany Hassan Awadalla and Yi R. Fung and Weizhu Chen and Minhao Cheng and Furu Wei}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=UmUXPXHtdl} }
qin|scaling_laws_of_synthetic_data_for_language_model
null
null
null
null
null
ParaPO: Aligning Language Models to Reduce Verbatim Reproduction of Pre-training Data
Fine-tuning language models to mitigate regurgitation in open-ended generation.
Language models (LMs) can memorize and reproduce segments from their pretraining data verbatim even in non-adversarial settings, raising concerns about copyright, plagiarism, privacy, and creativity. We introduce Paraphrase Preference Optimization (ParaPO), a post-training method that fine-tunes LMs to reduce regurgitation while preserving their overall utility. ParaPO trains LMs to prefer paraphrased versions of memorized segments over the original verbatim content from the pretraining data. To preserve the ability to recall famous quotations, we additionally develop a variant of ParaPO that uses system prompts to control whether LMs should reduce regurgitation. On Llama3.1-8B, ParaPO consistently reduces regurgitation across all datasets we evaluated (e.g., reducing the regurgitation metric from 17.3 to 12.9 in creative writing), whereas unlearning methods used in prior work to mitigate regurgitation are less effective outside their targeted unlearned domain (from 17.3 to 16.9). On the instruction-tuned model Tulu3-8B, ParaPO with system prompts achieves a 27.5\% reduction in regurgitation (from 8.7 to 6.3) in creative writing, while preserving similar accuracy in requesting famous quotations. In contrast, the base Tulu model with inference-time system prompts achieves only a 3.5\% reduction (from 8.7 to 8.4).
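A small sketch of the preference-optimization step the abstract implies: paraphrases of memorized segments are treated as the chosen responses and the verbatim continuations as the rejected ones under a DPO-style loss. Function and variable names are illustrative; the paper's exact training recipe may differ.

```python
import numpy as np

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO-style loss on (paraphrase, verbatim) preference pairs.

    Each argument is an array of summed token log-probabilities per pair, under
    the policy being trained (logp_*) or a frozen reference model (ref_logp_*).
    The paraphrase is the chosen response; the verbatim memorized continuation
    is the rejected response.
    """
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    return -np.log(1.0 / (1.0 + np.exp(-margin))).mean()   # -log sigmoid(margin)

# toy usage on three preference pairs
loss = dpo_loss(np.array([-42.0, -50.0, -38.0]), np.array([-30.0, -35.0, -29.0]),
                np.array([-41.0, -49.0, -39.0]), np.array([-31.0, -34.0, -30.0]))
print(round(loss, 4))
```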
[ "Tong Chen", "Faeze Brahman", "Jiacheng Liu", "Niloofar Mireshghallah", "Weijia Shi", "Pang Wei Koh", "Luke Zettlemoyer", "Hannaneh Hajishirzi" ]
https://openreview.net/forum?id=Uic3ojVhXh
Uic3ojVhXh
Uic3ojVhXh
[ "~Tong_Chen3", "~Faeze_Brahman1", "~Jiacheng_Liu2", "~Niloofar_Mireshghallah1", "~Weijia_Shi1", "~Pang_Wei_Koh1", "~Luke_Zettlemoyer1", "~Hannaneh_Hajishirzi1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/00c0e42d4a40349f81a740f57820ec7a63017766.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "security and privacy", "fine-tuning", "ethical considerations in NLP applications" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
true
This submission presents a method to discourage verbatim generation of pre-training data. The method could potentially be used to hide copyright infringement from model developers who may unethically use large-scale copyright-protected data for pre-training.
@inproceedings{ chen2025parapo, title={Para{PO}: Aligning Language Models to Reduce Verbatim Reproduction of Pre-training Data}, author={Tong Chen and Faeze Brahman and Jiacheng Liu and Niloofar Mireshghallah and Weijia Shi and Pang Wei Koh and Luke Zettlemoyer and Hannaneh Hajishirzi}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=Uic3ojVhXh} }
chen|parapo_aligning_language_models_to_reduce_verbatim_reproduction_of_pretraining_data
null
null
null
null
null
Can a Crow Hatch a Falcon? Lineage Matters in Predicting Large Language Model Performance
Explicitly modeling LLM **lineage** boosts performance prediction accuracy. Our lineage-regularized matrix factorization leverages ancestry to outperform standard methods when predicting new or merged models with minimal evaluation data.
Accurately forecasting the performance of Large Language Models (LLMs) before extensive fine-tuning or merging can substantially reduce both computational expense and development time. Although prior approaches like scaling laws account for global factors such as parameter size or training tokens, they often overlook explicit lineage relationships—i.e., which models are derived or merged from which parents. In this work, we propose a novel Lineage-Regularized Matrix Factorization (LRMF) framework that encodes ancestral ties among LLMs via a graph Laplacian regularizer. By leveraging multi-hop parent--child connections, LRMF consistently outperforms conventional matrix factorization and collaborative filtering methods in both instance-level and benchmark-level performance prediction. Our large-scale study includes 2,934 publicly available Hugging Face models and 21,000+ instances across 6 major benchmarks, showing that the introduction of lineage constraints yields up to 0.15–0.30 higher correlation coefficients with actual performance compared to baseline methods. Moreover, LRMF effectively addresses the cold-start problem, providing accurate estimates for newly derived or merged models even with minimal data. This lineage-guided strategy thus offers a resource-efficient way to inform hyperparameter tuning, data selection, and model combination in modern LLM development.
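A minimal sketch of how a lineage-regularized factorization objective could look, assuming a binary lineage adjacency matrix and a partially observed score matrix; all names and the gradient-descent solver are illustrative, not the paper's implementation.

```python
import numpy as np

def lrmf(R, mask, A, rank=8, lam=0.1, lr=0.01, steps=2000, seed=0):
    """Matrix factorization R ~= U @ V.T with a graph-Laplacian penalty
    tr(U.T L U) that pulls lineage-related models toward similar factors.

    R    : (n_models, n_tasks) observed scores (zeros where unobserved)
    mask : (n_models, n_tasks) 1 where a score is observed, else 0
    A    : (n_models, n_models) symmetric lineage adjacency (parent/child ties)
    """
    rng = np.random.default_rng(seed)
    n, m = R.shape
    U = 0.1 * rng.standard_normal((n, rank))
    V = 0.1 * rng.standard_normal((m, rank))
    L = np.diag(A.sum(axis=1)) - A  # graph Laplacian of the lineage graph
    for _ in range(steps):
        E = mask * (U @ V.T - R)          # reconstruction error on observed cells
        grad_U = E @ V + lam * (L @ U)    # Laplacian term couples related models
        grad_V = E.T @ U
        U -= lr * grad_U
        V -= lr * grad_V
    return U, V

# toy usage: 4 models (model 1 derived from 0, model 3 merged from 1 and 2), 3 benchmarks
R = np.array([[0.7, 0.6, 0.0], [0.75, 0.0, 0.5], [0.4, 0.5, 0.45], [0.0, 0.0, 0.0]])
mask = (R > 0).astype(float)
A = np.array([[0, 1, 0, 0], [1, 0, 0, 1], [0, 0, 0, 1], [0, 1, 1, 0]], dtype=float)
U, V = lrmf(R, mask, A)
print((U @ V.T)[3])  # predicted scores for the cold-start merged model
```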
[ "Takuya Tamura", "Taro Yano", "Masafumi Enomoto", "Masafumi Oyamada" ]
https://openreview.net/forum?id=ULYqB2JORB
ULYqB2JORB
ULYqB2JORB
[ "~Takuya_Tamura1", "~Taro_Yano1", "~Masafumi_Enomoto1", "~Masafumi_Oyamada1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/0a553f6eee082d5a4e51bb0edf691b800a56c699.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "LLM", "Performance Estimation", "Matrix Factorization", "Neural Collaborative Filtering" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ tamura2025can, title={Can a Crow Hatch a Falcon? Lineage Matters in Predicting Large Language Model Performance}, author={Takuya Tamura and Taro Yano and Masafumi Enomoto and Masafumi Oyamada}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=ULYqB2JORB} }
tamura|can_a_crow_hatch_a_falcon_lineage_matters_in_predicting_large_language_model_performance
null
null
null
null
null
Rerouting LLM Routers
Proposing a novel class of vulnerabilities, where adversaries can manipulate LLM routing decisions to their advantage.
LLM routers balance quality and cost of responding to queries by routing them to a cheaper or more expensive LLM depending on the query's estimated complexity. Routers are a type of what we call ``LLM control planes,'' i.e., systems that orchestrate multiple LLMs. In this paper, we investigate adversarial robustness of LLM control planes using routers as a concrete example. We formulate LLM control-plane integrity as a distinct problem in AI safety, where the adversary's goal is to control the order or selection of LLMs employed to process users' queries. We then demonstrate that it is possible to generate query-independent ``gadget'' strings that, when added to any query, cause routers to send this query to a strong LLM. In contrast to conventional adversarial inputs, gadgets change the control flow but preserve or even improve the quality of outputs generated in response to adversarially modified queries. We show that this attack is successful both in white-box and black-box settings against several open-source and commercial routers. We also show that perplexity-based defenses fail, and investigate alternatives.
[ "Avital Shafran", "Roei Schuster", "Tom Ristenpart", "Vitaly Shmatikov" ]
https://openreview.net/forum?id=U6C7odo5SX
U6C7odo5SX
U6C7odo5SX
[ "~Avital_Shafran1", "~Roei_Schuster1", "~Tom_Ristenpart1", "~Vitaly_Shmatikov1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/90196a560af282e8b63901f276e00ea0dd18c970.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "LLMs", "Routers", "Adversarial Machine Learning", "ML Security" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ shafran2025rerouting, title={Rerouting {LLM} Routers}, author={Avital Shafran and Roei Schuster and Tom Ristenpart and Vitaly Shmatikov}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=U6C7odo5SX} }
shafran|rerouting_llm_routers
null
null
null
null
null
Bayesian scaling laws for in-context learning
We test the claim that in-context learning in LLMs is Bayesian, leading to a new interpretable scaling law that accurately predicts when suppressed behaviors in both toy and real-world language models will reemerge.
In-context learning (ICL) is a powerful technique for getting language models to perform complex tasks with no training updates. Prior work has established strong correlations between the number of in-context examples provided and the accuracy of the model's predictions. In this paper, we seek to explain this correlation by showing that ICL approximates a Bayesian learner. This perspective gives rise to a novel Bayesian scaling law for ICL. In experiments with GPT-2 models of different sizes, our scaling law matches existing scaling laws in accuracy while also offering interpretable terms for task priors, learning efficiency, and per-example probabilities. To illustrate the analytic power that such interpretable scaling laws provide, we report on controlled synthetic dataset experiments designed to inform real-world studies of safety alignment. In our experimental protocol, we use SFT or DPO to suppress an unwanted existing model capability and then use ICL to try to bring that capability back (many-shot jailbreaking). We then study ICL on real-world instruction-tuned LLMs using capabilities benchmarks as well as a new many-shot jailbreaking dataset. In all cases, Bayesian scaling laws accurately predict the conditions under which ICL will cause suppressed behaviors to reemerge, which sheds light on the ineffectiveness of post-training at increasing LLM safety.
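An illustrative numpy sketch of the Bayesian-learner view of ICL that such a scaling law formalizes: a prior over candidate tasks is updated by per-example likelihoods, and the predicted probability of the target behavior is the posterior-weighted average. The exact functional form and parameterization in the paper may differ; the numbers below are made up.

```python
import numpy as np

def bayesian_icl_curve(priors, likelihoods, target_probs, n_max=50):
    """Expected probability of the target behavior after n in-context examples,
    under an idealized Bayesian learner over K candidate tasks.

    priors       : (K,) prior probability of each task
    likelihoods  : (K,) per-example likelihood each task assigns to the demos
    target_probs : (K,) probability each task assigns to the target behavior
    """
    priors = np.asarray(priors, dtype=float)
    curve = []
    for n in range(n_max + 1):
        post = priors * np.asarray(likelihoods) ** n   # unnormalized posterior
        post /= post.sum()
        curve.append(float(post @ np.asarray(target_probs)))
    return np.array(curve)

# toy example: a "safe" task dominates the prior, but the demos favor a suppressed task
curve = bayesian_icl_curve(priors=[0.95, 0.05],
                           likelihoods=[0.2, 0.6],
                           target_probs=[0.01, 0.9])
print(curve[[0, 5, 20]])  # suppressed behavior reemerges as shots accumulate
```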
[ "Aryaman Arora", "Dan Jurafsky", "Christopher Potts", "Noah Goodman" ]
https://openreview.net/forum?id=U2ihVSREUb
U2ihVSREUb
U2ihVSREUb
[ "~Aryaman_Arora1", "~Dan_Jurafsky1", "~Christopher_Potts1", "~Noah_Goodman1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/076b6410f98373074352f0d7b558d5ca201c2244.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "in-context learning", "bayesian inference", "scaling laws" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ arora2025bayesian, title={Bayesian scaling laws for in-context learning}, author={Aryaman Arora and Dan Jurafsky and Christopher Potts and Noah Goodman}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=U2ihVSREUb} }
arora|bayesian_scaling_laws_for_incontext_learning
null
true
null
null
null
AdaptiVocab: Enhancing LLM Efficiency in Focused Domains through Lightweight Vocabulary Adaptation
AdaptiVocab enhances LLM efficiency in domain-specific settings by adapting the model's vocabulary to better fit the target domain, improving generation efficiency by 25% without compromising performance.
Large Language Models (LLMs) have shown impressive versatility as general-purpose models. However, their broad applicability comes with high computational overhead, particularly in auto-regressive decoding, where each step requires a forward pass. In domain-specific settings, general-purpose capabilities are unnecessary and can be exchanged for efficiency. In this work, we take a novel perspective on domain adaptation–reducing latency and computational costs by adapting the vocabulary to focused domains of interest. We introduce AdaptiVocab, an end-to-end approach for vocabulary adaptation, designed to enhance LLM efficiency in low-resource domains. AdaptiVocab can be applied to any tokenizer and architecture, modifying the vocabulary by replacing tokens with domain-specific n-gram-based tokens, thereby reducing the number of tokens required for both input processing and output generation. AdaptiVocab initializes new n-token embeddings using an exponentially weighted combination of existing embeddings and employs a lightweight fine-tuning phase that can be efficiently performed on a single GPU. We evaluate two 7B LLMs across three niche domains, assessing efficiency, generation quality, and end-task performance. Our results show that AdaptiVocab reduces token usage by over 25% without compromising performance.
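A minimal sketch of the kind of exponentially weighted embedding initialization the abstract describes for new n-gram tokens; the decay parameter and the direction of the weighting are assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np

def init_ngram_embedding(token_ids, embedding_matrix, alpha=0.5):
    """Initialize an embedding for a new n-gram token as an exponentially
    weighted combination of its constituent tokens' embeddings.

    token_ids        : ids of the existing tokens that make up the n-gram
    embedding_matrix : (vocab_size, d) pre-trained embedding table
    alpha            : decay rate; later tokens in the n-gram get more weight here
    """
    embs = embedding_matrix[np.asarray(token_ids)]   # (n, d)
    weights = np.exp(alpha * np.arange(len(token_ids)))
    weights /= weights.sum()                          # normalized exponential weights
    return weights @ embs                             # (d,)

# toy usage: merge three existing tokens into one domain-specific n-gram token
vocab = np.random.default_rng(0).standard_normal((100, 16))
new_emb = init_ngram_embedding([12, 47, 3], vocab)
print(new_emb.shape)  # (16,)
```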
[ "Itay Nakash", "Nitay Calderon", "Eyal Ben-David", "Elad Hoffer", "Roi Reichart" ]
https://openreview.net/forum?id=TyXf9dwpZP
TyXf9dwpZP
TyXf9dwpZP
[ "~Itay_Nakash1", "~Nitay_Calderon1", "~Eyal_Ben-David1", "~Elad_Hoffer1", "~Roi_Reichart1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/a3948743216f973e8868927800e3236adbf49875.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Domain Adaptation", "Efficiency", "Tokenization", "Vocabulary Adaptation", "Efficient LLMs" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ nakash2025adaptivocab, title={AdaptiVocab: Enhancing {LLM} Efficiency in Focused Domains through Lightweight Vocabulary Adaptation}, author={Itay Nakash and Nitay Calderon and Eyal Ben-David and Elad Hoffer and Roi Reichart}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=TyXf9dwpZP} }
nakash|adaptivocab_enhancing_llm_efficiency_in_focused_domains_through_lightweight_vocabulary_adaptation
null
null
null
null
null
Fleurs-SLU: A Massively Multilingual Benchmark for Spoken Language Understanding
A massively multilingual benchmark for topical utterance classification and textual multiple-choice QA from spoken paragraphs
Spoken language understanding (SLU) is indispensable for half of all living languages that lack a formal writing system, since these languages cannot pair automatic speech recognition (ASR) with language models to benefit from language technology. Even if low-resource languages possess a writing system, ASR for these languages remains unreliable due to limited bimodal speech and text training data. However, the evaluation of multilingual SLU remains limited to shallow tasks such as intent classification or language identification. To address this, we present Fleurs-SLU, a multilingual SLU benchmark that encompasses (i) 692 hours of speech for topical utterance classification in 102 languages and (ii) multiple-choice question answering through listening comprehension spanning 944 hours of speech across 92 languages. We extensively evaluate both end-to-end speech classification models and cascaded systems that combine speech-to-text transcription with subsequent classification by large language models on Fleurs-SLU. Our results show that cascaded systems exhibit greater robustness in multilingual SLU tasks, though speech encoders can achieve competitive performance in topical speech classification when appropriately pre-trained. We further find a strong correlation between robust multilingual ASR, effective speech-to-text translation, and strong multilingual SLU, highlighting the mutual benefits between acoustic and semantic speech representations.
[ "Fabian David Schmidt", "Ivan Vulić", "Goran Glavaš", "David Ifeoluwa Adelani" ]
https://openreview.net/forum?id=Tqj3fYqhwS
Tqj3fYqhwS
Tqj3fYqhwS
[ "~Fabian_David_Schmidt1", "~Ivan_Vulić1", "~Goran_Glavaš1", "~David_Ifeoluwa_Adelani1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/a56ca4100519acabe9f4ee62fe5480872cee973b.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "spoken language understanding", "multilingual benchmarks", "multilingual evaluation" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ schmidt2025fleursslu, title={Fleurs-{SLU}: A Massively Multilingual Benchmark for Spoken Language Understanding}, author={Fabian David Schmidt and Ivan Vuli{\'c} and Goran Glava{\v{s}} and David Ifeoluwa Adelani}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=Tqj3fYqhwS} }
schmidt|fleursslu_a_massively_multilingual_benchmark_for_spoken_language_understanding
/attachment/e9114f54a52339dacc131c52fec8c4f2fafe8cf5.zip
null
null
null
null
The Blessing and Curse of Dimensionality in Safety Alignment
We explore the concept of safety as represented by a linear subspace and its relation to the hidden dimensions of a model.
The focus on safety alignment in large language models (LLMs) has increased significantly due to their widespread adoption across different domains. The scale of LLMs plays a contributing role in their success, and growth in parameter count is accompanied by larger hidden dimensions. In this paper, we hypothesize that while the increase in dimensions has been a key advantage, it may lead to emergent problems as well. These problems emerge as the linear structures in the activation space can be exploited, in the form of activation engineering, to circumvent a model's safety alignment. Through detailed visualizations of linear subspaces associated with different concepts, such as safety, across various model scales, we show that the curse of high-dimensional representations uniquely impacts LLMs. Further substantiating our claim, we demonstrate that projecting the representations of the model onto a lower-dimensional subspace can preserve sufficient information for alignment while avoiding those linear structures. Empirical results confirm that such dimensional reduction significantly reduces susceptibility to jailbreaking through representation engineering. Building on our empirical validations, we provide theoretical insights into these linear jailbreaking methods relative to a model's hidden dimensions. Broadly speaking, our work posits that the high dimensions of a model's internal representations can be both a blessing and a curse in safety alignment.
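A small sketch of the projection idea the abstract tests: fit a low-rank PCA basis on a set of activations and replace each activation with its projection onto that subspace, which discards directions a linear steering attack could exploit. The rank and the layer at which this would be applied are assumptions, not the paper's configuration.

```python
import numpy as np

def fit_projection(acts, k):
    """Top-k principal directions of a set of activations (rows = samples)."""
    mean = acts.mean(axis=0)
    _, _, Vt = np.linalg.svd(acts - mean, full_matrices=False)
    return mean, Vt[:k]                      # (d,), (k, d)

def project(x, mean, basis):
    """Project activation x onto the k-dimensional subspace, returned in d dims."""
    return mean + (x - mean) @ basis.T @ basis

# toy usage: 512-dim activations projected onto a 64-dim subspace
rng = np.random.default_rng(0)
acts = rng.standard_normal((1000, 512))
mean, basis = fit_projection(acts, k=64)
x = rng.standard_normal(512)
print(project(x, mean, basis).shape)         # (512,)
```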
[ "Rachel S.Y. Teo", "Laziz Abdullaev", "Tan Minh Nguyen" ]
https://openreview.net/forum?id=TiTk6VDz2H
TiTk6VDz2H
TiTk6VDz2H
[ "~Rachel_S.Y._Teo1", "~Laziz_Abdullaev1", "~Tan_Minh_Nguyen1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/32dca478b2e7d36950ee5769471975b85ed28a93.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "safety alignment", "large language models", "jailbreak", "activation engineering", "linear separation hypothesis" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ teo2025the, title={The Blessing and Curse of Dimensionality in Safety Alignment}, author={Rachel S.Y. Teo and Laziz Abdullaev and Tan Minh Nguyen}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=TiTk6VDz2H} }
teo|the_blessing_and_curse_of_dimensionality_in_safety_alignment
/attachment/72d52ac6f2cbf593a830158aaec0ea435826d90c.zip
null
null
null
null
Out-of-Distribution Detection using Synthetic Data Generation
This work presents an effective OOD detection method using LLM-generated synthetic proxies, eliminating the need for external OOD data. Experiments show it reduces false positives and outperforms baseline methods in text classification tasks.
Distinguishing in- and out-of-distribution (OOD) inputs is crucial for reliable deployment of classification systems. However, OOD data is typically unavailable or difficult to collect, posing a significant challenge for accurate OOD detection. In this work, we present a method that harnesses the generative capabilities of Large Language Models (LLMs) to create high-quality synthetic OOD proxies, eliminating the dependency on any external OOD data source. We study the efficacy of our method on classical text classification tasks such as toxicity detection and sentiment classification as well as classification tasks arising in LLM development and deployment, such as training a reward model for RLHF and detecting misaligned generations. Extensive experiments on nine InD-OOD dataset pairs and various model sizes show that our approach dramatically lowers false positive rates (achieving a perfect zero in some cases) while maintaining high accuracy on in-distribution tasks, outperforming baseline methods by a significant margin.
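A compact sketch of the overall recipe, assuming synthetic OOD texts have already been generated by an LLM and that some text encoder yields fixed-size embeddings: fit a lightweight InD-vs-proxy classifier and threshold its score at test time. All names and the logistic-regression detector are placeholders, not necessarily the paper's detector.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_ood_detector(ind_embeddings, synthetic_ood_embeddings):
    """Train a binary detector: class 1 = in-distribution, class 0 = OOD proxy."""
    X = np.vstack([ind_embeddings, synthetic_ood_embeddings])
    y = np.concatenate([np.ones(len(ind_embeddings)),
                        np.zeros(len(synthetic_ood_embeddings))])
    return LogisticRegression(max_iter=1000).fit(X, y)

def is_in_distribution(detector, embedding, threshold=0.5):
    return detector.predict_proba(embedding.reshape(1, -1))[0, 1] >= threshold

# toy usage with random stand-ins for encoder outputs
rng = np.random.default_rng(0)
ind = rng.normal(0.0, 1.0, size=(200, 32))
proxy = rng.normal(1.5, 1.0, size=(200, 32))     # LLM-generated OOD proxies, embedded
det = fit_ood_detector(ind, proxy)
print(is_in_distribution(det, rng.normal(0.0, 1.0, size=32)))   # likely True
print(is_in_distribution(det, rng.normal(1.5, 1.0, size=32)))   # likely False
```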
[ "Momin Abbas", "Muneeza Azmat", "Raya Horesh", "Mikhail Yurochkin" ]
https://openreview.net/forum?id=TiRiDMkTmG
TiRiDMkTmG
TiRiDMkTmG
[ "~Momin_Abbas1", "~Muneeza_Azmat1", "~Raya_Horesh1", "~Mikhail_Yurochkin1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/7fe91ed9b3588c0dc88d186dbe502075a3b92505.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "out-of-distribution detection", "out-of-distribution generalization", "synthetic data" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
true
I'm not sure if this requires ethics review due to the content of the datasets used.
@inproceedings{ abbas2025outofdistribution, title={Out-of-Distribution Detection using Synthetic Data Generation}, author={Momin Abbas and Muneeza Azmat and Raya Horesh and Mikhail Yurochkin}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=TiRiDMkTmG} }
abbas|outofdistribution_detection_using_synthetic_data_generation
/attachment/47161fc624b3315b25b57a1de37783c4cbf8c017.zip
null
null
null
null
Effective Length Extrapolation via Dimension-Wise Positional Embeddings Manipulation
Dimensionally manipulating the relative position matrix to extrapolate the context window of LLMs without additional training.
Large Language Models (LLMs) often struggle to process and generate coherent context when the number of input tokens exceeds the pre-trained length. Recent advancements in long-context extension have significantly expanded the context window of LLMs but require expensive overhead to train the large-scale models with longer context. In this work, we propose Dimension-Wise Positional Embeddings Manipulation (DPE), a training-free framework to extrapolate the context window of LLMs by diving into RoPE's different hidden dimensions. Instead of manipulating all dimensions equally, DPE detects the effective length for every dimension and finds the key dimensions for context extension. We reuse the original position indices with their embeddings from the pre-trained model and manipulate the key dimensions' position indices to their most effective lengths. In this way, DPE adjusts the pre-trained models with minimal modifications while ensuring that each dimension reaches its optimal state for extrapolation. DPE significantly surpasses well-known baselines such as YaRN and Self-Extend. DPE enables Llama3-8k 8B to support context windows of 128k tokens without continual training and integrates seamlessly with Flash Attention 2. In addition to its impressive extrapolation capability, DPE also dramatically improves the models' performance within training length, such as Llama3.1 70B, by over 18 points on popular long-context benchmarks RULER. When compared with commercial models, Llama 3.1 70B with DPE even achieves better performance than GPT-4-128K.
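A rough numpy sketch of per-dimension position-index manipulation for RoPE in the spirit of this abstract: positions fed to selected frequency dimensions are rescaled so their relative distances stay within a detected effective length. The effective lengths, the choice of key dimensions, and the linear rescaling are illustrative assumptions rather than the paper's algorithm.

```python
import numpy as np

def dpe_rope_angles(seq_len, head_dim, base=10000.0, effective_len=None):
    """Compute RoPE angles where each frequency dimension sees position indices
    rescaled so the maximum relative distance does not exceed that dimension's
    effective length.

    effective_len : (head_dim // 2,) per-dimension effective lengths; dimensions
                    whose effective length >= seq_len are left untouched.
    """
    half = head_dim // 2
    inv_freq = 1.0 / (base ** (np.arange(half) / half))            # (half,)
    pos = np.arange(seq_len, dtype=float)[:, None]                 # (seq_len, 1)
    if effective_len is None:
        effective_len = np.full(half, seq_len)
    scale = np.minimum(1.0, np.asarray(effective_len) / seq_len)   # (half,)
    scaled_pos = pos * scale[None, :]                              # per-dim positions
    return scaled_pos * inv_freq[None, :]                          # (seq_len, half)

# toy usage: a 128-dim head at 32k tokens, where the low-frequency dimensions
# only handled 8k positions well during pre-training
eff = np.concatenate([np.full(48, 32768), np.full(16, 8192)])
angles = dpe_rope_angles(seq_len=32768, head_dim=128, effective_len=eff)
print(angles.shape)  # (32768, 64)
```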
[ "Yi Lu", "Wanxu Zhao", "Xin Zhou", "Chenxin An", "Chenglong Wang", "Shuo Li", "Yuming Yang", "Jun Zhao", "Tao Ji", "Tao Gui", "Qi Zhang", "Xuanjing Huang" ]
https://openreview.net/forum?id=Tahpc3iAnO
Tahpc3iAnO
Tahpc3iAnO
[ "~Yi_Lu7", "~Wanxu_Zhao2", "~Xin_Zhou6", "~Chenxin_An1", "~Chenglong_Wang6", "~Shuo_Li12", "~Yuming_Yang1", "~Jun_Zhao5", "~Tao_Ji1", "~Tao_Gui1", "~Qi_Zhang8", "~Xuanjing_Huang1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/96ad0479166278f66c8fba17d248e253992df48e.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Long Context", "Extrapolation", "Training-Free Framework" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ lu2025effective, title={Effective Length Extrapolation via Dimension-Wise Positional Embeddings Manipulation}, author={Yi Lu and Wanxu Zhao and Xin Zhou and Chenxin An and Chenglong Wang and Shuo Li and Yuming Yang and Jun Zhao and Tao Ji and Tao Gui and Qi Zhang and Xuanjing Huang}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=Tahpc3iAnO} }
lu|effective_length_extrapolation_via_dimensionwise_positional_embeddings_manipulation
/attachment/0101b7de0eaaa5eccaca6af9016daed25bb78a2b.zip
null
null
null
null
LLM Can be a Dangerous Persuader: Empirical Study of Persuasion Safety in Large Language Models
We introduce PersuSafety, the first comprehensive framework for evaluating the safety and ethicality in LLM-driven persuasion.
Recent advancements in Large Language Models (LLMs) have enabled them to approach human-level persuasion capabilities. However, such potential also raises concerns about the safety risks of LLM-driven persuasion, particularly their potential for unethical influence through manipulation, deception, exploitation of vulnerabilities, and many other harmful tactics. In this work, we present a systematic investigation of LLM persuasion safety through two critical aspects: (1) whether LLMs appropriately reject unethical persuasion tasks and avoid unethical strategies during execution, including cases where the initial persuasion goal appears ethically neutral, and (2) how influencing factors like personality traits and external pressures affect their behavior. To this end, we introduce PersuSafety, the first comprehensive framework for the assessment of persuasion safety which consists of three stages, i.e., persuasion scene creation, persuasive conversation simulation, and persuasion safety assessment. PersuSafety covers 6 diverse unethical persuasion topics and 15 common unethical strategies. Through extensive experiments across 8 widely used LLMs, we observe significant safety concerns in most LLMs, including failing to identify harmful persuasion tasks and leveraging various unethical persuasion strategies. Our study calls for more attention to improve safety alignment in progressive and goal-driven conversations such as persuasion.
[ "Minqian Liu", "Zhiyang Xu", "Xinyi Zhang", "Heajun An", "Sarvech Qadir", "Qi Zhang", "Pamela J. Wisniewski", "Jin-Hee Cho", "Sang Won Lee", "Ruoxi Jia", "Lifu Huang" ]
https://openreview.net/forum?id=TMB9SKqit9
TMB9SKqit9
TMB9SKqit9
[ "~Minqian_Liu2", "~Zhiyang_Xu1", "~Xinyi_Zhang26", "~Heajun_An1", "~Sarvech_Qadir1", "~Qi_Zhang41", "~Pamela_J._Wisniewski1", "~Jin-Hee_Cho1", "~Sang_Won_Lee1", "~Ruoxi_Jia1", "~Lifu_Huang1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/e5a56263d50c1fbdb24b2f741ca1a5f0f7e754f2.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "persuasion", "large language models", "safety", "evaluation" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ liu2025llm, title={{LLM} Can be a Dangerous Persuader: Empirical Study of Persuasion Safety in Large Language Models}, author={Minqian Liu and Zhiyang Xu and Xinyi Zhang and Heajun An and Sarvech Qadir and Qi Zhang and Pamela J. Wisniewski and Jin-Hee Cho and Sang Won Lee and Ruoxi Jia and Lifu Huang}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=TMB9SKqit9} }
liu|llm_can_be_a_dangerous_persuader_empirical_study_of_persuasion_safety_in_large_language_models
null
null
null
null
null
Self-Evolving Critique Abilities in Large Language Models
SCRIT enables large language models to evolve their critique capabilities without human oversight by learning from self-generated data.
Despite their remarkable performance, Large Language Models (LLMs) face a critical challenge: providing feedback for tasks where human evaluation is difficult or where LLMs potentially outperform humans. In such scenarios, leveraging the *critique* ability of LLMs themselves—identifying and correcting flaws—shows considerable promise. This paper explores enhancing critique abilities of LLMs, noting that current approaches rely on human annotations or more powerful models, leaving the challenge of improving critique abilities *without* external supervision *unresolved*. We introduce SCRIT (Self-evolving CRITic), a framework that trains LLMs with self-generated data to evolve their critique abilities. We find that naive data generation approaches often produce superficial critiques of low quality. To address this limitation, we propose a contrastive-critic approach that uses reference solutions to enhance the understanding of LLMs for relevant concepts and incorporates a self-validation scheme to further improve data quality. Implemented with Qwen2.5-72B-Instruct, a leading LLM, SCRIT demonstrates consistent improvements: a 10.0\% relative gain in critique-correction accuracy and a 19.0\% relative improvement in error identification F1-score across various benchmarks. Our analysis reveals that SCRIT's performance scales positively with data and model size and enables continuous improvement through multi-round iterations.
[ "Zhengyang Tang", "Ziniu Li", "Zhenyang Xiao", "Tian Ding", "Ruoyu Sun", "Benyou Wang", "Dayiheng Liu", "Fei Huang", "Tianyu Liu", "Bowen Yu", "Junyang Lin" ]
https://openreview.net/forum?id=TA6azZKWJq
TA6azZKWJq
TA6azZKWJq
[ "~Zhengyang_Tang1", "~Ziniu_Li1", "~Zhenyang_Xiao1", "~Tian_Ding1", "~Ruoyu_Sun1", "~Benyou_Wang2", "~Dayiheng_Liu1", "~Fei_Huang3", "~Tianyu_Liu3", "~Bowen_Yu3", "~Junyang_Lin1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/67171a4d26bdcd50f955d395efa0276253dc1056.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Large Language Models", "Critique Model", "Synthetic Data", "Mathematical Reasoning", "Scalable Oversight" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ tang2025selfevolving, title={Self-Evolving Critique Abilities in Large Language Models}, author={Zhengyang Tang and Ziniu Li and Zhenyang Xiao and Tian Ding and Ruoyu Sun and Benyou Wang and Dayiheng Liu and Fei Huang and Tianyu Liu and Bowen Yu and Junyang Lin}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=TA6azZKWJq} }
tang|selfevolving_critique_abilities_in_large_language_models
null
null
null
null
null
LIMO: Less is More for Reasoning
We demonstrate that large language models can achieve competition-level mathematical reasoning with just hundreds of high-quality training examples while maintaining strong generalization across diverse out-of-distribution benchmarks.
We challenge the prevailing assumption that complex reasoning in large language models (LLMs) necessitates massive training data. We demonstrate that sophisticated mathematical reasoning can emerge with only a few examples. Specifically, through simple supervised fine-tuning, our model, LIMO, achieves 63.3% accuracy on AIME24 and 95.6% on MATH500, surpassing previous fine-tuned models (6.5% on AIME24, 59.2% on MATH500) while using only 1% of the training data required by prior approaches. Furthermore, LIMO exhibits strong out-of-distribution generalization, achieving a 45.8% absolute improvement across diverse benchmarks, outperforming models trained on 100× more data. Synthesizing these findings, we propose the Less-Is-More Reasoning Hypothesis (LIMO Hypothesis): In foundation models where domain knowledge has been comprehensively encoded during pre-training, sophisticated reasoning can emerge through minimal but strategically designed demonstrations of cognitive processes. This hypothesis suggests that the threshold for eliciting complex reasoning is not dictated by task complexity but rather by two key factors: (1) the completeness of the model's pre-trained knowledge base and (2) the effectiveness of post-training examples in serving as “cognitive templates” that guide reasoning.
[ "Yixin Ye", "Zhen Huang", "Yang Xiao", "Ethan Chern", "Shijie Xia", "Pengfei Liu" ]
https://openreview.net/forum?id=T2TZ0RY4Zk
T2TZ0RY4Zk
T2TZ0RY4Zk
[ "~Yixin_Ye1", "~Zhen_Huang9", "~Yang_Xiao6", "~Ethan_Chern1", "~Shijie_Xia2", "~Pengfei_Liu1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/cde8a983c12998496197ba936eb6c90b4cb8e4b1.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Large language models", "Mathematical reasoning", "Data efficiency", "Supervised fine-tuning", "Inference-time computation", "Reasoning chains" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ ye2025limo, title={{LIMO}: Less is More for Reasoning}, author={Yixin Ye and Zhen Huang and Yang Xiao and Ethan Chern and Shijie Xia and Pengfei Liu}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=T2TZ0RY4Zk} }
ye|limo_less_is_more_for_reasoning
null
null
null
null
null
AIR: A Systematic Analysis of Annotations, Instructions, and Response Pairs in Preference Dataset
We propose \textbf{AIR}, a framework to systematically dissect preference datasets into three core components—\textbf{A}nnotations, \textbf{I}nstructions, and \textbf{R}esponse Pairs—and quantify their alignment impact.
Preference learning is critical for aligning large language models (LLMs) with human values, yet its success hinges on high-quality datasets comprising three core components: Preference \textbf{A}nnotations, \textbf{I}nstructions, and \textbf{R}esponse Pairs. Current approaches conflate these components, obscuring their individual impacts and hindering systematic optimization. In this work, we propose \textbf{AIR}, a component-wise analysis framework that systematically isolates and optimizes each component while evaluating their synergistic effects. Through rigorous experimentation, AIR reveals actionable principles: annotation simplicity (point-wise generative scoring), instruction inference stability (variance-based filtering across LLMs), and response pair quality (moderate margins + high absolute scores). When combined, these principles yield +5.3 average gains over the baseline method, even with only 14k high-quality pairs. Our work shifts preference dataset design from ad hoc scaling to component-aware optimization, offering a blueprint for efficient, reproducible alignment.
[ "Bingxiang He", "Wenbin Zhang", "Jiaxi Song", "Cheng Qian", "Zixuan Fu", "Bowen Sun", "Ning Ding", "Haiwen Hong", "Longtao Huang", "Hui Xue", "Ganqu Cui", "Wanxiang Che", "Zhiyuan Liu", "Maosong Sun" ]
https://openreview.net/forum?id=Sz3ZU6oeVJ
Sz3ZU6oeVJ
Sz3ZU6oeVJ
[ "~Bingxiang_He1", "~Wenbin_Zhang3", "~Jiaxi_Song1", "~Cheng_Qian4", "~Zixuan_Fu2", "~Bowen_Sun2", "~Ning_Ding5", "~Haiwen_Hong1", "~Longtao_Huang2", "~Hui_Xue5", "~Ganqu_Cui1", "~Wanxiang_Che1", "~Zhiyuan_Liu1", "~Maosong_Sun1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/285d78f81924a2f4d2c49e4d7252b6ebbbe839af.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Large Language Models", "Direct Preference Optimization", "Preference Learning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ he2025air, title={{AIR}: A Systematic Analysis of Annotations, Instructions, and Response Pairs in Preference Dataset}, author={Bingxiang He and Wenbin Zhang and Jiaxi Song and Cheng Qian and Zixuan Fu and Bowen Sun and Ning Ding and Haiwen Hong and Longtao Huang and Hui Xue and Ganqu Cui and Wanxiang Che and Zhiyuan Liu and Maosong Sun}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=Sz3ZU6oeVJ} }
he|air_a_systematic_analysis_of_annotations_instructions_and_response_pairs_in_preference_dataset
/attachment/e375627504f8c6a7d8f8af6f540ac91b443b2c1b.zip
null
null
null
null
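Illustrative sketch (not part of the AIR record above): the abstract names "instruction inference stability (variance-based filtering across LLMs)" as one of its principles. A minimal, hypothetical implementation of that kind of filter — keep an instruction only when the scores assigned by several judge LLMs agree closely — could look as follows; the scoring inputs and the variance threshold are assumptions for illustration, not the paper's exact procedure.

```python
# Hypothetical sketch: keep instructions whose scores across several LLM judges have low variance.
from statistics import pvariance

def filter_by_score_variance(instruction_scores, max_variance=0.5):
    """instruction_scores maps each instruction to a list of scores, one per judge LLM.
    Returns the instructions whose cross-judge score variance is at most max_variance."""
    return [
        instruction
        for instruction, scores in instruction_scores.items()
        if pvariance(scores) <= max_variance
    ]

scores = {
    "Summarize the article in two sentences.": [4.0, 4.5, 4.0],  # judges agree -> kept
    "Write something interesting.": [1.0, 4.0, 2.5],             # judges disagree -> dropped
}
print(filter_by_score_variance(scores))
```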
LongPerceptualThoughts: Distilling System-2 Reasoning for System-1 Perception
We introduce a novel data generation pipeline for generating multimodal reasoning data by injecting key cognitive behaviours.
Recent reasoning models through test-time scaling have demonstrated that long chain-of-thoughts can unlock substantial performance boosts in hard reasoning tasks such as math and code. However, the benefit of such long thoughts for system-2 reasoning is relatively less explored in other domains such as perceptual tasks where shallower, system-1 reasoning seems sufficient. In this paper, we introduce $\textit{LongPerceptualThoughts}$, a new synthetic dataset with 30K long-thought traces for perceptual tasks. The key challenges in synthesizing elaborate reasoning thoughts for perceptual tasks are that off-the-shelf models are not yet equipped with such thinking behavior and that it is not straightforward to build a reliable process verifier for perceptual tasks. Thus, we propose a novel three-stage data synthesis framework that first synthesizes verifiable multiple-choice questions from dense image descriptions, then distills simple CoTs from VLMs for those verifiable problems, and finally expands those simple thoughts to elaborate long thoughts via frontier reasoning models. In controlled experiments with a strong instruction-tuned 7B model, we demonstrate notable improvements over existing visual reasoning data-generation methods. Our model, trained on the generated dataset, achieves an average +3.4 points improvement over 5 vision-centric benchmarks, including +11.8 points on V$^*$ Bench. Notably, despite being tuned for vision tasks, it also improves performance on the challenging text reasoning benchmark, MMLU-Pro, by +2 points.
[ "Yuan-Hong Liao", "Sven Elflein", "Liu He", "Laura Leal-Taixé", "Yejin Choi", "Sanja Fidler", "David Acuna" ]
https://openreview.net/forum?id=SrKdi4MsUW
SrKdi4MsUW
SrKdi4MsUW
[ "~Yuan-Hong_Liao2", "~Sven_Elflein1", "~Liu_He2", "~Laura_Leal-Taixé1", "~Yejin_Choi1", "~Sanja_Fidler1", "~David_Acuna1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/b5cf2797bb593498db2fef7e5dfd5a60f33515b0.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "multimodal reasoning", "vision-language models", "chain-of-thought" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ liao2025longperceptualthoughts, title={LongPerceptualThoughts: Distilling System-2 Reasoning for System-1 Perception}, author={Yuan-Hong Liao and Sven Elflein and Liu He and Laura Leal-Taix{\'e} and Yejin Choi and Sanja Fidler and David Acuna}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=SrKdi4MsUW} }
liao|longperceptualthoughts_distilling_system2_reasoning_for_system1_perception
/attachment/ba91a96a582fba071ab27d740f8f0a19b0eb11d4.zip
null
null
null
null
Advancing Language Multi-Agent Learning with Credit Re-Assignment for Interactive Environment Generalization
Advancing Language Multi-Agent Learning with Credit Re-Assignment for Interactive Environment Generalization
LLM-based agents have made significant advancements in interactive environments, such as mobile operations and web browsing, and other domains beyond computer use. Current multi-agent systems universally excel in performance, compared to single agents, but struggle with generalization across environments due to predefined roles and inadequate strategies for generalizing language agents. The challenge of achieving both strong performance and good generalization has hindered the progress of multi-agent systems for interactive environments. To address these issues, we propose **CollabUIAgents**, a multi-agent reinforcement learning framework with a novel multi-agent credit re-assignment (CR) strategy, assigning process rewards with LLMs rather than environment-specific rewards and learning with synthesized preference data, in order to foster generalizable, collaborative behaviors among the role-free agents' policies. Empirical results show that our framework improves both the performance and cross-environment generalizability of multi-agent systems. Moreover, our 7B-parameter system achieves results on par with or exceeding strong closed-source models and the LLM that guides the CR. We also provide insights into using granular CR rewards effectively for environment generalization, and into accommodating trained LLMs in multi-agent systems. Our work is available at https://github.com/THUNLP-MT/CollabUIAgents.
[ "Zhitao He", "Zijun Liu", "Peng Li", "Yi R. Fung", "Ming Yan", "Ji Zhang", "Fei Huang", "Yang Liu" ]
https://openreview.net/forum?id=SoEmgM1ioC
SoEmgM1ioC
SoEmgM1ioC
[ "~Zhitao_He1", "~Zijun_Liu2", "~Peng_Li2", "~Yi_R._Fung1", "~Ming_Yan2", "~Ji_Zhang3", "~Fei_Huang2", "~Yang_Liu19" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/feda4ba527e7485ea91852275b75b730084dfd2b.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Large language model; Multi-agent learning; Reinforcement learning; UI agent" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ he2025advancing, title={Advancing Language Multi-Agent Learning with Credit Re-Assignment for Interactive Environment Generalization}, author={Zhitao He and Zijun Liu and Peng Li and Yi R. Fung and Ming Yan and Ji Zhang and Fei Huang and Yang Liu}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=SoEmgM1ioC} }
he|advancing_language_multiagent_learning_with_credit_reassignment_for_interactive_environment_generalization
null
null
null
null
null
Assessing Judging Bias in Large Reasoning Models: An Empirical Study
We demonstrate that Large Reasoning Models remain susceptible to judging biases despite their advanced capabilities.
Large Reasoning Models (LRMs) like DeepSeek-R1 and OpenAI-o1 have demonstrated remarkable reasoning capabilities, raising important questions about their biases in LLM-as-a-judge settings. We present a comprehensive benchmark comparing judging biases between LLMs and LRMs across both subjective preference-alignment datasets and objective fact-based datasets. Through investigation of bandwagon, authority, position, and distraction biases, we uncover four key findings: (1) despite their advanced reasoning capabilities, LRMs remain susceptible to the above biases; (2) LRMs demonstrate better robustness than LLMs specifically on fact-related datasets; (3) LRMs exhibit notable position bias, preferring options in later positions; and (4) we identify a novel "superficial reflection bias" where phrases mimicking reasoning (e.g., "wait, let me think...") significantly influence model judgments. To address these biases, we design and evaluate three mitigation strategies: specialized system prompts that reduce judging biases by up to 19\% in preference alignment datasets and 14\% in fact-related datasets, in-context learning that provides up to 27\% improvement on preference tasks but shows inconsistent results on factual tasks, and a self-reflection mechanism that reduces biases by up to 10\% in preference datasets and 16\% in fact-related datasets, with self-reflection proving particularly effective for LRMs. Our work provides crucial insights for developing more reliable LLM-as-a-Judge frameworks, especially as LRMs become increasingly deployed as automated judges. Our code is available at \url{https://github.com/Persdre/LRM-bias-evaluation}.
[ "Qian Wang", "Zhanzhi Lou", "Zhenheng Tang", "Nuo Chen", "Xuandong Zhao", "Wenxuan Zhang", "Dawn Song", "Bingsheng He" ]
https://openreview.net/forum?id=SlRtFwBdzP
SlRtFwBdzP
SlRtFwBdzP
[ "~Qian_Wang25", "~Zhanzhi_Lou1", "~Zhenheng_Tang2", "~Nuo_Chen4", "~Xuandong_Zhao1", "~Wenxuan_Zhang1", "~Dawn_Song1", "~Bingsheng_He1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/1406b304c4ddd1ce75ef08e961291b4af7b4c654.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Large Reasoning Models", "LLM Evaluation" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ wang2025assessing, title={Assessing Judging Bias in Large Reasoning Models: An Empirical Study}, author={Qian Wang and Zhanzhi Lou and Zhenheng Tang and Nuo Chen and Xuandong Zhao and Wenxuan Zhang and Dawn Song and Bingsheng He}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=SlRtFwBdzP} }
wang|assessing_judging_bias_in_large_reasoning_models_an_empirical_study
null
null
null
null
null
MegaMath: Pushing the Limits of Open Math Corpora
MegaMath is an open dataset of over 300B tokens from web documents, math-related code, and synthetic sources, designed to enhance language models' mathematical reasoning capabilities.
Mathematical reasoning represents a cornerstone of human intelligence, driving problem-solving and innovation, and thus serves as a key indicator of the advanced capabilities of large language models (LLMs). However, the research community still lacks an open, adequately scaled, high-quality mathematical corpus to match the data requirements of top-grade LLMs. We present MegaMath, an open dataset curated from diverse, mathematics-focused sources, designed to enhance LLMs' proficiency in mathematical reasoning. Specifically, MegaMath is curated via the following practices: (1) Revisiting web data: We re-extract all mathematical documents with math-oriented HTML optimizations, fasttext-based filtering and deduplication, all aimed at acquiring higher-quality data specifically for the mathematical domain on the Internet. (2) Recalling math-related code data: We identify high-quality math-related code from the large code training corpus Stack-V2, further enhancing data diversity. (3) Exploring synthetic data: We conduct various data synthesis practices, resulting in a massive dataset including both synthetic text, such as QA-style data, and code. By integrating these strategies and validating their practicality via extensive ablations, MegaMath delivers 371B tokens with the largest quantity and top quality among existing open math pre-training datasets.
[ "Fan Zhou", "Zengzhi Wang", "Nikhil Ranjan", "Zhoujun Cheng", "Liping Tang", "Guowei He", "Zhengzhong Liu", "Eric P. Xing" ]
https://openreview.net/forum?id=SHB0sLrZrh
SHB0sLrZrh
SHB0sLrZrh
[ "~Fan_Zhou6", "~Zengzhi_Wang1", "~Nikhil_Ranjan2", "~Zhoujun_Cheng1", "~Liping_Tang2", "~Guowei_He1", "~Zhengzhong_Liu1", "~Eric_Xing1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/64011155bba0744c968a3cdacfe0f28441a1f116.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Pre-training Data", "Mathematical Reasoning", "Synthetic Data" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ zhou2025megamath, title={MegaMath: Pushing the Limits of Open Math Corpora}, author={Fan Zhou and Zengzhi Wang and Nikhil Ranjan and Zhoujun Cheng and Liping Tang and Guowei He and Zhengzhong Liu and Eric P. Xing}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=SHB0sLrZrh} }
zhou|megamath_pushing_the_limits_of_open_math_corpora
null
null
null
null
null
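Illustrative sketch (not part of the MegaMath record above): the web-data stage in the abstract mentions fastText-based filtering. A minimal sketch of such a domain filter is shown below; it assumes a binary math-vs-other fastText classifier has already been trained and saved, and the model path, label name, and confidence threshold are placeholders rather than values from the paper.

```python
# Hypothetical sketch of a fastText domain filter over web documents.
import fasttext

model = fasttext.load_model("math_classifier.bin")  # assumed pre-trained binary classifier

def is_math_document(text: str, threshold: float = 0.8) -> bool:
    # fastText prediction expects a single line of text.
    labels, probs = model.predict(text.replace("\n", " "))
    return labels[0] == "__label__math" and probs[0] >= threshold

docs = [
    "Let x be a real number such that x^2 = 2. Then ...",
    "Top 10 travel destinations for 2024",
]
math_docs = [d for d in docs if is_math_document(d)]
```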
Bootstrapping Visual Assistant Modeling with Situated Interaction Simulation
We show that synthetic interaction data from simulated users and assistants can boost the development of visual assistant models that effectively guide real users to complete complex tasks.
Visual assistants that can guide humans through complex tasks in physical environments have significant potential, yet their development is hindered by the high cost of human-in-the-loop data collection. We present BASIS (Bootstrapping Assistant modeling with Situated Interaction Simulation), a novel framework that fundamentally rethinks how visual assistants are developed and evaluated. Rather than relying on expensive human data collection, BASIS leverages simulation to bootstrap capable assistants through three interconnected stages: (1) Situated Interaction Simulation generates high-quality synthetic data through interactions between oracle assistants and simulated users; (2) Autonomous Model Development trains and continuously evaluates assistant models using this synthetic data; and (3) Real-User Validation verifies effectiveness with human users. We implement BASIS in Alexa Arena and demonstrate that our best model—despite being fine-tuned solely on synthetic data and operating under realistic perception conditions—enables real human users to achieve a 72.9% success rate, approaching the 88.6% performance of an oracle assistant with access to privileged information of perfect perception. Through detailed error analysis, we identify object identification as the primary bottleneck for current visual assistants. Our approach successfully bridges the gap between simulation and reality, establishing a scalable pipeline for developing assistants that can effectively guide users through complex tasks. Project website: https://colm-basis.github.io/
[ "Yichi Zhang", "Run Peng", "Yinpei Dai", "Lingyun Wu", "Xuweiyi Chen", "Qiaozi Gao", "Joyce Chai" ]
https://openreview.net/forum?id=S4nTXotasR
S4nTXotasR
S4nTXotasR
[ "~Yichi_Zhang1", "~Run_Peng1", "~Yinpei_Dai1", "~Lingyun_Wu2", "~Xuweiyi_Chen1", "~Qiaozi_Gao1", "~Joyce_Chai2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/5aa33e8ad456f55bd2ae71c6e077b69b84011c82.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "visual assistant", "embodied", "simulation", "multimodal", "LLM agent", "situated dialogue" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ zhang2025bootstrapping, title={Bootstrapping Visual Assistant Modeling with Situated Interaction Simulation}, author={Yichi Zhang and Run Peng and Yinpei Dai and Lingyun Wu and Xuweiyi Chen and Qiaozi Gao and Joyce Chai}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=S4nTXotasR} }
zhang|bootstrapping_visual_assistant_modeling_with_situated_interaction_simulation
null
null
null
null
null
Weight ensembling improves reasoning in language models
Weight ensembling improves pass@k of reasoning models.
We investigate a pitfall during the training of reasoning models where the diversity of generations begins to collapse, leading to suboptimal test-time scaling. Notably, Pass@1 reliably improves during supervised finetuning (SFT), but Pass@k rapidly deteriorates. Surprisingly, a simple intervention of interpolating the weights of the latest SFT checkpoint with an early checkpoint, otherwise known as WiSE-FT, almost completely recovers Pass@k while also improving Pass@1. The WiSE-FT variant achieves better test-time scaling (Best@k, majority vote) and achieves superior results with less data when tuned further by reinforcement learning. Finally, we note that WiSE-FT provides complementary gains across performance metrics that are not achievable by diversity-inducing decoding strategies alone, like temperature scaling. We formalize a \emph{bias-variance tradeoff} of Pass@k with respect to the expectation and variance of Pass@1 over the test distribution. We find that WiSE-FT can reduce bias and variance simultaneously, while temperature scaling and possibly other decoding strategies face an inherent tradeoff between decreasing variance and increasing bias.
[ "Xingyu Dang", "Christina Baek", "Kaiyue Wen", "J Zico Kolter", "Aditi Raghunathan" ]
https://openreview.net/forum?id=S2IKxulLT1
S2IKxulLT1
S2IKxulLT1
[ "~Xingyu_Dang2", "~Christina_Baek2", "~Kaiyue_Wen1", "~J_Zico_Kolter1", "~Aditi_Raghunathan1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/9bc30f4f47d61a832f8594cf6a26ad0ec99117f0.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "test-time scaling", "RL", "reasoning", "diversity", "decoding" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ dang2025weight, title={Weight ensembling improves reasoning in language models}, author={Xingyu Dang and Christina Baek and Kaiyue Wen and J Zico Kolter and Aditi Raghunathan}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=S2IKxulLT1} }
dang|weight_ensembling_improves_reasoning_in_language_models
null
null
null
null
null
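Illustrative sketch (not part of the record above): the WiSE-FT intervention described in the abstract amounts to a linear interpolation of model weights between an early and a late SFT checkpoint. A minimal PyTorch sketch of that idea follows; the checkpoint paths and the mixing coefficient alpha are assumptions for illustration, not the paper's settings.

```python
# Minimal sketch of WiSE-FT-style weight interpolation between two SFT checkpoints.
import torch

def interpolate_state_dicts(early, late, alpha=0.5):
    """Return (1 - alpha) * early + alpha * late for floating-point parameters."""
    merged = {}
    for name, late_param in late.items():
        early_param = early[name]
        if torch.is_floating_point(late_param):
            merged[name] = (1.0 - alpha) * early_param + alpha * late_param
        else:
            merged[name] = late_param.clone()  # copy integer buffers unchanged
    return merged

early_sd = torch.load("checkpoints/sft_early.pt", map_location="cpu")  # placeholder path
late_sd = torch.load("checkpoints/sft_final.pt", map_location="cpu")   # placeholder path
merged_sd = interpolate_state_dicts(early_sd, late_sd, alpha=0.5)
torch.save(merged_sd, "checkpoints/wise_ft_alpha_0.5.pt")
```

Loading the merged state dict into the same architecture yields a single model; the abstract reports that such interpolated checkpoints largely recover Pass@k while also improving Pass@1.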
Search-R1: Training LLMs to Reason and Leverage Search Engines with Reinforcement Learning
A RL framework to train LLMs for interleaved reasoning and retrieval
Efficiently acquiring external knowledge and up-to-date information is essential for effective reasoning and text generation in large language models (LLMs). Prompting advanced LLMs with reasoning capabilities to use search engines during inference is often suboptimal, as the LLM might not fully possess the capability to interact optimally with the search engine. This paper introduces Search-R1, an extension of reinforcement learning (RL) for reasoning frameworks where the LLM learns to autonomously generate (multiple) search queries during step-by-step reasoning with real-time retrieval. Search-R1 optimizes LLM reasoning trajectories with multi-turn search interactions, leveraging retrieved token masking for stable RL training and a simple outcome-based reward function. Experiments on seven question-answering datasets show that Search-R1 improves performance by 41\% (Qwen2.5-7B) and 20\% (Qwen2.5-3B) over RAG baselines under the same setting. This paper further provides empirical insights into RL optimization methods, LLM choices, and response length dynamics in retrieval-augmented reasoning.
[ "Bowen Jin", "Hansi Zeng", "Zhenrui Yue", "Jinsung Yoon", "Sercan O Arik", "Dong Wang", "Hamed Zamani", "Jiawei Han" ]
https://openreview.net/forum?id=Rwhi91ideu
Rwhi91ideu
Rwhi91ideu
[ "~Bowen_Jin1", "~Hansi_Zeng1", "~Zhenrui_Yue1", "~Jinsung_Yoon1", "~Sercan_O_Arik1", "~Dong_Wang21", "~Hamed_Zamani1", "~Jiawei_Han1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/df3a75c7e70329ac2e4bbb73b5d7e8eef77584ee.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "reasoning", "retrieval", "reinforcement learning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ jin2025searchr, title={Search-R1: Training {LLM}s to Reason and Leverage Search Engines with Reinforcement Learning}, author={Bowen Jin and Hansi Zeng and Zhenrui Yue and Jinsung Yoon and Sercan O Arik and Dong Wang and Hamed Zamani and Jiawei Han}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=Rwhi91ideu} }
jin|searchr1_training_llms_to_reason_and_leverage_search_engines_with_reinforcement_learning
null
null
null
null
null
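Illustrative sketch (not part of the record above): the abstract mentions "retrieved token masking for stable RL training", i.e., tokens copied from the search results are excluded from the policy loss so that gradients flow only through model-generated tokens. The sketch below shows one plausible way to apply such a mask to a per-token loss; it is a hedged illustration, not the paper's implementation.

```python
# Hypothetical sketch: exclude retrieved (environment-inserted) tokens from a per-token loss.
import torch

def masked_token_loss(per_token_loss, retrieved_mask):
    """per_token_loss: (batch, seq_len) loss per target token.
    retrieved_mask: (batch, seq_len) bool, True where the token was inserted by retrieval
    and should therefore not contribute gradient to the policy."""
    keep = (~retrieved_mask).float()
    denom = keep.sum().clamp(min=1.0)
    return (per_token_loss * keep).sum() / denom

per_token_loss = torch.rand(2, 8, requires_grad=True)
retrieved_mask = torch.zeros(2, 8, dtype=torch.bool)
retrieved_mask[:, 3:6] = True  # pretend positions 3-5 hold retrieved passage tokens
loss = masked_token_loss(per_token_loss, retrieved_mask)
loss.backward()
```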
Backdoor Attacks on Dense Retrieval via Public and Unintentional Triggers
Backdoor Attacks on Dense Passage Retrievers
Dense retrieval systems have been widely used in various NLP applications. However, their vulnerabilities to potential attacks have been underexplored. This paper investigates a novel attack scenario where the attackers aim to mislead the retrieval system into retrieving the attacker-specified contents. Those contents, injected into the retrieval corpus by attackers, can include harmful text like hate speech or spam. Unlike prior methods that rely on model weights and generate conspicuous, unnatural outputs, we propose a covert backdoor attack triggered by grammar errors. Our approach ensures that the attacked models can function normally for standard queries while covertly triggering the retrieval of the attacker's contents in response to minor linguistic mistakes. Specifically, dense retrievers are trained with contrastive loss and hard negative sampling. Surprisingly, our findings demonstrate that contrastive loss is notably sensitive to grammatical errors, and hard negative sampling can exacerbate susceptibility to backdoor attacks. Our proposed method achieves a high attack success rate with a minimal corpus poisoning rate of only 0.048\%, while preserving normal retrieval performance. This indicates that the method has negligible impact on user experience for error-free queries. Furthermore, evaluations across three real-world defense strategies reveal that the malicious passages embedded within the corpus remain highly resistant to detection and filtering, underscoring the robustness and subtlety of the proposed attack.
[ "Quanyu Long", "Yue Deng", "Leilei Gan", "Wenya Wang", "Sinno Jialin Pan" ]
https://openreview.net/forum?id=RsnxggqW4l
RsnxggqW4l
RsnxggqW4l
[ "~Quanyu_Long1", "~Yue_Deng3", "~Leilei_Gan1", "~Wenya_Wang1", "~Sinno_Jialin_Pan1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/153ccb1928b125deed6cad76d769e9a58fc39235.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "backdoor attack", "dense retrieval" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ long2025backdoor, title={Backdoor Attacks on Dense Retrieval via Public and Unintentional Triggers}, author={Quanyu Long and Yue Deng and Leilei Gan and Wenya Wang and Sinno Jialin Pan}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=RsnxggqW4l} }
long|backdoor_attacks_on_dense_retrieval_via_public_and_unintentional_triggers
null
null
null
null
null
Learning by Teaching: Engaging Students as Instructors of Large Language Models in Computer Science Education
Students learn computer science better by teaching large language models, reversing the usual teacher-student roles.
While Large Language Models (LLMs) are often used as virtual tutors in computer science (CS) education, this approach can foster passive learning and over-reliance. This paper presents a novel pedagogical paradigm that inverts this model: students act as instructors who must teach an LLM to solve problems. To facilitate this, we developed strategies for designing questions with engineered knowledge gaps that only a student can bridge, and we introduce Socrates, a system for deploying this method with minimal overhead. We evaluated our approach in an undergraduate course and found that this active-learning method led to statistically significant improvements in student performance compared to historical cohorts. Our work demonstrates a practical, cost-effective framework for using LLMs to deepen student engagement and mastery.
[ "Xinming Yang", "Haasil Pujara", "Jun Li" ]
https://openreview.net/forum?id=RUAoV3j6tM
RUAoV3j6tM
RUAoV3j6tM
[ "~Xinming_Yang1", "~Haasil_Pujara1", "~Jun_Li66" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/6462e35b4b77b748fed66fc4d0cfa103094b2448.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Large Language Model", "Computer Science Education", "Human-AI Collaboration", "Role Reversal" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ yang2025learning, title={Learning by Teaching: Engaging Students as Instructors of Large Language Models in Computer Science Education}, author={Xinming Yang and Haasil Pujara and Jun Li}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=RUAoV3j6tM} }
yang|learning_by_teaching_engaging_students_as_instructors_of_large_language_models_in_computer_science_education
null
null
null
null
null
From Queries to Criteria: Understanding How Astronomers Evaluate LLMs
We analyze and recommend evaluation criteria grounded in real Human-LLM interactions for exploring scientific literature.
There is growing interest in leveraging LLMs to aid in astronomy and other scientific research, but benchmarks for LLM evaluation in general have not kept pace with the increasingly diverse ways that real people evaluate and use these models. In this study, we seek to improve evaluation procedures by building an understanding of how users evaluate LLMs. We focus on a particular use case: an LLM-powered retrieval-augmented generation bot for engaging with astronomical literature, which we deployed via Slack. Our inductive coding of 368 queries to the bot over four weeks and our follow-up interviews with 11 astronomers reveal how humans evaluated this system, including the types of questions asked and the criteria for judging responses. We synthesize our findings into concrete recommendations for building better benchmarks, which we then employ in constructing a sample benchmark for evaluating LLMs for astronomy. Overall, our work offers ways to improve LLM evaluation and ultimately usability, particularly for use in scientific research.
[ "Alina Hyk", "Kiera McCormick", "Mian Zhong", "Ioana Ciucă", "Sanjib Sharma", "John F Wu", "J. E. G. Peek", "Kartheik G. Iyer", "Ziang Xiao", "Anjalie Field" ]
https://openreview.net/forum?id=ROtDZDUgvw
ROtDZDUgvw
ROtDZDUgvw
[ "~Alina_Hyk1", "~Kiera_McCormick1", "~Mian_Zhong1", "~Ioana_Ciucă1", "~Sanjib_Sharma1", "~John_F_Wu1", "~J._E._G._Peek1", "~Kartheik_G._Iyer1", "~Ziang_Xiao1", "~Anjalie_Field2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/faa1d68c305fffb21d97c548e0a7abe00857ac43.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Human-centered computing", "AI for Science", "LM Evaluation", "LLM for Astronomy" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ hyk2025from, title={From Queries to Criteria: Understanding How Astronomers Evaluate {LLM}s}, author={Alina Hyk and Kiera McCormick and Mian Zhong and Ioana Ciuc{\u{a}} and Sanjib Sharma and John F Wu and J. E. G. Peek and Kartheik G. Iyer and Ziang Xiao and Anjalie Field}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=ROtDZDUgvw} }
hyk|from_queries_to_criteria_understanding_how_astronomers_evaluate_llms
null
null
null
null
null
OpinioRAG: Towards Generating User-Centric Opinion Highlights from Large-scale Online Reviews
We present OpinioRAG, a scalable framework using RAG-based retrieval and LLMs, with novel verification metrics and a large-scale dataset for user-centric long-form opinion summarization.
We study the problem of opinion highlights generation from large volumes of user reviews, often exceeding thousands per entity, where existing methods either fail to scale or produce generic, one-size-fits-all summaries that overlook personalized needs. To tackle this, we introduce OpinioRAG, a scalable, training-free framework that combines RAG-based evidence retrieval with LLMs to efficiently produce tailored summaries. Additionally, we propose novel reference-free verification metrics designed for sentiment-rich domains, where accurately capturing opinions and sentiment alignment is essential. These metrics offer a fine-grained, context-sensitive assessment of factual consistency. To facilitate evaluation, we contribute the first large-scale dataset of long-form user reviews, comprising entities with over a thousand reviews each, paired with unbiased expert summaries and manually annotated queries. Through extensive experiments, we identify key challenges, provide actionable insights into improving systems, pave the way for future research, and position OpinioRAG as a robust framework for generating accurate, relevant, and structured summaries at scale.
[ "Mir Tafseer Nayeem", "Davood Rafiei" ]
https://openreview.net/forum?id=R94bCTckhV
R94bCTckhV
R94bCTckhV
[ "~Mir_Tafseer_Nayeem1", "~Davood_Rafiei2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/0cbdbb7ad1e11aa6ca244670f65bfc1fdb47be83.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "User-Centric Summarization", "Long-form Opinions", "Retrieval-Augmented Generation (RAG)", "Reference-free Verification", "Dataset Construction" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
true
My concern is about the data licensing - waiting for clarification from the authors (see Questions for Authors).
@inproceedings{ nayeem2025opiniorag, title={Opinio{RAG}: Towards Generating User-Centric Opinion Highlights from Large-scale Online Reviews}, author={Mir Tafseer Nayeem and Davood Rafiei}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=R94bCTckhV} }
nayeem|opiniorag_towards_generating_usercentric_opinion_highlights_from_largescale_online_reviews
null
null
null
null
null
When To Solve, When To Verify: Compute-Optimal Problem Solving and Generative Verification for LLM Reasoning
We perform a test-time compute-matched comparison between scaling solutions via self-consistency and scaling verifications via GenRM to provide useful insights for practitioners.
Scaling test-time compute has emerged as a key strategy for enhancing the reasoning capabilities of large language models (LLMs), particularly in tasks like mathematical problem-solving. A traditional approach, Self-Consistency (SC), generates multiple solutions to a problem and selects the most common answer via majority voting. Another common method involves scoring each solution with a reward model (verifier) and choosing the best one. Recent advancements in Generative Reward Models (GenRM) reframe verification as a next-token prediction task, enabling inference-time scaling along a new axis. Specifically, GenRM generates multiple verification chains-of-thought to score each solution. Under a limited inference budget, this introduces a fundamental trade-off: should you spend the budget on scaling solutions via SC or generate fewer solutions and allocate compute to verification via GenRM? To address this, we evaluate GenRM against SC under a fixed inference budget. Interestingly, we find that SC is more compute-efficient than GenRM for most practical inference budgets across diverse models and datasets. For instance, GenRM first matches SC after consuming up to $8\times$ the inference compute and requires significantly more compute to outperform it. Furthermore, we derive inference scaling laws for the GenRM paradigm, revealing that compute-optimal inference favors scaling solution generation more aggressively than scaling the number of verifications. Our work provides practical guidance on optimizing test-time scaling by balancing solution generation and verification.
[ "Nishad Singhi", "Hritik Bansal", "Arian Hosseini", "Aditya Grover", "Kai-Wei Chang", "Marcus Rohrbach", "Anna Rohrbach" ]
https://openreview.net/forum?id=R7qRUFHGTx
R7qRUFHGTx
R7qRUFHGTx
[ "~Nishad_Singhi1", "~Hritik_Bansal2", "~Arian_Hosseini1", "~Aditya_Grover1", "~Kai-Wei_Chang1", "~Marcus_Rohrbach1", "~Anna_Rohrbach1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/5122ff943a54a4a24606efe50d369cfcb423a44b.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "test-time scaling", "self-consistency", "generative reward models", "compute-matched analysis" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ singhi2025when, title={When To Solve, When To Verify: Compute-Optimal Problem Solving and Generative Verification for {LLM} Reasoning}, author={Nishad Singhi and Hritik Bansal and Arian Hosseini and Aditya Grover and Kai-Wei Chang and Marcus Rohrbach and Anna Rohrbach}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=R7qRUFHGTx} }
singhi|when_to_solve_when_to_verify_computeoptimal_problem_solving_and_generative_verification_for_llm_reasoning
null
null
null
null
null
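Illustrative sketch (not part of the record above): the comparison in the abstract hinges on how a fixed inference budget is split between sampled solutions and generative verifications. As the Self-Consistency reference point, the whole budget goes to solutions and the modal answer wins; a minimal sketch with a hypothetical `sample_answer` stand-in for an LLM call is given below. Under the GenRM paradigm the same budget is instead split into S solutions times V verification chains per solution, which is the trade-off the paper sweeps.

```python
# Minimal sketch of Self-Consistency under a fixed sample budget.
# `sample_answer` is a hypothetical stand-in for one sampled LLM solution plus answer extraction.
from collections import Counter
import random

def sample_answer(question: str, seed: int) -> str:
    rng = random.Random(hash((question, seed)))
    return rng.choice(["42", "42", "41"])  # toy answer distribution

def self_consistency(question: str, budget: int) -> str:
    answers = [sample_answer(question, s) for s in range(budget)]
    return Counter(answers).most_common(1)[0][0]  # majority vote

print(self_consistency("toy question", budget=16))
```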
Knowledge Graph Retrieval-Augmented Generation via GNN-Guided Prompting
We propose GGR, a GNN-guided KG-RAG framework that enhances LLM retrieval in KG-RAG by incorporating GNN Guidance to preserve key reasoning paths and improve relation selection.
Large Language Models (LLMs) have demonstrated remarkable performance in open-domain question answering (QA), but their reliance on knowledge learned during pretraining limits their ability to provide accurate and up-to-date information. Knowledge Graph Retrieval-Augmented Generation (KG-RAG) enhances LLMs by incorporating structured knowledge from knowledge graphs (KGs). A common approach in KG-RAG is to retrieve relevant knowledge paths starting from entities in the input question and expanding along KG edges by LLM reasoning. However, existing KG-RAG methods suffer from the challenge that retrieval is performed step by step greedily using only local graph context, which can lead to retrieval errors that prematurely discard essential paths. To address the issue and perform more accurate retrieval, we propose GGR (GNN-Guided Retrieval for LLM Reasoning), a novel GNN-enhanced KG-RAG framework that integrates graph-based relevance scoring into the retrieval process. Our approach computes global importance scores across a contextualized subgraph, ensuring that key reasoning knowledge paths are preserved, even if their local relevance appears weak. Additionally, we introduce local semantic alignment by incorporating query-relation semantic similarity, refining the relation selection of LLM. Extensive experiments on Question-Answering tasks demonstrate that our method significantly improves retrieval accuracy and answer quality, demonstrating the effectiveness of combining graph-based reasoning and LLM-driven retrieval for structured knowledge integration.
[ "Haochen Liu", "Song Wang", "Jundong Li" ]
https://openreview.net/forum?id=R1NWMExESj
R1NWMExESj
R1NWMExESj
[ "~Haochen_Liu3", "~Song_Wang6", "~Jundong_Li2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/cbb38d13d9f0fcfebd7975de753f0b0daa8006b4.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Knowledge Graph", "Retrieval-Augmented Generation", "Question Answering", "Large Language Model", "Prompt" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ liu2025knowledge, title={Knowledge Graph Retrieval-Augmented Generation via {GNN}-Guided Prompting}, author={Haochen Liu and Song Wang and Jundong Li}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=R1NWMExESj} }
liu|knowledge_graph_retrievalaugmented_generation_via_gnnguided_prompting
null
null
null
null
null
Impact of LLM Alignment on Impression Formation in Social Interactions
Tests of LLMs against Affect Control Theory in gendered social interactions reveal that alignment influences impression formation unpredictably and that models largely ignore context in favor of the actor's identity.
Impression formation plays a crucial role in shaping social life, influencing behaviors, attitudes, and interactions across different contexts. Affect Control Theory (ACT) offers a well-established, empirically grounded model of how people form impressions and evaluate social interactions. We investigate whether Large Language Models (LLMs) exhibit patterns of impression formation that align with ACT's predictions. As a case study, we focus on gendered social interactions—how an LLM perceives gender in a prototypic social interaction. We compare several preference-tuned derivatives of LLaMA-3 model family (including LLaMA-Instruct, Tulu-3, and DeepSeek-R1-Distill) with GPT-4 as a baseline, examining the extent to which alignment or preference tuning influences the models' tendencies in forming gender impressions. We find that LLMs form impressions quite differently than ACT. Notably, LLMs are insensitive to situational context: the impression of an interaction is overwhelmingly driven by the identity of the actor, regardless of the actor’s actions or the recipient of those actions. This stands in contrast to ACT’s interaction-based reasoning, which accounts for the interplay of identities, behaviors, and recipients. We further find that preference tuning often amplifies or skews certain impressions in unpredicted ways. Our corpus offers a benchmark for assessing LLMs' social intelligence; we encourage further research using ACT-like frameworks to explore how tuning influences impression formation across diverse social dimensions.
[ "Ala N. Tak", "Anahita Bolourani", "Daniel B. Shank", "Jonathan Gratch" ]
https://openreview.net/forum?id=R135tO3SJJ
R135tO3SJJ
R135tO3SJJ
[ "~Ala_N._Tak1", "~Anahita_Bolourani1", "~Daniel_B._Shank1", "~Jonathan_Gratch1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/b02d921a48789e5acc49fd89b51b6f87b53fe69f.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Large Language Models", "Impression Formation", "Alignment", "Preference-tuning", "Affect Control Theory" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ tak2025impact, title={Impact of {LLM} Alignment on Impression Formation in Social Interactions}, author={Ala N. Tak and Anahita Bolourani and Daniel B. Shank and Jonathan Gratch}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=R135tO3SJJ} }
tak|impact_of_llm_alignment_on_impression_formation_in_social_interactions
/attachment/5b714302ae223c96227996ba6cd5ec1442f0154d.zip
null
null
null
null
Can LLMs Handle WebShell Detection? Overcoming Detection Challenges with Behavioral Function-Aware Framework
We are the first to analyze the potential and limitations of LLMs for WebShell detection, and we propose a framework to improve the performance of LLMs, allowing larger LLMs to outperform SOTA methods and smaller LLMs to be competitive.
WebShell attacks, where malicious scripts are injected into web servers, pose a significant cybersecurity threat. Traditional machine learning and deep learning methods are often hampered by challenges such as the need for extensive training data, catastrophic forgetting, and poor generalization. Recently, Large Language Models (LLMs) have emerged as a powerful alternative for code-related tasks, but their potential in WebShell detection remains underexplored. In this paper, we make two major contributions: (1) a comprehensive evaluation of seven LLMs, including GPT-4, LLaMA 3.1 70B, and Qwen 2.5 variants, benchmarked against traditional sequence- and graph-based methods using a dataset of 26.59K PHP scripts, and (2) the Behavioral Function-Aware Detection (BFAD) framework, designed to address the specific challenges of applying LLMs to this domain. Our framework integrates three components: a Critical Function Filter that isolates malicious PHP function calls, a Context-Aware Code Extraction strategy that captures the most behaviorally indicative code segments, and Weighted Behavioral Function Profiling (WBFP) that enhances in-context learning by prioritizing the most relevant demonstrations based on discriminative function-level profiles. Our results show that, stemming from their distinct analytical strategies, larger LLMs achieve near-perfect precision but lower recall, while smaller models exhibit the opposite trade-off. However, all baseline models lag behind previous State-Of-The-Art (SOTA) methods. With the application of BFAD, the performance of all LLMs improves significantly, yielding an average F1 score increase of 13.82%. Notably, larger models like GPT-4, LLaMA-3.1-70B, and Qwen-2.5-Coder-14B now outperform SOTA benchmarks, while smaller models such as Qwen-2.5-Coder-3B achieve performance competitive with traditional methods. This work is the first to explore the feasibility and limitations of LLMs for WebShell detection and provides solutions to address the challenges in this task.
[ "Feijiang Han", "Jiaming Zhang", "Chuyi Deng", "Jianheng Tang", "Yunhuai Liu" ]
https://openreview.net/forum?id=QzJRtz8HNx
QzJRtz8HNx
QzJRtz8HNx
[ "~Feijiang_Han1", "~Jiaming_Zhang17", "~Chuyi_Deng1", "~Jianheng_Tang2", "~Yunhuai_Liu1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/3e5232e85fad3301a2dbd0beb5ee8fe5bfa99ca3.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "WebShell detection", "Large Language Models", "Code Analysis", "Cybersecurity", "In-Context Learning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ han2025can, title={Can {LLM}s Handle WebShell Detection? Overcoming Detection Challenges with Behavioral Function-Aware Framework}, author={Feijiang Han and Jiaming Zhang and Chuyi Deng and Jianheng Tang and Yunhuai Liu}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=QzJRtz8HNx} }
han|can_llms_handle_webshell_detection_overcoming_detection_challenges_with_behavioral_functionaware_framework
null
null
null
null
null
Do Language Models Agree with Human Perceptions of Suspense in Stories?
We show that while language models can detect when a text is meant to be suspenseful, they fail to match human judgments on its intensity and dynamics and are vulnerable to adversarial manipulations.
Suspense is an affective response to narrative text that is believed to involve complex cognitive processes in humans. Several psychological models have been developed to describe this phenomenon and the circumstances under which text might trigger it. We replicate four seminal psychological studies of human perceptions of suspense, substituting human responses with those of different open-weight and closed-source LMs. We conclude that while LMs can distinguish whether a text is intended to induce suspense in people, LMs cannot accurately estimate the relative amount of suspense within a text sequence as compared to human judgments, nor can LMs properly capture the human perception of the rise and fall of suspense across multiple text segments. We probe the abilities of LMs to understand suspense by adversarially permuting the story text to identify what causes human and LM perceptions of suspense to diverge. We conclude that, while LMs can superficially identify and track certain facets of suspense, they do not process suspense in the same way as human readers.
[ "Glenn Matlin", "Devin Zhang", "Rodrigo Barroso Loza", "Diana M. Popescu", "Joni Isbell", "Chandreyi Chakraborty", "Mark Riedl" ]
https://openreview.net/forum?id=Qu0znWWckM
Qu0znWWckM
Qu0znWWckM
[ "~Glenn_Matlin1", "~Devin_Zhang1", "~Rodrigo_Barroso_Loza1", "~Diana_M._Popescu1", "~Joni_Isbell1", "~Chandreyi_Chakraborty1", "~Mark_Riedl1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/69a31724b402656f84fd2b0e7ff98df2ff4a9c2a.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Language Models (LMs)", "Cognitive Science", "Psycholinguistics", "Human Alignment", "Theory of Mind" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ matlin2025do, title={Do Language Models Agree with Human Perceptions of Suspense in Stories?}, author={Glenn Matlin and Devin Zhang and Rodrigo Barroso Loza and Diana M. Popescu and Joni Isbell and Chandreyi Chakraborty and Mark Riedl}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=Qu0znWWckM} }
matlin|do_language_models_agree_with_human_perceptions_of_suspense_in_stories
null
null
null
null
null
Humans overrely on overconfident language models, across languages
Multilingual LLMs are overconfident across languages, and that users overrely on confident responses
As large language models (LLMs) are deployed globally, it is crucial that their responses are calibrated across languages to accurately convey uncertainty and limitations. Prior work shows that LLMs are linguistically overconfident in English, leading users to overrely on confident generations. However, the usage and interpretation of epistemic markers (e.g., 'I think it's') differs sharply across languages. Here, we study the risks of multilingual linguistic (mis)calibration, overconfidence, and overreliance across five languages to evaluate LLM safety in a global context. Our work finds that overreliance risks are high across languages. We first analyze the distribution of LLM-generated epistemic markers and observe that LLMs are overconfident across languages, frequently generating strengtheners even as part of incorrect responses. Model generations are, however, sensitive to documented cross-linguistic variation in usage: for example, models generate the most markers of uncertainty in Japanese and the most markers of certainty in German and Mandarin. Next, we measure human reliance rates across languages, finding that reliance behaviors differ cross-linguistically: for example, participants are significantly more likely to discount expressions of uncertainty in Japanese than in English (i.e., ignore their 'hedging' function and rely on generations that contain them). Taken together, these results indicate a high risk of reliance on overconfident model generations across languages. Our findings highlight the challenges of multilingual linguistic calibration and stress the importance of culturally and linguistically contextualized model safety evaluations.
[ "Neil Rathi", "Dan Jurafsky", "Kaitlyn Zhou" ]
https://openreview.net/forum?id=QsQatTzATT
QsQatTzATT
QsQatTzATT
[ "~Neil_Rathi1", "~Dan_Jurafsky1", "~Kaitlyn_Zhou1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/98780e1485e4b6e82cc6c169631d5ca7d01f99b2.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "multilingual language models", "uncertainty" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ rathi2025humans, title={Humans overrely on overconfident language models, across languages}, author={Neil Rathi and Dan Jurafsky and Kaitlyn Zhou}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=QsQatTzATT} }
rathi|humans_overrely_on_overconfident_language_models_across_languages
null
null
null
null
null
Reinforcement Learning Enhanced Full-Duplex Spoken Dialogue Language Models for Conversational Interactions
Use Reinforcement Learning to optimize the Spoken Dialogue Models
Mainstream spoken dialogue language models (SDLMs) primarily handle turn-based interactions by alternating between processing user speech and generating responses. Recently emerging full-duplex SDLMs have showcased more natural and engaging conversational performance by simultaneously listening and speaking. However, the complex dynamics of human conversation introduce unique challenges to full-duplex SDLMs: Beyond generating reasonable responses, these models must exhibit diverse and prompt conversational behaviors in real-time interactions with the user. In this work, we present an efficient full-duplex SDLM optimized by Online Reinforcement with Interactive Speech Evaluation (ORISE). In ORISE, we design a customized reward function derived from automated annotations of online generated speech to guide the model toward well-formed and speech-text aligned responses. Experimental results show that ORISE effectively improves robustness and accuracy in handling conversational dynamics, including turn-taking, user barge-in, and backchanneling. Furthermore, ORISE enables the model to adapt to unseen noise conditions without relying on any labeled data, demonstrating the generalization of ORISE in real-world scenarios.
[ "Chen Chen", "Ke Hu", "Chao-Han Huck Yang", "Ankita Pasad", "Edresson Casanova", "Weiqing Wang", "Szu-Wei Fu", "Jason Li", "Zhehuai Chen", "Jagadeesh Balam", "Boris Ginsburg" ]
https://openreview.net/forum?id=QbLbXz8Idp
QbLbXz8Idp
QbLbXz8Idp
[ "~Chen_Chen56", "~Ke_Hu12", "~Chao-Han_Huck_Yang1", "~Ankita_Pasad1", "~Edresson_Casanova1", "~Weiqing_Wang4", "~Szu-Wei_Fu1", "~Jason_Li1", "~Zhehuai_Chen1", "~Jagadeesh_Balam1", "~Boris_Ginsburg1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/8d53560d6423d1919770e67b37d4b1561e30de3e.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Full-Duplex model", "Spoken Dialogue Models", "Speech-to-Speech model" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ chen2025reinforcement, title={Reinforcement Learning Enhanced Full-Duplex Spoken Dialogue Language Models for Conversational Interactions}, author={Chen Chen and Ke Hu and Chao-Han Huck Yang and Ankita Pasad and Edresson Casanova and Weiqing Wang and Szu-Wei Fu and Jason Li and Zhehuai Chen and Jagadeesh Balam and Boris Ginsburg}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=QbLbXz8Idp} }
chen|reinforcement_learning_enhanced_fullduplex_spoken_dialogue_language_models_for_conversational_interactions
null
null
null
null
null
Language Model Uncertainty Quantification with Attention Chain
Investigates large language model uncertainty estimation with critical reasoning token backtracking using attention chain.
Accurately quantifying a large language model's (LLM) predictive uncertainty is crucial for judging the reliability of its answers. While most existing research focuses on short, directly answerable questions with closed-form outputs (e.g., multiple-choice), involving intermediate reasoning steps in LLM responses is increasingly important. This added complexity complicates uncertainty quantification (UQ) because the probabilities assigned to answer tokens are conditioned on a vast space of preceding reasoning tokens. Direct marginalization is infeasible, and the dependency inflates probability estimates, causing overconfidence in UQ. To address this, we propose UQAC, an efficient method that narrows the reasoning space to a tractable size for marginalization. UQAC iteratively constructs an "attention chain" of tokens deemed semantically crucial to the final answer via a backtracking procedure. Starting from the answer tokens, it uses attention weights to identify the most influential predecessors, then iterates this process until reaching the input tokens. The resulting chain is further refined with similarity filtering and probability thresholding, which reduce the reasoning space, facilitating the approximation of the marginal answer token probabilities. We validate UQAC on multiple reasoning benchmarks with advanced open-source LLMs, demonstrating that it consistently delivers reliable UQ estimates with high computational efficiency.
[ "Yinghao Li", "Rushi Qiang", "Lama Moukheiber", "Chao Zhang" ]
https://openreview.net/forum?id=QTrW2HWNXe
QTrW2HWNXe
QTrW2HWNXe
[ "~Yinghao_Li3", "~Rushi_Qiang1", "~Lama_Moukheiber2", "~Chao_Zhang15" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/4a1ba392c01b705b8520ba423a11c47a8c524f81.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Uncertainty Estimation", "Large Language Model", "Attention" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ li2025language, title={Language Model Uncertainty Quantification with Attention Chain}, author={Yinghao Li and Rushi Qiang and Lama Moukheiber and Chao Zhang}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=QTrW2HWNXe} }
li|language_model_uncertainty_quantification_with_attention_chain
null
null
null
null
null
Can Large Language Models Integrate Spatial Data? Empirical Insights into Reasoning Strengths and Computational Weaknesses
We use large language models for spatial data integration. Our proposed heuristic-driven method and review-and-refine method demonstrate remarkable effectiveness in this application.
We explore the application of large language models (LLMs) to empower domain experts in integrating large, heterogeneous, and noisy urban spatial datasets. Traditional rule-based integration methods are unable to cover all edge cases, requiring manual verification and repair. Machine learning approaches require collecting and labeling of large numbers of task-specific samples. In this study, we investigate the potential of LLMs for spatial data integration. Our analysis first considers how LLMs reason about environmental spatial relationships mediated by human experience, such as between roads and sidewalks. We show that while LLMs exhibit spatial reasoning capabilities, they struggle to connect the macro-scale environment with the relevant computational geometry tasks, often producing logically incoherent responses. But when provided relevant features, thereby reducing dependence on spatial reasoning, LLMs are able to generate high-performing results. We then adapt a review-and-refine method, which proves remarkably effective in correcting erroneous initial responses while preserving accurate responses. We discuss practical implications of employing LLMs for spatial data integration in real-world contexts and outline future research directions, including post-training, multi-modal integration methods, and support for diverse data formats. Our findings position LLMs as a promising and flexible alternative to traditional rule-based heuristics, advancing the capabilities of adaptive spatial data integration.
[ "Bin HAN", "Robert Wolfe", "Anat Caspi", "Bill Howe" ]
https://openreview.net/forum?id=QNaHC8njYt
QNaHC8njYt
QNaHC8njYt
[ "~Bin_HAN1", "~Robert_Wolfe1", "~Anat_Caspi1", "~Bill_Howe1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/69c17c97256437504728268c94cdd5b6bba528ca.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "large language models", "language model application", "spatial data integration", "spatial reasoning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ han2025can, title={Can Large Language Models Integrate Spatial Data? Empirical Insights into Reasoning Strengths and Computational Weaknesses}, author={Bin HAN and Robert Wolfe and Anat Caspi and Bill Howe}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=QNaHC8njYt} }
han|can_large_language_models_integrate_spatial_data_empirical_insights_into_reasoning_strengths_and_computational_weaknesses
null
null
null
null
null
Cognitive Behaviors that Enable Self-Improving Reasoners, or, Four Habits of Highly Effective STaRs
We show that language models with built-in cognitive behaviors like verification and backtracking learn better through reinforcement learning than those without.
Test-time inference has emerged as a powerful paradigm for enabling language models to ``think'' longer and more carefully about complex challenges, much like skilled human experts. While reinforcement learning (RL) can drive self-improvement in language models on verifiable tasks, some models exhibit substantial gains while others quickly plateau. For instance, we find that Qwen-2.5-3B far exceeds Llama-3.2-3B under identical RL training for the game of Countdown. This discrepancy raises a critical question: what intrinsic properties enable effective self-improvement? We introduce a framework to investigate this question by analyzing four key cognitive behaviors --- verification, backtracking, subgoal setting, and backward chaining --- that both expert human problem solvers and successful language models employ. Our study reveals that Qwen naturally exhibits these reasoning behaviors, whereas Llama initially lacks them. In systematic experimentation with controlled behavioral datasets, we find that priming Llama with examples containing these reasoning behaviors enables substantial improvements during RL, matching or exceeding Qwen's performance. Importantly, the presence of reasoning behaviors, rather than correctness of answers, proves to be the critical factor --- models primed with incorrect solutions containing proper reasoning patterns achieve comparable performance to those trained on correct solutions. Finally, leveraging continued pretraining with OpenWebMath data, filtered to amplify reasoning behaviors, enables the Llama model to match Qwen's self-improvement trajectory. Our findings establish a fundamental relationship between initial reasoning behaviors and the capacity for improvement, explaining why some language models effectively utilize additional computation while others plateau.
[ "Kanishk Gandhi", "Ayush K Chakravarthy", "Anikait Singh", "Nathan Lile", "Noah Goodman" ]
https://openreview.net/forum?id=QGJ9ttXLTy
QGJ9ttXLTy
QGJ9ttXLTy
[ "~Kanishk_Gandhi1", "~Ayush_K_Chakravarthy1", "~Anikait_Singh1", "~Nathan_Lile1", "~Noah_Goodman1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/e32fbe0f53c45c0770ca8bed05f63276b7eff3e4.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Reasoning", "RL", "self-improvement", "backtracking", "test-time compute", "planning", "search" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ gandhi2025cognitive, title={Cognitive Behaviors that Enable Self-Improving Reasoners, or, Four Habits of Highly Effective {ST}aRs}, author={Kanishk Gandhi and Ayush K Chakravarthy and Anikait Singh and Nathan Lile and Noah Goodman}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=QGJ9ttXLTy} }
gandhi|cognitive_behaviors_that_enable_selfimproving_reasoners_or_four_habits_of_highly_effective_stars
null
null
null
null
null
Breaking the Data Barrier -- Building GUI Agents Through Task Generalization
We comprehensively study how mid-training on a series of tasks outside the GUI domain can enhance specific capabilities such as GUI perception, visual reasoning, and knowledge.
Graphical User Interface (GUI) agents offer cross-platform solutions for automating complex digital tasks, with significant potential to transform productivity workflows. However, their performance is often constrained by the scarcity of high-quality trajectory data. To address this limitation, we propose training Vision Language Models (VLMs) on data-rich, reasoning-intensive tasks during a dedicated mid-training stage, and then examine how incorporating these tasks in the mid-training phase facilitates generalization to GUI planning scenarios. Specifically, we explore a range of tasks with readily available instruction-tuning data, including GUI perception, multimodal reasoning, and textual reasoning. Through extensive experiments across 11 mid-training tasks, we demonstrate that: (1) Task generalization proves highly effective, yielding substantial improvements across most settings. For instance, multimodal mathematical reasoning enhances performance on AndroidWorld by an absolute 6.3\%. Remarkably, text-only mathematical data significantly boosts GUI web agent performance, achieving a 5.6\% improvement on WebArena and a 5.4\% improvement on AndroidWorld, underscoring notable cross-modal generalization from text-based to visual domains; (2) Contrary to prior assumptions, GUI perception data—previously considered closely aligned with GUI agent tasks and widely utilized for training—has a comparatively limited impact on final performance; (3) Building on these insights, we identify the most effective mid-training tasks and curate optimized mixture datasets, resulting in absolute performance gains of 8.0\% on WebArena and 12.2\% on AndroidWorld.
[ "Junlei Zhang", "Zichen Ding", "Chang Ma", "Zijie Chen", "Qiushi Sun", "Zhenzhong Lan", "Junxian He" ]
https://openreview.net/forum?id=QDtORaZt8K
QDtORaZt8K
QDtORaZt8K
[ "~Junlei_Zhang1", "~Zichen_Ding1", "~Chang_Ma2", "~Zijie_Chen3", "~Qiushi_Sun1", "~Zhenzhong_Lan2", "~Junxian_He1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/65903331eb76e9b57f2b8de9291044b97381d0d6.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "GUI agent", "middle training", "llm as agent" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ zhang2025breaking, title={Breaking the Data Barrier -- Building {GUI} Agents Through Task Generalization}, author={Junlei Zhang and Zichen Ding and Chang Ma and Zijie Chen and Qiushi Sun and Zhenzhong Lan and Junxian He}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=QDtORaZt8K} }
zhang|breaking_the_data_barrier_building_gui_agents_through_task_generalization
null
null
null
null
null
HyperINF: Unleashing the HyperPower of Schulz's Method for Data Influence Estimation
We propose HyperINF, an efficient and accurate influence function approximation which leverages the hyperpower method, specifically Schulz's iterative algorithm.
Influence functions provide a principled approach to assess individual training samples' contributions to specific targets. However, their high computational costs have limited applications in large-scale models and datasets. While existing approximation methods have reduced computational overhead, they often suffer from inaccurate estimation due to weak convergence guarantees. Hyperpower methods offer rigorous convergence guarantees for matrix inverse approximation, but their matrix multiplication operations typically involve intractable memory and computation costs for large-scale models. We propose HyperINF, an efficient and accurate influence function approximation leveraging the hyperpower method, specifically Schulz's iterative algorithm. To address computation-intensive matrix multiplication, we incorporate the generalized Fisher information matrix (GFIM) as a low-rank approximation of the Hessian matrix, reducing memory and computation overhead to constant costs. Through comprehensive convergence simulations on matrix inversion, we demonstrate HyperINF's superior accuracy and stability compared to baselines. We further validate its efficacy through extensive real-world data attribution tasks, including mislabeled data detection and data selection for LLM and VLM fine-tuning. On LoRA-tuned models, HyperINF achieves superior downstream performance with minimal memory and computational overhead, while other approaches suffer significant degradation. Our code is available at https://github.com/Blackzxy/HyperINF.
[ "Xinyu Zhou", "Simin Fan", "Martin Jaggi" ]
https://openreview.net/forum?id=QByEdZMJdx
QByEdZMJdx
QByEdZMJdx
[ "~Xinyu_Zhou8", "~Simin_Fan1", "~Martin_Jaggi1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/2cb27e8f99cce1830e4a75d9aee480e3e43756b4.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "language model; data; influence score" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ zhou2025hyperinf, title={Hyper{INF}: Unleashing the HyperPower of Schulz's Method for Data Influence Estimation}, author={Xinyu Zhou and Simin Fan and Martin Jaggi}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=QByEdZMJdx} }
zhou|hyperinf_unleashing_the_hyperpower_of_schulzs_method_for_data_influence_estimation
null
null
null
null
null
Cascade Reward Sampling for Efficient Decoding-Time Alignment
We significantly improve decoding efficiency for decoding-time alignment methods while achieving better alignment quality.
Aligning large language models (LLMs) with human preferences is essential for their applications. Recently, decoding-time alignment has emerged as an effective plug-and-play technique that avoids fine-tuning model parameters. This approach retains the general utility of pretrained LLMs but often suffers from significant inefficiencies during decoding, primarily due to wasted token generation and excessive reward evaluations. To address these challenges, we introduce Cascade Reward Sampling (CARDS) to resolve both efficiency bottlenecks in decoding-time alignment. Specifically, we develop a segment-level rejection sampling algorithm that minimizes redundant computations of both LLMs and reward models (RMs). Central to CARDS is an uncertainty-based segmentation mechanism, which ensures the accuracy of RM evaluations on incomplete segments. Furthermore, we provide a detailed analysis of reward scores on segments to elucidate the improved alignment performance. Experimental results demonstrate that CARDS significantly improves decoding efficiency, alignment quality, and general utility compared to existing decoding-time alignment methods, achieving approximately a 70\% reduction in decoding time and over 90\% win-ties in utility and safety benchmarks.
[ "Bolian Li", "Yifan Wang", "Anamika Lochab", "Ananth Grama", "Ruqi Zhang" ]
https://openreview.net/forum?id=QBmxLlmRYG
QBmxLlmRYG
QBmxLlmRYG
[ "~Bolian_Li1", "~Yifan_Wang14", "~Anamika_Lochab1", "~Ananth_Grama1", "~Ruqi_Zhang1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/c08d4964c6e7706dfeb2f687a2ed9633ed0a3bed.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Large Language Models", "LLM Alignment" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ li2025cascade, title={Cascade Reward Sampling for Efficient Decoding-Time Alignment}, author={Bolian Li and Yifan Wang and Anamika Lochab and Ananth Grama and Ruqi Zhang}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=QBmxLlmRYG} }
li|cascade_reward_sampling_for_efficient_decodingtime_alignment
/attachment/5dc60966cda3252835f4c0db2fb156fc31772b56.zip
null
null
null
null
HIPPO-VIDEO : Simulating Watch Histories with Large Language Models for History-Driven Video Highlighting
We introduce a large-scale dataset for personalized video highlighting by simulating user watch history and generating segment-wise saliency scores, enabling more user-centric video summarization.
The exponential growth of video content has made personalized video highlighting an essential task, as user preferences are highly variable and complex. Existing video datasets, however, often lack personalization, relying on isolated videos or simple text queries that fail to capture the intricacies of user behavior. In this work, we introduce HIPPO-VIDEO, a novel dataset for personalized video highlighting, created using an LLM-based user simulator to generate realistic watch histories reflecting diverse user preferences. The dataset includes 2,040 (watch history, saliency score) pairs, covering 20,400 videos across 170 semantic categories. To validate our dataset, we propose HiPHer, a method that leverages these personalized watch histories to predict preference-conditioned segment-wise saliency scores. Through extensive experiments, we demonstrate that our method outperforms existing generic and query-based approaches, showcasing its potential for highly user-centric video highlighting in real-world scenarios. The code is publicly available at https://anonymous.4open.science/r/HIPPO-4EEE/README.md.
[ "Jeongeun Lee", "Youngjae Yu", "Dongha Lee" ]
https://openreview.net/forum?id=Q6TCkggzQ2
Q6TCkggzQ2
Q6TCkggzQ2
[ "~Jeongeun_Lee3", "~Youngjae_Yu1", "~Dongha_Lee1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/fdd7fb096ae738e491cc7b7405981d0020c22a4a.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "video understanding", "personalization", "highlight detection" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ lee2025hippovideo, title={{HIPPO}-{VIDEO} : Simulating Watch Histories with Large Language Models for History-Driven Video Highlighting}, author={Jeongeun Lee and Youngjae Yu and Dongha Lee}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=Q6TCkggzQ2} }
lee|hippovideo_simulating_watch_histories_with_large_language_models_for_historydriven_video_highlighting
null
null
null
null
null
CodeARC: Benchmarking Reasoning Capabilities of LLM Agents for Inductive Program Synthesis
We introduce CodeARC, a benchmark for inductive program synthesis where LLM agents iteratively refine code via oracle feedback, enabling more realistic and challenging evaluation for inductive reasoning.
Inductive program synthesis, or programming by example, requires synthesizing functions from input-output examples that generalize to unseen inputs. While large language model agents have shown promise in programming tasks guided by natural language, their ability to perform inductive program synthesis is underexplored. Existing evaluation protocols rely on static sets of examples and held-out tests, offering no feedback when synthesized functions are incorrect and failing to reflect real-world scenarios such as reverse engineering. We propose CodeARC, the *Code Abstraction and Reasoning Challenge*, a new evaluation framework where agents interact with a hidden target function by querying it with new inputs, synthesizing candidate functions, and iteratively refining their solutions using a differential testing oracle. This interactive setting encourages agents to perform function calls and self-correction based on feedback. We construct the first large-scale benchmark for general-purpose inductive program synthesis, featuring 1114 functions. Among 18 models evaluated, o3-mini performs best with a success rate of 52.7%, highlighting the difficulty of this task. Fine-tuning LLaMA-3.1-8B-Instruct on curated synthesis traces yields up to a 31% relative performance gain. CodeARC provides a more realistic and challenging testbed for evaluating LLM-based program synthesis and inductive reasoning. Our code, data, and models are publicly available at https://github.com/Anjiang-Wei/CodeARC
[ "Anjiang Wei", "Tarun Suresh", "Jiannan Cao", "Naveen Kannan", "Yuheng Wu", "Kai Yan", "Thiago S. F. X. Teixeira", "Ke Wang", "Alex Aiken" ]
https://openreview.net/forum?id=Q5pVZCrrKr
Q5pVZCrrKr
Q5pVZCrrKr
[ "~Anjiang_Wei1", "~Tarun_Suresh1", "~Jiannan_Cao1", "~Naveen_Kannan1", "~Yuheng_Wu2", "~Kai_Yan1", "~Thiago_S._F._X._Teixeira1", "~Ke_Wang1", "~Alex_Aiken1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/8f979cde6db2a776c44f5873ad229b587608061a.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Agent", "Large Language Model", "Reasoning", "Code", "Program Synthesis" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ wei2025codearc, title={Code{ARC}: Benchmarking Reasoning Capabilities of {LLM} Agents for Inductive Program Synthesis}, author={Anjiang Wei and Tarun Suresh and Jiannan Cao and Naveen Kannan and Yuheng Wu and Kai Yan and Thiago S. F. X. Teixeira and Ke Wang and Alex Aiken}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=Q5pVZCrrKr} }
wei|codearc_benchmarking_reasoning_capabilities_of_llm_agents_for_inductive_program_synthesis
null
null
null
null
null
RRO: LLM Agent Optimization Through Rising Reward Trajectories
A new "Reward Rising Optimization" method trains AI agents more efficiently by only collecting data when rewards increase between steps.
Large language models (LLMs) have exhibited extraordinary performance in a variety of tasks, while it remains challenging for them to solve complex multi-step tasks as agents. In practice, agents are sensitive to the outcome of certain key steps, which makes them likely to fail the task because of a subtle mistake in the planning trajectory. Recent approaches resort to calibrating the reasoning process through reinforcement learning. They reward or penalize every reasoning step with process supervision, known as Process Reward Models (PRMs). However, PRMs are difficult and costly to scale up with a large number of next action candidates since they require extensive computations to acquire the training data through per-step trajectory exploration. To mitigate this issue, we focus on the relative reward trend across successive reasoning steps and propose maintaining an increasing reward in the collected trajectories for process supervision, which we term Reward Rising Optimization (RRO). Specifically, we incrementally augment the process supervision until we identify a step exhibiting positive reward differentials, i.e., rising rewards, relative to its preceding iteration. This method dynamically expands the search space for the next action candidates, efficiently capturing high-quality data. We provide mathematical groundings and empirical results on the WebShop and InterCode-SQL benchmarks, showing that our proposed RRO method achieves superior performance while requiring much less exploration cost.
[ "Zilong Wang", "Jingfeng Yang", "Sreyashi Nag", "Samarth Varshney", "Xianfeng Tang", "Haoming Jiang", "Jingbo Shang", "Sheikh Muhammad Sarwar" ]
https://openreview.net/forum?id=PhaE8TSM5j
PhaE8TSM5j
PhaE8TSM5j
[ "~Zilong_Wang1", "~Jingfeng_Yang2", "~Sreyashi_Nag1", "~Samarth_Varshney1", "~Xianfeng_Tang1", "~Haoming_Jiang1", "~Jingbo_Shang2", "~Sheikh_Muhammad_Sarwar1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/fba685046b9a84f29e12efe295ee88aac2124f32.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "large language model", "agent", "reinforcement learning", "process reward model" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ wang2025rro, title={{RRO}: {LLM} Agent Optimization Through Rising Reward Trajectories}, author={Zilong Wang and Jingfeng Yang and Sreyashi Nag and Samarth Varshney and Xianfeng Tang and Haoming Jiang and Jingbo Shang and Sheikh Muhammad Sarwar}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=PhaE8TSM5j} }
wang|rro_llm_agent_optimization_through_rising_reward_trajectories
null
null
null
null
null
Rank1: Test-Time Compute for Reranking in Information Retrieval
We train the first reranker using test-time compute in information retrieval
We introduce Rank1, the first reranking model trained to take advantage of test-time compute. Rank1 demonstrates the applicability within retrieval of using a reasoning language model (e.g., OpenAI's o1, Deepseek's R1) for distillation in order to rapidly improve the performance of a smaller model. We gather and open-source a dataset of more than 600,000 examples of R1 reasoning traces from queries and passages in MS MARCO. Models trained on this dataset: (1) show state-of-the-art performance on advanced reasoning and instruction-following datasets; (2) work remarkably well out of distribution due to the ability to respond to user-input prompts; and (3) have explainable reasoning chains that can be given to users or RAG-based systems. Further, we demonstrate that quantized versions of these models retain strong performance while using less compute/memory. Overall, Rank1 shows that test-time compute allows for a fundamentally new type of explainable and performant reranker model for search.
[ "Orion Weller", "Kathryn Ricci", "Eugene Yang", "Andrew Yates", "Dawn Lawrie", "Benjamin Van Durme" ]
https://openreview.net/forum?id=Pg0PAvbhGv
Pg0PAvbhGv
Pg0PAvbhGv
[ "~Orion_Weller1", "~Kathryn_Ricci1", "~Eugene_Yang2", "~Andrew_Yates2", "~Dawn_Lawrie1", "~Benjamin_Van_Durme2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/6ad5cc3faf8da5bd58f1ac774688c871598827e1.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "retrieval", "reranking", "test-time compute" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ weller2025rank, title={Rank1: Test-Time Compute for Reranking in Information Retrieval}, author={Orion Weller and Kathryn Ricci and Eugene Yang and Andrew Yates and Dawn Lawrie and Benjamin Van Durme}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=Pg0PAvbhGv} }
weller|rank1_testtime_compute_for_reranking_in_information_retrieval
null
null
null
null
null
Pretraining on the Test Set Is No Longer All You Need: A Debate-Driven Approach to QA Benchmarks
We introduce a debate-driven evaluation paradigm that transforms existing QA benchmarks into adversarial debates between models, providing a more robust assessment of reasoning abilities while penalizing shallow memorization.
As frontier language models increasingly saturate standard QA benchmarks, concerns about data contamination, memorization, and escalating dataset creation costs persist. We propose a debate-driven evaluation paradigm that transforms any existing QA dataset into structured adversarial debates—where one model is given the official answer to defend, and another constructs and defends an alternative answer—adjudicated by a judge model blind to the correct solution. By forcing multi-round argumentation, this approach substantially increases difficulty while penalizing shallow memorization, yet reuses QA items to reduce curation overhead. We make two main contributions: (1) an evaluation pipeline to systematically convert QA tasks into debate-based assessments, and (2) a public benchmark that demonstrates our paradigm's effectiveness on a subset of MMLU-Pro questions, complete with standardized protocols and reference models. Empirical results validate the robustness of the method and its effectiveness against data contamination—a Llama 3.1 model fine-tuned on test questions showed dramatic accuracy improvements (50% → 82%) but performed worse in debates. Results also show that even weaker judges can reliably differentiate stronger debaters, highlighting how debate-based evaluation can scale to future, more capable systems while maintaining a fraction of the cost of creating new benchmarks. Overall, our framework underscores that "pretraining on the test set is no longer all you need," offering a sustainable path for measuring the genuine reasoning ability of advanced language models.
[ "Linbo Cao", "Jinman Zhao" ]
https://openreview.net/forum?id=Pdyh3USc2A
Pdyh3USc2A
Pdyh3USc2A
[ "~Linbo_Cao1", "~Jinman_Zhao2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/09936fe14703f9228a42fab935d3064134b7c293.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "debate-driven evaluation", "QA benchmarks", "multi-agent debate", "language model evaluation", "benchmark contamination", "model memorization", "adversarial evaluation", "dynamic assessment" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ cao2025pretraining, title={Pretraining on the Test Set Is No Longer All You Need: A Debate-Driven Approach to {QA} Benchmarks}, author={Linbo Cao and Jinman Zhao}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=Pdyh3USc2A} }
cao|pretraining_on_the_test_set_is_no_longer_all_you_need_a_debatedriven_approach_to_qa_benchmarks
null
null
null
null
null
Refusal Tokens: A Simple Way to Calibrate Refusals in Large Language Models
Introduce refusal tokens to enable control over a single model’s refusal rates and discuss desirable data properties for optimizing this approach.
A key component of building safe and reliable language models is enabling the models to appropriately refuse to follow certain instructions or answer certain questions. We may want models to output refusal messages for various categories of user queries, for example, ill-posed questions, instructions for committing illegal acts, or queries which require information past the model's knowledge horizon. Engineering models that refuse to answer such questions is complicated by the fact that an individual may want their model to exhibit varying levels of sensitivity for refusing queries of various categories, and different users may want different refusal rates. The current default approach involves training multiple models with varying proportions of refusal messages from each category to achieve the desired refusal rates, which is computationally expensive and may require training a new model to accommodate each user's desired preference over refusal rates. To address these challenges, we propose refusal tokens, one such token for each refusal category or a single refusal token, which are prepended to the model's responses during training. We then show how to increase or decrease the probability of generating the refusal token for each category during inference to steer the model's refusal behavior. Refusal tokens enable controlling a single model's refusal rates without any further fine-tuning, simply by selectively intervening during generation.
[ "Neel Jain", "Aditya Shrivastava", "Chenyang Zhu", "Daben Liu", "Alfy Samuel", "Ashwinee Panda", "Anoop Kumar", "Micah Goldblum", "Tom Goldstein" ]
https://openreview.net/forum?id=Pbs4i3FgbD
Pbs4i3FgbD
Pbs4i3FgbD
[ "~Neel_Jain1", "~Aditya_Shrivastava1", "~Chenyang_Zhu3", "~Daben_Liu1", "~Alfy_Samuel1", "~Ashwinee_Panda1", "~Anoop_Kumar1", "~Micah_Goldblum1", "~Tom_Goldstein1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/f1f06df0b4d2acfebdf8baf0ef258f783339b79d.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "LLM", "Refusals" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ jain2025refusal, title={Refusal Tokens: A Simple Way to Calibrate Refusals in Large Language Models}, author={Neel Jain and Aditya Shrivastava and Chenyang Zhu and Daben Liu and Alfy Samuel and Ashwinee Panda and Anoop Kumar and Micah Goldblum and Tom Goldstein}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=Pbs4i3FgbD} }
jain|refusal_tokens_a_simple_way_to_calibrate_refusals_in_large_language_models
/attachment/7752bce29aea8f6a0690ad433de3e3048047aa96.zip
null
null
null
null
VisOnlyQA: Large Vision Language Models Still Struggle with Visual Perception of Geometric Information
We introduce VisOnlyQA, a dataset to evaluate the capability of Large Vision Language Models to perceive geometric information, such as lengths, angles, and shapes, and reveal that they still cannot accurately perceive basic geometric information.
Large Vision Language Models (LVLMs) have achieved remarkable performance in various vision-language tasks. However, it is still unclear how accurately LVLMs can perceive visual information in images. In particular, the capability of LVLMs to perceive geometric information, such as shape, angle, and size, remains insufficiently analyzed, although the perception of these properties is crucial for tasks that require a detailed visual understanding. In this work, we introduce VisOnlyQA, a dataset for evaluating the geometric perception of LVLMs, and reveal that LVLMs often cannot accurately perceive basic geometric information in images, while human performance is nearly perfect. VisOnlyQA consists of 12 tasks that directly ask about geometric information in geometric shapes, charts, chemical structures, and 3D shapes. Our experiments highlight the following findings: (i) State-of-the-art LVLMs struggle with basic geometric perception. The 23 LVLMs we evaluate, including GPT-4o and Gemini 2.5 Pro, perform poorly on VisOnlyQA. (ii) Additional training data does not resolve this issue. Fine-tuning on the training set of VisOnlyQA is not always effective, even for in-distribution tasks. (iii) The LLM may be the bottleneck. LVLMs using stronger LLMs exhibit better geometric perception on VisOnlyQA even though the task does not require complex reasoning, suggesting that the way LVLMs process information from visual encoders is a bottleneck. The datasets, code, and model responses are provided at https://github.com/psunlpgroup/VisOnlyQA.
[ "Ryo Kamoi", "Yusen Zhang", "Sarkar Snigdha Sarathi Das", "Ranran Haoran Zhang", "Rui Zhang" ]
https://openreview.net/forum?id=PYHwlyu2fa
PYHwlyu2fa
PYHwlyu2fa
[ "~Ryo_Kamoi1", "~Yusen_Zhang1", "~Sarkar_Snigdha_Sarathi_Das1", "~Ranran_Haoran_Zhang2", "~Rui_Zhang7" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/a9e26bf74b28475a0a1452686d6142b96af68b9a.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "vision-language models" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ kamoi2025visonlyqa, title={VisOnly{QA}: Large Vision Language Models Still Struggle with Visual Perception of Geometric Information}, author={Ryo Kamoi and Yusen Zhang and Sarkar Snigdha Sarathi Das and Ranran Haoran Zhang and Rui Zhang}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=PYHwlyu2fa} }
kamoi|visonlyqa_large_vision_language_models_still_struggle_with_visual_perception_of_geometric_information
/attachment/47d05e93f5817d673511927ceb3287d1c28ae8dc.zip
null
null
null
null
Multi-Agent Retrieval-Augmented Framework for Evidence-Based Counterspeech Against Health Misinformation
A multi-agent retrieval-augmented framework leveraging multiple LLMs to enhance evidence-based counterspeech generation against health misinformation with greater accuracy and refinement.
Large language models (LLMs) combined with Retrieval-Augmented Generation (RAG) have demonstrated powerful capabilities in generating counterspeech against misinformation. However, current studies rely on limited evidence and offer limited control over final outputs. To address these challenges, we propose a Multi-agent Retrieval-Augmented Framework to generate counterspeech against health misinformation, incorporating multiple LLMs to optimize knowledge retrieval, evidence enhancement, and response refinement. Our approach integrates both static and dynamic evidence, ensuring that the generated counterspeech is relevant, well-grounded, and up-to-date. Our method outperforms baseline approaches in politeness, relevance, informativeness, and factual accuracy, demonstrating its effectiveness in generating high-quality counterspeech. To further validate our approach, we conduct ablation studies to verify the necessity of each component in our framework. Furthermore, cross evaluations show that our system generalizes well across diverse health misinformation topics and datasets. Human evaluations further reveal that refinement significantly enhances counterspeech quality and that refined responses are preferred by human evaluators.
[ "Anirban Saha Anik", "Xiaoying Song", "Elliott Wang", "Bryan Wang", "Bengisu Yarimbas", "Lingzi Hong" ]
https://openreview.net/forum?id=P61AgRyU7E
P61AgRyU7E
P61AgRyU7E
[ "~Anirban_Saha_Anik1", "~Xiaoying_Song1", "~Elliott_Wang1", "~Bryan_Wang3", "~Bengisu_Yarimbas1", "~Lingzi_Hong1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/228fd2caf6b4d52b0f4718294d46b4f44d03fb2b.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Large Language Model", "Multi-agent", "Retrieval-Augmented Generation", "Health Misinformation", "Counterspeech" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ anik2025multiagent, title={Multi-Agent Retrieval-Augmented Framework for Evidence-Based Counterspeech Against Health Misinformation}, author={Anirban Saha Anik and Xiaoying Song and Elliott Wang and Bryan Wang and Bengisu Yarimbas and Lingzi Hong}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=P61AgRyU7E} }
anik|multiagent_retrievalaugmented_framework_for_evidencebased_counterspeech_against_health_misinformation
null
null
null
null
null
Epistemic Alignment: A Mediating Framework for User-LLM Knowledge Delivery
A framework of ten epistemic challenges revealing the gap between how users want knowledge presented and what LLMs currently deliver
Large Language Models (LLMs) increasingly serve as tools for knowledge acquisition, yet users cannot effectively specify how they want information presented. When users request that LLMs "cite reputable sources," "express appropriate uncertainty," or "include multiple perspectives," they discover that current interfaces provide no structured way to articulate these preferences. The result is prompt sharing folklore: community-specific copied prompts passed through trust relationships rather than based on measured efficacy. We propose the Epistemic Alignment Framework, a set of ten challenges in knowledge transmission derived from the philosophical literature of epistemology, concerning issues such as uncertainty expression, evidence quality assessment, and calibration of testimonial reliance. The framework serves as a structured intermediary between user needs and system capabilities, creating a common vocabulary to bridge the gap between what users want and what systems deliver. Through a thematic analysis of custom prompts and personalization strategies shared on online communities where these issues are actively discussed, we find users develop elaborate workarounds to address each of the challenges. We then apply our framework to two prominent model providers, OpenAI and Anthropic, through structured content analysis of their documented policies and product features. Our analysis shows that while these providers have partially addressed the challenges we identified, they fail to establish adequate mechanisms for specifying epistemic preferences, lack transparency about how preferences are implemented, and offer no verification tools to confirm whether preferences were followed. For AI developers, the Epistemic Alignment Framework offers concrete guidance for supporting diverse approaches to knowledge; for users, it works toward information delivery that aligns with their specific needs rather than defaulting to one-size-fits-all approaches.
[ "Nicholas Clark", "Hua Shen", "Bill Howe", "Tanu Mitra" ]
https://openreview.net/forum?id=Orvjm9UqH2
Orvjm9UqH2
Orvjm9UqH2
[ "~Nicholas_Clark2", "~Hua_Shen1", "~Bill_Howe1", "~Tanu_Mitra1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/e40d3c738a2f84d2c439b00945bc1db28eecb5ff.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "epistemology of AI", "language model behavior", "human-AI interaction" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ clark2025epistemic, title={Epistemic Alignment: A Mediating Framework for User-{LLM} Knowledge Delivery}, author={Nicholas Clark and Hua Shen and Bill Howe and Tanu Mitra}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=Orvjm9UqH2} }
clark|epistemic_alignment_a_mediating_framework_for_userllm_knowledge_delivery
null
true
null
null
null
Enhancing LLM Reasoning with Iterative DPO: A Comprehensive Empirical Investigation
DPO enables iterative self-improvement for LLMs, achieving RL-level reasoning performance with lower computational cost through preference-based learning and verifiable rewards.
Recent advancements in post-training methodologies for large language models (LLMs) have highlighted reinforcement learning (RL) as a critical component for enhancing reasoning. However, the substantial computational costs associated with RL-based approaches have led to growing interest in alternative paradigms, such as Direct Preference Optimization (DPO). In this study, we investigate the effectiveness of DPO in facilitating self-improvement for LLMs through iterative preference-based learning. We demonstrate that a single round of DPO with coarse filtering significantly enhances mathematical reasoning performance, particularly for strong base models. Furthermore, we design an iterative enhancement framework for both the generator and the reward model (RM), enabling their mutual improvement through online interaction across multiple rounds of DPO. Finally, with simple verifiable rewards, our model DPO-VP achieves RL-level performance with significantly lower computational overhead. These findings highlight DPO as a scalable and cost-effective alternative to RL, offering a practical solution for enhancing LLM reasoning in resource-constrained situations.
[ "Songjun Tu", "Jiahao Lin", "Xiangyu Tian", "Qichao Zhang", "Linjing Li", "Yuqian Fu", "Nan Xu", "Wei He", "Xiangyuan Lan", "Dongmei Jiang", "Dongbin Zhao" ]
https://openreview.net/forum?id=OgWh4J7bkT
OgWh4J7bkT
OgWh4J7bkT
[ "~Songjun_Tu1", "~Jiahao_Lin4", "~Xiangyu_Tian1", "~Qichao_Zhang3", "~Linjing_Li1", "~Yuqian_Fu3", "~Nan_Xu4", "~Wei_He14", "~Xiangyuan_Lan4", "~Dongmei_Jiang2", "~Dongbin_Zhao1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/bfd8b38b48b9c4d0e40fdafc3605a788ee02041a.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "LLM Reasoning", "Iterative Optimization" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ tu2025enhancing, title={Enhancing {LLM} Reasoning with Iterative {DPO}: A Comprehensive Empirical Investigation}, author={Songjun Tu and Jiahao Lin and Xiangyu Tian and Qichao Zhang and Linjing Li and Yuqian Fu and Nan Xu and Wei He and Xiangyuan Lan and Dongmei Jiang and Dongbin Zhao}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=OgWh4J7bkT} }
tu|enhancing_llm_reasoning_with_iterative_dpo_a_comprehensive_empirical_investigation
null
null
null
null
null
LM Agents May Fail to Act on Their Own Risk Knowledge
This paper develops a systematic safety evaluation framework for LM agents, reveals persistent gaps between risk awareness and safe execution, and proposes effective mitigation strategies.
Language model (LM) agents have demonstrated significant potential for automating real-world tasks, yet they pose a diverse array of potential, severe risks in safety-critical scenarios. In this work, we identify a significant gap between LM agents' risk awareness and safety execution abilities: while they often answer "Yes" to queries like $\texttt{"Is executing `sudo rm -rf /*' dangerous?"}$, they will likely fail to identify such risks in instantiated trajectories or even directly perform these risky actions when acting as agents. To systematically investigate this, we develop a comprehensive evaluation framework to examine agents' safety across three progressive dimensions: 1) their knowledge about potential risks, 2) their ability to identify corresponding risks in execution trajectories, and 3) their actual behaviors to avoid executing these risky actions. Our evaluation reveals two critical performance gaps that resemble the generator-validator gaps observed in LMs: while agents demonstrate near-perfect risk knowledge (>98\% pass rates), they fail to apply this knowledge when identifying risks in actual scenarios, with performance dropping by >23\%, and often still execute risky actions (<26\% pass rates). This trend persists even in specialized reasoning models like DeepSeek-R1, reinforcing the challenge of translating an LM's risk knowledge into safe decision-making. We take advantage of these observed gaps to develop a risk verifier that independently critiques the proposed actions by agents, with an abstractor that converts specific execution trajectories into abstract descriptions where LMs can more effectively identify the risks. Our overall system achieves a significant reduction of risky action execution by 55.3\% over vanilla-prompted agents.
[ "Yuzhi Tang", "Tianxiao Li", "Elizabeth Li", "Chris J. Maddison", "Honghua Dong", "Yangjun Ruan" ]
https://openreview.net/forum?id=OeYdS51k8F
OeYdS51k8F
OeYdS51k8F
[ "~Yuzhi_Tang2", "~Tianxiao_Li2", "~Elizabeth_Li1", "~Chris_J._Maddison1", "~Honghua_Dong1", "~Yangjun_Ruan1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/71fe0a81e5a774812c204edec896f538589e31c8.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Large Language Models", "Language Model Agents", "AI Safety", "Evaluation" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ tang2025lm, title={{LM} Agents May Fail to Act on Their Own Risk Knowledge}, author={Yuzhi Tang and Tianxiao Li and Elizabeth Li and Chris J. Maddison and Honghua Dong and Yangjun Ruan}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=OeYdS51k8F} }
tang|lm_agents_may_fail_to_act_on_their_own_risk_knowledge
null
null
null
null
null
Limitations of refinement methods for weak to strong generalization
We study label refinement methods for weak to strong generalization.
Standard techniques for aligning large language models (LLMs) utilize human-produced data, which could limit the capability of any aligned LLM to the human level. Label refinement and weak training have emerged as promising strategies to address this *superalignment* problem. In this work, we adopt probabilistic assumptions commonly used to study label refinement and analyze whether refinement can be outperformed by alternative approaches, including computationally intractable oracle methods. We show that both weak training and label refinement suffer from irreducible error, leaving a performance gap between label refinement and the oracle. These results motivate future research into developing alternative methods for weak to strong generalization that synthesize the practicality of label refinement or weak training and the optimality of the oracle procedure.
[ "Seamus Somerstep", "Yaacov Ritov", "Mikhail Yurochkin", "Subha Maity", "Yuekai Sun" ]
https://openreview.net/forum?id=OKvSnV5Ar7
OKvSnV5Ar7
OKvSnV5Ar7
[ "~Seamus_Somerstep1", "~Yaacov_Ritov2", "~Mikhail_Yurochkin1", "~Subha_Maity1", "~Yuekai_Sun1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/005f74528280f3bdaeb656e8351844729f0cf1b9.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Weak to strong generalization", "superalignment", "transfer learning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ somerstep2025limitations, title={Limitations of refinement methods for weak to strong generalization}, author={Seamus Somerstep and Yaacov Ritov and Mikhail Yurochkin and Subha Maity and Yuekai Sun}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=OKvSnV5Ar7} }
somerstep|limitations_of_refinement_methods_for_weak_to_strong_generalization
/attachment/abc7fd4434a497d95c9d4ae706bc33726fe44207.zip
null
null
null
null
G1yphD3c0de: Towards Safer Language Models on Visually Perturbed Texts
Towards Safer Language Models on Visually Perturbed Texts
Visual text perturbations are increasingly used to bypass content moderation systems, where characters are replaced with visually similar Unicode alternatives that humans can easily recognize but text-only filters fail to detect. While existing research has examined the generation and classification of such evasion techniques, the critical task of restoration remains underexplored. To address this challenge, we present GlyphDecode, a novel framework designed to restore visually perturbed text to its original form. Our framework consists of two key components: (1) GlyphPerturber, which generates visually perturbed text images for training, and (2) GlyphRestorer, which learns to recover the original text through a multimodal transformer architecture. GlyphRestorer is a lightweight and fast module that can be applied in a plug-and-play manner with off-the-shelf LLMs and multimodal LLMs to enhance harmful content detection. To evaluate restoration efficacy in real-world scenarios, we introduce GlyphSynth, a publicly available specialized dataset containing realistic examples of content moderation evasion from diverse sources including DEA (Drug Enforcement Administration) reports and social media platforms. Experimental results demonstrate that our approach significantly outperforms baselines in text restoration and enables multimodal language models to better detect harmful content disguised through visual manipulations. Our work bridges an important gap in content moderation systems by addressing not only the detection but also the recovery of manipulated text, contributing to more effective safeguards against increasingly sophisticated evasion tactics.
[ "Yejinchoi", "Yejin Yeo", "Yejin Son", "Seungju Han", "Youngjae Yu" ]
https://openreview.net/forum?id=OGwE7LwtcR
OGwE7LwtcR
OGwE7LwtcR
[ "~Yejinchoi1", "~Yejin_Yeo1", "~Yejin_Son3", "~Seungju_Han2", "~Youngjae_Yu1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/da3d98a642cb0dc79ab3ca14ec82271fee674548.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "safety", "societal implications", "multimodality" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
true
I think the authors may have built a model that is perfect for solving CAPTCHAs. I don't think this should qualify as a barrier to acceptance – this is a good paper! – but I think it should be mentioned in the Ethics statement.
@inproceedings{ yejinchoi2025gyphdcde, title={G1yphD3c0de: Towards Safer Language Models on Visually Perturbed Texts}, author={Yejinchoi and Yejin Yeo and Yejin Son and Seungju Han and Youngjae Yu}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=OGwE7LwtcR} }
yejinchoi|g1yphd3c0de_towards_safer_language_models_on_visually_perturbed_texts
/attachment/73ae442f313b2bcb950d1ff8c21fbc0d155ca084.zip
null
null
null
null
Evaluating the Diversity and Quality of LLM Generated Content
We introduce a methodology/dataset for evaluating the diversity and quality of open-ended LLM generated content. We find RLHF and more broadly preference-tuning meaningfully increase diversity of generations.
Recent work suggests that preference-tuning techniques—such as Reinforcement Learning from Human Feedback (RLHF) methods like PPO and GRPO, as well as alternatives like DPO—reduce diversity, creating a dilemma given that these models are widely deployed in applications requiring varied outputs. We argue that diversity without consideration of quality has limited practical value. To address this issue, we introduce a framework for measuring effective semantic diversity—diversity among outputs that meet quality thresholds—which better reflects the practical utility of large language models (LLMs). Using open-ended tasks that require no human intervention, we find counterintuitive results: when using diversity metrics that do not explicitly consider quality, preference-tuned models—particularly those trained via RL—often produce outputs with lower diversity; however, these same preference-tuned models generate greater effective semantic diversity than supervised fine-tuned (SFT) or base models. Our analysis further shows another trend: while larger models may exhibit greater effective semantic diversity than smaller models, the smaller models are consistently more parameter-efficient at producing unique content within a fixed sampling budget. These findings have practical implications for applications that require diverse yet high-quality outputs, from creative assistance to synthetic data generation.
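To make the "effective semantic diversity" notion from the abstract concrete, here is a minimal sketch of one way such a metric could be computed, counting diversity only among outputs that clear a quality threshold. The embedding model, the threshold value, and the function name are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: diversity measured only over quality-filtered outputs.
# Quality scores, threshold, and the embedding model are assumptions.
from itertools import combinations
import numpy as np
from sentence_transformers import SentenceTransformer

def effective_semantic_diversity(outputs, quality_scores, threshold=0.5,
                                 model_name="all-MiniLM-L6-v2"):
    """Mean pairwise cosine distance among outputs whose quality passes the threshold."""
    kept = [o for o, q in zip(outputs, quality_scores) if q >= threshold]
    if len(kept) < 2:
        return 0.0  # no diversity credit without at least two acceptable outputs
    embedder = SentenceTransformer(model_name)
    embs = embedder.encode(kept, normalize_embeddings=True)
    dists = [1.0 - float(np.dot(embs[i], embs[j]))
             for i, j in combinations(range(len(kept)), 2)]
    return float(np.mean(dists))
```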
[ "Alexander Shypula", "Shuo Li", "Botong Zhang", "Vishakh Padmakumar", "Kayo Yin", "Osbert Bastani" ]
https://openreview.net/forum?id=O7bF6nlSOD
O7bF6nlSOD
O7bF6nlSOD
[ "~Alexander_Shypula1", "~Shuo_Li7", "~Botong_Zhang1", "~Vishakh_Padmakumar1", "~Kayo_Yin1", "~Osbert_Bastani1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/c5bcff49672b1b239114e6db833087090df5d450.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Diversity;Alignment;LLMs;Evaluation;Program Synthesis;Code Generation;Creative Writing" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ shypula2025evaluating, title={Evaluating the Diversity and Quality of {LLM} Generated Content}, author={Alexander Shypula and Shuo Li and Botong Zhang and Vishakh Padmakumar and Kayo Yin and Osbert Bastani}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=O7bF6nlSOD} }
shypula|evaluating_the_diversity_and_quality_of_llm_generated_content
/attachment/42f1580fbebf5f89e0d841b74303206a0ec077c6.zip
null
null
null
null
Reasoning Models Know When They’re Right: Probing Hidden States for Self-Verification
Reasoning models with long chain-of-thought encode strong signals about the correctness of intermediate answers in their hidden states, and we can use these signals for early exit.
Reasoning models have achieved remarkable performance on tasks like math and logical reasoning thanks to their ability to search during reasoning. However, they still suffer from \textit{overthinking}, often performing unnecessary reasoning steps even after reaching the correct answer. This raises the question: \textit{can models evaluate the correctness of their intermediate answers during reasoning?} In this work, we study whether reasoning models encode information about answer correctness by probing the model's hidden states. The resulting probe can verify intermediate answers with high accuracy and produces highly calibrated scores. Additionally, we find that models' hidden states encode the correctness of future answers, enabling early prediction of correctness before the intermediate answer is fully formulated. We then use the probe as a verifier to decide whether to exit reasoning at intermediate answers during inference, reducing the number of inference tokens by 24\% without compromising performance. These findings confirm that reasoning models do encode a notion of correctness yet fail to exploit it, revealing substantial untapped potential to enhance their efficiency.
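A minimal sketch of the kind of correctness probe the abstract describes: a linear classifier fit on hidden states taken at intermediate answers, then reused as an early-exit signal at inference. The feature extraction, probe choice, and exit threshold here are assumptions for illustration, not the paper's exact setup.

```python
# Hypothetical sketch: linear probe on hidden states for answer correctness,
# reused as an early-exit signal. Dataset construction and threshold are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

# X: hidden states taken at intermediate-answer positions, shape (n_examples, hidden_dim)
# y: 1 if the intermediate answer was correct, else 0
def train_correctness_probe(X: np.ndarray, y: np.ndarray) -> LogisticRegression:
    probe = LogisticRegression(max_iter=1000)
    probe.fit(X, y)
    return probe

def should_exit(probe: LogisticRegression, hidden_state: np.ndarray,
                threshold: float = 0.9) -> bool:
    """Stop generating further reasoning once the probe is confident the answer is right."""
    p_correct = probe.predict_proba(hidden_state.reshape(1, -1))[0, 1]
    return p_correct >= threshold
```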
[ "Anqi Zhang", "Yulin Chen", "Jane Pan", "Chen Zhao", "Aurojit Panda", "Jinyang Li", "He He" ]
https://openreview.net/forum?id=O6I0Av7683
O6I0Av7683
O6I0Av7683
[ "~Anqi_Zhang1", "~Yulin_Chen1", "~Jane_Pan1", "~Chen_Zhao2", "~Aurojit_Panda1", "~Jinyang_Li1", "~He_He2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/98d87a29faec03b994dcc0f2da69d5c73580d749.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Reasoning models; Chain-of-thought reasoning (CoT); Intermediate answers; Overthinking" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ zhang2025reasoning, title={Reasoning Models Know When They{\textquoteright}re Right: Probing Hidden States for Self-Verification}, author={Anqi Zhang and Yulin Chen and Jane Pan and Chen Zhao and Aurojit Panda and Jinyang Li and He He}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=O6I0Av7683} }
zhang|reasoning_models_know_when_theyre_right_probing_hidden_states_for_selfverification
null
null
null
null
null
Analyzing Multilingualism in Large Language Models with Sparse Autoencoders
We identify several distinct patterns separating high- and low-resource languages in LLMs through the lens of Sparse Autoencoders.
Despite the impressive multilingual capabilities of recent large language models (LLMs), the mechanisms underlying their language-specific processing remain largely unclear. In this paper, we investigate how LLMs handle multilingualism through the lens of sparse autoencoders (SAEs), uncovering distinctive patterns that offer new insights into their internal workings. Specifically, we introduce two novel concepts—task instruction–focused (TF) and heading-focused (HF) SAE features—and use them to reveal intrinsic discrepancies between high- and low-performing languages. Our analysis yields several key findings: (1) SAEs provide concrete evidence that LLMs have a precise understanding of prompt structure; (2) heading keywords (e.g., “Question,” “Choices,” and “Answer”) play a distinct role in LLM processing; and (3) low-performing languages exhibit a relative deficiency in TF features compared to high-performing languages. Building on these insights, we propose two practical strategies to improve zero-shot multilingual performance: (1) incorporating English heading keywords and (2) amplifying TF features through steering. Our approach improves zero-shot performance in low-performing languages by up to 3.7% on average on ARC-Challenge and MMLU, while also shedding new light on fundamental differences between high- and low-performing languages in LLMs. Our code is available at https://github.com/ihcho2/SAE-ML.
[ "Ikhyun Cho", "Julia Hockenmaier" ]
https://openreview.net/forum?id=NmGSvZoU3K
NmGSvZoU3K
NmGSvZoU3K
[ "~Ikhyun_Cho4", "~Julia_Hockenmaier1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/170a5e82ae85994de8bf4fda0fc86f34ce8a2975.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Large Language Models", "Multilingualism", "Sparse Autoencoders" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ cho2025analyzing, title={Analyzing Multilingualism in Large Language Models with Sparse Autoencoders}, author={Ikhyun Cho and Julia Hockenmaier}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=NmGSvZoU3K} }
cho|analyzing_multilingualism_in_large_language_models_with_sparse_autoencoders
null
null
null
null
null
Efficient Self-Improvement in Multimodal Large Language Models: A Model-Level Judge-Free Approach
A novel judge-free self-improvement framework for multimodal large language models (MLLMs) efficiently enhances reliability by controlling hallucinations without costly model-level verification loops.
Self-improvement in multimodal large language models (MLLMs) is crucial for enhancing their reliability and robustness. However, current methods often rely heavily on MLLMs themselves as judges, leading to high computational costs and potential pitfalls like reward hacking and model collapse. This paper introduces a novel, model-level judge-free self-improvement framework. Our approach employs a controlled feedback mechanism while eliminating the need for MLLMs in the verification loop. We generate preference learning pairs using a controllable hallucination mechanism and optimize data quality by leveraging lightweight, contrastive language-image encoders to evaluate and reverse pairs when necessary. Evaluations across public benchmarks and our newly introduced IC dataset, designed to challenge hallucination control, demonstrate that our model outperforms conventional techniques. We achieve superior precision and recall with significantly lower computational demands. This method offers an efficient pathway to scalable self-improvement in MLLMs, balancing performance gains with reduced resource requirements.
[ "Shijian Deng", "Wentian Zhao", "Yu-Jhe Li", "Kun Wan", "Daniel Miranda", "Ajinkya Kale", "Yapeng Tian" ]
https://openreview.net/forum?id=NRrXHppaBg
NRrXHppaBg
NRrXHppaBg
[ "~Shijian_Deng1", "~Wentian_Zhao3", "~Yu-Jhe_Li1", "~Kun_Wan1", "~Daniel_Miranda1", "~Ajinkya_Kale1", "~Yapeng_Tian1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/f3b8619d0d4e0aaad0303ebd851f4f8e90747e9e.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Self-Improvement", "Multimodal Large Language Models" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ deng2025efficient, title={Efficient Self-Improvement in Multimodal Large Language Models: A Model-Level Judge-Free Approach}, author={Shijian Deng and Wentian Zhao and Yu-Jhe Li and Kun Wan and Daniel Miranda and Ajinkya Kale and Yapeng Tian}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=NRrXHppaBg} }
deng|efficient_selfimprovement_in_multimodal_large_language_models_a_modellevel_judgefree_approach
null
null
null
null
null
LLM Unlearning Reveals a Stronger-Than-Expected Coreset Effect in Current Benchmarks
We uncover a novel coreset effect in current LLM Unlearning benchmarks, where unlearning performance can be effectively maintained using significantly smaller subsets, e.g., as little as 5% of the forget set.
Large language model (LLM) unlearning has become a critical challenge in ensuring safety and controlled model behavior by removing *undesired* data-model influences from the pretrained model while preserving its general utility. Significant recent efforts have been dedicated to developing LLM unlearning benchmarks such as WMDP (Weapons of Mass Destruction Proxy) and MUSE (Machine Unlearning Six-way Evaluation), facilitating standardized unlearning performance assessment and method comparison. Despite their usefulness, we uncover for the first time a novel *coreset effect* within these benchmarks. Specifically, we find that LLM unlearning achieved with the original (full) forget set can be effectively maintained using a significantly smaller subset (functioning as a "coreset"), *e.g.*, as little as 5% of the forget set, even when selected at random. This suggests that LLM unlearning in these benchmarks can be performed surprisingly easily, even in an extremely low-data regime. We demonstrate that this coreset effect remains strong, regardless of the LLM unlearning method used, such as NPO (Negative Preference Optimization) and RMU (Representation Misdirection Unlearning), the popular ones in these benchmarks. The surprisingly strong coreset effect is also robust across various data selection methods, ranging from random selection to more sophisticated heuristic approaches. We explain the coreset effect in LLM unlearning through a keyword-based perspective, showing that keywords extracted from the forget set alone contribute significantly to unlearning effectiveness and indicating that current unlearning is driven by a compact set of high-impact tokens rather than the entire dataset. We further justify the faithfulness of coreset-unlearned models along additional dimensions, such as mode connectivity and robustness to jailbreaking attacks.
[ "Soumyadeep Pal", "Changsheng Wang", "James Diffenderfer", "Bhavya Kailkhura", "Sijia Liu" ]
https://openreview.net/forum?id=NMIqKUdDkw
NMIqKUdDkw
NMIqKUdDkw
[ "~Soumyadeep_Pal1", "~Changsheng_Wang1", "~James_Diffenderfer1", "~Bhavya_Kailkhura1", "~Sijia_Liu1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/3ebe54a4517ba5c5146eb4601360e351ec40ffee.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Machine Unlearning", "Coreset", "Large Language Models" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ pal2025llm, title={{LLM} Unlearning Reveals a Stronger-Than-Expected Coreset Effect in Current Benchmarks}, author={Soumyadeep Pal and Changsheng Wang and James Diffenderfer and Bhavya Kailkhura and Sijia Liu}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=NMIqKUdDkw} }
pal|llm_unlearning_reveals_a_strongerthanexpected_coreset_effect_in_current_benchmarks
null
null
null
null
null
Phased Training for LLM-powered Text Retrieval Models Beyond Data Scaling
Training powerful general-purpose text embedding and reranking models by a multi-stage training framework and efficient data synthesis.
Current efforts in building large language model (LLM)-based general-purpose text retrieval models primarily focus on architectural design and training data scaling. However, significant challenges remain in effectively modeling diverse retrieval tasks and domains, including multi-task conflict, data imbalance, and training efficiency. To address these challenges, we propose a novel phased training framework for text retrieval, featuring: (1) robust foundation modeling with core relevance data, (2) progressive specialization through modular task adaptation, and (3) knowledge fusion via weight-interpolation-based model merging. This framework simultaneously optimizes both embedding and reranking models through a unified architecture. We also present an efficient yet scalable data synthesis pipeline to expand training data, based on open-source LLMs. These synthetic data can be efficiently incorporated into the phased training framework, enhancing model performance. We identify five distinct types of retrieval tasks, i.e., basic relevance retrieval, code retrieval, tool retrieval, complex instruction-based retrieval, and reasoning-intensive retrieval, and conduct extensive experiments across them. Our method achieves the best performance across MTEB and various retrieval benchmarks of the five task types. Further analysis demonstrates the effectiveness and efficiency of our proposed training framework and data synthesis pipeline.
[ "Xin Zhang", "Yanzhao Zhang", "Wen Xie", "Dingkun Long", "Mingxin Li", "Pengjun Xie", "Meishan Zhang", "Wenjie Li", "Min Zhang" ]
https://openreview.net/forum?id=NC6G1KCxlt
NC6G1KCxlt
NC6G1KCxlt
[ "~Xin_Zhang15", "~Yanzhao_Zhang1", "~Wen_Xie3", "~Dingkun_Long1", "~Mingxin_Li2", "~Pengjun_Xie2", "~Meishan_Zhang1", "~Wenjie_Li1", "~Min_Zhang9" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/a20b993f28b1029aa779a8f0d710161fc8438355.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Text Retrieval", "Text Embedding", "Reranking", "LLM-based Embedding" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ zhang2025phased, title={Phased Training for {LLM}-powered Text Retrieval Models Beyond Data Scaling}, author={Xin Zhang and Yanzhao Zhang and Wen Xie and Dingkun Long and Mingxin Li and Pengjun Xie and Meishan Zhang and Wenjie Li and Min Zhang}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=NC6G1KCxlt} }
zhang|phased_training_for_llmpowered_text_retrieval_models_beyond_data_scaling
null
null
null
null
null
Reverse-engineering NLI: A study of the meta-inferential properties of Natural Language Inference
We perform a comprehensive analysis of NLI under three different logical interpretations of its labels and test the meta-inferential behavior of models trained on SNLI to better understand the logical properties of the task encoded by the dataset.
Natural Language Inference (NLI) has been an important task for evaluating language models for Natural Language Understanding, but the logical properties of the task are poorly understood and often mischaracterized. Understanding the notion of inference captured by NLI is key to interpreting model performance on the task. In this paper we formulate three possible readings of the NLI label set and perform a comprehensive analysis of their respective _meta-inferential_ properties. Focusing on the SNLI dataset, we exploit (1) NLI items with shared premises and (2) items generated by LLMs to evaluate models trained on SNLI for meta-inferential consistency and derive insights into which reading of the logical relations is encoded by the dataset.
[ "Rasmus Blanck", "Bill Noble", "Stergios Chatzikyriakidis" ]
https://openreview.net/forum?id=NAcvSI2CRM
NAcvSI2CRM
NAcvSI2CRM
[ "~Rasmus_Blanck1", "~Bill_Noble1", "~Stergios_Chatzikyriakidis1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/b65364b872fa8af6c1effeaeabbf54157aaf7a3f.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "computational semantics", "natural language inference", "logic" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ blanck2025reverseengineering, title={Reverse-engineering {NLI}: A study of the meta-inferential properties of Natural Language Inference}, author={Rasmus Blanck and Bill Noble and Stergios Chatzikyriakidis}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=NAcvSI2CRM} }
blanck|reverseengineering_nli_a_study_of_the_metainferential_properties_of_natural_language_inference
/attachment/03b068cff59811924a42c4f55e2f14114e1bedfa.zip
null
null
null
null
LawFlow: Collecting and Simulating Lawyers’ Thought Processes on Business Formation Case Studies
LawFlow provides a dataset capturing complex legal workflows for small business-entity formation, highlighting multi-stage reasoning, client communication, and iterative revisions. It reveals human workflows’ flexibility and informs more adaptive AI.
Legal practitioners, particularly those early in their careers, face complex, high-stakes tasks that require adaptive, context-sensitive reasoning. While AI holds promise in supporting legal work, current datasets and models are narrowly focused on isolated subtasks and fail to capture the end-to-end decision-making required in real-world practice. To address this gap, we introduce _LawFlow_, a dataset of complete end-to-end legal workflows collected from trained law students, grounded in real-world business entity formation scenarios. Unlike prior datasets focused on input-output pairs or linear chains of thought, _LawFlow_ captures dynamic, modular, and iterative reasoning processes that reflect the ambiguity, revision, and client-adaptive strategies of legal practice. Using _LawFlow_, we compare human and LLM-generated workflows, revealing systematic differences in structure, reasoning flexibility, and plan execution. Human workflows tend to be modular and adaptive, while LLM workflows are more sequential, exhaustive, and less sensitive to downstream implications. Our findings also suggest that legal professionals prefer AI to carry out supportive roles, such as brainstorming, identifying blind spots, and surfacing alternatives, rather than executing complex workflows end-to-end. Our results highlight both the current limitations of LLMs in supporting complex legal workflows and opportunities for developing more collaborative, reasoning-aware legal AI systems. All data and code are available on our project page (https://minnesotanlp.github.io/LawFlow-website/)
[ "Debarati Das", "Khanh Chi Le", "Ritik Sachin Parkar", "Karin De Langis", "Brendan Madson", "Chad M. Berryman", "Robin M Willis", "Daniel H Moses", "Brett McDonnell", "Daniel Schwarcz", "Dongyeop Kang" ]
https://openreview.net/forum?id=MsgdEkcLRz
MsgdEkcLRz
MsgdEkcLRz
[ "~Debarati_Das1", "~Khanh_Chi_Le1", "~Ritik_Sachin_Parkar1", "~Karin_De_Langis1", "~Brendan_Madson1", "~Chad_M._Berryman1", "~Robin_M_Willis1", "~Daniel_H_Moses1", "~Brett_McDonnell1", "~Daniel_Schwarcz1", "~Dongyeop_Kang2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/4729d55b46049703475f554c13f7988cb17de39b.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "pretraining data; synthetic data; reasoning; legal perspectives; legal drafting; LLM; chain of thought" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ das2025lawflow, title={LawFlow: Collecting and Simulating Lawyers{\textquoteright} Thought Processes on Business Formation Case Studies}, author={Debarati Das and Khanh Chi Le and Ritik Sachin Parkar and Karin De Langis and Brendan Madson and Chad M. Berryman and Robin M Willis and Daniel H Moses and Brett McDonnell and Daniel Schwarcz and Dongyeop Kang}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=MsgdEkcLRz} }
das|lawflow_collecting_and_simulating_lawyers_thought_processes_on_business_formation_case_studies
null
null
null
null
null
$\mu$KE: Matryoshka Unstructured Knowledge Editing of Large Language Models
A simple yet effective improvement on unstructured locate-and-edit via Matryoshka-style working memory update.
Large language models (LLMs) have emerged as powerful knowledge bases yet are limited by static training data, leading to issues such as hallucinations and safety risks. Editing a model’s internal knowledge through the locate-and-edit paradigm has proven a cost-effective alternative to retraining, though current unstructured approaches—especially window-based autoregressive methods—often disrupt the causal dependency between early memory updates and later output tokens. In this work, we first theoretically analyze these limitations and then introduce Matryoshka Unstructured Knowledge Editing ($\mu$KE), a novel memory update mechanism that preserves such dependencies via a Matryoshka-style objective and adaptive loss coefficients. Empirical evaluations on two models across five benchmarks demonstrate that $\mu$KE improves edit efficacy by up to 12.33\% over state-of-the-art methods, and remains robust when applied to diversely formatted edits, underscoring its potential for effective unstructured knowledge editing in LLMs.
[ "Zian Su", "Ziyang Huang", "Kaiyuan Zhang", "Xiangyu Zhang" ]
https://openreview.net/forum?id=MiR3ObcF3C
MiR3ObcF3C
MiR3ObcF3C
[ "~Zian_Su1", "~Ziyang_Huang4", "~Kaiyuan_Zhang1", "~Xiangyu_Zhang3" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/259428e6508eaa3eed90891f808aa88db8556492.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "knowledge editing", "model editing", "large language models" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ su2025muke, title={\${\textbackslash}mu\${KE}: Matryoshka Unstructured Knowledge Editing of Large Language Models}, author={Zian Su and Ziyang Huang and Kaiyuan Zhang and Xiangyu Zhang}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=MiR3ObcF3C} }
su|\muke_matryoshka_unstructured_knowledge_editing_of_large_language_models
null
null
null
null
null
Have Large Language Models Learned to Reason? A Characterization via 3-SAT
We use phase transitions in random 3-SAT to characterize reasoning abilities of LLMs.
Large Language Models (LLMs) have been touted as AI models possessing advanced reasoning abilities. In theory, autoregressive LLMs with Chain-of-Thought (CoT) can perform more serial computations to solve complex reasoning tasks. However, recent studies suggest that, despite this capacity, LLMs do not truly learn to reason but instead fit on statistical features. To study the reasoning capabilities in a principled fashion, we adopt a computational theory perspective and propose an experimental protocol centered on 3-SAT -- the prototypical NP-complete problem lying at the core of logical reasoning and constraint satisfaction tasks. Specifically, we examine the phase transitions in random 3-SAT and characterize the reasoning abilities of state-of-the-art LLMs by varying the inherent hardness of the problem instances. By comparing DeepSeek R1 with other LLMs, our findings reveal two key insights: (1) LLM accuracy drops significantly on harder instances, suggesting that all current models struggle when statistical shortcuts are unavailable; and (2) unlike other LLMs, R1 shows signs of having learned the underlying reasoning. Following a principled experimental protocol, our study moves beyond the benchmark-driven evidence often found in LLM reasoning research. Our findings highlight important gaps and suggest clear directions for future research. Link to our code.
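For context, the clause-to-variable ratio is the standard knob for controlling the hardness of random 3-SAT, with the SAT/UNSAT phase transition near 4.27. Below is a minimal sketch of instance generation under that framing; the solver and encoding are common choices and not necessarily the authors' protocol.

```python
# Hypothetical sketch: generate random 3-SAT instances at a given
# clause-to-variable ratio alpha and label them SAT/UNSAT with a solver.
import random
from pysat.solvers import Glucose3  # pip install python-sat

def random_3sat(n_vars: int, alpha: float, seed: int = 0):
    rng = random.Random(seed)
    n_clauses = int(round(alpha * n_vars))
    clauses = []
    for _ in range(n_clauses):
        vs = rng.sample(range(1, n_vars + 1), 3)  # three distinct variables per clause
        clauses.append([v if rng.random() < 0.5 else -v for v in vs])
    return clauses

def is_satisfiable(clauses) -> bool:
    with Glucose3(bootstrap_with=clauses) as solver:
        return solver.solve()

# Instances near alpha ~ 4.27 sit at the phase transition and are typically
# the hardest, which is the hardness axis the abstract varies.
for alpha in (2.0, 4.27, 6.0):
    print(alpha, is_satisfiable(random_3sat(n_vars=20, alpha=alpha)))
```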
[ "RISHI HAZRA", "Gabriele Venturato", "Pedro Zuidberg Dos Martires", "Luc De Raedt" ]
https://openreview.net/forum?id=MPTlWIVSMU
MPTlWIVSMU
MPTlWIVSMU
[ "~RISHI_HAZRA1", "~Gabriele_Venturato1", "~Pedro_Zuidberg_Dos_Martires1", "~Luc_De_Raedt1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/9c2fc7475e20f41686932bf67d787220ec571d91.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Large Language Models", "Reasoning", "Computational Complexity", "Logic", "Satisfiability", "Phase Transitions" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ hazra2025have, title={Have Large Language Models Learned to Reason? A Characterization via 3-{SAT}}, author={RISHI HAZRA and Gabriele Venturato and Pedro Zuidberg Dos Martires and Luc De Raedt}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=MPTlWIVSMU} }
hazra|have_large_language_models_learned_to_reason_a_characterization_via_3sat
null
null
null
null
null
Can Language Models Falsify? Evaluating Algorithmic Reasoning with Counterexample Creation
We test the abilities of models to find counterexamples automatically using code-execution, and this can be hard for reasoning models.
There is growing excitement about the potential of Language Models (LMs) to accelerate scientific discovery. *Falsifying* hypotheses is key to scientific progress, as it allows claims to be iteratively refined over time. This process requires significant researcher effort, reasoning, and ingenuity. Yet current benchmarks for LMs predominantly assess their ability to generate solutions rather than challenge them. We advocate for developing benchmarks that evaluate this inverse capability — creating counterexamples for subtly incorrect solutions. To demonstrate this approach, we start with the domain of algorithmic problem solving, where counterexamples can be evaluated automatically using code execution. Specifically, we introduce REFUTE, a dynamically updating benchmark that includes recent problems and incorrect submissions from programming competitions, where human experts successfully identified counterexamples. Our analysis finds that the best reasoning agents, even OpenAI o3-mini (high) with code execution feedback, can create counterexamples for only $<9$% of incorrect solutions in REFUTE, even though ratings indicate its ability to solve up to $48$% of these problems from scratch. We hope our work spurs progress in evaluating and enhancing LMs' ability to falsify incorrect solutions — a capability that is crucial for both accelerating research and making models self-improve through reliable reflective reasoning.
[ "Shiven Sinha", "Shashwat Goel", "Ponnurangam Kumaraguru", "Jonas Geiping", "Matthias Bethge", "Ameya Prabhu" ]
https://openreview.net/forum?id=M7cl4Ldw61
M7cl4Ldw61
M7cl4Ldw61
[ "~Shiven_Sinha1", "~Shashwat_Goel1", "~Ponnurangam_Kumaraguru3", "~Jonas_Geiping1", "~Matthias_Bethge1", "~Ameya_Prabhu1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/ccb87b304e168880bd2862278e73f80477f11cfe.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "code; self-repair; falsification" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ sinha2025can, title={Can Language Models Falsify? Evaluating Algorithmic Reasoning with Counterexample Creation}, author={Shiven Sinha and Shashwat Goel and Ponnurangam Kumaraguru and Jonas Geiping and Matthias Bethge and Ameya Prabhu}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=M7cl4Ldw61} }
sinha|can_language_models_falsify_evaluating_algorithmic_reasoning_with_counterexample_creation
/attachment/8be7e705cc31c530663056dbb1c28e95a1c2dd87.zip
null
null
null
null
Multi-Agent Verification: Scaling Test-Time Compute with Multiple Verifiers
We explore scaling the number of verifier models as a novel test-time scaling dimension for improving language model performance and introduce an algorithm that enables simple scaling along this dimension.
By utilizing more computational resources at test-time, large language models (LLMs) can improve without additional training. One common strategy uses *verifiers* to evaluate candidate outputs. In this work, we propose a novel scaling dimension for test-time compute: *scaling the number of verifier models*. We introduce Multi-Agent Verification (MAV) as a test-time compute paradigm that combines multiple verifiers to improve performance. To investigate scaling up the verification compute, we propose to combine multiple Aspect Verifiers (AVs) --- off-the-shelf LLMs prompted to verify different aspects of outputs. AVs are a convenient building block for MAV since they can be easily combined without any additional training. We introduce BoN-MAV as a simple multi-agent verification algorithm that combines best-of-*n* sampling with aspect verifiers, and we show that performance improves as we spend more verification compute at test-time by increasing the number and type of verifiers. Moreover, we demonstrate both weak-to-strong generalization, where combining weak verifiers improves even stronger LLMs, and self-improvement, where the same base model is used to both generate and verify outputs. Our results establish scaling the number and type of verifier models as a promising new dimension for improving language model performance at test time.
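A minimal sketch of the BoN-MAV idea as stated in the abstract: sample n candidate outputs, ask several off-the-shelf aspect-verifier prompts for a binary verdict on each, and return the candidate with the most approvals. The generate and ask_verifier callables are placeholders for whatever LLM API is in use, not part of the paper.

```python
# Hypothetical sketch of best-of-n sampling with multiple aspect verifiers (BoN-MAV).
# `generate` and `ask_verifier` are placeholders for an LLM API of your choice.
from typing import Callable, List

def bon_mav(prompt: str,
            generate: Callable[[str], str],
            verifier_prompts: List[str],
            ask_verifier: Callable[[str, str, str], bool],
            n: int = 8) -> str:
    candidates = [generate(prompt) for _ in range(n)]
    best, best_votes = candidates[0], -1
    for cand in candidates:
        # each aspect verifier returns True/False for its own aspect (e.g. logic, units, style)
        votes = sum(ask_verifier(vp, prompt, cand) for vp in verifier_prompts)
        if votes > best_votes:
            best, best_votes = cand, votes
    return best
```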
[ "Shalev Lifshitz", "Sheila A. McIlraith", "Yilun Du" ]
https://openreview.net/forum?id=LriQ3NY9uL
LriQ3NY9uL
LriQ3NY9uL
[ "~Shalev_Lifshitz1", "~Sheila_A._McIlraith1", "~Yilun_Du1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/7b678a8c5a2418eb04bf51471bd5fe69d40500f5.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "large language models", "test-time compute", "verification", "scaling" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ lifshitz2025multiagent, title={Multi-Agent Verification: Scaling Test-Time Compute with Multiple Verifiers}, author={Shalev Lifshitz and Sheila A. McIlraith and Yilun Du}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=LriQ3NY9uL} }
lifshitz|multiagent_verification_scaling_testtime_compute_with_multiple_verifiers
null
null
null
null
null
Language Agents Mirror Human Causal Reasoning Biases. How Can We Help Them Think Like Scientists?
Language model agents exhibit human-like reasoning biases, leading them to arrive at incorrect conclusions of causal relationships
Language model (LM) agents are increasingly used as autonomous decision-makers which need to actively gather information to guide their decisions. A crucial cognitive skill for such agents is the efficient exploration and understanding of the causal structure of the world—key to robust, scientifically grounded reasoning. Yet, it remains unclear whether LMs possess this capability or exhibit systematic biases leading to erroneous conclusions. In this work, we examine LMs’ ability to explore and infer causal relationships, using the well-established Blicket Test paradigm from developmental psychology. We find that LMs reliably infer the common, intuitive disjunctive causal relationships but systematically struggle with the unusual, yet equally (or sometimes even more) evidenced conjunctive ones. This “disjunctive bias” persists across model families, sizes, and prompting strategies, and performance further declines as task complexity increases. Interestingly, an analogous bias appears in human adults, suggesting that LMs may have inherited deep-seated reasoning heuristics from their training data. To this end, we quantify similarities between LMs and humans, finding that LMs exhibit adult-like inference profiles (but not child-like). Finally, we propose a test-time sampling method which explicitly samples and eliminates hypotheses about causal relationships from the LM. This scalable approach significantly reduces the disjunctive bias and moves LMs closer to the goal of scientific, causally rigorous reasoning.
[ "Anthony GX-Chen", "Dongyan Lin", "Mandana Samiei", "Doina Precup", "Blake Aaron Richards", "Rob Fergus", "Kenneth Marino" ]
https://openreview.net/forum?id=LKINTp7Gdo
LKINTp7Gdo
LKINTp7Gdo
[ "~Anthony_GX-Chen1", "~Dongyan_Lin1", "~Mandana_Samiei1", "~Doina_Precup1", "~Blake_Aaron_Richards1", "~Rob_Fergus1", "~Kenneth_Marino1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/9703b61f95dd7c53d48028df86babc548ee88ee8.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "agents", "decision making", "exploration", "cognitive science", "causal inference" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ gx-chen2025language, title={Language Agents Mirror Human Causal Reasoning Biases. How Can We Help Them Think Like Scientists?}, author={Anthony GX-Chen and Dongyan Lin and Mandana Samiei and Doina Precup and Blake Aaron Richards and Rob Fergus and Kenneth Marino}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=LKINTp7Gdo} }
gxchen|language_agents_mirror_human_causal_reasoning_biases_how_can_we_help_them_think_like_scientists
null
null
null
null
null
Data-Centric Human Preference with Rationales for Direct Preference Alignment
We introduce rationales to boost learning from provided human preference pairs in direct preference training.
Aligning language models with human preferences through reinforcement learning from human feedback is crucial for their safe and effective deployment. Human preference is typically represented through comparisons in which one response is chosen over another for a given prompt. However, standard preference datasets often lack explicit information on why a particular choice was made, presenting an ambiguity that can hinder efficient learning and robust alignment, especially given the high cost of acquiring extensive human annotations. While many studies focus on algorithmic improvements, this work adopts a data-centric perspective, exploring how to enhance learning from existing preference data. We propose augmenting standard preference pairs with rationales that explain the reasoning behind the human preference. Specifically, we introduce a simple and principled framework that leverages machine-generated rationales to enrich preference data for preference optimization algorithms. Our comprehensive analysis demonstrates that incorporating rationales improves learning efficiency. Extensive experiments reveal several advantages: rationale-augmented learning accelerates convergence and can achieve higher final model performance. Furthermore, this approach is versatile and compatible with various direct preference optimization algorithms. Our findings showcase the potential of thoughtful data design in preference learning, demonstrating that enriching existing datasets with explanatory rationales can help unlock improvements in model alignment and annotation efficiency.
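A minimal sketch of the data-augmentation step the abstract describes: each preference pair is enriched with a machine-generated rationale before preference optimization. The prompt template and the rationale_model callable are illustrative assumptions, not the authors' exact pipeline.

```python
# Hypothetical sketch: augmenting preference pairs with machine-generated rationales.
# `rationale_model` stands in for any LLM callable; the prompt template is an assumption.
from typing import Callable, Dict, List

RATIONALE_TEMPLATE = (
    "Prompt: {prompt}\nResponse A (chosen): {chosen}\nResponse B (rejected): {rejected}\n"
    "Explain briefly why Response A is preferred over Response B."
)

def add_rationales(pairs: List[Dict[str, str]],
                   rationale_model: Callable[[str], str]) -> List[Dict[str, str]]:
    augmented = []
    for ex in pairs:  # each ex has keys: prompt, chosen, rejected
        rationale = rationale_model(RATIONALE_TEMPLATE.format(**ex))
        augmented.append({**ex, "rationale": rationale})
    return augmented
```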
[ "Hoang Anh Just", "Ming Jin", "Anit Kumar Sahu", "Huy Phan", "Ruoxi Jia" ]
https://openreview.net/forum?id=LH2ZKviJoI
LH2ZKviJoI
LH2ZKviJoI
[ "~Hoang_Anh_Just1", "~Ming_Jin1", "~Anit_Kumar_Sahu1", "~Huy_Phan2", "~Ruoxi_Jia1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/c4624a7b191f2cf23500caabb1557300455d715c.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "data-centric AI", "rationales" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ just2025datacentric, title={Data-Centric Human Preference with Rationales for Direct Preference Alignment}, author={Hoang Anh Just and Ming Jin and Anit Kumar Sahu and Huy Phan and Ruoxi Jia}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=LH2ZKviJoI} }
just|datacentric_human_preference_with_rationales_for_direct_preference_alignment
null
null
null
null
null
SlowFast-LLaVA-1.5: A Family of Token-Efficient Video Large Language Models for Long-Form Video Understanding
We introduce SlowFast-LLaVA-1.5, a family of video large language models offering a token-efficient solution for long-form video understanding.
We introduce SlowFast-LLaVA-1.5 (abbreviated as SF-LLaVA-1.5), a family of video large language models (LLMs) offering a token-efficient solution for long-form video understanding. We incorporate the two-stream SlowFast mechanism into a streamlined training pipeline, and perform joint video-image training on a carefully curated data mixture of only publicly available datasets. Our primary focus is on highly efficient model scales (1B and 3B), demonstrating that even relatively small Video LLMs can achieve state-of-the-art performance on video understanding, meeting the demand for mobile-friendly models. Experimental results demonstrate that SF-LLaVA-1.5 achieves superior performance on a wide range of video and image tasks, with robust results at all model sizes (ranging from 1B to 7B). Notably, SF-LLaVA-1.5 achieves state-of-the-art results in long-form video understanding (e.g., LongVideoBench and MLVU) and excels at small scales across various video benchmarks.
[ "Mingze Xu", "Mingfei Gao", "Shiyu Li", "Jiasen Lu", "Zhe Gan", "Zhengfeng Lai", "Meng Cao", "Kai Kang", "Yinfei Yang", "Afshin Dehghan" ]
https://openreview.net/forum?id=L7jS3peM3w
L7jS3peM3w
L7jS3peM3w
[ "~Mingze_Xu2", "~Mingfei_Gao1", "~Shiyu_Li2", "~Jiasen_Lu2", "~Zhe_Gan1", "~Zhengfeng_Lai1", "~Meng_Cao2", "~Kai_Kang2", "~Yinfei_Yang1", "~Afshin_Dehghan5" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/15abd95f797d26263ca04908ab5ee202cab985a0.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Multimodal LLM", "Video Understanding", "Video Question Answering", "Long-Form Video Understanding" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ xu2025slowfastllava, title={SlowFast-{LL}a{VA}-1.5: A Family of Token-Efficient Video Large Language Models for Long-Form Video Understanding}, author={Mingze Xu and Mingfei Gao and Shiyu Li and Jiasen Lu and Zhe Gan and Zhengfeng Lai and Meng Cao and Kai Kang and Yinfei Yang and Afshin Dehghan}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=L7jS3peM3w} }
xu|slowfastllava15_a_family_of_tokenefficient_video_large_language_models_for_longform_video_understanding
null
null
null
null
null
AIOS: LLM Agent Operating System
Design and implement a novel infrastructure for serving agents
LLM-based intelligent agents face significant deployment challenges, particularly related to resource management. Allowing unrestricted access to LLM or tool resources can lead to inefficient or even potentially harmful resource allocation and utilization for agents. Furthermore, the absence of proper scheduling and resource management mechanisms in current agent designs hinders concurrent processing and limits overall system efficiency. To address these challenges, this paper proposes the architecture of AIOS (LLM-based AI Agent Operating System) under the context of managing LLM-based agents. It introduces a novel architecture for serving LLM-based agents by isolating resources and LLM-specific services from agent applications into an AIOS kernel. This AIOS kernel provides fundamental services (e.g., scheduling, context management, memory management, storage management, access control) for runtime agents. To enhance usability, AIOS also includes an AIOS SDK, a comprehensive suite of APIs designed for utilizing functionalities provided by the AIOS kernel. Experimental results demonstrate that using AIOS can achieve up to $2.1\times$ faster execution for serving agents built by various agent frameworks. The source code is available at https://github.com/agiresearch/AIOS.
[ "Kai Mei", "Xi Zhu", "Wujiang Xu", "Mingyu Jin", "Wenyue Hua", "Zelong Li", "Shuyuan Xu", "Ruosong Ye", "Yingqiang Ge", "Yongfeng Zhang" ]
https://openreview.net/forum?id=L4HHkCDz2x
L4HHkCDz2x
L4HHkCDz2x
[ "~Kai_Mei1", "~Xi_Zhu2", "~Wujiang_Xu1", "~Mingyu_Jin1", "~Wenyue_Hua1", "~Zelong_Li1", "~Shuyuan_Xu1", "~Ruosong_Ye1", "~Yingqiang_Ge1", "~Yongfeng_Zhang1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/985019bb0dc542ae8a26b7f9997df9cd05f2b4ec.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Large Language Model", "LLM Agent" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ mei2025aios, title={{AIOS}: {LLM} Agent Operating System}, author={Kai Mei and Xi Zhu and Wujiang Xu and Mingyu Jin and Wenyue Hua and Zelong Li and Shuyuan Xu and Ruosong Ye and Yingqiang Ge and Yongfeng Zhang}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=L4HHkCDz2x} }
mei|aios_llm_agent_operating_system
null
null
null
null
null
In-context Ranking Preference Optimization
We introduce IRPO, a framework that optimizes LLMs using natural, in-context ranking feedback to enhance ranking quality while reducing computational cost.
Recent developments in Direct Preference Optimization (DPO) allow large language models (LLMs) to function as implicit ranking models by maximizing the margin between preferred and non-preferred responses. In practice, user feedback on such lists typically involves identifying a few relevant items in context rather than providing detailed pairwise comparisons for every possible item pair. Moreover, many complex information retrieval tasks, such as conversational agents and summarization systems, critically depend on ranking the highest-quality outputs at the top, further emphasizing the need to support natural and flexible forms of user feedback. To address the challenge of limited and sparse pairwise feedback in the in-context setting, we propose an In-context Ranking Preference Optimization (IRPO) framework that directly optimizes LLMs based on ranking lists constructed during inference. To further capture the natural and flexible forms of feedback, IRPO extends the DPO objective by incorporating both the relevance of items and their positions in the list. Modeling these aspects jointly is non-trivial, as ranking metrics are inherently discrete and non-differentiable, making direct optimization challenging. To overcome this, IRPO introduces a differentiable objective based on positional aggregation of pairwise item preferences, enabling effective gradient-based optimization of discrete ranking metrics. We further provide theoretical insights showing that IRPO (i) automatically emphasizes items with greater disagreement between the model and the reference ranking, and (ii) has a gradient linked to an importance sampling estimator, resulting in an unbiased gradient estimate with reduced variance. Empirical evaluations demonstrate that IRPO outperforms standard DPO approaches in ranking performance, highlighting its effectiveness and efficiency in aligning LLMs with direct in-context ranking preferences.
[ "Junda Wu", "Rohan Surana", "Zhouhang Xie", "Yiran Shen", "Yu Xia", "Tong Yu", "Ryan A. Rossi", "Prithviraj Ammanabrolu", "Julian McAuley" ]
https://openreview.net/forum?id=L2NPhLAKEd
L2NPhLAKEd
L2NPhLAKEd
[ "~Junda_Wu1", "~Rohan_Surana1", "~Zhouhang_Xie1", "~Yiran_Shen2", "~Yu_Xia9", "~Tong_Yu3", "~Ryan_A._Rossi2", "~Prithviraj_Ammanabrolu1", "~Julian_McAuley1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/60089be2b3c493053166fa47885cbbe7dfd578c6.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "direct preference optimization", "large language model" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ wu2025incontext, title={In-context Ranking Preference Optimization}, author={Junda Wu and Rohan Surana and Zhouhang Xie and Yiran Shen and Yu Xia and Tong Yu and Ryan A. Rossi and Prithviraj Ammanabrolu and Julian McAuley}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=L2NPhLAKEd} }
wu|incontext_ranking_preference_optimization
null
null
null
null
null
MSRS: Evaluating Multi-Source Retrieval-Augmented Generation
This paper introduces a scalable framework for constructing evaluation benchmarks that challenge RAG systems to integrate information across distinct sources and generate long-form responses.
Retrieval-augmented systems are typically evaluated in settings where information required to answer the query can be found within a single source or the answer is short-form or factoid-based. However, many real-world applications demand the ability to integrate and summarize information scattered across multiple sources, where no single source is sufficient to respond to the user's question. In such settings, the retrieval component of a RAG pipeline must recognize a variety of relevance signals, and the generation component must connect and synthesize information across multiple sources. We present a scalable framework for constructing evaluation benchmarks that challenge RAG systems to integrate information across distinct sources and generate long-form responses. Using our framework, we build two new benchmarks on Multi-Source Retrieval and Synthesis: MSRS-Story and MSRS-Meet, representing narrative synthesis and summarization tasks, respectively, that require retrieval from large collections. Our extensive experiments with various RAG pipelines—including sparse and dense retrievers combined with frontier LLMs—reveal that generation quality is highly dependent on retrieval effectiveness, which varies greatly by task. While multi-source synthesis proves challenging even in an oracle retrieval setting, we find that reasoning models significantly outperform standard LLMs at this distinct step.
[ "Rohan Phanse", "Ej Zhou", "Kejian Shi", "WENCAI ZHANG", "Yixin Liu", "Yilun Zhao", "Arman Cohan" ]
https://openreview.net/forum?id=KtGsJm8bOC
KtGsJm8bOC
KtGsJm8bOC
[ "~Rohan_Phanse1", "~Ej_Zhou1", "~Kejian_Shi2", "~WENCAI_ZHANG1", "~Yixin_Liu2", "~Yilun_Zhao1", "~Arman_Cohan1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/9e9b1b9fad20b01f4b0cdee3401c415981cb042a.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Retrieval-Augmented Generation", "Retrieval", "Summarization", "Evaluation" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ phanse2025msrs, title={{MSRS}: Benchmarking Multi-Source Retrieval-Augmented Generation}, author={Rohan Phanse and Ej Zhou and Kejian Shi and WENCAI ZHANG and Yixin Liu and Yilun Zhao and Arman Cohan}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=KtGsJm8bOC} }
phanse|msrs_evaluating_multisource_retrievalaugmented_generation
null
null
null
null
null
Not All Data Are Unlearned Equally
We investigate how data frequency and model scale affect the feasibility of gradient-based unlearning.
Machine unlearning is concerned with removing from a trained model the knowledge learned from particular data points. In the context of large language models (LLMs), unlearning has recently received increased attention, particularly for removing knowledge about named entities from models for privacy purposes. While various approaches have been proposed to address the unlearning problem, most existing approaches treat all data points to be unlearned equally; that is, unlearning that Montreal is a city in Canada is treated exactly the same as unlearning the phone number of the first author of this paper. In this work, we show that this "all data is equal" assumption does not hold for LLM unlearning. We study how the success of unlearning depends on the frequency of the targeted knowledge in a model's pre-training data and find that frequency strongly affects unlearning: more frequent knowledge is harder to unlearn. Additionally, we uncover a misalignment between probability- and generation-based evaluations of unlearning and show that this problem worsens as models become larger. Overall, our experiments highlight the need for better evaluation practices and novel methods for LLM unlearning that take the training data of models into account.
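For readers unfamiliar with the mechanism under study, the following is a minimal sketch of gradient-ascent unlearning, the kind of gradient-based method the TL;DR refers to. The model choice (gpt2), the forget set, step count, and learning rate are placeholder assumptions of mine rather than the paper's experimental setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # small stand-in model; the paper studies larger LLMs
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
opt = torch.optim.AdamW(model.parameters(), lr=1e-5)

forget_texts = ["<fact to be unlearned goes here>"]  # hypothetical forget set

model.train()
for step in range(10):
    for text in forget_texts:
        ids = tok(text, return_tensors="pt").input_ids
        loss = model(input_ids=ids, labels=ids).loss  # standard LM loss
        (-loss).backward()  # ascend on the forget loss instead of descending
        opt.step()
        opt.zero_grad()
```

The paper's point is that how well such an update removes the targeted knowledge (and how it is measured, via probabilities versus generations) depends on how often that knowledge appeared during pretraining.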
[ "Aravind Krishnan", "Siva Reddy", "Marius Mosbach" ]
https://openreview.net/forum?id=Kd97lfFfTu
Kd97lfFfTu
Kd97lfFfTu
[ "~Aravind_Krishnan1", "~Siva_Reddy1", "~Marius_Mosbach1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/ad8913eceb4cf60ba758bd7f69cd282e564dc397.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "LLM unlearning", "analysis", "frequency" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ krishnan2025not, title={Not All Data Are Unlearned Equally}, author={Aravind Krishnan and Siva Reddy and Marius Mosbach}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=Kd97lfFfTu} }
krishnan|not_all_data_are_unlearned_equally
null
null
null
null
null
Planted in Pretraining, Swayed by Finetuning: A Case Study on the Origins of Cognitive Biases in LLMs
This work studies the origins of cognitive biases in LLMs using causal experiments and shows that while biases are slightly affected by instruction-tuning randomness, they primarily stem from pretraining.
Large language models (LLMs) exhibit cognitive biases: systematic tendencies toward irrational decision-making, similar to those seen in humans. Prior work has found that these biases vary across models and can be amplified by instruction tuning. However, it remains unclear whether these differences in biases stem from pretraining, finetuning, or even random noise due to training stochasticity. We propose a two-step causal experimental approach to disentangle these factors. First, we finetune models multiple times using different random seeds to study how training randomness affects over $30$ cognitive biases. Second, we introduce \emph{cross-tuning}: swapping instruction datasets between models to isolate bias sources. This swap uses datasets that led to different bias patterns, directly testing whether biases are dataset-dependent. Our findings reveal that while training randomness introduces some variability, biases are mainly shaped by pretraining: models with the same pretrained backbone exhibit more similar bias patterns than those sharing only finetuning data. These insights suggest that understanding biases in finetuned models requires considering their pretraining origins, especially given the high post-finetuning variability of these biases. This perspective can guide future efforts to develop principled strategies for evaluating and mitigating bias in LLMs. See our code and models at: https://itay1itzhak.github.io/planted-in-pretraining.
[ "Itay Itzhak", "Yonatan Belinkov", "Gabriel Stanovsky" ]
https://openreview.net/forum?id=KQhUEoPmJy
KQhUEoPmJy
KQhUEoPmJy
[ "~Itay_Itzhak1", "~Yonatan_Belinkov1", "~Gabriel_Stanovsky1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/15e8a81f3173cf11f46a82a010f9ced451f97b73.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "bias", "cognitive biases", "large language models", "LLM biases", "bias analysis", "instruction tuning", "pretraining biases", "causal experiments", "bias evaluation", "machine learning biases", "model robustness", "language model interpretability", "bias sources" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ itzhak2025planted, title={Planted in Pretraining, Swayed by Finetuning: A Case Study on the Origins of Cognitive Biases in {LLM}s}, author={Itay Itzhak and Yonatan Belinkov and Gabriel Stanovsky}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=KQhUEoPmJy} }
itzhak|planted_in_pretraining_swayed_by_finetuning_a_case_study_on_the_origins_of_cognitive_biases_in_llms
null
null
null
null
null