Dataset Viewer
Auto-converted to Parquet
| Column | Type | Stats |
| --- | --- | --- |
| title | stringlengths | 21 – 128 |
| content_TLDR | stringlengths | 40 – 250 |
| abstract | stringlengths | 613 – 2.09k |
| authors | listlengths | 1 – 42 |
| openreview_url | stringlengths | 42 – 42 |
| id | stringlengths | 10 – 10 |
| forum | stringlengths | 10 – 10 |
| authorids | listlengths | 1 – 42 |
| venue | dict | |
| venueid | dict | |
| pdf_url | dict | |
| invitation | stringclasses | 1 value |
| group | stringclasses | 1 value |
| venue_name | stringclasses | 1 value |
| year | int64 | 2.03k – 2.03k |
| conference | stringclasses | 1 value |
| content_keywords | listlengths | 1 – 16 |
| content_code_of_ethics | stringclasses | 1 value |
| content_author_guide | stringclasses | 1 value |
| content_flagged_for_ethics_review | bool | 1 class |
| content_ethics_comments | stringclasses | 11 values |
| content__bibtex | stringlengths | 246 – 1.01k |
| content_paperhash | stringlengths | 29 – 134 |
| content_supplementary_material | stringclasses | 73 values |
| content_award_nomination | bool | 1 class |
| content_reciprocal_reviewing_status | stringclasses | 1 value |
| content_reciprocal_reviewing_author | stringclasses | 4 values |
| content_reciprocal_reviewing_exemption_reason | dict | |
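The columns above map directly onto the Hugging Face `datasets` API. Below is a minimal sketch of loading and filtering the rows; the repository id is a placeholder assumption, since this page does not state it.

```python
# Minimal sketch of loading this dataset with the `datasets` library.
# NOTE: "some-org/colm-2025-papers" is a hypothetical repository id, not taken from this page.
from datasets import load_dataset

ds = load_dataset("some-org/colm-2025-papers", split="train")

print(ds.column_names)           # title, content_TLDR, abstract, authors, openreview_url, ...
row = ds[0]
print(row["title"])              # paper title
print(row["openreview_url"])     # https://openreview.net/forum?id=...
print(row["content_keywords"])   # list of keyword strings

# Example: keep only papers whose keyword list mentions retrieval-augmented generation.
rag_papers = ds.filter(
    lambda r: any("rag" in k.lower() for k in (r["content_keywords"] or []))
)
print(len(rag_papers))
```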
Jigsaw Puzzles: Splitting Harmful Questions to Jailbreak Large Language Models in Multi-turn Interactions
In this work, we propose Jigsaw Puzzles (JSP), a straightforward yet effective multi-turn jailbreak strategy, exposing LLM vulnerabilities to inform future safety improvements.
Large language models (LLMs) have exhibited outstanding performance in engaging with humans and addressing complex questions by leveraging their vast implicit knowledge and robust reasoning capabilities. However, such models are vulnerable to jailbreak attacks, leading to the generation of harmful responses. Despite recent research on single-turn jailbreak strategies to facilitate the development of defence mechanisms, the challenge of revealing vulnerabilities in multi-turn settings remains relatively under-explored. In this work, we propose Jigsaw Puzzles (JSP), a straightforward yet effective multi-turn jailbreak strategy against advanced LLMs. JSP splits questions into harmless fractions as the input for each turn, and requests LLMs to reconstruct and respond to the questions over multi-turn interactions. Our results demonstrate that the proposed JSP jailbreak bypasses original safeguards against explicitly harmful content, achieving an average attack success rate of 93.76% on 189 harmful queries across 5 advanced LLMs (Gemini-1.5-Pro, Llama-3.1-70B, GPT-4, GPT-4o, GPT-4o-mini), and exhibits consistent performance on jailbreaking benchmarks. Moreover, JSP shows strong resistance to input-side and output-side defence tactics. Warning: this paper contains offensive examples.
[ "Hao Yang", "Lizhen Qu", "Ehsan Shareghi", "Gholamreza Haffari" ]
https://openreview.net/forum?id=zuNM3eoPVi
zuNM3eoPVi
zuNM3eoPVi
[ "~Hao_Yang26", "~Lizhen_Qu2", "~Ehsan_Shareghi1", "~Gholamreza_Haffari2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/37b5d26cf61599e9f7a4d742ff910b1026aec236.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Jailbreak", "Red teaming" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
true
Does not discuss potentially harmful ramifications and dual use
@inproceedings{ yang2025jigsaw, title={Jigsaw Puzzles: Splitting Harmful Questions to Jailbreak Large Language Models in Multi-turn Interactions}, author={Hao Yang and Lizhen Qu and Ehsan Shareghi and Gholamreza Haffari}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=zuNM3eoPVi} }
yang|jigsaw_puzzles_splitting_harmful_questions_to_jailbreak_large_language_models_in_multiturn_interactions
null
null
null
null
null
Agent S2: A Compositional Generalist-Specialist Framework for Computer Use Agents
State-of-the-art results on Computer Use using a framework of Generalist and Specialist modules.
Computer use agents automate digital tasks by directly interacting with graphical user interfaces (GUIs) on computers and mobile devices, offering significant potential to enhance human productivity by completing an open-ended space of user queries. However, current agents face significant challenges: imprecise grounding of GUI elements, difficulties with long-horizon task planning, and performance bottlenecks from relying on single generalist models for diverse cognitive tasks. To this end, we introduce Agent S2, a novel compositional framework that delegates cognitive responsibilities across various generalist and specialist models. We propose a novel Mixture-of-Grounding technique to achieve precise GUI localization and introduce Proactive Hierarchical Planning, dynamically refining action plans at multiple temporal scales in response to evolving observations. Evaluations demonstrate that Agent S2 establishes new state-of-the-art (SOTA) performance on three prominent computer use benchmarks. Specifically, Agent S2 achieves 18.9% and 32.7% relative improvements over leading baseline agents such as Claude Computer Use and UI-TARS on the OSWorld 15-step and 50-step evaluation. Moreover, Agent S2 generalizes effectively to other operating systems and applications, surpassing previous best methods by 52.8% on WindowsAgentArena and by 16.52% on AndroidWorld relatively. Code available at https://github.com/simular-ai/Agent-S.
[ "Saaket Agashe", "Kyle Wong", "Vincent Tu", "Jiachen Yang", "Ang Li", "Xin Eric Wang" ]
https://openreview.net/forum?id=zg5is4GJ3R
zg5is4GJ3R
zg5is4GJ3R
[ "~Saaket_Agashe1", "~Kyle_Wong1", "~Vincent_Tu1", "~Jiachen_Yang1", "~Ang_Li1", "~Xin_Eric_Wang2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/51a372ef953dfccf2d22d4657f707fe1bf9383b2.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Computer Use", "GUI Agents", "Multimodal Large Language Models", "Planning", "Grounding", "Vision" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ agashe2025agent, title={Agent S2: A Compositional Generalist-Specialist Framework for Computer Use Agents}, author={Saaket Agashe and Kyle Wong and Vincent Tu and Jiachen Yang and Ang Li and Xin Eric Wang}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=zg5is4GJ3R} }
agashe|agent_s2_a_compositional_generalistspecialist_framework_for_computer_use_agents
/attachment/ecc357b402b416c7ea4804242770e4c521a46cfd.zip
null
null
null
null
GenerationPrograms: Fine-grained Attribution with Executable Programs
GenerationPrograms: Fine-grained Attribution via Neural Modular Trees
Recent large language models (LLMs) achieve impressive performance in text generation but often fail to accurately attribute their outputs, undermining trust and verifiability. Moreover, existing attribution methods do not explain how and why the model leverages the provided source documents to arrive at its final response, limiting interpretability. To overcome these challenges, we introduce a modular generation framework, GenerationPrograms, inspired by recent advancements in executable ``code agent'' architectures. Unlike conventional generation methods that simultaneously generate outputs and attributions or rely on post-hoc attribution, GenerationPrograms decomposes the process into two distinct stages: first, creating an executable program plan composed of modular text operations (such as paraphrasing, compression, and fusion) explicitly tailored to the query, and second, executing these operations following the program's specified instructions to produce the final response. Empirical evaluations demonstrate that GenerationPrograms significantly improves attribution quality at both document-level and sentence-level granularity across two long-form question-answering tasks. We further demonstrate that GenerationPrograms can effectively function as a post-hoc attribution method, outperforming traditional techniques in recovering accurate attributions. In addition, the interpretable programs generated by GenerationPrograms enable localized refinement through modular-level improvements that further enhance overall attribution quality.
[ "David Wan", "Eran Hirsch", "Elias Stengel-Eskin", "Ido Dagan", "Mohit Bansal" ]
https://openreview.net/forum?id=zTKYKiWzIm
zTKYKiWzIm
zTKYKiWzIm
[ "~David_Wan1", "~Eran_Hirsch1", "~Elias_Stengel-Eskin1", "~Ido_Dagan1", "~Mohit_Bansal2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/879c600a2233a7fa325b3ccd7a81cd7277332584.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "long-form qa", "rag", "summarization", "attributed text generation" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ wan2025generationprograms, title={GenerationPrograms: Fine-grained Attribution with Executable Programs}, author={David Wan and Eran Hirsch and Elias Stengel-Eskin and Ido Dagan and Mohit Bansal}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=zTKYKiWzIm} }
wan|generationprograms_finegrained_attribution_with_executable_programs
/attachment/22e800fcc6dc11a936f21b1f04a507d404c9515d.zip
null
null
null
null
Can A Society of Generative Agents Simulate Human Behavior and Inform Public Health Policy? A Case Study on Vaccine Hesitancy
Investigate if a multi LLM agent system can simulate human health behaviors and inform policymaking.
Can we simulate a sandbox society with generative agents to model human behavior, thereby reducing the over-reliance on real human trials for assessing public policies? In this work, we investigate the feasibility of simulating health-related decision-making, using vaccine hesitancy, defined as the delay in acceptance or refusal of vaccines despite the availability of vaccination services (MacDonald, 2015), as a case study. To this end, we introduce the VacSim framework with 100 generative agents powered by Large Language Models (LLMs). VacSim simulates vaccine policy outcomes with the following steps: 1) instantiate a population of agents with demographics based on census data; 2) connect the agents via a social network and model vaccine attitudes as a function of social dynamics and disease-related information; 3) design and evaluate various public health interventions aimed at mitigating vaccine hesitancy. To align with real-world results, we also introduce simulation warmup and attitude modulation to adjust agents' attitudes. We propose a series of evaluations to assess the reliability of various LLM simulations. Experiments indicate that models like Llama and Qwen can simulate aspects of human behavior but also highlight real-world alignment challenges, such as inconsistent responses with demographic profiles. This early exploration of LLM-driven simulations is not meant to serve as definitive policy guidance; instead, it serves as a call for action to examine social simulation for policy development.
[ "Abe Bohan Hou", "Hongru Du", "Yichen Wang", "Jingyu Zhang", "Zixiao Wang", "Paul Pu Liang", "Daniel Khashabi", "Lauren M Gardner", "Tianxing He" ]
https://openreview.net/forum?id=zSbecER9il
zSbecER9il
zSbecER9il
[ "~Abe_Bohan_Hou1", "~Hongru_Du1", "~Yichen_Wang4", "~Jingyu_Zhang2", "~Zixiao_Wang6", "~Paul_Pu_Liang1", "~Daniel_Khashabi2", "~Lauren_M_Gardner1", "~Tianxing_He1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/efdbcf1c87e1f5c9d8ded3b5bd028477a557f88e.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "LLM agent", "multi-agent system", "social simulation", "public health", "AI for health" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ hou2025can, title={Can A Society of Generative Agents Simulate Human Behavior and Inform Public Health Policy? A Case Study on Vaccine Hesitancy}, author={Abe Bohan Hou and Hongru Du and Yichen Wang and Jingyu Zhang and Zixiao Wang and Paul Pu Liang and Daniel Khashabi and Lauren M Gardner and Tianxing He}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=zSbecER9il} }
hou|can_a_society_of_generative_agents_simulate_human_behavior_and_inform_public_health_policy_a_case_study_on_vaccine_hesitancy
null
null
null
null
null
REFA: Reference Free Alignment with Fine-Grained Length Control
Reference-free alignment methods that optimize over multiple user preferences with fine-grained control of length
To mitigate reward hacking from response verbosity, modern preference optimization methods are increasingly adopting length normalization (e.g., SimPO, ORPO, LN-DPO). While effective against this bias, we demonstrate that length normalization itself introduces a failure mode: the **URSLA shortcut**. Here models learn to satisfy the alignment objective by prematurely truncating low-quality responses rather than learning from their semantic content. To address this, we introduce **REFA**, a new alignment framework that exerts probabilistic control over the structural token governing termination. Our core innovation is a new class of regularizers that operate directly on the probability of the End-of-Sequence (EOS) token, a previously unexploited control lever. This token-level intervention provides a principled solution to the URSLA shortcut, ensuring genuine quality improvements. Furthermore, it unlocks a versatile mechanism for managing the alignment-efficiency tradeoff, enabling practitioners to fine-tune models that adhere to specific token budgets. Empirically, REFA achieves a **60.29\%** win rate and a **52.17\%** length-controlled win rate on AlpacaEval2 with Llama-3-8B-Instruct, demonstrating the power of our token-level control paradigm.
[ "Taneesh Gupta", "Rahul Madhavan", "Xuchao Zhang", "Chetan Bansal", "Saravan Rajmohan" ]
https://openreview.net/forum?id=zP6DJaBBcR
zP6DJaBBcR
zP6DJaBBcR
[ "~Taneesh_Gupta1", "~Rahul_Madhavan1", "~Xuchao_Zhang1", "~Chetan_Bansal1", "~Saravan_Rajmohan2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/dc1c639c05cb38e22dd2b44774293041d8886ec9.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Model Alignment", "RLHF", "Preference Optimization" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ gupta2025refa, title={{REFA}: Reference Free Alignment with Fine-Grained Length Control}, author={Taneesh Gupta and Rahul Madhavan and Xuchao Zhang and Chetan Bansal and Saravan Rajmohan}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=zP6DJaBBcR} }
gupta|refa_reference_free_alignment_with_finegrained_length_control
null
null
null
null
null
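As a rough, non-authoritative illustration of the token-level control lever described in the REFA abstract above, the sketch below reads off the probability a policy places on the end-of-sequence token and turns it into a penalty term. The loss form and coefficient are assumptions, not the paper's actual objective.

```python
# Illustrative sketch only: an EOS-probability regularizer in the spirit of the
# REFA abstract above. The penalty form and weight are assumptions.
import torch
import torch.nn.functional as F

def eos_regularizer(logits: torch.Tensor, eos_token_id: int, weight: float = 0.1) -> torch.Tensor:
    """Discourage premature termination by penalizing EOS probability mass
    at non-final positions. logits: (batch, seq_len, vocab_size)."""
    log_probs = F.log_softmax(logits, dim=-1)        # (B, T, V)
    eos_prob = log_probs[..., eos_token_id].exp()    # (B, T)
    return weight * eos_prob[:, :-1].mean()          # average over all but the last position

# Hypothetical usage inside a preference-optimization step:
# loss = preference_loss + eos_regularizer(policy_logits, tokenizer.eos_token_id)
```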
Investigating Intersectional Bias in Large Language Models using Confidence Disparities in Coreference Resolution
We propose a fairness benchmark that evaluates intersectional biases in LLMs based on disparities in model confidence while performing coreference resolution on different intersectional identities
Large language models (LLMs) have achieved impressive performance, leading to their widespread adoption as decision-support tools in resource-constrained contexts like hiring and admissions. There is, however, scientific consensus that AI systems can reflect and exacerbate societal biases, raising concerns about identity-based harm when used in critical social contexts. Prior work has laid a solid foundation for assessing bias in LLMs by evaluating demographic disparities in different language reasoning tasks. In this work, we extend single-axis fairness evaluations to examine intersectional bias, recognizing that when multiple axes of discrimination intersect, they create distinct patterns of disadvantage. We create a new benchmark called WinoIdentity by augmenting the WinoBias dataset with 25 demographic markers across 10 attributes, including age, nationality, and race, intersected with binary gender, yielding 245,700 prompts to evaluate 50 distinct bias patterns. Focusing on harms of omission due to underrepresentation, we investigate bias through the lens of uncertainty and propose a group (un)fairness metric called \emph{Coreference Confidence Disparity} which measures whether models are more or less confident for some intersectional identities than others. We evaluate five recently published LLMs and find confidence disparities as high as 40\% along various demographic attributes including body type, sexual orientation and socio-economic status, with models being most uncertain about doubly-disadvantaged identities in anti-stereotypical settings, such as when assigning transgender women to historically male-dominated occupations. Surprisingly, coreference confidence decreases even for hegemonic or privileged markers (e.g., 'White' or 'cisgender'), indicating that the recent impressive performance of LLMs is more likely due to memorization than logical reasoning. Notably, these are two independent failures in value alignment and validity that can compound to cause social harm.
[ "Falaah Arif Khan", "Nivedha Sivakumar", "Yinong Oliver Wang", "Katherine Metcalf", "Cezanne Camacho", "Barry-John Theobald", "Luca Zappella", "Nicholas Apostoloff" ]
https://openreview.net/forum?id=zOw2it5Ni6
zOw2it5Ni6
zOw2it5Ni6
[ "~Falaah_Arif_Khan1", "~Nivedha_Sivakumar1", "~Yinong_Oliver_Wang1", "~Katherine_Metcalf1", "~Cezanne_Camacho1", "~Barry-John_Theobald1", "~Luca_Zappella1", "~Nicholas_Apostoloff1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/036165617f0c285d7a8f21490ec7432f50b68556.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "fairness", "uncertainty", "intersectionality" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ khan2025investigating, title={Investigating Intersectional Bias in Large Language Models using Confidence Disparities in Coreference Resolution}, author={Falaah Arif Khan and Nivedha Sivakumar and Yinong Oliver Wang and Katherine Metcalf and Cezanne Camacho and Barry-John Theobald and Luca Zappella and Nicholas Apostoloff}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=zOw2it5Ni6} }
khan|investigating_intersectional_bias_in_large_language_models_using_confidence_disparities_in_coreference_resolution
null
null
null
null
null
MeMAD: Structured Memory of Debates for Enhanced Multi-Agent Reasoning
We propose Memory-Augmented Multi-Agent Debate (MeMAD), which systematically organizes and reuses past debate transcripts to improve performance on complex reasoning tasks without requiring parameter updates.
Large Language Models (LLMs) demonstrate remarkable in-context learning capabilities but often struggle with complex, multi-step reasoning. Multi-Agent Debate (MAD) frameworks partially address these limitations by enabling iterative agent interactions. However, they neglect valuable historical insights by treating each new debate independently. In this paper, we propose Memory-Augmented MAD (MeMAD), a parameter-free MAD framework that systematically organizes and reuses past debate transcripts. MeMAD stores structured representations of successful and unsuccessful reasoning attempts enriched with self-reflections and peer feedback, and retrieves them via semantic similarity at inference time to inform new reasoning tasks. Our experiments on challenging mathematical reasoning, scientific question answering, and language understanding benchmarks show that MeMAD achieves significant accuracy gains (up to 3.3\% over conventional MAD baselines) without parameter updates. Our findings underscore structured memory as a pivotal mechanism for achieving deeper and more reliable multi-agent reasoning in LLMs. Code is available at \url{https://github.com/LSHCoding/MeMAD}.
[ "Shuai Ling", "Lizi Liao", "Dongmei Jiang", "Weili Guan" ]
https://openreview.net/forum?id=zLbmsdyTiN
zLbmsdyTiN
zLbmsdyTiN
[ "~Shuai_Ling1", "~Lizi_Liao1", "~Dongmei_Jiang2", "~Weili_Guan4" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/a0b1930d621647e4dd73a698d9879287d9ebcb8d.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Multi-Agent Debate", "Memory Augmentation" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ ling2025memad, title={Me{MAD}: Structured Memory of Debates for Enhanced Multi-Agent Reasoning}, author={Shuai Ling and Lizi Liao and Dongmei Jiang and Weili Guan}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=zLbmsdyTiN} }
ling|memad_structured_memory_of_debates_for_enhanced_multiagent_reasoning
null
null
null
null
null
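A minimal sketch of the store-and-retrieve step described in the MeMAD abstract above: past debate records are embedded and the most semantically similar ones are fetched at inference time. The `embed` function and record fields are placeholders, not the authors' implementation.

```python
# Sketch of semantic-similarity retrieval over stored debate records (placeholder
# embedding; record fields are assumptions based on the abstract above).
import numpy as np

def embed(text: str) -> np.ndarray:
    # Stand-in for a real sentence encoder; deterministic random vector for illustration.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=384)
    return v / np.linalg.norm(v)

memory: list[dict] = []  # each entry: {"task", "transcript", "reflection", "vec"}

def store(task: str, transcript: str, reflection: str) -> None:
    memory.append({"task": task, "transcript": transcript,
                   "reflection": reflection, "vec": embed(task)})

def retrieve(query: str, k: int = 3) -> list[dict]:
    """Return the k stored debates whose tasks are most similar to the query."""
    q = embed(query)
    return sorted(memory, key=lambda m: float(q @ m["vec"]), reverse=True)[:k]
```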
Values in the Wild: Discovering and Mapping Values in Real-World Language Model Interactions
Our privacy-preserving analysis of values in real-world language model interactions reveals a novel taxonomy of AI values that differs from human frameworks, is highly context-dependent, and becomes most explicit/legible during moments of resistance.
AI assistants interact with millions of real users every day, imparting normative judgments that can have significant personal and societal impact—but little is known about what values guide these interactions in practice. To address this, we develop a method to empirically analyze values expressed in hundreds of thousands of real-world conversations with Claude models. We empirically discover and taxonomize 3,308 AI values, and study how model values and responses depend on context. We find that Claude expresses many professional and intellectual values, and typically supports prosocial human values while resisting values like "moral nihilism." While some values appear consistently (e.g. "professionalism"), most are highly context-dependent—"harm prevention" emerges when the model resists users, "historical accuracy" when discussing controversial events, "healthy boundaries" in relationship advice, and "human agency" in technology ethics discussions. By providing the first large-scale empirical mapping of AI values in deployment, this work creates a foundation for more grounded evaluation and design of values in increasingly influential AI systems.
[ "Saffron Huang", "Esin DURMUS", "Kunal Handa", "Miles McCain", "Alex Tamkin", "Michael Stern", "Jerry Hong", "Deep Ganguli" ]
https://openreview.net/forum?id=zJHZJClG1Z
zJHZJClG1Z
zJHZJClG1Z
[ "~Saffron_Huang1", "~Esin_DURMUS2", "~Kunal_Handa1", "~Miles_McCain1", "~Alex_Tamkin1", "~Michael_Stern1", "~Jerry_Hong1", "~Deep_Ganguli2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/d9cb13b573fe0a00779ee4fba3d32dee93dce2d3.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "language models", "values", "AI ethics", "AI values", "empirical analysis", "human-AI interaction", "value alignment", "privacy-preserving analysis", "value pluralism", "AI and society" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ huang2025values, title={Values in the Wild: Discovering and Mapping Values in Real-World Language Model Interactions}, author={Saffron Huang and Esin DURMUS and Kunal Handa and Miles McCain and Alex Tamkin and Michael Stern and Jerry Hong and Deep Ganguli}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=zJHZJClG1Z} }
huang|values_in_the_wild_discovering_and_mapping_values_in_realworld_language_model_interactions
null
null
null
null
null
Deep Binding of Language Model Virtual Personas: a Study on Approximating Political Partisan Misperceptions
We propose a method to build virtual personas for deeper user binding and demonstrate its superiority in approximating metaperception in political science.
Large language models (LLMs) are increasingly capable of simulating human behavior, offering cost-effective ways to estimate user responses during the early phases of survey design. While previous studies have examined whether models can reflect individual opinions or attitudes, we argue that a higher-order binding of virtual personas requires successfully approximating not only the opinions of a user as an identified member of a group, but also the nuanced ways in which that user perceives and evaluates those outside the group. In particular, faithfully simulating how humans perceive different social groups is critical for applying LLMs to various political science studies, including timely topics on polarization dynamics, inter-group conflict, and democratic backsliding. To this end, we propose a novel methodology for constructing virtual personas with synthetic user "backstories" generated as extended, multi-turn interview transcripts. Our generated backstories are longer, rich in detail, and consistent in authentically describing a singular individual, compared to previous methods. We show that virtual personas conditioned on our backstories closely replicate human response distributions (up to an 87% improvement as measured by Wasserstein Distance) and produce effect sizes that closely match those observed in the original studies. Altogether, our work extends the applicability of LLMs beyond estimating individual self-opinions, enabling their use in a broader range of human studies.
[ "Minwoo Kang", "Suhong Moon", "Seung Hyeong Lee", "Ayush Raj", "Joseph Suh", "David Chan" ]
https://openreview.net/forum?id=zHdSCtNmM4
zHdSCtNmM4
zHdSCtNmM4
[ "~Minwoo_Kang1", "~Suhong_Moon1", "~Seung_Hyeong_Lee1", "~Ayush_Raj3", "~Joseph_Suh1", "~David_Chan3" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/08c4c62957f19503b9cb14a781f9366ed5d2ff58.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "user approximation", "metaperception", "social psycholog", "democratic backsliding", "outgroup hostility" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ kang2025deep, title={Deep Binding of Language Model Virtual Personas: a Study on Approximating Political Partisan Misperceptions}, author={Minwoo Kang and Suhong Moon and Seung Hyeong Lee and Ayush Raj and Joseph Suh and David Chan}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=zHdSCtNmM4} }
kang|deep_binding_of_language_model_virtual_personas_a_study_on_approximating_political_partisan_misperceptions
/attachment/887605a60f6440213636e72d74e01e35df30335b.zip
null
null
null
null
QUDsim: Quantifying Discourse Similarities in LLM-Generated Text
We introduce an abstraction based on linguistics theories in Questions Under Discussion (QUD) and question semantics to quantify repetitive discourse structures found in texts generated by large language models.
As large language models become increasingly capable at various tasks including writing, the need to generate unique and creative content arises. Although LLMs have the ability to generate text covering diverse topics, there is an overall sense of repetitiveness across texts that we aim to formalize. Such familiarity between documents is induced through the persistence of underlying discourse structures. However, existing similarity metrics dependent on lexical overlap and syntactic patterns are overly sensitive to volatility in content overlap, thus making them unsuitable for detecting $\textit{structural}$ similarities. We introduce an abstraction based on linguistics theories in Questions Under Discussion (QUD) and question semantics to help quantify differences in discourse progression. We then use this framework to build $\textbf{QUDsim}$, a similarity metric that can detect discursive parallels between documents. Using QUDsim, we find that LLMs often reuse discourse structures (more so than humans) to create seemingly new documents by simply swapping content. Furthermore, LLMs are not only repetitive and structurally uniform, but are also divergent from human authors in the types of structures they use.
[ "Ramya Namuduri", "Yating Wu", "Anshun Asher Zheng", "Manya Wadhwa", "Greg Durrett", "Junyi Jessy Li" ]
https://openreview.net/forum?id=zFz1BJu211
zFz1BJu211
zFz1BJu211
[ "~Ramya_Namuduri1", "~Yating_Wu1", "~Anshun_Asher_Zheng1", "~Manya_Wadhwa1", "~Greg_Durrett1", "~Junyi_Jessy_Li2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/5974197cbc65ce889f5cb6d2e8a8bf2a2394d65c.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "discourse diversity", "discourse structure", "large language models", "Questions Under Discussion" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ namuduri2025qudsim, title={{QUD}sim: Quantifying Discourse Similarities in {LLM}-Generated Text}, author={Ramya Namuduri and Yating Wu and Anshun Asher Zheng and Manya Wadhwa and Greg Durrett and Junyi Jessy Li}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=zFz1BJu211} }
namuduri|qudsim_quantifying_discourse_similarities_in_llmgenerated_text
null
null
null
null
null
Probing then Editing Response Personality of Large Language Models
This paper introduces a layer-wise probing framework revealing how LLMs encode personality traits within parameters and further proposes a progressive perturbation method that edits personality during inference using the probing classifier.
Large Language Models (LLMs) have demonstrated promising capabilities to generate responses that simulate consistent personality traits. Despite the major attempts to analyze personality expression through output-based evaluations, little is known about how such traits are internally encoded within LLM parameters. In this paper, we introduce a layer-wise probing framework to systematically investigate the layer-wise capability of LLMs in simulating personality for responding. We conduct probing experiments on 11 open-source LLMs over the PersonalityEdit benchmark and find that LLMs predominantly simulate personality for responding in their middle and upper layers, with instruction-tuned models demonstrating a slightly clearer separation of personality traits. Furthermore, by interpreting the trained probing hyperplane as a layer-wise boundary for each personality category, we propose a layer-wise perturbation method to edit the personality expressed by LLMs during inference. Our results show that even when the prompt explicitly specifies a particular personality, our method can still successfully alter the response personality of LLMs. Interestingly, the difficulty of converting between certain personality traits varies substantially, which aligns with the representational distances in our probing experiments. Finally, we conduct a comprehensive MMLU benchmark evaluation and time overhead analysis, demonstrating that our proposed personality editing method incurs only minimal degradation in general capabilities while maintaining low training costs and acceptable inference latency. Our code is publicly available at https://github.com/universe-sky/probing-then-editing-personality.
[ "Tianjie Ju", "Zhenyu Shao", "Bowen Wang", "Yujia Chen", "Zhuosheng Zhang", "Hao Fei", "Mong-Li Lee", "Wynne Hsu", "Sufeng Duan", "Gongshen Liu" ]
https://openreview.net/forum?id=z9SbcYYP0M
z9SbcYYP0M
z9SbcYYP0M
[ "~Tianjie_Ju1", "~Zhenyu_Shao2", "~Bowen_Wang10", "~Yujia_Chen5", "~Zhuosheng_Zhang1", "~Hao_Fei1", "~Mong-Li_Lee1", "~Wynne_Hsu1", "~Sufeng_Duan1", "~Gongshen_Liu2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/498cc4dc5e0ee0f10ac6252c267617bc18a5cc09.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "large language model", "personality", "interpretability", "knowledge editing" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ ju2025probing, title={Probing then Editing Response Personality of Large Language Models}, author={Tianjie Ju and Zhenyu Shao and Bowen Wang and Yujia Chen and Zhuosheng Zhang and Hao Fei and Mong-Li Lee and Wynne Hsu and Sufeng Duan and Gongshen Liu}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=z9SbcYYP0M} }
ju|probing_then_editing_response_personality_of_large_language_models
null
null
null
null
null
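A generic sketch of the layer-wise probing setup described in the abstract above: one linear classifier per layer, fit on hidden states labeled with the personality category of the response. This is standard probing under assumed inputs, not the authors' code.

```python
# Generic layer-wise probing sketch (standard technique; inputs are assumed):
# fit one logistic-regression probe per layer on personality-labeled hidden states.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def train_layer_probes(hidden_states: np.ndarray, labels: np.ndarray) -> list[float]:
    """hidden_states: (num_examples, num_layers, hidden_dim); labels: (num_examples,).
    Returns held-out accuracy per layer."""
    accuracies = []
    for layer in range(hidden_states.shape[1]):
        X = hidden_states[:, layer, :]
        X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)
        probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        accuracies.append(probe.score(X_te, y_te))
    return accuracies  # higher middle/upper-layer accuracy indicates where traits are encoded
```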
CodeXEmbed: A Generalist Embedding Model Family for Multilingual and Multi-task Code Retrieval
We introduce CodeXEmbed, a large-scale code embedding model achieving SOTA on CoIR and strong BeIR performance, enhancing code retrieval and RAG.
Despite the success of text retrieval in many NLP tasks, code retrieval remains a largely underexplored area. Most text retrieval systems are tailored for natural language queries, often neglecting the specific challenges of retrieving code. This gap leaves existing models unable to effectively capture the diversity of programming languages and tasks across different domains, highlighting the need for more focused research in code retrieval. To address this, we introduce CodeXEmbed, a family of large-scale code embedding models ranging from 400M to 7B parameters. Our novel training pipeline unifies multiple programming languages and transforms various code-related tasks into a common retrieval framework, enhancing model generalizability and retrieval performance. Our 7B model achieves a new state-of-the-art (SOTA) in code retrieval, topping the CoIR Leaderboard. In addition to excelling in code retrieval, our models demonstrate competitive performance on the widely adopted BeIR text retrieval benchmark, offering versatility across domains. Experimental results demonstrate that improving retrieval performance significantly enhances end-to-end Retrieval-Augmented Generation (RAG) performance for code-related tasks.
[ "Ye Liu", "Rui Meng", "Shafiq Joty", "silvio savarese", "Caiming Xiong", "Yingbo Zhou", "Semih Yavuz" ]
https://openreview.net/forum?id=z3lG70Azbg
z3lG70Azbg
z3lG70Azbg
[ "~Ye_Liu4", "~Rui_Meng1", "~Shafiq_Joty1", "~silvio_savarese2", "~Caiming_Xiong1", "~Yingbo_Zhou1", "~Semih_Yavuz1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/556d386c8c3457c2f182c786da442e3b3cbd673a.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Code and Text Retrieval; Code Embedding Model; Text Embedding Model; Retrieval-Augmented Code Generation" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ liu2025codexembed, title={Code{XE}mbed: A Generalist Embedding Model Family for Multilingual and Multi-task Code Retrieval}, author={Ye Liu and Rui Meng and Shafiq Joty and silvio savarese and Caiming Xiong and Yingbo Zhou and Semih Yavuz}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=z3lG70Azbg} }
liu|codexembed_a_generalist_embedding_model_family_for_multilingual_and_multitask_code_retrieval
null
null
null
null
null
Retrieval-Augmented Generation with Conflicting Evidence
We propose a benchmark and multi-agent framework for RAG systems to handle ambiguity, conflicting evidence, and misinformation in real-world retrieval scenarios.
Large language model (LLM) agents are increasingly employing retrieval-augmented generation (RAG) to improve the factuality of their responses. However, in practice, these systems often need to handle ambiguous user queries and potentially conflicting information from multiple sources while also suppressing inaccurate information from noisy or irrelevant documents. Prior work has generally studied and addressed these challenges in isolation, considering only one aspect at a time, such as handling ambiguity or robustness to noise and misinformation. We instead consider multiple factors simultaneously, proposing (i) RAMDocs (Retrieval with Ambiguity and Misinformation in Documents), a new dataset that simulates complex and realistic scenarios for conflicting evidence for a user query, including ambiguity, misinformation, and noise; and (ii) MADAM-RAG, a multi-agent approach in which LLM agents debate over the merits of an answer over multiple rounds, allowing an aggregator to collate responses corresponding to disambiguated entities while discarding misinformation and noise, thereby handling diverse sources of conflict jointly. We demonstrate the effectiveness of MADAM-RAG using both closed and open-source models on AmbigDocs – which requires presenting all valid answers for ambiguous queries – improving over strong RAG baselines by up to 11.40%, and on FaithEval – which requires suppressing misinformation – where we improve by up to 15.80% (absolute) with Llama3.3-70B-Instruct. Furthermore, we find that our proposed RAMDocs dataset poses a challenge for existing RAG baselines (the most performant Llama3.3-70B-Instruct only yields up to a 32.60 exact match score), as it requires handling conflicting information due to ambiguity, noise, and misinformation simultaneously. While MADAM-RAG begins to address these conflicting factors, our analysis indicates that a substantial gap remains, especially when increasing the level of imbalance in supporting evidence and misinformation.
[ "Han Wang", "Archiki Prasad", "Elias Stengel-Eskin", "Mohit Bansal" ]
https://openreview.net/forum?id=z1MHB2m3V9
z1MHB2m3V9
z1MHB2m3V9
[ "~Han_Wang9", "~Archiki_Prasad1", "~Elias_Stengel-Eskin1", "~Mohit_Bansal2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/1452223fd8e091c35c8990c14f2d5e4979e749bc.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Retrieval-augmented Generation", "Knowledge Conflict", "Multi-agent" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ wang2025retrievalaugmented, title={Retrieval-Augmented Generation with Conflicting Evidence}, author={Han Wang and Archiki Prasad and Elias Stengel-Eskin and Mohit Bansal}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=z1MHB2m3V9} }
wang|retrievalaugmented_generation_with_conflicting_evidence
null
null
null
null
null
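A compressed sketch of the multi-agent pattern described in the MADAM-RAG abstract above: one agent per retrieved document, a few debate rounds, then an aggregator. `call_llm`, the prompts, and the round count are placeholders, not the paper's exact protocol.

```python
# Sketch of per-document agents debating over rounds, followed by an aggregator
# (loosely following the description above; prompts and round count are assumptions).
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in any chat-completion client here")

def madam_style_rag(question: str, documents: list[str], rounds: int = 2) -> str:
    answers = [call_llm(f"Answer using only this document.\nDoc: {d}\nQ: {question}")
               for d in documents]
    for _ in range(rounds):
        revised = []
        for i, doc in enumerate(documents):
            others = [a for j, a in enumerate(answers) if j != i]
            revised.append(call_llm(
                f"Doc: {doc}\nQ: {question}\nOther agents said: {others}\n"
                "Revise your answer if the others reveal an error; otherwise keep it."))
        answers = revised
    # Aggregator: collate answers for disambiguated entities, drop unsupported claims.
    return call_llm("Combine these per-document answers into one response, keeping all "
                    f"valid answers and discarding misinformation or noise:\n{answers}\nQ: {question}")
```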
Déjà Vu: Multilingual LLM Evaluation through the Lens of Machine Translation Evaluation
What can multilingual LLM evaluation learn from MT evaluation?
Generation capabilities and language coverage of multilingual large language models (mLLMs) are advancing rapidly. However, evaluation practices for generative abilities of mLLMs are still lacking comprehensiveness, scientific rigor, and consistent adoption across research labs, which undermines their potential to meaningfully guide mLLM development. We draw parallels with machine translation (MT) evaluation, a field that faced similar challenges and has, over decades, developed transparent reporting standards and reliable evaluations for multilingual generative models. Through targeted experiments across key stages of the generative evaluation pipeline, we demonstrate how best practices from MT evaluation can deepen the understanding of quality differences between models. Additionally, we identify essential components for robust meta-evaluation of mLLMs, ensuring the evaluation methods themselves are rigorously assessed. We distill these insights into a checklist of actionable recommendations for mLLM research and development.
[ "Julia Kreutzer", "Eleftheria Briakou", "Sweta Agrawal", "Marzieh Fadaee", "Tom Kocmi" ]
https://openreview.net/forum?id=yxzVanFoij
yxzVanFoij
yxzVanFoij
[ "~Julia_Kreutzer1", "~Eleftheria_Briakou1", "~Sweta_Agrawal1", "~Marzieh_Fadaee2", "~Tom_Kocmi1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/d390b099c4e0bc139be3a1a975837803ed0bf6db.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "multilingual", "evaluation", "meta-evaluation", "machine translation evaluation" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ kreutzer2025dj, title={D\'ej\`a Vu: Multilingual {LLM} Evaluation through the Lens of Machine Translation Evaluation}, author={Julia Kreutzer and Eleftheria Briakou and Sweta Agrawal and Marzieh Fadaee and Tom Kocmi}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=yxzVanFoij} }
kreutzer|déjà_vu_multilingual_llm_evaluation_through_the_lens_of_machine_translation_evaluation
null
null
null
null
null
CONCAP: Seeing Beyond English with Concepts Retrieval-Augmented Captioning
Image captioning with concept and captions retrieval augmented generation.
Multilingual vision-language models have made significant strides in image captioning, yet they still lag behind their English counterparts due to limited multilingual training data and costly large-scale model parameterization. Retrieval-augmented generation (RAG) offers a promising alternative by conditioning caption generation on retrieved examples in the target language, reducing the need for extensive multilingual training. However, multilingual RAG captioning models often depend on retrieved captions translated from English, which can introduce mismatches and linguistic biases relative to the source language. We introduce CONCAP, a multilingual image captioning model that integrates retrieved captions with image-specific concepts, enhancing the contextualization of the input image and grounding the captioning process across different languages. Experiments on the XM3600 dataset indicate that CONCAP enables strong performance on low- and mid-resource languages, with highly reduced data requirements. Our findings highlight the effectiveness of concept-aware retrieval augmentation in bridging multilingual performance gaps.
[ "George Ibrahim", "Rita Ramos", "Yova Kementchedjhieva" ]
https://openreview.net/forum?id=yfnaK1pZxu
yfnaK1pZxu
yfnaK1pZxu
[ "~George_Ibrahim1", "~Rita_Ramos1", "~Yova_Kementchedjhieva1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/b261ae04f4cd4225e18480084dc5543014d43819.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Image Captioning", "Concepts", "Retrieval", "RAG", "Multilingual" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ ibrahim2025concap, title={{CONCAP}: Seeing Beyond English with Concepts Retrieval-Augmented Captioning}, author={George Ibrahim and Rita Ramos and Yova Kementchedjhieva}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=yfnaK1pZxu} }
ibrahim|concap_seeing_beyond_english_with_concepts_retrievalaugmented_captioning
null
null
null
null
null
Prompt-Reverse Inconsistency: LLM Self-Inconsistency Beyond Generative Randomness and Prompt Paraphrasing
This paper introduces Prompt-Reverse Inconsistency (PRIN), where Large Language Models give conflicting answers when identifying correct versus incorrect responses, raising concerns about their logical reliability.
While the inconsistency of LLMs is not a novel topic, prior research has predominantly addressed two types of generative inconsistencies: i) Randomness Inconsistency: running the same LLM for multiple trials yields varying responses; ii) Paraphrase Inconsistency: paraphrased prompts result in different responses from the same LLM. Randomness Inconsistency arises from the inherent randomness due to stochastic sampling in generative models, while Paraphrase Inconsistency is a consequence of the language modeling objectives, where paraphrased prompts alter the distribution of vocabulary logits. This research discovers Prompt-Reverse Inconsistency (PRIN), a new form of LLM self-inconsistency: given a question and a couple of LLM-generated answer candidates, the LLM often gives conflicting responses when prompted "Which are correct answers?" and "Which are incorrect answers?". PRIN poses a serious concern, as it undermines the credibility of LLM-as-a-judge and suggests a challenge for LLMs to adhere to basic logical rules. We conduct a series of experiments to investigate PRIN, examining the extent of PRIN across different LLMs, methods to mitigate it, potential applications, and its relationship with Randomness Inconsistency and Paraphrase Inconsistency. As the first study to explore PRIN, our findings offer valuable insights into the inner workings of LLMs and contribute to advancing trustworthy AI.
[ "Jihyun Janice Ahn", "Wenpeng Yin" ]
https://openreview.net/forum?id=yfRkNRFLzl
yfRkNRFLzl
yfRkNRFLzl
[ "~Jihyun_Janice_Ahn1", "~Wenpeng_Yin1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/44b6a89c82f5042a567ea17d3070285af52ead04.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Large Language Model", "Natural Language Process", "Inconsistency of LLMs", "Prompt-Reverse Inconsistency", "Randomness Inconsistency", "Paraphrase Inconsistency" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ ahn2025promptreverse, title={Prompt-Reverse Inconsistency: {LLM} Self-Inconsistency Beyond Generative Randomness and Prompt Paraphrasing}, author={Jihyun Janice Ahn and Wenpeng Yin}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=yfRkNRFLzl} }
ahn|promptreverse_inconsistency_llm_selfinconsistency_beyond_generative_randomness_and_prompt_paraphrasing
null
null
null
null
null
Learning to Generate Unit Tests for Automated Debugging
LLM training pipeline for generating unit tests for code debugging and assessing code correctness
Unit tests (UTs) play an instrumental role in assessing code correctness as well as providing feedback to large language models (LLMs), motivating automated test generation. However, we uncover a trade-off between generating unit test inputs that reveal errors when given faulty code and correctly predicting the unit test output without access to the gold solution. To address this trade-off, we propose UTGen, which teaches LLMs to generate unit test inputs that reveal errors along with their correct expected outputs based on task descriptions. Since model-generated tests can provide noisy signals (e.g., from incorrectly predicted outputs), we propose UTDebug that (i) scales UTGen via test-time compute to improve UT output prediction, and (ii) validates and backtracks edits based on multiple generated UTs to avoid overfitting, and helps LLMs debug effectively. We show that UTGen outperforms other LLM-based baselines by 7.59% based on a metric measuring the presence of both error-revealing UT inputs and correct UT outputs. When used with UTDebug, we find that feedback from UTGen's unit tests improves pass@1 accuracy of Qwen2.5 32B on HumanEvalFix and our own harder debugging split of MBPP+ by over 3.17% and 12.35% (respectively) over other LLM-based UT generation baselines. Moreover, we observe that feedback from the Qwen2.5 32B-based UTGen model can enhance debugging with frontier LLMs like GPT-4o by 13.8%. Lastly, we demonstrate that UTGen is a better judge for code correctness, outperforming a state-of-the-art trained 8B reward model by 4.43% on HumanEval+ with best-of-10 sampling using Qwen2.5 7B.
[ "Archiki Prasad", "Elias Stengel-Eskin", "Justin Chen", "Zaid Khan", "Mohit Bansal" ]
https://openreview.net/forum?id=yeVBHPLXxi
yeVBHPLXxi
yeVBHPLXxi
[ "~Archiki_Prasad1", "~Elias_Stengel-Eskin1", "~Justin_Chen1", "~Zaid_Khan1", "~Mohit_Bansal2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/d0e3dc4e72f75d7f7f18fe8c0ab78512b18f88a0.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Unit Tests Generation", "LLMs for code generation", "LLMs for code debugging" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ prasad2025learning, title={Learning to Generate Unit Tests for Automated Debugging}, author={Archiki Prasad and Elias Stengel-Eskin and Justin Chen and Zaid Khan and Mohit Bansal}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=yeVBHPLXxi} }
prasad|learning_to_generate_unit_tests_for_automated_debugging
/attachment/f3e3abd874e6c1ad0a461bbcbb6c9cf7fe922262.zip
null
null
null
null
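A toy sketch of the validate-and-backtrack idea from the abstract above: a candidate edit is kept only if it passes more of the generated unit tests than the previous version. The test representation and acceptance rule are simplified assumptions.

```python
# Toy sketch of validating a code edit against generated unit tests and
# backtracking otherwise (simplified; the acceptance rule is an assumption).
from typing import Callable

def passes(func: Callable, test_input, expected_output) -> bool:
    try:
        return func(test_input) == expected_output
    except Exception:
        return False

def count_passing(func: Callable, unit_tests: list[tuple]) -> int:
    return sum(passes(func, x, y) for x, y in unit_tests)

def accept_or_backtrack(old_func: Callable, new_func: Callable, unit_tests: list[tuple]) -> Callable:
    """Keep the edited function only if it passes strictly more generated tests."""
    if count_passing(new_func, unit_tests) > count_passing(old_func, unit_tests):
        return new_func
    return old_func  # backtrack to the previous version
```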
VideoSAVi: Self-Aligned Video Language Models without Human Supervision
VideoSAVi introduces a self-aligning approach that enables video-language models to generate high-quality preference pairs from their own outputs, achieving state-of-the-art performance without external supervision.
Recent advances in video-large language models (Video-LLMs) have led to significant progress in video understanding. Current preference optimization methods often rely on proprietary APIs or ground-truth captions to generate preference data (i.e., pairs of model outputs ranked based on their quality or alignment with human judgment), which is then used to train models for video-language alignment. This approach is both costly and labor-intensive. To address this limitation, we introduce $\textbf{VideoSAVi}$ ($\underline{\textbf{S}}$elf-$\underline{\textbf{A}}$ligned $\underline{\textbf{Vi}}$deo Language Model), a self-training pipeline that enables Video-LLMs to reason over video content without external supervision. Our approach includes a self-critiquing mechanism that identifies reasoning errors in the model's initial responses and generates improved alternatives, creating preference pairs directly from video content. VideoSAVi then applies Direct Preference Optimization (DPO) to iteratively train the model using the preference data, thus enhancing its temporal and spatial reasoning for video understanding. Experiments show that VideoSAVi delivers significant improvements across multiple benchmarks, including a +4.2 percentage point gain on MVBench, +3.9 on PerceptionTest, and +6.8 on the challenging EgoSchema dataset compared to baseline models. Our model-agnostic approach is computationally efficient, requiring only 32 frames, offering a promising direction for self-aligned video understanding without reliance on external models or annotations.
[ "Yogesh Kulkarni", "Pooyan Fazli" ]
https://openreview.net/forum?id=ybcZEWaM7U
ybcZEWaM7U
ybcZEWaM7U
[ "~Yogesh_Kulkarni1", "~Pooyan_Fazli1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/5846d0a6f494f556df2f25ba346da586faf3d28b.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Video understanding", "Self-alignment", "Video-language models", "Direct preference optimization", "Self-critiquing" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ kulkarni2025videosavi, title={Video{SAV}i: Self-Aligned Video Language Models without Human Supervision}, author={Yogesh Kulkarni and Pooyan Fazli}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=ybcZEWaM7U} }
kulkarni|videosavi_selfaligned_video_language_models_without_human_supervision
null
null
null
null
null
Streaming DiLoCo with overlapping communication
Distributed training where only a subset of the outer gradients is communicated
Training of large language models (LLMs) is typically distributed across a large number of accelerators to reduce training time. Since internal states and parameter gradients need to be exchanged at every gradient step, all devices need to be co-located using low-latency, high-bandwidth communication links to support the required high volume of data exchange. Recently, algorithms like DiLoCo have relaxed the constraint that all devices need co-location: accelerators can be grouped into ``workers'', where synchronizations between workers need only occur infrequently. This in turn means that workers can afford to be connected by lower-bandwidth communication links without affecting learning quality. However, in these methods, communication across workers still requires the same peak bandwidth as before, as the synchronizations require all parameters to be exchanged across all workers. In this paper, we improve DiLoCo in three ways. First, we synchronize only subsets of parameters in sequence, rather than all at once, which greatly reduces peak bandwidth. Second, we allow workers to continue training while synchronizing, which decreases wall clock time. Third, we quantize the data exchanged by workers, which further reduces bandwidth across workers. We show experimentally that by properly combining these modifications we can distribute training of billion-scale parameters and attain models of similar quality as before, while reducing the required bandwidth by up to two orders of magnitude.
[ "Arthur Douillard", "Yani Donchev", "J Keith Rush", "Satyen Kale", "Zachary Charles", "Gabriel Teston", "Zachary Garrett", "Jiajun Shen", "Ross McIlroy", "David Lacey", "Alexandre Rame", "Arthur Szlam", "MarcAurelio Ranzato", "Paul R Barham" ]
https://openreview.net/forum?id=yYk3zK0X6Q
yYk3zK0X6Q
yYk3zK0X6Q
[ "~Arthur_Douillard1", "~Yani_Donchev1", "~J_Keith_Rush1", "~Satyen_Kale2", "~Zachary_Charles1", "~Gabriel_Teston1", "~Zachary_Garrett1", "~Jiajun_Shen1", "~Ross_McIlroy1", "~David_Lacey1", "~Alexandre_Rame1", "~Arthur_Szlam3", "~MarcAurelio_Ranzato1", "~Paul_R_Barham2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/cfdba2f22f8184c00ce2bee496c5685b22b6922e.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "distributed training", "large-scale", "llm" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ douillard2025streaming, title={Streaming DiLoCo with overlapping communication}, author={Arthur Douillard and Yani Donchev and J Keith Rush and Satyen Kale and Zachary Charles and Gabriel Teston and Zachary Garrett and Jiajun Shen and Ross McIlroy and David Lacey and Alexandre Rame and Arthur Szlam and MarcAurelio Ranzato and Paul R Barham}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=yYk3zK0X6Q} }
douillard|streaming_diloco_with_overlapping_communication
null
null
null
null
null
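The fragment-wise synchronization in the Streaming DiLoCo entry above can be illustrated with a toy loop over parameter fragments. This sketch averages each fragment's parameters directly rather than applying DiLoCo's outer optimizer, ignores the communication/compute overlap and quantization, and assumes a simple round-robin fragment schedule for illustration.

```python
# Toy sketch of the "streaming" idea from the Streaming DiLoCo entry above:
# instead of synchronizing all parameters at every outer step, workers cycle
# through parameter fragments and reconcile one fragment at a time, which
# caps peak bandwidth at the fragment size rather than the full model size.
import numpy as np

rng = np.random.default_rng(0)
n_workers, n_params, n_fragments = 4, 12, 3
fragments = np.array_split(np.arange(n_params), n_fragments)

# Each worker holds its own copy of the parameters.
workers = [rng.normal(size=n_params) for _ in range(n_workers)]

def local_inner_steps(params: np.ndarray) -> np.ndarray:
    """Stand-in for H local optimization steps on a worker's own data shard."""
    return params - 0.1 * rng.normal(size=params.shape)

for outer_step in range(6):
    workers = [local_inner_steps(w) for w in workers]
    # Synchronize only one fragment this outer step (round-robin schedule).
    idx = fragments[outer_step % n_fragments]
    avg = np.mean([w[idx] for w in workers], axis=0)
    for w in workers:
        w[idx] = avg  # peak bandwidth ~ len(idx), not n_params

print("max divergence across workers:",
      np.max(np.abs(workers[0] - workers[-1])))
```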
Multilingual and Multi-Accent Jailbreaking of Audio LLMs
We propose Multi-AudioJail --- a novel audio jailbreak attack that exploits multilingual and multi-accent audio inputs enhanced with audio adversarial perturbations.
Large Audio Language Models (LALMs) have significantly advanced audio understanding but introduce critical security risks, particularly through audio jailbreaks. While prior work has focused on English-centric attacks, we expose a far more severe vulnerability: adversarial multilingual and multi-accent audio jailbreaks, where linguistic and acoustic variations dramatically amplify attack success. In this paper, we introduce Multi-AudioJail, the first systematic framework to exploit these vulnerabilities through (1) a novel dataset of adversarially perturbed multilingual/multi-accent audio jailbreaking prompts, and (2) a hierarchical evaluation pipeline revealing how acoustic perturbations (e.g., reverberation, echo, and whisper effects) interact with cross-lingual phonetics to cause jailbreak success rates (JSRs) to surge by up to +57.25 percentage points (e.g., reverberated Kenyan-accented attack on MERaLiON). Crucially, our work further reveals that multimodal LLMs are inherently more vulnerable than unimodal systems: attackers need only exploit the weakest link (e.g., non-English audio inputs) to compromise the entire model, which we empirically show by multilingual audio-only attacks achieving 3.1x higher success rates than text-only attacks. We plan to release our dataset to spur research into cross-modal defenses, urging the community to address this expanding attack surface in multimodality as LALMs evolve.
[ "Jaechul Roh", "Virat Shejwalkar", "Amir Houmansadr" ]
https://openreview.net/forum?id=yGa8CYT8kS
yGa8CYT8kS
yGa8CYT8kS
[ "~Jaechul_Roh1", "~Virat_Shejwalkar1", "~Amir_Houmansadr1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/6147a6961d7ae656a86a7d1238824560951dada6.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Audio", "LLM", "Jailbreak", "Multilingual", "Security" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
true
The audio jailbreaking content might be offensive to some audiences.
@inproceedings{ roh2025multilingual, title={Multilingual and Multi-Accent Jailbreaking of Audio {LLM}s}, author={Jaechul Roh and Virat Shejwalkar and Amir Houmansadr}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=yGa8CYT8kS} }
roh|multilingual_and_multiaccent_jailbreaking_of_audio_llms
null
null
null
null
null
Hypothesis-Driven Theory-of-Mind Reasoning for Large Language Models
We introduce a novel inference-time algorithm, ThoughtTracing, which uses LLMs to probabilistically trace and weight hypotheses about agents’ evolving mental states without relying on questions and ground-truth answers in benchmarks.
Existing LLM reasoning methods have shown impressive capabilities across various tasks, such as solving math and coding problems. However, applying these methods to scenarios without ground-truth answers or rule-based verification methods - such as tracking the mental states of an agent - remains challenging. Inspired by the sequential Monte Carlo algorithm, we introduce ThoughtTracing, an inference-time reasoning algorithm designed to trace the mental states of specific agents by generating hypotheses and weighting them based on observations without relying on ground-truth solutions to questions in datasets. Our algorithm is modeled after the Bayesian theory-of-mind framework, using LLMs to approximate probabilistic inference over agents' evolving mental states based on their perceptions and actions. We evaluate ThoughtTracing on diverse theory-of-mind benchmarks, demonstrating significant performance improvements compared to baseline LLMs. Our experiments also reveal interesting behaviors of recent reasoning models (e.g., o3 and R1) on theory-of-mind, highlighting how social reasoning differs from other domains.
[ "Hyunwoo Kim", "Melanie Sclar", "Tan Zhi-Xuan", "Lance Ying", "Sydney Levine", "Yang Liu", "Joshua B. Tenenbaum", "Yejin Choi" ]
https://openreview.net/forum?id=yGQqTuSJPK
yGQqTuSJPK
yGQqTuSJPK
[ "~Hyunwoo_Kim3", "~Melanie_Sclar1", "~Tan_Zhi-Xuan1", "~Lance_Ying1", "~Sydney_Levine1", "~Yang_Liu60", "~Joshua_B._Tenenbaum1", "~Yejin_Choi1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/5d8ab1d8f4c22c924ea0d6ee9ae0a18ef00a958b.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "theory of mind", "reasoning", "large language model", "inference-time algorithm" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ kim2025hypothesisdriven, title={Hypothesis-Driven Theory-of-Mind Reasoning for Large Language Models}, author={Hyunwoo Kim and Melanie Sclar and Tan Zhi-Xuan and Lance Ying and Sydney Levine and Yang Liu and Joshua B. Tenenbaum and Yejin Choi}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=yGQqTuSJPK} }
kim|hypothesisdriven_theoryofmind_reasoning_for_large_language_models
null
null
null
null
null
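The hypothesis-tracing loop summarized in the ThoughtTracing entry above resembles a particle filter over natural-language mental-state hypotheses. In the sketch below, `propose_hypotheses` and `score_likelihood` are hypothetical stand-ins for LLM calls, and the multinomial resampling step is a generic sequential Monte Carlo choice rather than the paper's exact procedure.

```python
# Rough particle-filter-style sketch of the hypothesis tracing described in
# the ThoughtTracing entry above. LLM calls are replaced with stubs.
import random

def propose_hypotheses(observation: str, prior: list[str]) -> list[str]:
    """Stand-in for an LLM proposing candidate mental-state hypotheses."""
    return [f"agent believes '{observation}' (variant {i})" for i in range(3)]

def score_likelihood(hypothesis: str, observation: str) -> float:
    """Stand-in for an LLM judging how well a hypothesis explains the observation."""
    return random.random()

def trace_mental_states(observations: list[str], n_particles: int = 6) -> list[str]:
    particles = ["agent has no specific belief yet"] * n_particles
    for obs in observations:
        # Propagate: each particle spawns refreshed hypotheses conditioned on obs.
        candidates = [h for p in particles for h in propose_hypotheses(obs, [p])]
        weights = [score_likelihood(h, obs) for h in candidates]
        total = sum(weights) or 1.0
        # Resample particles in proportion to their weights.
        particles = random.choices(
            candidates, weights=[w / total for w in weights], k=n_particles
        )
    return particles

if __name__ == "__main__":
    random.seed(0)
    final = trace_mental_states(["Sally puts the ball in the basket",
                                 "Anne moves the ball while Sally is away"])
    print(final[0])
```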
IterKey: Iterative Keyword Generation with LLMs for Enhanced Retrieval Augmented Generation
We introduce IterKey, an LLM-based iterative keyword generation method that optimizes the Retrieval-Augmented Generation process, improving accuracy by refining keywords and self-evaluating responses.
Retrieval Augmented Generation (RAG) has emerged as a way to complement the in-context knowledge of Large Language Models (LLMs) by integrating external documents. However, real-world applications demand not only accuracy but also interpretability. Dense retrieval methods provide high accuracy but lack interpretability, while sparse retrieval is transparent but often misses query intent due to keyword matching. Thus, balancing accuracy and interpretability remains a challenge. To address these issues, we introduce IterKey, an LLM-driven iterative keyword generation framework that enhances RAG via sparse retrieval. IterKey consists of three LLM-driven stages: generating keywords for retrieval, generating answers based on retrieved documents, and validating the answers. If validation fails, the process iteratively repeats with refined keywords. Across four QA tasks, experimental results show that IterKey achieves 5% to 20% accuracy improvements over BM25-based RAG and simple baselines. Its performance is comparable to dense retrieval based RAG and prior iterative query refinement methods using dense models. In summary, IterKey is a novel BM25-based iterative RAG framework that leverages LLMs to balance accuracy and interpretability.
[ "Kazuki Hayashi", "Hidetaka Kamigaito", "Shinya Kouda", "Taro Watanabe" ]
https://openreview.net/forum?id=y56BuSo8Uj
y56BuSo8Uj
y56BuSo8Uj
[ "~Kazuki_Hayashi1", "~Hidetaka_Kamigaito2", "~Shinya_Kouda1", "~Taro_Watanabe1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/89b81fc6b363dd0e2529ebd9dbc0474cdee121a7.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "retrieval-augmented generation", "RAG", "sparse retrieval", "LLM", "Iterative" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ hayashi2025iterkey, title={IterKey: Iterative Keyword Generation with {LLM}s for Enhanced Retrieval Augmented Generation}, author={Kazuki Hayashi and Hidetaka Kamigaito and Shinya Kouda and Taro Watanabe}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=y56BuSo8Uj} }
hayashi|iterkey_iterative_keyword_generation_with_llms_for_enhanced_retrieval_augmented_generation
null
null
null
null
null
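The three-stage loop in the IterKey entry above (keyword generation, retrieval-grounded answering, self-validation, then keyword refinement on failure) can be sketched in a few lines. All helpers below are hypothetical stand-ins: `bm25_search` replaces a real BM25 index and the `llm_*` functions replace prompted LLM calls.

```python
# Minimal sketch of the IterKey loop summarized above: generate keywords,
# retrieve with sparse (BM25-style) search, answer, self-validate, and
# refine keywords if validation fails.

def llm_keywords(question: str, feedback: str = "") -> list[str]:
    return (question + " " + feedback).lower().split()

def bm25_search(keywords: list[str], k: int = 5) -> list[str]:
    return [f"doc matching {' '.join(keywords[:3])} #{i}" for i in range(k)]

def llm_answer(question: str, docs: list[str]) -> str:
    return f"answer derived from {len(docs)} documents"

def llm_validate(question: str, answer: str, docs: list[str]) -> bool:
    return "derived" in answer  # stand-in for an LLM yes/no self-check

def iterkey(question: str, max_iters: int = 3) -> str:
    feedback = ""
    for _ in range(max_iters):
        keywords = llm_keywords(question, feedback)
        docs = bm25_search(keywords)
        answer = llm_answer(question, docs)
        if llm_validate(question, answer, docs):
            return answer
        feedback = "previous keywords were insufficient; add more specific terms"
    return answer  # fall back to the last attempt

print(iterkey("Who wrote the novel that inspired Blade Runner?"))
```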
Can LLM "Self-report"?: Evaluating the Validity of Self-report Scales in Measuring Personality Design in LLM-based Chatbots
Evaluating the Validity of Self-report Scales in Measuring Personality Design in LLM-based Chatbots
A chatbot’s personality design is key to interaction quality. As chatbots evolved from rule-based systems to those powered by large language models (LLMs), evaluating the effectiveness of their personality design has become increasingly complex, particularly due to the open-ended nature of interactions. A recent and widely adopted method for assessing the personality design of LLM-based chatbots is the use of self-report questionnaires. These questionnaires, often borrowed from established human personality inventories, ask the chatbot to rate itself on various personality traits. Can LLM-based chatbots meaningfully "self-report" their personality? We created 500 chatbots with distinct personality designs and evaluated the validity of their self-report personality scores by examining human perceptions formed during interactions with these chatbots. Our findings indicate that the chatbot's answers on human personality scales exhibit weak correlations with both human-perceived personality traits and the overall interaction quality. These findings raise concerns about both the criterion validity and the predictive validity of self-report methods in this context. Further analysis revealed the role of task context and interaction in the chatbot's personality design assessment. We further discuss design implications for creating more contextualized and interactive evaluation.
[ "Huiqi Zou", "Pengda Wang", "Zihan Yan", "Tianjun Sun", "Ziang Xiao" ]
https://openreview.net/forum?id=xqIwK9mNkj
xqIwK9mNkj
xqIwK9mNkj
[ "~Huiqi_Zou1", "~Pengda_Wang1", "~Zihan_Yan1", "~Tianjun_Sun1", "~Ziang_Xiao1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/388bb244be1dd675a49c4295cc3cd296fe6906a1.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "human factors in NLP; evaluation methodologies" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ zou2025can, title={Can {LLM} ''Self-report''?: Evaluating the Validity of Self-report Scales in Measuring Personality Design in {LLM}-based Chatbots}, author={Huiqi Zou and Pengda Wang and Zihan Yan and Tianjun Sun and Ziang Xiao}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=xqIwK9mNkj} }
zou|can_llm_selfreport_evaluating_the_validity_of_selfreport_scales_in_measuring_personality_design_in_llmbased_chatbots
null
null
null
null
null
Always Tell Me The Odds: Fine-grained Conditional Probability Estimation
We present a state-of-the-art model for fine-grained probability estimation of textual outcomes conditioned on context.
We present a state-of-the-art model for fine-grained probability estimation of propositions conditioned on context. Recent advances in large language models (LLMs) have significantly enhanced their reasoning capabilities, particularly on well-defined tasks with complete information. However, LLMs continue to struggle with making accurate and well-calibrated \emph{probabilistic} predictions under uncertainty or partial information. While incorporating uncertainty into model predictions often boosts performance, obtaining reliable estimates of that uncertainty remains understudied. In particular, LLM probability estimates tend to be coarse and biased towards more frequent numbers. Through a combination of human and synthetic data creation and assessment, scaling to larger models, and better supervision, we propose a set of strong and precise probability estimation models. We conduct systematic evaluations across tasks that rely on conditional probability estimation and show that our approach consistently outperforms existing fine-tuned and prompting-based methods by a large margin.
[ "Liaoyaqi Wang", "Zhengping Jiang", "Anqi Liu", "Benjamin Van Durme" ]
https://openreview.net/forum?id=xhDcG8qtw9
xhDcG8qtw9
xhDcG8qtw9
[ "~Liaoyaqi_Wang1", "~Zhengping_Jiang1", "~Anqi_Liu2", "~Benjamin_Van_Durme2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/c53d10b668e237deedcef8b115cc9a506df677e2.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Large Language Model", "Probabilistic Reasoning", "Semantics", "Calibration" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ wang2025always, title={Always Tell Me The Odds: Fine-grained Conditional Probability Estimation}, author={Liaoyaqi Wang and Zhengping Jiang and Anqi Liu and Benjamin Van Durme}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=xhDcG8qtw9} }
wang|always_tell_me_the_odds_finegrained_conditional_probability_estimation
null
null
null
null
null
CALLME: Call Graph Augmentation with Large Language Models for Javascript
Using large language models to handle edge cases in call graph construction for Javascript that static analysis cannot resolve.
Building precise call graphs for Javascript programs is a fundamental building block for many important software engineering and security applications such as bug detection, program repair, and refactoring. However, resolving dynamic calls using static analysis is challenging because it requires enumerating all possible values of both the object and the field. As a result, static call graph construction algorithms for Javascript ignore such dynamic calls, resulting in missed edges and a high false negative rate. We present a new approach, CALLME, that combines Language Models (LMs) with a custom static analyzer to address this challenge. Our key insight is in using LMs to incorporate additional modalities such as variable names, natural language documentation, and calling contexts, which are often sufficient to resolve dynamic property calls, but are difficult to incorporate in traditional static analysis. We implement our approach in CALLME and evaluate it on a dataset of call edges that are dependent on dynamic property accesses. CALLME achieves 80% accuracy and .79 F1, outperforming the state-of-the-art static analyzer by 30% and .60, respectively. To study the effectiveness of CALLME on downstream analysis tasks, we evaluate it on our manually curated dataset with 25 known Javascript vulnerabilities. CALLME can detect 24 vulnerabilities with only 3 false positives, whereas static analysis tools based on current call graph construction algorithms miss all of them.
[ "Michael Wang", "Kexin Pei", "Armando Solar-Lezama" ]
https://openreview.net/forum?id=xZi2rMUcAO
xZi2rMUcAO
xZi2rMUcAO
[ "~Michael_Wang1", "~Kexin_Pei1", "~Armando_Solar-Lezama1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/41f82f781ab3f27465ee5b45ef2e3218c13a0f63.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Javascript", "program analysis" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ wang2025callme, title={{CALLME}: Call Graph Augmentation with Large Language Models for Javascript}, author={Michael Wang and Kexin Pei and Armando Solar-Lezama}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=xZi2rMUcAO} }
wang|callme_call_graph_augmentation_with_large_language_models_for_javascript
null
null
null
null
null
Adaptive Computation Pruning for the Forgetting Transformer
We propose a method that adaptively prunes computations in the Forgetting Transformer based on forget gate values.
The recently proposed Forgetting Transformer (FoX) incorporates a forget gate into softmax attention and has shown consistently better or on-par performance compared to the standard RoPE-based Transformer. Notably, many attention heads in FoX tend to forget quickly, causing their output at each timestep to rely primarily on local context. Based on this observation, we propose Adaptive Computation Pruning (ACP) for FoX, a method that dynamically prunes computations involving input-output dependencies that are strongly decayed by the forget gate. In particular, our method performs *provably safe* pruning via a dynamically set pruning threshold that guarantees the pruned attention weights are negligible. We apply ACP to language model pretraining with FoX and show it consistently reduces the number of FLOPs and memory accesses in softmax attention by around 70\% across different model sizes and context lengths, resulting in a roughly 50\% to 70\% reduction in attention runtime (or a 2--3$\times$ speedup) and a roughly 10\% to 40\% increase in end-to-end training throughput. Furthermore, longer context lengths yield greater computational savings. All these speed improvements are achieved without any performance degradation. Our code is available at https://github.com/zhixuan-lin/forgetting-transformer.
[ "Zhixuan Lin", "Johan Obando-Ceron", "Xu Owen He", "Aaron Courville" ]
https://openreview.net/forum?id=xNj14CY5S1
xNj14CY5S1
xNj14CY5S1
[ "~Zhixuan_Lin1", "~Johan_Obando-Ceron1", "~Xu_Owen_He1", "~Aaron_Courville3" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/dabdb7eb3af1dbd15e1d91d6bb30f2a59bd77334.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "transformer", "forgetting transformer", "efficient transformer", "sequence modeling", "adaptive computation pruning", "forget gate", "sparse attention", "FlashAttention", "hardware-aware optimization" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ lin2025adaptive, title={Adaptive Computation Pruning for the Forgetting Transformer}, author={Zhixuan Lin and Johan Obando-Ceron and Xu Owen He and Aaron Courville}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=xNj14CY5S1} }
lin|adaptive_computation_pruning_for_the_forgetting_transformer
null
null
null
null
null
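A rough way to picture the pruning rule in the Adaptive Computation Pruning entry above: in a forget-gate attention model, the contribution of position j to position i is scaled by the product of the intervening forget gates, so pairs whose cumulative log decay falls below a negligibility threshold can be skipped. The fixed `log_eps` below is an assumption standing in for the paper's dynamically set, provably safe threshold, and the decay formula follows the standard Forgetting Transformer parameterization rather than anything stated in this entry.

```python
# Toy illustration of the pruning idea in the ACP entry above: with a forget
# gate, the attention weight of pair (i, j) is scaled by exp(D[i, j]) where
# D[i, j] is the sum of log forget-gate values between j and i, so strongly
# decayed pairs can be skipped without affecting the output noticeably.
import numpy as np

rng = np.random.default_rng(0)
T = 8
log_f = np.log(rng.uniform(0.6, 0.99, size=T))    # per-step log forget gates
csum = np.concatenate([[0.0], np.cumsum(log_f)])  # prefix sums

# D[i, j] = sum_{k=j+1..i} log_f[k] for j <= i, else -inf (causal mask).
i_idx, j_idx = np.meshgrid(np.arange(T), np.arange(T), indexing="ij")
D = np.where(j_idx <= i_idx, csum[i_idx + 1] - csum[j_idx + 1], -np.inf)

log_eps = np.log(1e-4)   # assumed negligibility threshold (simplification)
keep = D >= log_eps      # query-key pairs whose computation is kept
print("fraction of causal pairs kept:",
      keep.sum() / (j_idx <= i_idx).sum())
```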
Energy-Based Reward Models for Robust Language Model Alignment
We introduce Energy-Based Reward Model (EBRM), a post-hoc method to refine reward models using EBMs.
Reward models (RMs) are essential for aligning Large Language Models (LLMs) with human preference. However, they often struggle with capturing complex human preferences and generalizing to unseen data. To address these challenges, we introduce \emph{Energy-Based Reward Model} (EBRM), a lightweight post-hoc refinement framework that enhances RM robustness and generalization. EBRM models the reward distribution explicitly, capturing uncertainty in human preferences and mitigating the impact of noisy or misaligned annotations. It achieves this through conflict-aware data filtering, label-noise-aware contrastive training, and hybrid initialization. Notably, EBRM enhances RMs without retraining, making it computationally efficient and adaptable across different models and tasks. Empirical evaluations on RM benchmarks demonstrate significant improvements in both robustness and generalization, achieving up to a 5.97\% improvement in safety-critical alignment tasks compared to standard RMs. Furthermore, reinforcement learning experiments confirm that our refined rewards enhance alignment quality, effectively delaying reward hacking. These results demonstrate our approach as a scalable and effective enhancement for existing RMs and alignment pipelines.
[ "Anamika Lochab", "Ruqi Zhang" ]
https://openreview.net/forum?id=x6evCULIOQ
x6evCULIOQ
x6evCULIOQ
[ "~Anamika_Lochab1", "~Ruqi_Zhang1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/0cbcf905a030463b01e16ef7de935a5d0ef38517.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Reward Models", "Alignment", "Energy Based Models" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ lochab2025energybased, title={Energy-Based Reward Models for Robust Language Model Alignment}, author={Anamika Lochab and Ruqi Zhang}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=x6evCULIOQ} }
lochab|energybased_reward_models_for_robust_language_model_alignment
/attachment/d94995fd2c118eb604ec15b3b03867d739a2ce0c.zip
null
null
null
null
Guided Reasoning in LLM-Driven Penetration Testing Using Structured Attack Trees
We propose a reasoning pipeline for penetration testing LLM agents using a structured task tree based on proven cybersecurity kill chains. Our method achieves 74.4% attack subtask completion (vs. 35.2% by the SOTA) and requires 55.9% fewer queries.
Recent advances in large language models (LLMs) have driven interest in automating cybersecurity penetration testing workflows, offering the promise of faster and more consistent vulnerability assessment for enterprise systems. Existing LLM agents for penetration testing primarily rely on self‐guided reasoning, which can produce inaccurate or hallucinated procedural steps. As a result, the LLM agent may undertake unproductive actions, such as exploiting unused software libraries or generating cyclical responses that repeat prior tactics. In this work, we propose a reasoning pipeline for penetration testing LLM agents that incorporates a deterministic task tree built from the MITRE ATT\&CK Matrix, a proven penetration testing kill chain, to constrain the LLM's reasoning process to explicitly defined tactics, techniques, and procedures. This anchors reasoning in proven penetration testing methodologies and filters out ineffective actions by guiding the agent towards more productive attack procedures. To evaluate our approach, we built an automated penetration testing LLM agent using three LLMs (Llama-3-8B, Gemini-1.5, and GPT-4) and applied it to navigate 10 HackTheBox cybersecurity exercises with 103 discrete subtasks representing real-world cyberattack scenarios. Our proposed reasoning pipeline guided the LLM agent through 71.8\%, 72.8\%, and 78.6\% of subtasks using Llama-3-8B, Gemini-1.5, and GPT-4, respectively. Comparatively, the state-of-the-art LLM penetration testing tool using self-guided reasoning completed only 13.5\%, 16.5\%, and 75.7\% of subtasks and required 86.2\%, 118.7\%, and 205.9\% more model queries. This suggests that incorporating a deterministic task tree into LLM reasoning pipelines can enhance the accuracy and efficiency of automated cybersecurity assessments.
[ "Katsuaki Nakano", "Reza Fayyazi", "Shanchieh Yang", "Michael Zuzak" ]
https://openreview.net/forum?id=x4sdXZ7Jdu
x4sdXZ7Jdu
x4sdXZ7Jdu
[ "~Katsuaki_Nakano1", "~Reza_Fayyazi1", "~Shanchieh_Yang1", "~Michael_Zuzak1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/972da2fcf7e33064c89c2aed8f19ab736d656ab1.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Penetration Testing", "Large Language Models", "Autonomous Penetration Testing Agents" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ nakano2025guided, title={Guided Reasoning in {LLM}-Driven Penetration Testing Using Structured Attack Trees}, author={Katsuaki Nakano and Reza Fayyazi and Shanchieh Yang and Michael Zuzak}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=x4sdXZ7Jdu} }
nakano|guided_reasoning_in_llmdriven_penetration_testing_using_structured_attack_trees
null
null
null
null
null
Goedel-Prover: A Frontier Model for Open-Source Automated Theorem Proving
We introduce Goedel-Prover, an open-source language model that achieves SOTA in automated theorem proving in Lean.
We introduce Goedel-Prover, an open-source language model that achieves state-of-the-art performance in automated formal proof generation for mathematical problems. A key challenge in this field is the scarcity of formalized mathematical statements and proofs, which we address through the following approaches. First, we train statement formalizers to translate natural language math problems from Numina into the formal language Lean 4, and use an LLM to verify that the formal statements accurately preserve the content of the original problems. This results in a dataset of 1.64 million formal statements. We then iteratively build a large dataset of formal proofs by training a series of provers: each prover is able to prove many statements that the previous ones could not, and these new proofs are added to the training set for the next prover. Despite using only supervised fine-tuning, our final prover (fine-tuned on DeepSeek-Prover-V1.5-base) significantly outperforms the previous best open-source model, DeepSeek-Prover-V1.5, which uses reinforcement learning. On the MiniF2F benchmark, our model achieves a success rate of 57.6\% (Pass@32), surpassing DeepSeek-Prover-V1.5 by 7.6\%. On PutnamBench, Goedel-Prover successfully solves 7 problems (Pass@512), ranking first on the leaderboard. Furthermore, it generates 29.7K formal proofs for Lean Workbook problems, nearly doubling the 15.7K produced by prior work. We provide extensive discussion of our training methodology, highlighting the key design choices that contribute to Goedel-Prover’s strong performance. Finally, we explore reinforcement learning on top of Goedel-Prover-SFT, offering insights into its potential benefits and limitations.
[ "Yong Lin", "Shange Tang", "Bohan Lyu", "Jiayun Wu", "Hongzhou Lin", "Kaiyu Yang", "Jia LI", "Mengzhou Xia", "Danqi Chen", "Sanjeev Arora", "Chi Jin" ]
https://openreview.net/forum?id=x2y9i2HDjD
x2y9i2HDjD
x2y9i2HDjD
[ "~Yong_Lin2", "~Shange_Tang1", "~Bohan_Lyu1", "~Jiayun_Wu1", "~Hongzhou_Lin1", "~Kaiyu_Yang1", "~Jia_LI18", "~Mengzhou_Xia1", "~Danqi_Chen1", "~Sanjeev_Arora1", "~Chi_Jin1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/047be6bba4d4ce1c4aae8dfe2c987deef335ef45.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Formal reasoning", "verification", "Lean", "self-improvement" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ lin2025goedelprover, title={Goedel-Prover: A Frontier Model for Open-Source Automated Theorem Proving}, author={Yong Lin and Shange Tang and Bohan Lyu and Jiayun Wu and Hongzhou Lin and Kaiyu Yang and Jia LI and Mengzhou Xia and Danqi Chen and Sanjeev Arora and Chi Jin}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=x2y9i2HDjD} }
lin|goedelprover_a_frontier_model_for_opensource_automated_theorem_proving
null
null
null
null
null
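The iterative data-construction loop in the Goedel-Prover entry above follows a familiar expert-iteration pattern: attempt unsolved statements with the current prover, keep proofs that the verifier accepts, and fine-tune the next prover on the enlarged set. The sketch below is schematic; `attempt_proof`, `verify`, and `finetune` are hypothetical stand-ins for Lean proof sampling, the Lean checker, and supervised fine-tuning.

```python
# Schematic expert-iteration loop echoing the Goedel-Prover entry above.
import random

def attempt_proof(prover: str, statement: str) -> str:
    """Stand-in for sampling a candidate Lean proof from the current prover."""
    return f"proof_of({statement})" if random.random() < 0.3 else ""

def verify(statement: str, proof: str) -> bool:
    """Stand-in for checking the candidate proof with the Lean kernel."""
    return bool(proof)

def finetune(prover: str, training_set: list[tuple[str, str]]) -> str:
    """Stand-in for supervised fine-tuning on (statement, proof) pairs."""
    return f"{prover}+sft({len(training_set)})"

random.seed(0)
statements = [f"stmt_{i}" for i in range(20)]
training_set: list[tuple[str, str]] = []
prover = "base-prover"

for round_id in range(3):
    newly_solved = []
    for s in statements:
        proof = attempt_proof(prover, s)
        if proof and verify(s, proof):
            newly_solved.append((s, proof))
    training_set.extend(newly_solved)
    solved_ids = {x for x, _ in newly_solved}
    statements = [s for s in statements if s not in solved_ids]
    prover = finetune(prover, training_set)
    print(f"round {round_id}: solved {len(newly_solved)}, remaining {len(statements)}")
```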
EnrichIndex: Using LLMs to Enrich Retrieval Indices Offline
EnrichIndex enriches documents offline using LLMs, improving retrieval performance on complex retrieval tasks with significantly lower latency and online cost.
Existing information retrieval systems excel in cases where the language of target documents closely matches that of the user query. However, real-world retrieval systems are often required to *implicitly reason* whether a document is relevant. For example, when retrieving technical texts or tables, their relevance to the user query may be implied through a particular jargon or structure, rather than explicitly expressed in their content. Large language models (LLMs) hold great potential in identifying such implied relevance by leveraging their reasoning skills. Nevertheless, current LLM-augmented retrieval is hindered by high latency and computation cost, as the LLM typically computes the query-document relevance *online*, for every query anew. To tackle this issue, we introduce EnrichIndex, a retrieval approach which instead uses the LLM *offline* to build semantically-enriched retrieval indices, by performing a single pass over all documents in the retrieval corpus during ingestion time. Furthermore, the semantically-enriched indices can complement existing online retrieval approaches, boosting the performance of LLM re-rankers. We evaluated EnrichIndex on five retrieval tasks, involving passages and tables, and found that it outperforms strong online LLM-based retrieval systems, with an average improvement of 11.7 points in recall @ 10 and 10.6 points in NDCG @ 10 compared to strong baselines. In terms of online calls to the LLM, it processes 293.3 times fewer tokens, which greatly reduces the online latency and cost. Overall, EnrichIndex is an effective way to build better retrieval indices offline by leveraging the strong reasoning skills of LLMs.
[ "Peter Baile Chen", "Tomer Wolfson", "Mike Cafarella", "Dan Roth" ]
https://openreview.net/forum?id=wyYL5Jov6e
wyYL5Jov6e
wyYL5Jov6e
[ "~Peter_Baile_Chen1", "~Tomer_Wolfson1", "~Mike_Cafarella1", "~Dan_Roth3" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/7e22d77624c4e6b84c79cf58406e46c07740df64.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "retrieval", "offline enrichment", "implicit reasoning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ chen2025enrichindex, title={EnrichIndex: Using {LLM}s to Enrich Retrieval Indices Offline}, author={Peter Baile Chen and Tomer Wolfson and Mike Cafarella and Dan Roth}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=wyYL5Jov6e} }
chen|enrichindex_using_llms_to_enrich_retrieval_indices_offline
null
null
null
null
null
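The offline enrichment idea in the EnrichIndex entry above amounts to paying the LLM cost once per document at ingestion time and keeping the online path LLM-free. The sketch below is illustrative only: `llm_enrich` is a hypothetical stand-in for the offline LLM call, and the token-overlap scoring is a toy substitute for BM25 or dense retrieval over the enriched index.

```python
# Sketch of offline index enrichment, loosely following the EnrichIndex
# entry above: one LLM call per document at ingestion, no LLM at query time.

def llm_enrich(doc: str) -> str:
    """Stand-in for an offline LLM call that spells out a document's implied purpose."""
    return f"This document discusses {doc.split()[0].lower()} related information."

def build_enriched_index(corpus: dict[str, str]) -> dict[str, str]:
    # Single offline pass: one LLM call per document, stored alongside the text.
    return {doc_id: text + "\n" + llm_enrich(text) for doc_id, text in corpus.items()}

def retrieve(index: dict[str, str], query: str, k: int = 2) -> list[str]:
    q = set(query.lower().split())
    scored = sorted(index, key=lambda d: -len(q & set(index[d].lower().split())))
    return scored[:k]

corpus = {
    "t1": "Quarterly revenue table for the APAC region (in USD millions)",
    "t2": "Installation jargon: set LD_LIBRARY_PATH before building the wheel",
}
index = build_enriched_index(corpus)
print(retrieve(index, "which document discusses quarterly finance figures"))
```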
Elucidating the Design Space of Decay in Linear Attention
Elucidating the Design Space of Decay in Linear Attention
This paper presents a comprehensive investigation into the decay mechanisms inherent in linear complexity sequence models. We systematically delineate the design space of decay mechanisms across four pivotal dimensions: parameterization strategy, which refers to the computational methodology for decay; parameter sharing, which involves the utilization of supplementary parameters for decay computation; decay granularity, comparing scalar versus vector-based decay; and compatibility with relative positional encoding methods, such as Rotary Position Embedding (RoPE). Through an extensive series of experiments conducted on diverse language modeling tasks, we uncovered several critical insights. Firstly, the design of the parameterization strategy for decay requires meticulous consideration. Our findings indicate that effective configurations are typically confined to a specific range of parameters. Secondly, parameter sharing cannot be used arbitrarily, as it may cause decay values to be too large or too small, thereby significantly impacting performance. Thirdly, under identical parameterization strategies, scalar decay generally underperforms compared to its vector-based counterpart. However, in certain scenarios with alternative parameterization strategies, scalar decay may unexpectedly surpass vector decay in efficacy. Lastly, our analysis reveals that RoPE, a commonly employed relative positional encoding method, typically fails to provide tangible benefits to the majority of linear attention mechanisms.
[ "Zhen Qin", "Xuyang Shen", "Yiran Zhong" ]
https://openreview.net/forum?id=whXh2YxMbt
whXh2YxMbt
whXh2YxMbt
[ "~Zhen_Qin6", "~Xuyang_Shen1", "~Yiran_Zhong1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/815a0f2594e2d32ff855a8a9d677e3855205a1c4.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Linear Attention" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ qin2025elucidating, title={Elucidating the Design Space of Decay in Linear Attention}, author={Zhen Qin and Xuyang Shen and Yiran Zhong}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=whXh2YxMbt} }
qin|elucidating_the_design_space_of_decay_in_linear_attention
null
null
null
null
null
PolyGuard: A Multilingual Safety Moderation Tool for 17 Languages
We introduce PolyGuard, a new state-of-the-art multilingual safety model for safeguarding LLM generations along with PolyGuardMix for safety detection training and PolyGuardPrompts for safety guardrail evaluation.
Truly multilingual safety moderation efforts for Large Language Models (LLMs) have been hindered by a narrow focus on a small set of languages (e.g., English, Chinese) as well as a limited scope of safety definition, resulting in significant gaps in moderation capabilities. To bridge these gaps, we release PolyGuard, a new state-of-the-art multilingual safety model for safeguarding LLM generations, and the corresponding training and evaluation datasets. PolyGuard is trained on PolyGuardMix, the largest multilingual safety training corpus to date containing 1.91M samples across 17 languages (e.g., Chinese, Czech, English, Hindi). We also introduce PolyGuardPrompts, a high quality multilingual benchmark with 29K samples for the evaluation of safety guardrails. Created by combining naturally occurring multilingual human-LLM interactions and human-verified machine translations of an English-only safety dataset (WildGuardMix; Han et al., 2024), our datasets contain prompt-output pairs with labels of prompt harmfulness, response harmfulness, and response refusal. Through extensive evaluations across multiple safety and toxicity benchmarks, we demonstrate that PolyGuard outperforms existing state-of-the-art open-weight and commercial safety classifiers by 5.5%. Our contributions advance efforts toward safer multilingual LLMs for all global users.
[ "Priyanshu Kumar", "Devansh Jain", "Akhila Yerukola", "Liwei Jiang", "Himanshu Beniwal", "Thomas Hartvigsen", "Maarten Sap" ]
https://openreview.net/forum?id=wbAWKXNeQ4
wbAWKXNeQ4
wbAWKXNeQ4
[ "~Priyanshu_Kumar1", "~Devansh_Jain1", "~Akhila_Yerukola1", "~Liwei_Jiang2", "~Himanshu_Beniwal1", "~Thomas_Hartvigsen1", "~Maarten_Sap1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/083b5e10f049e428ff125b8665aea78cf0284e75.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "ai safety", "hate-speech detection" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ kumar2025polyguard, title={PolyGuard: A Multilingual Safety Moderation Tool for 17 Languages}, author={Priyanshu Kumar and Devansh Jain and Akhila Yerukola and Liwei Jiang and Himanshu Beniwal and Thomas Hartvigsen and Maarten Sap}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=wbAWKXNeQ4} }
kumar|polyguard_a_multilingual_safety_moderation_tool_for_17_languages
null
null
null
null
null
More is Less: The Pitfalls of Multi-Model Synthetic Preference Data in DPO Safety Alignment
LLMs learn about safety better from their own outputs than from those of other models.
Aligning large language models (LLMs) with human values is an increasingly critical step in post-training. Direct Preference Optimization (DPO) has emerged as a simple, yet effective alternative to reinforcement learning from human feedback (RLHF). Synthetic preference data, with its low cost and high quality, enables effective alignment through single- or multi-model generated preference data. Our study reveals a striking, safety-specific phenomenon associated with DPO alignment: Although multi-model generated data enhances performance on general tasks (ARC, Hellaswag, MMLU, TruthfulQA, Winogrande) by providing diverse responses, it also tends to facilitate reward hacking during training. This can lead to a high attack success rate (ASR) when models encounter jailbreaking prompts. The issue is particularly pronounced when employing stronger models like GPT-4o or larger models in the same family to generate chosen responses paired with target model self-generated rejected responses, resulting in dramatically poorer safety outcomes. Furthermore, with respect to safety, using solely self-generated responses (single-model generation) for both chosen and rejected pairs significantly outperforms configurations that incorporate responses from stronger models, whether used directly as chosen data or as part of a multi-model response pool. We demonstrate that multi-model preference data exhibits high linear separability between chosen and rejected responses, which allows models to exploit superficial cues rather than internalizing robust safety constraints. Our experiments, conducted on models from the Llama, Mistral, and Qwen families, consistently validate these findings. The code is available at \href{https://github.com/cacayaya/More-is-Less}{github.com/cacayaya/More-is-Less}.
[ "Yifan Wang", "Runjin Chen", "Bolian Li", "David Cho", "Yihe Deng", "Ruqi Zhang", "Tianlong Chen", "Zhangyang Wang", "Ananth Grama", "Junyuan Hong" ]
https://openreview.net/forum?id=wXOUYzNv5k
wXOUYzNv5k
wXOUYzNv5k
[ "~Yifan_Wang14", "~Runjin_Chen1", "~Bolian_Li1", "~David_Cho2", "~Yihe_Deng1", "~Ruqi_Zhang1", "~Tianlong_Chen1", "~Zhangyang_Wang1", "~Ananth_Grama1", "~Junyuan_Hong1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/094c4e24c1d72adc8f0ceeded062c9fdf6235d7b.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Alignment", "Synthetic Data", "Safety", "Large Language Models" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ wang2025more, title={More is Less: The Pitfalls of Multi-Model Synthetic Preference Data in {DPO} Safety Alignment}, author={Yifan Wang and Runjin Chen and Bolian Li and David Cho and Yihe Deng and Ruqi Zhang and Tianlong Chen and Zhangyang Wang and Ananth Grama and Junyuan Hong}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=wXOUYzNv5k} }
wang|more_is_less_the_pitfalls_of_multimodel_synthetic_preference_data_in_dpo_safety_alignment
null
null
null
null
null
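The linear-separability diagnostic mentioned in the More is Less entry above can be approximated with a simple linear probe over response embeddings: if chosen and rejected responses are easy to separate linearly, DPO can latch onto superficial cues. The Gaussian features below are synthetic stand-ins for real embeddings, so the numbers are illustrative only.

```python
# Linear probe over (synthetic) chosen/rejected embeddings, echoing the
# separability analysis described in the abstract above.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, d = 400, 32
# Simulate multi-model data: chosen and rejected come from shifted distributions.
chosen = rng.normal(loc=0.5, size=(n, d))
rejected = rng.normal(loc=-0.5, size=(n, d))
X = np.vstack([chosen, rejected])
y = np.array([1] * n + [0] * n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
# High held-out accuracy would indicate easily separable preference pairs.
print("linear probe accuracy:", round(probe.score(X_te, y_te), 3))
```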
Sherkala-Chat: Building a State-of-the-Art LLM for Kazakh in a Moderately Resourced Setting
Sherkala-Chat (8B) is a state-of-the-art, instruction-tuned open LLM for Kazakh, excelling in Kazakh language tasks while remaining competitive in English.
Llama-3.1-Sherkala-8B-Chat, or Sherkala-Chat (8B) for short, is a state-of-the-art instruction-tuned open generative large language model (LLM) designed for Kazakh. Sherkala-Chat (8B) aims to enhance the inclusivity of LLM advancements for Kazakh speakers. Adapted from the LLaMA-3.1-8B model, Sherkala-Chat (8B) is trained on 45.3B tokens across Kazakh, English, Russian, and Turkish. With 8 billion parameters, it demonstrates strong knowledge and reasoning abilities in Kazakh, significantly outperforming existing open Kazakh and multilingual models of similar scale while achieving competitive performance in English. To ensure effective and responsible alignment, we leverage translated instruction datasets, a Kazakhstan-specific instruction dataset that is automatically constructed and manually verified, and Kazakh-specific safety data. We release Sherkala-Chat (8B) as an open-weight model, along with a detailed description of its training, alignment, and evaluation, to support research and real-world applications for Kazakh speakers.
[ "Fajri Koto", "Rituraj Joshi", "Nurdaulet Mukhituly", "Yuxia Wang", "Zhuohan Xie", "Rahul Pal", "Daniil Orel", "Parvez Mullah", "Diana Turmakhan", "Maiya Goloburda", "Mohammed Kamran", "Samujjwal Ghosh", "Bokang Jia", "Jonibek Mansurov", "Mukhammed Togmanov", "Debopriyo Banerjee", "Nurkhan Laiyk", "Akhmed Sakip", "Xudong Han", "Ekaterina Kochmar", "Alham Fikri Aji", "Aaryamonvikram Singh", "Alok Anil Jadhav", "Satheesh Katipomu", "Samta Kamboj", "Monojit Choudhury", "Gurpreet Gosal", "Gokulakrishnan Ramakrishnan", "Biswajit Mishra", "Sarath Chandran", "Avraham Sheinin", "Natalia Vassilieva", "Neha Sengupta", "Preslav Nakov" ]
https://openreview.net/forum?id=wRcTCcb0H5
wRcTCcb0H5
wRcTCcb0H5
[ "~Fajri_Koto1", "~Rituraj_Joshi1", "~Nurdaulet_Mukhituly1", "~Yuxia_Wang1", "~Zhuohan_Xie1", "~Rahul_Pal1", "~Daniil_Orel1", "~Parvez_Mullah1", "~Diana_Turmakhan1", "~Maiya_Goloburda1", "~Mohammed_Kamran1", "~Samujjwal_Ghosh1", "~Bokang_Jia1", "~Jonibek_Mansurov1", "~Mukhammed_Togmanov1", "~Debopriyo_Banerjee1", "~Nurkhan_Laiyk1", "~Akhmed_Sakip2", "~Xudong_Han2", "~Ekaterina_Kochmar2", "~Alham_Fikri_Aji1", "~Aaryamonvikram_Singh1", "~Alok_Anil_Jadhav1", "~Satheesh_Katipomu1", "~Samta_Kamboj1", "~Monojit_Choudhury1", "~Gurpreet_Gosal2", "~Gokulakrishnan_Ramakrishnan1", "~Biswajit_Mishra1", "~Sarath_Chandran1", "~Avraham_Sheinin1", "~Natalia_Vassilieva1", "~Neha_Sengupta1", "~Preslav_Nakov2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/81303233cce33f5509d80aa0ef16ce80cdba0fb8.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "LLM", "LLaMA-3.1", "Kazakh", "low-resource language modeling", "fine-tuning", "safety alignment", "model evaluation", "generative AI" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ koto2025sherkalachat, title={Sherkala-Chat: Building a State-of-the-Art {LLM} for Kazakh in a Moderately Resourced Setting}, author={Fajri Koto and Rituraj Joshi and Nurdaulet Mukhituly and Yuxia Wang and Zhuohan Xie and Rahul Pal and Daniil Orel and Parvez Mullah and Diana Turmakhan and Maiya Goloburda and Mohammed Kamran and Samujjwal Ghosh and Bokang Jia and Jonibek Mansurov and Mukhammed Togmanov and Debopriyo Banerjee and Nurkhan Laiyk and Akhmed Sakip and Xudong Han and Ekaterina Kochmar and Alham Fikri Aji and Aaryamonvikram Singh and Alok Anil Jadhav and Satheesh Katipomu and Samta Kamboj and Monojit Choudhury and Gurpreet Gosal and Gokulakrishnan Ramakrishnan and Biswajit Mishra and Sarath Chandran and Avraham Sheinin and Natalia Vassilieva and Neha Sengupta and Preslav Nakov}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=wRcTCcb0H5} }
koto|sherkalachat_building_a_stateoftheart_llm_for_kazakh_in_a_moderately_resourced_setting
null
null
null
null
null
Quantifying Fairness in LLMs Beyond Tokens: A Semantic and Statistical Perspective
We introduce a framework for assessing and analyzing bias in long-text outputs at the group level.
Large Language Models (LLMs) often generate responses with inherent biases, undermining their reliability in real-world applications. Existing evaluation methods often overlook biases in long-form responses and the intrinsic variability of LLM outputs. To address these challenges, we propose FiSCo (Fine-grained Semantic Comparison), a novel statistical framework to evaluate group-level fairness in LLMs by detecting subtle semantic differences in long-form responses across demographic groups. Unlike prior work focusing on sentiment or token-level comparisons, FiSCo goes beyond surface-level analysis by operating at the claim level, leveraging entailment checks to assess the consistency of meaning across responses. We decompose model outputs into semantically distinct claims and apply statistical hypothesis testing to compare inter- and intra-group similarities, enabling robust detection of subtle biases. We formalize a new group counterfactual fairness definition and validate FiSCo on both synthetic and human-annotated datasets spanning gender, race, and age. Experiments show that FiSCo more reliably identifies nuanced biases while reducing the impact of stochastic LLM variability, outperforming various evaluation metrics.
[ "Weijie Xu", "Yiwen Wang", "Chi Xue", "Xiangkun Hu", "Xi Fang", "Guimin Dong", "Chandan K. Reddy" ]
https://openreview.net/forum?id=wKVtjs0w4a
wKVtjs0w4a
wKVtjs0w4a
[ "~Weijie_Xu1", "~Yiwen_Wang4", "~Chi_Xue1", "~Xiangkun_Hu1", "~Xi_Fang3", "~Guimin_Dong1", "~Chandan_K._Reddy1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/371c9407949422ef1df4a9f9b27d0ad0aa911a2b.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Fairness", "Bias", "Evaluation" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ xu2025quantifying, title={Quantifying Fairness in {LLM}s Beyond Tokens: A Semantic and Statistical Perspective}, author={Weijie Xu and Yiwen Wang and Chi Xue and Xiangkun Hu and Xi Fang and Guimin Dong and Chandan K. Reddy}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=wKVtjs0w4a} }
xu|quantifying_fairness_in_llms_beyond_tokens_a_semantic_and_statistical_perspective
null
null
null
null
null
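The inter- versus intra-group comparison in the FiSCo entry above can be sketched as follows: score pairwise similarity of long-form responses, then test whether between-group similarity is systematically lower than within-group similarity. Here `claim_level_similarity` is a hypothetical token-overlap stand-in for the paper's claim decomposition plus entailment checking, and the Welch t-test is a generic choice of statistic, not necessarily FiSCo's exact one.

```python
# Schematic group-level comparison in the spirit of the FiSCo entry above.
from itertools import combinations, product
import numpy as np
from scipy.stats import ttest_ind

def claim_level_similarity(resp_a: str, resp_b: str) -> float:
    """Stand-in: token overlap instead of claim extraction + entailment checks."""
    a, b = set(resp_a.lower().split()), set(resp_b.lower().split())
    return len(a & b) / max(len(a | b), 1)

def group_fairness_test(group_a: list[str], group_b: list[str]):
    within = [claim_level_similarity(x, y)
              for g in (group_a, group_b) for x, y in combinations(g, 2)]
    between = [claim_level_similarity(x, y) for x, y in product(group_a, group_b)]
    stat, pvalue = ttest_ind(within, between, equal_var=False)
    return np.mean(within) - np.mean(between), pvalue

if __name__ == "__main__":
    a = ["the candidate is qualified and experienced in finance",
         "the candidate has strong experience and clear qualifications"]
    b = ["the candidate may need supervision and further training",
         "the candidate shows potential but limited experience"]
    gap, p = group_fairness_test(a, b)
    print(f"within-minus-between similarity gap={gap:.3f}, p={p:.3f}")
```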
How Post-Training Reshapes LLMs: A Mechanistic View on Knowledge, Truthfulness, Refusal, and Confidence
We compare base (pretrained) and instruct (post-trained) LLMs mechanistically from four perspectives and provide insight into what is preserved and altered.
Post-training is essential for the success of large language models (LLMs), transforming pre-trained base models into more useful and aligned post-trained models. While many works have studied post-training algorithms and evaluated post-trained models by their outputs, how post-training reshapes LLMs internally remains understudied. In this paper, we compare base and post-trained LLMs mechanistically from four perspectives to better understand post-training effects. Our findings across model families and datasets reveal that: (1) Post-training does not change the factual knowledge storage locations, and it adapts knowledge representations from the base model while developing new knowledge representations; (2) Both truthfulness and refusal can be represented by vectors in the hidden representation space. The truthfulness direction is highly similar between the base and post-trained model, and it is effectively transferable for interventions; (3) The refusal direction is different between the base and post-trained models, and it shows limited forward transferability; (4) Differences in confidence between the base and post-trained models cannot be attributed to entropy neurons. Our study provides insights into the fundamental mechanisms preserved and altered during post-training, facilitates downstream tasks like model steering, and could potentially benefit future research in interpretability and LLM post-training. Our code is publicly available at https://github.com/HZD01/post-training-mechanistic-analysis.
[ "Hongzhe Du", "Weikai Li", "Min Cai", "Karim Saraipour", "Zimin Zhang", "Himabindu Lakkaraju", "Yizhou Sun", "Shichang Zhang" ]
https://openreview.net/forum?id=w5DSwn9wTC
w5DSwn9wTC
w5DSwn9wTC
[ "~Hongzhe_Du1", "~Weikai_Li2", "~Min_Cai2", "~Karim_Saraipour1", "~Zimin_Zhang1", "~Himabindu_Lakkaraju1", "~Yizhou_Sun1", "~Shichang_Zhang2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/ed7d9a75939697cf5c03dbd93501fd32e855ede6.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Mechanistic Interpretability", "Instruction-tuning", "Post-training", "Alignment" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ du2025how, title={How Post-Training Reshapes {LLM}s: A Mechanistic View on Knowledge, Truthfulness, Refusal, and Confidence}, author={Hongzhe Du and Weikai Li and Min Cai and Karim Saraipour and Zimin Zhang and Himabindu Lakkaraju and Yizhou Sun and Shichang Zhang}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=w5DSwn9wTC} }
du|how_posttraining_reshapes_llms_a_mechanistic_view_on_knowledge_truthfulness_refusal_and_confidence
null
null
null
null
null
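The vector-based analyses in the entry above (truthfulness and refusal directions in hidden space) are often computed as difference-of-means directions and compared across models with cosine similarity. The sketch below illustrates that comparison with random stand-in activations in place of real hidden states, so both the extraction procedure and the numbers are assumptions rather than the paper's.

```python
# Difference-of-means direction comparison, echoing the direction analyses
# described in the entry above. Activations here are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(0)
d = 64

def direction(pos_acts: np.ndarray, neg_acts: np.ndarray) -> np.ndarray:
    """Unit-norm difference-of-means direction between two sets of activations."""
    v = pos_acts.mean(axis=0) - neg_acts.mean(axis=0)
    return v / np.linalg.norm(v)

# Stand-in hidden states: (n_examples, hidden_dim) for truthful vs untruthful prompts.
base_dir = direction(rng.normal(0.3, 1, (100, d)), rng.normal(-0.3, 1, (100, d)))
post_dir = direction(rng.normal(0.35, 1, (100, d)), rng.normal(-0.25, 1, (100, d)))

cosine = float(base_dir @ post_dir)
print(f"cosine similarity between base and post-trained directions: {cosine:.2f}")
```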
The Zero Body Problem: Probing LLM Use of Sensory Language
Popular large language models fail to replicate human use of sensory language, an important feature of storytelling.
Sensory language expresses embodied experiences ranging from taste and sound to excitement and stomachache. It is of interest to scholars from a wide range of domains including robotics, narratology, linguistics, and cognitive science. In this work, we explore whether language models, which are not embodied, can approximate human use of embodied language. To do this, we extend an existing corpus of parallel human and model responses to short story prompts with an additional 18,000 stories generated by 18 popular language models. We find that all models generate stories that differ significantly from human usage of sensory language. However, the direction of these differences varies considerably between model families; Gemini models use significantly more sensory language than humans along most axes whereas most models from the remaining five families use significantly less. Linear probes run on five models suggest that they are capable of \textit{identifying} sensory language, meaning an inability to recognize sensory content is unlikely to be the cause of the observed differences. Instead, we find preliminary evidence indicating that instruction tuning may discourage usage of sensory language in some models. To support further work, we release \href{https://github.com/srhm-ca/sensorylanguage}{our expanded story dataset.}
[ "Rebecca M. M. Hicke", "Sil Hamilton", "David Mimno" ]
https://openreview.net/forum?id=vv1ZyQF8LD
vv1ZyQF8LD
vv1ZyQF8LD
[ "~Rebecca_M._M._Hicke1", "~Sil_Hamilton1", "~David_Mimno1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/3c070894fc335f646c5bbdc34c6ec387206d6861.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "model evaluation", "model interpretability", "sensory language", "model creativity" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ hicke2025the, title={The Zero Body Problem: Probing {LLM} Use of Sensory Language}, author={Rebecca M. M. Hicke and Sil Hamilton and David Mimno}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=vv1ZyQF8LD} }
hicke|the_zero_body_problem_probing_llm_use_of_sensory_language
null
null
null
null
null
Base Models Beat Aligned Models at Randomness and Creativity
Alignment seems to hurt performance on a set of tasks that require randomness or creativity
Alignment has quickly become a default ingredient in LLM development, with techniques such as reinforcement learning from human feedback making models act safely, follow instructions, and perform ever-better on complex tasks. While these techniques are certainly useful, we propose that they should not be universally applied and demonstrate a range of tasks on which base language models consistently outperform their popular aligned forms. Particularly, we study tasks that require unpredictable outputs, such as random number generation, mixed strategy games (rock-paper-scissors and hide-and-seek), and creative writing. In each case, aligned models tend towards narrow behaviors that result in distinct disadvantages, for instance, preferring to generate ``7'' over other uniformly random numbers, becoming almost fully predictable in some game states, or prioritizing pleasant writing over originality. Across models tested, better performance on common benchmarks tends to correlate with worse performance on our tasks, suggesting an effective trade-off in the required capabilities.
[ "Peter West", "Christopher Potts" ]
https://openreview.net/forum?id=vqN8uom4A1
vqN8uom4A1
vqN8uom4A1
[ "~Peter_West1", "~Christopher_Potts1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/fac784fe4c8e095a2a44374be0e7e9378dc7c012.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "alignment", "pretrained", "limitations", "limits", "capabilities", "randomness", "creativity" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ west2025base, title={Base Models Beat Aligned Models at Randomness and Creativity}, author={Peter West and Christopher Potts}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=vqN8uom4A1} }
west|base_models_beat_aligned_models_at_randomness_and_creativity
null
null
null
null
null
Improving Table Understanding with LLMs and Entity-Oriented Search
We introduce an entity-oriented search method to enhance table understanding in LLMs, reducing preprocessing and achieving state-of-the-art results.
Our work addresses the challenges of understanding tables. Existing methods often struggle with the unpredictable nature of table content, leading to a reliance on preprocessing and keyword matching. They also face limitations due to the lack of contextual information, which complicates the reasoning processes of large language models (LLMs). To overcome these challenges, we introduce an entity-oriented search method to improve table understanding with LLMs. This approach effectively leverages the semantic similarities between questions and table data, as well as the implicit relationships between table cells, minimizing the need for data preprocessing and keyword matching. Additionally, it focuses on table entities, ensuring that table cells are semantically tightly bound, thereby enhancing contextual clarity. Furthermore, we pioneer the use of a graph query language for table understanding, establishing a new research direction. Experiments show that our approach achieves new state-of-the-art performances on standard benchmarks WikiTableQuestions and TabFact.
[ "Thi-Nhung Nguyen", "Hoang Ngo", "Dinh Phung", "Thuy-Trang Vu", "Dat Quoc Nguyen" ]
https://openreview.net/forum?id=vlyl9xZVAL
vlyl9xZVAL
vlyl9xZVAL
[ "~Thi-Nhung_Nguyen1", "~Hoang_Ngo1", "~Dinh_Phung2", "~Thuy-Trang_Vu1", "~Dat_Quoc_Nguyen1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/68932df0b9be8315ff4155a7db786fcffd032e2b.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "table understanding", "llm" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ nguyen2025improving, title={Improving Table Understanding with {LLM}s and Entity-Oriented Search}, author={Thi-Nhung Nguyen and Hoang Ngo and Dinh Phung and Thuy-Trang Vu and Dat Quoc Nguyen}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=vlyl9xZVAL} }
nguyen|improving_table_understanding_with_llms_and_entityoriented_search
null
null
null
null
null
Positional Biases Shift as Inputs Approach Context Window Limits
This paper examines how input length, relative to a model’s context window, affects positional biases in LLMs.
Large Language Models (LLMs) often struggle to use information across long inputs effectively. Prior work has identified positional biases, such as the Lost in the Middle (LiM) effect, where models perform better when information appears at the beginning (primacy bias) or end (recency bias) of the input, rather than in the middle. However, long-context studies have not consistently replicated these effects, raising questions about their intensity and the conditions under which they manifest. To address this, we conducted a comprehensive analysis using relative rather than absolute input lengths, defined with respect to each model’s context window. Our findings reveal that the LiM effect is strongest when inputs occupy up to 50\% of a model’s context window. Beyond that, the primacy bias weakens, while recency bias remains relatively stable. This effectively eliminates the LiM effect; instead, we observe a distance-based bias, where model performance is better when relevant information is closer to the end of the input. Furthermore, our results suggest that successful retrieval is a prerequisite for reasoning in LLMs, and that the observed positional biases in reasoning are largely inherited from retrieval. These insights have implications for long-context tasks, the design of future LLM benchmarks, and evaluation methodologies for LLMs handling extended inputs.
[ "Blerta Veseli", "Julian Chibane", "Mariya Toneva", "Alexander Koller" ]
https://openreview.net/forum?id=vlUk8z8LaM
vlUk8z8LaM
vlUk8z8LaM
[ "~Blerta_Veseli1", "~Julian_Chibane1", "~Mariya_Toneva1", "~Alexander_Koller2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/1fa87a52bb87f4535aa4c24f858f215c6d329083.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Long-context understanding", "positional biases" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ veseli2025positional, title={Positional Biases Shift as Inputs Approach Context Window Limits}, author={Blerta Veseli and Julian Chibane and Mariya Toneva and Alexander Koller}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=vlUk8z8LaM} }
veseli|positional_biases_shift_as_inputs_approach_context_window_limits
null
null
null
null
null
Agree to Disagree? A Meta-Evaluation of LLM Misgendering
We conduct a systematic meta-evaluation of different methods for measuring LLM misgendering across three datasets and find that they can disagree.
Numerous methods have been proposed to measure LLM misgendering, including probability-based evaluations (e.g., automatically with templatic sentences) and generation-based evaluations (e.g., with automatic heuristics or human validation). However, it has gone unexamined whether these evaluation methods have convergent validity, that is, whether their results align. Therefore, we conduct a systematic meta-evaluation of these methods across three existing datasets for LLM misgendering. We propose a method to transform each dataset to enable parallel probability- and generation-based evaluation. Then, by automatically evaluating a suite of 6 models from 3 families, we find that these methods can disagree with each other at the instance, dataset, and model levels, conflicting on 20.2% of evaluation instances. Finally, with a human evaluation of 2400 LLM generations, we show that misgendering behaviour is complex and goes far beyond pronouns, which automatic evaluations are not currently designed to capture, suggesting essential disagreement with human evaluations. Based on our findings, we provide recommendations for future evaluations of LLM misgendering. Our results are also more widely relevant, as they call into question broader methodological conventions in LLM evaluation, which often assume that different evaluation methods agree.
[ "Arjun Subramonian", "Vagrant Gautam", "Preethi Seshadri", "Dietrich Klakow", "Kai-Wei Chang", "Yizhou Sun" ]
https://openreview.net/forum?id=vgmiRvpCLA
vgmiRvpCLA
vgmiRvpCLA
[ "~Arjun_Subramonian1", "~Vagrant_Gautam1", "~Preethi_Seshadri2", "~Dietrich_Klakow1", "~Kai-Wei_Chang1", "~Yizhou_Sun1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/26bde4c0897c899d2a27271ba979a953201f4f50.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "fairness", "meta-evaluation", "misgendering" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ subramonian2025agree, title={Agree to Disagree? A Meta-Evaluation of {LLM} Misgendering}, author={Arjun Subramonian and Vagrant Gautam and Preethi Seshadri and Dietrich Klakow and Kai-Wei Chang and Yizhou Sun}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=vgmiRvpCLA} }
subramonian|agree_to_disagree_a_metaevaluation_of_llm_misgendering
null
null
null
null
null
Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate
We introduce Critique Fine-Tuning, a training method that teaches LMs to critique responses, achieving better performance than SFT with fewer training samples and comparable results to RL methods.
Supervised Fine-Tuning (SFT) is commonly used to train language models to imitate annotated responses for given instructions. In this paper, we propose Critique Fine-Tuning (CFT), a method more effective than SFT for reasoning tasks. Instead of simply imitating correct responses, CFT trains models to critique noisy responses, inspired by human learning processes that emphasize critical thinking, deeper analysis, and nuanced understanding--traits often overlooked by standard SFT. To validate the effectiveness of CFT, we construct multiple critique datasets (e.g., WebInstruct, MetaMath, NuminaMath), where GPT-4o serves as the teacher to generate critiques in the form of ([query; noisy response], critique). Experiments on these datasets demonstrate that CFT consistently outperforms SFT by 4--10% across six mathematical reasoning benchmarks, and is effective across different base models including Qwen2.5, Qwen2.5-Math, and DeepSeek-Math. Notably, our model Qwen2.5-Math-CFT only requires 1 hour of training on 8xH100 over the 50K examples, yet matches or outperforms strong competitors like Qwen2.5-Math-Instruct on most benchmarks, which use over 2M samples. Moreover, it matches the performance of SimpleRL, which is a DeepSeek-r1 replication trained with 140x more compute. Experiments on IF_Eval and MT-Bench further demonstrate that CFT can significantly enhance the model's general generation and instruction-following capabilities, outperforming the Qwen2.5-Math-Instruct by a large margin. Ablation studies show that CFT is robust to noisy response sources and teacher critique models. These findings highlight that CFT offers a more effective alternative to advance the reasoning of language models.
[ "Yubo Wang", "Xiang Yue", "Wenhu Chen" ]
https://openreview.net/forum?id=vTAz44GgOA
vTAz44GgOA
vTAz44GgOA
[ "~Yubo_Wang9", "~Xiang_Yue1", "~Wenhu_Chen3" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/dc57be85188955311eeeb6b9cad46b5469c6b35e.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Reasoning", "Large Language Model", "Fine-Tuning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ wang2025critique, title={Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate}, author={Yubo Wang and Xiang Yue and Wenhu Chen}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=vTAz44GgOA} }
wang|critique_finetuning_learning_to_critique_is_more_effective_than_learning_to_imitate
null
null
null
null
null
SimpleRL-Zoo: Investigating and Taming Zero Reinforcement Learning for Open Base Models in the Wild
The paper explores zero RL training with rule-based rewards for emergent chain-of-thought reasoning in smaller models, producing significant improvements in both reasoning accuracy and CoT length across most settings.
DeepSeek-R1 has shown that long chain-of-thought (CoT) reasoning can naturally emerge through a simple reinforcement learning (RL) framework with rule-based rewards, where the training may directly start from the base models—a paradigm referred to as zero RL training. Most recent efforts to reproduce zero RL training have primarily focused on the Qwen2.5 model series, which may not be representative, as we find the base models already exhibit strong instruction-following and self-reflection abilities. In this work, we investigate zero RL training across 10 diverse base models, spanning different families and sizes including Llama3-8B, Mistral-7B/24B, DeepSeek-Math-7B, Qwen2.5-Math-7B, and all Qwen2.5 models from 0.5B to 32B. Leveraging several key design strategies—such as adjusting the format reward and controlling query difficulty—we achieve substantial improvements in both reasoning accuracy and response length across most settings. However, by carefully monitoring the training dynamics, we observe that different base models exhibit distinct patterns during training. For instance, the increased response length does not always correlate with the emergence of certain cognitive behaviors such as verification (i.e., the ``aha moment''). Notably, we observe the ``aha moment'' for the first time in small models not from the Qwen family. We share the key designs that enable successful zero RL training, along with our findings and practices. To facilitate further research, we open-source the code, models, and analysis tools.
[ "Weihao Zeng", "Yuzhen Huang", "Qian Liu", "Wei Liu", "Keqing He", "Zejun MA", "Junxian He" ]
https://openreview.net/forum?id=vSMCBUgrQj
vSMCBUgrQj
vSMCBUgrQj
[ "~Weihao_Zeng2", "~Yuzhen_Huang2", "~Qian_Liu2", "~Wei_Liu25", "~Keqing_He1", "~Zejun_MA1", "~Junxian_He1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/bbe20558d2168bcb2992d581750387291d026a34.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Reasoning", "Large Language Model" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ zeng2025simplerlzoo, title={Simple{RL}-Zoo: Investigating and Taming Zero Reinforcement Learning for Open Base Models in the Wild}, author={Weihao Zeng and Yuzhen Huang and Qian Liu and Wei Liu and Keqing He and Zejun MA and Junxian He}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=vSMCBUgrQj} }
zeng|simplerlzoo_investigating_and_taming_zero_reinforcement_learning_for_open_base_models_in_the_wild
null
null
null
null
null
Do Large Language Models Have a Planning Theory of Mind? Evidence from MindGames: a Multi-Step Persuasion Task
Humans significantly outperform LLMs at our complex theory of mind task
Recent evidence suggests Large Language Models (LLMs) display Theory of Mind (ToM) abilities. Most ToM experiments place participants in a spectatorial role, wherein they predict and interpret other agents' behavior. However, human ToM also contributes to dynamically planning action and strategically intervening on others' mental states. We present MindGames: a novel `planning theory of mind' (PToM) task which requires agents to infer an interlocutor's beliefs and desires to persuade them to alter their behavior. Unlike previous evaluations, we explicitly evaluate use cases of ToM. We find that humans significantly outperform o1-preview (an LLM) at our PToM task (11% higher; $p=0.006$). We hypothesize this is because humans have an implicit causal model of other agents (e.g., they know, as our task requires, to ask about people's preferences). In contrast, o1-preview outperforms humans in a baseline condition which requires a similar amount of planning but minimal mental state inferences (e.g., o1-preview is better than humans at planning when already given someone's preferences). These results suggest a significant gap between human-like social reasoning and LLM abilities.
[ "Jared Moore", "Ned Cooper", "Rasmus Overmark", "Beba Cibralic", "Cameron Robert Jones", "Nick Haber" ]
https://openreview.net/forum?id=vNJbDhgrM4
vNJbDhgrM4
vNJbDhgrM4
[ "~Jared_Moore1", "~Ned_Cooper1", "~Rasmus_Overmark1", "~Beba_Cibralic1", "~Cameron_Robert_Jones1", "~Nick_Haber1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/5e63f274f962ce2282ab6b0841a50e9c17d911ec.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "theory of mind", "planning", "causal model", "persuasion" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ moore2025do, title={Do Large Language Models Have a Planning Theory of Mind? Evidence from MindGames: a Multi-Step Persuasion Task}, author={Jared Moore and Ned Cooper and Rasmus Overmark and Beba Cibralic and Cameron Robert Jones and Nick Haber}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=vNJbDhgrM4} }
moore|do_large_language_models_have_a_planning_theory_of_mind_evidence_from_mindgames_a_multistep_persuasion_task
null
null
null
null
null
Do Biased Models Have Biased Thoughts?
This paper explores whether biased language models have biased reasoning, finding that their thought processes are not strongly linked to biased outputs.
The impressive performance of language models is undeniable. However, the presence of biases based on gender, race, socio-economic status, physical appearance, and sexual orientation makes the deployment of language models challenging. This paper studies the effect of chain-of-thought prompting, a recent approach that elicits the steps followed by the model before it responds, on fairness. More specifically, we ask the following question: *Do biased models have biased thoughts*? To answer our question, we conduct experiments on $5$ popular large language models using fairness metrics to quantify $11$ different biases in the model's thoughts and output. Our results show that the bias in the thinking steps is not highly correlated with the output bias (less than $0.6$ correlation with a $p$-value smaller than $0.001$ in most cases). In other words, unlike human beings, the tested models with biased decisions do not always possess biased thoughts.
[ "Swati Rajwal", "Shivank Garg", "Reem Abdel-Salam", "Abdelrahman Zayed" ]
https://openreview.net/forum?id=vDr0RV3590
vDr0RV3590
vDr0RV3590
[ "~Swati_Rajwal2", "~Shivank_Garg1", "~Reem_Abdel-Salam1", "~Abdelrahman_Zayed1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/ded6a3254f5a138cc7200d82253bd49853ededb2.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Bias in language models", "Large Language Models", "biased thoughts", "Chain-of-Thought prompting" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ rajwal2025do, title={Do Biased Models Have Biased Thoughts?}, author={Swati Rajwal and Shivank Garg and Reem Abdel-Salam and Abdelrahman Zayed}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=vDr0RV3590} }
rajwal|do_biased_models_have_biased_thoughts
null
null
null
null
null
How do language models learn facts? Dynamics, curricula and hallucinations
We analyze the learning dynamics of language models on a synthetic memory task and show that they learn sequentially, that some data distribution properties lead to faster learning, and that hallucinations appear simultaneously with knowledge acquisition.
Large language models accumulate vast amounts of knowledge during their pre-training, yet the dynamics governing this acquisition remain poorly understood. This work investigates the learning dynamics of language models on a synthetic factual recall task, uncovering three key findings: First, language models learn in three phases, with performance plateauing before they acquire precise factual knowledge. Mechanistically, this plateau coincides with the formation of attention-based circuits that support recall. Second, the training data distribution significantly impacts learning dynamics, with imbalanced distributions shortening the plateau. Finally, hallucinations appear simultaneously with knowledge, and integrating new knowledge into the model through fine-tuning is challenging, as it quickly corrupts its existing parametric associative memories. Our results emphasize the importance of data distribution in knowledge acquisition and suggest novel data scheduling strategies to accelerate neural network training.
[ "Nicolas Zucchet", "Jorg Bornschein", "Stephanie C.Y. Chan", "Andrew Kyle Lampinen", "Razvan Pascanu", "Soham De" ]
https://openreview.net/forum?id=vBcGnragkr
vBcGnragkr
vBcGnragkr
[ "~Nicolas_Zucchet1", "~Jorg_Bornschein1", "~Stephanie_C.Y._Chan1", "~Andrew_Kyle_Lampinen1", "~Razvan_Pascanu1", "~Soham_De2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/462209e2d72b398c2da64f53207c0eeea6740b0d.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "learning dynamics", "factual recall", "curricula", "data distribution", "hallucinations" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ zucchet2025how, title={How do language models learn facts? Dynamics, curricula and hallucinations}, author={Nicolas Zucchet and Jorg Bornschein and Stephanie C.Y. Chan and Andrew Kyle Lampinen and Razvan Pascanu and Soham De}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=vBcGnragkr} }
zucchet|how_do_language_models_learn_facts_dynamics_curricula_and_hallucinations
null
null
null
null
null
News is More than a Collection of Facts: Moral Frame Preserving News Summarization
The first investigation into how LLMs can summarize news articles while preserving moral framing.
News articles are more than collections of facts; they reflect journalists' framing, shaping how events are presented to the audience. One key aspect of framing is the choice to write in (or quote verbatim) morally charged language as opposed to using neutral terms. This moral framing carries implicit judgments that automated news summarizers should recognize and preserve to maintain the original intent of the writer. In this work, we perform the first study on the preservation of moral framing in AI-generated news summaries. We propose an approach that leverages the intuition that journalists intentionally use or report specific moral-laden words, which should be retained in summaries. Through automated, crowd-sourced, and expert evaluations, we demonstrate that our approach enhances the preservation of moral framing while maintaining overall summary quality.
[ "Enrico Liscio", "Michela Lorandi", "Pradeep K. Murukannaiah" ]
https://openreview.net/forum?id=uzauWUW9u3
uzauWUW9u3
uzauWUW9u3
[ "~Enrico_Liscio1", "~Michela_Lorandi1", "~Pradeep_K._Murukannaiah1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/6940325f5f0ed453a99c0b091dcc120b38f1a6f4.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "LLMs", "news", "summarization", "morality", "framing" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ liscio2025news, title={News is More than a Collection of Facts: Moral Frame Preserving News Summarization}, author={Enrico Liscio and Michela Lorandi and Pradeep K. Murukannaiah}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=uzauWUW9u3} }
liscio|news_is_more_than_a_collection_of_facts_moral_frame_preserving_news_summarization
null
null
null
null
null
Pairwise or Pointwise? Evaluating Feedback Protocols for Bias in LLM-Based Evaluation
This work examines how feedback protocols (absolute scores vs. pairwise preferences) impact biases in LLM evaluations, revealing that absolute scoring is more robust to distractor features.
Large Language Models (LLMs) are widely used as proxies for human labelers in both training (Reinforcement Learning from AI Feedback) and large-scale response evaluation (LLM-as-a-judge). Alignment and evaluation are critical components in the development of reliable LLMs, and the choice of feedback protocol plays a central role in both but remains understudied. In this work, we show that the choice of feedback protocol for evaluation (absolute scores versus relative preferences) can significantly affect evaluation reliability and induce systematic biases. In the context of LLM-as-a-judge evaluation, we show that pairwise protocols are more vulnerable to **distracted evaluation**. Generator models can exploit spurious attributes (or distractor features) favored by the LLM judge, resulting in inflated scores for lower-quality outputs. We find that absolute scoring is more robust to such manipulation, producing judgments that better reflect response quality and are less influenced by distractor features. Our results demonstrate that generator models can flip preferences by embedding distractor features, skewing LLM-as-a-judge comparisons and leading to inaccurate conclusions about model quality in benchmark evaluations. **Pairwise preferences flip in about 35\% of the cases, compared to only 9\% for absolute scores**. We offer recommendations for choosing feedback protocols based on dataset characteristics and evaluation objectives.
[ "Tuhina Tripathi", "Manya Wadhwa", "Greg Durrett", "Scott Niekum" ]
https://openreview.net/forum?id=uyX5Vnow3U
uyX5Vnow3U
uyX5Vnow3U
[ "~Tuhina_Tripathi1", "~Manya_Wadhwa1", "~Greg_Durrett1", "~Scott_Niekum1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/ac5ede1c4f3e38efcb8223c7e17ba3fb8949b2ee.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "evaluation", "data", "alignment" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ tripathi2025pairwise, title={Pairwise or Pointwise? Evaluating Feedback Protocols for Bias in {LLM}-Based Evaluation}, author={Tuhina Tripathi and Manya Wadhwa and Greg Durrett and Scott Niekum}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=uyX5Vnow3U} }
tripathi|pairwise_or_pointwise_evaluating_feedback_protocols_for_bias_in_llmbased_evaluation
null
null
null
null
null
Estimating Optimal Context Length for Hybrid Retrieval-augmented Multi-document Summarization
We present a novel method to estimate optimal context length for retrieval-augmented generation. Our estimate is a function of the retriever, summarizer and the downstream task.
Recent advances in the long-context reasoning abilities of language models have led to interesting applications in large-scale multi-document summarization. However, prior work has shown that these long-context models are not effective at their claimed context windows. In this setting, retrieval-augmented systems provide an efficient and effective alternative. However, their performance can be highly sensitive to the choice of retrieval context length. In this work, we present a hybrid method that combines retrieval-augmented systems with the long context windows supported by recent language models. Our method first estimates the optimal retrieval length as a function of the retriever, summarizer, and dataset. On a randomly sampled subset of the dataset, we use a panel of LMs to generate a pool of silver references. We use these silver references to estimate the optimal context length for a given RAG system configuration. Our results on the multi-document summarization task showcase the effectiveness of our method across model classes and sizes. We compare against length estimates from strong long-context benchmarks such as RULER and HELMET. Our analysis also highlights the effectiveness of our estimation method for very long-context LMs and its generalization to new classes of LMs.
[ "Adithya Pratapa", "Teruko Mitamura" ]
https://openreview.net/forum?id=uh0Sf8yN7n
uh0Sf8yN7n
uh0Sf8yN7n
[ "~Adithya_Pratapa1", "~Teruko_Mitamura1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/4e0303ac5eb0858029dfdcaa571f784512a0ccdf.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "retrieval-augmented generation", "long-context", "multi-document summarization" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ pratapa2025estimating, title={Estimating Optimal Context Length for Hybrid Retrieval-augmented Multi-document Summarization}, author={Adithya Pratapa and Teruko Mitamura}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=uh0Sf8yN7n} }
pratapa|estimating_optimal_context_length_for_hybrid_retrievalaugmented_multidocument_summarization
null
null
null
null
null
Missing Premise exacerbates Overthinking: Are Reasoning Models losing Critical Thinking Skill?
Reasoning models can’t think critically when the premise is missing.
We find that the response length of reasoning LLMs, whether trained by reinforcement learning or supervised learning, drastically increases for ill-posed questions with missing premises (MiP), ending up with redundant and ineffective thinking. Such failures run counter to the ``test-time scaling law'' but have been widely observed on multiple datasets we curated with MiP, indicating the harm of cheap overthinking and a lack of critical thinking. Surprisingly, LLMs not specifically trained for reasoning exhibit much better critical thinking ability, producing much shorter responses that quickly identify ill-posed queries and ask for the missing premise. This implies a critical flaw of the current training recipe for reasoning LLMs, which does not encourage efficient thinking adequately, leading to the abuse of thinking patterns. To further investigate the reasons behind such failures, we conduct fine-grained analyses of the reasoning length, overthinking patterns, and location of critical thinking on different types of LLMs. Moreover, our extended ablation study reveals that overthinking is contagious through the distillation of reasoning models' responses. These results improve the understanding of overthinking and offer novel insights into mitigating the problem. Our code and data can be found at https://github.com/tianyi-lab/MiP-Overthinking.
[ "Chenrui Fan", "Ming Li", "Lichao Sun", "Tianyi Zhou" ]
https://openreview.net/forum?id=ufozo2Wc9e
ufozo2Wc9e
ufozo2Wc9e
[ "~Chenrui_Fan1", "~Ming_Li18", "~Lichao_Sun1", "~Tianyi_Zhou2" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/018bea3ab55d1c1e4c93295a221d9e746fe84514.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "LLM", "Reasoning Model", "Overthinking", "Abstain" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ fan2025missing, title={Missing Premise exacerbates Overthinking: Are Reasoning Models losing Critical Thinking Skill?}, author={Chenrui Fan and Ming Li and Lichao Sun and Tianyi Zhou}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=ufozo2Wc9e} }
fan|missing_premise_exacerbates_overthinking_are_reasoning_models_losing_critical_thinking_skill
/attachment/825254c1747951fd04bf0c9067828ba4da07f36f.zip
null
null
null
null
Brains vs. Bytes: Evaluating LLM Proficiency in Olympiad Mathematics
We evaluate large language models on Olympiad-level mathematics, revealing their inability to produce rigorous and logically sound proofs despite occasional correct final answers.
Recent advancements in large language models (LLMs) have shown impressive progress in mathematical reasoning tasks. However, current evaluation benchmarks predominantly focus on the accuracy of final answers, often overlooking the logical rigor crucial for mathematical problem-solving. The claim that state-of-the-art LLMs can solve Math Olympiad-level problems requires closer examination. To explore this, we conducted both qualitative and quantitative human evaluations of proofs generated by LLMs, and developed a schema for automatically assessing their reasoning capabilities. Our study reveals that current LLMs fall significantly short of solving challenging Olympiad-level problems and frequently fail to distinguish correct mathematical reasoning from clearly flawed solutions. We also found that occasional correct final answers provided by LLMs often result from pattern recognition or heuristic shortcuts rather than genuine mathematical reasoning. These findings underscore the substantial gap between LLM performance and human expertise in advanced mathematical reasoning and highlight the importance of developing benchmarks that prioritize the rigor and coherence of mathematical arguments rather than merely the correctness of final answers.
[ "Hamed Mahdavi", "Alireza Hashemi", "Majid Daliri", "Pegah Mohammadipour", "Alireza Farhadi", "Samira Malek", "Yekta Yazdanifard", "Amir Khasahmadi", "Vasant G Honavar" ]
https://openreview.net/forum?id=uXR2KsA4L9
uXR2KsA4L9
uXR2KsA4L9
[ "~Hamed_Mahdavi1", "~Alireza_Hashemi2", "~Majid_Daliri1", "~Pegah_Mohammadipour1", "~Alireza_Farhadi2", "~Samira_Malek1", "~Yekta_Yazdanifard1", "~Amir_Khasahmadi1", "~Vasant_G_Honavar1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/d53d2675aef66e061647aa4edbe85fe1f4104521.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Mathematical Reasoning", "Human Evaluation", "Reasoning Evaluation", "Math Problem-Solving" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ mahdavi2025brains, title={Brains vs. Bytes: Evaluating {LLM} Proficiency in Olympiad Mathematics}, author={Hamed Mahdavi and Alireza Hashemi and Majid Daliri and Pegah Mohammadipour and Alireza Farhadi and Samira Malek and Yekta Yazdanifard and Amir Khasahmadi and Vasant G Honavar}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=uXR2KsA4L9} }
mahdavi|brains_vs_bytes_evaluating_llm_proficiency_in_olympiad_mathematics
null
null
null
null
null
BlockFFN: Towards End-Side Acceleration-Friendly Mixture-of-Experts with Chunk-Level Activation Sparsity
We propose BlockFFN, an effective MoE architecture that is more friendly to end-side acceleration, as well as its sparsity-aware training objectives and efficient acceleration kernels.
To alleviate the computational burden of large language models (LLMs), architectures with activation sparsity, represented by mixture-of-experts (MoE), have attracted increasing attention. However, the non-differentiable and inflexible routing of vanilla MoE hurts model performance. Moreover, while each token activates only a few parameters, these sparsely-activated architectures exhibit low chunk-level sparsity, indicating that the union of multiple consecutive tokens activates a large fraction of the parameters. Such a sparsity pattern is unfriendly for acceleration under low-resource conditions (e.g., end-side devices) and incompatible with mainstream acceleration techniques (e.g., speculative decoding). To address these challenges, we introduce a novel MoE architecture, BlockFFN, as well as its efficient training and deployment techniques. Specifically, we use a router integrating ReLU activation and RMSNorm for differentiable and flexible routing. Next, to promote both token-level sparsity (TLS) and chunk-level sparsity (CLS), CLS-aware training objectives are designed, making BlockFFN more acceleration-friendly. Finally, we implement efficient acceleration kernels, combining activation sparsity and speculative decoding for the first time. The experimental results demonstrate the superior performance of BlockFFN over other MoE baselines, achieving over 80\% TLS and 70\% 8-token CLS. Our kernels achieve up to a 3.67$\times$ speedup over dense models on real end-side devices. All code and checkpoints are publicly available at https://github.com/thunlp/BlockFFN.
[ "Chenyang Song", "Weilin Zhao", "Xu Han", "Chaojun Xiao", "Yingfa Chen", "Yuxuan Li", "Zhiyuan Liu", "Maosong Sun" ]
https://openreview.net/forum?id=uLl7tSUOir
uLl7tSUOir
uLl7tSUOir
[ "~Chenyang_Song1", "~Weilin_Zhao1", "~Xu_Han2", "~Chaojun_Xiao1", "~Yingfa_Chen1", "~Yuxuan_Li19", "~Zhiyuan_Liu1", "~Maosong_Sun1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/894e65b966a485306b765993e6715a25060f8737.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "mixture-of-experts", "activation sparsity", "inference acceleration" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ song2025blockffn, title={Block{FFN}: Towards End-Side Acceleration-Friendly Mixture-of-Experts with Chunk-Level Activation Sparsity}, author={Chenyang Song and Weilin Zhao and Xu Han and Chaojun Xiao and Yingfa Chen and Yuxuan Li and Zhiyuan Liu and Maosong Sun}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=uLl7tSUOir} }
song|blockffn_towards_endside_accelerationfriendly_mixtureofexperts_with_chunklevel_activation_sparsity
/attachment/64657ad2b7008be33d41cfe2edd34b44a1e24875.zip
null
null
null
null
ProsodyLM: Uncovering the Emerging Prosody Processing Capabilities in Speech Language Models
We propose ProsodyLM, a speech language model that demonstrates impressive emerging prosody generation and understanding capabilities simply through pre-training on 30k audiobooks.
Speech language models refer to language models with speech processing and understanding capabilities. One key desirable capability for speech language models is the ability to capture the intricate interdependency between content and prosody. The existing mainstream paradigm of training speech language models, which converts speech into discrete tokens before feeding them into LLMs, is sub-optimal in learning prosody information --- we find that the resulting LLMs do not exhibit obvious emerging prosody processing capabilities via pre-training alone. To overcome this, we propose ProsodyLM, which introduces a simple tokenization scheme amenable to learning prosody. Each speech utterance is first transcribed into text, followed by a sequence of word-level prosody tokens. Compared with conventional speech tokenization schemes, the proposed tokenization scheme retains more complete prosody information, and is more understandable to text-based LLMs. We find that ProsodyLM can learn surprisingly diverse emerging prosody processing capabilities through pre-training alone, ranging from harnessing the prosody nuances in generated speech, such as contrastive focus, understanding emotion and stress in an utterance, to maintaining prosody consistency in long contexts.
[ "Kaizhi Qian", "Xulin Fan", "Junrui Ni", "Slava Shechtman", "Mark A. Hasegawa-Johnson", "Chuang Gan", "Yang Zhang" ]
https://openreview.net/forum?id=uBg8PClMUu
uBg8PClMUu
uBg8PClMUu
[ "~Kaizhi_Qian1", "~Xulin_Fan1", "~Junrui_Ni1", "~Slava_Shechtman1", "~Mark_A._Hasegawa-Johnson1", "~Chuang_Gan1", "~Yang_Zhang3" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/6a359ebba5a3de8baa31631883a6b21c9cbc97a3.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Speech LM: Multi-modal LLM" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ qian2025prosodylm, title={Prosody{LM}: Uncovering the Emerging Prosody Processing Capabilities in Speech Language Models}, author={Kaizhi Qian and Xulin Fan and Junrui Ni and Slava Shechtman and Mark A. Hasegawa-Johnson and Chuang Gan and Yang Zhang}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=uBg8PClMUu} }
qian|prosodylm_uncovering_the_emerging_prosody_processing_capabilities_in_speech_language_models
null
null
null
null
null
VaPR - Vision-language Preference alignment for Reasoning
VaPR, a hard-negative preference dataset that mitigates stylistic and length biases in AI feedback, enabling improved reasoning and robustness in preference finetuned (DPO) vision-language models across ten benchmarks.
Preference finetuning methods like Direct Preference Optimization (DPO) with AI-generated feedback have shown promise in aligning Large Vision-Language Models (LVLMs) with human preferences. However, existing techniques overlook the prevalence of noise in synthetic preference annotations in the form of stylistic and length biases. To this end, we introduce a hard-negative response generation framework based on LLM-guided response editing, that produces rejected responses with targeted errors, maintaining stylistic and length similarity to the accepted ones. Using this framework, we develop the VaPR dataset, comprising 30K high-quality samples, to finetune three LVLM families: LLaVA-V1.5, Qwen2VL \& Qwen2.5VL (2B-13B sizes). Our VaPR models deliver significant performance improvements across ten benchmarks, achieving average gains of 6.5% (LLaVA), 4.0% (Qwen2VL), and 1.5% (Qwen2.5VL), with notable improvements on reasoning tasks. A scaling analysis shows that performance consistently improves with data size, with LLaVA models benefiting even at smaller scales. Moreover, VaPR reduces the tendency to answer "Yes" in binary questions - addressing a common failure mode in LVLMs like LLaVA. Lastly, we show that the framework generalizes to open-source LLMs as editors, with models trained on VaPR-OS achieving ~99% of the performance of models trained on VaPR, which is synthesized using GPT-4o. Our data, models, and code can be found on the project page https://vap-r.github.io/vap-r/
[ "Rohan Wadhawan", "Fabrice Y Harel-Canada", "Zi-Yi Dou", "Suhaila Shakiah", "Robinson Piramuthu", "Nanyun Peng" ]
https://openreview.net/forum?id=uBAubFwymy
uBAubFwymy
uBAubFwymy
[ "~Rohan_Wadhawan1", "~Fabrice_Y_Harel-Canada1", "~Zi-Yi_Dou1", "~Suhaila_Shakiah1", "~Robinson_Piramuthu1", "~Nanyun_Peng1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/70b7995114502f1826a2a3c5a3f55008f8261306.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Vision Language Models", "Preference Optimization", "DPO", "Data Generation", "Reasoning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ wadhawan2025vapr, title={Va{PR} - Vision-language Preference alignment for Reasoning}, author={Rohan Wadhawan and Fabrice Y Harel-Canada and Zi-Yi Dou and Suhaila Shakiah and Robinson Piramuthu and Nanyun Peng}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=uBAubFwymy} }
wadhawan|vapr_visionlanguage_preference_alignment_for_reasoning
null
null
null
null
null
DeepRetrieval: Hacking Real Search Engines and Retrievers with Large Language Models via Reinforcement Learning
DeepRetrieval trains query generation models through reinforcement learning instead of supervised data, achieving state-of-the-art performance across diverse retrieval tasks while being more efficient than existing approaches.
Information retrieval systems are crucial for enabling effective access to large document collections. Recent approaches have leveraged Large Language Models (LLMs) to enhance retrieval performance through query augmentation, but often rely on expensive supervised learning or distillation techniques that require significant computational resources and hand-labeled data. We introduce DeepRetrieval, a reinforcement learning approach that trains LLMs for query generation through trial and error without supervised data for reference queries. Using retrieval metrics as rewards, our system generates queries that maximize retrieval performance. DeepRetrieval outperforms state-of-the-art methods on literature search with 65.07\% (vs.\ previous SOTA 24.68\%) recall for publication search and 63.18\% (vs.\ previous SOTA 32.11\%) recall for trial search using real-world search engines. DeepRetrieval also dominates in evidence-seeking retrieval, classic information retrieval, and SQL database search. With only 3B parameters, it outperforms industry-leading models like GPT-4o and Claude-3.5-Sonnet on those tasks. These results demonstrate that our reinforcement learning approach offers a more efficient and effective paradigm for information retrieval.
[ "Pengcheng Jiang", "Jiacheng Lin", "Lang Cao", "Runchu Tian", "SeongKu Kang", "Zifeng Wang", "Jimeng Sun", "Jiawei Han" ]
https://openreview.net/forum?id=u9JXu4L17I
u9JXu4L17I
u9JXu4L17I
[ "~Pengcheng_Jiang2", "~Jiacheng_Lin3", "~Lang_Cao2", "~Runchu_Tian1", "~SeongKu_Kang1", "~Zifeng_Wang3", "~Jimeng_Sun3", "~Jiawei_Han1" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/d6e93b6b5f6f01c6793b460b37410d7d7ec3a1cd.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Large Language Models", "Information Retrieval", "Reinforcement Learning" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ jiang2025deepretrieval, title={DeepRetrieval: Hacking Real Search Engines and Retrievers with Large Language Models via Reinforcement Learning}, author={Pengcheng Jiang and Jiacheng Lin and Lang Cao and Runchu Tian and SeongKu Kang and Zifeng Wang and Jimeng Sun and Jiawei Han}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=u9JXu4L17I} }
jiang|deepretrieval_hacking_real_search_engines_and_retrievers_with_large_language_models_via_reinforcement_learning
/attachment/e1c13f35faae5175020c5266f60439cdda5f1f8f.zip
null
null
null
null
SecurityLingua: Efficient Defense of LLM Jailbreak Attacks via Security-Aware Prompt Compression
SecurityLingua defends LLMs from jailbreak attacks using security-aware prompt compression to extract the true intention. It helps the model activate its safety guardrails without altering the original prompt, with minimal compute and latency overhead.
Large language models (LLMs) have achieved widespread adoption across numerous applications. However, many LLMs are vulnerable to malicious attacks even after safety alignment. These attacks typically bypass LLMs’ safety guardrails by wrapping the original malicious instructions inside adversarial jailbreak prompts. Previous research has proposed methods such as adversarial training and prompt rephrasing to mitigate these safety vulnerabilities, but these methods often reduce the utility of LLMs or lead to significant computational overhead and online latency. In this paper, we propose SecurityLingua, an effective and efficient approach to defend LLMs against jailbreak attacks via security-oriented prompt compression. Specifically, we train a prompt compressor designed to discern the “true intention” of the input prompt, with a particular focus on detecting the malicious intentions of adversarial prompts. Then, in addition to the original prompt, the intention is passed via the system prompt to the target LLM to help it identify the true intention of the request. SecurityLingua ensures a consistent user experience by leaving the original input prompt intact while revealing the user’s potentially malicious intention and stimulating the built-in safety guardrails of the LLM. Moreover, thanks to prompt compression, SecurityLingua incurs only negligible overhead and extra token cost compared to all existing defense methods, making it an especially practical solution for LLM defense. Experimental results demonstrate that SecurityLingua can effectively defend against malicious attacks and maintain the utility of the LLM with negligible compute and latency overhead. Our code is available at https://aka.ms/SecurityLingua.
[ "Yucheng Li", "Surin Ahn", "Huiqiang Jiang", "Amir H. Abdi", "Yuqing Yang", "Lili Qiu" ]
https://openreview.net/forum?id=tybbSo6wba
tybbSo6wba
tybbSo6wba
[ "~Yucheng_Li5", "~Surin_Ahn1", "~Huiqiang_Jiang2", "~Amir_H._Abdi1", "~Yuqing_Yang1", "~Lili_Qiu3" ]
{ "value": "COLM 2025" }
{ "value": "colmweb.org/COLM/2025/Conference" }
{ "value": "/pdf/d284f5f7d160ec273e8ba2c87857c7a6d07c1e91.pdf" }
conference
colmweb.org/COLM/2025/Conference
2,025
COLM
[ "Jailbreak Attacks Defense", "Prompt Compression" ]
I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
null
null
@inproceedings{ li2025securitylingua, title={SecurityLingua: Efficient Defense of {LLM} Jailbreak Attacks via Security-Aware Prompt Compression}, author={Yucheng Li and Surin Ahn and Huiqiang Jiang and Amir H. Abdi and Yuqing Yang and Lili Qiu}, booktitle={Second Conference on Language Modeling}, year={2025}, url={https://openreview.net/forum?id=tybbSo6wba} }
li|securitylingua_efficient_defense_of_llm_jailbreak_attacks_via_securityaware_prompt_compression
null
null
null
null
null
Fresh papers, direct from OpenReview 🇨🇦
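Since the records above are served as a standard Hugging Face dataset, a quick way to work with them is the `datasets` library. The snippet below is a minimal sketch, not an official usage example: the repository ID is a placeholder (the dataset's real name on the Hub is not shown here) and the column names are simply inspected rather than assumed.

```python
from datasets import load_dataset

# Placeholder repository ID -- replace with the dataset's actual name on the Hub.
REPO_ID = "openreview/colm-2025-papers"

# Load the train split; the preview above shows rows from this same table.
papers = load_dataset(REPO_ID, split="train")

# Inspect the available metadata columns and the first record.
print(papers.column_names)
print(papers[0])
```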
