Abstract
Systematic investigation reveals that large language models are more sensitive to structural than semantic code perturbations, with implications for training data design.
Code data has been shown to enhance the reasoning capabilities of large language models (LLMs), but it remains unclear which aspects of code are most responsible. We investigate this question with a systematic, data-centric framework. We construct parallel instruction datasets in ten programming languages and apply controlled perturbations that selectively disrupt structural or semantic properties of code. We then finetune LLMs spanning five model families and eight scales on each variant and evaluate their performance on natural language, math, and code tasks. Across 3,331 experiments, our results show that LLMs are more vulnerable to structural perturbations than semantic ones, particularly on math and code tasks. Appropriate abstractions such as pseudocode and flowcharts can be as effective as code itself, and encoding the same information with fewer tokens, without adhering to the original syntax, often retains or even improves performance. Remarkably, even corrupted code with misleading signals remains competitive when surface-level regularities persist. Finally, syntactic style also shapes task-specific gains, with Python favoring natural language reasoning and lower-level languages such as Java and Rust favoring math. Through our systematic framework, we aim to provide insight into how different properties of code influence reasoning and to inform the design of training data for enhancing LLM reasoning capabilities.
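To make the perturbation setup concrete, here is a minimal Python sketch of what structure-targeting versus semantics-targeting perturbations can look like. The specific transforms (line shuffling to break structure, identifier renaming to break semantics) and the function names `perturb_structure` and `perturb_semantics` are illustrative assumptions, not the paper's exact operations.

```python
import random
import re

def perturb_structure(code: str, seed: int = 0) -> str:
    """Disrupt structural regularities: drop indentation and shuffle line
    order, while leaving identifiers and literals (semantic content) intact."""
    rng = random.Random(seed)
    lines = [line.strip() for line in code.splitlines() if line.strip()]
    rng.shuffle(lines)
    return "\n".join(lines)

def perturb_semantics(code: str) -> str:
    """Disrupt semantic content: rename identifiers to opaque tokens, while
    preserving keywords, indentation, and the overall syntactic layout."""
    # Small keyword allowlist for this toy example only.
    keywords = {"def", "return", "for", "in", "if", "else", "while", "range"}
    mapping = {}

    def rename(match):
        name = match.group(0)
        if name in keywords:
            return name
        # Assign each identifier a stable but meaningless name: v0, v1, ...
        return mapping.setdefault(name, f"v{len(mapping)}")

    return re.sub(r"\b[A-Za-z_]\w*\b", rename, code)

snippet = "def total(xs):\n    s = 0\n    for x in xs:\n        s += x\n    return s"
print(perturb_structure(snippet))  # same tokens, broken layout
print(perturb_semantics(snippet))  # same layout, meaningless names
```

Keeping each transform one-sided is what makes such a comparison controlled: a model trained on the first variant sees intact semantic content under broken structure, and vice versa for the second.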
Community
What exactly about code helps LLMs reason better?
Our new paper, “On Code-Induced Reasoning in LLMs,” uncovers how structure, syntax, and abstraction in code shape the reasoning abilities of large language models.
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Strengthening Programming Comprehension in Large Language Models through Code Generation (2025)
- Scaling Code-Assisted Chain-of-Thoughts and Instructions for Model Reasoning (2025)
- When Names Disappear: Revealing What LLMs Actually Understand About Code (2025)
- The Hidden Cost of Readability: How Code Formatting Silently Consumes Your LLM Budget (2025)
- Do Code Semantics Help? A Comprehensive Study on Execution Trace-Based Information for Code Large Language Models (2025)
- PseudoBridge: Pseudo Code as the Bridge for Better Semantic and Logic Alignment in Code Retrieval (2025)
- Generating High-Quality Datasets for Code Editing via Open-Source Language Models (2025)