---
base_model:
- Qwen/Qwen3-4B-Instruct-2507
pipeline_tag: text-generation
library_name: transformers
license: apache-2.0
---
# Code Aesthetics with Agentic Reward Feedback
Bang Xiao¹,²#, Lingjie Jiang¹,³#, Shaohan Huang¹✉, Tengchao Lv¹, Yupan Huang¹, Xun Wu¹, Lei Cui¹, Furu Wei¹

¹Microsoft Research Asia, ²Zhiyuan College, Shanghai Jiao Tong University, ³Peking University

#Equal contribution, ✉Corresponding author
For the codebase, refer to: https://github.com/bangx7/code_aesthetics
## 🎉 News
- __[2025.10.29]__: Release the [AesCoder-4B](https://huggingface.co/SamuelBang/AesCoder-4B/) model.
- __[2025.10.27]__: Release the [Project Page](https://bangx7.github.io/code-aesthetics/) and the [arXiv](https://arxiv.org/abs/2510.23272) version.
## 📷 Abstract
Large Language Models (LLMs) have become valuable assistants for developers in code-related tasks. While LLMs excel at traditional programming tasks such as code generation and bug fixing, they struggle with visually-oriented coding tasks, often producing suboptimal aesthetics. In this paper, we introduce a new pipeline to enhance the aesthetic quality of LLM-generated code. We first construct AesCode-358K, a large-scale instruction-tuning dataset focused on code aesthetics. Next, we propose agentic reward feedback, a multi-agent system that evaluates executability, static aesthetics, and interactive aesthetics. Building on this, we develop GRPO-AR, which integrates these signals into the GRPO algorithm for joint optimization of functionality and code aesthetics. Finally, we develop OpenDesign, a benchmark for assessing code aesthetics. Experimental results show that combining supervised fine-tuning on AesCode-358K with reinforcement learning using agentic reward feedback significantly improves performance on OpenDesign and also enhances results on existing benchmarks such as PandasPlotBench. Notably, our AesCoder-4B surpasses GPT-4o and GPT-4.1, and achieves performance comparable to large open-source models with 480B-685B parameters, underscoring the effectiveness of our approach.
## To-do List
- [x] Release paper and project page
- [x] Release our AesCoder model
- [ ] Release code
**Note: This release of the AesCoder-4B model is specialized for webpage design only.**
## Quickstart
### VLLM deployment (Recommended)
We recommend using `vllm>=0.8.5` for efficient inference and deployment. Here's how to get started:
**Installation:**
```bash
pip install "vllm>=0.8.5"
```
**API Server Deployment:**
To create an OpenAI-compatible API endpoint:
```bash
vllm serve SamuelBang/AesCoder-4B --max-model-len 262144
```
**Using with OpenAI Client:**
```python
from openai import OpenAI

# Initialize the client (vLLM's OpenAI-compatible server does not check the API key)
client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="None"
)

# Generate a completion
response = client.chat.completions.create(
    model="SamuelBang/AesCoder-4B",
    messages=[
        {"role": "user", "content": "Create a user-friendly website for a landing page dedicated to selling dog-related products."}
    ],
    temperature=0.8,
    max_tokens=16384
)

# Get the generated content
content = response.choices[0].message.content
print("Generated content:", content)
```
**Basic vLLM Usage:**
```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

model_name = "SamuelBang/AesCoder-4B"

# Initialize the model
llm = LLM(
    model=model_name,
    max_model_len=262144,    # maximum context length
    tensor_parallel_size=1,  # adjust based on your GPU setup
)

# Define sampling parameters
sampling_params = SamplingParams(
    temperature=0.8,
    top_p=0.8,
    top_k=20,
    min_p=0,
    max_tokens=16384
)

# Prepare the prompt and apply the chat template
prompt = "Create a user-friendly website for a landing page dedicated to selling dog-related products."
messages = [
    {"role": "user", "content": prompt}
]
tokenizer = AutoTokenizer.from_pretrained(model_name)
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

# Generate text
outputs = llm.generate([text], sampling_params)

# Get the result
content = outputs[0].outputs[0].text
print("Generated content:", content)
```
**Note:** If you encounter out-of-memory (OOM) issues, reduce the context length to a shorter value, such as `32768`, or adjust `tensor_parallel_size` to match your available GPU memory.
### SGLang Deployment
You can use `sglang>=0.4.6.post1` to create an OpenAI-compatible API endpoint:
```shell
python -m sglang.launch_server --model-path SamuelBang/AesCoder-4B --context-length 262144
```
**Note: If you encounter out-of-memory (OOM) issues, consider reducing the context length to a shorter value, such as `32768`.**
### Use with the plain `transformers` package (NOT recommended, very slow)
Qwen3 support is included in recent Hugging Face `transformers` releases, so we advise using the latest version.
With `transformers<4.51.0`, you will encounter the following error:
```
KeyError: 'qwen3'
```
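To catch this early with a clear message instead of the opaque `KeyError`, you can check the installed version up front. This is a minimal sketch using the common `packaging` helper, not an official requirement check; the `check_transformers` name is illustrative:

```python
from packaging import version

MIN_VERSION = "4.51.0"  # Qwen3 support landed here; older versions raise KeyError: 'qwen3'

def check_transformers(installed: str, minimum: str = MIN_VERSION) -> None:
    """Raise early with a readable message if transformers is too old."""
    if version.parse(installed) < version.parse(minimum):
        raise RuntimeError(
            f"transformers {installed} is too old; "
            f"upgrade with `pip install -U 'transformers>={minimum}'`."
        )

# At runtime:
#   import transformers
#   check_transformers(transformers.__version__)
```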
The following code snippet illustrates how to use the model to generate content from a given prompt.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "SamuelBang/AesCoder-4B"

# Load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# Prepare the model input
prompt = "Create a user-friendly website for a landing page dedicated to selling dog-related products."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Conduct text completion (do_sample=True is required for temperature to take effect)
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=16384,
    do_sample=True,
    temperature=0.8
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
content = tokenizer.decode(output_ids, skip_special_tokens=True)
print("content:", content)
```
## Best Practices
### Sampling Parameters
To achieve optimal performance, we suggest using `Temperature=0.8`, `TopP=0.8`, `TopK=20`, and `MinP=0` for sampling.
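These values carry over to both serving stacks. The sketch below expresses them once as plain keyword arguments; names follow vLLM's `SamplingParams` and Hugging Face `generate`, and `do_sample=True` is needed on the `transformers` side for sampling to take effect:

```python
# Recommended sampling configuration, expressed once and reused.
RECOMMENDED = {"temperature": 0.8, "top_p": 0.8, "top_k": 20, "min_p": 0.0}

# vLLM: SamplingParams(**vllm_kwargs)
vllm_kwargs = dict(RECOMMENDED, max_tokens=16384)

# Hugging Face transformers: model.generate(**model_inputs, **hf_kwargs)
hf_kwargs = dict(RECOMMENDED, do_sample=True, max_new_tokens=16384)
```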
### System Prompt
To achieve better webpage generation results with our model, please use appropriate system prompts. We have categorized webpage generation into five main categories; the recommended system prompts for each category are as follows (adapted from https://designarena.ai):
Website:
```txt
You are an expert web developer and designer specializing in modern websites. Create a complete, working HTML page with embedded CSS and JavaScript if needed. Feel free to use lightweight libraries like Tailwind CSS to enhance the design as long as they can be rendered in an iframe.
Requirements:
1. Create a fully functional, modern, and responsive website design
2. Use only HTML, CSS, and JavaScript, but feel free to use libraries like Tailwind CSS to make the design better. Libraries such as Three.js, @react-three/fiber, drei, @react-three/postprocessing, @react-three/cannon, d3, and recharts can also be imported.
3. Include all styles inline within