CodeV: Empowering LLMs for HDL Generation through Multi-Level Summarization

CodeV is a series of open-source, instruction-tuned large language models (LLMs) designed to generate high-quality HDL code, addressing challenges that existing models face in this domain. (This repo is under development.)

Models and Datasets

Test

To test the Verilog generation capability of existing models, install the VerilogEval and RTLLM environments.

Quick Start

```python
from transformers import pipeline
import torch

prompt = "FILL IN THE QUESTION"

generator = pipeline(
    model="yang-z/CodeV-All-QC",  # or another CodeV checkpoint
    task="text-generation",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Greedy decoding: do_sample=False replaces temperature=0.0, which
# recent transformers versions reject when sampling is enabled.
result = generator(prompt, max_length=2048, num_return_sequences=1, do_sample=False)
response = result[0]["generated_text"]
print("Response:", response)
```

Usage Recommendations

  1. Chat task template

The Chat task generates complete Verilog or Chisel code from a natural language description. The input is the description plus an optional module header, and the output is the corresponding HDL code. The template is shown below, followed by an illustrative example.

```
<LanguageTag>
[Natural Language Description]
[Optional Module Header]
```
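As an illustration (the multiplexer description and module header are invented for this example), a filled-in Chat prompt could look like:

```
<LanguageTag>
Implement a 2-to-1 multiplexer: y equals a when sel is 0 and b when sel is 1.
module mux2to1(input a, input b, input sel, output y);
```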
  2. FIM task template

The FIM (fill-in-the-middle) task fills in a missing part of the code: given a prefix and a suffix, the model generates the middle. The input consists of a language tag, the prefix, the suffix, and special FIM markers; the output is the missing middle code snippet. The template is shown below, followed by an illustrative example.

```
[PRE]<LanguageTag>
{prefix}[SUF]{suffix}[MID]
```

Here `<LanguageTag>` marks the target language (verilog or scala), written as a code-fence opener.
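As an illustration (the counter module below is invented for this example), a FIM prompt asking the model to complete the body of an always block might look like:

```
[PRE]<LanguageTag>
module counter(input clk, input rst, output reg [3:0] count);
always @(posedge clk) begin
[SUF]end
endmodule[MID]
```

The model should return only the missing middle, e.g. the reset and increment logic for `count`.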

We recommend using these templates during inference.
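As a sketch of how a Chat-template prompt feeds into the Quick Start pipeline (the `build_chat_prompt` helper and the prompt contents are illustrative, not part of the CodeV API):

```python
from transformers import pipeline
import torch

# Illustrative helper (not part of CodeV): assembles a Chat-task prompt
# following the template above.
def build_chat_prompt(language_tag: str, description: str, module_header: str = "") -> str:
    parts = [language_tag, description]
    if module_header:
        parts.append(module_header)
    return "\n".join(parts)

generator = pipeline(
    model="yang-z/CodeV-All-QC",
    task="text-generation",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = build_chat_prompt(
    "<LanguageTag>",  # replace with the tag for Verilog or Chisel
    "Implement a 2-to-1 multiplexer: y equals a when sel is 0 and b when sel is 1.",
    "module mux2to1(input a, input b, input sel, output y);",
)

result = generator(prompt, max_length=2048, do_sample=False)
print(result[0]["generated_text"])
```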

Paper

Arxiv: https://arxiv.org/abs/2407.10424

Please cite the paper if you use the models from CodeV.

```bibtex
@misc{zhao2025codevempoweringllmshdl,
      title={CodeV: Empowering LLMs with HDL Generation through Multi-Level Summarization},
      author={Yang Zhao and Di Huang and Chongxiao Li and Pengwei Jin and Muxin Song and Yinan Xu and Ziyuan Nan and Mingju Gao and Tianyun Ma and Lei Qi and Yansong Pan and Zhenxing Zhang and Rui Zhang and Xishan Zhang and Zidong Du and Qi Guo and Xing Hu},
      year={2025},
      eprint={2407.10424},
      archivePrefix={arXiv},
      primaryClass={cs.PL},
      url={https://arxiv.org/abs/2407.10424},
}
```

Acknowledgements
