This repository contains the DISC-LawLLM-7B model.
<div align="center">
[Paper](https://link.springer.com/chapter/10.1007/978-981-97-5569-1_19) | [Technical Report](https://arxiv.org/abs/2309.11325)
</div>
DISC-LawLLM is a large language model specialized in the Chinese legal domain, developed and open-sourced by the [Data Intelligence and Social Computing Lab of Fudan University (Fudan-DISC)](http://fudan-disc.com) to provide comprehensive intelligent legal services.
See our [homepage](https://github.com/FudanDISC/DISC-LawLLM) for more information.
# Quickstart
We recommend installing `transformers>=4.37.0`:
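```
pip install "transformers>=4.37.0"
```
The following snippet then loads the model and runs a single-turn query: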
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained(
    "ShengbinYue/LawLLM-7B",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("ShengbinYue/LawLLM-7B")

# "How is the crime of producing and selling counterfeit or substandard goods sentenced?"
prompt = "生产销售假冒伪劣商品罪如何判刑?"
# System prompt: "You are LawLLM, a legal assistant created by the DISC lab at Fudan University."
messages = [
    {"role": "system", "content": "你是LawLLM,一个由复旦大学DISC实验室创造的法律助手。"},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=512
)
# Drop the prompt tokens so only the newly generated reply is decoded
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
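For multi-turn dialogue, append the model's reply and the next user message to `messages` and repeat the same steps. A minimal sketch continuing the snippet above (the follow-up question is illustrative, not part of the official instructions):

```python
# Continue the conversation with a follow-up turn
# ("Please provide the relevant legal basis." — an illustrative question).
messages.append({"role": "assistant", "content": response})
messages.append({"role": "user", "content": "请给出相关的法律依据。"})

text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
generated_ids = model.generate(model_inputs.input_ids, max_new_tokens=512)
# Again, strip the prompt tokens before decoding
generated_ids = [out[len(inp):] for inp, out in zip(model_inputs.input_ids, generated_ids)]
follow_up = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(follow_up)
```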
# vLLM
- Install vLLM by running the following command:
```
pip install "vllm>=0.4.3"
```
- Run LawLLM-7B:
```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

model_name = "ShengbinYue/LawLLM-7B"

sampling_params = SamplingParams(
    temperature=0.1,
    top_p=0.9,
    top_k=50,
    max_tokens=4096
)
llm = LLM(model=model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# "How is the crime of producing and selling counterfeit or substandard goods sentenced?"
prompt = "生产销售假冒伪劣商品罪如何判刑?"
messages = [
    {"role": "system", "content": "你是LawLLM,一个由复旦大学DISC实验室创造的法律助手。"},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
outputs = llm.generate([text], sampling_params)

for output in outputs:
    generated_text = output.outputs[0].text
    print(f"Prompt: {output.prompt!r}, Generated text: {generated_text!r}")
```
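vLLM can also serve the model behind an OpenAI-compatible HTTP API. A minimal sketch, assuming the default port 8000 (the launch command and client code below are illustrative, not part of the official instructions):

```
python -m vllm.entrypoints.openai.api_server --model ShengbinYue/LawLLM-7B
```

You can then query the server with the `openai` Python client (`pip install openai`):

```python
# Point the OpenAI client at the local vLLM server
# (base_url/port are vLLM defaults; the api_key is a placeholder).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
completion = client.chat.completions.create(
    model="ShengbinYue/LawLLM-7B",
    messages=[
        {"role": "system", "content": "你是LawLLM,一个由复旦大学DISC实验室创造的法律助手。"},
        {"role": "user", "content": "生产销售假冒伪劣商品罪如何判刑?"},
    ],
)
print(completion.choices[0].message.content)
```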
# Citation
If our work is helpful to you, please cite it as follows:
```
@misc{yue2023disclawllm,
      title={DISC-LawLLM: Fine-tuning Large Language Models for Intelligent Legal Services},
      author={Shengbin Yue and Wei Chen and Siyuan Wang and Bingxuan Li and Chenchen Shen and Shujun Liu and Yuxuan Zhou and Yao Xiao and Song Yun and Wei Lin and Xuanjing Huang and Zhongyu Wei},
      year={2023},
      eprint={2309.11325},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
@inproceedings{yue2024lawllm,
      title={LawLLM: Intelligent Legal System with Legal Reasoning and Verifiable Retrieval},
      author={Yue, Shengbin and Liu, Shujun and Zhou, Yuxuan and Shen, Chenchen and Wang, Siyuan and Xiao, Yao and Li, Bingxuan and Song, Yun and Shen, Xiaoyu and Chen, Wei and others},
      booktitle={International Conference on Database Systems for Advanced Applications},
      pages={304--321},
      year={2024},
      organization={Springer}
}
```