
# Chinese-Alpaca-7B-GPTQ

Chinese-Alpaca-7B-GPTQ is based on the Chinese-LLaMA-Alpaca model and was quantized with GPTQ for faster inference and a smaller memory footprint.

We used the bigscience-data/roots_zh-cn_wikipedia dataset as calibration data for quantization.
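
For reference, here is a minimal sketch of how such a calibration corpus might be pulled with the `datasets` library. The split name, field name, and sample count are assumptions for illustration, not the exact settings used to quantize this model, and access to the ROOTS datasets may require accepting their terms on the Hub.

```python
# Sketch only: stream a handful of calibration samples from the corpus.
# The "train" split, the "text" field, and the count of 128 are assumptions.
from datasets import load_dataset

calib = load_dataset("bigscience-data/roots_zh-cn_wikipedia", split="train", streaming=True)
calib_texts = [row["text"] for _, row in zip(range(128), calib)]
```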

## Usage

To use Chinese-Alpaca-7B-GPTQ, you will need the GPTQ-for-LLaMa repository to load the model, for example:

```
python llama_inference.py ./chinese-alpaca-7b-gptq --wbits 4 --groupsize 128 --load chinese-alpaca-7b-gptq/llama7b-4bit-128g.pt --text "### Instruction: 为什么苹果支付 没有在中国流行?\n\n### Response:"
```
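
For programmatic use, a minimal sketch of loading the same checkpoint through GPTQ-for-LLaMa's `load_quant` helper (defined in `llama_inference.py`) is shown below. The helper's exact signature varies between revisions of that repository, and the device and generation settings here are assumptions, so treat this as illustrative only.

```python
# Sketch: load the 4-bit, groupsize-128 checkpoint with GPTQ-for-LLaMa.
# Run from inside the GPTQ-for-LLaMa checkout so llama_inference is importable.
import torch
from transformers import LlamaTokenizer
from llama_inference import load_quant  # signature may differ across repo revisions

MODEL_DIR = "./chinese-alpaca-7b-gptq"                      # HF-format config + tokenizer
CHECKPOINT = "chinese-alpaca-7b-gptq/llama7b-4bit-128g.pt"  # packed 4-bit GPTQ weights

# Rebuild the LLaMA graph with quantized linear layers and load the packed weights.
model = load_quant(MODEL_DIR, CHECKPOINT, wbits=4, groupsize=128)
model.to("cuda")

tokenizer = LlamaTokenizer.from_pretrained(MODEL_DIR)
prompt = "### Instruction: 为什么苹果支付 没有在中国流行?\n\n### Response:"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")

with torch.no_grad():
    output_ids = model.generate(
        input_ids, do_sample=True, max_length=256, top_p=0.9, temperature=0.7
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```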

## Acknowledgments

We would like to thank the original authors of the above-mentioned projects for their contributions to the NLP community.
