ๅŠ ๅ…ฅไธญๆ–‡่ฏ่กจๅนถ็ปง็ปญ้ข„่ฎญ็ปƒไธญๆ–‡Embedding๏ผŒๅนถๅœจๆญคๅŸบ็ก€ไธŠ็ปง็ปญไฝฟ็”จๆŒ‡ไปคๆ•ฐๆฎ้›†finetuning๏ผŒๅพ—ๅˆฐ็š„ไธญๆ–‡Alpaca-33Bๆจกๅž‹ใ€‚

ๆจกๅž‹่ฝฌๆข็”จๅˆฐ็š„็›ธๅ…ณbaseๅŠloraๆจกๅž‹ๅฆ‚ไธ‹๏ผš

  • base-model: elinas/llama-30b-hf-transformers-4.29
  • lora-model: ziqingyang/chinese-alpaca-lora-33b

่ฏฆๆƒ…ๅฏๅ‚่€ƒ๏ผšhttps://github.com/ymcui/Chinese-LLaMA-Alpaca/releases/tag/v4.0

ไฝฟ็”จๆ–นๆณ•ๅ‚่€ƒ

  1. Install the required packages
pip install sentencepiece
pip install "transformers>=4.28.0"  # quote the specifier so the shell doesn't treat >= as a redirection
  1. ็”Ÿๆˆๆ–‡ๆœฌ
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM

def generate_prompt(text):
    return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{text}

### Response:"""


tokenizer = LlamaTokenizer.from_pretrained('minlik/chinese-alpaca-33b-merged')
# Load the weights in fp16 and move them to the GPU (~66 GB of weights).
model = LlamaForCausalLM.from_pretrained('minlik/chinese-alpaca-33b-merged').half().to('cuda')
model.eval()

text = '็ฌฌไธ€ไธช็™ปไธŠๆœˆ็ƒ็š„ไบบๆ˜ฏ่ฐ๏ผŸ'  # "Who was the first person to land on the Moon?"
prompt = generate_prompt(text)
input_ids = tokenizer.encode(prompt, return_tensors='pt').to('cuda')


with torch.no_grad():
    output_ids = model.generate(
        input_ids=input_ids,
        max_new_tokens=128,
        do_sample=True,       # required for temperature/top_k/top_p to take effect
        temperature=1.0,
        top_k=40,
        top_p=0.9,
        repetition_penalty=1.15,
    )  # generate() already returns tensors on the model's device
output = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(output.replace(prompt, '').strip())  # print only the generated response
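
If the fp16 weights do not fit on the available GPU, one option (an assumption, not part of the original card) is to load the model with 8-bit quantization via bitsandbytes:

from transformers import LlamaForCausalLM

model = LlamaForCausalLM.from_pretrained(
    'minlik/chinese-alpaca-33b-merged',
    load_in_8bit=True,   # quantize linear layers to int8 at load time
    device_map='auto',   # let accelerate place layers across available devices
)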

Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard.

Metric                  Value
Avg.                    53.09
ARC (25-shot)           59.3
HellaSwag (10-shot)     78.43
MMLU (5-shot)           57.69
TruthfulQA (0-shot)     52.45
Winogrande (5-shot)     76.09
GSM8K (5-shot)           8.04
DROP (3-shot)           39.67