A cutting-edge foundation for your very own LLM.
💻Github • 🌐 TigerBot • 🤗 Hugging Face
Quick Start
Method 1: use via transformers

Clone the TigerBot repo:

```shell
git clone https://github.com/TigerResearch/TigerBot.git
```

Run the infer script:

```shell
python infer.py --model_path TigerResearch/tigerbot-13b-base-v2 --model_type base
```
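If you would rather call the model from transformers directly instead of going through infer.py, a minimal sketch along these lines should work; the dtype, device placement, and prompt are illustrative assumptions rather than settings taken from the repo (`device_map="auto"` also requires accelerate):

```python
# Minimal sketch: load the base model with transformers and run a plain completion.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "TigerResearch/tigerbot-13b-base-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # assumption: half precision to fit the 13B weights
    device_map="auto",            # assumption: let accelerate place the layers
)

# Base model, so use a plain completion prompt rather than a chat template.
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```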
Method 2: run from locally downloaded weights

Clone the TigerBot repo:

```shell
git clone https://github.com/TigerResearch/TigerBot.git
```

Install git lfs:

```shell
git lfs install
```

Download the weights from Hugging Face or ModelScope:

```shell
# from Hugging Face
git clone https://huggingface.co/TigerResearch/tigerbot-13b-base-v2
# or from ModelScope
git clone https://www.modelscope.cn/TigerResearch/tigerbot-13b-base-v2.git
```

Run the infer script:

```shell
python infer.py --model_path tigerbot-13b-base-v2 --model_type base --max_generate_length 64
```
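If git-lfs cloning is inconvenient, the weights can also be fetched with the huggingface_hub client; a minimal sketch, where the `local_dir` value is just an example chosen to match the command above:

```python
# Sketch: download the weights with huggingface_hub instead of git-lfs.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="TigerResearch/tigerbot-13b-base-v2",
    local_dir="tigerbot-13b-base-v2",  # example target path used by the infer command above
)
```

The resulting directory is then passed to `--model_path` exactly as shown.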
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 52.11 |
| ARC (25-shot) | 53.84 |
| HellaSwag (10-shot) | 77.05 |
| MMLU (5-shot) | 53.57 |
| TruthfulQA (0-shot) | 44.06 |
| Winogrande (5-shot) | 74.98 |
| GSM8K (5-shot) | 17.06 |
| DROP (3-shot) | 44.21 |
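These scores follow the Open LLM Leaderboard's few-shot settings, which are run with EleutherAI's lm-evaluation-harness. A rough sketch of checking a single task locally, assuming the v0.4-style Python API (the exact harness version, task name, and arguments may differ from the leaderboard's setup):

```python
# Sketch: evaluate ARC (25-shot) locally with lm-evaluation-harness.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=TigerResearch/tigerbot-13b-base-v2,dtype=bfloat16",
    tasks=["arc_challenge"],   # assumption: leaderboard ARC corresponds to this task
    num_fewshot=25,
)
print(results["results"]["arc_challenge"])
```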