---
language:
- ko
datasets:
- garage-bAInd/Open-Platypus
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
---
# **PlatYi-34B-LoRA**
<img src='./PlatYi.png' width=256>
## Model Details
**Model Developers** Kyujin Han (kyujinpy)
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture**
PlatYi-34B-LoRA is an auto-regressive language model based on the Yi-34B transformer architecture.
**Blog Link**
Blog: [Coming soon...]
Github: [Coming soon...]
**Base Model**
[01-ai/Yi-34B](https://huggingface.co/01-ai/Yi-34B)
**Training Dataset**
[garage-bAInd/Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus).
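To inspect the training data, a minimal sketch using the `datasets` library (not part of the original card):

```python
from datasets import load_dataset

# Load the Open-Platypus instruction-tuning dataset from the Hugging Face Hub.
ds = load_dataset("garage-bAInd/Open-Platypus", split="train")
print(ds)      # dataset size and column names
print(ds[0])   # first example
```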
**Notice**
While training, I used LoRA; the `lora_r` value is 16.
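For illustration, a minimal `peft` configuration sketch consistent with the notice above. Only `r=16` is stated in this card; `lora_alpha`, `lora_dropout`, and `target_modules` are assumptions, not the actual training settings.

```python
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Only r=16 is confirmed above; all other values are illustrative assumptions.
lora_config = LoraConfig(
    r=16,                                  # confirmed: lora_r = 16
    lora_alpha=16,                         # assumption
    lora_dropout=0.05,                     # assumption
    target_modules=["q_proj", "v_proj"],   # assumption; actual target modules not documented
    task_type="CAUSAL_LM",
)

# Wrap the base model with LoRA adapters; only the adapter weights train.
base = AutoModelForCausalLM.from_pretrained(
    "01-ai/Yi-34B",
    torch_dtype=torch.float16,
    device_map="auto",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```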
# **Model Benchmark**
## Open leaderboard
- Results are reported on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
| --- | --- | --- | --- | --- | --- | --- | --- |
| PlatYi-34B-Q | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
| **PlatYi-34B-LoRA** | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
| [01-ai/Yi-34B](https://huggingface.co/01-ai/Yi-34B) | 69.42 | 64.59 | 85.69 | 76.35 | 56.23 | 83.03 | 50.64 |

NaN entries indicate scores that were not yet available at the time of writing.
# **Implementation Code**
```python
# PlatYi-34B-LoRA
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "kyujinpy/PlatYi-34B-LoRA"

# Load the model in half precision and shard it across available devices.
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```
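A usage sketch continuing from the loading code above. The Alpaca-style prompt template is an assumption (Open-Platypus data is instruction-formatted), and the decoding settings are illustrative:

```python
# Illustrative only: prompt template and decoding settings are assumptions.
prompt = "### Instruction:\nSummarize what LoRA fine-tuning does.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```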
---