---
license: cc-by-nc-sa-4.0
datasets:
- squarelike/sharegpt_deepl_ko_translation
language:
- ko
pipeline_tag: translation
tags:
- translate
- awq
---
# **Seagull-13b-translation-AWQ 📇**
![Seagull-typewriter](./Seagull-typewriter-pixelated.png)
## This is an AWQ-quantized version of the original model: Seagull-13b-translation.
**Seagull-13b-translation** is yet another translation model, but one built with careful attention to the following issues seen in existing translation models:
- `newline` or `space` characters that do not match the original text
- Training on translated datasets with the first letter stripped
- Code
- Markdown formatting
- LaTeX formatting
- etc.
These issues were checked thoroughly during training, but when using the model, I still recommend inspecting the output closely in these areas (e.g., text that contains code).
> If you're interested in building large-scale language models to solve a wide variety of problems in a wide variety of domains, you should consider joining [Allganize](https://allganize.career.greetinghr.com/o/65146).
For a coffee chat or if you have any questions, please do not hesitate to contact me as well! - [email protected]
This model was created as a personal experiment, unrelated to the organization I work for.
# **License**
## From the original model author:
- Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License, under LLAMA 2 COMMUNITY LICENSE AGREEMENT
- Full License available at: https://huggingface.co/beomi/llama-2-koen-13b/blob/main/LICENSE
# **Model Details**
#### **Developed by**
Jisoo Kim (kuotient)
#### **Base Model**
[beomi/llama-2-koen-13b](https://huggingface.co/beomi/llama-2-koen-13b)
#### **Datasets**
- [sharegpt_deepl_ko_translation](https://huggingface.co/datasets/squarelike/sharegpt_deepl_ko_translation)
- AIHUB
  - 기술과학 분야 한-영 번역 병렬 말뭉치 데이터 (Korean-English parallel corpus for science and technology)
  - 일상생활 및 구어체 한-영 번역 병렬 말뭉치 데이터 (Korean-English parallel corpus for daily life and colloquial speech)
## Usage
#### Format
It follows the **ChatML** format only.
```
<|im_start|>system
주어진 문장을 한국어로 번역하세요.<|im_end|>
<|im_start|>user
{instruction}<|im_end|>
<|im_start|>assistant
# Don't miss the newline here
```
```
<|im_start|>system
주어진 문장을 영어로 번역하세요.<|im_end|>
<|im_start|>user
{instruction}<|im_end|>
<|im_start|>assistant
# Don't miss the newline here
```
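Because the trailing newline matters, it can help to render the prompt as a plain string and check it before tokenizing. A minimal sketch, assuming the repo bundles a ChatML `chat_template` (as the Example section below notes):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("kuotient/Seagull-13B-translation")

messages = [
    {"role": "system", "content": "주어진 문장을 한국어로 번역하세요."},
    {"role": "user", "content": "Translate me!"},
]

# tokenize=False returns the rendered prompt string; add_generation_prompt=True
# appends the final "<|im_start|>assistant\n" turn for the model to complete.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(repr(prompt))  # the string should end with "<|im_start|>assistant\n"
```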
#### Example
**I highly recommend running inference with vLLM; a sketch is shown below. I will write a guide for quick and easy inference if requested.**
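The following is a minimal sketch (not from the original card), assuming this AWQ checkpoint loads with vLLM's `quantization="awq"` support and that the repo id below is correct:
```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

model_id = "kuotient/Seagull-13b-translation-AWQ"  # assumed repo id for this quantized model
tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [
    {"role": "system", "content": "주어진 문장을 한국어로 번역하세요."},
    {"role": "user", "content": "Here are five examples of nutritious foods to serve your kids."},
]
# Render the ChatML prompt with the bundled chat template.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

llm = LLM(model=model_id, quantization="awq")
outputs = llm.generate([prompt], SamplingParams(temperature=0.0, max_tokens=1000))
print(outputs[0].outputs[0].text)
```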
Alternatively, with plain transformers: since the chat_template already contains the instruction format above, you can use the code below.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("kuotient/Seagull-13B-translation")
tokenizer = AutoTokenizer.from_pretrained("kuotient/Seagull-13B-translation")

messages = [
    {"role": "system", "content": "주어진 문장을 한국어로 번역하세요."},
    {"role": "user", "content": "Here are five examples of nutritious foods to serve your kids."},
]

# The bundled chat template renders the messages into the ChatML prompt shown above.
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
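Since this repo hosts the AWQ weights, loading through AutoAWQ is another option. A minimal sketch, assuming the `autoawq` package is installed and the repo id below is correct:
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_id = "kuotient/Seagull-13b-translation-AWQ"  # assumed repo id for this quantized model
model = AutoAWQForCausalLM.from_quantized(model_id, fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The loaded model exposes generate(), so the transformers example above applies unchanged.
```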