# Model Card for MDDDDR/Ko-Luxia-8B-it-v0.3

base_model: Ko-Llama3-Luxia-8B
## Basic usage
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("MDDDDR/Ko-Luxia-8B-it-v0.3")
model = AutoModelForCausalLM.from_pretrained(
    "MDDDDR/Ko-Luxia-8B-it-v0.3",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

input_text = "사과가 뭐야?"  # "What is an apple?"
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
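Without explicit decoding arguments, `generate` falls back to the model's default generation config and may cut the answer short. A minimal sketch of passing decoding parameters explicitly (the values below are illustrative assumptions, not settings recommended by the model author):

```python
# Illustrative decoding settings; tune max_new_tokens / temperature for your use case.
outputs = model.generate(
    **input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```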
## Training dataset
dataset: kyujinpy/KOpen-platypus
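To inspect the training data, it can be loaded straight from the Hub. A minimal sketch using the datasets library (the `train` split name is an assumption, not something stated in this card):

```python
from datasets import load_dataset

# Instruction dataset listed in the card above.
ds = load_dataset("kyujinpy/KOpen-platypus", split="train")
print(ds[0])
```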
## lora_config and bnb_config used in training
```python
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 8-bit quantization of the base model weights
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
)

# LoRA adapters applied to the MLP projection layers
lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=['gate_proj', 'up_proj', 'down_proj'],
)
```
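A minimal sketch of how these configs plug into a QLoRA-style setup with peft; the full base-model repo id and the wiring below are assumptions for illustration, not the author's exact training script:

```python
from transformers import AutoModelForCausalLM
from peft import get_peft_model

# Assumption: full Hub repo id of the Ko-Llama3-Luxia-8B base model named in this card.
base = AutoModelForCausalLM.from_pretrained(
    "saltlux/Ko-Llama3-Luxia-8B",
    quantization_config=bnb_config,
    device_map="auto",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the LoRA adapter weights are trainable
```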
## Hardware
RTX 3090 Ti 24GB x 1