How to use

from transformers import AutoModelForCausalLM, AutoTokenizer, TextGenerationPipeline

model_path = 'fiveflow/KoLlama-3-8B-Instruct'

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",       # place layers on available devices automatically
    # load_in_4bit=True,     # optional: 4-bit quantization (requires bitsandbytes)
    low_cpu_mem_usage=True,  # avoid materializing a full copy of the weights on CPU
)

pipe = TextGenerationPipeline(model=model, tokenizer=tokenizer)
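
Since this is a Llama-3 Instruct fine-tune, formatting the input with the tokenizer's chat template before generation is the usual approach. A minimal usage sketch follows; the Korean prompt and the sampling settings are illustrative assumptions, not taken from the model card.

# Illustrative example: build a chat-formatted prompt, then generate.
messages = [{"role": "user", "content": "안녕하세요! 한국의 수도는 어디인가요?"}]
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,  # append the assistant turn header so the model replies
)

outputs = pipe(
    prompt,
    max_new_tokens=256,      # assumed generation budget
    do_sample=True,
    temperature=0.7,         # assumed sampling temperature
    return_full_text=False,  # return only the newly generated text
)
print(outputs[0]["generated_text"])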