Uploaded model

  • Developed by: satoyutaka
  • License: apache-2.0
  • Finetuned from model: llm-jp/llm-jp-3-13b

This Llama-based model was trained 2x faster with Unsloth and Hugging Face's TRL library.
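For reference, a typical Unsloth + TRL fine-tuning setup looks roughly like the sketch below. The dataset file, LoRA rank, target modules, and training arguments shown are illustrative assumptions, not the exact recipe used for this model.

# Minimal Unsloth + TRL fine-tuning sketch (illustrative; hyperparameters and
# dataset are assumptions, not this model's actual training configuration).
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Load the base model in 4-bit and attach LoRA adapters.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="llm-jp/llm-jp-3-13b",
    max_seq_length=512,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,  # LoRA rank (assumed value)
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# An instruction dataset with a pre-formatted "text" column (assumed local file).
train_dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_dataset,
    dataset_text_field="text",
    max_seq_length=512,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()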


Sample usage

from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
)
import torch
from tqdm import tqdm
import json

HF_TOKEN = "Hugging Face Token"  # replace with your Hugging Face access token

model_name = "satoyutaka/llm-jp-3-13b-ft-2"

# 4-bit quantization settings for loading the model.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=False,
)
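The tokenizer and the quantized model are then loaded with the config above. A minimal loading step; device_map="auto" is an assumption here:

# Load the tokenizer and the 4-bit quantized model.
tokenizer = AutoTokenizer.from_pretrained(model_name, token=HF_TOKEN)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
    token=HF_TOKEN,
)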

# Load the evaluation tasks; each JSON record may span multiple lines, so
# accumulate lines until a record is complete.
datasets = []
with open("path/to/elyza-tasks-100-TV_0.jsonl", "r") as f:  # set this to the path of elyza-tasks-100-TV_0.jsonl
    item = ""
    for line in f:
        line = line.strip()
        item += line
        if item.endswith("}"):
            datasets.append(json.loads(item))
            item = ""

# Run inference on each task. The prompt follows the instruction/answer
# template (### 指示 = instruction, ### 回答 = answer).
results = []
for data in tqdm(datasets):
    input = data["input"]
    prompt = f"""### 指示
{input}
### 回答
"""
    tokenized_input = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt").to(model.device)
    with torch.no_grad():
        outputs = model.generate(
            tokenized_input,
            max_new_tokens=100,
            do_sample=False,
            repetition_penalty=1.2,
        )[0]
    # Decode only the newly generated tokens, skipping the prompt.
    output = tokenizer.decode(outputs[tokenized_input.size(1):], skip_special_tokens=True)
    results.append({"task_id": data["task_id"], "input": input, "output": output})

# Strip the organization prefix from the model name and save the results as JSONL.
import re
model_name = re.sub(".*/", "", model_name)

with open(f"./{model_name}-outputs.jsonl", 'w', encoding='utf-8') as f:
    for result in results:
        json.dump(result, f, ensure_ascii=False)  # ensure_ascii=False for handling non-ASCII characters
        f.write('\n')
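The resulting file holds one JSON record per task and can be read back for a quick check. A minimal sketch, assuming the output file name produced by the script above:

# Inspect the generated outputs (file name assumes model_name after the prefix strip).
import json

with open("./llm-jp-3-13b-ft-2-outputs.jsonl", encoding="utf-8") as f:
    for record in map(json.loads, f):
        print(record["task_id"], record["output"][:80])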