---
license: apache-2.0
language:
  - en
base_model:
  - meta-llama/Llama-3.1-8B-Instruct
  - meta-llama/Llama-2-7b
pipeline_tag: question-answering
tags:
  - medical
  - biology
  - genetics
  - bioinformatics
---

# GP-GPT

GP-GPT is an open-weight genetic-phenotype knowledge language model for medical genetics information.

Paper: [arXiv:2409.09825](https://arxiv.org/abs/2409.09825)

## Usage

```python
from dataclasses import dataclass, field
from typing import Optional

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, HfArgumentParser
from peft import PeftModel


# Minimal argument container assumed by the snippet below.
@dataclass
class ScriptArguments:
    model_name: Optional[str] = field(default=None)
    peft_model_id: Optional[str] = field(default=None)


# init
parser = HfArgumentParser(ScriptArguments)
script_args = parser.parse_args_into_dataclasses()[0]

# Specify the model to load (choose one of the two pairs).
# For GP-GPT small:
script_args.model_name = "meta-llama/Llama-2-7b"
script_args.peft_model_id = "./small/"

# For GP-GPT base:
script_args.model_name = "meta-llama/Meta-Llama-3.1-8B"
script_args.peft_model_id = "./base/"

# Load the base model.
model = AutoModelForCausalLM.from_pretrained(
    script_args.model_name,
    # quantization_config=quantization_config,  # activate when using a quantization setting
    device_map="auto",           # assumption: place layers automatically across available devices
    torch_dtype=torch.bfloat16,  # assumption: half precision; use torch.float16 on older GPUs
)

# Load the PEFT adapter and merge it into the base model.
if script_args.peft_model_id is not None:
    model = PeftModel.from_pretrained(model, script_args.peft_model_id)
    model = model.merge_and_unload()
```
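
After merging, the model can be queried like any causal LM. Below is a minimal sketch of running inference, assuming the tokenizer of the chosen base model applies unchanged and using an illustrative genetics question; the exact prompt format is not prescribed by this card.

```python
from transformers import AutoTokenizer

# Assumption: the adapter does not change the vocabulary,
# so the base model's tokenizer applies.
tokenizer = AutoTokenizer.from_pretrained(script_args.model_name)

# Illustrative question; the prompt format here is an assumption.
prompt = "What phenotypes are associated with mutations in the BRCA1 gene?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Greedy decoding, kept short for a quick smoke test.
output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```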