MedGPT-OSS-20B

Merged LoRA model based on openai/gpt-oss-20b.

Model Details

  • Base Model: openai/gpt-oss-20b
  • Languages: en
  • Type: Merged LoRA (adapter weights merged into base)
  • Parameters: ~20.9B
  • Library: 🤗 Transformers

LoRA Configuration

  • r: 8
  • alpha (lora_alpha): 16
  • dropout: 0.0
  • target_modules: q_proj, k_proj, v_proj, o_proj
  • bias: none
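
For reference, these settings map onto a peft.LoraConfig roughly as follows (a minimal sketch; task_type is an assumption for causal-LM fine-tuning, and other training hyperparameters are not shown in this card):

from peft import LoraConfig

# Mirrors the adapter configuration listed above.
# task_type="CAUSAL_LM" is assumed; it is not stated in the card.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)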

Usage

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_name = "justinj92/MedGPT-OSS-20B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # use torch.bfloat16 if your hardware supports it
    device_map="auto",          # shard across available GPUs/CPU automatically
)

prompt = "Your prompt here"
# Move inputs to the model's device so they match the (possibly sharded) weights.
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
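
Because the base model is a chat model, you may get better results by formatting inputs with the tokenizer's chat template instead of a raw prompt. A sketch using the standard Transformers chat-template API; the question shown is purely illustrative:

messages = [
    {"role": "user", "content": "Summarize first-line treatment options for type 2 diabetes."},
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # append the assistant turn header before generating
    return_tensors="pt",
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))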

Merge Process

The LoRA adapter weights were merged into the base model with PeftModel.merge_and_unload(), producing a standalone checkpoint; no PEFT dependency is required at inference time.
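
A minimal sketch of how such a merge is typically performed with PEFT; the adapter repository id below is a placeholder, not the actual adapter used:

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

# Load the base model, then attach the trained LoRA adapter.
base = AutoModelForCausalLM.from_pretrained(
    "openai/gpt-oss-20b",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
# "your-org/your-lora-adapter" is a placeholder repo id.
model = PeftModel.from_pretrained(base, "your-org/your-lora-adapter")

# Fold the LoRA deltas into the base weights and drop the PEFT wrappers.
model = model.merge_and_unload()
model.save_pretrained("MedGPT-OSS-20B")

# Save the tokenizer alongside so the merged checkpoint is self-contained.
tokenizer = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")
tokenizer.save_pretrained("MedGPT-OSS-20B")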

Limitations & Bias

This model inherits the limitations and potential biases of openai/gpt-oss-20b and the fine-tuning dataset.

License

Licensed under Apache 2.0. Check the base model's license for additional terms.
