# MedGPT-OSS-20B

A merged LoRA model based on `openai/gpt-oss-20b`.
## Model Details
- Base Model: openai/gpt-oss-20b
- Languages: en
- Type: Merged LoRA (adapter weights merged into base)
- Library: 🤗 Transformers
## LoRA Configuration
- r: 8
- alpha (lora_alpha): 16
- dropout: 0.0
- target_modules: k_proj, o_proj, v_proj, q_proj
- bias: none
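
For reference, the configuration above corresponds to a `peft` `LoraConfig` along these lines (a sketch; `task_type` is an assumption based on the causal-LM base, not something stated in this card):

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    bias="none",
    task_type="CAUSAL_LM",  # assumption: causal language modeling task
)
```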
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_name = "justinj92/MedGPT-OSS-20B"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto",
)

prompt = "Your prompt here"
# With device_map="auto", move the inputs onto the model's device before generating
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
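
For conversational prompts, `tokenizer.apply_chat_template` builds the input in the chat format the tokenizer is configured with (a sketch; adjust roles and generation parameters to your use case):

```python
messages = [{"role": "user", "content": "Your prompt here"}]
# Renders the conversation with the tokenizer's chat template and tokenizes it
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```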
## Merge Process
The base model and LoRA adapter weights were merged with `PeftModel.merge_and_unload()` to create a standalone model, so no PEFT dependency is required at inference.
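
A merge of this kind typically looks like the following sketch (the adapter path is a placeholder; the actual adapter repo is not named in this card):

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "openai/gpt-oss-20b",
    torch_dtype=torch.float16,
    device_map="auto",
)
# Placeholder adapter location, for illustration only
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")

# Fold the LoRA deltas into the base weights and drop the PEFT wrappers
merged = model.merge_and_unload()
merged.save_pretrained("MedGPT-OSS-20B")
```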
## Limitations & Bias
This model inherits the limitations and potential biases of openai/gpt-oss-20b
and the fine-tuning dataset.
## License
Licensed under Apache-2.0. Check the base model's license for additional terms.