# GPT-OSS-20B Fine-Tuned
A fine-tuned gpt-oss-20b model optimized for efficient text generation, multilingual conversational tasks, and instruction-following.
## Overview

| Item | Details |
|---|---|
| Base checkpoint | unsloth/gpt-oss-20b |
| Fine-tune method | LoRA (PEFT) with Unsloth |
| Training run | 30 steps • Multilingual-Thinking dataset |
| Trainable params | [To be calculated, if available] |
| Loss | [Loss metrics unavailable] |
| Hardware | [Hardware details unavailable] |
| License | MIT License (Base model: Refer to gpt-oss-20b license) |
| Intended use | Educational, research, and chat-based applications |
## Datasets

| Dataset | Size | Focus |
|---|---|---|
| HuggingFaceH4/Multilingual-Thinking | [Size unavailable] | Multilingual reasoning and conversational tasks |
The dataset was formatted with the model's chat template before training.
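A minimal sketch of that preprocessing step, assuming the dataset exposes a `messages` column of chat turns and reusing the `tokenizer` loaded in the Usage section below (both are assumptions, not details reported on this card):

```python
from datasets import load_dataset

# Hedged sketch: assumes a "messages" column and a previously loaded `tokenizer`.
dataset = load_dataset("HuggingFaceH4/Multilingual-Thinking", split="train")

def to_text(example):
    # Render the conversation into a single training string via the chat template.
    return {"text": tokenizer.apply_chat_template(example["messages"], tokenize=False)}

dataset = dataset.map(to_text)
```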
## Installation
To use this model, install the required dependencies:
```bash
pip install "torch>=2.8.0" "triton>=3.4.0" "transformers>=4.55.3" bitsandbytes unsloth
```
## Usage

### Loading the Model
```python
from unsloth import FastLanguageModel
import torch

# Load the 4-bit quantized base model and its tokenizer.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gpt-oss-20b",
    max_seq_length=1024,   # matches the sequence length used for fine-tuning
    dtype=torch.float16,
    load_in_4bit=True,
)
```
### Fine-Tuning with LoRA
```python
# Attach LoRA adapters to the attention and MLP projection layers.
model = FastLanguageModel.get_peft_model(
    model,
    r=8,  # LoRA rank
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
    lora_dropout=0,
    bias="none",
    use_gradient_checkpointing="unsloth",  # reduces activation memory during training
)
```
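The trainable-parameter count in the Overview table is left unreported. If the object returned by `get_peft_model` behaves like a standard PEFT model (an assumption, not something stated on this card), it can report the count directly:

```python
# Print the number of trainable (LoRA) parameters versus total parameters.
model.print_trainable_parameters()
```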
### Inference
```python
from transformers import TextStreamer

messages = [{"role": "user", "content": "Solve x^5 + 3x^4 - 10 = 3."}]

# Build the chat prompt (with the assistant turn appended) and move it to the model's device.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt", return_dict=True
).to(model.device)
# Stream tokens to stdout while generating.
outputs = model.generate(**inputs, max_new_tokens=512, streamer=TextStreamer(tokenizer))
```
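When streaming to stdout is not needed, the completion can also be recovered from the returned tensor. A minimal sketch, assuming only the newly generated tokens (everything after the prompt) should be decoded:

```python
# Decode the generated continuation, skipping the prompt tokens and special tokens.
completion = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
)
print(completion)
```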
## Training Details

### Training Configuration
- Batch Size: 1
- Gradient Accumulation Steps: 4
- Learning Rate: 2e-4
- Optimizer: adamw_8bit
- Warmup Steps: 5
- Max Steps: 30
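The original training script is not reproduced on this card. The following is a minimal sketch of how these hyperparameters could map onto a TRL `SFTTrainer` run, assuming the `model` and `tokenizer` from the Usage section and the formatted `dataset` with a `text` column from the Datasets section; `output_dir` and `logging_steps` are illustrative placeholders, not reported settings:

```python
from trl import SFTConfig, SFTTrainer

# Hedged sketch: only the hyperparameters listed above come from this card;
# everything else is an assumption for illustration.
trainer = SFTTrainer(
    model=model,
    processing_class=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        per_device_train_batch_size=1,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        optim="adamw_8bit",
        warmup_steps=5,
        max_steps=30,
        logging_steps=1,        # placeholder
        output_dir="outputs",   # placeholder
    ),
)
trainer.train()
```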
## Responsible Use
- Bias: The model may reflect biases in the training data. Users should evaluate outputs for fairness.
- Misuse: Do not use the model to generate harmful or misleading content.
- Limitations: Optimized for efficiency with 4-bit quantization, which may introduce minor accuracy trade-offs. Limited to 1024-token sequences.
- Disclaimer: Not intended for critical decision-making. The author and base-model creators accept no liability for misuse or errors.
## Acknowledgements
- The unsloth library for enabling efficient fine-tuning.
- Hugging Face for providing the base model and training infrastructure.