---
base_model: Qwen/Qwen3-8B
library_name: peft
---

# LoRA Adapter for SFT

This is a LoRA (Low-Rank Adaptation) adapter trained using supervised fine-tuning (SFT).

## Base Model

- **Base Model**: `Qwen/Qwen3-8B`
- **Adapter Type**: LoRA
- **Task**: Supervised Fine-Tuning

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model and tokenizer
base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-8B")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-8B")

# Load the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base_model, "thejaminator/misalignedfacts-20251007")
```

A minimal generation sketch is included after the training details below.

## Training Details

This adapter was trained using supervised fine-tuning on conversation data to improve the model's ability to follow instructions and generate helpful responses.
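
## Example Generation

As a quick check that the adapter loads and responds, here is a minimal generation sketch using the standard `transformers` chat-template API. It assumes the `model` and `tokenizer` objects from the Usage section above; the prompt and generation settings are illustrative and not taken from the training setup.

```python
# Illustrative prompt and settings; adjust as needed
messages = [{"role": "user", "content": "Give me a one-sentence summary of LoRA."}]

# Build the prompt with the model's chat template and move it to the model's device
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate a response and decode only the newly generated tokens
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```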