---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen3-4B-Instruct-2507
base_model_relation: adapter
library_name: peft
tags:
- canis-teach
- qwen3
- education
- lora
- transformers
- science
- tutoring
pipeline_tag: text-generation
datasets:
- CanisAI/teach-science-v1
---
# Canis.teach - Qwen3-4B Instruct (Science)
LoRA adapters for the Science tutor in the Canis.teach suite.
- **Base Model**: Qwen/Qwen3-4B-Instruct-2507
- **Release**: CanisAI/teach-science-qwen3-4b-2507-r1
- **Project**: Canis.teach - Learning that fits.
- **Subject**: Science
## What is this?
This repository provides LoRA adapters fine-tuned on Science tutoring dialogues. Apply these adapters to the base model to enable subject-aware, didactic behavior without downloading a full merged checkpoint.
The model is designed to **teach, not just answer** - providing step-by-step explanations, hints, and pedagogically structured responses.
For ready-to-run merged models or Ollama-friendly GGUF quantizations, see the "Related Models" section.
## Quick Start
### Installation
```bash
pip install transformers peft torch
```
### Usage (LoRA)
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base = "Qwen/Qwen3-4B-Instruct-2507"
adapter = "CanisAI/teach-science-qwen3-4b-2507-r1"

tokenizer = AutoTokenizer.from_pretrained(base, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(
    base,
    device_map="auto",
    torch_dtype="auto",
)
model = PeftModel.from_pretrained(model, adapter)

# Instruct models expect the chat template rather than a raw prompt
messages = [
    {"role": "user", "content": "Briefly compare mitosis and meiosis: purpose, divisions, chromosome number, variation."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs,
    max_new_tokens=256,
    temperature=0.7,
    top_p=0.8,
    top_k=20,
    do_sample=True,
)

# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
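Until the merged release is published (see "Related Models" below), you can produce a standalone checkpoint yourself by folding the adapters into the base weights with PEFT's `merge_and_unload`. A minimal sketch, continuing from the snippet above; the output directory name is just a placeholder:
```python
# Fold the LoRA weights into the base model and save a standalone checkpoint.
merged = model.merge_and_unload()
merged.save_pretrained("teach-science-qwen3-4b-merged")    # placeholder path
tokenizer.save_pretrained("teach-science-qwen3-4b-merged")
```
The resulting directory loads with plain `AutoModelForCausalLM.from_pretrained`, no PEFT required.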
## Training Details
- **Base Model**: Qwen/Qwen3-4B-Instruct-2507
- **Training Method**: Supervised Fine-Tuning (SFT) with LoRA
- **Framework**: Unsloth + TRL/PEFT
- **Data**: Canis.lab-curated Science tutoring dialogues
- **Target Modules**: Query, Key, Value, Output projections
- **Rank**: 16
- **Alpha**: 32
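The exact training script is not published here, but the hyperparameters above map onto a PEFT configuration along these lines. Treat this as a sketch rather than the original config: the dropout value is an assumption (it is not stated in this card), and the module names follow Qwen3's attention projection layers.
```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,                       # rank, as listed above
    lora_alpha=32,              # alpha, as listed above
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # Q/K/V/O projections
    lora_dropout=0.05,          # assumption; not stated in this card
    bias="none",
    task_type="CAUSAL_LM",
)
```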
## Intended Use
- **Primary**: Subject-aware tutoring for Science education
- **Applications**: Educational prototypes, tutoring systems, research
- **Approach**: Stepwise explanations, pedagogical hints, rubric-aligned responses
- **Target Audience**: Students, educators, researchers
## Model Behavior
The model is optimized for:
- Clear, step-by-step explanations
- Appropriate difficulty progression
- Encouraging learning through hints rather than direct answers
- Subject-specific pedagogical approaches
- Maintaining educational standards and accuracy
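In practice, the hint-first behavior is easiest to steer with a short system message. An illustrative example follows; the wording is our suggestion, not something prescribed by the training data:
```python
messages = [
    {
        "role": "system",
        "content": (
            "You are a science tutor. Guide the student with questions and "
            "hints, and reveal the full answer only after they have tried."
        ),
    },
    {"role": "user", "content": "Why does ice float on water?"},
]
```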
## Recommended Settings
For optimal tutoring behavior:
- **Temperature**: 0.6-0.8
- **Top-p**: 0.8-0.9
- **Top-k**: 20-40
- **Max tokens**: 256-512 (depending on complexity)
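To apply these settings consistently across calls, they can be bundled into a `transformers.GenerationConfig`. The specific values below are midpoints of the recommended ranges, continuing from the Quick Start snippet:
```python
from transformers import GenerationConfig

gen_config = GenerationConfig(
    temperature=0.7,
    top_p=0.85,
    top_k=20,
    max_new_tokens=384,
    do_sample=True,
)
outputs = model.generate(inputs, generation_config=gen_config)
```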
## Safety and Limitations
**Important Considerations**:
- Human oversight required for educational use
- May occasionally hallucinate or oversimplify complex topics
- For fact-critical applications, consider RAG with verified curriculum sources
- Follow your institution's data privacy and AI usage policies
- Not a replacement for qualified human instruction
## Related Models
| Type | Repository | Description |
|------|------------|-------------|
| **LoRA Adapters** | `CanisAI/teach-science-qwen3-4b-2507-r1` | This repository (lightweight) |
| **Merged Model** | (Coming Soon) | Ready-to-use full model |
| **GGUF Quantized** | (Coming Soon) | Ollama/llama.cpp compatible |
| **Dataset** | `CanisAI/teach-science-v1` | Training data |
## License
This model inherits the license from the base model (Qwen/Qwen3-4B-Instruct-2507). Please review the base model's license terms before use.
## Citation
```bibtex
@misc{canis-teach-science,
  title={Canis.teach Science Tutor},
  author={CanisAI},
  year={2025},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/CanisAI/teach-science-qwen3-4b-2507-r1}}
}
```
## Acknowledgments
- **Qwen Team** for the excellent base model
- **Unsloth** for efficient training tools
- **Hugging Face** ecosystem (Transformers, PEFT, TRL)
- Educators and contributors supporting the Canis.teach project
---
**Canis.teach** - Learning that fits.