# Medical-NER-Qwen-4B-Thinking

## Model Description
This is a fine-tuned medical LLM based on Qwen3-4B-Thinking, specialized for medical entity and relationship extraction.
## Model Details
- Base Model: unsloth/Qwen3-4B-Thinking-2507
- Fine-tuning Method: LoRA (Low-Rank Adaptation)
- Domain: Medical Literature Analysis
- Tasks: Entity Recognition, Relationship Extraction
- Language: English
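LoRA fine-tuning freezes the base model and trains small low-rank matrices injected into selected layers. A minimal sketch of such a setup with the `peft` library is shown below; the rank, alpha, dropout, and target modules are illustrative assumptions, not this card's actual training configuration.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("unsloth/Qwen3-4B-Thinking-2507")

# Hyperparameters below are assumptions for illustration only
config = LoraConfig(
    r=16,                                                  # low-rank dimension
    lora_alpha=32,                                         # scaling factor
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```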
## Performance Metrics
| Metric | Entity Extraction | Relationship Extraction |
|---|---|---|
| Precision | 0.000 | 0.000 |
| Recall | 0.000 | 0.000 |
| F1-Score | 0.000 | 0.000 |
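For extraction tasks, precision, recall, and F1 are conventionally computed by comparing the predicted set of entities (or relations) against the gold set. The card does not state its exact matching protocol, so the sketch below assumes exact-match set comparison, which is the most common convention:

```python
def prf1(pred: set, gold: set) -> tuple[float, float, float]:
    """Exact-match precision/recall/F1 over predicted vs. gold items."""
    tp = len(pred & gold)  # true positives: items in both sets
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Example: one of two predictions is correct, one of two gold items found
print(prf1({"hepatitis C virus", "liver"}, {"hepatitis C virus", "infection"}))
# → (0.5, 0.5, 0.5)
```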
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Load the base model (must match the model the adapter was trained on)
base_model = AutoModelForCausalLM.from_pretrained(
    "unsloth/Qwen3-4B-Thinking-2507",
    torch_dtype="auto",
    device_map="auto",
)

# Load the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base_model, "xingqiang/Medical-NER-Qwen-4B-Thinking")
tokenizer = AutoTokenizer.from_pretrained("xingqiang/Medical-NER-Qwen-4B-Thinking")

# Generate medical analysis
text = "Hepatitis C virus causes chronic liver infection."
messages = [
    {"role": "user", "content": f"Extract medical entities and relationships from: {text}"}
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
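Because the base model is a "Thinking" variant, the decoded output typically contains a chain-of-thought segment terminated by a `</think>` tag before the final answer. A small helper to separate the two is sketched below; it assumes the Qwen thinking-tag convention and tolerates the opening `<think>` being absent, since some chat templates prepend it to the prompt rather than the output.

```python
def split_thinking(text: str, close_tag: str = "</think>") -> tuple[str, str]:
    """Split raw model output into (reasoning, answer).

    Assumes the Qwen3-Thinking convention of reasoning terminated by
    </think>; the opening <think> may or may not appear in the output.
    """
    idx = text.rfind(close_tag)
    if idx == -1:
        # No thinking segment found: treat everything as the answer
        return "", text.strip()
    reasoning = text[:idx].replace("<think>", "").strip()
    answer = text[idx + len(close_tag):].strip()
    return reasoning, answer

# Example
reasoning, answer = split_thinking(
    "<think>HCV is a virus; the relation is 'causes'.</think>"
    "Entities: Hepatitis C virus, chronic liver infection."
)
print(answer)  # → Entities: Hepatitis C virus, chronic liver infection.
```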
## Model tree for xingqiang/Medical-NER-Qwen-4B-Thinking

- Base model: Qwen/Qwen3-4B-Thinking-2507
- Finetuned from: unsloth/Qwen3-4B-Thinking-2507