MiniMedra 0.6b

MiniMedra 0.6b is a fine-tuned medical language model based on the Gemma 0.6b architecture, trained specifically for medical and healthcare-related tasks.

Model Details

  • Base Model: Gemma 0.6b
  • Fine-tuning: LoRA (Low-Rank Adaptation)
  • Domain: Medical/Healthcare
  • Parameters: ~0.6 billion
  • Format: SafeTensors
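
As a quick sanity check on the figures above, the parameter count can be verified directly from the loaded checkpoint. This is a minimal sketch using the repo id from the Usage section; the exact total may differ slightly from 0.6 billion:

from transformers import AutoModelForCausalLM

# Load the checkpoint and count its parameters
model = AutoModelForCausalLM.from_pretrained("drwlf/MiniMedra-0.6b")
total_params = sum(p.numel() for p in model.parameters())
print(f"Total parameters: {total_params / 1e9:.2f}B")  # expected to be roughly 0.6B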

Training

This model was fine-tuned with Axolotl using LoRA adapters on medical datasets, with training focused on improving its understanding and generation of medical content.
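
The exact Axolotl configuration is not published with this card, but the general shape of a LoRA fine-tune can be sketched with the PEFT library. The rank, alpha, dropout, and target modules below are illustrative assumptions, not the values used for this model, and the published checkpoint stands in for the base Gemma weights, which are not named here:

from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Hypothetical adapter hyperparameters -- the actual training config is not published
lora_config = LoraConfig(
    r=16,                      # rank of the low-rank update matrices (assumed)
    lora_alpha=32,             # scaling factor (assumed)
    lora_dropout=0.05,         # adapter dropout (assumed)
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections (assumed)
    task_type="CAUSAL_LM",
)

# Stand-in for the base Gemma checkpoint, which is not named on this card
base_model = AutoModelForCausalLM.from_pretrained("drwlf/MiniMedra-0.6b")
peft_model = get_peft_model(base_model, lora_config)
peft_model.print_trainable_parameters()  # only the adapter weights are trainable

Axolotl expresses equivalent settings in a YAML config rather than Python, but the effect is the same: small low-rank matrices are injected into the targeted projections while the base weights stay frozen.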

Usage

from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("drwlf/MiniMedra-0.6b")
model = AutoModelForCausalLM.from_pretrained("drwlf/MiniMedra-0.6b")

# Example usage
input_text = "What are the symptoms of diabetes?"
inputs = tokenizer(input_text, return_tensors="pt")
# Enable sampling so the temperature setting takes effect; max_new_tokens bounds
# the length of the generated continuation rather than the total sequence.
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=True, temperature=0.7)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
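
For a more compact variant, the text-generation pipeline wraps tokenization, generation, and decoding in one call; the generation settings below simply mirror the example above:

from transformers import pipeline

# Build a text-generation pipeline around the same checkpoint
generator = pipeline("text-generation", model="drwlf/MiniMedra-0.6b")
result = generator(
    "What are the symptoms of diabetes?",
    max_new_tokens=100,
    do_sample=True,
    temperature=0.7,
)
print(result[0]["generated_text"])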

License

Apache 2.0

Disclaimer

This model is for research and educational purposes only. It should not be used as a substitute for professional medical advice, diagnosis, or treatment. Always consult with qualified healthcare professionals for medical concerns.
