Med-Llama-3.1-8B-DeepSeek-Distilled
Model Overview
This model was fine-tuned from enesarda22/Llama-3.1-8B-DeepSeek67B-Distilled on a medical corpus.
Evaluation Scores
- bigbio/med_qa: Accuracy: 0.514
- qiaojin/PubMedQA: Accuracy: 0.817
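The exact evaluation protocol is not documented here. As a point of reference, the sketch below shows one way the PubMedQA accuracy could be reproduced with greedy decoding; the pqa_labeled config, the dataset field names, and the prompt format are assumptions, not the authors' setup.

# Minimal PubMedQA accuracy sketch (assumed config, fields, and prompt format).
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "enesarda22/Med-Llama-3.1-8B-DeepSeek67B-Distilled"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")
model.eval()

# The labeled PubMedQA subset ships as a single "train" split of yes/no/maybe questions.
data = load_dataset("qiaojin/PubMedQA", "pqa_labeled", split="train")

correct = 0
for ex in data:
    context = " ".join(ex["context"]["contexts"])
    prompt = (
        f"Context: {context}\n"
        f"Question: {ex['question']}\n"
        "Answer with yes, no, or maybe.\nAnswer:"
    )
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=4096).to(model.device)
    with torch.no_grad():
        out = model.generate(**inputs, max_new_tokens=5, do_sample=False)
    # Decode only the newly generated tokens and compare against the gold label.
    answer = tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True).strip().lower()
    correct += answer.startswith(ex["final_decision"])

print(f"Accuracy: {correct / len(data):.3f}")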
Usage
Load the model and tokenizer from the Hugging Face Hub:
from transformers import AutoModelForCausalLM, AutoTokenizer

# Download the fine-tuned weights and the matching tokenizer from the Hub.
model = AutoModelForCausalLM.from_pretrained("enesarda22/Med-Llama-3.1-8B-DeepSeek67B-Distilled")
tokenizer = AutoTokenizer.from_pretrained("enesarda22/Med-Llama-3.1-8B-DeepSeek67B-Distilled")
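Once loaded, the model can be used with the standard transformers generation API. The snippet below is a minimal greedy-decoding example that reuses the model and tokenizer loaded above; the prompt text is purely illustrative, not a prescribed format.

import torch

prompt = "Question: What is the first-line treatment for uncomplicated hypertension?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Greedy decoding; adjust max_new_tokens or enable sampling as needed.
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Print only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))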