# Llama 3.1 DeepSeek R1 Medical - GGUF

This is a GGUF conversion of `alaabh/llama3.1-deepseek-r1-medical-merged-16bit`.
## Model Details
- Format: GGUF
- Precision: F16
- Size: ~10.6 GB
- Compatible with: llama.cpp, Ollama, LM Studio, and other GGUF-supported inference engines
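After downloading, a quick sanity check is to inspect the file's magic bytes: per the GGUF specification, every GGUF file begins with the ASCII bytes `GGUF`. A minimal sketch (the helper name is illustrative, not part of any library):

```python
GGUF_MAGIC = b"GGUF"  # first four bytes of every GGUF file, per the GGUF spec


def looks_like_gguf(path: str) -> bool:
    """Return True if the file at `path` starts with the GGUF magic bytes."""
    with open(path, "rb") as f:
        return f.read(4) == GGUF_MAGIC
```

This only verifies the container format, not that the weights are intact; for a full integrity check, compare the file's checksum against the one published on the model page.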
## Usage
With llama.cpp:

```shell
./llama-cli -m model.gguf -p "What is hypertension?" -n 100
```
With Ollama:

```shell
# Create a Modelfile pointing at the GGUF, then build and run the model
echo 'FROM ./model.gguf' > Modelfile
ollama create medical-llama -f Modelfile
ollama run medical-llama
```
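The F16 file is large for local use; llama.cpp ships a `llama-quantize` tool that can produce smaller quantized variants (e.g. `Q4_K_M`, a common quality/size trade-off) that the same runtimes accept. A minimal sketch, assuming llama.cpp is built in the current directory and the output filename is your choice:

```shell
# Quantize the F16 GGUF down to Q4_K_M (output filename is arbitrary)
./llama-quantize model.gguf model-Q4_K_M.gguf Q4_K_M
```

Quantization trades some output quality for a substantially smaller file and lower memory use; test the quantized model on your own prompts before relying on it.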
## Original Model
This GGUF conversion is based on the merged 16-bit model that combines Llama 3.1 with DeepSeek R1 for medical applications.