# Model Card for phi3-mini-yoda-adapter
This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct). It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline

question = "A 65 year old M has chest pain of ASY, a resting blood pressure of 130 mmHg, and cholesterol levels of 275 mg/dl. Their fasting blood sugar is normal (120 mg/dl or less). Resting ECG results are ST. The maximum heart rate achieved is 115. Exercise induced angina is Y. The oldpeak value is 1.0. The slope of the peak exercise ST segment is Flat. Does this individual likely have heart disease?"
generator = pipeline("text-generation", model="LordPatil/phi3-mini-yoda-adapter", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure

This model was trained with SFT (supervised fine-tuning).
### Framework versions
- TRL: 0.12.1
- Transformers: 4.46.2
- Pytorch: 2.6.0+cu124
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations

Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```