# MedGPT: GPT-2 Fine-Tuned on PubMed RCT
MedGPT is a GPT-2 model fine-tuned on the `pubmed-200k-rct` dataset. It classifies individual sentences from biomedical abstracts into one of five standard sections:
- Background
- Objective
- Methods
- Results
- Conclusion
This model is useful for tasks requiring structured understanding or summarization of scientific literature.
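Below is a minimal inference sketch, not an official snippet. It assumes the repository id `devmanpreet/Medical-GPT2-Classifier` (taken from the model tree below), that the checkpoint loads as a GPT-2 sequence-classification head with five labels, and that the label order matches the list above.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed repository id; adjust if the checkpoint lives elsewhere.
model_id = "devmanpreet/Medical-GPT2-Classifier"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# GPT-2 has no pad token by default; reuse EOS if padding is ever needed.
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
    model.config.pad_token_id = tokenizer.eos_token_id

# Assumed label order (matches the list in this card).
labels = ["Background", "Objective", "Methods", "Results", "Conclusion"]

sentence = "Patients were randomly assigned to receive either the drug or a placebo."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

print(labels[logits.argmax(dim=-1).item()])
```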
## Training Details
- Base Model: `gpt2` (124M parameters)
- Dataset: `pietrolesci/pubmed-200k-rct`
- Task: Sentence classification
- Labels: Background, Objective, Methods, Results, Conclusion
- Epochs: 1 (partial training)
- Loss Function: CrossEntropy
- Optimizer: AdamW
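For reference, here is a minimal fine-tuning sketch consistent with the settings above (one epoch, AdamW as the `Trainer` default optimizer, cross-entropy via the sequence-classification head). The dataset column names, label strings, split names, batch size, and learning rate are assumptions, not values stated in this card.

```python
from datasets import load_dataset
from transformers import (GPT2Tokenizer, GPT2ForSequenceClassification,
                          Trainer, TrainingArguments)

# Assumed label set and ordering.
labels = ["BACKGROUND", "OBJECTIVE", "METHODS", "RESULTS", "CONCLUSIONS"]
label2id = {name: i for i, name in enumerate(labels)}

dataset = load_dataset("pietrolesci/pubmed-200k-rct")

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

model = GPT2ForSequenceClassification.from_pretrained("gpt2", num_labels=len(labels))
model.config.pad_token_id = tokenizer.eos_token_id

def preprocess(batch):
    # Assumed column names: "text" for the sentence, "labels" for the section tag.
    enc = tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)
    # Map string labels to ids; pass integer labels through unchanged.
    enc["labels"] = [label2id.get(l, l) for l in batch["labels"]]
    return enc

tokenized = dataset.map(preprocess, batched=True)

args = TrainingArguments(
    output_dir="medgpt-rct",
    num_train_epochs=1,              # partial training, as noted above
    per_device_train_batch_size=16,  # assumed
    learning_rate=2e-5,              # assumed
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],  # assumed split name
)
trainer.train()
```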
## Model Tree for devmanpreet/Medical-GPT2-Classifier
- Base model: `openai-community/gpt2`