# Model Card for oswestry-mistral-finetuned
This is a fine-tuned version of the Mistral-7B-Instruct-v0.2 model, specialized in scoring functional disability interviews in Spanish using the Oswestry Disability Index (ODI) scale. The model transforms clinical-style interview transcripts into structured item scores and shows improved performance over the base model on this task.
## Model Details

### Model Description
- Developed by: Alejandro M. L.
- Model type: Causal Decoder-Only Transformer (LLM)
- Language(s): Spanish (with clinical vocabulary)
- License: apache-2.0
- Fine-tuned from model: Mistral-7B-Instruct-v0.2
### Model Sources

- Repository: GitHub
- Paper: *Transformación de Informes Médicos en escalas funcionales*
## Uses

### Direct Use

The model takes as input a clinical interview transcript in Spanish (following a structured instruction format) and returns a text output containing the score for each item of the Oswestry scale.
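The exact prompt wording used during fine-tuning matters for output quality. A minimal sketch of a helper that wraps a raw transcript in the instruction format shown in the quickstart below (the function name `build_odi_prompt` is illustrative, not part of the released code):

```python
def build_odi_prompt(transcript: str) -> str:
    """Wrap a Spanish interview transcript in the instruction format
    the model expects. The wording mirrors the quickstart example;
    match whatever format was used during fine-tuning."""
    return f"Entrevista:\n{transcript}\n\nResponde con puntuación Oswestry:"
```

Keeping prompt construction in one place makes it easy to verify that every request follows the structured format, which the limitations section notes is important to avoid hallucinated output.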
### Downstream Use
Can be integrated into tools that support:
- Preliminary functional assessment in telemedicine
- Research pipelines for NLP in healthcare
- Spanish-language LLM benchmarking on medical tasks
### Out-of-Scope Use
- Not suitable for general-purpose chat applications
- Should not be used for real clinical decisions without expert supervision
- Not intended for languages other than Spanish
## Bias, Risks, and Limitations
- The model was fine-tuned on synthetic data, which may limit generalizability.
- Outputs might include hallucinations if the input format is not followed.
- It reflects the biases of the base model and the prompt structure used.
### Recommendations
- Use only for research and non-critical applications.
- Always validate outputs against clinical judgment.
- Further training with real anonymized clinical data is highly recommended.
## How to Get Started with the Model

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the fine-tuned model and its tokenizer from the Hub
model = AutoModelForCausalLM.from_pretrained("DrAleML/Oswestry-Instruct")
tokenizer = AutoTokenizer.from_pretrained("DrAleML/Oswestry-Instruct")

# Structured instruction format: interview transcript + scoring request
prompt = "Entrevista:\nPaciente refiere dolor lumbar que aumenta al estar de pie...\n\nResponde con puntuación Oswestry:"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=300)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```