
MEL: Legal Spanish Language Model

Model Name: MEL (Modelo de Español Legal)
Model Type: Encoder-only Transformer
Language: Spanish
Domain: Legal Texts
Paper: https://arxiv.org/abs/2501.16011


Overview

MEL is a transformer-based language model designed specifically for processing and understanding Spanish legal texts. Built on XLM-RoBERTa-large, it is further pre-trained on a large corpus of legal documents, including the Boletín Oficial del Estado (BOE), parliamentary transcripts, court rulings, and other legislative texts. MEL significantly improves performance on legal NLP tasks such as legal text classification and named entity recognition (NER).


Model Description

Architecture

  • Base Model: XLM-RoBERTa-large
  • Training Objective: Masked Language Modeling (MLM)
  • Pre-training Strategy: Continued pre-training on Spanish legal texts
  • Context Window: 512 tokens
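
Because the context window is capped at 512 tokens, long legal documents have to be truncated or split before they can be encoded. A minimal sketch using the tokenizer's overflow support (the stride value is an illustrative choice, not taken from the paper):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("IIC/MEL")

long_text = "..."  # a legal document longer than 512 tokens

# Split the document into overlapping 512-token windows.
encoded = tokenizer(
    long_text,
    max_length=512,
    truncation=True,
    stride=64,                       # overlap between consecutive chunks (illustrative)
    return_overflowing_tokens=True,
    padding="max_length",
    return_tensors="pt",
)
print(encoded["input_ids"].shape)    # (num_chunks, 512)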

Training Data

MEL is trained on a curated corpus of 5.52 million legal texts (~92.7GB) sourced from:

  • BOE (Boletín Oficial del Estado)
  • Parliamentary records
  • Court rulings
  • Legal statutes

To ensure high-quality text processing, documents were preprocessed by removing unwanted characters, normalizing spacing, chunking texts, and filtering non-Spanish content.
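
A rough illustration of that preprocessing pipeline is sketched below; the exact cleaning rules and the language-identification tool used by the authors are not published, so the langdetect call and the word-based chunking are assumptions:

import re

from langdetect import detect  # assumed language-identification library, not from the paper

def preprocess(document: str, chunk_size: int = 512) -> list[str]:
    # Remove control characters and normalize whitespace.
    document = re.sub(r"[\x00-\x08\x0b-\x1f]", " ", document)
    document = re.sub(r"\s+", " ", document).strip()

    # Filter out empty or non-Spanish documents.
    if not document or detect(document) != "es":
        return []

    # Naive word-based chunking; the paper's chunking strategy may differ.
    words = document.split()
    return [" ".join(words[i:i + chunk_size]) for i in range(0, len(words), chunk_size)]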

Cutoff date: February 2024

Training Configuration

  • GPU: NVIDIA A100 80GB PCIe
  • Training Time: 13.9 days (~7 days per epoch, 2 epochs total)
  • Optimizer: AdamW (β1=0.9, β2=0.98, ϵ=1e-6)
  • Batch Size: 16 (Gradient Accumulation: 4, Effective Batch Size: 64)
  • Scheduler: Cosine Learning Rate Scheduler
  • Warmup Steps: 8% of total training steps
  • Learning Rate: 1e-4
  • Weight Decay: 0.01
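
For reference, the hyperparameters above map roughly onto a Hugging Face TrainingArguments configuration like the one below. This is a sketch, not the authors' training script: the corpus loading is omitted and the masking probability is an assumption.

from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")
model = AutoModelForMaskedLM.from_pretrained("xlm-roberta-large")

args = TrainingArguments(
    output_dir="mel-pretraining",
    per_device_train_batch_size=16,
    gradient_accumulation_steps=4,   # effective batch size 64
    learning_rate=1e-4,
    weight_decay=0.01,
    adam_beta1=0.9,
    adam_beta2=0.98,
    adam_epsilon=1e-6,
    lr_scheduler_type="cosine",
    warmup_ratio=0.08,               # 8% of total training steps
    num_train_epochs=2,
)

# Standard MLM collator; the 15% masking probability is an assumption.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

# trainer = Trainer(model=model, args=args, data_collator=collator, train_dataset=...)
# trainer.train()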

Evaluation

MEL was benchmarked on two datasets:

1. MultiEURLEX (Spanish legal text classification)

2. Private Multiclass Classification Dataset

  • Task: Classify legal documents into one of 9 categories
  • Performance:
    • MEL achieves an F1 score of 0.9260, surpassing XLM-RoBERTa-large (0.9103), Legal-XLM-RoBERTa (0.8935), and RoBERTalex (0.7007).
  • Small-data learning: MEL generalizes better even with limited training data, reaching an F1 score of 0.8812 early in training, compared to 0.7803 for the next-best model.

Model Performance

Key Findings

  • Outperforms general multilingual models (XLM-RoBERTa) and other domain-specific models in Spanish legal text classification.
  • Requires less fine-tuning, demonstrating strong domain adaptation from the pre-training phase.
  • Shows high sample efficiency, achieving strong results even with limited training data.

Limitations

  • Not evaluated on NER or other token-level tasks due to the lack of annotated Spanish legal datasets.
  • Trained only on Spanish legal texts, so performance in multilingual legal contexts is unknown.
  • Potential bias in legal terminology due to corpus selection.


How to Use

from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("IIC/MEL")
model = AutoModel.from_pretrained("IIC/MEL")

text = "El artículo 45 de la Constitución establece que..."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)
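
Since IIC/MEL is published as a base encoder without a task head, outputs.last_hidden_state holds per-token embeddings. A common way to obtain a single document vector is mean pooling over the attention mask (an illustrative choice, not prescribed by the authors):

# outputs.last_hidden_state has shape (batch, seq_len, hidden_size).
# Mean-pool over tokens, ignoring padding positions via the attention mask.
mask = inputs["attention_mask"].unsqueeze(-1).float()
doc_embedding = (outputs.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
print(doc_embedding.shape)  # (1, 1024) for the large architecture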

For fine-tuning on specific legal tasks, use Trainer from Hugging Face’s transformers library.
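
A minimal fine-tuning sketch for a multiclass setup like the 9-category task described above; the hyperparameters and dataset handling are placeholders rather than the authors' configuration:

from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("IIC/MEL")
model = AutoModelForSequenceClassification.from_pretrained("IIC/MEL", num_labels=9)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

args = TrainingArguments(
    output_dir="mel-classifier",
    learning_rate=2e-5,              # illustrative fine-tuning hyperparameters,
    per_device_train_batch_size=16,  # not taken from the paper
    num_train_epochs=3,
)

# train_ds / eval_ds: any datasets with "text" and "label" columns, mapped
# through tokenize() (the authors' 9-class corpus is private).
# trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=eval_ds)
# trainer.train()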


Future Work

  • Develop NER models for legal entity extraction.
  • Expand dataset to cover more diverse legal domains (e.g., contracts, case law, administrative procedures).
  • Fine-tune on additional downstream tasks (question answering, legal summarization, information retrieval).
  • Improve bias detection and mitigation strategies.

Citation

If you use MEL, please cite:

@misc{sánchez2025mellegalspanishlanguage,
      title={MEL: Legal Spanish Language Model}, 
      author={David Betancur Sánchez and Nuria Aldama García and Álvaro Barbero Jiménez and Marta Guerrero Nieto and Patricia Marsà Morales and Nicolás Serrano Salas and Carlos García Hernán and Pablo Haya Coll and Elena Montiel Ponsoda and Pablo Calleja Ibáñez},
      year={2025},
      eprint={2501.16011},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2501.16011}, 
}

Acknowledgements

This work has received funding from the Inesdata project (Infrastructure to Investigate Data Spaces in Distributed Environments at UPM), funded under the UNICO I+D CLOUD call by the Ministry for Digital Transformation and the Civil Service, within the framework of the recovery plan PRTR financed by the European Union (NextGenerationEU).

Project code: TSI-063100-2022-0001

Contributors:

  • David Betancur Sánchez, Instituto de Ingeniería del Conocimiento (IIC)
  • Nuria Aldama García, Instituto de Ingeniería del Conocimiento (IIC)
  • Álvaro Barbero Jiménez, Instituto de Ingeniería del Conocimiento (IIC)
  • Marta Guerrero Nieto, Instituto de Ingeniería del Conocimiento (IIC)
  • Patricia Marsà Morales, Instituto de Ingeniería del Conocimiento (IIC)
  • Nicolás Serrano Salas, Instituto de Ingeniería del Conocimiento (IIC)
  • Carlos García Hernán, Instituto de Ingeniería del Conocimiento (IIC)
  • Pablo Haya Coll, Instituto de Ingeniería del Conocimiento (IIC)
  • Elena Montiel Ponsoda, Universidad Politécnica de Madrid
  • Pablo Calleja Ibáñez, Universidad Politécnica de Madrid
