Paraformer: Attentive Deep Neural Networks for Legal Document Retrieval
This repository provides a simplified Hugging Face implementation of the Paraformer model for legal document retrieval, based on the paper "Attentive Deep Neural Networks for Legal Document Retrieval" by Nguyen et al.
Important Notes
Usage Scope
- This is a simplified, lightweight implementation designed for easy integration with Hugging Face Transformers
- For full functionality and customization, please visit the original repository: https://github.com/nguyenthanhasia/paraformer
- The original repository contains the complete training pipeline, evaluation scripts, and advanced features
Licensing & Usage
- Research purposes: Free to use
- Commercial purposes: Use at your own risk
- Please refer to the original repository for detailed licensing information
Model Architecture
Paraformer employs a hierarchical attention mechanism specifically designed for legal document retrieval (sketched after this list):
- Sentence-level encoding using pre-trained SentenceTransformer (paraphrase-mpnet-base-v2)
- Query-aware attention mechanism with optional sparsemax activation
- Binary classification for document relevance prediction
- Interpretable attention weights for understanding model decisions
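For intuition only, here is a minimal sketch of the query-aware "general" attention idea described above. It is not the official implementation: the layer names (`W_a`, `classifier`) are illustrative assumptions, and plain softmax stands in for the optional sparsemax.

```python
# Minimal sketch of query-aware "general" attention over sentence embeddings.
# Illustrative only: W_a and classifier are assumed names, and softmax replaces
# the optional sparsemax used by the actual model.
import torch
import torch.nn as nn
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("paraphrase-mpnet-base-v2")
hidden = 768
W_a = nn.Linear(hidden, hidden, bias=False)   # general attention: score_i = q . W_a(s_i)
classifier = nn.Linear(hidden, 2)             # binary relevance head

query = "What are the legal requirements for contract formation?"
sentences = [
    "A contract is a legally binding agreement between two or more parties.",
    "For a contract to be valid, it must have offer, acceptance, and consideration.",
]

q = torch.tensor(encoder.encode([query]))         # [1, hidden]
S = torch.tensor(encoder.encode(sentences))       # [num_sentences, hidden]

scores = (W_a(S) @ q.T).squeeze(-1)               # [num_sentences]
weights = torch.softmax(scores, dim=-1)           # sparsemax in the real model
doc_vec = (weights.unsqueeze(-1) * S).sum(dim=0)  # attention-pooled document vector
logits = classifier(doc_vec)                      # 2 logits: not relevant / relevant
print(weights, logits)
```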
Quick Start
Installation
```bash
pip install transformers torch sentence-transformers
```
Basic Usage
```python
from transformers import AutoModel

# Load the model
model = AutoModel.from_pretrained('nguyenthanhasia/paraformer', trust_remote_code=True)

# Example usage
query = "What are the legal requirements for contract formation?"
article = [
    "A contract is a legally binding agreement between two or more parties.",
    "For a contract to be valid, it must have offer, acceptance, and consideration.",
    "The parties must have legal capacity to enter into the contract."
]

# Get relevance score (0.0 to 1.0)
relevance_score = model.get_relevance_score(query, article)
print(f"Relevance Score: {relevance_score:.4f}")  # Example output: 0.5500

# Get binary prediction (0 = not relevant, 1 = relevant)
prediction = model.predict_relevance(query, article)
print(f"Prediction: {prediction}")  # Example output: 1
```
Batch Processing
```python
import torch

queries = [
    "What constitutes a valid contract?",
    "How can employment be terminated?"
]
articles = [
    ["A contract requires offer, acceptance, and consideration.", "All parties must have legal capacity."],
    ["Employment can be terminated by mutual agreement.", "Notice period must be respected."]
]

# Forward pass for batch processing
outputs = model.forward(
    query_texts=queries,
    article_texts=articles,
    return_dict=True
)

# Get probabilities and predictions
probabilities = torch.softmax(outputs.logits, dim=-1)
predictions = torch.argmax(outputs.logits, dim=-1)

for i, (query, article) in enumerate(zip(queries, articles)):
    score = probabilities[i, 1].item()
    pred = predictions[i].item()
    print(f"Query: {query}")
    print(f"Score: {score:.4f}, Prediction: {pred}")
```
Model Specifications
| Parameter | Value |
|---|---|
| Model Size | ~445 MB |
| Hidden Size | 768 |
| Base Model | paraphrase-mpnet-base-v2 |
| Attention Type | General with Sparsemax |
| Output Classes | 2 (relevant/not relevant) |
| Input Format | Query string + article sentences (list of strings) |
Important Considerations
Input Format
- Documents must be pre-segmented into sentences (provided as a list of strings; see the segmentation sketch after this list)
- The model processes each sentence individually before applying attention
- Empty articles are handled gracefully
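If your documents arrive as raw text, you need to split them into sentences before calling the model. Below is a naive regex-based splitter as one option (a proper segmenter such as NLTK or spaCy is preferable for real legal text, which is full of abbreviations and citations); `model` is assumed to be loaded as in the Quick Start above.

```python
import re

# Naive sentence segmentation: split on ., !, or ? followed by whitespace.
# Real legal text deserves a dedicated segmenter; this is only a sketch.
raw_article = (
    "A contract is a legally binding agreement between two or more parties. "
    "For a contract to be valid, it must have offer, acceptance, and consideration."
)
article_sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", raw_article) if s.strip()]

score = model.get_relevance_score("What makes a contract valid?", article_sentences)
print(article_sentences)
print(f"Relevance Score: {score:.4f}")
```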
Model Behavior
- Scores are not absolute relevance judgments - they represent relative similarity in the learned feature space
- Results should be interpreted as similarity scores rather than definitive relevance conclusions (see the ranking sketch after this list)
- The model was trained on legal documents and may perform differently on other domains
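Because the scores are comparative rather than calibrated probabilities, a common pattern is to rank candidate articles for a query and keep the top results instead of applying a fixed cut-off. A small sketch, reusing the `model` loaded above with invented example articles:

```python
# Rank candidate articles for one query by relevance score (higher = more similar).
query = "How can employment be terminated?"
candidates = [
    ["Employment can be terminated by mutual agreement.", "Notice period must be respected."],
    ["A contract requires offer, acceptance, and consideration."],
    ["The parties must have legal capacity to enter into the contract."],
]

scored = [(model.get_relevance_score(query, article), article) for article in candidates]
for score, article in sorted(scored, key=lambda x: x[0], reverse=True):
    print(f"{score:.4f}  {article[0]}")
```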
Performance Notes
- The model includes pretrained weights converted from the original PyTorch Lightning checkpoint
- Some weights (particularly SentenceTransformer components) may not be perfectly aligned due to architecture differences
- For optimal performance, consider fine-tuning on your specific dataset (a rough sketch follows this list)
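The complete training pipeline lives only in the original repository. As a rough illustration of what fine-tuning this wrapper could look like, here is a heavily hedged sketch: the data, optimizer, learning rate, and manual cross-entropy loss are placeholder assumptions, and it presumes that `outputs.logits` keeps gradients for the attention and classification layers.

```python
# Hedged fine-tuning sketch: cross-entropy on the model's relevance logits.
# Data and hyperparameters below are placeholders, not the original recipe.
import torch

train_pairs = [
    ("What constitutes a valid contract?",
     ["A contract requires offer, acceptance, and consideration."], 1),
    ("What constitutes a valid contract?",
     ["Employment can be terminated by mutual agreement."], 0),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
loss_fn = torch.nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for query, article, label in train_pairs:
        outputs = model.forward(query_texts=[query], article_texts=[article], return_dict=True)
        loss = loss_fn(outputs.logits, torch.tensor([label]))  # assumes logits carry gradients
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
model.eval()
```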
Advanced Usage
Custom Configuration
```python
from transformers import AutoConfig, AutoModel

# Load configuration
config = AutoConfig.from_pretrained('nguyenthanhasia/paraformer', trust_remote_code=True)

# Modify configuration if needed
config.dropout_prob = 0.2
config.use_sparsemax = False  # Use softmax instead

# Create model with custom config
model = AutoModel.from_pretrained(
    'nguyenthanhasia/paraformer',
    config=config,
    trust_remote_code=True
)
```
Accessing Attention Weights
```python
# Get attention weights for interpretability
outputs = model.forward(
    query_texts=["Your query"],
    article_texts=[["Sentence 1", "Sentence 2", "Sentence 3"]],
    return_dict=True
)

# Access attention weights
attention_weights = outputs.attentions[0]  # Shape: [1, num_sentences]
print("Attention weights:", attention_weights)
```
Research & Citation
This model is based on the research paper:
```bibtex
@article{nguyen2022attentive,
  title={Attentive Deep Neural Networks for Legal Document Retrieval},
  author={Nguyen, Ha-Thanh and Phi, Manh-Kien and Ngo, Xuan-Bach and Tran, Vu and Nguyen, Le-Minh and Tu, Minh-Phuong},
  journal={Artificial Intelligence and Law},
  pages={1--30},
  year={2022},
  publisher={Springer}
}
```
Related Resources
- Original Repository: https://github.com/nguyenthanhasia/paraformer - Full implementation with training scripts
- Research Paper: https://arxiv.org/abs/2212.13899
- COLIEE Competition: Data and evaluation framework used in the original research
Contributing
For contributions, feature requests, or issues related to the core model:
- Visit the original repository: https://github.com/nguyenthanhasia/paraformer
For issues specific to this Hugging Face implementation:
- Please open an issue in the Hugging Face model repository
Disclaimer
This is a simplified implementation for easy integration. The original repository contains the complete research implementation with full training and evaluation capabilities. Users seeking to reproduce research results or implement custom training should refer to the original repository.
Use responsibly: This model is provided for research purposes. Commercial usage is at your own risk and discretion.