---
license: cc-by-nc-3.0
language:
- es
base_model:
- allenai/longformer-base-4096
pipeline_tag: text-classification
library_name: transformers
tags:
- sgd
- documental
- gestion
- documental type
- text-classification
- longformer
- spanish
- document-management
- smote
- multi-class
- fine-tuned
- transformers
- gpu
- a100
- cc-by-nc-3.0
---
# Exscribe Classifier SGD Longformer 4096

## Model Overview

**Exscribe/Classifier_SGD_Longformer_4099** is a fine-tuned version of `allenai/longformer-base-4096` for text classification in document management: it classifies Spanish-language input documents into document type categories (`tipo_documento_codigo`). Developed by **Exscribe.co**, the model leverages the Longformer architecture to handle long texts (up to 4096 tokens) and is optimized for GPU environments such as the NVIDIA A100.

The model was trained on a Spanish dataset (`final.parquet`) containing 8,850 samples across 109 document type classes. Class imbalance is addressed with SMOTE (Synthetic Minority Over-sampling Technique) applied to the training set, improving performance on minority classes. Fine-tuning achieved a macro F1-score of **0.4855**, accuracy of **0.6096**, precision of **0.5212**, and recall of **0.5006** on a validation set of 1,770 samples.

### Key Features
- **Task**: Multi-class text classification for document type identification.
- **Language**: Spanish.
- **Input**: Raw text (`texto_entrada`) from documents.
- **Output**: Predicted document type code (`tipo_documento_codigo`), one of 109 classes.
- **Handling Long Texts**: Processes the first 4096-token chunk of the input text.
- **Class Imbalance**: Mitigated using SMOTE on the training set.
- **Hardware Optimization**: Fine-tuned with mixed precision (fp16) and gradient accumulation for A100 GPUs.

## Dataset

The training dataset (`final.parquet`) consists of 8,850 Spanish text samples, each labeled with a document type code (`tipo_documento_codigo`). The dataset exhibits significant class imbalance, with class frequencies ranging from 10 to 2,363 samples per class. It was split into:
- **Training set**: 7,080 samples before SMOTE (expanded to 9,903 after SMOTE).
- **Validation set**: 1,770 samples (untouched by SMOTE for unbiased evaluation).

SMOTE was applied to the training set to oversample minority classes (those with fewer than 30 samples) to a target of 40 samples per class, generating 2,823 synthetic samples. Single-instance classes were excluded from SMOTE to avoid resampling errors (SMOTE needs at least two examples of a class to interpolate between) and were included in the training set as-is.
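
The preprocessing script itself is not published with this card, but the oversampling step can be reproduced along the following lines with `imbalanced-learn`. This is a sketch, not the released code: the TF-IDF featurization, the split parameters, and the variable names are assumptions (SMOTE needs a numeric feature space, so the text has to be vectorized somehow); the column names `texto_entrada` and `tipo_documento_codigo` follow the card.

```python
# Hypothetical reconstruction of the SMOTE step described above; not the released pipeline.
from collections import Counter

import pandas as pd
from imblearn.over_sampling import SMOTE
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split

df = pd.read_parquet("final.parquet")
# 80/20 split (7,080 / 1,770 samples per the card).
train_df, val_df = train_test_split(df, test_size=0.2, random_state=42)

# SMOTE interpolates between numeric feature vectors, so the text must be vectorized first
# (TF-IDF is an assumption; the original pipeline may have used something else).
vectorizer = TfidfVectorizer(max_features=5000)
X_train = vectorizer.fit_transform(train_df["texto_entrada"])
y_train = train_df["tipo_documento_codigo"]

# Oversample classes with fewer than 30 training samples up to 40 samples each,
# skipping single-instance classes, which SMOTE cannot resample.
counts = Counter(y_train)
strategy = {label: 40 for label, n in counts.items() if 1 < n < 30}
smote = SMOTE(sampling_strategy=strategy, k_neighbors=1, random_state=42)  # k_neighbors < smallest oversampled class
X_resampled, y_resampled = smote.fit_resample(X_train, y_train)
print(f"Training samples after SMOTE: {X_resampled.shape[0]}")  # ~9,903 in this card
```

Only the training split is resampled; the validation split is evaluated on real samples only.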

## Model Training

### Base Model
The model is based on `allenai/longformer-base-4096`, a transformer designed for long-document processing whose sparse attention mechanism makes sequences of up to 4096 tokens tractable.

### Fine-Tuning
Fine-tuning was performed with the Hugging Face `Trainer` API using the following configuration (a sketch of the corresponding setup follows the list):
- **Epochs**: 3
- **Learning Rate**: 2e-5
- **Batch Size**: effective batch size of 16 (`per_device_train_batch_size=2`, `gradient_accumulation_steps=8`)
- **Optimizer**: AdamW with weight decay of 0.01
- **Warmup Steps**: 50
- **Mixed Precision**: fp16 for GPU efficiency
- **Evaluation Strategy**: per epoch, with the best model selected by macro F1-score
- **SMOTE**: applied to the training set to balance classes
- **Hardware**: NVIDIA A100 GPU
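
The exact training script is not included here; a minimal sketch of how these settings map onto `TrainingArguments` (the output paths follow the card, while the overall argument set is an assumption about the original script) might look like this:

```python
# Hypothetical sketch of the fine-tuning configuration described above; not the released script.
from transformers import (
    LongformerForSequenceClassification,
    LongformerTokenizer,
    TrainingArguments,
)

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
model = LongformerForSequenceClassification.from_pretrained(
    "allenai/longformer-base-4096",
    num_labels=109,  # one output per document type class
)

training_args = TrainingArguments(
    output_dir="./results",          # checkpoints, per the card
    logging_dir="./logs",            # TensorBoard logs, per the card
    num_train_epochs=3,
    learning_rate=2e-5,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,   # 2 x 8 = effective batch size of 16
    weight_decay=0.01,               # AdamW is the Trainer's default optimizer
    warmup_steps=50,
    fp16=True,                       # mixed precision on the A100
    eval_strategy="epoch",           # "evaluation_strategy" in older transformers releases
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="f1",      # macro F1, see the metric sketch below
)
```

The `Trainer` is then built from these arguments together with the tokenized (SMOTE-balanced) training set, the untouched validation set, and a `compute_metrics` callback such as the one sketched after the results below.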

The training process took approximately 159.09 minutes (9,545.32 seconds) and produced the following evaluation metrics on the validation set:
- **Eval Loss**: 1.5475
- **Eval Accuracy**: 0.6096
- **Eval F1 (macro)**: 0.4855
- **Eval Precision (macro)**: 0.5212
- **Eval Recall (macro)**: 0.5006
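
These macro-averaged metrics correspond to a `compute_metrics` callback along these lines (a sketch using `scikit-learn`; the exact implementation is not included with this card):

```python
# Sketch of a macro-averaged metric callback consistent with the numbers reported above.
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="macro", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1,
        "precision": precision,
        "recall": recall,
    }
```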

Training logs and checkpoints are saved in `./results`, with TensorBoard logs in `./logs`. The final model and tokenizer are saved in `./fine_tuned_longformer`.

## Usage

### Installation
To use the model, install the required dependencies:
```bash
pip install transformers torch pandas scikit-learn numpy
```

### Inference Example
Below is a Python script to load and use the fine-tuned model for inference:

```python
from transformers import LongformerTokenizer, LongformerForSequenceClassification
import torch
import numpy as np

# Load the model and tokenizer
model_path = "exscribe/classifier_sgd_longformer_4099"
tokenizer = LongformerTokenizer.from_pretrained(model_path)
model = LongformerForSequenceClassification.from_pretrained(model_path)

# Load label encoder classes (maps model output ids to document type codes)
label_encoder_classes = np.load("label_encoder_classes.npy", allow_pickle=True)
id2label = {i: int(label) for i, label in enumerate(label_encoder_classes)}

# Example text
text = "Your Spanish document text here..."

# Tokenize the input (only the first 4096 tokens are kept)
inputs = tokenizer(
    text,
    add_special_tokens=True,
    max_length=4096,
    padding="max_length",
    truncation=True,
    return_tensors="pt",
)

# Move the model and inputs to GPU if available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
inputs = {k: v.to(device) for k, v in inputs.items()}

# Perform inference
model.eval()
with torch.no_grad():
    outputs = model(**inputs)
    logits = outputs.logits
    predicted_id = torch.argmax(logits, dim=1).item()

# Map the predicted id to its document type code
predicted_label = id2label[predicted_id]
print(f"Predicted document type code: {predicted_label}")
```

### Notes
- The model processes only the first 4096 tokens of the input text. For longer documents, consider a chunking strategy (one possible approach is sketched after this list) or an alternative model.
- Ensure the input text is in Spanish, as the model was trained exclusively on Spanish data.
- The label encoder classes (`label_encoder_classes.npy`) must be available to map predicted ids to document type codes.
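
One possible chunking strategy, shown purely as an illustration (it is not part of the released code), is to classify overlapping 4096-token windows and average the logits across windows. The helper below reuses `tokenizer`, `model`, and `id2label` from the example above; the function name and the window/overlap parameters are arbitrary choices:

```python
# Illustrative chunked inference for documents longer than 4096 tokens (hypothetical helper).
import torch

def classify_long_text(text: str, max_length: int = 4096, overlap: int = 256) -> int:
    """Classify overlapping token windows and average the logits across them."""
    ids = tokenizer(text, add_special_tokens=False, return_tensors="pt")["input_ids"][0]
    window = max_length - 2          # leave room for the <s> and </s> special tokens
    step = window - overlap
    chunk_logits = []
    for start in range(0, max(len(ids), 1), step):
        # Rebuild a single-example batch for this window, re-adding the special tokens.
        chunk = torch.cat([
            torch.tensor([tokenizer.cls_token_id]),
            ids[start:start + window],
            torch.tensor([tokenizer.sep_token_id]),
        ]).unsqueeze(0).to(model.device)
        with torch.no_grad():
            chunk_logits.append(model(input_ids=chunk, attention_mask=torch.ones_like(chunk)).logits)
        if start + window >= len(ids):
            break
    # Average the per-window logits and pick the highest-scoring class id.
    return torch.argmax(torch.cat(chunk_logits).mean(dim=0)).item()

long_prediction = id2label[classify_long_text("Your long Spanish document text here...")]
print(f"Predicted document type code: {long_prediction}")
```

Averaging logits is only one aggregation choice; majority voting over per-window predictions is an equally reasonable alternative.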

## Limitations
- **First Chunk Limitation**: The model uses only the first 4096-token chunk, which may miss relevant information in longer documents.
- **Class Imbalance**: While SMOTE improves minority-class performance, some classes (e.g., single-instance classes) may still be underrepresented.
- **Macro Metrics**: The reported F1-score (0.4855) is macro-averaged, so it treats all classes equally and can mask performance disparities across imbalanced classes (a per-class report sketch follows this list).
- **Hardware Requirements**: Inference on CPU is possible but slower; a GPU is recommended for efficiency.
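
To see how performance is distributed across the 109 classes rather than relying on the macro average alone, a per-class report can be produced from validation predictions. The arrays below are placeholders standing in for the true and predicted class ids of the validation set:

```python
# Per-class breakdown to complement the macro-averaged metrics (illustrative placeholders).
import numpy as np
from sklearn.metrics import classification_report

val_labels = np.array([0, 1, 1, 2])  # placeholder: true class ids for the validation set
val_preds = np.array([0, 1, 2, 2])   # placeholder: the model's predicted class ids
print(classification_report(val_labels, val_preds, zero_division=0))
```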

## License
This model is licensed under the **Creative Commons Attribution-NonCommercial 3.0 (CC BY-NC 3.0)** license. You are free to share and adapt the model for non-commercial purposes, provided appropriate credit is given to Exscribe.co.

## Author
- **Organization**: Exscribe.co
- **Contact**: Reach out via Hugging Face (https://huggingface.co/exscribe)

## Citation
If you use this model in your work, please cite:
```bibtex
@misc{exscribe_classifier_sgd_longformer_4099,
  author    = {Exscribe.co},
  title     = {Classifier SGD Longformer 4099: A Fine-Tuned Model for Spanish Document Type Classification},
  year      = {2025},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/exscribe/classifier_sgd_longformer_4099}
}
```
161 |
+
|
162 |
+
## Acknowledgments
|
163 |
+
- Built upon the `allenai/longformer-base-4096` model.
|
164 |
+
- Utilizes the Hugging Face `transformers` library and `Trainer` API.
|
165 |
+
- Thanks to the open-source community for tools like `imbalanced-learn` and `scikit-learn`.
|