Dataset: undertheseanlp/UTS2017_Bank
Tasks: Text Classification
Modalities: Text
Formats: json
Languages: Vietnamese
Size: 1K - 10K
License: Apache 2.0
Vu Anh committed
Commit: ff2289d · Parent(s): 1dbfe02
Add comprehensive Vietnamese banking dataset with 3 subsets

- Added 2,471 annotated examples (1,977 train, 494 test)
- Created 3 task-specific subsets:
  * Classification: 14 banking aspect categories
  * Sentiment: 3-class sentiment analysis
  * Aspect-Sentiment: Fine-grained aspect-based sentiment
- Added preprocessing and statistics scripts
- Updated README with detailed dataset documentation
- Added .gitignore for Python files
- .gitignore +10 -0
- .python-version +1 -0
- README.md +154 -42
- data/aspect_sentiment/test.jsonl +0 -0
- data/aspect_sentiment/train.jsonl +0 -0
- data/classification/test.jsonl +0 -0
- data/classification/train.jsonl +0 -0
- data/sentiment/test.jsonl +0 -0
- data/sentiment/train.jsonl +0 -0
- data/test.txt +0 -2
- data/train.txt +0 -2
- dataset_statistics.py +227 -0
- preprocess_data.py +143 -0
- pyproject.toml +8 -0
- raw_data/test.txt +0 -0
- raw_data/train.txt +0 -0
- statistics_report.md +262 -0
- uv.lock +0 -0
.gitignore
ADDED

@@ -0,0 +1,10 @@
+__pycache__/
+*.pyc
+*.pyo
+*.pyd
+.Python
+.venv/
+venv/
+ENV/
+env/
+.DS_Store
.python-version
ADDED

@@ -0,0 +1 @@
+3.13
README.md
CHANGED

---
tags:
- banking
- finance
- nlp
- sentiment-analysis
- aspect-based-sentiment-analysis
task_categories:
- text-classification
- sentiment-classification
size_categories:
- 1K<n<10K
configs:
- config_name: classification
  data_files:
  - split: train
    path: data/classification/train.jsonl
  - split: test
    path: data/classification/test.jsonl
- config_name: sentiment
  data_files:
  - split: train
    path: data/sentiment/train.jsonl
  - split: test
    path: data/sentiment/test.jsonl
- config_name: aspect_sentiment
  data_files:
  - split: train
    path: data/aspect_sentiment/train.jsonl
  - split: test
    path: data/aspect_sentiment/test.jsonl
---

# UTS2017_Bank Dataset

### Dataset Summary

The UTS2017_Bank dataset is a comprehensive Vietnamese banking domain dataset containing customer feedback and reviews about banking services. It contains **2,471 annotated examples** (1,977 train, 494 test) with both aspect labels and sentiment annotations. The dataset supports multiple NLP tasks, including aspect classification, sentiment analysis, and aspect-based sentiment analysis in the Vietnamese banking sector.

### Supported Tasks

The dataset is provided in three subsets for different tasks:

1. **Classification** (`classification`): Banking aspect classification
   - 14 aspect categories (CUSTOMER_SUPPORT, TRADEMARK, LOAN, etc.)
   - Train: 1,977 examples | Test: 494 examples

2. **Sentiment Analysis** (`sentiment`): Overall sentiment classification
   - 3 sentiment classes: positive (61.3%), negative (37.6%), neutral (1.2%)
   - Train: 1,977 examples | Test: 494 examples

3. **Aspect-Based Sentiment** (`aspect_sentiment`): Fine-grained aspect-sentiment pairs
   - Multi-aspect support (1.5% of examples have multiple aspects)
   - Train: 1,977 examples | Test: 494 examples

### Languages

The dataset is exclusively in Vietnamese (`vi`).

### Data Instances

Each subset has a different structure:

**Classification subset:**
```json
{
  "text": "Hotline khó gọi quá gọi mãi ko thưa máy à",
  "label": "CUSTOMER_SUPPORT"
}
```

**Sentiment subset:**
```json
{
  "text": "Dịch vụ tiện dụng quá!",
  "sentiment": "positive"
}
```

**Aspect-Sentiment subset:**
```json
{
  "text": "Mình xài cái thể VISA của BIDV hạn mức 100tr...",
  "aspects": [
    {"aspect": "CARD", "sentiment": "negative"},
    {"aspect": "CUSTOMER_SUPPORT", "sentiment": "negative"}
  ]
}
```

### Data Fields

- **Classification subset:**
  - `text` (string): Customer feedback text in Vietnamese
  - `label` (string): Banking aspect category

- **Sentiment subset:**
  - `text` (string): Customer feedback text in Vietnamese
  - `sentiment` (string): Overall sentiment (positive/negative/neutral)

- **Aspect-Sentiment subset:**
  - `text` (string): Customer feedback text in Vietnamese
  - `aspects` (list): List of aspect-sentiment pairs

### Aspect Categories

The dataset contains 14 banking aspect categories:

| Aspect | Train Count | Test Count | Description |
|--------|------------|------------|-------------|
| CUSTOMER_SUPPORT | 774 (39.2%) | 338 (68.4%) | Customer service quality |
| TRADEMARK | 699 (35.4%) | 41 (8.3%) | Bank brand and reputation |
| LOAN | 74 (3.7%) | 3 (0.6%) | Loan services |
| INTERNET_BANKING | 70 (3.5%) | 32 (6.5%) | Online banking services |
| CARD | 66 (3.3%) | 44 (8.9%) | Credit/debit card services |
| INTEREST_RATE | 60 (3.0%) | 6 (1.2%) | Interest rates |
| PROMOTION | 53 (2.7%) | 9 (1.8%) | Promotional offers |
| OTHER | 69 (3.5%) | 12 (2.4%) | Other topics |
| DISCOUNT | 41 (2.1%) | 2 (0.4%) | Discounts and benefits |
| MONEY_TRANSFER | 34 (1.7%) | 2 (0.4%) | Money transfer services |
| PAYMENT | 15 (0.8%) | 2 (0.4%) | Payment services |
| SAVING | 13 (0.7%) | 3 (0.6%) | Savings accounts |
| ACCOUNT | 5 (0.3%) | 0 (0%) | Account management |
| SECURITY | 4 (0.2%) | 0 (0%) | Security concerns |

### Data Statistics

- **Text Length:**
  - Train: Average 23.8 words (median 14, range 1-816)
  - Test: Average 30.6 words (median 18, range 1-411)

- **Sentiment Distribution:**
  - Train: Positive 61.3%, Negative 37.6%, Neutral 1.2%
  - Test: Negative 60.9%, Positive 37.4%, Neutral 1.6%
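These distributions can be recomputed straight from the committed JSONL files; a minimal sketch using `collections.Counter`, mirroring the logic in `dataset_statistics.py` added later in this commit:

```python
import json
from collections import Counter

# Tally aspect labels in the committed classification training split
with open("data/classification/train.jsonl", encoding="utf-8") as f:
    labels = Counter(json.loads(line)["label"] for line in f)

print(labels.most_common(3))  # expected leaders: CUSTOMER_SUPPORT, TRADEMARK, LOAN
```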
## Usage

### Loading the Dataset

```python
from datasets import load_dataset

# Load classification subset
dataset_clf = load_dataset("undertheseanlp/UTS2017_Bank", "classification")

# Load sentiment subset
dataset_sent = load_dataset("undertheseanlp/UTS2017_Bank", "sentiment")

# Load aspect-sentiment subset
dataset_aspect = load_dataset("undertheseanlp/UTS2017_Bank", "aspect_sentiment")
```
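As a quick check after loading, indexing a split returns a plain Python dict; a minimal sketch, assuming the field layout documented above:

```python
from datasets import load_dataset

# Load the classification subset and peek at one training example
ds = load_dataset("undertheseanlp/UTS2017_Bank", "classification")
print(ds)                    # DatasetDict with "train" and "test" splits

example = ds["train"][0]     # e.g. {"text": "...", "label": "..."}
print(example["text"], "->", example["label"])
```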
### Data Preprocessing

A preprocessing script is available in the repository:

```python
from preprocess_data import process_banking_data

# Process raw data into three subsets
process_banking_data("raw_data/train.txt", "data")
```
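The raw files use fastText-style prefixes of the form `__label__ASPECT#sentiment` ahead of the feedback text; `process_banking_data` strips the prefixes and emits one record per subset. A sketch with a hypothetical input line:

```python
# Hypothetical raw line (format matched by the regex in preprocess_data.py):
raw = "__label__CUSTOMER_SUPPORT#negative Hotline khó gọi quá"

# From this line the script derives:
#   classification:   {"text": "Hotline khó gọi quá", "label": "CUSTOMER_SUPPORT"}
#   sentiment:        {"text": "Hotline khó gọi quá", "sentiment": "negative"}
#   aspect_sentiment: {"text": "Hotline khó gọi quá",
#                      "aspects": [{"aspect": "CUSTOMER_SUPPORT", "sentiment": "negative"}]}
```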
## Dataset Creation

### Curation Rationale

This dataset addresses the lack of Vietnamese NLP resources in the banking sector, supporting the development of specialized models for:
- Customer feedback analysis
- Service quality monitoring
- Banking chatbots and virtual assistants
- Financial sentiment analysis

### Source Data

The dataset contains real customer feedback and reviews about Vietnamese banking services, collected from public sources and banking communications. All texts have been anonymized to remove personal information.

### Annotations

The dataset includes:
- 14 banking-specific aspect categories
- 3-class sentiment labels (positive, negative, neutral)
- Support for multi-aspect annotations (1.5% of examples)

### Personal and Sensitive Information

All personal information, account numbers, and sensitive financial data have been removed. The dataset contains only general banking terminology and anonymized feedback.

## Considerations for Using the Data

### Social Impact

This dataset enables:
- Better understanding of customer needs in Vietnamese banking
- Improved customer service through automated analysis
- Enhanced financial inclusion through better language support

### Discussion of Biases

- **Language style**: Primarily informal customer feedback language
- **Sentiment imbalance**: More positive examples in the training set
- **Aspect distribution**: Heavy skew towards CUSTOMER_SUPPORT and TRADEMARK

### Known Limitations

- Limited neutral sentiment examples (1-2%)
- Aspect distribution varies between the train and test sets
- Multi-aspect examples are rare (1.5%)
- Domain-specific to the banking sector

## Additional Information

### Dataset Curators

Created and maintained by the UnderTheSea NLP team, focusing on Vietnamese NLP resources and tools development.

### Licensing Information

Released under the Apache 2.0 License.

### Citation Information

```bibtex
@dataset{uts2017_bank,
  title={UTS2017_Bank: A Vietnamese Banking Domain Dataset for Aspect-Based Sentiment Analysis},
  author={UnderTheSea NLP},
  year={2017},
  publisher={Hugging Face}
}
```

### Contributions

Thanks to the UnderTheSea NLP community for creating and maintaining this dataset.

## Contact

For questions or contributions:
- Open an issue on the [dataset repository](https://huggingface.co/datasets/undertheseanlp/UTS2017_Bank/discussions)
- Visit [UnderTheSea NLP](https://github.com/undertheseanlp)

## Updates and Versions

- **Version 1.0.0** (Current): Initial release with 2,471 annotated banking feedback examples in three task-specific subsets
data/aspect_sentiment/test.jsonl
ADDED
The diff for this file is too large to render. See raw diff

data/aspect_sentiment/train.jsonl
ADDED
The diff for this file is too large to render. See raw diff

data/classification/test.jsonl
ADDED
The diff for this file is too large to render. See raw diff

data/classification/train.jsonl
ADDED
The diff for this file is too large to render. See raw diff

data/sentiment/test.jsonl
ADDED
The diff for this file is too large to render. See raw diff

data/sentiment/train.jsonl
ADDED
The diff for this file is too large to render. See raw diff
data/test.txt
DELETED

@@ -1,2 +0,0 @@
-Chuyển khoản nhanh 24/7
-Vay tín chấp không thế chấp
data/train.txt
DELETED

@@ -1,2 +0,0 @@
-Ngân hàng Nhà nước Việt Nam
-Tài khoản tiết kiệm lãi suất cao
dataset_statistics.py
ADDED

@@ -0,0 +1,227 @@
import json
from pathlib import Path
from collections import Counter
import statistics as stats


def load_jsonl(file_path):
    """Load JSONL file and return list of items."""
    items = []
    with open(file_path, 'r', encoding='utf-8') as f:
        for line in f:
            items.append(json.loads(line.strip()))
    return items


def calculate_text_statistics(items):
    """Calculate statistics for text fields."""
    text_lengths = [len(item['text'].split()) for item in items]
    char_lengths = [len(item['text']) for item in items]

    return {
        'avg_words': stats.mean(text_lengths),
        'min_words': min(text_lengths),
        'max_words': max(text_lengths),
        'median_words': stats.median(text_lengths),
        'avg_chars': stats.mean(char_lengths),
        'min_chars': min(char_lengths),
        'max_chars': max(char_lengths),
        'median_chars': stats.median(char_lengths),
    }


def analyze_classification_subset():
    """Analyze classification subset statistics."""
    print("\n" + "="*60)
    print("CLASSIFICATION SUBSET ANALYSIS")
    print("="*60)

    for split in ['train', 'test']:
        file_path = Path(f'data/classification/{split}.jsonl')
        items = load_jsonl(file_path)

        print(f"\n{split.upper()} Split:")
        print(f"  Total examples: {len(items)}")

        # Label distribution
        label_counter = Counter(item['label'] for item in items)
        print("\n  Label Distribution:")
        for label, count in label_counter.most_common():
            percentage = (count / len(items)) * 100
            print(f"    {label:20s}: {count:4d} ({percentage:5.1f}%)")

        # Text statistics
        text_stats = calculate_text_statistics(items)
        print("\n  Text Statistics:")
        print(f"    Words per text - Avg: {text_stats['avg_words']:.1f}, "
              f"Min: {text_stats['min_words']}, Max: {text_stats['max_words']}, "
              f"Median: {text_stats['median_words']:.1f}")
        print(f"    Chars per text - Avg: {text_stats['avg_chars']:.1f}, "
              f"Min: {text_stats['min_chars']}, Max: {text_stats['max_chars']}, "
              f"Median: {text_stats['median_chars']:.1f}")


def analyze_sentiment_subset():
    """Analyze sentiment subset statistics."""
    print("\n" + "="*60)
    print("SENTIMENT SUBSET ANALYSIS")
    print("="*60)

    for split in ['train', 'test']:
        file_path = Path(f'data/sentiment/{split}.jsonl')
        items = load_jsonl(file_path)

        print(f"\n{split.upper()} Split:")
        print(f"  Total examples: {len(items)}")

        # Sentiment distribution
        sentiment_counter = Counter(item['sentiment'] for item in items)
        print("\n  Sentiment Distribution:")
        for sentiment, count in sentiment_counter.most_common():
            percentage = (count / len(items)) * 100
            print(f"    {sentiment:10s}: {count:4d} ({percentage:5.1f}%)")

        # Text statistics
        text_stats = calculate_text_statistics(items)
        print("\n  Text Statistics:")
        print(f"    Words per text - Avg: {text_stats['avg_words']:.1f}, "
              f"Min: {text_stats['min_words']}, Max: {text_stats['max_words']}, "
              f"Median: {text_stats['median_words']:.1f}")


def analyze_aspect_sentiment_subset():
    """Analyze aspect-sentiment subset statistics."""
    print("\n" + "="*60)
    print("ASPECT-SENTIMENT SUBSET ANALYSIS")
    print("="*60)

    for split in ['train', 'test']:
        file_path = Path(f'data/aspect_sentiment/{split}.jsonl')
        items = load_jsonl(file_path)

        print(f"\n{split.upper()} Split:")
        print(f"  Total examples: {len(items)}")

        # Multi-aspect analysis
        single_aspect = sum(1 for item in items if len(item['aspects']) == 1)
        multi_aspect = sum(1 for item in items if len(item['aspects']) > 1)
        max_aspects = max(len(item['aspects']) for item in items)

        print("\n  Aspect Coverage:")
        print(f"    Single aspect: {single_aspect} ({(single_aspect/len(items))*100:.1f}%)")
        print(f"    Multi-aspect: {multi_aspect} ({(multi_aspect/len(items))*100:.1f}%)")
        print(f"    Max aspects per example: {max_aspects}")

        # Aspect-sentiment pair distribution
        aspect_sentiment_pairs = []
        for item in items:
            for asp in item['aspects']:
                aspect_sentiment_pairs.append(f"{asp['aspect']}#{asp['sentiment']}")

        pair_counter = Counter(aspect_sentiment_pairs)
        print("\n  Top 10 Aspect-Sentiment Pairs:")
        for pair, count in pair_counter.most_common(10):
            aspect, sentiment = pair.split('#')
            percentage = (count / len(aspect_sentiment_pairs)) * 100
            print(f"    {aspect:20s} + {sentiment:8s}: {count:4d} ({percentage:5.1f}%)")

        # Overall aspect distribution
        aspect_counter = Counter()
        sentiment_by_aspect = {}

        for item in items:
            for asp in item['aspects']:
                aspect = asp['aspect']
                sentiment = asp['sentiment']
                aspect_counter[aspect] += 1

                if aspect not in sentiment_by_aspect:
                    sentiment_by_aspect[aspect] = Counter()
                sentiment_by_aspect[aspect][sentiment] += 1

        print("\n  Aspect Distribution with Sentiment Breakdown:")
        for aspect, count in aspect_counter.most_common():
            percentage = (count / sum(aspect_counter.values())) * 100
            print(f"\n    {aspect:20s}: {count:4d} ({percentage:5.1f}%)")

            # Sentiment breakdown for this aspect
            sentiments = sentiment_by_aspect[aspect]
            total_aspect = sum(sentiments.values())
            for sentiment in ['positive', 'negative', 'neutral']:
                if sentiment in sentiments:
                    sent_count = sentiments[sentiment]
                    sent_pct = (sent_count / total_aspect) * 100
                    print(f"      - {sentiment:8s}: {sent_count:3d} ({sent_pct:5.1f}%)")


def generate_summary_statistics():
    """Generate overall summary statistics."""
    print("\n" + "="*60)
    print("DATASET SUMMARY")
    print("="*60)

    total_train = len(load_jsonl('data/classification/train.jsonl'))
    total_test = len(load_jsonl('data/classification/test.jsonl'))

    print("\nTotal Dataset Size:")
    print(f"  Train: {total_train} examples")
    print(f"  Test: {total_test} examples")
    print(f"  Total: {total_train + total_test} examples")
    print(f"  Train/Test Ratio: {total_train/total_test:.2f}:1")

    # Available subsets
    print("\nAvailable Subsets:")
    print("  1. Classification: Text → Label (14 banking aspect categories)")
    print("  2. Sentiment: Text → Sentiment (positive/negative/neutral)")
    print("  3. Aspect-Sentiment: Text → Multiple (Aspect, Sentiment) pairs")

    # Data format
    print("\nData Format:")
    print("  - All subsets use JSONL format")
    print("  - UTF-8 encoding")
    print("  - Vietnamese language text")

    # Use cases
    print("\nRecommended Use Cases:")
    print("  - Classification: Banking domain text classification")
    print("  - Sentiment: Customer feedback sentiment analysis")
    print("  - Aspect-Sentiment: Fine-grained aspect-based sentiment analysis")


def save_statistics_report():
    """Save statistics to a markdown file."""
    import sys
    from io import StringIO

    # Capture output
    old_stdout = sys.stdout
    sys.stdout = buffer = StringIO()

    # Run all analyses
    generate_summary_statistics()
    analyze_classification_subset()
    analyze_sentiment_subset()
    analyze_aspect_sentiment_subset()

    # Get output
    output = buffer.getvalue()
    sys.stdout = old_stdout

    # Save to file
    with open('statistics_report.md', 'w', encoding='utf-8') as f:
        f.write("# UTS2017_Bank Dataset Statistics Report\n\n")
        f.write("```\n")
        f.write(output)
        f.write("```\n")

    print("Statistics report saved to statistics_report.md")


if __name__ == "__main__":
    generate_summary_statistics()
    analyze_classification_subset()
    analyze_sentiment_subset()
    analyze_aspect_sentiment_subset()

    print("\n" + "="*60)
    save_statistics_report()
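Running `python dataset_statistics.py` prints all four analyses and then calls `save_statistics_report()`, which re-runs them with stdout captured and writes the result to `statistics_report.md` (added below).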
preprocess_data.py
ADDED

@@ -0,0 +1,143 @@
import re
import json
from pathlib import Path


def process_banking_data(input_file, output_dir):
    """
    Process banking data into three subsets:
    1. Classification: text -> label (main aspect only)
    2. Sentiment: text -> sentiment (overall sentiment)
    3. Aspect-Sentiment: text -> aspect, sentiment pairs
    """

    classification_data = []
    sentiment_data = []
    aspect_sentiment_data = []

    with open(input_file, 'r', encoding='utf-8') as f:
        for line_num, line in enumerate(f, 1):
            line = line.strip()
            if not line:
                continue

            # Extract labels and sentiments
            label_pattern = r'__label__([A-Z_]+)#(positive|negative|neutral)'
            matches = re.findall(label_pattern, line)

            # Remove labels from text
            text = re.sub(r'__label__[A-Z_]+#(positive|negative|neutral)\s*', '', line).strip()

            if not text or not matches:
                print(f"Skipping line {line_num}: No valid data found")
                continue

            # Extract aspects and sentiments
            aspects = [m[0] for m in matches]
            sentiments = [m[1] for m in matches]

            # 1. Classification subset (first aspect as main label)
            classification_data.append({
                "text": text,
                "label": aspects[0]
            })

            # 2. Sentiment-only subset (overall sentiment)
            # If multiple sentiments, use the first one or most frequent
            overall_sentiment = sentiments[0]
            if len(set(sentiments)) == 1:
                overall_sentiment = sentiments[0]
            else:
                # Use most common sentiment
                sentiment_counts = {}
                for s in sentiments:
                    sentiment_counts[s] = sentiment_counts.get(s, 0) + 1
                overall_sentiment = max(sentiment_counts, key=sentiment_counts.get)

            sentiment_data.append({
                "text": text,
                "sentiment": overall_sentiment
            })

            # 3. Aspect-Sentiment subset
            aspect_sentiment_pairs = []
            for aspect, sentiment in zip(aspects, sentiments):
                aspect_sentiment_pairs.append({
                    "aspect": aspect,
                    "sentiment": sentiment
                })

            aspect_sentiment_data.append({
                "text": text,
                "aspects": aspect_sentiment_pairs
            })

    # Save the three subsets
    output_dir = Path(output_dir)
    output_dir.mkdir(parents=True, exist_ok=True)

    # Determine if this is train or test based on input filename
    split = "train" if "train" in str(input_file) else "test"

    # Save classification subset
    classification_file = output_dir / "classification" / f"{split}.jsonl"
    classification_file.parent.mkdir(parents=True, exist_ok=True)
    with open(classification_file, 'w', encoding='utf-8') as f:
        for item in classification_data:
            f.write(json.dumps(item, ensure_ascii=False) + '\n')
    print(f"Saved {len(classification_data)} classification examples to {classification_file}")

    # Save sentiment subset
    sentiment_file = output_dir / "sentiment" / f"{split}.jsonl"
    sentiment_file.parent.mkdir(parents=True, exist_ok=True)
    with open(sentiment_file, 'w', encoding='utf-8') as f:
        for item in sentiment_data:
            f.write(json.dumps(item, ensure_ascii=False) + '\n')
    print(f"Saved {len(sentiment_data)} sentiment examples to {sentiment_file}")

    # Save aspect-sentiment subset
    aspect_sentiment_file = output_dir / "aspect_sentiment" / f"{split}.jsonl"
    aspect_sentiment_file.parent.mkdir(parents=True, exist_ok=True)
    with open(aspect_sentiment_file, 'w', encoding='utf-8') as f:
        for item in aspect_sentiment_data:
            f.write(json.dumps(item, ensure_ascii=False) + '\n')
    print(f"Saved {len(aspect_sentiment_data)} aspect-sentiment examples to {aspect_sentiment_file}")

    # Print statistics
    print("\n=== Statistics ===")
    print(f"Total examples processed: {len(classification_data)}")

    # Label distribution
    label_counts = {}
    for item in classification_data:
        label = item['label']
        label_counts[label] = label_counts.get(label, 0) + 1

    print("\nLabel distribution:")
    for label, count in sorted(label_counts.items(), key=lambda x: x[1], reverse=True):
        print(f"  {label}: {count}")

    # Sentiment distribution
    sentiment_counts = {}
    for item in sentiment_data:
        sentiment = item['sentiment']
        sentiment_counts[sentiment] = sentiment_counts.get(sentiment, 0) + 1

    print("\nSentiment distribution:")
    for sentiment, count in sorted(sentiment_counts.items(), key=lambda x: x[1], reverse=True):
        print(f"  {sentiment}: {count}")

    # Multi-aspect examples
    multi_aspect_count = sum(1 for item in aspect_sentiment_data if len(item['aspects']) > 1)
    print(f"\nExamples with multiple aspects: {multi_aspect_count}")


if __name__ == "__main__":
    # Process train data
    print("Processing train data...")
    process_banking_data("raw_data/train.txt", "data")

    # Process test data
    print("\n" + "="*50)
    print("Processing test data...")
    process_banking_data("raw_data/test.txt", "data")
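Running `python preprocess_data.py` rebuilds all six JSONL files under `data/` from `raw_data/train.txt` and `raw_data/test.txt`; lines with no parseable `__label__` prefix are skipped with a per-line warning.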
pyproject.toml
ADDED

@@ -0,0 +1,8 @@
[project]
name = "uts2017-bank"
version = "1.0.0"
description = "Underthesea 2017 Bank Dataset"
requires-python = ">=3.13"
dependencies = [
    "datasets>=4.1.1",
]
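Given the committed `uv.lock` (added at the end of this commit), the environment can presumably be reproduced with `uv sync`, pulling in the single `datasets` dependency under Python 3.13.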
raw_data/test.txt
ADDED
The diff for this file is too large to render. See raw diff

raw_data/train.txt
ADDED
The diff for this file is too large to render. See raw diff
statistics_report.md
ADDED

@@ -0,0 +1,262 @@
# UTS2017_Bank Dataset Statistics Report

```
============================================================
DATASET SUMMARY
============================================================

Total Dataset Size:
  Train: 1977 examples
  Test: 494 examples
  Total: 2471 examples
  Train/Test Ratio: 4.00:1

Available Subsets:
  1. Classification: Text → Label (14 banking aspect categories)
  2. Sentiment: Text → Sentiment (positive/negative/neutral)
  3. Aspect-Sentiment: Text → Multiple (Aspect, Sentiment) pairs

Data Format:
  - All subsets use JSONL format
  - UTF-8 encoding
  - Vietnamese language text

Recommended Use Cases:
  - Classification: Banking domain text classification
  - Sentiment: Customer feedback sentiment analysis
  - Aspect-Sentiment: Fine-grained aspect-based sentiment analysis

============================================================
CLASSIFICATION SUBSET ANALYSIS
============================================================

TRAIN Split:
  Total examples: 1977

  Label Distribution:
    CUSTOMER_SUPPORT    :  774 ( 39.2%)
    TRADEMARK           :  699 ( 35.4%)
    LOAN                :   74 (  3.7%)
    INTERNET_BANKING    :   70 (  3.5%)
    OTHER               :   69 (  3.5%)
    CARD                :   66 (  3.3%)
    INTEREST_RATE       :   60 (  3.0%)
    PROMOTION           :   53 (  2.7%)
    DISCOUNT            :   41 (  2.1%)
    MONEY_TRANSFER      :   34 (  1.7%)
    PAYMENT             :   15 (  0.8%)
    SAVING              :   13 (  0.7%)
    ACCOUNT             :    5 (  0.3%)
    SECURITY            :    4 (  0.2%)

  Text Statistics:
    Words per text - Avg: 23.8, Min: 1, Max: 816, Median: 14.0
    Chars per text - Avg: 106.3, Min: 3, Max: 3787, Median: 62.0

TEST Split:
  Total examples: 494

  Label Distribution:
    CUSTOMER_SUPPORT    :  338 ( 68.4%)
    CARD                :   44 (  8.9%)
    TRADEMARK           :   41 (  8.3%)
    INTERNET_BANKING    :   32 (  6.5%)
    OTHER               :   12 (  2.4%)
    PROMOTION           :    9 (  1.8%)
    INTEREST_RATE       :    6 (  1.2%)
    LOAN                :    3 (  0.6%)
    SAVING              :    3 (  0.6%)
    MONEY_TRANSFER      :    2 (  0.4%)
    DISCOUNT            :    2 (  0.4%)
    PAYMENT             :    2 (  0.4%)

  Text Statistics:
    Words per text - Avg: 30.6, Min: 1, Max: 411, Median: 18.0
    Chars per text - Avg: 134.8, Min: 4, Max: 1800, Median: 80.0

============================================================
SENTIMENT SUBSET ANALYSIS
============================================================

TRAIN Split:
  Total examples: 1977

  Sentiment Distribution:
    positive  : 1211 ( 61.3%)
    negative  :  743 ( 37.6%)
    neutral   :   23 (  1.2%)

  Text Statistics:
    Words per text - Avg: 23.8, Min: 1, Max: 816, Median: 14.0

TEST Split:
  Total examples: 494

  Sentiment Distribution:
    negative  :  301 ( 60.9%)
    positive  :  185 ( 37.4%)
    neutral   :    8 (  1.6%)

  Text Statistics:
    Words per text - Avg: 30.6, Min: 1, Max: 411, Median: 18.0

============================================================
ASPECT-SENTIMENT SUBSET ANALYSIS
============================================================

TRAIN Split:
  Total examples: 1977

  Aspect Coverage:
    Single aspect: 1948 (98.5%)
    Multi-aspect: 29 (1.5%)
    Max aspects per example: 4

  Top 10 Aspect-Sentiment Pairs:
    TRADEMARK            + positive:  652 ( 32.5%)
    CUSTOMER_SUPPORT     + positive:  409 ( 20.4%)
    CUSTOMER_SUPPORT     + negative:  370 ( 18.4%)
    INTEREST_RATE        + negative:   63 (  3.1%)
    LOAN                 + negative:   61 (  3.0%)
    INTERNET_BANKING     + negative:   57 (  2.8%)
    CARD                 + negative:   54 (  2.7%)
    TRADEMARK            + negative:   47 (  2.3%)
    OTHER                + negative:   35 (  1.7%)
    PROMOTION            + positive:   33 (  1.6%)

  Aspect Distribution with Sentiment Breakdown:

    CUSTOMER_SUPPORT    :  784 ( 39.0%)
      - positive:  409 ( 52.2%)
      - negative:  370 ( 47.2%)
      - neutral :    5 (  0.6%)

    TRADEMARK           :  699 ( 34.8%)
      - positive:  652 ( 93.3%)
      - negative:   47 (  6.7%)

    INTERNET_BANKING    :   79 (  3.9%)
      - positive:   20 ( 25.3%)
      - negative:   57 ( 72.2%)
      - neutral :    2 (  2.5%)

    LOAN                :   74 (  3.7%)
      - positive:   13 ( 17.6%)
      - negative:   61 ( 82.4%)

    OTHER               :   69 (  3.4%)
      - positive:   30 ( 43.5%)
      - negative:   35 ( 50.7%)
      - neutral :    4 (  5.8%)

    INTEREST_RATE       :   68 (  3.4%)
      - positive:    4 (  5.9%)
      - negative:   63 ( 92.6%)
      - neutral :    1 (  1.5%)

    CARD                :   67 (  3.3%)
      - positive:   12 ( 17.9%)
      - negative:   54 ( 80.6%)
      - neutral :    1 (  1.5%)

    PROMOTION           :   53 (  2.6%)
      - positive:   33 ( 62.3%)
      - negative:   17 ( 32.1%)
      - neutral :    3 (  5.7%)

    DISCOUNT            :   42 (  2.1%)
      - positive:   20 ( 47.6%)
      - negative:   18 ( 42.9%)
      - neutral :    4 (  9.5%)

    MONEY_TRANSFER      :   36 (  1.8%)
      - positive:    5 ( 13.9%)
      - negative:   31 ( 86.1%)

    PAYMENT             :   15 (  0.7%)
      - positive:   11 ( 73.3%)
      - negative:    4 ( 26.7%)

    SAVING              :   13 (  0.6%)
      - positive:    6 ( 46.2%)
      - negative:    6 ( 46.2%)
      - neutral :    1 (  7.7%)

    ACCOUNT             :    5 (  0.2%)
      - negative:    5 (100.0%)

    SECURITY            :    5 (  0.2%)
      - positive:    1 ( 20.0%)
      - negative:    2 ( 40.0%)
      - neutral :    2 ( 40.0%)

TEST Split:
  Total examples: 494

  Aspect Coverage:
    Single aspect: 492 (99.6%)
    Multi-aspect: 2 (0.4%)
    Max aspects per example: 3

  Top 10 Aspect-Sentiment Pairs:
    CUSTOMER_SUPPORT     + negative:  215 ( 43.3%)
    CUSTOMER_SUPPORT     + positive:  122 ( 24.5%)
    CARD                 + negative:   37 (  7.4%)
    TRADEMARK            + positive:   35 (  7.0%)
    INTERNET_BANKING     + negative:   27 (  5.4%)
    PROMOTION            + positive:    7 (  1.4%)
    TRADEMARK            + negative:    6 (  1.2%)
    OTHER                + positive:    6 (  1.2%)
    INTERNET_BANKING     + positive:    6 (  1.2%)
    INTEREST_RATE        + negative:    6 (  1.2%)

  Aspect Distribution with Sentiment Breakdown:

    CUSTOMER_SUPPORT    :  339 ( 68.2%)
      - positive:  122 ( 36.0%)
      - negative:  215 ( 63.4%)
      - neutral :    2 (  0.6%)

    CARD                :   44 (  8.9%)
      - positive:    5 ( 11.4%)
      - negative:   37 ( 84.1%)
      - neutral :    2 (  4.5%)

    TRADEMARK           :   41 (  8.2%)
      - positive:   35 ( 85.4%)
      - negative:    6 ( 14.6%)

    INTERNET_BANKING    :   33 (  6.6%)
      - positive:    6 ( 18.2%)
      - negative:   27 ( 81.8%)

    OTHER               :   12 (  2.4%)
      - positive:    6 ( 50.0%)
      - negative:    3 ( 25.0%)
      - neutral :    3 ( 25.0%)

    PROMOTION           :    9 (  1.8%)
      - positive:    7 ( 77.8%)
      - negative:    1 ( 11.1%)
      - neutral :    1 ( 11.1%)

    INTEREST_RATE       :    6 (  1.2%)
      - negative:    6 (100.0%)

    LOAN                :    3 (  0.6%)
      - negative:    3 (100.0%)

    MONEY_TRANSFER      :    3 (  0.6%)
      - negative:    3 (100.0%)

    SAVING              :    3 (  0.6%)
      - positive:    3 (100.0%)

    DISCOUNT            :    2 (  0.4%)
      - negative:    2 (100.0%)

    PAYMENT             :    2 (  0.4%)
      - positive:    1 ( 50.0%)
      - negative:    1 ( 50.0%)
```
uv.lock
ADDED
The diff for this file is too large to render. See raw diff