---
license: mit
language:
- en
tags:
- tabular
- text
task_categories:
- text-classification
- summarization
pretty_name: ShopTC-100K
size_categories:
- 100M<n<1B
---
# ShopTC-100K Dataset

The ShopTC-100K dataset is collected using [TermMiner](https://github.com/eltsai/term_miner/), an open-source data collection and topic modeling pipeline introduced in the paper [Harmful Terms and Where to Find Them: Measuring and Modeling Unfavorable Financial Terms and Conditions in Shopping Websites at Scale](https://www.arxiv.org/abs/2502.01798).

If you find this dataset or the related paper useful for your research, please cite:
```
@inproceedings{tsai2025harmful,
  author    = {Elisa Tsai and Neal Mangaokar and Boyuan Zheng and Haizhong Zheng and Atul Prakash},
  title     = {Harmful Terms and Where to Find Them: Measuring and Modeling Unfavorable Financial Terms and Conditions in Shopping Websites at Scale},
  booktitle = {Proceedings of the ACM Web Conference 2025 (WWW '25)},
  year      = {2025},
  location  = {Sydney, NSW, Australia},
  publisher = {ACM},
  address   = {New York, NY, USA},
  pages     = {14},
  month     = {April 28--May 2},
  doi       = {10.1145/3696410.3714573}
}
```
## Dataset Description
The dataset consists of sanitized terms extracted from 8,251 e-commerce websites with English-language terms and conditions. The websites were sourced from the [Tranco list](https://tranco-list.eu/) (as of April 2024). The dataset contains:
- 1,825,231 sanitized sentences
- 7,777 unique websites
- Four split files for ease of use:
```
ShopTC-100K
βββ sanitized_split1.csv
βββ sanitized_split2.csv
βββ sanitized_split3.csv
βββ sanitized_split4.csv
```
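A minimal loading sketch with pandas. The `URL` and `Paragraph` column names are taken from the example table below and should be verified against the actual files:

```python
import pandas as pd

# Load all four splits and combine them into a single DataFrame.
splits = [f"sanitized_split{i}.csv" for i in range(1, 5)]
df = pd.concat((pd.read_csv(path) for path in splits), ignore_index=True)

print(df.shape)          # expected: (1825231, 2)
print(list(df.columns))  # expected: ['URL', 'Paragraph'] per the example table below
```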
### Data Sanitization Process
The extracted terms are cleaned and structured using a multi-step sanitization pipeline:
- HTML Parsing: Raw HTML content is processed to extract text from `<p>` tags.
- Sentence Tokenization: Text is split into sentences using a transformer-based tokenization model.
- Filtering: Short sentences (<10 words) and duplicates are removed.
- Preprocessing: Newline characters and extra whitespace are cleaned.
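A condensed sketch of these steps, using BeautifulSoup and NLTK's `sent_tokenize` as a simple stand-in for the transformer-based sentence tokenizer used in the paper:

```python
import re
from bs4 import BeautifulSoup
from nltk.tokenize import sent_tokenize  # stand-in; requires nltk.download("punkt")

def sanitize(raw_html: str) -> list[str]:
    # 1. HTML parsing: keep only text inside <p> tags.
    soup = BeautifulSoup(raw_html, "html.parser")
    text = " ".join(p.get_text() for p in soup.find_all("p"))

    # 2. Sentence tokenization.
    sentences = sent_tokenize(text)

    seen, kept = set(), []
    for s in sentences:
        # 4. Preprocessing: collapse newlines and extra whitespace.
        s = re.sub(r"\s+", " ", s).strip()
        # 3. Filtering: drop short sentences (<10 words) and duplicates.
        if len(s.split()) < 10 or s in seen:
            continue
        seen.add(s)
        kept.append(s)
    return kept
```

Per-split statistics: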
| Split File | Rows | Columns | Unique Websites |
|--------------------------------------|---------|---------|----------------|
| sanitized_split1.csv | 523,760 | 2 | 1,979 |
| sanitized_split2.csv | 454,966 | 2 | 1,973 |
| sanitized_split3.csv | 425,028 | 2 | 1,988 |
| sanitized_split4.csv | 421,477 | 2 | 1,837 |
### Example Data
The dataset is structured as follows:
| URL | Paragraph |
|-----------------------|----------------------------------------------------------------|
| pythonanywhere.com | Copyright © 2011-2024 PythonAnywhere LLP – Terms of Service apply. |
| pythonanywhere.com | We use cookies to provide social media features and to analyze our traffic. |
| pythonanywhere.com | 2.8 You acknowledge that clicking on Links may lead to third-party sites. |
| pythonanywhere.com | 3.4 No payment will be made unless and until Account verification is complete. |
| pythonanywhere.com | 11.3 All licenses granted to you in this agreement are non-transferable. |
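
For example, all sentences for a single website can be pulled from the combined DataFrame built above (again assuming the `URL`/`Paragraph` column names):

```python
# Count sentences per website and inspect one site's terms.
per_site = df.groupby("URL").size().sort_values(ascending=False)
print(per_site.head())

site_terms = df.loc[df["URL"] == "pythonanywhere.com", "Paragraph"]
print(site_terms.head())
```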