---
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
task_categories:
- text-classification
task_ids:
- multi-label-classification
pretty_name: TweetTopicMulti
dataset_info:
  config_name: tweet_topic_multi
  features:
  - name: text
    dtype: string
  - name: date
    dtype: string
  - name: label
    sequence:
      class_label:
        names:
          '0': arts_&_culture
          '1': business_&_entrepreneurs
          '2': celebrity_&_pop_culture
          '3': diaries_&_daily_life
          '4': family
          '5': fashion_&_style
          '6': film_tv_&_video
          '7': fitness_&_health
          '8': food_&_dining
          '9': gaming
          '10': learning_&_educational
          '11': music
          '12': news_&_social_concern
          '13': other_hobbies
          '14': relationships
          '15': science_&_technology
          '16': sports
          '17': travel_&_adventure
          '18': youth_&_student_life
  - name: label_name
    sequence: string
  - name: id
    dtype: string
  splits:
  - name: test_2020
    num_bytes: 231142
    num_examples: 573
  - name: test_2021
    num_bytes: 666444
    num_examples: 1679
  - name: train_2020
    num_bytes: 1864206
    num_examples: 4585
  - name: train_2021
    num_bytes: 595183
    num_examples: 1505
  - name: train_all
    num_bytes: 2459389
    num_examples: 6090
  - name: validation_2020
    num_bytes: 233321
    num_examples: 573
  - name: validation_2021
    num_bytes: 73135
    num_examples: 188
  - name: train_random
    num_bytes: 1860509
    num_examples: 4564
  - name: validation_random
    num_bytes: 233541
    num_examples: 573
  - name: test_coling2022_random
    num_bytes: 2250137
    num_examples: 5536
  - name: train_coling2022_random
    num_bytes: 2326257
    num_examples: 5731
  - name: test_coling2022
    num_bytes: 2247725
    num_examples: 5536
  - name: train_coling2022
    num_bytes: 2328669
    num_examples: 5731
  download_size: 6377923
  dataset_size: 17369658
configs:
- config_name: tweet_topic_multi
  data_files:
  - split: test_2020
    path: tweet_topic_multi/test_2020-*
  - split: test_2021
    path: tweet_topic_multi/test_2021-*
  - split: train_2020
    path: tweet_topic_multi/train_2020-*
  - split: train_2021
    path: tweet_topic_multi/train_2021-*
  - split: train_all
    path: tweet_topic_multi/train_all-*
  - split: validation_2020
    path: tweet_topic_multi/validation_2020-*
  - split: validation_2021
    path: tweet_topic_multi/validation_2021-*
  - split: train_random
    path: tweet_topic_multi/train_random-*
  - split: validation_random
    path: tweet_topic_multi/validation_random-*
  - split: test_coling2022_random
    path: tweet_topic_multi/test_coling2022_random-*
  - split: train_coling2022_random
    path: tweet_topic_multi/train_coling2022_random-*
  - split: test_coling2022
    path: tweet_topic_multi/test_coling2022-*
  - split: train_coling2022
    path: tweet_topic_multi/train_coling2022-*
  default: true
---

# Dataset Card for "cardiffnlp/tweet_topic_multi"

## Dataset Description

- **Paper:** [https://arxiv.org/abs/2209.09824](https://arxiv.org/abs/2209.09824)
- **Dataset:** Tweet Topic Dataset
- **Domain:** Twitter
- **Number of Classes:** 19


### Dataset Summary
This is the official repository of TweetTopic (["Twitter Topic Classification", COLING 2022 main conference](https://arxiv.org/abs/2209.09824)), a topic classification dataset of tweets with 19 labels.
Each instance of TweetTopic comes with a timestamp, ranging from September 2019 to August 2021.
See [cardiffnlp/tweet_topic_single](https://huggingface.co/datasets/cardiffnlp/tweet_topic_single) for the single-label version of TweetTopic.
The tweet collection used in TweetTopic is the same as the one used in [TweetNER7](https://huggingface.co/datasets/tner/tweetner7).
The dataset is also integrated into [TweetNLP](https://tweetnlp.org/).

### Preprocessing
We pre-process tweets before annotation to normalize some artifacts: URLs are converted into the special token `{{URL}}` and non-verified usernames into `{{USERNAME}}`.
Verified usernames are kept, but wrapped in the symbols `{@` and `@}` (e.g. `@herbiehancock` becomes `{@herbiehancock@}`).
For example, a tweet

```
Get the all-analog Classic Vinyl Edition
of "Takin' Off" Album from @herbiehancock
via @bluenoterecords link below: 
http://bluenote.lnk.to/AlbumOfTheWeek
```

is transformed into the following text.
```
Get the all-analog Classic Vinyl Edition
of "Takin' Off" Album from {@herbiehancock@}
via {@bluenoterecords@} link below: {{URL}}
```

A simple function to apply this formatting is shown below.

```python
import re
from urlextract import URLExtract
extractor = URLExtract()

def format_tweet(tweet):
    # mask web urls
    urls = extractor.find_urls(tweet)
    for url in urls:
        tweet = tweet.replace(url, "{{URL}}")
    # format twitter account
    tweet = re.sub(r"\b(\s*)(@[\S]+)\b", r'\1{\2@}', tweet)
    return tweet

target = """Get the all-analog Classic Vinyl Edition of "Takin' Off" Album from @herbiehancock via @bluenoterecords link below: http://bluenote.lnk.to/AlbumOfTheWeek"""
target_format = format_tweet(target)
print(target_format)
# Get the all-analog Classic Vinyl Edition of "Takin' Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below: {{URL}}
```
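
Note that the function above wraps every username in the verified style `{@...@}`, while the released data also uses `{{USERNAME}}` for non-verified accounts. A sketch that distinguishes the two, assuming a known set of verified handles (the actual set used during preprocessing is not published, so `verified_handles` here is illustrative):

```python
import re

def format_tweet_full(tweet, verified_handles):
    """Mask URLs, wrap verified usernames in {@...@}, anonymize the rest.

    `verified_handles` is a set of account names without the leading "@";
    this is an illustrative sketch, not the exact script used for the dataset.
    """
    # mask web urls (simple pattern; the original script uses urlextract)
    tweet = re.sub(r"https?://\S+", "{{URL}}", tweet)

    def mask(match):
        handle = match.group(0)  # e.g. "@herbiehancock"
        if handle[1:] in verified_handles:
            return "{" + handle + "@}"
        return "{{USERNAME}}"

    # replace every @username mention
    return re.sub(r"@\w+", mask, tweet)
```

For example, `format_tweet_full("hi @herbiehancock via @bob http://a.io/x", {"herbiehancock"})` yields `'hi {@herbiehancock@} via {{USERNAME}} {{URL}}'`.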

### Data Splits

| split                   | number of texts | description |
|:------------------------|-----:|------:|
| test_2020               |  573 | test dataset from September 2019 to August 2020 |
| test_2021               | 1679 | test dataset from September 2020 to August 2021 |
| train_2020              | 4585 | training dataset from September 2019 to August 2020 |
| train_2021              | 1505 | training dataset from September 2020 to August 2021 |
| train_all               | 6090 | combined training dataset of `train_2020` and `train_2021` |
| validation_2020         |  573 | validation dataset from September 2019 to August 2020 |
| validation_2021         |  188 | validation dataset from September 2020 to August 2021 | 
| train_random            | 4564 | randomly sampled training dataset with the same size as `train_2020` from `train_all` |
| validation_random       |  573 | randomly sampled validation dataset with the same size as `validation_2020` from `validation_all` |
| test_coling2022_random  | 5536 | random split used in the COLING 2022 paper |
| train_coling2022_random | 5731 | random split used in the COLING 2022 paper |
| test_coling2022         | 5536 | temporal split used in the COLING 2022 paper |
| train_coling2022        | 5731 | temporal split used in the COLING 2022 paper |

For the temporal-shift setting, models should be trained on `train_2020` with `validation_2020` as the validation set and evaluated on `test_2021`.
In the general setting, models should be trained on `train_all` (the most representative training set) with `validation_2021` and evaluated on `test_2021`.

**IMPORTANT NOTE:** To obtain results comparable with those of the COLING 2022 TweetTopic paper, use `train_coling2022` and `test_coling2022` for the temporal-shift split, and `train_coling2022_random` and `test_coling2022_random` for the random split (the coling2022 splits do not have a validation set).

### Models

| model                                                                                                                                                     | training data     |       F1 |   F1 (macro) |   Accuracy |
|:----------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------|---------:|-------------:|-----------:|
| [cardiffnlp/roberta-large-tweet-topic-multi-all](https://huggingface.co/cardiffnlp/roberta-large-tweet-topic-multi-all)                                   | all (2020 + 2021) | 0.763104 |     0.620257 |   0.536629 |
| [cardiffnlp/roberta-base-tweet-topic-multi-all](https://huggingface.co/cardiffnlp/roberta-base-tweet-topic-multi-all)                                     | all (2020 + 2021) | 0.751814 |     0.600782 |   0.531864 |
| [cardiffnlp/twitter-roberta-base-2019-90m-tweet-topic-multi-all](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m-tweet-topic-multi-all)   | all (2020 + 2021) | 0.762513 |     0.603533 |   0.547945 |
| [cardiffnlp/twitter-roberta-base-dec2020-tweet-topic-multi-all](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2020-tweet-topic-multi-all)     | all (2020 + 2021) | 0.759917 |     0.59901  |   0.536033 |
| [cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-multi-all](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-multi-all)     | all (2020 + 2021) | 0.764767 |     0.618702 |   0.548541 |
| [cardiffnlp/roberta-large-tweet-topic-multi-2020](https://huggingface.co/cardiffnlp/roberta-large-tweet-topic-multi-2020)                                 | 2020 only         | 0.732366 |     0.579456 |   0.493746 |
| [cardiffnlp/roberta-base-tweet-topic-multi-2020](https://huggingface.co/cardiffnlp/roberta-base-tweet-topic-multi-2020)                                   | 2020 only         | 0.725229 |     0.561261 |   0.499107 |
| [cardiffnlp/twitter-roberta-base-2019-90m-tweet-topic-multi-2020](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m-tweet-topic-multi-2020) | 2020 only         | 0.73671  |     0.565624 |   0.513401 |
| [cardiffnlp/twitter-roberta-base-dec2020-tweet-topic-multi-2020](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2020-tweet-topic-multi-2020)   | 2020 only         | 0.729446 |     0.534799 |   0.50268  |
| [cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-multi-2020](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-multi-2020)   | 2020 only         | 0.731106 |     0.532141 |   0.509827 |


Model fine-tuning script can be found [here](https://huggingface.co/datasets/cardiffnlp/tweet_topic_multi/blob/main/lm_finetuning.py).
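
The table reports F1, macro F1, and accuracy; assuming these are micro-averaged F1, macro-averaged F1, and subset (exact-match) accuracy over the 19 labels, a minimal evaluation sketch with scikit-learn would look like this:

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def evaluate(y_true, y_pred):
    """Multi-label evaluation: micro F1, macro F1, and subset accuracy."""
    return {
        "f1": f1_score(y_true, y_pred, average="micro"),
        "f1_macro": f1_score(y_true, y_pred, average="macro"),
        "accuracy": accuracy_score(y_true, y_pred),  # exact-match ratio
    }

# toy example with 3 labels instead of 19
y_true = np.array([[1, 0, 1], [0, 1, 0]])
y_pred = np.array([[1, 0, 0], [0, 1, 0]])
scores = evaluate(y_true, y_pred)
```

Both `y_true` and `y_pred` are multi-hot indicator matrices of shape `(n_examples, n_labels)`, matching the `label` field of this dataset.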

## Dataset Structure

### Data Instances
An example from a training split looks as follows.

```python
{
    "date": "2021-03-07",
    "text": "The latest The Movie theater Daily! {{URL}} Thanks to {{USERNAME}} {{USERNAME}} {{USERNAME}} #lunchtimeread #amc1000",
    "id": "1368464923370676231",
    "label": [0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    "label_name": ["film_tv_&_video"]
}
```
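
The multi-hot `label` vector can be decoded into the `label_name` strings using the 19 class names from the dataset schema; a minimal sketch:

```python
# The 19 class names, in id order, as defined in the dataset schema.
ID2LABEL = [
    "arts_&_culture", "business_&_entrepreneurs", "celebrity_&_pop_culture",
    "diaries_&_daily_life", "family", "fashion_&_style", "film_tv_&_video",
    "fitness_&_health", "food_&_dining", "gaming", "learning_&_educational",
    "music", "news_&_social_concern", "other_hobbies", "relationships",
    "science_&_technology", "sports", "travel_&_adventure",
    "youth_&_student_life",
]

def decode_labels(label_vector):
    """Turn a multi-hot `label` vector into its `label_name` strings."""
    return [ID2LABEL[i] for i, flag in enumerate(label_vector) if flag]
```

Applied to the instance above, `decode_labels([0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])` returns `["film_tv_&_video"]`, matching its `label_name` field.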

### Labels
| <span style="font-weight:normal">0: arts_&_culture</span>           | <span style="font-weight:normal">5: fashion_&_style</span>   | <span style="font-weight:normal">10: learning_&_educational</span>  | <span style="font-weight:normal">15: science_&_technology</span>  |
|-----------------------------|---------------------|----------------------------|--------------------------|
| 1: business_&_entrepreneurs | 6: film_tv_&_video  | 11: music                  | 16: sports               |
| 2: celebrity_&_pop_culture  | 7: fitness_&_health | 12: news_&_social_concern  | 17: travel_&_adventure   |
| 3: diaries_&_daily_life     | 8: food_&_dining    | 13: other_hobbies          | 18: youth_&_student_life |
| 4: family                   | 9: gaming           | 14: relationships          |                          |

Annotation instructions can be found [here](https://docs.google.com/document/d/1IaIXZYof3iCLLxyBdu_koNmjy--zqsuOmxQ2vOxYd_g/edit?usp=sharing).

The label2id dictionary can be found [here](https://huggingface.co/datasets/cardiffnlp/tweet_topic_multi/blob/main/dataset/label.multi.json).


### Citation Information

```
@inproceedings{dimosthenis-etal-2022-twitter,
    title = "{T}witter {T}opic {C}lassification",
    author = "Antypas, Dimosthenis  and
    Ushio, Asahi  and
    Camacho-Collados, Jose  and
    Neves, Leonardo  and
    Silva, Vitor  and
    Barbieri, Francesco",
    booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "International Committee on Computational Linguistics"
}
```