Datasets: NYTK /
Modalities: Text
Formats: json
Languages: Hungarian
Libraries: Datasets, pandas
License: bsd-2-clause

parquet-converter committed (verified) · Commit b5fcac4 · 1 parent: c94a936

Update parquet files
README.md DELETED
@@ -1,208 +0,0 @@
---
annotations_creators:
- found
language_creators:
- found
- expert-generated
language:
- hu
license:
- bsd-2-clause
multilinguality:
- monolingual
pretty_name: HuSST
size_categories:
- unknown
source_datasets:
- extended|other
task_categories:
- text-classification
- text-scoring
task_ids:
- sentiment-classification
- sentiment-scoring
---
# Dataset Card for HuSST

## Table of Contents

- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Language](#language)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)
## Dataset Description

- **Homepage:**
- **Repository:** [HuSST dataset](https://github.com/nytud/HuSST)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [lnnoemi](mailto:[email protected])

### Dataset Summary

This is the dataset card for the Hungarian version of the Stanford Sentiment Treebank. The dataset is also part of the Hungarian Language Understanding Evaluation Benchmark Kit [HuLU](hulu.nlp.nytud.hu). The corpus was created by translating and re-annotating the original SST (Socher et al., 2013).
### Supported Tasks and Leaderboards

- sentiment classification
- sentiment scoring

### Language

The BCP-47 code for Hungarian, the only language represented in this dataset, is hu-HU.
## Dataset Structure

### Data Instances

For each instance, there is an id, a sentence and a sentiment label.

An example:

```json
{
  "Sent_id": "dev_0",
  "Sent": "Nos, a Jason elment Manhattanbe és a Pokolba kapcsán, azt hiszem, az elkerülhetetlen folytatások ötletlistájáról kihúzhatunk egy űrállomást 2455-ben (hé, ne lődd le a poént).",
  "Label": "neutral"
}
```
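Records in this format can be consumed with any standard JSON tooling; as a quick illustration using only Python's standard library (the record is the `dev_0` example above):

```python
import json

# The dev_0 example record shown above, as a raw JSON string.
record_json = """
{
  "Sent_id": "dev_0",
  "Sent": "Nos, a Jason elment Manhattanbe és a Pokolba kapcsán, azt hiszem, az elkerülhetetlen folytatások ötletlistájáról kihúzhatunk egy űrállomást 2455-ben (hé, ne lődd le a poént).",
  "Label": "neutral"
}
"""

record = json.loads(record_json)
print(record["Sent_id"], "->", record["Label"])  # dev_0 -> neutral
```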

### Data Fields

- Sent_id: unique id of the instance;
- Sent: the sentence, a translation of an instance of the SST dataset;
- Label: the sentiment label: "negative", "neutral", or "positive".
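Since pandas is listed among the supported libraries, the three fields map naturally onto a DataFrame. A purely illustrative sketch (the rows and labels below are made up and are not real HuSST instances):

```python
import pandas as pd

# Hypothetical rows following the documented Sent_id/Sent/Label schema.
rows = [
    {"Sent_id": "train_0", "Sent": "…", "Label": "positive"},
    {"Sent_id": "train_1", "Sent": "…", "Label": "negative"},
    {"Sent_id": "train_2", "Sent": "…", "Label": "neutral"},
    {"Sent_id": "train_3", "Sent": "…", "Label": "neutral"},
]
df = pd.DataFrame(rows)

# A typical first sanity check: the label distribution.
label_counts = df["Label"].value_counts().to_dict()
print(label_counts)
```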

### Data Splits

HuSST has 3 splits: *train*, *validation* and *test*.

| Dataset split | Number of instances |
|---------------|---------------------|
| train         | 9344                |
| validation    | 1168                |
| test          | 1168                |

The test data is distributed without the labels. To evaluate your model, please [contact us](mailto:[email protected]), or check [HuLU's website](hulu.nlp.nytud.hu) for an automatic evaluation (this feature is under construction at the moment).
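The split sizes above amount to an 80/10/10 partition of the 11,680 sentences, which is easy to verify:

```python
# Split sizes as documented in the table above.
splits = {"train": 9344, "validation": 1168, "test": 1168}

total = sum(splits.values())
fractions = {name: n / total for name, n in splits.items()}
print(total, fractions)  # 11680 {'train': 0.8, 'validation': 0.1, 'test': 0.1}
```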

## Dataset Creation

### Source Data

#### Initial Data Collection and Normalization

The data is a translation of the content of the SST dataset (only the whole sentences were used). Each sentence was translated by a human translator, and each translation was manually checked and further refined by another annotator.

### Annotations

#### Annotation process

The translated sentences were annotated by three human annotators with one of the following labels: negative, neutral, or positive. Each sentence was then reviewed by a fourth annotator (the 'curator'), whose decision, based on the three annotators' labels, became the final label.
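For intuition, this kind of label aggregation can be sketched in code as a majority vote with a tie-breaking fallback; note that this is only a hypothetical analogue of the human curator's decision, not the project's actual procedure:

```python
from collections import Counter


def aggregate(labels, fallback="neutral"):
    """Majority vote over annotator labels; a hypothetical stand-in
    for the human curator described above. If all annotators disagree,
    fall back to a default label."""
    (label, count), = Counter(labels).most_common(1)
    return label if count >= 2 else fallback


print(aggregate(["positive", "positive", "neutral"]))  # positive
print(aggregate(["positive", "negative", "neutral"]))  # neutral
```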

#### Who are the annotators?

The translators were native Hungarian speakers with English proficiency. The annotators were university students with some linguistic background.

## Additional Information

### Licensing Information

The dataset is released under the BSD 2-Clause license (see the front matter above).

### Citation Information

If you use this resource or any part of its documentation, please refer to:

Ligeti-Nagy, N., Ferenczi, G., Héja, E., Jelencsik-Mátyus, K., Laki, L. J., Vadász, N., Yang, Z. Gy. and Vadász, T. (2022) HuLU: magyar nyelvű benchmark adatbázis kiépítése a neurális nyelvmodellek kiértékelése céljából [HuLU: Hungarian benchmark dataset to evaluate neural language models]. In: XVIII. Magyar Számítógépes Nyelvészeti Konferencia. pp. 431–446.

```bibtex
@inproceedings{ligetinagy2022hulu,
  title = {HuLU: magyar nyelvű benchmark adatbázis kiépítése a neurális nyelvmodellek kiértékelése céljából},
  author = {Ligeti-Nagy, N. and Ferenczi, G. and Héja, E. and Jelencsik-Mátyus, K. and Laki, L. J. and Vadász, N. and Yang, Z. Gy. and Vadász, T.},
  booktitle = {XVIII. Magyar Számítógépes Nyelvészeti Konferencia},
  year = {2022},
  pages = {431--446}
}
```

and to:

Socher et al. (2013) Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank. In: Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. pp. 1631–1642.

```bibtex
@inproceedings{socher-etal-2013-recursive,
  title = "Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank",
  author = "Socher, Richard and Perelygin, Alex and Wu, Jean and Chuang, Jason and Manning, Christopher D. and Ng, Andrew and Potts, Christopher",
  booktitle = "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
  month = oct,
  year = "2013",
  address = "Seattle, Washington, USA",
  publisher = "Association for Computational Linguistics",
  url = "https://www.aclweb.org/anthology/D13-1170",
  pages = "1631--1642"
}
```

### Contributions

Thanks to [lnnoemi](https://github.com/lnnoemi) for adding this dataset.
 
data/sst_dev.json DELETED
The diff for this file is too large to render. See raw diff
 
data/sst_test.json DELETED
The diff for this file is too large to render. See raw diff
 
data/sst_train.json DELETED
The diff for this file is too large to render. See raw diff
 
default/test/0000.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:923219bb94804d2d85f7fa7b239774d4d01e47266bf013a57a8c9c5af027e95a
size 87466
default/train/0000.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:4ce849e47b347708dd24785ae186bc99e310f74f91d677cc941bd045b436461a
size 839505
default/validation/0000.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:662f91764c5be9afe90577a3fe7664adce23f4f2020918dc2252a295e7d49622
size 108126