parquet-converter committed on
Commit 0fc0a9f · 1 Parent(s): 0df2900

Update parquet files
.gitattributes DELETED
@@ -1,37 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bin.* filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
- # Audio files - uncompressed
- *.pcm filter=lfs diff=lfs merge=lfs -text
- *.sam filter=lfs diff=lfs merge=lfs -text
- *.raw filter=lfs diff=lfs merge=lfs -text
- # Audio files - compressed
- *.aac filter=lfs diff=lfs merge=lfs -text
- *.flac filter=lfs diff=lfs merge=lfs -text
- *.mp3 filter=lfs diff=lfs merge=lfs -text
- *.ogg filter=lfs diff=lfs merge=lfs -text
- *.wav filter=lfs diff=lfs merge=lfs -text
README.md DELETED
@@ -1,82 +0,0 @@
- ---
- annotations_creators:
- - machine-generated
- language_creators:
- - machine-generated
- language:
- - hi
- - en
- license:
- - unknown
- multilinguality:
- - multilingual
- size_categories:
- - unknown
- source_datasets:
- - original
- task_categories:
- - structure-prediction
- task_ids: []
- pretty_name: HashSet Distant
- tags:
- - word-segmentation
- ---
-
- # Dataset Card for HashSet Distant
-
- ## Dataset Description
-
- - **Repository:** [prashantkodali/HashSet](https://github.com/prashantkodali/HashSet)
- - **Paper:** [HashSet -- A Dataset For Hashtag Segmentation](https://arxiv.org/abs/2201.06741)
-
- ### Dataset Summary
-
- HashSet is a new dataset consisting of 1.9k manually annotated and 3.3M loosely supervised tweets for testing the
- efficiency of hashtag segmentation models. We compare state-of-the-art hashtag segmentation models on HashSet and other
- baseline datasets (STAN and BOUN). We compare and analyse the results across the datasets to argue that HashSet can act
- as a good benchmark for hashtag segmentation tasks.
-
- HashSet Distant: 3.3M loosely collected camel-cased hashtags and their segmentations.
-
- ### Languages
-
- Hindi and English.
-
- ## Dataset Structure
-
- ### Data Instances
-
- ```
- {
-   'index': 282559,
-   'hashtag': 'Youth4Nation',
-   'segmentation': 'Youth 4 Nation'
- }
- ```
-
- ## Dataset Creation
-
- - All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
-
- - The only difference between `hashtag` and `segmentation`, or between `identifier` and `segmentation`, is the whitespace characters. Spell checking, expanding abbreviations, or correcting characters to uppercase go into other fields.
-
- - There is always whitespace between an alphanumeric character and a sequence of special characters (such as `_`, `:`, `~`).
-
- - If there are any annotations for named entity recognition or other token classification tasks, they are given in a `spans` field.
-
- ## Additional Information
-
- ### Citation Information
-
- ```
- @article{kodali2022hashset,
-   title={HashSet--A Dataset For Hashtag Segmentation},
-   author={Kodali, Prashant and Bhatnagar, Akshala and Ahuja, Naman and Shrivastava, Manish and Kumaraguru, Ponnurangam},
-   journal={arXiv preprint arXiv:2201.06741},
-   year={2022}
- }
- ```
-
- ### Contributions
-
- This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library.
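
The whitespace-only convention described in the deleted card's "Dataset Creation" bullets can be sketched as a quick invariant check. The helper below is hypothetical, not part of the dataset or its card:

```python
# Illustrative check of the card's convention: `segmentation` differs
# from `hashtag` only by whitespace, so stripping all whitespace from
# the segmentation must recover the hashtag exactly.
def is_valid_segmentation(hashtag: str, segmentation: str) -> bool:
    return "".join(segmentation.split()) == hashtag

print(is_valid_segmentation("Youth4Nation", "Youth 4 Nation"))    # True
print(is_valid_segmentation("Youth4Nation", "Youth for Nation"))  # False
```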
default/hashset_distant-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bb465aabc7467e83eb4218abefc425f2638e548b24bd37cc95373b6da72c63e8
+ size 13263420
hashset_distant.py DELETED
@@ -1,57 +0,0 @@
- """HashSet dataset."""
-
- import datasets
- import pandas as pd
-
- _CITATION = """
- @article{kodali2022hashset,
- title={HashSet--A Dataset For Hashtag Segmentation},
- author={Kodali, Prashant and Bhatnagar, Akshala and Ahuja, Naman and Shrivastava, Manish and Kumaraguru, Ponnurangam},
- journal={arXiv preprint arXiv:2201.06741},
- year={2022}
- }
- """
-
- _DESCRIPTION = """
- HashSet is a new dataset consisting of 1.9k manually annotated and 3.3M loosely supervised tweets for testing the
- efficiency of hashtag segmentation models. We compare state-of-the-art hashtag segmentation models on HashSet and other
- baseline datasets (STAN and BOUN). We compare and analyse the results across the datasets to argue that HashSet can act
- as a good benchmark for hashtag segmentation tasks.
-
- HashSet Distant: 3.3M loosely collected camel-cased hashtags and their segmentations.
- """
- _URL = "https://raw.githubusercontent.com/prashantkodali/HashSet/master/datasets/hashset/HashSet-Distant.csv"
-
- class HashSetDistant(datasets.GeneratorBasedBuilder):
-
-     VERSION = datasets.Version("1.0.0")
-
-     def _info(self):
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=datasets.Features(
-                 {
-                     "index": datasets.Value("int32"),
-                     "hashtag": datasets.Value("string"),
-                     "segmentation": datasets.Value("string")
-                 }
-             ),
-             supervised_keys=None,
-             homepage="https://github.com/prashantkodali/HashSet/",
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         downloaded_files = dl_manager.download(_URL)
-         return [
-             datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": downloaded_files}),
-         ]
-
-     def _generate_examples(self, filepath):
-         records = pd.read_csv(filepath).to_dict("records")
-         for idx, row in enumerate(records):
-             yield idx, {
-                 "index": row["Unnamed: 0.1"],
-                 "hashtag": row["Unsegmented_hashtag"],
-                 "segmentation": row["Segmented_hashtag"]
-             }
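
The deleted script's `_generate_examples` renames the raw CSV columns (`Unnamed: 0.1`, `Unsegmented_hashtag`, `Segmented_hashtag`) to the dataset's fields. That mapping can be reproduced on an in-memory sample without downloading the CSV; the sample row below is hypothetical, modeled on the card's example instance:

```python
import io

import pandas as pd

# A tiny stand-in for HashSet-Distant.csv, using the raw column names
# that the deleted loading script expects.
csv_text = """Unnamed: 0.1,Unsegmented_hashtag,Segmented_hashtag
282559,Youth4Nation,Youth 4 Nation
"""

# Same rename logic as _generate_examples: raw CSV columns become the
# dataset's (index, hashtag, segmentation) fields.
records = pd.read_csv(io.StringIO(csv_text)).to_dict("records")
examples = [
    {
        "index": row["Unnamed: 0.1"],
        "hashtag": row["Unsegmented_hashtag"],
        "segmentation": row["Segmented_hashtag"],
    }
    for row in records
]
print(examples[0]["hashtag"])  # Youth4Nation
```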