parquet-converter committed on
Commit 0f79499 · 1 parent: dec0e19

Update parquet files

Files changed (4)
  1. .gitattributes +0 -37
  2. README.md +0 -83
  3. default/snap-train.parquet +3 -0
  4. snap.py +0 -53
.gitattributes DELETED
@@ -1,37 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bin.* filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
- # Audio files - uncompressed
- *.pcm filter=lfs diff=lfs merge=lfs -text
- *.sam filter=lfs diff=lfs merge=lfs -text
- *.raw filter=lfs diff=lfs merge=lfs -text
- # Audio files - compressed
- *.aac filter=lfs diff=lfs merge=lfs -text
- *.flac filter=lfs diff=lfs merge=lfs -text
- *.mp3 filter=lfs diff=lfs merge=lfs -text
- *.ogg filter=lfs diff=lfs merge=lfs -text
- *.wav filter=lfs diff=lfs merge=lfs -text
README.md DELETED
@@ -1,83 +0,0 @@
- ---
- annotations_creators:
- - expert-generated
- language_creators:
- - machine-generated
- language:
- - en
- license:
- - unknown
- multilinguality:
- - monolingual
- size_categories:
- - unknown
- source_datasets:
- - original
- task_categories:
- - structure-prediction
- task_ids: []
- pretty_name: SNAP
- tags:
- - word-segmentation
- ---
-
- # Dataset Card for SNAP
-
- ## Dataset Description
-
- - **Repository:** [ardax/hashtag-segmentor](https://github.com/ardax/hashtag-segmentor)
- - **Paper:** [Segmenting hashtags using automatically created training data](http://www.lrec-conf.org/proceedings/lrec2016/pdf/708_Paper.pdf)
-
- ### Dataset Summary
-
- Automatically segmented 803K SNAP Twitter Data Set hashtags with the heuristic described in the paper "Segmenting hashtags using automatically created training data".
-
- ### Languages
-
- English
-
- ## Dataset Structure
-
- ### Data Instances
-
- ```
- {
-     "index": 0,
-     "hashtag": "BrandThunder",
-     "segmentation": "Brand Thunder"
- }
- ```
-
- ### Data Fields
-
- - `index`: a numerical index.
- - `hashtag`: the original hashtag.
- - `segmentation`: the gold segmentation for the hashtag.
-
- ## Dataset Creation
-
- - All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
-
- - The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
-
- - There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as `_` , `:`, `~` ).
-
- - If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
-
- ## Additional Information
-
- ### Citation Information
-
- ```
- @inproceedings{celebi2016segmenting,
-     title={Segmenting hashtags using automatically created training data},
-     author={Celebi, Arda and {\"O}zg{\"u}r, Arzucan},
-     booktitle={Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)},
-     pages={2981--2985},
-     year={2016}
- }
- ```
-
- ### Contributions
-
- This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library.
default/snap-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2d0171a82d4288977d5437798151907907bd53f46eb1e7c6c1b0626aa26b26cc
+ size 28820562
snap.py DELETED
@@ -1,53 +0,0 @@
- """SNAP dataset"""
-
- import datasets
-
- _CITATION = """
- @inproceedings{celebi2016segmenting,
-     title={Segmenting hashtags using automatically created training data},
-     author={Celebi, Arda and {\"O}zg{\"u}r, Arzucan},
-     booktitle={Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)},
-     pages={2981--2985},
-     year={2016}
- }
- """
-
- _DESCRIPTION = """
- Automatically segmented 803K SNAP Twitter Data Set hashtags with the heuristic described in the paper "Segmenting hashtags using automatically created training data".
- """
- _URL = "https://raw.githubusercontent.com/ruanchaves/hashformers/master/datasets/SNAP.Hashtags.Segmented.w.Heuristics.txt"
-
- class Snap(datasets.GeneratorBasedBuilder):
-
-     VERSION = datasets.Version("1.0.0")
-
-     def _info(self):
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=datasets.Features(
-                 {
-                     "index": datasets.Value("int32"),
-                     "hashtag": datasets.Value("string"),
-                     "segmentation": datasets.Value("string")
-                 }
-             ),
-             supervised_keys=None,
-             homepage="https://github.com/ardax/hashtag-segmentor",
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         downloaded_files = dl_manager.download(_URL)
-         return [
-             datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_files}),
-         ]
-
-     def _generate_examples(self, filepath):
-         with open(filepath, 'r') as f:
-             for idx, line in enumerate(f):
-                 yield idx, {
-                     "index": idx,
-                     "hashtag": line.strip().replace(" ", ""),
-                     "segmentation": line.strip()
-                 }
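The deleted script's core logic is simple: each input line is a gold segmentation, and the hashtag is recovered by stripping its whitespace. A standalone sketch of that parsing step, with hypothetical sample lines rather than the real data file:

```python
def parse_lines(lines):
    """Turn lines of segmented hashtags into records matching the
    (index, hashtag, segmentation) schema of the deleted builder."""
    records = []
    for idx, line in enumerate(lines):
        segmentation = line.strip()
        records.append(
            {
                "index": idx,
                # The original hashtag is the segmentation with spaces removed.
                "hashtag": segmentation.replace(" ", ""),
                "segmentation": segmentation,
            }
        )
    return records


# Illustrative input lines, not taken from the dataset.
examples = parse_lines(["Brand Thunder\n", "Open Source\n"])
print(examples[0])
# → {'index': 0, 'hashtag': 'BrandThunder', 'segmentation': 'Brand Thunder'}
```

After this commit, that logic is no longer executed at load time: the precomputed records ship in the parquet file instead.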