GDGiangi committed
Commit 2d46194 · verified · 1 Parent(s): 515545e

Update README.md

Files changed (1):
  1. README.md +34 -23

README.md CHANGED
@@ -11,17 +11,10 @@ size_categories:
  - 100K<n<1M
  task_categories:
  - audio-classification
- extra_gated_prompt: "To obtain an access token, the database licence must be purchased through https://gabegiangi.wordpress.com/2023/05/15/seir-db/"
- extra_gated_fields:
- Name: text
- Email: text
- Company: text
- Country: text
- Access Token: text
- I agree not to give access to any other entities: checkbox
+ tags:
+ - SER
  ---

-
  # Speech Emotion Intensity Recognition Database (SEIR-DB)

  ## Dataset Description
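
The updated front matter tags the card for audio-classification/SER. As a minimal sketch of fetching the files from the Hub, assuming the repo id is `GDGiangi/SEIR-DB` (inferred from the commit author and dataset name, not stated in this diff):

```python
from huggingface_hub import snapshot_download

# Download the dataset files (JSON manifests, data.csv, audio) to the local cache.
# The repo id is an assumption; pass token="hf_..." if the repository is gated.
local_dir = snapshot_download(repo_id="GDGiangi/SEIR-DB", repo_type="dataset")
print("SEIR-DB files downloaded to:", local_dir)
```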
@@ -38,9 +31,27 @@ The SEIR-DB is a comprehensive, multilingual speech emotion intensity recognitio

  ### Supported Tasks and Leaderboards

- The SEIR dataset is suitable for speech emotion recognition and speech emotion intensity estimation tasks (a subset of the dataset).
+ The SEIR-DB is suitable for:
+ - **Speech Emotion Recognition** (classification of discrete emotional states)
+ - **Speech Emotion Intensity Estimation** (a subset of this dataset, where intensity is rated from 1–5)
+
+ #### SPEAR (8 emotions – 375 hours)
+
+ [SPEAR (Speech Emotion Analysis and Recognition System)](mailto:[email protected]) is an **ensemble model** developed by Gabriel Giangi and serves as the SER **benchmark** for this dataset. Below is a comparison of its performance against the best fine-tuned pre-trained model (WavLM Large):
+
+ | WavLM Large Test Accuracy | SPEAR Test Accuracy | Improvement |
+ |---------------------------|---------------------|-------------|
+ | 87.8% | 90.8% | +3.0% |
+
+ More detailed metrics for **SPEAR**:
+
+ | Train Accuracy (%) | Validation Accuracy (%) | Test Accuracy (%) |
+ |--------------------|-------------------------|-------------------|
+ | 99.8% | 90.4% | 90.8% |
+
+ ---

- ### Languages
+ ## Languages

  SEIR-DB encompasses multilingual data, featuring languages such as English, Russian, Mandarin, Greek, Italian, and French.

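The card describes SPEAR only as an ensemble model benchmarked against a fine-tuned WavLM Large; its combination rule is not documented here. The sketch below shows generic hard-voting over hypothetical base classifiers, purely as an illustration of the ensemble idea, not SPEAR's actual implementation:

```python
from collections import Counter
from typing import Callable, Iterable

# A base classifier maps a WAV path to an emotion label (e.g. a fine-tuned
# WavLM/HuBERT head); the callables here are hypothetical stand-ins.
EmotionClassifier = Callable[[str], str]

def ensemble_predict(models: Iterable[EmotionClassifier], wav_path: str) -> str:
    """Majority-vote the labels predicted by the base models (ties break by first seen)."""
    votes = [model(wav_path) for model in models]
    return Counter(votes).most_common(1)[0][0]
```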
@@ -54,21 +65,21 @@ After processing, cleaning, and formatting, the dataset contains approximately 1

  ### Data Fields

- - ID: unique sample identifier
- - WAV: path to the audio file, located in the data directory
- - EMOTION: annotated emotion
- - INTENSITY: annotated intensity (ranging from 1-5), where 1 denotes low intensity, and 5 signifies high intensity; 0 indicates no annotation
- - LENGTH: duration of the audio utterance
+ - **ID**: unique sample identifier
+ - **WAV**: path to the audio file, located in the data directory
+ - **EMOTION**: annotated emotion
+ - **INTENSITY**: annotated intensity (ranging from 1-5), where 1 denotes low intensity, and 5 signifies high intensity; 0 indicates no annotation
+ - **LENGTH**: duration of the audio utterance

  ### Data Splits

  The data is divided into train, test, and validation sets, located in the respective JSON manifest files.

- - Train: 80%
- - Validation: 10%
- - Test: 10%
+ - **Train**: 80%
+ - **Validation**: 10%
+ - **Test**: 10%

- For added flexibility, unsplit data is also available in data.csv to allow custom splits.
+ For added flexibility, unsplit data is also available in `data.csv` to allow custom splits.

  ## Dataset Creation

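Given the field list and the 80/10/10 split above, a minimal loading sketch; it assumes the manifests are JSON Lines and uses a hypothetical `train.json` file name, neither of which is confirmed by this diff:

```python
import json
import pandas as pd

def load_manifest(path: str) -> pd.DataFrame:
    """Read one split manifest into a DataFrame with ID, WAV, EMOTION, INTENSITY, LENGTH."""
    with open(path, encoding="utf-8") as f:
        rows = [json.loads(line) for line in f if line.strip()]
    return pd.DataFrame(rows)

def custom_split(csv_path: str, seed: int = 0):
    """Recreate an 80/10/10 train/validation/test split from the unsplit data.csv."""
    df = pd.read_csv(csv_path).sample(frac=1.0, random_state=seed)  # shuffle
    n = len(df)
    return df.iloc[:int(0.8 * n)], df.iloc[int(0.8 * n):int(0.9 * n)], df.iloc[int(0.9 * n):]

# Rows with INTENSITY == 0 carry no intensity annotation, so an intensity
# estimation subset would drop them, e.g.:
# intensity = load_manifest("train.json").query("INTENSITY > 0")
```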
@@ -112,11 +123,11 @@ No specific limitations have been identified at this time.

  ### Dataset Curators

- Gabriel Giangi - Concordia University - Montreal, QC Canada - [email protected]
+ Gabriel Giangi - Concordia University - Montreal, QC Canada - [[email protected]](mailto:[email protected])

  ### Licensing Information

- This dataset can be used for research and academic purposes. For commercial purposes, please contact [email protected] .
+ This dataset can be used for research and academic purposes. For commercial purposes, please contact [gabegiangi@gmail.com](mailto:gabegiangi@gmail.com).

  ### Citation Information

@@ -174,4 +185,4 @@ Zhang, R., & Liu, M. (2020). Speech emotion recognition with self-attention. In

  ### Contributions

- Gabriel Giangi - Concordia University - Montreal, QC Canada - [email protected]
+ Gabriel Giangi - Concordia University - Montreal, QC Canada - [[email protected]](mailto:[email protected])