aiface committed

Commit 02fd695 · verified · 1 Parent(s): 77b5a1d

Model save
README.md ADDED
@@ -0,0 +1,79 @@
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-base
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-base_massive_crf_v1
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# xlm-roberta-base_massive_crf_v1

This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4117
- Slot P: 0.6934
- Slot R: 0.7706
- Slot F1: 0.7300
- Slot Exact Match: 0.6995
- Intent Acc: 0.8495

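The slot scores are presumably span-level precision/recall/F1 over the predicted BIO slot tags (seqeval-style, matching the slot report shipped in this commit), Slot Exact Match the fraction of utterances whose whole tag sequence is correct, and Intent Acc plain intent accuracy. A minimal sketch of how such a summary can be computed, assuming seqeval for the span-level scores; the names are illustrative, not taken from the training code:

```python
from seqeval.metrics import f1_score, precision_score, recall_score

def compute_metrics(gold_slots, pred_slots, gold_intents, pred_intents):
    """Aggregate slot and intent metrics in the spirit of the evaluation summary above."""
    exact = sum(g == p for g, p in zip(gold_slots, pred_slots)) / len(gold_slots)
    intent_acc = sum(g == p for g, p in zip(gold_intents, pred_intents)) / len(gold_intents)
    return {
        "slot_p": precision_score(gold_slots, pred_slots),  # span-level precision
        "slot_r": recall_score(gold_slots, pred_slots),     # span-level recall
        "slot_f1": f1_score(gold_slots, pred_slots),        # span-level F1
        "slot_exact_match": exact,                          # whole tag sequence correct
        "intent_acc": intent_acc,                           # plain intent accuracy
    }

# Toy call: one utterance, slots and intent both predicted correctly.
print(compute_metrics(
    gold_slots=[["O", "B-time", "I-time"]],
    pred_slots=[["O", "B-time", "I-time"]],
    gold_intents=[12],
    pred_intents=[12],
))
```
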
## Model description

More information needed

## Intended uses & limitations

More information needed

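Until the card is filled in, a hedged loading sketch may help: the repo id below is only inferred from the commit author and model name, and the `_crf` suffix suggests a custom CRF slot head plus an intent head that the stock Auto classes will not rebuild, so only the tokenizer and the shared XLM-R encoder are shown.

```python
from transformers import AutoModel, AutoTokenizer

repo_id = "aiface/xlm-roberta-base_massive_crf_v1"  # assumed repo id, not stated in the card

tokenizer = AutoTokenizer.from_pretrained(repo_id)
# Loads the XLM-R backbone only; CRF/intent head weights are expected to be skipped,
# and a config with a custom architecture may require the original training code instead.
encoder = AutoModel.from_pretrained(repo_id)

inputs = tokenizer("wake me up at seven am tomorrow", return_tensors="pt")
hidden = encoder(**inputs).last_hidden_state  # (1, seq_len, 768) features for slot/intent heads
print(hidden.shape)
```
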
## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 256
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 30
- mixed_precision_training: Native AMP

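A hedged `TrainingArguments` sketch that mirrors the values above, assuming a single GPU so that a per-device batch of 128 with 2 gradient-accumulation steps gives the total train batch size of 256 (the output directory name is illustrative):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xlm-roberta-base_massive_crf_v1",  # illustrative
    learning_rate=5e-5,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    gradient_accumulation_steps=2,   # 128 x 2 = 256 effective train batch size
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.06,
    num_train_epochs=30,
    fp16=True,                       # "Native AMP" mixed precision
)
```
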
### Training results

| Training Loss | Epoch | Step | Validation Loss | Slot P | Slot R | Slot F1 | Slot Exact Match | Intent Acc |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:-------:|:----------------:|:----------:|
| No log        | 1.0   | 45   | 22.8757         | 0.0    | 0.0    | 0.0     | 0.3187           | 0.0300     |
| 95.1993       | 2.0   | 90   | 15.1787         | 0.3194 | 0.2164 | 0.2580  | 0.3015           | 0.1117     |
| 36.1644       | 3.0   | 135  | 10.7793         | 0.4180 | 0.4502 | 0.4335  | 0.4506           | 0.1864     |
| 24.5568       | 4.0   | 180  | 7.5359          | 0.5813 | 0.6333 | 0.6062  | 0.5706           | 0.3586     |
| 16.5092       | 5.0   | 225  | 5.7306          | 0.6266 | 0.7020 | 0.6621  | 0.6203           | 0.5957     |
| 11.609        | 6.0   | 270  | 4.9020          | 0.6610 | 0.7363 | 0.6966  | 0.6626           | 0.7280     |
| 8.4757        | 7.0   | 315  | 4.4249          | 0.6701 | 0.7448 | 0.7055  | 0.6744           | 0.7762     |
| 6.8454        | 8.0   | 360  | 4.3691          | 0.6841 | 0.7532 | 0.7170  | 0.6960           | 0.7973     |
| 5.6898        | 9.0   | 405  | 4.4460          | 0.6747 | 0.7647 | 0.7169  | 0.6886           | 0.8141     |
| 4.6831        | 10.0  | 450  | 4.2133          | 0.7067 | 0.7552 | 0.7302  | 0.7073           | 0.8342     |
| 4.6831        | 11.0  | 495  | 4.4300          | 0.6954 | 0.7542 | 0.7236  | 0.6995           | 0.8347     |
| 3.9992        | 12.0  | 540  | 4.3942          | 0.6977 | 0.7637 | 0.7292  | 0.7024           | 0.8416     |
| 3.5154        | 13.0  | 585  | 4.4117          | 0.6934 | 0.7706 | 0.7300  | 0.6995           | 0.8495     |


### Framework versions

- Transformers 4.55.0
- Pytorch 2.7.0+cu126
- Datasets 3.6.0
- Tokenizers 0.21.4
intent_report_test.txt ADDED
@@ -0,0 +1,75 @@
              precision    recall  f1-score   support

           0       0.86      0.94      0.90        88
           1       0.76      0.94      0.84        36
           2       0.92      0.97      0.94        35
           3       0.81      0.83      0.82        35
           4       0.92      0.88      0.90        26
           5       0.00      0.00      0.00         1
           6       0.92      0.79      0.85        43
           7       0.00      0.00      0.00         4
           8       1.00      0.83      0.91        18
           9       0.87      0.85      0.86        72
          10       0.95      1.00      0.97        39
          11       0.68      1.00      0.81        15
          12       0.57      0.54      0.56       169
          13       0.93      0.96      0.94       156
          14       0.56      0.69      0.62        13
          15       0.67      0.67      0.67        12
          16       0.89      0.77      0.83        22
          17       0.75      0.81      0.78        26
          18       0.92      0.81      0.86        27
          19       0.73      0.87      0.79        31
          20       0.89      0.80      0.85        41
          21       0.83      0.87      0.85        39
          22       0.89      0.86      0.88       124
          23       0.91      0.85      0.88        34
          24       1.00      0.40      0.57        10
          25       0.95      0.95      0.95        19
          26       0.87      0.84      0.86        57
          27       0.79      0.76      0.78        25
          28       0.00      0.00      0.00         6
          29       0.00      0.00      0.00         6
          30       0.90      0.99      0.94        67
          31       0.72      0.62      0.67        21
          32       0.74      0.83      0.79       126
          33       0.95      0.92      0.93       114
          34       0.74      0.88      0.81        26
          35       0.88      0.64      0.74        11
          36       0.75      0.81      0.78        72
          37       0.00      0.00      0.00         0
          38       1.00      0.20      0.33        15
          39       0.91      0.80      0.85        25
          40       0.93      0.93      0.93        43
          41       0.00      0.00      0.00         3
          42       0.87      0.78      0.82        51
          43       0.65      0.36      0.46        36
          44       0.96      0.92      0.94       119
          45       0.81      0.91      0.86       176
          46       0.74      0.91      0.82        32
          47       0.97      0.88      0.92        81
          48       0.88      0.93      0.90        41
          49       0.74      0.83      0.78       141
          50       0.88      0.90      0.89       209
          51       0.92      0.94      0.93        35
          52       0.95      0.90      0.93        21
          53       0.98      0.90      0.94        52
          54       0.92      0.96      0.94        23
          55       0.76      0.80      0.78        20
          56       0.94      0.86      0.90        36
          57       0.62      0.83      0.71        35
          58       0.92      0.70      0.79        63
          59       0.85      0.80      0.83        51

    accuracy                           0.84      2974
   macro avg       0.76      0.74      0.74      2974
weighted avg       0.84      0.84      0.83      2974

Confusion matrix:
[[83  0  0 ...  0  0  0]
 [ 0 34  0 ...  0  0  0]
 [ 0  0 34 ...  0  0  0]
 ...
 [ 0  0  0 ... 29  0  0]
 [ 0  0  0 ...  0 44  0]
 [ 0  0  0 ...  0  0 41]]
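
A per-class report and confusion matrix in this format are what scikit-learn prints; a hedged sketch of how such a file can be produced, with toy `y_true`/`y_pred` standing in for the gold and predicted intent ids on the test split:

```python
from sklearn.metrics import classification_report, confusion_matrix

# Toy stand-ins; in practice these are the gold and predicted intent ids for the test set.
y_true = [0, 0, 1, 2, 2, 1]
y_pred = [0, 1, 1, 2, 2, 1]

print(classification_report(y_true, y_pred, zero_division=0))
print("Confusion matrix:")
print(confusion_matrix(y_true, y_pred))
```
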
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:76689ec22ded558d752254d1e294d2a6d9a2bd03bfc835567aaa86f5d882c98b
+oid sha256:402d2de6d7d404ac8d90f55b33f0637121a62e29e6b21fee847b4b608623def1
 size 1112775472
model_predict_test.csv ADDED
The diff for this file is too large to render. See raw diff
 
slot_report_test.txt ADDED
@@ -0,0 +1,59 @@
                      precision    recall  f1-score   support

          alarm_type       0.00      0.00      0.00         2
            app_name       0.08      0.20      0.11         5
         artist_name       0.69      0.85      0.76        61
    audiobook_author       0.00      0.00      0.00         5
      audiobook_name       0.71      0.74      0.72        23
       business_name       0.75      0.77      0.76        92
       business_type       0.50      0.58      0.54        31
       change_amount       0.38      0.33      0.35         9
         coffee_type       0.33      0.25      0.29         4
          color_type       0.60      0.69      0.64        26
        cooking_type       0.00      0.00      0.00         8
       currency_name       0.81      0.96      0.88        50
                date       0.81      0.89      0.85       415
     definition_word       0.77      0.80      0.79        51
         device_type       0.80      0.70      0.75        57
          drink_type       0.00      0.00      0.00         1
       email_address       0.89      0.89      0.89         9
        email_folder       0.57      0.80      0.67         5
          event_name       0.67      0.71      0.69       260
           food_type       0.55      0.74      0.63        72
           game_name       0.86      0.92      0.89        26
   general_frequency       0.68      0.75      0.71        20
         house_place       0.83      0.90      0.86        58
          ingredient       0.00      0.00      0.00         6
           joke_type       0.45      0.45      0.45        11
           list_name       0.73      0.67      0.70        61
           meal_type       0.61      0.94      0.74        18
          media_type       0.83      0.80      0.82       128
          movie_name       0.00      0.00      0.00         2
          movie_type       0.00      0.00      0.00         3
         music_album       0.00      0.00      0.00         1
    music_descriptor       0.00      0.00      0.00         7
         music_genre       0.69      0.84      0.76        50
          news_topic       0.52      0.58      0.55        52
          order_type       0.61      0.85      0.71        20
              person       0.75      0.83      0.79       216
       personal_info       0.71      0.71      0.71        14
          place_name       0.78      0.79      0.78       281
      player_setting       0.58      0.45      0.51        40
       playlist_name       0.00      0.00      0.00        15
  podcast_descriptor       0.43      0.42      0.43        24
        podcast_name       0.75      0.71      0.73        17
          radio_name       0.49      0.55      0.51        33
            relation       0.72      0.75      0.73        59
           song_name       0.47      0.64      0.54        39
                time       0.70      0.70      0.70       191
           time_zone       0.58      0.54      0.56        13
           timeofday       0.70      0.70      0.70        60
    transport_agency       0.88      0.78      0.82         9
transport_descriptor       0.00      0.00      0.00         2
      transport_name       0.00      0.00      0.00         4
      transport_type       0.76      0.83      0.79        65
  weather_descriptor       0.61      0.68      0.64        82

           micro avg       0.71      0.75      0.73      2813
           macro avg       0.50      0.54      0.52      2813
        weighted avg       0.70      0.75      0.72      2813
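
The per-slot table above follows the format of seqeval's entity-level classification report, which aggregates BIO-tagged spans by slot type; a hedged sketch, with toy BIO sequences standing in for the gold and predicted test tags:

```python
from seqeval.metrics import classification_report

# Toy stand-ins; in practice these are per-utterance gold and predicted BIO tag sequences.
gold = [["O", "B-time", "I-time", "O", "B-date"]]
pred = [["O", "B-time", "I-time", "O", "O"]]

print(classification_report(gold, pred))
```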