2023-03-16 03:45:00,403	44k	INFO	{'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 31415926, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': True, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 5}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'爱梅斯': 0, '花凛': 1, '佩可莉姆': 2, '咲恋': 3, '凯露': 4, '可可萝': 5}, 'model_dir': './logs/44k'}
2023-03-16 03:45:12,030	44k	INFO	Loaded checkpoint './logs/44k/G_0.pth' (iteration 1)
2023-03-16 03:45:12,390	44k	INFO	Loaded checkpoint './logs/44k/D_0.pth' (iteration 1)
2023-03-16 03:45:32,001	44k	INFO	Train Epoch: 1 [0%]
2023-03-16 03:45:32,001	44k	INFO	Losses: [3.291661262512207, 1.764503002166748, 12.157134056091309, 37.00117874145508, 8.6184663772583], step: 0, lr: 0.0001
2023-03-16 03:45:48,575	44k	INFO	Saving model and optimizer state at iteration 1 to ./logs/44k/G_0.pth
2023-03-16 03:45:50,228	44k	INFO	Saving model and optimizer state at iteration 1 to ./logs/44k/D_0.pth
2023-03-16 03:49:10,560	44k	INFO	Train Epoch: 1 [41%]
2023-03-16 03:49:10,561	44k	INFO	Losses: [2.456336736679077, 2.2636466026306152, 11.285392761230469, 23.48797035217285, 1.7761276960372925], step: 200, lr: 0.0001
2023-03-16 03:52:25,452	44k	INFO	Train Epoch: 1 [82%]
2023-03-16 03:52:25,453	44k	INFO	Losses: [2.619351625442505, 2.413145065307617, 5.774464130401611, 17.83196449279785, 1.4286918640136719], step: 400, lr: 0.0001
2023-03-16 03:53:54,545	44k	INFO	====> Epoch: 1, cost 534.15 s
2023-03-16 03:55:47,876	44k	INFO	Train Epoch: 2 [23%]
2023-03-16 03:55:47,878	44k	INFO	Losses: [2.3617289066314697, 2.34017276763916, 11.430126190185547, 23.098453521728516, 1.6386312246322632], step: 600, lr: 9.99875e-05
2023-03-16 03:58:49,432	44k	INFO	Train Epoch: 2 [64%]
2023-03-16 03:58:49,433	44k	INFO	Losses: [2.6481170654296875, 2.172445297241211, 11.095765113830566, 22.20691680908203, 1.3821688890457153], step: 800, lr: 9.99875e-05
2023-03-16 03:59:04,118	44k	INFO	Saving model and optimizer state at iteration 2 to ./logs/44k/G_800.pth
2023-03-16 03:59:07,802	44k	INFO	Saving model and optimizer state at iteration 2 to ./logs/44k/D_800.pth
2023-03-16 04:01:53,100	44k	INFO	====> Epoch: 2, cost 478.55 s
2023-03-16 04:02:23,840	44k	INFO	Train Epoch: 3 [5%]
2023-03-16 04:02:23,842	44k	INFO	Losses: [2.440416097640991, 2.114820718765259, 9.250975608825684, 21.680259704589844, 1.6363701820373535], step: 1000, lr: 9.99750015625e-05
2023-03-16 04:05:25,520	44k	INFO	Train Epoch: 3 [46%]
2023-03-16 04:05:25,522	44k	INFO	Losses: [2.3823187351226807, 2.206317901611328, 7.773815155029297, 21.99563217163086, 1.6792030334472656], step: 1200, lr: 9.99750015625e-05
2023-03-16 04:08:26,714	44k	INFO	Train Epoch: 3 [87%]
2023-03-16 04:08:26,716	44k	INFO	Losses: [2.639606237411499, 2.4339189529418945, 7.845114231109619, 18.720239639282227, 1.6298623085021973], step: 1400, lr: 9.99750015625e-05
2023-03-16 04:09:25,830	44k	INFO	====> Epoch: 3, cost 452.73 s
2023-03-16 04:11:39,460	44k	INFO	Train Epoch: 4 [28%]
2023-03-16 04:11:39,462	44k	INFO	Losses: [2.6709542274475098, 2.1050305366516113, 6.581157684326172, 18.311243057250977, 1.3236173391342163], step: 1600, lr: 9.996250468730469e-05
2023-03-16 04:11:53,038	44k	INFO	Saving model and optimizer state at iteration 4 to ./logs/44k/G_1600.pth
2023-03-16 04:11:56,885	44k	INFO	Saving model and optimizer state at iteration 4 to ./logs/44k/D_1600.pth
2023-03-16 04:15:05,252	44k	INFO	Train Epoch: 4 [69%]
2023-03-16 04:15:05,254	44k	INFO	Losses: [2.4878756999969482, 2.51745343208313, 9.609076499938965, 22.48293113708496, 1.4268285036087036], step: 1800, lr: 9.996250468730469e-05
2023-03-16 04:17:22,854	44k	INFO	====> Epoch: 4, cost 477.02 s
2023-03-16 04:18:15,871	44k	INFO	Train Epoch: 5 [10%]
2023-03-16 04:18:15,872	44k	INFO	Losses: [2.606598138809204, 2.1731693744659424, 10.153559684753418, 22.90613555908203, 1.7365959882736206], step: 2000, lr: 9.995000937421877e-05
2023-03-16 04:21:18,504	44k	INFO	Train Epoch: 5 [51%]
2023-03-16 04:21:18,505	44k	INFO	Losses: [2.441821813583374, 2.3543171882629395, 11.869065284729004, 19.522531509399414, 1.5469204187393188], step: 2200, lr: 9.995000937421877e-05
2023-03-16 04:24:20,204	44k	INFO	Train Epoch: 5 [92%]
2023-03-16 04:24:20,206	44k	INFO	Losses: [2.5969419479370117, 2.2188103199005127, 7.577230453491211, 21.363588333129883, 1.363086462020874], step: 2400, lr: 9.995000937421877e-05
2023-03-16 04:24:34,481	44k	INFO	Saving model and optimizer state at iteration 5 to ./logs/44k/G_2400.pth
2023-03-16 04:24:38,025	44k	INFO	Saving model and optimizer state at iteration 5 to ./logs/44k/D_2400.pth
2023-03-16 04:25:20,248	44k	INFO	====> Epoch: 5, cost 477.39 s
2023-03-16 04:27:54,035	44k	INFO	Train Epoch: 6 [33%]
2023-03-16 04:27:54,037	44k	INFO	Losses: [2.5279340744018555, 1.942718505859375, 9.702193260192871, 18.886552810668945, 1.5635700225830078], step: 2600, lr: 9.993751562304699e-05
2023-03-16 04:30:56,699	44k	INFO	Train Epoch: 6 [74%]
2023-03-16 04:30:56,701	44k	INFO	Losses: [2.5547738075256348, 2.1835579872131348, 9.265650749206543, 23.518890380859375, 1.5289368629455566], step: 2800, lr: 9.993751562304699e-05
2023-03-16 04:32:51,395	44k	INFO	====> Epoch: 6, cost 451.15 s
2023-03-16 04:34:06,142	44k	INFO	Train Epoch: 7 [15%]
2023-03-16 04:34:06,144	44k	INFO	Losses: [2.5520966053009033, 2.525559663772583, 10.293418884277344, 19.26520538330078, 1.4581613540649414], step: 3000, lr: 9.99250234335941e-05
2023-03-16 04:37:07,639	44k	INFO	Train Epoch: 7 [56%]
2023-03-16 04:37:07,641	44k	INFO	Losses: [2.6504194736480713, 2.1736576557159424, 6.023209571838379, 18.464496612548828, 1.5369186401367188], step: 3200, lr: 9.99250234335941e-05
2023-03-16 04:37:21,478	44k	INFO	Saving model and optimizer state at iteration 7 to ./logs/44k/G_3200.pth
2023-03-16 04:37:25,251	44k	INFO	Saving model and optimizer state at iteration 7 to ./logs/44k/D_3200.pth
2023-03-16 04:40:33,181	44k	INFO	Train Epoch: 7 [97%]
2023-03-16 04:40:33,183	44k	INFO	Losses: [2.450155019760132, 2.5208957195281982, 9.753071784973145, 19.666173934936523, 1.601668119430542], step: 3400, lr: 9.99250234335941e-05
2023-03-16 04:40:48,508	44k	INFO	====> Epoch: 7, cost 477.11 s
2023-03-16 04:43:44,187	44k	INFO	Train Epoch: 8 [38%]
2023-03-16 04:43:44,189	44k	INFO	Losses: [2.3517274856567383, 2.4090371131896973, 9.329937934875488, 22.46272087097168, 0.9372869729995728], step: 3600, lr: 9.991253280566489e-05
2023-03-16 04:46:45,403	44k	INFO	Train Epoch: 8 [79%]
2023-03-16 04:46:45,404	44k	INFO	Losses: [2.424236297607422, 2.2478489875793457, 9.796577453613281, 19.961515426635742, 1.3253408670425415], step: 3800, lr: 9.991253280566489e-05
2023-03-16 04:48:19,436	44k	INFO	====> Epoch: 8, cost 450.93 s
2023-03-16 04:49:57,041	44k	INFO	Train Epoch: 9 [20%]
2023-03-16 04:49:57,043	44k	INFO	Losses: [2.559412717819214, 2.307934284210205, 6.929838180541992, 15.601181030273438, 1.5490977764129639], step: 4000, lr: 9.990004373906418e-05
2023-03-16 04:50:10,917	44k	INFO	Saving model and optimizer state at iteration 9 to ./logs/44k/G_4000.pth
2023-03-16 04:50:15,871	44k	INFO	Saving model and optimizer state at iteration 9 to ./logs/44k/D_4000.pth
2023-03-16 04:53:23,572	44k	INFO	Train Epoch: 9 [61%]
2023-03-16 04:53:23,574	44k	INFO	Losses: [2.530341625213623, 2.0512852668762207, 8.43735408782959, 22.20954704284668, 1.2125424146652222], step: 4200, lr: 9.990004373906418e-05
2023-03-16 04:56:17,606	44k	INFO	====> Epoch: 9, cost 478.17 s
2023-03-16 04:56:34,875	44k	INFO	Train Epoch: 10 [2%]
2023-03-16 04:56:34,877	44k	INFO	Losses: [2.7438266277313232, 1.8799948692321777, 7.8603010177612305, 20.14884376525879, 1.3113723993301392], step: 4400, lr: 9.98875562335968e-05
2023-03-16 04:59:38,567	44k	INFO	Train Epoch: 10 [43%]
2023-03-16 04:59:38,569	44k	INFO	Losses: [2.42875337600708, 2.293686866760254, 9.781872749328613, 18.593624114990234, 1.4600536823272705], step: 4600, lr: 9.98875562335968e-05
2023-03-16 05:02:41,666	44k	INFO	Train Epoch: 10 [84%]
2023-03-16 05:02:41,670	44k	INFO	Losses: [2.5926754474639893, 2.7037694454193115, 6.898773670196533, 21.30295753479004, 1.2300797700881958], step: 4800, lr: 9.98875562335968e-05
2023-03-16 05:02:56,211	44k	INFO	Saving model and optimizer state at iteration 10 to ./logs/44k/G_4800.pth
2023-03-16 05:03:00,468	44k	INFO	Saving model and optimizer state at iteration 10 to ./logs/44k/D_4800.pth
2023-03-16 05:03:03,487	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_800.pth
2023-03-16 05:03:03,489	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_800.pth
2023-03-16 05:04:19,879	44k	INFO	====> Epoch: 10, cost 482.27 s
2023-03-16 05:06:18,375	44k	INFO	Train Epoch: 11 [25%]
2023-03-16 05:06:18,377	44k	INFO	Losses: [2.5129313468933105, 2.097684383392334, 11.3854341506958, 22.485979080200195, 1.615236520767212], step: 5000, lr: 9.987507028906759e-05
2023-03-16 05:09:19,703	44k	INFO	Train Epoch: 11 [66%]
2023-03-16 05:09:19,704	44k	INFO	Losses: [2.7106878757476807, 2.373891592025757, 7.397819519042969, 16.615337371826172, 1.2022877931594849], step: 5200, lr: 9.987507028906759e-05
2023-03-16 05:11:51,964	44k	INFO	====> Epoch: 11, cost 452.09 s
2023-03-16 05:12:31,805	44k	INFO	Train Epoch: 12 [7%]
2023-03-16 05:12:31,807	44k	INFO	Losses: [2.7057793140411377, 2.2924859523773193, 7.111566066741943, 18.727497100830078, 1.2804045677185059], step: 5400, lr: 9.986258590528146e-05
2023-03-16 05:15:33,982	44k	INFO	Train Epoch: 12 [48%]
2023-03-16 05:15:33,984	44k	INFO	Losses: [2.556389331817627, 2.2130212783813477, 9.802085876464844, 19.90097427368164, 1.4671552181243896], step: 5600, lr: 9.986258590528146e-05
2023-03-16 05:15:49,246	44k	INFO	Saving model and optimizer state at iteration 12 to ./logs/44k/G_5600.pth
2023-03-16 05:15:52,672	44k	INFO	Saving model and optimizer state at iteration 12 to ./logs/44k/D_5600.pth
2023-03-16 05:15:55,139	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_1600.pth
2023-03-16 05:15:55,141	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_1600.pth
2023-03-16 05:19:02,115	44k	INFO	Train Epoch: 12 [89%]
2023-03-16 05:19:02,117	44k	INFO	Losses: [2.291825294494629, 2.4433064460754395, 11.285550117492676, 21.537174224853516, 1.6749378442764282], step: 5800, lr: 9.986258590528146e-05
2023-03-16 05:19:53,740	44k	INFO	====> Epoch: 12, cost 481.78 s
2023-03-16 05:22:15,179	44k	INFO	Train Epoch: 13 [30%]
2023-03-16 05:22:15,181	44k	INFO	Losses: [2.7077765464782715, 2.5591022968292236, 8.292502403259277, 21.418359756469727, 1.6379843950271606], step: 6000, lr: 9.98501030820433e-05
2023-03-16 05:25:18,284	44k	INFO	Train Epoch: 13 [70%]
2023-03-16 05:25:18,285	44k	INFO	Losses: [2.474001884460449, 2.4212117195129395, 10.527393341064453, 23.17765235900879, 1.3198602199554443], step: 6200, lr: 9.98501030820433e-05
2023-03-16 05:27:28,128	44k	INFO	====> Epoch: 13, cost 454.39 s
2023-03-16 05:28:29,410	44k	INFO	Train Epoch: 14 [11%]
2023-03-16 05:28:29,412	44k	INFO	Losses: [2.487844467163086, 2.280858278274536, 11.38546085357666, 21.897716522216797, 1.609121322631836], step: 6400, lr: 9.983762181915804e-05
2023-03-16 05:28:42,941	44k	INFO	Saving model and optimizer state at iteration 14 to ./logs/44k/G_6400.pth
2023-03-16 05:28:47,379	44k	INFO	Saving model and optimizer state at iteration 14 to ./logs/44k/D_6400.pth
2023-03-16 05:28:50,125	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_2400.pth
2023-03-16 05:28:50,129	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_2400.pth
2023-03-16 05:31:55,531	44k	INFO	Train Epoch: 14 [52%]
2023-03-16 05:31:55,533	44k	INFO	Losses: [2.395012140274048, 2.4408841133117676, 8.866538047790527, 21.54804229736328, 1.4580787420272827], step: 6600, lr: 9.983762181915804e-05
2023-03-16 05:34:56,333	44k	INFO	Train Epoch: 14 [93%]
2023-03-16 05:34:56,334	44k	INFO	Losses: [2.5645859241485596, 2.1747446060180664, 6.60874605178833, 20.333681106567383, 1.515870213508606], step: 6800, lr: 9.983762181915804e-05
2023-03-16 05:35:25,854	44k	INFO	====> Epoch: 14, cost 477.73 s
2023-03-16 05:38:06,692	44k	INFO	Train Epoch: 15 [34%]
2023-03-16 05:38:06,694	44k	INFO	Losses: [2.657505512237549, 2.1412887573242188, 7.838269233703613, 18.579402923583984, 1.1715682744979858], step: 7000, lr: 9.982514211643064e-05
2023-03-16 05:41:07,577	44k	INFO	Train Epoch: 15 [75%]
2023-03-16 05:41:07,579	44k	INFO	Losses: [2.606098175048828, 2.226483106613159, 7.853048801422119, 21.761579513549805, 1.3788846731185913], step: 7200, lr: 9.982514211643064e-05
2023-03-16 05:41:21,612	44k	INFO	Saving model and optimizer state at iteration 15 to ./logs/44k/G_7200.pth
2023-03-16 05:41:25,049	44k	INFO	Saving model and optimizer state at iteration 15 to ./logs/44k/D_7200.pth
2023-03-16 05:41:27,318	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_3200.pth
2023-03-16 05:41:27,323	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_3200.pth
2023-03-16 05:43:20,429	44k	INFO	====> Epoch: 15, cost 474.57 s
2023-03-16 05:44:42,960	44k	INFO	Train Epoch: 16 [16%]
2023-03-16 05:44:42,962	44k	INFO	Losses: [2.5230307579040527, 2.365442991256714, 6.982161998748779, 21.792232513427734, 1.7448978424072266], step: 7400, lr: 9.981266397366609e-05
2023-03-16 05:51:59,036	44k	INFO	{'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 31415926, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': True, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 5}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'爱梅斯': 0, '花凛': 1, '佩可莉姆': 2, '咲恋': 3, '凯露': 4, '可可萝': 5}, 'model_dir': './logs/44k'}
2023-03-16 05:52:09,604	44k	INFO	Loaded checkpoint './logs/44k/G_7200.pth' (iteration 15)
2023-03-16 05:52:13,179	44k	INFO	Loaded checkpoint './logs/44k/D_7200.pth' (iteration 15)
2023-03-16 05:55:05,305	44k	INFO	Train Epoch: 15 [34%]
2023-03-16 05:55:05,306	44k	INFO	Losses: [2.700556755065918, 1.8807555437088013, 9.647850036621094, 20.80097198486328, 1.4296956062316895], step: 7000, lr: 9.981266397366609e-05
2023-03-16 05:58:09,964	44k	INFO	Train Epoch: 15 [75%]
2023-03-16 05:58:09,966	44k	INFO	Losses: [2.456688642501831, 2.3176803588867188, 10.375439643859863, 21.627145767211914, 1.5252001285552979], step: 7200, lr: 9.981266397366609e-05
2023-03-16 05:58:29,997	44k	INFO	Saving model and optimizer state at iteration 15 to ./logs/44k/G_7200.pth
2023-03-16 05:58:33,699	44k	INFO	Saving model and optimizer state at iteration 15 to ./logs/44k/D_7200.pth
2023-03-16 06:00:32,071	44k	INFO	====> Epoch: 15, cost 513.04 s
2023-03-16 06:01:56,496	44k	INFO	Train Epoch: 16 [16%]
2023-03-16 06:01:56,498	44k	INFO	Losses: [2.7294743061065674, 2.2666993141174316, 8.199426651000977, 20.438791275024414, 1.2583919763565063], step: 7400, lr: 9.980018739066937e-05
2023-03-16 06:04:59,043	44k	INFO	Train Epoch: 16 [57%]
2023-03-16 06:04:59,044	44k	INFO	Losses: [2.3027632236480713, 2.5179648399353027, 10.218314170837402, 19.362916946411133, 1.5512841939926147], step: 7600, lr: 9.980018739066937e-05
2023-03-16 06:08:02,333	44k	INFO	Train Epoch: 16 [98%]
2023-03-16 06:08:02,336	44k	INFO	Losses: [2.474194049835205, 2.098743200302124, 10.509648323059082, 19.585729598999023, 1.2483811378479004], step: 7800, lr: 9.980018739066937e-05
2023-03-16 06:08:09,957	44k	INFO	====> Epoch: 16, cost 457.89 s
2023-03-16 06:11:15,354	44k	INFO	Train Epoch: 17 [39%]
2023-03-16 06:11:15,356	44k	INFO	Losses: [2.3673224449157715, 2.105393886566162, 8.37169075012207, 19.664026260375977, 1.166558027267456], step: 8000, lr: 9.978771236724554e-05
2023-03-16 06:11:31,236	44k	INFO	Saving model and optimizer state at iteration 17 to ./logs/44k/G_8000.pth
2023-03-16 06:11:35,378	44k	INFO	Saving model and optimizer state at iteration 17 to ./logs/44k/D_8000.pth
2023-03-16 06:11:38,059	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_4000.pth
2023-03-16 06:11:38,185	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_4000.pth
2023-03-16 06:14:44,567	44k	INFO	Train Epoch: 17 [80%]
2023-03-16 06:14:44,568	44k	INFO	Losses: [2.7636914253234863, 1.8335176706314087, 10.230378150939941, 21.400732040405273, 1.2269840240478516], step: 8200, lr: 9.978771236724554e-05
2023-03-16 06:16:13,029	44k	INFO	====> Epoch: 17, cost 483.07 s
2023-03-16 06:17:59,537	44k	INFO	Train Epoch: 18 [21%]
2023-03-16 06:17:59,540	44k	INFO	Losses: [2.7225890159606934, 1.9996113777160645, 7.610608100891113, 20.34429931640625, 1.6251106262207031], step: 8400, lr: 9.977523890319963e-05
2023-03-16 06:21:04,534	44k	INFO	Train Epoch: 18 [62%]
2023-03-16 06:21:04,535	44k	INFO	Losses: [2.341667890548706, 2.268481969833374, 11.519766807556152, 22.280040740966797, 1.4153791666030884], step: 8600, lr: 9.977523890319963e-05
2023-03-16 06:23:54,864	44k	INFO	====> Epoch: 18, cost 461.84 s
2023-03-16 06:24:19,044	44k	INFO	Train Epoch: 19 [3%]
2023-03-16 06:24:19,046	44k	INFO	Losses: [2.740995407104492, 2.0446672439575195, 12.358692169189453, 20.678874969482422, 1.4825718402862549], step: 8800, lr: 9.976276699833672e-05
2023-03-16 06:24:32,203	44k	INFO	Saving model and optimizer state at iteration 19 to ./logs/44k/G_8800.pth
2023-03-16 06:24:35,980	44k	INFO	Saving model and optimizer state at iteration 19 to ./logs/44k/D_8800.pth
2023-03-16 06:24:38,737	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_4800.pth
2023-03-16 06:24:38,739	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_4800.pth
2023-03-16 06:27:45,580	44k	INFO	Train Epoch: 19 [44%]
2023-03-16 06:27:45,581	44k	INFO	Losses: [2.6062631607055664, 2.1529738903045654, 7.581636905670166, 18.038063049316406, 1.1178854703903198], step: 9000, lr: 9.976276699833672e-05
2023-03-16 06:30:50,161	44k	INFO	Train Epoch: 19 [85%]
2023-03-16 06:30:50,163	44k	INFO	Losses: [2.3310985565185547, 2.40073299407959, 10.207969665527344, 22.044940948486328, 1.4970680475234985], step: 9200, lr: 9.976276699833672e-05
2023-03-16 06:31:54,985	44k	INFO	====> Epoch: 19, cost 480.12 s
2023-03-16 06:34:02,871	44k	INFO	Train Epoch: 20 [26%]
2023-03-16 06:34:02,873	44k	INFO	Losses: [2.5796658992767334, 2.31149959564209, 7.460807800292969, 17.63202667236328, 1.1823945045471191], step: 9400, lr: 9.975029665246193e-05
2023-03-16 06:37:05,291	44k	INFO	Train Epoch: 20 [67%]
2023-03-16 06:37:05,292	44k	INFO	Losses: [2.3096089363098145, 2.2630562782287598, 11.921524047851562, 23.9654541015625, 1.0267102718353271], step: 9600, lr: 9.975029665246193e-05
2023-03-16 06:37:19,257	44k	INFO	Saving model and optimizer state at iteration 20 to ./logs/44k/G_9600.pth
2023-03-16 06:37:22,849	44k	INFO	Saving model and optimizer state at iteration 20 to ./logs/44k/D_9600.pth
2023-03-16 06:37:25,363	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_5600.pth
2023-03-16 06:37:25,365	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_5600.pth
2023-03-16 06:39:56,774	44k	INFO	====> Epoch: 20, cost 481.79 s
2023-03-16 06:40:43,948	44k	INFO	Train Epoch: 21 [8%]
2023-03-16 06:40:43,951	44k	INFO	Losses: [2.2427148818969727, 2.1527554988861084, 11.74283504486084, 17.88407325744629, 1.1844799518585205], step: 9800, lr: 9.973782786538036e-05
2023-03-16 06:43:49,417	44k	INFO	Train Epoch: 21 [49%]
2023-03-16 06:43:49,419	44k	INFO	Losses: [2.6059117317199707, 2.3247103691101074, 8.073648452758789, 18.50623321533203, 1.4986897706985474], step: 10000, lr: 9.973782786538036e-05
2023-03-16 06:46:52,057	44k	INFO	Train Epoch: 21 [90%]
2023-03-16 06:46:52,059	44k	INFO	Losses: [2.4914746284484863, 2.116398811340332, 7.762205600738525, 17.82062339782715, 1.569253921508789], step: 10200, lr: 9.973782786538036e-05
2023-03-16 06:47:36,396	44k	INFO	====> Epoch: 21, cost 459.62 s
2023-03-16 06:50:06,653	44k	INFO	Train Epoch: 22 [31%]
2023-03-16 06:50:06,656	44k	INFO	Losses: [2.4326674938201904, 2.162986993789673, 9.135976791381836, 21.036026000976562, 1.3172006607055664], step: 10400, lr: 9.972536063689719e-05
2023-03-16 06:50:21,807	44k	INFO	Saving model and optimizer state at iteration 22 to ./logs/44k/G_10400.pth
2023-03-16 06:50:25,975	44k	INFO	Saving model and optimizer state at iteration 22 to ./logs/44k/D_10400.pth
2023-03-16 06:50:28,794	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_6400.pth
2023-03-16 06:50:28,812	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_6400.pth
2023-03-16 06:53:34,952	44k	INFO	Train Epoch: 22 [72%]
2023-03-16 06:53:34,955	44k	INFO	Losses: [2.6645891666412354, 2.2696688175201416, 7.377318382263184, 19.98308753967285, 1.3985227346420288], step: 10600, lr: 9.972536063689719e-05
2023-03-16 06:55:40,220	44k	INFO	====> Epoch: 22, cost 483.82 s
2023-03-16 06:56:48,328	44k	INFO	Train Epoch: 23 [13%]
2023-03-16 06:56:48,330	44k	INFO	Losses: [2.532457113265991, 2.1207449436187744, 8.504952430725098, 19.68596649169922, 1.082635521888733], step: 10800, lr: 9.971289496681757e-05
2023-03-16 06:59:51,991	44k	INFO	Train Epoch: 23 [54%]
2023-03-16 06:59:51,993	44k	INFO	Losses: [2.626880168914795, 2.3608415126800537, 7.994516849517822, 19.393821716308594, 1.4976881742477417], step: 11000, lr: 9.971289496681757e-05
2023-03-16 09:23:05,318	44k	INFO	{'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 31415926, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': True, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 5}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'爱梅斯': 0, '花凛': 1, '佩可莉姆': 2, '咲恋': 3, '凯露': 4, '可可萝': 5}, 'model_dir': './logs/44k'}
2023-03-16 09:23:21,238	44k	INFO	Loaded checkpoint './logs/44k/G_10400.pth' (iteration 22)
2023-03-16 09:23:27,987	44k	INFO	Loaded checkpoint './logs/44k/D_10400.pth' (iteration 22)
2023-03-16 09:26:12,428	44k	INFO	Train Epoch: 22 [31%]
2023-03-16 09:26:12,430	44k	INFO	Losses: [2.2434141635894775, 2.2889671325683594, 10.926834106445312, 20.01066780090332, 1.2102007865905762], step: 10400, lr: 9.971289496681757e-05
2023-03-16 09:26:32,055	44k	INFO	Saving model and optimizer state at iteration 22 to ./logs/44k/G_10400.pth
2023-03-16 09:26:35,747	44k	INFO	Saving model and optimizer state at iteration 22 to ./logs/44k/D_10400.pth
2023-03-16 09:29:54,063	44k	INFO	Train Epoch: 22 [72%]
2023-03-16 09:29:54,065	44k	INFO	Losses: [2.3256235122680664, 2.6369853019714355, 8.865528106689453, 19.07881736755371, 1.1233159303665161], step: 10600, lr: 9.971289496681757e-05
2023-03-16 09:32:07,633	44k	INFO	====> Epoch: 22, cost 542.32 s
2023-03-16 09:33:15,136	44k	INFO	Train Epoch: 23 [13%]
2023-03-16 09:33:15,138	44k	INFO	Losses: [2.542282819747925, 2.167771100997925, 9.536635398864746, 20.894145965576172, 1.2026317119598389], step: 10800, lr: 9.970043085494672e-05
2023-03-16 09:36:14,489	44k	INFO	Train Epoch: 23 [54%]
2023-03-16 09:36:14,490	44k	INFO	Losses: [2.374000072479248, 2.178696870803833, 13.43363094329834, 20.896793365478516, 1.3948171138763428], step: 11000, lr: 9.970043085494672e-05
2023-03-16 09:39:14,759	44k	INFO	Train Epoch: 23 [95%]
2023-03-16 09:39:14,760	44k	INFO	Losses: [2.5829479694366455, 2.1564114093780518, 11.19134521484375, 22.142688751220703, 1.6007252931594849], step: 11200, lr: 9.970043085494672e-05
2023-03-16 09:39:30,755	44k	INFO	Saving model and optimizer state at iteration 23 to ./logs/44k/G_11200.pth
2023-03-16 09:39:34,805	44k	INFO	Saving model and optimizer state at iteration 23 to ./logs/44k/D_11200.pth
2023-03-16 09:39:37,520	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_7200.pth
2023-03-16 09:39:37,537	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_7200.pth
2023-03-16 09:40:01,345	44k	INFO	====> Epoch: 23, cost 473.71 s
2023-03-16 09:42:47,828	44k	INFO	Train Epoch: 24 [36%]
2023-03-16 09:42:47,830	44k	INFO	Losses: [2.660961866378784, 2.122011184692383, 6.772215843200684, 19.343496322631836, 1.1945880651474], step: 11400, lr: 9.968796830108985e-05
2023-03-16 09:45:48,287	44k	INFO	Train Epoch: 24 [77%]
2023-03-16 09:45:48,288	44k	INFO	Losses: [2.2675209045410156, 2.2053070068359375, 12.244622230529785, 24.114473342895508, 1.4315365552902222], step: 11600, lr: 9.968796830108985e-05
2023-03-16 09:47:30,743	44k	INFO	====> Epoch: 24, cost 449.40 s
2023-03-16 09:49:00,507	44k	INFO	Train Epoch: 25 [18%]
2023-03-16 09:49:00,510	44k	INFO	Losses: [2.452841281890869, 2.139713764190674, 10.158618927001953, 18.858318328857422, 0.9244416952133179], step: 11800, lr: 9.967550730505221e-05
2023-03-16 09:52:00,803	44k	INFO	Train Epoch: 25 [59%]
2023-03-16 09:52:00,805	44k	INFO	Losses: [2.6358675956726074, 2.1151206493377686, 8.482427597045898, 22.07154083251953, 1.5226523876190186], step: 12000, lr: 9.967550730505221e-05
2023-03-16 09:52:15,760	44k	INFO	Saving model and optimizer state at iteration 25 to ./logs/44k/G_12000.pth
2023-03-16 09:52:19,827	44k	INFO	Saving model and optimizer state at iteration 25 to ./logs/44k/D_12000.pth
2023-03-16 09:52:22,968	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_8000.pth
2023-03-16 09:52:22,987	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_8000.pth
2023-03-16 09:55:27,988	44k	INFO	====> Epoch: 25, cost 477.25 s
2023-03-16 09:55:36,778	44k	INFO	Train Epoch: 26 [0%]
2023-03-16 09:55:36,780	44k	INFO	Losses: [2.632934093475342, 2.035573959350586, 7.708637714385986, 19.25495719909668, 1.215615153312683], step: 12200, lr: 9.966304786663908e-05
2023-03-16 09:58:39,151	44k	INFO	Train Epoch: 26 [41%]
2023-03-16 09:58:39,153	44k	INFO	Losses: [2.489377975463867, 2.1507515907287598, 5.702184200286865, 17.984861373901367, 1.5492478609085083], step: 12400, lr: 9.966304786663908e-05
2023-03-16 10:01:40,958	44k	INFO	Train Epoch: 26 [82%]
2023-03-16 10:01:40,960	44k	INFO	Losses: [2.4334115982055664, 2.128483295440674, 11.235027313232422, 21.648330688476562, 1.0081102848052979], step: 12600, lr: 9.966304786663908e-05
2023-03-16 10:02:59,900	44k	INFO	====> Epoch: 26, cost 451.91 s
2023-03-16 10:04:51,001	44k	INFO	Train Epoch: 27 [23%]
2023-03-16 10:04:51,003	44k	INFO	Losses: [2.455256700515747, 2.060979127883911, 11.485939979553223, 22.15370750427246, 1.3699042797088623], step: 12800, lr: 9.965058998565574e-05
2023-03-16 10:05:04,279	44k	INFO	Saving model and optimizer state at iteration 27 to ./logs/44k/G_12800.pth
2023-03-16 10:05:08,665	44k	INFO	Saving model and optimizer state at iteration 27 to ./logs/44k/D_12800.pth
2023-03-16 10:05:11,969	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_8800.pth
2023-03-16 10:05:11,980	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_8800.pth
2023-03-16 10:08:15,586	44k	INFO	Train Epoch: 27 [64%]
2023-03-16 10:08:15,587	44k	INFO	Losses: [2.388317346572876, 2.4689040184020996, 9.226552963256836, 18.59186553955078, 1.4097687005996704], step: 13000, lr: 9.965058998565574e-05
2023-03-16 10:10:52,997	44k	INFO	====> Epoch: 27, cost 473.10 s
2023-03-16 10:11:24,251	44k	INFO	Train Epoch: 28 [5%]
2023-03-16 10:11:24,253	44k	INFO	Losses: [2.303039312362671, 2.3062708377838135, 10.330097198486328, 21.01088523864746, 1.7230188846588135], step: 13200, lr: 9.963813366190753e-05
2023-03-16 10:14:24,443	44k	INFO	Train Epoch: 28 [46%]
2023-03-16 10:14:24,444	44k	INFO	Losses: [2.550565481185913, 2.2409093379974365, 9.212244987487793, 20.167245864868164, 1.1931952238082886], step: 13400, lr: 9.963813366190753e-05
2023-03-16 10:17:24,817	44k	INFO	Train Epoch: 28 [87%]
2023-03-16 10:17:24,819	44k	INFO	Losses: [1.979323387145996, 2.996157169342041, 10.178831100463867, 17.771411895751953, 1.288474440574646], step: 13600, lr: 9.963813366190753e-05
2023-03-16 10:17:40,667	44k	INFO	Saving model and optimizer state at iteration 28 to ./logs/44k/G_13600.pth
2023-03-16 10:17:44,367	44k	INFO	Saving model and optimizer state at iteration 28 to ./logs/44k/D_13600.pth
2023-03-16 10:17:46,890	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_9600.pth
2023-03-16 10:17:46,892	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_9600.pth
2023-03-16 10:18:47,595	44k	INFO	====> Epoch: 28, cost 474.60 s
2023-03-16 10:20:59,559	44k	INFO	Train Epoch: 29 [28%]
2023-03-16 10:20:59,562	44k	INFO	Losses: [2.705497980117798, 1.7568820714950562, 4.887446403503418, 14.678738594055176, 1.1326416730880737], step: 13800, lr: 9.962567889519979e-05
2023-03-16 10:23:59,055	44k	INFO	Train Epoch: 29 [69%]
2023-03-16 10:23:59,056	44k	INFO	Losses: [2.522183656692505, 2.2034082412719727, 10.945884704589844, 23.23166275024414, 1.325547456741333], step: 14000, lr: 9.962567889519979e-05
2023-03-16 10:26:15,796	44k	INFO	====> Epoch: 29, cost 448.20 s
2023-03-16 10:27:07,506	44k	INFO	Train Epoch: 30 [10%]
2023-03-16 10:27:07,508	44k	INFO	Losses: [2.671726942062378, 1.9347296953201294, 7.3934645652771, 15.718672752380371, 1.497109293937683], step: 14200, lr: 9.961322568533789e-05
2023-03-16 10:30:07,016	44k	INFO	Train Epoch: 30 [51%]
2023-03-16 10:30:07,018	44k	INFO	Losses: [2.6759390830993652, 2.072394847869873, 8.502863883972168, 18.489797592163086, 1.4777787923812866], step: 14400, lr: 9.961322568533789e-05
2023-03-16 10:30:21,281	44k	INFO	Saving model and optimizer state at iteration 30 to ./logs/44k/G_14400.pth
2023-03-16 10:30:24,797	44k	INFO	Saving model and optimizer state at iteration 30 to ./logs/44k/D_14400.pth
2023-03-16 10:30:27,016	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_10400.pth
2023-03-16 10:30:27,034	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_10400.pth
2023-03-16 10:33:29,984	44k	INFO	Train Epoch: 30 [92%]
2023-03-16 10:33:29,986	44k	INFO	Losses: [2.522040843963623, 2.346306324005127, 7.859183311462402, 19.24114418029785, 1.1200071573257446], step: 14600, lr: 9.961322568533789e-05
2023-03-16 10:34:05,792	44k	INFO	====> Epoch: 30, cost 470.00 s
2023-03-16 10:36:38,266	44k	INFO	Train Epoch: 31 [33%]
2023-03-16 10:36:38,268	44k	INFO	Losses: [2.8069024085998535, 1.9522345066070557, 7.851792812347412, 19.763160705566406, 1.047946572303772], step: 14800, lr: 9.960077403212722e-05
2023-03-16 10:39:39,697	44k	INFO	Train Epoch: 31 [74%]
2023-03-16 10:39:39,699	44k	INFO	Losses: [2.5202255249023438, 2.297905206680298, 10.847701072692871, 24.754371643066406, 1.4519727230072021], step: 15000, lr: 9.960077403212722e-05
2023-03-16 10:41:35,496	44k	INFO	====> Epoch: 31, cost 449.70 s
2023-03-16 10:42:50,278	44k	INFO	Train Epoch: 32 [15%]
2023-03-16 10:42:50,279	44k	INFO	Losses: [2.560187339782715, 2.428621292114258, 9.085417747497559, 21.535371780395508, 1.4692496061325073], step: 15200, lr: 9.95883239353732e-05
2023-03-16 10:43:03,248	44k	INFO	Saving model and optimizer state at iteration 32 to ./logs/44k/G_15200.pth
2023-03-16 10:43:07,035	44k	INFO	Saving model and optimizer state at iteration 32 to ./logs/44k/D_15200.pth
2023-03-16 10:43:09,169	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_11200.pth
2023-03-16 10:43:09,173	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_11200.pth
2023-03-16 10:46:14,178	44k	INFO	Train Epoch: 32 [56%]
2023-03-16 10:46:14,180	44k	INFO	Losses: [2.7253329753875732, 2.1714789867401123, 9.479945182800293, 20.28587532043457, 1.4515960216522217], step: 15400, lr: 9.95883239353732e-05
2023-03-16 10:49:16,102	44k	INFO	Train Epoch: 32 [97%]
2023-03-16 10:49:16,103	44k	INFO	Losses: [2.566298484802246, 2.1448235511779785, 6.149750232696533, 17.659595489501953, 1.2707719802856445], step: 15600, lr: 9.95883239353732e-05
2023-03-16 10:49:30,782	44k	INFO	====> Epoch: 32, cost 475.29 s
2023-03-16 10:52:28,370	44k	INFO	Train Epoch: 33 [38%]
2023-03-16 10:52:28,372	44k	INFO	Losses: [2.6889359951019287, 2.0892183780670166, 9.148015975952148, 21.44872283935547, 1.3934203386306763], step: 15800, lr: 9.957587539488128e-05
2023-03-16 10:55:27,899	44k	INFO	Train Epoch: 33 [79%]
2023-03-16 10:55:27,901	44k	INFO	Losses: [2.063403606414795, 3.245924711227417, 7.179663181304932, 12.818585395812988, 1.0402812957763672], step: 16000, lr: 9.957587539488128e-05
2023-03-16 10:55:42,676	44k	INFO	Saving model and optimizer state at iteration 33 to ./logs/44k/G_16000.pth
2023-03-16 10:55:47,384	44k	INFO	Saving model and optimizer state at iteration 33 to ./logs/44k/D_16000.pth
2023-03-16 10:55:49,917	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_12000.pth
2023-03-16 10:55:49,920	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_12000.pth
2023-03-16 10:57:27,208	44k	INFO	====> Epoch: 33, cost 476.43 s
2023-03-16 10:59:03,125	44k	INFO	Train Epoch: 34 [20%]
2023-03-16 10:59:03,127	44k	INFO	Losses: [2.7936315536499023, 2.233839273452759, 6.825787544250488, 18.035015106201172, 1.2573646306991577], step: 16200, lr: 9.956342841045691e-05
2023-03-16 11:02:05,520	44k	INFO	Train Epoch: 34 [61%]
2023-03-16 11:02:05,521	44k	INFO	Losses: [2.716434955596924, 2.1549556255340576, 8.152503967285156, 20.39545249938965, 1.2697386741638184], step: 16400, lr: 9.956342841045691e-05
2023-03-16 11:05:01,947	44k	INFO	====> Epoch: 34, cost 454.74 s
2023-03-16 11:05:18,816	44k	INFO	Train Epoch: 35 [2%]
2023-03-16 11:05:18,817	44k	INFO	Losses: [2.6796929836273193, 2.1861860752105713, 8.270397186279297, 18.147890090942383, 1.7037100791931152], step: 16600, lr: 9.95509829819056e-05
2023-03-16 11:08:21,403	44k	INFO	Train Epoch: 35 [43%]
2023-03-16 11:08:21,405	44k	INFO	Losses: [2.3818092346191406, 2.369901657104492, 9.482978820800781, 20.707965850830078, 1.5063105821609497], step: 16800, lr: 9.95509829819056e-05
2023-03-16 11:08:36,970	44k	INFO	Saving model and optimizer state at iteration 35 to ./logs/44k/G_16800.pth
2023-03-16 11:08:40,460	44k	INFO	Saving model and optimizer state at iteration 35 to ./logs/44k/D_16800.pth
2023-03-16 11:08:42,892	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_12800.pth
2023-03-16 11:08:42,895	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_12800.pth
2023-03-16 11:11:49,002	44k	INFO	Train Epoch: 35 [84%]
2023-03-16 11:11:49,004	44k	INFO	Losses: [2.5385072231292725, 2.3350865840911865, 7.238544464111328, 16.286821365356445, 1.0897456407546997], step: 17000, lr: 9.95509829819056e-05
2023-03-16 11:13:01,568	44k	INFO	====> Epoch: 35, cost 479.62 s
2023-03-16 11:15:03,566	44k	INFO	Train Epoch: 36 [25%]
2023-03-16 11:15:03,569	44k	INFO	Losses: [2.3334832191467285, 2.616835117340088, 9.474438667297363, 21.676931381225586, 1.425167441368103], step: 17200, lr: 9.953853910903285e-05
2023-03-16 11:18:15,346	44k	INFO	Train Epoch: 36 [66%]
2023-03-16 11:18:15,348	44k	INFO	Losses: [2.5393788814544678, 2.0842111110687256, 9.579951286315918, 22.571887969970703, 1.3158255815505981], step: 17400, lr: 9.953853910903285e-05
2023-03-16 11:20:48,685	44k	INFO	====> Epoch: 36, cost 467.12 s
2023-03-16 11:21:27,107	44k	INFO	Train Epoch: 37 [7%]
2023-03-16 11:21:27,109	44k	INFO	Losses: [2.6189870834350586, 2.2640576362609863, 4.732796669006348, 13.975717544555664, 1.2774616479873657], step: 17600, lr: 9.952609679164422e-05
2023-03-16 11:21:39,961	44k	INFO	Saving model and optimizer state at iteration 37 to ./logs/44k/G_17600.pth
2023-03-16 11:21:43,520	44k	INFO	Saving model and optimizer state at iteration 37 to ./logs/44k/D_17600.pth
2023-03-16 11:21:46,165	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_13600.pth
2023-03-16 11:21:46,171	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_13600.pth
2023-03-16 11:24:55,175	44k	INFO	Train Epoch: 37 [48%]
2023-03-16 11:24:55,176	44k	INFO	Losses: [2.3131136894226074, 2.3363685607910156, 12.300803184509277, 23.914302825927734, 1.3034307956695557], step: 17800, lr: 9.952609679164422e-05
2023-03-16 11:27:58,648	44k	INFO	Train Epoch: 37 [89%]
2023-03-16 11:27:58,649	44k	INFO	Losses: [2.5874199867248535, 2.112287998199463, 7.590005874633789, 23.086366653442383, 1.510000228881836], step: 18000, lr: 9.952609679164422e-05
2023-03-16 11:28:51,168	44k	INFO	====> Epoch: 37, cost 482.48 s
2023-03-16 11:31:12,811	44k	INFO	Train Epoch: 38 [30%]
2023-03-16 11:31:12,813	44k	INFO	Losses: [2.484844446182251, 2.642493486404419, 9.594411849975586, 17.566965103149414, 1.5320457220077515], step: 18200, lr: 9.951365602954526e-05
2023-03-16 11:34:16,038	44k	INFO	Train Epoch: 38 [70%]
2023-03-16 11:34:16,040	44k	INFO	Losses: [2.484736680984497, 2.19224214553833, 10.893013954162598, 21.40000343322754, 1.2885123491287231], step: 18400, lr: 9.951365602954526e-05
2023-03-16 11:34:29,583	44k	INFO	Saving model and optimizer state at iteration 38 to ./logs/44k/G_18400.pth
2023-03-16 11:34:33,282	44k	INFO	Saving model and optimizer state at iteration 38 to ./logs/44k/D_18400.pth
2023-03-16 11:34:35,944	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_14400.pth
2023-03-16 11:34:35,946	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_14400.pth
2023-03-16 11:36:52,128	44k	INFO	====> Epoch: 38, cost 480.96 s
2023-03-16 11:37:53,458	44k	INFO	Train Epoch: 39 [11%]
2023-03-16 11:37:53,460	44k	INFO	Losses: [2.5545494556427, 2.284029483795166, 11.76029109954834, 20.730674743652344, 1.5566941499710083], step: 18600, lr: 9.950121682254156e-05
2023-03-16 11:40:56,946	44k	INFO	Train Epoch: 39 [52%]
2023-03-16 11:40:56,948	44k	INFO	Losses: [2.4372613430023193, 2.4106082916259766, 10.181136131286621, 17.73690414428711, 1.1941860914230347], step: 18800, lr: 9.950121682254156e-05
2023-03-16 11:44:00,091	44k	INFO	Train Epoch: 39 [93%]
2023-03-16 11:44:00,093	44k	INFO	Losses: [2.6526505947113037, 2.0136594772338867, 6.850375652313232, 15.903082847595215, 1.1818984746932983], step: 19000, lr: 9.950121682254156e-05
2023-03-16 11:44:29,303	44k	INFO	====> Epoch: 39, cost 457.18 s
2023-03-16 11:47:12,652	44k	INFO	Train Epoch: 40 [34%]
2023-03-16 11:47:12,653	44k	INFO	Losses: [2.518411636352539, 2.215726375579834, 7.488936424255371, 19.28771209716797, 1.1380958557128906], step: 19200, lr: 9.948877917043875e-05
2023-03-16 11:47:26,532	44k	INFO	Saving model and optimizer state at iteration 40 to ./logs/44k/G_19200.pth
2023-03-16 11:47:30,348	44k	INFO	Saving model and optimizer state at iteration 40 to ./logs/44k/D_19200.pth
2023-03-16 11:47:33,089	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_15200.pth
2023-03-16 11:47:33,093	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_15200.pth
2023-03-16 11:50:41,305	44k	INFO	Train Epoch: 40 [75%]
2023-03-16 11:50:41,308	44k	INFO	Losses: [2.6516857147216797, 1.9518625736236572, 7.348056316375732, 18.13136863708496, 1.400861144065857], step: 19400, lr: 9.948877917043875e-05
2023-03-16 11:52:31,631	44k	INFO	====> Epoch: 40, cost 482.33 s
2023-03-16 11:53:54,794	44k	INFO	Train Epoch: 41 [16%]
2023-03-16 11:53:54,795	44k	INFO	Losses: [2.536409854888916, 2.2687175273895264, 6.985504627227783, 18.625009536743164, 1.1701894998550415], step: 19600, lr: 9.947634307304244e-05
2023-03-16 11:56:57,924	44k	INFO	Train Epoch: 41 [57%]
2023-03-16 11:56:57,926	44k	INFO	Losses: [2.496490478515625, 2.422637701034546, 10.175827980041504, 23.408754348754883, 1.5327624082565308], step: 19800, lr: 9.947634307304244e-05
2023-03-16 12:00:01,182	44k	INFO	Train Epoch: 41 [98%]
2023-03-16 12:00:01,184	44k	INFO	Losses: [2.481825590133667, 2.140934705734253, 8.170811653137207, 17.614809036254883, 1.6691944599151611], step: 20000, lr: 9.947634307304244e-05
2023-03-16 12:00:16,375	44k	INFO	Saving model and optimizer state at iteration 41 to ./logs/44k/G_20000.pth
2023-03-16 12:00:20,051	44k	INFO	Saving model and optimizer state at iteration 41 to ./logs/44k/D_20000.pth
2023-03-16 12:00:22,588	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_16000.pth
2023-03-16 12:00:22,595	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_16000.pth
2023-03-16 12:00:30,390	44k	INFO	====> Epoch: 41, cost 478.76 s
2023-03-16 12:03:38,157	44k	INFO	Train Epoch: 42 [39%]
2023-03-16 12:03:38,159	44k	INFO	Losses: [2.601439952850342, 1.8881006240844727, 9.187948226928711, 20.55603790283203, 1.2415629625320435], step: 20200, lr: 9.94639085301583e-05
2023-03-16 12:06:40,201	44k	INFO	Train Epoch: 42 [80%]
2023-03-16 12:06:40,203	44k	INFO	Losses: [2.513798952102661, 2.03918194770813, 9.119217872619629, 17.703624725341797, 1.2781336307525635], step: 20400, lr: 9.94639085301583e-05
2023-03-16 12:08:08,541	44k	INFO	====> Epoch: 42, cost 458.15 s
2023-03-16 12:09:52,800	44k	INFO	Train Epoch: 43 [21%]
2023-03-16 12:09:52,802	44k	INFO	Losses: [2.4725472927093506, 1.8947622776031494, 8.817089080810547, 21.94397735595703, 1.1570287942886353], step: 20600, lr: 9.945147554159202e-05
2023-03-16 12:12:55,411	44k	INFO	Train Epoch: 43 [62%]
2023-03-16 12:12:55,412	44k	INFO	Losses: [2.4212989807128906, 2.178441286087036, 7.221798419952393, 19.40806007385254, 1.236891508102417], step: 20800, lr: 9.945147554159202e-05
2023-03-16 12:13:10,434	44k	INFO	Saving model and optimizer state at iteration 43 to ./logs/44k/G_20800.pth
2023-03-16 12:13:15,293	44k	INFO	Saving model and optimizer state at iteration 43 to ./logs/44k/D_20800.pth
2023-03-16 12:13:18,056	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_16800.pth
2023-03-16 12:13:18,059	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_16800.pth
2023-03-16 12:16:10,935	44k	INFO	====> Epoch: 43, cost 482.39 s
2023-03-16 12:16:36,145	44k	INFO	Train Epoch: 44 [3%]
2023-03-16 12:16:36,147	44k	INFO	Losses: [2.422684669494629, 2.3103551864624023, 9.979968070983887, 17.907142639160156, 1.0299410820007324], step: 21000, lr: 9.943904410714931e-05
2023-03-16 12:19:38,596	44k	INFO	Train Epoch: 44 [44%]
2023-03-16 12:19:38,598	44k	INFO	Losses: [2.624260425567627, 2.2220616340637207, 9.346229553222656, 19.159770965576172, 1.3557531833648682], step: 21200, lr: 9.943904410714931e-05
2023-03-16 12:22:41,429	44k	INFO	Train Epoch: 44 [85%]
2023-03-16 12:22:41,431	44k	INFO	Losses: [2.684530258178711, 2.0625109672546387, 9.31837272644043, 21.707088470458984, 1.265385389328003], step: 21400, lr: 9.943904410714931e-05
2023-03-16 12:23:47,716	44k	INFO	====> Epoch: 44, cost 456.78 s
2023-03-16 12:25:53,464	44k	INFO	Train Epoch: 45 [26%]
2023-03-16 12:25:53,466	44k	INFO	Losses: [2.6503918170928955, 1.9353251457214355, 6.973599433898926, 17.195920944213867, 1.2290711402893066], step: 21600, lr: 9.942661422663591e-05
2023-03-16 12:26:07,310	44k	INFO	Saving model and optimizer state at iteration 45 to ./logs/44k/G_21600.pth
2023-03-16 12:26:11,159	44k	INFO	Saving model and optimizer state at iteration 45 to ./logs/44k/D_21600.pth
2023-03-16 12:26:13,507	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_17600.pth
2023-03-16 12:26:13,616	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_17600.pth
2023-03-16 12:29:20,986	44k	INFO	Train Epoch: 45 [67%]
2023-03-16 12:29:20,988	44k	INFO	Losses: [2.5444116592407227, 2.2055976390838623, 12.526103973388672, 20.060684204101562, 1.3477643728256226], step: 21800, lr: 9.942661422663591e-05
2023-03-16 12:31:47,133	44k	INFO	====> Epoch: 45, cost 479.42 s
2023-03-16 12:32:33,687	44k	INFO	Train Epoch: 46 [8%]
2023-03-16 12:32:33,689	44k	INFO	Losses: [2.439268112182617, 2.4360885620117188, 9.108556747436523, 18.97551727294922, 1.3491157293319702], step: 22000, lr: 9.941418589985758e-05
2023-03-16 12:35:36,846	44k	INFO	Train Epoch: 46 [49%]
2023-03-16 12:35:36,847	44k	INFO	Losses: [2.766345739364624, 2.1339211463928223, 9.848959922790527, 20.946237564086914, 1.1080049276351929], step: 22200, lr: 9.941418589985758e-05
2023-03-16 12:38:38,888	44k	INFO	Train Epoch: 46 [90%]
2023-03-16 12:38:38,890	44k	INFO	Losses: [2.5533759593963623, 2.1898012161254883, 7.086889743804932, 16.532241821289062, 1.2960323095321655], step: 22400, lr: 9.941418589985758e-05
2023-03-16 12:38:54,627	44k	INFO	Saving model and optimizer state at iteration 46 to ./logs/44k/G_22400.pth
2023-03-16 12:38:59,188	44k	INFO	Saving model and optimizer state at iteration 46 to ./logs/44k/D_22400.pth
2023-03-16 12:39:01,627	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_18400.pth
2023-03-16 12:39:01,630	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_18400.pth
2023-03-16 12:39:48,846	44k	INFO	====> Epoch: 46, cost 481.71 s
2023-03-16 12:42:15,315	44k	INFO	Train Epoch: 47 [31%]
2023-03-16 12:42:15,317	44k	INFO	Losses: [2.5099427700042725, 2.297588348388672, 10.203536987304688, 20.232912063598633, 1.2333426475524902], step: 22600, lr: 9.940175912662009e-05
2023-03-16 12:45:17,085	44k	INFO	Train Epoch: 47 [72%]
2023-03-16 12:45:17,087	44k	INFO	Losses: [2.528428316116333, 2.2028472423553467, 8.007969856262207, 20.613597869873047, 1.0816898345947266], step: 22800, lr: 9.940175912662009e-05
2023-03-16 12:47:21,944	44k	INFO	====> Epoch: 47, cost 453.10 s
2023-03-16 12:48:29,987	44k	INFO	Train Epoch: 48 [13%]
2023-03-16 12:48:29,989	44k	INFO	Losses: [2.621966600418091, 2.0755844116210938, 10.405730247497559, 20.60601234436035, 1.1892584562301636], step: 23000, lr: 9.938933390672926e-05
2023-03-16 12:51:32,046	44k	INFO	Train Epoch: 48 [54%]
2023-03-16 12:51:32,047	44k	INFO	Losses: [2.6453280448913574, 2.0276060104370117, 7.4733195304870605, 19.877803802490234, 1.192906141281128], step: 23200, lr: 9.938933390672926e-05
2023-03-16 12:51:45,700	44k	INFO	Saving model and optimizer state at iteration 48 to ./logs/44k/G_23200.pth
2023-03-16 12:51:49,463	44k	INFO	Saving model and optimizer state at iteration 48 to ./logs/44k/D_23200.pth
2023-03-16 12:51:51,802	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_19200.pth
2023-03-16 12:51:51,806	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_19200.pth
2023-03-16 12:54:59,207	44k	INFO	Train Epoch: 48 [95%]
2023-03-16 12:54:59,209	44k	INFO	Losses: [2.703230381011963, 2.1648409366607666, 6.074045658111572, 16.42542266845703, 1.2859731912612915], step: 23400, lr: 9.938933390672926e-05
2023-03-16 12:55:21,251	44k	INFO	====> Epoch: 48, cost 479.31 s
2023-03-16 12:58:10,582	44k	INFO	Train Epoch: 49 [36%]
2023-03-16 12:58:10,583	44k	INFO	Losses: [2.625852108001709, 2.2031400203704834, 7.755006790161133, 21.79294776916504, 1.3348978757858276], step: 23600, lr: 9.937691023999092e-05
2023-03-16 13:01:11,763	44k	INFO	Train Epoch: 49 [77%]
2023-03-16 13:01:11,765	44k	INFO	Losses: [2.569096088409424, 2.3753645420074463, 8.749518394470215, 22.141204833984375, 1.291549801826477], step: 23800, lr: 9.937691023999092e-05
2023-03-16 13:02:53,329	44k	INFO	====> Epoch: 49, cost 452.08 s
2023-03-16 13:04:23,280	44k	INFO	Train Epoch: 50 [18%]
2023-03-16 13:04:23,282	44k	INFO	Losses: [2.3723580837249756, 2.2730939388275146, 6.994572162628174, 17.355161666870117, 1.2401801347732544], step: 24000, lr: 9.936448812621091e-05
2023-03-16 13:04:35,124	44k	INFO	Saving model and optimizer state at iteration 50 to ./logs/44k/G_24000.pth
2023-03-16 13:04:38,935	44k	INFO	Saving model and optimizer state at iteration 50 to ./logs/44k/D_24000.pth
2023-03-16 13:04:41,288	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_20000.pth
2023-03-16 13:04:41,290	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_20000.pth
2023-03-16 13:07:50,407	44k	INFO	Train Epoch: 50 [59%]
2023-03-16 13:07:50,409	44k	INFO	Losses: [2.6356141567230225, 2.267526149749756, 8.210156440734863, 19.540035247802734, 1.498274326324463], step: 24200, lr: 9.936448812621091e-05
2023-03-16 13:18:50,606	44k	INFO	{'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 31415926, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': True, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 5}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'爱梅斯': 0, '花凛': 1, '佩可莉姆': 2, '咲恋': 3, '凯露': 4, '可可萝': 5}, 'model_dir': './logs/44k'}
2023-03-16 13:19:08,091	44k	INFO	Loaded checkpoint './logs/44k/G_24000.pth' (iteration 50)
2023-03-16 13:19:14,233	44k	INFO	Loaded checkpoint './logs/44k/D_24000.pth' (iteration 50)
2023-03-16 13:21:04,243	44k	INFO	Train Epoch: 50 [18%]
2023-03-16 13:21:04,245	44k	INFO	Losses: [2.4692835807800293, 2.446688413619995, 9.919221878051758, 18.332162857055664, 1.2991989850997925], step: 24000, lr: 9.935206756519513e-05
2023-03-16 13:21:20,507	44k	INFO	Saving model and optimizer state at iteration 50 to ./logs/44k/G_24000.pth
2023-03-16 13:21:24,266	44k	INFO	Saving model and optimizer state at iteration 50 to ./logs/44k/D_24000.pth
2023-03-16 13:24:53,448	44k	INFO	Train Epoch: 50 [59%]
2023-03-16 13:24:53,449	44k	INFO	Losses: [2.6354753971099854, 1.8232835531234741, 6.918533802032471, 18.028013229370117, 1.5363731384277344], step: 24200, lr: 9.935206756519513e-05
2023-03-16 13:28:13,483	44k	INFO	====> Epoch: 50, cost 562.88 s
2023-03-16 13:28:24,454	44k	INFO	Train Epoch: 51 [0%]
2023-03-16 13:28:24,456	44k	INFO	Losses: [2.6727185249328613, 2.263068675994873, 9.122915267944336, 19.228836059570312, 1.1437422037124634], step: 24400, lr: 9.933964855674948e-05
2023-03-16 13:31:30,374	44k	INFO	Train Epoch: 51 [41%]
2023-03-16 13:31:30,375	44k	INFO	Losses: [2.6017069816589355, 2.048917055130005, 7.151366233825684, 16.109514236450195, 1.212850570678711], step: 24600, lr: 9.933964855674948e-05
2023-03-16 13:34:36,146	44k	INFO	Train Epoch: 51 [82%]
2023-03-16 13:34:36,148	44k	INFO	Losses: [2.731339931488037, 2.316772937774658, 5.760687351226807, 17.271717071533203, 1.328108310699463], step: 24800, lr: 9.933964855674948e-05
2023-03-16 13:34:52,021	44k	INFO	Saving model and optimizer state at iteration 51 to ./logs/44k/G_24800.pth
2023-03-16 13:34:56,207	44k	INFO	Saving model and optimizer state at iteration 51 to ./logs/44k/D_24800.pth
2023-03-16 13:34:59,358	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_20800.pth
2023-03-16 13:34:59,364	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_20800.pth
2023-03-16 13:36:24,370	44k	INFO	====> Epoch: 51, cost 490.89 s
2023-03-16 13:38:18,613	44k	INFO	Train Epoch: 52 [23%]
2023-03-16 13:38:18,614	44k	INFO	Losses: [2.281430959701538, 2.505606174468994, 8.787135124206543, 16.645620346069336, 1.3416118621826172], step: 25000, lr: 9.932723110067987e-05
2023-03-16 13:41:22,474	44k	INFO	Train Epoch: 52 [64%]
2023-03-16 13:41:22,476	44k	INFO	Losses: [2.5741732120513916, 2.077474355697632, 7.083030700683594, 16.91986846923828, 1.1220862865447998], step: 25200, lr: 9.932723110067987e-05
2023-03-16 13:44:05,932	44k	INFO	====> Epoch: 52, cost 461.56 s
2023-03-16 13:44:37,727	44k	INFO	Train Epoch: 53 [5%]
2023-03-16 13:44:37,729	44k	INFO	Losses: [2.3706090450286865, 2.169912338256836, 10.625147819519043, 20.189577102661133, 1.2919025421142578], step: 25400, lr: 9.931481519679228e-05
2023-03-16 13:47:42,908	44k	INFO	Train Epoch: 53 [46%]
2023-03-16 13:47:42,910	44k	INFO	Losses: [2.587810754776001, 2.067965507507324, 6.724053382873535, 15.212294578552246, 1.0409502983093262], step: 25600, lr: 9.931481519679228e-05
2023-03-16 13:47:59,569	44k	INFO	Saving model and optimizer state at iteration 53 to ./logs/44k/G_25600.pth
2023-03-16 13:48:03,659	44k	INFO	Saving model and optimizer state at iteration 53 to ./logs/44k/D_25600.pth
2023-03-16 13:48:05,910	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_21600.pth
2023-03-16 13:48:05,912	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_21600.pth
2023-03-16 13:51:13,566	44k	INFO	Train Epoch: 53 [87%]
2023-03-16 13:51:13,567	44k	INFO	Losses: [2.454693078994751, 2.2848057746887207, 9.469411849975586, 20.335559844970703, 1.1935824155807495], step: 25800, lr: 9.931481519679228e-05
2023-03-16 13:52:11,933	44k	INFO	====> Epoch: 53, cost 486.00 s
2023-03-16 13:54:25,973	44k	INFO	Train Epoch: 54 [28%]
2023-03-16 13:54:25,976	44k	INFO	Losses: [2.7108962535858154, 1.9522663354873657, 4.4008307456970215, 16.614524841308594, 1.340120792388916], step: 26000, lr: 9.930240084489267e-05
2023-03-16 13:57:31,068	44k	INFO	Train Epoch: 54 [69%]
2023-03-16 13:57:31,070	44k	INFO	Losses: [2.744401216506958, 2.2820606231689453, 9.002656936645508, 18.787479400634766, 1.3171197175979614], step: 26200, lr: 9.930240084489267e-05
2023-03-16 13:59:50,364	44k	INFO	====> Epoch: 54, cost 458.43 s
2023-03-16 14:00:44,449	44k	INFO	Train Epoch: 55 [10%]
2023-03-16 14:00:44,450	44k	INFO	Losses: [2.677558660507202, 2.4664900302886963, 6.928734302520752, 18.922264099121094, 1.5801939964294434], step: 26400, lr: 9.928998804478705e-05
2023-03-16 14:00:57,828	44k	INFO	Saving model and optimizer state at iteration 55 to ./logs/44k/G_26400.pth
2023-03-16 14:01:01,470	44k	INFO	Saving model and optimizer state at iteration 55 to ./logs/44k/D_26400.pth
2023-03-16 14:01:03,778	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_22400.pth
2023-03-16 14:01:03,780	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_22400.pth
2023-03-16 14:04:11,857	44k	INFO	Train Epoch: 55 [51%]
2023-03-16 14:04:11,859	44k	INFO	Losses: [2.7008979320526123, 2.170504570007324, 7.963308811187744, 16.859394073486328, 1.2103198766708374], step: 26600, lr: 9.928998804478705e-05
2023-03-16 14:07:14,951	44k	INFO	Train Epoch: 55 [92%]
2023-03-16 14:07:14,952	44k	INFO	Losses: [2.6881911754608154, 2.193416118621826, 9.614363670349121, 18.391061782836914, 1.0457745790481567], step: 26800, lr: 9.928998804478705e-05
2023-03-16 14:07:51,057	44k	INFO	====> Epoch: 55, cost 480.69 s
2023-03-16 14:10:29,798	44k	INFO	Train Epoch: 56 [33%]
2023-03-16 14:10:29,799	44k	INFO	Losses: [2.516676902770996, 2.070483922958374, 7.8622002601623535, 17.59926414489746, 1.2309237718582153], step: 27000, lr: 9.927757679628145e-05
2023-03-16 14:13:34,381	44k	INFO	Train Epoch: 56 [74%]
2023-03-16 14:13:34,383	44k	INFO	Losses: [2.5253868103027344, 1.8196165561676025, 8.880829811096191, 19.75553321838379, 1.2578105926513672], step: 27200, lr: 9.927757679628145e-05
2023-03-16 14:13:51,642	44k	INFO	Saving model and optimizer state at iteration 56 to ./logs/44k/G_27200.pth
2023-03-16 14:13:56,432	44k	INFO	Saving model and optimizer state at iteration 56 to ./logs/44k/D_27200.pth
2023-03-16 14:13:58,969	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_23200.pth
2023-03-16 14:13:58,973	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_23200.pth
2023-03-16 14:16:00,712	44k	INFO	====> Epoch: 56, cost 489.66 s
2023-03-16 14:17:16,039	44k	INFO	Train Epoch: 57 [15%]
2023-03-16 14:17:16,042	44k	INFO	Losses: [2.481264352798462, 2.1093060970306396, 9.263083457946777, 20.112600326538086, 1.4384979009628296], step: 27400, lr: 9.926516709918191e-05
2023-03-16 14:20:19,995	44k	INFO	Train Epoch: 57 [56%]
2023-03-16 14:20:19,996	44k	INFO	Losses: [2.3411600589752197, 2.582979917526245, 8.98721981048584, 18.274932861328125, 1.3591454029083252], step: 27600, lr: 9.926516709918191e-05
2023-03-16 14:23:26,121	44k	INFO	Train Epoch: 57 [97%]
2023-03-16 14:23:26,123	44k	INFO	Losses: [2.5430245399475098, 1.9995101690292358, 6.614638328552246, 17.214950561523438, 1.2342784404754639], step: 27800, lr: 9.926516709918191e-05
2023-03-16 14:23:40,662	44k	INFO	====> Epoch: 57, cost 459.95 s
2023-03-16 14:26:39,495	44k	INFO	Train Epoch: 58 [38%]
2023-03-16 14:26:39,497	44k	INFO	Losses: [2.5874879360198975, 1.973245620727539, 6.864962100982666, 19.247074127197266, 1.2281551361083984], step: 28000, lr: 9.92527589532945e-05
2023-03-16 14:26:54,223	44k	INFO	Saving model and optimizer state at iteration 58 to ./logs/44k/G_28000.pth
2023-03-16 14:26:58,086	44k	INFO	Saving model and optimizer state at iteration 58 to ./logs/44k/D_28000.pth
2023-03-16 14:27:00,305	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_24000.pth
2023-03-16 14:27:00,312	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_24000.pth
2023-03-16 14:30:09,423	44k	INFO	Train Epoch: 58 [79%]
2023-03-16 14:30:09,424	44k	INFO	Losses: [2.6614861488342285, 2.2932651042938232, 8.656551361083984, 15.753890037536621, 1.2384966611862183], step: 28200, lr: 9.92527589532945e-05
2023-03-16 14:31:45,198	44k	INFO	====> Epoch: 58, cost 484.54 s
2023-03-16 14:33:22,982	44k	INFO	Train Epoch: 59 [20%]
2023-03-16 14:33:22,984	44k	INFO	Losses: [2.562333106994629, 2.3804547786712646, 7.292979717254639, 16.645328521728516, 1.4423201084136963], step: 28400, lr: 9.924035235842533e-05
2023-03-16 14:36:27,026	44k	INFO	Train Epoch: 59 [61%]
2023-03-16 14:36:27,028	44k	INFO	Losses: [2.772216558456421, 1.995288372039795, 6.699609279632568, 18.315597534179688, 1.2430319786071777], step: 28600, lr: 9.924035235842533e-05
2023-03-16 14:39:25,019	44k	INFO	====> Epoch: 59, cost 459.82 s
2023-03-16 14:39:42,561	44k	INFO	Train Epoch: 60 [2%]
2023-03-16 14:39:42,562	44k	INFO	Losses: [2.707663059234619, 2.218473434448242, 9.781296730041504, 20.252290725708008, 1.1182050704956055], step: 28800, lr: 9.922794731438052e-05
2023-03-16 14:39:54,764	44k	INFO	Saving model and optimizer state at iteration 60 to ./logs/44k/G_28800.pth
2023-03-16 14:39:58,339	44k	INFO	Saving model and optimizer state at iteration 60 to ./logs/44k/D_28800.pth
2023-03-16 14:40:00,560	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_24800.pth
2023-03-16 14:40:00,564	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_24800.pth
2023-03-16 14:43:10,570	44k	INFO	Train Epoch: 60 [43%]
2023-03-16 14:43:10,572	44k	INFO	Losses: [2.573453187942505, 2.1995015144348145, 8.725892066955566, 17.742557525634766, 1.2147800922393799], step: 29000, lr: 9.922794731438052e-05
2023-03-16 14:46:16,093	44k	INFO	Train Epoch: 60 [84%]
2023-03-16 14:46:16,094	44k	INFO	Losses: [2.4506399631500244, 2.4774932861328125, 11.0800199508667, 22.124006271362305, 1.2493702173233032], step: 29200, lr: 9.922794731438052e-05
2023-03-16 14:47:31,690	44k	INFO	====> Epoch: 60, cost 486.67 s
2023-03-16 14:49:32,572	44k	INFO	Train Epoch: 61 [25%]
2023-03-16 14:49:32,575	44k	INFO	Losses: [2.3690905570983887, 2.454897403717041, 11.587372779846191, 21.197277069091797, 1.450961709022522], step: 29400, lr: 9.921554382096622e-05
2023-03-16 14:52:36,753	44k	INFO	Train Epoch: 61 [66%]
2023-03-16 14:52:36,755	44k	INFO	Losses: [2.574634313583374, 2.2488276958465576, 8.2444486618042, 21.25333595275879, 1.4296631813049316], step: 29600, lr: 9.921554382096622e-05
2023-03-16 14:52:52,182	44k	INFO	Saving model and optimizer state at iteration 61 to ./logs/44k/G_29600.pth
2023-03-16 14:52:55,550	44k	INFO	Saving model and optimizer state at iteration 61 to ./logs/44k/D_29600.pth
2023-03-16 14:52:57,851	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_25600.pth
2023-03-16 14:52:57,855	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_25600.pth
2023-03-16 14:55:36,150	44k	INFO	====> Epoch: 61, cost 484.46 s
2023-03-16 14:56:14,546	44k	INFO	Train Epoch: 62 [7%]
2023-03-16 14:56:14,547	44k	INFO	Losses: [2.6944408416748047, 1.9618580341339111, 3.0558042526245117, 11.302093505859375, 0.9839851260185242], step: 29800, lr: 9.92031418779886e-05
2023-03-16 14:59:18,905	44k	INFO	Train Epoch: 62 [48%]
2023-03-16 14:59:18,907	44k	INFO	Losses: [2.532884120941162, 1.979246973991394, 9.408533096313477, 18.637187957763672, 1.2008438110351562], step: 30000, lr: 9.92031418779886e-05
2023-03-16 15:02:23,794	44k	INFO	Train Epoch: 62 [89%]
2023-03-16 15:02:23,796	44k	INFO	Losses: [2.436702251434326, 2.5716209411621094, 6.511348724365234, 18.36617088317871, 1.4560133218765259], step: 30200, lr: 9.92031418779886e-05
2023-03-16 15:03:15,235	44k	INFO	====> Epoch: 62, cost 459.09 s
2023-03-16 15:05:39,062	44k	INFO	Train Epoch: 63 [30%]
2023-03-16 15:05:39,064	44k	INFO	Losses: [2.643864154815674, 2.4299960136413574, 8.278367042541504, 18.665084838867188, 1.2700059413909912], step: 30400, lr: 9.919074148525384e-05
2023-03-16 15:05:53,609	44k	INFO	Saving model and optimizer state at iteration 63 to ./logs/44k/G_30400.pth
2023-03-16 15:05:57,180	44k	INFO	Saving model and optimizer state at iteration 63 to ./logs/44k/D_30400.pth
2023-03-16 15:05:59,379	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_26400.pth
2023-03-16 15:05:59,383	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_26400.pth
2023-03-17 00:55:11,607	44k	INFO	{'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 31415926, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': True, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 5}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'爱梅斯': 0, '花凛': 1, '佩可莉姆': 2, '咲恋': 3, '凯露': 4, '可可萝': 5}, 'model_dir': './logs/44k'}
2023-03-17 00:55:12,238	44k	WARNING	git hash values are different. 55dd086f(saved) != d54bf592(current)
2023-03-17 00:55:29,828	44k	INFO	Loaded checkpoint './logs/44k/G_30400.pth' (iteration 63)
2023-03-17 00:55:36,918	44k	INFO	Loaded checkpoint './logs/44k/D_30400.pth' (iteration 63)
2023-03-17 00:58:11,773	44k	INFO	Train Epoch: 63 [30%]
2023-03-17 00:58:11,774	44k	INFO	Losses: [2.484586715698242, 2.1262001991271973, 12.090802192687988, 19.90692138671875, 1.6595423221588135], step: 30400, lr: 9.917834264256819e-05
2023-03-17 00:58:29,244	44k	INFO	Saving model and optimizer state at iteration 63 to ./logs/44k/G_30400.pth
2023-03-17 00:58:32,687	44k	INFO	Saving model and optimizer state at iteration 63 to ./logs/44k/D_30400.pth
2023-03-17 01:01:48,810	44k	INFO	Train Epoch: 63 [70%]
2023-03-17 01:01:48,812	44k	INFO	Losses: [2.4645233154296875, 2.318657875061035, 10.524216651916504, 21.752504348754883, 1.1110655069351196], step: 30600, lr: 9.917834264256819e-05
2023-03-17 01:04:16,840	44k	INFO	====> Epoch: 63, cost 545.24 s
2023-03-17 01:05:17,249	44k	INFO	Train Epoch: 64 [11%]
2023-03-17 01:05:17,251	44k	INFO	Losses: [2.609168529510498, 2.2651188373565674, 8.148669242858887, 17.156068801879883, 1.2939645051956177], step: 30800, lr: 9.916594534973787e-05
2023-03-17 01:08:15,527	44k	INFO	Train Epoch: 64 [52%]
2023-03-17 01:08:15,528	44k	INFO	Losses: [2.5646047592163086, 2.2578229904174805, 9.680649757385254, 20.996994018554688, 1.3195247650146484], step: 31000, lr: 9.916594534973787e-05
2023-03-17 01:11:14,200	44k	INFO	Train Epoch: 64 [93%]
2023-03-17 01:11:14,202	44k	INFO	Losses: [2.707653760910034, 1.959355354309082, 6.446928024291992, 18.784202575683594, 1.1379108428955078], step: 31200, lr: 9.916594534973787e-05
2023-03-17 01:11:29,528	44k	INFO	Saving model and optimizer state at iteration 64 to ./logs/44k/G_31200.pth
2023-03-17 01:11:33,456	44k	INFO	Saving model and optimizer state at iteration 64 to ./logs/44k/D_31200.pth
2023-03-17 01:11:36,257	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_27200.pth
2023-03-17 01:11:36,259	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_27200.pth
2023-03-17 01:12:07,394	44k	INFO	====> Epoch: 64, cost 470.55 s
2023-03-17 01:14:45,135	44k	INFO	Train Epoch: 65 [34%]
2023-03-17 01:14:45,136	44k	INFO	Losses: [2.748737335205078, 1.7619506120681763, 6.99699592590332, 13.925227165222168, 1.3247754573822021], step: 31400, lr: 9.915354960656915e-05
2023-03-17 01:17:44,326	44k	INFO	Train Epoch: 65 [75%]
2023-03-17 01:17:44,327	44k	INFO	Losses: [2.718717336654663, 2.2537217140197754, 10.094719886779785, 20.878253936767578, 1.1247361898422241], step: 31600, lr: 9.915354960656915e-05
2023-03-17 01:19:30,952	44k	INFO	====> Epoch: 65, cost 443.56 s
2023-03-17 01:20:51,867	44k	INFO	Train Epoch: 66 [16%]
2023-03-17 01:20:51,868	44k	INFO	Losses: [2.3518831729888916, 2.8273115158081055, 8.577445983886719, 19.460765838623047, 1.025953769683838], step: 31800, lr: 9.914115541286833e-05
2023-03-17 01:23:52,305	44k	INFO	Train Epoch: 66 [57%]
2023-03-17 01:23:52,307	44k	INFO	Losses: [2.5832154750823975, 2.1611838340759277, 8.606710433959961, 20.580976486206055, 1.3249191045761108], step: 32000, lr: 9.914115541286833e-05
2023-03-17 01:24:06,025	44k	INFO	Saving model and optimizer state at iteration 66 to ./logs/44k/G_32000.pth
2023-03-17 01:24:10,093	44k	INFO	Saving model and optimizer state at iteration 66 to ./logs/44k/D_32000.pth
2023-03-17 01:24:12,284	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_28000.pth
2023-03-17 01:24:12,286	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_28000.pth
2023-03-17 01:27:15,629	44k	INFO	Train Epoch: 66 [98%]
2023-03-17 01:27:15,631	44k	INFO	Losses: [2.625286817550659, 1.89324152469635, 7.697768688201904, 18.995267868041992, 1.3168432712554932], step: 32200, lr: 9.914115541286833e-05
2023-03-17 01:27:23,354	44k	INFO	====> Epoch: 66, cost 472.40 s
2023-03-17 01:30:24,190	44k	INFO	Train Epoch: 67 [39%]
2023-03-17 01:30:24,192	44k	INFO	Losses: [2.5684475898742676, 1.8973731994628906, 5.9238057136535645, 15.734672546386719, 1.2734485864639282], step: 32400, lr: 9.912876276844171e-05
2023-03-17 01:33:24,520	44k	INFO	Train Epoch: 67 [80%]
2023-03-17 01:33:24,522	44k	INFO	Losses: [2.408538818359375, 2.245204210281372, 10.826396942138672, 18.869733810424805, 1.111036777496338], step: 32600, lr: 9.912876276844171e-05
2023-03-17 01:34:52,202	44k	INFO	====> Epoch: 67, cost 448.85 s
2023-03-17 01:36:35,928	44k	INFO	Train Epoch: 68 [21%]
2023-03-17 01:36:35,929	44k	INFO	Losses: [2.265761613845825, 2.315279483795166, 11.713556289672852, 20.679689407348633, 1.2566416263580322], step: 32800, lr: 9.911637167309565e-05
2023-03-17 01:36:50,601	44k	INFO	Saving model and optimizer state at iteration 68 to ./logs/44k/G_32800.pth
2023-03-17 01:36:54,549	44k	INFO	Saving model and optimizer state at iteration 68 to ./logs/44k/D_32800.pth
2023-03-17 01:36:56,776	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_28800.pth
2023-03-17 01:36:56,779	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_28800.pth
2023-03-17 01:40:01,065	44k	INFO	Train Epoch: 68 [62%]
2023-03-17 01:40:01,067	44k	INFO	Losses: [2.5711069107055664, 2.273301124572754, 9.78982925415039, 21.67451286315918, 1.2979782819747925], step: 33000, lr: 9.911637167309565e-05
2023-03-17 01:42:47,388	44k	INFO	====> Epoch: 68, cost 475.19 s
2023-03-17 01:43:12,092	44k	INFO	Train Epoch: 69 [3%]
2023-03-17 01:43:12,094	44k	INFO	Losses: [2.5658419132232666, 2.247443199157715, 8.636250495910645, 19.110456466674805, 0.9347850680351257], step: 33200, lr: 9.910398212663652e-05
2023-03-17 01:46:12,994	44k	INFO	Train Epoch: 69 [44%]
2023-03-17 01:46:12,996	44k	INFO	Losses: [2.575089454650879, 2.0063514709472656, 6.667556285858154, 15.460140228271484, 1.49322509765625], step: 33400, lr: 9.910398212663652e-05
2023-03-17 01:49:13,097	44k	INFO	Train Epoch: 69 [85%]
2023-03-17 01:49:13,098	44k	INFO	Losses: [2.4486289024353027, 2.355808734893799, 8.402924537658691, 17.28294563293457, 1.2662371397018433], step: 33600, lr: 9.910398212663652e-05
2023-03-17 01:49:27,684	44k	INFO	Saving model and optimizer state at iteration 69 to ./logs/44k/G_33600.pth
2023-03-17 01:49:32,158	44k	INFO	Saving model and optimizer state at iteration 69 to ./logs/44k/D_33600.pth
2023-03-17 01:49:34,780	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_29600.pth
2023-03-17 01:49:34,782	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_29600.pth
2023-03-17 01:50:42,388	44k	INFO	====> Epoch: 69, cost 475.00 s
2023-03-17 01:52:47,667	44k	INFO	Train Epoch: 70 [26%]
2023-03-17 01:52:47,669	44k	INFO	Losses: [2.460369825363159, 2.194075107574463, 8.44504165649414, 19.009178161621094, 1.248958945274353], step: 33800, lr: 9.909159412887068e-05
2023-03-17 01:55:48,421	44k	INFO	Train Epoch: 70 [67%]
2023-03-17 01:55:48,423	44k	INFO	Losses: [2.250486373901367, 2.3232510089874268, 9.388315200805664, 24.06303596496582, 1.0709611177444458], step: 34000, lr: 9.909159412887068e-05
2023-03-17 01:58:12,653	44k	INFO	====> Epoch: 70, cost 450.27 s
2023-03-17 01:58:57,115	44k	INFO	Train Epoch: 71 [8%]
2023-03-17 01:58:57,117	44k	INFO	Losses: [2.098215103149414, 2.404918670654297, 12.405963897705078, 19.657108306884766, 1.2732973098754883], step: 34200, lr: 9.907920767960457e-05
2023-03-17 02:01:55,786	44k	INFO	Train Epoch: 71 [49%]
2023-03-17 02:01:55,787	44k	INFO	Losses: [2.4711456298828125, 2.0794568061828613, 7.965388774871826, 15.10488510131836, 1.089398741722107], step: 34400, lr: 9.907920767960457e-05
2023-03-17 02:02:09,646	44k	INFO	Saving model and optimizer state at iteration 71 to ./logs/44k/G_34400.pth
2023-03-17 02:02:13,482	44k	INFO	Saving model and optimizer state at iteration 71 to ./logs/44k/D_34400.pth
2023-03-17 02:02:16,210	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_30400.pth
2023-03-17 02:02:16,217	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_30400.pth
2023-03-17 02:05:19,020	44k	INFO	Train Epoch: 71 [90%]
2023-03-17 02:05:19,022	44k	INFO	Losses: [2.2883424758911133, 2.2863283157348633, 9.164148330688477, 20.280336380004883, 1.2387274503707886], step: 34600, lr: 9.907920767960457e-05
2023-03-17 02:06:02,154	44k	INFO	====> Epoch: 71, cost 469.50 s
2023-03-17 02:08:28,471	44k	INFO	Train Epoch: 72 [31%]
2023-03-17 02:08:28,473	44k	INFO	Losses: [2.566708564758301, 2.31142258644104, 12.614298820495605, 18.949323654174805, 1.295151710510254], step: 34800, lr: 9.906682277864462e-05
2023-03-17 02:11:28,308	44k	INFO	Train Epoch: 72 [72%]
2023-03-17 02:11:28,309	44k	INFO	Losses: [2.516707420349121, 2.290302276611328, 7.738724231719971, 18.858633041381836, 1.1163291931152344], step: 35000, lr: 9.906682277864462e-05
2023-03-17 02:13:29,395	44k	INFO	====> Epoch: 72, cost 447.24 s
2023-03-17 02:14:36,053	44k	INFO	Train Epoch: 73 [13%]
2023-03-17 02:14:36,055	44k	INFO	Losses: [2.753389835357666, 2.287280559539795, 7.758563041687012, 18.239856719970703, 1.1111623048782349], step: 35200, lr: 9.905443942579728e-05
2023-03-17 02:14:47,943	44k	INFO	Saving model and optimizer state at iteration 73 to ./logs/44k/G_35200.pth
2023-03-17 02:14:51,336	44k	INFO	Saving model and optimizer state at iteration 73 to ./logs/44k/D_35200.pth
2023-03-17 02:14:53,801	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_31200.pth
2023-03-17 02:14:53,812	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_31200.pth
2023-03-17 02:17:56,905	44k	INFO	Train Epoch: 73 [54%]
2023-03-17 02:17:56,906	44k	INFO	Losses: [2.536832809448242, 2.287694215774536, 10.239502906799316, 20.189800262451172, 1.3937040567398071], step: 35400, lr: 9.905443942579728e-05
2023-03-17 02:20:56,202	44k	INFO	Train Epoch: 73 [95%]
2023-03-17 02:20:56,204	44k	INFO	Losses: [2.4674880504608154, 2.5104575157165527, 10.891117095947266, 20.02257537841797, 1.5259878635406494], step: 35600, lr: 9.905443942579728e-05
2023-03-17 02:21:18,812	44k	INFO	====> Epoch: 73, cost 469.42 s
2023-03-17 02:24:04,997	44k	INFO	Train Epoch: 74 [36%]
2023-03-17 02:24:04,999	44k	INFO	Losses: [2.658992290496826, 2.011537551879883, 8.290947914123535, 20.944766998291016, 1.3119019269943237], step: 35800, lr: 9.904205762086905e-05
2023-03-17 02:27:06,323	44k	INFO	Train Epoch: 74 [77%]
2023-03-17 02:27:06,324	44k	INFO	Losses: [2.4251742362976074, 2.3144357204437256, 10.027667999267578, 20.102537155151367, 1.2213525772094727], step: 36000, lr: 9.904205762086905e-05
2023-03-17 02:27:20,610	44k	INFO	Saving model and optimizer state at iteration 74 to ./logs/44k/G_36000.pth
2023-03-17 02:27:24,321	44k	INFO	Saving model and optimizer state at iteration 74 to ./logs/44k/D_36000.pth
2023-03-17 02:27:27,153	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_32000.pth
2023-03-17 02:27:27,158	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_32000.pth
2023-03-17 02:29:10,263	44k	INFO	====> Epoch: 74, cost 471.45 s
2023-03-17 02:30:39,027	44k	INFO	Train Epoch: 75 [18%]
2023-03-17 02:30:39,029	44k	INFO	Losses: [2.3514227867126465, 2.2037813663482666, 8.84196949005127, 17.636266708374023, 0.9482463598251343], step: 36200, lr: 9.902967736366644e-05
2023-03-17 02:33:38,694	44k	INFO	Train Epoch: 75 [59%]
2023-03-17 02:33:38,695	44k	INFO	Losses: [2.604753255844116, 2.139510154724121, 8.243342399597168, 19.139680862426758, 1.551486849784851], step: 36400, lr: 9.902967736366644e-05
2023-03-17 02:36:37,550	44k	INFO	====> Epoch: 75, cost 447.29 s
2023-03-17 02:36:47,116	44k	INFO	Train Epoch: 76 [0%]
2023-03-17 02:36:47,119	44k	INFO	Losses: [2.6522536277770996, 1.9894168376922607, 8.612791061401367, 16.956806182861328, 1.2624683380126953], step: 36600, lr: 9.901729865399597e-05
2023-03-17 02:39:47,033	44k	INFO	Train Epoch: 76 [41%]
2023-03-17 02:39:47,034	44k	INFO	Losses: [2.832819938659668, 1.9300906658172607, 4.573934555053711, 15.33503246307373, 1.078842282295227], step: 36800, lr: 9.901729865399597e-05
2023-03-17 02:40:00,318	44k	INFO	Saving model and optimizer state at iteration 76 to ./logs/44k/G_36800.pth
2023-03-17 02:40:03,828	44k	INFO	Saving model and optimizer state at iteration 76 to ./logs/44k/D_36800.pth
2023-03-17 02:40:05,949	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_32800.pth
2023-03-17 02:40:05,953	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_32800.pth
2023-03-17 02:43:10,219	44k	INFO	Train Epoch: 76 [82%]
2023-03-17 02:43:10,221	44k	INFO	Losses: [2.5366857051849365, 2.3596749305725098, 8.144891738891602, 18.23089027404785, 1.481116533279419], step: 37000, lr: 9.901729865399597e-05
2023-03-17 02:44:30,867	44k	INFO	====> Epoch: 76, cost 473.32 s
2023-03-17 02:46:20,092	44k	INFO	Train Epoch: 77 [23%]
2023-03-17 02:46:20,094	44k	INFO	Losses: [2.732069969177246, 2.531210422515869, 7.891313552856445, 14.55479907989502, 1.049509882926941], step: 37200, lr: 9.900492149166423e-05
2023-03-17 02:49:18,595	44k	INFO	Train Epoch: 77 [64%]
2023-03-17 02:49:18,597	44k	INFO	Losses: [2.479750633239746, 2.3029139041900635, 7.636006832122803, 16.742828369140625, 1.1809018850326538], step: 37400, lr: 9.900492149166423e-05
2023-03-17 02:51:57,636	44k	INFO	====> Epoch: 77, cost 446.77 s
2023-03-17 02:52:29,299	44k	INFO	Train Epoch: 78 [5%]
2023-03-17 02:52:29,301	44k	INFO	Losses: [2.436704635620117, 2.138270378112793, 11.176417350769043, 20.470787048339844, 1.0590966939926147], step: 37600, lr: 9.899254587647776e-05
2023-03-17 02:52:41,863	44k	INFO	Saving model and optimizer state at iteration 78 to ./logs/44k/G_37600.pth
2023-03-17 02:52:45,534	44k	INFO	Saving model and optimizer state at iteration 78 to ./logs/44k/D_37600.pth
2023-03-17 02:52:47,745	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_33600.pth
2023-03-17 02:52:47,748	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_33600.pth
2023-03-17 02:55:50,341	44k	INFO	Train Epoch: 78 [46%]
2023-03-17 02:55:50,342	44k	INFO	Losses: [2.348386526107788, 2.1494929790496826, 10.359945297241211, 18.013044357299805, 0.9645937085151672], step: 37800, lr: 9.899254587647776e-05
2023-03-17 02:58:50,043	44k	INFO	Train Epoch: 78 [87%]
2023-03-17 02:58:50,045	44k	INFO	Losses: [2.440535545349121, 2.4878768920898438, 7.8412299156188965, 18.96208381652832, 1.0696218013763428], step: 38000, lr: 9.899254587647776e-05
2023-03-17 02:59:47,374	44k	INFO	====> Epoch: 78, cost 469.74 s
2023-03-17 03:01:59,158	44k	INFO	Train Epoch: 79 [28%]
2023-03-17 03:01:59,159	44k	INFO	Losses: [2.6074976921081543, 2.1965298652648926, 6.797619819641113, 17.210399627685547, 1.206662893295288], step: 38200, lr: 9.89801718082432e-05
2023-03-17 03:04:59,637	44k	INFO	Train Epoch: 79 [69%]
2023-03-17 03:04:59,639	44k	INFO	Losses: [2.7296714782714844, 2.167003870010376, 8.956235885620117, 22.188779830932617, 1.2457859516143799], step: 38400, lr: 9.89801718082432e-05
2023-03-17 03:05:13,705	44k	INFO	Saving model and optimizer state at iteration 79 to ./logs/44k/G_38400.pth
2023-03-17 03:05:17,170	44k	INFO	Saving model and optimizer state at iteration 79 to ./logs/44k/D_38400.pth
2023-03-17 03:05:19,819	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_34400.pth
2023-03-17 03:05:19,824	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_34400.pth
2023-03-17 03:07:40,656	44k	INFO	====> Epoch: 79, cost 473.28 s
2023-03-17 03:08:32,302	44k	INFO	Train Epoch: 80 [10%]
2023-03-17 03:08:32,303	44k	INFO	Losses: [2.4296324253082275, 2.181828260421753, 8.199735641479492, 18.141691207885742, 1.3645236492156982], step: 38600, lr: 9.896779928676716e-05
2023-03-17 03:11:31,949	44k	INFO	Train Epoch: 80 [51%]
2023-03-17 03:11:31,950	44k	INFO	Losses: [2.565019369125366, 1.947805404663086, 6.14719295501709, 16.742931365966797, 1.2787712812423706], step: 38800, lr: 9.896779928676716e-05
2023-03-17 03:14:31,501	44k	INFO	Train Epoch: 80 [92%]
2023-03-17 03:14:31,503	44k	INFO	Losses: [2.384526014328003, 2.2261240482330322, 8.069917678833008, 15.260270118713379, 1.087229609489441], step: 39000, lr: 9.896779928676716e-05
2023-03-17 03:15:08,731	44k	INFO	====> Epoch: 80, cost 448.08 s
2023-03-17 03:17:41,466	44k	INFO	Train Epoch: 81 [33%]
2023-03-17 03:17:41,468	44k	INFO	Losses: [2.5365254878997803, 2.2828545570373535, 9.738179206848145, 17.017852783203125, 1.1615246534347534], step: 39200, lr: 9.895542831185631e-05
2023-03-17 03:17:54,292	44k	INFO	Saving model and optimizer state at iteration 81 to ./logs/44k/G_39200.pth
2023-03-17 03:17:57,602	44k	INFO	Saving model and optimizer state at iteration 81 to ./logs/44k/D_39200.pth
2023-03-17 03:17:59,923	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_35200.pth
2023-03-17 03:17:59,928	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_35200.pth
2023-03-17 03:21:03,852	44k	INFO	Train Epoch: 81 [74%]
2023-03-17 03:21:03,854	44k	INFO	Losses: [2.3370437622070312, 2.6323676109313965, 7.631609916687012, 19.311100006103516, 1.4604064226150513], step: 39400, lr: 9.895542831185631e-05
2023-03-17 03:22:58,064	44k	INFO	====> Epoch: 81, cost 469.33 s
2023-03-17 03:24:13,041	44k	INFO	Train Epoch: 82 [15%]
2023-03-17 03:24:13,043	44k	INFO	Losses: [2.4250082969665527, 2.5629730224609375, 10.725067138671875, 19.095808029174805, 1.2458372116088867], step: 39600, lr: 9.894305888331732e-05
2023-03-17 03:27:11,113	44k	INFO	Train Epoch: 82 [56%]
2023-03-17 03:27:11,115	44k	INFO	Losses: [2.3638675212860107, 2.3367221355438232, 9.018434524536133, 16.980154037475586, 0.9602565765380859], step: 39800, lr: 9.894305888331732e-05
2023-03-17 03:30:10,749	44k	INFO	Train Epoch: 82 [97%]
2023-03-17 03:30:10,750	44k	INFO	Losses: [2.2778704166412354, 2.4481430053710938, 10.221363067626953, 18.69451141357422, 1.3225191831588745], step: 40000, lr: 9.894305888331732e-05
2023-03-17 03:30:26,681	44k	INFO	Saving model and optimizer state at iteration 82 to ./logs/44k/G_40000.pth
2023-03-17 03:30:30,695	44k	INFO	Saving model and optimizer state at iteration 82 to ./logs/44k/D_40000.pth
2023-03-17 03:30:33,421	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_36000.pth
2023-03-17 03:30:33,425	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_36000.pth
2023-03-17 03:30:51,191	44k	INFO	====> Epoch: 82, cost 473.13 s
2023-03-17 03:33:42,705	44k	INFO	Train Epoch: 83 [38%]
2023-03-17 03:33:42,707	44k	INFO	Losses: [2.3225724697113037, 2.6144087314605713, 9.680810928344727, 19.193286895751953, 1.3272085189819336], step: 40200, lr: 9.89306910009569e-05
2023-03-17 03:36:41,212	44k	INFO	Train Epoch: 83 [79%]
2023-03-17 03:36:41,214	44k	INFO	Losses: [2.8023276329040527, 1.7692387104034424, 3.868022918701172, 11.095769882202148, 1.1627871990203857], step: 40400, lr: 9.89306910009569e-05
2023-03-17 03:38:13,624	44k	INFO	====> Epoch: 83, cost 442.43 s
2023-03-17 03:39:48,827	44k	INFO	Train Epoch: 84 [20%]
2023-03-17 03:39:48,829	44k	INFO	Losses: [2.4232993125915527, 2.1706693172454834, 8.57836627960205, 16.38633155822754, 1.011132001876831], step: 40600, lr: 9.891832466458178e-05
2023-03-17 03:42:47,497	44k	INFO	Train Epoch: 84 [61%]
2023-03-17 03:42:47,499	44k	INFO	Losses: [2.437356472015381, 2.395631790161133, 6.806859970092773, 17.000635147094727, 1.0631414651870728], step: 40800, lr: 9.891832466458178e-05
2023-03-17 03:43:01,457	44k	INFO	Saving model and optimizer state at iteration 84 to ./logs/44k/G_40800.pth
2023-03-17 03:43:04,934	44k	INFO	Saving model and optimizer state at iteration 84 to ./logs/44k/D_40800.pth
2023-03-17 03:43:07,107	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_36800.pth
2023-03-17 03:43:07,141	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_36800.pth
2023-03-17 03:46:02,396	44k	INFO	====> Epoch: 84, cost 468.77 s
2023-03-17 03:46:18,559	44k	INFO	Train Epoch: 85 [2%]
2023-03-17 03:46:18,561	44k	INFO	Losses: [2.6078553199768066, 1.9820866584777832, 8.470038414001465, 18.487531661987305, 1.0303090810775757], step: 41000, lr: 9.89059598739987e-05
2023-03-17 03:49:18,271	44k	INFO	Train Epoch: 85 [43%]
2023-03-17 03:49:18,273	44k	INFO	Losses: [2.4907898902893066, 2.1003904342651367, 6.9233479499816895, 18.588647842407227, 1.3046303987503052], step: 41200, lr: 9.89059598739987e-05
2023-03-17 03:52:17,447	44k	INFO	Train Epoch: 85 [84%]
2023-03-17 03:52:17,449	44k	INFO	Losses: [2.356060743331909, 2.574362277984619, 8.9258394241333, 20.44322395324707, 1.1470999717712402], step: 41400, lr: 9.89059598739987e-05
2023-03-17 03:53:29,481	44k	INFO	====> Epoch: 85, cost 447.08 s
2023-03-17 03:55:26,290	44k	INFO	Train Epoch: 86 [25%]
2023-03-17 03:55:26,292	44k	INFO	Losses: [2.4835870265960693, 2.245039701461792, 9.411276817321777, 19.828161239624023, 1.3884118795394897], step: 41600, lr: 9.889359662901445e-05
2023-03-17 03:55:40,654	44k	INFO	Saving model and optimizer state at iteration 86 to ./logs/44k/G_41600.pth
2023-03-17 03:55:44,704	44k	INFO	Saving model and optimizer state at iteration 86 to ./logs/44k/D_41600.pth
2023-03-17 03:55:47,071	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_37600.pth
2023-03-17 03:55:47,076	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_37600.pth
2023-03-17 03:58:50,342	44k	INFO	Train Epoch: 86 [66%]
2023-03-17 03:58:50,344	44k	INFO	Losses: [2.6028146743774414, 2.200443744659424, 9.620811462402344, 18.883745193481445, 1.307152271270752], step: 41800, lr: 9.889359662901445e-05
2023-03-17 04:01:20,048	44k	INFO	====> Epoch: 86, cost 470.57 s
2023-03-17 04:01:57,992	44k	INFO	Train Epoch: 87 [7%]
2023-03-17 04:01:57,994	44k	INFO	Losses: [2.569676160812378, 1.9188189506530762, 6.053400993347168, 13.968046188354492, 0.8926028609275818], step: 42000, lr: 9.888123492943583e-05
2023-03-17 04:04:58,307	44k	INFO	Train Epoch: 87 [48%]
2023-03-17 04:04:58,308	44k	INFO	Losses: [2.347348213195801, 2.212087392807007, 9.778621673583984, 22.348703384399414, 1.5022640228271484], step: 42200, lr: 9.888123492943583e-05
2023-03-17 04:07:56,804	44k	INFO	Train Epoch: 87 [89%]
2023-03-17 04:07:56,805	44k	INFO	Losses: [2.340458869934082, 2.4751017093658447, 10.095475196838379, 18.85011100769043, 1.552014946937561], step: 42400, lr: 9.888123492943583e-05
2023-03-17 04:08:09,952	44k	INFO	Saving model and optimizer state at iteration 87 to ./logs/44k/G_42400.pth
2023-03-17 04:08:13,425	44k	INFO	Saving model and optimizer state at iteration 87 to ./logs/44k/D_42400.pth
2023-03-17 04:08:15,716	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_38400.pth
2023-03-17 04:08:15,975	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_38400.pth
2023-03-17 04:09:10,189	44k	INFO	====> Epoch: 87, cost 470.14 s
2023-03-17 04:11:28,030	44k	INFO	Train Epoch: 88 [30%]
2023-03-17 04:11:28,031	44k	INFO	Losses: [2.676636219024658, 1.9538204669952393, 6.597738742828369, 20.08915138244629, 1.1037224531173706], step: 42600, lr: 9.886887477506964e-05
2023-03-17 04:14:26,167	44k	INFO	Train Epoch: 88 [70%]
2023-03-17 04:14:26,169	44k	INFO	Losses: [2.546870231628418, 2.0122125148773193, 9.648585319519043, 19.564075469970703, 1.2506898641586304], step: 42800, lr: 9.886887477506964e-05
2023-03-17 04:16:36,631	44k	INFO	====> Epoch: 88, cost 446.44 s
2023-03-17 04:17:36,705	44k	INFO	Train Epoch: 89 [11%]
2023-03-17 04:17:36,707	44k	INFO	Losses: [2.4755544662475586, 2.139252185821533, 7.932869911193848, 14.89391803741455, 1.3280549049377441], step: 43000, lr: 9.885651616572276e-05
2023-03-17 04:20:36,515	44k	INFO	Train Epoch: 89 [52%]
2023-03-17 04:20:36,517	44k	INFO	Losses: [2.177128314971924, 2.8584585189819336, 9.482152938842773, 14.80997371673584, 1.2164100408554077], step: 43200, lr: 9.885651616572276e-05
2023-03-17 04:20:49,808	44k	INFO	Saving model and optimizer state at iteration 89 to ./logs/44k/G_43200.pth
2023-03-17 04:20:53,232	44k	INFO	Saving model and optimizer state at iteration 89 to ./logs/44k/D_43200.pth
2023-03-17 04:20:55,432	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_39200.pth
2023-03-17 04:20:55,438	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_39200.pth
2023-03-17 04:23:59,846	44k	INFO	Train Epoch: 89 [93%]
2023-03-17 04:23:59,848	44k	INFO	Losses: [2.444333553314209, 2.3706679344177246, 8.763633728027344, 21.718589782714844, 1.1112216711044312], step: 43400, lr: 9.885651616572276e-05
2023-03-17 04:24:28,612	44k	INFO	====> Epoch: 89, cost 471.98 s
2023-03-17 04:27:07,627	44k	INFO	Train Epoch: 90 [34%]
2023-03-17 04:27:07,630	44k	INFO	Losses: [2.5771193504333496, 2.3104891777038574, 8.878454208374023, 16.915977478027344, 1.4198206663131714], step: 43600, lr: 9.884415910120204e-05
2023-03-17 04:30:05,300	44k	INFO	Train Epoch: 90 [75%]
2023-03-17 04:30:05,302	44k	INFO	Losses: [2.5268924236297607, 1.997037410736084, 9.072538375854492, 23.166839599609375, 1.556585669517517], step: 43800, lr: 9.884415910120204e-05
2023-03-17 04:31:54,607	44k	INFO	====> Epoch: 90, cost 445.99 s
2023-03-17 04:33:16,725	44k	INFO	Train Epoch: 91 [16%]
2023-03-17 04:33:16,727	44k	INFO	Losses: [2.29752254486084, 2.5615763664245605, 8.96706485748291, 17.70204734802246, 0.9858293533325195], step: 44000, lr: 9.883180358131438e-05
2023-03-17 04:33:29,138	44k	INFO	Saving model and optimizer state at iteration 91 to ./logs/44k/G_44000.pth
2023-03-17 04:33:32,546	44k	INFO	Saving model and optimizer state at iteration 91 to ./logs/44k/D_44000.pth
2023-03-17 04:33:34,745	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_40000.pth
2023-03-17 04:33:34,751	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_40000.pth
2023-03-17 04:36:39,521	44k	INFO	Train Epoch: 91 [57%]
2023-03-17 04:36:39,523	44k	INFO	Losses: [2.4972803592681885, 2.4023308753967285, 9.37154483795166, 21.22870445251465, 1.2557520866394043], step: 44200, lr: 9.883180358131438e-05
2023-03-17 07:50:02,582	44k	INFO	{'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 31415926, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': True, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 5}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'爱梅斯': 0, '花凛': 1, '佩可莉姆': 2, '咲恋': 3, '凯露': 4, '可可萝': 5}, 'model_dir': './logs/44k'}
2023-03-17 07:50:03,181	44k	WARNING	git hash values are different. 55dd086f(saved) != d54bf592(current)
2023-03-17 07:50:21,064	44k	INFO	Loaded checkpoint './logs/44k/G_44000.pth' (iteration 91)
2023-03-17 07:50:26,445	44k	INFO	Loaded checkpoint './logs/44k/D_44000.pth' (iteration 91)
2023-03-17 07:52:03,109	44k	INFO	Train Epoch: 91 [16%]
2023-03-17 07:52:03,110	44k	INFO	Losses: [2.668210029602051, 1.92995023727417, 5.357793807983398, 15.178995132446289, 0.6955949068069458], step: 44000, lr: 9.881944960586671e-05
2023-03-17 07:52:20,139	44k	INFO	Saving model and optimizer state at iteration 91 to ./logs/44k/G_44000.pth
2023-03-17 07:52:24,729	44k	INFO	Saving model and optimizer state at iteration 91 to ./logs/44k/D_44000.pth
2023-03-17 07:55:48,253	44k	INFO	Train Epoch: 91 [57%]
2023-03-17 07:55:48,254	44k	INFO	Losses: [2.3036863803863525, 2.774142026901245, 8.110414505004883, 19.43911361694336, 1.1284072399139404], step: 44200, lr: 9.881944960586671e-05
2023-03-17 07:59:02,562	44k	INFO	Train Epoch: 91 [98%]
2023-03-17 07:59:02,563	44k	INFO	Losses: [2.331641912460327, 2.3361172676086426, 9.700390815734863, 20.05875587463379, 1.1746416091918945], step: 44400, lr: 9.881944960586671e-05
2023-03-17 07:59:13,049	44k	INFO	====> Epoch: 91, cost 550.47 s
2023-03-17 08:02:16,470	44k	INFO	Train Epoch: 92 [39%]
2023-03-17 08:02:16,472	44k	INFO	Losses: [2.5861916542053223, 2.151218891143799, 7.270633220672607, 17.958580017089844, 1.1644787788391113], step: 44600, lr: 9.880709717466598e-05
2023-03-17 08:05:19,316	44k	INFO	Train Epoch: 92 [80%]
2023-03-17 08:05:19,318	44k	INFO	Losses: [2.1906516551971436, 2.6002354621887207, 9.858673095703125, 18.635112762451172, 1.2691702842712402], step: 44800, lr: 9.880709717466598e-05
2023-03-17 08:05:35,343	44k	INFO	Saving model and optimizer state at iteration 92 to ./logs/44k/G_44800.pth
2023-03-17 08:05:38,982	44k	INFO	Saving model and optimizer state at iteration 92 to ./logs/44k/D_44800.pth
2023-03-17 08:05:41,301	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_40800.pth
2023-03-17 08:05:41,303	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_40800.pth
2023-03-17 08:07:12,325	44k	INFO	====> Epoch: 92, cost 479.28 s
2023-03-17 08:08:55,212	44k	INFO	Train Epoch: 93 [21%]
2023-03-17 08:08:55,213	44k	INFO	Losses: [2.3211071491241455, 2.2624714374542236, 10.817269325256348, 20.754608154296875, 1.2969350814819336], step: 45000, lr: 9.879474628751914e-05
2023-03-17 08:11:58,655	44k	INFO	Train Epoch: 93 [62%]
2023-03-17 08:11:58,656	44k	INFO	Losses: [2.5462288856506348, 2.5714659690856934, 8.31057071685791, 17.768707275390625, 1.3567047119140625], step: 45200, lr: 9.879474628751914e-05
2023-03-17 08:14:46,255	44k	INFO	====> Epoch: 93, cost 453.93 s
2023-03-17 08:15:11,822	44k	INFO	Train Epoch: 94 [3%]
2023-03-17 08:15:11,824	44k	INFO	Losses: [2.829561710357666, 2.2598392963409424, 7.7262773513793945, 19.109375, 1.2395902872085571], step: 45400, lr: 9.87823969442332e-05
2023-03-17 08:18:15,898	44k	INFO	Train Epoch: 94 [44%]
2023-03-17 08:18:15,900	44k	INFO	Losses: [2.2507426738739014, 2.435422897338867, 7.702707767486572, 16.130598068237305, 1.1940419673919678], step: 45600, lr: 9.87823969442332e-05
2023-03-17 08:18:29,775	44k	INFO	Saving model and optimizer state at iteration 94 to ./logs/44k/G_45600.pth
2023-03-17 08:18:33,481	44k	INFO	Saving model and optimizer state at iteration 94 to ./logs/44k/D_45600.pth
2023-03-17 08:18:35,791	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_41600.pth
2023-03-17 08:18:35,793	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_41600.pth
2023-03-17 08:21:42,923	44k	INFO	Train Epoch: 94 [85%]
2023-03-17 08:21:42,925	44k	INFO	Losses: [2.312337875366211, 2.2897658348083496, 10.366857528686523, 18.517322540283203, 1.291533350944519], step: 45800, lr: 9.87823969442332e-05
2023-03-17 08:22:48,841	44k	INFO	====> Epoch: 94, cost 482.59 s
2023-03-17 08:24:56,231	44k	INFO	Train Epoch: 95 [26%]
2023-03-17 08:24:56,233	44k	INFO	Losses: [2.267899513244629, 2.2353272438049316, 10.288509368896484, 22.099145889282227, 1.1590378284454346], step: 46000, lr: 9.877004914461517e-05
2023-03-17 08:27:58,436	44k	INFO	Train Epoch: 95 [67%]
2023-03-17 08:27:58,437	44k	INFO	Losses: [2.557178258895874, 2.3174993991851807, 11.835665702819824, 21.42255210876465, 1.1371198892593384], step: 46200, lr: 9.877004914461517e-05
2023-03-17 08:30:26,287	44k	INFO	====> Epoch: 95, cost 457.45 s
2023-03-17 08:31:12,110	44k	INFO	Train Epoch: 96 [8%]
2023-03-17 08:31:12,113	44k	INFO	Losses: [2.6272759437561035, 2.25323486328125, 8.6062593460083, 17.728918075561523, 1.0275278091430664], step: 46400, lr: 9.875770288847208e-05
2023-03-17 08:31:25,822	44k	INFO	Saving model and optimizer state at iteration 96 to ./logs/44k/G_46400.pth
2023-03-17 08:31:30,791	44k	INFO	Saving model and optimizer state at iteration 96 to ./logs/44k/D_46400.pth
2023-03-17 08:31:33,280	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_42400.pth
2023-03-17 08:31:33,282	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_42400.pth
2023-03-17 08:34:40,788	44k	INFO	Train Epoch: 96 [49%]
2023-03-17 08:34:40,790	44k	INFO	Losses: [2.6838648319244385, 2.2486367225646973, 7.324362277984619, 17.17033576965332, 1.053290605545044], step: 46600, lr: 9.875770288847208e-05
2023-03-17 08:37:45,331	44k	INFO	Train Epoch: 96 [90%]
2023-03-17 08:37:45,333	44k	INFO	Losses: [2.367755174636841, 2.3769948482513428, 10.315049171447754, 19.893564224243164, 1.3874090909957886], step: 46800, lr: 9.875770288847208e-05
2023-03-17 08:38:29,242	44k	INFO	====> Epoch: 96, cost 482.96 s
2023-03-17 08:40:57,751	44k	INFO	Train Epoch: 97 [31%]
2023-03-17 08:40:57,753	44k	INFO	Losses: [2.4361391067504883, 2.2059686183929443, 10.056574821472168, 21.73434066772461, 1.358241319656372], step: 47000, lr: 9.874535817561101e-05
2023-03-17 08:43:59,970	44k	INFO	Train Epoch: 97 [72%]
2023-03-17 08:43:59,972	44k	INFO	Losses: [2.5105888843536377, 2.2866930961608887, 8.014640808105469, 20.10850715637207, 1.1686780452728271], step: 47200, lr: 9.874535817561101e-05
2023-03-17 08:44:15,348	44k	INFO	Saving model and optimizer state at iteration 97 to ./logs/44k/G_47200.pth
2023-03-17 08:44:20,317	44k	INFO	Saving model and optimizer state at iteration 97 to ./logs/44k/D_47200.pth
2023-03-17 08:44:22,899	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_43200.pth
2023-03-17 08:44:22,903	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_43200.pth
2023-03-17 08:46:30,539	44k	INFO	====> Epoch: 97, cost 481.30 s
2023-03-17 08:47:38,289	44k	INFO	Train Epoch: 98 [13%]
2023-03-17 08:47:38,290	44k	INFO	Losses: [2.544938802719116, 2.0673418045043945, 8.76973819732666, 18.828649520874023, 1.1368085145950317], step: 47400, lr: 9.873301500583906e-05
2023-03-17 08:50:40,336	44k	INFO	Train Epoch: 98 [54%]
2023-03-17 08:50:40,338	44k	INFO	Losses: [2.452686309814453, 2.483546733856201, 9.027301788330078, 18.327295303344727, 1.3026354312896729], step: 47600, lr: 9.873301500583906e-05
2023-03-17 08:53:44,429	44k	INFO	Train Epoch: 98 [95%]
2023-03-17 08:53:44,431	44k	INFO	Losses: [2.4441497325897217, 2.1515862941741943, 11.609452247619629, 20.753938674926758, 1.2595906257629395], step: 47800, lr: 9.873301500583906e-05
2023-03-17 08:54:05,998	44k	INFO	====> Epoch: 98, cost 455.46 s
2023-03-17 08:56:58,630	44k	INFO	Train Epoch: 99 [36%]
2023-03-17 08:56:58,632	44k	INFO	Losses: [2.5482840538024902, 2.185764789581299, 6.734715938568115, 16.002769470214844, 0.9853835105895996], step: 48000, lr: 9.872067337896332e-05
2023-03-17 08:57:13,863	44k	INFO	Saving model and optimizer state at iteration 99 to ./logs/44k/G_48000.pth
2023-03-17 08:57:17,771	44k	INFO	Saving model and optimizer state at iteration 99 to ./logs/44k/D_48000.pth
2023-03-17 08:57:19,990	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_44000.pth
2023-03-17 08:57:20,052	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_44000.pth
2023-03-17 09:00:29,111	44k	INFO	Train Epoch: 99 [77%]
2023-03-17 09:00:29,113	44k	INFO	Losses: [2.4988951683044434, 2.22397780418396, 7.070035934448242, 18.890058517456055, 1.212518334388733], step: 48200, lr: 9.872067337896332e-05
2023-03-17 09:02:12,081	44k	INFO	====> Epoch: 99, cost 486.08 s
2023-03-17 09:03:41,565	44k	INFO	Train Epoch: 100 [18%]
2023-03-17 09:03:41,567	44k	INFO	Losses: [2.319136142730713, 2.3721981048583984, 7.680482864379883, 17.073713302612305, 1.0361475944519043], step: 48400, lr: 9.870833329479095e-05
2023-03-17 09:06:44,599	44k	INFO	Train Epoch: 100 [59%]
2023-03-17 09:06:44,601	44k	INFO	Losses: [2.6347293853759766, 2.3255438804626465, 7.423192501068115, 18.45656394958496, 1.353227972984314], step: 48600, lr: 9.870833329479095e-05
2023-03-17 09:09:47,604	44k	INFO	====> Epoch: 100, cost 455.52 s
2023-03-17 09:09:58,694	44k	INFO	Train Epoch: 101 [0%]
2023-03-17 09:09:58,696	44k	INFO	Losses: [2.217794179916382, 2.4163689613342285, 9.86723518371582, 17.803302764892578, 1.3338123559951782], step: 48800, lr: 9.86959947531291e-05
2023-03-17 09:10:12,580	44k	INFO	Saving model and optimizer state at iteration 101 to ./logs/44k/G_48800.pth
2023-03-17 09:10:16,815	44k	INFO	Saving model and optimizer state at iteration 101 to ./logs/44k/D_48800.pth
2023-03-17 09:10:19,010	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_44800.pth
2023-03-17 09:10:19,018	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_44800.pth
2023-03-17 09:13:25,683	44k	INFO	Train Epoch: 101 [41%]
2023-03-17 09:13:25,685	44k	INFO	Losses: [2.6170883178710938, 2.0905160903930664, 8.067964553833008, 20.103612899780273, 1.2798022031784058], step: 49000, lr: 9.86959947531291e-05
2023-03-17 09:16:29,109	44k	INFO	Train Epoch: 101 [82%]
2023-03-17 09:16:29,111	44k	INFO	Losses: [2.6155846118927, 2.0383036136627197, 9.363716125488281, 16.237890243530273, 1.026833176612854], step: 49200, lr: 9.86959947531291e-05
2023-03-17 09:17:49,121	44k	INFO	====> Epoch: 101, cost 481.52 s
2023-03-17 09:19:41,943	44k	INFO	Train Epoch: 102 [23%]
2023-03-17 09:19:41,946	44k	INFO	Losses: [2.4634878635406494, 2.214381694793701, 9.180997848510742, 19.21164321899414, 1.177732229232788], step: 49400, lr: 9.868365775378495e-05
2023-03-17 09:22:45,388	44k	INFO	Train Epoch: 102 [64%]
2023-03-17 09:22:45,390	44k	INFO	Losses: [2.6667304039001465, 2.1756675243377686, 9.323272705078125, 20.198108673095703, 1.140903353691101], step: 49600, lr: 9.868365775378495e-05
2023-03-17 09:23:01,274	44k	INFO	Saving model and optimizer state at iteration 102 to ./logs/44k/G_49600.pth
2023-03-17 09:23:06,051	44k	INFO	Saving model and optimizer state at iteration 102 to ./logs/44k/D_49600.pth
2023-03-17 09:23:08,269	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_45600.pth
2023-03-17 09:23:08,273	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_45600.pth
2023-03-17 09:25:53,214	44k	INFO	====> Epoch: 102, cost 484.09 s
2023-03-17 09:26:25,238	44k	INFO	Train Epoch: 103 [5%]
2023-03-17 09:26:25,241	44k	INFO	Losses: [2.367825984954834, 2.233006000518799, 9.9767427444458, 19.749467849731445, 1.0239226818084717], step: 49800, lr: 9.867132229656573e-05
2023-03-17 09:29:28,189	44k	INFO	Train Epoch: 103 [46%]
2023-03-17 09:29:28,190	44k	INFO	Losses: [2.686293601989746, 1.9158427715301514, 5.075340270996094, 14.639267921447754, 1.110611915588379], step: 50000, lr: 9.867132229656573e-05
2023-03-17 09:32:31,443	44k	INFO	Train Epoch: 103 [87%]
2023-03-17 09:32:31,445	44k	INFO	Losses: [2.400881767272949, 2.129938840866089, 10.987777709960938, 21.766904830932617, 1.4266135692596436], step: 50200, lr: 9.867132229656573e-05
2023-03-17 09:33:30,536	44k	INFO	====> Epoch: 103, cost 457.32 s
2023-03-17 09:35:42,821	44k	INFO	Train Epoch: 104 [28%]
2023-03-17 09:35:42,823	44k	INFO	Losses: [2.5903377532958984, 2.485609531402588, 9.927873611450195, 21.513835906982422, 1.0367764234542847], step: 50400, lr: 9.865898838127865e-05
2023-03-17 09:35:55,479	44k	INFO	Saving model and optimizer state at iteration 104 to ./logs/44k/G_50400.pth
2023-03-17 09:35:58,955	44k	INFO	Saving model and optimizer state at iteration 104 to ./logs/44k/D_50400.pth
2023-03-17 09:36:01,406	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_46400.pth
2023-03-17 09:36:01,412	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_46400.pth
2023-03-17 09:39:05,246	44k	INFO	Train Epoch: 104 [69%]
2023-03-17 09:39:05,248	44k	INFO	Losses: [2.6484134197235107, 2.0377111434936523, 8.031493186950684, 16.828983306884766, 1.4036097526550293], step: 50600, lr: 9.865898838127865e-05
2023-03-17 09:41:22,555	44k	INFO	====> Epoch: 104, cost 472.02 s
2023-03-17 09:42:16,156	44k	INFO	Train Epoch: 105 [10%]
2023-03-17 09:42:16,158	44k	INFO	Losses: [2.6690385341644287, 1.9996957778930664, 6.172468662261963, 14.816864967346191, 1.0503015518188477], step: 50800, lr: 9.864665600773098e-05
2023-03-17 09:45:16,324	44k	INFO	Train Epoch: 105 [51%]
2023-03-17 09:45:16,326	44k	INFO	Losses: [2.5709991455078125, 2.057671546936035, 6.696719646453857, 16.80034828186035, 1.3635740280151367], step: 51000, lr: 9.864665600773098e-05
2023-03-17 09:48:17,973	44k	INFO	Train Epoch: 105 [92%]
2023-03-17 09:48:17,976	44k	INFO	Losses: [2.5135085582733154, 2.184070110321045, 7.171174049377441, 17.870094299316406, 1.165198802947998], step: 51200, lr: 9.864665600773098e-05
2023-03-17 09:48:34,247	44k	INFO	Saving model and optimizer state at iteration 105 to ./logs/44k/G_51200.pth
2023-03-17 09:48:38,461	44k	INFO	Saving model and optimizer state at iteration 105 to ./logs/44k/D_51200.pth
2023-03-17 09:48:40,903	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_47200.pth
2023-03-17 09:48:40,906	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_47200.pth
2023-03-17 09:49:21,492	44k	INFO	====> Epoch: 105, cost 478.94 s
2023-03-17 09:51:54,070	44k	INFO	Train Epoch: 106 [33%]
2023-03-17 09:51:54,072	44k	INFO	Losses: [2.560924530029297, 2.3064897060394287, 6.469823360443115, 16.36894416809082, 1.4065260887145996], step: 51400, lr: 9.863432517573002e-05
2023-03-17 09:54:54,725	44k	INFO	Train Epoch: 106 [74%]
2023-03-17 09:54:54,727	44k	INFO	Losses: [2.396714687347412, 2.354140281677246, 11.342338562011719, 19.688703536987305, 1.2030150890350342], step: 51600, lr: 9.863432517573002e-05
2023-03-17 09:56:51,115	44k	INFO	====> Epoch: 106, cost 449.62 s
2023-03-17 09:58:06,391	44k	INFO	Train Epoch: 107 [15%]
2023-03-17 09:58:06,393	44k	INFO	Losses: [2.5922422409057617, 2.3076748847961426, 8.031786918640137, 18.143476486206055, 1.2532134056091309], step: 51800, lr: 9.862199588508305e-05
2023-03-17 10:01:08,318	44k	INFO	Train Epoch: 107 [56%]
2023-03-17 10:01:08,320	44k	INFO	Losses: [2.6791019439697266, 2.1522114276885986, 9.648640632629395, 20.026235580444336, 1.3821077346801758], step: 52000, lr: 9.862199588508305e-05
2023-03-17 10:01:22,650	44k	INFO	Saving model and optimizer state at iteration 107 to ./logs/44k/G_52000.pth
2023-03-17 10:01:26,344	44k	INFO	Saving model and optimizer state at iteration 107 to ./logs/44k/D_52000.pth
2023-03-17 10:01:28,482	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_48000.pth
2023-03-17 10:01:28,486	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_48000.pth
2023-03-17 10:04:33,626	44k	INFO	Train Epoch: 107 [97%]
2023-03-17 10:04:33,628	44k	INFO	Losses: [2.6326851844787598, 2.200147867202759, 9.564226150512695, 18.618101119995117, 1.151081919670105], step: 52200, lr: 9.862199588508305e-05
2023-03-17 10:04:49,148	44k	INFO	====> Epoch: 107, cost 478.03 s
2023-03-17 10:07:45,458	44k	INFO	Train Epoch: 108 [38%]
2023-03-17 10:07:45,460	44k	INFO	Losses: [2.451735734939575, 2.198728322982788, 12.083086967468262, 22.490345001220703, 1.44159734249115], step: 52400, lr: 9.86096681355974e-05
2023-03-17 10:10:47,965	44k	INFO	Train Epoch: 108 [79%]
2023-03-17 10:10:47,967	44k	INFO	Losses: [2.304168224334717, 2.5060200691223145, 11.587242126464844, 21.356895446777344, 1.1251931190490723], step: 52600, lr: 9.86096681355974e-05
2023-03-17 10:12:22,159	44k	INFO	====> Epoch: 108, cost 453.01 s
2023-03-17 10:13:59,628	44k	INFO	Train Epoch: 109 [20%]
2023-03-17 10:13:59,630	44k	INFO	Losses: [2.473924398422241, 2.248304605484009, 8.610283851623535, 17.419002532958984, 1.2463079690933228], step: 52800, lr: 9.859734192708044e-05
2023-03-17 10:14:13,778	44k	INFO	Saving model and optimizer state at iteration 109 to ./logs/44k/G_52800.pth
2023-03-17 10:14:17,775	44k	INFO	Saving model and optimizer state at iteration 109 to ./logs/44k/D_52800.pth
2023-03-17 10:14:20,424	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_48800.pth
2023-03-17 10:14:20,427	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_48800.pth
2023-03-17 10:17:25,906	44k	INFO	Train Epoch: 109 [61%]
2023-03-17 10:17:25,909	44k	INFO	Losses: [2.534329414367676, 1.9661431312561035, 9.9132661819458, 20.506990432739258, 1.1420259475708008], step: 53000, lr: 9.859734192708044e-05
2023-03-17 10:20:20,533	44k	INFO	====> Epoch: 109, cost 478.37 s
2023-03-17 10:20:37,465	44k	INFO	Train Epoch: 110 [2%]
2023-03-17 10:20:37,467	44k	INFO	Losses: [2.624727725982666, 1.959425926208496, 7.974331378936768, 15.811946868896484, 1.21015202999115], step: 53200, lr: 9.858501725933955e-05
2023-03-17 10:23:41,338	44k	INFO	Train Epoch: 110 [43%]
2023-03-17 10:23:41,340	44k	INFO	Losses: [2.393941879272461, 2.3581132888793945, 10.60067367553711, 20.834491729736328, 1.0546199083328247], step: 53400, lr: 9.858501725933955e-05
2023-03-17 10:26:44,466	44k	INFO	Train Epoch: 110 [84%]
2023-03-17 10:26:44,468	44k	INFO	Losses: [2.5766501426696777, 2.014758586883545, 6.656016826629639, 18.496519088745117, 1.2025514841079712], step: 53600, lr: 9.858501725933955e-05
2023-03-17 10:26:59,220	44k	INFO	Saving model and optimizer state at iteration 110 to ./logs/44k/G_53600.pth
2023-03-17 10:27:02,726	44k	INFO	Saving model and optimizer state at iteration 110 to ./logs/44k/D_53600.pth
2023-03-17 10:27:05,149	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_49600.pth
2023-03-17 10:27:05,157	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_49600.pth
2023-03-17 10:28:21,979	44k	INFO	====> Epoch: 110, cost 481.45 s
2023-03-17 10:30:22,895	44k	INFO	Train Epoch: 111 [25%]
2023-03-17 10:30:22,897	44k	INFO	Losses: [2.416818141937256, 2.4272170066833496, 9.014447212219238, 20.61700439453125, 1.18415367603302], step: 53800, lr: 9.857269413218213e-05
2023-03-17 10:33:26,685	44k	INFO	Train Epoch: 111 [66%]
2023-03-17 10:33:26,687	44k	INFO	Losses: [2.7063441276550293, 2.2111012935638428, 9.299739837646484, 20.346240997314453, 0.9914289116859436], step: 54000, lr: 9.857269413218213e-05
2023-03-17 10:36:01,370	44k	INFO	====> Epoch: 111, cost 459.39 s
2023-03-17 10:36:39,091	44k	INFO	Train Epoch: 112 [7%]
2023-03-17 10:36:39,094	44k	INFO	Losses: [2.728736400604248, 2.0638718605041504, 7.160993576049805, 13.844667434692383, 1.0313878059387207], step: 54200, lr: 9.85603725454156e-05
2023-03-17 10:39:43,047	44k	INFO	Train Epoch: 112 [48%]
2023-03-17 10:39:43,049	44k	INFO	Losses: [2.389542818069458, 2.3154985904693604, 10.85708999633789, 21.832353591918945, 1.4632632732391357], step: 54400, lr: 9.85603725454156e-05
2023-03-17 10:39:57,768	44k	INFO	Saving model and optimizer state at iteration 112 to ./logs/44k/G_54400.pth
2023-03-17 10:40:02,542	44k	INFO	Saving model and optimizer state at iteration 112 to ./logs/44k/D_54400.pth
2023-03-17 10:40:05,445	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_50400.pth
2023-03-17 10:40:05,450	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_50400.pth
2023-03-17 10:43:12,137	44k	INFO	Train Epoch: 112 [89%]
2023-03-17 10:43:12,139	44k	INFO	Losses: [2.2183568477630615, 2.409961223602295, 10.971826553344727, 19.66034507751465, 1.3942534923553467], step: 54600, lr: 9.85603725454156e-05
2023-03-17 10:44:02,918	44k	INFO	====> Epoch: 112, cost 481.55 s
2023-03-17 10:46:24,796	44k	INFO	Train Epoch: 113 [30%]
2023-03-17 10:46:24,798	44k	INFO	Losses: [2.4191715717315674, 2.3714749813079834, 10.134894371032715, 21.89127540588379, 1.0536304712295532], step: 54800, lr: 9.854805249884741e-05
2023-03-17 10:49:29,420	44k	INFO	Train Epoch: 113 [70%]
2023-03-17 10:49:29,422	44k	INFO	Losses: [2.461608648300171, 2.305603504180908, 12.661972045898438, 21.27015495300293, 1.138624668121338], step: 55000, lr: 9.854805249884741e-05
2023-03-17 10:51:41,881	44k	INFO	====> Epoch: 113, cost 458.96 s
2023-03-17 10:52:42,466	44k	INFO	Train Epoch: 114 [11%]
2023-03-17 10:52:42,468	44k	INFO	Losses: [2.429858922958374, 2.3441522121429443, 9.194158554077148, 17.314563751220703, 0.9205238819122314], step: 55200, lr: 9.853573399228505e-05
2023-03-17 10:52:56,441	44k	INFO	Saving model and optimizer state at iteration 114 to ./logs/44k/G_55200.pth
2023-03-17 10:53:00,033	44k	INFO	Saving model and optimizer state at iteration 114 to ./logs/44k/D_55200.pth
2023-03-17 10:53:02,412	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_51200.pth
2023-03-17 10:53:02,419	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_51200.pth
2023-03-17 10:56:10,211	44k	INFO	Train Epoch: 114 [52%]
2023-03-17 10:56:10,212	44k	INFO	Losses: [2.5370161533355713, 2.0080819129943848, 7.957099437713623, 15.610247611999512, 0.9361606240272522], step: 55400, lr: 9.853573399228505e-05
2023-03-17 10:59:12,801	44k	INFO	Train Epoch: 114 [93%]
2023-03-17 10:59:12,803	44k	INFO	Losses: [2.563983917236328, 2.1723594665527344, 10.448431015014648, 20.848648071289062, 1.1929681301116943], step: 55600, lr: 9.853573399228505e-05
2023-03-17 10:59:43,264	44k	INFO	====> Epoch: 114, cost 481.38 s
2023-03-17 11:02:26,126	44k	INFO	Train Epoch: 115 [34%]
2023-03-17 11:02:26,128	44k	INFO	Losses: [2.5831854343414307, 2.2265872955322266, 9.641938209533691, 19.983287811279297, 0.9807518124580383], step: 55800, lr: 9.8523417025536e-05
2023-03-17 11:05:29,812	44k	INFO	Train Epoch: 115 [75%]
2023-03-17 11:05:29,815	44k	INFO	Losses: [2.563887596130371, 2.307697296142578, 9.734845161437988, 20.534194946289062, 1.1951038837432861], step: 56000, lr: 9.8523417025536e-05
2023-03-17 11:05:47,922	44k	INFO	Saving model and optimizer state at iteration 115 to ./logs/44k/G_56000.pth
2023-03-17 11:05:51,767	44k	INFO	Saving model and optimizer state at iteration 115 to ./logs/44k/D_56000.pth
2023-03-17 11:05:54,113	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_52000.pth
2023-03-17 11:05:54,118	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_52000.pth
2023-03-17 11:07:47,392	44k	INFO	====> Epoch: 115, cost 484.13 s
2023-03-17 11:09:10,836	44k	INFO	Train Epoch: 116 [16%]
2023-03-17 11:09:10,838	44k	INFO	Losses: [2.477738618850708, 2.160352945327759, 7.024170875549316, 16.734811782836914, 0.9827545285224915], step: 56200, lr: 9.851110159840781e-05
2023-03-17 11:12:13,509	44k	INFO	Train Epoch: 116 [57%]
2023-03-17 11:12:13,512	44k	INFO	Losses: [2.4602110385894775, 2.128208875656128, 9.715372085571289, 18.516027450561523, 1.2244703769683838], step: 56400, lr: 9.851110159840781e-05
2023-03-17 11:15:16,730	44k	INFO	Train Epoch: 116 [98%]
2023-03-17 11:15:16,733	44k	INFO	Losses: [2.474620819091797, 1.964585781097412, 7.929133415222168, 17.412694931030273, 1.4728379249572754], step: 56600, lr: 9.851110159840781e-05
2023-03-17 11:15:23,588	44k	INFO	====> Epoch: 116, cost 456.20 s
2023-03-17 11:18:27,175	44k	INFO	Train Epoch: 117 [39%]
2023-03-17 11:18:27,177	44k	INFO	Losses: [2.420013189315796, 2.18603253364563, 9.591678619384766, 14.596207618713379, 0.8012672662734985], step: 56800, lr: 9.8498787710708e-05
2023-03-17 11:18:40,572	44k	INFO	Saving model and optimizer state at iteration 117 to ./logs/44k/G_56800.pth
2023-03-17 11:18:44,162	44k	INFO	Saving model and optimizer state at iteration 117 to ./logs/44k/D_56800.pth
2023-03-17 11:18:46,833	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_52800.pth
2023-03-17 11:18:46,838	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_52800.pth
2023-03-17 11:21:54,521	44k	INFO	Train Epoch: 117 [80%]
2023-03-17 11:21:54,523	44k	INFO	Losses: [2.651813268661499, 2.256319522857666, 6.6362433433532715, 15.512826919555664, 1.0284459590911865], step: 57000, lr: 9.8498787710708e-05
2023-03-17 11:23:23,222	44k	INFO	====> Epoch: 117, cost 479.63 s
2023-03-17 11:25:07,785	44k	INFO	Train Epoch: 118 [21%]
2023-03-17 11:25:07,787	44k	INFO	Losses: [2.271695137023926, 2.2755846977233887, 12.089309692382812, 18.84491729736328, 1.480966329574585], step: 57200, lr: 9.848647536224416e-05
2023-03-17 11:28:10,010	44k	INFO	Train Epoch: 118 [62%]
2023-03-17 11:28:10,011	44k	INFO	Losses: [2.5381710529327393, 2.075659990310669, 9.390398979187012, 19.075077056884766, 1.192421317100525], step: 57400, lr: 9.848647536224416e-05
2023-03-17 11:30:57,546	44k	INFO	====> Epoch: 118, cost 454.32 s
2023-03-17 11:31:23,244	44k	INFO	Train Epoch: 119 [3%]
2023-03-17 11:31:23,246	44k	INFO	Losses: [2.3417224884033203, 2.3969225883483887, 11.169589042663574, 17.177366256713867, 1.2214947938919067], step: 57600, lr: 9.847416455282387e-05
2023-03-17 11:31:37,206	44k	INFO	Saving model and optimizer state at iteration 119 to ./logs/44k/G_57600.pth
2023-03-17 11:31:41,131	44k	INFO	Saving model and optimizer state at iteration 119 to ./logs/44k/D_57600.pth
2023-03-17 11:31:43,726	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_53600.pth
2023-03-17 11:31:43,732	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_53600.pth
2023-03-17 11:34:50,363	44k	INFO	Train Epoch: 119 [44%]
2023-03-17 11:34:50,365	44k	INFO	Losses: [2.639533281326294, 2.552556037902832, 7.157193660736084, 16.7408447265625, 1.1907761096954346], step: 57800, lr: 9.847416455282387e-05
2023-03-17 12:37:53,741	44k	INFO	{'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 3.1415926, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': True, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 5}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'爱梅斯': 0, '花凛': 1, '佩可莉姆': 2, '咲恋': 3, '凯露': 4, '可可萝': 5}, 'model_dir': './logs/44k'}
2023-03-17 12:37:54,854	44k	WARNING	git hash values are different. 55dd086f(saved) != d54bf592(current)
2023-03-17 12:38:09,422	44k	INFO	Loaded checkpoint './logs/44k/G_57600.pth' (iteration 119)
2023-03-17 12:38:21,317	44k	INFO	Loaded checkpoint './logs/44k/D_57600.pth' (iteration 119)
2023-03-17 12:38:58,005	44k	INFO	Train Epoch: 119 [3%]
2023-03-17 12:38:58,006	44k	INFO	Losses: [2.6381843090057373, 2.3323657512664795, 7.3547258377075195, 16.336719512939453, 1.127211093902588], step: 57600, lr: 9.846185528225477e-05
2023-03-17 12:39:12,343	44k	INFO	Saving model and optimizer state at iteration 119 to ./logs/44k/G_57600.pth
2023-03-17 12:39:15,842	44k	INFO	Saving model and optimizer state at iteration 119 to ./logs/44k/D_57600.pth
2023-03-19 03:48:11,610	44k	INFO	{'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 3.14159, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 12, 'fp16_run': True, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 1, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 10}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'咲恋': 0, '爱梅斯': 1, '佩可莉姆': 2, '花凛': 3, '凯露': 4, '可可萝': 5}, 'model_dir': './logs/44k'}
2023-03-19 03:48:12,247	44k	WARNING	git hash values are different. 55dd086f(saved) != 447d95bc(current)
2023-03-19 03:48:35,794	44k	INFO	Loaded checkpoint './logs/44k/G_57600.pth' (iteration 119)
2023-03-19 03:48:43,786	44k	INFO	Loaded checkpoint './logs/44k/D_57600.pth' (iteration 119)
2023-03-19 03:49:20,849	44k	INFO	Train Epoch: 119 [3%]
2023-03-19 03:49:20,850	44k	INFO	Losses: [2.6117446422576904, 2.5298197269439697, 8.718894958496094, 18.367584228515625, 1.648695468902588], step: 28800, lr: 9.84495475503445e-05
2023-03-19 03:49:39,430	44k	INFO	Saving model and optimizer state at iteration 119 to ./logs/44k/G_28800.pth
2023-03-19 03:49:43,944	44k	INFO	Saving model and optimizer state at iteration 119 to ./logs/44k/D_28800.pth
2023-03-19 03:55:12,414	44k	INFO	Train Epoch: 119 [85%]
2023-03-19 03:55:12,417	44k	INFO	Losses: [2.5723941326141357, 2.1749308109283447, 9.912409782409668, 19.12677001953125, 1.1882245540618896], step: 29000, lr: 9.84495475503445e-05
2023-03-19 03:56:11,166	44k	INFO	====> Epoch: 119, cost 479.56 s
2023-03-19 04:00:10,317	44k	INFO	Train Epoch: 120 [67%]
2023-03-19 04:00:10,320	44k	INFO	Losses: [2.554133415222168, 2.170962333679199, 8.286349296569824, 16.83053207397461, 1.3115869760513306], step: 29200, lr: 9.84372413569007e-05
2023-03-19 04:02:00,325	44k	INFO	====> Epoch: 120, cost 349.16 s
2023-03-19 04:04:57,929	44k	INFO	Train Epoch: 121 [49%]
2023-03-19 04:04:57,931	44k	INFO	Losses: [2.5637712478637695, 2.2343292236328125, 7.5599799156188965, 16.201656341552734, 1.14766526222229], step: 29400, lr: 9.842493670173108e-05
2023-03-19 04:07:50,734	44k	INFO	====> Epoch: 121, cost 350.41 s
2023-03-19 04:09:47,121	44k	INFO	Train Epoch: 122 [31%]
2023-03-19 04:09:47,123	44k	INFO	Losses: [2.1760997772216797, 2.577721357345581, 7.668175220489502, 16.05946159362793, 1.0146536827087402], step: 29600, lr: 9.841263358464336e-05
2023-03-19 04:10:01,741	44k	INFO	Saving model and optimizer state at iteration 122 to ./logs/44k/G_29600.pth
2023-03-19 04:10:05,829	44k	INFO	Saving model and optimizer state at iteration 122 to ./logs/44k/D_29600.pth
2023-03-19 04:14:02,373	44k	INFO	====> Epoch: 122, cost 371.64 s
2023-03-19 04:14:59,679	44k	INFO	Train Epoch: 123 [13%]
2023-03-19 04:14:59,681	44k	INFO	Losses: [2.5114970207214355, 2.3344173431396484, 9.933822631835938, 18.707149505615234, 1.2205214500427246], step: 29800, lr: 9.840033200544528e-05
2023-03-19 04:19:36,970	44k	INFO	Train Epoch: 123 [95%]
2023-03-19 04:19:36,971	44k	INFO	Losses: [2.362931489944458, 2.4701714515686035, 9.389654159545898, 17.956693649291992, 1.0756415128707886], step: 30000, lr: 9.840033200544528e-05
2023-03-19 04:19:52,500	44k	INFO	====> Epoch: 123, cost 350.13 s
2023-03-19 04:24:24,887	44k	INFO	Train Epoch: 124 [77%]
2023-03-19 04:24:24,895	44k	INFO	Losses: [2.3956665992736816, 2.2771549224853516, 8.791420936584473, 16.911115646362305, 1.1890572309494019], step: 30200, lr: 9.838803196394459e-05
2023-03-19 04:25:41,996	44k	INFO	====> Epoch: 124, cost 349.50 s
2023-03-19 04:29:11,409	44k	INFO	Train Epoch: 125 [59%]
2023-03-19 04:29:11,411	44k	INFO	Losses: [2.354029655456543, 2.630589246749878, 9.738694190979004, 19.589468002319336, 1.2809773683547974], step: 30400, lr: 9.837573345994909e-05
2023-03-19 04:29:26,152	44k	INFO	Saving model and optimizer state at iteration 125 to ./logs/44k/G_30400.pth
2023-03-19 04:29:30,209	44k	INFO	Saving model and optimizer state at iteration 125 to ./logs/44k/D_30400.pth
2023-03-19 04:31:53,084	44k	INFO	====> Epoch: 125, cost 371.09 s
2023-03-19 04:34:21,919	44k	INFO	Train Epoch: 126 [41%]
2023-03-19 04:34:21,921	44k	INFO	Losses: [2.3473196029663086, 2.3910813331604004, 9.856499671936035, 18.58342742919922, 1.0948435068130493], step: 30600, lr: 9.836343649326659e-05
2023-03-19 04:37:41,259	44k	INFO	====> Epoch: 126, cost 348.18 s
2023-03-19 04:39:10,481	44k	INFO	Train Epoch: 127 [23%]
2023-03-19 04:39:10,482	44k	INFO	Losses: [2.4615707397460938, 2.06150221824646, 9.964756965637207, 17.74311637878418, 1.1590461730957031], step: 30800, lr: 9.835114106370493e-05
2023-03-19 04:43:30,687	44k	INFO	====> Epoch: 127, cost 349.43 s
2023-03-19 04:43:58,709	44k	INFO	Train Epoch: 128 [5%]
2023-03-19 04:43:58,710	44k	INFO	Losses: [2.5154454708099365, 2.0758163928985596, 9.66000747680664, 20.303064346313477, 1.333095908164978], step: 31000, lr: 9.833884717107196e-05
2023-03-19 04:48:36,520	44k	INFO	Train Epoch: 128 [87%]
2023-03-19 04:48:36,523	44k	INFO	Losses: [2.5564117431640625, 1.8802831172943115, 5.399607181549072, 13.982666969299316, 1.1477104425430298], step: 31200, lr: 9.833884717107196e-05
2023-03-19 04:48:51,681	44k	INFO	Saving model and optimizer state at iteration 128 to ./logs/44k/G_31200.pth
2023-03-19 04:48:56,149	44k	INFO	Saving model and optimizer state at iteration 128 to ./logs/44k/D_31200.pth
2023-03-19 04:49:45,408	44k	INFO	====> Epoch: 128, cost 374.72 s
2023-03-19 04:53:48,928	44k	INFO	Train Epoch: 129 [69%]
2023-03-19 04:53:48,930	44k	INFO	Losses: [2.376661777496338, 2.3400487899780273, 10.183448791503906, 18.81839370727539, 1.2373069524765015], step: 31400, lr: 9.832655481517557e-05
2023-03-19 04:55:33,509	44k	INFO	====> Epoch: 129, cost 348.10 s
2023-03-19 04:58:35,106	44k	INFO	Train Epoch: 130 [51%]
2023-03-19 04:58:35,108	44k	INFO	Losses: [2.453214645385742, 2.255253553390503, 9.941259384155273, 20.30086898803711, 1.1844775676727295], step: 31600, lr: 9.831426399582366e-05
2023-03-19 05:01:19,697	44k	INFO	====> Epoch: 130, cost 346.19 s
2023-03-19 05:03:20,099	44k	INFO	Train Epoch: 131 [33%]
2023-03-19 05:03:20,101	44k	INFO	Losses: [2.545170783996582, 2.3894519805908203, 9.066315650939941, 19.431034088134766, 1.296032428741455], step: 31800, lr: 9.830197471282419e-05
2023-03-19 05:07:05,261	44k	INFO	====> Epoch: 131, cost 345.56 s
2023-03-19 05:08:06,628	44k	INFO	Train Epoch: 132 [15%]
2023-03-19 05:08:06,630	44k	INFO	Losses: [2.572995901107788, 2.0897107124328613, 8.455870628356934, 19.27798080444336, 1.2310394048690796], step: 32000, lr: 9.828968696598508e-05
2023-03-19 05:08:21,705	44k	INFO	Saving model and optimizer state at iteration 132 to ./logs/44k/G_32000.pth
2023-03-19 05:08:25,529	44k	INFO	Saving model and optimizer state at iteration 132 to ./logs/44k/D_32000.pth
2023-03-19 05:13:07,691	44k	INFO	Train Epoch: 132 [97%]
2023-03-19 05:13:07,693	44k	INFO	Losses: [2.5495755672454834, 2.272268772125244, 8.980924606323242, 20.105144500732422, 1.1467938423156738], step: 32200, lr: 9.828968696598508e-05
2023-03-19 05:13:20,631	44k	INFO	====> Epoch: 132, cost 375.37 s
2023-03-19 05:17:57,925	44k	INFO	Train Epoch: 133 [79%]
2023-03-19 05:17:57,927	44k	INFO	Losses: [2.6762030124664307, 2.3116543292999268, 9.210368156433105, 20.27691650390625, 1.0518969297409058], step: 32400, lr: 9.827740075511432e-05
2023-03-19 05:19:10,204	44k	INFO	====> Epoch: 133, cost 349.57 s
2023-03-19 05:22:47,686	44k	INFO	Train Epoch: 134 [61%]
2023-03-19 05:22:47,688	44k	INFO	Losses: [2.6595070362091064, 2.001474142074585, 8.9099760055542, 15.667600631713867, 1.376304268836975], step: 32600, lr: 9.826511608001993e-05
2023-03-19 05:25:00,871	44k	INFO	====> Epoch: 134, cost 350.67 s
2023-03-19 05:27:37,183	44k	INFO	Train Epoch: 135 [43%]
2023-03-19 05:27:37,185	44k	INFO	Losses: [2.2516837120056152, 2.621290683746338, 9.578805923461914, 20.273048400878906, 1.2863372564315796], step: 32800, lr: 9.825283294050992e-05
2023-03-19 05:27:53,133	44k	INFO	Saving model and optimizer state at iteration 135 to ./logs/44k/G_32800.pth
2023-03-19 05:27:57,523	44k	INFO	Saving model and optimizer state at iteration 135 to ./logs/44k/D_32800.pth
2023-03-19 05:27:59,715	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_54400.pth
2023-03-19 05:27:59,717	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_54400.pth
2023-03-19 05:31:16,891	44k	INFO	====> Epoch: 135, cost 376.02 s
2023-03-19 05:32:51,591	44k	INFO	Train Epoch: 136 [25%]
2023-03-19 05:32:51,593	44k	INFO	Losses: [2.550990581512451, 2.299443244934082, 8.599310874938965, 17.00333023071289, 1.1935644149780273], step: 33000, lr: 9.824055133639235e-05
2023-03-19 05:37:06,857	44k	INFO	====> Epoch: 136, cost 349.97 s
2023-03-19 05:37:40,480	44k	INFO	Train Epoch: 137 [7%]
2023-03-19 05:37:40,482	44k	INFO	Losses: [2.5285518169403076, 2.2456958293914795, 8.032464027404785, 16.138957977294922, 1.1882539987564087], step: 33200, lr: 9.822827126747529e-05
2023-03-19 05:42:18,313	44k	INFO	Train Epoch: 137 [89%]
2023-03-19 05:42:18,314	44k	INFO	Losses: [2.38613224029541, 2.4379029273986816, 9.768050193786621, 19.363834381103516, 1.0418587923049927], step: 33400, lr: 9.822827126747529e-05
2023-03-19 05:42:56,549	44k	INFO	====> Epoch: 137, cost 349.69 s
2023-03-19 05:47:05,937	44k	INFO	Train Epoch: 138 [70%]
2023-03-19 05:47:05,940	44k	INFO	Losses: [2.4908342361450195, 2.3220832347869873, 10.196332931518555, 18.554168701171875, 1.287099838256836], step: 33600, lr: 9.821599273356685e-05
2023-03-19 05:47:21,887	44k	INFO	Saving model and optimizer state at iteration 138 to ./logs/44k/G_33600.pth
2023-03-19 05:47:26,443	44k	INFO	Saving model and optimizer state at iteration 138 to ./logs/44k/D_33600.pth
2023-03-19 05:47:28,722	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_55200.pth
2023-03-19 05:47:28,724	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/D_55200.pth
2023-03-19 05:49:11,821	44k	INFO	====> Epoch: 138, cost 375.27 s
2023-03-19 05:52:21,326	44k	INFO	Train Epoch: 139 [52%]
2023-03-19 05:52:21,328	44k	INFO	Losses: [2.678709030151367, 2.0283408164978027, 9.1529541015625, 20.652057647705078, 0.9994276165962219], step: 33800, lr: 9.820371573447515e-05
2023-03-19 05:55:00,674	44k	INFO	====> Epoch: 139, cost 348.85 s
2023-03-19 06:00:14,660	44k	WARNING	git hash values are different. 55dd086f(saved) != 447d95bc(current)
2023-03-19 06:00:34,044	44k	INFO	Loaded checkpoint './logs/44k/G_57600.pth' (iteration 119)
2023-03-19 06:00:44,042	44k	INFO	Loaded checkpoint './logs/44k/D_57600.pth' (iteration 119)
2023-03-19 06:01:19,446	44k	INFO	Train Epoch: 119 [3%]
2023-03-19 06:01:19,446	44k	INFO	Losses: [2.611046314239502, 2.5111546516418457, 8.693563461303711, 18.36782455444336, 1.648695468902588], step: 28800, lr: 9.84495475503445e-05
2023-03-19 06:01:34,850	44k	INFO	Saving model and optimizer state at iteration 119 to ./logs/44k/G_28800.pth
2023-03-19 06:01:44,948	44k	INFO	Saving model and optimizer state at iteration 119 to ./logs/44k/D_28800.pth
2023-03-19 06:10:26,335	44k	WARNING	git hash values are different. 55dd086f(saved) != 447d95bc(current)
2023-03-19 06:10:42,963	44k	INFO	Loaded checkpoint './logs/44k/G_57600.pth' (iteration 119)
2023-03-19 06:10:47,976	44k	INFO	Loaded checkpoint './logs/44k/D_57600.pth' (iteration 119)
2023-03-19 06:11:23,144	44k	INFO	Train Epoch: 119 [3%]
2023-03-19 06:11:23,151	44k	INFO	Losses: [2.6149513721466064, 2.5370380878448486, 8.72342300415039, 18.367277145385742, 1.6487497091293335], step: 28800, lr: 9.84495475503445e-05
2023-03-19 06:11:40,378	44k	INFO	Saving model and optimizer state at iteration 119 to ./logs/44k/G_28800.pth
2023-03-19 06:11:58,100	44k	INFO	Saving model and optimizer state at iteration 119 to ./logs/44k/D_28800.pth
2023-03-19 06:17:22,170	44k	INFO	Train Epoch: 119 [85%]
2023-03-19 06:17:22,173	44k	INFO	Losses: [2.478893995285034, 2.4014482498168945, 10.103824615478516, 19.06524658203125, 1.186173677444458], step: 29000, lr: 9.84495475503445e-05
2023-03-19 06:18:17,650	44k	INFO	====> Epoch: 119, cost 471.90 s
2023-03-19 06:22:12,876	44k	INFO	Train Epoch: 120 [67%]
2023-03-19 06:22:12,878	44k	INFO	Losses: [2.5584959983825684, 2.252742290496826, 8.289676666259766, 16.782520294189453, 1.2712703943252563], step: 29200, lr: 9.84372413569007e-05
2023-03-19 06:24:01,604	44k	INFO	====> Epoch: 120, cost 343.95 s
2023-03-19 06:26:55,905	44k	INFO	Train Epoch: 121 [49%]
2023-03-19 06:26:55,906	44k	INFO	Losses: [2.544400691986084, 2.2445619106292725, 7.344658851623535, 15.869691848754883, 1.1362035274505615], step: 29400, lr: 9.842493670173108e-05
2023-03-19 06:29:44,689	44k	INFO	====> Epoch: 121, cost 343.09 s
2023-03-19 06:31:39,448	44k	INFO	Train Epoch: 122 [31%]
2023-03-19 06:31:39,451	44k	INFO	Losses: [2.5556859970092773, 2.1673104763031006, 6.94382381439209, 15.537896156311035, 1.0236930847167969], step: 29600, lr: 9.841263358464336e-05
2023-03-19 06:31:55,237	44k	INFO	Saving model and optimizer state at iteration 122 to ./logs/44k/G_29600.pth
2023-03-19 06:32:07,926	44k	INFO	Saving model and optimizer state at iteration 122 to ./logs/44k/D_29600.pth
2023-03-19 06:36:17,183	44k	INFO	====> Epoch: 122, cost 392.49 s
2023-03-19 06:37:13,670	44k	INFO	Train Epoch: 123 [13%]
2023-03-19 06:37:13,679	44k	INFO	Losses: [2.62937593460083, 2.07771635055542, 9.523256301879883, 18.81388282775879, 1.2406041622161865], step: 29800, lr: 9.840033200544528e-05
2023-03-19 06:41:45,034	44k	INFO	Train Epoch: 123 [95%]
2023-03-19 06:41:45,035	44k	INFO	Losses: [2.4676156044006348, 2.252629041671753, 9.206219673156738, 17.628591537475586, 1.0935561656951904], step: 30000, lr: 9.840033200544528e-05
2023-03-19 06:42:01,026	44k	INFO	====> Epoch: 123, cost 343.84 s
2023-03-19 06:46:26,779	44k	INFO	Train Epoch: 124 [77%]
2023-03-19 06:46:26,781	44k	INFO	Losses: [2.3520586490631104, 2.2396674156188965, 9.005539894104004, 16.792951583862305, 1.176187515258789], step: 30200, lr: 9.838803196394459e-05
2023-03-19 06:47:42,660	44k	INFO	====> Epoch: 124, cost 341.63 s
2023-03-19 06:51:08,578	44k	INFO	Train Epoch: 125 [59%]
2023-03-19 06:51:08,580	44k	INFO	Losses: [2.4618020057678223, 2.4736528396606445, 9.541234970092773, 19.455158233642578, 1.2822479009628296], step: 30400, lr: 9.837573345994909e-05
2023-03-19 06:51:22,918	44k	INFO	Saving model and optimizer state at iteration 125 to ./logs/44k/G_30400.pth
2023-03-19 06:51:36,813	44k	INFO	Saving model and optimizer state at iteration 125 to ./logs/44k/D_30400.pth
2023-03-19 06:54:09,241	44k	INFO	====> Epoch: 125, cost 386.58 s
2023-03-19 06:56:35,638	44k	INFO	Train Epoch: 126 [41%]
2023-03-19 06:56:35,640	44k	INFO	Losses: [2.406524181365967, 2.34045147895813, 9.467703819274902, 18.4973201751709, 1.071862816810608], step: 30600, lr: 9.836343649326659e-05
2023-03-19 06:59:50,610	44k	INFO	====> Epoch: 126, cost 341.37 s
2023-03-19 07:01:16,635	44k	INFO	Train Epoch: 127 [23%]
2023-03-19 07:01:16,637	44k	INFO	Losses: [2.160978317260742, 2.53950572013855, 10.809408187866211, 17.73338508605957, 1.1707638502120972], step: 30800, lr: 9.835114106370493e-05
2023-03-19 07:05:32,069	44k	INFO	====> Epoch: 127, cost 341.46 s
2023-03-19 07:05:58,271	44k	INFO	Train Epoch: 128 [5%]
2023-03-19 07:05:58,272	44k	INFO	Losses: [2.5683066844940186, 2.1148858070373535, 9.596181869506836, 20.255708694458008, 1.327038049697876], step: 31000, lr: 9.833884717107196e-05
2023-03-19 07:10:30,983	44k	INFO	Train Epoch: 128 [87%]
2023-03-19 07:10:30,985	44k	INFO	Losses: [2.526930332183838, 1.9264739751815796, 5.521120071411133, 14.04356861114502, 1.1642577648162842], step: 31200, lr: 9.833884717107196e-05
2023-03-19 07:10:44,902	44k	INFO	Saving model and optimizer state at iteration 128 to ./logs/44k/G_31200.pth
2023-03-19 07:10:55,553	44k	INFO	Saving model and optimizer state at iteration 128 to ./logs/44k/D_31200.pth
2023-03-19 07:11:56,393	44k	INFO	====> Epoch: 128, cost 384.32 s
2023-03-19 07:15:54,605	44k	INFO	Train Epoch: 129 [69%]
2023-03-19 07:15:54,607	44k	INFO	Losses: [2.5165069103240967, 2.326845407485962, 10.025596618652344, 19.09140396118164, 1.2301136255264282], step: 31400, lr: 9.832655481517557e-05
2023-03-19 07:17:37,547	44k	INFO	====> Epoch: 129, cost 341.15 s
2023-03-19 07:20:36,167	44k	INFO	Train Epoch: 130 [51%]
2023-03-19 07:20:36,169	44k	INFO	Losses: [2.3509130477905273, 2.2406866550445557, 10.232770919799805, 20.243438720703125, 1.1790398359298706], step: 31600, lr: 9.831426399582366e-05
2023-03-19 07:31:35,570	44k	WARNING	git hash values are different. 55dd086f(saved) != 447d95bc(current)
2023-03-19 07:31:47,854	44k	INFO	Loaded checkpoint './logs/44k/G_57600.pth' (iteration 119)
2023-03-19 07:31:51,865	44k	INFO	Loaded checkpoint './logs/44k/D_57600.pth' (iteration 119)
2023-03-19 07:32:23,275	44k	INFO	Train Epoch: 119 [3%]
2023-03-19 07:32:23,276	44k	INFO	Losses: [2.6113407611846924, 2.5411152839660645, 8.731165885925293, 18.367197036743164, 1.6487497091293335], step: 28800, lr: 9.84495475503445e-05
2023-03-19 07:32:39,430	44k	INFO	Saving model and optimizer state at iteration 119 to ./logs/44k/G_28800.pth
2023-03-19 07:32:42,974	44k	INFO	Saving model and optimizer state at iteration 119 to ./logs/44k/D_28800.pth
2023-03-19 07:36:16,089	44k	WARNING	git hash values are different. 55dd086f(saved) != 447d95bc(current)
2023-03-19 07:36:30,626	44k	INFO	Loaded checkpoint './logs/44k/G_31200.pth' (iteration 128)
2023-03-19 07:36:34,113	44k	INFO	Loaded checkpoint './logs/44k/D_57600.pth' (iteration 119)
2023-03-19 07:37:02,070	44k	INFO	Train Epoch: 119 [3%]
2023-03-19 07:37:02,070	44k	INFO	Losses: [2.8425185680389404, 2.165978193283081, 8.048396110534668, 17.02152442932129, 1.4814859628677368], step: 28800, lr: 9.832655481517557e-05
2023-03-19 07:38:08,174	44k	WARNING	git hash values are different. 55dd086f(saved) != 447d95bc(current)
2023-03-19 07:38:18,551	44k	INFO	Loaded checkpoint './logs/44k/G_31200.pth' (iteration 128)
2023-03-19 07:38:22,059	44k	INFO	Loaded checkpoint './logs/44k/D_31200.pth' (iteration 128)
2023-03-19 07:38:55,812	44k	INFO	Train Epoch: 128 [5%]
2023-03-19 07:38:55,814	44k	INFO	Losses: [2.495979070663452, 2.364474296569824, 11.206805229187012, 19.741397857666016, 1.3408305644989014], step: 31000, lr: 9.832655481517557e-05
2023-03-19 07:43:33,166	44k	INFO	Train Epoch: 128 [87%]
2023-03-19 07:43:33,166	44k	INFO	Losses: [2.4889912605285645, 2.110476016998291, 10.982046127319336, 17.08719825744629, 1.0594382286071777], step: 31200, lr: 9.832655481517557e-05
2023-03-19 07:43:51,106	44k	INFO	Saving model and optimizer state at iteration 128 to ./logs/44k/G_31200.pth
2023-03-19 07:43:54,937	44k	INFO	Saving model and optimizer state at iteration 128 to ./logs/44k/D_31200.pth
2023-03-19 07:44:46,215	44k	INFO	====> Epoch: 128, cost 398.09 s
2023-03-19 07:48:44,115	44k	INFO	Train Epoch: 129 [69%]
2023-03-19 07:48:44,117	44k	INFO	Losses: [2.493502378463745, 2.2225961685180664, 10.911959648132324, 19.7308292388916, 1.0857512950897217], step: 31400, lr: 9.831426399582366e-05
2023-03-19 07:50:26,501	44k	INFO	====> Epoch: 129, cost 340.29 s
2023-03-19 07:53:24,477	44k	INFO	Train Epoch: 130 [51%]
2023-03-19 07:53:24,485	44k	INFO	Losses: [2.4950523376464844, 2.307392120361328, 7.865989685058594, 19.17140769958496, 1.1914111375808716], step: 31600, lr: 9.830197471282419e-05
2023-03-19 07:56:05,781	44k	INFO	====> Epoch: 130, cost 339.28 s
2023-03-19 07:58:04,419	44k	INFO	Train Epoch: 131 [33%]
2023-03-19 07:58:04,421	44k	INFO	Losses: [2.6162869930267334, 2.106891393661499, 8.499510765075684, 16.93584442138672, 1.1345093250274658], step: 31800, lr: 9.828968696598508e-05
2023-03-19 08:01:43,853	44k	INFO	====> Epoch: 131, cost 338.07 s
2023-03-19 08:02:44,292	44k	INFO	Train Epoch: 132 [15%]
2023-03-19 08:02:44,294	44k	INFO	Losses: [2.81638765335083, 2.1512932777404785, 5.555813789367676, 14.49542236328125, 1.442597508430481], step: 32000, lr: 9.827740075511432e-05
2023-03-19 08:02:57,333	44k	INFO	Saving model and optimizer state at iteration 132 to ./logs/44k/G_32000.pth
2023-03-19 08:03:00,986	44k	INFO	Saving model and optimizer state at iteration 132 to ./logs/44k/D_32000.pth
2023-03-19 08:07:37,382	44k	INFO	Train Epoch: 132 [97%]
2023-03-19 08:07:37,383	44k	INFO	Losses: [2.492255449295044, 2.20237398147583, 9.363052368164062, 19.99129867553711, 1.3208929300308228], step: 32200, lr: 9.827740075511432e-05
2023-03-19 08:07:48,606	44k	INFO	====> Epoch: 132, cost 364.75 s
2023-03-19 08:12:18,060	44k	INFO	Train Epoch: 133 [79%]
2023-03-19 08:12:18,062	44k	INFO	Losses: [2.469524383544922, 2.232806444168091, 9.899406433105469, 18.757871627807617, 1.2723671197891235], step: 32400, lr: 9.826511608001993e-05
2023-03-19 08:13:27,999	44k	INFO	====> Epoch: 133, cost 339.39 s
2023-03-19 08:16:58,283	44k	INFO	Train Epoch: 134 [61%]
2023-03-19 08:16:58,285	44k	INFO	Losses: [2.475066900253296, 2.0961790084838867, 9.593061447143555, 18.191526412963867, 1.151270866394043], step: 32600, lr: 9.825283294050992e-05
2023-03-19 08:19:06,960	44k	INFO	====> Epoch: 134, cost 338.96 s
2023-03-19 08:21:37,404	44k	INFO	Train Epoch: 135 [43%]
2023-03-19 08:21:37,406	44k	INFO	Losses: [2.563338041305542, 2.1201441287994385, 7.688729763031006, 19.278770446777344, 1.2072458267211914], step: 32800, lr: 9.824055133639235e-05
2023-03-19 08:21:51,279	44k	INFO	Saving model and optimizer state at iteration 135 to ./logs/44k/G_32800.pth
2023-03-19 08:21:54,772	44k	INFO	Saving model and optimizer state at iteration 135 to ./logs/44k/D_32800.pth
2023-03-19 08:25:08,240	44k	INFO	====> Epoch: 135, cost 361.28 s
2023-03-19 08:26:40,170	44k	INFO	Train Epoch: 136 [25%]
2023-03-19 08:26:40,172	44k	INFO	Losses: [2.617574691772461, 2.2839863300323486, 9.137656211853027, 14.755854606628418, 1.2641507387161255], step: 33000, lr: 9.822827126747529e-05
2023-03-19 08:30:47,138	44k	INFO	====> Epoch: 136, cost 338.90 s
2023-03-19 08:31:19,837	44k	INFO	Train Epoch: 137 [7%]
2023-03-19 08:31:19,844	44k	INFO	Losses: [2.5636496543884277, 2.3666346073150635, 7.662520408630371, 16.88466453552246, 1.189200758934021], step: 33200, lr: 9.821599273356685e-05
2023-03-19 08:35:49,699	44k	INFO	Train Epoch: 137 [89%]
2023-03-19 08:35:49,700	44k	INFO	Losses: [2.452141284942627, 2.1798009872436523, 8.91804027557373, 18.442901611328125, 1.312769889831543], step: 33400, lr: 9.821599273356685e-05
2023-03-19 08:36:27,568	44k	INFO	====> Epoch: 137, cost 340.43 s
2023-03-19 08:40:30,007	44k	INFO	Train Epoch: 138 [70%]
2023-03-19 08:40:30,009	44k	INFO	Losses: [2.617067337036133, 2.113490104675293, 7.150920391082764, 15.901472091674805, 1.3270865678787231], step: 33600, lr: 9.820371573447515e-05
2023-03-19 08:40:46,873	44k	INFO	Saving model and optimizer state at iteration 138 to ./logs/44k/G_33600.pth
2023-03-19 08:40:51,460	44k	INFO	Saving model and optimizer state at iteration 138 to ./logs/44k/D_33600.pth
2023-03-19 08:42:33,832	44k	INFO	====> Epoch: 138, cost 366.26 s
2023-03-19 08:45:38,047	44k	INFO	Train Epoch: 139 [52%]
2023-03-19 08:45:38,052	44k	INFO	Losses: [2.316445827484131, 2.195986032485962, 10.281219482421875, 18.55415916442871, 1.215269923210144], step: 33800, lr: 9.819144027000834e-05
2023-03-19 08:48:15,249	44k	INFO	====> Epoch: 139, cost 341.42 s
2023-03-19 08:50:20,298	44k	INFO	Train Epoch: 140 [34%]
2023-03-19 08:50:20,300	44k	INFO	Losses: [2.6842730045318604, 2.1175990104675293, 7.919965744018555, 18.743572235107422, 1.3355644941329956], step: 34000, lr: 9.817916633997459e-05
2023-03-19 08:53:55,720	44k	INFO	====> Epoch: 140, cost 340.47 s
2023-03-19 08:55:00,976	44k	INFO	Train Epoch: 141 [16%]
2023-03-19 08:55:00,978	44k	INFO	Losses: [2.619845390319824, 2.0506067276000977, 10.158342361450195, 18.480674743652344, 1.236604928970337], step: 34200, lr: 9.816689394418209e-05
2023-03-19 08:59:30,338	44k	INFO	Train Epoch: 141 [98%]
2023-03-19 08:59:30,340	44k	INFO	Losses: [2.4256668090820312, 2.433798313140869, 8.169852256774902, 17.573326110839844, 1.070956826210022], step: 34400, lr: 9.816689394418209e-05
2023-03-19 08:59:45,109	44k	INFO	Saving model and optimizer state at iteration 141 to ./logs/44k/G_34400.pth
2023-03-19 08:59:49,562	44k	INFO	Saving model and optimizer state at iteration 141 to ./logs/44k/D_34400.pth
2023-03-19 08:59:52,748	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_29600.pth
2023-03-19 08:59:57,393	44k	INFO	====> Epoch: 141, cost 361.67 s
2023-03-19 09:04:33,801	44k	INFO	Train Epoch: 142 [80%]
2023-03-19 09:04:33,803	44k	INFO	Losses: [2.531536102294922, 2.1438846588134766, 8.408173561096191, 16.42824935913086, 0.936545193195343], step: 34600, lr: 9.815462308243906e-05
2023-03-19 09:05:37,581	44k	INFO	====> Epoch: 142, cost 340.19 s
2023-03-19 09:09:11,612	44k	INFO	Train Epoch: 143 [62%]
2023-03-19 09:09:11,613	44k	INFO	Losses: [2.678959369659424, 2.0179028511047363, 7.447989463806152, 17.023529052734375, 1.249837040901184], step: 34800, lr: 9.814235375455375e-05
2023-03-19 09:11:14,770	44k	INFO	====> Epoch: 143, cost 337.19 s
2023-03-19 09:13:49,301	44k	INFO	Train Epoch: 144 [44%]
2023-03-19 09:13:49,302	44k	INFO	Losses: [2.475449562072754, 2.3263020515441895, 7.712765216827393, 13.488282203674316, 1.1845589876174927], step: 35000, lr: 9.813008596033443e-05
2023-03-19 09:16:51,345	44k	INFO	====> Epoch: 144, cost 336.57 s
2023-03-19 09:18:27,889	44k	INFO	Train Epoch: 145 [26%]
2023-03-19 09:18:27,905	44k	INFO	Losses: [2.4268431663513184, 2.1846930980682373, 11.101517677307129, 21.056636810302734, 1.1044888496398926], step: 35200, lr: 9.811781969958938e-05
2023-03-19 09:18:41,266	44k	INFO	Saving model and optimizer state at iteration 145 to ./logs/44k/G_35200.pth
2023-03-19 09:18:45,685	44k	INFO	Saving model and optimizer state at iteration 145 to ./logs/44k/D_35200.pth
2023-03-19 09:18:48,979	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_30400.pth
2023-03-19 09:22:52,471	44k	INFO	====> Epoch: 145, cost 361.13 s
2023-03-19 09:23:29,157	44k	INFO	Train Epoch: 146 [8%]
2023-03-19 09:23:29,159	44k	INFO	Losses: [2.8504366874694824, 1.936305284500122, 5.268221378326416, 13.815414428710938, 0.9653102159500122], step: 35400, lr: 9.810555497212693e-05
2023-03-19 09:27:58,941	44k	INFO	Train Epoch: 146 [90%]
2023-03-19 09:27:58,943	44k	INFO	Losses: [2.5447323322296143, 2.415102958679199, 9.219429969787598, 18.689132690429688, 1.075164556503296], step: 35600, lr: 9.810555497212693e-05
2023-03-19 09:28:31,793	44k	INFO	====> Epoch: 146, cost 339.32 s
2023-03-19 09:32:40,616	44k	INFO	Train Epoch: 147 [72%]
2023-03-19 09:32:40,617	44k	INFO	Losses: [2.342400312423706, 2.3232884407043457, 10.165895462036133, 16.43034553527832, 1.179328203201294], step: 35800, lr: 9.809329177775541e-05
2023-03-19 09:34:11,780	44k	INFO	====> Epoch: 147, cost 339.99 s
2023-03-19 09:37:20,350	44k	INFO	Train Epoch: 148 [54%]
2023-03-19 09:37:20,351	44k	INFO	Losses: [2.5748507976531982, 2.206359386444092, 7.658997535705566, 17.656526565551758, 1.1171844005584717], step: 36000, lr: 9.808103011628319e-05
2023-03-19 09:37:34,409	44k	INFO	Saving model and optimizer state at iteration 148 to ./logs/44k/G_36000.pth
2023-03-19 09:37:38,092	44k	INFO	Saving model and optimizer state at iteration 148 to ./logs/44k/D_36000.pth
2023-03-19 09:37:41,048	44k	INFO	.. Free up space by deleting ckpt ./logs/44k/G_28800.pth
2023-03-19 09:40:14,482	44k	INFO	====> Epoch: 148, cost 362.70 s
2023-03-19 09:42:24,056	44k	INFO	Train Epoch: 149 [36%]
2023-03-19 09:42:24,057	44k	INFO	Losses: [2.2920098304748535, 2.473909378051758, 10.183199882507324, 17.65275001525879, 1.2044261693954468], step: 36200, lr: 9.806876998751865e-05
2023-03-19 09:45:53,246	44k	INFO	====> Epoch: 149, cost 338.76 s
2023-03-19 09:47:02,913	44k	INFO	Train Epoch: 150 [18%]
2023-03-19 09:47:02,914	44k	INFO	Losses: [2.4962925910949707, 2.254976272583008, 6.165390491485596, 16.950763702392578, 1.07537043094635], step: 36400, lr: 9.80565113912702e-05