SOVITS4.0 / KOKOMI1 / 44k / train.log
2023-02-28 12:58:15,477 44k INFO {'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 1234, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': False, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'kokomi': 0}, 'model_dir': './logs/44k'}
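The dump above is the standard so-vits-svc 4.0 hyperparameter set for a single-speaker ('kokomi') 44.1 kHz run. One value worth decoding is lr_decay = 0.999875: it is applied once per epoch, so the learning rates printed below follow lr(epoch) = 1e-4 x 0.999875^(epoch-1). A minimal arithmetic sketch (not code from train.py) that reproduces the logged values:

# Sketch: reproduce the per-epoch learning rate printed in this log.
# Assumes exponential decay stepped once per epoch, which the logged
# values bear out; this is not copied from the training script.
base_lr = 1e-4     # train.learning_rate
gamma = 0.999875   # train.lr_decay

def lr_at_epoch(epoch: int) -> float:
    # Epoch 1 runs at the base rate; decay applies after each epoch.
    return base_lr * gamma ** (epoch - 1)

print(lr_at_epoch(4))    # ~9.99625e-05, as logged at step 200
print(lr_at_epoch(170))  # ~9.79095e-05, as logged at step 11200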
2023-02-28 12:58:22,961 44k INFO emb_g.weight is not in the checkpoint
2023-02-28 12:58:23,045 44k INFO Loaded checkpoint './logs/44k/G_0.pth' (iteration 0)
2023-02-28 12:58:23,424 44k INFO Loaded checkpoint './logs/44k/D_0.pth' (iteration 0)
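The "emb_g.weight is not in the checkpoint" message is expected on this first load: G_0.pth here is a pretrained base model, so the speaker-embedding table for the new spk map is simply kept at its fresh initialization while every other tensor is copied over. A minimal sketch of that tolerant loading pattern, assuming the usual VITS-style checkpoint dict with a 'model' key (the helper name is illustrative, not the exact utils.load_checkpoint code):

import torch

def load_compatible_state(model: torch.nn.Module, ckpt_path: str) -> None:
    # Copy tensors that exist and match in shape; keep current weights otherwise.
    saved = torch.load(ckpt_path, map_location="cpu")["model"]  # 'model' key assumed
    current = model.state_dict()
    for name, tensor in current.items():
        if name in saved and saved[name].shape == tensor.shape:
            current[name] = saved[name]
        else:
            print(f"{name} is not in the checkpoint")  # e.g. emb_g.weight
    model.load_state_dict(current)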
2023-02-28 12:58:37,034 44k INFO Train Epoch: 1 [0%]
2023-02-28 12:58:37,035 44k INFO Losses: [2.5471854209899902, 2.287363052368164, 8.129371643066406, 25.281707763671875, 3.162733316421509], step: 0, lr: 0.0001
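The five numbers in a "Losses:" line are not labeled by the script. In the usual VITS-family ordering they would be the discriminator, generator-adversarial, feature-matching, mel-reconstruction (weighted by c_mel = 45, hence the largest value), and KL (c_kl = 1.0) terms; that reading fits the magnitudes here but is an assumption, not something this log states:

# Reading the first "Losses:" line (field order assumed, see note above):
disc, gen, fm, mel, kl = [2.5471854209899902, 2.287363052368164,
                          8.129371643066406, 25.281707763671875,
                          3.162733316421509]
loss_gen_all = gen + fm + mel + kl  # combined generator objective in VITS-style training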
2023-02-28 12:58:45,336 44k INFO Saving model and optimizer state at iteration 1 to ./logs/44k/G_0.pth
2023-02-28 12:58:47,735 44k INFO Saving model and optimizer state at iteration 1 to ./logs/44k/D_0.pth
2023-02-28 13:00:04,743 44k INFO ====> Epoch: 1, cost 109.27 s
2023-02-28 13:01:18,240 44k INFO ====> Epoch: 2, cost 73.50 s
2023-02-28 13:02:32,047 44k INFO ====> Epoch: 3, cost 73.81 s
2023-02-28 13:02:39,865 44k INFO Train Epoch: 4 [3%]
2023-02-28 13:02:39,867 44k INFO Losses: [2.4934775829315186, 2.1019155979156494, 9.47184944152832, 18.184940338134766, 1.5548597574234009], step: 200, lr: 9.996250468730469e-05
2023-02-28 13:03:46,973 44k INFO ====> Epoch: 4, cost 74.93 s
2023-02-28 13:05:00,759 44k INFO ====> Epoch: 5, cost 73.79 s
2023-02-28 13:06:15,686 44k INFO ====> Epoch: 6, cost 74.93 s
2023-02-28 13:06:25,299 44k INFO Train Epoch: 7 [6%]
2023-02-28 13:06:25,301 44k INFO Losses: [2.7149460315704346, 1.922666072845459, 9.565447807312012, 17.794979095458984, 1.4414783716201782], step: 400, lr: 9.99250234335941e-05
2023-02-28 13:07:30,583 44k INFO ====> Epoch: 7, cost 74.90 s
2023-02-28 13:08:44,495 44k INFO ====> Epoch: 8, cost 73.91 s
2023-02-28 13:09:59,204 44k INFO ====> Epoch: 9, cost 74.71 s
2023-02-28 13:10:14,088 44k INFO Train Epoch: 10 [9%]
2023-02-28 13:10:14,090 44k INFO Losses: [2.5431606769561768, 2.338045120239258, 9.342167854309082, 16.919967651367188, 0.9485687017440796], step: 600, lr: 9.98875562335968e-05
2023-02-28 13:11:17,366 44k INFO ====> Epoch: 10, cost 78.16 s
2023-02-28 13:12:31,887 44k INFO ====> Epoch: 11, cost 74.52 s
2023-02-28 13:13:45,835 44k INFO ====> Epoch: 12, cost 73.95 s
2023-02-28 13:13:59,985 44k INFO Train Epoch: 13 [12%]
2023-02-28 13:13:59,986 44k INFO Losses: [2.6672537326812744, 2.31874942779541, 8.52326774597168, 17.828096389770508, 1.4467674493789673], step: 800, lr: 9.98501030820433e-05
2023-02-28 13:14:04,859 44k INFO Saving model and optimizer state at iteration 13 to ./logs/44k/G_800.pth
2023-02-28 13:14:08,952 44k INFO Saving model and optimizer state at iteration 13 to ./logs/44k/D_800.pth
2023-02-28 13:15:13,665 44k INFO ====> Epoch: 13, cost 87.83 s
2023-02-28 13:16:27,434 44k INFO ====> Epoch: 14, cost 73.77 s
2023-02-28 13:17:41,296 44k INFO ====> Epoch: 15, cost 73.86 s
2023-02-28 13:17:58,962 44k INFO Train Epoch: 16 [15%]
2023-02-28 13:17:58,964 44k INFO Losses: [2.791425943374634, 2.111208200454712, 9.503885269165039, 17.4194393157959, 1.1992781162261963], step: 1000, lr: 9.981266397366609e-05
2023-02-28 13:18:58,713 44k INFO ====> Epoch: 16, cost 77.42 s
2023-02-28 13:20:13,219 44k INFO ====> Epoch: 17, cost 74.51 s
2023-02-28 13:21:26,750 44k INFO ====> Epoch: 18, cost 73.53 s
2023-02-28 13:21:45,198 44k INFO Train Epoch: 19 [18%]
2023-02-28 13:21:45,200 44k INFO Losses: [2.6043426990509033, 2.0303421020507812, 10.949682235717773, 17.934022903442383, 1.2471243143081665], step: 1200, lr: 9.977523890319963e-05
2023-02-28 13:22:41,221 44k INFO ====> Epoch: 19, cost 74.47 s
2023-02-28 13:23:56,848 44k INFO ====> Epoch: 20, cost 75.63 s
2023-02-28 13:25:13,012 44k INFO ====> Epoch: 21, cost 76.16 s
2023-02-28 13:25:34,686 44k INFO Train Epoch: 22 [21%]
2023-02-28 13:25:34,688 44k INFO Losses: [2.47169828414917, 2.5450048446655273, 10.391633033752441, 18.09650993347168, 1.1005759239196777], step: 1400, lr: 9.973782786538036e-05
2023-02-28 13:26:28,829 44k INFO ====> Epoch: 22, cost 75.82 s
2023-02-28 13:27:43,126 44k INFO ====> Epoch: 23, cost 74.30 s
2023-02-28 13:28:56,772 44k INFO ====> Epoch: 24, cost 73.65 s
2023-02-28 13:29:19,414 44k INFO Train Epoch: 25 [24%]
2023-02-28 13:29:19,415 44k INFO Losses: [2.490743398666382, 2.1569743156433105, 6.341503620147705, 16.115381240844727, 0.8458325862884521], step: 1600, lr: 9.970043085494672e-05
2023-02-28 13:29:25,458 44k INFO Saving model and optimizer state at iteration 25 to ./logs/44k/G_1600.pth
2023-02-28 13:29:27,711 44k INFO Saving model and optimizer state at iteration 25 to ./logs/44k/D_1600.pth
2023-02-28 13:30:22,864 44k INFO ====> Epoch: 25, cost 86.09 s
2023-02-28 13:31:36,489 44k INFO ====> Epoch: 26, cost 73.63 s
2023-02-28 13:32:50,452 44k INFO ====> Epoch: 27, cost 73.96 s
2023-02-28 13:33:16,415 44k INFO Train Epoch: 28 [27%]
2023-02-28 13:33:16,417 44k INFO Losses: [2.7591214179992676, 2.24820613861084, 8.908722877502441, 17.70888900756836, 1.1876769065856934], step: 1800, lr: 9.966304786663908e-05
2023-02-28 13:34:06,789 44k INFO ====> Epoch: 28, cost 76.34 s
2023-02-28 13:35:21,576 44k INFO ====> Epoch: 29, cost 74.79 s
2023-02-28 13:36:35,365 44k INFO ====> Epoch: 30, cost 73.79 s
2023-02-28 13:37:02,139 44k INFO Train Epoch: 31 [30%]
2023-02-28 13:37:02,141 44k INFO Losses: [2.345362424850464, 2.426661968231201, 11.006977081298828, 16.982173919677734, 0.9066070318222046], step: 2000, lr: 9.962567889519979e-05
2023-02-28 13:37:49,899 44k INFO ====> Epoch: 31, cost 74.53 s
2023-02-28 13:39:04,395 44k INFO ====> Epoch: 32, cost 74.50 s
2023-02-28 13:40:20,560 44k INFO ====> Epoch: 33, cost 76.16 s
2023-02-28 13:40:49,338 44k INFO Train Epoch: 34 [33%]
2023-02-28 13:40:49,339 44k INFO Losses: [2.672901153564453, 2.1147024631500244, 7.22958517074585, 16.456411361694336, 0.9545275568962097], step: 2200, lr: 9.95883239353732e-05
2023-02-28 13:41:34,743 44k INFO ====> Epoch: 34, cost 74.18 s
2023-02-28 13:42:49,471 44k INFO ====> Epoch: 35, cost 74.73 s
2023-02-28 13:44:03,145 44k INFO ====> Epoch: 36, cost 73.67 s
2023-02-28 13:44:34,983 44k INFO Train Epoch: 37 [36%]
2023-02-28 13:44:34,985 44k INFO Losses: [2.4869394302368164, 2.054269790649414, 10.121505737304688, 17.453989028930664, 0.9608755111694336], step: 2400, lr: 9.95509829819056e-05
2023-02-28 13:44:41,491 44k INFO Saving model and optimizer state at iteration 37 to ./logs/44k/G_2400.pth
2023-02-28 13:44:43,818 44k INFO Saving model and optimizer state at iteration 37 to ./logs/44k/D_2400.pth
2023-02-28 13:45:30,536 44k INFO ====> Epoch: 37, cost 87.39 s
2023-02-28 13:46:45,550 44k INFO ====> Epoch: 38, cost 75.01 s
2023-02-28 13:48:01,441 44k INFO ====> Epoch: 39, cost 75.89 s
2023-02-28 13:48:34,396 44k INFO Train Epoch: 40 [39%]
2023-02-28 13:48:34,398 44k INFO Losses: [2.5546257495880127, 2.274440288543701, 8.828275680541992, 16.06529426574707, 0.8403733968734741], step: 2600, lr: 9.951365602954526e-05
2023-02-28 13:49:15,727 44k INFO ====> Epoch: 40, cost 74.29 s
2023-02-28 13:50:29,574 44k INFO ====> Epoch: 41, cost 73.85 s
2023-02-28 13:51:43,635 44k INFO ====> Epoch: 42, cost 74.06 s
2023-02-28 13:52:20,462 44k INFO Train Epoch: 43 [42%]
2023-02-28 13:52:20,464 44k INFO Losses: [2.7284128665924072, 2.332390069961548, 6.872231483459473, 16.307586669921875, 0.8291447758674622], step: 2800, lr: 9.947634307304244e-05
2023-02-28 13:53:00,407 44k INFO ====> Epoch: 43, cost 76.77 s
2023-02-28 13:54:14,355 44k INFO ====> Epoch: 44, cost 73.95 s
2023-02-28 13:55:27,919 44k INFO ====> Epoch: 45, cost 73.56 s
2023-02-28 13:56:05,090 44k INFO Train Epoch: 46 [45%]
2023-02-28 13:56:05,091 44k INFO Losses: [2.4306740760803223, 2.2507266998291016, 11.587876319885254, 18.6252384185791, 1.1041194200515747], step: 3000, lr: 9.943904410714931e-05
2023-02-28 13:56:42,546 44k INFO ====> Epoch: 46, cost 74.63 s
2023-02-28 13:57:58,429 44k INFO ====> Epoch: 47, cost 75.88 s
2023-02-28 13:59:12,883 44k INFO ====> Epoch: 48, cost 74.45 s
2023-02-28 13:59:52,046 44k INFO Train Epoch: 49 [48%]
2023-02-28 13:59:52,048 44k INFO Losses: [2.548037528991699, 2.243398904800415, 11.1314697265625, 17.60015106201172, 1.1479979753494263], step: 3200, lr: 9.940175912662009e-05
2023-02-28 13:59:58,188 44k INFO Saving model and optimizer state at iteration 49 to ./logs/44k/G_3200.pth
2023-02-28 14:00:00,434 44k INFO Saving model and optimizer state at iteration 49 to ./logs/44k/D_3200.pth
2023-02-28 14:00:02,710 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_800.pth
2023-02-28 14:00:02,712 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_800.pth
2023-02-28 14:00:39,101 44k INFO ====> Epoch: 49, cost 86.22 s
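This is the first checkpoint rotation: with eval_interval = 800 a new G/D pair is written every 800 global steps, and keep_ckpts = 3 means that once a fourth numbered pair (G_3200/D_3200) lands, the oldest (G_800/D_800) is deleted. Note the *_0.pth pair is never rotated out. A rough sketch of that policy, with a hypothetical helper name and the step number parsed from the filename:

import glob
import os
import re

def clean_checkpoints(model_dir: str, keep: int = 3, prefix: str = "G") -> None:
    # Hypothetical helper mirroring the ".. Free up space" lines in this log.
    step = lambda p: int(re.search(r"_(\d+)\.pth$", p).group(1))
    ckpts = sorted((p for p in glob.glob(os.path.join(model_dir, f"{prefix}_*.pth"))
                    if step(p) > 0),  # the log shows G_0/D_0 are exempt
                   key=step)
    for path in ckpts[:-keep]:  # drop all but the `keep` newest
        print(f".. Free up space by deleting ckpt {path}")
        os.remove(path)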
2023-02-28 14:01:52,338 44k INFO ====> Epoch: 50, cost 73.24 s
2023-02-28 14:03:06,420 44k INFO ====> Epoch: 51, cost 74.08 s
2023-02-28 14:03:48,912 44k INFO Train Epoch: 52 [52%]
2023-02-28 14:03:48,914 44k INFO Losses: [2.605712413787842, 2.118232488632202, 8.931315422058105, 17.973251342773438, 1.0647422075271606], step: 3400, lr: 9.936448812621091e-05
2023-02-28 14:04:22,695 44k INFO ====> Epoch: 52, cost 76.28 s
2023-02-28 14:05:37,907 44k INFO ====> Epoch: 53, cost 75.21 s
2023-02-28 14:06:51,519 44k INFO ====> Epoch: 54, cost 73.61 s
2023-02-28 14:07:34,853 44k INFO Train Epoch: 55 [55%]
2023-02-28 14:07:34,855 44k INFO Losses: [2.587838649749756, 2.0631420612335205, 7.7173237800598145, 15.191953659057617, 1.1199090480804443], step: 3600, lr: 9.932723110067987e-05
2023-02-28 14:08:06,062 44k INFO ====> Epoch: 55, cost 74.54 s
2023-02-28 14:09:20,850 44k INFO ====> Epoch: 56, cost 74.79 s
2023-02-28 14:10:36,819 44k INFO ====> Epoch: 57, cost 75.97 s
2023-02-28 14:11:21,961 44k INFO Train Epoch: 58 [58%]
2023-02-28 14:11:21,963 44k INFO Losses: [2.5817642211914062, 2.159748077392578, 7.8062825202941895, 14.950019836425781, 0.7315710783004761], step: 3800, lr: 9.928998804478705e-05
2023-02-28 14:11:51,005 44k INFO ====> Epoch: 58, cost 74.19 s
2023-02-28 14:13:04,302 44k INFO ====> Epoch: 59, cost 73.30 s
2023-02-28 14:14:18,274 44k INFO ====> Epoch: 60, cost 73.97 s
2023-02-28 14:15:07,271 44k INFO Train Epoch: 61 [61%]
2023-02-28 14:15:07,273 44k INFO Losses: [2.7243902683258057, 1.9707286357879639, 9.737435340881348, 17.495309829711914, 1.0905342102050781], step: 4000, lr: 9.92527589532945e-05
2023-02-28 14:15:12,474 44k INFO Saving model and optimizer state at iteration 61 to ./logs/44k/G_4000.pth
2023-02-28 14:15:15,204 44k INFO Saving model and optimizer state at iteration 61 to ./logs/44k/D_4000.pth
2023-02-28 14:15:17,291 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_1600.pth
2023-02-28 14:15:17,294 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_1600.pth
2023-02-28 14:15:44,779 44k INFO ====> Epoch: 61, cost 86.51 s
2023-02-28 14:17:00,385 44k INFO ====> Epoch: 62, cost 75.61 s
2023-02-28 14:18:14,709 44k INFO ====> Epoch: 63, cost 74.32 s
2023-02-28 14:19:03,637 44k INFO Train Epoch: 64 [64%]
2023-02-28 14:19:03,639 44k INFO Losses: [2.704010248184204, 1.8972713947296143, 7.680373191833496, 15.452510833740234, 1.2449761629104614], step: 4200, lr: 9.921554382096622e-05
2023-02-28 14:19:28,541 44k INFO ====> Epoch: 64, cost 73.83 s
2023-02-28 14:20:41,671 44k INFO ====> Epoch: 65, cost 73.13 s
2023-02-28 14:21:55,285 44k INFO ====> Epoch: 66, cost 73.61 s
2023-02-28 14:22:48,779 44k INFO Train Epoch: 67 [67%]
2023-02-28 14:22:48,780 44k INFO Losses: [2.4766836166381836, 2.1398825645446777, 9.387099266052246, 17.52686882019043, 0.6741310954093933], step: 4400, lr: 9.917834264256819e-05
2023-02-28 14:23:11,917 44k INFO ====> Epoch: 67, cost 76.63 s
2023-02-28 14:24:25,362 44k INFO ====> Epoch: 68, cost 73.44 s
2023-02-28 14:25:38,577 44k INFO ====> Epoch: 69, cost 73.22 s
2023-02-28 14:26:32,509 44k INFO Train Epoch: 70 [70%]
2023-02-28 14:26:32,510 44k INFO Losses: [2.5740795135498047, 1.8826556205749512, 8.132512092590332, 15.88471508026123, 0.8013160824775696], step: 4600, lr: 9.914115541286833e-05
2023-02-28 14:26:52,769 44k INFO ====> Epoch: 70, cost 74.19 s
2023-02-28 14:28:07,168 44k INFO ====> Epoch: 71, cost 74.40 s
2023-02-28 14:29:22,887 44k INFO ====> Epoch: 72, cost 75.72 s
2023-02-28 14:30:18,671 44k INFO Train Epoch: 73 [73%]
2023-02-28 14:30:18,673 44k INFO Losses: [2.4931139945983887, 2.064826011657715, 11.462966918945312, 16.78502082824707, 0.7054392695426941], step: 4800, lr: 9.910398212663652e-05
2023-02-28 14:30:24,287 44k INFO Saving model and optimizer state at iteration 73 to ./logs/44k/G_4800.pth
2023-02-28 14:30:26,623 44k INFO Saving model and optimizer state at iteration 73 to ./logs/44k/D_4800.pth
2023-02-28 14:30:28,871 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_2400.pth
2023-02-28 14:30:28,873 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_2400.pth
2023-02-28 14:30:48,315 44k INFO ====> Epoch: 73, cost 85.43 s
2023-02-28 14:32:02,121 44k INFO ====> Epoch: 74, cost 73.81 s
2023-02-28 14:33:15,463 44k INFO ====> Epoch: 75, cost 73.34 s
2023-02-28 14:34:13,399 44k INFO Train Epoch: 76 [76%]
2023-02-28 14:34:13,400 44k INFO Losses: [2.6621763706207275, 2.1752851009368896, 7.554469585418701, 17.05042266845703, 1.040917992591858], step: 5000, lr: 9.906682277864462e-05
2023-02-28 14:34:30,081 44k INFO ====> Epoch: 76, cost 74.62 s
2023-02-28 14:35:44,712 44k INFO ====> Epoch: 77, cost 74.63 s
2023-02-28 14:37:00,735 44k INFO ====> Epoch: 78, cost 76.02 s
2023-02-28 14:38:00,257 44k INFO Train Epoch: 79 [79%]
2023-02-28 14:38:00,258 44k INFO Losses: [2.5352425575256348, 2.311486005783081, 7.6704816818237305, 17.204071044921875, 0.9419063925743103], step: 5200, lr: 9.902967736366644e-05
2023-02-28 14:38:14,901 44k INFO ====> Epoch: 79, cost 74.17 s
2023-02-28 14:39:28,428 44k INFO ====> Epoch: 80, cost 73.53 s
2023-02-28 14:40:41,892 44k INFO ====> Epoch: 81, cost 73.46 s
2023-02-28 14:41:44,562 44k INFO Train Epoch: 82 [82%]
2023-02-28 14:41:44,564 44k INFO Losses: [2.5333497524261475, 2.12280535697937, 11.959494590759277, 16.77216911315918, 0.8535165190696716], step: 5400, lr: 9.899254587647776e-05
2023-02-28 14:41:58,589 44k INFO ====> Epoch: 82, cost 76.70 s
2023-02-28 14:43:13,346 44k INFO ====> Epoch: 83, cost 74.76 s
2023-02-28 14:44:27,256 44k INFO ====> Epoch: 84, cost 73.91 s
2023-02-28 14:45:30,639 44k INFO Train Epoch: 85 [85%]
2023-02-28 14:45:30,640 44k INFO Losses: [2.7888104915618896, 2.0978782176971436, 8.370659828186035, 16.60346031188965, 1.063904047012329], step: 5600, lr: 9.895542831185631e-05
2023-02-28 14:45:37,202 44k INFO Saving model and optimizer state at iteration 85 to ./logs/44k/G_5600.pth
2023-02-28 14:45:39,556 44k INFO Saving model and optimizer state at iteration 85 to ./logs/44k/D_5600.pth
2023-02-28 14:45:41,711 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_3200.pth
2023-02-28 14:45:41,714 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_3200.pth
2023-02-28 14:45:52,836 44k INFO ====> Epoch: 85, cost 85.58 s
2023-02-28 14:47:07,776 44k INFO ====> Epoch: 86, cost 74.94 s
2023-02-28 14:48:21,712 44k INFO ====> Epoch: 87, cost 73.94 s
2023-02-28 14:49:29,232 44k INFO Train Epoch: 88 [88%]
2023-02-28 14:49:29,233 44k INFO Losses: [2.7750144004821777, 2.074420690536499, 4.384814739227295, 13.493303298950195, 0.7895516753196716], step: 5800, lr: 9.891832466458178e-05
2023-02-28 14:49:38,593 44k INFO ====> Epoch: 88, cost 76.88 s
2023-02-28 14:50:53,326 44k INFO ====> Epoch: 89, cost 74.73 s
2023-02-28 14:52:07,231 44k INFO ====> Epoch: 90, cost 73.91 s
2023-02-28 14:53:15,557 44k INFO Train Epoch: 91 [91%]
2023-02-28 14:53:15,559 44k INFO Losses: [2.6227004528045654, 1.9688911437988281, 10.923945426940918, 17.212446212768555, 0.7580318450927734], step: 6000, lr: 9.888123492943583e-05
2023-02-28 14:53:21,988 44k INFO ====> Epoch: 91, cost 74.76 s
2023-02-28 14:54:36,795 44k INFO ====> Epoch: 92, cost 74.81 s
2023-02-28 14:55:53,088 44k INFO ====> Epoch: 93, cost 76.29 s
2023-02-28 14:57:03,850 44k INFO Train Epoch: 94 [94%]
2023-02-28 14:57:03,852 44k INFO Losses: [2.3421754837036133, 2.4148967266082764, 11.268254280090332, 15.194820404052734, 0.6364802122116089], step: 6200, lr: 9.884415910120204e-05
2023-02-28 14:57:08,816 44k INFO ====> Epoch: 94, cost 75.73 s
2023-02-28 14:58:22,439 44k INFO ====> Epoch: 95, cost 73.62 s
2023-02-28 14:59:36,532 44k INFO ====> Epoch: 96, cost 74.09 s
2023-02-28 15:00:49,702 44k INFO Train Epoch: 97 [97%]
2023-02-28 15:00:49,703 44k INFO Losses: [3.0742392539978027, 2.0508759021759033, 6.718446731567383, 14.818535804748535, 0.8880882263183594], step: 6400, lr: 9.880709717466598e-05
2023-02-28 15:00:55,763 44k INFO Saving model and optimizer state at iteration 97 to ./logs/44k/G_6400.pth
2023-02-28 15:00:58,642 44k INFO Saving model and optimizer state at iteration 97 to ./logs/44k/D_6400.pth
2023-02-28 15:01:01,161 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_4000.pth
2023-02-28 15:01:01,164 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_4000.pth
2023-02-28 15:01:02,829 44k INFO ====> Epoch: 97, cost 86.30 s
2023-02-28 15:02:19,015 44k INFO ====> Epoch: 98, cost 76.19 s
2023-02-28 15:03:35,530 44k INFO ====> Epoch: 99, cost 76.52 s
2023-02-28 15:04:49,823 44k INFO ====> Epoch: 100, cost 74.29 s
2023-02-28 15:04:55,602 44k INFO Train Epoch: 101 [0%]
2023-02-28 15:04:55,603 44k INFO Losses: [2.5342299938201904, 2.09214448928833, 11.24268627166748, 16.873966217041016, 0.5841363668441772], step: 6600, lr: 9.875770288847208e-05
2023-02-28 15:06:04,315 44k INFO ====> Epoch: 101, cost 74.49 s
2023-02-28 15:07:19,018 44k INFO ====> Epoch: 102, cost 74.70 s
2023-02-28 15:08:34,984 44k INFO ====> Epoch: 103, cost 75.97 s
2023-02-28 15:08:44,478 44k INFO Train Epoch: 104 [3%]
2023-02-28 15:08:44,479 44k INFO Losses: [2.637145519256592, 2.4403607845306396, 7.870616436004639, 14.937127113342285, 1.0785678625106812], step: 6800, lr: 9.872067337896332e-05
2023-02-28 15:09:51,883 44k INFO ====> Epoch: 104, cost 76.90 s
2023-02-28 15:11:06,230 44k INFO ====> Epoch: 105, cost 74.35 s
2023-02-28 15:12:20,303 44k INFO ====> Epoch: 106, cost 74.07 s
2023-02-28 15:12:31,345 44k INFO Train Epoch: 107 [6%]
2023-02-28 15:12:31,346 44k INFO Losses: [2.4770545959472656, 2.184659719467163, 10.05259895324707, 15.645376205444336, 0.7222138047218323], step: 7000, lr: 9.868365775378495e-05
2023-02-28 15:13:36,420 44k INFO ====> Epoch: 107, cost 76.12 s
2023-02-28 15:14:52,579 44k INFO ====> Epoch: 108, cost 76.16 s
2023-02-28 15:16:07,451 44k INFO ====> Epoch: 109, cost 74.87 s
2023-02-28 15:16:19,287 44k INFO Train Epoch: 110 [9%]
2023-02-28 15:16:19,289 44k INFO Losses: [2.5279159545898438, 2.0886292457580566, 10.978144645690918, 17.581748962402344, 0.8634505271911621], step: 7200, lr: 9.864665600773098e-05
2023-02-28 15:16:24,760 44k INFO Saving model and optimizer state at iteration 110 to ./logs/44k/G_7200.pth
2023-02-28 15:16:27,250 44k INFO Saving model and optimizer state at iteration 110 to ./logs/44k/D_7200.pth
2023-02-28 15:16:29,468 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_4800.pth
2023-02-28 15:16:29,471 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_4800.pth
2023-02-28 15:17:32,975 44k INFO ====> Epoch: 110, cost 85.52 s
2023-02-28 15:18:46,680 44k INFO ====> Epoch: 111, cost 73.71 s
2023-02-28 15:20:00,460 44k INFO ====> Epoch: 112, cost 73.78 s
2023-02-28 15:20:14,959 44k INFO Train Epoch: 113 [12%]
2023-02-28 15:20:14,960 44k INFO Losses: [2.7952322959899902, 2.2553482055664062, 10.7861909866333, 16.28217315673828, 0.7031867504119873], step: 7400, lr: 9.86096681355974e-05
2023-02-28 15:21:15,553 44k INFO ====> Epoch: 113, cost 75.09 s
2023-02-28 15:22:32,196 44k INFO ====> Epoch: 114, cost 76.64 s
2023-02-28 15:23:46,822 44k INFO ====> Epoch: 115, cost 74.63 s
2023-02-28 15:24:03,180 44k INFO Train Epoch: 116 [15%]
2023-02-28 15:24:03,182 44k INFO Losses: [2.4615769386291504, 2.0242230892181396, 9.838528633117676, 16.86975860595703, 0.7818409204483032], step: 7600, lr: 9.857269413218213e-05
2023-02-28 15:25:01,334 44k INFO ====> Epoch: 116, cost 74.51 s
2023-02-28 15:26:15,168 44k INFO ====> Epoch: 117, cost 73.83 s
2023-02-28 15:27:29,804 44k INFO ====> Epoch: 118, cost 74.64 s
2023-02-28 15:27:50,288 44k INFO Train Epoch: 119 [18%]
2023-02-28 15:27:50,289 44k INFO Losses: [2.6016998291015625, 2.0932233333587646, 9.31592845916748, 16.06891632080078, 0.817284882068634], step: 7800, lr: 9.853573399228505e-05
2023-02-28 15:28:46,776 44k INFO ====> Epoch: 119, cost 76.97 s
2023-02-28 15:30:00,444 44k INFO ====> Epoch: 120, cost 73.67 s
2023-02-28 15:31:14,743 44k INFO ====> Epoch: 121, cost 74.30 s
2023-02-28 15:31:35,174 44k INFO Train Epoch: 122 [21%]
2023-02-28 15:31:35,175 44k INFO Losses: [2.605755567550659, 2.092508554458618, 7.069595813751221, 15.72784423828125, 1.1065024137496948], step: 8000, lr: 9.8498787710708e-05
2023-02-28 15:31:41,386 44k INFO Saving model and optimizer state at iteration 122 to ./logs/44k/G_8000.pth
2023-02-28 15:31:43,732 44k INFO Saving model and optimizer state at iteration 122 to ./logs/44k/D_8000.pth
2023-02-28 15:31:46,095 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_5600.pth
2023-02-28 15:31:46,098 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_5600.pth
2023-02-28 15:32:41,200 44k INFO ====> Epoch: 122, cost 86.46 s
2023-02-28 15:33:54,906 44k INFO ====> Epoch: 123, cost 73.71 s
2023-02-28 15:35:09,086 44k INFO ====> Epoch: 124, cost 74.18 s
2023-02-28 15:35:33,341 44k INFO Train Epoch: 125 [24%]
2023-02-28 15:35:33,342 44k INFO Losses: [2.624086380004883, 2.195646286010742, 10.454461097717285, 14.863266944885254, 1.1154903173446655], step: 8200, lr: 9.846185528225477e-05
2023-02-28 15:36:25,735 44k INFO ====> Epoch: 125, cost 76.65 s
2023-02-28 15:37:39,876 44k INFO ====> Epoch: 126, cost 74.14 s
2023-02-28 15:38:53,400 44k INFO ====> Epoch: 127, cost 73.52 s
2023-02-28 15:39:18,526 44k INFO Train Epoch: 128 [27%]
2023-02-28 15:39:18,527 44k INFO Losses: [2.624419927597046, 2.0473601818084717, 10.896341323852539, 16.386354446411133, 0.5846019387245178], step: 8400, lr: 9.842493670173108e-05
2023-02-28 15:40:08,349 44k INFO ====> Epoch: 128, cost 74.95 s
2023-02-28 15:41:23,754 44k INFO ====> Epoch: 129, cost 75.40 s
2023-02-28 15:42:39,964 44k INFO ====> Epoch: 130, cost 76.21 s
2023-02-28 15:43:06,871 44k INFO Train Epoch: 131 [30%]
2023-02-28 15:43:06,873 44k INFO Losses: [2.453179121017456, 2.1952531337738037, 10.993245124816895, 16.908966064453125, 0.8238515853881836], step: 8600, lr: 9.838803196394459e-05
2023-02-28 15:43:55,611 44k INFO ====> Epoch: 131, cost 75.65 s
2023-02-28 15:45:09,644 44k INFO ====> Epoch: 132, cost 74.03 s
2023-02-28 15:46:24,242 44k INFO ====> Epoch: 133, cost 74.60 s
2023-02-28 15:46:54,971 44k INFO Train Epoch: 134 [33%]
2023-02-28 15:46:54,973 44k INFO Losses: [2.430457353591919, 2.2943060398101807, 13.018928527832031, 18.355567932128906, 1.0122909545898438], step: 8800, lr: 9.835114106370493e-05
2023-02-28 15:47:01,573 44k INFO Saving model and optimizer state at iteration 134 to ./logs/44k/G_8800.pth
2023-02-28 15:47:03,779 44k INFO Saving model and optimizer state at iteration 134 to ./logs/44k/D_8800.pth
2023-02-28 15:47:06,534 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_6400.pth
2023-02-28 15:47:06,537 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_6400.pth
2023-02-28 15:47:52,977 44k INFO ====> Epoch: 134, cost 88.74 s
2023-02-28 15:49:09,902 44k INFO ====> Epoch: 135, cost 76.93 s
2023-02-28 15:50:24,704 44k INFO ====> Epoch: 136, cost 74.80 s
2023-02-28 15:50:55,746 44k INFO Train Epoch: 137 [36%]
2023-02-28 15:50:55,747 44k INFO Losses: [2.2673099040985107, 2.4235212802886963, 11.738456726074219, 16.664871215820312, 0.9037197828292847], step: 9000, lr: 9.831426399582366e-05
2023-02-28 15:51:39,495 44k INFO ====> Epoch: 137, cost 74.79 s
2023-02-28 15:52:53,418 44k INFO ====> Epoch: 138, cost 73.92 s
2023-02-28 15:54:07,701 44k INFO ====> Epoch: 139, cost 74.28 s
2023-02-28 15:54:42,270 44k INFO Train Epoch: 140 [39%]
2023-02-28 15:54:42,271 44k INFO Losses: [2.5136141777038574, 2.0283584594726562, 11.8695707321167, 18.233121871948242, 0.9541251063346863], step: 9200, lr: 9.827740075511432e-05
2023-02-28 15:55:24,807 44k INFO ====> Epoch: 140, cost 77.11 s
2023-02-28 15:56:39,674 44k INFO ====> Epoch: 141, cost 74.87 s
2023-02-28 15:57:53,591 44k INFO ====> Epoch: 142, cost 73.92 s
2023-02-28 15:58:29,457 44k INFO Train Epoch: 143 [42%]
2023-02-28 15:58:29,459 44k INFO Losses: [2.7191567420959473, 1.9613689184188843, 12.014168739318848, 16.903831481933594, 0.7805163264274597], step: 9400, lr: 9.824055133639235e-05
2023-02-28 15:59:09,758 44k INFO ====> Epoch: 143, cost 76.17 s
2023-02-28 16:00:26,356 44k INFO ====> Epoch: 144, cost 76.60 s
2023-02-28 16:01:41,524 44k INFO ====> Epoch: 145, cost 75.17 s
2023-02-28 16:02:18,602 44k INFO Train Epoch: 146 [45%]
2023-02-28 16:02:18,604 44k INFO Losses: [2.4962189197540283, 2.2066287994384766, 8.785140037536621, 18.4150447845459, 0.9340229630470276], step: 9600, lr: 9.820371573447515e-05
2023-02-28 16:02:25,194 44k INFO Saving model and optimizer state at iteration 146 to ./logs/44k/G_9600.pth
2023-02-28 16:02:27,387 44k INFO Saving model and optimizer state at iteration 146 to ./logs/44k/D_9600.pth
2023-02-28 16:02:29,692 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_7200.pth
2023-02-28 16:02:29,695 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_7200.pth
2023-02-28 16:03:08,070 44k INFO ====> Epoch: 146, cost 86.55 s
2023-02-28 16:04:21,511 44k INFO ====> Epoch: 147, cost 73.44 s
2023-02-28 16:05:35,229 44k INFO ====> Epoch: 148, cost 73.72 s
2023-02-28 16:06:14,966 44k INFO Train Epoch: 149 [48%]
2023-02-28 16:06:14,968 44k INFO Losses: [2.4135518074035645, 2.1313223838806152, 8.576716423034668, 15.54837703704834, 0.7402218580245972], step: 9800, lr: 9.816689394418209e-05
2023-02-28 16:06:49,942 44k INFO ====> Epoch: 149, cost 74.71 s
2023-02-28 16:08:06,917 44k INFO ====> Epoch: 150, cost 76.97 s
2023-02-28 16:09:21,436 44k INFO ====> Epoch: 151, cost 74.52 s
2023-02-28 16:10:03,185 44k INFO Train Epoch: 152 [52%]
2023-02-28 16:10:03,186 44k INFO Losses: [2.6875929832458496, 1.9698684215545654, 12.922085762023926, 16.81633758544922, 0.5604832172393799], step: 10000, lr: 9.813008596033443e-05
2023-02-28 16:10:36,147 44k INFO ====> Epoch: 152, cost 74.71 s
2023-02-28 16:11:49,961 44k INFO ====> Epoch: 153, cost 73.81 s
2023-02-28 16:13:05,976 44k INFO ====> Epoch: 154, cost 76.01 s
2023-02-28 16:13:50,847 44k INFO Train Epoch: 155 [55%]
2023-02-28 16:13:50,849 44k INFO Losses: [2.4670228958129883, 2.375901460647583, 8.73001480102539, 15.435790061950684, 1.0661587715148926], step: 10200, lr: 9.809329177775541e-05
2023-02-28 16:14:22,209 44k INFO ====> Epoch: 155, cost 76.23 s
2023-02-28 16:15:36,696 44k INFO ====> Epoch: 156, cost 74.49 s
2023-02-28 16:16:51,062 44k INFO ====> Epoch: 157, cost 74.37 s
2023-02-28 16:17:37,245 44k INFO Train Epoch: 158 [58%]
2023-02-28 16:17:37,246 44k INFO Losses: [2.82281231880188, 2.1057395935058594, 6.731410980224609, 15.62610149383545, 0.5069025158882141], step: 10400, lr: 9.80565113912702e-05
2023-02-28 16:17:43,964 44k INFO Saving model and optimizer state at iteration 158 to ./logs/44k/G_10400.pth
2023-02-28 16:17:46,181 44k INFO Saving model and optimizer state at iteration 158 to ./logs/44k/D_10400.pth
2023-02-28 16:17:48,821 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_8000.pth
2023-02-28 16:17:48,823 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_8000.pth
2023-02-28 16:18:18,719 44k INFO ====> Epoch: 158, cost 87.66 s
2023-02-28 16:19:33,485 44k INFO ====> Epoch: 159, cost 74.77 s
2023-02-28 16:20:49,767 44k INFO ====> Epoch: 160, cost 76.28 s
2023-02-28 16:21:37,390 44k INFO Train Epoch: 161 [61%]
2023-02-28 16:21:37,392 44k INFO Losses: [2.5345299243927, 2.018198013305664, 9.566350936889648, 16.32387924194336, 0.9715358018875122], step: 10600, lr: 9.801974479570593e-05
2023-02-28 16:22:04,560 44k INFO ====> Epoch: 161, cost 74.79 s
2023-02-28 16:23:18,198 44k INFO ====> Epoch: 162, cost 73.64 s
2023-02-28 16:24:32,370 44k INFO ====> Epoch: 163, cost 74.17 s
2023-02-28 16:25:24,037 44k INFO Train Epoch: 164 [64%]
2023-02-28 16:25:24,039 44k INFO Losses: [2.678898811340332, 1.9506888389587402, 7.440878391265869, 15.816774368286133, 0.9422857165336609], step: 10800, lr: 9.798299198589162e-05
2023-02-28 16:25:49,570 44k INFO ====> Epoch: 164, cost 77.20 s
2023-02-28 16:27:04,964 44k INFO ====> Epoch: 165, cost 75.40 s
2023-02-28 16:28:18,899 44k INFO ====> Epoch: 166, cost 73.93 s
2023-02-28 16:29:11,418 44k INFO Train Epoch: 167 [67%]
2023-02-28 16:29:11,420 44k INFO Losses: [2.67657208442688, 2.0174965858459473, 8.824281692504883, 16.363262176513672, 1.1555246114730835], step: 11000, lr: 9.794625295665828e-05
2023-02-28 16:29:33,997 44k INFO ====> Epoch: 167, cost 75.10 s
2023-02-28 16:30:49,622 44k INFO ====> Epoch: 168, cost 75.62 s
2023-02-28 16:32:03,969 44k INFO ====> Epoch: 169, cost 74.35 s
2023-02-28 16:32:57,588 44k INFO Train Epoch: 170 [70%]
2023-02-28 16:32:57,590 44k INFO Losses: [2.7399706840515137, 2.2036898136138916, 8.227840423583984, 14.57651138305664, 0.6206619143486023], step: 11200, lr: 9.790952770283884e-05
2023-02-28 16:33:03,353 44k INFO Saving model and optimizer state at iteration 170 to ./logs/44k/G_11200.pth
2023-02-28 16:33:06,068 44k INFO Saving model and optimizer state at iteration 170 to ./logs/44k/D_11200.pth
2023-02-28 16:33:08,215 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_8800.pth
2023-02-28 16:33:08,545 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_8800.pth
2023-02-28 16:33:29,796 44k INFO ====> Epoch: 170, cost 85.83 s
2023-02-28 16:34:43,240 44k INFO ====> Epoch: 171, cost 73.44 s
2023-02-28 16:35:57,219 44k INFO ====> Epoch: 172, cost 73.98 s
2023-03-01 02:01:15,330 44k INFO {'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 1234, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': False, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'kokomi': 0}, 'model_dir': './logs/44k'}
2023-03-01 02:01:40,554 44k INFO Loaded checkpoint './logs/44k/G_11200.pth' (iteration 170)
2023-03-01 02:01:47,252 44k INFO Loaded checkpoint './logs/44k/D_11200.pth' (iteration 170)
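The 02:01 block is a restart, not a new run: the identical config is re-logged and training resumes from the newest checkpoint pair, G_11200/D_11200 at iteration 170. (The resumed lr, 9.78973e-05, sits one decay step below the 9.79095e-05 logged before the interruption, presumably an off-by-one in how the scheduler's last_epoch is restored.) A minimal resume sketch in plain PyTorch; the checkpoint keys follow the VITS-family save format and are assumed, not confirmed by this log:

import glob
import os
import re
import torch

def latest_checkpoint(model_dir: str, prefix: str) -> str:
    # Pick the highest-numbered file, e.g. ./logs/44k/G_11200.pth here.
    step = lambda p: int(re.search(r"_(\d+)\.pth$", p).group(1))
    return max(glob.glob(os.path.join(model_dir, f"{prefix}_*.pth")), key=step)

def resume(model: torch.nn.Module, optim: torch.optim.Optimizer, path: str) -> int:
    ckpt = torch.load(path, map_location="cpu")
    model.load_state_dict(ckpt["model"])          # key names assumed (VITS-style)
    optim.load_state_dict(ckpt["optimization"])
    print(f"Loaded checkpoint '{path}' (iteration {ckpt['iteration']})")
    return ckpt["iteration"]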
2023-03-01 02:02:55,544 44k INFO Train Epoch: 170 [70%]
2023-03-01 02:02:55,545 44k INFO Losses: [2.42437744140625, 2.112516403198242, 11.52952766418457, 17.163972854614258, 0.8818821907043457], step: 11200, lr: 9.789728901187598e-05
2023-03-01 02:03:02,949 44k INFO Saving model and optimizer state at iteration 170 to ./logs/44k/G_11200.pth
2023-03-01 02:03:05,521 44k INFO Saving model and optimizer state at iteration 170 to ./logs/44k/D_11200.pth
2023-03-01 02:03:32,910 44k INFO ====> Epoch: 170, cost 137.58 s
2023-03-01 02:04:48,162 44k INFO ====> Epoch: 171, cost 75.25 s
2023-03-01 02:06:03,459 44k INFO ====> Epoch: 172, cost 75.30 s
2023-03-01 02:07:01,188 44k INFO Train Epoch: 173 [73%]
2023-03-01 02:07:01,189 44k INFO Losses: [2.639918565750122, 1.7068700790405273, 7.884442329406738, 14.869682312011719, 0.9139379858970642], step: 11400, lr: 9.786058211724074e-05
2023-03-01 02:07:19,956 44k INFO ====> Epoch: 173, cost 76.50 s
2023-03-01 02:08:35,584 44k INFO ====> Epoch: 174, cost 75.63 s
2023-03-01 02:09:50,615 44k INFO ====> Epoch: 175, cost 75.03 s
2023-03-01 02:10:48,488 44k INFO Train Epoch: 176 [76%]
2023-03-01 02:10:48,490 44k INFO Losses: [2.4752862453460693, 2.171187400817871, 9.323185920715332, 16.752338409423828, 0.7409180402755737], step: 11600, lr: 9.782388898597041e-05
2023-03-01 02:11:05,925 44k INFO ====> Epoch: 176, cost 75.31 s
2023-03-01 02:12:20,436 44k INFO ====> Epoch: 177, cost 74.51 s
2023-03-01 02:13:33,737 44k INFO ====> Epoch: 178, cost 73.30 s
2023-03-01 02:14:33,287 44k INFO Train Epoch: 179 [79%]
2023-03-01 02:14:33,288 44k INFO Losses: [2.545363187789917, 2.0379843711853027, 7.348099708557129, 15.076837539672852, 0.7552559971809387], step: 11800, lr: 9.778720961290439e-05
2023-03-01 02:14:47,821 44k INFO ====> Epoch: 179, cost 74.08 s
2023-03-01 02:16:01,858 44k INFO ====> Epoch: 180, cost 74.04 s
2023-03-01 02:17:16,874 44k INFO ====> Epoch: 181, cost 75.02 s
2023-03-01 02:18:18,799 44k INFO Train Epoch: 182 [82%]
2023-03-01 02:18:18,800 44k INFO Losses: [2.546231746673584, 2.0883071422576904, 7.342122554779053, 15.470341682434082, 0.8302884697914124], step: 12000, lr: 9.7750543992884e-05
2023-03-01 02:18:23,796 44k INFO Saving model and optimizer state at iteration 182 to ./logs/44k/G_12000.pth
2023-03-01 02:18:25,995 44k INFO Saving model and optimizer state at iteration 182 to ./logs/44k/D_12000.pth
2023-03-01 02:18:28,429 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_9600.pth
2023-03-01 02:18:28,430 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_9600.pth
2023-03-01 02:18:43,634 44k INFO ====> Epoch: 182, cost 86.76 s
2023-03-01 02:20:00,720 44k INFO ====> Epoch: 183, cost 77.09 s
2023-03-01 02:21:16,271 44k INFO ====> Epoch: 184, cost 75.55 s
2023-03-01 02:22:19,995 44k INFO Train Epoch: 185 [85%]
2023-03-01 02:22:19,996 44k INFO Losses: [2.6187498569488525, 2.0655298233032227, 4.116823196411133, 13.925782203674316, 0.625942051410675], step: 12200, lr: 9.771389212075249e-05
2023-03-01 02:22:31,555 44k INFO ====> Epoch: 185, cost 75.28 s
2023-03-01 02:23:45,579 44k INFO ====> Epoch: 186, cost 74.02 s
2023-03-01 02:24:59,434 44k INFO ====> Epoch: 187, cost 73.85 s
2023-03-01 02:26:05,269 44k INFO Train Epoch: 188 [88%]
2023-03-01 02:26:05,270 44k INFO Losses: [2.7771339416503906, 1.8332241773605347, 5.971385955810547, 14.140213966369629, 0.9858102202415466], step: 12400, lr: 9.767725399135504e-05
2023-03-01 02:26:14,724 44k INFO ====> Epoch: 188, cost 75.29 s
2023-03-01 02:27:28,294 44k INFO ====> Epoch: 189, cost 73.57 s
2023-03-01 02:28:41,540 44k INFO ====> Epoch: 190, cost 73.25 s
2023-03-01 02:29:49,735 44k INFO Train Epoch: 191 [91%]
2023-03-01 02:29:49,736 44k INFO Losses: [2.677638292312622, 1.9580601453781128, 7.813292503356934, 14.108997344970703, 1.0552946329116821], step: 12600, lr: 9.764062959953878e-05
2023-03-01 02:29:56,347 44k INFO ====> Epoch: 191, cost 74.81 s
2023-03-01 02:31:09,719 44k INFO ====> Epoch: 192, cost 73.37 s
2023-03-01 02:32:23,137 44k INFO ====> Epoch: 193, cost 73.42 s
2023-03-01 02:33:33,324 44k INFO Train Epoch: 194 [94%]
2023-03-01 02:33:33,325 44k INFO Losses: [2.533267021179199, 1.9380160570144653, 7.807443618774414, 16.014238357543945, 0.814891517162323], step: 12800, lr: 9.760401894015275e-05
2023-03-01 02:33:39,643 44k INFO Saving model and optimizer state at iteration 194 to ./logs/44k/G_12800.pth
2023-03-01 02:33:42,143 44k INFO Saving model and optimizer state at iteration 194 to ./logs/44k/D_12800.pth
2023-03-01 02:33:44,620 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_10400.pth
2023-03-01 02:33:44,623 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_10400.pth
2023-03-01 02:33:48,247 44k INFO ====> Epoch: 194, cost 85.11 s
2023-03-01 02:35:08,234 44k INFO ====> Epoch: 195, cost 79.99 s
2023-03-01 02:36:22,758 44k INFO ====> Epoch: 196, cost 74.52 s
2023-03-01 02:37:35,036 44k INFO Train Epoch: 197 [97%]
2023-03-01 02:37:35,038 44k INFO Losses: [2.646730422973633, 2.078885316848755, 4.741165637969971, 13.590365409851074, 0.8392080068588257], step: 13000, lr: 9.756742200804793e-05
2023-03-01 02:37:37,793 44k INFO ====> Epoch: 197, cost 75.04 s
2023-03-01 02:38:51,341 44k INFO ====> Epoch: 198, cost 73.55 s
2023-03-01 02:40:04,847 44k INFO ====> Epoch: 199, cost 73.51 s
2023-03-01 02:41:20,260 44k INFO ====> Epoch: 200, cost 75.41 s
2023-03-01 02:41:26,085 44k INFO Train Epoch: 201 [0%]
2023-03-01 02:41:26,087 44k INFO Losses: [2.704281806945801, 2.043736696243286, 10.83144474029541, 18.60426902770996, 0.9464163780212402], step: 13200, lr: 9.75186474432275e-05
2023-03-01 02:42:35,135 44k INFO ====> Epoch: 201, cost 74.87 s
2023-03-01 02:43:48,720 44k INFO ====> Epoch: 202, cost 73.59 s
2023-03-01 02:45:03,109 44k INFO ====> Epoch: 203, cost 74.39 s
2023-03-01 02:45:10,642 44k INFO Train Epoch: 204 [3%]
2023-03-01 02:45:10,644 44k INFO Losses: [2.3756022453308105, 2.4554734230041504, 11.081616401672363, 15.974448204040527, 0.7035766243934631], step: 13400, lr: 9.748208252143241e-05
2023-03-01 02:46:18,161 44k INFO ====> Epoch: 204, cost 75.05 s
2023-03-01 02:47:32,049 44k INFO ====> Epoch: 205, cost 73.89 s
2023-03-01 02:48:46,216 44k INFO ====> Epoch: 206, cost 74.17 s
2023-03-01 02:48:55,688 44k INFO Train Epoch: 207 [6%]
2023-03-01 02:48:55,690 44k INFO Losses: [2.6072726249694824, 1.84812593460083, 10.361230850219727, 14.994611740112305, 0.9061499238014221], step: 13600, lr: 9.744553130976908e-05
2023-03-01 02:49:02,312 44k INFO Saving model and optimizer state at iteration 207 to ./logs/44k/G_13600.pth
2023-03-01 02:49:04,444 44k INFO Saving model and optimizer state at iteration 207 to ./logs/44k/D_13600.pth
2023-03-01 02:49:06,804 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_11200.pth
2023-03-01 02:49:06,807 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_11200.pth
2023-03-01 05:10:26,513 44k INFO {'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 2940192363, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': True, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'kokomi': 0}, 'model_dir': './logs/44k'}
2023-03-01 05:10:47,479 44k INFO Loaded checkpoint './logs/44k/G_13600.pth' (iteration 207)
2023-03-01 05:11:00,891 44k INFO Loaded checkpoint './logs/44k/D_13600.pth' (iteration 207)
2023-03-01 05:11:23,259 44k INFO Train Epoch: 207 [6%]
2023-03-01 05:11:23,261 44k INFO Losses: [2.653352975845337, 1.9217225313186646, 8.817423820495605, 14.982244491577148, 0.5979122519493103], step: 13600, lr: 9.743335061835535e-05
2023-03-01 05:11:31,580 44k INFO Saving model and optimizer state at iteration 207 to ./logs/44k/G_13600.pth
2023-03-01 05:11:34,123 44k INFO Saving model and optimizer state at iteration 207 to ./logs/44k/D_13600.pth
2023-03-01 05:12:46,940 44k INFO ====> Epoch: 207, cost 140.43 s
2023-03-01 05:13:48,211 44k INFO ====> Epoch: 208, cost 61.27 s
2023-03-01 05:14:48,755 44k INFO ====> Epoch: 209, cost 60.54 s
2023-03-01 05:15:00,755 44k INFO Train Epoch: 210 [9%]
2023-03-01 05:15:00,757 44k INFO Losses: [2.6160221099853516, 2.237356185913086, 12.533127784729004, 16.945981979370117, 0.9404376745223999], step: 13800, lr: 9.739681767887146e-05
2023-03-01 05:15:50,071 44k INFO ====> Epoch: 210, cost 61.32 s
2023-03-01 05:16:53,055 44k INFO ====> Epoch: 211, cost 62.98 s
2023-03-01 09:00:14,257 44k INFO {'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 1234, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': False, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'nyaru': 0, 'huiyu': 1, 'nen': 2, 'paimon': 3, 'yunhao': 4}, 'model_dir': './logs/44k'}
2023-03-01 09:00:34,410 44k INFO Loaded checkpoint './logs/44k/G_19200.pth' (iteration 291)
2023-03-01 09:02:20,075 44k INFO {'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 2940192363, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': True, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'kokomi': 0}, 'model_dir': './logs/44k'}
2023-03-01 09:02:27,158 44k INFO Loaded checkpoint './logs/44k/G_19200.pth' (iteration 291)
2023-03-01 09:02:35,727 44k INFO Loaded checkpoint './logs/44k/D_19200.pth' (iteration 291)
2023-03-01 09:03:58,256 44k INFO Train Epoch: 291 [91%]
2023-03-01 09:03:58,258 44k INFO Losses: [2.714459180831909, 1.876783847808838, 8.321307182312012, 14.657979965209961, 0.7581849098205566], step: 19200, lr: 9.636739066648303e-05
2023-03-01 09:04:06,862 44k INFO Saving model and optimizer state at iteration 291 to ./logs/44k/G_19200.pth
2023-03-01 09:04:09,256 44k INFO Saving model and optimizer state at iteration 291 to ./logs/44k/D_19200.pth
2023-03-01 09:04:11,554 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_12000.pth
2023-03-01 09:04:11,556 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_12000.pth
2023-03-01 09:04:18,396 44k INFO ====> Epoch: 291, cost 118.32 s
2023-03-01 09:05:19,207 44k INFO ====> Epoch: 292, cost 60.81 s
2023-03-01 09:06:17,139 44k INFO ====> Epoch: 293, cost 57.93 s
2023-03-01 09:07:13,083 44k INFO Train Epoch: 294 [94%]
2023-03-01 09:07:13,085 44k INFO Losses: [2.4118154048919678, 2.15513277053833, 10.235026359558105, 16.197677612304688, 0.7712147235870361], step: 19400, lr: 9.633125741201631e-05
2023-03-01 09:07:17,345 44k INFO ====> Epoch: 294, cost 60.21 s
2023-03-01 09:08:15,393 44k INFO ====> Epoch: 295, cost 58.05 s
2023-03-01 09:09:13,587 44k INFO ====> Epoch: 296, cost 58.19 s
2023-03-01 09:10:10,380 44k INFO Train Epoch: 297 [97%]
2023-03-01 09:10:10,381 44k INFO Losses: [2.6719000339508057, 2.5296897888183594, 7.979108810424805, 14.461162567138672, 0.4655492901802063], step: 19600, lr: 9.629513770582634e-05
2023-03-01 09:10:13,310 44k INFO ====> Epoch: 297, cost 59.72 s
2023-03-01 09:11:14,166 44k INFO ====> Epoch: 298, cost 60.86 s
2023-03-01 09:12:15,970 44k INFO ====> Epoch: 299, cost 61.80 s
2023-03-01 09:13:16,813 44k INFO ====> Epoch: 300, cost 60.84 s
2023-03-01 09:13:22,918 44k INFO Train Epoch: 301 [0%]
2023-03-01 09:13:22,919 44k INFO Losses: [2.7301554679870605, 2.100914716720581, 10.44601058959961, 15.447175979614258, 0.70086669921875], step: 19800, lr: 9.62469991638903e-05
2023-03-01 09:14:17,831 44k INFO ====> Epoch: 301, cost 61.02 s
2023-03-01 09:15:20,028 44k INFO ====> Epoch: 302, cost 62.20 s
2023-03-01 09:16:18,818 44k INFO ====> Epoch: 303, cost 58.79 s
2023-03-01 09:16:25,731 44k INFO Train Epoch: 304 [3%]
2023-03-01 09:16:25,733 44k INFO Losses: [2.3955142498016357, 2.3032424449920654, 14.448984146118164, 17.651521682739258, 0.8693689703941345], step: 20000, lr: 9.621091105059392e-05
2023-03-01 09:16:31,768 44k INFO Saving model and optimizer state at iteration 304 to ./logs/44k/G_20000.pth
2023-03-01 09:16:34,508 44k INFO Saving model and optimizer state at iteration 304 to ./logs/44k/D_20000.pth
2023-03-01 09:16:36,709 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_12800.pth
2023-03-01 09:16:36,711 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_12800.pth
2023-03-01 09:17:32,152 44k INFO ====> Epoch: 304, cost 73.33 s
2023-03-01 09:18:32,754 44k INFO ====> Epoch: 305, cost 60.60 s
2023-03-01 09:19:33,218 44k INFO ====> Epoch: 306, cost 60.46 s
2023-03-01 09:19:43,350 44k INFO Train Epoch: 307 [6%]
2023-03-01 09:19:43,351 44k INFO Losses: [2.7478885650634766, 1.8886593580245972, 7.8157057762146, 14.090278625488281, 0.5759338736534119], step: 20200, lr: 9.617483646864849e-05
2023-03-01 09:20:34,405 44k INFO ====> Epoch: 307, cost 61.19 s
2023-03-01 09:21:34,257 44k INFO ====> Epoch: 308, cost 59.85 s
2023-03-01 09:22:35,269 44k INFO ====> Epoch: 309, cost 61.01 s
2023-03-01 09:22:45,522 44k INFO Train Epoch: 310 [9%]
2023-03-01 09:22:45,523 44k INFO Losses: [2.6224944591522217, 2.2484076023101807, 11.001341819763184, 15.169388771057129, 0.8562605977058411], step: 20400, lr: 9.613877541298036e-05
2023-03-01 09:23:35,334 44k INFO ====> Epoch: 310, cost 60.07 s
2023-03-01 09:24:34,057 44k INFO ====> Epoch: 311, cost 58.72 s
2023-03-01 09:25:33,200 44k INFO ====> Epoch: 312, cost 59.14 s
2023-03-01 09:25:45,525 44k INFO Train Epoch: 313 [12%]
2023-03-01 09:25:45,527 44k INFO Losses: [2.5372068881988525, 2.157820224761963, 6.680222511291504, 14.781391143798828, 0.5170071125030518], step: 20600, lr: 9.61027278785178e-05
2023-03-01 09:26:34,518 44k INFO ====> Epoch: 313, cost 61.32 s
2023-03-01 09:27:32,901 44k INFO ====> Epoch: 314, cost 58.38 s
2023-03-01 09:28:31,562 44k INFO ====> Epoch: 315, cost 58.66 s
2023-03-01 09:28:45,707 44k INFO Train Epoch: 316 [15%]
2023-03-01 09:28:45,709 44k INFO Losses: [2.7442286014556885, 1.9079498052597046, 7.322719097137451, 15.376588821411133, 0.6032760143280029], step: 20800, lr: 9.606669386019102e-05
2023-03-01 09:28:51,577 44k INFO Saving model and optimizer state at iteration 316 to ./logs/44k/G_20800.pth
2023-03-01 09:28:53,880 44k INFO Saving model and optimizer state at iteration 316 to ./logs/44k/D_20800.pth
2023-03-01 09:28:56,143 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_13600.pth
2023-03-01 09:28:56,145 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_13600.pth
2023-03-01 09:29:44,709 44k INFO ====> Epoch: 316, cost 73.15 s
2023-03-01 09:30:45,501 44k INFO ====> Epoch: 317, cost 60.79 s
2023-03-01 09:31:44,236 44k INFO ====> Epoch: 318, cost 58.73 s
2023-03-01 09:32:00,733 44k INFO Train Epoch: 319 [18%]
2023-03-01 09:32:00,735 44k INFO Losses: [2.814589262008667, 1.9871160984039307, 10.013313293457031, 16.331682205200195, 1.0382276773452759], step: 21000, lr: 9.603067335293209e-05
2023-03-01 09:32:44,574 44k INFO ====> Epoch: 319, cost 60.34 s
2023-03-01 09:33:43,330 44k INFO ====> Epoch: 320, cost 58.76 s
2023-03-01 09:34:43,656 44k INFO ====> Epoch: 321, cost 60.33 s
2023-03-01 09:35:01,208 44k INFO Train Epoch: 322 [21%]
2023-03-01 09:35:01,209 44k INFO Losses: [2.662564992904663, 2.0016989707946777, 8.979185104370117, 14.829337120056152, 0.40535154938697815], step: 21200, lr: 9.599466635167497e-05
2023-03-01 09:35:43,995 44k INFO ====> Epoch: 322, cost 60.34 s
2023-03-01 09:36:42,613 44k INFO ====> Epoch: 323, cost 58.62 s
2023-03-01 09:37:41,720 44k INFO ====> Epoch: 324, cost 59.11 s
2023-03-01 09:38:01,169 44k INFO Train Epoch: 325 [24%]
2023-03-01 09:38:01,170 44k INFO Losses: [2.5041823387145996, 1.9535661935806274, 8.63454532623291, 14.871014595031738, 0.48563066124916077], step: 21400, lr: 9.595867285135558e-05
2023-03-01 09:38:42,834 44k INFO ====> Epoch: 325, cost 61.11 s
2023-03-01 09:39:41,166 44k INFO ====> Epoch: 326, cost 58.33 s
2023-03-01 09:40:39,644 44k INFO ====> Epoch: 327, cost 58.48 s
2023-03-01 09:41:00,609 44k INFO Train Epoch: 328 [27%]
2023-03-01 09:41:00,611 44k INFO Losses: [2.670797109603882, 1.8436683416366577, 8.354750633239746, 14.883867263793945, 0.8324559926986694], step: 21600, lr: 9.592269284691169e-05
2023-03-01 09:41:05,413 44k INFO Saving model and optimizer state at iteration 328 to ./logs/44k/G_21600.pth
2023-03-01 09:41:07,826 44k INFO Saving model and optimizer state at iteration 328 to ./logs/44k/D_21600.pth
2023-03-01 09:41:10,428 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_19200.pth
2023-03-01 09:41:10,430 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_19200.pth
2023-03-01 09:41:52,051 44k INFO ====> Epoch: 328, cost 72.41 s
2023-03-01 09:42:53,583 44k INFO ====> Epoch: 329, cost 61.53 s
2023-03-01 09:43:53,779 44k INFO ====> Epoch: 330, cost 60.20 s
2023-03-01 09:44:17,770 44k INFO Train Epoch: 331 [30%]
2023-03-01 09:44:17,772 44k INFO Losses: [2.6422293186187744, 2.1201345920562744, 7.852966785430908, 13.874168395996094, 0.48024967312812805], step: 21800, lr: 9.588672633328296e-05
2023-03-01 09:44:55,533 44k INFO ====> Epoch: 331, cost 61.75 s
2023-03-01 09:45:55,617 44k INFO ====> Epoch: 332, cost 60.08 s
2023-03-01 09:46:57,142 44k INFO ====> Epoch: 333, cost 61.52 s
2023-03-01 09:47:21,554 44k INFO Train Epoch: 334 [33%]
2023-03-01 09:47:21,556 44k INFO Losses: [2.6043567657470703, 2.2250912189483643, 8.535768508911133, 14.207335472106934, 1.096189260482788], step: 22000, lr: 9.5850773305411e-05
2023-03-01 09:47:57,829 44k INFO ====> Epoch: 334, cost 60.69 s
2023-03-01 09:48:57,035 44k INFO ====> Epoch: 335, cost 59.21 s
2023-03-01 09:49:56,646 44k INFO ====> Epoch: 336, cost 59.61 s
2023-03-01 09:50:23,414 44k INFO Train Epoch: 337 [36%]
2023-03-01 09:50:23,415 44k INFO Losses: [2.5704874992370605, 2.0810489654541016, 7.006008625030518, 15.303852081298828, 1.0223956108093262], step: 22200, lr: 9.581483375823925e-05
2023-03-01 09:50:57,897 44k INFO ====> Epoch: 337, cost 61.25 s
2023-03-01 09:51:58,019 44k INFO ====> Epoch: 338, cost 60.12 s
2023-03-01 09:52:57,621 44k INFO ====> Epoch: 339, cost 59.60 s
2023-03-01 09:53:24,743 44k INFO Train Epoch: 340 [39%]
2023-03-01 09:53:24,745 44k INFO Losses: [2.5108706951141357, 2.047982931137085, 9.669243812561035, 16.2545108795166, 0.6553496718406677], step: 22400, lr: 9.577890768671308e-05
2023-03-01 09:53:30,971 44k INFO Saving model and optimizer state at iteration 340 to ./logs/44k/G_22400.pth
2023-03-01 09:53:33,381 44k INFO Saving model and optimizer state at iteration 340 to ./logs/44k/D_22400.pth
2023-03-01 09:53:35,567 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_20000.pth
2023-03-01 09:53:35,570 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_20000.pth
2023-03-01 09:54:11,167 44k INFO ====> Epoch: 340, cost 73.55 s
2023-03-01 09:55:13,729 44k INFO ====> Epoch: 341, cost 62.56 s
2023-03-01 09:56:13,140 44k INFO ====> Epoch: 342, cost 59.41 s
2023-03-01 09:56:42,086 44k INFO Train Epoch: 343 [42%]
2023-03-01 09:56:42,088 44k INFO Losses: [2.6323752403259277, 2.3115041255950928, 10.204834938049316, 15.963228225708008, 0.7380309700965881], step: 22600, lr: 9.574299508577979e-05
2023-03-01 09:57:14,197 44k INFO ====> Epoch: 343, cost 61.06 s
2023-03-01 09:58:38,406 44k INFO {'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 3601, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': True, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'kokomi': 0}, 'model_dir': './logs/44k'}
2023-03-01 09:58:49,389 44k INFO Loaded checkpoint './logs/44k/G_22400.pth' (iteration 340)
2023-03-01 09:58:50,582 44k INFO Loaded checkpoint './logs/44k/D_22400.pth' (iteration 340)
2023-03-01 09:59:30,311 44k INFO Train Epoch: 340 [39%]
2023-03-01 09:59:30,311 44k INFO Losses: [2.993380546569824, 2.2395238876342773, 5.807098388671875, 14.473311424255371, 0.6003932356834412], step: 22400, lr: 9.576693532325224e-05
2023-03-01 09:59:36,609 44k INFO Saving model and optimizer state at iteration 340 to ./logs/44k/G_22400.pth
2023-03-01 09:59:39,213 44k INFO Saving model and optimizer state at iteration 340 to ./logs/44k/D_22400.pth
2023-03-01 10:00:28,955 44k INFO ====> Epoch: 340, cost 110.55 s
2023-03-01 10:01:28,066 44k INFO ====> Epoch: 341, cost 59.11 s
2023-03-01 10:01:45,854 44k INFO {'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 3601, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': True, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 1, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'kokomi': 0}, 'model_dir': './logs/44k'}
2023-03-01 10:01:54,313 44k INFO Loaded checkpoint './logs/44k/G_22400.pth' (iteration 340)
2023-03-01 10:01:55,766 44k INFO Loaded checkpoint './logs/44k/D_22400.pth' (iteration 340)
2023-03-01 10:02:35,551 44k INFO Train Epoch: 340 [39%]
2023-03-01 10:02:35,553 44k INFO Losses: [2.675992965698242, 2.213108777999878, 5.444459915161133, 13.684792518615723, 0.3924337923526764], step: 22400, lr: 9.575496445633683e-05
2023-03-01 10:02:42,981 44k INFO Saving model and optimizer state at iteration 340 to ./logs/44k/G_22400.pth
2023-03-01 10:02:45,388 44k INFO Saving model and optimizer state at iteration 340 to ./logs/44k/D_22400.pth
2023-03-01 10:03:33,947 44k INFO ====> Epoch: 340, cost 108.10 s
2023-03-01 10:04:35,379 44k INFO ====> Epoch: 341, cost 61.43 s
2023-03-01 10:05:06,859 44k INFO {'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 3601, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': True, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 1}, 'spk': {'kokomi': 0}, 'model_dir': './logs/44k'}
2023-03-01 10:05:14,455 44k INFO emb_g.weight is not in the checkpoint
2023-03-01 10:05:14,534 44k INFO Loaded checkpoint './logs/44k/G_22400.pth' (iteration 340)
2023-03-01 10:05:15,719 44k INFO Loaded checkpoint './logs/44k/D_22400.pth' (iteration 340)
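
The emb_g.weight warning above is expected here rather than a sign of corruption: this restart sets n_speakers to 1 (and a later one tries 256) while G_22400.pth was saved with a 200-row speaker-embedding table, so the tensor shapes no longer match and the loader keeps the freshly initialized embedding instead. Once n_speakers returns to 200 below, the warning disappears. The loader copies weights name-by-name and skips what it cannot place; a minimal sketch of that pattern (assuming the checkpoint stores its weights under a 'model' key, as so-vits-svc's utils do), not the project's exact code:

# Hedged sketch of tolerant checkpoint loading: copy only parameters whose
# names and shapes match, and report the rest -- the behaviour that produces
# "emb_g.weight is not in the checkpoint" when n_speakers (and hence the
# speaker embedding's shape) changes between runs.
import torch

def load_compatible(model, ckpt_path):
    saved = torch.load(ckpt_path, map_location="cpu")["model"]  # assumed key
    own = model.state_dict()
    for name, param in own.items():
        if name in saved and saved[name].shape == param.shape:
            own[name] = saved[name]
        else:
            print(f"{name} is not in the checkpoint")
    model.load_state_dict(own)
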
2023-03-01 10:06:58,706 44k INFO {'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 3601, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': True, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 0}, 'spk': {'kokomi': 0}, 'model_dir': './logs/44k'}
2023-03-01 10:07:48,814 44k INFO {'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 3601, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': True, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 256}, 'spk': {'kokomi': 0}, 'model_dir': './logs/44k'}
2023-03-01 10:07:55,740 44k INFO emb_g.weight is not in the checkpoint
2023-03-01 10:07:55,812 44k INFO Loaded checkpoint './logs/44k/G_22400.pth' (iteration 340)
2023-03-01 10:07:57,124 44k INFO Loaded checkpoint './logs/44k/D_22400.pth' (iteration 340)
2023-03-01 10:08:50,749 44k INFO {'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 3601, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': True, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'kokomi': 0}, 'model_dir': './logs/44k'}
2023-03-01 10:08:59,151 44k INFO Loaded checkpoint './logs/44k/G_22400.pth' (iteration 340)
2023-03-01 10:09:00,316 44k INFO Loaded checkpoint './logs/44k/D_22400.pth' (iteration 340)
2023-03-01 10:09:39,850 44k INFO Train Epoch: 340 [39%]
2023-03-01 10:09:39,851 44k INFO Losses: [2.4996232986450195, 2.197064161300659, 5.590308666229248, 13.342594146728516, 0.29234984517097473], step: 22400, lr: 9.574299508577979e-05
2023-03-01 10:09:46,076 44k INFO Saving model and optimizer state at iteration 340 to ./logs/44k/G_22400.pth
2023-03-01 10:09:48,661 44k INFO Saving model and optimizer state at iteration 340 to ./logs/44k/D_22400.pth
2023-03-01 10:10:36,299 44k INFO ====> Epoch: 340, cost 105.55 s
2023-03-01 10:11:35,129 44k INFO ====> Epoch: 341, cost 58.83 s
2023-03-01 10:12:35,046 44k INFO ====> Epoch: 342, cost 59.92 s
2023-03-01 10:13:03,994 44k INFO Train Epoch: 343 [42%]
2023-03-01 10:13:03,996 44k INFO Losses: [2.6156153678894043, 2.0350868701934814, 9.163956642150879, 16.11782455444336, 0.8058428764343262], step: 22600, lr: 9.570709595038851e-05
2023-03-01 10:13:34,859 44k INFO ====> Epoch: 343, cost 59.81 s
2023-03-01 10:14:33,925 44k INFO ====> Epoch: 344, cost 59.07 s
2023-03-01 10:15:33,065 44k INFO ====> Epoch: 345, cost 59.14 s
2023-03-01 10:16:03,520 44k INFO Train Epoch: 346 [45%]
2023-03-01 10:16:03,522 44k INFO Losses: [2.737438201904297, 2.0276880264282227, 6.612607955932617, 15.230156898498535, 0.4350101053714752], step: 22800, lr: 9.56712102754903e-05
2023-03-01 10:16:34,280 44k INFO ====> Epoch: 346, cost 61.22 s
2023-03-01 10:17:33,558 44k INFO ====> Epoch: 347, cost 59.28 s
2023-03-01 10:18:33,032 44k INFO ====> Epoch: 348, cost 59.47 s
2023-03-01 10:19:04,876 44k INFO Train Epoch: 349 [48%]
2023-03-01 10:19:04,878 44k INFO Losses: [2.3136959075927734, 2.3560190200805664, 12.35429859161377, 16.747446060180664, 0.7114447355270386], step: 23000, lr: 9.56353380560381e-05
2023-03-01 10:19:33,123 44k INFO ====> Epoch: 349, cost 60.09 s
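
The lr column tracks the configured exponential schedule: with lr_decay 0.999875 applied once per epoch, two entries n epochs apart should differ by a factor of 0.999875**n, and the epoch-343 and epoch-349 values above agree with that to about eight digits. A quick check (after resumes the effective decay count can drift by a few, so expect approximate rather than exact agreement across restarts):

# Check the exponential LR schedule between two logged epochs:
# lr(e2) ~= lr(e1) * lr_decay**(e2 - e1), with lr_decay = 0.999875.
lr_343, lr_349, decay = 9.570709595038851e-05, 9.56353380560381e-05, 0.999875
predicted = lr_343 * decay ** (349 - 343)
print(predicted, lr_349)  # both ~9.5635338e-05
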
2023-03-01 10:20:32,969 44k INFO ====> Epoch: 350, cost 59.85 s
2023-03-01 10:21:34,866 44k INFO ====> Epoch: 351, cost 61.90 s
2023-03-01 10:22:10,450 44k INFO Train Epoch: 352 [52%]
2023-03-01 10:22:10,452 44k INFO Losses: [2.7048182487487793, 2.020225763320923, 10.444074630737305, 15.365710258483887, 0.9079698324203491], step: 23200, lr: 9.559947928698674e-05
2023-03-01 10:22:15,295 44k INFO Saving model and optimizer state at iteration 352 to ./logs/44k/G_23200.pth
2023-03-01 10:22:17,666 44k INFO Saving model and optimizer state at iteration 352 to ./logs/44k/D_23200.pth
2023-03-01 10:22:19,820 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_20800.pth
2023-03-01 10:22:19,822 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_20800.pth
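
These save/delete pairs are the keep_ckpts: 3 policy in action: a new G/D pair is written every eval_interval (800) steps, and the oldest pair beyond the newest three is removed, so G_20800/D_20800 fall out as G_23200/D_23200 arrive and disk usage stays bounded. A sketch of that rotation, assuming only the G_<step>.pth / D_<step>.pth naming visible in this log:

# Hedged sketch of the keep_ckpts rotation: keep the newest `keep` G/D pairs
# (by step number in the filename) and delete the rest, mirroring the
# ".. Free up space by deleting ckpt" lines in this log.
import os
import re
from pathlib import Path

def clean_checkpoints(model_dir="./logs/44k", keep=3):
    for prefix in ("G_", "D_"):
        ckpts = sorted(
            (p for p in Path(model_dir).glob(f"{prefix}*.pth")
             if re.fullmatch(rf"{prefix}\d+\.pth", p.name)),
            key=lambda p: int(p.stem.split("_")[1]),
        )
        for p in ckpts[:-keep]:
            print(f".. Free up space by deleting ckpt {p}")
            os.remove(p)
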
2023-03-01 10:22:50,023 44k INFO ====> Epoch: 352, cost 75.16 s
2023-03-01 10:23:51,172 44k INFO ====> Epoch: 353, cost 61.15 s
2023-03-01 10:32:29,088 44k INFO {'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 3601, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': True, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'kokomi': 0}, 'model_dir': './logs/44k'}
2023-03-01 10:32:39,201 44k INFO Loaded checkpoint './logs/44k/G_23200.pth' (iteration 352)
2023-03-01 10:32:42,331 44k INFO Loaded checkpoint './logs/44k/D_23200.pth' (iteration 352)
2023-03-01 10:33:36,189 44k INFO Train Epoch: 352 [52%]
2023-03-01 10:33:36,190 44k INFO Losses: [2.735295295715332, 2.472989082336426, 9.938817977905273, 14.813241958618164, 0.6433053016662598], step: 23200, lr: 9.558752935207586e-05
2023-03-01 10:33:43,653 44k INFO Saving model and optimizer state at iteration 352 to ./logs/44k/G_23200.pth
2023-03-01 10:33:46,136 44k INFO Saving model and optimizer state at iteration 352 to ./logs/44k/D_23200.pth
2023-03-01 10:34:25,493 44k INFO ====> Epoch: 352, cost 116.41 s
2023-03-01 10:35:26,170 44k INFO ====> Epoch: 353, cost 60.68 s
2023-03-01 10:36:27,508 44k INFO ====> Epoch: 354, cost 61.34 s
2023-03-01 10:37:06,317 44k INFO Train Epoch: 355 [55%]
2023-03-01 10:37:06,319 44k INFO Losses: [2.406062364578247, 2.1142289638519287, 8.363036155700684, 16.152191162109375, 0.7565690875053406], step: 23400, lr: 9.555168850904757e-05
2023-03-01 10:37:30,790 44k INFO ====> Epoch: 355, cost 63.28 s
2023-03-01 10:38:32,246 44k INFO ====> Epoch: 356, cost 61.46 s
2023-03-01 10:39:32,210 44k INFO ====> Epoch: 357, cost 59.96 s
2023-03-01 10:40:08,740 44k INFO Train Epoch: 358 [58%]
2023-03-01 10:40:08,742 44k INFO Losses: [2.5132408142089844, 2.1595797538757324, 10.749167442321777, 14.639748573303223, 0.3269229233264923], step: 23600, lr: 9.551586110465545e-05
2023-03-01 10:40:31,991 44k INFO ====> Epoch: 358, cost 59.78 s
2023-03-01 10:41:32,670 44k INFO ====> Epoch: 359, cost 60.68 s
2023-03-01 10:42:32,177 44k INFO ====> Epoch: 360, cost 59.51 s
2023-03-01 10:43:11,369 44k INFO Train Epoch: 361 [61%]
2023-03-01 10:43:11,372 44k INFO Losses: [2.5821666717529297, 2.0704190731048584, 12.333581924438477, 17.22037124633789, 0.8824537992477417], step: 23800, lr: 9.548004713386062e-05
2023-03-01 10:43:32,805 44k INFO ====> Epoch: 361, cost 60.63 s
2023-03-01 10:44:33,356 44k INFO ====> Epoch: 362, cost 60.55 s
2023-03-01 10:45:35,949 44k INFO ====> Epoch: 363, cost 62.59 s
2023-03-01 10:46:17,307 44k INFO Train Epoch: 364 [64%]
2023-03-01 10:46:17,308 44k INFO Losses: [2.8095600605010986, 2.058666944503784, 10.181412696838379, 16.171974182128906, 0.580284595489502], step: 24000, lr: 9.544424659162614e-05
2023-03-01 10:46:23,105 44k INFO Saving model and optimizer state at iteration 364 to ./logs/44k/G_24000.pth
2023-03-01 10:46:26,059 44k INFO Saving model and optimizer state at iteration 364 to ./logs/44k/D_24000.pth
2023-03-01 10:46:28,595 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_21600.pth
2023-03-01 10:46:28,597 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_21600.pth
2023-03-01 10:46:51,074 44k INFO ====> Epoch: 364, cost 75.12 s
2023-03-01 10:47:53,061 44k INFO ====> Epoch: 365, cost 61.99 s
2023-03-01 10:48:53,298 44k INFO ====> Epoch: 366, cost 60.24 s
2023-03-01 10:49:35,959 44k INFO Train Epoch: 367 [67%]
2023-03-01 10:49:35,961 44k INFO Losses: [2.6258316040039062, 1.951851487159729, 6.876603603363037, 13.964946746826172, 0.7942814230918884], step: 24200, lr: 9.540845947291691e-05
2023-03-01 10:49:54,502 44k INFO ====> Epoch: 367, cost 61.20 s
2023-03-01 10:50:53,718 44k INFO ====> Epoch: 368, cost 59.22 s
2023-03-01 10:51:53,449 44k INFO ====> Epoch: 369, cost 59.73 s
2023-03-01 10:52:37,397 44k INFO Train Epoch: 370 [70%]
2023-03-01 10:52:37,399 44k INFO Losses: [2.692796468734741, 2.1265370845794678, 6.302618026733398, 13.37375545501709, 0.6948615908622742], step: 24400, lr: 9.537268577269974e-05
2023-03-01 10:52:53,906 44k INFO ====> Epoch: 370, cost 60.46 s
2023-03-01 10:53:53,689 44k INFO ====> Epoch: 371, cost 59.78 s
2023-03-01 10:54:53,432 44k INFO ====> Epoch: 372, cost 59.74 s
2023-03-01 10:55:39,228 44k INFO Train Epoch: 373 [73%]
2023-03-01 10:55:39,230 44k INFO Losses: [2.520315170288086, 2.121858835220337, 11.418971061706543, 17.02090072631836, 0.8488866686820984], step: 24600, lr: 9.533692548594333e-05
2023-03-01 10:55:54,148 44k INFO ====> Epoch: 373, cost 60.72 s
2023-03-01 10:56:55,071 44k INFO ====> Epoch: 374, cost 60.92 s
2023-03-01 10:57:56,024 44k INFO ====> Epoch: 375, cost 60.95 s
2023-03-01 10:58:43,173 44k INFO Train Epoch: 376 [76%]
2023-03-01 10:58:43,174 44k INFO Losses: [2.6445298194885254, 2.0626070499420166, 13.288188934326172, 17.02774429321289, 0.8616889715194702], step: 24800, lr: 9.530117860761828e-05
2023-03-01 10:58:48,052 44k INFO Saving model and optimizer state at iteration 376 to ./logs/44k/G_24800.pth
2023-03-01 10:58:50,405 44k INFO Saving model and optimizer state at iteration 376 to ./logs/44k/D_24800.pth
2023-03-01 10:58:52,708 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_22400.pth
2023-03-01 10:58:52,710 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_22400.pth
2023-03-01 10:59:09,607 44k INFO ====> Epoch: 376, cost 73.58 s
2023-03-01 11:00:09,665 44k INFO ====> Epoch: 377, cost 60.06 s
2023-03-01 11:01:08,472 44k INFO ====> Epoch: 378, cost 58.81 s
2023-03-01 11:01:56,204 44k INFO Train Epoch: 379 [79%]
2023-03-01 11:01:56,206 44k INFO Losses: [2.7561092376708984, 1.932814121246338, 4.349462509155273, 13.73442268371582, 0.6902963519096375], step: 25000, lr: 9.526544513269702e-05
2023-03-01 11:02:08,108 44k INFO ====> Epoch: 379, cost 59.64 s
2023-03-01 11:03:07,036 44k INFO ====> Epoch: 380, cost 58.93 s
2023-03-01 11:04:06,340 44k INFO ====> Epoch: 381, cost 59.30 s
2023-03-01 11:04:56,075 44k INFO Train Epoch: 382 [82%]
2023-03-01 11:04:56,077 44k INFO Losses: [2.5582737922668457, 2.202568292617798, 11.3698148727417, 15.842337608337402, 0.6124117374420166], step: 25200, lr: 9.522972505615393e-05
2023-03-01 11:05:06,948 44k INFO ====> Epoch: 382, cost 60.61 s
2023-03-01 11:06:06,484 44k INFO ====> Epoch: 383, cost 59.54 s
2023-03-01 11:07:06,731 44k INFO ====> Epoch: 384, cost 60.25 s
2023-03-01 11:07:59,726 44k INFO Train Epoch: 385 [85%]
2023-03-01 11:07:59,728 44k INFO Losses: [2.7923583984375, 1.9000351428985596, 8.982157707214355, 15.928044319152832, 0.5768665075302124], step: 25400, lr: 9.519401837296521e-05
2023-03-01 11:08:07,934 44k INFO ====> Epoch: 385, cost 61.20 s
2023-03-01 11:09:09,219 44k INFO ====> Epoch: 386, cost 61.28 s
2023-03-01 11:10:09,732 44k INFO ====> Epoch: 387, cost 60.51 s
2023-03-01 11:11:02,665 44k INFO Train Epoch: 388 [88%]
2023-03-01 11:11:02,666 44k INFO Losses: [2.7367920875549316, 2.0357213020324707, 9.263534545898438, 15.61760139465332, 0.8847417235374451], step: 25600, lr: 9.515832507810904e-05
2023-03-01 11:11:08,424 44k INFO Saving model and optimizer state at iteration 388 to ./logs/44k/G_25600.pth
2023-03-01 11:11:11,429 44k INFO Saving model and optimizer state at iteration 388 to ./logs/44k/D_25600.pth
2023-03-01 11:11:13,910 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_23200.pth
2023-03-01 11:11:13,911 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_23200.pth
2023-03-01 11:11:20,142 44k INFO ====> Epoch: 388, cost 70.41 s
2023-03-01 11:12:24,775 44k INFO ====> Epoch: 389, cost 64.63 s
2023-03-01 11:13:24,140 44k INFO ====> Epoch: 390, cost 59.37 s
2023-03-01 11:14:17,912 44k INFO Train Epoch: 391 [91%]
2023-03-01 11:14:17,914 44k INFO Losses: [2.6800880432128906, 1.9326128959655762, 9.511737823486328, 15.87286376953125, 0.9449729919433594], step: 25800, lr: 9.512264516656537e-05
2023-03-01 11:14:23,800 44k INFO ====> Epoch: 391, cost 59.66 s
2023-03-01 11:15:23,089 44k INFO ====> Epoch: 392, cost 59.29 s
2023-03-01 11:16:22,485 44k INFO ====> Epoch: 393, cost 59.40 s
2023-03-01 11:17:19,473 44k INFO Train Epoch: 394 [94%]
2023-03-01 11:17:19,474 44k INFO Losses: [2.617201566696167, 2.2207987308502197, 11.711480140686035, 16.177356719970703, 0.5072634220123291], step: 26000, lr: 9.508697863331611e-05
2023-03-01 11:17:23,261 44k INFO ====> Epoch: 394, cost 60.78 s
2023-03-01 11:18:23,382 44k INFO ====> Epoch: 395, cost 60.12 s
2023-03-01 11:19:23,619 44k INFO ====> Epoch: 396, cost 60.24 s
2023-03-01 11:20:23,028 44k INFO Train Epoch: 397 [97%]
2023-03-01 11:20:23,030 44k INFO Losses: [2.7058663368225098, 2.2946617603302, 5.8908772468566895, 13.171356201171875, 0.6678426861763], step: 26200, lr: 9.505132547334502e-05
2023-03-01 11:20:25,015 44k INFO ====> Epoch: 397, cost 61.40 s
2023-03-01 11:21:26,715 44k INFO ====> Epoch: 398, cost 61.70 s
2023-03-01 11:22:28,307 44k INFO ====> Epoch: 399, cost 61.59 s
2023-03-01 11:23:28,549 44k INFO ====> Epoch: 400, cost 60.24 s
2023-03-01 11:23:34,071 44k INFO Train Epoch: 401 [0%]
2023-03-01 11:23:34,072 44k INFO Losses: [2.4617769718170166, 2.0439717769622803, 9.65514087677002, 16.647689819335938, 0.2396409511566162], step: 26400, lr: 9.500380872092753e-05
2023-03-01 11:23:38,872 44k INFO Saving model and optimizer state at iteration 401 to ./logs/44k/G_26400.pth
2023-03-01 11:23:42,357 44k INFO Saving model and optimizer state at iteration 401 to ./logs/44k/D_26400.pth
2023-03-01 11:23:44,693 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_24000.pth
2023-03-01 11:23:44,695 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_24000.pth
2023-03-01 11:24:42,183 44k INFO ====> Epoch: 401, cost 73.63 s
2023-03-01 11:25:42,213 44k INFO ====> Epoch: 402, cost 60.03 s
2023-03-01 11:26:41,609 44k INFO ====> Epoch: 403, cost 59.40 s
2023-03-01 11:26:48,456 44k INFO Train Epoch: 404 [3%]
2023-03-01 11:26:48,458 44k INFO Losses: [2.553297519683838, 2.071713447570801, 10.953821182250977, 16.601877212524414, 0.5710247755050659], step: 26600, lr: 9.496818674577514e-05
2023-03-01 11:27:41,802 44k INFO ====> Epoch: 404, cost 60.19 s
2023-03-01 11:28:41,174 44k INFO ====> Epoch: 405, cost 59.37 s
2023-03-01 11:29:40,691 44k INFO ====> Epoch: 406, cost 59.52 s
2023-03-01 11:29:49,491 44k INFO Train Epoch: 407 [6%]
2023-03-01 11:29:49,493 44k INFO Losses: [2.6071436405181885, 2.1059563159942627, 10.831452369689941, 14.779434204101562, 0.7541150450706482], step: 26800, lr: 9.493257812719373e-05
2023-03-01 11:30:41,195 44k INFO ====> Epoch: 407, cost 60.50 s
2023-03-01 11:31:40,647 44k INFO ====> Epoch: 408, cost 59.45 s
2023-03-01 11:32:39,963 44k INFO ====> Epoch: 409, cost 59.32 s
2023-03-01 11:32:51,572 44k INFO Train Epoch: 410 [9%]
2023-03-01 11:32:51,574 44k INFO Losses: [2.521911144256592, 2.143777847290039, 7.3539557456970215, 13.901455879211426, 0.702579915523529], step: 27000, lr: 9.489698286017521e-05
2023-03-01 11:33:40,592 44k INFO ====> Epoch: 410, cost 60.63 s
2023-03-01 11:34:41,302 44k INFO ====> Epoch: 411, cost 60.71 s
2023-03-01 11:35:42,654 44k INFO ====> Epoch: 412, cost 61.35 s
2023-03-01 11:35:55,757 44k INFO Train Epoch: 413 [12%]
2023-03-01 11:35:55,759 44k INFO Losses: [2.4729509353637695, 2.128504991531372, 8.535633087158203, 15.058328628540039, 0.4478798806667328], step: 27200, lr: 9.486140093971337e-05
2023-03-01 11:36:02,394 44k INFO Saving model and optimizer state at iteration 413 to ./logs/44k/G_27200.pth
2023-03-01 11:36:04,611 44k INFO Saving model and optimizer state at iteration 413 to ./logs/44k/D_27200.pth
2023-03-01 11:36:06,884 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_24800.pth
2023-03-01 11:36:06,886 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_24800.pth
2023-03-01 11:36:59,036 44k INFO ====> Epoch: 413, cost 76.38 s
2023-03-01 11:37:58,759 44k INFO ====> Epoch: 414, cost 59.72 s
2023-03-01 11:38:57,868 44k INFO ====> Epoch: 415, cost 59.11 s
2023-03-01 11:39:11,971 44k INFO Train Epoch: 416 [15%]
2023-03-01 11:39:11,973 44k INFO Losses: [2.61643123626709, 2.0023515224456787, 10.21114444732666, 15.97014045715332, 0.9590216875076294], step: 27400, lr: 9.482583236080386e-05
2023-03-01 11:39:58,277 44k INFO ====> Epoch: 416, cost 60.41 s
2023-03-01 11:40:57,092 44k INFO ====> Epoch: 417, cost 58.81 s
2023-03-01 11:41:56,393 44k INFO ====> Epoch: 418, cost 59.30 s
2023-03-01 11:42:12,190 44k INFO Train Epoch: 419 [18%]
2023-03-01 11:42:12,192 44k INFO Losses: [2.5653939247131348, 2.2571423053741455, 8.391668319702148, 14.131061553955078, 0.738591194152832], step: 27600, lr: 9.479027711844423e-05
2023-03-01 11:42:56,397 44k INFO ====> Epoch: 419, cost 60.00 s
2023-03-01 11:43:55,597 44k INFO ====> Epoch: 420, cost 59.20 s
2023-03-01 11:44:54,856 44k INFO ====> Epoch: 421, cost 59.26 s
2023-03-01 11:45:12,086 44k INFO Train Epoch: 422 [21%]
2023-03-01 11:45:12,088 44k INFO Losses: [2.482163906097412, 2.293987274169922, 13.730241775512695, 16.66456413269043, 0.8006649017333984], step: 27800, lr: 9.475473520763392e-05
2023-03-01 11:45:54,349 44k INFO ====> Epoch: 422, cost 59.49 s
2023-03-01 11:46:53,278 44k INFO ====> Epoch: 423, cost 58.93 s
2023-03-01 11:47:53,108 44k INFO ====> Epoch: 424, cost 59.83 s
2023-03-01 11:48:13,258 44k INFO Train Epoch: 425 [24%]
2023-03-01 11:48:13,260 44k INFO Losses: [2.45951247215271, 2.3967111110687256, 11.057534217834473, 15.734918594360352, 0.6340961456298828], step: 28000, lr: 9.471920662337418e-05
2023-03-01 11:48:19,079 44k INFO Saving model and optimizer state at iteration 425 to ./logs/44k/G_28000.pth
2023-03-01 11:48:21,272 44k INFO Saving model and optimizer state at iteration 425 to ./logs/44k/D_28000.pth
2023-03-01 11:48:23,787 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_25600.pth
2023-03-01 11:48:23,789 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_25600.pth
2023-03-01 11:49:07,694 44k INFO ====> Epoch: 425, cost 74.59 s
2023-03-01 11:50:07,955 44k INFO ====> Epoch: 426, cost 60.26 s
2023-03-01 11:51:08,490 44k INFO ====> Epoch: 427, cost 60.53 s
2023-03-01 11:51:31,083 44k INFO Train Epoch: 428 [27%]
2023-03-01 11:51:31,084 44k INFO Losses: [2.7843778133392334, 2.053317070007324, 9.86362361907959, 15.2178373336792, 0.7958869934082031], step: 28200, lr: 9.468369136066823e-05
2023-03-01 11:52:10,646 44k INFO ====> Epoch: 428, cost 62.16 s
2023-03-01 11:53:10,884 44k INFO ====> Epoch: 429, cost 60.24 s
2023-03-01 11:54:10,091 44k INFO ====> Epoch: 430, cost 59.21 s
2023-03-01 11:54:31,738 44k INFO Train Epoch: 431 [30%]
2023-03-01 11:54:31,739 44k INFO Losses: [2.532853364944458, 2.2656667232513428, 10.75473403930664, 16.409469604492188, 0.4878678023815155], step: 28400, lr: 9.464818941452107e-05
2023-03-01 11:55:09,653 44k INFO ====> Epoch: 431, cost 59.56 s
2023-03-01 11:56:08,297 44k INFO ====> Epoch: 432, cost 58.64 s
2023-03-01 11:57:07,307 44k INFO ====> Epoch: 433, cost 59.01 s
2023-03-01 11:57:30,631 44k INFO Train Epoch: 434 [33%]
2023-03-01 11:57:30,633 44k INFO Losses: [2.8186745643615723, 1.82132887840271, 11.16453742980957, 16.751678466796875, 0.5652674436569214], step: 28600, lr: 9.461270077993965e-05
2023-03-01 11:58:07,437 44k INFO ====> Epoch: 434, cost 60.13 s
2023-03-01 11:59:06,474 44k INFO ====> Epoch: 435, cost 59.04 s
2023-03-01 12:00:05,554 44k INFO ====> Epoch: 436, cost 59.08 s
2023-03-01 12:00:31,420 44k INFO Train Epoch: 437 [36%]
2023-03-01 12:00:31,421 44k INFO Losses: [2.6765542030334473, 2.281714677810669, 5.441047668457031, 15.316182136535645, 0.6776236891746521], step: 28800, lr: 9.457722545193272e-05
2023-03-01 12:00:36,344 44k INFO Saving model and optimizer state at iteration 437 to ./logs/44k/G_28800.pth
2023-03-01 12:00:39,031 44k INFO Saving model and optimizer state at iteration 437 to ./logs/44k/D_28800.pth
2023-03-01 12:00:41,426 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_26400.pth
2023-03-01 12:00:41,428 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_26400.pth
2023-03-01 12:01:19,501 44k INFO ====> Epoch: 437, cost 73.95 s
2023-03-01 12:02:18,934 44k INFO ====> Epoch: 438, cost 59.43 s
2023-03-01 12:03:18,237 44k INFO ====> Epoch: 439, cost 59.30 s
2023-03-01 12:03:46,024 44k INFO Train Epoch: 440 [39%]
2023-03-01 12:03:46,026 44k INFO Losses: [2.5623855590820312, 2.1879703998565674, 7.0038018226623535, 15.862749099731445, 0.7107902765274048], step: 29000, lr: 9.454176342551095e-05
2023-03-01 12:04:18,819 44k INFO ====> Epoch: 440, cost 60.58 s
2023-03-01 12:05:19,019 44k INFO ====> Epoch: 441, cost 60.20 s
2023-03-01 12:06:19,533 44k INFO ====> Epoch: 442, cost 60.51 s
2023-03-01 12:06:49,693 44k INFO Train Epoch: 443 [42%]
2023-03-01 12:06:49,695 44k INFO Losses: [2.7032082080841064, 2.017979860305786, 5.5479254722595215, 13.592876434326172, 0.7528047561645508], step: 29200, lr: 9.450631469568687e-05
2023-03-01 12:07:21,416 44k INFO ====> Epoch: 443, cost 61.88 s
2023-03-01 12:08:21,951 44k INFO ====> Epoch: 444, cost 60.54 s
2023-03-01 12:09:21,328 44k INFO ====> Epoch: 445, cost 59.38 s
2023-03-01 12:09:51,648 44k INFO Train Epoch: 446 [45%]
2023-03-01 12:09:51,650 44k INFO Losses: [2.560692310333252, 1.9196646213531494, 9.254467010498047, 15.230669975280762, 0.5908154249191284], step: 29400, lr: 9.44708792574749e-05
2023-03-01 12:10:20,946 44k INFO ====> Epoch: 446, cost 59.62 s
2023-03-01 12:11:19,812 44k INFO ====> Epoch: 447, cost 58.87 s
2023-03-01 12:12:19,054 44k INFO ====> Epoch: 448, cost 59.24 s
2023-03-01 12:12:50,899 44k INFO Train Epoch: 449 [48%]
2023-03-01 12:12:50,901 44k INFO Losses: [2.4187440872192383, 2.3309295177459717, 12.498779296875, 17.357587814331055, 0.21674950420856476], step: 29600, lr: 9.443545710589128e-05
2023-03-01 12:12:55,646 44k INFO Saving model and optimizer state at iteration 449 to ./logs/44k/G_29600.pth
2023-03-01 12:12:59,324 44k INFO Saving model and optimizer state at iteration 449 to ./logs/44k/D_29600.pth
2023-03-01 12:13:01,545 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_27200.pth
2023-03-01 12:13:01,547 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_27200.pth
2023-03-01 12:13:32,144 44k INFO ====> Epoch: 449, cost 73.09 s
2023-03-01 12:14:31,070 44k INFO ====> Epoch: 450, cost 58.93 s
2023-03-01 12:15:30,063 44k INFO ====> Epoch: 451, cost 58.99 s
2023-03-01 12:16:03,477 44k INFO Train Epoch: 452 [52%]
2023-03-01 12:16:03,479 44k INFO Losses: [2.560603618621826, 2.01310396194458, 8.583684921264648, 14.90054988861084, 0.6231300830841064], step: 29800, lr: 9.440004823595418e-05
2023-03-01 12:16:30,079 44k INFO ====> Epoch: 452, cost 60.02 s
2023-03-01 12:17:29,454 44k INFO ====> Epoch: 453, cost 59.37 s
2023-03-01 12:18:29,073 44k INFO ====> Epoch: 454, cost 59.62 s
2023-03-01 12:19:04,166 44k INFO Train Epoch: 455 [55%]
2023-03-01 12:19:04,168 44k INFO Losses: [2.6214823722839355, 2.353670597076416, 8.557900428771973, 15.25704288482666, 0.7305015325546265], step: 30000, lr: 9.436465264268356e-05
2023-03-01 12:19:29,270 44k INFO ====> Epoch: 455, cost 60.20 s
2023-03-01 12:20:29,100 44k INFO ====> Epoch: 456, cost 59.83 s
2023-03-01 12:21:29,239 44k INFO ====> Epoch: 457, cost 60.14 s
2023-03-01 12:22:07,399 44k INFO Train Epoch: 458 [58%]
2023-03-01 12:22:07,400 44k INFO Losses: [2.826925277709961, 2.0527522563934326, 7.553061485290527, 15.126119613647461, 1.0823116302490234], step: 30200, lr: 9.432927032110133e-05
2023-03-01 12:22:30,200 44k INFO ====> Epoch: 458, cost 60.96 s
2023-03-01 12:23:32,767 44k INFO ====> Epoch: 459, cost 62.57 s
2023-03-01 12:24:33,131 44k INFO ====> Epoch: 460, cost 60.36 s
2023-03-01 12:25:11,799 44k INFO Train Epoch: 461 [61%]
2023-03-01 12:25:11,801 44k INFO Losses: [2.5083065032958984, 2.1714558601379395, 11.80184555053711, 16.490991592407227, 0.6738836169242859], step: 30400, lr: 9.42939012662312e-05
2023-03-01 12:25:17,552 44k INFO Saving model and optimizer state at iteration 461 to ./logs/44k/G_30400.pth
2023-03-01 12:25:20,553 44k INFO Saving model and optimizer state at iteration 461 to ./logs/44k/D_30400.pth
2023-03-01 12:25:22,811 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_28000.pth
2023-03-01 12:25:22,814 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_28000.pth
2023-03-01 12:25:47,897 44k INFO ====> Epoch: 461, cost 74.77 s
2023-03-01 12:26:47,248 44k INFO ====> Epoch: 462, cost 59.35 s
2023-03-01 12:27:46,428 44k INFO ====> Epoch: 463, cost 59.18 s
2023-03-01 12:28:26,928 44k INFO Train Epoch: 464 [64%]
2023-03-01 12:28:26,930 44k INFO Losses: [2.6853432655334473, 2.1840994358062744, 7.406452655792236, 16.49937629699707, 0.8056984543800354], step: 30600, lr: 9.425854547309881e-05
2023-03-01 12:28:47,157 44k INFO ====> Epoch: 464, cost 60.73 s
2023-03-01 12:29:46,467 44k INFO ====> Epoch: 465, cost 59.31 s
2023-03-01 12:30:46,120 44k INFO ====> Epoch: 466, cost 59.65 s
2023-03-01 12:31:28,156 44k INFO Train Epoch: 467 [67%]
2023-03-01 12:31:28,157 44k INFO Losses: [2.503626823425293, 2.283301830291748, 11.053704261779785, 16.334251403808594, 0.8259708881378174], step: 30800, lr: 9.422320293673162e-05
2023-03-01 12:31:45,791 44k INFO ====> Epoch: 467, cost 59.67 s
2023-03-01 12:32:45,553 44k INFO ====> Epoch: 468, cost 59.76 s
2023-03-01 12:33:45,697 44k INFO ====> Epoch: 469, cost 60.14 s
2023-03-01 12:34:30,186 44k INFO Train Epoch: 470 [70%]
2023-03-01 12:34:30,187 44k INFO Losses: [2.6647984981536865, 1.8008670806884766, 8.619013786315918, 14.019116401672363, 0.7683737874031067], step: 31000, lr: 9.418787365215894e-05
2023-03-01 12:34:46,752 44k INFO ====> Epoch: 470, cost 61.05 s
2023-03-01 12:35:47,808 44k INFO ====> Epoch: 471, cost 61.06 s
2023-03-01 12:36:49,535 44k INFO ====> Epoch: 472, cost 61.73 s
2023-03-01 12:37:35,795 44k INFO Train Epoch: 473 [73%]
2023-03-01 12:37:35,797 44k INFO Losses: [2.496058464050293, 2.0381667613983154, 8.5133695602417, 15.089336395263672, 0.7004619240760803], step: 31200, lr: 9.4152557614412e-05
2023-03-01 12:37:41,042 44k INFO Saving model and optimizer state at iteration 473 to ./logs/44k/G_31200.pth
2023-03-01 12:37:43,245 44k INFO Saving model and optimizer state at iteration 473 to ./logs/44k/D_31200.pth
2023-03-01 12:37:45,542 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_28800.pth
2023-03-01 12:37:45,544 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_28800.pth
2023-03-01 12:38:04,327 44k INFO ====> Epoch: 473, cost 74.79 s
2023-03-01 12:39:04,103 44k INFO ====> Epoch: 474, cost 59.78 s
2023-03-01 12:40:03,290 44k INFO ====> Epoch: 475, cost 59.19 s
2023-03-01 12:40:49,839 44k INFO Train Epoch: 476 [76%]
2023-03-01 12:40:49,841 44k INFO Losses: [2.4418015480041504, 2.19101619720459, 9.39306354522705, 15.096506118774414, 0.4876495599746704], step: 31400, lr: 9.411725481852385e-05
2023-03-01 12:41:03,144 44k INFO ====> Epoch: 476, cost 59.85 s
2023-03-01 12:42:02,387 44k INFO ====> Epoch: 477, cost 59.24 s
2023-03-01 12:43:01,471 44k INFO ====> Epoch: 478, cost 59.08 s
2023-03-01 12:43:49,202 44k INFO Train Epoch: 479 [79%]
2023-03-01 12:43:49,204 44k INFO Losses: [2.665022850036621, 2.1457877159118652, 5.24586820602417, 14.595362663269043, 0.8567426800727844], step: 31600, lr: 9.408196525952938e-05
2023-03-01 12:44:01,331 44k INFO ====> Epoch: 479, cost 59.86 s
2023-03-01 12:45:00,785 44k INFO ====> Epoch: 480, cost 59.45 s
2023-03-01 12:46:00,446 44k INFO ====> Epoch: 481, cost 59.66 s
2023-03-01 12:46:51,523 44k INFO Train Epoch: 482 [82%]
2023-03-01 12:46:51,524 44k INFO Losses: [2.75923228263855, 2.147505044937134, 10.844860076904297, 16.117279052734375, 0.8281464576721191], step: 31800, lr: 9.404668893246542e-05
2023-03-01 12:47:01,594 44k INFO ====> Epoch: 482, cost 61.15 s
2023-03-01 12:48:02,122 44k INFO ====> Epoch: 483, cost 60.53 s
2023-03-01 12:49:02,617 44k INFO ====> Epoch: 484, cost 60.50 s
2023-03-01 12:49:54,249 44k INFO Train Epoch: 485 [85%]
2023-03-01 12:49:54,250 44k INFO Losses: [2.540709972381592, 2.268244981765747, 8.284003257751465, 14.66291618347168, 0.8275660276412964], step: 32000, lr: 9.401142583237059e-05
2023-03-01 12:49:59,426 44k INFO Saving model and optimizer state at iteration 485 to ./logs/44k/G_32000.pth
2023-03-01 12:50:02,890 44k INFO Saving model and optimizer state at iteration 485 to ./logs/44k/D_32000.pth
2023-03-01 12:50:05,030 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_29600.pth
2023-03-01 12:50:05,032 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_29600.pth
2023-03-01 12:50:13,387 44k INFO ====> Epoch: 485, cost 70.77 s
2023-03-01 12:51:16,412 44k INFO ====> Epoch: 486, cost 63.02 s
2023-03-01 12:52:15,932 44k INFO ====> Epoch: 487, cost 59.52 s
2023-03-01 12:53:08,676 44k INFO Train Epoch: 488 [88%]
2023-03-01 12:53:08,677 44k INFO Losses: [2.6723597049713135, 2.1551945209503174, 8.113043785095215, 14.136974334716797, 0.5009911060333252], step: 32200, lr: 9.397617595428541e-05
2023-03-01 12:53:15,878 44k INFO ====> Epoch: 488, cost 59.95 s
2023-03-01 12:54:14,526 44k INFO ====> Epoch: 489, cost 58.65 s
2023-03-01 12:55:13,142 44k INFO ====> Epoch: 490, cost 58.62 s
2023-03-01 12:56:07,054 44k INFO Train Epoch: 491 [91%]
2023-03-01 12:56:07,056 44k INFO Losses: [2.5764403343200684, 2.0644869804382324, 11.879953384399414, 15.331342697143555, 0.5466853976249695], step: 32400, lr: 9.394093929325224e-05
2023-03-01 12:56:12,840 44k INFO ====> Epoch: 491, cost 59.70 s
2023-03-01 12:57:11,916 44k INFO ====> Epoch: 492, cost 59.08 s
2023-03-01 12:58:11,000 44k INFO ====> Epoch: 493, cost 59.08 s
2023-03-01 12:59:07,508 44k INFO Train Epoch: 494 [94%]
2023-03-01 12:59:07,510 44k INFO Losses: [2.447939395904541, 2.1818184852600098, 8.475127220153809, 16.035709381103516, 0.7269818186759949], step: 32600, lr: 9.39057158443153e-05
2023-03-01 12:59:11,013 44k INFO ====> Epoch: 494, cost 60.01 s
2023-03-01 13:00:10,881 44k INFO ====> Epoch: 495, cost 59.87 s
2023-03-01 13:01:10,052 44k INFO ====> Epoch: 496, cost 59.17 s
2023-03-01 13:02:08,801 44k INFO Train Epoch: 497 [97%]
2023-03-01 13:02:08,802 44k INFO Losses: [2.674241304397583, 2.0308947563171387, 7.6580491065979, 14.979961395263672, 0.7076011300086975], step: 32800, lr: 9.38705056025207e-05
2023-03-01 13:02:13,863 44k INFO Saving model and optimizer state at iteration 497 to ./logs/44k/G_32800.pth
2023-03-01 13:02:17,168 44k INFO Saving model and optimizer state at iteration 497 to ./logs/44k/D_32800.pth
2023-03-01 13:02:19,334 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_30400.pth
2023-03-01 13:02:19,340 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_30400.pth
2023-03-01 13:02:20,672 44k INFO ====> Epoch: 497, cost 70.62 s
2023-03-01 13:03:23,729 44k INFO ====> Epoch: 498, cost 63.06 s
2023-03-01 13:04:23,872 44k INFO ====> Epoch: 499, cost 60.14 s
2023-03-01 13:05:24,535 44k INFO ====> Epoch: 500, cost 60.66 s
2023-03-01 13:05:31,880 44k INFO Train Epoch: 501 [0%]
2023-03-01 13:05:31,882 44k INFO Losses: [2.5453133583068848, 1.9341522455215454, 9.557120323181152, 15.227277755737305, 0.8549346923828125], step: 33000, lr: 9.382357914934599e-05
2023-03-01 13:06:26,121 44k INFO ====> Epoch: 501, cost 61.59 s
2023-03-01 13:07:25,730 44k INFO ====> Epoch: 502, cost 59.61 s
2023-03-01 13:08:24,251 44k INFO ====> Epoch: 503, cost 58.52 s
2023-03-01 13:08:31,128 44k INFO Train Epoch: 504 [3%]
2023-03-01 13:08:31,129 44k INFO Losses: [2.6591885089874268, 2.0255377292633057, 9.025327682495117, 15.610697746276855, 0.687014639377594], step: 33200, lr: 9.3788399704962e-05
2023-03-01 13:09:23,000 44k INFO ====> Epoch: 504, cost 58.75 s
2023-03-01 13:10:21,563 44k INFO ====> Epoch: 505, cost 58.56 s
2023-03-01 13:11:20,322 44k INFO ====> Epoch: 506, cost 58.76 s
2023-03-01 13:11:28,787 44k INFO Train Epoch: 507 [6%]
2023-03-01 13:11:28,790 44k INFO Losses: [2.960423707962036, 2.152158260345459, 8.557804107666016, 15.45103645324707, 0.9053779244422913], step: 33400, lr: 9.37532334512207e-05
2023-03-01 13:12:20,801 44k INFO ====> Epoch: 507, cost 60.48 s
2023-03-01 13:13:19,445 44k INFO ====> Epoch: 508, cost 58.64 s
2023-03-01 13:14:18,532 44k INFO ====> Epoch: 509, cost 59.09 s
2023-03-01 13:14:29,508 44k INFO Train Epoch: 510 [9%]
2023-03-01 13:14:29,509 44k INFO Losses: [2.586200714111328, 2.1638712882995605, 7.734503269195557, 14.737333297729492, 0.5216233730316162], step: 33600, lr: 9.371808038317619e-05
2023-03-01 13:14:34,330 44k INFO Saving model and optimizer state at iteration 510 to ./logs/44k/G_33600.pth
2023-03-01 13:14:36,631 44k INFO Saving model and optimizer state at iteration 510 to ./logs/44k/D_33600.pth
2023-03-01 13:14:39,018 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_31200.pth
2023-03-01 13:14:39,020 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_31200.pth
2023-03-01 13:15:31,019 44k INFO ====> Epoch: 510, cost 72.49 s
2023-03-01 13:16:29,956 44k INFO ====> Epoch: 511, cost 58.94 s
2023-03-01 13:17:28,808 44k INFO ====> Epoch: 512, cost 58.85 s
2023-03-01 13:17:41,197 44k INFO Train Epoch: 513 [12%]
2023-03-01 13:17:41,198 44k INFO Losses: [2.756080150604248, 2.2194111347198486, 10.804746627807617, 15.705421447753906, 0.5175661444664001], step: 33800, lr: 9.368294049588446e-05
2023-03-01 13:18:28,148 44k INFO ====> Epoch: 513, cost 59.34 s
2023-03-01 13:19:27,390 44k INFO ====> Epoch: 514, cost 59.24 s
2023-03-01 13:20:27,153 44k INFO ====> Epoch: 515, cost 59.76 s
2023-03-01 13:20:42,489 44k INFO Train Epoch: 516 [15%]
2023-03-01 13:20:42,490 44k INFO Losses: [2.6303505897521973, 2.0178070068359375, 11.096175193786621, 17.11891746520996, 0.4409589469432831], step: 34000, lr: 9.364781378440336e-05
2023-03-01 13:21:28,516 44k INFO ====> Epoch: 516, cost 61.36 s
2023-03-01 13:22:28,924 44k INFO ====> Epoch: 517, cost 60.41 s
2023-03-01 13:23:28,571 44k INFO ====> Epoch: 518, cost 59.65 s
2023-03-01 13:23:44,697 44k INFO Train Epoch: 519 [18%]
2023-03-01 13:23:44,699 44k INFO Losses: [2.763784170150757, 2.0468761920928955, 6.136360168457031, 14.199729919433594, 0.6999683380126953], step: 34200, lr: 9.361270024379255e-05
2023-03-01 13:24:28,087 44k INFO ====> Epoch: 519, cost 59.52 s
2023-03-01 13:25:26,638 44k INFO ====> Epoch: 520, cost 58.55 s
2023-03-01 13:26:24,831 44k INFO ====> Epoch: 521, cost 58.19 s
2023-03-01 13:26:41,820 44k INFO Train Epoch: 522 [21%]
2023-03-01 13:26:41,822 44k INFO Losses: [2.549745559692383, 2.244152069091797, 11.310295104980469, 16.547887802124023, 0.8038300275802612], step: 34400, lr: 9.357759986911361e-05
2023-03-01 13:26:46,649 44k INFO Saving model and optimizer state at iteration 522 to ./logs/44k/G_34400.pth
2023-03-01 13:26:49,226 44k INFO Saving model and optimizer state at iteration 522 to ./logs/44k/D_34400.pth
2023-03-01 13:26:51,714 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_32000.pth
2023-03-01 13:26:51,722 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_32000.pth
2023-03-01 13:27:37,227 44k INFO ====> Epoch: 522, cost 72.40 s
2023-03-01 13:28:35,835 44k INFO ====> Epoch: 523, cost 58.61 s
2023-03-01 13:29:33,949 44k INFO ====> Epoch: 524, cost 58.11 s
2023-03-01 13:29:52,468 44k INFO Train Epoch: 525 [24%]
2023-03-01 13:29:52,469 44k INFO Losses: [2.5771877765655518, 2.214401960372925, 10.763819694519043, 15.482449531555176, 0.5703819394111633], step: 34600, lr: 9.35425126554299e-05
2023-03-01 13:30:32,871 44k INFO ====> Epoch: 525, cost 58.92 s
2023-03-01 13:31:31,308 44k INFO ====> Epoch: 526, cost 58.44 s
2023-03-01 13:32:29,733 44k INFO ====> Epoch: 527, cost 58.42 s
2023-03-01 13:32:49,862 44k INFO Train Epoch: 528 [27%]
2023-03-01 13:32:49,863 44k INFO Losses: [2.420454263687134, 2.2803425788879395, 12.815909385681152, 15.6262845993042, 0.5721554756164551], step: 34800, lr: 9.350743859780667e-05
2023-03-01 13:33:29,357 44k INFO ====> Epoch: 528, cost 59.62 s
2023-03-01 13:34:28,405 44k INFO ====> Epoch: 529, cost 59.05 s
2023-03-01 13:35:27,035 44k INFO ====> Epoch: 530, cost 58.63 s
2023-03-01 13:35:49,151 44k INFO Train Epoch: 531 [30%]
2023-03-01 13:35:49,154 44k INFO Losses: [2.4044675827026367, 2.2027463912963867, 10.061988830566406, 15.523695945739746, 0.9495930671691895], step: 35000, lr: 9.347237769131105e-05
2023-03-01 13:36:26,640 44k INFO ====> Epoch: 531, cost 59.60 s
2023-03-01 13:37:25,498 44k INFO ====> Epoch: 532, cost 58.86 s
2023-03-01 13:38:25,052 44k INFO ====> Epoch: 533, cost 59.55 s
2023-03-01 13:38:50,104 44k INFO Train Epoch: 534 [33%]
2023-03-01 13:38:50,106 44k INFO Losses: [2.568544387817383, 1.9308691024780273, 8.283150672912598, 15.194867134094238, 0.7338870763778687], step: 35200, lr: 9.343732993101193e-05
2023-03-01 13:38:54,934 44k INFO Saving model and optimizer state at iteration 534 to ./logs/44k/G_35200.pth
2023-03-01 13:38:57,165 44k INFO Saving model and optimizer state at iteration 534 to ./logs/44k/D_35200.pth
2023-03-01 13:38:59,744 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_32800.pth
2023-03-01 13:38:59,746 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_32800.pth
2023-03-01 13:39:38,242 44k INFO ====> Epoch: 534, cost 73.19 s
2023-03-01 13:40:37,783 44k INFO ====> Epoch: 535, cost 59.54 s
2023-03-01 13:41:37,793 44k INFO ====> Epoch: 536, cost 60.01 s
2023-03-01 13:42:04,474 44k INFO Train Epoch: 537 [36%]
2023-03-01 13:42:04,475 44k INFO Losses: [2.464290142059326, 2.0024421215057373, 8.628362655639648, 16.320446014404297, 0.9047841429710388], step: 35400, lr: 9.340229531198015e-05
2023-03-01 13:42:39,046 44k INFO ====> Epoch: 537, cost 61.25 s
2023-03-01 13:43:39,058 44k INFO ====> Epoch: 538, cost 60.01 s
2023-03-01 13:44:37,744 44k INFO ====> Epoch: 539, cost 58.69 s
2023-03-01 13:45:04,109 44k INFO Train Epoch: 540 [39%]
2023-03-01 13:45:04,111 44k INFO Losses: [2.4833459854125977, 2.378417491912842, 11.034912109375, 17.714981079101562, 0.9426990151405334], step: 35600, lr: 9.336727382928831e-05
2023-03-01 13:45:37,656 44k INFO ====> Epoch: 540, cost 59.91 s
2023-03-01 13:46:36,264 44k INFO ====> Epoch: 541, cost 58.61 s
2023-03-01 13:47:34,787 44k INFO ====> Epoch: 542, cost 58.52 s
2023-03-01 13:48:03,521 44k INFO Train Epoch: 543 [42%]
2023-03-01 13:48:03,522 44k INFO Losses: [2.8033523559570312, 2.562302589416504, 5.504421234130859, 15.571795463562012, 0.5904726386070251], step: 35800, lr: 9.33322654780109e-05
2023-03-01 13:48:34,337 44k INFO ====> Epoch: 543, cost 59.55 s
2023-03-01 13:49:33,137 44k INFO ====> Epoch: 544, cost 58.80 s
2023-03-01 13:50:31,708 44k INFO ====> Epoch: 545, cost 58.57 s
2023-03-01 13:51:01,989 44k INFO Train Epoch: 546 [45%]
2023-03-01 13:51:01,991 44k INFO Losses: [2.6344995498657227, 2.0228610038757324, 11.396275520324707, 16.18845558166504, 0.4743148982524872], step: 36000, lr: 9.32972702532243e-05
2023-03-01 13:51:06,854 44k INFO Saving model and optimizer state at iteration 546 to ./logs/44k/G_36000.pth
2023-03-01 13:51:09,892 44k INFO Saving model and optimizer state at iteration 546 to ./logs/44k/D_36000.pth
2023-03-01 13:51:12,299 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_33600.pth
2023-03-01 13:51:12,308 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_33600.pth
2023-03-01 13:51:43,924 44k INFO ====> Epoch: 546, cost 72.22 s
2023-03-01 13:52:42,843 44k INFO ====> Epoch: 547, cost 58.92 s
2023-03-01 13:53:41,879 44k INFO ====> Epoch: 548, cost 59.04 s
2023-03-01 13:54:13,744 44k INFO Train Epoch: 549 [48%]
2023-03-01 13:54:13,746 44k INFO Losses: [2.5563697814941406, 1.8011733293533325, 8.374330520629883, 15.469585418701172, 0.5209680795669556], step: 36200, lr: 9.326228815000664e-05
2023-03-01 13:54:41,780 44k INFO ====> Epoch: 549, cost 59.90 s
2023-03-01 13:55:40,767 44k INFO ====> Epoch: 550, cost 58.99 s
2023-03-01 13:56:39,584 44k INFO ====> Epoch: 551, cost 58.82 s
2023-03-01 13:57:13,103 44k INFO Train Epoch: 552 [52%]
2023-03-01 13:57:13,104 44k INFO Losses: [2.597008466720581, 1.9561820030212402, 11.036867141723633, 15.509681701660156, 0.4933129847049713], step: 36400, lr: 9.322731916343797e-05
2023-03-01 13:57:39,733 44k INFO ====> Epoch: 552, cost 60.15 s
2023-03-01 13:58:39,574 44k INFO ====> Epoch: 553, cost 59.84 s
2023-03-01 13:59:39,625 44k INFO ====> Epoch: 554, cost 60.05 s
2023-03-01 14:00:16,064 44k INFO Train Epoch: 555 [55%]
2023-03-01 14:00:16,065 44k INFO Losses: [2.6289939880371094, 2.4249353408813477, 9.396543502807617, 16.526453018188477, 0.6601778268814087], step: 36600, lr: 9.319236328860017e-05
2023-03-01 14:00:40,481 44k INFO ====> Epoch: 555, cost 60.86 s
2023-03-01 14:01:39,782 44k INFO ====> Epoch: 556, cost 59.30 s
2023-03-01 14:02:37,982 44k INFO ====> Epoch: 557, cost 58.20 s
2023-03-01 14:03:13,794 44k INFO Train Epoch: 558 [58%]
2023-03-01 14:03:13,796 44k INFO Losses: [2.6148245334625244, 2.1251304149627686, 6.254908084869385, 13.463936805725098, 0.7263805866241455], step: 36800, lr: 9.315742052057694e-05
2023-03-01 14:03:18,706 44k INFO Saving model and optimizer state at iteration 558 to ./logs/44k/G_36800.pth
2023-03-01 14:03:21,992 44k INFO Saving model and optimizer state at iteration 558 to ./logs/44k/D_36800.pth
2023-03-01 14:03:24,146 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_34400.pth
2023-03-01 14:03:24,148 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_34400.pth
2023-03-01 14:03:50,150 44k INFO ====> Epoch: 558, cost 72.17 s
2023-03-01 14:04:49,728 44k INFO ====> Epoch: 559, cost 59.58 s
2023-03-01 14:05:48,534 44k INFO ====> Epoch: 560, cost 58.81 s
2023-03-01 14:06:26,390 44k INFO Train Epoch: 561 [61%]
2023-03-01 14:06:26,391 44k INFO Losses: [2.4856743812561035, 2.304982900619507, 9.530709266662598, 15.976686477661133, 0.7084562182426453], step: 37000, lr: 9.312249085445385e-05
2023-03-01 14:06:48,563 44k INFO ====> Epoch: 561, cost 60.03 s
2023-03-01 14:07:47,195 44k INFO ====> Epoch: 562, cost 58.63 s
2023-03-01 14:08:45,732 44k INFO ====> Epoch: 563, cost 58.54 s
2023-03-01 14:09:25,873 44k INFO Train Epoch: 564 [64%]
2023-03-01 14:09:25,875 44k INFO Losses: [2.5972371101379395, 2.155546188354492, 10.777972221374512, 15.86385726928711, 0.7534441947937012], step: 37200, lr: 9.30875742853183e-05
2023-03-01 14:09:44,966 44k INFO ====> Epoch: 564, cost 59.23 s
2023-03-01 14:10:43,872 44k INFO ====> Epoch: 565, cost 58.91 s
2023-03-01 14:11:42,539 44k INFO ====> Epoch: 566, cost 58.67 s
2023-03-01 14:12:24,630 44k INFO Train Epoch: 567 [67%]
2023-03-01 14:12:24,631 44k INFO Losses: [2.5119266510009766, 2.37658429145813, 12.96456527709961, 16.236738204956055, 0.7003054618835449], step: 37400, lr: 9.305267080825953e-05
2023-03-01 14:12:42,374 44k INFO ====> Epoch: 567, cost 59.83 s
2023-03-01 14:13:42,295 44k INFO ====> Epoch: 568, cost 59.92 s
2023-03-01 14:14:42,267 44k INFO ====> Epoch: 569, cost 59.97 s
2023-03-01 14:15:26,399 44k INFO Train Epoch: 570 [70%]
2023-03-01 14:15:26,401 44k INFO Losses: [2.6498327255249023, 2.099717855453491, 11.323370933532715, 15.819286346435547, 0.7436490654945374], step: 37600, lr: 9.301778041836861e-05
2023-03-01 14:15:32,619 44k INFO Saving model and optimizer state at iteration 570 to ./logs/44k/G_37600.pth
2023-03-01 14:15:34,952 44k INFO Saving model and optimizer state at iteration 570 to ./logs/44k/D_37600.pth
2023-03-01 14:15:37,154 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_35200.pth
2023-03-01 14:15:37,157 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_35200.pth
2023-03-01 14:15:56,537 44k INFO ====> Epoch: 570, cost 74.27 s
2023-03-01 14:16:57,096 44k INFO ====> Epoch: 571, cost 60.56 s
2023-03-01 14:17:56,435 44k INFO ====> Epoch: 572, cost 59.34 s
2023-03-01 14:18:40,702 44k INFO Train Epoch: 573 [73%]
2023-03-01 14:18:40,704 44k INFO Losses: [2.436288356781006, 2.102679967880249, 12.912232398986816, 16.862470626831055, 0.6236627697944641], step: 37800, lr: 9.29829031107385e-05
2023-03-01 14:18:55,301 44k INFO ====> Epoch: 573, cost 58.87 s
2023-03-01 14:19:53,682 44k INFO ====> Epoch: 574, cost 58.38 s
2023-03-01 14:20:51,775 44k INFO ====> Epoch: 575, cost 58.09 s
2023-03-01 14:57:39,608 44k INFO {'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 1234, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': False, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'kokomi': 0}, 'model_dir': './logs/44k'}
2023-03-01 14:58:16,368 44k INFO Loaded checkpoint './logs/44k/G_37600.pth' (iteration 570)
2023-03-01 14:58:35,183 44k INFO Loaded checkpoint './logs/44k/D_37600.pth' (iteration 570)
2023-03-01 14:59:46,725 44k INFO Train Epoch: 570 [70%]
2023-03-01 14:59:46,726 44k INFO Losses: [2.5271847248077393, 1.993210792541504, 11.426197052001953, 16.748212814331055, 0.6959831714630127], step: 37600, lr: 9.300615319581631e-05
2023-03-01 14:59:54,384 44k INFO Saving model and optimizer state at iteration 570 to ./logs/44k/G_37600.pth
2023-03-01 14:59:57,050 44k INFO Saving model and optimizer state at iteration 570 to ./logs/44k/D_37600.pth
2023-03-01 15:00:22,562 44k INFO ====> Epoch: 570, cost 162.96 s
2023-03-01 15:01:39,696 44k INFO ====> Epoch: 571, cost 77.13 s
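
This restart (the 14:57:39 dump) switches fp16_run back to False and the seed back to 1234, and the epochs right after it run slower (162.96 s with a checkpoint save inside it, then 77.13 s) than the roughly 60 s epochs of the fp16 run before; the next restart below re-enables fp16 and the cadence returns to about 60 s. Single epochs are a noisy benchmark, though — the first epoch after a resume carries one-off overhead — so aggregating the cost lines is a fairer comparison:

# Summarize "====> Epoch: N, cost X s" lines, e.g. to compare epoch times
# before and after a restart (fp16 vs fp32 here) without eyeballing the log.
import re
from statistics import mean

COST_RE = re.compile(r"====> Epoch: (\d+), cost ([0-9.]+) s")

costs = []
for line in open("train.log", encoding="utf-8"):
    m = COST_RE.search(line)
    if m:
        costs.append((int(m.group(1)), float(m.group(2))))

if costs:
    print(f"{len(costs)} epochs, mean {mean(c for _, c in costs):.2f} s")
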
2023-03-01 15:13:41,933 44k INFO {'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 1234, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': True, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'kokomi': 0}, 'model_dir': './logs/44k'}
2023-03-01 15:13:54,201 44k INFO Loaded checkpoint './logs/44k/G_37600.pth' (iteration 570)
2023-03-01 15:13:55,462 44k INFO Loaded checkpoint './logs/44k/D_37600.pth' (iteration 570)
2023-03-01 15:14:59,464 44k INFO Train Epoch: 570 [70%]
2023-03-01 15:14:59,465 44k INFO Losses: [2.5743300914764404, 2.09476900100708, 11.48232364654541, 16.289464950561523, 0.5625264644622803], step: 37600, lr: 9.299452742666683e-05
2023-03-01 15:15:06,490 44k INFO Saving model and optimizer state at iteration 570 to ./logs/44k/G_37600.pth
2023-03-01 15:15:08,907 44k INFO Saving model and optimizer state at iteration 570 to ./logs/44k/D_37600.pth
2023-03-01 15:15:33,650 44k INFO ====> Epoch: 570, cost 111.72 s
2023-03-01 15:16:33,880 44k INFO ====> Epoch: 571, cost 60.23 s
2023-03-01 15:17:33,683 44k INFO ====> Epoch: 572, cost 59.80 s
2023-03-01 15:18:19,240 44k INFO Train Epoch: 573 [73%]
2023-03-01 15:18:19,242 44k INFO Losses: [2.5929229259490967, 1.8230674266815186, 8.24261474609375, 13.88111686706543, 0.7610593438148499], step: 37800, lr: 9.295965883781867e-05
2023-03-01 15:18:35,494 44k INFO ====> Epoch: 573, cost 61.81 s
2023-03-01 15:19:34,710 44k INFO ====> Epoch: 574, cost 59.22 s
2023-03-01 15:20:33,744 44k INFO ====> Epoch: 575, cost 59.03 s
2023-03-01 15:21:19,840 44k INFO Train Epoch: 576 [76%]
2023-03-01 15:21:19,842 44k INFO Losses: [2.473421096801758, 2.3995962142944336, 9.603655815124512, 15.83156967163086, 0.6382614374160767], step: 38000, lr: 9.292480332305691e-05
2023-03-01 15:21:33,089 44k INFO ====> Epoch: 576, cost 59.35 s
2023-03-01 15:22:31,702 44k INFO ====> Epoch: 577, cost 58.61 s
2023-03-01 15:23:30,369 44k INFO ====> Epoch: 578, cost 58.67 s
2023-03-01 15:24:17,439 44k INFO Train Epoch: 579 [79%]
2023-03-01 15:24:17,441 44k INFO Losses: [2.5350828170776367, 2.0715274810791016, 8.255446434020996, 14.324309349060059, 0.6776455640792847], step: 38200, lr: 9.288996087747943e-05
2023-03-01 15:24:29,455 44k INFO ====> Epoch: 579, cost 59.09 s
2023-03-01 15:25:28,357 44k INFO ====> Epoch: 580, cost 58.90 s
2023-03-01 15:26:27,175 44k INFO ====> Epoch: 581, cost 58.82 s
2023-03-01 15:27:17,826 44k INFO Train Epoch: 582 [82%]
2023-03-01 15:27:17,828 44k INFO Losses: [2.5525190830230713, 2.0423576831817627, 7.743140697479248, 14.843338966369629, 0.704612135887146], step: 38400, lr: 9.285513149618585e-05
2023-03-01 15:27:22,792 44k INFO Saving model and optimizer state at iteration 582 to ./logs/44k/G_38400.pth
2023-03-01 15:27:25,484 44k INFO Saving model and optimizer state at iteration 582 to ./logs/44k/D_38400.pth
2023-03-01 15:27:28,081 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_36000.pth
2023-03-01 15:27:28,085 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_36000.pth
2023-03-01 15:27:37,963 44k INFO ====> Epoch: 582, cost 70.79 s
2023-03-01 15:28:41,391 44k INFO ====> Epoch: 583, cost 63.43 s
2023-03-01 15:29:41,618 44k INFO ====> Epoch: 584, cost 60.23 s
2023-03-01 15:30:33,765 44k INFO Train Epoch: 585 [85%]
2023-03-01 15:30:33,766 44k INFO Losses: [2.6747803688049316, 1.8389732837677002, 4.084410667419434, 13.512724876403809, 0.4512246549129486], step: 38600, lr: 9.282031517427769e-05
2023-03-01 15:30:42,492 44k INFO ====> Epoch: 585, cost 60.87 s
2023-03-01 15:31:42,163 44k INFO ====> Epoch: 586, cost 59.67 s
2023-03-01 15:32:42,022 44k INFO ====> Epoch: 587, cost 59.86 s
2023-03-01 15:33:35,656 44k INFO Train Epoch: 588 [88%]
2023-03-01 15:33:35,658 44k INFO Losses: [2.760638475418091, 1.8996620178222656, 6.954154968261719, 13.57196044921875, 0.8784849643707275], step: 38800, lr: 9.27855119068583e-05
2023-03-01 15:33:42,980 44k INFO ====> Epoch: 588, cost 60.96 s
2023-03-01 15:34:42,420 44k INFO ====> Epoch: 589, cost 59.44 s
2023-03-01 15:35:42,017 44k INFO ====> Epoch: 590, cost 59.60 s
2023-03-01 15:36:36,931 44k INFO Train Epoch: 591 [91%]
2023-03-01 15:36:36,933 44k INFO Losses: [2.8437998294830322, 1.8302569389343262, 7.807892322540283, 13.566389083862305, 0.9706428050994873], step: 39000, lr: 9.275072168903288e-05
2023-03-01 15:36:43,979 44k INFO ====> Epoch: 591, cost 61.96 s
2023-03-01 15:37:42,920 44k INFO ====> Epoch: 592, cost 58.94 s
2023-03-01 15:38:42,025 44k INFO ====> Epoch: 593, cost 59.11 s
2023-03-01 15:39:38,108 44k INFO Train Epoch: 594 [94%]
2023-03-01 15:39:38,111 44k INFO Losses: [2.5111255645751953, 2.086841106414795, 8.652637481689453, 15.42671012878418, 0.6505913138389587], step: 39200, lr: 9.27159445159084e-05
2023-03-01 15:39:44,819 44k INFO Saving model and optimizer state at iteration 594 to ./logs/44k/G_39200.pth
2023-03-01 15:39:47,148 44k INFO Saving model and optimizer state at iteration 594 to ./logs/44k/D_39200.pth
2023-03-01 15:39:49,383 44k INFO Freeing up space by deleting checkpoint ./logs/44k/G_36800.pth
2023-03-01 15:39:49,385 44k INFO Freeing up space by deleting checkpoint ./logs/44k/D_36800.pth
2023-03-01 15:39:52,301 44k INFO ====> Epoch: 594, cost 70.28 s
2023-03-01 15:40:56,153 44k INFO ====> Epoch: 595, cost 63.85 s
2023-03-01 15:41:55,659 44k INFO ====> Epoch: 596, cost 59.51 s
2023-03-01 15:42:53,592 44k INFO Train Epoch: 597 [97%]
2023-03-01 15:42:53,594 44k INFO Losses: [2.7418437004089355, 1.9342641830444336, 4.708745956420898, 13.274650573730469, 0.7652420997619629], step: 39400, lr: 9.268118038259374e-05
2023-03-01 15:42:56,533 44k INFO ====> Epoch: 597, cost 60.87 s
2023-03-01 15:43:55,923 44k INFO ====> Epoch: 598, cost 59.39 s
2023-03-01 15:44:55,355 44k INFO ====> Epoch: 599, cost 59.43 s
2023-03-01 15:45:54,706 44k INFO ====> Epoch: 600, cost 59.35 s
2023-03-01 15:46:00,486 44k INFO Train Epoch: 601 [0%]
2023-03-01 15:46:00,488 44k INFO Losses: [2.7353615760803223, 2.1014316082000732, 11.150449752807617, 17.331787109375, 0.8354551792144775], step: 39600, lr: 9.263484848053902e-05
2023-03-01 15:46:55,015 44k INFO ====> Epoch: 601, cost 60.31 s
2023-03-01 15:47:55,223 44k INFO ====> Epoch: 602, cost 60.21 s
2023-03-01 15:48:55,208 44k INFO ====> Epoch: 603, cost 59.99 s
2023-03-01 15:49:03,064 44k INFO Train Epoch: 604 [3%]
2023-03-01 15:49:03,066 44k INFO Losses: [2.6378090381622314, 1.9106868505477905, 10.66452693939209, 15.136334419250488, 0.6131340265274048], step: 39800, lr: 9.260011475443641e-05
2023-03-01 15:49:55,746 44k INFO ====> Epoch: 604, cost 60.54 s
2023-03-01 15:50:54,888 44k INFO ====> Epoch: 605, cost 59.14 s
2023-03-01 15:51:54,032 44k INFO ====> Epoch: 606, cost 59.14 s
2023-03-01 15:52:02,966 44k INFO Train Epoch: 607 [6%]
2023-03-01 15:52:02,968 44k INFO Losses: [2.7368581295013428, 1.6898276805877686, 10.520997047424316, 14.771178245544434, 0.7766459584236145], step: 40000, lr: 9.2565394051853e-05
2023-03-01 15:52:08,705 44k INFO Saving model and optimizer state at iteration 607 to ./logs/44k/G_40000.pth
2023-03-01 15:52:11,778 44k INFO Saving model and optimizer state at iteration 607 to ./logs/44k/D_40000.pth
2023-03-01 15:52:13,965 44k INFO Freeing up space by deleting checkpoint ./logs/44k/G_37600.pth
2023-03-01 15:52:13,967 44k INFO Freeing up space by deleting checkpoint ./logs/44k/D_37600.pth
2023-03-01 15:53:07,838 44k INFO ====> Epoch: 607, cost 73.81 s
2023-03-01 15:54:07,939 44k INFO ====> Epoch: 608, cost 60.10 s
2023-03-01 15:55:08,243 44k INFO ====> Epoch: 609, cost 60.30 s
2023-03-01 15:55:19,385 44k INFO Train Epoch: 610 [9%]
2023-03-01 15:55:19,386 44k INFO Losses: [2.5521254539489746, 2.1671903133392334, 8.565643310546875, 15.609643936157227, 0.5948749780654907], step: 40200, lr: 9.25306863679056e-05
2023-03-01 15:56:09,000 44k INFO ====> Epoch: 610, cost 60.76 s
2023-03-01 15:57:09,690 44k INFO ====> Epoch: 611, cost 60.69 s
2023-03-01 15:58:09,256 44k INFO ====> Epoch: 612, cost 59.57 s
2023-03-01 15:58:21,786 44k INFO Train Epoch: 613 [12%]
2023-03-01 15:58:21,788 44k INFO Losses: [2.6245226860046387, 2.3357348442077637, 7.989675998687744, 15.156067848205566, 0.5746421217918396], step: 40400, lr: 9.249599169771281e-05
2023-03-01 15:59:09,935 44k INFO ====> Epoch: 613, cost 60.68 s
2023-03-01 16:00:08,724 44k INFO ====> Epoch: 614, cost 58.79 s
2023-03-01 16:01:08,072 44k INFO ====> Epoch: 615, cost 59.35 s
2023-03-01 16:01:22,464 44k INFO Train Epoch: 616 [15%]
2023-03-01 16:01:22,466 44k INFO Losses: [2.578674793243408, 2.0254931449890137, 6.607078552246094, 15.0504150390625, 1.009507179260254], step: 40600, lr: 9.246131003639512e-05
2023-03-01 16:02:09,192 44k INFO ====> Epoch: 616, cost 61.12 s
2023-03-01 16:03:08,549 44k INFO ====> Epoch: 617, cost 59.36 s
2023-03-01 16:04:07,629 44k INFO ====> Epoch: 618, cost 59.08 s
2023-03-01 16:04:24,069 44k INFO Train Epoch: 619 [18%]
2023-03-01 16:04:24,071 44k INFO Losses: [2.552483081817627, 2.024620771408081, 8.069636344909668, 15.692915916442871, 0.4599842429161072], step: 40800, lr: 9.242664137907478e-05
2023-03-01 16:04:29,123 44k INFO Saving model and optimizer state at iteration 619 to ./logs/44k/G_40800.pth
2023-03-01 16:04:31,375 44k INFO Saving model and optimizer state at iteration 619 to ./logs/44k/D_40800.pth
2023-03-01 16:04:33,954 44k INFO Freeing up space by deleting checkpoint ./logs/44k/G_38400.pth
2023-03-01 16:04:33,956 44k INFO Freeing up space by deleting checkpoint ./logs/44k/D_38400.pth
2023-03-01 16:05:20,554 44k INFO ====> Epoch: 619, cost 72.92 s
2023-03-01 16:06:21,206 44k INFO ====> Epoch: 620, cost 60.65 s
2023-03-01 16:07:22,726 44k INFO ====> Epoch: 621, cost 61.52 s
2023-03-01 16:07:42,023 44k INFO Train Epoch: 622 [21%]
2023-03-01 16:07:42,025 44k INFO Losses: [2.460630416870117, 2.3488998413085938, 8.322394371032715, 14.209061622619629, 0.5670028924942017], step: 41000, lr: 9.239198572087591e-05
2023-03-01 16:08:25,079 44k INFO ====> Epoch: 622, cost 62.35 s
2023-03-01 16:09:25,167 44k INFO ====> Epoch: 623, cost 60.09 s
2023-03-01 16:10:25,760 44k INFO ====> Epoch: 624, cost 60.59 s
2023-03-01 16:10:45,573 44k INFO Train Epoch: 625 [24%]
2023-03-01 16:10:45,575 44k INFO Losses: [2.725945234298706, 1.9381709098815918, 6.556834697723389, 14.893658638000488, 0.7960931062698364], step: 41200, lr: 9.235734305692444e-05
2023-03-01 16:11:26,404 44k INFO ====> Epoch: 625, cost 60.64 s
2023-03-01 16:12:25,693 44k INFO ====> Epoch: 626, cost 59.29 s
2023-03-01 16:13:25,233 44k INFO ====> Epoch: 627, cost 59.54 s
2023-03-01 16:13:46,325 44k INFO Train Epoch: 628 [27%]
2023-03-01 16:13:46,327 44k INFO Losses: [2.6802175045013428, 2.1775424480438232, 8.059988975524902, 13.38552474975586, 0.6495214700698853], step: 41400, lr: 9.232271338234815e-05
2023-03-01 16:14:25,531 44k INFO ====> Epoch: 628, cost 60.30 s
2023-03-01 16:15:24,751 44k INFO ====> Epoch: 629, cost 59.22 s
2023-03-01 16:16:23,544 44k INFO ====> Epoch: 630, cost 58.79 s
2023-03-01 16:16:46,337 44k INFO Train Epoch: 631 [30%]
2023-03-01 16:16:46,339 44k INFO Losses: [2.8067805767059326, 2.0939817428588867, 8.274828910827637, 14.636399269104004, 0.7277933955192566], step: 41600, lr: 9.228809669227663e-05
2023-03-01 16:16:52,796 44k INFO Saving model and optimizer state at iteration 631 to ./logs/44k/G_41600.pth
2023-03-01 16:16:55,040 44k INFO Saving model and optimizer state at iteration 631 to ./logs/44k/D_41600.pth
2023-03-01 16:16:57,411 44k INFO Freeing up space by deleting checkpoint ./logs/44k/G_39200.pth
2023-03-01 16:16:57,413 44k INFO Freeing up space by deleting checkpoint ./logs/44k/D_39200.pth
2023-03-01 16:17:36,808 44k INFO ====> Epoch: 631, cost 73.26 s
2023-03-01 16:18:35,393 44k INFO ====> Epoch: 632, cost 58.59 s
2023-03-01 16:19:33,336 44k INFO ====> Epoch: 633, cost 57.94 s
2023-03-01 16:19:57,635 44k INFO Train Epoch: 634 [33%]
2023-03-01 16:19:57,637 44k INFO Losses: [2.5420570373535156, 2.260408401489258, 10.348950386047363, 16.022619247436523, 0.47534552216529846], step: 41800, lr: 9.22534929818413e-05
2023-03-01 16:20:32,905 44k INFO ====> Epoch: 634, cost 59.57 s
2023-03-01 16:21:32,716 44k INFO ====> Epoch: 635, cost 59.81 s
2023-03-01 16:22:31,544 44k INFO ====> Epoch: 636, cost 58.83 s
2023-03-01 16:22:55,922 44k INFO Train Epoch: 637 [36%]
2023-03-01 16:22:55,924 44k INFO Losses: [2.547175884246826, 2.0122873783111572, 9.627043724060059, 16.519147872924805, 0.5207715630531311], step: 42000, lr: 9.221890224617541e-05
2023-03-01 16:23:30,271 44k INFO ====> Epoch: 637, cost 58.73 s
2023-03-01 16:24:27,930 44k INFO ====> Epoch: 638, cost 57.66 s
2023-03-02 02:09:36,746 44k INFO {'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 3601, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': True, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'kokomi': 0}, 'model_dir': './logs/44k'}
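Each restart re-dumps the full hyperparameter dictionary; note that the repeated kokomi dumps in this log differ only in `seed`, while the 11:55/11:57 dumps further down additionally carry a different `spk` table. Such a nested dict, typically stored as `config.json`, is usually wrapped for dotted attribute access; a simplified stand-in:

```python
# Simplified stand-in for the hyperparameter wrapper such trainers use
# (assumption: the real helper differs in detail).
import json

class HParams(dict):
    """Wrap a nested dict so that hps.train.batch_size style access works."""
    def __getattr__(self, key):
        value = self[key]
        return HParams(value) if isinstance(value, dict) else value

# A fragment of the dictionary dumped above:
hps = HParams(json.loads('{"train": {"batch_size": 6, "keep_ckpts": 3}, '
                         '"data": {"sampling_rate": 44100}}'))
print(hps.train.batch_size, hps.train.keep_ckpts, hps.data.sampling_rate)
```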
2023-03-02 02:10:07,963 44k INFO Loaded checkpoint './logs/44k/G_41600.pth' (iteration 631)
2023-03-02 02:10:16,686 44k INFO Loaded checkpoint './logs/44k/D_41600.pth' (iteration 631)
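Resuming restores each network together with its optimizer state and the iteration counter stored in the file. A sketch of the usual PyTorch pattern (the checkpoint keys shown are an assumption based on VITS-style savers):

```python
# Sketch of the usual PyTorch resume pattern (checkpoint keys "model",
# "optimizer" and "iteration" are an assumption based on VITS-style savers).
import torch

def load_checkpoint(path, model, optimizer=None):
    ckpt = torch.load(path, map_location="cpu")
    model.load_state_dict(ckpt["model"], strict=False)  # tolerate renamed keys
    if optimizer is not None and "optimizer" in ckpt:
        optimizer.load_state_dict(ckpt["optimizer"])
    iteration = ckpt.get("iteration", 0)
    print(f"Loaded checkpoint '{path}' (iteration {iteration})")
    return iteration

# e.g.: iteration = load_checkpoint("./logs/44k/G_41600.pth", net_g, optim_g)
```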
2023-03-02 02:10:58,015 44k INFO Train Epoch: 631 [30%]
2023-03-02 02:10:58,016 44k INFO Losses: [2.5948731899261475, 2.131763219833374, 10.075628280639648, 15.502296447753906, 0.549894392490387], step: 41600, lr: 9.22765606801901e-05
2023-03-02 02:11:05,873 44k INFO Saving model and optimizer state at iteration 631 to ./logs/44k/G_41600.pth
2023-03-02 02:11:09,437 44k INFO Saving model and optimizer state at iteration 631 to ./logs/44k/D_41600.pth
2023-03-02 02:12:03,782 44k INFO ====> Epoch: 631, cost 147.04 s
2023-03-02 02:13:03,481 44k INFO ====> Epoch: 632, cost 59.70 s
2023-03-02 02:14:02,928 44k INFO ====> Epoch: 633, cost 59.45 s
2023-03-02 02:14:29,028 44k INFO Train Epoch: 634 [33%]
2023-03-02 02:14:29,030 44k INFO Losses: [2.685391902923584, 2.0111937522888184, 9.741365432739258, 15.648954391479492, 0.7411051988601685], step: 41800, lr: 9.224196129521857e-05
2023-03-02 02:15:04,196 44k INFO ====> Epoch: 634, cost 61.27 s
2023-03-02 02:16:02,571 44k INFO {'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 68951, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': True, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'kokomi': 0}, 'model_dir': './logs/44k'}
2023-03-02 02:16:15,143 44k INFO Loaded checkpoint './logs/44k/G_41600.pth' (iteration 631)
2023-03-02 02:16:16,542 44k INFO Loaded checkpoint './logs/44k/D_41600.pth' (iteration 631)
2023-03-02 02:16:51,859 44k INFO Train Epoch: 631 [30%]
2023-03-02 02:16:51,860 44k INFO Losses: [2.6362810134887695, 1.9682914018630981, 7.582870006561279, 14.39011001586914, 0.7817023992538452], step: 41600, lr: 9.226502611010507e-05
2023-03-02 02:16:58,488 44k INFO Saving model and optimizer state at iteration 631 to ./logs/44k/G_41600.pth
2023-03-02 02:17:00,864 44k INFO Saving model and optimizer state at iteration 631 to ./logs/44k/D_41600.pth
2023-03-02 02:17:54,276 44k INFO ====> Epoch: 631, cost 111.71 s
2023-03-02 02:18:53,693 44k INFO ====> Epoch: 632, cost 59.42 s
2023-03-02 02:19:51,948 44k INFO ====> Epoch: 633, cost 58.25 s
2023-03-02 02:20:15,177 44k INFO Train Epoch: 634 [33%]
2023-03-02 02:20:15,181 44k INFO Losses: [2.7480480670928955, 2.211920976638794, 7.862847805023193, 14.619319915771484, 0.4849771559238434], step: 41800, lr: 9.223043105005667e-05
2023-03-02 02:20:50,950 44k INFO ====> Epoch: 634, cost 59.00 s
2023-03-02 02:21:48,922 44k INFO ====> Epoch: 635, cost 57.97 s
2023-03-02 02:22:48,074 44k INFO ====> Epoch: 636, cost 59.15 s
2023-03-02 02:23:13,490 44k INFO Train Epoch: 637 [36%]
2023-03-02 02:23:13,491 44k INFO Losses: [2.483285903930664, 2.321869134902954, 8.127042770385742, 15.568990707397461, 0.7233983874320984], step: 42000, lr: 9.21958489615342e-05
2023-03-02 02:23:46,747 44k INFO ====> Epoch: 637, cost 58.67 s
2023-03-02 02:24:45,297 44k INFO ====> Epoch: 638, cost 58.55 s
2023-03-02 02:25:44,523 44k INFO ====> Epoch: 639, cost 59.23 s
2023-03-02 02:26:13,439 44k INFO Train Epoch: 640 [39%]
2023-03-02 02:26:13,440 44k INFO Losses: [2.6135668754577637, 2.0454912185668945, 6.2655816078186035, 15.124895095825195, 0.4330812990665436], step: 42200, lr: 9.216127983967398e-05
2023-03-02 02:26:46,104 44k INFO ====> Epoch: 640, cost 61.58 s
2023-03-02 02:27:45,258 44k INFO ====> Epoch: 641, cost 59.15 s
2023-03-02 02:28:42,872 44k INFO ====> Epoch: 642, cost 57.61 s
2023-03-02 02:29:10,842 44k INFO Train Epoch: 643 [42%]
2023-03-02 02:29:10,844 44k INFO Losses: [2.717855453491211, 2.0071310997009277, 10.223896980285645, 17.183940887451172, 0.5971488952636719], step: 42400, lr: 9.212672367961408e-05
2023-03-02 02:29:15,731 44k INFO Saving model and optimizer state at iteration 643 to ./logs/44k/G_42400.pth
2023-03-02 02:29:17,821 44k INFO Saving model and optimizer state at iteration 643 to ./logs/44k/D_42400.pth
2023-03-02 02:29:20,392 44k INFO Freeing up space by deleting checkpoint ./logs/44k/G_40000.pth
2023-03-02 02:29:20,393 44k INFO Freeing up space by deleting checkpoint ./logs/44k/D_40000.pth
2023-03-02 02:29:53,933 44k INFO ====> Epoch: 643, cost 71.06 s
2023-03-02 02:30:53,528 44k INFO ====> Epoch: 644, cost 59.59 s
2023-03-02 02:31:50,893 44k INFO ====> Epoch: 645, cost 57.37 s
2023-03-02 02:32:20,345 44k INFO Train Epoch: 646 [45%]
2023-03-02 02:32:20,347 44k INFO Losses: [2.5595123767852783, 2.1294803619384766, 9.957379341125488, 16.37886619567871, 0.6384275555610657], step: 42600, lr: 9.209218047649445e-05
2023-03-02 02:32:49,034 44k INFO ====> Epoch: 646, cost 58.14 s
2023-03-02 02:33:46,907 44k INFO ====> Epoch: 647, cost 57.87 s
2023-03-02 02:34:47,465 44k INFO ====> Epoch: 648, cost 60.56 s
2023-03-02 02:35:20,512 44k INFO Train Epoch: 649 [48%]
2023-03-02 02:35:20,514 44k INFO Losses: [2.5382392406463623, 1.9430217742919922, 11.988456726074219, 16.64310073852539, 0.5826382040977478], step: 42800, lr: 9.205765022545685e-05
2023-03-02 02:35:48,492 44k INFO ====> Epoch: 649, cost 61.03 s
2023-03-02 02:36:47,299 44k INFO ====> Epoch: 650, cost 58.81 s
2023-03-02 02:37:46,023 44k INFO ====> Epoch: 651, cost 58.72 s
2023-03-02 02:38:18,732 44k INFO Train Epoch: 652 [52%]
2023-03-02 02:38:18,734 44k INFO Losses: [2.3444318771362305, 2.2297427654266357, 10.70944595336914, 15.315006256103516, 0.6929917931556702], step: 43000, lr: 9.202313292164485e-05
2023-03-02 02:38:44,688 44k INFO ====> Epoch: 652, cost 58.66 s
2023-03-02 02:39:42,568 44k INFO ====> Epoch: 653, cost 57.88 s
2023-03-02 02:40:40,833 44k INFO ====> Epoch: 654, cost 58.26 s
2023-03-02 02:41:15,220 44k INFO Train Epoch: 655 [55%]
2023-03-02 02:41:15,221 44k INFO Losses: [2.5300960540771484, 2.2865591049194336, 9.606197357177734, 15.253090858459473, 0.3918098509311676], step: 43200, lr: 9.198862856020383e-05
2023-03-02 02:41:21,642 44k INFO Saving model and optimizer state at iteration 655 to ./logs/44k/G_43200.pth
2023-03-02 02:41:23,911 44k INFO Saving model and optimizer state at iteration 655 to ./logs/44k/D_43200.pth
2023-03-02 02:41:26,352 44k INFO Freeing up space by deleting checkpoint ./logs/44k/G_40800.pth
2023-03-02 02:41:26,355 44k INFO Freeing up space by deleting checkpoint ./logs/44k/D_40800.pth
2023-03-02 02:41:54,526 44k INFO ====> Epoch: 655, cost 73.69 s
2023-03-02 02:42:52,505 44k INFO ====> Epoch: 656, cost 57.98 s
2023-03-02 02:43:50,482 44k INFO ====> Epoch: 657, cost 57.98 s
2023-03-02 02:44:26,658 44k INFO Train Epoch: 658 [58%]
2023-03-02 02:44:26,660 44k INFO Losses: [2.5758485794067383, 2.421053886413574, 8.049322128295898, 13.723917961120605, 0.4899924099445343], step: 43400, lr: 9.195413713628104e-05
2023-03-02 02:44:49,547 44k INFO ====> Epoch: 658, cost 59.07 s
2023-03-02 02:45:49,550 44k INFO ====> Epoch: 659, cost 60.00 s
2023-03-02 02:46:48,315 44k INFO ====> Epoch: 660, cost 58.76 s
2023-03-02 02:47:27,389 44k INFO Train Epoch: 661 [61%]
2023-03-02 02:47:27,390 44k INFO Losses: [2.656381845474243, 2.0379602909088135, 7.2834649085998535, 14.324346542358398, 0.8305155634880066], step: 43600, lr: 9.191965864502551e-05
2023-03-02 02:47:48,362 44k INFO ====> Epoch: 661, cost 60.05 s
2023-03-02 02:48:47,552 44k INFO ====> Epoch: 662, cost 59.19 s
2023-03-02 02:49:47,180 44k INFO ====> Epoch: 663, cost 59.63 s
2023-03-02 02:50:25,981 44k INFO Train Epoch: 664 [64%]
2023-03-02 02:50:25,983 44k INFO Losses: [2.614119291305542, 2.2606003284454346, 6.270823001861572, 14.337455749511719, 0.4649489223957062], step: 43800, lr: 9.188519308158808e-05
2023-03-02 02:50:45,423 44k INFO ====> Epoch: 664, cost 58.24 s
2023-03-02 02:51:42,574 44k INFO ====> Epoch: 665, cost 57.15 s
2023-03-02 02:52:40,250 44k INFO ====> Epoch: 666, cost 57.68 s
2023-03-02 02:53:22,678 44k INFO Train Epoch: 667 [67%]
2023-03-02 02:53:22,679 44k INFO Losses: [2.6997251510620117, 2.1232404708862305, 6.095002174377441, 14.46658992767334, 0.6743598580360413], step: 44000, lr: 9.185074044112143e-05
2023-03-02 02:53:27,485 44k INFO Saving model and optimizer state at iteration 667 to ./logs/44k/G_44000.pth
2023-03-02 02:53:29,649 44k INFO Saving model and optimizer state at iteration 667 to ./logs/44k/D_44000.pth
2023-03-02 02:53:31,954 44k INFO Freeing up space by deleting checkpoint ./logs/44k/G_41600.pth
2023-03-02 02:53:31,955 44k INFO Freeing up space by deleting checkpoint ./logs/44k/D_41600.pth
2023-03-02 02:53:51,545 44k INFO ====> Epoch: 667, cost 71.29 s
2023-03-02 02:54:49,678 44k INFO ====> Epoch: 668, cost 58.13 s
2023-03-02 02:55:47,686 44k INFO ====> Epoch: 669, cost 58.01 s
2023-03-02 02:56:30,702 44k INFO Train Epoch: 670 [70%]
2023-03-02 02:56:30,703 44k INFO Losses: [2.6931114196777344, 2.0623056888580322, 8.812015533447266, 15.163840293884277, 1.0017961263656616], step: 44200, lr: 9.181630071878007e-05
2023-03-02 02:56:46,909 44k INFO ====> Epoch: 670, cost 59.22 s
2023-03-02 02:57:47,198 44k INFO ====> Epoch: 671, cost 60.29 s
2023-03-02 02:58:45,662 44k INFO ====> Epoch: 672, cost 58.46 s
2023-03-02 02:59:30,508 44k INFO Train Epoch: 673 [73%]
2023-03-02 02:59:30,510 44k INFO Losses: [2.2975914478302, 2.259751796722412, 13.392903327941895, 15.382834434509277, 0.7057444453239441], step: 44400, lr: 9.178187390972029e-05
2023-03-02 02:59:45,564 44k INFO ====> Epoch: 673, cost 59.90 s
2023-03-02 03:00:46,008 44k INFO ====> Epoch: 674, cost 60.44 s
2023-03-02 03:01:45,847 44k INFO ====> Epoch: 675, cost 59.84 s
2023-03-02 03:02:32,112 44k INFO Train Epoch: 676 [76%]
2023-03-02 03:02:32,114 44k INFO Losses: [2.4372594356536865, 2.2088661193847656, 11.92065715789795, 16.10643196105957, 0.5699023604393005], step: 44600, lr: 9.174746000910022e-05
2023-03-02 03:02:45,234 44k INFO ====> Epoch: 676, cost 59.39 s
2023-03-02 03:03:42,986 44k INFO ====> Epoch: 677, cost 57.75 s
2023-03-02 03:04:41,657 44k INFO ====> Epoch: 678, cost 58.67 s
2023-03-02 03:05:28,527 44k INFO Train Epoch: 679 [79%]
2023-03-02 03:05:28,529 44k INFO Losses: [2.671450138092041, 2.194634199142456, 3.959312677383423, 13.580015182495117, 0.7120764851570129], step: 44800, lr: 9.171305901207978e-05
2023-03-02 03:05:33,822 44k INFO Saving model and optimizer state at iteration 679 to ./logs/44k/G_44800.pth
2023-03-02 03:05:36,819 44k INFO Saving model and optimizer state at iteration 679 to ./logs/44k/D_44800.pth
2023-03-02 03:05:39,026 44k INFO Freeing up space by deleting checkpoint ./logs/44k/G_42400.pth
2023-03-02 03:05:39,028 44k INFO Freeing up space by deleting checkpoint ./logs/44k/D_42400.pth
2023-03-02 03:05:51,798 44k INFO ====> Epoch: 679, cost 70.14 s
2023-03-02 03:06:50,830 44k INFO ====> Epoch: 680, cost 59.03 s
2023-03-02 03:07:48,518 44k INFO ====> Epoch: 681, cost 57.69 s
2023-03-02 03:08:38,486 44k INFO Train Epoch: 682 [82%]
2023-03-02 03:08:38,486 44k INFO Losses: [2.7550530433654785, 1.9307076930999756, 6.034752368927002, 14.16136646270752, 0.6298778057098389], step: 45000, lr: 9.167867091382074e-05
2023-03-02 03:08:49,285 44k INFO ====> Epoch: 682, cost 60.77 s
2023-03-02 03:09:47,976 44k INFO ====> Epoch: 683, cost 58.69 s
2023-03-02 03:10:47,291 44k INFO ====> Epoch: 684, cost 59.32 s
2023-03-02 03:11:40,368 44k INFO Train Epoch: 685 [85%]
2023-03-02 03:11:40,370 44k INFO Losses: [2.7377800941467285, 2.1001713275909424, 7.849257469177246, 14.731910705566406, 0.5108032822608948], step: 45200, lr: 9.164429570948667e-05
2023-03-02 03:11:48,718 44k INFO ====> Epoch: 685, cost 61.43 s
2023-03-02 03:12:48,586 44k INFO ====> Epoch: 686, cost 59.87 s
2023-03-02 03:13:47,331 44k INFO ====> Epoch: 687, cost 58.75 s
2023-03-02 03:14:38,366 44k INFO Train Epoch: 688 [88%]
2023-03-02 03:14:38,367 44k INFO Losses: [2.452291488647461, 2.0157089233398438, 7.149826526641846, 13.567577362060547, 0.5959404706954956], step: 45400, lr: 9.160993339424298e-05
2023-03-02 03:14:45,476 44k INFO ====> Epoch: 688, cost 58.14 s
2023-03-02 03:15:43,949 44k INFO ====> Epoch: 689, cost 58.47 s
2023-03-02 03:16:41,384 44k INFO ====> Epoch: 690, cost 57.43 s
2023-03-02 03:17:34,074 44k INFO Train Epoch: 691 [91%]
2023-03-02 03:17:34,076 44k INFO Losses: [2.5435314178466797, 2.1364939212799072, 12.256353378295898, 16.2484188079834, 0.6809106469154358], step: 45600, lr: 9.157558396325682e-05
2023-03-02 03:17:39,758 44k INFO Saving model and optimizer state at iteration 691 to ./logs/44k/G_45600.pth
2023-03-02 03:17:42,274 44k INFO Saving model and optimizer state at iteration 691 to ./logs/44k/D_45600.pth
2023-03-02 03:17:44,807 44k INFO Freeing up space by deleting checkpoint ./logs/44k/G_43200.pth
2023-03-02 03:17:44,809 44k INFO Freeing up space by deleting checkpoint ./logs/44k/D_43200.pth
2023-03-02 03:17:49,858 44k INFO ====> Epoch: 691, cost 68.47 s
2023-03-02 03:18:51,747 44k INFO ====> Epoch: 692, cost 61.89 s
2023-03-02 03:19:49,911 44k INFO ====> Epoch: 693, cost 58.16 s
2023-03-02 03:20:45,230 44k INFO Train Epoch: 694 [94%]
2023-03-02 03:20:45,231 44k INFO Losses: [2.560474395751953, 2.1314141750335693, 10.571968078613281, 16.710195541381836, 0.4647202789783478], step: 45800, lr: 9.154124741169722e-05
2023-03-02 03:20:48,948 44k INFO ====> Epoch: 694, cost 59.04 s
2023-03-02 03:21:47,740 44k INFO ====> Epoch: 695, cost 58.79 s
2023-03-02 03:22:48,336 44k INFO ====> Epoch: 696, cost 60.60 s
2023-03-02 03:23:46,051 44k INFO Train Epoch: 697 [97%]
2023-03-02 03:23:46,053 44k INFO Losses: [2.7043349742889404, 2.0875020027160645, 8.878937721252441, 14.227090835571289, 0.592502772808075], step: 46000, lr: 9.150692373473501e-05
2023-03-02 03:23:48,199 44k INFO ====> Epoch: 697, cost 59.86 s
2023-03-02 03:24:48,254 44k INFO ====> Epoch: 698, cost 60.06 s
2023-03-02 03:25:47,112 44k INFO ====> Epoch: 699, cost 58.86 s
2023-03-02 03:26:44,448 44k INFO ====> Epoch: 700, cost 57.34 s
2023-03-02 03:26:49,774 44k INFO Train Epoch: 701 [0%]
2023-03-02 03:26:49,776 44k INFO Losses: [2.4718196392059326, 2.176609992980957, 10.389583587646484, 16.11941146850586, 1.0833539962768555], step: 46200, lr: 9.146117885092685e-05
2023-03-02 03:27:42,964 44k INFO ====> Epoch: 701, cost 58.52 s
2023-03-02 03:28:40,822 44k INFO ====> Epoch: 702, cost 57.86 s
2023-03-02 03:29:39,651 44k INFO ====> Epoch: 703, cost 58.83 s
2023-03-02 03:29:47,034 44k INFO Train Epoch: 704 [3%]
2023-03-02 03:29:47,040 44k INFO Losses: [2.841576337814331, 1.8080276250839233, 8.50521469116211, 16.12368392944336, 0.9166112542152405], step: 46400, lr: 9.142688519592185e-05
2023-03-02 03:29:52,514 44k INFO Saving model and optimizer state at iteration 704 to ./logs/44k/G_46400.pth
2023-03-02 03:29:54,712 44k INFO Saving model and optimizer state at iteration 704 to ./logs/44k/D_46400.pth
2023-03-02 03:29:56,996 44k INFO Freeing up space by deleting checkpoint ./logs/44k/G_44000.pth
2023-03-02 03:29:57,014 44k INFO Freeing up space by deleting checkpoint ./logs/44k/D_44000.pth
2023-03-02 03:30:51,552 44k INFO ====> Epoch: 704, cost 71.90 s
2023-03-02 03:31:50,174 44k INFO ====> Epoch: 705, cost 58.62 s
2023-03-02 03:32:51,596 44k INFO ====> Epoch: 706, cost 61.42 s
2023-03-02 03:33:01,347 44k INFO Train Epoch: 707 [6%]
2023-03-02 03:33:01,349 44k INFO Losses: [2.6492395401000977, 1.9678059816360474, 6.889136791229248, 13.798266410827637, 0.9384015202522278], step: 46600, lr: 9.139260439943005e-05
2023-03-02 03:33:51,931 44k INFO ====> Epoch: 707, cost 60.33 s
2023-03-02 03:34:49,092 44k INFO ====> Epoch: 708, cost 57.16 s
2023-03-02 03:35:47,459 44k INFO ====> Epoch: 709, cost 58.37 s
2023-03-02 03:35:57,918 44k INFO Train Epoch: 710 [9%]
2023-03-02 03:35:57,920 44k INFO Losses: [2.801201820373535, 1.9763972759246826, 6.9863104820251465, 14.295787811279297, 0.5979728102684021], step: 46800, lr: 9.13583364566301e-05
2023-03-02 03:36:46,044 44k INFO ====> Epoch: 710, cost 58.59 s
2023-03-02 03:37:44,025 44k INFO ====> Epoch: 711, cost 57.98 s
2023-03-02 03:38:42,121 44k INFO ====> Epoch: 712, cost 58.10 s
2023-03-02 03:38:55,912 44k INFO Train Epoch: 713 [12%]
2023-03-02 03:38:55,914 44k INFO Losses: [2.593735933303833, 2.0812768936157227, 8.497804641723633, 14.896484375, 0.5548284649848938], step: 47000, lr: 9.132408136270243e-05
2023-03-02 03:39:42,405 44k INFO ====> Epoch: 713, cost 60.28 s
2023-03-02 03:40:41,494 44k INFO ====> Epoch: 714, cost 59.09 s
2023-03-02 03:41:40,914 44k INFO ====> Epoch: 715, cost 59.42 s
2023-03-02 03:41:54,662 44k INFO Train Epoch: 716 [15%]
2023-03-02 03:41:54,664 44k INFO Losses: [2.5161349773406982, 2.30387020111084, 9.9384183883667, 14.604636192321777, 0.7223911285400391], step: 47200, lr: 9.128983911282936e-05
2023-03-02 03:42:00,086 44k INFO Saving model and optimizer state at iteration 716 to ./logs/44k/G_47200.pth
2023-03-02 03:42:02,431 44k INFO Saving model and optimizer state at iteration 716 to ./logs/44k/D_47200.pth
2023-03-02 03:42:04,664 44k INFO Freeing up space by deleting checkpoint ./logs/44k/G_44800.pth
2023-03-02 03:42:04,667 44k INFO Freeing up space by deleting checkpoint ./logs/44k/D_44800.pth
2023-03-02 03:42:53,579 44k INFO ====> Epoch: 716, cost 72.67 s
2023-03-02 03:43:52,150 44k INFO ====> Epoch: 717, cost 58.57 s
2023-03-02 03:44:49,722 44k INFO ====> Epoch: 718, cost 57.57 s
2023-03-02 03:45:05,061 44k INFO Train Epoch: 719 [18%]
2023-03-02 03:45:05,063 44k INFO Losses: [2.5338754653930664, 1.930866003036499, 9.357100486755371, 14.435588836669922, 0.5638878345489502], step: 47400, lr: 9.125560970219495e-05
2023-03-02 03:45:49,117 44k INFO ====> Epoch: 719, cost 59.39 s
2023-03-02 03:46:46,713 44k INFO ====> Epoch: 720, cost 57.60 s
2023-03-02 03:47:44,184 44k INFO ====> Epoch: 721, cost 57.47 s
2023-03-02 03:48:01,059 44k INFO Train Epoch: 722 [21%]
2023-03-02 03:48:01,061 44k INFO Losses: [2.6545112133026123, 2.1486454010009766, 7.099390029907227, 13.87285327911377, 0.7502254843711853], step: 47600, lr: 9.122139312598508e-05
2023-03-02 03:48:42,350 44k INFO ====> Epoch: 722, cost 58.17 s
2023-03-02 03:49:41,694 44k INFO ====> Epoch: 723, cost 59.34 s
2023-03-02 03:50:39,633 44k INFO ====> Epoch: 724, cost 57.94 s
2023-03-02 03:50:58,987 44k INFO Train Epoch: 725 [24%]
2023-03-02 03:50:58,989 44k INFO Losses: [2.5127580165863037, 2.1117594242095947, 7.201158046722412, 13.31640338897705, 0.6080480217933655], step: 47800, lr: 9.118718937938746e-05
2023-03-02 03:51:39,214 44k INFO ====> Epoch: 725, cost 59.58 s
2023-03-02 03:52:39,795 44k INFO ====> Epoch: 726, cost 60.58 s
2023-03-02 03:53:38,767 44k INFO ====> Epoch: 727, cost 58.97 s
2023-03-02 03:53:58,948 44k INFO Train Epoch: 728 [27%]
2023-03-02 03:53:58,949 44k INFO Losses: [2.564847469329834, 2.1737136840820312, 6.065760612487793, 14.973374366760254, 0.9307472109794617], step: 48000, lr: 9.115299845759157e-05
2023-03-02 03:54:03,729 44k INFO Saving model and optimizer state at iteration 728 to ./logs/44k/G_48000.pth
2023-03-02 03:54:07,312 44k INFO Saving model and optimizer state at iteration 728 to ./logs/44k/D_48000.pth
2023-03-02 03:54:09,897 44k INFO Freeing up space by deleting checkpoint ./logs/44k/G_45600.pth
2023-03-02 03:54:09,898 44k INFO Freeing up space by deleting checkpoint ./logs/44k/D_45600.pth
2023-03-02 03:54:50,437 44k INFO ====> Epoch: 728, cost 71.67 s
2023-03-02 03:55:48,556 44k INFO ====> Epoch: 729, cost 58.12 s
2023-03-02 03:56:45,884 44k INFO ====> Epoch: 730, cost 57.33 s
2023-03-02 03:57:06,982 44k INFO Train Epoch: 731 [30%]
2023-03-02 03:57:06,983 44k INFO Losses: [2.3879594802856445, 2.3248441219329834, 11.395038604736328, 16.206912994384766, 0.40104568004608154], step: 48200, lr: 9.111882035578874e-05
2023-03-02 03:57:44,146 44k INFO ====> Epoch: 731, cost 58.26 s
2023-03-02 03:58:42,357 44k INFO ====> Epoch: 732, cost 58.21 s
2023-03-02 03:59:43,483 44k INFO ====> Epoch: 733, cost 61.13 s
2023-03-02 04:00:08,339 44k INFO Train Epoch: 734 [33%]
2023-03-02 04:00:08,341 44k INFO Losses: [2.495140314102173, 2.2871336936950684, 7.037008285522461, 14.002639770507812, 1.016313910484314], step: 48400, lr: 9.108465506917204e-05
2023-03-02 04:00:43,543 44k INFO ====> Epoch: 734, cost 60.06 s
2023-03-02 04:01:43,447 44k INFO ====> Epoch: 735, cost 59.90 s
2023-03-02 04:02:42,311 44k INFO ====> Epoch: 736, cost 58.86 s
2023-03-02 04:03:06,920 44k INFO Train Epoch: 737 [36%]
2023-03-02 04:03:06,921 44k INFO Losses: [2.576669454574585, 2.4103174209594727, 10.728577613830566, 16.05739974975586, 0.5428088307380676], step: 48600, lr: 9.10505025929364e-05
2023-03-02 04:03:41,221 44k INFO ====> Epoch: 737, cost 58.91 s
2023-03-02 04:04:39,207 44k INFO ====> Epoch: 738, cost 57.99 s
2023-03-02 04:05:38,244 44k INFO ====> Epoch: 739, cost 59.04 s
2023-03-02 04:06:05,470 44k INFO Train Epoch: 740 [39%]
2023-03-02 04:06:05,472 44k INFO Losses: [2.7716500759124756, 2.0621347427368164, 10.734313011169434, 16.739429473876953, 0.674547016620636], step: 48800, lr: 9.101636292227852e-05
2023-03-02 04:06:10,398 44k INFO Saving model and optimizer state at iteration 740 to ./logs/44k/G_48800.pth
2023-03-02 04:06:13,372 44k INFO Saving model and optimizer state at iteration 740 to ./logs/44k/D_48800.pth
2023-03-02 04:06:15,715 44k INFO Freeing up space by deleting checkpoint ./logs/44k/G_46400.pth
2023-03-02 04:06:15,717 44k INFO Freeing up space by deleting checkpoint ./logs/44k/D_46400.pth
2023-03-02 04:06:49,709 44k INFO ====> Epoch: 740, cost 71.47 s
2023-03-02 04:07:47,610 44k INFO ====> Epoch: 741, cost 57.90 s
2023-03-02 04:08:46,482 44k INFO ====> Epoch: 742, cost 58.87 s
2023-03-02 04:09:16,721 44k INFO Train Epoch: 743 [42%]
2023-03-02 04:09:16,723 44k INFO Losses: [2.6152420043945312, 2.0403897762298584, 6.035984039306641, 14.092401504516602, 0.7611944079399109], step: 49000, lr: 9.098223605239689e-05
2023-03-02 04:09:47,800 44k INFO ====> Epoch: 743, cost 61.32 s
2023-03-02 04:10:46,804 44k INFO ====> Epoch: 744, cost 59.00 s
2023-03-02 04:11:44,089 44k INFO ====> Epoch: 745, cost 57.29 s
2023-03-02 04:12:13,754 44k INFO Train Epoch: 746 [45%]
2023-03-02 04:12:13,756 44k INFO Losses: [2.4621551036834717, 2.103376626968384, 12.90227222442627, 16.736867904663086, 0.8445587754249573], step: 49200, lr: 9.094812197849185e-05
2023-03-02 04:12:43,790 44k INFO ====> Epoch: 746, cost 59.70 s
2023-03-02 04:13:42,248 44k INFO ====> Epoch: 747, cost 58.46 s
2023-03-02 04:14:40,296 44k INFO ====> Epoch: 748, cost 58.05 s
2023-03-02 04:15:11,674 44k INFO Train Epoch: 749 [48%]
2023-03-02 04:15:11,675 44k INFO Losses: [2.554318428039551, 2.2493135929107666, 8.027237892150879, 14.697410583496094, 0.541877269744873], step: 49400, lr: 9.091402069576549e-05
2023-03-02 04:15:39,458 44k INFO ====> Epoch: 749, cost 59.16 s
2023-03-02 04:16:39,787 44k INFO ====> Epoch: 750, cost 60.33 s
2023-03-02 04:17:39,996 44k INFO ====> Epoch: 751, cost 60.21 s
2023-03-02 04:18:12,875 44k INFO Train Epoch: 752 [52%]
2023-03-02 04:18:12,876 44k INFO Losses: [2.6036057472229004, 2.124340057373047, 13.987582206726074, 15.634875297546387, 0.7240552306175232], step: 49600, lr: 9.087993219942171e-05
2023-03-02 04:18:17,653 44k INFO Saving model and optimizer state at iteration 752 to ./logs/44k/G_49600.pth
2023-03-02 04:18:20,172 44k INFO Saving model and optimizer state at iteration 752 to ./logs/44k/D_49600.pth
2023-03-02 04:18:22,607 44k INFO Freeing up space by deleting checkpoint ./logs/44k/G_47200.pth
2023-03-02 04:18:22,609 44k INFO Freeing up space by deleting checkpoint ./logs/44k/D_47200.pth
2023-03-02 04:18:50,295 44k INFO ====> Epoch: 752, cost 70.30 s
2023-03-02 04:19:49,456 44k INFO ====> Epoch: 753, cost 59.16 s
2023-03-02 04:20:46,781 44k INFO ====> Epoch: 754, cost 57.32 s
2023-03-02 04:21:20,606 44k INFO Train Epoch: 755 [55%]
2023-03-02 04:21:20,608 44k INFO Losses: [2.558242082595825, 2.0892183780670166, 10.390260696411133, 15.462811470031738, 0.5773155093193054], step: 49800, lr: 9.084585648466622e-05
2023-03-02 04:21:44,786 44k INFO ====> Epoch: 755, cost 58.01 s
2023-03-02 04:22:43,842 44k INFO ====> Epoch: 756, cost 59.06 s
2023-03-02 04:23:40,959 44k INFO ====> Epoch: 757, cost 57.12 s
2023-03-02 04:24:16,515 44k INFO Train Epoch: 758 [58%]
2023-03-02 04:24:16,516 44k INFO Losses: [2.690032958984375, 2.2367308139801025, 10.459470748901367, 16.24781608581543, 0.5534948706626892], step: 50000, lr: 9.081179354670654e-05
2023-03-02 04:24:39,189 44k INFO ====> Epoch: 758, cost 58.23 s
2023-03-02 04:25:37,491 44k INFO ====> Epoch: 759, cost 58.30 s
2023-03-02 04:26:37,108 44k INFO ====> Epoch: 760, cost 59.62 s
2023-03-02 04:27:15,689 44k INFO Train Epoch: 761 [61%]
2023-03-02 04:27:15,690 44k INFO Losses: [2.5942797660827637, 2.115506172180176, 9.310372352600098, 16.441102981567383, 0.7882214784622192], step: 50200, lr: 9.077774338075196e-05
2023-03-02 04:27:36,108 44k INFO ====> Epoch: 761, cost 59.00 s
2023-03-02 04:28:35,536 44k INFO ====> Epoch: 762, cost 59.43 s
2023-03-02 04:29:35,432 44k INFO ====> Epoch: 763, cost 59.90 s
2023-03-02 04:30:14,251 44k INFO Train Epoch: 764 [64%]
2023-03-02 04:30:14,254 44k INFO Losses: [2.678270101547241, 2.056702136993408, 7.666993141174316, 15.56500244140625, 0.6134679913520813], step: 50400, lr: 9.074370598201358e-05
2023-03-02 04:30:21,173 44k INFO Saving model and optimizer state at iteration 764 to ./logs/44k/G_50400.pth
2023-03-02 04:30:23,484 44k INFO Saving model and optimizer state at iteration 764 to ./logs/44k/D_50400.pth
2023-03-02 04:30:26,053 44k INFO Freeing up space by deleting checkpoint ./logs/44k/G_48000.pth
2023-03-02 04:30:26,055 44k INFO Freeing up space by deleting checkpoint ./logs/44k/D_48000.pth
2023-03-02 04:30:48,339 44k INFO ====> Epoch: 764, cost 72.91 s
2023-03-02 04:31:46,261 44k INFO ====> Epoch: 765, cost 57.92 s
2023-03-02 04:32:45,271 44k INFO ====> Epoch: 766, cost 59.01 s
2023-03-02 04:33:26,862 44k INFO Train Epoch: 767 [67%]
2023-03-02 04:33:26,864 44k INFO Losses: [2.6806540489196777, 2.004979372024536, 8.107794761657715, 14.844000816345215, 0.4141501784324646], step: 50600, lr: 9.07096813457043e-05
2023-03-02 04:33:44,502 44k INFO ====> Epoch: 767, cost 59.23 s
2023-03-02 04:34:42,319 44k INFO ====> Epoch: 768, cost 57.82 s
2023-03-02 04:35:40,731 44k INFO ====> Epoch: 769, cost 58.41 s
2023-03-02 04:36:26,025 44k INFO Train Epoch: 770 [70%]
2023-03-02 04:36:26,027 44k INFO Losses: [2.6599369049072266, 1.87601637840271, 10.233906745910645, 15.611205101013184, 0.4818692207336426], step: 50800, lr: 9.067566946703881e-05
2023-03-02 04:36:42,161 44k INFO ====> Epoch: 770, cost 61.43 s
2023-03-02 04:37:41,372 44k INFO ====> Epoch: 771, cost 59.21 s
2023-03-02 04:38:39,630 44k INFO ====> Epoch: 772, cost 58.26 s
2023-03-02 04:39:23,193 44k INFO Train Epoch: 773 [73%]
2023-03-02 04:39:23,195 44k INFO Losses: [2.485257387161255, 1.9475377798080444, 13.093193054199219, 16.23479652404785, 0.5378583669662476], step: 51000, lr: 9.064167034123356e-05
2023-03-02 04:39:38,631 44k INFO ====> Epoch: 773, cost 59.00 s
2023-03-02 04:40:36,760 44k INFO ====> Epoch: 774, cost 58.13 s
2023-03-02 04:41:34,041 44k INFO ====> Epoch: 775, cost 57.28 s
2023-03-02 04:42:18,967 44k INFO Train Epoch: 776 [76%]
2023-03-02 04:42:18,968 44k INFO Losses: [2.6813642978668213, 1.8807628154754639, 9.903955459594727, 14.994125366210938, 0.7824285626411438], step: 51200, lr: 9.060768396350687e-05
2023-03-02 04:42:24,255 44k INFO Saving model and optimizer state at iteration 776 to ./logs/44k/G_51200.pth
2023-03-02 04:42:27,291 44k INFO Saving model and optimizer state at iteration 776 to ./logs/44k/D_51200.pth
2023-03-02 04:42:29,460 44k INFO Freeing up space by deleting checkpoint ./logs/44k/G_48800.pth
2023-03-02 04:42:29,462 44k INFO Freeing up space by deleting checkpoint ./logs/44k/D_48800.pth
2023-03-02 04:42:43,430 44k INFO ====> Epoch: 776, cost 69.39 s
2023-03-02 04:43:44,162 44k INFO ====> Epoch: 777, cost 60.73 s
2023-03-02 04:44:42,725 44k INFO ====> Epoch: 778, cost 58.56 s
2023-03-02 04:45:29,161 44k INFO Train Epoch: 779 [79%]
2023-03-02 04:45:29,162 44k INFO Losses: [2.6453967094421387, 2.2349092960357666, 9.449899673461914, 15.748457908630371, 0.6683043241500854], step: 51400, lr: 9.057371032907876e-05
2023-03-02 04:45:41,204 44k INFO ====> Epoch: 779, cost 58.48 s
2023-03-02 04:46:40,916 44k INFO ====> Epoch: 780, cost 59.71 s
2023-03-02 04:47:40,114 44k INFO ====> Epoch: 781, cost 59.20 s
2023-03-02 04:48:29,712 44k INFO Train Epoch: 782 [82%]
2023-03-02 04:48:29,713 44k INFO Losses: [2.4721078872680664, 2.2500619888305664, 10.715545654296875, 14.826188087463379, 0.5792394876480103], step: 51600, lr: 9.053974943317111e-05
2023-03-02 04:48:39,894 44k INFO ====> Epoch: 782, cost 59.78 s
2023-03-02 04:49:37,827 44k INFO ====> Epoch: 783, cost 57.93 s
2023-03-02 04:50:36,290 44k INFO ====> Epoch: 784, cost 58.46 s
2023-03-02 04:51:25,618 44k INFO Train Epoch: 785 [85%]
2023-03-02 04:51:25,619 44k INFO Losses: [2.5571815967559814, 2.126671314239502, 10.084488868713379, 15.456644058227539, 0.5143797993659973], step: 51800, lr: 9.050580127100758e-05
2023-03-02 04:51:34,202 44k INFO ====> Epoch: 785, cost 57.91 s
2023-03-02 04:52:31,146 44k INFO ====> Epoch: 786, cost 56.94 s
2023-03-02 04:53:29,342 44k INFO ====> Epoch: 787, cost 58.20 s
2023-03-02 04:54:20,409 44k INFO Train Epoch: 788 [88%]
2023-03-02 04:54:20,410 44k INFO Losses: [2.460880756378174, 2.130080223083496, 9.329163551330566, 14.91320514678955, 0.6955270767211914], step: 52000, lr: 9.04718658378136e-05
2023-03-02 04:54:26,886 44k INFO Saving model and optimizer state at iteration 788 to ./logs/44k/G_52000.pth
2023-03-02 04:54:29,128 44k INFO Saving model and optimizer state at iteration 788 to ./logs/44k/D_52000.pth
2023-03-02 04:54:31,368 44k INFO Freeing up space by deleting checkpoint ./logs/44k/G_49600.pth
2023-03-02 04:54:31,373 44k INFO Freeing up space by deleting checkpoint ./logs/44k/D_49600.pth
2023-03-02 04:54:38,359 44k INFO ====> Epoch: 788, cost 69.02 s
2023-03-02 04:55:38,642 44k INFO ====> Epoch: 789, cost 60.28 s
2023-03-02 04:56:35,917 44k INFO ====> Epoch: 790, cost 57.27 s
2023-03-02 04:57:31,697 44k INFO Train Epoch: 791 [91%]
2023-03-02 04:57:31,699 44k INFO Losses: [2.4677679538726807, 2.08296537399292, 7.790469169616699, 15.238496780395508, 0.29934367537498474], step: 52200, lr: 9.043794312881642e-05
2023-03-02 04:57:36,767 44k INFO ====> Epoch: 791, cost 60.85 s
2023-03-02 04:58:35,844 44k INFO ====> Epoch: 792, cost 59.08 s
2023-03-02 04:59:36,343 44k INFO ====> Epoch: 793, cost 60.50 s
2023-03-02 05:00:32,341 44k INFO Train Epoch: 794 [94%]
2023-03-02 05:00:32,343 44k INFO Losses: [2.79270601272583, 1.846032738685608, 10.537714004516602, 16.36833381652832, 0.5139970779418945], step: 52400, lr: 9.040403313924505e-05
2023-03-02 05:00:36,739 44k INFO ====> Epoch: 794, cost 60.40 s
2023-03-02 05:01:34,010 44k INFO ====> Epoch: 795, cost 57.27 s
2023-03-02 05:02:31,627 44k INFO ====> Epoch: 796, cost 57.62 s
2023-03-02 05:03:27,961 44k INFO Train Epoch: 797 [97%]
2023-03-02 05:03:27,963 44k INFO Losses: [2.6635031700134277, 2.106999397277832, 8.515193939208984, 15.78533935546875, 0.416564404964447], step: 52600, lr: 9.03701358643303e-05
2023-03-02 05:03:30,123 44k INFO ====> Epoch: 797, cost 58.50 s
2023-03-02 05:04:29,747 44k INFO ====> Epoch: 798, cost 59.62 s
2023-03-02 05:05:27,533 44k INFO ====> Epoch: 799, cost 57.79 s
2023-03-02 05:06:25,536 44k INFO ====> Epoch: 800, cost 58.00 s
2023-03-02 05:06:32,289 44k INFO Train Epoch: 801 [0%]
2023-03-02 05:06:32,291 44k INFO Losses: [2.5715363025665283, 1.905165433883667, 7.808526992797852, 15.150589942932129, 0.7373371124267578], step: 52800, lr: 9.032495926789236e-05
2023-03-02 05:06:37,354 44k INFO Saving model and optimizer state at iteration 801 to ./logs/44k/G_52800.pth
2023-03-02 05:06:39,803 44k INFO Saving model and optimizer state at iteration 801 to ./logs/44k/D_52800.pth
2023-03-02 05:06:42,135 44k INFO Freeing up space by deleting checkpoint ./logs/44k/G_50400.pth
2023-03-02 05:06:42,137 44k INFO Freeing up space by deleting checkpoint ./logs/44k/D_50400.pth
2023-03-02 05:07:38,623 44k INFO ====> Epoch: 801, cost 73.09 s
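The step counter at epoch boundaries pins down the run's geometry: epoch 701 opens at step 46200 and epoch 801 at step 52800, i.e. exactly 66 optimizer steps per epoch, which at batch_size 6 puts the training set at just under 400 clips (at most 66 × 6 = 396). A quick check:

```python
# Quick arithmetic from the log: epoch 701 opens at step 46200 and epoch 801
# at step 52800, so each epoch contributes a fixed number of optimizer steps.
steps_per_epoch = (52800 - 46200) // (801 - 701)
assert steps_per_epoch == 66
batch_size = 6  # from the config dump
print("dataset size <=", steps_per_epoch * batch_size, "clips")  # <= 396
```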
2023-03-02 11:55:20,520 44k INFO {'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 1234, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': False, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'nyaru': 0, 'huiyu': 1, 'nen': 2, 'paimon': 3, 'yunhao': 4}, 'model_dir': './logs/44k'}
2023-03-02 11:55:21,235 44k WARNING git hash values are different. abdb0e28(saved) != cea6df30(current)
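This WARNING is informational: the trainer records the git commit it was first launched from under model_dir and compares it against the current checkout on every start, so it fires whenever the code has been updated between runs. A sketch of such a check (the "githash" file name and details are assumptions):

```python
# Sketch of the saved-vs-current git hash comparison (assumption: the saved
# hash lives in a plain "githash" file under model_dir).
import os
import subprocess
import warnings

def check_git_hash(model_dir: str) -> None:
    current = subprocess.check_output(
        ["git", "rev-parse", "HEAD"], text=True).strip()
    path = os.path.join(model_dir, "githash")
    if os.path.exists(path):
        saved = open(path).read().strip()
        if saved != current:
            warnings.warn("git hash values are different. "
                          f"{saved[:8]}(saved) != {current[:8]}(current)")
    else:
        open(path, "w").write(current)
```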
2023-03-02 11:57:18,150 44k INFO {'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 68951, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': True, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'nyaru': 0, 'huiyu': 1, 'nen': 2, 'paimon': 3, 'yunhao': 4}, 'model_dir': './logs/44k'}
2023-03-02 11:57:18,163 44k WARNING git hash values are different. abdb0e28(saved) != cea6df30(current)
2023-03-02 11:57:53,044 44k INFO Loaded checkpoint './logs/44k/G_52800.pth' (iteration 801)
2023-03-02 11:58:02,582 44k INFO Loaded checkpoint './logs/44k/D_52800.pth' (iteration 801)
2023-03-02 12:01:04,939 44k INFO {'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 3601, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': True, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'kokomi': 0}, 'model_dir': './logs/44k'}
2023-03-02 12:01:04,956 44k WARNING git hash values are different. abdb0e28(saved) != cea6df30(current)
2023-03-02 12:02:05,651 44k INFO {'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 68951, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': True, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'kokomi': 0}, 'model_dir': './logs/44k'}
2023-03-02 12:02:05,664 44k WARNING git hash values are different. abdb0e28(saved) != cea6df30(current)
2023-03-02 12:02:13,354 44k INFO Loaded checkpoint './logs/44k/G_52800.pth' (iteration 801)
2023-03-02 12:02:14,953 44k INFO Loaded checkpoint './logs/44k/D_52800.pth' (iteration 801)
2023-03-02 12:02:31,855 44k INFO Train Epoch: 801 [0%]
2023-03-02 12:02:31,856 44k INFO Losses: [2.5281124114990234, 1.9473576545715332, 8.766615867614746, 15.216752052307129, 0.5539695620536804], step: 52800, lr: 9.031366864798387e-05
2023-03-02 12:02:38,184 44k INFO Saving model and optimizer state at iteration 801 to ./logs/44k/G_52800.pth
2023-03-02 12:02:40,536 44k INFO Saving model and optimizer state at iteration 801 to ./logs/44k/D_52800.pth
2023-03-02 12:03:57,715 44k INFO ====> Epoch: 801, cost 112.07 s
2023-03-02 12:04:54,369 44k INFO ====> Epoch: 802, cost 56.65 s
2023-03-02 12:05:51,472 44k INFO ====> Epoch: 803, cost 57.10 s
2023-03-02 12:05:58,092 44k INFO Train Epoch: 804 [3%]
2023-03-02 12:05:58,093 44k INFO Losses: [2.738523006439209, 2.04595947265625, 9.245983123779297, 15.636491775512695, 0.7293708920478821], step: 53000, lr: 9.027980525551768e-05
2023-03-02 12:06:49,507 44k INFO ====> Epoch: 804, cost 58.03 s
2023-03-02 12:07:46,872 44k INFO ====> Epoch: 805, cost 57.36 s
2023-03-02 12:08:23,236 44k INFO {'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 68951, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': True, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'kokomi': 0}, 'model_dir': './logs/44k'}
2023-03-02 12:08:23,282 44k WARNING git hash values are different. abdb0e28(saved) != cea6df30(current)
2023-03-02 12:08:30,723 44k INFO Loaded checkpoint './logs/44k/G_52800.pth' (iteration 801)
2023-03-02 12:08:32,104 44k INFO Loaded checkpoint './logs/44k/D_52800.pth' (iteration 801)
2023-03-02 12:08:45,081 44k INFO Train Epoch: 801 [0%]
2023-03-02 12:08:45,082 44k INFO Losses: [2.5290448665618896, 1.9461716413497925, 8.769097328186035, 15.220534324645996, 0.5537859797477722], step: 52800, lr: 9.030237943940286e-05
2023-03-02 12:08:51,671 44k INFO Saving model and optimizer state at iteration 801 to ./logs/44k/G_52800.pth
2023-03-02 12:08:55,206 44k INFO Saving model and optimizer state at iteration 801 to ./logs/44k/D_52800.pth
2023-03-02 12:10:10,196 44k INFO ====> Epoch: 801, cost 106.96 s
2023-03-02 12:11:07,647 44k INFO ====> Epoch: 802, cost 57.45 s
2023-03-02 12:12:05,308 44k INFO ====> Epoch: 803, cost 57.66 s
2023-03-02 12:12:12,044 44k INFO Train Epoch: 804 [3%]
2023-03-02 12:12:12,045 44k INFO Losses: [2.7254109382629395, 2.2140634059906006, 9.286031723022461, 15.42168140411377, 0.6933740377426147], step: 53000, lr: 9.026852027986074e-05
2023-03-02 12:13:03,438 44k INFO ====> Epoch: 804, cost 58.13 s
2023-03-02 12:14:01,102 44k INFO ====> Epoch: 805, cost 57.66 s
2023-03-02 12:14:59,783 44k INFO ====> Epoch: 806, cost 58.68 s
2023-03-02 12:15:09,110 44k INFO Train Epoch: 807 [6%]
2023-03-02 12:15:09,111 44k INFO Losses: [2.630686044692993, 1.9620155096054077, 7.442999839782715, 14.134700775146484, 0.6419307589530945], step: 53200, lr: 9.023467381591636e-05
2023-03-02 12:15:58,919 44k INFO ====> Epoch: 807, cost 59.14 s
2023-03-02 12:16:57,819 44k INFO ====> Epoch: 808, cost 58.90 s
2023-03-02 12:17:56,321 44k INFO ====> Epoch: 809, cost 58.50 s
2023-03-02 12:18:07,999 44k INFO Train Epoch: 810 [9%]
2023-03-02 12:18:08,000 44k INFO Losses: [2.475161075592041, 2.2620067596435547, 10.451228141784668, 15.055403709411621, 0.44125381112098694], step: 53400, lr: 9.020084004280947e-05
2023-03-02 12:18:56,503 44k INFO ====> Epoch: 810, cost 60.18 s
2023-03-02 12:19:55,246 44k INFO ====> Epoch: 811, cost 58.74 s
2023-03-02 12:20:54,463 44k INFO ====> Epoch: 812, cost 59.22 s
2023-03-02 12:21:08,012 44k INFO Train Epoch: 813 [12%]
2023-03-02 12:21:08,014 44k INFO Losses: [2.398993492126465, 2.5818238258361816, 13.947368621826172, 16.47783851623535, 0.839489758014679], step: 53600, lr: 9.01670189557816e-05
2023-03-02 12:21:13,291 44k INFO Saving model and optimizer state at iteration 813 to ./logs/44k/G_53600.pth
2023-03-02 12:21:16,376 44k INFO Saving model and optimizer state at iteration 813 to ./logs/44k/D_53600.pth
2023-03-02 12:21:18,653 44k INFO Freeing up space by deleting checkpoint ./logs/44k/G_51200.pth
2023-03-02 12:21:18,655 44k INFO Freeing up space by deleting checkpoint ./logs/44k/D_51200.pth
2023-03-02 12:22:08,068 44k INFO ====> Epoch: 813, cost 73.60 s
2023-03-02 12:23:06,752 44k INFO ====> Epoch: 814, cost 58.68 s
2023-03-02 12:24:05,401 44k INFO ====> Epoch: 815, cost 58.65 s
2023-03-02 12:24:19,385 44k INFO Train Epoch: 816 [15%]
2023-03-02 12:24:19,387 44k INFO Losses: [2.5815441608428955, 2.177922248840332, 10.958093643188477, 15.186318397521973, 0.5426397323608398], step: 53800, lr: 9.013321055007607e-05
2023-03-02 12:25:03,641 44k INFO ====> Epoch: 816, cost 58.24 s
2023-03-02 12:26:01,375 44k INFO ====> Epoch: 817, cost 57.73 s
2023-03-02 12:26:58,773 44k INFO ====> Epoch: 818, cost 57.40 s
2023-03-02 12:27:13,878 44k INFO Train Epoch: 819 [18%]
2023-03-02 12:27:13,880 44k INFO Losses: [2.5999813079833984, 2.0934271812438965, 10.85334300994873, 16.03907012939453, 0.5375160574913025], step: 54000, lr: 9.009941482093798e-05
2023-03-02 12:27:56,266 44k INFO ====> Epoch: 819, cost 57.49 s
2023-03-02 12:28:53,070 44k INFO ====> Epoch: 820, cost 56.80 s
2023-03-02 12:29:50,317 44k INFO ====> Epoch: 821, cost 57.25 s
2023-03-02 12:30:07,960 44k INFO Train Epoch: 822 [21%]
2023-03-02 12:30:07,962 44k INFO Losses: [2.685563325881958, 1.8703593015670776, 9.191014289855957, 14.256898880004883, 0.8665612936019897], step: 54200, lr: 9.00656317636142e-05
2023-03-02 12:30:49,264 44k INFO ====> Epoch: 822, cost 58.95 s
2023-03-02 12:31:47,232 44k INFO ====> Epoch: 823, cost 57.97 s
2023-03-02 12:32:45,267 44k INFO ====> Epoch: 824, cost 58.04 s
2023-03-02 12:33:05,807 44k INFO Train Epoch: 825 [24%]
2023-03-02 12:33:05,809 44k INFO Losses: [2.5102603435516357, 2.120687961578369, 12.48281478881836, 15.196264266967773, 0.3321622312068939], step: 54400, lr: 9.003186137335341e-05
2023-03-02 12:33:10,527 44k INFO Saving model and optimizer state at iteration 825 to ./logs/44k/G_54400.pth
2023-03-02 12:33:12,809 44k INFO Saving model and optimizer state at iteration 825 to ./logs/44k/D_54400.pth
2023-03-02 12:33:15,196 44k INFO Freeing up space by deleting checkpoint ./logs/44k/G_52000.pth
2023-03-02 12:33:15,197 44k INFO Freeing up space by deleting checkpoint ./logs/44k/D_52000.pth
2023-03-02 12:33:57,423 44k INFO ====> Epoch: 825, cost 72.16 s
2023-03-02 12:34:56,021 44k INFO ====> Epoch: 826, cost 58.60 s
2023-03-02 12:35:54,213 44k INFO ====> Epoch: 827, cost 58.19 s
2023-03-02 12:36:14,584 44k INFO Train Epoch: 828 [27%]
2023-03-02 12:36:14,586 44k INFO Losses: [2.6040029525756836, 2.001410722732544, 6.421409606933594, 13.876726150512695, 0.7228734493255615], step: 54600, lr: 8.999810364540606e-05
2023-03-02 12:36:52,910 44k INFO ====> Epoch: 828, cost 58.70 s
2023-03-02 12:37:50,446 44k INFO ====> Epoch: 829, cost 57.54 s
2023-03-02 12:38:47,806 44k INFO ====> Epoch: 830, cost 57.36 s
2023-03-02 12:39:08,849 44k INFO Train Epoch: 831 [30%]
2023-03-02 12:39:08,851 44k INFO Losses: [2.339491605758667, 2.3273024559020996, 12.171259880065918, 15.233323097229004, 0.7573917508125305], step: 54800, lr: 8.996435857502436e-05
2023-03-02 12:39:46,036 44k INFO ====> Epoch: 831, cost 58.23 s
2023-03-02 12:40:42,865 44k INFO ====> Epoch: 832, cost 56.83 s
2023-03-02 12:41:39,826 44k INFO ====> Epoch: 833, cost 56.96 s
2023-03-02 12:42:02,494 44k INFO Train Epoch: 834 [33%]
2023-03-02 12:42:02,496 44k INFO Losses: [2.6657562255859375, 2.431041955947876, 12.436636924743652, 15.976394653320312, 0.6516526937484741], step: 55000, lr: 8.993062615746231e-05
2023-03-02 12:42:37,568 44k INFO ====> Epoch: 834, cost 57.74 s
2023-03-02 12:43:35,540 44k INFO ====> Epoch: 835, cost 57.97 s
2023-03-02 12:44:33,812 44k INFO ====> Epoch: 836, cost 58.27 s
2023-03-02 12:44:59,355 44k INFO Train Epoch: 837 [36%]
2023-03-02 12:44:59,356 44k INFO Losses: [2.5006682872772217, 2.4434731006622314, 11.303675651550293, 17.030460357666016, 0.9370664954185486], step: 55200, lr: 8.98969063879757e-05
2023-03-02 12:45:04,740 44k INFO Saving model and optimizer state at iteration 837 to ./logs/44k/G_55200.pth
2023-03-02 12:45:07,643 44k INFO Saving model and optimizer state at iteration 837 to ./logs/44k/D_55200.pth
2023-03-02 12:45:09,982 44k INFO Freeing up space by deleting checkpoint ./logs/44k/G_52800.pth
2023-03-02 12:45:09,984 44k INFO Freeing up space by deleting checkpoint ./logs/44k/D_52800.pth
2023-03-02 12:45:45,325 44k INFO ====> Epoch: 837, cost 71.51 s
2023-03-02 12:46:44,186 44k INFO ====> Epoch: 838, cost 58.86 s
2023-03-02 12:47:42,907 44k INFO ====> Epoch: 839, cost 58.72 s
2023-03-02 12:48:10,291 44k INFO Train Epoch: 840 [39%]
2023-03-02 12:48:10,293 44k INFO Losses: [2.640389919281006, 2.1708312034606934, 10.545830726623535, 15.334138870239258, 0.6339477896690369], step: 55400, lr: 8.98631992618221e-05
2023-03-02 12:48:41,831 44k INFO ====> Epoch: 840, cost 58.92 s
2023-03-02 12:49:39,980 44k INFO ====> Epoch: 841, cost 58.15 s
2023-03-02 12:50:37,685 44k INFO ====> Epoch: 842, cost 57.70 s
2023-03-02 12:51:05,906 44k INFO Train Epoch: 843 [42%]
2023-03-02 12:51:05,907 44k INFO Losses: [2.6588685512542725, 2.3798961639404297, 11.17093563079834, 16.339601516723633, 0.49810323119163513], step: 55600, lr: 8.982950477426087e-05
2023-03-02 12:51:36,139 44k INFO ====> Epoch: 843, cost 58.45 s
2023-03-02 12:52:33,735 44k INFO ====> Epoch: 844, cost 57.60 s
2023-03-02 12:53:31,179 44k INFO ====> Epoch: 845, cost 57.44 s
2023-03-02 12:54:00,564 44k INFO Train Epoch: 846 [45%]
2023-03-02 12:54:00,566 44k INFO Losses: [2.392975330352783, 2.263887405395508, 12.297293663024902, 17.516983032226562, 0.45464274287223816], step: 55800, lr: 8.979582292055309e-05
2023-03-02 12:54:28,939 44k INFO ====> Epoch: 846, cost 57.76 s
2023-03-02 12:55:25,915 44k INFO ====> Epoch: 847, cost 56.98 s
2023-03-02 12:56:22,805 44k INFO ====> Epoch: 848, cost 56.89 s
2023-03-02 12:56:54,111 44k INFO Train Epoch: 849 [48%]
2023-03-02 12:56:54,112 44k INFO Losses: [2.599750518798828, 2.2047019004821777, 6.599445343017578, 14.391129493713379, 0.5396345853805542], step: 56000, lr: 8.976215369596169e-05
2023-03-02 12:56:58,835 44k INFO Saving model and optimizer state at iteration 849 to ./logs/44k/G_56000.pth
2023-03-02 12:57:01,082 44k INFO Saving model and optimizer state at iteration 849 to ./logs/44k/D_56000.pth
2023-03-02 12:57:03,653 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_53600.pth
2023-03-02 12:57:03,658 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_53600.pth
2023-03-02 12:57:33,793 44k INFO ====> Epoch: 849, cost 70.99 s
2023-03-02 12:58:32,428 44k INFO ====> Epoch: 850, cost 58.64 s
2023-03-02 12:59:31,209 44k INFO ====> Epoch: 851, cost 58.78 s
2023-03-02 13:00:05,262 44k INFO Train Epoch: 852 [52%]
2023-03-02 13:00:05,264 44k INFO Losses: [2.527873992919922, 2.292983055114746, 10.024873733520508, 14.533777236938477, 0.4941965639591217], step: 56200, lr: 8.972849709575134e-05
2023-03-02 13:00:31,164 44k INFO ====> Epoch: 852, cost 59.95 s
2023-03-02 13:01:29,928 44k INFO ====> Epoch: 853, cost 58.76 s
2023-03-02 13:02:28,188 44k INFO ====> Epoch: 854, cost 58.26 s
2023-03-02 13:03:02,296 44k INFO Train Epoch: 855 [55%]
2023-03-02 13:03:02,299 44k INFO Losses: [2.684601306915283, 2.168551206588745, 10.371045112609863, 15.839573860168457, 0.8496739864349365], step: 56400, lr: 8.969485311518848e-05
2023-03-02 13:03:26,616 44k INFO ====> Epoch: 855, cost 58.43 s
2023-03-02 13:04:23,879 44k INFO ====> Epoch: 856, cost 57.26 s
2023-03-02 13:05:21,102 44k INFO ====> Epoch: 857, cost 57.22 s
2023-03-02 13:05:56,146 44k INFO Train Epoch: 858 [58%]
2023-03-02 13:05:56,148 44k INFO Losses: [2.649883270263672, 2.150902271270752, 10.54255199432373, 15.485033988952637, 0.7166399359703064], step: 56600, lr: 8.966122174954132e-05
2023-03-02 13:06:18,612 44k INFO ====> Epoch: 858, cost 57.51 s
2023-03-02 13:07:15,604 44k INFO ====> Epoch: 859, cost 56.99 s
2023-03-02 13:08:13,516 44k INFO ====> Epoch: 860, cost 57.91 s
2023-03-02 13:08:51,404 44k INFO Train Epoch: 861 [61%]
2023-03-02 13:08:51,406 44k INFO Losses: [2.3950774669647217, 2.4171199798583984, 12.994575500488281, 16.197185516357422, 0.6987941861152649], step: 56800, lr: 8.962760299407988e-05
2023-03-02 13:08:56,030 44k INFO Saving model and optimizer state at iteration 861 to ./logs/44k/G_56800.pth
2023-03-02 13:08:59,237 44k INFO Saving model and optimizer state at iteration 861 to ./logs/44k/D_56800.pth
2023-03-02 13:09:01,680 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_54400.pth
2023-03-02 13:09:01,683 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_54400.pth
2023-03-02 13:09:24,484 44k INFO ====> Epoch: 861, cost 70.97 s
2023-03-02 13:10:23,033 44k INFO ====> Epoch: 862, cost 58.55 s
2023-03-02 13:11:20,971 44k INFO ====> Epoch: 863, cost 57.94 s
2023-03-02 13:12:00,287 44k INFO Train Epoch: 864 [64%]
2023-03-02 13:12:00,289 44k INFO Losses: [2.5632762908935547, 2.1706743240356445, 7.657403469085693, 15.568636894226074, 0.2758500277996063], step: 57000, lr: 8.959399684407593e-05
2023-03-02 13:12:19,548 44k INFO ====> Epoch: 864, cost 58.58 s
2023-03-02 13:13:17,187 44k INFO ====> Epoch: 865, cost 57.64 s
2023-03-02 13:14:14,350 44k INFO ====> Epoch: 866, cost 57.16 s
2023-03-02 13:14:54,119 44k INFO Train Epoch: 867 [67%]
2023-03-02 13:14:54,121 44k INFO Losses: [2.462493896484375, 2.068551778793335, 7.471784591674805, 13.727977752685547, 0.5810964703559875], step: 57200, lr: 8.9560403294803e-05
2023-03-02 13:15:12,148 44k INFO ====> Epoch: 867, cost 57.80 s
2023-03-02 13:16:09,258 44k INFO ====> Epoch: 868, cost 57.11 s
2023-03-02 13:17:06,821 44k INFO ====> Epoch: 869, cost 57.56 s
2023-03-02 13:17:49,132 44k INFO Train Epoch: 870 [70%]
2023-03-02 13:17:49,134 44k INFO Losses: [2.6053626537323, 2.235959768295288, 9.702953338623047, 16.257829666137695, 0.5782600045204163], step: 57400, lr: 8.952682234153643e-05
2023-03-02 13:18:05,860 44k INFO ====> Epoch: 870, cost 59.04 s
2023-03-02 13:19:03,517 44k INFO ====> Epoch: 871, cost 57.66 s
2023-03-02 13:20:02,144 44k INFO ====> Epoch: 872, cost 58.63 s
2023-03-02 13:20:47,166 44k INFO Train Epoch: 873 [73%]
2023-03-02 13:20:47,167 44k INFO Losses: [2.3925700187683105, 2.242800235748291, 10.976139068603516, 15.83530044555664, 0.9908230304718018], step: 57600, lr: 8.949325397955328e-05
2023-03-02 13:20:53,397 44k INFO Saving model and optimizer state at iteration 873 to ./logs/44k/G_57600.pth
2023-03-02 13:20:55,610 44k INFO Saving model and optimizer state at iteration 873 to ./logs/44k/D_57600.pth
2023-03-02 13:20:57,913 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_55200.pth
2023-03-02 13:20:57,915 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_55200.pth
2023-03-02 13:21:14,758 44k INFO ====> Epoch: 873, cost 72.61 s
2023-03-02 13:22:13,185 44k INFO ====> Epoch: 874, cost 58.43 s
2023-03-02 13:23:10,956 44k INFO ====> Epoch: 875, cost 57.77 s
2023-03-02 13:23:55,904 44k INFO Train Epoch: 876 [76%]
2023-03-02 13:23:55,905 44k INFO Losses: [2.4486911296844482, 2.306509017944336, 13.696803092956543, 15.855753898620605, 0.5140385031700134], step: 57800, lr: 8.945969820413243e-05
2023-03-02 13:24:09,279 44k INFO ====> Epoch: 876, cost 58.32 s
2023-03-02 13:25:06,580 44k INFO ====> Epoch: 877, cost 57.30 s
2023-03-02 13:26:04,172 44k INFO ====> Epoch: 878, cost 57.59 s
2023-03-02 13:26:50,177 44k INFO Train Epoch: 879 [79%]
2023-03-02 13:26:50,178 44k INFO Losses: [2.608511447906494, 2.1255524158477783, 8.332140922546387, 14.958667755126953, 0.5954391360282898], step: 58000, lr: 8.942615501055449e-05
2023-03-02 13:27:01,813 44k INFO ====> Epoch: 879, cost 57.64 s
2023-03-02 13:27:58,775 44k INFO ====> Epoch: 880, cost 56.96 s
2023-03-02 13:28:56,462 44k INFO ====> Epoch: 881, cost 57.69 s
2023-03-02 13:29:45,345 44k INFO Train Epoch: 882 [82%]
2023-03-02 13:29:45,346 44k INFO Losses: [2.638094902038574, 2.0959653854370117, 12.667659759521484, 15.509319305419922, 0.5255048274993896], step: 58200, lr: 8.939262439410188e-05
2023-03-02 13:29:55,252 44k INFO ====> Epoch: 882, cost 58.79 s
2023-03-02 13:30:53,913 44k INFO ====> Epoch: 883, cost 58.66 s
2023-03-02 13:31:52,674 44k INFO ====> Epoch: 884, cost 58.76 s
2023-03-02 13:32:44,224 44k INFO Train Epoch: 885 [85%]
2023-03-02 13:32:44,226 44k INFO Losses: [2.7066783905029297, 1.8410203456878662, 8.828810691833496, 15.466930389404297, 0.7114719748497009], step: 58400, lr: 8.935910635005875e-05
2023-03-02 13:32:48,919 44k INFO Saving model and optimizer state at iteration 885 to ./logs/44k/G_58400.pth
2023-03-02 13:32:51,347 44k INFO Saving model and optimizer state at iteration 885 to ./logs/44k/D_58400.pth
2023-03-02 13:32:53,683 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_56000.pth
2023-03-02 13:32:53,684 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_56000.pth
2023-03-02 13:33:03,031 44k INFO ====> Epoch: 885, cost 70.36 s
2023-03-02 13:34:03,391 44k INFO ====> Epoch: 886, cost 60.36 s
2023-03-02 13:35:01,869 44k INFO ====> Epoch: 887, cost 58.48 s
2023-03-02 13:35:53,886 44k INFO Train Epoch: 888 [88%]
2023-03-02 13:35:53,887 44k INFO Losses: [2.648952007293701, 2.0473170280456543, 9.788250923156738, 15.0264253616333, 1.0104339122772217], step: 58600, lr: 8.932560087371105e-05
2023-03-02 13:36:00,674 44k INFO ====> Epoch: 888, cost 58.81 s
2023-03-02 13:36:58,804 44k INFO ====> Epoch: 889, cost 58.13 s
2023-03-02 13:37:56,388 44k INFO ====> Epoch: 890, cost 57.58 s
2023-03-02 13:38:49,153 44k INFO Train Epoch: 891 [91%]
2023-03-02 13:38:49,155 44k INFO Losses: [2.5813400745391846, 2.08500337600708, 10.76225471496582, 15.93575382232666, 0.6784761548042297], step: 58800, lr: 8.929210796034647e-05
2023-03-02 13:38:54,748 44k INFO ====> Epoch: 891, cost 58.36 s
2023-03-02 13:39:51,955 44k INFO ====> Epoch: 892, cost 57.21 s
2023-03-02 13:40:48,918 44k INFO ====> Epoch: 893, cost 56.96 s
2023-03-02 13:41:42,434 44k INFO Train Epoch: 894 [94%]
2023-03-02 13:41:42,436 44k INFO Losses: [2.5453760623931885, 2.248476028442383, 10.435331344604492, 15.484513282775879, 0.4626404941082001], step: 59000, lr: 8.925862760525449e-05
2023-03-02 13:41:46,595 44k INFO ====> Epoch: 894, cost 57.68 s
2023-03-02 13:42:44,060 44k INFO ====> Epoch: 895, cost 57.46 s
2023-03-02 13:43:41,878 44k INFO ====> Epoch: 896, cost 57.82 s
2023-03-02 13:44:38,668 44k INFO Train Epoch: 897 [97%]
2023-03-02 13:44:38,669 44k INFO Losses: [2.55935001373291, 2.176525354385376, 6.523709774017334, 14.455953598022461, 0.4945708215236664], step: 59200, lr: 8.922515980372634e-05
2023-03-02 13:44:44,984 44k INFO Saving model and optimizer state at iteration 897 to ./logs/44k/G_59200.pth
2023-03-02 13:44:47,305 44k INFO Saving model and optimizer state at iteration 897 to ./logs/44k/D_59200.pth
2023-03-02 13:44:49,830 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_56800.pth
2023-03-02 13:44:49,832 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_56800.pth
2023-03-02 13:44:51,163 44k INFO ====> Epoch: 897, cost 69.29 s
2023-03-02 13:45:52,532 44k INFO ====> Epoch: 898, cost 61.37 s
2023-03-02 13:46:51,631 44k INFO ====> Epoch: 899, cost 59.10 s
2023-03-02 13:47:50,221 44k INFO ====> Epoch: 900, cost 58.59 s
2023-03-02 13:47:57,233 44k INFO Train Epoch: 901 [0%]
2023-03-02 13:47:57,235 44k INFO Losses: [2.6594157218933105, 2.1257474422454834, 8.491215705871582, 14.929486274719238, 0.6012764573097229], step: 59400, lr: 8.918055558798614e-05
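The step counter gives away the dataset size: from step 54200 at epoch 822 to step 59400 at the start of epoch 901 is 5200 steps over roughly 79 epochs, about 66 optimizer steps per epoch, i.e. around 66 × 6 ≈ 400 training clips at batch_size 6. It also explains the [0%], [3%], [6%], ... progress stamps: a log_interval of 200 steps spans just over three epochs, so each loss line lands about 3% deeper into its epoch than the previous one. The arithmetic, taken straight from the lines above:

```python
# Step counters from two 'Train Epoch' lines in this log.
steps_per_epoch = (59400 - 54200) / (901 - 822)    # ≈ 65.8
print(round(steps_per_epoch))                      # ~66 batches per epoch
print(round(steps_per_epoch) * 6)                  # ~396 clips at batch_size 6
print(200 / steps_per_epoch)                       # ≈ 3.04 epochs per loss line,
                                                   # hence the ~3% drift per log
```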
2023-03-02 13:48:49,300 44k INFO ====> Epoch: 901, cost 59.08 s
2023-03-02 13:49:47,916 44k INFO ====> Epoch: 902, cost 58.62 s
2023-03-02 13:50:46,162 44k INFO ====> Epoch: 903, cost 58.25 s
2023-03-02 13:50:53,287 44k INFO Train Epoch: 904 [3%]
2023-03-02 13:50:53,289 44k INFO Losses: [2.3681013584136963, 2.1891205310821533, 14.320527076721191, 16.64045524597168, 0.2680394947528839], step: 59600, lr: 8.9147117059805e-05
2023-03-02 13:51:45,057 44k INFO ====> Epoch: 904, cost 58.89 s
2023-03-02 13:52:43,449 44k INFO ====> Epoch: 905, cost 58.39 s
2023-03-02 13:53:42,071 44k INFO ====> Epoch: 906, cost 58.62 s
2023-03-02 13:53:50,832 44k INFO Train Epoch: 907 [6%]
2023-03-02 13:53:50,834 44k INFO Losses: [2.509126901626587, 2.215972900390625, 13.917742729187012, 15.557184219360352, 0.41088372468948364], step: 59800, lr: 8.911369106950454e-05
2023-03-02 13:54:41,699 44k INFO ====> Epoch: 907, cost 59.63 s
2023-03-02 13:55:40,468 44k INFO ====> Epoch: 908, cost 58.77 s
2023-03-02 13:56:39,393 44k INFO ====> Epoch: 909, cost 58.93 s
2023-03-02 13:56:49,626 44k INFO Train Epoch: 910 [9%]
2023-03-02 13:56:49,628 44k INFO Losses: [2.5548064708709717, 2.0536727905273438, 7.251802921295166, 13.181257247924805, 0.7674618363380432], step: 60000, lr: 8.908027761238368e-05
2023-03-02 13:56:56,359 44k INFO Saving model and optimizer state at iteration 910 to ./logs/44k/G_60000.pth
2023-03-02 13:56:58,888 44k INFO Saving model and optimizer state at iteration 910 to ./logs/44k/D_60000.pth
2023-03-02 13:57:01,085 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_57600.pth
2023-03-02 13:57:01,087 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_57600.pth
2023-03-02 13:57:52,968 44k INFO ====> Epoch: 910, cost 73.57 s
2023-03-02 13:58:51,414 44k INFO ====> Epoch: 911, cost 58.45 s
2023-03-02 13:59:50,084 44k INFO ====> Epoch: 912, cost 58.67 s
2023-03-02 14:00:02,732 44k INFO Train Epoch: 913 [12%]
2023-03-02 14:00:02,733 44k INFO Losses: [2.603339672088623, 2.1333460807800293, 10.379961013793945, 15.35138988494873, 0.6416956186294556], step: 60200, lr: 8.904687668374304e-05
2023-03-02 14:00:50,000 44k INFO ====> Epoch: 913, cost 59.92 s
2023-03-02 14:01:48,809 44k INFO ====> Epoch: 914, cost 58.81 s
2023-03-02 14:02:48,117 44k INFO ====> Epoch: 915, cost 59.31 s
2023-03-02 14:03:02,644 44k INFO Train Epoch: 916 [15%]
2023-03-02 14:03:02,646 44k INFO Losses: [2.631565570831299, 2.1212425231933594, 7.3924455642700195, 15.095670700073242, 0.32193723320961], step: 60400, lr: 8.901348827888507e-05
2023-03-02 14:03:48,345 44k INFO ====> Epoch: 916, cost 60.23 s
2023-03-02 14:04:47,180 44k INFO ====> Epoch: 917, cost 58.84 s
2023-03-02 14:05:46,485 44k INFO ====> Epoch: 918, cost 59.31 s
2023-03-02 14:06:03,338 44k INFO Train Epoch: 919 [18%]
2023-03-02 14:06:03,341 44k INFO Losses: [2.3903310298919678, 2.340803623199463, 12.972189903259277, 16.387706756591797, 0.39373114705085754], step: 60600, lr: 8.898011239311388e-05
2023-03-02 14:06:47,329 44k INFO ====> Epoch: 919, cost 60.84 s
2023-03-02 14:07:46,474 44k INFO ====> Epoch: 920, cost 59.15 s
2023-03-02 14:08:45,070 44k INFO ====> Epoch: 921, cost 58.60 s
2023-03-02 14:09:03,825 44k INFO Train Epoch: 922 [21%]
2023-03-02 14:09:03,828 44k INFO Losses: [2.5298004150390625, 2.1212401390075684, 10.606185913085938, 14.38536262512207, 0.7946345210075378], step: 60800, lr: 8.894674902173544e-05
2023-03-02 14:09:09,372 44k INFO Saving model and optimizer state at iteration 922 to ./logs/44k/G_60800.pth
2023-03-02 14:09:11,704 44k INFO Saving model and optimizer state at iteration 922 to ./logs/44k/D_60800.pth
2023-03-02 14:09:14,119 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_58400.pth
2023-03-02 14:09:14,121 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_58400.pth
2023-03-02 14:09:58,709 44k INFO ====> Epoch: 922, cost 73.64 s
2023-03-02 14:10:58,473 44k INFO ====> Epoch: 923, cost 59.76 s
2023-03-02 14:11:57,862 44k INFO ====> Epoch: 924, cost 59.39 s
2023-03-02 14:12:18,242 44k INFO Train Epoch: 925 [24%]
2023-03-02 14:12:18,244 44k INFO Losses: [2.567394971847534, 2.2232565879821777, 8.643302917480469, 14.158565521240234, 0.521108090877533], step: 61000, lr: 8.891339816005741e-05
2023-03-02 14:12:58,408 44k INFO ====> Epoch: 925, cost 60.55 s
2023-03-02 14:13:57,754 44k INFO ====> Epoch: 926, cost 59.35 s
2023-03-02 14:14:56,838 44k INFO ====> Epoch: 927, cost 59.08 s
2023-03-02 14:15:18,142 44k INFO Train Epoch: 928 [27%]
2023-03-02 14:15:18,144 44k INFO Losses: [2.35607647895813, 2.4889581203460693, 10.673897743225098, 15.720369338989258, 0.7584885954856873], step: 61200, lr: 8.888005980338925e-05
2023-03-02 14:15:56,407 44k INFO ====> Epoch: 928, cost 59.57 s
2023-03-02 14:16:55,415 44k INFO ====> Epoch: 929, cost 59.01 s
2023-03-02 14:17:53,988 44k INFO ====> Epoch: 930, cost 58.57 s
2023-03-02 14:18:16,459 44k INFO Train Epoch: 931 [30%]
2023-03-02 14:18:16,461 44k INFO Losses: [2.5703470706939697, 2.0083062648773193, 11.014739036560059, 15.868526458740234, 0.6534631848335266], step: 61400, lr: 8.884673394704218e-05
2023-03-02 14:18:52,956 44k INFO ====> Epoch: 931, cost 58.97 s
2023-03-02 14:19:51,160 44k INFO ====> Epoch: 932, cost 58.20 s
2023-03-02 14:20:48,760 44k INFO ====> Epoch: 933, cost 57.60 s
2023-03-02 14:21:11,711 44k INFO Train Epoch: 934 [33%]
2023-03-02 14:21:11,713 44k INFO Losses: [2.5280964374542236, 2.3868603706359863, 10.619189262390137, 15.284461975097656, 0.3954754173755646], step: 61600, lr: 8.881342058632916e-05
2023-03-02 14:21:17,565 44k INFO Saving model and optimizer state at iteration 934 to ./logs/44k/G_61600.pth
2023-03-02 14:21:20,221 44k INFO Saving model and optimizer state at iteration 934 to ./logs/44k/D_61600.pth
2023-03-02 14:21:22,271 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_59200.pth
2023-03-02 14:21:22,301 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_59200.pth
2023-03-02 14:21:59,546 44k INFO ====> Epoch: 934, cost 70.79 s
2023-03-02 14:22:56,826 44k INFO ====> Epoch: 935, cost 57.28 s
2023-03-02 14:23:54,965 44k INFO ====> Epoch: 936, cost 58.14 s
2023-03-02 14:24:20,031 44k INFO Train Epoch: 937 [36%]
2023-03-02 14:24:20,032 44k INFO Losses: [2.6220452785491943, 2.0545854568481445, 7.832897663116455, 15.814204216003418, 0.566460907459259], step: 61800, lr: 8.87801197165649e-05
2023-03-02 14:24:53,730 44k INFO ====> Epoch: 937, cost 58.77 s
2023-03-02 14:25:52,132 44k INFO ====> Epoch: 938, cost 58.40 s
2023-03-02 14:26:52,010 44k INFO ====> Epoch: 939, cost 59.88 s
2023-03-02 14:27:18,654 44k INFO Train Epoch: 940 [39%]
2023-03-02 14:27:18,656 44k INFO Losses: [2.2328197956085205, 2.395996570587158, 13.685931205749512, 17.072954177856445, 0.8479371666908264], step: 62000, lr: 8.874683133306588e-05
2023-03-02 14:27:51,019 44k INFO ====> Epoch: 940, cost 59.01 s
2023-03-02 14:28:49,547 44k INFO ====> Epoch: 941, cost 58.53 s
2023-03-02 14:29:48,483 44k INFO ====> Epoch: 942, cost 58.94 s
2023-03-02 14:30:17,731 44k INFO Train Epoch: 943 [42%]
2023-03-02 14:30:17,732 44k INFO Losses: [2.7103254795074463, 2.3739049434661865, 9.704411506652832, 16.380373001098633, 0.7594156861305237], step: 62200, lr: 8.871355543115036e-05
2023-03-02 14:30:48,342 44k INFO ====> Epoch: 943, cost 59.86 s
2023-03-02 14:31:47,449 44k INFO ====> Epoch: 944, cost 59.11 s
2023-03-02 14:32:46,526 44k INFO ====> Epoch: 945, cost 59.08 s
2023-03-02 14:33:17,365 44k INFO Train Epoch: 946 [45%]
2023-03-02 14:33:17,366 44k INFO Losses: [2.2444112300872803, 2.5649874210357666, 12.03126049041748, 16.310317993164062, 0.4103565514087677], step: 62400, lr: 8.868029200613832e-05
2023-03-02 14:33:23,864 44k INFO Saving model and optimizer state at iteration 946 to ./logs/44k/G_62400.pth
2023-03-02 14:33:25,960 44k INFO Saving model and optimizer state at iteration 946 to ./logs/44k/D_62400.pth
2023-03-02 14:33:28,298 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_60000.pth
2023-03-02 14:33:28,300 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_60000.pth
2023-03-02 14:34:00,007 44k INFO ====> Epoch: 946, cost 73.48 s
2023-03-02 14:34:58,402 44k INFO ====> Epoch: 947, cost 58.39 s
2023-03-02 14:35:56,108 44k INFO ====> Epoch: 948, cost 57.71 s
2023-03-02 14:36:26,931 44k INFO Train Epoch: 949 [48%]
2023-03-02 14:36:26,933 44k INFO Losses: [2.650667190551758, 1.9593294858932495, 8.058405876159668, 14.466927528381348, 0.5884894728660583], step: 62600, lr: 8.864704105335148e-05
2023-03-02 14:36:53,981 44k INFO ====> Epoch: 949, cost 57.87 s
2023-03-02 14:37:50,606 44k INFO ====> Epoch: 950, cost 56.63 s
2023-03-02 14:38:47,610 44k INFO ====> Epoch: 951, cost 57.00 s
2023-03-02 14:39:20,308 44k INFO Train Epoch: 952 [52%]
2023-03-02 14:39:20,310 44k INFO Losses: [2.5170812606811523, 2.299518585205078, 11.800094604492188, 16.113052368164062, 0.4398622214794159], step: 62800, lr: 8.861380256811337e-05
2023-03-02 14:39:45,806 44k INFO ====> Epoch: 952, cost 58.20 s
2023-03-02 14:40:45,112 44k INFO ====> Epoch: 953, cost 59.31 s
2023-03-02 14:41:44,226 44k INFO ====> Epoch: 954, cost 59.11 s
2023-03-02 14:42:19,749 44k INFO Train Epoch: 955 [55%]
2023-03-02 14:42:19,751 44k INFO Losses: [2.4383201599121094, 2.3167471885681152, 11.747109413146973, 16.1230525970459, 0.5751818418502808], step: 63000, lr: 8.858057654574923e-05
2023-03-02 14:42:43,778 44k INFO ====> Epoch: 955, cost 59.55 s
2023-03-02 14:43:42,573 44k INFO ====> Epoch: 956, cost 58.79 s
2023-03-02 14:44:40,884 44k INFO ====> Epoch: 957, cost 58.31 s
2023-03-02 14:45:17,407 44k INFO Train Epoch: 958 [58%]
2023-03-02 14:45:17,408 44k INFO Losses: [2.487609386444092, 2.116445302963257, 10.849960327148438, 15.073195457458496, 0.2002437859773636], step: 63200, lr: 8.854736298158609e-05
2023-03-02 14:45:22,388 44k INFO Saving model and optimizer state at iteration 958 to ./logs/44k/G_63200.pth
2023-03-02 14:45:24,746 44k INFO Saving model and optimizer state at iteration 958 to ./logs/44k/D_63200.pth
2023-03-02 14:45:27,173 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_60800.pth
2023-03-02 14:45:27,175 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_60800.pth
2023-03-02 14:45:52,400 44k INFO ====> Epoch: 958, cost 71.52 s
2023-03-02 14:46:51,672 44k INFO ====> Epoch: 959, cost 59.27 s
2023-03-02 14:47:50,141 44k INFO ====> Epoch: 960, cost 58.47 s
2023-03-02 14:48:27,974 44k INFO Train Epoch: 961 [61%]
2023-03-02 14:48:27,976 44k INFO Losses: [2.5289230346679688, 2.1837902069091797, 10.400602340698242, 15.525186538696289, 0.740431010723114], step: 63400, lr: 8.851416187095268e-05
2023-03-02 14:48:48,564 44k INFO ====> Epoch: 961, cost 58.42 s
2023-03-02 14:49:46,210 44k INFO ====> Epoch: 962, cost 57.65 s
2023-03-02 14:50:43,305 44k INFO ====> Epoch: 963, cost 57.09 s
2023-03-02 14:51:21,630 44k INFO Train Epoch: 964 [64%]
2023-03-02 14:51:21,632 44k INFO Losses: [2.5722999572753906, 2.2676644325256348, 10.478089332580566, 15.472160339355469, 0.4920017123222351], step: 63600, lr: 8.848097320917952e-05
2023-03-02 14:51:41,298 44k INFO ====> Epoch: 964, cost 57.99 s
2023-03-02 14:52:38,511 44k INFO ====> Epoch: 965, cost 57.21 s
2023-03-02 14:53:35,799 44k INFO ====> Epoch: 966, cost 57.29 s
2023-03-02 14:54:16,372 44k INFO Train Epoch: 967 [67%]
2023-03-02 14:54:16,373 44k INFO Losses: [2.4551520347595215, 2.2464609146118164, 10.212422370910645, 15.449177742004395, 0.6676531434059143], step: 63800, lr: 8.844779699159887e-05
2023-03-02 14:54:34,330 44k INFO ====> Epoch: 967, cost 58.53 s
2023-03-02 14:55:32,730 44k INFO ====> Epoch: 968, cost 58.40 s
2023-03-02 14:56:33,468 44k INFO ====> Epoch: 969, cost 60.74 s
2023-03-02 14:57:17,231 44k INFO Train Epoch: 970 [70%]
2023-03-02 14:57:17,233 44k INFO Losses: [2.392627477645874, 2.2017018795013428, 9.025821685791016, 15.590193748474121, 0.8051909804344177], step: 64000, lr: 8.841463321354475e-05
2023-03-02 14:57:23,856 44k INFO Saving model and optimizer state at iteration 970 to ./logs/44k/G_64000.pth
2023-03-02 14:57:26,134 44k INFO Saving model and optimizer state at iteration 970 to ./logs/44k/D_64000.pth
2023-03-02 14:57:28,431 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_61600.pth
2023-03-02 14:57:28,433 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_61600.pth
2023-03-02 14:57:47,303 44k INFO ====> Epoch: 970, cost 73.84 s
2023-03-02 14:58:47,053 44k INFO ====> Epoch: 971, cost 59.75 s
2023-03-02 14:59:46,130 44k INFO ====> Epoch: 972, cost 59.08 s
2023-03-02 15:00:31,121 44k INFO Train Epoch: 973 [73%]
2023-03-02 15:00:31,123 44k INFO Losses: [2.537045955657959, 2.0911831855773926, 6.994945526123047, 14.557729721069336, 0.5771148800849915], step: 64200, lr: 8.83814818703529e-05
2023-03-02 15:00:45,603 44k INFO ====> Epoch: 973, cost 59.47 s
2023-03-02 15:01:44,706 44k INFO ====> Epoch: 974, cost 59.10 s
2023-03-02 15:02:43,352 44k INFO ====> Epoch: 975, cost 58.65 s
2023-03-02 15:03:29,108 44k INFO Train Epoch: 976 [76%]
2023-03-02 15:03:29,110 44k INFO Losses: [2.524730682373047, 2.3602445125579834, 9.984143257141113, 15.268567085266113, 0.7213561534881592], step: 64400, lr: 8.834834295736085e-05
2023-03-02 15:03:42,340 44k INFO ====> Epoch: 976, cost 58.99 s
2023-03-02 15:04:40,551 44k INFO ====> Epoch: 977, cost 58.21 s
2023-03-02 15:05:38,084 44k INFO ====> Epoch: 978, cost 57.53 s
2023-03-02 15:06:24,599 44k INFO Train Epoch: 979 [79%]
2023-03-02 15:06:24,601 44k INFO Losses: [2.471107244491577, 2.157036066055298, 8.264527320861816, 14.485032081604004, 0.4646695554256439], step: 64600, lr: 8.831521646990785e-05
2023-03-02 15:06:36,651 44k INFO ====> Epoch: 979, cost 58.57 s
2023-03-02 15:07:34,153 44k INFO ====> Epoch: 980, cost 57.50 s
2023-03-02 15:08:31,471 44k INFO ====> Epoch: 981, cost 57.32 s
2023-03-02 15:09:18,949 44k INFO Train Epoch: 982 [82%]
2023-03-02 15:09:18,951 44k INFO Losses: [2.2883527278900146, 2.445152521133423, 12.942273139953613, 16.255687713623047, 0.2441449612379074], step: 64800, lr: 8.82821024033349e-05
2023-03-02 15:09:23,625 44k INFO Saving model and optimizer state at iteration 982 to ./logs/44k/G_64800.pth
2023-03-02 15:09:25,762 44k INFO Saving model and optimizer state at iteration 982 to ./logs/44k/D_64800.pth
2023-03-02 15:09:28,477 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_62400.pth
2023-03-02 15:09:28,479 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_62400.pth
2023-03-02 15:09:39,072 44k INFO ====> Epoch: 982, cost 67.60 s
2023-03-02 15:10:39,572 44k INFO ====> Epoch: 983, cost 60.50 s
2023-03-02 15:11:38,310 44k INFO ====> Epoch: 984, cost 58.74 s
2023-03-02 15:12:31,700 44k INFO Train Epoch: 985 [85%]
2023-03-02 15:12:31,702 44k INFO Losses: [2.715963125228882, 2.1737570762634277, 10.26751708984375, 14.866273880004883, 0.9211937189102173], step: 65000, lr: 8.824900075298475e-05
2023-03-02 15:12:40,005 44k INFO ====> Epoch: 985, cost 61.70 s
2023-03-02 15:13:38,896 44k INFO ====> Epoch: 986, cost 58.89 s
2023-03-02 15:14:37,706 44k INFO ====> Epoch: 987, cost 58.81 s
2023-03-02 15:15:30,119 44k INFO Train Epoch: 988 [88%]
2023-03-02 15:15:30,121 44k INFO Losses: [2.5231106281280518, 2.080065965652466, 9.539891242980957, 15.019026756286621, 0.6310486793518066], step: 65200, lr: 8.821591151420192e-05
2023-03-02 15:15:36,624 44k INFO ====> Epoch: 988, cost 58.92 s
2023-03-02 15:16:34,971 44k INFO ====> Epoch: 989, cost 58.35 s
2023-03-02 15:17:32,752 44k INFO ====> Epoch: 990, cost 57.78 s
2023-03-02 15:18:25,050 44k INFO Train Epoch: 991 [91%]
2023-03-02 15:18:25,051 44k INFO Losses: [2.4841372966766357, 2.1849021911621094, 12.312652587890625, 15.487249374389648, 0.6238894462585449], step: 65400, lr: 8.818283468233264e-05
2023-03-02 15:18:30,573 44k INFO ====> Epoch: 991, cost 57.82 s
2023-03-02 15:19:28,743 44k INFO ====> Epoch: 992, cost 58.17 s
2023-03-02 15:20:26,322 44k INFO ====> Epoch: 993, cost 57.58 s
2023-03-02 15:21:21,594 44k INFO Train Epoch: 994 [94%]
2023-03-02 15:21:21,596 44k INFO Losses: [2.4963293075561523, 2.2944495677948, 9.781072616577148, 16.052833557128906, 0.7163136005401611], step: 65600, lr: 8.814977025272491e-05
2023-03-02 15:21:27,054 44k INFO Saving model and optimizer state at iteration 994 to ./logs/44k/G_65600.pth
2023-03-02 15:21:29,914 44k INFO Saving model and optimizer state at iteration 994 to ./logs/44k/D_65600.pth
2023-03-02 15:21:32,134 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_63200.pth
2023-03-02 15:21:32,137 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_63200.pth
2023-03-02 15:21:35,138 44k INFO ====> Epoch: 994, cost 68.82 s
2023-03-02 15:22:37,347 44k INFO ====> Epoch: 995, cost 62.21 s
2023-03-02 15:23:36,578 44k INFO ====> Epoch: 996, cost 59.23 s
2023-03-02 15:24:34,114 44k INFO Train Epoch: 997 [97%]
2023-03-02 15:24:34,116 44k INFO Losses: [2.6452720165252686, 1.9651262760162354, 6.142256736755371, 13.840875625610352, 0.5373770594596863], step: 65800, lr: 8.811671822072844e-05
2023-03-02 15:24:36,073 44k INFO ====> Epoch: 997, cost 59.49 s
2023-03-02 15:27:06,646 44k INFO {'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 68951, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': True, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'kokomi': 0}, 'model_dir': './logs/44k'}
2023-03-02 15:27:06,746 44k WARNING git hash values are different. abdb0e28(saved) != cea6df30(current)
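The WARNING compares a commit hash recorded alongside the checkpoints against the current working tree: this run was saved at abdb0e28 but resumed from cea6df30, which is harmless as long as code and config remain compatible. A sketch of such a check, assuming the saved hash lives in a githash file inside model_dir (the project's actual helper may differ):

```python
import logging
import os
import subprocess

logger = logging.getLogger("44k")

def check_git_hash(model_dir: str) -> None:
    """Warn when the current repo commit differs from the recorded one (sketch)."""
    cur_hash = subprocess.run(
        ["git", "rev-parse", "HEAD"], capture_output=True, text=True
    ).stdout.strip()[:8]
    hash_path = os.path.join(model_dir, "githash")  # assumed location
    if os.path.exists(hash_path):
        saved_hash = open(hash_path).read().strip()[:8]
        if saved_hash != cur_hash:
            logger.warning(
                "git hash values are different. %s(saved) != %s(current)",
                saved_hash, cur_hash,
            )
    else:
        open(hash_path, "w").write(cur_hash)
```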
2023-03-02 15:27:17,794 44k INFO Loaded checkpoint './logs/44k/G_65600.pth' (iteration 994)
2023-03-02 15:27:19,276 44k INFO Loaded checkpoint './logs/44k/D_65600.pth' (iteration 994)
2023-03-02 15:28:39,210 44k INFO Train Epoch: 994 [94%]
2023-03-02 15:28:39,211 44k INFO Losses: [2.448915958404541, 2.34163761138916, 14.118210792541504, 16.300416946411133, 0.7317944169044495], step: 65600, lr: 8.813875153144332e-05
2023-03-02 15:28:45,484 44k INFO Saving model and optimizer state at iteration 994 to ./logs/44k/G_65600.pth
2023-03-02 15:28:48,029 44k INFO Saving model and optimizer state at iteration 994 to ./logs/44k/D_65600.pth
2023-03-02 15:28:54,905 44k INFO ====> Epoch: 994, cost 108.26 s
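Resuming replays the checkpointed epoch: iteration 994 is reloaded, the step-65600 batch is logged again, and G_65600/D_65600 are immediately overwritten. Note that the learning rate at step 65600 is one notch lower than before the restart (8.81388e-05 here versus 8.81498e-05 at 15:21): each restart advances the exponential decay by one extra step, so over many restarts the lr drifts slightly below what the epoch count alone would predict. The relation is exact in double precision:

```python
lr_before = 8.814977025272491e-05  # step 65600, logged at 15:21 before the restart
lr_after  = 8.813875153144332e-05  # step 65600, logged at 15:28 after resuming
lr_decay  = 0.999875               # from the config dump above

# One extra ExponentialLR-style step reproduces the post-restart value.
assert abs(lr_before * lr_decay - lr_after) < 1e-18
```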
2023-03-02 15:29:55,623 44k INFO ====> Epoch: 995, cost 60.72 s
2023-03-02 15:30:53,460 44k INFO ====> Epoch: 996, cost 57.84 s
2023-03-02 15:31:49,704 44k INFO Train Epoch: 997 [97%]
2023-03-02 15:31:49,705 44k INFO Losses: [2.5102953910827637, 2.030790328979492, 6.422291278839111, 13.803255081176758, 0.7264648675918579], step: 65800, lr: 8.810570363095084e-05
2023-03-02 15:31:51,583 44k INFO ====> Epoch: 997, cost 58.12 s
2023-03-02 15:32:48,812 44k INFO ====> Epoch: 998, cost 57.23 s
2023-03-02 15:33:46,008 44k INFO ====> Epoch: 999, cost 57.20 s
2023-03-02 15:34:43,412 44k INFO ====> Epoch: 1000, cost 57.40 s
2023-03-02 15:34:49,479 44k INFO Train Epoch: 1001 [0%]
2023-03-02 15:34:49,481 44k INFO Losses: [2.4662046432495117, 2.113739252090454, 12.199930191040039, 14.911579132080078, 0.7170138359069824], step: 66000, lr: 8.806165903835676e-05
2023-03-02 15:35:42,405 44k INFO ====> Epoch: 1001, cost 58.99 s
2023-03-02 15:36:40,527 44k INFO ====> Epoch: 1002, cost 58.12 s
2023-03-02 15:37:39,261 44k INFO ====> Epoch: 1003, cost 58.73 s
2023-03-02 15:37:47,779 44k INFO Train Epoch: 1004 [3%]
2023-03-02 15:37:47,780 44k INFO Losses: [2.436959981918335, 2.242112159729004, 12.285941123962402, 16.07857894897461, 0.8090506196022034], step: 66200, lr: 8.802864004393564e-05
2023-03-02 15:38:40,403 44k INFO ====> Epoch: 1004, cost 61.14 s
2023-03-02 15:39:40,133 44k INFO ====> Epoch: 1005, cost 59.73 s
2023-03-02 15:40:40,041 44k INFO ====> Epoch: 1006, cost 59.91 s
2023-03-02 15:40:50,602 44k INFO Train Epoch: 1007 [6%]
2023-03-02 15:40:50,604 44k INFO Losses: [2.794735908508301, 2.0131449699401855, 10.94609546661377, 14.131229400634766, 0.6425115466117859], step: 66400, lr: 8.799563343008971e-05
2023-03-02 15:40:55,493 44k INFO Saving model and optimizer state at iteration 1007 to ./logs/44k/G_66400.pth
2023-03-02 15:40:57,933 44k INFO Saving model and optimizer state at iteration 1007 to ./logs/44k/D_66400.pth
2023-03-02 15:41:00,572 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_64000.pth
2023-03-02 15:41:00,574 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_64000.pth
2023-03-02 16:10:40,769 44k INFO {'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 68951, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': True, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'kokomi': 0}, 'model_dir': './logs/44k'}
2023-03-02 16:10:40,808 44k WARNING git hash values are different. abdb0e28(saved) != cea6df30(current)
2023-03-02 16:10:49,129 44k INFO Loaded checkpoint './logs/44k/G_66400.pth' (iteration 1007)
2023-03-02 16:10:52,975 44k INFO Loaded checkpoint './logs/44k/D_66400.pth' (iteration 1007)
2023-03-02 16:11:10,373 44k INFO Train Epoch: 1007 [6%]
2023-03-02 16:11:10,373 44k INFO Losses: [2.4120402336120605, 2.1556475162506104, 10.852917671203613, 14.825243949890137, 0.5081835985183716], step: 66400, lr: 8.798463397591094e-05
2023-03-02 16:11:17,017 44k INFO Saving model and optimizer state at iteration 1007 to ./logs/44k/G_66400.pth
2023-03-02 16:11:19,609 44k INFO Saving model and optimizer state at iteration 1007 to ./logs/44k/D_66400.pth
2023-03-02 16:12:27,400 44k INFO ====> Epoch: 1007, cost 106.63 s
2023-03-02 16:13:24,789 44k INFO ====> Epoch: 1008, cost 57.39 s
2023-03-02 16:14:21,972 44k INFO ====> Epoch: 1009, cost 57.18 s
2023-03-02 16:14:32,081 44k INFO Train Epoch: 1010 [9%]
2023-03-02 16:14:32,082 44k INFO Losses: [2.4927453994750977, 2.393113613128662, 9.480721473693848, 14.423904418945312, 0.4676958918571472], step: 66600, lr: 8.795164386227784e-05
2023-03-02 16:15:20,025 44k INFO ====> Epoch: 1010, cost 58.05 s
2023-03-02 16:16:17,151 44k INFO ====> Epoch: 1011, cost 57.13 s
2023-03-02 16:17:14,582 44k INFO ====> Epoch: 1012, cost 57.43 s
2023-03-02 16:17:26,493 44k INFO Train Epoch: 1013 [12%]
2023-03-02 16:17:26,495 44k INFO Losses: [2.4772722721099854, 2.4358394145965576, 11.107568740844727, 16.18451499938965, 0.3692566156387329], step: 66800, lr: 8.7918666118391e-05
2023-03-02 16:18:13,572 44k INFO ====> Epoch: 1013, cost 58.99 s
2023-03-02 16:19:11,590 44k INFO ====> Epoch: 1014, cost 58.02 s
2023-03-02 16:20:09,972 44k INFO ====> Epoch: 1015, cost 58.38 s
2023-03-02 16:20:24,555 44k INFO Train Epoch: 1016 [15%]
2023-03-02 16:20:24,557 44k INFO Losses: [2.5483031272888184, 2.295081615447998, 11.195950508117676, 15.491548538208008, 0.9003146290779114], step: 67000, lr: 8.788570073961236e-05
2023-03-02 16:21:06,571 44k INFO ====> Epoch: 1016, cost 56.60 s
2023-03-03 03:04:28,472 44k INFO {'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 3601, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': True, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'kokomi': 0}, 'model_dir': './logs/44k'}
2023-03-03 03:04:29,009 44k WARNING git hash values are different. abdb0e28(saved) != cea6df30(current)
2023-03-03 03:05:23,332 44k INFO {'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 693845, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': True, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'kokomi': 0}, 'model_dir': './logs/44k'}
2023-03-03 03:05:23,356 44k WARNING git hash values are different. abdb0e28(saved) != cea6df30(current)
2023-03-03 03:05:32,044 44k INFO Loaded checkpoint './logs/44k/G_66400.pth' (iteration 1007)
2023-03-03 03:05:37,295 44k INFO Loaded checkpoint './logs/44k/D_66400.pth' (iteration 1007)
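Because checkpoints are only written every eval_interval = 800 steps, stopping without a fresh save discards everything since the last one: the previous session had reached step 67000 (epoch 1016) before ending, but this one resumes from G_66400 at iteration 1007, so about 600 steps, roughly nine epochs, are retrained. A quick bound on that exposure, using the ~66 steps/epoch estimated earlier:

```python
eval_interval = 800                        # steps between saves (config above)
steps_per_epoch = 66                       # estimated from the step counters
print(eval_interval / steps_per_epoch)     # ≈ 12 epochs at risk per restart
print((67000 - 66400) / steps_per_epoch)   # ≈ 9 epochs actually lost here
```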
2023-03-03 03:05:59,199 44k INFO Train Epoch: 1007 [6%]
2023-03-03 03:05:59,200 44k INFO Losses: [2.866969585418701, 2.0894126892089844, 14.386421203613281, 14.47426700592041, 0.14635609090328217], step: 66400, lr: 8.797363589666394e-05
2023-03-03 03:06:06,847 44k INFO Saving model and optimizer state at iteration 1007 to ./logs/44k/G_66400.pth
2023-03-03 03:06:09,522 44k INFO Saving model and optimizer state at iteration 1007 to ./logs/44k/D_66400.pth
2023-03-03 03:07:19,977 44k INFO ====> Epoch: 1007, cost 116.65 s
2023-03-03 03:08:18,008 44k INFO ====> Epoch: 1008, cost 58.03 s
2023-03-03 03:09:16,805 44k INFO ====> Epoch: 1009, cost 58.80 s
2023-03-03 03:09:27,152 44k INFO Train Epoch: 1010 [9%]
2023-03-03 03:09:27,153 44k INFO Losses: [2.5613338947296143, 2.323840379714966, 8.027608871459961, 14.717065811157227, 0.929591953754425], step: 66600, lr: 8.794064990679505e-05
2023-03-03 03:10:15,693 44k INFO ====> Epoch: 1010, cost 58.89 s
2023-03-03 03:11:13,271 44k INFO ====> Epoch: 1011, cost 57.58 s
2023-03-03 03:12:11,020 44k INFO ====> Epoch: 1012, cost 57.75 s
2023-03-03 03:12:22,756 44k INFO Train Epoch: 1013 [12%]
2023-03-03 03:12:22,758 44k INFO Losses: [2.3329386711120605, 2.5282557010650635, 12.153274536132812, 16.356679916381836, 0.47838395833969116], step: 66800, lr: 8.79076762851262e-05
2023-03-03 03:13:10,503 44k INFO ====> Epoch: 1013, cost 59.48 s
2023-03-03 03:14:08,111 44k INFO ====> Epoch: 1014, cost 57.61 s
2023-03-03 03:27:26,611 44k INFO {'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 693845, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': True, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'kokomi': 0}, 'model_dir': './logs/44k'}
2023-03-03 03:27:26,660 44k WARNING git hash values are different. abdb0e28(saved) != cea6df30(current)
2023-03-03 03:27:34,379 44k INFO Loaded checkpoint './logs/44k/G_66400.pth' (iteration 1007)
2023-03-03 03:27:37,719 44k INFO Loaded checkpoint './logs/44k/D_66400.pth' (iteration 1007)
2023-03-03 03:27:57,496 44k INFO Train Epoch: 1007 [6%]
2023-03-03 03:27:57,497 44k INFO Losses: [2.8741445541381836, 2.079120397567749, 14.373940467834473, 14.476299285888672, 0.1463383585214615], step: 66400, lr: 8.796263919217686e-05
2023-03-03 03:28:03,897 44k INFO Saving model and optimizer state at iteration 1007 to ./logs/44k/G_66400.pth
2023-03-03 03:28:06,286 44k INFO Saving model and optimizer state at iteration 1007 to ./logs/44k/D_66400.pth
2023-03-03 03:29:15,218 44k INFO ====> Epoch: 1007, cost 108.61 s
2023-03-03 03:30:14,661 44k INFO ====> Epoch: 1008, cost 59.44 s
2023-03-03 03:31:12,716 44k INFO ====> Epoch: 1009, cost 58.06 s
2023-03-03 03:31:22,646 44k INFO Train Epoch: 1010 [9%]
2023-03-03 03:31:22,649 44k INFO Losses: [2.729382276535034, 2.048532009124756, 7.779181480407715, 14.579824447631836, 0.9394656419754028], step: 66600, lr: 8.79296573255567e-05
2023-03-03 03:32:11,651 44k INFO ====> Epoch: 1010, cost 58.93 s
2023-03-03 03:33:09,917 44k INFO ====> Epoch: 1011, cost 58.27 s
2023-03-03 03:34:09,551 44k INFO ====> Epoch: 1012, cost 59.63 s
2023-03-03 03:34:22,509 44k INFO Train Epoch: 1013 [12%]
2023-03-03 03:34:22,510 44k INFO Losses: [2.38413667678833, 2.678081750869751, 12.239191055297852, 16.162275314331055, 0.48076143860816956], step: 66800, lr: 8.789668782559057e-05
2023-03-03 03:35:09,454 44k INFO ====> Epoch: 1013, cost 59.90 s
2023-03-03 03:36:08,051 44k INFO ====> Epoch: 1014, cost 58.60 s
2023-03-03 03:37:08,161 44k INFO ====> Epoch: 1015, cost 60.11 s
2023-03-03 03:37:22,631 44k INFO Train Epoch: 1016 [15%]
2023-03-03 03:37:22,633 44k INFO Losses: [2.3676047325134277, 2.3309578895568848, 8.705889701843262, 16.185840606689453, 0.7006238698959351], step: 67000, lr: 8.786373068764153e-05
2023-03-03 03:38:08,673 44k INFO ====> Epoch: 1016, cost 60.51 s
2023-03-03 03:39:07,175 44k INFO ====> Epoch: 1017, cost 58.50 s
2023-03-03 03:40:06,995 44k INFO ====> Epoch: 1018, cost 59.82 s
2023-03-03 03:40:22,897 44k INFO Train Epoch: 1019 [18%]
2023-03-03 03:40:22,898 44k INFO Losses: [2.638935089111328, 2.016742467880249, 7.002728462219238, 13.613525390625, 0.9098711609840393], step: 67200, lr: 8.783078590707442e-05
2023-03-03 03:40:27,840 44k INFO Saving model and optimizer state at iteration 1019 to ./logs/44k/G_67200.pth
2023-03-03 03:40:30,244 44k INFO Saving model and optimizer state at iteration 1019 to ./logs/44k/D_67200.pth
2023-03-03 03:40:32,449 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_64800.pth
2023-03-03 03:40:32,451 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_64800.pth
2023-03-03 03:41:19,205 44k INFO ====> Epoch: 1019, cost 72.21 s
2023-03-03 03:42:18,155 44k INFO ====> Epoch: 1020, cost 58.95 s
2023-03-03 03:43:16,538 44k INFO ====> Epoch: 1021, cost 58.38 s
2023-03-03 03:43:35,760 44k INFO Train Epoch: 1022 [21%]
2023-03-03 03:43:35,762 44k INFO Losses: [2.496654987335205, 1.9946445226669312, 14.087538719177246, 16.56441879272461, 0.6158469915390015], step: 67400, lr: 8.779785347925579e-05
2023-03-03 03:44:17,394 44k INFO ====> Epoch: 1022, cost 60.86 s
2023-03-03 03:45:16,061 44k INFO ====> Epoch: 1023, cost 58.67 s
2023-03-03 03:46:14,831 44k INFO ====> Epoch: 1024, cost 58.77 s
2023-03-03 03:46:34,285 44k INFO Train Epoch: 1025 [24%]
2023-03-03 03:46:34,286 44k INFO Losses: [2.55062198638916, 1.9692344665527344, 14.594532012939453, 14.947566032409668, 0.46525290608406067], step: 67600, lr: 8.776493339955396e-05
2023-03-03 03:47:15,646 44k INFO ====> Epoch: 1025, cost 60.81 s
2023-03-03 03:48:14,420 44k INFO ====> Epoch: 1026, cost 58.77 s
2023-03-03 03:49:13,425 44k INFO ====> Epoch: 1027, cost 59.01 s
2023-03-03 03:49:34,127 44k INFO Train Epoch: 1028 [27%]
2023-03-03 03:49:34,129 44k INFO Losses: [2.5448036193847656, 2.060014247894287, 7.589969635009766, 14.053296089172363, 0.5878815650939941], step: 67800, lr: 8.773202566333896e-05
2023-03-03 03:50:13,683 44k INFO ====> Epoch: 1028, cost 60.26 s
2023-03-03 03:51:14,124 44k INFO ====> Epoch: 1029, cost 60.44 s
2023-03-03 03:52:12,906 44k INFO ====> Epoch: 1030, cost 58.78 s
2023-03-03 03:52:34,595 44k INFO Train Epoch: 1031 [30%]
2023-03-03 03:52:34,597 44k INFO Losses: [2.6449222564697266, 1.9784605503082275, 7.6652116775512695, 12.840441703796387, 0.6739291548728943], step: 68000, lr: 8.769913026598255e-05
2023-03-03 03:52:39,923 44k INFO Saving model and optimizer state at iteration 1031 to ./logs/44k/G_68000.pth
2023-03-03 03:52:43,308 44k INFO Saving model and optimizer state at iteration 1031 to ./logs/44k/D_68000.pth
2023-03-03 03:52:45,445 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_65600.pth
2023-03-03 03:52:45,448 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_65600.pth
2023-03-03 03:53:25,383 44k INFO ====> Epoch: 1031, cost 72.48 s
2023-03-03 03:54:27,467 44k INFO ====> Epoch: 1032, cost 62.08 s
2023-03-03 03:55:26,272 44k INFO ====> Epoch: 1033, cost 58.81 s
2023-03-03 03:55:49,834 44k INFO Train Epoch: 1034 [33%]
2023-03-03 03:55:49,836 44k INFO Losses: [2.655405282974243, 2.2234742641448975, 9.586669921875, 15.514881134033203, 1.056593418121338], step: 68200, lr: 8.766624720285824e-05
2023-03-03 03:56:25,850 44k INFO ====> Epoch: 1034, cost 59.58 s
2023-03-03 03:57:25,925 44k INFO ====> Epoch: 1035, cost 60.07 s
2023-03-03 03:58:25,239 44k INFO ====> Epoch: 1036, cost 59.31 s
2023-03-03 03:58:50,319 44k INFO Train Epoch: 1037 [36%]
2023-03-03 03:58:50,321 44k INFO Losses: [2.71100115776062, 2.3991072177886963, 9.954297065734863, 15.491327285766602, 0.2549438178539276], step: 68400, lr: 8.763337646934128e-05
2023-03-03 04:11:44,544 44k INFO {'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 693845, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': True, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'kokomi': 0}, 'model_dir': './logs/44k'}
2023-03-03 04:11:44,591 44k WARNING git hash values are different. abdb0e28(saved) != cea6df30(current)
2023-03-03 04:11:54,748 44k INFO Loaded checkpoint './logs/44k/G_68000.pth' (iteration 1031)
2023-03-03 04:11:57,956 44k INFO Loaded checkpoint './logs/44k/D_68000.pth' (iteration 1031)
2023-03-03 04:12:30,980 44k INFO Train Epoch: 1031 [30%]
2023-03-03 04:12:30,981 44k INFO Losses: [2.4474520683288574, 2.1705055236816406, 11.521065711975098, 15.68740463256836, 0.9004988074302673], step: 68000, lr: 8.76881678746993e-05
2023-03-03 04:12:38,166 44k INFO Saving model and optimizer state at iteration 1031 to ./logs/44k/G_68000.pth
2023-03-03 04:12:40,752 44k INFO Saving model and optimizer state at iteration 1031 to ./logs/44k/D_68000.pth
2023-03-03 04:13:35,696 44k INFO ====> Epoch: 1031, cost 111.16 s
2023-03-03 04:14:34,123 44k INFO ====> Epoch: 1032, cost 58.43 s
2023-03-03 04:15:32,549 44k INFO ====> Epoch: 1033, cost 58.43 s
2023-03-03 04:15:56,761 44k INFO Train Epoch: 1034 [33%]
2023-03-03 04:15:56,763 44k INFO Losses: [2.5225460529327393, 2.342514991760254, 9.48428726196289, 14.663008689880371, 0.8923035860061646], step: 68200, lr: 8.765528892195788e-05
2023-03-03 04:16:32,921 44k INFO ====> Epoch: 1034, cost 60.37 s
2023-03-03 04:17:33,023 44k INFO ====> Epoch: 1035, cost 60.10 s
2023-03-03 04:18:33,427 44k INFO ====> Epoch: 1036, cost 60.40 s
2023-03-03 04:18:59,796 44k INFO Train Epoch: 1037 [36%]
2023-03-03 04:18:59,798 44k INFO Losses: [2.4975247383117676, 2.3406782150268555, 12.197464942932129, 15.951894760131836, 0.7769613862037659], step: 68400, lr: 8.76224222972826e-05
2023-03-03 04:19:34,362 44k INFO ====> Epoch: 1037, cost 60.94 s
2023-03-03 04:20:34,448 44k INFO ====> Epoch: 1038, cost 60.09 s
2023-03-03 04:21:32,979 44k INFO ====> Epoch: 1039, cost 58.53 s
2023-03-03 04:22:00,139 44k INFO Train Epoch: 1040 [39%]
2023-03-03 04:22:00,140 44k INFO Losses: [2.4641809463500977, 2.29160737991333, 10.57172679901123, 16.1800594329834, 1.0594960451126099], step: 68600, lr: 8.758956799605101e-05
2023-03-03 04:22:33,325 44k INFO ====> Epoch: 1040, cost 60.35 s
2023-03-03 04:23:33,172 44k INFO ====> Epoch: 1041, cost 59.85 s
2023-03-03 04:24:33,385 44k INFO ====> Epoch: 1042, cost 60.21 s
2023-03-03 04:25:02,884 44k INFO Train Epoch: 1043 [42%]
2023-03-03 04:25:02,886 44k INFO Losses: [2.5838756561279297, 2.33103084564209, 9.360503196716309, 15.877737045288086, 0.7280654311180115], step: 68800, lr: 8.75567260136424e-05
2023-03-03 04:25:09,627 44k INFO Saving model and optimizer state at iteration 1043 to ./logs/44k/G_68800.pth
2023-03-03 04:25:11,978 44k INFO Saving model and optimizer state at iteration 1043 to ./logs/44k/D_68800.pth
2023-03-03 04:25:14,136 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_66400.pth
2023-03-03 04:25:14,139 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_66400.pth
2023-03-03 04:25:47,391 44k INFO ====> Epoch: 1043, cost 74.01 s
2023-03-03 04:26:47,359 44k INFO ====> Epoch: 1044, cost 59.97 s
2023-03-03 04:27:49,355 44k INFO ====> Epoch: 1045, cost 62.00 s
2023-03-03 04:28:20,750 44k INFO Train Epoch: 1046 [45%]
2023-03-03 04:28:20,752 44k INFO Losses: [2.358421802520752, 2.4498133659362793, 9.586126327514648, 16.83898162841797, 0.12598958611488342], step: 69000, lr: 8.75238963454378e-05
2023-03-03 04:28:50,157 44k INFO ====> Epoch: 1046, cost 60.80 s
2023-03-03 04:29:51,483 44k INFO ====> Epoch: 1047, cost 61.33 s
2023-03-03 04:30:54,563 44k INFO ====> Epoch: 1048, cost 63.08 s
2023-03-03 04:31:28,055 44k INFO Train Epoch: 1049 [48%]
2023-03-03 04:31:28,056 44k INFO Losses: [2.5131731033325195, 2.1427321434020996, 8.546964645385742, 14.284497261047363, 0.7184283137321472], step: 69200, lr: 8.749107898681995e-05
2023-03-03 04:31:55,790 44k INFO ====> Epoch: 1049, cost 61.23 s
2023-03-03 04:32:55,865 44k INFO ====> Epoch: 1050, cost 60.07 s
2023-03-03 04:33:56,982 44k INFO ====> Epoch: 1051, cost 61.12 s
2023-03-03 04:34:31,738 44k INFO Train Epoch: 1052 [52%]
2023-03-03 04:34:31,743 44k INFO Losses: [2.2953457832336426, 2.4278080463409424, 14.276473999023438, 15.509469032287598, 0.8462198376655579], step: 69400, lr: 8.745827393317333e-05
2023-03-03 04:34:57,708 44k INFO ====> Epoch: 1052, cost 60.73 s
2023-03-03 04:35:57,683 44k INFO ====> Epoch: 1053, cost 59.97 s
2023-03-03 04:36:56,476 44k INFO ====> Epoch: 1054, cost 58.79 s
2023-03-03 04:37:32,116 44k INFO Train Epoch: 1055 [55%]
2023-03-03 04:37:32,118 44k INFO Losses: [2.5096023082733154, 2.302539825439453, 10.961396217346191, 14.723677635192871, 0.7823094725608826], step: 69600, lr: 8.742548117988416e-05
2023-03-03 04:37:36,872 44k INFO Saving model and optimizer state at iteration 1055 to ./logs/44k/G_69600.pth
2023-03-03 04:37:40,252 44k INFO Saving model and optimizer state at iteration 1055 to ./logs/44k/D_69600.pth
2023-03-03 04:37:42,501 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_67200.pth
2023-03-03 04:37:42,504 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_67200.pth
2023-03-03 04:38:09,611 44k INFO ====> Epoch: 1055, cost 73.13 s
2023-03-03 04:39:07,578 44k INFO ====> Epoch: 1056, cost 57.97 s
2023-03-03 04:40:04,651 44k INFO ====> Epoch: 1057, cost 57.07 s
2023-03-03 04:40:41,056 44k INFO Train Epoch: 1058 [58%]
2023-03-03 04:40:41,058 44k INFO Losses: [2.66143798828125, 2.223412036895752, 8.370268821716309, 13.564188003540039, 0.4579058587551117], step: 69800, lr: 8.739270072234037e-05
2023-03-03 04:41:03,520 44k INFO ====> Epoch: 1058, cost 58.87 s
2023-03-03 04:42:01,117 44k INFO ====> Epoch: 1059, cost 57.60 s
2023-03-03 04:42:59,064 44k INFO ====> Epoch: 1060, cost 57.95 s
2023-03-03 04:43:37,045 44k INFO Train Epoch: 1061 [61%]
2023-03-03 04:43:37,047 44k INFO Losses: [2.6132516860961914, 1.9401332139968872, 7.074481964111328, 14.467610359191895, 0.9787586331367493], step: 70000, lr: 8.735993255593163e-05
2023-03-03 04:43:58,540 44k INFO ====> Epoch: 1061, cost 59.48 s
2023-03-03 04:44:57,023 44k INFO ====> Epoch: 1062, cost 58.48 s
2023-03-03 04:45:56,350 44k INFO ====> Epoch: 1063, cost 59.33 s
2023-03-03 04:46:36,323 44k INFO Train Epoch: 1064 [64%]
2023-03-03 04:46:36,326 44k INFO Losses: [2.5439629554748535, 2.131565570831299, 10.220173835754395, 14.836071014404297, 0.5917439460754395], step: 70200, lr: 8.732717667604937e-05
2023-03-03 04:46:56,041 44k INFO ====> Epoch: 1064, cost 59.69 s
2023-03-03 04:47:54,520 44k INFO ====> Epoch: 1065, cost 58.48 s
2023-03-03 04:48:52,394 44k INFO ====> Epoch: 1066, cost 57.87 s
2023-03-03 04:49:33,590 44k INFO Train Epoch: 1067 [67%]
2023-03-03 04:49:33,591 44k INFO Losses: [2.3708932399749756, 2.2500646114349365, 10.247241020202637, 15.295866012573242, 0.7495132088661194], step: 70400, lr: 8.729443307808668e-05
2023-03-03 04:49:38,459 44k INFO Saving model and optimizer state at iteration 1067 to ./logs/44k/G_70400.pth
2023-03-03 04:49:41,004 44k INFO Saving model and optimizer state at iteration 1067 to ./logs/44k/D_70400.pth
2023-03-03 04:49:43,561 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_68000.pth
2023-03-03 04:49:43,563 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_68000.pth
2023-03-03 04:50:03,694 44k INFO ====> Epoch: 1067, cost 71.30 s
2023-03-03 04:51:02,590 44k INFO ====> Epoch: 1068, cost 58.90 s
2023-03-03 04:52:00,315 44k INFO ====> Epoch: 1069, cost 57.73 s
2023-03-03 04:52:42,761 44k INFO Train Epoch: 1070 [70%]
2023-03-03 04:52:42,762 44k INFO Losses: [2.596491575241089, 2.035263776779175, 8.96207332611084, 14.518881797790527, 0.8340903520584106], step: 70600, lr: 8.726170175743843e-05
2023-03-03 04:52:58,697 44k INFO ====> Epoch: 1070, cost 58.38 s
2023-03-03 04:53:57,683 44k INFO ====> Epoch: 1071, cost 58.99 s
2023-03-03 04:54:59,341 44k INFO ====> Epoch: 1072, cost 61.66 s
2023-03-03 04:55:44,637 44k INFO Train Epoch: 1073 [73%]
2023-03-03 04:55:44,638 44k INFO Losses: [2.451061964035034, 2.193272113800049, 12.148381233215332, 15.439547538757324, 0.48759016394615173], step: 70800, lr: 8.722898270950122e-05
2023-03-03 04:55:59,205 44k INFO ====> Epoch: 1073, cost 59.86 s
2023-03-06 15:40:14,532 44k INFO {'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 68795210, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': True, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 10}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'kokomi': 0}, 'model_dir': './logs/44k'}
2023-03-06 15:40:15,168 44k WARNING git hash values are different. abdb0e28(saved) != 4ce70dd5(current)
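After a three-day gap the run returns on commit 4ce70dd5 with two config changes relative to the 2023-03-03 sessions: a new seed (693845 → 68795210) and keep_ckpts raised from 3 to 10, which is why the G_71200/D_71200 save below triggers no '.. Free up space' deletions (only four checkpoint pairs are on disk at that point). A small helper for diffing two of these dumped dicts (illustrative, not part of the project):

```python
def diff_config(old: dict, new: dict, prefix: str = "") -> None:
    """Print keys whose values differ between two nested config dicts."""
    for key in sorted(old.keys() | new.keys()):
        a, b = old.get(key), new.get(key)
        if isinstance(a, dict) and isinstance(b, dict):
            diff_config(a, b, f"{prefix}{key}.")
        elif a != b:
            print(f"{prefix}{key}: {a} -> {b}")

# Excerpts of the 'train' sections from the two dumps in this log:
diff_config({"seed": 693845, "keep_ckpts": 3},
            {"seed": 68795210, "keep_ckpts": 10})
# seed: 693845 -> 68795210
# keep_ckpts: 3 -> 10
```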
2023-03-06 15:40:36,112 44k INFO Loaded checkpoint './logs/44k/G_70400.pth' (iteration 1067)
2023-03-06 15:40:41,885 44k INFO Loaded checkpoint './logs/44k/D_70400.pth' (iteration 1067)
2023-03-06 15:41:47,981 44k INFO Train Epoch: 1067 [67%]
2023-03-06 15:41:47,983 44k INFO Losses: [2.688399314880371, 2.356191873550415, 7.749070644378662, 14.233426094055176, 0.565635085105896], step: 70400, lr: 8.728352127395191e-05
2023-03-06 15:41:55,217 44k INFO Saving model and optimizer state at iteration 1067 to ./logs/44k/G_70400.pth
2023-03-06 15:41:58,120 44k INFO Saving model and optimizer state at iteration 1067 to ./logs/44k/D_70400.pth
2023-03-06 15:42:26,303 44k INFO ====> Epoch: 1067, cost 131.77 s
2023-03-06 15:43:23,659 44k INFO ====> Epoch: 1068, cost 57.36 s
2023-03-06 15:44:21,689 44k INFO ====> Epoch: 1069, cost 58.03 s
2023-03-06 15:45:05,408 44k INFO Train Epoch: 1070 [70%]
2023-03-06 15:45:05,411 44k INFO Losses: [2.3931021690368652, 2.2531657218933105, 8.366067886352539, 14.13066577911377, 0.46042609214782715], step: 70600, lr: 8.725079404471875e-05
2023-03-06 15:45:22,517 44k INFO ====> Epoch: 1070, cost 60.83 s
2023-03-06 15:46:21,561 44k INFO ====> Epoch: 1071, cost 59.04 s
2023-03-06 15:47:20,436 44k INFO ====> Epoch: 1072, cost 58.87 s
2023-03-06 15:48:05,278 44k INFO Train Epoch: 1073 [73%]
2023-03-06 15:48:05,280 44k INFO Losses: [2.5676321983337402, 2.1011645793914795, 7.767877101898193, 15.013181686401367, 0.8011049032211304], step: 70800, lr: 8.721807908666253e-05
2023-03-06 15:48:20,268 44k INFO ====> Epoch: 1073, cost 59.83 s
2023-03-06 15:49:19,179 44k INFO ====> Epoch: 1074, cost 58.91 s
2023-03-06 15:50:18,644 44k INFO ====> Epoch: 1075, cost 59.46 s
2023-03-06 15:51:05,924 44k INFO Train Epoch: 1076 [76%]
2023-03-06 15:51:05,925 44k INFO Losses: [2.688868522644043, 2.1977174282073975, 7.8933186531066895, 13.803485870361328, 0.46525517106056213], step: 71000, lr: 8.718537639518214e-05
2023-03-06 15:51:19,719 44k INFO ====> Epoch: 1076, cost 61.07 s
2023-03-06 15:52:18,968 44k INFO ====> Epoch: 1077, cost 59.25 s
2023-03-06 15:53:19,878 44k INFO ====> Epoch: 1078, cost 60.91 s
2023-03-06 15:54:07,925 44k INFO Train Epoch: 1079 [79%]
2023-03-06 15:54:07,927 44k INFO Losses: [2.522763252258301, 2.223994255065918, 9.03896713256836, 14.275020599365234, 0.9417188763618469], step: 71200, lr: 8.715268596567818e-05
2023-03-06 15:54:12,673 44k INFO Saving model and optimizer state at iteration 1079 to ./logs/44k/G_71200.pth
2023-03-06 15:54:14,919 44k INFO Saving model and optimizer state at iteration 1079 to ./logs/44k/D_71200.pth
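[Editor's note] Both checkpoint pairs in this section land on multiples of 800 global steps (70400, 71200), matching eval_interval: 800 in the config. The trigger is presumably a modulo test on the global step, as sketched here:

    eval_interval = 800  # from the config dump above

    def should_checkpoint(global_step):
        return global_step % eval_interval == 0

    assert should_checkpoint(70400) and should_checkpoint(71200)
    assert not should_checkpoint(70600)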
2023-03-06 16:09:35,536 44k INFO {'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 68795210, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': True, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 10}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'kokomi': 0}, 'model_dir': './logs/44k'}
2023-03-06 16:09:35,578 44k WARNING git hash values are different. abdb0e28(saved) != 4ce70dd5(current)
2023-03-06 16:09:46,944 44k INFO Loaded checkpoint './logs/44k/G_71200.pth' (iteration 1079)
2023-03-06 16:09:50,313 44k INFO Loaded checkpoint './logs/44k/D_71200.pth' (iteration 1079)
2023-03-06 16:11:03,682 44k INFO Train Epoch: 1079 [79%]
2023-03-06 16:11:03,683 44k INFO Losses: [2.607119560241699, 1.9361743927001953, 7.42510461807251, 13.841412544250488, 0.48254138231277466], step: 71200, lr: 8.714179187993246e-05
2023-03-06 16:11:10,480 44k INFO Saving model and optimizer state at iteration 1079 to ./logs/44k/G_71200.pth
2023-03-06 16:11:13,310 44k INFO Saving model and optimizer state at iteration 1079 to ./logs/44k/D_71200.pth
2023-03-06 16:11:33,223 44k INFO ====> Epoch: 1079, cost 117.69 s
2023-03-06 16:12:32,130 44k INFO ====> Epoch: 1080, cost 58.91 s
2023-03-06 16:13:30,876 44k INFO ====> Epoch: 1081, cost 58.75 s
2023-03-06 16:14:19,539 44k INFO Train Epoch: 1082 [82%]
2023-03-06 16:14:19,541 44k INFO Losses: [2.3125030994415283, 2.2661819458007812, 13.210684776306152, 15.546886444091797, 0.4996629059314728], step: 71400, lr: 8.710911779257877e-05
2023-03-06 16:14:30,089 44k INFO ====> Epoch: 1082, cost 59.21 s
2023-03-06 16:15:29,398 44k INFO ====> Epoch: 1083, cost 59.31 s
2023-03-06 16:16:27,899 44k INFO ====> Epoch: 1084, cost 58.50 s
2023-03-06 16:17:17,899 44k INFO Train Epoch: 1085 [85%]
2023-03-06 16:17:17,900 44k INFO Losses: [2.399630546569824, 2.1961686611175537, 8.416030883789062, 15.3222017288208, 0.6436339616775513], step: 71600, lr: 8.707645595647632e-05
2023-03-06 16:17:27,215 44k INFO ====> Epoch: 1085, cost 59.32 s
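[Editor's note] Loss lines appear every three epochs and are log_interval: 200 steps apart, implying roughly 66 optimizer steps per epoch; step 71600 at epoch 1085 gives the same figure. A quick arithmetic check (derived from the log itself, not from the code):

    log_interval = 200       # steps between consecutive loss lines
    epochs_between_logs = 3  # loss lines at epochs 1079, 1082, 1085, ...
    print(log_interval / epochs_between_logs)  # ~66.7 steps per epoch
    print(71600 / 1085)                        # ~66.0, consistent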