|
2023-03-07 04:21:03,113 44k INFO {'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 6536180, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': True, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 10}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'nahida': 0, 'pecorine': 1, 'ayaka': 2}, 'model_dir': './logs/44k'} |
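The `lr_decay: 0.999875` entry in the config dump above drives the per-epoch learning rates that appear throughout this log (0.0001, then 9.99875e-05, then 9.99750015625e-05, and so on). A minimal sketch, assuming an exponential schedule applied once per epoch (`lr_at_epoch` is a hypothetical helper, not part of the trainer):

```python
# Assumed schedule: lr_epoch = learning_rate * lr_decay ** (epoch - 1),
# using the values from the 'train' config dump above.
LEARNING_RATE = 1e-4   # 'learning_rate' in the config
LR_DECAY = 0.999875    # 'lr_decay' in the config

def lr_at_epoch(epoch: int) -> float:
    """Learning rate applied during the given 1-indexed epoch."""
    return LEARNING_RATE * LR_DECAY ** (epoch - 1)
```

This reproduces the logged values to float precision, e.g. epoch 3 gives 9.99750015625e-05, matching the `lr:` field of the step-600 and step-800 lines below.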
|
2023-03-07 04:21:15,543 44k INFO emb_g.weight is not in the checkpoint |
|
2023-03-07 04:21:15,618 44k INFO Loaded checkpoint './logs/44k/G_0.pth' (iteration 0) |
|
2023-03-07 04:21:15,955 44k INFO Loaded checkpoint './logs/44k/D_0.pth' (iteration 0) |
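The `emb_g.weight is not in the checkpoint` line above suggests a non-strict load: keys absent from the checkpoint keep their fresh initialization instead of aborting the load. A dict-level sketch of that merge (`merge_state` is a hypothetical helper for illustration, not the trainer's actual code):

```python
def merge_state(model_state, ckpt_state):
    """Non-strict load: take values from the checkpoint where present,
    keep the model's initialized values for (and report) missing keys."""
    missing = [k for k in model_state if k not in ckpt_state]
    merged = {k: ckpt_state.get(k, v) for k, v in model_state.items()}
    return merged, missing
```

With a checkpoint lacking `emb_g.weight`, the merged state keeps the model's initialized value and `missing` names the key, which is what a log line like the one above would then report.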
|
2023-03-07 04:21:36,260 44k INFO Train Epoch: 1 [0%] |
|
2023-03-07 04:21:36,261 44k INFO Losses: [3.543975830078125, 1.4798818826675415, 11.735664367675781, 38.96860885620117, 115.48365783691406], step: 0, lr: 0.0001 |
|
2023-03-07 04:21:46,824 44k INFO Saving model and optimizer state at iteration 1 to ./logs/44k/G_0.pth |
|
2023-03-07 04:21:48,176 44k INFO Saving model and optimizer state at iteration 1 to ./logs/44k/D_0.pth |
|
2023-03-07 04:25:21,155 44k INFO Train Epoch: 1 [72%] |
|
2023-03-07 04:25:21,156 44k INFO Losses: [2.597398281097412, 2.07065486907959, 11.151609420776367, 18.67756462097168, 1.7591882944107056], step: 200, lr: 0.0001 |
|
2023-03-07 04:26:42,345 44k INFO ====> Epoch: 1, cost 339.24 s |
|
2023-03-07 04:28:39,558 44k INFO Train Epoch: 2 [44%] |
|
2023-03-07 04:28:39,560 44k INFO Losses: [2.5425071716308594, 2.033686399459839, 8.144768714904785, 23.269121170043945, 1.1510231494903564], step: 400, lr: 9.99875e-05 |
|
2023-03-07 04:30:53,120 44k INFO ====> Epoch: 2, cost 250.77 s |
|
2023-03-07 04:31:39,067 44k INFO Train Epoch: 3 [16%] |
|
2023-03-07 04:31:39,070 44k INFO Losses: [2.548614501953125, 2.653628349304199, 8.729273796081543, 18.47227668762207, 1.2930660247802734], step: 600, lr: 9.99750015625e-05 |
|
2023-03-07 04:34:34,642 44k INFO Train Epoch: 3 [88%] |
|
2023-03-07 04:34:34,644 44k INFO Losses: [2.478203296661377, 2.331847906112671, 10.69192886352539, 22.939756393432617, 1.3195568323135376], step: 800, lr: 9.99750015625e-05 |
|
2023-03-07 04:34:42,832 44k INFO Saving model and optimizer state at iteration 3 to ./logs/44k/G_800.pth |
|
2023-03-07 04:34:46,578 44k INFO Saving model and optimizer state at iteration 3 to ./logs/44k/D_800.pth |
|
2023-03-07 04:35:24,674 44k INFO ====> Epoch: 3, cost 271.55 s |
|
2023-03-07 04:37:59,507 44k INFO Train Epoch: 4 [60%] |
|
2023-03-07 04:37:59,509 44k INFO Losses: [2.8386425971984863, 2.289201259613037, 10.13615608215332, 18.693723678588867, 1.6255731582641602], step: 1000, lr: 9.996250468730469e-05 |
|
2023-03-07 04:39:38,458 44k INFO ====> Epoch: 4, cost 253.78 s |
|
2023-03-07 04:40:59,900 44k INFO Train Epoch: 5 [32%] |
|
2023-03-07 04:40:59,902 44k INFO Losses: [2.533114433288574, 2.182861328125, 11.884943008422852, 20.91280746459961, 1.3490363359451294], step: 1200, lr: 9.995000937421877e-05 |
|
2023-03-07 04:43:43,139 44k INFO ====> Epoch: 5, cost 244.68 s |
|
2023-03-07 04:43:59,011 44k INFO Train Epoch: 6 [4%] |
|
2023-03-07 04:43:59,013 44k INFO Losses: [2.7925572395324707, 1.8682271242141724, 7.50873327255249, 19.440292358398438, 1.1244580745697021], step: 1400, lr: 9.993751562304699e-05 |
|
2023-03-07 04:46:49,377 44k INFO Train Epoch: 6 [76%] |
|
2023-03-07 04:46:49,378 44k INFO Losses: [2.6287946701049805, 1.9342074394226074, 7.630975723266602, 17.754249572753906, 1.0620096921920776], step: 1600, lr: 9.993751562304699e-05 |
|
2023-03-07 04:46:57,567 44k INFO Saving model and optimizer state at iteration 6 to ./logs/44k/G_1600.pth |
|
2023-03-07 04:47:00,487 44k INFO Saving model and optimizer state at iteration 6 to ./logs/44k/D_1600.pth |
|
2023-03-07 04:48:03,486 44k INFO ====> Epoch: 6, cost 260.35 s |
|
2023-03-07 04:50:01,853 44k INFO Train Epoch: 7 [47%] |
|
2023-03-07 04:50:01,854 44k INFO Losses: [2.686668634414673, 2.1358020305633545, 8.84028434753418, 16.75594711303711, 1.2701526880264282], step: 1800, lr: 9.99250234335941e-05 |
|
2023-03-07 04:52:05,667 44k INFO ====> Epoch: 7, cost 242.18 s |
|
2023-03-07 04:53:00,936 44k INFO Train Epoch: 8 [19%] |
|
2023-03-07 04:53:00,938 44k INFO Losses: [2.634305477142334, 2.0911600589752197, 7.258265018463135, 17.282140731811523, 1.1096409559249878], step: 2000, lr: 9.991253280566489e-05 |
|
2023-03-07 04:55:52,034 44k INFO Train Epoch: 8 [91%] |
|
2023-03-07 04:55:52,036 44k INFO Losses: [2.7485175132751465, 1.8093236684799194, 10.034452438354492, 20.606712341308594, 1.0243103504180908], step: 2200, lr: 9.991253280566489e-05 |
|
2023-03-07 04:56:12,826 44k INFO ====> Epoch: 8, cost 247.16 s |
|
2023-03-07 04:58:49,419 44k INFO Train Epoch: 9 [63%] |
|
2023-03-07 04:58:49,421 44k INFO Losses: [2.499004602432251, 2.1389877796173096, 8.825071334838867, 17.517284393310547, 1.1475107669830322], step: 2400, lr: 9.990004373906418e-05 |
|
2023-03-07 04:58:56,793 44k INFO Saving model and optimizer state at iteration 9 to ./logs/44k/G_2400.pth |
|
2023-03-07 04:59:01,064 44k INFO Saving model and optimizer state at iteration 9 to ./logs/44k/D_2400.pth |
|
2023-03-07 05:00:33,878 44k INFO ====> Epoch: 9, cost 261.05 s |
|
2023-03-07 05:02:03,846 44k INFO Train Epoch: 10 [35%] |
|
2023-03-07 05:02:03,848 44k INFO Losses: [2.5012993812561035, 2.427371025085449, 9.356152534484863, 20.212690353393555, 1.3596128225326538], step: 2600, lr: 9.98875562335968e-05 |
|
2023-03-07 05:04:40,100 44k INFO ====> Epoch: 10, cost 246.22 s |
|
2023-03-07 05:05:03,251 44k INFO Train Epoch: 11 [7%] |
|
2023-03-07 05:05:03,253 44k INFO Losses: [2.4956400394439697, 2.1035985946655273, 11.164962768554688, 18.634178161621094, 1.3643189668655396], step: 2800, lr: 9.987507028906759e-05 |
|
2023-03-07 05:08:00,114 44k INFO Train Epoch: 11 [79%] |
|
2023-03-07 05:08:00,116 44k INFO Losses: [2.4335012435913086, 2.367199420928955, 9.393879890441895, 18.814390182495117, 1.1170237064361572], step: 3000, lr: 9.987507028906759e-05 |
|
2023-03-07 05:08:49,842 44k INFO ====> Epoch: 11, cost 249.74 s |
|
2023-03-07 05:10:57,945 44k INFO Train Epoch: 12 [51%] |
|
2023-03-07 05:10:57,947 44k INFO Losses: [2.587806463241577, 2.00645112991333, 10.56314754486084, 17.56720733642578, 1.1663157939910889], step: 3200, lr: 9.986258590528146e-05 |
|
2023-03-07 05:11:05,989 44k INFO Saving model and optimizer state at iteration 12 to ./logs/44k/G_3200.pth |
|
2023-03-07 05:11:09,049 44k INFO Saving model and optimizer state at iteration 12 to ./logs/44k/D_3200.pth |
|
2023-03-07 05:13:10,152 44k INFO ====> Epoch: 12, cost 260.31 s |
|
2023-03-07 05:14:12,861 44k INFO Train Epoch: 13 [23%] |
|
2023-03-07 05:14:12,863 44k INFO Losses: [2.6357336044311523, 2.093174457550049, 8.349996566772461, 19.872102737426758, 1.0389378070831299], step: 3400, lr: 9.98501030820433e-05 |
|
2023-03-07 05:17:07,990 44k INFO Train Epoch: 13 [95%] |
|
2023-03-07 05:17:07,992 44k INFO Losses: [2.806152820587158, 1.741226077079773, 4.843060493469238, 13.849231719970703, 1.2450897693634033], step: 3600, lr: 9.98501030820433e-05 |
|
2023-03-07 05:17:20,006 44k INFO ====> Epoch: 13, cost 249.85 s |
|
2023-03-07 05:20:06,409 44k INFO Train Epoch: 14 [67%] |
|
2023-03-07 05:20:06,411 44k INFO Losses: [2.668164014816284, 2.259545087814331, 10.303858757019043, 18.768526077270508, 1.1716548204421997], step: 3800, lr: 9.983762181915804e-05 |
|
2023-03-07 05:21:25,149 44k INFO ====> Epoch: 14, cost 245.14 s |
|
2023-03-07 05:23:02,486 44k INFO Train Epoch: 15 [39%] |
|
2023-03-07 05:23:02,488 44k INFO Losses: [2.4119551181793213, 2.190473794937134, 12.65767765045166, 15.82565975189209, 1.2502213716506958], step: 4000, lr: 9.982514211643064e-05 |
|
2023-03-07 05:23:09,343 44k INFO Saving model and optimizer state at iteration 15 to ./logs/44k/G_4000.pth |
|
2023-03-07 05:23:13,665 44k INFO Saving model and optimizer state at iteration 15 to ./logs/44k/D_4000.pth |
|
2023-03-07 05:25:44,913 44k INFO ====> Epoch: 15, cost 259.76 s |
|
2023-03-07 05:26:16,750 44k INFO Train Epoch: 16 [11%] |
|
2023-03-07 05:26:16,751 44k INFO Losses: [2.801978588104248, 2.035661220550537, 11.924650192260742, 20.20277214050293, 0.9627301692962646], step: 4200, lr: 9.981266397366609e-05 |
|
2023-03-07 05:29:07,875 44k INFO Train Epoch: 16 [83%] |
|
2023-03-07 05:29:07,877 44k INFO Losses: [2.754967212677002, 2.160874128341675, 11.15700626373291, 18.386734008789062, 1.1072274446487427], step: 4400, lr: 9.981266397366609e-05 |
|
2023-03-07 05:29:48,630 44k INFO ====> Epoch: 16, cost 243.72 s |
|
2023-03-07 05:32:04,194 44k INFO Train Epoch: 17 [55%] |
|
2023-03-07 05:32:04,196 44k INFO Losses: [2.624680757522583, 2.1410300731658936, 12.433168411254883, 22.119606018066406, 1.0287683010101318], step: 4600, lr: 9.980018739066937e-05 |
|
2023-03-07 05:33:50,390 44k INFO ====> Epoch: 17, cost 241.76 s |
|
2023-03-07 05:35:00,098 44k INFO Train Epoch: 18 [27%] |
|
2023-03-07 05:35:00,100 44k INFO Losses: [2.5146710872650146, 2.389819383621216, 11.082135200500488, 18.382461547851562, 1.5022131204605103], step: 4800, lr: 9.978771236724554e-05 |
|
2023-03-07 05:35:08,309 44k INFO Saving model and optimizer state at iteration 18 to ./logs/44k/G_4800.pth |
|
2023-03-07 05:35:11,295 44k INFO Saving model and optimizer state at iteration 18 to ./logs/44k/D_4800.pth |
|
2023-03-07 05:38:06,225 44k INFO Train Epoch: 18 [99%] |
|
2023-03-07 05:38:06,227 44k INFO Losses: [2.550516128540039, 2.1181890964508057, 8.869426727294922, 17.22913932800293, 1.2092989683151245], step: 5000, lr: 9.978771236724554e-05 |
|
2023-03-07 05:38:09,840 44k INFO ====> Epoch: 18, cost 259.45 s |
|
2023-03-07 05:41:02,385 44k INFO Train Epoch: 19 [71%] |
|
2023-03-07 05:41:02,393 44k INFO Losses: [2.6263914108276367, 2.1377058029174805, 10.595470428466797, 21.15692710876465, 1.089450478553772], step: 5200, lr: 9.977523890319963e-05 |
|
2023-03-07 05:42:12,281 44k INFO ====> Epoch: 19, cost 242.44 s |
|
2023-03-07 05:43:58,929 44k INFO Train Epoch: 20 [42%] |
|
2023-03-07 05:43:58,931 44k INFO Losses: [2.7500247955322266, 2.0473220348358154, 7.403374671936035, 17.29874610900879, 1.4409162998199463], step: 5400, lr: 9.976276699833672e-05 |
|
2023-03-07 05:46:16,248 44k INFO ====> Epoch: 20, cost 243.97 s |
|
2023-03-07 05:46:58,145 44k INFO Train Epoch: 21 [14%] |
|
2023-03-07 05:46:58,147 44k INFO Losses: [2.665860652923584, 2.237001895904541, 9.64706039428711, 17.03026008605957, 1.2313685417175293], step: 5600, lr: 9.975029665246193e-05 |
|
2023-03-07 05:47:04,617 44k INFO Saving model and optimizer state at iteration 21 to ./logs/44k/G_5600.pth |
|
2023-03-07 05:47:09,118 44k INFO Saving model and optimizer state at iteration 21 to ./logs/44k/D_5600.pth |
|
2023-03-07 05:50:04,403 44k INFO Train Epoch: 21 [86%] |
|
2023-03-07 05:50:04,405 44k INFO Losses: [2.565178871154785, 2.152285575866699, 8.40060043334961, 18.72293472290039, 1.1223020553588867], step: 5800, lr: 9.975029665246193e-05 |
|
2023-03-07 05:50:36,749 44k INFO ====> Epoch: 21, cost 260.50 s |
|
2023-03-07 05:53:05,230 44k INFO Train Epoch: 22 [58%] |
|
2023-03-07 05:53:05,232 44k INFO Losses: [2.7232723236083984, 1.9722800254821777, 6.698011875152588, 15.892937660217285, 1.0770684480667114], step: 6000, lr: 9.973782786538036e-05 |
|
2023-03-07 05:54:42,561 44k INFO ====> Epoch: 22, cost 245.81 s |
|
2023-03-07 05:56:03,109 44k INFO Train Epoch: 23 [30%] |
|
2023-03-07 05:56:03,112 44k INFO Losses: [2.932976484298706, 1.9556177854537964, 8.317593574523926, 17.726564407348633, 0.9240208268165588], step: 6200, lr: 9.972536063689719e-05 |
|
2023-03-07 05:58:49,117 44k INFO ====> Epoch: 23, cost 246.56 s |
|
2023-03-07 05:59:02,255 44k INFO Train Epoch: 24 [2%] |
|
2023-03-07 05:59:02,257 44k INFO Losses: [2.6005189418792725, 2.2574479579925537, 7.94605827331543, 19.483564376831055, 1.168753743171692], step: 6400, lr: 9.971289496681757e-05 |
|
2023-03-07 05:59:10,021 44k INFO Saving model and optimizer state at iteration 24 to ./logs/44k/G_6400.pth |
|
2023-03-07 05:59:12,913 44k INFO Saving model and optimizer state at iteration 24 to ./logs/44k/D_6400.pth |
|
2023-03-07 06:02:07,126 44k INFO Train Epoch: 24 [74%] |
|
2023-03-07 06:02:07,128 44k INFO Losses: [2.519803047180176, 2.3115246295928955, 13.393256187438965, 19.497238159179688, 1.2631845474243164], step: 6600, lr: 9.971289496681757e-05 |
|
2023-03-07 06:03:11,211 44k INFO ====> Epoch: 24, cost 262.09 s |
|
2023-03-07 06:05:09,890 44k INFO Train Epoch: 25 [46%] |
|
2023-03-07 06:05:09,893 44k INFO Losses: [2.6202070713043213, 2.03403639793396, 9.150493621826172, 16.206737518310547, 1.093073844909668], step: 6800, lr: 9.970043085494672e-05 |
|
2023-03-07 06:07:23,501 44k INFO ====> Epoch: 25, cost 252.29 s |
|
2023-03-07 06:08:13,858 44k INFO Train Epoch: 26 [18%] |
|
2023-03-07 06:08:13,860 44k INFO Losses: [2.6275434494018555, 1.970278263092041, 7.87446928024292, 13.797062873840332, 1.3514039516448975], step: 7000, lr: 9.968796830108985e-05 |
|
2023-03-07 06:11:12,131 44k INFO Train Epoch: 26 [90%] |
|
2023-03-07 06:11:12,133 44k INFO Losses: [2.737353801727295, 1.9515436887741089, 6.990047931671143, 14.805665969848633, 1.1941959857940674], step: 7200, lr: 9.968796830108985e-05 |
|
2023-03-07 06:11:21,712 44k INFO Saving model and optimizer state at iteration 26 to ./logs/44k/G_7200.pth |
|
2023-03-07 06:11:24,914 44k INFO Saving model and optimizer state at iteration 26 to ./logs/44k/D_7200.pth |
|
2023-03-07 06:11:55,563 44k INFO ====> Epoch: 26, cost 272.06 s |
|
2023-03-07 06:14:37,842 44k INFO Train Epoch: 27 [62%] |
|
2023-03-07 06:14:37,844 44k INFO Losses: [2.5336413383483887, 2.1450982093811035, 8.692865371704102, 16.977632522583008, 1.1963155269622803], step: 7400, lr: 9.967550730505221e-05 |
|
2023-03-07 06:16:11,071 44k INFO ====> Epoch: 27, cost 255.51 s |
|
2023-03-07 06:17:41,493 44k INFO Train Epoch: 28 [34%] |
|
2023-03-07 06:17:41,496 44k INFO Losses: [2.2800095081329346, 2.5161995887756348, 8.64326000213623, 17.908842086791992, 0.8208427429199219], step: 7600, lr: 9.966304786663908e-05 |
|
2023-03-07 06:20:23,300 44k INFO ====> Epoch: 28, cost 252.23 s |
|
2023-03-07 06:20:47,155 44k INFO Train Epoch: 29 [6%] |
|
2023-03-07 06:20:47,158 44k INFO Losses: [2.575345039367676, 2.0231308937072754, 8.365586280822754, 17.556428909301758, 1.0459877252578735], step: 7800, lr: 9.965058998565574e-05 |
|
2023-03-07 06:23:42,628 44k INFO Train Epoch: 29 [78%] |
|
2023-03-07 06:23:42,630 44k INFO Losses: [2.716033458709717, 2.034611940383911, 9.335721015930176, 14.906460762023926, 0.7885928750038147], step: 8000, lr: 9.965058998565574e-05 |
|
2023-03-07 06:23:49,673 44k INFO Saving model and optimizer state at iteration 29 to ./logs/44k/G_8000.pth |
|
2023-03-07 06:23:54,175 44k INFO Saving model and optimizer state at iteration 29 to ./logs/44k/D_8000.pth |
|
2023-03-07 06:24:55,070 44k INFO ====> Epoch: 29, cost 271.77 s |
|
2023-03-07 06:27:00,175 44k INFO Train Epoch: 30 [50%] |
|
2023-03-07 06:27:00,177 44k INFO Losses: [2.4563863277435303, 2.194610834121704, 8.788387298583984, 17.64336585998535, 0.7280042171478271], step: 8200, lr: 9.963813366190753e-05 |
|
2023-03-07 06:29:00,803 44k INFO ====> Epoch: 30, cost 245.73 s |
|
2023-03-07 06:29:57,991 44k INFO Train Epoch: 31 [22%] |
|
2023-03-07 06:29:57,993 44k INFO Losses: [2.4485974311828613, 2.099264144897461, 12.293371200561523, 17.675947189331055, 1.3530832529067993], step: 8400, lr: 9.962567889519979e-05 |
|
2023-03-07 06:32:51,667 44k INFO Train Epoch: 31 [94%] |
|
2023-03-07 06:32:51,669 44k INFO Losses: [2.478673219680786, 2.168321371078491, 8.064741134643555, 16.40643882751465, 0.7406769394874573], step: 8600, lr: 9.962567889519979e-05 |
|
2023-03-07 06:33:07,086 44k INFO ====> Epoch: 31, cost 246.28 s |
|
2023-03-07 06:35:52,469 44k INFO Train Epoch: 32 [65%] |
|
2023-03-07 06:35:52,471 44k INFO Losses: [2.4513065814971924, 2.630798578262329, 11.20294189453125, 20.24482536315918, 1.1837836503982544], step: 8800, lr: 9.961322568533789e-05 |
|
2023-03-07 06:35:59,696 44k INFO Saving model and optimizer state at iteration 32 to ./logs/44k/G_8800.pth |
|
2023-03-07 06:36:03,169 44k INFO Saving model and optimizer state at iteration 32 to ./logs/44k/D_8800.pth |
|
2023-03-07 06:36:06,134 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_800.pth |
|
2023-03-07 06:36:06,138 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_800.pth |
|
2023-03-07 06:37:31,232 44k INFO ====> Epoch: 32, cost 264.15 s |
|
2023-03-07 06:39:09,137 44k INFO Train Epoch: 33 [37%] |
|
2023-03-07 06:39:09,140 44k INFO Losses: [2.5521469116210938, 2.072925090789795, 8.994251251220703, 17.99446678161621, 1.0526460409164429], step: 9000, lr: 9.960077403212722e-05 |
|
2023-03-07 06:41:36,609 44k INFO ====> Epoch: 33, cost 245.38 s |
|
2023-03-07 06:42:05,917 44k INFO Train Epoch: 34 [9%] |
|
2023-03-07 06:42:05,919 44k INFO Losses: [2.6534881591796875, 1.973475694656372, 7.927048683166504, 15.803838729858398, 1.050992727279663], step: 9200, lr: 9.95883239353732e-05 |
|
2023-03-07 06:44:57,334 44k INFO Train Epoch: 34 [81%] |
|
2023-03-07 06:44:57,336 44k INFO Losses: [2.5706284046173096, 1.9730966091156006, 8.840133666992188, 18.16031265258789, 1.1020385026931763], step: 9400, lr: 9.95883239353732e-05 |
|
2023-03-07 06:45:40,484 44k INFO ====> Epoch: 34, cost 243.87 s |
|
2023-03-07 06:47:52,283 44k INFO Train Epoch: 35 [53%] |
|
2023-03-07 06:47:52,285 44k INFO Losses: [2.7228848934173584, 1.8054351806640625, 7.591585636138916, 16.257360458374023, 1.1753681898117065], step: 9600, lr: 9.957587539488128e-05 |
|
2023-03-07 06:47:58,544 44k INFO Saving model and optimizer state at iteration 35 to ./logs/44k/G_9600.pth |
|
2023-03-07 06:48:01,555 44k INFO Saving model and optimizer state at iteration 35 to ./logs/44k/D_9600.pth |
|
2023-03-07 06:48:04,480 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_1600.pth |
|
2023-03-07 06:48:04,482 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_1600.pth |
|
2023-03-07 07:02:16,767 44k INFO {'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 1234, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': False, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'nyaru': 0, 'huiyu': 1, 'nen': 2, 'paimon': 3, 'yunhao': 4}, 'model_dir': './logs/44k'} |
|
2023-03-07 07:02:44,756 44k INFO Loaded checkpoint './logs/44k/G_9600.pth' (iteration 35) |
|
2023-03-07 07:02:57,382 44k INFO Loaded checkpoint './logs/44k/D_9600.pth' (iteration 35) |
|
2023-03-07 07:07:16,022 44k INFO {'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 1234, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': False, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'nyaru': 0, 'huiyu': 1, 'nen': 2, 'paimon': 3, 'yunhao': 4}, 'model_dir': './logs/44k'} |
|
2023-03-07 07:07:24,190 44k INFO Loaded checkpoint './logs/44k/G_9600.pth' (iteration 35) |
|
2023-03-07 07:07:25,552 44k INFO Loaded checkpoint './logs/44k/D_9600.pth' (iteration 35) |
|
2023-03-07 07:32:30,582 44k INFO {'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 6536180, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': True, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'nahida': 0, 'pecorine': 1, 'ayaka': 2}, 'model_dir': './logs/44k'} |
|
2023-03-07 07:32:47,838 44k INFO Loaded checkpoint './logs/44k/G_9600.pth' (iteration 35) |
|
2023-03-07 07:32:52,676 44k INFO Loaded checkpoint './logs/44k/D_9600.pth' (iteration 35) |
|
2023-03-07 07:35:40,118 44k INFO Train Epoch: 35 [53%] |
|
2023-03-07 07:35:40,123 44k INFO Losses: [2.642314910888672, 1.9616827964782715, 7.74857234954834, 17.229703903198242, 1.024670958518982], step: 9600, lr: 9.956342841045691e-05 |
|
2023-03-07 07:35:53,475 44k INFO Saving model and optimizer state at iteration 35 to ./logs/44k/G_9600.pth |
|
2023-03-07 07:35:56,162 44k INFO Saving model and optimizer state at iteration 35 to ./logs/44k/D_9600.pth |
|
2023-03-07 07:35:58,492 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_0.pth.1 |
|
2023-03-07 07:35:58,493 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_2400.pth |
|
2023-03-07 07:35:58,495 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_3200.pth |
|
2023-03-07 07:35:58,496 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_4000.pth |
|
2023-03-07 07:35:58,498 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_4800.pth |
|
2023-03-07 07:35:58,499 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_5600.pth |
|
2023-03-07 07:35:58,549 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_6400.pth |
|
2023-03-07 07:35:58,552 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_7200.pth |
|
2023-03-07 07:35:58,554 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_0.pth.1 |
|
2023-03-07 07:35:58,555 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_2400.pth |
|
2023-03-07 07:35:58,556 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_3200.pth |
|
2023-03-07 07:35:58,558 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_4000.pth |
|
2023-03-07 07:35:58,561 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_4800.pth |
|
2023-03-07 07:35:58,563 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_5600.pth |
|
2023-03-07 07:35:58,564 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_6400.pth |
|
2023-03-07 07:35:58,569 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_7200.pth |
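The burst of deletions above follows the 07:32 restart, whose config dump sets `keep_ckpts: 3` (the first run used 10): on the next save, pruning removes all but the three most recent numbered checkpoints per prefix. A minimal sketch of that policy (assumed behavior; note the real pruner evidently also catches stray files such as `G_0.pth.1`, which this numeric pattern would not):

```python
import glob
import os
import re

def prune_ckpts(model_dir, keep_ckpts, prefix="G"):
    """Delete all but the `keep_ckpts` highest-numbered <prefix>_<step>.pth files."""
    pat = re.compile(r"^%s_(\d+)\.pth$" % re.escape(prefix))
    ckpts = []  # (step, path) pairs for files matching the numbered pattern
    for path in glob.glob(os.path.join(model_dir, "%s_*.pth" % prefix)):
        m = pat.match(os.path.basename(path))
        if m:
            ckpts.append((int(m.group(1)), path))
    ckpts.sort()  # ascending by step, so the oldest come first
    doomed = [p for _, p in ckpts[:-keep_ckpts]] if keep_ckpts > 0 else []
    for p in doomed:
        os.remove(p)
    return doomed
```

Called with `keep_ckpts=3` after the step-9600 save, this would leave only the three newest G/D pairs, matching the survivors implied by the log.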
|
2023-03-07 07:38:09,413 44k INFO ====> Epoch: 35, cost 338.83 s |
|
2023-03-07 07:39:14,487 44k INFO Train Epoch: 36 [25%] |
|
2023-03-07 07:39:14,490 44k INFO Losses: [2.7183139324188232, 1.8399269580841064, 7.3326416015625, 17.587196350097656, 0.9791800379753113], step: 9800, lr: 9.95509829819056e-05 |
|
2023-03-07 07:42:00,694 44k INFO Train Epoch: 36 [97%] |
|
2023-03-07 07:42:00,696 44k INFO Losses: [2.346916675567627, 2.6773409843444824, 9.220958709716797, 18.319215774536133, 0.5668321251869202], step: 10000, lr: 9.95509829819056e-05 |
|
2023-03-07 07:42:07,892 44k INFO ====> Epoch: 36, cost 238.48 s |
|
2023-03-07 07:44:52,788 44k INFO Train Epoch: 37 [69%] |
|
2023-03-07 07:44:52,790 44k INFO Losses: [2.724806070327759, 1.9389129877090454, 12.118914604187012, 16.00565528869629, 1.0658726692199707], step: 10200, lr: 9.953853910903285e-05 |
|
2023-03-07 07:46:02,627 44k INFO ====> Epoch: 37, cost 234.74 s |
|
2023-03-07 07:47:43,520 44k INFO Train Epoch: 38 [41%] |
|
2023-03-07 07:47:43,522 44k INFO Losses: [2.530158042907715, 2.0477166175842285, 11.085125923156738, 18.511581420898438, 0.8004183173179626], step: 10400, lr: 9.952609679164422e-05 |
|
2023-03-07 07:47:49,722 44k INFO Saving model and optimizer state at iteration 38 to ./logs/44k/G_10400.pth |
|
2023-03-07 07:47:52,946 44k INFO Saving model and optimizer state at iteration 38 to ./logs/44k/D_10400.pth |
|
2023-03-07 07:47:55,334 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_8000.pth |
|
2023-03-07 07:47:55,336 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_8000.pth |
|
2023-03-07 07:50:12,267 44k INFO ====> Epoch: 38, cost 249.64 s |
|
2023-03-07 07:50:47,437 44k INFO Train Epoch: 39 [13%] |
|
2023-03-07 07:50:47,438 44k INFO Losses: [2.7594900131225586, 2.1668283939361572, 6.762094974517822, 13.174894332885742, 0.9686982035636902], step: 10600, lr: 9.951365602954526e-05 |
|
2023-03-07 07:53:32,239 44k INFO Train Epoch: 39 [85%] |
|
2023-03-07 07:53:32,241 44k INFO Losses: [2.7302756309509277, 2.0099236965179443, 8.370537757873535, 17.475419998168945, 1.1079752445220947], step: 10800, lr: 9.951365602954526e-05 |
|
2023-03-07 07:54:06,512 44k INFO ====> Epoch: 39, cost 234.25 s |
|
2023-03-07 07:56:23,307 44k INFO Train Epoch: 40 [57%] |
|
2023-03-07 07:56:23,309 44k INFO Losses: [2.654186248779297, 1.9137489795684814, 6.000055313110352, 14.39924430847168, 1.0657968521118164], step: 11000, lr: 9.950121682254156e-05 |
|
2023-03-07 07:58:01,451 44k INFO ====> Epoch: 40, cost 234.94 s |
|
2023-03-07 07:59:14,973 44k INFO Train Epoch: 41 [29%] |
|
2023-03-07 07:59:14,976 44k INFO Losses: [2.594477653503418, 2.0960772037506104, 9.184497833251953, 18.6002197265625, 1.095924973487854], step: 11200, lr: 9.948877917043875e-05 |
|
2023-03-07 07:59:21,192 44k INFO Saving model and optimizer state at iteration 41 to ./logs/44k/G_11200.pth |
|
2023-03-07 07:59:23,755 44k INFO Saving model and optimizer state at iteration 41 to ./logs/44k/D_11200.pth |
|
2023-03-07 07:59:26,340 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_8800.pth |
|
2023-03-07 07:59:26,342 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_8800.pth |
|
2023-03-07 08:02:10,208 44k INFO ====> Epoch: 41, cost 248.76 s |
|
2023-03-07 08:02:18,429 44k INFO Train Epoch: 42 [1%] |
|
2023-03-07 08:02:18,430 44k INFO Losses: [2.4492578506469727, 2.117008924484253, 10.597413063049316, 17.72152328491211, 1.154609203338623], step: 11400, lr: 9.947634307304244e-05 |
|
2023-03-07 08:03:30,657 44k INFO {'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 6536180, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': True, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 10}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'nahida': 0, 'pecorine': 1, 'ayaka': 2}, 'model_dir': './logs/44k'} |
|
2023-03-07 08:03:45,339 44k INFO Loaded checkpoint './logs/44k/G_11200.pth' (iteration 41) |
|
2023-03-07 08:03:48,814 44k INFO Loaded checkpoint './logs/44k/D_11200.pth' (iteration 41) |
|
2023-03-07 08:05:26,610 44k INFO Train Epoch: 41 [29%] |
|
2023-03-07 08:05:26,611 44k INFO Losses: [2.6247730255126953, 1.9095958471298218, 10.087982177734375, 19.38198471069336, 1.1881312131881714], step: 11200, lr: 9.947634307304244e-05
2023-03-07 08:05:35,198 44k INFO Saving model and optimizer state at iteration 41 to ./logs/44k/G_11200.pth
2023-03-07 08:05:38,056 44k INFO Saving model and optimizer state at iteration 41 to ./logs/44k/D_11200.pth
2023-03-07 08:08:52,980 44k INFO ====> Epoch: 41, cost 322.32 s
2023-03-07 08:09:01,307 44k INFO Train Epoch: 42 [1%]
2023-03-07 08:09:01,309 44k INFO Losses: [2.684488296508789, 1.9959248304367065, 10.913714408874512, 18.393997192382812, 0.960388720035553], step: 11400, lr: 9.94639085301583e-05
2023-03-07 08:11:48,296 44k INFO Train Epoch: 42 [73%]
2023-03-07 08:11:48,298 44k INFO Losses: [2.5422801971435547, 2.2386116981506348, 10.54316520690918, 21.215917587280273, 0.8136235475540161], step: 11600, lr: 9.94639085301583e-05
2023-03-07 08:12:51,424 44k INFO ====> Epoch: 42, cost 238.44 s
2023-03-07 08:14:41,422 44k INFO Train Epoch: 43 [45%]
2023-03-07 08:14:41,424 44k INFO Losses: [2.661695957183838, 2.2479186058044434, 5.340694427490234, 13.786053657531738, 1.0400354862213135], step: 11800, lr: 9.945147554159202e-05
2023-03-07 08:16:48,705 44k INFO ====> Epoch: 43, cost 237.28 s
2023-03-07 08:17:33,453 44k INFO Train Epoch: 44 [17%]
2023-03-07 08:17:33,455 44k INFO Losses: [2.7896199226379395, 1.7841088771820068, 7.866842269897461, 16.58633041381836, 0.799792468547821], step: 12000, lr: 9.943904410714931e-05
2023-03-07 08:17:40,505 44k INFO Saving model and optimizer state at iteration 44 to ./logs/44k/G_12000.pth
2023-03-07 08:17:43,130 44k INFO Saving model and optimizer state at iteration 44 to ./logs/44k/D_12000.pth
2023-03-07 08:20:35,781 44k INFO Train Epoch: 44 [88%]
2023-03-07 08:20:35,782 44k INFO Losses: [2.652988910675049, 2.0590739250183105, 6.6182708740234375, 16.060216903686523, 1.0787829160690308], step: 12200, lr: 9.943904410714931e-05
2023-03-07 08:21:01,734 44k INFO ====> Epoch: 44, cost 253.03 s
2023-03-07 08:23:29,054 44k INFO Train Epoch: 45 [60%]
2023-03-07 08:23:29,055 44k INFO Losses: [2.3886585235595703, 2.2649729251861572, 12.287318229675293, 16.469440460205078, 0.9210419654846191], step: 12400, lr: 9.942661422663591e-05
2023-03-07 08:24:59,996 44k INFO ====> Epoch: 45, cost 238.26 s
2023-03-07 08:26:22,986 44k INFO Train Epoch: 46 [32%]
2023-03-07 08:26:22,988 44k INFO Losses: [2.505093812942505, 2.25510573387146, 11.794397354125977, 18.561588287353516, 1.0951637029647827], step: 12600, lr: 9.941418589985758e-05
2023-03-07 08:28:57,547 44k INFO ====> Epoch: 46, cost 237.55 s
2023-03-07 08:29:15,001 44k INFO Train Epoch: 47 [4%]
2023-03-07 08:29:15,002 44k INFO Losses: [2.7381699085235596, 1.9557750225067139, 5.539374828338623, 13.914876937866211, 0.9886545538902283], step: 12800, lr: 9.940175912662009e-05
2023-03-07 08:29:22,697 44k INFO Saving model and optimizer state at iteration 47 to ./logs/44k/G_12800.pth
2023-03-07 08:29:25,254 44k INFO Saving model and optimizer state at iteration 47 to ./logs/44k/D_12800.pth
2023-03-07 08:32:16,865 44k INFO Train Epoch: 47 [76%]
2023-03-07 08:32:16,866 44k INFO Losses: [2.7734010219573975, 2.0934243202209473, 6.7789764404296875, 15.232510566711426, 0.914120078086853], step: 13000, lr: 9.940175912662009e-05
2023-03-07 08:33:11,480 44k INFO ====> Epoch: 47, cost 253.93 s
2023-03-07 08:35:10,662 44k INFO Train Epoch: 48 [48%]
2023-03-07 08:35:10,664 44k INFO Losses: [2.691950798034668, 2.1277260780334473, 7.920324325561523, 13.717920303344727, 1.3819385766983032], step: 13200, lr: 9.938933390672926e-05
2023-03-07 08:37:09,748 44k INFO ====> Epoch: 48, cost 238.27 s
2023-03-07 08:38:04,314 44k INFO Train Epoch: 49 [20%]
2023-03-07 08:38:04,318 44k INFO Losses: [2.7675859928131104, 1.8623976707458496, 6.453995704650879, 16.64856719970703, 0.9921563267707825], step: 13400, lr: 9.937691023999092e-05
2023-03-07 08:40:50,430 44k INFO Train Epoch: 49 [92%]
2023-03-07 08:40:50,431 44k INFO Losses: [2.4808473587036133, 2.154651165008545, 8.632558822631836, 18.903560638427734, 0.9644693732261658], step: 13600, lr: 9.937691023999092e-05
2023-03-07 08:40:56,969 44k INFO Saving model and optimizer state at iteration 49 to ./logs/44k/G_13600.pth
2023-03-07 08:40:59,618 44k INFO Saving model and optimizer state at iteration 49 to ./logs/44k/D_13600.pth
2023-03-07 08:41:22,711 44k INFO ====> Epoch: 49, cost 252.96 s
2023-03-07 08:43:56,519 44k INFO Train Epoch: 50 [64%]
2023-03-07 08:43:56,521 44k INFO Losses: [2.6491880416870117, 2.3061861991882324, 7.480834007263184, 15.16545295715332, 1.0035150051116943], step: 13800, lr: 9.936448812621091e-05
2023-03-07 08:45:18,263 44k INFO ====> Epoch: 50, cost 235.55 s
2023-03-07 08:46:47,637 44k INFO Train Epoch: 51 [36%]
2023-03-07 08:46:47,639 44k INFO Losses: [2.6429011821746826, 1.959134817123413, 5.175213813781738, 12.719242095947266, 1.3731069564819336], step: 14000, lr: 9.935206756519513e-05
2023-03-07 08:49:14,046 44k INFO ====> Epoch: 51, cost 235.78 s
2023-03-07 08:49:40,406 44k INFO Train Epoch: 52 [8%]
2023-03-07 08:49:40,407 44k INFO Losses: [2.686530113220215, 2.052610158920288, 8.319181442260742, 17.81112289428711, 0.8839596509933472], step: 14200, lr: 9.933964855674948e-05
2023-03-07 08:52:24,624 44k INFO Train Epoch: 52 [80%]
2023-03-07 08:52:24,626 44k INFO Losses: [2.588836669921875, 2.321237325668335, 10.043384552001953, 17.284605026245117, 1.2321714162826538], step: 14400, lr: 9.933964855674948e-05
2023-03-07 08:52:32,524 44k INFO Saving model and optimizer state at iteration 52 to ./logs/44k/G_14400.pth
2023-03-07 08:52:34,970 44k INFO Saving model and optimizer state at iteration 52 to ./logs/44k/D_14400.pth
2023-03-07 08:53:26,160 44k INFO ====> Epoch: 52, cost 252.11 s
2023-03-07 08:55:32,040 44k INFO Train Epoch: 53 [52%]
2023-03-07 08:55:32,042 44k INFO Losses: [2.6045517921447754, 2.1107609272003174, 10.538167953491211, 18.611204147338867, 1.049749732017517], step: 14600, lr: 9.932723110067987e-05
2023-03-07 08:57:21,114 44k INFO ====> Epoch: 53, cost 234.95 s
2023-03-07 08:58:23,307 44k INFO Train Epoch: 54 [24%]
2023-03-07 08:58:23,309 44k INFO Losses: [2.619805097579956, 2.140129566192627, 8.988656997680664, 15.093971252441406, 1.0072216987609863], step: 14800, lr: 9.931481519679228e-05
2023-03-07 09:01:09,007 44k INFO Train Epoch: 54 [96%]
2023-03-07 09:01:09,009 44k INFO Losses: [2.596301555633545, 1.9555364847183228, 7.690006256103516, 16.235876083374023, 1.3096226453781128], step: 15000, lr: 9.931481519679228e-05
2023-03-07 09:01:18,871 44k INFO ====> Epoch: 54, cost 237.76 s
2023-03-07 09:04:01,455 44k INFO Train Epoch: 55 [68%]
2023-03-07 09:04:01,458 44k INFO Losses: [2.5863559246063232, 1.938653588294983, 9.730915069580078, 18.146326065063477, 0.9171246290206909], step: 15200, lr: 9.930240084489267e-05
2023-03-07 09:04:07,882 44k INFO Saving model and optimizer state at iteration 55 to ./logs/44k/G_15200.pth
2023-03-07 09:04:11,606 44k INFO Saving model and optimizer state at iteration 55 to ./logs/44k/D_15200.pth
2023-03-07 09:05:30,753 44k INFO ====> Epoch: 55, cost 251.88 s
2023-03-07 09:07:06,565 44k INFO Train Epoch: 56 [40%]
2023-03-07 09:07:06,567 44k INFO Losses: [2.7119669914245605, 1.9439709186553955, 8.629279136657715, 17.02840232849121, 1.3089628219604492], step: 15400, lr: 9.928998804478705e-05
2023-03-07 09:09:24,874 44k INFO ====> Epoch: 56, cost 234.12 s
2023-03-07 09:09:58,436 44k INFO Train Epoch: 57 [12%]
2023-03-07 09:09:58,436 44k INFO Losses: [2.6091232299804688, 1.9974900484085083, 6.625960350036621, 11.739620208740234, 0.8048025965690613], step: 15600, lr: 9.927757679628145e-05
2023-03-07 09:12:43,565 44k INFO Train Epoch: 57 [83%]
2023-03-07 09:12:43,567 44k INFO Losses: [2.7364017963409424, 2.073594331741333, 5.499330997467041, 12.30970287322998, 0.7131618857383728], step: 15800, lr: 9.927757679628145e-05
2023-03-07 09:13:21,343 44k INFO ====> Epoch: 57, cost 236.47 s
2023-03-07 09:15:34,564 44k INFO Train Epoch: 58 [55%]
2023-03-07 09:15:34,566 44k INFO Losses: [2.558323621749878, 2.145087242126465, 10.05433177947998, 17.508724212646484, 1.1820834875106812], step: 16000, lr: 9.926516709918191e-05
2023-03-07 09:15:40,687 44k INFO Saving model and optimizer state at iteration 58 to ./logs/44k/G_16000.pth
2023-03-07 09:15:43,259 44k INFO Saving model and optimizer state at iteration 58 to ./logs/44k/D_16000.pth
2023-03-07 09:17:30,617 44k INFO ====> Epoch: 58, cost 249.27 s
2023-03-07 09:18:40,527 44k INFO Train Epoch: 59 [27%]
2023-03-07 09:18:40,528 44k INFO Losses: [2.6915173530578613, 2.0221006870269775, 12.357112884521484, 19.153165817260742, 0.8149062991142273], step: 16200, lr: 9.92527589532945e-05
2023-03-07 09:21:24,807 44k INFO Train Epoch: 59 [99%]
2023-03-07 09:21:24,808 44k INFO Losses: [2.5932652950286865, 1.900865077972412, 10.516340255737305, 16.12399673461914, 0.7320185303688049], step: 16400, lr: 9.92527589532945e-05
2023-03-07 09:21:26,727 44k INFO ====> Epoch: 59, cost 236.11 s
2023-03-07 09:24:18,461 44k INFO Train Epoch: 60 [71%]
2023-03-07 09:24:18,462 44k INFO Losses: [2.598914623260498, 2.02502703666687, 9.530688285827637, 15.57526969909668, 1.1604382991790771], step: 16600, lr: 9.924035235842533e-05
2023-03-07 09:25:23,701 44k INFO ====> Epoch: 60, cost 236.97 s
2023-03-07 09:27:10,053 44k INFO Train Epoch: 61 [43%]
2023-03-07 09:27:10,055 44k INFO Losses: [2.5381767749786377, 2.040952444076538, 8.723997116088867, 16.769763946533203, 0.943171501159668], step: 16800, lr: 9.922794731438052e-05
2023-03-07 09:27:15,889 44k INFO Saving model and optimizer state at iteration 61 to ./logs/44k/G_16800.pth
2023-03-07 09:27:19,850 44k INFO Saving model and optimizer state at iteration 61 to ./logs/44k/D_16800.pth
2023-03-07 09:29:34,970 44k INFO ====> Epoch: 61, cost 251.27 s
2023-03-07 09:30:14,528 44k INFO Train Epoch: 62 [15%]
2023-03-07 09:30:14,530 44k INFO Losses: [2.5676074028015137, 2.1760034561157227, 9.546493530273438, 20.224748611450195, 1.0243515968322754], step: 17000, lr: 9.921554382096622e-05
2023-03-07 09:33:00,653 44k INFO Train Epoch: 62 [87%]
2023-03-07 09:33:00,654 44k INFO Losses: [2.630910634994507, 2.0539801120758057, 7.5318169593811035, 17.19669532775879, 0.9078588485717773], step: 17200, lr: 9.921554382096622e-05
2023-03-07 09:33:30,821 44k INFO ====> Epoch: 62, cost 235.85 s
2023-03-07 09:35:53,546 44k INFO Train Epoch: 63 [59%]
2023-03-07 09:35:53,548 44k INFO Losses: [2.7379932403564453, 1.9635565280914307, 7.410027503967285, 16.908191680908203, 1.1460034847259521], step: 17400, lr: 9.92031418779886e-05
2023-03-07 09:37:26,495 44k INFO ====> Epoch: 63, cost 235.67 s
2023-03-07 09:38:45,030 44k INFO Train Epoch: 64 [31%]
2023-03-07 09:38:45,031 44k INFO Losses: [2.6821656227111816, 2.014768123626709, 7.260769367218018, 17.761646270751953, 1.290578842163086], step: 17600, lr: 9.919074148525384e-05
2023-03-07 09:38:52,467 44k INFO Saving model and optimizer state at iteration 64 to ./logs/44k/G_17600.pth
2023-03-07 09:38:54,980 44k INFO Saving model and optimizer state at iteration 64 to ./logs/44k/D_17600.pth
2023-03-07 09:38:57,309 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_9600.pth
2023-03-07 09:38:57,313 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_9600.pth
2023-03-07 09:41:38,450 44k INFO ====> Epoch: 64, cost 251.96 s
2023-03-07 09:41:51,795 44k INFO Train Epoch: 65 [3%]
2023-03-07 09:41:51,796 44k INFO Losses: [2.6174774169921875, 2.1231558322906494, 8.38846492767334, 17.853803634643555, 1.2050317525863647], step: 17800, lr: 9.917834264256819e-05
2023-03-07 09:44:38,684 44k INFO Train Epoch: 65 [75%]
2023-03-07 09:44:38,685 44k INFO Losses: [2.4815263748168945, 2.219257354736328, 10.560876846313477, 18.788105010986328, 0.877513587474823], step: 18000, lr: 9.917834264256819e-05
2023-03-07 09:45:36,533 44k INFO ====> Epoch: 65, cost 238.08 s
2023-03-07 09:47:30,901 44k INFO Train Epoch: 66 [47%]
2023-03-07 09:47:30,903 44k INFO Losses: [2.6850380897521973, 1.9703338146209717, 7.891664981842041, 15.571673393249512, 1.0549273490905762], step: 18200, lr: 9.916594534973787e-05
2023-03-07 09:49:32,763 44k INFO ====> Epoch: 66, cost 236.23 s
2023-03-07 09:50:21,817 44k INFO Train Epoch: 67 [19%]
2023-03-07 09:50:21,818 44k INFO Losses: [2.668536901473999, 2.1673455238342285, 7.9723801612854, 16.044538497924805, 0.9959537386894226], step: 18400, lr: 9.915354960656915e-05
2023-03-07 09:50:27,778 44k INFO Saving model and optimizer state at iteration 67 to ./logs/44k/G_18400.pth
2023-03-07 09:50:33,262 44k INFO Saving model and optimizer state at iteration 67 to ./logs/44k/D_18400.pth
2023-03-07 09:50:36,269 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_10400.pth
2023-03-07 09:50:36,270 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_10400.pth
2023-03-07 09:53:25,898 44k INFO Train Epoch: 67 [91%]
2023-03-07 09:53:25,900 44k INFO Losses: [2.6659655570983887, 2.1085245609283447, 9.08434009552002, 17.412464141845703, 0.9106059074401855], step: 18600, lr: 9.915354960656915e-05
2023-03-07 09:53:47,728 44k INFO ====> Epoch: 67, cost 254.96 s
2023-03-07 09:56:20,039 44k INFO Train Epoch: 68 [63%]
2023-03-07 09:56:20,041 44k INFO Losses: [2.5655832290649414, 2.086219310760498, 6.493435382843018, 16.1307373046875, 1.0131218433380127], step: 18800, lr: 9.914115541286833e-05
2023-03-07 09:57:45,809 44k INFO ====> Epoch: 68, cost 238.08 s
2023-03-07 09:59:14,563 44k INFO Train Epoch: 69 [35%]
2023-03-07 09:59:14,565 44k INFO Losses: [2.6375253200531006, 2.2231435775756836, 9.13197135925293, 15.079229354858398, 1.1442131996154785], step: 19000, lr: 9.912876276844171e-05
2023-03-07 10:01:44,399 44k INFO ====> Epoch: 69, cost 238.59 s
2023-03-07 10:02:06,036 44k INFO Train Epoch: 70 [6%]
2023-03-07 10:02:06,037 44k INFO Losses: [2.7699332237243652, 1.939623236656189, 5.967057228088379, 11.710137367248535, 0.8021688461303711], step: 19200, lr: 9.911637167309565e-05
2023-03-07 10:02:13,207 44k INFO Saving model and optimizer state at iteration 70 to ./logs/44k/G_19200.pth
2023-03-07 10:02:15,892 44k INFO Saving model and optimizer state at iteration 70 to ./logs/44k/D_19200.pth
2023-03-07 10:02:18,335 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_11200.pth
2023-03-07 10:02:18,336 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_11200.pth
2023-03-07 10:05:09,043 44k INFO Train Epoch: 70 [78%]
2023-03-07 10:05:09,044 44k INFO Losses: [2.685474395751953, 2.047402858734131, 6.953428745269775, 17.58191680908203, 1.1085171699523926], step: 19400, lr: 9.911637167309565e-05
2023-03-07 10:05:58,937 44k INFO ====> Epoch: 70, cost 254.54 s
2023-03-07 10:08:02,508 44k INFO Train Epoch: 71 [50%]
2023-03-07 10:08:02,510 44k INFO Losses: [2.6824569702148438, 1.9385805130004883, 8.192475318908691, 16.1959285736084, 1.277029275894165], step: 19600, lr: 9.910398212663652e-05
2023-03-07 10:09:54,933 44k INFO ====> Epoch: 71, cost 236.00 s
2023-03-07 10:10:55,261 44k INFO Train Epoch: 72 [22%]
2023-03-07 10:10:55,263 44k INFO Losses: [2.6745386123657227, 2.0968589782714844, 9.924616813659668, 16.880382537841797, 1.3404916524887085], step: 19800, lr: 9.909159412887068e-05
2023-03-07 10:13:41,243 44k INFO Train Epoch: 72 [94%]
2023-03-07 10:13:41,244 44k INFO Losses: [2.5880956649780273, 2.3576791286468506, 11.393799781799316, 16.47504425048828, 1.1941883563995361], step: 20000, lr: 9.909159412887068e-05
2023-03-07 10:13:49,390 44k INFO Saving model and optimizer state at iteration 72 to ./logs/44k/G_20000.pth
2023-03-07 10:13:52,016 44k INFO Saving model and optimizer state at iteration 72 to ./logs/44k/D_20000.pth
2023-03-07 10:13:54,241 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_12000.pth
2023-03-07 10:13:54,243 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_12000.pth
2023-03-07 10:14:10,641 44k INFO ====> Epoch: 72, cost 255.71 s
2023-03-07 10:21:21,741 44k INFO {'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 6536180, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': True, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'nahida': 0, 'pecorine': 1, 'ayaka': 2}, 'model_dir': './logs/44k'}
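Editor's note on the lr values in this log: they follow the exponential schedule defined in the config above (learning_rate 0.0001 multiplied by lr_decay 0.999875 once per epoch). A minimal sketch, assuming a plain per-epoch exponential decay (the function name is illustrative, not from the training code):

```python
def lr_at_epoch(epoch: int, base_lr: float = 1e-4, decay: float = 0.999875) -> float:
    """Exponential LR schedule: multiply the base rate by `decay` once per epoch."""
    return base_lr * decay ** epoch

# The epoch-42 entries in this log report lr: 9.94639085301583e-05,
# which corresponds to 43 decay steps from the initial rate.
print(lr_at_epoch(43))
```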
2023-03-07 10:21:41,739 44k INFO Loaded checkpoint './logs/44k/G_20000.pth' (iteration 72)
2023-03-07 10:21:51,350 44k INFO Loaded checkpoint './logs/44k/D_20000.pth' (iteration 72)
2023-03-07 10:23:17,367 44k INFO Train Epoch: 72 [22%]
2023-03-07 10:23:17,368 44k INFO Losses: [2.624009847640991, 2.2527122497558594, 9.984403610229492, 16.14898681640625, 1.0251562595367432], step: 19800, lr: 9.907920767960457e-05
2023-03-07 10:26:41,828 44k INFO Train Epoch: 72 [94%]
2023-03-07 10:26:41,829 44k INFO Losses: [2.77197527885437, 2.0887768268585205, 8.704852104187012, 17.044437408447266, 0.9959118962287903], step: 20000, lr: 9.907920767960457e-05
2023-03-07 10:26:54,144 44k INFO Saving model and optimizer state at iteration 72 to ./logs/44k/G_20000.pth
2023-03-07 10:26:58,186 44k INFO Saving model and optimizer state at iteration 72 to ./logs/44k/D_20000.pth
2023-03-07 10:27:00,629 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_12800.pth
2023-03-07 10:27:00,631 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_13600.pth
2023-03-07 10:27:00,633 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_14400.pth
2023-03-07 10:27:00,634 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_15200.pth
2023-03-07 10:27:00,636 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_16000.pth
2023-03-07 10:27:00,638 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_16800.pth
2023-03-07 10:27:00,640 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_17600.pth
2023-03-07 10:27:00,642 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_12800.pth
2023-03-07 10:27:00,644 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_13600.pth
2023-03-07 10:27:00,645 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_14400.pth
2023-03-07 10:27:00,647 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_15200.pth
2023-03-07 10:27:00,649 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_16000.pth
2023-03-07 10:27:00,650 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_16800.pth
2023-03-07 10:27:00,652 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_17600.pth
2023-03-07 10:27:20,527 44k INFO ====> Epoch: 72, cost 358.79 s
2023-03-07 10:30:03,538 44k INFO Train Epoch: 73 [66%]
2023-03-07 10:30:03,540 44k INFO Losses: [2.6611201763153076, 2.099795341491699, 11.716313362121582, 14.757657051086426, 0.9594847559928894], step: 20200, lr: 9.906682277864462e-05
2023-03-07 10:31:24,109 44k INFO ====> Epoch: 73, cost 243.58 s
2023-03-07 10:33:00,986 44k INFO Train Epoch: 74 [38%]
2023-03-07 10:33:00,988 44k INFO Losses: [2.648531436920166, 2.043774366378784, 9.4091796875, 15.233405113220215, 1.0001845359802246], step: 20400, lr: 9.905443942579728e-05
2023-03-07 10:35:28,259 44k INFO ====> Epoch: 74, cost 244.15 s
2023-03-07 10:35:59,969 44k INFO Train Epoch: 75 [10%]
2023-03-07 10:35:59,971 44k INFO Losses: [2.637782096862793, 2.2026848793029785, 8.699969291687012, 17.0012264251709, 0.8430761694908142], step: 20600, lr: 9.904205762086905e-05
2023-03-07 10:38:50,869 44k INFO Train Epoch: 75 [82%]
2023-03-07 10:38:50,871 44k INFO Losses: [2.718297243118286, 2.1846415996551514, 10.339932441711426, 17.13775634765625, 0.8830113410949707], step: 20800, lr: 9.904205762086905e-05
2023-03-07 10:38:57,677 44k INFO Saving model and optimizer state at iteration 75 to ./logs/44k/G_20800.pth
2023-03-07 10:39:00,413 44k INFO Saving model and optimizer state at iteration 75 to ./logs/44k/D_20800.pth
2023-03-07 10:39:03,121 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_18400.pth
2023-03-07 10:39:03,124 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_18400.pth
2023-03-07 10:39:46,009 44k INFO {'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 6536180, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': True, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 5}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'nahida': 0, 'pecorine': 1, 'ayaka': 2}, 'model_dir': './logs/44k'}
2023-03-07 10:39:58,491 44k INFO Loaded checkpoint './logs/44k/G_20800.pth' (iteration 75)
2023-03-07 10:39:59,953 44k INFO Loaded checkpoint './logs/44k/D_20800.pth' (iteration 75)
2023-03-07 10:40:47,782 44k INFO Train Epoch: 75 [10%]
2023-03-07 10:40:47,784 44k INFO Losses: [2.6162524223327637, 2.404218912124634, 10.315699577331543, 17.09986114501953, 0.8618772029876709], step: 20600, lr: 9.902967736366644e-05
2023-03-07 10:44:07,452 44k INFO Train Epoch: 75 [82%]
2023-03-07 10:44:07,453 44k INFO Losses: [2.5940821170806885, 2.2668755054473877, 11.375072479248047, 15.475739479064941, 0.6878691911697388], step: 20800, lr: 9.902967736366644e-05
2023-03-07 10:44:21,073 44k INFO Saving model and optimizer state at iteration 75 to ./logs/44k/G_20800.pth
2023-03-07 10:44:23,754 44k INFO Saving model and optimizer state at iteration 75 to ./logs/44k/D_20800.pth
2023-03-07 10:45:17,743 44k INFO ====> Epoch: 75, cost 331.74 s
2023-03-07 10:47:31,470 44k INFO Train Epoch: 76 [54%]
2023-03-07 10:47:31,472 44k INFO Losses: [2.473663806915283, 2.6710684299468994, 7.955134868621826, 17.815927505493164, 0.7153506278991699], step: 21000, lr: 9.901729865399597e-05
2023-03-07 10:49:19,632 44k INFO ====> Epoch: 76, cost 241.89 s
2023-03-07 10:50:26,869 44k INFO Train Epoch: 77 [26%]
2023-03-07 10:50:26,871 44k INFO Losses: [2.648346424102783, 1.853987216949463, 13.134393692016602, 17.38188362121582, 0.8923838138580322], step: 21200, lr: 9.900492149166423e-05
2023-03-07 10:53:18,549 44k INFO Train Epoch: 77 [98%]
2023-03-07 10:53:18,551 44k INFO Losses: [2.6795389652252197, 2.1009273529052734, 6.716603755950928, 15.290496826171875, 1.1096683740615845], step: 21400, lr: 9.900492149166423e-05
2023-03-07 10:53:24,017 44k INFO ====> Epoch: 77, cost 244.39 s
2023-03-07 10:56:16,070 44k INFO Train Epoch: 78 [70%]
2023-03-07 10:56:16,072 44k INFO Losses: [2.4683101177215576, 2.2954437732696533, 11.113466262817383, 16.59324073791504, 0.6977340579032898], step: 21600, lr: 9.899254587647776e-05
2023-03-07 10:56:24,031 44k INFO Saving model and optimizer state at iteration 78 to ./logs/44k/G_21600.pth
2023-03-07 10:56:26,563 44k INFO Saving model and optimizer state at iteration 78 to ./logs/44k/D_21600.pth
2023-03-07 10:57:42,472 44k INFO ====> Epoch: 78, cost 258.46 s
2023-03-07 10:59:26,959 44k INFO Train Epoch: 79 [42%]
2023-03-07 10:59:26,961 44k INFO Losses: [2.779606819152832, 1.9818193912506104, 7.328914165496826, 14.22529125213623, 0.8128287196159363], step: 21800, lr: 9.89801718082432e-05
2023-03-07 11:01:45,835 44k INFO ====> Epoch: 79, cost 243.36 s
2023-03-07 11:02:25,787 44k INFO Train Epoch: 80 [14%]
2023-03-07 11:02:25,789 44k INFO Losses: [2.5567448139190674, 2.0336644649505615, 7.533416748046875, 17.515047073364258, 0.8820903301239014], step: 22000, lr: 9.896779928676716e-05
2023-03-07 11:05:17,436 44k INFO Train Epoch: 80 [86%]
2023-03-07 11:05:17,438 44k INFO Losses: [2.572031021118164, 2.374467372894287, 9.081830978393555, 19.22201156616211, 0.9428823590278625], step: 22200, lr: 9.896779928676716e-05
2023-03-07 11:05:51,366 44k INFO ====> Epoch: 80, cost 245.53 s
2023-03-07 11:08:17,156 44k INFO Train Epoch: 81 [58%]
2023-03-07 11:08:17,158 44k INFO Losses: [2.56310772895813, 2.263118267059326, 10.707916259765625, 20.664592742919922, 0.8022601008415222], step: 22400, lr: 9.895542831185631e-05
2023-03-07 11:08:23,281 44k INFO Saving model and optimizer state at iteration 81 to ./logs/44k/G_22400.pth
2023-03-07 11:08:26,033 44k INFO Saving model and optimizer state at iteration 81 to ./logs/44k/D_22400.pth
2023-03-07 11:10:11,368 44k INFO ====> Epoch: 81, cost 260.00 s
2023-03-07 11:11:30,753 44k INFO Train Epoch: 82 [29%]
2023-03-07 11:11:30,755 44k INFO Losses: [2.5960514545440674, 2.072114944458008, 6.2542572021484375, 15.661253929138184, 0.8077398538589478], step: 22600, lr: 9.894305888331732e-05
2023-03-07 11:14:17,204 44k INFO ====> Epoch: 82, cost 245.84 s
2023-03-07 11:14:27,227 44k INFO Train Epoch: 83 [1%]
2023-03-07 11:14:27,228 44k INFO Losses: [2.729780673980713, 2.178766965866089, 10.639708518981934, 17.26081657409668, 0.8835186958312988], step: 22800, lr: 9.89306910009569e-05
2023-03-07 11:17:18,400 44k INFO Train Epoch: 83 [73%]
2023-03-07 11:17:18,402 44k INFO Losses: [2.7299461364746094, 1.77739417552948, 4.7329559326171875, 13.06923770904541, 1.0410281419754028], step: 23000, lr: 9.89306910009569e-05
2023-03-07 11:18:21,597 44k INFO ====> Epoch: 83, cost 244.39 s
2023-03-07 11:20:14,269 44k INFO Train Epoch: 84 [45%]
2023-03-07 11:20:14,271 44k INFO Losses: [2.7867848873138428, 1.8935874700546265, 9.714607238769531, 17.693798065185547, 1.142695426940918], step: 23200, lr: 9.891832466458178e-05
2023-03-07 11:20:20,920 44k INFO Saving model and optimizer state at iteration 84 to ./logs/44k/G_23200.pth
2023-03-07 11:20:24,294 44k INFO Saving model and optimizer state at iteration 84 to ./logs/44k/D_23200.pth
2023-03-07 11:20:26,567 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_19200.pth
2023-03-07 11:20:26,570 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_19200.pth
2023-03-07 11:22:41,660 44k INFO ====> Epoch: 84, cost 260.06 s
2023-03-07 11:23:28,338 44k INFO Train Epoch: 85 [17%]
2023-03-07 11:23:28,339 44k INFO Losses: [2.6299381256103516, 2.0690574645996094, 9.850110054016113, 15.331695556640625, 0.8573498129844666], step: 23400, lr: 9.89059598739987e-05
2023-03-07 11:26:17,058 44k INFO Train Epoch: 85 [89%]
2023-03-07 11:26:17,060 44k INFO Losses: [2.6537418365478516, 2.1390793323516846, 12.62900161743164, 20.696577072143555, 0.6748669147491455], step: 23600, lr: 9.89059598739987e-05
2023-03-07 11:26:43,158 44k INFO ====> Epoch: 85, cost 241.50 s
2023-03-07 11:29:14,847 44k INFO Train Epoch: 86 [61%]
2023-03-07 11:29:14,849 44k INFO Losses: [2.576536178588867, 2.384481906890869, 7.012294769287109, 15.218843460083008, 0.9203285574913025], step: 23800, lr: 9.889359662901445e-05
2023-03-07 11:30:45,771 44k INFO ====> Epoch: 86, cost 242.61 s
2023-03-07 11:32:11,603 44k INFO Train Epoch: 87 [33%]
2023-03-07 11:32:11,605 44k INFO Losses: [2.7623422145843506, 1.8711085319519043, 7.922089576721191, 14.498408317565918, 0.9695112705230713], step: 24000, lr: 9.888123492943583e-05
2023-03-07 11:32:18,223 44k INFO Saving model and optimizer state at iteration 87 to ./logs/44k/G_24000.pth
2023-03-07 11:32:20,947 44k INFO Saving model and optimizer state at iteration 87 to ./logs/44k/D_24000.pth
2023-03-07 11:32:23,545 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_20000.pth
2023-03-07 11:32:23,548 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_20000.pth
2023-03-07 11:35:04,339 44k INFO ====> Epoch: 87, cost 258.57 s
2023-03-07 11:35:23,369 44k INFO Train Epoch: 88 [5%]
2023-03-07 11:35:23,371 44k INFO Losses: [2.6211295127868652, 2.044856071472168, 5.203814506530762, 14.260181427001953, 0.7971086502075195], step: 24200, lr: 9.886887477506964e-05
2023-03-07 11:38:13,029 44k INFO Train Epoch: 88 [77%]
2023-03-07 11:38:13,030 44k INFO Losses: [2.681881904602051, 1.689557671546936, 6.003676414489746, 16.262372970581055, 0.9090717434883118], step: 24400, lr: 9.886887477506964e-05
2023-03-07 11:39:07,383 44k INFO ====> Epoch: 88, cost 243.04 s
2023-03-07 11:41:08,928 44k INFO Train Epoch: 89 [49%]
2023-03-07 11:41:08,930 44k INFO Losses: [2.4905405044555664, 2.4597482681274414, 10.432656288146973, 15.523944854736328, 0.6022452116012573], step: 24600, lr: 9.885651616572276e-05
2023-03-07 11:43:09,066 44k INFO ====> Epoch: 89, cost 241.68 s
2023-03-07 11:44:05,804 44k INFO Train Epoch: 90 [21%]
2023-03-07 11:44:05,805 44k INFO Losses: [2.536304473876953, 2.0080478191375732, 6.283693790435791, 15.755486488342285, 0.9408957958221436], step: 24800, lr: 9.884415910120204e-05
2023-03-07 11:44:11,853 44k INFO Saving model and optimizer state at iteration 90 to ./logs/44k/G_24800.pth
2023-03-07 11:44:16,024 44k INFO Saving model and optimizer state at iteration 90 to ./logs/44k/D_24800.pth
2023-03-07 11:44:18,256 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_20800.pth
2023-03-07 11:44:18,262 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_20800.pth
2023-03-07 11:47:09,457 44k INFO Train Epoch: 90 [93%]
2023-03-07 11:47:09,459 44k INFO Losses: [2.6557815074920654, 2.2029154300689697, 10.919027328491211, 16.953351974487305, 1.2213168144226074], step: 25000, lr: 9.884415910120204e-05
2023-03-07 11:47:26,740 44k INFO ====> Epoch: 90, cost 257.67 s
2023-03-07 11:50:08,219 44k INFO Train Epoch: 91 [65%]
2023-03-07 11:50:08,221 44k INFO Losses: [2.643355131149292, 1.9500988721847534, 8.487619400024414, 13.657479286193848, 1.3830907344818115], step: 25200, lr: 9.883180358131438e-05
2023-03-07 11:51:30,819 44k INFO ====> Epoch: 91, cost 244.08 s
2023-03-07 11:53:03,455 44k INFO Train Epoch: 92 [37%]
2023-03-07 11:53:03,458 44k INFO Losses: [2.465785026550293, 2.340013027191162, 11.1765775680542, 18.482587814331055, 1.4818311929702759], step: 25400, lr: 9.881944960586671e-05 |
|
2023-03-07 11:55:31,994 44k INFO ====> Epoch: 92, cost 241.17 s |
|
2023-03-07 11:55:59,586 44k INFO Train Epoch: 93 [9%] |
|
2023-03-07 11:55:59,588 44k INFO Losses: [2.5963516235351562, 2.162351369857788, 11.699685096740723, 17.511823654174805, 0.7361944317817688], step: 25600, lr: 9.880709717466598e-05 |
|
2023-03-07 11:56:06,174 44k INFO Saving model and optimizer state at iteration 93 to ./logs/44k/G_25600.pth |
|
2023-03-07 11:56:08,651 44k INFO Saving model and optimizer state at iteration 93 to ./logs/44k/D_25600.pth |
|
2023-03-07 11:56:10,961 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_21600.pth |
|
2023-03-07 11:56:10,965 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_21600.pth |
|
2023-03-07 11:59:05,873 44k INFO Train Epoch: 93 [81%] |
|
2023-03-07 11:59:05,874 44k INFO Losses: [2.417661428451538, 2.158268690109253, 9.552419662475586, 18.211334228515625, 0.5788415670394897], step: 25800, lr: 9.880709717466598e-05 |
|
2023-03-07 11:59:51,672 44k INFO ====> Epoch: 93, cost 259.68 s |
|
2023-03-07 12:02:03,880 44k INFO Train Epoch: 94 [53%] |
|
2023-03-07 12:02:03,881 44k INFO Losses: [2.475259304046631, 2.297731399536133, 7.828700065612793, 17.018850326538086, 1.3255740404129028], step: 26000, lr: 9.879474628751914e-05 |
|
2023-03-07 12:03:55,351 44k INFO ====> Epoch: 94, cost 243.68 s |
|
2023-03-07 12:05:00,951 44k INFO Train Epoch: 95 [24%]
2023-03-07 12:05:00,954 44k INFO Losses: [2.504439353942871, 2.1617119312286377, 9.756885528564453, 18.6434383392334, 1.0004949569702148], step: 26200, lr: 9.87823969442332e-05
2023-03-07 12:07:52,201 44k INFO Train Epoch: 95 [96%]
2023-03-07 12:07:52,203 44k INFO Losses: [2.8341360092163086, 1.7737336158752441, 10.919249534606934, 16.648649215698242, 1.1253951787948608], step: 26400, lr: 9.87823969442332e-05
2023-03-07 12:07:59,577 44k INFO Saving model and optimizer state at iteration 95 to ./logs/44k/G_26400.pth
2023-03-07 12:08:02,674 44k INFO Saving model and optimizer state at iteration 95 to ./logs/44k/D_26400.pth
2023-03-07 12:08:05,456 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_22400.pth
2023-03-07 12:08:05,468 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_22400.pth
2023-03-07 12:08:13,402 44k INFO ====> Epoch: 95, cost 258.05 s
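Each `Saving model` pair is followed by a matching `.. Free up space by deleting ckpt` pair: with saves every 800 steps, deleting G_22400 right after saving G_26400 leaves a rolling window of five generator (and five discriminator) checkpoints on disk. A hypothetical sketch of such rotation, assuming files named `<prefix>_<step>.pth`; this is an illustration, not the project's actual implementation:

```python
import os
import re

def rotate_checkpoints(ckpt_dir, prefix, keep=5):
    """Keep only the `keep` newest '<prefix>_<step>.pth' files in ckpt_dir.

    Hypothetical helper sketching the rolling-window behaviour behind the
    '.. Free up space by deleting ckpt' log lines above.
    """
    pattern = re.compile(r"^" + re.escape(prefix) + r"_(\d+)\.pth$")
    steps = sorted(
        int(m.group(1))
        for m in (pattern.match(name) for name in os.listdir(ckpt_dir))
        if m
    )
    removed = []
    for step in steps[:-keep]:  # everything older than the newest `keep`
        path = os.path.join(ckpt_dir, "%s_%d.pth" % (prefix, step))
        os.remove(path)
        removed.append(path)
    return removed
```

Sorting numerically by step (rather than lexically by filename) matters once step counts cross a digit boundary, e.g. G_9600 vs. G_10400.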
|
2023-03-07 12:11:05,847 44k INFO Train Epoch: 96 [68%]
2023-03-07 12:11:05,849 44k INFO Losses: [2.671950578689575, 1.869432806968689, 5.500725269317627, 11.785305976867676, 0.9629312753677368], step: 26600, lr: 9.877004914461517e-05
2023-03-07 12:12:21,246 44k INFO ====> Epoch: 96, cost 247.84 s
2023-03-07 12:14:02,854 44k INFO Train Epoch: 97 [40%]
2023-03-07 12:14:02,857 44k INFO Losses: [2.476696014404297, 2.61649489402771, 12.454306602478027, 18.133182525634766, 0.8170329332351685], step: 26800, lr: 9.875770288847208e-05
2023-03-07 12:16:25,207 44k INFO ====> Epoch: 97, cost 243.96 s
2023-03-07 12:17:00,715 44k INFO Train Epoch: 98 [12%]
2023-03-07 12:17:00,717 44k INFO Losses: [2.572542190551758, 2.131918430328369, 9.89295768737793, 19.846866607666016, 0.5132474303245544], step: 27000, lr: 9.874535817561101e-05
2023-03-07 12:19:51,475 44k INFO Train Epoch: 98 [84%]
2023-03-07 12:19:51,477 44k INFO Losses: [2.6798222064971924, 2.0035128593444824, 9.687154769897461, 17.422258377075195, 1.0687354803085327], step: 27200, lr: 9.874535817561101e-05
2023-03-07 12:19:59,745 44k INFO Saving model and optimizer state at iteration 98 to ./logs/44k/G_27200.pth
2023-03-07 12:20:02,288 44k INFO Saving model and optimizer state at iteration 98 to ./logs/44k/D_27200.pth
2023-03-07 12:20:04,672 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_23200.pth
2023-03-07 12:20:04,674 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_23200.pth
2023-03-07 12:20:45,504 44k INFO ====> Epoch: 98, cost 260.30 s
2023-03-07 12:23:04,786 44k INFO Train Epoch: 99 [56%]
2023-03-07 12:23:04,789 44k INFO Losses: [2.547023296356201, 2.15366530418396, 8.34714412689209, 14.82181167602539, 1.293775200843811], step: 27400, lr: 9.873301500583906e-05
2023-03-07 12:24:49,349 44k INFO ====> Epoch: 99, cost 243.85 s
2023-03-07 12:26:03,274 44k INFO Train Epoch: 100 [28%]
2023-03-07 12:26:03,276 44k INFO Losses: [2.511744976043701, 2.2077744007110596, 11.229504585266113, 17.388071060180664, 1.197561502456665], step: 27600, lr: 9.872067337896332e-05
2023-03-07 12:28:52,932 44k INFO ====> Epoch: 100, cost 243.58 s
2023-03-07 12:29:01,492 44k INFO Train Epoch: 101 [0%]
2023-03-07 12:29:01,493 44k INFO Losses: [2.882242202758789, 1.949296236038208, 4.996922016143799, 15.366400718688965, 0.5618728399276733], step: 27800, lr: 9.870833329479095e-05
2023-03-07 12:31:52,104 44k INFO Train Epoch: 101 [72%]
2023-03-07 12:31:52,107 44k INFO Losses: [2.670868158340454, 2.0066986083984375, 8.438194274902344, 17.876659393310547, 0.915682852268219], step: 28000, lr: 9.870833329479095e-05
2023-03-07 12:31:59,981 44k INFO Saving model and optimizer state at iteration 101 to ./logs/44k/G_28000.pth
2023-03-07 12:32:02,418 44k INFO Saving model and optimizer state at iteration 101 to ./logs/44k/D_28000.pth
2023-03-07 12:32:04,632 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_24000.pth
2023-03-07 12:32:04,637 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_24000.pth
2023-03-07 12:33:15,062 44k INFO ====> Epoch: 101, cost 262.13 s
2023-03-07 12:35:04,700 44k INFO Train Epoch: 102 [44%]
2023-03-07 12:35:04,702 44k INFO Losses: [2.568235158920288, 2.1914169788360596, 8.721714973449707, 17.62751579284668, 0.7303943037986755], step: 28200, lr: 9.86959947531291e-05
2023-03-07 12:37:17,176 44k INFO ====> Epoch: 102, cost 242.11 s
2023-03-07 12:38:00,720 44k INFO Train Epoch: 103 [16%]
2023-03-07 12:38:00,722 44k INFO Losses: [2.6602561473846436, 2.117055654525757, 11.561266899108887, 17.47982406616211, 1.185001254081726], step: 28400, lr: 9.868365775378495e-05
2023-03-07 12:40:50,665 44k INFO Train Epoch: 103 [88%]
2023-03-07 12:40:50,667 44k INFO Losses: [2.645419120788574, 1.9627279043197632, 8.640616416931152, 15.662515640258789, 0.8400827050209045], step: 28600, lr: 9.868365775378495e-05
2023-03-07 12:41:19,392 44k INFO ====> Epoch: 103, cost 242.22 s
2023-03-07 12:43:48,184 44k INFO Train Epoch: 104 [60%]
2023-03-07 12:43:48,185 44k INFO Losses: [2.6283035278320312, 2.0546162128448486, 8.193836212158203, 14.83495044708252, 0.6911525130271912], step: 28800, lr: 9.867132229656573e-05
2023-03-07 12:43:54,185 44k INFO Saving model and optimizer state at iteration 104 to ./logs/44k/G_28800.pth
2023-03-07 12:43:57,124 44k INFO Saving model and optimizer state at iteration 104 to ./logs/44k/D_28800.pth
2023-03-07 12:43:59,912 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_24800.pth
2023-03-07 12:43:59,915 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_24800.pth
2023-03-07 12:45:37,281 44k INFO ====> Epoch: 104, cost 257.89 s
2023-03-07 12:46:59,314 44k INFO Train Epoch: 105 [32%]
2023-03-07 12:46:59,317 44k INFO Losses: [2.6218605041503906, 1.9709086418151855, 7.031970024108887, 12.128981590270996, 1.3576488494873047], step: 29000, lr: 9.865898838127865e-05
2023-03-07 12:49:41,913 44k INFO ====> Epoch: 105, cost 244.63 s
2023-03-07 12:49:58,465 44k INFO Train Epoch: 106 [4%]
2023-03-07 12:49:58,467 44k INFO Losses: [2.6090340614318848, 2.2641139030456543, 9.675869941711426, 17.124406814575195, 0.6704288125038147], step: 29200, lr: 9.864665600773098e-05
2023-03-07 12:52:48,272 44k INFO Train Epoch: 106 [76%]
2023-03-07 12:52:48,275 44k INFO Losses: [2.541085720062256, 2.2691619396209717, 7.870415687561035, 15.166110038757324, 0.7620126008987427], step: 29400, lr: 9.864665600773098e-05
2023-03-07 12:53:46,355 44k INFO ====> Epoch: 106, cost 244.44 s
2023-03-07 12:55:45,854 44k INFO Train Epoch: 107 [47%]
2023-03-07 12:55:45,858 44k INFO Losses: [2.485736846923828, 2.171243667602539, 11.848072052001953, 18.230337142944336, 1.0845019817352295], step: 29600, lr: 9.863432517573002e-05
2023-03-07 12:55:53,491 44k INFO Saving model and optimizer state at iteration 107 to ./logs/44k/G_29600.pth
2023-03-07 12:55:55,903 44k INFO Saving model and optimizer state at iteration 107 to ./logs/44k/D_29600.pth
2023-03-07 12:55:58,293 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_25600.pth
2023-03-07 12:55:58,297 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_25600.pth
2023-03-07 12:58:05,803 44k INFO ====> Epoch: 107, cost 259.45 s
2023-03-07 12:58:57,546 44k INFO Train Epoch: 108 [19%]
2023-03-07 12:58:57,548 44k INFO Losses: [2.517256736755371, 1.902040719985962, 11.825295448303223, 17.26810646057129, 0.6904833316802979], step: 29800, lr: 9.862199588508305e-05
2023-03-07 13:01:49,066 44k INFO Train Epoch: 108 [91%]
2023-03-07 13:01:49,068 44k INFO Losses: [2.5995163917541504, 2.390113592147827, 8.344016075134277, 17.042997360229492, 1.071793556213379], step: 30000, lr: 9.862199588508305e-05
2023-03-07 13:02:09,805 44k INFO ====> Epoch: 108, cost 244.00 s
2023-03-07 13:04:47,349 44k INFO Train Epoch: 109 [63%]
2023-03-07 13:04:47,350 44k INFO Losses: [2.7540855407714844, 1.9166958332061768, 10.336080551147461, 17.86919403076172, 0.9032294154167175], step: 30200, lr: 9.86096681355974e-05
2023-03-07 13:06:14,163 44k INFO ====> Epoch: 109, cost 244.36 s
2023-03-07 13:07:45,530 44k INFO Train Epoch: 110 [35%]
2023-03-07 13:07:45,532 44k INFO Losses: [2.562225103378296, 2.136636257171631, 8.96766185760498, 17.134483337402344, 0.7859663963317871], step: 30400, lr: 9.859734192708044e-05
2023-03-07 13:07:51,740 44k INFO Saving model and optimizer state at iteration 110 to ./logs/44k/G_30400.pth
2023-03-07 13:07:56,131 44k INFO Saving model and optimizer state at iteration 110 to ./logs/44k/D_30400.pth
2023-03-07 13:07:58,409 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_26400.pth
2023-03-07 13:07:58,694 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_26400.pth
2023-03-07 13:10:35,355 44k INFO ====> Epoch: 110, cost 261.19 s
2023-03-07 13:11:00,541 44k INFO Train Epoch: 111 [7%]
2023-03-07 13:11:00,543 44k INFO Losses: [2.568572521209717, 2.2901718616485596, 9.550481796264648, 19.000839233398438, 0.6459348201751709], step: 30600, lr: 9.858501725933955e-05
2023-03-07 13:13:50,257 44k INFO Train Epoch: 111 [79%]
2023-03-07 13:13:50,259 44k INFO Losses: [2.5719234943389893, 2.3685643672943115, 9.390961647033691, 16.239389419555664, 0.8426384925842285], step: 30800, lr: 9.858501725933955e-05
2023-03-07 13:14:39,371 44k INFO ====> Epoch: 111, cost 244.02 s
2023-03-07 13:16:46,858 44k INFO Train Epoch: 112 [51%]
2023-03-07 13:16:46,860 44k INFO Losses: [2.6636009216308594, 1.9371347427368164, 10.338105201721191, 16.245664596557617, 0.8601118326187134], step: 31000, lr: 9.857269413218213e-05
2023-03-07 13:18:42,458 44k INFO ====> Epoch: 112, cost 243.09 s
2023-03-07 13:19:43,809 44k INFO Train Epoch: 113 [23%]
2023-03-07 13:19:43,811 44k INFO Losses: [2.6893467903137207, 2.207916736602783, 9.720571517944336, 21.429061889648438, 1.0991566181182861], step: 31200, lr: 9.85603725454156e-05
2023-03-07 13:19:49,745 44k INFO Saving model and optimizer state at iteration 113 to ./logs/44k/G_31200.pth
2023-03-07 13:19:52,637 44k INFO Saving model and optimizer state at iteration 113 to ./logs/44k/D_31200.pth
2023-03-07 13:19:55,380 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_27200.pth
2023-03-07 13:19:55,384 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_27200.pth
2023-03-07 13:22:49,922 44k INFO Train Epoch: 113 [95%]
2023-03-07 13:22:49,925 44k INFO Losses: [2.827308416366577, 1.8313740491867065, 5.4917168617248535, 12.913220405578613, 0.9222707748413086], step: 31400, lr: 9.85603725454156e-05
2023-03-07 13:23:01,849 44k INFO ====> Epoch: 113, cost 259.39 s
2023-03-07 13:25:47,452 44k INFO Train Epoch: 114 [67%]
2023-03-07 13:25:47,454 44k INFO Losses: [2.586848497390747, 1.934704065322876, 6.173429489135742, 14.944602966308594, 0.9793536067008972], step: 31600, lr: 9.854805249884741e-05
2023-03-07 13:27:05,330 44k INFO ====> Epoch: 114, cost 243.48 s
2023-03-07 13:28:45,430 44k INFO Train Epoch: 115 [39%]
2023-03-07 13:28:45,432 44k INFO Losses: [2.6021857261657715, 2.0042614936828613, 12.000445365905762, 16.46188735961914, 0.9626584053039551], step: 31800, lr: 9.853573399228505e-05
2023-03-07 13:31:09,902 44k INFO ====> Epoch: 115, cost 244.57 s
2023-03-07 13:31:44,148 44k INFO Train Epoch: 116 [11%]
2023-03-07 13:31:44,150 44k INFO Losses: [2.6798675060272217, 2.0813381671905518, 7.602932929992676, 15.558710098266602, 1.1048258543014526], step: 32000, lr: 9.8523417025536e-05
2023-03-07 13:31:50,934 44k INFO Saving model and optimizer state at iteration 116 to ./logs/44k/G_32000.pth
2023-03-07 13:31:53,790 44k INFO Saving model and optimizer state at iteration 116 to ./logs/44k/D_32000.pth
2023-03-07 13:31:56,121 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_28000.pth
2023-03-07 13:31:56,124 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_28000.pth
2023-03-07 13:34:53,705 44k INFO Train Epoch: 116 [83%]
2023-03-07 13:34:53,707 44k INFO Losses: [2.532484292984009, 1.8786741495132446, 9.674373626708984, 17.075969696044922, 0.9130160808563232], step: 32200, lr: 9.8523417025536e-05
2023-03-07 13:35:34,448 44k INFO ====> Epoch: 116, cost 264.55 s
2023-03-07 13:37:50,767 44k INFO Train Epoch: 117 [55%]
2023-03-07 13:37:50,769 44k INFO Losses: [2.6982851028442383, 1.82241690158844, 6.009973049163818, 13.193355560302734, 0.7604075074195862], step: 32400, lr: 9.851110159840781e-05
2023-03-07 13:39:37,304 44k INFO ====> Epoch: 117, cost 242.86 s
2023-03-07 13:40:46,586 44k INFO Train Epoch: 118 [27%]
2023-03-07 13:40:46,588 44k INFO Losses: [2.7482619285583496, 1.9498790502548218, 3.5998103618621826, 12.886141777038574, 0.8763274550437927], step: 32600, lr: 9.8498787710708e-05
2023-03-07 13:43:37,478 44k INFO Train Epoch: 118 [99%]
2023-03-07 13:43:37,479 44k INFO Losses: [2.414304256439209, 2.3612687587738037, 12.184165000915527, 20.735902786254883, 1.1798557043075562], step: 32800, lr: 9.8498787710708e-05
2023-03-07 13:43:44,812 44k INFO Saving model and optimizer state at iteration 118 to ./logs/44k/G_32800.pth
2023-03-07 13:43:49,084 44k INFO Saving model and optimizer state at iteration 118 to ./logs/44k/D_32800.pth
2023-03-07 13:43:51,450 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_28800.pth
2023-03-07 13:43:51,453 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_28800.pth
2023-03-07 13:43:54,005 44k INFO ====> Epoch: 118, cost 256.70 s
2023-03-07 13:46:50,986 44k INFO Train Epoch: 119 [71%]
2023-03-07 13:46:50,988 44k INFO Losses: [2.5251121520996094, 2.175858736038208, 9.47917366027832, 15.234940528869629, 0.5667465329170227], step: 33000, lr: 9.848647536224416e-05
2023-03-07 13:48:00,258 44k INFO ====> Epoch: 119, cost 246.25 s
2023-03-07 13:49:48,302 44k INFO Train Epoch: 120 [42%]
2023-03-07 13:49:48,306 44k INFO Losses: [2.541978359222412, 2.227010726928711, 9.67823314666748, 16.843433380126953, 1.0796656608581543], step: 33200, lr: 9.847416455282387e-05
2023-03-07 13:52:05,317 44k INFO ====> Epoch: 120, cost 245.06 s
2023-03-07 13:52:45,574 44k INFO Train Epoch: 121 [14%]
2023-03-07 13:52:45,576 44k INFO Losses: [2.716115951538086, 2.085996150970459, 5.891523361206055, 12.521631240844727, 1.0317001342773438], step: 33400, lr: 9.846185528225477e-05
2023-03-07 13:55:37,508 44k INFO Train Epoch: 121 [86%]
2023-03-07 13:55:37,509 44k INFO Losses: [2.534734010696411, 2.177962303161621, 11.1658296585083, 15.923080444335938, 0.9274502992630005], step: 33600, lr: 9.846185528225477e-05
2023-03-07 13:55:45,054 44k INFO Saving model and optimizer state at iteration 121 to ./logs/44k/G_33600.pth
2023-03-07 13:55:48,285 44k INFO Saving model and optimizer state at iteration 121 to ./logs/44k/D_33600.pth
2023-03-07 13:55:51,132 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_29600.pth
2023-03-07 13:55:51,134 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_29600.pth
2023-03-07 13:56:25,877 44k INFO ====> Epoch: 121, cost 260.56 s
2023-03-07 13:58:49,216 44k INFO Train Epoch: 122 [58%]
2023-03-07 13:58:49,219 44k INFO Losses: [2.542301893234253, 2.0711164474487305, 8.077302932739258, 17.5947265625, 0.5760728120803833], step: 33800, lr: 9.84495475503445e-05
2023-03-07 14:00:27,938 44k INFO ====> Epoch: 122, cost 242.06 s
2023-03-07 14:01:46,228 44k INFO Train Epoch: 123 [30%]
2023-03-07 14:01:46,230 44k INFO Losses: [2.5644540786743164, 2.088146209716797, 12.895208358764648, 17.019132614135742, 0.8408350348472595], step: 34000, lr: 9.84372413569007e-05
2023-03-07 14:04:30,871 44k INFO ====> Epoch: 123, cost 242.93 s
2023-03-07 14:04:44,203 44k INFO Train Epoch: 124 [2%]
2023-03-07 14:04:44,205 44k INFO Losses: [2.5868711471557617, 2.048147201538086, 10.051579475402832, 17.50965118408203, 0.6974374651908875], step: 34200, lr: 9.842493670173108e-05
2023-03-07 14:07:33,479 44k INFO Train Epoch: 124 [74%]
2023-03-07 14:07:33,480 44k INFO Losses: [2.522310495376587, 2.2740702629089355, 10.525261878967285, 21.867076873779297, 0.7790706157684326], step: 34400, lr: 9.842493670173108e-05
2023-03-07 14:07:41,586 44k INFO Saving model and optimizer state at iteration 124 to ./logs/44k/G_34400.pth
2023-03-07 14:07:44,929 44k INFO Saving model and optimizer state at iteration 124 to ./logs/44k/D_34400.pth
2023-03-07 14:07:47,080 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_30400.pth
2023-03-07 14:07:47,087 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_30400.pth
2023-03-07 14:08:51,406 44k INFO ====> Epoch: 124, cost 260.53 s
2023-03-07 14:10:46,652 44k INFO Train Epoch: 125 [46%]
2023-03-07 14:10:46,654 44k INFO Losses: [2.347188949584961, 2.2917301654815674, 14.611985206604004, 18.828716278076172, 0.6149383187294006], step: 34600, lr: 9.841263358464336e-05
2023-03-07 14:12:54,699 44k INFO ====> Epoch: 125, cost 243.29 s
2023-03-07 14:13:43,483 44k INFO Train Epoch: 126 [18%]
2023-03-07 14:13:43,485 44k INFO Losses: [2.8683958053588867, 1.9333560466766357, 8.078256607055664, 14.20930004119873, 0.9114355444908142], step: 34800, lr: 9.840033200544528e-05
2023-03-07 14:16:34,201 44k INFO Train Epoch: 126 [90%]
2023-03-07 14:16:34,202 44k INFO Losses: [2.685029983520508, 1.9542362689971924, 8.398722648620605, 16.716718673706055, 1.0323829650878906], step: 35000, lr: 9.840033200544528e-05
2023-03-07 14:16:58,681 44k INFO ====> Epoch: 126, cost 243.98 s
2023-03-07 14:19:33,147 44k INFO Train Epoch: 127 [62%]
2023-03-07 14:19:33,149 44k INFO Losses: [2.5145368576049805, 2.056718111038208, 11.5772066116333, 18.14095687866211, 0.8480492234230042], step: 35200, lr: 9.838803196394459e-05
2023-03-07 14:19:41,540 44k INFO Saving model and optimizer state at iteration 127 to ./logs/44k/G_35200.pth
2023-03-07 14:19:44,183 44k INFO Saving model and optimizer state at iteration 127 to ./logs/44k/D_35200.pth
2023-03-07 14:19:46,410 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_31200.pth
2023-03-07 14:19:46,412 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_31200.pth
2023-03-07 14:21:19,423 44k INFO ====> Epoch: 127, cost 260.74 s
2023-03-07 14:22:46,657 44k INFO Train Epoch: 128 [34%]
2023-03-07 14:22:46,660 44k INFO Losses: [2.666360855102539, 2.0985002517700195, 9.987605094909668, 14.965264320373535, 1.096308708190918], step: 35400, lr: 9.837573345994909e-05
2023-03-07 14:25:23,482 44k INFO ====> Epoch: 128, cost 244.06 s
2023-03-07 14:25:44,835 44k INFO Train Epoch: 129 [6%]
2023-03-07 14:25:44,837 44k INFO Losses: [2.6134445667266846, 1.9246110916137695, 11.040657997131348, 20.860008239746094, 1.0753899812698364], step: 35600, lr: 9.836343649326659e-05
2023-03-07 14:28:34,997 44k INFO Train Epoch: 129 [78%]
2023-03-07 14:28:34,998 44k INFO Losses: [2.4784433841705322, 2.135598659515381, 9.75454044342041, 16.967927932739258, 0.8474924564361572], step: 35800, lr: 9.836343649326659e-05
2023-03-07 14:29:27,586 44k INFO ====> Epoch: 129, cost 244.10 s
2023-03-07 14:31:30,660 44k INFO Train Epoch: 130 [50%]
2023-03-07 14:31:30,662 44k INFO Losses: [2.728429079055786, 1.9212170839309692, 7.325490951538086, 16.043420791625977, 1.1176148653030396], step: 36000, lr: 9.835114106370493e-05
2023-03-07 14:31:36,788 44k INFO Saving model and optimizer state at iteration 130 to ./logs/44k/G_36000.pth
2023-03-07 14:31:39,832 44k INFO Saving model and optimizer state at iteration 130 to ./logs/44k/D_36000.pth
2023-03-07 14:31:42,498 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_32000.pth
2023-03-07 14:31:42,501 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_32000.pth
2023-03-07 14:33:45,334 44k INFO ====> Epoch: 130, cost 257.75 s
2023-03-07 14:34:41,935 44k INFO Train Epoch: 131 [22%]
2023-03-07 14:34:41,937 44k INFO Losses: [2.611250877380371, 1.8564379215240479, 10.761762619018555, 17.34300994873047, 0.7456722855567932], step: 36200, lr: 9.833884717107196e-05
2023-03-07 14:37:33,517 44k INFO Train Epoch: 131 [94%]
2023-03-07 14:37:33,519 44k INFO Losses: [2.556187152862549, 2.0489161014556885, 8.852100372314453, 13.79720401763916, 0.7240440845489502], step: 36400, lr: 9.833884717107196e-05
2023-03-07 14:37:48,781 44k INFO ====> Epoch: 131, cost 243.45 s
2023-03-07 14:40:31,412 44k INFO Train Epoch: 132 [65%]
2023-03-07 14:40:31,414 44k INFO Losses: [2.551508665084839, 2.065232276916504, 10.317933082580566, 17.246503829956055, 0.4623892307281494], step: 36600, lr: 9.832655481517557e-05
2023-03-07 14:41:52,768 44k INFO ====> Epoch: 132, cost 243.99 s
2023-03-07 14:43:28,657 44k INFO Train Epoch: 133 [37%]
2023-03-07 14:43:28,660 44k INFO Losses: [2.595510482788086, 2.1055619716644287, 10.110427856445312, 17.553598403930664, 0.9527382254600525], step: 36800, lr: 9.831426399582366e-05
2023-03-07 14:43:36,394 44k INFO Saving model and optimizer state at iteration 133 to ./logs/44k/G_36800.pth
2023-03-07 14:43:39,031 44k INFO Saving model and optimizer state at iteration 133 to ./logs/44k/D_36800.pth
2023-03-07 14:43:41,408 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_32800.pth
2023-03-07 14:43:41,412 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_32800.pth
2023-03-07 14:46:15,226 44k INFO ====> Epoch: 133, cost 262.46 s
2023-03-07 14:46:45,185 44k INFO Train Epoch: 134 [9%]
2023-03-07 14:46:45,187 44k INFO Losses: [2.5570132732391357, 2.12508487701416, 10.48717975616455, 17.457794189453125, 1.1258270740509033], step: 37000, lr: 9.830197471282419e-05
2023-03-07 14:49:38,842 44k INFO Train Epoch: 134 [81%]
2023-03-07 14:49:38,844 44k INFO Losses: [2.727623701095581, 2.0779953002929688, 7.831822395324707, 15.51980972290039, 0.8485692739486694], step: 37200, lr: 9.830197471282419e-05
2023-03-07 14:50:23,316 44k INFO ====> Epoch: 134, cost 248.09 s
2023-03-07 14:52:37,310 44k INFO Train Epoch: 135 [53%]
2023-03-07 14:52:37,312 44k INFO Losses: [2.8447556495666504, 1.9116102457046509, 6.507226943969727, 16.886478424072266, 0.9066415429115295], step: 37400, lr: 9.828968696598508e-05
2023-03-07 14:54:28,052 44k INFO ====> Epoch: 135, cost 244.74 s
2023-03-07 14:55:34,251 44k INFO Train Epoch: 136 [25%]
2023-03-07 14:55:34,253 44k INFO Losses: [2.589242935180664, 2.2363431453704834, 9.712594985961914, 17.27865982055664, 0.9201604723930359], step: 37600, lr: 9.827740075511432e-05
2023-03-07 14:55:42,172 44k INFO Saving model and optimizer state at iteration 136 to ./logs/44k/G_37600.pth
2023-03-07 14:55:45,046 44k INFO Saving model and optimizer state at iteration 136 to ./logs/44k/D_37600.pth
2023-03-07 14:55:47,217 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_33600.pth
2023-03-07 14:55:47,221 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_33600.pth
2023-03-07 14:58:43,575 44k INFO Train Epoch: 136 [97%]
2023-03-07 14:58:43,577 44k INFO Losses: [2.268256187438965, 2.4601871967315674, 10.843506813049316, 20.129066467285156, 0.5969776511192322], step: 37800, lr: 9.827740075511432e-05
2023-03-07 14:58:49,698 44k INFO ====> Epoch: 136, cost 261.65 s
2023-03-07 15:01:43,073 44k INFO Train Epoch: 137 [69%]
2023-03-07 15:01:43,076 44k INFO Losses: [2.612961769104004, 1.8237528800964355, 10.260077476501465, 14.124398231506348, 1.0738472938537598], step: 38000, lr: 9.826511608001993e-05
2023-03-07 15:02:56,617 44k INFO ====> Epoch: 137, cost 246.92 s
2023-03-07 15:04:41,092 44k INFO Train Epoch: 138 [41%]
2023-03-07 15:04:41,094 44k INFO Losses: [2.801295757293701, 1.9080822467803955, 6.491360187530518, 15.194589614868164, 0.732703447341919], step: 38200, lr: 9.825283294050992e-05
2023-03-07 15:07:03,142 44k INFO ====> Epoch: 138, cost 246.53 s
2023-03-07 15:07:39,560 44k INFO Train Epoch: 139 [13%]
2023-03-07 15:07:39,561 44k INFO Losses: [2.5810441970825195, 2.0589118003845215, 8.09807300567627, 15.709505081176758, 1.1056398153305054], step: 38400, lr: 9.824055133639235e-05
2023-03-07 15:07:46,164 44k INFO Saving model and optimizer state at iteration 139 to ./logs/44k/G_38400.pth
2023-03-07 15:07:49,790 44k INFO Saving model and optimizer state at iteration 139 to ./logs/44k/D_38400.pth
2023-03-07 15:07:52,348 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_34400.pth
2023-03-07 15:07:52,352 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_34400.pth
2023-03-07 15:10:49,082 44k INFO Train Epoch: 139 [85%]
2023-03-07 15:10:49,084 44k INFO Losses: [2.5378503799438477, 2.268864154815674, 9.447332382202148, 16.74654197692871, 0.7547262907028198], step: 38600, lr: 9.824055133639235e-05
2023-03-07 15:11:25,266 44k INFO ====> Epoch: 139, cost 262.12 s
2023-03-07 15:13:47,293 44k INFO Train Epoch: 140 [57%]
2023-03-07 15:13:47,295 44k INFO Losses: [2.4265220165252686, 2.432382345199585, 8.281749725341797, 15.038516998291016, 1.0439666509628296], step: 38800, lr: 9.822827126747529e-05
2023-03-07 15:15:30,238 44k INFO ====> Epoch: 140, cost 244.97 s
2023-03-07 15:16:45,963 44k INFO Train Epoch: 141 [29%]
2023-03-07 15:16:45,966 44k INFO Losses: [2.4159817695617676, 2.230175256729126, 6.03447151184082, 14.917130470275879, 0.8768081068992615], step: 39000, lr: 9.821599273356685e-05
2023-03-07 15:19:35,571 44k INFO ====> Epoch: 141, cost 245.33 s
2023-03-07 15:19:45,547 44k INFO Train Epoch: 142 [1%]
2023-03-07 15:19:45,549 44k INFO Losses: [2.722186803817749, 2.25022292137146, 10.59304141998291, 16.91985511779785, 0.7805759906768799], step: 39200, lr: 9.820371573447515e-05
2023-03-07 15:19:51,322 44k INFO Saving model and optimizer state at iteration 142 to ./logs/44k/G_39200.pth
2023-03-07 15:19:55,844 44k INFO Saving model and optimizer state at iteration 142 to ./logs/44k/D_39200.pth
2023-03-07 15:19:58,190 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_35200.pth
2023-03-07 15:19:58,195 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_35200.pth
2023-03-07 15:22:53,517 44k INFO Train Epoch: 142 [73%]
2023-03-07 15:22:53,519 44k INFO Losses: [2.6577816009521484, 1.8956093788146973, 8.7767972946167, 15.530049324035645, 1.3834797143936157], step: 39400, lr: 9.820371573447515e-05
2023-03-07 15:23:59,677 44k INFO ====> Epoch: 142, cost 264.11 s
2023-03-07 15:25:52,088 44k INFO Train Epoch: 143 [45%]
2023-03-07 15:25:52,090 44k INFO Losses: [2.485358953475952, 2.075366258621216, 8.294974327087402, 16.81549835205078, 1.1152175664901733], step: 39600, lr: 9.819144027000834e-05
2023-03-07 15:28:04,891 44k INFO ====> Epoch: 143, cost 245.21 s
2023-03-07 15:28:51,656 44k INFO Train Epoch: 144 [17%]
2023-03-07 15:28:51,658 44k INFO Losses: [2.5104942321777344, 2.117520809173584, 6.98276424407959, 15.484089851379395, 0.9780415892601013], step: 39800, lr: 9.817916633997459e-05
2023-03-07 15:31:44,312 44k INFO Train Epoch: 144 [88%]
2023-03-07 15:31:44,314 44k INFO Losses: [2.8077499866485596, 2.157170295715332, 10.667325973510742, 17.325740814208984, 0.8047347068786621], step: 40000, lr: 9.817916633997459e-05
2023-03-07 15:31:53,248 44k INFO Saving model and optimizer state at iteration 144 to ./logs/44k/G_40000.pth
2023-03-07 15:31:56,059 44k INFO Saving model and optimizer state at iteration 144 to ./logs/44k/D_40000.pth
2023-03-07 15:31:58,294 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_36000.pth
2023-03-07 15:31:58,304 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_36000.pth
2023-03-07 15:32:29,602 44k INFO ====> Epoch: 144, cost 264.71 s
2023-03-07 15:35:01,349 44k INFO Train Epoch: 145 [60%]
2023-03-07 15:35:01,351 44k INFO Losses: [2.477918863296509, 2.0148065090179443, 10.240796089172363, 17.502338409423828, 0.5396723747253418], step: 40200, lr: 9.816689394418209e-05
2023-03-07 15:36:36,351 44k INFO ====> Epoch: 145, cost 246.75 s
2023-03-07 15:38:00,606 44k INFO Train Epoch: 146 [32%]
2023-03-07 15:38:00,608 44k INFO Losses: [2.6972265243530273, 2.2379965782165527, 10.642809867858887, 17.13044548034668, 0.7844314575195312], step: 40400, lr: 9.815462308243906e-05
2023-03-07 15:40:42,894 44k INFO ====> Epoch: 146, cost 246.54 s
2023-03-07 15:41:00,228 44k INFO Train Epoch: 147 [4%]
2023-03-07 15:41:00,230 44k INFO Losses: [2.7573306560516357, 2.015153169631958, 6.866921424865723, 17.249662399291992, 0.917712390422821], step: 40600, lr: 9.814235375455375e-05
2023-03-07 15:43:51,225 44k INFO Train Epoch: 147 [76%]
2023-03-07 15:43:51,226 44k INFO Losses: [2.686882495880127, 1.9636616706848145, 11.041534423828125, 18.591346740722656, 0.8938453793525696], step: 40800, lr: 9.814235375455375e-05
2023-03-07 15:43:58,853 44k INFO Saving model and optimizer state at iteration 147 to ./logs/44k/G_40800.pth
2023-03-07 15:44:01,503 44k INFO Saving model and optimizer state at iteration 147 to ./logs/44k/D_40800.pth
2023-03-07 15:44:03,807 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_36800.pth
2023-03-07 15:44:03,810 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_36800.pth
2023-03-07 15:45:03,211 44k INFO ====> Epoch: 147, cost 260.32 s
2023-03-07 15:47:02,971 44k INFO Train Epoch: 148 [48%]
2023-03-07 15:47:02,972 44k INFO Losses: [2.791567325592041, 1.8524407148361206, 9.042804718017578, 14.243025779724121, 0.8905417919158936], step: 41000, lr: 9.813008596033443e-05
2023-03-07 15:49:04,078 44k INFO ====> Epoch: 148, cost 240.87 s
2023-03-07 15:49:57,246 44k INFO Train Epoch: 149 [20%]
2023-03-07 15:49:57,248 44k INFO Losses: [2.4793646335601807, 2.1416540145874023, 6.649944305419922, 16.792537689208984, 0.7819536924362183], step: 41200, lr: 9.811781969958938e-05
2023-03-07 15:52:46,701 44k INFO Train Epoch: 149 [92%]
2023-03-07 15:52:46,703 44k INFO Losses: [2.463216543197632, 1.9820969104766846, 10.679522514343262, 19.082372665405273, 0.7853336334228516], step: 41400, lr: 9.811781969958938e-05
2023-03-07 15:53:04,773 44k INFO ====> Epoch: 149, cost 240.70 s
2023-03-07 15:55:42,742 44k INFO Train Epoch: 150 [64%]
2023-03-07 15:55:42,744 44k INFO Losses: [2.647197723388672, 2.299222469329834, 7.331103801727295, 17.678911209106445, 0.9585152268409729], step: 41600, lr: 9.810555497212693e-05
2023-03-07 15:55:49,380 44k INFO Saving model and optimizer state at iteration 150 to ./logs/44k/G_41600.pth
2023-03-07 15:55:52,253 44k INFO Saving model and optimizer state at iteration 150 to ./logs/44k/D_41600.pth
2023-03-07 15:55:54,676 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_37600.pth
2023-03-07 15:55:54,712 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_37600.pth
2023-03-07 15:57:21,901 44k INFO ====> Epoch: 150, cost 257.13 s
2023-03-07 15:58:52,988 44k INFO Train Epoch: 151 [36%]
2023-03-07 15:58:52,990 44k INFO Losses: [2.6337318420410156, 1.8377368450164795, 10.304336547851562, 17.388080596923828, 0.8704838156700134], step: 41800, lr: 9.809329177775541e-05
2023-03-07 16:01:22,071 44k INFO ====> Epoch: 151, cost 240.17 s
2023-03-07 16:01:48,226 44k INFO Train Epoch: 152 [8%]
2023-03-07 16:01:48,227 44k INFO Losses: [2.8213083744049072, 1.7802414894104004, 4.918356895446777, 13.4742431640625, 0.9557060599327087], step: 42000, lr: 9.808103011628319e-05
2023-03-07 16:04:36,530 44k INFO Train Epoch: 152 [80%]
2023-03-07 16:04:36,531 44k INFO Losses: [2.5344505310058594, 2.319838047027588, 9.44050407409668, 16.864612579345703, 0.6482043862342834], step: 42200, lr: 9.808103011628319e-05
2023-03-07 16:05:23,044 44k INFO ====> Epoch: 152, cost 240.97 s
2023-03-07 16:07:31,466 44k INFO Train Epoch: 153 [52%]
2023-03-07 16:07:31,468 44k INFO Losses: [2.3980166912078857, 2.129762887954712, 12.265533447265625, 16.87816047668457, 0.8513701558113098], step: 42400, lr: 9.806876998751865e-05
2023-03-07 16:07:37,382 44k INFO Saving model and optimizer state at iteration 153 to ./logs/44k/G_42400.pth
2023-03-07 16:07:40,699 44k INFO Saving model and optimizer state at iteration 153 to ./logs/44k/D_42400.pth
2023-03-07 16:07:43,547 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_38400.pth
2023-03-07 16:07:43,549 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_38400.pth
2023-03-07 16:09:38,726 44k INFO ====> Epoch: 153, cost 255.68 s
2023-03-07 16:10:41,454 44k INFO Train Epoch: 154 [24%]
2023-03-07 16:10:41,456 44k INFO Losses: [2.64570689201355, 1.9623569250106812, 7.632351398468018, 15.81658935546875, 1.0724657773971558], step: 42600, lr: 9.80565113912702e-05
2023-03-07 16:13:29,962 44k INFO Train Epoch: 154 [96%]
2023-03-07 16:13:29,963 44k INFO Losses: [2.554548740386963, 2.245011568069458, 7.115328311920166, 16.560129165649414, 0.9685333371162415], step: 42800, lr: 9.80565113912702e-05
2023-03-07 16:13:39,995 44k INFO ====> Epoch: 154, cost 241.27 s
2023-03-07 16:16:24,558 44k INFO Train Epoch: 155 [68%]
2023-03-07 16:16:24,560 44k INFO Losses: [2.596010208129883, 1.9347838163375854, 8.885001182556152, 14.531281471252441, 0.8205533623695374], step: 43000, lr: 9.804425432734629e-05
2023-03-07 16:17:40,375 44k INFO ====> Epoch: 155, cost 240.38 s
2023-03-07 16:19:17,955 44k INFO Train Epoch: 156 [40%]
2023-03-07 16:19:17,956 44k INFO Losses: [2.709822177886963, 2.301928997039795, 7.812820911407471, 16.93438720703125, 0.8064535856246948], step: 43200, lr: 9.803199879555537e-05
2023-03-07 16:19:24,355 44k INFO Saving model and optimizer state at iteration 156 to ./logs/44k/G_43200.pth
2023-03-07 16:19:27,994 44k INFO Saving model and optimizer state at iteration 156 to ./logs/44k/D_43200.pth
2023-03-07 16:19:30,398 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_39200.pth
2023-03-07 16:19:30,402 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_39200.pth
2023-03-07 16:21:54,701 44k INFO ====> Epoch: 156, cost 254.33 s
2023-03-07 16:22:27,409 44k INFO Train Epoch: 157 [12%]
2023-03-07 16:22:27,410 44k INFO Losses: [2.4813711643218994, 2.2981083393096924, 11.50885009765625, 17.688541412353516, 1.1247652769088745], step: 43400, lr: 9.801974479570593e-05
2023-03-07 16:25:15,665 44k INFO Train Epoch: 157 [83%]
2023-03-07 16:25:15,667 44k INFO Losses: [2.9073727130889893, 1.8447747230529785, 5.932702541351318, 12.930627822875977, 0.6803305149078369], step: 43600, lr: 9.801974479570593e-05
2023-03-08 02:21:28,711 44k INFO {'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 6536180, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 12, 'fp16_run': True, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 5}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'nahida': 0, 'pecorine': 1, 'ayaka': 2}, 'model_dir': './logs/44k'}
|
2023-03-08 02:21:42,057 44k INFO Loaded checkpoint './logs/44k/G_43200.pth' (iteration 156) |
|
2023-03-08 02:21:49,395 44k INFO Loaded checkpoint './logs/44k/D_43200.pth' (iteration 156) |
|
2023-03-08 02:23:36,590 44k INFO Train Epoch: 156 [40%] |
|
2023-03-08 02:23:36,591 44k INFO Losses: [2.5819149017333984, 2.143174648284912, 10.522336959838867, 15.130985260009766, 0.7662724256515503], step: 21600, lr: 9.801974479570593e-05 |
|
2023-03-08 02:23:46,843 44k INFO Saving model and optimizer state at iteration 156 to ./logs/44k/G_21600.pth |
|
2023-03-08 02:23:51,391 44k INFO Saving model and optimizer state at iteration 156 to ./logs/44k/D_21600.pth |
|
2023-03-08 02:23:53,682 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_40000.pth |
|
2023-03-08 02:23:53,684 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_40000.pth |
|
2023-03-08 02:26:06,396 44k INFO ====> Epoch: 156, cost 277.69 s |
|
2023-03-08 02:28:53,443 44k INFO Train Epoch: 157 [83%] |
|
2023-03-08 02:28:53,445 44k INFO Losses: [2.66318941116333, 2.127767324447632, 10.73057746887207, 17.54991912841797, 0.7208425402641296], step: 21800, lr: 9.800749232760646e-05 |
|
2023-03-08 02:29:23,790 44k INFO ====> Epoch: 157, cost 197.39 s |
|
2023-03-08 02:30:03,505 44k INFO {'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 6536180, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': True, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 5}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'nahida': 0, 'pecorine': 1, 'ayaka': 2}, 'model_dir': './logs/44k'} |
|
2023-03-08 02:30:19,690 44k INFO Loaded checkpoint './logs/44k/G_43200.pth' (iteration 156) |
|
2023-03-08 02:30:23,688 44k INFO Loaded checkpoint './logs/44k/D_43200.pth' (iteration 156) |
|
2023-03-08 02:32:42,110 44k INFO Train Epoch: 156 [40%] |
|
2023-03-08 02:32:42,111 44k INFO Losses: [2.4732213020324707, 2.1469333171844482, 10.939823150634766, 15.046886444091797, 0.9504019618034363], step: 43200, lr: 9.801974479570593e-05 |
|
2023-03-08 02:32:51,627 44k INFO Saving model and optimizer state at iteration 156 to ./logs/44k/G_43200.pth |
|
2023-03-08 02:32:54,774 44k INFO Saving model and optimizer state at iteration 156 to ./logs/44k/D_43200.pth |
|
2023-03-08 02:35:56,688 44k INFO ====> Epoch: 156, cost 353.19 s |
|
2023-03-08 02:36:35,152 44k INFO Train Epoch: 157 [12%] |
|
2023-03-08 02:36:35,155 44k INFO Losses: [2.508427143096924, 2.118438720703125, 10.691849708557129, 15.574374198913574, 1.048387885093689], step: 43400, lr: 9.800749232760646e-05 |
|
2023-03-08 02:39:39,009 44k INFO Train Epoch: 157 [83%] |
|
2023-03-08 02:39:39,010 44k INFO Losses: [2.332420587539673, 2.4597249031066895, 11.89755630493164, 18.3675479888916, 0.9792011976242065], step: 43600, lr: 9.800749232760646e-05 |
|
2023-03-08 02:40:21,235 44k INFO ====> Epoch: 157, cost 264.55 s |
|
2023-03-08 02:42:50,194 44k INFO Train Epoch: 158 [55%] |
|
2023-03-08 02:42:50,196 44k INFO Losses: [2.607238292694092, 2.066110134124756, 9.890717506408691, 15.75928783416748, 0.7510154843330383], step: 43800, lr: 9.79952413910655e-05 |
|
2023-03-08 02:44:46,028 44k INFO ====> Epoch: 158, cost 264.79 s |
|
2023-03-08 02:46:03,666 44k INFO Train Epoch: 159 [27%] |
|
2023-03-08 02:46:03,668 44k INFO Losses: [2.469644069671631, 2.326986789703369, 9.601941108703613, 16.331581115722656, 0.698388934135437], step: 44000, lr: 9.798299198589162e-05 |
|
2023-03-08 02:46:11,439 44k INFO Saving model and optimizer state at iteration 159 to ./logs/44k/G_44000.pth |
|
2023-03-08 02:46:14,316 44k INFO Saving model and optimizer state at iteration 159 to ./logs/44k/D_44000.pth |
|
2023-03-08 02:46:16,752 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_40800.pth |
|
2023-03-08 02:46:16,754 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_40800.pth |
|
2023-03-08 02:49:22,985 44k INFO Train Epoch: 159 [99%] |
|
2023-03-08 02:49:22,986 44k INFO Losses: [2.9767568111419678, 2.080909013748169, 6.0124006271362305, 12.236157417297363, 1.0207706689834595], step: 44200, lr: 9.798299198589162e-05 |
|
2023-03-08 02:49:24,945 44k INFO ====> Epoch: 159, cost 278.92 s |
|
2023-03-08 02:52:34,260 44k INFO Train Epoch: 160 [71%] |
|
2023-03-08 02:52:34,262 44k INFO Losses: [2.449756145477295, 2.2825958728790283, 10.627461433410645, 13.672066688537598, 0.9797255396842957], step: 44400, lr: 9.797074411189339e-05 |
|
2023-03-08 02:53:46,586 44k INFO ====> Epoch: 160, cost 261.64 s |
|
2023-03-08 02:55:43,436 44k INFO Train Epoch: 161 [43%] |
|
2023-03-08 02:55:43,438 44k INFO Losses: [2.461012363433838, 2.2964553833007812, 13.088207244873047, 19.27591323852539, 0.792690634727478], step: 44600, lr: 9.795849776887939e-05 |
|
2023-03-08 02:58:07,705 44k INFO ====> Epoch: 161, cost 261.12 s |
|
2023-03-08 02:58:54,206 44k INFO Train Epoch: 162 [15%] |
|
2023-03-08 02:58:54,208 44k INFO Losses: [2.686145544052124, 2.195071220397949, 10.532417297363281, 19.825410842895508, 0.3764630854129791], step: 44800, lr: 9.794625295665828e-05 |
|
2023-03-08 02:59:00,926 44k INFO Saving model and optimizer state at iteration 162 to ./logs/44k/G_44800.pth |
|
2023-03-08 02:59:05,398 44k INFO Saving model and optimizer state at iteration 162 to ./logs/44k/D_44800.pth |
|
2023-03-08 02:59:07,639 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_41600.pth |
|
2023-03-08 02:59:07,641 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_41600.pth |
|
2023-03-08 03:02:13,531 44k INFO Train Epoch: 162 [87%] |
|
2023-03-08 03:02:13,534 44k INFO Losses: [2.7293052673339844, 1.9577159881591797, 7.694588661193848, 15.992636680603027, 0.8690779805183411], step: 45000, lr: 9.794625295665828e-05 |
|
2023-03-08 03:02:46,339 44k INFO ====> Epoch: 162, cost 278.63 s |
|
2023-03-08 03:05:25,008 44k INFO Train Epoch: 163 [59%] |
|
2023-03-08 03:05:25,009 44k INFO Losses: [2.837630271911621, 1.9181642532348633, 6.675662517547607, 14.70255184173584, 0.914516806602478], step: 45200, lr: 9.79340096750387e-05 |
|
2023-03-08 03:07:07,747 44k INFO ====> Epoch: 163, cost 261.41 s |
|
2023-03-08 03:08:35,936 44k INFO Train Epoch: 164 [31%] |
|
2023-03-08 03:08:35,938 44k INFO Losses: [2.495232343673706, 2.2274389266967773, 10.259429931640625, 14.76123046875, 0.3277120888233185], step: 45400, lr: 9.792176792382932e-05 |
|
2023-03-08 03:11:31,136 44k INFO ====> Epoch: 164, cost 263.39 s |
|
2023-03-08 03:11:46,716 44k INFO Train Epoch: 165 [3%] |
|
2023-03-08 03:11:46,718 44k INFO Losses: [2.753824234008789, 2.093254566192627, 9.706669807434082, 17.512969970703125, 1.0468066930770874], step: 45600, lr: 9.790952770283884e-05 |
|
2023-03-08 03:11:53,271 44k INFO Saving model and optimizer state at iteration 165 to ./logs/44k/G_45600.pth |
|
2023-03-08 03:11:56,170 44k INFO Saving model and optimizer state at iteration 165 to ./logs/44k/D_45600.pth |
|
2023-03-08 03:11:58,876 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_42400.pth |
|
2023-03-08 03:11:58,878 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_42400.pth |
|
2023-03-08 03:15:04,486 44k INFO Train Epoch: 165 [75%] |
|
2023-03-08 03:15:04,487 44k INFO Losses: [2.5104198455810547, 2.2979352474212646, 9.09818172454834, 17.025728225708008, 0.8729590773582458], step: 45800, lr: 9.790952770283884e-05 |
|
2023-03-08 03:16:07,578 44k INFO ====> Epoch: 165, cost 276.44 s |
|
2023-03-08 03:18:15,329 44k INFO Train Epoch: 166 [47%] |
|
2023-03-08 03:18:15,331 44k INFO Losses: [2.5445079803466797, 1.9724833965301514, 7.351832866668701, 13.205491065979004, 0.7603639364242554], step: 46000, lr: 9.789728901187598e-05 |
|
2023-03-08 03:20:29,310 44k INFO ====> Epoch: 166, cost 261.73 s |
|
2023-03-08 03:21:24,958 44k INFO Train Epoch: 167 [19%] |
|
2023-03-08 03:21:24,960 44k INFO Losses: [2.696256160736084, 2.0927894115448, 6.182933807373047, 14.072891235351562, 0.8930944204330444], step: 46200, lr: 9.78850518507495e-05 |
|
2023-03-08 03:24:26,835 44k INFO Train Epoch: 167 [91%] |
|
2023-03-08 03:24:26,838 44k INFO Losses: [2.750711679458618, 1.9618878364562988, 9.12593936920166, 15.634551048278809, 0.7306580543518066], step: 46400, lr: 9.78850518507495e-05 |
|
2023-03-08 03:24:36,823 44k INFO Saving model and optimizer state at iteration 167 to ./logs/44k/G_46400.pth |
|
2023-03-08 03:24:40,274 44k INFO Saving model and optimizer state at iteration 167 to ./logs/44k/D_46400.pth |
|
2023-03-08 03:24:42,943 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_21600.pth |
|
2023-03-08 03:24:42,947 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_21600.pth |
|
2023-03-08 03:25:10,061 44k INFO ====> Epoch: 167, cost 280.75 s |
|
2023-03-08 03:27:55,744 44k INFO Train Epoch: 168 [63%] |
|
2023-03-08 03:27:55,746 44k INFO Losses: [2.563018321990967, 1.9744563102722168, 8.07748794555664, 17.766172409057617, 0.7537419199943542], step: 46600, lr: 9.787281621926815e-05 |
|
2023-03-08 03:29:29,839 44k INFO ====> Epoch: 168, cost 259.78 s |
|
2023-03-08 03:31:04,161 44k INFO Train Epoch: 169 [35%] |
|
2023-03-08 03:31:04,164 44k INFO Losses: [2.780285596847534, 1.9597985744476318, 7.2716169357299805, 12.88646125793457, 1.0950372219085693], step: 46800, lr: 9.786058211724074e-05 |
|
2023-03-08 03:33:50,003 44k INFO ====> Epoch: 169, cost 260.16 s |
|
2023-03-08 03:34:15,624 44k INFO Train Epoch: 170 [6%] |
|
2023-03-08 03:34:15,626 44k INFO Losses: [2.6613404750823975, 2.165069580078125, 11.322265625, 17.966665267944336, 0.6004488468170166], step: 47000, lr: 9.784834954447608e-05 |
|
2023-03-08 03:37:17,573 44k INFO Train Epoch: 170 [78%] |
|
2023-03-08 03:37:17,575 44k INFO Losses: [2.6439225673675537, 2.0790014266967773, 8.110756874084473, 14.156158447265625, 0.8605051636695862], step: 47200, lr: 9.784834954447608e-05 |
|
2023-03-08 03:37:26,363 44k INFO Saving model and optimizer state at iteration 170 to ./logs/44k/G_47200.pth |
|
2023-03-08 03:37:29,600 44k INFO Saving model and optimizer state at iteration 170 to ./logs/44k/D_47200.pth |
|
2023-03-08 03:37:31,943 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_43200.pth |
|
2023-03-08 03:37:31,946 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_43200.pth |
|
2023-03-08 03:38:31,418 44k INFO ====> Epoch: 170, cost 281.42 s |
|
2023-03-08 03:40:43,322 44k INFO Train Epoch: 171 [50%] |
|
2023-03-08 03:40:43,323 44k INFO Losses: [2.5307302474975586, 2.092162609100342, 11.870805740356445, 18.22563362121582, 0.8635382652282715], step: 47400, lr: 9.783611850078301e-05 |
|
2023-03-08 03:42:49,094 44k INFO ====> Epoch: 171, cost 257.68 s |
|
2023-03-08 03:43:53,222 44k INFO Train Epoch: 172 [22%] |
|
2023-03-08 03:43:53,224 44k INFO Losses: [2.7115089893341064, 2.3179149627685547, 9.526841163635254, 15.643858909606934, 0.7768955826759338], step: 47600, lr: 9.782388898597041e-05 |
|
2023-03-08 03:46:55,922 44k INFO Train Epoch: 172 [94%] |
|
2023-03-08 03:46:55,924 44k INFO Losses: [2.587049722671509, 2.2732245922088623, 10.001681327819824, 15.657183647155762, 0.9902274012565613], step: 47800, lr: 9.782388898597041e-05 |
|
2023-03-08 03:47:10,436 44k INFO ====> Epoch: 172, cost 261.34 s |
|
2023-03-08 03:50:05,245 44k INFO Train Epoch: 173 [66%] |
|
2023-03-08 03:50:05,247 44k INFO Losses: [2.655766010284424, 2.018812417984009, 10.79494571685791, 17.07520294189453, 0.7845973372459412], step: 48000, lr: 9.781166099984716e-05 |
|
2023-03-08 03:50:13,117 44k INFO Saving model and optimizer state at iteration 173 to ./logs/44k/G_48000.pth |
|
2023-03-08 03:50:16,091 44k INFO Saving model and optimizer state at iteration 173 to ./logs/44k/D_48000.pth |
|
2023-03-08 03:50:18,837 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_44000.pth |
|
2023-03-08 03:50:18,841 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_44000.pth |
|
2023-03-08 03:51:49,125 44k INFO ====> Epoch: 173, cost 278.69 s |
|
2023-03-08 03:53:31,431 44k INFO Train Epoch: 174 [38%] |
|
2023-03-08 03:53:31,434 44k INFO Losses: [2.5499701499938965, 2.0284340381622314, 9.611529350280762, 15.632028579711914, 0.7346274256706238], step: 48200, lr: 9.779943454222217e-05 |
|
2023-03-08 03:56:09,211 44k INFO ====> Epoch: 174, cost 260.09 s |
|
2023-03-08 03:56:43,006 44k INFO Train Epoch: 175 [10%] |
|
2023-03-08 03:56:43,009 44k INFO Losses: [2.540177345275879, 2.122141122817993, 12.148147583007812, 17.560848236083984, 0.8152323365211487], step: 48400, lr: 9.778720961290439e-05 |
|
2023-03-08 03:59:45,324 44k INFO Train Epoch: 175 [82%] |
|
2023-03-08 03:59:45,326 44k INFO Losses: [2.729022741317749, 2.029737949371338, 9.751473426818848, 16.87692642211914, 0.8272270560264587], step: 48600, lr: 9.778720961290439e-05 |
|
2023-03-08 04:00:30,347 44k INFO ====> Epoch: 175, cost 261.14 s |
|
2023-03-08 04:02:55,292 44k INFO Train Epoch: 176 [54%] |
|
2023-03-08 04:02:55,294 44k INFO Losses: [2.6743788719177246, 2.0432138442993164, 11.252622604370117, 16.821430206298828, 0.8071271181106567], step: 48800, lr: 9.777498621170277e-05 |
|
2023-03-08 04:03:03,699 44k INFO Saving model and optimizer state at iteration 176 to ./logs/44k/G_48800.pth |
|
2023-03-08 04:03:06,608 44k INFO Saving model and optimizer state at iteration 176 to ./logs/44k/D_48800.pth |
|
2023-03-08 04:03:09,069 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_44800.pth |
|
2023-03-08 04:03:09,074 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_44800.pth |
|
2023-03-08 04:05:09,824 44k INFO ====> Epoch: 176, cost 279.48 s |
|
2023-03-08 04:06:22,239 44k INFO Train Epoch: 177 [26%] |
|
2023-03-08 04:06:22,241 44k INFO Losses: [2.527803421020508, 1.8891561031341553, 10.983811378479004, 14.759749412536621, 0.9421578049659729], step: 49000, lr: 9.776276433842631e-05 |
|
2023-03-08 04:09:25,182 44k INFO Train Epoch: 177 [98%] |
|
2023-03-08 04:09:25,183 44k INFO Losses: [2.6188366413116455, 2.402463674545288, 11.196674346923828, 16.19263458251953, 0.6358367800712585], step: 49200, lr: 9.776276433842631e-05 |
|
2023-03-08 04:09:30,447 44k INFO ====> Epoch: 177, cost 260.62 s |
|
2023-03-08 04:12:34,454 44k INFO Train Epoch: 178 [70%] |
|
2023-03-08 04:12:34,456 44k INFO Losses: [2.5544166564941406, 2.3566675186157227, 5.920743942260742, 11.365242004394531, 0.4844017028808594], step: 49400, lr: 9.7750543992884e-05 |
|
2023-03-08 04:13:50,409 44k INFO ====> Epoch: 178, cost 259.96 s |
|
2023-03-08 04:15:42,818 44k INFO Train Epoch: 179 [42%] |
|
2023-03-08 04:15:42,820 44k INFO Losses: [2.642854928970337, 1.8851852416992188, 9.175716400146484, 15.819239616394043, 1.2416026592254639], step: 49600, lr: 9.773832517488488e-05 |
|
2023-03-08 04:15:49,387 44k INFO Saving model and optimizer state at iteration 179 to ./logs/44k/G_49600.pth |
|
2023-03-08 04:15:53,318 44k INFO Saving model and optimizer state at iteration 179 to ./logs/44k/D_49600.pth |
|
2023-03-08 04:15:56,039 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_45600.pth |
|
2023-03-08 04:15:56,046 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_45600.pth |
|
2023-03-08 04:18:24,570 44k INFO ====> Epoch: 179, cost 274.16 s |
|
2023-03-08 04:19:06,264 44k INFO Train Epoch: 180 [14%] |
|
2023-03-08 04:19:06,266 44k INFO Losses: [2.374657154083252, 2.1695542335510254, 11.633705139160156, 19.08207893371582, 1.1676304340362549], step: 49800, lr: 9.772610788423802e-05 |
|
2023-03-08 04:22:05,643 44k INFO Train Epoch: 180 [86%] |
|
2023-03-08 04:22:05,644 44k INFO Losses: [2.5712502002716064, 1.9997704029083252, 12.931563377380371, 19.863094329833984, 0.7680585980415344], step: 50000, lr: 9.772610788423802e-05 |
|
2023-03-08 04:22:43,364 44k INFO ====> Epoch: 180, cost 258.79 s |
|
2023-03-08 04:25:14,287 44k INFO Train Epoch: 181 [58%] |
|
2023-03-08 04:25:14,288 44k INFO Losses: [2.284815549850464, 2.5167884826660156, 10.566186904907227, 20.76923179626465, 0.9518967866897583], step: 50200, lr: 9.771389212075249e-05 |
|
2023-03-08 04:27:00,493 44k INFO ====> Epoch: 181, cost 257.13 s |
|
2023-03-08 04:28:21,210 44k INFO Train Epoch: 182 [29%] |
|
2023-03-08 04:28:21,213 44k INFO Losses: [2.438420534133911, 2.2374000549316406, 8.809670448303223, 14.085413932800293, 0.6117709875106812], step: 50400, lr: 9.77016778842374e-05 |
|
2023-03-08 04:28:29,365 44k INFO Saving model and optimizer state at iteration 182 to ./logs/44k/G_50400.pth |
|
2023-03-08 04:28:32,279 44k INFO Saving model and optimizer state at iteration 182 to ./logs/44k/D_50400.pth |
|
2023-03-08 04:28:34,839 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_46400.pth |
|
2023-03-08 04:28:34,844 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_46400.pth |
|
2023-03-08 04:31:35,485 44k INFO ====> Epoch: 182, cost 274.99 s |
|
2023-03-08 04:31:47,799 44k INFO Train Epoch: 183 [1%] |
|
2023-03-08 04:31:47,801 44k INFO Losses: [2.626110076904297, 1.8963866233825684, 8.286408424377441, 14.33017349243164, 0.8458701968193054], step: 50600, lr: 9.768946517450186e-05 |
|
2023-03-08 04:34:48,612 44k INFO Train Epoch: 183 [73%] |
|
2023-03-08 04:34:48,613 44k INFO Losses: [2.739945888519287, 1.8550355434417725, 5.788541316986084, 14.680170059204102, 1.1566790342330933], step: 50800, lr: 9.768946517450186e-05 |
|
2023-03-08 04:35:56,790 44k INFO ====> Epoch: 183, cost 261.31 s |
|
2023-03-08 04:37:57,064 44k INFO Train Epoch: 184 [45%] |
|
2023-03-08 04:37:57,066 44k INFO Losses: [2.437542676925659, 2.186265230178833, 10.198734283447266, 19.723705291748047, 0.47157907485961914], step: 51000, lr: 9.767725399135504e-05 |
|
2023-03-08 04:40:13,687 44k INFO ====> Epoch: 184, cost 256.90 s |
|
2023-03-08 04:41:03,980 44k INFO Train Epoch: 185 [17%] |
|
2023-03-08 04:41:03,982 44k INFO Losses: [2.7737083435058594, 2.264303684234619, 8.459867477416992, 15.984721183776855, 0.9685423374176025], step: 51200, lr: 9.766504433460612e-05 |
|
2023-03-08 04:41:11,580 44k INFO Saving model and optimizer state at iteration 185 to ./logs/44k/G_51200.pth |
|
2023-03-08 04:41:14,600 44k INFO Saving model and optimizer state at iteration 185 to ./logs/44k/D_51200.pth |
|
2023-03-08 04:41:17,028 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_47200.pth |
|
2023-03-08 04:41:17,031 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_47200.pth |
|
2023-03-08 04:44:22,597 44k INFO Train Epoch: 185 [89%] |
|
2023-03-08 04:44:22,599 44k INFO Losses: [2.5582494735717773, 2.125342845916748, 11.07038688659668, 17.17374038696289, 1.0533064603805542], step: 51400, lr: 9.766504433460612e-05 |
|
2023-03-08 04:44:49,959 44k INFO ====> Epoch: 185, cost 276.27 s |
|
2023-03-08 04:47:33,169 44k INFO Train Epoch: 186 [61%] |
|
2023-03-08 04:47:33,171 44k INFO Losses: [2.512091875076294, 2.293306827545166, 7.662440776824951, 14.665122985839844, 1.069049596786499], step: 51600, lr: 9.765283620406429e-05 |
|
2023-03-08 04:49:09,171 44k INFO ====> Epoch: 186, cost 259.21 s |
|
2023-03-08 04:50:41,415 44k INFO Train Epoch: 187 [33%] |
|
2023-03-08 04:50:41,417 44k INFO Losses: [2.6489601135253906, 2.0015437602996826, 8.139839172363281, 15.817876815795898, 0.7842486500740051], step: 51800, lr: 9.764062959953878e-05 |
|
2023-03-08 04:53:29,549 44k INFO ====> Epoch: 187, cost 260.38 s |
|
2023-03-08 04:53:49,498 44k INFO Train Epoch: 188 [5%] |
|
2023-03-08 04:53:49,500 44k INFO Losses: [2.735672950744629, 2.029693365097046, 8.614529609680176, 15.862418174743652, 1.1612235307693481], step: 52000, lr: 9.762842452083883e-05 |
|
2023-03-08 04:53:56,398 44k INFO Saving model and optimizer state at iteration 188 to ./logs/44k/G_52000.pth |
|
2023-03-08 04:54:00,186 44k INFO Saving model and optimizer state at iteration 188 to ./logs/44k/D_52000.pth |
|
2023-03-08 04:54:02,581 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_48000.pth |
|
2023-03-08 04:54:02,584 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_48000.pth |
|
2023-03-08 04:57:05,326 44k INFO Train Epoch: 188 [77%] |
|
2023-03-08 04:57:05,327 44k INFO Losses: [2.5432536602020264, 1.950775146484375, 6.037012100219727, 15.000456809997559, 1.254520058631897], step: 52200, lr: 9.762842452083883e-05 |
|
2023-03-08 04:58:02,550 44k INFO ====> Epoch: 188, cost 273.00 s |
|
2023-03-08 05:00:13,919 44k INFO Train Epoch: 189 [49%] |
|
2023-03-08 05:00:13,921 44k INFO Losses: [2.642697334289551, 2.3072731494903564, 9.758556365966797, 18.388626098632812, 0.9308420419692993], step: 52400, lr: 9.761622096777372e-05 |
|
2023-03-08 05:02:20,726 44k INFO ====> Epoch: 189, cost 258.18 s |
|
2023-03-08 05:03:21,705 44k INFO Train Epoch: 190 [21%] |
|
2023-03-08 05:03:21,707 44k INFO Losses: [2.6143081188201904, 2.19193696975708, 11.921883583068848, 18.4991512298584, 0.7481468915939331], step: 52600, lr: 9.760401894015275e-05 |
|
2023-03-08 05:06:21,289 44k INFO Train Epoch: 190 [93%] |
|
2023-03-08 05:06:21,290 44k INFO Losses: [2.637528419494629, 1.9682215452194214, 10.27647876739502, 16.894824981689453, 0.8998668789863586], step: 52800, lr: 9.760401894015275e-05 |
|
2023-03-08 05:06:30,203 44k INFO Saving model and optimizer state at iteration 190 to ./logs/44k/G_52800.pth |
|
2023-03-08 05:06:33,171 44k INFO Saving model and optimizer state at iteration 190 to ./logs/44k/D_52800.pth |
|
2023-03-08 05:06:35,583 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_48800.pth |
|
2023-03-08 05:06:35,585 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_48800.pth |
|
2023-03-08 05:06:59,059 44k INFO ====> Epoch: 190, cost 278.33 s |
|
2023-03-08 05:09:47,847 44k INFO Train Epoch: 191 [65%] |
|
2023-03-08 05:09:47,849 44k INFO Losses: [2.640974521636963, 2.1343133449554443, 11.663924217224121, 18.034332275390625, 0.8877753019332886], step: 53000, lr: 9.759181843778522e-05 |
|
2023-03-08 05:11:16,208 44k INFO ====> Epoch: 191, cost 257.15 s |
|
2023-03-08 05:12:53,991 44k INFO Train Epoch: 192 [37%] |
|
2023-03-08 05:12:53,993 44k INFO Losses: [2.503286123275757, 2.2636313438415527, 11.60041618347168, 19.578296661376953, 0.9012569785118103], step: 53200, lr: 9.757961946048049e-05 |
|
2023-03-08 05:15:32,584 44k INFO ====> Epoch: 192, cost 256.38 s |
|
2023-03-08 05:16:02,806 44k INFO Train Epoch: 193 [9%] |
|
2023-03-08 05:16:02,808 44k INFO Losses: [2.530435085296631, 2.2164292335510254, 9.435452461242676, 14.907962799072266, 0.6407942175865173], step: 53400, lr: 9.756742200804793e-05 |
|
2023-03-08 05:19:03,314 44k INFO Train Epoch: 193 [81%] |
|
2023-03-08 05:19:03,316 44k INFO Losses: [2.4777112007141113, 2.214780807495117, 10.001119613647461, 15.452930450439453, 0.8069971799850464], step: 53600, lr: 9.756742200804793e-05 |
|
2023-03-08 05:19:10,845 44k INFO Saving model and optimizer state at iteration 193 to ./logs/44k/G_53600.pth |
|
2023-03-08 05:19:14,193 44k INFO Saving model and optimizer state at iteration 193 to ./logs/44k/D_53600.pth |
|
2023-03-08 05:19:17,316 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_49600.pth |
|
2023-03-08 05:19:17,319 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_49600.pth |
|
2023-03-08 05:20:08,245 44k INFO ====> Epoch: 193, cost 275.66 s |
|
2023-03-08 05:22:27,621 44k INFO Train Epoch: 194 [53%] |
|
2023-03-08 05:22:27,622 44k INFO Losses: [2.5554091930389404, 2.327653408050537, 9.592726707458496, 17.241243362426758, 0.8570868372917175], step: 53800, lr: 9.755522608029692e-05 |
|
2023-03-08 05:24:27,298 44k INFO ====> Epoch: 194, cost 259.05 s |
|
2023-03-08 05:25:36,719 44k INFO Train Epoch: 195 [24%] |
|
2023-03-08 05:25:36,721 44k INFO Losses: [2.5192668437957764, 2.2793262004852295, 8.477702140808105, 16.781925201416016, 1.04868745803833], step: 54000, lr: 9.754303167703689e-05 |
|
2023-03-08 05:28:37,501 44k INFO Train Epoch: 195 [96%] |
|
2023-03-08 05:28:37,503 44k INFO Losses: [2.556217670440674, 1.8666434288024902, 8.66825008392334, 15.158280372619629, 1.203574538230896], step: 54200, lr: 9.754303167703689e-05 |
|
2023-03-08 05:28:45,643 44k INFO ====> Epoch: 195, cost 258.34 s |
|
2023-03-08 05:31:45,232 44k INFO Train Epoch: 196 [68%] |
|
2023-03-08 05:31:45,235 44k INFO Losses: [2.627040147781372, 2.0793492794036865, 10.513725280761719, 15.390976905822754, 0.9893448352813721], step: 54400, lr: 9.753083879807726e-05 |
|
2023-03-08 05:31:54,011 44k INFO Saving model and optimizer state at iteration 196 to ./logs/44k/G_54400.pth |
|
2023-03-08 05:31:56,757 44k INFO Saving model and optimizer state at iteration 196 to ./logs/44k/D_54400.pth |
|
2023-03-08 05:31:59,228 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_50400.pth |
|
2023-03-08 05:31:59,236 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_50400.pth |
|
2023-03-08 05:33:21,983 44k INFO ====> Epoch: 196, cost 276.34 s |
|
2023-03-08 05:35:09,517 44k INFO Train Epoch: 197 [40%] |
|
2023-03-08 05:35:09,519 44k INFO Losses: [2.6448466777801514, 1.947778344154358, 6.6383209228515625, 15.120026588439941, 0.9264582991600037], step: 54600, lr: 9.75186474432275e-05 |
|
2023-03-08 05:37:40,324 44k INFO ====> Epoch: 197, cost 258.34 s |
|
2023-03-08 05:38:19,148 44k INFO Train Epoch: 198 [12%] |
|
2023-03-08 05:38:19,150 44k INFO Losses: [2.3952550888061523, 2.393462896347046, 14.773964881896973, 20.60763931274414, 0.7379561066627502], step: 54800, lr: 9.750645761229709e-05 |
|
2023-03-08 05:41:20,312 44k INFO Train Epoch: 198 [84%] |
|
2023-03-08 05:41:20,314 44k INFO Losses: [2.4372334480285645, 2.1041972637176514, 10.741508483886719, 19.983760833740234, 0.8741334676742554], step: 55000, lr: 9.750645761229709e-05 |
|
2023-03-08 05:41:58,893 44k INFO ====> Epoch: 198, cost 258.57 s |
|
2023-03-08 05:44:28,730 44k INFO Train Epoch: 199 [56%] |
|
2023-03-08 05:44:28,732 44k INFO Losses: [2.6375505924224854, 2.1682519912719727, 9.023967742919922, 16.405948638916016, 0.7947947978973389], step: 55200, lr: 9.749426930509556e-05 |
|
2023-03-08 05:44:36,645 44k INFO Saving model and optimizer state at iteration 199 to ./logs/44k/G_55200.pth |
|
2023-03-08 05:44:39,689 44k INFO Saving model and optimizer state at iteration 199 to ./logs/44k/D_55200.pth |
|
2023-03-08 05:44:42,058 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_51200.pth |
|
2023-03-08 05:44:42,062 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_51200.pth |
|
2023-03-10 06:11:31,011 44k INFO {'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 6536180, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': True, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 5}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'nahida': 0, 'pecorine': 1, 'ayaka': 2}, 'model_dir': './logs/44k'}
2023-03-10 06:11:49,210 44k INFO Loaded checkpoint './logs/44k/G_55200.pth' (iteration 199)
2023-03-10 06:11:56,768 44k INFO Loaded checkpoint './logs/44k/D_55200.pth' (iteration 199)
2023-03-10 11:32:13,881 44k INFO {'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 6536180, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': True, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 5}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'nahida': 0, 'pecorine': 1, 'ayaka': 2}, 'model_dir': './logs/44k'}
2023-03-10 11:32:14,574 44k WARNING git hash values are different. 8eb41030(saved) != ca6c8465(current)
2023-03-10 11:32:35,132 44k INFO Loaded checkpoint './logs/44k/G_55200.pth' (iteration 199)
2023-03-10 11:32:41,989 44k INFO Loaded checkpoint './logs/44k/D_55200.pth' (iteration 199)
2023-03-12 13:06:05,298 44k INFO {'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 6536180, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': True, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 5}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'nahida': 0, 'pecorine': 1, 'ayaka': 2}, 'model_dir': './logs/44k'}
2023-03-12 13:06:06,052 44k WARNING git hash values are different. 8eb41030(saved) != b3430e73(current)
2023-03-12 13:06:22,775 44k INFO Loaded checkpoint './logs/44k/G_55200.pth' (iteration 199)
2023-03-12 13:06:28,873 44k INFO Loaded checkpoint './logs/44k/D_55200.pth' (iteration 199)
2023-03-12 13:09:42,702 44k INFO Train Epoch: 199 [56%]
2023-03-12 13:09:42,706 44k INFO Losses: [2.867094039916992, 2.096658706665039, 5.602962017059326, 13.343400955200195, 1.0457953214645386], step: 55200, lr: 9.748208252143241e-05
2023-03-12 13:09:54,076 44k INFO Saving model and optimizer state at iteration 199 to ./logs/44k/G_55200.pth
2023-03-12 13:09:57,056 44k INFO Saving model and optimizer state at iteration 199 to ./logs/44k/D_55200.pth
2023-03-12 13:12:12,545 44k INFO ====> Epoch: 199, cost 367.25 s
2023-03-12 13:13:30,708 44k INFO Train Epoch: 200 [28%]
2023-03-12 13:13:30,709 44k INFO Losses: [2.579282522201538, 2.0904085636138916, 10.616829872131348, 15.402456283569336, 0.7100027203559875], step: 55400, lr: 9.746989726111722e-05
2023-03-12 13:16:26,015 44k INFO ====> Epoch: 200, cost 253.47 s
2023-03-12 13:16:35,536 44k INFO Train Epoch: 201 [0%]
2023-03-12 13:16:35,538 44k INFO Losses: [2.5722951889038086, 2.367297410964966, 8.220600128173828, 16.465055465698242, 0.6941350698471069], step: 55600, lr: 9.745771352395957e-05
2023-03-12 13:19:32,836 44k INFO Train Epoch: 201 [72%]
2023-03-12 13:19:32,837 44k INFO Losses: [2.4825634956359863, 2.2788455486297607, 10.557487487792969, 18.21208953857422, 0.8206163048744202], step: 55800, lr: 9.745771352395957e-05
2023-03-12 13:20:42,399 44k INFO ====> Epoch: 201, cost 256.38 s
2023-03-12 13:22:39,352 44k INFO Train Epoch: 202 [44%]
2023-03-12 13:22:39,354 44k INFO Losses: [2.560046672821045, 2.214576244354248, 9.238738059997559, 15.501837730407715, 0.8402547240257263], step: 56000, lr: 9.744553130976908e-05
2023-03-12 13:22:46,095 44k INFO Saving model and optimizer state at iteration 202 to ./logs/44k/G_56000.pth
2023-03-12 13:22:48,705 44k INFO Saving model and optimizer state at iteration 202 to ./logs/44k/D_56000.pth
2023-03-12 13:22:51,241 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_52000.pth
2023-03-12 13:22:51,243 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_52000.pth
2023-03-12 13:25:09,913 44k INFO ====> Epoch: 202, cost 267.51 s
2023-03-12 13:25:57,778 44k INFO Train Epoch: 203 [16%]
2023-03-12 13:25:57,780 44k INFO Losses: [2.6917061805725098, 2.1397457122802734, 8.015119552612305, 13.175267219543457, 0.7950461506843567], step: 56200, lr: 9.743335061835535e-05
2023-03-12 13:28:53,907 44k INFO Train Epoch: 203 [88%]
2023-03-12 13:28:53,908 44k INFO Losses: [2.6017580032348633, 2.3494272232055664, 10.359628677368164, 17.97820472717285, 0.8368026614189148], step: 56400, lr: 9.743335061835535e-05
2023-03-12 13:29:23,037 44k INFO ====> Epoch: 203, cost 253.12 s
2023-03-12 13:31:56,484 44k INFO Train Epoch: 204 [60%]
2023-03-12 13:31:56,486 44k INFO Losses: [2.559668779373169, 2.1426491737365723, 8.86723518371582, 16.02739143371582, 0.7935746908187866], step: 56600, lr: 9.742117144952805e-05
2023-03-12 13:33:35,045 44k INFO ====> Epoch: 204, cost 252.01 s
2023-03-12 13:34:58,981 44k INFO Train Epoch: 205 [32%]
2023-03-12 13:34:58,984 44k INFO Losses: [2.8014609813690186, 1.8959035873413086, 8.509600639343262, 14.43089771270752, 0.8097782731056213], step: 56800, lr: 9.740899380309685e-05
2023-03-12 13:35:07,719 44k INFO Saving model and optimizer state at iteration 205 to ./logs/44k/G_56800.pth
2023-03-12 13:35:10,272 44k INFO Saving model and optimizer state at iteration 205 to ./logs/44k/D_56800.pth
2023-03-12 13:35:12,608 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_52800.pth
2023-03-12 13:35:12,610 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_52800.pth
2023-03-12 13:38:03,915 44k INFO ====> Epoch: 205, cost 268.87 s
2023-03-12 13:38:19,365 44k INFO Train Epoch: 206 [4%]
2023-03-12 13:38:19,367 44k INFO Losses: [2.456610679626465, 2.2464256286621094, 10.524425506591797, 17.89781951904297, 0.7622843980789185], step: 57000, lr: 9.739681767887146e-05
2023-03-12 13:41:13,852 44k INFO Train Epoch: 206 [76%]
2023-03-12 13:41:13,853 44k INFO Losses: [2.872180938720703, 1.5828981399536133, 5.144289493560791, 10.761124610900879, 0.7896245718002319], step: 57200, lr: 9.739681767887146e-05
2023-03-12 13:42:13,382 44k INFO ====> Epoch: 206, cost 249.47 s
2023-03-12 13:44:19,411 44k INFO Train Epoch: 207 [47%]
2023-03-12 13:44:19,413 44k INFO Losses: [2.4885921478271484, 2.348688840866089, 8.213828086853027, 13.464022636413574, 0.8682831525802612], step: 57400, lr: 9.73846430766616e-05
2023-03-12 13:46:27,864 44k INFO ====> Epoch: 207, cost 254.48 s
2023-03-12 13:47:23,618 44k INFO Train Epoch: 208 [19%]
2023-03-12 13:47:23,620 44k INFO Losses: [2.510831832885742, 2.145885705947876, 8.884943962097168, 16.34922981262207, 0.3343981206417084], step: 57600, lr: 9.7372469996277e-05
2023-03-12 13:47:30,088 44k INFO Saving model and optimizer state at iteration 208 to ./logs/44k/G_57600.pth
2023-03-12 13:47:32,659 44k INFO Saving model and optimizer state at iteration 208 to ./logs/44k/D_57600.pth
2023-03-12 13:47:35,080 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_53600.pth
2023-03-12 13:47:35,083 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_53600.pth
2023-03-12 13:50:36,212 44k INFO Train Epoch: 208 [91%]
2023-03-12 13:50:36,214 44k INFO Losses: [2.5798935890197754, 2.0975406169891357, 10.629899024963379, 18.604713439941406, 0.3755583167076111], step: 57800, lr: 9.7372469996277e-05
2023-03-12 13:50:57,113 44k INFO ====> Epoch: 208, cost 269.25 s
2023-03-12 13:53:40,696 44k INFO Train Epoch: 209 [63%]
2023-03-12 13:53:40,698 44k INFO Losses: [2.4766204357147217, 2.0255250930786133, 11.849401473999023, 17.192157745361328, 0.5934497117996216], step: 58000, lr: 9.736029843752747e-05
2023-03-12 13:55:10,547 44k INFO ====> Epoch: 209, cost 253.43 s
2023-03-12 13:56:42,204 44k INFO Train Epoch: 210 [35%]
2023-03-12 13:56:42,206 44k INFO Losses: [2.452080488204956, 2.191817045211792, 9.996981620788574, 15.367362976074219, 0.986117959022522], step: 58200, lr: 9.734812840022278e-05
2023-03-12 13:59:21,426 44k INFO ====> Epoch: 210, cost 250.88 s
2023-03-12 13:59:48,801 44k INFO Train Epoch: 211 [7%]
2023-03-12 13:59:48,803 44k INFO Losses: [2.572927236557007, 2.247307300567627, 11.049181938171387, 17.25975799560547, 1.0064451694488525], step: 58400, lr: 9.733595988417275e-05
2023-03-12 13:59:55,912 44k INFO Saving model and optimizer state at iteration 211 to ./logs/44k/G_58400.pth
2023-03-12 13:59:58,482 44k INFO Saving model and optimizer state at iteration 211 to ./logs/44k/D_58400.pth
2023-03-12 14:00:01,248 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_54400.pth
2023-03-12 14:00:01,250 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_54400.pth
2023-03-12 14:03:02,418 44k INFO Train Epoch: 211 [79%]
2023-03-12 14:03:02,420 44k INFO Losses: [2.8126258850097656, 2.0518317222595215, 6.454041481018066, 12.571066856384277, 0.8966721296310425], step: 58600, lr: 9.733595988417275e-05
2023-03-12 14:03:53,635 44k INFO ====> Epoch: 211, cost 272.21 s
2023-03-12 14:06:05,614 44k INFO Train Epoch: 212 [51%]
2023-03-12 14:06:05,616 44k INFO Losses: [2.587939739227295, 2.1030514240264893, 11.71280574798584, 17.648576736450195, 0.7072951793670654], step: 58800, lr: 9.732379288918723e-05
2023-03-12 14:08:03,618 44k INFO ====> Epoch: 212, cost 249.98 s
2023-03-12 14:09:06,147 44k INFO Train Epoch: 213 [23%]
2023-03-12 14:09:06,149 44k INFO Losses: [2.781830072402954, 2.186081886291504, 7.905210971832275, 18.877336502075195, 0.8565673828125], step: 59000, lr: 9.731162741507607e-05
2023-03-12 14:12:01,275 44k INFO Train Epoch: 213 [95%]
2023-03-12 14:12:01,277 44k INFO Losses: [2.575045585632324, 2.016603469848633, 10.632702827453613, 15.472416877746582, 1.2178444862365723], step: 59200, lr: 9.731162741507607e-05
2023-03-12 14:12:10,430 44k INFO Saving model and optimizer state at iteration 213 to ./logs/44k/G_59200.pth
2023-03-12 14:12:14,495 44k INFO Saving model and optimizer state at iteration 213 to ./logs/44k/D_59200.pth
2023-03-12 14:12:17,123 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_55200.pth
2023-03-12 14:12:17,127 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_55200.pth
2023-03-12 14:12:30,712 44k INFO ====> Epoch: 213, cost 267.09 s
2023-03-12 14:15:22,859 44k INFO Train Epoch: 214 [67%]
2023-03-12 14:15:22,861 44k INFO Losses: [2.8381552696228027, 2.1603240966796875, 8.633986473083496, 15.808666229248047, 0.4168325364589691], step: 59400, lr: 9.729946346164919e-05
2023-03-12 14:16:43,250 44k INFO ====> Epoch: 214, cost 252.54 s
2023-03-12 14:18:23,519 44k INFO Train Epoch: 215 [39%]
2023-03-12 14:18:23,521 44k INFO Losses: [2.3445827960968018, 2.306351661682129, 12.358965873718262, 18.1209716796875, 0.6720661520957947], step: 59600, lr: 9.728730102871649e-05
2023-03-12 14:20:50,037 44k INFO ====> Epoch: 215, cost 246.79 s
2023-03-12 14:21:24,112 44k INFO Train Epoch: 216 [11%]
2023-03-12 14:21:24,113 44k INFO Losses: [2.5946149826049805, 2.237588882446289, 11.049094200134277, 16.303421020507812, 0.7405263185501099], step: 59800, lr: 9.727514011608789e-05
2023-03-12 14:24:19,412 44k INFO Train Epoch: 216 [83%]
2023-03-12 14:24:19,413 44k INFO Losses: [2.388878107070923, 2.317551851272583, 12.293089866638184, 20.145349502563477, 0.7792332768440247], step: 60000, lr: 9.727514011608789e-05
2023-03-12 14:24:26,998 44k INFO Saving model and optimizer state at iteration 216 to ./logs/44k/G_60000.pth
2023-03-12 14:24:30,437 44k INFO Saving model and optimizer state at iteration 216 to ./logs/44k/D_60000.pth
2023-03-12 14:24:32,647 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_56000.pth
2023-03-12 14:24:32,650 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_56000.pth
2023-03-12 14:25:18,418 44k INFO ====> Epoch: 216, cost 268.38 s
2023-03-12 14:27:36,893 44k INFO Train Epoch: 217 [55%]
2023-03-12 14:27:36,896 44k INFO Losses: [2.597727060317993, 2.160306692123413, 11.322202682495117, 18.636821746826172, 0.6614423394203186], step: 60200, lr: 9.726298072357337e-05
2023-03-12 14:29:27,749 44k INFO ====> Epoch: 217, cost 249.33 s
2023-03-12 14:30:39,554 44k INFO Train Epoch: 218 [27%]
2023-03-12 14:30:39,557 44k INFO Losses: [2.6058602333068848, 1.995628833770752, 5.30747127532959, 14.649539947509766, 0.5701935291290283], step: 60400, lr: 9.725082285098293e-05
2023-03-12 14:33:33,465 44k INFO Train Epoch: 218 [99%]
2023-03-12 14:33:33,467 44k INFO Losses: [2.5021491050720215, 2.313009738922119, 13.171293258666992, 21.075410842895508, 0.6914074420928955], step: 60600, lr: 9.725082285098293e-05
2023-03-12 14:33:37,201 44k INFO ====> Epoch: 218, cost 249.45 s
2023-03-12 14:36:32,131 44k INFO Train Epoch: 219 [71%]
2023-03-12 14:36:32,133 44k INFO Losses: [2.525357723236084, 2.4076426029205322, 13.72713565826416, 19.235841751098633, 0.7879021167755127], step: 60800, lr: 9.723866649812655e-05
2023-03-12 14:36:39,734 44k INFO Saving model and optimizer state at iteration 219 to ./logs/44k/G_60800.pth
2023-03-12 14:36:44,198 44k INFO Saving model and optimizer state at iteration 219 to ./logs/44k/D_60800.pth
2023-03-12 14:36:46,802 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_56800.pth
2023-03-12 14:36:46,808 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_56800.pth
2023-03-12 14:37:59,936 44k INFO ====> Epoch: 219, cost 262.73 s
2023-03-12 14:39:48,583 44k INFO Train Epoch: 220 [42%]
2023-03-12 14:39:48,585 44k INFO Losses: [2.588949680328369, 2.0648741722106934, 9.743427276611328, 17.24971580505371, 1.1336355209350586], step: 61000, lr: 9.722651166481428e-05
2023-03-12 14:42:08,786 44k INFO ====> Epoch: 220, cost 248.85 s
2023-03-12 14:42:49,964 44k INFO Train Epoch: 221 [14%]
2023-03-12 14:42:49,966 44k INFO Losses: [2.620333433151245, 2.0564239025115967, 6.077672004699707, 13.936506271362305, 0.4658326804637909], step: 61200, lr: 9.721435835085619e-05
2023-03-12 14:45:45,610 44k INFO Train Epoch: 221 [86%]
2023-03-12 14:45:45,612 44k INFO Losses: [2.6854801177978516, 2.211747169494629, 12.74728775024414, 17.967695236206055, 0.811805784702301], step: 61400, lr: 9.721435835085619e-05
2023-03-12 14:46:18,920 44k INFO ====> Epoch: 221, cost 250.13 s
2023-03-12 14:48:46,759 44k INFO Train Epoch: 222 [58%]
2023-03-12 14:48:46,761 44k INFO Losses: [2.7275261878967285, 2.1530649662017822, 12.941401481628418, 22.115461349487305, 0.6978079676628113], step: 61600, lr: 9.720220655606233e-05
2023-03-12 14:48:52,768 44k INFO Saving model and optimizer state at iteration 222 to ./logs/44k/G_61600.pth
2023-03-12 14:48:56,972 44k INFO Saving model and optimizer state at iteration 222 to ./logs/44k/D_61600.pth
2023-03-12 14:48:59,495 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_57600.pth
2023-03-12 14:48:59,499 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_57600.pth
2023-03-12 14:50:43,322 44k INFO ====> Epoch: 222, cost 264.40 s
2023-03-12 14:52:03,266 44k INFO Train Epoch: 223 [30%]
2023-03-12 14:52:03,268 44k INFO Losses: [2.619351625442505, 2.148245334625244, 8.897567749023438, 14.752212524414062, 0.9480798840522766], step: 61800, lr: 9.719005628024282e-05
2023-03-12 14:54:52,300 44k INFO ====> Epoch: 223, cost 248.98 s
2023-03-12 14:55:03,905 44k INFO Train Epoch: 224 [2%]
2023-03-12 14:55:03,908 44k INFO Losses: [2.5803394317626953, 2.006023645401001, 10.79987621307373, 19.320594787597656, 1.0995209217071533], step: 62000, lr: 9.717790752320778e-05
2023-03-12 14:57:58,706 44k INFO Train Epoch: 224 [74%]
2023-03-12 14:57:58,707 44k INFO Losses: [2.333056688308716, 2.8939707279205322, 10.228975296020508, 18.292320251464844, 0.7481739521026611], step: 62200, lr: 9.717790752320778e-05
2023-03-12 14:59:01,970 44k INFO ====> Epoch: 224, cost 249.67 s
2023-03-12 15:01:01,079 44k INFO Train Epoch: 225 [46%]
2023-03-12 15:01:01,081 44k INFO Losses: [2.5180459022521973, 2.05826997756958, 9.170095443725586, 16.7716121673584, 0.3223544955253601], step: 62400, lr: 9.716576028476738e-05
2023-03-12 15:01:08,379 44k INFO Saving model and optimizer state at iteration 225 to ./logs/44k/G_62400.pth
2023-03-12 15:01:11,467 44k INFO Saving model and optimizer state at iteration 225 to ./logs/44k/D_62400.pth
2023-03-12 15:01:13,740 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_58400.pth
2023-03-12 15:01:13,752 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_58400.pth
2023-03-12 15:03:29,037 44k INFO ====> Epoch: 225, cost 267.07 s
2023-03-12 15:04:20,315 44k INFO Train Epoch: 226 [18%]
2023-03-12 15:04:20,317 44k INFO Losses: [2.413461685180664, 2.4479119777679443, 10.422577857971191, 14.863335609436035, 0.7665829658508301], step: 62600, lr: 9.715361456473177e-05
2023-03-12 15:07:15,432 44k INFO Train Epoch: 226 [90%]
2023-03-12 15:07:15,434 44k INFO Losses: [2.6173720359802246, 2.04468035697937, 10.96248722076416, 18.17538833618164, 0.5090356469154358], step: 62800, lr: 9.715361456473177e-05
2023-03-12 15:07:39,698 44k INFO ====> Epoch: 226, cost 250.66 s
2023-03-12 15:10:17,005 44k INFO Train Epoch: 227 [62%]
2023-03-12 15:10:17,007 44k INFO Losses: [2.567997932434082, 2.016686201095581, 11.887686729431152, 20.10011100769043, 0.44921478629112244], step: 63000, lr: 9.714147036291117e-05
2023-03-12 15:11:48,372 44k INFO ====> Epoch: 227, cost 248.67 s
2023-03-12 15:13:15,732 44k INFO Train Epoch: 228 [34%]
2023-03-12 15:13:15,734 44k INFO Losses: [2.730997085571289, 2.037764549255371, 10.460418701171875, 16.122173309326172, 0.7870036363601685], step: 63200, lr: 9.71293276791158e-05
2023-03-12 15:13:22,819 44k INFO Saving model and optimizer state at iteration 228 to ./logs/44k/G_63200.pth
2023-03-12 15:13:27,984 44k INFO Saving model and optimizer state at iteration 228 to ./logs/44k/D_63200.pth
2023-03-12 15:13:30,182 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_59200.pth
2023-03-12 15:13:30,187 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_59200.pth
2023-03-12 15:16:13,199 44k INFO ====> Epoch: 228, cost 264.83 s
2023-03-12 15:16:33,083 44k INFO Train Epoch: 229 [6%]
2023-03-12 15:16:33,085 44k INFO Losses: [2.4242103099823, 2.2893857955932617, 12.6552095413208, 19.816349029541016, 0.4829581379890442], step: 63400, lr: 9.711718651315591e-05
2023-03-12 15:19:28,198 44k INFO Train Epoch: 229 [78%]
2023-03-12 15:19:28,200 44k INFO Losses: [2.5137088298797607, 2.0569515228271484, 7.1267218589782715, 14.959567070007324, 0.8801150321960449], step: 63600, lr: 9.711718651315591e-05
2023-03-12 15:20:22,694 44k INFO ====> Epoch: 229, cost 249.49 s
2023-03-12 15:22:28,710 44k INFO Train Epoch: 230 [50%]
2023-03-12 15:22:28,712 44k INFO Losses: [2.579265832901001, 2.095125198364258, 7.911256313323975, 14.356263160705566, 1.0221589803695679], step: 63800, lr: 9.710504686484176e-05
2023-03-12 15:24:30,081 44k INFO ====> Epoch: 230, cost 247.39 s
2023-03-12 15:25:27,388 44k INFO Train Epoch: 231 [22%]
2023-03-12 15:25:27,395 44k INFO Losses: [2.475656509399414, 2.0625710487365723, 11.336868286132812, 17.005508422851562, 1.149176836013794], step: 64000, lr: 9.709290873398365e-05
2023-03-12 15:25:36,717 44k INFO Saving model and optimizer state at iteration 231 to ./logs/44k/G_64000.pth
2023-03-12 15:25:39,361 44k INFO Saving model and optimizer state at iteration 231 to ./logs/44k/D_64000.pth
2023-03-12 15:25:41,913 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_60000.pth
2023-03-12 15:25:41,916 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_60000.pth
2023-03-12 15:28:40,784 44k INFO Train Epoch: 231 [94%]
2023-03-12 15:28:40,786 44k INFO Losses: [2.443161725997925, 2.304654598236084, 7.273852825164795, 14.532280921936035, 0.8637359738349915], step: 64200, lr: 9.709290873398365e-05
2023-03-12 15:28:57,423 44k INFO ====> Epoch: 231, cost 267.34 s
2023-03-12 15:31:43,472 44k INFO Train Epoch: 232 [65%]
2023-03-12 15:31:43,475 44k INFO Losses: [2.4536659717559814, 2.011228322982788, 12.22918701171875, 18.97743034362793, 0.4795770049095154], step: 64400, lr: 9.70807721203919e-05
2023-03-12 15:33:06,504 44k INFO ====> Epoch: 232, cost 249.08 s
2023-03-12 15:34:45,186 44k INFO Train Epoch: 233 [37%]
2023-03-12 15:34:45,188 44k INFO Losses: [2.6807632446289062, 1.846329689025879, 9.645062446594238, 15.59117317199707, 0.9902579188346863], step: 64600, lr: 9.706863702387684e-05
2023-03-12 15:37:16,098 44k INFO ====> Epoch: 233, cost 249.59 s
2023-03-12 15:37:46,697 44k INFO Train Epoch: 234 [9%]
2023-03-12 15:37:46,699 44k INFO Losses: [2.553539752960205, 2.05068302154541, 12.330363273620605, 18.520849227905273, 0.4849688708782196], step: 64800, lr: 9.705650344424885e-05
2023-03-12 15:37:52,970 44k INFO Saving model and optimizer state at iteration 234 to ./logs/44k/G_64800.pth
2023-03-12 15:37:57,213 44k INFO Saving model and optimizer state at iteration 234 to ./logs/44k/D_64800.pth
2023-03-12 15:37:59,396 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_60800.pth
2023-03-12 15:37:59,400 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_60800.pth
2023-03-12 15:40:58,196 44k INFO Train Epoch: 234 [81%]
2023-03-12 15:40:58,198 44k INFO Losses: [2.6669418811798096, 2.0348470211029053, 8.598617553710938, 16.818626403808594, 0.5185145139694214], step: 65000, lr: 9.705650344424885e-05
2023-03-12 15:41:43,264 44k INFO ====> Epoch: 234, cost 267.17 s
2023-03-12 15:44:01,049 44k INFO Train Epoch: 235 [53%]
2023-03-12 15:44:01,051 44k INFO Losses: [2.404184579849243, 2.1889047622680664, 13.174363136291504, 17.788490295410156, 0.9351546764373779], step: 65200, lr: 9.704437138131832e-05
2023-03-12 15:45:54,601 44k INFO ====> Epoch: 235, cost 251.34 s
2023-03-12 15:47:02,323 44k INFO Train Epoch: 236 [25%]
2023-03-12 15:47:02,326 44k INFO Losses: [2.553358316421509, 2.249821662902832, 8.715449333190918, 15.344515800476074, 0.8274469971656799], step: 65400, lr: 9.703224083489565e-05
2023-03-12 15:49:57,024 44k INFO Train Epoch: 236 [97%]
2023-03-12 15:49:57,028 44k INFO Losses: [2.43141508102417, 2.317412853240967, 12.316389083862305, 19.838327407836914, 0.9262612462043762], step: 65600, lr: 9.703224083489565e-05
2023-03-12 15:50:06,286 44k INFO Saving model and optimizer state at iteration 236 to ./logs/44k/G_65600.pth
2023-03-12 15:50:08,795 44k INFO Saving model and optimizer state at iteration 236 to ./logs/44k/D_65600.pth
2023-03-12 15:50:11,453 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_61600.pth
2023-03-12 15:50:11,455 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_61600.pth
2023-03-12 15:50:18,397 44k INFO ====> Epoch: 236, cost 263.80 s
2023-03-12 15:53:15,703 44k INFO Train Epoch: 237 [69%]
2023-03-12 15:53:15,704 44k INFO Losses: [2.6023664474487305, 2.032954692840576, 10.863702774047852, 16.146764755249023, 0.5743778944015503], step: 65800, lr: 9.702011180479129e-05
2023-03-12 15:54:28,921 44k INFO ====> Epoch: 237, cost 250.52 s
2023-03-12 15:56:16,146 44k INFO Train Epoch: 238 [41%]
2023-03-12 15:56:16,148 44k INFO Losses: [2.5099191665649414, 2.1877362728118896, 11.875418663024902, 19.411998748779297, 1.0612143278121948], step: 66000, lr: 9.700798429081568e-05
2023-03-12 15:58:37,695 44k INFO ====> Epoch: 238, cost 248.77 s
2023-03-12 15:59:13,780 44k INFO Train Epoch: 239 [13%]
2023-03-12 15:59:13,782 44k INFO Losses: [2.636387586593628, 2.17634916305542, 8.95136833190918, 16.582645416259766, 0.8876423835754395], step: 66200, lr: 9.699585829277933e-05
2023-03-12 16:02:09,106 44k INFO Train Epoch: 239 [85%]
2023-03-12 16:02:09,107 44k INFO Losses: [2.7759220600128174, 1.925954818725586, 4.897927284240723, 10.74158763885498, 0.8560176491737366], step: 66400, lr: 9.699585829277933e-05
2023-03-12 16:02:15,916 44k INFO Saving model and optimizer state at iteration 239 to ./logs/44k/G_66400.pth
2023-03-12 16:02:18,392 44k INFO Saving model and optimizer state at iteration 239 to ./logs/44k/D_66400.pth
2023-03-12 16:02:21,070 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_62400.pth
2023-03-12 16:02:21,073 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_62400.pth
2023-03-12 16:03:01,581 44k INFO ====> Epoch: 239, cost 263.89 s
2023-03-12 16:05:24,002 44k INFO Train Epoch: 240 [57%]
2023-03-12 16:05:24,004 44k INFO Losses: [2.4151320457458496, 2.193824291229248, 13.378921508789062, 16.258865356445312, 0.7470610737800598], step: 66600, lr: 9.698373381049272e-05
2023-03-12 16:07:06,780 44k INFO ====> Epoch: 240, cost 245.20 s
2023-03-12 16:08:22,955 44k INFO Train Epoch: 241 [29%]
2023-03-12 16:08:22,957 44k INFO Losses: [2.478982448577881, 2.1977834701538086, 8.759859085083008, 18.140165328979492, 1.1082794666290283], step: 66800, lr: 9.69716108437664e-05
2023-03-12 16:11:14,285 44k INFO ====> Epoch: 241, cost 247.51 s
2023-03-12 16:11:24,393 44k INFO Train Epoch: 242 [1%]
2023-03-12 16:11:24,394 44k INFO Losses: [2.523944854736328, 2.1188552379608154, 8.087194442749023, 14.8621244430542, 0.6529416441917419], step: 67000, lr: 9.695948939241093e-05
2023-03-12 16:14:17,183 44k INFO Train Epoch: 242 [73%]
2023-03-12 16:14:17,185 44k INFO Losses: [2.466418504714966, 2.3935930728912354, 11.363789558410645, 18.98698616027832, 0.8103567361831665], step: 67200, lr: 9.695948939241093e-05
2023-03-12 16:14:25,090 44k INFO Saving model and optimizer state at iteration 242 to ./logs/44k/G_67200.pth
2023-03-12 16:14:27,616 44k INFO Saving model and optimizer state at iteration 242 to ./logs/44k/D_67200.pth
2023-03-12 16:14:29,966 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_63200.pth
2023-03-12 16:14:29,970 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_63200.pth
2023-03-12 16:15:38,795 44k INFO ====> Epoch: 242, cost 264.51 s
2023-03-12 16:17:31,695 44k INFO Train Epoch: 243 [45%]
2023-03-12 16:17:31,696 44k INFO Losses: [2.738664150238037, 1.9747236967086792, 5.855485916137695, 15.118000030517578, 1.0234506130218506], step: 67400, lr: 9.694736945623688e-05
2023-03-12 16:19:43,603 44k INFO ====> Epoch: 243, cost 244.81 s
2023-03-12 16:20:28,302 44k INFO Train Epoch: 244 [17%]
2023-03-12 16:20:28,303 44k INFO Losses: [2.6213173866271973, 2.154243230819702, 5.325231075286865, 15.083169937133789, 0.5539212822914124], step: 67600, lr: 9.693525103505484e-05
2023-03-12 16:23:19,932 44k INFO Train Epoch: 244 [88%]
2023-03-12 16:23:19,934 44k INFO Losses: [2.601935625076294, 2.169739007949829, 8.720893859863281, 16.572656631469727, 0.9842542409896851], step: 67800, lr: 9.693525103505484e-05
2023-03-12 16:23:46,553 44k INFO ====> Epoch: 244, cost 242.95 s
2023-03-12 16:26:17,137 44k INFO Train Epoch: 245 [60%]
2023-03-12 16:26:17,139 44k INFO Losses: [2.4458537101745605, 2.097778081893921, 8.78833293914795, 15.284719467163086, 0.70112544298172], step: 68000, lr: 9.692313412867544e-05
2023-03-12 16:26:24,898 44k INFO Saving model and optimizer state at iteration 245 to ./logs/44k/G_68000.pth
2023-03-12 16:26:27,388 44k INFO Saving model and optimizer state at iteration 245 to ./logs/44k/D_68000.pth
2023-03-12 16:26:29,860 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_64000.pth
2023-03-12 16:26:29,862 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_64000.pth
2023-03-12 16:28:06,195 44k INFO ====> Epoch: 245, cost 259.64 s
2023-03-12 16:29:29,587 44k INFO Train Epoch: 246 [32%]
2023-03-12 16:29:29,588 44k INFO Losses: [2.631030797958374, 2.1552133560180664, 10.98391342163086, 17.606794357299805, 0.553112268447876], step: 68200, lr: 9.691101873690936e-05
2023-03-12 16:32:10,457 44k INFO ====> Epoch: 246, cost 244.26 s
2023-03-12 16:32:26,855 44k INFO Train Epoch: 247 [4%]
2023-03-12 16:32:26,857 44k INFO Losses: [2.7471697330474854, 2.4043848514556885, 6.4419636726379395, 13.926630020141602, 0.6787595152854919], step: 68400, lr: 9.689890485956725e-05
2023-03-13 02:05:14,223 44k INFO {'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 6536180, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': True, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 5}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'nahida': 0, 'pecorine': 1, 'ayaka': 2}, 'model_dir': './logs/44k'}
2023-03-13 02:05:14,771 44k WARNING git hash values are different. 8eb41030(saved) != a2e0be5d(current)
2023-03-13 02:05:31,891 44k INFO Loaded checkpoint './logs/44k/G_68000.pth' (iteration 245)
2023-03-13 02:05:37,841 44k INFO Loaded checkpoint './logs/44k/D_68000.pth' (iteration 245)
2023-03-13 02:06:53,364 44k INFO {'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 6536180, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': True, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 5}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'nahida': 0, 'pecorine': 1, 'ayaka': 2}, 'model_dir': './logs/44k'}
2023-03-13 02:06:53,396 44k WARNING git hash values are different. 8eb41030(saved) != a2e0be5d(current)
2023-03-13 02:07:03,062 44k INFO Loaded checkpoint './logs/44k/G_68000.pth' (iteration 245)
2023-03-13 02:07:06,682 44k INFO Loaded checkpoint './logs/44k/D_68000.pth' (iteration 245)
2023-03-13 02:10:25,272 44k INFO Train Epoch: 245 [60%]
2023-03-13 02:10:25,273 44k INFO Losses: [2.3753011226654053, 2.27905535697937, 8.918689727783203, 16.247440338134766, 0.5379068851470947], step: 68000, lr: 9.691101873690936e-05
2023-03-13 02:10:37,944 44k INFO Saving model and optimizer state at iteration 245 to ./logs/44k/G_68000.pth
2023-03-13 02:10:40,554 44k INFO Saving model and optimizer state at iteration 245 to ./logs/44k/D_68000.pth
2023-03-13 02:12:41,554 44k INFO ====> Epoch: 245, cost 348.19 s
2023-03-13 02:14:08,696 44k INFO Train Epoch: 246 [32%]
2023-03-13 02:14:08,699 44k INFO Losses: [2.514475107192993, 2.0931711196899414, 10.658770561218262, 17.481931686401367, 0.8005085587501526], step: 68200, lr: 9.689890485956725e-05
2023-03-13 02:16:53,608 44k INFO ====> Epoch: 246, cost 252.05 s
2023-03-13 02:17:14,082 44k INFO Train Epoch: 247 [4%]
2023-03-13 02:17:14,084 44k INFO Losses: [2.7624120712280273, 1.7863105535507202, 6.620454788208008, 13.87052059173584, 0.6549901962280273], step: 68400, lr: 9.68867924964598e-05
2023-03-13 02:20:10,258 44k INFO Train Epoch: 247 [76%]
2023-03-13 02:20:10,259 44k INFO Losses: [2.518576145172119, 2.0264625549316406, 8.912225723266602, 15.209362983703613, 0.949323296546936], step: 68600, lr: 9.68867924964598e-05
2023-03-13 02:21:07,724 44k INFO ====> Epoch: 247, cost 254.12 s
2023-03-13 02:23:13,611 44k INFO Train Epoch: 248 [48%]
2023-03-13 02:23:13,613 44k INFO Losses: [2.687075138092041, 2.020977735519409, 9.937043190002441, 16.66067886352539, 0.9747544527053833], step: 68800, lr: 9.687468164739773e-05
2023-03-13 02:23:20,094 44k INFO Saving model and optimizer state at iteration 248 to ./logs/44k/G_68800.pth
2023-03-13 02:23:22,727 44k INFO Saving model and optimizer state at iteration 248 to ./logs/44k/D_68800.pth
2023-03-13 02:23:24,970 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_64800.pth
2023-03-13 02:23:25,163 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_64800.pth
2023-03-13 02:25:34,630 44k INFO ====> Epoch: 248, cost 266.91 s
2023-03-13 02:26:30,631 44k INFO Train Epoch: 249 [20%]
2023-03-13 02:26:30,634 44k INFO Losses: [2.5681660175323486, 2.3107786178588867, 8.722359657287598, 17.35997772216797, 0.7733471393585205], step: 69000, lr: 9.68625723121918e-05
2023-03-13 02:29:25,131 44k INFO Train Epoch: 249 [92%]
2023-03-13 02:29:25,132 44k INFO Losses: [2.3187320232391357, 2.2292468547821045, 11.42715835571289, 18.826480865478516, 0.9165173172950745], step: 69200, lr: 9.68625723121918e-05
2023-03-13 02:29:44,999 44k INFO ====> Epoch: 249, cost 250.37 s
2023-03-13 02:32:28,499 44k INFO Train Epoch: 250 [64%]
2023-03-13 02:32:28,501 44k INFO Losses: [2.523083209991455, 2.269631862640381, 10.666534423828125, 17.45243263244629, 0.3906581997871399], step: 69400, lr: 9.685046449065278e-05
|
2023-03-13 02:33:54,678 44k INFO ====> Epoch: 250, cost 249.68 s |
|
2023-03-13 02:35:28,805 44k INFO Train Epoch: 251 [36%] |
|
2023-03-13 02:35:28,807 44k INFO Losses: [2.6067616939544678, 2.359903335571289, 9.269600868225098, 17.67122459411621, 0.5511075258255005], step: 69600, lr: 9.683835818259144e-05 |
|
2023-03-13 02:35:35,030 44k INFO Saving model and optimizer state at iteration 251 to ./logs/44k/G_69600.pth |
|
2023-03-13 02:35:37,621 44k INFO Saving model and optimizer state at iteration 251 to ./logs/44k/D_69600.pth |
|
2023-03-13 02:35:40,014 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_65600.pth |
|
2023-03-13 02:35:40,016 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_65600.pth |
|
2023-03-13 02:38:17,453 44k INFO ====> Epoch: 251, cost 262.77 s |
|
2023-03-13 02:38:44,620 44k INFO Train Epoch: 252 [8%] |
|
2023-03-13 02:38:44,623 44k INFO Losses: [2.6517302989959717, 2.121471405029297, 8.613264083862305, 13.760108947753906, 0.7544089555740356], step: 69800, lr: 9.68262533878186e-05 |
|
2023-03-13 02:41:39,926 44k INFO Train Epoch: 252 [80%] |
|
2023-03-13 02:41:39,928 44k INFO Losses: [2.6268632411956787, 2.295790672302246, 9.927370071411133, 14.731513977050781, 0.9378222823143005], step: 70000, lr: 9.68262533878186e-05 |
|
2023-03-13 02:42:29,076 44k INFO ====> Epoch: 252, cost 251.62 s |
|
2023-03-13 02:44:41,596 44k INFO Train Epoch: 253 [52%] |
|
2023-03-13 02:44:41,598 44k INFO Losses: [2.497894763946533, 2.178009033203125, 12.212751388549805, 17.594038009643555, 0.41936349868774414], step: 70200, lr: 9.681415010614512e-05 |
|
2023-03-13 02:46:37,643 44k INFO ====> Epoch: 253, cost 248.57 s |
|
2023-03-13 02:47:41,820 44k INFO Train Epoch: 254 [24%] |
|
2023-03-13 02:47:41,822 44k INFO Losses: [2.913564682006836, 1.9914987087249756, 6.174798011779785, 15.305342674255371, 0.5757282972335815], step: 70400, lr: 9.680204833738185e-05 |
|
2023-03-13 02:47:48,322 44k INFO Saving model and optimizer state at iteration 254 to ./logs/44k/G_70400.pth |
|
2023-03-13 02:47:52,547 44k INFO Saving model and optimizer state at iteration 254 to ./logs/44k/D_70400.pth |
|
2023-03-13 02:47:55,586 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_66400.pth |
|
2023-03-13 02:47:55,588 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_66400.pth |
|
2023-03-13 02:50:51,632 44k INFO Train Epoch: 254 [96%] |
|
2023-03-13 02:50:51,634 44k INFO Losses: [2.5018014907836914, 2.1117162704467773, 10.565845489501953, 16.94025993347168, 0.6719183921813965], step: 70600, lr: 9.680204833738185e-05 |
|
2023-03-13 02:51:03,231 44k INFO ====> Epoch: 254, cost 265.59 s |
|
2023-03-13 02:53:55,330 44k INFO Train Epoch: 255 [68%] |
|
2023-03-13 02:53:55,332 44k INFO Losses: [2.6623358726501465, 2.0286827087402344, 6.1648101806640625, 14.981965065002441, 0.6920318007469177], step: 70800, lr: 9.678994808133967e-05 |
|
2023-03-13 02:55:14,118 44k INFO ====> Epoch: 255, cost 250.89 s |
|
2023-03-13 02:56:55,156 44k INFO Train Epoch: 256 [40%] |
|
2023-03-13 02:56:55,158 44k INFO Losses: [2.5348100662231445, 2.104614734649658, 6.101798057556152, 13.747686386108398, 1.110923409461975], step: 71000, lr: 9.67778493378295e-05 |
|
2023-03-13 02:59:21,988 44k INFO ====> Epoch: 256, cost 247.87 s |
|
2023-03-13 02:59:58,163 44k INFO Train Epoch: 257 [12%] |
|
2023-03-13 02:59:58,165 44k INFO Losses: [2.5153656005859375, 2.1039676666259766, 12.398317337036133, 19.302356719970703, 0.6084452867507935], step: 71200, lr: 9.676575210666227e-05 |
|
2023-03-13 03:00:04,352 44k INFO Saving model and optimizer state at iteration 257 to ./logs/44k/G_71200.pth |
|
2023-03-13 03:00:07,034 44k INFO Saving model and optimizer state at iteration 257 to ./logs/44k/D_71200.pth |
|
2023-03-13 03:00:09,673 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_67200.pth |
|
2023-03-13 03:00:09,675 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_67200.pth |
|
2023-03-13 03:03:07,566 44k INFO Train Epoch: 257 [83%] |
|
2023-03-13 03:03:07,567 44k INFO Losses: [2.467252254486084, 2.2102339267730713, 10.489019393920898, 18.76942253112793, 0.8705142140388489], step: 71400, lr: 9.676575210666227e-05 |
|
2023-03-13 03:03:48,596 44k INFO ====> Epoch: 257, cost 266.61 s |
|
2023-03-13 03:06:09,739 44k INFO Train Epoch: 258 [55%] |
|
2023-03-13 03:06:09,742 44k INFO Losses: [2.4284486770629883, 2.3166213035583496, 11.90697956085205, 18.278430938720703, 0.5965749621391296], step: 71600, lr: 9.675365638764893e-05 |
|
2023-03-13 03:07:58,427 44k INFO ====> Epoch: 258, cost 249.83 s |
|
2023-03-13 03:09:11,608 44k INFO Train Epoch: 259 [27%] |
|
2023-03-13 03:09:11,610 44k INFO Losses: [2.4512147903442383, 2.466780424118042, 10.763468742370605, 18.52787971496582, 0.9005886316299438], step: 71800, lr: 9.674156218060047e-05 |
|
2023-03-13 03:12:04,635 44k INFO Train Epoch: 259 [99%] |
|
2023-03-13 03:12:04,637 44k INFO Losses: [2.744731903076172, 2.050581216812134, 7.851590633392334, 13.458305358886719, 0.9999933242797852], step: 72000, lr: 9.674156218060047e-05 |
|
2023-03-13 03:12:12,429 44k INFO Saving model and optimizer state at iteration 259 to ./logs/44k/G_72000.pth |
|
2023-03-13 03:12:15,175 44k INFO Saving model and optimizer state at iteration 259 to ./logs/44k/D_72000.pth |
|
2023-03-13 03:12:17,375 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_68000.pth |
|
2023-03-13 03:12:17,534 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_68000.pth |
|
2023-03-13 03:12:19,370 44k INFO ====> Epoch: 259, cost 260.94 s |
|
2023-03-13 03:15:22,947 44k INFO Train Epoch: 260 [71%] |
|
2023-03-13 03:15:22,949 44k INFO Losses: [2.627340078353882, 2.1337890625, 10.33919620513916, 14.991039276123047, 0.7457967400550842], step: 72200, lr: 9.67294694853279e-05 |
|
2023-03-13 03:16:32,715 44k INFO ====> Epoch: 260, cost 253.35 s |
|
2023-03-13 03:18:24,725 44k INFO Train Epoch: 261 [43%] |
|
2023-03-13 03:18:24,726 44k INFO Losses: [2.4448747634887695, 2.525312900543213, 8.502880096435547, 14.479256629943848, 1.0680208206176758], step: 72400, lr: 9.671737830164223e-05 |
|
2023-03-13 03:20:41,696 44k INFO ====> Epoch: 261, cost 248.98 s |
|
2023-03-13 03:21:24,075 44k INFO Train Epoch: 262 [15%] |
|
2023-03-13 03:21:24,079 44k INFO Losses: [2.365391254425049, 2.4375035762786865, 13.040332794189453, 19.652545928955078, 0.8381502628326416], step: 72600, lr: 9.670528862935451e-05 |
|
2023-03-13 03:24:19,542 44k INFO Train Epoch: 262 [87%] |
|
2023-03-13 03:24:19,543 44k INFO Losses: [2.491877794265747, 2.3855247497558594, 13.531455039978027, 18.78086280822754, 0.6034623384475708], step: 72800, lr: 9.670528862935451e-05 |
|
2023-03-13 03:24:25,892 44k INFO Saving model and optimizer state at iteration 262 to ./logs/44k/G_72800.pth |
|
2023-03-13 03:24:30,191 44k INFO Saving model and optimizer state at iteration 262 to ./logs/44k/D_72800.pth |
|
2023-03-13 03:24:32,878 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_68800.pth |
|
2023-03-13 03:24:32,885 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_68800.pth |
|
2023-03-13 03:25:07,648 44k INFO ====> Epoch: 262, cost 265.95 s |
|
2023-03-13 03:27:35,787 44k INFO Train Epoch: 263 [59%] |
|
2023-03-13 03:27:35,788 44k INFO Losses: [2.4102370738983154, 2.3918375968933105, 5.023255825042725, 12.673888206481934, 0.7361613512039185], step: 73000, lr: 9.669320046827584e-05 |
|
2023-03-13 03:29:14,459 44k INFO ====> Epoch: 263, cost 246.81 s |
|
2023-03-13 03:30:35,665 44k INFO Train Epoch: 264 [31%] |
|
2023-03-13 03:30:35,666 44k INFO Losses: [2.807122230529785, 2.145918607711792, 8.7868070602417, 16.63538932800293, 0.7199400663375854], step: 73200, lr: 9.668111381821731e-05 |
|
2023-03-13 03:33:21,775 44k INFO ====> Epoch: 264, cost 247.32 s |
|
2023-03-13 03:33:36,312 44k INFO Train Epoch: 265 [3%] |
|
2023-03-13 03:33:36,314 44k INFO Losses: [2.540374994277954, 2.1270623207092285, 9.955894470214844, 16.891508102416992, 1.204465627670288], step: 73400, lr: 9.666902867899003e-05 |
|
2023-03-13 03:36:30,157 44k INFO Train Epoch: 265 [75%] |
|
2023-03-13 03:36:30,159 44k INFO Losses: [2.5797204971313477, 2.332695722579956, 8.979292869567871, 15.575179100036621, 0.8092585802078247], step: 73600, lr: 9.666902867899003e-05 |
|
2023-03-13 03:36:38,299 44k INFO Saving model and optimizer state at iteration 265 to ./logs/44k/G_73600.pth |
|
2023-03-13 03:36:41,028 44k INFO Saving model and optimizer state at iteration 265 to ./logs/44k/D_73600.pth |
|
2023-03-13 03:36:43,230 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_69600.pth |
|
2023-03-13 03:36:43,232 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_69600.pth |
|
2023-03-13 03:37:47,357 44k INFO ====> Epoch: 265, cost 265.58 s |
|
2023-03-13 03:39:48,931 44k INFO Train Epoch: 266 [47%] |
|
2023-03-13 03:39:48,933 44k INFO Losses: [2.3858401775360107, 2.4122538566589355, 11.31737232208252, 18.403722763061523, 0.9384048581123352], step: 73800, lr: 9.665694505040515e-05 |
|
2023-03-13 03:41:58,379 44k INFO ====> Epoch: 266, cost 251.02 s |
|
2023-03-13 03:42:51,770 44k INFO Train Epoch: 267 [19%] |
|
2023-03-13 03:42:51,773 44k INFO Losses: [2.554157018661499, 2.147225856781006, 9.451533317565918, 14.032059669494629, 0.6526715159416199], step: 74000, lr: 9.664486293227385e-05 |
|
2023-03-13 03:45:44,945 44k INFO Train Epoch: 267 [91%] |
|
2023-03-13 03:45:44,947 44k INFO Losses: [2.55033540725708, 2.1478939056396484, 9.805465698242188, 16.581083297729492, 0.8333197236061096], step: 74200, lr: 9.664486293227385e-05 |
|
2023-03-13 03:46:08,419 44k INFO ====> Epoch: 267, cost 250.04 s |
|
2023-03-13 03:48:46,404 44k INFO Train Epoch: 268 [63%] |
|
2023-03-13 03:48:46,406 44k INFO Losses: [2.790998697280884, 1.9368486404418945, 6.259941577911377, 12.856212615966797, 0.6998184323310852], step: 74400, lr: 9.663278232440732e-05 |
|
2023-03-13 03:48:53,069 44k INFO Saving model and optimizer state at iteration 268 to ./logs/44k/G_74400.pth |
|
2023-03-13 03:48:57,175 44k INFO Saving model and optimizer state at iteration 268 to ./logs/44k/D_74400.pth |
|
2023-03-13 03:48:59,798 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_70400.pth |
|
2023-03-13 03:48:59,803 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_70400.pth |
|
2023-03-13 03:50:32,770 44k INFO ====> Epoch: 268, cost 264.35 s |
|
2023-03-13 03:52:02,639 44k INFO Train Epoch: 269 [35%] |
|
2023-03-13 03:52:02,640 44k INFO Losses: [3.0721209049224854, 2.0333642959594727, 5.202426910400391, 9.940613746643066, 0.7797895669937134], step: 74600, lr: 9.662070322661676e-05 |
|
2023-03-13 03:54:40,932 44k INFO ====> Epoch: 269, cost 248.16 s |
|
2023-03-13 03:55:01,878 44k INFO Train Epoch: 270 [6%] |
|
2023-03-13 03:55:01,880 44k INFO Losses: [2.6990408897399902, 2.055765151977539, 10.191926002502441, 15.032064437866211, 0.6618675589561462], step: 74800, lr: 9.660862563871342e-05 |
|
2023-03-13 03:57:56,416 44k INFO Train Epoch: 270 [78%] |
|
2023-03-13 03:57:56,418 44k INFO Losses: [2.050360918045044, 2.952244997024536, 5.662623882293701, 11.766480445861816, 1.2061803340911865], step: 75000, lr: 9.660862563871342e-05 |
|
2023-03-13 03:58:48,536 44k INFO ====> Epoch: 270, cost 247.60 s |
|
2023-03-13 04:00:55,092 44k INFO Train Epoch: 271 [50%] |
|
2023-03-13 04:00:55,094 44k INFO Losses: [2.6953256130218506, 2.033416986465454, 7.169999599456787, 14.30778694152832, 1.0952359437942505], step: 75200, lr: 9.659654956050859e-05 |
|
2023-03-13 04:01:03,920 44k INFO Saving model and optimizer state at iteration 271 to ./logs/44k/G_75200.pth |
|
2023-03-13 04:01:06,377 44k INFO Saving model and optimizer state at iteration 271 to ./logs/44k/D_75200.pth |
|
2023-03-13 04:01:08,926 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_71200.pth |
|
2023-03-13 04:01:08,928 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_71200.pth |
|
2023-03-13 04:03:11,484 44k INFO ====> Epoch: 271, cost 262.95 s |
|
2023-03-13 04:04:11,014 44k INFO Train Epoch: 272 [22%] |
|
2023-03-13 04:04:11,019 44k INFO Losses: [2.5392966270446777, 2.1847896575927734, 9.974017143249512, 14.376911163330078, 0.7030041813850403], step: 75400, lr: 9.658447499181352e-05 |
|
2023-03-13 04:07:06,179 44k INFO Train Epoch: 272 [94%] |
|
2023-03-13 04:07:06,180 44k INFO Losses: [2.678795099258423, 2.165299415588379, 6.205851078033447, 12.76257038116455, 0.9570678472518921], step: 75600, lr: 9.658447499181352e-05 |
|
2023-03-13 04:07:19,325 44k INFO ====> Epoch: 272, cost 247.84 s |
|
2023-03-13 04:10:06,829 44k INFO Train Epoch: 273 [66%] |
|
2023-03-13 04:10:06,831 44k INFO Losses: [2.408597946166992, 2.1699063777923584, 14.492467880249023, 18.160053253173828, 0.9006099700927734], step: 75800, lr: 9.657240193243954e-05 |
|
2023-03-13 04:11:26,578 44k INFO ====> Epoch: 273, cost 247.25 s |
|
2023-03-13 04:13:04,684 44k INFO Train Epoch: 274 [38%] |
|
2023-03-13 04:13:04,686 44k INFO Losses: [2.51920747756958, 2.0433549880981445, 10.949957847595215, 15.603584289550781, 0.8394017219543457], step: 76000, lr: 9.656033038219798e-05 |
|
2023-03-13 04:13:12,693 44k INFO Saving model and optimizer state at iteration 274 to ./logs/44k/G_76000.pth |
|
2023-03-13 04:13:15,585 44k INFO Saving model and optimizer state at iteration 274 to ./logs/44k/D_76000.pth |
|
2023-03-13 04:13:17,790 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_72000.pth |
|
2023-03-13 04:13:17,792 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_72000.pth |
|
2023-03-13 04:15:52,543 44k INFO ====> Epoch: 274, cost 265.96 s |
|
2023-03-13 04:16:24,174 44k INFO Train Epoch: 275 [10%] |
|
2023-03-13 04:16:24,175 44k INFO Losses: [2.3568592071533203, 2.390465021133423, 13.915321350097656, 18.41580581665039, 0.4638746678829193], step: 76200, lr: 9.65482603409002e-05 |
|
2023-03-13 04:19:19,997 44k INFO Train Epoch: 275 [82%] |
|
2023-03-13 04:19:19,998 44k INFO Losses: [2.6843764781951904, 2.2024312019348145, 7.619515895843506, 16.0986270904541, 0.7730833888053894], step: 76400, lr: 9.65482603409002e-05 |
|
2023-03-13 04:20:01,834 44k INFO ====> Epoch: 275, cost 249.29 s |
|
2023-03-13 04:22:19,881 44k INFO Train Epoch: 276 [54%] |
|
2023-03-13 04:22:19,883 44k INFO Losses: [2.5161995887756348, 2.1091787815093994, 11.278655052185059, 18.62177848815918, 1.0863178968429565], step: 76600, lr: 9.653619180835758e-05 |
|
2023-03-13 04:24:11,269 44k INFO ====> Epoch: 276, cost 249.44 s |
|
2023-03-13 04:25:21,746 44k INFO Train Epoch: 277 [26%] |
|
2023-03-13 04:25:21,748 44k INFO Losses: [2.5324113368988037, 2.0667383670806885, 11.127120018005371, 13.805733680725098, 0.5423396229743958], step: 76800, lr: 9.652412478438153e-05 |
|
2023-03-13 04:25:27,426 44k INFO Saving model and optimizer state at iteration 277 to ./logs/44k/G_76800.pth |
|
2023-03-13 04:25:32,015 44k INFO Saving model and optimizer state at iteration 277 to ./logs/44k/D_76800.pth |
|
2023-03-13 04:25:34,551 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_72800.pth |
|
2023-03-13 04:25:34,556 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_72800.pth |
|
2023-03-13 04:28:31,611 44k INFO Train Epoch: 277 [98%] |
|
2023-03-13 04:28:31,617 44k INFO Losses: [2.999903440475464, 2.0638670921325684, 7.500654697418213, 13.63475513458252, 1.0076853036880493], step: 77000, lr: 9.652412478438153e-05 |
|
2023-03-13 04:28:38,621 44k INFO ====> Epoch: 277, cost 267.35 s |
|
2023-03-13 04:31:33,418 44k INFO Train Epoch: 278 [70%] |
|
2023-03-13 04:31:33,420 44k INFO Losses: [2.555307149887085, 2.1924989223480225, 9.826170921325684, 15.324247360229492, 0.9565949440002441], step: 77200, lr: 9.651205926878348e-05 |
|
2023-03-13 04:32:44,982 44k INFO ====> Epoch: 278, cost 246.36 s |
|
2023-03-13 04:34:32,485 44k INFO Train Epoch: 279 [42%] |
|
2023-03-13 04:34:32,493 44k INFO Losses: [2.4517159461975098, 2.160984516143799, 9.403005599975586, 15.838665962219238, 0.8669955134391785], step: 77400, lr: 9.649999526137489e-05 |
|
2023-03-13 04:36:54,029 44k INFO ====> Epoch: 279, cost 249.05 s |
|
2023-03-13 04:37:33,929 44k INFO Train Epoch: 280 [14%] |
|
2023-03-13 04:37:33,931 44k INFO Losses: [2.502990484237671, 2.24861478805542, 11.379737854003906, 17.468067169189453, 1.0338046550750732], step: 77600, lr: 9.64879327619672e-05 |
|
2023-03-13 04:37:39,910 44k INFO Saving model and optimizer state at iteration 280 to ./logs/44k/G_77600.pth |
|
2023-03-13 04:37:43,114 44k INFO Saving model and optimizer state at iteration 280 to ./logs/44k/D_77600.pth |
|
2023-03-13 04:37:45,968 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_73600.pth |
|
2023-03-13 04:37:45,972 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_73600.pth |
|
2023-03-13 04:40:45,642 44k INFO Train Epoch: 280 [86%] |
|
2023-03-13 04:40:45,644 44k INFO Losses: [2.6817214488983154, 1.8833922147750854, 8.346673965454102, 18.028839111328125, 0.699009120464325], step: 77800, lr: 9.64879327619672e-05 |
|
2023-03-13 04:41:18,775 44k INFO ====> Epoch: 280, cost 264.75 s |
|
2023-03-13 04:43:43,964 44k INFO Train Epoch: 281 [58%] |
|
2023-03-13 04:43:43,966 44k INFO Losses: [2.466158866882324, 2.297776460647583, 10.740347862243652, 18.46600341796875, 0.8861173987388611], step: 78000, lr: 9.647587177037196e-05 |
|
2023-03-13 04:45:26,234 44k INFO ====> Epoch: 281, cost 247.46 s |
|
2023-03-13 04:46:46,730 44k INFO Train Epoch: 282 [29%] |
|
2023-03-13 04:46:46,731 44k INFO Losses: [2.4028093814849854, 2.1903233528137207, 10.434988021850586, 16.61601448059082, 0.6496502757072449], step: 78200, lr: 9.646381228640066e-05 |
|
2023-03-13 04:49:35,471 44k INFO ====> Epoch: 282, cost 249.24 s |
|
2023-03-13 04:49:45,113 44k INFO Train Epoch: 283 [1%] |
|
2023-03-13 04:49:45,115 44k INFO Losses: [2.900740623474121, 1.619629979133606, 8.670475006103516, 12.757932662963867, 0.7599830031394958], step: 78400, lr: 9.645175430986486e-05 |
|
2023-03-13 04:49:51,817 44k INFO Saving model and optimizer state at iteration 283 to ./logs/44k/G_78400.pth |
|
2023-03-13 04:49:54,740 44k INFO Saving model and optimizer state at iteration 283 to ./logs/44k/D_78400.pth |
|
2023-03-13 04:49:57,324 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_74400.pth |
|
2023-03-13 04:49:57,330 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_74400.pth |
|
2023-03-13 04:52:54,749 44k INFO Train Epoch: 283 [73%] |
|
2023-03-13 04:52:54,750 44k INFO Losses: [2.702277421951294, 1.9365179538726807, 8.455533981323242, 16.741046905517578, 0.9870875477790833], step: 78600, lr: 9.645175430986486e-05 |
|
2023-03-13 04:53:57,338 44k INFO ====> Epoch: 283, cost 261.87 s |
|
2023-03-13 04:55:50,516 44k INFO Train Epoch: 284 [45%] |
|
2023-03-13 04:55:50,518 44k INFO Losses: [2.6061131954193115, 2.0455663204193115, 7.336194038391113, 16.324636459350586, 0.9572528004646301], step: 78800, lr: 9.643969784057613e-05 |
|
2023-03-13 04:58:00,922 44k INFO ====> Epoch: 284, cost 243.58 s |
|
2023-03-13 04:58:49,934 44k INFO Train Epoch: 285 [17%] |
|
2023-03-13 04:58:49,937 44k INFO Losses: [2.7365245819091797, 2.1080188751220703, 8.27223014831543, 15.18332576751709, 0.8505760431289673], step: 79000, lr: 9.642764287834605e-05 |
|
2023-03-13 05:01:40,229 44k INFO Train Epoch: 285 [89%] |
|
2023-03-13 05:01:40,230 44k INFO Losses: [2.5961384773254395, 2.1109437942504883, 7.64267110824585, 16.26929473876953, 0.9975913763046265], step: 79200, lr: 9.642764287834605e-05 |
|
2023-03-13 05:01:48,402 44k INFO Saving model and optimizer state at iteration 285 to ./logs/44k/G_79200.pth |
|
2023-03-13 05:01:51,327 44k INFO Saving model and optimizer state at iteration 285 to ./logs/44k/D_79200.pth |
|
2023-03-13 05:01:53,634 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_75200.pth |
|
2023-03-13 05:01:53,637 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_75200.pth |
|
2023-03-13 05:02:23,716 44k INFO ====> Epoch: 285, cost 262.79 s |
|
2023-03-13 05:04:56,297 44k INFO Train Epoch: 286 [61%] |
|
2023-03-13 05:04:56,299 44k INFO Losses: [2.5625150203704834, 2.1464009284973145, 11.184694290161133, 18.77011489868164, 0.8900436162948608], step: 79400, lr: 9.641558942298625e-05 |
|
2023-03-13 05:06:27,431 44k INFO ====> Epoch: 286, cost 243.72 s |
|
2023-03-13 05:07:53,370 44k INFO Train Epoch: 287 [33%] |
|
2023-03-13 05:07:53,373 44k INFO Losses: [2.4975037574768066, 1.9522514343261719, 9.984993934631348, 14.12394905090332, 0.9637250304222107], step: 79600, lr: 9.640353747430838e-05 |
|
2023-03-13 05:10:32,738 44k INFO ====> Epoch: 287, cost 245.31 s |
|
2023-03-13 05:10:53,640 44k INFO Train Epoch: 288 [5%] |
|
2023-03-13 05:10:53,643 44k INFO Losses: [2.4237921237945557, 2.7752254009246826, 7.622463226318359, 15.162446975708008, 1.0523195266723633], step: 79800, lr: 9.639148703212408e-05 |
|
2023-03-13 05:13:45,943 44k INFO Train Epoch: 288 [77%] |
|
2023-03-13 05:13:45,945 44k INFO Losses: [2.5421671867370605, 2.0984795093536377, 13.122998237609863, 18.174592971801758, 0.8791930079460144], step: 80000, lr: 9.639148703212408e-05 |
|
2023-03-13 05:13:53,790 44k INFO Saving model and optimizer state at iteration 288 to ./logs/44k/G_80000.pth |
|
2023-03-13 05:13:56,637 44k INFO Saving model and optimizer state at iteration 288 to ./logs/44k/D_80000.pth |
|
2023-03-13 05:13:58,788 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_76000.pth |
|
2023-03-13 05:13:58,790 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_76000.pth |
|
2023-03-13 05:14:57,497 44k INFO ====> Epoch: 288, cost 264.76 s |
|
2023-03-13 05:17:01,275 44k INFO Train Epoch: 289 [49%] |
|
2023-03-13 05:17:01,277 44k INFO Losses: [2.509288787841797, 2.160587787628174, 9.227270126342773, 16.645044326782227, 0.4063487946987152], step: 80200, lr: 9.637943809624507e-05 |
|
2023-03-13 05:19:02,640 44k INFO ====> Epoch: 289, cost 245.14 s |
|
2023-03-13 05:19:57,885 44k INFO Train Epoch: 290 [21%] |
|
2023-03-13 05:19:57,887 44k INFO Losses: [2.505749464035034, 2.2792060375213623, 7.860806465148926, 13.22053050994873, 0.8144619464874268], step: 80400, lr: 9.636739066648303e-05 |
|
2023-03-13 05:22:48,632 44k INFO Train Epoch: 290 [93%] |
|
2023-03-13 05:22:48,634 44k INFO Losses: [2.6049301624298096, 1.8997372388839722, 12.994314193725586, 19.9601993560791, 0.8269798159599304], step: 80600, lr: 9.636739066648303e-05 |
|
2023-03-13 05:23:06,547 44k INFO ====> Epoch: 290, cost 243.91 s |
|
2023-03-13 05:25:50,533 44k INFO Train Epoch: 291 [65%] |
|
2023-03-13 05:25:50,535 44k INFO Losses: [2.53548264503479, 2.181180715560913, 7.551103591918945, 15.422593116760254, 0.4769038259983063], step: 80800, lr: 9.635534474264972e-05 |
|
2023-03-13 05:25:57,213 44k INFO Saving model and optimizer state at iteration 291 to ./logs/44k/G_80800.pth |
|
2023-03-13 05:26:00,128 44k INFO Saving model and optimizer state at iteration 291 to ./logs/44k/D_80800.pth |
|
2023-03-13 05:26:02,821 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_76800.pth |
|
2023-03-13 05:26:02,823 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_76800.pth |
|
2023-03-13 05:27:28,632 44k INFO ====> Epoch: 291, cost 262.09 s |
|
2023-03-13 05:29:04,094 44k INFO Train Epoch: 292 [37%] |
|
2023-03-13 05:29:04,096 44k INFO Losses: [2.5795931816101074, 1.9008526802062988, 9.190285682678223, 15.463434219360352, 0.7516130805015564], step: 81000, lr: 9.634330032455689e-05 |
|
2023-03-13 05:31:36,182 44k INFO ====> Epoch: 292, cost 247.55 s |
|
2023-03-13 05:32:04,104 44k INFO Train Epoch: 293 [9%] |
|
2023-03-13 05:32:04,106 44k INFO Losses: [2.7005510330200195, 1.8779635429382324, 8.90747356414795, 15.522599220275879, 0.9284096956253052], step: 81200, lr: 9.633125741201631e-05 |
|
2023-03-13 05:34:55,828 44k INFO Train Epoch: 293 [81%] |
|
2023-03-13 05:34:55,830 44k INFO Losses: [2.733914613723755, 2.0520241260528564, 6.47420597076416, 12.920723915100098, 0.2508591115474701], step: 81400, lr: 9.633125741201631e-05 |
|
2023-03-13 05:35:42,481 44k INFO ====> Epoch: 293, cost 246.30 s |
|
2023-03-13 05:37:54,093 44k INFO Train Epoch: 294 [53%] |
|
2023-03-13 05:37:54,095 44k INFO Losses: [2.475879430770874, 2.4023711681365967, 10.831110954284668, 14.356058120727539, 0.8560411334037781], step: 81600, lr: 9.631921600483981e-05 |
|
2023-03-13 05:38:00,062 44k INFO Saving model and optimizer state at iteration 294 to ./logs/44k/G_81600.pth |
|
2023-03-13 05:38:04,577 44k INFO Saving model and optimizer state at iteration 294 to ./logs/44k/D_81600.pth |
|
2023-03-13 05:38:06,929 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_77600.pth |
|
2023-03-13 05:38:06,931 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_77600.pth |
|
2023-03-13 05:40:04,720 44k INFO ====> Epoch: 294, cost 262.24 s |
|
2023-03-13 05:41:08,363 44k INFO Train Epoch: 295 [24%] |
|
2023-03-13 05:41:08,365 44k INFO Losses: [2.5433900356292725, 2.2037622928619385, 8.468703269958496, 16.323165893554688, 0.7417632341384888], step: 81800, lr: 9.63071761028392e-05 |
|
2023-03-13 05:44:01,047 44k INFO Train Epoch: 295 [96%] |
|
2023-03-13 05:44:01,049 44k INFO Losses: [2.583505630493164, 1.986404299736023, 10.101560592651367, 14.745351791381836, 0.8262935876846313], step: 82000, lr: 9.63071761028392e-05 |
|
2023-03-13 05:44:09,431 44k INFO ====> Epoch: 295, cost 244.71 s |
|
2023-03-13 05:46:58,694 44k INFO Train Epoch: 296 [68%] |
|
2023-03-13 05:46:58,696 44k INFO Losses: [2.485297203063965, 1.983520746231079, 9.129393577575684, 14.347054481506348, 0.6353933811187744], step: 82200, lr: 9.629513770582634e-05 |
|
2023-03-13 05:48:12,807 44k INFO ====> Epoch: 296, cost 243.38 s |
|
2023-03-13 12:57:07,890 44k INFO {'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 6536180, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': True, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 5}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'nahida': 0, 'pecorine': 1, 'ayaka': 2}, 'model_dir': './logs/44k'} |
|
2023-03-13 12:57:09,064 44k WARNING git hash values are different. 8eb41030(saved) != 6a6e8193(current) |
|
2023-03-13 12:57:30,468 44k INFO Loaded checkpoint './logs/44k/G_81600.pth' (iteration 294) |
|
2023-03-13 12:57:40,127 44k INFO Loaded checkpoint './logs/44k/D_81600.pth' (iteration 294) |
|
2023-03-13 13:00:28,253 44k INFO Train Epoch: 294 [53%] |
|
2023-03-13 13:00:28,254 44k INFO Losses: [2.5323948860168457, 2.0171995162963867, 7.652010917663574, 16.477975845336914, 0.6345352530479431], step: 81600, lr: 9.63071761028392e-05 |
|
2023-03-13 13:00:38,247 44k INFO Saving model and optimizer state at iteration 294 to ./logs/44k/G_81600.pth |
|
2023-03-13 13:00:42,401 44k INFO Saving model and optimizer state at iteration 294 to ./logs/44k/D_81600.pth |
|
2023-03-13 13:03:00,959 44k INFO ====> Epoch: 294, cost 353.08 s |
|
2023-03-13 13:04:06,387 44k INFO Train Epoch: 295 [24%] |
|
2023-03-13 13:04:06,389 44k INFO Losses: [2.4702253341674805, 2.320293426513672, 11.194626808166504, 17.631946563720703, 0.5210567712783813], step: 81800, lr: 9.629513770582634e-05 |
|
2023-03-13 13:06:55,700 44k INFO Train Epoch: 295 [96%] |
|
2023-03-13 13:06:55,701 44k INFO Losses: [2.9641079902648926, 1.9547572135925293, 10.343207359313965, 13.821266174316406, 0.8333009481430054], step: 82000, lr: 9.629513770582634e-05 |
|
2023-03-13 13:07:04,466 44k INFO ====> Epoch: 295, cost 243.51 s |
|
2023-03-13 13:09:52,270 44k INFO Train Epoch: 296 [68%] |
|
2023-03-13 13:09:52,272 44k INFO Losses: [2.565741539001465, 2.006493091583252, 9.547576904296875, 15.105944633483887, 0.7451357841491699], step: 82200, lr: 9.628310081361311e-05 |
|
2023-03-13 13:11:06,713 44k INFO ====> Epoch: 296, cost 242.25 s |
|
2023-03-13 13:12:46,466 44k INFO Train Epoch: 297 [40%] |
|
2023-03-13 13:12:46,468 44k INFO Losses: [2.358616590499878, 2.6202189922332764, 12.379379272460938, 18.46044921875, 0.579123318195343], step: 82400, lr: 9.627106542601141e-05 |
|
2023-03-13 13:12:52,811 44k INFO Saving model and optimizer state at iteration 297 to ./logs/44k/G_82400.pth |
|
2023-03-13 13:12:56,588 44k INFO Saving model and optimizer state at iteration 297 to ./logs/44k/D_82400.pth |
|
2023-03-13 13:12:58,975 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_78400.pth |
|
2023-03-13 13:12:58,977 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_78400.pth |
|
2023-03-13 13:15:21,813 44k INFO ====> Epoch: 297, cost 255.10 s |
|
2023-03-13 13:15:56,535 44k INFO Train Epoch: 298 [12%] |
|
2023-03-13 13:15:56,536 44k INFO Losses: [2.3852028846740723, 2.420138120651245, 15.26012134552002, 19.824026107788086, 0.8694034218788147], step: 82600, lr: 9.625903154283315e-05 |
|
2023-03-13 13:18:45,937 44k INFO Train Epoch: 298 [84%] |
|
2023-03-13 13:18:45,938 44k INFO Losses: [2.4864554405212402, 2.0984396934509277, 10.02286434173584, 17.816999435424805, 0.7055031061172485], step: 82800, lr: 9.625903154283315e-05 |
|
2023-03-13 13:19:22,440 44k INFO ====> Epoch: 298, cost 240.63 s |
|
2023-03-13 13:21:41,846 44k INFO Train Epoch: 299 [56%] |
|
2023-03-13 13:21:41,848 44k INFO Losses: [2.6428792476654053, 2.2145490646362305, 6.939781188964844, 15.677523612976074, 0.7081798911094666], step: 83000, lr: 9.62469991638903e-05 |
|
2023-03-13 13:23:23,652 44k INFO ====> Epoch: 299, cost 241.21 s |
|
2023-03-13 13:24:37,278 44k INFO Train Epoch: 300 [28%] |
|
2023-03-13 13:24:37,280 44k INFO Losses: [2.5924956798553467, 1.887627363204956, 10.902088165283203, 16.338912963867188, 0.8771535158157349], step: 83200, lr: 9.62349682889948e-05 |
|
2023-03-13 13:24:43,279 44k INFO Saving model and optimizer state at iteration 300 to ./logs/44k/G_83200.pth |
|
2023-03-13 13:24:45,751 44k INFO Saving model and optimizer state at iteration 300 to ./logs/44k/D_83200.pth |
|
2023-03-13 13:24:48,143 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_79200.pth |
|
2023-03-13 13:24:48,146 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_79200.pth |
|
2023-03-13 13:27:37,999 44k INFO ====> Epoch: 300, cost 254.35 s |
|
2023-03-13 13:27:46,190 44k INFO Train Epoch: 301 [0%] |
|
2023-03-13 13:27:46,192 44k INFO Losses: [2.291407823562622, 2.805206775665283, 8.076194763183594, 15.932962417602539, 0.6191627979278564], step: 83400, lr: 9.622293891795867e-05 |
|
2023-03-13 13:30:33,536 44k INFO Train Epoch: 301 [72%] |
|
2023-03-13 13:30:33,537 44k INFO Losses: [2.7497646808624268, 2.042283773422241, 8.622313499450684, 13.59796142578125, 0.8083536028862], step: 83600, lr: 9.622293891795867e-05 |
|
2023-03-13 13:31:38,423 44k INFO ====> Epoch: 301, cost 240.42 s |
|
2023-03-13 13:33:27,121 44k INFO Train Epoch: 302 [44%] |
|
2023-03-13 13:33:27,123 44k INFO Losses: [2.4168035984039307, 2.1989006996154785, 10.002168655395508, 15.37344741821289, 0.6667568683624268], step: 83800, lr: 9.621091105059392e-05 |
|
2023-03-13 13:35:37,229 44k INFO ====> Epoch: 302, cost 238.81 s |
|
2023-03-14 14:18:42,485 44k INFO {'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 6536180, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': True, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 5}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'nahida': 0, 'pecorine': 1, 'ayaka': 2}, 'model_dir': './logs/44k'} |
|
2023-03-14 14:18:43,107 44k WARNING git hash values are different. 8eb41030(saved) != e7019554(current) |
|
2023-03-14 14:19:00,470 44k INFO Loaded checkpoint './logs/44k/G_83200.pth' (iteration 300) |
|
2023-03-14 14:19:05,626 44k INFO Loaded checkpoint './logs/44k/D_83200.pth' (iteration 300) |
|
2023-03-14 14:20:45,634 44k INFO Train Epoch: 300 [28%] |
|
2023-03-14 14:20:45,635 44k INFO Losses: [2.439831256866455, 2.1558845043182373, 10.465799331665039, 17.440933227539062, 0.6243970990180969], step: 83200, lr: 9.622293891795867e-05 |
|
2023-03-14 14:20:55,756 44k INFO Saving model and optimizer state at iteration 300 to ./logs/44k/G_83200.pth |
|
2023-03-14 14:20:58,285 44k INFO Saving model and optimizer state at iteration 300 to ./logs/44k/D_83200.pth |
|
2023-03-14 14:24:23,491 44k INFO ====> Epoch: 300, cost 341.01 s |
|
2023-03-14 14:24:31,779 44k INFO Train Epoch: 301 [0%] |
|
2023-03-14 14:24:31,781 44k INFO Losses: [2.637014389038086, 2.191267490386963, 8.90167236328125, 15.426639556884766, 0.6036518216133118], step: 83400, lr: 9.621091105059392e-05 |
|
2023-03-14 14:27:18,440 44k INFO Train Epoch: 301 [72%] |
|
2023-03-14 14:27:18,441 44k INFO Losses: [2.6758861541748047, 2.279783010482788, 10.664588928222656, 15.795894622802734, 0.6671145558357239], step: 83600, lr: 9.621091105059392e-05 |
|
2023-03-14 14:28:22,900 44k INFO ====> Epoch: 301, cost 239.41 s |
|
2023-03-14 14:30:11,503 44k INFO Train Epoch: 302 [44%] |
|
2023-03-14 14:30:11,505 44k INFO Losses: [2.445223331451416, 2.2301766872406006, 10.745494842529297, 17.76961898803711, 0.46479448676109314], step: 83800, lr: 9.619888468671259e-05 |
|
2023-03-14 14:32:20,569 44k INFO ====> Epoch: 302, cost 237.67 s |
|
2023-03-14 14:33:04,417 44k INFO Train Epoch: 303 [16%] |
|
2023-03-14 14:33:04,418 44k INFO Losses: [2.545315742492676, 2.1759114265441895, 5.451295852661133, 14.00610637664795, 0.4908599555492401], step: 84000, lr: 9.618685982612675e-05 |
|
2023-03-14 14:33:10,734 44k INFO Saving model and optimizer state at iteration 303 to ./logs/44k/G_84000.pth |
|
2023-03-14 14:33:14,276 44k INFO Saving model and optimizer state at iteration 303 to ./logs/44k/D_84000.pth |
|
2023-03-14 14:33:16,511 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_80000.pth |
|
2023-03-14 14:33:16,513 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_80000.pth |
|
2023-03-14 14:36:08,415 44k INFO Train Epoch: 303 [88%] |
|
2023-03-14 14:36:08,416 44k INFO Losses: [2.451686382293701, 2.3184592723846436, 12.586256980895996, 18.961803436279297, 0.7260699272155762], step: 84200, lr: 9.618685982612675e-05 |
|
2023-03-14 14:36:36,402 44k INFO ====> Epoch: 303, cost 255.83 s |
|
2023-03-14 14:39:02,783 44k INFO Train Epoch: 304 [60%] |
|
2023-03-14 14:39:02,785 44k INFO Losses: [2.3461556434631348, 2.397482395172119, 11.099039077758789, 17.453035354614258, 0.6397073864936829], step: 84400, lr: 9.617483646864849e-05 |
|
2023-03-14 14:40:35,873 44k INFO ====> Epoch: 304, cost 239.47 s |
|
2023-03-14 14:41:56,164 44k INFO Train Epoch: 305 [32%] |
|
2023-03-14 14:41:56,166 44k INFO Losses: [2.658247232437134, 2.0475456714630127, 12.511134147644043, 16.754852294921875, 0.6239403486251831], step: 84600, lr: 9.61628146140899e-05 |
|
2023-03-14 14:44:35,770 44k INFO ====> Epoch: 305, cost 239.90 s |
|
2023-03-14 14:44:51,246 44k INFO Train Epoch: 306 [4%] |
|
2023-03-14 14:44:51,247 44k INFO Losses: [2.375030755996704, 2.266873836517334, 13.29317855834961, 17.851877212524414, 0.5522587299346924], step: 84800, lr: 9.615079426226314e-05 |
|
2023-03-14 14:44:57,232 44k INFO Saving model and optimizer state at iteration 306 to ./logs/44k/G_84800.pth |
|
2023-03-14 14:45:00,254 44k INFO Saving model and optimizer state at iteration 306 to ./logs/44k/D_84800.pth |
|
2023-03-14 14:45:02,785 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_80800.pth |
|
2023-03-14 14:45:02,787 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_80800.pth |
|
2023-03-14 14:47:52,971 44k INFO Train Epoch: 306 [76%] |
|
2023-03-14 14:47:52,972 44k INFO Losses: [2.5291428565979004, 2.107820510864258, 13.451617240905762, 17.738101959228516, 0.68429034948349], step: 85000, lr: 9.615079426226314e-05 |
|
2023-03-14 14:48:49,800 44k INFO ====> Epoch: 306, cost 254.03 s |
|
2023-03-14 14:50:46,281 44k INFO Train Epoch: 307 [47%] |
|
2023-03-14 14:50:46,283 44k INFO Losses: [2.623943328857422, 1.89698326587677, 13.413527488708496, 16.832061767578125, 0.7891984581947327], step: 85200, lr: 9.613877541298036e-05 |
|
2023-03-14 14:52:49,088 44k INFO ====> Epoch: 307, cost 239.29 s |
|
2023-03-14 14:53:41,920 44k INFO Train Epoch: 308 [19%] |
|
2023-03-14 14:53:41,921 44k INFO Losses: [2.4748523235321045, 2.246570110321045, 14.271547317504883, 17.613962173461914, 0.712148904800415], step: 85400, lr: 9.612675806605373e-05 |
|
2023-03-14 14:56:30,545 44k INFO Train Epoch: 308 [91%] |
|
2023-03-14 14:56:30,546 44k INFO Losses: [2.3637425899505615, 2.1454429626464844, 11.943222045898438, 16.43263053894043, 0.7434548139572144], step: 85600, lr: 9.612675806605373e-05 |
|
2023-03-14 14:56:38,141 44k INFO Saving model and optimizer state at iteration 308 to ./logs/44k/G_85600.pth |
|
2023-03-14 14:56:41,442 44k INFO Saving model and optimizer state at iteration 308 to ./logs/44k/D_85600.pth |
|
2023-03-14 14:56:43,803 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_81600.pth |
|
2023-03-14 14:56:43,806 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_81600.pth |
|
2023-03-14 14:57:06,534 44k INFO ====> Epoch: 308, cost 257.45 s |
|
2023-03-14 14:59:41,120 44k INFO Train Epoch: 309 [63%] |
|
2023-03-14 14:59:41,122 44k INFO Losses: [2.4530856609344482, 2.2399251461029053, 12.868755340576172, 18.784799575805664, 0.5872700214385986], step: 85800, lr: 9.611474222129547e-05 |
|
2023-03-14 15:01:06,045 44k INFO ====> Epoch: 309, cost 239.51 s |
|
2023-03-14 15:02:34,891 44k INFO Train Epoch: 310 [35%] |
|
2023-03-14 15:02:34,893 44k INFO Losses: [2.536738157272339, 2.208761692047119, 12.655405044555664, 17.73944091796875, 0.6489978432655334], step: 86000, lr: 9.61027278785178e-05 |
|
2023-03-14 15:05:06,034 44k INFO ====> Epoch: 310, cost 239.99 s |
|
2023-03-14 15:05:28,904 44k INFO Train Epoch: 311 [7%] |
|
2023-03-14 15:05:28,906 44k INFO Losses: [2.7000277042388916, 1.9777604341506958, 5.43064022064209, 16.0863094329834, 0.7809068560600281], step: 86200, lr: 9.609071503753299e-05 |
|
2023-03-14 15:08:16,269 44k INFO Train Epoch: 311 [79%] |
|
2023-03-14 15:08:16,271 44k INFO Losses: [2.4419972896575928, 2.1927945613861084, 12.296661376953125, 16.104684829711914, 1.1925441026687622], step: 86400, lr: 9.609071503753299e-05 |
|
2023-03-14 15:08:22,889 44k INFO Saving model and optimizer state at iteration 311 to ./logs/44k/G_86400.pth |
|
2023-03-14 15:08:26,112 44k INFO Saving model and optimizer state at iteration 311 to ./logs/44k/D_86400.pth |
|
2023-03-14 15:08:28,773 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_82400.pth |
|
2023-03-14 15:08:28,776 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_82400.pth |
|
2023-03-14 15:09:21,718 44k INFO ====> Epoch: 311, cost 255.68 s |
|
2023-03-14 15:11:27,163 44k INFO Train Epoch: 312 [51%] |
|
2023-03-14 15:11:27,165 44k INFO Losses: [2.3859262466430664, 2.3998208045959473, 12.338197708129883, 17.57341194152832, 0.7627207040786743], step: 86600, lr: 9.60787036981533e-05 |
|
2023-03-14 15:13:23,776 44k INFO ====> Epoch: 312, cost 242.06 s |
|
2023-03-14 15:14:22,923 44k INFO Train Epoch: 313 [23%] |
|
2023-03-14 15:14:22,925 44k INFO Losses: [2.420071601867676, 2.232604742050171, 11.440793991088867, 19.372875213623047, 0.7864899039268494], step: 86800, lr: 9.606669386019102e-05 |
|
2023-03-14 15:17:12,924 44k INFO Train Epoch: 313 [95%] |
|
2023-03-14 15:17:12,926 44k INFO Losses: [2.6901373863220215, 2.046050548553467, 10.371825218200684, 14.649495124816895, 0.728828489780426], step: 87000, lr: 9.606669386019102e-05 |
|
2023-03-14 15:17:24,783 44k INFO ====> Epoch: 313, cost 241.01 s |
|
2023-03-14 15:20:10,594 44k INFO Train Epoch: 314 [67%] |
|
2023-03-14 15:20:10,596 44k INFO Losses: [2.582371234893799, 2.192521572113037, 11.030780792236328, 15.720025062561035, 0.46608051657676697], step: 87200, lr: 9.60546855234585e-05 |
|
2023-03-14 15:20:18,375 44k INFO Saving model and optimizer state at iteration 314 to ./logs/44k/G_87200.pth |
|
2023-03-14 15:20:20,909 44k INFO Saving model and optimizer state at iteration 314 to ./logs/44k/D_87200.pth |
|
2023-03-14 15:20:23,347 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_83200.pth |
|
2023-03-14 15:20:23,350 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_83200.pth |
|
2023-03-14 15:21:44,531 44k INFO ====> Epoch: 314, cost 259.75 s |
|
2023-03-14 15:23:22,948 44k INFO Train Epoch: 315 [39%] |
|
2023-03-14 15:23:22,950 44k INFO Losses: [2.4129831790924072, 2.154977798461914, 12.702766418457031, 15.40412712097168, 1.0377197265625], step: 87400, lr: 9.604267868776807e-05 |
|
2023-03-14 15:25:46,961 44k INFO ====> Epoch: 315, cost 242.43 s |
|
2023-03-14 15:26:19,666 44k INFO Train Epoch: 316 [11%] |
|
2023-03-14 15:26:19,668 44k INFO Losses: [2.4932878017425537, 2.283784866333008, 11.091073989868164, 16.37421417236328, 1.0654058456420898], step: 87600, lr: 9.603067335293209e-05 |
|
2023-03-14 15:29:08,451 44k INFO Train Epoch: 316 [83%] |
|
2023-03-14 15:29:08,453 44k INFO Losses: [2.470743417739868, 2.2436625957489014, 12.457110404968262, 20.883207321166992, 0.7155753970146179], step: 87800, lr: 9.603067335293209e-05 |
|
2023-03-14 15:29:49,773 44k INFO ====> Epoch: 316, cost 242.81 s |
|
2023-03-14 15:32:02,807 44k INFO Train Epoch: 317 [55%] |
|
2023-03-14 15:32:02,809 44k INFO Losses: [2.4932234287261963, 2.3412065505981445, 9.860641479492188, 16.95916175842285, 1.1016749143600464], step: 88000, lr: 9.601866951876297e-05 |
|
2023-03-14 15:32:09,934 44k INFO Saving model and optimizer state at iteration 317 to ./logs/44k/G_88000.pth |
|
2023-03-14 15:32:12,406 44k INFO Saving model and optimizer state at iteration 317 to ./logs/44k/D_88000.pth |
|
2023-03-14 15:32:14,813 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_84000.pth |
|
2023-03-14 15:32:14,817 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_84000.pth |
|
2023-03-14 15:34:03,714 44k INFO ====> Epoch: 317, cost 253.94 s |
|
2023-03-14 15:35:13,262 44k INFO Train Epoch: 318 [27%] |
|
2023-03-14 15:35:13,264 44k INFO Losses: [2.6882758140563965, 2.0082712173461914, 7.690409183502197, 15.615729331970215, 0.7366906404495239], step: 88200, lr: 9.600666718507311e-05 |
|
2023-03-14 15:38:02,416 44k INFO Train Epoch: 318 [99%] |
|
2023-03-14 15:38:02,418 44k INFO Losses: [2.621169090270996, 2.340460777282715, 11.907442092895508, 18.100584030151367, 0.5082666277885437], step: 88400, lr: 9.600666718507311e-05 |
|
2023-03-14 15:38:05,527 44k INFO ====> Epoch: 318, cost 241.81 s |
|
2023-03-14 15:40:57,136 44k INFO Train Epoch: 319 [71%] |
|
2023-03-14 15:40:57,138 44k INFO Losses: [2.306849956512451, 2.1684632301330566, 13.087013244628906, 19.196033477783203, 0.9069151282310486], step: 88600, lr: 9.599466635167497e-05 |
|
2023-03-14 15:42:05,131 44k INFO ====> Epoch: 319, cost 239.60 s |
|
2023-03-14 15:43:49,798 44k INFO Train Epoch: 320 [42%] |
|
2023-03-14 15:43:49,800 44k INFO Losses: [2.3289270401000977, 2.357412815093994, 10.919024467468262, 17.183185577392578, 0.35892394185066223], step: 88800, lr: 9.5982667018381e-05 |
|
2023-03-14 15:43:56,513 44k INFO Saving model and optimizer state at iteration 320 to ./logs/44k/G_88800.pth |
|
2023-03-14 15:43:59,109 44k INFO Saving model and optimizer state at iteration 320 to ./logs/44k/D_88800.pth |
|
2023-03-14 15:44:01,357 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_84800.pth |
|
2023-03-14 15:44:01,649 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_84800.pth |
|
2023-03-14 15:46:20,386 44k INFO ====> Epoch: 320, cost 255.26 s |
|
2023-03-14 15:47:01,087 44k INFO Train Epoch: 321 [14%] |
|
2023-03-14 15:47:01,089 44k INFO Losses: [2.615994453430176, 2.0832791328430176, 7.357691764831543, 15.531330108642578, 0.5523858666419983], step: 89000, lr: 9.59706691850037e-05 |
|
2023-03-14 15:49:49,836 44k INFO Train Epoch: 321 [86%] |
|
2023-03-14 15:49:49,837 44k INFO Losses: [2.4982402324676514, 1.9822453260421753, 8.587516784667969, 15.304880142211914, 0.7752494215965271], step: 89200, lr: 9.59706691850037e-05 |
|
2023-03-14 15:50:20,797 44k INFO ====> Epoch: 321, cost 240.41 s |
|
2023-03-14 15:52:44,894 44k INFO Train Epoch: 322 [58%] |
|
2023-03-14 15:52:44,896 44k INFO Losses: [2.6467559337615967, 1.9384024143218994, 8.632706642150879, 17.25908851623535, 0.7421079874038696], step: 89400, lr: 9.595867285135558e-05 |
|
2023-03-14 15:54:21,149 44k INFO ====> Epoch: 322, cost 240.35 s |
|
2023-03-14 15:55:38,053 44k INFO Train Epoch: 323 [30%] |
|
2023-03-14 15:55:38,055 44k INFO Losses: [2.4401700496673584, 2.4312024116516113, 6.6430134773254395, 11.198991775512695, 0.6423346996307373], step: 89600, lr: 9.594667801724916e-05 |
|
2023-03-14 15:55:44,277 44k INFO Saving model and optimizer state at iteration 323 to ./logs/44k/G_89600.pth |
|
2023-03-14 15:55:47,611 44k INFO Saving model and optimizer state at iteration 323 to ./logs/44k/D_89600.pth |
|
2023-03-14 15:55:50,310 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_85600.pth |
|
2023-03-14 15:55:50,315 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_85600.pth |
|
2023-03-14 15:58:35,640 44k INFO ====> Epoch: 323, cost 254.49 s |
|
2023-03-14 15:58:46,727 44k INFO Train Epoch: 324 [2%] |
|
2023-03-14 15:58:46,729 44k INFO Losses: [2.5411245822906494, 2.3895537853240967, 8.90858268737793, 19.080501556396484, 1.2101801633834839], step: 89800, lr: 9.5934684682497e-05 |
|
2023-03-14 16:01:35,704 44k INFO Train Epoch: 324 [74%] |
|
2023-03-14 16:01:35,707 44k INFO Losses: [2.576366901397705, 1.9962455034255981, 8.28966999053955, 16.673263549804688, 0.7863953709602356], step: 90000, lr: 9.5934684682497e-05 |
|
2023-03-14 16:02:36,388 44k INFO ====> Epoch: 324, cost 240.75 s |
|
2023-03-14 16:04:30,129 44k INFO Train Epoch: 325 [46%] |
|
2023-03-14 16:04:30,131 44k INFO Losses: [2.490835428237915, 2.2136659622192383, 11.371402740478516, 17.93475914001465, 0.5636425018310547], step: 90200, lr: 9.592269284691169e-05 |
|
2023-03-14 16:06:35,242 44k INFO ====> Epoch: 325, cost 238.85 s |
|
2023-03-14 16:07:24,043 44k INFO Train Epoch: 326 [18%] |
|
2023-03-14 16:07:24,044 44k INFO Losses: [2.535959005355835, 2.2831342220306396, 9.634233474731445, 15.628883361816406, 0.7263363599777222], step: 90400, lr: 9.591070251030582e-05 |
|
2023-03-14 16:07:30,529 44k INFO Saving model and optimizer state at iteration 326 to ./logs/44k/G_90400.pth |
|
2023-03-14 16:07:33,895 44k INFO Saving model and optimizer state at iteration 326 to ./logs/44k/D_90400.pth |
|
2023-03-14 16:07:36,150 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_86400.pth |
|
2023-03-14 16:07:36,152 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_86400.pth |
|
2023-03-14 16:10:27,956 44k INFO Train Epoch: 326 [90%] |
|
2023-03-14 16:10:27,959 44k INFO Losses: [2.6093649864196777, 2.2020647525787354, 8.039438247680664, 14.06871509552002, 0.7444884181022644], step: 90600, lr: 9.591070251030582e-05 |
|
2023-03-14 16:10:51,721 44k INFO ====> Epoch: 326, cost 256.48 s |
|
2023-03-14 16:13:23,560 44k INFO Train Epoch: 327 [62%] |
|
2023-03-14 16:13:23,563 44k INFO Losses: [2.648064136505127, 2.1458873748779297, 10.38569450378418, 16.59149932861328, 0.5737046599388123], step: 90800, lr: 9.589871367249203e-05 |
|
2023-03-14 16:14:51,525 44k INFO ====> Epoch: 327, cost 239.80 s |
|
2023-03-14 16:16:16,891 44k INFO Train Epoch: 328 [34%] |
|
2023-03-14 16:16:16,893 44k INFO Losses: [2.7570254802703857, 2.0112016201019287, 9.411999702453613, 16.69658660888672, 0.7643370628356934], step: 91000, lr: 9.588672633328296e-05 |
|
2023-03-14 16:18:50,570 44k INFO ====> Epoch: 328, cost 239.05 s |
|
2023-03-14 16:19:11,770 44k INFO Train Epoch: 329 [6%] |
|
2023-03-14 16:19:11,772 44k INFO Losses: [2.551208019256592, 1.878153920173645, 9.942197799682617, 17.504793167114258, 1.020406723022461], step: 91200, lr: 9.58747404924913e-05 |
|
2023-03-14 16:19:17,797 44k INFO Saving model and optimizer state at iteration 329 to ./logs/44k/G_91200.pth |
|
2023-03-14 16:19:20,933 44k INFO Saving model and optimizer state at iteration 329 to ./logs/44k/D_91200.pth |
|
2023-03-14 16:19:23,558 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_87200.pth |
|
2023-03-14 16:19:23,561 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_87200.pth |
|
2023-03-14 16:22:15,736 44k INFO Train Epoch: 329 [78%] |
|
2023-03-14 16:22:15,738 44k INFO Losses: [2.5164217948913574, 2.340355634689331, 7.455539703369141, 14.140837669372559, 0.9828551411628723], step: 91400, lr: 9.58747404924913e-05 |
|
2023-03-14 16:23:08,724 44k INFO ====> Epoch: 329, cost 258.15 s |
|
2023-03-14 16:25:11,519 44k INFO Train Epoch: 330 [50%] |
|
2023-03-14 16:25:11,521 44k INFO Losses: [2.6456658840179443, 2.308988332748413, 9.544283866882324, 16.457048416137695, 0.8277677297592163], step: 91600, lr: 9.586275614992974e-05 |
|
2023-03-14 16:27:10,475 44k INFO ====> Epoch: 330, cost 241.75 s |
|
2023-03-14 16:28:06,111 44k INFO Train Epoch: 331 [22%] |
|
2023-03-14 16:28:06,113 44k INFO Losses: [2.4638164043426514, 2.098388671875, 12.376907348632812, 16.726776123046875, 0.49771878123283386], step: 91800, lr: 9.5850773305411e-05 |
|
2023-03-14 16:30:55,202 44k INFO Train Epoch: 331 [94%] |
|
2023-03-14 16:30:55,203 44k INFO Losses: [2.557213068008423, 2.2902417182922363, 11.574346542358398, 17.332351684570312, 0.8163413405418396], step: 92000, lr: 9.5850773305411e-05 |
|
2023-03-14 16:31:03,622 44k INFO Saving model and optimizer state at iteration 331 to ./logs/44k/G_92000.pth |
|
2023-03-14 16:31:06,336 44k INFO Saving model and optimizer state at iteration 331 to ./logs/44k/D_92000.pth |
|
2023-03-14 16:31:08,816 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_88000.pth |
|
2023-03-14 16:31:08,818 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_88000.pth |
|
2023-03-14 16:31:26,829 44k INFO ====> Epoch: 331, cost 256.35 s |
|
2023-03-14 16:34:06,077 44k INFO Train Epoch: 332 [65%] |
|
2023-03-14 16:34:06,078 44k INFO Losses: [2.6344683170318604, 2.3394408226013184, 12.022417068481445, 17.380685806274414, 0.8422899842262268], step: 92200, lr: 9.583879195874782e-05 |
|
2023-03-14 16:35:25,807 44k INFO ====> Epoch: 332, cost 238.98 s |
|
2023-03-14 16:37:00,375 44k INFO Train Epoch: 333 [37%] |
|
2023-03-14 16:37:00,376 44k INFO Losses: [2.6510331630706787, 1.9512279033660889, 9.684484481811523, 17.259605407714844, 0.8012605309486389], step: 92400, lr: 9.582681210975297e-05 |
|
2023-03-14 16:39:25,613 44k INFO ====> Epoch: 333, cost 239.81 s |
|
2023-03-14 16:39:55,022 44k INFO Train Epoch: 334 [9%] |
|
2023-03-14 16:39:55,024 44k INFO Losses: [2.594446897506714, 2.0764400959014893, 10.79778003692627, 15.319005966186523, 1.059230089187622], step: 92600, lr: 9.581483375823925e-05 |
|
2023-03-14 16:42:42,526 44k INFO Train Epoch: 334 [81%] |
|
2023-03-14 16:42:42,528 44k INFO Losses: [2.5049545764923096, 2.0912435054779053, 9.960893630981445, 17.159273147583008, 0.8657487034797668], step: 92800, lr: 9.581483375823925e-05 |
|
2023-03-14 16:42:50,869 44k INFO Saving model and optimizer state at iteration 334 to ./logs/44k/G_92800.pth |
|
2023-03-14 16:42:53,565 44k INFO Saving model and optimizer state at iteration 334 to ./logs/44k/D_92800.pth |
|
2023-03-14 16:42:56,142 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_88800.pth |
|
2023-03-14 16:42:56,145 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_88800.pth |
|
2023-03-14 16:43:43,458 44k INFO ====> Epoch: 334, cost 257.85 s |
|
2023-03-14 16:45:54,188 44k INFO Train Epoch: 335 [53%] |
|
2023-03-14 16:45:54,190 44k INFO Losses: [2.506495475769043, 2.2429282665252686, 10.285168647766113, 17.445371627807617, 0.7309085130691528], step: 93000, lr: 9.580285690401946e-05 |
|
2023-03-14 16:47:42,271 44k INFO ====> Epoch: 335, cost 238.81 s |
|
2023-03-14 16:48:48,250 44k INFO Train Epoch: 336 [25%] |
|
2023-03-14 16:48:48,252 44k INFO Losses: [2.629438877105713, 2.363443374633789, 10.162297248840332, 17.06772804260254, 0.7410354018211365], step: 93200, lr: 9.579088154690645e-05 |
|
2023-03-14 16:51:36,905 44k INFO Train Epoch: 336 [97%] |
|
2023-03-14 16:51:36,907 44k INFO Losses: [2.4318318367004395, 2.429161548614502, 13.651132583618164, 21.957290649414062, 0.5290459990501404], step: 93400, lr: 9.579088154690645e-05 |
|
2023-03-14 16:51:43,763 44k INFO ====> Epoch: 336, cost 241.49 s |
|
2023-03-14 16:54:31,635 44k INFO Train Epoch: 337 [69%] |
|
2023-03-14 16:54:31,637 44k INFO Losses: [2.4540209770202637, 2.2616589069366455, 13.027868270874023, 16.265796661376953, 0.922389805316925], step: 93600, lr: 9.577890768671308e-05 |
|
2023-03-14 16:54:38,340 44k INFO Saving model and optimizer state at iteration 337 to ./logs/44k/G_93600.pth |
|
2023-03-14 16:54:41,204 44k INFO Saving model and optimizer state at iteration 337 to ./logs/44k/D_93600.pth |
|
2023-03-14 16:54:43,868 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_89600.pth |
|
2023-03-14 16:54:43,873 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_89600.pth |
|
2023-03-14 16:55:59,022 44k INFO ====> Epoch: 337, cost 255.26 s |
|
2023-03-14 16:57:40,992 44k INFO Train Epoch: 338 [41%] |
|
2023-03-14 16:57:40,993 44k INFO Losses: [2.3064489364624023, 2.2026302814483643, 12.308977127075195, 16.84659767150879, 0.5307810306549072], step: 93800, lr: 9.576693532325224e-05 |
|
2023-03-14 16:59:59,060 44k INFO ====> Epoch: 338, cost 240.04 s |
|
2023-03-14 17:00:36,196 44k INFO Train Epoch: 339 [13%] |
|
2023-03-14 17:00:36,198 44k INFO Losses: [2.801006555557251, 2.344095230102539, 6.055492401123047, 14.985566139221191, 0.9637230634689331], step: 94000, lr: 9.575496445633683e-05 |
|
2023-03-14 17:03:25,050 44k INFO Train Epoch: 339 [85%] |
|
2023-03-14 17:03:25,052 44k INFO Losses: [2.604231119155884, 2.3925299644470215, 8.113595962524414, 18.03574562072754, 0.4634934663772583], step: 94200, lr: 9.575496445633683e-05 |
|
2023-03-14 17:04:00,767 44k INFO ====> Epoch: 339, cost 241.71 s |
|
2023-03-14 17:06:17,691 44k INFO Train Epoch: 340 [57%] |
|
2023-03-14 17:06:17,693 44k INFO Losses: [2.432461738586426, 2.1083149909973145, 11.694343566894531, 16.526676177978516, 0.8474188446998596], step: 94400, lr: 9.574299508577979e-05 |
|
2023-03-14 17:06:23,920 44k INFO Saving model and optimizer state at iteration 340 to ./logs/44k/G_94400.pth |
|
2023-03-14 17:06:27,609 44k INFO Saving model and optimizer state at iteration 340 to ./logs/44k/D_94400.pth |
|
2023-03-14 17:06:30,182 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_90400.pth |
|
2023-03-14 17:06:30,184 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_90400.pth |
|
2023-03-14 17:08:12,921 44k INFO ====> Epoch: 340, cost 252.15 s |
|
2023-03-14 17:09:24,602 44k INFO Train Epoch: 341 [29%] |
|
2023-03-14 17:09:24,604 44k INFO Losses: [2.612091302871704, 2.0129923820495605, 9.874459266662598, 18.7952823638916, 0.6894750595092773], step: 94600, lr: 9.573102721139406e-05 |
|
2023-03-14 17:12:08,521 44k INFO ====> Epoch: 341, cost 235.60 s |
|
2023-03-14 17:12:17,850 44k INFO Train Epoch: 342 [1%] |
|
2023-03-14 17:12:17,852 44k INFO Losses: [2.6415698528289795, 2.0726912021636963, 9.225942611694336, 15.050968170166016, 0.9450719952583313], step: 94800, lr: 9.571906083299264e-05 |
|
2023-03-14 17:15:03,716 44k INFO Train Epoch: 342 [73%] |
|
2023-03-14 17:15:03,717 44k INFO Losses: [2.641526460647583, 2.013510227203369, 14.526355743408203, 20.35441017150879, 1.2136683464050293], step: 95000, lr: 9.571906083299264e-05 |
|
2023-03-14 17:16:06,302 44k INFO ====> Epoch: 342, cost 237.78 s |
|
2023-03-14 17:17:56,605 44k INFO Train Epoch: 343 [45%] |
|
2023-03-14 17:17:56,607 44k INFO Losses: [2.5603644847869873, 2.2579598426818848, 9.65169906616211, 13.514219284057617, 0.7044890522956848], step: 95200, lr: 9.570709595038851e-05 |
|
2023-03-14 17:18:03,023 44k INFO Saving model and optimizer state at iteration 343 to ./logs/44k/G_95200.pth |
|
2023-03-14 17:18:05,529 44k INFO Saving model and optimizer state at iteration 343 to ./logs/44k/D_95200.pth |
|
2023-03-14 17:18:08,107 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_91200.pth |
|
2023-03-14 17:18:08,109 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_91200.pth |
|
2023-03-14 17:20:19,734 44k INFO ====> Epoch: 343, cost 253.43 s |
|
2023-03-14 17:21:04,683 44k INFO Train Epoch: 344 [17%] |
|
2023-03-14 17:21:04,685 44k INFO Losses: [2.498408317565918, 2.323843479156494, 10.895755767822266, 17.6402645111084, 0.6987907290458679], step: 95400, lr: 9.569513256339471e-05 |
|
2023-03-14 17:23:50,902 44k INFO Train Epoch: 344 [88%] |
|
2023-03-14 17:23:50,903 44k INFO Losses: [2.627406358718872, 2.0355846881866455, 12.475443840026855, 16.756271362304688, 0.805642306804657], step: 95600, lr: 9.569513256339471e-05 |
|
2023-03-14 17:24:17,404 44k INFO ====> Epoch: 344, cost 237.67 s |
|
2023-03-14 17:26:43,781 44k INFO Train Epoch: 345 [60%] |
|
2023-03-14 17:26:43,784 44k INFO Losses: [2.7041358947753906, 2.1957123279571533, 8.675004005432129, 14.674505233764648, 0.6939337849617004], step: 95800, lr: 9.568317067182427e-05 |
|
2023-03-14 17:28:14,470 44k INFO ====> Epoch: 345, cost 237.07 s |
|
2023-03-14 17:29:35,125 44k INFO Train Epoch: 346 [32%] |
|
2023-03-14 17:29:35,127 44k INFO Losses: [2.574700117111206, 2.083808183670044, 9.556343078613281, 16.612659454345703, 1.1321511268615723], step: 96000, lr: 9.56712102754903e-05 |
|
2023-03-14 17:29:41,225 44k INFO Saving model and optimizer state at iteration 346 to ./logs/44k/G_96000.pth |
|
2023-03-14 17:29:45,250 44k INFO Saving model and optimizer state at iteration 346 to ./logs/44k/D_96000.pth |
|
2023-03-14 17:29:47,478 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_92000.pth |
|
2023-03-14 17:29:47,481 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_92000.pth |
|
2023-03-14 17:32:25,552 44k INFO ====> Epoch: 346, cost 251.08 s |
|
2023-03-14 17:32:42,960 44k INFO Train Epoch: 347 [4%] |
|
2023-03-14 17:32:42,962 44k INFO Losses: [2.652423620223999, 2.3747355937957764, 8.642638206481934, 16.9888858795166, 1.2172242403030396], step: 96200, lr: 9.565925137420586e-05 |
|
2023-03-14 17:35:28,309 44k INFO Train Epoch: 347 [76%] |
|
2023-03-14 17:35:28,310 44k INFO Losses: [2.576751232147217, 2.2108588218688965, 8.451359748840332, 13.638978004455566, 0.8213117718696594], step: 96400, lr: 9.565925137420586e-05 |
|
2023-03-14 17:36:22,941 44k INFO ====> Epoch: 347, cost 237.39 s |
|
2023-03-14 17:38:21,556 44k INFO Train Epoch: 348 [48%] |
|
2023-03-14 17:38:21,558 44k INFO Losses: [2.8071162700653076, 1.9874826669692993, 8.839780807495117, 13.429910659790039, 0.8627715110778809], step: 96600, lr: 9.564729396778408e-05 |
|
2023-03-14 17:40:21,002 44k INFO ====> Epoch: 348, cost 238.06 s |
|
2023-03-14 17:41:13,742 44k INFO Train Epoch: 349 [20%] |
|
2023-03-14 17:41:13,744 44k INFO Losses: [2.497246265411377, 2.3755946159362793, 8.846244812011719, 16.500080108642578, 0.5419936180114746], step: 96800, lr: 9.56353380560381e-05
2023-03-14 17:41:21,158 44k INFO Saving model and optimizer state at iteration 349 to ./logs/44k/G_96800.pth
2023-03-14 17:41:23,602 44k INFO Saving model and optimizer state at iteration 349 to ./logs/44k/D_96800.pth
2023-03-14 17:41:25,974 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_92800.pth
2023-03-14 17:41:25,976 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_92800.pth
2023-03-14 17:44:15,017 44k INFO Train Epoch: 349 [92%]
2023-03-14 17:44:15,019 44k INFO Losses: [2.459667682647705, 2.385629653930664, 11.096501350402832, 16.58162498474121, 1.0776821374893188], step: 97000, lr: 9.56353380560381e-05
2023-03-14 17:44:33,753 44k INFO ====> Epoch: 349, cost 252.75 s
2023-03-15 02:49:26,613 44k INFO {'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 6536180, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': True, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 5}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'nahida': 0, 'pecorine': 1, 'ayaka': 2}, 'model_dir': './logs/44k'}
2023-03-15 02:49:27,239 44k WARNING git hash values are different. 8eb41030(saved) != e7019554(current)
2023-03-15 02:49:44,695 44k INFO Loaded checkpoint './logs/44k/G_96800.pth' (iteration 349)
2023-03-15 02:49:54,508 44k INFO Loaded checkpoint './logs/44k/D_96800.pth' (iteration 349)
2023-03-15 02:51:11,435 44k INFO Train Epoch: 349 [20%]
2023-03-15 02:51:11,437 44k INFO Losses: [2.4343342781066895, 2.414609909057617, 11.605687141418457, 17.804725646972656, 0.9560793042182922], step: 96800, lr: 9.562338363878108e-05
2023-03-15 02:51:21,981 44k INFO Saving model and optimizer state at iteration 349 to ./logs/44k/G_96800.pth
2023-03-15 02:51:24,651 44k INFO Saving model and optimizer state at iteration 349 to ./logs/44k/D_96800.pth
2023-03-15 02:54:48,734 44k INFO Train Epoch: 349 [92%]
2023-03-15 02:54:48,735 44k INFO Losses: [2.426919460296631, 2.3904731273651123, 13.510272979736328, 19.95844268798828, 0.5475712418556213], step: 97000, lr: 9.562338363878108e-05
2023-03-15 02:55:13,115 44k INFO ====> Epoch: 349, cost 346.50 s
2023-03-15 02:57:48,943 44k INFO Train Epoch: 350 [64%]
2023-03-15 02:57:48,945 44k INFO Losses: [2.368691921234131, 2.606565237045288, 10.966503143310547, 15.91542911529541, 0.582056999206543], step: 97200, lr: 9.561143071582622e-05
2023-03-15 02:59:10,670 44k INFO ====> Epoch: 350, cost 237.56 s
2023-03-15 03:00:41,165 44k INFO Train Epoch: 351 [36%]
2023-03-15 03:00:41,167 44k INFO Losses: [2.476201057434082, 2.037659168243408, 8.412042617797852, 16.69068717956543, 0.8613893389701843], step: 97400, lr: 9.559947928698674e-05
2023-03-15 03:03:09,578 44k INFO ====> Epoch: 351, cost 238.91 s
2023-03-15 03:03:35,031 44k INFO Train Epoch: 352 [8%]
2023-03-15 03:03:35,033 44k INFO Losses: [2.564617872238159, 2.272451400756836, 9.927124977111816, 15.066576957702637, 0.4482015371322632], step: 97600, lr: 9.558752935207586e-05
2023-03-15 03:03:41,956 44k INFO Saving model and optimizer state at iteration 352 to ./logs/44k/G_97600.pth
2023-03-15 03:03:44,508 44k INFO Saving model and optimizer state at iteration 352 to ./logs/44k/D_97600.pth
2023-03-15 03:03:46,659 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_93600.pth
2023-03-15 03:03:46,662 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_93600.pth
2023-03-15 03:06:38,798 44k INFO Train Epoch: 352 [80%]
2023-03-15 03:06:38,799 44k INFO Losses: [2.543609619140625, 2.17563796043396, 11.539641380310059, 17.383991241455078, 0.6641329526901245], step: 97800, lr: 9.558752935207586e-05
2023-03-15 03:07:25,771 44k INFO ====> Epoch: 352, cost 256.19 s
2023-03-15 03:09:31,453 44k INFO Train Epoch: 353 [52%]
2023-03-15 03:09:31,455 44k INFO Losses: [2.320289134979248, 2.351623058319092, 7.008500099182129, 12.488496780395508, 0.705643892288208], step: 98000, lr: 9.557558091090685e-05
2023-03-15 03:11:21,941 44k INFO ====> Epoch: 353, cost 236.17 s
2023-03-15 03:12:24,021 44k INFO Train Epoch: 354 [24%]
2023-03-15 03:12:24,022 44k INFO Losses: [2.5237061977386475, 2.154273509979248, 7.446046829223633, 14.133417129516602, 0.7588844299316406], step: 98200, lr: 9.556363396329299e-05
2023-03-15 03:15:10,012 44k INFO Train Epoch: 354 [96%]
2023-03-15 03:15:10,013 44k INFO Losses: [2.49538516998291, 2.312305450439453, 9.481083869934082, 17.93454933166504, 1.0255591869354248], step: 98400, lr: 9.556363396329299e-05
2023-03-15 03:15:18,345 44k INFO Saving model and optimizer state at iteration 354 to ./logs/44k/G_98400.pth
2023-03-15 03:15:20,844 44k INFO Saving model and optimizer state at iteration 354 to ./logs/44k/D_98400.pth
2023-03-15 03:15:23,038 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_94400.pth
2023-03-15 03:15:23,086 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_94400.pth
2023-03-15 03:15:34,968 44k INFO ====> Epoch: 354, cost 253.03 s
2023-03-15 03:18:18,657 44k INFO Train Epoch: 355 [68%]
2023-03-15 03:18:18,659 44k INFO Losses: [2.6317009925842285, 2.0271403789520264, 11.617039680480957, 16.601354598999023, 0.8377472758293152], step: 98600, lr: 9.555168850904757e-05
2023-03-15 03:19:32,608 44k INFO ====> Epoch: 355, cost 237.64 s
2023-03-15 03:21:08,818 44k INFO Train Epoch: 356 [40%]
2023-03-15 03:21:08,820 44k INFO Losses: [2.5479702949523926, 2.2287511825561523, 6.656454563140869, 15.698354721069336, 0.9610328674316406], step: 98800, lr: 9.553974454798393e-05
2023-03-15 03:23:27,521 44k INFO ====> Epoch: 356, cost 234.91 s
2023-03-15 03:24:01,725 44k INFO Train Epoch: 357 [12%]
2023-03-15 03:24:01,727 44k INFO Losses: [2.6012628078460693, 2.163604736328125, 9.731101036071777, 14.653319358825684, 0.5492858290672302], step: 99000, lr: 9.552780207991543e-05
2023-03-15 03:26:47,379 44k INFO Train Epoch: 357 [83%]
2023-03-15 03:26:47,381 44k INFO Losses: [2.5721845626831055, 2.140993356704712, 12.241374015808105, 16.565324783325195, 0.6712016463279724], step: 99200, lr: 9.552780207991543e-05
2023-03-15 03:26:55,189 44k INFO Saving model and optimizer state at iteration 357 to ./logs/44k/G_99200.pth
2023-03-15 03:26:57,777 44k INFO Saving model and optimizer state at iteration 357 to ./logs/44k/D_99200.pth
2023-03-15 03:27:00,249 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_95200.pth
2023-03-15 03:27:00,253 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_95200.pth
2023-03-15 03:27:41,110 44k INFO ====> Epoch: 357, cost 253.59 s
2023-03-15 03:29:54,905 44k INFO Train Epoch: 358 [55%]
2023-03-15 03:29:54,906 44k INFO Losses: [2.5700020790100098, 2.2139997482299805, 13.678099632263184, 16.79111671447754, 0.9156624674797058], step: 99400, lr: 9.551586110465545e-05
2023-03-15 03:31:37,600 44k INFO ====> Epoch: 358, cost 236.49 s
2023-03-15 03:32:46,041 44k INFO Train Epoch: 359 [27%]
2023-03-15 03:32:46,043 44k INFO Losses: [2.5337395668029785, 2.158907651901245, 9.189568519592285, 16.904396057128906, 0.24995450675487518], step: 99600, lr: 9.550392162201736e-05
2023-03-15 03:35:32,606 44k INFO Train Epoch: 359 [99%]
2023-03-15 03:35:32,608 44k INFO Losses: [2.5766704082489014, 2.318437099456787, 6.3500285148620605, 11.544081687927246, 0.7615235447883606], step: 99800, lr: 9.550392162201736e-05
2023-03-15 03:35:34,636 44k INFO ====> Epoch: 359, cost 237.04 s
2023-03-15 03:38:25,706 44k INFO Train Epoch: 360 [71%]
2023-03-15 03:38:25,708 44k INFO Losses: [2.523023843765259, 2.4098641872406006, 11.959630012512207, 16.401628494262695, 0.6475639939308167], step: 100000, lr: 9.54919836318146e-05
2023-03-15 03:38:33,202 44k INFO Saving model and optimizer state at iteration 360 to ./logs/44k/G_100000.pth
2023-03-15 03:38:35,653 44k INFO Saving model and optimizer state at iteration 360 to ./logs/44k/D_100000.pth
2023-03-15 03:38:38,062 44k INFO .. Free up space by deleting ckpt ./logs/44k/G_96000.pth
2023-03-15 03:38:38,064 44k INFO .. Free up space by deleting ckpt ./logs/44k/D_96000.pth
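The config dump above sets 'lr_decay': 0.999875, and the logged lr values step down by exactly that factor from one epoch to the next (e.g. 9.56353380560381e-05 before the 2023-03-15 resume, 9.562338363878108e-05 after one more scheduler step). A minimal sketch of that per-epoch exponential decay, for sanity-checking resumed runs — the helper `next_epoch_lr` is hypothetical, not part of the training code, and the exact scheduler the trainer uses is an assumption:

```python
# Assumption: the trainer multiplies the learning rate by lr_decay once per
# epoch (plain exponential decay, as e.g. torch's ExponentialLR would do).
LR_DECAY = 0.999875  # 'lr_decay' from the config dump above

def next_epoch_lr(lr: float, decay: float = LR_DECAY) -> float:
    """Learning rate after one more per-epoch decay step (hypothetical helper)."""
    return lr * decay

# Consecutive lr values in the log differ by exactly one decay step:
lr_before_resume = 9.56353380560381e-05   # last lr logged on 2023-03-14
lr_after_resume = 9.562338363878108e-05   # first lr logged after the resume
assert abs(next_epoch_lr(lr_before_resume) - lr_after_resume) < 1e-12
```

This is one quick way to confirm that a resumed run picked up the schedule where it left off rather than resetting to the initial learning_rate of 0.0001.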