[True]
| distributed init (rank 0): tcp://localhost:14738
| distributed init (rank 2): tcp://localhost:14738
| initialized host cerberus04 as rank 2
| distributed init (rank 3): tcp://localhost:14738
| initialized host cerberus04 as rank 3
| distributed init (rank 1): tcp://localhost:14738
| initialized host cerberus04 as rank 1
| initialized host cerberus04 as rank 0
Namespace(activation_dropout=0.0, activation_fn='relu', adam_betas='(0.9, 0.98)', adam_eps=1e-08, adaptive_input=False, adaptive_softmax_cutoff=None, adaptive_softmax_dropout=0, all_gather_list_size=16384, arch='transformer_multibranch_v2_wmt_en_de', attention_dropout=0.08, best_checkpoint_metric='loss', bucket_cap_mb=25, clip_norm=0.0, configs='configs/wmt14.en-fr/attention/multibranch_v2/embed496.yml', conv_linear=True, cpu=False, criterion='label_smoothed_cross_entropy', curriculum=0, data='data/binary/wmt14_en_fr', dataset_impl=None, ddp_backend='no_c10d', decoder_attention_heads=8, decoder_branch_type=['attn:1:248:4', 'dynamic:default:248:4'], decoder_embed_dim=496, decoder_embed_path=None, decoder_ffn_embed_dim=496, decoder_ffn_list=[True, True, True, True, True, True], decoder_glu=True, decoder_input_dim=496, decoder_kernel_size_list=[3, 7, 15, 31, 31, 31], decoder_layers=6, decoder_learned_pos=False, decoder_normalize_before=False, device_id=0, disable_validation=False, distributed_backend='nccl', distributed_init_method='tcp://localhost:14738', distributed_no_spawn=False, distributed_port=-1, distributed_rank=0, distributed_world_size=4, dropout=0.1, empty_cache_freq=0, encoder_attention_heads=8, encoder_branch_type=['attn:1:248:4', 'dynamic:default:248:4'], encoder_decoder_branch_type=None, encoder_embed_dim=496, encoder_embed_path=None, encoder_ffn_embed_dim=496, encoder_ffn_list=[True, True, True, True, True, True], encoder_glu=True, encoder_kernel_size_list=[3, 7, 15, 31, 31, 31], encoder_layers=6, encoder_learned_pos=False, encoder_normalize_before=False, fast_stat_sync=False, ffn_init=None, find_unused_parameters=False, fix_batches_to_gpus=False, fixed_validation_seed=None, fp16=False, fp16_init_scale=128, fp16_scale_tolerance=0.0, fp16_scale_window=None, input_dropout=0.1, keep_interval_updates=-1, keep_last_epochs=30, label_smoothing=0.1, lazy_load=False, left_pad_source='True', left_pad_target='False', log_format=None, log_interval=1000, lr=[1e-07], lr_period_updates=45000.0, lr_scheduler='cosine', lr_shrink=1.0, max_epoch=0, max_lr=0.001, max_sentences=None, max_sentences_valid=None, max_source_positions=1024, max_target_positions=1024, max_tokens=4096, max_tokens_valid=4096, max_update=50000, maximize_best_checkpoint_metric=False, memory_efficient_fp16=False, min_loss_scale=0.0001, min_lr=1e-09, no_epoch_checkpoints=False, no_last_checkpoints=False, no_progress_bar=True, no_save=False, no_save_optimizer_state=False, no_token_positional_embeddings=False, num_workers=1, optimizer='adam', optimizer_overrides='{}', patience=-1, raw_text=False, required_batch_size_multiple=8, reset_dataloader=False, reset_lr_scheduler=False, reset_meters=False, reset_optimizer=False, restore_file='checkpoint_last.pt', save_dir='checkpoints/wmt14.en-fr/attention/multibranch_v2/embed496', save_interval=1, save_interval_updates=0, seed=1, sentence_avg=False, share_all_embeddings=True, share_decoder_input_output_embed=False, skip_invalid_size_inputs_valid_test=False, source_lang=None, t_mult=1.0, target_lang=None, task='translation', tensorboard_logdir='checkpoints/wmt14.en-fr/attention/multibranch_v2/embed496/tensorboard', threshold_loss_scale=None, tie_adaptive_weights=None, train_subset='train', truncate_source=False, update_freq=[32], upsample_primary=1, use_bmuf=False, user_dir=None, valid_subset='valid', validate_interval=1, warmup_init_lr=1e-07, warmup_updates=5000, weight_decay=0.0, weight_dropout=0.08, weight_softmax=True)
| [en] dictionary: 44512 types
| [fr] dictionary: 44512 types
| loaded 24888 examples from: data/binary/wmt14_en_fr/valid.en-fr.en
| loaded 24888 examples from: data/binary/wmt14_en_fr/valid.en-fr.fr
| data/binary/wmt14_en_fr valid en-fr 24888 examples
[True, True, True, True, True, True]
[WARNING] Fallback to xavier initializer
TransformerMultibranchModel(
  (encoder): TransformerEncoder(
    (embed_tokens): Embedding(44512, 496, padding_idx=1)
    (embed_positions): SinusoidalPositionalEmbedding()
    (layers): ModuleList(
      (0): TransformerEncoderLayer(
        (self_attn_layer_norm): LayerNorm((496,), eps=1e-05, elementwise_affine=True)
        (self_attn): MultiBranch(
          (branches): ModuleList(
            (0): MultiheadAttention(
              (k_proj): Linear(in_features=248, out_features=248, bias=True)
              (v_proj): Linear(in_features=248, out_features=248, bias=True)
              (q_proj): Linear(in_features=248, out_features=248, bias=True)
              (out_proj): Linear(in_features=248, out_features=248, bias=True)
            )
            (1): DynamicconvLayer(
              (weight_linear): Linear(in_features=248, out_features=12, bias=False)
              (linear1): Linear(in_features=248, out_features=496, bias=True)
              (act): GLU(dim=-1)
              (linear2): Linear(in_features=248, out_features=248, bias=True)
            )
          )
        )
        (fc1): Linear(in_features=496, out_features=496, bias=True)
        (fc2): Linear(in_features=496, out_features=496, bias=True)
        (final_layer_norm): LayerNorm((496,), eps=1e-05, elementwise_affine=True)
      )
      (1): TransformerEncoderLayer(
        (self_attn_layer_norm): LayerNorm((496,), eps=1e-05, elementwise_affine=True)
        (self_attn): MultiBranch(
          (branches): ModuleList(
            (0): MultiheadAttention(
              (k_proj): Linear(in_features=248, out_features=248, bias=True)
              (v_proj): Linear(in_features=248, out_features=248, bias=True)
              (q_proj): Linear(in_features=248, out_features=248, bias=True)
              (out_proj): Linear(in_features=248, out_features=248, bias=True)
            )
            (1): DynamicconvLayer(
              (weight_linear): Linear(in_features=248, out_features=28, bias=False)
              (linear1): Linear(in_features=248, out_features=496, bias=True)
              (act): GLU(dim=-1)
              (linear2): Linear(in_features=248, out_features=248, bias=True)
            )
          )
        )
        (fc1): Linear(in_features=496, out_features=496, bias=True)
        (fc2): Linear(in_features=496, out_features=496, bias=True)
        (final_layer_norm): LayerNorm((496,), eps=1e-05, elementwise_affine=True)
      )
      (2): TransformerEncoderLayer(
        (self_attn_layer_norm): LayerNorm((496,), eps=1e-05, elementwise_affine=True)
        (self_attn): MultiBranch(
          (branches): ModuleList(
            (0): MultiheadAttention(
              (k_proj): Linear(in_features=248, out_features=248, bias=True)
              (v_proj): Linear(in_features=248, out_features=248, bias=True)
              (q_proj): Linear(in_features=248, out_features=248, bias=True)
              (out_proj): Linear(in_features=248, out_features=248, bias=True)
            )
            (1): DynamicconvLayer(
              (weight_linear): Linear(in_features=248, out_features=60, bias=False)
              (linear1): Linear(in_features=248, out_features=496, bias=True)
              (act): GLU(dim=-1)
              (linear2): Linear(in_features=248, out_features=248, bias=True)
            )
          )
        )
        (fc1): Linear(in_features=496, out_features=496, bias=True)
        (fc2): Linear(in_features=496, out_features=496, bias=True)
        (final_layer_norm): LayerNorm((496,), eps=1e-05, elementwise_affine=True)
      )
      (3): TransformerEncoderLayer(
        (self_attn_layer_norm): LayerNorm((496,), eps=1e-05, elementwise_affine=True)
        (self_attn): MultiBranch(
          (branches): ModuleList(
            (0): MultiheadAttention(
              (k_proj): Linear(in_features=248, out_features=248, bias=True)
              (v_proj): Linear(in_features=248, out_features=248, bias=True)
              (q_proj): Linear(in_features=248, out_features=248, bias=True)
              (out_proj): Linear(in_features=248, out_features=248, bias=True)
            )
            (1): DynamicconvLayer(
              (weight_linear): Linear(in_features=248, out_features=124, bias=False)
              (linear1): Linear(in_features=248, out_features=496, bias=True)
              (act): GLU(dim=-1)
              (linear2): Linear(in_features=248, out_features=248, bias=True)
            )
          )
        )
        (fc1): Linear(in_features=496, out_features=496, bias=True)
        (fc2): Linear(in_features=496, out_features=496, bias=True)
        (final_layer_norm): LayerNorm((496,), eps=1e-05, elementwise_affine=True)
      )
      (4): TransformerEncoderLayer(
        (self_attn_layer_norm): LayerNorm((496,), eps=1e-05, elementwise_affine=True)
        (self_attn): MultiBranch(
          (branches): ModuleList(
            (0): MultiheadAttention(
              (k_proj): Linear(in_features=248, out_features=248, bias=True)
              (v_proj): Linear(in_features=248, out_features=248, bias=True)
              (q_proj): Linear(in_features=248, out_features=248, bias=True)
              (out_proj): Linear(in_features=248, out_features=248, bias=True)
            )
            (1): DynamicconvLayer(
              (weight_linear): Linear(in_features=248, out_features=124, bias=False)
              (linear1): Linear(in_features=248, out_features=496, bias=True)
              (act): GLU(dim=-1)
              (linear2): Linear(in_features=248, out_features=248, bias=True)
            )
          )
        )
        (fc1): Linear(in_features=496, out_features=496, bias=True)
        (fc2): Linear(in_features=496, out_features=496, bias=True)
        (final_layer_norm): LayerNorm((496,), eps=1e-05, elementwise_affine=True)
      )
      (5): TransformerEncoderLayer(
        (self_attn_layer_norm): LayerNorm((496,), eps=1e-05, elementwise_affine=True)
        (self_attn): MultiBranch(
          (branches): ModuleList(
            (0): MultiheadAttention(
              (k_proj): Linear(in_features=248, out_features=248, bias=True)
              (v_proj): Linear(in_features=248, out_features=248, bias=True)
              (q_proj): Linear(in_features=248, out_features=248, bias=True)
              (out_proj): Linear(in_features=248, out_features=248, bias=True)
            )
            (1): DynamicconvLayer(
              (weight_linear): Linear(in_features=248, out_features=124, bias=False)
              (linear1): Linear(in_features=248, out_features=496, bias=True)
              (act): GLU(dim=-1)
              (linear2): Linear(in_features=248, out_features=248, bias=True)
            )
          )
        )
        (fc1): Linear(in_features=496, out_features=496, bias=True)
        (fc2): Linear(in_features=496, out_features=496, bias=True)
        (final_layer_norm): LayerNorm((496,), eps=1e-05, elementwise_affine=True)
      )
    )
  )
  (decoder): TransformerDecoder(
    (embed_tokens): Embedding(44512, 496, padding_idx=1)
    (embed_positions): SinusoidalPositionalEmbedding()
    (layers): ModuleList(
      (0): TransformerDecoderLayer(
        (self_attn): MultiBranch(
          (branches): ModuleList(
            (0): MultiheadAttention(
              (k_proj): Linear(in_features=248, out_features=248, bias=True)
              (v_proj): Linear(in_features=248, out_features=248, bias=True)
              (q_proj): Linear(in_features=248, out_features=248, bias=True)
              (out_proj): Linear(in_features=248, out_features=248, bias=True)
            )
            (1): DynamicconvLayer(
              (weight_linear): Linear(in_features=248, out_features=12, bias=False)
              (linear1): Linear(in_features=248, out_features=496, bias=True)
              (act): GLU(dim=-1)
              (linear2): Linear(in_features=248, out_features=248, bias=True)
            )
          )
        )
        (self_attn_layer_norm): LayerNorm((496,), eps=1e-05, elementwise_affine=True)
        (encoder_attn): MultiheadAttention(
          (k_proj): Linear(in_features=496, out_features=496, bias=True)
          (v_proj): Linear(in_features=496, out_features=496, bias=True)
          (q_proj): Linear(in_features=496, out_features=496, bias=True)
          (out_proj): Linear(in_features=496, out_features=496, bias=True)
        )
        (encoder_attn_layer_norm): LayerNorm((496,), eps=1e-05, elementwise_affine=True)
        (fc1): Linear(in_features=496, out_features=496, bias=True)
        (fc2): Linear(in_features=496, out_features=496, bias=True)
        (final_layer_norm): LayerNorm((496,), eps=1e-05, elementwise_affine=True)
      )
      (1): TransformerDecoderLayer(
        (self_attn): MultiBranch(
          (branches): ModuleList(
            (0): MultiheadAttention(
              (k_proj): Linear(in_features=248, out_features=248, bias=True)
              (v_proj): Linear(in_features=248, out_features=248, bias=True)
              (q_proj): Linear(in_features=248, out_features=248, bias=True)
              (out_proj): Linear(in_features=248, out_features=248, bias=True)
            )
            (1): DynamicconvLayer(
              (weight_linear): Linear(in_features=248, out_features=28, bias=False)
              (linear1): Linear(in_features=248, out_features=496, bias=True)
              (act): GLU(dim=-1)
              (linear2): Linear(in_features=248, out_features=248, bias=True)
            )
          )
        )
        (self_attn_layer_norm): LayerNorm((496,), eps=1e-05, elementwise_affine=True)
        (encoder_attn): MultiheadAttention(
          (k_proj): Linear(in_features=496, out_features=496, bias=True)
          (v_proj): Linear(in_features=496, out_features=496, bias=True)
          (q_proj): Linear(in_features=496, out_features=496, bias=True)
          (out_proj): Linear(in_features=496, out_features=496, bias=True)
        )
        (encoder_attn_layer_norm): LayerNorm((496,), eps=1e-05, elementwise_affine=True)
        (fc1): Linear(in_features=496, out_features=496, bias=True)
        (fc2): Linear(in_features=496, out_features=496, bias=True)
        (final_layer_norm): LayerNorm((496,), eps=1e-05, elementwise_affine=True)
      )
      (2): TransformerDecoderLayer(
        (self_attn): MultiBranch(
          (branches): ModuleList(
            (0): MultiheadAttention(
              (k_proj): Linear(in_features=248, out_features=248, bias=True)
              (v_proj): Linear(in_features=248, out_features=248, bias=True)
              (q_proj): Linear(in_features=248, out_features=248, bias=True)
              (out_proj): Linear(in_features=248, out_features=248, bias=True)
            )
            (1): DynamicconvLayer(
              (weight_linear): Linear(in_features=248, out_features=60, bias=False)
              (linear1): Linear(in_features=248, out_features=496, bias=True)
              (act): GLU(dim=-1)
              (linear2): Linear(in_features=248, out_features=248, bias=True)
            )
          )
        )
        (self_attn_layer_norm): LayerNorm((496,), eps=1e-05, elementwise_affine=True)
        (encoder_attn): MultiheadAttention(
          (k_proj): Linear(in_features=496, out_features=496, bias=True)
          (v_proj): Linear(in_features=496, out_features=496, bias=True)
          (q_proj): Linear(in_features=496, out_features=496, bias=True)
          (out_proj): Linear(in_features=496, out_features=496, bias=True)
        )
        (encoder_attn_layer_norm): LayerNorm((496,), eps=1e-05, elementwise_affine=True)
        (fc1): Linear(in_features=496, out_features=496, bias=True)
        (fc2): Linear(in_features=496, out_features=496, bias=True)
        (final_layer_norm): LayerNorm((496,), eps=1e-05, elementwise_affine=True)
      )
      (3): TransformerDecoderLayer(
        (self_attn): MultiBranch(
          (branches): ModuleList(
            (0): MultiheadAttention(
              (k_proj): Linear(in_features=248, out_features=248, bias=True)
              (v_proj): Linear(in_features=248, out_features=248, bias=True)
              (q_proj): Linear(in_features=248, out_features=248, bias=True)
              (out_proj): Linear(in_features=248, out_features=248, bias=True)
            )
            (1): DynamicconvLayer(
              (weight_linear): Linear(in_features=248, out_features=124, bias=False)
              (linear1): Linear(in_features=248, out_features=496, bias=True)
              (act): GLU(dim=-1)
              (linear2): Linear(in_features=248, out_features=248, bias=True)
            )
          )
        )
        (self_attn_layer_norm): LayerNorm((496,), eps=1e-05, elementwise_affine=True)
        (encoder_attn): MultiheadAttention(
          (k_proj): Linear(in_features=496, out_features=496, bias=True)
          (v_proj): Linear(in_features=496, out_features=496, bias=True)
          (q_proj): Linear(in_features=496, out_features=496, bias=True)
          (out_proj): Linear(in_features=496, out_features=496, bias=True)
        )
        (encoder_attn_layer_norm): LayerNorm((496,), eps=1e-05, elementwise_affine=True)
        (fc1): Linear(in_features=496, out_features=496, bias=True)
        (fc2): Linear(in_features=496, out_features=496, bias=True)
        (final_layer_norm): LayerNorm((496,), eps=1e-05, elementwise_affine=True)
      )
      (4): TransformerDecoderLayer(
        (self_attn): MultiBranch(
          (branches): ModuleList(
            (0): MultiheadAttention(
              (k_proj): Linear(in_features=248, out_features=248, bias=True)
              (v_proj): Linear(in_features=248, out_features=248, bias=True)
              (q_proj): Linear(in_features=248, out_features=248, bias=True)
              (out_proj): Linear(in_features=248, out_features=248, bias=True)
            )
            (1): DynamicconvLayer(
              (weight_linear): Linear(in_features=248, out_features=124, bias=False)
              (linear1): Linear(in_features=248, out_features=496, bias=True)
              (act): GLU(dim=-1)
              (linear2): Linear(in_features=248, out_features=248, bias=True)
            )
          )
        )
        (self_attn_layer_norm): LayerNorm((496,), eps=1e-05, elementwise_affine=True)
        (encoder_attn): MultiheadAttention(
          (k_proj): Linear(in_features=496, out_features=496, bias=True)
          (v_proj): Linear(in_features=496, out_features=496, bias=True)
          (q_proj): Linear(in_features=496, out_features=496, bias=True)
          (out_proj): Linear(in_features=496, out_features=496, bias=True)
        )
        (encoder_attn_layer_norm): LayerNorm((496,), eps=1e-05, elementwise_affine=True)
        (fc1): Linear(in_features=496, out_features=496, bias=True)
        (fc2): Linear(in_features=496, out_features=496, bias=True)
        (final_layer_norm): LayerNorm((496,), eps=1e-05, elementwise_affine=True)
      )
      (5): TransformerDecoderLayer(
        (self_attn): MultiBranch(
          (branches): ModuleList(
            (0): MultiheadAttention(
              (k_proj): Linear(in_features=248, out_features=248, bias=True)
              (v_proj): Linear(in_features=248, out_features=248, bias=True)
              (q_proj): Linear(in_features=248, out_features=248, bias=True)
              (out_proj): Linear(in_features=248, out_features=248, bias=True)
            )
            (1): DynamicconvLayer(
              (weight_linear): Linear(in_features=248, out_features=124, bias=False)
              (linear1): Linear(in_features=248, out_features=496, bias=True)
              (act): GLU(dim=-1)
              (linear2): Linear(in_features=248, out_features=248, bias=True)
            )
          )
        )
        (self_attn_layer_norm): LayerNorm((496,), eps=1e-05, elementwise_affine=True)
        (encoder_attn): MultiheadAttention(
          (k_proj): Linear(in_features=496, out_features=496, bias=True)
          (v_proj): Linear(in_features=496, out_features=496, bias=True)
          (q_proj): Linear(in_features=496, out_features=496, bias=True)
          (out_proj): Linear(in_features=496, out_features=496, bias=True)
        )
        (encoder_attn_layer_norm): LayerNorm((496,), eps=1e-05, elementwise_affine=True)
        (fc1): Linear(in_features=496, out_features=496, bias=True)
        (fc2): Linear(in_features=496, out_features=496, bias=True)
        (final_layer_norm): LayerNorm((496,), eps=1e-05, elementwise_affine=True)
      )
    )
  )
)
| model transformer_multibranch_v2_wmt_en_de, criterion LabelSmoothedCrossEntropyCriterion
| num. model params: 39361568 (num. trained: 39361568)
| training on 4 GPUs
| max tokens per GPU = 4096 and max sentences per GPU = None
| NOTICE: your device may support faster training with --fp16
| loaded checkpoint checkpoints/wmt14.en-fr/attention/multibranch_v2/embed496/checkpoint_last.pt (epoch 15 @ 38130 updates)
| loading train data for epoch 15
| loaded 33125551 examples from: data/binary/wmt14_en_fr/train.en-fr.en
| loaded 33125551 examples from: data/binary/wmt14_en_fr/train.en-fr.fr
| data/binary/wmt14_en_fr train en-fr 33125551 examples
tensorboard or required dependencies not found, please see README for using tensorboard. (e.g. pip install tensorboardX)
| epoch 016: 1000 / 2542 loss=3.455, nll_loss=1.703, ppl=3.25, wps=86414, ups=0, wpb=488735.862, bsz=13055.441, num_updates=39131, lr=0.000137255, gnorm=0.118, clip=0.000, oom=0.000, wall=6015, train_wall=198418
| epoch 016: 2000 / 2542 loss=3.456, nll_loss=1.704, ppl=3.26, wps=87508, ups=0, wpb=488714.780, bsz=13032.091, num_updates=40131, lr=0.000114143, gnorm=0.116, clip=0.000, oom=0.000, wall=11529, train_wall=203487
| epoch 016 | loss 3.455 | nll_loss 1.703 | ppl 3.26 | wps 87753 | ups 0 | wpb 488637.205 | bsz 13031.295 | num_updates 40672 | lr 0.000102416 | gnorm 0.115 | clip 0.000 | oom 0.000 | wall 14508 | train_wall 206228
tensorboard or required dependencies not found, please see README for using tensorboard. (e.g. pip install tensorboardX)
| epoch 016 | valid on 'valid' subset | loss 3.381 | nll_loss 1.574 | ppl 2.98 | num_updates 40672 | best_loss 3.38064
| saved checkpoint checkpoints/wmt14.en-fr/attention/multibranch_v2/embed496/checkpoint16.pt (epoch 16 @ 40672 updates) (writing took 4.32442045211792 seconds)
tensorboard or required dependencies not found, please see README for using tensorboard. (e.g. pip install tensorboardX)
| epoch 017: 1000 / 2542 loss=3.452, nll_loss=1.699, ppl=3.25, wps=85999, ups=0, wpb=488578.637, bsz=13020.842, num_updates=41673, lr=8.22264e-05, gnorm=0.107, clip=0.000, oom=0.000, wall=20522, train_wall=211350
| epoch 017: 2000 / 2542 loss=3.451, nll_loss=1.698, ppl=3.25, wps=87049, ups=0, wpb=488655.628, bsz=13032.567, num_updates=42673, lr=6.40931e-05, gnorm=0.105, clip=0.000, oom=0.000, wall=26068, train_wall=216460
| epoch 017 | loss 3.450 | nll_loss 1.698 | ppl 3.24 | wps 87333 | ups 0 | wpb 488637.205 | bsz 13031.295 | num_updates 43214 | lr 5.51631e-05 | gnorm 0.104 | clip 0.000 | oom 0.000 | wall 29057 | train_wall 219213
tensorboard or required dependencies not found, please see README for using tensorboard. (e.g. pip install tensorboardX)
| epoch 017 | valid on 'valid' subset | loss 3.377 | nll_loss 1.569 | ppl 2.97 | num_updates 43214 | best_loss 3.37692
| saved checkpoint checkpoints/wmt14.en-fr/attention/multibranch_v2/embed496/checkpoint17.pt (epoch 17 @ 43214 updates) (writing took 3.68332576751709 seconds)
tensorboard or required dependencies not found, please see README for using tensorboard. (e.g. pip install tensorboardX)
| epoch 018: 1000 / 2542 loss=3.447, nll_loss=1.694, ppl=3.23, wps=86137, ups=0, wpb=488788.014, bsz=13096.871, num_updates=44215, lr=4.03223e-05, gnorm=0.097, clip=0.000, oom=0.000, wall=35091, train_wall=224331
| epoch 018: 2000 / 2542 loss=3.447, nll_loss=1.694, ppl=3.24, wps=87171, ups=0, wpb=488695.879, bsz=13043.578, num_updates=45215, lr=2.77371e-05, gnorm=0.096, clip=0.000, oom=0.000, wall=40629, train_wall=229438
| epoch 018 | loss 3.447 | nll_loss 1.694 | ppl 3.24 | wps 87402 | ups 0 | wpb 488637.205 | bsz 13031.295 | num_updates 45756 | lr 2.18843e-05 | gnorm 0.095 | clip 0.000 | oom 0.000 | wall 43622 | train_wall 232200
tensorboard or required dependencies not found, please see README for using tensorboard. (e.g. pip install tensorboardX)
| epoch 018 | valid on 'valid' subset | loss 3.375 | nll_loss 1.567 | ppl 2.96 | num_updates 45756 | best_loss 3.37467
| saved checkpoint checkpoints/wmt14.en-fr/attention/multibranch_v2/embed496/checkpoint18.pt (epoch 18 @ 45756 updates) (writing took 3.450366497039795 seconds)
tensorboard or required dependencies not found, please see README for using tensorboard. (e.g. pip install tensorboardX)
| epoch 019: 1000 / 2542 loss=3.445, nll_loss=1.692, ppl=3.23, wps=86350, ups=0, wpb=488896.674, bsz=13044.155, num_updates=46757, lr=1.28588e-05, gnorm=0.089, clip=0.000, oom=0.000, wall=49616, train_wall=237309
| epoch 019: 2000 / 2542 loss=3.444, nll_loss=1.691, ppl=3.23, wps=87034, ups=0, wpb=488717.104, bsz=13042.614, num_updates=47757, lr=6.21705e-06, gnorm=0.088, clip=0.000, oom=0.000, wall=55184, train_wall=242411
| epoch 019 | loss 3.444 | nll_loss 1.691 | ppl 3.23 | wps 87213 | ups 0 | wpb 488637.205 | bsz 13031.295 | num_updates 48298 | lr 3.62516e-06 | gnorm 0.087 | clip 0.000 | oom 0.000 | wall 58191 | train_wall 245166
tensorboard or required dependencies not found, please see README for using tensorboard. (e.g. pip install tensorboardX)
| epoch 019 | valid on 'valid' subset | loss 3.374 | nll_loss 1.566 | ppl 2.96 | num_updates 48298 | best_loss 3.37367
| saved checkpoint checkpoints/wmt14.en-fr/attention/multibranch_v2/embed496/checkpoint19.pt (epoch 19 @ 48298 updates) (writing took 3.327911615371704 seconds)
tensorboard or required dependencies not found, please see README for using tensorboard. (e.g. pip install tensorboardX)
| epoch 020: 1000 / 2542 loss=3.444, nll_loss=1.691, ppl=3.23, wps=87395, ups=0, wpb=488457.279, bsz=13026.757, num_updates=49299, lr=6.98578e-07, gnorm=0.084, clip=0.000, oom=0.000, wall=64110, train_wall=250228
| epoch 020 | loss 3.444 | nll_loss 1.691 | ppl 3.23 | wps 88294 | ups 0 | wpb 488649.153 | bsz 13033.655 | num_updates 50000 | lr 0.001 | gnorm 0.083 | clip 0.000 | oom 0.000 | wall 67934 | train_wall 253763
tensorboard or required dependencies not found, please see README for using tensorboard. (e.g. pip install tensorboardX)
| epoch 020 | valid on 'valid' subset | loss 3.374 | nll_loss 1.566 | ppl 2.96 | num_updates 50000 | best_loss 3.37367
| saved checkpoint checkpoints/wmt14.en-fr/attention/multibranch_v2/embed496/checkpoint_last.pt (epoch 20 @ 50000 updates) (writing took 1.5249574184417725 seconds)
| done training in 67918.4 seconds
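The reported "num. model params: 39361568" can be reproduced from the printed module shapes alone. The arithmetic below is a back-of-the-envelope check (not fairseq code); it works out exactly under two assumptions visible in the log: SinusoidalPositionalEmbedding holds no learned weights, and the 44512x496 embedding matrix is counted once because share_all_embeddings=True ties the encoder input, decoder input, and output projection.

```python
def linear(n_in, n_out, bias=True):
    """Parameter count of a Linear(n_in, n_out) layer."""
    return n_in * n_out + (n_out if bias else 0)

d, half = 496, 248                       # model dim and per-branch dim (two branches)
ln = 2 * d                               # LayerNorm weight + bias
mha = lambda dim: 4 * linear(dim, dim)   # k_proj / v_proj / q_proj / out_proj
kernels = [3, 7, 15, 31, 31, 31]         # encoder/decoder_kernel_size_list from the Namespace

def dynconv(k, heads=4):
    # weight_linear predicts one weight per (conv head, kernel position), no bias;
    # linear1 doubles the width because GLU halves it again; linear2 projects back
    return linear(half, heads * k, bias=False) + linear(half, d) + linear(half, half)

def enc_layer(k):
    # attention branch + conv branch + fc1/fc2 + two LayerNorms
    return mha(half) + dynconv(k) + 2 * linear(d, d) + 2 * ln

def dec_layer(k):
    # same, plus a full-width encoder-attention block and its LayerNorm
    return enc_layer(k) + mha(d) + ln

embed = 44512 * d                        # shared embedding, counted once
total = embed + sum(enc_layer(k) for k in kernels) + sum(dec_layer(k) for k in kernels)
print(total)                             # 39361568, matching the log
```

The per-layer weight_linear out_features in the printout (12, 28, 60, 124, ...) are just 4 conv heads times the kernel sizes 3, 7, 15, 31.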
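The lr values in the epoch logs follow the cosine scheduler configured in the Namespace (lr_scheduler='cosine', warmup_updates=5000, warmup_init_lr=1e-07, max_lr=0.001, lr=[1e-07], lr_period_updates=45000.0). The function below is a sketch of that schedule, not fairseq's actual implementation; it ignores restarts, which are inactive here since t_mult=1.0 and lr_shrink=1.0, and it reproduces the logged values closely.

```python
import math

def cosine_lr(num_updates, warmup_updates=5000, warmup_init_lr=1e-7,
              min_lr=1e-7, max_lr=1e-3, period=45000.0):
    """Linear warmup to max_lr, then one cosine period down to min_lr."""
    if num_updates < warmup_updates:
        # linear warmup from warmup_init_lr up to max_lr
        return warmup_init_lr + num_updates * (max_lr - warmup_init_lr) / warmup_updates
    # fraction of the cosine period elapsed since warmup ended
    t = (num_updates - warmup_updates) / period
    return min_lr + 0.5 * (max_lr - min_lr) * (1 + math.cos(math.pi * t))

print(cosine_lr(39131))   # ~0.000137255, the lr logged at epoch 016, update 39131
```

The same formula also matches the late-training values, e.g. lr=6.98578e-07 at update 49299, where the schedule is nearly at its minimum.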
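Two quick sanity checks on the logged statistics. First, fairseq reports losses in base 2, so the printed ppl is 2**nll_loss. Second, gradients are accumulated over update_freq=[32] steps on 4 GPUs with max_tokens=4096 each, which caps the tokens per optimizer step; the logged wpb (~488.6k) sits just under that cap because batches rarely fill max_tokens exactly.

```python
nll_loss = 1.574            # valid nll_loss from the final epochs
ppl = 2 ** nll_loss         # fairseq logs losses in base 2
print(round(ppl, 2))        # 2.98, matching the logged valid ppl

# upper bound on tokens per optimizer step:
# 4 GPUs x update_freq=32 x max_tokens=4096 per GPU
tokens_cap = 4 * 32 * 4096
print(tokens_cap)           # 524288
```

This also explains why ups (updates per second) rounds down to 0: each optimizer step consumes roughly half a million tokens.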