I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcublas.so.8.0 locally I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcudnn.so.5 locally I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcufft.so.8.0 locally I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcuda.so.1 locally I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcurand.so.8.0 locally INFO:tensorflow:Loading config from /home/devendra/Desktop/Neural_MT/seq2seq/example_configs/nmt_small.yml INFO:tensorflow:Loading config from /home/devendra/Desktop/Neural_MT/seq2seq/example_configs/train_seq2seq.yml INFO:tensorflow:Loading config from /home/devendra/Desktop/Neural_MT/seq2seq/example_configs/text_metrics_bpe.yml INFO:tensorflow:Final Config: buckets: 10,20,30,40 default_params: - {separator: ' '} - {postproc_fn: seq2seq.data.postproc.strip_bpe} hooks: - {class: PrintModelAnalysisHook} - {class: MetadataCaptureHook} - class: TrainSampleHook params: {every_n_steps: 1000} - class: TokensPerSecondCounter params: {every_n_steps: 100} metrics: - {class: LogPerplexityMetricSpec} - class: BleuMetricSpec params: {postproc_fn: seq2seq.data.postproc.strip_bpe, separator: ' '} - class: RougeMetricSpec params: {postproc_fn: seq2seq.data.postproc.strip_bpe, rouge_type: rouge_1/f_score, separator: ' '} - class: RougeMetricSpec params: {postproc_fn: seq2seq.data.postproc.strip_bpe, rouge_type: rouge_1/r_score, separator: ' '} - class: RougeMetricSpec params: {postproc_fn: seq2seq.data.postproc.strip_bpe, rouge_type: rouge_1/p_score, separator: ' '} - class: RougeMetricSpec params: {postproc_fn: seq2seq.data.postproc.strip_bpe, rouge_type: rouge_2/f_score, separator: ' '} - class: RougeMetricSpec params: {postproc_fn: seq2seq.data.postproc.strip_bpe, rouge_type: rouge_2/r_score, separator: ' '} - class: RougeMetricSpec params: {postproc_fn: seq2seq.data.postproc.strip_bpe, rouge_type: rouge_2/p_score, separator: ' '} - class: RougeMetricSpec params: {postproc_fn: seq2seq.data.postproc.strip_bpe, rouge_type: rouge_l/f_score, separator: ' '} model: AttentionSeq2Seq model_params: attention.class: seq2seq.decoders.attention.AttentionLayerDot attention.params: {num_units: 128} bridge.class: seq2seq.models.bridges.ZeroBridge decoder.class: seq2seq.decoders.AttentionDecoder decoder.params: rnn_cell: cell_class: GRUCell cell_params: {num_units: 128} dropout_input_keep_prob: 0.8 dropout_output_keep_prob: 1.0 num_layers: 1 embedding.dim: 128 encoder.class: seq2seq.encoders.BidirectionalRNNEncoder encoder.params: rnn_cell: cell_class: GRUCell cell_params: {num_units: 128} dropout_input_keep_prob: 0.8 dropout_output_keep_prob: 1.0 num_layers: 1 optimizer.learning_rate: 0.0001 optimizer.name: Adam source.max_seq_len: 50 source.reverse: false target.max_seq_len: 50 WARNING:tensorflow:Ignoring config flag: default_params INFO:tensorflow:Setting save_checkpoints_secs to 600 INFO:tensorflow:Creating ParallelTextInputPipeline in mode=train INFO:tensorflow: ParallelTextInputPipeline: !!python/unicode 'num_epochs': null !!python/unicode 'shuffle': true !!python/unicode 'source_delimiter': !!python/unicode ' ' !!python/unicode 'source_files': [/home/devendra/Desktop/Neural_MT/data/en-fr/train.en.rawtok.tok] !!python/unicode 'target_delimiter': !!python/unicode ' ' !!python/unicode 'target_files': [/home/devendra/Desktop/Neural_MT/data/en-fr/train.fr.rawtok.tok] INFO:tensorflow:Creating 
ParallelTextInputPipeline in mode=eval INFO:tensorflow: ParallelTextInputPipeline: !!python/unicode 'num_epochs': 1 !!python/unicode 'shuffle': false !!python/unicode 'source_delimiter': !!python/unicode ' ' !!python/unicode 'source_files': [/home/devendra/Desktop/Neural_MT/data/dev_test/dev/newstest_dev.en.tok] !!python/unicode 'target_delimiter': !!python/unicode ' ' !!python/unicode 'target_files': [/home/devendra/Desktop/Neural_MT/data/dev_test/dev/newstest_dev.fr.tok] INFO:tensorflow:Using config: {'_save_checkpoints_secs': 600, '_num_ps_replicas': 0, '_keep_checkpoint_max': 5, '_tf_random_seed': None, '_task_type': None, '_environment': 'local', '_is_chief': True, '_cluster_spec': , '_tf_config': , '_task_id': 0, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_evaluation_master': '', '_keep_checkpoint_every_n_hours': 4, '_master': ''} INFO:tensorflow:Creating PrintModelAnalysisHook in mode=train INFO:tensorflow: PrintModelAnalysisHook: {} INFO:tensorflow:Creating MetadataCaptureHook in mode=train INFO:tensorflow: MetadataCaptureHook: {!!python/unicode 'step': 10} INFO:tensorflow:Creating TrainSampleHook in mode=train INFO:tensorflow: TrainSampleHook: {!!python/unicode 'every_n_secs': null, !!python/unicode 'every_n_steps': 1000, !!python/unicode 'source_delimiter': !!python/unicode ' ', !!python/unicode 'target_delimiter': !!python/unicode ' '} INFO:tensorflow:Creating TokensPerSecondCounter in mode=train INFO:tensorflow: TokensPerSecondCounter: {!!python/unicode 'every_n_secs': null, !!python/unicode 'every_n_steps': 100} INFO:tensorflow:Creating LogPerplexityMetricSpec in mode=eval INFO:tensorflow: LogPerplexityMetricSpec: {} INFO:tensorflow:Creating BleuMetricSpec in mode=eval INFO:tensorflow: BleuMetricSpec: {!!python/unicode 'eos_token': !!python/unicode 'SEQUENCE_END', !!python/unicode 'postproc_fn': !!python/unicode 'seq2seq.data.postproc.strip_bpe', !!python/unicode 'separator': !!python/unicode ' ', !!python/unicode 'sos_token': !!python/unicode 'SEQUENCE_START'} INFO:tensorflow:Creating RougeMetricSpec in mode=eval INFO:tensorflow: RougeMetricSpec: {!!python/unicode 'eos_token': !!python/unicode 'SEQUENCE_END', !!python/unicode 'postproc_fn': !!python/unicode 'seq2seq.data.postproc.strip_bpe', !!python/unicode 'rouge_type': !!python/unicode 'rouge_1/f_score', !!python/unicode 'separator': !!python/unicode ' ', !!python/unicode 'sos_token': !!python/unicode 'SEQUENCE_START'} INFO:tensorflow:Creating RougeMetricSpec in mode=eval INFO:tensorflow: RougeMetricSpec: {!!python/unicode 'eos_token': !!python/unicode 'SEQUENCE_END', !!python/unicode 'postproc_fn': !!python/unicode 'seq2seq.data.postproc.strip_bpe', !!python/unicode 'rouge_type': !!python/unicode 'rouge_1/r_score', !!python/unicode 'separator': !!python/unicode ' ', !!python/unicode 'sos_token': !!python/unicode 'SEQUENCE_START'} INFO:tensorflow:Creating RougeMetricSpec in mode=eval INFO:tensorflow: RougeMetricSpec: {!!python/unicode 'eos_token': !!python/unicode 'SEQUENCE_END', !!python/unicode 'postproc_fn': !!python/unicode 'seq2seq.data.postproc.strip_bpe', !!python/unicode 'rouge_type': !!python/unicode 'rouge_1/p_score', !!python/unicode 'separator': !!python/unicode ' ', !!python/unicode 'sos_token': !!python/unicode 'SEQUENCE_START'} INFO:tensorflow:Creating RougeMetricSpec in mode=eval INFO:tensorflow: RougeMetricSpec: {!!python/unicode 'eos_token': !!python/unicode 'SEQUENCE_END', !!python/unicode 'postproc_fn': !!python/unicode 'seq2seq.data.postproc.strip_bpe', !!python/unicode 'rouge_type': 
!!python/unicode 'rouge_2/f_score', !!python/unicode 'separator': !!python/unicode ' ', !!python/unicode 'sos_token': !!python/unicode 'SEQUENCE_START'} INFO:tensorflow:Creating RougeMetricSpec in mode=eval INFO:tensorflow: RougeMetricSpec: {!!python/unicode 'eos_token': !!python/unicode 'SEQUENCE_END', !!python/unicode 'postproc_fn': !!python/unicode 'seq2seq.data.postproc.strip_bpe', !!python/unicode 'rouge_type': !!python/unicode 'rouge_2/r_score', !!python/unicode 'separator': !!python/unicode ' ', !!python/unicode 'sos_token': !!python/unicode 'SEQUENCE_START'} INFO:tensorflow:Creating RougeMetricSpec in mode=eval INFO:tensorflow: RougeMetricSpec: {!!python/unicode 'eos_token': !!python/unicode 'SEQUENCE_END', !!python/unicode 'postproc_fn': !!python/unicode 'seq2seq.data.postproc.strip_bpe', !!python/unicode 'rouge_type': !!python/unicode 'rouge_2/p_score', !!python/unicode 'separator': !!python/unicode ' ', !!python/unicode 'sos_token': !!python/unicode 'SEQUENCE_START'} INFO:tensorflow:Creating RougeMetricSpec in mode=eval INFO:tensorflow: RougeMetricSpec: {!!python/unicode 'eos_token': !!python/unicode 'SEQUENCE_END', !!python/unicode 'postproc_fn': !!python/unicode 'seq2seq.data.postproc.strip_bpe', !!python/unicode 'rouge_type': !!python/unicode 'rouge_l/f_score', !!python/unicode 'separator': !!python/unicode ' ', !!python/unicode 'sos_token': !!python/unicode 'SEQUENCE_START'} WARNING:tensorflow:From /home/devendra/.local/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/monitors.py:267: __init__ (from tensorflow.contrib.learn.python.learn.monitors) is deprecated and will be removed after 2016-12-05. Instructions for updating: Monitors are deprecated. Please use tf.train.SessionRunHook. INFO:tensorflow:Creating AttentionSeq2Seq in mode=train INFO:tensorflow: AttentionSeq2Seq: !!python/unicode 'attention.class': !!python/unicode 'seq2seq.decoders.attention.AttentionLayerDot' !!python/unicode 'attention.params': {num_units: 128} !!python/unicode 'bridge.class': !!python/unicode 'seq2seq.models.bridges.ZeroBridge' !!python/unicode 'bridge.params': {} !!python/unicode 'decoder.class': !!python/unicode 'seq2seq.decoders.AttentionDecoder' !!python/unicode 'decoder.params': rnn_cell: cell_class: GRUCell cell_params: {num_units: 128} dropout_input_keep_prob: 0.8 dropout_output_keep_prob: 1.0 num_layers: 1 !!python/unicode 'embedding.dim': 128 !!python/unicode 'embedding.share': false !!python/unicode 'encoder.class': !!python/unicode 'seq2seq.encoders.BidirectionalRNNEncoder' !!python/unicode 'encoder.params': rnn_cell: cell_class: GRUCell cell_params: {num_units: 128} dropout_input_keep_prob: 0.8 dropout_output_keep_prob: 1.0 num_layers: 1 !!python/unicode 'inference.beam_search.beam_width': 0 !!python/unicode 'inference.beam_search.choose_successors_fn': !!python/unicode 'choose_top_k' !!python/unicode 'inference.beam_search.length_penalty_weight': 0.0 !!python/unicode 'optimizer.clip_gradients': 5.0 !!python/unicode 'optimizer.learning_rate': 0.0001 !!python/unicode 'optimizer.lr_decay_rate': 0.99 !!python/unicode 'optimizer.lr_decay_steps': 100 !!python/unicode 'optimizer.lr_decay_type': !!python/unicode '' !!python/unicode 'optimizer.lr_min_learning_rate': 1.0e-12 !!python/unicode 'optimizer.lr_staircase': false !!python/unicode 'optimizer.lr_start_decay_at': 0 !!python/unicode 'optimizer.lr_stop_decay_at': 1000000000.0 !!python/unicode 'optimizer.name': !!python/unicode 'Adam' !!python/unicode 'source.max_seq_len': 50 !!python/unicode 'source.reverse': false 
!!python/unicode 'target.max_seq_len': 50 !!python/unicode 'vocab_source': !!python/unicode '/home/devendra/Desktop/Neural_MT/data/en-fr/enfr.bpe32000' !!python/unicode 'vocab_target': !!python/unicode '/home/devendra/Desktop/Neural_MT/data/en-fr/enfr.bpe32000' INFO:tensorflow:Creating vocabulary lookup table of size 32003 INFO:tensorflow:Creating vocabulary lookup table of size 32003 INFO:tensorflow:Creating BidirectionalRNNEncoder in mode=train INFO:tensorflow: BidirectionalRNNEncoder: rnn_cell: cell_class: GRUCell cell_params: {num_units: 128} dropout_input_keep_prob: 0.8 dropout_output_keep_prob: 1.0 num_layers: 1 residual_combiner: add residual_connections: false residual_dense: false INFO:tensorflow:Creating AttentionLayerDot in mode=train INFO:tensorflow: AttentionLayerDot: {!!python/unicode 'num_units': 128} INFO:tensorflow:Creating AttentionDecoder in mode=train INFO:tensorflow: AttentionDecoder: !!python/unicode 'max_decode_length': 100 !!python/unicode 'rnn_cell': cell_class: GRUCell cell_params: {num_units: 128} dropout_input_keep_prob: 0.8 dropout_output_keep_prob: 1.0 num_layers: 1 residual_combiner: add residual_connections: false residual_dense: false INFO:tensorflow:Creating ZeroBridge in mode=train INFO:tensorflow: ZeroBridge: {} INFO:tensorflow:Create CheckpointSaverHook. 4 ops no flops stats due to incomplete shapes. Consider passing run_meta to use run_time shapes. Parsing GraphDef... Parsing RunMetadata... Parsing OpLog... Preparing Views... INFO:tensorflow:_TFProfRoot (--/12.81m params) model/att_seq2seq/decode/attention/att_keys/biases (128, 128/128 params) model/att_seq2seq/decode/attention/att_keys/weights (256x128, 32.77k/32.77k params) model/att_seq2seq/decode/attention/att_query/biases (128, 128/128 params) model/att_seq2seq/decode/attention/att_query/weights (128x128, 16.38k/16.38k params) model/att_seq2seq/decode/attention_decoder/decoder/attention_mix/biases (128, 128/128 params) model/att_seq2seq/decode/attention_decoder/decoder/attention_mix/weights (384x128, 49.15k/49.15k params) model/att_seq2seq/decode/attention_decoder/decoder/gru_cell/candidate/biases (128, 128/128 params) model/att_seq2seq/decode/attention_decoder/decoder/gru_cell/candidate/weights (512x128, 65.54k/65.54k params) model/att_seq2seq/decode/attention_decoder/decoder/gru_cell/gates/biases (256, 256/256 params) model/att_seq2seq/decode/attention_decoder/decoder/gru_cell/gates/weights (512x256, 131.07k/131.07k params) model/att_seq2seq/decode/attention_decoder/decoder/logits/biases (32003, 32.00k/32.00k params) model/att_seq2seq/decode/attention_decoder/decoder/logits/weights (128x32003, 4.10m/4.10m params) model/att_seq2seq/decode/target_embedding/W (32003x128, 4.10m/4.10m params) model/att_seq2seq/encode/bidi_rnn_encoder/bidirectional_rnn/bw/gru_cell/candidate/biases (128, 128/128 params) model/att_seq2seq/encode/bidi_rnn_encoder/bidirectional_rnn/bw/gru_cell/candidate/weights (256x128, 32.77k/32.77k params) model/att_seq2seq/encode/bidi_rnn_encoder/bidirectional_rnn/bw/gru_cell/gates/biases (256, 256/256 params) model/att_seq2seq/encode/bidi_rnn_encoder/bidirectional_rnn/bw/gru_cell/gates/weights (256x256, 65.54k/65.54k params) model/att_seq2seq/encode/bidi_rnn_encoder/bidirectional_rnn/fw/gru_cell/candidate/biases (128, 128/128 params) model/att_seq2seq/encode/bidi_rnn_encoder/bidirectional_rnn/fw/gru_cell/candidate/weights (256x128, 32.77k/32.77k params) model/att_seq2seq/encode/bidi_rnn_encoder/bidirectional_rnn/fw/gru_cell/gates/biases (256, 256/256 params) 
model/att_seq2seq/encode/bidi_rnn_encoder/bidirectional_rnn/fw/gru_cell/gates/weights (256x256, 65.54k/65.54k params) model/att_seq2seq/encode/source_embedding/W (32003x128, 4.10m/4.10m params) W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations. W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations. W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations. W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations. W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations. W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations. I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties: name: GeForce GTX TITAN X major: 5 minor: 2 memoryClockRate (GHz) 1.076 pciBusID 0000:08:00.0 Total memory: 11.92GiB Free memory: 11.53GiB I tensorflow/core/common_runtime/gpu/gpu_device.cc:906] DMA: 0 I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 0: Y I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX TITAN X, pci bus id: 0000:08:00.0) E tensorflow/stream_executor/cuda/cuda_driver.cc:1002] failed to allocate 11.92G (12799180800 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY I tensorflow/core/common_runtime/gpu/pool_allocator.cc:247] PoolAllocator: After 42965 get requests, put_count=42648 evicted_count=1000 eviction_rate=0.0234478 and unsatisfied allocation rate=0.0329803 I tensorflow/core/common_runtime/gpu/pool_allocator.cc:259] Raising pool_size_limit_ from 100 to 110 INFO:tensorflow:Saving checkpoints for 2 into /tmp/nmt_tutorial/model.ckpt. INFO:tensorflow:loss = 10.3718, step = 2 INFO:tensorflow:Prediction followed by Target @ Step 2 ==================================================================================================== directr ice directr ice directr ice in struction in sufficient in sufficient Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Copyright La Poste Suisse - Pour des informations complémentaires , contac■ tez-nous . SEQUENCE_END v is directr ice directr ice directr ice directr ice 201-1-DO_TO PIC.html Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Voir la FAQ intitulée " Comment la directive regis■ ter■ _■ glob■ als affec■ te-■ t-elle mes s■ cripts ? 
SEQUENCE_END directr ice directr ice directr ice directr ice directr ice Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov sont utiles pour caractér■ iser vos données de diverses fa ¸ ons . SEQUENCE_END estim ated directr ice barri er directr ice 201-1-DO_TO PIC.html 201-1-DO_TO PIC.html 201-1-DO_TO PIC.html 201-1-DO_TO PIC.html Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Dé■ velopp■ er un programme non-■ libre n ' apporte aucune contribution à la société . SEQUENCE_END v is directr ice directr ice directr ice directr ice in depend in depend in depend Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Toute action qui ne se limite pas à la modification d ' une copie d ' un logiciel ou à son exécution est une action de propagation de l ' œuvre . SEQUENCE_END estim ated directr ice directr ice directr ice directr ice directr ice Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Enfin la première version stable de Ma Ti■ reli■ re 2 est disponible ! SEQUENCE_END v is directr ice directr ice directr ice directr ice directr ice Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Ils peuvent alors vous dire " D ' accord , nous vous par■ donn■ ons " . SEQUENCE_END directr ice directr ice directr ice 201-1-DO_TO PIC.html 201-1-DO_TO PIC.html Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov O■ uv■ rez la photo dans votre logiciel de traitement d ' images . SEQUENCE_END v is v is in sufficient in sufficient in sufficient in depend in depend in depend in depend in depend in depend in depend Im prov Im prov Im prov Im prov Cet appartement en location á C■ adi■ z est situé en pleine . . . SEQUENCE_END v is in sufficient in sufficient in depend in depend in depend 201-1-DO_TO PIC.html in depend Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Vous n ' avez pas le droit de modifier , de transformer ou d ' adapter cette création . SEQUENCE_END v is directr ice in sufficient in sufficient in sufficient in sufficient in sufficient in sufficient in sufficient Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Pe■ ine de mort / Equ■ ité du procès . Man■ j■ it / S■ arab■ j■ it . . . SEQUENCE_END v is directr ice in depend in depend directr ice 201-1-DO_TO PIC.html in depend Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov La liberté numéro zéro est la liberté d ' exécuter le programme comme vous le voulez . SEQUENCE_END v is directr ice directr ice directr ice directr ice directr ice Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Ap■ partement à Ess■ a■ ou■ ira situé à quelques minutes de la . . . SEQUENCE_END v is in sufficient in sufficient in sufficient in sufficient in sufficient 201-1-DO_TO PIC.html 201-1-DO_TO PIC.html 201-1-DO_TO PIC.html 201-1-DO_TO PIC.html 201-1-DO_TO PIC.html Im prov Im prov 201-1-DO_TO PIC.html 201-1-DO_TO PIC.html Im prov Im prov 201-1-DO_TO PIC.html 201-1-DO_TO PIC.html 201-1-DO_TO PIC.html Im prov Et nous avions donc besoin d ' un mécanisme plus compli■ qué qui permette d ' en garder trace . 
SEQUENCE_END directr ice directr ice in depend in depend in depend in depend in depend 201-1-DO_TO PIC.html 201-1-DO_TO PIC.html Règ lement Règ lement Im prov Im prov Règ lement Règ lement Règ lement Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov A fine H■ ano■ ian restaurant that also serves and the training ground for under■ privileged Viet■ nam■ ese youth . Good food . SEQUENCE_END v is directr ice directr ice 201-1-DO_TO PIC.html 201-1-DO_TO PIC.html 201-1-DO_TO PIC.html 201-1-DO_TO PIC.html Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Le logiciel fonctionne en version autonome ( stand■ alone ) et comme un plu■ g-■ in ( module externe ) . La version autonome sup■ porte la technologie H■ DR . SEQUENCE_END estim ated directr ice directr ice directr ice in depend Im prov Im prov Im prov Im prov Im prov Im prov Im prov Les sch■ émas restent la propriété de leurs auteurs respectifs . SEQUENCE_END v is in sufficient directr ice 201-1-DO_TO PIC.html 201-1-DO_TO PIC.html 201-1-DO_TO PIC.html 201-1-DO_TO PIC.html 201-1-DO_TO PIC.html Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov N ' oubli■ ez pas de réserver vos chambres M■ au■ le■ on France . SEQUENCE_END v is al t in sufficient in sufficient in sufficient in sufficient in sufficient in sufficient in sufficient Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov D■ ur d ’ imag■ iner que ce mois de privation alimentaire peut être attendu avec grand plaisir par les musulmans , non ? SEQUENCE_END v is in sufficient in depend in depend in depend in depend in depend in depend Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov C ’ est à partir du X■ VII siècle le commerce entre C■ adi■ z et le reste du monde a augmenté de manière considérable , un boom économique pour la ville de C■ adi■ z . SEQUENCE_END M ONU directr ice directr ice directr ice 201-1-DO_TO PIC.html 201-1-DO_TO PIC.html 201-1-DO_TO PIC.html Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Il ne vous est pas demandé de l ' étudier ou de le modifier . SEQUENCE_END v is directr ice directr ice directr ice 201-1-DO_TO PIC.html Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov E■ du■ ardo Simp■ le dispose de chambres avec place pour 2 à 6 personnes . SEQUENCE_END directr ice directr ice directr ice directr ice directr ice 201-1-DO_TO PIC.html 201-1-DO_TO PIC.html e n Im prov Im prov Im prov Im prov Im prov Les brevets logiciels sont des armes qui ne devraient pas exister . SEQUENCE_END v is directr ice 201-1-DO_TO PIC.html 201-1-DO_TO PIC.html 201-1-DO_TO PIC.html 201-1-DO_TO PIC.html 201-1-DO_TO PIC.html 201-1-DO_TO PIC.html Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov AK■ VIS Ar■ t■ Suite est une collection d ' effets et cadres pour photos numériques . SEQUENCE_END directr ice directr ice directr ice directr ice in depend Im prov Im prov Im prov Im prov Im prov Im prov Im prov - I■ dé■ al pour des enregistrements vidéo haute résolution . 
SEQUENCE_END v is directr ice directr ice directr ice directr ice 201-1-DO_TO PIC.html Im prov Im prov in depend Im prov Elle a également pré■ fac■ é le livre . SEQUENCE_END estim ated directr ice directr ice directr ice in depend in depend 201-1-DO_TO PIC.html 201-1-DO_TO PIC.html 201-1-DO_TO PIC.html Im prov 201-1-DO_TO PIC.html Im prov Im prov Im prov Im prov Im prov Dans ce cas , la commande échou■ era , et s ' annul■ era . SEQUENCE_END v is directr ice directr ice directr ice in depend 201-1-DO_TO PIC.html 201-1-DO_TO PIC.html 201-1-DO_TO PIC.html 201-1-DO_TO PIC.html 201-1-DO_TO PIC.html 201-1-DO_TO PIC.html 201-1-DO_TO PIC.html 201-1-DO_TO PIC.html 201-1-DO_TO PIC.html 201-1-DO_TO PIC.html Im prov Im prov 201-1-DO_TO PIC.html 201-1-DO_TO PIC.html Cette manifestation n ' est pas un salon commercial : son entrée en est libre et gratuite . SEQUENCE_END directr ice directr ice directr ice directr ice 201-1-DO_TO PIC.html 201-1-DO_TO PIC.html 201-1-DO_TO PIC.html 201-1-DO_TO PIC.html 201-1-DO_TO PIC.html Im prov Im prov Im prov Im prov Im prov Le premier est dirigé contre le rôle de Microsoft dans cet accord . SEQUENCE_END v is v is in sufficient in depend in sufficient in sufficient in depend 201-1-DO_TO PIC.html 201-1-DO_TO PIC.html Im prov 201-1-DO_TO PIC.html 201-1-DO_TO PIC.html 201-1-DO_TO PIC.html 201-1-DO_TO PIC.html Im prov 201-1-DO_TO PIC.html Im prov Im prov Im prov La série K est idéale pour la bureau■ tique et les applications simples de tous les jours . SEQUENCE_END v is directr ice directr ice directr ice in depend in depend Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Im prov Les envois de publicité ( sp■ am ) et d ’ autre information in■ correcte restent à supprim■ mer . SEQUENCE_END v is directr ice directr ice in depend Im prov Im prov in depend Im prov Im prov Im prov Im prov C ' est aussi un puissant vérificateur de liens . SEQUENCE_END ==================================================================================================== I tensorflow/core/common_runtime/gpu/pool_allocator.cc:247] PoolAllocator: After 1464 get requests, put_count=2482 evicted_count=1000 eviction_rate=0.402901 and unsatisfied allocation rate=0.00068306 INFO:tensorflow:Performing full trace on next step. 
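A note on the earlier E tensorflow/stream_executor/cuda/cuda_driver.cc line: the failed 11.92G allocation appears to be TensorFlow's default attempt to reserve essentially the whole card when the session starts (total memory 11.92GiB, but only 11.53GiB free), and the run continues because the allocator falls back to smaller requests. If the device is shared with another process, a session configuration along the following lines avoids the large up-front reservation. This is a generic TF 1.x sketch, not an option this training script is shown (by this log) to expose; where the session is created would have to be adapted.

import tensorflow as tf

# Sketch: let TensorFlow grow its GPU allocation on demand instead of
# reserving (nearly) all device memory at startup.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
# Alternatively, cap the fraction of device memory TensorFlow may claim:
# config.gpu_options.per_process_gpu_memory_fraction = 0.9

sess = tf.Session(config=config)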
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcupti.so.8.0 locally INFO:tensorflow:Captured full trace at step 11 INFO:tensorflow:Saved run_metadata to /tmp/nmt_tutorial/run_meta INFO:tensorflow:Saved timeline to /tmp/nmt_tutorial/timeline.json INFO:tensorflow:Saved op log to /tmp/nmt_tutorial I tensorflow/core/common_runtime/gpu/pool_allocator.cc:247] PoolAllocator: After 10929 get requests, put_count=11370 evicted_count=2000 eviction_rate=0.175901 and unsatisfied allocation rate=0.145667 I tensorflow/core/common_runtime/gpu/pool_allocator.cc:259] Raising pool_size_limit_ from 372 to 409 I tensorflow/core/common_runtime/gpu/pool_allocator.cc:247] PoolAllocator: After 9743 get requests, put_count=9676 evicted_count=1000 eviction_rate=0.103348 and unsatisfied allocation rate=0.116904 I tensorflow/core/common_runtime/gpu/pool_allocator.cc:259] Raising pool_size_limit_ from 792 to 871 INFO:tensorflow:global_step/sec: 0.807915 INFO:tensorflow:loss = 6.64414, step = 102 INFO:tensorflow:tokens/sec: 1561.09 INFO:tensorflow:global_step/sec: 1.50103 INFO:tensorflow:loss = 3.27593, step = 202 INFO:tensorflow:tokens/sec: 2877.6 INFO:tensorflow:global_step/sec: 1.50218 INFO:tensorflow:loss = 0.971994, step = 302 INFO:tensorflow:tokens/sec: 2916.6 INFO:tensorflow:global_step/sec: 1.48937 INFO:tensorflow:loss = 0.387597, step = 402 INFO:tensorflow:tokens/sec: 2862.53 INFO:tensorflow:global_step/sec: 1.48311 INFO:tensorflow:loss = 0.227224, step = 502 INFO:tensorflow:tokens/sec: 2889.24 INFO:tensorflow:global_step/sec: 1.47555 INFO:tensorflow:loss = 0.238307, step = 602 INFO:tensorflow:tokens/sec: 2818.91 INFO:tensorflow:global_step/sec: 1.47784 INFO:tensorflow:loss = 0.0963268, step = 702 INFO:tensorflow:tokens/sec: 2858.41 INFO:tensorflow:global_step/sec: 1.46893 INFO:tensorflow:loss = 0.157202, step = 802 INFO:tensorflow:tokens/sec: 2909.63 INFO:tensorflow:Saving checkpoints for 803 into /tmp/nmt_tutorial/model.ckpt. 
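Since the run config above sets '_save_summary_steps': 100 and the model directory is /tmp/nmt_tutorial, the scalar values logged here (loss, tokens/sec) should also be written as summaries there and can be followed while training runs with:

tensorboard --logdir /tmp/nmt_tutorial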
INFO:tensorflow:global_step/sec: 1.36358 INFO:tensorflow:loss = 0.144173, step = 902 INFO:tensorflow:tokens/sec: 2710.58 INFO:tensorflow:Creating AttentionSeq2Seq in mode=eval INFO:tensorflow: AttentionSeq2Seq: !!python/unicode 'attention.class': !!python/unicode 'seq2seq.decoders.attention.AttentionLayerDot' !!python/unicode 'attention.params': {num_units: 128} !!python/unicode 'bridge.class': !!python/unicode 'seq2seq.models.bridges.ZeroBridge' !!python/unicode 'bridge.params': {} !!python/unicode 'decoder.class': !!python/unicode 'seq2seq.decoders.AttentionDecoder' !!python/unicode 'decoder.params': rnn_cell: cell_class: GRUCell cell_params: {num_units: 128} dropout_input_keep_prob: 0.8 dropout_output_keep_prob: 1.0 num_layers: 1 !!python/unicode 'embedding.dim': 128 !!python/unicode 'embedding.share': false !!python/unicode 'encoder.class': !!python/unicode 'seq2seq.encoders.BidirectionalRNNEncoder' !!python/unicode 'encoder.params': rnn_cell: cell_class: GRUCell cell_params: {num_units: 128} dropout_input_keep_prob: 0.8 dropout_output_keep_prob: 1.0 num_layers: 1 !!python/unicode 'inference.beam_search.beam_width': 0 !!python/unicode 'inference.beam_search.choose_successors_fn': !!python/unicode 'choose_top_k' !!python/unicode 'inference.beam_search.length_penalty_weight': 0.0 !!python/unicode 'optimizer.clip_gradients': 5.0 !!python/unicode 'optimizer.learning_rate': 0.0001 !!python/unicode 'optimizer.lr_decay_rate': 0.99 !!python/unicode 'optimizer.lr_decay_steps': 100 !!python/unicode 'optimizer.lr_decay_type': !!python/unicode '' !!python/unicode 'optimizer.lr_min_learning_rate': 1.0e-12 !!python/unicode 'optimizer.lr_staircase': false !!python/unicode 'optimizer.lr_start_decay_at': 0 !!python/unicode 'optimizer.lr_stop_decay_at': 1000000000.0 !!python/unicode 'optimizer.name': !!python/unicode 'Adam' !!python/unicode 'source.max_seq_len': 50 !!python/unicode 'source.reverse': false !!python/unicode 'target.max_seq_len': 50 !!python/unicode 'vocab_source': !!python/unicode '/home/devendra/Desktop/Neural_MT/data/en-fr/enfr.bpe32000' !!python/unicode 'vocab_target': !!python/unicode '/home/devendra/Desktop/Neural_MT/data/en-fr/enfr.bpe32000' INFO:tensorflow:Creating vocabulary lookup table of size 32003 INFO:tensorflow:Creating vocabulary lookup table of size 32003 INFO:tensorflow:Creating BidirectionalRNNEncoder in mode=eval INFO:tensorflow: BidirectionalRNNEncoder: rnn_cell: cell_class: GRUCell cell_params: {num_units: 128} dropout_input_keep_prob: 0.8 dropout_output_keep_prob: 1.0 num_layers: 1 residual_combiner: add residual_connections: false residual_dense: false INFO:tensorflow:Creating AttentionLayerDot in mode=eval INFO:tensorflow: AttentionLayerDot: {!!python/unicode 'num_units': 128} INFO:tensorflow:Creating AttentionDecoder in mode=eval INFO:tensorflow: AttentionDecoder: !!python/unicode 'max_decode_length': 100 !!python/unicode 'rnn_cell': cell_class: GRUCell cell_params: {num_units: 128} dropout_input_keep_prob: 0.8 dropout_output_keep_prob: 1.0 num_layers: 1 residual_combiner: add residual_connections: false residual_dense: false INFO:tensorflow:Creating ZeroBridge in mode=eval INFO:tensorflow: ZeroBridge: {} INFO:tensorflow:Starting evaluation at 2017-03-23-19:06:22 I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX TITAN X, pci bus id: 0000:08:00.0) W tensorflow/core/framework/op_kernel.cc:993] Out of range: Reached limit of 1 [[Node: parallel_read/filenames/limit_epochs/CountUpTo = 
CountUpTo[T=DT_INT64, _class=["loc:@parallel_read/filenames/limit_epochs/epochs"], limit=1, _device="/job:localhost/replica:0/task:0/cpu:0"](parallel_read/filenames/limit_epochs/epochs)]] W tensorflow/core/framework/op_kernel.cc:993] Invalid argument: logits and labels must have the same first dimension, got logits shape [1088,32003] and labels shape [1568] [[Node: model/att_seq2seq/cross_entropy_sequence_loss/SparseSoftmaxCrossEntropyWithLogits/SparseSoftmaxCrossEntropyWithLogits = SparseSoftmaxCrossEntropyWithLogits[T=DT_FLOAT, Tlabels=DT_INT64, _device="/job:localhost/replica:0/task:0/gpu:0"](model/att_seq2seq/cross_entropy_sequence_loss/SparseSoftmaxCrossEntropyWithLogits/Reshape, model/att_seq2seq/cross_entropy_sequence_loss/SparseSoftmaxCrossEntropyWithLogits/Reshape_1)]] (the same Invalid argument warning is printed several more times with identical shapes before the run aborts) Traceback (most recent call last): File "/home/devendra/anaconda2/lib/python2.7/runpy.py", line 174, in _run_module_as_main "__main__", fname, loader, pkg_name) File "/home/devendra/anaconda2/lib/python2.7/runpy.py", line 72, in _run_code exec code in run_globals File "/home/devendra/Desktop/Neural_MT/seq2seq/bin/train.py", line 263, in tf.app.run() File "/home/devendra/.local/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 44, in run _sys.exit(main(_sys.argv[:1] + flags_passthrough)) File "/home/devendra/Desktop/Neural_MT/seq2seq/bin/train.py", line 258, in main schedule=FLAGS.schedule) File "/home/devendra/.local/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/learn_runner.py", line 106, in run return task() File "/home/devendra/.local/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/experiment.py", line 459, in 
train_and_evaluate self.train(delay_secs=0) File "/home/devendra/.local/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/experiment.py", line 281, in train monitors=self._train_monitors + extra_hooks) File "/home/devendra/.local/lib/python2.7/site-packages/tensorflow/python/util/deprecation.py", line 280, in new_func return func(*args, **kwargs) File "/home/devendra/.local/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 426, in fit loss = self._train_model(input_fn=input_fn, hooks=hooks) File "/home/devendra/.local/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 984, in _train_model _, loss = mon_sess.run([model_fn_ops.train_op, model_fn_ops.loss]) File "/home/devendra/.local/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 462, in run run_metadata=run_metadata) File "/home/devendra/.local/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 786, in run run_metadata=run_metadata) File "/home/devendra/.local/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 744, in run return self._sess.run(*args, **kwargs) File "/home/devendra/.local/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 899, in run run_metadata=run_metadata)) File "/home/devendra/.local/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/monitors.py", line 1157, in after_run induce_stop = m.step_end(self._last_step, result) File "/home/devendra/.local/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/monitors.py", line 356, in step_end return self.every_n_step_end(step, output) File "/home/devendra/.local/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/monitors.py", line 657, in every_n_step_end steps=self.eval_steps, metrics=self.metrics, name=self.name) File "/home/devendra/.local/lib/python2.7/site-packages/tensorflow/python/util/deprecation.py", line 280, in new_func return func(*args, **kwargs) File "/home/devendra/.local/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 514, in evaluate log_progress=log_progress) File "/home/devendra/.local/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 836, in _evaluate_model hooks=hooks) File "/home/devendra/.local/lib/python2.7/site-packages/tensorflow/contrib/training/python/training/evaluation.py", line 430, in evaluate_once session.run(eval_ops, feed_dict) File "/home/devendra/.local/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 462, in run run_metadata=run_metadata) File "/home/devendra/.local/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 786, in run run_metadata=run_metadata) File "/home/devendra/.local/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 744, in run return self._sess.run(*args, **kwargs) File "/home/devendra/.local/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 891, in run run_metadata=run_metadata) File "/home/devendra/.local/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 744, in run return self._sess.run(*args, **kwargs) File "/home/devendra/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 767, in run run_metadata_ptr) File 
"/home/devendra/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 965, in _run feed_dict_string, options, run_metadata) File "/home/devendra/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1015, in _do_run target_list, options, run_metadata) File "/home/devendra/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1035, in _do_call raise type(e)(node_def, op, message) tensorflow.python.framework.errors_impl.InvalidArgumentError: logits and labels must have the same first dimension, got logits shape [1088,32003] and labels shape [1568] [[Node: model/att_seq2seq/cross_entropy_sequence_loss/SparseSoftmaxCrossEntropyWithLogits/SparseSoftmaxCrossEntropyWithLogits = SparseSoftmaxCrossEntropyWithLogits[T=DT_FLOAT, Tlabels=DT_INT64, _device="/job:localhost/replica:0/task:0/gpu:0"](model/att_seq2seq/cross_entropy_sequence_loss/SparseSoftmaxCrossEntropyWithLogits/Reshape, model/att_seq2seq/cross_entropy_sequence_loss/SparseSoftmaxCrossEntropyWithLogits/Reshape_1)]] Caused by op u'model/att_seq2seq/cross_entropy_sequence_loss/SparseSoftmaxCrossEntropyWithLogits/SparseSoftmaxCrossEntropyWithLogits', defined at: File "/home/devendra/anaconda2/lib/python2.7/runpy.py", line 174, in _run_module_as_main "__main__", fname, loader, pkg_name) File "/home/devendra/anaconda2/lib/python2.7/runpy.py", line 72, in _run_code exec code in run_globals File "/home/devendra/Desktop/Neural_MT/seq2seq/bin/train.py", line 263, in tf.app.run() File "/home/devendra/.local/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 44, in run _sys.exit(main(_sys.argv[:1] + flags_passthrough)) File "/home/devendra/Desktop/Neural_MT/seq2seq/bin/train.py", line 258, in main schedule=FLAGS.schedule) File "/home/devendra/.local/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/learn_runner.py", line 106, in run return task() File "/home/devendra/.local/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/experiment.py", line 459, in train_and_evaluate self.train(delay_secs=0) File "/home/devendra/.local/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/experiment.py", line 281, in train monitors=self._train_monitors + extra_hooks) File "/home/devendra/.local/lib/python2.7/site-packages/tensorflow/python/util/deprecation.py", line 280, in new_func return func(*args, **kwargs) File "/home/devendra/.local/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 426, in fit loss = self._train_model(input_fn=input_fn, hooks=hooks) File "/home/devendra/.local/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 984, in _train_model _, loss = mon_sess.run([model_fn_ops.train_op, model_fn_ops.loss]) File "/home/devendra/.local/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 462, in run run_metadata=run_metadata) File "/home/devendra/.local/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 786, in run run_metadata=run_metadata) File "/home/devendra/.local/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 744, in run return self._sess.run(*args, **kwargs) File "/home/devendra/.local/lib/python2.7/site-packages/tensorflow/python/training/monitored_session.py", line 899, in run run_metadata=run_metadata)) File "/home/devendra/.local/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/monitors.py", line 1157, in 
after_run induce_stop = m.step_end(self._last_step, result) File "/home/devendra/.local/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/monitors.py", line 356, in step_end return self.every_n_step_end(step, output) File "/home/devendra/.local/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/monitors.py", line 657, in every_n_step_end steps=self.eval_steps, metrics=self.metrics, name=self.name) File "/home/devendra/.local/lib/python2.7/site-packages/tensorflow/python/util/deprecation.py", line 280, in new_func return func(*args, **kwargs) File "/home/devendra/.local/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 514, in evaluate log_progress=log_progress) File "/home/devendra/.local/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 810, in _evaluate_model eval_ops = self._get_eval_ops(features, labels, metrics) File "/home/devendra/.local/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 1190, in _get_eval_ops features, labels, model_fn_lib.ModeKeys.EVAL) File "/home/devendra/.local/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 1133, in _call_model_fn model_fn_results = self._model_fn(features, labels, **kwargs) File "/home/devendra/Desktop/Neural_MT/seq2seq/bin/train.py", line 169, in model_fn return model(features, labels, params) File "seq2seq/models/model_base.py", line 112, in __call__ return self._build(features, labels, params) File "seq2seq/models/seq2seq_model.py", line 270, in _build losses, loss = self.compute_loss(decoder_output, features, labels) File "seq2seq/models/seq2seq_model.py", line 249, in compute_loss sequence_length=labels["target_len"] - 1) File "seq2seq/losses.py", line 39, in cross_entropy_sequence_loss logits=logits, labels=targets) File "/home/devendra/.local/lib/python2.7/site-packages/tensorflow/python/ops/nn_ops.py", line 1727, in sparse_softmax_cross_entropy_with_logits precise_logits, labels, name=name) File "/home/devendra/.local/lib/python2.7/site-packages/tensorflow/python/ops/gen_nn_ops.py", line 2378, in _sparse_softmax_cross_entropy_with_logits features=features, labels=labels, name=name) File "/home/devendra/.local/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 763, in apply_op op_def=op_def) File "/home/devendra/.local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2327, in create_op original_op=self._default_original_op, op_def=op_def) File "/home/devendra/.local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1226, in __init__ self._traceback = _extract_stack() InvalidArgumentError (see above for traceback): logits and labels must have the same first dimension, got logits shape [1088,32003] and labels shape [1568] [[Node: model/att_seq2seq/cross_entropy_sequence_loss/SparseSoftmaxCrossEntropyWithLogits/SparseSoftmaxCrossEntropyWithLogits = SparseSoftmaxCrossEntropyWithLogits[T=DT_FLOAT, Tlabels=DT_INT64, _device="/job:localhost/replica:0/task:0/gpu:0"](model/att_seq2seq/cross_entropy_sequence_loss/SparseSoftmaxCrossEntropyWithLogits/Reshape, model/att_seq2seq/cross_entropy_sequence_loss/SparseSoftmaxCrossEntropyWithLogits/Reshape_1)]]
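For context on the final error: tf.nn.sparse_softmax_cross_entropy_with_logits expects logits of shape [N, num_classes] and labels of shape [N], so the message says the flattened decoder logits (1088 rows) and the flattened targets (1568 rows) disagree on sequence length for this evaluation batch. A minimal sketch reproducing the same runtime failure outside the seq2seq code (the two shapes are copied from the log; everything else here is hypothetical):

import numpy as np
import tensorflow as tf

# Logits must have exactly one row per label; feeding a different number of
# label rows triggers the same InvalidArgumentError seen in the log above.
logits = tf.placeholder(tf.float32, shape=[None, 32003])
labels = tf.placeholder(tf.int64, shape=[None])
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits)

with tf.Session() as sess:
    sess.run(loss, feed_dict={logits: np.zeros((1088, 32003), np.float32),
                              labels: np.zeros((1568,), np.int64)})
    # -> InvalidArgumentError: logits and labels must have the same first dimension

The sketch only shows what the check means; the log itself does not show where the evaluation graph's decoded length and target length diverge.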