Issues in Running LIT #13
Comments
The system cannot find the path specified: '/tmp/lit_data'
I created the directory '/tmp/lit_data' and that fixed it. (On Windows: c:\tmp\lit_data)
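For reference, a minimal sketch of that workaround (this assumes the demo is launched from the C: drive, so the drive-relative path '/tmp/lit_data' resolves to C:\tmp\lit_data):

mkdir C:\tmp\lit_data
python -m lit_nlp.examples.quickstart_sst_demo --port=5432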
Sorry! That path is used to save the predictions cache between runs, but you can disable it with the flag --data-dir="" (see https://github.com/PAIR-code/lit/blob/main/lit_nlp/server_flags.py#L43).
FYI: we haven't tested LIT on Windows at all, so we can't guarantee that other issues won't pop up here.
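For example, the demos above could be launched with the cache directory disabled (a sketch based on the flag mentioned above; everything else in the commands is unchanged):

python -m lit_nlp.examples.quickstart_sst_demo --port=5432 --data-dir=""
python -m lit_nlp.examples.pretrained_lm_demo --models=bert-base-uncased --port=5432 --data-dir=""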
It was not easy to install, but it's running now.
I had to downgrade TensorFlow to 2.0 because of an error loading the "absl-py" module.
Hi there, I have resolved that issue, but there is no index file present. Kindly resolve the issue.
(lit-nlp) C:~\lit>python -m lit_nlp.examples.quickstart_sst_demo --port=5432
(lit-nlp) C:~\lit>python -m lit_nlp.examples.pretrained_lm_demo --models=bert-base-uncased --port=5432
I0824 10:27:31.845113 20960 modeling_tf_utils.py:258] loading weights file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-tf_model.h5 from cache at C:\Users\SB00790107.cache\torch\transformers\d667df51ec24c20190f01fb4c20a21debc4c4fc12f7e2f5441ac0a99690e3ee9.4733ec82e81d40e9cf5fd04556267d8958fb150e9339390fc64206b7e5a79c83.h5
I0824 10:27:36.732645 20960 dev_server.py:80] Starting LIT server...
Starting Server on port 5432
I0824 10:27:36.735637 20960 _internal.py:122] * Running on http://127.0.0.1:5432/ (Press CTRL+C to quit)
Have you run the steps to build the front-end, i.e. the "yarn" and "yarn build" commands in the client directory? And did they succeed?
My yarn is running but it didn't show any interface:
(lit-nlp) C:~\lit>yarn && yarn build
It looks like you're running from the root directory; can you try running yarn from the client directory instead?
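In case it helps, a sketch of the front-end build run from the client directory rather than the repo root (the path is a placeholder; use whichever directory in your checkout contains the front-end's package.json):

cd <path-to-lit-client-directory>
yarn && yarn build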
Hello,
I1207 23:10:53.014894 11092 caching.py:226] CachingModelWrapper 'bert-base-uncased': 1000 misses out of 1000 inputs
Traceback (most recent call last):
What version of transformers are you using?
Yes, that was it. I just updated transformers to 2.11.0 and it works now. Thanks.
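For anyone landing here with the same traceback, a sketch of that fix inside the lit-nlp environment (the version pin is taken from the comment above):

pip install transformers==2.11.0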
Hi There,
I am trying to run LIT Quick-start: sentiment classifier
cd ~/lit
python -m lit_nlp.examples.quickstart_sst_demo --port=5432
The output is:
(lit-nlp) C:~\lit>python -m lit_nlp.examples.quickstart_sst_demo --port=5432
2020-08-20 14:37:27.651045: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
I0820 14:37:27.670744 33968 quickstart_sst_demo.py:47] Working directory: C:\Users\SB0079~1\AppData\Local\Temp\tmp2582r1b0
W0820 14:37:27.926524 33968 dataset_builder.py:575] Found a different version 1.0.0 of dataset glue in data_dir C:\Users\SB00790107\tensorflow_datasets. Using currently defined version 0.0.2.
I0820 14:37:27.926524 33968 dataset_builder.py:184] Overwrite dataset info from restored data version.
I0820 14:37:27.933496 33968 dataset_builder.py:253] Reusing dataset glue (C:\Users\SB00790107\tensorflow_datasets\glue\sst2\0.0.2)
I0820 14:37:27.934466 33968 dataset_builder.py:399] Constructing tf.data.Dataset for split train, from C:\Users\SB00790107\tensorflow_datasets\glue\sst2\0.0.2
W0820 14:37:27.934466 33968 dataset_builder.py:439] Warning: Setting shuffle_files=True because split=TRAIN and shuffle_files=None. This behavior will be deprecated on 2019-08-06, at which point shuffle_files=False will be the default for all splits.
W0820 14:37:35.189518 33968 dataset_builder.py:575] Found a different version 1.0.0 of dataset glue in data_dir C:\Users\SB00790107\tensorflow_datasets. Using currently defined version 0.0.2.
I0820 14:37:35.190503 33968 dataset_builder.py:184] Overwrite dataset info from restored data version.
I0820 14:37:35.192508 33968 dataset_builder.py:253] Reusing dataset glue (C:\Users\SB00790107\tensorflow_datasets\glue\sst2\0.0.2)
I0820 14:37:35.192508 33968 dataset_builder.py:399] Constructing tf.data.Dataset for split validation, from C:\Users\SB00790107\tensorflow_datasets\glue\sst2\0.0.2
I0820 14:37:35.302182 33968 tokenization_utils.py:306] Model name 'google/bert_uncased_L-2_H-128_A-2' not found in model shortcut name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased). Assuming 'google/bert_uncased_L-2_H-128_A-2' is a path or url to a directory containing tokenizer files.
I0820 14:37:35.302182 33968 tokenization_utils.py:317] Didn't find file google/bert_uncased_L-2_H-128_A-2. We won't load it.
I0820 14:37:35.303180 33968 tokenization_utils.py:335] Didn't find file google/bert_uncased_L-2_H-128_A-2\added_tokens.json. We won't load it.
I0820 14:37:35.303180 33968 tokenization_utils.py:335] Didn't find file google/bert_uncased_L-2_H-128_A-2\special_tokens_map.json. We won't load it.
I0820 14:37:35.303180 33968 tokenization_utils.py:335] Didn't find file google/bert_uncased_L-2_H-128_A-2\tokenizer_config.json. We won't load it.
Traceback (most recent call last):
File "C:\Users\SB00790107\AppData\Local\Continuum\anaconda3\envs\lit-nlp\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "C:\Users\SB00790107\AppData\Local\Continuum\anaconda3\envs\lit-nlp\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:~\lit\lit_nlp\examples\quickstart_sst_demo.py", line 60, in <module>
app.run(main)
File "C:\Users\SB00790107\AppData\Local\Continuum\anaconda3\envs\lit-nlp\lib\site-packages\absl\app.py", line 299, in run
_run_main(main, args)
File "C:\Users\SB00790107\AppData\Local\Continuum\anaconda3\envs\lit-nlp\lib\site-packages\absl\app.py", line 250, in _run_main
sys.exit(main(argv))
File "C:~\lit\lit_nlp\examples\quickstart_sst_demo.py", line 48, in main
run_finetuning(model_path)
File "C:~\lit\lit_nlp\examples\quickstart_sst_demo.py", line 40, in run_finetuning
model = glue_models.SST2Model(FLAGS.encoder_name, for_training=True)
File "C:~\lit\lit_nlp\examples\models\glue_models.py", line 319, in __init__
**kw)
File "C:~\lit\lit_nlp\examples\models\glue_models.py", line 59, in __init__
model_name_or_path)
File "C:\Users\SB00790107\AppData\Local\Continuum\anaconda3\envs\lit-nlp\lib\site-packages\transformers\tokenization_auto.py", line 109, in from_pretrained
return BertTokenizer.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
File "C:\Users\SB00790107\AppData\Local\Continuum\anaconda3\envs\lit-nlp\lib\site-packages\transformers\tokenization_utils.py", line 282, in from_pretrained
return cls._from_pretrained(*inputs, **kwargs)
File "C:\Users\SB00790107\AppData\Local\Continuum\anaconda3\envs\lit-nlp\lib\site-packages\transformers\tokenization_utils.py", line 346, in _from_pretrained
list(cls.vocab_files_names.values())))
OSError: Model name 'google/bert_uncased_L-2_H-128_A-2' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased). We assumed 'google/bert_uncased_L-2_H-128_A-2' was a path or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.
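Note: the lookup above falls back to treating the model name as a local path once it is missing from the shortcut list, which is why it ends up searching for a vocab.txt on disk. As noted elsewhere in this thread, the reporter eventually moved to transformers 2.11.0; a quick, hypothetical one-liner to check whether a given transformers install can resolve this model ID on its own (assumes the lit-nlp environment is active):

python -c "from transformers import AutoTokenizer; AutoTokenizer.from_pretrained('google/bert_uncased_L-2_H-128_A-2')"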
For running Quick-start: language modeling:
cd ~/lit
python -m lit_nlp.examples.pretrained_lm_demo --models=bert-base-uncased --port=5432
The error output is:
(lit-nlp) C:~\lit>python -m lit_nlp.examples.pretrained_lm_demo --models=bert-base-uncased --port=5432
2020-08-20 14:32:20.119230: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
I0820 14:32:20.634253 32000 tokenization_utils.py:374] loading file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt from cache at C:\Users\SB00790107.cache\torch\transformers\26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084
I0820 14:32:21.133054 32000 configuration_utils.py:151] loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-config.json from cache at C:\Users\SB00790107.cache\torch\transformers\4dad0251492946e18ac39290fcfe91b89d370fee250efe9521476438fe8ca185.7156163d5fdc189c3016baca0775ffce230789d7fa2a42ef516483e4ca884517
I0820 14:32:21.143045 32000 configuration_utils.py:168] Model config {
"architectures": [
"BertForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"finetuning_task": null,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"model_type": "bert",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"num_labels": 2,
"output_attentions": true,
"output_hidden_states": true,
"output_past": true,
"pad_token_id": 0,
"pruned_heads": {},
"torchscript": false,
"type_vocab_size": 2,
"use_bfloat16": false,
"vocab_size": 30522
}
I0820 14:32:21.576282 32000 modeling_tf_utils.py:258] loading weights file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-tf_model.h5 from cache at C:\Users\SB00790107.cache\torch\transformers\d667df51ec24c20190f01fb4c20a21debc4c4fc12f7e2f5441ac0a99690e3ee9.4733ec82e81d40e9cf5fd04556267d8958fb150e9339390fc64206b7e5a79c83.h5
W0820 14:32:24.903656 32000 dataset_builder.py:575] Found a different version 1.0.0 of dataset glue in data_dir C:\Users\SB00790107\tensorflow_datasets. Using currently defined version 0.0.2.
I0820 14:32:24.904676 32000 dataset_builder.py:187] Load pre-computed datasetinfo (eg: splits) from bucket.
I0820 14:32:25.158797 32000 dataset_info.py:410] Loading info from GCS for glue/sst2/0.0.2
I0820 14:32:26.526896 32000 dataset_builder.py:273] Generating dataset glue (C:\Users\SB00790107\tensorflow_datasets\glue\sst2\0.0.2)
Downloading and preparing dataset glue (7.09 MiB) to C:\Users\SB00790107\tensorflow_datasets\glue\sst2\0.0.2...
Dl Completed...: 0 url [00:00, ? url/s]
Dl Size...: 0 MiB [00:00, ? MiB/s]
Extraction completed...: 0 file [00:00, ? file/s]
I0820 14:32:26.530886 32000 download_manager.py:241] Downloading https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FSST-2.zip?alt=media&token=aabc5f6b-e466-44a2-b9b4-cf6337f84ac8 into C:\Users\SB00790107\tensorflow_datasets\downloads\fire.goog.com_v0_b_mtl-sent-repr.apps.cowOhVrpNUsvqdZqI70Nq3ISu63l9SOhTqYqoz6uEW3-Y.zipalt=media&token=aabc5f6b-e466-44a2-b9b4-cf6337f84ac8.tmp.6f44416196e74a44a10bca183839e172...
Dl Completed...: 0%| | 0/1 [00:00<?, ? url/s]
Dl Size...: 0 MiB [00:00, ? MiB/s]
Extraction completed...: 0 file [00:00, ? file/s]
C:\Users\SB00790107\AppData\Local\Continuum\anaconda3\envs\lit-nlp\lib\site-packages\urllib3\connectionpool.py:988: InsecureRequestWarning: Unverified HTTPS request is being made to host 'firebasestorage.googleapis.com'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
InsecureRequestWarning,
Dl Completed...: 0%| | 0/1 [00:00<?, ? url/s]
Dl Size...: 0%| | 0/7 [00:00<?, ? MiB/s]
Extraction completed...: 0 file [00:00, ? file/s]
Dl Completed...: 0%| | 0/1 [00:01<?, ? url/s]
Dl Size...: 14%|███████████████████▎ | 1/7 [00:01<00:06, 1.10s/ MiB]
Extraction completed...: 0 file [00:01, ? file/s]
Dl Completed...: 0%| | 0/1 [00:01<?, ? url/s]
Dl Size...: 29%|██████████████████████████████████████▌ | 2/7 [00:01<00:04, 1.19 MiB/s]
Dl Completed...: 0%| | 0/1 [00:01<?, ? url/s]
Dl Size...: 43%|█████████████████████████████████████████████████████████▊ | 3/7 [00:01<00:03, 1.19 MiB/s]
Dl Completed...: 0%| | 0/1 [00:01<?, ? url/s]
Dl Size...: 57%|█████████████████████████████████████████████████████████████████████████████▏ | 4/7 [00:01<00:02, 1.19 MiB/s]
Dl Completed...: 0%| | 0/1 [00:01<?, ? url/s]
Dl Size...: 71%|████████████████████████████████████████████████████████████████████████████████████████████████▍ | 5/7 [00:01<00:01, 1.19 MiB/s]
Dl Completed...: 0%| | 0/1 [00:01<?, ? url/s]
Dl Size...: 86%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████▋ | 6/7 [00:01<00:00, 1.19 MiB/s]
Dl Completed...: 0%| | 0/1 [00:01<?, ? url/s]
Dl Size...: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7/7 [00:01<00:00, 1.19 MiB/s]
Dl Completed...: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:01<00:00, 1.40s/ url]
Dl Size...: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7/7 [00:01<00:00, 1.19 MiB/s]
Dl Completed...: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:01<00:00, 1.40s/ url]
Dl Size...: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7/7 [00:01<00:00, 1.19 MiB/s]
Extraction completed...: 0%| | 0/1 [00:01<?, ? file/s]
Dl Completed...: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:01<00:00, 1.40s/ url]
Dl Size...: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7/7 [00:01<00:00, 1.19 MiB/s]
Extraction completed...: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:01<00:00, 1.74s/ file]
Extraction completed...: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:01<00:00, 1.74s/ file]
Dl Size...: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7/7 [00:01<00:00, 4.02 MiB/s]
Dl Completed...: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:01<00:00, 1.74s/ url]
I0820 14:32:28.270815 32000 dataset_builder.py:812] Generating split train
I0820 14:32:28.270815 32000 file_format_adapter.py:233] Writing TFRecords
Shuffling...: 0%| | 0/1 [00:00<?, ? shard/s]
WARNING:tensorflow:From C:\Users\SB00790107\AppData\Local\Continuum\anaconda3\envs\lit-nlp\lib\site-packages\tensorflow_datasets\core\file_format_adapter.py:209: tf_record_iterator (from tensorflow.python.lib.io.tf_record) is deprecated and will be removed in a future version.
Instructions for updating:
Use eager execution and:
tf.data.TFRecordDataset(path)
W0820 14:32:39.338444 32000 deprecation.py:323] From C:\Users\SB00790107\AppData\Local\Continuum\anaconda3\envs\lit-nlp\lib\site-packages\tensorflow_datasets\core\file_format_adapter.py:209: tf_record_iterator (from tensorflow.python.lib.io.tf_record) is deprecated and will be removed in a future version.
Instructions for updating:
Use eager execution and:
tf.data.TFRecordDataset(path)
Reading...: 0 examples [00:00, ? examples/s]
Reading...: 64184 examples [00:00, 637222.07 examples/s]
Writing...: 0%| | 0/67349 [00:00<?, ? examples/s]
Writing...: 15%|█████████████████▊ | 9980/67349 [00:00<00:00, 99082.42 examples/s]
Writing...: 30%|███████████████████████████████████▌ | 20094/67349 [00:00<00:00, 99477.68 examples/s]
Writing...: 45%|█████████████████████████████████████████████████████▎ | 30195/67349 [00:00<00:00, 99709.61 examples/s]
Writing...: 60%|██████████████████████████████████████████████████████████████████████▊ | 40401/67349 [00:00<00:00, 100188.93 examples/s]
Writing...: 75%|████████████████████████████████████████████████████████████████████████████████████████▋ | 50623/67349 [00:00<00:00, 100574.12 examples/s]
Writing...: 90%|██████████████████████████████████████████████████████████████████████████████████████████████████████████▍ | 60780/67349 [00:00<00:00, 100664.57 examples/s]
I0820 14:32:40.169348 32000 dataset_builder.py:812] Generating split validation
I0820 14:32:40.170345 32000 file_format_adapter.py:233] Writing TFRecords
Shuffling...: 0%| | 0/1 [00:00<?, ? shard/s]
Reading...: 0 examples [00:00, ? examples/s]
Writing...: 0%| | 0/872 [00:00<?, ? examples/s]
I0820 14:32:40.370083 32000 dataset_builder.py:812] Generating split test
I0820 14:32:40.373092 32000 file_format_adapter.py:233] Writing TFRecords
Shuffling...: 0%| | 0/1 [00:00<?, ? shard/s]
Reading...: 0 examples [00:00, ? examples/s]
Writing...: 0%| | 0/1821 [00:00<?, ? examples/s]
I0820 14:32:40.717523 32000 dataset_builder.py:301] Skipping computing stats for mode ComputeStatsMode.AUTO.
Dataset glue downloaded and prepared to C:\Users\SB00790107\tensorflow_datasets\glue\sst2\0.0.2. Subsequent calls will reuse this data.
I0820 14:32:40.735554 32000 dataset_builder.py:399] Constructing tf.data.Dataset for split validation, from C:\Users\SB00790107\tensorflow_datasets\glue\sst2\0.0.2
I0820 14:32:41.163142 32000 dataset_builder.py:675] No config specified, defaulting to first: imdb_reviews/plain_text
I0820 14:32:41.164139 32000 dataset_builder.py:187] Load pre-computed datasetinfo (eg: splits) from bucket.
I0820 14:32:41.407350 32000 dataset_info.py:410] Loading info from GCS for imdb_reviews/plain_text/0.1.0
I0820 14:32:42.439117 32000 dataset_builder.py:273] Generating dataset imdb_reviews (C:\Users\SB00790107\tensorflow_datasets\imdb_reviews\plain_text\0.1.0)
Downloading and preparing dataset imdb_reviews (80.23 MiB) to C:\Users\SB00790107\tensorflow_datasets\imdb_reviews\plain_text\0.1.0...
Dl Completed...: 0 url [00:00, ? url/s]
Dl Size...: 0 MiB [00:00, ? MiB/s]
I0820 14:32:42.443107 32000 download_manager.py:241] Downloading http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz into C:\Users\SB00790107\tensorflow_datasets\downloads\ai.stanfor.edu_amaas_sentime_aclImdb_v1PaujRp-TxjBWz59jHXsMDm5WiexbxzaFQkEnXc3Tvo8.tar.gz.tmp.69c9ef3d01b84444a160e5ba3160fb45...
Dl Completed...: 0%| | 0/1 [00:00<?, ? url/s]
Dl Completed...: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:09<00:00, 9.60s/ url]
Dl Size...: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 80/80 [00:09<00:00, 8.34 MiB/s]
I0820 14:32:52.047359 32000 dataset_builder.py:812] Generating split train
I0820 14:32:52.050351 32000 file_format_adapter.py:233] Writing TFRecords
Shuffling...: 80%|████████████████████████████████████████████████████████████████████████████████████████████████████████ | 8/10 [00:00<00:00, 14.14 shard/s]
Reading...: 0 examples [00:00, ? examples/s]
Writing...: 0%| | 0/2500 [00:00<?, ? examples/s]
I0820 14:33:03.785629 32000 dataset_builder.py:812] Generating split test
I0820 14:33:03.788612 32000 file_format_adapter.py:233] Writing TFRecords
Shuffling...: 80%|████████████████████████████████████████████████████████████████████████████████████████████████████████ | 8/10 [00:00<00:00, 14.05 shard/s]
Reading...: 0 examples [00:00, ? examples/s]
Writing...: 0%| | 0/2500 [00:00<?, ? examples/s]
I0820 14:33:15.452958 32000 dataset_builder.py:812] Generating split unsupervised
I0820 14:33:15.457943 32000 file_format_adapter.py:233] Writing TFRecords
Shuffling...: 90%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████ | 18/20 [00:01<00:00, 13.34 shard/s]
Reading...: 0 examples [00:00, ? examples/s]
Writing...: 0%| | 0/2500 [00:00<?, ? examples/s]
I0820 14:33:32.041668 32000 dataset_builder.py:301] Skipping computing stats for mode ComputeStatsMode.AUTO.
Dataset imdb_reviews downloaded and prepared to C:\Users\SB00790107\tensorflow_datasets\imdb_reviews\plain_text\0.1.0. Subsequent calls will reuse this data.
I0820 14:33:32.053635 32000 dataset_builder.py:399] Constructing tf.data.Dataset for split test, from C:\Users\SB00790107\tensorflow_datasets\imdb_reviews\plain_text\0.1.0
I0820 14:33:34.528547 32000 pretrained_lm_demo.py:92] Dataset: 'sst_dev' with 872 examples
I0820 14:33:34.536590 32000 pretrained_lm_demo.py:92] Dataset: 'imdb_train' with 25000 examples
I0820 14:33:34.536590 32000 pretrained_lm_demo.py:92] Dataset: 'blank' with 0 examples
I0820 14:33:34.536590 32000 dev_server.py:79]
[LIT ASCII-art banner]
I0820 14:33:34.536590 32000 dev_server.py:80] Starting LIT server...
Traceback (most recent call last):
File "C:\Users\SB00790107\AppData\Local\Continuum\anaconda3\envs\lit-nlp\lib\runpy.py", line 193, in _run_module_as_main
"main", mod_spec)
File "C:\Users\SB00790107\AppData\Local\Continuum\anaconda3\envs\lit-nlp\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:~\lit\lit_nlp\examples\pretrained_lm_demo.py", line 102, in
app.run(main)
File "C:\Users\SB00790107\AppData\Local\Continuum\anaconda3\envs\lit-nlp\lib\site-packages\absl\app.py", line 299, in run
_run_main(main, args)
File "C:\Users\SB00790107\AppData\Local\Continuum\anaconda3\envs\lit-nlp\lib\site-packages\absl\app.py", line 250, in _run_main
sys.exit(main(argv))
File "C:~\lit\lit_nlp\examples\pretrained_lm_demo.py", line 98, in main
lit_demo.serve()
File "C:~\lit\lit_nlp\dev_server.py", line 81, in serve
app = lit_app.LitApp(*self._app_args, **self._app_kw)
File "C:~\lit\lit_nlp\app.py", line 293, in init
os.mkdir(data_dir)
FileNotFoundError: [WinError 3] The system cannot find the path specified: '/tmp/lit_data'
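
For reference, the crash comes from the last frame: `app.py` calls `os.mkdir(data_dir)` with the Unix-style default `'/tmp/lit_data'`, and `os.mkdir` only creates the final path component, so on Windows it raises WinError 3 when the parent directory (`\tmp` on the current drive, e.g. `C:\tmp`) does not already exist. Below is a minimal workaround sketch, assuming the default `data_dir` is left unchanged: pre-create the full path once before launching any of the demos. The `--data_dir` flag mentioned in the comments is an assumption about the server flags and may not be available in this release.

```python
import pathlib

# Workaround sketch (assumption: LIT keeps its default data_dir of '/tmp/lit_data').
# On Windows, '/tmp/lit_data' resolves against the current drive (e.g. C:\tmp\lit_data);
# os.mkdir() fails with WinError 3 because the parent C:\tmp does not exist yet.
# Creating the whole path up front avoids the crash without modifying LIT.
data_dir = pathlib.Path("/tmp/lit_data")
data_dir.mkdir(parents=True, exist_ok=True)
print("Created", data_dir.resolve())

# Then re-run the demo, e.g.:
#   python -m lit_nlp.examples.pretrained_lm_demo --port=5432
# If your LIT build exposes a --data_dir flag (unverified for this version),
# pointing it at an existing Windows path should work as well:
#   python -m lit_nlp.examples.pretrained_lm_demo --data_dir=C:\tmp\lit_data --port=5432
```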