When the model is downloaded from S3, it is stored in the default cache directory, <user_home>/.cache/transformers/, instead of ./cache, as specified by the --cache_dir argument. It seems the --cache_dir argument isn't passed to the .from_pretrained() calls at lines 472, 473, and 477 of the run_lm_finetuning.py script.
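A minimal sketch of the kind of fix this implies: forward the parsed --cache_dir to each .from_pretrained() call instead of dropping it. The class below is a stand-in that only records the keyword argument, not the real transformers API; the argparse flag name mirrors the script's --cache_dir option.

```python
import argparse

# Stand-in for a transformers model/config/tokenizer class; the real
# .from_pretrained() downloads weights and honors cache_dir when given.
class FakePretrained:
    @classmethod
    def from_pretrained(cls, name, cache_dir=None, **kwargs):
        obj = cls()
        obj.cache_dir = cache_dir  # record where the download would be cached
        return obj

parser = argparse.ArgumentParser()
parser.add_argument("--cache_dir", default="", type=str,
                    help="Where to store pre-trained models downloaded from S3")
args = parser.parse_args(["--cache_dir", "./cache"])

# The reported bug is that cache_dir was omitted here; passing it explicitly
# makes the download land in ./cache instead of the default cache directory.
model = FakePretrained.from_pretrained(
    "gpt2",
    cache_dir=args.cache_dir if args.cache_dir else None,
)
print(model.cache_dir)
```

With the keyword supplied, the stand-in records "./cache"; without it, it falls back to None, which is analogous to the default <user_home>/.cache/transformers/ behavior being reported.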
Environment
OS: Ubuntu 18.04
Python version: 3.6.6
PyTorch version: 1.3
PyTorch Transformers version (or branch): 2.1.1
Using GPU? Yes
Distributed or parallel setup? No
Any other relevant information:
Additional context
mpavlovic changed the title from "cache_dir argument in run_lm_finetuning.py not used at all" to "--cache_dir argument in run_lm_finetuning.py not used at all" on Oct 24, 2019
🐛 Bug
Model I am using (Bert, XLNet, ...): GPT-2
Language I am using the model on (English, Chinese, ...): English
The problem arises when using:
The task I am working on is:
To Reproduce
Steps to reproduce the behavior:
Expected behavior
The model should be downloaded to ./cache, as specified by the --cache_dir argument, instead of the default <user_home>/.cache/transformers/ directory.