Pretraining with my own dataset #44
Yes, you can refer to /pretrain/README.md for the complete command, which is:

```shell
$ cd /path/to/SparK/pretrain
$ torchrun --nproc_per_node=8 --nnodes=1 --node_rank=0 --master_addr=localhost --master_port=<some_port> main.py \
  --data_path=/path/to/imagenet --exp_name=<your_exp_name> --exp_dir=/path/to/logdir \
  --model=resnet50 --bs=512
```

The first line is missing, e.g. …
Thank you very much. In the article, it says that 'All models are pre-trained with 1.28 million unlabeled images…'
You need to define a new Python class for your dataset, to replace our ImageNetDataset in https://github.com/keyu-tian/SparK/blob/main/pretrain/utils/imagenet.py#L30. Just define a class with `__len__` and `__getitem__` implemented. PS: I recommend trying your pretraining with or without …
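A minimal sketch of such a replacement class, assuming a flat folder of images and that only the image tensor is needed for pretraining (the class name, folder layout, and file extensions here are illustrative, not from the repo):

```python
# Hypothetical replacement for SparK's ImageNetDataset
# (pretrain/utils/imagenet.py): a plain torch Dataset over a
# flat folder of images; labels are unused during pretraining.
import os
from typing import Any, Callable, Optional

from PIL import Image
from torch.utils.data import Dataset


class MyPretrainDataset(Dataset):
    """Loads every .jpg/.jpeg/.png file found directly under `root`."""

    def __init__(self, root: str, transform: Optional[Callable] = None):
        self.paths = [
            os.path.join(root, f)
            for f in sorted(os.listdir(root))
            if f.lower().endswith((".jpg", ".jpeg", ".png"))
        ]
        self.transform = transform

    def __len__(self) -> int:
        return len(self.paths)

    def __getitem__(self, idx: int) -> Any:
        # Decode as RGB so grayscale files don't break the model's 3-channel input.
        img = Image.open(self.paths[idx]).convert("RGB")
        if self.transform is not None:
            img = self.transform(img)
        return img
```

You would then construct this class in place of ImageNetDataset, passing the same `transform` pipeline the pretraining script builds.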
Hi, thanks for your work. Whenever I try to pretrain with my own dataset, the following error occurs:

```shell
torchrun --data_path=/home/user/augdata --exp_name=ptaugdata --exp_dir=/home/user/models
```

```
usage: torchrun [-h] [--nnodes NNODES] [--nproc-per-node NPROC_PER_NODE] [--rdzv-backend RDZV_BACKEND] [--rdzv-endpoint RDZV_ENDPOINT] [--rdzv-id RDZV_ID] [--rdzv-conf RDZV_CONF] [--standalone]
                [--max-restarts MAX_RESTARTS] [--monitor-interval MONITOR_INTERVAL] [--start-method {spawn,fork,forkserver}] [--role ROLE] [-m] [--no-python] [--run-path] [--log-dir LOG_DIR]
                [-r REDIRECTS] [-t TEE] [--node-rank NODE_RANK] [--master-addr MASTER_ADDR] [--master-port MASTER_PORT] [--local-addr LOCAL_ADDR]
                training_script ...
torchrun: error: the following arguments are required: training_script, training_script_args
```
Do I need to specify `training_script` and `training_script_args`?
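Yes — the error means torchrun never received a training script: everything after torchrun's own flags is treated as the script and its arguments, so `main.py` must appear before the SparK options. A sketch of a corrected invocation, assuming a single node (the GPU count, port, and model choice below are placeholders to adjust for your setup):

```shell
cd /path/to/SparK/pretrain
torchrun --nproc_per_node=8 --nnodes=1 --node_rank=0 \
  --master_addr=localhost --master_port=29500 main.py \
  --data_path=/home/user/augdata --exp_name=ptaugdata --exp_dir=/home/user/models \
  --model=resnet50 --bs=512
```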