(dl_model) steeve@hulk:~/Git/Practical-Deep-Learning-at-Scale-with-MLFlow/chapter01$ python first_dl.py
### download IMDb data to local folder
/home/steeve/Anaconda3/envs/dl_model/lib/python3.8/site-packages/urllib3/connectionpool.py:1045: InsecureRequestWarning: Unverified HTTPS request is being made to host 'pl-flash-data.s3.amazonaws.com'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings
  warnings.warn(
Using custom data configuration default-30f0d85b35c12b3d
Downloading and preparing dataset csv/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/steeve/.cache/huggingface/datasets/csv/default-30f0d85b35c12b3d/0.0.0/e138af468cb14e747fb46a19c787ffcfa5170c821476d20d5304287ce12bbc23...
Dataset csv downloaded and prepared to /home/steeve/.cache/huggingface/datasets/csv/default-30f0d85b35c12b3d/0.0.0/e138af468cb14e747fb46a19c787ffcfa5170c821476d20d5304287ce12bbc23. Subsequent calls will reuse this data.
Parameter 'function'=functools.partial(<function …>, {'negative': 0, 'positive': 1}, 'sentiment') of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.
100%|██████████| 22500/22500 [00:01<00:00, 19285.39ex/s]
/home/steeve/Anaconda3/envs/dl_model/lib/python3.8/site-packages/flash/text/classification/data.py:170: FutureWarning: rename_column_ is deprecated and will be removed in the next major version of datasets. Use DatasetDict.rename_column instead.
  dataset_dict.rename_column_(target, "labels")
100%|██████████| 23/23 [00:03<00:00, 6.62ba/s]
Using custom data configuration default-434f86cfa4832864
Downloading and preparing dataset csv/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/steeve/.cache/huggingface/datasets/csv/default-434f86cfa4832864/0.0.0/e138af468cb14e747fb46a19c787ffcfa5170c821476d20d5304287ce12bbc23...
Dataset csv downloaded and prepared to /home/steeve/.cache/huggingface/datasets/csv/default-434f86cfa4832864/0.0.0/e138af468cb14e747fb46a19c787ffcfa5170c821476d20d5304287ce12bbc23. Subsequent calls will reuse this data.
100%|██████████| 2500/2500 [00:00<00:00, 19904.06ex/s]
100%|██████████| 3/3 [00:00<00:00, 8.02ba/s]
Using custom data configuration default-4cdf6a7c2b47f824
Downloading and preparing dataset csv/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/steeve/.cache/huggingface/datasets/csv/default-4cdf6a7c2b47f824/0.0.0/e138af468cb14e747fb46a19c787ffcfa5170c821476d20d5304287ce12bbc23...
Dataset csv downloaded and prepared to /home/steeve/.cache/huggingface/datasets/csv/default-4cdf6a7c2b47f824/0.0.0/e138af468cb14e747fb46a19c787ffcfa5170c821476d20d5304287ce12bbc23. Subsequent calls will reuse this data.
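The "couldn't be hashed properly" warning in the log comes from `datasets` trying to fingerprint the mapping function for its cache; when the function (or its arguments) cannot be serialized with pickle or dill, a random hash is used and caching is defeated. A minimal stdlib sketch of the plain-pickle case (`relabel` is a hypothetical helper mirroring the `{'negative': 0, 'positive': 1}` mapping in the log, not the book's code):

```python
import pickle
from functools import partial

def relabel(mapping, column, example):
    # hypothetical relabeling helper: map a string label to an integer
    example[column] = mapping[example[column]]
    return example

# A partial over a module-level function round-trips through pickle,
# so a fingerprinting cache could hash it deterministically.
good = partial(relabel, {"negative": 0, "positive": 1}, "sentiment")
restored = pickle.loads(pickle.dumps(good))
print(restored({"sentiment": "positive"}))  # {'sentiment': 1}

# A partial over a lambda does not: pickle cannot serialize it,
# which is the kind of failure the warning describes.
bad = partial(lambda example: example)
try:
    pickle.dumps(bad)
except (pickle.PicklingError, AttributeError) as err:
    print("not picklable:", type(err).__name__)
```

Note that `datasets` falls back to dill (which handles more objects than pickle), so the real failure in the log is subtler, but the remedy is the same: keep transforms and their captured arguments serializable.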
100%|██████████| 2500/2500 [00:00<00:00, 19868.95ex/s]
100%|██████████| 3/3 [00:00<00:00, 6.43ba/s]
### define a text classifier
Using 'prajjwal1/bert-tiny' provided by Hugging Face/transformers (https://github.com/huggingface/transformers).
Some weights of the model checkpoint at prajjwal1/bert-tiny were not used when initializing BertForSequenceClassification: ['cls.predictions.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.dense.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.dense.bias', 'cls.seq_relationship.weight', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.decoder.weight', 'cls.predictions.decoder.bias']
- This IS expected if you are initializing BertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of BertForSequenceClassification were not initialized from the model checkpoint at prajjwal1/bert-tiny and are newly initialized: ['classifier.bias', 'classifier.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
### define the trainer
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
### fine tune the pretrained model to get a new model for sentiment classification
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1,2,3]
### download IMDb data to local folder
### download IMDb data to local folder
Using custom data configuration default-51398f4182682e8c
Downloading and preparing dataset csv/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/steeve/.cache/huggingface/datasets/csv/default-51398f4182682e8c/0.0.0/e138af468cb14e747fb46a19c787ffcfa5170c821476d20d5304287ce12bbc23...
Using custom data configuration default-51398f4182682e8c
Dataset csv downloaded and prepared to /home/steeve/.cache/huggingface/datasets/csv/default-51398f4182682e8c/0.0.0/e138af468cb14e747fb46a19c787ffcfa5170c821476d20d5304287ce12bbc23. Subsequent calls will reuse this data.
Reusing dataset csv (/home/steeve/.cache/huggingface/datasets/csv/default-51398f4182682e8c/0.0.0/e138af468cb14e747fb46a19c787ffcfa5170c821476d20d5304287ce12bbc23)
Parameter 'function'=functools.partial(<function …>, {'negative': 0, 'positive': 1}, 'sentiment') of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.
  0%|          | 0/22500 [00:00<?, ?ex/s]
Parameter 'function'=functools.partial(<function …>, {'negative': 0, 'positive': 1}, 'sentiment') of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.
100%|██████████| 22500/22500 [00:01<00:00, 18851.46ex/s]
100%|██████████| 22500/22500 [00:01<00:00, 17720.26ex/s]
100%|██████████| 23/23 [00:05<00:00, 4.23ba/s]
100%|██████████| 23/23 [00:05<00:00, 4.24ba/s]
Using custom data configuration default-8171013d43f1aea9
Downloading and preparing dataset csv/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/steeve/.cache/huggingface/datasets/csv/default-8171013d43f1aea9/0.0.0/e138af468cb14e747fb46a19c787ffcfa5170c821476d20d5304287ce12bbc23...
Dataset csv downloaded and prepared to /home/steeve/.cache/huggingface/datasets/csv/default-8171013d43f1aea9/0.0.0/e138af468cb14e747fb46a19c787ffcfa5170c821476d20d5304287ce12bbc23. Subsequent calls will reuse this data.
  0%|          | 0/2500 [00:00<?, ?ex/s]
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/steeve/Anaconda3/envs/dl_model/lib/python3.8/multiprocessing/spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "/home/steeve/Anaconda3/envs/dl_model/lib/python3.8/multiprocessing/spawn.py", line 125, in _main
    prepare(preparation_data)
  File "/home/steeve/Anaconda3/envs/dl_model/lib/python3.8/multiprocessing/spawn.py", line 236, in prepare
    _fixup_main_from_path(data['init_main_from_path'])
  File "/home/steeve/Anaconda3/envs/dl_model/lib/python3.8/multiprocessing/spawn.py", line 287, in _fixup_main_from_path
    main_content = runpy.run_path(main_path,
  File "/home/steeve/Anaconda3/envs/dl_model/lib/python3.8/runpy.py", line 265, in run_path
    return _run_module_code(code, init_globals, run_name,
  File "/home/steeve/Anaconda3/envs/dl_model/lib/python3.8/runpy.py", line 97, in _run_module_code
    _run_code(code, mod_globals, init_globals,
  File "/home/steeve/Anaconda3/envs/dl_model/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/steeve/Git/Practical-Deep-Learning-at-Scale-with-MLFlow/chapter01/first_dl.py", line 23, in <module>
    trainer.finetune(classifier_model, datamodule=datamodule, strategy="freeze")
  File "/home/steeve/Anaconda3/envs/dl_model/lib/python3.8/site-packages/flash/core/trainer.py", line 165, in finetune
    return super().fit(model, train_dataloader, val_dataloaders, datamodule)
  File "/home/steeve/Anaconda3/envs/dl_model/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 552, in fit
    self._run(model)
  File "/home/steeve/Anaconda3/envs/dl_model/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 922, in _run
    self._dispatch()
  File "/home/steeve/Anaconda3/envs/dl_model/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 990, in _dispatch
    self.accelerator.start_training(self)
  File "/home/steeve/Anaconda3/envs/dl_model/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 92, in start_training
    self.training_type_plugin.start_training(trainer)
  File "/home/steeve/Anaconda3/envs/dl_model/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/ddp_spawn.py", line 158, in start_training
    mp.spawn(self.new_process, **self.mp_spawn_kwargs)
  File "/home/steeve/Anaconda3/envs/dl_model/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 240, in spawn
    return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
  File "/home/steeve/Anaconda3/envs/dl_model/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 189, in start_processes
    process.start()
  File "/home/steeve/Anaconda3/envs/dl_model/lib/python3.8/multiprocessing/process.py", line 121, in start
    self._popen = self._Popen(self)
  File "/home/steeve/Anaconda3/envs/dl_model/lib/python3.8/multiprocessing/context.py", line 284, in _Popen
    return Popen(process_obj)
  File "/home/steeve/Anaconda3/envs/dl_model/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 32, in __init__
    super().__init__(process_obj)
  File "/home/steeve/Anaconda3/envs/dl_model/lib/python3.8/multiprocessing/popen_fork.py", line 19, in __init__
    self._launch(process_obj)
  File "/home/steeve/Anaconda3/envs/dl_model/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 42, in _launch
    prep_data = spawn.get_preparation_data(process_obj._name)
  File "/home/steeve/Anaconda3/envs/dl_model/lib/python3.8/multiprocessing/spawn.py", line 154, in get_preparation_data
    _check_not_importing_main()
  File "/home/steeve/Anaconda3/envs/dl_model/lib/python3.8/multiprocessing/spawn.py", line 134, in _check_not_importing_main
    raise RuntimeError('''
RuntimeError:
        An attempt has been made to start a new process before the
        current process has finished its bootstrapping phase.

        This probably means that you are not using fork to start your
        child processes and you have forgotten to use the proper idiom
        in the main module:

            if __name__ == '__main__':
                freeze_support()
                ...

        The "freeze_support()" line can be omitted if the program
        is not going to be frozen to produce an executable.
Some weights of the model checkpoint at prajjwal1/bert-tiny were not used when initializing BertForSequenceClassification: ['cls.predictions.transform.LayerNorm.weight', 'cls.predictions.decoder.bias', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.decoder.weight', 'cls.seq_relationship.bias', 'cls.seq_relationship.weight']
- This IS expected if you are initializing BertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of BertForSequenceClassification were not initialized from the model checkpoint at prajjwal1/bert-tiny and are newly initialized: ['classifier.weight', 'classifier.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
### define the trainer
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
### fine tune the pretrained model to get a new model for sentiment classification
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1,2,3]
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/steeve/Anaconda3/envs/dl_model/lib/python3.8/multiprocessing/spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "/home/steeve/Anaconda3/envs/dl_model/lib/python3.8/multiprocessing/spawn.py", line 125, in _main
    prepare(preparation_data)
  File "/home/steeve/Anaconda3/envs/dl_model/lib/python3.8/multiprocessing/spawn.py", line 236, in prepare
    _fixup_main_from_path(data['init_main_from_path'])
  File "/home/steeve/Anaconda3/envs/dl_model/lib/python3.8/multiprocessing/spawn.py", line 287, in _fixup_main_from_path
    main_content = runpy.run_path(main_path,
  File "/home/steeve/Anaconda3/envs/dl_model/lib/python3.8/runpy.py", line 265, in run_path
    return _run_module_code(code, init_globals, run_name,
  File "/home/steeve/Anaconda3/envs/dl_model/lib/python3.8/runpy.py", line 97, in _run_module_code
    _run_code(code, mod_globals, init_globals,
  File "/home/steeve/Anaconda3/envs/dl_model/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/steeve/Git/Practical-Deep-Learning-at-Scale-with-MLFlow/chapter01/first_dl.py", line 23, in <module>
    trainer.finetune(classifier_model, datamodule=datamodule, strategy="freeze")
  File "/home/steeve/Anaconda3/envs/dl_model/lib/python3.8/site-packages/flash/core/trainer.py", line 165, in finetune
    return super().fit(model, train_dataloader, val_dataloaders, datamodule)
  File "/home/steeve/Anaconda3/envs/dl_model/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 552, in fit
    self._run(model)
  File "/home/steeve/Anaconda3/envs/dl_model/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 922, in _run
    self._dispatch()
  File "/home/steeve/Anaconda3/envs/dl_model/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 990, in _dispatch
    self.accelerator.start_training(self)
  File "/home/steeve/Anaconda3/envs/dl_model/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 92, in start_training
    self.training_type_plugin.start_training(trainer)
  File "/home/steeve/Anaconda3/envs/dl_model/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/ddp_spawn.py", line 158, in start_training
    mp.spawn(self.new_process, **self.mp_spawn_kwargs)
  File "/home/steeve/Anaconda3/envs/dl_model/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 240, in spawn
    return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
  File "/home/steeve/Anaconda3/envs/dl_model/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 189, in start_processes
    process.start()
  File "/home/steeve/Anaconda3/envs/dl_model/lib/python3.8/multiprocessing/process.py", line 121, in start
    self._popen = self._Popen(self)
  File "/home/steeve/Anaconda3/envs/dl_model/lib/python3.8/multiprocessing/context.py", line 284, in _Popen
    return Popen(process_obj)
  File "/home/steeve/Anaconda3/envs/dl_model/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 32, in __init__
    super().__init__(process_obj)
  File "/home/steeve/Anaconda3/envs/dl_model/lib/python3.8/multiprocessing/popen_fork.py", line 19, in __init__
    self._launch(process_obj)
  File "/home/steeve/Anaconda3/envs/dl_model/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 42, in _launch
    prep_data = spawn.get_preparation_data(process_obj._name)
  File "/home/steeve/Anaconda3/envs/dl_model/lib/python3.8/multiprocessing/spawn.py", line 154, in get_preparation_data
    _check_not_importing_main()
  File "/home/steeve/Anaconda3/envs/dl_model/lib/python3.8/multiprocessing/spawn.py", line 134, in _check_not_importing_main
    raise RuntimeError('''
RuntimeError:
        An attempt has been made to start a new process before the
        current process has finished its bootstrapping phase.

        This probably means that you are not using fork to start your
        child processes and you have forgotten to use the proper idiom
        in the main module:

            if __name__ == '__main__':
                freeze_support()
                ...

        The "freeze_support()" line can be omitted if the program
        is not going to be frozen to produce an executable.
Traceback (most recent call last):
  File "first_dl.py", line 23, in <module>
    trainer.finetune(classifier_model, datamodule=datamodule, strategy="freeze")
  File "/home/steeve/Anaconda3/envs/dl_model/lib/python3.8/site-packages/flash/core/trainer.py", line 165, in finetune
    return super().fit(model, train_dataloader, val_dataloaders, datamodule)
  File "/home/steeve/Anaconda3/envs/dl_model/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 552, in fit
    self._run(model)
  File "/home/steeve/Anaconda3/envs/dl_model/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 922, in _run
    self._dispatch()
  File "/home/steeve/Anaconda3/envs/dl_model/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 990, in _dispatch
    self.accelerator.start_training(self)
  File "/home/steeve/Anaconda3/envs/dl_model/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 92, in start_training
    self.training_type_plugin.start_training(trainer)
  File "/home/steeve/Anaconda3/envs/dl_model/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/ddp_spawn.py", line 158, in start_training
    mp.spawn(self.new_process, **self.mp_spawn_kwargs)
  File "/home/steeve/Anaconda3/envs/dl_model/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 240, in spawn
    return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
  File "/home/steeve/Anaconda3/envs/dl_model/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 198, in start_processes
    while not context.join():
  File "/home/steeve/Anaconda3/envs/dl_model/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 149, in join
    raise ProcessExitedException(
torch.multiprocessing.spawn.ProcessExitedException: process 0 terminated with exit code 1
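The RuntimeError and the final ProcessExitedException are the classic spawn-mode pitfall: Lightning's ddp_spawn plugin starts child processes with the "spawn" start method, each child re-imports the script, and the top-level `trainer.finetune(...)` call at line 23 of first_dl.py runs again inside every child until multiprocessing aborts the bootstrap. The idiom the error message asks for can be sketched with the stdlib alone (this is an illustration of the pattern, not the book's code; `worker` stands in for one DDP training process):

```python
import multiprocessing as mp

def worker(rank):
    # stands in for the per-GPU training function that ddp_spawn launches
    print(f"worker {rank} started")

def main():
    ctx = mp.get_context("spawn")  # same start method ddp_spawn uses
    procs = [ctx.Process(target=worker, args=(i,)) for i in range(2)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()

# Without this guard, each spawned child would re-run main() on import
# and hit the same "bootstrapping phase" RuntimeError seen in the log.
if __name__ == "__main__":
    main()
```

Applied to first_dl.py, this means moving the datamodule/model construction and the `trainer.finetune(classifier_model, datamodule=datamodule, strategy="freeze")` call under an `if __name__ == '__main__':` guard (or restricting the trainer to a single GPU so no spawn is needed).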