```
>>>>>>>start training : ETTm1_96_672_Autoformer_ETTm1_ftM_sl96_ll48_pl672_dm512_nh8_el2_dl1_df2048_fc3_ebtimeF_dtTrue_Exp_0>>>>>>>>>>>>>>>>>>>>>>>>>>
train 33793
val 10849
test 10849
	iters: 100, epoch: 1 | loss: 0.5350901	speed: 4.5272s/iter; left time: 47358.7275s
	iters: 200, epoch: 1 | loss: 0.4332319	speed: 4.5712s/iter; left time: 47362.3486s
	iters: 300, epoch: 1 | loss: 0.4808801	speed: 4.5620s/iter; left time: 46810.8673s
	iters: 400, epoch: 1 | loss: 0.5168055	speed: 4.5696s/iter; left time: 46431.6680s
	iters: 500, epoch: 1 | loss: 0.4912913	speed: 4.5852s/iter; left time: 46131.6203s
	iters: 600, epoch: 1 | loss: 0.4775451	speed: 4.5742s/iter; left time: 45564.0464s
	iters: 700, epoch: 1 | loss: 0.5060694	speed: 4.5875s/iter; left time: 45237.1464s
	iters: 800, epoch: 1 | loss: 0.5316107	speed: 4.5767s/iter; left time: 44673.6112s
	iters: 900, epoch: 1 | loss: 0.4948809	speed: 4.5745s/iter; left time: 44194.6974s
	iters: 1000, epoch: 1 | loss: 0.4570811	speed: 4.5657s/iter; left time: 43652.2512s
Epoch: 1 cost time: 4828.672526597977
Traceback (most recent call last):
  File "/home/fight/Desktop/Autoformer/Autoformer-main/run.py", line 116, in <module>
    exp.train(setting)
  File "/home/fight/Desktop/Autoformer/Autoformer-main/exp/exp_main.py", line 173, in train
    vali_loss = self.vali(vali_data, vali_loader, criterion)
  File "/home/fight/Desktop/Autoformer/Autoformer-main/exp/exp_main.py", line 76, in vali
    outputs = self.model(batch_x, batch_x_mark, dec_inp, batch_y_mark)
  File "/home/fight/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/fight/Desktop/Autoformer/Autoformer-main/models/Autoformer.py", line 90, in forward
    enc_out, attns = self.encoder(enc_out, attn_mask=enc_self_mask)
  File "/home/fight/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/fight/Desktop/Autoformer/Autoformer-main/layers/Autoformer_EncDec.py", line 103, in forward
    x, attn = attn_layer(x, attn_mask=attn_mask)
  File "/home/fight/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/fight/Desktop/Autoformer/Autoformer-main/layers/Autoformer_EncDec.py", line 69, in forward
    new_x, attn = self.attention(
  File "/home/fight/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/fight/Desktop/Autoformer/Autoformer-main/layers/AutoCorrelation.py", line 167, in forward
    out, attn = self.inner_correlation(
  File "/home/fight/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/fight/Desktop/Autoformer/Autoformer-main/layers/AutoCorrelation.py", line 135, in forward
    V = self.time_delay_agg_inference(values.permute(0, 2, 3, 1).contiguous(), corr).permute(0, 3, 1, 2)
  File "/home/fight/Desktop/Autoformer/Autoformer-main/layers/AutoCorrelation.py", line 66, in time_delay_agg_inference
    init_index = torch.arange(length).unsqueeze(0).unsqueeze(0).unsqueeze(0).repeat(batch, head, channel, 1).cuda()
  File "/home/fight/anaconda3/lib/python3.9/site-packages/torch/cuda/__init__.py", line 216, in _lazy_init
    torch._C._cuda_init()
RuntimeError: No CUDA GPUs are available
```

Hello, author Wu, have you encountered a situation like this? Could you give some advice on this problem?

This is my configuration:

```
OS: Ubuntu 18.04.6 LTS x86_64
Host: 90Q90022CP ZHENGJIUZHE REN9000
Kernel: 5.4.0-109-generic
Uptime: 22 days, 8 hours, 7 mins
Packages: 1986
Shell: bash 4.4.20
Resolution: 1920x1080
DE: GNOME 3.28.4
WM: GNOME Shell
WM Theme: Adwaita
Theme: Ambiance [GTK2/3]
Icons: Ubuntu-mono-dark [GTK2/3]
Terminal: gnome-terminal
CPU: Intel i9-10900K (20) @ 5.300GHz
GPU: NVIDIA NVIDIA Corporation Devic
Memory: 21480MiB / 64125MiB
```
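For reference: the last application frame in the traceback is `time_delay_agg_inference` in `layers/AutoCorrelation.py`, where the delay index is built with a hardcoded `.cuda()` call, so CUDA initialization is forced even when no GPU is visible at that moment. One common workaround is to allocate the index on the same device as the input tensor instead. A minimal sketch of that device-agnostic construction, assuming the `(batch, head, channel, length)` layout implied by the traceback (`time_delay_agg_init_index` is a hypothetical helper name, not part of the repository):

```python
import torch

def time_delay_agg_init_index(values: torch.Tensor) -> torch.Tensor:
    # values: (batch, head, channel, length), as in time_delay_agg_inference.
    batch, head, channel, length = values.shape
    # Create the index on values.device rather than calling .cuda(), so the
    # same code path works on both CPU-only and GPU machines.
    init_index = (
        torch.arange(length, device=values.device)
        .unsqueeze(0).unsqueeze(0).unsqueeze(0)   # -> (1, 1, 1, length)
        .repeat(batch, head, channel, 1)          # -> (batch, head, channel, length)
    )
    return init_index
```

This only sidesteps the hardcoded device; it does not explain why CUDA stopped being available mid-run, which may also be worth checking (e.g. with `nvidia-smi` and `torch.cuda.is_available()`).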