Hi, that error occurs because the generated dataset is too large for your system's memory. Setting the `self.trials_per_im` flag to a smaller value should solve the issue. I have changed the default value in the notebook (from 50 to 10 for the MultiMNIST dataset). Please let me know if this resolves the problem. Thanks.
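For what it's worth, the allocation size in the error is consistent with this explanation. Assuming 60,000 base MNIST images and 36×36 MultiMNIST frames (both assumptions; only the byte count comes from the error message, and `torch.Tensor(...)` yields float32), the arithmetic matches exactly:

```python
# Sanity check of the allocation size (assumptions: 60,000 base MNIST
# images and 36x36 MultiMNIST frames; the byte count is from the error).
n_base, h, w, float32_bytes = 60_000, 36, 36, 4

print(n_base * 50 * h * w * float32_bytes)  # 15552000000 (~15.5 GB) at trials_per_im = 50
print(n_base * 10 * h * w * float32_bytes)  # 3110400000  (~3.1 GB) at trials_per_im = 10
```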
Hi,
Can you please provide a link to the sample data? I tried to generate the data with the stimuli ipynb but got the following error message:
```
tensor_train_ims = torch.Tensor(train_ims)/255 # transform to torch tensor
RuntimeError: [enforce fail at ..\c10\core\CPUAllocator.cpp:79] data. DefaultCPUAllocator: not enough memory: you tried to allocate 15552000000 bytes.
```
The other ipynb apparently does not run without the sample data.
Thanks!
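If lowering `trials_per_im` is not an option, a lower-memory route is to keep the images as uint8 and defer the float conversion to batch time, rather than materializing the whole float32 tensor up front. A minimal sketch, assuming `train_ims` is a uint8 NumPy array (which the `/255` normalization suggests); this is not the repository's code:

```python
import numpy as np
import torch

# Stand-in for the notebook's train_ims; under the assumptions above the
# real array would have shape (60_000 * trials_per_im, 36, 36).
train_ims = np.zeros((1_000, 36, 36), dtype=np.uint8)

# torch.from_numpy shares the existing uint8 buffer instead of copying it
# into a new float32 tensor, so no multi-gigabyte allocation happens here.
tensor_train_ims = torch.from_numpy(train_ims)

# Convert and normalize one batch at a time; only the batch is float32.
batch = tensor_train_ims[:256].float() / 255
```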