
Torch_model_base.py error in collate_fn #63

Closed
maerory opened this issue Jul 23, 2020 · 3 comments

maerory commented Jul 23, 2020

Hi,

I have been trying to run the code in the notebooks. However, whenever I run code that uses the torch models, I get an error as soon as I fit the model. How can I resolve this issue?

Thank you,

Joey

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-60-77ee602003f0> in <module>
      1 giga_ae = TorchAutoencoder(max_iter=1000,
      2                           hidden_dim=100,
----> 3                           eta=0.03).fit(giga5_svd500)

~/Downloads/Stanford-CS224U/codebase/torch_autoencoder.py in fit(self, X)
    124 
    125         """
--> 126         super().fit(X, X)
    127         # Hidden representations:
    128         with torch.no_grad():

~/Downloads/Stanford-CS224U/codebase/torch_model_base.py in fit(self, *args)
    351             epoch_error = 0.0
    352 
--> 353             for batch_num, batch in enumerate(dataloader, start=1):
    354 
    355                 batch = [x.to(self.device, non_blocking=True) for x in batch]

/usr/local/lib/python3.7/site-packages/torch/utils/data/dataloader.py in __next__(self)
    613         if self.num_workers == 0:  # same-process loading
    614             indices = next(self.sample_iter)  # may raise StopIteration
--> 615             batch = self.collate_fn([self.dataset[i] for i in indices])
    616             if self.pin_memory:
    617                 batch = pin_memory_batch(batch)

TypeError: 'NoneType' object is not callable
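The traceback shows that self.collate_fn inside the DataLoader is None when the first batch is built. As far as I can tell, this can happen when a DataLoader is constructed with collate_fn=None explicitly under an older PyTorch (roughly 1.0/1.1, which matches the dataloader.py line numbers above), since those versions do not fall back to default_collate. A minimal sketch of the situation and a possible workaround; the dataset and variable names here are just illustrative, not taken from torch_model_base.py:

```python
# Hypothetical minimal reproduction, assuming an older PyTorch (~1.0/1.1):
# those DataLoader versions keep collate_fn=None as-is rather than falling
# back to default_collate, so building the first batch calls None.
import torch
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.dataloader import default_collate

X = torch.randn(8, 4)
dataset = TensorDataset(X, X)   # autoencoder-style (input, target) pairs

# On old PyTorch this raises "TypeError: 'NoneType' object is not callable":
# loader = DataLoader(dataset, batch_size=4, collate_fn=None)

# Workaround sketch: pass the default collate function explicitly,
# or simply omit the collate_fn argument altogether.
loader = DataLoader(dataset, batch_size=4, collate_fn=default_collate)
for X_batch, y_batch in loader:
    print(X_batch.shape, y_batch.shape)
```

Upgrading to PyTorch 1.2 or later, where a missing collate_fn falls back to default_collate, should also avoid the error.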
maerory closed this as completed Jul 25, 2020

maerory commented Jul 25, 2020

For some reason, it started working after some time.


cgpotts commented Jul 25, 2020

@maerory Glad it worked out! If it happens again, do let me know!


maerory commented Jul 25, 2020

@cgpotts Thank you for your concern! I am really enjoying this wonderful class.
