
dictionary update sequence element #0 has length 1; 2 is required #9318

Closed
cristianegea opened this issue Sep 3, 2021 · 17 comments
Labels: bug (Something isn't working), help wanted (Open to be worked on)

@cristianegea

cristianegea commented Sep 3, 2021

I previously opened an issue in the pytorch-forecasting repository about a problem I'm having while trying to run the "Demand forecasting with the Temporal Fusion Transformer" tutorial. When I run trainer.fit(tft, train_dataloaders=train_dataloader, val_dataloaders=val_dataloader), the following error is returned: "dictionary update sequence element #0 has length 1; 2 is required". Looking at the traceback, the error comes from the file representer.py, which belongs to PyYAML and is invoked by pytorch-lightning when it saves hyperparameters.

I've tried all the possibilities I knew, but the error persists.

Linked issue in the pytorch-forecasting repository: jdb78/pytorch-forecasting#665

My notebook on Google Colab: https://colab.research.google.com/drive/1NX-ah_Nuqyt6m2Wsn2WcKkadRariISFu?usp=sharing

Error message:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-27-8e5fb483c5dc> in <module>()
      2 trainer.fit(tft,
      3             train_dataloaders = train_dataloader,
----> 4             val_dataloaders = val_dataloader,
      5 )

14 frames
/usr/local/lib/python3.7/dist-packages/yaml/representer.py in represent_object(self, data)
    328             listitems = list(listitems)
    329         if dictitems is not None:
--> 330             dictitems = dict(dictitems)
    331         if function.__name__ == '__newobj__':
    332             function = args[0]
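
For context, a minimal sketch (my own, not from the tutorial) of what triggers this ValueError inside represent_object: dict(dictitems) fails whenever an object's pickle-style reduction yields items that are not (key, value) pairs.

# dict() expects an iterable of 2-tuples; a 1-element item reproduces the error.
items = [("key",)]
try:
    dict(items)
except ValueError as err:
    print(err)  # dictionary update sequence element #0 has length 1; 2 is required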

cristianegea added the bug and help wanted labels on Sep 3, 2021
@ethanwharris
Member

Hi @cristianegea this looks like the error from Lightning-AI/torchmetrics#492 - could you try installing torchmetrics <= 0.5.0 to see if that fixes the error? Thanks 😃
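
For reference, the downgrade is a one-liner (assuming a pip-based environment):

pip install "torchmetrics<=0.5.0"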

@cristianegea
Author

Hi @ethanwharris! Thank you for your help. I installed the package, but the error persists. I tested on both Google Colab and Anaconda.

The message below is the one that appears in Anaconda (it is more detailed than the one on Google Colab):

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-27-8e5fb483c5dc> in <module>
      1 # fit the model on the data - redefine the model with the correct learning rate if necessary
----> 2 trainer.fit(tft,
      3             train_dataloaders = train_dataloader,
      4             val_dataloaders = val_dataloader,
      5 )

C:\ProgramData\Anaconda3\lib\site-packages\pytorch_lightning\trainer\trainer.py in fit(self, model, train_dataloaders, val_dataloaders, datamodule, train_dataloader)
    550         self.checkpoint_connector.resume_start()
    551 
--> 552         self._run(model)
    553 
    554         assert self.state.stopped

C:\ProgramData\Anaconda3\lib\site-packages\pytorch_lightning\trainer\trainer.py in _run(self, model)
    909 
    910         # plugin will setup fitting (e.g. ddp will launch child processes)
--> 911         self._pre_dispatch()
    912 
    913         # restore optimizers, etc.

C:\ProgramData\Anaconda3\lib\site-packages\pytorch_lightning\trainer\trainer.py in _pre_dispatch(self)
    938     def _pre_dispatch(self):
    939         self.accelerator.pre_dispatch(self)
--> 940         self._log_hyperparams()
    941 
    942     def _log_hyperparams(self):

C:\ProgramData\Anaconda3\lib\site-packages\pytorch_lightning\trainer\trainer.py in _log_hyperparams(self)
    967                 self.logger.log_hyperparams(hparams_initial)
    968             self.logger.log_graph(self.lightning_module)
--> 969             self.logger.save()
    970 
    971     def _post_dispatch(self):

C:\ProgramData\Anaconda3\lib\site-packages\pytorch_lightning\utilities\distributed.py in wrapped_fn(*args, **kwargs)
     46     def wrapped_fn(*args, **kwargs):
     47         if rank_zero_only.rank == 0:
---> 48             return fn(*args, **kwargs)
     49 
     50     return wrapped_fn

C:\ProgramData\Anaconda3\lib\site-packages\pytorch_lightning\loggers\tensorboard.py in save(self)
    248         # save the metatags file if it doesn't exist and the log directory exists
    249         if self._fs.isdir(dir_path) and not self._fs.isfile(hparams_file):
--> 250             save_hparams_to_yaml(hparams_file, self.hparams)
    251 
    252     @rank_zero_only

C:\ProgramData\Anaconda3\lib\site-packages\pytorch_lightning\core\saving.py in save_hparams_to_yaml(config_yaml, hparams)
    403     for k, v in hparams.items():
    404         try:
--> 405             yaml.dump(v)
    406         except TypeError:
    407             warn(f"Skipping '{k}' parameter because it is not possible to safely dump to YAML.")

C:\ProgramData\Anaconda3\lib\site-packages\yaml\__init__.py in dump(data, stream, Dumper, **kwds)
    288     If stream is None, return the produced string instead.
    289     """
--> 290     return dump_all([data], stream, Dumper=Dumper, **kwds)
    291 
    292 def safe_dump_all(documents, stream=None, **kwds):

C:\ProgramData\Anaconda3\lib\site-packages\yaml\__init__.py in dump_all(documents, stream, Dumper, default_style, default_flow_style, canonical, indent, width, allow_unicode, line_break, encoding, explicit_start, explicit_end, version, tags, sort_keys)
    276         dumper.open()
    277         for data in documents:
--> 278             dumper.represent(data)
    279         dumper.close()
    280     finally:

C:\ProgramData\Anaconda3\lib\site-packages\yaml\representer.py in represent(self, data)
     25 
     26     def represent(self, data):
---> 27         node = self.represent_data(data)
     28         self.serialize(node)
     29         self.represented_objects = {}

C:\ProgramData\Anaconda3\lib\site-packages\yaml\representer.py in represent_data(self, data)
     50             for data_type in data_types:
     51                 if data_type in self.yaml_multi_representers:
---> 52                     node = self.yaml_multi_representers[data_type](self, data)
     53                     break
     54             else:

C:\ProgramData\Anaconda3\lib\site-packages\yaml\representer.py in represent_object(self, data)
    340         if not args and not listitems and not dictitems \
    341                 and isinstance(state, dict) and newobj:
--> 342             return self.represent_mapping(
    343                     'tag:yaml.org,2002:python/object:'+function_name, state)
    344         if not listitems and not dictitems  \

C:\ProgramData\Anaconda3\lib\site-packages\yaml\representer.py in represent_mapping(self, tag, mapping, flow_style)
    116         for item_key, item_value in mapping:
    117             node_key = self.represent_data(item_key)
--> 118             node_value = self.represent_data(item_value)
    119             if not (isinstance(node_key, ScalarNode) and not node_key.style):
    120                 best_style = False

C:\ProgramData\Anaconda3\lib\site-packages\yaml\representer.py in represent_data(self, data)
     50             for data_type in data_types:
     51                 if data_type in self.yaml_multi_representers:
---> 52                     node = self.yaml_multi_representers[data_type](self, data)
     53                     break
     54             else:

C:\ProgramData\Anaconda3\lib\site-packages\yaml\representer.py in represent_object(self, data)
    328             listitems = list(listitems)
    329         if dictitems is not None:
--> 330             dictitems = dict(dictitems)
    331         if function.__name__ == '__newobj__':
    332             function = args[0]

ValueError: dictionary update sequence element #0 has length 1; 2 is required

@ethanwharris
Member

ethanwharris commented Sep 3, 2021

It's possible something went wrong with the downgrade. @cristianegea could you double check your torchmetrics version with:

import torchmetrics
print(torchmetrics.__version__)

It should show 0.5.0 or lower; 0.5.1 is the only version with the bug, AFAIK.

@cristianegea
Author

Hi @ethanwharris ! Once again, thank you so much for your help!!!! I changed the version of the torchmetrics package and apparently the notebook is running (it hasn't finished running yet).

@jcha7071

jcha7071 commented Sep 4, 2021

Hi @cristianegea, did you solve the problem by downgrading torchmetrics to 0.5.0? When I did, the dictionary problem went away, but now I am getting another AttributeError: 'functools.partial' object has no attribute '__name__'. Any idea?
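
For what it's worth, that AttributeError matches the represent_object code shown above: line 331 of representer.py evaluates function.__name__, and functools.partial objects do not define __name__. A minimal sketch:

import functools

f = functools.partial(print, "x")
print(hasattr(f, "__name__"))  # False, so function.__name__ raises AttributeError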

@cristianegea
Author

@jcha7071, after I downgraded the package version, no more errors appeared. However, I noticed that training takes much longer now.

At the moment, I can't think of a solution. I can give you the versions of some of the packages I'm using.

@cristianegea
Author

Hi @jcha7071!

I am currently working on Google Colab and this error message does not appear for me. However, when trying to run the notebook on my personal computer, the error message also appears.

Have you ever tried running your notebook on Google Colab?

@YangMcSim

Hi @cristianegea, did you solve the problem by downgrading torchmetrics to 0.5.0? When I did, the dictionary problem went away, but now I am getting another AttributeError: 'functools.partial' object has no attribute '__name__'. Any idea?

See yaml/pyyaml#541: you can downgrade pandas to solve this problem. pandas==1.2.4 worked for me.
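
For reference (again assuming a pip-based environment):

pip install pandas==1.2.4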

@R2D2oid

R2D2oid commented Oct 1, 2021

It's possible something went wrong with the downgrade. @cristianegea could you double check your torchmetrics version with:

import torchmetrics
print(torchmetrics.__version__)

It should show 0.5.0 or lower; 0.5.1 is the only version with the bug, AFAIK.

I see this issue in version 0.4.1

>>> print(torchmetrics.__version__)
0.4.1

@cristianegea
Author

cristianegea commented Oct 1, 2021 via email

I'm using a 0.5 version.

@R2D2oid

R2D2oid commented Oct 1, 2021

I'm using a 0.5 version.

Thanks for the reply! Switching to version 0.5.0 did not resolve the issue either.

@di0002ya

di0002ya commented Apr 9, 2022

Hi, I also encountered this issue. May I know how to solve it?

@josejimenezluna

Encountering this issue with torchmetrics=0.8.2 too.

@zxk19981227

Same problem here. How to solve it?

@aegonwolf

I also still get this error after downgrading to 0.5.0

@CJunette

I might have a clue for this problem.

class SomePredictor(pl.LightningModule):
    def __init__(self, some_method, n_features: int, n_classes: int):
        ...
        # Saves every __init__ argument, including the callable some_method.
        self.save_hyperparameters()
        ...

The error I hit is also ValueError: dictionary update sequence element #0 has length 1; 2 is required. The problem is that when the module logs its hyperparameters, the some_method argument cannot be serialized to YAML.

The fix is easy: pass ignore='some_method' to self.save_hyperparameters(), like this:

class SomePredictor(pl.LightningModule):
    def __init__(self, some_method, n_features: int, n_classes: int):
        ...
        # Exclude the non-serializable argument from the saved hyperparameters.
        self.save_hyperparameters(ignore='some_method')
        ...
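
If it is unclear which argument is the culprit, here is a small diagnostic sketch (my own; model stands in for your LightningModule instance). It mimics what save_hparams_to_yaml does, while also catching the ValueError that currently escapes it:

import yaml

for key, value in model.hparams.items():
    try:
        yaml.dump(value)
    except (TypeError, ValueError) as err:
        print(f"hparam {key!r} cannot be dumped to YAML: {err}")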

@Zhuofeng-Li

Zhuofeng-Li commented Aug 29, 2023

Do not save the model weights to hparams.yaml. This solved the problem for me:

    def __init__(self, model, n_features: int, n_classes: int):
        ...
        self.save_hyperparameters(ignore=["model"])  # keep the model object out of hparams

#11494
