[Resumable IterableDataset] Add IterableDataset state_dict #6658

Draft
wants to merge 4 commits into main

Conversation

@lhoestq (Member) commented Feb 11, 2024

A simple implementation of a mechanism to resume an IterableDataset.
This is WIP and untested.

Example:

from datasets import Dataset, concatenate_datasets


ds = Dataset.from_dict({"a": range(5)}).to_iterable_dataset(num_shards=3)
ds = concatenate_datasets([ds] * 2)

print(f"{ds.state_dict()=}")
for i, example in enumerate(ds):
    print(example)
    if i == 6:
        state_dict = ds.state_dict()
ds.load_state_dict(state_dict)
print(f"{ds.state_dict()=}")
for example in ds:
    print(example)

which prints:

ds.state_dict()={'ex_iterable_idx': 0, 'ex_iterables': [{'shard_idx': 0, 'shard_example_idx': 0}, {'shard_idx': 0, 'shard_example_idx': 0}]}
{'a': 0}
{'a': 1}
{'a': 2}
{'a': 3}
{'a': 4}
{'a': 0}
{'a': 1}
{'a': 2}
{'a': 3}
{'a': 4}
ds.state_dict()={'ex_iterable_idx': 1, 'ex_iterables': [{'shard_idx': 3, 'shard_example_idx': 0}, {'shard_idx': 0, 'shard_example_idx': 2}]}
{'a': 2}
{'a': 3}
{'a': 4}
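
As a hypothetical extension of the example (not part of this PR): since the snapshot above is a plain nested dict of ints, it could be persisted and restored across runs like any other checkpoint data, e.g. with json:

import json
from datasets import Dataset, concatenate_datasets

# build the pipeline and save the position reached after a few examples
ds = Dataset.from_dict({"a": range(5)}).to_iterable_dataset(num_shards=3)
ds = concatenate_datasets([ds] * 2)
for i, example in enumerate(ds):
    if i == 6:
        with open("dataset_state.json", "w") as f:
            json.dump(ds.state_dict(), f)  # assumes the state stays JSON-serializable, as above
        break

# later, possibly in a new process: rebuild the same pipeline and restore the position
ds = Dataset.from_dict({"a": range(5)}).to_iterable_dataset(num_shards=3)
ds = concatenate_datasets([ds] * 2)
with open("dataset_state.json") as f:
    ds.load_state_dict(json.load(f))
for example in ds:
    print(example)  # iteration resumes from the saved position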

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@bwanglzu

It would be nice to have this feature in the new dataset release!

@lhoestq (Member, Author) commented Apr 11, 2024

Before finalising this, I'd like to make sure this philosophy makes sense for other libs like accelerate, for example.

cc @muellerzr I'd love your feedback on this one
cc @LysandreJik also if you think other people should take a look

@muellerzr left a comment:

Overall I think this looks like a very nice API decision, and super easy for us to bring into Accelerate as part of load_state. Will be nice to not have to use skip_batches if a user is using an IterableDataset.

One design question though: what's the logic behind self._state_dict rather than having it all be state_dict?

Private attributes don't really exist in Python, so what's the aim in doing that here and having state_dict be a passthrough to it? (If this is a common design pattern over in datasets, that's okay.)

@lhoestq (Member, Author) commented Apr 15, 2024

One design question though: what's the logic behind self._state_dict rather than having it all be state_dict?

The _state_dict is the internal object that is updated in place while you iterate over the dataset.

We need to copy it every time the user accesses it.

Otherwise we would get

state_dict = ds.state_dict()
for x in ds:
    assert ds.state_dict() == state_dict  # and actually `assert ds.state_dict() is state_dict`

The state is updated in-place since it's made of dictionaries that are shared with the steps in the IterableDataset pipeline.
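
To make the in-place / copy distinction concrete, here is a minimal standalone sketch (illustrative only, not the actual datasets internals; ResumableRange is a made-up class):

import copy

class ResumableRange:
    def __init__(self, n):
        self.n = n
        self._state_dict = {"example_idx": 0}  # shared dict, mutated in place during iteration

    def __iter__(self):
        while self._state_dict["example_idx"] < self.n:
            example = self._state_dict["example_idx"]
            self._state_dict["example_idx"] += 1  # the state now points at the next unseen example
            yield example

    def state_dict(self):
        return copy.deepcopy(self._state_dict)  # return a copy so the snapshot stays frozen

    def load_state_dict(self, state_dict):
        self._state_dict.update(state_dict)  # update in place so the shared object stays shared

ds = ResumableRange(5)
it = iter(ds)
print(next(it), next(it))   # 0 1
snapshot = ds.state_dict()  # {'example_idx': 2}
print(next(it))             # 2 -- the snapshot is unaffected because it is a copy
ds.load_state_dict(snapshot)
print(list(ds))             # [2, 3, 4]: iteration resumes from the saved position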

@muellerzr

What do you think of making it a full property with a docstring explicitly stating users shouldn’t call/modify it directly?

I can imagine some exploratory users getting curious

@lhoestq (Member, Author) commented Apr 15, 2024

I don't think users read the docstrings of properties that often. What about explaining the logic in the .state_dict() docstring? This also feels aligned with the way .state_dict() and .load_state_dict() work in PyTorch (you should use load_state_dict to load a modified copy of the state dict).
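
For reference, a small sketch of the PyTorch convention being mirrored here (plain torch.nn.Module usage, unrelated to this PR's code):

import torch

model = torch.nn.Linear(2, 2)
state = model.state_dict()                      # dict of the current parameters, ready to serialize
torch.save(state, "model.pt")                   # persist it like any other checkpoint
model.load_state_dict(torch.load("model.pt"))   # load a (possibly modified) saved copy back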

@muellerzr

Sure, I can agree with that!

@muellerzr

Just a small note mentioning that it returns a copy of the state dict should be enough imo.

@samsja commented May 7, 2024

Looking forward to this PR being merged as well.
