
Allow dict batches in dataloader. #1354

Merged: 4 commits into pytorch:master on Apr 28, 2017

Conversation

chsasank (Contributor)

This is a duplicate of #1131 (and #1350). In the previous PR, one test failed. However, I built PyTorch from source and ran the tests myself; I couldn't reproduce the failing test, and all tests pass.

Also, I have made a small change: use collections.Sequence instead of collections.Iterable. This is because a set is an iterable but not a sequence; batching a set is ambiguous, so it shouldn't be batched like a list.
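For illustration, a minimal sketch of the distinction (using the collections.abc names that replaced the plain collections aliases, which were deprecated and removed in Python 3.10):

    from collections.abc import Iterable, Sequence

    # A set is iterable, but its elements have no defined order,
    # so it does not count as a Sequence.
    print(isinstance({1, 2, 3}, Iterable))   # True
    print(isinstance({1, 2, 3}, Sequence))   # False
    print(isinstance([1, 2, 3], Sequence))   # True -- safe to batch like a list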

The relevant branch of default_collate after the change:

    elif isinstance(batch[0], collections.Mapping):
        return {key: default_collate([d[key] for d in batch]) for key in batch[0]}
    elif isinstance(batch[0], collections.Sequence):
        # if each batch element is not a tensor, then it should be a sequence
        # of tensors; in that case we collate each element in the sequence
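As a usage sketch (not part of the PR's diff; the dataset class and field names are hypothetical, and the snippet uses current PyTorch APIs), a dataset whose __getitem__ returns a dict can be batched directly:

    import torch
    from torch.utils.data import Dataset, DataLoader

    class DictDataset(Dataset):
        """Hypothetical dataset whose samples are dicts of tensors."""
        def __len__(self):
            return 8

        def __getitem__(self, idx):
            return {'image': torch.randn(3, 4), 'label': torch.tensor(idx % 2)}

    loader = DataLoader(DictDataset(), batch_size=4)
    for batch in loader:
        # default_collate recurses into the dict and stacks each field
        print(batch['image'].shape)  # torch.Size([4, 3, 4])
        print(batch['label'].shape)  # torch.Size([4])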

@soumith merged commit 94b147f into pytorch:master on Apr 28, 2017
Jiaming-Liu pushed a commit to Jiaming-Liu/pytorch that referenced this pull request May 18, 2017
* Allow dicts in Dataloader

* use collections.Sequence instead of collections.Iterable in dataloader
eqy pushed a commit to eqy/pytorch that referenced this pull request Jan 20, 2022
* Refactor War Sync Insertion Pass (pytorch#1339)
* Remove kir::Expr::scope_ (pytorch#1341)
* Fusion IR Refactor (pytorch#1343)
* Refactor KIR Step 1 - Remove kir::Node (pytorch#1347)
* Refactor KIR Step 2 - TMP IrUtils change (pytorch#1348)
* Refactor KIR Step 3 - Remove kir::Expr and kir::Val. (pytorch#1349)
* Refactor KIR Step 4 - Remove kir::Bool,Double,Int,NamedScalar. (pytorch#1350)
* Refactor KIR Step 5 - Remove kir::IterDomain/TensorDomain/TensorView (pytorch#1351)
* Refactor KIR Step 6 - Remove kir::UnaryOp/BinaryOp/TernaryOp/ReductionOp/WelfordOp/BroadcastOp. (pytorch#1352)
* Refactor KIR Step 7 - Remove kir dispatch (pytorch#1353)
* Refactor KIR Step 8 - Clean up lower_utils (pytorch#1355)
* Refactor KIR Step 9 - lower_utils ir_utils::applyReplacements. (pytorch#1354)
* Refactor KIR Step 10 - Remove kir_printer in favor of io_stream (pytorch#1356)