
numpy like tensor.all and tensor.any #2481

Closed
tomsal opened this issue Aug 18, 2017 · 2 comments

Comments
tomsal commented Aug 18, 2017

Hi all!

As mentioned in issue #2228, it would be nice to have functions close to the numpy API. Currently, logic functions like np.all and np.any are still missing. Of course, workarounds can easily be achieved, but having such functions would improve readability.

A simple implementation could use boolean_tensor.min() != 0 for all and boolean_tensor.max() == 1 for any.
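
A minimal sketch of that workaround, assuming a 0/1 mask tensor produced by a comparison (the tensor name and shape below are made up for illustration):

```python
import torch

# Hypothetical example mask; comparison ops return a 0/1 mask tensor.
mask = torch.randn(4, 5) > 0

all_true = mask.min() != 0   # numpy-style all(): True iff no element is 0
any_true = mask.max() == 1   # numpy-style any(): True iff at least one element is 1
print(all_true, any_true)
```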

vadimkantorov (Contributor) commented:

As a note, any and all exist on ByteTensors, but do not appear in online documentation.
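
For example, a minimal sketch (method availability and return types vary across PyTorch versions):

```python
import torch

mask = torch.randn(4, 5) > 0  # comparison yields a 0/1 mask tensor

print(mask.all())  # counterpart of np.all(mask)
print(mask.any())  # counterpart of np.any(mask)
```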

tomsal (Author) commented Aug 21, 2017

I didn't know. Thanks!

tomsal closed this as completed Aug 21, 2017
mlaradji added a commit to mlaradji/LCFCN that referenced this issue Dec 26, 2018:

This is a proposed change from `np.all()` to `torch.tensor.all()` in `datasets/trancos.py`. I don't know if it is necessary, but it seems like it shouldn't break anything, and it does get rid of the following error for me:
```
>> python main.py -m train -e trancos -r
Model: ResFCN - Dataset: trancos - Metric: MAE
Starting from scratch...
Training Epoch 1 .... 403 batches
Traceback (most recent call last):
  File "main.py", line 45, in <module>
    main()
  File "main.py", line 36, in main
    train.train(dataset_name, model_name, metric_name, path_history, path_model, path_opt, path_best_model, args.reset)
  File "/home/mlaradji/projects/ElementAI/LCFCN/train.py", line 76, in train
    epoch=epoch)
  File "/home/mlaradji/projects/ElementAI/LCFCN/utils.py", line 28, in fit
    for i, batch in enumerate(dataloader):
  File "/home/mlaradji/.conda/envs/LCFCN/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 615, in __next__
    batch = self.collate_fn([self.dataset[i] for i in indices])
  File "/home/mlaradji/.conda/envs/LCFCN/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 615, in <listcomp>
    batch = self.collate_fn([self.dataset[i] for i in indices])
  File "/home/mlaradji/projects/ElementAI/LCFCN/datasets/trancos.py", line 63, in __getitem__
    if np.all(points == -1):
  File "/home/mlaradji/.local/lib/python3.6/site-packages/numpy/core/fromnumeric.py", line 2089, in all
    return _wrapreduction(a, np.logical_and, 'all', axis, None, out, keepdims=keepdims)
  File "/home/mlaradji/.local/lib/python3.6/site-packages/numpy/core/fromnumeric.py", line 81, in _wrapreduction
    return reduction(axis=axis, out=out, **passkwargs)
TypeError: all() missing 1 required positional arguments: "dim"
```

According to pytorch/pytorch#2481, `torch.tensor.all` exists but is apparently not well documented.
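
A minimal sketch of the kind of change proposed, assuming `points` is a torch tensor at that point in `datasets/trancos.py` (the stand-in tensor below is hypothetical, not copied from the repository):

```python
import torch

points = torch.full((10, 2), -1.0)  # hypothetical stand-in for the dataset's points

# Original call; with a torch tensor this reaches numpy's _wrapreduction and
# fails with the TypeError shown in the traceback above:
#   if np.all(points == -1): ...

# Proposed replacement: use the tensor's own reduction instead.
if (points == -1).all():
    print("no annotated points in this sample")
```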