
Conversation

@riiswa
Contributor

@riiswa riiswa commented Jan 1, 2023

Description

The goal of this PR is to add MultiDiscreteTensorSpec for n categorical actions (#781)

Example:

>>> ts = MultiDiscreteTensorSpec((3,2,3))
>>> ts.is_in(torch.tensor([2, 0, 1]))
True
>>> ts.is_in(torch.tensor([2, 2, 1]))
False
>>> ts.rand()
tensor([0, 1, 2])
>>> ts.rand(torch.Size((2, 2)))
tensor([[[0, 0, 1],
         [1, 0, 2]],

        [[2, 1, 1],
         [2, 1, 1]]])

This PR does not yet support an nvec with several axes.

TODO (ready for review):

  • Implement MultiDiscreteTensorSpec
  • Support for Gym spaces (see the sketch after this list)
  • Write tests
  • Add to_onehot() and to_categorical()
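
For the Gym-spaces item above, here is a rough, hedged sketch of the intended mapping (the actual hook lives in torchrl/envs/libs/gym.py and may differ; this only illustrates how a Gym MultiDiscrete's nvec could feed the new spec):

import numpy as np
import torch
from gym.spaces import MultiDiscrete
from torchrl.data import MultiDiscreteTensorSpec

# Sketch only: read the per-dimension category counts from the Gym space
# and build the new spec from them.
gym_space = MultiDiscrete(np.array([3, 2, 3]))
spec = MultiDiscreteTensorSpec(tuple(int(n) for n in gym_space.nvec))
assert spec.is_in(torch.tensor([2, 0, 1]))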

Motivation and Context

This will close issue #781.

  • I have raised an issue to propose this change (required for new features and bug fixes)

Types of changes

  • New feature (non-breaking change which adds core functionality)
  • Documentation (update in the documentation)

Checklist

Go over all the following points, and put an x in all the boxes that apply.
If you are unsure about any of these, don't hesitate to ask. We are here to help!

  • I have read the CONTRIBUTION guide (required)
  • My change requires a change to the documentation.
  • I have updated the tests accordingly (required for a bug fix or a new feature).
  • I have updated the documentation accordingly.

@facebook-github-bot facebook-github-bot added the CLA Signed label Jan 1, 2023
@riiswa riiswa force-pushed the feature/multdiscretetensorspec branch from 8f12b5c to 21983b1 on January 1, 2023 at 20:26
@codecov

codecov bot commented Jan 1, 2023

Codecov Report

Merging #783 (1147efc) into main (5b9ff55) will increase coverage by 0.06%.
The diff coverage is 98.85%.

@@            Coverage Diff             @@
##             main     #783      +/-   ##
==========================================
+ Coverage   88.74%   88.81%   +0.06%     
==========================================
  Files         123      123              
  Lines       21170    21256      +86     
==========================================
+ Hits        18787    18878      +91     
+ Misses       2383     2378       -5     
Flag Coverage Δ
habitat-gpu 24.78% <33.33%> (+0.02%) ⬆️
linux-brax 29.37% <33.33%> (+0.01%) ⬆️
linux-cpu 85.29% <98.85%> (+0.06%) ⬆️
linux-gpu 86.27% <98.85%> (+0.07%) ⬆️
linux-jumanji 30.14% <33.33%> (+0.01%) ⬆️
linux-outdeps-gpu 72.37% <98.85%> (+0.11%) ⬆️
linux-stable-cpu 85.15% <98.85%> (+0.06%) ⬆️
linux-stable-gpu 85.91% <98.85%> (+0.05%) ⬆️
linux_examples-gpu 42.66% <33.33%> (-0.03%) ⬇️
macos-cpu 85.06% <98.85%> (+0.07%) ⬆️
olddeps-gpu 76.13% <98.85%> (+0.09%) ⬆️

Flags with carried forward coverage won't be shown.

Impacted Files Coverage Δ
torchrl/data/__init__.py 100.00% <ø> (ø)
torchrl/envs/libs/gym.py 82.14% <0.00%> (ø)
test/test_tensor_spec.py 99.67% <100.00%> (+0.03%) ⬆️
torchrl/data/tensor_specs.py 85.00% <100.00%> (+0.75%) ⬆️
test/test_trainer.py 98.55% <0.00%> (+0.32%) ⬆️
torchrl/envs/vec_env.py 69.42% <0.00%> (+0.49%) ⬆️


@riiswa riiswa force-pushed the feature/multdiscretetensorspec branch 2 times, most recently from 19ebdf5 to 542b474 on January 2, 2023 at 17:35
@riiswa riiswa force-pushed the feature/multdiscretetensorspec branch from 82efab3 to 435d1e2 on January 2, 2023 at 19:21
@riiswa riiswa marked this pull request as ready for review January 2, 2023 19:24
@riiswa riiswa changed the title from [Feature] MultDiscreteTensorSpec to [Feature] MultiDiscreteTensorSpec Jan 2, 2023
@riiswa riiswa force-pushed the feature/multdiscretetensorspec branch from da9eb40 to 1147efc on January 2, 2023 at 23:34
@vmoens vmoens added the enhancement label Jan 3, 2023
Collaborator

@vmoens vmoens left a comment


LGTM
I left some minor comments
Can you elaborate a bit more on the description of the PR?

@matteobettini curious to see if that suits your purpose

]
).squeeze()
_size = [self._size] if self._size > 1 else []
return x.T.reshape([*shape, *_size])
Collaborator

This leads to the following warning if the number of dims of `x` is greater than 2:

<string>:3: UserWarning: The use of `x.T` on tensors of dimension other than 2 to reverse their shape is deprecated and it will throw an error in a future release. Consider `x.mT` to transpose batches of matrices or `x.permute(*torch.arange(x.ndim - 1, -1, -1))` to reverse the dimensions of a tensor. (Triggered internally at /Users/distiller/project/conda/conda-bld/pytorch_1646756029501/work/aten/src/ATen/native/TensorShape.cpp:2318.)
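
For reference, a minimal sketch of the replacement the warning itself suggests, assuming the intent is simply to reverse the dimensions of x:

import torch

x = torch.randint(0, 3, (4, 2, 3))  # ndim > 2, where `x.T` is deprecated
# Reverse all dimensions explicitly instead of relying on `x.T`:
x_rev = x.permute(*torch.arange(x.ndim - 1, -1, -1))
assert x_rev.shape == torch.Size([3, 2, 4])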

).squeeze()

def is_in(self, val: torch.Tensor) -> bool:
    vals = self._split(val)
Collaborator

we should also check the dtype here
(note to myself: we should check that is_in always checks the dtype)
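
A minimal sketch of the kind of guard meant here (the standalone helper name is hypothetical; inside the spec it would compare val.dtype against self.dtype within is_in):

import torch

def has_expected_dtype(val: torch.Tensor, expected: torch.dtype) -> bool:
    # is_in should reject samples whose dtype differs from the spec's dtype,
    # not only samples whose values fall outside the allowed ranges.
    return val.dtype == expected

assert has_expected_dtype(torch.tensor([2, 0, 1]), torch.int64)
assert not has_expected_dtype(torch.tensor([2.0, 0.0, 1.0]), torch.int64)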

@matteobettini
Contributor

matteobettini commented Jan 3, 2023

LGTM I left some minor comments Can you elaborate a bit more on the description of the PR?

@matteobettini curious to see if that suits your purpose

Yep it seems to do what we want.

Maybe one little thing: with `to_categorical()` and `to_one_hot()` I meant something like `to_categorical(tensor)`, so that it can also do the value conversion. For some reason this is currently done inside the `to_numpy()` method. We could think of moving this conversion out of the numpy methods and into methods of their own. PS: I don't think the numpy methods should do the conversions.

This is what currently happens:

>>> spec = torchrl.data.MultOneHotDiscreteTensorSpec([5, 10])
>>> sample = spec.rand((2, 3))
>>> sample.shape
torch.Size([2, 3, 15])
>>> sample_np = spec.to_numpy(sample)
>>> sample_np.shape
(2, 3, 2)

Instead, I think it would be nicer:

>>> spec = torchrl.data.MultOneHotDiscreteTensorSpec([5, 10])
>>> sample = spec.rand((2, 3))
>>> sample.shape
torch.Size([2, 3, 15])
>>> sample_np = spec.to_numpy(sample)
>>> sample_np.shape
(2, 3, 15)

>>> sample_cat = spec.to_categorical(sample)
>>> sample_cat.shape
torch.Size([2, 3, 2])
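
For illustration, a hedged sketch (not the torchrl implementation; the helper name is made up) of what such a to_categorical(tensor) conversion could do for a multi-one-hot sample with nvec = [5, 10]: split the last dimension into one block per sub-space and take the argmax of each block.

import torch

def to_categorical_sketch(sample: torch.Tensor, nvec=(5, 10)) -> torch.Tensor:
    # Split the one-hot dimension (..., sum(nvec)) into one block per
    # sub-space and take the argmax of each block, giving (..., len(nvec)).
    blocks = torch.split(sample, list(nvec), dim=-1)
    return torch.stack([b.argmax(dim=-1) for b in blocks], dim=-1)

sample = torch.zeros(2, 3, 15)
sample[..., 0] = 1   # first block (5 categories)  -> index 0
sample[..., 5] = 1   # second block (10 categories) -> index 0
assert to_categorical_sketch(sample).shape == torch.Size([2, 3, 2])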

@vmoens
Collaborator

vmoens commented Jan 3, 2023

You're right.
This came from the time when everything was one-hot and the default for other libs was categorical. Now that we support both, we should keep them separated.
@riiswa do you want to address that? I can find other people to do it if you'd like.

@vmoens vmoens merged commit e958503 into pytorch:main Jan 3, 2023
@riiswa
Contributor Author

riiswa commented Jan 3, 2023

I will work on it :) and I'm also working on multi-dimensional nvec (I forgot this case):

# Gym Example
>>> d = MultiDiscrete(np.array([[1, 2], [3, 4]]))
>>> d.sample()
array([[0, 0],
       [2, 3]])
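
As a hedged sketch of that case (assumed semantics only, not the final API), sampling with a 2-D nvec could amount to drawing one integer per entry and reshaping back:

import torch

nvec = torch.tensor([[1, 2], [3, 4]])
# Draw one value per entry, each uniform in [0, nvec[i, j]), then restore the shape.
sample = torch.stack(
    [torch.randint(0, int(n), ()) for n in nvec.flatten()]
).reshape(nvec.shape)
assert bool((sample < nvec).all())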
