Image Tokenizers process the image stacked on itself? #90

Open
andrearosasco opened this issue May 9, 2024 · 2 comments

Comments

@andrearosasco
Contributor

As you can see in the picture posted in #42, the task_stack_keys field for the observation tokenizers appears to be the same as obs_stack_keys. This results in the model stacking the image onto itself before processing it. Why is this happening?
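For reference, here is roughly the configuration pattern in question. This is a minimal sketch built from the field names quoted above; the key name `image_primary` and the surrounding structure are illustrative, not copied from the repo:

```python
# Hypothetical excerpt of an observation-tokenizer config.
# Both fields list the same key, which is what prompted this question.
image_tokenizer_kwargs = {
    "obs_stack_keys": ["image_primary"],   # image(s) taken from the current observation
    "task_stack_keys": ["image_primary"],  # image(s) taken from the task specification
}
```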

@kpertsch
Collaborator

kpertsch commented May 9, 2024

It's not the same image: the observation is the current time step's image, while the "task" is the goal image, i.e. the image from a randomly sampled future time step.
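To make the distinction concrete, here is a minimal sketch of the input the encoder ends up seeing. The function name, key name, and shapes are illustrative (this is not the actual Octo implementation); the point is that the two images are drawn from different places in the trajectory:

```python
import numpy as np

def build_encoder_input(observation, task, key="image_primary"):
    obs_img = observation[key]  # current time step's frame, e.g. (256, 256, 3)
    goal_img = task[key]        # goal frame: a randomly sampled future time step
    # Stacking channel-wise gives the encoder the current view and the goal
    # view together, e.g. (256, 256, 6); the two are identical only in the
    # degenerate case where the sampled goal is the current frame.
    return np.concatenate([obs_img, goal_img], axis=-1)
```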

WenchangGaoT pushed a commit to WenchangGaoT/octo1 that referenced this issue May 10, 2024

Add tokenizer for non-spatial observations
@zwbx

zwbx commented May 30, 2024

Hi, I notice that there is image augmentation during training. Will the goal image also be augmented like the observation? If not, the two are not spatially matched.
