I'd especially love to see some more explanations on how input data has to be shaped (what the dimensions are), how to reach a certain output shape and how to interpret all the numbers.
Furthermore, I'd love to see how to actually set up a full training loop for this model.
Some words about my use case: my input data is a point cloud consisting of around 450 3D points, and my output/target is a set of around 100 (might be more, might be less) "aspects", i.e. floating point values between -1 and 1. Certain aspects modify certain points of the point cloud in specific ways, so I can artificially generate point clouds from these aspects.
What I want a model (doesn't have to be Perceiver IO, but I thought this might work very well) to learn is the other direction, from the point cloud back to the aspects that generated it.
I'd like to see how you would tackle this problem with the PerceiverIO model and especially with the PyTorch implementation of it.
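To make the shape question concrete, here is a minimal sketch of what I have in mind. The model below is just a placeholder MLP, not Perceiver IO; the point is the tensor shapes (`(batch, 450, 3)` in, `(batch, 100)` out, bounded to [-1, 1] via `tanh`) and the basic training loop. All layer sizes and the synthetic data are my own assumptions for illustration:

```python
import torch
import torch.nn as nn

# Hypothetical dimensions taken from the use case described above.
NUM_POINTS, POINT_DIM, NUM_ASPECTS = 450, 3, 100

# Placeholder model: any module mapping (batch, 450, 3) -> (batch, 100)
# would slot into this loop, including a Perceiver IO implementation.
model = nn.Sequential(
    nn.Flatten(),                                 # (B, 450, 3) -> (B, 1350)
    nn.Linear(NUM_POINTS * POINT_DIM, 256),
    nn.ReLU(),
    nn.Linear(256, NUM_ASPECTS),
    nn.Tanh(),                                    # bound outputs to [-1, 1]
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Synthetic batch standing in for artificially generated (cloud, aspects) pairs.
points = torch.randn(8, NUM_POINTS, POINT_DIM)    # input point clouds
aspects = torch.rand(8, NUM_ASPECTS) * 2 - 1      # targets in [-1, 1]

for step in range(5):                             # a few steps of a basic loop
    optimizer.zero_grad()
    pred = model(points)                          # shape: (8, 100)
    loss = loss_fn(pred, aspects)
    loss.backward()
    optimizer.step()
```

What I'm unsure about is how the Perceiver IO output query has to be shaped so the decoder produces exactly this `(batch, 100)` target, which is why I'd appreciate an explanation of the dimensions.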