Alternative way to concatenate AU conditions with input image #50
This produces many more parameters, especially when the image is high resolution, so it seems suboptimal.
There is a large body of literature on conditional image generation (from noise plus conditioning); it is worth checking out, as you should find useful insights on conditional representations for non-image inputs.
@albertpumarola
In my previous paper I tested a number of methods, and concatenation had the best performance-overhead tradeoff.
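The parameter blow-up from the fully-connected alternative can be made concrete with a quick count. All sizes below are assumptions for illustration (a 128x128 RGB image, N = 17 AUs, and a first layer producing 64 feature maps with 7x7 kernels); they are not taken from the paper:

```python
# Rough parameter-count comparison of the two conditioning schemes.
# Assumed sizes: 128x128 RGB image, N = 17 AUs, 64 output feature maps, 7x7 kernel.
H = W = 128
C_img, N_au, C_out, k = 3, 17, 64, 7

# (a) Channel concatenation: the first conv simply sees 3 + N input channels.
conv_params = (C_img + N_au) * k * k * C_out + C_out  # weights + biases

# (b) Unroll + fully-connected: the image becomes a (H*W*3 + N)-vector, and the
# FC layer must still emit a 64-channel feature map to feed the rest of the net.
fc_in = H * W * C_img + N_au
fc_out = C_out * H * W
fc_params = fc_in * fc_out + fc_out

print(conv_params)  # tens of thousands
print(fc_params)    # tens of billions
```

Under these assumptions the FC variant needs roughly six orders of magnitude more parameters than the conv variant, and the gap grows with image resolution.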
@albertpumarola
Hi, I have a question about a way to incorporate the AU conditions into an input.
Your paper says that the desired AU conditions (expression) are originally an N-length vector of normalized activation values between 0 and 1, and that they are concatenated to the input as additional channels by expanding each value to the same spatial size as the input image.
I think this expansion is done so that the input is compatible with the first convolutional layer.
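For reference, the expand-and-concatenate scheme described above can be sketched in a few lines (a framework-agnostic NumPy sketch, not the paper's actual code; batch size, image size, and N = 17 AUs are illustrative assumptions):

```python
import numpy as np

def concat_au_channels(img, au):
    """Expand each AU value into a constant H x W map and stack it onto the image.

    img: (B, 3, H, W) batch of images; au: (B, N) AU activations in [0, 1].
    Returns a (B, 3 + N, H, W) array ready for the first conv layer.
    """
    B, _, H, W = img.shape
    au_maps = np.broadcast_to(au[:, :, None, None], (B, au.shape[1], H, W))
    return np.concatenate([img, au_maps], axis=1)

x = concat_au_channels(np.random.rand(2, 3, 128, 128), np.random.rand(2, 17))
print(x.shape)  # (2, 20, 128, 128)
```

Each extra channel is spatially constant, so it adds only k*k*C_out weights to the first convolution rather than scaling with the image resolution.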
I wonder what the most reasonable way is to construct an input from non-image-like (scalar or vector) conditions.
A possible alternative is to concatenate the AU conditions, as a vector, with the image unrolled into a vector, and to replace the first convolutional layer with a fully-connected layer.
What do you think of it?
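For concreteness, the unroll-plus-FC alternative being asked about could look like the sketch below (hypothetical illustration with assumed sizes; the FC layer is given random weights here, and the usage example uses a tiny 8x8 image because at 128x128 the weight matrix would already hold on the order of 5e10 entries):

```python
import numpy as np

def fc_conditioning(img, au, n_maps=64, seed=0):
    """Unroll the image, append the AU vector, and apply one fully-connected
    layer that emits an n_maps-channel feature map (replacing the first conv)."""
    rng = np.random.default_rng(seed)
    B, C, H, W = img.shape
    x = np.concatenate([img.reshape(B, -1), au], axis=1)     # (B, H*W*C + N)
    weight = rng.standard_normal((x.shape[1], n_maps * H * W)) * 0.01
    out = x @ weight                                          # (B, n_maps*H*W)
    return out.reshape(B, n_maps, H, W)

# Tiny sizes to keep the weight matrix allocatable.
x = fc_conditioning(np.random.rand(2, 3, 8, 8), np.random.rand(2, 17))
print(x.shape)  # (2, 64, 8, 8)
```

Note that the weight matrix scales with (H*W)^2, which is the parameter blow-up raised in the comments above; it also discards the translation equivariance that a convolution provides.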