The work by this artist (Yung Jake) is what inspired this whole thing.

![](https://user-images.githubusercontent.com/1677179/123151778-d2335e80-d431-11eb-9f0f-1882e3e115aa.jpeg)

Recently, I came across this image and started thinking about this again. I know this is something I tried and failed to do a long time ago, but it has just got to be possible, right?
Say we want to find the best locations/orientations of K different emojis from a library of N emojis. Let our parameter vector x have dimensionality 4K: for each emoji we specify its location (x, y), its orientation, and its identity (an index in 1..N). We can then render an image from that vector and define its fitness by its difference from the target image (e.g., in pixel space, or potentially something fancier like the mid-layer activations of an ImageNet-trained network). Given this setup, we can just use something like CMA-ES to optimize x.
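To make the encoding concrete, here's a minimal sketch of the decode-and-score step. Everything here (the names `decode`/`fitness`, the [0, 1] normalization, the specific K, N, and canvas size) is my own assumption, not anything fixed by the idea itself:

```python
import numpy as np

K, N = 8, 100          # number of placed emojis, library size (arbitrary here)
H, W = 64, 64          # canvas size in pixels

def decode(x):
    """Split a flat 4K-vector into per-emoji (cx, cy, angle, emoji_id).

    All entries of x are assumed to live in [0, 1] and get mapped to
    pixel coordinates, radians, and an integer library index.
    """
    params = np.asarray(x).reshape(K, 4)
    cx = params[:, 0] * W
    cy = params[:, 1] * H
    angle = params[:, 2] * 2 * np.pi
    emoji_id = (np.clip(params[:, 3], 0, 1) * (N - 1)).astype(int)
    return cx, cy, angle, emoji_id

def fitness(x, render, target):
    """Lower is better: mean squared pixel difference to the target image."""
    img = render(*decode(x))
    return float(np.mean((img - target) ** 2))
```

Treating the discrete identity as a continuous coordinate that gets rounded is a hack; CMA-ES has no native handling for categorical dimensions, so that slot may need special treatment in practice.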
One tricky point is that emojis can overlap. So x needs to be ordered: the first emoji encoded in x is top-most, the next one sits under it, and so on, with the last 4 dims of x describing the bottom-most emoji.
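The ordering just means the renderer has to composite back-to-front. A tiny sketch, using flat 2D arrays as stand-ins for actual emoji sprites (the `render`/`placements`/`sprites` names and the no-alpha, no-clipping compositing are all simplifying assumptions):

```python
import numpy as np

def render(canvas_shape, placements, sprites):
    """Composite sprites so that placements[0] ends up on top.

    placements: list of (row, col, sprite_id); sprites: dict of small 2D arrays.
    We paint back-to-front: the LAST entry is drawn first and the first entry
    last, so earlier entries in the vector overwrite (sit on top of) later ones.
    """
    canvas = np.zeros(canvas_shape)
    for row, col, sid in reversed(placements):
        s = sprites[sid]
        h, w = s.shape
        canvas[row:row + h, col:col + w] = s  # naive overwrite; no alpha blending
    return canvas
```

A real version would rotate each sprite and alpha-blend RGBA pixels instead of overwriting, but the reversed-iteration trick is the whole point here.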
At first this ordering seemed really helpful optimization-wise, because it suggests we could optimize x by coordinate descent: optimize the first emoji in isolation, then add a second, and so on. But this might not actually work. Say you place emoji A somewhere, and later place emoji B halfway on top of it, so that only the top half of A remains visible. Then A might not have been the best choice had you known only its top half would show. So x probably needs to be optimized jointly.
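Joint optimization could be a one-liner with the `cma` package, but to keep this sketch self-contained here is a bare (1+λ) evolution strategy standing in for CMA-ES. It mutates the entire 4K vector at once, which is exactly what lets it trade off placements that interact through occlusion (the function name, hyperparameters, and [0, 1] clipping are my assumptions):

```python
import numpy as np

def evolve(objective, dim, iters=200, lam=16, sigma=0.1, seed=0):
    """Minimal (1 + lambda) ES over the full parameter vector.

    Unlike coordinate descent, every emoji's parameters are perturbed
    together in each candidate, so earlier placements can be revised in
    light of what later ended up on top of them.
    """
    rng = np.random.default_rng(seed)
    best = rng.random(dim)
    best_f = objective(best)
    for _ in range(iters):
        cands = best + sigma * rng.standard_normal((lam, dim))
        fs = np.array([objective(np.clip(c, 0, 1)) for c in cands])
        i = int(np.argmin(fs))
        if fs[i] < best_f:
            best, best_f = np.clip(cands[i], 0, 1), fs[i]
    return best, best_f
```

Real CMA-ES additionally adapts the full covariance of the mutation distribution, which should matter here since position, angle, and identity coordinates are on very different scales.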
Other references:
Yung Jake apparently used emoji.ink. An algorithmic approach could mirror that interface: pick an emoji, then search for the best places to put it.
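That pick-then-place loop could look something like the greedy step below: for a chosen sprite, exhaustively scan positions and keep the one that most reduces error against the target (a brute-force sketch with made-up names; a real version would also search over rotation and use a coarse-to-fine scan rather than every pixel):

```python
import numpy as np

def best_spot(canvas, sprite, target):
    """Greedy step: try stamping `sprite` at every position on `canvas` and
    return the position minimizing squared error to `target`."""
    H, W = canvas.shape
    h, w = sprite.shape
    best_err, best_pos = np.inf, None
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            trial = canvas.copy()
            trial[r:r + h, c:c + w] = sprite
            err = float(np.sum((trial - target) ** 2))
            if err < best_err:
                best_err, best_pos = err, (r, c)
    return best_pos, best_err
```

This is exactly the coordinate-descent flavor questioned above, so it would inherit the occlusion problem, but it might make a decent initialization for the joint optimizer.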
Also, see emoji-mosaic. I believe this is basically the same as my approach, except that instead of a regular grid it samples random points, which it turns out makes things look a lot more complex! Still, I think we could do better with some actual optimization.