Currently, the only way to create a new image to draw on (using framebuffers or shaders) is to provide a full pixel blob for it. The problem is that all of that data has to be sent to the graphics card, which is very slow, especially for large images (in Ruby we also have to generate a large data string first, which is even slower :P). If we just want an image whose contents can start out undefined, there should be no need to send a big block of data from CPU memory to GPU memory at all.
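To illustrate the cost: the usual workaround for getting a "blank" Gosu::Image relies on Gosu accepting any RMagick-like object that responds to `columns`, `rows` and `to_blob`. A minimal stub (the class name here is hypothetical) avoids depending on RMagick, but it still has to build the whole zeroed blob string in Ruby — exactly the pointless data this issue is about:

```ruby
# Hypothetical stub that duck-types the RMagick interface Gosu::Image
# expects (columns, rows, to_blob). It produces a fully transparent image,
# but note that to_blob still allocates width * height * 4 bytes of zeroes
# that then get copied to the GPU for nothing.
class EmptyImageStub
  attr_reader :columns, :rows

  def initialize(width, height)
    @columns, @rows = width, height
  end

  # One RGBA byte-quad per pixel, all zero (transparent black).
  def to_blob
    "\0" * (columns * rows * 4)
  end
end

stub = EmptyImageStub.new(1024, 1024)
stub.to_blob.bytesize  # 4 MiB of zeroes, built and uploaded for no reason
```

A 1024×1024 image already means a 4 MiB throwaway string; the proposed feature would skip both the string and the upload.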
I could avoid the slow Ruby blob-string creation inside the Ashton extension itself, but I'd still have the problem of sending huge amounts of zeroed data to the graphics card that I don't need to :)
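For reference, OpenGL itself already supports this: `glTexImage2D` accepts a NULL data pointer, which makes the driver allocate texture storage without any CPU→GPU transfer (the pixel contents are simply undefined). A rough sketch, assuming a current GL context and Ruby GL bindings (e.g. the `opengl` gem) in scope — the helper name is made up for illustration:

```ruby
# Sketch only: assumes GL bindings (glGenTextures, glBindTexture,
# glTexImage2D and the GL_* constants) are available in scope, and that
# a GL context is current.
def allocate_blank_texture(width, height)
  tex_id = glGenTextures(1).first
  glBindTexture(GL_TEXTURE_2D, tex_id)
  # Passing nil as the data pointer reserves storage only -- nothing is
  # uploaded, and the texture's initial contents are left undefined.
  glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
               GL_RGBA, GL_UNSIGNED_BYTE, nil)
  tex_id
end
```

So the feature request boils down to exposing this nil-data allocation path instead of always requiring a full blob.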
This isn't a high priority, though, since most image generation happens at startup rather than on a per-frame basis, but it would be nice to be able to convert framebuffers into Gosu::Images on the fly.