Progressively Enhance to WebGL When Available #68

natelawrence opened this Issue Apr 16, 2013 · 6 comments


@natelawrence

Without Deep Zoom Collections support this is less necessary, but if collections are implemented in the future, I'd love to see the original Seadragon ability to arbitrarily filter, reorganize, and spatially arrange a Deep Zoom Collection in hardware-accelerated 3D space.

Until this happens, you're not real Seadragon... and yes, I'd also say this to Silverlight Deep Zoom's face.

Basically, I'm looking for you to get to the point where it's trivial to add point cloud rendering, hook into the Photosynth web service, and implement a few transition options and camera coordinates to match the Photosynth viewer JavaScript API, so that a WebGL Photosynth viewer could rival the original Direct3D Photosynth viewer.

Also, given that the original Seadragon application was written in OpenGL, this would be a fitting tribute.

Bonus points if you get the original text-browsing demos to work (with a DZI for thumbnails and live queryable/copyable/pastable HTML text) and give people a simple set of tools for apps that render to a Seadragon viewer, like the Mandelbrot set browser in the original Seadragon demos.

@iangilman
openseadragon member

That's a tall order, but a worthy goal! I agree that adding a WebGL drawer would be the first step toward a lot of this (and probably a good performance improvement on browsers that support it, as well). The web has come a long way since the early Seadragon demos; I have no doubt we can match them now in JavaScript, provided we put our minds to it.

Some context, for people who haven't seen the early demos: http://www.ted.com/talks/blaise_aguera_y_arcas_demos_photosynth.html

@thatcher
openseadragon member

Wow, good background TED talk. One comment I have about DZI collections: I generally feel that 'mixed tile source' collections, unlike precompiled DZI collections, get us much closer to presenting large, dynamic sets with OpenSeadragon.

All that aside, the vision of information architecture being transformed, or significantly enhanced through broad zoom interfaces like this is something I fully believe in.

Thanks for the challenge!

@iangilman
openseadragon member

I agree that precompiled DZI collections are only part of the story, but they are a great optimization when possible. Ideally we can support both scenarios.

@petersibley

Implementing a full Photosynth-style viewer is pretty far off from where OpenSeadragon is today. Adding proper 3D support would make the codebase much more complicated, as @iangilman can attest.

You'd need to plumb in 3D support for a perspective projection, or use something like Three.js, which provides a scene graph and camera library.
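To make that concrete, here's a minimal sketch of the perspective-projection math such plumbing would involve, hand-rolled rather than via Three.js. The function names are hypothetical, and the matrix is the standard OpenGL-style one (similar to what Three.js's PerspectiveCamera builds internally):

```javascript
// Build an OpenGL-style perspective projection matrix (column-major).
function perspective(fovYDeg, aspect, near, far) {
  const f = 1 / Math.tan((fovYDeg * Math.PI / 180) / 2);
  return [
    f / aspect, 0, 0, 0,
    0, f, 0, 0,
    0, 0, (far + near) / (near - far), -1,
    0, 0, (2 * far * near) / (near - far), 0,
  ];
}

// Project a camera-space point to normalized device coordinates
// by multiplying through the matrix and dividing by w.
function project(m, [x, y, z]) {
  const cx = m[0] * x + m[4] * y + m[8] * z + m[12];
  const cy = m[1] * x + m[5] * y + m[9] * z + m[13];
  const cw = m[3] * x + m[7] * y + m[11] * z + m[15];
  return [cx / cw, cy / cw];
}

const m = perspective(60, 16 / 9, 0.1, 100);
// A point straight ahead of the camera lands at the NDC origin.
console.log(project(m, [0, 0, -10])); // → [ 0, 0 ]
```

The same projection (applied per tile corner) is what makes the LOD question below harder than in the 2D case.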

You'd also have to use a more sophisticated LOD (level-of-detail) determination system, one that knows about both the camera's projection parameters and frustum and about the projective texturing. There are a few possible approaches:

(1) Use a mega-texturing-like system. That's a pretty radical departure from the current scheme, but it has the nice property that collections become easier. There's a good summary of the approach in the virtual texturing articles in Engel's GPU Pro.

(2) Transform the camera's frustum into tile space, then do scanline conversion of a slightly dilated version of the frustum's outline. This works reasonably well except where the image plane is tilted sharply away from the viewer. This is the approach we took when we built the Photosynth/Streetside Silverlight 3 viewer.
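A heavily simplified sketch of that idea, assuming the frustum's footprint on the image plane has already been computed as a polygon in level-pixel coordinates (a real implementation would scanline-convert the polygon; a dilated bounding box stands in here, and all names are hypothetical):

```javascript
// Enumerate the tiles of one pyramid level touched by a frustum
// footprint polygon, after dilating it by a small pixel margin.
function tilesForFootprint(polygon, tileSize, levelWidth, levelHeight, dilate = 8) {
  let minX = Infinity, minY = Infinity, maxX = -Infinity, maxY = -Infinity;
  for (const [x, y] of polygon) {
    minX = Math.min(minX, x); maxX = Math.max(maxX, x);
    minY = Math.min(minY, y); maxY = Math.max(maxY, y);
  }
  // Dilate the footprint, then clamp to the level's bounds.
  minX = Math.max(0, minX - dilate);
  minY = Math.max(0, minY - dilate);
  maxX = Math.min(levelWidth - 1, maxX + dilate);
  maxY = Math.min(levelHeight - 1, maxY + dilate);
  const tiles = [];
  for (let ty = Math.floor(minY / tileSize); ty <= Math.floor(maxY / tileSize); ty++)
    for (let tx = Math.floor(minX / tileSize); tx <= Math.floor(maxX / tileSize); tx++)
      tiles.push([tx, ty]);
  return tiles;
}

// A skewed footprint near the top-left of a 1024×1024 level of 256px tiles:
console.log(tilesForFootprint([[10, 10], [500, 40], [300, 400]], 256, 1024, 1024));
// → [ [ 0, 0 ], [ 1, 0 ], [ 0, 1 ], [ 1, 1 ] ]
```

The bounding-box shortcut over-fetches exactly in the tilted case the comment mentions, which is why the real viewer rasterized the polygon outline instead.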

(3) Do a multi-resolution version of the scanline approach: keep a quadtree and repeatedly subdivide in tile space until the screen-space to texel-space ratio is 1:1. This works pretty well; it's the technique I used in the WebGL panorama viewer that's sometimes featured on bing.com. It has the nice property that you can also implement a renderer based on image tags and CSS 3D transforms, whereas a mega-texture-style algorithm requires the ability to run a shader and read out mipmap levels.
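The quadtree subdivision in approach (3) can be sketched as follows. This is a hypothetical illustration, not OpenSeadragon code: nodes are regions in unit image coordinates, `screenWidthOf` is an assumed callback returning a node's projected on-screen width (where the perspective distortion would enter), and only widths are compared for brevity:

```javascript
// Recursively subdivide in tile space, stopping once the node's
// texel count at the current level covers its on-screen extent
// (i.e. the screen-space : texel-space ratio reaches ~1:1).
function selectTiles(node, level, maxLevel, imageWidth, screenWidthOf, out = []) {
  // Texels across this node at the current pyramid level.
  const texels = imageWidth / Math.pow(2, maxLevel - level) * node.w;
  const pixels = screenWidthOf(node);
  if (pixels <= texels || level === maxLevel) {
    out.push({ ...node, level });
    return out;
  }
  // Otherwise split into four children and recurse one level deeper.
  const { x, y, w, h } = node;
  for (const [cx, cy] of [[0, 0], [1, 0], [0, 1], [1, 1]])
    selectTiles({ x: x + cx * w / 2, y: y + cy * h / 2, w: w / 2, h: h / 2 },
                level + 1, maxLevel, imageWidth, screenWidthOf, out);
  return out;
}

// Uniform zoom: a 4096px image spans 1000 screen pixels. Each split
// halves a node's screen width while its texel count stays at 256,
// so subdivision stops at level 2 (250 px ≤ 256 texels): 16 tiles.
const tiles = selectTiles({ x: 0, y: 0, w: 1, h: 1 }, 0, 4, 4096,
                          n => 1000 * n.w);
console.log(tiles.length); // → 16
```

With a uniform zoom this collapses to picking a single level, as above; under a perspective projection `screenWidthOf` varies across the image, so near regions subdivide deeper than far ones, which is the point of the technique.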

@iangilman
openseadragon member

Good notes! And yes, @petersibley knows what he's talking about, from several years on Photosynth and Bing Maps.

I do think even getting a 2D WebGL renderer together would be a good step, both for performance and as a stepping stone to these more advanced scenarios. It would also allow us to do 3D animated transitions for collections, even if we're not doing the full LOD calculation in 3D.

By the way, it looks like IE11 will have WebGL: http://www.theverge.com/2013/5/22/4355942/internet-explorer-11-webgl-support-teased-on-vine ... now we just need it to come to mobile!

@acdha

@iangilman: +1. 2D would be a good win now, and it would be a good stepping stone towards the really interesting 3D parts. Since Google Maps seems to be pushing aggressively towards WebGL, I assume browser support has matured as well.
