Progressively Enhance to WebGL When Available #68

Closed

natelawrence opened this issue Apr 16, 2013 · 11 comments

@natelawrence
Without Deep Zoom Collections support, this is less necessary, but if they are implemented in the future, I'd love to see the original Seadragon ability to arbitrarily filter, reorganize, and spatially arrange a Deep Zoom Collection in hardware-accelerated 3D space.

Until this happens, you're not real Seadragon... and yes, I'd also say this to Silverlight Deep Zoom's face.

Basically, I'm looking for you to get to the point where it is trivial to add point cloud rendering, hook into the Photosynth web service, and implement a few transition options and camera coordinates to match the Photosynth viewer JavaScript API, so that you end up with a WebGL Photosynth viewer that rivals the original Direct3D Photosynth viewer.

Also, given that the original Seadragon application was written in OpenGL, this would be a fitting tribute.

Bonus points if you get the original text browsing demos to work (with a DZI for thumbnails and live queryable/copyable/pastable HTML text) and give people a simple set of tools for apps that render to a Seadragon viewer, like the Mandelbrot set browser in the original Seadragon demos.

@iangilman
Member

That's a tall order, but a worthy goal! I agree that adding a WebGL drawer would be the first step to a lot of this (and probably a good performance improvement on browsers that support it, as well). The web has come a long way since the early Seadragon demos; I have no doubt we can match them in JavaScript now, provided we put our minds to it.

Some context, for people who haven't seen the early demos: http://www.ted.com/talks/blaise_aguera_y_arcas_demos_photosynth.html

@thatcher
Member

Wow, good background TED talk. One comment I have about DZI collections: I generally feel that "mixed tile source" collections, unlike precompiled DZI collections, get us much closer to presenting large, dynamic sets with OpenSeadragon.

All that aside, the vision of information architecture being transformed, or significantly enhanced, through broad zoom interfaces like this is something I fully believe in.

Thanks for the challenge!

@iangilman
Member

I agree that precompiled DZI collections are only part of the story, but they are a great optimization when possible. Ideally we can support both scenarios.

@petersibley

Implementing a full Photosynth-style viewer is pretty far from where OpenSeadragon is today. Adding proper 3D support would make the codebase much more complicated, as @iangilman can attest.

You'd need to plumb in 3D support for a perspective projection, or use something like Three.js, which provides a scene graph and camera library.

You'd also have to use a more sophisticated LOD determination system, one that knows about both the camera's projection parameters and frustum and the projective texturing. There are a few possible approaches:

(1) One approach would be to use a mega-texturing-like system. That's a pretty radical departure from the current scheme, but it has the nice property that collections would be easier. There's a nice summary of that approach in a few of the articles about virtual texturing in Engel's GPU Pro.

(2) You could also transform the camera's frustum into tile space, then do scanline conversion of a slightly dilated version of the frustum's outline. This works reasonably well except where the plane is tilted steeply away from the viewer. This is the approach we took when we built the Photosynth/Streetside Silverlight 3 viewer.

(3) Another approach is the multi-resolution version of the scanline approach, where you keep around a quad tree and repeatedly subdivide in tile space until the screen-space to texel-space ratio is 1:1 (see the sketch after this list). This works pretty well; it's the technique I used in the WebGL panorama viewer that's sometimes featured on bing.com. This approach has the nice property that you can also implement a renderer based on image tags and CSS3 3D transforms, whereas a mega-texture-style algorithm requires the ability to run a shader and read out mipmap levels.

@iangilman
Member

Good notes! And yes, @petersibley knows what he's talking about, from several years on Photosynth and Bing Maps.

I do think even getting a 2D WebGL renderer together would be a good step, both for performance and as a stepping stone to these more advanced scenarios. It would also allow us to do 3D animated transitions for collections, even if we're not doing the full LOD calculation in 3D.

By the way, it looks like IE11 will have WebGL: http://www.theverge.com/2013/5/22/4355942/internet-explorer-11-webgl-support-teased-on-vine ... now we just need it to come to mobile!

@acdha
Contributor

acdha commented May 23, 2013

@iangilman: +1. 2D would be a good win now, and it'd be a good stepping stone towards the really interesting 3D parts. Since Google Maps seems to be pushing aggressively towards WebGL, I'm assuming that support has matured, too.

@iangilman
Member

Possibly relevant to this issue, @karin-toth posted in #1683 (comment) a link to her stab at a WebGL renderer for OpenSeadragon:

https://github.com/karin-toth/openseadragon/tree/kato-webgl

Her comments:

I've been working on a WebGL renderer for quite some time now, but as with many other things, it's more complex than it seems. I got something to work, but taking it from a seems-to-work state to the type of fail-safe production code that is required takes more time than I thought. So I would have loved to contribute, but I just don't have the time to complete this.

I'm of course open to answering questions and otherwise helping in the future if someone wants to continue.

@iangilman
Member

Guess what? This is now fixed, by #2310!

@petersibley

Great to see this closed all these years later. It looks like the refactoring in that PR would make adopting something like WebGPU or other lower-level APIs a lot easier in the future.

@iangilman
Member

@petersibley Absolutely! It's good to have the drawer architecture opened up like this. @pearcetm did a great job!

@iangilman
Member

BTW, as @pearcetm points out, #2310 doesn't implement everything suggested in this issue, but it certainly opens the door for it. If there are any particular items of interest in here that we want to pursue next, let's file new issues for them.
