Increase sci-wms performance to that of ncwms #96
Your sci-wms endpoint is hitting a DAP server, and ncWMS would be hitting the files directly...
Is sci-wms.whoi.edu the same machine as geoport-dev.whoi.edu? Is that basically network access vs. local access?
Yes, sci-wms.whoi.edu is the same machine as geoport-dev.whoi.edu. I think Kyle is right: OPeNDAP is simply much slower than a local file read, even when the OPeNDAP access is local. See cells [6], [7] and [8] here: I added a test for CDMRemote, and it turns out it's quite a bit faster than OPeNDAP. So we could add a test to see if CDMRemote is enabled, and if so, use it; if not, fall back to OPeNDAP. It's the exact same syntax; on a THREDDS server the URL just uses the `cdmremote` service path instead of the OPeNDAP (`dodsC`) one.
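A minimal sketch of that fallback, assuming a standard THREDDS layout where the same dataset is exposed under both `/thredds/cdmremote/` and `/thredds/dodsC/`; the probe (`req=header`) follows the CDM Remote protocol, but the exact check may need adjusting for a given server:

```python
import requests

def pick_access_url(opendap_url):
    """Prefer CDMRemote when the endpoint answers; otherwise use OPeNDAP."""
    cdmremote_url = opendap_url.replace("/thredds/dodsC/", "/thredds/cdmremote/")
    try:
        r = requests.get(cdmremote_url, params={"req": "header"}, timeout=5)
        if r.status_code == 200:
            return cdmremote_url  # CDMRemote is enabled: prefer it
    except requests.RequestException:
        pass
    return opendap_url  # fall back to OPeNDAP
```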
But the sad news is that sci-wms will never be as fast as ncWMS, I guess.
We don't cache anything except a netCDF file on disk with the coordinate information. We could instead try keeping the object in memory using the newish netCDF in-memory dataset functionality (I haven't played with that). But yes, OPeNDAP will always be slower than disk access when requesting the actual data slice.
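A minimal sketch of that in-memory idea (not what sci-wms does today): open the cached coordinate file once with `diskless=True`, so the netCDF library holds it in memory, and reuse the open dataset across requests. The cache path is hypothetical.

```python
import netCDF4

_coord_cache = {}

def get_coord_dataset(cache_path):
    """Return an in-memory Dataset for the cached coordinate file."""
    ds = _coord_cache.get(cache_path)
    if ds is None:
        # diskless read mode loads the whole file into memory on open
        ds = netCDF4.Dataset(cache_path, mode="r", diskless=True)
        _coord_cache[cache_path] = ds
    return ds
```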
This is a major bummer. Nobody will use sci-wms for structured grids (even if we have support for SGRID) if it's significantly slower than ncWMS. We will have support for velocity vectors, but staggered-grid people would have to choose between ncWMS's speed and sci-wms's SGRID and vector support.
The best thing, I guess, would be to push for ncWMS to support SGRID conventions... unless we can figure out how to speed up plotting scalar fields in sci-wms.
The impact of this slowness is really felt when clients request tiles. The map below from the sci-wms demo client took 30 seconds to finish drawing, issuing 12 different GetMap requests. The same map from ncWMS takes about 5 seconds. I did notice that while most of these GetMap requests look okay, two look suspicious, with a value on the order of 1e-10 specified for y_min in the bounding box:
That's a tile off the bottom of the map that just happens to be close to the equator. |
So should we be requesting tiles that are off the bottom of the map? |
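If off-map tile requests are unavoidable on the client side, one option (a sketch, not sci-wms's current behavior) is to clamp or reject them server-side before any data is read:

```python
WORLD = (-180.0, -90.0, 180.0, 90.0)  # lon_min, lat_min, lon_max, lat_max

def clamp_bbox(bbox, extent=WORLD):
    """Clamp a GetMap bbox to the valid extent; None means 'off the map'."""
    x0, y0, x1, y1 = bbox
    ex0, ey0, ex1, ey1 = extent
    if x1 <= ex0 or x0 >= ex1 or y1 <= ey0 or y0 >= ey1:
        return None  # completely outside: return a blank tile instead of reading data
    return (max(x0, ex0), max(y0, ey0), min(x1, ex1), min(y1, ey1))
```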
As far as I know, there are only three open-source projects that have implemented WMS for netCDF files. ncWMS2 works well up to curvilinear grids, but the contour style is, to put it gently, rather unusual. With the ADAGUC server, I have seen a test on a generic grid, and it can also handle curvilinear grids properly (see https://dev.knmi.nl/projects/adagucserver/wiki/DataFormats). The main challenge, of course, is to have a WMS that is highly responsive with both OPeNDAP access and local netCDF files. But I would be most interested in a very efficient local server that can serve WMS from any computation on netCDF files done from an IPython notebook. See this capture from an IPython notebook using folium (a Python wrapper for Leaflet): https://gist.github.com/PBrockmann/5874c88bb61ac2b7af50 I hope this possible perspective interests you.
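For reference, a minimal sketch of that notebook workflow with modern folium; the endpoint URL and layer name are placeholders, to be replaced with a real sci-wms or ncWMS endpoint:

```python
import folium

m = folium.Map(location=[41.5, -70.5], zoom_start=6)
folium.WmsTileLayer(
    url="http://sci-wms.example.org/wms/dataset",  # hypothetical endpoint
    layers="temp",                                 # hypothetical layer name
    fmt="image/png",
    transparent=True,
    name="sea surface temperature",
).add_to(m)
folium.LayerControl().add_to(m)
m.save("wms_map.html")  # or just display `m` inline in a notebook
```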
@PBrockmann, thanks for the vote of confidence!
I understand the need to get things working first and then worry about performance.
But very few people will use sci-wms for regular/curvilinear grids if it is much slower than ncwms.
For the test here:
https://gist.github.com/rsignell-usgs/c2d112d050c42914f9b7
the GetMap request takes about 5 times longer with sci-wms than with ncWMS.
Test 1: sci-wms request
Test 2: ncwms request
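A rough way to reproduce that comparison; the endpoints, layer names, and bounding box below are placeholders, and the actual requests are in the gist linked above:

```python
import time
import requests

def time_getmap(url, layers):
    """Time a single WMS GetMap request against a given endpoint."""
    params = {
        "service": "WMS", "version": "1.1.1", "request": "GetMap",
        "layers": layers, "styles": "",
        "bbox": "-77,34,-63,46", "width": "512", "height": "512",
        "srs": "EPSG:4326", "format": "image/png",
    }
    start = time.time()
    r = requests.get(url, params=params, timeout=60)
    r.raise_for_status()
    return time.time() - start

print("sci-wms:", time_getmap("http://sci-wms.example.org/wms/dataset", "temp"))
print("ncWMS:  ", time_getmap("http://ncwms.example.org/ncWMS/wms", "dataset/temp"))
```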