Consider post processing for imagery layers to enable analysis #8110
This is actually pretty similar to #6449.
Came up again on the forum:
And here as well to change the imagery's "color table" at runtime:
I would like to suggest an implementation for this feature. I have been experimenting with this, using a separate imagery layer as a water mask for a custom water shader. The general idea is to push imagery layer data to a globe material.

For this purpose, I add a "vec4 layerColor" field to the materialInput struct. This field can then be set in the GlobeFS.glsl sampleAndBlend function based on some condition. In my case, I pass an extra argument to that function which contains the imagery layer index, and only set materialInput.layerColor for a given layer index. The layer index is passed to sampleAndBlend as a new uniform added to the tileProvider uniformMap (createTileUniformMap).

However, it would be more generic to add something like a "pushToMaterial" option to the imageryLayer constructor. All this is necessary because there is no direct way to pass the imageryLayer index between the JavaScript-generated "sampleAndBlend" calls and the runtime execution of the shader. Ideally, any existing imageryLayer whose "pushToMaterial" option is set to true could be sent to the material as an entry in a layerColors[] array.

However, my limited experience with GLSL got me in trouble with the constraints on using arrays and loops dynamically. The only option I found was to loop through all TEXTURE_UNITS at each call of sampleAndBlend and test and set the imagery layer color in a materialInput.layerColors array. I also noticed an increase in shader compilation time that hangs the application for a short while before all variations of the shaders are cached. This may simply mean my implementation is not optimal.

I am happy to discuss this further and see if it can be added in a clean way to CesiumJS.
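For reference, here is a minimal plain-JS sketch of the blending flow described above. This is not actual CesiumJS internals; names like pushToMaterial, layerColor, and the sampleAndBlend signature are illustrative assumptions that mirror the proposal, with the per-layer blend modeled on the CPU for clarity.

```javascript
// Hypothetical sketch: blend imagery layers per fragment, but hand any
// layer flagged with pushToMaterial straight to the material instead of
// blending it. All names here are illustrative, not real Cesium API.
function sampleAndBlend(materialInput, layers, sampledColors) {
  // Accumulate from a transparent base color.
  let blended = { r: 0, g: 0, b: 0, a: 0 };

  layers.forEach(function (layer, index) {
    const color = sampledColors[index];

    if (layer.pushToMaterial) {
      // Expose the raw sampled color to the material, e.g. so a custom
      // water shader can read a water-mask layer.
      materialInput.layerColor = color;
    } else {
      // Standard "over" alpha blend of this layer onto the accumulation.
      const a = color.a;
      blended = {
        r: color.r * a + blended.r * (1 - a),
        g: color.g * a + blended.g * (1 - a),
        b: color.b * a + blended.b * (1 - a),
        a: a + blended.a * (1 - a),
      };
    }
  });

  return blended;
}

// Example: layer 1 is a water mask pushed to the material.
const materialInput = {};
const layers = [{ pushToMaterial: false }, { pushToMaterial: true }];
const samples = [
  { r: 0.2, g: 0.4, b: 0.1, a: 1.0 }, // base imagery
  { r: 0.0, g: 0.0, b: 1.0, a: 1.0 }, // water mask
];
const result = sampleAndBlend(materialInput, layers, samples);
// materialInput.layerColor now holds the water-mask sample, and result
// contains only the blended, non-pushed layers.
```

In the real shader this loop would live in GLSL, which is where the constraints on dynamic array indexing and loop bounds mentioned above come into play.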
This came up on the forum. The idea is to be able to do some basic image processing in real-time, so you might have one imagery layer that has elevation data, one that has sunlight data, and one that has temperature data, and you can combine them in a small custom shader to highlight expected plant growing zones based on some model.
So this shader could have one or more imagery layers as input. Even a single imagery layer as input would be useful, for example to dynamically discard values based on a slider.
OpenLayers does something like this on the CPU side with web workers (see https://openlayers.org/en/latest/examples/raster.html).
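The combination idea above can be sketched as a per-pixel operation in the spirit of the OpenLayers raster example: three co-registered layers (elevation, sunlight, temperature) are combined into a growing-zone highlight. The thresholds and the zone model below are invented for illustration only.

```javascript
// Illustrative per-pixel operation: highlight pixels where a simple,
// made-up plant-growing model says conditions are suitable.
function growingZone(elevation, sunlight, temperature) {
  const suitable =
    elevation < 2000 && // meters
    sunlight > 0.6 && // normalized daily insolation
    temperature > 10 &&
    temperature < 30; // degrees Celsius
  // Suitable pixels become semi-transparent green; others transparent.
  return suitable ? [0, 255, 0, 180] : [0, 0, 0, 0];
}

// Apply the operation pixel-by-pixel over aligned layer buffers, much as
// a web worker would over tile data in the OpenLayers example.
function combineLayers(elevations, sunlights, temperatures) {
  return elevations.map(function (e, i) {
    return growingZone(e, sunlights[i], temperatures[i]);
  });
}

const out = combineLayers([500, 3000], [0.8, 0.9], [20, 15]);
// out[0] is highlighted; out[1] is rejected by the elevation threshold.
```

Doing the same in a small custom shader on the GPU would avoid the CPU round trip, which is the motivation for this issue.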