
Investigation: Generate mipmaps #386

Open
litherum opened this issue Aug 2, 2019 · 62 comments

@litherum
Contributor

litherum commented Aug 2, 2019

Generating mipmaps is something that most 3D applications want to do. As textures are often the largest assets 3D apps need to download, generating the smaller miplevels client-side can save significant download time, letting the app launch earlier. We've heard that download times are a significant concern for Web game developers; there's a strong inverse correlation between how long a game takes to load and how many users actually end up playing it.

WebGPU currently provides no facilities to generate mipmaps. It is true that a 3rd-party library could do this, but taking an additional dependency and sending a library across the wire for something that most 3D authors will want to do seems like bad design. Indeed, the purpose of doing this in the browser is to decrease download sizes, and adding an additional framework dependency is contrary to that goal.

WebGL solved this by including WebGLRenderingContext.generateMipmap(). Developers clearly want this; a quick search of GitHub shows that it's used more than 46,000 times. Indeed, at least one developer is using this WebGL API to generate mipmaps and then passing the results into WebGPU.

WebGPU should provide built-in facilities for generating mipmaps. Of course, we shouldn't force developers to use the built-in facilities. It is true that different developers desire different mipmap filtering algorithms, and they should be free to write their own mipmap generation code. However, for most authors, the presence of built-in mipmap generation facilities will be the difference between mipmaps existing in their app at all, and mipmaps not existing in their app.

Metal

It's a method on MTLBlitCommandEncoder. It is executed on the GPU, so the command needs to be submitted to the queue and is therefore ordered relative to other GPU operations. Also, because this is inside a command encoder, all compute/graphics command encoders need to be closed before this can be issued.

The docs have a note:

The filtering used to generate the mipmaps is implementation-dependent and may vary by Metal feature set.

Direct3D 12

The core API doesn't have any support for generating mipmaps. However, the Microsoft-authored DirectX Tool Kit 12 includes support for it. Unfortunately, this Kit isn't included in the Windows SDK, so Windows browsers would have to take a dependency on it. The docs say that it's implemented on top of a compute shader.

The GenerateMips function uses a DirectCompute shader to perform the mip-generation using bi-linear interpolation

Direct3D 12 doesn't have passes, so there are no concerns there. However, the API isn't a one-shot generation; instead, it's a ResourceUploadBatch object that has to be opened and closed.

Vulkan

Vulkan doesn't have any built-in facilities for generating mipmaps (because of course it doesn't). It looks like you can do it with (a ton of code around) vkCmdBlitImage, which means that any render passes need to be closed.

Compute shader

Even if Windows browsers didn't want to take a dependency on DirectX Tool Kit 12, we could still implement it using a compute shader. This would require that all render passes are closed.

Recommendation

In keeping with the current design of putting blit-style commands directly on the GPUCommandEncoder, we can add a generateMipmaps(texture) call there. This allows us to require that any render or compute passes are closed before it's called. Such a design should be compatible with each of the above approaches.
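To make the shape of that proposal concrete, here is a minimal sketch in plain JavaScript (no real WebGPU API involved; `mipLevelSizes` is a hypothetical helper) of the chain of level sizes any generateMipmaps(texture) implementation, whatever backend it uses, would have to walk:

```javascript
// Hypothetical helper: enumerate the mip level sizes a generateMipmaps(texture)
// implementation would need to produce for a 2D texture, halving each
// dimension per level and clamping at 1.
function mipLevelSizes(width, height) {
  const sizes = [[width, height]];
  while (width > 1 || height > 1) {
    width = Math.max(1, width >> 1);
    height = Math.max(1, height >> 1);
    sizes.push([width, height]);
  }
  return sizes;
}

console.log(mipLevelSizes(8, 4)); // [[8,4],[4,2],[2,1],[1,1]]
```

Note how non-square textures bottom out one axis at 1 while the other keeps halving, which is one of the details each backend above has to agree on.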

@kvark
Contributor

kvark commented Aug 2, 2019

You've almost lost me at the first sentence :)

Generating mipmaps is something that most 3D applications want to do.

Most 3D applications that are performance-sensitive wouldn't want to generate mipmaps at run-time, except for some cases where texture content is not known in advance (procedurally generated, captured from the environment, etc).

As textures are often the largest asset 3D apps need to download, generating the smaller miplevels can save significant downloading time, causing the app to be able to launch earlier.

Mipmaps constitute 1/3 of the original texture volume. Besides, an application can start perfectly fine while the mipmap levels are still being transferred. The best way for an app to start quickly is to pre-generate the mipmaps and start loading the higher (smaller) mips first, which would allow it to start much faster than waiting for the full textures. Once other mipmap levels arrive, an application can issue copyBufferToTexture instructions to put them into the texture. Obviously, this option is very different from the approach suggested in this issue.
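The 1/3 figure is just a geometric series: each level holds 1/4 the texels of the one above it, so the chain below level 0 approaches 1/4 + 1/16 + 1/64 + ... = 1/3 of the base. A quick illustrative check in plain JavaScript:

```javascript
// Ratio of the texel count of all mip levels below level 0 to the base level.
// For a square power-of-two texture this converges to 1/3.
function mipOverheadRatio(size) {
  const base = size * size;
  let extra = 0;
  for (let s = size >> 1; s >= 1; s >>= 1) extra += s * s;
  return extra / base;
}

console.log(mipOverheadRatio(1024).toFixed(4)); // "0.3333"
```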

It is true that a 3rd party library could do this, but taking an additional dependency and sending a library across the wire for something that most 3D authors will want to do seems like bad design.

Is having a 3rd-party dependency such a big problem in the JS/WASM world? I'm surprised we'd be trying to reduce the ecosystem as opposed to relying on it to grow.

Previously with setSubData we concluded that the users can just copy-paste a small function implementing this into their program. Is the main difference here the fact that a shader is involved? If that's the case, we may consider just providing blitImage method in our render pipelines:

  • it's fairly non-controversial, has been around since GL days
  • can be used to fill out the mip maps in a few lines of code

@magcius

magcius commented Aug 2, 2019

  • Mipmaps need to be filtered differently depending on whether they are used as wrap or clamp (you would want to filter edge pixels differently)
  • Compressed textures cannot have mipmaps generated at runtime, unless we require shipping a runtime BCn compressor
  • For any sort of complex PBR model, you want to do special things to your mipmaps anyway (convolve normals with roughness, prefilter env maps)

I would encourage that if the WebGPU WG would like runtime mipmap compression, that this is shipped as a library developed inside the WG and released as open-source for application authors to use.

@prideout

prideout commented Aug 2, 2019

Including a simple blitting function instead of a mipmap generator would be less controversial. Note that Vulkan provides vkCmdBlitImage with somewhat configurable filtering, but fixed to CLAMP_TO_EDGE wrapping behavior.

@magcius

magcius commented Aug 2, 2019

I agree with including a blit image command, though I wouldn't encourage using it in place of offline mipmap generation if possible.

@deltakosh

deltakosh commented Aug 2, 2019

You need to see the big picture here. We have thousands of users who are relying on us (babylon.js for instance but pretty sure Three has the same concern) to generate their mipmaps. Not every developer is a AAA game developer.

@kvark
Contributor

kvark commented Aug 2, 2019

@deltakosh interesting that you mention this, because Three.js and Babylon.js are in a good position to implement it once and for all in their own codebase, using the most efficient methods available.

@deltakosh

deltakosh commented Aug 2, 2019

So far (for WebGPU) we are using WebGL, so I'm not sure it is the most efficient method :) We will obviously use a different technique if the spec ends up with no other option.
I do not care too much if this is a separate library, but we try to reduce external dependencies as much as possible, so I would prefer having it directly available in the browser.

@kvark
Contributor

kvark commented Aug 2, 2019

@deltakosh is your use of WebGL justified by the inability to generate mipmaps with WebGPU directly? I mean, just having a shader that samples an image, and then running it for each level is not too difficult for you as a framework author.
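As a CPU stand-in for that per-level loop, here is a minimal sketch (single-channel image, 2x2 box filter; all names are illustrative — a real WebGPU version would record one render or compute pass per level instead of the inner loops):

```javascript
// Build a full mip chain on the CPU: each level is produced by 2x2 box
// averaging of the level above it, clamping the footprint at texture edges
// so odd-sized and 1-wide levels are handled.
function buildMipChain(level0, width, height) {
  const chain = [level0];
  while (width > 1 || height > 1) {
    const w = Math.max(1, width >> 1), h = Math.max(1, height >> 1);
    const prev = chain[chain.length - 1];
    const next = new Float32Array(w * h);
    for (let y = 0; y < h; y++) {
      for (let x = 0; x < w; x++) {
        const x0 = Math.min(2 * x, width - 1), x1 = Math.min(2 * x + 1, width - 1);
        const y0 = Math.min(2 * y, height - 1), y1 = Math.min(2 * y + 1, height - 1);
        next[y * w + x] = (prev[y0 * width + x0] + prev[y0 * width + x1] +
                           prev[y1 * width + x0] + prev[y1 * width + x1]) / 4;
      }
    }
    chain.push(next);
    width = w; height = h;
  }
  return chain;
}

const chain = buildMipChain(new Float32Array([0, 4, 8, 12]), 2, 2);
console.log(chain[1][0]); // 6: average of the four base texels
```

The GPU version is the same outer loop, with the per-texel work moved into a shader sampling the previous level.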

@deltakosh

deltakosh commented Aug 2, 2019

It is not, I agree and as I stated earlier we will do it if no other option is available. We are already resampling textures using shaders to make them POT so no big deal. But again it is more about the overall usability of the API.

(also because of the fact that spec is not stabilized, we took some shortcuts for our first implementations)

Also as mentioned by @magcius it is not just about sampling down a texture as we need to filter the edges differently

@sebavan

sebavan commented Aug 2, 2019

I am wondering why we try to push a lot of the functionality to external libraries, outside of the browser API scope. I think the browser API is an amazing place to simplify user code and keep everybody from writing the same kind of code over again.

Making it a lib means we expect people to use it, so why not integrate it directly in the browser? (Even more so as it is not a trivial task to do well.)

I am afraid that currently, to get started, I would need to install a lib for mipmaps, one to deal with buffer uploads, maybe a compiler, and probably some other utils.

What is the main issue with keeping some default utils in the spec? Like uploading imageData: we could argue that we do not need it, but is it not good to have convenience methods like those for devs?

@magcius

magcius commented Aug 2, 2019

I would rather not have browser-implemented helper libraries, unless they're all guaranteed to be identical (and this is a very tricky thing to ensure). There isn't a one-size-fits-all solution, and there are a lot of constraints. The blit image approach won't work for filtering cubemap images, for instance.

We're already seeing a huge delay getting the WebGPU MVP added into browsers. If we add it into the spec, it should not be added for MVP, and Babylon will have to ship mipmap generation code anyway...

There will be third-party code involved, the question is whether it's shipped in the browser or shipped in the app (if we agree this is post-MVP, then it'll be shipped in both!). For platform consistency reasons, I would rather see it shipped in the app.

@KeepItOneHundred

KeepItOneHundred commented Aug 2, 2019

IMO the problem isn't so much with having built-in helpers, it's those helpers incentivizing developers to write inefficient code. If the browser can generate mipmaps on-the-fly then the path of least resistance is to use uncompressed textures and waste a ton of memory/bandwidth, as was the case with WebGL.

If WebGPU is going to have high-level texture helpers then I'd rather see the effort put towards supporting Basis Universal as a first-class feature with transcoding provided by the browser, so the easy path is also the fast path.

@deltakosh

Well, if this is not done by the browser it will be done by the frameworks, so this will not do much to incentivize users to pick the Basis path.

@sebavan

sebavan commented Aug 2, 2019

I would say that even with more compressed formats and so on, we still sometimes need an easy path to mipmaps for procedural textures, textures from video or webcam capture, or even textures created from another canvas.

I totally get the MVP timing requirements and I do not think there is any rush for helpers in the spec. In frameworks like Babylon or Three we will be able to work around it, but as a general API surface on a longer term than the MVP, I think it still makes sense to have methods simplifying devs' lives.

I might misunderstand the overall audience, but if we are not only targeting the big engines out there, and we want to expand our reach, having convenience helpers is probably a good way forward.

Also, as in browsers the fewer calls to the native APIs the faster, it might be a tiny performance boost compared to doing all of the calls manually. It might even help reduce the JS file size, which more and more people seem to take as a strong concern and metric.

@kdashg
Contributor

kdashg commented Aug 2, 2019

I strongly believe this should be handled by applications and libraries.

WebGL's GenerateMipmaps is unstandardized(!), poor (unguaranteed!) quality, and has a number of restrictions that surprise users (filterable+renderable formats only!). While superficially useful, it's pretty limited, and I don't think is something we should try to replicate.

@RafaelCintron
Contributor

RafaelCintron commented Aug 5, 2019

@litherum wrote:

Direct3D 12 doesn't have passes, so there are no concerns there

The latest versions of D3D12 have render passes.

If Mipmap generation can be done with a helper library, that seems like the best approach, assuming the helper library is not several hundred kilobytes in size.

@krogovin

krogovin commented Aug 8, 2019

One important use case of mipmap generation is when a 3D scene renders to an offscreen buffer which is then textured directly in a way that the sampling is not uniform.

From the point of view of hardware and drivers, many GL drivers have a highly tuned implementation of mipmap generation, potentially closely tied to the GPU architecture, so those might win on performance over one made from a library. I can imagine a scenario where a tile-based GPU could make it faster if the driver knew, as part of the render target, that mipmaps were needed, but there is no API doing that so I doubt any GPU does.

On the other hand, most of the time driver-done mipmap generation is a simple box filter, which is naturally the cheapest but not necessarily the best looking.

@kainino0x
Contributor

AFAICT, neither Vulkan nor D3D12 implements a GenerateMipmaps, so we don't have access to any highly optimized driver implementation, so the browsers would have to write their own.

@grorg
Contributor

grorg commented Aug 12, 2019

Discussed at the 2019-08-12 meeting.

@litherum
Contributor Author

litherum commented Aug 14, 2019

AFAICT, neither Vulkan nor D3D12 implements a GenerateMipmaps ... so the browsers would have to write their own.

See above. The Microsoft-authored DirectX Tool Kit 12 includes support for it.

@litherum
Contributor Author

While superficially useful

It's hard to argue that 46,000 usages on GitHub is only superficially useful.

@litherum
Contributor Author

a library developed inside the WG

I think we'd have to modify our charter to do this.

@kvark
Contributor

kvark commented Aug 14, 2019

It's hard to argue that 46,000 usages on GitHub is only superficially useful.

To be fair, WebGL developers use it because OpenGL has it. We don't have this kind of legacy with WebGPU. So, I wouldn't treat this number as a strong indication that the built-in support is needed.

@litherum
Contributor Author

If you consider OpenGL usages of this function, the count grows to over 200,000 usages. The fact that it's used so often in real code means these facilities are more than superficially useful.

@devshgraphicsprogramming

If you have the time and effort to do it, do it, just make sure that:

  1. Your Mip-Map Generation runs in compute shader
  2. That Compute Shader can do 4 or more mip-map levels in a single dispatch
  3. The mip-mapping within a dispatch happens in shared memory
  4. You can batch multiple textures to be mip-mapped at once
  5. You can control the filter used and provide at least the Kaiser filter

If you provide any less than this then it's simply not worth the effort.

@litherum you're right that everyone relies on OpenGL's and WebGL's mip-map generation, however they also accept (what the WebGPU WG has always been allergic to) inconsistency.

The OpenGL (and WebGL) specs leave it completely up to the implementation as to:

  • The Filter Used (mostly the frequency destroying box filter)
  • The filtering of non-PoT mip-maps
  • The Speed and Efficiency
  • The Result (different mip-maps of the same base texture can look vastly different on 2 implementations)

Also mip-mapping a block compressed texture is a major pain.

@devshgraphicsprogramming

The only way to provide this feature is the way @magcius suggested, otherwise textures with minification will look different on different browsers (and possibly even backends/GPUs).

@kvark
Contributor

kvark commented Aug 16, 2019

I developed a benchmark suite for comparing Vulkan blits versus transfers and shader draws, see gfx-rs/gfx#2960 (comment)
TL;DR: on an NVidia GPU there isn't a convincing difference.

@Kangz
Contributor

Kangz commented Jun 11, 2021

Note that WebGPU allows sampling from a mipmap while rendering to another. Chromium used to not support it, but that was fixed over a year ago. @toji's texture helper that generates mipmaps uses that feature. It would be easy to extend it to support arbitrary filters specified as a WGSL function.

IIRC sampling from a mipmap and rendering to another should also be possible in WebGL (at least with ANGLE) by specifying the BASE_MIP_LEVEL texParameter.

The reason is, compressed formats on the GPU are just not practical in real life. Support is not reliable enough on different platforms and the cost of having a different code path when it's supported outweighs the benefit. Also in terms of transport size, jpeg, with the right compression parameters, is still the king.

This is surprising; WebGL should have good support for compressed formats, and something like Basis should produce smaller files than JPEG AFAIK. WebGPU will require support for compressed formats as well (at least one of BC and ETC2, probably), see #144.

One thought which is actually already in the above: having WebGPU give a generateMipmap() method, but that function's implementation is provided by the W3C in JavaScript which in turn is what browsers will expand that call to. This way, it is a common library thingy.

Doing this is complicated because the Web platform doesn't have good facilities for shipping a JS-based standard library. There have been multiple attempts in the past (for streams, for key-value stores, etc.) but none of them panned out. Having an official repo of helpers would definitely be great, though.

@krogovin

krogovin commented Jun 11, 2021

IIRC sampling from a mipmap and rendering to another should also be possible in WebGL (at least with ANGLE) by specifying the BASE_MIP_LEVEL texParameter.

I don't think that does the correct thing against textureLod(). The spec says:

Do a texture lookup as in texture but with explicit level-of-detail; lod specifies λbase

I am not so sure, but perhaps with TEXTURE_MIN_LOD and TEXTURE_MAX_LOD; on GL native, one would be better off using glTextureView(), but that is not present in WebGL2 and only present as an extension in native GLES3.x (though a fair number of platforms do support it there).

@Nehon

Nehon commented Jun 11, 2021

I am not so sure, but perhaps with TEXTURE_MIN_LOD and TEXTURE_MAX_LOD; on GL native, one would be better off using glTextureView(), but that is not present in WebGL2 and only present as an extension in native GLES3.x (though a fair number of platforms do support it there).

I tried the MIN/MAX LOD way and it didn't work. But let's not derail the topic; the point is to make this crystal clear in the WebGPU spec so that implementers are not left alone with their imagination.

@magcius

magcius commented Jun 11, 2021

It would be nice to port the AMD SPD library to WGSL: https://gpuopen.com/fidelityfx-spd/

@kdashg
Contributor

kdashg commented Jun 14, 2021

Mipmap generation via TEXTURE_BASE_LEVEL and TEXTURE_MAX_LEVEL does work in WebGL when used to preclude feedback.
MIN/MAX_LOD are for sampling/filtering, not mip level selection per se.
Here's the test: https://github.com/KhronosGroup/WebGL/blob/main/sdk/tests/conformance2/textures/misc/immutable-tex-render-feedback.html

@kvark
Contributor

kvark commented Jun 14, 2021

@Nehon thank you for extensive feedback!

Pre-generating mipmaps and sending them over the network is not a practical, or even a realistic, solution. Seeing arguments like "it's only 1/3 of the texture data" is scary, to say the least. We are targeting any kind of hardware and any kind of network bandwidth. We can't rely on the fact that all our users will have a killer fiber connection. Most of them have slow 3G bandwidth.

Hmm. Wouldn't it be faster to transfer higher mips first, followed by lower mips? I'd consider it a win for slow-connection users, and it's only possible if you pre-generate the mips. Your website would then always display the lowest available mip level for a texture. Once a new one comes in, you'd re-create the GPUTextureView and the GPUBindGroup for this lower mip.
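To put rough numbers on that streaming order, here is illustrative arithmetic (assuming a hypothetical uncompressed 1024x1024 rgba8 texture at 4 bytes per texel; function name is made up):

```javascript
// Byte size of each mip level, smallest (1x1) first: the order in which a
// progressive loader would fetch them so something displayable arrives early.
function mipBytesSmallestFirst(size, bytesPerTexel = 4) {
  const levels = [];
  for (let s = size; s >= 1; s >>= 1) levels.push(s * s * bytesPerTexel);
  return levels.reverse();
}

const levels = mipBytesSmallestFirst(1024);
const total = levels.reduce((a, b) => a + b, 0);
// Everything up to and including the 64x64 level is under 0.5% of the total:
const upTo64 = levels.slice(0, 7).reduce((a, b) => a + b, 0);
console.log((100 * upTo64 / total).toFixed(2) + "%"); // "0.39%"
```

So a blurry-but-usable texture can be on screen after a tiny fraction of the download, with each arriving level applied via copyBufferToTexture as described above.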

Mipmap generation HAS to be possible at runtime.
Being able to read from a mip level and write into another WITHOUT throwing a feedback loop error.

Yes, it is possible in WebGPU, as noted in other comments.
You can see it on https://wgpu.rs/examples/?example=mipmap, for example.

Having 1 hardware command to generate mipmaps on regular textures (as opposed to RTT textures) is REALLY handy.

I guess we'll be keeping our ears wide open to listen for whether this is demanded widely. D3D12 and Vulkan don't have built-in facilities for this, and D3D12 doesn't even have any blitting, so it's not totally clear how important mipmap generation would be today. With proper support for compressed textures in the API, and the Basis Universal magic on the user side, it may be better to expect all of the mips to come this way.

@Nehon

Nehon commented Jun 15, 2021

Hmm. Wouldn't it be faster to transfer higher mips first, followed by lower mips? I'd consider it a win for slow-connection users, and it's only possible if you pre-generate the mips. Your website would then always display the lowest available mip level for a texture. Once a new one comes it, you'd re-create the GPUTextureView and the GPUBindGroup for this lower mip.

We do send low-res textures first so that the time to interaction is the shortest possible, but we don't send the whole mipmap chain.

Thanks for your answer.

@magcius

magcius commented Jun 15, 2021

Having 1 hardware command to generate mipmaps on regular textures (as opposed to RTT textures) is REALLY handy.

Note that GenerateMipmaps is not a hardware command on any platform. The platforms that have it (D3D11's GenerateMips and GL's glGenerateMipmap) implement it fully in the driver, either as special shaders or as other tools.

One major reason that GenerateMipmaps was removed between D3D11 and D3D12 is because explicit APIs expose a lot more of the underlying hidden state that the driver can manage. As seen above, some methods of doing downsampling require tools like OUTPUT_ATTACHMENT. Others use a compute shader, which can require different usages.

Unless the exact implementation of GenerateMipmaps is specified in the spec, with all of its behavior pinned down, it's hard to implement such conveniences efficiently, since it gets locked into an implementation. Having a standard library for this, possibly a port of the compute shader downsampler released by AMD, would be preferable to having it in the browser, in my opinion.

@Nehon

Nehon commented Jun 15, 2021

Unless the exact implementation of GenerateMipmaps is specified in the spec, with all of the bits changing, it's hard to implement such conveniences efficiently, since it's locked into an implementation.

Yes, that's my point; that's why I think the spec should be super specific about it, whatever implementation is used.

Having a standard library for this, possibly a port of the compute shader downsampler released by AMD, would be preferable to having it in the browser in my opinion.

I don't get why a 3rd-party library would be better in every aspect we discussed, and I fail to see the benefit of it considering everybody needs to generate mipmaps.

@Kangz Kangz added this to the V1.0 milestone Sep 2, 2021
@Kangz
Contributor

Kangz commented Sep 2, 2021

Adding to the V1.0 milestone instead of post-V1.0 like other feature requests because this has been a very common request, so we need to make a decision on whether it is in 1.0 or not and stick to it.

@Kangz Kangz modified the milestones: V1.0, post-V1 Oct 18, 2021
@davidar

davidar commented Mar 21, 2022

It would be nice to port the AMD SPD library to WGSL: https://gpuopen.com/fidelityfx-spd/

I've ported a limited version of SPD to WGSL here, but there are a few issues:

@Kangz
Contributor

Kangz commented Mar 21, 2022

Thanks for the feedback! I don't think #822 would allow more storage bindings overall unfortunately, just grouping them as an array for convenience. How would read-write storage textures allow doing more than 6 mips at once? The number of storage texture bindings is a limit that can be increased if the hardware supports it too.

Do you have some performance numbers? Esp. compared to doing a repeated manual blit?

@davidar

davidar commented Mar 21, 2022

The first 6 mips only require workgroup shared memory, but mips 7-12 need to sample from the 6th mip texture (so need read-write in order to avoid a second dispatch). Though perhaps a storage buffer could be used as a workaround to handle the communication between workgroups.

I haven't looked at the performance yet, I suspect the port still needs some work to be competitive. I'll update with some numbers when I have a chance
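The 6-level limit follows directly from the tile size held in workgroup shared memory (illustrative arithmetic; SPD uses a 64x64 tile per workgroup):

```javascript
// A WxW tile held in workgroup shared memory can be halved log2(W) times
// before it collapses to one texel, so a 64x64 tile yields 6 mip levels per
// dispatch; level 7 onward must read results produced by other workgroups.
function mipsPerDispatch(tileSize) {
  let n = 0;
  for (let s = tileSize; s > 1; s >>= 1) n++;
  return n;
}

console.log(mipsPerDispatch(64)); // 6
```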

@davidar

davidar commented Mar 22, 2022

I've updated the code to lift those last two limitations (thanks for the clarification), and done a quick performance comparison. The total cost of the draws/dispatch is similar between the two; most of the time savings are from avoiding the need to set up multiple passes. For generating 10 mip levels, the SPD port takes 0.3-0.4ms, whereas repeated blitting takes about 2ms.

@Kangz
Contributor

Kangz commented Mar 22, 2022

That's quite a perf difference! Thanks for sharing. I assume that the textures you generate mipmaps for have the STORAGE_BINDING usage? So when generating for otherwise read-only textures, that might pessimize future reads, and a copy into an otherwise read-only texture could be needed. (That's the case for repeated blits too, though.)

@devshgraphicsprogramming

The first 6 mips only require workgroup shared memory, but mips 7-12 need to sample from the 6th mip texture (so need read-write in order to avoid a second dispatch). Though perhaps a storage buffer could be used as a workaround to handle the communication between workgroups.

I haven't looked at the performance yet, I suspect the port still needs some work to be competitive. I'll update with some numbers when I have a chance

You'll need to mark the 6th mip texture as coherent and insert a memoryBarrierImage() between the 6-level write op and 7-level read op, this might come with a penalty.

@davidar

davidar commented Mar 23, 2022

I assume that the textures you generate mipmaps for have the STORAGE_BINDING usage? So if generating for readonly textures that might be pessimizing future reads and a copy into an otherwise readonly texture could be needed. (that's the case for repeated blits too though)

Yep, the only difference from repeated blitting with a fragment shader is that it's using STORAGE_BINDING instead of RENDER_ATTACHMENT.

You'll need to mark the 6th mip texture as coherent and insert a memoryBarrierImage() between the 6-level write op and 7-level read op, this might come with a penalty.

Thanks. I'm guessing storageBarrier would be the WGSL equivalent here? (I'm using a storage buffer rather than a texture for the communication between mips 6 and 7, due to the read-write limitation.) I think buffers are currently coherent by default (#1621)

@devshgraphicsprogramming

I'm using a storage buffer rather than a texture for the communication between mips 6 and 7, due to the read-write limitation.

You can't have read-write access on a storage image in WebGPU !?

@Kangz
Contributor

Kangz commented Mar 23, 2022

Unfortunately not, it isn't a required capability in Metal. We gathered some statistics on the availability in Chrome macOS installs and there's about 10% of systems with no support at all, and another 50% systems that would support only r32float for read-write storage.

@maierfelix

While glGenerateMipmap is heavily used in WebGL, it is a legacy method and, in my opinion, its implementation should be left to the authors of WebGPU-based libraries. glGenerateMipmap only works reliably in a very limited subset of cases, while in modern graphics there are too many different and specialised scenarios to cover in a spec. It is a handy method to have during development, but the performance characteristics are too opaque and the results are too unreliable, or not usable at all.

@kvark

Most 3D applications that are performance-sensitive wouldn't want to generate mipmaps at run-time, except for some cases where texture content is not known in advance (procedurally generated, captured from the environment, etc).

Outside of regular textures, there are many use cases for generating/re-generating mipmaps at runtime, often to traverse hierarchical data structures or to approximate light traversal, e.g. LPV, VXCT (or generally where a volumetric representation is used to approximate a scene for e.g. lighting).

Also as a side note, I found that up to today, manually generating mipmaps in WebGL without glGenerateMipmap is surprisingly hard. AFAIK there are still vendors who don't correctly implement the TEXTURE_BASE_LEVEL texture parameter, which is necessary to efficiently generate mipmaps at runtime without double buffering (and without glGenerateMipmap). Here is an example to test if TEXTURE_BASE_LEVEL works in your browser.

@devshgraphicsprogramming

glGenerateMipmap only works reliably in a very limited subset of cases

GL and GLES impose a limit that for glGenerateMipmap to work, the format must be color-renderable (probably to allow for low-effort implementations using blit sequences).

This gets tricky with RGB8 textures (which aren't too much of a problem since they don't exist in reality), RGB9E5 and so on.

Also as a side note, I found that up to today, manually generating mipmaps in WebGL without glGenerateMipmap is surprisingly hard.

It's currently a 3-month-long project by @achalpandeyy in our framework Devsh-Graphics-Programming/Nabla#343

Box filter is a rather bad default, and there are a lot of things to worry about when implementing even that poor method.

@devshgraphicsprogramming

devshgraphicsprogramming commented Jun 13, 2022

Unfortunately not, it isn't a required capability in Metal. We gathered some statistics on the availability in Chrome macOS installs and there's about 10% of systems with no support at all, and another 50% systems that would support only r32float for read-write storage.

you should at least support r32u, r32i, as otherwise I cannot do atomic image ops.

Also, just because it isn't a required capability doesn't mean it shouldn't be exposed as a feature at all.

ben-clayton pushed a commit to ben-clayton/gpuweb that referenced this issue Sep 6, 2022
@greggman
Contributor

greggman commented Aug 3, 2023

Late to this discussion and not sure why it popped up in my sites but ...

  1. I'm on the side of this belongs in the user code, not in WebGPU for all the reasons mentioned above.

  2. Similar to @toji's example (which I hadn't seen), I wrote a generator; it's here. The mipmap generator is a single function that generates from mip level 0 to texture.mipLevelCount. At the moment it also handles 2d-arrays. I thought about adding 3d support, but I suspect the usage for 3d mips is low. Also, there are a bunch of image/canvas/video loading functions which make it trivial to load one or more into a texture and then generate mips.

@tsherif

tsherif commented Feb 10, 2024

Found this discussion super interesting, as I was curious why WebGPU couldn't generate mipmaps. One suggestion I have is to consolidate the arguments for excluding mipmap generation from WebGPU in one place for easy reference, if that hasn't been done already (maybe @greggman's WebGPU from WebGL article?). I suspect this question will come up a lot, and this thread is a long, convoluted way to get to an understanding of the situation.
