
glTF 2.0: New KHR_environments extension #946

Closed
McNopper opened this issue May 7, 2017 · 51 comments

@McNopper
Contributor

McNopper commented May 7, 2017

For PBR and image-based lighting (IBL), an environment map is needed. The following JSON snippet defines environments by example:

{
  "environments": [
    {
      "environmentTexture": 0,
      "type": "sphere"
    },
    {
      "environmentTexture": 0,
      "type": "panorama"
    }
  ]
}

"type" defines the format of the environment texture.

For non-PBR materials, the texture can be used directly as the environment. For PBR materials, the texture needs to be sampled and pre-filtered.
It needs to be discussed whether cube maps should be supported as textures. Furthermore, a standard HDR image format has to be selected.
Finally, we need to find a way to provide the pre-sampled/pre-filtered images as well.
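A minimal loader for the proposed JSON might look like this (a sketch only; the key names "environments", "environmentTexture", and "type" follow the snippet above and are not part of any ratified glTF extension):

```python
import json

# Hypothetical loader for the proposed "environments" array.
VALID_TYPES = {"sphere", "panorama"}

def parse_environments(gltf_json: str):
    doc = json.loads(gltf_json)
    environments = []
    for i, env in enumerate(doc.get("environments", [])):
        env_type = env.get("type")
        if env_type not in VALID_TYPES:
            raise ValueError(f"environment {i}: unknown type {env_type!r}")
        if not isinstance(env.get("environmentTexture"), int):
            raise ValueError(f"environment {i}: missing texture index")
        environments.append((env["environmentTexture"], env_type))
    return environments

doc = ('{"environments": [{"environmentTexture": 0, "type": "sphere"}, '
       '{"environmentTexture": 0, "type": "panorama"}]}')
print(parse_environments(doc))  # [(0, 'sphere'), (0, 'panorama')]
```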

@McNopper McNopper changed the title glTF 2.0: KHR_materials_environments extension glTF 2.0: New KHR_materials_environments extension May 7, 2017
@McNopper McNopper changed the title glTF 2.0: New KHR_materials_environments extension glTF 2.0: New KHR_environments extension May 7, 2017
@javagl
Contributor

javagl commented May 7, 2017

Interesting. I thought the IBL would just be one form of a light that was about to be defined in the new lights extension, but considering that it does not necessarily contribute to the lighting, a dedicated environments extension probably makes sense.

The lack of support of real cube maps might either require some workarounds, or extensions in the texture/image/sampler area as well (but I'm not so deeply familiar with that).

In any case, one probably has to either

  • define whether an environment should be used for lighting (some sort of isLightSource:boolean flag?)

  • or create a connection between the lights and the environments extension - maybe as a new light class ...

    {
      ...
      "spot": { ... },
      "environmental": { // New light class
        "environment" : 1  // Refer to the panorama environment from the above example
      }
    }
    
    

    but that's just a thought that might not make sense at all...

@McNopper
Contributor Author

McNopper commented May 7, 2017

The new light class makes sense. It is probably only relevant for PBR materials, but I prefer the reference.
As you mentioned, I have put the environment in a separate structure, as it can be used for other features as well, e.g. as a background for common materials.

@emackey
Member

emackey commented May 7, 2017

To make sure I'm understanding this, the general case would still be that we expect PBR models to pick up environments from the rendering engine, not ship them with the glTF file itself, right?

Take for example the models in sbtron's glTF 2 demo. The same glTF file can be loaded into multiple different environments, and will reflect the selected environment without changes to the glTF itself.

[Animated GIF: the same hourglass model rendered in several different environments]

@McNopper
Contributor Author

McNopper commented May 7, 2017

Yes, the general case would be to deploy without the environment map. And I see this extension as a "pure" extension like the PBR specular glossiness is right now.

To explain this extension a little bit further:

Normally, you deploy/use the glTF 2.0 file without any environment map, so the engine decides which environment lighting is used, as in the images above and as glTF 2.0 is specified right now.

But I do see another use case:
In the above engine - as in any other - it is somehow encoded what kind of environment map is used:
the texture format (sphere, panorama, etc.) and probably also the tone mapping and so on.

So, if I want to send someone an asset and I want this person to see the 3D content exactly the way I intend, I also need to send the environment map plus some additional information. For this reason, I want to be able to deploy the environment map inside the glTF - especially glb - file.

@xelatihy
Contributor

If this is added, can we add a transform matrix to orient the envmap, and a color to scale it?

@McNopper
Contributor Author

Good idea.
I suggest a 3x3 matrix and/or a rotation entry. It could be similar to the node transform, except that translation and scale are removed.
Regarding the color: would this be a strength value, to make the scene brighter etc.?
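The rotation-only orientation suggested above could be applied to the sampling direction before the environment lookup. A minimal sketch, assuming a row-major 3x3 matrix and a Y-up convention (neither of which is specified anywhere yet):

```python
import math

# Build a rotation about +Y (the kind of reorientation discussed above).
def rotate_y(angle_rad):
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

# Apply a row-major 3x3 matrix to a direction vector.
def apply(m, v):
    return tuple(sum(m[r][c] * v[c] for c in range(3)) for r in range(3))

# Rotating the +Z direction by 90 degrees about +Y yields +X:
d = apply(rotate_y(math.pi / 2), (0.0, 0.0, 1.0))
print([round(x, 6) for x in d])  # [1.0, 0.0, 0.0]
```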

@stevenvergenz

Are these environment maps intended to be used as reflection probes? If so, it might be useful to be able to associate an environment map with a node. In many game engines, a scene may have multiple reflection probes, and it uses the weighted relative distances to the different probes to choose one for a particular object. The specific algorithm would have to be implementation-dependent, but the data should be available.

@UX3D-nopper
Contributor

The original idea is having one static environment map, which influences the whole scene.
I will suggest your proposal to the working group tomorrow.

@pjcozzi
Member

pjcozzi commented Jun 14, 2017

"type" defines the format of the environment texture.

Are cubemaps most common? If so, should that be the only one supported to start or is that too limited compared to what shading tools will create? Could you give a brief rundown of each possibility for type?

@pjcozzi
Member

pjcozzi commented Jun 14, 2017

CC @moneimne, this discussion may be of interest to you.

@moneimne

I definitely see a strong use case for this when displaying/previewing glTF models. I expect that when an artist creates an asset, they often want it to be displayed in a relevant environment. If the engine imposes its own environment map, it might end up that the model looks out-of-place or awkward because of context.

An extension like this might not be as useful when loading multiple glTF models into the same scene, though. Contradicting environment maps would detract from the realism that PBR aims to add. I suppose the engine could ignore the extension at this point.

@McNopper
Contributor Author

@pjcozzi "type" should define how the environment map is "encoded":
http://spiralgraphics.biz/genetica/help/index.htm?environment_maps_explained.htm
So they are:

  • latitude/longitude format aka panorama format aka equirectangular format
  • mirror ball format
  • horizontal cross aka cubic format
  • Cube maps

I suggest that we only support the panorama and mirror ball formats:

  • No horizontal cross, as unused image data (the empty regions of the cross layout) would have to be transported.
  • No cube maps, because
    • the environment map has to be sampled/filtered anyway to produce the final cube maps for IBL,
    • we would need to define how these 6 sides/textures are encoded in glTF 2.0, and
    • as far as I know, different graphics APIs expect the cube map faces flipped differently.
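For reference, the latitude/longitude (equirectangular/panorama) format above can be sketched as a direction-to-UV mapping (an illustration only; +Y up and u = 0.5 facing +Z are assumed conventions, not part of any spec):

```python
import math

# Map a unit direction vector to equirectangular texture coordinates.
def dir_to_equirect_uv(d):
    x, y, z = d
    u = 0.5 + math.atan2(x, z) / (2.0 * math.pi)   # longitude
    v = 0.5 - math.asin(max(-1.0, min(1.0, y))) / math.pi  # latitude
    return u, v

print(dir_to_equirect_uv((0.0, 0.0, 1.0)))  # (0.5, 0.5): facing +Z
print(dir_to_equirect_uv((0.0, 1.0, 0.0)))  # (0.5, 0.0): straight up
```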

@McNopper
Contributor Author

@moneimne Regarding environment maps included in the glTF scene, we should define this exactly in the specification, e.g.:
"If the asset has to be rendered the way the artist wants it to be seen, please use the environment map included in the file. If the asset is composed with several other glTF assets, the included environment map can be ignored."

Also, in the latter case, I would suggest storing the environment map in a separate glTF file without any other scene data. This glTF file would still be a valid glTF file, plus it has all the information about the environment map, like type and additional rotation.

@xelatihy
Contributor

Let me comment on the extension a bit.

  1. I think it would be great to have, since most models are viewed under an envmap
  2. glTF can also store full scenes, and envmaps are the best outdoor lighting there is – so they should be included
  3. transforms are needed on the envmap for reorientation
  4. I would include latlong projections, since this is one of the most common formats for envmaps on the web, and since reprojecting an envmap is really hard to do right

One could also want to include envmap probes, i.e. envmaps that are local to a part of the scene. The problem with doing so is that it is very hard to define how to render them appropriately without some form of complex probe interpolation, typically done only on smoothed probes and using some form of angular basis for it. I would leave this out, unless there is clarity on a simple implementation that actually works.

@pjcozzi
Member

pjcozzi commented Jun 15, 2017

"If the asset has to be rendered like the artist wants it to be seen, please use the environment map included in the file. If the asset is composed with several other glTF assets, the included environment map can be ignored."

How would an app know "If the asset has to be rendered like the artist wants?" It seems that one environment map would need to always override the other, e.g., "if the runtime has an environment map, the environment map in the extension may be ignored."

I suggest that we only support the panorama and mirror ball formats:

  • No horizontal cross, as unused image data (the empty regions of the cross layout) would have to be transported.
  • No cube maps, because
    • the environment map has to be sampled/filtered anyway to produce the final cube maps for IBL,
    • we would need to define how these 6 sides/textures are encoded in glTF 2.0, and
    • as far as I know, different graphics APIs expect the cube map faces flipped differently.

Sounds like cube maps will be a lot of work; this is probably why we punted on them earlier. 😄 But are they widely used enough that the work is justified to "get this right?"

Any thoughts @lexaknyazev @bghgary?

@McNopper
Contributor Author

The app does not know. It depends more on the context:
E.g. in the future, if I double-click a glTF 2.0 file that includes an environment map, the viewer uses that environment map. If no environment map is present, a default one or none is used.
If I am importing a glTF 2.0 asset into a game engine, it will probably ignore the environment map, or ask me whether the environment map should be imported as well. But I think we should not specify this behaviour.

Having an environment map would also imply supporting HDR images. Having cube maps, we would also need to support more samplers. Both are basically easy to define, but I think it will take some time until all agree.

My suggestion is to put all extensions - except lighting and common materials - on hold for now.
We should give all the engine and content-tool developers time to adapt to glTF 2.0. But as lighting and common materials are important, we should focus just on these right now.

@pjcozzi
Member

pjcozzi commented Jun 16, 2017

My suggestion is to put all extensions - except lighting and common materials - for now on hold.

Sounds good.

@stevenvergenz

stevenvergenz commented Jun 16, 2017 via email

@McNopper
Contributor Author

No, no, just the official "KHR_" ones.

@xelatihy
Contributor

For orienting the environment map, could we add an extension to the node?
Environments would then be treated like cameras and meshes.

The main advantage of this is that by having only one way to specify transforms in glTF we get an easier integration in libraries and support for all transformation features, like for example animation.

@UX3D-nopper
Contributor

Yes, I think your suggestion is the better approach. It would also make it possible to have several environment maps, using IBL depending on the position of the actor.
I will discuss this with the glTF working group on Wednesday.

@emilian0
Contributor

@McNopper Regarding environment map encoding: I am not convinced that we should support mirror balls, because they display heavy distortions (I believe the mirror ball corresponds to an azimuthal equidistant projection).
I understand their physical value (light/environment physical probes). But for glTF I believe we should pick an encoding that gives roughly the same importance (pixels) to each direction and introduces fewer distortions (so that the resulting images can be more efficiently transmitted/compressed).
I think equirectangular projections are good (even if they introduce large distortions around the poles and use fewer pixels to encode the equator).
I think cube maps are a better option in terms of Tissot's indicatrix (I couldn't find a diagram though).
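The distortion argument above can be quantified: in an equirectangular map, the solid angle a texel covers falls off with sin of the polar angle, so polar texels carry far less of the sphere than equatorial ones. A small sketch (the 64x32 resolution is an arbitrary illustration):

```python
import math

# Solid angle covered by one texel of a W x H equirectangular map.
def texel_solid_angle(row, width, height):
    theta = math.pi * (row + 0.5) / height  # polar angle at the texel center
    return (2 * math.pi / width) * (math.pi / height) * math.sin(theta)

W, H = 64, 32
polar = texel_solid_angle(0, W, H)          # texel next to the pole
equatorial = texel_solid_angle(H // 2, W, H)
total = sum(texel_solid_angle(r, W, H) for r in range(H)) * W
print(round(equatorial / polar, 1))  # ~20.4: equatorial texels cover far more
print(round(total, 2))               # ~12.57, i.e. ~4*pi sr over the sphere
```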

@UX3D-nopper
Contributor

I am fine without support for mirror balls, as we normally use the equirectangular representation, and I think others do not have a strong opinion on this.
Having cube maps also makes sense of course, as they can be used 1:1 by the graphics API.

What we have to define is how these HDR images are encoded:
.hdr, .ktx, ???

.ktx has the advantage that it supports cube maps, mipmaps, and floating-point textures. Also, it is a Khronos standard.

@emilian0
Contributor

@McNopper we use equirectangular as well; that said, I am leaning towards cube maps because of the lower distortions (on top of hardware support).
I understand that there are a lot of decisions to make in case we go with cube maps; a colleague of mine pointed me to Google's 360 video standardization effort as a source of inspiration.

In terms of HDR encoding we mostly use .hdr internally (.exr as well).
Thanks for pointing me to .ktx: that is an in-memory format, correct? Probably we can do better than that for transmission?

@UX3D-nopper
Contributor

I mean both should be possible - cube maps and equirectangular - as they are both commonly used.

KTX is from Khronos: https://www.khronos.org/opengles/sdk/tools/KTX/file_format_spec/
If e.g. compression is missing - but I think it is included - it could be extended :-)

@stevesan

(New to the community - hello :))

My 2 cents: My impression is that base glTF is meant to define the object (or hierarchy of objects), but not the lighting nor any aspect of the final rendering (post effects, etc. ...although we do have 'camera', which seems odd to me). The choice to stick to PBR materials makes sense for this goal: a glTF object can be dropped into any PBR renderer and look fairly accurate and fitting with the rest of the scene, regardless of lighting. Environment-based IBL seems to me purely a lighting concern, and thus does not belong in glTF.

On the other hand, if glTF is meant to define an entire 3D experience, then certainly you need IBLs, but a bunch of other stuff too. I don't think this is the way to go, since the use cases of a "drop-in object" are numerous, and one could imagine another spec being defined for entire scenes/experiences.

@msfeldstein

We're looking into something like this, and it would be handy because our app is purely a model viewer, so it'd be nice to have one file for everything. But I agree this should be stored next to a glTF model, not inside of it.

@emackey
Member

emackey commented Dec 19, 2017

Welcome @stevesan

My impression is that base glTF is meant to define the object

There was a lot of early discussion of this in 2.0 (see #696 (comment) and #746 and other issues). The general idea was that we do want glTF to be capable of sending whole scenes. But, lights and environments and such weren't ready to go when 2.0 was released, so, they're being worked on as extensions. If and when the extensions become mature and widely-supported, they can be candidates for moving to core glTF in a future version.

@MiiBond
Contributor

MiiBond commented Apr 3, 2018

Just waiting for a Windows build, so I thought I'd write up my current thoughts on this extension:

  • I like the idea of the environment lighting being another type of light. This would require KHR_lights and would make this the first extension to require another extension. Yay??

  • It's up to the runtime whether to use the lighting information or not when they load the glTF. This depends on what the model/scene is being loaded into and how it is being used. Same as KHR_lights.

    • If the runtime uses arrays of light probes for their environment lighting, they'd probably ignore the lighting in the glTF. However, if the runtime uses the lights in the glTF and the glTF contains more than one environment light, should we specify that the closest light to a mesh dictates which environment light gets applied?

    • Including the texture for the environment light will increase the size of the asset but, if the creator included it, they must feel that it's an important part of their scene. If a consumer of the asset wants to optimize it out because their use case doesn't need it, they can do so.

  • The texture format and the ability of glTF to reference a particular type of texture isn't really the concern of this extension. What's important is that the environment light tells the runtime which texture index to use and how to project the texture. It's up to other extensions (like MSFT_dds_texture) to support other texture types. Incidentally, DDS support already gives us a way to reference cube-maps and even HDR data (in completely uncompressed form, of course).

    • If everyone agrees that an RGBE-packed PNG is a bad idea for an extension, how about we propose a HDR or EXR extension?
    • My assumption is that the transform required to use the texture (e.g. equirectangular, spherical or cube-map) would be defined on the light. However, if the light specifies a cube-map projection, is that weird (as it assumes the ability to reference a cube texture)?
  • It's up to each runtime to decide how it wants to generate the convolved/filtered versions of the environment image for IBL rendering. This, of course, goes a bit against the ideal of glTF being as close to the runtime as possible but I think we need to be realistic here. Do we really want to dictate how the runtime should use the texture for IBL?

    • I have heard several complaints about how glTF models appear different in different runtimes. Since differences in lighting are one of the primary causes of this, this extension should really help out this cause. If we don't specify how to convolve the environment map, sizable lighting discrepancies may remain. Is that a big problem?
  • Do we require any modifications to material definitions for this extension? My assumption has been 'no'. We should be able to be pretty explicit when describing how a PBR material should be affected by the environment light. Do we need to specify how the environment lights will be treated by the materials_common extension?
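As context for the RGBE-packed-PNG question in the bullets above, the Radiance-style shared-exponent packing can be sketched as follows (an illustration of the idea only, not a full .hdr reader/writer):

```python
import math

# Pack a linear HDR color into 8-bit mantissas plus one shared
# 8-bit exponent (biased by 128), as in the Radiance .hdr format.
def float_to_rgbe(r, g, b):
    m = max(r, g, b)
    if m < 1e-32:
        return (0, 0, 0, 0)
    exp = math.frexp(m)[1]            # m = f * 2**exp with 0.5 <= f < 1
    scale = 256.0 / (2.0 ** exp)
    return (int(r * scale), int(g * scale), int(b * scale), exp + 128)

def rgbe_to_float(r, g, b, e):
    if e == 0:
        return (0.0, 0.0, 0.0)
    scale = 2.0 ** (e - 128) / 256.0
    return (r * scale, g * scale, b * scale)

packed = float_to_rgbe(1.0, 0.5, 8.0)
print(packed)                   # (16, 8, 128, 132): exponent shared by all channels
print(rgbe_to_float(*packed))   # (1.0, 0.5, 8.0)
```

The shared exponent is what makes the format compact, and also what causes precision loss when channel magnitudes differ greatly.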

@donmccurdy
Contributor

Do we need to specify how the environment lights will be treated by the materials_common extension?

No immediate plans to move forward with KHR_materials_common, focusing on KHR_materials_unlit instead.

@garyo

garyo commented Apr 6, 2018

I'm just getting started with glTF. I'll be using it for whole-scene transfer between apps, and was surprised to find HDR textures and environment maps missing. Just a data point.

@MiiBond
Contributor

MiiBond commented Apr 26, 2018

If KHR_environment requires KHR_lights, how would that work exactly?

"extensions": {
        "KHR_lights": {
            "lights": [
                {
                    "color": [0.7,  0.7, 0.5 ],
                    "intensity": 1.0,
                    "name": "dayLight",
                    "type": "directional"
                }
            ],
            "extensions": {
                "KHR_environment": {
                    "lights": [
                        {
                            "type": "environment",
                            "layout": "equirectangular",
                            "texture": 2,
                            "name": "iblLight"
                        }
                    ]
                }
            }
        }
    },

Does this break any rules to have the lights array of KHR_environment appended to the lights array of KHR_lights rather than an override? If so, should it be something more like this:

"extensions": {
    "KHR_lights": {
        "lights": [
            {
                "color": [0.7, 0.7, 0.5],
                "intensity": 1.0,
                "name": "dayLight",
                "type": "directional"
            },
            {
                "type": "ambient",
                "color": [1, 1, 1],
                "extensions": {
                    "KHR_environment": {
                        "type": "environment",
                        "layout": "equirectangular",
                        "texture": 2,
                        "name": "iblLight"
                    }
                }
            }
        ]
    }
},

@ivalylo

ivalylo commented May 13, 2018

In my experience, you ultimately want to sample your environment map as a cube map to avoid the discontinuity when using mipmaps, i.e. because the sampling at the boundary of the equirectangular texture isn't continuous (the coordinate jumps from 0 to 1).

This looks like a bug in the mipmap generation. It should know how to "wrap" the texture while resizing, and then there would be no problem.

IMO, KTX support may not be bad, but if added, it sounds more like another extension. It is a generally useful format if you need custom mipmaps, GPU compression support, etc. So it means huge work by itself, before even getting to the env extension...

Why not just expose some HDR image formats as separate extensions, like the DDS extension? The env extension would then not have to deal with this issue, which is really not part of this extension. The engines will decide what to support.

Supporting multiple probes is a more advanced feature, so maybe it's for another extension?

Making KHR_environment dependent on KHR_lights sounds cool, but... what exactly does it depend on? It doesn't care about the other lighting. IMO, the environment is the most basic form of lighting, so some implementations may decide not to support KHR_lights.

I don't think letting the implementations do the convolution will create many discrepancies, since they will all need to be fast and will probably do some simple blur :)... Maybe the question is how bad such runtime convolution will be quality-wise. Imagine also if you have multiple probes; that is a stress that even game engines don't need to handle, since it's always precalculated. However, if the convolution is done offline, this will require a format that also supports mipmaps, to handle different roughness values. It also means that the engine's BRDF model may be different than yours... Maybe if the engine supports mipmaps, and they are provided, it should just use them at your own responsibility, and otherwise do its own thing?

@MiiBond
Contributor

MiiBond commented Jun 5, 2018

This looks like a bug in the mipmap generation. It should know how to "wrap" the texture while resizing, and then there would be no problem.

This actually has nothing to do with mipmap generation. It's an issue with how mipmaps are sampled. The hardware chooses a mip level based on the screen-space derivatives of the UVs (how they change across the polygon). If the UVs are discontinuous (i.e. jump directly from 0 to 1), as in this case, the derivative blows up. Hence the artifact.
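The effect described above can be shown numerically (a sketch; the texture width of 1024 and the simplified mip-selection formula are illustrative assumptions, not the exact hardware behavior):

```python
import math

# Simplified mip selection: level grows with the texel-space UV step.
def mip_level(du, texture_width):
    return max(0.0, math.log2(abs(du) * texture_width))

# Neighbouring pixels on either side of the equirectangular seam:
u_left, u_right = 0.999, 0.001
naive_du = u_right - u_left            # -0.998, though the true step is 0.002

print(round(mip_level(naive_du, 1024), 2))  # ~10.0: the tiniest mip gets sampled
print(round(mip_level(0.002, 1024), 2))     # ~1.03: the level actually wanted
```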

@UX3D-nopper
Contributor

UX3D-nopper commented Aug 1, 2020

As the discussion regarding IBL and how to define it has popped up again, I want this extension to be discussed and reviewed.
For simplicity, only panorama images should be supported. No need for spherical etc.
Also, as a file format, I recommend using .hdr files, as they are supported by today's tools and are a standardized way to describe HDR.

In addition, we need a parameter for the default orientation of "front" and/or the center of the panorama image:
+X or +Z and so on.
This is required, as DCC tools have different conventions for this.

@donmccurdy
Contributor

I still have some reservations about creating an IBL extension, and shipping one with only panorama .hdr images feels like a particularly short-term workaround to me. @UX3D-nopper could you say more about why you would like to revisit the extension?

@UX3D-nopper
Contributor

From 3D Commerce there is demand to ship the IBL with the glTF.
HDR and panorama, as they are widely used and supported by DCC tools.
Furthermore, as only the panorama image is provided, it is up to the engine implementor to use e.g. spherical harmonics vs. pre-filtered images.
KTX2 is out of scope for today.
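The "spherical harmonics" route mentioned above can be sketched by projecting an equirectangular radiance map onto the DC (L0) spherical-harmonic band, weighting each texel by its solid angle (an illustration only; single-channel pixels and the function name are assumptions):

```python
import math

Y00 = 0.282095  # constant L0 basis function, 1 / (2 * sqrt(pi))

def project_sh_l0(pixels, width, height):
    # pixels[row][col] -> scalar radiance; returns the L0 coefficient
    coeff = 0.0
    for row in range(height):
        theta = math.pi * (row + 0.5) / height          # polar angle of the row
        d_omega = (2 * math.pi / width) * (math.pi / height) * math.sin(theta)
        for col in range(width):
            coeff += pixels[row][col] * Y00 * d_omega
    return coeff

# A uniform white environment integrates to Y00 * 4*pi (up to quadrature error):
uniform = [[1.0] * 64 for _ in range(32)]
print(round(project_sh_l0(uniform, 64, 32), 2))  # ≈ 3.55
```

Higher bands (L1, L2) follow the same pattern with their respective basis functions; 9 coefficients per channel are the usual choice for diffuse irradiance.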

@donmccurdy
Contributor

From 3D Commerce there is demand to ship the IBL with the glTF.

I don't understand this requirement... Perhaps we can discuss more soon.

HDR and panorama, as they are widely used and supported by DCC tools.

Unity stores reflection probes as cubemaps. three.js and Babylon support equirectangular IBL, but have to convert it to cubemaps at load time before use, to my understanding. The projection produces artifacts at the poles, so we tend to find cubemaps preferable.

@lexaknyazev
Member

IBL storage, transmission, and usage comprise several key questions that haven't been thoroughly investigated for glTF yet. We cannot make a KHR extension otherwise. The current state of existing DCC tools shouldn't be a deciding factor here.

Note that most of the following questions do not depend on each other.

Shape

  • Equirectangular
  • Regular Cubemap
  • Equi-Angular Cubemap (better data density distribution than both other options)

Orientation

glTF defines fixed XYZ directions and we even rejected an extension that was supposed to remap global axes. For the same reasons, standardized IBLs should have a fixed orientation.

Values Interpretation

The values coming from IBL should have a well-defined physical meaning. This implies their range and possible runtime adjustments (bias / multiplier).

Bitstream format

Regardless of prefiltering, there are multiple storage options. The final choice should take into account data transmission, runtime processing, and VRAM costs.

@emackey
Member

emackey commented Aug 12, 2020

Also, storing non-pre-filtered IBLs, such as HDR / raw RGBE, directly in the glTF goes a little against the spirit of delivering the data in a ready-to-render form, as pre-filtered data would be.

For example, BabylonJS developed their own *.env format (which is easy to create) to store the results of pre-filtering a *.hdr file. It would be fantastic if Khronos could offer an open-standard counterpart in a similar role to Babylon's *.env, one that could deliver an IBL that is pre-filtered and ready to use with a PBR model, with the orientation and exposure level firmly specified.

This could be KTX2 if it can include both diffuse and specular pre-filtered environments. Otherwise, I think we could use an empty glTF file with a KHR extension as a container for such an environment. It might even warrant a new file extension, to indicate that it contains only an environment with no model and is intended to be loaded alongside some other glTF model file. Of course, you could still bundle such an environment along with a model in a single glTF file.

@elalish
Contributor

elalish commented Aug 12, 2020

I agree with @lexaknyazev but disagree with @emackey. Every renderer I know (babylon, filament, three, and the sample renderer) uses a different IBL prefiltering format, which also implies different shaders to interpret it. They have different pros and cons, different artifacts, different upfront and per frame costs. There is not at all a clear "right way" and they are in no way interoperable, yet they all do a pretty good job with PBR. Also, as I've demonstrated with three.js, it is even possible to get good results now with just-in-time prefiltering, which removes the need for transmitting prefiltered IBLs at all (since I can prefilter them in less time than a texture upload takes).

@emackey
Member

emackey commented Aug 12, 2020

That's a fair observation that different engines need different pre-filtering.

I still think this type of extension is to be handled with extreme caution. Typical glTF models are intended to integrate into a variety of lighting environments, unless the model contains a complete scene description including the environment (which has not been a common use case so far, to my knowledge).

I've seen issues on GitHub (and I'm not naming names) where developers wanted to prevent users from selecting their own lighting environments, preferring to wait for Khronos to ship the IBL along with the model. This is not the expected default case. A typical glTF file contains a single object or a couple of objects that are to be placed into a lighting environment of the client's choosing.

@garyo

garyo commented Aug 12, 2020

I don't know about others, and I'm sure I'm out of the mainstream, but I'm using glTF as an exchange format between my front end (three.js) and my back end (blender). Right now I have my own scene file format that includes the environment (typically .hdr or .exr, equirectangular) plus the glTF scene object with pretty much everything else, because I can't represent the environment in glTF. I'd love to have it all in glTF.

@UX3D-nopper
Contributor

UX3D-nopper commented Aug 13, 2020

That's a fair observation that different engines need different pre-filtering.

I still think this type of extension is to be handled with extreme caution. Typical glTF models are intended to integrate into a variety of lighting environments, unless the model contains a complete scene description including the environment (which has not been a common use case so far, to my knowledge).

It is a common use case, e.g.:
Bright Little Tokyo https://sketchfab.com/3d-models/bright-little-tokyo-40ca86eb17d0418bbd1b5e5308ba346b
Postwar City - Exterior Scene https://sketchfab.com/3d-models/postwar-city-exterior-scene-30b694d1a4074855a1116a15a0f75731
And there are many more scenes available.
Also, TurboSquid supports scenes, e.g.
https://www.turbosquid.com/3d-model/architecture
And finally, glTF supports a scene with several root nodes plus a node hierarchy: https://github.com/KhronosGroup/glTF/tree/master/specification/2.0#scenes
If the only use case of glTF were to display one mesh, the scene concept would be obsolete.

I've seen issues on GitHub (and I'm not naming names) where developers wanted to prevent users from selecting their own lighting environments, preferring to wait for Khronos to ship the IBL along with the model. This is not the expected default case. A typical glTF file contains a single object or a couple of objects that are to be placed into a lighting environment of the client's choosing.

@UX3D-nopper
Contributor

Please continue discussion here: #1850
