2MGFX does not work on non-Windows platforms #2167

Open
hach-que opened this Issue Dec 16, 2013 · 80 comments

@hach-que
Contributor

hach-que commented Dec 16, 2013

I'm assuming that 2MGFX is not meant to compile under Linux. Are there any intentions to change this in the future so that compilation can be done on non-Windows platforms?

@tomspilman

Member

tomspilman commented Dec 16, 2013

I would say the opposite... 2MGFX is intended to be used on non-Windows platforms. Just at the moment it cannot.

The big issue is that it depends on MojoShader and DX tools which only run on Windows. The planned fix for that is to support a pure GLSL FX file which can be fully processed on non-Windows platforms.

I've just not had the time to work on this.

@hach-que

Contributor

hach-que commented Dec 16, 2013

If you like I can look into adding GLSL shader compilation into the MonoGame content pipeline. Since we're developing on multiple platforms we're going to need some way of compiling shaders on Linux and Windows.

Are there any ideas on how to handle shaders written in GLSL under the Windows platform? Last I checked, the WindowsGL platform did not have feature parity with the other GL platforms.

@tomspilman

Member

tomspilman commented Dec 16, 2013

If you like I can look into adding GLSL shader compilation
into the MonoGame content pipeline.

That isn't what we need. GLSL shaders are not FX files... it would be a useless feature.

We want to extend 2MGFX to support GLSL code within an FX file.

Are there any ideas on how to handle shaders
written in GLSL under the Windows platform?

This is unnecessary.

The Windows platform supports MGFX... add support for GLSL to 2MGFX and every platform will support it.

I suggest you examine how the 2MGFX tool works before you do anything else.

@hach-que

Contributor

hach-que commented Dec 17, 2013

Yeah that's what I mean (GLSL code in an FX file).

As far as I can tell, 2MGFX uses MojoShader and SharpDX to take a shader written in HLSL and convert it into either a compiled HLSL (DX11) shader or a GLSL shader.

Unless I'm misunderstanding something, I don't see how adding support for compiling GLSL code to a GLSL shader in 2MGFX will assist in targeting Windows, unless the intention is writing the reverse of MojoShader (GLSL code / bytecode -> compiled HLSL shader)?

@hach-que

Contributor

hach-que commented Dec 17, 2013

Okay, I think I misunderstood how shaders are compiled.

Shaders themselves are compiled on each machine; there's no intermediate / compiled version of the shader that is shipped because the shader code has to be compiled for each graphics card. The binary blobs (XNB or otherwise) that are shipped around are actually just wrappers around the shader code that provide metadata for MonoGame to load it (such as the parameters and whatnot). Thus the only thing that needs to be done to support GLSL is to parse the appropriate metadata and store the code in the resulting MGFXO file.

Am I understanding that correctly?
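
To make the "wrapper around the shader code" idea concrete, here is a minimal Python sketch of such a container. The field layout here is entirely hypothetical, invented for illustration; it is not the real MGFXO format:

```python
import struct

def write_effect_blob(glsl_source, params):
    """Pack shader source plus parameter metadata into one binary blob.

    Layout (hypothetical, NOT the real MGFXO format): 4-byte magic,
    parameter count, then (name, type) pairs as length-prefixed UTF-8
    strings, then the length-prefixed shader source.
    """
    out = bytearray(b"MGFX")
    out += struct.pack("<I", len(params))
    for name, ptype in params:
        for s in (name.encode(), ptype.encode()):
            out += struct.pack("<I", len(s)) + s
    out += struct.pack("<I", len(glsl_source)) + glsl_source
    return bytes(out)

def read_effect_blob(blob):
    """Inverse of write_effect_blob: recover metadata and source."""
    assert blob[:4] == b"MGFX"
    off = 4

    def read_u32():
        nonlocal off
        (v,) = struct.unpack_from("<I", blob, off)
        off += 4
        return v

    def read_bytes():
        nonlocal off
        n = read_u32()
        data = blob[off:off + n]
        off += n
        return data

    params = [(read_bytes().decode(), read_bytes().decode())
              for _ in range(read_u32())]
    source = read_bytes()
    return params, source
```

The point is exactly the one made above: the runtime reads the metadata from the wrapper and hands the raw shader source to the driver for compilation on the user's machine.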

@tomspilman

Member

tomspilman commented Dec 17, 2013

I think I misunderstood how shaders are compiled.

Right... looks like you are starting to get it now.

MGFX at its core simply provides the "metadata" for the Effect:

  • What are the parameters names, types, etc?
  • What are all the texture slots and sampler states?
  • What are all the techniques and passes?

Depending on the platform, HLSL and GLSL do not have access to some or all of this data at runtime. 2MGFX gathers this data offline and provides it in a common format that the MonoGame Effect system can understand.

2MGFX also provides other features:

  • Automatic translation of HLSL to GLSL.
  • Optimize the GLSL code... critical for mobile platforms.
  • Makes sure the HLSL/GLSL code is valid.
  • Precompile HLSL into a binary shader.

Currently MGFX only accepts HLSL FX files as input and can do automatic translation to GLSL. This and optimization of GLSL are the primary motivations for using MojoShader. The goal moving forward is to support user-generated GLSL FX files as inputs:

#include "Macros.glsl"

uniform mat4 mvp_unif;
attribute vec3 pos_attr;
attribute vec4 color_attr;
varying vec4 color;

void vertMain()
{
    gl_Position = mvp_unif * vec4(pos_attr, 1.0);
    color = color_attr;
}

void fragMain() { gl_FragColor = color; }

technique
{
    pass
    {
        VertexShader = compile vertMain();
        PixelShader = compile fragMain();
    }
}

To do this the MGFX parser has to be enhanced to support GLSL syntax. Since we cannot lean on the DX tools to provide reflection of GLSL shader parameters we need another solution... possibly parsing them ourselves.

Then there is the whole issue of GLSL optimization which is required to make mobile perform well. Even whitespace in the shader makes mobile startup slower. MojoShader has it easy... it is reading already optimized HLSL bytecode... so Microsoft did the optimization. We need to find some other library to do GLSL optimization that is cross platform.

So there are a lot of non-trivial issues. We don't have to solve all of them at once, but we need to start taking steps towards it.

@paewie


paewie commented Mar 17, 2015

Even though this is quite an old post, I've encountered this issue too. The .fx files won't compile on a Linux system, even though the pipeline itself claims to be cross-platform (and it is, for most content).
The question now: were one year and three months enough to implement this compilation, or am I right that it is still not supported?

@tomspilman

Member

tomspilman commented Mar 17, 2015

It isn't implemented yet.

We have one of the first steps worked out (#1972), but no one has had time/interest to do the work.

There is a push to get MGFX processing working on Mac which might be how this eventually gets worked on.

@paewie


paewie commented Mar 17, 2015

Okay, thanks for the info. Guess I'm switching the OS for shader compilation then :D

@tgjones

Contributor

tgjones commented Apr 9, 2015

#1972 is implemented now.

I'm not sure if we need to keep #3566 and #3270 open - they seem like dupes of this issue?

On a related note, it looks like the Paradox HLSL -> GLSL converter might be released separately under a more friendly OSS licence very soon: https://twitter.com/xoofx/status/586087334388445185

From what I've seen of it so far, it looks pretty robust. Unlike every other HLSL -> GLSL source translator that I've seen, it supports HLSL SM 4.0+ syntax. Obviously we'll have to see if it works well for MonoGame, but if it does, it could replace MojoShader, and provide a path towards shader compilation on non-Windows platforms.

There are two separate issues here, and perhaps they should be discussed separately:

  1. Extend 2MGFX to allow shaders to be written in GLSL.
  2. Replace MojoShader with something that does HLSL -> GLSL source translation, so that it can be done on non-Windows platforms.
@tomspilman

Member

tomspilman commented Apr 10, 2015

1. Extend 2MGFX to allow shaders to be written in GLSL

Right... I think this is the next step to tackle.

We would need to test-convert some complex shader to GLSL, but leave all the sampler states, render states, techniques, and passes in the file. Then we need to fix the MGFX parser to let it parse the info it needs (sampler states, render states, techniques, passes) while passing through the GLSL code.

Then we need to figure out how to strip out the GLSL code from the FX stuff.

First, I suspect the GLSL compiler will not like things like techniques in the file... whereas the HLSL compiler just ignores them.

Second, bigger GLSL files are slower to compile... so ideally we would compact the GLSL down to just the bits needed and no more. This should include removing dead code, like methods from an #included file that are not used in this shader.

Last, we need to figure out how to fake constant blocks efficiently. MojoShader packs all the constants into an array of floats/ints. This way we can do one glProgramUniform call and not dozens to set the constants.
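
As an illustration of that packing scheme (a sketch only; the names and layout are hypothetical, not MojoShader's actual implementation), each uniform can be assigned a vec4-aligned offset into one flat float array, so the whole block can be uploaded with a single glUniform4fv-style call:

```python
def pack_uniforms(uniforms):
    """Assign each uniform a vec4-aligned slot in one flat float array.

    `uniforms` is a list of (name, float_count) pairs. Returns a dict of
    name -> float offset plus the total array size in floats. Aligning
    every uniform to a 4-float boundary mimics register packing, so one
    upload call can replace dozens of per-uniform calls.
    """
    offsets = {}
    cursor = 0
    for name, float_count in uniforms:
        offsets[name] = cursor
        # Round the uniform's size up to a whole number of vec4 registers.
        cursor += (float_count + 3) // 4 * 4
    return offsets, cursor

def fill_array(offsets, total, values):
    """Scatter per-uniform values into the flat array at their offsets."""
    buf = [0.0] * total
    for name, vals in values.items():
        buf[offsets[name]:offsets[name] + len(vals)] = vals
    return buf
```

For example, a mat4 (16 floats) followed by a vec3 and a float would occupy registers 0-3, 4, and 5 respectively, giving a 24-float array to upload in one call.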

Once all this is worked out we can support GLSL FX files... then we can consider automatic translation.

@tgjones

Contributor

tgjones commented Apr 12, 2015

Right... I think this is the next step to tackle.

I'm more interested in the HLSL to GLSL translation, but still, I'll have a look at this and see if I can make some progress ;-)

the GLSL compiler will not like things like techniques in the file... where as the HLSL compiler just ignores it

That's interesting - I actually thought the MGFX parser was already removing the techniques. I didn't realise the HLSL compiler just ignored it. Makes you wonder how many other legacy features are still hidden away in the current compiler - I assumed it would throw an error if you tried to use techniques without setting the legacy flag.

@tgjones

Contributor

tgjones commented Apr 12, 2015

Second bigger GLSL files are slower to compile

GLSL optimizer seems to be quite popular for shrinking GLSL shaders; we could see if that works for us.

@tgjones

Contributor

tgjones commented Apr 12, 2015

I can see roughly how a GLSL FX format might work. But there are some specifics to work out: for example, how will sampler states work? In GLSL, you only define a sampler, and there's no mechanism for setting sampler states:

uniform sampler2D tex1;

Maybe we could also let users set the GLSL version using the compile directive:

technique
{
    pass
    {
        VertexShader = compile glsl_140 vertMain();
        PixelShader = compile glsl_140 fragMain();
    }
}

@tomspilman

Member

tomspilman commented Apr 21, 2015

Sorry... this slipped down into the bowels of my inbox. :)

how will sampler states work?

The sampler states are all an MGFX feature... so they can look however we want them to.

So maybe it can be like this:

uniform sampler2D tex1
{
    AddressU = Wrap;
    AddressV = Wrap;
};

... or like this...

sampler2D tex1
{
    AddressU = Wrap;
    AddressV = Wrap;
};

It would be up to us to transform this into valid GLSL after the MGFX parser gets done with it. So we would be altering it to remove the sampler state data before it is passed into any GLSL parser/compiler.

Maybe we could also let users set the GLSL version using the compile directive:

Absolutely... the compile directive is all an MGFX thing. We just need to remove that all before we pass it into anything expecting pure GLSL.
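
A rough sketch of that stripping pass (the regexes and function name are illustrative only, not the real 2MGFX code): drop technique blocks entirely and reduce sampler declarations with state blocks back to plain declarations, before handing the source to anything expecting pure GLSL:

```python
import re

def strip_mgfx_constructs(source):
    """Turn MGFX-flavored GLSL into plain GLSL (illustrative sketch).

    Two transforms, matching the ideas discussed above:
      1. Drop `technique { pass { ... } }` blocks, which a GLSL
         compiler would reject.
      2. Collapse `sampler2D name { AddressU = Wrap; ... };` into the
         plain declaration `sampler2D name;`.
    Brace matching here is shallow (one nested level), which is
    enough for this sketch but not for arbitrary input.
    """
    # 1. Remove technique blocks: `technique { ... pass { ... } ... }`.
    source = re.sub(
        r"technique\s*\{(?:[^{}]*\{[^{}]*\})*[^{}]*\}",
        "",
        source,
    )
    # 2. Reduce sampler state blocks to bare sampler declarations.
    source = re.sub(
        r"(sampler2D\s+\w+)\s*\{[^{}]*\}\s*;",
        r"\1;",
        source,
    )
    return source
```

A real implementation would do this from the parser's syntax tree rather than with regexes, but the transformation itself is the same.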

@tgjones

Contributor

tgjones commented Apr 21, 2015

The sampler states are all an MGFX feature

Yes, but the current syntax is entirely from D3D, so I just wanted to check that we're happy wedging that into GLSL. Your suggested syntax makes sense though - and declaring sampler state properties is an optional feature, so people can just declare samplers using standard GLSL if they want to:

sampler2D tex1

Last question for now: how is the parser going to know which version of the parser to use? I assume we want to know upfront - trying to allow all possible syntax in a single parser would be pretty ugly. Will we use a different filename, or have some kind of #pragma at the start of the file? Not sure what's best.

(Obviously, the compile directive in the technique / pass node can have a GLSL-specific version, but we won't know what that is until we've parsed the file...)

@tomspilman

Member

tomspilman commented Apr 21, 2015

I just wanted to check that we're happy wedging that into GLSL.

Seems better to crib syntax from HLSL than invent a new one.

That is, unless Vulkan has added something like this to the new GLSL spec?

how is the parser going to know which version of the parser to use?

I don't know. I was hoping not to end up with something like myeffect.fx and myeffect.glfx.

We could maybe use a regex to count HLSL and GLSL keywords and guess based on which language's keywords occur more frequently.

We could look for the VertexShader = compile glsl_140 vertexShader() and decide based on that.

trying to allow all possible syntax in a single parser would be pretty ugly.

I really don't want to have 2 parsers for MGFX... just about everything in MGFX is not specific to any shader language. I would hate to have to fix 2 copies of the parser every time we add/change something.

A single parser would be much better IMO, especially since language-specific workarounds are extremely minimal right now.
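
That keyword-frequency heuristic could look something like this (a sketch; the keyword sets are illustrative, not exhaustive):

```python
import re

# Small, illustrative keyword sets; a real detector would use fuller lists.
HLSL_KEYWORDS = {"float2", "float3", "float4", "float4x4", "tex2D",
                 "SV_Position", "cbuffer", "Texture2D"}
GLSL_KEYWORDS = {"vec2", "vec3", "vec4", "mat4", "texture2D",
                 "gl_Position", "gl_FragColor", "attribute", "varying"}

def guess_shader_language(source):
    """Guess HLSL vs GLSL by counting language-specific keywords."""
    words = re.findall(r"[A-Za-z_]\w*", source)
    hlsl = sum(w in HLSL_KEYWORDS for w in words)
    glsl = sum(w in GLSL_KEYWORDS for w in words)
    return "glsl" if glsl > hlsl else "hlsl"
```

Checking the compile directive's profile (glsl_140 vs vs_4_0 and friends) would be more reliable, but a frequency count like this could serve as a fallback when no profile is given.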

@theZMan

Contributor

theZMan commented Apr 21, 2015

VertexShader = compile glsl_140 vertexShader() seems like a rather clean way of identifying what to do, unless it means parsing the whole file twice: once to find the shader compile instruction and then again to parse it the right way.

@hach-que

Contributor

hach-que commented Apr 21, 2015

This is why I think we should just define our own shader language and cross-compile it into DirectX and Vulkan bytecode. For legacy GLSL-only platforms we could then do what we currently do and cross-compile the DirectX bytecode to GLSL.

@theZMan

Contributor

theZMan commented Apr 21, 2015

Nobody needs a new shader language...

MonoGame is not a big enough player to force learning something new on the world, IMO, even if it is the cleanest solution to the problem.

@tomspilman

Member

tomspilman commented Apr 21, 2015

unless it means parsing the whole file twice

It shouldn't... parsing the MGFX part of the effect is not platform specific.

https://github.com/mono/MonoGame/blob/develop/Tools/2MGFX/ShaderInfo.cs#L72

This is why I think we should just define our own shader language

We should not... this is exactly what the problem is. We don't need more languages.

If MonoGame were a "Game Engine" we would define a language for domain-specific shading like Unity and/or Unreal do. We would not write the equivalent of HLSL or GLSL.

@hach-que

Contributor

hach-que commented Apr 21, 2015

I'd be surprised if Unity, Unreal and Source didn't go in that direction though, and it's almost guaranteed that they won't be the same language.

At the end of the day, MonoGame's / XNA's shader language is already custom; it's an extended version of HLSL with extra features so that MonoGame has appropriate metadata at runtime. It's also significantly different to what other game engines are using (Unity uses NVIDIA's Cg for cross platform shaders), so I don't believe it will be much of an issue, especially if it remains highly compatible with HLSL.

@theZMan

Contributor

theZMan commented Apr 21, 2015

You are not wrong...

But MonoGame's biggest selling point is that it's XNA... making a big change, even if it's HLSL-like, is moving away from that goal. If we had 12 gazillion users like Unity and Unreal it would be much easier to make a change like that.

XNA's extensions are not XNA-specific, are they? I think it just used the DirectX effect files. So it was well documented by Microsoft.

@tomspilman

Member

tomspilman commented Apr 21, 2015

It's also significantly different to what other game engines are using

MonoGame isn't a game engine... that is the point. If we were, we would do what Unity, Unreal, and Source do... define high-level languages for domain-specific shading. They do not simply define a replacement for HLSL or GLSL.

We have a plan... it is a good one... we're not changing it.

@hach-que

Contributor

hach-que commented Apr 21, 2015

Nobody is suggesting we get rid of XNA functionality. If you want to compile XNA shaders you could still do that, but obviously that requires Windows to compile because you need access to the DirectX APIs that parse HLSL.

My suggestion is to have an XNA/HLSL-like shading language that doesn't have this requirement (because we do the parsing and translation to bytecode in C#). I say HLSL-like because I doubt we'd be able to get 100% compatibility due to reimplementing the parser.

@tomspilman


tomspilman Apr 21, 2015

Member

My suggestion is to have an XNA/HLSL-like shading language that doesn't have this requirement

Then what? How does it go from this language to DirectX shader bytecode?


@hach-que


hach-que Apr 21, 2015

Contributor

You write out the DirectX shader assembly or bytecode like you would compiling any language.

See https://msdn.microsoft.com/en-us/library/windows/desktop/bb219840%28v=vs.85%29.aspx for the assembly reference. It doesn't appear that there's a bytecode reference, although it should be possible to use D3DDisassemble to pair up assembly instructions with the resulting byte code (this is rather annoying because I thought they did have documented bytecode but apparently not).


@tomspilman


tomspilman Apr 21, 2015

Member

You write out the DirectX shader assembly or bytecode like you would compiling any language.

So we just skip over all the hardware specific optimizations HLSL compiler provides?

Same for the hardware specific optimizations that Sony's PSSL compiler provides for PS4 hardware?

This seems like a bad idea.


@hach-que


hach-que Apr 21, 2015

Contributor

How is HLSL going to be doing hardware specific optimizations when the same compiled shader (bytecode) is used on NVIDIA, AMD and Intel cards?

Unless you mean optimizations in general, but that's got nothing to do with hardware specific code.


@tomspilman


tomspilman Apr 21, 2015

Member

How is HLSL going to be doing hardware specific optimizations

It does... I can't find the slide decks now, but MS has talked about the things the HLSL compiler does under the hood before. Basically they know what is a common fast path on most shipping hardware and optimize to that path. It is all a lot more than you expect... this is why compiling HLSL is a slow process.

I stick to my assertion that thinking we know better than the DirectX team or Sony engineers on how to best optimize shaders is a bad idea.


@tgjones


tgjones Apr 22, 2015

Contributor

You write out the DirectX shader assembly or bytecode like you would compiling any language.

@hach-que That's a nice idea, but it's just not possible. The D3D compiler inserts a checksum into the compiled shader, and no-one outside MS knows the algorithm to calculate that checksum. (And I should know, I wrote a D3D bytecode disassembler.) It's certainly possible to go from bytecode back to shader code, but not currently the other way around, because of this checksum issue. You can see one of the Unity developers mention it here. Back when I wrote that disassembler, I actually hoped to write an assembler, but had to stop when I hit this checksum issue.

Anyway, I think the current plan is a good one:

  1. First, allow MGFX files to be written using [a form of] GLSL.
  2. Once that's done, implement a cross-compiler solution so that shaders can be written in HLSL, and cross-compiled to GLSL, even on non-Windows platforms. So far, Paradox's HLSL-to-GLSL converter looks the most promising to me. (I actually started writing my own, specifically for MonoGame, but paused when it became clear that Paradox's cross-compiler is going to be separately released under a more friendly open source license.)

I personally think (2) is going to be more useful in the long run, but I respect @tomspilman's opinion that (1) is the more important of the two paths right now.


@KonajuGames


KonajuGames Apr 24, 2015

Contributor

For reference, here's an article about doing a cross-compilation of shaders. Might find some useful tips in there.
http://www.gamedev.net/page/resources/_/technical/apis-and-tools/shader-cross-compilation-and-savvy-the-smart-shader-cross-compiler-r4038


@SirCmpwn


SirCmpwn Jun 25, 2015

SirCmpwn/TrueCraft#169 - the situation with shaders in MonoGame is gross and unworkable.


@hach-que


hach-que Jun 25, 2015

Contributor

@SirCmpwn I think this mess is basically waiting on Vulkan, so that we can have a parser that parses HLSL and writes out Vulkan bytecode.

Tom suggested supporting GLSL, but with Vulkan around the corner, and the fact that no-one really wants to write every shader twice, I don't think this will move forward until Vulkan arrives (unless someone contributes the work, but it won't be me even though I use MonoGame pretty much solely on Linux, because I prefer the waiting for Vulkan option).


@tomspilman


tomspilman Jun 25, 2015

Member

the situation with shaders in MonoGame is gross and unworkable.

@SirCmpwn - Games have been shipped on all our supported platforms including consoles with the existing MonoGame shader system. Can you elaborate exactly what problem you're having?


@SirCmpwn


SirCmpwn Jun 25, 2015

@tomspilman I included a link in my earlier comment that highlights the problems with this.

@hach-que I'd prefer to write two shaders than to live with the current circumstances, and I can't imagine Vulkan will be around within the next year.

For the record, my project (and my stake in MonoGame) is moving to OpenTK. This is just retrospective feedback.


@SirCmpwn


SirCmpwn Sep 1, 2015

Bump. OpenTK is giving us hell and I'm thinking about going back to MonoGame and shoehorning BasicEffect into our needs. This sucks, guys.


@paewie


paewie Sep 1, 2015

@SirCmpwn the simplest workaround I found is providing platform-specific packages for the shaders.
As an idea: you could (as a kind of first-time setup) pack all shaders into an archive or a directory and have your application pick the appropriate ones at launch. You could even delete the other ones. And since you are not using thousands of shaders, the file sizes shouldn't pose a problem. Just do this before you attempt to use the shader, of course.
It's maybe not the "easiest" solution, but it seems to be the smoothest one at the moment, and there are a lot of programs out there doing such stuff at their first launch. So it's like industry standard ;) And if you are even willing to write two shaders, then there shouldn't be a problem with compiling the shader twice (GL platforms and DX).
Actually I'm not quite sure if the OpenGL shaders are even cross-platform compatible (for example, built for WindowsGL but also working on Linux)...


@tomspilman


tomspilman Jan 25, 2016

Member

FYI.

I'm currently working on removing all FX syntax (sampler state, render states, techniques/passes) after we parse the effect data out. This will let us pass a clean non-FX code to the shader compiler. This is important for some platforms that don't support FX syntax like GLSL.

Should have a PR up soon.


@tomspilman


tomspilman Jan 25, 2016

Member

Filtering FX syntax ended up pretty simple in the end. See #4462.


@tgjones


tgjones Feb 27, 2016

Contributor

In MonoGame.Content.Pipeline.Windows / 2MGFX, replace MojoShader with hlsl2glsl. This is hard, because MonoGame doesn't just use MojoShader to convert to GLSL - it also uses it to extract information about input attributes and their semantics, which is not (currently) exposed by hlsl2glsl. This is where I am now.

I've just found a possible solution to this problem (keeping the link between input attributes and semantics), that doesn't involve modifying hlsl2glsl. I'll give it a try.


@tomspilman

Member

tomspilman commented Feb 27, 2016

Great @tgjones !

@tgjones


tgjones Mar 1, 2016

Contributor

One problem I've encountered doing the HLSL -> GLSL translation, which will also come up with our proposed GLFX format, is around vertex input attributes. I have a workaround for HLSL -> GLSL translation (basically: replace input attribute names with the stringified semantic name, so uv : TEXCOORD0 in HLSL becomes texCoord0 in GLSL). But this is a heads-up for whoever takes on the GLFX work.

XNA and MonoGame use input semantics, not attribute names, to provide the link between vertex data and the vertex shader. For example:

struct VertexInput {
    float3 position: POSITION;
    float3 normal: NORMAL;
};

VertexOutput MyVS(VertexInput input) {
   // ...
}

In MonoGame, the VertexElements in a VertexDeclaration link a VertexElementUsage (and index) to a particular location in the vertex data. VertexElementUsage, combined with usage index, map directly to the HLSL semantic.

Since GLSL doesn't have semantics, this is a problem. A couple of initial suggestions:

  • Add semantics, or some other syntax, to our GLFX format, only for input attributes. Sounds nasty.
  • Require convention-based attribute names. I.e. the position attribute MUST be named "position0", the second texture coordinate MUST be named "texCoord1", etc. Sounds like the type of thing that will generate lots of support requests.
  • Anyone got any better suggestions?

Some versions of GLSL let you bind attributes to specific locations, i.e.

layout(location=0) in vec4 position;

but I don't think this helps us, at least with MonoGame's current HLSL-semantic-centric design.

Anyway, like I said, I have a workaround for HLSL -> GLSL translation, so this just affects our proposed GLFX format, which we hope will allow direct compilation of GLSL shaders without converting from HLSL.
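To make the workaround concrete, here is roughly what the VertexInput struct above would become after translation, with each attribute renamed to its stringified semantic (a hypothetical sketch, not actual 2MGFX output):

```glsl
// Hypothetical translated attributes: names are the stringified HLSL
// semantics, so the runtime can match each attribute back to a
// VertexElementUsage plus usage index.
attribute vec3 position0;  // was: float3 position : POSITION;
attribute vec3 normal0;    // was: float3 normal   : NORMAL;
```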


@tomspilman


tomspilman Mar 1, 2016

Member

Add semantics, or some other syntax, to our GLFX format, only for input attributes. Sounds nasty.

Well... it might be a little alien to GLSL users... but so are techniques, passes, sampler state, and other FX syntax.

And I know now how to easily strip these bits out of the parsed FX file. So it would be easy to strip out what GLSL doesn't support.

Require convention-based attribute names.

Isn't that how it already works... doesn't GLSL use built in attribute names like gl_Position, gl_Normal, gl_TexCoord[0], etc?


@KonajuGames


KonajuGames Mar 1, 2016

Contributor

Prior to version 4.0 it used fixed names. These have been deprecated for the new layout scheme.


@tomspilman


tomspilman Mar 1, 2016

Member

These have been deprecated for the new layout scheme.

So do we need to support the old scheme still or can we count on the new layout scheme working on all our current target GL devices... iOS, Android, Windows, Mac, Linux, etc. ?


@KonajuGames


KonajuGames Mar 2, 2016

Contributor

So do we need to support the old scheme still or can we count on the new layout scheme working on all our current target GL devices... iOS, Android, Windows, Mac, Linux, etc. ?

That would be nice, but no. OpenGL ES Shading Language 1.0 does not support layout and still uses the fixed attribute names.


@Jjagg


Jjagg Sep 20, 2016

Contributor

It would be really nice to get this resolved so we can get rid of MojoShader and get access to newer shader features and pipeline stages. This also needs to be resolved to get DX12, Vulkan and Mantle working once we design and implement the new graphics API. @tgjones What's the status of your hlsl2glsl branch? Did anything change since the most recent comments? I'd like to help out here if possible at all.

Add semantics, or some other syntax, to our GLFX format, only for input attributes.

Is this what we want for the GLFX format? I personally prefer this to having convention based names and can't come up with anything better.


@tomspilman


tomspilman Sep 20, 2016

Member

@Jjagg

I'm still of the opinion that hlsl2glsl is irrelevant to this issue. It would be nice to have automatic HLSL source to GLSL source translation... but that isn't the blocker for this task.

These are the remaining things to address:

  • Manually port at least one of the existing stock shaders to GLSL to work out the syntax.
  • Add a new OpenGL2 shader profile that only accepts GLSL code.
  • Test and fix the MGFX parser if it has problems passing thru any GLSL syntax.
  • Figure out how to patch in automatic GLSL changes required by MonoGame.
  • Decide how to fake constant buffers and ensure efficient updates to uniforms.
  • Hook up glsl-optimizer or something like that to strip unused code from shaders.

If we started hitting these issues in this order... we would get this done.

I would love to do the work, but I've been off in console land. The good news of that is I've done lots of fixes to the MGFX system to allow for console shader languages which should make things more flexible for GLSL support. However no consoles we've seen so far use GLSL as their shader language... so it has not been something I could spend time on.


@Jjagg


Jjagg Sep 21, 2016

Contributor

@tomspilman Hey, console land is a nice place too ^^

I'll port the SpriteEffect shader to 'glfx' later this week and I'll put a gist for review. Talking about samplers, techniques and input layout, reading through this thread I'd say

  • samplers can be glsl syntax for declaration followed by { SamplerState }; which will be parsed and filtered out by mgfx
  • techniques follow the regular fx syntax but with glsl version and get filtered out when parsed.
  • require semantics after 'in' declarations also filtered out after parsing

Another thing: with the technique system, vertex and pixel shaders should be in a single file. (This also means we can't have 'main' for the shader stage entry point function name.) I've seen people combine them in a single file using preprocessor defines. It would be nice if we could add in #ifdefs based on the techniques, but we'd need to know which input and output belongs to what shader stage. Should we figure out what methods use what variables? Or turn it upside down and use function arguments and return types? The latter deviates from standard glsl a lot, so I'd opt for parsing to figure it out.
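As a sketch of the first bullet, a GLFX sampler declaration could be standard GLSL followed by an FX-style state block (hypothetical syntax; the MGFX parser would read the states and strip the block before the source reaches the GL compiler):

```glsl
// Hypothetical GLFX sampler syntax: a plain GLSL declaration plus an
// FX-style state block that 2MGFX parses for sampler states, then removes.
uniform sampler2D Texture
{
    Filter = Linear;
    AddressU = Clamp;
    AddressV = Clamp;
};
```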


@tomspilman


tomspilman Sep 21, 2016

Member

I'll port the SpriteEffect shader to 'glfx' later this week and I'll put a gist for review.

Perfect!

Trying to follow the regular FX syntax as much as possible with required changes to fit GLSL is the best approach and will make things easier on the MGFX parser.

we can't have 'main' for the shader stage entry point function name

Right.... they would be named and referenced in the techniques like we do for HLSL. Internal to the 2MGFX compiler we would deal with renaming methods to work how GL wants it.

I'd opt for parsing to figure it out.

That is the plan. We will deal with this internally.

Most GLSL optimization tools strip out unused code... so it would perfectly solve this issue while making the code nice and compact for runtime use.


@Jjagg


Jjagg Sep 21, 2016

Contributor

Since SpriteEffect.fx is pretty small I'll just post the whole thing here

uniform mat4 MatrixTransform;
uniform sampler2D Texture;


attribute vec4 position : POSITION0;
attribute vec4 color    : COLOR0;
attribute vec2 texCoord : TEXCOORD0;

varying vec4 vColor;
varying vec2 vTexCoord;

void SpriteVertexShader()
{
    gl_Position = MatrixTransform * position;
    vColor = color;
    vTexCoord = texCoord;
}

void SpritePixelShader()
{
    gl_FragColor = texture2D(Texture, vTexCoord) * vColor;
}

technique SpriteBatch 
{
    pass 
    {
#if GLES
        VertexShader = compile glsl_100 SpriteVertexShader();
        PixelShader  = compile glsl_100 SpritePixelShader();
#else
        VertexShader = compile glsl_110 SpriteVertexShader();
        PixelShader  = compile glsl_110 SpritePixelShader();
#endif
    }
};

I used https://github.com/mattdesl/lwjgl-basics/wiki/GLSL-Versions to find the glsl versions for OpenGL 2.0 and GLES 2.0.


@Jjagg

Jjagg Sep 21, 2016

Contributor

Do we need custom precision qualifiers? Especially to not use highp in GLES? E.g. precision mediump float;

@tomspilman

tomspilman Sep 21, 2016

Member

Since SpriteEffect.fx is pretty small I'll just post the whole thing here

I think that looks good. Pretty logical as a GLSL version of an FX file.

The next step would be to submit that FX file to 2MGFX and see if it gets through the preprocessor step:

https://github.com/MonoGame/MonoGame/blob/develop/Tools/2MGFX/ShaderInfo.cs#L60

And if it gets through the MGFX parser:

https://github.com/MonoGame/MonoGame/blob/develop/Tools/2MGFX/ShaderInfo.cs#L63

And if it then evaluates and returns some reasonable ShaderInfo:

https://github.com/MonoGame/MonoGame/blob/develop/Tools/2MGFX/ShaderInfo.cs#L74

Do we need custom precision qualifiers?

Maybe... you should manually run 2MGFX on the existing SpriteEffect.fx and dump out the GLSL generated by MojoShader. It will give you a good idea of everything the current GL implementation expects in the GLSL code.

@Jjagg

Jjagg Sep 21, 2016

Contributor

MojoShader includes precision qualifiers for GLES: mediump float and mediump int for the vertex shader, and highp float and mediump int for the fragment shader. I'm not sure how we should allow different precision in different shader stages. MojoShader can cleanly separate shader stages because the code is generated, so it doesn't have this problem, but you should be able to mix 'n match a bit with glfx. Somehow a user has to bind the precision qualifiers to a specific shader stage. Maybe we take the last precision qualifiers before an entry-point function (which won't be called main, but we can figure it out from the techniques) and apply them to the stage that function is the entry point of. Or we can force users to set the preprocessor directives themselves, but I'm against this because we require a single file and IMO we should try to avoid any pain that could come with that.

@tomspilman

tomspilman Sep 21, 2016

Member

@Jjagg

MojoShader includes precision qualifiers for GLES.

Well 2MGFX will continue to store one blob of GLSL code per shader stage. That blob of GLSL will contain only the code needed for that one stage. This is how it works now for MojoShader and we want to maintain that behavior.

This requires 2MGFX to split up the FX. My expectation is that we can do this with the GLSL optimization pass, as it will strip all unnecessary code automatically.

So after the code for the stage is stripped we would automatically prepend the same MojoShader precision qualifiers to the GLSL for each individual stage.

Later after we have things working we can circle back and make precision qualifiers something the FX writer can tweak themselves.
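
To make the prepending step concrete, here is a minimal sketch. The `PRELUDES` table and `prepend_precision` function are illustrative names (not actual 2MGFX API); the qualifier values mirror the MojoShader defaults quoted above.

```python
# GLES precision preludes mirroring what MojoShader emits, as described
# in this thread: mediump float/int for the vertex stage, highp float +
# mediump int for the fragment stage.
PRELUDES = {
    "vertex":   "precision mediump float;\nprecision mediump int;\n",
    "fragment": "precision highp float;\nprecision mediump int;\n",
}

def prepend_precision(stage: str, stripped_glsl: str, gles: bool) -> str:
    # Desktop GLSL 1.10 has no precision qualifiers, so only prepend for GLES.
    if not gles:
        return stripped_glsl
    return PRELUDES[stage] + stripped_glsl
```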

@Jjagg

Jjagg Sep 21, 2016

Contributor

So after the code for the stage is stripped we would automatically prepend the same MojoShader precision qualifiers to the GLSL for each individual stage.
Later after we have things working we can circle back and make precision qualifiers something the FX writer can tweak themselves.

Sounds good

@dellis1972

dellis1972 Sep 22, 2016

Contributor

@Jjagg we do need precision stuff for GLES, otherwise the compiled shaders run really slowly on lower-end devices.

Some notes about the GLSLOptimizer [1] stuff I did. You will need to split out the Vertex/Pixel shader bits before passing them to the optimiser, otherwise it will error since you tell it you are compiling either a vertex or pixel shader. Just a heads up and something to watch out for.

[1] https://github.com/infinitespace-studios/GLSLOptimizerSharp

@tomspilman

tomspilman Sep 22, 2016

Member

otherwise it will error

Error how specifically?

My understanding was as long as only one method has the main() name it will optimize that one and remove everything else from the source.

We would do this to the source before passing it to the optimizer with a simple string replacement with the method name passed to the technique.
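
As a rough sketch, that replacement could look like this (`rename_entry_point` is a hypothetical helper, not part of 2MGFX):

```python
import re

def rename_entry_point(glsl_source: str, entry_point: str) -> str:
    # Rename the technique's entry-point function to 'main' so a standard
    # GLSL compiler/optimizer treats it as the stage entry. A word-boundary
    # match avoids clobbering identifiers that merely contain the name
    # (e.g. 'SpriteVertexShaderHelper').
    return re.sub(r'\b%s\b' % re.escape(entry_point), 'main', glsl_source)
```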

@dellis1972

dellis1972 Sep 22, 2016

Contributor

Certain code that is valid in a vertex shader is not valid in a pixel shader, and vice versa. The optimiser is a bit stupid in that it does validate the code before optimising. My tests found that if I did not remove the vertex shader bits from the code it would fail.

It might just have been my testing... I dunno, I'm not a shader expert :)
But I just wanted to raise it as a potential problem we might hit.

@tomspilman

tomspilman Sep 22, 2016

Member

But I just wanted to raise it as a potential problem we might hit.

Ok... we will keep an eye out for that.

Worst case we will have to clean things up a bit ourselves first. That could be as simple as replacing some keywords specific to vertex or pixel shaders with whitespace.

@Jjagg

Jjagg Sep 22, 2016

Contributor

Effect stuff and the sampler get parsed nicely as expected. The next step is parsing and removing semantics. The parser should be extended to recognize input variables (attribute or in, depending on the GLSL version) and semantics, as well as methods (to replace their names for the optimizer). We should split up the parser so we have an HLSL and a GLSL version to keep things clean and prevent parsing errors. Maybe have an EffectParser and then Hlsl- and GlslParser that extend it? Or just two separate parsers if that's easier, since TinyPG probably can't do stuff like that.
When semantics are removed we need to add precision qualifiers and the GLSL version, pass the file to the optimizer for all functions defined in techniques (we should cache results so we don't do double work) and write out the results + metadata.

@Jjagg

Jjagg Sep 23, 2016

Contributor

As @dellis1972 said, the optimizer requires you to pass correct syntax. That implies we need to do the following things before sending stuff to the optimizer:

  1. replace the name of the function that's the entry point of the shader stage to optimize with 'main'
  2. comment out the function content of all other functions that can be an entry point. This gets rid of usage of reserved variables (like gl_Position) and also prevents the optimizer from complaining about assigning to varying variables in the vertex shader function when optimizing the fragment shader (you can't assign those in the fragment shader stage). I think this is generally a smart thing to do since you can't use these functions from another shader stage anyway, and there might be other errors lurking if we don't do this. We can just block comment from right after the opening bracket of a function to before the closing bracket (not sure if you can have nested brackets in any way in GLSL, but we can count opening/closing brackets to handle it if you can)
  3. for the fragment shader, remove 'attribute' before the vertex input variables or comment them out. Attribute variables are not allowed in the fragment shader.

Things will be harder with modern GLSL since the 'in' and 'out' variables should be matched to the correct shader stage. I don't think it would be too hard if we use the fact that you need an in and out variable with the same name between vertex and fragment shader, but that will complicate things when we add support for other shader stages. I'll just get this working with old GLSL first.
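
The three steps above could be sketched roughly like this. All names here are hypothetical, and the brace counting assumes comments and strings have already been stripped:

```python
import re

def _blank_function_body(src: str, name: str) -> str:
    # Step 2: empty out the body of function `name` by counting braces
    # from its opening '{', so reserved variables like gl_Position in
    # other entry points don't trip up the optimizer.
    m = re.search(r'\bvoid\s+%s\s*\([^)]*\)\s*\{' % re.escape(name), src)
    if not m:
        return src
    depth, i = 1, m.end()
    while i < len(src) and depth:
        depth += {'{': 1, '}': -1}.get(src[i], 0)
        i += 1
    return src[:m.end()] + src[i - 1:]

def prepare_stage(src: str, entry: str, other_entries: list, stage: str) -> str:
    for other in other_entries:
        src = _blank_function_body(src, other)
    if stage == "fragment":
        # Step 3: 'attribute' declarations are illegal in a fragment shader.
        src = re.sub(r'^\s*attribute\b.*$', '', src, flags=re.M)
    # Step 1: the optimizer expects the entry point to be named 'main'.
    return re.sub(r'\b%s\b' % re.escape(entry), 'main', src)
```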

@Jjagg

Jjagg Sep 24, 2016

Contributor

VertexShader = compile glsl_140 vertexShader() seems like a rather clean way of identifying what to do unless it means parsing the whole file twice - once to find the Shader compiler instruction and then again to parse it the right way

...

It shouldn't... parsing the MGFX part of the effect is not platform specific.

It is now :/ because we parse input variables (to get the semantics) for GLSL, and things may break if we do the same thing for HLSL... Specifically, we match on the 'in' and 'attribute' keywords to know that we are parsing an input variable, but 'attribute' is not a keyword in HLSL, so if you have a variable named attribute in your code the parser will expect to read Identifier Identifier Colon Identifier Semicolon - where an Identifier is just an alphanumeric word, optionally with underscores in it - after that, and throw if it cannot... So we need to know the language beforehand to prevent this. Or just document that people shouldn't name their variables attribute and go with it, but that's a bit weird for users.

EDIT: maybe we can manage if we just parse "Colon Identifier Semicolon" for the semantics. You only use colons for semantics AFAIK, and we parse comments separately so colons in there won't get caught. If the glsl-optimizer gives the input variables in the same order they were defined (which I think it does) we can just match the nth parsed semantic with the nth input variable. And yes... Yes... You are right, that would be very hacky! ;)
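
A rough sketch of that "Colon Identifier Semicolon" matching (`parse_semantics` is a hypothetical helper, and it assumes comments have been stripped beforehand):

```python
import re

# Pulls every ': SEMANTIC;' pair out of the source in declaration order,
# so the nth semantic can be paired with the nth input variable.
SEMANTIC_RE = re.compile(r':\s*([A-Za-z_][A-Za-z0-9_]*)\s*;')

def parse_semantics(src: str) -> list:
    return SEMANTIC_RE.findall(src)

src = """attribute vec4 position : POSITION0;
attribute vec4 color    : COLOR0;
attribute vec2 texCoord : TEXCOORD0;"""
print(parse_semantics(src))  # ['POSITION0', 'COLOR0', 'TEXCOORD0']
```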

@tomspilman

tomspilman Dec 7, 2016

Member

@Jjagg - Got a little time to test this command line tool: https://github.com/LukasBanana/XShaderCompiler

@Jjagg

Jjagg Dec 7, 2016

Contributor

That looks very impressive! I don't think we can get reflection data from it (yet?) as is. It already has more features than the HLSL2GLSL + GLSLOptimizer path in terms of HLSL/GLSL versions and shader stages.

I haven't worked on this for over two months, but back then I got the build working at least. It was just the runtime that needed some work to handle GLSL in general (with MojoShader things are done a bit different) and some optimization for loose uniforms. I haven't gotten back to it because it seems a bit like a dead end. HLSL2GLSL and GLSLOptimizer still restrict us to vertex and pixel shaders and break for new features of DX11/12. Both are no longer in development. Expanding that path ourselves would take a lot of effort, though it would allow people to compile GLSL shaders on non-Windows platforms at least. But there are people out there making better stuff that we can use. XShaderCompiler is definitely worth following.

I've also been keeping an eye on development of glslang, specifically the HLSL parts (KhronosGroup/glslang#362). glslang - in the near future - should be able to compile HLSL/GLSL to SPIR-V. Then there's SPIRV-Cross which can disassemble SPIR-V to GLSL, Metal SL (experimental) and C++ (experimental). So glslang + SPIRV-Cross could be a way to go from HLSL to optimized GLSL too along with getting SPIR-V and MSL support.
Apparently Microsoft is also working on an open source HLSL compiler, so that would mean even HLSL could be compiled on non-Windows platforms.
Nice write-up of the whole situation here (I pretty much summed it up in this post though): http://ask.fm/aras_pr/answers/141344091756

@tomspilman

tomspilman Dec 8, 2016

Member

@Jjagg

I don't think we can get reflection data from it (yet?)

I don't know for sure... but I am sure the tool has this information internally. We should ask the developer to make a feature which dumps this info in some simple textual format for use in other tools.

I haven't worked on this for over two months, but back then I got the build working at least. It was just the runtime that needed some work to handle GLSL in general

Getting the support for passing GLSL through the MGFX parser would be a good thing to get committed first. There are plenty of platforms where this can be useful even without runtime support for GLSL in our OpenGL platforms.

If you don't have the time to merge this, let me know where your branch is and I'll take a shot at pulling this bit out and submitting it.

@aienabled

aienabled Jan 27, 2017

Contributor

New DirectX Shader Compiler based on Clang/LLVM now available as Open Source
https://blogs.msdn.microsoft.com/directx/2017/01/23/new-directx-shader-compiler-based-on-clangllvm-now-available-as-open-source/

It seems it still needs porting to other platforms, but now it's just a matter of time.

@KonajuGames

KonajuGames Jan 27, 2017

Contributor

We have seen that already, and you will note that it requires the Windows 10 SDK and is for DirectX 12. It produces DXIL which is directly usable by newer graphics drivers after the Windows 10 update.

It may be possible to interpret the DXIL and generate GLSL or other shader languages, but with the Windows SDK as a build requirement, it would be restricted to Windows platform from the Windows 10 Anniversary update forward.

@aienabled

aienabled Jan 27, 2017

Contributor

I see now, thanks! That's really sad.
I would love to see 2MGFX as cross-platform software (Windows, Linux and Mac) because our game relies on it heavily - it compiles effects at runtime due to its extreme modding support. Basically you can make a total conversion of the game; everything is compiled by the game itself, including most of the C# code (with the Roslyn compiler), assets, UI (NoesisGUI, a vector-based UI library with XAML markup) and of course shaders/effects. Everything already has complete live-reloading.
We want to bring it to Linux and Mac in the upcoming months, and it seems 2MGFX is the only thing preventing us from doing so.

@KonajuGames

KonajuGames Jan 27, 2017

Contributor

There is work underway (#5354) to allow 2MGFX to parse GLSL shaders directly. The parsing already works to a degree on Linux and Mac, and the runtime portion is in progress. Parsing HLSL on Linux and Mac is not currently on the timeline, but perhaps with some work by the community the new Microsoft HLSL compiler driven by LLVM may become usable on Linux and Mac in the future.

@Jjagg

Jjagg Jan 27, 2017

Contributor

Basically you can make a total conversion of the game, everything is compiled by the game itself, including most of C# code (with Roslyn compiler), assets, UI (NoesisGUI, vector-based UI library with XAML markup) and of course shaders/effects.

That is really cool! Do you keep a devlog or write about this somewhere?
I'm pretty confident we can get the direct GLSL path stable in the next couple months.

@aienabled

aienabled Jan 27, 2017

Contributor

@Jjagg, we have a devlog at http://atomictorch.com but haven't posted any technical stuff yet; we have some drafts in the works. There is some info about the custom game engine we've built for this game at http://wiki.atomictorch.com/Renkei_Engine (the client part is using the MonoGame framework).

The direct GLSL path is good to have for MonoGame development on Linux and Mac, but ideally we need to be able to build the same .FX files on Linux and Mac, otherwise it will complicate the modding and require more work to be done... Of course 2D games don't need so many shaders (though we plan to implement many fullscreen post-effects), but we still want to make everything in the game as streamlined as possible.

@Jjagg

Jjagg Jan 27, 2017

Contributor

Thanks for the links!

If it's about consistency, you can write all shaders in the GLFX format rather than HLSL. We plan to have a translation option as well, but there are no libraries yet that fully support translation from HLSL to GLSL. Possible solutions are HLSL -> XShaderCompiler -> GLSL or HLSL -> glslang -> SPIR-V -> SPIRV-Cross -> GLSL. Both have come up before in this thread. Neither is ready for production.
