
QML: Add scrolling waveforms #3967

Merged
merged 16 commits into mixxxdj:main on Mar 4, 2024

Conversation

@Holzhaus (PR author) commented Jun 8, 2021

No description provided.

@Holzhaus Holzhaus added this to In progress in QML GUI via automation Jun 8, 2021
@github-actions github-actions bot added the skins label Jun 8, 2021
@Swiftb0y (Member) left a comment

Didn't comment on the C++ yet because I assume it's mostly ripped out of the GLSL rendering shader code just to make it work, without cleaning it up much.

@@ -51,6 +51,62 @@ Rectangle {

}

WaveformRow {
Member:

For Qt 5.15 this entire section could be further shortened by using an inline component as a template.
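For illustration, a minimal sketch of that Qt 5.15 inline-component idea (the names here are made up, not the actual WaveformRow code from this PR):

import QtQuick 2.15

Item {
    // Declare the shared template once...
    component BandRow: Rectangle {
        property real allGain: 1.0
        width: parent ? parent.width : 0
        height: 20
    }

    Column {
        anchors.fill: parent
        // ...and instantiate it per band instead of repeating the full definition.
        BandRow { color: "red" }
        BandRow { color: "green" }
        BandRow { color: "blue" }
    }
}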

property real allGain: pregainControl.value
property variant waveformTexture

fragmentShader: "
Member:

Not sure if you changed this much from what's already in res/shaders, but if you didn't, you can use Qt's file path schemes, like qrc:// or just plain file: (for referring to a relative path).
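If I'm reading the ShaderEffect docs right (newer Qt 5 releases accept a file reference for the shader source instead of an inline string), the suggestion would amount to something like this; the paths are made up:

ShaderEffect {
    // Reference a shader file from the resource system...
    fragmentShader: "qrc:/shaders/waveformsignal.frag"
    // ...or resolve a path relative to this QML file:
    // fragmentShader: Qt.resolvedUrl("../../shaders/waveformsignal.frag")
}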

@Holzhaus (PR author):

The only change is s/gl_TexCoord[0]/qt_TexCoord0/.

Member:

Ok, I just mean that I dislike the inline GLSL string and would rather have it in a separate file. You could've just linked to the one in res/shaders if there weren't any changes.

@Holzhaus (PR author):

I know, but this doesn't work unfortunately. This PR is basically unmergeable anyway because the C++ part is so hacky. I just opened it so that someone has a starting point in case they want to work on scrolling QML waveforms.

Two resolved (outdated) review threads on res/skins/QMLDemo/WaveformShader.qml
Comment on lines 16 to 18
property color highColor: "#FF0000"
property color midColor: "#00FF00"
property color lowColor: "#0000FF"
Member:

These are the wrong way around. lowColor should be red.
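Presumably (a guess based on the comment, since only the low band is spelled out) the intended ordering is:

property color lowColor: "#FF0000"
property color midColor: "#00FF00"
property color highColor: "#0000FF"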

@Be-ing (Contributor) commented Sep 30, 2021

Could you rebase this on main?

@@ -297,6 +297,7 @@ void GLSLWaveformRendererSignal::draw(QPainter* painter, QPaintEvent* /*event*/)
m_waveformRenderer->getFirstDisplayedPosition() * dataSize / 2.0);
const auto lastVisualIndex = static_cast<GLfloat>(
m_waveformRenderer->getLastDisplayedPosition() * dataSize / 2.0);
qWarning() << firstVisualIndex << lastVisualIndex;
Contributor:

accidentally committed debugging info?

@Holzhaus (PR author) commented Oct 1, 2021

Yes. But it doesn't make sense to review this PR on such a level right now. The whole C++ part to get the waveform texture into QML is a giant hack and basically unmergeable anyway. This is just a POC...

Contributor:

Yeah, it was just something that stood out from a quick glance at the code

@Be-ing (Contributor) commented Oct 1, 2021

CPU usage with one deck playing has one thread around 45-50% and another at around 25%.

@Holzhaus (PR author) commented Oct 1, 2021

> CPU usage with one deck playing has one thread around 45-50% and another at around 25%.

Use git checkout HEAD~1 and try again. Maybe using GL for drawing isn't actually faster.

@Be-ing (Contributor) commented Oct 1, 2021

49138f9 does use a little less CPU, around 40-45% instead of 45-50%. It also does not jerk at all whereas b3fe092 has some jerking, about the same or maybe a bit more jerking than the QWidgets waveforms. In contrast to the QWidgets waveforms, the waveforms in b3fe092 always move perfectly together in the same direction; I do not see tiny shifts relative to other waveforms as I do with the legacy waveforms.

@Be-ing (Contributor) commented Oct 1, 2021

I suspect it would be better to rewrite the whole waveform code from scratch, including the analysis, than to recycle legacy code.

@Holzhaus (PR author) commented Oct 1, 2021

> I suspect it would be better to rewrite the whole waveform code from scratch, including the analysis, than to recycle legacy code.

I don't see what that would change tbh. The GLSL fragment shader code is pretty efficient IMHO. IIRC the jerking comes from moving the GL canvas. I suppose you could implement it using a vertex shader instead, but that is unrelated to the waveform analysis or the fragment shader.

@Holzhaus (PR author) commented Oct 1, 2021

> CPU usage with one deck playing has one thread around 45-50% and another at around 25%.
>
> Use git checkout HEAD~1 and try again. Maybe using GL for drawing isn't actually faster.

Hah, I forgot that the previous commit only draws the waveforms once instead of redrawing them all the time. So it's obviously faster, but it doesn't react to EQ changes.

@Be-ing (Contributor) commented Oct 1, 2021

An approach that might be worth experimenting with could be using Qt Quick's drawing APIs instead of a shader.

@Be-ing (Contributor) commented Oct 1, 2021

Considering Qt Quick can do this smoothly, I think we're doing something really wrong with the shader...

@Swiftb0y (Member) commented Oct 1, 2021

> I suspect it would be better to rewrite the whole waveform code from scratch, including the analysis, than to recycle legacy code.

I disagree about the analysis. IMO the biggest issue we have with the QML waveform is getting the waveform data into a shader. The GLSL code does this by putting the entire thing into a buffer, which is then serialized as a bitmap that is sampled in the fragment shader again. I have not experienced any "temporal" jerking with this PR (waveform getting drawn too early/late), though there have been some issues where it sometimes flickers between different versions of the waveform. I assume this is because, depending on the subpixel position, the shader sometimes samples from different parts of the waveform buffer, or because GLSL does some linear interpolation between pixels that it shouldn't do. That could be fixed in the fragment shader, but it doesn't change the fact that getting the waveform data into the shader is a super dirty hack.
The alternative approach would be more like the current GL waveforms, where we render the waveform data into a colored mesh. That has the advantage of being much less hacky in theory (since we get the data into the shader via a proper mesh with the intended APIs), but it requires much more code, which is forced to interact with very badly designed Qt APIs. The current approach also comes with the downside that it re-generates and uploads the mesh on every frame and depends on GL_LINES and on knowing how wide those should be drawn. Ideally, we'd find another approach that uploads the vertex data once and attaches the waveform metadata to the mesh; the vertex shader would then make sure the mesh is correctly oriented in space, and the fragment shader would take care of coloring the actual waveform. At least that is my ideal approach, though there have been pitfalls that I've not been able to work out yet.
For example, attributes on vertices are interpolated for the fragment shader, which makes sense for color and texture coordinates, but not if we are only interested in raw data. Also, if we rely on drawing with GL_LINES (or whatever the QRhi abstraction is), we would need to change the line width each time the waveform is scaled and then reupload the mesh. So while we could technically port the existing GL shader to QML with relative ease, the result would be just as inefficient as the current solution.

@Swiftb0y (Member) commented Oct 1, 2021

> Considering Qt Quick can do this smoothly, I think we're doing something really wrong with the shader...

That's Qt Quick 3D, which has a different API and a vastly different feature set; I don't think we can compare that with the QSGMaterial-based APIs in stock Qt Quick.

@Be-ing (Contributor) commented Oct 1, 2021

> That's Qt Quick 3D, which has a different API and a vastly different feature set; I don't think we can compare that with the QSGMaterial-based APIs in stock Qt Quick.

Maybe not in Qt5, but in Qt6 they are unified. https://www.qt.io/blog/qt-6.2-lts-released

@Swiftb0y (Member) commented Oct 1, 2021

I may have phrased that wrong; I meant that the APIs used for drawing custom geometry and materials are still completely separate from Qt Quick 3D:
https://doc.qt.io/qt-6/qtquick-scenegraph-custommaterial-example.html
https://doc.qt.io/qt-6/qtquick-scenegraph-customgeometry-example.html
https://doc.qt.io/qt-6/qsggeometry.html

I don't see much reason to use Qt Quick 3D since we're not actually integrating 3D content.

@daschuer (Member) commented Oct 4, 2021

> Considering Qt Quick can do this smoothly, I think we're doing something really wrong with the shader...

It is relatively easy to render a scene if everything is done on the GPU with only occasional control events. Currently we have such a control event every single frame, because the transport speed can change at every frame but is also expected to be smooth when nothing changes. I think the waveform is more comparable to a game than to such an animation demo, so we should take a close look at how games do it.

These days we have gigabytes of memory on the graphics card, so I can imagine that we could load the whole waveform and move it through the viewport at a constant tempo. We would only need to adjust the tempo if the position delta between the waveform and the sound card DAC becomes too big. I think this would get rid of the jerking issue. The other layers could also become a pre-rendered texture or be drawn on the fly. I think this way we can get rid of most of our issues.

Do you think such an approach is feasible?
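A very rough QML sketch of that idea, just to make it concrete (nothing here is an actual Mixxx API; the texture source and property names are hypothetical, and the correction below simply snaps the position rather than adjusting the scroll tempo as proposed):

import QtQuick 2.15

Item {
    id: viewport
    clip: true

    property real pixelsPerSecond: 200   // assumed product of zoom and playback rate
    property real enginePositionPx: 0    // would be fed from the engine's play position
    property real maxDriftPx: 2          // tolerated visual drift before resyncing

    Image {
        id: renderedWaveform
        x: 0
        source: "qrc:/waveform-texture.png"  // hypothetical pre-rendered texture of the whole track
    }

    Timer {
        interval: 16; running: true; repeat: true
        onTriggered: {
            // Advance at a constant speed between corrections...
            renderedWaveform.x -= viewport.pixelsPerSecond * (interval / 1000)
            // ...and resync only when the visual position has drifted too far.
            var drift = -renderedWaveform.x - viewport.enginePositionPx
            if (Math.abs(drift) > viewport.maxDriftPx)
                renderedWaveform.x = -viewport.enginePositionPx
        }
    }
}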

@Holzhaus (PR author):

I pushed some more changes; the waveforms now have beats and preroll, and can be scratched with the mouse.

@Swiftb0y: IMO the biggest issue we have with the QML waveform is getting the waveform data into a shader.

Yes, and in Qt6 we could use something like this: https://doc.qt.io/qt-6/qtquick3d-proceduraltexture-example.html

Unfortunately, this API doesn't exist in Qt5, and mixxx doesn't build with Qt6 yet.

@Swiftb0y (Member):

That's still Qt Quick 3D, which is overkill IMO. We might instead want to consider just creating the mesh of the waveform and pushing that to the GPU. That is already possible with Qt 5.14 (though the API is quite bad): https://doc.qt.io/qt-5/qtquick-scenegraph-customgeometry-example.html

@Holzhaus (PR author):

> That's still Qt Quick 3D, which is overkill IMO. We might instead want to consider just creating the mesh of the waveform and pushing that to the GPU. That is already possible with Qt 5.14 (though the API is quite bad): https://doc.qt.io/qt-5/qtquick-scenegraph-customgeometry-example.html

I don't see how this can be done in a performant way. I think it would basically nullify all performance benefits from GPU rendering, unless I'm overlooking how that API is supposed to be used.

Right now, we have a texture where a fragment shader takes care of adjusting the amplitudes of the different EQ bands for each sample in parallel. With the API you suggested, I think we'd have to modify each vertex belonging to an EQ band iteratively whenever an EQ knob is turned. At least I don't see how it could be used in another way.
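For context, the texture-based setup described above boils down to roughly the following (simplified; allGain and waveformTexture appear in the PR's WaveformShader.qml, while the per-band gain names are illustrative). Turning an EQ knob only updates a uniform, and the fragment shader rescales every sample in parallel without regenerating any geometry:

ShaderEffect {
    // Waveform data packed into a texture, uploaded once per track.
    property variant waveformTexture
    // Gains exposed as uniforms; cheap to update every frame.
    property real allGain: 1.0
    property real lowGain: 1.0
    property real midGain: 1.0
    property real highGain: 1.0
    // The fragment shader samples waveformTexture and scales the band
    // amplitudes by the gains for every fragment in parallel.
    fragmentShader: "..."
}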

@Swiftb0y (Member):

Well, the point of GPU rendering is that the actual rendering is done on the GPU. The vertex shader can still modify the vertex positions in parallel (that is pretty much the only point of the vertex shader afaik). The CPU work kinda stays the same afaik, no matter whether we serialize the waveform into a texture or into a bunch of vertices. Not sure if the mesh-based approach results in higher visual fidelity. IMO it currently does because it's more temporally stable compared to the texture-based shader (GLSL), but that is because of rounding and sampling issues in the shader IMO (a guess, I have not yet managed to improve the shader).
I have a local PoC that I haven't managed to get fully working yet (currently it just takes care of generating the mesh and pushing it to the GPU), though it already allowed me to experience the bad API (an off-by-one error resulted in a crash in some unrelated function because the heap got corrupted).

@Holzhaus (PR author):

I don't get how this is supposed to work with the API example you posted. If you already have a PoC, could you open a draft PR? It would be interesting to see what you have in mind.

@Swiftb0y (Member):

It'll take some work, but I'll try to bring it into PR form (currently it's just a bunch of files and git stashes I did not check in; my git hygiene is bad). The primary challenge I got stuck on is that the mesh-based approach struggles to separate concerns properly: the code generating the mesh needs to know how it's being displayed in order to adjust the line thickness, which is also the case in the current code. I really want to eliminate this dependency but I have not found a good solution yet. Maybe we should generate proper faces for each "sample" (instead of two vertices spanning a line, create four vertices to span a rectangular face), but that has its own set of issues, since parts of the mesh would need to be disjoint or Z-fighting could occur, possibly from rounding errors. One advantage of the mesh solution would be that the GPU's rasterizer could do AA and depth testing to avoid having to run the fragment shader on pixels where there is no mesh (in principle at least; not sure if Qt allows this optimization).

@Swiftb0y (Member):

The entire thing is too much of a mess to salvage. I think I'll just redo them when I have the time. Just throwing whatever broken (not building) mess I have into a PR wouldn't help anyone.

@acolombier (Contributor):

Yeah, I got too carried away with getting something perfectly usable, and it grew wild. I'm going to put all that in a new PR. Do you think we could focus on getting this merged ASAP, even if it isn't 100% perfect? IMHO, everyone is aware that the QML interface is WIP and that there may be rough edges, so it would be great to get bits out, even if some are only temporary and get largely refactored later on. What do you think?

model: root.deckPlayer.beatsModel

Rectangle {
property real alpha: 0.9 // TODO: Make this configurable (i.e., "[Waveform],beatGridAlpha" config option)
Contributor:

AFAIK, [Waveform],beatGridAlpha already exists and is configurable in preferences, any reason for not using it?

@Holzhaus (PR author):

There is no interface implemented to query these values in QML.

@ywwg (Member) commented Feb 23, 2024

I am also ok with merging broken code as long as it doesn't render the QML version completely unusable (crashes, etc).

@Holzhaus (PR author) commented Feb 23, 2024

I tend to agree that we can already merge it despite the issues:

  1. The way we get the waveform data from the engine to QML is terrible (and possibly buggy if the track was not yet analyzed, leading to incompletely rendered waveforms).
  2. The aforementioned issue with incorrect scrolling speed and/or waveform width, so that the beats of two synced tracks align at the play marker, but not over the whole visible part of the track.

If someone volunteers now to take a stab at them, we can postpone the merge. Otherwise both can be solved in a follow-up PR. It's definitely better than no waveforms at all.

@ywwg (Member) commented Feb 26, 2024

@Holzhaus if you think this PR is acceptable as-is, go ahead and merge :)

@m0dB (Contributor) commented Feb 26, 2024

Hm, I am very disconnected from the QML development, but I am a bit surprised that this approach was taken, with the two major issues that @Holzhaus points out above, when I implemented an approach to access the waveform data that doesn't have these issues.

@m0dB (Contributor) commented Feb 26, 2024

I really should review this code first before giving my opinion, but this approach here:
m0dB@76bc83c
accesses the waveform data just like the QWidget based UI.

@m0dB (Contributor) commented Feb 26, 2024

From doing a quick review though, I am very sorry to say that I don't like this approach. Apart from the issues that @Holzhaus points out, I don't think that reimplementing the waveform rendering layers in QML / javascript is a good idea because:

  1. It will never be as fast as the waveform/renderers/allshaders code, and I have strong doubts this will work sufficiently fast on older hardware
  2. It should be possible to use qsb with the waveform/renderers/allshaders GLSL code and use that code to render QML items. When I did all the work on the allshaders renderers, one of the things on my mind was that it could all be reused in the QML UI and not be in vain in the long run.
  3. It is reinventing the wheel, with possible bugs and differences from the shaders implementation, which has been well tested

IMO it would have been much better to adapt and re-use the waveform/renderers/allshaders code. I realise I am late to the party but I have pointed to my implementation several times. So I am opposed to merging this and it is very unfortunate @acolombier did all this work without prior discussion / agreement on the approach to take.

@m0dB (Contributor) commented Feb 26, 2024

But in an attempt to be more constructive, I will try to rebase my branch, so @acolombier can at least have another look at the approach there, which draws the waveforms with a method closer to the QWidget UI and without the issues mentioned by @Holzhaus (I just had a look at my DMs with @acolombier and he was aware of my branch, so I would like to understand why he didn't use that approach). From there on, I think it would be good to have a discussion so we all agree on how to move forward. I see the risk that I will otherwise just continue with my approach (which, to be clear, I started because this thread seemed like a dead end given the issues mentioned) and we end up with two implementations.

@Holzhaus (PR author) commented Feb 26, 2024

> Hm, I am very disconnected from the QML development, but I am a bit surprised that this approach was taken, with the two major issues that @Holzhaus points out above, when I implemented an approach to access the waveform data that doesn't have these issues.

I was not aware of your approach. Also, I can't seem to find the corresponding PR.

Considering that there was not much progress on QML waveforms in the last 3 years, I was just trying to tie up the loose ends and make the QML UI a bit more usable with working waveforms.

I didn't look into your implementation yet. If you say the waveform elements are rendered as QML items with your approach, too, would it hurt to merge this as-is and replace the waveform drawing under the hood later, when you have the time to continue your work? I guess that most of the stuff on the QML side that @acolombier and I implemented should be reusable, right?

@JoergAtGithub (Member):

@acolombier I thought you used Marten's QML waveforms in your S4 controller display work? Why not here?

@m0dB (Contributor) commented Feb 26, 2024

> > Hm, I am very disconnected from the QML development, but I am a bit surprised that this approach was taken, with the two major issues that @Holzhaus points out above, when I implemented an approach to access the waveform data that doesn't have these issues.
>
> I was not aware of your approach. Also, I can't seem to find the corresponding PR.

I am sure you were, but you must have forgotten. We discussed it here:
https://mixxx.zulipchat.com/#narrow/stream/109171-development/topic/scrolling.20waveforms.20in.20QML.20working

The PR is here:
https://github.com/m0dB/mixxx/tree/qml_scrolling_waveform

Last time I rebased it on main was in October 2023, when @acolombier asked about it.

> Considering that there was not much progress on QML waveforms in the last 3 years, I was just trying to tie up the loose ends and make the QML UI a bit more usable with working waveforms.
>
> I didn't look into your implementation yet. If you say the waveform elements are rendered as QML items with your approach, too, would it hurt to merge this as-is and replace the waveform drawing under the hood later, when you have the time to continue your work? I guess that most of the stuff on the QML side that @acolombier and I implemented should be reusable, right?

Of course it would be ideal and most productive if we can combine the work from the two branches, but I don't see myself dedicating much time to the QML code, at least for now.

@m0dB (Contributor) commented Feb 26, 2024

I do realise it is a bit weird to say: "I disapprove of this approach, you should use my approach, but I am not going to work on it myself (yet)". Nevertheless, I do think that my objections are quite valid.

@acolombier (Contributor) commented Feb 27, 2024

> I would like to understand why he didn't use that approach

I first started to work off @m0dB's branch to try and get waveforms working on my S4 screen implementation. I was aware of both approaches and went straight to that branch for the performance reasons you mentioned above. However, after many hours working off that branch, I was encountering a few drawing bugs and didn't know where to start with implementing the markers as well as the missing COs. Due to my inexperience with GLSL, I was unable to make much progress after countless hours and thought I would have a peek at @Holzhaus's implementation.
For some reason, I was getting better performance with his approach (I'm not experiencing the bug you've mentioned above, btw). In the meantime, the performance with waveforms on the legacy UI has always been suboptimal for me (even on an i9 + RTX 3070, but I've made my peace with it and worked around it 😁), and the devx was far better with native QML elements and hot reload.

The feeling I had was that @Holzhaus's branch was closer to completion than yours, so I thought I would help get this over the line, especially as the QML effort has gone idle for the last 2 years if not more.

I thought it would be best to get that one merged as a stopgap and make the QML UI usable (right now it is definitely not, due to the missing waveforms and library), then hopefully get your approach in when you have time to finish it, potentially reusing some of the QML-native marker delegation done as part of this work.

To summarise: I agree that @m0dB's implementation seems better, but until it is ready, perhaps we could get this in. Also, the devx of QML elements is better than that of compiled C++ elements, so using QML for loop/grid/cue/hotcue would be nice.

Item {
id: waveform

property real effectiveZoomFactor: rateRatioControl.value * zoomControl.value * 100
Contributor:

This value is now the inverse of what it should be; any reason for reverting what I had put here?

This was the value I had, which ensured the same behavior as currently:

Suggested change:
- property real effectiveZoomFactor: rateRatioControl.value * zoomControl.value * 100
+ property real effectiveZoomFactor: (1 / rateRatioControl.value) * (200 / zoomControl.value)

@Holzhaus (PR author):

Ah thanks. As I said I had to remove the last commits as the PR grew too big and rebasing was no longer possible due to too many conflicts. Fixed in the latest commit.

@Swiftb0y (Member):

> To summarise: I agree that @m0dB's implementation seems better, but until it is ready, perhaps we could get this in. Also, the devx of QML elements is better than that of compiled C++ elements, so using QML for loop/grid/cue/hotcue would be nice.

I agree with this. I also prefer m0dB's approach over this one, since it works with fewer hacks, but I'm fine with this one as a stop-gap solution as well. I also think that building up the waveform layers in QML is better than doing it in C++, but there is a limit, and implementing the waveform renderer itself (without the grid, cues, etc. on top) as a custom Qt Quick component would make sense (due to the more powerful API we get compared to ShaderEffect).

@Holzhaus (PR author):

OK, so it seems like there are some people in favor of merging this now and then replacing the rendering part later on. @m0dB what do you think?

@m0dB (Contributor) commented Feb 29, 2024

My main concern is that (more) time will be invested in an implementation that will eventually (have to) be replaced. But as I currently don't see myself working on this much, it would be unreasonable for me to ask to put this on hold until I find the time to do so. But let's all keep in mind that this is most likely not the permanent solution.

@Holzhaus (PR author):

I think we all agree this is not a permanent solution, as the way we get the textures into the ShaderEffect is a terrible hack. There is some work needed anyway, so we might as well replace it with the cleaner solution later. But I hope that some progress on the QML UI will at least get the interest up to attack people who would like to work on it.

@ywwg (Member) commented Mar 1, 2024

I think we have consensus to merge, I'll let someone else do the honors :)

@Holzhaus (PR author) commented Mar 1, 2024

> But I hope that some progress on the QML UI will at least get the interest up to attack people who would like to work on it.

Omg my phone's autocorrect is really bad. I obviously meant "attract" not "attack" 😅

@ywwg (Member) commented Mar 4, 2024

ok fine I'll do it :)

@ywwg merged commit bdb6630 into mixxxdj:main on Mar 4, 2024
13 checks passed
QML GUI automation moved this from In progress to Done Mar 4, 2024

9 participants