Merge UxPlay gstreamer renderer into RPiPlay, allowing generic linux builds #146
Conversation
Thanks for all this effort! I wasn't sure what @antimof's plans for UxPlay were, since he originally promised to upstream his work once it reached a certain point of maturity, yet development seems to have stalled. The build flag seems like a great choice for adding this in an unobtrusive way! I'll have a closer look in the following days! |
I'm afraid I can't get this to work on my Pi 3B+ with the last Raspbian Lite build before it was renamed Raspberry Pi OS (from March 2020; the OS I already had on an SD card). Initially, RPiPlay reported the lack of the autodetect plugin:
I had only followed the steps you included for Ubuntu, so I figured I just had to install some more gstreamer plugins. I installed a few that seemed like they could be required:
At this point, the program was starting successfully; I just couldn't get it to render any audio or video. The screen just stayed on the console display. I also installed the |
Hmmm...I agree with everything you did at first to get past the missing plugins. The next step in my mind would be to run rpiplay with the GST_DEBUG environment variable set to something:
As soon as you connect, it should print some info about what is failing inside of GStreamer. |
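For reference, a typical invocation along those lines might look like this (the debug level and binary path are placeholders, not the exact command from the original comment; `GST_DEBUG=3` prints warnings and FIXMEs, higher values get noisier):

```shell
# Hypothetical invocation; adjust the path to your rpiplay binary.
# GST_DEBUG=3 shows GStreamer warnings; try GST_DEBUG=4 for more detail.
GST_DEBUG=3 ./rpiplay
```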
We're getting somewhere! This is the output I saw initially:
So, it seems like gstreamer is trying to use an OpenGL video renderer and the Pulse audio renderer. The latter is sure to fail, because Raspbian doesn't include a Pulse server. I uninstalled the gstreamer1.0-pulseaudio package, and now it seems audio on the HDMI is working via omx:
Even better, I removed the gstreamer1.0-omx-rpi package, and could get ALSA audio to work:
Still, I don't know what it takes to get some video renderer to work. Preferably, I'd like to see the omx-rpi video renderer working. (I also initially saw some traces of a failed attempt to use the X renderer, but I don't have a copy of the output I saw and have since uninstalled the gstreamer X module, since I'm on a Raspbian Lite system without an X server) |
Well, seems like gstreamer-omx has been broken on the Pi for years: https://www.raspberrypi.org/forums/viewtopic.php?t=193152 |
Interesting...it looks like in your latest traces, the problem is it can't find a plugin for H.264 decoding:
I haven't tried testing this without an actual X server running, so there might be more work to get it to actually run on the Raspberry Pi itself, especially without a window server. The link you found about gstreamer-omx is interesting too. I noticed that in some of the sample GStreamer pipelines, they are manually specifying omxh264dec. I wonder if we have to do something similar on this. I might be able to play a little bit with a super old Raspberry Pi Model B, not sure if that's even going to be compatible...but I can try at least! |
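As a sketch of what "manually specifying omxh264dec" looks like in a stand-alone pipeline (the file name is a placeholder, and this only works where the OMX plugin actually provides that element):

```shell
# Hypothetical test pipeline; sample.h264 is a placeholder raw H.264 file.
# h264parse comes from gstreamer1.0-plugins-bad, omxh264dec from gstreamer1.0-omx.
gst-launch-1.0 filesrc location=sample.h264 ! h264parse ! omxh264dec ! autovideosink
```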
I played around on my Pi Model B (Raspbian 10). Stock RPiPlay works pretty well. If I build my branch, it actually does work if you run it within X11, but it's super slow, almost like it's not using hardware decoding. It can't keep up with all of the frames, so it ends up with a huge backlog of frames. Here is what I have installed relating to gstreamer:
Here is a link to a screenshot of it in action. Obviously not usable on the Pi in its current state unless we can figure out how to accelerate it...but it does run great on desktop Linux. |
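For anyone comparing setups, a quick way to list the installed GStreamer packages on a Debian-based system like Raspbian is:

```shell
# Lists installed GStreamer 1.0 packages (Debian/Raspbian).
dpkg -l | grep gstreamer1.0
```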
Ok, even if the gstreamer renderers don't provide any advantage on the Pi, I'm still willing to merge them in. I'm just not sure whether I should do that on the master branch or on an extra branch. What are your thoughts? |
I've looked at both gstreamer and ffmpeg, and it seems like omx_mmal is supported by the latter but not the former. I was trying to figure out how to do ffmpeg integration, but an alternative path is to add a switch that either chooses the existing code or the gstreamer code. I'd rather not have to install gstreamer if I'm using the OMX code, though. TL;DR: a CMake flag is probably good, but better would be to use ffmpeg or to integrate OMX MMAL into gstreamer. |
Also: this is a great step forward! |
I played around with GStreamer on my Pi some more over the weekend and came to the conclusion that GStreamer currently doesn't support displaying to the screen on the Raspberry Pi the way that OMXPlayer does. It does seem to support hardware H.264 decoding/encoding, but the accelerated playback seems to depend on OpenGL stuff which I couldn't figure out how to get working properly.

@pallas I think this might be what you are already saying, but it would be cool to make a GStreamer sink that displays video the same way that OMXPlayer (and RPiPlay) does it. I think that would be the ultimate solution to actually enable using gstreamer on this project on the Pi. Does FFmpeg by itself actually support displaying anything? I was under the impression it was more of a decoding library; in fact, I believe gstreamer can use it.

I thought more about the renderer selection and CMake vs. runtime. One idea is we could refactor things a bit to make the renderer a runtime selection, and then compile renderers into RPiPlay based on their availability. So if the OMX libraries are available, it would compile the OMX renderer. If gstreamer is available, it would compile the gstreamer renderer. If they're both available, it would compile both, with an optional runtime switch to choose which one (defaulting to the OMX renderer if available). This would make it easy because no matter what platform you're on, you just run cmake and make.

My personal reason for trying to get this merged back into the main project was simply to allow RPiPlay to operate on platforms other than Raspberry Pi (despite the name, haha). From that perspective I think it would be cool if it was in the master branch with special build instructions for non-Raspberry-Pi systems. (Although if we implement the auto-detection described above, special build instructions might not even be necessary...)
I would be willing to spend more time cleaning things up if there are any style/usability/documentation/etc. hesitations about the current state of my branch being merged into the master...or if we need me to do something like the refactor described above before it gets merged. What do you guys think? Or am I barking up the wrong tree about wanting to make it so RPiPlay is by default compatible with other types of systems? Whew, sorry for the long-winded comment! Thanks! |
Thanks for your comments! I agree the compilation system you describe would be convenient, but as long as the gstreamer renderer is lacking in functionality (most of the additional display flags are unsupported), having to explicitly enable the gstreamer build is perfectly adequate IMHO. I'll have a final closer look with regards to coding style etc and will merge the PR in the following days. |
Thanks, I look forward to finding a way to merge this back in! I don't know if there's any interest in this, but I thought I would mention that I just finished an experimental refactor in another branch that allows runtime selection of different renderers. It compiles every available renderer based on what CMake can find at compile time. You can even do OpenMAX video and gstreamer audio, for example, although they get out of sync. For example:
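A sketch of what mixing renderers at runtime might look like (the renderer names below are placeholders; the actual names accepted by the `-vr` and `-ar` switches are whatever the branch registers):

```shell
# Hypothetical: OpenMAX video with the gstreamer audio renderer.
# "rpi" and "gstreamer" are assumed renderer names, not confirmed by the PR.
./rpiplay -vr rpi -ar gstreamer
```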
|
I managed to build UxPlay on macOS by fixing several CMakeLists files; however, the OpenGL render window would not open unless I added a GLib event loop in uxplay.cpp, according to this. I wonder if it's possible to add a patch somewhere else to avoid conflicts with this project. Here's my repo. Hope that helps. |
Nice work @wegank! Seems like it would be a good idea to merge your changes into this too, so it would build properly for macOS. It would be cool to get it working on Windows too. |
@dougg3 It actually does work on Windows, but one has to install the GStreamer runtime (custom installation in order to select gst-libav) and Bonjour services, then add GStreamer/bin folder to PATH. Painful though. |
raop_ntp receive timeout — @wegank, the audio works but there isn't any video on Ubuntu 20.04. ➜ build git:(master) ./ludimus |
I think this may help. |
@ragazzonoioso The ntp receive timeout is something I noticed as well and have an experimental fix in another branch. Doesn't seem to really hurt the mirroring functionality though. Have you installed gstreamer1.0-plugins-bad? I noticed it wouldn't work without it. |
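One way to check from the command line whether the required H.264 parser element is present (`gst-inspect-1.0` ships with gstreamer1.0-tools):

```shell
# Prints element details and exits 0 if h264parse is available;
# a failure here suggests gstreamer1.0-plugins-bad is not installed.
gst-inspect-1.0 h264parse >/dev/null 2>&1 \
  && echo "h264parse found" \
  || echo "h264parse missing - install gstreamer1.0-plugins-bad"
```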
@dougg3, @wegank, thank you so much! Now it works! P.S. I installed: apt-get install libgstreamer1.0-0 gstreamer1.0-plugins-base gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly gstreamer1.0-libav gstreamer1.0-doc gstreamer1.0-tools gstreamer1.0-x gstreamer1.0-alsa gstreamer1.0-gl gstreamer1.0-gtk3 gstreamer1.0-qt5 gstreamer1.0-pulseaudio |
@FD- I couldn't find a way to avoid calling GLib / GStreamer in rpiplay.cpp, so it seems inappropriate to upstream certain changes that can mess up files other than renderers... |
If everyone's in agreement, I personally think merging the dynamic renderer selection capability would be better than merging this pull request as-is. (It's just another commit on top of this one, though.) It's basically there; I just need to give it another pass to make sure I got everything and bring the documentation up to date.

It might be nice to figure out a way to detect other missing GStreamer plugins too -- it seems common for people not to have the correct GStreamer packages installed for H.264 decoding support.

On the macOS front -- could we figure out a way to use the GLib main loop instead of the "while (running)" loop when a gstreamer renderer is being used? I imagine it wouldn't be too hard to figure out, and I doubt it would adversely affect the Linux build, especially if it was only used when a gstreamer renderer is selected. |
Ah, that's cool @wegank! I think I saw an earlier version before you got it fully replaced. I personally think we could figure out a way to incorporate that code going forward whenever a gstreamer renderer is in use. @FD- I have merged the dynamic renderer selection into this same pull request. I also found some style issues and fixed them. Let me know if you would rather do this a different way. |
Sorry for the delay! I've tested that it still works on the Pi, so I'm about to push the button. |
Well, wait, there seem to be conflicts that prevent merging? |
I wonder if it has something to do with the way that I tried to preserve @antimof's commit history when I created my branch. I had to do some trickery allowing UxPlay to merge into my branch, even though they don't have a common history. It's weird because GitHub says it has no conflicts with the base branch... |
Yeah, I bet it’s related to that. GitHub tells me:
|
Drats, thanks for the heads up. I will play with things when I have some free time and try to figure it out. |
Can you rebase your branch in your local repo against FD-'s master branch? |
Also, I'm pretty excited for this! |
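For reference, a conventional way to do that rebase might look like this (the remote and branch names are assumptions, not taken from the PR):

```shell
# Hypothetical remote/branch names; adjust to your setup.
git remote add upstream https://github.com/FD-/RPiPlay.git
git fetch upstream
git rebase upstream/master
git push -f origin gstreamer-merge   # branch name is a placeholder
```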
Enhance audio quality
After merging the gstreamer renderers from UxPlay, this project needed a method for choosing which renderer to build into the program. I opted for supplying a CMake variable describing which renderer you wish to use. For backwards-compatibility, it defaults to selecting the openmax renderer that RPiPlay has historically used. Because the gstreamer renderer doesn't support several of the options that are supported by the openmax renderer, those options are removed from the help output in a gstreamer build.
This change makes it possible to select video and audio renderers at runtime. It's also possible to mix and match different renderers for audio versus video. All available renderers are automatically compiled into the renderers library. It should always succeed with at least the dummy renderer.
This adds an explanation for how to choose different renderers.
This plugin contains the H.264 parser required for video to work properly. This is the easiest way to ensure that gstreamer1.0-plugins-bad is installed.
Tried to match the rest of the project's code style.
Also documents the -vr and -ar options.
%llu isn't the correct format specifier for a uint64_t on 64-bit Linux, where it should actually be %lu; on 32-bit systems, %llu is correct. A simple way to work around this is to use the PRIu64 format macro introduced in C99.
Hey, thanks for the tip @pallas! I think I got it figured out and updated the pull request. Doing a "git push -f" scared the heck out of me, but it looks like the end result is the same as it used to be. From what I can tell, the list of commits is way cleaner now too, so yay. Also, I figured out how to fix the first commit from UxPlay that was previously mistakenly attributed to "No Name" and now it's attributed to @antimof. @FD- I think this is ready for reals now. |
Fantastic! Thanks a lot for your contribution! |
I stumbled upon issue #98 where @antimof announced the creation of UxPlay, which is a port of RPiPlay that uses gstreamer for rendering rather than Pi-specific libraries. It works really well on my generic Linux machines -- thanks @antimof! I'm also aware of issue #24 where generic Linux support has been discussed. I wanted to figure out a way to merge UxPlay's gstreamer renderer back into the original project.
This branch is my first attempt at this. I believe I've preserved the commit history and original author info for UxPlay in the process. Is this the most appropriate way to do this? UxPlay isn't considered a fork on GitHub, so merging is a little tricky.
The strategy I went with is to supply the renderer as a compile-time option to CMake when you build it. (Anybody have any better suggestions?) Here is my build command:
I made it default to use the existing OpenMAX renderer if you don't supply the -DRENDERER option, so nothing changes automatically for people who build it for Pi.
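A build along those lines might look like this (the `gstreamer` value for -DRENDERER is an assumption based on the description above, not the verbatim command from the PR):

```shell
# Hypothetical out-of-tree build selecting the gstreamer renderer.
mkdir build && cd build
cmake -DRENDERER=gstreamer ..
make -j"$(nproc)"
```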
Is this an acceptable strategy for choosing the renderer? I didn't want to break anybody's existing workflow. I considered figuring out how to autodetect the Pi and automatically choose which renderer to use based on that, but I've also seen some interest in trying out the gstreamer renderer on the Pi itself, so I thought the solution I ended up with might work best.
I've updated README.md with basic compilation instructions for the generic Linux version. @antimof I hope you don't mind me doing this merge -- I think you did a fantastic job on the gstreamer renderer! I wanted to see your work get upstreamed.
This is untested on the Pi itself. I've been running it on a generic Ubuntu 18.04 machine. Unfortunately I don't have a Pi that is new enough to test this. Here's hoping this leads to more experimentation with other renderers!