Rewrite iOS implementation based on AVAudioEngine #334
Comments
Is it an option to move to Swift? |
Yes, this probably will be written in Swift. |
this will be breaking for folks, don't do this |
@nt4f04uNd Do you mean breaking in terms of stability, or breaking in terms of codec support? |
breaking in the sense that you can have these app/library configs, except one: a library written with Swift can't be used with Obj-C projects |
That's true, although just to play devil's advocate:
This will also virtually only affect the more experienced Flutter developers (who created their project before Flutter changed the default template to Swift), so if we include instructions in the README on how to convert their project from Objective-C to Swift, I wouldn't expect Swift to be a showstopper. It would be interesting to know the actual statistics on how many people are still running Objective-C projects. Plugins are a different story, as people may have various reasons to choose Objective-C vs Swift when writing their plugin. But a project doesn't actually contain any Objective-C or Swift code, so it really doesn't make a difference whether you switch a project from Objective-C to Swift, except that it opens the door to accessing all of the Swift plugins on pub.dev. There are some instructions in the README for Android on how to convert old projects to the latest V2 plugin architecture, and in a similar vein I could add instructions for how to update an old Objective-C project to Swift. |
(Copying this comment from another issue to get broader interest)
The waveform visualizer is implemented on iOS but not pitch. You can track the pitch feature here: #329 There is a big question at this point whether to continue with the current AVQueuePlayer-based implementation or switch to an AVAudioEngine-based implementation. For pitch scaling, I really want to take advantage of AVAudioEngine's built-in features, but that requires a rewrite of the iOS side - see #334 and this is a MUCH bigger project. I would really like to see an AVAudioEngine-based solution see the light of day, but it will probably not happen if I work on it alone. If anyone would like to help, maybe we can pull it off with some solid open source teamwork. One of the attractive solutions is to use AudioKit which is a library built on top of AVAudioEngine which also provides access to pitch adjustment AND provides a ready-made API for a visualizer and equalizer. That is, it provides us with everything we need - BUT it is written in Swift and so that involves a language change and it means we may need to deal with complaints that old projects don't compile (we'd need to provide extra instructions on how to update their projects to be Swift-compatible). Would anyone like to help me with this? (Please reply on #334) |
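As a rough illustration of why AVAudioEngine is attractive for pitch (this is just a sketch; the file path is a placeholder and error handling is omitted), the engine exposes a built-in `AVAudioUnitTimePitch` node that can be wired between a player node and the output:

```swift
import AVFoundation

// Minimal sketch: play a local file through AVAudioUnitTimePitch.
// The file path is a placeholder; error handling is reduced to try!.
let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
let timePitch = AVAudioUnitTimePitch()
timePitch.pitch = 300.0  // in cents (+3 semitones)
timePitch.rate = 1.25    // time-stretch without further changing pitch

engine.attach(player)
engine.attach(timePitch)

let file = try! AVAudioFile(forReading: URL(fileURLWithPath: "/path/to/audio.m4a"))
engine.connect(player, to: timePitch, format: file.processingFormat)
engine.connect(timePitch, to: engine.mainMixerNode, format: file.processingFormat)

try! engine.start()
player.scheduleFile(file, at: nil, completionHandler: nil)
player.play()
```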
Another interesting library: https://github.com/tanhakabir/SwiftAudioPlayer
This suggests we want a combination of AVAudioEngine and AudioToolbox:
Given that I'm spread a bit thinly, I would not like to make this a solo effort, so I will wait until some more people post below who might be willing to team up and work together. One other thought I had is that I may want to move the current iOS implementation out into its own federated plugin implementation rather than bundling it with the main plugin, although it can still be endorsed so the dependency automatically gets added to your app. The advantage of this is that if we can eventually create an AVAudioEngine-based implementation of the just_audio API, we can now do this without throwing away the old implementation. In case there are some features that don't work in the new implementation, users will still be able to use the old implementation, and vice versa. |
blacklist https://github.com/tanhakabir/SwiftAudioPlayer. I tried to use it and it's horrible: the audio glitches and the author barely maintains it. IMHO you should just stick to the most popular one, i.e. AudioKit |
I like the idea of a federated implementation; in fact, this should be done for each platform |
I agree with you although what I did find interesting about SwiftAudioPlayer is not that I want to use it as a library, but rather that the author has written an informative blog post explaining the additional components that were needed on top of AVAudioEngine, which may also apply to us even if we go with AudioKit. For example, to stream audio, we will probably need to use techniques similar to those used in SwiftAudioPlayer to first "get" the audio to then feed into AudioKit. Thanks for sharing StreamingKit, the name itself implies that it will be another good reference when trying to implement the streaming part of this. |
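To make the "get the audio first, then feed it in" idea concrete, here is a minimal sketch of the scheduling half only, assuming some other component (the network download plus an AudioToolbox converter, in the style of SwiftAudioPlayer) is already producing decoded PCM buffers; the class name and the source of `format` are placeholders:

```swift
import AVFoundation

// Sketch: push decoded PCM buffers into the engine as they become available.
// The decoder that produces the buffers is assumed to exist elsewhere.
final class StreamingScheduler {
    private let engine = AVAudioEngine()
    private let player = AVAudioPlayerNode()

    init(format: AVAudioFormat) throws {
        engine.attach(player)
        engine.connect(player, to: engine.mainMixerNode, format: format)
        try engine.start()
        player.play()
    }

    // Called each time the decoder produces another chunk of PCM audio.
    func enqueue(_ buffer: AVAudioPCMBuffer) {
        player.scheduleBuffer(buffer) {
            // The node has consumed this buffer; a real implementation could
            // request the next chunk from the decoder here.
        }
    }
}
```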
Hi @ryanheise, I would be happy to help but I do have some doubts:
I do have experience with Swift, and that would be my preference over Objective-C. So I would be more comfortable with a list of clear (smaller) tasks to follow, and not just "Implement AudioKit" or similar. I do not know if this is what you are looking for. |
FYI StreamingKit is not just for streaming; to my understanding it offers pretty much the same set of features as AudioKit |
@mvolpato I'm happy to have your support! We definitely need a plan which can be broken down into tasks that can each be done by different people. What I'll do is create a Google Doc with a list of features that need to be supported, and the first thing that needs to be done is to just collect links to relevant documentation, tutorials or StackOverflow answers relevant to implementing each feature. That first research phase will give us the pieces of the puzzle we need to then prioritise the tasks and start implementing them. So the plan is:
The 3rd point is not in strict order so we can start thinking about that earlier, but I think the order in which to do things will fall out naturally. e.g. First we should implement loading a simple audio source, then playing, then pausing, then the other standard controls. State broadcasting could happen in parallel with this. I'll also need to move the current iOS implementation into a separate platform implementation package. I'll post a link to the doc once I create it. |
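On the state broadcasting point, one obvious mechanism on the iOS side would be a `FlutterEventChannel`. A minimal sketch follows; the channel name and event keys are placeholders, not the actual just_audio platform-interface protocol:

```swift
import Flutter

// Sketch: broadcasting playback state to Dart over an event channel.
class PlaybackEventStreamHandler: NSObject, FlutterStreamHandler {
    private var eventSink: FlutterEventSink?

    func onListen(withArguments arguments: Any?,
                  eventSink events: @escaping FlutterEventSink) -> FlutterError? {
        eventSink = events
        return nil
    }

    func onCancel(withArguments arguments: Any?) -> FlutterError? {
        eventSink = nil
        return nil
    }

    // Call this whenever the player's state changes.
    func broadcast(positionMs: Int, playing: Bool) {
        let event: [String: Any] = ["positionMs": positionMs, "playing": playing]
        eventSink?(event)
    }
}

// Registration (inside the plugin's register(with:) method):
// let eventChannel = FlutterEventChannel(name: "com.example/player_events",
//                                        binaryMessenger: registrar.messenger())
// eventChannel.setStreamHandler(PlaybackEventStreamHandler())
```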
Shared doc: https://docs.google.com/document/d/17EZEvmiyn94GCwddBGS5BAaYer5BTRFv-ENIAPG-WG4/edit?usp=sharing I will update the top post with details. |
Hi @mvolpato I'm ready to get ball rolling using a Swift-based implementation. Swift is nice, but there seem to be complications in how the compiler works. Would you be able to try these steps out below in your environment and see if they work for you? First, create a new plugin from the template:
Then in

```swift
import AudioKit
...
var mixer: AKMixer
```

Then in
If you try to run the example directory, you get the error mentioned here: AudioKit/AudioKit#2267 I'd be interested if you could try the above steps also and see if it works for you. Note that I also upgraded my cocoapods and Xcode to the latest versions before running this. |
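For reference, the skeleton these steps lead to would look roughly like the sketch below. The class and channel names depend on what you pass to `flutter create`, so treat them as placeholders, and the mixer line assumes AudioKit 4.x naming:

```swift
import Flutter
import AudioKit

// Hypothetical plugin class name; the real name comes from the template.
public class SwiftAudiokitTestPlugin: NSObject, FlutterPlugin {
  // AudioKit 4.x mixer node, initialised so the class compiles.
  var mixer: AKMixer = AKMixer()

  public static func register(with registrar: FlutterPluginRegistrar) {
    let channel = FlutterMethodChannel(name: "audiokit_test",
                                       binaryMessenger: registrar.messenger())
    let instance = SwiftAudiokitTestPlugin()
    registrar.addMethodCallDelegate(instance, channel: channel)
  }

  public func handle(_ call: FlutterMethodCall, result: @escaping FlutterResult) {
    // No methods implemented yet in this sketch.
    result(FlutterMethodNotImplemented)
  }
}
```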
If it also fails for you (and a solution doesn't stand out) then we may need to create either a Flutter issue or an AudioKit issue. According to reports in the above AudioKit issue, the error was resolved, so maybe it's an issue that occurs due to the Flutter setup specifically. |
It looks like cocoapods is not supported (yet) for version 5, so this will not work for sure. I also cannot get it to work. I tried different versions. |
It looks like this other plugin has it working. I did not have time to investigate their approach yet. I will check later today. |
Great find! I think what's happening on my project (and my environment) is that it is resolving AudioKit to 4.11 whereas flutter_sequencer is resolving it to 4.11.1, and according to that issue, the bug fix requires 4.11.1 or later. I was scratching my head for a while, but it was caching the first version it ever resolved to when I made my first attempt at the podspec, even after running. I think next it will probably be necessary to move the current iOS implementation into a separate package like the |
Unfortunately, or maybe that is a good thing with everything that is going on now in the world, I am very busy with my day job, and I had no time to look into this. :( |
No problem at all! Hopefully soon I'll be able to take a crack at getting something started. |
Have there been any updates on this?
Instead of using iOS's native libraries, why don't we use something like libvlc? There's already work being done with it for desktop support, and libvlc has advantages such as better codec support (OPUS and Vorbis). In theory, it could mean one common backend for just_audio with no dependency on platform-specific libraries. Of course, it would be a major update that changes a lot. |
I think the main stopper for this is its LGPL license #103 (comment) |
@ryanheise I was wondering if there is already work done for making this possible? |
@toonvanstrijp thanks for the ping. I think the main issue for now is just that I would like to have multiple iOS implementations but I don't think there is yet a way in Flutter's federated plugin model to set a default iOS implementation and then allow an app to choose an override implementation, specifically when the implementation uses method channels (although the Flutter team will eventually add this). I think it would still be possible to start development on this but just delay merging it until that Flutter issue is sorted out. However, I am continually distracted by other issues, and bug fixes are always taken as higher priority. Hopefully I (or someone) can get a foundation started so that it is easier for others to start contributing. Finally, when I first started experimenting with AudioKit, it was with version 4.x. But now that AudioKit 5.x is out and recommended, I should probably scrap what I had started. I remember at the time 5.x was actually just released but they didn't have it on cocoapods yet (because the developer wasn't a fan of cocoapods). Fortunately 5.x is now on cocoapods. |
@ryanheise I started working on rewriting this library in Swift. Let me know if you're interested in merging this, because I think if we move to Swift a lot more developers can collaborate (since Swift is more the "standard" nowadays). I'll also start looking at AudioKit 5.0, but I'm not an iOS developer, so any tips on how to structure things on the iOS side are welcome! |
@ryanheise one more question regarding AudioKit. Right now we use |
From a totally outsider perspective, wouldn't it be easier to use a package that handles downloading/buffering for us? Also, one issue with most iOS libraries is that they use Apple's decoding stuff, which doesn't support OPUS/Vorbis. For my use case, it's kind of annoying. I was vaguely looking into making a gstreamer backend for all platforms, but it could never be the main implementation because it's LGPL. VLCKit also won't work for the same reason. |
I just glanced over the listed options in the Google Doc and it seems that https://github.com/sbooth/SFBAudioEngine is the only one that supports OPUS. Vorbis is not supported by any of them though, which is probably OK, given that it's a predecessor of OPUS. |
Actually, it supports Vorbis as well |
That library looks great, although I won't really be able to contribute to this as I have no experience with native iOS :( |
I'd definitely go with Swift for the AudioKit-based implementation since AudioKit itself is written in Swift. Rewriting the current AVQueuePlayer implementation in Swift is something I'm a bit more hesitant to do right now since this is the principal iOS implementation and such a large-scale rewrite is likely to introduce stability issues. Any rewriting of it should probably be planned and discussed in order to avoid that happening. I think we can also delay it until at least a while after the AudioKit-based implementation starts becoming usable, because my hope is that that implementation could eventually replace the AVQueuePlayer implementation (meaning the effort in rewriting it would be wasted).
I've done a quick experiment with AudioKit by submitting a PR to the sound_generator so you might get some ideas by looking at it. Just a couple of notes to keep in mind:
Just a general comment here, but keeping in line with the vision of supporting multiple federated implementations of the just_audio platform interface, it is no problem if anyone wants to write an SFBAudioEngine-based implementation (which, although MIT, will still potentially involve LGPL if you use those parts that have that license) or GStreamer, etc. I think when it comes to iOS, different people may end up needing these choices. For example, those building apps where audio processing is important (pitch shifting, time stretching, etc.) may want the AudioKit implementation, while those who need certain other formats might use a GStreamer or VLC-based implementation.
I think that's an issue with a lot of these alternatives to AVQueuePlayer, yes we will have to manage a lot more ourselves. But at the same time, I'm running into limitations of AVQueuePlayer precisely because it manages buffering in a way I don't like, and so on the other hand there is a benefit to managing things ourselves. |
@ryanheise I've done a small setup as you've explained. Could you check in on this and let me know if this is the correct setup? https://github.com/wavy-assistant/just_audio/tree/feature/new_ios_implementation |
Hi @toonvanstrijp this seems like a reasonable start to me. Thanks for taking the initiative! One thing strange in GitHub's diff is this:
I wonder if I notice the macOS podspec lists an older version of macOS than previously. But I think any niggling issues will likely show up once implementation starts. |
@ryanheise I think it's an issue with GitHub displaying the diff. One question before I get started: would it be a good approach to keep the class structures and files like we have right now with the Objective-C code? We're using |
If you implement according to the just_audio platform interface, those structures will naturally come out in your design. |
@ryanheise I've a few more questions regarding the new implementation. I'm now working on the If you want to take a look at the current code: https://github.com/wavy-assistant/just_audio/tree/feature/new_ios_implementation (feedback is welcome and appreciated) |
Hi all, @SimoneBressan has just shared some significant work in PR |
up |
any update? |
I think some good work has been done on the PR mentioned in the previous comment. It's worth taking a look at it (if you haven't already). |
@ryanheise thanks, but the PR is not finished, unfortunately. |
See also my comment at the top:
I think the AVAudioEngine makes it possible to implement some advanced features that were not practical with the original implementation. However, it is of course a lot of work to do a complete rewrite, and so if you are interested in seeing it get closer to completion, you might consider becoming a contributor. |
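As one concrete example of those advanced features, an equalizer falls out almost directly from the engine graph via `AVAudioUnitEQ`. A minimal sketch (band frequencies and gains below are arbitrary example values, and the player node would still be scheduled and started as usual):

```swift
import AVFoundation

// Sketch: a simple 3-band equalizer inserted into the engine graph.
let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
let eq = AVAudioUnitEQ(numberOfBands: 3)

let frequencies: [Float] = [60, 1_000, 8_000]
for (i, band) in eq.bands.enumerated() {
    band.filterType = .parametric
    band.frequency = frequencies[i]
    band.bandwidth = 1.0   // in octaves
    band.gain = 0.0        // dB; would be adjusted at runtime from the Dart side
    band.bypass = false
}

engine.attach(player)
engine.attach(eq)
engine.connect(player, to: eq, format: nil)
engine.connect(eq, to: engine.mainMixerNode, format: nil)
```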
Hi @ryanheise , We're using both just_audio and audio_session in our project, and we're looking to implement an equalizer. From the sample code, we noticed the equalizer works well on Android, but it seems that iOS support is still a work in progress, as per the available documentation. Could you please provide any updates on the iOS implementation for equalizer support? Alternatively, do you recommend using any other library alongside just_audio and audio_session to achieve the same functionality on iOS? Thanks for your help! Best regards, |
Is your feature request related to a problem? Please describe.
Certain features such as a visualizer (#97), equalizer (#147), and pitch shifting (#329) may be more easily implemented if based on AVAudioEngine rather than the current AVQueuePlayer.
Describe the solution you'd like
Either reimplement using AVAudioEngine directly, or use an AVAudioEngine-based library such as AudioKit.
Describe alternatives you've considered
We can get some of the way there by plugging an audio tap processor into AVQueuePlayer, but unfortunately this does not give us access to iOS's built-in pitch shifting API which is only available via AVAudioEngine. This could still be possible via the audio tap processor by manually implementing the pitch shifting algorithm, or perhaps integrating the C version of sonic, but long term an AVAudioEngine-based implementation may end up being more flexible in terms of implementing other audio processing features.
Additional context
None
UPDATE
This will be a collaborative effort. Anyone can contribute, and we will be following the plan in this shared Google Doc:
https://docs.google.com/document/d/17EZEvmiyn94GCwddBGS5BAaYer5BTRFv-ENIAPG-WG4/edit?usp=sharing
We are currently in the research phase, so you can contribute by sharing any relevant links to useful resources you have found to help implement each feature in the list.