
Rewrite iOS implementation based on AVAudioEngine #334

Open
ryanheise opened this issue Mar 10, 2021 · 57 comments
Assignees: ryanheise
Labels: 1 backlog, enhancement (New feature or request)

Comments

@ryanheise
Owner

ryanheise commented Mar 10, 2021

Is your feature request related to a problem? Please describe.

Certain features such as a visualizer (#97), an equalizer (#147) and pitch shifting (#329) may be more easily implemented if based on AVAudioEngine rather than the current AVQueuePlayer.

Describe the solution you'd like

Either reimplement using AVAudioEngine directly, or use an AVAudioEngine-based library such as AudioKit.

Describe alternatives you've considered

We can get some of the way there by plugging an audio tap processor into AVQueuePlayer, but unfortunately this does not give us access to iOS's built-in pitch shifting API, which is only available via AVAudioEngine. Pitch shifting could still be possible via the audio tap processor by manually implementing the algorithm, or perhaps by integrating the C version of sonic, but in the long term an AVAudioEngine-based implementation may end up being more flexible for implementing other audio processing features.
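
For context, the built-in pitch shifting API in question is AVAudioUnitTimePitch, which only works as a node inside an AVAudioEngine graph. A minimal sketch (the file URL and pitch value are placeholders):

  import AVFoundation

  // Sketch: AVAudioEngine's built-in pitch shifting via AVAudioUnitTimePitch.
  func playPitchShifted(url: URL) throws {
      let engine = AVAudioEngine()
      let player = AVAudioPlayerNode()
      let timePitch = AVAudioUnitTimePitch()
      timePitch.pitch = 300   // cents; +300 = up three semitones
      timePitch.rate = 1.0    // leave playback speed unchanged

      engine.attach(player)
      engine.attach(timePitch)
      engine.connect(player, to: timePitch, format: nil)
      engine.connect(timePitch, to: engine.mainMixerNode, format: nil)

      let file = try AVAudioFile(forReading: url)
      try engine.start()
      player.scheduleFile(file, at: nil)
      player.play()
  }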

Additional context

None

UPDATE

This will be a collaborative effort. Anyone can contribute, and we will be following the plan in this shared Google Doc:

https://docs.google.com/document/d/17EZEvmiyn94GCwddBGS5BAaYer5BTRFv-ENIAPG-WG4/edit?usp=sharing

We are currently in the research phase, so you can contribute by sharing any relevant links to useful resources you have found to help implement each feature in the list.

@ryanheise added the enhancement and 1 backlog labels Mar 10, 2021
@ryanheise self-assigned this Mar 10, 2021
@ryanheise mentioned this issue Mar 10, 2021
@mvolpato

Is it an option to move to Swift?

@ryanheise
Owner Author

Yes, this probably will be written in Swift.

@nt4f04uNd
Contributor

this will be breaking for folks, don't do this

@ryanheise
Owner Author

@nt4f04uNd Do you mean breaking in terms of stability, or breaking in terms of codec support?

@nt4f04uNd
Contributor

in terms of which language combinations are possible: you can have these configs,

| app   | library |
|-------|---------|
| obj-c | obj-c   |
| swift | obj-c   |
| swift | swift   |

but not the fourth one: a library written in Swift can't be used from an obj-c project

@ryanheise
Owner Author

That's true, although just to play devil's advocate:

  • New Flutter projects have been Swift by default now for more than a year.
  • People are converting older Objective C projects to Swift.
  • Consequently, this is a vanishing problem, and people who have it can solve it with flutter create.

This will only really affect the more experienced Flutter developers (those who created their project before Flutter changed the default template to Swift), so if we include instructions in the README on how to convert a project from Objective C to Swift, I wouldn't expect Swift to be a showstopper.

It would be interesting to know the actual statistics on how many people are still running Objective C projects. Plugins are a different story, since people may have various reasons to choose Objective C vs Swift when writing their plugin in that language. But a project doesn't actually contain any Objective C or Swift code of its own, so switching a project from Objective C to Swift makes no real difference, except that it opens the door to all of the Swift plugins on pub.dev.

There are some instructions in the README for Android on how to convert old projects to the latest V2 plugin architecture, and in a similar vein I could add instructions for how to update an old Objective C project to Swift.

@ryanheise
Owner Author

(Copying this comment from another issue to get broader interest)

Is this supported on iOS currently?

The waveform visualizer is implemented on iOS but not pitch. You can track the pitch feature here: #329

There is a big question at this point whether to continue with the current AVQueuePlayer-based implementation or switch to an AVAudioEngine-based implementation. For pitch scaling, I really want to take advantage of AVAudioEngine's built-in features, but that requires a rewrite of the iOS side (see #334), and this is a MUCH bigger project.

I would really like to see an AVAudioEngine-based solution see the light of day, but it probably won't happen if I work on it alone. If anyone would like to help, maybe we can pull it off with some solid open source teamwork. One of the attractive options is AudioKit, a library built on top of AVAudioEngine that provides access to pitch adjustment AND ready-made APIs for a visualizer and equalizer. That is, it provides us with everything we need. BUT it is written in Swift, so it involves a language change, and we may need to deal with complaints that old projects don't compile (we'd need to provide extra instructions on how to update projects to be Swift-compatible).
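
To illustrate why the visualizer also gets easier: a sketch against the raw AVAudioEngine API (which AudioKit wraps); the buffer size and the RMS level calculation here are placeholder choices, a real visualizer might run an FFT instead:

  import AVFoundation

  // Sketch: tapping the engine's output to drive a waveform visualizer.
  func installVisualizerTap(on engine: AVAudioEngine) {
      let output = engine.mainMixerNode
      let format = output.outputFormat(forBus: 0)
      output.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
          guard let samples = buffer.floatChannelData?[0] else { return }
          let n = Int(buffer.frameLength)
          guard n > 0 else { return }
          // Rough level metering over this buffer's PCM samples.
          var sum: Float = 0
          for i in 0..<n { sum += samples[i] * samples[i] }
          let rms = (sum / Float(n)).squareRoot()
          _ = rms // in a plugin, forward this to Dart over an event channel
      }
  }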

Would anyone like to help me with this? (Please reply on #334)

@ryanheise
Owner Author

Another interesting library: https://github.com/tanhakabir/SwiftAudioPlayer

This suggests we want a combination of AVAudioEngine and AudioToolbox:

Thus, using AudioToolbox, we are able to stream audio and convert the downloaded data into usable data for the AVAudioEngine to play. For an overview of our solution check out our blog post.
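
In other words, AudioToolbox parses and converts the downloaded bytes into PCM, and the engine side then just keeps an AVAudioPlayerNode fed. A rough sketch of that second half (nextPCMBuffer is a stand-in for the AudioToolbox parse/convert stage, which is elided here):

  import AVFoundation

  // Sketch: keep scheduling converted PCM buffers onto the player node.
  func pump(_ player: AVAudioPlayerNode, nextPCMBuffer: @escaping () -> AVAudioPCMBuffer?) {
      guard let buffer = nextPCMBuffer() else { return } // stream exhausted
      player.scheduleBuffer(buffer) {
          // Called when this buffer has been consumed; queue the next one.
          pump(player, nextPCMBuffer: nextPCMBuffer)
      }
  }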

Given that I'm spread a bit thinly, I would not like to make this a solo effort, so I will wait until some more people post below who might be willing to team up and work together.

One other thought I had is that I may want to move the current iOS implementation out into its own federated plugin implementation rather than bundling it with the main plugin, although it can still be endorsed so the dependency automatically gets added to your app. The advantage is that if we eventually create an AVAudioEngine-based implementation of the just_audio API, we can do so without throwing away the old implementation. If some features don't work in the new implementation, users will still be able to use the old one, and vice versa.

@nt4f04uNd
Contributor

nt4f04uNd commented Apr 2, 2021

blacklist https://github.com/tanhakabir/SwiftAudioPlayer. i tried to use it and it's horrible: the audio glitches and the author barely maintains it
if you are researching variants of libraries you could use, i saw https://github.com/tumtumtum/StreamingKit (it's FOSS by the way)

imho you should just stick to the most popular one, i.e. AudioKit

@nt4f04uNd
Contributor

i like the idea of a federated implementation; in fact, this should be done for each platform

@ryanheise
Owner Author

blacklist https://github.com/tanhakabir/SwiftAudioPlayer. i tried to use it and it's horrible: the audio glitches and the author barely maintains it
if you are researching variants of libraries you could use, i saw https://github.com/tumtumtum/StreamingKit (it's FOSS by the way)

imho you should just stick to the most popular one, i.e. AudioKit

I agree with you, although what I found interesting about SwiftAudioPlayer is not the library itself but the author's informative blog post explaining the additional components that were needed on top of AVAudioEngine. Those may also apply to us even if we go with AudioKit. For example, to stream audio, we will probably need techniques similar to those used in SwiftAudioPlayer to first "get" the audio to then feed into AudioKit.

Thanks for sharing StreamingKit; the name itself suggests it will be another good reference when implementing the streaming part of this.

@mvolpato

mvolpato commented Apr 2, 2021

Hi @ryanheise, I would be happy to help but I do have some doubts:

  1. I have no experience with development of audio components.
  2. I have no experience with developing a plugin for Flutter.

I do have experience with Swift, and that would be my preference over Objective-C.

So I would be more comfortable with a list of clear (smaller) tasks to follow, and not just "Implement AudioKit" or similar. I do not know if this is what you are looking for.

@nt4f04uNd
Contributor

nt4f04uNd commented Apr 3, 2021

fyi StreamingKit is not just for streaming; to my understanding it offers pretty much the same set of features as AudioKit

@ryanheise
Owner Author

@mvolpato I'm happy to have your support! We definitely need a plan which can be broken down into tasks that can be each done by different people.

What I'll do is create a Google Doc with a list of features that need to be supported; the first thing to do is simply collect links to relevant documentation, tutorials or StackOverflow answers for implementing each feature. That first research phase will give us the pieces of the puzzle we need to then prioritise the tasks and start implementing them.

So the plan is:

  1. Create a list of features to implement
  2. Collect research for each feature
  3. Work out what needs to be done first
  4. Start implementing

Point 3 isn't strictly sequential, so we can start thinking about it earlier, but I think the order in which to do things will fall out naturally. e.g. first we should implement loading a simple audio source, then playing, then pausing, then the other standard controls (a sketch of that first milestone follows below). State broadcasting could happen in parallel with this.
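
As a sketch of what that first milestone might look like on the Swift side (the class and method names here are placeholders, not the plugin's actual API):

  import AVFoundation

  // Hypothetical first milestone: load a local audio source, play, pause.
  class EnginePlayer {
      private let engine = AVAudioEngine()
      private let player = AVAudioPlayerNode()

      init() {
          engine.attach(player)
          engine.connect(player, to: engine.mainMixerNode, format: nil)
      }

      func load(url: URL) throws {
          let file = try AVAudioFile(forReading: url)
          player.scheduleFile(file, at: nil)
          try engine.start()
      }

      func play() { player.play() }
      func pause() { player.pause() }
  }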

I'll also need to move the current iOS implementation into a separate platform implementation package.

I'll post a link to the doc once I create it.

@ryanheise
Owner Author

Shared doc: https://docs.google.com/document/d/17EZEvmiyn94GCwddBGS5BAaYer5BTRFv-ENIAPG-WG4/edit?usp=sharing

I will update the top post with details.

@ryanheise pinned this issue Apr 3, 2021
@ryanheise
Owner Author

ryanheise commented Apr 12, 2021

Hi @mvolpato, I'm ready to get the ball rolling with a Swift-based implementation. Swift is nice, but there seem to be complications in how the compiler works.

Would you be able to try these steps out below in your environment and see if they work for you?

First, create a new plugin from the template:

flutter create -t plugin --platforms android,ios audiokit

Then in ios/SwiftAudiokitPlugin.swift I import AudioKit and reference one of its symbols:

import AudioKit
...
var mixer: AKMixer

Then in ios/audiokit.podspec I added the CocoaPods dependency. I tried various alternatives, though none of them worked:

  s.dependency 'AudioKit/Core', '~> 4.0'
  # s.dependency 'AudioKit/Core', '4.11'
  # s.dependency 'AudioKit/Core', :git=>'https://github.com/AudioKit/AudioKit/', :branch => 'v5-develop'

If you try to run the example directory, you get the error mentioned here: AudioKit/AudioKit#2267

I'd be interested to know whether the above steps also work for you. Note that I upgraded my CocoaPods and Xcode to the latest versions before running this.

@ryanheise
Owner Author

If it also fails for you (and a solution doesn't stand out) then we may need to file either a Flutter issue or an AudioKit issue. According to reports in the above AudioKit issue, the error was resolved, so maybe it's an issue that occurs due to the Flutter setup specifically.

@mvolpato

'AudioKit/Core', :git=>'https://github.com/AudioKit/AudioKit/', :branch => 'v5-develop'

It looks like cocoapods is not supported (yet) for version 5, so this will not work for sure.

I also cannot get it to work. I tried different versions.

@mvolpato

It looks like this other plugin has it working. I did not have time to investigate their approach yet. I will check later today.

@ryanheise
Owner Author

Great find! I think what's happening in my project (and my environment) is that it's resolving AudioKit to 4.11, whereas flutter_sequencer resolves it to 4.11.1, and according to that issue the bug fix requires 4.11.1 or later.

I was scratching my head for a while: CocoaPods was caching the first version it ever resolved to from my first attempt at the podspec, even after running flutter clean. It turns out flutter clean deletes neither the Pods directory nor the Podfile.lock file. This is also why flutter_sequencer ends up resolving the AudioKit dependency to 4.11.1 when there's an even later version 4.11.2 available. Anyway, I deleted everything and with a fresh start it works!
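
For anyone hitting the same caching problem, the manual clean-up looks roughly like this (paths assume the Podfile lives under example/ios, as in a plugin's example app; adjust for a regular app):

  flutter clean
  rm -rf example/ios/Pods example/ios/Podfile.lock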

I think next it will probably be necessary to move the current iOS implementation into a separate package like the web one, then the new AudioKit implementation can just be another alternative to that.

@mvolpato

Unfortunately (or maybe that is a good thing, with everything going on in the world right now) I am very busy with my day job, and I have had no time to look into this. :(

@ryanheise
Owner Author

No problem at all! Hopefully soon I'll be able to take a crack at getting something started.

@NelsonJTSM

Have there been any updates on this?

@jmshrv

jmshrv commented Sep 6, 2021

Instead of using iOS's native libraries, why don't we use something like libvlc? There's already work being done with it for desktop support, and libvlc has advantages such as better codec support (OPUS and Vorbis). In theory, it could mean one common backend for just_audio with no dependency on platform-specific libraries. Of course, it would be a major update that changes a lot.

@nt4f04uNd
Contributor

I think the main blocker for this is its LGPL license #103 (comment)

@toonvanstrijp

@ryanheise I was wondering whether any work has already been done to make this possible?

@ryanheise
Owner Author

@toonvanstrijp thanks for the ping. I think the main issue for now is that I would like to have multiple iOS implementations, but Flutter's federated plugin model doesn't yet provide a way to set a default iOS implementation and then allow an app to choose an override, specifically when the implementation uses method channels (although the Flutter team will eventually add this).

I think it would still be possible to start development on this but just delay merging it until that Flutter issue is sorted out. However, I am continually distracted by other issues, and bug fixes are always taken as higher priority. Hopefully I (or someone) can get a foundation started so that it is easier for others to start contributing.

Finally, when I first started experimenting with AudioKit, it was with version 4.x. But now that AudioKit 5.x is out and recommended, I should probably scrap what I had started. I remember at the time 5.x was actually just released but they didn't have it on cocoapods yet (because the developer wasn't a fan of cocoapods). Fortunately 5.x is now on cocoapods.

@toonvanstrijp

@ryanheise I started working on rewriting this library in Swift. Let me know if you're interested in merging this. I think that if we moved to Swift, a lot more developers could collaborate, since Swift is more the "standard" nowadays.

I'll also start looking at AudioKit 5.0, but I'm not an iOS developer, so any tips on how to structure things on the iOS side are welcome!

@toonvanstrijp

@ryanheise one more question regarding AudioKit. Right now we use AVQueuePlayer, which handles all downloading and buffering built in, right? If we switch to AudioKit or AVAudioEngine, do we need to take care of this ourselves?

@jmshrv

jmshrv commented Nov 12, 2021

From a totally outsider perspective, wouldn't it be easier to use a package that handles downloading/buffering for us?

Also, one issue with most iOS libraries is that they use Apple's decoding stuff, which doesn't support OPUS/Vorbis. For my use case, it's kind of annoying. I was vaguely looking into making a gstreamer backend for all platforms, but it could never be the main implementation because it's LGPL. VLCKit also won't work for the same reason.

@nt4f04uNd
Contributor

Also, one issue with most iOS libraries is that they use Apple's decoding stuff, which doesn't support OPUS/Vorbis.

I just glanced over the listed options in the Google Doc and it seems that https://github.com/sbooth/SFBAudioEngine is the only one that supports Opus. Vorbis is not supported by any of them, though, which is probably OK given that it's a predecessor of Opus.

@nt4f04uNd
Contributor

Vorbis is not supported by any of them, though, which is probably OK given that it's a predecessor of Opus.

Actually, it supports Vorbis as well

@jmshrv

jmshrv commented Nov 12, 2021

That library looks great, although I won't really be able to contribute to this as I have no experience with native iOS :(

@ryanheise
Owner Author

@ryanheise I started working on rewriting this library to Swift. Let me know if you're interested in merging this.

I'd definitely go with Swift for the AudioKit-based implementation since AudioKit itself is written in Swift.

Rewriting the current AVQueuePlayer implementation in Swift is something I'm more hesitant to do right now, since this is the principal iOS implementation and such a large-scale rewrite is likely to introduce stability issues. Any rewrite of it should be planned and discussed in order to avoid that happening. I think we can also delay it until at least a while after the AudioKit-based implementation starts becoming usable, because my hope is that that implementation could eventually replace the AVQueuePlayer implementation (meaning the effort spent rewriting it would be wasted).

I'll also start looking at AudioKit 5.0, but I'm not a iOS developer. So any tips on how to structure things on the iOS side is welcome!

I've done a quick experiment with AudioKit by submitting a PR to the sound_generator so you might get some ideas by looking at it. Just a couple of notes to keep in mind:

  1. Since we don't yet have an easy way of overriding the default iOS implementation, my approach would be to first do a git mv to rename the old ios/macos directories, then recreate them with flutter create for the desired platforms and languages. This will also set up the necessary bridge between Swift and Objective C. Eventually, though, I'd like this new AudioKit-based implementation to live in its own directory as a federated plugin implementation.
  2. We will also need a reasonable way to share code between the macOS and iOS implementations. I haven't looked into how to do that yet for Swift (though see the sketch below). The way I did it for the Objective C implementation is via symlinks, but there are Swift approaches that you can find by looking at other Flutter plugins written in Swift. It's unfortunate, but the official 1st party Flutter plugins don't bother to reuse code; they instead have the iOS implementation in Objective C and the macOS implementation reimplemented in Swift.
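
On the Swift side, one detail that differs per platform regardless of how the files end up shared is the Flutter import itself, so a conditional-compilation block like this is the usual pattern (a sketch, not a recommendation for any particular file layout):

  #if os(iOS)
  import Flutter
  #elseif os(macOS)
  import FlutterMacOS
  #endif

  // The AVFoundation/AudioKit engine code can then be shared verbatim;
  // only the plugin registration differs between the two platforms.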

Also, one issue with most iOS libraries is that they use Apple's decoding stuff, which doesn't support OPUS/Vorbis.

I just glanced over the listed options in the Google Doc and it seems that https://github.com/sbooth/SFBAudioEngine is the only one that supports Opus. Vorbis is not supported by any of them, though, which is probably OK given that it's a predecessor of Opus.

Just a general comment here, but keeping in line with the vision of supporting multiple federated implementations of the just_audio platform interface, it is no problem if anyone wants to write an SFBAudioEngine-based implementation (which, although MIT, will still potentially involve LGPL if you use those parts that have that license) or GStreamer, etc.

I think when it comes to iOS, different people may end up needing different choices. For example, those building apps where audio processing is important (pitch shifting, time stretching, etc.) may want the AudioKit implementation, while those needing certain other formats might use a GStreamer- or VLC-based implementation.

@ryanheise one more question regarding AudioKit. Right now we use AVQueuePlayer this handles all downloading and buffering build-in right? If we're switching to AudioKit or AVAudioEngine we need to take care of this ourselves?

Yes, that's an issue with a lot of these alternatives to AVQueuePlayer: we will have to manage a lot more ourselves. But at the same time, I'm running into limitations of AVQueuePlayer precisely because it manages buffering in a way I don't like, so there is also a benefit to managing things ourselves.

@toonvanstrijp

@ryanheise I've done a small setup as you explained. Could you check it out and let me know if this is the correct setup? https://github.com/wavy-assistant/just_audio/tree/feature/new_ios_implementation

@ryanheise
Owner Author

Hi @toonvanstrijp this seems like a reasonable start to me. Thanks for taking the initiative!

One thing strange in GitHub's diff is this:

just_audio/macos/Classes/JustAudioPlugin.m → just_audio/ios_old/Classes/JustAudioPlugin.m 

I wonder if git mv really did something strange or whether that's just GitHub having problems trying to display it.

I notice the macOS podspec lists an older version of macOS than previously.

But I think any niggling issues will likely show up once implementation starts.

@toonvanstrijp

@ryanheise I think it's an issue with GitHub displaying the diff.

One question before I get started: would it be a good approach to keep the class structures and files as we have right now in the Objective-C code? We're using ConcatenatingAudioSource etc. Are those still usable for iOS when using AudioKit, and what are your thoughts on approaching this?

@ryanheise
Owner Author

If you implement according to the just_audio platform interface, those structures will naturally come out in your design.

@toonvanstrijp

@ryanheise I have a few more questions regarding the new implementation. I'm now working on the load function, but I'm not sure how the buffering part works. Does buffering also apply to local files (file:///)? And do you have some sample code on how to do this buffering with AudioKit?

If you want to take a look at the current code: https://github.com/wavy-assistant/just_audio/tree/feature/new_ios_implementation (feedback is welcome and appreciated)

@ryanheise
Owner Author

ryanheise commented Feb 12, 2022

Hi all, @SimoneBressan has just shared some significant work in PR #658 #784 with an AVAudioEngine-based implementation in Swift, if anyone would like to check it out. I will merge it into a new dev branch after I sort out a way for this Swift implementation to co-exist with the current Objective C implementation, so that people can choose one over the other based on stability or feature set. In particular, #658 #784 may not implement every feature, but it does have the equalizer, which is one of the nice things that is easier to do with AVAudioEngine, or indeed AudioKit, which is something else to explore.
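
For anyone wondering what the equalizer looks like on top of AVAudioEngine, here is a minimal sketch using AVAudioUnitEQ (the band frequencies and gains are placeholders, not the PR's actual values):

  import AVFoundation

  // Sketch: a 3-band parametric EQ node for the engine graph.
  func makeEqualizer() -> AVAudioUnitEQ {
      let eq = AVAudioUnitEQ(numberOfBands: 3)
      let centres: [Float] = [60, 1_000, 8_000] // Hz, placeholder bands
      for (i, band) in eq.bands.enumerated() {
          band.filterType = .parametric
          band.frequency = centres[i]
          band.bandwidth = 1.0 // octaves
          band.gain = 0.0      // dB, adjusted per user setting
          band.bypass = false
      }
      // Wire it up with: engine.attach(eq), then connect
      // player -> eq -> engine.mainMixerNode.
      return eq
  }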

@mminhlequang

up

@shahmirzali49

any update?

@ryanheise
Owner Author

I think some good work has been done on the PR mentioned in the previous comment. It's worth taking a look at it (if you haven't already).

@shahmirzali49

shahmirzali49 commented May 28, 2023

@ryanheise thanks, but the PR is not finished, unfortunately.
(screenshot: PR status, 2023-05-28)

@ryanheise
Owner Author

See also my comment at the top:

This will be a collaborative effort. Anyone can contribute

I think AVAudioEngine makes it possible to implement some advanced features that were not practical with the original implementation. However, a complete rewrite is of course a lot of work, so if you are interested in seeing it get closer to completion, you might consider becoming a contributor.

@richanshah

Hi @ryanheise ,

We're using both just_audio and audio_session in our project, and we're looking to implement an equalizer. From the sample code, we noticed the equalizer works well on Android, but it seems that iOS support is still a work in progress, as per the available documentation.

Could you please provide any updates on the iOS implementation for equalizer support? Alternatively, do you recommend using any other library alongside just_audio and audio_session to achieve the same functionality on iOS?

Thanks for your help!

Best regards,
Richa Shah
shahricha723@gmail.com
