AKOfflineRenderNode doesn't render any sound. #1096

Closed
ygoncharov-ideas opened this issue Oct 23, 2017 · 30 comments
@ygoncharov-ideas

I used AKOfflineRenderNode to process recorded voice. The idea is to record the voice and save it to a file, then apply some effects to that file and save the combined voice + effects to a new file.

I get a successful result on my iPhone 6s running iOS 11.0.3, but on other devices running iOS 11+ (for example, the latest-generation iPad on 11.0.3) I always get a "silent" file of the same size, about 60 KB. I attached one for reference (rec.zip). The issue is also 100% reproducible on the iOS 11 simulator.

Here is my setup (I removed the effects chain since offline render doesn't work even on a clean recording):

    fileprivate func setupAudioKit() {
        AKSettings.enableLogging = true
        AKAudioFile.cleanTempDirectory()

        AKSettings.bufferLength = .medium
        AKSettings.numberOfChannels = 2
        AKSettings.sampleRate = 44100
        AKSettings.defaultToSpeaker = true

        do {
            try AKSettings.setSession(category: .playAndRecord, with: .allowBluetoothA2DP)
        } catch {
            AKLog("Could not set session category")
        }

        mic = AKMicrophone()
        micMixer = AKMixer(mic)

        recorder = try? AKNodeRecorder(node: micMixer)

        if let file = recorder.audioFile {
            player = try? AKAudioPlayer(file: file)
            player.looping = true
        }
        playerMixer = AKMixer(player)

        // Effects setup
        offlineRenderer = AKOfflineRenderNode(playerMixer)
        AudioKit.output = offlineRenderer

        AudioKit.start()
    }

Here is the export method:

    fileprivate func render() {
        offlineRenderer.internalRenderEnabled = false
        player.schedule(from: 0, to: player.duration, avTime: nil)

        let renderURL = URL(fileURLWithPath: FileHelper.documentsDirectory() + "rec.m4a")

        let sampleTimeZero = AVAudioTime(sampleTime: 0, atRate: AudioKit.format.sampleRate)
        player.play(at: sampleTimeZero)
        do {
            try offlineRenderer.renderToURL(renderURL, seconds: player.duration)
        } catch {
            print(error)
        }
        player.stop()
        offlineRenderer.internalRenderEnabled = true
    }
@dave234
Contributor

dave234 commented Oct 23, 2017

Argh. I'll look into it. I made the offline render node to address pre-iOS 11 offline rendering. Maybe it needs to be altered to use the new AVAudioEngine offline rendering.

@dave234
Contributor

dave234 commented Oct 23, 2017

Well, I didn't troubleshoot the issue in your example, but I was able to test the offline render node on my 6s running 11.0.3, and it still works. The test I used was the SongProcessor example iOS project. Keep us in the loop on your progress with this, please. It will be good to pinpoint the OS-related changes so we can better generalize and document things.

@ygoncharov-ideas
Author

ygoncharov-ideas commented Oct 23, 2017

@dave234 thank you for investigating the issue. I used code from SongProcessor to build my app, so I've just downloaded the latest source code from GitHub (4.0.2) and built the AudioKit framework from source. Then I ran the SongProcessor example on the iPhone 6 / iOS 11 simulator and got this strange silent file (mixDown.zip).

My Xcode version: 9.0 (9A235).
macOS version: 10.13 (17A405).

Please let me know if you need any additional details.

@dave234
Contributor

dave234 commented Oct 23, 2017

So, just to make sure I understand correctly: the silent file was produced by the SongProcessor example?

@ygoncharov-ideas
Author

Yep, I entered 2 as the number of loops and then uploaded the result file from: /Users/Yuri/Library/Developer/CoreSimulator/Devices/E508F7BA-A400-4B13-A47B-E6512CBCD431/data/Containers/Data/Application/DFF0F2FC-15C3-4AC2-91B6-520DC4C5ED92/tmp/mixDown.m4a

@dave234
Contributor

dave234 commented Oct 23, 2017 via email

@ygoncharov-ideas
Author

Unfortunately, I will only be able to run SongProcessor on the iPad tomorrow.
BTW, I added the AudioKit framework as source code to the project and tried to debug it.
I put a breakpoint at the first line of this method:

-(BOOL)renderToFile:(NSURL * _Nonnull)fileURL
            seconds:(double)seconds
           settings:(NSDictionary<NSString *, id> * _Nullable)settings
              error:(NSError * _Nullable * _Nullable)outError{

I checked outError and it wasn't nil (screenshot attached).

@evenbrenna
Contributor

Hi,

I have a very similar issue on iOS 11 (not tested on iOS 10), but in my case the file rendered by AKOfflineRenderNode has a long stretch of silence at the beginning. The silent part is about as long as the duration of the original file.

If I do something like offlineRenderer.renderToURL(renderURL, seconds: player.duration * 2) I get a file where the first half (not exactly half, but pretty close) is silent, and then the rest contains the expected result.

Happy to post some code if that would be helpful.

@ygoncharov-ideas
Author

I've just tried increasing the duration to player.duration * 2, but AKOfflineRenderNode produces silent audio of double the length.

@evenbrenna
Contributor

In my case, the duration of the silent part relative to the duration of the part with signal seems to vary depending on the length of the original file.

When player.duration is around 220 seconds I get about 50/50 silence/signal when I render a file of length player.duration * 2.

For a shorter file (14 seconds) I have to render player.duration * 16 seconds to see some signal at the end of the rendered file...

@ygoncharov-ideas
Author

Can you run SongProcessor on your iPad?

@dave234, instead of the iPad I ran SongProcessor on an iPhone 6 with iOS 11.0.3 (15A432) and then shared the result to my Mac via AirDrop. I still got a silent file - mixDown-iPhone6.zip

I also disabled the "Inter-App Audio" and Background Modes capabilities, since they require registering a separate bundle ID to run the app on real devices. But I guess they are used for playback and communication with iTunes, so they shouldn't be the root cause of the reported problem.

@ygoncharov-ideas
Author

@dave234, in the meantime I decided to implement a solution based on AVAudioEngine. Details at: https://developer.apple.com/videos/play/wwdc2017/501/

Here is rough code for the offline renderer:

import AudioKit
import AVFoundation

class OfflineRenderer: AKNode {
    public func enableManualRendering() {
        do {
            // The engine must be stopped before switching rendering modes.
            AudioKit.engine.stop()

            let maxNumberOfFrames: AVAudioFrameCount = 4096
            try AudioKit.engine.enableManualRenderingMode(.offline, format: AudioKit.format, maximumFrameCount: maxNumberOfFrames)

            try AudioKit.engine.start()
        } catch {
            print("could not enable manual rendering mode, \(error)")
        }
    }

    public func disableManualRendering() {
        AudioKit.engine.disableManualRenderingMode()
    }

    public func renderToURL(_ url: URL, length: AVAudioFramePosition, settings: [String: Any]) throws {
        let outputFile = try AVAudioFile(forWriting: url, settings: settings)
        let engine = AudioKit.engine

        // AVAudioPCMBuffer's initializer is failable, so guard against nil.
        guard let buffer = AVAudioPCMBuffer(pcmFormat: engine.manualRenderingFormat,
                                            frameCapacity: engine.manualRenderingMaximumFrameCount) else { return }

        while engine.manualRenderingSampleTime < length {
            let framesToRender = min(buffer.frameCapacity, AVAudioFrameCount(length - engine.manualRenderingSampleTime))
            // Let render errors propagate; swallowing them here would loop forever
            // because manualRenderingSampleTime never advances on failure.
            let status = try engine.renderOffline(framesToRender, to: buffer)

            if status == .success {
                try outputFile.write(from: buffer)
            }
        }
    }
}

Here is an example of usage:

        let settings = [
            AVFormatIDKey : kAudioFormatMPEG4AAC,
            AVSampleRateKey: 44100.0,
            AVNumberOfChannelsKey: 2,
            AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue
            ] as [String : Any]

        let documentsPath = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0]
        let renderURL = URL(fileURLWithPath: documentsPath + "processed.m4a")
        let sampleTimeZero = AVAudioTime(sampleTime: 0, atRate: AudioKit.format.sampleRate)
        
        offlineRenderer.enableManualRendering()
        
        player.schedule(from: 0, to: player.duration, avTime: nil)
        player.play(at: sampleTimeZero)
        
        do {
            try offlineRenderer.renderToURL(renderURL, length: player.audioFile.length, settings: settings)
        } catch {
            print(error)
        }
        player.stop()

        offlineRenderer.disableManualRendering()

I've tested it on the simulator and on the problematic iPhone and got correctly processed recordings on both. I found two inconveniences with this approach: the app must target iOS 11+, and the render process is synchronous.
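A possible mitigation for the synchronous render might be to run the whole sequence on a background queue (a sketch, untested; it reuses the player, offlineRenderer, renderURL, and settings from the snippet above):

    // Sketch, untested: run the synchronous render off the main thread.
    DispatchQueue.global(qos: .userInitiated).async {
        offlineRenderer.enableManualRendering()

        player.schedule(from: 0, to: player.duration, avTime: nil)
        player.play(at: AVAudioTime(sampleTime: 0, atRate: AudioKit.format.sampleRate))

        do {
            try offlineRenderer.renderToURL(renderURL, length: player.audioFile.length, settings: settings)
        } catch {
            print(error)
        }

        player.stop()
        offlineRenderer.disableManualRendering()

        DispatchQueue.main.async {
            // Back on the main thread: update the UI with the finished file.
        }
    }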

Could you please review my code and help fit it into the AudioKit architecture? I'm not sure how it should be integrated. Any help making this rough code more production-ready is also appreciated. Ideally we can turn it into a new feature pull request.

@dave234
Contributor

dave234 commented Oct 26, 2017

This is great! Thank you for contributing! There's a bit of an API flaw on my part in the implementation of the offline render node. I made it for backward compatibility with < iOS 11.0, but didn't create an API where the implementation could be swapped. My thinking was that a separate node would be preferable so that audio could continue to play on the rest of the engine while a single "path" was manually rendered. This has proven to be a mistake.

So the biggest inconsistency between AVAudioEngine's manual rendering and my implementation is the use of a separate node rather than rendering the output node. I think if you are going to implement offline rendering using the new enableManualRenderingMode under the hood, it should be a method on the global AudioKit rather than a node, because that better reflects the behavior. I'm tempted to just make the offline render node unavailable on iOS 11.0, implement offline rendering on AudioKit, and call it a day. This is probably the most straightforward way of dealing with the issue.

However, if you want to try to make the offlineRender node work >= iOS 11.0, I think a different approach is necessary (hacks). Since I am not able to reproduce the behavior locally on my 6S, do you think you could try something for me? Just after the call offlineRenderer.internalRenderEnabled = false (in the original snippet), try playing around with enableManualRenderingMode with/without stopping the engine, or any other ideas around sending the "We're rendering offline" message to all of the nodes. If you can't produce results easily with these sorts of hacks, I vote to make the offline render node unavailable >= iOS 11, and to re-implement offline rendering with an API that is consistent with AVAudioEngine's offline rendering, and that works both pre iOS 11 and post iOS 11 (i.e. render the output node rather than a separate node in the chain).

@ygoncharov-ideas
Author

ygoncharov-ideas commented Oct 27, 2017

@dave234, thank you for feedback!

should be a method on the global AudioKit

I absolutely agree with you. I will create an extension on the AudioKit class.

I'm tempted to just make the offline render node unavailable in iOS 11.0

Please don't rush. Something "mystical" happens around offline render. When I started developing my app I had the same problem on my iPhone 6s.
I decided it was a software problem and updated my macOS, Xcode, and iOS versions, but the problem remained. After a few hours of fighting, the problem disappeared by itself; I didn't apply any real fix. Next I started testing on an iPad, and the problem appeared there. You won't believe it, but the problem disappeared on the iPad after several hours of testing and now appears randomly. Now I have an iPhone 6 where the problem is reproducible 100% of the time. So it looks like the offline renderer (AudioKit) conflicts with another process for audio service privileges, and when it wins, everything works well. I also noticed that when I tried to play a remote (streaming) recording via AVPlayer in another view controller, AudioKit behaved really strangely and recorded the voice at 3x speed. I'd advise testing SongProcessor on a "clean" device with iOS 11 - by clean I mean a device that has never run AudioKit.

try playing around with enableManualRenderingMode with/without stopping the engine

I understood the idea of your hack. I will check it on the problematic iPhone 6 and let you know the results.

re-implement offline rendering with an API that is consistent with AVAudioEngine's offline rendering, and that works both pre iOS 11 and post iOS 11

Are you sure? enableManualRenderingMode is marked @available(iOS 11.0, *).

@ygoncharov-ideas
Author

I tried enabling manual rendering mode for SongProcessor and tested it on an iPhone 6s on iOS 11.

    fileprivate func mixDownLoops(url: URL, loops: Int) throws {
        offlineRender.internalRenderEnabled = false
        if #available(iOS 11, *) {
            do {       
                let maxNumberOfFrames: AVAudioFrameCount = 4096
                try AudioKit.engine.enableManualRenderingMode(.offline, format: AudioKit.format, maximumFrameCount: maxNumberOfFrames)
            } catch (let error) {
                print("\(error)")
            }
        }
....

I saw the following error: Error Domain=com.apple.coreaudio.avfaudio Code=-80801 "(null)"

Next I added engine stop/start methods:

    fileprivate func mixDownLoops(url: URL, loops: Int) throws {
        offlineRender.internalRenderEnabled = false
        if #available(iOS 11, *) {
            do {
                AudioKit.stop()
                
                let maxNumberOfFrames: AVAudioFrameCount = 4096
                try AudioKit.engine.enableManualRenderingMode(.offline, format: AudioKit.format, maximumFrameCount: maxNumberOfFrames)
                
                AudioKit.start()
            } catch (let error) {
                print("\(error)")
            }
        }
...

But it failed on try offlineRender.renderToURL(url, seconds: duration) with the error:
Error Domain=AKOfflineRenderAudioUnit Code=1 "Node needs to be connected and engine needs to be running"

Do you have other scenarios to try?

@dave234
Contributor

dave234 commented Oct 27, 2017

Thanks so much for helping with this.

Are you sure? enableManualRenderingMode marked as @available(iOS 11.0, *)

My thinking is that if >= iOS 11, we use the new iOS 11 AVAudioEngine manual rendering under the hood, and if < iOS 11, we use the (previously working) node approach, but devise a common interface to both of those methods (i.e. AudioKit.renderToURL(url, seconds: 123.45)).
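Roughly sketched (the two render helpers are hypothetical placeholders for those implementations):

    // Sketch of a common interface; the two helpers are hypothetical.
    extension AudioKit {
        public class func renderToURL(_ url: URL, seconds: Double) throws {
            if #available(iOS 11.0, *) {
                // New path: AVAudioEngine's manual rendering mode under the hood.
                try renderUsingManualRenderingMode(to: url, seconds: seconds)
            } else {
                // Old path: the AKOfflineRenderNode-based implementation.
                try renderUsingOfflineRenderNode(to: url, seconds: seconds)
            }
        }
    }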

Do you have other scenarios to try?

In AKOfflineRenderNode.m, could you try setting pullFlags to kAudioOfflineUnitRenderAction_Render?

Sort of a long shot, but maybe worth a try.

@ygoncharov-ideas
Author

@dave234 sorry for the delay.

As you recommended I set:

        AudioUnitRenderActionFlags pullFlags = kAudioOfflineUnitRenderAction_Render;
        AUAudioUnitStatus status = _inputBus.pullInput(&pullFlags, &ts, renderLen, 0, pullInputBlock);

Unfortunately, it didn't give any positive result; SongProcessor's recording is still silent.
If I add AudioKit.engine.enableManualRenderingMode, then I still see Error Domain=AKOfflineRenderAudioUnit Code=1.

@ygoncharov-ideas
Author

@dave234 do you have any updates or fixes for code to try?

@ygoncharov-ideas
Author

@dave234 in the meantime I tested SongProcessor with Xcode 9.1 (9B55) on iOS 11.1. Unfortunately, the result is the same: I still get silent audio.

I also tested the solution based on AVAudioEngine's enableManualRenderingMode, but it doesn't work as expected. I can render (export) a recording only once; when I start a new recording, AKNodeRecorder doesn't record any sound, and I don't see any error in the log.

It looks like there is no reliable way to render sound offline on iOS 11.

@dave234
Contributor

dave234 commented Nov 9, 2017

Ok I've got a working offline render function implemented as AudioKit.renderToFile, and I've deprecated AKOfflineRenderNode. Thanks for finding this.

#1114
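Usage looks roughly like this (a sketch; see #1114 for the exact signature):

    // Rough usage sketch of the new API - see #1114 for the exact signature.
    let settings: [String: Any] = [
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVSampleRateKey: 44100.0,
        AVNumberOfChannelsKey: 2
    ]
    let outputURL = URL(fileURLWithPath: NSTemporaryDirectory() + "bounce.m4a")
    let outputFile = try AVAudioFile(forWriting: outputURL, settings: settings)

    try AudioKit.renderToFile(outputFile, duration: player.duration) {
        // Prerender closure: start playback once the engine is in offline mode.
        player.play()
    }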

@aure aure closed this as completed Nov 9, 2017
@eljeff
Member

eljeff commented Feb 5, 2018

I do think it is handy to have the offline rendering NOT related to the main audio path. What if I want to render something different offline in the background?

@dave234
Contributor

dave234 commented Feb 6, 2018

@eljeff I'm with you on this. It's a bit of work though. Currently AudioKit uses a global AVAudioEngine; I believe we will need to make all AudioKit functions pass around an audioEngine if we want to achieve this. We could default to a global engine if the audioEngine argument is missing. I also think it's totally worth doing. Not only would we have more control over offline rendering; we would also gain the ability to let AudioKit act as a "slave" engine. For my projects it's worth it to bounce tracks without stopping playback. I have some thoughts on how to do this; I'd love to hear yours.
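Sketching the shape of the idea (purely hypothetical, not current AudioKit API):

    import AVFoundation

    // Hypothetical sketch: render functions take an engine parameter instead of
    // assuming the global one, so a secondary engine can bounce offline while
    // the main engine keeps playing.
    enum OfflineBounce {
        static func render(to file: AVAudioFile,
                           frames: AVAudioFramePosition,
                           using engine: AVAudioEngine) throws {
            engine.stop()
            try engine.enableManualRenderingMode(.offline,
                                                 format: file.processingFormat,
                                                 maximumFrameCount: 4096)
            try engine.start()

            guard let buffer = AVAudioPCMBuffer(pcmFormat: engine.manualRenderingFormat,
                                                frameCapacity: engine.manualRenderingMaximumFrameCount) else { return }

            while engine.manualRenderingSampleTime < frames {
                let count = min(buffer.frameCapacity, AVAudioFrameCount(frames - engine.manualRenderingSampleTime))
                if try engine.renderOffline(count, to: buffer) == .success {
                    try file.write(from: buffer)
                }
            }

            engine.stop()
            engine.disableManualRenderingMode()
        }
    }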

@eljeff
Member

eljeff commented Feb 6, 2018

I actually figured out a hack:
I routed the offline node to a mixer, plugged that into the output, then muted the mixer.

Seems to work!
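Roughly like this (node names hypothetical):

    // Sketch of the routing (node names hypothetical): the offline node stays
    // connected so the engine pulls it, but the muted mixer keeps it inaudible
    // while the live path plays normally.
    let offlineRenderer = AKOfflineRenderNode(backgroundChain)
    let muteMixer = AKMixer(offlineRenderer)
    muteMixer.volume = 0

    AudioKit.output = AKMixer(livePath, muteMixer)
    AudioKit.start()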

@jbmaxwell

I'm curious; do you know if this would work with playback from AKSequencer (i.e., MIDI)?

@NasrullahKhan

It would be good if you could share a demo project that records voice, applies a reverb effect, saves the filtered file, and then shares it.

@aure
Member

aure commented Oct 17, 2018

I'm curious; do you know if this would work with playback from AKSequencer (i.e., MIDI)?

No, you can't right now but we're working on it!

@aure
Member

aure commented Oct 17, 2018

It will be good if you share demo project with recorded voice + reverb effect and then save the filtered file and then share it.

Except the share part, this is what RecorderDemo does.

@NasrullahKhan

Hi, thanks for your response, it worked. Now I need your help again: I have to add an echo effect. In Swift I tested a few snippets; we use this to add echo:

    let echoNode = AVAudioUnitDistortion()
    echoNode.loadFactoryPreset(.multiEcho1)

but how can we add echo in AudioKit?
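Maybe something like AKDelay with feedback? Just a guess (untested sketch):

    // Untested guess: AKDelay with feedback to approximate a multi-echo.
    let delay = AKDelay(player)
    delay.time = 0.25       // seconds between echoes
    delay.feedback = 0.5    // 0-1: how much output is fed back as new echoes
    delay.dryWetMix = 0.5   // 0-1: blend of dry and echoed signal
    AudioKit.output = delay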

@luizv

luizv commented Nov 21, 2018

Reporting that I have the same problem with the SongProcessor demo. When I export the app's default generated guitar samples, the result is a silent file. Built from Xcode 9.4.1 onto an iPad Mini with iOS 9.3.

I'm trying it because I can't get offline render to work (I need to support iOS < 11).
I need to export audio directly from the player, modified by AKTimePitch. My last try was with the self.player.audioFile?.exportAsynchronously method, but it returns the original sound without the volume/pitch/tempo changes. (renderToFile works great on iOS >= 11.)

Please, can anyone dealing with the same problem shed some light on how to proceed?

@jbmaxwell

I'm curious; do you know if this would work with playback from AKSequencer (i.e., MIDI)?

No, you can't right now but we're working on it!

Is there any progress to report on offline rendering of MIDI?
