
SignalR crashes when app is moved to background and hub connection is closed #208

Closed
nysander opened this issue Oct 21, 2021 · 15 comments

@nysander

I have the following reconnect policy:

import Foundation
import SignalRClient

class RTReconnectPolicy: ReconnectPolicy {
    private var connect: () -> Void
    private var retryTimeout = 1

    public init(connect: @escaping () -> Void) {
        self.connect = connect
    }

    func nextAttemptInterval(retryContext: RetryContext) -> DispatchTimeInterval {
        // A 401 means the access token is no longer valid: kick off a full connect()
        // (which refreshes the token) and try again in 30 seconds.
        if let error = retryContext.error as? SignalRError, error.statusCode == 401 {
            Logger.shared.log(error.localizedDescription, logLevel: .error)
            connect()
            return DispatchTimeInterval.seconds(30)
        }

        Logger.shared.log(retryContext.error.localizedDescription, logLevel: .error)

        return DispatchTimeInterval.seconds(calculateTimeout(failedAttemptsCount: retryContext.failedAttemptsCount))
    }

    // Backs off by 2 seconds per failed attempt, levelling off at 32 seconds.
    func calculateTimeout(failedAttemptsCount: Int) -> Int {
        if retryTimeout < 32 {
            retryTimeout = failedAttemptsCount * 2
            return retryTimeout
        } else {
            retryTimeout = 32
            return retryTimeout
        }
    }
}

and my connection method:

refreshTokenUseCase.execute()
    .sink(receiveValue: { accessToken in
        if !accessToken.isEmpty {
            self.connecting = true

            self.connection = HubConnectionBuilder(url: url)
                .withLogging(minLogLevel: rtLogLevel, logger: Logger.shared)
                .withJSONHubProtocol()
                .withAutoReconnect(reconnectPolicy: RTReconnectPolicy(connect: { self.connect() }))
                .withPermittedTransportTypes(.webSockets)
                .withHttpConnectionOptions(configureHttpOptions: { options in
                    // to authorize RealtimeBroker connections with an access token,
                    // the `skipNegotiation` parameter has to be set to `false`
                    options.accessTokenProvider = { accessToken }
                    options.skipNegotiation = false
                })
                .build()
            self.connection?.delegate = self

            self.connection?.start()
        }
    })
    .store(in: &cancellables)

and the disconnect method:

public func disconnect() {
    Logger.shared.log("Disconnecting from Realtime server with address: \(url?.absoluteString ?? "--")", logLevel: .info, file: "\(#file)", function: "\(#function)")
    connection?.stop()
}

The app uses an access token to secure the hub connection. Before I used the access token this error did not happen (back then I used the default .autoReconnect() method, but using it with a token caused too many failed reconnection attempts).

Scenario:

The app is in the foreground and has an AT (access token) to open the web socket. Then I push it to the background or lock the device, and the connection is closed with the .disconnect() method. When I unlock the device I get a report that the app had crashed, and in the crash log I see what is shown in the image below.

I also log "SignalR Error - 8".

[image: crash log screenshot]
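
For context, a minimal sketch of how the disconnect-on-background wiring could look; this part is not shown in the issue, and RealtimeService is a placeholder name for the class that owns the connect()/disconnect() methods above:

import UIKit

final class RealtimeLifecycleObserver {
    private var tokens: [NSObjectProtocol] = []

    init(service: RealtimeService) {
        let center = NotificationCenter.default
        tokens.append(center.addObserver(forName: UIApplication.didEnterBackgroundNotification,
                                         object: nil, queue: .main) { _ in
            service.disconnect() // ends up calling connection?.stop()
        })
        tokens.append(center.addObserver(forName: UIApplication.willEnterForegroundNotification,
                                         object: nil, queue: .main) { _ in
            service.connect() // refreshes the token and rebuilds the hub connection
        })
    }

    deinit {
        tokens.forEach { NotificationCenter.default.removeObserver($0) }
    }
}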

@moozzyk
Owner

moozzyk commented Oct 21, 2021

Thanks for reporting. It seems to come from the recently added keepAlive. I merged a change yesterday that should allow disabling keepAlive by setting the interval to nil. You may want to try this until I figure out what is happening.
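
A minimal sketch of what that looks like in the builder setup from the issue, assuming the property is keepAliveInterval on HttpConnectionOptions (the name used later in this thread) and that nil disables the pings:

self.connection = HubConnectionBuilder(url: url)
    .withLogging(minLogLevel: rtLogLevel, logger: Logger.shared)
    .withJSONHubProtocol()
    .withAutoReconnect(reconnectPolicy: RTReconnectPolicy(connect: { self.connect() }))
    .withPermittedTransportTypes(.webSockets)
    .withHttpConnectionOptions(configureHttpOptions: { options in
        options.accessTokenProvider = { accessToken }
        options.skipNegotiation = false
        options.keepAliveInterval = nil // assumption: nil turns the keepAlive pings off
    })
    .build()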

@moozzyk
Owner

moozzyk commented Oct 21, 2021

Any chance you could include logs?

@nysander
Author

nysander commented Oct 21, 2021

(edited)

I will check if disabling it helps and I will try to prepare some anonymized logs for you.

@moozzyk
Owner

moozzyk commented Oct 21, 2021

"this crash is from version 0.8.0"

This does not seem right. Support for keep alive was added less than 3 weeks ago in 5ddfab6 and has not been released yet. It seems like the changes are from master.

@nysander
Author

Yes, you are right, I thought I was using a tagged version, not master. Edited above.
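
For reference, the difference in Package.swift looks roughly like this (a fragment only; the tag to pin is whatever release you actually intend to use):

// Pinned to a released tag:
.package(url: "https://github.com/moozzyk/SignalR-Client-Swift", .exact("0.8.0")),

// vs. tracking master, which picks up unreleased changes such as the keepAlive commit:
// .package(url: "https://github.com/moozzyk/SignalR-Client-Swift", .branch("master")),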

@nysander
Author

Thank you for the quick fix. I will write back next week to let you know whether this resolved the crash, as it is not something I can trigger on demand. I have also set keepAliveInterval to nil.

@moozzyk
Owner

moozzyk commented Oct 21, 2021 via email

@nysander
Author

From my first tries it looks like the app is not killing itself in the background, so that gives hope ;)

I am now releasing a new TestFlight version, so I will have more users to try it out.

moozzyk added a commit that referenced this issue Oct 24, 2021
moozzyk added a commit that referenced this issue Oct 24, 2021
@nysander
Author

I think the issue is fixed now. I haven't gotten any new crash reports from SignalR so far. Thanks again for the quick fix.

@moozzyk
Owner

moozzyk commented Oct 30, 2021

Thanks for confirming!

@moozzyk
Owner

moozzyk commented Oct 30, 2021

I am reopening because the root cause has not been addressed.

@moozzyk moozzyk reopened this Oct 30, 2021
@KateXe
Contributor

KateXe commented Nov 2, 2021

We have the same issue, and my colleague found the bug: there is a double semaphore.wait(), one in the closure of resetKeepAlive and one in cleanupKeepAlive, so it ends up in a freeze/crash. I will try to fix it.
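
An illustrative reduction of that pattern, not the library code itself (resetKeepAlive/cleanupKeepAlive here are stand-ins): waiting twice on a binary semaphore without a signal in between blocks the second wait() forever, and on iOS a hang like that can then be killed by the watchdog, which shows up in crash reports.

import Foundation

let lock = DispatchSemaphore(value: 1)

// Stand-in for cleanupKeepAlive:
func cleanupKeepAlive() {
    lock.wait()      // second wait() on the already-taken semaphore: never returns
    // ...cancel the keepAlive timer, release resources...
    lock.signal()
}

// Stand-in for the closure run by resetKeepAlive:
func resetKeepAlive() {
    lock.wait()      // takes the semaphore (value drops from 1 to 0)
    cleanupKeepAlive()
    lock.signal()
}

resetKeepAlive()     // deadlocks on the nested wait()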

@cosmer-work

cosmer-work commented Dec 2, 2021

This also frequently happens when HttpConnection.stop(stopError:) is called.
[screenshot of the crash]

@moozzyk
Owner

moozzyk commented Dec 2, 2021

@cosmer-work this seems like a different issue. Can you create a new issue and provide more details, including logs at the debug level?

@moozzyk
Owner

moozzyk commented Apr 26, 2022

This should now be fixed with 48942fb and 4f3a9b2

@moozzyk moozzyk closed this as completed Apr 26, 2022