
[Enhancement]Introduce Call caching and join state management #397

Closed
wants to merge 2 commits

Conversation

ipavlidakis
Collaborator

🔗 Issue Links

Provide all JIRA tickets and/or GitHub issues related to this PR, if applicable.

🎯 Goal

Describe why we are making this change.

📝 Summary

Provide bullet points with the most important changes in the codebase.

🛠 Implementation

Provide a detailed description of the implementation and explain your decisions if you find them relevant.

🎨 Showcase

Add relevant screenshots and/or videos/gifs to easily see what this PR changes, if applicable.

Before / After (placeholder images)

🧪 Manual Testing Notes

Explain how this change can be tested manually, if applicable.

☑️ Contributor Checklist

  • I have signed the Stream CLA (required)
  • This change follows zero ⚠️ policy (required)
  • This change should receive manual QA
  • Changelog is updated with client-facing changes
  • New code is covered by unit tests
  • Comparison screenshots added for visual changes
  • Affected documentation updated (Docusaurus, tutorial, CMS)

🎁 Meme

Provide a funny gif or image that relates to your work on this pull request. (Optional)


1 Message
📖 Skipping Danger since the Pull Request is classed as Draft/Work In Progress

Generated by 🚫 Danger

ipavlidakis changed the title [Enhancement]Introduce Call caching and join state managemenr → [Enhancement]Introduce Call caching and join state management (May 17, 2024)

martinmitrevski (Collaborator) left a comment

The state machine looks great, we definitely need this 👌 Would be nice to share more details on why we need the call caching. Added a few other smaller comments.
Let's test this one today and tomorrow.

switch currentStage.id {
case .joining:
    break
case .joined where currentStage is StreamCallStateMachine.Stage.JoinedStage:

why do we need to cache the responses? Does that mean we call join from both CallKit and the VM? (Wondering if throwing an error would be better in this case.)
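
For context on the question above, a minimal sketch of what caching the join response would mean when both CallKit and the view model call join. This is illustrative only; JoinCallResponse, performJoin() and the stage handling are stand-ins, not the SDK's actual implementation.

// Illustrative sketch only; the real Call / StreamCallStateMachine types differ.
final class JoiningCallSketch {
    enum Stage { case idle, joining, joined }
    struct JoinCallResponse {}

    private var stage: Stage = .idle
    private var cachedResponse: JoinCallResponse?

    /// If the call was already joined, return the cached response so a second
    /// caller (e.g. CallKit after the view model) does not trigger another
    /// network request. The alternative raised in this comment is to throw.
    func join() async throws -> JoinCallResponse {
        if stage == .joined, let cachedResponse {
            return cachedResponse
        }
        stage = .joining
        let response = try await performJoin() // stand-in for the actual join request
        cachedResponse = response
        stage = .joined
        return response
    }

    private func performJoin() async throws -> JoinCallResponse {
        JoinCallResponse()
    }
}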

)
}

let stage = try await stateMachine.publisher.nextValue(dropFirst: 1)

why do we need to drop the first one?
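
Presumably (an assumption about the helper's intent, not its actual source) the state machine publishes its stage through a value-replaying publisher, so awaiting the next value without dropping would resolve immediately with the stage we are already in; dropFirst: 1 skips that replayed value and resolves on the following transition. A minimal, non-throwing sketch of such a helper:

import Combine

extension Publisher where Failure == Never {
    /// Awaits a single value, optionally skipping the first `count` emissions.
    /// With a CurrentValueSubject, `dropFirst: 1` skips the replayed current
    /// value so the await resolves with the next state transition instead.
    /// (Sketch only: the SDK's nextValue(dropFirst:) is throwing and handles
    /// completion without a value; this version does not.)
    func nextValue(dropFirst count: Int = 0) async -> Output {
        await withCheckedContinuation { continuation in
            var cancellable: AnyCancellable?
            cancellable = self
                .dropFirst(count)
                .first()
                .sink { value in
                    continuation.resume(returning: value)
                    cancellable?.cancel()
                }
        }
    }
}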

@@ -23,15 +23,15 @@ import Foundation
/// additional overhead of dispatching tasks and managing thread execution in a DispatchQueue could result
/// in unnecessary latency, making `os_unfair_lock` the superior choice for scenarios where rapid, lightweight
/// synchronization is paramount.
-final class UnfairQueue {
+public final class UnfairQueue {

why do we need this one public? Would prefer to keep it internal.
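
For readers without the SDK at hand, an os_unfair_lock-based helper of the kind the doc comment describes looks roughly like the sketch below (reconstructed from that comment, not copied from the source), which is also why it reads more like an internal utility than public API surface:

import Foundation

// Sketch of an os_unfair_lock-backed synchronization helper; the SDK's
// real UnfairQueue may differ in shape and API.
final class UnfairQueueSketch {

    private let lock: UnsafeMutablePointer<os_unfair_lock>

    init() {
        lock = UnsafeMutablePointer<os_unfair_lock>.allocate(capacity: 1)
        lock.initialize(to: os_unfair_lock())
    }

    deinit {
        lock.deinitialize(count: 1)
        lock.deallocate()
    }

    /// Runs the block while holding the lock, mirroring DispatchQueue.sync
    /// semantics without the overhead of dispatching onto a queue.
    func sync<T>(_ block: () throws -> T) rethrows -> T {
        os_unfair_lock_lock(lock)
        defer { os_unfair_lock_unlock(lock) }
        return try block()
    }
}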


log
    .debug(
        "Call updated \(oldValue != nil ? Unmanaged.passUnretained(oldValue!).toOpaque().debugDescription : "nil") → \(call != nil ? Unmanaged.passUnretained(call!).toOpaque().debugDescription : "nil"). Check the StateMachine for updates."

Why do we need to use Unmanaged.passUnretained here?
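
For context on what that expression produces: Unmanaged.passUnretained(object).toOpaque() returns the instance's raw pointer without changing its retain count, so the log line shows whether the call property now points at a different object. A small standalone sketch (the Call class here is just a stand-in):

final class Call {}

// Logs the raw addresses of the old and new instances so a reader of the log
// can tell whether the `call` property now references a different object.
// passUnretained only borrows the reference; retain counts stay untouched.
func describe(_ value: Call?) -> String {
    guard let value else { return "nil" }
    return Unmanaged.passUnretained(value).toOpaque().debugDescription
}

let oldValue: Call? = Call()
let call: Call? = Call()
print("Call updated \(describe(oldValue)) → \(describe(call))")

An alternative with similar output and arguably clearer intent would be String(describing: ObjectIdentifier(value)).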
