Extract shared call workflow runtime and restore media epoch #540

Merged
justinmoon merged 2 commits into master from prfneudq on Mar 9, 2026

Conversation

@justinmoon
Collaborator

@justinmoon justinmoon commented Mar 9, 2026

Summary

  • Extract shared call workflow runtime into pika-marmot-runtime crate
  • Restore call media epoch derivation

Test plan

  • CI passes

🤖 Generated with Claude Code



Summary by CodeRabbit

  • New Features

    • Introduced a unified call workflow runtime system that manages call signaling, validates authentication, and handles media cryptography derivation
    • Centralized processing of incoming call invitations, acceptances, rejections, and termination signals across the platform
  • Tests

    • Added comprehensive test coverage for call workflows including acceptance validation, policy enforcement, and media crypto derivation

@coderabbitai

coderabbitai bot commented Mar 9, 2026

📝 Walkthrough

Introduces a shared call workflow runtime that centralizes call signaling, authentication, and media cryptography derivation across the Pika system. The new CallWorkflowRuntime coordinates outgoing invites, incoming signal handling, and call acceptance/rejection with unified state management and validation.

Changes

  • Call Workflow Runtime — crates/pika-marmot-runtime/src/call_runtime.rs, crates/pika-marmot-runtime/src/lib.rs
    New 597-line runtime module with data models for call lifecycle states (PendingIncomingCall, PendingOutgoingCall, PreparedAcceptedCall, etc.), CallWorkflowRuntime core logic for preparing/handling call signals, media crypto derivation, authentication validation, and comprehensive test coverage.
  • Sidecar Integration — crates/pikachat-sidecar/src/daemon.rs
    Refactored call invitation/signal handling to delegate to the shared CallWorkflowRuntime; replaced direct crypto-context construction and signal building with runtime-managed prepare/handle flows; updated pending-call state tracking to use shared types.
  • Core Call Control — rust/src/core/call_control.rs, rust/src/core/call_runtime.rs
    Integrated CallWorkflowRuntime for call acceptance preparation; removed inline validation/crypto functions; updated import paths; escalated derive_relay_auth_token visibility; refactored control flow across the invite/accept/reject/end lifecycle paths.
  • Core Tests — rust/src/core/mod.rs
    Added a unit test verifying prepare_call_accept_for_chat uses the shared runtime and produces correctly validated call acceptance signals.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Poem

🐰 Hops of joy! The calls now flow,
Through one shared runtime's steady glow,
No more scattered crypto fears—
One unified path rings clear! 🎉

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning. Docstring coverage is 31.71%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)

  • Description Check — ✅ Passed. Check skipped: CodeRabbit’s high-level summary is enabled.
  • Title Check — ✅ Passed. The title directly and concisely captures the two main objectives: extracting a shared call workflow runtime and restoring media epoch derivation.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



@devin-ai-integration devin-ai-integration bot left a comment


Devin Review found 1 potential issue.

View 6 additional findings in Devin Review.


Comment on lines +656 to +662
    .map(|session| PendingOutgoingCall {
        call_id: active.call_id.clone(),
        target_id: active.chat_id.clone(),
        peer_pubkey_hex: peer_pubkey_hex.clone(),
        session: session.clone(),
        is_video_call: active.is_video_call,
    })


🔴 Peer identity check bypassed: PendingOutgoingCall.peer_pubkey_hex always equals the signal sender

In handle_incoming_call_signal, the PendingOutgoingCall is constructed at line 659 with peer_pubkey_hex: peer_pubkey_hex.clone(), where peer_pubkey_hex is derived from sender_pubkey.to_hex() at line 647 — the pubkey of whoever sent the current signal. The InboundSignalContext also sets sender_pubkey_hex: &peer_pubkey_hex at line 668. Inside CallWorkflowRuntime::handle_inbound_signal (crates/pika-marmot-runtime/src/call_runtime.rs:211), the check pending.peer_pubkey_hex != ctx.sender_pubkey_hex is therefore always false (they're the same value), making the sender identity verification a no-op.

In the old code, the peer pubkey was derived from active.peer_npub (the stored intended peer of the outgoing call), and the sender was validated against that stored value. This ensured only the intended peer could accept a call. Now any sender whose Accept signal has a matching call_id and target_id will pass. While the relay_auth cryptographic check provides some protection, the explicit sender identity gate is lost.

The daemon (daemon.rs:4076) correctly looks up pending_outgoing_call_invites.get(call_id) which stores the original peer pubkey from invite creation, so this bug is app-only.

Prompt for agents
In rust/src/core/call_control.rs, in the handle_incoming_call_signal method, lines 648-663, the PendingOutgoingCall is constructed with peer_pubkey_hex set from the incoming signal's sender_pubkey. Instead, it should use the stored peer pubkey from the active outgoing call (active.peer_npub), matching how the old code worked.

Change the pending_outgoing construction (around lines 648-663) to derive peer_pubkey_hex from the active call's peer_npub instead of from sender_pubkey:

    let pending_outgoing = self
        .state
        .active_call
        .as_ref()
        .filter(|active| matches!(active.status, CallStatus::Offering))
        .and_then(|active| {
            let stored_peer_hex = PublicKey::parse(&active.peer_npub)
                .ok()
                .map(|pk| pk.to_hex())?;
            self.call_session_params
                .as_ref()
                .map(|session| PendingOutgoingCall {
                    call_id: active.call_id.clone(),
                    target_id: active.chat_id.clone(),
                    peer_pubkey_hex: stored_peer_hex,
                    session: session.clone(),
                    is_video_call: active.is_video_call,
                })
        });

This restores the sender identity verification: handle_inbound_signal will compare pending.peer_pubkey_hex (from the stored peer) against ctx.sender_pubkey_hex (from the signal sender), correctly rejecting accept signals from unexpected senders.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
crates/pikachat-sidecar/src/daemon.rs (2)

3344-3399: ⚠️ Potential issue | 🟠 Major

Preserve pending invites until the reply is actually published.

Both branches remove the in-memory invite before the new runtime path prepares/publishes the outbound signal. If preparation or publish fails, the invite is gone and won't be reconstructed because the source group event is already in seen_group_events.

🐛 Suggested fix
- let Some(invite) = pending_call_invites.remove(&call_id) else {
+ let Some(invite) = pending_call_invites.get(&call_id).cloned() else {
      ...
  };

  match publish_call_payload(...).await {
      Ok(()) => {
+         pending_call_invites.remove(&call_id);
          ...
      }
      Err(e) => {
          ...
      }
  }

Apply the same removal-after-publish pattern to RejectCall.

Also applies to: 3497-3528
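The removal-after-publish pattern suggested above can be sketched as a self-contained example. `Invite` and the simulated `publish_call_payload` below are illustrative stand-ins, not the daemon's real types:

```rust
use std::collections::HashMap;

// Hypothetical stand-in for the daemon's pending-invite entry.
#[derive(Clone, Debug)]
struct Invite {
    call_id: String,
}

// Simulated publish: fails when `should_fail` is true.
fn publish_call_payload(should_fail: bool) -> Result<(), String> {
    if should_fail {
        Err("relay unreachable".to_string())
    } else {
        Ok(())
    }
}

// Remove-after-publish: clone the invite up front, but only drop it from the
// map once the publish succeeded, so a failed publish leaves it retryable.
fn accept_call(
    pending: &mut HashMap<String, Invite>,
    call_id: &str,
    publish_fails: bool,
) -> Result<(), String> {
    let invite = pending
        .get(call_id)
        .cloned()
        .ok_or_else(|| "unknown call_id".to_string())?;
    publish_call_payload(publish_fails)?;
    // Publish succeeded: now it is safe to forget the invite.
    pending.remove(&invite.call_id);
    Ok(())
}

fn main() {
    let mut pending = HashMap::new();
    pending.insert("c1".to_string(), Invite { call_id: "c1".to_string() });

    // A failed publish keeps the invite in place for a retry.
    assert!(accept_call(&mut pending, "c1", true).is_err());
    assert!(pending.contains_key("c1"));

    // A successful publish removes it.
    assert!(accept_call(&mut pending, "c1", false).is_ok());
    assert!(!pending.contains_key("c1"));
}
```

A failed publish leaves the invite in the map, so the accept (or reject) can be retried without needing to reconstruct state from seen_group_events.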

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/pikachat-sidecar/src/daemon.rs` around lines 3344 - 3399, The code
currently removes the invite from pending_call_invites before
preparing/publishing the outbound signal, so if preparation or publish fails the
invite is lost; change the flow to keep the invite in pending_call_invites until
publish_call_payload completes successfully: for the accept path
(CallWorkflowRuntime::prepare_accept_incoming -> publish_call_payload) and the
reject path (CallWorkflowRuntime::prepare_reject_signal ->
publish_call_payload), stop calling pending_call_invites.remove(&call_id)
up-front, only remove the invite after publish_call_payload returns Ok(()), and
on any Err from prepare_* or publish_call_payload send the appropriate out_error
via reply_tx without removing the invite so it remains reconstructible from
seen_group_events.

3387-3458: ⚠️ Potential issue | 🟠 Major

Tear the call down when local activation fails after the handshake.

AcceptCall publishes call.accept before start_*_worker succeeds. Later, the outbound-accepted path's active_call.is_some() early return, IncomingAcceptFailed, and worker-start failures only log or continue. At that point the peer already believes the handshake completed, so it can sit on a dead session indefinitely. Send a best-effort call.end/call.reject and clear pending state whenever local activation fails after the handshake.

Also applies to: 4134-4216
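The teardown-on-failure flow described above can be sketched as follows. `Daemon`, `CallState`, and the simulated publish/worker failure are hypothetical stand-ins for the sidecar's real state machine:

```rust
#[derive(Debug, PartialEq)]
enum CallState {
    Idle,
    Accepted,
}

struct Daemon {
    state: CallState,
    sent_signals: Vec<String>,
}

impl Daemon {
    fn publish(&mut self, signal: &str) {
        self.sent_signals.push(signal.to_string());
    }

    // Accept flow: publish call.accept first, then activate the media worker.
    // If activation fails after the handshake, send a best-effort call.end so
    // the peer does not sit on a dead session, and clear local state.
    fn accept_call(&mut self, worker_fails: bool) -> Result<(), String> {
        self.publish("call.accept");
        self.state = CallState::Accepted;
        if worker_fails {
            // Best-effort teardown: the peer already saw the accept.
            self.publish("call.end");
            self.state = CallState::Idle;
            return Err("worker start failed".to_string());
        }
        Ok(())
    }
}

fn main() {
    let mut d = Daemon { state: CallState::Idle, sent_signals: vec![] };
    assert!(d.accept_call(true).is_err());
    // Both signals went out, and local state was rolled back.
    assert_eq!(d.sent_signals, vec!["call.accept", "call.end"]);
    assert_eq!(d.state, CallState::Idle);

    let mut d2 = Daemon { state: CallState::Idle, sent_signals: vec![] };
    assert!(d2.accept_call(false).is_ok());
    assert_eq!(d2.state, CallState::Accepted);
}
```

The key property is that every exit path after the accept has been published either reaches a live call or emits a terminating signal; no path leaves the peer believing the handshake completed.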

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/pikachat-sidecar/src/daemon.rs` around lines 3387 - 3458, When any
start_*_worker call (start_echo_worker, start_stt_worker, start_data_worker)
fails after publish_call_payload succeeded, perform a best-effort teardown:
publish a call.end or call.reject for the same call id/session (reusing
publish_call_payload with prepared.incoming.call_id, prepared.incoming.session
and appropriate "call_end"/"call_reject" payload), clear any local
pending/active call state that tracks this handshake, and then send the existing
reply_tx out_error as before; apply the same change to the analogous
failure-handling block around start_*_worker calls in the other section
referenced (the 4134-4216 region) so that any local activation failure after
handshake always sends teardown and clears pending state.
🧹 Nitpick comments (4)
crates/pika-marmot-runtime/src/call_runtime.rs (1)

557-596: Add a caller-side variant for this accept-path test.

make_group() returns the invitee runtime, so this case does not exercise the real inviter-side local/peer key ordering used when an outgoing call is accepted. A sibling test that runs this path with the caller runtime would better protect the relay-auth/media-derivation contract.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/pika-marmot-runtime/src/call_runtime.rs` around lines 557 - 596, Add a
sibling test that exercises the caller/inviter side: call make_group() but
capture the inviter runtime (don't ignore the first return), then invoke
handle_inbound_signal on that inviter runtime (instead of the invitee `runtime`)
with an InboundSignalContext where the sender_pubkey_hex and
GroupCallContext.local_pubkey_hex reflect the inviter-side ordering (swap
inviter/invitee keys compared to the existing test), and pass the same
PendingOutgoingCall/call_id/session so the code path for an outgoing call being
accepted on the caller side is exercised; name the test e.g.
handle_inbound_signal_accepts_matching_pending_outgoing_from_caller and assert
the outcome is InboundCallSignalOutcome::OutgoingAccepted as in the existing
test.
crates/pikachat-sidecar/src/daemon.rs (1)

4802-4821: Make this a real call-flow regression test.

This only checks that the prepared JSON contains "call.invite" and uses a dummy group id, so it won't catch regressions in the restored media epoch/media-crypto derivation path. A round-trip test with a real group plus prepare_accept_incoming or handle_inbound_signal would protect the shared runtime extraction much better.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/pikachat-sidecar/src/daemon.rs` around lines 4802 - 4821, The test
should be converted into a real round-trip call-flow: instead of the dummy group
id "deadbeef", create a real group using the MDK opened with open_mdk (or the
appropriate MDK group-creation API), then call prepare_call_invite_for_daemon
with that real group and the session from default_audio_call_session; next
simulate the acceptor side by invoking prepare_accept_incoming or
handle_inbound_signal (using the same session/peer Keys::generate() as the
responder) to process the invite and produce the accept/restore state, and
finally assert that the restored media epoch/media-crypto values (media epoch,
derived keys or crypto material) on the acceptor match the inviter’s prepared
values rather than only checking for "call.invite" in the JSON. Ensure you
reference and exercise prepare_call_invite_for_daemon,
prepare_accept_incoming/handle_inbound_signal, open_mdk,
default_audio_call_session, and Keys::generate() to validate the full
media-derivation path.
rust/src/core/mod.rs (1)

8492-8498: Assert the restored media-epoch output explicitly.

This test currently proves the shared runtime returns an accept envelope, but it would still pass if the epoch/keying data restored by this PR regressed. Please deserialize prepared.signal.payload_json and assert the concrete accept fields instead of only checking for a "call.accept" substring.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@rust/src/core/mod.rs` around lines 8492 - 8498, The test currently only
checks that prepared.signal.payload_json contains the substring "call.accept";
instead deserialize prepared.signal.payload_json (e.g., parse JSON into the
call.accept struct used in your codebase) and assert the concrete accept fields
including the restored media epoch/keying data (fields on the accept payload
such as media_epoch, keying material, or whatever names your accept struct
uses). Locate the call to core.prepare_call_accept_for_chat and replace the
substring check with parsing prepared.signal.payload_json into the accept
payload type and explicit assert_eq! checks for the expected epoch and related
fields on prepared.incoming or the deserialized object.
rust/src/core/call_control.rs (1)

725-742: Consider extracting the match on failure.kind to reduce repetition.

The IncomingAcceptFailureKind is matched twice with the same variants. This could be simplified by matching once and extracting both the message and end reason.

♻️ Optional refactor to reduce repetition
             InboundCallSignalOutcome::IncomingAcceptFailed(failure) => {
-                self.toast(match failure.kind {
-                    pika_marmot_runtime::call_runtime::IncomingAcceptFailureKind::RelayAuth => {
-                        format!("Call relay auth verification failed: {}", failure.error)
-                    }
-                    pika_marmot_runtime::call_runtime::IncomingAcceptFailureKind::MediaCrypto => {
-                        format!("Call media key setup failed: {}", failure.error)
-                    }
-                });
-                self.end_call_local(match failure.kind {
-                    pika_marmot_runtime::call_runtime::IncomingAcceptFailureKind::RelayAuth => {
-                        CallEndReason::AuthFailed
-                    }
-                    pika_marmot_runtime::call_runtime::IncomingAcceptFailureKind::MediaCrypto => {
-                        CallEndReason::RuntimeError
-                    }
-                });
+                use pika_marmot_runtime::call_runtime::IncomingAcceptFailureKind;
+                let (msg, reason) = match failure.kind {
+                    IncomingAcceptFailureKind::RelayAuth => (
+                        format!("Call relay auth verification failed: {}", failure.error),
+                        CallEndReason::AuthFailed,
+                    ),
+                    IncomingAcceptFailureKind::MediaCrypto => (
+                        format!("Call media key setup failed: {}", failure.error),
+                        CallEndReason::RuntimeError,
+                    ),
+                };
+                self.toast(msg);
+                self.end_call_local(reason);
             }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@rust/src/core/call_control.rs` around lines 725 - 742, Extract the duplicated
match on failure.kind into a single match that returns both the toast message
and the CallEndReason, then call self.toast(...) and self.end_call_local(...)
once using those results; specifically, inside the
InboundCallSignalOutcome::IncomingAcceptFailed(failure) arm match failure.kind
to produce (message, reason) and pass message to self.toast and reason to
self.end_call_local (referencing failure.kind, self.toast, self.end_call_local,
and CallEndReason).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@crates/pika-marmot-runtime/src/call_runtime.rs`:
- Around line 249-255: The Reject/End branch currently treats any sender for a
known call_id as authoritative; update the handling in call_runtime.rs so that
before returning
InboundCallSignalOutcome::RemoteTermination(RemoteCallTermination { ... }) you
look up the pending/active call by call_id (the same place Accept checks the
peer), compare ctx.sender_pubkey_hex to the expected peer pubkey for that call,
and only convert to RemoteTermination when they match; if they do not match,
return an appropriate non-authoritative outcome (e.g., ignore/Unauthorized) and
log a warning—ensure you reference ParsedCallSignal::Reject,
ParsedCallSignal::End, RemoteCallTermination, and the ctx.sender_pubkey_hex
check in the same style as the Accept path.

In `@crates/pikachat-sidecar/src/daemon.rs`:
- Around line 4080-4096: The branch that handles call termination is accepting
Reject/End messages by call_id alone, allowing any member to terminate a call;
modify the flow so ActiveCall stores the expected peer pubkey (e.g., add/ensure
a field in ActiveCall like expected_peer_pubkey) and, in
CallWorkflowRuntime::new(...).handle_inbound_signal when constructing
InboundSignalContext/processing RemoteTermination (the paths handling
Reject/End), validate the sender against that stored expected_peer_pubkey (use
validate_auth or explicit sender_pubkey_hex comparison) before clearing
pending/active state by call_id; only allow removal/transition to
RemoteTermination if the sender matches the ActiveCall.expected_peer_pubkey.
- Around line 3359-3384: The code currently maps every
CallWorkflowRuntime::new(&mdk).prepare_accept_incoming(...) Err to "auth_failed"
which misreports failures from other causes (e.g., signal build or media-crypto
derivation); update the error handling in that match to inspect the concrete
error returned by prepare_accept_incoming (match on its error variants or use
its Display/Debug) and call prepare_reject_signal with an appropriate reject
reason (e.g., "auth_failed", "signal_build_failed", "media_crypto_failed")
instead of always "auth_failed", include the original error text in the
published reject payload and in the out_error(reply) sent via reply_tx, and
ensure you still await publish_call_payload and continue afterwards (use the
same CallWorkflowRuntime::prepare_reject_signal, publish_call_payload,
invite.call_id, out_error and reply_tx symbols).

---

Outside diff comments:
In `@crates/pikachat-sidecar/src/daemon.rs`:
- Around line 3344-3399: The code currently removes the invite from
pending_call_invites before preparing/publishing the outbound signal, so if
preparation or publish fails the invite is lost; change the flow to keep the
invite in pending_call_invites until publish_call_payload completes
successfully: for the accept path (CallWorkflowRuntime::prepare_accept_incoming
-> publish_call_payload) and the reject path
(CallWorkflowRuntime::prepare_reject_signal -> publish_call_payload), stop
calling pending_call_invites.remove(&call_id) up-front, only remove the invite
after publish_call_payload returns Ok(()), and on any Err from prepare_* or
publish_call_payload send the appropriate out_error via reply_tx without
removing the invite so it remains reconstructible from seen_group_events.
- Around line 3387-3458: When any start_*_worker call (start_echo_worker,
start_stt_worker, start_data_worker) fails after publish_call_payload succeeded,
perform a best-effort teardown: publish a call.end or call.reject for the same
call id/session (reusing publish_call_payload with prepared.incoming.call_id,
prepared.incoming.session and appropriate "call_end"/"call_reject" payload),
clear any local pending/active call state that tracks this handshake, and then
send the existing reply_tx out_error as before; apply the same change to the
analogous failure-handling block around start_*_worker calls in the other
section referenced (the 4134-4216 region) so that any local activation failure
after handshake always sends teardown and clears pending state.

---

Nitpick comments:
In `@crates/pika-marmot-runtime/src/call_runtime.rs`:
- Around line 557-596: Add a sibling test that exercises the caller/inviter
side: call make_group() but capture the inviter runtime (don't ignore the first
return), then invoke handle_inbound_signal on that inviter runtime (instead of
the invitee `runtime`) with an InboundSignalContext where the sender_pubkey_hex
and GroupCallContext.local_pubkey_hex reflect the inviter-side ordering (swap
inviter/invitee keys compared to the existing test), and pass the same
PendingOutgoingCall/call_id/session so the code path for an outgoing call being
accepted on the caller side is exercised; name the test e.g.
handle_inbound_signal_accepts_matching_pending_outgoing_from_caller and assert
the outcome is InboundCallSignalOutcome::OutgoingAccepted as in the existing
test.

In `@crates/pikachat-sidecar/src/daemon.rs`:
- Around line 4802-4821: The test should be converted into a real round-trip
call-flow: instead of the dummy group id "deadbeef", create a real group using
the MDK opened with open_mdk (or the appropriate MDK group-creation API), then
call prepare_call_invite_for_daemon with that real group and the session from
default_audio_call_session; next simulate the acceptor side by invoking
prepare_accept_incoming or handle_inbound_signal (using the same session/peer
Keys::generate() as the responder) to process the invite and produce the
accept/restore state, and finally assert that the restored media
epoch/media-crypto values (media epoch, derived keys or crypto material) on the
acceptor match the inviter’s prepared values rather than only checking for
"call.invite" in the JSON. Ensure you reference and exercise
prepare_call_invite_for_daemon, prepare_accept_incoming/handle_inbound_signal,
open_mdk, default_audio_call_session, and Keys::generate() to validate the full
media-derivation path.

In `@rust/src/core/call_control.rs`:
- Around line 725-742: Extract the duplicated match on failure.kind into a
single match that returns both the toast message and the CallEndReason, then
call self.toast(...) and self.end_call_local(...) once using those results;
specifically, inside the InboundCallSignalOutcome::IncomingAcceptFailed(failure)
arm match failure.kind to produce (message, reason) and pass message to
self.toast and reason to self.end_call_local (referencing failure.kind,
self.toast, self.end_call_local, and CallEndReason).

In `@rust/src/core/mod.rs`:
- Around line 8492-8498: The test currently only checks that
prepared.signal.payload_json contains the substring "call.accept"; instead
deserialize prepared.signal.payload_json (e.g., parse JSON into the call.accept
struct used in your codebase) and assert the concrete accept fields including
the restored media epoch/keying data (fields on the accept payload such as
media_epoch, keying material, or whatever names your accept struct uses). Locate
the call to core.prepare_call_accept_for_chat and replace the substring check
with parsing prepared.signal.payload_json into the accept payload type and
explicit assert_eq! checks for the expected epoch and related fields on
prepared.incoming or the deserialized object.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 2403fe33-258d-4d9b-a512-d440d3c778c7

📥 Commits

Reviewing files that changed from the base of the PR and between 1a027e7 and b4b4b8c.

📒 Files selected for processing (6)
  • crates/pika-marmot-runtime/src/call_runtime.rs
  • crates/pika-marmot-runtime/src/lib.rs
  • crates/pikachat-sidecar/src/daemon.rs
  • rust/src/core/call_control.rs
  • rust/src/core/call_runtime.rs
  • rust/src/core/mod.rs

Comment on lines +249 to +255
ParsedCallSignal::Reject { call_id, reason }
| ParsedCallSignal::End { call_id, reason } => {
InboundCallSignalOutcome::RemoteTermination(RemoteCallTermination {
call_id,
reason,
})
}


⚠️ Potential issue | 🟠 Major

Validate Reject/End against the expected peer before treating them as authoritative.

Unlike the Accept path, this branch trusts any sender for a known call_id. Unless every caller re-checks ctx.sender_pubkey_hex against the pending/active peer outside this runtime, another group member can tear down the call just by reusing the ID.

Possible shape of the fix
 pub struct InboundSignalContext<'a> {
     pub target_id: &'a str,
     pub sender_pubkey_hex: &'a str,
     pub group: GroupCallContext<'a>,
     pub policy: InboundCallPolicy,
     pub has_live_call: bool,
+    pub expected_peer_pubkey_hex: Option<&'a str>,
     pub pending_outgoing: Option<&'a PendingOutgoingCall>,
 }
@@
             ParsedCallSignal::Reject { call_id, reason }
             | ParsedCallSignal::End { call_id, reason } => {
+                if let Some(expected_peer) = ctx.expected_peer_pubkey_hex {
+                    if expected_peer != ctx.sender_pubkey_hex {
+                        return InboundCallSignalOutcome::Ignore;
+                    }
+                }
                 InboundCallSignalOutcome::RemoteTermination(RemoteCallTermination {
                     call_id,
                     reason,
                 })
             }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/pika-marmot-runtime/src/call_runtime.rs` around lines 249 - 255, The
Reject/End branch currently treats any sender for a known call_id as
authoritative; update the handling in call_runtime.rs so that before returning
InboundCallSignalOutcome::RemoteTermination(RemoteCallTermination { ... }) you
look up the pending/active call by call_id (the same place Accept checks the
peer), compare ctx.sender_pubkey_hex to the expected peer pubkey for that call,
and only convert to RemoteTermination when they match; if they do not match,
return an appropriate non-authoritative outcome (e.g., ignore/Unauthorized) and
log a warning—ensure you reference ParsedCallSignal::Reject,
ParsedCallSignal::End, RemoteCallTermination, and the ctx.sender_pubkey_hex
check in the same style as the Accept path.

Comment on lines +3359 to 3384
let prepared = match CallWorkflowRuntime::new(&mdk).prepare_accept_incoming(
&invite,
GroupCallContext {
mls_group_id: &mls_group_id,
local_pubkey_hex: &pubkey_hex,
},
) {
Ok(v) => v,
Err(err) => {
if let Ok(signal) = CallWorkflowRuntime::new(&mdk)
.prepare_reject_signal(&invite.call_id, "auth_failed")
{
let _ = publish_call_payload(
&client,
&relay_urls,
&mdk,
&keys,
&invite.target_id,
signal.payload_json,
"call_reject_auth_failed",
)
.await;
}
let _ = reply_tx.send(out_error(request_id, "auth_failed", err));
continue;
}


⚠️ Potential issue | 🟠 Major

Don't collapse every prepare_accept_incoming failure into auth_failed.

crates/pika-marmot-runtime/src/call_runtime.rs shows that prepare_accept_incoming can also fail while building the accept signal or deriving media crypto, not just during auth validation. Mapping every error here to auth_failed sends the peer the wrong reject reason and obscures non-auth regressions in the restored media-crypto path.
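One way to map failure classes to distinct reject reasons is sketched below. `PrepareAcceptError` and its variants are hypothetical; the runtime's real error type may be shaped differently:

```rust
// Hypothetical error classification; the real prepare_accept_incoming error
// variants live in pika-marmot-runtime and may differ.
#[derive(Debug)]
enum PrepareAcceptError {
    RelayAuth(String),
    SignalBuild(String),
    MediaCrypto(String),
}

// Map each failure class to its own reject reason so the peer (and the
// out_error reply) sees why the accept actually failed, instead of every
// error collapsing into "auth_failed".
fn reject_reason(err: &PrepareAcceptError) -> &'static str {
    match err {
        PrepareAcceptError::RelayAuth(_) => "auth_failed",
        PrepareAcceptError::SignalBuild(_) => "signal_build_failed",
        PrepareAcceptError::MediaCrypto(_) => "media_crypto_failed",
    }
}

fn main() {
    let err = PrepareAcceptError::MediaCrypto("hkdf expand failed".into());
    assert_eq!(reject_reason(&err), "media_crypto_failed");
    assert_eq!(
        reject_reason(&PrepareAcceptError::RelayAuth("bad token".into())),
        "auth_failed"
    );
}
```

The daemon would then pass `reject_reason(&err)` to prepare_reject_signal and include the original error text in the published payload, keeping media-crypto regressions distinguishable from genuine auth failures.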

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/pikachat-sidecar/src/daemon.rs` around lines 3359 - 3384, The code
currently maps every CallWorkflowRuntime::new(&mdk).prepare_accept_incoming(...)
Err to "auth_failed" which misreports failures from other causes (e.g., signal
build or media-crypto derivation); update the error handling in that match to
inspect the concrete error returned by prepare_accept_incoming (match on its
error variants or use its Display/Debug) and call prepare_reject_signal with an
appropriate reject reason (e.g., "auth_failed", "signal_build_failed",
"media_crypto_failed") instead of always "auth_failed", include the original
error text in the published reject payload and in the out_error(reply) sent via
reply_tx, and ensure you still await publish_call_payload and continue
afterwards (use the same CallWorkflowRuntime::prepare_reject_signal,
publish_call_payload, invite.call_id, out_error and reply_tx symbols).

Comment on lines +4080 to +4096
match CallWorkflowRuntime::new(&mdk).handle_inbound_signal(
pika_marmot_runtime::call_runtime::InboundSignalContext {
target_id: &nostr_group_id,
sender_pubkey_hex: &sender_hex,
group: GroupCallContext {
mls_group_id: &mls_group_id,
local_pubkey_hex: &pubkey_hex,
},
policy: InboundCallPolicy {
allow_group_calls: true,
allow_video_calls: false,
},
has_live_call: active_call.is_some(),
pending_outgoing,
},
signal,
) {


⚠️ Potential issue | 🔴 Critical

Validate call.reject / call.end against the expected peer.

In crates/pika-marmot-runtime/src/call_runtime.rs, Reject and End currently become RemoteTermination without validate_auth or sender matching. This branch then drops pending/active state by call_id alone. In a multi-member group, any allowed sender that learns the call_id can cancel or hang up someone else's call. Please carry the expected peer pubkey into ActiveCall and only honor termination from that sender.

Also applies to: 4217-4232

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/pikachat-sidecar/src/daemon.rs` around lines 4080 - 4096, the termination branch accepts Reject/End messages by call_id alone, so any member can terminate a call. Store the expected peer pubkey on ActiveCall (e.g., add an expected_peer_pubkey field). Then, in CallWorkflowRuntime::handle_inbound_signal, on the paths that produce RemoteTermination (Reject/End), validate the sender against that stored pubkey (via validate_auth or an explicit sender_pubkey_hex comparison) before clearing pending/active state by call_id. Only remove state or transition to RemoteTermination when the sender matches ActiveCall.expected_peer_pubkey.
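The suggested fix can be sketched as follows. This is a minimal illustration under assumed names: the `ActiveCall` struct shown here (with its `expected_peer_pubkey` field) and the `should_terminate` helper are hypothetical stand-ins, not the actual types in `pika-marmot-runtime`.

```rust
// Hypothetical sketch: ActiveCall carries the expected peer's pubkey, and a
// Reject/End signal is only honored when both the call_id and the sender
// match. Names are illustrative, not the crate's real API.
struct ActiveCall {
    call_id: String,
    expected_peer_pubkey: String,
}

fn should_terminate(active: &ActiveCall, call_id: &str, sender_pubkey_hex: &str) -> bool {
    active.call_id == call_id && active.expected_peer_pubkey == sender_pubkey_hex
}

fn main() {
    let call = ActiveCall {
        call_id: "call-1".into(),
        expected_peer_pubkey: "abc123".into(),
    };
    // Termination from the expected peer is honored...
    assert!(should_terminate(&call, "call-1", "abc123"));
    // ...but another group member who merely learned the call_id cannot
    // hang up or cancel someone else's call.
    assert!(!should_terminate(&call, "call-1", "def456"));
    println!("termination validated");
}
```

In the real runtime this check would sit on the paths that map Reject/End to RemoteTermination, before any pending or active state is dropped.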

@justinmoon justinmoon merged commit 0752edb into master Mar 9, 2026
18 checks passed
justinmoon added a commit that referenced this pull request Mar 11, 2026
* Extract shared call workflow runtime

* Restore call media epoch derivation
@justinmoon justinmoon deleted the prfneudq branch March 20, 2026 14:52
@justinmoon justinmoon restored the prfneudq branch March 20, 2026 21:19