
Fix media grid overflow, add blurhash placeholders and video sharing#429

Merged
futurepaul merged 9 commits into master from feat/media-grid-fix-blurhash
Mar 5, 2026

Conversation

@futurepaul (Collaborator) commented Mar 4, 2026

Summary

  • Grid overflow fix: Multi-image grid no longer pushes messages off screen. mediaGrid uses GeometryReader to constrain cell widths instead of passing .infinity.
  • 3-image layout: Changed from 2-top + 1-bottom to 1-tall-left + 2-stacked-right (iMessage/Telegram style).
  • Blurhash placeholders: Images show a blurhash-decoded placeholder during upload and while loading, instead of a blank rectangle. Blurhash is computed in Rust (4×3 components) and rendered via an inline Swift decoder (no external dependency).
  • Video in share extension: Info.plist now allows up to 4 videos. ShareViewController extracts video providers alongside images for batch payloads.
  • Multi-image E2E test: Bot can now respond to media_batch:<count> by generating, encrypting, uploading test images to blossom, and sending them as a multi-media message. New testE2E_multiImageGrid iOS UI test verifies the grid renders.
  • Per-test filtering: Added PIKA_IOS_E2E_TEST_METHOD env var to run a single iOS E2E test (like Android's PIKA_ANDROID_E2E_TEST_CLASS).
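The `media_batch:<count>` trigger described above follows a simple `prefix:<count>` shape. A minimal sketch of how such a trigger might be parsed; the function name and the 0/32 bounds are illustrative assumptions, not the actual cli/src/harness.rs code:

```rust
/// Parse a "media_batch:<count>" trigger into an image count.
/// Hypothetical helper; the real harness code may differ.
fn parse_media_batch_trigger(message: &str) -> Option<usize> {
    let count = message
        .strip_prefix("media_batch:")?
        .trim()
        .parse::<usize>()
        .ok()?;
    // Reject nonsensical sizes; 32 mirrors the batch cap mentioned elsewhere in this PR.
    if count == 0 || count > 32 {
        return None;
    }
    Some(count)
}

fn main() {
    assert_eq!(parse_media_batch_trigger("media_batch:3"), Some(3));
    assert_eq!(parse_media_batch_trigger("media_batch:0"), None);
    assert_eq!(parse_media_batch_trigger("hello"), None);
    println!("ok");
}
```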

Closes portions of #410.

E2E test video

Multi-image grid rendering (bot sends 3 encrypted images, app decrypts and renders grid):

https://us-east.nostr.pikachat.org/afefc821ba8c7e63e9fff6640aadd9ff4e7051aea94fd38bb9991ae8b37a643e.mp4

Changes

| File | What |
| --- | --- |
| ios/Sources/Views/MessageBubbleViews.swift | Grid overflow fix, 3-image layout, blurhash placeholder, accessibility ID |
| ios/Sources/Views/BlurhashView.swift | New: inline blurhash decoder + SwiftUI view |
| rust/src/state.rs | Add blurhash: Option<String> to ChatMediaAttachment |
| rust/src/core/chat_media.rs | Generate blurhash in send_chat_media and send_chat_media_batch |
| rust/Cargo.toml | Add blurhash crate dependency |
| ios/ShareExtension/Info.plist | Video max count 1→4 |
| ios/ShareExtension/ShareViewController.swift | Video extraction, multi-chat, avatar display |
| cli/src/harness.rs | media_batch:<count> bot trigger + blossom upload |
| fixtures/test-images/ | 8 solid-color test JPEGs |
| ios/UITests/PikaUITests.swift | testE2E_multiImageGrid test |
| crates/pikahut/.../deterministic.rs | Register new test, PIKA_IOS_E2E_TEST_METHOD support |
| docs/how-to-run-e2e-ui-tests-with-video.md | Document iOS per-test filtering |

Test plan

  • Share 1 image → single full-width image in bubble, no overflow
  • Share 2 images → side-by-side, each takes half width
  • Share 3 images → 1 tall left + 2 stacked right layout
  • Share 4+ images → 2×2 grid with +N overlay
  • All grid layouts stay within the bubble width
  • During upload, blurhash placeholder shown before image loads
  • Share sheet works when selecting up to 4 videos
  • cargo test -p pika_core --lib passes (222 tests)
  • PIKA_IOS_E2E_TEST_METHOD=testE2E_multiImageGrid just ios-ui-e2e-local passes with video
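The per-test filter in the last item is just an environment-variable gate on test selection. A sketch of how a runner might honor it; the helper name and selection shape are assumptions, not the actual pikahut code:

```rust
use std::env;

/// Decide whether a discovered iOS test method should run, honoring an
/// optional PIKA_IOS_E2E_TEST_METHOD filter. Hypothetical sketch.
fn should_run(method: &str, filter: Option<&str>) -> bool {
    match filter {
        Some(wanted) => method == wanted,
        None => true, // no filter set: run every test
    }
}

fn main() {
    // The filter would normally come from the environment:
    let filter = env::var("PIKA_IOS_E2E_TEST_METHOD").ok();
    let tests = ["testE2E_multiImageGrid", "testE2E_sendText"];
    let selected: Vec<&str> = tests
        .iter()
        .copied()
        .filter(|t| should_run(t, filter.as_deref()))
        .collect();
    println!("{}", selected.len());
}
```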

🤖 Generated with Claude Code

Summary by CodeRabbit

  • New Features

    • Batch media staging & sending — select, preview, remove, and send multiple images/videos (single or batch) with captions; send to multiple recipients.
    • Responsive media grid with blurhash placeholders and improved video handling; dynamic send button and multi-chat selection in share flows.
    • iOS share limits increased (images up to 20, movies up to 4).
  • Tests

    • Added end-to-end UI and CLI tests for multi-image/batch flows.
  • Documentation

    • Updated E2E test guidance for video and multi-image scenarios.

@coderabbitai bot commented Mar 4, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Adds end-to-end media-batch support: new FFI/share models and converters, Android/iOS multi-select staging and send, Rust core batch upload/retry with blurhash, sidecar/CLI batch commands/tests, plist/test updates, and UI/grid/blurhash rendering changes.

Changes

| Cohort | File(s) | Summary |
| --- | --- | --- |
| FFI & Share Models | android/app/src/main/java/com/pika/app/rust/pika_core.kt, android/app/src/main/java/com/pika/app/rust/share/pika_share.kt, crates/pika-share/src/lib.rs | Add MediaBatchItem, ShareMediaBatchEntry/ShareMediaBatchDispatchItem, sequence & optional converters; extend AppAction/ShareDispatchKind and payload enums to support media-batch. |
| Android App Queue & Wiring | android/app/src/main/java/com/pika/app/AppManager.kt, android/app/src/main/java/com/pika/app/ui/screens/ChatScreen.kt | Handle ShareDispatchKind.MediaBatch in pending-share processing; enqueue requests updated to include mediaBatch = null; add staging/multi-document picker and dispatch AppAction.SendChatMediaBatch. |
| iOS Share Extension & AppManager | ios/ShareExtension/Info.plist, ios/ShareExtension/ShareViewController.swift, ios/Sources/AppManager.swift, ios/Sources/ContentView.swift | Increase image/video share limits, multi-chat selection, new images payload case, enqueue per-chat mediaBatch, and dispatch/sendChatMediaBatch from app wiring and ContentView. |
| iOS Chat UI & Input | ios/Sources/Views/ChatInputBar.swift, ios/Sources/Views/ChatView.swift, ios/Sources/Views/MessageBubbleViews.swift | Switch to multi-photo selection and stagedMedia bindings, add staging thumbnails/removal, new onSendMediaBatch hook, responsive media grid, and blurhash-based placeholders. Public ChatView/ChatInputBar APIs updated. |
| Blurhash & Preview | ios/Sources/Views/BlurhashView.swift, ios/Sources/PreviewData.swift, rust/Cargo.toml, rust/src/state.rs | Add blurhash dependency and self-contained BlurhashView/decoder; add blurhash field to ChatMediaAttachment and include nil values in previews. |
| Rust Core Batch Logic | rust/src/core/chat_media.rs, rust/src/core/mod.rs, rust/src/actions.rs | Add send_chat_media_batch, internal PendingMediaBatchSend/BatchMediaItem tracking, per-item preprocessing (blurhash/hash), sequential encrypt+upload flow, handle_batch_upload_completed, retry/resume, and AppAction variant for SendChatMediaBatch. |
| Sidecar & CLI Test Harness | crates/pikachat-sidecar/src/daemon.rs, cli/src/harness.rs | Add InCmd::SendMediaBatch handling (validate, encrypt, upload, publish) and CLI harness helpers to assemble and send test media batches; add deserialization tests. |
| Share Queue & pika-share crate | crates/pika-share/src/lib.rs | Enqueue/normalize/store media_batch entries, validate and assemble MediaBatch dispatch with per-item base64 payloads, update cleanup/ack paths, and tests for batch enqueue/dequeue. |
| Tests, E2E & Docs | ios/UITests/PikaUITests.swift, ios/Sources/TestIds.swift, crates/pikahut/src/testing/..., docs/how-to-run-e2e-ui-tests-with-video.md | Add iOS UI test for multi-image grid (duplicate added), new test id, configurable iOS test selection, and docs for E2E video/testing. |

Sequence Diagram(s)

sequenceDiagram
    participant User as User
    participant UI as Chat UI (iOS/Android)
    participant AppMgr as AppManager (Kotlin/Swift)
    participant Core as AppCore (Rust)
    participant MDK as MDK (crypto)
    participant Blossom as Blossom Server
    participant Relay as Nostr Relay

    User->>UI: Select multiple media + caption
    UI->>UI: Stage thumbnails/list
    User->>UI: Tap Send
    UI->>AppMgr: sendChatMediaBatch(chatId, items[], caption)
    AppMgr->>Core: AppAction::SendChatMediaBatch
    Core->>Core: Validate items (1..32), compute blurhashes/hashes
    loop per item
      Core->>MDK: encrypt item
      MDK-->>Core: encrypted_blob + descriptor_hash
      Core->>Blossom: upload encrypted_blob
      Blossom-->>Core: uploaded_url
    end
    Core->>Relay: publish single event with imeta tags
    Relay-->>Core: event_id
    Core->>AppMgr: update outbox/state
    AppMgr->>UI: refresh UI (grid shows uploads)
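The loop in the diagram can be condensed into a small sketch: validate the 1..32 bound, then encrypt and upload each item in order before the single publish. The function and closures below are illustrative stand-ins, not the actual pika_core API:

```rust
/// Sketch of the batch send flow: validate 1..=32 items, then encrypt and
/// upload each item sequentially; the caller publishes one event afterwards.
fn send_batch<E, U>(items: &[Vec<u8>], mut encrypt: E, mut upload: U) -> Result<Vec<String>, String>
where
    E: FnMut(&[u8]) -> Vec<u8>,
    U: FnMut(&[u8]) -> String,
{
    if items.is_empty() || items.len() > 32 {
        return Err(format!("batch size {} outside 1..=32", items.len()));
    }
    let mut urls = Vec::with_capacity(items.len());
    for item in items {
        let blob = encrypt(item); // stands in for the MDK encrypt step
        urls.push(upload(&blob)); // stands in for the Blossom upload step
    }
    Ok(urls) // caller publishes one event carrying all urls as imeta tags
}

fn main() {
    let items = vec![vec![1u8], vec![2], vec![3]];
    let urls = send_batch(&items, |b| b.to_vec(), |b| format!("https://blossom.example/{}", b[0]))
        .unwrap();
    assert_eq!(urls.len(), 3);
    assert!(send_batch(&[], |b: &[u8]| b.to_vec(), |_| String::new()).is_err());
    println!("ok");
}
```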

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes


Poem

🐰 I nibbled thumbnails, stitched them in a row,

Blurhash whispers where tiny pixels glow,
Staged and bundled, off to Blossom they flew,
One message now carries many — hip-hop hooroo!
Cheers from a rabbit as your gallery grew.

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 24.58%, below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |
✅ Passed checks (2 passed)
| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped - CodeRabbit’s high-level summary is enabled. |
| Title check | ✅ Passed | The PR title clearly and concisely describes the three main changes: media grid overflow fix, blurhash placeholder addition, and video sharing support. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.

devin-ai-integration[bot]

This comment was marked as resolved.

@coderabbitai bot left a comment

Actionable comments posted: 20

🧹 Nitpick comments (4)
android/app/src/main/java/com/pika/app/ui/screens/ChatScreen.kt (1)

877-931: Increase remove-button touch target in staging chips.

Line 918 sets a 20.dp touch target, which is hard to hit and poor for accessibility. Prefer a ~40–48dp hit target with a smaller inner icon.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@android/app/src/main/java/com/pika/app/ui/screens/ChatScreen.kt` around lines
877 - 931, The remove-icon touch target is too small: update the IconButton used
inside the stagedMedia LazyRow (the IconButton with onClick = { stagedMedia =
stagedMedia.filter { it.id != item.id } }) to use a larger hit target (around
40–48.dp) by increasing its modifier size and padding, and keep the inner Icon
(Icons.Default.Close) smaller (e.g., 16–20.dp) so visual size stays compact
while tappable area increases; ensure the Modifier.align(Alignment.TopEnd)
remains and adjust any .size(...) calls on the IconButton and Icon accordingly
to preserve layout and appearance.
ios/Sources/Views/ChatInputBar.swift (1)

214-215: Make preview @State properties private to satisfy lint and keep preview scope tight.

🧹 Suggested fix
-    @State var selectedPhotoItems: [PhotosPickerItem] = []
-    @State var stagedMedia: [StagedMediaItem] = []
+    @State private var selectedPhotoItems: [PhotosPickerItem] = []
+    @State private var stagedMedia: [StagedMediaItem] = []
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ios/Sources/Views/ChatInputBar.swift` around lines 214 - 215, The two preview
state properties in ChatInputBar — selectedPhotoItems and stagedMedia — are
currently public `@State` vars; make them private to satisfy lint rules and
tighten their preview scope by changing their declarations to private `@State` var
selectedPhotoItems: [PhotosPickerItem] = [] and private `@State` var stagedMedia:
[StagedMediaItem] = [] (update any local references in the ChatInputBar preview
to use the now-private properties).
ios/Sources/Views/BlurhashView.swift (1)

9-21: Cache decoded blurhash images to avoid repeated decode work.

Decoding in body will run repeatedly during scrolling/reconciliation. Caching by hash+size will reduce jank and CPU usage.

♻️ Proposed caching approach
+private enum BlurhashImageCache {
+    static let cache = NSCache<NSString, UIImage>()
+
+    static func image(for hash: String, decodeSize: CGSize) -> UIImage? {
+        let key = "\(hash)#\(Int(decodeSize.width))x\(Int(decodeSize.height))" as NSString
+        if let cached = cache.object(forKey: key) { return cached }
+        guard let decoded = UIImage(blurHash: hash, size: decodeSize) else { return nil }
+        cache.setObject(decoded, forKey: key)
+        return decoded
+    }
+}
+
 struct BlurhashView: View {
@@
     var body: some View {
-        if let image = UIImage(blurHash: hash, size: CGSize(width: 32, height: 32)) {
+        if let image = BlurhashImageCache.image(for: hash, decodeSize: CGSize(width: 32, height: 32)) {
             Image(uiImage: image)
                 .resizable()
                 .scaledToFill()

Also applies to: 77-153

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ios/Sources/Views/BlurhashView.swift` around lines 9 - 21, The BlurhashView
currently decodes the blurhash inside its body (when calling
UIImage(blurHash:hash,size:)), causing repeated work; add a shared in-memory
cache (e.g., a static NSCache or dictionary on BlurhashView or a dedicated
BlurhashImageCache type) keyed by the combined hash+size string to store decoded
UIImages, lookup the cache in BlurhashView.body before decoding, and only decode
and insert into the cache when a miss occurs; ensure the cache is thread-safe
and uses weak/eviction semantics to avoid memory bloat and reuse the same cache
for other places where blurhash decoding occurs.
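The caching idea is language-agnostic. A Rust sketch of the same memoization, with a plain decode counter standing in for the real blurhash decoder (the Swift version would use NSCache as in the diff above):

```rust
use std::collections::HashMap;

/// Memoize decoded placeholders by (hash, width, height) so repeated renders
/// reuse earlier work instead of re-decoding. Illustrative sketch only.
struct PlaceholderCache {
    entries: HashMap<(String, u32, u32), Vec<u8>>,
    decodes: usize, // counts actual decode work, for demonstration
}

impl PlaceholderCache {
    fn new() -> Self {
        Self { entries: HashMap::new(), decodes: 0 }
    }

    fn get_or_decode(&mut self, hash: &str, w: u32, h: u32) -> &Vec<u8> {
        let key = (hash.to_string(), w, h);
        if !self.entries.contains_key(&key) {
            self.decodes += 1;
            // Stand-in for a real blurhash decode producing RGBA pixels:
            let pixels = vec![0u8; (w * h * 4) as usize];
            self.entries.insert(key.clone(), pixels);
        }
        self.entries.get(&key).unwrap()
    }
}

fn main() {
    let mut cache = PlaceholderCache::new();
    cache.get_or_decode("LEHV6nWB2yk8", 32, 32);
    cache.get_or_decode("LEHV6nWB2yk8", 32, 32); // cache hit: no second decode
    assert_eq!(cache.decodes, 1);
    println!("ok");
}
```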
rust/src/core/mod.rs (1)

7271-7294: These two tests currently validate login gating, not batch-size validation.

batch_rejects_empty_items and batch_rejects_over_32_items_when_logged_in_would_reject both assert "Please log in first", so they won’t catch regressions in empty-item/max-item validation. Consider using a logged-in fixture and asserting the actual batch validation outcomes.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@rust/src/core/mod.rs` around lines 7271 - 7294, The two tests
batch_rejects_empty_items and
batch_rejects_over_32_items_when_logged_in_would_reject are currently asserting
the login error rather than batch-size validation; update each test to
authenticate the core before calling send_chat_media_batch (e.g., set a valid
session on the Core instance created by make_core or call the existing login
helper if one exists), then call send_chat_media_batch on make_core and assert
the specific batch validation toast/message you expect (for empty items assert
the "Empty items" or equivalent validation message, for >32 assert the max-item
validation message), referencing send_chat_media_batch and core.state.toast to
verify the correct validation path is exercised.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@android/app/src/main/java/com/pika/app/AppManager.kt`:
- Around line 447-454: The file uses MediaBatchItem inside the ShareDispatchKind
handling (in AppManager.kt within the when branch for is
ShareDispatchKind.MediaBatch) but lacks the import; add an explicit import for
com.pika.app.rust.pika_core.MediaBatchItem at the top of AppManager.kt so the
MediaBatchItem type used in the mapping of kind.items is resolved.

In `@android/app/src/main/java/com/pika/app/rust/pika_core.kt`:
- Around line 2776-2782: The MediaBatchItem data class currently carries full
base64-encoded payloads causing large memory amplification during
SendChatMediaBatch; change the shape and FFI usage so MediaBatchItem (and any
SendChatMediaBatch input) carries file references (URI, filePath, or
fileDescriptor) and metadata (mimeType, filename) instead of base64 strings, and
update the Rust/FFI boundary to read/stream binary data on demand (or implement
chunked streaming) so files are not all loaded/encoded in memory at once; update
any functions that construct or consume MediaBatchItem to use the new reference
fields and perform safe streaming/chunking on the native side.

In `@android/app/src/main/java/com/pika/app/ui/screens/ChatScreen.kt`:
- Around line 181-185: StagedMedia currently embeds full media bytes via
MediaUploadPayload which can OOM; change StagedMedia to hold only a lightweight
reference (e.g., content Uri or temporary file path and metadata like
mimeType/size and optional thumbnailBitmap) instead of payload.bytes, update any
constructors/usages that build StagedMedia to pass the Uri/path and metadata,
and defer reading the actual bytes inside the upload/send path (e.g., where you
call send/upload — the function that consumes StagedMedia for dispatch) so you
load the file into memory only at send time and convert to MediaUploadPayload
there.
- Around line 392-397: The current handler sends a single-picked file
immediately via sendMediaFromUri even if there are existing staged items,
splitting batches; change the logic so that when uris.size == 1 you first check
the staging state (e.g., existing staged media list or an isStaging flag used by
your UI) and if staging is non-empty call stageMediaFromUris(uris) to append to
the batch, otherwise call sendMediaFromUri(uris.first()); keep the existing
branch that calls stageMediaFromUris when multiple uris are returned.
- Around line 332-349: The current stageMediaFromUris function only limits
uris.take(32) per invocation which allows stagedMedia to grow beyond 32 across
multiple picks; change it to compute remaining = 32 - stagedMedia.size (return
early if remaining <= 0) and then use uris.take(remaining) so you only stage up
to the overall 32-item limit; update references in stageMediaFromUris
(stagedMedia and uris.take(32)) and ensure the concatenation stagedMedia =
stagedMedia + items cannot exceed the 32-item cap.
- Around line 351-381: sendStagedMedia is doing Base64.encodeToString on the
main thread for both single-file and batch paths which can freeze the UI; move
the CPU work off the main dispatcher by wrapping the Base64 encoding in
withContext(Dispatchers.Default) (or Dispatchers.IO) so the mapping/encoding
runs on a background thread before calling manager.dispatch. Specifically, in
sendStagedMedia, perform the single-file Base64.encodeToString(p.bytes, ...)
inside withContext and likewise build batchItems with
withContext(Dispatchers.Default) { stagedMedia.map { ...
Base64.encodeToString(...) ... } } then use the resulting data to call
AppAction.SendChatMedia or AppAction.SendChatMediaBatch.
- Around line 1339-1418: The build fails because Modifier.matchParentSize() is
used in MediaGrid but the import for matchParentSize is missing; add the import
androidx.compose.foundation.layout.matchParentSize to the top of the file (where
other compose layout imports are) so the call in the Box (the overlay that shows
"+N") resolves correctly and the file compiles.

In `@crates/pika-share/src/lib.rs`:
- Around line 626-648: The ImageBatch branch (SharePayloadKind::ImageBatch)
currently only validates presence and safety of relative_path but allows
unbounded batch size and entries with empty mime_type or filename, risking
oversized jobs and bad metadata; update the validation in the ImageBatch
handling to (1) enforce a maximum batch length (e.g., max_batch_entries
constant) and return ShareError::InvalidRequest when exceeded, (2) for each
entry check that entry.mime_type and entry.filename are non-empty (trimmed) and
return InvalidRequest if missing, and (3) optionally enforce per-entry size
limits if an entry.size or similar field exists; make these checks alongside the
existing relative_path and is_safe_relative_path validations in the same code
block handling SharePayloadKind::ImageBatch.
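The validation shape the comment asks for is straightforward; a sketch of the checks, where the constant name, entry struct, and error type are assumptions rather than the actual crates/pika-share definitions:

```rust
/// Assumed cap; crates/pika-share would define its own constant.
const MAX_BATCH_ENTRIES: usize = 32;

struct Entry {
    mime_type: String,
    filename: String,
}

/// Reject oversized batches and entries with empty metadata, alongside the
/// existing relative_path safety checks (not repeated here).
fn validate_batch(entries: &[Entry]) -> Result<(), String> {
    if entries.is_empty() || entries.len() > MAX_BATCH_ENTRIES {
        return Err("invalid batch size".into());
    }
    for e in entries {
        if e.mime_type.trim().is_empty() || e.filename.trim().is_empty() {
            return Err("entry missing mime_type or filename".into());
        }
    }
    Ok(())
}

fn main() {
    let good = vec![Entry { mime_type: "image/jpeg".into(), filename: "a.jpg".into() }];
    assert!(validate_batch(&good).is_ok());
    let bad = vec![Entry { mime_type: "".into(), filename: "a.jpg".into() }];
    assert!(validate_batch(&bad).is_err());
    println!("ok");
}
```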

In `@crates/pikachat-sidecar/src/daemon.rs`:
- Around line 3157-3165: The loop currently uses blocking std::fs::read(path)
which will block Tokio worker threads; replace it with asynchronous file I/O
(e.g., tokio::fs::read(path).await or opening tokio::fs::File and using
read_to_end) so reads do not block the executor. Update the enclosing
function/closure to be async if necessary or, if making it async is not
possible, offload the blocking read to tokio::task::spawn_blocking; keep the
existing error handling that sends reply_tx with out_error and request_id but
await the async read and map errors to the same reply path.

In `@ios/ShareExtension/ShareViewController.swift`:
- Around line 532-610: The loop over selectedChatIds calling
ShareQueueManager.enqueue can leave some chats enqueued and others failed,
causing partial delivery; update the send flow in ShareViewController to collect
per-chat results (e.g., a success/failure map or arrays of succeededChatIds and
failedChatIds) while continuing to attempt enqueue for every chat instead of
returning early, then surface a partial-success outcome (return/emit both
succeeded and failed IDs or throw an error containing failed IDs) so callers of
ShareEnqueueRequest/ShareQueueManager.enqueue can handle retries/dedupe safely;
adjust the code paths that currently return false to return/report the
aggregated result and ensure ShareQueueManager.enqueue and any higher-level
handler can accept/propagate the partial result.
- Around line 755-772: The code inside loadVideo(from:) calls Data(contentsOf:)
which loads the entire video into memory; instead copy/move the temporary file
URL provided by provider.loadFileRepresentation to a file-backed location (e.g.
an App Group tmp directory) using FileManager
(FileManager.copyItem/replaceItemAt), and return the file URL/filename+mimetype
rather than the Data blob (or change the function signature to return URL
instead of Data). Update the continuation path that currently uses
Data(contentsOf:) to perform a safe file copy, handle errors from FileManager,
and resume the withCheckedThrowingContinuation with the file-backed result so
the extension never reads the full movie into RAM.
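The file-backed handoff recommended above amounts to a streamed copy with bounded memory instead of loading the whole movie at once. A Rust sketch of the same pattern; paths and the function name are illustrative:

```rust
use std::fs::File;
use std::io;

/// Stream a provider's temp file to a shared-container destination.
/// io::copy uses a fixed-size internal buffer, so memory stays bounded
/// regardless of the video's size.
fn copy_to_shared_container(src: &str, dst: &str) -> io::Result<u64> {
    let mut reader = File::open(src)?;
    let mut writer = File::create(dst)?;
    io::copy(&mut reader, &mut writer)
}

fn main() -> io::Result<()> {
    let dir = std::env::temp_dir();
    let src = dir.join("pika_example_src.bin");
    let dst = dir.join("pika_example_dst.bin");
    std::fs::write(&src, vec![7u8; 1024])?;
    let copied = copy_to_shared_container(src.to_str().unwrap(), dst.to_str().unwrap())?;
    assert_eq!(copied, 1024);
    println!("ok");
    Ok(())
}
```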

In `@ios/Sources/AppManager.swift`:
- Around line 546-571: In the .mediaBatch case in AppManager.swift, validate the
items array before calling dispatch(.sendChatMediaBatch) and before sending
ShareQueueManager.acknowledge; specifically check items.count > 0 && items.count
<= 32 to match Rust; if the check fails, do not dispatch or update
firstOpenedChatId, and call ShareQueueManager.acknowledge with a rejection
status (use ShareDispatchAck with status .rejectedByCore and appropriate
errorCode/errorMessage) so the item isn’t marked accepted prematurely, otherwise
proceed with the existing dispatch and .acceptedByCore acknowledge behavior.

In `@ios/Sources/Views/ChatInputBar.swift`:
- Around line 157-161: The remove-media button uses only an icon (systemName
"xmark.circle.fill") and lacks a semantic label; update the Button in
ChatInputBar.swift (the Button that calls onRemove) to include an explicit
accessibility label (e.g., "Remove attachment" or "Remove media") by adding an
accessibilityLabel to the Button or the Image so VoiceOver announces its
purpose; keep the existing onRemove handler and visual styling but attach
.accessibilityLabel(...) (or .accessibilityLabel(Text(...))) to the Button/Image
to provide clear semantics.

In `@ios/Sources/Views/ChatView.swift`:
- Around line 506-520: When multiple items are staged but onSendMediaBatch is
nil, the current code clears stagedMedia and related state without sending
anything; change the logic in the staged-media branch so you only clear
stagedMedia, messageText, insertedMentions, and replyDraftMessage after a
successful send. Specifically, in the block that checks stagedMedia.count, if
onSendMediaBatch is non-nil call it as before; otherwise, if onSendMedia is
available, iterate stagedMedia and call onSendMedia(chatId, item.data,
item.mimeType, item.filename, caption) for each item; if neither callback
exists, do not mutate stagedMedia or the other state and instead bail out
(return) so items aren’t silently dropped. Ensure you reference stagedMedia,
onSendMediaBatch, onSendMedia, chatId, messageText, insertedMentions, and
replyDraftMessage when applying this change.

In `@ios/Sources/Views/MessageBubbleViews.swift`:
- Around line 534-536: The timestamp visibility logic uses hasFileAttachment
(which checks only .file) but rendering groups non-visual attachments via
fileMedia (message.media.filter { $0.kind != .image && $0.kind != .video }),
causing mismatches; update the timestamp condition(s) to use the same non-visual
grouping (i.e., check fileMedia.isEmpty / !fileMedia.isEmpty or introduce a
hasNonVisualAttachment computed from fileMedia) wherever hasFileAttachment is
currently used (including the other occurrence around lines 565-567) so the
timestamp decision matches the rendered grouping.
- Around line 638-643: The overlay displaying "+\(remaining)" is intercepting
taps and blocking the underlying media tap handler; wrap the overlay content
(the Color and Text) in a container (e.g., ZStack) and disable hit testing on
that container by calling .allowsHitTesting(false) so touches pass through to
the underlying media view; update the overlay definition in MessageBubbleViews
(the overlay that shows "+\(remaining)") to apply .allowsHitTesting(false) to
the overlay view.

In `@rust/src/core/chat_media.rs`:
- Around line 756-759: After write_media_file(&local_path, &media_data)
succeeds, ensure any subsequent early-return failure paths (e.g., encryption
failure and the outbox-state removal branch) remove the just-created cache file
to avoid orphaned files; specifically, call std::fs::remove_file(&local_path)
(or use a small RAII guard that deletes local_path on drop unless explicitly
disarmed) in the error branches that currently only return after toast() so the
cache file is cleaned up, and log or ignore removal errors.
- Around line 1187-1192: The early returns when uploaded_url or
descriptor_sha256_hex are None leave the batch stuck pending; in
handle_batch_upload_completed replace those bare returns with the same
failure-handling used in the existing error/hash-mismatch branches (i.e., record
the upload failure for the batch, clear any in-progress state/progress markers,
and notify/unblock the UI). Locate the checks for uploaded_url and
descriptor_sha256_hex and invoke the same code-path or helper used by the error
and hash-mismatch branches so the batch is marked failed and progress cleared
instead of returning early.
- Around line 931-949: The code uses
self.pending_media_batch_sends.values().last() to pick the newly inserted
PendingMediaBatchSend which is unsafe; instead fetch the inserted batch by its
key (batch_id) using self.pending_media_batch_sends.get(&batch_id) or the Entry
API (e.g., let batch =
self.pending_media_batch_sends.get(&batch_id).expect(...)) and then use
batch.items[0] as the first upload item; keep the same fields (caption,
temp_rumor_id, account_pubkey, next_upload_index) and ensure you clone or borrow
values appropriately when spawning the upload so you do not rely on map
iteration order.
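The hazard flagged above is that HashMap iteration order is unspecified, so `.values().last()` returns an arbitrary batch. A minimal demonstration of the keyed lookup the comment recommends; the types are stand-ins for PendingMediaBatchSend:

```rust
use std::collections::HashMap;

/// Fetch the first item of a batch by its key; deterministic regardless of
/// the map's internal ordering.
fn first_item(pending: &HashMap<u64, Vec<String>>, batch_id: u64) -> Option<&String> {
    pending.get(&batch_id).and_then(|items| items.first())
}

fn main() {
    let mut pending: HashMap<u64, Vec<String>> = HashMap::new();
    pending.insert(1, vec!["old.jpg".into()]);
    let batch_id = 2u64;
    pending.insert(batch_id, vec!["a.jpg".into(), "b.jpg".into()]);

    // Keyed lookup finds the batch we just inserted, whatever order the
    // map would iterate in:
    assert_eq!(first_item(&pending, batch_id).map(String::as_str), Some("a.jpg"));
    println!("ok");
}
```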

In `@rust/src/core/mod.rs`:
- Around line 4865-4890: The closure that builds the tuple for the
pending_media_batch_sends match uses position(...).unwrap_or(0), which treats
"no un-uploaded items" as index 0 and can cause incorrect re-uploads and panics;
instead, change the mapping logic around
pending_media_batch_sends.iter().find(...).map(|(k,b)| { ... }) so that you
check the position result explicitly (match or if let Some(idx) = ...), and
return None from the map when position is None, letting the outer if let
Some((batch_id, ...)) fail; apply the same explicit None handling to the
equivalent block at the other occurrence (lines ~4916-4938) that uses
position(...).unwrap_or(0).
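The `unwrap_or(0)` hazard described above comes down to Option handling: "no un-uploaded item" must stay `None`, not collapse to index 0. A minimal sketch:

```rust
/// Find the next item still awaiting upload. Returns None when everything
/// is uploaded, so a finished batch is skipped rather than re-uploading
/// item 0 (which is what position(...).unwrap_or(0) would do).
fn next_upload_index(uploaded: &[bool]) -> Option<usize> {
    uploaded.iter().position(|done| !done)
}

fn main() {
    assert_eq!(next_upload_index(&[true, false, false]), Some(1));
    // A fully uploaded batch yields None; unwrap_or(0) would wrongly yield 0:
    assert_eq!(next_upload_index(&[true, true]), None);
    println!("ok");
}
```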

---


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: b3a5fcc9-f0ea-4586-89ec-8a2c36cadaec

📥 Commits

Reviewing files that changed from the base of the PR and between e171da8 and 8ff9296.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (20)
  • android/app/src/main/java/com/pika/app/AppManager.kt
  • android/app/src/main/java/com/pika/app/rust/pika_core.kt
  • android/app/src/main/java/com/pika/app/rust/share/pika_share.kt
  • android/app/src/main/java/com/pika/app/ui/screens/ChatScreen.kt
  • crates/pika-share/src/lib.rs
  • crates/pikachat-sidecar/src/daemon.rs
  • ios/ShareExtension/Info.plist
  • ios/ShareExtension/ShareViewController.swift
  • ios/Sources/AppManager.swift
  • ios/Sources/ContentView.swift
  • ios/Sources/PreviewData.swift
  • ios/Sources/Views/BlurhashView.swift
  • ios/Sources/Views/ChatInputBar.swift
  • ios/Sources/Views/ChatView.swift
  • ios/Sources/Views/MessageBubbleViews.swift
  • rust/Cargo.toml
  • rust/src/actions.rs
  • rust/src/core/chat_media.rs
  • rust/src/core/mod.rs
  • rust/src/state.rs

Comment on lines +2776 to +2782
data class MediaBatchItem (
    var `dataBase64`: kotlin.String
    ,
    var `mimeType`: kotlin.String
    ,
    var `filename`: kotlin.String


@coderabbitai coderabbitai bot Mar 4, 2026


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, let's examine the file and the specific lines mentioned
wc -l android/app/src/main/java/com/pika/app/rust/pika_core.kt

Repository: sledtools/pika

Length of output: 119


🏁 Script executed:

# Check the lines 2776-2782 and surrounding context
sed -n '2770,2790p' android/app/src/main/java/com/pika/app/rust/pika_core.kt

Repository: sledtools/pika

Length of output: 317


🏁 Script executed:

# Check the other mentioned lines 3337-3340
sed -n '3330,3350p' android/app/src/main/java/com/pika/app/rust/pika_core.kt

Repository: sledtools/pika

Length of output: 479


🏁 Script executed:

# Search for SendChatMediaBatch definition and usage
rg -n 'SendChatMediaBatch|class MediaBatchItem|dataBase64' android/app/src/main/java/com/pika/app/rust/pika_core.kt | head -30

Repository: sledtools/pika

Length of output: 663


🏁 Script executed:

# Look for base64 encoding logic and max selection counts
rg -n 'maxSelectionCount|base64|Base64|encodeToString' android/app/src/main/java/com/pika/app/rust/pika_core.kt | head -40

Repository: sledtools/pika

Length of output: 829


🏁 Script executed:

# Check for any streaming or chunking mechanisms already in place
rg -n 'chunk|stream|InputStream|OutputStream|ByteArray' android/app/src/main/java/com/pika/app/rust/pika_core.kt | head -30

Repository: sledtools/pika

Length of output: 869


🏁 Script executed:

# Search for where MediaBatchItem is created/populated
rg -n 'MediaBatchItem(' android/app/src/main/java/com/pika/app/rust/pika_core.kt | head -20

Repository: sledtools/pika

Length of output: 138


🏁 Script executed:

# Search for SendChatMediaBatch creation
rg -n 'SendChatMediaBatch(' android/app/src/main/java/com/pika/app/rust/pika_core.kt | head -20

Repository: sledtools/pika

Length of output: 146


🏁 Script executed:

# Look for selection limits in the codebase more broadly
fd -e kt -e swift -e java | xargs rg -l 'maxSelectionCount|max.*media|media.*limit|batch.*limit' 2>/dev/null | head -20

Repository: sledtools/pika

Length of output: 158


🏁 Script executed:

# Check for where base64 encoding happens (encode operations)
rg -n 'encode|Base64' android/app/src/main/java -l | head -20

Repository: sledtools/pika

Length of output: 546


🏁 Script executed:

# Look for video/media handling to understand typical payload sizes
fd -e kt -e swift -e java android/app | xargs rg -l 'video|media|batch' 2>/dev/null | head -20

Repository: sledtools/pika

Length of output: 950


🏁 Script executed:

# Fix the regex searches with proper escaping
rg -n 'MediaBatchItem\s*\(' android/app/src/main/java/com/pika/app/rust/pika_core.kt

Repository: sledtools/pika

Length of output: 126


🏁 Script executed:

# Look at the ChatScreen where media selection happens
rg -n 'maxSelectionCount|media.*limit|batch.*count' android/app/src/main/java/com/pika/app/ui/screens/ChatScreen.kt | head -30

Repository: sledtools/pika

Length of output: 40


🏁 Script executed:

# Check the iOS side for selection limits
rg -n 'maxSelectionCount|media.*limit|batch.*count' ios/Sources/Views/ChatInputBar.swift | head -30

Repository: sledtools/pika

Length of output: 107


🏁 Script executed:

# Check how media is being processed before sending - look at pika_share.kt
cat -n android/app/src/main/java/com/pika/app/rust/share/pika_share.kt | head -100

Repository: sledtools/pika

Length of output: 4456


🏁 Script executed:

# Look for where base64 encoding is happening in pika_share.kt
rg -n 'toBase64|Base64|encodeToString|readBytes' android/app/src/main/java/com/pika/app/rust/share/pika_share.kt | head -20

Repository: sledtools/pika

Length of output: 452


🏁 Script executed:

# Check AppManager for SendChatMediaBatch invocation
rg -B5 -A5 'SendChatMediaBatch' android/app/src/main/java/com/pika/app/AppManager.kt | head -50

Repository: sledtools/pika

Length of output: 502


🏁 Script executed:

# Look for where the base64 encoding happens - likely before creating MediaBatchItem
rg -B10 'dataBase64\s*=' android/app/src/main/java/com/pika/app/ | head -80

Repository: sledtools/pika

Length of output: 6780


🏁 Script executed:

# Check for any utility functions that handle media conversion
fd -e kt android/app/src/main/java/com/pika/app | xargs rg -l 'readBytes|toBase64|toByteArray' 2>/dev/null | head -10

Repository: sledtools/pika

Length of output: 665


🏁 Script executed:

# Look at the full context around the SendChatMediaBatch call in AppManager
rg -B30 -A10 'SendChatMediaBatch' android/app/src/main/java/com/pika/app/AppManager.kt | head -100

Repository: sledtools/pika

Length of output: 1662


🏁 Script executed:

# Check if there's any file size validation or chunking logic
rg -n 'size|length|chunk|file.*size|max.*size' android/app/src/main/java/com/pika/app/AppManager.kt | head -30

Repository: sledtools/pika

Length of output: 169


🏁 Script executed:

# Look for the actual media selection and preparation code
rg -B5 -A15 'MediaBatchItem(' android/app/src/main/java/com/pika/app/AppManager.kt

Repository: sledtools/pika

Length of output: 138


🏁 Script executed:

# Look for file size limits in ChatScreen
rg -n 'length\|size\|limit' android/app/src/main/java/com/pika/app/ui/screens/ChatScreen.kt | grep -i 'media\|file\|byte\|limit' | head -20

Repository: sledtools/pika

Length of output: 40


🏁 Script executed:

# Check readMediaUploadPayload to understand what payload.bytes contains
rg -B5 -A20 'fun readMediaUploadPayload' android/app/src/main/java/com/pika/app/ui/screens/ChatScreen.kt

Repository: sledtools/pika

Length of output: 1119


🏁 Script executed:

# Check if there's any size validation before dispatch
rg -B10 -A5 'sendStagedMedia|SendChatMediaBatch' android/app/src/main/java/com/pika/app/ui/screens/ChatScreen.kt | head -60

Repository: sledtools/pika

Length of output: 1839


🏁 Script executed:

# Look for any evidence of streaming or chunking alternatives already implemented
rg -n 'sendVideoFrame|ByteArray|stream' android/app/src/main/java/com/pika/app/ui/screens/ChatScreen.kt

Repository: sledtools/pika

Length of output: 218


🏁 Script executed:

# Check the actual payload size in real scenario - look at video frame handling
rg -B5 -A10 'sendVideoFrame' android/app/src/main/java/com/pika/app/AppManager.kt | head -40

Repository: sledtools/pika

Length of output: 40


🏁 Script executed:

# Check for any file size validation before base64 encoding
rg -B20 'Base64.encodeToString' android/app/src/main/java/com/pika/app/ui/screens/ChatScreen.kt | head -80

Repository: sledtools/pika

Length of output: 3475


🏁 Script executed:

# Look for any memory/size limits or error handling around media preparation
rg -n 'maxFileSize|maxSize|size.*limit|catch.*OutOfMemory' android/app/src/main/java/com/pika/app/ui/screens/ChatScreen.kt

Repository: sledtools/pika

Length of output: 40


🏁 Script executed:

# Check if there's any documentation on expected payload sizes
fd -e md -e txt | xargs rg -l 'media.*size\|payload.*size\|batch.*limit' 2>/dev/null | head -10

Repository: sledtools/pika

Length of output: 40


🏁 Script executed:

# Check the share path in AppManager more closely for comparison
sed -n '1260,1320p' android/app/src/main/java/com/pika/app/rust/share/pika_share.kt

Repository: sledtools/pika

Length of output: 1452


🏁 Script executed:

# Look for any FFI constants that might indicate max payload sizes
rg -n 'MAX\|LIMIT\|SIZE' android/app/src/main/java/com/pika/app/rust/pika_core.kt | grep -i 'media\|batch\|payload' | head -20

Repository: sledtools/pika

Length of output: 40


🏁 Script executed:

# Check if there's any max file size validation in media selection
rg -B5 -A10 'stagedMedia.size\|maxSelectionCount' android/app/src/main/java/com/pika/app/ui/screens/ChatScreen.kt | head -40

Repository: sledtools/pika

Length of output: 40


🏁 Script executed:

# Look at how videos are handled vs images - check file reading
rg -n 'video\|Video\|mp4\|MP4' android/app/src/main/java/com/pika/app/ui/screens/ChatScreen.kt | head -20

Repository: sledtools/pika

Length of output: 40


🏁 Script executed:

# Check the Rust side to see if it does any streaming/chunking
rg -n 'SendChatMediaBatch\|chunks\|stream' rust/src/ -l 2>/dev/null | head -10

Repository: sledtools/pika

Length of output: 40


🏁 Script executed:

# Look for any base64 decoder on Rust side that might chunk
rg -B5 -A10 'SendChatMediaBatch' rust/src/lib.rs 2>/dev/null | head -40

Repository: sledtools/pika

Length of output: 40


🏁 Script executed:

# Examine memory calculations - how large could a batch actually be
python3 << 'EOF'
# Calculate potential memory usage
max_items = 32
# Estimate file sizes
scenarios = [
    ("images only (avg 2MB)", 2 * 1024 * 1024),  # 2MB average
    ("mixed media (avg 5MB)", 5 * 1024 * 1024),  # 5MB average
    ("videos (avg 20MB)", 20 * 1024 * 1024),     # 20MB average
]

for name, file_size in scenarios:
    raw_total = max_items * file_size
    base64_overhead = raw_total * 1.33  # base64 is ~33% larger
    print(f"{name}:")
    print(f"  Raw total: {raw_total / (1024*1024):.1f} MB")
    print(f"  With base64: {base64_overhead / (1024*1024):.1f} MB")
    print(f"  Per-item base64 size: {(file_size * 1.33) / (1024*1024):.2f} MB")
    print()
EOF

Repository: sledtools/pika

Length of output: 358


🏁 Script executed:

# Search more broadly for selection count limits in Android
rg -n 'maxSelectionCount\|max.*selection\|32' android/app/src/main/java/com/pika/app/ui/screens/ChatScreen.kt | head -20

Repository: sledtools/pika

Length of output: 40


🏁 Script executed:

# Check how PhotoPicker or media picker is configured on Android
rg -n 'ActivityResultContracts\|PickMultipleVisualMedia\|maxItems' android/app/src/main/java/com/pika/app/ui/screens/ChatScreen.kt | head -20

Repository: sledtools/pika

Length of output: 40


🏁 Script executed:

# Look for any documentation or comments about batch limits
rg -B3 -A3 'SendChatMediaBatch\|MediaBatchItem' android/app/src/main/java/com/pika/app/ui/screens/ChatScreen.kt | head -40

Repository: sledtools/pika

Length of output: 40


🏁 Script executed:

# Verify if the same encoding pattern exists elsewhere (single media path)
rg -n 'AppAction.SendChatMedia[^B]' android/app/src/main/java/com/pika/app/ui/screens/ChatScreen.kt | wc -l

Repository: sledtools/pika

Length of output: 59


🏁 Script executed:

# Check if Rust-side actually handles base64 or if it expects raw bytes/references
rg -n 'dataBase64\|base64' rust/src/ 2>/dev/null | head -20

Repository: sledtools/pika

Length of output: 40


🏁 Script executed:

# Search more thoroughly for file size limits in media picker setup
rg -n 'picker\|selector\|size\|length' android/app/src/main/java/com/pika/app/ui/screens/ChatScreen.kt | grep -v 'lastIndex\|isEmpty\|reversed' | head -30

Repository: sledtools/pika

Length of output: 40


🏁 Script executed:

# Check what happens with the bytes after encoding - is there any cleanup?
rg -B10 -A10 'SendChatMediaBatch\(' android/app/src/main/java/com/pika/app/ui/screens/ChatScreen.kt

Repository: sledtools/pika

Length of output: 860


🏁 Script executed:

# Look at the Rust FFI to understand how the base64 is handled
sed -n '3330,3360p' android/app/src/main/java/com/pika/app/rust/pika_core.kt

Repository: sledtools/pika

Length of output: 667


🏁 Script executed:

# Check if there's any streaming/chunking alternative already used elsewhere
rg -n 'ByteArray\|chunks\|stream' rust/src/core/chat_media_db.rs 2>/dev/null | head -20

Repository: sledtools/pika

Length of output: 40


🏁 Script executed:

# Look at the entire media upload flow in ChatScreen
sed -n '1,50p' android/app/src/main/java/com/pika/app/ui/screens/ChatScreen.kt | grep -n 'import\|package'

Repository: sledtools/pika

Length of output: 2642


🏁 Script executed:

# Verify if there's actually a potential OOM risk by checking heap constraints
rg -n 'android:largeHeap\|heapsize\|memory' android/app/src/main/AndroidManifest.xml 2>/dev/null

Repository: sledtools/pika

Length of output: 40


Base64 batching causes significant and avoidable memory overhead.

Encoding every file to a base64 string inflates it by roughly 33%, and that overhead multiplies across batch items. At the 32-item selection limit with video files, a single SendChatMediaBatch action can easily exceed 500 MB, risking GC churn or OOM on lower-spec devices.

Use file references (URIs, file paths, or file descriptors) across the FFI boundary instead. If a binary payload is unavoidable, stream or chunk the data rather than loading and encoding everything into memory at once.
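The ~33% figure is base64's 4-output-bytes-per-3-input-bytes expansion; peak heap is worse still, because the raw bytes and the encoded string coexist during encoding. A minimal Rust sketch of the arithmetic (illustrative only, not code from this repo):

```rust
/// Length of the padded base64 encoding of `n` raw bytes:
/// each 3-byte input group becomes 4 output characters.
fn base64_len(n: usize) -> usize {
    (n + 2) / 3 * 4
}

fn main() {
    let raw = 20 * 1024 * 1024; // one ~20 MB video
    let encoded = base64_len(raw);
    // While encoding, the raw buffer and the encoded string coexist,
    // so peak heap for a single item is roughly raw + encoded (~2.33x raw).
    let peak = raw + encoded;
    println!("raw={raw} encoded={encoded} peak={peak}");
    assert!(encoded as f64 / raw as f64 > 1.333);
}
```

Multiply that per-item peak across a 32-item batch and the 500 MB estimate above is easy to reach.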

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@android/app/src/main/java/com/pika/app/rust/pika_core.kt` around lines 2776 -
2782, The MediaBatchItem data class currently carries full base64-encoded
payloads causing large memory amplification during SendChatMediaBatch; change
the shape and FFI usage so MediaBatchItem (and any SendChatMediaBatch input)
carries file references (URI, filePath, or fileDescriptor) and metadata
(mimeType, filename) instead of base64 strings, and update the Rust/FFI boundary
to read/stream binary data on demand (or implement chunked streaming) so files
are not all loaded/encoded in memory at once; update any functions that
construct or consume MediaBatchItem to use the new reference fields and perform
safe streaming/chunking on the native side.

Collaborator Author


This is auto-generated UniFFI binding code — the shape mirrors the Rust MediaBatchItem struct definition. Changing how data crosses the FFI boundary (e.g., switching to file paths instead of inline bytes) would be a Rust API-level architectural change, not something actionable in the generated Kotlin.

— claude


@futurepaul, that's a fair point — the file header makes it clear this is autogenerated UniFFI binding code and the MediaBatchItem shape is dictated by the Rust struct in rust/src/actions.rs. The memory concern would need to be addressed upstream at the Rust API level (e.g., changing MediaBatchItem to carry a file path instead of inline data_base64), not in the generated Kotlin. Thanks for the clarification.


✏️ Learnings added
Learnt from: futurepaul
Repo: sledtools/pika PR: 429
File: android/app/src/main/java/com/pika/app/rust/pika_core.kt:2776-2782
Timestamp: 2026-03-04T23:23:15.687Z
Learning: In the sledtools/pika repository, `android/app/src/main/java/com/pika/app/rust/pika_core.kt` is auto-generated UniFFI binding code that mirrors Rust struct/enum definitions. It should not be reviewed directly for API shape or FFI design issues — those concerns should be raised against the Rust source files (e.g., `rust/src/actions.rs`, `rust/src/state.rs`) instead.

Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.

Comment on lines +181 to +185
private data class StagedMedia(
    val id: String = java.util.UUID.randomUUID().toString(),
    val payload: MediaUploadPayload,
    val thumbnailBitmap: android.graphics.Bitmap? = null,
)

@coderabbitai coderabbitai bot Mar 4, 2026


⚠️ Potential issue | 🟠 Major

Avoid storing full media bytes in Compose state.

StagedMedia currently holds payload.bytes; with large files and multi-select this can spike heap usage and trigger OOM. Keep lightweight references (e.g., Uri/temp file path + metadata) in state, and load bytes only when dispatching send.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@android/app/src/main/java/com/pika/app/ui/screens/ChatScreen.kt` around lines
181 - 185, StagedMedia currently embeds full media bytes via MediaUploadPayload
which can OOM; change StagedMedia to hold only a lightweight reference (e.g.,
content Uri or temporary file path and metadata like mimeType/size and optional
thumbnailBitmap) instead of payload.bytes, update any constructors/usages that
build StagedMedia to pass the Uri/path and metadata, and defer reading the
actual bytes inside the upload/send path (e.g., where you call send/upload — the
function that consumes StagedMedia for dispatch) so you load the file into
memory only at send time and convert to MediaUploadPayload there.

Collaborator Author


Valid concern. Holding raw ByteArray + Bitmap in Compose state for multiple large files is a real OOM risk. The correct pattern is to keep Uri references in state and read bytes lazily at send time. Tracked for a follow-up fix.

— claude


@futurepaul, thanks for confirming — that's a solid plan. Since this is deferred to a follow-up, would you like me to open a GitHub issue to formally track the StagedMedia OOM fix (replacing ByteArray in state with Uri + lazy byte loading at send time)? That way it won't get lost.

Comment on lines +332 to +349
fun stageMediaFromUris(uris: List<Uri>) {
    coroutineScope.launch {
        val items = withContext(Dispatchers.IO) {
            uris.take(32).mapNotNull { uri ->
                val payload = readMediaUploadPayload(ctx, uri) ?: return@mapNotNull null
                val thumbnail = if (payload.mimeType.startsWith("image/")) {
                    try {
                        android.graphics.BitmapFactory.decodeByteArray(payload.bytes, 0, payload.bytes.size)?.let { bmp ->
                            android.graphics.Bitmap.createScaledBitmap(bmp, 128, 128, true)
                        }
                    } catch (_: Exception) { null }
                } else null
                StagedMedia(payload = payload, thumbnailBitmap = thumbnail)
            }
        }
        stagedMedia = stagedMedia + items
    }
}

@coderabbitai coderabbitai bot Mar 4, 2026


⚠️ Potential issue | 🟠 Major

Enforce the 32-item limit against total staged count, not per pick.

uris.take(32) caps only the current pick. Repeating picks can exceed backend/UI limits.

Suggested fix
 fun stageMediaFromUris(uris: List<Uri>) {
     coroutineScope.launch {
+        val remainingSlots = (32 - stagedMedia.size).coerceAtLeast(0)
+        if (remainingSlots == 0) {
+            Toast.makeText(ctx, "You can attach up to 32 items", Toast.LENGTH_SHORT).show()
+            return@launch
+        }
         val items = withContext(Dispatchers.IO) {
-            uris.take(32).mapNotNull { uri ->
+            uris.take(remainingSlots).mapNotNull { uri ->
                 val payload = readMediaUploadPayload(ctx, uri) ?: return@mapNotNull null
                 val thumbnail = if (payload.mimeType.startsWith("image/")) {
                     try {
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
fun stageMediaFromUris(uris: List<Uri>) {
    coroutineScope.launch {
        val items = withContext(Dispatchers.IO) {
            uris.take(32).mapNotNull { uri ->
                val payload = readMediaUploadPayload(ctx, uri) ?: return@mapNotNull null
                val thumbnail = if (payload.mimeType.startsWith("image/")) {
                    try {
                        android.graphics.BitmapFactory.decodeByteArray(payload.bytes, 0, payload.bytes.size)?.let { bmp ->
                            android.graphics.Bitmap.createScaledBitmap(bmp, 128, 128, true)
                        }
                    } catch (_: Exception) { null }
                } else null
                StagedMedia(payload = payload, thumbnailBitmap = thumbnail)
            }
        }
        stagedMedia = stagedMedia + items
    }
}
fun stageMediaFromUris(uris: List<Uri>) {
    coroutineScope.launch {
        val remainingSlots = (32 - stagedMedia.size).coerceAtLeast(0)
        if (remainingSlots == 0) {
            Toast.makeText(ctx, "You can attach up to 32 items", Toast.LENGTH_SHORT).show()
            return@launch
        }
        val items = withContext(Dispatchers.IO) {
            uris.take(remainingSlots).mapNotNull { uri ->
                val payload = readMediaUploadPayload(ctx, uri) ?: return@mapNotNull null
                val thumbnail = if (payload.mimeType.startsWith("image/")) {
                    try {
                        android.graphics.BitmapFactory.decodeByteArray(payload.bytes, 0, payload.bytes.size)?.let { bmp ->
                            android.graphics.Bitmap.createScaledBitmap(bmp, 128, 128, true)
                        }
                    } catch (_: Exception) { null }
                } else null
                StagedMedia(payload = payload, thumbnailBitmap = thumbnail)
            }
        }
        stagedMedia = stagedMedia + items
    }
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@android/app/src/main/java/com/pika/app/ui/screens/ChatScreen.kt` around lines
332 - 349, The current stageMediaFromUris function only limits uris.take(32) per
invocation which allows stagedMedia to grow beyond 32 across multiple picks;
change it to compute remaining = 32 - stagedMedia.size (return early if
remaining <= 0) and then use uris.take(remaining) so you only stage up to the
overall 32-item limit; update references in stageMediaFromUris (stagedMedia and
uris.take(32)) and ensure the concatenation stagedMedia = stagedMedia + items
cannot exceed the 32-item cap.
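Stripped of the Android specifics, the cumulative cap reduces to clamping each pick to the remaining slots. A small Rust sketch of that invariant (names are illustrative, not from the codebase):

```rust
const MAX_STAGED: usize = 32;

/// How many of `picked` new items may still be staged given
/// `already_staged`, so the running total never exceeds MAX_STAGED
/// across repeated picker invocations.
fn accept_count(already_staged: usize, picked: usize) -> usize {
    MAX_STAGED.saturating_sub(already_staged).min(picked)
}

fn main() {
    // A first pick of 32 fills the staging area...
    assert_eq!(accept_count(0, 32), 32);
    // ...so a second pick stages nothing instead of doubling the batch.
    assert_eq!(accept_count(32, 10), 0);
    // A partially full batch accepts only the remaining slots.
    assert_eq!(accept_count(30, 10), 2);
}
```

`saturating_sub` also keeps the result sane if the staged count ever exceeds the cap through another path.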

Collaborator Author


Valid — take(32) caps only the current pick, not the cumulative staged count. The suggested fix (checking remaining slots) is correct. Tracked for follow-up.

— claude


@futurepaul Thanks for confirming! Since this is tracked for a follow-up PR rather than fixed here, I'll leave the comment open. When you do address it, the suggested fix (computing remainingSlots = (32 - stagedMedia.size).coerceAtLeast(0) and using uris.take(remainingSlots)) should cover the cumulative cap correctly.

Comment on lines +351 to +381
fun sendStagedMedia() {
    if (stagedMedia.isEmpty()) return
    coroutineScope.launch {
        val caption = draft.trim()
        if (stagedMedia.size == 1) {
            val p = stagedMedia.first().payload
            val base64 = Base64.encodeToString(p.bytes, Base64.NO_WRAP)
            manager.dispatch(
                AppAction.SendChatMedia(
                    chatId = chat.chatId,
                    dataBase64 = base64,
                    mimeType = p.mimeType,
                    filename = p.filename,
                    caption = caption,
                ),
            )
        } else {
            val batchItems = stagedMedia.map { staged ->
                com.pika.app.rust.MediaBatchItem(
                    dataBase64 = Base64.encodeToString(staged.payload.bytes, Base64.NO_WRAP),
                    mimeType = staged.payload.mimeType,
                    filename = staged.payload.filename,
                )
            }
            manager.dispatch(
                AppAction.SendChatMediaBatch(
                    chatId = chat.chatId,
                    items = batchItems,
                    caption = caption,
                ),
            )

@coderabbitai coderabbitai bot Mar 4, 2026


⚠️ Potential issue | 🟠 Major

Move batch Base64 encoding off the main thread.

This path can process many large files; encoding on the main dispatcher can freeze the composer/UI.

Suggested fix
             } else {
-                val batchItems = stagedMedia.map { staged ->
-                    com.pika.app.rust.MediaBatchItem(
-                        dataBase64 = Base64.encodeToString(staged.payload.bytes, Base64.NO_WRAP),
-                        mimeType = staged.payload.mimeType,
-                        filename = staged.payload.filename,
-                    )
-                }
+                val batchItems = withContext(Dispatchers.Default) {
+                    stagedMedia.map { staged ->
+                        com.pika.app.rust.MediaBatchItem(
+                            dataBase64 = Base64.encodeToString(staged.payload.bytes, Base64.NO_WRAP),
+                            mimeType = staged.payload.mimeType,
+                            filename = staged.payload.filename,
+                        )
+                    }
+                }
                 manager.dispatch(
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@android/app/src/main/java/com/pika/app/ui/screens/ChatScreen.kt` around lines
351 - 381, sendStagedMedia is doing Base64.encodeToString on the main thread for
both single-file and batch paths which can freeze the UI; move the CPU work off
the main dispatcher by wrapping the Base64 encoding in
withContext(Dispatchers.Default) (or Dispatchers.IO) so the mapping/encoding
runs on a background thread before calling manager.dispatch. Specifically, in
sendStagedMedia, perform the single-file Base64.encodeToString(p.bytes, ...)
inside withContext and likewise build batchItems with
withContext(Dispatchers.Default) { stagedMedia.map { ...
Base64.encodeToString(...) ... } } then use the resulting data to call
AppAction.SendChatMedia or AppAction.SendChatMediaBatch.
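The general pattern behind the Kotlin fix is to push the CPU-heavy per-item encoding onto a worker thread and hand results back to the dispatching thread afterwards. A std-only Rust sketch of that shape (the hex `encode` stands in for base64; none of these names come from the codebase):

```rust
use std::sync::mpsc;
use std::thread;

/// Stand-in for a CPU-heavy per-item transform (e.g. base64 encoding).
fn encode(item: Vec<u8>) -> String {
    item.iter().map(|b| format!("{b:02x}")).collect()
}

fn main() {
    let batch: Vec<Vec<u8>> = vec![vec![0xde, 0xad], vec![0xbe, 0xef]];
    let (tx, rx) = mpsc::channel();

    // Offload the whole batch to a worker thread; the "main" (UI) thread
    // stays responsive while encoding runs.
    thread::spawn(move || {
        let encoded: Vec<String> = batch.into_iter().map(encode).collect();
        tx.send(encoded).unwrap();
    });

    // A real UI would receive this via a callback or event loop;
    // recv() here just collects the result for the demo.
    let encoded = rx.recv().unwrap();
    assert_eq!(encoded, vec!["dead".to_string(), "beef".to_string()]);
}
```

In the Kotlin code, `withContext(Dispatchers.Default)` plays the role of the worker thread and the coroutine resumption plays the role of the channel.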

Collaborator Author


Valid. The single-item sendMediaFromUri path correctly uses withContext(Dispatchers.IO) for Base64 encoding, but the batch path in sendStagedMedia does not. With multiple large files this will block the main thread. Should be wrapped in withContext(Dispatchers.IO) for consistency. Tracked for follow-up.

— claude


@futurepaul Thanks for confirming! That's a correct analysis — the sendMediaFromUri single-item path wraps readMediaUploadPayload (which does the I/O and encoding) in withContext(Dispatchers.IO), while the batch path in sendStagedMedia encodes inline on the main dispatcher.

Would you like me to open a follow-up GitHub issue to track this so it doesn't get lost?

Comment on lines +756 to +759
if let Err(e) = write_media_file(&local_path, &media_data) {
    self.toast(format!("Media cache failed: {e}"));
    return;
}

@coderabbitai coderabbitai bot Mar 4, 2026


⚠️ Potential issue | 🟠 Major

Batch failure paths leak cached media files.

Media files are persisted during preprocessing (Line 756), but several early-return error paths (e.g., encryption failure at Line 857) remove outbox state only. Already-written cache files are not cleaned up.

This can accumulate orphaned files over time.

Suggested fix
+        let mut written_paths: Vec<PathBuf> = Vec::new();
...
             if let Err(e) = write_media_file(&local_path, &media_data) {
+                for p in &written_paths { let _ = std::fs::remove_file(p); }
                 self.toast(format!("Media cache failed: {e}"));
                 return;
             }
+            written_paths.push(local_path.clone());
...
                     Err(e) => {
+                        for p in &written_paths { let _ = std::fs::remove_file(p); }
                         if let Some(outbox) = self.local_outbox.get_mut(&chat_id) {
                             outbox.remove(&temp_rumor_id);
                         }

Also applies to: 857-868

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@rust/src/core/chat_media.rs` around lines 756 - 759, After
write_media_file(&local_path, &media_data) succeeds, ensure any subsequent
early-return failure paths (e.g., encryption failure and the outbox-state
removal branch) remove the just-created cache file to avoid orphaned files;
specifically, call std::fs::remove_file(&local_path) (or use a small RAII guard
that deletes local_path on drop unless explicitly disarmed) in the error
branches that currently only return after toast() so the cache file is cleaned
up, and log or ignore removal errors.
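The RAII guard the prompt mentions could be sketched in std-only Rust as follows; this is illustrative, and `CacheFileGuard`/`disarm` are made-up names, not the repo's code:

```rust
use std::fs;
use std::path::PathBuf;

/// Deletes the wrapped file on drop unless `disarm` is called after the
/// send path fully succeeds, so every early `return` cleans up for free.
struct CacheFileGuard {
    path: PathBuf,
    armed: bool,
}

impl CacheFileGuard {
    fn new(path: PathBuf) -> Self {
        Self { path, armed: true }
    }

    /// Call once the batch has been committed; the file is now permanent.
    fn disarm(&mut self) {
        self.armed = false;
    }
}

impl Drop for CacheFileGuard {
    fn drop(&mut self) {
        if self.armed {
            // Ignore errors: the file may already be gone.
            let _ = fs::remove_file(&self.path);
        }
    }
}

fn main() {
    let path = std::env::temp_dir().join("pika_cache_demo.bin");
    fs::write(&path, b"media").unwrap();
    {
        let _guard = CacheFileGuard::new(path.clone());
        // An error path returns here: the guard drops and removes the file.
    }
    assert!(!path.exists());
}
```

Compared with tracking a `written_paths` vector manually, the guard makes cleanup automatic on every early return, including ones added later.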

Collaborator Author


Valid. The batch path has the same cache leak pattern that was fixed for single-image sends in a prior commit. On permanent failure or app restart, cached files from partially-processed batches become orphaned. The suggested written_paths tracking with cleanup on error is the right approach. Tracked for follow-up.

— claude


@futurepaul, thanks for confirming! Since this is deferred to a follow-up, would you like me to open a GitHub issue to formally track the cache leak in the batch media send path? I can include a reference to this PR and comment as context.

Comment on lines +1187 to +1192
let Some(uploaded_url) = uploaded_url else {
    return;
};
let Some(descriptor_hash) = descriptor_sha256_hex else {
    return;
};

@coderabbitai coderabbitai bot Mar 4, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

Missing upload URL/hash currently leaves batch stuck in pending state.

At Line 1187 and Line 1190, returning early on missing uploaded_url or descriptor_sha256_hex does not mark failure, clear progress, or unblock UI. The batch can remain indefinitely pending.

Suggested fix
-        let Some(uploaded_url) = uploaded_url else {
-            return;
-        };
-        let Some(descriptor_hash) = descriptor_sha256_hex else {
-            return;
-        };
+        let Some(uploaded_url) = uploaded_url else {
+            self.fail_batch(&batch_id, "Upload failed: missing upload URL");
+            return;
+        };
+        let Some(descriptor_hash) = descriptor_sha256_hex else {
+            self.fail_batch(&batch_id, "Upload failed: missing uploaded hash");
+            return;
+        };

(Use your existing failure behavior from the error/hash-mismatch branches in handle_batch_upload_completed.)

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
let Some(uploaded_url) = uploaded_url else {
    return;
};
let Some(descriptor_hash) = descriptor_sha256_hex else {
    return;
};
let Some(uploaded_url) = uploaded_url else {
    self.fail_batch(&batch_id, "Upload failed: missing upload URL");
    return;
};
let Some(descriptor_hash) = descriptor_sha256_hex else {
    self.fail_batch(&batch_id, "Upload failed: missing uploaded hash");
    return;
};
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@rust/src/core/chat_media.rs` around lines 1187 - 1192, The early returns when
uploaded_url or descriptor_sha256_hex are None leave the batch stuck pending; in
handle_batch_upload_completed replace those bare returns with the same
failure-handling used in the existing error/hash-mismatch branches (i.e., record
the upload failure for the batch, clear any in-progress state/progress markers,
and notify/unblock the UI). Locate the checks for uploaded_url and
descriptor_sha256_hex and invoke the same code-path or helper used by the error
and hash-mismatch branches so the batch is marked failed and progress cleared
instead of returning early.


Valid. A blossom server returning a success response with a missing URL would cause the batch to hang indefinitely in "uploading" state with no retry path. The early return should mark the batch as Failed so the user can retry. Tracked for follow-up.

— claude


@futurepaul, thanks for confirming! Since this is being tracked for follow-up, would you like me to open a GitHub issue to capture this bug? I can include the relevant context (batch stuck in pending state when Blossom returns a success response with a missing URL/hash, with a link back to this PR comment).

Comment on lines +4865 to +4890
// Check if this is a pending batch media upload retry.
if let Some((
batch_id,
first_request_id,
first_encrypted_data,
first_upload_mime,
first_expected_hash,
)) = self
.pending_media_batch_sends
.iter()
.find(|(_, b)| b.chat_id == chat_id && b.temp_rumor_id == message_id)
.map(|(k, b)| {
// Find the first un-uploaded item.
let idx = b
.items
.iter()
.position(|item| item.uploaded_url.is_none())
.unwrap_or(0);
(
k.clone(),
b.items[idx].request_id.clone(),
b.items[idx].encrypted_data.clone(),
b.items[idx].upload.mime_type.clone(),
hex::encode(b.items[idx].upload.encrypted_hash),
)
})
@coderabbitai coderabbitai bot Mar 4, 2026

⚠️ Potential issue | 🟠 Major

Handle the “no un-uploaded items” batch-retry case explicitly.

position(|item| item.uploaded_url.is_none()).unwrap_or(0) collapses “none left to upload” into index 0. That can cause unnecessary re-uploads on publish-only failures and relies on unchecked indexing assumptions. Please branch the None case explicitly instead of defaulting to 0.

Also applies to: 4916-4938
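The pitfall is easy to reproduce in isolation. A minimal standalone sketch, where `Item` is a stand-in for the real batch item type:

```rust
// `position(...).unwrap_or(0)` cannot distinguish "first item still needs
// upload" from "nothing left to upload" — both yield index 0.
struct Item {
    uploaded_url: Option<String>,
}

fn main() {
    let all_uploaded = vec![
        Item { uploaded_url: Some("https://a".into()) },
        Item { uploaded_url: Some("https://b".into()) },
    ];

    // Buggy: collapses the all-uploaded case into index 0,
    // which would re-upload the first item on a publish-only failure.
    let buggy = all_uploaded
        .iter()
        .position(|i| i.uploaded_url.is_none())
        .unwrap_or(0);
    assert_eq!(buggy, 0);

    // Explicit: None means "skip the upload path, go straight to publish".
    let explicit = all_uploaded.iter().position(|i| i.uploaded_url.is_none());
    assert_eq!(explicit, None);
    println!("ok");
}
```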

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@rust/src/core/mod.rs` around lines 4865 - 4890, The closure that builds the
tuple for the pending_media_batch_sends match uses position(...).unwrap_or(0),
which treats "no un-uploaded items" as index 0 and can cause incorrect
re-uploads and panics; instead, change the mapping logic around
pending_media_batch_sends.iter().find(...).map(|(k,b)| { ... }) so that you
check the position result explicitly (match or if let Some(idx) = ...), and
return None from the map when position is None, letting the outer if let
Some((batch_id, ...)) fail; apply the same explicit None handling to the
equivalent block at the other occurrence (lines ~4916-4938) that uses
position(...).unwrap_or(0).


Valid edge case. If all items are already uploaded (e.g., crash between last upload completing and batch removal), unwrap_or(0) would restart uploads from item 0, causing unnecessary re-uploads. In theory this state shouldn't occur since the batch is removed on success, but the fallback is semantically wrong — it should check if all items are uploaded and skip straight to publish. Low priority since the retry would still succeed, just wastefully.

— claude


@futurepaul Thanks, agreed on the semantics. Here’s a small, safe tweak that makes the “all items already uploaded” case explicit and falls through to the existing publish-retry path without starting any re-uploads.

Patch 1: only return a batch upload tuple when there is an un-uploaded item; otherwise return None so we skip the upload path and hit the general pending_sends publish-retry block.

// Before
.find(|(_, b)| b.chat_id == chat_id && b.temp_rumor_id == message_id)
.map(|(k, b)| {
    let idx = b
        .items
        .iter()
        .position(|item| item.uploaded_url.is_none())
        .unwrap_or(0);
    (
        k.clone(),
        b.items[idx].request_id.clone(),
        b.items[idx].encrypted_data.clone(),
        b.items[idx].upload.mime_type.clone(),
        hex::encode(b.items[idx].upload.encrypted_hash),
    )
})

// After
.find(|(_, b)| b.chat_id == chat_id && b.temp_rumor_id == message_id)
.and_then(|(k, b)| {
    if let Some(idx) = b.items.iter().position(|item| item.uploaded_url.is_none()) {
        Some((
            k.clone(),
            b.items[idx].request_id.clone(),
            b.items[idx].encrypted_data.clone(),
            b.items[idx].upload.mime_type.clone(),
            hex::encode(b.items[idx].upload.encrypted_hash),
        ))
    } else {
        // All items uploaded: let the handler fall through to the publish retry.
        None
    }
})

Patch 2: when resetting next_upload_index for a retry, only do it if there is a next un-uploaded item; otherwise leave it unchanged so we don’t regress to index 0.

// Before
if let Some(batch) = self.pending_media_batch_sends.get_mut(&batch_id) {
    let restart_idx = batch
        .items
        .iter()
        .position(|item| item.uploaded_url.is_none())
        .unwrap_or(0);
    batch.next_upload_index = restart_idx + 1;
}

// After
if let Some(batch) = self.pending_media_batch_sends.get_mut(&batch_id) {
    if let Some(restart_idx) = batch
        .items
        .iter()
        .position(|item| item.uploaded_url.is_none())
    {
        batch.next_upload_index = restart_idx + 1;
    } else {
        // All items uploaded — keep next_upload_index as-is and let the publish-retry path handle it.
    }
}

If you want, I can open a quick follow-up commit with these two hunks, or file a small issue to track it.


✏️ Learnings added
Learnt from: futurepaul
Repo: sledtools/pika PR: 429
File: rust/src/core/mod.rs:4865-4890
Timestamp: 2026-03-04T23:25:22.685Z
Learning: In rust/src/core/mod.rs, within AppCore::handle_action for AppAction::RetryMessage, avoid defaulting to index 0 when locating the first un-uploaded batch item. Use explicit None handling so that when all items are already uploaded, the code skips the batch re-upload path and falls through to the general publish-retry (pending_sends) logic.

Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.


Thanks for the concrete patch. The suggested `and_then` approach is correct — returning `None` when all items are uploaded instead of falling back to index 0 is the right fix. Tracked for follow-up, not applying in this PR to keep the diff focused.

— claude


@futurepaul Sounds good—leaving this PR as-is. I’ve already recorded this as a repo learning so we’ll flag the “all items uploaded → skip re-upload and fall through to publish-retry” pattern in future reviews. If you’d like, I can open a small follow-up GitHub issue referencing this PR/comment and assign it to you; just say the word.


🧠 Learnings used
Learnt from: futurepaul
Repo: sledtools/pika PR: 429
File: rust/src/core/mod.rs:4865-4890
Timestamp: 2026-03-04T23:25:22.685Z
Learning: In rust/src/core/mod.rs, within AppCore::handle_action for AppAction::RetryMessage, avoid defaulting to index 0 when locating the first un-uploaded batch item. Use explicit None handling so that when all items are already uploaded, the code skips the batch re-upload path and falls through to the general publish-retry (pending_sends) logic.

coderabbitai[bot]

This comment was marked as resolved.


@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (2)
rust/src/core/chat_media.rs (2)

749-759: ⚠️ Potential issue | 🟠 Major

Batch failure paths still leak cached media files

On early returns in batch flow, outbox state is cleaned but previously written files are not removed, leaving orphaned cache files.

Proposed fix
+        let mut written_paths: Vec<PathBuf> = Vec::new();
         for di in &decoded_items {
             ...
             let local_path = media_file_path(...);
             if let Err(e) = write_media_file(&local_path, &media_data) {
+                for p in &written_paths {
+                    let _ = std::fs::remove_file(p);
+                }
                 self.toast(format!("Media cache failed: {e}"));
                 return;
             }
+            written_paths.push(local_path.clone());
             ...
         }

         {
             let Some(sess) = self.session.as_mut() else {
                 ...
+                for p in &written_paths {
+                    let _ = std::fs::remove_file(p);
+                }
                 return;
             };

             ...
             let mut upload = match manager.encrypt_for_upload_with_options(...) {
                 Ok(upload) => upload,
                 Err(e) => {
                     ...
+                    for p in &written_paths {
+                        let _ = std::fs::remove_file(p);
+                    }
                     self.toast(format!("Media encryption failed: {e}"));
                     return;
                 }
             };

Also applies to: 833-844, 857-868

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@rust/src/core/chat_media.rs` around lines 749 - 759, The batch flow currently
writes cache files via media_file_path + write_media_file but on early returns
(e.g., the error branch that calls self.toast and returns) the created local
files are left behind; modify the error-handling to remove the just-written file
(or write to a temp path and atomic-rename only on success) so no orphaned cache
remains. Specifically, after calling write_media_file in the batch handlers (the
block using media_file_path / write_media_file around the self.toast return and
the similar blocks at the other noted locations), ensure you catch the Err(e),
attempt to delete the local_path (ignore delete errors but log them), then call
self.toast and return; alternatively implement a temp file write+rename pattern
so the file is only present if the full operation completes. Ensure this logic
is applied to the occurrences around media_file_path/write_media_file noted in
this file.
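The temp-path-plus-atomic-rename alternative mentioned above can be sketched as follows; `write_media_file_atomic` is a hypothetical helper, not the existing `write_media_file`:

```rust
// Sketch: the file only appears at its final path once the full write
// succeeds, so a failed batch item leaves no orphaned cache file behind.
use std::fs;
use std::io;
use std::path::Path;

fn write_media_file_atomic(final_path: &Path, data: &[u8]) -> io::Result<()> {
    // Write to a sibling temp path first...
    let tmp = final_path.with_extension("tmp");
    fs::write(&tmp, data)?;
    // ...then rename into place. On most filesystems this is atomic
    // when source and destination are on the same volume.
    fs::rename(&tmp, final_path)
}

fn main() -> io::Result<()> {
    let dir = std::env::temp_dir().join("pika_media_demo");
    fs::create_dir_all(&dir)?;
    let target = dir.join("img0.bin");
    write_media_file_atomic(&target, b"encrypted-bytes")?;
    assert_eq!(fs::read(&target)?, b"encrypted-bytes");
    // No temp file left over after a successful write.
    assert!(!target.with_extension("tmp").exists());
    println!("atomic write ok");
    Ok(())
}
```

One caveat with this pattern: `with_extension` replaces any existing extension, so real cache paths would need a collision-safe temp suffix.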

1182-1187: ⚠️ Potential issue | 🟠 Major

Missing upload URL/hash still exits without failing the batch

These bare returns leave batch state unresolved and can keep UI progress in a stuck state. Route them through the same failure handling used in other batch error branches.

Proposed fix
-        let Some(uploaded_url) = uploaded_url else {
-            return;
-        };
-        let Some(descriptor_hash) = descriptor_sha256_hex else {
-            return;
-        };
+        let Some(uploaded_url) = uploaded_url else {
+            // same failure path as other batch upload failures
+            if let Some(batch) = self.pending_media_batch_sends.get(&batch_id) {
+                let chat_id = batch.chat_id.clone();
+                let temp_rumor_id = batch.temp_rumor_id.clone();
+                if let Some(outbox) = self.local_outbox.get_mut(&chat_id) {
+                    if let Some(entry) = outbox.get_mut(&temp_rumor_id) {
+                        for att in &mut entry.media {
+                            att.upload_progress = None;
+                        }
+                    }
+                }
+                self.delivery_overrides.entry(chat_id.clone()).or_default().insert(
+                    temp_rumor_id,
+                    MessageDeliveryState::Failed { reason: "Upload failed: missing upload URL".to_string() },
+                );
+                self.refresh_current_chat_if_open(&chat_id);
+                self.refresh_chat_list_from_storage();
+            }
+            return;
+        };
+        let Some(descriptor_hash) = descriptor_sha256_hex else {
+            if let Some(batch) = self.pending_media_batch_sends.get(&batch_id) {
+                let chat_id = batch.chat_id.clone();
+                let temp_rumor_id = batch.temp_rumor_id.clone();
+                if let Some(outbox) = self.local_outbox.get_mut(&chat_id) {
+                    if let Some(entry) = outbox.get_mut(&temp_rumor_id) {
+                        for att in &mut entry.media {
+                            att.upload_progress = None;
+                        }
+                    }
+                }
+                self.delivery_overrides.entry(chat_id.clone()).or_default().insert(
+                    temp_rumor_id,
+                    MessageDeliveryState::Failed { reason: "Upload failed: missing uploaded hash".to_string() },
+                );
+                self.refresh_current_chat_if_open(&chat_id);
+                self.refresh_chat_list_from_storage();
+            }
+            return;
+        };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@rust/src/core/chat_media.rs` around lines 1182 - 1187, The two bare
early-returns for missing uploaded_url and descriptor_sha256_hex leave the
batch/state hanging; replace those returns with the same batch-failure path used
elsewhere in this file: invoke the existing batch error/cleanup handler (the
function or closure used in other error branches) with a clear error describing
the missing uploaded_url or missing descriptor_sha256_hex, include the offending
variable values in the message, ensure the handler updates batch state/UI
progress and then exit the function.
🧹 Nitpick comments (3)
cli/src/harness.rs (2)

1435-1442: Consider validating count inside send_test_media_batch too.

Right now the helper relies on upstream parsing for bounds. A local guard makes the function safer if reused from another call path.

🛡️ Proposed defensive guard
 async fn send_test_media_batch(
     mdk: &PikaMdk,
     keys: &Keys,
     client: &Client,
     relay_url: RelayUrl,
     mls_group_id: &GroupId,
     count: usize,
 ) -> anyhow::Result<String> {
+    if !(1..=8).contains(&count) {
+        return Err(anyhow!("media_batch count out of range: {count} (expected 1..=8)"));
+    }
+
     // Locate fixtures/test-images relative to the crate root (compile-time).
     let fixtures_dir = Path::new(env!("CARGO_MANIFEST_DIR"))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cli/src/harness.rs` around lines 1435 - 1442, Add a defensive guard inside
send_test_media_batch to validate the incoming count parameter (e.g., ensure
count > 0 and optionally enforce an upper bound like MAX_MEDIA_BATCH) and return
an anyhow::Error if the value is out of range; use a descriptive error message
(via anyhow::bail or anyhow::ensure) so callers get immediate feedback if this
helper is invoked directly with invalid bounds. Ensure the check appears at the
top of send_test_media_batch before any work with count, and reference a named
constant (e.g., MAX_MEDIA_BATCH) if you introduce an upper limit so the bound is
easy to adjust.

1421-1421: Align the fixture-list comment with enforced count bounds.

Line 1421 says filenames are cycled for counts > 8, but Line 1415-1416 currently rejects counts above 8. This comment is stale and can mislead future changes.

✏️ Proposed update
-/// Fixture image filenames in `fixtures/test-images/`, cycled for counts > 8.
+/// Fixture image filenames in `fixtures/test-images/` for `media_batch` counts 1..=8.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cli/src/harness.rs` at line 1421, update the stale doc comment that currently
reads "Fixture image filenames in `fixtures/test-images/`, cycled for counts >
8" to match the enforced bound that rejects counts above 8; locate the comment
for the fixture image filenames (the line containing `fixtures/test-images`) and
either remove the mention of cycling for counts > 8 or state explicitly that
counts above 8 are rejected, so the comment aligns with the existing count
validation logic.
rust/src/core/chat_media.rs (1)

651-693: Consider adding a total batch-byte cap to avoid peak memory spikes

The batch path decodes and stores all item buffers before encryption. With large inputs, this can cause very high transient memory usage. A total-byte guard (or streaming one-item-at-a-time preprocessing) would improve resilience.

Also applies to: 716-790
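A cumulative guard could look like this sketch; `BATCH_MAX_BYTES` and the `Vec<Vec<u8>>` item shape are assumptions for illustration, not values from `chat_media.rs`:

```rust
// Reject a batch before encryption once the running total of decoded
// bytes exceeds a configured cap, bounding peak transient memory.
const BATCH_MAX_BYTES: usize = 64 * 1024 * 1024; // assumed 64 MiB total cap

fn check_batch(items: &[Vec<u8>]) -> Result<(), String> {
    let mut total: usize = 0;
    for (i, decoded) in items.iter().enumerate() {
        total = total
            .checked_add(decoded.len())
            .ok_or_else(|| "batch size overflow".to_string())?;
        if total > BATCH_MAX_BYTES {
            return Err(format!(
                "batch too large at item {i}: {total} bytes exceeds {BATCH_MAX_BYTES}"
            ));
        }
    }
    Ok(())
}

fn main() {
    assert!(check_batch(&[vec![0u8; 1024], vec![0u8; 2048]]).is_ok());
    let over = vec![vec![0u8; BATCH_MAX_BYTES], vec![0u8; 1]];
    assert!(check_batch(&over).is_err());
    println!("guard ok");
}
```

In the real flow the check would run as each item is decoded, so oversized batches are rejected before all buffers are held at once.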

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@rust/src/core/chat_media.rs` around lines 651 - 693, The loop that builds
decoded_items (DecodedItem struct and decoded_items Vec) can allocate a large
total memory footprint by decoding all items before processing; add a cumulative
bytes check or process items one-at-a-time: track a running total (e.g.,
total_decoded_bytes) and after decoding each item increment it and return/toast
if it exceeds a configured BATCH_MAX_BYTES (or fallback to streaming/processing
per-item instead of storing all), and apply the same pattern to the later batch
path (the other loop referenced around lines 716-790) so you never hold more
than the allowed total in memory even if many items are present.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: ea74b098-d116-41e2-9d96-882a6c30e489

📥 Commits

Reviewing files that changed from the base of the PR and between 62e414c and a3e7b26.

📒 Files selected for processing (3)
  • cli/src/harness.rs
  • ios/Sources/Views/MessageBubbleViews.swift
  • rust/src/core/chat_media.rs

@devin-ai-integration devin-ai-integration bot left a comment

Devin Review found 1 new potential issue.

View 16 additional findings in Devin Review.


Comment on lines +1182 to +1187
let Some(uploaded_url) = uploaded_url else {
return;
};
let Some(descriptor_hash) = descriptor_sha256_hex else {
return;
};

🟡 Batch upload completion silently returns on missing URL/hash, leaving batch permanently stuck

In handle_batch_upload_completed, when error is None but uploaded_url or descriptor_sha256_hex is None, the function silently returns without cleaning up the outbox entry, marking the delivery as failed, or showing a toast. This leaves the batch in pending_media_batch_sends with a perpetual upload spinner and no way to recover — no further items are uploaded and the UI appears frozen. The single-item upload handler (handle_chat_media_upload_completed at rust/src/core/chat_media.rs:1074-1089) correctly handles this case by cleaning up and showing a toast, but the batch path does not.

Silent return paths in batch handler

At lines 1182-1187, when the upload "succeeds" (no error) but the URL or hash is unexpectedly missing:

let Some(uploaded_url) = uploaded_url else {
    return; // No cleanup, no toast
};
let Some(descriptor_hash) = descriptor_sha256_hex else {
    return; // No cleanup, no toast
};

While the current spawn_media_upload always produces either an error or both URL+hash, the batch path lacks this defensive error handling, and its state machine breaks if that invariant is ever violated.

Prompt for agents
In rust/src/core/chat_media.rs, in the handle_batch_upload_completed function, around lines 1182-1187, the two let-else blocks that check for uploaded_url and descriptor_sha256_hex being None should be changed to mark the batch as failed (similar to the error handling at lines 1150-1179) instead of silently returning. Specifically:

1. For the uploaded_url None case (line 1182-1184): get the batch's chat_id and temp_rumor_id, remove upload_progress from outbox attachments, insert a Failed delivery override with reason "Upload failed: missing upload URL", refresh the UI, and return.

2. For the descriptor_sha256_hex None case (line 1185-1187): same pattern with reason "Upload failed: missing uploaded hash".

This mirrors the defensive handling already present in the single-item upload completion handler at lines 1074-1089.


Same issue as the CodeRabbit thread — already acknowledged and tracked for follow-up. The early return on missing URL/hash should mark the batch as Failed instead of silently returning.

— claude

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

♻️ Duplicate comments (1)
ios/Sources/Views/MessageBubbleViews.swift (1)

554-567: ⚠️ Potential issue | 🟡 Minor

Use fileMedia consistently for non-visual timestamp/background decisions.

You split rendering by fileMedia (all non-visual), but these branches still rely on hasFileAttachment (.file only). Non-visual bubbles like voice notes can miss timestamp/background styling.

🩹 Proposed fix
-            if !hasText, hasFileAttachment {
+            if !hasText, !fileMedia.isEmpty {
                 Text(timestampText)
                     .font(.caption2)
                     .foregroundStyle(message.isMine ? Color.white.opacity(0.78) : Color.secondary.opacity(0.9))
                     .frame(maxWidth: .infinity, alignment: .trailing)
                     .padding(.horizontal, 12)
                     .padding(.bottom, 6)
             }
...
-        .background {
-            let visualMedia = message.media.filter { $0.kind == .image || $0.kind == .video }
-            let needsBg = hasText || hasFileAttachment || visualMedia.count > 1
+        .background {
+            let needsBg = hasText || !fileMedia.isEmpty || visualMedia.count > 1
             if needsBg {
                 message.isMine ? Color.blue : Color.gray.opacity(0.2)
             } else {
                 Color.clear
             }
         }

Also applies to: 593-596

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ios/Sources/Views/MessageBubbleViews.swift` around lines 554 - 567, The
timestamp/background logic currently checks hasFileAttachment (which only
considers .file) and thus excludes other non-visual media; update those branches
to use fileMedia (e.g., check !fileMedia.isEmpty or fileMedia.count) so
decisions that render Text(timestampText) and background/timestamp styling
consider all non-visual attachments produced for MediaAttachmentView; search for
uses around MediaAttachmentView, hasText, timestampText and replace
hasFileAttachment-based conditions with fileMedia-based conditions to include
voice notes and other non-image media.
🧹 Nitpick comments (3)
.claude/skills/e2e-video/SKILL.md (1)

29-65: Add explicit prerequisites for verification/upload commands.

On Line 51 and Line 57, the guide assumes ffprobe and blossom upload prerequisites are already configured. Consider adding a short “Prerequisites” section (e.g., ffmpeg/ffprobe installed, authenticated blossom access) to reduce setup friction for first-time contributors.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.claude/skills/e2e-video/SKILL.md around lines 29 - 65, Add a short
"Prerequisites" section above Step 2 that lists required tools and auth: require
ffmpeg/ffprobe installed and on PATH for verification (mention using package
managers or linking to ffmpeg), and require authenticated Blossom access for
blossom-upload (explain that cargo run -p pikahut -- blossom-upload needs
Blossom credentials or environment vars set/CLI login). Also note any
platform-specific requirements already called out (e.g., nix for Android) and
link the commands referenced (ffprobe and the blossom-upload invocation).
android/app/src/main/java/com/pika/app/ui/screens/ChatScreen.kt (2)

914-926: Improve close button visibility on light images.

The white Close icon (line 924) may have poor contrast against light-colored thumbnail images, making it difficult to see or tap.

💡 Suggested enhancement: Add a semi-transparent background
                                         IconButton(
                                             onClick = { stagedMedia = stagedMedia.filter { it.id != item.id } },
                                             modifier = Modifier
                                                 .align(Alignment.TopEnd)
-                                                .size(20.dp),
+                                                .size(20.dp)
+                                                .background(
+                                                    Color.Black.copy(alpha = 0.4f),
+                                                    RoundedCornerShape(10.dp),
+                                                ),
                                         ) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@android/app/src/main/java/com/pika/app/ui/screens/ChatScreen.kt` around lines
914 - 926, The close IconButton's white Icons.Default.Close lacks contrast on
light thumbnails; update the IconButton (the IconButton wrapping
Icons.Default.Close that calls stagedMedia = stagedMedia.filter { it.id !=
item.id }) to render with a semi-transparent dark background and rounded shape
(e.g., add a background brush/color with alpha and clip with CircleShape or
rounded corner) and adjust padding/size in its modifier so the white icon
remains visible and tappable on light images while preserving the existing
alignment and size behavior.

1005-1008: Minor inconsistency in draft whitespace handling.

Line 1005 uses draft.trim().isNotBlank() while line 1008 uses draft.isNotBlank() (without trim). Although this doesn't cause a functional bug due to the conditional structure, the inconsistency could be confusing.

💅 Suggested fix for consistency
-                            if (draft.trim().isNotBlank() || stagedMedia.isNotEmpty()) {
+                            val hasText = draft.trim().isNotBlank()
+                            if (hasText || stagedMedia.isNotEmpty()) {
                                 FilledIconButton(
                                     onClick = { sendDraftMessage() },
-                                    enabled = draft.isNotBlank() || stagedMedia.isNotEmpty(),
+                                    enabled = hasText || stagedMedia.isNotEmpty(),
                                     modifier = Modifier.size(40.dp).testTag(TestTags.CHAT_SEND),
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@android/app/src/main/java/com/pika/app/ui/screens/ChatScreen.kt` around lines
1005 - 1008, The conditional for enabling the send button is inconsistent: the
outer if uses draft.trim().isNotBlank() but the FilledIconButton enabled
property uses draft.isNotBlank(). Update the enabled expression in the
FilledIconButton to use draft.trim().isNotBlank() (or consistently trim
everywhere) so both the if guard and the enabled parameter use the same trimmed
whitespace logic; reference the FilledIconButton invocation and
sendDraftMessage()/stagedMedia checks to locate the change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.claude/skills/pr-shepherd/SKILL.md:
- Around line 119-121: Add a language identifier to the fenced code block
containing the Agent(...) snippet so the markdown linter MD040 is satisfied;
update the fence that wraps Agent(subagent_type="general-purpose", prompt="Run
/pr-review $PR ...") to include a language tag (e.g., text) immediately after
the opening ``` so the block becomes ```text and preserves the exact Agent(...)
content.
- Around line 15-17: The fenced code block containing the line "Pre-flight
(once)  →  Main Loop (repeat)  →  Final Validation  →  Exit Report" lacks a
language tag causing MD040; fix it by adding a language identifier to the
opening fence (e.g., change ``` to ```text) so the block becomes a fenced code
block with an explicit language tag.

In `@android/app/src/main/java/com/pika/app/ui/screens/ChatScreen.kt`:
- Around line 1362-1404: When visualMedia.size == 3, replace the generic
chunked(2) grid with an explicit iMessage-style layout: render a Row with two
children — Left: a Box(weight = 2f) that hosts MediaAttachmentContent for
visualMedia[0] filling full height; Right: a Column(weight = 1f) with two Boxes
each (weight = 1f) hosting MediaAttachmentContent for visualMedia[1] and
visualMedia[2] stacked vertically; preserve existing props (isMine, onDownload
-> onDownloadMedia(messageId, attachment.originalHashHex), onOpenImage) and keep
the overlay logic (the +N Box) on the last visible cell as before. Implement
this special-case inside the same composable where rows/chunked(2) and
MediaAttachmentContent are used so other image counts still use the original
chunked logic.

---

Duplicate comments:
In `@ios/Sources/Views/MessageBubbleViews.swift`:
- Around line 554-567: The timestamp/background logic currently checks
hasFileAttachment (which only considers .file) and thus excludes other
non-visual media; update those branches to use fileMedia (e.g., check
!fileMedia.isEmpty or fileMedia.count) so decisions that render
Text(timestampText) and background/timestamp styling consider all non-visual
attachments produced for MediaAttachmentView; search for uses around
MediaAttachmentView, hasText, timestampText and replace hasFileAttachment-based
conditions with fileMedia-based conditions to include voice notes and other
non-image media.

---

Nitpick comments:
In @.claude/skills/e2e-video/SKILL.md:
- Around line 29-65: Add a short "Prerequisites" section above Step 2 that lists
required tools and auth: require ffmpeg/ffprobe installed and on PATH for
verification (mention using package managers or linking to ffmpeg), and require
authenticated Blossom access for blossom-upload (explain that cargo run -p
pikahut -- blossom-upload needs Blossom credentials or environment vars set/CLI
login). Also note any platform-specific requirements already called out (e.g.,
nix for Android) and link the commands referenced (ffprobe and the
blossom-upload invocation).

In `@android/app/src/main/java/com/pika/app/ui/screens/ChatScreen.kt`:
- Around line 914-926: The close IconButton's white Icons.Default.Close lacks
contrast on light thumbnails; update the IconButton (the IconButton wrapping
Icons.Default.Close that calls stagedMedia = stagedMedia.filter { it.id !=
item.id }) to render with a semi-transparent dark background and rounded shape
(e.g., add a background brush/color with alpha and clip with CircleShape or
rounded corner) and adjust padding/size in its modifier so the white icon
remains visible and tappable on light images while preserving the existing
alignment and size behavior.
- Around line 1005-1008: The conditional for enabling the send button is
inconsistent: the outer if uses draft.trim().isNotBlank() but the
FilledIconButton enabled property uses draft.isNotBlank(). Update the enabled
expression in the FilledIconButton to use draft.trim().isNotBlank() (or
consistently trim everywhere) so both the if guard and the enabled parameter use
the same trimmed whitespace logic; reference the FilledIconButton invocation and
sendDraftMessage()/stagedMedia checks to locate the change.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: f52706ff-80ea-4bf7-9c3e-1f92cf6172ef

📥 Commits

Reviewing files that changed from the base of the PR and between a3e7b26 and bb6efa1.

📒 Files selected for processing (5)
  • .claude/skills/e2e-video/SKILL.md
  • .claude/skills/pr-shepherd/SKILL.md
  • android/app/src/main/java/com/pika/app/AppManager.kt
  • android/app/src/main/java/com/pika/app/ui/screens/ChatScreen.kt
  • ios/Sources/Views/MessageBubbleViews.swift

Comment on lines +1362 to +1404
if (visualMedia.isNotEmpty()) {
val maxVisible = 4
val rows = visualMedia.take(maxVisible).chunked(2)
Column(verticalArrangement = Arrangement.spacedBy(3.dp)) {
for ((rowIndex, row) in rows.withIndex()) {
Row(
modifier = Modifier.fillMaxWidth(),
horizontalArrangement = Arrangement.spacedBy(3.dp),
) {
for ((colIndex, attachment) in row.withIndex()) {
val isLastCell = rowIndex == rows.lastIndex && colIndex == row.lastIndex
val remaining = visualMedia.size - maxVisible
Box(
modifier = Modifier
.weight(1f)
.heightIn(max = 200.dp),
) {
MediaAttachmentContent(
attachment = attachment,
isMine = isMine,
onDownload = { onDownloadMedia(messageId, attachment.originalHashHex) },
onOpenImage = { onOpenImage(attachment) },
)
if (isLastCell && remaining > 0) {
Box(
modifier = Modifier
.matchParentSize()
.background(Color.Black.copy(alpha = 0.5f))
.clip(RoundedCornerShape(8.dp)),
contentAlignment = Alignment.Center,
) {
Text(
text = "+$remaining",
color = Color.White,
style = MaterialTheme.typography.headlineSmall,
)
}
}
}
}
}
}
}

@coderabbitai coderabbitai bot Mar 4, 2026


⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

# Search for iOS MediaGrid implementation
find . -type f \( -name "*.swift" -o -name "*.kt" \) | xargs grep -l "MediaGrid" | head -10

Repository: sledtools/pika

Length of output: 196


🏁 Script executed:

# Look for PR description or objectives
find . -type f \( -name "*.md" -o -name "*.txt" \) | xargs grep -l "3-image\|iMessage\|1-tall-left" 2>/dev/null | head -10

Repository: sledtools/pika

Length of output: 40


🏁 Script executed:

# Check the actual Android file to understand the grid layout
cat android/app/src/main/java/com/pika/app/ui/screens/ChatScreen.kt | sed -n '1362,1410p'

Repository: sledtools/pika

Length of output: 2541


🏁 Script executed:

# Examine iOS MessageBubbleViews.swift to see how it handles media grid
cat ios/Sources/Views/MessageBubbleViews.swift | grep -A 100 -B 5 "MediaGrid\|3-image\|media.*grid" -i

Repository: sledtools/pika

Length of output: 9677


🏁 Script executed:

# Search for PR description or commit messages
git log --oneline --all 2>/dev/null | grep -i "3-image\|mediagrid\|layout" | head -20

Repository: sledtools/pika

Length of output: 132


🏁 Script executed:

# Look for README or documentation about the PR
find . -type f -name "README*" -o -name "DESIGN*" -o -name "CHANGELOG*" | xargs grep -l "3-image\|iMessage" 2>/dev/null

Repository: sledtools/pika

Length of output: 42


Android 3-image grid layout differs from iOS implementation.

The Android implementation produces a "2-top + 1-full-width-bottom" layout for 3 images, which visually differs from the iOS design. iOS explicitly implements a "1-tall-left + 2-stacked-right (iMessage style)" layout for 3 images using:

  • Left: Single image at double height
  • Right: VStack with 2 images

Android's chunked(2) approach creates:

  • Row 1: 2 images at 50% width each
  • Row 2: 1 image at 100% width (single item with weight(1f))

Align the 3-image layout with iOS to maintain consistent design across platforms.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@android/app/src/main/java/com/pika/app/ui/screens/ChatScreen.kt` around lines
1362 - 1404, When visualMedia.size == 3, replace the generic chunked(2) grid
with an explicit iMessage-style layout: render a Row with two children — Left: a
Box(weight = 2f) that hosts MediaAttachmentContent for visualMedia[0] filling
full height; Right: a Column(weight = 1f) with two Boxes each (weight = 1f)
hosting MediaAttachmentContent for visualMedia[1] and visualMedia[2] stacked
vertically; preserve existing props (isMine, onDownload ->
onDownloadMedia(messageId, attachment.originalHashHex), onOpenImage) and keep
the overlay logic (the +N Box) on the last visible cell as before. Implement
this special-case inside the same composable where rows/chunked(2) and
MediaAttachmentContent are used so other image counts still use the original
chunked logic.
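The layout difference is easier to see as data. Below is a small Rust sketch (names hypothetical, taken from neither codebase) that computes the row structure both reviews describe: chunked pairs for most counts, plus the iMessage-style special case for exactly three images, where the first cell gets double weight and the other two stack beside it.

```rust
// Hypothetical description of a media grid row: `index` points into
// visualMedia, `weight` mirrors Compose's Modifier.weight / SwiftUI proportions.
#[derive(Debug, PartialEq)]
struct Cell {
    index: usize,
    weight: u32,
}

/// Returns rows of cells for the first `max_visible` images.
/// Exactly 3 images get the iMessage-style split: one tall cell (weight 2)
/// next to two stacked cells (weight 1 each); other counts chunk by 2.
fn grid_rows(count: usize, max_visible: usize) -> Vec<Vec<Cell>> {
    let shown = count.min(max_visible);
    if shown == 3 {
        // Rendered as one Row: cell 0 fills the full height on the left,
        // cells 1 and 2 stack vertically on the right.
        return vec![vec![
            Cell { index: 0, weight: 2 },
            Cell { index: 1, weight: 1 },
            Cell { index: 2, weight: 1 },
        ]];
    }
    (0..shown)
        .collect::<Vec<_>>()
        .chunks(2)
        .map(|chunk| chunk.iter().map(|&index| Cell { index, weight: 1 }).collect())
        .collect()
}
```

Either platform's view code can then walk these rows; only the 3-image branch needs the nested left-box/right-column rendering.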

Collaborator Author


Acknowledged — the Android grid currently uses a uniform 2-column layout for all counts, while iOS uses the iMessage-style 1-tall-left + 2-stacked-right for 3 images. The Android layout still works correctly (no overflow or rendering bugs); the asymmetric layout is a polish item. Tracking for a follow-up PR to bring Android grid parity with iOS.

— claude



@futurepaul, understood — thanks for the context! Since this is being tracked for a follow-up PR, would you like me to open a GitHub issue to formally track the Android grid parity work (1-tall-left + 2-stacked-right for 3 images, matching iOS)? That way it won't get lost between PRs.


🧠 Learnings used
Learnt from: futurepaul
Repo: sledtools/pika PR: 429
File: android/app/src/main/java/com/pika/app/ui/screens/ChatScreen.kt:1339-1418
Timestamp: 2026-03-04T23:23:31.690Z
Learning: In Kotlin Android projects using Jetpack Compose, matchParentSize() is a member extension on BoxScope (defined in androidx.compose.foundation.layout) and is automatically available inside a Box { ... } lambda without any extra import. Do not flag missing imports for matchParentSize inside Box composables as false positives. Reviewers should treat usage inside Box scopes as valid without requiring an explicit import.

Learnt from: futurepaul
Repo: sledtools/pika PR: 429
File: android/app/src/main/java/com/pika/app/AppManager.kt:447-454
Timestamp: 2026-03-04T23:30:30.607Z
Learning: In Kotlin files under sledtools/pika (e.g., android/app/src/main/java/com/pika/app/AppManager.kt), when new UniFFI/FFI types appear in a diff hunk but lack a visible import, do not flag a missing import as a compile error. The import may have been added elsewhere in the PR or is already present. Always verify against the full file content or rely on the author’s confirmation that the code compiles (e.g., via CI).

devin-ai-integration[bot]

This comment was marked as resolved.

futurepaul and others added 7 commits March 4, 2026 18:07
Add multi-image upload support (up to 32 items per message) to the Rust
core and pikachat daemon. The batch pipeline preprocesses all items
(resize, hash, cache), creates a single outbox entry with all
attachments for immediate UI display, encrypts each item via MDK, then
uploads sequentially to Blossom. On completion, all imeta tags are
collected into one published message.

- MediaBatchItem record + SendChatMediaBatch action variant
- PendingMediaBatchSend state with sequential upload chaining
- Batch-aware upload completion and retry handlers
- SendMediaBatch daemon command (file_paths: Vec<String>)
- Unit tests for validation and deserialization

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add multi-select photo/video picker (up to 32 items), horizontal
thumbnail staging area in the compose bar, batch send dispatch via
SendChatMediaBatch, and a 2-column media grid layout for displaying
multi-image messages (with +N overlay for 5+ attachments).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…sion video support

Fix multi-image grid pushing messages off screen by using GeometryReader
to constrain cell widths. Change 3-image layout to 1-tall-left + 2-stacked-right
(iMessage-style). Add blurhash generation during image upload for placeholder
display. Support video in share extension batch with max count of 4.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Add `media_batch:<count>` bot trigger that generates, encrypts, uploads
  test images to blossom and sends them as a multi-media chat message
- Add `testE2E_multiImageGrid` iOS UI test that sends the trigger and
  verifies the media grid renders
- Add `PIKA_IOS_E2E_TEST_METHOD` env var to run a single iOS E2E test
  (like Android's `PIKA_ANDROID_E2E_TEST_CLASS`)
- Add `fixtures/test-images/` with 8 solid-color JPEGs used by the bot
- Add `chat_media_grid` accessibility identifier to the media grid view

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Fix HashMap iteration order bug: use .get(&batch_id) instead of
  .values().last() to retrieve the just-inserted batch entry (Devin)
- Make +N overlay non-interactive so media tap still works (CodeRabbit)
- Use compile-time env!("CARGO_MANIFEST_DIR") for fixture path (CodeRabbit)
- Don't leak internal error details in bot error messages (CodeRabbit)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Add missing MediaBatchItem import in AppManager.kt
- Add missing mediaBatch parameter to ShareEnqueueRequest constructors
- Fix forward-reference: move sendStagedMedia() before sendDraftMessage()
- Add background color to multi-image grids (blue for sent, gray for received)
- Add /pr-shepherd and /e2e-video Claude Code skills

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Images now fill their grid cells using scaledToFill + clipped (CSS
object-fit: cover equivalent). Adds coverMode parameter to
MediaAttachmentView that bypasses aspect-ratio-based sizing. Grid
gaps use a subtle systemGray5 background instead of the bubble color.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 7

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
crates/pika-share/src/lib.rs (1)

690-699: ⚠️ Potential issue | 🟡 Minor

Normalize media_batch entries before persisting.

Validation trims relative_path at Line 636, but Line 698 stores raw media_batch strings. Whitespace can pass validation then fail lookup or leak untrimmed metadata.

🧹 Proposed fix
+    let media_batch = request.media_batch.map(|batch| {
+        batch
+            .into_iter()
+            .map(|entry| ShareMediaBatchEntry {
+                relative_path: entry.relative_path.trim().to_string(),
+                mime_type: entry.mime_type.trim().to_string(),
+                filename: entry.filename.trim().to_string(),
+            })
+            .collect()
+    });
+
     NormalizedRequest {
         chat_id,
         compose_text,
         payload_kind: request.payload_kind,
         payload_text,
         media_relative_path,
         media_mime_type,
         media_filename,
-        media_batch: request.media_batch,
+        media_batch,
         client_request_id: request.client_request_id.trim().to_string(),
         created_at_ms: if request.created_at_ms == 0 {
             now
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/pika-share/src/lib.rs` around lines 690 - 699, The NormalizedRequest
is storing request.media_batch raw which can contain untrimmed entries even
though validation trimmed media_relative_path; update the construction of
NormalizedRequest (where media_batch: request.media_batch is set) to map over
request.media_batch and trim each string (and optionally filter out empty
strings) before assigning, e.g. replace media_batch: request.media_batch with a
trimmed collection (e.g., request.media_batch.iter().map(|s|
s.trim().to_string()).filter(|s| !s.is_empty()).collect()) so persisted
media_batch entries match the validated media_relative_path semantics.
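As a standalone illustration of that normalization, here is a minimal Rust sketch. The `ShareMediaBatchEntry` shape is hypothetical, mirroring the fields in the suggested fix; it trims every field and additionally drops entries whose path trims to empty, since those can never resolve to a file at send time.

```rust
#[derive(Debug, PartialEq)]
struct ShareMediaBatchEntry {
    relative_path: String,
    mime_type: String,
    filename: String,
}

/// Trim all fields of each batch entry and discard entries whose
/// relative_path is empty after trimming.
fn normalize_media_batch(
    batch: Option<Vec<ShareMediaBatchEntry>>,
) -> Option<Vec<ShareMediaBatchEntry>> {
    batch.map(|entries| {
        entries
            .into_iter()
            .map(|e| ShareMediaBatchEntry {
                relative_path: e.relative_path.trim().to_string(),
                mime_type: e.mime_type.trim().to_string(),
                filename: e.filename.trim().to_string(),
            })
            .filter(|e| !e.relative_path.is_empty())
            .collect()
    })
}
```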
♻️ Duplicate comments (7)
crates/pikachat-sidecar/src/daemon.rs (1)

3157-3165: ⚠️ Potential issue | 🟠 Major

Use async file reads in the batch loop to avoid blocking Tokio workers.

std::fs::read here blocks the async command path under load (especially for multi-file batches).

#!/bin/bash
set -euo pipefail

# Verify blocking file reads in media command handlers
rg -n -C3 'InCmd::SendMediaBatch|InCmd::SendMedia|std::fs::read\(' crates/pikachat-sidecar/src/daemon.rs
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/pikachat-sidecar/src/daemon.rs` around lines 3157 - 3165, The loop is
using blocking std::fs::read for each file_path (inside the SendMediaBatch
handling), which blocks Tokio workers under load; replace the blocking call with
an async read (e.g. tokio::fs::read or opening tokio::fs::File and read_to_end)
and await it so the loop does not block the runtime; keep the same error
handling using reply_tx.send(out_error(request_id.clone(), "file_error",
format!("read {file_path}: {e}"))).ok() and return Err(()) on failure, and
update any enclosing function/closure to be async if necessary (references:
file_paths, request_id, reply_tx, out_error, the
SendMediaBatch/InCmd::SendMediaBatch handling).
ios/Sources/AppManager.swift (1)

546-571: ⚠️ Potential issue | 🟠 Major

Avoid acknowledging .acceptedByCore before local batch validation.

This still acknowledges success before validating items (and before core acceptance is known), so invalid batches can be marked accepted locally and then fail downstream.

♻️ Suggested guard before dispatch/ack
 case .mediaBatch(let caption, let items):
+    guard !items.isEmpty, items.count <= 32 else {
+        ShareQueueManager.acknowledge(
+            ShareDispatchAck(
+                itemId: item.itemId,
+                status: .rejectedByCore,
+                errorCode: "invalid_batch_size",
+                errorMessage: "media batch must contain 1...32 items"
+            )
+        )
+        continue
+    }
     let batchItems = items.map { entry in
         MediaBatchItem(
             dataBase64: entry.dataBase64,
             mimeType: entry.mimeType,
             filename: entry.filename
         )
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ios/Sources/AppManager.swift` around lines 546 - 571, In the .mediaBatch
handling in AppManager.swift, don't call ShareQueueManager.acknowledge(...
status: .acceptedByCore) before validating the batch locally and before core
acceptance is known; instead, validate each item (fields like dataBase64,
mimeType, filename) from items -> MediaBatchItem and only proceed to
dispatch(.sendChatMediaBatch(...)) and/or call ShareQueueManager.acknowledge
with .acceptedByCore after the local validation succeeds (or better, after
receiving core confirmation); if validation fails, acknowledge with an
appropriate failure status/errorMessage and skip dispatch. Ensure you update the
code around the .mediaBatch case, MediaBatchItem creation,
dispatch(.sendChatMediaBatch), and ShareQueueManager.acknowledge to reflect this
ordering.
.claude/skills/pr-shepherd/SKILL.md (1)

15-17: ⚠️ Potential issue | 🟡 Minor

Add language identifiers to both fenced code blocks (MD040).

Line 15 and Line 119 still use bare triple-backtick fences; markdownlint requires a language tag.

🛠️ Proposed fix
-```
+```text
 Pre-flight (once)  →  Main Loop (repeat)  →  Final Validation  →  Exit Report

-```
+```text
Agent(subagent_type="general-purpose", prompt="Run /pr-review $PR — fetch all review comments, prioritize human > Devin > CodeRabbit, address actionable feedback, and reply to addressed comments. For comments you disagree with, reply explaining why and sign it 'claude'. Return a summary of what was fixed, what was skipped, and what needs the user's decision.")

Also applies to: 119-121

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.claude/skills/pr-shepherd/SKILL.md around lines 15 - 17, Add missing
language identifiers to the two fenced code blocks: the block containing
"Pre-flight (once)  →  Main Loop (repeat)  →  Final Validation  →  Exit Report"
and the block starting with "Agent(subagent_type="general-purpose", prompt="Run
/pr-review $PR — ...")"; change the opening triple-backtick of each to include a
language tag (e.g., "text") so they become ```text and leave the content and
closing backticks unchanged.
rust/src/core/mod.rs (1)

5038-5051: ⚠️ Potential issue | 🟠 Major

Handle “all items already uploaded” explicitly in batch retry.

Line 5044 and Line 5084 use .unwrap_or(0), which treats “no un-uploaded item exists” as index 0. That can restart uploads unnecessarily instead of falling through to publish retry.

💡 Suggested fix
-                    .map(|(k, b)| {
-                        // Find the first un-uploaded item.
-                        let idx = b
-                            .items
-                            .iter()
-                            .position(|item| item.uploaded_url.is_none())
-                            .unwrap_or(0);
-                        (
-                            k.clone(),
-                            b.items[idx].request_id.clone(),
-                            b.items[idx].encrypted_data.clone(),
-                            b.items[idx].upload.mime_type.clone(),
-                            hex::encode(b.items[idx].upload.encrypted_hash),
-                        )
-                    })
+                    .and_then(|(k, b)| {
+                        b.items
+                            .iter()
+                            .position(|item| item.uploaded_url.is_none())
+                            .map(|idx| {
+                                (
+                                    k.clone(),
+                                    b.items[idx].request_id.clone(),
+                                    b.items[idx].encrypted_data.clone(),
+                                    b.items[idx].upload.mime_type.clone(),
+                                    hex::encode(b.items[idx].upload.encrypted_hash),
+                                )
+                            })
+                    })
@@
-                    if let Some(batch) = self.pending_media_batch_sends.get_mut(&batch_id) {
-                        let restart_idx = batch
-                            .items
-                            .iter()
-                            .position(|item| item.uploaded_url.is_none())
-                            .unwrap_or(0);
-                        batch.next_upload_index = restart_idx + 1;
-                    }
+                    if let Some(batch) = self.pending_media_batch_sends.get_mut(&batch_id) {
+                        if let Some(restart_idx) = batch
+                            .items
+                            .iter()
+                            .position(|item| item.uploaded_url.is_none())
+                        {
+                            batch.next_upload_index = restart_idx + 1;
+                        }
+                    }

Based on learnings: "In rust/src/core/mod.rs, within AppCore::handle_action for AppAction::RetryMessage, avoid defaulting to index 0 when locating the first un-uploaded batch item. Use explicit None handling so that when all items are already uploaded, the code skips the batch re-upload path and falls through to the general publish-retry (pending_sends) logic."

Also applies to: 5079-5085
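The core of the fix is replacing `unwrap_or(0)` with explicit `Option` handling. A minimal Rust sketch (item shape hypothetical):

```rust
// Stand-in for a pending batch item; only the field the retry logic inspects.
struct BatchItem {
    uploaded_url: Option<String>,
}

/// Some(idx) = resume uploading at idx; None = every item is already
/// uploaded, so the caller should retry the publish step instead.
fn first_unuploaded(items: &[BatchItem]) -> Option<usize> {
    items.iter().position(|item| item.uploaded_url.is_none())
}
```

With `unwrap_or(0)`, a fully uploaded batch would restart its upload at item 0; with `None` surfaced to the caller, the batch re-upload path is skipped and control falls through to the general publish-retry (`pending_sends`) logic.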

rust/src/core/chat_media.rs (2)

756-759: ⚠️ Potential issue | 🟠 Major

Batch failure paths still leak already-written cache files.

On Line 756 and Line 857 failure returns, previously written local_path files are not deleted. This leaves orphaned files when preprocessing/encryption aborts mid-batch.

💡 Suggested fix
+        let mut written_paths: Vec<PathBuf> = Vec::with_capacity(decoded_items.len());
...
             if let Err(e) = write_media_file(&local_path, &media_data) {
+                for p in &written_paths {
+                    let _ = std::fs::remove_file(p);
+                }
                 self.toast(format!("Media cache failed: {e}"));
                 return;
             }
+            written_paths.push(local_path.clone());
...
             let Some(sess) = self.session.as_mut() else {
                 // Clean up on session loss.
                 if let Some(outbox) = self.local_outbox.get_mut(&chat_id) {
                     outbox.remove(&temp_rumor_id);
                 }
                 if let Some(overrides) = self.delivery_overrides.get_mut(&chat_id) {
                     overrides.remove(&temp_rumor_id);
                 }
+                for p in &written_paths {
+                    let _ = std::fs::remove_file(p);
+                }
                 self.refresh_current_chat_if_open(&chat_id);
                 self.refresh_chat_list_from_storage();
                 return;
             };
...
                     Err(e) => {
                         // Clean up outbox on encryption failure.
                         if let Some(outbox) = self.local_outbox.get_mut(&chat_id) {
                             outbox.remove(&temp_rumor_id);
                         }
                         if let Some(overrides) = self.delivery_overrides.get_mut(&chat_id) {
                             overrides.remove(&temp_rumor_id);
                         }
+                        for p in &written_paths {
+                            let _ = std::fs::remove_file(p);
+                        }
                         self.refresh_current_chat_if_open(&chat_id);
                         self.refresh_chat_list_from_storage();
                         self.toast(format!("Media encryption failed: {e}"));
                         return;
                     }

Also applies to: 833-843, 857-868
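One way to make that cleanup hard to forget is an RAII guard rather than repeating the removal loop at every early return. This is an illustrative sketch of the pattern, not the shape of the actual chat_media.rs code:

```rust
use std::path::PathBuf;

// Tracks cache files written during batch preprocessing. If the guard is
// dropped without commit() (any early return: encryption failure, session
// loss), every recorded file is deleted.
struct WrittenFiles {
    paths: Vec<PathBuf>,
    committed: bool,
}

impl WrittenFiles {
    fn new() -> Self {
        Self { paths: Vec::new(), committed: false }
    }

    /// Record a file immediately after it is successfully written.
    fn record(&mut self, path: PathBuf) {
        self.paths.push(path);
    }

    /// Call once the whole batch has been handed off; files are kept.
    fn commit(&mut self) {
        self.committed = true;
    }
}

impl Drop for WrittenFiles {
    fn drop(&mut self) {
        if !self.committed {
            for p in &self.paths {
                let _ = std::fs::remove_file(p);
            }
        }
    }
}
```

Each `write_media_file` success would be followed by `guard.record(local_path.clone())`, and the single `guard.commit()` replaces per-branch cleanup loops.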


1182-1187: ⚠️ Potential issue | 🟠 Major

Missing URL/hash still leaves batch stuck in pending state.

Line 1182 and Line 1185 return early without marking the batch failed or clearing progress, so the message can remain indefinitely pending.

💡 Suggested fix
         let Some(uploaded_url) = uploaded_url else {
-            return;
+            if let Some(batch) = self.pending_media_batch_sends.get(&batch_id) {
+                let chat_id = batch.chat_id.clone();
+                let temp_rumor_id = batch.temp_rumor_id.clone();
+                if let Some(outbox) = self.local_outbox.get_mut(&chat_id) {
+                    if let Some(entry) = outbox.get_mut(&temp_rumor_id) {
+                        for att in &mut entry.media {
+                            att.upload_progress = None;
+                        }
+                    }
+                }
+                self.delivery_overrides
+                    .entry(chat_id.clone())
+                    .or_default()
+                    .insert(
+                        temp_rumor_id,
+                        MessageDeliveryState::Failed {
+                            reason: "Upload failed: missing upload URL".to_string(),
+                        },
+                    );
+                self.refresh_current_chat_if_open(&chat_id);
+                self.refresh_chat_list_from_storage();
+            }
+            return;
         };
         let Some(descriptor_hash) = descriptor_sha256_hex else {
-            return;
+            if let Some(batch) = self.pending_media_batch_sends.get(&batch_id) {
+                let chat_id = batch.chat_id.clone();
+                let temp_rumor_id = batch.temp_rumor_id.clone();
+                if let Some(outbox) = self.local_outbox.get_mut(&chat_id) {
+                    if let Some(entry) = outbox.get_mut(&temp_rumor_id) {
+                        for att in &mut entry.media {
+                            att.upload_progress = None;
+                        }
+                    }
+                }
+                self.delivery_overrides
+                    .entry(chat_id.clone())
+                    .or_default()
+                    .insert(
+                        temp_rumor_id,
+                        MessageDeliveryState::Failed {
+                            reason: "Upload failed: missing uploaded hash".to_string(),
+                        },
+                    );
+                self.refresh_current_chat_if_open(&chat_id);
+                self.refresh_chat_list_from_storage();
+            }
+            return;
         };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@rust/src/core/chat_media.rs` around lines 1182 - 1187, The early returns
after unpacking let Some(uploaded_url) and let Some(descriptor_sha256_hex) leave
the related batch in pending state; before returning, call the batch/failure
handling code to mark the batch as failed and clear any in-progress flags (e.g.
invoke the existing batch failure/update functions used
elsewhere—mark_batch_failed(batch_id, reason) or
clear_upload_progress(message_id) — or update the batch/message status to Failed
and remove progress) so the message won't stay pending; update the branches
where uploaded_url and descriptor_sha256_hex are None to perform that
failure/cleanup using the surrounding batch/message identifiers and then return.
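The same idea in miniature: turn the two silent early returns into explicit errors the caller must handle, so a missing URL or hash always results in the batch being marked failed. Names are illustrative, not the actual chat_media.rs signatures:

```rust
/// Validate the upload-completion inputs. On Err, the caller marks the batch
/// failed (clearing upload_progress and setting a Failed delivery state)
/// instead of returning early and leaving the message pending forever.
fn finalize_inputs(
    uploaded_url: Option<String>,
    descriptor_sha256_hex: Option<String>,
) -> Result<(String, String), &'static str> {
    let url = uploaded_url.ok_or("Upload failed: missing upload URL")?;
    let hash = descriptor_sha256_hex.ok_or("Upload failed: missing uploaded hash")?;
    Ok((url, hash))
}
```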
ios/Sources/Views/MessageBubbleViews.swift (1)

532-534: ⚠️ Potential issue | 🟡 Minor

Use the same non-visual grouping for timestamp/background checks.

Rendering uses fileMedia (all non-image/video), but these checks still use hasFileAttachment (.file only), so non-visual non-file media can miss timestamp/background behavior.

🩹 Proposed fix
-            if !hasText, hasFileAttachment {
+            if !hasText, !fileMedia.isEmpty {
                 Text(timestampText)
                     .font(.caption2)
                     .foregroundStyle(message.isMine ? Color.white.opacity(0.78) : Color.secondary.opacity(0.9))
                     .frame(maxWidth: .infinity, alignment: .trailing)
                     .padding(.horizontal, 12)
                     .padding(.bottom, 6)
             }
@@
-            if hasText || hasFileAttachment {
+            if hasText || !fileMedia.isEmpty {
                 message.isMine ? Color.blue : Color.gray.opacity(0.2)
             } else {
                 Color.clear
             }

Also applies to: 570-571, 597-600

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ios/Sources/Views/MessageBubbleViews.swift` around lines 532 - 534, The
timestamp/background grouping uses a different predicate than rendering: replace
the discrepancy by making hasFileAttachment use the same non-visual grouping as
fileMedia (i.e., consider any media that is not image/video) or by changing the
timestamp/background checks to call the existing fileMedia boolean; update the
computed property hasFileAttachment (or all call sites that currently reference
hasFileAttachment) to use message.media.filter / contains logic that matches
fileMedia's criteria so timestamp and background behavior matches rendering for
MessageBubbleViews.
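The underlying principle, sketched in Rust with hypothetical stand-ins for the Swift attachment kinds: derive every layout decision from the same non-visual predicate, so rendering and timestamp/background styling can never disagree.

```rust
// Hypothetical attachment kinds; File and VoiceNote are both "non-visual".
#[derive(Clone, Copy, PartialEq)]
enum MediaKind {
    Image,
    Video,
    File,
    VoiceNote,
}

fn is_visual(kind: MediaKind) -> bool {
    matches!(kind, MediaKind::Image | MediaKind::Video)
}

/// Mirrors `fileMedia`: everything not rendered in the image grid.
fn file_media(media: &[MediaKind]) -> Vec<MediaKind> {
    media.iter().copied().filter(|k| !is_visual(*k)).collect()
}

/// Timestamp and bubble background key off the same grouping as rendering,
/// unlike a narrower .file-only check that would miss voice notes.
fn shows_bubble_background(has_text: bool, media: &[MediaKind]) -> bool {
    has_text || !file_media(media).is_empty()
}
```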
🧹 Nitpick comments (9)
docs/how-to-run-e2e-ui-tests-with-video.md (1)

56-60: Keep iOS method examples consistent with recording workflow.

The ping-pong command drops PIKA_UI_E2E_RECORD_VIDEO/KEEP, which may confuse readers in this video-focused doc.

Suggested doc tweak
-Or ping-pong only:
+Or ping-pong only (with recording enabled):
 
 ```bash
-PIKA_IOS_E2E_TEST_METHOD=testE2E_deployedRustBot_pingPong just ios-ui-e2e-local
+PIKA_UI_E2E_RECORD_VIDEO=1 KEEP=1 \
+  PIKA_IOS_E2E_TEST_METHOD=testE2E_deployedRustBot_pingPong \
+  just ios-ui-e2e-local

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @docs/how-to-run-e2e-ui-tests-with-video.md around lines 56 - 60, The
ping-pong example omits the recording environment variables, causing
inconsistency with the video workflow; update the example so it sets
PIKA_UI_E2E_RECORD_VIDEO=1 and KEEP=1 along with
PIKA_IOS_E2E_TEST_METHOD=testE2E_deployedRustBot_pingPong and then runs just
ios-ui-e2e-local (split across lines if desired), ensuring the example mirrors
the recording workflow used elsewhere in the doc.


ios/UITests/PikaUITests.swift (1)

759-833: Extract shared E2E bot-chat bootstrap into a helper.

This block duplicates the launch/login/open-chat flow used by the other E2E tests in this file; centralizing it reduces drift and lets flake fixes land in one place.

♻️ Refactor sketch

-    func testE2E_multiImageGrid() throws {
-        let env = ProcessInfo.processInfo.environment
-        let dotenv = loadDotenv()
-        ...
-        let composer = msgField.exists ? msgField : msgFieldFallback
-        XCTAssertTrue(composer.waitForExistence(timeout: 10))
+    func testE2E_multiImageGrid() throws {
+        let (app, composer) = try launchE2EAndOpenBotChat()
private func launchE2EAndOpenBotChat() throws -> (XCUIApplication, XCUIElement) {
    // shared env resolution + launch + login + openNewChatFromChatList + wait for composer
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ios/UITests/PikaUITests.swift` around lines 759 - 833, Extract the duplicated
launch/login/open-chat flow inside testE2E_multiImageGrid into a new helper
method (e.g., private func launchE2EAndOpenBotChat() throws -> (XCUIApplication,
XCUIElement)) that performs the env resolution (PIKA_UI_E2E_BOT_NPUB,
PIKA_UI_E2E_NSEC/PIKA_TEST_NSEC, PIKA_UI_E2E_RELAYS, PIKA_UI_E2E_KP_RELAYS),
sets app.launchEnvironment, launches the app, runs the login steps (using
identifiers login_create_account, login_nsec_input, login_submit), calls
openNewChatFromChatList(_:timeout:), fills the peer (newchat_peer_npub) and taps
newchat_start, then waits for the composer (chat_message_input via
textViews/textFields) while using dismissPikaToastIfPresent(_:timeout:) to
detect fatal errors; return the XCUIApplication and the composer element (or
throw/XCTFail on missing env or fatal toast). Replace the duplicated block in
testE2E_multiImageGrid with a call to launchE2EAndOpenBotChat and use its
returned app/composer.
android/app/src/main/java/com/pika/app/ui/screens/ChatScreen.kt (1)

914-926: Consider adding contrast background to remove button.

The white close icon (tint = Color.White) may have poor visibility on light-colored image thumbnails. Adding a semi-transparent dark background to the IconButton would improve tap-target visibility.

💡 Optional: Add contrast background
                                        IconButton(
                                            onClick = { stagedMedia = stagedMedia.filter { it.id != item.id } },
                                            modifier = Modifier
                                                .align(Alignment.TopEnd)
-                                               .size(20.dp),
+                                               .size(20.dp)
+                                               .background(
+                                                   Color.Black.copy(alpha = 0.5f),
+                                                   RoundedCornerShape(10.dp)
+                                               ),
                                        ) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@android/app/src/main/java/com/pika/app/ui/screens/ChatScreen.kt` around lines
914 - 926, The close IconButton over media thumbnails (IconButton / Icon using
tint = Color.White) can be hard to see on light images; update the IconButton's
Modifier to add a semi-transparent dark background and rounded shape (e.g.,
background(color = Color.Black.copy(alpha = 0.5f), shape = CircleShape) and clip
if needed) while preserving the .size(20.dp) and .align(Alignment.TopEnd) so the
white Icon remains visible and tap target stays correct; ensure the background
is applied before sizing/clipping and keep the Icon size at ~16.dp and tint =
Color.White.
cli/src/harness.rs (2)

1410-1422: Parser constraint and fixture comment are out of sync.

Line 1415 rejects values > 8, but Line 1421 says filenames are cycled for counts over 8. Please align the docs/behavior to avoid misleading future test authors.

📝 Suggested doc-only fix
-/// Fixture image filenames in `fixtures/test-images/`, cycled for counts > 8.
+/// Fixture image filenames in `fixtures/test-images/` for `media_batch:1..=8`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cli/src/harness.rs` around lines 1410 - 1422, The parser
parse_media_batch_probe currently enforces a max of 8 but the comment above
TEST_IMAGE_NAMES claims filenames are cycled for counts > 8, causing a mismatch;
either remove/adjust the parser's upper bound or update the comment to reflect
the true behavior. Locate parse_media_batch_probe and TEST_IMAGE_NAMES and
choose one of two fixes: (A) if you want to support counts > 8, remove or raise
the upper-bound check in parse_media_batch_probe so it returns counts beyond 8
and rely on TEST_IMAGE_NAMES cycling logic, or (B) if the intent is to limit to
8, change the comment near TEST_IMAGE_NAMES to state that counts are restricted
to 1..=8 (no cycling). Make the chosen change only to the parser or the comment
so both code and docs stay consistent.
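
If option (A) cycling is the intent, the lookup itself is a one-line modulo index. A sketch, using the eight fixture filenames listed under `fixtures/test-images/` in this PR (the ordering of the real `TEST_IMAGE_NAMES` constant in `harness.rs` may differ):

```rust
/// Fixture image filenames in `fixtures/test-images/` (order here is
/// illustrative; the real `TEST_IMAGE_NAMES` constant may differ).
const TEST_IMAGE_NAMES: [&str; 8] = [
    "blue.jpg", "cyan.jpg", "green.jpg", "magenta.jpg",
    "orange.jpg", "purple.jpg", "red.jpg", "yellow.jpg",
];

/// Cycle through the fixture pool so `media_batch:<count>` works for any
/// count: index 8 wraps back to the first image.
fn fixture_for(index: usize) -> &'static str {
    TEST_IMAGE_NAMES[index % TEST_IMAGE_NAMES.len()]
}
```

With this in place, the upper-bound check in `parse_media_batch_probe` could simply be lifted.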

1483-1491: Add explicit timeout to prevent upload stalls in media-batch handling.

The upload_blob call at lines 1483-1491 lacks timeout protection, allowing a single slow or hung server response to block the entire probe flow. Since the codebase already uses tokio::time::timeout for similar async operations, wrapping this call with a bounded timeout (e.g., 20 seconds) ensures deterministic behavior under network degradation.

⏱️ Suggested timeout guard
-            let descriptor = match blossom
-                .upload_blob(
-                    encrypted_data.clone(),
-                    Some(upload.mime_type.clone()),
-                    None,
-                    Some(keys),
-                )
-                .await
+            let descriptor = match tokio::time::timeout(
+                Duration::from_secs(20),
+                blossom.upload_blob(
+                    encrypted_data.clone(),
+                    Some(upload.mime_type.clone()),
+                    None,
+                    Some(keys),
+                ),
+            )
+            .await
             {
-                Ok(d) => d,
-                Err(e) => {
+                Ok(Ok(d)) => d,
+                Ok(Err(e)) => {
                     last_error = Some(format!("{server}: {e}"));
                     continue;
                 }
+                Err(_) => {
+                    last_error = Some(format!("{server}: upload timeout"));
+                    continue;
+                }
             };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cli/src/harness.rs` around lines 1483 - 1491, Wrap the
blossom.upload_blob(...) await in a tokio::time::timeout (e.g., 20 seconds) to
prevent a stalled upload from blocking the probe flow; locate the call to
blossom.upload_blob (the encrypted_data.clone(), Some(upload.mime_type.clone()),
None, Some(keys) invocation) in harness.rs and replace the direct await with a
timeout guarded await, then handle the Timeout error case by returning or
propagating a suitable error/result (same error branch used for other upload
failures) so the caller can proceed instead of hanging.
ios/Sources/Views/ChatInputBar.swift (1)

214-215: Make preview @State vars private to satisfy lint.

These preview-only state fields can be private to align with the SwiftLint private_swiftui_state rule.

♻️ Suggested change
-    @State var selectedPhotoItems: [PhotosPickerItem] = []
-    @State var stagedMedia: [StagedMediaItem] = []
+    @State private var selectedPhotoItems: [PhotosPickerItem] = []
+    @State private var stagedMedia: [StagedMediaItem] = []
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ios/Sources/Views/ChatInputBar.swift` around lines 214 - 215, The two SwiftUI
preview state properties selectedPhotoItems and stagedMedia in ChatInputBar
(declared as `@State` var selectedPhotoItems: [PhotosPickerItem] = [] and `@State`
var stagedMedia: [StagedMediaItem] = []) should be made private to satisfy the
private_swiftui_state lint; update their declarations to use `@State` private var
for both properties so they remain internal to the view while complying with
SwiftLint.
crates/pikachat-sidecar/src/daemon.rs (1)

3123-3275: Consider extracting a shared media-upload helper for single + batch paths.

The batch branch repeats most of the SendMedia pipeline (read/validate/encrypt/upload/hash-check/imeta), which increases drift risk.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/pikachat-sidecar/src/daemon.rs` around lines 3123 - 3275, The
SendMediaBatch branch duplicates the
read/validate/encrypt/upload/hash-check/imeta pipeline used by the single-file
path; extract a shared helper (e.g., a new async fn like
process_and_upload_media or upload_media_item) that encapsulates the logic
currently inside the for-loop (reading file, size checks, mime/filename
resolution, manager.encrypt_for_upload_with_options, uploading via
BlossomClient.upload_blob, hash verification, and manager.create_imeta_tag) and
returns (Tag, original_hash, uploaded_url) or a descriptive error; then call
that helper from both InCmd::SendMediaBatch and the single-file handler so both
use the same logic (reference symbols: InCmd::SendMediaBatch,
manager.encrypt_for_upload_with_options, BlossomClient::upload_blob,
descriptor.sha256, manager.create_imeta_tag, sign_and_publish).
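
A rough, synchronous shape for such a helper. Everything here is illustrative rather than the sidecar's actual API: `Uploader` stands in for the async `BlossomClient`, and the encryption, hash-verification, and imeta steps are left as comments.

```rust
use std::fs;

/// Stand-in for BlossomClient in this sketch; the real client is async and
/// also takes a mime type and signing keys.
trait Uploader {
    fn upload(&self, data: &[u8]) -> Result<String, String>;
}

/// Sketch of the proposed shared helper: one place for the per-file
/// read/validate/upload steps the single-file and batch handlers currently
/// duplicate. Returns the uploaded URL and the original byte length.
fn process_and_upload_media(
    file_path: &str,
    max_bytes: usize,
    uploader: &dyn Uploader,
) -> Result<(String, usize), String> {
    let data = fs::read(file_path).map_err(|e| format!("read {file_path}: {e}"))?;
    if data.is_empty() {
        return Err(format!("{file_path}: empty file"));
    }
    if data.len() > max_bytes {
        return Err(format!("{file_path}: {} bytes exceeds limit", data.len()));
    }
    // real code: encrypt_for_upload_with_options, then upload the ciphertext
    let url = uploader.upload(&data)?;
    // real code: verify descriptor.sha256 against the encrypted payload,
    // then build and return the imeta Tag
    Ok((url, data.len()))
}
```

Both `InCmd::SendMedia` and `InCmd::SendMediaBatch` would then call this and differ only in how they aggregate results before `sign_and_publish`.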
crates/pika-share/src/lib.rs (1)

751-769: Add a batch byte budget before full reads/base64 encoding.

Line 758 and Line 768 load+encode every file into memory in one dequeue path. A large batch can create avoidable memory spikes.

📏 Proposed guardrails
+const MAX_BATCH_ITEM_BYTES: u64 = 20 * 1024 * 1024;
+const MAX_BATCH_TOTAL_BYTES: u64 = 64 * 1024 * 1024;
...
         SharePayloadKind::ImageBatch => {
             let batch = item.media_batch.as_ref().ok_or_else(|| {
                 ShareError::InvalidRequest("missing media_batch for ImageBatch".to_string())
             })?;
             let mut dispatch_items = Vec::with_capacity(batch.len());
+            let mut total_bytes = 0u64;
             for entry in batch {
                 let abs = resolve_relative_path(&layout.root_dir, &entry.relative_path)?;
+                let file_len = fs::metadata(&abs)
+                    .map_err(|err| ShareError::Io(err.to_string()))?
+                    .len();
+                if file_len > MAX_BATCH_ITEM_BYTES {
+                    return Err(ShareError::InvalidRequest(format!(
+                        "media batch file too large: {}",
+                        entry.relative_path
+                    )));
+                }
+                total_bytes = total_bytes.saturating_add(file_len);
+                if total_bytes > MAX_BATCH_TOTAL_BYTES {
+                    return Err(ShareError::InvalidRequest(
+                        "media batch total size too large".to_string(),
+                    ));
+                }
                 let data = fs::read(&abs).map_err(|err| ShareError::Io(err.to_string()))?;
                 if data.is_empty() {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/pika-share/src/lib.rs` around lines 751 - 769, The ImageBatch path
currently reads and base64-encodes every file into memory which can spike RAM;
before reading files in the SharePayloadKind::ImageBatch branch, enforce a batch
byte budget: define a BATCH_BYTE_BUDGET constant, iterate entries and call
fs::metadata(resolve_relative_path(...)) to sum file sizes, return
ShareError::InvalidRequest if the total exceeds the budget, and only then
proceed to read/encode each file into ShareMediaBatchDispatchItem (using
resolve_relative_path and BASE64_STANDARD.encode as before); this prevents
loading arbitrarily large batches into memory.
ios/Sources/Views/BlurhashView.swift (1)

9-21: Decode blurhash once per hash instead of on every body recomposition.

The current decode runs synchronously in body, so frequent SwiftUI updates can repeatedly redo image reconstruction.

♻️ Proposed refactor
 struct BlurhashView: View {
     let hash: String
     let size: CGSize
+    @State private var decodedImage: UIImage?
 
     var body: some View {
-        if let image = UIImage(blurHash: hash, size: CGSize(width: 32, height: 32)) {
-            Image(uiImage: image)
-                .resizable()
-                .scaledToFill()
-                .frame(width: size.width, height: size.height)
-                .clipped()
-        } else {
-            Rectangle()
-                .fill(Color.gray.opacity(0.2))
-                .frame(width: size.width, height: size.height)
-        }
+        Group {
+            if let image = decodedImage {
+                Image(uiImage: image)
+                    .resizable()
+                    .scaledToFill()
+                    .frame(width: size.width, height: size.height)
+                    .clipped()
+            } else {
+                Rectangle()
+                    .fill(Color.gray.opacity(0.2))
+                    .frame(width: size.width, height: size.height)
+            }
+        }
+        .task(id: hash) {
+            decodedImage = await Task.detached(priority: .utility) {
+                UIImage(blurHash: hash, size: CGSize(width: 32, height: 32))
+            }.value
+        }
     }
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ios/Sources/Views/BlurhashView.swift` around lines 9 - 21, Blurhash decoding
is happening synchronously inside BlurhashView.body on every recomposition;
change it to decode once per hash by adding a `@State` private var (e.g.,
decodedImage: UIImage?) to BlurhashView and move the UIImage(blurHash:hash,
size:) call into a lifecycle handler (use .task or .onAppear plus .onChange(of:
hash)) so you set decodedImage only when the view appears or when hash changes,
and have body render Image(uiImage: decodedImage) or the placeholder when
decodedImage is nil.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@android/app/src/main/java/com/pika/app/ui/screens/ChatScreen.kt`:
- Around line 1386-1399: The dark overlay Box applies
.clip(RoundedCornerShape(8.dp)) after .background(...), so the background
remains rectangular; swap the order of the Modifier calls on the Box so
.clip(RoundedCornerShape(8.dp)) comes before .background(Color.Black.copy(alpha
= 0.5f)) (the Box with Modifier.matchParentSize() and contentAlignment in
ChatScreen.kt), ensuring the rounded corners affect the overlay behind the
Text("+$remaining").

In `@crates/pika-share/src/lib.rs`:
- Around line 221-230: The batch validation currently checks only that resolved
paths (via resolve_relative_path called on layout.root_dir) exist as files,
which allows files outside the queue media area to be accepted; update the
normalized.media_batch validation loop to additionally enforce that each
resolved absolute path is located under the queue media directory (e.g., the
canonicalized path starts_with the canonicalized layout.queue_media_dir or
layout.root_dir.join("queue/media") as appropriate) before accepting it; use
canonicalize() comparisons to avoid symlink escapes and mirror the same
containment check used later by the cleanup/deletion routine so cleanup cannot
remove non-media files.

In `@ios/Sources/Views/BlurhashView.swift`:
- Around line 106-110: Guard against non-finite or non-positive decode
dimensions before allocating the pixel buffer in BlurhashView: check that
size.width and size.height are finite and > 0 (e.g., within the function/method
that computes width/height where `let width = Int(size.width)` and `let height =
Int(size.height)` are set) and return early or handle the failure (skip
rendering/emit nil) if the check fails, so you avoid creating `pixels =
[UInt8](repeating: 0, count: width * height * 4)` with invalid counts and
prevent downstream CGImage creation errors.

In `@ios/UITests/PikaUITests.swift`:
- Around line 846-851: The current assertion uses
mediaGrid.waitForExistence(timeout:) on a global matcher which can match
earlier/offscreen elements; change the check to first wait for existence then
assert on visibility/hittability to avoid false positives: call
mediaGrid.waitForExistence(timeout: 30) and then assert mediaGrid.exists &&
mediaGrid.isHittable (or otherwise verify its frame intersects the app window)
so the test ensures the "chat_media_grid" is actually on-screen and interactable
rather than just present in the view hierarchy.

In `@rust/src/core/chat_media.rs`:
- Around line 874-895: When copying the preview to the final hash path (using
write_media_file on final_local_path) fails, the code still sets att.local_path
to path_if_exists(&final_local_path) causing a lost local path; change the logic
in the block that updates the outbox (local_outbox.get_mut(&chat_id) ->
entry.media.get_mut(i)) so that att.original_hash_hex is updated as before but
att.local_path is set only to the final path when write_media_file succeeds,
otherwise keep the existing preview path (pp.local_path) — i.e., determine a
new_local_path = path_if_exists(&final_local_path) if write succeeded else
path_if_exists(&pp.local_path) and assign that to att.local_path, and only call
std::fs::remove_file(&pp.local_path) when the write actually succeeded.
- Around line 1257-1262: The code currently does an early return when
self.session or sess.local_keys are missing mid-batch, which can leave the batch
stuck in pending/uploading; instead of returning, detect the missing
self.session or sess.local_keys (the checks around self.session.as_ref() and
sess.local_keys.clone()), log/propagate an error, and transition the batch to a
terminal failed/error state (e.g., set the batch status to Failed or call the
existing batch error/cleanup handler) for the current batch before returning or
breaking out so the system doesn't leave the batch stranded.

In `@rust/src/core/mod.rs`:
- Around line 7560-7576: Tests batch_rejects_empty_items and
batch_rejects_over_32_items_when_logged_in_would_reject are exercising auth
gating rather than batch validation because they call send_chat_media_batch
while logged out; authenticate the test Core before invoking
send_chat_media_batch (for example call the login method on the Core or set the
authenticated user on make_core’s returned object) so the call reaches the batch
validation paths, then assert the expected validation toast messages
(referencing send_chat_media_batch, make_core, make_item, and core.state.toast).
Alternatively, if you prefer unit-testing validation in isolation, call the
internal batch validation function directly (if present) instead of
send_chat_media_batch.
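
The containment check suggested for `crates/pika-share/src/lib.rs` above can be sketched in isolation. `resolve_contained` and its error handling are hypothetical, but the canonicalize-then-`starts_with` pattern is the standard way to block both `../` traversal and symlink escapes:

```rust
use std::fs;
use std::io;
use std::path::{Path, PathBuf};

/// Hypothetical helper: resolve `relative` under `media_root` and reject any
/// path that escapes it. Canonicalizing both sides collapses `..` segments
/// and follows symlinks before the prefix comparison, so a symlink pointing
/// outside the media directory is rejected too.
fn resolve_contained(media_root: &Path, relative: &str) -> io::Result<PathBuf> {
    let root = fs::canonicalize(media_root)?;
    let candidate = fs::canonicalize(media_root.join(relative))?;
    if candidate.starts_with(&root) {
        Ok(candidate)
    } else {
        Err(io::Error::new(
            io::ErrorKind::PermissionDenied,
            "path escapes media directory",
        ))
    }
}
```

Using the same helper in the cleanup/deletion routine keeps the two containment checks from drifting apart.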

---

Outside diff comments:
In `@crates/pika-share/src/lib.rs`:
- Around line 690-699: The NormalizedRequest is storing request.media_batch raw
which can contain untrimmed entries even though validation trimmed
media_relative_path; update the construction of NormalizedRequest (where
media_batch: request.media_batch is set) to map over request.media_batch and
trim each string (and optionally filter out empty strings) before assigning,
e.g. replace media_batch: request.media_batch with a trimmed collection (e.g.,
request.media_batch.iter().map(|s| s.trim().to_string()).filter(|s|
!s.is_empty()).collect()) so persisted media_batch entries match the validated
media_relative_path semantics.

---

Duplicate comments:
In @.claude/skills/pr-shepherd/SKILL.md:
- Around line 15-17: Add missing language identifiers to the two fenced code
blocks: the block containing "Pre-flight (once)  →  Main Loop (repeat)  →  Final
Validation  →  Exit Report" and the block starting with
"Agent(subagent_type="general-purpose", prompt="Run /pr-review $PR — ...")";
change the opening triple-backtick of each to include a language tag (e.g.,
"text") so they become ```text and leave the content and closing backticks
unchanged.

In `@crates/pikachat-sidecar/src/daemon.rs`:
- Around line 3157-3165: The loop is using blocking std::fs::read for each
file_path (inside the SendMediaBatch handling), which blocks Tokio workers under
load; replace the blocking call with an async read (e.g. tokio::fs::read or
opening tokio::fs::File and read_to_end) and await it so the loop does not block
the runtime; keep the same error handling using
reply_tx.send(out_error(request_id.clone(), "file_error", format!("read
{file_path}: {e}"))).ok() and return Err(()) on failure, and update any
enclosing function/closure to be async if necessary (references: file_paths,
request_id, reply_tx, out_error, the SendMediaBatch/InCmd::SendMediaBatch
handling).

In `@ios/Sources/AppManager.swift`:
- Around line 546-571: In the .mediaBatch handling in AppManager.swift, don't
call ShareQueueManager.acknowledge(... status: .acceptedByCore) before
validating the batch locally and before core acceptance is known; instead,
validate each item (fields like dataBase64, mimeType, filename) from items ->
MediaBatchItem and only proceed to dispatch(.sendChatMediaBatch(...)) and/or
call ShareQueueManager.acknowledge with .acceptedByCore after the local
validation succeeds (or better, after receiving core confirmation); if
validation fails, acknowledge with an appropriate failure status/errorMessage
and skip dispatch. Ensure you update the code around the .mediaBatch case,
MediaBatchItem creation, dispatch(.sendChatMediaBatch), and
ShareQueueManager.acknowledge to reflect this ordering.

In `@ios/Sources/Views/MessageBubbleViews.swift`:
- Around line 532-534: The timestamp/background grouping uses a different
predicate than rendering: replace the discrepancy by making hasFileAttachment
use the same non-visual grouping as fileMedia (i.e., consider any media that is
not image/video) or by changing the timestamp/background checks to call the
existing fileMedia boolean; update the computed property hasFileAttachment (or
all call sites that currently reference hasFileAttachment) to use
message.media.filter / contains logic that matches fileMedia's criteria so
timestamp and background behavior matches rendering for MessageBubbleViews.

In `@rust/src/core/chat_media.rs`:
- Around line 1182-1187: The early returns after unpacking let
Some(uploaded_url) and let Some(descriptor_sha256_hex) leave the related batch
in pending state; before returning, call the batch/failure handling code to mark
the batch as failed and clear any in-progress flags (e.g. invoke the existing
batch failure/update functions used elsewhere—mark_batch_failed(batch_id,
reason) or clear_upload_progress(message_id) — or update the batch/message
status to Failed and remove progress) so the message won't stay pending; update
the branches where uploaded_url and descriptor_sha256_hex are None to perform
that failure/cleanup using the surrounding batch/message identifiers and then
return.

---

Nitpick comments:
In `@android/app/src/main/java/com/pika/app/ui/screens/ChatScreen.kt`:
- Around line 914-926: The close IconButton over media thumbnails (IconButton /
Icon using tint = Color.White) can be hard to see on light images; update the
IconButton's Modifier to add a semi-transparent dark background and rounded
shape (e.g., background(color = Color.Black.copy(alpha = 0.5f), shape =
CircleShape) and clip if needed) while preserving the .size(20.dp) and
.align(Alignment.TopEnd) so the white Icon remains visible and tap target stays
correct; ensure the background is applied before sizing/clipping and keep the
Icon size at ~16.dp and tint = Color.White.

In `@cli/src/harness.rs`:
- Around line 1410-1422: The parser parse_media_batch_probe currently enforces a
max of 8 but the comment above TEST_IMAGE_NAMES claims filenames are cycled for
counts > 8, causing a mismatch; either remove/adjust the parser's upper bound or
update the comment to reflect the true behavior. Locate parse_media_batch_probe
and TEST_IMAGE_NAMES and choose one of two fixes: (A) if you want to support
counts > 8, remove or raise the upper-bound check in parse_media_batch_probe so
it returns counts beyond 8 and rely on TEST_IMAGE_NAMES cycling logic, or (B) if
the intent is to limit to 8, change the comment near TEST_IMAGE_NAMES to state
that counts are restricted to 1..=8 (no cycling). Make the chosen change only to
the parser or the comment so both code and docs stay consistent.
- Around line 1483-1491: Wrap the blossom.upload_blob(...) await in a
tokio::time::timeout (e.g., 20 seconds) to prevent a stalled upload from
blocking the probe flow; locate the call to blossom.upload_blob (the
encrypted_data.clone(), Some(upload.mime_type.clone()), None, Some(keys)
invocation) in harness.rs and replace the direct await with a timeout guarded
await, then handle the Timeout error case by returning or propagating a suitable
error/result (same error branch used for other upload failures) so the caller
can proceed instead of hanging.

In `@crates/pika-share/src/lib.rs`:
- Around line 751-769: The ImageBatch path currently reads and base64-encodes
every file into memory which can spike RAM; before reading files in the
SharePayloadKind::ImageBatch branch, enforce a batch byte budget: define a
BATCH_BYTE_BUDGET constant, iterate entries and call
fs::metadata(resolve_relative_path(...)) to sum file sizes, return
ShareError::InvalidRequest if the total exceeds the budget, and only then
proceed to read/encode each file into ShareMediaBatchDispatchItem (using
resolve_relative_path and BASE64_STANDARD.encode as before); this prevents
loading arbitrarily large batches into memory.

In `@crates/pikachat-sidecar/src/daemon.rs`:
- Around line 3123-3275: The SendMediaBatch branch duplicates the
read/validate/encrypt/upload/hash-check/imeta pipeline used by the single-file
path; extract a shared helper (e.g., a new async fn like
process_and_upload_media or upload_media_item) that encapsulates the logic
currently inside the for-loop (reading file, size checks, mime/filename
resolution, manager.encrypt_for_upload_with_options, uploading via
BlossomClient.upload_blob, hash verification, and manager.create_imeta_tag) and
returns (Tag, original_hash, uploaded_url) or a descriptive error; then call
that helper from both InCmd::SendMediaBatch and the single-file handler so both
use the same logic (reference symbols: InCmd::SendMediaBatch,
manager.encrypt_for_upload_with_options, BlossomClient::upload_blob,
descriptor.sha256, manager.create_imeta_tag, sign_and_publish).

In `@docs/how-to-run-e2e-ui-tests-with-video.md`:
- Around line 56-60: The ping-pong example omits the recording environment
variables, causing inconsistency with the video workflow; update the example so
it sets PIKA_UI_E2E_RECORD_VIDEO=1 and KEEP=1 along with
PIKA_IOS_E2E_TEST_METHOD=testE2E_deployedRustBot_pingPong and then runs just
ios-ui-e2e-local (split across lines if desired), ensuring the example mirrors
the recording workflow used elsewhere in the doc.

In `@ios/Sources/Views/BlurhashView.swift`:
- Around line 9-21: Blurhash decoding is happening synchronously inside
BlurhashView.body on every recomposition; change it to decode once per hash by
adding a `@State` private var (e.g., decodedImage: UIImage?) to BlurhashView and
move the UIImage(blurHash:hash, size:) call into a lifecycle handler (use .task
or .onAppear plus .onChange(of: hash)) so you set decodedImage only when the
view appears or when hash changes, and have body render Image(uiImage:
decodedImage) or the placeholder when decodedImage is nil.

In `@ios/Sources/Views/ChatInputBar.swift`:
- Around line 214-215: The two SwiftUI preview state properties
selectedPhotoItems and stagedMedia in ChatInputBar (declared as `@State` var
selectedPhotoItems: [PhotosPickerItem] = [] and `@State` var stagedMedia:
[StagedMediaItem] = []) should be made private to satisfy the
private_swiftui_state lint; update their declarations to use `@State` private var
for both properties so they remain internal to the view while complying with
SwiftLint.

In `@ios/UITests/PikaUITests.swift`:
- Around line 759-833: Extract the duplicated launch/login/open-chat flow inside
testE2E_multiImageGrid into a new helper method (e.g., private func
launchE2EAndOpenBotChat() throws -> (XCUIApplication, XCUIElement)) that
performs the env resolution (PIKA_UI_E2E_BOT_NPUB,
PIKA_UI_E2E_NSEC/PIKA_TEST_NSEC, PIKA_UI_E2E_RELAYS, PIKA_UI_E2E_KP_RELAYS),
sets app.launchEnvironment, launches the app, runs the login steps (using
identifiers login_create_account, login_nsec_input, login_submit), calls
openNewChatFromChatList(_:timeout:), fills the peer (newchat_peer_npub) and taps
newchat_start, then waits for the composer (chat_message_input via
textViews/textFields) while using dismissPikaToastIfPresent(_:timeout:) to
detect fatal errors; return the XCUIApplication and the composer element (or
throw/XCTFail on missing env or fatal toast). Replace the duplicated block in
testE2E_multiImageGrid with a call to launchE2EAndOpenBotChat and use its
returned app/composer.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: afffb5a2-4e9d-4661-a34d-5bfe54b47f9f

📥 Commits

Reviewing files that changed from the base of the PR and between bb6efa1 and d39e9d5.

⛔ Files ignored due to path filters (9)
  • Cargo.lock is excluded by !**/*.lock
  • fixtures/test-images/blue.jpg is excluded by !**/*.jpg
  • fixtures/test-images/cyan.jpg is excluded by !**/*.jpg
  • fixtures/test-images/green.jpg is excluded by !**/*.jpg
  • fixtures/test-images/magenta.jpg is excluded by !**/*.jpg
  • fixtures/test-images/orange.jpg is excluded by !**/*.jpg
  • fixtures/test-images/purple.jpg is excluded by !**/*.jpg
  • fixtures/test-images/red.jpg is excluded by !**/*.jpg
  • fixtures/test-images/yellow.jpg is excluded by !**/*.jpg
📒 Files selected for processing (27)
  • .claude/skills/e2e-video/SKILL.md
  • .claude/skills/pr-shepherd/SKILL.md
  • android/app/src/main/java/com/pika/app/AppManager.kt
  • android/app/src/main/java/com/pika/app/rust/pika_core.kt
  • android/app/src/main/java/com/pika/app/rust/share/pika_share.kt
  • android/app/src/main/java/com/pika/app/ui/screens/ChatScreen.kt
  • cli/src/harness.rs
  • crates/pika-share/src/lib.rs
  • crates/pikachat-sidecar/src/daemon.rs
  • crates/pikahut/src/testing/scenarios/deterministic.rs
  • docs/how-to-run-e2e-ui-tests-with-video.md
  • ios/ShareExtension/Info.plist
  • ios/ShareExtension/ShareViewController.swift
  • ios/Sources/AppManager.swift
  • ios/Sources/ContentView.swift
  • ios/Sources/PreviewData.swift
  • ios/Sources/TestIds.swift
  • ios/Sources/Views/BlurhashView.swift
  • ios/Sources/Views/ChatInputBar.swift
  • ios/Sources/Views/ChatView.swift
  • ios/Sources/Views/MessageBubbleViews.swift
  • ios/UITests/PikaUITests.swift
  • rust/Cargo.toml
  • rust/src/actions.rs
  • rust/src/core/chat_media.rs
  • rust/src/core/mod.rs
  • rust/src/state.rs
🚧 Files skipped from review as they are similar to previous changes (5)
  • rust/Cargo.toml
  • rust/src/actions.rs
  • rust/src/state.rs
  • android/app/src/main/java/com/pika/app/AppManager.kt
  • .claude/skills/e2e-video/SKILL.md

Comment on lines +874 to +895
if original_hash_hex != pp.pre_hash_hex {
    let final_local_path = media_file_path(
        &self.data_dir,
        &account_pubkey,
        &chat_id,
        &original_hash_hex,
        &pp.local_filename,
    );
    if final_local_path != pp.local_path {
        if let Err(e) = write_media_file(&final_local_path, &pp.media_data) {
            tracing::warn!(%e, "failed to copy local preview to final hash path");
        } else {
            let _ = std::fs::remove_file(&pp.local_path);
        }
    }
    // Update outbox attachment hash and local_path.
    if let Some(outbox) = self.local_outbox.get_mut(&chat_id) {
        if let Some(entry) = outbox.get_mut(&temp_rumor_id) {
            if let Some(att) = entry.media.get_mut(i) {
                att.original_hash_hex = original_hash_hex.clone();
                att.local_path = path_if_exists(&final_local_path);
            }

@coderabbitai coderabbitai bot Mar 5, 2026


⚠️ Potential issue | 🟠 Major

Local preview path can be lost when re-hash copy fails.

If writing final_local_path fails on Line 883, Line 894 still sets att.local_path from final_local_path, which becomes None even though pp.local_path still exists.

💡 Suggested fix
                 if original_hash_hex != pp.pre_hash_hex {
                     let final_local_path = media_file_path(
                         &self.data_dir,
                         &account_pubkey,
                         &chat_id,
                         &original_hash_hex,
                         &pp.local_filename,
                     );
+                    let effective_path = if final_local_path != pp.local_path {
+                        if let Err(e) = write_media_file(&final_local_path, &pp.media_data) {
+                            tracing::warn!(%e, "failed to copy local preview to final hash path");
+                            &pp.local_path
+                        } else {
+                            let _ = std::fs::remove_file(&pp.local_path);
+                            &final_local_path
+                        }
+                    } else {
+                        &final_local_path
+                    };
-                    if final_local_path != pp.local_path {
-                        if let Err(e) = write_media_file(&final_local_path, &pp.media_data) {
-                            tracing::warn!(%e, "failed to copy local preview to final hash path");
-                        } else {
-                            let _ = std::fs::remove_file(&pp.local_path);
-                        }
-                    }
                     // Update outbox attachment hash and local_path.
                     if let Some(outbox) = self.local_outbox.get_mut(&chat_id) {
                         if let Some(entry) = outbox.get_mut(&temp_rumor_id) {
                             if let Some(att) = entry.media.get_mut(i) {
                                 att.original_hash_hex = original_hash_hex.clone();
-                                att.local_path = path_if_exists(&final_local_path);
+                                att.local_path = path_if_exists(effective_path);
                             }
                         }
                     }
                 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@rust/src/core/chat_media.rs` around lines 874 - 895, When copying the preview
to the final hash path (using write_media_file on final_local_path) fails, the
code still sets att.local_path to path_if_exists(&final_local_path) causing a
lost local path; change the logic in the block that updates the outbox
(local_outbox.get_mut(&chat_id) -> entry.media.get_mut(i)) so that
att.original_hash_hex is updated as before but att.local_path is set only to the
final path when write_media_file succeeds, otherwise keep the existing preview
path (pp.local_path) — i.e., determine a new_local_path =
path_if_exists(&final_local_path) if write succeeded else
path_if_exists(&pp.local_path) and assign that to att.local_path, and only call
std::fs::remove_file(&pp.local_path) when the write actually succeeded.

Collaborator Author

Fair point — if the re-hash copy fails, path_if_exists(&final_local_path) would return None even though the original pp.local_path still exists. In practice this only affects the local preview cache (the upload uses encrypted_data not the local file), so the impact is cosmetic (the thumbnail disappears from the outbox entry until the message is published with the remote URL). Will track for a follow-up to fall back to pp.local_path when the copy fails.

— claude


@futurepaul, thanks for the thorough explanation! The impact analysis is spot-on — since the upload path exclusively uses the in-memory encrypted_data rather than reading from the local file path, the breakage is indeed limited to the thumbnail preview disappearing from the outbox bubble during the upload window. Once the batch completes and the message is published with the remote URL, the UI recovers naturally.

Would you like me to open a GitHub issue to track the follow-up fix (falling back to pp.local_path when the re-hash copy fails)?


✏️ Learnings added
Learnt from: futurepaul
Repo: sledtools/pika PR: 429
File: rust/src/core/chat_media.rs:874-895
Timestamp: 2026-03-05T00:37:03.771Z
Learning: In `rust/src/core/chat_media.rs`, within `send_chat_media_batch`, when the MDK re-encodes media and the local preview copy to `final_local_path` fails, the current code sets `att.local_path = path_if_exists(&final_local_path)` which returns `None` even though `pp.local_path` still exists. The impact is cosmetic only (thumbnail disappears from outbox entry temporarily) because the upload uses `encrypted_data` in memory, not the local file path. The fix is to fall back to `pp.local_path` when the copy fails. This is tracked as a known follow-up.

Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.


🧠 Learnings used
Learnt from: futurepaul
Repo: sledtools/pika PR: 429
File: rust/src/core/mod.rs:4865-4890
Timestamp: 2026-03-04T23:25:22.685Z
Learning: In rust/src/core/mod.rs, within AppCore::handle_action for AppAction::RetryMessage, avoid defaulting to index 0 when locating the first un-uploaded batch item. Use explicit None handling so that when all items are already uploaded, the code skips the batch re-upload path and falls through to the general publish-retry (pending_sends) logic.

Comment on lines +1257 to +1262
let Some(sess) = self.session.as_ref() else {
    return;
};
let Some(local_keys) = sess.local_keys.clone() else {
    return;
};

@coderabbitai coderabbitai bot Mar 5, 2026


⚠️ Potential issue | 🟠 Major

Do not silently return when session keys are unavailable mid-batch.

Line 1257 and Line 1260 return without transitioning state, which can strand the batch in pending/uploading after partial completion.

💡 Suggested fix
                 let Some(sess) = self.session.as_ref() else {
-                    return;
+                    self.delivery_overrides
+                        .entry(chat_id.clone())
+                        .or_default()
+                        .insert(
+                            temp_rumor_id.clone(),
+                            MessageDeliveryState::Failed {
+                                reason: "Upload failed: session unavailable".to_string(),
+                            },
+                        );
+                    if let Some(outbox) = self.local_outbox.get_mut(&chat_id) {
+                        if let Some(entry) = outbox.get_mut(&temp_rumor_id) {
+                            for att in &mut entry.media {
+                                att.upload_progress = None;
+                            }
+                        }
+                    }
+                    self.refresh_current_chat_if_open(&chat_id);
+                    self.refresh_chat_list_from_storage();
+                    return;
                 };
                 let Some(local_keys) = sess.local_keys.clone() else {
-                    return;
+                    self.delivery_overrides
+                        .entry(chat_id.clone())
+                        .or_default()
+                        .insert(
+                            temp_rumor_id.clone(),
+                            MessageDeliveryState::Failed {
+                                reason: "Upload failed: local signer unavailable".to_string(),
+                            },
+                        );
+                    if let Some(outbox) = self.local_outbox.get_mut(&chat_id) {
+                        if let Some(entry) = outbox.get_mut(&temp_rumor_id) {
+                            for att in &mut entry.media {
+                                att.upload_progress = None;
+                            }
+                        }
+                    }
+                    self.refresh_current_chat_if_open(&chat_id);
+                    self.refresh_chat_list_from_storage();
+                    return;
                 };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@rust/src/core/chat_media.rs` around lines 1257 - 1262, The code currently
does an early return when self.session or sess.local_keys are missing mid-batch,
which can leave the batch stuck in pending/uploading; instead of returning,
detect the missing self.session or sess.local_keys (the checks around
self.session.as_ref() and sess.local_keys.clone()), log/propagate an error, and
transition the batch to a terminal failed/error state (e.g., set the batch
status to Failed or call the existing batch error/cleanup handler) for the
current batch before returning or breaking out so the system doesn't leave the
batch stranded.

Collaborator Author


Valid concern — the silent return when session/keys are unavailable mid-batch can strand the batch. This is the same class of issue as the missing-URL/hash early return that was already acknowledged in the earlier Devin thread. The proper fix (marking the batch as Failed and cleaning up) requires touching the error recovery path which is shared with single-item uploads. Tracking for a follow-up PR alongside the other batch error-handling hardening.

— claude


@futurepaul, understood — deferring to a follow-up PR alongside the broader batch error-handling hardening makes sense. Would you like me to open a GitHub issue to track this? I can capture both the session-unavailable and local-keys-unavailable silent returns in handle_batch_upload_completed, as well as the missing-URL/hash early returns, so they're all addressed together in the follow-up.


✏️ Learnings added
Learnt from: futurepaul
Repo: sledtools/pika PR: 429
File: rust/src/core/chat_media.rs:1257-1262
Timestamp: 2026-03-05T00:37:01.166Z
Learning: In `rust/src/core/chat_media.rs`, within `handle_batch_upload_completed`, the early returns when `self.session` is unavailable (around the `let Some(sess) = self.session.as_ref() else { return; }` check) and when `sess.local_keys` is unavailable do not transition the batch to a Failed state, which can strand the batch in pending/uploading. The fix — marking the batch as Failed and cleaning up outbox/delivery_overrides — is intentionally deferred to a follow-up PR alongside broader batch error-handling hardening (same class as the missing-URL/hash early returns in the same function).

Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.


🧠 Learnings used
Learnt from: futurepaul
Repo: sledtools/pika PR: 429
File: rust/src/core/mod.rs:4865-4890
Timestamp: 2026-03-04T23:25:22.685Z
Learning: In rust/src/core/mod.rs, within AppCore::handle_action for AppAction::RetryMessage, avoid defaulting to index 0 when locating the first un-uploaded batch item. Use explicit None handling so that when all items are already uploaded, the code skips the batch re-upload path and falls through to the general publish-retry (pending_sends) logic.

Comment on lines +7560 to +7576
fn batch_rejects_empty_items() {
    let tempdir = tempfile::tempdir().expect("tempdir");
    let data_dir = tempdir.path().to_string_lossy().into_owned();
    let mut core = make_core(data_dir);
    core.send_chat_media_batch("chat1".into(), vec![], "hi".into());
    assert_eq!(core.state.toast.as_deref(), Some("Please log in first"));
}

#[test]
fn batch_rejects_over_32_items_when_logged_in_would_reject() {
    let tempdir = tempfile::tempdir().expect("tempdir");
    let data_dir = tempdir.path().to_string_lossy().into_owned();
    let mut core = make_core(data_dir);
    let items: Vec<MediaBatchItem> = (0..33).map(|_| make_item(10)).collect();
    core.send_chat_media_batch("chat1".into(), items, "hi".into());
    assert_eq!(core.state.toast.as_deref(), Some("Please log in first"));
}

@coderabbitai coderabbitai bot Mar 5, 2026


⚠️ Potential issue | 🟡 Minor

These tests currently validate auth gating, not batch item validation.

batch_rejects_empty_items and batch_rejects_over_32_items_when_logged_in_would_reject both call send_chat_media_batch while logged out and only assert "Please log in first". They never exercise empty/size validation paths.

✅ Minimal direction
- fn batch_rejects_empty_items() {
+ fn batch_rejects_empty_items_when_logged_in() {
    ...
-   assert_eq!(core.state.toast.as_deref(), Some("Please log in first"));
+   // initialize logged-in session, then:
+   assert_eq!(core.state.toast.as_deref(), Some("Select at least one image"));
 }

- fn batch_rejects_over_32_items_when_logged_in_would_reject() {
+ fn batch_rejects_over_32_items_when_logged_in() {
    ...
-   assert_eq!(core.state.toast.as_deref(), Some("Please log in first"));
+   // initialize logged-in session, then:
+   assert_eq!(core.state.toast.as_deref(), Some("You can send up to 32 items at once"));
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@rust/src/core/mod.rs` around lines 7560 - 7576, Tests
batch_rejects_empty_items and
batch_rejects_over_32_items_when_logged_in_would_reject are exercising auth
gating rather than batch validation because they call send_chat_media_batch
while logged out; authenticate the test Core before invoking
send_chat_media_batch (for example call the login method on the Core or set the
authenticated user on make_core’s returned object) so the call reaches the batch
validation paths, then assert the expected validation toast messages
(referencing send_chat_media_batch, make_core, make_item, and core.state.toast).
Alternatively, if you prefer unit-testing validation in isolation, call the
internal batch validation function directly (if present) instead of
send_chat_media_batch.

Collaborator Author


You're right — both tests hit the Please log in first guard before reaching the empty/size validation. Fixing this properly requires either scaffolding a logged-in session in the test (which needs a mock MLS session, not trivial) or restructuring the validation order so size checks come first. The current tests still verify the auth gate works, which is useful. Tracking logged-in batch validation tests for a follow-up.

— claude


@futurepaul Got it — makes sense to keep these as auth-gate checks in this PR and track logged‑in batch validation separately.

If you want a tiny polish now so intent is obvious in CI logs, we can just retitle the two tests to reflect what they validate:

@@
-    #[test]
-    fn batch_rejects_empty_items() {
+    #[test]
+    fn batch_auth_gate_logged_out_rejects_empty_items() {
@@
-    #[test]
-    fn batch_rejects_over_32_items_when_logged_in_would_reject() {
+    #[test]
+    fn batch_auth_gate_logged_out_rejects_over_32_items() {

For the follow-up, two lightweight paths:

  • Pure validation unit tests (recommended)
    • Extract a small validator (e.g., validate_media_batch_items_len(len) -> Result<(), &'static str>) that enforces:
      • empty → "Select at least one image"
      • 32 → "You can send up to 32 items at once"

    • Call this at the top of send_chat_media_batch but also unit-test it directly (no MLS/MDK session needed).
  • Logged-in scaffold tests
    • Add a shared test helper to create a logged-in AppCore (similar to existing private helpers in other submodules) and assert the same error paths through send_chat_media_batch end-to-end.
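The recommended pure-validator path could look like the following sketch. The function name `validate_media_batch_items_len` and its placement are the reviewer's suggestion, not the repo's actual API; the error strings mirror the toasts quoted earlier:

```rust
/// Hypothetical pure validator for batch emptiness/size, per the follow-up
/// suggestion above. Needs no MLS/MDK session, so it can be unit-tested
/// directly. Name and messages are illustrative.
fn validate_media_batch_items_len(len: usize) -> Result<(), &'static str> {
    if len == 0 {
        return Err("Select at least one image");
    }
    if len > 32 {
        return Err("You can send up to 32 items at once");
    }
    Ok(())
}

fn main() {
    // Covers empty, >32, and the 1..=32 happy path without any session scaffolding.
    assert_eq!(
        validate_media_batch_items_len(0),
        Err("Select at least one image")
    );
    assert_eq!(
        validate_media_batch_items_len(33),
        Err("You can send up to 32 items at once")
    );
    assert!(validate_media_batch_items_len(1).is_ok());
    assert!(validate_media_batch_items_len(32).is_ok());
    println!("validator ok");
}
```

`send_chat_media_batch` would call this at the top, after the auth gate, so the existing logged-out tests stay meaningful while the validation logic gets direct coverage.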

Want me to open a GitHub issue to track the follow-up with acceptance criteria and assign it to you? I can do that and include:

  • Rename current tests (optional).
  • Add validator + unit tests OR shared logged-in helper + tests (your choice).
  • Cover both empty and >32 cases; include a happy-path sanity check (1..=32 items reach upload path).

✏️ Learnings added
Learnt from: futurepaul
Repo: sledtools/pika PR: 429
File: rust/src/core/mod.rs:7560-7576
Timestamp: 2026-03-05T00:38:48.699Z
Learning: In sledtools/pika, the current batch_media_tests intentionally exercise the auth gate while logged out; adding logged-in batch validation tests is deferred because standing up an MLS/MDK-backed session in tests is non-trivial. Preferred follow-up: extract a small pure validator for batch size/emptiness and unit-test it without a session.

Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.


🧠 Learnings used
Learnt from: futurepaul
Repo: sledtools/pika PR: 429
File: rust/src/core/mod.rs:4865-4890
Timestamp: 2026-03-04T23:25:22.685Z
Learning: In rust/src/core/mod.rs, within AppCore::handle_action for AppAction::RetryMessage, avoid defaulting to index 0 when locating the first un-uploaded batch item. Use explicit None handling so that when all items are already uploaded, the code skips the batch re-upload path and falls through to the general publish-retry (pending_sends) logic.

Learnt from: futurepaul
Repo: sledtools/pika PR: 429
File: android/app/src/main/java/com/pika/app/AppManager.kt:447-454
Timestamp: 2026-03-04T23:30:30.607Z
Learning: In sledtools/pika, `AppManager.kt` (package `com.pika.app`) uses types from `com.pika.app.rust` (UniFFI bindings in `pika_core.kt`) such as `MediaBatchItem`. When new FFI types appear in diff hunks without a visible import, do not flag a missing import as a compile error — the import may have been added elsewhere in the PR (outside the reviewed hunk) or already present. Always verify against the full file content or trust the author when they confirm the code compiles.

Adds `set dotenv-load` so all just recipes can read .env. When
PIKA_IOS_DEVELOPMENT_TEAM is set, ios-xcodeproj stamps it into the
generated Xcode project so all three targets get signing automatically.
No more manual team picker in Xcode for `just ios-appstore`.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Add language tags to fenced code blocks in SKILL.md (CodeRabbit MD040)
- Swap clip/background order on Android +N overlay for rounded corners (CodeRabbit)
- Guard invalid decode dimensions in BlurhashView before buffer alloc (CodeRabbit)
- Normalize media_batch entry fields (trim whitespace) in pika-share (Devin)
- Constrain batch file paths to media directory to prevent cleanup from
  deleting non-media queue files (CodeRabbit)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

@devin-ai-integration devin-ai-integration bot left a comment


Devin Review found 1 new potential issue.

View 24 additional findings in Devin Review.

Open in Devin Review

Comment on lines +852 to +866
&pp.local_filename,
&MediaProcessingOptions::default(),
) {
Ok(upload) => upload,
Err(e) => {
// Clean up outbox on encryption failure.
if let Some(outbox) = self.local_outbox.get_mut(&chat_id) {
outbox.remove(&temp_rumor_id);
}
if let Some(overrides) = self.delivery_overrides.get_mut(&chat_id) {
overrides.remove(&temp_rumor_id);
}
self.refresh_current_chat_if_open(&chat_id);
self.refresh_chat_list_from_storage();
self.toast(format!("Media encryption failed: {e}"));

🟡 Batch media send leaks local files on encryption failure

In send_chat_media_batch, when encrypt_for_upload_with_options fails for any item during Phase B, the code cleans up the outbox entry and delivery overrides but does not delete any of the local media files written in Phase A (lines 749–756). All N files for the batch are written to disk before encryption starts. On encryption failure, the early return at line 866 leaves all those files as orphans.

This differs from the single-item send_chat_media, which explicitly calls std::fs::remove_file(&local_path) on encryption failure. These orphaned files won't be cleaned up by any GC mechanism since they're in the per-account local media cache directory, not the share queue's media directory.

Prompt for agents
In rust/src/core/chat_media.rs, in the send_chat_media_batch method, at the encryption failure handler around lines 852-866, add cleanup of all local media files written during Phase A before returning. You need to iterate over all preprocessed items and delete their local_path files, similar to how send_chat_media does std::fs::remove_file(&local_path) on encryption failure. The code should look like:

Err(e) => {
    if let Some(outbox) = self.local_outbox.get_mut(&chat_id) {
        outbox.remove(&temp_rumor_id);
    }
    if let Some(overrides) = self.delivery_overrides.get_mut(&chat_id) {
        overrides.remove(&temp_rumor_id);
    }
    // Clean up ALL local media files written in Phase A
    for pp in &preprocessed {
        let _ = std::fs::remove_file(&pp.local_path);
    }
    self.refresh_current_chat_if_open(&chat_id);
    self.refresh_chat_list_from_storage();
    self.toast(format!("Media encryption failed: {e}"));
    return;
}
Open in Devin Review


@futurepaul futurepaul merged commit cd1e4f2 into master Mar 5, 2026
32 of 34 checks passed