feat: implement NIP-13 proof-of-work mining for Mostro events#519
feat: implement NIP-13 proof-of-work mining for Mostro events#519
Conversation
|
@coderabbitai review |
✅ Actions performedReview triggered.
|
|
Important Review skippedBot user detected. To trigger a single review, invoke the ⚙️ Run configurationConfiguration used: Organization UI Review profile: CHILL Plan: Pro Run ID: You can disable this status message by setting the Use the checkbox below for a quick retry:
WalkthroughAdds a full-screen QR scanner and camera permission handling; implements NIP-13 PoW mining support propagated from the Mostro instance (difficulty) into wrapping/publishing Nostr events across services and repositories. Changes
Sequence Diagram(s)sequenceDiagram
actor User
participant UI as ConnectWalletScreen
participant Perm as PermissionHandler
participant Nav as Navigator
participant Scanner as QrScannerScreen
participant Mobile as MobileScanner
User->>UI: Tap QR scan button
UI->>Perm: Request camera permission
Perm-->>UI: Granted / Denied
alt Denied
UI->>UI: Show localized snackbar (cameraPermissionDenied)
else Granted
UI->>Nav: Push QrScannerScreen(uriPrefix)
Nav->>Scanner: Show scanner UI
Scanner->>Mobile: Start camera stream
Mobile-->>Scanner: Detect QR code
Scanner->>Scanner: Apply uriPrefix filter
alt Prefix matches
Scanner->>Nav: Pop with scanned URI
Nav-->>UI: Receive scanned URI
UI->>UI: Populate URI input field
else Prefix mismatch
Scanner->>Scanner: Ignore and continue
end
end
sequenceDiagram
participant Service as MostroService / DisputeRepository
participant Instance as MostroInstance
participant Utils as NostrUtils
participant Isolate as compute()
participant Relay as Relays/Publisher
Service->>Instance: Read pow difficulty
Instance-->>Service: Return pow (difficulty)
alt difficulty > 0
Service->>Utils: createWrap(..., difficulty)
Utils->>Isolate: compute(_mineNonce, params)
Isolate-->>Utils: Return mined event with nonce tag
Utils->>Utils: Sign mined event id
Utils-->>Service: Return mined signed event
else difficulty == 0
Service->>Utils: createWrap(..., difficulty=0)
Utils-->>Service: Return event (no mining)
end
Service->>Relay: Publish event
Relay-->>Service: Ack / Error
Estimated code review effort🎯 4 (Complex) | ⏱️ ~50 minutes Possibly related PRs
Suggested reviewers
Poem
🚥 Pre-merge checks | ✅ 4 | ❌ 1❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings. ✨ Finishing Touches🧪 Generate unit tests (beta)
📝 Coding Plan
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out. Comment Tip You can disable the changed files summary in the walkthrough.Disable the |
There was a problem hiding this comment.
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
lib/features/restore/restore_manager.dart (1)
105-126:⚠️ Potential issue | 🔴 CriticalDon't advance restore stages on the first gift-wrap received.
_handleTempSubscriptionsResponse()resolves_currentCompleterfor any event that hits the temp trade key. An out-of-order response—or a forged gift-wrap sent to that key—can therefore satisfy_waitForEvent()and get parsed as restore/orders/last-trade-index data. Only complete the stage after unwrapping and validating the expected inner action/sender.Also applies to: 129-133
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/features/restore/restore_manager.dart` around lines 105 - 126, The current logic resolves _currentCompleter (used by _waitForEvent) as soon as any event hits the temp trade key, which allows out-of-order or forged gift-wraps to advance restore stages; change _handleTempSubscriptionsResponse so it does NOT call _currentCompleter.complete(...) immediately for gift-wraps—instead unwrap the gift-wrap, verify the inner action and sender (expected action/sender for the current RestoreStage), and only complete _currentCompleter with the unwrapped/validated NostrEvent when those checks pass; ensure the same validation gating is applied for the other completion points referenced around the block (including the logic at the section noted around lines 129-133) so stages only advance after successful unwrapping and validation.lib/services/nostr_service.dart (1)
79-105:⚠️ Potential issue | 🔴 Critical
_fetchFromSpecificRelays()is permanently mutating shared relay state.After
updateSettings(tempSettings), thesettingsgetter already points at the temporary config, so the laterupdateSettings(settings)calls do not restore the original relay set. On top of that,updateSettings()is additive-only, so the extra relays stay connected anyway. A one-off fetch from deep-link relays can therefore leak subsequent traffic to those relays for the rest of the session.Also applies to: 437-482
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/services/nostr_service.dart` around lines 79 - 105, The one-off deep-link fetch is mutating shared relay state because _fetchFromSpecificRelays() calls updateSettings(tempSettings) which updates _settings and leaves added relays connected (updateSettings is additive-only); fix by performing non-persistent, local-only connections instead of mutating _settings: in _fetchFromSpecificRelays() avoid calling updateSettings() or assigning to settings/_settings, instead create a local list of relays and call a new private connectOnceToRelays(List<String>) helper (or call init/new init overload) that connects only for the fetch and then disconnects those relays, or if reusing init/updateSettings ensure you capture previous _settings, call updateSettings(copy) and after the fetch explicitly disconnect newly added relays and restore _settings/_isInitialized to previous values so the global relay set is not leaked; reference _fetchFromSpecificRelays, updateSettings, init, _settings, settings getter, and _isInitialized when applying the change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@lib/data/repositories/dispute_repository.dart`:
- Around line 46-53: The code currently treats a missing Mostro instance's PoW
as 0 by using _ref.read(orderRepositoryProvider).mostroInstance?.pow ?? 0, which
can cause unmined wraps to be sent; instead, check that
_ref.read(orderRepositoryProvider).mostroInstance is present and its pow is
available before calling disputeMessage.wrap (referencing mostroInstance,
mostroInstance.pow, disputeMessage.wrap, session.tradeKey, and _mostroPubkey),
and if pow is missing either fail-fast (throw or return an error) or trigger
loading/refresh of the Mostro instance so you only call disputeMessage.wrap with
a known difficulty value.
In `@lib/features/restore/restore_manager.dart`:
- Around line 888-896: The UI currently exposes raw exception text by calling
ref.read(restoreProgressProvider.notifier).showError('Restore failed:
$errorMessage'); keep the detailed error in logger.e(...) but replace the
showError payload with a generic, localized message (e.g., use your localization
lookup for a "restore failed" string) so no raw exception text is shown to
users; update the call site that uses restoreProgressProvider.notifier.showError
to pass the localized/generic message instead of interpolating e.toString(),
leaving logger.e(error: e, stackTrace: stack) intact for diagnostics.
In `@lib/services/mostro_service.dart`:
- Around line 317-327: The code currently downgrades a missing Mostro node to
"no PoW" by using mostroInstance?.pow ?? 0; instead, detect when mostroInstance
is null and do not call order.wrap with a zero difficulty. Update the block that
reads mostroInstance from orderRepositoryProvider and the computation of
difficulty so that if mostroInstance is null you return/fail early (or await
until the node info is available) rather than passing difficulty = 0 to
order.wrap; use an explicit null-check on mostroInstance and either throw/return
an error or suspend until the instance is loaded to ensure valid PoW-required
nodes are not treated as difficulty 0.
In `@lib/shared/utils/nostr_utils.dart`:
- Around line 276-297: createWrap currently forwards the untrusted difficulty
directly into the expensive mining path (mineProofOfWork), allowing a remote
node to trigger unbounded CPU work; clamp the difficulty before use and/or
enforce a wall-clock timeout in the mining path. Add a constant
MAX_POW_DIFFICULTY (e.g. 20–24) and replace the direct use of the difficulty
param in createWrap by capping it (difficulty = min(difficulty,
MAX_POW_DIFFICULTY)) before calling mineProofOfWork, and update mineProofOfWork
(and any compute/loop it calls) to honor a passed timeout or abort after a fixed
duration; reference the createWrap and mineProofOfWork symbols so the change is
applied where the mining job is dispatched.
In `@lib/shared/widgets/qr_scanner_screen.dart`:
- Around line 74-77: The async torch toggle callback can call setState after the
widget is disposed; update the onPressed handler that calls
_controller.toggleTorch() so that after awaiting the async call you check the
widget's mounted status (or otherwise guard against disposal) before calling
setState to flip _torchOn; reference the existing _controller.toggleTorch(),
setState(...) and the _torchOn field and ensure you return early if the widget
is no longer mounted to prevent setState after dispose.
---
Outside diff comments:
In `@lib/features/restore/restore_manager.dart`:
- Around line 105-126: The current logic resolves _currentCompleter (used by
_waitForEvent) as soon as any event hits the temp trade key, which allows
out-of-order or forged gift-wraps to advance restore stages; change
_handleTempSubscriptionsResponse so it does NOT call
_currentCompleter.complete(...) immediately for gift-wraps—instead unwrap the
gift-wrap, verify the inner action and sender (expected action/sender for the
current RestoreStage), and only complete _currentCompleter with the
unwrapped/validated NostrEvent when those checks pass; ensure the same
validation gating is applied for the other completion points referenced around
the block (including the logic at the section noted around lines 129-133) so
stages only advance after successful unwrapping and validation.
In `@lib/services/nostr_service.dart`:
- Around line 79-105: The one-off deep-link fetch is mutating shared relay state
because _fetchFromSpecificRelays() calls updateSettings(tempSettings) which
updates _settings and leaves added relays connected (updateSettings is
additive-only); fix by performing non-persistent, local-only connections instead
of mutating _settings: in _fetchFromSpecificRelays() avoid calling
updateSettings() or assigning to settings/_settings, instead create a local list
of relays and call a new private connectOnceToRelays(List<String>) helper (or
call init/new init overload) that connects only for the fetch and then
disconnects those relays, or if reusing init/updateSettings ensure you capture
previous _settings, call updateSettings(copy) and after the fetch explicitly
disconnect newly added relays and restore _settings/_isInitialized to previous
values so the global relay set is not leaked; reference
_fetchFromSpecificRelays, updateSettings, init, _settings, settings getter, and
_isInitialized when applying the change.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: ee168038-d44a-4cbf-b3a1-dc9f1eb1ce34
📒 Files selected for processing (15)
android/app/src/main/AndroidManifest.xmldocs/QR_SCANNER_NWC_IMPLEMENTATION.mdlib/data/models/mostro_message.dartlib/data/repositories/dispute_repository.dartlib/features/restore/restore_manager.dartlib/features/wallet/screens/connect_wallet_screen.dartlib/l10n/intl_en.arblib/l10n/intl_es.arblib/l10n/intl_fr.arblib/l10n/intl_it.arblib/services/mostro_service.dartlib/services/nostr_service.dartlib/shared/utils/nostr_utils.dartlib/shared/widgets/qr_scanner_screen.dartpubspec.yaml
| // Wrap message using Gift Wrap protocol (NIP-59) with PoW from Mostro instance | ||
| final mostroPow = | ||
| _ref.read(orderRepositoryProvider).mostroInstance?.pow ?? 0; | ||
| final event = await disputeMessage.wrap( | ||
| tradeKey: session.tradeKey, | ||
| recipientPubKey: _mostroPubkey, | ||
| difficulty: mostroPow, | ||
| ); |
There was a problem hiding this comment.
Don’t treat unknown Mostro PoW as 0 in the dispute path.
mostroInstance?.pow ?? 0 makes “instance info not loaded” behave the same as “this node explicitly requires no PoW”. If the current Mostro requires PoW, this wraps the dispute without mining and the event can be rejected.
Suggested fix
- final mostroPow =
- _ref.read(orderRepositoryProvider).mostroInstance?.pow ?? 0;
+ final mostroInstance = _ref.read(orderRepositoryProvider).mostroInstance;
+ if (mostroInstance == null) {
+ logger.e(
+ 'Mostro instance info is unavailable, cannot determine PoW for dispute creation',
+ );
+ return false;
+ }
+ final mostroPow = mostroInstance.pow;🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lib/data/repositories/dispute_repository.dart` around lines 46 - 53, The code
currently treats a missing Mostro instance's PoW as 0 by using
_ref.read(orderRepositoryProvider).mostroInstance?.pow ?? 0, which can cause
unmined wraps to be sent; instead, check that
_ref.read(orderRepositoryProvider).mostroInstance is present and its pow is
available before calling disputeMessage.wrap (referencing mostroInstance,
mostroInstance.pow, disputeMessage.wrap, session.tradeKey, and _mostroPubkey),
and if pow is missing either fail-fast (throw or return an error) or trigger
loading/refresh of the Mostro instance so you only call disputeMessage.wrap with
a known difficulty value.
There was a problem hiding this comment.
Fixed in 8e9ea91. Now we check mostroInstance is non-null and return false early if unavailable, instead of silently defaulting to PoW 0.
There was a problem hiding this comment.
Skipped: comment is from another GitHub bot.
There was a problem hiding this comment.
Fixed in 425c0c80: default POW fallback in DisputeRepository now uses 5 instead of 0 when instance config is absent.
There was a problem hiding this comment.
Addressed: fallback in this repository path now defaults to 5 instead of 0 (commit 425c0c80).
There was a problem hiding this comment.
Skipped: comment is from another GitHub bot.
There was a problem hiding this comment.
Skipped: comment is from another GitHub bot.
| // Read PoW difficulty from the connected Mostro instance (kind 38385) | ||
| final mostroInstance = ref.read(orderRepositoryProvider).mostroInstance; | ||
| final difficulty = mostroInstance?.pow ?? 0; | ||
|
|
||
| final event = await order.wrap( | ||
| tradeKey: session.tradeKey, | ||
| recipientPubKey: _settings.mostroPublicKey, | ||
| masterKey: session.fullPrivacy ? null : session.masterKey, | ||
| keyIndex: session.fullPrivacy ? null : session.keyIndex, | ||
| difficulty: difficulty, | ||
| ); |
There was a problem hiding this comment.
Don’t silently downgrade missing node info to “no PoW required”.
When mostroInstance is null, difficulty becomes 0 and we publish an unmined wrapper. For PoW-enabled Mostro nodes that means valid user actions can fail simply because the info event has not loaded yet.
Suggested fix
- final mostroInstance = ref.read(orderRepositoryProvider).mostroInstance;
- final difficulty = mostroInstance?.pow ?? 0;
+ final mostroInstance = ref.read(orderRepositoryProvider).mostroInstance;
+ if (mostroInstance == null) {
+ throw StateError(
+ 'Mostro instance info is unavailable, cannot determine PoW requirement',
+ );
+ }
+ final difficulty = mostroInstance.pow;📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| // Read PoW difficulty from the connected Mostro instance (kind 38385) | |
| final mostroInstance = ref.read(orderRepositoryProvider).mostroInstance; | |
| final difficulty = mostroInstance?.pow ?? 0; | |
| final event = await order.wrap( | |
| tradeKey: session.tradeKey, | |
| recipientPubKey: _settings.mostroPublicKey, | |
| masterKey: session.fullPrivacy ? null : session.masterKey, | |
| keyIndex: session.fullPrivacy ? null : session.keyIndex, | |
| difficulty: difficulty, | |
| ); | |
| // Read PoW difficulty from the connected Mostro instance (kind 38385) | |
| final mostroInstance = ref.read(orderRepositoryProvider).mostroInstance; | |
| if (mostroInstance == null) { | |
| throw StateError( | |
| 'Mostro instance info is unavailable, cannot determine PoW requirement', | |
| ); | |
| } | |
| final difficulty = mostroInstance.pow; | |
| final event = await order.wrap( | |
| tradeKey: session.tradeKey, | |
| recipientPubKey: _settings.mostroPublicKey, | |
| masterKey: session.fullPrivacy ? null : session.masterKey, | |
| keyIndex: session.fullPrivacy ? null : session.keyIndex, | |
| difficulty: difficulty, | |
| ); |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lib/services/mostro_service.dart` around lines 317 - 327, The code currently
downgrades a missing Mostro node to "no PoW" by using mostroInstance?.pow ?? 0;
instead, detect when mostroInstance is null and do not call order.wrap with a
zero difficulty. Update the block that reads mostroInstance from
orderRepositoryProvider and the computation of difficulty so that if
mostroInstance is null you return/fail early (or await until the node info is
available) rather than passing difficulty = 0 to order.wrap; use an explicit
null-check on mostroInstance and either throw/return an error or suspend until
the instance is loaded to ensure valid PoW-required nodes are not treated as
difficulty 0.
There was a problem hiding this comment.
Fixed in 8e9ea91. Now throws StateError when mostroInstance is null instead of silently defaulting to difficulty 0.
There was a problem hiding this comment.
Skipped: comment is from another GitHub bot.
There was a problem hiding this comment.
Fixed in 425c0c80: MostroService now uses mostroInstance?.pow ?? 5 for bootstrap request publishing fallback.
There was a problem hiding this comment.
Addressed: MostroService fallback set to 5 in 425c0c80.
There was a problem hiding this comment.
Skipped: comment is from another GitHub bot.
There was a problem hiding this comment.
Skipped: comment is from another GitHub bot.
| static Future<NostrEvent> createWrap( | ||
| NostrKeyPairs wrapperKeyPair, | ||
| String sealedContent, | ||
| String recipientPubKey, { | ||
| int difficulty = 0, | ||
| }) async { | ||
| final wrapEvent = NostrEvent.fromPartialData( | ||
| kind: 1059, | ||
| content: sealedContent, | ||
| keyPairs: wrapperKeyPair, | ||
| tags: [ | ||
| ["p", recipientPubKey] | ||
| ["p", recipientPubKey], | ||
| ], | ||
| createdAt: randomNow(), | ||
| ); | ||
|
|
||
| if (difficulty > 0) { | ||
| return mineProofOfWork(wrapEvent, difficulty, wrapperKeyPair); | ||
| } | ||
|
|
||
| return wrapEvent; | ||
| } |
There was a problem hiding this comment.
Bound untrusted PoW difficulty before starting mining.
difficulty is passed straight into an effectively unbounded mining job. A misconfigured or hostile Mostro node can make publishes appear hung for a very long time and burn CPU in the background. Clamp the supported difficulty or enforce a wall-clock timeout before dispatching compute(...).
🛡️ Example guard
static Future<NostrEvent> mineProofOfWork(
NostrEvent event,
int difficulty,
NostrKeyPairs keyPairs,
) async {
if (difficulty <= 0) return event;
+ const maxPowDifficulty = 20;
+ if (difficulty > maxPowDifficulty) {
+ throw ArgumentError.value(
+ difficulty,
+ 'difficulty',
+ 'PoW difficulty exceeds supported maximum $maxPowDifficulty',
+ );
+ }
final createdAt = event.createdAt ?? DateTime.now();
final createdAtSeconds = createdAt.millisecondsSinceEpoch ~/ 1000;
@@
- final result = await compute(_mineNonce, {
+ final result = await compute(_mineNonce, {
'kind': event.kind,
'content': event.content ?? '',
'createdAt': createdAtSeconds,
'pubkey': event.pubkey,
'tags': baseTags.map((t) => t.toList()).toList(),
'difficulty': difficulty,
- });
+ }).timeout(const Duration(seconds: 30));Also applies to: 485-550
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lib/shared/utils/nostr_utils.dart` around lines 276 - 297, createWrap
currently forwards the untrusted difficulty directly into the expensive mining
path (mineProofOfWork), allowing a remote node to trigger unbounded CPU work;
clamp the difficulty before use and/or enforce a wall-clock timeout in the
mining path. Add a constant MAX_POW_DIFFICULTY (e.g. 20–24) and replace the
direct use of the difficulty param in createWrap by capping it (difficulty =
min(difficulty, MAX_POW_DIFFICULTY)) before calling mineProofOfWork, and update
mineProofOfWork (and any compute/loop it calls) to honor a passed timeout or
abort after a fixed duration; reference the createWrap and mineProofOfWork
symbols so the change is applied where the mining job is dispatched.
There was a problem hiding this comment.
Fixed in 8e9ea91. Added maxPowDifficulty = 24 constant and ArgumentError throw if exceeded. Skipped the timeout on compute() since the isolate already runs bounded by the difficulty cap — at 24 bits worst case is a few seconds on mobile.
There was a problem hiding this comment.
Skipped: comment is from another GitHub bot.
There was a problem hiding this comment.
Fixed in 425c0c80: Nostr utility fallback now uses 5 as safe default POW (instance.pow ?? 5) instead of zero.
There was a problem hiding this comment.
Addressed: fallback set to 5 in utilities in 425c0c80.
There was a problem hiding this comment.
Skipped: comment is from another GitHub bot.
There was a problem hiding this comment.
Skipped: comment is from another GitHub bot.
|
@coderabbitai review |
✅ Actions performedReview triggered.
|
There was a problem hiding this comment.
Actionable comments posted: 2
♻️ Duplicate comments (2)
lib/data/repositories/dispute_repository.dart (1)
46-60:⚠️ Potential issue | 🟠 MajorDon't silently downgrade unknown PoW to
0for disputes.Missing instance info is not the same as "this node requires no PoW". Returning an unmined dispute wrapper here will be rejected by PoW-enabled nodes, so fail locally and let the caller retry once the info event is loaded.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/data/repositories/dispute_repository.dart` around lines 46 - 60, Currently the code silently falls back to PoW 0 when _ref.read(orderRepositoryProvider).mostroInstance is null (mostroInstance), which can cause disputes to be rejected; change the logic in the block around mostroInstance/mostroPow so that if mostroInstance is null you fail early (throw an appropriate error or return a failure) instead of setting mostroPow = 0, and surface that error to the caller so it can retry once the instance info is available; ensure this affects the call to disputeMessage.wrap (which uses session.tradeKey and _mostroPubkey) so wrap is only invoked when mostroInstance is present.lib/services/mostro_service.dart (1)
317-333:⚠️ Potential issue | 🟠 MajorDon't treat "instance info still loading" as
pow == 0.If
mostroInstanceis null here,order.wrap(...)publishes an unmined wrapper to PoW-enabled nodes. That turns a transient startup/node-switch race into a user-visible publish failure instead of a retryable local error.Suggested change
- final mostroInstance = ref.read(orderRepositoryProvider).mostroInstance; - final difficulty = mostroInstance?.pow ?? 0; - if (mostroInstance == null) { - logger.w( - 'Mostro instance info unavailable, sending with PoW 0 — ' - 'event may be rejected if node requires PoW', - ); - } + final mostroInstance = ref.read(orderRepositoryProvider).mostroInstance; + if (mostroInstance == null) { + throw StateError( + 'Mostro instance info is unavailable; retry once the node info event has loaded.', + ); + } + final difficulty = mostroInstance.pow;🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/services/mostro_service.dart` around lines 317 - 333, The code currently treats a missing mostroInstance as pow==0 which causes order.wrap to publish an unmined wrapper; instead, detect mostroInstance==null and pass a nullable difficulty (e.g., difficulty = null) or abort/retry locally rather than using 0 so order.wrap isn't sent with PoW 0. Update the block that reads mostroInstance from orderRepositoryProvider and the call to order.wrap to use a nullable difficulty (or return an error/retry) when mostroInstance is null, referencing mostroInstance, difficulty, order.wrap, and orderRepositoryProvider to locate and change the logic.
🧹 Nitpick comments (1)
test/services/mostro_service_test.dart (1)
138-141: Add a positive-PoW publish test.This setup only covers the null-instance fallback. The new behavior worth locking down is that a non-zero
mostroInstance.powproduces a published kind-1059 event with anoncetag carrying that difficulty.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/services/mostro_service_test.dart` around lines 138 - 141, Add a positive-PoW test by stubbing MockOpenOrdersRepository.mostroInstance to a non-null object with a positive pow (e.g., set mockOrderRepo.mostroInstance.pow = 5), keep the mockRef.read(orderRepositoryProvider) returning this mock, call publishOrder, then assert that the emitted event is kind-1059 and that its tags include a nonce tag whose value equals the pow (e.g., "5"); reference the publishOrder function, orderRepositoryProvider, MockOpenOrdersRepository, and mostroInstance.pow when making these changes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@lib/features/restore/restore_manager.dart`:
- Around line 183-190: The code currently falls back to pow == 0 when node info
is missing (using ref.read(orderRepositoryProvider).mostroInstance?.pow ?? 0)
which causes unmined restore requests; create a small helper (e.g.,
requireKnownMostroPow()) that reads
ref.read(orderRepositoryProvider).mostroInstance?.pow and throws/returns an
error if pow is null/unknown, then replace every usage of the inline fallback
(the three spots that build mostroPow before calling mostroMessage.wrap) with a
call to this helper and only proceed to call mostroMessage.wrap (with tradeKey:
_tempTradeKey!, recipientPubKey: settings.mostroPublicKey, masterKey:
settings.fullPrivacyMode ? null : _masterKey, difficulty: knownPow) when the
helper confirms a known pow.
- Around line 893-895: Replace the hardcoded message with a localized string:
add an ARB key like "restoreFailed" and run the codegen so
S.of(context)!.restoreFailed exists, then call
restoreProgressProvider.notifier.showError with that localized value instead of
the literal — e.g. use S.of(context)!.restoreFailed; if there is no BuildContext
in scope inside restore_manager.dart, either (A) change the caller to obtain the
localized string and pass it into showError, or (B) add a BuildContext (or a
localization-providing parameter) to the method that currently calls
ref.read(restoreProgressProvider.notifier).showError so you can call
S.of(context)!.restoreFailed directly. Ensure you reference the
restoreProgressProvider.notifier.showError call when making the change.
---
Duplicate comments:
In `@lib/data/repositories/dispute_repository.dart`:
- Around line 46-60: Currently the code silently falls back to PoW 0 when
_ref.read(orderRepositoryProvider).mostroInstance is null (mostroInstance),
which can cause disputes to be rejected; change the logic in the block around
mostroInstance/mostroPow so that if mostroInstance is null you fail early (throw
an appropriate error or return a failure) instead of setting mostroPow = 0, and
surface that error to the caller so it can retry once the instance info is
available; ensure this affects the call to disputeMessage.wrap (which uses
session.tradeKey and _mostroPubkey) so wrap is only invoked when mostroInstance
is present.
In `@lib/services/mostro_service.dart`:
- Around line 317-333: The code currently treats a missing mostroInstance as
pow==0 which causes order.wrap to publish an unmined wrapper; instead, detect
mostroInstance==null and pass a nullable difficulty (e.g., difficulty = null) or
abort/retry locally rather than using 0 so order.wrap isn't sent with PoW 0.
Update the block that reads mostroInstance from orderRepositoryProvider and the
call to order.wrap to use a nullable difficulty (or return an error/retry) when
mostroInstance is null, referencing mostroInstance, difficulty, order.wrap, and
orderRepositoryProvider to locate and change the logic.
---
Nitpick comments:
In `@test/services/mostro_service_test.dart`:
- Around line 138-141: Add a positive-PoW test by stubbing
MockOpenOrdersRepository.mostroInstance to a non-null object with a positive pow
(e.g., set mockOrderRepo.mostroInstance.pow = 5), keep the
mockRef.read(orderRepositoryProvider) returning this mock, call publishOrder,
then assert that the emitted event is kind-1059 and that its tags include a
nonce tag whose value equals the pow (e.g., "5"); reference the publishOrder
function, orderRepositoryProvider, MockOpenOrdersRepository, and
mostroInstance.pow when making these changes.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: b3ce08fa-d069-441b-bd03-debda6cfce74
📒 Files selected for processing (6)
lib/data/repositories/dispute_repository.dartlib/features/restore/restore_manager.dartlib/services/mostro_service.dartlib/shared/utils/nostr_utils.dartlib/shared/widgets/qr_scanner_screen.darttest/services/mostro_service_test.dart
| // Respect full privacy mode: if enabled, don't pass master key, wrap will be done just with trade key | ||
| final mostroPow = | ||
| ref.read(orderRepositoryProvider).mostroInstance?.pow ?? 0; | ||
| final wrappedEvent = await mostroMessage.wrap( | ||
| tradeKey: _tempTradeKey!, | ||
| recipientPubKey: settings.mostroPublicKey, | ||
| masterKey: settings.fullPrivacyMode ? null : _masterKey); | ||
| tradeKey: _tempTradeKey!, | ||
| recipientPubKey: settings.mostroPublicKey, | ||
| masterKey: settings.fullPrivacyMode ? null : _masterKey, | ||
| difficulty: mostroPow, |
There was a problem hiding this comment.
Restore requests shouldn't fall back to PoW 0 when node info is missing.
All three restore wraps use mostroInstance?.pow ?? 0. Against a PoW-enabled node, that sends unmined restore/orders/last-trade-index requests and the restore flow just waits for timeouts. Please centralize this behind a helper that requires a known pow before publishing restore traffic.
Also applies to: 286-293, 373-380
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lib/features/restore/restore_manager.dart` around lines 183 - 190, The code
currently falls back to pow == 0 when node info is missing (using
ref.read(orderRepositoryProvider).mostroInstance?.pow ?? 0) which causes unmined
restore requests; create a small helper (e.g., requireKnownMostroPow()) that
reads ref.read(orderRepositoryProvider).mostroInstance?.pow and throws/returns
an error if pow is null/unknown, then replace every usage of the inline fallback
(the three spots that build mostroPow before calling mostroMessage.wrap) with a
call to this helper and only proceed to call mostroMessage.wrap (with tradeKey:
_tempTradeKey!, recipientPubKey: settings.mostroPublicKey, masterKey:
settings.fullPrivacyMode ? null : _masterKey, difficulty: knownPow) when the
helper confirms a known pow.
There was a problem hiding this comment.
Good catch — fixed in 425c0c80. powDifficulty now defaults to 5 (safe baseline) instead of 0 when Mostro instance data is unavailable in restore flow.
There was a problem hiding this comment.
Skipped: comment is from another GitHub bot.
| ref | ||
| .read(restoreProgressProvider.notifier) | ||
| .showError('Restore failed. Please try again.'); |
There was a problem hiding this comment.
Localize this restore error message.
This is still a user-facing English literal. Please surface a localization key or ARB-backed string here instead of embedding the message inline. As per coding guidelines, "Replace all hardcoded user-facing text with localized strings using S.of(context)!.keyName."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lib/features/restore/restore_manager.dart` around lines 893 - 895, Replace
the hardcoded message with a localized string: add an ARB key like
"restoreFailed" and run the codegen so S.of(context)!.restoreFailed exists, then
call restoreProgressProvider.notifier.showError with that localized value
instead of the literal — e.g. use S.of(context)!.restoreFailed; if there is no
BuildContext in scope inside restore_manager.dart, either (A) change the caller
to obtain the localized string and pass it into showError, or (B) add a
BuildContext (or a localization-providing parameter) to the method that
currently calls ref.read(restoreProgressProvider.notifier).showError so you can
call S.of(context)!.restoreFailed directly. Ensure you reference the
restoreProgressProvider.notifier.showError call when making the change.
There was a problem hiding this comment.
Also addressed in 425c0c80: conflict cleanup + consistency pass in restore flow logic after rebase. The branch was rebased and conflict markers removed.
There was a problem hiding this comment.
Skipped: comment is from another GitHub bot.
|
Checklist :
Order received from mostrod with pow 2 --> 0x2e --> 0b00101110 Event id 2e948c67d2c229e75678d313388b8598540ec20bbe297d8835a3d952c0b945d1Order received from mostrod with pow 4 --> 0x04 --> 0b00000100 Event id `04`44ce8e3e347919c5f3c8f50af95760fea950775757e8efa458cf0f8a17f100Order received from mostrod with pow 8 --> 0x00 --> 0b00000000 Event id 00970f75ee53c7e9e19fa248a1121667cbc75e50135f9717f693773d03cde3cbOrder received from mostrod with pow 16 --> 0x0000 --> 0b0000000000000000 Event id 0000b4d1d28dcdbeb8af34330fc38eb0cfc1949a650b54d990d020881d22ed0fSomething wrong is happening on my local mostro when pow is greater than 16, seems event is completely not created/sent, mobile shows an error of no response received but mostrod seems not receiving any event. Up to 16 works well! Also without closing mobile, app gets the new message from mostro at restart and gets correct pow! @coderabbitai, @grunch could you take a look to that case? Maybe we have to check also on mostrod, but seems really a standard check on that side. |
|
Tip For best results, initiate chat on the files or code changes. 🧩 Analysis chainLet me examine the mining implementation to understand what could be failing for difficulty > 16. 🏁 Script executed: Length of output: 17452
Bug 1 (Primary):
|
| Difficulty | Expected iterations | Loop cap (0x7FFFFFFF) | Result |
|---|---|---|---|
| 24 | ~16.7M | 2.1B | ✅ Always succeeds |
| 28 | ~268M | 2.1B | ✅ Almost always |
| 32 | ~4.3B | 2.1B | ❌ Fails ~50% of the time |
Even if you raised maxPowDifficulty to 32, the loop would throw 'Failed to mine PoW after max iterations' roughly half the time.
Suggested Fix
static const int maxPowDifficulty = 32;
// In _mineNonce, use a larger bound — nonce values wrap around naturally in PoW
for (int nonce = 0; ; nonce++) { // or use a large bound like 0x7FFFFFFFFFFFFFFFAnd rather than throwing an ArgumentError for difficulty above the cap, consider capping with a warning so the event is still sent (at the max supported difficulty) rather than silently dropped:
static Future<NostrEvent> mineProofOfWork(
NostrEvent event,
int difficulty,
NostrKeyPairs keyPairs,
) async {
if (difficulty <= 0) return event;
if (difficulty > maxPowDifficulty) {
// Warn and cap instead of throwing — ensures event is always sent
debugPrint(
'[PoW] Requested difficulty $difficulty exceeds max $maxPowDifficulty, capping.',
);
difficulty = maxPowDifficulty;
}
...
}This way, if a Mostro node ever advertises an unreasonably high PoW, the app degrades gracefully rather than silently failing to send events.
Would you like me to prepare the code change for this fix, @grunch?
🧠 Learnings used
Learnt from: CR
Repo: MostroP2P/mobile PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-02-12T19:14:24.101Z
Learning: Applies to lib/features/mostro/**/*.dart : Implement Mostro instance validation by checking author pubkey to prevent relay contamination between different Mostro instances.
|
Tip For best results, initiate chat on the files or code changes. 🧩 Analysis chainLet me look at the mining code to understand the behavior when PoW difficulty exceeds the cap. 🏁 Script executed: Length of output: 3537
Bug 1 —
|
| Pow | Behavior |
|---|---|
| 2–24 | ✅ Works (within cap, nonce space sufficient) |
| 25–30 | ❌ ArgumentError thrown and uncaught (cap = 24) |
| 31–32 | ❌ ArgumentError + nonce space would likely exhaust even if cap were raised |
The immediate fix for the pow=32 case is to clamp instead of throw in mineProofOfWork (or catch in callers), and then raise the cap ceiling if you want to support higher difficulties properly — but that also requires the created_at cycling fix for reliability. Would you like me to plan the code changes?
🧠 Learnings used
Learnt from: CR
Repo: MostroP2P/mobile PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-02-12T19:14:24.101Z
Learning: Applies to lib/features/mostro/**/*.dart : Implement Mostro instance validation by checking author pubkey to prevent relay contamination between different Mostro instances.
|
I saw tha
I saw that also with pow 20 with my old phone ( zenfone asus max pro 2 on android 9 ) on three tentatives always failed. |
|
@coderabbitai, so speaking with @grunch we agree that bigger POW are not good for user experience so also a pow of 10 is a good protection for spammers. I think we are ok! |
|
Tip For best results, initiate chat on the files or code changes.
One minor note worth keeping in mind: if a malicious or misconfigured Mostro node ever advertises Otherwise, the implementation looks solid for the intended use case. Thanks for the thorough testing! 🎉 ✏️ Learnings added
🧠 Learnings used |
Mostro daemon requires outgoing events to meet a minimum proof-of-work difficulty (specified in the kind 38385 info event 'pow' tag). Events with insufficient PoW are silently ignored as spam. Changes: 1. NIP-13 mining engine (nostr_utils.dart): - Add mineProofOfWork() that mines a NostrEvent to target difficulty - Mining runs in a separate isolate via compute() to avoid UI jank - countLeadingZeroBits() counts leading zero bits in event id hex - Adds ['nonce', '<value>', '<target_difficulty>'] tag per NIP-13 2. Integration into event creation pipeline: - createWrap() accepts optional 'difficulty' parameter - createNIP59Event() passes difficulty through to createWrap() - MostroMessage.wrap() accepts optional 'difficulty' parameter 3. Automatic PoW from connected Mostro instance: - MostroService.publishOrder() reads pow from MostroInstance - DisputeRepository.createDispute() reads pow from MostroInstance - RestoreManager wrap calls read pow from MostroInstance - When user switches Mostro nodes, new pow value applies automatically 4. Scope — PoW is applied ONLY to events sent to Mostro: - Order actions (create, take, cancel, release, dispute, etc.) - Session restore requests - Dispute creation - P2P chat and dispute chat are NOT affected (peer-to-peer, not Mostro) Closes #518
- Remove redundant dart:typed_data import (conflicts with flutter/foundation.dart re-export after merge with main) - Fail-fast when mostroInstance is null instead of defaulting PoW to 0 (mostro_service.dart + dispute_repository.dart) - Cap max PoW difficulty at 24 to prevent hostile nodes from triggering unbounded CPU mining - Guard setState after async toggleTorch with mounted check (qr_scanner_screen.dart) - Sanitize restore error messages shown to users — keep raw exception in logger only (restore_manager.dart)
StateError on null mostroInstance breaks existing tests that don't mock orderRepositoryProvider. Changed to warning log + default PoW 0, which is the pragmatic approach: the event will still be sent and may be rejected by the node if it requires PoW, but the app won't crash during startup or when instance info hasn't loaded yet.
SubscriptionManager._initializeExistingSessions() calls ref.read(sessionNotifierProvider) during construction, but the test only stubbed sessionNotifierProvider.notifier. This caused MissingStubError for all 4 MostroService tests.
MostroService.publishOrder reads orderRepositoryProvider to get PoW difficulty. The test mocks didn't provide this stub, causing MissingDummyValueError. Added provideDummy + mock in setUp.
MockOpenOrdersRepository needs mostroInstance stubbed because MostroService.publishOrder accesses it. Returns null which triggers the warning log path (PoW defaults to 0).
- Add warning log when mostroInstance is null for PoW (3 occurrences) instead of silently defaulting to PoW 0 - Clean up showError string (overlay already uses localized message)
e7555c7 to
425c0c8
Compare
|
Applied the planned fixes and resolved conflicts on this branch in What was changed
Notes
Validation
|
* feat: implement NIP-13 proof-of-work mining for Mostro events Mostro daemon requires outgoing events to meet a minimum proof-of-work difficulty (specified in the kind 38385 info event 'pow' tag). Events with insufficient PoW are silently ignored as spam. Changes: 1. NIP-13 mining engine (nostr_utils.dart): - Add mineProofOfWork() that mines a NostrEvent to target difficulty - Mining runs in a separate isolate via compute() to avoid UI jank - countLeadingZeroBits() counts leading zero bits in event id hex - Adds ['nonce', '<value>', '<target_difficulty>'] tag per NIP-13 2. Integration into event creation pipeline: - createWrap() accepts optional 'difficulty' parameter - createNIP59Event() passes difficulty through to createWrap() - MostroMessage.wrap() accepts optional 'difficulty' parameter 3. Automatic PoW from connected Mostro instance: - MostroService.publishOrder() reads pow from MostroInstance - DisputeRepository.createDispute() reads pow from MostroInstance - RestoreManager wrap calls read pow from MostroInstance - When user switches Mostro nodes, new pow value applies automatically 4. Scope — PoW is applied ONLY to events sent to Mostro: - Order actions (create, take, cancel, release, dispute, etc.) - Session restore requests - Dispute creation - P2P chat and dispute chat are NOT affected (peer-to-peer, not Mostro) Closes #518 * fix: address CodeRabbit review + resolve CI analyze failure - Remove redundant dart:typed_data import (conflicts with flutter/foundation.dart re-export after merge with main) - Fail-fast when mostroInstance is null instead of defaulting PoW to 0 (mostro_service.dart + dispute_repository.dart) - Cap max PoW difficulty at 24 to prevent hostile nodes from triggering unbounded CPU mining - Guard setState after async toggleTorch with mounted check (qr_scanner_screen.dart) - Sanitize restore error messages shown to users — keep raw exception in logger only (restore_manager.dart) * fix: use warning instead of throw for missing mostroInstance StateError on null mostroInstance breaks existing tests that don't mock orderRepositoryProvider. Changed to warning log + default PoW 0, which is the pragmatic approach: the event will still be sent and may be rejected by the node if it requires PoW, but the app won't crash during startup or when instance info hasn't loaded yet. * fix: add missing sessionNotifierProvider stub in mostro_service_test SubscriptionManager._initializeExistingSessions() calls ref.read(sessionNotifierProvider) during construction, but the test only stubbed sessionNotifierProvider.notifier. This caused MissingStubError for all 4 MostroService tests. * fix: add missing OpenOrdersRepository stub in mostro_service_test MostroService.publishOrder reads orderRepositoryProvider to get PoW difficulty. The test mocks didn't provide this stub, causing MissingDummyValueError. Added provideDummy + mock in setUp. * fix: stub mostroInstance on MockOpenOrdersRepository MockOpenOrdersRepository needs mostroInstance stubbed because MostroService.publishOrder accesses it. Returns null which triggers the warning log path (PoW defaults to 0). * fix: address remaining CodeRabbit issues in restore_manager - Add warning log when mostroInstance is null for PoW (3 occurrences) instead of silently defaulting to PoW 0 - Clean up showError string (overlay already uses localized message) * fix: address review feedback and resolve rebase conflicts for PR #519 --------- Co-authored-by: MostronatorCoder[bot] <mostronatorcoder[bot]@users.noreply.github.com> Co-authored-by: MostronatorCoder[bot] <179aboratory+MostronatorCoder[bot]@users.noreply.github.com>
Summary
Implement NIP-13 proof-of-work on outgoing events to Mostro, using the
powvalue from the connected Mostro instance's kind 38385 info event.Closes #518
Problem
Mostro daemon silently ignores events with insufficient proof-of-work. The app reads the
powvalue from the kind 38385 info event but never uses it — all events are sent with zero PoW. This means the app cannot communicate with any Mostro instance that requires PoW > 0.Changes
1. NIP-13 mining engine (
lib/shared/utils/nostr_utils.dart)mineProofOfWork(event, difficulty, keyPairs)— mines an event to target difficultycompute()to avoid blocking the UI_mineNonce()— top-level function for isolate, iterates nonces and hashes until target leading zero bits are met_countLeadingZeroBits()— counts leading zero bits in hex string per NIP-13["nonce", "<value>", "<target_difficulty>"]tag per NIP-13 spec2. Event creation pipeline
createWrap()— now accepts optionaldifficultyparameter, mines if > 0createNIP59Event()— passes difficulty through tocreateWrap()MostroMessage.wrap()— accepts optionaldifficultyparameter3. Automatic PoW from connected Mostro instance
MostroService.publishOrder()readspowfrom currentMostroInstanceDisputeRepository.createDispute()readspowfrom currentMostroInstanceRestoreManagerwrap calls readpowfrom currentMostroInstance4. Scope
PoW is applied only to events sent to Mostro:
Performance
Typical Mostro instances use difficulty 8-12, which completes nearly instantly.
NIP-13 Reference
["nonce", "<counter>", "<target_difficulty>"][0, pubkey, created_at, kind, tags, content]Summary by CodeRabbit
New Features
Permissions
Localization
Documentation
Other