
Conversation

@V-Vaal (Contributor) commented Dec 16, 2025

Summary

This PR adds frontend retrocompatibility between the legacy badge registry (bytes32 descriptions) and the new version using bytes descriptions.

What’s included

  • Single read pipeline: totalBadges → version probe → multicall (no separate version hook)
  • Strict decoding based on observable contract behavior (decode-error based, no heuristics)
  • Robust write path: always try V2, fallback to V1
  • Split ABIs to avoid viem/wagmi ambiguity
  • Shared utils for string → bytes conversion
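
As a rough sketch of those shared utils, assuming viem's encoding helpers (bytes32ToString and bytesToString are the names used in this PR; the encode-side helpers and the module layout are assumptions):

```ts
// Shared description codec: V1 stores descriptions as bytes32, V2 as dynamic bytes.
import { hexToString, stringToHex, type Hex } from "viem";

// V1: fixed-size, zero-padded bytes32 (throws if the string exceeds 32 bytes).
export function stringToBytes32(value: string): Hex {
  return stringToHex(value, { size: 32 });
}

export function bytes32ToString(value: Hex): string {
  // size: 32 checks the width and strips the zero padding added on encode.
  return hexToString(value, { size: 32 });
}

// V2: dynamic bytes, no fixed size involved.
export function stringToBytesHex(value: string): Hex {
  return stringToHex(value);
}

export function bytesToString(value: Hex): string {
  return hexToString(value);
}
```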

Notes

Related to #157

@joelamouche (Contributor) left a comment

[Screenshot: 2025-12-17 at 11:08:40]

Would be nice to annotate the code saying this retrocompatibility logic should be removed after redeployment, with a link to the PR to facilitate future work.

I tested locally and badges were not loading. Did you try it locally with the old version?

PUBLIC_BADGE_REGISTRY_ADDRESS=0x94f5F12BE60a338D263882a1A49E81ca8A0c30F4

import { useReadContract } from "wagmi";
import { badgeRegistryAbiV2 } from "@/lib/abis/badgeRegistryAbi";

export function useBadgeRegistryVersion(address?: `0x${string}`): {

Maybe specify that this logic can be removed after redeployment

@V-Vaal (Contributor, Author) commented Dec 19, 2025

Okay, let's do a checkpoint...
After iterating on the frontend implementation and validating the behavior against real deployments (thanks for the reviews, by the way), I converged on a slightly different approach than the one initially proposed in the issue.

Why the approach evolved

My initial idea (described in the issue) relied on a dedicated useBadgeRegistryVersion hook, with version detection used as a prerequisite for both reads and writes.

While conceptually sound, testing this against real registries surfaced two practical issues:

  1. Multiple read pipelines (version hook + badge list) caused duplicated RPC calls and refetch storms.

  2. ABI ambiguity during multicalls made heuristic detection based on length checks unreliable in practice.

The final solution favors determinism, minimal RPC usage, and real on-chain signals over architectural purity.

Final architecture (implemented in my last commit)

  1. Single read pipeline
    All badge reads now go through a single hook:
    totalBadges → version probe → multicall
    This removes duplicated calls and guarantees a stable RPC budget.

  2. Probe-based version detection (data-driven)
    Instead of a separate version hook, version detection is integrated directly into useGetBadges:

If `totalBadges === 0`:
  • return [] (no probe, no multicall, no version assumption)

Otherwise:

  • Run a single probe: getBadgeAt(0) using the V2 ABI
  • If it decodes successfully → ABI V2
  • If it fails with a decode error only → ABI V1
  • Any other error is surfaced normally

This avoids heuristics, avoids address assumptions, and relies purely on observable contract behavior (a sketch of the probe follows after this list).

  3. ABI selection is explicit and strict
  • V1 → bytes32ToString
  • V2 → bytesToString
  • No length-based detection
  • No silent fallback on non-decode errors

  4. Defensive query defaults
    To prevent RPC saturation, I've added defensive QueryClient defaults:
  • no refetch on focus / reconnect / remount
  • limited retries
  • controlled cache lifetime

This ensures predictable behavior even under frequent navigation.
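
To make points 2 and 4 concrete, here is a minimal, self-contained sketch of the probe, assuming a viem public client on Amoy; in the PR the same logic lives inside useGetBadges, and the way decode errors are classified here (walking the error cause chain for viem's AbiDecoding* errors) is an assumption:

```ts
// Standalone sketch of the V1/V2 probe. getBadgeAt(0) is read with the V2 ABI;
// a clean decode means V2, a decode failure means V1, anything else is re-thrown.
import { createPublicClient, http } from "viem";
import { polygonAmoy } from "viem/chains";
import { badgeRegistryAbiV2 } from "@/lib/abis/badgeRegistryAbi";

const client = createPublicClient({ chain: polygonAmoy, transport: http() });

type RegistryVersion = "v1" | "v2";

export async function probeRegistryVersion(
  address: `0x${string}`
): Promise<RegistryVersion> {
  try {
    // Probe: read the first badge with the V2 ABI (bytes description).
    await client.readContract({
      address,
      abi: badgeRegistryAbiV2,
      functionName: "getBadgeAt",
      args: [0n],
    });
    return "v2"; // decoded cleanly → V2 layout
  } catch (err) {
    // Only a decode failure means "V1 registry"; reverts and RPC errors surface normally.
    if (isAbiDecodeError(err)) return "v1";
    throw err;
  }
}

// Assumption: viem wraps ABI-decoding failures in errors whose names start with
// "AbiDecoding"; walk the cause chain to find one.
function isAbiDecodeError(err: unknown): boolean {
  let cur: unknown = err;
  while (cur instanceof Error) {
    if (cur.name.startsWith("AbiDecoding")) return true;
    cur = (cur as Error & { cause?: unknown }).cause;
  }
  return false;
}
```

And a sketch of the defensive query defaults from point 4, assuming TanStack Query (which wagmi v2 uses); the concrete numbers are illustrative, not the exact values in this PR:

```ts
// Defensive QueryClient defaults: no refetch storms, bounded retries, bounded cache.
import { QueryClient } from "@tanstack/react-query";

export const queryClient = new QueryClient({
  defaultOptions: {
    queries: {
      refetchOnWindowFocus: false, // no refetch on focus
      refetchOnReconnect: false,   // no refetch on reconnect
      refetchOnMount: false,       // no refetch on remount
      retry: 1,                    // limited retries
      staleTime: 60_000,           // controlled cache lifetime (1 minute)
      gcTime: 5 * 60_000,          // drop unused cache entries after 5 minutes
    },
  },
});
```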

In the long term:

  • Works with both existing registries without assumptions

  • Deterministic and observable behavior

  • Minimal RPC footprint

  • Easy to remove once V1 is fully deprecated

The previously introduced useBadgeRegistryVersion hook is no longer needed with this approach and has been removed.

I think this approach is closer to our usage and should behave robustly in practice.

Let me know what you think.
Happy to adjust or extend this if you’d like to.

@V-Vaal (Contributor, Author) commented Dec 20, 2025

Added inline TODOs to clearly mark the temporary V1/V2 BadgeRegistry retro-compat code paths.
Cleanup is tracked separately in #160 and will be handled once the V2 redeploy is complete.

@joelamouche (Contributor) left a comment

Did you test it with a v1 deployment? I can't seem to make it work with a v1 contract (try 0xc142ab6b4688b7b81cb4cc8b305f517bba3bfd25)

@V-Vaal (Contributor, Author) commented Dec 22, 2025

> Did you test it with a v1 deployment? I can't seem to make it work with a v1 contract (try 0xc142ab6b4688b7b81cb4cc8b305f517bba3bfd25)

I tested against the v1 deployment as well.

I ran locally with:

  • PUBLIC_BADGE_REGISTRY_ADDRESS=0x94f5F12BE60a338D263882a1A49E81ca8A0c30F4
  • PUBLIC_BADGE_REGISTRY_ADDRESS=0xc142ab6b4688b7b81cb4cc8b305f517bba3bfd25

In both cases the UI loads fine on my side.

If it still fails for you, could you please share the exact page/action that fails and any console error or failing network call?
Happy to help reproduce if you paste the stacktrace / failing call.

@joelamouche (Contributor) commented

[Screenshot: 2025-12-22 at 12:08:57]

You tried creating badges with v1?

@V-Vaal (Contributor, Author) commented Dec 22, 2025

Not yet 🙂 I’m going to test the create flow against the v1 deployment and dig into it. I’ll get back with findings shortly.

@V-Vaal force-pushed the fix/157-frontend-badge-retrocompat branch from 897ebdf to 3e2ef03 on December 23, 2025 at 22:16
@V-Vaal (Contributor, Author) commented Dec 23, 2025

Updating after further testing.

I’ve now tested the full createBadge flow against both existing Amoy V1 deployments:

  • 0xc142ab6b4688b7b81cb4cc8b305f517bba3bfd25 (older V1)
  • 0x94f5F12BE60a338D263882a1A49E81ca8A0c30F4 (current registry in use, still V1)

In both cases, on my side:

  • /badges loads correctly
  • existing badges are displayed
  • createBadge succeeds without revert

The current implementation detects the ABI on-demand, based on observable on-chain behavior, and routes the write with the correct signature before sending the transaction.
This avoids both ABI-mismatch reverts and any RPC refetch storms.
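
For reference, a minimal sketch of what that version-aware write routing can look like, assuming wagmi's writeContract action, the split ABIs from this PR, and a createBadge(name, description) signature; the parameter list, config import and helper paths are assumptions:

```ts
// Route createBadge to the ABI matching the detected registry version.
import { writeContract } from "wagmi/actions";
import { wagmiConfig } from "@/lib/wagmi"; // hypothetical wagmi config module
import { badgeRegistryAbiV1, badgeRegistryAbiV2 } from "@/lib/abis/badgeRegistryAbi";
import { stringToBytes32, stringToBytesHex } from "@/lib/badges/descriptionCodec"; // hypothetical path

export async function writeCreateBadge(
  address: `0x${string}`,
  name: string,
  description: string,
  version: "v1" | "v2" // resolved by the on-demand probe before sending
) {
  if (version === "v2") {
    return writeContract(wagmiConfig, {
      address,
      abi: badgeRegistryAbiV2,
      functionName: "createBadge",
      args: [name, stringToBytesHex(description)], // V2: dynamic bytes description
    });
  }
  return writeContract(wagmiConfig, {
    address,
    abi: badgeRegistryAbiV1,
    functionName: "createBadge",
    args: [name, stringToBytes32(description)], // V1: fixed bytes32 description
  });
}
```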

The frontend should therefore remain fully retro-compatible with V1 while being ready for the upcoming V2 redeploy.

Unless you’d like a different behavior or structure, I think this is ready to merge.
Happy to iterate if you see any edge case I may have missed.

Edit (1 hour later): I’m now intermittently hitting an Internal JSON-RPC error again while re-running the same flow.
At this stage it smells more like a flaky RPC / provider issue (rate limiting, transient node failure) than a deterministic ABI or frontend bug... unless the RPC gods (or my last commit) decided otherwise 😄
Still investigating and will report back with concrete repro details.

Separately, once this is fully settled, I’ll do a small follow-up cleanup pass on the previous commit (pure refactor / readability only, no behavior changes).

@V-Vaal (Contributor, Author) commented Dec 26, 2025

I pushed a small refactor (helper extraction only). No behavior change intended.
I also instrumented the flow locally (debug-only): eth_call, simulateContract, and estimateGas succeed on both Amoy V1 registries, which strongly suggests ABI detection + args are correct and the call is valid at the EVM level.
When I occasionally hit an Internal JSON-RPC error, it appears intermittent and more consistent with RPC/provider flakiness during the send/estimation path than with a deterministic contract revert.
I'm avoiding additional write tests so as not to pollute the deployed registries.
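
For completeness, a sketch of that debug-only instrumentation, assuming viem's simulateContract and estimateContractGas on a public client; the createBadge parameter shape is an assumption:

```ts
// Debug-only: exercise eth_call (via simulateContract) and gas estimation
// without sending a transaction, to check ABI + args at the EVM level.
import { createPublicClient, http } from "viem";
import { polygonAmoy } from "viem/chains";
import { badgeRegistryAbiV1 } from "@/lib/abis/badgeRegistryAbi";

const client = createPublicClient({ chain: polygonAmoy, transport: http() });

export async function debugCreateBadge(
  address: `0x${string}`,
  account: `0x${string}`,
  name: string,
  description: `0x${string}` // already encoded for the detected ABI version
) {
  // simulateContract performs an eth_call with the exact calldata the write would use.
  const { request } = await client.simulateContract({
    address,
    abi: badgeRegistryAbiV1,
    functionName: "createBadge",
    args: [name, description],
    account,
  });

  // estimateContractGas exercises the estimation path separately.
  const gas = await client.estimateContractGas({
    address,
    abi: badgeRegistryAbiV1,
    functionName: "createBadge",
    args: [name, description],
    account,
  });

  console.debug("simulation ok; gas estimate:", gas, request);
}
```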

Optional UX follow-up: we could treat Internal JSON-RPC error / transient provider failures as a retryable class and surface a clearer message to users (e.g. "RPC/provider hiccup. Please retry in a few seconds or switch RPC"), possibly with a simple Retry action.

@joelamouche (Contributor) commented

@V-Vaal

> Optional UX follow-up: we could treat Internal JSON-RPC error / transient provider failures as a retryable class and surface a clearer message to users (e.g. "RPC/provider hiccup. Please retry in a few seconds or switch RPC"), possibly with a simple Retry action.

There already was an error message. What do you see now?

@joelamouche (Contributor) left a comment

Tested v1 compat and it works 🥳
Great work on this one 🚀

Regarding error messaging, we previously caught that error and showed a message with a retry prompt for the frequent case where the JSON-RPC call fails (Amoy behavior). If you see it not working anymore, please create a ticket with screenshots.

@joelamouche merged commit f67d93b into TheSoftwareDevGuild:main on Jan 5, 2026 (4 checks passed)