lunaticscode/optimize-memorize-rule

optimize-memorize-rule

Feature Request: Structural (deep) comparison option for useMemo / React.memo dependencies

Summary

Currently, useMemo, useCallback, and React.memo rely on referential equality (Object.is) to determine whether dependencies have changed. This means that even when an object or array has exactly the same content, a new reference triggers an unnecessary re-render.

I'd like to propose a built-in structural comparison mode that serializes dependency values into a deterministic string key and compares that string instead of references.

Motivation

This is a common pain point in React applications:

const Parent = () => {
  const [count, setCount] = useState(0);

  // New reference every render, but same content
  const config = { theme: "dark", locale: "en" };

  return (
    <>
      <button onClick={() => setCount(c => c + 1)}>count: {count}</button>
      {/* Re-renders every time even though config content is identical */}
      <MemoizedChild config={config} />
    </>
  );
};

Current workarounds and their limitations:

Approach                         Limitation
Manually wrap with useMemo       Requires developer discipline; easy to miss
JSON.stringify in deps           Cannot handle Map, Set, Date, Function, FormData, File, or circular references
React Compiler (React 19+)       Automates memoization, but still relies on referential equality internally
Custom areEqual in React.memo    Per-component and manual; not reusable across hooks
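The JSON.stringify limitation in particular is easy to demonstrate: built-in container types collapse to "{}", function-valued properties are dropped, and circular references throw:

```javascript
// Map and Set have no enumerable own properties, so distinct
// contents all serialize to the same "{}"
JSON.stringify(new Map([["a", 1]])); // "{}"
JSON.stringify(new Set([1, 2, 3])); // "{}"

// Function-valued properties are silently omitted
JSON.stringify({ onClick: () => {} }); // "{}"

// Circular references throw instead of degrading gracefully
const obj = {};
obj.self = obj;
// JSON.stringify(obj); -> TypeError: Converting circular structure to JSON
```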

Proposed Approach

A type-aware recursive serializer that produces a deterministic string key from any dependency value:

// Usage would look like:
const MemorizedComponent = useMemo(
  () => <ExpensiveComponent data={data} />,
  [getMemorizedRule([data, filters, config])],
);

Core serialization logic — getValueOfKey(value, depth):

  • Primitives (string, number, boolean, bigint, symbol): String(value) with delimiter
  • null / undefined: literal "null" / "undefined"
  • Array: preserve order — [elem0,elem1,...], recurse into elements
  • Plain Object: sort keys alphabetically — {a:val,b:val}, recurse into values
  • Map: sort by key — Map{key=>val,...}, recurse into both keys and values
  • Set: sort elements — Set{elem,...}, recurse into elements
  • Date: value.getTime() (millisecond timestamp)
  • Function: Fn(functionName) via value.name
  • FormData: sort by key, distinguish File entries — FormData{key:val,...}
  • File: File(name|size|lastModified)

Safety mechanisms:

  • Depth limit (MAX_DEPTH = 5): returns "..." beyond max depth, preventing infinite recursion from circular references
  • Error fallback: on any serialization failure, returns Symbol() (always unique -> guarantees re-render, never swallows updates)
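The serialization rules and safety mechanisms above can be sketched roughly as follows. This is a minimal illustration, not the shipped implementation: the exact delimiters are assumptions, and the FormData/File branches are omitted for brevity.

```javascript
const MAX_DEPTH = 5;

function getValueOfKey(value, depth = 0) {
  if (depth > MAX_DEPTH) return "..."; // stops circular-reference recursion
  if (value === null) return "null";
  if (value === undefined) return "undefined";

  const t = typeof value;
  if (t !== "object" && t !== "function") {
    return `${t}:${String(value)}`; // primitives, with a type delimiter
  }
  if (t === "function") return `Fn(${value.name})`;

  if (Array.isArray(value)) {
    // preserve element order
    return `[${value.map((v) => getValueOfKey(v, depth + 1)).join(",")}]`;
  }
  if (value instanceof Date) return `Date(${value.getTime()})`;
  if (value instanceof Map) {
    // sort serialized entries so insertion order does not matter
    const entries = [...value.entries()]
      .map(([k, v]) => `${getValueOfKey(k, depth + 1)}=>${getValueOfKey(v, depth + 1)}`)
      .sort();
    return `Map{${entries.join(",")}}`;
  }
  if (value instanceof Set) {
    const elems = [...value].map((v) => getValueOfKey(v, depth + 1)).sort();
    return `Set{${elems.join(",")}}`;
  }
  // plain object: sort keys so {a, b} and {b, a} produce the same key
  const keys = Object.keys(value).sort();
  return `{${keys.map((k) => `${k}:${getValueOfKey(value[k], depth + 1)}`).join(",")}}`;
}

function getMemorizedRule(deps) {
  try {
    return deps.map((d) => getValueOfKey(d)).join("|");
  } catch {
    return Symbol(); // unique on every failure -> always re-renders
  }
}
```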

Benchmark Results

Tested with 4-level deep nested structures (Object -> Map -> Object -> Array -> primitive):

Structure               Total (3 calls)    Per call
Object + Array + Map    0.0285 ms          ~0.0095 ms
Set + Object + Array    0.0262 ms          ~0.0087 ms
Map + Array + Object    0.0181 ms          ~0.0060 ms

At ~0.01ms per call, this is negligible compared to a 16ms frame budget. The overhead is justified when it prevents expensive subtree re-renders caused by referential inequality.
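A timing harness along these lines could reproduce the per-call numbers; `bench` is a hypothetical helper, and any key function can stand in for the serializer (`performance` is a global in Node >= 16):

```javascript
// Time `calls` invocations of a key function over one value
function bench(keyFn, value, calls = 3) {
  const start = performance.now();
  for (let i = 0; i < calls; i += 1) keyFn(value);
  const totalMs = performance.now() - start;
  return { totalMs, perCallMs: totalMs / calls };
}

// 4-level structure in the spirit of the benchmark: Object -> Map -> Object -> Array
const nested = { data: new Map([["rows", { list: [1, 2, 3] }]]) };
const result = bench(
  (v) => JSON.stringify(v, (_, x) => (x instanceof Map ? [...x] : x)),
  nested,
);
```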

Test Coverage

25 passing tests covering:

  • Primitive types (equality, delimiter separation, null/undefined distinction)
  • Arrays (order preservation, nested mutation detection)
  • Plain objects (key-order independence, deep value changes)
  • Date, Map, Set, Function types
  • Circular reference safety via depth limiting
  • Mixed multi-type dependency arrays
  • 4-level deep nested complex structures

Trade-offs & Open Questions

  1. When NOT to use: For shallow, primitive-only deps, Object.is is faster and sufficient. Structural comparison is most valuable when deps contain objects/arrays that are reconstructed each render with the same content.

  2. Function identity: Currently uses function.name only. Two different closures with the same name produce the same key. function.toString() is an alternative but is expensive and still cannot capture closed-over variables.
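A small illustration of the collision, with `fnKey` standing in for the proposed Fn(name) rule:

```javascript
// fnKey mirrors the name-based rule described above (hypothetical helper)
const fnKey = (fn) => `Fn(${fn.name})`;

const makeHandler = (x) => {
  const onClick = () => x; // name is inferred as "onClick" every time
  return onClick;
};

const a = makeHandler(1);
const b = makeHandler(2);
fnKey(a) === fnKey(b); // true: both "Fn(onClick)", yet a() !== b()
```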

  3. Depth limit: Fixed at 5. Should this be configurable? Deeper structures produce longer keys, trading comparison speed for detection coverage.

  4. Potential integration points:

    • A new hook: useMemoDeep(() => value, deps)
    • An option on existing hooks: useMemo(() => value, deps, { compare: "structural" })
    • A standalone utility (current approach) that composes with existing useMemo
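Whichever surface is chosen, the three options could share one framework-agnostic core. A sketch under assumed names (createDeepMemo, keyFn), with JSON.stringify standing in for the real serializer:

```javascript
// Recompute the factory only when the structural key of deps changes;
// a useMemoDeep hook could hold this state in a ref instead of a closure.
function createDeepMemo(keyFn) {
  let lastKey;
  let lastValue;
  let initialized = false;
  return (factory, deps) => {
    const key = keyFn(deps);
    if (!initialized || key !== lastKey) {
      lastKey = key;
      lastValue = factory();
      initialized = true;
    }
    return lastValue;
  };
}

const memo = createDeepMemo(JSON.stringify);
let computes = 0;
memo(() => ++computes, [{ theme: "dark" }]); // computes -> 1
memo(() => ++computes, [{ theme: "dark" }]); // fresh object, same key: skipped
```

Note that the Symbol() error fallback composes cleanly here: a Symbol never equals the previous key, so serialization failures always recompute.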
