
Implement inference caching during knowledge graph generation  #223

@dhirenmathur

Description


Implement Hash-Based Caching for Knowledge Graph Nodes

Objective

Optimize knowledge graph generation across branches by implementing hash-based caching for node inference and embeddings.

Current Behavior

  • Complete knowledge graph regeneration for each new branch
  • Redundant inference generation for unchanged nodes

Proposed Solution

  • Calculate and store a content hash for each node in the graph
  • Compare node hashes between branches
  • Reuse inference and embeddings for nodes with matching hashes
  • Generate new inference only for modified nodes
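A minimal sketch of the hash-and-compare idea, assuming a node is a dict with `file_path`, `name`, and `text` fields (the real node schema may differ — these names are placeholders):

```python
import hashlib


def node_hash(node: dict) -> str:
    """Stable content hash for a graph node (hypothetical node shape)."""
    # Only fields that should invalidate cached inference go into the hash;
    # adjust this list to match the actual node schema.
    payload = "\n".join([
        node.get("file_path", ""),
        node.get("name", ""),
        node.get("text", ""),  # e.g. the function/class source snippet
    ])
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()


def changed_nodes(old_branch: dict, new_branch: dict) -> set:
    """Ids of nodes whose content hash differs from (or is absent in) the old branch."""
    old_hashes = {nid: node_hash(n) for nid, n in old_branch.items()}
    return {
        nid for nid, n in new_branch.items()
        if old_hashes.get(nid) != node_hash(n)
    }
```

Unchanged nodes (identical hash in both branches) can then reuse the old branch's inference and embeddings; only the returned set needs regeneration.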

Implementation

  1. Add hash generation for nodes
  2. Store hashes in graph structure
  3. Implement hash comparison system
  4. Add cache lookup before inference
  5. Copy matching node data from cache
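Steps 4–5 (cache lookup before inference, copying matching node data) could look roughly like the following sketch, where `cache` is any dict-like store keyed by node hash and `infer` is the expensive inference/embedding call (both names are assumptions, not the actual API):

```python
import hashlib


def generate_with_cache(nodes: dict, cache: dict, infer):
    """For each node: on a cache hit, copy the stored inference/embedding;
    on a miss, run infer() and store the result under the node's hash."""
    results, hits = {}, 0
    for nid, node in nodes.items():
        h = hashlib.sha256(node["text"].encode("utf-8")).hexdigest()
        entry = cache.get(h)
        if entry is not None:
            results[nid] = entry      # reuse cached inference + embedding
            hits += 1
        else:
            entry = infer(node)       # expensive LLM / embedding call
            cache[h] = entry          # populate cache for future branches
            results[nid] = entry
    return results, hits
```

Because the cache is keyed by content hash rather than node id, renamed or moved-but-unchanged nodes still hit the cache, which is what makes cross-branch reuse effective.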

Success Criteria

  • Hash generation working correctly
  • Cache hit/miss working as expected
  • Faster graph generation for similar branches
  • No loss in inference quality
