
⚡ Cache label provider capability lists in IntelDisassembler #84

Merged
danielplohmann merged 1 commit into danielplohmann:master from r0ny123:codex/consolidate-label-provider-loops on Mar 23, 2026

⚡ Cache label provider capability lists in IntelDisassembler#84
danielplohmann merged 1 commit intodanielplohmann:masterfrom
r0ny123:codex/consolidate-label-provider-loops

Conversation

Contributor

@r0ny123 r0ny123 commented Mar 6, 2026

💡 What
Cache API-capable and symbol-capable label providers when IntelDisassembler registers them, and reuse those cached lists for API lookup, symbol lookup, and symbol-candidate collection.

🎯 Why
The disassembler was repeatedly scanning the full label_providers list and re-checking each provider's capabilities on hot paths such as symbol-candidate collection and label resolution. That adds avoidable Python loop and method-call overhead during both setup and lookup.
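The pattern described above can be sketched as follows. This is a minimal illustration, not the actual SMDA code: the provider method names (isApiProvider, isSymbolProvider, getSymbolCandidates) and the class shapes are assumptions chosen to show the idea of filtering by capability once at registration time and reusing the cached lists on lookup paths.

```python
class ApiProvider:
    """Illustrative provider that only resolves API labels."""
    def isApiProvider(self):
        return True
    def isSymbolProvider(self):
        return False


class SymbolProvider:
    """Illustrative provider that only resolves symbols."""
    def isApiProvider(self):
        return False
    def isSymbolProvider(self):
        return True
    def getSymbolCandidates(self):
        return [0x401000]


class Disassembler:
    def __init__(self):
        self.label_providers = []
        # Capability-filtered caches, built once at registration time.
        self._api_providers = []
        self._symbol_providers = []

    def addLabelProvider(self, provider):
        self.label_providers.append(provider)
        # Capability checks run once here instead of on every lookup.
        if provider.isApiProvider():
            self._api_providers.append(provider)
        if provider.isSymbolProvider():
            self._symbol_providers.append(provider)

    def getSymbolCandidates(self):
        # Hot path: iterate only the pre-filtered symbol providers,
        # skipping the full-list scan and repeated capability checks.
        candidates = []
        for provider in self._symbol_providers:
            candidates.extend(provider.getSymbolCandidates())
        return candidates


disasm = Disassembler()
disasm.addLabelProvider(ApiProvider())
disasm.addLabelProvider(SymbolProvider())
print(disasm.getSymbolCandidates())  # [4198400]
```

The trade-off is the usual one for cached views: the filtered lists must be maintained wherever providers are registered, which is cheap here because registration happens once per disassembler setup.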

📊 Measured Improvement
A focused setup microbenchmark of _updateLabelProviders() plus getSymbolCandidates() improved from 2.353s to 2.198s over 50,000 iterations, which is about 6.6% faster.

To isolate the exact loop change, I also benchmarked the old full-list symbol-candidate scan against the new cached symbol-provider iteration. That dropped from 10.220s to 6.100s over 200,000 iterations, reducing loop time by about 40.3%.
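For readers who want to reproduce a comparison of this shape, a microbenchmark along these lines can be used. This is an illustrative harness, not the one used for the numbers above: the Provider class and the iteration counts are placeholders.

```python
# Compare a full-list scan with per-provider capability checks (old path)
# against iterating a pre-filtered cached list (new path).
import timeit


class Provider:
    def __init__(self, is_symbol):
        self._is_symbol = is_symbol

    def isSymbolProvider(self):
        return self._is_symbol


providers = [Provider(i % 2 == 0) for i in range(10)]
# Built once, as the cached list would be at registration time.
symbol_providers = [p for p in providers if p.isSymbolProvider()]


def full_scan():
    # Old hot path: scan everything and re-check capabilities each call.
    return [p for p in providers if p.isSymbolProvider()]


def cached_scan():
    # New hot path: the filtering already happened.
    return list(symbol_providers)


t_old = timeit.timeit(full_scan, number=200_000)
t_new = timeit.timeit(cached_scan, number=200_000)
print(f"full scan: {t_old:.3f}s, cached: {t_new:.3f}s")
```

Both paths return the same providers; only the per-call filtering work differs.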

Validation:

  • python -m ruff format smda/intel/IntelDisassembler.py tests/testIntelDisassembler.py
  • python -m ruff check .
  • python -m unittest discover -s tests -t . -p "test*.py"

@r0ny123 r0ny123 changed the title Combine label provider loops ⚡ Cache label provider capability lists in IntelDisassembler Mar 6, 2026
@danielplohmann danielplohmann merged commit 4eec66e into danielplohmann:master Mar 23, 2026
7 checks passed
@r0ny123 r0ny123 deleted the codex/consolidate-label-provider-loops branch March 23, 2026 15:37
