
Multi-tile configuration of CachePool #10

Merged
Aquaticfuller merged 47 commits into main from dev/multi-tile
Mar 27, 2026

Conversation

@DiyouS
Collaborator

@DiyouS DiyouS commented Jan 10, 2026

Overview

This PR introduces the multi-tile configuration of the CachePool cluster, extending the architecture from a single-tile design to a scalable multi-tile system. The default configuration is updated to 4 tiles with 16 cores total.

Features

Group Hierarchy

A new Group level is introduced between the Tile and the Cluster, containing a configurable number of Tiles connected via a crossbar. This enables scaling the core count and cache capacity beyond a single tile while keeping inter-tile communication latency bounded.

Peripheral Reorganization

Memory-mapped registers and peripherals (including the bootrom) are moved out of the Spatz cluster and fully into CachePool at the cluster level. This centralizes control and simplifies the per-tile design. A new register is added to configure the private cache partition start address.

Two-Level Hardware Barrier

A two-level hardware barrier replaces the previous single-level barrier to support global synchronization across all tiles in a group. The first level synchronizes cores within a tile; the second level synchronizes across tiles.

Cache Bank Partitioning

The L1 cache banks can now be partitioned at runtime between a shared pool (accessible cluster-wide via the interconnect) and a private partition (local to each tile). Three configurations are currently supported, with more planned:

  • All-shared: all banks contribute to the cluster-wide interleaved cache pool
  • All-private: all banks are local to the tile, not visible to remote tiles
  • Half-private / half-shared: half the banks are private, half remain in the shared pool

Partitioning is controlled via a memory-mapped register (l1d_private). The interconnect (tcdm_cache_interco) uses a runtime-configurable address rotation scheme to present a dense index space to each cache bank regardless of partition mode, preserving full SRAM utilization. The refill unit applies the inverse rotation before issuing misses to the NoC.

DiyouS and others added 30 commits January 27, 2026 13:11
- pass core wstrb into cachepool_cache_ctrl and use per-byte bank enables
- map wide line SRAMs to byte-wide BE slices in cachepool_tile
- bump 512b line tag/meta width to avoid truncation with byte masks
- update local build/sim overrides used to run the modified insitu-cache
@DiyouS DiyouS marked this pull request as ready for review March 26, 2026 14:56
@DiyouS DiyouS self-assigned this Mar 26, 2026
@DiyouS DiyouS requested a review from Aquaticfuller March 26, 2026 16:05
@DiyouS DiyouS changed the title WIP: Multi-tile configuration of CachePool Multi-tile configuration of CachePool Mar 26, 2026
@Aquaticfuller Aquaticfuller merged commit 514061c into main Mar 27, 2026
7 checks passed