A sophisticated memory management system designed for kernel memory allocation with orthogonal multi-dimensional requirements and intelligent resource optimization.
The system implements a hierarchical memory architecture with three dimensions:
- Processing Elements (PE): Top-level compute units
- Memory Sub-Systems (MSS): Memory controllers within each PE
- Slices: Memory segments within each MSS
The system starts with a universal mapping covering all coordinates and progressively forks into specialized mappings as allocation patterns diverge.
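For orientation, here is a small sketch of the coordinate space those three dimensions define; the `Coordinate` tuples and the enumeration are illustrative only, not part of the library's API:

```python
from itertools import product

# Configuration used in the examples below: 2 PEs, 2 MSS per PE, 8 slices per MSS.
pe_count, mss_per_pe, slices_per_mss = 2, 2, 8

# Every (pe, mss, slice) triple is one coordinate; the universal mapping covers all of them.
coordinates = list(product(range(pe_count), range(mss_per_pe), range(slices_per_mss)))
assert len(coordinates) == 32  # 2 * 2 * 8
```

The example below then shows how a PE-specific requirement causes that single universal mapping to fork.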
```python
# Starts with one universal mapping
manager = MappingCentricMemoryManager(pe_count=2, mss_per_pe=2, slices_per_mss=8)

# PE-specific allocation triggers forking
pe_specific = MemoryRequirement(
    size=1024,
    pe_req=DimensionRequirement(DimensionScope.SPECIFIC, value=0),  # PE 0 only
    mss_req=DimensionRequirement(DimensionScope.ALL),               # All MSS
    slice_req=DimensionRequirement(DimensionScope.ALL),             # All slices
)
```

The system can collect multiple requirements and allocate them in an optimal order to minimize conflicts and mapping forking:
```python
# Collect requirements for batch processing
manager = MappingCentricMemoryManager(pe_count=2, mss_per_pe=2, slices_per_mss=8)

# Add requirements in any order - system will optimize
manager.collect_requirement(pe_specific_req)     # Would normally cause early forking
manager.collect_requirement(auto_selection_req)  # Auto-selection
manager.collect_requirement(global_req)          # Broad scope (will be processed first)
manager.collect_requirement(mss_specific_req)    # MSS-specific

# Allocate all in optimized order
batch_results = manager.allocate_all()
print(f"Successful: {batch_results['successful_allocations']}")
print(f"Total forks: {sum(r['fork_occurred'] for r in batch_results['allocation_details'])}")
```

Optimization Strategy (a sort-key sketch follows this list):
- Scope breadth first: Process ALL scope before SPECIFIC scope before GROUP scope
- Minimize auto-selections early: Handle explicit values before auto-selections
- Size priority: Larger allocations processed first
- Mode preference: Serial allocations before parallel allocations
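A rough sketch of how these rules could be expressed as a single sort key; this is illustrative only, not the actual `allocate_all()` implementation, and the import path plus the `scope`/`value` attribute names on `DimensionRequirement` are assumptions based on the constructor calls above:

```python
from memory_manager import DimensionScope, SliceAllocationMode  # module path assumed

# Lower-sorting key tuples are allocated first.
SCOPE_ORDER = {DimensionScope.ALL: 0, DimensionScope.SPECIFIC: 1, DimensionScope.GROUP: 2}

def allocation_order_key(req):
    dims = (req.pe_req, req.mss_req, req.slice_req)
    broadest = min(SCOPE_ORDER[d.scope] for d in dims)        # 1. broader scope first
    auto_count = sum(1 for d in dims
                     if d.scope == DimensionScope.SPECIFIC and d.value is None)  # 2. explicit before auto
    parallel = req.allocation_mode == SliceAllocationMode.PARALLEL               # 4. serial before parallel
    return (broadest, auto_count, -req.size, parallel)                           # 3. larger sizes first

def order_requirements(requirements):
    """Return the collected requirements in the processing order described above."""
    return sorted(requirements, key=allocation_order_key)
```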
Each memory requirement tracks its fulfillment state and detailed allocation information:
```python
# Create a requirement
req = MemoryRequirement(
    size=512,
    pe_req=DimensionRequirement(DimensionScope.SPECIFIC, value=None),   # Auto-select PE
    mss_req=DimensionRequirement(DimensionScope.SPECIFIC, value=None),  # Auto-select MSS
    slice_req=DimensionRequirement(DimensionScope.ALL),                 # All slices
    allocation_id="my_buffer"
)

# Check initial state
print(f"State: {req.state.value}")         # "pending"
print(f"Fulfilled: {req.is_fulfilled()}")  # False

# Allocate
success = manager.allocate_requirement(req)

# Check final state and details
if req.is_fulfilled():
    details = req.allocation_details
    print(f"State: {req.state.value}")                         # "fulfilled"
    print(f"Address: 0x{details.allocated_address:08x}")       # Allocated address
    print(f"Resolved PE: {details.resolved_pe}")                # System-chosen PE
    print(f"Resolved MSS: {details.resolved_mss}")              # System-chosen MSS
    print(f"Slices: {details.resolved_slice_values}")           # Affected slice(s)
    print(f"Mappings: {details.mapping_count_at_allocation}")   # Mapping count at time of allocation
```

Memory requirements can be specified independently across dimensions:
```python
# Uniform allocation across all resources
uniform_req = MemoryRequirement(
    size=4096,
    pe_req=DimensionRequirement(DimensionScope.ALL),     # All PEs
    mss_req=DimensionRequirement(DimensionScope.ALL),    # All MSS
    slice_req=DimensionRequirement(DimensionScope.ALL),  # All slices
)

# PE-specific with auto-selected MSS
pe_specific_req = MemoryRequirement(
    size=2048,
    pe_req=DimensionRequirement(DimensionScope.SPECIFIC, value=1),      # PE 1 only
    mss_req=DimensionRequirement(DimensionScope.SPECIFIC, value=None),  # Auto-select MSS
    slice_req=DimensionRequirement(DimensionScope.ALL),                 # All slices
)
```

The system automatically selects optimal resources based on available space and load balancing:
```python
auto_req = MemoryRequirement(
    size=1024,
    pe_req=DimensionRequirement(DimensionScope.SPECIFIC, value=None),   # System chooses PE
    mss_req=DimensionRequirement(DimensionScope.SPECIFIC, value=None),  # System chooses MSS
    slice_req=DimensionRequirement(DimensionScope.ALL),                 # All slices
)
```

The system supports parallel allocation across slice groups with automatic distribution:
```python
parallel_req = MemoryRequirement(
    size=2048,  # 512 bytes per slice in group
    pe_req=DimensionRequirement(DimensionScope.ALL),
    mss_req=DimensionRequirement(DimensionScope.ALL),
    slice_req=DimensionRequirement(DimensionScope.GROUP, group=SliceGroup.GROUP_0_3),
    allocation_mode=SliceAllocationMode.PARALLEL
)
```

The system provides comprehensive tracking of all processed requirements:
```python
# Get summary statistics
summary = manager.get_requirements_summary()
print(f"Total: {summary['total_requirements']}")
print(f"Fulfilled: {summary['fulfilled_count']}")
print(f"Pending: {summary['pending_count']}")

# Print detailed status of all requirements
manager.print_requirements_summary()
```

`MappingCentricMemoryManager` is the main memory manager implementing the multi-dimensional allocation system, with these key methods (a minimal interface sketch follows this list):
- `collect_requirement(req)`: Collect a requirement for later batch allocation
- `allocate_all()`: Allocate all collected requirements in optimal order
- `allocate_requirement(req)`: Immediate allocation of a single requirement
- `get_requirements_summary()`: Get statistics about processed requirements
- `print_requirements_summary()`: Print detailed requirement status
- `total_allocated_bytes()`: Returns total allocated bytes across all coordinates
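For quick reference, a minimal interface sketch of the manager's public surface, with signatures inferred from the examples in this README; the type hints and return types are assumptions:

```python
class MappingCentricMemoryManager:
    def __init__(self, pe_count: int, mss_per_pe: int, slices_per_mss: int): ...
    def collect_requirement(self, req: "MemoryRequirement") -> None: ...
    def allocate_all(self) -> dict: ...              # 'successful_allocations', 'allocation_details', ...
    def allocate_requirement(self, req: "MemoryRequirement") -> bool: ...
    def get_requirements_summary(self) -> dict: ...  # 'total_requirements', 'fulfilled_count', 'pending_count'
    def print_requirements_summary(self) -> None: ...
    def get_memory_stats(self) -> dict: ...          # 'total_mappings', 'mappings'
    def total_allocated_bytes(self) -> int: ...
```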
`MemoryRequirement` represents a memory allocation request with:
- State tracking: `RequirementState.PENDING` → `RequirementState.FULFILLED`
- Allocation details: Address, resolved dimensions, mapping count
- Dimension requirements: Independent specifications for PE, MSS, and slice allocation
- Allocation mode: Serial or parallel slice allocation
- Size calculation: `total_allocation_size()` computes total bytes allocated across all affected coordinates
Key methods:
- `get_affected_coordinates()`: Returns all coordinate locations affected by this requirement
- `total_allocation_size()`: Returns `size * number_of_affected_coordinates`, the total bytes allocated
- `is_fulfilled()`: Check whether the requirement has been successfully allocated
- `get_fulfillment_summary()`: Get a detailed status string with allocation information
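For example, following the formula above, a `size=512` requirement with ALL scope in every dimension of a manager built with `pe_count=2, mss_per_pe=2, slices_per_mss=8` affects 2 × 2 × 8 = 32 coordinates, so `total_allocation_size()` would return 512 × 32 = 16,384 bytes (illustrative numbers, not library output).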
A requirement's `allocation_details` contains fulfillment information:
- `allocated_address`: Virtual address where memory was allocated
- `resolved_pe`: Final PE value (for auto-selected or ALL scope)
- `resolved_mss`: Final MSS value (for auto-selected or ALL scope)
- `resolved_slice_values`: List of affected slice indices
- `mapping_count_at_allocation`: Number of mappings when allocated
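A minimal sketch of what such a details record could look like as a dataclass; the class name, types, and defaults are assumptions, only the field names come from the list above:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AllocationDetails:                   # class name assumed for illustration
    allocated_address: int                 # virtual address of the allocation
    resolved_pe: Optional[int] = None      # final PE (when auto-selected or ALL scope)
    resolved_mss: Optional[int] = None     # final MSS (when auto-selected or ALL scope)
    resolved_slice_values: List[int] = field(default_factory=list)  # affected slice indices
    mapping_count_at_allocation: int = 1   # mapping count at allocation time
```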
`DimensionRequirement` specifies requirements for a single dimension (PE, MSS, or slice).
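The scope variants used throughout this README can be summarized as follows; the import path is assumed, and the snippet only restates usages that appear in the examples:

```python
from memory_manager import DimensionRequirement, DimensionScope, SliceGroup  # module path assumed

all_pes    = DimensionRequirement(DimensionScope.ALL)                   # span the whole dimension
pe_zero    = DimensionRequirement(DimensionScope.SPECIFIC, value=0)     # pin to an explicit index
auto_pe    = DimensionRequirement(DimensionScope.SPECIFIC, value=None)  # let the system auto-select
first_four = DimensionRequirement(DimensionScope.GROUP, group=SliceGroup.GROUP_0_3)  # predefined slice group
```

The examples below combine these per-dimension requirements into complete allocations.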
Immediate allocation of a single requirement:

```python
manager = MappingCentricMemoryManager(pe_count=2, mss_per_pe=4, slices_per_mss=8)

req = MemoryRequirement(
    size=4096,
    pe_req=DimensionRequirement(DimensionScope.ALL),
    mss_req=DimensionRequirement(DimensionScope.ALL),
    slice_req=DimensionRequirement(DimensionScope.ALL),
    allocation_id="kernel_buffer"
)

success = manager.allocate_requirement(req)
if success:
    print(f"Allocated at: 0x{req.allocation_details.allocated_address:08x}")
```

Batch collection with optimized allocation order:

```python
manager = MappingCentricMemoryManager(pe_count=2, mss_per_pe=2, slices_per_mss=8)

# Collect requirements in any order
manager.collect_requirement(MemoryRequirement(
    size=512,
    pe_req=DimensionRequirement(DimensionScope.SPECIFIC, value=0),  # PE 0 specific
    mss_req=DimensionRequirement(DimensionScope.ALL),
    slice_req=DimensionRequirement(DimensionScope.ALL),
    allocation_id="pe0_cache"
))
manager.collect_requirement(MemoryRequirement(
    size=1024,
    pe_req=DimensionRequirement(DimensionScope.ALL),  # Global (will be processed first)
    mss_req=DimensionRequirement(DimensionScope.ALL),
    slice_req=DimensionRequirement(DimensionScope.ALL),
    allocation_id="global_data"
))

# System optimizes order and allocates
results = manager.allocate_all()
print(f"Allocated {results['successful_allocations']} requirements")
print(f"Total forks: {sum(r['fork_occurred'] for r in results['allocation_details'])}")
```

Automatic resource selection:

```python
req = MemoryRequirement(
    size=1024,
    pe_req=DimensionRequirement(DimensionScope.SPECIFIC, value=None),   # Auto-select
    mss_req=DimensionRequirement(DimensionScope.SPECIFIC, value=None),  # Auto-select
    slice_req=DimensionRequirement(DimensionScope.ALL),
    allocation_id="auto_buffer"
)

success = manager.allocate_requirement(req)
if req.is_fulfilled():
    details = req.allocation_details
    print(f"System chose PE {details.resolved_pe}, MSS {details.resolved_mss}")
    print(f"Allocated at 0x{details.allocated_address:08x}")
```

Memory statistics and requirement review:

```python
stats = manager.get_memory_stats()
print(f"Total mappings: {stats['total_mappings']}")
print(f"Total allocated: {sum(m['total_allocated'] for m in stats['mappings']):,} bytes")

# Review all processed requirements
manager.print_requirements_summary()
```

Key characteristics of the system:

- Intelligent Resource Management: Dynamic mapping forking and automatic resource selection
- Optimal Allocation Ordering: Batch processing minimizes conflicts and mapping fragmentation
- Complete Transparency: Full tracking of requirement state and allocation details
- Scalability: Efficiently handles complex allocation patterns with minimal overhead
- Flexibility: Orthogonal dimension requirements support diverse allocation patterns
- Optimization: Smart resource selection and load balancing
- Consistency: Unified approach to different allocation scenarios
- Visibility: Comprehensive statistics and requirement tracking
- `memory_manager.py`: Core implementation with requirement tracking and batch allocation
- `memory_manager_demo.py`: Interactive demonstration of all features, including batch allocation
- `test_requirements_tracking.py`: Focused test of fulfillment tracking
- `test_batch_allocation.py`: Comprehensive test of batch allocation optimization
- `simple_batch_test.py`: Simple demonstration of batch allocation benefits
- `README.md`: This documentation
```bash
python memory_manager.py
python memory_manager_demo.py
python test_requirements_tracking.py
python simple_batch_test.py
```

- Multi-dimensional Orthogonality: Requirements in different dimensions are independent
- Copy-on-Write Mapping: Shared mappings are forked only when allocation patterns diverge (see the sketch after this list)
- Intelligent Selection: Automatic resource selection based on utilization and constraints
- Optimal Ordering: Batch allocation minimizes conflicts through strategic requirement ordering
- Requirement Transparency: Complete visibility into allocation decisions and results
- Conflict Resolution: Intersection-based allocation for cross-mapping requirements
- Performance: Efficient memory utilization with minimal fragmentation
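To illustrate the copy-on-write idea, here is a simplified fork-on-divergence sketch; it is not the `memory_manager.py` implementation, and the `Mapping` structure and fork rule are assumptions:

```python
from dataclasses import dataclass, field
from typing import Dict, FrozenSet, List, Tuple

Coordinate = Tuple[int, int, int]  # (pe, mss, slice)

@dataclass
class Mapping:
    """Illustrative shared mapping: one bookkeeping record for a set of coordinates."""
    coordinates: FrozenSet[Coordinate]
    allocations: Dict[str, int] = field(default_factory=dict)  # allocation_id -> offset
    next_offset: int = 0

def fork_if_diverged(mappings: List[Mapping], requested: FrozenSet[Coordinate]) -> List[Mapping]:
    """Split any mapping that only partially overlaps the requested coordinates."""
    result: List[Mapping] = []
    for m in mappings:
        inside = m.coordinates & requested
        if not inside or inside == m.coordinates:
            # Fully outside or fully inside the request: keep sharing, no fork needed.
            result.append(m)
        else:
            # Partial overlap: copy-on-write fork into "inside" and "outside" mappings.
            outside = m.coordinates - inside
            result.append(Mapping(inside, dict(m.allocations), m.next_offset))
            result.append(Mapping(outside, dict(m.allocations), m.next_offset))
    return result
```

Under this sketch, mappings that fully contain or fully exclude a request stay shared; only partially overlapping mappings pay the cost of a copy.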