
Add API to configure memory allocator settings #453

Merged 1 commit on Jun 26, 2023

Commits on Jun 22, 2023

  1. Add API to configure memory allocator settings

    - Added the Python API torch.mps.memory.set_allocator_settings(str) to adjust the
    high/low watermark ratios, the small/large/xlarge heap size divisors, the max pow2
    roundup size, and the debug verbosity of MPSAllocator messages.
    - Added the env var "PYTORCH_MPS_ALLOC_CONF" as an alternative way to pass the
    settings (similar to CUDA's PYTORCH_CUDA_ALLOC_CONF).
    - Removed the old env vars PYTORCH_DEBUG_MPS_ALLOCATOR, PYTORCH_MPS_HIGH_WATERMARK_RATIO, and PYTORCH_MPS_LOW_WATERMARK_RATIO.
    - Fixed a bug in total_memory_allocated_size so that it counts the total size of MTLHeaps rather than MTLBuffers (heaps may not be fully occupied).
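    Since the env var mirrors CUDA's allocator config, the settings string presumably follows the same comma-separated "key:value" format. Below is a minimal sketch of how such a string could be parsed; the key names are hypothetical and may not match what the MPSAllocator actually recognizes.

    ```python
    import os

    def parse_alloc_conf(conf: str) -> dict:
        """Split a CUDA-style "key:value,key:value" settings string into a dict.

        Illustrative only: the real keys accepted by set_allocator_settings()
        / PYTORCH_MPS_ALLOC_CONF are defined by the MPSAllocator itself.
        """
        settings = {}
        for pair in conf.split(","):
            key, _, value = pair.partition(":")
            settings[key.strip()] = value.strip()
        return settings

    # The same string could be supplied via the env var instead of the API:
    # os.environ["PYTORCH_MPS_ALLOC_CONF"] = "high_watermark_ratio:0.95"
    print(parse_alloc_conf("high_watermark_ratio:0.95,debug_verbosity:2"))
    # → {'high_watermark_ratio': '0.95', 'debug_verbosity': '2'}
    ```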
    razarmehr committed Jun 22, 2023
    Commit c52c33a