An application can greatly increase its performance by caching (memoizing) and reusing intermediate results. At the same time, aggressive memory consumption decreases cluster capacity. To provide the best possible service (fast on an empty cluster, with smooth speed degradation on a busy cluster), a new memory management policy is required:
Mandatory memory. It is the minimal memory required for the computation; if it is not available, the application will fail.
Primary (level 1) cache. It is NOT allocated at application startup. Instead, it is requested at runtime. Since it is optional, the memory manager may refuse to allocate extra memory, or may return less memory than requested. Neither outcome should fail the application: it will run slower and may retry the memory request later. If node memory is exhausted, the memory manager may ask the application to reduce its cache size, but this action is not immediate; it may take some time to process an already filled cache.
An example of this kind of cache is an extra buffer for async IO. A smaller (mandatory-only) buffer slows down the IO; a larger (mandatory + primary cache) buffer speeds it up.
Secondary (level 2) cache. It is also NOT allocated at startup and is requested at runtime. The key difference from the primary cache is the ability to instantly drop any secondary cache and return all of its quota to the memory manager synchronously.
Usually the secondary cache is used to back the primary one. If you need to process some data twice, you may download it and keep it in the secondary cache for as long as memory is available. The data will later be moved to the primary cache for processing. If it is discarded, the data will be downloaded again (smooth degradation).
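The request semantics above can be sketched roughly as follows. This is a minimal illustration, not an existing API: MemoryManager, request_mandatory, and request_primary are hypothetical names, and the manager here simply grants whatever fits in its remaining budget.

```python
class MemoryManager:
    """Toy manager: grants whatever fits in the remaining budget (illustrative only)."""

    def __init__(self, total: int):
        self.total = total
        self.used = 0

    def request_mandatory(self, amount: int) -> int:
        # Mandatory memory: the application fails if this cannot be granted.
        if self.used + amount > self.total:
            raise MemoryError("mandatory allocation failed")
        self.used += amount
        return amount

    def request_primary(self, amount: int) -> int:
        # Optional: the manager may grant less than requested, or nothing.
        # A grant of 0 must NOT fail the application; it runs slower
        # and may retry the request later.
        granted = min(amount, self.total - self.used)
        self.used += granted
        return granted


mm = MemoryManager(total=100)
mm.request_mandatory(60)           # must succeed, or the job aborts
granted = mm.request_primary(80)   # only 40 units remain: a partial grant
```

The essential point is that a primary cache grant is best-effort: the caller must be written to work correctly (just slower) with any granted amount, including zero.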
The memory manager may be implemented, for example, with the following policy:
Always satisfy mandatory requests as long as memory is available.
Satisfy primary cache requests while the node uses less than 50% of available memory. When usage grows above 50%, send the application soft ASYNC requests to free the primary cache, completely or partially.
Satisfy secondary cache requests while the node uses less than 75% of available memory. When usage grows above 75%, require the application, via a hard SYNC request, to discard its secondary cache and return the quota to the memory manager immediately.
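The threshold policy above can be condensed into a small decision function. This is a sketch under the 50%/75% thresholds stated in the policy; the function and field names are illustrative.

```python
def cache_decision(used: int, total: int) -> dict:
    """Map current node memory usage to the cache policy described above."""
    usage = used / total
    return {
        # New grants: allowed only below the tier's threshold.
        "grant_primary": usage < 0.50,
        "grant_secondary": usage < 0.75,
        # Reclamation: soft async shrink of the primary cache above 50%,
        # hard sync drop of the secondary cache above 75%.
        "soft_free_primary": usage > 0.50,
        "hard_drop_secondary": usage > 0.75,
    }
```

Note the asymmetry: the primary cache is only asked (asynchronously) to shrink, because emptying it may take time, while the secondary cache is defined to be droppable instantly and can therefore be reclaimed synchronously.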
For extra performance, the memory manager should be integrated with the task planner: reserve mandatory memory at the scheduling phase to decrease runtime failures.