The Cloud Run functions we are using are billed per vCPU-second (compute) and per GB-second (memory). Currently the orchestrator is configured with 4 GB / 4 vCPU, and the offset tile with 4 GB / 8 vCPU. Looking at the orchestrator metrics, there is room for optimisation: average run time is quite low at the moment because of the dry runs (average billed time is 2.63 s). The same applies to the offset tile, whose average billed time is around 2.02 s. I've limited the concurrency of the offset tile Cloud Run service to 20 per orchestrator (so at most 20 instances run per orchestrator at the same time). I think we could reduce this further without a significant increase in run time. Our queue is defined so that we send a small number of tasks at a time; the idea is to reuse already-started containers as much as possible and avoid the startup cost. I've created this issue for further reference.
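To compare resource configurations, a minimal per-invocation cost sketch can help. The rates below are illustrative placeholders, not actual Cloud Run pricing (which varies by region and tier); the vCPU/memory/billed-time figures are the ones quoted above.

```python
def billed_cost(vcpus, mem_gb, billed_s,
                vcpu_rate=0.000024, gb_rate=0.0000025):
    """Cost of one invocation: compute (vCPU-seconds) plus memory (GB-seconds).

    vcpu_rate and gb_rate are placeholder prices per vCPU-s and per GB-s;
    substitute the real rates for your region before drawing conclusions.
    """
    return vcpus * billed_s * vcpu_rate + mem_gb * billed_s * gb_rate

# Current configurations and average billed times from the metrics above.
orchestrator = billed_cost(vcpus=4, mem_gb=4, billed_s=2.63)
offset_tile = billed_cost(vcpus=8, mem_gb=4, billed_s=2.02)

# Halving the offset tile's vCPUs roughly halves its compute cost,
# provided the billed time does not grow proportionally.
offset_tile_halved = billed_cost(vcpus=4, mem_gb=4, billed_s=2.02)
```

This makes the trade-off explicit: downsizing only pays off if the billed time grows by less than the resource reduction.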
The text was updated successfully, but these errors were encountered: