[AutomationAPI/Python] Confirm Parallelism Flag Respected #11123
Labels
- area/automation-api
- impact/performance (Something is slower than expected)
- kind/bug (Some behavior is incorrect or out of spec)
- language/python
- resolution/fixed (This issue was fixed)
- size/S (Estimated effort to complete: 1-2 days)
What happened?
NOTE: The original bug was discovered and patched for the CLI only. See #11116.
This issue tracks the work to confirm that the bug identified in #11116 does not apply to the Automation API. The hardcoded thread count in the runtime code implies to me that the Automation API will have the same bug as #11116, but I wasn't able to trace the code deeply enough to be sure. The hardcoding occurs in `preview`, `up`, and `refresh`.

If the bug is legit, I don't think the fix is as simple as replacing the hardcoded `4` with the value of `parallel`, because that implies to me the total number of threads will now be twice `parallel`: set once when constructing the `Settings` object and once again for handling gRPC requests. There would be two `ThreadPoolExecutor`s allocated, one attached to the default system executor and one provided to the gRPC server. The solution might be to cache the `ThreadPoolExecutor` created in `Settings` and pass that same pool to the gRPC server (see the sketch after the quoted text below).

From the original issue:
> Python programs do not respect the --parallel flag. While the --parallel flag is plumbed through to the language host, and even into the SDK at pulumi/runtime/settings.py, the value is never used.
> Certain RPC calls generate blocking futures that the runtime offloads to a worker thread. For example, any use of run_in_executor throws the future into a worker thread. (Here's where we make that call.)
> The default executor sets the maximum number of worker threads using a fixed value that scales with the number of CPUs on the machine. This is the value that's currently employed.
> This issue is to replace the default executor with one that respects the --parallel flag.
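For reference, CPython's default `ThreadPoolExecutor` caps workers at `min(32, os.cpu_count() + 4)`, which is consistent with the "about 20" concurrent creations observed below. The following is a minimal sketch of the single-shared-executor idea described above, not the actual SDK code; names such as `build_shared_executor` and how `parallel` is threaded through are illustrative assumptions.

```python
# Sketch only: share one ThreadPoolExecutor, sized by --parallel, between
# asyncio's default executor and the gRPC server instead of allocating two.
import asyncio
from concurrent import futures

import grpc


def build_shared_executor(parallel: int) -> futures.ThreadPoolExecutor:
    # One pool sized by --parallel, instead of a hardcoded max_workers=4.
    return futures.ThreadPoolExecutor(max_workers=parallel)


async def serve(parallel: int) -> None:
    executor = build_shared_executor(parallel)

    # Blocking work offloaded via loop.run_in_executor(None, ...) now uses
    # the shared pool...
    loop = asyncio.get_running_loop()
    loop.set_default_executor(executor)

    # ...and so do the gRPC request handlers, so the process does not end up
    # with 2 * parallel threads across two separate pools.
    server = grpc.server(executor)
    server.add_insecure_port("127.0.0.1:0")
    server.start()
```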
Steps to reproduce
Copy the steps from #11116, but deploy the code with the Automation API.
Run the following program. Even though parallelism is unbounded by default, the maximum number of concurrently created resources is limited.
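The program from #11116 is not reproduced here; the following is a hedged sketch of how it could be driven through the Automation API, using a dynamic resource whose `create` sleeps so that concurrency is easy to observe. The project and stack names, resource count, and sleep duration are illustrative assumptions.

```python
# Hypothetical reproduction adapted from #11116 for the Automation API.
import time

from pulumi import automation as auto
from pulumi.dynamic import CreateResult, Resource, ResourceProvider


class SleepProvider(ResourceProvider):
    def create(self, props):
        # Hold the worker thread so concurrent creates are easy to observe.
        time.sleep(10)
        return CreateResult(id_="sleep", outs={})


class Sleep(Resource):
    def __init__(self, name, opts=None):
        super().__init__(SleepProvider(), name, {}, opts)


def pulumi_program():
    # With unbounded parallelism these should all start at about the same time.
    for i in range(64):
        Sleep(f"sleep-{i}")


stack = auto.create_or_select_stack(
    stack_name="dev",
    project_name="parallelism-repro",
    program=pulumi_program,
)
# Parallelism is unbounded by default; the bug is that the Python runtime
# caps the number of worker threads regardless of this setting.
stack.up(on_output=print)
```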
Expected Behavior
All resources should be created concurrently.
Actual Behavior
Only about 20 resources are created at a time.
Output of pulumi about
Additional context
Original PR: #11122
Contributing
Vote on this issue by adding a 👍 reaction.
To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).