Using azblob backend, `pulumi stack ls` runs quite slowly #8872
Yes, this is just how the blob storage backend works. I would expect a lot of traffic for `pulumi stack ls`.
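For anyone curious where the traffic comes from: as far as I can tell, the filestate backends store each stack's checkpoint as a single JSON blob under `.pulumi/stacks/`, and `pulumi stack ls` has to read those checkpoints to report each stack's last update time and resource count. You can eyeball the blob sizes involved with the Azure CLI; a sketch, where `<account>` and `<container>` are placeholders for your storage account and container:

```bash
# List the checkpoint blobs behind the Pulumi azblob backend, with sizes.
az storage blob list \
  --account-name <account> \
  --container-name <container> \
  --prefix .pulumi/stacks/ \
  --query "[].{name:name, bytes:properties.contentLength}" \
  --output table
```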
I am running into the same issue. I simply want to know which stacks exist in the s3 backend and which stack is currently selected by the Pulumi CLI. Currently I'm using these commands for that:
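Something like the following, assuming the standard `pulumi` subcommands (`--show-name` is an assumption on my part):

```bash
# List every stack in the backend (this is the slow call):
pulumi stack ls

# Print only the name of the currently selected stack:
pulumi stack --show-name
```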
However, these operations take a long time to complete.
@simonkarman: I wrote some little scripts a while back to quickly access the current stack in a generalized way:
```bash
#!/usr/bin/env bash
# get-pulumi-stack-file: prints the path of the local Pulumi workspace file
# for this project. Pulumi names workspace files
# "<project>-<sha1 of the Pulumi.yaml path>-workspace.json".
# This is relative to our git root for a project, you can adjust as necessary.
CFG_PATH="$(git rev-parse --show-toplevel)/pulumi/Pulumi.yaml"
WORKSPACE_HASH=$(echo -n "$CFG_PATH" | sha1sum | sed 's/ .*//')
WORKSPACE_NAME=$(grep name: "$CFG_PATH" | sed 's/name: //g')
echo -n "$HOME/.pulumi/workspaces/$WORKSPACE_NAME-$WORKSPACE_HASH-workspace.json"
```
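For reference, the workspace file that path points at is a tiny local JSON document, which is why reading it is fast; a quick way to peek at it (exact contents will vary):

```bash
# Print the workspace file; it holds just the selected stack, e.g. {"stack":"dev"}
cat "$(./get-pulumi-stack-file)"
```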
```bash
#!/usr/bin/env bash
# current-stack-fast: prints the currently selected stack by reading the
# local workspace file instead of querying the backend.
# Adjust this to refer to the script above:
WORKSPACE_FILE=$(./get-pulumi-stack-file)
if [ -f "$WORKSPACE_FILE" ]; then
  jq -r '.stack' "$WORKSPACE_FILE"
fi
```

Timing for my script: effectively instant, since it never touches the backend. We use this primarily to switch the kubernetes context depending on the currently active pulumi stack; sharing here as it might also be useful:
```bash
#!/usr/bin/env bash
# Updates k8s creds to the currently active pulumi stack.
CURRENT_STACK=$("$INFRA_SCRIPT_ROOT"/current-stack-fast)
KUBE_CONTEXT="ourcompany-${CURRENT_STACK}"
# If kubectl doesn't know the context yet, fetch fresh credentials.
if ! kubectl config use-context "$KUBE_CONTEXT"; then
  "$INFRA_SCRIPT_ROOT"/get-k8s-creds
fi
```
We use this in conjunction with …
Hello!
Issue details
When using an `azblob://` pulumi backend, the `pulumi stack ls` command is quite slow. From five successive trials, about 20s is the floor, but I've seen this command take over 3 minutes before. Others on my team have seen similar behavior.
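For anyone who wants to reproduce the measurement, a minimal timing loop (assuming you are already logged into the azblob backend):

```bash
# Time five successive runs; discard stdout so the measurement reflects
# backend traffic rather than terminal rendering.
for i in 1 2 3 4 5; do
  time pulumi stack ls > /dev/null
done
```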
The storage account on Azure is a StorageV2 account with Standard/Hot access in the same location that my team and I live and work.
We're seeing a large disparity between E2E latency on Azure and server latency. In the latency chart, red is E2E latency and blue is server latency. We're all on fast, stable connections with low CPU and memory usage, so my feeling is that this is on the Pulumi side, but I can't say for sure.
We have three mostly identical stacks stored in our backend; for one particular stack, a single `pulumi stack ls` network request downloads almost 40MB, and very slowly at that (qualitative, as I watched the Network tab in Activity Monitor on macOS). It seems like commands like `pulumi stack ls` could be sped up by avoiding a full download of all of the state files.
Versions: currently running pulumi `v3.22.1`.
Steps to reproduce
Most of the details here are in the summary above; if it would be helpful, I can try to put together a simple test case.
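Roughly, a reproduction would look like the sketch below (the storage account, key, and container are placeholders, and you'd need a few stacks with large checkpoints in the container):

```bash
# Point the Pulumi CLI at an Azure Blob Storage backend.
export AZURE_STORAGE_ACCOUNT=<account>
export AZURE_STORAGE_KEY=<key>
pulumi login azblob://<container>

# With several multi-MB checkpoints stored, this takes tens of seconds:
time pulumi stack ls
```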
Expected: `pulumi stack ls` runs quickly.
Actual: `pulumi stack ls` takes a long time to run.