Merged

46 commits
747a05f
Tweak fluent bit configuration (#1940)
moritzkiefer-da Aug 18, 2025
4c92241
Reduce multi-validator deployment parallelism to 2 (#1938)
julientinguely-da Aug 18, 2025
3822f1a
Bump Canton for KMS resilience fix (#1941)
martinflorian-da Aug 18, 2025
9956232
Refactor some form components in sv ui (#1936)
fayi-da Aug 18, 2025
65a0383
Docs: Clarifications around validator DR (#1937)
martinflorian-da Aug 18, 2025
8883150
Fix tag prefix in stackdriver export (#1944)
moritzkiefer-da Aug 18, 2025
978d77a
query to aggregate traffic purchases over a time period (#1926)
stephencompall-DA Aug 18, 2025
28d1769
Implement DeleteCorruptAcsSnapshotTrigger (#1096)
rautenrieth-da Aug 18, 2025
662601c
[static] increase multi validators parallelism to 5 (#1949)
julientinguely-da Aug 19, 2025
4b06746
Write how-to docs for token standard usage (#1872)
OriolMunoz-da Aug 19, 2025
77b6708
Reduce gcp logging components (#1951)
moritzkiefer-da Aug 19, 2025
659e5e8
WalletSurviveCantonRestartIntegrationTest: bump wait on participant i…
martinflorian-da Aug 19, 2025
044433e
Bump cometbft mempool and cache size (#1953)
moritzkiefer-da Aug 19, 2025
13a1df1
[static] Add istio rate limits to pulumi (#1798)
nicu-da Aug 19, 2025
5571abc
Implement Amulet Rules Proposal Form in new SV UI (#1945)
fayi-da Aug 19, 2025
1b4ef86
Fix fluentbit log truncation (#1959)
moritzkiefer-da Aug 19, 2025
778adbb
[static] include rate of sequencer events processed in the participan…
nicu-da Aug 19, 2025
6b5c2e7
move pulumi npm packages into lfdt namespace (#1848)
isegall-da Aug 19, 2025
6d2ec21
don't alert a Slack channel unless explicitly set in .envrc.vars (#1913)
stephencompall-DA Aug 19, 2025
13bcefe
Support running static tests on gh-hosted runners (#1668)
isegall-da Aug 19, 2025
5815742
Revert "Support running static tests on gh-hosted runners (#1668)" (#…
isegall-da Aug 19, 2025
08901ff
Make pulumi stack parallelism configurable (#1967)
moritzkiefer-da Aug 20, 2025
40239bf
[static] Make the cluster node pools sizes configurable (#1957)
nicu-da Aug 20, 2025
659aeaa
Try to fix grafana alert expansion (#1970)
moritzkiefer-da Aug 20, 2025
b4f2a3e
[ci] More lenient scan rate limit test (#1971)
nicu-da Aug 20, 2025
a82fa0d
Match package name on template filter (#1955)
OriolMunoz-da Aug 20, 2025
47b4754
Document routing of the JSON API (#1973)
moritzkiefer-da Aug 20, 2025
62d5e9c
Synchronize on scan processing lock archival (#1969)
moritzkiefer-da Aug 20, 2025
ff4fff4
Add config rendering helper function and enhance splice-participant h…
timpel-fcs Aug 20, 2025
e4d532c
Remove migrate-istio (#1977)
moritzkiefer-da Aug 20, 2025
095080c
mention BFT success requirement in validator onboarding doc (#1979)
stephencompall-DA Aug 20, 2025
e2d37d8
shorter output/timeout/portability in validator onboarding test scrip…
stephencompall-DA Aug 20, 2025
a32995a
Support running static tests on gh-hosted runners (#1978)
isegall-da Aug 20, 2025
095f818
Make workflow ids of import updates consistent (#1981)
rautenrieth-da Aug 21, 2025
ff8cf95
Further clarify safe ways of bypassing the party limit (#1984)
moritzkiefer-da Aug 21, 2025
4273949
Remove todo artifacts (#1986)
moritzkiefer-da Aug 21, 2025
ca58926
Mention existing transfer preapproval proposal (#1987)
moritzkiefer-da Aug 21, 2025
86aabc3
vagrant: Restart nix-daemon after mounting cache (#1985)
giner Aug 21, 2025
d35626e
Filter pr_cluster_test for pull requests (#1988)
moritzkiefer-da Aug 21, 2025
fa7d20b
[static] Update release notes for 0.4.12 (#1989)
nicu-da Aug 21, 2025
baf0476
stop triggering ciupgrade tests (#1983)
isegall-da Aug 21, 2025
2f044a9
Upgrade Canton to 3.3.0-snapshot.20250821.16057.0.v3719b9e9 (#1994)
moritzkiefer-da Aug 21, 2025
c19b637
[ci] Update VERSION to 0.4.13 (#1995)
nicu-da Aug 21, 2025
332e06a
run BigQuery integration test daily (#1873)
stephencompall-DA Aug 21, 2025
8f96a5b
Add missing CO_TransferPreapprovalSend case in UserWalletTxLogParser …
OriolMunoz-da Aug 25, 2025
19479aa
Merge remote-tracking branch 'origin/main' into canton-3.4
cocreature Aug 25, 2025
10 changes: 1 addition & 9 deletions .envrc.vars
@@ -57,15 +57,7 @@ export CLOUDSDK_COMPUTE_REGION="us-central1"
export DB_CLOUDSDK_COMPUTE_ZONE="${CLOUDSDK_COMPUTE_REGION}-a"
# Default to the scratch environment
export CLOUDSDK_CORE_PROJECT="da-cn-scratchnet"
# Default cluster sizing
export GCP_CLUSTER_NODE_TYPE=e2-standard-16
export GCP_CLUSTER_MIN_NODES=0
# A high max-nodes by default to support large deployments and hard migrations
# Should be set to a lower number (currently 8) on CI clusters that do neither of those.
export GCP_CLUSTER_MAX_NODES=20
# The DEFAULT logging variant is what Google recommends for up to 100 KB/s of logs (https://cloud.google.com/kubernetes-engine/docs/how-to/adjust-log-throughput)
# The max-throughput variant supports multiple tens of MB/s of logs, but its agents require 2 CPUs, so we lose 2 CPUs per node
export GCP_CLUSTER_LOGGING_VARIANT="DEFAULT"

export GCP_DNS_PROJECT="da-gcp-canton-domain"
export GCP_DNS_SA_SECRET="clouddns-dns01-solver-svc-acct"
# DNS Service Account information
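With the fixed sizing defaults deleted from `.envrc.vars`, sizing presumably moves into cluster-specific configuration (see "[static] Make the cluster node pools sizes configurable (#1957)" in the commit list). A hypothetical per-cluster override file for a CI cluster, assuming the same variable names are still honored; the value 8 comes from the deleted comment about CI clusters:

```shell
# Hypothetical overrides for a CI cluster. The variable names are the
# deleted defaults above; whether the deployment still reads them from
# the environment after this PR is an assumption.
export GCP_CLUSTER_NODE_TYPE=e2-standard-16
export GCP_CLUSTER_MIN_NODES=0
# CI clusters run neither large deployments nor hard migrations,
# so the old comment suggests capping them at 8 nodes.
export GCP_CLUSTER_MAX_NODES=8
export GCP_CLUSTER_LOGGING_VARIANT="DEFAULT"
echo "cluster max nodes: $GCP_CLUSTER_MAX_NODES"
```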
235 changes: 235 additions & 0 deletions .github/actions/nix/setup_nix/action.yml
@@ -0,0 +1,235 @@
name: "Setup Nix"
description: "Setup Nix"
inputs:
artifactory_user:
description: "The Artifactory user"
required: true
artifactory_password:
description: "The Artifactory password"
required: true
oss_only:
description: "Restrict upstream dependencies (e.g. Canton) to OSS versions (the equivalent of OSS_ONLY=1 in local checkouts)"
default: "false"
required: false
cache_version:
description: "The cache version"
required: true
should_save:
description: "If the nix cache should be saved"
# this should be run just from one job to ensure we avoid multi write conflicts, which makes everything worse
default: "false"
should_save_gcp:
description: "If the nix cache should be saved to the public GCP bucket"
default: "false"
upload_workload_identity_provider:
description: "The workload identity provider to use for uploading the cache"
required: false
default: ""
upload_service_account:
description: "The service account to use for uploading the cache"
required: false
default: ""

runs:
using: "composite"
steps:
- name: Compute cache Key
id: cache_key
shell: bash
run: |
set -euxo pipefail
git ls-files nix/ | grep -v '[.]md$' | LC_ALL=C sort | xargs sha256sum -b > /tmp/nix-cache-key
uname -m >> /tmp/nix-cache-key # Add architecture to the cache key
echo "gh_cache_version: ${{ inputs.cache_version }}" >> /tmp/nix-cache-key # Add cache version to the cache key
if [ "${{ inputs.oss_only }}" == true ]; then
echo "Using OSS only dependencies"
echo "oss_only: ${{ inputs.oss_only }}" >> /tmp/nix-cache-key
touch /tmp/oss-only # Create a file to indicate that we are using OSS-only dependencies (so we don't need to re-specify oss_only to run_bash_command_in_nix)
fi
cat /tmp/nix-cache-key
cache_key=($(md5sum "/tmp/nix-cache-key"))
echo "cache_key=$cache_key" >> $GITHUB_ENV

- name: Download cache (for non-self-hosted)
if: ${{ !startsWith(runner.name, 'self-hosted') }}
shell: bash
run: |
set -euxo pipefail

if [ ${{ inputs.oss_only }} != 'true' ]; then
echo "Must use OSS only dependencies in GitHub-hosted runners"
exit 1
fi

echo "Latest nix cache:"

wget -q "https://storage.googleapis.com/splice-nix-cache-public/${cache_key}.tar.gz" -O cache.tar.gz || true
if [ ! -s cache.tar.gz ]; then
echo "Cache not found, fetching latest instead"
latest=$(curl https://storage.googleapis.com/storage/v1/b/splice-nix-cache-public/o | jq -r '.items | sort_by(.updated) | .[-1].name')
wget -q "https://storage.googleapis.com/splice-nix-cache-public/${latest}" -O cache.tar.gz

fi

sudo mkdir -p /cache/nix/${cache_key}
sudo tar -xzf cache.tar.gz -C /cache/nix/${cache_key}

- name: Restore nix
id: restore_nix
shell: bash
run: |
set -euxo pipefail
sudo mkdir -p /nix/store
sudo chown -R $(whoami):$(whoami) /nix
if [ -f "/cache/nix/$cache_key/cached" ]; then
echo "Restoring nix cache (key $cache_key)"
# we use rsync here because it's simply faster to install
rsync -avi /cache/nix/$cache_key/.nix-* $HOME/
rsync -avi "/cache/nix/$cache_key/nix" $HOME/.config/
rsync -avi "/cache/nix/$cache_key/nix_store/var/" /nix/var
sudo mount --bind /cache/nix/$cache_key/nix_store/store /nix/store
else
sudo mkdir -p "/cache/nix/$cache_key"
sudo chown $(whoami):$(whoami) "/cache/nix/$cache_key"
sudo chown $(whoami):$(whoami) "/cache/nix"
fi
- name: Setup Nix
shell: bash
run: |
set -exuo pipefail
echo 'source ~/.nix-profile/etc/profile.d/nix.sh' > nix.rc
if [[ -f ~/.config/nix/nix.conf && -f ~/.nix-profile/etc/profile.d/nix.sh ]]; then
echo "nix.conf and nix.sh already exist, skipping Nix setup"
exit 0
else
# Disabling sandbox because:
# 1. It doesn't work on CircleCI (sethostname is not allowed)
# 2. We don't plan to build anything, so the risk is fairly low
mkdir -p ~/.config/nix
if [ true ]; then
cat <<EOF > ~/.config/nix/nix.conf
sandbox = false
netrc-file = /etc/nix/netrc
extra-experimental-features = nix-command flakes
substituters = file:///cache/nix/binary_cache?trusted=1 https://cache.nixos.org/
trusted-substituters = file:///cache/nix/binary_cache?trusted=1
trusted-public-keys = cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY=
cores = 4
max-jobs = 16
EOF
else
cat <<EOF > ~/.config/nix/nix.conf
sandbox = false
netrc-file = /etc/nix/netrc
extra-experimental-features = nix-command flakes
cores = 4
max-jobs = 16
EOF
fi
sh <(curl -fsSL --retry 8 https://releases.nixos.org/nix/nix-2.13.3/install) --no-daemon
sudo mkdir -p /etc/nix
sudo chmod a+rw /etc/nix
if [[ "${{ inputs.oss_only }}" == true ]]; then
echo "Using OSS only dependencies, not setting up Artifactory credentials"
else
cat <<EOF > /etc/nix/netrc
machine digitalasset.jfrog.io
login ${{ inputs.artifactory_user }}
password ${{ inputs.artifactory_password }}
EOF
fi
export USER=$(whoami)
echo "Running nix.sh"
. ~/.nix-profile/etc/profile.d/nix.sh
if [[ "${{ inputs.oss_only }}" == true ]]; then
target="oss"
else
target="default"
fi
nix develop path:nix#${target} -v --profile "$HOME/.nix-shell" --command echo "Done loading packages"
echo "Garbage collecting to reduce cache size"
nix-store --gc
fi

- name: Invoke nix before saving cache
uses: ./.github/actions/nix/run_bash_command_in_nix
with:
cmd: |
echo "Validated nix"
ls -al

# The nix cache does not change in the workflow, so we can save it immediately, rather than splitting it into pre-&post- steps
- name: Save nix cache
shell: bash
if: ${{ inputs.should_save == 'true' }}
run: |
set -euxo pipefail
echo ~
chown -R $(whoami):$(whoami) ~
cat /tmp/nix-cache-key
if [ ! -f "/cache/nix/$cache_key/cached" ]; then
echo "Saving nix"

sudo -v ; curl https://rclone.org/install.sh | sudo bash

echo "sourcing nix profile"
export USER=$(whoami)
. ~/.nix-profile/etc/profile.d/nix.sh

nix copy --all --to 'file:///cache/nix/binary_cache?trusted=1' -v

CLONE_COMMAND="rclone --no-update-dir-modtime --no-update-modtime --size-only --multi-thread-streams=32 --transfers=32 --ignore-existing --links --create-empty-src-dirs --fast-list --metadata --order-by name,mixed --retries 10 copy"
${CLONE_COMMAND} "$HOME/" "/cache/nix/$cache_key/" --include ".nix-*/**" --include ".nix-*"
${CLONE_COMMAND} $HOME/.config/nix "/cache/nix/$cache_key/nix"

mkdir -p "/cache/nix/$cache_key/nix_store/store"
mkdir -p "/cache/nix/$cache_key/nix_store/var"

# sudo is required to preserve the read-only permissions of the nix store during the clone
sudo ${CLONE_COMMAND} /nix/store/ /cache/nix/$cache_key/nix_store/store
sudo ${CLONE_COMMAND} /nix/var/ "/cache/nix/$cache_key/nix_store/var"

echo "done" > "/cache/nix/$cache_key/cached"
fi

- name: Check if cache already exists in GCP
id: already_exists
if: ${{ inputs.should_save_gcp == 'true' }}
shell: bash
run: |
if curl -Isf https://storage.googleapis.com/splice-nix-cache-public/${cache_key}.tar.gz &> /dev/null; then
echo "Cache with key ${cache_key} already exists in GCP, not uploading again"
echo "already_exists=true" >> $GITHUB_OUTPUT;
fi
- name: Authenticate to GCP
id: auth
if: ${{ inputs.should_save_gcp == 'true' && steps.already_exists.outputs.already_exists != 'true' }}
uses: "google-github-actions/auth@ba79af03959ebeac9769e648f473a284504d9193" #v2.1.10
with:
workload_identity_provider: "${{ inputs.upload_workload_identity_provider }}"
service_account: "${{ inputs.upload_service_account }}"

- name: tar-gz the cache
shell: bash
if: ${{ inputs.should_save_gcp == 'true' && steps.already_exists.outputs.already_exists != 'true' }}
id: prep_cache_upload
run: |
set -euxo pipefail
echo "Compressing nix cache to /cache/nix/${cache_key}.tar.gz"
mkdir -p /tmp/nix-upload

tar -czf "/tmp/nix-upload/${cache_key}.tar.gz" -C "/cache/nix/$cache_key" .

echo "Cache compressed to /tmp/nix-upload/${cache_key}.tar.gz"
ls /tmp/nix-upload
echo "cache_file=/tmp/nix-upload/${cache_key}.tar.gz" >> $GITHUB_OUTPUT

- name: Upload nix cache
if: ${{ inputs.should_save_gcp == 'true' && steps.already_exists.outputs.already_exists != 'true' }}
uses: google-github-actions/upload-cloud-storage@v2
with:
destination: splice-nix-cache-public
path: "${{ steps.prep_cache_upload.outputs.cache_file }}"
parent: false # upload to root of the bucket
process_gcloudignore: false # no gcloud ignore file in this repo, must set this to false
gzip: false # it's already gzipped
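The action above combines three small shell patterns: a content-addressed cache key (hash the tracked inputs, append discriminators, digest the result), a `cached` sentinel file that makes saves idempotent, and `tar -C` packing so the archive's contents extract at the destination root. A minimal self-contained sketch of the lifecycle; all paths, file contents, and version values are illustrative stand-ins, not the action's real ones:

```shell
set -euo pipefail

# 1. Content-addressed key: hash the inputs, add discriminators, digest.
work=$(mktemp -d); cd "$work"
printf 'fake flake\n' > flake.nix            # stand-in for the nix/ tree
sha256sum -b flake.nix > key-input
uname -m >> key-input                        # per-architecture caches
echo "gh_cache_version: 1" >> key-input      # manual invalidation knob
cache_key=($(md5sum key-input))              # first word is the digest

# 2. Sentinel file: only save when no complete cache exists yet.
cache_dir="$work/cache/$cache_key"
if [ -f "$cache_dir/cached" ]; then
  echo "cache hit"
else
  mkdir -p "$cache_dir"
  echo hello > "$cache_dir/file.txt"         # stand-in for the cached data
  echo done > "$cache_dir/cached"            # mark the save as complete
  echo "cache saved"
fi

# 3. tar -C: archive the directory's *contents*, not the directory
#    itself, so extraction lands the files at the target root.
tar -czf "$work/$cache_key.tar.gz" -C "$cache_dir" .
dst=$(mktemp -d)
tar -xzf "$work/$cache_key.tar.gz" -C "$dst"
cat "$dst/file.txt"
```

The sentinel matters because the save is not atomic: a crash mid-copy leaves a partial directory, and without the marker a later run would treat it as a valid cache instead of rebuilding it.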