some fixes to cartodb ci #16179

Merged
merged 23 commits into from
Feb 26, 2021
1 change: 1 addition & 0 deletions NEWS.md
@@ -28,6 +28,7 @@ sudo make install

### Bug fixes / enhancements

- Some CI improvements [16179](https://github.com/CartoDB/cartodb/pull/16179)
- Bump @carto/viewer to v1.0.3 [16170](https://github.com/CartoDB/cartodb/pull/16170)
- Show a new message for create connections after first login [16159](https://github.com/CartoDB/cartodb/pull/16159)
- Remove master api key from do-catalog layers request [16158](https://github.com/CartoDB/cartodb/pull/16158)
2 changes: 1 addition & 1 deletion private
43 changes: 39 additions & 4 deletions script/ci/cloudbuild-tests-pr-pg12.yaml
@@ -1,20 +1,20 @@
steps:

# Cancel previous job on the same branch
- name: gcr.io/cloud-builders/gcloud
- name: gcr.io/cloud-builders/gcloud-slim
entrypoint: /bin/bash
args:
- '-c'
- 'gcloud builds list --ongoing --filter="buildTriggerId=e983e3b9-109f-43a3-82b2-b312e42c878e AND substitutions.BRANCH_NAME=${BRANCH_NAME} AND id!=${BUILD_ID}" --format="get(ID)" > jobs_to_cancel'

- name: gcr.io/cloud-builders/gcloud
- name: gcr.io/cloud-builders/gcloud-slim
entrypoint: /bin/bash
args:
- '-c'
- 'gcloud builds cancel $(cat jobs_to_cancel | xargs) || true'

# Decrypt github key
- name: gcr.io/cloud-builders/gcloud
- name: gcr.io/cloud-builders/gcloud-slim
args:
- kms
- decrypt
@@ -82,13 +82,48 @@ steps:
fi
docker build --build-arg COMPILE_ASSETS=false --build-arg BUNDLE_JOBS=16 -t gcr.io/cartodb-on-gcp-main-artifacts/builder:current -t gcr.io/cartodb-on-gcp-main-artifacts/builder:${_BRANCH_TAG} -t gcr.io/cartodb-on-gcp-main-artifacts/builder:${SHORT_SHA} -t gcr.io/cartodb-on-gcp-main-artifacts/builder:${_BRANCH_TAG}--${SHORT_SHA} --cache-from gcr.io/cartodb-on-gcp-main-artifacts/builder:${_BRANCH_TAG} .

# Run tests
# Start necessary services (redis, postgres) in background
- name: 'docker/compose:1.22.0'
args: ['-f', 'docker-compose-pg12.yml', 'up', '--build', '-d']
timeout: 900s

# Run tests: first in parallel, then give failing specs a second try in serial
- name: gcr.io/cloud-builders/docker
args: ['exec', '-i', 'builder_1', 'bash', '-c', '/cartodb/runParallelTests.sh 18' ]
timeout: 1800s

# Copy test results and logs from the container
- name: 'docker/compose:1.22.0'
entrypoint: /bin/sh
args:
- -c
- |
mkdir test_logs_${BUILD_ID}
docker-compose -f docker-compose-pg12.yml logs --no-color > test_logs_${BUILD_ID}/docker_compose_logs
docker cp builder_1:/cartodb/parallel_tests_logs test_logs_${BUILD_ID}/parallel_tests_logs
docker cp builder_1:/cartodb/serial_tests_logs test_logs_${BUILD_ID}/serial_tests_logs || true
docker cp builder_1:/cartodb/tests_exit_status tests_exit_status
echo "Logs will be available during 1 day at gs://cartodb-ci-tmp-logs/${BUILD_ID}/"

# Upload logs to gcs
- name: gcr.io/cloud-builders/gsutil
args: [ '-m', 'cp', '-r', 'test_logs_${BUILD_ID}', 'gs://cartodb-ci-tmp-logs/' ]

# Check the tests' exit status and exit accordingly
- name: gcr.io/cloud-builders/docker
entrypoint: /bin/bash
args:
- -c
- |
if [ "$(cat tests_exit_status)" == "ok" ]; then
echo "Tests succeeded, logs are at https://console.cloud.google.com/storage/browser/cartodb-ci-tmp-logs/test_logs_${BUILD_ID}"
exit 0
else
cat tests_exit_status
echo "Tests failed, logs are at https://console.cloud.google.com/storage/browser/cartodb-ci-tmp-logs/test_logs_${BUILD_ID}"
exit 1
fi


substitutions:
_BRANCH_TAG: ${BRANCH_NAME//\//-}
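
Note on the substitution above: ${BRANCH_NAME//\//-} is bash-style pattern replacement that swaps every "/" in the branch name for "-", so branch names like feature/... produce valid Docker image tags. A minimal shell illustration of the same expansion (the branch name is a made-up example):

BRANCH_NAME="feature/some-ci-fix"   # hypothetical branch name
echo "${BRANCH_NAME//\//-}"         # prints: feature-some-ci-fix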
32 changes: 0 additions & 32 deletions script/ci/reporter.sh

This file was deleted.

9 changes: 1 addition & 8 deletions script/ci/runParallelTests.sh
@@ -28,12 +28,5 @@ script/ci/wrapper.sh $WORKERS || exit 1
# TESTS
time parallel -j $WORKERS -a parallel_tests/specfull.txt 'script/ci/executor.sh {} {%} {#}' || exit 1

# print logs of first try
echo "PRINT LOGS OF FAILED PARALLEL TESTS (FIRST TRY)"
time cat parallel_tests/*.log

# SECOND TRY
script/ci/secondTry.sh || exit 1

# REPORTER
script/ci/reporter.sh || exit 1
script/ci/secondTry.sh
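
Note: the parallel invocation above relies on GNU parallel placeholders: {} expands to the line read from parallel_tests/specfull.txt, {%} to the job slot number (1 through $WORKERS), and {#} to the sequential job number. A minimal sketch of the same pattern, with echo standing in for executor.sh:

printf 'spec_a\nspec_b\nspec_c\n' > specs.txt
parallel -j 2 -a specs.txt 'echo "spec={} slot={%} seq={#}"'
# prints lines such as: spec=spec_a slot=1 seq=1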
42 changes: 19 additions & 23 deletions script/ci/secondTry.sh
@@ -13,39 +13,35 @@ failedSpecs=$(cat parallel_tests/specfailed.log | wc -l)

if [ "$failedSpecs" -eq "0" ];
then
echo ok > tests_exit_status
exit 0;
else
specs=$(cat parallel_tests/specfailed.log | sed ':a;N;$!ba;s/\n/ /g')
fi

TRASH_MESSAGES="Varnish purge error: \[Errno 111\] Connection refused\|_CDB_LinkGhostTables() called with username=\|terminating connection due to administrator command\|Error trying to connect to Invalidation Service to link Ghost Tables: No module named redis\|pg_restore:\|pg_dump:\|is already a member of\|Skipping Ghost Tables linking"

## uncomment the following if you want to debug failures in parallel execution
## Print parallel logs if some of them failed
#if [ -s parallel_tests/specfailed.log ]; then
# echo "*****************************************************************************************************"
# echo "Logs of tests that ran in parallel"
# echo "*****************************************************************************************************"
# cat parallel_tests/6*.log | grep -v "$TRASH_MESSAGES"
# echo "*****************************************************************************************************"
#fi
# save parallel tests logs to be uploaded later
cat parallel_tests/*.log > parallel_tests_logs


if [ "$failedSpecs" -gt "10" ];
then
echo "ERROR: Too many failures for a second try. Giving up."
exit 1;
fi

echo "Giving a second try to the next specs"
cat parallel_tests/specfailed.log

RAILS_ENV=test bundle exec rspec $specs > tmp_file 2>&1
RC=$?

cat tmp_file | grep -v "$TRASH_MESSAGES"

if [ $RC -eq 0 ]; then
truncate -s 0 parallel_tests/specfailed.log # Here is where the hack takes place. If in the second try we don't have errors then we're OK
echo "$failedSpecs failed tests > 10, see parallel_tests_logs and docker-compose logs" > tests_exit_status
else
exit 0; # The reporter script will output the failed specs
echo "*****************************************************************************************************"
echo "Giving a second try to the next specs"
echo "*****************************************************************************************************"
cat parallel_tests/specfailed.log
echo "*****************************************************************************************************"

RAILS_ENV=test bundle exec rspec $specs > serial_tests_logs 2>&1
RC=$?

if [ $RC -eq 0 ]; then
echo ok > tests_exit_status
else
echo "some tests failed after a second try, see serial_tests_logs" > tests_exit_status
fi
fi
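
Note: the sed expression ':a;N;$!ba;s/\n/ /g' near the top of this script folds the newline-separated list of failed specs into a single space-separated line so they can all be passed to one rspec invocation. A simpler equivalent sketch, assuming the same input file:

# join the failed spec paths with spaces, e.g. "spec/a_spec.rb spec/b_spec.rb"
specs=$(paste -sd' ' parallel_tests/specfailed.log)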