Bats 1.4.X fix: BATS_TMPDIR #2110
Conversation
The linked discussions reference the ENV var as meant for internal usage in bats only; that remains the case for the renamed variant. It seems the advised way to handle temporary directories with bats is documented here, which I would assume applies to us. There are also some related new ENV vars that were added and are documented here. It's unclear why your latest change is failing... upstream bats sets …
Which AFAIK doesn't look right; it seems to be missing …
That seems to be missing as well.
What I do notice is that only …
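For context, a minimal sketch of the pattern those docs describe, assuming bats-core ≥ 1.4 and its new per-test temp dir (the test body itself is illustrative):

```bash
#!/usr/bin/env bats

# BATS_TEST_TMPDIR is created per test case and removed by bats itself,
# so tests don't need to clean up these artifacts manually.
@test "write output into the bats-managed temp dir" {
  local OUTPUT_FILE="${BATS_TEST_TMPDIR}/testssl-output.txt"
  echo 'example output' >"${OUTPUT_FILE}"
  [[ -s "${OUTPUT_FILE}" ]]
}
```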
I also got similar errors on our CI now (for the master branch; it worked on Friday) that do not happen on my local dev environment.
I'm unable to reproduce the issue you and the CI are facing. Tested on Fedora 34, cloning the repo and building: …
No failures encountered 😕 I'm wondering if it's something to do with the CI on GitHub. Is the output path logged actually …?
Shouldn't we put the test artifacts somewhere in the test directory and also clean them up in … and …?
Explicitly writing to … Will try with the local working directory now.
Pretty much every other test does that. I wrote this failing test, and I think I had a reason (it may have just been preference) to use … I know that at one point I did have it clean up the contents, but that was undesirable when I wanted to inspect the file content upon a test failure for insights. The cleanup for bats tests is handled in … (lines 24 to 29 in f333740).
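For reference, a cleanup of that kind in bats looks roughly like the sketch below; the temp-folder variable name is illustrative, not the project's actual one:

```bash
# teardown() runs after every test case. bats sets BATS_TEST_COMPLETED=1
# only when the test succeeded, so artifacts survive failures for inspection.
teardown() {
  if [[ ${BATS_TEST_COMPLETED:-0} -eq 1 ]]; then
    rm -rf "${TEST_TMP_FOLDER:?}" # illustrative variable name
  fi
}
```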
…s_cipherlists.bats
@polarathene, thanks for the help with this!
That's what I'm thinking too.
Just tested the changes in e2b3370 locally and see: …
I'm going to try cleaning up some of my environment and trying again.
Hm, the UID seems to be the problem.
Seems drwetter/testssl.sh:3.1dev is only able to run with UID 1000 now. Maybe it includes something in $HOME/.. (that belongs to user 1000) that is not readable by other users.
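One way to check that hypothesis (an ad-hoc command, not taken from this thread):

```sh
# Override the entrypoint to list the file modes and owners baked into the
# image; missing o+rx bits would explain failures for any UID other than 1000.
docker run --rm --entrypoint ls drwetter/testssl.sh:3.1dev -la /home/testssl
```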
@polarathene @ap-wtioit, can you please send me your Docker versions? I'd like to rule that out.
Great find! I too was poking at that earlier. I tried 3.0 of the testssl.sh project and it still failed, though.
The only recent commit in the default branch that would have run the failing test workflow is your bats update from 8 days ago. It just so happens the Ubuntu 20.04 image was updated 7 days ago. Nothing stands out as a culprit though :/
Is this local environment macOS? Just curious about your uid:gid mapping.
Oh, great catch!
Correct :) I run …
I tested with a local Dockerfile and patched:

```Dockerfile
FROM drwetter/testssl.sh:3.1dev
RUN chmod o+rx /home/testssl/testssl.sh
```

```sh
docker build -t drwetter/testssl.sh:3.1dev-fixed .
docker run --rm --user=1001:1001 drwetter/testssl.sh:3.1dev-fixed | head -n 5
```

Now it also works with non-1000 users. I'll create a merge request for drwetter/testssl.sh.
Not 100% sure that solves it for me. Built using your Dockerfile, updated the tests to target the new tag name, and see: …
To be fair, it did fix the permissions issue. Maybe we now need to add something to the tests to get this to work?
@NorseGaud I will run a "full" test on our CI now; my guess is that we need to patch some more (e.g. permissions for /home/testssl/etc).
New Dockerfile:
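Presumably it extends the earlier patch to the other paths mentioned in this thread; a sketch of what such a Dockerfile would look like (the exact contents aren't shown here):

```Dockerfile
FROM drwetter/testssl.sh:3.1dev
# Assumed: give "others" read/execute on the script and the support
# directories it needs, not just on testssl.sh itself.
RUN chmod -R o+rx /home/testssl/testssl.sh /home/testssl/etc /home/testssl/bin
```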
Now the tests on our CI pass. (We usually build and version the images we use daily, in case someone changes something and we are not able to fix it fast.)
I guess the question is: do we update the CI to perform the steps provided by @ap-wtioit, or just wait for testssl.sh to release something with fixes?
@NorseGaud I'm not sure it's needed in the short term; long term, the fix should land in drwetter/testssl.sh. But as I already enabled the patches on our CI, I could easily create another pull request so you could use it if you need to.
I'm confused why CI broke. There aren't any notes about uid or gid changing, and the fix doesn't seem like it would have been an issue for the CI thus far, unless some recent push to Docker Hub for the testssl image introduced something that broke it. Do we have information on what version of Docker the CI is running? I'm curious if the upgrade to Alpine 3.13 for testssl had anything to do with it; it fits into the time window since the last passing test.
EDIT: Tried out my suspicion about the CI and the testssl Alpine update based on another user noting the issue recently, but it doesn't seem to be related 😅 The Ubuntu 20.04 environment for the CI has packages and versions here (links PR for recent update diff).
@polarathene my guess would be that the image before upgrading to 3.12 took its permissions from the build directory of whoever built the image, and either alpine:3.13 became stricter when copying files or the person/system that built the newer image changed the permissions in their build directory. But as I don't have an old version of drwetter/testssl.sh:3.1dev, I cannot check whether that's the real reason.
Can you confirm that your CI is the same as ours, with GitHub Workflows and an Ubuntu 20.04 environment? I was thinking of signing up to Azure to create a VHD of the previous Ubuntu 20.04 environment as documented here, but I'm not sure it's worth the time, especially if your CI is the same environment we have failing at the moment. If your CI environment is the same as ours and now passes, then I think it's safe to assume the likely cause of failure was some change in how the testssl image was built (where permissions, for whatever reason, changed as you mentioned). Since we can't easily access prior image releases due to testssl release management, I don't think I can reliably bisect the cause.
Not sure if it helps, but I manually built back a month or so of commits for the 3.1dev branch, and the tests kept failing.
@NorseGaud I added a bit more explanation of what I think happened here: drwetter/testssl.sh#1955. Basically, I think the author/maintainer/build person for drwetter/testssl.sh changed the system the image was built on, removing the permissions for others on testssl.sh, bin and etc locally, then building those permissions into the image and publishing it without permissions for others. Ubuntu 18.04 LTS with …
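The mechanism being described: docker build's COPY keeps the file modes from the build context, so a restrictive umask on the build host gets baked into the published image. A hypothetical illustration:

```sh
# With a restrictive umask, a fresh checkout has no permissions for "others".
umask 0027
git clone https://github.com/drwetter/testssl.sh.git && cd testssl.sh
ls -l testssl.sh   # e.g. -rwxr-x--- : the o+rx bits are gone

# COPY preserves those modes, so the image only works for the owning UID.
docker build -t testssl-local .
docker run --rm --user=1001:1001 testssl-local --help   # permission denied
```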
It's working now that drwetter has updated Docker Hub. Thanks for the help, y'all!
Description
While working on #2104, some of the TLS tests were locking up my Docker daemon or failing.
(The failure below uses the verbose flags from a pending PR in bats-core.)
It turns out this is because of a change to BATS_TMPDIR. See bats-core/bats-core#410 (comment) and bats-core/bats-core#365 (comment).
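A minimal before/after of the kind of change involved (paths and variable names are illustrative, assuming bats-core ≥ 1.4):

```bash
# Before: wrote into BATS_TMPDIR, whose meaning changed in bats 1.4
OUTPUT_FILE="${BATS_TMPDIR}/testssl-output.txt"

# After: use the per-test directory that bats 1.4 creates and cleans up
OUTPUT_FILE="${BATS_TEST_TMPDIR}/testssl-output.txt"
```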