
Mount secrets are revealed in log messages #12296

Closed
hmeine opened this issue Mar 15, 2022 · 7 comments · Fixed by moby/moby#43597

Comments

@hmeine

hmeine commented Mar 15, 2022

Nomad version

v1.0.4 (from web interface)

Issue

We run a Nomad cluster on which data science jobs can access private data via custom mounts, the credentials for which are stored in Vault. The Vault secret and the mount password are properly hidden during normal operation, but users can type custom mount paths into a form, and a mistyped path results in a mount error.

That mount error, which is visible in the Nomad web UI, displays the secret in plain text.

Reproduction steps

  • Make credentials for a CIFS mount accessible via Vault (the share does not have to exist, since we want to provoke an error anyway).
  • Set up a job with a custom mount in which the UNC source path is misspelt (snippet from the job description given below).
  • Run the job (which should fail during setup).
  • Observe the mount error in the logs available via the web UI.
"mount": [
              {
                "target": "/private_input",
                "type": "volume",
                "volume_options": [
                  {
                    "driver_config": [
                      {
                        "options": [
                          {
                            "device": "//servername.organization.lan/share/path/does/not/exist/",
                            "o": "addr=servername.organization.lan,_netdev,iocharset=utf8,noperm,uid=0,gid=0,username=${MOUNT_USERNAME},password=${MOUNT_PASSWORD},domain=org_domain",
                            "type": "cifs"
                          }
                        ],
                        "name": "local"
                      }
                    ]
                  }
                ],
                "source": "some_volume_name"
              },

Expected Result

The logged mount error should censor the secret, and/or an alternative way to specify the secret should be available.

(Alas, we learned that credentials=/path/to/file is a feature of the mount.cifs helper, and not of the underlying kernel module, which is talked to directly via syscalls.)
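
For illustration only, a minimal Go sketch of that syscall-level interface, using golang.org/x/sys/unix; the server name, target path, and credentials are made up. The point is that mount(2) receives the option string verbatim, password included, so anything that later formats an error together with those options can leak the secret.

package main

import (
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	// Illustrative values only; mount(2) gets the raw option string, so there
	// is no kernel-side equivalent of the credentials=/path/to/file feature
	// offered by the mount.cifs helper.
	opts := "addr=11.22.33.44,_netdev,iocharset=utf8,username=secret_user,password=secret_password_in_plaintext"
	err := unix.Mount("//servername.organization.lan/share", "/mnt/private_input", "cifs", 0, opts)
	if err != nil {
		// Formatting an error alongside the raw options leaks the password.
		log.Printf("mount failed: %v (options: %s)", err, opts)
	}
}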

Actual Result

The log contains a line such as

Failed to start container 9acd605a012f83f2a40841151608c5ff9657e10bc168bc085eed3ddc5253fbaa: API error (500): error while mounting volume '/var/lib/docker/volumes/mount_input_3f6a4f51-3b97-2b0a-3dc1-f14cad6a5bb9/_data': failed to mount local volume: mount //servername.organization.lan/share/path/does/not/exist/:/var/lib/docker/volumes/mount_input_3f6a4f51-3b97-2b0a-3dc1-f14cad6a5bb9/_data, data: addr=11.22.33.44,_netdev,iocharset=utf8,noperm,uid=0,gid=0,username=secret_user,password=secret_password_in_plaintext,domain=org_domain: no such file or directory

Job file (if appropriate)

Our setup is quite complex and the jobs are created on the fly via the REST API, so it is difficult to provide a full example, and one would not work in other contexts anyhow. I hope the part of the job definition above is enough to understand the issue.

(I consider this a security-relevant bug, but decided to post it publicly, because it takes a fairly special setup such as ours to cause problems. It does not seem to be exploitable by an attacker, but credentials may be leaked by accident through errors that simply happen.)

@DerekStrickland DerekStrickland self-assigned this Mar 15, 2022
@DerekStrickland
Contributor

Hi @hmeine

Thanks for bringing this to our attention. This is a bit tricky because the secret leak is actually coming from the error message produced by the upstream docker client; we are just logging the error it returns, which includes the username and password. However, we have some ideas about how to try to suppress this by generating a credentials file. That of course makes the fix a little more involved and means we'll have to roadmap it. PRs are always welcome, though. If you want to take a pass, let me know and I'll happily collaborate with you.

I'm going to move this over to review by the backlog team. I think this is something we want to try to mitigate if we can. That said, if you have time, I'd encourage you to also create an issue in the upstream go-dockerclient repository. I think the community, in general, would benefit from it being solved at that level as well.
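
For illustration, a different and much simpler mitigation than the credentials-file idea above: a minimal Go sketch of scrubbing the password out of the error string before logging it. The regular expression, function name, and sample message are assumptions made for this example, not the change that was eventually made upstream.

package main

import (
	"fmt"
	"regexp"
)

// passwordOpt matches a "password=<value>" mount option up to the next
// comma, colon, or whitespace character.
var passwordOpt = regexp.MustCompile(`(password=)[^,:\s]*`)

// redactMountError masks the password value in a mount error message.
func redactMountError(msg string) string {
	return passwordOpt.ReplaceAllString(msg, "${1}********")
}

func main() {
	msg := "failed to mount local volume: mount //servername.organization.lan/share:/var/lib/docker/volumes/x/_data, " +
		"data: addr=11.22.33.44,username=secret_user,password=secret_password_in_plaintext,domain=org_domain: " +
		"no such file or directory"
	// Prints the same message with password=******** in place of the plaintext value.
	fmt.Println(redactMountError(msg))
}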

@hmeine
Author

hmeine commented Mar 17, 2022

Thanks for looking into this and giving your opinion. I am not familiar with Go and do not have the time to tackle this, but I want to be a good citizen and report this where the community benefits from the information, so I opened fsouza/go-dockerclient#905.

It's sometimes a little hard to understand how Nomad, its docker driver, go-dockerclient, the Docker engine, and the underlying kernel interact; the boundaries between these layers seem to be quite "rich", because there is a desire for information not to get lost, but that makes for interesting debugging sessions… :-)

@DerekStrickland
Contributor

Thanks @hmeine!

I see your comment about this being fixed upstream in the moby repo.

If I am following this correctly, based on what I am seeing in the current Nomad go.mod, we should be benefiting from this upstream fix.

Do you have a way to verify that the problem is resolved for you? Reading your past comments, I see that you are not a Go person, so building from source is probably not something you can do to test, but the next point release (1.3.4) should include the updated dependencies. After it is released, would you be able to upgrade a client and quickly check whether your logs still contain the password?

@hmeine
Author

hmeine commented Aug 26, 2022

I think so, yes! Looking forward to that, even though it does not affect normal operation.

@DerekStrickland
Contributor

Great. 1.3.4 is now released. Let us know how it goes.

@tgross
Member

tgross commented Nov 18, 2022

Looks like this fix was shipped and it's been a while since we heard back on this. I'm going to close this issue out, but please let us know if you've hit any problems!

@tgross tgross closed this as completed Nov 18, 2022
@github-actions

I'm going to lock this issue because it has been closed for 120 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Mar 19, 2023