
proposal: --no-inspect-json-endpoint #303

Closed
alexkozy opened this issue May 16, 2019 · 16 comments

@alexkozy (Member) commented May 16, 2019

During the last diagnostics WG meeting, @ofrobots raised the question of whether it is safe to use --inspect or --inspect-brk in production.

If more than one Node instance is running in the same environment, it should not be possible by default to connect from one instance to another over the inspector WebSocket. Connecting to the inspector requires the full WebSocket URL, ws://<ip>:<port>/<unguessable token>. An example of such a token is 5b19ecae-c62e-4f26-a43a-e3d1f680e091; it is generated so that an external client cannot guess it.

Currently there are three ways to get this token out of a Node process:

  • parsing the process's stderr, which requires control over how the app is started,
  • sending an http.get request to <ip>:<port>/json/list,
  • calling require('inspector').url(), which returns the URL only for the current Node process; if we can run this code, we can already run anything.
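To make the second vector concrete, /json/list returns a JSON array of debug targets, and any process that can reach the port can pull the full WebSocket URL out of it. A minimal sketch of that extraction follows; the sample payload is illustrative, shaped like the endpoint's real output, which in an actual probe would come from http.get('http://<ip>:<port>/json/list'):

```javascript
// Extract full inspector WebSocket URLs from a /json/list response body.
function extractWsUrls(jsonListBody) {
  const targets = JSON.parse(jsonListBody);
  return targets
    .map((t) => t.webSocketDebuggerUrl)
    .filter(Boolean);
}

// Illustrative payload shaped like the endpoint's output:
const sample = JSON.stringify([{
  description: 'node.js instance',
  id: '5b19ecae-c62e-4f26-a43a-e3d1f680e091',
  webSocketDebuggerUrl:
    'ws://127.0.0.1:9229/5b19ecae-c62e-4f26-a43a-e3d1f680e091',
}]);

console.log(extractWsUrls(sample));
// → ['ws://127.0.0.1:9229/5b19ecae-c62e-4f26-a43a-e3d1f680e091']
```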

Guessing the port is a much simpler task than guessing the unguessable token. Any process can guess the port and then obtain the full WebSocket URL from the JSON endpoint. At the same time, the JSON endpoint is used by various tools, e.g. Chrome DevTools or chrome-remote-interface, so we cannot remove it altogether.

Based on this, I'd like to propose a --no-inspect-json-endpoint flag that disables the inspector JSON endpoint.

Later we could disable it by default and introduce --inspect-json-endpoint. That change would break some clients, but it sounds safer to me.

@ofrobots @eugeneo what do you think?

@eugeneo commented May 16, 2019

I think this may be useful (and it should have a fairly minor impact on the codebase). I can work on a patch.

@ofrobots (Contributor)

For production use in backends we should design with defense in depth in mind. It may be a good idea to reflect on the distinct use-cases for the inspector:

  • Interactive/non-interactive debugging that provides arbitrary control including code injection.
  • Production monitoring. This use-case may use cpu, heap or other profiling, but never debugging.

Most users who want to use inspector in production have the latter in mind.

Getting security right is hard for users and even for skilled practitioners. IMO it would be better defense in depth if users weren't subject to remote code injection even when their unguessable token leaks through a mistake or accidental disclosure.

As far as my production scenarios are concerned, I would want a mode where I can start the inspector with the Debugger domain (and anything else that allows code injection) hard-disabled.

@dgozman commented May 16, 2019

Multiple layers of security are great. Is there anything which prevents us from having both --no-inspect-json-endpoint and limited protocol exposure?

That said, it would be hard to ensure security on the protocol level:

  • Monitoring tools like tracing expose PII in their payload and should not be exposed without privacy considerations.
  • Profiling tools like precise coverage make it possible to trace specific execution patterns and therefore expose sensitive information. For example, consider if (password.contains(username)) return 'weak password'.
  • Protocol owners should be aware of the security issues and actively work to prevent them. Currently, no such measures, tools, or processes are established.

This means a secure inspector protocol will not happen in the near term, and guarding against any inspector access is a great solution in the meantime.

@ofrobots (Contributor) commented May 16, 2019

Is there anything which prevents us from having both --no-inspect-json-endpoint and limited protocol exposure?

I have no objections to adding --no-inspect-json-endpoint as long as we also limit protocol exposure.

Monitoring tools like tracing expose PII in their payload and should not be exposed without privacy considerations.

Monitoring tools need to guard against exfiltration of data. Without the ability to restrict protocol exposure, the only mechanism monitoring tools have today is one that also exposes them to remote code execution, which has much higher severity.

This means secure inspector protocol will not happen in the near term

The objective is not to insulate the inspector protocol from access to PII or sensitive data, but to make sure that tools using the inspector protocol get only the level of access they intended to have.

@eugeneo commented May 16, 2019

I uploaded a PR for whitelisting the domains: nodejs/node#27739

I still think there may be some value in this proposal. One suggestion I have is to make the flag more generic, something like --inspector-publish-uiid=console,http. There have been many requests to disable printing the debug URL to the console, so this would cover them. We could extend the flag later, for instance if the UIID is also published to the diagnostics report.
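A flag like this would presumably take a comma-separated list of publish targets. A hypothetical sketch of parsing and validating such a value, where the parsePublishTargets helper is an assumption and the target names are borrowed from the options the later commit describes (stderr, http, binding), not the landed CLI surface:

```javascript
// Hypothetical: parse a value like 'stderr,http' for an
// --inspector-publish-* style flag into a validated set of targets.
const KNOWN_TARGETS = new Set(['stderr', 'http', 'binding']);

function parsePublishTargets(value) {
  const targets = value.split(',').map((s) => s.trim()).filter(Boolean);
  for (const t of targets) {
    if (!KNOWN_TARGETS.has(t)) {
      throw new Error(`Unknown inspector publish target: ${t}`);
    }
  }
  return new Set(targets);
}

const targets = parsePublishTargets('stderr,http');
console.log(targets.has('http'));    // → true
console.log(targets.has('binding')); // → false
```

Modeling the value as a set makes "disable printing to console" just a matter of omitting that target from the list.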

@pavelfeldman

The objective is not to make inspector protocol insulated against having access to PII or sensitive data, but to make sure that tools using inspector protocol get only the level of access that they intended to have.

@ofrobots: Ali, I disagree with this on multiple levels :)

Drawing a hard line between compromising privacy and compromising execution is troubling. We don't want to introduce security boundaries that suggest exposing PII is less severe than losing control over execution.

On top of that, there are no security boundaries within the protocol handler; we can't say that one domain has access to remote execution and another does not. There is no fuzzer testing against malicious protocol clients; we assume all clients act in good faith. The protocol is universal access to the runtime, and changing that seems artificial and expensive.

@alexkozy (Member, Author) commented May 16, 2019

It sounds to me like there are two separate problems: exposing PII, and remote code execution.

The PII-exposure problem looks harder to solve than remote code execution, and it depends on what information we consider PII, given the almost infinite number of attack vectors here: the coverage profiler, setting breakpoints and watching for branches, tracing events that carry data, stack traces from the CPU profiler, etc. I prefer to treat a leaked inspector WebSocket URL as leaking enough data to obtain PII.

In theory we could pass additional data to the Node process that is not available from the JavaScript world but is required for the connection; then a leaked WebSocket URL alone would not be enough to connect. But that additional data could leak as well.

As a potential solution to the second problem, we could introduce a high-level Node flag that forbids running any JavaScript in V8 outside of normal Node execution; we could go even further and allow only signed code to be executed. In that case, any attempt to run JavaScript via the protocol would return a JavaScript exception or a protocol error.
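The domain-whitelisting direction from nodejs/node#27739 can be sketched as a gate in front of the protocol dispatcher: only messages whose method falls in an allowed domain pass through, and anything that enables code injection is rejected before dispatch. The isMethodAllowed helper, the domain list, and the stub dispatcher below are illustrative, not the PR's actual code:

```javascript
// Illustrative gate: allow only monitoring-oriented protocol domains,
// rejecting anything that enables code injection (Debugger, Runtime, ...).
const ALLOWED_DOMAINS = new Set(['Profiler', 'HeapProfiler', 'Tracing']);

function isMethodAllowed(method) {
  const domain = String(method).split('.', 1)[0];
  return ALLOWED_DOMAINS.has(domain);
}

// Stand-in for the real protocol dispatcher.
function dispatch(msg) {
  return { id: msg.id, result: {} };
}

function handleProtocolMessage(raw) {
  const msg = JSON.parse(raw);
  if (!isMethodAllowed(msg.method)) {
    return {
      id: msg.id,
      error: { code: -32601, message: `'${msg.method}' is not allowed` },
    };
  }
  return dispatch(msg);
}

console.log(handleProtocolMessage(
  JSON.stringify({ id: 1, method: 'Profiler.enable' })));
// → { id: 1, result: {} }
console.log(handleProtocolMessage(
  JSON.stringify({ id: 2, method: 'Runtime.evaluate' })));
// → an error response, since Runtime allows code injection
```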

@ofrobots (Contributor)

@pavelfeldman

We don't want to introduce security boundaries that are suggesting that exposing PII is less severe than losing control over the execution.

We will have to disagree on this one. I am not a professional security practitioner, but as far as my understanding goes, these are distinct threat classes from a security point of view. Ideally we would have neither PII exposure nor code injection. Accidental data exfiltration has an impact proportional to the type of data leaked; remote code injection means all bets are off.

we assume all the clients are of a good faith.

A robust defense in depth would guard against flaws even in trusted clients.

On top of that, there are no security boundaries within the protocol handler, we can't say that one domain has access to remote execution and the other does not.

This suggests to me that the protocol was designed with a very different security model in mind. Just because it works this way doesn't mean it is right.

I do have use-cases where I want to use the inspector for monitoring in production for Google Cloud, but unless there is a way to turn off the Debugger domain, I don't think I will be using it.

Adding @mikesamuel in case he has thoughts on this.

@mikesamuel

Sorry I'm late to this discussion.

I'm missing some context here, and am pretty ignorant of the inspector APIs and how they're used. What should I read to come up to speed?

This sounds related to abuses of remote debugging and monitoring hooks. Is that right? Are there also concerns about the properties of the network handshakes that initiate such connections?

@pavelfeldman

@mikesamuel, are you Mike Samuel from Google that I worked with on Google Calendar in 2007 :)?

This suggest to me that the protocol is designed with a very different security model in mind.

Indeed, it was designed with no security model in mind. Introducing security boundaries is hard, so we would like to defer to the existing process-boundary security model instead.

I'd like to better understand the security perimeter of your model. What is unclear to me is when you are fine with compromising PII but not with compromising the process that operates on that PII.

Profiler and tracing diagnostics expose direct access to raw JS memory, which holds all kinds of PII, passwords, security tokens, etc. When is it fine to leak a security token but not fine to inject code?

alexkozy added a commit to alexkozy/node that referenced this issue Jun 3, 2019
This flag specifies how the inspector WebSocket URL should be reported.
Three options are supported:
- stderr - reports the WebSocket URL as a message to stderr,
- http - exposes a /json/list endpoint that contains the inspector
  WebSocket URL,
- binding - require('inspector').url().

Related discussion: nodejs/diagnostics#303
alexkozy added a commit to nodejs/node that referenced this issue Jun 3, 2019

PR-URL: #27741
Reviewed-By: Eugene Ostroukhov <eostroukhov@google.com>
Reviewed-By: Benjamin Gruenbaum <benjamingr@gmail.com>
@ofrobots (Contributor) commented Jun 4, 2019

What is unclear to me is when you are fine with compromising PII, but not fine compromising the process that operates that PII.

Please don't dramatize my words. What I am stating is that remote code injection is a higher-severity vulnerability than exfiltration. If I am comfortable saying that none of my profiling data contains PII, then using a mechanism to get profiling data that also enables a remote-code-injection vector is not acceptable in my risk profile for a production server.

@ofrobots (Contributor) commented Jun 4, 2019

I guess one would want to run the inspector in a mode where even heap snapshots are disabled, but the CPU and sampling heap profilers remain enabled.

Note that I don't think of heap snapshots as 'profiling', but rather 'debugging'. Maybe that's where the disagreement is coming from?

@pavelfeldman

Please don't dramatize my words

I will do my best not to :)

If I am comfortable to say that none of my profiling data has PII

I guess that's where the disagreement is. As long as there are no formal guarantees that my PII does not end up in a generated function name, I cannot draw a formal distinction between the stack and the memory.

This topic came up during the user-land profiler / tracing discussions we've had a number of times, but we could never clear user-land function names as non-PII.

Basically my point is that there is no clear security boundary you are trying to establish.

@ofrobots (Contributor)

There is already a preponderance of server-side profiling tools, across various languages, that do not consider the stack trace PII. For example, NodeSource provides a runtime that lets users take a CPU profile on demand, and Google Cloud provides Stackdriver Profiler as a feature across many programming languages.

In the future I would imagine us wanting to take advantage of tracing data as a Google Cloud feature, but if the inspector is the only way to get it, that's a hard no-go.

BridgeAR pushed a commit to nodejs/node that referenced this issue Jun 17, 2019
@june07 commented Jan 24, 2020

This would immediately break the way NiM works. And what other client tooling would be affected? Essentially anything without control of the root process (unlike ndb, vscode, etc., which fork node). Is the goal to be a first-class tool only by owning the root Node process while treating everything else as second class? As an optional flag, sure; but on by default, no. Further, I would add that if people are scanning your private/protected network for inspector ports, you have larger issues.

@github-actions

This issue is stale because it has been open many days with no activity. It will be closed soon unless the stale label is removed or a comment is made.
