[Elastic Agent] Infrastructure to support multiple outputs in a policy #27442

Closed
nimarezainia opened this issue Aug 17, 2021 · 8 comments
Labels: QA:Validated (Validated by the QA Team) · Team:Elastic-Agent (Label for the Agent team) · v8.2.0

Comments

nimarezainia (Contributor) commented Aug 17, 2021

Describe the enhancement:

There's a requirement to build the infrastructure to support multiple outputs of the same type in a given policy.
[Note: this request is NOT about supporting multiple types of output, such as Logstash and/or Kafka. Those are tracked elsewhere.]

This enhancement request is also not about adding support in Fleet for the second output.

Describe a specific use case for the enhancement or feature:

Stack Monitoring is a specific use case. Users often want the stack monitoring data to be sent to an Elasticsearch cluster different from the one receiving the actual data (logs/metrics) collected. So a given agent policy needs to support at least two outputs (of the same type).
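To make that concrete, here is a minimal sketch of what such a policy could look like in standalone elastic-agent.yml form. The output names, hosts, and API-key variables are invented for illustration; the point is only the two-outputs-of-one-type shape:

```yaml
# Illustrative only: two Elasticsearch outputs in one agent policy.
outputs:
  default:
    # Cluster that receives the collected logs/metrics.
    type: elasticsearch
    hosts: ["https://data-cluster.example.com:9200"]
    api_key: "${DATA_API_KEY}"
  monitoring:
    # Second output of the same type, pointing at a different cluster.
    type: elasticsearch
    hosts: ["https://monitoring-cluster.example.com:9200"]
    api_key: "${MONITORING_API_KEY}"

# Route the agent's own monitoring data to the second output.
agent.monitoring:
  enabled: true
  logs: true
  metrics: true
  use_output: monitoring
```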

cc: @blakerouse @mukeshelastic @mostlyjason

elasticmachine (Collaborator)

Pinging @elastic/agent (Team:Agent)

nimarezainia (Contributor, Author)

API Key Management in Fleet Server
For better security on the Elastic Agent endpoints, each agent should have a uniquely scoped API key for remote Elasticsearch clusters (the same as for the main Elasticsearch endpoint that Fleet Server already uses). The difference is that multiple Fleet Servers will share a service token when the Elasticsearch output is a remote cluster, whereas for the main cluster each Fleet Server has a unique service token.

The shared service token for Fleet Server should be scoped to provide only the functionality that Fleet Server needs to create API keys (access to other indices is not needed and should not be given).

Fleet Server would store the API keys created for remote Elasticsearch clusters in the indices of its main Elasticsearch cluster.
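As an illustration of what Fleet Server would create on the remote cluster (not taken from this issue), the per-agent key could be a standard Elasticsearch create-API-key call, scoped to the data streams the agent writes; the key name and role-descriptor name below are invented:

```
POST /_security/api_key
{
  "name": "agent-<agent-id>-remote-output",
  "role_descriptors": {
    "remote_output_writer": {
      "index": [
        {
          "names": ["logs-*", "metrics-*"],
          "privileges": ["auto_configure", "create_doc"]
        }
      ]
    }
  }
}
```

The key id/value pair returned here, rather than the service token itself, is what would be handed to the agent; per this proposal, the key metadata would live in the main cluster's indices.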

Implementation Stages

  • Modify Fleet Server to use a service token that is passed down from an output defined in a policy.
  • Modify Fleet Server to store multiple API keys per Elastic Agent (at the moment it can only store one).
  • Add a limited-scope service account to Elasticsearch (elastic/fleet-remote) that only grants the ability to create/delete API keys (a token-creation sketch follows this list).
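Creating a token for that account would go through the standard Elasticsearch service-token API. A minimal sketch, assuming the proposed elastic/fleet-remote service account existed (it is a proposal above, not a shipped account; the token name is invented):

```
POST /_security/service/elastic/fleet-remote/credential/token/remote-fleet-token-1
```

The token value in the response is what the policy's output definition would pass down to each Fleet Server, per the first item in the list above.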

ruflin (Member) commented Sep 1, 2021

> The shared service token for Fleet Server should be scoped to provide only the functionality that Fleet Server needs to create API keys (access to other indices is not needed and should not be given).

A service token has the exact same permissions as its service account, and most of those permissions are needed. The service token needs access to the logs-* etc. indices, as otherwise no API key could be created from it. The permissions that are not strictly needed are for the remote .fleet-* indices, but I'm not sure this scenario is worth complicating things for. My assumption is that Fleet is NOT used on the remote cluster; otherwise there could be credentials in there.

blakerouse (Contributor)

> The shared service token for Fleet Server should be scoped to provide only the functionality that Fleet Server needs to create API keys (access to other indices is not needed and should not be given).

> A service token has the exact same permissions as its service account, and most of those permissions are needed. The service token needs access to the logs-* etc. indices, as otherwise no API key could be created from it. The permissions that are not strictly needed are for the remote .fleet-* indices, but I'm not sure this scenario is worth complicating things for. My assumption is that Fleet is NOT used on the remote cluster; otherwise there could be credentials in there.

I think the goal here is to scope it as much as possible, so ideally we can get an elastic/fleet-remote service account that grants only what a remote Fleet Server requires.
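One way to work out how much narrower such an account could be is to compare it against what the existing Fleet Server service account grants, which the get-service-accounts API exposes; for example:

```
GET /_security/service/elastic/fleet-server
```

The response includes the role descriptor behind the account, which makes it easier to see which index privileges (e.g. the .fleet-* access mentioned above) a remote-only variant could drop.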

jlind23 (Collaborator) commented Nov 22, 2021

@nimarezainia I missed that one.
The 7.16 scope does not seem to be relevant anymore.
What is the priority here? Adding it to the backlog while waiting for your answer.

jlind23 (Collaborator) commented Mar 9, 2022

@lykkin thanks to your work on the Logstash output, policies are now able to have multiple outputs, even of the same kind, right?

cc @ph

jlind23 added the QA:Ready For Testing (Code is merged and ready for QA to validate) label Mar 21, 2022
jlind23 (Collaborator) commented Mar 21, 2022

@nimarezainia I just discussed this with @lykkin.
It is already supported thanks to the work done on the Logstash output. One thing to keep in mind: remote clusters are not supported yet, because credentials are not passed down from the Fleet UI to the agents.

jlind23 closed this as completed Mar 21, 2022
amolnater-qasource

Hi @jlind23
We have validated the Logstash output feature on an 8.2 BC-2 self-managed environment and the latest 8.2 snapshot.

  • We are successfully able to run a secondary agent and get System integration data through Logstash.

Build details 8.2 Snapshot:
BUILD: 51835
COMMIT: 6fcd2d0b9ddd4b6a8616f832e8eb80923b6bea7c


Hence, marking this as QA:Validated. We will also be creating test content for this feature.
Please let us know if anything else is required from our end.
Thanks

amolnater-qasource added the QA:Validated label and removed the QA:Ready For Testing label Apr 12, 2022