
(@aws-cdk/aws-ecs): SplunkLogDriver in Fargate PV 1.4 does not respect 'tag', but 'splunk-tag' instead #13881

Closed
bweigel opened this issue Mar 30, 2021 · 1 comment · Fixed by #13882
Labels: @aws-cdk/aws-ecs · bug · effort/small

Comments


bweigel commented Mar 30, 2021

In Fargate platform version 1.4 the Docker engine was replaced by containerd.
This changes the logging behavior: setting `tag` in `SplunkLogDriverProps` no longer yields the expected result.

Instead, the new logging implementation in PV 1.4 expects `splunk-tag`, as outlined in my correspondence with AWS support:

> Diving deeper into the Fargate 1.4 Platform version, a change was made in the backend logging architecture which replaces Docker Engine with Containerd as Fargate’s container execution engine. I'm including a document for your review, which is linked in the resources section, below [1].
>
> In the Github page for the new shim loggers, support for the Splunk log driver tag is mentioned in the README file. Interestingly, the syntax appears to have changed for the Containerd logging option. I am providing this link for you, as well [2].
>
> Instead of using `"tag": ""`, Containerd is expecting `"splunk-tag": ""`.
> ...
>
> [1] https://aws.amazon.com/blogs/containers/under-the-hood-fargate-data-plane : Under the Hood - Fargate Data Plane
> [2] https://github.com/aws/amazon-ecs-shim-loggers-for-containerd : AWS ECS Shim Loggers for Containerd
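
To make the difference concrete, here is a rough sketch of the two option maps (the token and URL are placeholders; only the option names matter):

```ts
// With the Docker engine (Fargate PV 1.3 and earlier) the Splunk log driver
// reads the tag from the generic 'tag' option:
const dockerEngineOptions = {
  'splunk-token': '<token>',
  'splunk-url': 'https://splunk.example.com:8088',
  'tag': '', // honored by the Docker Splunk log driver
};

// With containerd and the shim loggers (Fargate PV 1.4), according to the
// support response above, the equivalent option is named 'splunk-tag':
const shimLoggerOptions = {
  'splunk-token': '<token>',
  'splunk-url': 'https://splunk.example.com:8088',
  'splunk-tag': '', // expected by the containerd shim logger
};
```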

I have not verified that this works; however, I thought it reasonable to open an issue in case anyone else runs into this problem.

Reproduction Steps

I don't have a minimal example as of now, but pretty much anyone using

new ecs.SplunkLogDriver({
  tag: '',
  ...
});

will have noticed a change in behavior when the Fargate platform version LATEST was rolled forward to 1.4 over the past few weeks.
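
For completeness, here is a minimal sketch of a stack along those lines (not a verified reproduction; the secret name, HEC endpoint, and container image are placeholders):

```ts
import * as cdk from '@aws-cdk/core';
import * as ecs from '@aws-cdk/aws-ecs';

const app = new cdk.App();
const stack = new cdk.Stack(app, 'SplunkTagRepro');

// Cluster and service only exist to make the example deployable.
const cluster = new ecs.Cluster(stack, 'Cluster');
const taskDefinition = new ecs.FargateTaskDefinition(stack, 'TaskDef');

taskDefinition.addContainer('app', {
  image: ecs.ContainerImage.fromRegistry('amazon/amazon-ecs-sample'),
  logging: new ecs.SplunkLogDriver({
    token: cdk.SecretValue.secretsManager('splunk-hec-token'), // placeholder secret name
    url: 'https://splunk.example.com:8088',                    // placeholder HEC endpoint
    tag: '', // expected to suppress the container-id tag; not honored on Fargate PV 1.4
  }),
});

new ecs.FargateService(stack, 'Service', {
  cluster,
  taskDefinition,
  // LATEST currently resolves to platform version 1.4, which is where the behavior changed.
  platformVersion: ecs.FargatePlatformVersion.LATEST,
});

app.synth();
```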

What did you expect to happen?

No Docker container ID in my Splunk log messages when setting `tag: ''` (empty string).

What actually happened?

The Docker container ID shows up in my Splunk log messages.

Environment

  • CDK CLI Version: 1.95.1
  • Framework Version: 1.95.1
  • Node.js Version: 15.11.0
  • OS: Manjaro Linux
  • Language (Version): TypeScript 3.9.9

Comment

I don't know whether this is a bug in the CDK per se, because the behavior is as expected.

I can also try to fix/implement this, probably starting here with something pragmatic (see the sketch below the snippet)?

@aws-cdk/aws-ecs/lib/log-drivers/base-log-driver.ts#L2-L9

public bind(_scope: CoreConstruct, _containerDefinition: ContainerDefinition): LogDriverConfig {
  return {
    logDriver: 'splunk',
    options: stringifyOptions({
      'splunk-token': this.props.token,
      'splunk-url': this.props.url,
      'splunk-source': this.props.source,
      'splunk-sourcetype': this.props.sourceType,
      'splunk-index': this.props.index,
      'splunk-capath': this.props.caPath,
      'splunk-caname': this.props.caName,
      'splunk-insecureskipverify': this.props.insecureSkipVerify,
      'splunk-format': this.props.format,
      'splunk-verify-connection': this.props.verifyConnection,
      'splunk-gzip': this.props.gzip,
      'splunk-gzip-level': this.props.gzipLevel,
      ...renderCommonLogDriverOptions(this.props),
    }),
  };
}
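
Something along these lines (a sketch only, not yet tested) would keep the public API unchanged and simply emit the containerd-style key in addition to the existing one:

```ts
// Sketch: additionally render 'splunk-tag' from the existing `tag` prop so the same
// props work with both the Docker engine and the containerd shim loggers.
options: stringifyOptions({
  // ...all of the splunk-* options shown above...
  'splunk-tag': this.props.tag,                // option name used by the shim loggers
  ...renderCommonLogDriverOptions(this.props), // still renders the plain 'tag' option
}),
```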

What do you think?


This is a 🐛 Bug Report

bweigel added the bug and needs-triage labels Mar 30, 2021
peterwoodworth added the @aws-cdk/aws-ecs label Mar 30, 2021
SoManyHs added the effort/small label and removed the needs-triage label Mar 30, 2021
mergify bot closed this as completed in #13882 Mar 30, 2021
mergify bot pushed a commit that referenced this issue Mar 30, 2021
…version 1.4 (#13882)

Set `splunk-tag` when `tag` is set. This keeps the API constant; however, it adds an additional `splunk-tag` entry to the key-value `Options` property of the `AWS::ECS::TaskDefinition`'s `LogConfiguration`.

This is a very pragmatic approach. Feel free to suggest something else.

closes #13881 

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
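
For illustration, a sketch (placeholder values) of the `Options` map that this change renders into the task definition's log configuration; both keys end up present:

```ts
// Illustration only, with placeholder values: the rendered Options carry both keys.
const renderedOptions = {
  'splunk-token': '<token>',
  'splunk-url': 'https://splunk.example.com:8088',
  'tag': '',        // from renderCommonLogDriverOptions
  'splunk-tag': '', // the additional key introduced by #13882
};
```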

bweigel added a commit to bweigel/aws-cdk that referenced this issue Mar 31, 2021
…latform version 1.4 (aws#13882)"

This reverts commit e9d9299.

Unfortunately the above commit leads to broken deployments, caused by behavior upstream (i.e. in the ECS backend):

> Resource handler returned message: "Invalid request provided: Create TaskDefinition: Log driver splunk disallows options: splunk-tag (Service: AmazonECS; Status Code: 400; Error Code: ClientException; Request ID: d13abe56-36fa-4e81-b980-bf83789d4d0d; Proxy: null)" (RequestToken: 546a74e2-a2eb-ef75-43e7-231fe8927096, HandlerErrorCode: InvalidRequest)

This means, however, that the buggy behavior mentioned in aws#13881 persists.
mergify bot pushed a commit that referenced this issue Apr 2, 2021
…platform version 1.4 (#13882)" (#13892)

This reverts commit e9d9299.

Unfortunately the above commit leads to broken deployments, caused by behavior upstream (i.e. in the ECS backend):

> Resource handler returned message: "Invalid request provided: Create TaskDefinition: Log driver splunk disallows options: splunk-tag (Service: AmazonECS; Status Code: 400; Error Code: ClientException; Request ID: d13abe56-36fa-4e81-b980-bf83789d4d0d; Proxy: null)" (RequestToken: 546a74e2-a2eb-ef75-43e7-231fe8927096, HandlerErrorCode: InvalidRequest)

This means, however, that the buggy behavior mentioned in #13881 persists.
I am in communication with AWS support on this issue and will keep you posted.

Sorry for the mess. I should have marked PR #13882 as a draft until everything was clear.

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*