
s3 input is not separating log entries #420

Open

tfmm opened this issue Apr 12, 2023 · 2 comments

tfmm commented Apr 12, 2023

Describe the bug

I am using the s3 input to read CloudWatch Logs data (gzipped JSON) from S3 and send it to OpenSearch, but all entries from each log file in S3 are being put into a single entry in OpenSearch.

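A quick way to narrow this down is to pull one of the S3 objects and check how the entries are packed inside it. The snippet below is a minimal diagnostic sketch, assuming boto3 credentials are configured; BUCKET and KEY are placeholders for one of the objects referenced by the SQS notifications.

import gzip
import json

import boto3

# Placeholders -- point these at one of the objects the SQS queue referenced.
BUCKET = "S3_BUCKET_NAME"
KEY = "path/to/one/exported/object.gz"

s3 = boto3.client("s3", region_name="us-west-2")
raw = s3.get_object(Bucket=BUCKET, Key=KEY)["Body"].read()
text = gzip.decompress(raw).decode("utf-8")

lines = text.splitlines()
print(f"{len(lines)} line(s) in the decompressed object")

# If the object is a single JSON document (for example one record with a
# nested logEvents array) rather than newline-delimited JSON, a `format json`
# source would likely hand everything downstream as one event, which matches
# the symptom described above.
first = json.loads(lines[0])
if isinstance(first, dict):
    print("top-level keys:", sorted(first))
else:
    print("first line parsed as:", type(first).__name__)
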
To Reproduce

Set up Fluentd using the config below, with proper permissions on S3 and SQS.

Expected behavior

Each log entry should be parsed into a separate document in OpenSearch.

Your Environment

- Fluentd version: 1.16-1
- TD Agent version:
- fluent-plugin-s3 version: latest
- aws-sdk-s3 version:
- aws-sdk-sqs version:
- Operating system:
- Kernel version:

Your Configuration

<source>
  @type s3
  s3_bucket S3_BUCKET_NAME
  s3_region us-west-2
  add_object_metadata true
  format json
  <sqs>
    queue_name SQS_QUEUE_NAME
  </sqs>
</source>

<match **>
  @type opensearch
  host OPENSEARCH_HOST
  port 9200
  user %{OPENSEARCH_USER}
  password OPENSEARCH_PASSWORD
  scheme https
  include_timestamp true 
  logstash_format true
  logstash_prefix OS_INDEX_NAME
  suppress_type_name true
  ssl_verify false
  include_tag_key true
  tag_key _key
</match>
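
If the objects are CloudWatch Logs subscription/Firehose deliveries, each record bundles many log lines under a single logEvents array, so the `format json` source above would parse the whole bundle as one event, which would explain the behaviour. Below is a minimal sketch of the flattening that would then be needed, assuming that record layout (logGroup, logStream, logEvents with timestamp/message fields):

import gzip
import json

def split_log_events(gzipped_object: bytes) -> str:
    """Flatten a CloudWatch Logs subscription record (assumed layout:
    {"logGroup": ..., "logStream": ..., "logEvents": [...]}) into
    newline-delimited JSON, one line per log event."""
    record = json.loads(gzip.decompress(gzipped_object))
    flattened = []
    for event in record.get("logEvents", []):
        flattened.append(json.dumps({
            "logGroup": record.get("logGroup"),
            "logStream": record.get("logStream"),
            "timestamp": event.get("timestamp"),
            "message": event.get("message"),
        }))
    return "\n".join(flattened)

With one JSON object per line, each line could then be parsed as its own record and indexed as its own OpenSearch document.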

Your Error Log

No applicable errors are shown.

Additional context

No response

@valentinacala

@tfmm did you find any solution?

tfmm commented Aug 7, 2023 via email
