
Conversation


@jsoriano jsoriano commented Feb 15, 2024

Polish some things in the logstash scenario:

  • Pre-build the Logstash image:
    • It now includes the plugin, so it doesn't need to be installed on every run, speeding up startup after the first run.
    • It includes the configuration, so it can be modified and picked up with auto-reload, as introduced in #1668 (Logstash improvements: auto pipeline reload).
  • Use the private key without converting it to PKCS8.
  • Use the actual CA, instead of the Elasticsearch certificate, as the CA in the Logstash output.
  • Remove the startup script, as its operations were moved elsewhere or are no longer needed.
  • Use the same configuration template for the serverless and compose providers.

Part of #1682.
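For context, with a shared template the Logstash elasticsearch output ends up roughly like this (a sketch, not the exact template from the PR; the `elasticsearch_host` fact name and the certificate path are assumptions):

```conf
output {
  elasticsearch {
    hosts => ["{{ fact "elasticsearch_host" }}"]
    user => '{{ fact "username" }}'
    password => '{{ fact "password" }}'
    ssl_enabled => true
    # Trust the actual CA, not the elasticsearch certificate itself.
    ssl_certificate_authorities => "/usr/share/logstash/config/certs/ca-cert.pem"
    data_stream => "true"
  }
}
```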

@jsoriano jsoriano requested review from bhapas and mashhurs February 15, 2024 17:26
@jsoriano jsoriano self-assigned this Feb 15, 2024
command: bash /usr/share/logstash/startup.sh
volumes:
- "../certs/logstash:/usr/share/logstash/config/certs"
- "../certs/elasticsearch/cert.pem:/usr/share/logstash/config/certs/elasticsearch.pem:ro"
Member Author

This was creating a file owned by root in the user profile; we should avoid writing there.

@jsoriano jsoriano requested a review from a team February 15, 2024 17:29
@jsoriano
Member Author

This breaks serverless now; I will fix it.

Mode: resource.FileMode(0755),
Path: "Dockerfile.logstash",
Content: staticSource.File("_static/Dockerfile.logstash"),
},
Contributor

There is also another reference to this file in serverlessresources.go

Trying to create a serverless Elastic stack raised this error:

2024/02/15 18:41:48  INFO Starting local services
Error: booting up the stack failed: failed to start local services: could not initialize compose files for local services: there where errors: template: pattern matches no files: `_static/logstash_startup.sh`

Content: staticSource.Template("_static/logstash_startup.sh"),

@jsoriano jsoriano marked this pull request as draft February 15, 2024 17:53
@jsoriano jsoriano marked this pull request as ready for review February 15, 2024 18:42
@jsoriano jsoriano requested review from mashhurs and mrodm February 15, 2024 18:43
user => '{{ fact "username" }}'
password => '{{ fact "password" }}'
ssl_enabled => true
data_stream => "true"
Contributor

We should confirm this removal with @bhapas; there is document_id => "%{[@metadata][_ingest_document][id]}" in the on-prem configuration. That was the reason I left it duplicated.

Contributor

The document_id was there to avoid ingesting duplicate events. It can be applied to serverless too.

ARG IMAGE
FROM ${IMAGE}

RUN bin/logstash-plugin install logstash-filter-elastic_integration
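The `ARG IMAGE` makes the base image configurable at build time. Building such a pre-built image could look like the sketch below (the image names and tag are assumptions; elastic-package assembles the real build internally):

```shell
# Hypothetical base image tag; elastic-package derives the real one from the stack version.
BASE_IMAGE="docker.elastic.co/logstash/logstash:8.12.2"

# --build-arg fills the ARG IMAGE declared in Dockerfile.logstash.
CMD="docker build -f Dockerfile.logstash --build-arg IMAGE=${BASE_IMAGE} -t elastic-package-stack-logstash ."
echo "$CMD"
```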

@mashhurs mashhurs Feb 15, 2024


This will pull a new version of the plugin; the expectation is to use it as-is if the plugin is bundled with the current Logstash version. It wouldn't be a problem with other plugins, but this one is tightly coupled with ES.

Suggested change
RUN bin/logstash-plugin install logstash-filter-elastic_integration
RUN if ! bin/logstash-plugin list | grep -q logstash-filter-elastic_integration; then \
echo "Missing plugin logstash-filter-elastic_integration, installing now."; \
bin/logstash-plugin install logstash-filter-elastic_integration; \
fi

FYI: I couldn't test if this works...
Updated the suggestion; it should work, I think...
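A note on the conditional: POSIX `[` compares strings literally and does not do glob matching, so a `grep`-based check is easier to get right. A minimal, self-contained sketch of the idea, using a stand-in plugin list instead of a real Logstash installation:

```shell
# Stand-in for the output of `bin/logstash-plugin list` (assumption for illustration).
installed_plugins="logstash-filter-grok
logstash-filter-elastic_integration
logstash-output-elasticsearch"

plugin="logstash-filter-elastic_integration"

# Install only when the plugin is not already listed.
if printf '%s\n' "$installed_plugins" | grep -q "$plugin"; then
  echo "$plugin is already installed"
else
  echo "Missing plugin $plugin, installing now."
fi
```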

Member Author

Change applied, thanks.


@mashhurs mashhurs left a comment


Thank you @jsoriano for the quality work here. This is the best design.
I have pulled and tested both on-prem and serverless with Logstash. I left one comment to install the elastic_integration plugin only if it is not already bundled/installed.

@mrodm

mrodm commented Feb 19, 2024

I've tried to create a cluster (a Serverless project) using this branch, adding the system package to the Agent policy. This configuration is not able to ingest data into Elasticsearch.

This is an example of the error that I get from logstash container:

Could not index event to Elasticsearch. .... :response=>{"create"=>{"status"=>400, "error"=>{"type"=>"document_parsing_exception", "reason"=>"[1:1011] failed to parse: _id must be unset or set to [7QWrKrJpmTXFC8zWAAABjcDXW-A] but was [%{[@metadata][_ingest_document][id]}] because [.ds-metrics-system.diskio-default-2024.02.19-000001] is in time_series mode", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"_id must be unset or set to [7QWrKrJpmTXFC8zWAAABjcDXW-A] but was [%{[@metadata][_ingest_document][id]}] because [.ds-metrics-system.diskio-default-2024.02.19-000001] is in time_series mode"}}}}}

cc @jsoriano @bhapas

EDIT: the cluster created was a Serverless project.

@bhapas

bhapas commented Feb 19, 2024

> I've tried to create a cluster using this branch and adding the system package into the Agent policy. This configuration is not able to ingest data in Elasticsearch.
>
> This is an example of the error that I get from logstash container:
>
> Could not index event to Elasticsearch. .... :response=>{"create"=>{"status"=>400, "error"=>{"type"=>"document_parsing_exception", "reason"=>"[1:1011] failed to parse: _id must be unset or set to [7QWrKrJpmTXFC8zWAAABjcDXW-A] but was [%{[@metadata][_ingest_document][id]}] because [.ds-metrics-system.diskio-default-2024.02.19-000001] is in time_series mode", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"_id must be unset or set to [7QWrKrJpmTXFC8zWAAABjcDXW-A] but was [%{[@metadata][_ingest_document][id]}] because [.ds-metrics-system.diskio-default-2024.02.19-000001] is in time_series mode"}}}}}
>
> cc @jsoriano @bhapas

@mrodm I think time series index mode cannot use an externally generated _id, if I remember correctly from @mashhurs.

One way is to remove %{[@metadata][_ingest_document][id]} from the Logstash config and let it ingest duplicate entries (as this is just for testing), or have a different config for time_series mode.

@jsoriano
Member Author

> One way is to remove %{[@metadata][_ingest_document][id]} from the Logstash config and let it ingest duplicate entries (as this is just for testing), or have a different config for time_series mode.

I am removing it from serverless, as it was before this change. Locally it is not failing for me or in CI.
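Concretely, for serverless the elasticsearch output just drops the `document_id` option, since data streams in time_series mode generate their own `_id` (a sketch; the `elasticsearch_host` fact name is an assumption, not the exact template from the PR):

```conf
output {
  elasticsearch {
    hosts => ["{{ fact "elasticsearch_host" }}"]
    user => '{{ fact "username" }}'
    password => '{{ fact "password" }}'
    ssl_enabled => true
    data_stream => "true"
    # No document_id here: indices in time_series mode require _id to be unset on ingest.
  }
}
```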

@elasticmachine
Collaborator

💚 Build Succeeded


cc @jsoriano

@jsoriano jsoriano merged commit 26e81fe into elastic:main Feb 19, 2024
@jsoriano jsoriano deleted the generate-pkcs8-keys branch February 19, 2024 14:15