
Istio logs #3632

Merged
merged 51 commits into from
Sep 12, 2022

Conversation

gsantoro
Contributor

@gsantoro gsantoro commented Jun 30, 2022

What does this PR do?

Add support for Istio logs.

Notes:

  • parser: container
    • The default Istio configuration emits text-format logs, but users can configure JSON logs as well. To handle both cases we use the container parser with a separate ingest pipeline for JSON logs.
  • stream: stdout
    • Both access logs and errors use the stdout stream and the same text format.
  • format: cri
    • Istio always uses the CRI log format.
  • We support both the text and JSON default log formats. Customers can change the default format via a custom log format; see the Istio logging documentation for more info.
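The parser setup described above (and quoted later in the review) can be sketched as follows. The filestream parser block matches the PR; the ingest-pipeline step below it is an illustrative assumption of how JSON routing could look, and the pipeline name and condition are placeholders, not taken from the package source:

```yaml
# filestream input: parse CRI-formatted stdout lines from Istio containers
parsers:
  - container:
      stream: stdout
      format: cri

# ingest pipeline step (sketch, assumed): hand JSON-formatted messages to a
# dedicated pipeline; both the condition and the pipeline name are illustrative
- pipeline:
    if: "ctx.message != null && ctx.message.startsWith('{')"
    name: logs-istio.access_logs-json
```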

Checklist

  • I have reviewed tips for building integrations and this pull request is aligned with them.
  • I have verified that all data streams collect metrics or logs.
  • I have added an entry to my package's changelog.yml file.
  • I have verified that Kibana version constraints are current according to guidelines.

Author's Checklist

How to test this PR locally

Set up Istio on Elastic Cloud + k8s on Google Cloud

  1. Start a k8s cluster on Google Cloud
    1. Create a standard Kubernetes cluster (NOT Autopilot) with static version 1.24.3-gke.200 and 3 nodes with machine type e2-medium.
    2. Connect to the cluster from your laptop. The console UI provides a command to run locally to get credentials for the k8s cluster and add a cluster context to your kubectl config.
    3. Start kube-state-metrics as documented at https://github.com/kubernetes/kube-state-metrics
  2. Start a VM with the package registry
    1. On Google Cloud, create a VM on which to run the package registry. This is required since the integration hasn't been published yet.
    2. Start the package registry via Docker as documented at https://github.com/elastic/package-registry. To make it easier to publish our Istio integration, I first downloaded all the packages to a local folder via git clone --branch snapshot https://github.com/elastic/package-storage.git, then started the package registry with that local folder mounted as a volume: docker run -it -p 80:8080 -v $(pwd)/package-storage/packages/:/packages/package-registry docker.elastic.co/package-registry/package-registry:main
    3. Build the Istio integration locally by running elastic-package build inside the folder packages/istio of the integrations repo (cloned locally).
    4. Copy the Istio integration to the package registry on this VM via scp.
    5. Restart the package registry so that the integration is correctly loaded into the registry.
  3. Set up the ELK cluster
    1. Start an elastic.cloud cluster on google-cloud in us-west2 so that you can change custom config.
    2. Point Kibana to the custom registry with the custom user config xpack.fleet.registryUrl: "http://<ip>:80". Replace <ip> with the primary external (ephemeral) IP of the VM where you started the package registry. Make sure port 80 is open at the firewall level.
    3. Click Save to restart Kibana with the new config.
    4. Create a custom Agent policy to generate a Fleet enrollment token, used to connect the Elastic Agents running on k8s to the Elastic stack.
  4. Add Elastic Agents on k8s
    1. Configure FLEET_URL and FLEET_ENROLLMENT_TOKEN from Kibana in elastic-agent/deploy/kubernetes/elastic-agent-managed-kubernetes.yaml so that elastic-agent can talk to Kibana/Fleet on your Elastic deployment.
    2. Apply the manifest to start the Elastic Agents.
    3. Check in Kibana that the agents are detected by Fleet and are communicating.
  5. Configure a sample app in Istio
    1. Follow the steps at https://istio.io/latest/docs/setup/getting-started/ to set up Istio and start a sample app that generates some access logs.
  6. Add integrations
    1. Add the Istio integration.
    2. Add a data view for Istio logs.
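The registry and agent steps above boil down to a few commands. These are lifted from the description where given; paths and working directories are assumptions and need adjusting to your layout:

```shell
# Step 2.2: run a local package registry with the snapshot packages mounted
git clone --branch snapshot https://github.com/elastic/package-storage.git
docker run -it -p 80:8080 \
  -v $(pwd)/package-storage/packages/:/packages/package-registry \
  docker.elastic.co/package-registry/package-registry:main

# Step 2.3: build the istio package locally (path assumes a local clone of the integrations repo)
cd integrations/packages/istio && elastic-package build

# Step 4.2: deploy the managed agent manifest, after setting
# FLEET_URL and FLEET_ENROLLMENT_TOKEN inside the file
kubectl apply -f elastic-agent/deploy/kubernetes/elastic-agent-managed-kubernetes.yaml
```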

Related issues

Screenshots

@gsantoro gsantoro self-assigned this Jun 30, 2022
@gsantoro gsantoro requested a review from a team June 30, 2022 09:58
@gsantoro gsantoro added In Progress Team:Cloudnative-Monitoring Label for the Cloud Native Monitoring team [elastic/obs-cloudnative-monitoring] labels Jun 30, 2022
@elasticmachine

elasticmachine commented Jun 30, 2022

💚 Build Succeeded


Build stats

  • Start Time: 2022-09-09T13:01:22.314+0000

  • Duration: 13 min 22 sec

Test stats 🧪

Test Results: 6 total (6 passed, 0 failed, 0 skipped)

🤖 GitHub comments

To re-run your PR in the CI, just comment with:

  • /test : Re-trigger the build.

@ChrsMark
Member

Nice to see this! It would be good to have some sample logs and a sample produced event, so we can see whether there is anything additional we could add.

Contributor

@tetianakravchenko tetianakravchenko left a comment


packages/istio/manifest.yml (outdated, resolved)
@gsantoro gsantoro changed the title Feature/istio logs Istio logs Aug 18, 2022
@gsantoro gsantoro added the enhancement New feature or request label Aug 18, 2022
@gsantoro gsantoro requested review from a team and gizas August 30, 2022 12:49
Member

@ChrsMark ChrsMark left a comment


Nice! I have left some comments that we will need to consider.

Also please remember to add a "How to test manually" section in the PR description for everyone that would be interested to test it as well as for future reference.

Member

@ChrsMark ChrsMark left a comment


LGTM!

@gsantoro gsantoro merged commit 1ceefe1 into elastic:main Sep 12, 2022
@gsantoro gsantoro deleted the feature/istio_logs branch September 12, 2022 08:18
Contributor

@tetianakravchenko tetianakravchenko left a comment


LGTM! I've left a few questions.

@@ -0,0 +1,17 @@
# Istio Integration

This integration ingests access logs created by the [Istio](https://istio.io/) service mesh.
Contributor


Is it only access logs? As I understood it, this format works for both access and error logs, no?

Should the data_stream then be renamed access_logs -> logs?

Contributor Author


Good point. I would argue that we are only processing a subset of all logs (only the access logs). Errors may or may not appear in the access logs, so I would keep the name as it is.

parsers:
- container:
stream: stdout
format: cri
Contributor


Is it necessary to define this parser configuration? The default values look quite safe: https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-filestream.html#_container. Also, if this integration is used on a cluster with the Docker log format, it will not work.

Contributor Author


Sorry, I only saw this message after I had already merged the PR. I'll keep it in mind for v0.2.0.

field: http.response.status_code
type: long
ignore_missing: true
on_failure:
Contributor


Will any information be added to the document indicating that the processor failed?

Contributor Author


Do you mean for this specific field or in general? In general, an error.message field is added in the last step of the pipeline.
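As a sketch of the pattern discussed in this thread: the convert processor fields are taken from the quoted diff, but the on_failure body is an illustrative assumption, not necessarily what the package's pipeline does:

```yaml
# convert processor from the diff, with an assumed on_failure handler that
# records why the conversion failed instead of dropping the document
- convert:
    field: http.response.status_code
    type: long
    ignore_missing: true
    on_failure:
      - set:
          field: error.message
          value: "Failed to convert http.response.status_code: {{{ _ingest.on_failure_message }}}"
```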

Labels
enhancement New feature or request In Progress Integration:istio Istio New Integration Team:Cloudnative-Monitoring Label for the Cloud Native Monitoring team [elastic/obs-cloudnative-monitoring]
6 participants