Default Values in Helm Chart cause problems #519

Closed
fm4991 opened this issue Aug 25, 2022 · 1 comment
Labels
bug Something isn't working

Comments

fm4991 commented Aug 25, 2022

When we install the otel-collector with this values.yaml in our k3s clusters, we get the following error message:

kubectl logs splunk-otel-collector-agent-xl89v
2022/08/23 15:53:28 main.go:225: Set config to [/conf/relay.yaml]
2022/08/23 15:53:28 main.go:291: Set ballast to 165 MiB
2022/08/23 15:53:28 main.go:305: Set memory limit to 450 MiB
Error: failed to get config: cannot unmarshal the configuration: error reading receivers configuration for "filelog": 2 error(s) decoding:

* 'force_flush_period' expected a map, got 'string'
* 'poll_interval' expected a map, got 'string'
2022/08/23 15:53:29 main.go:143: application run finished with error: failed to get config: cannot unmarshal the configuration: error reading receivers configuration for "filelog": 2 error(s) decoding:

* 'force_flush_period' expected a map, got 'string'
* 'poll_interval' expected a map, got 'string'

We tried several times to fix it by playing around with the YAML structure, but in our opinion it is already a map.
In the end we fixed it by commenting out the two mentioned values in the agent configmap.

poll_interval: 200ms
max_concurrent_files: 1024
encoding: utf-8
fingerprint_size: 1kb
max_log_size: 1MiB
# Disable force flush until this issue is fixed:
# https://github.com/open-telemetry/opentelemetry-log-collection/issues/292
force_flush_period: "0"
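For reference, a sketch of what the workaround looks like in the agent configmap after our edit: the two keys reported in the error are simply commented out, so the filelog receiver presumably falls back to its built-in defaults for them.

# poll_interval: 200ms
max_concurrent_files: 1024
encoding: utf-8
fingerprint_size: 1kb
max_log_size: 1MiB
# Disable force flush until this issue is fixed:
# https://github.com/open-telemetry/opentelemetry-log-collection/issues/292
# force_flush_period: "0"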

@atoulme atoulme added the bug Something isn't working label Aug 25, 2022
@aryznar-splunk (Contributor) commented:
This issue is fixed and delivered.
