
Duplicated logs when fingerprint_size is set to larger value #22936

Closed
kkujawa-sumo opened this issue May 30, 2023 · 11 comments · Fixed by #31251
Labels
bug Something isn't working pkg/stanza priority:p2 Medium receiver/filelog release:required-for-ga Must be resolved before GA release

Comments

@kkujawa-sumo
Contributor

Component(s)

receiver/filelog

What happened?

Description

The filelog receiver constantly re-reads the same log file when fingerprint_size is set to a value larger than the scanner's default buffer size and the log file is smaller than fingerprint_size but larger than the scanner buffer.
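For illustration only, a minimal Go sketch that generates a file in the problematic size range, i.e. larger than the 16 KiB default scanner buffer but smaller than the 17408-byte fingerprint_size used in the config below; the file name and sizes are assumptions based on this report:

package main

import (
	"os"
	"strings"
)

func main() {
	// 170 lines of 100 bytes each (including the newline) = 17,000 bytes:
	// above the 16384-byte default scanner buffer, below fingerprint_size (17408).
	line := strings.Repeat("x", 99) + "\n"
	data := strings.Repeat(line, 170)
	if err := os.WriteFile("generated_log.txt", []byte(data), 0o644); err != nil {
		panic(err)
	}
}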

Steps to Reproduce

Expected Result

Logs are not duplicated

Actual Result

Logs are duplicated; the same logs are read over and over.

Collector version

commit: ca50a98fdda4c362d8782a29f6fc5cc27977f37b

Environment information

Environment

Darwin Kernel Version 22.3.0
go version go1.19.6 darwin/amd64

OpenTelemetry Collector configuration

exporters:
  file:
    path: out.log
receivers:
  filelog:
    fingerprint_size: 17408
    start_at: beginning
    include:
    - generated_log.txt
service:
  pipelines:
    logs:
      exporters:
      - file
      receivers:
      - filelog

Log output

No response

Additional context

No response

@kkujawa-sumo kkujawa-sumo added bug Something isn't working needs triage New item requiring triage labels May 30, 2023
@github-actions
Contributor

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

@kkujawa-sumo
Contributor Author

The issue is not observed with the fix in this draft pull request: #22937

@djaglowski djaglowski removed the needs triage New item requiring triage label Jun 1, 2023
@djaglowski
Member

@kkujawa-sumo, thanks for the detailed example. I was able to reproduce the issue.

@djaglowski
Member

This issue was tentatively resolved by #23183, but unfortunately that PR had to be reverted because it introduced a minor issue with rotation. I can't make rebooting the PR an immediate priority, but it is on my list to circle back to when possible.

@kkujawa-sumo
Contributor Author

kkujawa-sumo commented Jul 7, 2023

Maybe it would be worthwhile to make the buffer size configurable 🤔 That would give people who need a larger fingerprint size a workaround for this issue.
What do you think?

@djaglowski
Member

Maybe it would be worthwhile to make the buffer size configurable 🤔 That would give people who need a larger fingerprint size a workaround for this issue. What do you think?

I'm hesitant to add this until we are sure it is necessary. My top priority right now is refactoring the fileconsumer package to make it more testable. Once we have solid test coverage of these nuanced issues, I think we can either solve this one or prove that it needs to become a user-facing option. That said, if you need a workaround urgently, I think it would be reasonable to add this behind a feature gate, and we can remove it later if possible.
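A purely hypothetical sketch of what such a feature-gated option might look like; scanner_buffer_size is an invented name for discussion, not an option the filelog receiver actually exposed at the time:

receivers:
  filelog:
    include:
    - generated_log.txt
    fingerprint_size: 17408
    # Hypothetical, feature-gated option (invented name): grow the scanner's
    # initial buffer so the fingerprint can be read in a single iteration.
    scanner_buffer_size: 17408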

kkujawa-sumo added a commit to SumoLogic/sumologic-kubernetes-collection that referenced this issue Jul 19, 2023
This prevents open-telemetry/opentelemetry-collector-contrib#22936,
which is caused by an incorrect update of the fingerprint when it is read in multiple iterations.
The issue is not observed when fingerprint_size is not larger than the default buffer size (16 KiB).
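The workaround in these commits amounts to keeping fingerprint_size at or below the scanner's default buffer size. A hedged config sketch (values are illustrative, mirroring this report's config):

receivers:
  filelog:
    include:
    - generated_log.txt
    start_at: beginning
    # Keeping fingerprint_size at or below the 16 KiB (16384-byte) default
    # scanner buffer avoids the duplication described in this issue.
    fingerprint_size: 16384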
kkujawa-sumo added a commit to SumoLogic/sumologic-kubernetes-collection that referenced this issue Aug 3, 2023
… settings for fingerprint_size on k8s >=1.24

When fingerprint_size is set to 1kb (the default value), the issue with duplicated logs is not observed.
ref: open-telemetry/opentelemetry-collector-contrib#22936
@github-actions
Contributor

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

@djaglowski
Member

Still on my radar. I'm on a mission to refactor fileconsumer into smaller, more testable packages. Much progress has been made, but I'm still a few steps away from circling back to this.


@github-actions
Contributor

github-actions bot commented Feb 2, 2024

Pinging code owners for pkg/stanza: @djaglowski. See Adding Labels via Comments if you do not have permissions to add labels yourself.

@djaglowski djaglowski added the release:required-for-ga Must be resolved before GA release label Feb 12, 2024
djaglowski added a commit that referenced this issue Feb 27, 2024
Depends on #31298

Fixes #22936

This PR changes the way readers update their fingerprints.

Currently, when `reader.ReadToEnd` is called, it creates a scanner and
passes itself (the reader) in as the `io.Reader`, so that the scanner uses a
custom implementation of `Read`. Each time the scanner calls `Read`, we try
to reason about whether the data we've read should be appended to the
fingerprint. The problem is that in some rare cases the correct position in
the bytes buffer differs from the file's "offset" as we track it. See the
example in #22937.

There appear to be two ways to solve this. A "simple" solution is to
independently determine the file handle's current offset with a clever use
of `Seek` ([suggested here](https://stackoverflow.com/a/10901436/3511338)).
Although this does appear to work, it leaves open the possibility that the
fingerprint is corrupted: _if the file was truncated_, we may be updating
the fingerprint with incorrect information.
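For reference, a minimal sketch of that Seek trick on a plain os.File; calling Seek with offset 0 and whence io.SeekCurrent reports the handle's current offset without moving it (the file name is assumed from this report):

package main

import (
	"fmt"
	"io"
	"os"
)

func main() {
	f, err := os.Open("generated_log.txt") // file name assumed from this report
	if err != nil {
		panic(err)
	}
	defer f.Close()

	buf := make([]byte, 16*1024)
	n, _ := f.Read(buf) // advance the handle, as the scanner's Read would

	// Seek(0, io.SeekCurrent) returns the current offset without changing it.
	cur, err := f.Seek(0, io.SeekCurrent)
	if err != nil {
		panic(err)
	}
	fmt.Printf("read %d bytes; handle offset is now %d\n", n, cur)
}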

The other solution, proposed in this PR, simply has the custom `Read`
function set a flag to indicate that the fingerprint _should_ be updated.
Then, just before returning from `ReadToEnd`, we create an entirely new
fingerprint. This avoids having to manage any kind of append operation, and
it also provides an opportunity to independently check that the fingerprint
has not been altered by truncation.
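A hedged sketch of that flag-based approach; the types and names below are simplified stand-ins for illustration, not the actual fileconsumer implementation:

package main

import (
	"fmt"
	"io"
	"os"
)

// reader is a simplified stand-in for the fileconsumer reader.
type reader struct {
	file            *os.File
	fingerprintSize int
	needsUpdate     bool
}

// Read wraps the file's Read and only records that the fingerprint is stale,
// instead of trying to append to it mid-scan.
func (r *reader) Read(p []byte) (int, error) {
	n, err := r.file.Read(p)
	if n > 0 {
		r.needsUpdate = true
	}
	return n, err
}

// rebuildFingerprint re-reads the first fingerprintSize bytes just before
// ReadToEnd would return. ReadAt does not move the handle's offset, and
// re-reading from 0 also guards against a fingerprint corrupted by truncation.
func (r *reader) rebuildFingerprint() ([]byte, error) {
	fp := make([]byte, r.fingerprintSize)
	n, err := r.file.ReadAt(fp, 0)
	if err != nil && err != io.EOF {
		return nil, err
	}
	return fp[:n], nil
}

func main() {
	f, err := os.Open("generated_log.txt") // file name assumed from this report
	if err != nil {
		panic(err)
	}
	defer f.Close()

	r := &reader{file: f, fingerprintSize: 17408}
	buf := make([]byte, 16*1024)
	for {
		if _, err := r.Read(buf); err != nil {
			break // EOF: done scanning
		}
	}
	if r.needsUpdate {
		fp, _ := r.rebuildFingerprint()
		fmt.Printf("fingerprint refreshed: %d bytes\n", len(fp))
	}
}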

Benchmarks appear to show all three solutions are close in performance.
XinRanZhAWS pushed a commit to XinRanZhAWS/opentelemetry-collector-contrib that referenced this issue Mar 13, 2024
LokeshOpsramp pushed a commit to opsramp/opentelemetry-collector-contrib that referenced this issue May 5, 2024