[exporter/datadog] Logs export fails with '414 Request-URI Too Large' #16380
Pinging code owners:
See Adding Labels via Comments if you do not have permissions to add labels yourself.
@hamidp555, could you please enable debug logging for the collector and send the output?
This would print the URL and POST body it is sending to the Datadog backend. Please remove any sensitive information and share the log along with the error. This would be helpful in debugging and fixing the issue.
@dineshg13, I enabled DEBUG; the first export is successful, here is part of the log for the first export. However, subsequent logs get transformed with many backslashes, here is part of the log. The number of backslashes increases after each failure and causes an error. Then, after a few retries, the error changes to:
This is part of the payload that the Datadog exporter is trying to send; the payload is an array and the following is an item in it:
If I understand correctly, the problem is that you are sending the Collector's own logs via the Datadog exporter, and the Datadog exporter is logging these logs, which means it will repeatedly escape the logs until we reach the maximum payload size. A short-term workaround while we figure out a solution is to set the Collector log level to info or higher. Could you try this out, @hamidp555?
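The escaping feedback loop described above can be sketched in a few lines of Go. This is illustrative only, not the exporter's code: each pass JSON-encodes the previous log line as a string, so every existing quote gains a backslash and every existing backslash doubles, and the line grows without bound.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// escapeLoop re-encodes msg as a JSON string n times, mimicking an
// exporter that ingests its own log output: every pass escapes the
// previous pass's quotes and backslashes.
func escapeLoop(msg string, n int) string {
	for i := 0; i < n; i++ {
		b, _ := json.Marshal(msg) // wrap the previous line in one more layer of escaping
		msg = string(b)
	}
	return msg
}

func main() {
	msg := `{"msg":"hello"}`
	for i := 1; i <= 3; i++ {
		msg = escapeLoop(msg, 1)
		fmt.Printf("pass %d: %d bytes\n", i, len(msg)) // 21, 33, 57 — growth compounds
	}
}
```

After only a few round trips the payload is dominated by backslashes, matching the behavior reported above.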
@mx-psi That is correct. I tried the info log level too, and after almost an hour and a half the same errors appeared. The following is the log with the info level turned on. As you can see, the first few exports are fine, then a 400 error, and later a 414 error.
Alright, thanks for testing @hamidp555, looks like this is not the only issue then. We would need to have the
I also wanted to mention: if sharing the file is hard for you or there are sensitive details in it, feel free to reach out through support (support@datadoghq.com) and mention this issue |
@mx-psi, currently I am not using the file exporter. Do I have to use a different pipeline config than what I shared? If so, could you please share the new configuration? Thank you.
Hi @hamidp555! I'll try to help while @mx-psi is away. Please bear with me. It looks like the

```yaml
exporters:
  file:
    path: /path/to/file.json
```

Then, in your logs pipeline, simply add the exporter to your already existing list of exporters. There is no need for a separate pipeline; it is just an additional exporter. If you're having trouble with my explanation above, please share your full YAML file and I can show you the exact modifications you need to make.
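For reference, a fuller (hypothetical) configuration showing where that snippet lands. The receiver name and the Datadog exporter settings here are placeholders for whatever the existing pipeline already uses; only the `file` exporter and its entry in the pipeline's `exporters` list are new:

```yaml
exporters:
  datadog:          # existing exporter, settings unchanged
    api:
      key: ${DD_API_KEY}
  file:             # added alongside, not in a separate pipeline
    path: /path/to/file.json

service:
  pipelines:
    logs:
      receivers: [otlp]            # placeholder for the existing receiver(s)
      exporters: [datadog, file]   # file exporter appended to the existing list
```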
exporter/datadogexporter: suppress payload dump when logging at debug level. This change removes the dumping of the payload at 'debug' logging level. It can be re-enabled using `exporter::datadog::logs::dump_payload` if desired. Updates open-telemetry#16380
I created a PR which suppresses the payload dump at debug level, which may be the cause for the 414 too. @hamidp555, please let us know if you can try it out once it is merged.
@gbbr Great, I will build from your PR branch and try it.
exporter/datadogexporter: suppress payload dump when logging at debug level (#16492)
* This change removes the dumping of the payload at 'debug' logging level. It can be re-enabled using `exporter::datadog::logs::dump_payload` if desired. Updates #16380
* Add changelog
* Log nothing unless DumpPayload is on
I have tried the fix; unfortunately, I am still getting the 414 error after a few hours.
😞 Will look into it. Is the backslash problem solved though?
Yes, the backslash issue is solved, thank you :) When I ran the collector with the DEBUG log level, I got this error after a few successful log exports:
Then, it turns into:
These errors (400 and 414) are about the URI. I am wondering why the error is about the URI and not the payload itself, if the payload is the issue.
Hello, I'm also experiencing the same issue and I'm happy to help test any fix. For me the problem starts after about 5 minutes of collector runtime. There doesn't seem to be anything in the content of the logs that changed from the previous logs before it starts happening.
The client appears to be appending tags on each request. Since these end up as query parameters in the API call to DD, could it be possible these are growing with each request until the API URI limit is hit? (https://github.com/DataDog/datadog-api-client-go/blob/master/api/datadogV2/api_logs.go#L779)
I added some debug logging to `logs/sender.go` to inspect the tags. For example, here are the dd tags from one of the earlier requests:

```
otel | 2022-11-30T03:52:47.241Z debug logs/sender.go:77 Tags {"kind": "exporter", "data_type": "logs", "name": "datadog", "tags": "service:nest-app,env:simon,os.type:linux,otel_source:datadog_exporter,service:nest-app,env:simon,os.type:linux,otel_source:datadog_exporter"}
```

and then again from a later request (note the growth and duplication of tags):

```
otel | 2022-11-30T03:53:57.255Z debug logs/sender.go:77 Tags {"kind": "exporter", "data_type": "logs", "name": "datadog", "tags": "service:nest-app,env:simon,os.type:linux,otel_source:datadog_exporter,service:nest-app,env:simon,os.type:linux,otel_source:datadog_exporter,service:nest-app,env:simon,os.type:linux,otel_source:datadog_exporter,service:nest-app,env:simon,os.type:linux,otel_source:datadog_exporter,service:nest-app,env:simon,os.type:linux,otel_source:datadog_exporter,service:nest-app,env:simon,os.type:linux,otel_source:datadog_exporter,service:nest-app,env:simon,os.type:linux,otel_source:datadog_exporter,service:nest-app,env:simon,os.type:linux,otel_source:datadog_exporter"}
```

Edit: This growth continues until the 414 appears. Do we need to append the tags on each request? Can we just override them, or do we need to retain some original context?
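The duplication in those logs is consistent with tags being appended to a list that outlives a single request instead of being rebuilt per request. A minimal Go sketch of that failure mode and the obvious fix — illustrative only, with a hypothetical `sender` type rather than the exporter's real code:

```go
package main

import (
	"fmt"
	"strings"
)

// sender is a hypothetical stand-in for a per-exporter object that
// survives across requests.
type sender struct{ tags []string }

// sendBuggy accumulates into the long-lived slice, so the joined tag
// string (and hence the ddtags query parameter) grows on every call.
func (s *sender) sendBuggy(reqTags []string) string {
	s.tags = append(s.tags, reqTags...)
	return strings.Join(s.tags, ",")
}

// sendFixed builds the tag list fresh for each request, so the joined
// string stays constant no matter how many requests are sent.
func (s *sender) sendFixed(reqTags []string) string {
	tags := append([]string(nil), reqTags...) // per-request copy
	return strings.Join(tags, ",")
}

func main() {
	s := &sender{}
	req := []string{"service:nest-app", "env:simon"}
	for i := 0; i < 3; i++ {
		fmt.Println(len(s.sendBuggy(req))) // grows each call
	}
	fmt.Println(len(s.sendFixed(req))) // stays at the per-request size
}
```

Under this reading, overriding (rebuilding) the tags per request is enough; nothing from earlier requests needs to be retained.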
The attached PR fixes the issue in my case, but I would appreciate some input from someone more familiar with the project to ensure it doesn't lead to any other side effects.
@sbracegirdle Thanks for debugging and creating the PR. I have added a comment.
@sbracegirdle, can we close this issue?
Yes, I believe it is fixed, @dineshg13.
Component(s)
exporter/datadog
What happened?
Description
The Datadog exporter throws a "414 Request-URI Too Large" error after exporting a few logs.
Here is the error log:

```
2022-11-19T11:22:42.930Z error Failed to send logs {"kind": "exporter", "data_type": "logs", "name": "datadog/logs", "error": "414 Request-URI Too Large", "msg": "\r\n<title>414 Request-URI Too Large</title>\r\n\r\n
```
Steps to Reproduce
Run a collector with the following configuration and
Expected Result
The Datadog exporter continues to export logs to Datadog without failing with a 414 error.
Actual Result
Collector version
0.64.0
Environment information
Environment
OS: (e.g., "Ubuntu 20.04")
Compiler(if manually compiled): (e.g., "go 14.2")
OpenTelemetry Collector configuration
Log output
Additional context
No response