
fluent-bit stops processing logs after reaching memory buffer limit #8046

robmcelhinney opened this issue Oct 16, 2023 · 5 comments

@robmcelhinney

Bug Report

Describe the bug
We’ve seen many instances of our fluent-bit sidecars becoming stuck once they reach (or get near) their memory buffer limit (we can’t use a filesystem buffer).
The only way to resolve the issue is to restart the container. No logs are emitted from the container either, so we can’t find a root cause.

To Reproduce
No (info level) logs are emitted during the incident

> docker logs -f --tail 100 fluentbit-sidecar
...
[2023/10/12 11:52:40] [ warn] [input] forward.0 paused (mem buf overlimit)
[2023/10/12 11:52:40] [ info] [input] pausing fluent_forwarder_input
[2023/10/12 11:52:40] [ warn] [input] forward.0 paused (mem buf overlimit)
[2023/10/12 11:52:40] [ info] [input] pausing fluent_forwarder_input
[2023/10/12 11:52:41] [ warn] [input] forward.0 paused (mem buf overlimit)
[2023/10/12 11:52:41] [ info] [input] pausing fluent_forwarder_input
[2023/10/12 11:52:41] [ warn] [input] forward.0 paused (mem buf overlimit)
[2023/10/12 11:52:41] [ info] [input] pausing fluent_forwarder_input
[2023/10/12 11:52:41] [ warn] [input] forward.0 paused (mem buf overlimit)
[2023/10/12 11:52:41] [ info] [input] pausing fluent_forwarder_input
[2023/10/12 11:52:41] [ warn] [input] forward.0 paused (mem buf overlimit)
[2023/10/12 11:52:41] [ info] [input] pausing fluent_forwarder_input
[2023/10/12 11:52:43] [ warn] [input] forward.0 paused (mem buf overlimit)
[2023/10/12 11:52:43] [ info] [input] pausing fluent_forwarder_input
[2023/10/12 11:52:43] [ warn] [input] forward.0 paused (mem buf overlimit)
[2023/10/12 11:52:43] [ info] [input] pausing fluent_forwarder_input
[2023/10/12 11:52:43] [ warn] [input] forward.0 paused (mem buf overlimit)
[2023/10/12 11:52:43] [ info] [input] pausing fluent_forwarder_input
[2023/10/12 11:52:43] [ warn] [input] forward.0 paused (mem buf overlimit)
[2023/10/12 11:52:43] [ info] [input] pausing fluent_forwarder_input
[2023/10/12 11:52:43] [ warn] [input] forward.0 paused (mem buf overlimit)
[2023/10/12 11:52:43] [ info] [input] pausing fluent_forwarder_input
[2023/10/12 11:52:44] [ warn] [input] forward.0 paused (mem buf overlimit)
[2023/10/12 11:52:44] [ info] [input] pausing fluent_forwarder_input
[2023/10/12 11:52:45] [ warn] [input] forward.0 paused (mem buf overlimit)
[2023/10/12 11:52:45] [ info] [input] pausing fluent_forwarder_input
[2023/10/12 11:52:45] [ warn] [input] forward.0 paused (mem buf overlimit)
[2023/10/12 11:52:45] [ info] [input] pausing fluent_forwarder_input
[2023/10/12 11:52:45] [ warn] [input] forward.0 paused (mem buf overlimit)
[2023/10/12 11:52:45] [ info] [input] pausing fluent_forwarder_input
[2023/10/12 11:52:45] [ warn] [input] forward.0 paused (mem buf overlimit)
[2023/10/12 11:52:45] [ info] [input] pausing fluent_forwarder_input
[2023/10/12 11:52:45] [ warn] [input] forward.0 paused (mem buf overlimit)
[2023/10/12 11:52:45] [ info] [input] pausing fluent_forwarder_input
[2023/10/12 11:52:45] [ warn] [input] forward.0 paused (mem buf overlimit)
[2023/10/12 11:52:45] [ info] [input] pausing fluent_forwarder_input
[2023/10/12 11:52:45] [ warn] [input] forward.0 paused (mem buf overlimit)
[2023/10/12 11:52:45] [ info] [input] pausing fluent_forwarder_input
[2023/10/12 11:52:46] [ warn] [input] forward.0 paused (mem buf overlimit)
[2023/10/12 11:52:46] [ info] [input] pausing fluent_forwarder_input

> date
Thu Oct 12 13:00:48 IST 2023
  • Steps to reproduce the problem (a minimal config sketch follows the list):
  1. Use the memory buffer and a forward input.
  2. Fill up the memory buffer from forwarded logs.
  3. Continue to send logs.
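
A minimal sketch of a config along these lines (the port, limit, and endpoint host are illustrative placeholders, not our production values):

[SERVICE]
    flush     1
    log_level info

[INPUT]
    Name          forward
    Listen        0.0.0.0
    Port          24224
    # small limit so the buffer fills quickly
    Mem_Buf_Limit 10M
    storage.type  memory

[OUTPUT]
    Name   http
    Match  *
    # placeholder; any slow or unreachable endpoint will do
    Host   unreachable-endpoint.example.com
    Port   443
    Tls    On
    Format json_stream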

Expected behavior
New logs are not accepted until the backlog is cleared; once it clears, the input resumes and new logs are accepted again.

Screenshots
(Three screenshots, from a different host than the logs above, are attached to the original issue.)

Your Environment

  • Version used: 2.1.6
  • Environment name and version (e.g. Kubernetes? What version?): Docker
  • Operating System and version: CentOS 7

I've posted this in Slack and received a comment about using a ring buffer. Do you have any information about that?

Configuration files:

> sudo cat /etc/fluent-bit/fluentbit.conf
[SERVICE]
    Parsers_File              /fluent-bit/etc/parsers/parsers.conf
    flush                     1
    log_Level                 info
    # Log Sync to Disk
    storage.path              /var/log/flb-storage/
    storage.sync              normal
    storage.checksum          Off
    storage.max_chunks_up     128
    storage.backlog.mem_limit 5M
    storage.metrics           On
    # Enable Metrics
    HTTP_Server  On
    HTTP_Listen  0.0.0.0
    HTTP_PORT    2020
    # Enable Healthcheck and Options
    Health_Check On
    HC_Errors_Count  5
    HC_Retry_Failure_Count  5
    HC_Period  5
    # Other Configuration Options
    coro_stack_size 24576
    scheduler.cap   2000
    scheduler.base  5

# Setup Forwarded logs from Docker etc.
[INPUT]
    Name                              forward
    Alias                             fluent_forwarder_input
    Listen                            0.0.0.0
    Port                              24225
    Buffer_Max_Size                   6144000
    Buffer_Chunk_Size                 1024000
    Tag_Prefix                        __forwarded_log__.
    Mem_Buf_Limit                     150M
    storage.type                      memory
    storage.pause_on_chunks_overlimit off

# Include any files in (Default: /etc/fluent-bit/conf:/fluent-bit/etc/conf/*)
@INCLUDE conf/*
> sudo cat /etc/fluent-bit/conf/fluentbit_config_HOST.conf
# Nomad Configuration
#
[FILTER]
    Name          rewrite_tag
    Match_Regex   ^((?!__parse_).)*$
    Rule          $sourcetype ^(company_name_json)$ __parse_json__.$TAG false
    Rule          $NOMAD_META_QAMEL__SOURCE_TYPE ^(company_name_json)$ __parse_json__.$TAG false
    Rule          $sourcetype ^(company_name_access_log)$ __parse_access_log__.$TAG false
    Rule          $NOMAD_META_QAMEL__SOURCE_TYPE ^(company_name_access_log)$ __parse_access_log__.$TAG false
    Rule          $sourcetype ^(company_name_access_log_json)$ __parse_access_log__.__parse_json__.$TAG false
    Rule          $NOMAD_META_QAMEL__SOURCE_TYPE ^(company_name_access_log_json)$ __parse_access_log__.__parse_json__.$TAG false
    Rule          $sourcetype ^(company_name_log)$ __parse_company_name_log__.$TAG false
    Rule          $NOMAD_META_QAMEL__SOURCE_TYPE ^(company_name_log)$ __parse_company_name_log__.$TAG false
    Emitter_Name  -re_emitted-PROVIDER

[FILTER]
    Name                  multiline
    match                 *
    multiline.key_content log
    mode                  partial_message

# ensure fields are not nested into event from parsing
[FILTER]
    Name        modify
    Match_Regex  .*__parse_.*
    Alias       nomad_rename_original_fields_filter

# Adding Parser Filter '0' for `forward` logs with the tag ''
[FILTER]
    Name         parser
    Match        *__forwarded_log__.*
    Key_Name     log
    Preserve_Key true
    Reserve_Data true
    Parser       time_company_name_log
    Parser       time_company_name_log_space
    Parser       time_company_name_timezone

# Adding Parser Filter '1' for `forward` logs with the tag ''
[FILTER]
    Name         parser
    Match_Regex  .*__parse_company_name_log__..*
    Key_Name     log
    Preserve_Key false
    Reserve_Data true
    Parser       company_name_log
    Parser       docker

# Adding Parser Filter '2' for `forward` logs with the tag ''
[FILTER]
    Name         parser
    Match_Regex  .*__parse_access_log__..*
    Key_Name     log
    Preserve_Key false
    Reserve_Data true
    Parser       company_name_access_log

# Adding Parser Filter '3' for `forward` logs with the tag ''
[FILTER]
    Name         parser
    Match_Regex  .*__parse_json__..*
    Key_Name     log
    Preserve_Key false
    Reserve_Data true
    Parser       docker

# rename parsed top_level fields so they are put under event field
[FILTER]
    Name        modify
    Match_Regex  .*__parse_.*
    Alias       nomad_rename_parsed_fields_filter

# move original fields back to top level
[FILTER]
    Name        modify
    Match_Regex  .*__parse_.*
    Alias       nomad_rename_original_fields_back_filter

[FILTER]
    Name        modify
    Match       *__forwarded_log__.*
    Alias       nomad_modify_filter
    Rename      container_name fields_container_name
    # Remove Useless Metadata
    Remove      container_id
    # Rename Docker fields for Fields Additional Metadata
    SET         fields_dc DC
    SET         fields_role ROLE_NAME
    SET         host HOSTNAME
    #p:additional_log_metadata
    Rename      NOMAD_ALLOC_ID fields_allocation_id
    Rename      NOMAD_GROUP_NAME fields_group_name
    Rename      NOMAD_META_QAMEL__BILLING_TEAM fields_billing_team
    Rename      NOMAD_JOB_NAME fields_job_name
    Rename      NOMAD_TASK_NAME fields_task_name
    Rename      NOMAD_META_QAMEL__INDEX index
    Rename      NOMAD_META_QAMEL__SOURCE_TYPE sourcetype
    Rename      log event_log
    Rename      message event_message
    Rename      timestamp time

[FILTER]
    Name        modify
    Match       *__forwarded_log__.*
    Alias       nomad_modify_bad_sourcetype_filter
    CONDITION   Key_value_does_not_match sourcetype ^(company_name_(json|text|log|access_log|access_log_json))$
    SET         sourcetype company_name_text

[FILTER]
    Name        modify
    Match       *__forwarded_log__.*
    Alias       nomad_modify_missing_sourcetype_filter
    CONDITION   Key_does_not_exist sourcetype
    SET         sourcetype company_name_text
[FILTER]
    Name         parser
    Match        *__forwarded_log__.*
    Alias        nomad_parse_job_name
    Key_Name     fields_job_name
    Parser       nomad_job_name
    Reserve_Data True
    Preserve_Key True

[FILTER]
    Name         modify
    Match        *__forwarded_log__.*
    Alias        nomad_rewrite_job_name
    Hard_Rename  nomad_job_name fields_job_name

[FILTER]
    Name         modify
    Match        *__forwarded_log__.*
    Alias        nomad_rewrite_source
    Hard_Copy    fields_job_name source

# Nest fields_ under { fields }
[FILTER]
    Name          nest
    Match         *__forwarded_log__.*
    Alias         nomad_nest_fields_filter
    Operation     nest
    Wildcard      fields_*
    Nest_under    fields
    Remove_prefix fields_

# Nest event_ under { event }
[FILTER]
    Name          nest
    Match         *__forwarded_log__.*
    Alias         nomad_nest_event_filter
    Operation     nest
    Wildcard      event_*
    Nest_under    event
    Remove_prefix event_

[FILTER]
    Name         Lua
    Match_Regex  .*__parse_.*
    Script       scripts/scripts.lua
    Call         cb_transform_for_PROVIDER

[FILTER]
    Name          record_modifier
    Match         *__forwarded_log__.*
    Allowlist_key source
    Allowlist_key sourcetype
    Allowlist_key index
    Allowlist_key host
    Allowlist_key event
    Allowlist_key fields

[OUTPUT]
    Name          http
    Match         *__forwarded_log__.*
    Alias         nomad_http_output
    Host          example-host.com
    Port          443
    Tls           On
    Tls.verify    On
    Uri           /services/collector/event
    Header        Authorization HEADER TOKEN
    Format        json_stream
    Json_date_key time
    Workers       2
> sudo cat /etc/fluent-bit/scripts/scripts.lua
-- This file is managed by Puppet
-- Local changes will be overwritten

-- Fluentbit Return Codes
CODE_DROP = -1
CODE_MODIFICATION_NONE = 0
CODE_MODIFICATION_ALL = 1
CODE_MODIFICATION_RECORD = 2

TOP_LEVEL_KEYS_EXCL_EVENTS = {'time', 'index', 'source', 'sourcetype', 'host', 'fields'}
KEYS_EVENT_TO_FIELDS = {'filename'}

---@diagnostic disable-next-line: lowercase-global, unused-local
-- TL;DR: Ensures JSON parsing keys are not placed into the record root, as PROVIDER will reject non-standard keys e.g. event, source
-- 1. Store the current event for later removing it from the record
-- 2. Move the whole record into the event field
-- 3. Move the known record-level fields to the record top level e.g. source, index, sourcetype
-- 4. Merge the stored event fields into the event field
-- 5. Move any predefined event metadata into the fields field
-- 6. Return event
function cb_transform_for_PROVIDER(tag, timestamp, record)
    local original_record_event = record["event"]
    record["event"] = nil

    local new_record = {}
    new_record["event"] = record
    -- move top-level fields back to top
    for index, value in ipairs(TOP_LEVEL_KEYS_EXCL_EVENTS) do
        new_record[value] = record[value]
        new_record["event"][value] = nil
    end


    if original_record_event ~= nil then
        for key, value in pairs(original_record_event) do
            new_record["event"][key] = value
        end
    end

    -- move metadata fields to "fields"
    for _, value in ipairs(KEYS_EVENT_TO_FIELDS) do
        new_record["fields"][value] = new_record["event"][value]
        new_record["event"][value] = nil
    end
    return CODE_MODIFICATION_RECORD, timestamp, new_record
end
> sudo cat /etc/fluent-bit/parsers/parsers.conf
[PARSER]
  Name   json
  Format json
  Time_Key time
  Time_Format %d/%b/%Y:%H:%M:%S %z

[PARSER]
  Name         docker
  Format       json
  Time_Key     time
  Time_Format  %Y-%m-%dT%H:%M:%S.%L
  Time_Keep    On

[PARSER]
  Name        nomad_job_name
  Format      regex
  Regex       /^(QAMEL:[^:]+:)?(?<nomad_job_name>[^\/]+)(\/.+)?$/

[PARSER]
  Name   company_name_log
  Format regex
  Regex ^(?<timestamp>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{1,9})Z? (?<log>.+)$
  Time_Format %Y-%m-%dT%H:%M:%S.%L
  Time_Key    timestamp

[PARSER]
  Name   company_name_access_log
  Format regex
  Regex ^(?<timestamp>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{1,9})Z? \[(?<log_type>\w+)\] (?<log>.+)$
  Time_Format %Y-%m-%dT%H:%M:%S.%L
  Time_Key    timestamp

[PARSER]
  Name   time_company_name_timezone
  Format regex
  Regex ^(?<timestamp>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{1,9}(Z|[+-]\d{4}))
  Time_Key    timestamp

[PARSER]
  Name   time_company_name_log
  Format regex
  Regex ^(?<timestamp>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{1,9})Z?
  Time_Key    timestamp

[PARSER]
  Name   time_company_name_log_space
  Format regex
  Regex ^(?<timestamp>\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}\.\d{1,9})Z?
  Time_Format %Y-%m-%d %H:%M:%S.%L
  Time_Key    timestamp

# No Additional Parsers Provided
@RicardoAAD
Collaborator

Hello @robmcelhinney,

I will try reproducing this behavior, but please confirm my assumptions.

Is Fluent-Bit running as a docker container, and docker is using the FluentD logging driver to send all container logs to this Fluent-Bit container, and from here you are using the HTTP output plugin to send these logs to an endpoint?

If this is the case, please let me know what this endpoint is.
Do you have an estimate of the load that this Fluent Bit instance is receiving?
Do you have any other logs before these messages appeared (error reaching the endpoint)?

[2023/10/12 11:52:41] [ info] [input] pausing fluent_forwarder_input
[2023/10/12 11:52:41] [ warn] [input] forward.0 paused (mem buf overlimit)

As per your description, this behavior is expected when the 150M limit is reached.

From: https://docs.fluentbit.io/manual/administration/backpressure#mem_buf_limit

  • block local buffers for the input plugin (cannot append more data)
  • notify the input plugin invoking a pause callback
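
For reference, the documented way to avoid the pause entirely is to buffer overflow chunks to the filesystem instead of memory (not an option in your case, as you noted, but a minimal sketch with illustrative values follows):

[INPUT]
    Name          forward
    Listen        0.0.0.0
    Port          24225
    # when the memory limit is reached, new chunks go to storage.path instead of pausing the input
    # (requires storage.path in the [SERVICE] section, which your config already sets)
    storage.type  filesystem

[OUTPUT]
    Name                     http
    Match                    *
    # cap on the filesystem chunks queued for this output
    storage.total_limit_size 500M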

RicardoAAD added the waiting-for-user label on Oct 19, 2023.
@robmcelhinney
Author

Is Fluent-Bit running as a docker container, and docker is using the FluentD logging driver to send all container logs to this Fluent-Bit container, and from here you are using the HTTP output plugin to send these logs to an endpoint?

Yes, fluent-bit is running as a docker container. We get a lot of our logs from other docker containers using the fluentd logging driver.
We also receive logs from other fluent-bit & fluentd docker containers that are tailing their own files and adding metadata before forwarding them to our fluent-bit container that is having issues above.
We sometimes use the tail input directly in this container too but that only ingests a minuscule amount of logs.

Yes, we use the HTTP output plugin for all forward input logs and also use a Splunk Output plugin for those few tailed logs.

If this is the case, please let me know what this endpoint is.

We use a SplunkCloud HTTPEvent Collector: https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/UsetheHTTPEventCollector

Do you have an estimate of the load that this Fluent Bit instance is receiving?

We tend to start seeing issues once we are ingesting over ~8GB an hour. It is usually fine below that rate, and can sometimes handle that amount too.
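(For scale, 8GB an hour is roughly 2.3MB/s, so with the output stalled a 150M Mem_Buf_Limit would fill in about a minute.)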

Do you have any other logs before these messages appeared (error reaching the endpoint)?

I don't have any right now; I'll add them to this ticket when it reoccurs.

As per your description, this behavior is expected when the 150M limit is reached.
From: https://docs.fluentbit.io/manual/administration/backpressure#mem_buf_limit

We expect the block to happen at that stage; we're just not sure why it often fails to resolve itself even when the ingest rate drops to 0.

@RicardoAAD
Collaborator

Hello @robmcelhinney

Based on your configuration, I tested with a reduced Fluent Bit config file and I can see the same behavior in the logs (mem buf overlimit pausing the input), but all of the logs still arrive in Splunk.

The Splunk output plugin doesn't log the HTTP 200 response code it receives when records are accepted at the endpoint; this is why nothing else is displayed in the logs when log_level is info.

Fluent Bit v2.1.6
* Copyright (C) 2015-2022 The Fluent Bit Authors
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2023/10/25 12:31:34] [ info] [fluent bit] version=2.1.6, commit=baef212885, pid=1
[2023/10/25 12:31:34] [ info] [storage] created root path /var/log/flb-storage/
[2023/10/25 12:31:34] [ info] [storage] ver=1.4.0, type=memory+filesystem, sync=normal, checksum=off, max_chunks_up=128
[2023/10/25 12:31:34] [ info] [storage] backlog input plugin: storage_backlog.1
[2023/10/25 12:31:34] [ info] [cmetrics] version=0.6.2
[2023/10/25 12:31:34] [ info] [ctraces ] version=0.3.1
[2023/10/25 12:31:34] [ info] [input:forward:fluent_forwarder_input] initializing
[2023/10/25 12:31:34] [ info] [input:forward:fluent_forwarder_input] storage_strategy='memory' (memory only)
[2023/10/25 12:31:34] [ info] [input:forward:fluent_forwarder_input] listening on 0.0.0.0:24224
[2023/10/25 12:31:34] [ info] [input:storage_backlog:storage_backlog.1] initializing
[2023/10/25 12:31:34] [ info] [input:storage_backlog:storage_backlog.1] storage_strategy='memory' (memory only)
[2023/10/25 12:31:34] [ info] [input:storage_backlog:storage_backlog.1] queue memory limit: 4.8M
[2023/10/25 12:31:34] [ info] [output:splunk:splunk.0] worker #0 started
[2023/10/25 12:31:34] [ info] [output:splunk:splunk.0] worker #1 started
[2023/10/25 12:31:34] [ info] [http_server] listen iface=0.0.0.0 tcp_port=2020
[2023/10/25 12:31:34] [ info] [sp] stream processor started
[2023/10/25 12:31:35] [ warn] [input] forward.0 paused (mem buf overlimit)
[2023/10/25 12:31:35] [ info] [input] pausing fluent_forwarder_input
[2023/10/25 12:31:36] [ warn] [input] forward.0 paused (mem buf overlimit)

If you set log_level to debug instead of info, you will see that data is flowing from Fluent Bit to Splunk; the log messages below confirm it.
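
For example, in the [SERVICE] section (a minimal sketch, everything else unchanged):

[SERVICE]
    # was: log_Level info
    log_level debug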

As you can see in the snippet below, chunks keep getting updated with data, the input is then paused, and afterwards it resumes again.

I will engage our dev team to get more details about the forward plugin, because we should see a message in the logs indicating that the input plugin has resumed once the mem_buf_limit pause condition has cleared.

[2023/10/25 12:36:27] [debug] [input chunk] update output instances with new chunk size diff=499, records=1, input=fluent_forwarder_input
[2023/10/25 12:36:27] [debug] [input chunk] update output instances with new chunk size diff=499, records=1, input=fluent_forwarder_input
[2023/10/25 12:36:27] [debug] [input chunk] update output instances with new chunk size diff=499, records=1, input=fluent_forwarder_input
[2023/10/25 12:36:27] [ warn] [input] forward.0 paused (mem buf overlimit)
[2023/10/25 12:36:27] [ info] [input] pausing fluent_forwarder_input
[2023/10/25 12:36:27] [debug] [input chunk] forward.0 is paused, cannot append records
[2023/10/25 12:36:27] [debug] [input chunk] forward.0 is paused, cannot append records
[2023/10/25 12:36:27] [debug] [input chunk] forward.0 is paused, cannot append records
[2023/10/25 12:36:27] [debug] [input chunk] forward.0 is paused, cannot append records
[2023/10/25 12:36:27] [debug] [input chunk] forward.0 is paused, cannot append records
[2023/10/25 12:36:27] [debug] [input chunk] forward.0 is paused, cannot append records
[2023/10/25 12:36:27] [debug] [input chunk] forward.0 is paused, cannot append records
[2023/10/25 12:36:27] [debug] [input chunk] forward.0 is paused, cannot append records
[2023/10/25 12:36:27] [debug] [input chunk] forward.0 is paused, cannot append records
[2023/10/25 12:36:27] [debug] [input chunk] forward.0 is paused, cannot append records
[2023/10/25 12:36:27] [debug] [task] destroy task=0x7fce56eda3f0 (task_id=7)
[2023/10/25 12:36:27] [debug] [input chunk] update output instances with new chunk size diff=234, records=1, input=fluent_forwarder_input
[2023/10/25 12:36:27] [debug] [input chunk] update output instances with new chunk size diff=234, records=1, input=fluent_forwarder_input
[2023/10/25 12:36:27] [debug] [input chunk] update output instances with new chunk size diff=234, records=1, input=fluent_forwarder_input
[2023/10/25 12:36:27] [debug] [input chunk] update output instances with new chunk size diff=234, records=1, input=fluent_forwarder_input
[2023/10/25 12:36:27] [debug] [input chunk] update output instances with new chunk size diff=234, records=1, input=fluent_forwarder_input
[2023/10/25 12:36:27] [debug] [input chunk] update output instances with new chunk size diff=234, records=1, input=fluent_forwarder_input

Could you please provide a simplified version of your configuration that reproduces the issue? As I mentioned, I see the same messages in the logs that you are seeing, but the logs are reaching the endpoint with both the Splunk and HTTP output plugins.

Thanks,

@lecaros
Contributor

lecaros commented Nov 23, 2023

The forward plugin doesn't implement the resume callback; hence there is no log line saying it resumed.
@robmcelhinney, if you have a repro demonstrating the issue, we're happy to review this.

lecaros closed this as completed on Nov 23, 2023.
@robmcelhinney
Author

I believe this was caused by the rewrite_tag deadlock noted in #4940 (comment).

The new version 3.0.2 seems to have fixed it after including #8473.
