Describe the bug: Occasionally, when using the elasticapm.Client (without a framework), during process shutdown (in the atexit handler), the transport thread will block forever while trying to send data to the APM server and subsequently be killed by the thread manager, after the configured timeout is reached. This causes "Closing the transport connection timed out." to be printed to the command line and the messages remaining in the buffer to be lost.
This seems to be caused by a race condition involving the atexit handler of the elasticapm.Client and the weakref.finalize of urllib3.connectionpool.HTTPConnectionPool (which registers an atexit handler under the hood) that calls _close_pool_connections. A timeline causing this bug looks like this:
1. The process is about to shut down; atexit handlers are called.
2. _close_pool_connections is called while all connections are in the pool. All existing connections are disposed.
3. The elasticapm.Client atexit handler is called, sending the "close" event to the transport thread.
4. The transport thread handles the "close" event, flushing the buffer and trying to send the remaining data to the APM server.
5. urlopen blocks the transport thread forever while waiting to get a connection from the connection pool (the pool manager uses block=True and no pool timeout is configured, so the only way to obtain a connection is for another thread to put one back into the pool).
6. The thread manager kills the thread after the configured timeout is reached, printing the error message and losing all data in the buffer.
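The ordering of steps 2 and 3 follows from atexit's LIFO semantics: the client registers its handler at construction time, while urllib3's pool cleanup is registered later (when pools are created), so it runs first. A simplified sketch of that effect (the real cleanup goes through weakref.finalize, but the last-in-first-out behavior is the same):

```python
import subprocess
import sys

# Simplified model: the Client registers its atexit handler first (in __init__),
# urllib3's pool cleanup is registered later, and atexit runs handlers LIFO,
# so the pool is torn down before the client gets a chance to flush.
script = """\
import atexit
atexit.register(lambda: print("client atexit: flush and close transport"))
atexit.register(lambda: print("pool cleanup: _close_pool_connections"))
"""
out = subprocess.run(
    [sys.executable, "-c", script], capture_output=True, text=True
).stdout
print(out)  # prints the pool cleanup line first, then the client line
```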
The reason why this does not occur consistently is that _close_pool_connections does not clean up connections that are currently in use (e.g. connections being used in another thread). If a request is in progress when _close_pool_connections is called, the associated connection "survives" the cleanup, is put back into the pool afterwards, and can be reused by the transport thread (which may be a bug/unintended behavior in urllib3, since it claims HTTPConnectionPool is thread safe).
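The deadlock in step 5 can be sketched with the plain queue.LifoQueue that backs urllib3's pools. This is a simplified stand-in for HTTPConnectionPool._get_conn, not the real code path; the timeout in the final get is only there so the sketch terminates, whereas the agent's pool uses no timeout and would wait forever:

```python
import queue

# urllib3 backs HTTPConnectionPool with a LifoQueue of connection slots.
pool = queue.LifoQueue(maxsize=2)
pool.put("conn-1")
pool.put("conn-2")

# _close_pool_connections: drain the pool and dispose every connection in it.
while True:
    try:
        pool.get(block=False)  # "close" the pooled connection
    except queue.Empty:
        break

# Transport thread: urlopen -> _get_conn -> pool.get(block=True, timeout=None).
# With no timeout this waits forever, because nothing is ever put back.
try:
    pool.get(block=True, timeout=0.5)  # timeout only so this sketch terminates
except queue.Empty:
    print("transport thread would block forever here")
```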
To Reproduce
The following minimal example reproduces the issue:
```python
import time

import elasticapm

# NOTE: You should be able to remove the "config" argument in your environment
client = elasticapm.Client(
    service_name="<SERVICE>",
    server_url="<APM_SERVER_URL>",
    secret_token="<SECRET_TOKEN>",
    config={
        "SERVER_CA_CERT_FILE": "<INTERNAL_CA_FILE_PATH>",
        "GLOBAL_LABELS": {"Tenant": "<TENANT>"},
    },
)

client.capture_message("Test")

# Give the client time to resolve all internal network requests, ensuring
# that all urllib connections are in the pool when the atexit handlers are called
time.sleep(10)
```
As is the case with race conditions, you might have to fiddle with the sleep timing a little. 10 seconds works quite reliably in my environment, but you may need a few seconds more or less, depending on yours.
Environment
OS: Windows 10
Python version: 3.11.7
package versions: urllib3==2.2.1
APM Server version: 8.11.3
Agent version: elastic-apm==6.20.0
Additional context
A workaround for my use case is to use a custom Transport class which uses a non-blocking pool. I don't know enough about the elastic-apm code base to know whether or not this causes issues in other parts of the package, but it seems to resolve the issue for me without causing any other major issues.
```python
from elasticapm.transport.http import Transport


def get_import_string(cls) -> str:
    module = cls.__module__
    if module == "builtins":
        # avoid outputs like 'builtins.str'
        return cls.__qualname__
    return module + "." + cls.__qualname__


class NonBlockingTransport(Transport):
    def __init__(self, *args, **kwargs) -> None:
        super(NonBlockingTransport, self).__init__(*args, **kwargs)
        self._pool_kwargs["block"] = False


# Use like this:
client = elasticapm.Client(
    ...,
    config={
        ...,
        "TRANSPORT_CLASS": get_import_string(NonBlockingTransport),
    },
)
```
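For context on why block=False helps: with a non-blocking pool, urllib3 treats an empty pool as "open a fresh connection" instead of waiting for one to be returned. A rough model of that branch (the strings are stand-ins for real connection objects, and get_conn is a hypothetical simplification of HTTPConnectionPool._get_conn):

```python
import queue

# Empty pool: every pooled connection was already closed at shutdown.
pool = queue.LifoQueue(maxsize=1)

def get_conn(block: bool):
    # Rough model of fetching a connection from an empty urllib3 pool.
    try:
        return pool.get(block=block, timeout=0.5 if block else None)
    except queue.Empty:
        if block:
            raise  # blocking pool: keeps waiting (forever, without a timeout)
        return "fresh connection"  # non-blocking pool: open a new connection

print(get_conn(block=False))  # → fresh connection
```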
In ES_APM_CONFIGURATION, I have: SERVICE_NAME, SECRET_TOKEN, SERVER_URL, SERVICE_VERSION, ENABLED, ENVIRONMENT
I tried to add the function get_import_string as suggested by @robin-mader-bis, but I got an error:
AttributeError: 'NonBlockingTransport' object has no attribute '_pool_kwargs'
I just received the message Start job do_something and nothing else. I don’t know how to resolve this problem.