Description
Elasticsearch version (bin/elasticsearch --version): 8.2.0
elasticsearch-py version (elasticsearch.__versionstr__): 8.12.0
Python version: 3.9.2
Description of the problem including expected versus actual behavior:
We run an API with an endpoint that makes a call to Elasticsearch. In this endpoint we initialize AsyncElasticsearch, run a search query (might be multiple in the future, but just one for now) and close the connection to Elasticsearch. We noticed that if this endpoint is called frequently, the memory used by the process running the API keeps increasing until the process is killed because it goes OOM.
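Roughly, every request to that endpoint does the following (a simplified sketch of the pattern only, using the same placeholder hosts, index and API key as in the script below; the surrounding web framework is omitted):

from elasticsearch import AsyncElasticsearch

async def handle_request(query_body: dict) -> dict:
    # One short-lived client per incoming request: create, search, close.
    es = AsyncElasticsearch(['https://elk001:9200'], api_key='xxx')
    async with es as client:
        resp = await client.search(index='logs', body=query_body)
    return resp.body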
Steps to reproduce:
I isolated the issue in a relatively simple script:
import asyncio

from elasticsearch import AsyncElasticsearch

SERVERS = [
    'https://elk001:9200',
    'https://elk002:9200',
    'https://elk003:9200',
]
INDEX = 'logs'
API_KEY = 'xxx'


async def leaky():
    while True:
        es = AsyncElasticsearch(SERVERS, api_key=API_KEY)
        async with es as client:
            await client.search(
                index=INDEX,
                body={
                    'from': 0,
                    'size': 0,
                    'query': {
                        'bool': {
                            'must': [],
                            'filter': [],
                            'should': [],
                            'must_not': [],
                        },
                    },
                },
            )
        print('completed a query')


if __name__ == '__main__':
    asyncio.run(leaky())
If you run this, memory usage will quickly (< 1 minute in our setup) increase to about 1 GiB and beyond. If you pull the es = AsyncElasticsearch initialization out of the while True loop, memory still increases, but much more slowly (although, unless I'm missing something, creating the client inside the loop might not be best practice, but it still shouldn't leak that fast).
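The variant with the initialization pulled out of the loop looks roughly like this (same SERVERS, INDEX and API_KEY as above, same query; only the client lifecycle changes):

async def slower_leak():
    # The client is created once and reused for every query.
    es = AsyncElasticsearch(SERVERS, api_key=API_KEY)
    async with es as client:
        while True:
            await client.search(
                index=INDEX,
                body={
                    'from': 0,
                    'size': 0,
                    'query': {'bool': {'must': [], 'filter': [], 'should': [], 'must_not': []}},
                },
            )
            print('completed a query')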
What I didn't test:
I didn't have time to fully analyze this with memory profilers. I'm also not sure whether it's only search queries that are affected, or whether simply initializing AsyncElasticsearch without running any query already causes the leak (or whether any other request leaks); a minimal variant for checking that is sketched below. I didn't test whether the API key or SSL has an effect either. I just wanted an isolated test case to confirm I was still sane. We solved this in the end by switching back to the sync Elasticsearch client, since we won't be executing queries in parallel any time soon, but I still thought I'd report it in case others run into this issue.
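In case it helps whoever picks this up, a minimal variant that only opens and closes clients without sending any request would look something like this (untested on my side, so treat it as a sketch):

import asyncio

from elasticsearch import AsyncElasticsearch

SERVERS = ['https://elk001:9200']
API_KEY = 'xxx'


async def init_only():
    # Open and close a fresh client each iteration, without issuing any
    # request, to check whether client setup/teardown alone leaks.
    while True:
        es = AsyncElasticsearch(SERVERS, api_key=API_KEY)
        await es.close()
        print('opened and closed a client')


if __name__ == '__main__':
    asyncio.run(init_only())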