Manticore crashes on frequent index updates #1458

Closed
Eclipsium opened this issue Sep 22, 2023 · 19 comments

@Eclipsium
Eclipsium commented Sep 22, 2023

To Reproduce
Steps to reproduce the behavior:

  1. Start bulk-updating 1000 rows at a time in two threads.
  2. Start searching in two threads.
  3. Wait for a crash (locally it took about 15-20 minutes).

You can run the script in the repo where I reproduce this logic (a minimal sketch of the load pattern is shown below):
https://github.com/Eclipsium/manticore_crash
docker compose up
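
Roughly, the load pattern is the following. This is a simplified sketch, not the exact main.py from the repo: the port mapping, table name, document fields and ids are placeholders; only the overall shape (two concurrent bulk writers plus two concurrent searchers hitting the Manticore HTTP API) mirrors the steps above.

import asyncio
import json
import random

import httpx

BASE_URL = "http://127.0.0.1:9308"  # Manticore HTTP endpoint (placeholder port mapping)
TABLE = "posts_idx"                 # RT table, assumed to exist already


async def bulk_writer(client: httpx.AsyncClient) -> None:
    """Endlessly replace 1000 documents per request via the /bulk NDJSON endpoint."""
    while True:
        lines = [
            json.dumps({"replace": {"index": TABLE,
                                    "id": random.randint(1, 2_000_000),
                                    "doc": {"content": "some text", "posted": 1}}})
            for _ in range(1000)
        ]
        await client.post(f"{BASE_URL}/bulk",
                          content="\n".join(lines) + "\n",
                          headers={"Content-Type": "application/x-ndjson"})


async def searcher(client: httpx.AsyncClient) -> None:
    """Endlessly run full-text searches with highlighting via /search."""
    while True:
        await client.post(f"{BASE_URL}/search", json={
            "index": TABLE,
            "query": {"query_string": "some words"},
            "limit": 10000,
            "options": {"max_matches": 50000},
            "highlight": {"limit": 50000},
        })


async def main() -> None:
    async with httpx.AsyncClient(timeout=60) as client:
        async with asyncio.TaskGroup() as tg:      # Python 3.11+
            for _ in range(2):                     # two writers + two searchers
                tg.create_task(bulk_writer(client))
                tg.create_task(searcher(client))


asyncio.run(main())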

Expected behavior
Stable operation, or a warning about what we are doing wrong.

Describe the environment:

  • Managed Yandex Kubernetes (k8s) v1.25

Messages from log files:
link to repo

@PavelShilin89
Contributor

I was not able to reproduce this issue. After 15-20 minutes your sender crashes, while Manticore continues to work without crashing.

Logs:

manticore_crash-requests_sender-1  | 2023-09-27 12:18:13.159 | INFO     | __main__:insert_to_manticore:70 - {'items': [{'bulk': {'_index': 'posts_idx', '_id': 1627844933, 'created': 1000, 'deleted': 3, 'updated': 0, 'result': 'updated', 'status': 200}}], 'current_line': 1000, 'skipped_lines': 0, 'errors': False, 'error': ''}
manticore_crash-requests_sender-1  | 2023-09-27 12:18:13.164 | INFO     | __main__:insert_to_manticore:70 - {'items': [{'bulk': {'_index': 'posts_idx', '_id': 1708324863, 'created': 1000, 'deleted': 2, 'updated': 0, 'result': 'updated', 'status': 200}}], 'current_line': 1000, 'skipped_lines': 0, 'errors': False, 'error': ''}
manticore_crash-requests_sender-1  | 2023-09-27 12:18:15.929 | INFO     | __main__:insert_to_manticore:70 - {'items': [{'bulk': {'_index': 'posts_idx', '_id': 1595884512, 'created': 1000, 'deleted': 2, 'updated': 0, 'result': 'updated', 'status': 200}}], 'current_line': 1000, 'skipped_lines': 0, 'errors': False, 'error': ''}
manticore_crash-requests_sender-1  | 2023-09-27 12:18:15.947 | INFO     | __main__:insert_to_manticore:70 - {'items': [{'bulk': {'_index': 'posts_idx', '_id': 60464415, 'created': 1000, 'deleted': 3, 'updated': 0, 'result': 'updated', 'status': 200}}], 'current_line': 1000, 'skipped_lines': 0, 'errors': False, 'error': ''}
manticore                          | WARNING: timed out while performing SyncSend to flush network buffers, sock=156
manticore_crash-requests_sender-1  |   + Exception Group Traceback (most recent call last):
manticore_crash-requests_sender-1  |   |   File "/app/main.py", line 146, in <module>
manticore_crash-requests_sender-1  |   |     asyncio.run(main())
manticore_crash-requests_sender-1  |   |   File "/usr/local/lib/python3.11/asyncio/runners.py", line 190, in run
manticore_crash-requests_sender-1  |   |     return runner.run(main)
manticore_crash-requests_sender-1  |   |   File "/usr/local/lib/python3.11/asyncio/runners.py", line 118, in run
manticore_crash-requests_sender-1  |   |     return self._loop.run_until_complete(task)
manticore_crash-requests_sender-1  |   |   File "/usr/local/lib/python3.11/asyncio/base_events.py", line 653, in run_until_complete
manticore_crash-requests_sender-1  |   |     return future.result()
manticore_crash-requests_sender-1  |   |   File "/app/main.py", line 139, in main
manticore_crash-requests_sender-1  |   |     async with asyncio.TaskGroup() as tg:
manticore_crash-requests_sender-1  |   |   File "/usr/local/lib/python3.11/asyncio/taskgroups.py", line 147, in __aexit__
manticore_crash-requests_sender-1  |   |     raise me from None
manticore_crash-requests_sender-1  |   | ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
manticore_crash-requests_sender-1  |   +-+---------------- 1 ----------------
manticore_crash-requests_sender-1  |     | Traceback (most recent call last):
manticore_crash-requests_sender-1  |     |   File "/opt/pysetup/.venv/lib/python3.11/site-packages/httpcore/_exceptions.py", line 10, in map_exceptions
manticore_crash-requests_sender-1  |     |     yield
manticore_crash-requests_sender-1  |     |   File "/opt/pysetup/.venv/lib/python3.11/site-packages/httpcore/_async/http11.py", line 209, in _receive_event
manticore_crash-requests_sender-1  |     |     event = self._h11_state.next_event()
manticore_crash-requests_sender-1  |     |   File "/opt/pysetup/.venv/lib/python3.11/site-packages/h11/_connection.py", line 469, in next_event
manticore_crash-requests_sender-1  |     |     event = self._extract_next_receive_event()
manticore_crash-requests_sender-1  |     |   File "/opt/pysetup/.venv/lib/python3.11/site-packages/h11/_connection.py", line 419, in _extract_next_receive_event
manticore_crash-requests_sender-1  |     |     event = self._reader.read_eof()  # type: ignore[attr-defined]
manticore_crash-requests_sender-1  |     |   File "/opt/pysetup/.venv/lib/python3.11/site-packages/h11/_readers.py", line 137, in read_eof
manticore_crash-requests_sender-1  |     |     raise RemoteProtocolError(
manticore_crash-requests_sender-1  |     | h11._util.RemoteProtocolError: peer closed connection without sending complete message body (received 45936002 bytes, expected 64103368)
manticore_crash-requests_sender-1  |     |
manticore_crash-requests_sender-1  |     | The above exception was the direct cause of the following exception:
manticore_crash-requests_sender-1  |     |
manticore_crash-requests_sender-1  |     | Traceback (most recent call last):
manticore_crash-requests_sender-1  |     |   File "/opt/pysetup/.venv/lib/python3.11/site-packages/httpx/_transports/default.py", line 66, in map_httpcore_exceptions
manticore_crash-requests_sender-1  |     |     yield
manticore_crash-requests_sender-1  |     |   File "/opt/pysetup/.venv/lib/python3.11/site-packages/httpx/_transports/default.py", line 249, in __aiter__
manticore_crash-requests_sender-1  |     |     async for part in self._httpcore_stream:
manticore_crash-requests_sender-1  |     |   File "/opt/pysetup/.venv/lib/python3.11/site-packages/httpcore/_async/connection_pool.py", line 347, in __aiter__
manticore_crash-requests_sender-1  |     |     async for part in self._stream:
manticore_crash-requests_sender-1  |     |   File "/opt/pysetup/.venv/lib/python3.11/site-packages/httpcore/_async/http11.py", line 337, in __aiter__
manticore_crash-requests_sender-1  |     |     raise exc
manticore_crash-requests_sender-1  |     |   File "/opt/pysetup/.venv/lib/python3.11/site-packages/httpcore/_async/http11.py", line 329, in __aiter__
manticore_crash-requests_sender-1  |     |     async for chunk in self._connection._receive_response_body(**kwargs):
manticore_crash-requests_sender-1  |     |   File "/opt/pysetup/.venv/lib/python3.11/site-packages/httpcore/_async/http11.py", line 198, in _receive_response_body
manticore_crash-requests_sender-1  |     |     event = await self._receive_event(timeout=timeout)
manticore_crash-requests_sender-1  |     |   File "/opt/pysetup/.venv/lib/python3.11/site-packages/httpcore/_async/http11.py", line 208, in _receive_event
manticore_crash-requests_sender-1  |     |     with map_exceptions({h11.RemoteProtocolError: RemoteProtocolError}):
manticore_crash-requests_sender-1  |     |   File "/usr/local/lib/python3.11/contextlib.py", line 155, in __exit__
manticore_crash-requests_sender-1  |     |     self.gen.throw(typ, value, traceback)
manticore_crash-requests_sender-1  |     |   File "/opt/pysetup/.venv/lib/python3.11/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
manticore_crash-requests_sender-1  |     |     raise to_exc(exc) from exc
manticore_crash-requests_sender-1  |     | httpcore.RemoteProtocolError: peer closed connection without sending complete message body (received 45936002 bytes, expected 64103368)
manticore_crash-requests_sender-1  |     |
manticore_crash-requests_sender-1  |     | The above exception was the direct cause of the following exception:
manticore_crash-requests_sender-1  |     |
manticore_crash-requests_sender-1  |     | Traceback (most recent call last):
manticore_crash-requests_sender-1  |     |   File "/app/main.py", line 112, in make_search_request
manticore_crash-requests_sender-1  |     |     response = await client.post(
manticore_crash-requests_sender-1  |     |   File "/opt/pysetup/.venv/lib/python3.11/site-packages/httpx/_client.py", line 1848, in post
manticore_crash-requests_sender-1  |     |     return await self.request(
manticore_crash-requests_sender-1  |     |   File "/opt/pysetup/.venv/lib/python3.11/site-packages/httpx/_client.py", line 1530, in request
manticore_crash-requests_sender-1  |     |     return await self.send(request, auth=auth, follow_redirects=follow_redirects)
manticore_crash-requests_sender-1  |     |   File "/opt/pysetup/.venv/lib/python3.11/site-packages/httpx/_client.py", line 1631, in send
manticore_crash-requests_sender-1  |     |     raise exc
manticore_crash-requests_sender-1  |     |   File "/opt/pysetup/.venv/lib/python3.11/site-packages/httpx/_client.py", line 1625, in send
manticore_crash-requests_sender-1  |     |     await response.aread()
manticore_crash-requests_sender-1  |     |   File "/opt/pysetup/.venv/lib/python3.11/site-packages/httpx/_models.py", line 909, in aread
manticore_crash-requests_sender-1  |     |     self._content = b"".join([part async for part in self.aiter_bytes()])
manticore_crash-requests_sender-1  |     |   File "/opt/pysetup/.venv/lib/python3.11/site-packages/httpx/_models.py", line 909, in <listcomp>
manticore_crash-requests_sender-1  |     |     self._content = b"".join([part async for part in self.aiter_bytes()])
manticore_crash-requests_sender-1  |     |   File "/opt/pysetup/.venv/lib/python3.11/site-packages/httpx/_models.py", line 927, in aiter_bytes
manticore_crash-requests_sender-1  |     |     async for raw_bytes in self.aiter_raw():
manticore_crash-requests_sender-1  |     |   File "/opt/pysetup/.venv/lib/python3.11/site-packages/httpx/_models.py", line 985, in aiter_raw
manticore_crash-requests_sender-1  |     |     async for raw_stream_bytes in self.stream:
manticore_crash-requests_sender-1  |     |   File "/opt/pysetup/.venv/lib/python3.11/site-packages/httpx/_client.py", line 146, in __aiter__
manticore_crash-requests_sender-1  |     |     async for chunk in self._stream:
manticore_crash-requests_sender-1  |     |   File "/opt/pysetup/.venv/lib/python3.11/site-packages/httpx/_transports/default.py", line 248, in __aiter__
manticore_crash-requests_sender-1  |     |     with map_httpcore_exceptions():
manticore_crash-requests_sender-1  |     |   File "/usr/local/lib/python3.11/contextlib.py", line 155, in __exit__
manticore_crash-requests_sender-1  |     |     self.gen.throw(typ, value, traceback)
manticore_crash-requests_sender-1  |     |   File "/opt/pysetup/.venv/lib/python3.11/site-packages/httpx/_transports/default.py", line 83, in map_httpcore_exceptions
manticore_crash-requests_sender-1  |     |     raise mapped_exc(message) from exc
manticore_crash-requests_sender-1  |     | httpx.RemoteProtocolError: peer closed connection without sending complete message body (received 45936002 bytes, expected 64103368)
manticore_crash-requests_sender-1  |     +------------------------------------
manticore                          | rt: table posts_idx: diskchunk 724(13), segments 19  saved in 1.701823 (2.325859) sec, RAM saved/new 132079181/0 ratio 0.950000 (soft limit 127506841, conf limit 134217728)
manticore_crash-requests_sender-1 exited with code 1
manticore                          | /* Wed Sep 27 12:18:22.361 2023 conn 5454 real 33.684 wall 33.684 found 1987789 */  /*{"index": "posts_idx", "query": {"bool": {"must": [{"range": {"uploaded_at": {"gte": 1695730290}}}, {"equals": {"is_blogger": 0}}, {"in": {"any(source_id)": [0, 1, 2, 3, 4]}}, {"bool": {"should": [{"query_string": "\"blanditiis\""}, {"query_string": "\"expedita\""}, {"query_string": "\"nam\""}, {"query_string": "\"magni\""}, {"query_string": "\"dignissimos\""}, {"query_string": "\"labore\""}, {"query_string": "\"consectetur\""}, {"query_string": "\"libero\""}, {"query_string": "\"aperiam\""}, {"query_string": "\"aspernatur\""}]}}], "must_not": [{"query_string": "\"some_exclude_words\""}, {"query_string": "\"some_exclude_words\""}, {"query_string": "\"some_exclude_words\""}, {"query_string": "\"some_exclude_words\""}, {"query_string": "\"some_exclude_words\""}, {"query_string": "\"some_exclude_words\""}, {"query_string": "\"some_exclude_words\""}, {"query_string": "\"some_exclude_words\""}, {"query_string": "\"some_exclude_words\""}, {"query_string": "\"some_exclude_words\""}]}}, "limit": 10000, "offset": 0, "sort": ["posted"], "options": {"max_matches": 50000}, "highlight": {"limit": 50000}} */
manticore                          | WARNING: timed out while performing SyncSend to flush network buffers, sock=142
manticore                          | rt: table posts_idx: optimized progressive chunk(s) 39 ( left 8 ) in 7m 4.5s

@PavelShilin89 added the "waiting" label Sep 27, 2023
@Eclipsium
Author

Very strange, Manticore dropped the connection inside Docker for some reason. Can you try again? I tried locally and also had another developer test the script; we were able to reproduce the crash.

@sanikolaev removed the "waiting" label Oct 2, 2023
@PavelShilin89
Contributor

PavelShilin89 commented Oct 3, 2023

The issue can now be reproduced on our server.

Steps to reproduce:

  1. Log in to our dev2 server.

  2. cd /home/pavel/issue1458/manticore_crash

  3. Run docker compose up.

  4. If there is no crash, run it again until it appears.

Logs:

docker compose up
[+] Running 2/0
 ✔ Container manticore_issue1458                Created                                                                                                                      0.0s
 ✔ Container manticore_crash-requests_sender-1  Created                                                                                                                      0.0s
Attaching to manticore_crash-requests_sender-1, manticore_issue1458
manticore_issue1458                | Manticore 6.2.12 dc5144d35@230822 (columnar 2.2.4 5aec342@230822) (secondary 2.2.4 5aec342@230822)
manticore_issue1458                | Manticore 6.2.12 dc5144d35@230822 (columnar 2.2.4 5aec342@230822) (secondary 2.2.4 5aec342@230822)
manticore_issue1458                | [Tue Oct  3 09:23:38.959 2023] [1] using config file '/etc/manticoresearch/manticore.conf' (9282 chars)...
manticore_issue1458                | starting daemon version '6.2.12 dc5144d35@230822 (columnar 2.2.4 5aec342@230822) (secondary 2.2.4 5aec342@230822)' ...
manticore_issue1458                | listening on all interfaces for mysql, port=9306
manticore_issue1458                | listening on UNIX socket /var/run/mysqld/mysqld.sock
manticore_issue1458                | listening on 192.168.144.2:9312 for sphinx and http(s)
manticore_issue1458                | listening on all interfaces for sphinx and http(s), port=9308
manticore_issue1458                | Manticore 6.2.12 dc5144d35@230822 (columnar 2.2.4 5aec342@230822) (secondary 2.2.4 5aec342@230822)
manticore_issue1458                | Copyright (c) 2001-2016, Andrew Aksyonoff
manticore_issue1458                | Copyright (c) 2008-2016, Sphinx Technologies Inc (http://sphinxsearch.com)
manticore_issue1458                | Copyright (c) 2017-2023, Manticore Software LTD (https://manticoresearch.com)
manticore_issue1458                |
manticore_issue1458                | precaching table 'posts_idx'
manticore_issue1458                | WARNING: table 'posts_idx': table 'posts_idx': morphology option changed from config has no effect, ignoring
manticore_issue1458                | binlog: replaying log /var/lib/manticore/binlog/binlog.001
manticore_issue1458                | FATAL: binlog: commit (table=posts_idx, lasttid=391, logtid=392, pos=87199897, error=pread error in /var/lib/manticore/binlog/binlog.001: pos=88805376, len=1)
manticore_issue1458                | Crash!!! Handling signal 11
manticore_issue1458 exited with code 139
manticore_crash-requests_sender-1  | Traceback (most recent call last):
manticore_crash-requests_sender-1  |   File "/opt/pysetup/.venv/lib/python3.11/site-packages/anyio/_core/_sockets.py", line 190, in connect_tcp
manticore_crash-requests_sender-1  |     addr_obj = ip_address(remote_host)
manticore_crash-requests_sender-1  |                ^^^^^^^^^^^^^^^^^^^^^^^
manticore_crash-requests_sender-1  |   File "/usr/local/lib/python3.11/ipaddress.py", line 54, in ip_address
manticore_crash-requests_sender-1  |     raise ValueError(f'{address!r} does not appear to be an IPv4 or IPv6 address')
manticore_crash-requests_sender-1  | ValueError: 'manticore_issue1458' does not appear to be an IPv4 or IPv6 address
manticore_crash-requests_sender-1  |
manticore_crash-requests_sender-1  | During handling of the above exception, another exception occurred:
manticore_crash-requests_sender-1  |
manticore_crash-requests_sender-1  | Traceback (most recent call last):
manticore_crash-requests_sender-1  |   File "/opt/pysetup/.venv/lib/python3.11/site-packages/httpcore/_exceptions.py", line 10, in map_exceptions
manticore_crash-requests_sender-1  |     yield
manticore_crash-requests_sender-1  |   File "/opt/pysetup/.venv/lib/python3.11/site-packages/httpcore/_backends/anyio.py", line 114, in connect_tcp
manticore_crash-requests_sender-1  |     stream: anyio.abc.ByteStream = await anyio.connect_tcp(
manticore_crash-requests_sender-1  |                                    ^^^^^^^^^^^^^^^^^^^^^^^^
manticore_crash-requests_sender-1  |   File "/opt/pysetup/.venv/lib/python3.11/site-packages/anyio/_core/_sockets.py", line 193, in connect_tcp
manticore_crash-requests_sender-1  |     gai_res = await getaddrinfo(
manticore_crash-requests_sender-1  |               ^^^^^^^^^^^^^^^^^^
manticore_crash-requests_sender-1  |   File "/usr/local/lib/python3.11/concurrent/futures/thread.py", line 58, in run
manticore_crash-requests_sender-1  |     result = self.fn(*self.args, **self.kwargs)
manticore_crash-requests_sender-1  |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
manticore_crash-requests_sender-1  |   File "/usr/local/lib/python3.11/socket.py", line 962, in getaddrinfo
manticore_crash-requests_sender-1  |     for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
manticore_crash-requests_sender-1  |                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
manticore_crash-requests_sender-1  | socket.gaierror: [Errno -3] Temporary failure in name resolution
manticore_crash-requests_sender-1  |
manticore_crash-requests_sender-1  | The above exception was the direct cause of the following exception:
manticore_crash-requests_sender-1  |
manticore_crash-requests_sender-1  | Traceback (most recent call last):
manticore_crash-requests_sender-1  |   File "/opt/pysetup/.venv/lib/python3.11/site-packages/httpx/_transports/default.py", line 66, in map_httpcore_exceptions
manticore_crash-requests_sender-1  |     yield
manticore_crash-requests_sender-1  |   File "/opt/pysetup/.venv/lib/python3.11/site-packages/httpx/_transports/default.py", line 366, in handle_async_request
manticore_crash-requests_sender-1  |     resp = await self._pool.handle_async_request(req)
manticore_crash-requests_sender-1  |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
manticore_crash-requests_sender-1  |   File "/opt/pysetup/.venv/lib/python3.11/site-packages/httpcore/_async/connection_pool.py", line 262, in handle_async_request
manticore_crash-requests_sender-1  |     raise exc
manticore_crash-requests_sender-1  |   File "/opt/pysetup/.venv/lib/python3.11/site-packages/httpcore/_async/connection_pool.py", line 245, in handle_async_request
manticore_crash-requests_sender-1  |     response = await connection.handle_async_request(request)
manticore_crash-requests_sender-1  |                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
manticore_crash-requests_sender-1  |   File "/opt/pysetup/.venv/lib/python3.11/site-packages/httpcore/_async/connection.py", line 99, in handle_async_request
manticore_crash-requests_sender-1  |     raise exc
manticore_crash-requests_sender-1  |   File "/opt/pysetup/.venv/lib/python3.11/site-packages/httpcore/_async/connection.py", line 76, in handle_async_request
manticore_crash-requests_sender-1  |     stream = await self._connect(request)
manticore_crash-requests_sender-1  |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
manticore_crash-requests_sender-1  |   File "/opt/pysetup/.venv/lib/python3.11/site-packages/httpcore/_async/connection.py", line 124, in _connect
manticore_crash-requests_sender-1  |     stream = await self._network_backend.connect_tcp(**kwargs)
manticore_crash-requests_sender-1  |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
manticore_crash-requests_sender-1  |   File "/opt/pysetup/.venv/lib/python3.11/site-packages/httpcore/_backends/auto.py", line 31, in connect_tcp
manticore_crash-requests_sender-1  |     return await self._backend.connect_tcp(
manticore_crash-requests_sender-1  |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
manticore_crash-requests_sender-1  |   File "/opt/pysetup/.venv/lib/python3.11/site-packages/httpcore/_backends/anyio.py", line 112, in connect_tcp
manticore_crash-requests_sender-1  |     with map_exceptions(exc_map):
manticore_crash-requests_sender-1  |   File "/usr/local/lib/python3.11/contextlib.py", line 155, in __exit__
manticore_crash-requests_sender-1  |     self.gen.throw(typ, value, traceback)
manticore_crash-requests_sender-1  |   File "/opt/pysetup/.venv/lib/python3.11/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
manticore_crash-requests_sender-1  |     raise to_exc(exc) from exc
manticore_crash-requests_sender-1  | httpcore.ConnectError: [Errno -3] Temporary failure in name resolution
manticore_crash-requests_sender-1  |
manticore_crash-requests_sender-1  | The above exception was the direct cause of the following exception:
manticore_crash-requests_sender-1  |
manticore_crash-requests_sender-1  | Traceback (most recent call last):
manticore_crash-requests_sender-1  |   File "/app/main.py", line 146, in <module>
manticore_crash-requests_sender-1  |     asyncio.run(main())
manticore_crash-requests_sender-1  |   File "/usr/local/lib/python3.11/asyncio/runners.py", line 190, in run
manticore_crash-requests_sender-1  |     return runner.run(main)
manticore_crash-requests_sender-1  |            ^^^^^^^^^^^^^^^^
manticore_crash-requests_sender-1  |   File "/usr/local/lib/python3.11/asyncio/runners.py", line 118, in run
manticore_crash-requests_sender-1  |     return self._loop.run_until_complete(task)
manticore_crash-requests_sender-1  |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
manticore_crash-requests_sender-1  |   File "/usr/local/lib/python3.11/asyncio/base_events.py", line 653, in run_until_complete
manticore_crash-requests_sender-1  |     return future.result()
manticore_crash-requests_sender-1  |            ^^^^^^^^^^^^^^^
manticore_crash-requests_sender-1  |   File "/app/main.py", line 137, in main
manticore_crash-requests_sender-1  |     await init_db()
manticore_crash-requests_sender-1  |   File "/app/main.py", line 128, in init_db
manticore_crash-requests_sender-1  |     r = await client.get(f'{BASE_URL}/cli?{INDEX_STMT}')
manticore_crash-requests_sender-1  |         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
manticore_crash-requests_sender-1  |   File "/opt/pysetup/.venv/lib/python3.11/site-packages/httpx/_client.py", line 1757, in get
manticore_crash-requests_sender-1  |     return await self.request(
manticore_crash-requests_sender-1  |            ^^^^^^^^^^^^^^^^^^^
manticore_crash-requests_sender-1  |   File "/opt/pysetup/.venv/lib/python3.11/site-packages/httpx/_client.py", line 1530, in request
manticore_crash-requests_sender-1  |     return await self.send(request, auth=auth, follow_redirects=follow_redirects)
manticore_crash-requests_sender-1  |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
manticore_crash-requests_sender-1  |   File "/opt/pysetup/.venv/lib/python3.11/site-packages/httpx/_client.py", line 1617, in send
manticore_crash-requests_sender-1  |     response = await self._send_handling_auth(
manticore_crash-requests_sender-1  |                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
manticore_crash-requests_sender-1  |   File "/opt/pysetup/.venv/lib/python3.11/site-packages/httpx/_client.py", line 1645, in _send_handling_auth
manticore_crash-requests_sender-1  |     response = await self._send_handling_redirects(
manticore_crash-requests_sender-1  |                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
manticore_crash-requests_sender-1  |   File "/opt/pysetup/.venv/lib/python3.11/site-packages/httpx/_client.py", line 1682, in _send_handling_redirects
manticore_crash-requests_sender-1  |     response = await self._send_single_request(request)
manticore_crash-requests_sender-1  |                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
manticore_crash-requests_sender-1  |   File "/opt/pysetup/.venv/lib/python3.11/site-packages/httpx/_client.py", line 1719, in _send_single_request
manticore_crash-requests_sender-1  |     response = await transport.handle_async_request(request)
manticore_crash-requests_sender-1  |                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
manticore_crash-requests_sender-1  |   File "/opt/pysetup/.venv/lib/python3.11/site-packages/httpx/_transports/default.py", line 365, in handle_async_request
manticore_crash-requests_sender-1  |     with map_httpcore_exceptions():
manticore_crash-requests_sender-1  |   File "/usr/local/lib/python3.11/contextlib.py", line 155, in __exit__
manticore_crash-requests_sender-1  |     self.gen.throw(typ, value, traceback)
manticore_crash-requests_sender-1  |   File "/opt/pysetup/.venv/lib/python3.11/site-packages/httpx/_transports/default.py", line 83, in map_httpcore_exceptions
manticore_crash-requests_sender-1  |     raise mapped_exc(message) from exc
manticore_crash-requests_sender-1  | httpx.ConnectError: [Errno -3] Temporary failure in name resolution
manticore_crash-requests_sender-1 exited with code 1

Comment:

One thing I can note about the behaviour: the crash is only reproduced after repeatedly running docker compose up. When running locally there is no crash; instead the sender fails with an error.

@Eclipsium
Author

Eclipsium commented Oct 4, 2023

[screenshot]
Manticore crashes with your image, and Docker breaks the index.
I have updated my script: added an error handler and a log volume (a sketch of the volume setup is shown below).

I think the error is related to Docker.
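
Roughly, the relevant part of the compose file now looks like this. This is a simplified sketch, not the exact docker-compose.yml from the repo: the service name, image tag and host paths are placeholders.

services:
  manticore:
    image: manticoresearch/manticore:6.2.12
    ports:
      - "9306:9306"   # mysql protocol
      - "9308:9308"   # http
    volumes:
      - ./data:/var/lib/manticore   # tables and binlog
      - ./logs:/var/log/manticore   # searchd.log / query.log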

@Eclipsium
Author

I tried downgrading to 6.0.4 with both the row-wise and columnar engines; it crashes everywhere.

@sanikolaev
Collaborator

As discussed in Telegram, I can't reproduce this issue in the dev version, so most likely this bug has already been fixed. I'm closing this issue. Feel free to reopen it if you can reproduce it in the dev version.

@sanikolaev added the "rel::upcoming" label and removed the "est::TO_ESTIMATE" label Oct 6, 2023
@sanikolaev
Collaborator

I can't reproduce this issue in the dev version

In the dev version there's another problem with secondary indexes - manticoresoftware/columnar#36

@sanikolaev
Collaborator

Reopening. It seems that on the newer version the crash occurs less frequently, but the same crash can still happen:

Manticore 6.2.13 c10f1d848@231006 dev (columnar 2.2.5 b8be4eb@230928) (secondary 2.2.5 b8be4eb@230928)
Handling signal 11
...
 0# sphBacktrace(int, bool) in searchd
 1# CrashLogger::HandleCrash(int) in searchd
 2# 0x00007FC46889E520 in /lib/x86_64-linux-gnu/libc.so.6
 3# Expr_Highlight_c::RearrangeFetchedFields(DocstoreDoc_t const&) const in searchd
 4# Expr_Highlight_c::StringEval(CSphMatch const&, unsigned char const**) const in searchd
 5# ISphExpr::StringEvalPacked(CSphMatch const&) const in searchd
 6# 0x000055EA182ED08C in searchd
 7# 0x000055EA182ECF8C in searchd
 8# MinimizeAggrResult(AggrResult_t&, CSphQuery const&, bool, sph::StringSet const&, QueryProfile_c*, CSphFilterSettings const*, bool, bool) in searchd
 9# SearchHandler_c::RunSubset(int, int) in searchd
10# SearchHandler_c::RunQueries() in searchd
11# HttpSearchHandler_c::Process() in searchd
12# ProcessHttpQuery(CharStream_c&, std::pair<char const*, int>&, CSphOrderedHash<CSphString, CSphString, CSphStrHashFunc, 256>&, sph::Vector_T<unsigned char, sph::DefaultCopy_T<unsigned char>, sph::DefaultRelimit, sph::DefaultStorage_T<unsigned char> >&, bool, http_method) in searchd
13# HttpRequestParser_c::ProcessClientHttp(AsyncNetInputBuffer_c&, sph::Vector_T<unsigned char, sph::DefaultCopy_T<unsigned char>, sph::DefaultRelimit, sph::DefaultStorage_T<unsigned char> >&) in searchd
14# HttpServe(std::unique_ptr<AsyncNetBuffer_c, std::default_delete<AsyncNetBuffer_c> >) in searchd
15# MultiServe(std::unique_ptr<AsyncNetBuffer_c, std::default_delete<AsyncNetBuffer_c> >, std::pair<int, unsigned short>, Proto_e) in searchd
16# 0x000055EA18219972 in searchd
17# Threads::CoRoutine_c::CreateContext(std::function<void ()>, std::pair<boost::context::stack_context, Threads::StackFlavour_E>)::{lambda(boost::context::detail::transfer_t)#1}::__invoke(boost::context::detail::transfer_t) in searchd
18# make_fcontext in searchd

@sanikolaev reopened this Oct 9, 2023
@sanikolaev
Collaborator

It's also very unstable: sometimes the provided script works fine for the whole night, and sometimes it crashes within a minute of starting.

@sanikolaev
Collaborator

A couple of crash reports from a debug version of Manticore:

8# void LRUCache_T<SkipCacheKey_t, SkipData_t*, SkipCacheUtil_t>::Delete<SkipCache_c::DeleteAll(long)::{lambda(SkipCacheKey_t const&)#1}>(SkipCache_c::DeleteAll(long)::{lambda(SkipCacheKey_t const&)#1}&&) in searchd

[Tue Oct 10 12:17:54.278 2023] [18] WARNING: send() failed: 32: Broken pipe, sock=1801
------- FATAL: CRASH DUMP -------
[Tue Oct 10 12:17:56.182 2023] [    1]

--- crashed HTTP request dump ---
as vitae perspiciatis sequi <b>quidem</b> praesentium, <b>nulla</b> ipsum molestias unde vel nam commodi totam voluptates itaque laudantium voluptatibus, eligendi excepturi <b>tempore</b> commodi quos cumque hic sit quae voluptates <b>nulla</b>, iusto <b>recusandae</b> vel? Quam soluta atque deleniti, quo ex ad id earum facere voluptate quos, sed atque fugiat quo molestias omnis necessitatibus, iste cumque ex id quia porro perspiciatis aliquam perferendis minima aperiam?
Sunt ea dolorum similique, <b>recusandae</b> facilis laboriosam sequi quisquam sed quis deleniti. Quae quas mollitia quaerat, at nihil consectetur molestiae et. Accusamus <b>nulla</b> ducimus eaque error quis quisquam temporibus, voluptatem ex facere? Facere eius voluptatum culpa corrupti sint quaerat atque illo quis explicabo, iste praesentium <b>doloribus</b> dolorum, expedita porro at nobis iure veritatis sapiente est, placeat laboriosam quaerat dolor obcaecati accusamus odit.
Ullam quasi velit, ipsum dolores culpa blanditiis perspiciatis est. Modi amet exercitationem numquam officiis, blanditiis <b>labore</b> deleniti, consequatur eveniet vero quas exercitationem perspiciatis dolore ducimus, ab rerum autem.ditiis obcaecati culpa eaque ad voluptatum velit animi?s maxime quod voluptatem. Cum animi necessitatibus fugiat soluta <b>expedita</b> ipsum eum, libero iure dolor rem eveniet nemo.odit, veritatis iure unde ad illo facilis.saepe blanditiis. Ipsum illum laboriosam sed accusamus similique explicabo quam dolor nulla assumenda consequatur, ipsam quod ad facere dolorum in repellat animi id delectus perferendis odio?
Facilis quam voluptate dolores delectus quis dicta <b>aperiam</b>, nobis mollitia voluptas iste exercitationem alias eius sunt. Excepturi blanditiis repudiandae earum, rerum iure cumque nesciunt magnam incidunt quia voluptatum, <b>quidem</b> consequatur odio beatae adipisci tempore <b>autem</b> iste, voluptatem <b>autem</b> similique deserunt voluptatibus, <b>praesentium</b> eaque <b>reprehenderit</b> deserunt qui soluta asperiores numquam.nt voluptas <b>praesentium</b> nulla, consequatur ad dignissimos facere quas fuga sint vero officia, qui temporibus optio eligendi obcaecati ipsam <b>expedita</b> beatae ullam?45�K���4ŚK���@4��K���p4��K����4u�K����4�K���4�YUnde incidunt voluptates quasi quibusdam rem ab aspernatur nesciunt magnam doloribus, similique at blanditiis culpa voluptatibus inventore placeat corrupti eligendi facilis, ipsa ab reiciendis iusto expedita ut, necessitatibus quasi unde accusantium dolores explicabo tempore eligendi dolorem eaque, aspernatur recusandae tempora exercitationem.	�۹��x�#(���p�	�KMEi,��6���
!Hm���	��a�w�/g����!�
                     V��	:U��(����5��0pYn��P6kl��landitiis tempora iste nihil repellendus alias deleniti error, quae tempore laboriosam id explicabo modi numquam facilis iste?DP�n��#[]E���n��E�r
                                                                                                                                                                                                         �
                                                                                                                                                                                                          @5#(���p�5
!Hm���	�5
          V��	:U��5#(���p�5Hm�5:U��5 5
--- request dump end ---
--- local index:m alias d
Manticore 6.2.13 edac58564@231009 dev (columnar 2.2.5 b8be4eb@230928) (secondary 2.2.5 b8be4eb@230928)
Handling signal 6
-------------- backtrace begins here ---------------
Program compiled with Clang 15.0.7
Configured with flags: Configured with these definitions: -DDISTR_BUILD=jammy -DUSE_SYSLOG=1 -DWITH_GALERA=1 -DWITH_RE2=1 -DWITH_RE2_FORCE_STATIC=1 -DWITH_STEMMER=1 -DWITH_STEMMER_FORCE_STATIC=1 -DWITH_NLJSON=1 -DWITH_UNIALGO=1 -DWITH_ICU=1 -DWITH_ICU_FORCE_STATIC=1 -DWITH_SSL=1 -DWITH_ZLIB=1 -DWITH_ZSTD=1 -DDL_ZSTD=1 -DZSTD_LIB=libzstd.so.1 -DWITH_CURL=1 -DDL_CURL=1 -DCURL_LIB=libcurl.so.4 -DWITH_ODBC=1 -DDL_ODBC=1 -DODBC_LIB=libodbc.so.2 -DWITH_EXPAT=1 -DDL_EXPAT=1 -DEXPAT_LIB=libexpat.so.1 -DWITH_ICONV=1 -DWITH_MYSQL=1 -DDL_MYSQL=1 -DMYSQL_LIB=libmysqlclient.so.21 -DWITH_POSTGRESQL=1 -DDL_POSTGRESQL=1 -DPOSTGRESQL_LIB=libpq.so.5 -DLOCALDATADIR=/var/lib/manticore -DFULL_SHARE_DIR=/usr/share/manticore
Built on Linux x86_64 (jammy) (cross-compiled)
Stack bottom = 0x7fdef048e9e0, thread stack size = 0x20000
Trying manual backtrace:
Something wrong with thread stack, manual backtrace may be incorrect (fp=0x7fdef0487df0)
Stack looks OK, attempting backtrace.
0x5642836c0526
0x7fdfa30f1a7c
Something wrong in frame pointers, manual backtrace failed (fp=b)
Trying system backtrace:
begin of system symbols:
searchd(_Z12sphBacktraceib+0x2f9)[0x564283900ae9]
searchd(_ZN11CrashLogger11HandleCrashEi+0x666)[0x5642836c0526]
/lib/x86_64-linux-gnu/libc.so.6(+0x42520)[0x7fdfa309d520]
/lib/x86_64-linux-gnu/libc.so.6(pthread_kill+0x12c)[0x7fdfa30f1a7c]
/lib/x86_64-linux-gnu/libc.so.6(raise+0x16)[0x7fdfa309d476]
/lib/x86_64-linux-gnu/libc.so.6(abort+0xd3)[0x7fdfa30837f3]
/lib/x86_64-linux-gnu/libc.so.6(+0x2871b)[0x7fdfa308371b]
/lib/x86_64-linux-gnu/libc.so.6(+0x39e96)[0x7fdfa3094e96]
searchd(_ZN10LRUCache_TI14SkipCacheKey_tP10SkipData_t15SkipCacheUtil_tE6DeleteIZN11SkipCache_c9DeleteAllElEUlRKS0_E_EEvOT_+0xa2)[0x564283898da2]
searchd(_ZN11SkipCache_c9DeleteAllEl+0x25)[0x5642838788c5]
searchd(_ZN9CSphIndexD1Ev+0x74)[0x564283793e94]
searchd(_ZN13CSphIndex_VLND1Ev+0x1c8)[0x5642837950e8]
searchd(_ZN13CSphIndex_VLND0Ev+0x19)[0x564283795269]
searchd(_ZN11DiskChunk_cD2Ev+0x8b)[0x5642841ce6ab]
searchd(_ZN11DiskChunk_cD0Ev+0x19)[0x5642841ce749]
searchd(_ZNK16ISphRefcountedMT7ReleaseEv+0x20c)[0x56428354231c]
searchd(_ZN17CSphRefcountedPtrIK11DiskChunk_cED2Ev+0x2e)[0x5642841c6c9e]
searchd(_ZN3sph13LazyStorage_TI17CSphRefcountedPtrIK11DiskChunk_cELi512EED2Ev+0x2f)[0x5642841d327f]
searchd(_ZN3sph8Vector_TI17CSphRefcountedPtrIK11DiskChunk_cENS_13DefaultCopy_TIS4_EENS_14DefaultRelimitENS_13LazyStorage_TIS4_Li512EEEED2Ev+0x4b)[0x5642841d31eb]
searchd(_ZN15RefCountedVec_TI17CSphRefcountedPtrIK11DiskChunk_cEED2Ev+0x1d)[0x5642841d30ed]
searchd(_ZN15RefCountedVec_TI17CSphRefcountedPtrIK11DiskChunk_cEED0Ev+0x19)[0x5642841d3119]
searchd(_ZNK16ISphRefcountedMT7ReleaseEv+0x20c)[0x56428354231c]
searchd(_ZN17CSphRefcountedPtrIK15RefCountedVec_TIS_IK11DiskChunk_cEEED2Ev+0x2e)[0x5642841c2dbe]
searchd(_ZN11ConstRtDataD2Ev+0x26)[0x5642841c9716]
searchd(_ZN9RtGuard_tD2Ev+0x15)[0x5642841c3135]
searchd(_ZNK9RtIndex_c10MultiQueryER15CSphQueryResultRK9CSphQueryRK11VecTraits_TIP15ISphMatchSorterERK18CSphMultiQueryArgs+0x19c7)[0x5642841a5ee7]
searchd(_ZNK13CSphIndexStub12MultiQueryExEiPK9CSphQueryP15CSphQueryResultPP15ISphMatchSorterRK18CSphMultiQueryArgs+0xa1)[0x5642838974b1]
searchd(+0x1b2b447)[0x564283734447]
searchd(+0x1b2a975)[0x564283733975]
searchd(+0x1b2a935)[0x564283733935]
searchd(+0x1b2a82d)[0x56428373382d]
searchd(_ZNKSt8functionIFvvEEclEv+0x35)[0x56428354bbd5]
searchd(+0x278ec81)[0x564284397c81]
searchd(+0x278ec15)[0x564284397c15]
searchd(+0x278ebd5)[0x564284397bd5]
searchd(+0x278ea9d)[0x564284397a9d]
searchd(_ZNKSt8functionIFvvEEclEv+0x35)[0x56428354bbd5]
searchd(_ZN7Threads4Coro8ExecuteNEiOSt8functionIFvvEE+0x4e)[0x56428484b97e]
searchd(_ZN15SearchHandler_c16RunLocalSearchesEv+0x65b)[0x5642836ceb5b]
searchd(_ZN15SearchHandler_c9RunSubsetEii+0x99a)[0x5642836cfa3a]
searchd(_ZN15SearchHandler_c10RunQueriesEv+0xba)[0x5642836ccd2a]
searchd(_ZN18PubSearchHandler_c10RunQueriesEv+0x1d)[0x5642836ccc5d]
searchd(_ZN19HttpSearchHandler_c7ProcessEv+0x1aa)[0x56428356636a]
searchd(_Z16ProcessHttpQueryR12CharStream_cRSt4pairIPKciER15CSphOrderedHashI10CSphStringS7_15CSphStrHashFuncLi256EERN3sph8Vector_TIhNSB_13DefaultCopy_TIhEENSB_14DefaultRelimitENSB_16DefaultStorage_TIhEEEEb11http_method+0x38e)[0x564283559a7e]
searchd(_ZN19HttpRequestParser_c17ProcessClientHttpER21AsyncNetInputBuffer_cRN3sph8Vector_TIhNS2_13DefaultCopy_TIhEENS2_14DefaultRelimitENS2_16DefaultStorage_TIhEEEE+0x34f)[0x56428355a9df]
searchd(_Z9HttpServeSt10unique_ptrI16AsyncNetBuffer_cSt14default_deleteIS0_EE+0x9a5)[0x5642835f78d5]
searchd(_Z10MultiServeSt10unique_ptrI16AsyncNetBuffer_cSt14default_deleteIS0_EESt4pairIitE7Proto_e+0x199)[0x5642835efd69]
searchd(+0x19e85bf)[0x5642835f15bf]
searchd(+0x19e8545)[0x5642835f1545]
searchd(+0x19e8505)[0x5642835f1505]
searchd(+0x19e83ed)[0x5642835f13ed]
searchd(_ZNKSt8functionIFvvEEclEv+0x35)[0x56428354bbd5]
searchd(_ZN7Threads11CoRoutine_c12WorkerLowestEPv+0x2f)[0x5642848547cf]
searchd(_ZZN7Threads11CoRoutine_c13CreateContextESt8functionIFvvEESt4pairIN5boost7context13stack_contextENS_14StackFlavour_EEEENKUlNS6_6detail10transfer_tEE_clESB_+0x21)[0x564284854791]
searchd(_ZZN7Threads11CoRoutine_c13CreateContextESt8functionIFvvEESt4pairIN5boost7context13stack_contextENS_14StackFlavour_EEEENUlNS6_6detail10transfer_tEE_8__invokeESB_+0x1d)[0x56428485475d]
searchd(make_fcontext+0x37)[0x5642848892e7]
Trying boost backtrace:
 0# sphBacktrace(int, bool) in searchd
 1# CrashLogger::HandleCrash(int) in searchd
 2# 0x00007FDFA309D520 in /lib/x86_64-linux-gnu/libc.so.6
 3# pthread_kill in /lib/x86_64-linux-gnu/libc.so.6
 4# raise in /lib/x86_64-linux-gnu/libc.so.6
 5# abort in /lib/x86_64-linux-gnu/libc.so.6
 6# 0x00007FDFA308371B in /lib/x86_64-linux-gnu/libc.so.6
 7# 0x00007FDFA3094E96 in /lib/x86_64-linux-gnu/libc.so.6
 8# void LRUCache_T<SkipCacheKey_t, SkipData_t*, SkipCacheUtil_t>::Delete<SkipCache_c::DeleteAll(long)::{lambda(SkipCacheKey_t const&)#1}>(SkipCache_c::DeleteAll(long)::{lambda(SkipCacheKey_t const&)#1}&&) in searchd
 9# SkipCache_c::DeleteAll(long) in searchd
10# CSphIndex::~CSphIndex() in searchd
11# CSphIndex_VLN::~CSphIndex_VLN() in searchd
12# CSphIndex_VLN::~CSphIndex_VLN() in searchd
13# DiskChunk_c::~DiskChunk_c() in searchd
14# DiskChunk_c::~DiskChunk_c() in searchd
15# ISphRefcountedMT::Release() const in searchd
16# CSphRefcountedPtr<DiskChunk_c const>::~CSphRefcountedPtr() in searchd
17# sph::LazyStorage_T<CSphRefcountedPtr<DiskChunk_c const>, 512>::~LazyStorage_T() in searchd
18# sph::Vector_T<CSphRefcountedPtr<DiskChunk_c const>, sph::DefaultCopy_T<CSphRefcountedPtr<DiskChunk_c const> >, sph::DefaultRelimit, sph::LazyStorage_T<CSphRefcountedPtr<DiskChunk_c const>, 512> >::~Vector_T() in searchd
19# RefCountedVec_T<CSphRefcountedPtr<DiskChunk_c const> >::~RefCountedVec_T() in searchd
20# RefCountedVec_T<CSphRefcountedPtr<DiskChunk_c const> >::~RefCountedVec_T() in searchd
21# ISphRefcountedMT::Release() const in searchd
22# CSphRefcountedPtr<RefCountedVec_T<CSphRefcountedPtr<DiskChunk_c const> > const>::~CSphRefcountedPtr() in searchd
23# ConstRtData::~ConstRtData() in searchd
24# RtGuard_t::~RtGuard_t() in searchd
25# RtIndex_c::MultiQuery(CSphQueryResult&, CSphQuery const&, VecTraits_T<ISphMatchSorter*> const&, CSphMultiQueryArgs const&) const in searchd
26# CSphIndexStub::MultiQueryEx(int, CSphQuery const*, CSphQueryResult*, ISphMatchSorter**, CSphMultiQueryArgs const&) const in searchd
27# 0x0000564283734447 in searchd
28# 0x0000564283733975 in searchd
29# 0x0000564283733935 in searchd
30# 0x000056428373382D in searchd
31# std::function<void ()>::operator()() const in searchd
32# 0x0000564284397C81 in searchd
33# 0x0000564284397C15 in searchd
34# 0x0000564284397BD5 in searchd
35# 0x0000564284397A9D in searchd
36# std::function<void ()>::operator()() const in searchd
37# Threads::Coro::ExecuteN(int, std::function<void ()>&&) in searchd
38# SearchHandler_c::RunLocalSearches() in searchd
39# SearchHandler_c::RunSubset(int, int) in searchd
40# SearchHandler_c::RunQueries() in searchd
41# PubSearchHandler_c::RunQueries() in searchd
42# HttpSearchHandler_c::Process() in searchd
43# ProcessHttpQuery(CharStream_c&, std::pair<char const*, int>&, CSphOrderedHash<CSphString, CSphString, CSphStrHashFunc, 256>&, sph::Vector_T<unsigned char, sph::DefaultCopy_T<unsigned char>, sph::DefaultRelimit, sph::DefaultStorage_T<unsigned char> >&, bool, http_method) in searchd
44# HttpRequestParser_c::ProcessClientHttp(AsyncNetInputBuffer_c&, sph::Vector_T<unsigned char, sph::DefaultCopy_T<unsigned char>, sph::DefaultRelimit, sph::DefaultStorage_T<unsigned char> >&) in searchd
45# HttpServe(std::unique_ptr<AsyncNetBuffer_c, std::default_delete<AsyncNetBuffer_c> >) in searchd
46# MultiServe(std::unique_ptr<AsyncNetBuffer_c, std::default_delete<AsyncNetBuffer_c> >, std::pair<int, unsigned short>, Proto_e) in searchd
47# 0x00005642835F15BF in searchd
48# 0x00005642835F1545 in searchd
49# 0x00005642835F1505 in searchd
50# 0x00005642835F13ED in searchd
51# std::function<void ()>::operator()() const in searchd
52# Threads::CoRoutine_c::WorkerLowest(void*) in searchd
53# Threads::CoRoutine_c::CreateContext(std::function<void ()>, std::pair<boost::context::stack_context, Threads::StackFlavour_E>)::{lambda(boost::context::detail::transfer_t)#1}::operator()(boost::context::detail::transfer_t) const in searchd
54# Threads::CoRoutine_c::CreateContext(std::function<void ()>, std::pair<boost::context::stack_context, Threads::StackFlavour_E>)::{lambda(boost::context::detail::transfer_t)#1}::__invoke(boost::context::detail::transfer_t) in searchd
55# make_fcontext in searchd

-------------- backtrace ends here ---------------
Please, create a bug report in our bug tracker (https://github.com/manticoresoftware/manticore/issues)
and attach there:
a) searchd log, b) searchd binary, c) searchd symbols.
Look into the chapter 'Reporting bugs' in the manual
(https://manual.manticoresearch.com/Reporting_bugs)
Dump with GDB via watchdog
[Tue Oct 10 12:17:59.179 2023] [29] WARNING: send() failed: 32: Broken pipe, sock=1797
--- active threads ---
thd 0 (work_0), proto http, state -, command -
thd 1 (work_2), proto http, state -, command -
thd 2 (work_3), proto http, state -, command -
thd 3 (work_4), proto http, state -, command -
thd 4 (work_5), proto http, state -, command -
thd 5 (work_6), proto http, state -, command -
thd 6 (work_7), proto http, state -, command -
thd 7 (work_8), proto http, state -, command -
thd 8 (work_9), proto http, state -, command -
thd 9 (work_10), proto http, state -, command -
thd 10 (work_11), proto http, state -, command -
thd 11 (work_12), proto http, state -, command -
thd 12 (work_13), proto http, state -, command -
thd 13 (work_14), proto http, state -, command -
thd 14 (work_15), proto http, state -, command -
thd 15 (work_17), proto http, state -, command -
thd 16 (work_18), proto http, state -, command -
thd 17 (work_19), proto http, state -, command -
thd 18 (work_20), proto http, state -, command -
thd 19 (work_21), proto http, state -, command -
thd 20 (work_22), proto http, state -, command -
thd 21 (work_23), proto http, state -, command -
thd 22 (work_24), proto http, state -, command -
thd 23 (work_25), proto http, state -, command -
thd 24 (work_26), proto http, state -, command -
thd 25 (work_27), proto http, state -, command -
thd 26 (work_28), proto http, state -, command -
thd 27 (work_29), proto http, state -, command -
thd 28 (work_30), proto http, state -, command -
thd 29 (work_31), proto http, state -, command -
--- Totally 32 threads, and 30 client-working threads ---
------- CRASH DUMP END -------
------- FATAL: CRASH DUMP -------
[Tue Oct 10 12:18:00.782 2023] [    1]

--- crashed HTTP request dump ---
as vitae perspiciatis sequi <b>quidem</b> praesentium, <b>nulla</b> ipsum molestias unde vel nam commodi totam voluptates itaque laudantium voluptatibus, eligendi excepturi <b>tempore</b> commodi quos cumque hic sit quae voluptates <b>nulla</b>, iusto <b>recusandae</b> vel? Quam soluta atque deleniti, quo ex ad id earum facere voluptate quos, sed atque fugiat quo molestias omnis necessitatibus, iste cumque ex id quia porro perspiciatis aliquam perferendis minima aperiam?
Sunt ea dolorum similique, <b>recusandae</b> facilis laboriosam sequi quisquam sed quis deleniti. Quae quas mollitia quaerat, at nihil consectetur molestiae et. Accusamus <b>nulla</b> ducimus eaque error quis quisquam temporibus, voluptatem ex facere? Facere eius voluptatum culpa corrupti sint quaerat atque illo quis explicabo, iste praesentium <b>doloribus</b> dolorum, expedita porro at nobis iure veritatis sapiente est, placeat laboriosam quaerat dolor obcaecati accusamus odit.
Ullam quasi velit, ipsum dolores culpa blanditiis perspiciatis est. Modi amet exercitationem numquam officiis, blanditiis <b>labore</b> deleniti, consequatur eveniet vero quas exercitationem perspiciatis dolore ducimus, ab rerum autem.ditiis obcaecati culpa eaque ad voluptatum velit animi?s maxime quod voluptatem. Cum animi necessitatibus fugiat soluta <b>expedita</b> ipsum eum, libero iure dolor rem eveniet nemo.odit, veritatis iure unde ad illo facilis.saepe blanditiis. Ipsum illum laboriosam sed accusamus similique explicabo quam dolor nulla assumenda consequatur, ipsam quod ad facere dolorum in repellat animi id delectus perferendis odio?
Facilis quam voluptate dolores delectus quis dicta <b>aperiam</b>, nobis mollitia voluptas iste exercitationem alias eius sunt. Excepturi blanditiis repudiandae earum, rerum iure cumque nesciunt magnam incidunt quia voluptatum, <b>quidem</b> consequatur odio beatae adipisci tempore <b>autem</b> iste, voluptatem <b>autem</b> similique deserunt voluptatibus, <b>praesentium</b> eaque <b>reprehenderit</b> deserunt qui soluta asperiores numquam.nt voluptas <b>praesentium</b> nulla, consequatur ad dignissimos facere quas fuga sint vero officia, qui temporibus optio eligendi obcaecati ipsam <b>expedita</b> beatae ullam?45�K���4ŚK���@4��K���p4��K����4u�K����4�K���4�YUnde incidunt voluptates quasi quibusdam rem ab aspernatur nesciunt magnam doloribus, similique at blanditiis culpa voluptatibus inventore placeat corrupti eligendi facilis, ipsa ab reiciendis iusto expedita ut, necessitatibus quasi unde accusantium dolores explicabo tempore eligendi dolorem eaque, aspernatur recusandae tempora exercitationem.	�۹��x�#(���p�	�KMEi,��6���
!Hm���	��a�w�/g����!�
                     V��	:U��(����5��0pYn��P6kl��landitiis tempora iste nihil repellendus alias deleniti error, quae tempore laboriosam id explicabo modi numquam facilis iste?DP�n��#[]E���n��E�r
                                                                                                                                                                                                         �
                                                                                                                                                                                                          @5#(���p�5
!Hm���	�5
          V��	:U��5#(���p�5Hm�5:U��5 5
--- request dump end ---
--- local index:m alias d
Manticore 6.2.13 edac58564@231009 dev (columnar 2.2.5 b8be4eb@230928) (secondary 2.2.5 b8be4eb@230928)
Handling signal 11
-------------- backtrace begins here ---------------
Program compiled with Clang 15.0.7
Configured with flags: Configured with these definitions: -DDISTR_BUILD=jammy -DUSE_SYSLOG=1 -DWITH_GALERA=1 -DWITH_RE2=1 -DWITH_RE2_FORCE_STATIC=1 -DWITH_STEMMER=1 -DWITH_STEMMER_FORCE_STATIC=1 -DWITH_NLJSON=1 -DWITH_UNIALGO=1 -DWITH_ICU=1 -DWITH_ICU_FORCE_STATIC=1 -DWITH_SSL=1 -DWITH_ZLIB=1 -DWITH_ZSTD=1 -DDL_ZSTD=1 -DZSTD_LIB=libzstd.so.1 -DWITH_CURL=1 -DDL_CURL=1 -DCURL_LIB=libcurl.so.4 -DWITH_ODBC=1 -DDL_ODBC=1 -DODBC_LIB=libodbc.so.2 -DWITH_EXPAT=1 -DDL_EXPAT=1 -DEXPAT_LIB=libexpat.so.1 -DWITH_ICONV=1 -DWITH_MYSQL=1 -DDL_MYSQL=1 -DMYSQL_LIB=libmysqlclient.so.21 -DWITH_POSTGRESQL=1 -DDL_POSTGRESQL=1 -DPOSTGRESQL_LIB=libpq.so.5 -DLOCALDATADIR=/var/lib/manticore -DFULL_SHARE_DIR=/usr/share/manticore
Built on Linux x86_64 (jammy) (cross-compiled)
Stack bottom = 0x7fdef048e9e0, thread stack size = 0x20000
Trying manual backtrace:
Something wrong with thread stack, manual backtrace may be incorrect (fp=0x7fdef0487eb0)
Stack looks OK, attempting backtrace.
0x5642836c0526
0x7fdfa3083898
Trying system backtrace:
begin of system symbols:
searchd(_Z12sphBacktraceib+0x2f9)[0x564283900ae9]
searchd(_ZN11CrashLogger11HandleCrashEi+0x666)[0x5642836c0526]
/lib/x86_64-linux-gnu/libc.so.6(+0x42520)[0x7fdfa309d520]
/lib/x86_64-linux-gnu/libc.so.6(abort+0x178)[0x7fdfa3083898]
/lib/x86_64-linux-gnu/libc.so.6(+0x2871b)[0x7fdfa308371b]
/lib/x86_64-linux-gnu/libc.so.6(+0x39e96)[0x7fdfa3094e96]
searchd(_ZN10LRUCache_TI14SkipCacheKey_tP10SkipData_t15SkipCacheUtil_tE6DeleteIZN11SkipCache_c9DeleteAllElEUlRKS0_E_EEvOT_+0xa2)[0x564283898da2]
searchd(_ZN11SkipCache_c9DeleteAllEl+0x25)[0x5642838788c5]
searchd(_ZN9CSphIndexD1Ev+0x74)[0x564283793e94]
searchd(_ZN13CSphIndex_VLND1Ev+0x1c8)[0x5642837950e8]
searchd(_ZN13CSphIndex_VLND0Ev+0x19)[0x564283795269]
searchd(_ZN11DiskChunk_cD2Ev+0x8b)[0x5642841ce6ab]
searchd(_ZN11DiskChunk_cD0Ev+0x19)[0x5642841ce749]
searchd(_ZNK16ISphRefcountedMT7ReleaseEv+0x20c)[0x56428354231c]
searchd(_ZN17CSphRefcountedPtrIK11DiskChunk_cED2Ev+0x2e)[0x5642841c6c9e]
searchd(_ZN3sph13LazyStorage_TI17CSphRefcountedPtrIK11DiskChunk_cELi512EED2Ev+0x2f)[0x5642841d327f]
searchd(_ZN3sph8Vector_TI17CSphRefcountedPtrIK11DiskChunk_cENS_13DefaultCopy_TIS4_EENS_14DefaultRelimitENS_13LazyStorage_TIS4_Li512EEEED2Ev+0x4b)[0x5642841d31eb]
searchd(_ZN15RefCountedVec_TI17CSphRefcountedPtrIK11DiskChunk_cEED2Ev+0x1d)[0x5642841d30ed]
searchd(_ZN15RefCountedVec_TI17CSphRefcountedPtrIK11DiskChunk_cEED0Ev+0x19)[0x5642841d3119]
searchd(_ZNK16ISphRefcountedMT7ReleaseEv+0x20c)[0x56428354231c]
searchd(_ZN17CSphRefcountedPtrIK15RefCountedVec_TIS_IK11DiskChunk_cEEED2Ev+0x2e)[0x5642841c2dbe]
searchd(_ZN11ConstRtDataD2Ev+0x26)[0x5642841c9716]
searchd(_ZN9RtGuard_tD2Ev+0x15)[0x5642841c3135]
searchd(_ZNK9RtIndex_c10MultiQueryER15CSphQueryResultRK9CSphQueryRK11VecTraits_TIP15ISphMatchSorterERK18CSphMultiQueryArgs+0x19c7)[0x5642841a5ee7]
searchd(_ZNK13CSphIndexStub12MultiQueryExEiPK9CSphQueryP15CSphQueryResultPP15ISphMatchSorterRK18CSphMultiQueryArgs+0xa1)[0x5642838974b1]
searchd(+0x1b2b447)[0x564283734447]
searchd(+0x1b2a975)[0x564283733975]
searchd(+0x1b2a935)[0x564283733935]
searchd(+0x1b2a82d)[0x56428373382d]
searchd(_ZNKSt8functionIFvvEEclEv+0x35)[0x56428354bbd5]
searchd(+0x278ec81)[0x564284397c81]
searchd(+0x278ec15)[0x564284397c15]
searchd(+0x278ebd5)[0x564284397bd5]
searchd(+0x278ea9d)[0x564284397a9d]
searchd(_ZNKSt8functionIFvvEEclEv+0x35)[0x56428354bbd5]
searchd(_ZN7Threads4Coro8ExecuteNEiOSt8functionIFvvEE+0x4e)[0x56428484b97e]
searchd(_ZN15SearchHandler_c16RunLocalSearchesEv+0x65b)[0x5642836ceb5b]
searchd(_ZN15SearchHandler_c9RunSubsetEii+0x99a)[0x5642836cfa3a]
searchd(_ZN15SearchHandler_c10RunQueriesEv+0xba)[0x5642836ccd2a]
searchd(_ZN18PubSearchHandler_c10RunQueriesEv+0x1d)[0x5642836ccc5d]
searchd(_ZN19HttpSearchHandler_c7ProcessEv+0x1aa)[0x56428356636a]
searchd(_Z16ProcessHttpQueryR12CharStream_cRSt4pairIPKciER15CSphOrderedHashI10CSphStringS7_15CSphStrHashFuncLi256EERN3sph8Vector_TIhNSB_13DefaultCopy_TIhEENSB_14DefaultRelimitENSB_16DefaultStorage_TIhEEEEb11http_method+0x38e)[0x564283559a7e]
searchd(_ZN19HttpRequestParser_c17ProcessClientHttpER21AsyncNetInputBuffer_cRN3sph8Vector_TIhNS2_13DefaultCopy_TIhEENS2_14DefaultRelimitENS2_16DefaultStorage_TIhEEEE+0x34f)[0x56428355a9df]
searchd(_Z9HttpServeSt10unique_ptrI16AsyncNetBuffer_cSt14default_deleteIS0_EE+0x9a5)[0x5642835f78d5]
searchd(_Z10MultiServeSt10unique_ptrI16AsyncNetBuffer_cSt14default_deleteIS0_EESt4pairIitE7Proto_e+0x199)[0x5642835efd69]
searchd(+0x19e85bf)[0x5642835f15bf]
searchd(+0x19e8545)[0x5642835f1545]
searchd(+0x19e8505)[0x5642835f1505]
searchd(+0x19e83ed)[0x5642835f13ed]
searchd(_ZNKSt8functionIFvvEEclEv+0x35)[0x56428354bbd5]
searchd(_ZN7Threads11CoRoutine_c12WorkerLowestEPv+0x2f)[0x5642848547cf]
searchd(_ZZN7Threads11CoRoutine_c13CreateContextESt8functionIFvvEESt4pairIN5boost7context13stack_contextENS_14StackFlavour_EEEENKUlNS6_6detail10transfer_tEE_clESB_+0x21)[0x564284854791]
searchd(_ZZN7Threads11CoRoutine_c13CreateContextESt8functionIFvvEESt4pairIN5boost7context13stack_contextENS_14StackFlavour_EEEENUlNS6_6detail10transfer_tEE_8__invokeESB_+0x1d)[0x56428485475d]
searchd(make_fcontext+0x37)[0x5642848892e7]
Trying boost backtrace:
 0# sphBacktrace(int, bool) in searchd
 1# CrashLogger::HandleCrash(int) in searchd
 2# 0x00007FDFA309D520 in /lib/x86_64-linux-gnu/libc.so.6
 3# abort in /lib/x86_64-linux-gnu/libc.so.6
 4# 0x00007FDFA308371B in /lib/x86_64-linux-gnu/libc.so.6
 5# 0x00007FDFA3094E96 in /lib/x86_64-linux-gnu/libc.so.6
 6# void LRUCache_T<SkipCacheKey_t, SkipData_t*, SkipCacheUtil_t>::Delete<SkipCache_c::DeleteAll(long)::{lambda(SkipCacheKey_t const&)#1}>(SkipCache_c::DeleteAll(long)::{lambda(SkipCacheKey_t const&)#1}&&) in searchd
 7# SkipCache_c::DeleteAll(long) in searchd
 8# CSphIndex::~CSphIndex() in searchd
 9# CSphIndex_VLN::~CSphIndex_VLN() in searchd
10# CSphIndex_VLN::~CSphIndex_VLN() in searchd
11# DiskChunk_c::~DiskChunk_c() in searchd
12# DiskChunk_c::~DiskChunk_c() in searchd
13# ISphRefcountedMT::Release() const in searchd
14# CSphRefcountedPtr<DiskChunk_c const>::~CSphRefcountedPtr() in searchd
15# sph::LazyStorage_T<CSphRefcountedPtr<DiskChunk_c const>, 512>::~LazyStorage_T() in searchd
16# sph::Vector_T<CSphRefcountedPtr<DiskChunk_c const>, sph::DefaultCopy_T<CSphRefcountedPtr<DiskChunk_c const> >, sph::DefaultRelimit, sph::LazyStorage_T<CSphRefcountedPtr<DiskChunk_c const>, 512> >::~Vector_T() in searchd
17# RefCountedVec_T<CSphRefcountedPtr<DiskChunk_c const> >::~RefCountedVec_T() in searchd
18# RefCountedVec_T<CSphRefcountedPtr<DiskChunk_c const> >::~RefCountedVec_T() in searchd
19# ISphRefcountedMT::Release() const in searchd
20# CSphRefcountedPtr<RefCountedVec_T<CSphRefcountedPtr<DiskChunk_c const> > const>::~CSphRefcountedPtr() in searchd
21# ConstRtData::~ConstRtData() in searchd
22# RtGuard_t::~RtGuard_t() in searchd
23# RtIndex_c::MultiQuery(CSphQueryResult&, CSphQuery const&, VecTraits_T<ISphMatchSorter*> const&, CSphMultiQueryArgs const&) const in searchd
24# CSphIndexStub::MultiQueryEx(int, CSphQuery const*, CSphQueryResult*, ISphMatchSorter**, CSphMultiQueryArgs const&) const in searchd
25# 0x0000564283734447 in searchd
26# 0x0000564283733975 in searchd
27# 0x0000564283733935 in searchd
28# 0x000056428373382D in searchd
29# std::function<void ()>::operator()() const in searchd
30# 0x0000564284397C81 in searchd
31# 0x0000564284397C15 in searchd
32# 0x0000564284397BD5 in searchd
33# 0x0000564284397A9D in searchd
34# std::function<void ()>::operator()() const in searchd
35# Threads::Coro::ExecuteN(int, std::function<void ()>&&) in searchd
36# SearchHandler_c::RunLocalSearches() in searchd
37# SearchHandler_c::RunSubset(int, int) in searchd
38# SearchHandler_c::RunQueries() in searchd
39# PubSearchHandler_c::RunQueries() in searchd
40# HttpSearchHandler_c::Process() in searchd
41# ProcessHttpQuery(CharStream_c&, std::pair<char const*, int>&, CSphOrderedHash<CSphString, CSphString, CSphStrHashFunc, 256>&, sph::Vector_T<unsigned char, sph::DefaultCopy_T<unsigned char>, sph::DefaultRelimit, sph::DefaultStorage_T<unsigned char> >&, bool, http_method) in searchd
42# HttpRequestParser_c::ProcessClientHttp(AsyncNetInputBuffer_c&, sph::Vector_T<unsigned char, sph::DefaultCopy_T<unsigned char>, sph::DefaultRelimit, sph::DefaultStorage_T<unsigned char> >&) in searchd
43# HttpServe(std::unique_ptr<AsyncNetBuffer_c, std::default_delete<AsyncNetBuffer_c> >) in searchd
44# MultiServe(std::unique_ptr<AsyncNetBuffer_c, std::default_delete<AsyncNetBuffer_c> >, std::pair<int, unsigned short>, Proto_e) in searchd
45# 0x00005642835F15BF in searchd
46# 0x00005642835F1545 in searchd
47# 0x00005642835F1505 in searchd
48# 0x00005642835F13ED in searchd
49# std::function<void ()>::operator()() const in searchd
50# Threads::CoRoutine_c::WorkerLowest(void*) in searchd
51# Threads::CoRoutine_c::CreateContext(std::function<void ()>, std::pair<boost::context::stack_context, Threads::StackFlavour_E>)::{lambda(boost::context::detail::transfer_t)#1}::operator()(boost::context::detail::transfer_t) const in searchd
52# Threads::CoRoutine_c::CreateContext(std::function<void ()>, std::pair<boost::context::stack_context, Threads::StackFlavour_E>)::{lambda(boost::context::detail::transfer_t)#1}::__invoke(boost::context::detail::transfer_t) in searchd
53# make_fcontext in searchd

-------------- backtrace ends here ---------------
Please, create a bug report in our bug tracker (https://github.com/manticoresoftware/manticore/issues)
and attach there:
a) searchd log, b) searchd binary, c) searchd symbols.
Look into the chapter 'Reporting bugs' in the manual
(https://manual.manticoresearch.com/Reporting_bugs)
Dump with GDB via watchdog
--- active threads ---
thd 0 (work_0), proto http, state -, command -
thd 1 (work_2), proto http, state -, command -
thd 2 (work_3), proto http, state -, command -
thd 3 (work_4), proto http, state -, command -
thd 4 (work_5), proto http, state -, command -
thd 5 (work_6), proto http, state -, command -
thd 6 (work_7), proto http, state -, command -
thd 7 (work_8), proto http, state -, command -
thd 8 (work_9), proto http, state -, command -
thd 9 (work_10), proto http, state -, command -
thd 10 (work_11), proto http, state -, command -
thd 11 (work_12), proto http, state -, command -
thd 12 (work_13), proto http, state -, command -
thd 13 (work_14), proto http, state -, command -
thd 14 (work_15), proto http, state -, command -
thd 15 (work_17), proto http, state -, command -
thd 16 (work_18), proto http, state -, command -
thd 17 (work_19), proto http, state -, command -
thd 18 (work_20), proto http, state -, command -
thd 19 (work_21), proto http, state -, command -
thd 20 (work_22), proto http, state -, command -
thd 21 (work_23), proto http, state -, command -
thd 22 (work_24), proto http, state -, command -
thd 23 (work_25), proto http, state -, command -
thd 24 (work_26), proto http, state -, command -
thd 25 (work_27), proto http, state -, command -
thd 26 (work_28), proto http, state -, command -
thd 27 (work_29), proto http, state -, command -
thd 28 (work_30), proto http, state -, command -
thd 29 (work_31), proto http, state -, command -
--- Totally 32 threads, and 30 client-working threads ---
------- CRASH DUMP END -------

6# MemoryReader_c::SetPos(int) in searchd
7# Docstore_c::ProcessSmallBlockDoc(unsigned int, unsigned int, VecTraits_T<int> const*, CSphFixedVector<int, sph::DefaultCopy_T<int>, sph::DefaultStorage_T<int> > const&, bool, MemoryReader2_c&, BitVec_T<unsigned int, 128>&, DocstoreDoc_t&) const in searchd

[Tue Oct 10 11:38:17.278 2023] [25] rt: table posts_idx: diskchunk 1179(168), segments 6 saved in 69.903959 (70.196098) sec, RAM saved/new 82581914/55061971 ratio 0.599968 (soft limit 80526329, conf limit 134217728)
------- FATAL: CRASH DUMP -------
[Tue Oct 10 11:38:34.078 2023] [ 1]

--- crashed HTTP request dump ---
��B�0�)��! ��b �!|�8

�N		�

�g
�m�&� ��!���A��

�u_�����S�
��S�
�Y�2�? �w��%
�D
�=\�Wa�!�Z�W��;�{ z� �I�Q�<{f
�s��G ���tV�9�8�u�.�?
--- request dump end ---
--- local index:`$
Manticore 6.2.13 edac585@231009 dev (columnar 2.2.5 b8be4eb@230928) (secondary 2.2.5 b8be4eb@230928)
Handling signal 6
-------------- backtrace begins here ---------------
Program compiled with Clang 15.0.7
Configured with flags: Configured with these definitions: -DDISTR_BUILD=jammy -DUSE_SYSLOG=1 -DWITH_GALERA=1 -DWITH_RE2=1 -DWITH_RE2_FORCE_STATIC=1 -DWITH_STEMMER=1 -DWITH_STEMMER_FORCE_STATIC=1 -DWITH_NLJSON=1 -DWITH_UNIALGO=1 -DWITH_ICU=1 -DWITH_ICU_FORCE_STATIC=1 -DWITH_SSL=1 -DWITH_ZLIB=1 -DWITH_ZSTD=1 -DDL_ZSTD=1 -DZSTD_LIB=libzstd.so.1 -DWITH_CURL=1 -DDL_CURL=1 -DCURL_LIB=libcurl.so.4 -DWITH_ODBC=1 -DDL_ODBC=1 -DODBC_LIB=libodbc.so.2 -DWITH_EXPAT=1 -DDL_EXPAT=1 -DEXPAT_LIB=libexpat.so.1 -DWITH_ICONV=1 -DWITH_MYSQL=1 -DDL_MYSQL=1 -DMYSQL_LIB=libmysqlclient.so.21 -DWITH_POSTGRESQL=1 -DDL_POSTGRESQL=1 -DPOSTGRESQL_LIB=libpq.so.5 -DLOCALDATADIR=/var/lib/manticore -DFULL_SHARE_DIR=/usr/share/manticore
Built on Linux x86_64 (jammy) (cross-compiled)
Stack bottom = 0x7fba00083f80, thread stack size = 0x20000
Trying manual backtrace:
Something wrong with thread stack, manual backtrace may be incorrect (fp=0x7fba0007f230)
Stack looks OK, attempting backtrace.
0x55d526597526
0x7fbab4a8ea7c
Something wrong in frame pointers, manual backtrace failed (fp=20)
Trying system backtrace:
begin of system symbols:
searchd(_Z12sphBacktraceib+0x2f9)[0x55d5267d7ae9]
searchd(_ZN11CrashLogger11HandleCrashEi+0x666)[0x55d526597526]
/lib/x86_64-linux-gnu/libc.so.6(+0x42520)[0x7fbab4a3a520]
/lib/x86_64-linux-gnu/libc.so.6(pthread_kill+0x12c)[0x7fbab4a8ea7c]
/lib/x86_64-linux-gnu/libc.so.6(raise+0x16)[0x7fbab4a3a476]
/lib/x86_64-linux-gnu/libc.so.6(abort+0xd3)[0x7fbab4a207f3]
/lib/x86_64-linux-gnu/libc.so.6(+0x2871b)[0x7fbab4a2071b]
/lib/x86_64-linux-gnu/libc.so.6(+0x39e96)[0x7fbab4a31e96]
searchd(_ZN14MemoryReader_c6SetPosEi+0x6a)[0x55d52727586a]
searchd(_ZNK10Docstore_c20ProcessSmallBlockDocEjjPK11VecTraits_TIiERK15CSphFixedVectorIiN3sph13DefaultCopy_TIiEENS5_16DefaultStorage_TIiEEEbR15MemoryReader2_cR8BitVec_TIjLi128EER13DocstoreDoc_t+0x2d9)[0x55d5271a6bf9]
searchd(_ZNK10Docstore_c21ReadDocFromSmallBlockERKNS_7Block_tEjPK11VecTraits_TIiElb+0x2e3)[0x55d5271a5cb3]
searchd(_ZNK10Docstore_c6GetDocEjPK11VecTraits_TIiElb+0x159)[0x55d5271a5999]
searchd(_ZNK13CSphIndex_VLN6GetDocER13DocstoreDoc_tlPK11VecTraits_TIiElb+0xc0)[0x55d5266a0ac0]
searchd(_ZNK9RtIndex_c6GetDocER13DocstoreDoc_tlPK11VecTraits_TIiElb+0x347)[0x55d52708d317]
searchd(_ZNK16Expr_Highlight_c23FetchFieldsFromDocstoreER13DocstoreDoc_tRl+0x8f)[0x55d52721658f]
searchd(_ZNK16Expr_Highlight_c10StringEvalERK9CSphMatchPPKh+0x301)[0x55d527216261]
searchd(_ZNK8ISphExpr16StringEvalPackedERK9CSphMatch+0x2e)[0x55d526f8faee]
searchd(+0x1b15b54)[0x55d5265f5b54]
searchd(+0x1b158e9)[0x55d5265f58e9]
searchd(+0x1ac32c9)[0x55d5265a32c9]
searchd(_Z18MinimizeAggrResultR12AggrResult_tRK9CSphQuerybRKN3sph9StringSetEP14QueryProfile_cPK18CSphFilterSettingsbb+0x627)[0x55d5265a0f77]
searchd(_ZN15SearchHandler_c9RunSubsetEii+0x1e4c)[0x55d5265a7eec]
searchd(_ZN15SearchHandler_c10RunQueriesEv+0xba)[0x55d5265a3d2a]
searchd(_ZN18PubSearchHandler_c10RunQueriesEv+0x1d)[0x55d5265a3c5d]
searchd(_ZN19HttpSearchHandler_c7ProcessEv+0x1aa)[0x55d52643d36a]
searchd(_Z16ProcessHttpQueryR12CharStream_cRSt4pairIPKciER15CSphOrderedHashI10CSphStringS7_15CSphStrHashFuncLi256EERN3sph8Vector_TIhNSB_13DefaultCopy_TIhEENSB_14DefaultRelimitENSB_16DefaultStorage_TIhEEEEb11http_method+0x38e)[0x55d526430a7e]
searchd(_ZN19HttpRequestParser_c17ProcessClientHttpER21AsyncNetInputBuffer_cRN3sph8Vector_TIhNS2_13DefaultCopy_TIhEENS2_14DefaultRelimitENS2_16DefaultStorage_TIhEEEE+0x34f)[0x55d5264319df]
searchd(_Z9HttpServeSt10unique_ptrI16AsyncNetBuffer_cSt14default_deleteIS0_EE+0x9a5)[0x55d5264ce8d5]
searchd(_Z10MultiServeSt10unique_ptrI16AsyncNetBuffer_cSt14default_deleteIS0_EESt4pairIitE7Proto_e+0x199)[0x55d5264c6d69]
searchd(+0x19e85bf)[0x55d5264c85bf]
searchd(+0x19e8545)[0x55d5264c8545]
searchd(+0x19e8505)[0x55d5264c8505]
searchd(+0x19e83ed)[0x55d5264c83ed]
searchd(_ZNKSt8functionIFvvEEclEv+0x35)[0x55d526422bd5]
searchd(_ZN7Threads11CoRoutine_c12WorkerLowestEPv+0x2f)[0x55d52772b7cf]
searchd(_ZZN7Threads11CoRoutine_c13CreateContextESt8functionIFvvEESt4pairIN5boost7context13stack_contextENS_14StackFlavour_EEEENKUlNS6_6detail10transfer_tEE_clESB_+0x21)[0x55d52772b791]
searchd(_ZZN7Threads11CoRoutine_c13CreateContextESt8functionIFvvEESt4pairIN5boost7context13stack_contextENS_14StackFlavour_EEEENUlNS6_6detail10transfer_tEE_8__invokeESB_+0x1d)[0x55d52772b75d]
searchd(make_fcontext+0x37)[0x55d5277602e7]
Trying boost backtrace:
0# sphBacktrace(int, bool) in searchd
1# CrashLogger::HandleCrash(int) in searchd
2# 0x00007FBAB4A3A520 in /lib/x86_64-linux-gnu/libc.so.6
3# pthread_kill in /lib/x86_64-linux-gnu/libc.so.6
4# raise in /lib/x86_64-linux-gnu/libc.so.6
5# abort in /lib/x86_64-linux-gnu/libc.so.6
6# 0x00007FBAB4A2071B in /lib/x86_64-linux-gnu/libc.so.6
7# 0x00007FBAB4A31E96 in /lib/x86_64-linux-gnu/libc.so.6
8# MemoryReader_c::SetPos(int) in searchd
9# Docstore_c::ProcessSmallBlockDoc(unsigned int, unsigned int, VecTraits_T const*, CSphFixedVector<int, sph::DefaultCopy_T, sph::DefaultStorage_T > const&, bool, MemoryReader2_c&, BitVec_T<unsigned int, 128>&, DocstoreDoc_t&) const in searchd
10# Docstore_c::ReadDocFromSmallBlock(Docstore_c::Block_t const&, unsigned int, VecTraits_T const*, long, bool) const in searchd
11# Docstore_c::GetDoc(unsigned int, VecTraits_T const*, long, bool) const in searchd
12# CSphIndex_VLN::GetDoc(DocstoreDoc_t&, long, VecTraits_T const*, long, bool) const in searchd
13# RtIndex_c::GetDoc(DocstoreDoc_t&, long, VecTraits_T const*, long, bool) const in searchd
14# Expr_Highlight_c::FetchFieldsFromDocstore(DocstoreDoc_t&, long&) const in searchd
15# Expr_Highlight_c::StringEval(CSphMatch const&, unsigned char const**) const in searchd
16# ISphExpr::StringEvalPacked(CSphMatch const&) const in searchd
17# 0x000055D5265F5B54 in searchd
18# 0x000055D5265F58E9 in searchd
19# 0x000055D5265A32C9 in searchd
20# MinimizeAggrResult(AggrResult_t&, CSphQuery const&, bool, sph::StringSet const&, QueryProfile_c*, CSphFilterSettings const*, bool, bool) in searchd
21# SearchHandler_c::RunSubset(int, int) in searchd
22# SearchHandler_c::RunQueries() in searchd
23# PubSearchHandler_c::RunQueries() in searchd
24# HttpSearchHandler_c::Process() in searchd
25# ProcessHttpQuery(CharStream_c&, std::pair<char const*, int>&, CSphOrderedHash<CSphString, CSphString, CSphStrHashFunc, 256>&, sph::Vector_T<unsigned char, sph::DefaultCopy_T, sph::DefaultRelimit, sph::DefaultStorage_T >&, bool, http_method) in searchd
26# HttpRequestParser_c::ProcessClientHttp(AsyncNetInputBuffer_c&, sph::Vector_T<unsigned char, sph::DefaultCopy_T, sph::DefaultRelimit, sph::DefaultStorage_T >&) in searchd
27# HttpServe(std::unique_ptr<AsyncNetBuffer_c, std::default_delete<AsyncNetBuffer_c> >) in searchd
28# MultiServe(std::unique_ptr<AsyncNetBuffer_c, std::default_delete<AsyncNetBuffer_c> >, std::pair<int, unsigned short>, Proto_e) in searchd
29# 0x000055D5264C85BF in searchd
30# 0x000055D5264C8545 in searchd
31# 0x000055D5264C8505 in searchd
32# 0x000055D5264C83ED in searchd
33# std::function<void ()>::operator()() const in searchd
34# Threads::CoRoutine_c::WorkerLowest(void*) in searchd
35# Threads::CoRoutine_c::CreateContext(std::function<void ()>, std::pair<boost::context::stack_context, Threads::StackFlavour_E>)::{lambda(boost::context::detail::transfer_t)#1}::operator()(boost::context::detail::transfer_t) const in searchd
36# Threads::CoRoutine_c::CreateContext(std::function<void ()>, std::pair<boost::context::stack_context, Threads::StackFlavour_E>)::{lambda(boost::context::detail::transfer_t)#1}::__invoke(boost::context::detail::transfer_t) in searchd
37# make_fcontext in searchd

-------------- backtrace ends here ---------------
Please, create a bug report in our bug tracker (https://github.com/manticoresoftware/manticore/issues)
and attach there:
a) searchd log, b) searchd binary, c) searchd symbols.
Look into the chapter 'Reporting bugs' in the manual
(https://manual.manticoresearch.com/Reporting_bugs)
Dump with GDB via watchdog
[Tue Oct 10 11:38:38.183 2023] [35] WARNING: send() failed: 32: Broken pipe, sock=1543
--- active threads ---
thd 0 (work_0), proto http, state -, command -
thd 1 (work_2), proto http, state -, command -
thd 2 (work_3), proto http, state net_read, command -
thd 3 (work_4), proto http, state -, command -
thd 4 (work_5), proto http, state -, command -
thd 5 (work_6), proto http, state -, command -
thd 6 (work_8), proto http, state -, command -
thd 7 (work_9), proto http, state -, command -
thd 8 (work_10), proto http, state net_read, command -
thd 9 (work_11), proto http, state -, command -
thd 10 (work_12), proto http, state -, command -
thd 11 (work_13), proto http, state net_read, command -
thd 12 (work_14), proto http, state net_read, command -
thd 13 (work_15), proto http, state -, command -
thd 14 (work_16), proto http, state -, command -
thd 15 (work_18), proto http, state -, command -
thd 16 (work_19), proto http, state -, command -
thd 17 (work_20), proto http, state net_read, command -
thd 18 (work_21), proto http, state -, command -
thd 19 (work_22), proto http, state -, command -
thd 20 (work_23), proto http, state -, command -
thd 21 (work_24), proto http, state -, command -
thd 22 (work_25), proto http, state net_read, command -
thd 23 (work_26), proto http, state -, command -
thd 24 (work_27), proto http, state -, command -
thd 25 (work_29), proto http, state -, command -
thd 26 (work_30), proto http, state -, command -
thd 27 (work_31), proto http, state -, command -
--- Totally 30 threads, and 28 client-working threads ---
------- CRASH DUMP END -------
------- FATAL: CRASH DUMP -------
[Tue Oct 10 11:38:38.678 2023] [ 1]

--- crashed HTTP request dump ---
��B�0�)��! ��b �!|�8

�N		�

�g
�m�&� ��!���A��

�u_�����S�
��S�
�Y�2�? �w��%
�D
�=\�Wa�!�Z�W��;�{ z� �I�Q�<{f
�s��G ���tV�9�8�u�.�?
--- request dump end ---
--- local index:`$
Manticore 6.2.13 edac585@231009 dev (columnar 2.2.5 b8be4eb@230928) (secondary 2.2.5 b8be4eb@230928)
Handling signal 11
-------------- backtrace begins here ---------------
Program compiled with Clang 15.0.7
Configured with flags: Configured with these definitions: -DDISTR_BUILD=jammy -DUSE_SYSLOG=1 -DWITH_GALERA=1 -DWITH_RE2=1 -DWITH_RE2_FORCE_STATIC=1 -DWITH_STEMMER=1 -DWITH_STEMMER_FORCE_STATIC=1 -DWITH_NLJSON=1 -DWITH_UNIALGO=1 -DWITH_ICU=1 -DWITH_ICU_FORCE_STATIC=1 -DWITH_SSL=1 -DWITH_ZLIB=1 -DWITH_ZSTD=1 -DDL_ZSTD=1 -DZSTD_LIB=libzstd.so.1 -DWITH_CURL=1 -DDL_CURL=1 -DCURL_LIB=libcurl.so.4 -DWITH_ODBC=1 -DDL_ODBC=1 -DODBC_LIB=libodbc.so.2 -DWITH_EXPAT=1 -DDL_EXPAT=1 -DEXPAT_LIB=libexpat.so.1 -DWITH_ICONV=1 -DWITH_MYSQL=1 -DDL_MYSQL=1 -DMYSQL_LIB=libmysqlclient.so.21 -DWITH_POSTGRESQL=1 -DDL_POSTGRESQL=1 -DPOSTGRESQL_LIB=libpq.so.5 -DLOCALDATADIR=/var/lib/manticore -DFULL_SHARE_DIR=/usr/share/manticore
Built on Linux x86_64 (jammy) (cross-compiled)
Stack bottom = 0x7fba00083f80, thread stack size = 0x20000
Trying manual backtrace:
Something wrong with thread stack, manual backtrace may be incorrect (fp=0x7fba0007f2f0)
Stack looks OK, attempting backtrace.
0x55d526597526
0x7fbab4a20898
Trying system backtrace:
begin of system symbols:
searchd(_Z12sphBacktraceib+0x2f9)[0x55d5267d7ae9]
searchd(_ZN11CrashLogger11HandleCrashEi+0x666)[0x55d526597526]
/lib/x86_64-linux-gnu/libc.so.6(+0x42520)[0x7fbab4a3a520]
/lib/x86_64-linux-gnu/libc.so.6(abort+0x178)[0x7fbab4a20898]
/lib/x86_64-linux-gnu/libc.so.6(+0x2871b)[0x7fbab4a2071b]
/lib/x86_64-linux-gnu/libc.so.6(+0x39e96)[0x7fbab4a31e96]
searchd(_ZN14MemoryReader_c6SetPosEi+0x6a)[0x55d52727586a]
searchd(_ZNK10Docstore_c20ProcessSmallBlockDocEjjPK11VecTraits_TIiERK15CSphFixedVectorIiN3sph13DefaultCopy_TIiEENS5_16DefaultStorage_TIiEEEbR15MemoryReader2_cR8BitVec_TIjLi128EER13DocstoreDoc_t+0x2d9)[0x55d5271a6bf9]
searchd(_ZNK10Docstore_c21ReadDocFromSmallBlockERKNS_7Block_tEjPK11VecTraits_TIiElb+0x2e3)[0x55d5271a5cb3]
searchd(_ZNK10Docstore_c6GetDocEjPK11VecTraits_TIiElb+0x159)[0x55d5271a5999]
searchd(_ZNK13CSphIndex_VLN6GetDocER13DocstoreDoc_tlPK11VecTraits_TIiElb+0xc0)[0x55d5266a0ac0]
searchd(_ZNK9RtIndex_c6GetDocER13DocstoreDoc_tlPK11VecTraits_TIiElb+0x347)[0x55d52708d317]
searchd(_ZNK16Expr_Highlight_c23FetchFieldsFromDocstoreER13DocstoreDoc_tRl+0x8f)[0x55d52721658f]
searchd(_ZNK16Expr_Highlight_c10StringEvalERK9CSphMatchPPKh+0x301)[0x55d527216261]
searchd(_ZNK8ISphExpr16StringEvalPackedERK9CSphMatch+0x2e)[0x55d526f8faee]
searchd(+0x1b15b54)[0x55d5265f5b54]
searchd(+0x1b158e9)[0x55d5265f58e9]
searchd(+0x1ac32c9)[0x55d5265a32c9]
searchd(_Z18MinimizeAggrResultR12AggrResult_tRK9CSphQuerybRKN3sph9StringSetEP14QueryProfile_cPK18CSphFilterSettingsbb+0x627)[0x55d5265a0f77]
searchd(_ZN15SearchHandler_c9RunSubsetEii+0x1e4c)[0x55d5265a7eec]
searchd(_ZN15SearchHandler_c10RunQueriesEv+0xba)[0x55d5265a3d2a]
searchd(_ZN18PubSearchHandler_c10RunQueriesEv+0x1d)[0x55d5265a3c5d]
searchd(_ZN19HttpSearchHandler_c7ProcessEv+0x1aa)[0x55d52643d36a]
searchd(_Z16ProcessHttpQueryR12CharStream_cRSt4pairIPKciER15CSphOrderedHashI10CSphStringS7_15CSphStrHashFuncLi256EERN3sph8Vector_TIhNSB_13DefaultCopy_TIhEENSB_14DefaultRelimitENSB_16DefaultStorage_TIhEEEEb11http_method+0x38e)[0x55d526430a7e]
searchd(_ZN19HttpRequestParser_c17ProcessClientHttpER21AsyncNetInputBuffer_cRN3sph8Vector_TIhNS2_13DefaultCopy_TIhEENS2_14DefaultRelimitENS2_16DefaultStorage_TIhEEEE+0x34f)[0x55d5264319df]
searchd(_Z9HttpServeSt10unique_ptrI16AsyncNetBuffer_cSt14default_deleteIS0_EE+0x9a5)[0x55d5264ce8d5]
searchd(_Z10MultiServeSt10unique_ptrI16AsyncNetBuffer_cSt14default_deleteIS0_EESt4pairIitE7Proto_e+0x199)[0x55d5264c6d69]
searchd(+0x19e85bf)[0x55d5264c85bf]
searchd(+0x19e8545)[0x55d5264c8545]
searchd(+0x19e8505)[0x55d5264c8505]
searchd(+0x19e83ed)[0x55d5264c83ed]
searchd(_ZNKSt8functionIFvvEEclEv+0x35)[0x55d526422bd5]
searchd(_ZN7Threads11CoRoutine_c12WorkerLowestEPv+0x2f)[0x55d52772b7cf]
searchd(_ZZN7Threads11CoRoutine_c13CreateContextESt8functionIFvvEESt4pairIN5boost7context13stack_contextENS_14StackFlavour_EEEENKUlNS6_6detail10transfer_tEE_clESB_+0x21)[0x55d52772b791]
searchd(_ZZN7Threads11CoRoutine_c13CreateContextESt8functionIFvvEESt4pairIN5boost7context13stack_contextENS_14StackFlavour_EEEENUlNS6_6detail10transfer_tEE_8__invokeESB_+0x1d)[0x55d52772b75d]
searchd(make_fcontext+0x37)[0x55d5277602e7]
Trying boost backtrace:
0# sphBacktrace(int, bool) in searchd
1# CrashLogger::HandleCrash(int) in searchd
2# 0x00007FBAB4A3A520 in /lib/x86_64-linux-gnu/libc.so.6
3# abort in /lib/x86_64-linux-gnu/libc.so.6
4# 0x00007FBAB4A2071B in /lib/x86_64-linux-gnu/libc.so.6
5# 0x00007FBAB4A31E96 in /lib/x86_64-linux-gnu/libc.so.6
6# MemoryReader_c::SetPos(int) in searchd
7# Docstore_c::ProcessSmallBlockDoc(unsigned int, unsigned int, VecTraits_T const*, CSphFixedVector<int, sph::DefaultCopy_T, sph::DefaultStorage_T > const&, bool, MemoryReader2_c&, BitVec_T<unsigned int, 128>&, DocstoreDoc_t&) const in searchd
8# Docstore_c::ReadDocFromSmallBlock(Docstore_c::Block_t const&, unsigned int, VecTraits_T const*, long, bool) const in searchd
9# Docstore_c::GetDoc(unsigned int, VecTraits_T const*, long, bool) const in searchd
10# CSphIndex_VLN::GetDoc(DocstoreDoc_t&, long, VecTraits_T const*, long, bool) const in searchd
11# RtIndex_c::GetDoc(DocstoreDoc_t&, long, VecTraits_T const*, long, bool) const in searchd
12# Expr_Highlight_c::FetchFieldsFromDocstore(DocstoreDoc_t&, long&) const in searchd
13# Expr_Highlight_c::StringEval(CSphMatch const&, unsigned char const**) const in searchd
14# ISphExpr::StringEvalPacked(CSphMatch const&) const in searchd
15# 0x000055D5265F5B54 in searchd
16# 0x000055D5265F58E9 in searchd
17# 0x000055D5265A32C9 in searchd
18# MinimizeAggrResult(AggrResult_t&, CSphQuery const&, bool, sph::StringSet const&, QueryProfile_c*, CSphFilterSettings const*, bool, bool) in searchd
19# SearchHandler_c::RunSubset(int, int) in searchd
20# SearchHandler_c::RunQueries() in searchd
21# PubSearchHandler_c::RunQueries() in searchd
22# HttpSearchHandler_c::Process() in searchd
23# ProcessHttpQuery(CharStream_c&, std::pair<char const*, int>&, CSphOrderedHash<CSphString, CSphString, CSphStrHashFunc, 256>&, sph::Vector_T<unsigned char, sph::DefaultCopy_T, sph::DefaultRelimit, sph::DefaultStorage_T >&, bool, http_method) in searchd
24# HttpRequestParser_c::ProcessClientHttp(AsyncNetInputBuffer_c&, sph::Vector_T<unsigned char, sph::DefaultCopy_T, sph::DefaultRelimit, sph::DefaultStorage_T >&) in searchd
25# HttpServe(std::unique_ptr<AsyncNetBuffer_c, std::default_delete<AsyncNetBuffer_c> >) in searchd
26# MultiServe(std::unique_ptr<AsyncNetBuffer_c, std::default_delete<AsyncNetBuffer_c> >, std::pair<int, unsigned short>, Proto_e) in searchd
27# 0x000055D5264C85BF in searchd
28# 0x000055D5264C8545 in searchd
29# 0x000055D5264C8505 in searchd
30# 0x000055D5264C83ED in searchd
31# std::function<void ()>::operator()() const in searchd
32# Threads::CoRoutine_c::WorkerLowest(void*) in searchd
33# Threads::CoRoutine_c::CreateContext(std::function<void ()>, std::pair<boost::context::stack_context, Threads::StackFlavour_E>)::{lambda(boost::context::detail::transfer_t)#1}::operator()(boost::context::detail::transfer_t) const in searchd
34# Threads::CoRoutine_c::CreateContext(std::function<void ()>, std::pair<boost::context::stack_context, Threads::StackFlavour_E>)::{lambda(boost::context::detail::transfer_t)#1}::__invoke(boost::context::detail::transfer_t) in searchd
35# make_fcontext in searchd

-------------- backtrace ends here ---------------
Please, create a bug report in our bug tracker (https://github.com/manticoresoftware/manticore/issues)
and attach there:
a) searchd log, b) searchd binary, c) searchd symbols.
Look into the chapter 'Reporting bugs' in the manual
(https://manual.manticoresearch.com/Reporting_bugs)
Dump with GDB via watchdog
--- active threads ---
thd 0 (work_0), proto http, state net_read, command -
thd 1 (work_2), proto http, state -, command -
thd 2 (work_3), proto http, state -, command -
thd 3 (work_4), proto http, state -, command -
thd 4 (work_5), proto http, state -, command -
thd 5 (work_6), proto http, state -, command -
thd 6 (work_7), proto http, state -, command -
thd 7 (work_9), proto http, state -, command -
thd 8 (work_10), proto http, state -, command -
thd 9 (work_11), proto http, state net_read, command -
thd 10 (work_12), proto http, state -, command -
thd 11 (work_13), proto http, state net_read, command -
thd 12 (work_14), proto http, state -, command -
thd 13 (work_15), proto http, state -, command -
thd 14 (work_16), proto http, state -, command -
thd 15 (work_18), proto http, state -, command -
thd 16 (work_19), proto http, state -, command -
thd 17 (work_20), proto http, state -, command -
thd 18 (work_21), proto http, state -, command -
thd 19 (work_22), proto http, state -, command -
thd 20 (work_23), proto http, state -, command -
thd 21 (work_24), proto http, state -, command -
thd 22 (work_25), proto http, state net_read, command -
thd 23 (work_26), proto http, state -, command -
thd 24 (work_27), proto http, state net_read, command -
thd 25 (work_28), proto http, state -, command -
thd 26 (work_29), proto http, state -, command -
thd 27 (work_30), proto http, state -, command -
thd 28 (work_31), proto http, state net_read, command -
--- Totally 32 threads, and 29 client-working threads ---
------- CRASH DUMP END -------
[Tue Oct 10 11:47:48.254 2023] [1] starting daemon version '6.2.13 edac585@231009 dev (columnar 2.2.5 b8be4eb@230928) (secondary 2.2.5 b8be4eb@230928)' ...


@sanikolaev
Copy link
Collaborator

Finally, I could reproduce it more or less stably without Docker. Here's how to do it on dev2:

Take the reference data dir (13G) on which it crashes stably:

cd /home/snikolaev/issue_1458/repro/
rm -fr data
sudo cp -ar ../data .
sudo chown snikolaev:snikolaev -R data

(most likely you don't have to do this every time, but if it stops crashing at some point, you know where the reference data dir is located)

Start Manticore (latest dev):

snikolaev@dev2:~/issue_1458/repro$ searchd -c manticore.conf --nodetach
Manticore 6.2.13 590d63bf8@231102 dev (columnar 2.2.5 b8be4eb@230928) (secondary 2.2.5 b8be4eb@230928)
Copyright (c) 2001-2016, Andrew Aksyonoff
Copyright (c) 2008-2016, Sphinx Technologies Inc (http://sphinxsearch.com)
Copyright (c) 2017-2023, Manticore Software LTD (https://manticoresearch.com)

[02:35.582] [3087234] using config file '/home/snikolaev/issue_1458/repro/manticore.conf' (159 chars)...
starting daemon version '6.2.13 590d63bf8@231102 dev (columnar 2.2.5 b8be4eb@230928) (secondary 2.2.5 b8be4eb@230928)' ...
listening on 127.0.0.1:49308 for sphinx and http(s)
precaching table 'posts_idx'
WARNING: table 'posts_idx': table 'posts_idx': morphology option changed from config has no effect, ignoring
precached 1 tables in 0.144 sec
prereading 1 tables
accepting connections
preread 1 tables in 0.019 sec
[BUDDY] started v2.0.3 '/usr/share/manticore/modules/manticore-buddy/bin/manticore-buddy --listen=http://127.0.0.1:49308  --threads=32' at http://127.0.0.1:33791
[BUDDY] Loaded plugins:
  core: empty-string, backup, emulate-elastic, insert, alias, select, show, cli-table, plugin, test, insert-mva, create-table
  local:
  extra: update-text

Run the REPLACE load in a loop: 50 concurrent replaces, then a 30-second pause:

while :; do
  # start 50 concurrent bulk REPLACE requests in the background
  for n in `seq 1 50`; do
    curl -sX POST http://localhost:49308/bulk -H "Content-Type: application/x-ndjson" --data-binary @replace > /dev/null &
  done
  # wait until all of them have finished
  while :; do
    running_jobs=$(jobs -r | wc -l)
    echo "$running_jobs jobs running"
    if [ $running_jobs -eq 0 ]; then break; fi
    sleep 1
  done
  echo "sleeping"
  sleep 30
done
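
The `replace` file itself is not attached to this issue. Purely as an illustration of the format the /bulk endpoint expects (newline-delimited JSON, one operation per line), a hypothetical payload could look like the sketch below; the ids, field names and values are guesses based on the search query used later in this thread, not the real schema:

# hypothetical ndjson payload -- the actual `replace` file from the repro is not shown here
cat > replace <<'EOF'
{"replace": {"index": "posts_idx", "id": 1, "doc": {"content": "lorem ipsum dolor", "posted": 1698747276, "uploaded_at": 1698747276, "is_blogger": 1, "source_id": [1, 2]}}}
{"replace": {"index": "posts_idx", "id": 2, "doc": {"content": "sit amet consectetur", "posted": 1698747277, "uploaded_at": 1698747277, "is_blogger": 0, "source_id": [3]}}}
EOF
curl -sX POST http://localhost:49308/bulk -H "Content-Type: application/x-ndjson" --data-binary @replace

Each line has to be a complete JSON object terminated by a newline; with "replace", an existing document with the same id is overwritten, which matches the REPLACE load described above.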

At the same time, run the SELECT load, keeping 50 concurrent selects running at all times:

while :; do running_jobs=$(jobs -r | wc -l); echo "$running_jobs jobs running"; jobs_to_start=$((50 - running_jobs)); if [ $jobs_to_start -gt 0 ]; then for n in $(seq 1 $jobs_to_start); do curl -sX POST http://localhost:49308/search -d '{"index": "posts_idx", "query": {"bool": {"must": [{"range": {"uploaded_at": {"gte": 1698747276}}}, {"equals": {"is_blogger": 1}}, {"in": {"any(source_id)": [0, 1, 2, 3, 4]}}, {"bool": {"should": [{"query_string": "\"esse\""}, {"query_string": "\"praesentium\""},  {"query_string": "\"natus\""}, {"query_string": "\"deleniti\""}, {"query_string": "\"quod\""}, {"query_string": "\"optio\""}, {"query_string": "\"nam\""}, {"query_string": "\"adipisci\""}, {"query_string": "\"maxime\""}, {"query_string": "\"sint\""}]}}], "must_not": [{"query_string": "\"some_exclude_words\""}, {"query_string": "\"some_exclude_words\""}, {"query_string": "\"some_exclude_words\""}, {"query_string": "\"some_exclude_words\""}, {"query_string": "\"some_exclude_words\""}, {"query_string": "\"some_exclude_words\""}, {"query_string": "\"some_exclude_words\""}, {"query_string": "\"some_exclude_words\""}, {"query_string": "\"some_exclude_words\""}, {"query_string": "\"some_exclude_words\""}]}}, "limit": 10000, "offset": 0, "sort": ["posted"], "options": {"max_matches": 50000}, "highlight": {"limit": 50000}}' > /dev/null & done; fi; sleep 1; done

Wait a couple of minutes until it crashes:

Crash!!! Handling signal 11
Segmentation fault (core dumped)

Inspect the log:

snikolaev@dev2:~/issue_1458/repro$ tail -n 30 searchd.log
searchd(_ZZN7Threads11CoRoutine_c13CreateContextESt8functionIFvvEESt4pairIN5boost7context13stack_contextENS_14StackFlavour_EEEENUlNS6_6detail10transfer_tEE_8__invokeESB_+0x1c)[0x5601a5337e1c]
searchd(make_fcontext+0x37)[0x5601a5358597]
Trying boost backtrace:
 0# sphBacktrace(int, bool) in searchd
 1# CrashLogger::HandleCrash(int) in searchd
 2# 0x00007FE12704D520 in /lib/x86_64-linux-gnu/libc.so.6
 3# Expr_Highlight_c::RearrangeFetchedFields(DocstoreDoc_t const&) const in searchd
 4# Expr_Highlight_c::StringEval(CSphMatch const&, unsigned char const**) const in searchd
 5# ISphExpr::StringEvalPacked(CSphMatch const&) const in searchd
 6# 0x00005601A408777C in searchd
 7# 0x00005601A408767C in searchd
 8# MinimizeAggrResult(AggrResult_t&, CSphQuery const&, bool, sph::StringSet const&, QueryProfile_c*, CSphFilterSettings const*, bool, bool) in searchd
 9# SearchHandler_c::RunSubset(int, int) in searchd
10# SearchHandler_c::RunQueries() in searchd
11# HttpSearchHandler_c::Process() in searchd
12# ProcessHttpQuery(CharStream_c&, std::pair<char const*, int>&, CSphOrderedHash<CSphString, CSphString, CSphStrHashFunc, 256>&, sph::Vector_T<unsigned char, sph::DefaultCopy_T<unsigned char>, sph::DefaultRelimit, sph::DefaultStorage_T<unsigned char> >&, bool, http_method) in searchd
13# HttpRequestParser_c::ProcessClientHttp(AsyncNetInputBuffer_c&, sph::Vector_T<unsigned char, sph::DefaultCopy_T<unsigned char>, sph::DefaultRelimit, sph::DefaultStorage_T<unsigned char> >&) in searchd
14# HttpServe(std::unique_ptr<AsyncNetBuffer_c, std::default_delete<AsyncNetBuffer_c> >) in searchd
15# MultiServe(std::unique_ptr<AsyncNetBuffer_c, std::default_delete<AsyncNetBuffer_c> >, std::pair<int, unsigned short>, Proto_e) in searchd
16# 0x00005601A3FB4082 in searchd
17# Threads::CoRoutine_c::CreateContext(std::function<void ()>, std::pair<boost::context::stack_context, Threads::StackFlavour_E>)::{lambda(boost::context::detail::transfer_t)#1}::__invoke(boost::context::detail::transfer_t) in searchd
18# make_fcontext in searchd

-------------- backtrace ends here ---------------
Please, create a bug report in our bug tracker (https://github.com/manticoresoftware/manticore/issues)
and attach there:
a) searchd log, b) searchd binary, c) searchd symbols.
Look into the chapter 'Reporting bugs' in the manual
(https://manual.manticoresearch.com/Reporting_bugs)
Dump with GDB via watchdog

@starinacool
Copy link

Six years since the fork and it is still a buggy mess. Same problem here. I was thinking of migrating from Sphinx.

@starinacool
Copy link

Inserted rows in batches of 1000 per transaction. Result:

------- FATAL: CRASH DUMP -------
[Mon Nov 20 08:41:55.884 2023] [  754]

--- crashed SphinxQL request dump ---
start transaction
--- request dump end ---
--- local index:7<C0>[<82>A^?^@^@
Manticore 6.2.12 dc5144d35@230822 (columnar 2.2.4 5aec342@230822) (secondary 2.2.4 5aec342@230822)
Handling signal 11
-------------- backtrace begins here ---------------
Program compiled with Clang 15.0.7
Configured with flags: Configured with these definitions: -DDISTR_BUILD=bookworm -DUSE_SYSLOG=1 -DWITH_GALERA=1 -DWITH_RE2=1 -DWITH_RE2_FORCE_STATIC=1 -DWITH_STEMMER=1 -DWITH_STEMMER_FORCE_STATIC=1 -DWITH_NLJSON=1 -DWITH_UNIALGO=1 -DWITH_ICU=1 -DWITH_ICU_FORCE_STATIC=1 -DWITH_SSL=1 -DWITH_ZLIB=1 -DWITH_ZSTD=1 -DDL_ZSTD=1 -DZSTD_LIB=libzstd.so.1 -DWITH_CURL=1 -DDL_CURL=1 -DCURL_LIB=libcurl.so.4 -DWITH_ODBC=1 -DDL_ODBC=1 -DODBC_LIB=libodbc.so.2 -DWITH_EXPAT=1 -DDL_EXPAT=1 -DEXPAT_LIB=libexpat.so.1 -DWITH_ICONV=1 -DWITH_MYSQL=1 -DDL_MYSQL=1 -DMYSQL_LIB=libmariadb.so.3 -DWITH_POSTGRESQL=1 -DDL_POSTGRESQL=1 -DPOSTGRESQL_LIB=libpq.so.5 -DLOCALDATADIR=/var/lib/manticore -DFULL_SHARE_DIR=/usr/share/manticore
Built on Linux x86_64 (bookworm) (cross-compiled)
Stack bottom = 0x7f467661d270, thread stack size = 0x20000
Trying manual backtrace:
Something wrong with thread stack, manual backtrace may be incorrect (fp=0x1)
Wrong stack limit or frame pointer, manual backtrace failed (fp=0x1, stack=0x7f4676620000, stacksize=0x20000)
Trying system backtrace:
begin of system symbols:
/usr/bin/searchd(_Z12sphBacktraceib+0x22a)[0x55ec6fe93e0a]
/usr/bin/searchd(_ZN11CrashLogger11HandleCrashEi+0x355)[0x55ec6fd126c5]
/lib/x86_64-linux-gnu/libc.so.6(+0x3bfd0)[0x7f468165afd0]
/usr/bin/searchd(_ZN13CSphIndex_VLN10MergeWordsI16DiskIndexQword_cILb1ELb0EES2_EEbPKS_S4_11VecTraits_TIjES6_P14CSphHitBuilderR10CSphStringR17CSphIndexProgress+0xe16)[0x55ec6fe4cef6]
/usr/bin/searchd(_ZN13CSphIndex_VLN7DoMergeEPKS_S1_PK10ISphFilterR10CSphStringR17CSphIndexProgressbb+0x642)[0x55ec6fdb0ca2]
/usr/bin/searchd(_Z8sphMergePK9CSphIndexS1_11VecTraits_TI18CSphFilterSettingsER17CSphIndexProgressR10CSphString+0x72)[0x55ec6fdb1632]
/usr/bin/searchd(_ZN9RtIndex_c15MergeDiskChunksEPKcRK17CSphRefcountedPtrIK11DiskChunk_cES7_R17CSphIndexProgress11VecTraits_TI18CSphFilterSettingsE+0x65)[0x55ec70aca9a5]
/usr/bin/searchd(_ZN9RtIndex_c14MergeTwoChunksEiiPi+0x496)[0x55ec70ace7e6]
/usr/bin/searchd(_ZN9RtIndex_c19ProgressiveOptimizeEi+0x597)[0x55ec70acfc77]
/usr/bin/searchd(_ZN9RtIndex_c8OptimizeE14OptimizeTask_t+0xed)[0x55ec70acf38d]
/usr/bin/searchd(+0xda96f7)[0x55ec6fc616f7]
/usr/bin/searchd(_ZZN7Threads11CoRoutine_c13CreateContextESt8functionIFvvEESt4pairIN5boost7context13stack_contextENS_14StackFlavour_EEEENUlNS6_6detail10transfer_tEE_8__invokeESB_+0x1c)[0x55ec70fe2e8c]
/usr/bin/searchd(make_fcontext+0x2f)[0x55ec7100323f]
Trying boost backtrace:
 0# sphBacktrace(int, bool) in /usr/bin/searchd
 1# CrashLogger::HandleCrash(int) in /usr/bin/searchd
 2# 0x00007F468165AFD0 in /lib/x86_64-linux-gnu/libc.so.6
 3# bool CSphIndex_VLN::MergeWords<DiskIndexQword_c<true, false>, DiskIndexQword_c<true, false> >(CSphIndex_VLN const*, CSphIndex_VLN const*, VecTraits_T<unsigned int>, VecTraits_T<unsigned int>, CSphHitBuilder*, CSphString&, CSphIndexProgress&) in /usr/bin/searchd
 4# CSphIndex_VLN::DoMerge(CSphIndex_VLN const*, CSphIndex_VLN const*, ISphFilter const*, CSphString&, CSphIndexProgress&, bool, bool) in /usr/bin/searchd
 5# sphMerge(CSphIndex const*, CSphIndex const*, VecTraits_T<CSphFilterSettings>, CSphIndexProgress&, CSphString&) in /usr/bin/searchd
 6# RtIndex_c::MergeDiskChunks(char const*, CSphRefcountedPtr<DiskChunk_c const> const&, CSphRefcountedPtr<DiskChunk_c const> const&, CSphIndexProgress&, VecTraits_T<CSphFilterSettings>) in /usr/bin/searchd
 7# RtIndex_c::MergeTwoChunks(int, int, int*) in /usr/bin/searchd
 8# RtIndex_c::ProgressiveOptimize(int) in /usr/bin/searchd
 9# RtIndex_c::Optimize(OptimizeTask_t) in /usr/bin/searchd
10# 0x000055EC6FC616F7 in /usr/bin/searchd
11# Threads::CoRoutine_c::CreateContext(std::function<void ()>, std::pair<boost::context::stack_context, Threads::StackFlavour_E>)::{lambda(boost::context::detail::transfer_t)#1}::__invoke(boost::context::detail::transfer_t) in /usr/bin/searchd
12# make_fcontext in /usr/bin/searchd
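
For context, here is a minimal SphinxQL sketch of that kind of transactional bulk insert, run through the stock mysql client against the default 9306 listener; the table name, columns and values are placeholders, since the reporter's actual schema and statements are not shown in this issue:

# hypothetical example only -- table, columns and values are made up
mysql -h 127.0.0.1 -P 9306 -e "
START TRANSACTION;
INSERT INTO my_rt_table (id, title) VALUES (1, 'row 1'), (2, 'row 2');
COMMIT;
"

Judging by the backtrace, the crash happens not in the INSERT itself but in the background chunk merge (RtIndex_c::Optimize / MergeDiskChunks) that runs after such writes.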

@starinacool
Copy link

@glookka is there any workaround?

@sanikolaev
Copy link
Collaborator

@starinacool please try the latest dev version. As said in #1458 (comment), some issues have already been fixed there.

@sanikolaev
Copy link
Collaborator

@starinacool please try the latest dev version. As said in #1458 (comment), some issues have already been fixed there.

Oops, I've confused this issue with another one. Still, it makes sense to check whether this issue can be reproduced in the latest dev version.

@starinacool
Copy link

@sanikolaev, I've made a separate issue. Maybe it is something different: #1602

@sanikolaev sanikolaev assigned sanikolaev and unassigned glookka Dec 6, 2023
@glookka
Copy link
Contributor

glookka commented Dec 16, 2023

Fixed in d67dbe6

@glookka glookka closed this as completed Dec 16, 2023
@sanikolaev
Copy link
Collaborator

Fixed in d67dbe6

I confirm I can't reproduce the issue anymore.
