Redis-server killed by oom_killer at 19GB memory allocation #974

Open
molnaromatic opened this issue Sep 18, 2023 · 11 comments
@molnaromatic

I have a relatively small installation of the Greenbone Community Containers running on Docker (a single VM), with only 85 targets and 8 tasks.
The VM has 6 vCPUs @ 2.2 GHz and 16 GB of RAM.

When I start more than one scan in Greenbone, all scans are stopped because openvas crashes.
When the problem happens, redis-server tries to allocate more memory than the VM's entire RAM and swap combined.

Sep 18 14:11:05 ITSM02 kernel: oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=docker-65dd470fde11d33d27a502c2e9aef0d0d01dc287d15746f68807275f63050d52.scope,mems_allowed=0,global_oom,task_memcg=/system.slice/docker-1def25045ac5ffb3f939d482581ed9ce4050d5ad08c31c4f0b86d57008537e0b.scope,task=redis-server,pid=3496,uid=100
Sep 18 14:11:05 ITSM02 kernel: Out of memory: Killed process 3496 (redis-server) total-vm:19497288kB, anon-rss:12841672kB, file-rss:0kB, shmem-rss:0kB, UID:100 pgtables:33344kB oom_score_adj:0
Sep 18 14:11:06 ITSM02 kernel: oom_reaper: reaped process 3496 (redis-server), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
Sep 18 14:11:07 ITSM02 kernel: show_signal_msg: 3 callbacks suppressed
Sep 18 14:11:07 ITSM02 kernel: openvas[117580]: segfault at 0 ip 000055d92e44ffe6 sp 00007ffc8a9932c0 error 4 in openvas[55d92e44f000+9000]
Sep 18 14:11:07 ITSM02 kernel: Code: ff ff 48 8d 3d 4b 80 00 00 48 89 c3 e8 a3 f8 ff ff 89 de 48 89 c7 48 8b 05 c7 d1 00 00 ff 50 18 48 8d 35 5d 80 00 00 48 89 c3 <48> 8b 00 48 89 df ff 50 28 48 89 c5 48 8b 03 48 8b 80 d0 00 00 00
Sep 18 14:11:07 ITSM02 kernel: openvas[166925]: segfault at 0 ip 000055c611a13fe6 sp 00007ffe63419aa0 error 4 in openvas[55c611a13000+9000]
Sep 18 14:11:07 ITSM02 kernel: Code: ff ff 48 8d 3d 4b 80 00 00 48 89 c3 e8 a3 f8 ff ff 89 de 48 89 c7 48 8b 05 c7 d1 00 00 ff 50 18 48 8d 35 5d 80 00 00 48 89 c3 <48> 8b 00 48 89 df ff 50 28 48 89 c5 48 8b 03 48 8b 80 d0 00 00 00

The Redis server prints this log:

2023-09-18T09:42:39+02:00 8:M 18 Sep 2023 07:42:39.305 # WARNING Memory overcommit must be enabled! Without it, a background save or replication may fail under low memory condition. Being disabled, it can can also cause failures without low memory condition, see jemalloc/jemalloc#1328. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
2023-09-18T09:42:39+02:00 8:M 18 Sep 2023 07:42:39.347 * The server is now ready to accept connections at /run/redis/redis.sock
2023-09-18T14:11:07+02:00 Killed
2023-09-18T14:11:11+02:00 9:C 18 Sep 2023 12:11:11.537 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
2023-09-18T14:11:11+02:00 9:C 18 Sep 2023 12:11:11.537 # Redis version=7.0.11, bits=64, commit=00000000, modified=0, pid=9, just started

The vm.overcommit_memory is set in sysctl.conf like this:

vm.overcommit_ratio = 25
vm.overcommit_memory = 1

The vm.overcommit_ratio is calculated from the swap size.
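
A quick way to verify that the setting is actually in effect (it has to be applied on the VM itself, since the containers share the host kernel); just a sketch assuming shell access to the VM:

sysctl vm.overcommit_memory          # expected: vm.overcommit_memory = 1
cat /proc/sys/vm/overcommit_memory   # expected: 1
# Note: the kernel only consults vm.overcommit_ratio in mode 2 (strict accounting);
# with vm.overcommit_memory = 1 the ratio has no effect.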

After Redis is killed, the openvas log shows:

2023-09-18T12:36:26+02:00 OSPD[8] 2023-09-18 10:36:26,308: INFO: (ospd.ospd) Starting scan 539f2dae-6495-422e-b043-f0ab4c9fdb44.
2023-09-18T12:42:52+02:00 Oops, ksba_cert_get_image failed: imagelen=51 hdr=4 len=735 off=0
2023-09-18T12:42:52+02:00 Oops, ksba_cert_get_image failed: imagelen=51 hdr=4 len=735 off=0
2023-09-18T12:42:52+02:00 Oops, ksba_cert_get_image failed: imagelen=51 hdr=4 len=735 off=0
2023-09-18T12:42:52+02:00 Oops, ksba_cert_get_image failed: imagelen=51 hdr=4 len=735 off=0
2023-09-18T12:42:52+02:00 Oops, ksba_cert_get_image failed: imagelen=51 hdr=4 len=735 off=0
2023-09-18T12:54:01+02:00 Oops, ksba_cert_get_image failed: imagelen=51 hdr=4 len=735 off=0
2023-09-18T12:56:02+02:00 Oops, ksba_cert_get_image failed: imagelen=51 hdr=4 len=735 off=0
2023-09-18T12:56:02+02:00 Oops, ksba_cert_get_image failed: imagelen=51 hdr=4 len=735 off=0
2023-09-18T12:56:03+02:00 Oops, ksba_cert_get_image failed: imagelen=51 hdr=4 len=735 off=0
2023-09-18T12:56:03+02:00 Oops, ksba_cert_get_image failed: imagelen=51 hdr=4 len=735 off=0
2023-09-18T12:56:03+02:00 Oops, ksba_cert_get_image failed: imagelen=51 hdr=4 len=735 off=0
2023-09-18T12:56:03+02:00 Oops, ksba_cert_get_image failed: imagelen=51 hdr=4 len=735 off=0
2023-09-18T12:56:05+02:00 Oops, ksba_cert_get_image failed: imagelen=51 hdr=4 len=735 off=0
2023-09-18T13:11:53+02:00 OSPD[8] 2023-09-18 11:11:53,367: INFO: (ospd.command.command) Scan 217bd3d9-a4bf-4bca-80f2-e972aff67d5f added to the queue in position 2.
2023-09-18T13:11:56+02:00 OSPD[8] 2023-09-18 11:11:56,580: INFO: (ospd.ospd) Currently 1 queued scans.
2023-09-18T13:11:56+02:00 OSPD[8] 2023-09-18 11:11:56,784: INFO: (ospd.ospd) Starting scan 217bd3d9-a4bf-4bca-80f2-e972aff67d5f.
2023-09-18T14:11:07+02:00 Traceback (most recent call last):
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/connection.py", line 493, in read_response
2023-09-18T14:11:07+02:00 response = self._parser.read_response(disable_decoding=disable_decoding)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/_parsers/resp2.py", line 15, in read_response
2023-09-18T14:11:07+02:00 result = self._read_response(disable_decoding=disable_decoding)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/_parsers/resp2.py", line 25, in _read_response
2023-09-18T14:11:07+02:00 raw = self._buffer.readline()
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/_parsers/socket.py", line 115, in readline
2023-09-18T14:11:07+02:00 self._read_from_socket()
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/_parsers/socket.py", line 65, in _read_from_socket
2023-09-18T14:11:07+02:00 data = self._sock.recv(socket_read_size)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 ConnectionResetError: [Errno 104] Connection reset by peer
2023-09-18T14:11:07+02:00 During handling of the above exception, another exception occurred:
2023-09-18T14:11:07+02:00 Traceback (most recent call last):
2023-09-18T14:11:07+02:00 File "/usr/local/bin/ospd-openvas", line 8, in
2023-09-18T14:11:07+02:00 sys.exit(main())
2023-09-18T14:11:07+02:00 ^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd_openvas/daemon.py", line 1244, in main
2023-09-18T14:11:07+02:00 daemon_main('OSPD - openvas', OSPDopenvas, NotusParser())
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd/main.py", line 153, in main
2023-09-18T14:11:07+02:00 daemon.run()
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd/ospd.py", line 1103, in run
2023-09-18T14:11:07+02:00 self.scheduler()
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd_openvas/daemon.py", line 703, in scheduler
2023-09-18T14:11:07+02:00 self.check_feed()
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd_openvas/daemon.py", line 677, in check_feed
2023-09-18T14:11:07+02:00 current_feed = self.nvti.get_feed_version()
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd_openvas/nvticache.py", line 70, in get_feed_version
2023-09-18T14:11:07+02:00 return OpenvasDB.get_single_item(self.ctx, NVTI_CACHE_NAME)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd_openvas/db.py", line 268, in get_single_item
2023-09-18T14:11:07+02:00 return ctx.lindex(name, index)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/commands/core.py", line 2685, in lindex
2023-09-18T14:11:07+02:00 return self.execute_command("LINDEX", name, index)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/client.py", line 508, in execute_command
2023-09-18T14:11:07+02:00 return conn.retry.call_with_retry(
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/retry.py", line 49, in call_with_retry
2023-09-18T14:11:07+02:00 fail(error)
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/client.py", line 512, in
2023-09-18T14:11:07+02:00 lambda error: self._disconnect_raise(conn, error),
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/client.py", line 498, in _disconnect_raise
2023-09-18T14:11:07+02:00 raise error
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/retry.py", line 46, in call_with_retry
2023-09-18T14:11:07+02:00 return do()
2023-09-18T14:11:07+02:00 ^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/client.py", line 509, in
2023-09-18T14:11:07+02:00 lambda: self._send_command_parse_response(
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/client.py", line 485, in _send_command_parse_response
2023-09-18T14:11:07+02:00 return self.parse_response(conn, command_name, **options)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/client.py", line 525, in parse_response
2023-09-18T14:11:07+02:00 response = connection.read_response()
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/connection.py", line 501, in read_response
2023-09-18T14:11:07+02:00 raise ConnectionError(
2023-09-18T14:11:07+02:00 redis.exceptions.ConnectionError: Error while reading from /run/redis/redis.sock : (104, 'Connection reset by peer')
2023-09-18T14:11:07+02:00 OSPD[8] 2023-09-18 12:11:07,017: ERROR: (ospd.ospd) 217bd3d9-a4bf-4bca-80f2-e972aff67d5f: Exception Error while reading from /run/redis/redis.sock : (104, 'Connection reset by peer') while scanning
2023-09-18T14:11:07+02:00 Traceback (most recent call last):
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/connection.py", line 493, in read_response
2023-09-18T14:11:07+02:00 response = self._parser.read_response(disable_decoding=disable_decoding)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/_parsers/resp2.py", line 15, in read_response
2023-09-18T14:11:07+02:00 result = self._read_response(disable_decoding=disable_decoding)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/_parsers/resp2.py", line 25, in _read_response
2023-09-18T14:11:07+02:00 raw = self._buffer.readline()
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/_parsers/socket.py", line 115, in readline
2023-09-18T14:11:07+02:00 self._read_from_socket()
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/_parsers/socket.py", line 65, in _read_from_socket
2023-09-18T14:11:07+02:00 data = self._sock.recv(socket_read_size)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 ConnectionResetError: [Errno 104] Connection reset by peer
2023-09-18T14:11:07+02:00 During handling of the above exception, another exception occurred:
2023-09-18T14:11:07+02:00 Traceback (most recent call last):
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd/ospd.py", line 584, in start_scan
2023-09-18T14:11:07+02:00 self.exec_scan(scan_id)
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd_openvas/daemon.py", line 1174, in exec_scan
2023-09-18T14:11:07+02:00 target_is_finished = kbdb.target_is_finished(scan_id)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd_openvas/db.py", line 576, in target_is_finished
2023-09-18T14:11:07+02:00 status = self._get_single_item(f'internal/{scan_id}')
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd_openvas/db.py", line 470, in _get_single_item
2023-09-18T14:11:07+02:00 return OpenvasDB.get_single_item(self.ctx, name)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd_openvas/db.py", line 268, in get_single_item
2023-09-18T14:11:07+02:00 return ctx.lindex(name, index)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/commands/core.py", line 2685, in lindex
2023-09-18T14:11:07+02:00 return self.execute_command("LINDEX", name, index)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/client.py", line 508, in execute_command
2023-09-18T14:11:07+02:00 return conn.retry.call_with_retry(
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/retry.py", line 49, in call_with_retry
2023-09-18T14:11:07+02:00 fail(error)
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/client.py", line 512, in
2023-09-18T14:11:07+02:00 lambda error: self._disconnect_raise(conn, error),
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/client.py", line 498, in _disconnect_raise
2023-09-18T14:11:07+02:00 raise error
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/retry.py", line 46, in call_with_retry
2023-09-18T14:11:07+02:00 return do()
2023-09-18T14:11:07+02:00 ^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/client.py", line 509, in
2023-09-18T14:11:07+02:00 lambda: self._send_command_parse_response(
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/client.py", line 485, in _send_command_parse_response
2023-09-18T14:11:07+02:00 return self.parse_response(conn, command_name, **options)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/client.py", line 525, in parse_response
2023-09-18T14:11:07+02:00 response = connection.read_response()
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/connection.py", line 501, in read_response
2023-09-18T14:11:07+02:00 raise ConnectionError(
2023-09-18T14:11:07+02:00 redis.exceptions.ConnectionError: Error while reading from /run/redis/redis.sock : (104, 'Connection reset by peer')
2023-09-18T14:11:07+02:00 Process Process-4:
2023-09-18T14:11:07+02:00 OSPD[8] 2023-09-18 12:11:07,166: ERROR: (ospd.ospd) 539f2dae-6495-422e-b043-f0ab4c9fdb44: Exception Error while reading from /run/redis/redis.sock : (104, 'Connection reset by peer') while scanning
2023-09-18T14:11:07+02:00 Traceback (most recent call last):
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/connection.py", line 493, in read_response
2023-09-18T14:11:07+02:00 response = self._parser.read_response(disable_decoding=disable_decoding)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/_parsers/resp2.py", line 15, in read_response
2023-09-18T14:11:07+02:00 result = self._read_response(disable_decoding=disable_decoding)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/_parsers/resp2.py", line 25, in _read_response
2023-09-18T14:11:07+02:00 raw = self._buffer.readline()
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/_parsers/socket.py", line 115, in readline
2023-09-18T14:11:07+02:00 self._read_from_socket()
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/_parsers/socket.py", line 65, in _read_from_socket
2023-09-18T14:11:07+02:00 data = self._sock.recv(socket_read_size)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 ConnectionResetError: [Errno 104] Connection reset by peer
2023-09-18T14:11:07+02:00 During handling of the above exception, another exception occurred:
2023-09-18T14:11:07+02:00 Traceback (most recent call last):
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd/ospd.py", line 584, in start_scan
2023-09-18T14:11:07+02:00 self.exec_scan(scan_id)
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd_openvas/daemon.py", line 1174, in exec_scan
2023-09-18T14:11:07+02:00 target_is_finished = kbdb.target_is_finished(scan_id)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd_openvas/db.py", line 576, in target_is_finished
2023-09-18T14:11:07+02:00 status = self._get_single_item(f'internal/{scan_id}')
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd_openvas/db.py", line 470, in _get_single_item
2023-09-18T14:11:07+02:00 return OpenvasDB.get_single_item(self.ctx, name)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd_openvas/db.py", line 268, in get_single_item
2023-09-18T14:11:07+02:00 return ctx.lindex(name, index)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/commands/core.py", line 2685, in lindex
2023-09-18T14:11:07+02:00 return self.execute_command("LINDEX", name, index)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/client.py", line 508, in execute_command
2023-09-18T14:11:07+02:00 return conn.retry.call_with_retry(
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/retry.py", line 49, in call_with_retry
2023-09-18T14:11:07+02:00 fail(error)
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/client.py", line 512, in
2023-09-18T14:11:07+02:00 lambda error: self._disconnect_raise(conn, error),
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/client.py", line 498, in _disconnect_raise
2023-09-18T14:11:07+02:00 raise error
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/retry.py", line 46, in call_with_retry
2023-09-18T14:11:07+02:00 return do()
2023-09-18T14:11:07+02:00 ^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/client.py", line 509, in
2023-09-18T14:11:07+02:00 lambda: self._send_command_parse_response(
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/client.py", line 485, in _send_command_parse_response
2023-09-18T14:11:07+02:00 return self.parse_response(conn, command_name, **options)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/client.py", line 525, in parse_response
2023-09-18T14:11:07+02:00 response = connection.read_response()
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/connection.py", line 501, in read_response
2023-09-18T14:11:07+02:00 raise ConnectionError(
2023-09-18T14:11:07+02:00 redis.exceptions.ConnectionError: Error while reading from /run/redis/redis.sock : (104, 'Connection reset by peer')
2023-09-18T14:11:07+02:00 Traceback (most recent call last):
2023-09-18T14:11:07+02:00 File "/usr/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap
2023-09-18T14:11:07+02:00 self.run()
2023-09-18T14:11:07+02:00 File "/usr/lib/python3.11/multiprocessing/process.py", line 108, in run
2023-09-18T14:11:07+02:00 self._target(*self._args, **self._kwargs)
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd/ospd.py", line 598, in start_scan
2023-09-18T14:11:07+02:00 self.set_scan_progress(scan_id)
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd/ospd.py", line 672, in set_scan_progress
2023-09-18T14:11:07+02:00 self._get_scan_progress_raw(scan_id)
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd/ospd.py", line 814, in _get_scan_progress_raw
2023-09-18T14:11:07+02:00 current_progress['overall'] = self.get_scan_progress(scan_id)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd/ospd.py", line 1342, in get_scan_progress
2023-09-18T14:11:07+02:00 progress = self.scan_collection.get_progress(scan_id)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd/scan.py", line 370, in get_progress
2023-09-18T14:11:07+02:00 return self.scans_table[scan_id].get('progress', ScanProgress.INIT)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "", line 2, in get
2023-09-18T14:11:07+02:00 File "/usr/lib/python3.11/multiprocessing/managers.py", line 822, in _callmethod
2023-09-18T14:11:07+02:00 kind, result = conn.recv()
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/lib/python3.11/multiprocessing/connection.py", line 249, in recv
2023-09-18T14:11:07+02:00 buf = self._recv_bytes()
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/lib/python3.11/multiprocessing/connection.py", line 413, in _recv_bytes
2023-09-18T14:11:07+02:00 buf = self._recv(4)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/lib/python3.11/multiprocessing/connection.py", line 378, in _recv
2023-09-18T14:11:07+02:00 chunk = read(handle, remaining)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 ConnectionResetError: [Errno 104] Connection reset by peer
2023-09-18T14:11:08+02:00 Process Process-3:
2023-09-18T14:11:08+02:00 Traceback (most recent call last):
2023-09-18T14:11:08+02:00 File "/usr/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap
2023-09-18T14:11:08+02:00 self.run()
2023-09-18T14:11:08+02:00 File "/usr/lib/python3.11/multiprocessing/process.py", line 108, in run
2023-09-18T14:11:08+02:00 self._target(*self._args, **self._kwargs)
2023-09-18T14:11:08+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd/ospd.py", line 598, in start_scan
2023-09-18T14:11:08+02:00 self.set_scan_progress(scan_id)
2023-09-18T14:11:08+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd/ospd.py", line 672, in set_scan_progress
2023-09-18T14:11:08+02:00 self._get_scan_progress_raw(scan_id)
2023-09-18T14:11:08+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd/ospd.py", line 814, in _get_scan_progress_raw
2023-09-18T14:11:08+02:00 current_progress['overall'] = self.get_scan_progress(scan_id)
2023-09-18T14:11:08+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:08+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd/ospd.py", line 1342, in get_scan_progress
2023-09-18T14:11:08+02:00 progress = self.scan_collection.get_progress(scan_id)
2023-09-18T14:11:08+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:08+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd/scan.py", line 370, in get_progress
2023-09-18T14:11:08+02:00 return self.scans_table[scan_id].get('progress', ScanProgress.INIT)
2023-09-18T14:11:08+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:08+02:00 File "", line 2, in get
2023-09-18T14:11:08+02:00 File "/usr/lib/python3.11/multiprocessing/managers.py", line 822, in _callmethod
2023-09-18T14:11:08+02:00 kind, result = conn.recv()
2023-09-18T14:11:08+02:00 ^^^^^^^^^^^
2023-09-18T14:11:08+02:00 File "/usr/lib/python3.11/multiprocessing/connection.py", line 249, in recv
2023-09-18T14:11:08+02:00 buf = self._recv_bytes()
2023-09-18T14:11:08+02:00 ^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:08+02:00 File "/usr/lib/python3.11/multiprocessing/connection.py", line 413, in _recv_bytes
2023-09-18T14:11:08+02:00 buf = self._recv(4)
2023-09-18T14:11:08+02:00 ^^^^^^^^^^^^^
2023-09-18T14:11:08+02:00 File "/usr/lib/python3.11/multiprocessing/connection.py", line 378, in _recv
2023-09-18T14:11:08+02:00 chunk = read(handle, remaining)
2023-09-18T14:11:08+02:00 ^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:08+02:00 ConnectionResetError: [Errno 104] Connection reset by peer
2023-09-18T14:11:08+02:00 Exception ignored in atexit callback: <function exit_cleanup at 0x7f4530af9120>
2023-09-18T14:11:08+02:00 Traceback (most recent call last):
2023-09-18T14:11:08+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd/main.py", line 69, in exit_cleanup
2023-09-18T14:11:08+02:00 daemon.daemon_exit_cleanup()
2023-09-18T14:11:08+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd/ospd.py", line 448, in daemon_exit_cleanup
2023-09-18T14:11:08+02:00 self.scan_collection.clean_up_pickled_scan_info()
2023-09-18T14:11:08+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd/scan.py", line 252, in clean_up_pickled_scan_info
2023-09-18T14:11:08+02:00 if self.get_status(scan_id) == ScanStatus.QUEUED:
2023-09-18T14:11:08+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:08+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd/scan.py", line 352, in get_status
2023-09-18T14:11:08+02:00 status = self.scans_table.get(scan_id, {}).get('status', None)
2023-09-18T14:11:08+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:08+02:00 File "", line 2, in get
2023-09-18T14:11:08+02:00 File "/usr/lib/python3.11/multiprocessing/managers.py", line 818, in _callmethod
2023-09-18T14:11:08+02:00 self._connect()
2023-09-18T14:11:08+02:00 File "/usr/lib/python3.11/multiprocessing/managers.py", line 805, in _connect
2023-09-18T14:11:08+02:00 conn = self._Client(self._token.address, authkey=self._authkey)
2023-09-18T14:11:08+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:08+02:00 File "/usr/lib/python3.11/multiprocessing/connection.py", line 501, in Client
2023-09-18T14:11:08+02:00 c = SocketClient(address)
2023-09-18T14:11:08+02:00 ^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:08+02:00 File "/usr/lib/python3.11/multiprocessing/connection.py", line 629, in SocketClient
2023-09-18T14:11:08+02:00 s.connect(address)
2023-09-18T14:11:08+02:00 FileNotFoundError: [Errno 2] No such file or directory
2023-09-18T14:11:11+02:00 [WARN tini (7)] Tini is not running as PID 1 and isn't registered as a child subreaper.
2023-09-18T14:11:11+02:00 Zombie processes will not be re-parented to Tini, so zombie reaping won't work.
2023-09-18T14:11:11+02:00 To fix the problem, use the -s option or set the environment variable TINI_SUBREAPER to register Tini as a child subreaper, or run Tini as PID 1.
2023-09-18T14:11:15+02:00 OSPD[8] 2023-09-18 12:11:15,495: INFO: (ospd.main) Starting OSPd OpenVAS version 22.6.0.
2023-09-18T14:11:15+02:00 OSPD[8] 2023-09-18 12:11:15,513: INFO: (ospd_openvas.messaging.mqtt) Successfully connected to MQTT broker
2023-09-18T14:11:25+02:00 OSPD[8] 2023-09-18 12:11:25,658: INFO: (ospd_openvas.daemon) Loading VTs. Scans will be [requested|queued] until VTs are loaded. This may take a few minutes, please wait...
2023-09-18T14:13:30+02:00 OSPD[8] 2023-09-18 12:13:30,122: INFO: (ospd_openvas.daemon) Finished loading VTs. The VT cache has been updated from version 0 to 202309150621.
2023-09-18T14:32:45+02:00 OSPD[8] 2023-09-18 12:32:45,567: INFO: (ospd.command.command) Scan 02317102-1868-48cd-882b-07772ac65ee3 added to the queue in position 2.
2023-09-18T14:32:50+02:00 OSPD[8] 2023-09-18 12:32:50,956: INFO: (ospd.ospd) Currently 1 queued scans.
2023-09-18T14:32:51+02:00 OSPD[8] 2023-09-18 12:32:51,172: INFO: (ospd.ospd) Starting scan 02317102-1868-48cd-882b-07772ac65ee3.
2023-09-18T14:39:34+02:00 Oops, ksba_cert_get_image failed: imagelen=51 hdr=4 len=735 off=0

@wdoekes

wdoekes commented Mar 2, 2024

Seconded. I got these stats from Redis just before it got killed again. The scan was interrupted at 80% of a /21.

(18:00)                                                     (18:10) oom-kill at 18:11
db0:keys=1,expires=0,avg_ttl=0                           |  db0:keys=1,expires=0,avg_ttl=0
db1:keys=47701,expires=0,avg_ttl=0                       |  db1:keys=47701,expires=0,avg_ttl=0
db2:keys=177456,expires=0,avg_ttl=0                      |  db2:keys=177456,expires=0,avg_ttl=0
db3:keys=6,expires=0,avg_ttl=0                           |  db3:keys=5,expires=0,avg_ttl=0
db4:keys=100,expires=0,avg_ttl=0                         |  db4:keys=99,expires=0,avg_ttl=0
db5:keys=17,expires=0,avg_ttl=0                          |  db5:keys=17,expires=0,avg_ttl=0
db6:keys=17,expires=0,avg_ttl=0                          |  db6:keys=17,expires=0,avg_ttl=0
db7:keys=456,expires=0,avg_ttl=0                         |  db7:keys=99,expires=0,avg_ttl=0
db8:keys=99,expires=0,avg_ttl=0                          |  db8:keys=17,expires=0,avg_ttl=0
db9:keys=10344,expires=0,avg_ttl=0                       |  db9:keys=14963,expires=0,avg_ttl=0
db10:keys=5726,expires=0,avg_ttl=0                       |  --------------------------------------------
db11:keys=17,expires=0,avg_ttl=0                         |  db11:keys=17,expires=0,avg_ttl=0
db12:keys=99,expires=0,avg_ttl=0                         |  db12:keys=17,expires=0,avg_ttl=0
db13:keys=10342,expires=0,avg_ttl=0                      |  db13:keys=15038,expires=0,avg_ttl=0
db14:keys=98,expires=0,avg_ttl=0                         |  db14:keys=5501,expires=0,avg_ttl=0
db15:keys=75,expires=0,avg_ttl=0                         |  db15:keys=203,expires=0,avg_ttl=0
db16:keys=497,expires=0,avg_ttl=0                        |  db16:keys=5682,expires=0,avg_ttl=0
---------------------------------------------------------|  db17:keys=99,expires=0,avg_ttl=0
db18:keys=99,expires=0,avg_ttl=0                         |  db18:keys=99,expires=0,avg_ttl=0
db19:keys=5657,expires=0,avg_ttl=0                       |  db19:keys=8716,expires=0,avg_ttl=0
db20:keys=479,expires=0,avg_ttl=0                        |  db20:keys=6856,expires=0,avg_ttl=0
db21:keys=17,expires=0,avg_ttl=0                         |  db21:keys=17,expires=0,avg_ttl=0
db22:keys=17,expires=0,avg_ttl=0                         |  db22:keys=17,expires=0,avg_ttl=0
db23:keys=202,expires=0,avg_ttl=0                        |  db23:keys=99,expires=0,avg_ttl=0
db24:keys=17,expires=0,avg_ttl=0                         |  db24:keys=17,expires=0,avg_ttl=0


@wdoekes

wdoekes commented Mar 2, 2024

Okay. I've been looking at it some more, and I saw a lot of memory usage even though redis-server had only been killed a moment ago.

I dropped the db that seemed to be transient:

-server_time_usec:1709405329080351
-uptime_in_seconds:588
+server_time_usec:1709406156005681
+uptime_in_seconds:1415
...
 # Memory
-used_memory:3344099096
-used_memory_human:3.11G
-used_memory_rss:3001507840
-used_memory_rss_human:2.80G
+used_memory:171441592
+used_memory_human:163.50M
+used_memory_rss:194514944
+used_memory_rss_human:185.50M
...

 # Keyspace
 db0:keys=1,expires=0,avg_ttl=0
 db1:keys=177456,expires=0,avg_ttl=0
-db6:keys=3468,expires=0,avg_ttl=0

With just 3400 entries in db6, it was apparently using those 2GB.

The entries consisted of:

$ sudo docker exec -it greenbone-community-container_redis-server_1 redis-cli -s /run/redis/redis.sock -n 6
redis /run/redis/redis.sock[6]> keys *
   1) "Cache/node.example.com/8200/excluding_404_body/URL_/ui/vcav-bootstrap/rest/vcav-providers/config.neon"
   2) "Cache/node.example.com/8200/excluding_404_body/URL_/ui/vcav-bootstrap/rest/WEB-INF/local.properties"
   3) "Cache/node.example.com/8200/excluding_404_body/URL_/ui/vropspluginui/rest/services/.env.example"
   4) "Cache/node.example.com/8200/excluding_404_body/URL_/ui/vcav-bootstrap/rest/vcav-providers/local.properties"
...
3466) "Cache/node.example.com/8200/excluding_404_body/URL_/ui/psc-ui/resources/core/config/databases.yml"
3467) "Cache/node.example.com/8200/excluding_404_body/URL_/ui/vcav-bootstrap/rest/vcav-providers/example.key"
3468) "Cache/node.example.com/8200/excluding_404_body/URL_/ui/h5-vsan/rest/proxy/service/phinx.yml"

Not sure what was in those values, but they were clearly big enough.
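
For reference, the size of one of those cached values can be checked directly (same container and socket as above; STRLEN gives the raw value length in bytes, MEMORY USAGE the total allocation including overhead -- both are standard Redis commands):

sudo docker exec greenbone-community-container_redis-server_1 redis-cli -s /run/redis/redis.sock -n 6 STRLEN "Cache/node.example.com/8200/excluding_404_body/URL_/ui/vcav-bootstrap/rest/vcav-providers/config.neon"
sudo docker exec greenbone-community-container_redis-server_1 redis-cli -s /run/redis/redis.sock -n 6 MEMORY USAGE "Cache/node.example.com/8200/excluding_404_body/URL_/ui/vcav-bootstrap/rest/vcav-providers/config.neon"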

@bjoernricks bjoernricks transferred this issue from greenbone/greenbone-container-images Mar 4, 2024
@bjoernricks
Contributor

Moved this issue to the ospd-openvas repository to let the @greenbone/scanner-maintainers have a look at it.

@cfi-gb
Member

cfi-gb commented Mar 4, 2024

@wdoekes

wdoekes commented Mar 8, 2024

Okay. I increased memory (and CPU) on the box, and now there is no excessive Redis key usage anymore.

By increasing memory, I probably implemented fix no. 1 suggested at https://forum.greenbone.net/t/oom-killing-redis-on-large-scan-with-openvas/14251/2 -- "Prevent overloading the system [...]"

I guess there is still something buggy, but I have no time to look into this more deeply at the moment. Thanks for the suggested links!

@wdoekes

wdoekes commented Mar 18, 2024

In the meantime I've gotten a bunch of OOMs again.

Here's a sheet of memory usage / keyspace with potential culprits highlighted:

[screenshot: memory usage / keyspace sheet]

It would be nice if I could see somewhere in the logs what db<N> was being used for at that point in time.
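
Until something like that exists, one way to peek at a suspicious keyspace while a scan is running is redis-cli's built-in sampling; a sketch, with db 14 chosen purely as an example:

sudo docker exec greenbone-community-container_redis-server_1 redis-cli -s /run/redis/redis.sock -n 14 --bigkeys
sudo docker exec greenbone-community-container_redis-server_1 redis-cli -s /run/redis/redis.sock -n 14 SCAN 0 COUNT 20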

@wdoekes

wdoekes commented Mar 18, 2024

In the meantime I tried adding some limits, and now I'm running into this when doing a single scan:

  redis-server:
    image: greenbone/redis-server
    command:
      # https://forum.greenbone.net/t/redis-oom-killed-on-one-host-scan/15722/5
      - /bin/sh
      - -c
      - 'rm -f /run/redis/redis.sock && cat /etc/redis/redis.conf >/run/redis/redis.conf && printf "%s\n" "maxmemory 12884901888" "maxmemory-policy volatile-ttl" "maxclients 150" "tcp-keepalive 15" >>/run/redis/redis.conf && redis-server /run/redis/redis.conf'
    logging:
      driver: journald
    restart: on-failure
    volumes:
      - redis_socket_vol:/run/redis/
OSPD[7] 2024-03-18 11:10:23,909: ERROR: (ospd.ospd) 570186d8-8de4-4609-a13b-0ad28f639a11: Exception Command # 1 (LRANGE internal/results 0 -1) of pipeline caused error: command not allowed when used memory > 'maxmemory'. while scanning

Traceback (most recent call last):
  File "/usr/local/lib/python3.11/dist-packages/redis/client.py", line 1348, in _execute_transaction
    response = self.parse_response(connection, "_")
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/redis/client.py", line 1416, in parse_response
    result = Redis.parse_response(self, connection, command_name, **options)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/redis/client.py", line 553, in parse_response
    response = connection.read_response()
               ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/redis/connection.py", line 524, in read_response
    raise response
redis.exceptions.ExecAbortError: Transaction discarded because of previous errors.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.11/dist-packages/ospd/ospd.py", line 584, in start_scan
    self.exec_scan(scan_id)
  File "/usr/local/lib/python3.11/dist-packages/ospd_openvas/daemon.py", line 1179, in exec_scan
    got_results = self.report_openvas_results(kbdb, scan_id)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/ospd_openvas/daemon.py", line 800, in report_openvas_results
    all_results = db.get_result()
                  ^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/ospd_openvas/db.py", line 503, in get_result
    return self._pop_list_items("internal/results")
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/ospd_openvas/db.py", line 487, in _pop_list_items
    return OpenvasDB.pop_list_items(self.ctx, name)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/ospd_openvas/db.py", line 201, in pop_list_items
    results, redis_return_code = pipe.execute()
                                 ^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/redis/client.py", line 1478, in execute
    return conn.retry.call_with_retry(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/redis/retry.py", line 46, in call_with_retry
    return do()
           ^^^^
  File "/usr/local/lib/python3.11/dist-packages/redis/client.py", line 1479, in <lambda>
    lambda: execute(conn, stack, raise_on_error),
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/redis/client.py", line 1351, in _execute_transaction
    raise errors[0][1]
  File "/usr/local/lib/python3.11/dist-packages/redis/client.py", line 1341, in _execute_transaction
    self.parse_response(connection, "_")
  File "/usr/local/lib/python3.11/dist-packages/redis/client.py", line 1416, in parse_response
    result = Redis.parse_response(self, connection, command_name, **options)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/redis/client.py", line 553, in parse_response
    response = connection.read_response()
               ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/redis/connection.py", line 524, in read_response
    raise response

redis.exceptions.OutOfMemoryError: Command # 1 (LRANGE internal/results 0 -1) of pipeline caused error: command not allowed when used memory > 'maxmemory'.
OSPD[7] 2024-03-18 11:10:23,913: INFO: (ospd.ospd) 570186d8-8de4-4609-a13b-0ad28f639a11: Host scan got interrupted. Progress: 25, Status: RUNNING

🤔
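
One thing that may contribute to this failure mode: all the keyspace dumps above show expires=0, i.e. the KB/cache keys carry no TTL, so maxmemory-policy volatile-ttl has nothing it is allowed to evict once maxmemory is reached and Redis starts refusing commands instead. That can be checked with standard Redis commands (key name taken from the earlier db6 listing):

sudo docker exec greenbone-community-container_redis-server_1 redis-cli -s /run/redis/redis.sock CONFIG GET maxmemory-policy
sudo docker exec greenbone-community-container_redis-server_1 redis-cli -s /run/redis/redis.sock INFO stats | grep evicted_keys
sudo docker exec greenbone-community-container_redis-server_1 redis-cli -s /run/redis/redis.sock -n 6 TTL "Cache/node.example.com/8200/excluding_404_body/URL_/ui/vropspluginui/rest/services/.env.example"    # -1 means no TTL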

@wdoekes

wdoekes commented Mar 18, 2024

Okay. It turned out the problem was indeed the "caching of web pages during CGI scanning" mentioned in oom-killing-redis-on-large-scan-with-openvas.

A HashiCorp Vault server was yielding responses of about 770 kB for lots and lots of scanned URLs, like /ui/vropspluginui/rest/services/getstatus/news.php (basically everything after /ui/). With 28000+ of those results, Redis would quickly fill up.
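
Back-of-the-envelope, that is roughly consistent with the ~19 GB allocation from the original report (assuming most of those responses actually got cached):

echo "$((770 * 28000)) kB cached"    # 21560000 kB, i.e. roughly 21.5 GB -- far beyond the 16 GB VM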

I did the following mini-patch:

--- greenbone-community-container_vt_data_vol/_data/http_keepalive.inc.orig	2024-03-18 15:46:31.480951508 +0100
+++ greenbone-community-container_vt_data_vol/_data/http_keepalive.inc	2024-03-18 15:52:51.764904305 +0100
@@ -726,7 +726,8 @@ function http_get_cache( port, item, hos
     # Internal Server Errors (5xx)
     # Too Many Requests (429)
     # Request Timeout (408)
-    if( res !~ "^HTTP/1\.[01] (5(0[0-9]|1[01])|4(08|29))" )
+    # Size of response must be less than 1.5*64k
+    if( res !~ "^HTTP/1\.[01] (5(0[0-9]|1[01])|4(08|29))" && strlen( res ) < 98304 )
       replace_kb_item( name:"Cache/" + host + "/" + port + "/" + key + "/URL_" + item, value:res );
 
   }

And now I finally got past that server without things breaking.

@wdoekes

wdoekes commented Mar 19, 2024

Ok. That looks good. Of course it would be better if we could make that strlen( res ) < 98304 check a bit more dynamic. Responses a few hundred kB in size should not be a problem on their own, but they are once >1000 keys are cached or they're already taking up several GB.

I did not see an easy way to query Redis for the total size of a keyspace. But maybe the kbdb [1] itself could keep track of key count and value size. Thinking out loud...

[1]

Doesn't look like there is something available for that yet.

Alternatively, set a TTL on these values so that they can be purged with volatile-ttl. Although that probably won't be sufficient, seeing that the LRANGE call failed with ENOMEM even when allkeys-lru was enabled.

(edit)
If there were access to the INCRBY [2] function, the script could keep track of its own usage and stop inserting new items into the cache once it considered the usage too high (rough illustration below). Trying to set/get a counter ourselves without atomicity might work, but could also prove to be off by a lot. Or maybe just implement a hard per-keyspace memory cap in gvm-libs.
[2] https://redis.io/commands/incrby/
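
A rough illustration of that bookkeeping idea, shown with redis-cli only for clarity (the counter key Cache/_total_bytes and the 1 GiB budget are made up; the real change would have to live in the NASL http cache code or gvm-libs):

# RES_LEN = length in bytes of the response that is about to be cached
TOTAL=$(sudo docker exec greenbone-community-container_redis-server_1 \
  redis-cli -s /run/redis/redis.sock -n 6 INCRBY "Cache/_total_bytes" "$RES_LEN")
if [ "$TOTAL" -gt $((1024 * 1024 * 1024)) ]; then
  echo "per-keyspace cache budget exceeded; skip caching this response"
fi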

@cfi-gb
Member

cfi-gb commented Mar 20, 2024

This really looks like a duplicate of greenbone/openvas-scanner#1488 and should IMHO be closed as such (as this is also not a problem in ospd-openvas).

Maybe you could post your (btw. great) analysis over there to have all the info collected in one place?

basically everything after /ui/

If this is the only problematic directory for this specific target system, it should also be possible to exclude that whole directory via the setting "Regex pattern to exclude directories from CGI scanning" of the VT "Global variable settings" (Family: Settings, OID: 1.3.6.1.4.1.25623.1.0.12288).

@wdoekes

wdoekes commented Mar 20, 2024

I summarized the stuff here: greenbone/openvas-scanner#1488 (comment)

I'm fine with closing.
