openvas finishes task, ospd-openvas keeps looking elsewhere... #259
It's worth noting that this issue occurs on different tasks, using different scanners. It's not linked to a particular installation or platform. It's a general problem which happens on tasks with a lot of possible targets (I would say more than 1,500 IPs), probably with a lot of dead hosts. I would really appreciate it if someone could look into this issue. I can provide all necessary help to do real-time debugging. Thank you
Hi @wisukind, I will try to reproduce this issue. Could you give some more information about the scan you are running? It would be nice if you could answer the following questions.
If you see that a host process is locked (the process name is "openvas: testing"), a gdb backtrace would be very helpful if possible (attach to the process ID with gdb's -p option). Regards,
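A minimal sketch of grabbing such a backtrace non-interactively; PID 12345 is a placeholder for the stuck process:

```sh
# Attach to the stuck "openvas: testing" process, dump backtraces for
# all of its threads, then detach and exit.
gdb -p 12345 -batch -ex "thread apply all bt"
```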
Thanks for looking into this. The original tasks have been killed by now, but I have other tasks in more or less the same situation, on remote scanners, so I'll answer your questions using those. Scan config: Full & Fast
All those scans actually finished days ago. Unfortunately I don't see any locked process now; all of them seem more or less busy:
But on the gsad side, those scans are stuck at various percentage levels (13%, 15% and 70%) while all of them are actually done from the openvas perspective. ospd-openvas, though, still reports results occasionally for those three tasks, so from its side the scans are not finished. The period between two result reports is highly erratic and can sometimes take days until new results are reported to gvmd, up to the point where result reporting stops completely. It seems there is a bottleneck at the redis level which prevents ospd-openvas from reporting results progressively; redis is not under heavy load, but it is using a lot of memory. Let me know if you need anything else.
Hi @wisukind,
Hi,

If you have access to redis, could you tell me how many kb are in use and the value of max databases you have set in redis.conf?

I have:

redis /var/run/redis/redis.sock> info keyspace
# Keyspace
db0:keys=1,expires=0,avg_ttl=0
redis /var/run/redis/redis.sock> config get databases
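The memory figure asked about above can be pulled over the same unix socket; a minimal sketch using redis-cli, with the socket path as configured in this thread:

```sh
# Memory currently used by redis, in human-readable form.
redis-cli -s /var/run/redis/redis.sock info memory | grep used_memory_human
# Configured number of redis databases.
redis-cli -s /var/run/redis/redis.sock config get databases
```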
Hi @wisukind, I was testing this with both TLS and unix socket types: F&F scan configs against a /20 network (4094 hosts), with three scans running in parallel. I was not able to reproduce this issue. I will keep an eye on this. Did you check whether some other messages are being logged in other log files (in /var/log: syslog, messages, debug, kernel)? Maybe it is some issue related to the number of open file descriptors? You can check as sketched below.
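A generic way to check descriptor usage for the ospd-openvas process (PID 12345 is a placeholder):

```sh
# Number of file descriptors the process currently holds...
ls /proc/12345/fd | wc -l
# ...versus its per-process limit.
grep "open files" /proc/12345/limits
```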
As I am not able to reproduce this, it would be nice if you could check the ospd-openvas status for each scan directly with gvm-cli, e.g. as sketched below.
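Something along these lines should work; the socket path matches the one used elsewhere in this thread, SCAN-UUID is a placeholder, and flag spellings can differ slightly between gvm-tools versions:

```sh
# Query ospd-openvas directly over the OSP protocol for one scan's status.
gvm-cli --protocol OSP socket --socketpath /opt/gvm/var/run/ospd.sock \
    --xml "<get_scans scan_id='SCAN-UUID' details='0'/>"
```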
Regards,
Hi Juan, thanks for your follow-up. Here are my outputs.
Then
At this point I have tasks stuck at 100% for several days. One task stopped without any reason during the night; the only thing I have in the log is:
It may be a different issue, though, but I notice this generally happens after a task has been stuck for a long time. One difficulty in tracking down issues like this is that the task SID is not shared between gvmd and ospd-openvas: gvmd uses one ID for a task, ospd uses another one for the same task. When you have multiple tasks running simultaneously, it's difficult to tie events back to a particular task. Having said that, if you cannot reproduce the issue whatsoever, it may be linked to the complexity of my targets. I'll split the targets which regularly pose problems into smaller ones and see if that makes any difference. Thanks
gvm-cli is used for direct communication with ospd too, using the OSP protocol instead of GMP. That is why you are getting the "bogus command" error.
There is something wrong. I get a certificate verification error when trying to access ospd via TLS. I tried using the local ospd client & server certificates, but no luck. I don't understand what's going on; these are the certificates I used with --create-scanner, and my scanner works and is validated without issues.
Hi @wisukind, Regards.
Hi Juan,
The issue is that doing so makes the SSL handshake fail with the following error:
$ /opt/gvm/bin/ospd-scanner/bin/python3.6 /opt/gvm/bin/ospd-scanner/bin/ospd-openvas --pid-file /opt/gvm/var/run/ospd-openvas-slave.pid -p 9390 -b 0.0.0.0 -k /opt/gvm/var/lib/gvm/private/CA/serverkey.pem -c /opt/gvm/var/lib/gvm/CA/servercert.pem --ca-file /opt/gvm/var/lib/gvm/CA/cacert.pem --log-level debug -l /opt/gvm/var/log/gvm/ospd-openvas-slave.log --unix-socket /opt/gvm/var/run/ospd.sock --lock-file-dir /opt/gvm/var/run/ --stream-timeout 100 --config /opt/gvm/etc/openvas/ospd.conf -f
2020-06-16 11:44:28,724 OSPD - openvas: DEBUG: (ospd.server) New connection from ('10.194.157.7', 47772)
----------------------------------------
Exception happened during processing of request from ('10.194.157.7', 47772)
Traceback (most recent call last):
  File "/usr/lib/python3.6/socketserver.py", line 654, in process_request_thread
    self.finish_request(request, client_address)
  File "/usr/lib/python3.6/socketserver.py", line 364, in finish_request
    self.RequestHandlerClass(request, client_address, self)
  File "/usr/lib/python3.6/socketserver.py", line 724, in __init__
    self.handle()
  File "/opt/gvm/bin/ospd-scanner/lib/python3.6/site-packages/ospd/server.py", line 127, in handle
    self.server.handle_request(self.request, self.client_address)
  File "/opt/gvm/bin/ospd-scanner/lib/python3.6/site-packages/ospd/server.py", line 167, in handle_request
    self.server.handle_request(request, client_address)
  File "/opt/gvm/bin/ospd-scanner/lib/python3.6/site-packages/ospd/server.py", line 295, in handle_request
    req_socket = self.tls_context.wrap_socket(request, server_side=True)
  File "/usr/lib/python3.6/ssl.py", line 407, in wrap_socket
    _context=self, _session=session)
  File "/usr/lib/python3.6/ssl.py", line 817, in __init__
    self.do_handshake()
  File "/usr/lib/python3.6/ssl.py", line 1077, in do_handshake
    self._sslobj.do_handshake()
  File "/usr/lib/python3.6/ssl.py", line 689, in do_handshake
    self._sslobj.do_handshake()
OSError: [Errno 0] Error
On the gvmd side, I have created the scanner with:
$ gvmd --create-scanner=slaveBoc-test --scanner-type=OpenVas --scanner-port=9390 --scanner-host=ov-slave-boc --scanner-ca-pub=/opt/gvm/slaves/boc/cacert.pem --scanner-key-priv=/opt/gvm/slaves/boc/clientkey.pem --scanner-key-pub=/opt/gvm/slaves/boc/clientcert.pem
For some reason, the only way to make gvmd & ospd-openvas work together via TLS is to use the same (client) certificates on both sides. Other users are having the same issue, as reported in the GSE part of the forum.
I don't understand why it doesn't work. Using client certs for the client and server certs for the server should be the way it works; but it doesn't! :(
Thanks

On Tue, 2020-06-16 at 08:16 -0700, Juan José Nicola wrote:
Hi @wisukind,
As far as I can see, you are starting ospd with the client certs. You should start ospd with the server cert and key, and use the client key in the client (gvmd or gvm-cli). I also had trouble with the local certificates, so I ended up creating my own by hand. After that, the communication between gvm-cli and gvmd with ospd-openvas was successful.
Regarding the original issue, I was not able to reproduce it. So I will wait until you can confirm there is no communication issue between gvmd and ospd. I mean, try to connect via gvm-cli and call <get_scans> to be sure that the info shown in the GUI is the same as shown by ospd.
Regards.
Juan
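A minimal sketch of the split Juan describes, reusing only paths and flags already quoted in this thread (not verified against this particular setup):

```sh
# Server side: ospd-openvas is started with the *server* cert and key,
# plus the CA that signed both sides.
ospd-openvas -p 9390 -b 0.0.0.0 \
    -k /opt/gvm/var/lib/gvm/private/CA/serverkey.pem \
    -c /opt/gvm/var/lib/gvm/CA/servercert.pem \
    --ca-file /opt/gvm/var/lib/gvm/CA/cacert.pem

# Client side: gvmd registers the scanner with the *client* cert and key,
# signed by the same CA.
gvmd --create-scanner=slaveBoc-test --scanner-type=OpenVas \
    --scanner-host=ov-slave-boc --scanner-port=9390 \
    --scanner-ca-pub=/opt/gvm/slaves/boc/cacert.pem \
    --scanner-key-pub=/opt/gvm/slaves/boc/clientcert.pem \
    --scanner-key-priv=/opt/gvm/slaves/boc/clientkey.pem
```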
I forgot to mention: all certificates were generated with gvm-manage-certs -a
Hi @wisukind!
Hi @wisukind,
Hi Jose,
I used that option in the hope it could improve the situation somehow; unfortunately I have no clue what this option does, as it's not explained anywhere. But it didn't change anything anyway. The issue is clearly linked to the scope of the target and the number of live IPs behind it.
I have split the tasks causing issues into much smaller tasks, and this fixed the problem. Perhaps it's just that ospd isn't designed to handle large targets (>2,000 IPs).
Not sure what to do with this ticket; the problem still exists but seems difficult to reproduce, as it is deeply tied to the targets. I'm also stuck with the certificate issue; my scanners are used in production, and if I change the certificates I risk breaking a lot of things (including the db itself), since many tasks are linked to them and you can't change them through gsa; you need to do this via gvmd directly.
What do you think?
Thanks for your support
On Sun, 2020-06-21 at 23:59 -0700, Juan José Nicola wrote:
Hi @wisukind,
I realized that you are using the option --stream-timeout=100 for ospd. I am not sure why you are using such a large value here; I would use 1 or 5 at most. I tried to reproduce this issue again with a value of 100 but was still not able to.
Setting a smaller value could probably improve the situation, also for your other reported issue greenbone/gvmd#1061.
Hi Jose, just an update on this bug; I have another process in the same situation:
On gsad, the scan is stuck at 7%. On the scanner node side, python is still consuming a lot of CPU, so it's doing something. As I explained earlier, I can't remotely connect to the slave ospd due to the certificate issue, but hopefully there are other ways to investigate where the bottleneck is? Please advise.
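Some generic ways to see what the busy python/ospd process is doing without going through OSP, assuming shell access on the scanner node (PID 12345 is a placeholder):

```sh
# Per-thread CPU usage of the ospd-openvas process.
top -H -p 12345
# Live system calls, to see whether it is looping or blocked (Ctrl-C detaches).
strace -f -p 12345
# Open file descriptor count, in case a limit is being approached.
ls /proc/12345/fd | wc -l
```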
Hi!
Hi @wisukind,
I think we can close this issue. I haven't seen this problem in more than six months.
OpenVAS 7.0.1
gvm-libs 11.0.1
OSP Server for openvas: 1.0.1
OSP: 1.2
OSPd: 2.0.1
python 3.6
Ubuntu 18.04 LTS
Redis 4.0.9 with GVMd tuned configuration file
Hello
I have a scan running on a fairly large task (3,642 IPs, with many dead hosts). When I run this task, openvas is launched by ospd-openvas without problems. Both are located on the same machine.
After some time, openvas finishes scanning the task as it's supposed to:
[openvas log excerpt omitted]
However, ospd-openvas seems to have lost communication with openvas & gvmd along the way: its last log entry is dated 2020-05-13 (while the last openvas log entry is 2020-05-14). No error is logged. The process is still running and loaded:
[process listing omitted]
On the gvmd side, the task is still shown as running but has been stuck at 7% for more than a day. The problem is not systematic: out of 10 launches it occurs around 5-6 times; the other 4-5 times the scan finishes successfully.
My ospd-openvas process is still running, in case I can do anything to help investigate what's going on.
Thank you