
clammit runtime error #24

Closed
ITler opened this issue May 25, 2021 · 6 comments

ITler commented May 25, 2021

Could you please share some insight into what this output means?
In particular, I need to know whether clammit stops working entirely after this kind of error (in that case it would be a good target for a liveness probe, by the way), or whether only a single connection attempt to clamd fails. Does the app recover from this by itself?

2021/05/21 13:44:55 http: panic serving 127.0.0.1:60806: runtime error: invalid memory address or nil pointer dereference
goroutine 120 [running]:
net/http.(*conn).serve.func1(0xc000106c80)
	/usr/local/go/src/net/http/server.go:1824 +0x153
panic(0x727e00, 0x998100)
	/usr/local/go/src/runtime/panic.go:971 +0x499
github.com/dutchcoders/go-clamd.(*Clamd).Ping(0xc000158430, 0x6e8d01, 0xc000060cc0)
	/go/pkg/mod/github.com/dutchcoders/go-clamd@v0.0.0-20170520113014-b970184f4d9e/clamd.go:118 +0x8a
clammit/scanner.(*Clamav).Ping(0xc000023cc0, 0xc000060cc0, 0x1b)
	/app/scanner/clamav.go:72 +0x2f
main.infoHandler(0x7f2da0, 0xc00014c460, 0xc00032a200)
	/app/main.go:356 +0x142
net/http.HandlerFunc.ServeHTTP(0x79ec10, 0x7f2da0, 0xc00014c460, 0xc00032a200)
	/usr/local/go/src/net/http/server.go:2069 +0x44
net/http.(*ServeMux).ServeHTTP(0xc000023d00, 0x7f2da0, 0xc00014c460, 0xc00032a200)
	/usr/local/go/src/net/http/server.go:2448 +0x1ad
net/http.serverHandler.ServeHTTP(0xc00014c1c0, 0x7f2da0, 0xc00014c460, 0xc00032a200)
	/usr/local/go/src/net/http/server.go:2887 +0xa3
net/http.(*conn).serve(0xc000106c80, 0x7f3660, 0xc000348180)
	/usr/local/go/src/net/http/server.go:1952 +0x8cd
created by net/http.(*Server).Serve
	/usr/local/go/src/net/http/server.go:3013 +0x39b

Some more info: I am running clammit with 1 vCPU and a minimum of 756 MB RAM, because I want clammit to keep a payload of 500 MB in RAM and I leave 256 MB for the app itself (which already feels like a lot for a Go app).
Is this sufficient? Would increasing resources solve the mentioned issue?
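For reference, the liveness probe mentioned above could be wired up roughly like this in the pod spec. This is only a sketch under assumptions: the path is hypothetical (the trace shows an infoHandler, but I have not confirmed its route), and note that this handler pings clamd, so probe failures would reflect clamd's health rather than clammit's own.

```
livenessProbe:
  httpGet:
    path: /clammit/info   # hypothetical path; check clammit's route registration
    port: 8438            # matches the listen= setting in the config below
  initialDelaySeconds: 10
  periodSeconds: 30
  failureThreshold: 3
```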


vjt commented May 25, 2021

Ouch. From the stack trace, the problem is in go-clamd (clamd.go:118 in the trace): it appears that sending data to clamd didn't fail, but no response was received, so the response is nil, and dereferencing it causes the invalid pointer crash.
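The failure mode can be sketched like this; it is an illustration of the pattern, not go-clamd's actual code. When the response channel is closed without a value (e.g. clamd dropped the connection), the receive yields a nil pointer, and touching its fields panics, which matches the nil pointer dereference in the trace. A guarded receive turns the panic into an ordinary error:

```go
package main

import "fmt"

// scanResult stands in for go-clamd's response type.
type scanResult struct{ Raw string }

// ping receives one response from the channel. If the reader closed
// the channel without sending anything (clamd gone), the receive
// yields nil; guarding that avoids the panic and returns an error
// the caller can report instead.
func ping(ch chan *scanResult) (string, error) {
	r, ok := <-ch
	if !ok || r == nil {
		return "", fmt.Errorf("no response from clamd")
	}
	return r.Raw, nil
}

func main() {
	ch := make(chan *scanResult)
	close(ch) // simulate clamd closing the stream without replying
	_, err := ping(ch)
	fmt.Println("handled:", err) // handled: no response from clamd
}
```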

Can you check the clamd logs or syslog for a clamd crash?

Also, could you please share the clammit and clamd config file and provide a sample of the file that you're attempting to scan? That'll help me in reproducing the issue.

Anyway, considering also the current memory leak #22, go-clamd deserves a pull request to fix these. I'll try to work on this in the upcoming days, but of course any help is welcome :-)


vjt commented May 25, 2021

Reviewing the clamd man page: clamd does indeed close the connection if it is fed a stream larger than the maximum configured scan size (MaxScanSize in clamd.conf). This case is not handled by go-clamd.
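For reference, the relevant clamd.conf limits look like this. The values are illustrative, not recommendations; both MaxScanSize and StreamMaxLength are real clamd.conf options, and StreamMaxLength in particular governs when clamd rejects an INSTREAM upload and drops the connection:

```
# clamd.conf -- illustrative values, see clamd.conf(5)
MaxScanSize 400M      # maximum amount of data scanned per file/stream
StreamMaxLength 500M  # max stream size accepted via INSTREAM; larger
                      # streams make clamd reply with an error and
                      # close the connection
MaxFileSize 400M      # maximum size of a single scanned file
```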

It looks trivial to fix, so I'll attempt working on it in the coming days.

As an aside, I am not really up to date on virus-scanning best practices, but from experience I remember that there is not much value in scanning very large files, as they are unlikely to contain threats.


ITler commented May 25, 2021

Also, could you please share the clammit and clamd config file and provide a sample of the file that you're attempting to scan? That'll help me in reproducing the issue.

First, I am not actively scanning files yet, and especially no big ones. I have just prepared the runtime environment and pod configuration; that is upcoming work on my desk for tomorrow... :)

[ application ]
listen=:8438
clamd-url=tcp://antivirus-clamav:3310
content-memory-threshold=805306368
debug=false
test-pages=false
virus-status-code=409

Can you check the clamd logs or syslog for a clamd crash?

I can reproduce the issue: it pops up when clamd restarts (pods in Kubernetes). Unfortunately, that is a very common and expected event. The lifecycles of clammit and clamd are technically decoupled, so when clamd, as a downstream dependency, needs to restart, clammit should mitigate that and survive/recover. (The same applies to temporary network interruptions, which should also be considered normal.)

As a suggestion, to handle these common cases there could be a retry loop around every call to clamd, e.g. retrying every 5 seconds (customizable via application config) and perhaps producing log output like:

Cannot reach {{.clamd-url}}, retrying in {{.retry-interval}} seconds...
Cannot reach {{.clamd-url}}, retrying in {{.retry-interval}} seconds...
Cannot reach {{.clamd-url}}, retrying in {{.retry-interval}} seconds...
Cannot reach {{.clamd-url}}, continue retrying every {{.retry-interval}} seconds without output
Connection to {{.clamd-url}} recovered


vjt commented May 25, 2021

Thanks, will review.


ITler commented Jun 9, 2021

It seems that clammit does not die when clamd is unreachable. It just outputs a bunch of the panics / stack traces I posted above until clamd is available again. But clammit survives, so this is fine, and I think we can close this issue.

ITler closed this as completed Jun 9, 2021

vjt commented Jun 9, 2021

Hi @ITler, sorry for not having been able to work on this yet. I'm happy that the current behaviour is good enough for your use case.

As soon as I get a chance, I'll handle this case anyway and output a more appropriate log message rather than a disconcerting panic stack trace.

Thanks
