Support request for massive log lines #100
Hi, can you put your project on GitHub in its own repo so that we can check it out and see the source code? I also wanted to know which mode of the watchdog you are using, and why you feel it necessary to stream a massive file into the container logs. There may be alternatives which do not require this proposed solution, so I wanted to get a better idea of the exact use-case. Can you help us out with that? Regards, Alex |
Hi Alex, Please check out the code:
Kind regards, |
+1 When trying to log something longer than 65536, "bufio.Scanner: token too long" will occur. Here is my problem, in scan.go (lines 144-160):

```go
144	// Is the buffer full? If so, resize.
145	if s.end == len(s.buf) {
146		if len(s.buf) >= s.maxTokenSize {
147			s.setErr(ErrTooLong)
148			return false
149		}
150		newSize := len(s.buf) * 2
151		if newSize > s.maxTokenSize {
152			newSize = s.maxTokenSize
153		}
154		newBuf := make([]byte, newSize)
155		copy(newBuf, s.buf[s.start:s.end])
156		s.buf = newBuf
157		s.end -= s.start
158		s.start = 0
159		continue
160	}
```

What's more, the daemon process will be dead after this error. |
/set title: Support request for massive log lines |
Can I ask why you are logging such huge amounts of data to the container logs? This doesn't seem like a normal use case. |
Yes, you are right. This is not a normal use case, but there are several scenarios:
The huge log works fine in the handle function, but the error occurs in the watchdog, and then the handling process dies. It seems better to take care of this problem. |
I think we can consider increasing the buffer size, or even allowing it to be controlled via an env variable. Even LogStash (in ELK) has a max buffer size: https://www.elastic.co/guide/en/logstash/current/plugins-inputs-udp.html#plugins-inputs-udp-buffer_size Looking through several versions, the default buffer size is:
I suspect all log systems will have some kind of limit. If your request body is very big, I would not recommend logging it. An absurd case like a 10GB binary file as log output is not going to be a good idea, and it is certainly not something that I would consider a "log". |
If a log line longer than 64K stops a service from operating, it is a serious problem. I found it really amusing that the title of the issue was renamed from "bufio.Scanner: token too long" to "Support request for massive log lines", which implies a feature request and not a bug. As a comparison please check hashicorp/terraform#20325 |
@everesio I agree, a long log line should not stop the watchdog from working. My main point above is that it seems reasonable to have a buffer and to allow control over the size of that. Of course, logging shouldn't cause anything to crash, that goes without saying. |
I just read hashicorp/terraform#20325; it looks like something we could replicate in https://github.com/openfaas/of-watchdog/blob/master/executor/logging.go |
Exactly, the scanner.Scan method is the source of the problem. |
I started adapting the terraform patch, and it is definitely a bit messy because of the internal use of |
**What** - Use the bufio ReadLine method so that we can check if the log line has been broken into several parts due to the internal buffering. This implementation has been adapted from hashicorp/go-plugin#98 The implementation also preserves the timestamp prefix and is fully backwards compatible. Resolves openfaas#100 Signed-off-by: Lucas Roesler <roesler.lucas@gmail.com>
@LucasRoesler @alexellis we faced that same issue internally, causing the function to hang because its stdout is locked (as it's piped, but not read). |
Hi @Kanshiroron, as far as I was aware we'd already done the work for this in #126 - can you confirm you've tried the new settings we have documented? See If you still have issues please raise your own Issue here. Alex |
/close: resolved |
/lock |
When the log contains a line longer than MaxScanTokenSize (64 * 1024), the error "Error scanning stderr: bufio.Scanner: token too long" is reported. All subsequent requests (when using the logger) are blocked.
Expected Behaviour
Long lines should be logged
Current Behaviour
bufio.NewScanner is initialized with MaxScanTokenSize = 64 * 1024. If a log line is longer than this limit,
bufio.Scanner: token too long is returned. On this error the scanner.Scan loop in bindLoggingPipe finishes, and there is no
pipe reader any more.
Possible Solution
NewScanner can be replaced by NewReaderSize.
I will rework #99
Steps to Reproduce (for bugs)
token-too-long.zip
All subsequent requests are not processed e.g.
5. make test
Context
This behavior is a blocker, as in my use case the logged events can be longer than 64KB.
Your Environment
Docker version (`docker version`, e.g. Docker 17.0.05):
Are you using Docker Swarm or Kubernetes (FaaS-netes)?
Kubernetes
Operating System and version (e.g. Linux, Windows, MacOS):
Linux
Link to your project or a code example to reproduce issue:
token-too-long.zip