Better logger for console with direct access #4720
Conversation
```diff
@@ -77,10 +77,10 @@ defmodule Logger.Backends.Console do
       format_event(level, msg, ts, md, state)
       |> color_event(level, colors)
     try do
-      IO.write(device, output)
+      send(device, {:io_request, self(), self(), {:put_chars, :unicode, output}})
     rescue
       ArgumentError ->
```
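For context, the change above stops going through `IO.write/2` and instead speaks the raw Erlang I/O protocol to the device. Here is a minimal sketch of that protocol; `RawIO` and its function names are illustrative, not part of the PR:

```elixir
defmodule RawIO do
  # Fire-and-forget write, as in the diff above: send the :io_request and
  # do not wait for the {:io_reply, ...} message, so the caller never
  # blocks on the terminal. `device` is an I/O server such as :user.
  def write_async(device, chars) do
    send(device, {:io_request, self(), make_ref(), {:put_chars, :unicode, chars}})
    :ok
  end

  # For comparison, the synchronous variant: wait for the reply, which is
  # roughly what IO.write/2 does and where the blocking comes from.
  def write_sync(device, chars) do
    ref = make_ref()
    send(device, {:io_request, self(), ref, {:put_chars, :unicode, chars}})

    receive do
      {:io_reply, ^ref, reply} -> reply
    end
  end
end
```

Note the PR passes `self()` as the reply-as value rather than a ref, so replies still arrive in the backend's mailbox; they are just never waited on.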
Thank you! However, this clause will never be raised now. Can you try the following variation instead of the `try do`:

```elixir
case :unicode.characters_to_binary(output) do
  {:error, good, bad} ->
    send(device, {:io_request, self(), self(), {:put_chars, :unicode, [good | Logger.Formatter.prune(bad)]}})

  good ->
    send(device, {:io_request, self(), self(), {:put_chars, :unicode, good}})
end
```

And let us know how it performs CPU- and memory-wise?
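For reference, these are the return shapes of `:unicode.characters_to_binary/1` that the `case` above distinguishes (the example values here are illustrative):

```elixir
# Fully valid input: the function returns a plain binary.
:unicode.characters_to_binary("olá")
#=> "olá"

# Invalid bytes: the valid prefix plus the offending rest, which the
# {:error, good, bad} clause prunes before printing.
:unicode.characters_to_binary(<<"ok", 0xFF>>)
#=> {:error, "ok", <<255>>}

# It can also return {:incomplete, good, rest} when the input ends in
# the middle of a codepoint; the catch-all clause above would pass that
# tuple through as-is.
```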
Thanks @josevalim for the review, I was wondering about that. It still performs better: the difference in memory usage is a little smaller, but it is always lower, and there is still a big difference in CPU usage.
I'm running:
❤️ 💚 💙 💛 💜
I like what this PR is trying to achieve. However, this creates a tricky situation. If we want to speed up writing to the terminal, I wonder if we could introduce a middleman process that we send to asynchronously and that batches messages to the device. When flushing we get the list of handlers but ignore it; instead we could call all Console handlers and have them sync with their middleman. I think this would allow us to do async with back pressure too?

With the current setup I think we could build up the message queue of :user. Also, do we get the side effect that Logger's sync mode kicks in twice as early, because replies will increase the length of the Logger message queue?
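A rough sketch of what such a middleman could look like; the module name, its API, and the flush-when-the-mailbox-drains policy are assumptions on my part, not part of this PR:

```elixir
defmodule Console.Middleman do
  use GenServer

  def start_link(device), do: GenServer.start_link(__MODULE__, device)

  # Backends write asynchronously...
  def write(pid, chars), do: GenServer.cast(pid, {:write, chars})

  # ...and sync with the middleman when Logger flushes.
  def sync(pid), do: GenServer.call(pid, :sync)

  @impl true
  def init(device), do: {:ok, %{device: device, buffer: []}}

  @impl true
  def handle_cast({:write, chars}, state) do
    # Accumulate iodata; the zero timeout fires only once the mailbox is
    # empty, so bursts are batched into a single write to the device.
    {:noreply, %{state | buffer: [state.buffer, chars]}, 0}
  end

  @impl true
  def handle_call(:sync, _from, state), do: {:reply, :ok, flush(state)}

  @impl true
  def handle_info(:timeout, state), do: {:noreply, flush(state)}

  defp flush(%{buffer: []} = state), do: state

  defp flush(%{device: device, buffer: buffer} = state) do
    # One synchronous write per batch amortizes the round trip while
    # keeping back pressure between the middleman and the device.
    IO.write(device, buffer)
    %{state | buffer: []}
  end
end
```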
Perhaps we can achieve batching without a middleman, as we have the replies for back pressure.
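One way to read that suggestion (my interpretation, with illustrative names and limits): track in-flight requests and block only once a bounded window of unacknowledged writes is full:

```elixir
defmodule WindowedWrites do
  @max_in_flight 32

  # Send asynchronously while fewer than @max_in_flight replies are
  # outstanding; returns the updated in-flight count for the caller's state.
  def write(device, chars, in_flight) when in_flight < @max_in_flight do
    send(device, {:io_request, self(), make_ref(), {:put_chars, :unicode, chars}})
    in_flight + 1
  end

  # Window full: wait for one {:io_reply, ...} before sending more, which
  # is where the back pressure comes from.
  def write(device, chars, in_flight) do
    receive do
      {:io_reply, _ref, _reply} -> write(device, chars, in_flight - 1)
    end
  end
end
```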
@fishcakez since
@fishcakez I have pushed this change: fb6eafe. However, this may slow things down altogether. An alternative approach is to not encode on the client and handle the
I am not sure how back pressure will be affected, given we will now consume messages much faster as well.
Won't this break this: elixir/lib/elixir/lib/kernel/cli.ex, line 42 at ed8aac2?
If we are handling messages much faster, that means we are pushing the queue to the
This has a race condition where messages from the same process can get out of order if the first message has an error and the second doesn't: the first message is sent, the second message is sent, the first fails, the second succeeds, then we handle the error and send the first again. This would be very confusing.
@olafura can you please benchmark the current code in master and let us know how it goes?
It shouldn't really have anything to do with Logger itself, since the patch removes the push-and-wait-for-confirmation that :io does, plus the message being sent a couple of times. You can probably flood user_drv with messages, but you can also fill the mailbox of Logger or :group, which are also involved. I'll benchmark the new master when I get back from lunch ;)
If we fill up Logger we get some back pressure, as logs to it become synchronous; this means the
Hi, I've been doing some experiments that you can try out to verify the results:
https://github.com/olafura/test_io
By sending messages directly to user_drv we are able to block less, with significantly less CPU time and less memory usage overall.
I'm making these changes to Logger and not to IO directly, since IO does a lot more than just print things to the console.
It's not faster per se at putting things on the terminal, but according to my tests that is going to be difficult to make faster anyway.
We might want to handle these reply cases:
https://github.com/erlang/otp/blob/maint/lib/stdlib/src/io.erl#L573
I'm currently pushing everything back to self() so it won't block.
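For concreteness, handling those replies in the backend (a GenEvent handler at the time) could look something like this sketch; the clauses and the choice to simply drop replies are my assumptions:

```elixir
# Replies to the :io_request sent with self() as the reply-as value
# arrive as plain messages in the backend's mailbox:
def handle_info({:io_reply, _from, :ok}, state) do
  # Successful write: nothing to do.
  {:ok, state}
end

def handle_info({:io_reply, _from, {:error, _reason}}, state) do
  # One of the error replies from io.erl linked above. A real
  # implementation must decide whether to retry (risking the reordering
  # discussed earlier) or drop the message.
  {:ok, state}
end
```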