
Can you comment on this test by the maker of g3log? #293

Closed
ruipacheco opened this issue Sep 30, 2016 · 49 comments

@ruipacheco
Contributor

commented Sep 30, 2016

https://kjellkod.wordpress.com/2015/06/30/the-worlds-fastest-logger-vs-g3log/

He makes an interesting point that could lead to further optimisation of g3log.

@gabime

Owner

commented Sep 30, 2016

which point?

@ruipacheco

Contributor Author

commented Sep 30, 2016

First he asserts:

Worst case latency is the highest latency (read: wait time) from when the code executes a LOG statement until it is ready for the next code statement.

then he runs a number of comparisons between his library and spdlog. I'm adding the conclusions:

Test 1:

Ouch. Around 10-11ms worst-case latency for spdlog. For some systems this type of hiccup is OK. For other systems a factor-10 decrease in performance, once in a while, is unacceptable.

Test 2:

Worst case latency decreases from around 11,000 us to 1100 us. Unfortunately this wasn’t consistent, as you will see it sometimes reverted back to the sluggish worst case performance outliers.

Test 3:

The spdlog results varied greatly. Most runs spdlog had its worst time latency around 1000 us, (1ms) but sometimes it increased with a factor 10+. I.e. +10,000 us (10 ms). G3log was steadily around 500 – 600 us as worst case latency.

Test 4:

The result showed that worst case latency was consistent for spdlog at around 10,000 us, for g3log 400us to 1,000us.

Test 5:

The average case is still not terrible but spdlog’s worst case latency on the threads are troubling. Close to 8 seconds as worst case latency when the queue contention got real bad. Ouch.
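The "worst case latency" metric quoted above can be measured with a simple max-over-iterations harness. The sketch below is only an illustration of the idea, not the blog's actual benchmark code; `worst_case_latency_ns` and `log_fn` are invented names, with `log_fn` standing in for a real LOG statement:

```cpp
#include <algorithm>
#include <chrono>
#include <cstddef>
#include <cstdint>

// Time each individual "LOG" call and keep the maximum observed latency.
// log_fn stands in for a real LOG statement (e.g. a lambda wrapping it).
template <typename LogFn>
std::uint64_t worst_case_latency_ns(LogFn log_fn, std::size_t iterations)
{
    std::uint64_t worst = 0;
    for (std::size_t i = 0; i < iterations; ++i) {
        auto start = std::chrono::steady_clock::now();
        log_fn(i);  // the call being measured
        auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(
                      std::chrono::steady_clock::now() - start).count();
        worst = std::max<std::uint64_t>(worst, static_cast<std::uint64_t>(ns));
    }
    return worst;
}
```

Pointing `log_fn` at an actual logging call and running many iterations per thread reproduces the kind of worst-case numbers quoted in the tests above.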

@gabime

Owner

commented Oct 1, 2016

I performed some tests on my own (Ubuntu 64 bit, Intel i7-4770 CPU @ 3.40GHz):

It should be noted that each log entry gets flushed to disk immediately (to be fair to g3log, which flushes each entry). Otherwise the results would be even better.

./compare.sh

running spdlog and g3log tests 10 time with 10 threads each (total 1,000,000 entries)..

[spdlog] worst:       1288  Avg: 1.13685    Total: 172,720 us
[g3log]  worst:      10187  Avg: 7.94727    Total: 855,357 us

[spdlog] worst:        619  Avg: 1.14674    Total: 172,754 us
[g3log]  worst:       8508  Avg: 7.68996    Total: 829,601 us

[spdlog] worst:       3014  Avg: 1.16171    Total: 173,909 us
[g3log]  worst:       8225  Avg: 7.50522    Total: 810,579 us

[spdlog] worst:       2150  Avg: 1.13715    Total: 173,223 us
[g3log]  worst:       8320  Avg: 7.91071    Total: 850,466 us

[spdlog] worst:        659  Avg: 1.20026    Total: 179,374 us
[g3log]  worst:       8851  Avg: 7.4614     Total: 805,390 us

[spdlog] worst:        904  Avg: 1.12321    Total: 172,558 us
[g3log]  worst:       5878  Avg: 7.54343    Total: 811,313 us

[spdlog] worst:       4530  Avg: 1.15551    Total: 174,627 us
[g3log]  worst:       8640  Avg: 8.48928    Total: 907,111 us

[spdlog] worst:       2776  Avg: 1.1201     Total: 171,164 us
[g3log]  worst:       7562  Avg: 7.75016    Total: 835,064 us

[spdlog] worst:       1437  Avg: 1.14095   Total: 173,324 us
[g3log]  worst:       4559  Avg: 7.99172   Total: 858,894 us

[spdlog] worst:       3534  Avg: 1.13461   Total: 172,166 us
[g3log]  worst:       6572  Avg: 8.17584   Total: 877,922 us

And for 100 threads:

./compare.sh 100

running spdlog and g3log tests 10 time with 100 threads each (total 1,000,000 entries)..

[spdlog] worst:      14708  Avg: 20.1998    Total: 248,958 us
[g3log]  worst:      12266  Avg: 101.848    Total: 1,033,921 us

[spdlog] worst:       9604  Avg: 20.5861   Total: 250,699 us
[g3log]  worst:      13444  Avg: 93.5952   Total: 948,145 us

[spdlog] worst:      11878  Avg: 19.7405    Total: 241,091 us
[g3log]  worst:      12699  Avg: 101.766    Total: 1,036,378 us

[spdlog] worst:      15892  Avg: 20.5997   Total: 255,937 us
[g3log]  worst:      10426  Avg: 95.0904    Total: 969,952 us

[spdlog] worst:      11864  Avg: 20.6075    Total: 253,899 us
[g3log]  worst:      10252  Avg: 100.916    Total: 1,029,302 us

[spdlog] worst:       9695  Avg: 20.446 Total: 251,642 us
[g3log]  worst:      13570  Avg: 101.454    Total: 1,033,206 us

[spdlog] worst:      14438  Avg: 19.9635    Total: 248,497 us
[g3log]  worst:      11728  Avg: 101.363    Total: 1,029,637 us

[spdlog] worst:      14467  Avg: 19.4419    Total: 234,165 us
[g3log]  worst:      11069  Avg: 101.64 Total: 1,033,295 us

[spdlog] worst:      11910  Avg: 20.3399    Total: 252,151 us
[g3log]  worst:      13522  Avg: 101.22 Total: 1,026,303 us

[spdlog] worst:      12144  Avg: 20.3297    Total: 248,405 us
[g3log]  worst:      10144  Avg: 98.5897    Total: 1,007,894 us
@ruipacheco

Contributor Author

commented Oct 1, 2016

Can I link this issue back to him? I know it's a public github issue but still.

@gabime

Owner

commented Oct 1, 2016

Sure, I would be happy to get a second opinion

@gabime

Owner

commented Oct 1, 2016

As a bonus, here is an "ouch" from me too ;)
A simple g3log program that will crash the machine (for me it happened after writing about 700 MB of log file).

While spdlog works hard to protect the user from memory exhaustion by limiting the queue and employing push-backs (or log drops), g3log just happily logs along until the inevitable doom.

#include <cctype>    // toupper
#include <iostream>

#include <g3log/g3log.hpp>
#include <g3log/logworker.hpp>

void CrusherLoop()
{
    size_t counter = 0;
    while (true)
    {
        LOGF(INFO, "Some text to crash your machine. thread:");
        if (++counter % 1000000 == 0)
        {
            std::cout << "Wrote " << counter << " entries" << std::endl;
        }
    }
}


int main(int argc, char** argv)
{
    std::cout << "WARNING: This test will exhaust all your machine's memory and crash it!" << std::endl;
    std::cout << "Are you sure you want to continue? " << std::endl;
    char c;
    std::cin >> c;
    if (toupper(c) != 'Y')
        return 0;

    auto worker = g3::LogWorker::createLogWorker();
    auto handle = worker->addDefaultLogger(argv[0], "g3log.txt");
    g3::initializeLogging(worker.get());
    CrusherLoop();

    return 0;
}

@ruipacheco

Contributor Author

commented Oct 1, 2016

I’ve pasted a link to this GitHub issue on his blog post. Let’s see if he replies.


@KjellKod


commented Oct 1, 2016

Fun test @gabime. Good thing the user can use whatever sink they want on g3log

I do enjoy the performance comparisons. It's sweet that we now have at least two kick-ASS async loggers

@ruipacheco

Contributor Author

commented Oct 1, 2016

Good thing the user can use whatever sink they want on g3log

What are you referring to exactly?

@KjellKod


commented Oct 1, 2016

Actually, I commented too early. The queues would continue to grow no matter what. It's easy enough to replace the queue with anything else. Just wrap the queue (ringbuffer, std::queue or whatever) with the same API and it's interchangeable in all locations in g3log.

What I too quickly referred to was that g3log can use any sink as the logging sink, you are not bound to the default one.

Another note to make: in the tests above, if I remember spdlog's default queue size right, the max size isn't reached, so the tests never really exercise the worst-case latency. For g3log, meanwhile, the same test hit the worst case multiple times, as the internal queue buffer will double in size under extreme pressure.

Just for kicks make the spdlog queue size 10,000 as default size and you'll actually hit the worst case scenario with the same test code as shown above.

In the end it boils down to priorities of what is important for the client.

@gabime

Owner

commented Oct 1, 2016

Just for kicks make the spdlog queue size 10,000 as default size and you'll actually hit the worst case scenario with the same test code as shown above.

@KjellKod well, that would not be a fair comparison, would it? spdlog limiting itself to 10,000 slots, struggling to protect RAM, blocking the user on a full queue, waking him up again when there is room again, while g3log is just having fun, logging with no constraints, eating up the RAM.

(Note that spdlog also offers an async_overflow_policy::discard_log_msg flag to drop messages when the queue is full and never block the caller's thread. This might be handy in critical systems which should never block, no matter what happens to the disk or the queue.)
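The block-vs-drop trade-off being discussed can be sketched with a toy bounded queue. This is a simplified, single-threaded illustration of the two policies, not spdlog's actual queue; the class and method names are invented, and the "block" branch drains one entry synchronously as a stand-in for waiting on the consumer:

```cpp
#include <cstddef>
#include <deque>
#include <string>

// The two overflow policies under discussion. Names mimic the ones mentioned
// above, but this class is a toy, not spdlog's implementation.
enum class overflow_policy { block_retry, discard_log_msg };

class bounded_log_queue {
public:
    bounded_log_queue(std::size_t max_size, overflow_policy policy)
        : max_size_(max_size), policy_(policy) {}

    // Returns true if the message was enqueued, false if it was dropped.
    bool push(const std::string& msg)
    {
        if (queue_.size() >= max_size_) {
            if (policy_ == overflow_policy::discard_log_msg) {
                ++dropped_;          // never block the caller: shed the message
                return false;
            }
            // block_retry stand-in: a real logger would make the caller wait
            // for the consumer; here we drain one entry synchronously instead.
            queue_.pop_front();
        }
        queue_.push_back(msg);
        return true;
    }

    std::size_t size() const { return queue_.size(); }
    std::size_t dropped() const { return dropped_; }

private:
    std::deque<std::string> queue_;
    std::size_t max_size_;
    overflow_policy policy_;
    std::size_t dropped_ = 0;
};
```

Either way the queue never exceeds its bound; the policies only differ in who pays for overflow (the caller's latency versus lost messages), which is exactly the trade-off in this thread.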

@ruipacheco

Contributor Author

commented Oct 1, 2016

So,

make the spdlog queue size 10,000

and

spdlog limiting itself to 10,000 slots

Means... 10,000 messages in the queue?

@KjellKod


commented Oct 1, 2016

@ruipacheco yes, and running the test so it hits the queue limit

@gabime, that's my whole point. The loggers are made with different guarantees in mind.
For you it makes sense to run performance tests that will not trigger queue contention.
It's fine. It's your logger.

For me it makes sense to test this as an extreme case. For the systems I've worked with, getting the log with minimal worst-case latency is a key guarantee of the logger. Losing a message, or having one log entry take (was it 8 seconds?), is not OK.

The downside is that there is no limit to the queue size, so it is a matter of what the user wants most.

Guaranteed low worst-case latency up to the point of system death? Versus dropping messages and having some log entries take a very long time, seconds even, but the logger itself never taking up more than a guaranteed amount of RAM.

Both are extreme situations, but for some systems these extreme situations, and the behavior of the logger in them, matter a great deal.

@gabime

Owner

commented Oct 1, 2016

@KjellKod sorry, I've never heard of any system where it's OK to crash the entire machine under pressure, contention or not..

Moreover, g3log cannot give any "minimal worst case guarantee". After about 30,000,000 messages, it stops responding and hangs the process - give it a few more seconds and the whole machine halts.

I recommend putting a big fat warning in g3log's docs stating that it might crash the machine very badly under extreme situations (I needed to disconnect the power plug - this little test completely killed my mighty machine).

@KjellKod


commented Oct 2, 2016

Ditto under the other scenario.

Honestly, this is fun, but like I stated before, we had different requirements in mind when we designed this software.

I'm not dissing your baby, it's just the way it is

@gabime

Owner

commented Oct 2, 2016

To answer @ruipacheco's question, I think this discussion can be summarized as follows:

  • Both libraries display different worst-case behaviors, which might happen under extreme (and not very realistic) conditions, where numerous threads log hundreds of thousands of messages per second in a tight loop:
  • spdlog's worst case scenario: logging to a full queue in a tight loop (in "blocking" mode).
    Observed behaviour: hiccups in latency. A logging thread might get blocked for a long time now and then.
  • g3log's worst case scenario: logging to the queue faster than its consumption rate for long enough for memory to be exhausted.
    Observed behaviour: crash due to memory exhaustion.
  • Apart from those scenarios, the tests above (under Linux) show that spdlog has much better (5x) latency than g3log on average.
@KjellKod


commented Oct 2, 2016

I don't contest that. I did see worse worst-case latencies for spdlog even when the queue didn't get full. It might be a case of different CPU architecture and compiler optimization options, but overall I agree with your summary.

Worth noting is that this is all, or mostly, queue behavior. Like I stated in https://www.google.com/amp/s/kjellkod.wordpress.com/2015/06/30/the-worlds-fastest-logger-vs-g3log, with lock-free, bounded queues g3log became way faster on average latency. I'm just not interested in that, since the systems I work with care more about not having even one occasional latency hiccup.

@gabime

Owner

commented Oct 2, 2016

Lock-free queues are pretty tricky creatures, especially with regard to efficiently waiting on empty/full queues.. one cannot just use std::condition_variable or something (because it would defeat the "lockless" feature).

spdlog, for example, uses Dmitry Vyukov's state-of-the-art lock-free MPMC queue, but the logger works hard to provide the "blocking" feature on a full queue.

For the future I am considering some kind of combination of a lock-free + regular queue, but this stuff is really tricky to get right..

@KjellKod


commented Oct 2, 2016

If you get it right I would be very interested to hear about it. I've had similar thoughts, possibly using a swap mechanism to exchange lock-free queues as they pass a high watermark.

@ruipacheco

Contributor Author

commented Oct 2, 2016

This is a great debate and the C++ community is only better for it. @gabime's summary helped me make up my mind but I'm definitely listening to this.

@NRSoft


commented Oct 2, 2016

Interesting debate, indeed. Obviously there is no magic solution if the rate of logging is higher than what the system can handle. You can either:

  1. swap to disk with occasional hiccups (spdlog)
  2. increase RAM usage with eventual crash (g3log)
  3. overwrite previous messages (nanolog, spdlog)

I wonder if you can provide an option to automatically raise the logging level if such a situation is imminent. E.g. start dropping "debug" messages (which are usually numerous) in favour of "info", "warning" and higher, and so on. At least the more critical information will not be missed, and the whole system will be more stable.
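A toy sketch of that severity-shedding idea might look like the following. None of these names exist in spdlog or g3log; the class, the enum, and the eviction rule are all hypothetical, purely to illustrate dropping low-severity traffic first when the queue is full:

```cpp
#include <cstddef>
#include <deque>
#include <string>

// Hypothetical severity levels, ordered by importance.
enum level { debug = 0, info = 1, warn = 2, err = 3 };

class severity_dropping_queue {
public:
    severity_dropping_queue(std::size_t max_size, level drop_below)
        : max_size_(max_size), drop_below_(drop_below) {}

    // Returns true if the message made it into the queue.
    bool push(level lvl, const std::string& msg)
    {
        if (queue_.size() < max_size_) {
            queue_.push_back(msg);   // room available: accept everything
            return true;
        }
        if (lvl < drop_below_) {
            ++dropped_;              // full queue: shed low-severity traffic first
            return false;
        }
        queue_.pop_front();          // evict the oldest entry to admit an
        queue_.push_back(msg);       // important one
        return true;
    }

    std::size_t size() const { return queue_.size(); }
    std::size_t dropped() const { return dropped_; }

private:
    std::deque<std::string> queue_;
    std::size_t max_size_;
    level drop_below_;
    std::size_t dropped_ = 0;;
};
```

Under pressure, errors and warnings still get through while debug chatter is the first casualty, which matches the escalation idea above.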

@ruipacheco

Contributor Author

commented Oct 2, 2016

This does seem to be the saner course of action.

@gabime

Owner

commented Oct 2, 2016

This could certainly help in some situations.

Something like

spdlog::overflow_drop_level(level::debug);

Maybe also logging how many messages were dropped every once in a while.

The number of params and combinations for set_async_mode(..) is becoming too big, though..
I think it is also a good time to replace them with a struct.

@ruipacheco

Contributor Author

commented Oct 2, 2016

Also, how does spdlog handle messages when a crash happens? g3log's behaviour seems ideal, in that it logs everything.

@NRSoft


commented Oct 2, 2016

I would rather suggest something like

spdlog::overflow_escalate_level(true);

as we don't know ahead of time which level will clutter the logger. But obviously, see for yourself what will work better in your architecture.

@ruipacheco

Contributor Author

commented Oct 2, 2016

Levels categorize messages, so I want to keep the most important ones. Past a certain system load I don't mind losing all the info messages, as long as I can still see the errors.

@gabime

Owner

commented Oct 2, 2016

@ruipacheco there is no support for crash handling in spdlog

@ruipacheco

Contributor Author

commented Oct 2, 2016

Plans to add any?


@gabime

Owner

commented Oct 2, 2016

Not sure yet..

@ruipacheco

Contributor Author

commented Oct 2, 2016

g3log's behaviour is pretty good in that regard. Giving you information about a program's behaviour is the purpose of logging, after all.

@Iyengar111


commented Oct 7, 2016

Hi,

Have a look at https://github.com/Iyengar111/NanoLog#latency-benchmark-of-guaranteed-logger

The comparison shows it is faster than spdlog and g3log and provides the same strong guarantee that log lines will never be dropped.

Karthik

@KjellKod


commented Oct 7, 2016

Hi @Iyengar111

That's not correct. In fact there is less of a guarantee than even spdlog offers.

In the author's own words:

"In terms of the design, when the ring buffer is full, the producer just overwrites the log line in that slot [i.e. does not wait for consumer thread to pop the item].
In summary - the old log line will be lost when buffer is full."

@Iyengar111


commented Oct 7, 2016

It is correct. NanoLog has 2 policies, guaranteed and non guaranteed. Have a look at the link again. It is very clear the comparison refers to guaranteed logging. No log messages are ever dropped in guaranteed logging.

@KjellKod


commented Oct 7, 2016

My bad, I took that from this:
Iyengar111/NanoLog#1 (comment)

Nice stats!

What are the numbers when you push much harder? 1,000,000 entries?

@KjellKod


commented Oct 7, 2016

It's definitely inspirational to see @Iyengar111's and @gabime's work. Keep up the great work, both of you. You have definitely inspired me to make some changes ;)

Cheers

@ruipacheco

Contributor Author

commented Oct 7, 2016

@Iyengar111 - I'm very interested to know the number @KjellKod asked about. It seems like NanoLog is faster than spdlog and as safe as g3log, which is great.

@gabime

Owner

commented Oct 7, 2016

@Iyengar111 Nice code, clean and elegant..

If I understood it right, it spins like crazy when the queue gets full (indefinitely, no sleep, no yield, no blocking of any kind after a timeout).

So this impressive benchmark comes at the expense of burning the CPU in full-queue situations.

@Iyengar111


commented Oct 7, 2016

Hi,

Thank you!

The consumer thread goes to sleep if there aren't any log lines, so no spinning there.

On the producer side, a new buffer of 1024 log lines is pushed into a std::queue when the current buffer becomes full (i.e. every time 1024 log lines are logged). The thread that pushed the 1024th log line is responsible for allocating the next buffer. Any thread that concurrently logs during this time will spin-wait until the new buffer is allocated. Effectively the spin time is the time taken to set up the next buffer.

I would put an asm pause or _mm_pause() in the spin loops
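The buffer-chaining scheme described above might be sketched roughly like this. It is a simplified illustration, not NanoLog's actual code (all names are invented), and a correct multi-threaded hand-off needs more care than shown here:

```cpp
#include <atomic>
#include <cstddef>
#include <memory>
#include <queue>
#include <string>
#include <vector>

constexpr std::size_t kBufferSize = 1024;

// One chunk of pre-allocated log slots; writers claim slots with fetch_add.
struct Buffer {
    std::vector<std::string> lines{kBufferSize};
    std::atomic<std::size_t> next_slot{0};
};

class chained_log {
public:
    chained_log() : current_(new Buffer) {}
    ~chained_log() { delete current_; }

    void log(const std::string& line)
    {
        for (;;) {
            Buffer* buf = current_;
            std::size_t slot = buf->next_slot.fetch_add(1);
            if (slot < kBufferSize) {
                buf->lines[slot] = line;
                if (slot == kBufferSize - 1) {
                    // The writer of the last slot hands the full buffer to the
                    // consumer queue and installs a fresh one.
                    full_buffers_.push(std::unique_ptr<Buffer>(buf));
                    current_ = new Buffer;
                }
                return;
            }
            // Buffer already full: another writer is installing the next one;
            // a real implementation would spin-wait with a pause hint here.
        }
    }

    std::size_t full_buffer_count() const { return full_buffers_.size(); }

private:
    Buffer* current_;
    std::queue<std::unique_ptr<Buffer>> full_buffers_;
};
```

The spin only happens in the narrow window where the next buffer is being allocated, which is why the producer-side cost stays low most of the time.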



@Iyengar111


commented Oct 7, 2016

Continued from my previous comment:

The _mm_pause and asm pause instructions aren't part of the standard, so I haven't put them in the library....
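For reference, a hedged sketch of a portable pause helper along those lines; the preprocessor checks are illustrative and would need per-compiler verification:

```cpp
#include <atomic>
#include <thread>
#if defined(_MSC_VER)
#include <intrin.h>   // _mm_pause
#endif

// Issue a CPU "pause" hint inside spin loops where available; otherwise
// fall back to yielding the thread to the scheduler.
inline void cpu_pause()
{
#if defined(_MSC_VER) && (defined(_M_X64) || defined(_M_IX86))
    _mm_pause();                   // MSVC intrinsic for the x86 PAUSE instruction
#elif defined(__GNUC__) && (defined(__x86_64__) || defined(__i386__))
    __builtin_ia32_pause();        // GCC/Clang builtin for the same instruction
#else
    std::this_thread::yield();     // portable fallback on other platforms
#endif
}

// Example spin-wait that uses the pause hint while polling a flag.
inline void spin_until(const std::atomic<bool>& ready)
{
    while (!ready.load(std::memory_order_acquire))
        cpu_pause();
}
```

The pause hint reduces power use and pipeline contention in tight spin loops without changing their logic, which is why it is attractive here even though it is non-standard.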

@KjellKod


commented Oct 7, 2016

I'm a little sorry, gabime, for using your repo as a discussion thread. You guys are probably the best log-writers I know out there, so I hope you are OK with it.

I like the approach of NanoLog a lot. Like @gabime mentioned, the code is very clean and easy to read.

I'll probably reach out to you when I'm making g4log (roughly the same API as g3log but scaled down, no Windows support and using a lock-free MPSC queue), as I definitely see a possibility that I could have use for your queue. Have you considered putting the queue in a separate repo?

Since you don't block forever when filling up the queue, and the queue is basically a buffer at a time, you get huge speed improvements. Nice! For g4log I'll try to adopt a similar scheme, as I saw no bad effects from it when I tested yours, but I'll keep, of course, the crash safety aspects of g3log.

If that works out then I think NanoLog and spdlog will continue to win the race for the fastest logger ;) since I'll likely keep dynamic logging levels, which eat up a few CPU cycles, as well as adding "any sink" and calling that sink's API through a command queue. Either way the drop-in replacement g4log (for g3log) would keep the "don't lose logs even when crashing" quality.

... Unless of course you adopt the crash handling. In which case g4log would probably just be an API wrapper ;)


@ruipacheco

Contributor Author

commented Oct 7, 2016

no Windows support

@KjellKod - why?

@KjellKod


commented Oct 7, 2016

Two reasons

  1. I don't develop in a Windows environment at work, so I'm rusty (and I don't have a Windows dev environment)
  2. Due to 1, that part of the g3log code base is not as good as it should be, and I haven't gotten a "good enough" community boost on that side. Especially the crash handling is an API nightmare on Windows. It's like Microsoft was on crack when they decided how it should work.

Frankly, it would be better to have one version of g4log on *nix, and if someone were interested in making a completely different fork just for Windows, it could start from the *nix base and diverge from there.

What OS are you on @rui?


@ruipacheco

Contributor Author

commented Oct 7, 2016

I’m developing on Mac, deploying on Linux, Windows, FreeBSD and Solaris, in that order.

But hey, I’ve already settled on spdlog :)


@ruipacheco

Contributor Author

commented Oct 7, 2016

@Iyengar111 - are you planning to introduce log rotation?

@gabime

Owner

commented Oct 7, 2016

I'm a little sorry gabime for using your repo as a discussion thread.

@KjellKod That's OK, this is a very good discussion 👍

The thread that pushed the 1024th log line is
responsible for allocating the next buffer. Any thread that concurrently
logs during this time will spin wait until the new buffer is allocated.

@Iyengar111 Is there any limit to this? Or will it eventually consume all available memory if the producers' rate > consumer rate?

@Iyengar111


commented Oct 7, 2016

Hi

The log rotates to the next file every X MB, where X is passed at initialization.

Karthik


@Iyengar111


commented Oct 7, 2016

Hi,

Yes, it will consume all available memory if the extreme logging frequency continues indefinitely...

My view on this: if an application logs at a rate for which RAM is not enough, there is probably no way its disk I/O rate is good enough either... they ought to use the non-guaranteed logger with a big ring buffer size. Old log lines will be dropped.

@KjellKod Regarding crash handling, I don't see the point of reinventing what you have already done. People should just port the good work you have done on crash handling and give you credit. That's the same thing I wrote in the crash handling section of NanoLog's readme.

Karthik


@KjellKod


commented Oct 8, 2016

@Iyengar111 thanks


@gabime gabime closed this Oct 14, 2016
