Expected Behavior
Intense logging, particularly at start-up, does not pose a problem and all logs are properly stored.
Actual Behavior
Intense logging cuts off at seemingly arbitrary points (>500 messages in <250 ms) and it's hard to get a full picture of the health of the live algorithm.
Potential Solution
https://github.com/QuantConnect/Lean/blame/33ae9c9f1aa7dad3869f16a340c265d364bf12cf/Engine/Results/LiveTradingResultHandler.cs#L475
These limits were put in place long ago; my question is whether they should be revisited.
I can see two potential issues with removing the limit, both of which (imho) should be fixed outside of LEAN:

1. Message flooding causes the browser to hang. This should be fixed in the web UI; it should not rely on LEAN to avoid flooding it.
2. Message flooding causes an IO bottleneck when writing log files. I'm not sure this is a real problem, since each line is written one at a time, and the log queue shouldn't block the rest of the algo's execution (I don't think it does). So in theory, even with a million lines in the queue, they will eventually be written out. The only legitimate problem is an algo producing an infinite loop of messages. To protect against that case, the limit should be set significantly higher, and a verbose warning should be produced when log messages are being dropped from the queue.
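The protection proposed in point 2 (a much higher cap, a loud warning when messages are dropped, and a writer that drains the backlog one line at a time without blocking the algorithm) can be sketched as follows. This is illustrative Python, not LEAN's actual C# implementation; the class and method names here are made up:

```python
import queue
import threading

class BoundedLogQueue:
    """Sketch of the proposed behavior: a generous cap plus a loud
    warning on overflow, instead of silent truncation.
    (Hypothetical; not LEAN's actual result-handler code.)"""

    def __init__(self, max_size=100_000):
        self._queue = queue.Queue(maxsize=max_size)
        self._dropped = 0
        self._lock = threading.Lock()

    def enqueue(self, message):
        """Called from the algorithm thread; never blocks."""
        try:
            self._queue.put_nowait(message)
        except queue.Full:
            with self._lock:
                self._dropped += 1
                if self._dropped == 1:
                    # Surface the problem loudly rather than dropping silently.
                    print("WARNING: log queue full; further messages are being dropped")

    def drain(self):
        """Writer side: pop whatever is queued, one line at a time, so even
        a huge backlog is eventually flushed without stalling the algo."""
        lines = []
        while True:
            try:
                lines.append(self._queue.get_nowait())
            except queue.Empty:
                return lines
```

Because `enqueue` uses `put_nowait`, an infinite-loop logger can never stall the algorithm thread; it only triggers the warning and loses the excess lines.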
Reproducing the Problem
Imagine you're running an algo on the S&P 500 universe and you want to log one message per stock during initialisation of the algo, to confirm that e.g. each stock has been subscribed to. You would likely already be above the limit, leaving you unsure whether your stocks had been subscribed to properly.
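To make the scenario concrete, here is a toy model of a per-window message cap. The 500-messages-per-250-ms figures mirror the cutoff observed above, but the mechanism and exact numbers are assumptions for illustration, not LEAN's actual throttling code:

```python
class WindowedLogLimiter:
    """Toy per-window message cap: at most `limit` messages are kept
    per `window_ms` milliseconds. Values are assumed, not LEAN's."""

    def __init__(self, limit=500, window_ms=250):
        self.limit = limit
        self.window_ms = window_ms
        self.count = 0
        self.window_start = 0

    def allow(self, now_ms):
        # Start a fresh window once the previous one has elapsed.
        if now_ms - self.window_start >= self.window_ms:
            self.window_start = now_ms
            self.count = 0
        self.count += 1
        return self.count <= self.limit

# One startup log line per S&P 500 constituent (plus a few extras),
# all fired within the same 250 ms window at initialisation:
limiter = WindowedLogLimiter()
logged = sum(limiter.allow(now_ms=0) for _ in range(505))
# Only the first 500 survive; the remaining subscription confirmations vanish,
# which is exactly why you can't tell whether every stock was subscribed.
```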
Checklist
I have completely filled out this template
I have confirmed that this issue exists on the current master branch
I have confirmed that this is not a duplicate issue by searching issues
Logging is one place where the community routinely kills their own algorithms without realizing why. We provide desktop abstractions for the LEAN open source platform to allow customization of this behavior.

The best option would be for you to make a "DougLiveTradingResultHandler" and remove the limits for your own use. We decided that if we remove it from the cloud it'll cause more complaints than it'll solve.
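Under the assumption that the throttle lives in an overridable method of the result handler, the suggested subclassing approach might look roughly like the sketch below. Every name here is hypothetical and shown in Python for brevity; LEAN's real result handlers are C# and structured differently:

```python
class LiveTradingResultHandler:
    """Stand-in for LEAN's handler; the real class and members differ."""
    MESSAGE_LIMIT = 500  # assumed placeholder, not the actual constant

    def should_store(self, messages_in_window):
        # Default behavior: drop anything past the per-window cap.
        return messages_in_window < self.MESSAGE_LIMIT


class DougLiveTradingResultHandler(LiveTradingResultHandler):
    """Per the maintainer's suggestion: a custom handler for your own
    deployment that overrides the throttle and keeps every message."""

    def should_store(self, messages_in_window):
        return True
```

The point of the design is that the cloud keeps its conservative default, while a self-hosted deployment swaps in the subclass and accepts the flooding risk itself.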