Better Async performance with many concurrent logger threads (up to 500%+) - Async attribute and AsyncWrapper #2650
Conversation
Codecov Report
```diff
@@            Coverage Diff            @@
##           master    #2650     +/-  ##
=========================================
- Coverage      81%      81%     -<1%
=========================================
  Files         326      327       +1
  Lines       24414    24564     +150
  Branches     3107     3135      +28
=========================================
+ Hits        19689    19793     +104
- Misses       3863     3920      +57
+ Partials      862      851      -11
```
After running some performance tests, it looks like the allocations caused by … These are the performance results when combined with #2653.
Looks like the Net40 ConcurrentQueue can be dangerous: https://blogs.msdn.microsoft.com/pfxteam/2012/05/08/concurrentqueuet-holding-on-to-a-few-dequeued-elements/ (dequeued elements can stay referenced by the queue's internal segments, delaying garbage collection). Will only have it enabled by default on NETSTANDARD2_0 to avoid random complaints.
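A minimal sketch of how such a per-framework default could look. `ConcurrentRequestQueue` and `AsyncRequestQueue` are the class names discussed in this PR; the exact selection logic, field names, and constructor parameters here are assumptions for illustration:

```csharp
// Hypothetical sketch only: choose the queue implementation per target framework.
#if NETSTANDARD2_0
// .NET Core's ConcurrentQueue<T> does not retain dequeued elements,
// so the lock-free queue can safely be the default here.
_requestQueue = new ConcurrentRequestQueue(QueueLimit, OverflowAction);
#else
// On older frameworks, keep the lock-based queue as the default to avoid
// the segment-retention issue described in the linked blog post.
_requestQueue = new AsyncRequestQueue(QueueLimit, OverflowAction);
#endif
```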
Wow, nice improvements! The queue is used for the async and the buffering target wrapper, isn't it? (Now on mobile)
Nope, it is only used by the AsyncTargetWrapper. The BufferingTargetWrapper uses a container called LogEventInfoBuffer.
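For readers from outside the project, a minimal NLog configuration sketch showing the two wrappers side by side (target names, file names, and the buffer size are illustrative, not taken from this PR):

```xml
<nlog>
  <targets>
    <!-- AsyncWrapper: hands log events to a background thread via the request queue -->
    <target name="asyncFile" xsi:type="AsyncWrapper">
      <target xsi:type="File" fileName="app.log" />
    </target>
    <!-- BufferingWrapper: collects events (LogEventInfoBuffer) and flushes them in batches -->
    <target name="bufferedFile" xsi:type="BufferingWrapper" bufferSize="100">
      <target xsi:type="File" fileName="batched.log" />
    </target>
  </targets>
  <rules>
    <logger name="*" minlevel="Info" writeTo="asyncFile" />
  </rules>
</nlog>
```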
I changed the title to make it clearer for outsiders :)
Well, you only see the performance improvement if you are very aggressive against one async target. Normal users will probably see a very small performance drop (see my test with 1 thread), but it is a better out-of-the-box experience for those special cases. Also, the performance improvement only applies when ALL layout renderers are thread-safe or thread-agnostic. See also #2653
```csharp
get { return _requestQueue is AsyncRequestQueue; }
set
{
    if (value != _requestQueue is AsyncRequestQueue)
```
I prefer the pattern matching syntax here, e.g. `is AsyncRequestQueue queue` (C# 7).
Not sure I understand, but will add some parentheses.
When using pattern matching, the cast (on the line below) is not needed.
Still don't understand
```csharp
/// <summary>
/// Gets the number of requests currently in the queue.
/// </summary>
public int Count => (int)_count;
```
Isn't this tricky?
The Count is only there for debugging; IsEmpty is the one used for decisions. I think the container (or the computer) will break with more than two billion items anyway.
OK, clear. Maybe we should state that in the `<remarks>`.
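The `<remarks>` suggestion above could look like this (the wording is illustrative, not from the PR):

```csharp
/// <summary>
/// Gets the number of requests currently in the queue.
/// </summary>
/// <remarks>
/// Approximate value, intended for debugging only; use IsEmpty for decisions.
/// The cast assumes the count stays below int.MaxValue (~2 billion items).
/// </remarks>
public int Count => (int)_count;
```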
Looks good, some small questions about the structure of the code.
If preferred, I could also push a commit on this.
```csharp
get { return _requestQueue is AsyncRequestQueue; }
set
{
    if (value != (_requestQueue is AsyncRequestQueue))
```
I would prefer having a backing boolean for this (e.g. `_forceLockingQueue`), and resetting `_requestQueue` when `value != _forceLockingQueue`. That would be more logical IMO. What do you think?
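A sketch of the shape the reviewer is suggesting. Only `_forceLockingQueue` is named in the comment above; the property name, base type, and constructor parameters here are assumptions for illustration:

```csharp
// Hypothetical sketch of the suggested backing-boolean refactor.
private bool _forceLockingQueue;

public bool ForceLockingQueue
{
    get { return _forceLockingQueue; }
    set
    {
        if (value != _forceLockingQueue)
        {
            _forceLockingQueue = value;
            // Recreate the queue with the requested implementation.
            _requestQueue = value
                ? (AsyncRequestQueueBase)new AsyncRequestQueue(QueueLimit, OverflowAction)
                : new ConcurrentRequestQueue(QueueLimit, OverflowAction);
        }
    }
}
```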
PS @snakefoot As you may have noticed, we now also have a "dev" branch. I'm switching to the gitflow workflow because I also use it at work on some larger projects and it works pretty well. This means that important bugfixes should go to "master" and then need a "backport" to "dev".
I don't mind the current flow, where small PRs can race complicated PRs, but I can see the need for a place to prepare the 4.6 release and create beta versions.
I think it is okay that complicated PRs stay pending for a long time. It kind of inspires you to make small and simple PRs. Something that I need to practise :)
Yeah, I also like small and simple PRs. Easier to review/test/understand.
@304NotModified Updated the PR with the requested changes.
Nice!!
@304NotModified You merged this into …
Oops, thx.
Now on dev.
Updated documentation: https://github.com/NLog/NLog/wiki/AsyncWrapper-target
Thanks!
ConcurrentQueue works very well on NetCore2, but I'm not sure when the improvements will reach NetFramework.
When 8 threads are hammering the same async wrapper, the ConcurrentRequestQueue performs twice as fast on NetCore2. This should help fix #2627.