
Filetarget allocations optimization (up to 15% speed improvement) #1871

Merged
3 commits merged into NLog:master on Jan 11, 2017

Conversation

nazim9214
Contributor

@nazim9214 commented Dec 21, 2016

  1. Removed the new-string allocation in the GetBytesToWrite method. Now we get the byte count for the text and the newline characters and write both directly into a single byte array.

  2. Allocate the MemoryStream directly with the required buffer size (see the sketch below).
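A minimal sketch of the second point, assuming the bucket's events are converted to byte arrays first so the total length is known before the stream is created (the names bucket and eventBytes are illustrative, not the exact NLog code):

// Illustrative only: build per-event byte arrays first, sum their lengths,
// then allocate the MemoryStream once with exactly that capacity.
byte[][] eventBytes = new byte[bucket.Count][];
int totalLength = 0;
for (int i = 0; i < bucket.Count; i++)
{
    eventBytes[i] = this.GetBytesToWrite(bucket[i].LogEvent);
    totalLength += eventBytes[i].Length;
}

// The stream already has its final capacity, so no internal buffer
// resizing (and no extra Gen0 garbage) occurs while writing the bucket.
using (var ms = new MemoryStream(totalLength))
{
    for (int i = 0; i < eventBytes.Length; i++)
    {
        ms.Write(eventBytes[i], 0, eventBytes[i].Length);
    }
    // ... the stream is then flushed to the target file ...
}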

Performance results for this test https://github.com/NLog/NLog-Performance-tests/blob/master/NLogPerformance
with the following parameters:

| Messages | Size | Threads | Loggers |
|----------|------|---------|---------|
| 1000000  | 100  | 2       | 1       |

| version   | Time (ms) | Msgs/sec | GC2 | GC1 | GC0 | CPU (ms) | Mem (MB) |
|-----------|-----------|----------|-----|-----|-----|----------|----------|
| master    | 14,885    | 67,178   | 328 | 409 | 848 | 28,671   | 51.051   |
| optimized | 12,914    | 77,432   | 179 | 324 | 704 | 25,000   | 47,344   |


@304NotModified
Member

@snakefoot as you are active on the FileTarget, your opinion please :)
Is this one also clashing with another PR?

@nazim9214 thanks!

// textBytesCount (the encoded length of the message text) is computed earlier in the method.
// Encode the message text and the newline characters straight into one
// correctly sized byte array, instead of concatenating strings first.
int newLineBytesCount = this.Encoding.GetByteCount(this.NewLineChars);
byte[] bytes = new byte[textBytesCount + newLineBytesCount];
this.Encoding.GetBytes(text, 0, text.Length, bytes, 0);
this.Encoding.GetBytes(this.NewLineChars, 0, this.NewLineChars.Length, bytes, textBytesCount);
@304NotModified
Member

@nazim9214

As NewLineChars & Encoding almost never change while running, I was thinking of "caching" this (static variable). I noticed earlier while performance testing that using fixed bytes for the newline improves performance by ~10-20% (but that was a limited test).

Your thoughts please :)
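A minimal sketch of that caching idea, shown here as a lazily initialized instance field rather than a static (since Encoding can differ per target); the member names are assumptions, not existing NLog code:

// Illustrative sketch only: cache the encoded newline so it is not
// re-encoded for every single log message.
private byte[] cachedNewLineBytes;

private byte[] GetCachedNewLineBytes()
{
    if (this.cachedNewLineBytes == null)
    {
        this.cachedNewLineBytes = this.Encoding.GetBytes(this.NewLineChars);
    }
    return this.cachedNewLineBytes;
}

Such a cache would of course need to be invalidated whenever Encoding or NewLineChars changes, which is the complexity discussed in the reply below.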

@nazim9214
Contributor Author

@304NotModified

I wrote a version where the 'line ending bytes' are computed only when the Encoding/LineEnding property of FileTarget is set (as I understand it, the user can change these properties at runtime in some way?). A rough sketch of that variant is below.
My tests on small and medium message sizes show the same throughput as this implementation,
so I think we shouldn't add more complexity to the codebase when there is no significant benefit.
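For reference, a sketch of that variant, with the cached bytes refreshed in the property setters (the property shapes are simplified and illustrative, not the real FileTarget members):

// Illustrative sketch only: simplified properties that refresh the cached
// newline bytes whenever Encoding or NewLineChars is reassigned.
private Encoding fileEncoding = Encoding.Default;
private string newLineChars = Environment.NewLine;
private byte[] newLineBytes;

public Encoding Encoding
{
    get { return this.fileEncoding; }
    set { this.fileEncoding = value; RefreshNewLineBytes(); }
}

public string NewLineChars
{
    get { return this.newLineChars; }
    set { this.newLineChars = value; RefreshNewLineBytes(); }
}

private void RefreshNewLineBytes()
{
    // Runs only when a property changes, so the hot write path never
    // calls Encoding.GetBytes for the newline characters again.
    this.newLineBytes = this.fileEncoding.GetBytes(this.newLineChars);
}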

@codecov-io

codecov-io commented Dec 21, 2016

Current coverage is 81% (diff: 100%)

Merging #1871 into master will decrease coverage by <1%

@@             master      #1871   diff @@
==========================================
  Files           276        276          
  Lines         18519      18652   +133   
  Methods        2860       2862     +2   
  Messages          0          0          
  Branches       2143       2142     -1   
==========================================
+ Hits          15097      15180    +83   
- Misses         2926       2969    +43   
- Partials        496        503     +7   


Powered by Codecov. Last update a7293bb...9914bdb

@snakefoot
Contributor

snakefoot commented Dec 21, 2016

It is a good optimization of the old code path without breaking interfaces.

It will of course conflict with my PR #1799, which introduces a new code path that allows reuse of the MemoryStream + byte[] array + StringBuilder. But I can fix that.

@snakefoot
Contributor

snakefoot commented Dec 23, 2016

Instead of building an array of byte arrays to get an exact memory allocation, could it not just be:

// Convert the log event to bytes once; its length drives the estimate.
byte[] bytes = this.GetBytesToWrite(ev.LogEvent);
// Pre-size the MemoryStream only on the first write (Capacity == 0),
// using the first event's length as a rough per-event estimate.
if (ms.Capacity == 0)
{
   if (bucketCount > 10)
      // For larger buckets, round the estimated total size up to the next KB.
      ms.Capacity = ((((bucketCount + 1) * bytes.Length) / 1024 + 1) * 1024);
   else if (bucketCount > 1)
      // For small buckets, reserve space for all events plus one extra.
      ms.Capacity = ((1 + bucketCount) * bytes.Length);
}
ms.Write(bytes, 0, bytes.Length);

Of course this very crude calculation can be thrown off by very uneven string lengths, especially when the first string happens to be the shortest one.
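To make the rounding concrete (using integer division, as in the snippet above): with bucketCount = 50 and a first event of bytes.Length = 120, the capacity becomes ((51 * 120) / 1024 + 1) * 1024 = (6120 / 1024 + 1) * 1024 = 6 * 1024 = 6144 bytes, just above the estimated total of 6120; if later events are much longer than the first one, the stream still has to grow beyond that estimate.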

@nazim9214
Contributor Author

Yes, we could use this approach.
I ran some tests:

  1. For a constant message size (100 chars, same test as in the PR) this approach is better than the PR implementation: ~77k msg/sec (exact calculation) vs ~80k msg/sec (approximate calculation).
  2. For random message sizes (50-100 chars) the situation reverses: ~83k (exact calculation) vs ~81k (approximate calculation).

I'm not sure which of the approaches to select; your thoughts please @304NotModified @snakefoot

@304NotModified
Member

I think random is more important, but also not sure if <5% improvement is worth the change

@snakefoot
Contributor

> I think random is more important, but also not sure if <5% improvement is worth the change

It will just be easier for me to merge with #1799 if I don't need to keep track of an array of byte arrays.

I guess in NLog 5.0 the old code path will be removed completely, and the MemoryStream will just be reused instead of being allocated and thrown away constantly.

@nazim9214
Contributor Author

OK, then I'll rewrite this piece of code as you proposed.

@304NotModified
Member

👍 thanks in advance!

@304NotModified changed the title from "Filetarget allocations optimization" to "Filetarget allocations optimization (up to 15% speed improvement)" on Jan 11, 2017
@304NotModified added this to the 4.4.2 milestone on Jan 11, 2017
@304NotModified merged commit b8d326f into NLog:master on Jan 11, 2017
@304NotModified
Member

Thanks again!
