
Observed Memory Leak in WCF #10

Closed
zacherybohon opened this issue Jun 15, 2017 · 15 comments

Comments

@zacherybohon commented Jun 15, 2017

When we include the loggly-log4net's logglyAppender in our log4net config, it results in a memory leak. Removing it resolves the leak. Please let me know if you need any additional information to investigate this issue.

@mostlyjason commented Jun 15, 2017

Yes please let us know your version of .net, log4net, and paste your config. Also what activities trigger the leak and how long does it take?

@zacherybohon (author) commented Jun 19, 2017

.Net 4.5.2
Log4net version 1.2.15.0
We've tried rebuilding the log4net-loggly project with the newer version of log4net, but the issue still occurs. It seems that every time our WCF service calls GetLogger, it instantiates a new instance of the LogglyAppender class.

<?xml version="1.0" encoding="utf-8" ?>
<log4net>
    <logger name="myLogger">
        <level value="ERROR"/>
        <appender-ref ref="RollingBySize" />
        <appender-ref ref="LogglyAppender"/>
    </logger>

    <appender name="RollingBySize" type="log4net.Appender.RollingFileAppender">
        <file value="logs\webService.log" />
        <PreserveLogFileNameExtension value="true" />
        <appendToFile value="true" />
        <rollingStyle value="Size" />
        <maxSizeRollBackups value="10" />
        <maximumFileSize value="10MB" />
        <staticLogFileName value="true" />
        <layout type="log4net.Layout.PatternLayout">
            <conversionPattern value="%date %-5level %x %logger - %message%newline" />
        </layout>
    </appender>

    <appender name="DebugAppender" type="log4net.Appender.DebugAppender">
        <layout type="log4net.Layout.PatternLayout">
            <conversionPattern value="%-5level - %message%newline" />
        </layout>
    </appender>
    <appender name="LogglyAppender" type="log4net.loggly.LogglyAppender, log4net-loggly">
      <rootUrl value="https://logs-01.loggly.com/" />
      <inputKey value="TOKEN" />
      <tag value="log4net" />
    </appender>
</log4net>
@loggly-opensource commented Jun 19, 2017

@zacherybohon (author) commented Jun 19, 2017

I am using the latest code from this Git repo.

@mostlyjason commented Jun 20, 2017

Okay thanks for giving us the information! I think we have what we need. Our developer is currently working on some other tasks and will hopefully be able to work on it sometime this quarter. If you figure it out before us, please feel free to submit your findings or even a pull request.

@YouGitMoreWithLes commented Jul 22, 2018

I am running the same versions and configuration as above and have a memory leak in a Windows Service. Since this post is from 2017, I am going to assume that Loggly does not want to fix this, and I will need to find a better logging resource.

@Shwetajain148 commented Jul 24, 2018

Hi @YouGitMoreWithLes, sorry for the delay here. This issue was a priority on our list but unfortunately got skipped due to other tasks. Please give me some time to look into this, and I'll try to come up with a solution soon.

Thank you for your patience.

@YouGitMoreWithLes commented Jul 24, 2018

@Katulus commented Nov 17, 2018

@YouGitMoreWithLes, @zacherybohon Hello, I'm looking into this, and it would help if you could provide sample code that causes the memory leak in your application.
The GetLogger() call should not be the cause; it always returns the same instance of the logger as long as you call it with the same logger name or logger type.

The only scenario where I can see memory usage increase is when logs are produced faster than they can be sent to Loggly. In that case the logs are queued, and the queue can grow indefinitely as long as logs keep arriving faster than they are sent. Could that be your case? If so, would you prefer an option to limit the queue size and lose some logs when the queue gets full?
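For reference, a minimal sketch of the caching behavior described above (the logger name is just an example; assumes the log4net NuGet package is referenced):

```csharp
// Minimal sketch: log4net caches ILog wrappers per logger name/type,
// so repeated GetLogger calls return the same instance.
using System;
using log4net;

class GetLoggerCacheDemo
{
    static void Main()
    {
        ILog first = LogManager.GetLogger("myLogger");
        ILog second = LogManager.GetLogger("myLogger");

        // Same cached instance, not a new logger (or appender) per call.
        Console.WriteLine(ReferenceEquals(first, second));
    }
}
```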

@zacherybohon (author) commented Nov 17, 2018

@Katulus, I’ve moved to another org so I no longer have access to the source code, but @loweva will probably be able to work with you on this.

@loweva commented Dec 29, 2018

@Katulus : Thank you for your response/investigation. Please see my original post here, which contains our sample config file.

In our code, we instantiate one logger per service file, as shown in the example below:
private static ILog log { get; } = LogManager.GetLogger(typeof(MyService));

A search of our code shows that we have 79 such invocations, which seems reasonable.

It's hard to say whether we're logging too prodigiously for the logger to keep up, because I don't know what the "problem" threshold is. But between 15 Dec and 17 Dec 2018, on one of our highly trafficked systems, we logged 6.3K individual entries straight to a local log file via log4net (not to Loggly). If we hadn't disabled the Loggly appender to resolve the memory leak, the numbers would have been the same.

In looking over the notes from my investigation almost two years ago, I was convinced that a new LogglyAppender was always being instantiated with each new Log.Error() call because a cast exception was being thrown whenever GetLog() was called, as shown in the sample stack trace below.

System.InvalidCastException: Unable to cast object of type 'log4net.loggly.LogglyAppender' to type 'log4net.Appender.IAppender'.
   at log4net.Repository.Hierarchy.XmlHierarchyConfigurator.ParseAppender(XmlElement appenderElement) in C:\Users\<me>\Git\log4net\src\Repository\Hierarchy\XmlHierarchyConfigurator.cs:line 324

@zacherybohon also found that log4net-loggly "is creating a new instance of the logglyAppender object for each call", but he was unable to catch/see the cast exception that I pasted above. He suspected "that the logglyAppender is not being disposed of when it goes out of scope". But I believe that after additional investigation he thought that log4net-loggly was "not correctly disposing of the timer for the batches."

Thanks for any assistance/insight you can provide!

cc: @YouGitMoreWithLes / @finoutlook

@Katulus commented Jan 2, 2019

@loweva, thanks for the details. 6.3k events per two days should be far below any problematic threshold. When I was talking about fast logging, I meant thousands of events per minute.

Looking at your System.InvalidCastException and at the code of LogglyAppender, it should not happen unless you use a different version of log4net than the one log4net-loggly was built for, which is 2.0.8. Can you check which version of log4net.dll is deployed with your application? And which version of log4net-loggly do you use now? If it's older than 8.0, can you try to upgrade? When I run stress tests on the current codebase (8.0.1), I don't see any memory leak.
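One quick way to see which log4net build is actually loaded into the process (a small diagnostic sketch, not part of log4net-loggly):

```csharp
using System;
using log4net;

class Log4netVersionCheck
{
    static void Main()
    {
        // Prints the full name, including the version, of the log4net
        // assembly loaded into this process.
        Console.WriteLine(typeof(LogManager).Assembly.FullName);
    }
}
```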

The instance of LogglyAppender is not created by log4net-loggly code but by log4net itself. According to my tests, it creates just one instance per given logger name/type, so this should not be the source of the issue - unless that System.InvalidCastException causes log4net to try creating the appender again the next time a log method is called. In any case, it's not something we can control.

The same applies to disposing: log4net itself takes care of that. You are correct that the timer we use for batching is never disposed, but if everything works as expected there should be just one timer per logger - in your case 79 of them - and they should live through the whole process lifetime. So no leak source there.
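To make the timer point concrete, here is a hypothetical sketch of how an appender that batches with a timer would normally release it; the names are illustrative, not the actual log4net-loggly implementation:

```csharp
// Hypothetical sketch of an appender that flushes batches on a timer
// and disposes it when log4net shuts the appender down.
using System.Threading;
using log4net.Appender;
using log4net.Core;

public class BatchingAppenderSketch : AppenderSkeleton
{
    private readonly Timer _flushTimer;

    public BatchingAppenderSketch()
    {
        // Flush the pending batch every 5 seconds.
        _flushTimer = new Timer(_ => FlushBatch(), null, 5000, 5000);
    }

    protected override void Append(LoggingEvent loggingEvent)
    {
        // Queue the event for the next batch (omitted).
    }

    private void FlushBatch()
    {
        // Send queued events to the remote endpoint (omitted).
    }

    protected override void OnClose()
    {
        // log4net calls OnClose on shutdown; disposing the timer here
        // keeps it from living past the appender's lifetime.
        _flushTimer.Dispose();
        base.OnClose();
    }
}
```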

If you are on the latest version, using log4net 2.0.8, and still seeing the leak, it would be great if you could provide either a small sample app that reproduces the issue or a memory dump of the process, so that we can investigate what memory is leaking and how it relates to log4net-loggly.

@Katulus commented Jan 29, 2019

@loweva, a new version (beta) has just been released. You can get it at https://github.com/loggly/log4net-loggly/releases/tag/v9.0.0-beta1 or via NuGet. It's an almost complete rewrite of the library, and none of the tests we ran showed any memory issues. Can you give it a try and let us know whether the issue is gone or you still see memory leaks?

@loweva commented Feb 20, 2019

@Katulus : We have tried again with the new version, and we see no evidence of memory leaks! Thank you!

@Katulus commented Feb 20, 2019

Thanks for the confirmation. Closing the issue.

@Katulus closed this Feb 20, 2019