Not Releasing Memory #48

Open
MattHodge opened this Issue Nov 5, 2015 · 33 comments


MattHodge commented Nov 5, 2015

Hi Boe,

Give the following a try in a new PowerShell window and use Task Manager to keep an eye on the memory usage:

Import-Module PoshRSJob

1..100 | Start-RSJob -Name {$_} -ScriptBlock {
  $i = 0 
  while ($i -lt 1000)
  {
    Set-Variable -Name "var$($i)" -Value (Get-Service)
    $i++
  }
}

# Wait for things to finish
Get-RSJob | Wait-RSJob

# Throw away the jobs
Get-RSJob | Remove-RSJob

You will notice over 1GB of memory usage even though the jobs have been thrown away.

Trying to clear the memory with [System.GC]::Collect() drops the usage to around 500MB, but it does not go any lower.
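
For reference, a minimal sketch for watching this from inside the session rather than Task Manager:

# Sketch: compare the working set before and after a forced full collection
$before = (Get-Process -Id $PID).WorkingSet64 / 1MB
[System.GC]::Collect()
[System.GC]::WaitForPendingFinalizers()
[System.GC]::Collect()   # second pass reclaims objects whose finalizers just ran
$after = (Get-Process -Id $PID).WorkingSet64 / 1MB
'Working set: {0:N0}MB -> {1:N0}MB' -f $before, $after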

Any ideas on how to free the memory without restarting the PowerShell process?

Thanks :)

Owner

proxb commented Nov 5, 2015

Interesting. I will have to take a look at that this weekend or next week and see what is going on.

MattHodge commented Nov 8, 2015

Thanks @proxb

@proxb proxb self-assigned this Nov 24, 2015

@proxb proxb added the question label Nov 24, 2015

@proxb proxb added the bug label Dec 9, 2015

Owner

proxb commented Dec 9, 2015

Just out of curiosity, what version of PowerShell are you running when you get the memory leaks?

MattHodge commented Dec 9, 2015

Hi @proxb I believe I was on 4.0 at the time (was either 4.0 or 5.0).

Owner

proxb commented Dec 10, 2015

Weird. I run this at work (Windows 7 w/ PowerShell v4) and it tops out at ~370MB, eventually dropping to 111MB (the starting point was 96MB).
On my home laptop (Windows 10 w/ PowerShell v5) I can duplicate your issue. I'm not sure yet what is going on, but I will continue to troubleshoot this.

Contributor

EsOsO commented Dec 10, 2015

Same here, memory leaks.

OS: Windows 10
Name                           Value
----                           -----
PSVersion                      5.0.10240.16384
WSManStackVersion              3.0
SerializationVersion           1.1.0.1
CLRVersion                     4.0.30319.42000
BuildVersion                   10.0.10240.16384
PSCompatibleVersions           {1.0, 2.0, 3.0, 4.0...}
PSRemotingProtocolVersion      2.3
Owner

proxb commented Dec 10, 2015

I'm trying to determine if this is a PoshRSJob issue or a PowerShell issue. I can duplicate this without the module by running this:

$PowerShell = [powershell]::Create()
$RunspacePool = [runspacefactory]::CreateRunspacePool()
$RunspacePool.Open()
$PowerShell.RunspacePool = $RunspacePool
[void]$PowerShell.AddScript({
    [appdomain]::GetCurrentThreadId()
    $i = 0 
    while ($i -lt 1000) {
        Set-Variable -Name "var$($i)" -Value (Get-Service)
        $i++
    }
})
#Begins allocating all memory
$Handle = $PowerShell.BeginInvoke()

While (-NOT $Handle.IsCompleted) { Start-Sleep -Milliseconds 100 }

$PowerShell.EndInvoke($Handle)
$PowerShell.RunspacePool.Dispose()
$PowerShell.Dispose()
Remove-Variable PowerShell,RunspacePool
[gc]::Collect()
[gc]::WaitForPendingFinalizers()
[gc]::Collect()

Memory jumps up immediately after running BeginInvoke(); there is a small release after the RunspacePool is disposed, but usage remains well above the starting level.

@proxb proxb added help wanted and removed question labels Dec 10, 2015


ryan-leap commented Dec 10, 2015

Boe,

I have scripts that use either normal PowerShell Jobs or PoshRSJobs, selected with a switch. What I have observed is that for large sets, either way (Jobs or RSJobs), the memory in the host stays elevated even after all jobs have completed and been received, variables cleared, etc. I'm inclined to believe it is a PowerShell problem.

Ryan


MattHodge commented Dec 10, 2015

I agree, @ryan-leap. I noticed the same with PowerShell jobs, which is why I gave PoshRSJob a try.

Owner

proxb commented Jan 8, 2016

Thanks @ryan-leap and @MattHodge.
I am keeping this open because I want to do more digging to see what I can find. I've been using ANTS Memory Profiler to look under the hood, and for some reason I see a lot of memory stuck in System.String and System.ServiceProcess.ServiceController objects (held in generation 2 of the garbage collector). Still trying to learn more about this stuff, so maybe I can find something useful at some point.


MattHodge commented Jan 8, 2016

Thanks for the update @proxb - sounds like an interesting problem!


gerane commented Feb 16, 2016

I am seeing the same issue. I noticed the ISE was becoming slow and unresponsive. I checked Task Manager and found a background task consuming over 4GB of memory.

OS: Windows 10
Name                           Value
----                           -----
PSVersion                      5.0.10586.0
PSCompatibleVersions           {1.0, 2.0, 3.0, 4.0...}
BuildVersion                   10.0.10586.0
CLRVersion                     4.0.30319.42000
WSManStackVersion              3.0
PSRemotingProtocolVersion      2.3
SerializationVersion           1.1.0.1
Contributor

EsOsO commented Feb 17, 2016

I recorded a video running the same test script in powershell.exe and in powershell.exe -Version 2.0 with the Process Explorer performance graph open. With -Version 2.0, memory is released after every execution and also after [gc]::Collect(); without it, memory is not released at all.

If you are interested in the video, drop me a line and I can upload it to YouTube.

Test Script:

1..10 | Start-RSJob {
    Start-Sleep -Milliseconds (Get-Random -Minimum 100 -Maximum 10000)
}

Get-RSJob | Wait-RSJob -ShowProgress
Owner

proxb commented Feb 18, 2016

@EsOsO If you can, go ahead and upload it and throw the link in here. That is interesting with your testing on v2, and not something I looked at (I mostly focused on v4/v5 testing).

Contributor

EsOsO commented Feb 22, 2016

Here's the link.

Owner

proxb commented Feb 23, 2016

Thanks! I'll check it out.


ALuckyGuy commented Mar 31, 2016

Edit: Removed - realized my comment was unrelated. Filing a separate issue.

potatoqualitee commented Jun 6, 2016

I have the same issue with my own runspace module. The only time I can really get it to release the memory is by doing it within the ScriptBlock, but that slows my script down tremendously.

Alternatively, if I run 1..3 | foreach { [System.GC]::Collect() } after the script has finished, it clears up immediately. There is no single place within the script where I can run that once and have it clear the memory. I have found that $Host.Runspace.ThreadOptions = "ReuseThread" helps, but not enough. All of my testing is on v4/v5.
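
A minimal sketch of that repeated-collection workaround (the pause between passes is an arbitrary choice):

# Sketch: collect several times so survivors are promoted through the generations and reclaimed
1..3 | ForEach-Object {
    [System.GC]::Collect()
    [System.GC]::WaitForPendingFinalizers()
    Start-Sleep -Milliseconds 200
}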

Owner

proxb commented Jun 10, 2016

Yeah, this is a weird issue that sometimes goes away on its own during garbage collection and other times only drops maybe half of the memory that was used. I wonder if the 1..3 is forcing objects from gen0 to gen1 to gen2 before being cleared out.

I've had a memory profiler on this and the results have been interesting, but I haven't had time to dive much deeper lately. I definitely need to come back to this and see what I can figure out. I think I'll watch it using the 1..3 approach you showed to see what happens. I have some wild ideas, such as tracking the thread and then killing it to see how that plays out, but I've read that is a dangerous practice if you are not sure what the thread is doing.

I found that when working with a RunspacePool, ThreadOptions is set to Default, which actually means ReuseThread, while a single runspace created via [runspacefactory]::CreateRunspace() also has ThreadOptions set to Default, which there means UseNewThread.
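
A quick sketch for checking this from the console; both report Default, but each type interprets Default differently:

# Sketch: compare ThreadOptions defaults for a pool vs. a single runspace
$pool = [runspacefactory]::CreateRunspacePool()
$rs = [runspacefactory]::CreateRunspace()
$pool.ThreadOptions   # Default -> treated as ReuseThread by a pool
$rs.ThreadOptions     # Default -> treated as UseNewThread by a single runspace
$pool.Dispose()
$rs.Dispose()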

sheldonhull commented Oct 21, 2016

I had to discontinue using PoshRSJob and revert to Invoke-Parallel. I had issues with the background jobs continually hanging, and memory usage grew to over 17GB. I repeated this several times. I couldn't identify what was causing the problem, as the processes running inside the runspace ran in the PowerShell_ISE context and didn't have separate threads I could identify activity on.

Invoke-Parallel worked fine for me, with the automatic variable import, module/function import, etc. It did error on Write-Verbose, but I removed that and it's running successfully without issue.

I'll definitely revisit this soon, as this is under active work, but for now Invoke-Parallel is stable for me while PoshRSJob is not.

Owner

proxb commented Oct 21, 2016

@sheldonhull What version of PoshRSJob were you using, and is it possible to see the code you were running? Just looking to duplicate your issue.

sheldonhull commented Oct 25, 2016

1.7.2.9. I've ensured the latest version on all machines. This is reproducible for me in both the PowerShell Tools project in Visual Studio and the ISE.

What additional information can I provide to help with reproducing?


Owner

proxb commented Nov 14, 2016

@sheldonhull It would be great to see the code you are using, if possible, to help reproduce this issue, as well as to see it run with Invoke-Parallel.

Owner

proxb commented Nov 14, 2016

I did some more testing with this. With a starting memory usage of ~100MB, I ran the initial code and watched it go to ~2GB before finishing at ~4GB. I waited until the RunspacePool was disposed and then ran [gc]::Collect(), which brought it down to ~500MB. After waiting a couple of seconds I ran it again and it bottomed out at ~289MB. I might look at forcing garbage collection only once a RunspacePool has been disposed and see if that lessens the memory impact.
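
In rough outline, the sequence described above (a sketch, assuming $RunspacePool is the pool that just finished):

# Sketch: dispose the pool first, then collect twice with a pause in between
$RunspacePool.Dispose()
[gc]::Collect()
[gc]::WaitForPendingFinalizers()
Start-Sleep -Seconds 2   # give the finalizers a moment
[gc]::Collect()          # second pass reclaims the newly finalized objects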

Owner

proxb commented Nov 15, 2016

I don't know when it happened, but sometime in the last 24 hours (I left my console session open) my memory dropped back down to its original console-startup level. I'm going to do some more testing to see exactly when this happens.

stej commented Nov 18, 2016

What's a simple script that shows the memory problem? I would try to run it and then maybe have a look with WinDbg. If a lot of memory is still held, there has to be some GC root that holds the data and prevents the GC from collecting it.

Owner

proxb commented Nov 21, 2016

@stej I just used the one that @MattHodge provided; it does a great job of holding some 1.5-2GB of memory on my machine prior to the two [gc]::Collect() calls I make at the end (waiting a couple of seconds between each call), which bring it down to about 200MB.

Import-Module PoshRSJob

1..100 | Start-RSJob -Name {$_} -ScriptBlock {
  $i = 0 
  while ($i -lt 1000)
  {
    Set-Variable -Name "var$($i)" -Value (Get-Service)
    $i++
  }
}

# Wait for things to finish
Get-RSJob | Wait-RSJob

# Throw away the jobs
Get-RSJob | Remove-RSJob

I was using Red Gate's ANTS Memory Profiler (http://www.red-gate.com/products/dotnet-development/ants-memory-profiler/) to look at where things are, but with my limited knowledge in this area I may not be pointing at an accurate picture of what is going on. I do see a lot of PSCustomObject and [ServiceController] data being held.

I haven't yet pushed the latest commit that adds garbage collection to the RunspacePool cleanup to free up the space after a few minutes, but I was planning to do so some time this week.

I'll be interested in seeing your results if you get time to look at it using WinDBG.

Owner

proxb commented Nov 22, 2016

Here is a screenshot after all of the garbage collection has been done: only ~250MB of memory is left, which is still about 120MB more than it started with. The blue line represents the last snapshot, while the grey is the snapshot taken before my two garbage collection runs.

image

Owner

proxb commented Jan 14, 2017

I added some garbage collection, run every 2 minutes, to the routine that cleans up the jobs; this should help with the memory issues that have been reported.
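
Not the module's actual code, but a rough sketch of that pattern (a 2-minute timer that forces a collection):

# Hypothetical sketch: periodic garbage collection on a 2-minute timer
$timer = New-Object System.Timers.Timer -Property @{ Interval = 120000; AutoReset = $true }
Register-ObjectEvent -InputObject $timer -EventName Elapsed -Action {
    [gc]::Collect()
} | Out-Null
$timer.Start()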

MVKozlov added a commit to MVKozlov/PoshRSJob that referenced this issue Apr 4, 2017

nightroman commented Dec 26, 2017

I saw similar symptoms when using runspace pools and submitted an issue to PowerShell:
PowerShell/PowerShell#5746

potatoqualitee commented Dec 26, 2017

Ohhh your mention that it's the pools! 💯 @FriedrichWeinmann check it

nightroman commented Dec 26, 2017

Yep, I recently redesigned my script Build-Parallel so that it uses plain runspaces without pools, and the leak problem was solved. By the way, at least in my scenario, I do not see any significant performance impact, if any at all. It did require some more coding, of course.
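
In rough outline, the pattern looks like this (a sketch, not Build-Parallel itself):

# Sketch: one standalone runspace per task, disposed as soon as the task completes
$rs = [runspacefactory]::CreateRunspace()
$rs.Open()
$ps = [powershell]::Create()
$ps.Runspace = $rs
[void]$ps.AddScript({ Get-Date })
$result = $ps.Invoke()
$ps.Dispose()
$rs.Close()
$rs.Dispose()
$result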

Contributor

MVKozlov commented Dec 26, 2017

I think the performance impact can only be evaluated with the rapid creation/removal of hundreds of tasks. However, throttling support would have to be done manually.
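
For illustration only, a hypothetical sketch of manual throttling over standalone runspaces (the throttle value and the work item are made up):

# Hypothetical sketch: manual throttling without a runspace pool
$Throttle = 4
$Jobs = New-Object System.Collections.ArrayList
foreach ($Item in 1..20) {
    while ($Jobs.Count -ge $Throttle) {
        foreach ($Job in @($Jobs | Where-Object { $_.Handle.IsCompleted })) {
            [void]$Job.PS.EndInvoke($Job.Handle)
            $Job.PS.Runspace.Dispose()
            $Job.PS.Dispose()
            $Jobs.Remove($Job)
        }
        Start-Sleep -Milliseconds 50
    }
    $RS = [runspacefactory]::CreateRunspace()
    $RS.Open()
    $PS = [powershell]::Create()
    $PS.Runspace = $RS
    [void]$PS.AddScript({ param($N) Start-Sleep -Milliseconds (100 * $N) }).AddArgument($Item)
    [void]$Jobs.Add([pscustomobject]@{ PS = $PS; Handle = $PS.BeginInvoke() })
}
# Drain the remaining jobs the same way once the loop ends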
