NOTE: the above two functions and their tests utilize only a single thread.
NOTE: There appears to be a memory ceiling at ~25GB beyond which mprof is unable to observe. E.g. when running astropy.convolution.convolve_fft(image, kernel, allow_huge=True), the OSX activity monitor records a peak usage of ~47GB with ~22GB compressed, equating to ~25GB "uncompressed". ~6GB is background usage by other apps open during testing, yielding the 25GB out of the 32GB total available. It appears that mprof is oblivious to this compressed memory.
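mprof samples the process's resident set size, and pages that the OS has compressed no longer count toward RSS, which is consistent with the ceiling described above. A minimal illustration of RSS-style measurement using only the standard library (my own sketch, not the benchmark code):

```python
import resource
import sys

# Peak resident set size of this process, as kernel accounting reports it.
# Platform quirk: ru_maxrss is in bytes on macOS but kilobytes on Linux.
peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
scale = 1e9 if sys.platform == "darwin" else 1e6
print(f"peak RSS: {peak / scale:.3f} GB")
```

Compressed or swapped-out pages are invisible to RSS-based sampling, so OS-level monitors can report far higher totals than a profiler like mprof.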
Results (10k x 10k with 111 x 111)
Memory Deck
The minimum memory required for both arrays to be stored in memory:
0.8GB (float64)
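The 0.8GB figure follows directly from the array shapes (float64 = 8 bytes per element):

```python
# 10k x 10k float64 image plus a 111 x 111 float64 kernel.
image_bytes = 10_000 * 10_000 * 8
kernel_bytes = 111 * 111 * 8              # negligible beside the image
total_gb = (image_bytes + kernel_bytes) / 1e9
print(round(total_gb, 1))                 # 0.8
```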
Direct
The above linear increase in memory, as a function of time, is indicative of it "leaking" (may not be a literal leak).
FFT
Results (10k x 10k with 10k+1 x 10k+1)
NOTE: the plot titles below are incorrect for the kernel size: they are missing the +1 in both dimensions.
Memory Deck
The minimum memory required for both arrays to be stored in memory:
1.6GB (float64)
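Likewise, the 1.6GB minimum checks out from the shapes; the kernel is now as large as the image:

```python
image_bytes = 10_000 * 10_000 * 8         # float64 image
kernel_bytes = 10_001 * 10_001 * 8        # (10k+1) x (10k+1) float64 kernel
total_gb = (image_bytes + kernel_bytes) / 1e9
print(round(total_gb, 1))                 # 1.6
```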
Direct
Estimates assuming linear scaling:
Astropy default (None): ~141 days. Actual memory used ~2.3GB (incomplete run).
Scipy default (reflect): ~101.5 days. Immediately fails with MemoryError; ~3GB reached at fail-time.
FFT
NOTE: effectively the PSF deconvolution benchmark.
Scipy default (full):
Astropy default (fill): Aborted by OS with signal 9. The OSX activity monitor indicated significantly higher memory consumption than mprof did. This was also true when not profiling, so it cannot be attributed to mprof overhead. Approximately 70GB of memory was recorded, with ~50GB compressed.
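A rough back-of-envelope estimate (mine, not from the report) makes ~70GB plausible: a full FFT convolution pads both arrays to their combined size and works in complex128.

```python
# Padded size per axis for a full linear convolution: N + M - 1.
n, m = 10_000, 10_001
padded = n + m - 1                        # 20,000 per axis
one_array_gb = padded ** 2 * 16 / 1e9     # complex128 = 16 bytes/element
print(round(one_array_gb, 1))             # 6.4
# Between the padded inputs, their transforms, their product, and the
# inverse transform, a handful of ~6.4GB complex arrays plus float64
# copies quickly reaches tens of GB.
```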
Driver:
mprof run --include-children ../convolve.py
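The driver script ../convolve.py is not shown; below is a scaled-down stand-in for what such a driver might look like (sizes shrunk so it runs quickly; the benchmark used a 10k x 10k image with a 111 x 111 kernel). Note that scipy's default boundary mode is 'reflect', matching the "Scipy default (reflect)" result above:

```python
import numpy as np
from scipy.ndimage import convolve

# Scaled-down stand-in for the benchmark driver; the real runs used a
# 10k x 10k float64 image with a 111 x 111 kernel.
image = np.random.rand(500, 500)
kernel = np.ones((11, 11)) / 11 ** 2      # normalized box kernel
result = convolve(image, kernel)          # mode='reflect' is scipy's default
print(result.shape)                       # (500, 500)
```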
Benchmark
Env
-O3
v3.0rc2 has no significant code changes.