
Memory leak #7

Open · JochenKr opened this issue May 3, 2023 · 5 comments

JochenKr commented May 3, 2023

I've built a baby monitor based on your great article:
https://towardsdatascience.com/create-your-own-smart-baby-monitor-with-a-raspberrypi-and-tensorflow-5b25713410ca

In my application using the micmon library, memory keeps leaking, so I have to restart it regularly as a workaround.
Any possibility to fix this? I'm not sure whether it's really coming from the library or from my Python script, but as that script is basically a simplified version of the one mentioned in the article, it looks to me like it's more related to the library.


JochenKr commented May 8, 2023

I found that the memory leak happens in this line of code:
prediction = model.predict(sample)

If I remove it and hard-code prediction = 'negative' instead, the memory consumption does not increase.
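
For anyone who wants to double-check this on their own setup, something along these lines makes the growth easy to see. reader and model here stand for the same audio reader and micmon model as in the monitor script, and resource is from the Python standard library:

import resource

def max_rss_mb():
    # ru_maxrss is the peak resident set size in kilobytes on Linux;
    # under a steady leak the peak keeps climbing together with current usage.
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024

for i, sample in enumerate(reader):
    prediction = model.predict(sample)   # comment this line out to compare
    if i % 200 == 0:
        print(f'iteration {i}: ~{max_rss_mb():.0f} MB')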

@blacklight (Owner)

Hi @JochenKr, thanks for the heads up! Did you manage to replicate the leak with this simple snippet or did you add some more logic around it?

for sample in reader:
  prediction = model.predict(sample)

It seems that memory leaks in model.predict are quite common when using the predict API within a for loop - although I don't think I've ever experienced this issue myself.

A proper solution would require way too much redesign time on my part, but in the meantime there seems to be a workaround; I may push it soon.
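
For reference, the workaround that keeps coming up in the TensorFlow issue tracker is to call the model object directly instead of going through .predict(). The snippet below is only a self-contained sketch with a placeholder Keras model and reader, not micmon's actual code:

import gc
import numpy as np
import tensorflow as tf

# Tiny stand-in model and reader so the sketch runs on its own.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(100,)),
    tf.keras.layers.Dense(2, activation='softmax'),
])

def reader(n=1000):
    for _ in range(n):
        yield np.random.rand(100).astype(np.float32)

def predict_sample(sample):
    # Calling the model directly skips the extra bookkeeping that the
    # .predict() API does on every call, which is what seems to pile up
    # in memory on some TF versions; training=False keeps layers like
    # dropout/batch-norm in inference mode.
    x = tf.convert_to_tensor(sample[np.newaxis, ...])
    return model(x, training=False).numpy()[0]

for i, sample in enumerate(reader()):
    prediction = predict_sample(sample)
    if i and i % 500 == 0:
        gc.collect()   # occasional manual GC, cheap insurance on a RPi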

blacklight self-assigned this May 9, 2023

JochenKr commented May 9, 2023

I did at least reproduce the issue using this script:
https://gist.github.com/BlackLight/b4b29e5044f5a6a609e62fa212b736a3#file-micmon_predict_example-py

OK, so it is caused by the TensorFlow API. That might explain why this phenomenon is now showing up with such intensity on my side. I had been running the baby monitor for about a year, and it worked fine as long as I rebooted the RPi once a day. Then my SD card crashed and I had to rebuild the setup. I most probably ended up with a different version of TF, and now I have to reboot the RPi every two hours.
Unfortunately my Python skills are too weak to fix this myself. If you find the time for a workaround, that would be great. Let me know if there is something I can test on my side.

@blacklight (Owner)

I had been running the baby monitor for about a year, and it worked fine as long as I rebooted the RPi once a day. Then my SD card crashed and I had to rebuild the setup.

I've been running my model in my kid's room for more than a year without requiring a single restart :) but I also haven't been using it for more than a year (by now my kid no longer requires immediate intervention). This does seem to point to a change in the TF API that is now causing the leak to manifest much more aggressively.

Eventually I'd like to move the ML pipelines both in this project and in Platypush from TensorFlow to PyTorch. TF APIs break once every couple of months, .predict() is a fundamental piece of the API, and they can't just say "oh, just go for this numpy workaround instead of predict if you're running for loops".

@JochenKr (Author)

I "fixed" it by downgrading the tensorflow version.
The excessive memory leakage happened using the latest version 2.12.0. Now I've installed 2.8.0 and there is no excessive memory leakage visible anymore.
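
In case it helps anyone else hitting this: besides pinning the version (pip install tensorflow==2.8.0), a small startup check along these lines makes the failure mode obvious instead of a slow leak. The version numbers are just the ones from my setup:

import tensorflow as tf

print('TensorFlow', tf.__version__)
if tf.__version__.startswith('2.12.'):
    # 2.12.0 is the version that showed the aggressive leak here; adjust
    # the check once a fixed release is confirmed.
    raise SystemExit('This TF version leaks memory with model.predict() in a '
                     'loop on this setup; pin tensorflow==2.8.0 instead.')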
