Sometimes Server objects lock on is_measuring=True #14

Open
swistakm opened this Issue Nov 19, 2012 · 5 comments

Contributor

swistakm commented Nov 19, 2012

This makes skwissh stop collecting data for this server. Additionally, there is no way to fix it except via the Django shell.

I suppose that when the cron job is killed or exits unexpectedly, nothing can change the is_measuring field back to False. Maybe using a timestamp here would be better: it would ensure that even if the cron job is killed, the server will (at some point in time) return to its previous state.
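A minimal sketch of the timestamp idea in plain Python (the field name `measuring_since` and the `MEASURE_TIMEOUT` threshold are illustrative assumptions, not skwissh's actual schema):

```python
import time

MEASURE_TIMEOUT = 300  # seconds; assumed threshold after which a lock is stale


class Server:
    """Illustrative stand-in for skwissh's Server model."""

    def __init__(self):
        # A timestamp replaces the boolean is_measuring flag.
        self.measuring_since = None

    def try_acquire(self, now=None):
        """Start measuring unless a recent measurement is still running.

        A lock older than MEASURE_TIMEOUT is treated as stale, so a killed
        cron job can never wedge the server permanently.
        """
        now = time.time() if now is None else now
        if self.measuring_since is not None and now - self.measuring_since < MEASURE_TIMEOUT:
            return False  # a fresh measurement is in progress
        self.measuring_since = now
        return True

    def release(self):
        self.measuring_since = None
```

The key property: `try_acquire()` succeeds again once the old timestamp expires, with no shell intervention needed.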

Owner

rsaikali commented Nov 19, 2012

You're right, it happened to me too...
I'm thinking about the real purpose of this 'is_measuring' field... I'll maybe remove it...
Its real purpose was to avoid too many Python processes running at the same time, for example if a sensor never exits its measuring process...
I've never seen such a case; it's just a safety hook.

Or maybe add a button somewhere to unlock 'is_measuring' on all servers.

Contributor

swistakm commented Nov 19, 2012

Maybe use the cache to rate-limit? But it still won't help if a sensor never exits...
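The cache idea could work because a cache entry with a TTL expires on its own, which fixes the stuck-lock problem even though it still can't stop a runaway sensor. A sketch using a tiny in-memory stand-in for a cache backend's atomic `add()` (set a key only if absent, as Django's cache API offers; the names here are illustrative):

```python
import time


class TinyCache:
    """In-memory stand-in for a cache backend with an atomic add()."""

    def __init__(self):
        self._expires = {}

    def add(self, key, value, timeout):
        """Set key only if absent or expired; return True on success.

        Because the entry expires after `timeout` seconds, the lock
        releases itself even if the job holding it is killed.
        """
        now = time.time()
        expires = self._expires.get(key)
        if expires is not None and expires > now:
            return False
        self._expires[key] = now + timeout
        return True


cache = TinyCache()


def start_measuring(server_id, timeout=300):
    # The TTL doubles as the rate limit: at most one run per window.
    return cache.add("measuring-%s" % server_id, True, timeout)
```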

Owner

rsaikali commented Nov 20, 2012

The mechanism here should ideally be a 'queue', pushing 'sensor jobs' in it.

Pros:
If a job never ends, no other job is launched; there are just more and more queued (but never launched) jobs.

Cons:
A job asked to launch at (almost exactly) 12:00 may execute at 12:10 or whenever, so sensor value timestamps could be inexact or mixed up.

The queue should be able to process 1 to n jobs in parallel (this should be a configuration variable).
I should have a look at Celery (http://celeryproject.org/) or python-rq (http://python-rq.org/), but I think it will complicate my small project a lot :-)
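The queue-with-bounded-workers mechanism described above can be sketched with the standard library alone, no Celery or python-rq needed (all names here are illustrative):

```python
import queue
import threading


def run_sensor_jobs(jobs, workers=2):
    """Run sensor jobs through a bounded worker pool.

    `workers` plays the role of the proposed "1 to n jobs in parallel"
    configuration variable; jobs arriving while all workers are busy
    simply wait in the queue instead of spawning extra processes.
    """
    q = queue.Queue()
    results = []
    lock = threading.Lock()

    def worker():
        while True:
            job = q.get()
            if job is None:  # sentinel: shut this worker down
                q.task_done()
                return
            result = job()
            with lock:
                results.append(result)
            q.task_done()

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for job in jobs:
        q.put(job)
    for _ in threads:
        q.put(None)
    q.join()
    for t in threads:
        t.join()
    return results
```

This also shows the stated con: a job's execution time drifts from its scheduled time whenever the queue backs up, so each job would need to record its own timestamp at run time.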

Contributor

swistakm commented Nov 20, 2012

I think Celery and python-rq are overkill. It wouldn't complicate skwissh's code, because Celery makes executing tasks simpler, but it would really complicate the application stack needed to run skwissh :)

Contributor

swistakm commented Jan 20, 2013

After some thinking, I'd say it would be nice to have Celery or python-rq as optional task backends. It will require some new management commands and some code refactoring, because right now skwissh relies on how kronos collects and installs tasks.
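One common way to make such backends optional in a Django app is to pick the implementation from a settings string and fall back to running tasks inline. A sketch under that assumption (the setting name, module paths, and entry point are all hypothetical, not existing skwissh code):

```python
import importlib

# Hypothetical setting; skwissh has no such option today.
TASK_BACKEND = "builtin"


def get_task_backend(name=None):
    """Return a callable task runner for the configured backend.

    Unknown or unset names fall back to the built-in inline runner, so
    Celery and python-rq remain strictly optional dependencies.
    """
    backends = {
        "celery": "skwissh_celery_backend",  # hypothetical module name
        "rq": "skwissh_rq_backend",          # hypothetical module name
    }
    name = name or TASK_BACKEND
    module = backends.get(name)
    if module is None:
        return lambda job: job()  # built-in: run the job inline
    return importlib.import_module(module).run  # assumed entry point
```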
