
Number of running jobs shown in example/jobs_list.py does not correspond to the real number of jobs #24

Closed
laparn opened this issue Dec 28, 2012 · 3 comments

Comments

@laparn

laparn commented Dec 28, 2012

While running example/jobs_list.py with one running job, I get:
arnaud@D3550:~/src/pyslurm/examples$ ./jobs_list.py
No jobs found !
arnaud@D3550:~/src/pyslurm/examples$ ./jobs_list.py
JobID 136 :
account : None
alloc_node : D3550
alloc_sid : 2849
altered : None
assoc_id : 0
batch_flag : 0
batch_host : D3550
batch_script : None
block_id : None
blrts_image : None
boards_per_node : 0
cnode_cnt : None
command : /home/arnaud/src/slurmjob/./wait-arg.sh
comment : None
conn_type : (None, 'None')
contiguous : False
cores_per_socket : 65534
cpus_per_task : 1
dependency : None
derived_ec : 0
eligible_time : Fri Dec 28 14:51:01 2012
end_time : Sat Dec 28 14:51:01 2013
exc_nodes : []
exit_code : 0
features : []
gres : []
group_id : 1001
ionodes : None
job_state : (1, 'RUNNING')
licenses : {}
linux_image : None
max_cpus : 0
max_nodes : 0
mloader_image : None
name : wait-arg.sh
network : None
nice : 10000
nodes : None
ntasks_per_core : 65535
ntasks_per_node : 0
ntasks_per_socket : 65535
num_cpus : 1
num_nodes : 1
partition : debug
pn_min_cpus : 1
pn_min_memory : 0
pn_min_tmp_disk : 0
pre_sus_time : 0
preempt_time : 0
priority : 4294901759
qos : None
ramdisk_image : None
reboot : None
req_nodes : []
req_switch : 0
requeue : True
resize_time : N/A
restart_cnt : 0
resv_id : None
resv_name : None
rotate : False
shared : 0
show_flags : 0
sockets_per_board : 0
sockets_per_node : 65534
start_time : Fri Dec 28 14:51:01 2012
state_desc : None
state_reason : (0, 'None')
submit_time : Fri Dec 28 14:51:01 2012
suspend_time : 0
threads_per_core : 65534
time_limit : Infinite
time_min : 0
user_id : 1001
wait4switch : 0
wckey : None

work_dir : /home/arnaud/src/slurmjob

Number of Jobs - 1

[]
Number of pending jobs - 0
Number of running jobs - 0

JobIDs in Running state - []

This is fine, but the last two lines, Number of running jobs and JobIDs in Running state, are wrong.

In jobs_list.py, this corresponds to the call a.find('job_state', pyslurm.JOB_RUNNING). It seems that the representation of the job state has changed in pyslurm, which results in this non-critical bug.
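
A minimal workaround sketch until find handles tuples, assuming (as in examples/jobs_list.py) that a is a pyslurm.job() object and that a.get() returns a dict of job-info dicts keyed by JobID; it compares only the numeric part of the job_state tuple:

import pyslurm

a = pyslurm.job()    # assumed API, as used in examples/jobs_list.py
jobs = a.get()       # assumed to return {jobid: {field: value, ...}}

# job_state is now a tuple such as (1, 'RUNNING'), so compare its numeric part
running = [jobid for jobid, info in jobs.items()
           if isinstance(info.get('job_state'), tuple)
           and info['job_state'][0] == pyslurm.JOB_RUNNING]

print("Number of running jobs - %s" % len(running))
print("JobIDs in Running state - %s" % running)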

@gingergeeks
Member

Hi Laparn,
Yes, this issue is because I decided (perhaps mistakenly) to have some data returned as a tuple, such as job_state, which holds the numeric job state and its string equivalent. Obviously the find function is based on a simple single-value search. I was discussing with Phantez whether we should change back and provide a helper function to convert the value to its string equivalent. Alternatively, the find function could detect whether the stored value is a tuple and search on the numeric part if the search parameter is numeric, or on the string part if the search parameter is a string.

Mark

@laparn
Author

laparn commented Feb 4, 2013

I find the last alternative you describe (the find function behaving differently depending on whether the stored value is a tuple and the search parameter an int or a string) more "natural" and appealing. But any solution (e.g. different find functions for different cases) would also be perfectly fine for me. Good luck.

@gingergeeks
Member

Closing this; it has been open for a while. If it is still an issue, please raise a new ticket, as we are moving the code base forward to Slurm-16.05.
