Laparn,
Hi, yes, this issue arises because I decided (perhaps mistakenly) to have some data, such as job_state, returned as a tuple containing the numeric job state together with its string equivalent. The find function, however, performs a simple single-value search. I was discussing with Phantez whether we should change back to a single value and provide a helper function to convert it to its string equivalent. Alternatively, the find function could detect when the stored value is a tuple, and search on the numeric component if the search parameter is numeric or on the string component if the search parameter is a string.
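A minimal sketch of that second alternative, a tuple-aware find, might look like this (names and data layout are illustrative assumptions, not the actual pyslurm implementation):

```python
# Hypothetical sketch of a tuple-aware find() over a job dictionary of
# the form {job_id: {'job_state': (1, 'RUNNING'), ...}}.
JOB_RUNNING = 1  # numeric running state, as in the pyslurm constants

def find(jobs, name, val):
    """Return the IDs of jobs whose attribute `name` matches `val`.

    When the stored value is a (numeric, string) tuple such as
    (1, 'RUNNING'), compare against the component whose type matches
    the type of the search value.
    """
    matches = []
    for job_id, attrs in jobs.items():
        stored = attrs.get(name)
        if isinstance(stored, tuple):
            # Pick the string component for string searches,
            # the numeric component otherwise.
            stored = stored[1] if isinstance(val, str) else stored[0]
        if stored == val:
            matches.append(job_id)
    return matches

jobs = {136: {'job_state': (1, 'RUNNING')},
        137: {'job_state': (0, 'PENDING')}}
print(find(jobs, 'job_state', JOB_RUNNING))  # numeric search -> [136]
print(find(jobs, 'job_state', 'RUNNING'))    # string search  -> [136]
```

With this approach, existing callers that pass pyslurm.JOB_RUNNING keep working, while string searches become possible too.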
I find the last alternative you describe (the find function behaving differently depending on whether it encounters a tuple, an int, or a string) more "natural" and appealing. But any solution (e.g. different find functions for the different cases) would also be perfectly fine with me. Good luck.
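For comparison, the first alternative (reverting job_state to a single numeric value plus a conversion helper) could be sketched as follows. The state table below is an illustrative assumption; the real mapping lives in Slurm itself, not in this snippet:

```python
# Hypothetical mapping from numeric Slurm job states to their names;
# values here are for illustration only.
_JOB_STATE_NAMES = {0: 'PENDING', 1: 'RUNNING', 2: 'SUSPENDED',
                    3: 'COMPLETE', 4: 'CANCELLED', 5: 'FAILED'}

def job_state_string(state):
    """Convert a numeric job_state to its string equivalent."""
    return _JOB_STATE_NAMES.get(state, 'UNKNOWN')

print(job_state_string(1))  # RUNNING
```

This keeps find() untouched, at the cost of callers having to call the helper whenever they want the readable name.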
While running examples/jobs_list.py with one running job, I get:
arnaud@D3550:/src/pyslurm/examples$ ./jobs_list.py
No jobs found !
arnaud@D3550:/src/pyslurm/examples$ ./jobs_list.py
JobID 136 :
account : None
alloc_node : D3550
alloc_sid : 2849
altered : None
assoc_id : 0
batch_flag : 0
batch_host : D3550
batch_script : None
block_id : None
blrts_image : None
boards_per_node : 0
cnode_cnt : None
command : /home/arnaud/src/slurmjob/./wait-arg.sh
comment : None
conn_type : (None, 'None')
contiguous : False
cores_per_socket : 65534
cpus_per_task : 1
dependency : None
derived_ec : 0
eligible_time : Fri Dec 28 14:51:01 2012
end_time : Sat Dec 28 14:51:01 2013
exc_nodes : []
exit_code : 0
features : []
gres : []
group_id : 1001
ionodes : None
job_state : (1, 'RUNNING')
licenses : {}
linux_image : None
max_cpus : 0
max_nodes : 0
mloader_image : None
name : wait-arg.sh
network : None
nice : 10000
nodes : None
ntasks_per_core : 65535
ntasks_per_node : 0
ntasks_per_socket : 65535
num_cpus : 1
num_nodes : 1
partition : debug
pn_min_cpus : 1
pn_min_memory : 0
pn_min_tmp_disk : 0
pre_sus_time : 0
preempt_time : 0
priority : 4294901759
qos : None
ramdisk_image : None
reboot : None
req_nodes : []
req_switch : 0
requeue : True
resize_time : N/A
restart_cnt : 0
resv_id : None
resv_name : None
rotate : False
shared : 0
show_flags : 0
sockets_per_board : 0
sockets_per_node : 65534
start_time : Fri Dec 28 14:51:01 2012
state_desc : None
state_reason : (0, 'None')
submit_time : Fri Dec 28 14:51:01 2012
suspend_time : 0
threads_per_core : 65534
time_limit : Infinite
time_min : 0
user_id : 1001
wait4switch : 0
wckey : None
work_dir : /home/arnaud/src/slurmjob
Number of Jobs - 1
[]
Number of pending jobs - 0
Number of running jobs - 0
JobIDs in Running state - []
This is fine, but the last two lines, "Number of running jobs" and "JobIDs in Running state", are wrong.
In jobs_list.py, they correspond to the call a.find('job_state', pyslurm.JOB_RUNNING). It seems that the representation of running jobs has changed in pyslurm, which results in this non-critical bug.
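The mismatch can be reproduced without Slurm at all: a plain equality test between the new tuple representation and the numeric constant never matches, so the running-job count stays at zero (values below are illustrative, mirroring the output above):

```python
JOB_RUNNING = 1  # numeric constant for the running state, as in pyslurm

# Previously job_state was a bare integer, so find() matched:
old_state = 1
print(old_state == JOB_RUNNING)  # True -> job counted as running

# Now job_state is a (numeric, string) tuple, and the same
# single-value comparison silently fails:
new_state = (1, 'RUNNING')
print(new_state == JOB_RUNNING)  # False -> running count stays 0
```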