This repository has been archived by the owner on Feb 24, 2018. It is now read-only.

File locking when using hdf5, collective mode #10

Open
cbartz opened this issue Aug 6, 2013 · 2 comments

Comments


cbartz commented Aug 6, 2013

Hello,
when I am using the HDF5 API in collective mode, the underlying MPI-IO layer
tries to do file locking (ADIOI_Set_lock).
After doing some investigation, I think I have found the reason:

In aiori-HDF5.c, line 295

memDataSpaceDims[0] = (hsize_t) param->transferSize;

invokes strided I/O in the ROMIO ADIO layer (ad_write_str). The line should be

memDataSpaceDims[0] = (hsize_t) param->transferSize / sizeof(IOR_size_t);

After changing this line of code, the configuration works without file locking.
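
For reference, here is a minimal sketch of how the corrected memory dataspace setup could look. The helper name make_mem_dataspace and the IOR_size_t typedef shown here are illustrative assumptions, not IOR's actual code; only the dimension calculation mirrors the fix above.

#include <hdf5.h>

/* assumption: IOR's element type is a 64-bit integer */
typedef long long IOR_size_t;

/* Hypothetical helper: build the memory dataspace for one transfer.
 * transferSize is a byte count, so the dataspace extent must be expressed
 * in IOR_size_t elements, not bytes. Passing the byte count makes the
 * memory selection larger than the file selection and pushes ROMIO into
 * the strided write path (ad_write_str), which takes file locks. */
static hid_t make_mem_dataspace(size_t transferSize)
{
    hsize_t memDataSpaceDims[1];

    /* number of elements, not bytes */
    memDataSpaceDims[0] = (hsize_t)(transferSize / sizeof(IOR_size_t));

    /* rank-1 dataspace, no maximum dimensions */
    return H5Screate_simple(1, memDataSpaceDims, NULL);
}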

@roblatham00

I'm happy you are looking at the HDF5 driver. It doesn't get a lot of (any?) attention.

How are you observing that ADIOI_Set_lock is getting called?


cbartz commented Aug 15, 2013

My program crashed and I got the following error output:

"File locking failed in ADIOI_Set_lock(fd 13,cmd F_SETLKW/7,type
F_WRLCK/1,whence 0) with return value FFFFFFFF and errno 26.

  • If the file system is NFS, you need to use NFS version 3, ensure that
    the lockd daemon is running on all the machines, and mount the directory
    with the 'noac' option (no attribute caching).
  • If the file system is LUSTRE, ensure that the directory is mounted with
    the 'flock' option.

ADIOI_Set_lock:: Function not implemented"

osteffen pushed a commit to ThinkParQ/ior-1 that referenced this issue Oct 20, 2017
add support for tuning BeeGFS parameters