
Conversation

djs55
Collaborator

@djs55 djs55 commented May 7, 2015

bugs:

  • always round up to the nearest extent
  • local allocation is synchronous with error reporting

robustness:

  • use lock files to prevent multiple instances of the daemons being started

David Scott added 8 commits May 7, 2015 13:33
If we ask for 1 byte we should use (1 + extent_size - 1) / extent_size,
i.e. we should allocate a whole extent even though it is more than what
we asked for.

Fixes #65

Signed-off-by: David Scott <dave.scott@citrix.com>
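The round-up-to-a-whole-extent rule from the commit above can be sketched as follows (the helper name and the 4 MiB extent size are illustrative, not taken from the code):

```python
def extents_needed(bytes_requested: int, extent_size: int) -> int:
    """Round a byte count up to a whole number of extents: even a
    1-byte request consumes one full extent."""
    return (bytes_requested + extent_size - 1) // extent_size

extent_size = 4 * 1024 * 1024  # a common LVM extent size, used here as an example

print(extents_needed(1, extent_size))                # 1: rounded up to one whole extent
print(extents_needed(extent_size, extent_size))      # 1: an exact fit needs no extra extent
print(extents_needed(extent_size + 1, extent_size))  # 2: one byte over spills into a second extent
```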
…locator

We can shrink deactivated volumes as normal through the API.
The local allocator can only allocate, so fail if shrinking is requested.

Also avoid calling the local allocator to allocate 0 bytes.

Signed-off-by: David Scott <dave.scott@citrix.com>
It will exit with 0 only if the resize has been successful.
Otherwise it will print a diagnostic message to stderr and exit with
non-zero.

Fixes #66

Signed-off-by: David Scott <dave.scott@citrix.com>
Signed-off-by: David Scott <dave.scott@citrix.com>
We create a lock file based on the Unix domain socket path to
prevent a second instance starting up.

Signed-off-by: David Scott <dave.scott@citrix.com>
If we are listening on a path, we create a lock file based on the path
name. If we are listening on a TCP port, we rely on the bind being exclusive
(although the TCP code is deprecated).

Signed-off-by: David Scott <dave.scott@citrix.com>
I suspect the daemonize code was closing the fd and releasing the lock.

Fixes #67

Signed-off-by: David Scott <dave.scott@citrix.com>
Signed-off-by: David Scott <dave.scott@citrix.com>
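The lock-file scheme described in the commits above, including the pitfall that daemonization must not close the locking file descriptor, can be sketched like this (the `.lock` suffix and function name are assumptions for illustration; the real daemons' naming may differ):

```python
import fcntl
import sys


def acquire_lock(socket_path: str):
    """Take an exclusive, non-blocking lock on a file derived from the
    Unix domain socket path, so a second instance of the daemon fails
    fast instead of starting up.

    The returned file object must be kept open for the daemon's whole
    lifetime: closing the fd (as a careless daemonize step might)
    silently releases the lock, letting a second instance start."""
    lock_path = socket_path + ".lock"
    f = open(lock_path, "w")
    try:
        fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        sys.exit("another instance already holds " + lock_path)
    return f
```

A second `open()` + `flock()` on the same lock file fails with `EWOULDBLOCK` while the first descriptor is held open, which is what makes the "fail fast on double start" behaviour work.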
@coveralls

Coverage Status

Coverage decreased (-0.75%) to 32.58% when pulling bfdb82d on lvresize into 3bf9aba on master.

@coveralls

Coverage Status

Coverage decreased (-1.17%) to 32.16% when pulling 69cb7cd on lvresize into 3bf9aba on master.

…ee blocks

We missed one case where this could happen: when initially registering a host
where the local allocator has been spawned previously.

Signed-off-by: David Scott <dave.scott@citrix.com>
@coveralls

Coverage Status

Coverage decreased (-1.33%) to 32.0% when pulling 4c98a55 on lvresize into 3bf9aba on master.

David Scott added 2 commits May 7, 2015 20:45
Previously we would try to allocate everything we needed in one chunk.
If the free space wasn't available we would block. However, xenvmd does
not know we need more space and won't give us more, so we deadlock.

Instead we allocate as much as we can (up to the amount we really need),
expand the volume, thus draining the free pool, which triggers xenvmd
to refill it.

Fixes #69

Signed-off-by: David Scott <dave.scott@citrix.com>
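The piecewise allocation strategy from the commit above can be sketched as a loop: take whatever is free now (up to what we need), extend the volume with it so the drained pool prompts xenvmd to refill, and only block when nothing at all is available. The `FreePool` and `Volume` classes below are toy stand-ins, not the daemon's real API:

```python
class FreePool:
    """Toy stand-in for the local allocator's free pool (hypothetical API)."""

    def __init__(self, refill_size: int):
        self.free = []
        self.refill_size = refill_size

    def take(self, n: int):
        """Return up to n free extents -- possibly fewer, possibly none."""
        got, self.free = self.free[:n], self.free[n:]
        return got

    def wait_for_refill(self):
        # In the real daemon this blocks until xenvmd pushes more extents;
        # here we simulate an immediate refill.
        self.free.extend(range(self.refill_size))


class Volume:
    def __init__(self):
        self.extents = []

    def extend(self, extents):
        self.extents.extend(extents)


def expand(volume: Volume, needed_extents: int, pool: FreePool):
    """Grow the volume in pieces rather than blocking for the whole amount,
    which would deadlock: xenvmd only refills the pool when it drains."""
    remaining = needed_extents
    while remaining > 0:
        got = pool.take(remaining)
        if got:
            volume.extend(got)       # draining the pool triggers a refill
            remaining -= len(got)
        else:
            pool.wait_for_refill()   # block only when nothing is available
```

With a pool that refills 4 extents at a time, `expand(v, 10, pool)` completes in several take/refill rounds instead of blocking forever on a single 10-extent request.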
…ng order

In particular device mapper likes its targets to be posted in order of
virtual address.

Signed-off-by: David Scott <dave.scott@citrix.com>
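The ordering requirement above can be illustrated by sorting device-mapper targets by their virtual start sector before emitting the table; the tuple shape and sample `linear` targets below are hypothetical:

```python
def order_targets(targets):
    """Sort device-mapper targets by virtual start sector, as device
    mapper expects table lines posted in order of virtual address.

    Each target is a (start_sector, length, target_type, args) tuple
    (an assumed shape for this sketch)."""
    ordered = sorted(targets, key=lambda t: t[0])
    # Sanity check: the targets should tile the virtual address space
    # contiguously, with no gaps or overlaps.
    expected = 0
    for start, length, _, _ in ordered:
        assert start == expected, "gap or overlap in device-mapper table"
        expected = start + length
    return ["%d %d %s %s" % t for t in ordered]


table = order_targets([
    (100, 50, "linear", "/dev/sdb 0"),   # posted out of order on purpose
    (0, 100, "linear", "/dev/sda 0"),
])
print("\n".join(table))
```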
@coveralls

Coverage Status

Coverage decreased (-1.33%) to 32.0% when pulling 33a0273 on lvresize into 3bf9aba on master.

djs55 added a commit that referenced this pull request May 7, 2015
Fix bugs and make more robust
@djs55 djs55 merged commit 5d8f6d6 into master May 7, 2015
@djs55 djs55 deleted the lvresize branch May 7, 2015 20:14