
Tiny fixes #12143

Merged: 7 commits into canonical:main from tiny-fixes on Aug 11, 2023
Conversation

simondeziel (Member)

No description provided.

@simondeziel (Member, Author)

https://github.com/canonical/lxd/actions/runs/5825820485/job/15798320627?pr=12143:

 Transferring instance: Unpack: 100% (4.03MB/s)
+ LXD_DIR=/home/runner/work/lxd/lxd/test/tmp.yAf/cy9 lxc info c1
+ set +x
+ grep -q Location: node1
+ timeout --foreground 120 /home/runner/go/bin/lxc info c1 --verbose
+ LXD_DIR=/home/runner/work/lxd/lxd/test/tmp.yAf/cy9 lxc move c1 --target=@default
+ set +x
+ timeout --foreground 120 /home/runner/go/bin/lxc move c1 --target=@default --verbose
Error: Target must be different than instance's current location
+ cleanup
+ set +ex

In test/suites/clustering_move.sh, the LXD nodes (nodeX) are each members of the foobarX and default cluster groups. At that point, c1 is on node1 because it was moved to @foobar1. Moving c1 to @default should either be a no-op, as the placement is already correct, or the instance should be moved to another member of the @default group, no? Erroring out like that feels counterintuitive to me. Something to discuss with @tomponline (when you are back).
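
For reference, a minimal sketch of the scenario as I read it from the suite; the setup commands are an assumption based on the group names in the test, not copied from clustering_move.sh:

# Assumed setup: three members, each in its own foobarX group plus the
# shared default group (assuming `lxc cluster group assign` replaces the
# member's full group list, default has to be listed explicitly).
lxc cluster group create foobar1
lxc cluster group assign node1 default,foobar1
# ... same for node2/foobar2 and node3/foobar3 ...

# Pin c1 onto node1 via its single-member group.
lxc move c1 --target=@foobar1

# Expected: a no-op, or a move to node2/node3. Observed: the error above.
lxc move c1 --target=@default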

@simondeziel marked this pull request as ready for review on August 10, 2023 at 21:52
@tomponline (Member)

@roosterfish did you work on that feature? What do you think?

@tomponline merged commit b0eab5b into canonical:main on Aug 11, 2023
24 of 25 checks passed
@roosterfish (Contributor)

Moving the instance to another member should be the default behavior when the given cluster group is the instance's current one and contains multiple members.

There are two checks for this in the clustering_move.sh suite:

# c1 can be moved within the same cluster group if it has multiple members
LXD_DIR="${LXD_ONE_DIR}" lxc move c1 --target=@default # -> move from node1 to either 2 or 3
LXD_DIR="${LXD_ONE_DIR}" lxc move c1 --target=@default # -> move from 2 to 1/3 or 3 to 1/2

# c1 cannot be moved within the same cluster group if it has a single member
LXD_DIR="${LXD_ONE_DIR}" lxc move c1 --target=@foobar3
LXD_DIR="${LXD_ONE_DIR}" lxc info c1 | grep -q "Location: node3"
! LXD_DIR="${LXD_ONE_DIR}" lxc move c1 --target=@foobar3 || false # -> fails since len(foobar3) == 1

@simondeziel was the pipeline job failing unintentionally?

@simondeziel (Member, Author) commented Aug 11, 2023

@simondeziel was the pipeline job failing unintentionally?

The failure was not expected; however, it happened like this:

+ timeout --foreground 120 /home/runner/go/bin/lxc move c1 --target=@default --verbose
Error: Target must be different than instance's current location
+ cleanup

So, given the explanation you provided, it should have picked a member other than the current one as the move target. But it didn't.
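
If it helps with reproducing, two quick checks right before the failing move should tell whether the placement logic actually saw multiple candidates (hypothetical diagnostics, assuming the usual LXD_DIR prefix from the suite):

# Confirm @default really spans multiple members at that point.
lxc cluster group show default

# Confirm c1 is still on node1, as the earlier "grep -q Location: node1" implies.
lxc info c1 | grep Location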

@simondeziel deleted the tiny-fixes branch on August 11, 2023 at 14:03
@roosterfish (Contributor)

I'll look into it and try to reproduce it.
