Replacing Volume brick failed - replicate:2 - heketi v9.0.0 #1630
Comments
As far as I can see, this should be fixed in #1653 (which is not released yet).
Is it part of v9.0.0, or is it going to be v10.0.0? Where and when can I get the tar file with the fix?
It will likely be part of the next release, whatever version number we decide upon. The change in question is on the master branch. Refer to our contributing documentation if you want to build from source.
According to https://github.com/heketi/heketi/blob/master/docs/contributing.md, I did the following:
yum -y install go glide          ... installed
echo $HOME                       # /root
pwd                              # /root
mkdir golang
cd golang
export GOPATH=$HOME/golang
export PATH=$GOPATH/bin:$PATH
mkdir -p $GOPATH/src/github.com/heketi
cd $GOPATH/src/github.com/heketi
git clone https://github.com/heketi/heketi.git   ... cloned
make
fatal: --points-at option is only allowed with -l.
I need to build the server .tgz file.
@amgads try running "glide install -v" before make.
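To spell that out, a minimal sketch of the build sequence with the extra step, assuming the GOPATH layout from the earlier comment:

```sh
# Minimal sketch: run glide install -v before make so that vendor/ is
# populated; without it, make fails with the --points-at error above.
cd $GOPATH/src/github.com/heketi/heketi
glide install -v
make
```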
Thanks a lot! It did build the "heketi" executable. How do I build the tarball "heketi-v9.0.0.linux.amd64.tar.gz"?
I guess the question is: is there a make target for it, or is it just a tarball of the files:
You are probably looking for "make release": https://github.com/heketi/heketi/blob/master/Makefile#L162
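A sketch of what that looks like end to end; note the resulting tarball name is derived by the Makefile, so it may not match the name of the tagged v9.0.0 artifact:

```sh
# Minimal sketch: build the release tarball from a master checkout.
# The exact tarball name/location is decided by the Makefile and may differ
# from heketi-v9.0.0.linux.amd64.tar.gz.
cd $GOPATH/src/github.com/heketi/heketi
glide install -v
make release
```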
Thanks a lot.
I tested the original problem “remove replicate:2 bricks” after building from master, and it failed the same way as reported.
One thing I realised: the heketi and heketi-cli binaries are smaller than the original "v9.0.0" ones. Does that mean something wasn't picked up, or is the newer code just cleaned up?
The code reflects the changes that were made.
OK, found the issue. It was an environment parameter in the container, so the fix works.
[INFO] --> Fetching k8s.io/kubernetes.
I just encountered the same issue. After the build, you need to copy the binary output to the server side; using the new heketi-cli alone is not enough. @amgads
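A rough sketch of that deployment step; the host name, binary path, and service name below are assumptions and depend entirely on how heketi is actually deployed (systemd, container image, etc.):

```sh
# Rough sketch only: replace the heketi server binary, not just heketi-cli.
# heketi-server, /usr/bin/heketi and the systemd unit name are placeholders.
scp heketi root@heketi-server:/usr/bin/heketi
ssh root@heketi-server 'systemctl restart heketi'
```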
The real issue was resolved. The remaining build issue is a bug in how glide uses bitbucket, but glide is a moribund project AFAICT.
Kind of issue
Observed behavior
Replicated volume - replica_count=2:
We're trying to recover a node that holds one of the bricks of a volume (replica: 2) by doing:
a) removing the device on the failed node from the heketi topology
b) allocating a new brick on another node
c) replacing the old brick with the new one (a sketch of the corresponding heketi-cli commands follows this list)
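A minimal sketch of the commands behind those steps; only the device ID is real (it is the one from the error below), server URL and auth options are omitted and depend on the environment:

```sh
# Sketch of the recovery attempt; disable before remove is the usual ordering,
# and the remove step is where the failure occurs.
heketi-cli device info ed0659cd69eb3d1a3c7b0f4c584ff6cf      # inspect the failed device
heketi-cli device disable ed0659cd69eb3d1a3c7b0f4c584ff6cf   # stop new allocations on it
heketi-cli device remove ed0659cd69eb3d1a3c7b0f4c584ff6cf    # the step that fails
```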
We get the error: " ERROR : Command (/usr/bin/heketi-cli device remove ed0659cd69eb3d1a3c7b0f4c584ff6cf) failed: (Error: Failed to remove device, error: Cannot replace brick 0a7a5821de0e7a8ebbd6cb9568c0c00b as only 1 of 2 required peer bricks are online"
The gluster CLI allows that, but there is no way to sync the heketi DB with the gluster state (to remove the old bricks and add the new ones).
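For comparison, a sketch of what the gluster CLI itself permits for a replica-2 volume; the volume name and both brick paths are made-up placeholders:

```sh
# Sketch of the direct gluster CLI equivalent; vol_xyz and both host:/path
# bricks are hypothetical examples.
gluster volume replace-brick vol_xyz \
    failed-node:/bricks/brick_old/brick \
    new-node:/bricks/brick_new/brick \
    commit force
```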
Are there any possible solutions for these cases in heketi?
Expected/desired behavior
Details on how to reproduce (minimal and precise)
volumetype: "replicate:2"
" ERROR : Command (/usr/bin/heketi-cli device remove ed0659cd69eb3d1a3c7b0f4c584ff6cf) failed: (Error: Failed to remove device, error: Cannot replace brick 0a7a5821de0e7a8ebbd6cb9568c0c00b as only 1 of 2 required peer bricks are online"
Information about the environment:
kubectl apply -f .yml
Other useful information
We would appreciate urgent support, as this is blocking recovery of critical infrastructure.