This repository has been archived by the owner on Jan 23, 2020. It is now read-only.
```
ID                         HOSTNAME            STATUS  AVAILABILITY  MANAGER STATUS
4gi5kwzwlron5y7ekdrnnynm5  swarm-manager00000E Ready   Active        Leader
5ry73uzy3m4jf8p933civtbar  swarm-manager00000J Ready   Active        Reachable
hwb5qgfwqtfhko9w4y3lfsc62 *swarm-manager00000H Ready   Active        Reachable
qb4oajnqi8tc0wvegkr87ssmi  swarm-manager00000K Ready   Active        Reachable
vj5ct7afr9u2syptiy3qe8nik  swarm-worker000006  Ready   Active
z9lqn97sub3p2og7kx8ganni4  swarm-worker000005  Ready   Active
```
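A node listing in the format shown above comes from the standard Swarm CLI; for reference, it can be reproduced with:

```shell
# List all nodes in the swarm with their availability and manager status.
# This must be run on a manager node (here, swarm-manager00000H).
docker node ls
```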
```
OK hostname=swarm-manager00000E session=1506678273-FLLiB0hHe2gg6PtTOE3ygphZafxPqLZX
OK hostname=swarm-manager00000H session=1506678273-FLLiB0hHe2gg6PtTOE3ygphZafxPqLZX
OK hostname=swarm-manager00000J session=1506678273-FLLiB0hHe2gg6PtTOE3ygphZafxPqLZX
OK hostname=swarm-manager00000K session=1506678273-FLLiB0hHe2gg6PtTOE3ygphZafxPqLZX
OK hostname=swarm-worker000005 session=1506678273-FLLiB0hHe2gg6PtTOE3ygphZafxPqLZX
OK hostname=swarm-worker000006 session=1506678273-FLLiB0hHe2gg6PtTOE3ygphZafxPqLZX
Done requesting diagnostics.
Your diagnostics session ID is 1506678273-FLLiB0hHe2gg6PtTOE3ygphZafxPqLZX
```
We experienced issues with the `cloudstor:azure` plugin, where the plugin did not mount the Azure storage correctly.
We have a number of services that all use the same volume. We created the services with the `docker stack deploy` command. I created a dummy container to check the mounted storage on two different nodes:
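A check like the one described can be sketched as follows; the volume name `shareddata` is an assumption for illustration, not taken from the original report:

```shell
# Hedged sketch: start a throwaway container on each of two nodes that mounts
# the same cloudstor volume, and compare the directory contents each node sees.
# If the plugin loaded the storage correctly, both listings should match.
docker run --rm -v shareddata:/data alpine ls -la /data
```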
We cannot reproduce this issue; it seems to happen randomly, most often when we add a new node to the cluster.
Are there any known issues with this behaviour? Is there a way to check this, or to re-initialize the plugin?
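One way to re-initialize a managed plugin on an affected node, sketched here under the assumption that the plugin is installed as `cloudstor:azure`:

```shell
# Disable and re-enable the managed plugin on the affected node.
# This restarts the plugin process and re-runs its mount initialization.
docker plugin disable cloudstor:azure
docker plugin enable cloudstor:azure

# Verify the plugin is enabled again (ENABLED column should show "true").
docker plugin ls
```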
To work around the problem, we currently have to create a new node and delete the old one.