Panic when running fleetctl status or journal on vagrant coreos #143
More info:
@philips helped me debug this on IRC and came up with the following, done within the coreos-vagrant directory on my Mac. Set up your ssh client on your Mac so it can ssh into your vagrant machine like a regular host, then log in with `ssh -A`, which forwards an agent from your host machine into your vagrant machine.

So now, when you are ssh'd in to your coreos vagrant instance, ssh commands run there use the ssh-agent on your Mac to authenticate. fleetctl uses ssh under the covers to issue commands like status and journal, so once this ssh-agent was set up, those ssh connections were properly authenticated even though they were going to the same instance they were issued on.
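A minimal sketch of that client-side setup, assuming the default Vagrant insecure key path and the usual coreos-vagrant port forward (the `coreos-vagrant` host alias is made up here; run `vagrant ssh-config` to get the real values for your VM):

```shell
# Add a host alias for the VM to ~/.ssh/config. HostName/Port/User are
# the typical coreos-vagrant defaults; confirm with `vagrant ssh-config`.
mkdir -p ~/.ssh
cat >> ~/.ssh/config <<'EOF'
Host coreos-vagrant
    HostName 127.0.0.1
    Port 2222
    User core
    IdentityFile ~/.vagrant.d/insecure_private_key
    ForwardAgent yes
EOF

# `ssh -G` prints the options ssh would use without actually connecting,
# so you can verify agent forwarding is on before running `ssh coreos-vagrant`.
ssh -G coreos-vagrant | grep -i '^forwardagent'
```

Once the config entry is in place, the final grep should print `forwardagent yes`, and a plain `ssh coreos-vagrant` will forward your Mac's agent into the VM.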
Great! I'll add this to the docs.
tl;dr: scroll to the bottom for an alternative approach to resolving this.

Hey guys, some more detail here for what it's worth. It looks like ssh-agent isn't running by default. Turning it on removes the panic, but fleetctl then fails to authenticate:

```
core@core-02 ~ $ eval `ssh-agent -s`
Agent pid 973
core@core-02 ~ $ fleetctl ssh 2bd808e9-8e79-4eca-9027-c05d3da5146e
2014/02/24 03:47:19 Unable to establish SSH connection: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
```

Using the update-ssh-keys script to check which keys are installed does reveal the vagrant insecure public key, which is curious... so I destroyed the VMs, removed the local .vagrant/ folder, and `vagrant up`'d from scratch, then ran the command again. Results were the same:

```
core@core-02 ~ $ update-ssh-keys -l
All keys for core:
oem:
2048 dd:3b:b8:2e:85:04:06:e9:ab:ff:a8:0a:c0:04:6e:d6 vagrant insecure public key (RSA)
Updated /home/core/.ssh/authorized_keys
```

I don't know where that key was coming from; I couldn't find it anywhere in the image. Thought it might be something coming from the local machine (OS X). Not sure.

Anyway, by uploading the local vagrant insecure key properly into /home/core/.ssh/, starting the ssh-agent, and doing an ssh-add on the key, all the clustered VMs could communicate just fine via the existing fleetctl, etc. That is:

Local (core-02, in keeping with the above example, is at 192.168.65.3):

```
scp -i ~/.vagrant.d/insecure_private_key ~/.vagrant.d/insecure_private_key core@192.168.65.3:/home/core/.ssh
```

Cluster instance (core-02):

```
core@core-02 ~ $ eval `ssh-agent -s`
Agent pid 964
core@core-02 ~ $ ssh-add ~/.ssh/insecure_private_key
Identity added: /home/core/.ssh/insecure_private_key (/home/core/.ssh/insecure_private_key)
core@core-02 ~ $ fleetctl list-machines -l
MACHINE                                 IP              METADATA
a8505f9e-e163-45f9-9ca1-bac65dd983bc    192.168.65.2    name=core-01
f1912f0d-4fbb-4645-b101-c0ac0d66dff8    192.168.65.4    name=core-03
49ecaa92-3f25-456c-9f72-b55751fb19d9    192.168.65.3    name=core-02
core@core-02 ~ $ fleetctl ssh a8505f9e-e163-45f9-9ca1-bac65dd983bc
Last login: Mon Feb 24 03:55:23 UTC 2014 from 192.168.65.1 on pts/0
 ______ ____ _____
/ ____/___ ________ / __ \/ ___/
/ / / __ \/ ___/ _ \/ / / /\__ \
/ /___/ /_/ / / / __/ /_/ /___/ /
\____/\____/_/ \___/\____//____/
core@core-01 ~ $ exit
logout
```
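The agent bootstrapping at the heart of both workarounds is plain ssh-agent usage; here is a minimal sketch you can sanity-check on any machine (the key path is the one used above and only exists once the key has been copied into the VM):

```shell
#!/bin/sh
# Start an agent and export SSH_AUTH_SOCK / SSH_AGENT_PID into this shell;
# fleetctl and plain ssh pick these variables up automatically.
eval "$(ssh-agent -s)" >/dev/null

echo "agent socket: $SSH_AUTH_SOCK"

# List loaded identities. Until a key is added this exits non-zero with
# "The agent has no identities." -- at which point you would run e.g.:
#   ssh-add ~/.ssh/insecure_private_key
ssh-add -l || true

# Kill the agent we started (cleanup for this demo only).
ssh-agent -k >/dev/null
```

If `fleetctl ssh`/`status`/`journal` still fail after this, check that the key loaded into the agent matches what `update-ssh-keys -l` shows in the VM's authorized_keys.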
This issue happened to me today. Resolved using the workaround provided above.
I was following the instructions in "Run a Container in the Cluster" at https://coreos.com/docs/launching-containers/launching/launching-containers-fleet/ after installing and running things on the vagrant coreos image.

Once I found out from the IRC channel that I needed to do

```
sudo systemctl start fleet
```

I was able to do the steps in "Run a Container in the Cluster" and get the process running, as seen in `fleetctl list-units`. But when I tried

```
fleetctl status myapp.service
```

or

```
fleetctl journal myapp.service
```

I got panics. You can see one at https://gist.github.com/rberger/9108181
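Putting this report together with the workaround earlier in the thread, the sequence that avoids the panics looks roughly like this (the unit name is illustrative, and the insecure key must already be in ~/.ssh on the VM as described above):

```
core@core-01 ~ $ sudo systemctl start fleet
core@core-01 ~ $ eval `ssh-agent -s`
core@core-01 ~ $ ssh-add ~/.ssh/insecure_private_key
core@core-01 ~ $ fleetctl status myapp.service
```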