This repository has been archived by the owner on Aug 29, 2023. It is now read-only.

[WIP] Display information about the cluster or all running footloose containers #103

Merged
merged 28 commits into weaveworks:master from issues_7_list_status
Apr 1, 2019

Conversation


@Skarlso Skarlso commented Mar 23, 2019

Hi @dlespiau.

This is a WIP right now. Just wanted to show you my approach as early as possible.

Here is a sample output for now:

JSON:

❯ ./footloose list -f json
{"machines":[{"name":"cluster-node0","ports":{},"spec":{"name":"node%d","image":"quay.io/footloose/centos7:0.1.0","portMappings":[{"containerPort":22}]},"status":"Stopped"},{"name":"cluster-node1","ports":{},"spec":{"name":"node%d","image":"quay.io/footloose/centos7:0.1.0","portMappings":[{"containerPort":22}]},"status":"Stopped"}]}

Default:

❯ ./footloose list -f default
+---------------+-------+---------+
| Name          | Ports | State   |
+---------------+-------+---------+
| cluster-node0 | 22    | Stopped |
| cluster-node1 | 22    | Stopped |
+---------------+-------+---------+


~/goprojects/footloose issues_7_list_status*
❯ ./footloose create
INFO[0000] Image: quay.io/footloose/centos7:0.1.0 present locally
INFO[0000] Creating machine: cluster-node0 ...
INFO[0001] Creating machine: cluster-node1 ...

~/goprojects/footloose issues_7_list_status*
❯ ./footloose list -f default
+---------------+-------+---------+
| Name          | Ports | State   |
+---------------+-------+---------+
| cluster-node0 | 22    | Running |
| cluster-node1 | 22    | Running |
+---------------+-------+---------+

This doesn't include colors or some status information yet, which I'm about to add. :)

TODO:

  • Include data in the default view
  • Implement the search for all labeled containers via the docker client
  • Add some metrics to the output

@Skarlso commented Mar 23, 2019

This takes the config into consideration. But if --all is supplied, it will look for all containers that footloose created, using a label. I'm labelling all containers with an owner label.
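The owner-label approach described above can be sketched as follows. The label key `works.weave.owner` is an assumption for illustration only; the actual key footloose uses is not shown in this PR:

```go
package main

import (
	"fmt"
	"strings"
)

// ownerLabel is a hypothetical label key/value pair; the real label
// used by footloose is not shown in this PR.
const ownerLabel = "works.weave.owner=footloose"

// createArgs builds `docker run` arguments that attach the owner label
// when a machine is created.
func createArgs(image string) []string {
	return []string{"run", "-d", "--label", ownerLabel, image}
}

// listArgs builds `docker ps` arguments that find every container
// carrying the owner label, regardless of which cluster created it,
// which is the lookup --all needs to perform.
func listArgs() []string {
	return []string{"ps", "-a", "--filter", "label=" + ownerLabel}
}

func main() {
	fmt.Println("docker " + strings.Join(createArgs("quay.io/footloose/centos7:0.1.0"), " "))
	fmt.Println("docker " + strings.Join(listArgs(), " "))
}
```

Because the label travels with the container, the `--all` listing works even when no config file is present in the current directory.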

@Skarlso commented Mar 23, 2019

Hmm, the docker client is pretty limited... I would actually prefer something that talks to the API rather than using exec. :/

@Skarlso commented Mar 23, 2019

[Screenshot 2019-03-23 at 11 48 23]

@Skarlso commented Mar 23, 2019

+----------------+-----------+---------------------------------+------------+--------------------------------+---------+
| Name           | Ports     | Image                           | Cmd        | Volumes                        | State   |
+----------------+-----------+---------------------------------+------------+--------------------------------+---------+
| /cluster-node1 | 22->32774 | quay.io/footloose/centos7:0.1.0 | /sbin/init | /sys/fs/cgroup->/sys/fs/cgroup | Running |
| /cluster-node0 | 22->32773 | quay.io/footloose/centos7:0.1.0 | /sbin/init | /sys/fs/cgroup->/sys/fs/cgroup | Running |
+----------------+-----------+---------------------------------+------------+--------------------------------+---------+

@Skarlso commented Mar 23, 2019

❯ ./footloose list -f json | jq
{
  "machines": [
    {
      "name": "/cluster-node1",
      "spec": {
        "name": "",
        "image": "quay.io/footloose/centos7:0.1.0",
        "volumes": [
          {
            "type": "bind",
            "source": "/sys/fs/cgroup",
            "destination": "/sys/fs/cgroup",
            "readOnly": false
          }
        ],
        "cmd": "/sbin/init"
      },
      "status": "Running",
      "ports": [
        {
          "guest": 22,
          "host": 32774
        }
      ]
    },
    {
      "name": "/cluster-node0",
      "spec": {
        "name": "",
        "image": "quay.io/footloose/centos7:0.1.0",
        "volumes": [
          {
            "type": "bind",
            "source": "/sys/fs/cgroup",
            "destination": "/sys/fs/cgroup",
            "readOnly": false
          }
        ],
        "cmd": "/sbin/init"
      },
      "status": "Running",
      "ports": [
        {
          "guest": 22,
          "host": 32773
        }
      ]
    }
  ]
}

@Skarlso commented Mar 23, 2019

With the --all flag other clusters are also in the list now:

{
  "machines": [
    {
      "name": "/another_cluster-node0",
      "spec": {
        "name": "",
        "image": "docker.io/skarlso/footloose:ubuntu16",
        "volumes": [
          {
            "type": "bind",
            "source": "/sys/fs/cgroup",
            "destination": "/sys/fs/cgroup",
            "readOnly": false
          }
        ],
        "cmd": "/sbin/init"
      },
      "status": "Running",
      "ports": [
        {
          "guest": 22,
          "host": 32775
        }
      ]
    },
    {
      "name": "/cluster-node1",
      "spec": {
        "name": "",
        "image": "quay.io/footloose/centos7:0.1.0",
        "volumes": [
          {
            "type": "bind",
            "source": "/sys/fs/cgroup",
            "destination": "/sys/fs/cgroup",
            "readOnly": false
          }
        ],
        "cmd": "/sbin/init"
      },
      "status": "Running",
      "ports": [
        {
          "guest": 22,
          "host": 32774
        }
      ]
    },
    {
      "name": "/cluster-node0",
      "spec": {
        "name": "",
        "image": "quay.io/footloose/centos7:0.1.0",
        "volumes": [
          {
            "type": "bind",
            "source": "/sys/fs/cgroup",
            "destination": "/sys/fs/cgroup",
            "readOnly": false
          }
        ],
        "cmd": "/sbin/init"
      },
      "status": "Running",
      "ports": [
        {
          "guest": 22,
          "host": 32773
        }
      ]
    }
  ]
}

@Skarlso commented Mar 23, 2019

Oh, it's run with --rm, so there will never be a Stopped status. :)

@Skarlso commented Mar 23, 2019

@dlespiau I'm at an impasse. I think adding --all is pretty cool. However, this means that when it's switched on, I'll lose the ability to gather cluster information, because right now I'm using the API to see running containers only.

If I revert to using something like forAllMachines does, I will lose the ability to have --all, unless I traverse the current folder for *.footloose files in order to gather cluster data.

What do you think? Stay on this path and say something like "Footloose doesn't have machines running at the moment"? Or switch back to using the config files to gather data, add computed data when they are running, and lose the --all flag? Or try to find all footloose files next to the executable's location?

@Skarlso commented Mar 24, 2019

Or!! Another option would be to leave --all in, but when it's provided, accept leaving out cluster information if there are no running containers for a cluster. If --all is not provided, I fall back to the way I was doing it when I first started. :)

@Skarlso commented Mar 24, 2019

Alright, I decided to leave --all in, because I have to inspect the container anyhow to obtain certain data from it, and the easiest way to do that is via the API. It gives back a very nice struct.

In case there are no running containers, though, I fall back to displaying just the cluster data. Either way, this command will be useful and hopefully nicely coded. :)
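The fallback described above can be sketched roughly like this; the types and helper names are illustrative, not footloose's actual code:

```go
package main

import "fmt"

// Machine is a trimmed-down stand-in for footloose's machine type; the
// fields here are illustrative only.
type Machine struct {
	Name  string
	State string
}

// inspectRunning stands in for the Docker API inspection path; it
// returns nil when no containers exist for the cluster.
func inspectRunning() []Machine { return nil }

// fromConfig stands in for deriving machine data from the cluster's
// footloose config file.
func fromConfig() []Machine {
	return []Machine{
		{Name: "cluster-node0", State: "Stopped"},
		{Name: "cluster-node1", State: "Stopped"},
	}
}

// gather prefers live container data and falls back to config-derived
// data when nothing is running, which is the behaviour described above.
func gather() []Machine {
	if ms := inspectRunning(); len(ms) > 0 {
		return ms
	}
	return fromConfig()
}

func main() {
	for _, m := range gather() {
		fmt.Println(m.Name, m.State)
	}
}
```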

@Skarlso commented Mar 24, 2019

Okay, cool. This now works nicely. In case there are no containers that can be inspected, it falls back to displaying cluster-related information. Hostname and ports are visible when containers are running.

❯ ./footloose list
+----------------+----------+-----------+---------------------------------+------------+--------------------------------+---------+
| Name           | Hostname | Ports     | Image                           | Cmd        | Volumes                        | State   |
+----------------+----------+-----------+---------------------------------+------------+--------------------------------+---------+
| /cluster-node1 | node1    | 22->32785 | quay.io/footloose/centos7:0.1.0 | /sbin/init | /sys/fs/cgroup->/sys/fs/cgroup | Running |
| /cluster-node0 | node0    | 22->32784 | quay.io/footloose/centos7:0.1.0 | /sbin/init | /sys/fs/cgroup->/sys/fs/cgroup | Running |
+----------------+----------+-----------+---------------------------------+------------+--------------------------------+---------+

When they aren't running, the host port is missing since there are no outer mappings, so hostname and port are obtained from the config file instead.

❯ ./footloose list
+---------------+----------+-------+---------------------------------+-----+---------+---------+
| Name          | Hostname | Ports | Image                           | Cmd | Volumes | State   |
+---------------+----------+-------+---------------------------------+-----+---------+---------+
| cluster-node0 | node0    | 22->0 | quay.io/footloose/centos7:0.1.0 |     |         | Stopped |
| cluster-node1 | node1    | 22->0 | quay.io/footloose/centos7:0.1.0 |     |         | Stopped |
+---------------+----------+-------+---------------------------------+-----+---------+---------+

return
}

func (c *Cluster) gatherMachinesByCluster() (machines []*Machine) {
@Skarlso commented:

I saw that you already have forEachMachine, but since I wanted each machine, I extracted this. I could use the forEachMachine function, but then I would have to resort to a package-wide variable to track each machine's inspect data that came back. That was my other idea in case I would lose the --all option.

@@ -0,0 +1,7 @@
+-----------------------------+----------+-------+-------------------------------+-----+---------+---------+
@Skarlso commented:

Right now, considering how the golden output works, I can only test for non-running containers, since running ones have dynamic data, like the host port number, which would be different on each test run. This is good enough for now...

@dlespiau commented:

I'm envisioning being able to query parts of the JSON like docker inspect does:

footloose show node0 -f '{{.Status}}'

We could use that to test show once we have it.
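Such a format flag could be backed by Go's text/template, which is the same mechanism behind `docker inspect -f`. A sketch with an illustrative `machine` struct and `render` helper (not footloose's real types):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// machine mirrors a few fields of the JSON shown in this PR; it is not
// footloose's actual type.
type machine struct {
	Name  string
	State string
}

// render applies a user-supplied Go template to a machine, the same
// mechanism `docker inspect -f` uses for its format flag.
func render(format string, m machine) (string, error) {
	tmpl, err := template.New("show").Parse(format)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, m); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	out, err := render("{{.State}}", machine{Name: "node0", State: "Running"})
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // prints: Running
}
```

A hypothetical `footloose show node0 -f '{{.State}}'` would then just pass the flag value straight into `render`.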

@dlespiau commented:

Hi! I have seen this, but don't have time to comment right now, will do in the next few days/this week though!

@Skarlso commented Mar 30, 2019

Yeah I agree. It's ugly as hell too. :D I will distill that information tomorrow. Now, it's late. :)

I changed most things. :)

@Skarlso commented Mar 31, 2019

@dlespiau So we can agree that if it's a single-node inspect, then the output is always JSON, right? Because it would be helluva annoying to have to view like 15 columns of a table in a terminal. :)

Hmm, I'll leave it open though because someone might actually want toml... or yaml.

@Skarlso commented Mar 31, 2019

@dlespiau Okey dokey. Now there is parity between the table formatter and the JSON output. Both only display these:

❯ ./footloose show
      NAME       HOSTNAME  PORTS               IMAGE               CMD   STATE

  cluster-node0  node0     22->0  quay.io/footloose/centos7:0.1.0       Stopped
  cluster-node1  node1     22->0  quay.io/footloose/centos7:0.1.0       Stopped

And I got rid of the table borders for you. :)

Now there is an option to inspect a single machine in any cluster. It doesn't matter what cluster it is in; if you type its name after footloose show, it will display some more information on it, like volumes.

This is the normal info on a running machine:

    {
      "name": "/cluster-node1",
      "state": "Running",
      "ports": [
        {
          "guest": 22,
          "host": 32798
        }
      ],
      "hostname": "node1",
      "image": "quay.io/footloose/centos7:0.1.0",
      "cmd": "/sbin/init"
    },

This is when it's used as footloose show cluster-node0

❯ ./footloose show cluster-node0
{
  "name": "/cluster-node0",
  "state": "Running",
  "spec": {
    "name": "",
    "image": "quay.io/footloose/centos7:0.1.0",
    "volumes": [
      {
        "type": "bind",
        "source": "/sys/fs/cgroup",
        "destination": "/sys/fs/cgroup",
        "readOnly": false
      }
    ],
    "cmd": "/sbin/init"
  },
  "ports": [
    {
      "guest": 22,
      "host": 32797
    }
  ],
  "hostname": "node0",
  "image": "quay.io/footloose/centos7:0.1.0",
  "cmd": "/sbin/init"
}

There is some redundancy with the spec here, like image and cmd. 🤔

Is this okay?

Cheers.

@Skarlso commented Mar 31, 2019

BTW, as you can see in my output, the state is Running. I wonder if some other container might hinder the docker inspect command, or if the formatter errors out or something. In any case, in your place I would print the error if there is one.

@Skarlso commented Mar 31, 2019

res, _ := docker.Inspect(m.name, "{{.Name}}")

See what's happening here: the error is discarded.
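A standalone sketch of surfacing that error instead of discarding it; `inspect` here is a hypothetical stand-in that shells out to the real `docker inspect` CLI, not footloose's actual `docker.Inspect` signature:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// inspect runs `docker inspect -f <format> <name>` and, unlike the
// `res, _ :=` line quoted above, propagates the error so the caller
// can log or print it.
func inspect(name, format string) (string, error) {
	out, err := exec.Command("docker", "inspect", "-f", format, name).CombinedOutput()
	if err != nil {
		// Include the CLI's own output, which usually names the
		// missing container or the template problem.
		return "", fmt.Errorf("inspecting %s: %v: %s", name, err, strings.TrimSpace(string(out)))
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	res, err := inspect("cluster-node0", "{{.Name}}")
	if err != nil {
		fmt.Println("error:", err) // surface the error rather than dropping it
		return
	}
	fmt.Println(res)
}
```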

@Skarlso commented Mar 31, 2019

Yeah, Travis is a mess. :/ You should switch away from them, considering they no longer have the manpower to deal with outages and don't really care to upgrade or maintain it anymore. :/

@dlespiau commented Apr 1, 2019

Thanks for rebasing this PR! I have a bit of time now, let me have a look, should be close to mergeable, we can fix/improve things later as well.

@dlespiau commented Apr 1, 2019

Oh, you merged the master branch in. I'm more used to rebasing, but it really doesn't matter. I think I may have to squash and merge instead of just merging this PR though, if that's ok with you (so we don't have commits like "Ups.. left image out" or "UUUUUUUUUUUUUUUUUuuuuuuuuuuuuuuhhhhhh" in master). :)

@Skarlso commented Apr 1, 2019

Sure, don't worry, I can squash my commits too if you wish. :) I usually keep those because if anything is wrong, it's easier to roll back.

Also, did you know that git can squash and merge nowadays? :D

@dlespiau commented Apr 1, 2019

Github can do that for us even, there's a "Squash and merge" option when merging a PR.

@Skarlso commented Apr 1, 2019

Yep. :)

A rebase merge of master is harder to reconcile imho if something really bad goes wrong with it.

But I'll keep it in mind for the next time it happens. :)

@dlespiau commented Apr 1, 2019

I've logged a few improvements from playing with your branch: issues ranging from #109 to #115. Most of them are small but allow us to split the work. Let's get this merged. I'll definitely have a look at #114.

@dlespiau dlespiau merged commit 7ea5786 into weaveworks:master Apr 1, 2019
@Skarlso commented Apr 1, 2019

Thanks. Cool! :) That is so weird. I have two machines, one for work and one of my own. Both report the correct status. Both are macOS, if that helps. I might test this in a Linux VM... :)

@dlespiau commented Apr 1, 2019

Don't worry about it, I'll find time to look at this. We just need to fix it before 0.4.0.

@Skarlso Skarlso deleted the issues_7_list_status branch April 2, 2019 04:28