
[RFE] Allow docker to report the veth interface used by a container. #17064

Open

shishir-a412ed opened this issue Oct 15, 2015 · 12 comments

Labels: area/networking, kind/enhancement, version/master

@shishir-a412ed (Contributor)

Proposed title of this feature request.

Allow easy discovery of which veth interface a given container is using.

What is the nature and description of the request?

Currently there is no reliable and easy way to determine which veth interface a container is using. docker inspect reports the bridge and other networking information, but directly reporting which veth interface is in use would be beneficial.
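
(Editorial illustration, not part of the original report: the standard inspect template paths expose addressing and network mode, but nothing names the host-side veth.)

# What docker inspect exposes today ("mycontainer" is illustrative):
docker inspect -f '{{.NetworkSettings.IPAddress}}' mycontainer   # e.g. 172.17.0.2
docker inspect -f '{{.HostConfig.NetworkMode}}' mycontainer      # e.g. default
# No template path reports the host-side vethXXXX interface.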

Why does the customer need this? (List the business requirements here)

In their own words:

"We're trying to use tc and netem in conjunction with docker containers to create self-contained "nightmare networks" for code testing purposes."

How would the customer like to achieve this? (List the functional requirements here)

Being able to retrieve this from a 'docker inspect' would be acceptable.
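
(For concreteness, the request amounts to something like the hypothetical template below; no such field exists.)

# Hypothetical field sketching the requested output; it does NOT exist:
docker inspect -f '{{.NetworkSettings.VethInterface}}' mycontainer
# -> vethabc123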

Is there already an existing RFE upstream or in Red Hat Bugzilla?

No

List any affected packages or components.

docker

uname -a
Linux dhcp-25-141.bos.redhat.com 4.0.4-303.fc22.x86_64 #1 SMP Thu May 28 12:37:06 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

Docker Version:
Client:
Version: 1.9.0-dev
API version: 1.21
Go version: go1.4.2
Git commit: 31b882e
Built: Wed Sep 30 14:20:38 UTC 2015
OS/Arch: linux/amd64

Server:
Version: 1.9.0-dev
API version: 1.21
Go version: go1.4.2
Git commit: 31b882e
Built: Wed Sep 30 14:20:38 UTC 2015
OS/Arch: linux/amd64

Docker Info:
Containers: 3
Images: 52
Engine Version: 1.9.0-dev
Storage Driver: devicemapper
Pool Name: docker-8:3-2097606-pool
Pool Blocksize: 65.54 kB
Backing Filesystem: extfs
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Used: 4.198 GB
Data Space Total: 107.4 GB
Data Space Available: 33.68 GB
Metadata Space Used: 5.411 MB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.142 GB
Udev Sync Supported: true
Deferred Removal Enabled: false
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
Library Version: 1.02.93 (2015-01-30)
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 4.0.4-303.fc22.x86_64
Operating System: Fedora 22 (Twenty Two)
CPUs: 4
Total Memory: 11.72 GiB
Name: dhcp-25-141.bos.redhat.com
ID: NODU:OCAV:EOZD:CWD3:EEKS:AJCL:ZCKH:B6S7:7247:UVAC:2EAK:3FC2

@phemmer (Contributor) commented Oct 15, 2015

See also #16729 (and #16729 (comment))

@rhatdan (Contributor) commented Oct 15, 2015

@shishir-a412ed (Contributor, Author)

@icecrime @jfrazelle @LK4D4 @cpuguy83 Since the original PR #16729 is closed, I can take a look at this. Are you still open to accepting this feature as part of docker inspect?

@mrjana @mavenugo thoughts on the design?

@runcom (Member) commented Jan 11, 2016

I'm carrying this. @mrjana @mavenugo, is it OK to expose the veth pair in docker inspect? I've noticed the veth pair name generation has also been moved to libcontainer; where can I find the names in docker?

@mavenugo (Contributor)

@runcom the veth pair is managed by the network driver or any other plugin. In fact, there can be plugins that do not use veth pairs at all. That is why it is not a good idea to expose driver-specific information at the container-management level: it makes the solution less portable and binds a container to a particular driver/plugin.

@mrjana thoughts ?

@maran commented Feb 12, 2016

I lack in-depth knowledge of docker internals, so you will have to excuse me if what I'm trying to accomplish can be done in a more straightforward way.

I expected docker to always have a network component, regardless of which driver or plugin is managing it. In order to track bandwidth/connections coming out of docker using iptables, I need a way to reliably track these connections.

The easiest way, at least back when docker 1.5 was new and I was developing this, was the iptables physdev match (--physdev-in).
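
(A minimal sketch of that kind of rule, assuming the host-side interfaces still carry the veth prefix, which newer releases no longer guarantee:)

# Count forwarded traffic entering the bridge from any veth* interface:
iptables -A FORWARD -m physdev --physdev-in veth+ -j RETURN
# Read the per-rule packet/byte counters:
iptables -L FORWARD -v -x -n | grep physdev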

I have thousands of containers that rely on this functionality in order to work, making it impossible for me to upgrade to the latest docker, since the veth prefix can't be found anymore.

I understand that docker cares little about how I use it and what I'm trying to achieve with it, but I hope there is a way for me to start using the latest docker releases while maintaining a way to use iptables to track the usage.

Having support in docker inspect to get some stats about the network plugin used would greatly ease my burdens.

@mavenugo (Contributor)

@maran we care a lot about how users use it :) So please don't hesitate to share your views. But based on the discussion we had in #20224, I believe your issue is taken care of.

@peterwillcn
add +1

@thaJeztah added the area/networking and kind/enhancement labels on Aug 13, 2016
@thaJeztah (Member)

Following the discussion in #20224 and here, it looks like this is not something we want to support, because it's really driver-dependent and would expose low-level information used by docker's internals; I think we should close?

@saada commented Aug 23, 2016

@thaJeztah, how can I scan network traffic and packets on each container without this feature?

@aboch (Contributor) commented Sep 21, 2016

@saada

how can I scan network traffic and packets on each container without this feature?

To inspect traffic to/from container x, you can do it from a container that was run with --net container:x
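
(A minimal sketch of that approach; "tcpdump-image" is a placeholder for any image that ships tcpdump:)

# Capture container x's traffic from a helper container sharing its netns:
docker run --rm --net container:x tcpdump-image tcpdump -i eth0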

@daniilyar commented Apr 13, 2017

This is the script I am using to build veth-iface-to-container mapping:

#!/bin/bash

function veth_interface_for_container() {

  local container_name
  container_name=$(docker inspect --format='{{.Name}}' "${1}")

  # Get the process ID for the container named ${1}:
  local pid
  pid=$(docker inspect -f '{{.State.Pid}}' "${1}")

  # Make the container's network namespace available to the ip-netns command:
  mkdir -p /var/run/netns
  ln -sf "/proc/${pid}/ns/net" "/var/run/netns/${1}"

  # Get the interface index of the container's eth0:
  local index
  index=$(ip netns exec "${1}" ip link show eth0 | head -n1 | sed 's/:.*//')
  # Increment the index to determine the veth index, which we assume is
  # always one greater than the container's index:
  index=$((index + 1))

  # Write the name of the veth interface to stdout, stripping the trailing
  # "@ifN" peer annotation that ip link prints:
  local veth
  veth=$(ip link show | grep "^${index}:" | sed "s/${index}: \([^:@]*\).*/\1/")

  echo "${veth} ${container_name} ${1}"

  # Clean up the netns symlink, since we don't need it anymore:
  rm -f "/var/run/netns/${1}"
}

if [ "$#" -eq 0 ]; then
  for docker_container in $(docker ps -q); do
    veth_interface_for_container "${docker_container}"
  done
fi

if [ "$#" -eq 1 ]; then
  veth_interface_for_container "${1}"
fi

Took it from an SO answer and improved it a bit. Maybe somebody will find it useful. Usage:

./veth.sh [<container_name_or_id>]
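
(A possibly more robust variant, not from the thread: instead of assuming the host index is the container's index + 1, read the peer ifindex that ip link prints after the "@if" marker. Assumes nsenter is available and the container has an eth0; the script name is illustrative.)

#!/bin/bash
# veth-peer.sh <container>: resolve the host-side veth without the
# index+1 assumption (sketch).
pid=$(docker inspect -f '{{.State.Pid}}' "$1")
# In the container's netns, eth0 is listed as "eth0@ifN", where N is the
# ifindex of its host-side veth peer:
peer_index=$(nsenter -t "$pid" -n ip -o link show eth0 | sed -n 's/.*eth0@if\([0-9]*\).*/\1/p')
# On the host, print the interface that has that ifindex:
ip -o link show | awk -F': ' -v idx="$peer_index" '$1 == idx { sub(/@.*/, "", $2); print $2 }'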
