containers with extraModules in host #13852

Closed
Rizary opened this issue Mar 12, 2016 · 6 comments
Comments

@Rizary
Contributor

Rizary commented Mar 12, 2016

Basic info

To make sure that we are on the same page:

  • Kernel: Linux Rizilab 4.3.4 #1-NixOS SMP Thu Jan 1 00:00:01 UTC 1970 x86_64 GNU/Linux
  • System: 15.09.1076.9220f03 (Dingo)
  • Nix version: nix-env (Nix) 1.10
  • Nixpkgs version: "15.09.1076.9220f03"

Describe your issue here

I created a new container called beHaskell with this configuration:

{ config, lib, pkgs, ... }:

with lib;

{ boot.isContainer = true;
  networking = {
    hostName = mkDefault "behaskell";
    useDHCP = false;
    firewall.enable = false;
    firewall.allowedTCPPorts = [ 80 443 ];

  };

  services = {
    #openssh.enable = true;
  };

  users = {
    mutableUsers = true;
    extraUsers.Rizilab = {
      createHome = true;
      uid = 1001;
      extraGroups = [ "wheel" ];
      home = "/home/Rizilab";
      description = "Development Account";
      useDefaultShell = true;
    };
  };

}

When I tried to run nix-channel --update, I could not connect to the internet; I tried to ping google.com from inside the container and it could not connect.

So I added a container configuration to my host's /etc/nixos/configuration.nix like this:

containers = {
  # behaskell is a container for learning Haskell
  behaskell = {
    config =
      { config, pkgs, ... }:
      {
        boot = {
          kernelPackages = pkgs.linuxPackages_latest;
          kernelModules = [ "e1000e" ];
        };
        #privateNetwork = true;
      };
  };
};

I added kernelModules = [ "e1000e" ], then ran nixos-rebuild switch on my host. It built perfectly, and I could connect to my beHaskell container and run nix-channel --update and nixos-rebuild switch.

The problem occurs when I try to run nixos-container show-ip beHaskell from my host; the error says:

/run/current-system/sw/bin/nixos-container: cannot get IP address

Then I tried to add the following code:

hostAddress = "10.10.0.1";
localAddress = "10.10.0.2";

to the container attribute in my host configuration.nix, and when I tried to run nixos-rebuild switch it gave an error:

The option `containers.beHaskell.hostAddress' defined in `<unknown-file>' does not exist.

Expected result

10.10.0.2

Actual result

/run/current-system/sw/bin/nixos-container: cannot get IP address

And after adding localAddress and hostAddress:

The option `containers.beHaskell.hostAddress' defined in `<unknown-file>' does not exist.

Steps to reproduce

  1. nixos-container create beHaskell
  2. nixos-container start beHaskell
  3. nixos-container root-login beHaskell
  4. nano /etc/nixos/configuration.nix
  5. add the configuration above
  6. in host, nixos-rebuild switch
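
For reference, a purely declarative container with its own private veth network might look like the sketch below. This is only a sketch: the option names (privateNetwork, hostAddress, localAddress) come from the NixOS containers module, hostAddress/localAddress only take effect together with privateNetwork = true, and their availability on the 15.09 release used here may differ. Note also that the attribute name must match the container name exactly (beHaskell vs. behaskell).

```nix
# Host /etc/nixos/configuration.nix: declarative container sketch (assumed option names).
containers.behaskell = {
  privateNetwork = true;        # gives the container its own veth pair
  hostAddress = "10.10.0.1";    # host side of the veth pair
  localAddress = "10.10.0.2";   # container side
  config = { config, pkgs, ... }: {
    # container configuration goes here
  };
};
```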
@hrdinka
Contributor

hrdinka commented Mar 12, 2016

Is your container declarative or imperative? It looks like you've mixed these two configuration schemes. When created as described in your steps to reproduce, everything should be fine, because your container will be assigned an IPv4 address during creation.

@Rizary
Contributor Author

Rizary commented Mar 12, 2016

@hrdinka, I mixed declarative and imperative. At first I did it imperatively, and the container had an IPv4 address when I ran show-ip. But I could not connect to the internet, because my "imperative" setup did not recognize my ethernet card.

That's when I remembered that I had added extraModules to my host configuration.nix. So I added these extraModules to the container the "declarative" way, and suddenly the IPv4 address was missing, producing the above error even when I did not put localAddress and hostAddress in my host configuration.nix.

So now the IPv4 address is missing because I added the extraModules declaratively via my host configuration.nix.

@hrdinka
Contributor

hrdinka commented Mar 12, 2016

Looks like you've got several things mixed up here:

  • Mixing declarative and imperative containers does not work. Declarative containers are built together with your host configuration. This means that after a host rebuild your container will be configured as defined in the host config. After a rebuild within the container it will obey the container's own config file. There is no merging or mixing taking place between these two config schemes.
  • Your container's config belongs entirely in the /etc/nixos/configuration.nix of your container. extraModules would belong there as well, if it weren't for the following:
  • Containers aren't fully emulated virtual machines. They are just another process of your system and do not run their own kernel. If your host kernel understands your ethernet card, your container does as well.
  • Containers use veth devices. Your container's network interface has nothing to do with your host interface. A veth device is a virtual ethernet device. These are created in pairs: your host sees one and your container sees the other. The two devices behave as if they were two linked ethernet cards. This is a kernel feature and doesn't need any hardware drivers.

The reason why you didn't have network connectivity within your container is likely a different one. On a freshly created container you can examine your network interfaces with ip link. If you see an interface there, your interface is recognized and working. This means all drivers are loaded, and if something does not work it's due to network configuration.

The problem is that your container is in a different subnet than your host's WAN, and by default your host won't route between these two networks. If it does, and your router is properly configured, the router will block all traffic from your container, because it's not part of the router's subnet.

First check whether IP forwarding is enabled on your host (sysctl net.ipv4.ip_forward); if not, enable it via sysctl or by adding this to your host config:

# Full NixOS option path is boot.kernel.sysctl.
boot.kernel.sysctl = {
  "net.ipv4.ip_forward" = true;
  "net.ipv6.conf.all.forwarding" = true;
};

Next you can review your routing table (ip route) and set it up to route between your container and your WAN, or use NAT to translate the container's IP range to the one used by your WAN. The easiest, and likely wanted, solution for your problem is to masquerade all traffic from and to your container via NAT. When masqueraded, your host machine rewrites all packets from your container to make them look like they are coming from your host machine, so your router thinks it is talking to your host. On the way back, packets are rewritten to target your container's IP. This is described in the NixOS manual here: http://nixos.org/nixos/manual/#sec-container-networking.

Update: there is no need to enable IP forwarding manually if you are using the NixOS config for NAT or routing. Those options will do this for you automatically.
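
The masquerading setup described above can be sketched in the host configuration like this (a sketch using the networking.nat options; the external interface name is an assumption, so substitute your actual WAN device):

```nix
# Host configuration.nix: masquerade container traffic out via the WAN interface.
networking.nat = {
  enable = true;
  internalInterfaces = [ "ve-+" ];  # matches all container-side veth interfaces
  externalInterface = "eth0";       # assumption: your upstream interface
};
```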

@Rizary
Contributor Author

Rizary commented Mar 14, 2016

Hi @hrdinka,

Thank you for your suggestion; I think I had a misunderstanding about how containers work.

So in this case, if I use a purely declarative configuration, I don't need to configure the configuration.nix inside my containers?

thanks,
andika

@hrdinka
Contributor

hrdinka commented Mar 14, 2016

Yes, that's how it works. When using imperative containers, your containers function like completely independent machines and won't be bothered by your host at all.

With declarative containers, your containers are part of your host's config. NixOS handles your host and containers as one entity, and builds and updates them together.

Both behaviors are nice to have and come in handy depending on your project's needs. Decide on one, because you can't mix them.

@yacinehmito
Contributor

I believe this issue can be closed.

@Mic92 Mic92 closed this as completed Jul 21, 2017