Commit f0e0d52 — Documentation of How To Guide on SSH Tunneling of remote device services

jim-wang-intel committed Jun 9, 2020 · 1 parent 1160e74

Showing 8 changed files with 1,455 additions and 0 deletions.

# Security for EdgeX Stack

This page describes one option for securing the EdgeX software stack while running remote device services such as device-virtual, device-rest, and device-mqtt: secure two-way SSH tunneling.

## Basic SSH-Tunneling

In this option, SSH tunneling is utilized to secure the EdgeX software stack. The basic idea is to create a secure SSH connection between a local machine (the primary host) and a remote machine (the secondary host), through which selected micro-services or applications can be relayed. In this particular example, the primary host runs the whole EdgeX stack, including core services and security services, but **without** any device service. The device services run on the secondary, or remote, machine.

The communication is secure because SSH port-forwarding connections are encrypted by default.

The SSH communication is established by introducing two extra SSH-related services:

1) device-ssh-proxy: the service with the SSH client that opens the SSH communication between the primary and the secondary

2) device-ssh-remote: the SSH server (daemon) service running together with the device services on the remote machine

The high-level diagram is shown as follows:

![Top level diagram for SSH tunneling for device services](ssh-tunneling device.png)

<TBD> more description needed

## How-to: Reference implementation example

### Set up the remote Virtual Machine

In the example setup, `vagrant` is used on top of `VirtualBox` to set up the secondary/remote VM. The standard SSH port 22 is mapped to VM port 2223, which in turn is mapped to host port 2223.

Once you have downloaded Vagrant from the HashiCorp website, a typical first-time setup can be done via the command `./vagrant init`, which generates the Vagrant configuration file.

Here is the Vagrant file used to create the remote machine:

[Vagrantfile](Vagrantfile "remote VM Vagrant file with docker and docker-compose installed")

### SSH Tunneling: Set up the SSH server on the remote machine

Using the example from Docker Hub, an SSH daemon (server) can be set up fairly easily as a Docker container: https://docs.docker.com/engine/examples/running_ssh_service/

Note that this is the SSH server, and it uses password authentication by default. In order to authenticate to this SSH server without a password prompt, we inject the public SSH key generated on the primary machine: log in to the SSH server machine once, create the `~/.ssh` directory, and append the key to `~/.ssh/authorized_keys`. The following commands show how this is accomplished:

```sh
root@sshd-remote: mkdir -p ~/.ssh
root@sshd-remote: chmod 700 ~/.ssh
root@sshd-remote: echo "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDKvsFf5HocBOBWXdVJKfQzkhf0K8lSLjZn9PX84VdhHyP8n1mzfpZywA4vsz8+A3OsGHAr2xpkyzOS0YkwD7nrI3q1x0A0+ANhQNOaKbnfQRepTAES3FPm5n0AbNVfgOre3RR2NLOt6M5m3mA/MERNer1fEp6BM96sdU0o3KjqwFGkPufoQrVkpz2691MZ6/ACDc+lk7uQrinsB4YxM7ctiLNl4I1A3TJgVv0jkJImUCHaThYj3XoaqUqUjQFTS7SlFfkXuk13EjNfRzqPwKFnVvGTUaYzaBV5S4wt5XCxhLfs497M2k5zmNx3HFY/GEyeoroCpjsiXkm+HcgdIYb7 root" >> ~/.ssh/authorized_keys
```

The SSH key pair can be generated with the `ssh-keygen` command on the primary machine; the public key is usually stored in the file `~/.ssh/id_rsa.pub`:

```sh
ssh-keygen -q -t rsa -C root -N '' -f ~/.ssh/id_rsa 2>/dev/null
```


### SSH Tunneling: Local Port Forwarding

Local port forwarding relays connections from the primary to the secondary/remote machine. It is achieved with the `-L` flag of the `ssh` command.

```sh
ssh -vv -o StrictHostKeyChecking=no -N $TUNNEL_HOST \
-L *:$LOCAL_PORT:$REMOTE_HOST:$REMOTE_PORT -p $SSH_PORT
```

where the environment variables are:

- TUNNEL_HOST is the remote host name or IP address on which the SSH daemon (server) is running;

- LOCAL_PORT is the port number on the local (primary) machine that is forwarded to the remote machine;

- REMOTE_HOST is the host name or IP address of the Docker containers running on the remote machine;

- REMOTE_PORT is the port number on the remote machine to which the primary machine's port is forwarded through the SSH tunnel.
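
As a concrete illustration, the following dry-run sketch assembles and prints the command from the example values used later in this page (the addresses and ports are illustrative, not requirements) instead of opening a real connection:

```sh
# dry run: assemble the local port-forwarding command from example values
TUNNEL_HOST=192.168.1.190   # remote VM running sshd
LOCAL_PORT=49986            # primary-side port to expose
REMOTE_HOST=192.168.64.1    # container host on the remote machine
REMOTE_PORT=49986           # device service port on the remote machine
SSH_PORT=2223               # mapped ssh port (not the usual 22)
cmd="ssh -o StrictHostKeyChecking=no -N $TUNNEL_HOST -L *:$LOCAL_PORT:$REMOTE_HOST:$REMOTE_PORT -p $SSH_PORT"
echo "$cmd"
```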

### SSH Reverse Tunneling: Remote Port Forwarding

Remote port forwarding works in the reverse direction: from the secondary/remote machine back to the primary.

Reverse SSH tunneling is also needed because the device services depend on core services like `data`, `metadata`, `command`, and so on. These core services run on the primary machine and must be **reversely** tunneled back to the device services on the remote side through an SSH remote-port-forwarding connection. This is achieved with the `-R` flag of the `ssh` command.

```sh
ssh -vv -o StrictHostKeyChecking=no -N $TUNNEL_HOST \
-R 48080:$REVERSE_HOST:48080 \
-R 48081:$REVERSE_HOST:48081 \
-R 48082:$REVERSE_HOST:48082 \
-R 5563:$REVERSE_HOST:5563 \
-p $SSH_PORT
```

where the environment variables are:

- TUNNEL_HOST is the remote host name or IP address on which the SSH daemon (server) is running;

- REVERSE_HOST is the host name or IP address of the Docker containers running on the primary machine; it is basically the gateway host name or IP address of the device-ssh-proxy container.
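
A similar dry-run sketch shows how the reverse-forwarding flags accumulate, one `-R` per core service port (the values below are illustrative):

```sh
# dry run: build the remote (reverse) port-forwarding command
TUNNEL_HOST=192.168.1.190   # remote VM running sshd
REVERSE_HOST=172.17.0.1     # docker bridge gateway on the primary (example)
SSH_PORT=2223
cmd="ssh -o StrictHostKeyChecking=no -N $TUNNEL_HOST"
for port in 48080 48081 48082 5563; do   # core-data, metadata, command, message bus
  cmd="$cmd -R $port:$REVERSE_HOST:$port"
done
cmd="$cmd -p $SSH_PORT"
echo "$cmd"
```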

### Putting it all together

- Launch the remote machine or VM if it is not running yet:

```sh
~/vm/vagrant up
```

- In the primary machine, generate an SSH key pair using `ssh-keygen`:

```sh
ssh-keygen -q -t rsa -C root -N '' -f ~/.ssh/id_rsa 2>/dev/null
```

This produces two files under the directory `~/.ssh`: the private key (`id_rsa`) and the public key (`id_rsa.pub`).
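
As a quick sanity check, a throwaway key pair can be generated in a temporary directory and inspected (kept out of `~/.ssh` so the sketch has no side effects):

```sh
tmpdir=$(mktemp -d)
ssh-keygen -q -t rsa -C root -N '' -f "$tmpdir/id_rsa"
ls "$tmpdir"                           # id_rsa  id_rsa.pub
ssh-keygen -l -f "$tmpdir/id_rsa.pub"  # key length and fingerprint
```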

- Build the `device-ssh-proxy` image from its Dockerfile and entrypoint script:

[Dockerfile](Dockerfile-primary-ds-proxy "primary device service proxy Dockerfile")

[docker-entrypoint](ds-proxy-entrypoint.sh "Docker entrypoint shell script for primary device service proxy")

and build it with the following command:

```sh
docker build -f Dockerfile-primary-ds-proxy --build-arg SSH_PORT=2223 -t device-ssh-proxy:test .
```

- Build the remote sshd server/daemon image with its Dockerfile:

[Dockerfile](Dockerfile-remote-sshd "remote sshd Dockerfile")

to build:

```sh
docker build -t eg_sshd .
```

- Run the remote EdgeX device services with the following docker-compose file:

[composefile](edgex-device-sshd-remote.yml "docker-compose file for remote device services with SSH server/daemon")

Note that the following ssh server service is added in the docker-compose file:

```yaml
  ################################################################
  # SSH Daemon
  ################################################################
  sshd-remote:
    image: eg_sshd
    ports:
      - "2223:22"
    container_name: edgex-sshd-remote
    hostname: edgex-sshd-remote
    networks:
      - edgex-network
```

- Copy the contents of the public key into `~/.ssh/authorized_keys` in the `sshd-remote` container, so that the primary machine can authenticate to the remote container `sshd-remote` automatically.
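
One hedged way to do this from the primary machine is the classic `ssh-copy-id`-style pipe over the initial password-authenticated login (the VM address and port below are examples from this setup, not requirements):

```sh
# append the primary's public key to authorized_keys on the remote sshd
cat ~/.ssh/id_rsa.pub | ssh -p 2223 root@192.168.1.190 \
  'mkdir -p ~/.ssh && chmod 700 ~/.ssh \
   && cat >> ~/.ssh/authorized_keys \
   && chmod 600 ~/.ssh/authorized_keys'
```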

- In the primary machine, include the `device-ssh-proxy:test` SSH proxy Docker image together with the EdgeX core services in the docker-compose file like this:

```yaml
  ##########################################################
  # ssh tunneling proxy service
  ##########################################################
  device-ssh-proxy:
    image: device-ssh-proxy:test
    volumes:
      - $HOME/.ssh:/root/ssh:ro
    environment:
      TUNNEL_HOST: 192.168.1.190
      LOCAL_HOST: 172.17.0.1
      REMOTE_HOST: 192.168.64.1
      LOCAL_PORT: 49986
      REMOTE_PORT: 49986
      SSH_PORT: 2223
```

The full docker-compose file is included here:

[composefile](edgex-core-ssh-proxy.yml "docker-compose file for the primary core services and ssh tunneling proxy service without any device services")

Note that:

1. The values of the environment variables depend on the environment settings of the primary and the remote machine. In this particular case, we are SSH-tunneling to the remote device-rest service.

2. The docker-compose file in the primary machine does not include any device services at all. This is to ensure that we are actually using the device services in the remote machine.
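
For instance, the `LOCAL_HOST` value `172.17.0.1` is typically the default Docker bridge gateway on the primary machine; one way to confirm it (assuming the default `bridge` network is in use) is:

```sh
docker network inspect bridge --format '{{ (index .IPAM.Config 0).Gateway }}'
```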

#### Test with the device-rest API

<TBD> mainly run curl or Postman directly from the primary machine against the device-rest APIs to verify that the REST device service is accessible via the two-way SSH tunnel.
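
For example, a minimal check from the primary machine might hit the device-rest ping endpoint through the forwarded local port (the port and path assume the compose example above and the EdgeX v1 API):

```sh
curl -s http://localhost:49986/api/v1/ping
```

A version-string reply indicates the request traversed the tunnel to the remote device service.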
### docs_src/microservices/security/Dockerfile-primary-ds-proxy (33 additions)
FROM alpine:latest

ARG SSH_VERSION

# tunneling host name or ip
ARG TUNNEL_HOST

# the local hostname / IP of container for Remote port forwarding
ARG LOCAL_HOST

# local port
ARG LOCAL_PORT

# remote sshd host name or ip address
ARG REMOTE_HOST

# remote sshd container port
ARG REMOTE_PORT

# ssh port in use; set this number if it is not the usual port 22
# or there is a different ssh port mapping between local and remote
ARG SSH_PORT

RUN apk add --update dumb-init openssh-client && rm -rf /var/cache/apk/*

COPY ds-proxy-entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh \
&& ln -s /usr/local/bin/entrypoint.sh /

ENV APP_PORT=49990
EXPOSE $APP_PORT $LOCAL_PORT $REMOTE_PORT $SSH_PORT

ENTRYPOINT ["entrypoint.sh"]
### docs_src/microservices/security/Dockerfile-remote-sshd (15 additions)
FROM ubuntu:16.04

RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:THEPASSWORDYOUCREATED' | chpasswd
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config

# SSH login fix. Otherwise user is kicked off after login
RUN sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd

ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile

EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
### docs_src/microservices/security/Vagrantfile (76 additions)
# -*- mode: ruby -*-
# vi: set ft=ruby :

# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure("2") do |config|
# The most common configuration options are documented and commented below.
# For a complete reference, please see the online documentation at
# https://docs.vagrantup.com.

# Every Vagrant development environment requires a box. You can search for
# boxes at https://vagrantcloud.com/search.
config.vm.box = "ubuntu/bionic64"

# Disable automatic box update checking. If you disable this, then
# boxes will only be checked for updates when the user runs
# `vagrant box outdated`. This is not recommended.
# config.vm.box_check_update = false

# Create a forwarded port mapping which allows access to a specific port
# within the machine from a port on the host machine. In the example below,
# accessing "localhost:8080" will access port 80 on the guest machine.
# NOTE: This will enable public access to the opened port
# config.vm.network "forwarded_port", guest: 80, host: 8080

# Create a forwarded port mapping which allows access to a specific port
# within the machine from a port on the host machine and only allow access
# via 127.0.0.1 to disable public access
# config.vm.network "forwarded_port", guest: 80, host: 8080, host_ip: "127.0.0.1"

# Create a private network, which allows host-only access to the machine
# using a specific IP.
# config.vm.network "private_network", ip: "192.168.33.10"

# Create a public network, which generally matches a bridged network.
# Bridged networks make the machine appear as another physical device on
# your network.
config.vm.network "public_network"

# Share an additional folder to the guest VM. The first argument is
# the path on the host to the actual folder. The second argument is
# the path on the guest to mount the folder. And the optional third
# argument is a set of non-required options.
# config.vm.synced_folder "../data", "/vagrant_data"

# Provider-specific configuration so you can fine-tune various
# backing providers for Vagrant. These expose provider-specific options.
# Example for VirtualBox:
#
config.vm.provider "virtualbox" do |vb|
# # Display the VirtualBox GUI when booting the machine
vb.gui = true
#
# # Customize the amount of memory on the VM:
# vb.memory = "1024"
end
#
# View the documentation for the provider you are using for more
# information on available options.

# Enable provisioning with a shell script. Additional provisioners such as
# Ansible, Chef, Docker, Puppet and Salt are also available. Please see the
# documentation for more information about their specific syntax and use.
config.vm.provision "shell", inline: <<-SHELL
# Install the latest test version of Docker
curl -fsSL https://test.docker.com -o test-docker.sh
# helper script installs the beta package
sh test-docker.sh
# Add the default user to the docker group
usermod -aG docker vagrant
# Install docker-compose package
curl -L https://github.com/docker/compose/releases/download/1.25.5/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
docker-compose --version
SHELL
end
### docs_src/microservices/security/ds-proxy-entrypoint.sh (54 additions)
#!/usr/bin/dumb-init /bin/sh
# ----------------------------------------------------------------------------------
# Copyright (c) 2020 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# SPDX-License-Identifier: Apache-2.0
# ----------------------------------------------------------------------------------

set -e

# Use dumb-init as PID 1 in order to reap zombie processes and forward system signals to
# all processes in its session. This can alleviate the chance of leaking zombies,
# thus more graceful termination of all sub-processes if any.

# runtime directory is set per user:
XDG_RUNTIME_DIR=${XDG_RUNTIME_DIR:-/run/user/$(id -u)}
export XDG_RUNTIME_DIR

# debug output:
echo XDG_RUNTIME_DIR $XDG_RUNTIME_DIR

# use static ssh key
rm -rf /root/.ssh && mkdir /root/.ssh \
&& cp -R /root/ssh/* /root/.ssh/ \
&& chmod -R 600 /root/.ssh/* \
&& ls -al /root/.ssh/* \
&& cat /root/.ssh/id_rsa.pub

posthook="ssh -vv -o StrictHostKeyChecking=no -N $TUNNEL_HOST \
-L *:$LOCAL_PORT:$REMOTE_HOST:$REMOTE_PORT \
-R 48080:$LOCAL_HOST:48080 \
-R 48082:$LOCAL_HOST:48082 \
-R 5563:$LOCAL_HOST:5563 \
-p $SSH_PORT && while true; do sleep 60; done"

echo "Executing $@"
"$@"

# sleep for some time before running the posthook
sleep 3

echo "Executing hook=$posthook"
eval $posthook