9may (#1560)
* fix MySQL error handling
* fix tar command
* type hinting for proxy
micheloosterhof committed May 14, 2021
1 parent 9887332 commit ec39aad
Showing 24 changed files with 172 additions and 135 deletions.
4 changes: 2 additions & 2 deletions INSTALL.rst
@@ -176,7 +176,7 @@ Installing Backend Pool dependencies (OPTIONAL)
***********************************************

If you want to use the proxy functionality combined with the automatic
-backend pool, you need to install some dependencies, namely qemu, libvirt,
+backend pool, you need to install some dependencies, namely QEMU, libvirt,
and their Python interface. In Debian/Ubuntu::

$ sudo apt-get install qemu qemu-system-arm qemu-system-x86 libvirt-dev libvirt-daemon libvirt-daemon-system libvirt-clients nmap
@@ -185,7 +185,7 @@ Then install the Python API to run the backend pool::

(cowrie-env) $ pip install libvirt-python==6.4.0

-To allow Qemu to use disk images and snapshots, set it to run with the user and group of the user running the pool
+To allow QEMU to use disk images and snapshots, set it to run with the user and group of the user running the pool
(usually called 'cowrie' too)::

$ sudo vim /etc/libvirt/qemu.conf
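The relevant settings are as follows (a sketch assuming the pool user and group are both called ``cowrie``; adjust to your installation)::

    # /etc/libvirt/qemu.conf
    user = "cowrie"
    group = "cowrie"

Restart libvirtd afterwards (e.g. ``sudo systemctl restart libvirtd``) so new guests are started under that account.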
2 changes: 1 addition & 1 deletion Makefile
@@ -22,7 +22,7 @@ lint:

.PHONY: clean
clean:
-rm -rf _trial_temp build dist
+rm -rf _trial_temp build dist src/_trial_temp src/Cowrie.egg-info
make -C docs clean

.PHONY: pre-commit
2 changes: 1 addition & 1 deletion README.rst
@@ -43,7 +43,7 @@ Features

* Or proxy SSH and telnet to another system
* Run as a pure telnet and ssh proxy with monitoring
-* Or let Cowrie manage a pool of Qemu emualted servers to provide the systems to login to
+* Or let Cowrie manage a pool of QEMU emulated servers to provide the systems to login to

For both settings:

4 changes: 2 additions & 2 deletions docs/BACKEND_POOL.rst
@@ -135,8 +135,8 @@ A set of guest (VM) parameters can be defined as we explain below:
* **guest_image_path**: the base image upon which all VMs are created from

* **guest_hypervisor**: the hypervisor used; if you have an older machine or the emulated
-  architecture is different from the host one, then use software-based "qemu"; however,
-  if you are able to, use "kvm", it's **much** faster.
+  architecture is different from the host one, then use software-based "QEMU"; however,
+  if you are able to, use "KVM", it's **much** faster.

* **guest_memory**: memory assigned to the guest; choose a value considering the number
of guests you'll have running in total (``pool_max_vms``)
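Put together, these options live under ``[backend_pool]`` in the Cowrie configuration; the values below are only an illustrative sketch, not defaults::

    [backend_pool]
    guest_image_path = /home/cowrie/images/base.qcow2
    guest_hypervisor = kvm
    guest_memory = 256
    pool_max_vms = 5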
Binary file modified honeyfs/etc/shadow
30 changes: 15 additions & 15 deletions src/backend_pool/README.md
@@ -1,43 +1,43 @@
-qemu/libvirt Python examples to handle a guest
+QEMU/libvirt Python examples to handle a guest

# Developer Guide
We'll start by looking at the classes that compose the Backend Pool, from "outermost" to the inner, specific classes.

## pool_server.py
The main interface of the backend pool is exposed as a TCP server in _pool\_server.py_. The protocol is a very simple
wire protocol, always composed of an op-code, a status code (for responses), and any needed data thereafter.
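As a sketch of how such an op-code / status / data layout can be framed on the wire (the op-code and status values here are illustrative assumptions, not the server's actual constants):

```python
import struct

# hypothetical op-code and status values, for illustration only
OP_REQUEST_VM = 1
RES_OK = 0

def pack_message(op_code: int, status: int, data: bytes = b"") -> bytes:
    # one byte op-code, one byte status code, then any needed data
    return struct.pack("!BB", op_code, status) + data

def unpack_message(msg: bytes):
    op_code, status = struct.unpack("!BB", msg[:2])
    return op_code, status, msg[2:]

reply = pack_message(OP_REQUEST_VM, RES_OK, b"192.168.150.2")
print(unpack_message(reply))  # -> (1, 0, b'192.168.150.2')
```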

## pool_service.py
The server interface exposes a producer-consumer infinite loop that runs on _pool\_service.py_.

The **producer** is an infinite loop started by the server, and runs every 5 seconds. It creates VMs up to the
configured limit, checks which VMs become available (by testing if they accept SSH and/or Telnet connections), and
destroys VMs that are no longer needed.

**Consumer** methods are called by server request, and basically involve requesting and freeing VMs. All operations on
shared data in the producer-consumer are guarded by a lock, since there may be concurrent requests. The lock protects
-the _guests_ list, which contains references for each VM backend (in our case libvirt/qemu instances).
+the _guests_ list, which contains references for each VM backend (in our case libvirt/QEMU instances).

Since we won't be dealing with a very large number of VMs (never more than 100), we find that a single simple lock is
enough.
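A minimal sketch of that guarded producer step (names like `MAX_VM` are stand-ins for the configured limit, not Cowrie's actual identifiers):

```python
from threading import Lock

guests: list = []    # shared list of VM references, as in pool_service.py
guest_lock = Lock()  # guards all access to the shared data
MAX_VM = 2           # stand-in for the configured VM limit

def producer_step(create_guest) -> None:
    # periodic producer iteration: top up the pool under the lock
    with guest_lock:
        while len(guests) < MAX_VM:
            guests.append(create_guest())

producer_step(lambda: {"state": "created"})
print(len(guests))  # -> 2
```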

The Pool Service expects to find a "backend service" with a given interface:
* A method to initialise the backend interface and environment (start_backend), stop it and destroy the current
environment (stop_backend), and shut it down permanently for the current execution (shutdown_backend).
* A method to create a new guest (create_guest)
* A method to destroy a guest (destroy_guest)
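That expected interface can be sketched as an abstract base class (the method names come from the list above; the argument signatures are assumptions):

```python
from abc import ABC, abstractmethod


class BackendService(ABC):
    """What the pool service expects from a backend (e.g. libvirt/QEMU)."""

    @abstractmethod
    def start_backend(self) -> None:
        """Initialise the backend interface and environment."""

    @abstractmethod
    def stop_backend(self) -> None:
        """Stop the backend and destroy the current environment."""

    @abstractmethod
    def shutdown_backend(self) -> None:
        """Shut the backend down permanently for the current execution."""

    @abstractmethod
    def create_guest(self, *args, **kwargs):
        """Create a new guest."""

    @abstractmethod
    def destroy_guest(self, *args, **kwargs) -> None:
        """Destroy a guest."""
```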

-Currently the service supports a libvirt/qemu backend. However, by splitting the logic from generic guest handling /
+Currently the service supports a libvirt/QEMU backend. However, by splitting the logic from generic guest handling /
interaction with main Cowrie, from the logic to create guests in a low-level perspective, we hope to ease development
of different kinds of backends in the future.

## libvirt classes
The main class for libvirt is _backend\_service.py_, and implements the interface discussed above. Guest, network and
snapshot handlers deal with those specific components of libvirt's handling.

Initialising libvirt involves connecting to the running system daemon, creating a network filter to restrict guests'
Internet access, and creating a "cowrie" network in libvirt.

Guest creation is started by creating a snapshot from the base qcow2 image defined in the configs, and instantiating
a guest from the XML provided. The Guest Handler replaces templates ("{guest_name}") with user configs for the wanted
guest. If the XML provided does not contain templates, then no replacement takes place, naturally.
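The replacement step itself is plain string substitution over the XML text (a sketch; `{guest_name}` is the template key named above, the helper name is hypothetical):

```python
def fill_template(xml: str, **values: str) -> str:
    # swap "{key}" placeholders; XML without templates passes through unchanged
    for key, value in values.items():
        xml = xml.replace("{" + key + "}", value)
    return xml

xml = "<domain><name>{guest_name}</name></domain>"
print(fill_template(xml, guest_name="guest-3"))
# -> <domain><name>guest-3</name></domain>
```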
20 changes: 12 additions & 8 deletions src/backend_pool/libvirt/backend_service.py
@@ -1,5 +1,6 @@
# Copyright (c) 2019 Guilherme Borges <guilhermerosasborges@gmail.com>
# See the COPYRIGHT file for more information

import os
import random
import sys
@@ -12,6 +13,8 @@
import backend_pool.util
from cowrie.core.config import CowrieConfig

+LIBVIRT_URI = "qemu:///system"


class LibvirtError(Exception):
pass
@@ -23,31 +26,32 @@ def __init__(self):
import libvirt

# open connection to libvirt
-self.conn = libvirt.open("qemu:///system")
+self.conn = libvirt.open(LIBVIRT_URI)
if self.conn is None:
log.msg(
eventid="cowrie.backend_pool.qemu",
-format="Failed to open connection to qemu:///system",
+format="Failed to open connection to %(uri)s",
+uri=LIBVIRT_URI,
)
raise LibvirtError()

self.filter = None
self.network = None

# signals backend is ready to be operated
-self.ready = False
+self.ready: bool = False

# table to associate IPs and MACs
-seed = random.randint(0, sys.maxsize)
+seed: int = random.randint(0, sys.maxsize)
self.network_table = backend_pool.util.generate_network_table(seed)

log.msg(
-eventid="cowrie.backend_pool.qemu", format="Connection to Qemu established"
+eventid="cowrie.backend_pool.qemu", format="Connection to QEMU established"
)

def start_backend(self):
"""
-Initialises Qemu/libvirt environment needed to run guests. Namely starts networks and network filters.
+Initialises QEMU/libvirt environment needed to run guests. Namely starts networks and network filters.
"""
# create a network filter
self.filter = backend_pool.libvirt.network_handler.create_filter(self.conn)
@@ -62,7 +66,7 @@ def start_backend(self):

def stop_backend(self):
log.msg(
-eventid="cowrie.backend_pool.qemu", format="Doing Qemu clean shutdown..."
+eventid="cowrie.backend_pool.qemu", format="Doing QEMU clean shutdown..."
)

self.ready = False
@@ -74,7 +78,7 @@ def shutdown_backend(self):

log.msg(
eventid="cowrie.backend_pool.qemu",
-format="Connection to Qemu closed successfully",
+format="Connection to QEMU closed successfully",
)

def get_mac_ip(self, ip_tester):
22 changes: 13 additions & 9 deletions src/backend_pool/libvirt/guest_handler.py
@@ -19,18 +19,20 @@ def create_guest(connection, mac_address, guest_unique_id):
import libvirt

# get guest configurations
-configuration_file = os.path.join(
+configuration_file: str = os.path.join(
CowrieConfig.get(
"backend_pool", "config_files_path", fallback="share/pool_configs"
),
CowrieConfig.get("backend_pool", "guest_config", fallback="default_guest.xml"),
)

-version_tag = CowrieConfig.get("backend_pool", "guest_tag", fallback="guest")
-base_image = CowrieConfig.get("backend_pool", "guest_image_path")
-hypervisor = CowrieConfig.get("backend_pool", "guest_hypervisor", fallback="qemu")
-memory = CowrieConfig.getint("backend_pool", "guest_memory", fallback=128)
-qemu_machine = CowrieConfig.get(
+version_tag: str = CowrieConfig.get("backend_pool", "guest_tag", fallback="guest")
+base_image: str = CowrieConfig.get("backend_pool", "guest_image_path")
+hypervisor: str = CowrieConfig.get(
+    "backend_pool", "guest_hypervisor", fallback="qemu"
+)
+memory: int = CowrieConfig.getint("backend_pool", "guest_memory", fallback=128)
+qemu_machine: str = CowrieConfig.get(
"backend_pool", "guest_qemu_machine", fallback="pc-q35-3.1"
)

@@ -44,19 +46,21 @@ def create_guest(connection, mac_address, guest_unique_id):
os._exit(1)

# only in some cases, like wrt
-kernel_image = CowrieConfig.get("backend_pool", "guest_kernel_image", fallback="")
+kernel_image: str = CowrieConfig.get(
+    "backend_pool", "guest_kernel_image", fallback=""
+)

# get a directory to save snapshots, even if temporary
try:
# guest configuration, to be read by qemu, needs an absolute path
-snapshot_path = backend_pool.util.to_absolute_path(
+snapshot_path: str = backend_pool.util.to_absolute_path(
CowrieConfig.get("backend_pool", "snapshot_path")
)
except NoOptionError:
snapshot_path = os.getcwd()

# create a disk snapshot to be used by the guest
-disk_img = os.path.join(
+disk_img: str = os.path.join(
snapshot_path, f"snapshot-{version_tag}-{guest_unique_id}.qcow2"
)

8 changes: 4 additions & 4 deletions src/backend_pool/libvirt/network_handler.py
@@ -12,7 +12,7 @@ def create_filter(connection):
# lazy import to avoid exception if not using the backend_pool and libvirt not installed (#1185)
import libvirt

-filter_file = os.path.join(
+filter_file: str = os.path.join(
CowrieConfig.get(
"backend_pool", "config_files_path", fallback="share/pool_configs"
),
@@ -39,7 +39,7 @@ def create_network(connection, network_table):
import libvirt

# TODO support more interfaces and therefore more IP space to allow > 253 guests
-network_file = os.path.join(
+network_file: str = os.path.join(
CowrieConfig.get(
"backend_pool", "config_files_path", fallback="share/pool_configs"
),
@@ -50,8 +50,8 @@ def create_network(connection, network_table):

network_xml = backend_pool.util.read_file(network_file)

-template_host = "<host mac='{mac_address}' name='{name}' ip='{ip_address}'/>\n"
-hosts = ""
+template_host: str = "<host mac='{mac_address}' name='{name}' ip='{ip_address}'/>\n"
+hosts: str = ""

# generate a host entry for every possible guest in this network (253 entries)
it = iter(network_table)
6 changes: 3 additions & 3 deletions src/backend_pool/nat.py
@@ -51,9 +51,9 @@ def connectionLost(self, reason):


class ServerFactory(protocol.Factory):
-def __init__(self, dst_ip, dst_port):
-    self.dst_ip = dst_ip
-    self.dst_port = dst_port
+def __init__(self, dst_ip: str, dst_port: int) -> None:
+    self.dst_ip: str = dst_ip
+    self.dst_port: int = dst_port

def buildProtocol(self, addr):
return ServerProtocol(self.dst_ip, self.dst_port)
10 changes: 7 additions & 3 deletions src/backend_pool/pool_server.py
@@ -14,11 +14,15 @@
class PoolServer(Protocol):
def __init__(self, factory):
self.factory = factory
-self.local_pool = CowrieConfig.get("proxy", "pool", fallback="local") == "local"
-self.pool_only = CowrieConfig.getboolean(
+self.local_pool: bool = (
+    CowrieConfig.get("proxy", "pool", fallback="local") == "local"
+)
+self.pool_only: bool = CowrieConfig.getboolean(
    "backend_pool", "pool_only", fallback=False
)
-self.use_nat = CowrieConfig.getboolean("backend_pool", "use_nat", fallback=True)
+self.use_nat: bool = CowrieConfig.getboolean(
+    "backend_pool", "use_nat", fallback=True
+)

if self.use_nat:
self.nat_public_ip = CowrieConfig.get("backend_pool", "nat_public_ip")
34 changes: 19 additions & 15 deletions src/backend_pool/pool_service.py
@@ -22,7 +22,7 @@ class PoolService:
VM States:
created -> available -> using -> used -> unavailable -> destroyed
-created: initialised but not fully booted by Qemu
+created: initialised but not fully booted by QEMU
available: can be requested
using: a client is connected, can be served for other clients from same ip
used: client disconnected, but can still be served for its ip
@@ -38,31 +38,35 @@ def __init__(self, nat_service):
self.nat_service = nat_service

self.guests = []
-self.guest_id = 0
+self.guest_id: int = 0
self.guest_lock = Lock()

# time in seconds between each loop iteration
-self.loop_sleep_time = 5
+self.loop_sleep_time: int = 5
self.loop_next_call = None

# default configs; custom values will come from the client when they connect to the pool
-self.max_vm = 2
-self.vm_unused_timeout = 600
-self.share_guests = True
+self.max_vm: int = 2
+self.vm_unused_timeout: int = 600
+self.share_guests: bool = True

# file configs
-self.ssh_port = CowrieConfig.getint(
+self.ssh_port: int = CowrieConfig.getint(
"backend_pool", "guest_ssh_port", fallback=-1
)
-self.telnet_port = CowrieConfig.getint(
+self.telnet_port: int = CowrieConfig.getint(
"backend_pool", "guest_telnet_port", fallback=-1
)

-self.local_pool = CowrieConfig.get("proxy", "pool", fallback="local") == "local"
-self.pool_only = CowrieConfig.getboolean(
+self.local_pool: str = (
+    CowrieConfig.get("proxy", "pool", fallback="local") == "local"
+)
+self.pool_only: bool = CowrieConfig.getboolean(
    "backend_pool", "pool_only", fallback=False
)
-self.use_nat = CowrieConfig.getboolean("backend_pool", "use_nat", fallback=True)
+self.use_nat: bool = CowrieConfig.getboolean(
+    "backend_pool", "use_nat", fallback=True
+)

# detect invalid config
if not self.ssh_port > 0 and not self.telnet_port > 0:
@@ -72,7 +76,7 @@ def __init__(self, nat_service):
)
os._exit(1)

-self.any_vm_up = False  # TODO fix for no VM available
+self.any_vm_up: bool = False  # TODO fix for no VM available

def start_pool(self):
# cleanup older qemu objects
@@ -124,7 +128,7 @@ def stop_pool(self):
try:
self.qemu.stop_backend()
except libvirt.libvirtError:
-print("Not connected to Qemu")
+print("Not connected to QEMU")

def shutdown_pool(self):
# lazy import to avoid exception if not using the backend_pool and libvirt not installed (#1185)
@@ -135,7 +139,7 @@ def shutdown_pool(self):
try:
self.qemu.shutdown_backend()
except libvirt.libvirtError:
-print("Not connected to Qemu")
+print("Not connected to QEMU")

def restart_pool(self):
log.msg(
@@ -188,7 +192,7 @@ def has_connectivity(self, ip):
return has_ssh or has_telnet

# Producers
-def __producer_mark_timed_out(self, guest_timeout):
+def __producer_mark_timed_out(self, guest_timeout: int) -> None:
"""
Checks timed-out VMs and acquires lock to safely mark for deletion
"""
2 changes: 1 addition & 1 deletion src/backend_pool/ssh_exec.py
@@ -26,7 +26,7 @@ def __init__(self, command, done_deferred, callback, *args, **kwargs):
def channelOpen(self, data):
self.conn.sendRequest(self, "exec", common.NS(self.command), wantReply=True)

-def dataReceived(self, data):
+def dataReceived(self, data: bytes) -> None:
self.data += data

def extReceived(self, dataType, data):
