HyperShield is the extended version of VirtShield, published at CCNC'26: https://github.com/faizalam87/VirtShield
HyperShield is a research framework for evaluating security mechanisms in both virtual machines (VMs) and containers.
It enables controlled experiments with security models deployed in either kernel-space or user-space, while supporting end-to-end benchmarking of both network performance and microarchitectural behavior.
Two deployment options are supported:
- VM-based setup using QEMU
- Container-based setup (this repository relies on Docker; other runtimes may require adjustments)
Both setups follow the same topology:
- Client: traffic generator
- Model: security model host (where the security model is deployed and tested)
- Server: traffic receiver
Two execution modes are available:
- direct: client sends traffic directly to server (used to establish performance baseline)
- model: client routes traffic through the security model host (used to evaluate security model impact)
The VM-based setup creates, connects, and configures three VMs: a client (which contains the benchmarks), a server (which receives and processes the packets), and a model VM that mimics a cloud infrastructure host, with the security module implemented in either user space or kernel space.
- QEMU installed on the host
- Ubuntu ISO (`ubuntu-22.04.5-live-server-amd64.iso`) in the working directory. Download: Ubuntu 22.04.5 ISO
- If using another ISO, update its name in `vm-client.sh`, `vm-model.sh`, and `vm-server.sh`
- Scripts required: `setup.sh`, `vm-client.sh`, `vm-model.sh`, `vm-server.sh`, `vm_setup_file_transfer.sh`, `setup_client.sh`, `setup_model.sh`
```
./setup.sh
```

This creates the Linux bridge (`br0`), sets up NAT and routing, prepares QEMU networking, and creates empty disk images (`client.qcow2`, `model.qcow2`, `server.qcow2`).
```
./vm-client.sh
./vm-model.sh
./vm-server.sh
```

Each script launches its VM with the proper MAC address, disk, and ISO. During the initial boot, follow the on-screen installation procedure to set up Ubuntu inside the VM. It is recommended to go with the default options during installation unless specific customization is needed.
| VM | MAC Address | IP Address | Role |
|---|---|---|---|
| Client | 52:54:00:12:34:01 | 192.168.10.4 | Traffic Generator |
| Model | 52:54:00:12:34:02 | 192.168.10.3 | Security Model Host |
| Server | 52:54:00:12:34:03 | 192.168.10.5 | Receiver |
To set static IP addresses inside each VM, configure netplan as shown below. At this stage you may need to type the configuration directly into each VM console, since the VMs do not yet have IP addresses for remote connection. The interface name is commonly `ens3` under QEMU; confirm it by running `ip link` inside the VM and adjust the configuration if your VM presents a different device name:
Client (192.168.10.4):

```
sudo nano /etc/netplan/01-netcfg.yaml
```

```
network:
  version: 2
  ethernets:
    ens3:
      dhcp4: no
      addresses: [192.168.10.4/24]
      routes:
        - to: default
          via: 192.168.10.1
      nameservers:
        addresses: [8.8.8.8]
```

```
sudo netplan apply
```

Model (192.168.10.3):

```
sudo nano /etc/netplan/01-netcfg.yaml
```

```
network:
  version: 2
  ethernets:
    ens3:
      dhcp4: no
      addresses: [192.168.10.3/24]
      routes:
        - to: default
          via: 192.168.10.1
      nameservers:
        addresses: [8.8.8.8]
```

```
sudo netplan apply
```

Server (192.168.10.5):

```
sudo nano /etc/netplan/01-netcfg.yaml
```

```
network:
  version: 2
  ethernets:
    ens3:
      dhcp4: no
      addresses: [192.168.10.5/24]
      routes:
        - to: default
          via: 192.168.10.1
      nameservers:
        addresses: [8.8.8.8]
```

```
sudo netplan apply
```

These static configurations ensure each VM is reachable at the fixed IPs shown in the table above.
```
./vm_setup_file_transfer.sh
```

To enable SSH into the VMs, install OpenSSH Server inside each VM (see the Ubuntu documentation for details):

```
sudo apt install openssh-server
```
- Client: `./setup_client.sh <direct|model>`
  - Use `direct` if traffic should go straight from client to server (performance baseline).
  - Use `model` if traffic should go through the firewall (model VM), to evaluate security model impact.
  - Run `sudo apt-get install iperf3 netperf nuttcp redis-tools sysbench mysql-client fio traceroute` to install all the benchmarks.
- Model: `./setup_model.sh`
- Server: `./setup_server.sh`
- Source code in `~/VirtShield/`: `kernel_space.c`, `user_space_model.c`, `packet_queue.c/h`, `Makefile`, `performance/`
- Run `./setup/model_file_transfer.sh`. It transfers to the model VM all files required to build the user-space or kernel-space security module.
- Build:

  ```
  cd ~/VirtShield
  make
  ```

  NOTE: If `make` fails because the model VM lacks a GCC compiler, install it with `sudo apt install -y gcc` and then run `make CC=gcc`.

  This builds `kernel_space.ko` and `user_space_model` (`packet_sniffer`).
- Run:

  ```
  sudo insmod kernel_space.ko   # kernel-space
  ./user_space_model            # user-space (packet_sniffer)
  ```

- Unload kernel module: `sudo rmmod kernel_space`
- Network Performance (Client VM):

  ```
  ./run_test.sh
  ```

- Microarchitectural Profiling (Model VM):

  ```
  cd ~/VirtShield/performance
  sudo ./run_perf.sh <logdir> <mode>   # mode: 0 = user-space, 1 = kernel-space
  ```
Manually Running the Benchmarks: the Benchmarks folder contains a shell script for each benchmark so it can be run manually. Copy the scripts to the client VM and launch each benchmark individually.
NOTE: `results` and `result_summary` are the traces generated from our individual runs for the data presented in the paper. Results will vary significantly depending on the security model used.
| Component | Command/File |
|---|---|
| Kernel logs | `dmesg`, `sudo journalctl -k` |
| User-space logs | redirect stdout from binary |
| Perf outputs | `perf_kernel.log`, `perf_user_space.log` |
| Perf raw data | `perf_kernel.data` + `perf report` |
If VMs can ping 192.168.10.1 but not the internet, first check whether firewalld is blocking the forwarding rules:

```
sudo firewall-cmd --state
```

If it reports `running`, continue with the fix below. If not, your issue is elsewhere:

```
sudo firewall-cmd --permanent --zone=trusted --add-interface=br0
sudo firewall-cmd --reload
sudo firewall-cmd --zone=trusted --permanent --add-masquerade
sudo firewall-cmd --reload
sudo sysctl -w net.ipv4.ip_forward=1
```

This guide walks you through the container-based setup for the HyperShield platform. It mirrors the VM-based setup but uses lightweight containers to emulate the client, server, and firewall environments.
- Container runtime (this repo relies on Docker)
- Working directory: `VirtShield/`
- All setup scripts and Dockerfiles present

```
cd setup
./setup_containers.sh <direct|model>
```

Container Network Configuration
| Container | IP (client-net) | IP (server-net) | Role |
|---|---|---|---|
| Client | 192.168.1.3 | 192.168.2.4* | Traffic Generator |
| Model | 192.168.1.2 | 192.168.2.2 | Security Model Host |
| Server | - | 192.168.2.3 | Receiver |
**What happens in `setup_containers.sh`**
- Creates two networks (`client-net`, `server-net`)
- Builds images for the client, model, and server
- Launches containers with fixed IPs
- In model mode, configures routing via IP forwarding and `iptables`

**File copy into containers**
Once the containers are up and the required files are copied in, the user is expected to:
- Navigate to `/root/performance/workspace/` inside the model container
- Modify the required C source files to implement their own security logic, for example `kernel_space.c`, `user_space_model.c`, `packet_queue.c`, etc.
After modifying the files, inside the model container:
```
cd /root/performance/workspace
make
```

This produces `kernel_space.ko` and `user_space_model`.
Run:

```
sudo insmod kernel_space.ko   # kernel-space
./user_space_model &          # user-space
```

Unload kernel module:

```
sudo rmmod kernel_space
```

- Client container (network metrics):

  ```
  ./run_test.sh
  ```
- Model container (microarchitectural profiling):

  ```
  cd /root/performance
  sudo ./run_perf.sh <logdir>
  ```
| Component | Command/Path |
|---|---|
| Kernel logs | `dmesg`, `journalctl -k` |
| User logs | redirect stdout from binary |
| Performance logs | `/root/performance/*.log` |
VirtShield provides a unified framework to develop, deploy, and evaluate security models in both VM and container environments.
The workflow is consistent:
- Modify the provided C source files (`kernel_space.c`, `user_space_model.c`, `packet_queue.c`)
- Rebuild (`make`)
- Deploy (`insmod` / run the binary)
- Benchmark performance (network + microarchitectural)
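For orientation, the kernel-space deployment path above (`insmod` / `rmmod`) typically corresponds to a netfilter hook module. The skeleton below is a hedged sketch, not the repository's `kernel_space.c`: it compiles only against Linux kernel headers via an `obj-m` Kbuild Makefile (not as a user-space program), and the hook point, policy, and names are illustrative:

```c
/* Sketch of a netfilter-based kernel module; build with a Kbuild
 * Makefile containing "obj-m += hypershield.o". Illustrative only. */
#include <linux/module.h>
#include <linux/netfilter.h>
#include <linux/netfilter_ipv4.h>
#include <linux/ip.h>
#include <net/net_namespace.h>

/* Inspect every IPv4 packet before routing; NF_DROP would reject it,
 * NF_ACCEPT passes it on unchanged. */
static unsigned int hs_hook(void *priv, struct sk_buff *skb,
                            const struct nf_hook_state *state)
{
    struct iphdr *iph = ip_hdr(skb);

    if (iph && iph->protocol == IPPROTO_ICMP)
        pr_info("hypershield: ICMP packet seen\n");
    return NF_ACCEPT;
}

static struct nf_hook_ops hs_ops = {
    .hook     = hs_hook,
    .pf       = NFPROTO_IPV4,
    .hooknum  = NF_INET_PRE_ROUTING,
    .priority = NF_IP_PRI_FIRST,
};

static int __init hs_init(void)
{
    return nf_register_net_hook(&init_net, &hs_ops);
}

static void __exit hs_exit(void)
{
    nf_unregister_net_hook(&init_net, &hs_ops);
}

module_init(hs_init);
module_exit(hs_exit);
MODULE_LICENSE("GPL");
```

Loading (`sudo insmod`) and unloading (`sudo rmmod`) then follow the workflow described above; `pr_info` output appears in `dmesg` / `journalctl -k`.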