NetLock: Fast, Centralized Lock Management Using Programmable Switches

0. Introduction

NetLock is a new centralized lock manager that co-designs servers and network switches to achieve high performance without sacrificing flexibility in policy support. NetLock exploits the capability of emerging programmable switches to directly process lock requests in the switch data plane.


Here we show some major building blocks of NetLock and how they are implemented at a high level.

  • Lock Request Handling
    Because switch memory is limited, NetLock processes requests only for popular locks in the switch, while the lock servers handle the rest of the locks. We use check_lock_exist_table in netlock.p4 to check whether the switch is responsible for an incoming packet (request).
    table check_lock_exist_table {
      reads {
        nlk_hdr.lock: exact;
      }
      actions {
        check_lock_exist_action;
      }
      size: NUM_LOCKS;
    }
    
    action check_lock_exist_action(index) {
      modify_field(meta.lock_exist, 1);
      modify_field(meta.lock_id, index);
    }
  • Switch Memory Layout
    We store the waiting requests in a large circular queue and keep extra registers for the heads, tails, and boundaries, so that each queue can have a flexible length and the switch memory can be efficiently utilized (see the Python sketch after this list):
    • head_register: stores the head pointers.
    • tail_register: stores the tail pointers.
    • left_bound_register: stores the left boundaries.
    • right_bound_register: stores the right boundaries.
  • Resubmission
    After a lock is released, the packet is resubmitted to check on the requests waiting in the queue. We store the information of the dequeued request in the packet header.
    action mark_to_resubmit_action() {
      modify_field(nlk_hdr.recirc_flag, 1);
      add_header(recirculate_hdr);
      modify_field(recirculate_hdr.cur_tail, meta.tail);
      modify_field(recirculate_hdr.cur_head, meta.head);
      modify_field(recirculate_hdr.dequeued_mode, current_node_meta.mode);
      modify_field(meta.do_resubmit, 1);
    }
    In the release_lock control block of netlock.p4, we check nlk_hdr.recirc_flag, recirculate_hdr.dequeued_mode, and current_node_meta.mode for each packet to decide whether we need to notify the clients and whether we need to resubmit the packet.
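
To make the queue layout concrete, below is a short Python sketch (illustrative only, not code from this repo) of the per-lock circular queues that the four register arrays implement, simplified to exclusive locks; the class and method names are ours, not NetLock's.

    # Python sketch (illustrative, not from this repo) of the per-lock
    # circular queues that head_register, tail_register,
    # left_bound_register, and right_bound_register implement.
    class LockQueues(object):
        def __init__(self, bounds):
            # bounds[i] = (left, right): the slice of one shared slot
            # array reserved for lock i; queue sizes can differ per lock.
            self.left = [l for l, r in bounds]
            self.right = [r for l, r in bounds]
            self.head = list(self.left)   # oldest waiting request
            self.tail = list(self.left)   # next free slot
            self.slots = [None] * max(self.right)

        def _next(self, i, ptr):
            # Wrap around inside [left, right): the "circular" part.
            return self.left[i] if ptr + 1 == self.right[i] else ptr + 1

        def acquire(self, i, req):
            # Returns True if the lock is granted immediately; False
            # means the request waits in the queue behind the holder.
            if self._next(i, self.tail[i]) == self.head[i]:
                raise RuntimeError("queue full: shed to a lock server")
            granted = self.head[i] == self.tail[i]
            self.slots[self.tail[i]] = req
            self.tail[i] = self._next(i, self.tail[i])
            return granted

        def release(self, i):
            # Dequeue the holder; the new head, if any, is the waiter
            # to notify, which is what the resubmitted packet checks.
            self.head[i] = self._next(i, self.head[i])
            if self.head[i] != self.tail[i]:
                return self.slots[self.head[i]]
            return None

For example, bounds = [(0, 96), (96, 128)] gives one hot lock a 96-slot queue and a cold one a 32-slot queue inside the same register array, which is how the flexible per-lock lengths keep switch memory efficiently utilized.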

More details of the design are available in our SIGCOMM'20 paper "NetLock: Fast, Centralized Lock Management Using Programmable Switches". [Paper]

Below we show how to configure the environment, how to run the system, and how to reproduce the results.

1. Content

  • dpdk_code/
    • client_code/: C code to run on clients.
    • lock_server_code/: C code to run on lock servers.
  • switch_code/
    • netlock/
      • p4src/: data-plane module (p4 code) for NetLock.
      • controller_init/: control-plane module for NetLock.
    • netchain/: NetChain implementation for comparison.
  • results/: We collect results from all the servers and store them here.
  • logs/: We collect logs from all the servers and store them here.
  • traces/: Some traces we use for the experiments.
  • console.py: A script to help run different sets of evaluations.
  • config.py: Some parameters to configure.
  • parser.py: A script to parse the raw results.
  • README.md: This file.

2. Environment requirements

  • Hardware
    • A Barefoot Tofino switch.
    • Servers with a DPDK-compatible NIC (we used an Intel XL710 for 40GbE QSFP+) and a multi-core CPU.
  • Software
    The current version of NetLock is tested on:
    • Tofino SDK (version 8.2.2 or later) on the switch.
    • DPDK (16.11.1) on the servers.
      You can either refer to the official guide or use the tools.sh script in dpdk_code/.
      cd dpdk_code
      ./tools.sh install_dpdk
    We provide easy-to-use scripts to run the experiments and to analyze the results. To use the scripts, you need:
    • Python 2.7 and Paramiko on your endhost.
      pip install paramiko

3. How to run

First, download the traces to the traces/ directory.

cd traces
wget [The link is in the Content section]
unzip tpcc_traces.zip -d tpcc_traces
unzip microbenchmark.zip -d microbenchmark

Then you can either manually execute programs on the switch and the servers, or use the scripts we provide (recommended).

  • To use scripts (Recommended)
    • Configure the parameters in the files based on your environment
      • config.py: provide the information of your servers (username, passwd, hostname, dir).
      • switch_code/netlock/controller_init/ports.json: list the actually enabled ports on your switch.
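        For reference, config.py is plain Python. Below is a minimal sketch of the kind of values it holds; the variable names are guesses for illustration (check the file itself), and every value is a placeholder.
        # Placeholder sketch only: variable names may differ from the
        # actual config.py, and all values must be edited for your setup.
        username = "alice"
        passwd = "your-password"
        client_list = ["client-1.example.com", "client-2.example.com"]
        server_list = ["lockserver-1.example.com"]
        dir = "/home/alice/NetLock/"   # repo directory on each machine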
    • Environment setup
      • Set up the switch
        • Set up the necessary environment variables to point to the appropriate locations.
        • Copy the files to the switch.
          • python console.py init_sync_switch
        • Compile NetLock.
          • python console.py compile_switch
            This will take a couple of minutes. You can check logs/p4_compile.log on the switch to see if it's finished.
      • Set up the servers
        • Set up the DPDK environment (install DPDK and set the correct environment variables).
        • Copy the files to the servers.
          • python console.py init_sync_server
        • Compile the clients and lock servers.
          • python console.py compile_host
            It will compile for both lock servers and clients.
        • Bind NIC to DPDK.
          • python console.py setup_dpdk
            It will bind NIC to DPDK for both lock servers and clients.
    • Run the programs
      • Run NetLock on the switch
        • python console.py run_netlock
          It will bring up both the data-plane module and the control-plane module. It may take up to 150 seconds (may vary between devices). You can check logs/run_ptf_test.log on the switch to see if it's finished (it will say "INIT Finished").
      • Run lock servers
        • python console.py run_server
          It will run the lock servers with parameters defined in the script console.py. For the parameters, you can check this readme.
      • Run clients
        • python console.py run_client
          It will run the clients with parameters defined in the script console.py. For the parameters, you can check this readme.
    • Get the results and logs
      The results are located at results/, and the log files are located at logs/.
      • To easily analyze the results, you can grab results from all the clients/servers to the local machine where you are running all the commands.
        • python console.py grab_result
    • Kill the processes
      • Kill the switch process
        • python console.py kill_switch
      • Kill the lock server and client processes
        • python console.py kill_host
      • Kill all the processes (switch, lock servers, clients)
        • python console.py kill_all
    • Other commands
      There are also some other commands you can use:
      • python console.py sync_switch
        copy the local "switch code" to the switch
      • python console.py sync_host
        copy the local "client code" and "lock server code" to the servers
      • python console.py sync_trace
        copy the traces to the servers
      • python console.py clean_result
        clean up the results/ directory
  • To manually run (Not recommended)
    • Configure the port information
      • switch_code/netlock/controller_init/ports.json: list the actually enabled ports on your switch.
    • Environment setup
      • Set up the switch
        • Set up the necessary environment variables to point to the appropriate locations.
        • Copy the files to the switch.
        • Compile NetLock.
          cd switch_code/netlock/p4src
          python tool.py compile netlock.p4
      • Set up the servers
        • Set up the DPDK environment (install DPDK and set the correct environment variables).
        • Copy the files to the servers.
        • Bind NIC to DPDK.
          cd dpdk_code
          ./tools.sh setup_dpdk
        • Compile the clients.
          cd dpdk_code/client_code
          make
        • Compile the lock servers.
          cd dpdk_code/lock_server_code
          make
    • Run the programs
      • Run NetLock on the switch.
        cd switch_code/netlock/p4src
        python tool.py start_switch netlock
        python tool.py ptf_test ../controller_init netlock (Execute in another window)
      • Run lock servers
      • Run clients
    • Results and logs: the results are located at results/, and the log files are located at logs/.

4. How to reproduce the results

  • Download the traces.
    cd traces
    wget [The link is in the Content section]
    unzip tpcc_traces.zip -d tpcc_traces
    unzip microbenchmark.zip -d microbenchmark
  • Configure the parameters in the files based on your environment
    • config.py: provide the information of your servers (username, passwd, hostname, dir).
    • switch_code/netlock/controller_init/ports.json: list the actually enabled ports on your switch.
  • Set up the switch
    • Set up the necessary environment variables to point to the appropriate locations.
    • Copy the files to the switch: python console.py init_sync_switch
    • Compile NetLock: python console.py compile_switch
      Again, it will take a couple of minutes. You can check logs/p4_compile.log on the switch to see if it's finished.
  • Set up the servers
    • Set up the DPDK environment
    • Copy the files to the servers: python console.py init_sync_server
    • Bind NIC to DPDK: python console.py setup_dpdk
    • Compile the clients and lock servers: python console.py compile_host
  • After both the switch and the servers are correctly configured, you can reproduce the results by running console.py. The following commands will execute the switch program, lock server programs, and client programs automatically and grab the results to your endhost.
    • Figure 8(a): python console.py micro_bm_s
    • Figure 8(b): python console.py micro_bm_x
    • Figure 8(c)(d): python console.py micro_bm_cont
    • Figure 9: python console.py micro_bm_only_server
    • Figure 10: python console.py run_tpcc
    • Figure 11: python console.py run_tpcc_ms
    • Figure 13: python console.py mem_man
    • Figure 14: python console.py mem_size
  • Interpret the results.
    • console.py will collect raw results from the servers and store them at results/.
    • parser.py parses the raw result files to compute the metrics (throughput, average latency, etc.).
      • It can process different metrics by running python parser.py [metric] [task_name]:
        • metric:
          • tput: lock throughput.
          • txn_tput: transaction throughput.
          • avg_latency/99_latency/99.9_latency: the average/99%/99.9% latency for locks.
          • txn_avg_latency/txn_99_latency/txn_99.9_latency: the average/99%/99.9% latency for transactions.
        • task_name:
          • micro_bm_s: microbenchmark - shared locks.
          • micro_bm_x: microbenchmark - exclusive locks w/o contention.
          • micro_bm_cont: microbenchmark - exclusive locks w/ contention.
          • tpcc: TPC-C workload with 10v2 setting.
          • tpcc_ms: TPC-C workload with 6v6 setting.
          • mem_man: memory management experiment.
          • mem_size: memory size experiment.
      • For example, after running python console.py run_tpcc, you can run:
        • python parser.py tput tpcc will give you the throughput results we used for Figure 10(a).
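    As a rough illustration of what these metrics mean (this is not the repo's parser.py), throughput and tail latency can be computed from per-request latency samples like this:
      # Illustrative only, NOT the repo's parser.py. Assumes one list of
      # per-request latencies (microseconds) collected over the run.
      def tput(latencies_usec, duration_sec):
          # tput: completed lock requests per second.
          return len(latencies_usec) / float(duration_sec)

      def avg_latency(latencies_usec):
          return sum(latencies_usec) / float(len(latencies_usec))

      def pct_latency(latencies_usec, pct):
          # pct=99 gives 99_latency; pct=99.9 gives 99.9_latency.
          lat = sorted(latencies_usec)
          idx = min(len(lat) - 1, int(len(lat) * pct / 100.0))
          return lat[idx]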

5. Contact

You can email us at zhuolong at cs dot jhu dot edu if you have any questions.
