Add Java APIs And Update Java CLI. #30

Closed
Changes from all commits
Commits
33 commits
7e322df
Update README.md
KenBirman Apr 18, 2022
8c33891
add dairy_farm as standalone
songweijia Apr 18, 2022
ac8fb53
remove applications/demo
songweijia Apr 18, 2022
b8acb69
Merge branch 'v1.0rc' of github.com:Derecho-Project/cascade into v1.0rc
songweijia Apr 18, 2022
5666aaa
Update README.md
songweijia Apr 18, 2022
f9e9b14
Update derecho dependencies
songweijia Apr 18, 2022
cc5716f
+pingpong_latency test
songweijia Apr 18, 2022
3fa9214
Merge branch 'v1.0rc' of github.com:Derecho-Project/cascade into v1.0rc
songweijia Apr 18, 2022
c6287e6
bugfix: Blob::from_bytes_noalloc_const() should return a const contex…
songweijia Apr 19, 2022
e57df57
bugfix: the second half of Blob::from_bytes_noalloc_const()
songweijia Apr 19, 2022
32dc0ee
DDS: send rate control + thread pool size default to 1 + use Blob as …
songweijia Apr 19, 2022
239cd08
Update README.md
songweijia Apr 19, 2022
74c99ed
+dds command line support
songweijia Apr 19, 2022
5c264d7
minor fix
songweijia Apr 20, 2022
8da73d0
bugfix:use get_walltime() instead of get_time()
songweijia Apr 22, 2022
b0b6ab3
minor fix: dds client library should include configuration file
songweijia Apr 26, 2022
4cd6587
disable dds copy
songweijia Apr 26, 2022
4665d89
DISABLE_DDS_COPY default to 0. This is only used for debug purpose.
songweijia Apr 27, 2022
1e6c17e
remove an 'error' pragma
songweijia Apr 27, 2022
39fb350
add more debug information.
songweijia Apr 27, 2022
8a7337a
Do not use this commit:I disabled the DDS copy for test temporarily.
songweijia Apr 27, 2022
3344145
Add JNI interface for multi-get.
yw2399 Mar 28, 2022
737d051
Change the interface for multi_get internal to cascade_java.cpp.
yw2399 Mar 28, 2022
b7207b2
Complete updating put_and_forget and multi_get.
yw2399 Mar 29, 2022
173e835
implementing multi-get
yw2399 Mar 30, 2022
1e65ce2
quick fix
yw2399 Mar 30, 2022
fc62ffd
Finish all flavors of put_and_forget and multi_get.
yw2399 Apr 1, 2022
4b3c50a
Start adding object pool version of get.
yw2399 Apr 1, 2022
8765c08
Add more interfaces for the object pool.
yw2399 Apr 2, 2022
549b0dc
Add Java APIs And Update Java CLI.
yw2399 Apr 8, 2022
0d5ffed
Add get_by_time(obj) and listKey(obj) in Client.java.
yw2399 Apr 20, 2022
62a5c99
Add more JNI interface.
yw2399 Apr 22, 2022
f27adae
Add the internal function for getSize().
yw2399 Apr 29, 2022
15 changes: 0 additions & 15 deletions CMakeLists.txt
@@ -67,9 +67,6 @@ endif()
# json
find_package(nlohmann_json 3.2.0 REQUIRED)

#opencv
find_package(OpenCV QUIET)

set(CMAKE_REQUIRED_DEFINITIONS -DFUSE_USE_VERSION=30)
include(CheckIncludeFiles)
include(CheckIncludeFileCXX)
@@ -80,17 +77,6 @@ CHECK_INCLUDE_FILES("fuse3/fuse.h;fuse3/fuse_lowlevel.h" HAS_FUSE)
# boolinq
CHECK_INCLUDE_FILE_CXX("boolinq/boolinq.h" HAS_BOOLINQ)

#mxnet
set(CMAKE_REQUIRED_LIBRARIES mxnet)
CHECK_INCLUDE_FILE_CXX("mxnet-cpp/MxNetCpp.h" HAS_MXNET_CPP)
unset(CMAKE_REQUIRED_LIBRARIES)

#cuda
find_package(CUDA QUIET)
if (CUDA_FOUND)
set (ENABLE_GPU 1)
endif()

# enable evaluation
set (ENABLE_EVALUATION 1)
set (DUMP_TIMESTAMP_WORKAROUND 1)
@@ -106,7 +92,6 @@ add_subdirectory(src/core)
add_subdirectory(src/utils)
add_subdirectory(src/service)
add_subdirectory(src/applications/tests)
add_subdirectory(src/applications/demos)

# make libcascade.so
add_library(cascade SHARED
5 changes: 2 additions & 3 deletions README.md
@@ -32,7 +32,7 @@ Cascade is a C++17 cloud application framework powered by optimized RDMA data pa
- [boolinq](https://github.com/k06a/boolinq) or newer (Optional for LINQ API)
- Python 3.5 or newer and [pybind11](https://github.com/pybind/pybind11) (Optional for Python API)
- OpenJDK 11.06 or newer. On Ubuntu, use `apt install openjdk-11-jdk` to install it. (Optional for Java API)
- Derecho v2.2.2. Please follow this [document](http://github.com/Derecho-Project/derecho) to install Derecho. Note: this Cascade version relies on Derecho commit 0ac2fb535f372cb964c823baf1391b1e2f2241dd on branch [`for_cascade_get_api`](https://github.com/Derecho-Project/derecho/tree/for_cascade_get_api).
- Derecho v2.2.2. Please follow this [document](http://github.com/Derecho-Project/derecho) to install Derecho. Note: this Cascade version relies on Derecho commit 3f24e06ed5ad572eb82206e8c1024935d03e903e on the master branch.

## Build Cascade
1) Download Cascade Source Code
@@ -64,5 +64,4 @@ This will install the following cascade components:
There are two ways to use Cascade in an application. You can use Cascade as a standalone service with pre-defined K/V types and a configurable layout. Alternatively, you can use the Cascade storage templates (defined in Cascade) as building blocks and assemble the application with the Derecho group framework. Please refer to [Cascade service's README](https://github.com/Derecho-Project/cascade/tree/master/src/service) for using Cascade as a service and the [cli_example README](https://github.com/Derecho-Project/cascade/tree/master/src/applications/tests/cascade_as_subgroup_classes) for using Cascade components to build your own binary with customized key and value types.

# New Features to Come
1) Metadata service
2) Resource management
1) Resource management
2 changes: 0 additions & 2 deletions config.h.in
@@ -1,7 +1,5 @@
#cmakedefine HAS_FUSE
#cmakedefine HAS_BOOLINQ
#cmakedefine HAS_MXNET_CPP
#cmakedefine ENABLE_GPU
#cmakedefine ENABLE_EVALUATION
#cmakedefine DUMP_TIMESTAMP_WORKAROUND
#define PATH_SEPARATOR '/'
2 changes: 1 addition & 1 deletion include/cascade/object.hpp
@@ -70,7 +70,7 @@ class Blob : public mutils::ByteRepresentable {
mutils::DeserializationManager* ctx,
const uint8_t* const v);

static mutils::context_ptr<Blob> from_bytes_noalloc_const(
static mutils::context_ptr<const Blob> from_bytes_noalloc_const(
mutils::DeserializationManager* ctx,
const uint8_t* const v);
};
6 changes: 0 additions & 6 deletions src/applications/demos/CMakeLists.txt

This file was deleted.

3 changes: 3 additions & 0 deletions src/applications/standalone/README.md
@@ -2,6 +2,9 @@ The standalone applications are templates designed for Cascade application devel

- [**kvs_client**](kvs_client) shows how to access the data in Cascade K/V stores.
- [**console_printer_udl**](console_printer_udl) shows how to write application logic on Cascade's data path.
- [**notification**](notification) shows how to use the "server side notification" feature of Cascade.
- [**dairy_farm**](dairy_farm) is an IoT demo application showing how to use Cascade with ML/AI capabilities.
- [**dds**](dds) is a data distribution service built on Cascade.

To use these templates, you can just copy the folder into your own project folder and build it as follows:
```
src/applications/standalone/dairy_farm/CMakeLists.txt
@@ -1,13 +1,50 @@
cmake_minimum_required(VERSION 3.12.4)
set(CMAKE_DISABLE_SOURCE_CHANGES ON)
set(CMAKE_DISABLE_IN_SOURCE_BUILD ON)
project(cascade CXX)
project(cascade_dairy_farm CXX)

# Version
set(cascade_dairy_farm_VERSION 1.0rc0)
set(cascade_build_VERSION 1.0rc0)

# C++ STANDARD
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_POSITION_INDEPENDENT_CODE ON)
if (${USE_VERBS_API})
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DUSE_VERBS_API")
endif()
set(CMAKE_CXX_FLAGS_DEBUG "${CMAKE_CXX_FLAGS_DEBUG} -O0 -Wall --ggdb -gdwarf-3 -ftemplate-backtrace-limit=0 -DEVALUATION")
set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} -Wall -DEVALUATION -fcompare-debug-second")
set(CMAKE_CXX_FLAGS_RELWITHDEBINFO "${CMAKE_CXX_FLAGS_RELWITHDEBINFO} -Wall -ggdb -gdwarf-3")

find_package(nlohmann_json 3.2.0 REQUIRED)
find_package(OpenSSL REQUIRED)
find_package(spdlog 1.3.1 REQUIRED)
find_package(derecho CONFIG REQUIRED)
find_package(cascade CONFIG REQUIRED)

include(CheckIncludeFileCXX)
find_package(CUDA QUIET)
if (CUDA_FOUND)
set (ENABLE_GPU 1)
endif()
# set(CMAKE_REQUIRED_LIBRARIES mxnet)
# CHECK_INCLUDE_FILE_CXX("mxnet-cpp/MxNetCpp.h" HAS_MXNET_CPP)
# unset(CMAKE_REQUIRED_LIBRARIES)
find_package(OpenCV QUIET)
find_library(TENSORFLOW_LIB_FOUND tensorflow)
find_library(ANN_LIB_FOUND ANN)
find_package(Torch QUIET)

set (ENABLE_EVALUATION 1)
if (ENABLE_EVALUATION)
find_package(rpclib 2.3.0 REQUIRED)
endif()

# configure header
# CONFIGURE_FILE(${CMAKE_CURRENT_SOURCE_DIR}/config.h.in ${CMAKE_CURRENT_BINARY_DIR}/include/config.h)
CONFIGURE_FILE(${CMAKE_CURRENT_SOURCE_DIR}/config.h.in ${CMAKE_CURRENT_BINARY_DIR}/config.h)

if (OPENCV_CORE_FOUND)
# dairy farm client
add_executable(dairy_farm_client client.cpp)
@@ -18,7 +55,7 @@ if (OPENCV_CORE_FOUND)
$<BUILD_INTERFACE:${CMAKE_BINARY_DIR}>
$<BUILD_INTERFACE:${OpenCV_INCLUDE_DIRS}>
)
target_link_libraries(dairy_farm_client cascade ${OpenCV_LIBS})
target_link_libraries(dairy_farm_client derecho::cascade ${OpenCV_LIBS})
# dairy farm perf test
add_executable(dairy_farm_perf perf.cpp)
target_include_directories(dairy_farm_perf PRIVATE
@@ -29,9 +66,17 @@ if (OPENCV_CORE_FOUND)
$<BUILD_INTERFACE:${OpenCV_INCLUDE_DIRS}>
)
if (CMAKE_CXX_COMPILER_VERSION VERSION_GREATER 9)
target_link_libraries(dairy_farm_perf cascade rpc ${OpenCV_LIBS})
if (ENABLE_EVALUATION)
target_link_libraries(dairy_farm_perf derecho::cascade rpc ${OpenCV_LIBS})
else()
target_link_libraries(dairy_farm_perf derecho::cascade ${OpenCV_LIBS})
endif()
else ()
target_link_libraries(dairy_farm_perf cascade rpc stdc++fs ${OpenCV_LIBS})
if (ENABLE_EVALUATION)
target_link_libraries(dairy_farm_perf derecho::cascade rpc stdc++fs ${OpenCV_LIBS})
else()
target_link_libraries(dairy_farm_perf derecho::cascade stdc++fs ${OpenCV_LIBS})
endif()
endif()
endif()

@@ -57,7 +102,7 @@ if (TENSORFLOW_LIB_FOUND AND OPENCV_CORE_FOUND AND ANN_LIB_FOUND AND TORCH_FOUND
$<BUILD_INTERFACE:${OpenCV_INCLUDE_DIRS}>
$<BUILD_INTERFACE:${TENSORFLOW_LIB_INCLUDE_DIRS}>
)
target_link_libraries(filter_udl demo_common cascade ${OpenCV_LIBS} ${TENSORFLOW_LIB_FOUND})
target_link_libraries(filter_udl demo_common derecho::cascade ${OpenCV_LIBS} ${TENSORFLOW_LIB_FOUND})
if (EXISTS "${CMAKE_CURRENT_SOURCE_DIR}/filter-model.tar.gz")
add_custom_command(TARGET filter_udl POST_BUILD
COMMAND ${CMAKE_COMMAND} -E copy ${CMAKE_CURRENT_SOURCE_DIR}/filter-model.tar.gz
@@ -79,7 +124,7 @@ if (TENSORFLOW_LIB_FOUND AND OPENCV_CORE_FOUND AND ANN_LIB_FOUND AND TORCH_FOUND
$<BUILD_INTERFACE:${TENSORFLOW_LIB_INCLUDE_DIRS}>
$<BUILD_INTERFACE:${OpenCV_INCLUDE_DIRS}>
)
target_link_libraries(infer_udl demo_common cascade ${TENSORFLOW_LIB_FOUND} ${TORCH_LIBRARIES} ${OpenCV_LIBS} ANN)
target_link_libraries(infer_udl demo_common derecho::cascade ${TENSORFLOW_LIB_FOUND} ${TORCH_LIBRARIES} ${OpenCV_LIBS} ANN)
if (EXISTS "${CMAKE_CURRENT_SOURCE_DIR}/bcs-model.tar.gz" AND
EXISTS "${CMAKE_CURRENT_SOURCE_DIR}/cow-id-model.tar.gz")
add_custom_command(TARGET infer_udl POST_BUILD
@@ -205,4 +250,4 @@ target_include_directories(storage_udl PRIVATE
$<BUILD_INTERFACE:${CMAKE_BINARY_DIR}>
$<BUILD_INTERFACE:${OpenCV_INCLUDE_DIRS}>
)
target_link_libraries(storage_udl cascade)
target_link_libraries(storage_udl derecho::cascade)
3 changes: 3 additions & 0 deletions src/applications/standalone/dairy_farm/config.h.in
@@ -0,0 +1,3 @@
#cmakedefine HAS_MXNET_CPP
#cmakedefine ENABLE_GPU
#cmakedefine ENABLE_EVALUATION
@@ -9,6 +9,7 @@
#include <unistd.h>
#include <fstream>
#include <string>
#include "config.h"

/**
* Tensor Flow GPU configuration
@@ -7,6 +7,7 @@
#include <tensorflow/c/eager/c_api.h>
#include "demo_common.hpp"
#include "time_probes.hpp"
#include "config.h"

namespace derecho{
namespace cascade{
@@ -10,6 +10,7 @@
#include <tensorflow/c/eager/c_api.h>
#include "demo_common.hpp"
#include "time_probes.hpp"
#include "config.h"

namespace derecho{
namespace cascade{
@@ -2,6 +2,7 @@
#include <iostream>
#include <vector>
#include "time_probes.hpp"
#include "config.h"

namespace derecho{
namespace cascade{
2 changes: 2 additions & 0 deletions src/applications/standalone/dds/CMakeLists.txt
@@ -19,6 +19,8 @@ find_package(Java 1.11 QUIET)

set(UDL_UUID 94f8509c-a6e6-11ec-a9f5-0242ac110002)

set(DISABLE_DDS_COPY 1)

CONFIGURE_FILE(${CMAKE_CURRENT_SOURCE_DIR}/config.h.in ${CMAKE_CURRENT_BINARY_DIR}/include/cascade_dds/config.h)

add_library(cascade_dds SHARED src/dds.cpp)
16 changes: 11 additions & 5 deletions src/applications/standalone/dds/README.md
@@ -1,17 +1,21 @@
## Cascade Data Distribution Service (DDS)
Cascade DDS is a pub/sub system implemented as a Cascade Application. It composes of three components:
Cascade DDS is a pub/sub system implemented as a Cascade application. It consists of the following components:
- a client side library for developing DDS applications: `libcascade_dds.so`,
- a server side UDL running in Cascade servers' address space: `libcascade_dds_udl.so`,
- a DDS configuration file defining DDS's metadata service, data plane, and control plane, which will be discussed later: `dds.cfg`, and
- DDS service's object pool (we plan to use 'placement group' for 'object pool' in the future) layout: `dfgs.json`

Before diving into the details, let's talk about several concepts in Cascade DDS. In the DDS, a topic maps to a key in some object pool in the Cascade Service. A DDS client works as an external client. To publish to a topic, the DDS client simply puts a key/value pair with the key corresponding to the topic and the blob value representing the message. On receiving the messages, the DDS UDL is triggered and checks whether there are any subscribers on the corresponding topic. If so, it relays the message to all the subscribers using Cascade server-side notification. To subscribe to a topic, the DDS client does two things: one is to register the application-provided lambda, a.k.a. the topic message handler, in a local registry maintained by the client-side library; the other is to put a key/value pair as a control message. The key of the control message is the topic key with a control plane suffix (defined in `dds.cfg`). On a control message, the DDS UDL will pick it from the data plane and modify the subscriber's status in the server's local memory. Unsubscribing is done similarly.
Use of the Cascade DDS requires some understanding of its internals, so before diving into the client API (which is quite simple and hides most of the internals), we first review a few of the main concepts behind the Cascade DDS and its deployment architecture: as a developer, you'll be responsible for setting this up. The reason for covering the internals first is that a developer who understands how the system works will obtain better performance and be able to leverage more of the system's functionality than one who ignores these aspects. In the DDS, a topic maps to a key in some _object pool_ in the Cascade Service (roughly analogous to a topic partition in Kafka).

DDS topics are stored as metadata in a separate Cascade object pool defined in `dds.cfg`. The DDS metadata is simply a presumably persistent object pool. The keys are the topic names and the values are the object pools working as the data plane for those topics. For example, a key pair "topic_a":"/dds/tiny_text" indicates that the topic "topic_a" is backed by the object pool "/dds/tiny_text". To publish to "topic_a", the DDS client will append the topic name after the object pool to derive the key "/dds/tiny_text/topic_a" for its internal `put` operation.
An application using the DDS is said to be a "client", or sometimes an "external client" -- the distinction won't be important for this documentation; it refers to whether the client belongs to the pool of servers (the top-level Cascade group) or is merely connected to it over a datacenter or Internet link. To publish to a topic, a DDS client simply puts a key/value pair whose key corresponds to the topic and whose blob value carries the message, using an API we call `put`.

As these messages are received, the Cascade DDS library (technically, what we refer to as a user-defined library (UDL), a special form of dynamically linked library loaded into the Cascade servers) will check for subscribers on the associated topic. If there are any, it relays the message to all of them using Cascade server-side notification. To subscribe to a topic, a DDS client does two things: one is to create and register a lambda function (a message handler) for the topic in a local registry maintained by the client-side library; the other is to publish (via `put`) a key/value pair that functions as a control message. The key of the control message is the topic key with a control plane suffix (defined in `dds.cfg`) appended. On receiving a control message, the DDS UDL picks it out of the data plane and updates the subscriber's status in the server's local memory. Unsubscribing is done similarly.

DDS topics are stored as metadata in a separate Cascade object pool defined in `dds.cfg`. The DDS metadata is simply an object pool, which is normally configured as persistent (retained across complete shutdown/restart sequences). The keys in this pool are the topic names, and each value identifies the object pool that serves as the data plane for that topic. For example, a key/value pair "topic_a":"/dds/tiny_text" indicates that the topic "topic_a" is backed by the object pool "/dds/tiny_text". To publish to "topic_a", the DDS client appends the topic name to the object pool path to derive the key "/dds/tiny_text/topic_a" for its internal `put` operation.

The control plane suffix in `dds.cfg` indicates how to separate control messages from data. In the above example, let's assume the control plane suffix is '__control__', which is the default value. To subscribe to "topic_a", a DDS client puts a K/V pair into the same object pool as the data plane, "/dds/tiny_text". Instead of using the data plane topic key, it appends the control plane suffix to derive the key for the control plane, which is "/dds/tiny_text/topic_a/__control__". The value is a serialized request to subscribe to topic "topic_a".
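
For illustration, the key derivation described above boils down to simple string concatenation. The sketch below is not part of the DDS code; the helper names and the hard-coded "/dds/tiny_text" pool path are assumptions used only to make the example concrete:

```cpp
#include <iostream>
#include <string>

// Illustrative sketch only -- these helpers are not part of the Cascade DDS API.
static std::string data_key(const std::string& pool, const std::string& topic) {
    return pool + "/" + topic;                    // e.g. "/dds/tiny_text/topic_a"
}

static std::string control_key(const std::string& pool, const std::string& topic,
                               const std::string& suffix = "__control__") {
    return data_key(pool, topic) + "/" + suffix;  // e.g. "/dds/tiny_text/topic_a/__control__"
}

int main() {
    std::cout << data_key("/dds/tiny_text", "topic_a") << "\n"
              << control_key("/dds/tiny_text", "topic_a") << "\n";
    return 0;
}
```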

Despite the above details of the internal design, the client-side library covers them with [clean and simple APIs](include/cascade_dds/dds.hpp) for application developers.
As noted at the outset, the client library hides all of these details behind a set of [clean and simple APIs](include/cascade_dds/dds.hpp) for application developers.
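
To give a rough feel for what application code looks like, here is a minimal publish/subscribe sketch. The `DDSClient` type and its `subscribe`/`publish` methods are stand-ins invented for this example, not the actual interface; consult [`dds.hpp`](include/cascade_dds/dds.hpp) for the real API and signatures:

```cpp
#include <cstdint>
#include <functional>
#include <iostream>
#include <string>
#include <utility>
#include <vector>

// Hypothetical stand-in for the real DDS client (see include/cascade_dds/dds.hpp
// for the actual API). This local stub only mimics the publish/subscribe flow.
struct DDSClient {
    std::function<void(const std::vector<uint8_t>&)> handler;

    void subscribe(const std::string& topic,
                   std::function<void(const std::vector<uint8_t>&)> h) {
        // The real client also puts a control message such as
        // "/dds/tiny_text/<topic>/__control__" to register the subscriber.
        handler = std::move(h);
        std::cout << "subscribed to " << topic << "\n";
    }

    void publish(const std::string& topic, const std::vector<uint8_t>& message) {
        // The real client performs a put on the data-plane key, e.g.
        // "/dds/tiny_text/<topic>"; here we just invoke the local handler.
        (void)topic;
        if (handler) handler(message);
    }
};

int main() {
    DDSClient client;
    client.subscribe("topic_a", [](const std::vector<uint8_t>& msg) {
        std::cout << "received " << msg.size() << " bytes\n";
    });
    std::string payload = "hello, dds";
    client.publish("topic_a", std::vector<uint8_t>(payload.begin(), payload.end()));
    return 0;
}
```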

### A Step-by-step Tutorial
Cascade DDS depends on Cascade and Derecho. Please make sure Derecho and Cascade are installed correctly before following this section.
@@ -118,4 +122,6 @@ Please refer to [`dds.hpp`](include/cascade_dds/dds.hpp) for the DDS API. The [c
There are a couple of limitations we plan to address in the future.
1) The DDS UDL can run in multiple threads, so the messages in a topic might be sent to subscribers from different threads; as a result, a subscriber might receive messages out of order. The current workaround is to limit the size of the off-critical-data-path thread pool to 1 by setting `num_workers_for_multicast_ocdp` to 1 in `derecho.cfg`. In the future, we plan to add a "thread stickiness" feature to control the affinity between messages and worker threads. Using this feature, we can dedicate a single worker thread to each topic to guarantee message order without disabling multithreading.
2) Sometimes, the application wants to leverage the computational power of the Cascade server nodes to process topic messages. For example, the servers can be used to preprocess high-resolution pictures and discard irrelevant pixels. We plan to introduce a server-side API for the DDS so that applications can inject such logic into the servers, similar to how UDLs are handled.
3) Currently, the DDS service supports only a C++ API. We plan to provide Python and Java APIs soon.
3) We plan to add a stateful DDS by buffering recent topic messages and checkpointing them.
4) Currently, the DDS service supports only a C++ API. We plan to provide Python and Java APIs soon.
5) Performance optimization: a zero-copy API for big messages.
4 changes: 2 additions & 2 deletions src/applications/standalone/dds/cfg/n0/derecho.cfg
@@ -176,8 +176,8 @@ json_layout_file = 'layout.json'
# TODO: add document for how to setup a cascade service.
[CASCADE]
cpu_cores = 0-3
num_workers_for_multicast_ocdp = 2
num_workers_for_p2p_ocdp = 2
num_workers_for_multicast_ocdp = 1
num_workers_for_p2p_ocdp = 1
worker_cpu_affinity = '
{
"multicast_ocdp" : {
4 changes: 2 additions & 2 deletions src/applications/standalone/dds/cfg/n1/derecho.cfg
@@ -177,8 +177,8 @@ json_layout_file = 'layout.json'
# TODO: add document for how to setup a cascade service.
[CASCADE]
cpu_cores = 0-3
num_workers_for_multicast_ocdp = 2
num_workers_for_p2p_ocdp = 2
num_workers_for_multicast_ocdp = 1
num_workers_for_p2p_ocdp = 1
worker_cpu_affinity = '
{
"multicast_ocdp" : {
4 changes: 2 additions & 2 deletions src/applications/standalone/dds/cfg/n2/derecho.cfg
@@ -177,8 +177,8 @@ json_layout_file = 'layout.json'
# TODO: add document for how to setup a cascade service.
[CASCADE]
cpu_cores = 0-3
num_workers_for_multicast_ocdp = 2
num_workers_for_p2p_ocdp = 2
num_workers_for_multicast_ocdp = 1
num_workers_for_p2p_ocdp = 1
worker_cpu_affinity = '
{
"multicast_ocdp" : {
1 change: 1 addition & 0 deletions src/applications/standalone/dds/config.h.in
@@ -1 +1,2 @@
#define UDL_UUID "@UDL_UUID@"
#define DISABLE_DDS_COPY @DISABLE_DDS_COPY@
@@ -7,6 +7,7 @@
#include <unordered_map>
#include <shared_mutex>
#include <nlohmann/json.hpp>
#include <cascade_dds/config.h>

using namespace nlohmann;
