
KUKSA SOME/IP Provider


SOME/IP integration in Docker containers

Running the default vsomeip examples in containers is described in detail here

SOME/IP to KUKSA Provider

Overview

SOME/IP is an automotive communication protocol that supports remote procedure calls, event notifications, and service discovery. SOME/IP messages are sent as TCP/UDP unicast/multicast packets, but it is also possible to use local (Unix domain socket) endpoints.

The SOME/IP provider is a COVESA vsomeip application that subscribes to specific "Wiper" SOME/IP events, parses the payload, and feeds the values to the KUKSA Databroker. It also provides example "Wiper" SOME/IP request handling for setting wiper parameters.


Setup Development Environment

Prerequisites

  1. Install cmake and build requirements
    sudo apt-get install -y cmake g++ build-essential g++-aarch64-linux-gnu binutils-aarch64-linux-gnu jq
  2. Install and configure conan (if needed)
    sudo apt-get install -y python3 python3-pip
    pip3 install "conan==1.55"
    NOTE: Sometimes the latest conan recipe revisions are broken, but a local build still succeeds using a cached older revision. If the build fails on CI, clear the local conan cache to reproduce the error (the latest recipes may also require a newer conan version):
    rm -rf ~/.conan/data
    pip3 install "conan==1.*"
    The last known working revisions are pinned in the [requires] section of conanfile.txt.
  3. Install VS Code. To set up a proper conan environment in VS Code, launch it using:
    ./vscode-conan.sh
  4. Install and start a recent KUKSA Databroker:
    docker run --rm -it -p 55555:55555/tcp --name databroker ghcr.io/eclipse/kuksa.val/databroker:master
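
To verify that the Databroker from step 4 is reachable before starting the provider, you can check that its gRPC port accepts connections, e.g. with netcat if it is available (a quick sanity check based on the port mapping above):

# sanity check: the Databroker gRPC port should accept connections
nc -z localhost 55555 && echo "databroker reachable"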

Building KUKSA SOME/IP Provider

There are scripts for building release and debug versions of the SOME/IP provider, supporting the x86_64, aarch64 and rpi architectures:

./build-release.sh <arch>

NOTE: Use rpi when building on a Raspberry Pi. The scripts generate can-provider_<arch>_<debug|release>.tar archives.
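
For example, a release build for the host architecture (the target/... layout shown in the comment mirrors the debug example later in this document and is an assumption for release builds):

# illustrative invocation; artifacts should land under target/x86_64/release/install
./build-release.sh x86_64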

There is also a script for exporting OCI container images (or importing them locally for testing):

./docker-build.sh [OPTIONS] TARGETS

Standalone build helper for someip-feeder container.

OPTIONS:
  -l, --local      local docker import (does not export tar)
  -v, --verbose    enable plain docker output and disable cache
      --help       show help

TARGETS:
  x86_64|amd64, aarch64|arm64    Target arch to build for; if not set, defaults to multiarch

NOTE: Local import (-l) can't handle multi-arch images!
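
For example, to build for a single architecture and import the resulting image into the local docker daemon:

# single-arch build, imported locally instead of exported as a tar
./docker-build.sh -l x86_64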

Configuration

vsomeip requires a combination of a JSON config file and environment variables.

vsomeip specific Configuration

The vsomeip library uses a combination of environment variables and JSON config files that must be set correctly or the binaries won't work. You can test vsomeip services in "local" mode (running on a single Linux host, using Unix sockets for communication) or in "normal" mode, where two different hosts are required (e.g. the wiper service running on the first host and the KUKSA SOME/IP Provider running on the second host).

NOTE: The multicast (service discovery) config of both services must match, multicast packets must be enabled between the hosts, and unicast messages between the hosts must be possible (both hosts in the same network).
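
One way to compare the relevant settings is to dump the service-discovery section of each host's config with jq (the file names here are illustrative):

# both outputs must be identical for the hosts to discover each other
jq '."service-discovery"' wiper-service.json
jq '."service-discovery"' someip-feeder.json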

Environment variables for vsomeip

  • VSOMEIP_CONFIGURATION: path to the vsomeip config JSON file.
  • VSOMEIP_APPLICATION_NAME: the vsomeip application name; must be consistent with .applications[].name in the JSON config file.

NOTE: Those variables are already set (and validated) in provided ./bin/setup-*.sh scripts.
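
As a minimal sketch of what those scripts do (the config file name here is illustrative):

# point vsomeip at the config file and use the application name defined in it
export VSOMEIP_CONFIGURATION="$PWD/someip-feeder.json"
export VSOMEIP_APPLICATION_NAME="$(jq -r '.applications[0].name' "$VSOMEIP_CONFIGURATION")"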

Wiper configuration files

NOTE: With vsomeip it is not possible to have multiple routing applications running on the same host, so in the proxy setup the wiper service is configured as the routing app and the proxy clients are configured to route through the wiper service. If two hosts (or VMs) are available, the proxy configs are not needed; one host should then run the service config and the other the client config.

Config file modifications

In order to use non-proxy mode on two network hosts, you have to modify the .unicast address in the vsomeip config file. Unfortunately vsomeip does not support hostnames there, so there are helper scripts for setting up the environment and replacing the hostnames with jq.
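
A minimal sketch of such a rewrite, assuming a config file named wiper-service.json (the file name and the IP detection are illustrative):

# replace the .unicast address with this host's first IP address
HOST_IP="$(hostname -I | cut -d' ' -f1)"
jq --arg ip "$HOST_IP" '.unicast = $ip' wiper-service.json > tmp.json && mv tmp.json wiper-service.json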

Running the SOME/IP example and KUKSA SOME/IP Provider

Setup scripts in ./bin are meant to be run from the install directory; e.g. after executing ./build-debug.sh this is target/x86_64/debug/install/bin.

If running from another location, make sure your LD_LIBRARY_PATH includes the vsomeip3 libraries.
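
For example, assuming the libraries sit next to the binaries in the debug install tree mentioned above:

# make the bundled vsomeip3 libraries visible to the dynamic linker
export LD_LIBRARY_PATH="$PWD/target/x86_64/debug/install/lib:$LD_LIBRARY_PATH"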

Local mode (single host)

In this mode only Unix sockets are used; the wiper service acts as the SOME/IP routing application and the KUKSA SOME/IP Provider is a proxy.

  • Launch wiper service from install directory:
. ./setup-wiper-service.sh
./wiper_service --cycle 300
  • Launch KUKSA SOME/IP Provider in proxy mode:
. ./setup-someip2val-proxy.sh
./someip_feeder

UDP mode (2 hosts)

In this mode you need another host in your network to run the service.

  • Launch wiper service from install directory on Host2:
. ./setup-wiper-service.sh
./wiper_service --cycle 300
  • Launch KUKSA SOME/IP provider in default mode:
. ./setup-someip2val.sh
./someip_feeder

Make sure you have jq installed, as the setup scripts use it to rewrite the config files with the updated unicast address.

Extending KUKSA SOME/IP Provider

The provided wiper example needs to be adjusted to handle the events of another SOME/IP service.

  • SomeIPClient class provides generic event subscription and passes the SOME/IP payload to a custom callback:
// called for each received SOME/IP event; the raw payload bytes are
// passed on for application-specific deserialization
typedef std::function<
    int(vsomeip::service_t service, vsomeip::instance_t instance, vsomeip::method_t event,
        const uint8_t *payload, size_t size)
> message_callback_t;
  • SomeIPConfig: the vsomeip service/instance/event_group/event values also have to be changed (e.g. via environment variables, or in code)
  • SomeipFeederAdapter::on_someip_message(): an example SOME/IP payload callback, deserializing the payload and feeding the values to the Databroker

Running KUKSA SOME/IP Provider with Authorization

Authorization support and example setup is described here.

Pre-commit setup

This repository is set up to use pre-commit hooks. Install pre-commit with pip install pre-commit. After cloning the project, run pre-commit install to install the hooks into your git checkout; pre-commit will then run on every commit. Running pre-commit install should always be the first thing you do after cloning a project that uses pre-commit.
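
The one-time setup, in short (the last command is optional and runs all hooks against the whole tree once):

# install the tool and register the git hooks
pip install pre-commit
pre-commit install
# optionally check all files, not just the next commit
pre-commit run --all-files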