add links to c++
AlirezaMorsali committed Nov 25, 2022
1 parent 0495ef2 commit c8b78ce
Showing 4 changed files with 126 additions and 18 deletions.
10 changes: 5 additions & 5 deletions README.md
@@ -14,17 +14,17 @@
## Overview
<!-- start elevator-pitch -->
Inverted AI provides an API for controlling non-playable characters (NPCs) in autonomous driving simulations,
available as either a REST API or a Python library built on top of it. Using the API requires an access key -
available as either a [REST API][rest-link] or a [Python SDK](https://docs.inverted.ai/en/latest/pythonapi/index.html) (and a [C++ SDK](https://docs.inverted.ai/en/latest/cppapi/index.html)) built on top of it. Using the API requires an access key -
[contact us](mailto:sales@inverted.ai) to get yours. This page describes how to get started quickly. For more in-depth understanding,
see the [API usage guide](https://docs.inverted.ai/en/latest/userguide.html), and detailed documentation for the [REST API][rest-link] and the
[Python library](https://docs.inverted.ai/en/latest/pythonapi/index.html).
see the [API usage guide](https://docs.inverted.ai/en/latest/userguide.html), and detailed documentation for the [REST API][rest-link],
the [Python SDK](https://docs.inverted.ai/en/latest/pythonapi/index.html), and the [C++ SDK](https://docs.inverted.ai/en/latest/cppapi/index.html).
To understand the underlying technology and why it's necessary for autonomous driving simulations, visit the
[Inverted AI website](https://www.inverted.ai/).
<!-- end elevator-pitch -->

![](docs/images/top_camera.gif)

# Get Started
# Getting started
<!-- start quickstart -->
## Installation
For installing the Python package from [PyPI][pypi-link]:
@@ -33,7 +33,7 @@ For installing the Python package from [PyPI][pypi-link]:
pip install --upgrade invertedai
```

The Python client library is [open source](https://github.com/inverted-ai/invertedai),
The Python client SDK is [open source](https://github.com/inverted-ai/invertedai),
so you can also download it and build locally.


90 changes: 90 additions & 0 deletions docs/source/cppapi/index.md
@@ -14,4 +14,94 @@ cpp-common
```


## Quick Start with C++ SDK
### Docker
Clone the [Inverted AI repository](https://github.com/inverted-ai/invertedai.git), or just download the [CPP Library](https://download-directory.github.io/?url=https://github.com/inverted-ai/invertedai/tree/master/invertedai_cpp).
Make sure Docker is installed and running, which can be verified by running `docker info` in the terminal.
Navigate to the library directory (`cd invertedai_cpp`) and run the following commands in the terminal:
``` sh
docker compose build
docker compose run --rm dev
```

### Running a demo
- First, compile with: `bazel build //examples:client_example`
- Then, run: `./bazel-bin/examples/client_example $location $agent_num $timestep $api_key`,
e.g. `./bazel-bin/examples/client_example canada:vancouver:ubc_roundabout 5 20 xxxxxx`.


### Simple example

``` c++
#include <cstdlib>
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

#include "../invertedai/api.h"

using tcp = net::ip::tcp; // from <boost/asio/ip/tcp.hpp>
using json = nlohmann::json; // from <json.hpp>

// usage: ./client $location $agent_num $timestep $api_key
int main(int argc, char **argv) {
const std::string location(argv[1]);
const int agent_num = std::stoi(argv[2]);
const int timestep = std::stoi(argv[3]);
const std::string api_key(argv[4]);

net::io_context ioc;
ssl::context ctx(ssl::context::tlsv12_client);
// configure connection setting
invertedai::Session session(ioc, ctx);
session.set_api_key(api_key);
session.connect();

// construct request for getting information about the location
invertedai::LocationInfoRequest loc_info_req(
"{\"location\": \"" + location +
"\", "
"\"include_map_source\": true}");
// get response of location information
invertedai::LocationInfoResponse loc_info_res =
invertedai::location_info(loc_info_req, &session);

// construct request for initializing the simulation (placing NPCs on the map)
invertedai::InitializeRequest init_req(
invertedai::read_file("examples/initialize_body.json"));
// set the location
init_req.set_location(location);
// set the number of agents
init_req.set_num_agents_to_spawn(agent_num);
// get the response of simulation initialization
invertedai::InitializeResponse init_res =
invertedai::initialize(init_req, &session);

// construct request for stepping the simulation (driving the NPCs)
invertedai::DriveRequest drive_req(
invertedai::read_file("examples/drive_body.json"));
drive_req.update(init_res);

for (int t = 0; t < timestep; t++) {
// step the simulation by driving the agents
invertedai::DriveResponse drive_res =
invertedai::drive(drive_req, &session);
}
return EXIT_SUCCESS;
}
```
### Manual installation
Install the following dependencies:
- GCC-10 and g++-10
- [Bazel](https://bazel.build/install)
- OpenCV (`sudo apt-get install -y libopencv-dev`)
- Boost (`sudo apt install openssl libssl-dev libboost-all-dev`)
- [JSON for Modern C++](https://json.nlohmann.me/) (under `invertedai/externals/json.hpp`)
6 changes: 2 additions & 4 deletions docs/source/userguide.md
@@ -6,7 +6,7 @@ Inverted AI API provides a service that controls non-playable characters (NPCs)
functions are INITIALIZE, called at the beginning of the simulation, and DRIVE, called at each time step. Typically, the
user runs their simulator locally, controlling the actions of the ego vehicle, and querying the API to obtain the
behavior of NPCs. This page describes the high level concepts governing the interaction with the API. Please refer to
specific pages for {ref}`Python SDK`, [REST API][rest-link], {ref}`Getting started`, and [Examples][examples-link].
specific pages for {ref}`Python SDK`, {ref}`C++ SDK`, [REST API][rest-link], {ref}`Getting started`, and [Examples][examples-link].

We follow the continuous space, discrete time approach used in most driving simulators. In the current version, the API
only supports a time step of 100 ms, corresponding to 10 frames per second, and expects to run in a synchronous
@@ -21,9 +21,7 @@ provisioned to accommodate a large number of agents, where the maximum allowed v
## Programming language support
The core interface is a [REST API][rest-link], that can be called from any programming language. This is a low-level,
bare-bones access mode that offers maximum flexibility to deploy in any environment.
For convenience, we also provide a {ref}`Python SDK`, freely available on PyPI with minimal dependencies, which
provides an abstraction layer on top of the REST API. In the future we intend to release a similar library in C++ and
potentially other languages.
For convenience, we also provide a {ref}`Python SDK`, freely available on PyPI with minimal dependencies, which provides an abstraction layer on top of the REST API. Recently, we also released the {ref}`C++ SDK`, and in the future we intend to release similar libraries for other languages.

## Maps and geofencing
The API operates on a pre-defined collection of maps and currently there is no programmatic way to add additional
38 changes: 29 additions & 9 deletions invertedai_cpp/examples/client_example.cc
@@ -19,18 +19,25 @@ int main(int argc, char **argv) {
const std::string location(argv[1]);
const int agent_num = std::stoi(argv[2]);
const int timestep = std::stoi(argv[3]);
const std::string api_key(argv[4]);

net::io_context ioc;
ssl::context ctx(ssl::context::tlsv12_client);
// configure connection setting
invertedai::Session session(ioc, ctx);
session.set_api_key(api_key);
session.connect();
session.set_api_key(argv[4]);

invertedai::LocationInfoRequest loc_info_req("{\"location\": \"" + location +
"\", "
"\"include_map_source\": true}");
invertedai::LocationInfoResponse loc_info_res = invertedai::location_info(loc_info_req, &session);
// construct request for getting information about the location
invertedai::LocationInfoRequest loc_info_req(
"{\"location\": \"" + location +
"\", "
"\"include_map_source\": true}");
// get response of location information
invertedai::LocationInfoResponse loc_info_res =
invertedai::location_info(loc_info_req, &session);

// use opencv to decode and save the bird's eye view image of the simulation
auto image = cv::imdecode(loc_info_res.birdview_image(), cv::IMREAD_COLOR);
cv::cvtColor(image, image, cv::COLOR_BGR2RGB);
int frame_width = image.rows;
@@ -39,16 +46,29 @@ int main(int argc, char **argv) {
cv::VideoWriter::fourcc('M', 'J', 'P', 'G'), 10,
cv::Size(frame_width, frame_height));

invertedai::InitializeRequest init_req(invertedai::read_file("examples/initialize_body.json"));
// construct request for initializing the simulation (placing NPCs on the
// map)
invertedai::InitializeRequest init_req(
invertedai::read_file("examples/initialize_body.json"));
// set the location
init_req.set_location(location);
// set the number of agents
init_req.set_num_agents_to_spawn(agent_num);
invertedai::InitializeResponse init_res = invertedai::initialize(init_req, &session);
// get the response of simulation initialization
invertedai::InitializeResponse init_res =
invertedai::initialize(init_req, &session);

invertedai::DriveRequest drive_req(invertedai::read_file("examples/drive_body.json"));
// construct request for stepping the simulation (driving the NPCs)
invertedai::DriveRequest drive_req(
invertedai::read_file("examples/drive_body.json"));
drive_req.update(init_res);

for (int t = 0; t < timestep; t++) {
invertedai::DriveResponse drive_res = invertedai::drive(drive_req, &session);
// step the simulation by driving the agents
invertedai::DriveResponse drive_res =
invertedai::drive(drive_req, &session);
// use opencv to decode and save the bird's eye view image of the
// simulation
auto image = cv::imdecode(drive_res.birdview(), cv::IMREAD_COLOR);
cv::cvtColor(image, image, cv::COLOR_BGR2RGB);
video.write(image);
