Commit: update doc
orgua committed Feb 20, 2024
1 parent 5b7f7f9 commit 841ea6a
Showing 7 changed files with 197 additions and 187 deletions.
3 changes: 1 addition & 2 deletions .ruff.toml
@@ -2,7 +2,7 @@ line-length = 100
target-version = "py310"

[lint]
-select = [
+select = [ # TODO: replace by "ALL"
"A", # flake8-builtins
"ANN", # flake8-annotations
"ARG", # flake8-unused-arguments
@@ -47,7 +47,6 @@ ignore = [
"N802", "N803", "N806", "N815", "N816", # naming (si-units should stay)
"PLR2004", # magic values
"TID252", # relative imports from parent
"RUF100", # noqa from other linters, TODO: replace flake8, when security is fully ported (S4**)
"PLR0904", # complexity
"PLR0911", "PLR0912", # complexity
"PLR0913", "PLR0915", # complexity
159 changes: 159 additions & 0 deletions docs/dev/contributing.md
@@ -0,0 +1,159 @@
# Contributing

This section helps developers get started with contributing to `shepherd`.

(codestyle)=
## Codestyle

Please stick to the C and Python codestyle guidelines provided with the source code.

All included **Python code** uses the feature set of **version 3.10** and should be formatted & linted with [ruff](https://docs.astral.sh/ruff/) for clean and secure code.

**C code** uses the feature set of **C99** and shall be formatted based on the *LLVM* style, with some alterations that make it easier to read, similar to the Python code.
We provide the corresponding `clang-format` config as `.clang-format` in the repository's root directory.

Many IDEs/editors can automatically format code using the corresponding formatter and codestyle.

To ensure quality standards, we integrated the [pre-commit](https://pre-commit.com/) workflow into the repo. It will

- handle formatting for Python and C code (automatically)
- lint Python, C, YAML, TOML, reStructuredText (rst) and Ansible playbooks
- warn about security-related issues and deprecated features in Python and C code

Pull Requests to the main branch will be tested online with **GitHub Actions**.

To run the checks on your own machine, make sure you have pre-commit (and cppcheck) installed:

```shell
pip3 install pre-commit
sudo apt install cppcheck
```

Now you can run the pre-commit checks:

```shell
pre-commit run --all-files
# or in short
pre-commit run -a
```

(dev_setup)=
## Development setup

While some parts of the `shepherd` software stack can be developed independently of the hardware, in most cases you will need to develop and test code on the actual target hardware.

We found the following setup convenient: Have the code on your laptop/workstation and use your editor/IDE to develop code.
Have a BeagleBone (potentially with `shepherd` hardware) connected to the same network as your workstation.
Prepare the BeagleBone by running the `bootstrap.yml` ansible playbook to enable passwordless login.

(dev_opt1)=
### Option 1

You can now use the `deploy/dev_rebuild_sw.yml` playbook, which pushes the changed files to the target and builds and installs them there without needing a reboot.
Running the playbook takes a few minutes, as all software components (kernel module, firmware and Python package) are rebuilt.

```shell
cd shepherd
ansible-playbook deploy/dev_rebuild_sw.yml
# if transfer of host-files is desired, answer yes on the prompt
```

### Option 2

Some IDEs/editors can automatically push changes to the target via ssh/scp. The directory `/opt/shepherd` is used as the project's root directory on the BeagleBone.
In addition, the playbook `deploy/dev_rebuild_sw.yml` builds and installs all local sources on the target (conveniently without a restart), or you can update only parts of it manually. Have a look at `deploy/roles/sheep/tasks/build_shp.yml` to see the commands needed.

### Option 3

You can mirror your working copy of the `shepherd` code to the BeagleBone using a network file system.
We provide a playbook (`deploy/setup-dev-nfs.yml`) to conveniently configure an `NFS` share from your local machine to the BeagleBone.
After mounting the share on the BeagleBone, you can compile and install the corresponding software component remotely over ssh on the BeagleBone while editing the code locally on your machine.
Alternatively, use the playbook described in [](#dev_opt1).


## Build the docs

**Note**: Docs are automatically built with GitHub Actions after changes on the main branch.

Make sure you have the Python requirements installed:

```shell
pip install --upgrade pip pipenv wheel setuptools

pipenv install
```

Activate the `pipenv` environment:

```shell
pipenv shell
```

Change into the docs directory and build the HTML documentation:

```shell
cd docs
make html
```

The build output is found in `docs/_build/html`. You can view it by starting a simple HTTP server:

```shell
cd _build/html
python -m http.server
```

Now navigate your browser to `localhost:8000` to view the documentation.
Alternatively, it often suffices to simply open `index.html` in a browser of your choice.

## Tests

There is an initial testing framework that covers a large portion of the Python code.
You should always make sure the tests are passing before committing your code.

To run the full range of Python tests, have a copy of the source code on a BeagleBone.
Build and install from source (see [](#dev_setup) for more).
Change into the `software/python-package` directory on the BeagleBone and run the following commands to:

- install dependencies of tests
- run testbench

```shell
cd /opt/shepherd/software/python-package
sudo pip3 install ./[tests]
sudo pytest-3
```

Some tests (~40) are hardware-independent, while most (~100) require a BeagleBone to work. The testbench detects the BeagleBone automatically. A small subset of tests (~8) writes to & configures the EEPROM on the shepherd cape and must be enabled manually (`sudo pytest --eeprom-write`).

The following commands allow you to:

- do a restartable run that stops at each error (perfect for debugging on the slow BeagleBone),
- run a single test, or
- run a whole test file:

```shell
sudo pytest-3 --stepwise

sudo pytest-3 tests/test_sheep_cli.py::test_cli_emulate_aux_voltage

sudo pytest-3 tests/test_sheep_cli.py
```

It is also recommended to **run the testbench of the herd-tool prior to releasing a new version**. See the [project page](https://github.com/orgua/shepherd/tree/main/software/shepherd-herd#testbench) for more info.


## Releasing

Before committing to the repository, please run the [pre-commit](https://pre-commit.com/) workflow described in [](#codestyle).

Once you have a clean, stable and tested version, you should decide if your release is a patch, minor or major update (see [Semantic Versioning](https://semver.org/)).
Use `bump2version` to update the version number across the repository:

```shell
pipenv shell
pre-commit run --all-files
bump2version patch --allow-dirty
```

Finally, open a pull request to get your changes merged into the main branch and to trigger the test pipeline.
151 changes: 0 additions & 151 deletions docs/dev/contributing.rst

This file was deleted.

34 changes: 34 additions & 0 deletions docs/dev/data_handling.md
@@ -0,0 +1,34 @@
# Data handling

## Data Acquisition

Data is sampled/replayed through the ADC (`TI ADS8691`) and DAC (`TI DAC8562T`). Both devices are interfaced over a custom, SPI-compatible protocol. For a detailed description of the protocol and timing requirements, refer to the corresponding datasheets. The protocol is bit-banged using the low-latency GPIOs connected to PRU0. The transfer routines themselves are [implemented in assembly](../../software/firmware/lib/src/spi_transfer_pru.asm).

## PRU to host

Data is sampled and bidirectionally transferred between the PRUs and user space in buffers, i.e. blocks of `SAMPLES_PER_BUFFER` samples. These buffers correspond to sections of a continuous area of memory in DDR RAM to which both the PRUs and the user space application have access. This memory is provisioned through `remoteproc`, a Linux framework for managing resources in an AMP (asymmetric multicore processing) system. The PRU firmware contains a so-called resource table that allows it to specify required resources. We request a carve-out memory area, which is a continuous, non-cached area in physical memory. On booting the PRU, the `remoteproc` driver reads the request, allocates the memory and writes the starting address of the allocated memory area to the resource table, which is readable by the PRU during run-time. The PRU exposes this memory location through shared RAM, which is accessible through the sysfs interface provided by the kernel module. Knowing the physical address and size, the user space application can map that memory, after which it has direct read/write access. The total memory area is divided into `N_BUFFERS` distinct buffers.
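As a rough illustration of the last step, the user space side only needs the physical address and size of the carve-out. A minimal sketch of reading them, assuming hypothetical sysfs attribute paths (the real paths and names are defined by the kernel module):

```python
# Sketch only: the sysfs paths below are placeholders, not the module's real interface.
from pathlib import Path

SYSFS_SHP = Path("/sys/class/shepherd")  # hypothetical location

mem_address = int((SYSFS_SHP / "memory/address").read_text().strip(), 0)
mem_size = int((SYSFS_SHP / "memory/size").read_text().strip(), 0)

N_BUFFERS = 64
buffer_size = mem_size // N_BUFFERS  # the area is split into N_BUFFERS buffers
```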

The shared RAM approach is the fastest option on the BeagleBone, but it still has some caveats:

- pro: writing from the PRU side to DDR RAM can be done within one cycle (the operation does not finish that quickly, but it does not block)
- con: reading can take several hundred cycles, in rare cases > 4 µs ⇾ that equals 800 cycles or almost half the real-time window
- pro: reading 1 byte takes almost the same time as reading 100 bytes

:::{note}
This design will switch to a large cyclic buffer in the near future to reduce overhead (buffer exchanges) for the PRU.
:::

In the following, we describe the data transfer process for emulation. Emulation is the most general case, because harvesting data has to be transferred from a database to the analog frontend, while simultaneously data about energy consumption (target voltage and current) & GPIO traces have to be transferred from the analog frontend (ADC) to the database.

The userspace application writes the first block of harvesting data into one of the (currently 64) buffers, e.g. buffer index 0. After the data is written, it sends a message to PRU0, indicating the message type (`MSG_BUF_FROM_HOST`) and the index (0). PRU0 receives that message and stores the index in a ringbuffer of empty buffers. When it is time, PRU0 retrieves a buffer index from the ringbuffer, reads the harvesting values (current and voltage) sample by sample from the buffer, sends them to the DAC and subsequently samples the 'load' ADC channels (current and voltage), overwriting the harvesting samples in the buffer. Once the buffer is full, the PRU sends a message with type `MSG_BUF_FROM_PRU` and the buffer's index. The userspace application receives the index, reads the buffer, writes its content to the database and fills it with the next block of harvesting data for emulation.
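The following sketch summarizes the user space half of this exchange. It is deliberately simplified: the message names follow the text above, while the injected callables (`send_msg`, `receive_msg`, buffer read/write helpers) only stand in for the real kernel-module and shared-memory interfaces.

```python
# Simplified sketch of the user space half of the buffer exchange during emulation.
# The callables passed in are stand-ins for the real kernel-module / shared-memory API.

MSG_BUF_FROM_HOST = 0x01  # placeholder IDs - the real values are defined by the firmware
MSG_BUF_FROM_PRU = 0x02


def emulation_loop(send_msg, receive_msg, write_buffer, read_buffer,
                   next_harvest_block, store_trace, n_buffers: int = 64) -> None:
    # Pre-fill all buffers with harvesting data and hand their indices to PRU0.
    for index in range(n_buffers):
        write_buffer(index, next_harvest_block())
        send_msg(MSG_BUF_FROM_HOST, index)

    while True:
        # PRU0 hands back a buffer that now holds the sampled target voltage & current.
        msg_type, index = receive_msg()
        if msg_type != MSG_BUF_FROM_PRU:
            continue
        store_trace(read_buffer(index))
        block = next_harvest_block()
        if block is None:  # input data exhausted
            break
        # Refill the same buffer and give it back to PRU0.
        write_buffer(index, block)
        send_msg(MSG_BUF_FROM_HOST, index)
```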

### Data extraction

The user space code (written in Python) has to extract the data from a buffer in the shared memory. Generally, a user space application can only access its own virtual address space. We use Linux's `/dev/mem` and Python's `mmap.mmap(..)` to map the corresponding region of physical memory into the local address space. Using this mapping, we only need to seek to the memory location of a buffer, extract the header information using `struct.unpack()` and interpret the raw data as a numpy array using `numpy.frombuffer()`.
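A minimal sketch of that mapping and extraction, assuming placeholder values for the physical address, buffer size and header layout (in shepherd, these come from the kernel module and its sysfs interface):

```python
import mmap
import struct

import numpy as np

# Placeholder values - the real ones are provided by the kernel module (sysfs).
MEM_ADDRESS = 0x9F000000            # physical start address of the carve-out
BUFFER_SIZE = 64 * 1024             # size of one buffer in bytes
N_BUFFERS = 64
HEADER = struct.Struct("=LLQ")      # hypothetical header: n_samples, index, timestamp

with open("/dev/mem", "r+b") as fd:
    shared = mmap.mmap(fd.fileno(), N_BUFFERS * BUFFER_SIZE, offset=MEM_ADDRESS)


def read_buffer(index: int) -> np.ndarray:
    """Extract one buffer's samples as a numpy array of raw 32-bit values."""
    offset = index * BUFFER_SIZE
    n_samples, _buf_index, _timestamp_ns = HEADER.unpack_from(shared, offset)
    return np.frombuffer(shared, dtype="u4", count=n_samples,
                         offset=offset + HEADER.size)
```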


## Database

In the current local implementation, all data is stored locally in the filesystem. This means that, for emulation, the harvesting data is first copied to the corresponding shepherd node. The sampled data (harvesting data for recording and energy traces for emulation) is also stored on each individual node first and later copied to and merged into a central database. We use the popular HDF5 data format to store data and meta-information.

See Chapter [](../user/data_format) for more details.
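For a quick look into such a file, `h5py` is handy; the filename below is just an example, and the actual group/dataset layout is described in that chapter:

```python
import h5py

# Example filename - shepherd recordings are ordinary HDF5 files.
with h5py.File("recording.h5", "r") as hf:
    hf.visit(print)           # list all groups and datasets
    print(dict(hf.attrs))     # meta-information stored as file attributes
```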
