Commit
Merge pull request #30 from miguelaeh/remove_context
feat(core,cli): Replace context by class variables
miguelaeh committed Sep 6, 2023
2 parents 3665d9d + 5e49600 commit 5f20b8a
Showing 17 changed files with 75 additions and 90 deletions.
17 changes: 12 additions & 5 deletions README.md
@@ -131,13 +131,11 @@ These are the stages that actually modify/learn/process the media streams. All o

These stages are defined mainly to give the code a clear logical structure; there are no significant differences in how the code is executed across them.

#### Context
#### App internal state

You app can maintain its own internal state. This is useful when you need to pass information between stages.
The application maintains its own internal state. This is useful when you need to pass information between stages.

By default, an internal context is created and can be accessed via the `ctx` variable.

You can also define your own variables within the `App` class, however, note that if you override the constructor the context won't be initialized properly.
Simply use class variables within the `App` class.
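For instance, a minimal sketch (the `frame_count` variable is illustrative, and the import path is inferred from the core layout shown further down):

```python
from pipeless_ai.lib.app.app import PipelessApp

class App(PipelessApp):
    def before(self):
        # Class variables persist across stages and across frames
        self.frame_count = 0

    def process(self, frame):
        self.frame_count += 1  # state carried between pipeline iterations
        return frame
```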

### Run Your App

@@ -290,6 +288,15 @@ For the full license text, please refer to the [Apache License 2.0](LICENSE).

## Notable Changes

### Core version `0.1.9` and CLI version `0.1.7`

The context has been replaced by class variables, so it is no longer available.

Required changes in your application code:

* Update your app methods so they do **NOT** receive the `ctx` parameter.
* Replace any usage of the context with class variables: references to `ctx['xxx']` become `self.xxx`, as sketched below.
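For instance, the cats example from this repository migrates like this (a sketch; the detection logic itself is unchanged):

```python
import cv2
from pipeless_ai.lib.app.app import PipelessApp  # import path inferred from the core layout

class App(PipelessApp):
    # Previously: def before(self, ctx): ctx['xml_data'] = cv2.CascadeClassifier('cats.xml')
    def before(self):
        self.xml_data = cv2.CascadeClassifier('cats.xml')

    # Previously: def process(self, frame, ctx): model = ctx['xml_data']
    def process(self, frame):
        model = self.xml_data
        # ... run the detection with `model` as before ...
        return frame
```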

### Core version `0.1.6` and CLI version `0.1.5`

These versions include a new **optional** parameter to configure the reception buffers of the sockets, which is useful for adjusting the buffer sizes to the processing time.
2 changes: 1 addition & 1 deletion cli/pyproject.toml
@@ -1,6 +1,6 @@
[tool.poetry]
name = "pipeless-ai-cli"
version = "0.1.6"
version = "0.1.7"
description = "Pipeless is a framework to build and deploy multimodal perception apps in minutes without worrying about multimedia pipelines"
authors = ["Miguel Angel Cabrera Minagorri <devgorri@gmail.com>"]
license = "Apache-2.0"
31 changes: 12 additions & 19 deletions cli/src/pipeless_ai_cli/commands/templates/default/README.md
@@ -1,28 +1,21 @@
# App project directory
# Pipeless application

An app is a special class that is loaded by the pipeless framework.
This is a Pipeless application bootstrapped with `pipeless create project`.

You can see an app as an image processing pipeline. It has some stages (see below) and takes an RGB image to return an RGB image. It could be the same input image, a modified image, or even a totally new image.
## Configure the application

In the case of videos, the app code is automatically executed for every frame of the video, so you just need to care about a single image processing and the framework will take care of the rest.
Open the `config.yaml` file to edit the default configuration.

## App stages
You can find the full list of configuration options [here](https://pipeless.ai/docs/v0/configuration).
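For reference, the example configurations in this repository follow this shape (hosts, ports, and file paths are placeholders):

```yaml
input:
  address:
    host: localhost
    port: 1234
  video:
    enable: true
    uri: file:///path/to/input.mp4
log_level: INFO
output:
  address:
    host: localhost
    port: 1237
  video:
    enable: true
    uri: file:///path/to/output.mp4
worker:
  n_workers: 1
```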

An app is build from a set of independent pipeline stages. All stages can be left empty if not required for a particular application.
## Run the application

In some special cases, you may need to maintain some state between two stages. In those cases you can use the app context, represented in the code by the `ctx` variable. You can access the context in all the stages of the pipeline and its value will be preserved between stages when processing a single image and also between pipeline iterations when processing video frames. In short, anything that you add to the context can be accessed and modified at any stage until the app finishes.
Simply execute:

### Initial and final stage
```bash
pipeless run
```

These stages are executed only once and do not receive nor return any images. They are used when an app requires to execute some code before processing any image and when it needs to execute some code after processing all the images.
## Learn more

* `before`: code that is executed before starting to process the first frame
* `after`: code that is executed after the processing of the last frame

### Processing stages

These are the stages that actually process the images. They receive the image and they **must** return an image. When not implemented they simply forward the previous stage image to the next stage.

* `pre-process`: code that is executed before the processing of each frame
* `process`: the actual code that processes a frame
* `post-process`: code that is executed after the processing of each frame
To learn more about how to implement the application, please refer to the [documentation](https://pipeless.ai/docs).
16 changes: 6 additions & 10 deletions cli/src/pipeless_ai_cli/commands/templates/default/app.py
@@ -5,32 +5,28 @@ class App(PipelessApp):
Main application class.
Pre-process, process and post-process hooks, if implemented,
must return a RGB image.
The context can be accessed and modified at any stage of the pipeline.
You can use it to share data between stages or pipeline iterations (i.e
between the processing of different frames)
must return an RGB frame as a NumPy array of the same shape as the received one.
"""

# Hook to execute before the processing of the first image
def before(self, ctx):
def before(self):
pass

# Hook to execute to pre-process each image
def pre_process(self, frame, ctx):
def pre_process(self, frame):
modified_frame = frame # Do something to the frame
return modified_frame

# Hook to execute to process each image
def process(self, frame, ctx):
def process(self, frame):
modified_frame = frame # Do something to the frame
return modified_frame

# Hook to execute after processing each image
def post_process(self, frame, ctx):
def post_process(self, frame):
modified_frame = frame # Do something to the frame
return modified_frame

# Hook to execute after the processing of the last image
def after(self, ctx):
def after(self):
pass
2 changes: 1 addition & 1 deletion core/pyproject.toml
@@ -1,6 +1,6 @@
[tool.poetry]
name = "pipeless-ai"
version = "0.1.8"
version = "0.1.9"
description = "A framework to build and deploy multimodal perception apps in minutes without worrying about multimedia pipelines"
authors = ["Miguel Angel Cabrera Minagorri <devgorri@gmail.com>"]
license = "Apache-2.0"
12 changes: 6 additions & 6 deletions core/src/pipeless_ai/lib/app/app.py
@@ -6,32 +6,28 @@ class PipelessApp():
"""

def __init__(self):
self.ctx = {}
pass

@timer
def __before(self):
if hasattr(self, 'before') and callable(self.before):
self.before(self.ctx)
self.before()

@timer
def __pre_process(self, frame):
if hasattr(self, 'pre_process') and callable(self.pre_process):
return self.pre_process(frame, self.ctx)
return self.pre_process(frame)
return frame

@timer
def __process(self, frame):
if hasattr(self, 'process') and callable(self.process):
return self.process(frame, self.ctx)
return self.process(frame)
return frame

@timer
def __post_process(self, frame):
if hasattr(self, 'post_process') and callable(self.post_process):
return self.post_process(frame, self.ctx)
return self.post_process(frame)
return frame

@timer
def __after(self):
if hasattr(self, 'after') and callable(self.after):
self.after(self.ctx)
self.after()
12 changes: 4 additions & 8 deletions examples/cats/README.md
@@ -40,23 +40,19 @@ In order to recognise cats we need to load a model trained for that purpose.
Since we want to load the model before any frame is processed, we do it within the `before` stage (method).

```python
xml_data = cv2.CascadeClassifier('cats.xml')
self.xml_data = cv2.CascadeClassifier('cats.xml')
```

After loading the model, we store it on the app context (`ctx`) in order to have access during other stage iterations.

```python
ctx['xml_data'] = xml_data
```
We store the model as a class variable so that it is accessible from the other stages.

### Process stage

We will do basic processing here in order to recognise cat faces on the frames and draw a square around them.

First, we get a reference to the model that we added to the context on the `before` stage:
First, we get a reference to the model that we loaded in the `before` stage:

```python
model = ctx['xml_data']
model = self.xml_data
```

Detecting cats is faster on smaller images, so we resize the original frame:
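The resize itself is elided in this hunk; a minimal sketch, assuming a fixed target width (the example's actual reduction factor may differ):

```python
# Sketch only: the target width is an assumption, not the example's actual value
original_height, original_width, _ = frame.shape
aspect_ratio = original_width / original_height
reduced_width = 640
reduced_height = int(reduced_width / aspect_ratio)
reduced_frame = cv2.resize(frame, (reduced_width, reduced_height))
```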
9 changes: 4 additions & 5 deletions examples/cats/app.py
@@ -2,13 +2,12 @@
import cv2

class App(PipelessApp):
def before(self, ctx):
def before(self):
# Load the model before processing any frame
xml_data = cv2.CascadeClassifier('cats.xml')
ctx['xml_data'] = xml_data
self.xml_data = cv2.CascadeClassifier('cats.xml')

def process(self, frame, ctx):
model = ctx['xml_data']
def process(self, frame):
model = self.xml_data

# Create reduced frame for faster detection
original_height, original_width, _ = frame.shape
6 changes: 4 additions & 2 deletions examples/cats/config.yaml
@@ -5,13 +5,15 @@ input:
port: 1234
video:
enable: true
uri: 'file:///home/example/path/pipeless/examples/cats/cats.mp4'
uri: 'file:///home/miguelaeh/projects/pipeless/examples/cats/cats.mp4'
# uri: 'rtmp://localhost:1935/twich/input'
output:
address:
host: localhost
port: 1237
video:
enable: true
uri: file:///home/example/path/pipeless/examples/cats/cats-output.mp4
uri: file:///home/miguelaeh/projects/pipeless/examples/cats/cats-output.mp4
#uri: rtmp://localhost:1935/twich/output
worker:
n_workers: 1
10 changes: 3 additions & 7 deletions examples/kafka/README.md
@@ -87,23 +87,19 @@ We describe here only the new lines, to understand how the recognition model is

### Before stage

In the before state we initiate the producer connection to our Kafka cluster and store on the context so we can use the producer later to send messages to the topic. Initiating the connection is realy simple thanks to the Pipeless Kafka Plugin:
In the `before` stage we initiate the producer connection to our Kafka cluster so we can use the producer later to send messages to the topic. Initiating the connection is really simple thanks to the Pipeless Kafka Plugin:

```python
ctx['producer'] = KafkaProducer()
self.producer = KafkaProducer()
```

### Processing stage

In this case, instead of editing the frame to draw a bounding box and returning the modified video frame as we did in the original cats example, we simply identify the bounding boxes and, if there is at least one, send a message to the `pipeless` Kafka topic:

```python
producer = ctx['producer']

...

if len(bounding_boxes) > 0:
producer.produce('pipeless', 'There is a cat!')
self.producer.produce('pipeless', 'There is a cat!')
```

> NOTE: the bounding box detection is exactly the same as in the cats example.
13 changes: 6 additions & 7 deletions examples/kafka/app.py
@@ -3,13 +3,12 @@
import cv2

class App(PipelessApp):
def before(self, ctx):
ctx['producer'] = KafkaProducer()
ctx['xml_data'] = cv2.CascadeClassifier('cats.xml')
def before(self):
self.producer = KafkaProducer()
self.xml_data = cv2.CascadeClassifier('cats.xml')

def process(self, frame, ctx):
producer = ctx['producer']
model = ctx['xml_data']
def process(self, frame):
model = self.xml_data

# Create reduced frame for faster detection
original_height, original_width, _ = frame.shape
@@ -21,4 +20,4 @@ def process(self, frame, ctx):

# Notify that there is a cat
if len(bounding_boxes) > 0:
producer.produce('pipeless', 'There is a cat!')
self.producer.produce('pipeless', 'There is a cat!')
9 changes: 4 additions & 5 deletions examples/pose/README.md
@@ -30,15 +30,14 @@ You can now check the output video with any media player.

The first thing we need to do is create an instance of our model. We do it in the `before` stage:
```python
def before(self, ctx):
ctx['model'] = MultiPoseEstimationLightning()
def before(self):
self.model = MultiPoseEstimationLightning()
```

Once we have an instance of our model, we can use it on every frame to get bounding boxes and keypoints:
```python
def process(self, frame, ctx):
model = ctx['model']
bboxes, keypoints = model.invoke_inference(frame)
def process(self, frame):
bboxes, keypoints = self.model.invoke_inference(frame)
```

Finally, we draw the bounding boxes and keypoints onto the original frame before returning it, so we can visualize the detections in the output:
10 changes: 4 additions & 6 deletions examples/pose/app.py
@@ -4,12 +4,11 @@
import cv2

class App(PipelessApp):
def before(self, ctx):
ctx['model'] = MultiPoseEstimationLightning()
def before(self):
self.model = MultiPoseEstimationLightning()

def process(self, frame, ctx):
model = ctx['model']
bboxes, keypoints = model.invoke_inference(frame)
def process(self, frame):
bboxes, keypoints = self.model.invoke_inference(frame)

for bbox in bboxes:
cv2.rectangle(frame, (bbox[1], bbox[0]), (bbox[3], bbox[2]), (0, 255, 0), 2)
@@ -18,4 +17,3 @@ def process(self, frame, ctx):
cv2.circle(frame, (keypoint[0], keypoint[1]), 5, (255, 0, 255), -1)

return frame

4 changes: 2 additions & 2 deletions examples/pose/config.yaml
@@ -4,14 +4,14 @@ input:
port: 1234
video:
enable: true
uri: file:///home/example/path/woman-walking.mp4
uri: file:///home/miguelaeh/projects/pipeless-2/examples/pose/woman-walking.mp4
log_level: INFO
output:
address:
host: localhost
port: 1237
video:
enable: true
uri: file:///home/example/path/output.mp4
uri: file:///home/miguelaeh/projects/pipeless-2/examples/pose/output.mp4
worker:
n_workers: 1
2 changes: 1 addition & 1 deletion examples/text-overlay/app.py
@@ -3,7 +3,7 @@
import numpy as np

class App(PipelessApp):
def process(self, frame, ctx):
def process(self, frame):
pil_image = Image.fromarray(frame)

text = "Hello pipeless!"
4 changes: 2 additions & 2 deletions examples/text-overlay/config.yaml
@@ -4,14 +4,14 @@ input:
port: 1234
video:
enable: true
uri: file:///home/example/path/book-video.mp4
uri: file:///home/miguelaeh/projects/pipeless-2/examples/text-overlay/book-video.mp4
log_level: INFO
output:
address:
host: localhost
port: 1237
video:
enable: true
uri: screen
uri: rtmp://localhost:1935/twich/test
worker:
n_workers: 1
6 changes: 3 additions & 3 deletions plugins/src/pipeless_ai_plugins/kafka/README.md
@@ -9,15 +9,15 @@ Initialize the producer within the `before` stage:

```python
...
def before(self, ctx):
ctx['producer'] = KafkaProducer()
def before(self):
self.producer = KafkaProducer()
...
```

Send information to a Kafka topic at any stage:

```python
ctx['producer'].produce('pipeless', 'hello!')
self.producer.produce('pipeless', 'hello!')
```

## Configuration
