👋 Hi Developers! Thank you so much for contributing to Portal!
This file serves as a guide for contributing your ideas to the Portal platform. It covers:

- What are the Components of Portal?
- How do I Set Up Portal for Development?
- How do I Get Started on Contributing?
- How do I Handle Errors while Contributing?
## What are the Components of Portal?

The two main components of Portal are the App and the Engine.

The App serves as the graphical user interface that users interact with. Events triggered in the App may update the App itself and/or trigger API calls to the Engine.

The Engine performs the more complicated computations. It receives a specific API call from the App and performs the function corresponding to that call, before responding to the App.
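To make this flow concrete, the sketch below shows the general request/response pattern between the App and the Engine. The port, route, and payload are hypothetical placeholders chosen for illustration; they are not Portal's actual API.

```python
# Illustrative sketch only: the port, route, and payload below are
# hypothetical placeholders, not Portal's actual API.
import requests

# 1. A user event in the App (e.g. "predict on this image") triggers an
#    HTTP request to the Engine.
response = requests.post(
    "http://localhost:5000/hypothetical/predict",  # placeholder URL
    json={"image_path": "path/to/image.jpg"},      # placeholder payload
    timeout=30,
)

# 2. The Engine runs the corresponding function and replies; the App then
#    uses the response to update its own state and interface.
predictions = response.json()
```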
## How do I Set Up Portal for Development?

You will need:

- Node v16. You may use nvm to manage your Node versions.
- Python 3.7. You may run the following commands to install a virtual environment with all the necessary dependencies:

  ```bash
  chmod u+x setup-virtualenv.sh
  ./setup-virtualenv.sh
  source ./portal_build/.venv/Scripts/activate
  ```

  Alternatively, you can use another virtual environment manager such as virtualenvwrapper or conda: follow its installation and environment creation instructions, then run the following command to install the dependencies:

  ```bash
  pip install -r requirements.txt
  ```
Open two terminals. In the first terminal, navigate to `src/app` from the root directory of the repository and run the following commands:

```bash
nvm install 16 && nvm use 16
npm install --legacy-peer-deps
npm run build:static
npm run dev
```

In the second terminal, navigate to `src/engine` from the root directory of the repository and run the following command:

```bash
python run.py
```

Lastly, open your browser and navigate to `localhost:3000` to access Portal. You should be able to see Portal's interface.
We have conveniently provided a bash script to build the Windows executable for testing:

```bash
chmod u+x build-windows.sh
./build-windows.sh
```

To install the executable locally, navigate to `application` from the root directory of the repository and double-click on `Portal Setup x.x.x.exe` to install Portal. Follow the instructions on the installer to complete the installation, then check the `Run Portal` box to launch Portal after installation. You should be able to see Portal's interface.
## How do I Get Started on Contributing?

To begin contributing to Portal, first fork the repository, make your changes, then submit a pull request into Portal's `develop` branch.

Here are some suggestions for you to begin contributing to Portal. However, you may contribute to Portal in any way, so do not be restricted by this list!
### Custom Model Class Creation
In the current release of Portal, TensorFlow and Darknet models are supported. But what if your model is built with a different machine learning library? In that case, you may want to create a custom model class.

The Engine is compatible with any type of model, provided that it inherits from the `BaseModel` class.

Creating a custom model can be done in a few steps (a consolidated sketch of the finished module follows these steps):
1. From `src/engine/server/models`, create your own custom model module (such as `example_model.py`).

2. Import the following modules:

   ```python
   from server.services.errors import Errors, PortalError
   from server.services.hashing import get_hash
   from server.models.abstract.BaseModel import BaseModel
   ```
3. Create your own custom model class (such as `ExampleModel`), which inherits from `BaseModel`:

   ```python
   class ExampleModel(BaseModel):
       ...
   ```
4. Within your custom model class, define the following functions:

   - `_load_label_map_(self)`

     - Converts your label map into the following dictionary format and then saves it into `self._label_map_`:

       ```python
       self._label_map_ = {
           '1': {
               'id': 1,
               'name': 'apple',
           },
           '2': {
               'id': 2,
               'name': 'pear',
           },
       }
       ```

   - `register(self)`

     - Checks that all critical files needed for loading and prediction are inside `self._directory_`.
     - Sets the height (`self._height_`) and width (`self._width_`) of the image that the model should receive.
     - Loads the label map with the function `_load_label_map_()`.
     - Sets the model key (`self._key_`) to be the hash of the model directory (`self._directory_`):

       ```python
       from server.services.hashing import get_hash

       self._key_ = get_hash(self._directory_)
       ```

     - Returns `(self._key_, self)` as a tuple.

   - `load(self)`

     - Loads the model into a variable and saves that variable into `self._model_`:

       ```python
       loaded_model = load_the_model(<model_path>)
       self._model_ = loaded_model
       ```

   - `predict(self, image_array)`

     - Performs inference on the image array.
     - Returns the inference as a dictionary in the following format:

       ```python
       {
           "detection_masks": <ndarray of shape [Instances, Height, Width]
               representing the prediction masks,
               or None if this is not a segmentation model>,
           "detection_boxes": <ndarray of shape [Instances, 4]
               representing the bounding boxes,
               in the form (Ymin, Xmin, Ymax, Xmax)>,
           "detection_scores": <ndarray of shape [Instances, 1]
               representing the confidence>,
           "detection_classes": <ndarray of shape [Instances, 1]
               representing the label ids>,
       }
       ```

       NOTE: The segmentation masks are in the form of image masks, not bounding box masks.

   You may also define other functions, but the functions above are the basic necessity.
5. In the Model Factory `src/engine/server/models/abstract/Model.py`, import and add your custom model class into the `model_class` dictionary:

   ```python
   from server.models.example_model import ExampleModel

   # Inside the Model function:
   model_class = {
       "tensorflow": TensorflowModel,
       "darknet": DarknetModel,
       "example": ExampleModel,  # <<--------- add here
   }
   ```
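Putting the steps above together, a skeleton `example_model.py` might look like the sketch below. This is only an illustrative outline: the file names (`example_model.weights`, `labels.txt`), the image size, the `load_the_model`/`run_inference` placeholders, and the use of `Errors.UNKNOWN` are assumptions made for this example, and anything required by `BaseModel`'s constructor is inherited and not shown here.

```python
# example_model.py -- illustrative sketch only; adapt it to your own library.
import os

import numpy as np

from server.services.errors import Errors, PortalError
from server.services.hashing import get_hash
from server.models.abstract.BaseModel import BaseModel


class ExampleModel(BaseModel):
    def _load_label_map_(self):
        # Convert your own label map file into the dictionary format expected
        # by the Engine. The file name "labels.txt" is a placeholder.
        labels_path = os.path.join(self._directory_, "labels.txt")
        with open(labels_path, "r") as labels_file:
            names = [line.strip() for line in labels_file if line.strip()]
        self._label_map_ = {
            str(index + 1): {"id": index + 1, "name": name}
            for index, name in enumerate(names)
        }

    def register(self):
        # Check that all critical files are present in the model directory.
        for required_file in ("example_model.weights", "labels.txt"):
            if not os.path.isfile(os.path.join(self._directory_, required_file)):
                # Errors.UNKNOWN is a placeholder; pick the most fitting
                # member of the Errors enum in errors.py.
                raise PortalError(
                    Errors.UNKNOWN,
                    f"{required_file} is missing from {self._directory_}",
                )

        # Input size that the model expects (illustrative values).
        self._height_ = 640
        self._width_ = 640

        # Load the label map and derive the model key from the directory hash.
        self._load_label_map_()
        self._key_ = get_hash(self._directory_)
        return self._key_, self

    def load(self):
        # Replace this placeholder with your library's own loading call.
        weights_path = os.path.join(self._directory_, "example_model.weights")
        self._model_ = load_the_model(weights_path)  # hypothetical helper

    def predict(self, image_array):
        # Run inference with your library, then convert the raw output into
        # the dictionary format that the Engine expects.
        boxes, scores, classes = run_inference(self._model_, image_array)  # hypothetical helper
        return {
            "detection_masks": None,  # not a segmentation model in this sketch
            "detection_boxes": np.asarray(boxes),      # [Instances, 4] as (Ymin, Xmin, Ymax, Xmax)
            "detection_scores": np.asarray(scores),    # [Instances, 1] confidence values
            "detection_classes": np.asarray(classes),  # [Instances, 1] label ids
        }
```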
The Engine is now configured to accept a new type of model. Next, we configure the App.
- In the model file `src/app/src/components/annotations/model.tsx`:

  - In `FormData -> modelType`, add your custom model string:

    ```typescript
    export type FormData = {
      type: string;
      name: string;
      description: string;
      directory: string;
      modelKey: string;
      projectSecret: string;
      modelType: "tensorflow" | "darknet" | "example" | ""; // <---- Add here
    };
    ```

  - In `Model -> render() -> modelTypes`, add your custom model:

    ```typescript
    const modelTypes = {
      tensorflow: "TensorFlow 2.x",
      darknet: "DarkNet (YOLO v3, YOLO v4)",
      example: "Example Model", // <------------- Add here
    };
    ```

  - In `Model -> render() -> registerModelForm`, add a new `Menu.Item` in the `Menu` component:

    ```tsx
    <Menu>
      <Menu.Item
        shouldDismissPopover={false}
        text={modelTypes.tensorflow}
        onClick={() => {
          const event = {
            target: { name: "modelType", value: "tensorflow" },
          };
          this.handleChangeForm(event);
        }}
      />
      <Menu.Item
        shouldDismissPopover={false}
        text={modelTypes.darknet}
        onClick={() => {
          const event = {
            target: { name: "modelType", value: "darknet" },
          };
          this.handleChangeForm(event);
        }}
      />
      {/* ---------------- Add below ---------------- */}
      <Menu.Item
        shouldDismissPopover={false}
        text={modelTypes.example}
        onClick={() => {
          const event = {
            target: { name: "modelType", value: "example" },
          };
          this.handleChangeForm(event);
        }}
      />
    </Menu>
    ```
Now restart Portal and you should be able to see the changes!
## How do I Handle Errors while Contributing?

### Portal Error Codes

In the Portal Engine, all errors are raised as a customised `PortalError`, which is readable by and displayed in the App. `PortalError`s come in different types, all of which can be found in `src/engine/server/services/errors.py`.

While contributing, whenever you need to raise an error, you can do so as follows:
```python
from server.services.errors import Errors, PortalError

raise PortalError(Errors.YOURERROR, "Your error string")
```
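For instance, a custom model's `predict()` could reject unusable input with a readable message. The check below and the choice of `Errors.UNKNOWN` are illustrative only; pick the most fitting member of the `Errors` enum for your case.

```python
from server.services.errors import Errors, PortalError


def predict(self, image_array):
    # Illustrative guard: reject empty input with a message the App can display.
    if image_array is None or image_array.size == 0:
        raise PortalError(Errors.UNKNOWN, "predict() received an empty image array.")
    # ...run inference as usual...
```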
If an error that you wish to raise does not fall into any of the primary error categories provided by Portal, you may add your own `PortalError` type to the `Errors` class in `src/engine/server/services/errors.py`:

```python
class Errors(Enum):
    # insert in this class
    YOURERROR = XXXX, YYY
    # where integer XXXX will be your custom error code
    # and integer YYY will be its HTTP return status.
```
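For example, a new entry could look like the following; the name `MYCUSTOMERROR`, the code 4001, and the HTTP status 400 are placeholder values chosen purely for illustration.

```python
from enum import Enum


class Errors(Enum):
    # ...existing error types...
    MYCUSTOMERROR = 4001, 400  # placeholder error code 4001, returned as HTTP 400
```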
Note that any exception raised that is not a `PortalError` will be automatically converted into a `PortalError` with error type `UNKNOWN`, with the error string carried over from the original exception.
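Conceptually, this automatic conversion behaves like the wrapper sketched below. This is only an illustration of the behaviour described above, not the Engine's actual implementation.

```python
from server.services.errors import Errors, PortalError


def call_with_portal_errors(function, *args, **kwargs):
    """Illustrative wrapper: PortalErrors pass through untouched, while any
    other exception is converted into a PortalError of type UNKNOWN."""
    try:
        return function(*args, **kwargs)
    except PortalError:
        raise  # already readable by the App
    except Exception as exception:
        # The original message is carried over as the error string.
        raise PortalError(Errors.UNKNOWN, str(exception)) from exception
```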
### Debug Mode

As a `PortalError` does not cause the Engine to exit, the error message is only shown in the App, not in the terminal the Engine is running in. To display the Engine's traceback, you can enable debug mode by setting the environment variable `PORTAL_LOGGING` before starting the Engine:
Unix/Mac:

```bash
export PORTAL_LOGGING=1
```

Windows:

```bat
set PORTAL_LOGGING=1
```
`PORTAL_LOGGING` accepts the values 1 to 5 inclusive, with the values signifying:

| Value | Threshold |
| --- | --- |
| 1 | Debug |
| 2 | Info |
| 3 | Warning |
| 4 | Error |
| 5 | Critical |
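The threshold names match the standard Python logging levels, so 1 is the most verbose and 5 the least. As a rough illustration of that correspondence (not Portal's internal code), the mapping can be thought of as:

```python
import logging

# Illustrative mapping of PORTAL_LOGGING values to standard logging levels;
# this mirrors the table above and is not Portal's actual implementation.
PORTAL_LOGGING_LEVELS = {
    1: logging.DEBUG,
    2: logging.INFO,
    3: logging.WARNING,
    4: logging.ERROR,
    5: logging.CRITICAL,
}
```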