This repository has been archived by the owner on Jun 14, 2023. It is now read-only.

Commit

Merge pull request #36 from deephdc/ignacio-br1
update docs
laramaktub committed Dec 14, 2022
2 parents 6aeedfa + b19cbb7 commit 2a65ace
Showing 3 changed files with 19 additions and 7 deletions.
14 changes: 9 additions & 5 deletions source/user/howto/train-model-locally.rst
@@ -18,7 +18,8 @@ on a custom dataset to create a `plant classifier <https://github.com/deephdc/DE

* having `Docker <https://www.docker.com>`__ installed. For an up-to-date installation please follow
the `official Docker installation guide <https://docs.docker.com/install>`__.

Since you will likely be using GPUs for training, you also have to install the `nvidia-container-toolkit <https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#installing-on-ubuntu-and-debian>`__
to make them visible from inside the container.
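As a rough sketch of what the toolkit installation looks like (assuming a Debian/Ubuntu host where the NVIDIA package repository has already been configured as described in the official guide linked above):

```shell
# Hypothetical sketch, assuming the NVIDIA apt repository is already set up
# (see the official install guide for the repository configuration step).
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit

# Restart Docker so it picks up the new NVIDIA runtime
sudo systemctl restart docker
```

The exact package names and repository setup can change between releases, so always defer to the official installation guide.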


1. Choose a module from the Marketplace
@@ -100,12 +101,15 @@ accessible from inside the container. This is done via the Docker volume ``-v``
$ docker run -ti -p 5000:5000 -p 6006:6006 -p 8888:8888 -v path_to_local_folder:path_to_docker_folder deephdc/deep-oc-image-classification-tf
We also need to make the GPUs visible from inside the container, using the ``--runtime=nvidia``
(or ``--gpus all``) flag.

In our case, the final command, mounting the data folder and the model weights folder
(where we will later retrieve the newly trained model), looks as follows:

.. code-block:: console

   $ docker run -ti -p 5000:5000 -p 6006:6006 -p 8888:8888 -v /home/ubuntu/data:/srv/image-classification-tf/data -v /home/ubuntu/models:/srv/image-classification-tf/models --runtime=nvidia deephdc/deep-oc-image-classification-tf:gpu
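Because the final command is long, it can help to assemble it from named pieces before running it. The sketch below (using the example paths above; the variables ``DATA_DIR``, ``MODEL_DIR`` and ``IMAGE`` are illustrative names, not part of the module) only prints the command so it can be reviewed first:

```shell
# Sketch: build the docker run command from named pieces so each flag is
# easy to audit. Variable names here are hypothetical, not part of the guide.
DATA_DIR=/home/ubuntu/data
MODEL_DIR=/home/ubuntu/models
IMAGE=deephdc/deep-oc-image-classification-tf:gpu

CMD="docker run -ti \
 -p 5000:5000 -p 6006:6006 -p 8888:8888 \
 -v ${DATA_DIR}:/srv/image-classification-tf/data \
 -v ${MODEL_DIR}:/srv/image-classification-tf/models \
 --runtime=nvidia ${IMAGE}"

# Print the assembled command instead of executing it, so it can be checked
echo "$CMD"
```

Once the printed command looks right, it can be run as-is (or via ``eval "$CMD"``).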
4. Open the DEEPaaS API and train the model
@@ -141,7 +145,7 @@ To account for this simpler process, we have prepared a version of the
:doc:`the DEEP Modules Template <../overview/cookiecutter-template>`
specially tailored to this task.

On your local machine, run the Template with the ``child-module`` branch.
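As a sketch of what that invocation looks like (assuming `cookiecutter <https://cookiecutter.readthedocs.io>`__ is installed; its ``--checkout`` flag selects the template branch):

```shell
# Hypothetical invocation: install cookiecutter, then fetch the DEEP template
# at the child-module branch; cookiecutter will prompt for project values.
pip install cookiecutter
cookiecutter https://github.com/deephdc/cookiecutter-deep --checkout child-module
```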

.. code-block::
10 changes: 9 additions & 1 deletion source/user/howto/train-model-remotely.rst
@@ -92,6 +92,14 @@ After submitting you will be redirected to the deployment's list.
In your new deployment go to **Access** and choose **JupyterLab**. You will be redirected to ``http://jupyterlab_endpoint``

Now that you are in JupyterLab, open a **Terminal** window (**[+]** (New launcher) ➜ **Others** ➜ **Terminal**).

First, let's check that the GPU is correctly visible:

.. code-block:: console

   $ nvidia-smi
This should output the GPU model along with some extra info.

Now we will mount our remote Nextcloud folders in our local containers:

.. code-block:: console
@@ -166,7 +174,7 @@ To account for this simpler process, we have prepared a version of the
:doc:`the DEEP Modules Template <../overview/cookiecutter-template>`
specially tailored to this task.

On your local machine, run the Template with the ``child-module`` branch.

.. code-block::
2 changes: 1 addition & 1 deletion source/user/overview/cookiecutter-template.rst
@@ -16,7 +16,7 @@ Overview
To simplify development and easily integrate your model with the :doc:`DEEPaaS API <api>`,
a `standard template <https://github.com/deephdc/cookiecutter-deep>`__ for modules is provided.

There are different versions of this template:

* `master <https://github.com/deephdc/cookiecutter-deep/tree/master>`__: this is what 99% of users are probably
  looking for: a simple, minimal template with the minimum requirements to integrate your code in DEEP.
