Commit: responsive images
bourdakos1 committed Mar 31, 2020
1 parent 8a01153 commit ecde4e6
Showing 190 changed files with 236 additions and 89 deletions.
5 changes: 5 additions & 0 deletions docs/.dockerignore
@@ -0,0 +1,5 @@
**/_site/
**/.sass-cache/
**/.jekyll-cache/
**/.jekyll-metadata
**/vendor
97 changes: 66 additions & 31 deletions docs/_classification/5.md
@@ -9,70 +9,105 @@

Cloud Annotations makes labeling images and training machine learning models easy.
Whether you’ve never touched a line of code in your life or you’re a TensorFlow ninja, these docs will help you build what you need.
Let’s get started!

## Sign up for [IBM Cloud](https://ibm.biz/cloud-annotations-sign-up){:target="_blank"}
## Sign up for [IBM Cloud](https://ibm.biz/cloud-annotations-sign-up){:target="\_blank"}

Cloud Annotations is built on top of IBM Cloud Object Storage.
Using a cloud object storage offering provides a reliable place to store training data.
It also opens up the potential for collaboration, letting a team simultaneously annotate a dataset in real time.

IBM Cloud offers a lite tier of object storage, which includes 25 GB of free storage.

Before you start, sign up for a free [IBM Cloud](https://ibm.biz/cloud-annotations-dashboard){:target="_blank"} account.

Before you start, sign up for a free [IBM Cloud](https://ibm.biz/cloud-annotations-dashboard){:target="\_blank"} account.

## Training data best practices

To train a computer vision model you need a lot of images.
Cloud Annotations supports uploading both photos and videos.
However, before you start snapping, there are a few limitations to consider.

* **Object Type** The model is optimized for photographs of objects in the real world. It is unlikely to work well for X-rays, hand drawings, scanned documents, receipts, etc.

* **Object Environment** The training data should be as close as possible to the data on which predictions are to be made. For example, if your use case involves blurry and low-resolution images (such as from a security camera), your training data should be composed of blurry, low-resolution images. In general, you should also consider providing multiple angles, resolutions, and backgrounds for your training images.

* **Difficulty** The model generally can't predict labels that humans can't assign. So, if a human can't be trained to assign labels by looking at the image for 1-2 seconds, the model likely can't be trained to do it either.

* **Label Count** We recommend at least 50 labels per object category for a usable model, but using 100s or 1000s would provide better results.

* **Image Dimensions** The model resizes the image to 300x300 pixels, so keep that in mind when training the model with images where one dimension is much longer than the other.
![](/docs-assets/images/shrink_image.png)

* **Object Size** The object of interest's size should be at least ~5% of the image area for it to be detected. For example, on the resized 300x300 pixel image the object should cover ~60x60 pixels.
![](/docs-assets/images/small_image.png)

<!-- markdown list doesn't support include -->
<ul>
<li>
<p>
<strong>Object Type</strong> The model is optimized for photographs of objects in the real world. It is unlikely to work well for X-rays, hand drawings, scanned documents, receipts, etc.
</p>
</li>
<li>
<p>
<strong>Object Environment</strong> The training data should be as close as possible to the data on which predictions are to be made. For example, if your use case involves blurry and low-resolution images (such as from a security camera), your training data should be composed of blurry, low-resolution images. In general, you should also consider providing multiple angles, resolutions, and backgrounds for your training images.
</p>
</li>
<li>
<p>
<strong>Difficulty</strong> The model generally can't predict labels that humans can't assign. So, if a human can't be trained to assign labels by looking at the image for 1-2 seconds, the model likely can't be trained to do it either.
</p>
</li>
<li>
<p>
<strong>Label Count</strong> We recommend at least 50 labels per object category for a usable model, but using 100s or 1000s would provide better results.
</p>
</li>
<li>
<p>
<strong>Image Dimensions</strong> The model resizes the image to 300x300 pixels, so keep that in mind when training the model with images where one dimension is much longer than the other.
{% include responsive.html image="shrink_image.png" %}
</p>
</li>
<li>
<p>
<strong>Object Size</strong> The object of interest's size should be at least ~5% of the image area for it to be detected. For example, on the resized 300x300 pixel image the object should cover ~60x60 pixels.
{% include responsive.html image="small_image.png" %}
</p>
</li>
</ul>
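The `{% include responsive.html image="..." %}` tags this commit swaps in refer to an include file that is not shown in this diff. As a rough sketch of what `docs/_includes/responsive.html` might contain (the markup, the `2x` variant path, and the styling are assumptions for illustration; only the single `image` parameter is confirmed by the call sites):

```html
{% comment %}
  Hypothetical sketch of docs/_includes/responsive.html; not part of this
  commit's visible diff. Assumes a build step publishes higher-density
  copies under 2x/; the paths and density descriptors are illustrative.
{% endcomment %}
<img
  src="/docs-assets/images/{{ include.image }}"
  srcset="/docs-assets/images/{{ include.image }} 1x,
          /docs-assets/images/2x/{{ include.image }} 2x"
  alt=""
  style="max-width: 100%; height: auto;"
/>
```

Whatever the real include renders, the call-site contract is a single named parameter, e.g. `{% include responsive.html image="shrink_image.png" %}`.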

## Set up Cloud Annotations

To use Cloud Annotations, just navigate to [cloud.annotations.ai](https://cloud.annotations.ai) and click **Continue with IBM Cloud**.
![](/docs-assets/images/0a.CA_login.png)

{% include responsive.html image="0a.CA_login.png" %}

Once logged in, if you don't have an object storage instance, you will be prompted to create one. Click **Get started** to be directed to IBM Cloud, where you can create a free object storage instance.
![](/docs-assets/images/1a.CA_no-object-storage.png)

{% include responsive.html image="1a.CA_no-object-storage.png" %}

You might need to log in to IBM Cloud again to create a resource.
![](/docs-assets/images/2a.IBM_login-to-create-resource.png)

{% include responsive.html image="2a.IBM_login-to-create-resource.png" %}

Choose a pricing plan and click **Create**, then **Confirm** on the following popup.
![](/docs-assets/images/3a.IBM_create-object-storage.png)

{% include responsive.html image="3a.IBM_create-object-storage.png" %}

Once your object storage instance has been provisioned, navigate back to [cloud.annotations.ai](https://cloud.annotations.ai) and refresh the page.

The files and annotations will be stored in a **bucket**. You can create one by clicking **Start a new project**.
![](/docs-assets/images/4a.CA_create-bucket.png)

{% include responsive.html image="4a.CA_create-bucket.png" %}

Give the bucket a unique name.
![](/docs-assets/images/5.CA_name-bucket.png)

{% include responsive.html image="5.CA_name-bucket.png" %}

After your bucket is created and named, you will be prompted to choose an annotation type. Choose `Classification`.
![](/docs-assets/images/6a.CA_set-type-classification.png)

{% include responsive.html image="6a.CA_set-type-classification.png" %}

## Labeling the data
1. Create the desired labels
![](/docs-assets/images/create-label-button.png)
2. Upload a video or some images
![](/docs-assets/images/upload-media-classification.png)
3. Select images then choose `Label` > `DESIRED_LABEL`
![](/docs-assets/images/label-donuts.png)

<!-- markdown list doesn't support include -->
<ol>
<li>Create the desired labels
{% include responsive.html image="create-label-button.png" %}
</li>
<li>Upload a video or some images
{% include responsive.html image="upload-media-classification.png" %}
</li>
<li>Select images then choose <code class="highlighter-rouge">Label</code> > <code class="highlighter-rouge">DESIRED_LABEL</code>
{% include responsive.html image="label-donuts.png" %}
</li>
</ol>
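The `<!-- markdown list doesn't support include -->` comments mark why these lists were converted to raw HTML: kramdown, Jekyll's default Markdown converter, tends to close a Markdown list item when it encounters the block-level HTML that an include emits, so the image ends up outside the list. A kramdown-specific alternative that can keep the Markdown list is its `nomarkdown` extension, sketched below (untested here; the commit's plain-HTML approach is the safer choice):

```markdown
1. Create the desired labels
   {::nomarkdown}
   {% include responsive.html image="create-label-button.png" %}
   {:/nomarkdown}
```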

## &nbsp;

> **📁 [Sample Training Data](https://github.com/cloud-annotations/training/releases/download/v1.2.30/coffee-donuts.zip)**
21 changes: 14 additions & 7 deletions docs/_guides/classification.md
@@ -4,16 +4,23 @@ title: Classification
---

After your bucket is created and named, you will be prompted to choose an annotation type. Choose `Classification`.
![](/docs-assets/images/6a.CA_set-type-classification.png)

{% include responsive.html image="6a.CA_set-type-classification.png" %}


## Labeling the data
1. Create the desired labels
![](/docs-assets/images/create-label-button.png)
2. Upload a video or some images
![](/docs-assets/images/upload-media-classification.png)
3. Select images then choose `Label` > `DESIRED_LABEL`
![](/docs-assets/images/label-donuts.png)
<!-- markdown list doesn't support include -->
<ol>
<li>Create the desired labels
{% include responsive.html image="create-label-button.png" %}
</li>
<li>Upload a video or some images
{% include responsive.html image="upload-media-classification.png" %}
</li>
<li>Select images then choose <code class="highlighter-rouge">Label</code> > <code class="highlighter-rouge">DESIRED_LABEL</code>
{% include responsive.html image="label-donuts.png" %}
</li>
</ol>

> **Pro Tip:** Upload images of the same class and use <kbd>⌘</kbd> + <kbd>A</kbd> (<kbd>Ctrl</kbd> + <kbd>A</kbd> on Windows) to give all of the unlabeled images the same label at once.
3 changes: 2 additions & 1 deletion docs/_guides/downloading-a-model-via-gui.md
@@ -4,6 +4,7 @@ title: Downloading a model via GUI
---

From an existing project, select **Training runs** > **View all**
![](/docs-assets/images/view_all_training.png)

{% include responsive.html image="view_all_training.png" %}

Select a completed training job from the left-hand side, then click **Download**. A zip file containing your trained model files will be created.
21 changes: 10 additions & 11 deletions docs/_guides/labeling-with-a-team.md
@@ -8,17 +8,16 @@

To give someone access to your project, you need to set up an Identity & Access Management (IAM) policy.
Navigate to [IBM Cloud](https://ibm.biz/cloud-annotations-dashboard){:target="_blank"}.
From the title bar, choose `Manage` > `Access (IAM)`.

![](/docs-assets/images/manage-access.png)

{% include responsive.html image="manage-access.png" %}

## Invite users
Invite the user by choosing the `Users` sidebar item and clicking `Invite users`.

![](/docs-assets/images/invite-users.png)
{% include responsive.html image="invite-users.png" %}

Enter their email address, then click `Invite`.

![](/docs-assets/images/add-email.png)
{% include responsive.html image="add-email.png" %}


## Create an access group
@@ -31,23 +30,23 @@ For Cloud Annotations to work properly, the user will need:

Create an access group by choosing the `Access groups` sidebar item and clicking `Create`.

![](/docs-assets/images/access-groups.png)
{% include responsive.html image="access-groups.png" %}

Give the access group a name.

![](/docs-assets/images/name-access-group.png)
{% include responsive.html image="name-access-group.png" %}

Add the invited user to the access group by clicking `Add users`.

![](/docs-assets/images/add-users-to-access-group.png)
{% include responsive.html image="add-users-to-access-group.png" %}

Select the user from the list and click `Add to group`.

![](/docs-assets/images/select-users-from-access-group-list.png)
{% include responsive.html image="select-users-from-access-group-list.png" %}

Choose the `Access policies` tab and click `Assign access`.

![](/docs-assets/images/add-access-policy.png)
{% include responsive.html image="add-access-policy.png" %}

Choose `Cloud Object Storage` from the dropdown; this will enable the rest of the options.
For `Service instance`, choose the Cloud Object Storage instance affiliated with your Cloud Annotations project.
@@ -58,10 +57,10 @@ For access, choose:

Followed by clicking `Add`.

![](/docs-assets/images/choose-policies.png)
{% include responsive.html image="choose-policies.png" %}

Once added, click `Assign`.

![](/docs-assets/images/assign-the-policy.png)
{% include responsive.html image="assign-the-policy.png" %}

Once assigned, the invited users should automatically be able to see the project in Cloud Annotations. To invite additional users, just add them to the access group you just created.
21 changes: 14 additions & 7 deletions docs/_guides/object-detection.md
@@ -4,16 +4,23 @@ title: Object detection
---

After your bucket is created and named, you will be prompted to choose an annotation type. Choose `Localization`, which enables bounding box drawing.
![](/docs-assets/images/6a.CA_set-type.png)

{% include responsive.html image="6a.CA_set-type.png" %}


## Labeling the data
1. Upload a video or some images
![](/docs-assets/images/7a.CA_blank-canvas.png)
2. Create the desired labels
![](/docs-assets/images/9a.CA_create-label.png)
3. Start drawing bounding boxes
![](/docs-assets/images/10.CA_labeled.png)
<!-- markdown list doesn't support include -->
<ol>
<li>Upload a video or some images
{% include responsive.html image="7a.CA_blank-canvas.png" %}
</li>
<li>Create the desired labels
{% include responsive.html image="9a.CA_create-label.png" %}
</li>
<li>Start drawing bounding boxes
{% include responsive.html image="10.CA_labeled.png" %}
</li>
</ol>


## Keyboard shortcuts
69 changes: 46 additions & 23 deletions docs/_guides/preparing-training-data.md
@@ -8,41 +8,67 @@

Cloud Annotations supports uploading both photos and videos.
However, before you start snapping, there are a few limitations to consider.

## Training data best practices
* **Object Type** The model is optimized for photographs of objects in the real world. It is unlikely to work well for X-rays, hand drawings, scanned documents, receipts, etc.

* **Object Environment** The training data should be as close as possible to the data on which predictions are to be made. For example, if your use case involves blurry and low-resolution images (such as from a security camera), your training data should be composed of blurry, low-resolution images. In general, you should also consider providing multiple angles, resolutions, and backgrounds for your training images.

* **Difficulty** The model generally can't predict labels that humans can't assign. So, if a human can't be trained to assign labels by looking at the image for 1-2 seconds, the model likely can't be trained to do it either.

* **Label Count** We recommend at least 50 labels per object category for a usable model, but using 100s or 1000s would provide better results.

* **Image Dimensions** The model resizes the image to 300x300 pixels, so keep that in mind when training the model with images where one dimension is much longer than the other.
![](/docs-assets/images/shrink_image.png)

* **Object Size** The object of interest's size should be at least ~5% of the image area for it to be detected. For example, on the resized 300x300 pixel image the object should cover ~60x60 pixels.
![](/docs-assets/images/small_image.png)

<ul>
<li>
<p>
<strong>Object Type</strong> The model is optimized for photographs of objects in the real world. It is unlikely to work well for X-rays, hand drawings, scanned documents, receipts, etc.
</p>
</li>
<li>
<p>
<strong>Object Environment</strong> The training data should be as close as possible to the data on which predictions are to be made. For example, if your use case involves blurry and low-resolution images (such as from a security camera), your training data should be composed of blurry, low-resolution images. In general, you should also consider providing multiple angles, resolutions, and backgrounds for your training images.
</p>
</li>
<li>
<p>
<strong>Difficulty</strong> The model generally can't predict labels that humans can't assign. So, if a human can't be trained to assign labels by looking at the image for 1-2 seconds, the model likely can't be trained to do it either.
</p>
</li>
<li>
<p>
<strong>Label Count</strong> We recommend at least 50 labels per object category for a usable model, but using 100s or 1000s would provide better results.
</p>
</li>
<li>
<p>
<strong>Image Dimensions</strong> The model resizes the image to 300x300 pixels, so keep that in mind when training the model with images where one dimension is much longer than the other.
{% include responsive.html image="shrink_image.png" %}
</p>
</li>
<li>
<p>
<strong>Object Size</strong> The object of interest's size should be at least ~5% of the image area for it to be detected. For example, on the resized 300x300 pixel image the object should cover ~60x60 pixels (see the quick check after this list).
{% include responsive.html image="small_image.png" %}
</p>
</li>
</ul>
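As a quick back-of-the-envelope check on that object-size guideline (our arithmetic, not from the original docs):

```
area of the resized image:  300 px × 300 px = 90,000 px²
5% of that area:            0.05 × 90,000   =  4,500 px²
equivalent square:          √4,500          ≈  67 px per side
```

which lands in the same ballpark as the ~60x60 pixel figure quoted above.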

## Set up Cloud Annotations
To use Cloud Annotations, just navigate to [cloud.annotations.ai](https://cloud.annotations.ai) and click **Continue with IBM Cloud**.
![](/docs-assets/images/0a.CA_login.png)

{% include responsive.html image="0a.CA_login.png" %}

Once logged in, if you don't have an object storage instance, you will be prompted to create one. Click **Get started** to be directed to IBM Cloud, where you can create a free object storage instance.
![](/docs-assets/images/1a.CA_no-object-storage.png)

{% include responsive.html image="1a.CA_no-object-storage.png" %}

You might need to log in to IBM Cloud again to create a resource.
![](/docs-assets/images/2a.IBM_login-to-create-resource.png)

{% include responsive.html image="2a.IBM_login-to-create-resource.png" %}

Choose a pricing plan and click **Create**, then **Confirm** on the following popup.
![](/docs-assets/images/3a.IBM_create-object-storage.png)

{% include responsive.html image="3a.IBM_create-object-storage.png" %}

Once your object storage instance has been provisioned, navigate back to [cloud.annotations.ai](https://cloud.annotations.ai) and refresh the page.

The files and annotations will be stored in a **bucket**. You can create one by clicking **Start a new project**.
![](/docs-assets/images/4a.CA_create-bucket.png)

{% include responsive.html image="4a.CA_create-bucket.png" %}

Give the bucket a unique name.
![](/docs-assets/images/5.CA_name-bucket.png)

{% include responsive.html image="5.CA_name-bucket.png" %}

## [Object detection](#object-detection) or [classification](#classification)?
A classification model can tell you what an image is and how confident it is about its decision.
Expand All @@ -55,6 +81,3 @@ If an object detection model gives us this extra information, why would we use c
* **Labor Cost** An object detection model requires humans to draw boxes around every object to train. A classification model only requires a simple label for each image.
* **Training Cost** It can take longer and require more expensive hardware to train an object detection model.
* **Inference Cost** An object detection model can be much slower than real-time to process an image on low-end hardware.



10 changes: 6 additions & 4 deletions docs/_guides/training-via-gui.md
@@ -4,15 +4,17 @@ title: Training via GUI
---

Once you have labeled a sufficient number of photos, click **Train Model**. A dialog will appear, prompting you to select your Watson Machine Learning instance. If none are available, it will guide you to create a new one. (You may need to refresh your Cloud Annotations window for the new instance to appear, but don't worry, your labels will be saved.)
![](/docs-assets/images/wml_dialog.png)

{% include responsive.html image="wml_dialog.png" %}

Click **Train**. Your training job will now be added to the queue.
![](/docs-assets/images/training_queue.png)

You will see it listed as *pending* until the training starts (this could take several minutes).
![](/docs-assets/images/pending_training.png)

{% include responsive.html image="pending_training.png" %}

Once your training job starts, the status will change and you will see a graph of the training steps running.
![](/docs-assets/images/training_steps.png)

{% include responsive.html image="training_steps.png" %}

Once the job is completed, you're all set!
