Update keras cv docs to the newest release #920

Merged
merged 20 commits on Jun 21, 2022
Changes from 15 commits
32 changes: 19 additions & 13 deletions guides/keras_cv/coco_metrics.py
@@ -22,26 +22,26 @@
"""
## Input format

KerasCV COCO metrics require a specific input format.
All KerasCV components that process bounding boxes, including COCO metrics, require a
`bounding_box_format` parameter. This parameter tells the components what format
your bounding boxes are in. While this guide uses the `xyxy` format, a full
list of supported formats is available in
[the bounding_box API documentation](/api/keras_cv/bounding_box/formats).
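
For example, converting a few boxes from `xyxy` to `xywh` with the
`keras_cv.bounding_box` utilities might look like the sketch below (the
`source`/`target` argument names of `convert_format` are assumptions here):

```python
import tensorflow as tf
import keras_cv

boxes = tf.constant([[10.0, 20.0, 110.0, 220.0]])  # one box in `xyxy`
boxes_xywh = keras_cv.bounding_box.convert_format(
    boxes, source="xyxy", target="xywh"
)
```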

The metrics expect `y_true` to be a `float` Tensor with the shape `[batch,
num_images, num_boxes, 5]`. The final axis stores the locational and class
information for each specific bounding box. The dimensions in order are: `[left,
top, right, bottom, class]`.

The metrics expect `y_pred` to be a `float` Tensor with the shape `[batch,
num_images, num_boxes, 6]`. The final axis stores the locational and class
information for each specific bounding box. The dimensions in order are: `[left,
top, right, bottom, class, confidence]`.

The metrics expect `y_true` to be a `float` Tensor with the shape `[batch,
num_images, num_boxes, 5]`, with the ordering of the last axis determined by the
provided format. The same is true of `y_pred`, except that an additional `confidence`
axis must be provided.

Because each image may have a different number of bounding boxes, the `num_boxes`
dimension may differ from image to image. KerasCV works around this by letting you
either pass a `RaggedTensor` as input to the KerasCV COCO metrics, or pad unused
bounding boxes with `-1`.
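
For example, here is a short sketch (plain TensorFlow, toy values) of turning
per-image box lists of different lengths into a dense tensor padded with `-1`:

```python
import tensorflow as tf

# Two images: the first has two boxes, the second has one.
ragged_boxes = tf.ragged.constant(
    [
        [[0.0, 0.0, 10.0, 10.0, 1.0], [5.0, 5.0, 20.0, 20.0, 2.0]],
        [[0.0, 0.0, 30.0, 30.0, 3.0]],
    ],
    ragged_rank=1,
)
# Pad the missing boxes with -1 to get a dense [2, 2, 5] tensor.
dense_boxes = ragged_boxes.to_tensor(default_value=-1)
```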

Utility functions to manipulate bounding boxes, transform between formats, and
pad bounding box Tensors with `-1s` are available at
[`keras_cv.bounding_box`](https://github.com/keras-team/keras-cv/blob/master/keras_cv/bounding_box).
pad bounding box Tensors with `-1s` are available from the
[`keras_cv.bounding_box`](https://github.com/keras-team/keras-cv/blob/master/keras_cv/bounding_box)
package.

"""

@@ -67,7 +67,9 @@
from tensorflow import keras

# only consider boxes with areas less than a 32x32 square.
metric = keras_cv.metrics.COCORecall(class_ids=[1, 2, 3], area_range=(0, 32**2))
metric = keras_cv.metrics.COCORecall(
bounding_box_format="xyxy", class_ids=[1, 2, 3], area_range=(0, 32**2)
)

"""
2.) Create Some Bounding Boxes:
@@ -130,7 +132,11 @@
"""

recall = keras_cv.metrics.COCORecall(
max_detections=100, class_ids=[1], area_range=(0, 64**2), name="coco_recall"
bounding_box_format="xyxy",
max_detections=100,
class_ids=[1],
area_range=(0, 64**2),
name="coco_recall",
)
model.compile(metrics=[recall])
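
The same metric object can also be used standalone. A rough sketch with toy
`xyxy` boxes (the tensor shapes here are an assumption based on the input
format described earlier, with one set of boxes per batch entry):

```python
import tensorflow as tf

y_true = tf.constant([[[0.0, 0.0, 10.0, 10.0, 1.0]]])       # [..., left, top, right, bottom, class]
y_pred = tf.constant([[[0.0, 0.0, 10.0, 10.0, 1.0, 0.9]]])  # [..., class, confidence]

recall.update_state(y_true, y_pred)
print(recall.result().numpy())
```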

6 changes: 6 additions & 0 deletions guides/keras_cv/custom_image_augmentations.py
@@ -430,6 +430,12 @@ def __init__(self, **kwargs):


"""

Additionally, be sure to accept `**kwargs` in your `augment_*` methods to ensure
forward compatibility. KerasCV will add new label types in the future, and
if you do not include a `**kwargs` argument, your augmentation layers will not be
forward compatible.
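
As a sketch of what this looks like (the `RandomBlueTint` name is just an
illustrative placeholder; the base class and method names follow this guide):

```python
import tensorflow as tf
import keras_cv

class RandomBlueTint(keras_cv.layers.BaseImageAugmentationLayer):
    def augment_image(self, image, transformation=None, **kwargs):
        # Shift the blue channel; unknown keyword arguments are ignored.
        return image + tf.constant([0.0, 0.0, 25.0])

    def augment_label(self, label, transformation=None, **kwargs):
        # Accept **kwargs even when the label is returned unchanged.
        return label
```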

## Conclusion and next steps

KerasCV offers a standard set of APIs to streamline the process of implementing your
16 changes: 6 additions & 10 deletions guides/keras_cv/cut_mix_mix_up_and_rand_augment.py
@@ -13,7 +13,8 @@
pipelines for image classification and object detection tasks. KerasCV offers a wide
suite of preprocessing layers implementing common data augmentation techniques.

Perhaps three of the most useful layers are `CutMix`, `MixUp`, and `RandAugment`. These
Perhaps three of the most useful layers are `keras_cv.layers.CutMix`,
`keras_cv.layers.MixUp`, and `keras_cv.layers.RandAugment`. These
layers are used in nearly all state-of-the-art image classification pipelines.
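
As a quick sketch with toy data (these layers consume dicts of batched images
and one-hot labels, matching the `tf.data` pipeline used later in this guide):

```python
import tensorflow as tf
import keras_cv

batch = {
    "images": tf.random.uniform((8, 224, 224, 3), maxval=255.0),
    "labels": tf.one_hot(tf.random.uniform((8,), maxval=10, dtype=tf.int32), 10),
}
batch = keras_cv.layers.CutMix()(batch, training=True)
batch = keras_cv.layers.MixUp()(batch, training=True)
```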

This guide will show you how to compose these layers into your own data
@@ -70,7 +71,7 @@
num_classes = dataset_info.features["label"].num_classes


def prepare(image, label):
def to_dict(image, label):
    image = tf.image.resize(image, IMAGE_SIZE)
    image = tf.cast(image, tf.float32)
    label = tf.one_hot(label, num_classes)
@@ -214,7 +215,8 @@ def cut_mix_and_mix_up(samples):
## Customizing your augmentation pipeline

Perhaps you want to exclude an augmentation from `RandAugment`, or perhaps you want to
include the `GridMask()` as an option alongside the default `RandAugment` augmentations.
include the `keras_cv.layers.GridMask` as an option alongside the default `RandAugment`
augmentations.

KerasCV allows you to construct production grade custom data augmentation pipelines using
the `keras_cv.layers.RandomAugmentationPipeline` layer. This class operates similarly to
@@ -246,7 +248,7 @@ def cut_mix_and_mix_up(samples):
]

"""
Next, let's add `GridMask` to our layers:
Next, let's add `keras_cv.layers.GridMask` to our layers:
"""

layers = layers + [keras_cv.layers.GridMask()]
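
A sketch of assembling these layers into a pipeline (the
`augmentations_per_image` argument name is an assumption about the
`RandomAugmentationPipeline` constructor):

```python
pipeline = keras_cv.layers.RandomAugmentationPipeline(
    layers=layers, augmentations_per_image=3
)
```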
@@ -263,12 +265,6 @@
Let's check out the results!
"""


def apply_pipeline(inputs):
inputs["images"] = pipeline(inputs["images"])
return inputs


train_dataset = load_dataset().map(apply_pipeline, num_parallel_calls=AUTOTUNE)
visualize_dataset(train_dataset, title="After custom pipeline")

3 changes: 1 addition & 2 deletions scripts/autogen.py
@@ -258,7 +258,6 @@ def add_example(self, path, working_dir=None):
)
open(md_path, "w").write(md_content)


def add_guide(self, name, working_dir=None):
"""e.g. add_guide('functional_api')"""

@@ -948,7 +947,7 @@ def generate_md_toc(entries, url, depth=2):
title=title, full_url=full_url
)
if children:
    assert path.endswith("/")
    assert path.endswith("/"), f"{path} should end with /"
    for child in children:
        if child.get("skip_from_toc", False):
            continue
40 changes: 38 additions & 2 deletions scripts/cv_api_master.py
@@ -8,6 +8,11 @@
"title": "AutoContrast layer",
"generate": ["keras_cv.layers.AutoContrast"],
},
{
"path": "aug_mix",
"title": "AugMix layer",
"generate": ["keras_cv.layers.AugMix"],
},
{
"path": "channel_shuffle",
"title": "ChannelShuffle layer",
@@ -64,7 +69,7 @@
"generate": ["keras_cv.layers.RandomChannelShift"],
},
{
"path": "rancom_color_degeneration",
"path": "random_color_degeneration",
"title": "RandomColorDegeneration layer",
"generate": ["keras_cv.layers.RandomColorDegeneration"],
},
@@ -101,6 +106,37 @@
],
}

BOUNDING_BOX_FORMATS = {
"path": "formats",
"title": "Bounding box formats",
"generate": [
"keras_cv.bounding_box.CENTER_XYWH",
"keras_cv.bounding_box.XYWH",
"keras_cv.bounding_box.XYXY",
"keras_cv.bounding_box.REL_XYXY",
],
}

BOUNDING_BOX_UTILS = {
"path": "utils/",
"title": "Bounding box utilities",
"toc": True,
"children": [
{
"path": "convert_format",
"title": "Convert bounding box formats",
"generate": ["keras_cv.bounding_box.convert_format"],
},
],
}

BOUNDING_BOX_MASTER = {
"path": "bounding_box/",
"title": "Bounding box formats and utilities",
"toc": True,
"children": [BOUNDING_BOX_FORMATS, BOUNDING_BOX_UTILS],
}

REGULARIZATION_MASTER = {
"path": "regularization/",
"title": "Regularization layers",
@@ -150,5 +186,5 @@
"path": "keras_cv/",
"title": "KerasCV",
"toc": True,
"children": [LAYERS_MASTER, METRICS_MASTER],
"children": [LAYERS_MASTER, METRICS_MASTER, BOUNDING_BOX_MASTER],
}
3 changes: 2 additions & 1 deletion scripts/docstrings.py
@@ -115,6 +115,7 @@ def import_object(string: str):
try:
    last_object_got = importlib.import_module(".".join(seen_names))
except ModuleNotFoundError:
    assert last_object_got is not None, f"Failed to import path {string}"
    last_object_got = getattr(last_object_got, name)
return last_object_got

@@ -127,7 +128,7 @@ def make_source_link(cls, project_url):

base_module = cls.__module__.split(".")[0]
project_url = project_url[base_module]
assert project_url.endswith("/")
assert project_url.endswith("/"), f"{base_module} not found"
project_url_version = project_url.split("/")[-2].replace("v", "")
module_version = importlib.import_module(base_module).__version__
if module_version != project_url_version:
10 changes: 10 additions & 0 deletions templates/keras_cv/bounding_box/index.md
@@ -0,0 +1,10 @@
# KerasCV Bounding Boxes

All KerasCV components that process bounding boxes require a `bounding_box_format`
argument. This argument allows you to seamlessly integrate KerasCV components into
your own workflows while preserving proper behavior of the components themselves.

The bounding box formats supported in KerasCV
[are listed in the API docs](/api/keras_cv/bounding_box/formats).
If a format you would like to use is missing,
[feel free to open a GitHub issue on KerasCV](https://github.com/keras-team/keras-cv/issues)!
40 changes: 40 additions & 0 deletions templates/keras_cv/index.md
Original file line number Diff line number Diff line change
@@ -25,6 +25,46 @@ pip install keras-cv --upgrade
You can also check out other versions in our
[GitHub repository](https://github.com/keras-team/keras-cv/releases).

## Quick Introduction

Create a preprocessing pipeline:

```python
import keras_cv
from tensorflow import keras

preprocessing_model = keras.Sequential([
    keras_cv.layers.RandAugment(value_range=(0, 255)),
    keras_cv.layers.CutMix(),
    keras_cv.layers.MixUp(),
], name="preprocessing_model")
```

Augment a `tf.data.Dataset`:

```python
dataset = dataset.map(lambda images, labels: {"images": images, "labels": labels})
dataset = dataset.map(preprocessing_model)
dataset = dataset.map(lambda inputs: (inputs["images"], inputs["labels"]))
```

Create a model:

```python
densenet = keras_cv.models.DenseNet(
    include_rescaling=True,
    include_top=True,
    num_classes=102,
)
# A loss is required for training; categorical cross-entropy matches the
# one-hot labels produced by the preprocessing pipeline above.
densenet.compile(
    loss='categorical_crossentropy',
    optimizer='adam',
    metrics=['accuracy'],
)
```

Train your model:

```python
densenet.fit(dataset)
```

---
## Citing KerasCV
