"This demo will take you through the steps of running an \"out-of-the-box\" detection model on a\ncollection of images.\n\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Create the data directory\n~~~~~~~~~~~~~~~~~~~~~~~~~\nThe snippet shown below will create the ``data`` directory where all our data will be stored. The\ncode will create a directory structure as shown bellow:\n\n.. code-block:: bash\n\n data\n\u251c\u2500\u2500 images\n\u2514\u2500\u2500 models\n\nwhere the ``images`` folder will contain the downlaoded test images, while ``models`` will\ncontain the downloaded models.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import os\n\nDATA_DIR = os.path.join(os.getcwd(), 'data')\nIMAGES_DIR = os.path.join(DATA_DIR, 'images')\nMODELS_DIR = os.path.join(DATA_DIR, 'models')\nfor dir in [DATA_DIR, IMAGES_DIR, MODELS_DIR]:\n if not os.path.exists(dir):\n os.mkdir(dir)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Download the test images\n~~~~~~~~~~~~~~~~~~~~~~~~\nFirst we will download the images that we will use throughout this tutorial. The code snippet\nshown bellow will download the test images from the `TensorFlow Model Garden <https://github.com/tensorflow/models/tree/master/research/object_detection/test_images>`_\nand save them inside the ``data/images`` folder.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import urllib.request\n\nIMAGE_FILENAMES = ['image1.jpg', 'image2.jpg']\nIMAGES_DOWNLOAD_BASE = \\\n 'https://raw.githubusercontent.com/tensorflow/models/master/research/object_detection/test_images/'\n\nfor image_filename in IMAGE_FILENAMES:\n\n image_path = os.path.join(IMAGES_DIR, image_filename)\n\n # Download image\n if not os.path.exists(image_path):\n print('Downloading {}... '.format(image_filename), end='')\n urllib.request.urlretrieve(IMAGES_DOWNLOAD_BASE + image_filename, image_path)\n print('Done')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Download the model\n~~~~~~~~~~~~~~~~~~\nThe code snippet shown below is used to download the object detection model checkpoint file,\nas well as the labels file (.pbtxt) which contains a list of strings used to add the correct\nlabel to each detection (e.g. person). Once downloaded the files will be stored under the\n``data/models`` folder.\n\nThe particular detection algorithm we will use is the `CenterNet HourGlass104 1024x1024`. More\nmodels can be found in the `TensorFlow 2 Detection Model Zoo <https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md>`_.\nTo use a different model you will need the URL name of the specific model. This can be done as\nfollows:\n\n1. Right click on the `Model name` of the model you would like to use;\n2. Click on `Copy link address` to copy the download link of the model;\n3. Paste the link in a text editor of your choice. You should observe a link similar to ``download.tensorflow.org/models/object_detection/tf2/YYYYYYYY/XXXXXXXXX.tar.gz``;\n4. Copy the ``XXXXXXXXX`` part of the link and use it to replace the value of the ``MODEL_NAME`` variable in the code shown below;\n5. Copy the ``YYYYYYYY`` part of the link and use it to replace the value of the ``MODEL_DATE`` variable in the code shown below.\n\nFor example, the download link for the model used below is: ``download.tensorflow.org/models/object_detection/tf2/20200711/centernet_hg104_1024x1024_coco17_tpu-32.tar.gz``\n\n"
"Load label map data (for plotting)\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nLabel maps correspond index numbers to category names, so that when our convolution network\npredicts `5`, we know that this corresponds to `airplane`. Here we use internal utility\nfunctions, but anything that returns a dictionary mapping integers to appropriate string labels\nwould be fine.\n\n"
"Putting everything together\n~~~~~~~~~~~~~~~~~~~~~~~~~~~\nThe code shown below loads an image, runs it through the detection model and visualizes the\ndetection results, including the keypoints.\n\nNote that this will take a long time (several minutes) the first time you run this code due to\ntf.function's trace-compilation --- on subsequent runs (e.g. on new images), things will be\nfaster.\n\nHere are some simple things to try out if you are curious:\n\n* Modify some of the input images and see if detection still works. Some simple things to try out here (just uncomment the relevant portions of code) include flipping the image horizontally, or converting to grayscale (note that we still expect the input image to have 3 channels).\n* Print out `detections['detection_boxes']` and try to match the box locations to the boxes in the image. Notice that coordinates are given in normalized form (i.e., in the interval [0, 1]).\n* Set ``min_score_thresh`` to other values (between 0 and 1) to allow more detections in or to filter out more detections.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import numpy as np\nfrom six import BytesIO\nfrom PIL import Image\nimport matplotlib.pyplot as plt\nimport warnings\nwarnings.filterwarnings('ignore') # Suppress Matplotlib warnings\n\ndef load_image_into_numpy_array(path):\n \"\"\"Load an image from file into a numpy array.\n\n Puts image into numpy array to feed into tensorflow graph.\n Note that by convention we put it into a numpy array with shape\n (height, width, channels), where channels=3 for RGB.\n\n Args:\n path: the file path to the image\n\n Returns:\n uint8 numpy array with shape (img_height, img_width, 3)\n \"\"\"\n img_data = tf.io.gfile.GFile(path, 'rb').read()\n image = Image.open(BytesIO(img_data))\n (im_width, im_height) = image.size\n return np.array(image.getdata()).reshape(\n (im_height, im_width, 3)).astype(np.uint8)\n\n\nfor image_filename in IMAGE_FILENAMES:\n\n print('Running inference for {}... '.format(image_filename), end='')\n\n image_path = os.path.join(IMAGES_DIR, image_filename)\n image_np = load_image_into_numpy_array(image_path)\n\n # Things to try:\n # Flip horizontally\n # image_np = np.fliplr(image_np).copy()\n\n # Convert image to grayscale\n # image_np = np.tile(\n # np.mean(image_np, 2, keepdims=True), (1, 1, 3)).astype(np.uint8)\n\n input_tensor = tf.convert_to_tensor(\n np.expand_dims(image_np, 0), dtype=tf.float32)\n detections, predictions_dict, shapes = detect_fn(input_tensor)\n\n label_id_offset = 1\n image_np_with_detections = image_np.copy()\n\n viz_utils.visualize_boxes_and_labels_on_image_array(\n image_np_with_detections,\n detections['detection_boxes'][0].numpy(),\n (detections['detection_classes'][0].numpy() + label_id_offset).astype(int),\n detections['detection_scores'][0].numpy(),\n category_index,\n use_normalized_coordinates=True,\n max_boxes_to_draw=200,\n min_score_thresh=.30,\n agnostic_mode=False)\n\n plt.figure()\n plt.imshow(image_np_with_detections)\n print('Done')\nplt.show()\n\n# sphinx_gallery_thumbnail_number = 2"
]
}