diff --git a/0.11./_downloads/07fcc19ba03226cd3d83d4e40ec44385/auto_examples_python.zip b/0.11./_downloads/07fcc19ba03226cd3d83d4e40ec44385/auto_examples_python.zip deleted file mode 100644 index 386aeea972b..00000000000 Binary files a/0.11./_downloads/07fcc19ba03226cd3d83d4e40ec44385/auto_examples_python.zip and /dev/null differ diff --git a/0.11./_downloads/0a0ea3da81f0782f42d1ded74c1acb75/plot_video_api.ipynb b/0.11./_downloads/0a0ea3da81f0782f42d1ded74c1acb75/plot_video_api.ipynb deleted file mode 100644 index afc4b7ffe50..00000000000 --- a/0.11./_downloads/0a0ea3da81f0782f42d1ded74c1acb75/plot_video_api.ipynb +++ /dev/null @@ -1,309 +0,0 @@ -{ - "cells": [ - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "%matplotlib inline" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "\n# Video API\n\nThis example illustrates some of the APIs that torchvision offers for\nvideos, together with the examples on how to build datasets and more.\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## 1. Introduction: building a new video object and examining the properties\nFirst we select a video to test the object out. For the sake of argument\nwe're using one from kinetics400 dataset.\nTo create it, we need to define the path and the stream we want to use.\n\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Chosen video statistics:\n\n- WUzgd7C1pWA.mp4\n - source:\n - kinetics-400\n - video:\n - H-264\n - MPEG-4 AVC (part 10) (avc1)\n - fps: 29.97\n - audio:\n - MPEG AAC audio (mp4a)\n - sample rate: 48K Hz\n\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "import torch\nimport torchvision\nfrom torchvision.datasets.utils import download_url\n\n# Download the sample video\ndownload_url(\n \"https://github.com/pytorch/vision/blob/main/test/assets/videos/WUzgd7C1pWA.mp4?raw=true\",\n \".\",\n \"WUzgd7C1pWA.mp4\"\n)\nvideo_path = \"./WUzgd7C1pWA.mp4\"" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Streams are defined in a similar fashion as torch devices. We encode them as strings in a form\nof ``stream_type:stream_id`` where ``stream_type`` is a string and ``stream_id`` a long int.\nThe constructor accepts passing a ``stream_type`` only, in which case the stream is auto-discovered.\nFirstly, let's get the metadata for our particular video:\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "stream = \"video\"\nvideo = torchvision.io.VideoReader(video_path, stream)\nvideo.get_metadata()" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Here we can see that video has two streams - a video and an audio stream.\nCurrently available stream types include ['video', 'audio'].\nEach descriptor consists of two parts: stream type (e.g. 'video') and a unique stream id\n(which are determined by video encoding).\nIn this way, if the video container contains multiple streams of the same type,\nusers can access the one they want.\nIf only stream type is passed, the decoder auto-detects first stream of that type and returns it.\n\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Let's read all the frames from the video stream. 
By default, the return value of\n``next(video_reader)`` is a dict containing the following fields.\n\nThe return fields are:\n\n- ``data``: containing a torch.tensor\n- ``pts``: containing a float timestamp of this particular frame\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "metadata = video.get_metadata()\nvideo.set_current_stream(\"audio\")\n\nframes = [] # we are going to save the frames here.\nptss = [] # pts is a presentation timestamp in seconds (float) of each frame\nfor frame in video:\n frames.append(frame['data'])\n ptss.append(frame['pts'])\n\nprint(\"PTS for first five frames \", ptss[:5])\nprint(\"Total number of frames: \", len(frames))\napprox_nf = metadata['audio']['duration'][0] * metadata['audio']['framerate'][0]\nprint(\"Approx total number of datapoints we can expect: \", approx_nf)\nprint(\"Read data size: \", frames[0].size(0) * len(frames))" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "But what if we only want to read certain time segment of the video?\nThat can be done easily using the combination of our ``seek`` function, and the fact that each call\nto next returns the presentation timestamp of the returned frame in seconds.\n\nGiven that our implementation relies on python iterators,\nwe can leverage itertools to simplify the process and make it more pythonic.\n\nFor example, if we wanted to read ten frames from second second:\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "import itertools\nvideo.set_current_stream(\"video\")\n\nframes = [] # we are going to save the frames here.\n\n# We seek into a second second of the video and use islice to get 10 frames since\nfor frame, pts in itertools.islice(video.seek(2), 10):\n frames.append(frame)\n\nprint(\"Total number of frames: \", len(frames))" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Or if we wanted to read from 2nd to 5th second,\nWe seek into a second second of the video,\nthen we utilize the itertools takewhile to get the\ncorrect number of frames:\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "video.set_current_stream(\"video\")\nframes = [] # we are going to save the frames here.\nvideo = video.seek(2)\n\nfor frame in itertools.takewhile(lambda x: x['pts'] <= 5, video):\n frames.append(frame['data'])\n\nprint(\"Total number of frames: \", len(frames))\napprox_nf = (5 - 2) * video.get_metadata()['video']['fps'][0]\nprint(\"We can expect approx: \", approx_nf)\nprint(\"Tensor size: \", frames[0].size())" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## 2. 
Building a sample read_video function\nWe can utilize the methods above to build the read video function that follows\nthe same API to the existing ``read_video`` function.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "def example_read_video(video_object, start=0, end=None, read_video=True, read_audio=True):\n if end is None:\n end = float(\"inf\")\n if end < start:\n raise ValueError(\n \"end time should be larger than start time, got \"\n \"start time={} and end time={}\".format(start, end)\n )\n\n video_frames = torch.empty(0)\n video_pts = []\n if read_video:\n video_object.set_current_stream(\"video\")\n frames = []\n for frame in itertools.takewhile(lambda x: x['pts'] <= end, video_object.seek(start)):\n frames.append(frame['data'])\n video_pts.append(frame['pts'])\n if len(frames) > 0:\n video_frames = torch.stack(frames, 0)\n\n audio_frames = torch.empty(0)\n audio_pts = []\n if read_audio:\n video_object.set_current_stream(\"audio\")\n frames = []\n for frame in itertools.takewhile(lambda x: x['pts'] <= end, video_object.seek(start)):\n frames.append(frame['data'])\n video_pts.append(frame['pts'])\n if len(frames) > 0:\n audio_frames = torch.cat(frames, 0)\n\n return video_frames, audio_frames, (video_pts, audio_pts), video_object.get_metadata()\n\n\n# Total number of frames should be 327 for video and 523264 datapoints for audio\nvf, af, info, meta = example_read_video(video)\nprint(vf.size(), af.size())" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## 3. Building an example randomly sampled dataset (can be applied to training dataest of kinetics400)\nCool, so now we can use the same principle to make the sample dataset.\nWe suggest trying out iterable dataset for this purpose.\nHere, we are going to build an example dataset that reads randomly selected 10 frames of video.\n\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Make sample dataset\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "import os\nos.makedirs(\"./dataset\", exist_ok=True)\nos.makedirs(\"./dataset/1\", exist_ok=True)\nos.makedirs(\"./dataset/2\", exist_ok=True)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Download the videos\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "from torchvision.datasets.utils import download_url\ndownload_url(\n \"https://github.com/pytorch/vision/blob/main/test/assets/videos/WUzgd7C1pWA.mp4?raw=true\",\n \"./dataset/1\", \"WUzgd7C1pWA.mp4\"\n)\ndownload_url(\n \"https://github.com/pytorch/vision/blob/main/test/assets/videos/RATRACE_wave_f_nm_np1_fr_goo_37.avi?raw=true\",\n \"./dataset/1\",\n \"RATRACE_wave_f_nm_np1_fr_goo_37.avi\"\n)\ndownload_url(\n \"https://github.com/pytorch/vision/blob/main/test/assets/videos/SOX5yA1l24A.mp4?raw=true\",\n \"./dataset/2\",\n \"SOX5yA1l24A.mp4\"\n)\ndownload_url(\n \"https://github.com/pytorch/vision/blob/main/test/assets/videos/v_SoccerJuggling_g23_c01.avi?raw=true\",\n \"./dataset/2\",\n \"v_SoccerJuggling_g23_c01.avi\"\n)\ndownload_url(\n \"https://github.com/pytorch/vision/blob/main/test/assets/videos/v_SoccerJuggling_g24_c01.avi?raw=true\",\n \"./dataset/2\",\n \"v_SoccerJuggling_g24_c01.avi\"\n)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ 
- "Housekeeping and utilities\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "import os\nimport random\n\nfrom torchvision.datasets.folder import make_dataset\nfrom torchvision import transforms as t\n\n\ndef _find_classes(dir):\n classes = [d.name for d in os.scandir(dir) if d.is_dir()]\n classes.sort()\n class_to_idx = {cls_name: i for i, cls_name in enumerate(classes)}\n return classes, class_to_idx\n\n\ndef get_samples(root, extensions=(\".mp4\", \".avi\")):\n _, class_to_idx = _find_classes(root)\n return make_dataset(root, class_to_idx, extensions=extensions)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "We are going to define the dataset and some basic arguments.\nWe assume the structure of the FolderDataset, and add the following parameters:\n\n- ``clip_len``: length of a clip in frames\n- ``frame_transform``: transform for every frame individually\n- ``video_transform``: transform on a video sequence\n\n

<div class=\"alert alert-info\"><h4>Note</h4><p>We actually add epoch size as using :func:`~torch.utils.data.IterableDataset`\n class allows us to naturally oversample clips or images from each video if needed.</p></div>
\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "class RandomDataset(torch.utils.data.IterableDataset):\n def __init__(self, root, epoch_size=None, frame_transform=None, video_transform=None, clip_len=16):\n super(RandomDataset).__init__()\n\n self.samples = get_samples(root)\n\n # Allow for temporal jittering\n if epoch_size is None:\n epoch_size = len(self.samples)\n self.epoch_size = epoch_size\n\n self.clip_len = clip_len\n self.frame_transform = frame_transform\n self.video_transform = video_transform\n\n def __iter__(self):\n for i in range(self.epoch_size):\n # Get random sample\n path, target = random.choice(self.samples)\n # Get video object\n vid = torchvision.io.VideoReader(path, \"video\")\n metadata = vid.get_metadata()\n video_frames = [] # video frame buffer\n\n # Seek and return frames\n max_seek = metadata[\"video\"]['duration'][0] - (self.clip_len / metadata[\"video\"]['fps'][0])\n start = random.uniform(0., max_seek)\n for frame in itertools.islice(vid.seek(start), self.clip_len):\n video_frames.append(self.frame_transform(frame['data']))\n current_pts = frame['pts']\n # Stack it into a tensor\n video = torch.stack(video_frames, 0)\n if self.video_transform:\n video = self.video_transform(video)\n output = {\n 'path': path,\n 'video': video,\n 'target': target,\n 'start': start,\n 'end': current_pts}\n yield output" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Given a path of videos in a folder structure, i.e:\n\n- dataset\n - class 1\n - file 0\n - file 1\n - ...\n - class 2\n - file 0\n - file 1\n - ...\n - ...\n\nWe can generate a dataloader and test the dataset.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "transforms = [t.Resize((112, 112))]\nframe_transform = t.Compose(transforms)\n\ndataset = RandomDataset(\"./dataset\", epoch_size=None, frame_transform=frame_transform)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "from torch.utils.data import DataLoader\nloader = DataLoader(dataset, batch_size=12)\ndata = {\"video\": [], 'start': [], 'end': [], 'tensorsize': []}\nfor batch in loader:\n for i in range(len(batch['path'])):\n data['video'].append(batch['path'][i])\n data['start'].append(batch['start'][i].item())\n data['end'].append(batch['end'][i].item())\n data['tensorsize'].append(batch['video'][i].size())\nprint(data)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## 4. 
Data Visualization\nExample of visualized video\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "import matplotlib.pylab as plt\n\nplt.figure(figsize=(12, 12))\nfor i in range(16):\n plt.subplot(4, 4, i + 1)\n plt.imshow(batch[\"video\"][0, i, ...].permute(1, 2, 0))\n plt.axis(\"off\")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Cleanup the video and dataset:\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "import os\nimport shutil\nos.remove(\"./WUzgd7C1pWA.mp4\")\nshutil.rmtree(\"./dataset\")" - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.7.11" - } - }, - "nbformat": 4, - "nbformat_minor": 0 -} \ No newline at end of file diff --git a/0.11./_downloads/1031091ece7f376de0a2c941f5c11f30/plot_visualization_utils.py b/0.11./_downloads/1031091ece7f376de0a2c941f5c11f30/plot_visualization_utils.py deleted file mode 100644 index daa22fe8fa6..00000000000 --- a/0.11./_downloads/1031091ece7f376de0a2c941f5c11f30/plot_visualization_utils.py +++ /dev/null @@ -1,368 +0,0 @@ -""" -======================= -Visualization utilities -======================= - -This example illustrates some of the utilities that torchvision offers for -visualizing images, bounding boxes, and segmentation masks. -""" - -# sphinx_gallery_thumbnail_path = "../../gallery/assets/visualization_utils_thumbnail.png" - -import torch -import numpy as np -import matplotlib.pyplot as plt - -import torchvision.transforms.functional as F - - -plt.rcParams["savefig.bbox"] = 'tight' - - -def show(imgs): - if not isinstance(imgs, list): - imgs = [imgs] - fix, axs = plt.subplots(ncols=len(imgs), squeeze=False) - for i, img in enumerate(imgs): - img = img.detach() - img = F.to_pil_image(img) - axs[0, i].imshow(np.asarray(img)) - axs[0, i].set(xticklabels=[], yticklabels=[], xticks=[], yticks=[]) - - -#################################### -# Visualizing a grid of images -# ---------------------------- -# The :func:`~torchvision.utils.make_grid` function can be used to create a -# tensor that represents multiple images in a grid. This util requires a single -# image of dtype ``uint8`` as input. - -from torchvision.utils import make_grid -from torchvision.io import read_image -from pathlib import Path - -dog1_int = read_image(str(Path('assets') / 'dog1.jpg')) -dog2_int = read_image(str(Path('assets') / 'dog2.jpg')) - -grid = make_grid([dog1_int, dog2_int, dog1_int, dog2_int]) -show(grid) - -#################################### -# Visualizing bounding boxes -# -------------------------- -# We can use :func:`~torchvision.utils.draw_bounding_boxes` to draw boxes on an -# image. We can set the colors, labels, width as well as font and font size. -# The boxes are in ``(xmin, ymin, xmax, ymax)`` format. 
- -from torchvision.utils import draw_bounding_boxes - - -boxes = torch.tensor([[50, 50, 100, 200], [210, 150, 350, 430]], dtype=torch.float) -colors = ["blue", "yellow"] -result = draw_bounding_boxes(dog1_int, boxes, colors=colors, width=5) -show(result) - - -##################################### -# Naturally, we can also plot bounding boxes produced by torchvision detection -# models. Here is demo with a Faster R-CNN model loaded from -# :func:`~torchvision.models.detection.fasterrcnn_resnet50_fpn` -# model. You can also try using a RetinaNet with -# :func:`~torchvision.models.detection.retinanet_resnet50_fpn`, an SSDlite with -# :func:`~torchvision.models.detection.ssdlite320_mobilenet_v3_large` or an SSD with -# :func:`~torchvision.models.detection.ssd300_vgg16`. For more details -# on the output of such models, you may refer to :ref:`instance_seg_output`. - -from torchvision.models.detection import fasterrcnn_resnet50_fpn -from torchvision.transforms.functional import convert_image_dtype - - -batch_int = torch.stack([dog1_int, dog2_int]) -batch = convert_image_dtype(batch_int, dtype=torch.float) - -model = fasterrcnn_resnet50_fpn(pretrained=True, progress=False) -model = model.eval() - -outputs = model(batch) -print(outputs) - -##################################### -# Let's plot the boxes detected by our model. We will only plot the boxes with a -# score greater than a given threshold. - -score_threshold = .8 -dogs_with_boxes = [ - draw_bounding_boxes(dog_int, boxes=output['boxes'][output['scores'] > score_threshold], width=4) - for dog_int, output in zip(batch_int, outputs) -] -show(dogs_with_boxes) - -##################################### -# Visualizing segmentation masks -# ------------------------------ -# The :func:`~torchvision.utils.draw_segmentation_masks` function can be used to -# draw segmentation masks on images. Semantic segmentation and instance -# segmentation models have different outputs, so we will treat each -# independently. -# -# .. _semantic_seg_output: -# -# Semantic segmentation models -# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -# -# We will see how to use it with torchvision's FCN Resnet-50, loaded with -# :func:`~torchvision.models.segmentation.fcn_resnet50`. You can also try using -# DeepLabv3 (:func:`~torchvision.models.segmentation.deeplabv3_resnet50`) or -# lraspp mobilenet models -# (:func:`~torchvision.models.segmentation.lraspp_mobilenet_v3_large`). -# -# Let's start by looking at the ouput of the model. Remember that in general, -# images must be normalized before they're passed to a semantic segmentation -# model. - -from torchvision.models.segmentation import fcn_resnet50 - - -model = fcn_resnet50(pretrained=True, progress=False) -model = model.eval() - -normalized_batch = F.normalize(batch, mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)) -output = model(normalized_batch)['out'] -print(output.shape, output.min().item(), output.max().item()) - -##################################### -# As we can see above, the output of the segmentation model is a tensor of shape -# ``(batch_size, num_classes, H, W)``. Each value is a non-normalized score, and -# we can normalize them into ``[0, 1]`` by using a softmax. After the softmax, -# we can interpret each value as a probability indicating how likely a given -# pixel is to belong to a given class. 
-# -# Let's plot the masks that have been detected for the dog class and for the -# boat class: - -sem_classes = [ - '__background__', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', - 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse', 'motorbike', - 'person', 'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor' -] -sem_class_to_idx = {cls: idx for (idx, cls) in enumerate(sem_classes)} - -normalized_masks = torch.nn.functional.softmax(output, dim=1) - -dog_and_boat_masks = [ - normalized_masks[img_idx, sem_class_to_idx[cls]] - for img_idx in range(batch.shape[0]) - for cls in ('dog', 'boat') -] - -show(dog_and_boat_masks) - -##################################### -# As expected, the model is confident about the dog class, but not so much for -# the boat class. -# -# The :func:`~torchvision.utils.draw_segmentation_masks` function can be used to -# plots those masks on top of the original image. This function expects the -# masks to be boolean masks, but our masks above contain probabilities in ``[0, -# 1]``. To get boolean masks, we can do the following: - -class_dim = 1 -boolean_dog_masks = (normalized_masks.argmax(class_dim) == sem_class_to_idx['dog']) -print(f"shape = {boolean_dog_masks.shape}, dtype = {boolean_dog_masks.dtype}") -show([m.float() for m in boolean_dog_masks]) - - -##################################### -# The line above where we define ``boolean_dog_masks`` is a bit cryptic, but you -# can read it as the following query: "For which pixels is 'dog' the most likely -# class?" -# -# .. note:: -# While we're using the ``normalized_masks`` here, we would have -# gotten the same result by using the non-normalized scores of the model -# directly (as the softmax operation preserves the order). -# -# Now that we have boolean masks, we can use them with -# :func:`~torchvision.utils.draw_segmentation_masks` to plot them on top of the -# original images: - -from torchvision.utils import draw_segmentation_masks - -dogs_with_masks = [ - draw_segmentation_masks(img, masks=mask, alpha=0.7) - for img, mask in zip(batch_int, boolean_dog_masks) -] -show(dogs_with_masks) - -##################################### -# We can plot more than one mask per image! Remember that the model returned as -# many masks as there are classes. Let's ask the same query as above, but this -# time for *all* classes, not just the dog class: "For each pixel and each class -# C, is class C the most most likely class?" -# -# This one is a bit more involved, so we'll first show how to do it with a -# single image, and then we'll generalize to the batch - -num_classes = normalized_masks.shape[1] -dog1_masks = normalized_masks[0] -class_dim = 0 -dog1_all_classes_masks = dog1_masks.argmax(class_dim) == torch.arange(num_classes)[:, None, None] - -print(f"dog1_masks shape = {dog1_masks.shape}, dtype = {dog1_masks.dtype}") -print(f"dog1_all_classes_masks = {dog1_all_classes_masks.shape}, dtype = {dog1_all_classes_masks.dtype}") - -dog_with_all_masks = draw_segmentation_masks(dog1_int, masks=dog1_all_classes_masks, alpha=.6) -show(dog_with_all_masks) - -##################################### -# We can see in the image above that only 2 masks were drawn: the mask for the -# background and the mask for the dog. This is because the model thinks that -# only these 2 classes are the most likely ones across all the pixels. If the -# model had detected another class as the most likely among other pixels, we -# would have seen its mask above. 
-# -# Removing the background mask is as simple as passing -# ``masks=dog1_all_classes_masks[1:]``, because the background class is the -# class with index 0. -# -# Let's now do the same but for an entire batch of images. The code is similar -# but involves a bit more juggling with the dimensions. - -class_dim = 1 -all_classes_masks = normalized_masks.argmax(class_dim) == torch.arange(num_classes)[:, None, None, None] -print(f"shape = {all_classes_masks.shape}, dtype = {all_classes_masks.dtype}") -# The first dimension is the classes now, so we need to swap it -all_classes_masks = all_classes_masks.swapaxes(0, 1) - -dogs_with_masks = [ - draw_segmentation_masks(img, masks=mask, alpha=.6) - for img, mask in zip(batch_int, all_classes_masks) -] -show(dogs_with_masks) - - -##################################### -# .. _instance_seg_output: -# -# Instance segmentation models -# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -# -# Instance segmentation models have a significantly different output from the -# semantic segmentation models. We will see here how to plot the masks for such -# models. Let's start by analyzing the output of a Mask-RCNN model. Note that -# these models don't require the images to be normalized, so we don't need to -# use the normalized batch. -# -# .. note:: -# -# We will here describe the output of a Mask-RCNN model. The models in -# :ref:`object_det_inst_seg_pers_keypoint_det` all have a similar output -# format, but some of them may have extra info like keypoints for -# :func:`~torchvision.models.detection.keypointrcnn_resnet50_fpn`, and some -# of them may not have masks, like -# :func:`~torchvision.models.detection.fasterrcnn_resnet50_fpn`. - -from torchvision.models.detection import maskrcnn_resnet50_fpn -model = maskrcnn_resnet50_fpn(pretrained=True, progress=False) -model = model.eval() - -output = model(batch) -print(output) - -##################################### -# Let's break this down. For each image in the batch, the model outputs some -# detections (or instances). The number of detections varies for each input -# image. Each instance is described by its bounding box, its label, its score -# and its mask. -# -# The way the output is organized is as follows: the output is a list of length -# ``batch_size``. Each entry in the list corresponds to an input image, and it -# is a dict with keys 'boxes', 'labels', 'scores', and 'masks'. Each value -# associated to those keys has ``num_instances`` elements in it. In our case -# above there are 3 instances detected in the first image, and 2 instances in -# the second one. -# -# The boxes can be plotted with :func:`~torchvision.utils.draw_bounding_boxes` -# as above, but here we're more interested in the masks. These masks are quite -# different from the masks that we saw above for the semantic segmentation -# models. - -dog1_output = output[0] -dog1_masks = dog1_output['masks'] -print(f"shape = {dog1_masks.shape}, dtype = {dog1_masks.dtype}, " - f"min = {dog1_masks.min()}, max = {dog1_masks.max()}") - -##################################### -# Here the masks corresponds to probabilities indicating, for each pixel, how -# likely it is to belong to the predicted label of that instance. Those -# predicted labels correspond to the 'labels' element in the same output dict. -# Let's see which labels were predicted for the instances of the first image. 
- -inst_classes = [ - '__background__', 'person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', - 'train', 'truck', 'boat', 'traffic light', 'fire hydrant', 'N/A', 'stop sign', - 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow', - 'elephant', 'bear', 'zebra', 'giraffe', 'N/A', 'backpack', 'umbrella', 'N/A', 'N/A', - 'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball', - 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard', 'tennis racket', - 'bottle', 'N/A', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', - 'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', - 'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed', 'N/A', 'dining table', - 'N/A', 'N/A', 'toilet', 'N/A', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone', - 'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'N/A', 'book', - 'clock', 'vase', 'scissors', 'teddy bear', 'hair drier', 'toothbrush' -] - -inst_class_to_idx = {cls: idx for (idx, cls) in enumerate(inst_classes)} - -print("For the first dog, the following instances were detected:") -print([inst_classes[label] for label in dog1_output['labels']]) - -##################################### -# Interestingly, the model detects two persons in the image. Let's go ahead and -# plot those masks. Since :func:`~torchvision.utils.draw_segmentation_masks` -# expects boolean masks, we need to convert those probabilities into boolean -# values. Remember that the semantic of those masks is "How likely is this pixel -# to belong to the predicted class?". As a result, a natural way of converting -# those masks into boolean values is to threshold them with the 0.5 probability -# (one could also choose a different threshold). - -proba_threshold = 0.5 -dog1_bool_masks = dog1_output['masks'] > proba_threshold -print(f"shape = {dog1_bool_masks.shape}, dtype = {dog1_bool_masks.dtype}") - -# There's an extra dimension (1) to the masks. We need to remove it -dog1_bool_masks = dog1_bool_masks.squeeze(1) - -show(draw_segmentation_masks(dog1_int, dog1_bool_masks, alpha=0.9)) - -##################################### -# The model seems to have properly detected the dog, but it also confused trees -# with people. Looking more closely at the scores will help us plotting more -# relevant masks: - -print(dog1_output['scores']) - -##################################### -# Clearly the model is more confident about the dog detection than it is about -# the people detections. That's good news. When plotting the masks, we can ask -# for only those that have a good score. Let's use a score threshold of .75 -# here, and also plot the masks of the second dog. - -score_threshold = .75 - -boolean_masks = [ - out['masks'][out['scores'] > score_threshold] > proba_threshold - for out in output -] - -dogs_with_masks = [ - draw_segmentation_masks(img, mask.squeeze(1)) - for img, mask in zip(batch_int, boolean_masks) -] -show(dogs_with_masks) - -##################################### -# The two 'people' masks in the first image where not selected because they have -# a lower score than the score threshold. Similarly in the second image, the -# instance with class 15 (which corresponds to 'bench') was not selected. 
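#####################################
# A minimal follow-up sketch (an illustration, not part of the original
# example): the same confident detections could also be drawn as labelled
# bounding boxes, reusing ``output``, ``inst_classes``, ``batch_int``,
# ``score_threshold`` and ``draw_bounding_boxes`` assumed to still be in scope
# from the code above.

dogs_with_labelled_boxes = [
    draw_bounding_boxes(
        img,
        boxes=out['boxes'][out['scores'] > score_threshold],
        # Map the kept instances' label ids to class names for readability
        labels=[inst_classes[label] for label in out['labels'][out['scores'] > score_threshold]],
        width=4,
    )
    for img, out in zip(batch_int, output)
]
show(dogs_with_labelled_boxes)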
diff --git a/0.11./_downloads/19a6d5f6ec4c29d7cbcc4a07a4b5339c/plot_video_api.py b/0.11./_downloads/19a6d5f6ec4c29d7cbcc4a07a4b5339c/plot_video_api.py deleted file mode 100644 index fe296d67be0..00000000000 --- a/0.11./_downloads/19a6d5f6ec4c29d7cbcc4a07a4b5339c/plot_video_api.py +++ /dev/null @@ -1,341 +0,0 @@ -""" -======================= -Video API -======================= - -This example illustrates some of the APIs that torchvision offers for -videos, together with the examples on how to build datasets and more. -""" - -#################################### -# 1. Introduction: building a new video object and examining the properties -# ------------------------------------------------------------------------- -# First we select a video to test the object out. For the sake of argument -# we're using one from kinetics400 dataset. -# To create it, we need to define the path and the stream we want to use. - -###################################### -# Chosen video statistics: -# -# - WUzgd7C1pWA.mp4 -# - source: -# - kinetics-400 -# - video: -# - H-264 -# - MPEG-4 AVC (part 10) (avc1) -# - fps: 29.97 -# - audio: -# - MPEG AAC audio (mp4a) -# - sample rate: 48K Hz -# - -import torch -import torchvision -from torchvision.datasets.utils import download_url - -# Download the sample video -download_url( - "https://github.com/pytorch/vision/blob/main/test/assets/videos/WUzgd7C1pWA.mp4?raw=true", - ".", - "WUzgd7C1pWA.mp4" -) -video_path = "./WUzgd7C1pWA.mp4" - -###################################### -# Streams are defined in a similar fashion as torch devices. We encode them as strings in a form -# of ``stream_type:stream_id`` where ``stream_type`` is a string and ``stream_id`` a long int. -# The constructor accepts passing a ``stream_type`` only, in which case the stream is auto-discovered. -# Firstly, let's get the metadata for our particular video: - -stream = "video" -video = torchvision.io.VideoReader(video_path, stream) -video.get_metadata() - -###################################### -# Here we can see that video has two streams - a video and an audio stream. -# Currently available stream types include ['video', 'audio']. -# Each descriptor consists of two parts: stream type (e.g. 'video') and a unique stream id -# (which are determined by video encoding). -# In this way, if the video container contains multiple streams of the same type, -# users can access the one they want. -# If only stream type is passed, the decoder auto-detects first stream of that type and returns it. - -###################################### -# Let's read all the frames from the video stream. By default, the return value of -# ``next(video_reader)`` is a dict containing the following fields. -# -# The return fields are: -# -# - ``data``: containing a torch.tensor -# - ``pts``: containing a float timestamp of this particular frame - -metadata = video.get_metadata() -video.set_current_stream("audio") - -frames = [] # we are going to save the frames here. 
-ptss = [] # pts is a presentation timestamp in seconds (float) of each frame -for frame in video: - frames.append(frame['data']) - ptss.append(frame['pts']) - -print("PTS for first five frames ", ptss[:5]) -print("Total number of frames: ", len(frames)) -approx_nf = metadata['audio']['duration'][0] * metadata['audio']['framerate'][0] -print("Approx total number of datapoints we can expect: ", approx_nf) -print("Read data size: ", frames[0].size(0) * len(frames)) - -###################################### -# But what if we only want to read certain time segment of the video? -# That can be done easily using the combination of our ``seek`` function, and the fact that each call -# to next returns the presentation timestamp of the returned frame in seconds. -# -# Given that our implementation relies on python iterators, -# we can leverage itertools to simplify the process and make it more pythonic. -# -# For example, if we wanted to read ten frames from second second: - - -import itertools -video.set_current_stream("video") - -frames = [] # we are going to save the frames here. - -# We seek into a second second of the video and use islice to get 10 frames since -for frame, pts in itertools.islice(video.seek(2), 10): - frames.append(frame) - -print("Total number of frames: ", len(frames)) - -###################################### -# Or if we wanted to read from 2nd to 5th second, -# We seek into a second second of the video, -# then we utilize the itertools takewhile to get the -# correct number of frames: - -video.set_current_stream("video") -frames = [] # we are going to save the frames here. -video = video.seek(2) - -for frame in itertools.takewhile(lambda x: x['pts'] <= 5, video): - frames.append(frame['data']) - -print("Total number of frames: ", len(frames)) -approx_nf = (5 - 2) * video.get_metadata()['video']['fps'][0] -print("We can expect approx: ", approx_nf) -print("Tensor size: ", frames[0].size()) - -#################################### -# 2. Building a sample read_video function -# ---------------------------------------------------------------------------------------- -# We can utilize the methods above to build the read video function that follows -# the same API to the existing ``read_video`` function. - - -def example_read_video(video_object, start=0, end=None, read_video=True, read_audio=True): - if end is None: - end = float("inf") - if end < start: - raise ValueError( - "end time should be larger than start time, got " - "start time={} and end time={}".format(start, end) - ) - - video_frames = torch.empty(0) - video_pts = [] - if read_video: - video_object.set_current_stream("video") - frames = [] - for frame in itertools.takewhile(lambda x: x['pts'] <= end, video_object.seek(start)): - frames.append(frame['data']) - video_pts.append(frame['pts']) - if len(frames) > 0: - video_frames = torch.stack(frames, 0) - - audio_frames = torch.empty(0) - audio_pts = [] - if read_audio: - video_object.set_current_stream("audio") - frames = [] - for frame in itertools.takewhile(lambda x: x['pts'] <= end, video_object.seek(start)): - frames.append(frame['data']) - video_pts.append(frame['pts']) - if len(frames) > 0: - audio_frames = torch.cat(frames, 0) - - return video_frames, audio_frames, (video_pts, audio_pts), video_object.get_metadata() - - -# Total number of frames should be 327 for video and 523264 datapoints for audio -vf, af, info, meta = example_read_video(video) -print(vf.size(), af.size()) - -#################################### -# 3. 
Building an example randomly sampled dataset (can be applied to training dataest of kinetics400) -# ------------------------------------------------------------------------------------------------------- -# Cool, so now we can use the same principle to make the sample dataset. -# We suggest trying out iterable dataset for this purpose. -# Here, we are going to build an example dataset that reads randomly selected 10 frames of video. - -#################################### -# Make sample dataset -import os -os.makedirs("./dataset", exist_ok=True) -os.makedirs("./dataset/1", exist_ok=True) -os.makedirs("./dataset/2", exist_ok=True) - -#################################### -# Download the videos -from torchvision.datasets.utils import download_url -download_url( - "https://github.com/pytorch/vision/blob/main/test/assets/videos/WUzgd7C1pWA.mp4?raw=true", - "./dataset/1", "WUzgd7C1pWA.mp4" -) -download_url( - "https://github.com/pytorch/vision/blob/main/test/assets/videos/RATRACE_wave_f_nm_np1_fr_goo_37.avi?raw=true", - "./dataset/1", - "RATRACE_wave_f_nm_np1_fr_goo_37.avi" -) -download_url( - "https://github.com/pytorch/vision/blob/main/test/assets/videos/SOX5yA1l24A.mp4?raw=true", - "./dataset/2", - "SOX5yA1l24A.mp4" -) -download_url( - "https://github.com/pytorch/vision/blob/main/test/assets/videos/v_SoccerJuggling_g23_c01.avi?raw=true", - "./dataset/2", - "v_SoccerJuggling_g23_c01.avi" -) -download_url( - "https://github.com/pytorch/vision/blob/main/test/assets/videos/v_SoccerJuggling_g24_c01.avi?raw=true", - "./dataset/2", - "v_SoccerJuggling_g24_c01.avi" -) - -#################################### -# Housekeeping and utilities -import os -import random - -from torchvision.datasets.folder import make_dataset -from torchvision import transforms as t - - -def _find_classes(dir): - classes = [d.name for d in os.scandir(dir) if d.is_dir()] - classes.sort() - class_to_idx = {cls_name: i for i, cls_name in enumerate(classes)} - return classes, class_to_idx - - -def get_samples(root, extensions=(".mp4", ".avi")): - _, class_to_idx = _find_classes(root) - return make_dataset(root, class_to_idx, extensions=extensions) - -#################################### -# We are going to define the dataset and some basic arguments. -# We assume the structure of the FolderDataset, and add the following parameters: -# -# - ``clip_len``: length of a clip in frames -# - ``frame_transform``: transform for every frame individually -# - ``video_transform``: transform on a video sequence -# -# .. note:: -# We actually add epoch size as using :func:`~torch.utils.data.IterableDataset` -# class allows us to naturally oversample clips or images from each video if needed. 
- - -class RandomDataset(torch.utils.data.IterableDataset): - def __init__(self, root, epoch_size=None, frame_transform=None, video_transform=None, clip_len=16): - super(RandomDataset).__init__() - - self.samples = get_samples(root) - - # Allow for temporal jittering - if epoch_size is None: - epoch_size = len(self.samples) - self.epoch_size = epoch_size - - self.clip_len = clip_len - self.frame_transform = frame_transform - self.video_transform = video_transform - - def __iter__(self): - for i in range(self.epoch_size): - # Get random sample - path, target = random.choice(self.samples) - # Get video object - vid = torchvision.io.VideoReader(path, "video") - metadata = vid.get_metadata() - video_frames = [] # video frame buffer - - # Seek and return frames - max_seek = metadata["video"]['duration'][0] - (self.clip_len / metadata["video"]['fps'][0]) - start = random.uniform(0., max_seek) - for frame in itertools.islice(vid.seek(start), self.clip_len): - video_frames.append(self.frame_transform(frame['data'])) - current_pts = frame['pts'] - # Stack it into a tensor - video = torch.stack(video_frames, 0) - if self.video_transform: - video = self.video_transform(video) - output = { - 'path': path, - 'video': video, - 'target': target, - 'start': start, - 'end': current_pts} - yield output - -#################################### -# Given a path of videos in a folder structure, i.e: -# -# - dataset -# - class 1 -# - file 0 -# - file 1 -# - ... -# - class 2 -# - file 0 -# - file 1 -# - ... -# - ... -# -# We can generate a dataloader and test the dataset. - - -transforms = [t.Resize((112, 112))] -frame_transform = t.Compose(transforms) - -dataset = RandomDataset("./dataset", epoch_size=None, frame_transform=frame_transform) - -#################################### -from torch.utils.data import DataLoader -loader = DataLoader(dataset, batch_size=12) -data = {"video": [], 'start': [], 'end': [], 'tensorsize': []} -for batch in loader: - for i in range(len(batch['path'])): - data['video'].append(batch['path'][i]) - data['start'].append(batch['start'][i].item()) - data['end'].append(batch['end'][i].item()) - data['tensorsize'].append(batch['video'][i].size()) -print(data) - -#################################### -# 4. Data Visualization -# ---------------------------------- -# Example of visualized video - -import matplotlib.pylab as plt - -plt.figure(figsize=(12, 12)) -for i in range(16): - plt.subplot(4, 4, i + 1) - plt.imshow(batch["video"][0, i, ...].permute(1, 2, 0)) - plt.axis("off") - -#################################### -# Cleanup the video and dataset: -import os -import shutil -os.remove("./WUzgd7C1pWA.mp4") -shutil.rmtree("./dataset") diff --git a/0.11./_downloads/2fc879ef12ea97750926a04c0a48c66b/plot_transforms.py b/0.11./_downloads/2fc879ef12ea97750926a04c0a48c66b/plot_transforms.py deleted file mode 100644 index ab0cb892b16..00000000000 --- a/0.11./_downloads/2fc879ef12ea97750926a04c0a48c66b/plot_transforms.py +++ /dev/null @@ -1,300 +0,0 @@ -""" -========================== -Illustration of transforms -========================== - -This example illustrates the various transforms available in :ref:`the -torchvision.transforms module `. 
-""" - -# sphinx_gallery_thumbnail_path = "../../gallery/assets/transforms_thumbnail.png" - -from PIL import Image -from pathlib import Path -import matplotlib.pyplot as plt -import numpy as np - -import torch -import torchvision.transforms as T - - -plt.rcParams["savefig.bbox"] = 'tight' -orig_img = Image.open(Path('assets') / 'astronaut.jpg') -# if you change the seed, make sure that the randomly-applied transforms -# properly show that the image can be both transformed and *not* transformed! -torch.manual_seed(0) - - -def plot(imgs, with_orig=True, row_title=None, **imshow_kwargs): - if not isinstance(imgs[0], list): - # Make a 2d grid even if there's just 1 row - imgs = [imgs] - - num_rows = len(imgs) - num_cols = len(imgs[0]) + with_orig - fig, axs = plt.subplots(nrows=num_rows, ncols=num_cols, squeeze=False) - for row_idx, row in enumerate(imgs): - row = [orig_img] + row if with_orig else row - for col_idx, img in enumerate(row): - ax = axs[row_idx, col_idx] - ax.imshow(np.asarray(img), **imshow_kwargs) - ax.set(xticklabels=[], yticklabels=[], xticks=[], yticks=[]) - - if with_orig: - axs[0, 0].set(title='Original image') - axs[0, 0].title.set_size(8) - if row_title is not None: - for row_idx in range(num_rows): - axs[row_idx, 0].set(ylabel=row_title[row_idx]) - - plt.tight_layout() - - -#################################### -# Pad -# --- -# The :class:`~torchvision.transforms.Pad` transform -# (see also :func:`~torchvision.transforms.functional.pad`) -# fills image borders with some pixel values. -padded_imgs = [T.Pad(padding=padding)(orig_img) for padding in (3, 10, 30, 50)] -plot(padded_imgs) - -#################################### -# Resize -# ------ -# The :class:`~torchvision.transforms.Resize` transform -# (see also :func:`~torchvision.transforms.functional.resize`) -# resizes an image. -resized_imgs = [T.Resize(size=size)(orig_img) for size in (30, 50, 100, orig_img.size)] -plot(resized_imgs) - -#################################### -# CenterCrop -# ---------- -# The :class:`~torchvision.transforms.CenterCrop` transform -# (see also :func:`~torchvision.transforms.functional.center_crop`) -# crops the given image at the center. -center_crops = [T.CenterCrop(size=size)(orig_img) for size in (30, 50, 100, orig_img.size)] -plot(center_crops) - -#################################### -# FiveCrop -# -------- -# The :class:`~torchvision.transforms.FiveCrop` transform -# (see also :func:`~torchvision.transforms.functional.five_crop`) -# crops the given image into four corners and the central crop. -(top_left, top_right, bottom_left, bottom_right, center) = T.FiveCrop(size=(100, 100))(orig_img) -plot([top_left, top_right, bottom_left, bottom_right, center]) - -#################################### -# Grayscale -# --------- -# The :class:`~torchvision.transforms.Grayscale` transform -# (see also :func:`~torchvision.transforms.functional.to_grayscale`) -# converts an image to grayscale -gray_img = T.Grayscale()(orig_img) -plot([gray_img], cmap='gray') - -#################################### -# Random transforms -# ----------------- -# The following transforms are random, which means that the same transfomer -# instance will produce different result each time it transforms a given image. -# -# ColorJitter -# ~~~~~~~~~~~ -# The :class:`~torchvision.transforms.ColorJitter` transform -# randomly changes the brightness, saturation, and other properties of an image. 
-jitter = T.ColorJitter(brightness=.5, hue=.3) -jitted_imgs = [jitter(orig_img) for _ in range(4)] -plot(jitted_imgs) - -#################################### -# GaussianBlur -# ~~~~~~~~~~~~ -# The :class:`~torchvision.transforms.GaussianBlur` transform -# (see also :func:`~torchvision.transforms.functional.gaussian_blur`) -# performs gaussian blur transform on an image. -blurrer = T.GaussianBlur(kernel_size=(5, 9), sigma=(0.1, 5)) -blurred_imgs = [blurrer(orig_img) for _ in range(4)] -plot(blurred_imgs) - -#################################### -# RandomPerspective -# ~~~~~~~~~~~~~~~~~ -# The :class:`~torchvision.transforms.RandomPerspective` transform -# (see also :func:`~torchvision.transforms.functional.perspective`) -# performs random perspective transform on an image. -perspective_transformer = T.RandomPerspective(distortion_scale=0.6, p=1.0) -perspective_imgs = [perspective_transformer(orig_img) for _ in range(4)] -plot(perspective_imgs) - -#################################### -# RandomRotation -# ~~~~~~~~~~~~~~ -# The :class:`~torchvision.transforms.RandomRotation` transform -# (see also :func:`~torchvision.transforms.functional.rotate`) -# rotates an image with random angle. -rotater = T.RandomRotation(degrees=(0, 180)) -rotated_imgs = [rotater(orig_img) for _ in range(4)] -plot(rotated_imgs) - -#################################### -# RandomAffine -# ~~~~~~~~~~~~ -# The :class:`~torchvision.transforms.RandomAffine` transform -# (see also :func:`~torchvision.transforms.functional.affine`) -# performs random affine transform on an image. -affine_transfomer = T.RandomAffine(degrees=(30, 70), translate=(0.1, 0.3), scale=(0.5, 0.75)) -affine_imgs = [affine_transfomer(orig_img) for _ in range(4)] -plot(affine_imgs) - -#################################### -# RandomCrop -# ~~~~~~~~~~ -# The :class:`~torchvision.transforms.RandomCrop` transform -# (see also :func:`~torchvision.transforms.functional.crop`) -# crops an image at a random location. -cropper = T.RandomCrop(size=(128, 128)) -crops = [cropper(orig_img) for _ in range(4)] -plot(crops) - -#################################### -# RandomResizedCrop -# ~~~~~~~~~~~~~~~~~ -# The :class:`~torchvision.transforms.RandomResizedCrop` transform -# (see also :func:`~torchvision.transforms.functional.resized_crop`) -# crops an image at a random location, and then resizes the crop to a given -# size. -resize_cropper = T.RandomResizedCrop(size=(32, 32)) -resized_crops = [resize_cropper(orig_img) for _ in range(4)] -plot(resized_crops) - -#################################### -# RandomInvert -# ~~~~~~~~~~~~ -# The :class:`~torchvision.transforms.RandomInvert` transform -# (see also :func:`~torchvision.transforms.functional.invert`) -# randomly inverts the colors of the given image. -inverter = T.RandomInvert() -invertered_imgs = [inverter(orig_img) for _ in range(4)] -plot(invertered_imgs) - -#################################### -# RandomPosterize -# ~~~~~~~~~~~~~~~ -# The :class:`~torchvision.transforms.RandomPosterize` transform -# (see also :func:`~torchvision.transforms.functional.posterize`) -# randomly posterizes the image by reducing the number of bits -# of each color channel. 
-posterizer = T.RandomPosterize(bits=2) -posterized_imgs = [posterizer(orig_img) for _ in range(4)] -plot(posterized_imgs) - -#################################### -# RandomSolarize -# ~~~~~~~~~~~~~~ -# The :class:`~torchvision.transforms.RandomSolarize` transform -# (see also :func:`~torchvision.transforms.functional.solarize`) -# randomly solarizes the image by inverting all pixel values above -# the threshold. -solarizer = T.RandomSolarize(threshold=192.0) -solarized_imgs = [solarizer(orig_img) for _ in range(4)] -plot(solarized_imgs) - -#################################### -# RandomAdjustSharpness -# ~~~~~~~~~~~~~~~~~~~~~ -# The :class:`~torchvision.transforms.RandomAdjustSharpness` transform -# (see also :func:`~torchvision.transforms.functional.adjust_sharpness`) -# randomly adjusts the sharpness of the given image. -sharpness_adjuster = T.RandomAdjustSharpness(sharpness_factor=2) -sharpened_imgs = [sharpness_adjuster(orig_img) for _ in range(4)] -plot(sharpened_imgs) - -#################################### -# RandomAutocontrast -# ~~~~~~~~~~~~~~~~~~ -# The :class:`~torchvision.transforms.RandomAutocontrast` transform -# (see also :func:`~torchvision.transforms.functional.autocontrast`) -# randomly applies autocontrast to the given image. -autocontraster = T.RandomAutocontrast() -autocontrasted_imgs = [autocontraster(orig_img) for _ in range(4)] -plot(autocontrasted_imgs) - -#################################### -# RandomEqualize -# ~~~~~~~~~~~~~~ -# The :class:`~torchvision.transforms.RandomEqualize` transform -# (see also :func:`~torchvision.transforms.functional.equalize`) -# randomly equalizes the histogram of the given image. -equalizer = T.RandomEqualize() -equalized_imgs = [equalizer(orig_img) for _ in range(4)] -plot(equalized_imgs) - -#################################### -# AutoAugment -# ~~~~~~~~~~~ -# The :class:`~torchvision.transforms.AutoAugment` transform -# automatically augments data based on a given auto-augmentation policy. -# See :class:`~torchvision.transforms.AutoAugmentPolicy` for the available policies. -policies = [T.AutoAugmentPolicy.CIFAR10, T.AutoAugmentPolicy.IMAGENET, T.AutoAugmentPolicy.SVHN] -augmenters = [T.AutoAugment(policy) for policy in policies] -imgs = [ - [augmenter(orig_img) for _ in range(4)] - for augmenter in augmenters -] -row_title = [str(policy).split('.')[-1] for policy in policies] -plot(imgs, row_title=row_title) - -#################################### -# RandAugment -# ~~~~~~~~~~~ -# The :class:`~torchvision.transforms.RandAugment` transform automatically augments the data. -augmenter = T.RandAugment() -imgs = [augmenter(orig_img) for _ in range(4)] -plot(imgs) - -#################################### -# TrivialAugmentWide -# ~~~~~~~~~~~~~~~~~~ -# The :class:`~torchvision.transforms.TrivialAugmentWide` transform automatically augments the data. -augmenter = T.TrivialAugmentWide() -imgs = [augmenter(orig_img) for _ in range(4)] -plot(imgs) - -#################################### -# Randomly-applied transforms -# --------------------------- -# -# Some transforms are randomly-applied given a probability ``p``. That is, the -# transformed image may actually be the same as the original one, even when -# called with the same transformer instance! -# -# RandomHorizontalFlip -# ~~~~~~~~~~~~~~~~~~~~ -# The :class:`~torchvision.transforms.RandomHorizontalFlip` transform -# (see also :func:`~torchvision.transforms.functional.hflip`) -# performs horizontal flip of an image, with a given probability. 
-hflipper = T.RandomHorizontalFlip(p=0.5) -transformed_imgs = [hflipper(orig_img) for _ in range(4)] -plot(transformed_imgs) - -#################################### -# RandomVerticalFlip -# ~~~~~~~~~~~~~~~~~~ -# The :class:`~torchvision.transforms.RandomVerticalFlip` transform -# (see also :func:`~torchvision.transforms.functional.vflip`) -# performs vertical flip of an image, with a given probability. -vflipper = T.RandomVerticalFlip(p=0.5) -transformed_imgs = [vflipper(orig_img) for _ in range(4)] -plot(transformed_imgs) - -#################################### -# RandomApply -# ~~~~~~~~~~~ -# The :class:`~torchvision.transforms.RandomApply` transform -# randomly applies a list of transforms, with a given probability. -applier = T.RandomApply(transforms=[T.RandomCrop(size=(64, 64))], p=0.5) -transformed_imgs = [applier(orig_img) for _ in range(4)] -plot(transformed_imgs) diff --git a/0.11./_downloads/44cefcbc2110528a73124d64db3315fc/plot_visualization_utils.ipynb b/0.11./_downloads/44cefcbc2110528a73124d64db3315fc/plot_visualization_utils.ipynb deleted file mode 100644 index 8d2a72c66aa..00000000000 --- a/0.11./_downloads/44cefcbc2110528a73124d64db3315fc/plot_visualization_utils.ipynb +++ /dev/null @@ -1,349 +0,0 @@ -{ - "cells": [ - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "%matplotlib inline" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "\n# Visualization utilities\n\nThis example illustrates some of the utilities that torchvision offers for\nvisualizing images, bounding boxes, and segmentation masks.\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "# sphinx_gallery_thumbnail_path = \"../../gallery/assets/visualization_utils_thumbnail.png\"\n\nimport torch\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimport torchvision.transforms.functional as F\n\n\nplt.rcParams[\"savefig.bbox\"] = 'tight'\n\n\ndef show(imgs):\n if not isinstance(imgs, list):\n imgs = [imgs]\n fix, axs = plt.subplots(ncols=len(imgs), squeeze=False)\n for i, img in enumerate(imgs):\n img = img.detach()\n img = F.to_pil_image(img)\n axs[0, i].imshow(np.asarray(img))\n axs[0, i].set(xticklabels=[], yticklabels=[], xticks=[], yticks=[])" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Visualizing a grid of images\nThe :func:`~torchvision.utils.make_grid` function can be used to create a\ntensor that represents multiple images in a grid. This util requires a single\nimage of dtype ``uint8`` as input.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "from torchvision.utils import make_grid\nfrom torchvision.io import read_image\nfrom pathlib import Path\n\ndog1_int = read_image(str(Path('assets') / 'dog1.jpg'))\ndog2_int = read_image(str(Path('assets') / 'dog2.jpg'))\n\ngrid = make_grid([dog1_int, dog2_int, dog1_int, dog2_int])\nshow(grid)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Visualizing bounding boxes\nWe can use :func:`~torchvision.utils.draw_bounding_boxes` to draw boxes on an\nimage. 
We can set the colors, labels, width as well as font and font size.\nThe boxes are in ``(xmin, ymin, xmax, ymax)`` format.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "from torchvision.utils import draw_bounding_boxes\n\n\nboxes = torch.tensor([[50, 50, 100, 200], [210, 150, 350, 430]], dtype=torch.float)\ncolors = [\"blue\", \"yellow\"]\nresult = draw_bounding_boxes(dog1_int, boxes, colors=colors, width=5)\nshow(result)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Naturally, we can also plot bounding boxes produced by torchvision detection\nmodels. Here is demo with a Faster R-CNN model loaded from\n:func:`~torchvision.models.detection.fasterrcnn_resnet50_fpn`\nmodel. You can also try using a RetinaNet with\n:func:`~torchvision.models.detection.retinanet_resnet50_fpn`, an SSDlite with\n:func:`~torchvision.models.detection.ssdlite320_mobilenet_v3_large` or an SSD with\n:func:`~torchvision.models.detection.ssd300_vgg16`. For more details\non the output of such models, you may refer to `instance_seg_output`.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "from torchvision.models.detection import fasterrcnn_resnet50_fpn\nfrom torchvision.transforms.functional import convert_image_dtype\n\n\nbatch_int = torch.stack([dog1_int, dog2_int])\nbatch = convert_image_dtype(batch_int, dtype=torch.float)\n\nmodel = fasterrcnn_resnet50_fpn(pretrained=True, progress=False)\nmodel = model.eval()\n\noutputs = model(batch)\nprint(outputs)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Let's plot the boxes detected by our model. We will only plot the boxes with a\nscore greater than a given threshold.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "score_threshold = .8\ndogs_with_boxes = [\n draw_bounding_boxes(dog_int, boxes=output['boxes'][output['scores'] > score_threshold], width=4)\n for dog_int, output in zip(batch_int, outputs)\n]\nshow(dogs_with_boxes)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Visualizing segmentation masks\nThe :func:`~torchvision.utils.draw_segmentation_masks` function can be used to\ndraw segmentation masks on images. Semantic segmentation and instance\nsegmentation models have different outputs, so we will treat each\nindependently.\n\n\n### Semantic segmentation models\n\nWe will see how to use it with torchvision's FCN Resnet-50, loaded with\n:func:`~torchvision.models.segmentation.fcn_resnet50`. You can also try using\nDeepLabv3 (:func:`~torchvision.models.segmentation.deeplabv3_resnet50`) or\nlraspp mobilenet models\n(:func:`~torchvision.models.segmentation.lraspp_mobilenet_v3_large`).\n\nLet's start by looking at the ouput of the model. 
Remember that in general,\nimages must be normalized before they're passed to a semantic segmentation\nmodel.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "from torchvision.models.segmentation import fcn_resnet50\n\n\nmodel = fcn_resnet50(pretrained=True, progress=False)\nmodel = model.eval()\n\nnormalized_batch = F.normalize(batch, mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225))\noutput = model(normalized_batch)['out']\nprint(output.shape, output.min().item(), output.max().item())" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "As we can see above, the output of the segmentation model is a tensor of shape\n``(batch_size, num_classes, H, W)``. Each value is a non-normalized score, and\nwe can normalize them into ``[0, 1]`` by using a softmax. After the softmax,\nwe can interpret each value as a probability indicating how likely a given\npixel is to belong to a given class.\n\nLet's plot the masks that have been detected for the dog class and for the\nboat class:\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "sem_classes = [\n '__background__', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus',\n 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse', 'motorbike',\n 'person', 'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor'\n]\nsem_class_to_idx = {cls: idx for (idx, cls) in enumerate(sem_classes)}\n\nnormalized_masks = torch.nn.functional.softmax(output, dim=1)\n\ndog_and_boat_masks = [\n normalized_masks[img_idx, sem_class_to_idx[cls]]\n for img_idx in range(batch.shape[0])\n for cls in ('dog', 'boat')\n]\n\nshow(dog_and_boat_masks)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "As expected, the model is confident about the dog class, but not so much for\nthe boat class.\n\nThe :func:`~torchvision.utils.draw_segmentation_masks` function can be used to\nplots those masks on top of the original image. This function expects the\nmasks to be boolean masks, but our masks above contain probabilities in ``[0,\n1]``. To get boolean masks, we can do the following:\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "class_dim = 1\nboolean_dog_masks = (normalized_masks.argmax(class_dim) == sem_class_to_idx['dog'])\nprint(f\"shape = {boolean_dog_masks.shape}, dtype = {boolean_dog_masks.dtype}\")\nshow([m.float() for m in boolean_dog_masks])" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The line above where we define ``boolean_dog_masks`` is a bit cryptic, but you\ncan read it as the following query: \"For which pixels is 'dog' the most likely\nclass?\"\n\n
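To make that query concrete, here is a small standalone sketch (using a made-up score tensor rather than the model output above, and a hypothetical class index) of how comparing the per-pixel argmax against a class index yields a boolean mask:

```python
import torch

# Toy scores shaped (batch, num_classes, H, W); in the example above these come from the FCN model.
scores = torch.randn(1, 3, 4, 4)
dog_idx = 2  # hypothetical class index, standing in for sem_class_to_idx['dog']

class_dim = 1
winning_class = scores.argmax(class_dim)       # (1, 4, 4): index of the most likely class per pixel
boolean_dog_mask = winning_class == dog_idx    # True exactly where 'dog' wins the argmax
print(boolean_dog_mask.shape, boolean_dog_mask.dtype)  # torch.Size([1, 4, 4]) torch.bool
```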

Note

While we're using the ``normalized_masks`` here, we would have\n gotten the same result by using the non-normalized scores of the model\n directly (as the softmax operation preserves the order).
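A quick standalone check of that claim (not part of the original example) is to compare the argmax of some fake scores before and after a softmax:

```python
import torch

logits = torch.randn(1, 21, 8, 8)     # fake non-normalized scores for 21 classes
probs = torch.softmax(logits, dim=1)  # softmax is monotonic within each pixel...
assert torch.equal(logits.argmax(dim=1), probs.argmax(dim=1))  # ...so the argmax is unchanged
```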

\n\nNow that we have boolean masks, we can use them with\n:func:`~torchvision.utils.draw_segmentation_masks` to plot them on top of the\noriginal images:\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "from torchvision.utils import draw_segmentation_masks\n\ndogs_with_masks = [\n draw_segmentation_masks(img, masks=mask, alpha=0.7)\n for img, mask in zip(batch_int, boolean_dog_masks)\n]\nshow(dogs_with_masks)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "We can plot more than one mask per image! Remember that the model returned as\nmany masks as there are classes. Let's ask the same query as above, but this\ntime for *all* classes, not just the dog class: \"For each pixel and each class\nC, is class C the most most likely class?\"\n\nThis one is a bit more involved, so we'll first show how to do it with a\nsingle image, and then we'll generalize to the batch\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "num_classes = normalized_masks.shape[1]\ndog1_masks = normalized_masks[0]\nclass_dim = 0\ndog1_all_classes_masks = dog1_masks.argmax(class_dim) == torch.arange(num_classes)[:, None, None]\n\nprint(f\"dog1_masks shape = {dog1_masks.shape}, dtype = {dog1_masks.dtype}\")\nprint(f\"dog1_all_classes_masks = {dog1_all_classes_masks.shape}, dtype = {dog1_all_classes_masks.dtype}\")\n\ndog_with_all_masks = draw_segmentation_masks(dog1_int, masks=dog1_all_classes_masks, alpha=.6)\nshow(dog_with_all_masks)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "We can see in the image above that only 2 masks were drawn: the mask for the\nbackground and the mask for the dog. This is because the model thinks that\nonly these 2 classes are the most likely ones across all the pixels. If the\nmodel had detected another class as the most likely among other pixels, we\nwould have seen its mask above.\n\nRemoving the background mask is as simple as passing\n``masks=dog1_all_classes_masks[1:]``, because the background class is the\nclass with index 0.\n\nLet's now do the same but for an entire batch of images. The code is similar\nbut involves a bit more juggling with the dimensions.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "class_dim = 1\nall_classes_masks = normalized_masks.argmax(class_dim) == torch.arange(num_classes)[:, None, None, None]\nprint(f\"shape = {all_classes_masks.shape}, dtype = {all_classes_masks.dtype}\")\n# The first dimension is the classes now, so we need to swap it\nall_classes_masks = all_classes_masks.swapaxes(0, 1)\n\ndogs_with_masks = [\n draw_segmentation_masks(img, masks=mask, alpha=.6)\n for img, mask in zip(batch_int, all_classes_masks)\n]\nshow(dogs_with_masks)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "\n### Instance segmentation models\n\nInstance segmentation models have a significantly different output from the\nsemantic segmentation models. We will see here how to plot the masks for such\nmodels. Let's start by analyzing the output of a Mask-RCNN model. Note that\nthese models don't require the images to be normalized, so we don't need to\nuse the normalized batch.\n\n

Note

Here we describe the output of a Mask-RCNN model. The models in\n    `object_det_inst_seg_pers_keypoint_det` all have a similar output\n    format, but some of them may have extra info like keypoints for\n    :func:`~torchvision.models.detection.keypointrcnn_resnet50_fpn`, and some\n    of them may not have masks, like\n    :func:`~torchvision.models.detection.fasterrcnn_resnet50_fpn`.
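As a rough illustration of that shared format, the mock prediction below (made-up values, with shapes following the Mask R-CNN convention, not actual model output) shows the per-image dict that these models return:

```python
import torch

# Hypothetical prediction for one image; real values come from model(batch) in the next cell.
pred = {
    "boxes": torch.tensor([[50.0, 50.0, 200.0, 300.0]]),  # (num_instances, 4) in (xmin, ymin, xmax, ymax)
    "labels": torch.tensor([18]),                          # (num_instances,) class indices
    "scores": torch.tensor([0.98]),                        # (num_instances,) confidences
    "masks": torch.rand(1, 1, 480, 640),                   # (num_instances, 1, H, W); mask models only
}
# Keypoint models would add a 'keypoints' entry; box-only models simply omit 'masks'.
print({name: tuple(t.shape) for name, t in pred.items()})
```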

\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "from torchvision.models.detection import maskrcnn_resnet50_fpn\nmodel = maskrcnn_resnet50_fpn(pretrained=True, progress=False)\nmodel = model.eval()\n\noutput = model(batch)\nprint(output)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Let's break this down. For each image in the batch, the model outputs some\ndetections (or instances). The number of detections varies for each input\nimage. Each instance is described by its bounding box, its label, its score\nand its mask.\n\nThe way the output is organized is as follows: the output is a list of length\n``batch_size``. Each entry in the list corresponds to an input image, and it\nis a dict with keys 'boxes', 'labels', 'scores', and 'masks'. Each value\nassociated to those keys has ``num_instances`` elements in it. In our case\nabove there are 3 instances detected in the first image, and 2 instances in\nthe second one.\n\nThe boxes can be plotted with :func:`~torchvision.utils.draw_bounding_boxes`\nas above, but here we're more interested in the masks. These masks are quite\ndifferent from the masks that we saw above for the semantic segmentation\nmodels.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "dog1_output = output[0]\ndog1_masks = dog1_output['masks']\nprint(f\"shape = {dog1_masks.shape}, dtype = {dog1_masks.dtype}, \"\n f\"min = {dog1_masks.min()}, max = {dog1_masks.max()}\")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Here the masks corresponds to probabilities indicating, for each pixel, how\nlikely it is to belong to the predicted label of that instance. Those\npredicted labels correspond to the 'labels' element in the same output dict.\nLet's see which labels were predicted for the instances of the first image.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "inst_classes = [\n '__background__', 'person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus',\n 'train', 'truck', 'boat', 'traffic light', 'fire hydrant', 'N/A', 'stop sign',\n 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',\n 'elephant', 'bear', 'zebra', 'giraffe', 'N/A', 'backpack', 'umbrella', 'N/A', 'N/A',\n 'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball',\n 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard', 'tennis racket',\n 'bottle', 'N/A', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl',\n 'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza',\n 'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed', 'N/A', 'dining table',\n 'N/A', 'N/A', 'toilet', 'N/A', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone',\n 'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'N/A', 'book',\n 'clock', 'vase', 'scissors', 'teddy bear', 'hair drier', 'toothbrush'\n]\n\ninst_class_to_idx = {cls: idx for (idx, cls) in enumerate(inst_classes)}\n\nprint(\"For the first dog, the following instances were detected:\")\nprint([inst_classes[label] for label in dog1_output['labels']])" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Interestingly, the model detects two persons in the image. Let's go ahead and\nplot those masks. 
Since :func:`~torchvision.utils.draw_segmentation_masks`\nexpects boolean masks, we need to convert those probabilities into boolean\nvalues. Remember that the semantic of those masks is \"How likely is this pixel\nto belong to the predicted class?\". As a result, a natural way of converting\nthose masks into boolean values is to threshold them with the 0.5 probability\n(one could also choose a different threshold).\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "proba_threshold = 0.5\ndog1_bool_masks = dog1_output['masks'] > proba_threshold\nprint(f\"shape = {dog1_bool_masks.shape}, dtype = {dog1_bool_masks.dtype}\")\n\n# There's an extra dimension (1) to the masks. We need to remove it\ndog1_bool_masks = dog1_bool_masks.squeeze(1)\n\nshow(draw_segmentation_masks(dog1_int, dog1_bool_masks, alpha=0.9))" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The model seems to have properly detected the dog, but it also confused trees\nwith people. Looking more closely at the scores will help us plotting more\nrelevant masks:\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "print(dog1_output['scores'])" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Clearly the model is more confident about the dog detection than it is about\nthe people detections. That's good news. When plotting the masks, we can ask\nfor only those that have a good score. Let's use a score threshold of .75\nhere, and also plot the masks of the second dog.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "score_threshold = .75\n\nboolean_masks = [\n out['masks'][out['scores'] > score_threshold] > proba_threshold\n for out in output\n]\n\ndogs_with_masks = [\n draw_segmentation_masks(img, mask.squeeze(1))\n for img, mask in zip(batch_int, boolean_masks)\n]\nshow(dogs_with_masks)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The two 'people' masks in the first image where not selected because they have\na lower score than the score threshold. 
Similarly in the second image, the\ninstance with class 15 (which corresponds to 'bench') was not selected.\n\n" - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.7.11" - } - }, - "nbformat": 4, - "nbformat_minor": 0 -} \ No newline at end of file diff --git a/0.11./_downloads/64bb9b01863bd76675f57b9a9a8e6229/plot_repurposing_annotations.ipynb b/0.11./_downloads/64bb9b01863bd76675f57b9a9a8e6229/plot_repurposing_annotations.ipynb deleted file mode 100644 index e25b354b777..00000000000 --- a/0.11./_downloads/64bb9b01863bd76675f57b9a9a8e6229/plot_repurposing_annotations.ipynb +++ /dev/null @@ -1,216 +0,0 @@ -{ - "cells": [ - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "%matplotlib inline" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "\n# Repurposing masks into bounding boxes\n\nThe following example illustrates the operations available\nthe `torchvision.ops ` module for repurposing\nsegmentation masks into object localization annotations for different tasks\n(e.g. transforming masks used by instance and panoptic segmentation\nmethods into bounding boxes used by object detection methods).\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "# sphinx_gallery_thumbnail_path = \"../../gallery/assets/repurposing_annotations_thumbnail.png\"\n\nimport os\nimport numpy as np\nimport torch\nimport matplotlib.pyplot as plt\n\nimport torchvision.transforms.functional as F\n\n\nASSETS_DIRECTORY = \"assets\"\n\nplt.rcParams[\"savefig.bbox\"] = \"tight\"\n\n\ndef show(imgs):\n if not isinstance(imgs, list):\n imgs = [imgs]\n fix, axs = plt.subplots(ncols=len(imgs), squeeze=False)\n for i, img in enumerate(imgs):\n img = img.detach()\n img = F.to_pil_image(img)\n axs[0, i].imshow(np.asarray(img))\n axs[0, i].set(xticklabels=[], yticklabels=[], xticks=[], yticks=[])" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Masks\nIn tasks like instance and panoptic segmentation, masks are commonly defined, and are defined by this package,\nas a multi-dimensional array (e.g. a NumPy array or a PyTorch tensor) with the following shape:\n\n (num_objects, height, width)\n\nWhere num_objects is the number of annotated objects in the image. Each (height, width) object corresponds to exactly\none object. 
For example, if your input image has the dimensions 224 x 224 and has four annotated objects the shape\nof your masks annotation has the following shape:\n\n (4, 224, 224).\n\nA nice property of masks is that they can be easily repurposed to be used in methods to solve a variety of object\nlocalization tasks.\n\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Converting Masks to Bounding Boxes\nFor example, the :func:`~torchvision.ops.masks_to_boxes` operation can be used to\ntransform masks into bounding boxes that can be\nused as input to detection models such as FasterRCNN and RetinaNet.\nWe will take images and masks from the `PenFudan Dataset `_.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "from torchvision.io import read_image\n\nimg_path = os.path.join(ASSETS_DIRECTORY, \"FudanPed00054.png\")\nmask_path = os.path.join(ASSETS_DIRECTORY, \"FudanPed00054_mask.png\")\nimg = read_image(img_path)\nmask = read_image(mask_path)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Here the masks are represented as a PNG Image, with floating point values.\nEach pixel is encoded as different colors, with 0 being background.\nNotice that the spatial dimensions of image and mask match.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "print(mask.size())\nprint(img.size())\nprint(mask)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "# We get the unique colors, as these would be the object ids.\nobj_ids = torch.unique(mask)\n\n# first id is the background, so remove it.\nobj_ids = obj_ids[1:]\n\n# split the color-encoded mask into a set of boolean masks.\n# Note that this snippet would work as well if the masks were float values instead of ints.\nmasks = mask == obj_ids[:, None, None]" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Now the masks are a boolean tensor.\nThe first dimension in this case 3 and denotes the number of instances: there are 3 people in the image.\nThe other two dimensions are height and width, which are equal to the dimensions of the image.\nFor each instance, the boolean tensors represent if the particular pixel\nbelongs to the segmentation mask of the image.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "print(masks.size())\nprint(masks)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Let us visualize an image and plot its corresponding segmentation masks.\nWe will use the :func:`~torchvision.utils.draw_segmentation_masks` to draw the segmentation masks.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "from torchvision.utils import draw_segmentation_masks\n\ndrawn_masks = []\nfor mask in masks:\n drawn_masks.append(draw_segmentation_masks(img, mask, alpha=0.8, colors=\"blue\"))\n\nshow(drawn_masks)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "To convert the boolean masks into bounding boxes.\nWe will use the :func:`~torchvision.ops.masks_to_boxes` from the torchvision.ops module\nIt returns the boxes in ``(xmin, ymin, xmax, ymax)`` format.\n\n" - ] - }, - { - 
"cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "from torchvision.ops import masks_to_boxes\n\nboxes = masks_to_boxes(masks)\nprint(boxes.size())\nprint(boxes)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "As the shape denotes, there are 3 boxes and in ``(xmin, ymin, xmax, ymax)`` format.\nThese can be visualized very easily with :func:`~torchvision.utils.draw_bounding_boxes` utility\nprovided in `torchvision.utils `.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "from torchvision.utils import draw_bounding_boxes\n\ndrawn_boxes = draw_bounding_boxes(img, boxes, colors=\"red\")\nshow(drawn_boxes)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "These boxes can now directly be used by detection models in torchvision.\nHere is demo with a Faster R-CNN model loaded from\n:func:`~torchvision.models.detection.fasterrcnn_resnet50_fpn`\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "from torchvision.models.detection import fasterrcnn_resnet50_fpn\n\nmodel = fasterrcnn_resnet50_fpn(pretrained=True, progress=False)\nprint(img.size())\n\nimg = F.convert_image_dtype(img, torch.float)\ntarget = {}\ntarget[\"boxes\"] = boxes\ntarget[\"labels\"] = labels = torch.ones((masks.size(0),), dtype=torch.int64)\ndetection_outputs = model(img.unsqueeze(0), [target])" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Converting Segmentation Dataset to Detection Dataset\n\nWith this utility it becomes very simple to convert a segmentation dataset to a detection dataset.\nWith this we can now use a segmentation dataset to train a detection model.\nOne can similarly convert panoptic dataset to detection dataset.\nHere is an example where we re-purpose the dataset from the\n`PenFudan Detection Tutorial `_.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "class SegmentationToDetectionDataset(torch.utils.data.Dataset):\n def __init__(self, root, transforms):\n self.root = root\n self.transforms = transforms\n # load all image files, sorting them to\n # ensure that they are aligned\n self.imgs = list(sorted(os.listdir(os.path.join(root, \"PNGImages\"))))\n self.masks = list(sorted(os.listdir(os.path.join(root, \"PedMasks\"))))\n\n def __getitem__(self, idx):\n # load images and masks\n img_path = os.path.join(self.root, \"PNGImages\", self.imgs[idx])\n mask_path = os.path.join(self.root, \"PedMasks\", self.masks[idx])\n\n img = read_image(img_path)\n mask = read_image(mask_path)\n\n img = F.convert_image_dtype(img, dtype=torch.float)\n mask = F.convert_image_dtype(mask, dtype=torch.float)\n\n # We get the unique colors, as these would be the object ids.\n obj_ids = torch.unique(mask)\n\n # first id is the background, so remove it.\n obj_ids = obj_ids[1:]\n\n # split the color-encoded mask into a set of boolean masks.\n masks = mask == obj_ids[:, None, None]\n\n boxes = masks_to_boxes(masks)\n\n # there is only one class\n labels = torch.ones((masks.shape[0],), dtype=torch.int64)\n\n target = {}\n target[\"boxes\"] = boxes\n target[\"labels\"] = labels\n\n if self.transforms is not None:\n img, target = self.transforms(img, target)\n\n return img, target" - ] - } - ], - "metadata": 
{ - "kernelspec": { - "display_name": "Python 3", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.7.11" - } - }, - "nbformat": 4, - "nbformat_minor": 0 -} \ No newline at end of file diff --git a/0.11./_downloads/6a01ac4f9248f75d7d59fd4b8e9147f5/plot_scripted_tensor_transforms.py b/0.11./_downloads/6a01ac4f9248f75d7d59fd4b8e9147f5/plot_scripted_tensor_transforms.py deleted file mode 100644 index 6f3cc22073e..00000000000 --- a/0.11./_downloads/6a01ac4f9248f75d7d59fd4b8e9147f5/plot_scripted_tensor_transforms.py +++ /dev/null @@ -1,145 +0,0 @@ -""" -========================= -Tensor transforms and JIT -========================= - -This example illustrates various features that are now supported by the -:ref:`image transformations ` on Tensor images. In particular, we -show how image transforms can be performed on GPU, and how one can also script -them using JIT compilation. - -Prior to v0.8.0, transforms in torchvision have traditionally been PIL-centric -and presented multiple limitations due to that. Now, since v0.8.0, transforms -implementations are Tensor and PIL compatible and we can achieve the following -new features: - -- transform multi-band torch tensor images (with more than 3-4 channels) -- torchscript transforms together with your model for deployment -- support for GPU acceleration -- batched transformation such as for videos -- read and decode data directly as torch tensor with torchscript support (for PNG and JPEG image formats) - -.. note:: - These features are only possible with **Tensor** images. -""" - -from pathlib import Path - -import matplotlib.pyplot as plt -import numpy as np - -import torch -import torchvision.transforms as T -from torchvision.io import read_image - - -plt.rcParams["savefig.bbox"] = 'tight' -torch.manual_seed(1) - - -def show(imgs): - fix, axs = plt.subplots(ncols=len(imgs), squeeze=False) - for i, img in enumerate(imgs): - img = T.ToPILImage()(img.to('cpu')) - axs[0, i].imshow(np.asarray(img)) - axs[0, i].set(xticklabels=[], yticklabels=[], xticks=[], yticks=[]) - - -#################################### -# The :func:`~torchvision.io.read_image` function allows to read an image and -# directly load it as a tensor - -dog1 = read_image(str(Path('assets') / 'dog1.jpg')) -dog2 = read_image(str(Path('assets') / 'dog2.jpg')) -show([dog1, dog2]) - -#################################### -# Transforming images on GPU -# -------------------------- -# Most transforms natively support tensors on top of PIL images (to visualize -# the effect of the transforms, you may refer to see -# :ref:`sphx_glr_auto_examples_plot_transforms.py`). -# Using tensor images, we can run the transforms on GPUs if cuda is available! 
- -import torch.nn as nn - -transforms = torch.nn.Sequential( - T.RandomCrop(224), - T.RandomHorizontalFlip(p=0.3), -) - -device = 'cuda' if torch.cuda.is_available() else 'cpu' -dog1 = dog1.to(device) -dog2 = dog2.to(device) - -transformed_dog1 = transforms(dog1) -transformed_dog2 = transforms(dog2) -show([transformed_dog1, transformed_dog2]) - -#################################### -# Scriptable transforms for easier deployment via torchscript -# ----------------------------------------------------------- -# We now show how to combine image transformations and a model forward pass, -# while using ``torch.jit.script`` to obtain a single scripted module. -# -# Let's define a ``Predictor`` module that transforms the input tensor and then -# applies an ImageNet model on it. - -from torchvision.models import resnet18 - - -class Predictor(nn.Module): - - def __init__(self): - super().__init__() - self.resnet18 = resnet18(pretrained=True, progress=False).eval() - self.transforms = nn.Sequential( - T.Resize([256, ]), # We use single int value inside a list due to torchscript type restrictions - T.CenterCrop(224), - T.ConvertImageDtype(torch.float), - T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) - ) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - with torch.no_grad(): - x = self.transforms(x) - y_pred = self.resnet18(x) - return y_pred.argmax(dim=1) - - -#################################### -# Now, let's define scripted and non-scripted instances of ``Predictor`` and -# apply it on multiple tensor images of the same size - -predictor = Predictor().to(device) -scripted_predictor = torch.jit.script(predictor).to(device) - -batch = torch.stack([dog1, dog2]).to(device) - -res = predictor(batch) -res_scripted = scripted_predictor(batch) - -#################################### -# We can verify that the prediction of the scripted and non-scripted models are -# the same: - -import json - -with open(Path('assets') / 'imagenet_class_index.json', 'r') as labels_file: - labels = json.load(labels_file) - -for i, (pred, pred_scripted) in enumerate(zip(res, res_scripted)): - assert pred == pred_scripted - print(f"Prediction for Dog {i + 1}: {labels[str(pred.item())]}") - -#################################### -# Since the model is scripted, it can be easily dumped on disk and re-used - -import tempfile - -with tempfile.NamedTemporaryFile() as f: - scripted_predictor.save(f.name) - - dumped_scripted_predictor = torch.jit.load(f.name) - res_scripted_dumped = dumped_scripted_predictor(batch) -assert (res_scripted_dumped == res_scripted).all() diff --git a/0.11./_downloads/6f1e7a639e0699d6164445b55e6c116d/auto_examples_jupyter.zip b/0.11./_downloads/6f1e7a639e0699d6164445b55e6c116d/auto_examples_jupyter.zip deleted file mode 100644 index edefd9291ca..00000000000 Binary files a/0.11./_downloads/6f1e7a639e0699d6164445b55e6c116d/auto_examples_jupyter.zip and /dev/null differ diff --git a/0.11./_downloads/82da64ed59815304068ce683aaf81dd9/plot_transforms.ipynb b/0.11./_downloads/82da64ed59815304068ce683aaf81dd9/plot_transforms.ipynb deleted file mode 100644 index 578f469aff9..00000000000 --- a/0.11./_downloads/82da64ed59815304068ce683aaf81dd9/plot_transforms.ipynb +++ /dev/null @@ -1,486 +0,0 @@ -{ - "cells": [ - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "%matplotlib inline" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "\n# Illustration of transforms\n\nThis example illustrates the 
various transforms available in `the\ntorchvision.transforms module `.\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "# sphinx_gallery_thumbnail_path = \"../../gallery/assets/transforms_thumbnail.png\"\n\nfrom PIL import Image\nfrom pathlib import Path\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nimport torch\nimport torchvision.transforms as T\n\n\nplt.rcParams[\"savefig.bbox\"] = 'tight'\norig_img = Image.open(Path('assets') / 'astronaut.jpg')\n# if you change the seed, make sure that the randomly-applied transforms\n# properly show that the image can be both transformed and *not* transformed!\ntorch.manual_seed(0)\n\n\ndef plot(imgs, with_orig=True, row_title=None, **imshow_kwargs):\n if not isinstance(imgs[0], list):\n # Make a 2d grid even if there's just 1 row\n imgs = [imgs]\n\n num_rows = len(imgs)\n num_cols = len(imgs[0]) + with_orig\n fig, axs = plt.subplots(nrows=num_rows, ncols=num_cols, squeeze=False)\n for row_idx, row in enumerate(imgs):\n row = [orig_img] + row if with_orig else row\n for col_idx, img in enumerate(row):\n ax = axs[row_idx, col_idx]\n ax.imshow(np.asarray(img), **imshow_kwargs)\n ax.set(xticklabels=[], yticklabels=[], xticks=[], yticks=[])\n\n if with_orig:\n axs[0, 0].set(title='Original image')\n axs[0, 0].title.set_size(8)\n if row_title is not None:\n for row_idx in range(num_rows):\n axs[row_idx, 0].set(ylabel=row_title[row_idx])\n\n plt.tight_layout()" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Pad\nThe :class:`~torchvision.transforms.Pad` transform\n(see also :func:`~torchvision.transforms.functional.pad`)\nfills image borders with some pixel values.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "padded_imgs = [T.Pad(padding=padding)(orig_img) for padding in (3, 10, 30, 50)]\nplot(padded_imgs)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Resize\nThe :class:`~torchvision.transforms.Resize` transform\n(see also :func:`~torchvision.transforms.functional.resize`)\nresizes an image.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "resized_imgs = [T.Resize(size=size)(orig_img) for size in (30, 50, 100, orig_img.size)]\nplot(resized_imgs)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## CenterCrop\nThe :class:`~torchvision.transforms.CenterCrop` transform\n(see also :func:`~torchvision.transforms.functional.center_crop`)\ncrops the given image at the center.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "center_crops = [T.CenterCrop(size=size)(orig_img) for size in (30, 50, 100, orig_img.size)]\nplot(center_crops)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## FiveCrop\nThe :class:`~torchvision.transforms.FiveCrop` transform\n(see also :func:`~torchvision.transforms.functional.five_crop`)\ncrops the given image into four corners and the central crop.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "(top_left, top_right, bottom_left, bottom_right, center) = T.FiveCrop(size=(100, 100))(orig_img)\nplot([top_left, top_right, bottom_left, bottom_right, 
center])" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Grayscale\nThe :class:`~torchvision.transforms.Grayscale` transform\n(see also :func:`~torchvision.transforms.functional.to_grayscale`)\nconverts an image to grayscale\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "gray_img = T.Grayscale()(orig_img)\nplot([gray_img], cmap='gray')" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Random transforms\nThe following transforms are random, which means that the same transfomer\ninstance will produce different result each time it transforms a given image.\n\n### ColorJitter\nThe :class:`~torchvision.transforms.ColorJitter` transform\nrandomly changes the brightness, saturation, and other properties of an image.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "jitter = T.ColorJitter(brightness=.5, hue=.3)\njitted_imgs = [jitter(orig_img) for _ in range(4)]\nplot(jitted_imgs)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### GaussianBlur\nThe :class:`~torchvision.transforms.GaussianBlur` transform\n(see also :func:`~torchvision.transforms.functional.gaussian_blur`)\nperforms gaussian blur transform on an image.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "blurrer = T.GaussianBlur(kernel_size=(5, 9), sigma=(0.1, 5))\nblurred_imgs = [blurrer(orig_img) for _ in range(4)]\nplot(blurred_imgs)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### RandomPerspective\nThe :class:`~torchvision.transforms.RandomPerspective` transform\n(see also :func:`~torchvision.transforms.functional.perspective`)\nperforms random perspective transform on an image.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "perspective_transformer = T.RandomPerspective(distortion_scale=0.6, p=1.0)\nperspective_imgs = [perspective_transformer(orig_img) for _ in range(4)]\nplot(perspective_imgs)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### RandomRotation\nThe :class:`~torchvision.transforms.RandomRotation` transform\n(see also :func:`~torchvision.transforms.functional.rotate`)\nrotates an image with random angle.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "rotater = T.RandomRotation(degrees=(0, 180))\nrotated_imgs = [rotater(orig_img) for _ in range(4)]\nplot(rotated_imgs)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### RandomAffine\nThe :class:`~torchvision.transforms.RandomAffine` transform\n(see also :func:`~torchvision.transforms.functional.affine`)\nperforms random affine transform on an image.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "affine_transfomer = T.RandomAffine(degrees=(30, 70), translate=(0.1, 0.3), scale=(0.5, 0.75))\naffine_imgs = [affine_transfomer(orig_img) for _ in range(4)]\nplot(affine_imgs)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### RandomCrop\nThe :class:`~torchvision.transforms.RandomCrop` transform\n(see also 
:func:`~torchvision.transforms.functional.crop`)\ncrops an image at a random location.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "cropper = T.RandomCrop(size=(128, 128))\ncrops = [cropper(orig_img) for _ in range(4)]\nplot(crops)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### RandomResizedCrop\nThe :class:`~torchvision.transforms.RandomResizedCrop` transform\n(see also :func:`~torchvision.transforms.functional.resized_crop`)\ncrops an image at a random location, and then resizes the crop to a given\nsize.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "resize_cropper = T.RandomResizedCrop(size=(32, 32))\nresized_crops = [resize_cropper(orig_img) for _ in range(4)]\nplot(resized_crops)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### RandomInvert\nThe :class:`~torchvision.transforms.RandomInvert` transform\n(see also :func:`~torchvision.transforms.functional.invert`)\nrandomly inverts the colors of the given image.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "inverter = T.RandomInvert()\ninvertered_imgs = [inverter(orig_img) for _ in range(4)]\nplot(invertered_imgs)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### RandomPosterize\nThe :class:`~torchvision.transforms.RandomPosterize` transform\n(see also :func:`~torchvision.transforms.functional.posterize`)\nrandomly posterizes the image by reducing the number of bits\nof each color channel.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "posterizer = T.RandomPosterize(bits=2)\nposterized_imgs = [posterizer(orig_img) for _ in range(4)]\nplot(posterized_imgs)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### RandomSolarize\nThe :class:`~torchvision.transforms.RandomSolarize` transform\n(see also :func:`~torchvision.transforms.functional.solarize`)\nrandomly solarizes the image by inverting all pixel values above\nthe threshold.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "solarizer = T.RandomSolarize(threshold=192.0)\nsolarized_imgs = [solarizer(orig_img) for _ in range(4)]\nplot(solarized_imgs)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### RandomAdjustSharpness\nThe :class:`~torchvision.transforms.RandomAdjustSharpness` transform\n(see also :func:`~torchvision.transforms.functional.adjust_sharpness`)\nrandomly adjusts the sharpness of the given image.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "sharpness_adjuster = T.RandomAdjustSharpness(sharpness_factor=2)\nsharpened_imgs = [sharpness_adjuster(orig_img) for _ in range(4)]\nplot(sharpened_imgs)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### RandomAutocontrast\nThe :class:`~torchvision.transforms.RandomAutocontrast` transform\n(see also :func:`~torchvision.transforms.functional.autocontrast`)\nrandomly applies autocontrast to the given image.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": 
{ - "collapsed": false - }, - "outputs": [], - "source": [ - "autocontraster = T.RandomAutocontrast()\nautocontrasted_imgs = [autocontraster(orig_img) for _ in range(4)]\nplot(autocontrasted_imgs)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### RandomEqualize\nThe :class:`~torchvision.transforms.RandomEqualize` transform\n(see also :func:`~torchvision.transforms.functional.equalize`)\nrandomly equalizes the histogram of the given image.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "equalizer = T.RandomEqualize()\nequalized_imgs = [equalizer(orig_img) for _ in range(4)]\nplot(equalized_imgs)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### AutoAugment\nThe :class:`~torchvision.transforms.AutoAugment` transform\nautomatically augments data based on a given auto-augmentation policy.\nSee :class:`~torchvision.transforms.AutoAugmentPolicy` for the available policies.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "policies = [T.AutoAugmentPolicy.CIFAR10, T.AutoAugmentPolicy.IMAGENET, T.AutoAugmentPolicy.SVHN]\naugmenters = [T.AutoAugment(policy) for policy in policies]\nimgs = [\n [augmenter(orig_img) for _ in range(4)]\n for augmenter in augmenters\n]\nrow_title = [str(policy).split('.')[-1] for policy in policies]\nplot(imgs, row_title=row_title)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### RandAugment\nThe :class:`~torchvision.transforms.RandAugment` transform automatically augments the data.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "augmenter = T.RandAugment()\nimgs = [augmenter(orig_img) for _ in range(4)]\nplot(imgs)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### TrivialAugmentWide\nThe :class:`~torchvision.transforms.TrivialAugmentWide` transform automatically augments the data.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "augmenter = T.TrivialAugmentWide()\nimgs = [augmenter(orig_img) for _ in range(4)]\nplot(imgs)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Randomly-applied transforms\n\nSome transforms are randomly-applied given a probability ``p``. 
That is, the\ntransformed image may actually be the same as the original one, even when\ncalled with the same transformer instance!\n\n### RandomHorizontalFlip\nThe :class:`~torchvision.transforms.RandomHorizontalFlip` transform\n(see also :func:`~torchvision.transforms.functional.hflip`)\nperforms horizontal flip of an image, with a given probability.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "hflipper = T.RandomHorizontalFlip(p=0.5)\ntransformed_imgs = [hflipper(orig_img) for _ in range(4)]\nplot(transformed_imgs)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### RandomVerticalFlip\nThe :class:`~torchvision.transforms.RandomVerticalFlip` transform\n(see also :func:`~torchvision.transforms.functional.vflip`)\nperforms vertical flip of an image, with a given probability.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "vflipper = T.RandomVerticalFlip(p=0.5)\ntransformed_imgs = [vflipper(orig_img) for _ in range(4)]\nplot(transformed_imgs)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### RandomApply\nThe :class:`~torchvision.transforms.RandomApply` transform\nrandomly applies a list of transforms, with a given probability.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "applier = T.RandomApply(transforms=[T.RandomCrop(size=(64, 64))], p=0.5)\ntransformed_imgs = [applier(orig_img) for _ in range(4)]\nplot(transformed_imgs)" - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.7.11" - } - }, - "nbformat": 4, - "nbformat_minor": 0 -} \ No newline at end of file diff --git a/0.11./_downloads/835d7cd9c1c44b0656fb931b7f5af002/plot_scripted_tensor_transforms.ipynb b/0.11./_downloads/835d7cd9c1c44b0656fb931b7f5af002/plot_scripted_tensor_transforms.ipynb deleted file mode 100644 index 2062c325361..00000000000 --- a/0.11./_downloads/835d7cd9c1c44b0656fb931b7f5af002/plot_scripted_tensor_transforms.ipynb +++ /dev/null @@ -1,162 +0,0 @@ -{ - "cells": [ - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "%matplotlib inline" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "\n# Tensor transforms and JIT\n\nThis example illustrates various features that are now supported by the\n`image transformations ` on Tensor images. In particular, we\nshow how image transforms can be performed on GPU, and how one can also script\nthem using JIT compilation.\n\nPrior to v0.8.0, transforms in torchvision have traditionally been PIL-centric\nand presented multiple limitations due to that. 
Now, since v0.8.0, transforms\nimplementations are Tensor and PIL compatible and we can achieve the following\nnew features:\n\n- transform multi-band torch tensor images (with more than 3-4 channels)\n- torchscript transforms together with your model for deployment\n- support for GPU acceleration\n- batched transformation such as for videos\n- read and decode data directly as torch tensor with torchscript support (for PNG and JPEG image formats)\n\n

Note

These features are only possible with **Tensor** images.
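Two of the bullets above (multi-band images and batched transformation) are not exercised in this example. The hedged sketch below, built on a fake batch of four 6-band frames, shows that deterministic tensor transforms only act on the trailing ``(..., H, W)`` dimensions, so batch and channel counts pass through untouched:

```python
import torch
import torchvision.transforms as T

# Fake data: a batch of four 6-band frames (e.g. video or satellite imagery), values in [0, 255].
frames = torch.randint(0, 256, (4, 6, 128, 128), dtype=torch.uint8)

batched_transforms = torch.nn.Sequential(
    T.Resize([64]),                    # single int in a list, mirroring the Predictor example below
    T.CenterCrop(56),
    T.ConvertImageDtype(torch.float),
)
out = batched_transforms(frames)
print(out.shape)  # torch.Size([4, 6, 56, 56])
```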

\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "from pathlib import Path\n\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nimport torch\nimport torchvision.transforms as T\nfrom torchvision.io import read_image\n\n\nplt.rcParams[\"savefig.bbox\"] = 'tight'\ntorch.manual_seed(1)\n\n\ndef show(imgs):\n fix, axs = plt.subplots(ncols=len(imgs), squeeze=False)\n for i, img in enumerate(imgs):\n img = T.ToPILImage()(img.to('cpu'))\n axs[0, i].imshow(np.asarray(img))\n axs[0, i].set(xticklabels=[], yticklabels=[], xticks=[], yticks=[])" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The :func:`~torchvision.io.read_image` function allows to read an image and\ndirectly load it as a tensor\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "dog1 = read_image(str(Path('assets') / 'dog1.jpg'))\ndog2 = read_image(str(Path('assets') / 'dog2.jpg'))\nshow([dog1, dog2])" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Transforming images on GPU\nMost transforms natively support tensors on top of PIL images (to visualize\nthe effect of the transforms, you may refer to see\n`sphx_glr_auto_examples_plot_transforms.py`).\nUsing tensor images, we can run the transforms on GPUs if cuda is available!\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "import torch.nn as nn\n\ntransforms = torch.nn.Sequential(\n T.RandomCrop(224),\n T.RandomHorizontalFlip(p=0.3),\n)\n\ndevice = 'cuda' if torch.cuda.is_available() else 'cpu'\ndog1 = dog1.to(device)\ndog2 = dog2.to(device)\n\ntransformed_dog1 = transforms(dog1)\ntransformed_dog2 = transforms(dog2)\nshow([transformed_dog1, transformed_dog2])" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Scriptable transforms for easier deployment via torchscript\nWe now show how to combine image transformations and a model forward pass,\nwhile using ``torch.jit.script`` to obtain a single scripted module.\n\nLet's define a ``Predictor`` module that transforms the input tensor and then\napplies an ImageNet model on it.\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "from torchvision.models import resnet18\n\n\nclass Predictor(nn.Module):\n\n def __init__(self):\n super().__init__()\n self.resnet18 = resnet18(pretrained=True, progress=False).eval()\n self.transforms = nn.Sequential(\n T.Resize([256, ]), # We use single int value inside a list due to torchscript type restrictions\n T.CenterCrop(224),\n T.ConvertImageDtype(torch.float),\n T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])\n )\n\n def forward(self, x: torch.Tensor) -> torch.Tensor:\n with torch.no_grad():\n x = self.transforms(x)\n y_pred = self.resnet18(x)\n return y_pred.argmax(dim=1)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Now, let's define scripted and non-scripted instances of ``Predictor`` and\napply it on multiple tensor images of the same size\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "predictor = Predictor().to(device)\nscripted_predictor = torch.jit.script(predictor).to(device)\n\nbatch = 
torch.stack([dog1, dog2]).to(device)\n\nres = predictor(batch)\nres_scripted = scripted_predictor(batch)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "We can verify that the prediction of the scripted and non-scripted models are\nthe same:\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "import json\n\nwith open(Path('assets') / 'imagenet_class_index.json', 'r') as labels_file:\n labels = json.load(labels_file)\n\nfor i, (pred, pred_scripted) in enumerate(zip(res, res_scripted)):\n assert pred == pred_scripted\n print(f\"Prediction for Dog {i + 1}: {labels[str(pred.item())]}\")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Since the model is scripted, it can be easily dumped on disk and re-used\n\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "collapsed": false - }, - "outputs": [], - "source": [ - "import tempfile\n\nwith tempfile.NamedTemporaryFile() as f:\n scripted_predictor.save(f.name)\n\n dumped_scripted_predictor = torch.jit.load(f.name)\n res_scripted_dumped = dumped_scripted_predictor(batch)\nassert (res_scripted_dumped == res_scripted).all()" - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.7.11" - } - }, - "nbformat": 4, - "nbformat_minor": 0 -} \ No newline at end of file diff --git a/0.11./_downloads/b8b00ddf3e9bca37ad16e1aced1e3ea4/plot_repurposing_annotations.py b/0.11./_downloads/b8b00ddf3e9bca37ad16e1aced1e3ea4/plot_repurposing_annotations.py deleted file mode 100644 index fb4835496c3..00000000000 --- a/0.11./_downloads/b8b00ddf3e9bca37ad16e1aced1e3ea4/plot_repurposing_annotations.py +++ /dev/null @@ -1,205 +0,0 @@ -""" -===================================== -Repurposing masks into bounding boxes -===================================== - -The following example illustrates the operations available -the :ref:`torchvision.ops ` module for repurposing -segmentation masks into object localization annotations for different tasks -(e.g. transforming masks used by instance and panoptic segmentation -methods into bounding boxes used by object detection methods). -""" - -# sphinx_gallery_thumbnail_path = "../../gallery/assets/repurposing_annotations_thumbnail.png" - -import os -import numpy as np -import torch -import matplotlib.pyplot as plt - -import torchvision.transforms.functional as F - - -ASSETS_DIRECTORY = "assets" - -plt.rcParams["savefig.bbox"] = "tight" - - -def show(imgs): - if not isinstance(imgs, list): - imgs = [imgs] - fix, axs = plt.subplots(ncols=len(imgs), squeeze=False) - for i, img in enumerate(imgs): - img = img.detach() - img = F.to_pil_image(img) - axs[0, i].imshow(np.asarray(img)) - axs[0, i].set(xticklabels=[], yticklabels=[], xticks=[], yticks=[]) - - -#################################### -# Masks -# ----- -# In tasks like instance and panoptic segmentation, masks are commonly defined, and are defined by this package, -# as a multi-dimensional array (e.g. a NumPy array or a PyTorch tensor) with the following shape: -# -# (num_objects, height, width) -# -# Where num_objects is the number of annotated objects in the image. 
Each (height, width) object corresponds to exactly -# one object. For example, if your input image has the dimensions 224 x 224 and has four annotated objects the shape -# of your masks annotation has the following shape: -# -# (4, 224, 224). -# -# A nice property of masks is that they can be easily repurposed to be used in methods to solve a variety of object -# localization tasks. - -#################################### -# Converting Masks to Bounding Boxes -# ----------------------------------------------- -# For example, the :func:`~torchvision.ops.masks_to_boxes` operation can be used to -# transform masks into bounding boxes that can be -# used as input to detection models such as FasterRCNN and RetinaNet. -# We will take images and masks from the `PenFudan Dataset `_. - - -from torchvision.io import read_image - -img_path = os.path.join(ASSETS_DIRECTORY, "FudanPed00054.png") -mask_path = os.path.join(ASSETS_DIRECTORY, "FudanPed00054_mask.png") -img = read_image(img_path) -mask = read_image(mask_path) - - -######################### -# Here the masks are represented as a PNG Image, with floating point values. -# Each pixel is encoded as different colors, with 0 being background. -# Notice that the spatial dimensions of image and mask match. - -print(mask.size()) -print(img.size()) -print(mask) - -############################ - -# We get the unique colors, as these would be the object ids. -obj_ids = torch.unique(mask) - -# first id is the background, so remove it. -obj_ids = obj_ids[1:] - -# split the color-encoded mask into a set of boolean masks. -# Note that this snippet would work as well if the masks were float values instead of ints. -masks = mask == obj_ids[:, None, None] - -######################## -# Now the masks are a boolean tensor. -# The first dimension in this case 3 and denotes the number of instances: there are 3 people in the image. -# The other two dimensions are height and width, which are equal to the dimensions of the image. -# For each instance, the boolean tensors represent if the particular pixel -# belongs to the segmentation mask of the image. - -print(masks.size()) -print(masks) - -#################################### -# Let us visualize an image and plot its corresponding segmentation masks. -# We will use the :func:`~torchvision.utils.draw_segmentation_masks` to draw the segmentation masks. - -from torchvision.utils import draw_segmentation_masks - -drawn_masks = [] -for mask in masks: - drawn_masks.append(draw_segmentation_masks(img, mask, alpha=0.8, colors="blue")) - -show(drawn_masks) - -#################################### -# To convert the boolean masks into bounding boxes. -# We will use the :func:`~torchvision.ops.masks_to_boxes` from the torchvision.ops module -# It returns the boxes in ``(xmin, ymin, xmax, ymax)`` format. - -from torchvision.ops import masks_to_boxes - -boxes = masks_to_boxes(masks) -print(boxes.size()) -print(boxes) - -#################################### -# As the shape denotes, there are 3 boxes and in ``(xmin, ymin, xmax, ymax)`` format. -# These can be visualized very easily with :func:`~torchvision.utils.draw_bounding_boxes` utility -# provided in :ref:`torchvision.utils `. - -from torchvision.utils import draw_bounding_boxes - -drawn_boxes = draw_bounding_boxes(img, boxes, colors="red") -show(drawn_boxes) - -################################### -# These boxes can now directly be used by detection models in torchvision. 
-# Here is demo with a Faster R-CNN model loaded from -# :func:`~torchvision.models.detection.fasterrcnn_resnet50_fpn` - -from torchvision.models.detection import fasterrcnn_resnet50_fpn - -model = fasterrcnn_resnet50_fpn(pretrained=True, progress=False) -print(img.size()) - -img = F.convert_image_dtype(img, torch.float) -target = {} -target["boxes"] = boxes -target["labels"] = labels = torch.ones((masks.size(0),), dtype=torch.int64) -detection_outputs = model(img.unsqueeze(0), [target]) - - -#################################### -# Converting Segmentation Dataset to Detection Dataset -# ---------------------------------------------------- -# -# With this utility it becomes very simple to convert a segmentation dataset to a detection dataset. -# With this we can now use a segmentation dataset to train a detection model. -# One can similarly convert panoptic dataset to detection dataset. -# Here is an example where we re-purpose the dataset from the -# `PenFudan Detection Tutorial `_. - -class SegmentationToDetectionDataset(torch.utils.data.Dataset): - def __init__(self, root, transforms): - self.root = root - self.transforms = transforms - # load all image files, sorting them to - # ensure that they are aligned - self.imgs = list(sorted(os.listdir(os.path.join(root, "PNGImages")))) - self.masks = list(sorted(os.listdir(os.path.join(root, "PedMasks")))) - - def __getitem__(self, idx): - # load images and masks - img_path = os.path.join(self.root, "PNGImages", self.imgs[idx]) - mask_path = os.path.join(self.root, "PedMasks", self.masks[idx]) - - img = read_image(img_path) - mask = read_image(mask_path) - - img = F.convert_image_dtype(img, dtype=torch.float) - mask = F.convert_image_dtype(mask, dtype=torch.float) - - # We get the unique colors, as these would be the object ids. - obj_ids = torch.unique(mask) - - # first id is the background, so remove it. - obj_ids = obj_ids[1:] - - # split the color-encoded mask into a set of boolean masks. 
- masks = mask == obj_ids[:, None, None] - - boxes = masks_to_boxes(masks) - - # there is only one class - labels = torch.ones((masks.shape[0],), dtype=torch.int64) - - target = {} - target["boxes"] = boxes - target["labels"] = labels - - if self.transforms is not None: - img, target = self.transforms(img, target) - - return img, target diff --git a/0.11./_images/sphx_glr_plot_repurposing_annotations_001.png b/0.11./_images/sphx_glr_plot_repurposing_annotations_001.png deleted file mode 100644 index 12a725677d8..00000000000 Binary files a/0.11./_images/sphx_glr_plot_repurposing_annotations_001.png and /dev/null differ diff --git a/0.11./_images/sphx_glr_plot_repurposing_annotations_002.png b/0.11./_images/sphx_glr_plot_repurposing_annotations_002.png deleted file mode 100644 index 6be6913f533..00000000000 Binary files a/0.11./_images/sphx_glr_plot_repurposing_annotations_002.png and /dev/null differ diff --git a/0.11./_images/sphx_glr_plot_repurposing_annotations_thumb.png b/0.11./_images/sphx_glr_plot_repurposing_annotations_thumb.png deleted file mode 100644 index fbed6047b39..00000000000 Binary files a/0.11./_images/sphx_glr_plot_repurposing_annotations_thumb.png and /dev/null differ diff --git a/0.11./_images/sphx_glr_plot_scripted_tensor_transforms_001.png b/0.11./_images/sphx_glr_plot_scripted_tensor_transforms_001.png deleted file mode 100644 index 0ffa6b771e3..00000000000 Binary files a/0.11./_images/sphx_glr_plot_scripted_tensor_transforms_001.png and /dev/null differ diff --git a/0.11./_images/sphx_glr_plot_scripted_tensor_transforms_002.png b/0.11./_images/sphx_glr_plot_scripted_tensor_transforms_002.png deleted file mode 100644 index 0a25d59f73c..00000000000 Binary files a/0.11./_images/sphx_glr_plot_scripted_tensor_transforms_002.png and /dev/null differ diff --git a/0.11./_images/sphx_glr_plot_scripted_tensor_transforms_thumb.png b/0.11./_images/sphx_glr_plot_scripted_tensor_transforms_thumb.png deleted file mode 100644 index d1a78f5e5d2..00000000000 Binary files a/0.11./_images/sphx_glr_plot_scripted_tensor_transforms_thumb.png and /dev/null differ diff --git a/0.11./_images/sphx_glr_plot_transforms_001.png b/0.11./_images/sphx_glr_plot_transforms_001.png deleted file mode 100644 index a3c12318136..00000000000 Binary files a/0.11./_images/sphx_glr_plot_transforms_001.png and /dev/null differ diff --git a/0.11./_images/sphx_glr_plot_transforms_002.png b/0.11./_images/sphx_glr_plot_transforms_002.png deleted file mode 100644 index af16bec8706..00000000000 Binary files a/0.11./_images/sphx_glr_plot_transforms_002.png and /dev/null differ diff --git a/0.11./_images/sphx_glr_plot_transforms_003.png b/0.11./_images/sphx_glr_plot_transforms_003.png deleted file mode 100644 index 36cf9b27d46..00000000000 Binary files a/0.11./_images/sphx_glr_plot_transforms_003.png and /dev/null differ diff --git a/0.11./_images/sphx_glr_plot_transforms_004.png b/0.11./_images/sphx_glr_plot_transforms_004.png deleted file mode 100644 index 993a060d324..00000000000 Binary files a/0.11./_images/sphx_glr_plot_transforms_004.png and /dev/null differ diff --git a/0.11./_images/sphx_glr_plot_transforms_005.png b/0.11./_images/sphx_glr_plot_transforms_005.png deleted file mode 100644 index d7a47aa622e..00000000000 Binary files a/0.11./_images/sphx_glr_plot_transforms_005.png and /dev/null differ diff --git a/0.11./_images/sphx_glr_plot_transforms_006.png b/0.11./_images/sphx_glr_plot_transforms_006.png deleted file mode 100644 index 4f3ed913298..00000000000 Binary files 
a/0.11./_images/sphx_glr_plot_transforms_006.png and /dev/null differ diff --git a/0.11./_images/sphx_glr_plot_transforms_007.png b/0.11./_images/sphx_glr_plot_transforms_007.png deleted file mode 100644 index 78c8e80806c..00000000000 Binary files a/0.11./_images/sphx_glr_plot_transforms_007.png and /dev/null differ diff --git a/0.11./_images/sphx_glr_plot_transforms_008.png b/0.11./_images/sphx_glr_plot_transforms_008.png deleted file mode 100644 index 1c8306e17a5..00000000000 Binary files a/0.11./_images/sphx_glr_plot_transforms_008.png and /dev/null differ diff --git a/0.11./_images/sphx_glr_plot_transforms_009.png b/0.11./_images/sphx_glr_plot_transforms_009.png deleted file mode 100644 index c359be4a8cc..00000000000 Binary files a/0.11./_images/sphx_glr_plot_transforms_009.png and /dev/null differ diff --git a/0.11./_images/sphx_glr_plot_transforms_010.png b/0.11./_images/sphx_glr_plot_transforms_010.png deleted file mode 100644 index dce7325bbde..00000000000 Binary files a/0.11./_images/sphx_glr_plot_transforms_010.png and /dev/null differ diff --git a/0.11./_images/sphx_glr_plot_transforms_011.png b/0.11./_images/sphx_glr_plot_transforms_011.png deleted file mode 100644 index 88f6f352d1c..00000000000 Binary files a/0.11./_images/sphx_glr_plot_transforms_011.png and /dev/null differ diff --git a/0.11./_images/sphx_glr_plot_transforms_012.png b/0.11./_images/sphx_glr_plot_transforms_012.png deleted file mode 100644 index c3e919fd57a..00000000000 Binary files a/0.11./_images/sphx_glr_plot_transforms_012.png and /dev/null differ diff --git a/0.11./_images/sphx_glr_plot_transforms_013.png b/0.11./_images/sphx_glr_plot_transforms_013.png deleted file mode 100644 index 889fdd4163c..00000000000 Binary files a/0.11./_images/sphx_glr_plot_transforms_013.png and /dev/null differ diff --git a/0.11./_images/sphx_glr_plot_transforms_014.png b/0.11./_images/sphx_glr_plot_transforms_014.png deleted file mode 100644 index 0e32e949ce6..00000000000 Binary files a/0.11./_images/sphx_glr_plot_transforms_014.png and /dev/null differ diff --git a/0.11./_images/sphx_glr_plot_transforms_015.png b/0.11./_images/sphx_glr_plot_transforms_015.png deleted file mode 100644 index a9662c81b61..00000000000 Binary files a/0.11./_images/sphx_glr_plot_transforms_015.png and /dev/null differ diff --git a/0.11./_images/sphx_glr_plot_transforms_016.png b/0.11./_images/sphx_glr_plot_transforms_016.png deleted file mode 100644 index 0fba0cb687f..00000000000 Binary files a/0.11./_images/sphx_glr_plot_transforms_016.png and /dev/null differ diff --git a/0.11./_images/sphx_glr_plot_transforms_017.png b/0.11./_images/sphx_glr_plot_transforms_017.png deleted file mode 100644 index 0fba0cb687f..00000000000 Binary files a/0.11./_images/sphx_glr_plot_transforms_017.png and /dev/null differ diff --git a/0.11./_images/sphx_glr_plot_transforms_018.png b/0.11./_images/sphx_glr_plot_transforms_018.png deleted file mode 100644 index cf3fb3cbd79..00000000000 Binary files a/0.11./_images/sphx_glr_plot_transforms_018.png and /dev/null differ diff --git a/0.11./_images/sphx_glr_plot_transforms_019.png b/0.11./_images/sphx_glr_plot_transforms_019.png deleted file mode 100644 index f4f713729b7..00000000000 Binary files a/0.11./_images/sphx_glr_plot_transforms_019.png and /dev/null differ diff --git a/0.11./_images/sphx_glr_plot_transforms_020.png b/0.11./_images/sphx_glr_plot_transforms_020.png deleted file mode 100644 index 472d6544f45..00000000000 Binary files a/0.11./_images/sphx_glr_plot_transforms_020.png and /dev/null differ diff --git 
a/0.11./_images/sphx_glr_plot_transforms_021.png b/0.11./_images/sphx_glr_plot_transforms_021.png deleted file mode 100644 index 123c1bc748c..00000000000 Binary files a/0.11./_images/sphx_glr_plot_transforms_021.png and /dev/null differ diff --git a/0.11./_images/sphx_glr_plot_transforms_022.png b/0.11./_images/sphx_glr_plot_transforms_022.png deleted file mode 100644 index 84655812c0b..00000000000 Binary files a/0.11./_images/sphx_glr_plot_transforms_022.png and /dev/null differ diff --git a/0.11./_images/sphx_glr_plot_transforms_023.png b/0.11./_images/sphx_glr_plot_transforms_023.png deleted file mode 100644 index 6497576a3cc..00000000000 Binary files a/0.11./_images/sphx_glr_plot_transforms_023.png and /dev/null differ diff --git a/0.11./_images/sphx_glr_plot_transforms_024.png b/0.11./_images/sphx_glr_plot_transforms_024.png deleted file mode 100644 index a572f221a1d..00000000000 Binary files a/0.11./_images/sphx_glr_plot_transforms_024.png and /dev/null differ diff --git a/0.11./_images/sphx_glr_plot_transforms_thumb.png b/0.11./_images/sphx_glr_plot_transforms_thumb.png deleted file mode 100644 index d6d933b2a69..00000000000 Binary files a/0.11./_images/sphx_glr_plot_transforms_thumb.png and /dev/null differ diff --git a/0.11./_images/sphx_glr_plot_video_api_001.png b/0.11./_images/sphx_glr_plot_video_api_001.png deleted file mode 100644 index 0305457b9fc..00000000000 Binary files a/0.11./_images/sphx_glr_plot_video_api_001.png and /dev/null differ diff --git a/0.11./_images/sphx_glr_plot_video_api_thumb.png b/0.11./_images/sphx_glr_plot_video_api_thumb.png deleted file mode 100644 index c4555201856..00000000000 Binary files a/0.11./_images/sphx_glr_plot_video_api_thumb.png and /dev/null differ diff --git a/0.11./_images/sphx_glr_plot_visualization_utils_001.png b/0.11./_images/sphx_glr_plot_visualization_utils_001.png deleted file mode 100644 index f52173325f9..00000000000 Binary files a/0.11./_images/sphx_glr_plot_visualization_utils_001.png and /dev/null differ diff --git a/0.11./_images/sphx_glr_plot_visualization_utils_002.png b/0.11./_images/sphx_glr_plot_visualization_utils_002.png deleted file mode 100644 index 4fe56400208..00000000000 Binary files a/0.11./_images/sphx_glr_plot_visualization_utils_002.png and /dev/null differ diff --git a/0.11./_images/sphx_glr_plot_visualization_utils_003.png b/0.11./_images/sphx_glr_plot_visualization_utils_003.png deleted file mode 100644 index df5482a7615..00000000000 Binary files a/0.11./_images/sphx_glr_plot_visualization_utils_003.png and /dev/null differ diff --git a/0.11./_images/sphx_glr_plot_visualization_utils_004.png b/0.11./_images/sphx_glr_plot_visualization_utils_004.png deleted file mode 100644 index c3ffb3325b1..00000000000 Binary files a/0.11./_images/sphx_glr_plot_visualization_utils_004.png and /dev/null differ diff --git a/0.11./_images/sphx_glr_plot_visualization_utils_005.png b/0.11./_images/sphx_glr_plot_visualization_utils_005.png deleted file mode 100644 index 1dbdab571dc..00000000000 Binary files a/0.11./_images/sphx_glr_plot_visualization_utils_005.png and /dev/null differ diff --git a/0.11./_images/sphx_glr_plot_visualization_utils_006.png b/0.11./_images/sphx_glr_plot_visualization_utils_006.png deleted file mode 100644 index 7f65851a71b..00000000000 Binary files a/0.11./_images/sphx_glr_plot_visualization_utils_006.png and /dev/null differ diff --git a/0.11./_images/sphx_glr_plot_visualization_utils_007.png b/0.11./_images/sphx_glr_plot_visualization_utils_007.png deleted file mode 100644 index 
0098ec17765..00000000000 Binary files a/0.11./_images/sphx_glr_plot_visualization_utils_007.png and /dev/null differ diff --git a/0.11./_images/sphx_glr_plot_visualization_utils_008.png b/0.11./_images/sphx_glr_plot_visualization_utils_008.png deleted file mode 100644 index df2272d0b67..00000000000 Binary files a/0.11./_images/sphx_glr_plot_visualization_utils_008.png and /dev/null differ diff --git a/0.11./_images/sphx_glr_plot_visualization_utils_009.png b/0.11./_images/sphx_glr_plot_visualization_utils_009.png deleted file mode 100644 index e7cca5bab1f..00000000000 Binary files a/0.11./_images/sphx_glr_plot_visualization_utils_009.png and /dev/null differ diff --git a/0.11./_images/sphx_glr_plot_visualization_utils_010.png b/0.11./_images/sphx_glr_plot_visualization_utils_010.png deleted file mode 100644 index 58725b5ffda..00000000000 Binary files a/0.11./_images/sphx_glr_plot_visualization_utils_010.png and /dev/null differ diff --git a/0.11./_images/sphx_glr_plot_visualization_utils_thumb.png b/0.11./_images/sphx_glr_plot_visualization_utils_thumb.png deleted file mode 100644 index 359de279600..00000000000 Binary files a/0.11./_images/sphx_glr_plot_visualization_utils_thumb.png and /dev/null differ diff --git a/0.11./_modules/index.html b/0.11./_modules/index.html deleted file mode 100644 index 82db8007117..00000000000 --- a/0.11./_modules/index.html +++ /dev/null @@ -1,692 +0,0 @@ - - - - - - - - - - - - Overview: module code — Torchvision main documentation - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
All modules for which code is available

- - - - - - - - \ No newline at end of file diff --git a/0.11./_modules/torchvision.html b/0.11./_modules/torchvision.html deleted file mode 100644 index ba98a7891a3..00000000000 --- a/0.11./_modules/torchvision.html +++ /dev/null @@ -1,724 +0,0 @@ - - - - - - - - - - - - torchvision — Torchvision main documentation - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Source code for torchvision

-import warnings
-import os
-
-from .extension import _HAS_OPS
-
-from torchvision import models
-from torchvision import datasets
-from torchvision import ops
-from torchvision import transforms
-from torchvision import utils
-from torchvision import io
-
-import torch
-
-try:
-    from .version import __version__  # noqa: F401
-except ImportError:
-    pass
-
-# Check if torchvision is being imported within the root folder
-if (not _HAS_OPS and os.path.dirname(os.path.realpath(__file__)) ==
-        os.path.join(os.path.realpath(os.getcwd()), 'torchvision')):
-    message = ('You are importing torchvision within its own root folder ({}). '
-               'This is not expected to work and may give errors. Please exit the '
-               'torchvision project source and relaunch your python interpreter.')
-    warnings.warn(message.format(os.getcwd()))
-
-_image_backend = 'PIL'
-
-_video_backend = "pyav"
-
-
-
[docs]def set_image_backend(backend): - """ - Specifies the package used to load images. - - Args: - backend (string): Name of the image backend. one of {'PIL', 'accimage'}. - The :mod:`accimage` package uses the Intel IPP library. It is - generally faster than PIL, but does not support as many operations. - """ - global _image_backend - if backend not in ['PIL', 'accimage']: - raise ValueError("Invalid backend '{}'. Options are 'PIL' and 'accimage'" - .format(backend)) - _image_backend = backend
- - -
[docs]def get_image_backend(): - """ - Gets the name of the package used to load images - """ - return _image_backend
- - -
[docs]def set_video_backend(backend): - """ - Specifies the package used to decode videos. - - Args: - backend (string): Name of the video backend. one of {'pyav', 'video_reader'}. - The :mod:`pyav` package uses the 3rd party PyAv library. It is a Pythonic - binding for the FFmpeg libraries. - The :mod:`video_reader` package includes a native C++ implementation on - top of FFMPEG libraries, and a python API of TorchScript custom operator. - It generally decodes faster than :mod:`pyav`, but is perhaps less robust. - - .. note:: - Building with FFMPEG is disabled by default in the latest `main`. If you want to use the 'video_reader' - backend, please compile torchvision from source. - """ - global _video_backend - if backend not in ["pyav", "video_reader"]: - raise ValueError( - "Invalid video backend '%s'. Options are 'pyav' and 'video_reader'" % backend - ) - if backend == "video_reader" and not io._HAS_VIDEO_OPT: - message = ( - "video_reader video backend is not available." - " Please compile torchvision from source and try again" - ) - warnings.warn(message) - else: - _video_backend = backend
- - -
[docs]def get_video_backend(): - """ - Returns the currently active video backend used to decode videos. - - Returns: - str: Name of the video backend. one of {'pyav', 'video_reader'}. - """ - - return _video_backend
- - -def _is_tracing(): - return torch._C._get_tracing_state() -
- - - - - - - - \ No newline at end of file diff --git a/0.11./_sources/auto_examples/index.rst.txt b/0.11./_sources/auto_examples/index.rst.txt deleted file mode 100644 index 424af3b367e..00000000000 --- a/0.11./_sources/auto_examples/index.rst.txt +++ /dev/null @@ -1,145 +0,0 @@ -:orphan: - - - -.. _sphx_glr_auto_examples: - -Example gallery -=============== - -Below is a gallery of examples - - - -.. raw:: html - -
- -.. only:: html - - .. figure:: /auto_examples/images/thumb/sphx_glr_plot_scripted_tensor_transforms_thumb.png - :alt: Tensor transforms and JIT - - :ref:`sphx_glr_auto_examples_plot_scripted_tensor_transforms.py` - -.. raw:: html - -
- - -.. toctree:: - :hidden: - - /auto_examples/plot_scripted_tensor_transforms - -.. raw:: html - -
- -.. only:: html - - .. figure:: /auto_examples/images/thumb/sphx_glr_plot_repurposing_annotations_thumb.png - :alt: Repurposing masks into bounding boxes - - :ref:`sphx_glr_auto_examples_plot_repurposing_annotations.py` - -.. raw:: html - -
- - -.. toctree:: - :hidden: - - /auto_examples/plot_repurposing_annotations - -.. raw:: html - -
- -.. only:: html - - .. figure:: /auto_examples/images/thumb/sphx_glr_plot_transforms_thumb.png - :alt: Illustration of transforms - - :ref:`sphx_glr_auto_examples_plot_transforms.py` - -.. raw:: html - -
- - -.. toctree:: - :hidden: - - /auto_examples/plot_transforms - -.. raw:: html - -
- -.. only:: html - - .. figure:: /auto_examples/images/thumb/sphx_glr_plot_visualization_utils_thumb.png - :alt: Visualization utilities - - :ref:`sphx_glr_auto_examples_plot_visualization_utils.py` - -.. raw:: html - -
- - -.. toctree:: - :hidden: - - /auto_examples/plot_visualization_utils - -.. raw:: html - -
- -.. only:: html - - .. figure:: /auto_examples/images/thumb/sphx_glr_plot_video_api_thumb.png - :alt: Video API - - :ref:`sphx_glr_auto_examples_plot_video_api.py` - -.. raw:: html - -
- - -.. toctree:: - :hidden: - - /auto_examples/plot_video_api -.. raw:: html - -
- - - -.. only :: html - - .. container:: sphx-glr-footer - :class: sphx-glr-footer-gallery - - - .. container:: sphx-glr-download sphx-glr-download-python - - :download:`Download all examples in Python source code: auto_examples_python.zip ` - - - - .. container:: sphx-glr-download sphx-glr-download-jupyter - - :download:`Download all examples in Jupyter notebooks: auto_examples_jupyter.zip ` - - -.. only:: html - - .. rst-class:: sphx-glr-signature - - `Gallery generated by Sphinx-Gallery `_ diff --git a/0.11./_sources/auto_examples/plot_repurposing_annotations.rst.txt b/0.11./_sources/auto_examples/plot_repurposing_annotations.rst.txt deleted file mode 100644 index 082093ea317..00000000000 --- a/0.11./_sources/auto_examples/plot_repurposing_annotations.rst.txt +++ /dev/null @@ -1,461 +0,0 @@ - -.. DO NOT EDIT. -.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY. -.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE: -.. "auto_examples/plot_repurposing_annotations.py" -.. LINE NUMBERS ARE GIVEN BELOW. - -.. only:: html - - .. note:: - :class: sphx-glr-download-link-note - - Click :ref:`here ` - to download the full example code - -.. rst-class:: sphx-glr-example-title - -.. _sphx_glr_auto_examples_plot_repurposing_annotations.py: - - -===================================== -Repurposing masks into bounding boxes -===================================== - -The following example illustrates the operations available -the :ref:`torchvision.ops ` module for repurposing -segmentation masks into object localization annotations for different tasks -(e.g. transforming masks used by instance and panoptic segmentation -methods into bounding boxes used by object detection methods). - -.. GENERATED FROM PYTHON SOURCE LINES 12-39 - -.. code-block:: default - - - # sphinx_gallery_thumbnail_path = "../../gallery/assets/repurposing_annotations_thumbnail.png" - - import os - import numpy as np - import torch - import matplotlib.pyplot as plt - - import torchvision.transforms.functional as F - - - ASSETS_DIRECTORY = "assets" - - plt.rcParams["savefig.bbox"] = "tight" - - - def show(imgs): - if not isinstance(imgs, list): - imgs = [imgs] - fix, axs = plt.subplots(ncols=len(imgs), squeeze=False) - for i, img in enumerate(imgs): - img = img.detach() - img = F.to_pil_image(img) - axs[0, i].imshow(np.asarray(img)) - axs[0, i].set(xticklabels=[], yticklabels=[], xticks=[], yticks=[]) - - - - - - - - - -.. GENERATED FROM PYTHON SOURCE LINES 40-55 - -Masks ------ -In tasks like instance and panoptic segmentation, masks are commonly defined, and are defined by this package, -as a multi-dimensional array (e.g. a NumPy array or a PyTorch tensor) with the following shape: - - (num_objects, height, width) - -Where num_objects is the number of annotated objects in the image. Each (height, width) object corresponds to exactly -one object. For example, if your input image has the dimensions 224 x 224 and has four annotated objects the shape -of your masks annotation has the following shape: - - (4, 224, 224). - -A nice property of masks is that they can be easily repurposed to be used in methods to solve a variety of object -localization tasks. - -.. GENERATED FROM PYTHON SOURCE LINES 57-63 - -Converting Masks to Bounding Boxes ------------------------------------------------ -For example, the :func:`~torchvision.ops.masks_to_boxes` operation can be used to -transform masks into bounding boxes that can be -used as input to detection models such as FasterRCNN and RetinaNet. 
-We will take images and masks from the `PenFudan Dataset `_. - -.. GENERATED FROM PYTHON SOURCE LINES 63-73 - -.. code-block:: default - - - - from torchvision.io import read_image - - img_path = os.path.join(ASSETS_DIRECTORY, "FudanPed00054.png") - mask_path = os.path.join(ASSETS_DIRECTORY, "FudanPed00054_mask.png") - img = read_image(img_path) - mask = read_image(mask_path) - - - - - - - - - -.. GENERATED FROM PYTHON SOURCE LINES 74-77 - -Here the masks are represented as a PNG Image, with floating point values. -Each pixel is encoded as different colors, with 0 being background. -Notice that the spatial dimensions of image and mask match. - -.. GENERATED FROM PYTHON SOURCE LINES 77-82 - -.. code-block:: default - - - print(mask.size()) - print(img.size()) - print(mask) - - - - - -.. rst-class:: sphx-glr-script-out - - Out: - - .. code-block:: none - - torch.Size([1, 498, 533]) - torch.Size([3, 498, 533]) - tensor([[[0, 0, 0, ..., 0, 0, 0], - [0, 0, 0, ..., 0, 0, 0], - [0, 0, 0, ..., 0, 0, 0], - ..., - [0, 0, 0, ..., 0, 0, 0], - [0, 0, 0, ..., 0, 0, 0], - [0, 0, 0, ..., 0, 0, 0]]], dtype=torch.uint8) - - - - -.. GENERATED FROM PYTHON SOURCE LINES 83-94 - -.. code-block:: default - - - # We get the unique colors, as these would be the object ids. - obj_ids = torch.unique(mask) - - # first id is the background, so remove it. - obj_ids = obj_ids[1:] - - # split the color-encoded mask into a set of boolean masks. - # Note that this snippet would work as well if the masks were float values instead of ints. - masks = mask == obj_ids[:, None, None] - - - - - - - - -.. GENERATED FROM PYTHON SOURCE LINES 95-100 - -Now the masks are a boolean tensor. -The first dimension in this case 3 and denotes the number of instances: there are 3 people in the image. -The other two dimensions are height and width, which are equal to the dimensions of the image. -For each instance, the boolean tensors represent if the particular pixel -belongs to the segmentation mask of the image. - -.. GENERATED FROM PYTHON SOURCE LINES 100-104 - -.. code-block:: default - - - print(masks.size()) - print(masks) - - - - - -.. rst-class:: sphx-glr-script-out - - Out: - - .. code-block:: none - - torch.Size([3, 498, 533]) - tensor([[[False, False, False, ..., False, False, False], - [False, False, False, ..., False, False, False], - [False, False, False, ..., False, False, False], - ..., - [False, False, False, ..., False, False, False], - [False, False, False, ..., False, False, False], - [False, False, False, ..., False, False, False]], - - [[False, False, False, ..., False, False, False], - [False, False, False, ..., False, False, False], - [False, False, False, ..., False, False, False], - ..., - [False, False, False, ..., False, False, False], - [False, False, False, ..., False, False, False], - [False, False, False, ..., False, False, False]], - - [[False, False, False, ..., False, False, False], - [False, False, False, ..., False, False, False], - [False, False, False, ..., False, False, False], - ..., - [False, False, False, ..., False, False, False], - [False, False, False, ..., False, False, False], - [False, False, False, ..., False, False, False]]]) - - - - -.. GENERATED FROM PYTHON SOURCE LINES 105-107 - -Let us visualize an image and plot its corresponding segmentation masks. -We will use the :func:`~torchvision.utils.draw_segmentation_masks` to draw the segmentation masks. - -.. GENERATED FROM PYTHON SOURCE LINES 107-116 - -.. 
code-block:: default - - - from torchvision.utils import draw_segmentation_masks - - drawn_masks = [] - for mask in masks: - drawn_masks.append(draw_segmentation_masks(img, mask, alpha=0.8, colors="blue")) - - show(drawn_masks) - - - - -.. image-sg:: /auto_examples/images/sphx_glr_plot_repurposing_annotations_001.png - :alt: plot repurposing annotations - :srcset: /auto_examples/images/sphx_glr_plot_repurposing_annotations_001.png - :class: sphx-glr-single-img - - - - - -.. GENERATED FROM PYTHON SOURCE LINES 117-120 - -To convert the boolean masks into bounding boxes. -We will use the :func:`~torchvision.ops.masks_to_boxes` from the torchvision.ops module -It returns the boxes in ``(xmin, ymin, xmax, ymax)`` format. - -.. GENERATED FROM PYTHON SOURCE LINES 120-127 - -.. code-block:: default - - - from torchvision.ops import masks_to_boxes - - boxes = masks_to_boxes(masks) - print(boxes.size()) - print(boxes) - - - - - -.. rst-class:: sphx-glr-script-out - - Out: - - .. code-block:: none - - torch.Size([3, 4]) - tensor([[ 96., 134., 181., 417.], - [286., 113., 357., 331.], - [363., 120., 436., 328.]]) - - - - -.. GENERATED FROM PYTHON SOURCE LINES 128-131 - -As the shape denotes, there are 3 boxes and in ``(xmin, ymin, xmax, ymax)`` format. -These can be visualized very easily with :func:`~torchvision.utils.draw_bounding_boxes` utility -provided in :ref:`torchvision.utils `. - -.. GENERATED FROM PYTHON SOURCE LINES 131-137 - -.. code-block:: default - - - from torchvision.utils import draw_bounding_boxes - - drawn_boxes = draw_bounding_boxes(img, boxes, colors="red") - show(drawn_boxes) - - - - -.. image-sg:: /auto_examples/images/sphx_glr_plot_repurposing_annotations_002.png - :alt: plot repurposing annotations - :srcset: /auto_examples/images/sphx_glr_plot_repurposing_annotations_002.png - :class: sphx-glr-single-img - - - - - -.. GENERATED FROM PYTHON SOURCE LINES 138-141 - -These boxes can now directly be used by detection models in torchvision. -Here is demo with a Faster R-CNN model loaded from -:func:`~torchvision.models.detection.fasterrcnn_resnet50_fpn` - -.. GENERATED FROM PYTHON SOURCE LINES 141-154 - -.. code-block:: default - - - from torchvision.models.detection import fasterrcnn_resnet50_fpn - - model = fasterrcnn_resnet50_fpn(pretrained=True, progress=False) - print(img.size()) - - img = F.convert_image_dtype(img, torch.float) - target = {} - target["boxes"] = boxes - target["labels"] = labels = torch.ones((masks.size(0),), dtype=torch.int64) - detection_outputs = model(img.unsqueeze(0), [target]) - - - - - - -.. rst-class:: sphx-glr-script-out - - Out: - - .. code-block:: none - - Downloading: "https://download.pytorch.org/models/fasterrcnn_resnet50_fpn_coco-258fb6c6.pth" to /root/.cache/torch/hub/checkpoints/fasterrcnn_resnet50_fpn_coco-258fb6c6.pth - torch.Size([3, 498, 533]) - /root/project/env/lib/python3.7/site-packages/torch/functional.py:445: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /opt/conda/conda-bld/pytorch_1634272092750/work/aten/src/ATen/native/TensorShape.cpp:2157.) - return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined] - - - - -.. GENERATED FROM PYTHON SOURCE LINES 155-163 - -Converting Segmentation Dataset to Detection Dataset ----------------------------------------------------- - -With this utility it becomes very simple to convert a segmentation dataset to a detection dataset. -With this we can now use a segmentation dataset to train a detection model. 
-One can similarly convert panoptic dataset to detection dataset. -Here is an example where we re-purpose the dataset from the -`PenFudan Detection Tutorial `_. - -.. GENERATED FROM PYTHON SOURCE LINES 163-206 - -.. code-block:: default - - - class SegmentationToDetectionDataset(torch.utils.data.Dataset): - def __init__(self, root, transforms): - self.root = root - self.transforms = transforms - # load all image files, sorting them to - # ensure that they are aligned - self.imgs = list(sorted(os.listdir(os.path.join(root, "PNGImages")))) - self.masks = list(sorted(os.listdir(os.path.join(root, "PedMasks")))) - - def __getitem__(self, idx): - # load images and masks - img_path = os.path.join(self.root, "PNGImages", self.imgs[idx]) - mask_path = os.path.join(self.root, "PedMasks", self.masks[idx]) - - img = read_image(img_path) - mask = read_image(mask_path) - - img = F.convert_image_dtype(img, dtype=torch.float) - mask = F.convert_image_dtype(mask, dtype=torch.float) - - # We get the unique colors, as these would be the object ids. - obj_ids = torch.unique(mask) - - # first id is the background, so remove it. - obj_ids = obj_ids[1:] - - # split the color-encoded mask into a set of boolean masks. - masks = mask == obj_ids[:, None, None] - - boxes = masks_to_boxes(masks) - - # there is only one class - labels = torch.ones((masks.shape[0],), dtype=torch.int64) - - target = {} - target["boxes"] = boxes - target["labels"] = labels - - if self.transforms is not None: - img, target = self.transforms(img, target) - - return img, target - - - - - - - - -.. rst-class:: sphx-glr-timing - - **Total running time of the script:** ( 0 minutes 2.456 seconds) - - -.. _sphx_glr_download_auto_examples_plot_repurposing_annotations.py: - - -.. only :: html - - .. container:: sphx-glr-footer - :class: sphx-glr-footer-example - - - - .. container:: sphx-glr-download sphx-glr-download-python - - :download:`Download Python source code: plot_repurposing_annotations.py ` - - - - .. container:: sphx-glr-download sphx-glr-download-jupyter - - :download:`Download Jupyter notebook: plot_repurposing_annotations.ipynb ` - - -.. only:: html - - .. rst-class:: sphx-glr-signature - - `Gallery generated by Sphinx-Gallery `_ diff --git a/0.11./_sources/auto_examples/plot_scripted_tensor_transforms.rst.txt b/0.11./_sources/auto_examples/plot_scripted_tensor_transforms.rst.txt deleted file mode 100644 index 9c02fd67a58..00000000000 --- a/0.11./_sources/auto_examples/plot_scripted_tensor_transforms.rst.txt +++ /dev/null @@ -1,312 +0,0 @@ - -.. DO NOT EDIT. -.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY. -.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE: -.. "auto_examples/plot_scripted_tensor_transforms.py" -.. LINE NUMBERS ARE GIVEN BELOW. - -.. only:: html - - .. note:: - :class: sphx-glr-download-link-note - - Click :ref:`here ` - to download the full example code - -.. rst-class:: sphx-glr-example-title - -.. _sphx_glr_auto_examples_plot_scripted_tensor_transforms.py: - - -========================= -Tensor transforms and JIT -========================= - -This example illustrates various features that are now supported by the -:ref:`image transformations ` on Tensor images. In particular, we -show how image transforms can be performed on GPU, and how one can also script -them using JIT compilation. - -Prior to v0.8.0, transforms in torchvision have traditionally been PIL-centric -and presented multiple limitations due to that. 
Now, since v0.8.0, transforms -implementations are Tensor and PIL compatible and we can achieve the following -new features: - -- transform multi-band torch tensor images (with more than 3-4 channels) -- torchscript transforms together with your model for deployment -- support for GPU acceleration -- batched transformation such as for videos -- read and decode data directly as torch tensor with torchscript support (for PNG and JPEG image formats) - -.. note:: - These features are only possible with **Tensor** images. - -.. GENERATED FROM PYTHON SOURCE LINES 25-48 - -.. code-block:: default - - - from pathlib import Path - - import matplotlib.pyplot as plt - import numpy as np - - import torch - import torchvision.transforms as T - from torchvision.io import read_image - - - plt.rcParams["savefig.bbox"] = 'tight' - torch.manual_seed(1) - - - def show(imgs): - fix, axs = plt.subplots(ncols=len(imgs), squeeze=False) - for i, img in enumerate(imgs): - img = T.ToPILImage()(img.to('cpu')) - axs[0, i].imshow(np.asarray(img)) - axs[0, i].set(xticklabels=[], yticklabels=[], xticks=[], yticks=[]) - - - - - - - - - -.. GENERATED FROM PYTHON SOURCE LINES 49-51 - -The :func:`~torchvision.io.read_image` function allows to read an image and -directly load it as a tensor - -.. GENERATED FROM PYTHON SOURCE LINES 51-56 - -.. code-block:: default - - - dog1 = read_image(str(Path('assets') / 'dog1.jpg')) - dog2 = read_image(str(Path('assets') / 'dog2.jpg')) - show([dog1, dog2]) - - - - -.. image-sg:: /auto_examples/images/sphx_glr_plot_scripted_tensor_transforms_001.png - :alt: plot scripted tensor transforms - :srcset: /auto_examples/images/sphx_glr_plot_scripted_tensor_transforms_001.png - :class: sphx-glr-single-img - - - - - -.. GENERATED FROM PYTHON SOURCE LINES 57-63 - -Transforming images on GPU --------------------------- -Most transforms natively support tensors on top of PIL images (to visualize -the effect of the transforms, you may refer to see -:ref:`sphx_glr_auto_examples_plot_transforms.py`). -Using tensor images, we can run the transforms on GPUs if cuda is available! - -.. GENERATED FROM PYTHON SOURCE LINES 63-79 - -.. code-block:: default - - - import torch.nn as nn - - transforms = torch.nn.Sequential( - T.RandomCrop(224), - T.RandomHorizontalFlip(p=0.3), - ) - - device = 'cuda' if torch.cuda.is_available() else 'cpu' - dog1 = dog1.to(device) - dog2 = dog2.to(device) - - transformed_dog1 = transforms(dog1) - transformed_dog2 = transforms(dog2) - show([transformed_dog1, transformed_dog2]) - - - - -.. image-sg:: /auto_examples/images/sphx_glr_plot_scripted_tensor_transforms_002.png - :alt: plot scripted tensor transforms - :srcset: /auto_examples/images/sphx_glr_plot_scripted_tensor_transforms_002.png - :class: sphx-glr-single-img - - - - - -.. GENERATED FROM PYTHON SOURCE LINES 80-87 - -Scriptable transforms for easier deployment via torchscript ------------------------------------------------------------ -We now show how to combine image transformations and a model forward pass, -while using ``torch.jit.script`` to obtain a single scripted module. - -Let's define a ``Predictor`` module that transforms the input tensor and then -applies an ImageNet model on it. - -.. GENERATED FROM PYTHON SOURCE LINES 87-110 - -.. 
code-block:: default - - - from torchvision.models import resnet18 - - - class Predictor(nn.Module): - - def __init__(self): - super().__init__() - self.resnet18 = resnet18(pretrained=True, progress=False).eval() - self.transforms = nn.Sequential( - T.Resize([256, ]), # We use single int value inside a list due to torchscript type restrictions - T.CenterCrop(224), - T.ConvertImageDtype(torch.float), - T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) - ) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - with torch.no_grad(): - x = self.transforms(x) - y_pred = self.resnet18(x) - return y_pred.argmax(dim=1) - - - - - - - - - -.. GENERATED FROM PYTHON SOURCE LINES 111-113 - -Now, let's define scripted and non-scripted instances of ``Predictor`` and -apply it on multiple tensor images of the same size - -.. GENERATED FROM PYTHON SOURCE LINES 113-122 - -.. code-block:: default - - - predictor = Predictor().to(device) - scripted_predictor = torch.jit.script(predictor).to(device) - - batch = torch.stack([dog1, dog2]).to(device) - - res = predictor(batch) - res_scripted = scripted_predictor(batch) - - - - - -.. rst-class:: sphx-glr-script-out - - Out: - - .. code-block:: none - - Downloading: "https://download.pytorch.org/models/resnet18-f37072fd.pth" to /root/.cache/torch/hub/checkpoints/resnet18-f37072fd.pth - - - - -.. GENERATED FROM PYTHON SOURCE LINES 123-125 - -We can verify that the prediction of the scripted and non-scripted models are -the same: - -.. GENERATED FROM PYTHON SOURCE LINES 125-135 - -.. code-block:: default - - - import json - - with open(Path('assets') / 'imagenet_class_index.json', 'r') as labels_file: - labels = json.load(labels_file) - - for i, (pred, pred_scripted) in enumerate(zip(res, res_scripted)): - assert pred == pred_scripted - print(f"Prediction for Dog {i + 1}: {labels[str(pred.item())]}") - - - - - -.. rst-class:: sphx-glr-script-out - - Out: - - .. code-block:: none - - Prediction for Dog 1: ['n02113023', 'Pembroke'] - Prediction for Dog 2: ['n02106662', 'German_shepherd'] - - - - -.. GENERATED FROM PYTHON SOURCE LINES 136-137 - -Since the model is scripted, it can be easily dumped on disk and re-used - -.. GENERATED FROM PYTHON SOURCE LINES 137-146 - -.. code-block:: default - - - import tempfile - - with tempfile.NamedTemporaryFile() as f: - scripted_predictor.save(f.name) - - dumped_scripted_predictor = torch.jit.load(f.name) - res_scripted_dumped = dumped_scripted_predictor(batch) - assert (res_scripted_dumped == res_scripted).all() - - - - - - - - -.. rst-class:: sphx-glr-timing - - **Total running time of the script:** ( 0 minutes 1.822 seconds) - - -.. _sphx_glr_download_auto_examples_plot_scripted_tensor_transforms.py: - - -.. only :: html - - .. container:: sphx-glr-footer - :class: sphx-glr-footer-example - - - - .. container:: sphx-glr-download sphx-glr-download-python - - :download:`Download Python source code: plot_scripted_tensor_transforms.py ` - - - - .. container:: sphx-glr-download sphx-glr-download-jupyter - - :download:`Download Jupyter notebook: plot_scripted_tensor_transforms.ipynb ` - - -.. only:: html - - .. rst-class:: sphx-glr-signature - - `Gallery generated by Sphinx-Gallery `_ diff --git a/0.11./_sources/auto_examples/plot_transforms.rst.txt b/0.11./_sources/auto_examples/plot_transforms.rst.txt deleted file mode 100644 index 251dcf0c605..00000000000 --- a/0.11./_sources/auto_examples/plot_transforms.rst.txt +++ /dev/null @@ -1,794 +0,0 @@ - -.. DO NOT EDIT. -.. 
THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY. -.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE: -.. "auto_examples/plot_transforms.py" -.. LINE NUMBERS ARE GIVEN BELOW. - -.. only:: html - - .. note:: - :class: sphx-glr-download-link-note - - Click :ref:`here ` - to download the full example code - -.. rst-class:: sphx-glr-example-title - -.. _sphx_glr_auto_examples_plot_transforms.py: - - -========================== -Illustration of transforms -========================== - -This example illustrates the various transforms available in :ref:`the -torchvision.transforms module `. - -.. GENERATED FROM PYTHON SOURCE LINES 9-53 - -.. code-block:: default - - - # sphinx_gallery_thumbnail_path = "../../gallery/assets/transforms_thumbnail.png" - - from PIL import Image - from pathlib import Path - import matplotlib.pyplot as plt - import numpy as np - - import torch - import torchvision.transforms as T - - - plt.rcParams["savefig.bbox"] = 'tight' - orig_img = Image.open(Path('assets') / 'astronaut.jpg') - # if you change the seed, make sure that the randomly-applied transforms - # properly show that the image can be both transformed and *not* transformed! - torch.manual_seed(0) - - - def plot(imgs, with_orig=True, row_title=None, **imshow_kwargs): - if not isinstance(imgs[0], list): - # Make a 2d grid even if there's just 1 row - imgs = [imgs] - - num_rows = len(imgs) - num_cols = len(imgs[0]) + with_orig - fig, axs = plt.subplots(nrows=num_rows, ncols=num_cols, squeeze=False) - for row_idx, row in enumerate(imgs): - row = [orig_img] + row if with_orig else row - for col_idx, img in enumerate(row): - ax = axs[row_idx, col_idx] - ax.imshow(np.asarray(img), **imshow_kwargs) - ax.set(xticklabels=[], yticklabels=[], xticks=[], yticks=[]) - - if with_orig: - axs[0, 0].set(title='Original image') - axs[0, 0].title.set_size(8) - if row_title is not None: - for row_idx in range(num_rows): - axs[row_idx, 0].set(ylabel=row_title[row_idx]) - - plt.tight_layout() - - - - - - - - - -.. GENERATED FROM PYTHON SOURCE LINES 54-59 - -Pad ---- -The :class:`~torchvision.transforms.Pad` transform -(see also :func:`~torchvision.transforms.functional.pad`) -fills image borders with some pixel values. - -.. GENERATED FROM PYTHON SOURCE LINES 59-62 - -.. code-block:: default - - padded_imgs = [T.Pad(padding=padding)(orig_img) for padding in (3, 10, 30, 50)] - plot(padded_imgs) - - - - -.. image-sg:: /auto_examples/images/sphx_glr_plot_transforms_001.png - :alt: Original image - :srcset: /auto_examples/images/sphx_glr_plot_transforms_001.png - :class: sphx-glr-single-img - - - - - -.. GENERATED FROM PYTHON SOURCE LINES 63-68 - -Resize ------- -The :class:`~torchvision.transforms.Resize` transform -(see also :func:`~torchvision.transforms.functional.resize`) -resizes an image. - -.. GENERATED FROM PYTHON SOURCE LINES 68-71 - -.. code-block:: default - - resized_imgs = [T.Resize(size=size)(orig_img) for size in (30, 50, 100, orig_img.size)] - plot(resized_imgs) - - - - -.. image-sg:: /auto_examples/images/sphx_glr_plot_transforms_002.png - :alt: Original image - :srcset: /auto_examples/images/sphx_glr_plot_transforms_002.png - :class: sphx-glr-single-img - - - - - -.. GENERATED FROM PYTHON SOURCE LINES 72-77 - -CenterCrop ----------- -The :class:`~torchvision.transforms.CenterCrop` transform -(see also :func:`~torchvision.transforms.functional.center_crop`) -crops the given image at the center. - -.. GENERATED FROM PYTHON SOURCE LINES 77-80 - -.. 
code-block:: default - - center_crops = [T.CenterCrop(size=size)(orig_img) for size in (30, 50, 100, orig_img.size)] - plot(center_crops) - - - - -.. image-sg:: /auto_examples/images/sphx_glr_plot_transforms_003.png - :alt: Original image - :srcset: /auto_examples/images/sphx_glr_plot_transforms_003.png - :class: sphx-glr-single-img - - - - - -.. GENERATED FROM PYTHON SOURCE LINES 81-86 - -FiveCrop --------- -The :class:`~torchvision.transforms.FiveCrop` transform -(see also :func:`~torchvision.transforms.functional.five_crop`) -crops the given image into four corners and the central crop. - -.. GENERATED FROM PYTHON SOURCE LINES 86-89 - -.. code-block:: default - - (top_left, top_right, bottom_left, bottom_right, center) = T.FiveCrop(size=(100, 100))(orig_img) - plot([top_left, top_right, bottom_left, bottom_right, center]) - - - - -.. image-sg:: /auto_examples/images/sphx_glr_plot_transforms_004.png - :alt: Original image - :srcset: /auto_examples/images/sphx_glr_plot_transforms_004.png - :class: sphx-glr-single-img - - - - - -.. GENERATED FROM PYTHON SOURCE LINES 90-95 - -Grayscale ---------- -The :class:`~torchvision.transforms.Grayscale` transform -(see also :func:`~torchvision.transforms.functional.to_grayscale`) -converts an image to grayscale - -.. GENERATED FROM PYTHON SOURCE LINES 95-98 - -.. code-block:: default - - gray_img = T.Grayscale()(orig_img) - plot([gray_img], cmap='gray') - - - - -.. image-sg:: /auto_examples/images/sphx_glr_plot_transforms_005.png - :alt: Original image - :srcset: /auto_examples/images/sphx_glr_plot_transforms_005.png - :class: sphx-glr-single-img - - - - - -.. GENERATED FROM PYTHON SOURCE LINES 99-108 - -Random transforms ------------------ -The following transforms are random, which means that the same transfomer -instance will produce different result each time it transforms a given image. - -ColorJitter -~~~~~~~~~~~ -The :class:`~torchvision.transforms.ColorJitter` transform -randomly changes the brightness, saturation, and other properties of an image. - -.. GENERATED FROM PYTHON SOURCE LINES 108-112 - -.. code-block:: default - - jitter = T.ColorJitter(brightness=.5, hue=.3) - jitted_imgs = [jitter(orig_img) for _ in range(4)] - plot(jitted_imgs) - - - - -.. image-sg:: /auto_examples/images/sphx_glr_plot_transforms_006.png - :alt: Original image - :srcset: /auto_examples/images/sphx_glr_plot_transforms_006.png - :class: sphx-glr-single-img - - - - - -.. GENERATED FROM PYTHON SOURCE LINES 113-118 - -GaussianBlur -~~~~~~~~~~~~ -The :class:`~torchvision.transforms.GaussianBlur` transform -(see also :func:`~torchvision.transforms.functional.gaussian_blur`) -performs gaussian blur transform on an image. - -.. GENERATED FROM PYTHON SOURCE LINES 118-122 - -.. code-block:: default - - blurrer = T.GaussianBlur(kernel_size=(5, 9), sigma=(0.1, 5)) - blurred_imgs = [blurrer(orig_img) for _ in range(4)] - plot(blurred_imgs) - - - - -.. image-sg:: /auto_examples/images/sphx_glr_plot_transforms_007.png - :alt: Original image - :srcset: /auto_examples/images/sphx_glr_plot_transforms_007.png - :class: sphx-glr-single-img - - - - - -.. GENERATED FROM PYTHON SOURCE LINES 123-128 - -RandomPerspective -~~~~~~~~~~~~~~~~~ -The :class:`~torchvision.transforms.RandomPerspective` transform -(see also :func:`~torchvision.transforms.functional.perspective`) -performs random perspective transform on an image. - -.. GENERATED FROM PYTHON SOURCE LINES 128-132 - -.. 
code-block:: default - - perspective_transformer = T.RandomPerspective(distortion_scale=0.6, p=1.0) - perspective_imgs = [perspective_transformer(orig_img) for _ in range(4)] - plot(perspective_imgs) - - - - -.. image-sg:: /auto_examples/images/sphx_glr_plot_transforms_008.png - :alt: Original image - :srcset: /auto_examples/images/sphx_glr_plot_transforms_008.png - :class: sphx-glr-single-img - - - - - -.. GENERATED FROM PYTHON SOURCE LINES 133-138 - -RandomRotation -~~~~~~~~~~~~~~ -The :class:`~torchvision.transforms.RandomRotation` transform -(see also :func:`~torchvision.transforms.functional.rotate`) -rotates an image with random angle. - -.. GENERATED FROM PYTHON SOURCE LINES 138-142 - -.. code-block:: default - - rotater = T.RandomRotation(degrees=(0, 180)) - rotated_imgs = [rotater(orig_img) for _ in range(4)] - plot(rotated_imgs) - - - - -.. image-sg:: /auto_examples/images/sphx_glr_plot_transforms_009.png - :alt: Original image - :srcset: /auto_examples/images/sphx_glr_plot_transforms_009.png - :class: sphx-glr-single-img - - - - - -.. GENERATED FROM PYTHON SOURCE LINES 143-148 - -RandomAffine -~~~~~~~~~~~~ -The :class:`~torchvision.transforms.RandomAffine` transform -(see also :func:`~torchvision.transforms.functional.affine`) -performs random affine transform on an image. - -.. GENERATED FROM PYTHON SOURCE LINES 148-152 - -.. code-block:: default - - affine_transfomer = T.RandomAffine(degrees=(30, 70), translate=(0.1, 0.3), scale=(0.5, 0.75)) - affine_imgs = [affine_transfomer(orig_img) for _ in range(4)] - plot(affine_imgs) - - - - -.. image-sg:: /auto_examples/images/sphx_glr_plot_transforms_010.png - :alt: Original image - :srcset: /auto_examples/images/sphx_glr_plot_transforms_010.png - :class: sphx-glr-single-img - - - - - -.. GENERATED FROM PYTHON SOURCE LINES 153-158 - -RandomCrop -~~~~~~~~~~ -The :class:`~torchvision.transforms.RandomCrop` transform -(see also :func:`~torchvision.transforms.functional.crop`) -crops an image at a random location. - -.. GENERATED FROM PYTHON SOURCE LINES 158-162 - -.. code-block:: default - - cropper = T.RandomCrop(size=(128, 128)) - crops = [cropper(orig_img) for _ in range(4)] - plot(crops) - - - - -.. image-sg:: /auto_examples/images/sphx_glr_plot_transforms_011.png - :alt: Original image - :srcset: /auto_examples/images/sphx_glr_plot_transforms_011.png - :class: sphx-glr-single-img - - - - - -.. GENERATED FROM PYTHON SOURCE LINES 163-169 - -RandomResizedCrop -~~~~~~~~~~~~~~~~~ -The :class:`~torchvision.transforms.RandomResizedCrop` transform -(see also :func:`~torchvision.transforms.functional.resized_crop`) -crops an image at a random location, and then resizes the crop to a given -size. - -.. GENERATED FROM PYTHON SOURCE LINES 169-173 - -.. code-block:: default - - resize_cropper = T.RandomResizedCrop(size=(32, 32)) - resized_crops = [resize_cropper(orig_img) for _ in range(4)] - plot(resized_crops) - - - - -.. image-sg:: /auto_examples/images/sphx_glr_plot_transforms_012.png - :alt: Original image - :srcset: /auto_examples/images/sphx_glr_plot_transforms_012.png - :class: sphx-glr-single-img - - - - - -.. GENERATED FROM PYTHON SOURCE LINES 174-179 - -RandomInvert -~~~~~~~~~~~~ -The :class:`~torchvision.transforms.RandomInvert` transform -(see also :func:`~torchvision.transforms.functional.invert`) -randomly inverts the colors of the given image. - -.. GENERATED FROM PYTHON SOURCE LINES 179-183 - -.. 
code-block:: default - - inverter = T.RandomInvert() - invertered_imgs = [inverter(orig_img) for _ in range(4)] - plot(invertered_imgs) - - - - -.. image-sg:: /auto_examples/images/sphx_glr_plot_transforms_013.png - :alt: Original image - :srcset: /auto_examples/images/sphx_glr_plot_transforms_013.png - :class: sphx-glr-single-img - - - - - -.. GENERATED FROM PYTHON SOURCE LINES 184-190 - -RandomPosterize -~~~~~~~~~~~~~~~ -The :class:`~torchvision.transforms.RandomPosterize` transform -(see also :func:`~torchvision.transforms.functional.posterize`) -randomly posterizes the image by reducing the number of bits -of each color channel. - -.. GENERATED FROM PYTHON SOURCE LINES 190-194 - -.. code-block:: default - - posterizer = T.RandomPosterize(bits=2) - posterized_imgs = [posterizer(orig_img) for _ in range(4)] - plot(posterized_imgs) - - - - -.. image-sg:: /auto_examples/images/sphx_glr_plot_transforms_014.png - :alt: Original image - :srcset: /auto_examples/images/sphx_glr_plot_transforms_014.png - :class: sphx-glr-single-img - - - - - -.. GENERATED FROM PYTHON SOURCE LINES 195-201 - -RandomSolarize -~~~~~~~~~~~~~~ -The :class:`~torchvision.transforms.RandomSolarize` transform -(see also :func:`~torchvision.transforms.functional.solarize`) -randomly solarizes the image by inverting all pixel values above -the threshold. - -.. GENERATED FROM PYTHON SOURCE LINES 201-205 - -.. code-block:: default - - solarizer = T.RandomSolarize(threshold=192.0) - solarized_imgs = [solarizer(orig_img) for _ in range(4)] - plot(solarized_imgs) - - - - -.. image-sg:: /auto_examples/images/sphx_glr_plot_transforms_015.png - :alt: Original image - :srcset: /auto_examples/images/sphx_glr_plot_transforms_015.png - :class: sphx-glr-single-img - - - - - -.. GENERATED FROM PYTHON SOURCE LINES 206-211 - -RandomAdjustSharpness -~~~~~~~~~~~~~~~~~~~~~ -The :class:`~torchvision.transforms.RandomAdjustSharpness` transform -(see also :func:`~torchvision.transforms.functional.adjust_sharpness`) -randomly adjusts the sharpness of the given image. - -.. GENERATED FROM PYTHON SOURCE LINES 211-215 - -.. code-block:: default - - sharpness_adjuster = T.RandomAdjustSharpness(sharpness_factor=2) - sharpened_imgs = [sharpness_adjuster(orig_img) for _ in range(4)] - plot(sharpened_imgs) - - - - -.. image-sg:: /auto_examples/images/sphx_glr_plot_transforms_016.png - :alt: Original image - :srcset: /auto_examples/images/sphx_glr_plot_transforms_016.png - :class: sphx-glr-single-img - - - - - -.. GENERATED FROM PYTHON SOURCE LINES 216-221 - -RandomAutocontrast -~~~~~~~~~~~~~~~~~~ -The :class:`~torchvision.transforms.RandomAutocontrast` transform -(see also :func:`~torchvision.transforms.functional.autocontrast`) -randomly applies autocontrast to the given image. - -.. GENERATED FROM PYTHON SOURCE LINES 221-225 - -.. code-block:: default - - autocontraster = T.RandomAutocontrast() - autocontrasted_imgs = [autocontraster(orig_img) for _ in range(4)] - plot(autocontrasted_imgs) - - - - -.. image-sg:: /auto_examples/images/sphx_glr_plot_transforms_017.png - :alt: Original image - :srcset: /auto_examples/images/sphx_glr_plot_transforms_017.png - :class: sphx-glr-single-img - - - - - -.. GENERATED FROM PYTHON SOURCE LINES 226-231 - -RandomEqualize -~~~~~~~~~~~~~~ -The :class:`~torchvision.transforms.RandomEqualize` transform -(see also :func:`~torchvision.transforms.functional.equalize`) -randomly equalizes the histogram of the given image. - -.. GENERATED FROM PYTHON SOURCE LINES 231-235 - -.. 
code-block:: default - - equalizer = T.RandomEqualize() - equalized_imgs = [equalizer(orig_img) for _ in range(4)] - plot(equalized_imgs) - - - - -.. image-sg:: /auto_examples/images/sphx_glr_plot_transforms_018.png - :alt: Original image - :srcset: /auto_examples/images/sphx_glr_plot_transforms_018.png - :class: sphx-glr-single-img - - - - - -.. GENERATED FROM PYTHON SOURCE LINES 236-241 - -AutoAugment -~~~~~~~~~~~ -The :class:`~torchvision.transforms.AutoAugment` transform -automatically augments data based on a given auto-augmentation policy. -See :class:`~torchvision.transforms.AutoAugmentPolicy` for the available policies. - -.. GENERATED FROM PYTHON SOURCE LINES 241-250 - -.. code-block:: default - - policies = [T.AutoAugmentPolicy.CIFAR10, T.AutoAugmentPolicy.IMAGENET, T.AutoAugmentPolicy.SVHN] - augmenters = [T.AutoAugment(policy) for policy in policies] - imgs = [ - [augmenter(orig_img) for _ in range(4)] - for augmenter in augmenters - ] - row_title = [str(policy).split('.')[-1] for policy in policies] - plot(imgs, row_title=row_title) - - - - -.. image-sg:: /auto_examples/images/sphx_glr_plot_transforms_019.png - :alt: Original image - :srcset: /auto_examples/images/sphx_glr_plot_transforms_019.png - :class: sphx-glr-single-img - - - - - -.. GENERATED FROM PYTHON SOURCE LINES 251-254 - -RandAugment -~~~~~~~~~~~ -The :class:`~torchvision.transforms.RandAugment` transform automatically augments the data. - -.. GENERATED FROM PYTHON SOURCE LINES 254-258 - -.. code-block:: default - - augmenter = T.RandAugment() - imgs = [augmenter(orig_img) for _ in range(4)] - plot(imgs) - - - - -.. image-sg:: /auto_examples/images/sphx_glr_plot_transforms_020.png - :alt: Original image - :srcset: /auto_examples/images/sphx_glr_plot_transforms_020.png - :class: sphx-glr-single-img - - - - - -.. GENERATED FROM PYTHON SOURCE LINES 259-262 - -TrivialAugmentWide -~~~~~~~~~~~~~~~~~~ -The :class:`~torchvision.transforms.TrivialAugmentWide` transform automatically augments the data. - -.. GENERATED FROM PYTHON SOURCE LINES 262-266 - -.. code-block:: default - - augmenter = T.TrivialAugmentWide() - imgs = [augmenter(orig_img) for _ in range(4)] - plot(imgs) - - - - -.. image-sg:: /auto_examples/images/sphx_glr_plot_transforms_021.png - :alt: Original image - :srcset: /auto_examples/images/sphx_glr_plot_transforms_021.png - :class: sphx-glr-single-img - - - - - -.. GENERATED FROM PYTHON SOURCE LINES 267-279 - -Randomly-applied transforms ---------------------------- - -Some transforms are randomly-applied given a probability ``p``. That is, the -transformed image may actually be the same as the original one, even when -called with the same transformer instance! - -RandomHorizontalFlip -~~~~~~~~~~~~~~~~~~~~ -The :class:`~torchvision.transforms.RandomHorizontalFlip` transform -(see also :func:`~torchvision.transforms.functional.hflip`) -performs horizontal flip of an image, with a given probability. - -.. GENERATED FROM PYTHON SOURCE LINES 279-283 - -.. code-block:: default - - hflipper = T.RandomHorizontalFlip(p=0.5) - transformed_imgs = [hflipper(orig_img) for _ in range(4)] - plot(transformed_imgs) - - - - -.. image-sg:: /auto_examples/images/sphx_glr_plot_transforms_022.png - :alt: Original image - :srcset: /auto_examples/images/sphx_glr_plot_transforms_022.png - :class: sphx-glr-single-img - - - - - -.. 
GENERATED FROM PYTHON SOURCE LINES 284-289 - -RandomVerticalFlip -~~~~~~~~~~~~~~~~~~ -The :class:`~torchvision.transforms.RandomVerticalFlip` transform -(see also :func:`~torchvision.transforms.functional.vflip`) -performs vertical flip of an image, with a given probability. - -.. GENERATED FROM PYTHON SOURCE LINES 289-293 - -.. code-block:: default - - vflipper = T.RandomVerticalFlip(p=0.5) - transformed_imgs = [vflipper(orig_img) for _ in range(4)] - plot(transformed_imgs) - - - - -.. image-sg:: /auto_examples/images/sphx_glr_plot_transforms_023.png - :alt: Original image - :srcset: /auto_examples/images/sphx_glr_plot_transforms_023.png - :class: sphx-glr-single-img - - - - - -.. GENERATED FROM PYTHON SOURCE LINES 294-298 - -RandomApply -~~~~~~~~~~~ -The :class:`~torchvision.transforms.RandomApply` transform -randomly applies a list of transforms, with a given probability. - -.. GENERATED FROM PYTHON SOURCE LINES 298-301 - -.. code-block:: default - - applier = T.RandomApply(transforms=[T.RandomCrop(size=(64, 64))], p=0.5) - transformed_imgs = [applier(orig_img) for _ in range(4)] - plot(transformed_imgs) - - - -.. image-sg:: /auto_examples/images/sphx_glr_plot_transforms_024.png - :alt: Original image - :srcset: /auto_examples/images/sphx_glr_plot_transforms_024.png - :class: sphx-glr-single-img - - - - - - -.. rst-class:: sphx-glr-timing - - **Total running time of the script:** ( 0 minutes 8.589 seconds) - - -.. _sphx_glr_download_auto_examples_plot_transforms.py: - - -.. only :: html - - .. container:: sphx-glr-footer - :class: sphx-glr-footer-example - - - - .. container:: sphx-glr-download sphx-glr-download-python - - :download:`Download Python source code: plot_transforms.py ` - - - - .. container:: sphx-glr-download sphx-glr-download-jupyter - - :download:`Download Jupyter notebook: plot_transforms.ipynb ` - - -.. only:: html - - .. rst-class:: sphx-glr-signature - - `Gallery generated by Sphinx-Gallery `_ diff --git a/0.11./_sources/auto_examples/plot_video_api.rst.txt b/0.11./_sources/auto_examples/plot_video_api.rst.txt deleted file mode 100644 index fcfe603cbb1..00000000000 --- a/0.11./_sources/auto_examples/plot_video_api.rst.txt +++ /dev/null @@ -1,657 +0,0 @@ - -.. DO NOT EDIT. -.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY. -.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE: -.. "auto_examples/plot_video_api.py" -.. LINE NUMBERS ARE GIVEN BELOW. - -.. only:: html - - .. note:: - :class: sphx-glr-download-link-note - - Click :ref:`here ` - to download the full example code - -.. rst-class:: sphx-glr-example-title - -.. _sphx_glr_auto_examples_plot_video_api.py: - - -======================= -Video API -======================= - -This example illustrates some of the APIs that torchvision offers for -videos, together with the examples on how to build datasets and more. - -.. GENERATED FROM PYTHON SOURCE LINES 11-16 - -1. Introduction: building a new video object and examining the properties -------------------------------------------------------------------------- -First we select a video to test the object out. For the sake of argument -we're using one from kinetics400 dataset. -To create it, we need to define the path and the stream we want to use. - -.. GENERATED FROM PYTHON SOURCE LINES 18-31 - -Chosen video statistics: - -- WUzgd7C1pWA.mp4 - - source: - - kinetics-400 - - video: - - H-264 - - MPEG-4 AVC (part 10) (avc1) - - fps: 29.97 - - audio: - - MPEG AAC audio (mp4a) - - sample rate: 48K Hz - - -.. GENERATED FROM PYTHON SOURCE LINES 31-44 - -.. 
code-block:: default - - - import torch - import torchvision - from torchvision.datasets.utils import download_url - - # Download the sample video - download_url( - "https://github.com/pytorch/vision/blob/main/test/assets/videos/WUzgd7C1pWA.mp4?raw=true", - ".", - "WUzgd7C1pWA.mp4" - ) - video_path = "./WUzgd7C1pWA.mp4" - - - - - -.. rst-class:: sphx-glr-script-out - - Out: - - .. code-block:: none - - Downloading https://raw.githubusercontent.com/pytorch/vision/main/test/assets/videos/WUzgd7C1pWA.mp4 to ./WUzgd7C1pWA.mp4 - - - - -.. GENERATED FROM PYTHON SOURCE LINES 45-49 - - Streams are defined in a similar fashion to torch devices. We encode them as strings in the form - ``stream_type:stream_id``, where ``stream_type`` is a string and ``stream_id`` a long int. - The constructor also accepts passing only a ``stream_type``, in which case the stream is auto-discovered. - First, let's get the metadata for our particular video: - - .. GENERATED FROM PYTHON SOURCE LINES 49-54 - - .. code-block:: default - - - stream = "video" - video = torchvision.io.VideoReader(video_path, stream) - video.get_metadata() - - - - - -.. rst-class:: sphx-glr-script-out - - Out: - - .. code-block:: none - - - {'video': {'duration': [10.9109], 'fps': [29.97002997002997]}, 'audio': {'duration': [10.9], 'framerate': [48000.0]}, 'subtitles': {'duration': []}, 'cc': {'duration': []}} - - - -.. GENERATED FROM PYTHON SOURCE LINES 55-62 - - Here we can see that the video has two streams - a video and an audio stream. - Currently available stream types include ['video', 'audio']. - Each descriptor consists of two parts: the stream type (e.g. 'video') and a unique stream id - (which is determined by the video encoding). - In this way, if the video container contains multiple streams of the same type, - users can access the one they want. - If only the stream type is passed, the decoder auto-detects the first stream of that type and returns it.
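 - - The example above passes only a stream type. As a small illustrative sketch (not part of the original example), the explicit ``stream_type:stream_id`` form described above can be used to pin a particular stream; ``video_explicit`` is a name introduced only for this illustration: - - .. code-block:: default - - # Illustrative sketch: select stream id 0 of type "video" explicitly, using - # the "stream_type:stream_id" syntax. For this file it is the same stream - # that auto-discovery would pick. - video_explicit = torchvision.io.VideoReader(video_path, "video:0") - print(video_explicit.get_metadata()) - - ..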
GENERATED FROM PYTHON SOURCE LINES 64-71 - - Let's read all the frames from the audio stream. By default, the return value of - ``next(video_reader)`` is a dict containing the following fields. - - The return fields are: - - - ``data``: containing a torch.tensor - - ``pts``: containing a float timestamp of this particular frame - - .. GENERATED FROM PYTHON SOURCE LINES 71-87 - - .. code-block:: default - - - metadata = video.get_metadata() - video.set_current_stream("audio") - - frames = [] # we are going to save the frames here. - ptss = [] # pts is a presentation timestamp in seconds (float) of each frame - for frame in video: - frames.append(frame['data']) - ptss.append(frame['pts']) - - print("PTS for first five frames ", ptss[:5]) - print("Total number of frames: ", len(frames)) - approx_nf = metadata['audio']['duration'][0] * metadata['audio']['framerate'][0] - print("Approx total number of datapoints we can expect: ", approx_nf) - print("Read data size: ", frames[0].size(0) * len(frames)) - - - - - -.. rst-class:: sphx-glr-script-out - - Out: - - .. code-block:: none - - PTS for first five frames [0.0, 0.021332999999999998, 0.042667, 0.064, 0.08533299999999999] - Total number of frames: 511 - Approx total number of datapoints we can expect: 523200.0 - Read data size: 523264 - - - - -.. GENERATED FROM PYTHON SOURCE LINES 88-96 - - But what if we only want to read a certain time segment of the video? - That can be done easily by combining our ``seek`` function with the fact that each call - to ``next`` returns the presentation timestamp of the returned frame in seconds. - - Given that our implementation relies on python iterators, - we can leverage itertools to simplify the process and make it more pythonic. - - For example, if we wanted to read ten frames starting from the second second: - - .. GENERATED FROM PYTHON SOURCE LINES 96-109 - - .. code-block:: default - - - - import itertools - video.set_current_stream("video") - - frames = [] # we are going to save the frames here. - - # We seek to the second second of the video and use islice to get the 10 frames that follow - for frame in itertools.islice(video.seek(2), 10): - frames.append(frame['data']) - - print("Total number of frames: ", len(frames)) - - - - - -.. rst-class:: sphx-glr-script-out - - Out: - - .. code-block:: none - - Total number of frames: 10 - - - - -.. GENERATED FROM PYTHON SOURCE LINES 110-114 - - Or, if we wanted to read from the 2nd to the 5th second, - we seek to the second second of the video, - then use ``itertools.takewhile`` to get the - correct number of frames: - - .. GENERATED FROM PYTHON SOURCE LINES 114-127 - - .. code-block:: default - - - video.set_current_stream("video") - frames = [] # we are going to save the frames here. - video = video.seek(2) - - for frame in itertools.takewhile(lambda x: x['pts'] <= 5, video): - frames.append(frame['data']) - - print("Total number of frames: ", len(frames)) - approx_nf = (5 - 2) * video.get_metadata()['video']['fps'][0] - print("We can expect approx: ", approx_nf) - print("Tensor size: ", frames[0].size()) - - - - - -.. rst-class:: sphx-glr-script-out - - Out: - - .. code-block:: none - - Total number of frames: 90 - We can expect approx: 89.91008991008991 - Tensor size: torch.Size([3, 256, 340]) - - - -
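 As a side note (a minimal sketch that is not part of the original example), the reader can also be advanced one frame at a time with plain ``next``, which is handy for peeking at a single frame; ``first_frame`` is a name introduced only for this illustration: - - .. code-block:: default - - # Peek at a single decoded frame: each call to next() yields a dict with - # the frame tensor under 'data' and its timestamp (in seconds) under 'pts'. - video.set_current_stream("video") - first_frame = next(video.seek(0)) - print(first_frame['data'].shape, first_frame['pts']) - - -.. GENERATED FROM PYTHON SOURCE LINES 128-132 - -2. 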
Building a sample read_video function - ---------------------------------------------------------------------------------------- - We can utilize the methods above to build a read video function that follows - the same API as the existing ``read_video`` function. - - .. GENERATED FROM PYTHON SOURCE LINES 132-172 - - .. code-block:: default - - - - def example_read_video(video_object, start=0, end=None, read_video=True, read_audio=True): - if end is None: - end = float("inf") - if end < start: - raise ValueError( - "end time should be larger than start time, got " - "start time={} and end time={}".format(start, end) - ) - - video_frames = torch.empty(0) - video_pts = [] - if read_video: - video_object.set_current_stream("video") - frames = [] - for frame in itertools.takewhile(lambda x: x['pts'] <= end, video_object.seek(start)): - frames.append(frame['data']) - video_pts.append(frame['pts']) - if len(frames) > 0: - video_frames = torch.stack(frames, 0) - - audio_frames = torch.empty(0) - audio_pts = [] - if read_audio: - video_object.set_current_stream("audio") - frames = [] - for frame in itertools.takewhile(lambda x: x['pts'] <= end, video_object.seek(start)): - frames.append(frame['data']) - audio_pts.append(frame['pts']) - if len(frames) > 0: - audio_frames = torch.cat(frames, 0) - - return video_frames, audio_frames, (video_pts, audio_pts), video_object.get_metadata() - - - # Total number of frames should be 327 for video and 523264 datapoints for audio - vf, af, info, meta = example_read_video(video) - print(vf.size(), af.size()) - - - - - -.. rst-class:: sphx-glr-script-out - - Out: - - .. code-block:: none - - torch.Size([327, 3, 256, 340]) torch.Size([523264, 1]) - - - - -.. GENERATED FROM PYTHON SOURCE LINES 173-178 - - 3. Building an example randomly sampled dataset (can be applied to training dataset of kinetics400) - ------------------------------------------------------------------------------------------------------- - Cool, so now we can use the same principle to make a sample dataset. - We suggest trying out an iterable dataset for this purpose. - Here, we are going to build an example dataset that reads 10 randomly selected frames of video. - - .. GENERATED FROM PYTHON SOURCE LINES 180-181 - - Make the sample dataset - - .. GENERATED FROM PYTHON SOURCE LINES 181-186 - - .. code-block:: default - - import os - os.makedirs("./dataset", exist_ok=True) - os.makedirs("./dataset/1", exist_ok=True) - os.makedirs("./dataset/2", exist_ok=True) - - - - - - - - -.. GENERATED FROM PYTHON SOURCE LINES 187-188 - - Download the videos - - .. GENERATED FROM PYTHON SOURCE LINES 188-214 - - .. code-block:: default - - from torchvision.datasets.utils import download_url - download_url( - "https://github.com/pytorch/vision/blob/main/test/assets/videos/WUzgd7C1pWA.mp4?raw=true", - "./dataset/1", "WUzgd7C1pWA.mp4" - ) - download_url( - "https://github.com/pytorch/vision/blob/main/test/assets/videos/RATRACE_wave_f_nm_np1_fr_goo_37.avi?raw=true", - "./dataset/1", - "RATRACE_wave_f_nm_np1_fr_goo_37.avi" - ) - download_url( - "https://github.com/pytorch/vision/blob/main/test/assets/videos/SOX5yA1l24A.mp4?raw=true", - "./dataset/2", - "SOX5yA1l24A.mp4" - ) - download_url( - "https://github.com/pytorch/vision/blob/main/test/assets/videos/v_SoccerJuggling_g23_c01.avi?raw=true", - "./dataset/2", - "v_SoccerJuggling_g23_c01.avi" - ) - download_url( - "https://github.com/pytorch/vision/blob/main/test/assets/videos/v_SoccerJuggling_g24_c01.avi?raw=true", - "./dataset/2", - "v_SoccerJuggling_g24_c01.avi" - ) - - - - - -.. 
rst-class:: sphx-glr-script-out - - Out: - - .. code-block:: none - - Downloading https://raw.githubusercontent.com/pytorch/vision/main/test/assets/videos/WUzgd7C1pWA.mp4 to ./dataset/1/WUzgd7C1pWA.mp4 - Downloading https://raw.githubusercontent.com/pytorch/vision/main/test/assets/videos/RATRACE_wave_f_nm_np1_fr_goo_37.avi to ./dataset/1/RATRACE_wave_f_nm_np1_fr_goo_37.avi - Downloading https://raw.githubusercontent.com/pytorch/vision/main/test/assets/videos/SOX5yA1l24A.mp4 to ./dataset/2/SOX5yA1l24A.mp4 - Downloading https://raw.githubusercontent.com/pytorch/vision/main/test/assets/videos/v_SoccerJuggling_g23_c01.avi to ./dataset/2/v_SoccerJuggling_g23_c01.avi - Downloading https://raw.githubusercontent.com/pytorch/vision/main/test/assets/videos/v_SoccerJuggling_g24_c01.avi to ./dataset/2/v_SoccerJuggling_g24_c01.avi - - - - -.. GENERATED FROM PYTHON SOURCE LINES 215-216 - - Housekeeping and utilities - - .. GENERATED FROM PYTHON SOURCE LINES 216-234 - - .. 
code-block:: default - - import os - import random - - from torchvision.datasets.folder import make_dataset - from torchvision import transforms as t - - - def _find_classes(dir): - classes = [d.name for d in os.scandir(dir) if d.is_dir()] - classes.sort() - class_to_idx = {cls_name: i for i, cls_name in enumerate(classes)} - return classes, class_to_idx - - - def get_samples(root, extensions=(".mp4", ".avi")): - _, class_to_idx = _find_classes(root) - return make_dataset(root, class_to_idx, extensions=extensions) - - - - - - - - -.. GENERATED FROM PYTHON SOURCE LINES 235-245 - -We are going to define the dataset and some basic arguments. -We assume the structure of the FolderDataset, and add the following parameters: - -- ``clip_len``: length of a clip in frames -- ``frame_transform``: transform for every frame individually -- ``video_transform``: transform on a video sequence - -.. note:: - We actually add epoch size as using :func:`~torch.utils.data.IterableDataset` - class allows us to naturally oversample clips or images from each video if needed. - -.. GENERATED FROM PYTHON SOURCE LINES 245-289 - -.. code-block:: default - - - - class RandomDataset(torch.utils.data.IterableDataset): - def __init__(self, root, epoch_size=None, frame_transform=None, video_transform=None, clip_len=16): - super(RandomDataset).__init__() - - self.samples = get_samples(root) - - # Allow for temporal jittering - if epoch_size is None: - epoch_size = len(self.samples) - self.epoch_size = epoch_size - - self.clip_len = clip_len - self.frame_transform = frame_transform - self.video_transform = video_transform - - def __iter__(self): - for i in range(self.epoch_size): - # Get random sample - path, target = random.choice(self.samples) - # Get video object - vid = torchvision.io.VideoReader(path, "video") - metadata = vid.get_metadata() - video_frames = [] # video frame buffer - - # Seek and return frames - max_seek = metadata["video"]['duration'][0] - (self.clip_len / metadata["video"]['fps'][0]) - start = random.uniform(0., max_seek) - for frame in itertools.islice(vid.seek(start), self.clip_len): - video_frames.append(self.frame_transform(frame['data'])) - current_pts = frame['pts'] - # Stack it into a tensor - video = torch.stack(video_frames, 0) - if self.video_transform: - video = self.video_transform(video) - output = { - 'path': path, - 'video': video, - 'target': target, - 'start': start, - 'end': current_pts} - yield output - - - - - - - - -.. GENERATED FROM PYTHON SOURCE LINES 290-304 - -Given a path of videos in a folder structure, i.e: - -- dataset - - class 1 - - file 0 - - file 1 - - ... - - class 2 - - file 0 - - file 1 - - ... - - ... - -We can generate a dataloader and test the dataset. - -.. GENERATED FROM PYTHON SOURCE LINES 304-311 - -.. code-block:: default - - - - transforms = [t.Resize((112, 112))] - frame_transform = t.Compose(transforms) - - dataset = RandomDataset("./dataset", epoch_size=None, frame_transform=frame_transform) - - - - - - - - -.. GENERATED FROM PYTHON SOURCE LINES 312-323 - -.. code-block:: default - - from torch.utils.data import DataLoader - loader = DataLoader(dataset, batch_size=12) - data = {"video": [], 'start': [], 'end': [], 'tensorsize': []} - for batch in loader: - for i in range(len(batch['path'])): - data['video'].append(batch['path'][i]) - data['start'].append(batch['start'][i].item()) - data['end'].append(batch['end'][i].item()) - data['tensorsize'].append(batch['video'][i].size()) - print(data) - - - - - -.. rst-class:: sphx-glr-script-out - - Out: - - .. 
code-block:: none - - {'video': ['./dataset/1/WUzgd7C1pWA.mp4', './dataset/2/SOX5yA1l24A.mp4', './dataset/2/v_SoccerJuggling_g24_c01.avi', './dataset/1/WUzgd7C1pWA.mp4', './dataset/2/SOX5yA1l24A.mp4'], 'start': [1.6059886546397426, 2.8462735255185843, 5.794335670319363, 3.7124644717480897, 5.732515897132387], 'end': [2.135467, 3.370033, 6.306299999999999, 4.237566999999999, 6.239567], 'tensorsize': [torch.Size([16, 3, 112, 112]), torch.Size([16, 3, 112, 112]), torch.Size([16, 3, 112, 112]), torch.Size([16, 3, 112, 112]), torch.Size([16, 3, 112, 112])]} - - - - -.. GENERATED FROM PYTHON SOURCE LINES 324-327 - -4. Data Visualization ----------------------------------- -Example of visualized video - -.. GENERATED FROM PYTHON SOURCE LINES 327-336 - -.. code-block:: default - - - import matplotlib.pylab as plt - - plt.figure(figsize=(12, 12)) - for i in range(16): - plt.subplot(4, 4, i + 1) - plt.imshow(batch["video"][0, i, ...].permute(1, 2, 0)) - plt.axis("off") - - - - -.. image-sg:: /auto_examples/images/sphx_glr_plot_video_api_001.png - :alt: plot video api - :srcset: /auto_examples/images/sphx_glr_plot_video_api_001.png - :class: sphx-glr-single-img - - - - - -.. GENERATED FROM PYTHON SOURCE LINES 337-338 - -Cleanup the video and dataset: - -.. GENERATED FROM PYTHON SOURCE LINES 338-342 - -.. code-block:: default - - import os - import shutil - os.remove("./WUzgd7C1pWA.mp4") - shutil.rmtree("./dataset") - - - - - - - - -.. rst-class:: sphx-glr-timing - - **Total running time of the script:** ( 0 minutes 4.826 seconds) - - -.. _sphx_glr_download_auto_examples_plot_video_api.py: - - -.. only :: html - - .. container:: sphx-glr-footer - :class: sphx-glr-footer-example - - - - .. container:: sphx-glr-download sphx-glr-download-python - - :download:`Download Python source code: plot_video_api.py ` - - - - .. container:: sphx-glr-download sphx-glr-download-jupyter - - :download:`Download Jupyter notebook: plot_video_api.ipynb ` - - -.. only:: html - - .. rst-class:: sphx-glr-signature - - `Gallery generated by Sphinx-Gallery `_ diff --git a/0.11./_sources/auto_examples/plot_visualization_utils.rst.txt b/0.11./_sources/auto_examples/plot_visualization_utils.rst.txt deleted file mode 100644 index 1ce1c492e36..00000000000 --- a/0.11./_sources/auto_examples/plot_visualization_utils.rst.txt +++ /dev/null @@ -1,820 +0,0 @@ - -.. DO NOT EDIT. -.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY. -.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE: -.. "auto_examples/plot_visualization_utils.py" -.. LINE NUMBERS ARE GIVEN BELOW. - -.. only:: html - - .. note:: - :class: sphx-glr-download-link-note - - Click :ref:`here ` - to download the full example code - -.. rst-class:: sphx-glr-example-title - -.. _sphx_glr_auto_examples_plot_visualization_utils.py: - - -======================= -Visualization utilities -======================= - -This example illustrates some of the utilities that torchvision offers for -visualizing images, bounding boxes, and segmentation masks. - -.. GENERATED FROM PYTHON SOURCE LINES 9-33 - -.. 
code-block:: default - - - # sphinx_gallery_thumbnail_path = "../../gallery/assets/visualization_utils_thumbnail.png" - - import torch - import numpy as np - import matplotlib.pyplot as plt - - import torchvision.transforms.functional as F - - - plt.rcParams["savefig.bbox"] = 'tight' - - - def show(imgs): - if not isinstance(imgs, list): - imgs = [imgs] - fix, axs = plt.subplots(ncols=len(imgs), squeeze=False) - for i, img in enumerate(imgs): - img = img.detach() - img = F.to_pil_image(img) - axs[0, i].imshow(np.asarray(img)) - axs[0, i].set(xticklabels=[], yticklabels=[], xticks=[], yticks=[]) - - - - - - - - - -.. GENERATED FROM PYTHON SOURCE LINES 34-39 - -Visualizing a grid of images ----------------------------- -The :func:`~torchvision.utils.make_grid` function can be used to create a -tensor that represents multiple images in a grid. This util requires a single -image of dtype ``uint8`` as input. - -.. GENERATED FROM PYTHON SOURCE LINES 39-50 - -.. code-block:: default - - - from torchvision.utils import make_grid - from torchvision.io import read_image - from pathlib import Path - - dog1_int = read_image(str(Path('assets') / 'dog1.jpg')) - dog2_int = read_image(str(Path('assets') / 'dog2.jpg')) - - grid = make_grid([dog1_int, dog2_int, dog1_int, dog2_int]) - show(grid) - - - - -.. image-sg:: /auto_examples/images/sphx_glr_plot_visualization_utils_001.png - :alt: plot visualization utils - :srcset: /auto_examples/images/sphx_glr_plot_visualization_utils_001.png - :class: sphx-glr-single-img - - - - - -.. GENERATED FROM PYTHON SOURCE LINES 51-56 - -Visualizing bounding boxes --------------------------- -We can use :func:`~torchvision.utils.draw_bounding_boxes` to draw boxes on an -image. We can set the colors, labels, width as well as font and font size. -The boxes are in ``(xmin, ymin, xmax, ymax)`` format. - -.. GENERATED FROM PYTHON SOURCE LINES 56-66 - -.. code-block:: default - - - from torchvision.utils import draw_bounding_boxes - - - boxes = torch.tensor([[50, 50, 100, 200], [210, 150, 350, 430]], dtype=torch.float) - colors = ["blue", "yellow"] - result = draw_bounding_boxes(dog1_int, boxes, colors=colors, width=5) - show(result) - - - - - -.. image-sg:: /auto_examples/images/sphx_glr_plot_visualization_utils_002.png - :alt: plot visualization utils - :srcset: /auto_examples/images/sphx_glr_plot_visualization_utils_002.png - :class: sphx-glr-single-img - - - - - -.. GENERATED FROM PYTHON SOURCE LINES 67-75 - -Naturally, we can also plot bounding boxes produced by torchvision detection -models. Here is demo with a Faster R-CNN model loaded from -:func:`~torchvision.models.detection.fasterrcnn_resnet50_fpn` -model. You can also try using a RetinaNet with -:func:`~torchvision.models.detection.retinanet_resnet50_fpn`, an SSDlite with -:func:`~torchvision.models.detection.ssdlite320_mobilenet_v3_large` or an SSD with -:func:`~torchvision.models.detection.ssd300_vgg16`. For more details -on the output of such models, you may refer to :ref:`instance_seg_output`. - -.. GENERATED FROM PYTHON SOURCE LINES 75-89 - -.. code-block:: default - - - from torchvision.models.detection import fasterrcnn_resnet50_fpn - from torchvision.transforms.functional import convert_image_dtype - - - batch_int = torch.stack([dog1_int, dog2_int]) - batch = convert_image_dtype(batch_int, dtype=torch.float) - - model = fasterrcnn_resnet50_fpn(pretrained=True, progress=False) - model = model.eval() - - outputs = model(batch) - print(outputs) - - - - - -.. rst-class:: sphx-glr-script-out - - Out: - - .. 
code-block:: none - - [{'boxes': tensor([[215.9767, 171.1661, 402.0078, 378.7391], - [344.6341, 172.6735, 357.6114, 220.1435], - [153.1306, 185.5568, 172.9223, 254.7014]], grad_fn=), 'labels': tensor([18, 1, 1]), 'scores': tensor([0.9989, 0.0701, 0.0611], grad_fn=)}, {'boxes': tensor([[ 23.5963, 132.4332, 449.9359, 493.0222], - [225.8183, 124.6292, 467.2861, 492.2621], - [ 18.5249, 135.4171, 420.9786, 479.2226]], grad_fn=), 'labels': tensor([18, 18, 17]), 'scores': tensor([0.9980, 0.0879, 0.0671], grad_fn=)}] - - - - -.. GENERATED FROM PYTHON SOURCE LINES 90-92 - -Let's plot the boxes detected by our model. We will only plot the boxes with a -score greater than a given threshold. - -.. GENERATED FROM PYTHON SOURCE LINES 92-100 - -.. code-block:: default - - - score_threshold = .8 - dogs_with_boxes = [ - draw_bounding_boxes(dog_int, boxes=output['boxes'][output['scores'] > score_threshold], width=4) - for dog_int, output in zip(batch_int, outputs) - ] - show(dogs_with_boxes) - - - - -.. image-sg:: /auto_examples/images/sphx_glr_plot_visualization_utils_003.png - :alt: plot visualization utils - :srcset: /auto_examples/images/sphx_glr_plot_visualization_utils_003.png - :class: sphx-glr-single-img - - - - - -.. GENERATED FROM PYTHON SOURCE LINES 101-122 - -Visualizing segmentation masks ------------------------------- -The :func:`~torchvision.utils.draw_segmentation_masks` function can be used to -draw segmentation masks on images. Semantic segmentation and instance -segmentation models have different outputs, so we will treat each -independently. - -.. _semantic_seg_output: - -Semantic segmentation models -^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -We will see how to use it with torchvision's FCN Resnet-50, loaded with -:func:`~torchvision.models.segmentation.fcn_resnet50`. You can also try using -DeepLabv3 (:func:`~torchvision.models.segmentation.deeplabv3_resnet50`) or -lraspp mobilenet models -(:func:`~torchvision.models.segmentation.lraspp_mobilenet_v3_large`). - -Let's start by looking at the ouput of the model. Remember that in general, -images must be normalized before they're passed to a semantic segmentation -model. - -.. GENERATED FROM PYTHON SOURCE LINES 122-133 - -.. code-block:: default - - - from torchvision.models.segmentation import fcn_resnet50 - - - model = fcn_resnet50(pretrained=True, progress=False) - model = model.eval() - - normalized_batch = F.normalize(batch, mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)) - output = model(normalized_batch)['out'] - print(output.shape, output.min().item(), output.max().item()) - - - - - -.. rst-class:: sphx-glr-script-out - - Out: - - .. code-block:: none - - Downloading: "https://download.pytorch.org/models/fcn_resnet50_coco-1167a1af.pth" to /root/.cache/torch/hub/checkpoints/fcn_resnet50_coco-1167a1af.pth - torch.Size([2, 21, 500, 500]) -7.089669704437256 14.858256340026855 - - - - -.. GENERATED FROM PYTHON SOURCE LINES 134-142 - -As we can see above, the output of the segmentation model is a tensor of shape -``(batch_size, num_classes, H, W)``. Each value is a non-normalized score, and -we can normalize them into ``[0, 1]`` by using a softmax. After the softmax, -we can interpret each value as a probability indicating how likely a given -pixel is to belong to a given class. - -Let's plot the masks that have been detected for the dog class and for the -boat class: - -.. GENERATED FROM PYTHON SOURCE LINES 142-160 - -.. 
code-block:: default - - - sem_classes = [ - '__background__', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', - 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse', 'motorbike', - 'person', 'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor' - ] - sem_class_to_idx = {cls: idx for (idx, cls) in enumerate(sem_classes)} - - normalized_masks = torch.nn.functional.softmax(output, dim=1) - - dog_and_boat_masks = [ - normalized_masks[img_idx, sem_class_to_idx[cls]] - for img_idx in range(batch.shape[0]) - for cls in ('dog', 'boat') - ] - - show(dog_and_boat_masks) - - - - -.. image-sg:: /auto_examples/images/sphx_glr_plot_visualization_utils_004.png - :alt: plot visualization utils - :srcset: /auto_examples/images/sphx_glr_plot_visualization_utils_004.png - :class: sphx-glr-single-img - - - - - -.. GENERATED FROM PYTHON SOURCE LINES 161-168 - -As expected, the model is confident about the dog class, but not so much for -the boat class. - -The :func:`~torchvision.utils.draw_segmentation_masks` function can be used to -plots those masks on top of the original image. This function expects the -masks to be boolean masks, but our masks above contain probabilities in ``[0, -1]``. To get boolean masks, we can do the following: - -.. GENERATED FROM PYTHON SOURCE LINES 168-175 - -.. code-block:: default - - - class_dim = 1 - boolean_dog_masks = (normalized_masks.argmax(class_dim) == sem_class_to_idx['dog']) - print(f"shape = {boolean_dog_masks.shape}, dtype = {boolean_dog_masks.dtype}") - show([m.float() for m in boolean_dog_masks]) - - - - - -.. image-sg:: /auto_examples/images/sphx_glr_plot_visualization_utils_005.png - :alt: plot visualization utils - :srcset: /auto_examples/images/sphx_glr_plot_visualization_utils_005.png - :class: sphx-glr-single-img - - -.. rst-class:: sphx-glr-script-out - - Out: - - .. code-block:: none - - shape = torch.Size([2, 500, 500]), dtype = torch.bool - - - - -.. GENERATED FROM PYTHON SOURCE LINES 176-188 - -The line above where we define ``boolean_dog_masks`` is a bit cryptic, but you -can read it as the following query: "For which pixels is 'dog' the most likely -class?" - -.. note:: - While we're using the ``normalized_masks`` here, we would have - gotten the same result by using the non-normalized scores of the model - directly (as the softmax operation preserves the order). - -Now that we have boolean masks, we can use them with -:func:`~torchvision.utils.draw_segmentation_masks` to plot them on top of the -original images: - -.. GENERATED FROM PYTHON SOURCE LINES 188-197 - -.. code-block:: default - - - from torchvision.utils import draw_segmentation_masks - - dogs_with_masks = [ - draw_segmentation_masks(img, masks=mask, alpha=0.7) - for img, mask in zip(batch_int, boolean_dog_masks) - ] - show(dogs_with_masks) - - - - -.. image-sg:: /auto_examples/images/sphx_glr_plot_visualization_utils_006.png - :alt: plot visualization utils - :srcset: /auto_examples/images/sphx_glr_plot_visualization_utils_006.png - :class: sphx-glr-single-img - - - - - -.. GENERATED FROM PYTHON SOURCE LINES 198-205 - -We can plot more than one mask per image! Remember that the model returned as -many masks as there are classes. Let's ask the same query as above, but this -time for *all* classes, not just the dog class: "For each pixel and each class -C, is class C the most most likely class?" - -This one is a bit more involved, so we'll first show how to do it with a -single image, and then we'll generalize to the batch - -.. GENERATED FROM PYTHON SOURCE LINES 205-217 - -.. 
code-block:: default - - - num_classes = normalized_masks.shape[1] - dog1_masks = normalized_masks[0] - class_dim = 0 - dog1_all_classes_masks = dog1_masks.argmax(class_dim) == torch.arange(num_classes)[:, None, None] - - print(f"dog1_masks shape = {dog1_masks.shape}, dtype = {dog1_masks.dtype}") - print(f"dog1_all_classes_masks = {dog1_all_classes_masks.shape}, dtype = {dog1_all_classes_masks.dtype}") - - dog_with_all_masks = draw_segmentation_masks(dog1_int, masks=dog1_all_classes_masks, alpha=.6) - show(dog_with_all_masks) - - - - -.. image-sg:: /auto_examples/images/sphx_glr_plot_visualization_utils_007.png - :alt: plot visualization utils - :srcset: /auto_examples/images/sphx_glr_plot_visualization_utils_007.png - :class: sphx-glr-single-img - - -.. rst-class:: sphx-glr-script-out - - Out: - - .. code-block:: none - - dog1_masks shape = torch.Size([21, 500, 500]), dtype = torch.float32 - dog1_all_classes_masks = torch.Size([21, 500, 500]), dtype = torch.bool - - - - -.. GENERATED FROM PYTHON SOURCE LINES 218-230 - -We can see in the image above that only 2 masks were drawn: the mask for the -background and the mask for the dog. This is because the model thinks that -only these 2 classes are the most likely ones across all the pixels. If the -model had detected another class as the most likely among other pixels, we -would have seen its mask above. - -Removing the background mask is as simple as passing -``masks=dog1_all_classes_masks[1:]``, because the background class is the -class with index 0. - -Let's now do the same but for an entire batch of images. The code is similar -but involves a bit more juggling with the dimensions. - -.. GENERATED FROM PYTHON SOURCE LINES 230-244 - -.. code-block:: default - - - class_dim = 1 - all_classes_masks = normalized_masks.argmax(class_dim) == torch.arange(num_classes)[:, None, None, None] - print(f"shape = {all_classes_masks.shape}, dtype = {all_classes_masks.dtype}") - # The first dimension is the classes now, so we need to swap it - all_classes_masks = all_classes_masks.swapaxes(0, 1) - - dogs_with_masks = [ - draw_segmentation_masks(img, masks=mask, alpha=.6) - for img, mask in zip(batch_int, all_classes_masks) - ] - show(dogs_with_masks) - - - - - -.. image-sg:: /auto_examples/images/sphx_glr_plot_visualization_utils_008.png - :alt: plot visualization utils - :srcset: /auto_examples/images/sphx_glr_plot_visualization_utils_008.png - :class: sphx-glr-single-img - - -.. rst-class:: sphx-glr-script-out - - Out: - - .. code-block:: none - - shape = torch.Size([21, 2, 500, 500]), dtype = torch.bool - - - - -.. GENERATED FROM PYTHON SOURCE LINES 245-264 - -.. _instance_seg_output: - -Instance segmentation models -^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -Instance segmentation models have a significantly different output from the -semantic segmentation models. We will see here how to plot the masks for such -models. Let's start by analyzing the output of a Mask-RCNN model. Note that -these models don't require the images to be normalized, so we don't need to -use the normalized batch. - -.. note:: - - We will here describe the output of a Mask-RCNN model. The models in - :ref:`object_det_inst_seg_pers_keypoint_det` all have a similar output - format, but some of them may have extra info like keypoints for - :func:`~torchvision.models.detection.keypointrcnn_resnet50_fpn`, and some - of them may not have masks, like - :func:`~torchvision.models.detection.fasterrcnn_resnet50_fpn`. - -.. GENERATED FROM PYTHON SOURCE LINES 264-272 - -.. 
code-block:: default - - - from torchvision.models.detection import maskrcnn_resnet50_fpn - model = maskrcnn_resnet50_fpn(pretrained=True, progress=False) - model = model.eval() - - output = model(batch) - print(output) - - - - - -.. rst-class:: sphx-glr-script-out - - Out: - - .. code-block:: none - - Downloading: "https://download.pytorch.org/models/maskrcnn_resnet50_fpn_coco-bf2d0c1e.pth" to /root/.cache/torch/hub/checkpoints/maskrcnn_resnet50_fpn_coco-bf2d0c1e.pth - [{'boxes': tensor([[219.7444, 168.1722, 400.7379, 384.0263], - [343.9716, 171.2287, 358.3447, 222.6263], - [301.0303, 192.6917, 313.8879, 232.3154]], grad_fn=), 'labels': tensor([18, 1, 1]), 'scores': tensor([0.9987, 0.7187, 0.6525], grad_fn=), 'masks': tensor([[[[0., 0., 0., ..., 0., 0., 0.], - [0., 0., 0., ..., 0., 0., 0.], - [0., 0., 0., ..., 0., 0., 0.], - ..., - [0., 0., 0., ..., 0., 0., 0.], - [0., 0., 0., ..., 0., 0., 0.], - [0., 0., 0., ..., 0., 0., 0.]]], - - - [[[0., 0., 0., ..., 0., 0., 0.], - [0., 0., 0., ..., 0., 0., 0.], - [0., 0., 0., ..., 0., 0., 0.], - ..., - [0., 0., 0., ..., 0., 0., 0.], - [0., 0., 0., ..., 0., 0., 0.], - [0., 0., 0., ..., 0., 0., 0.]]], - - - [[[0., 0., 0., ..., 0., 0., 0.], - [0., 0., 0., ..., 0., 0., 0.], - [0., 0., 0., ..., 0., 0., 0.], - ..., - [0., 0., 0., ..., 0., 0., 0.], - [0., 0., 0., ..., 0., 0., 0.], - [0., 0., 0., ..., 0., 0., 0.]]]], grad_fn=)}, {'boxes': tensor([[ 44.6767, 137.9018, 446.5324, 487.3429], - [ 0.0000, 288.0053, 489.9293, 490.2352]], grad_fn=), 'labels': tensor([18, 15]), 'scores': tensor([0.9978, 0.0697], grad_fn=), 'masks': tensor([[[[0., 0., 0., ..., 0., 0., 0.], - [0., 0., 0., ..., 0., 0., 0.], - [0., 0., 0., ..., 0., 0., 0.], - ..., - [0., 0., 0., ..., 0., 0., 0.], - [0., 0., 0., ..., 0., 0., 0.], - [0., 0., 0., ..., 0., 0., 0.]]], - - - [[[0., 0., 0., ..., 0., 0., 0.], - [0., 0., 0., ..., 0., 0., 0.], - [0., 0., 0., ..., 0., 0., 0.], - ..., - [0., 0., 0., ..., 0., 0., 0.], - [0., 0., 0., ..., 0., 0., 0.], - [0., 0., 0., ..., 0., 0., 0.]]]], grad_fn=)}] - - - - -.. GENERATED FROM PYTHON SOURCE LINES 273-289 - -Let's break this down. For each image in the batch, the model outputs some -detections (or instances). The number of detections varies for each input -image. Each instance is described by its bounding box, its label, its score -and its mask. - -The way the output is organized is as follows: the output is a list of length -``batch_size``. Each entry in the list corresponds to an input image, and it -is a dict with keys 'boxes', 'labels', 'scores', and 'masks'. Each value -associated to those keys has ``num_instances`` elements in it. In our case -above there are 3 instances detected in the first image, and 2 instances in -the second one. - -The boxes can be plotted with :func:`~torchvision.utils.draw_bounding_boxes` -as above, but here we're more interested in the masks. These masks are quite -different from the masks that we saw above for the semantic segmentation -models. - -.. GENERATED FROM PYTHON SOURCE LINES 289-295 - -.. code-block:: default - - - dog1_output = output[0] - dog1_masks = dog1_output['masks'] - print(f"shape = {dog1_masks.shape}, dtype = {dog1_masks.dtype}, " - f"min = {dog1_masks.min()}, max = {dog1_masks.max()}") - - - - - -.. rst-class:: sphx-glr-script-out - - Out: - - .. code-block:: none - - shape = torch.Size([3, 1, 500, 500]), dtype = torch.float32, min = 0.0, max = 0.9999862909317017 - - - - -.. 
GENERATED FROM PYTHON SOURCE LINES 296-300 - -Here the masks corresponds to probabilities indicating, for each pixel, how -likely it is to belong to the predicted label of that instance. Those -predicted labels correspond to the 'labels' element in the same output dict. -Let's see which labels were predicted for the instances of the first image. - -.. GENERATED FROM PYTHON SOURCE LINES 300-321 - -.. code-block:: default - - - inst_classes = [ - '__background__', 'person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', - 'train', 'truck', 'boat', 'traffic light', 'fire hydrant', 'N/A', 'stop sign', - 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow', - 'elephant', 'bear', 'zebra', 'giraffe', 'N/A', 'backpack', 'umbrella', 'N/A', 'N/A', - 'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball', - 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard', 'tennis racket', - 'bottle', 'N/A', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', - 'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', - 'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed', 'N/A', 'dining table', - 'N/A', 'N/A', 'toilet', 'N/A', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone', - 'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'N/A', 'book', - 'clock', 'vase', 'scissors', 'teddy bear', 'hair drier', 'toothbrush' - ] - - inst_class_to_idx = {cls: idx for (idx, cls) in enumerate(inst_classes)} - - print("For the first dog, the following instances were detected:") - print([inst_classes[label] for label in dog1_output['labels']]) - - - - - -.. rst-class:: sphx-glr-script-out - - Out: - - .. code-block:: none - - For the first dog, the following instances were detected: - ['dog', 'person', 'person'] - - - - -.. GENERATED FROM PYTHON SOURCE LINES 322-329 - -Interestingly, the model detects two persons in the image. Let's go ahead and -plot those masks. Since :func:`~torchvision.utils.draw_segmentation_masks` -expects boolean masks, we need to convert those probabilities into boolean -values. Remember that the semantic of those masks is "How likely is this pixel -to belong to the predicted class?". As a result, a natural way of converting -those masks into boolean values is to threshold them with the 0.5 probability -(one could also choose a different threshold). - -.. GENERATED FROM PYTHON SOURCE LINES 329-339 - -.. code-block:: default - - - proba_threshold = 0.5 - dog1_bool_masks = dog1_output['masks'] > proba_threshold - print(f"shape = {dog1_bool_masks.shape}, dtype = {dog1_bool_masks.dtype}") - - # There's an extra dimension (1) to the masks. We need to remove it - dog1_bool_masks = dog1_bool_masks.squeeze(1) - - show(draw_segmentation_masks(dog1_int, dog1_bool_masks, alpha=0.9)) - - - - -.. image-sg:: /auto_examples/images/sphx_glr_plot_visualization_utils_009.png - :alt: plot visualization utils - :srcset: /auto_examples/images/sphx_glr_plot_visualization_utils_009.png - :class: sphx-glr-single-img - - -.. rst-class:: sphx-glr-script-out - - Out: - - .. code-block:: none - - shape = torch.Size([3, 1, 500, 500]), dtype = torch.bool - - - - -.. GENERATED FROM PYTHON SOURCE LINES 340-343 - -The model seems to have properly detected the dog, but it also confused trees -with people. Looking more closely at the scores will help us plotting more -relevant masks: - -.. GENERATED FROM PYTHON SOURCE LINES 343-346 - -.. code-block:: default - - - print(dog1_output['scores']) - - - - - -.. 
rst-class:: sphx-glr-script-out - - Out: - - .. code-block:: none - - tensor([0.9987, 0.7187, 0.6525], grad_fn=) - - - - -.. GENERATED FROM PYTHON SOURCE LINES 347-351 - -Clearly the model is more confident about the dog detection than it is about -the people detections. That's good news. When plotting the masks, we can ask -for only those that have a good score. Let's use a score threshold of .75 -here, and also plot the masks of the second dog. - -.. GENERATED FROM PYTHON SOURCE LINES 351-365 - -.. code-block:: default - - - score_threshold = .75 - - boolean_masks = [ - out['masks'][out['scores'] > score_threshold] > proba_threshold - for out in output - ] - - dogs_with_masks = [ - draw_segmentation_masks(img, mask.squeeze(1)) - for img, mask in zip(batch_int, boolean_masks) - ] - show(dogs_with_masks) - - - - -.. image-sg:: /auto_examples/images/sphx_glr_plot_visualization_utils_010.png - :alt: plot visualization utils - :srcset: /auto_examples/images/sphx_glr_plot_visualization_utils_010.png - :class: sphx-glr-single-img - - - - - -.. GENERATED FROM PYTHON SOURCE LINES 366-369 - -The two 'people' masks in the first image where not selected because they have -a lower score than the score threshold. Similarly in the second image, the -instance with class 15 (which corresponds to 'bench') was not selected. - - -.. rst-class:: sphx-glr-timing - - **Total running time of the script:** ( 0 minutes 9.400 seconds) - - -.. _sphx_glr_download_auto_examples_plot_visualization_utils.py: - - -.. only :: html - - .. container:: sphx-glr-footer - :class: sphx-glr-footer-example - - - - .. container:: sphx-glr-download sphx-glr-download-python - - :download:`Download Python source code: plot_visualization_utils.py ` - - - - .. container:: sphx-glr-download sphx-glr-download-jupyter - - :download:`Download Jupyter notebook: plot_visualization_utils.ipynb ` - - -.. only:: html - - .. rst-class:: sphx-glr-signature - - `Gallery generated by Sphinx-Gallery `_ diff --git a/0.11./_sources/auto_examples/sg_execution_times.rst.txt b/0.11./_sources/auto_examples/sg_execution_times.rst.txt deleted file mode 100644 index 2425834e3b0..00000000000 --- a/0.11./_sources/auto_examples/sg_execution_times.rst.txt +++ /dev/null @@ -1,20 +0,0 @@ - -:orphan: - -.. 
_sphx_glr_auto_examples_sg_execution_times: - -Computation times -================= -**00:27.092** total execution time for **auto_examples** files: - -+-----------------------------------------------------------------------------------------------------------+-----------+--------+ -| :ref:`sphx_glr_auto_examples_plot_visualization_utils.py` (``plot_visualization_utils.py``) | 00:09.400 | 0.0 MB | -+-----------------------------------------------------------------------------------------------------------+-----------+--------+ -| :ref:`sphx_glr_auto_examples_plot_transforms.py` (``plot_transforms.py``) | 00:08.589 | 0.0 MB | -+-----------------------------------------------------------------------------------------------------------+-----------+--------+ -| :ref:`sphx_glr_auto_examples_plot_video_api.py` (``plot_video_api.py``) | 00:04.826 | 0.0 MB | -+-----------------------------------------------------------------------------------------------------------+-----------+--------+ -| :ref:`sphx_glr_auto_examples_plot_repurposing_annotations.py` (``plot_repurposing_annotations.py``) | 00:02.456 | 0.0 MB | -+-----------------------------------------------------------------------------------------------------------+-----------+--------+ -| :ref:`sphx_glr_auto_examples_plot_scripted_tensor_transforms.py` (``plot_scripted_tensor_transforms.py``) | 00:01.822 | 0.0 MB | -+-----------------------------------------------------------------------------------------------------------+-----------+--------+ diff --git a/0.11./_sources/datasets.rst.txt b/0.11./_sources/datasets.rst.txt deleted file mode 100644 index 050622625ec..00000000000 --- a/0.11./_sources/datasets.rst.txt +++ /dev/null @@ -1,281 +0,0 @@ -torchvision.datasets -==================== - -All datasets are subclasses of :class:`torch.utils.data.Dataset` -i.e, they have ``__getitem__`` and ``__len__`` methods implemented. -Hence, they can all be passed to a :class:`torch.utils.data.DataLoader` -which can load multiple samples in parallel using ``torch.multiprocessing`` workers. -For example: :: - - imagenet_data = torchvision.datasets.ImageNet('path/to/imagenet_root/') - data_loader = torch.utils.data.DataLoader(imagenet_data, - batch_size=4, - shuffle=True, - num_workers=args.nThreads) - -.. currentmodule:: torchvision.datasets - -All the datasets have almost similar API. They all have two common arguments: -``transform`` and ``target_transform`` to transform the input and target respectively. -You can also create your own datasets using the provided :ref:`base classes `. - -Caltech -~~~~~~~ - -.. autoclass:: Caltech101 - :members: __getitem__ - :special-members: - -.. autoclass:: Caltech256 - :members: __getitem__ - :special-members: - -CelebA -~~~~~~ - -.. autoclass:: CelebA - :members: __getitem__ - :special-members: - -CIFAR -~~~~~ - -.. autoclass:: CIFAR10 - :members: __getitem__ - :special-members: - -.. autoclass:: CIFAR100 - -Cityscapes -~~~~~~~~~~ - -.. note :: - Requires Cityscape to be downloaded. - -.. autoclass:: Cityscapes - :members: __getitem__ - :special-members: - -COCO -~~~~ - -.. note :: - These require the `COCO API to be installed`_ - -.. _COCO API to be installed: https://github.com/pdollar/coco/tree/master/PythonAPI - - -Captions -^^^^^^^^ - -.. autoclass:: CocoCaptions - :members: __getitem__ - :special-members: - - -Detection -^^^^^^^^^ - -.. autoclass:: CocoDetection - :members: __getitem__ - :special-members: - - -EMNIST -~~~~~~ - -.. autoclass:: EMNIST - -FakeData -~~~~~~~~ - -.. 
autoclass:: FakeData - -Fashion-MNIST -~~~~~~~~~~~~~ - -.. autoclass:: FashionMNIST - -Flickr -~~~~~~ - -.. autoclass:: Flickr8k - :members: __getitem__ - :special-members: - -.. autoclass:: Flickr30k - :members: __getitem__ - :special-members: - -HMDB51 -~~~~~~~ - -.. autoclass:: HMDB51 - :members: __getitem__ - :special-members: - -ImageNet -~~~~~~~~~~~ - -.. autoclass:: ImageNet - -.. note :: - This requires `scipy` to be installed - -iNaturalist -~~~~~~~~~~~ - -.. autoclass:: INaturalist - :members: __getitem__, category_name - -Kinetics-400 -~~~~~~~~~~~~ - -.. autoclass:: Kinetics400 - :members: __getitem__ - :special-members: - -KITTI -~~~~~~~~~ - -.. autoclass:: Kitti - :members: __getitem__ - :special-members: - -KMNIST -~~~~~~~~~~~~~ - -.. autoclass:: KMNIST - -LFW -~~~~~ - -.. autoclass:: LFWPeople - :members: __getitem__ - :special-members: - -.. autoclass:: LFWPairs - :members: __getitem__ - :special-members: - -LSUN -~~~~ - -.. autoclass:: LSUN - :members: __getitem__ - :special-members: - -MNIST -~~~~~ - -.. autoclass:: MNIST - -Omniglot -~~~~~~~~ - -.. autoclass:: Omniglot - -PhotoTour -~~~~~~~~~ - -.. autoclass:: PhotoTour - :members: __getitem__ - :special-members: - -Places365 -~~~~~~~~~ - -.. autoclass:: Places365 - :members: __getitem__ - :special-members: - -QMNIST -~~~~~~ - -.. autoclass:: QMNIST - -SBD -~~~~~~ - -.. autoclass:: SBDataset - :members: __getitem__ - :special-members: - -SBU -~~~ - -.. autoclass:: SBU - :members: __getitem__ - :special-members: - -SEMEION -~~~~~~~ - -.. autoclass:: SEMEION - :members: __getitem__ - :special-members: - -STL10 -~~~~~ - -.. autoclass:: STL10 - :members: __getitem__ - :special-members: - -SVHN -~~~~~ - -.. autoclass:: SVHN - :members: __getitem__ - :special-members: - -UCF101 -~~~~~~~ - -.. autoclass:: UCF101 - :members: __getitem__ - :special-members: - -USPS -~~~~~ - -.. autoclass:: USPS - :members: __getitem__ - :special-members: - -VOC -~~~~~~ - -.. autoclass:: VOCSegmentation - :members: __getitem__ - :special-members: - -.. autoclass:: VOCDetection - :members: __getitem__ - :special-members: - -WIDERFace -~~~~~~~~~ - -.. autoclass:: WIDERFace - :members: __getitem__ - :special-members: - - -.. _base_classes_datasets: - -Base classes for custom datasets -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -.. autoclass:: DatasetFolder - :members: __getitem__, find_classes, make_dataset - :special-members: - - -.. autoclass:: ImageFolder - :members: __getitem__ - :special-members: - -.. autoclass:: VisionDataset - :members: __getitem__ - :special-members: diff --git a/0.11./_sources/feature_extraction.rst.txt b/0.11./_sources/feature_extraction.rst.txt deleted file mode 100644 index f41b5c6127d..00000000000 --- a/0.11./_sources/feature_extraction.rst.txt +++ /dev/null @@ -1,162 +0,0 @@ -torchvision.models.feature_extraction -===================================== - -.. currentmodule:: torchvision.models.feature_extraction - -Feature extraction utilities let us tap into our models to access intermediate -transformations of our inputs. This could be useful for a variety of -applications in computer vision. Just a few examples are: - -- Visualizing feature maps. -- Extracting features to compute image descriptors for tasks like facial - recognition, copy-detection, or image retrieval. -- Passing selected features to downstream sub-networks for end-to-end training - with a specific task in mind. For example, passing a hierarchy of features - to a Feature Pyramid Network with object detection heads. 
- -Torchvision provides :func:`create_feature_extractor` for this purpose. -It works by following roughly these steps: - -1. Symbolically tracing the model to get a graphical representation of - how it transforms the input, step by step. -2. Setting the user-selected graph nodes as outputs. -3. Removing all redundant nodes (anything downstream of the output nodes). -4. Generating python code from the resulting graph and bundling that into a - PyTorch module together with the graph itself. - -| - -The `torch.fx documentation `_ -provides a more general and detailed explanation of the above procedure and -the inner workings of the symbolic tracing. - -.. _about-node-names: - -**About Node Names** - -In order to specify which nodes should be output nodes for extracted -features, one should be familiar with the node naming convention used here -(which differs slightly from that used in ``torch.fx``). A node name is -specified as a ``.`` separated path walking the module hierarchy from top level -module down to leaf operation or leaf module. For instance ``"layer4.2.relu"`` -in ResNet-50 represents the output of the ReLU of the 2nd block of the 4th -layer of the ``ResNet`` module. Here are some finer points to keep in mind: - -- When specifying node names for :func:`create_feature_extractor`, you may - provide a truncated version of a node name as a shortcut. To see how this - works, try creating a ResNet-50 model and printing the node names with - ``train_nodes, _ = get_graph_node_names(model) print(train_nodes)`` and - observe that the last node pertaining to ``layer4`` is - ``"layer4.2.relu_2"``. One may specify ``"layer4.2.relu_2"`` as the return - node, or just ``"layer4"`` as this, by convention, refers to the last node - (in order of execution) of ``layer4``. -- If a certain module or operation is repeated more than once, node names get - an additional ``_{int}`` postfix to disambiguate. For instance, maybe the - addition (``+``) operation is used three times in the same ``forward`` - method. Then there would be ``"path.to.module.add"``, - ``"path.to.module.add_1"``, ``"path.to.module.add_2"``. The counter is - maintained within the scope of the direct parent. So in ResNet-50 there is - a ``"layer4.1.add"`` and a ``"layer4.2.add"``. Because the addition - operations reside in different blocks, there is no need for a postfix to - disambiguate. - - -**An Example** - -Here is an example of how we might extract features for MaskRCNN: - -.. code-block:: python - - import torch - from torchvision.models import resnet50 - from torchvision.models.feature_extraction import get_graph_node_names - from torchvision.models.feature_extraction import create_feature_extractor - from torchvision.models.detection.mask_rcnn import MaskRCNN - from torchvision.models.detection.backbone_utils import LastLevelMaxPool - from torchvision.ops.feature_pyramid_network import FeaturePyramidNetwork - - - # To assist you in designing the feature extractor you may want to print out - # the available nodes for resnet50. - m = resnet50() - train_nodes, eval_nodes = get_graph_node_names(resnet50()) - - # The lists returned, are the names of all the graph nodes (in order of - # execution) for the input model traced in train mode and in eval mode - # respectively. You'll find that `train_nodes` and `eval_nodes` are the same - # for this example. But if the model contains control flow that's dependent - # on the training mode, they may be different. 
- - # To specify the nodes you want to extract, you could select the final node - # that appears in each of the main layers: - return_nodes = { - # node_name: user-specified key for output dict - 'layer1.2.relu_2': 'layer1', - 'layer2.3.relu_2': 'layer2', - 'layer3.5.relu_2': 'layer3', - 'layer4.2.relu_2': 'layer4', - } - - # But `create_feature_extractor` can also accept truncated node specifications - # like "layer1", as it will just pick the last node that's a descendent of - # of the specification. (Tip: be careful with this, especially when a layer - # has multiple outputs. It's not always guaranteed that the last operation - # performed is the one that corresponds to the output you desire. You should - # consult the source code for the input model to confirm.) - return_nodes = { - 'layer1': 'layer1', - 'layer2': 'layer2', - 'layer3': 'layer3', - 'layer4': 'layer4', - } - - # Now you can build the feature extractor. This returns a module whose forward - # method returns a dictionary like: - # { - # 'layer1': output of layer 1, - # 'layer2': output of layer 2, - # 'layer3': output of layer 3, - # 'layer4': output of layer 4, - # } - create_feature_extractor(m, return_nodes=return_nodes) - - # Let's put all that together to wrap resnet50 with MaskRCNN - - # MaskRCNN requires a backbone with an attached FPN - class Resnet50WithFPN(torch.nn.Module): - def __init__(self): - super(Resnet50WithFPN, self).__init__() - # Get a resnet50 backbone - m = resnet50() - # Extract 4 main layers (note: MaskRCNN needs this particular name - # mapping for return nodes) - self.body = create_feature_extractor( - m, return_nodes={f'layer{k}': str(v) - for v, k in enumerate([1, 2, 3, 4])}) - # Dry run to get number of channels for FPN - inp = torch.randn(2, 3, 224, 224) - with torch.no_grad(): - out = self.body(inp) - in_channels_list = [o.shape[1] for o in out.values()] - # Build FPN - self.out_channels = 256 - self.fpn = FeaturePyramidNetwork( - in_channels_list, out_channels=self.out_channels, - extra_blocks=LastLevelMaxPool()) - - def forward(self, x): - x = self.body(x) - x = self.fpn(x) - return x - - - # Now we can build our model! - model = MaskRCNN(Resnet50WithFPN(), num_classes=91).eval() - - -API Reference -------------- - -.. autofunction:: create_feature_extractor - -.. autofunction:: get_graph_node_names diff --git a/0.11./_sources/index.rst.txt b/0.11./_sources/index.rst.txt deleted file mode 100644 index d96086704c3..00000000000 --- a/0.11./_sources/index.rst.txt +++ /dev/null @@ -1,68 +0,0 @@ -torchvision -=========== -This library is part of the `PyTorch -`_ project. PyTorch is an open source -machine learning framework. - -Features described in this documentation are classified by release status: - - *Stable:* These features will be maintained long-term and there should generally - be no major performance limitations or gaps in documentation. - We also expect to maintain backwards compatibility (although - breaking changes can happen and notice will be given one release ahead - of time). - - *Beta:* Features are tagged as Beta because the API may change based on - user feedback, because the performance needs to improve, or because - coverage across operators is not yet complete. For Beta features, we are - committing to seeing the feature through to the Stable classification. - We are not, however, committing to backwards compatibility. 
- - *Prototype:* These features are typically not available as part of - binary distributions like PyPI or Conda, except sometimes behind run-time - flags, and are at an early stage for feedback and testing. - - - -The :mod:`torchvision` package consists of popular datasets, model -architectures, and common image transformations for computer vision. - -.. toctree:: - :maxdepth: 2 - :caption: Package Reference - - datasets - io - models - feature_extraction - ops - transforms - utils - -.. toctree:: - :maxdepth: 1 - :caption: Examples and training references - - auto_examples/index - training_references - -.. automodule:: torchvision - :members: - -.. toctree:: - :maxdepth: 1 - :caption: PyTorch Libraries - - PyTorch - torchaudio - torchtext - torchvision - TorchElastic - TorchServe - PyTorch on XLA Devices - - -Indices -------- - -* :ref:`genindex` diff --git a/0.11./_sources/io.rst.txt b/0.11./_sources/io.rst.txt deleted file mode 100644 index 2e416469d17..00000000000 --- a/0.11./_sources/io.rst.txt +++ /dev/null @@ -1,82 +0,0 @@ -torchvision.io -============== - -.. currentmodule:: torchvision.io - -The :mod:`torchvision.io` package provides functions for performing IO -operations. They are currently specific to reading and writing video and -images. - -Video ------ - -.. autofunction:: read_video - -.. autofunction:: read_video_timestamps - -.. autofunction:: write_video - - -Fine-grained video API ----------------------- - -In addition to the :mod:`read_video` function, we provide a high-performance -lower-level API for more fine-grained control compared to the :mod:`read_video` function. -It does all this whilst fully supporting torchscript. - -.. autoclass:: VideoReader - :members: __next__, get_metadata, set_current_stream, seek - - -Example of inspecting a video: - -.. code:: python - - import torchvision - video_path = "path to a test video" - # Constructor allocates memory and a threaded decoder - # instance per video. At the moment it takes two arguments: - # path to the video file, and a wanted stream. - reader = torchvision.io.VideoReader(video_path, "video") - - # The information about the video can be retrieved using the - # `get_metadata()` method. It returns a dictionary for every stream, with - # duration and other relevant metadata (often frame rate) - reader_md = reader.get_metadata() - - # metadata is structured as a dict of dicts with following structure - # {"stream_type": {"attribute": [attribute per stream]}} - # - # following would print out the list of frame rates for every present video stream - print(reader_md["video"]["fps"]) - - # we explicitly select the stream we would like to operate on. In - # the constructor we select a default video stream, but - # in practice, we can set whichever stream we would like - video.set_current_stream("video:0") - - -Image ------ - -.. autoclass:: ImageReadMode - -.. autofunction:: read_image - -.. autofunction:: decode_image - -.. autofunction:: encode_jpeg - -.. autofunction:: decode_jpeg - -.. autofunction:: write_jpeg - -.. autofunction:: encode_png - -.. autofunction:: decode_png - -.. autofunction:: write_png - -.. autofunction:: read_file - -.. autofunction:: write_file diff --git a/0.11./_sources/models.rst.txt b/0.11./_sources/models.rst.txt deleted file mode 100644 index 9d05b509899..00000000000 --- a/0.11./_sources/models.rst.txt +++ /dev/null @@ -1,699 +0,0 @@ -.. 
_models: - -torchvision.models -################## - - -The models subpackage contains definitions of models for addressing -different tasks, including: image classification, pixelwise semantic -segmentation, object detection, instance segmentation, person -keypoint detection and video classification. - -.. note :: - Backward compatibility is guaranteed for loading a serialized - ``state_dict`` to the model created using old PyTorch version. - On the contrary, loading entire saved models or serialized - ``ScriptModules`` (seralized using older versions of PyTorch) - may not preserve the historic behaviour. Refer to the following - `documentation - `_ - - -Classification -============== - -The models subpackage contains definitions for the following model -architectures for image classification: - -- `AlexNet`_ -- `VGG`_ -- `ResNet`_ -- `SqueezeNet`_ -- `DenseNet`_ -- `Inception`_ v3 -- `GoogLeNet`_ -- `ShuffleNet`_ v2 -- `MobileNetV2`_ -- `MobileNetV3`_ -- `ResNeXt`_ -- `Wide ResNet`_ -- `MNASNet`_ -- `EfficientNet`_ -- `RegNet`_ - -You can construct a model with random weights by calling its constructor: - -.. code:: python - - import torchvision.models as models - resnet18 = models.resnet18() - alexnet = models.alexnet() - vgg16 = models.vgg16() - squeezenet = models.squeezenet1_0() - densenet = models.densenet161() - inception = models.inception_v3() - googlenet = models.googlenet() - shufflenet = models.shufflenet_v2_x1_0() - mobilenet_v2 = models.mobilenet_v2() - mobilenet_v3_large = models.mobilenet_v3_large() - mobilenet_v3_small = models.mobilenet_v3_small() - resnext50_32x4d = models.resnext50_32x4d() - wide_resnet50_2 = models.wide_resnet50_2() - mnasnet = models.mnasnet1_0() - efficientnet_b0 = models.efficientnet_b0() - efficientnet_b1 = models.efficientnet_b1() - efficientnet_b2 = models.efficientnet_b2() - efficientnet_b3 = models.efficientnet_b3() - efficientnet_b4 = models.efficientnet_b4() - efficientnet_b5 = models.efficientnet_b5() - efficientnet_b6 = models.efficientnet_b6() - efficientnet_b7 = models.efficientnet_b7() - regnet_y_400mf = models.regnet_y_400mf() - regnet_y_800mf = models.regnet_y_800mf() - regnet_y_1_6gf = models.regnet_y_1_6gf() - regnet_y_3_2gf = models.regnet_y_3_2gf() - regnet_y_8gf = models.regnet_y_8gf() - regnet_y_16gf = models.regnet_y_16gf() - regnet_y_32gf = models.regnet_y_32gf() - regnet_x_400mf = models.regnet_x_400mf() - regnet_x_800mf = models.regnet_x_800mf() - regnet_x_1_6gf = models.regnet_x_1_6gf() - regnet_x_3_2gf = models.regnet_x_3_2gf() - regnet_x_8gf = models.regnet_x_8gf() - regnet_x_16gf = models.regnet_x_16gf() - regnet_x_32gf = models.regnet_x_32gf() - -We provide pre-trained models, using the PyTorch :mod:`torch.utils.model_zoo`. -These can be constructed by passing ``pretrained=True``: - -.. 
code:: python
-
-    import torchvision.models as models
-    resnet18 = models.resnet18(pretrained=True)
-    alexnet = models.alexnet(pretrained=True)
-    squeezenet = models.squeezenet1_0(pretrained=True)
-    vgg16 = models.vgg16(pretrained=True)
-    densenet = models.densenet161(pretrained=True)
-    inception = models.inception_v3(pretrained=True)
-    googlenet = models.googlenet(pretrained=True)
-    shufflenet = models.shufflenet_v2_x1_0(pretrained=True)
-    mobilenet_v2 = models.mobilenet_v2(pretrained=True)
-    mobilenet_v3_large = models.mobilenet_v3_large(pretrained=True)
-    mobilenet_v3_small = models.mobilenet_v3_small(pretrained=True)
-    resnext50_32x4d = models.resnext50_32x4d(pretrained=True)
-    wide_resnet50_2 = models.wide_resnet50_2(pretrained=True)
-    mnasnet = models.mnasnet1_0(pretrained=True)
-    efficientnet_b0 = models.efficientnet_b0(pretrained=True)
-    efficientnet_b1 = models.efficientnet_b1(pretrained=True)
-    efficientnet_b2 = models.efficientnet_b2(pretrained=True)
-    efficientnet_b3 = models.efficientnet_b3(pretrained=True)
-    efficientnet_b4 = models.efficientnet_b4(pretrained=True)
-    efficientnet_b5 = models.efficientnet_b5(pretrained=True)
-    efficientnet_b6 = models.efficientnet_b6(pretrained=True)
-    efficientnet_b7 = models.efficientnet_b7(pretrained=True)
-    regnet_y_400mf = models.regnet_y_400mf(pretrained=True)
-    regnet_y_800mf = models.regnet_y_800mf(pretrained=True)
-    regnet_y_1_6gf = models.regnet_y_1_6gf(pretrained=True)
-    regnet_y_3_2gf = models.regnet_y_3_2gf(pretrained=True)
-    regnet_y_8gf = models.regnet_y_8gf(pretrained=True)
-    regnet_y_16gf = models.regnet_y_16gf(pretrained=True)
-    regnet_y_32gf = models.regnet_y_32gf(pretrained=True)
-    regnet_x_400mf = models.regnet_x_400mf(pretrained=True)
-    regnet_x_800mf = models.regnet_x_800mf(pretrained=True)
-    regnet_x_1_6gf = models.regnet_x_1_6gf(pretrained=True)
-    regnet_x_3_2gf = models.regnet_x_3_2gf(pretrained=True)
-    regnet_x_8gf = models.regnet_x_8gf(pretrained=True)
-    regnet_x_16gf = models.regnet_x_16gf(pretrained=True)
-    regnet_x_32gf = models.regnet_x_32gf(pretrained=True)
-
-Instantiating a pre-trained model will download its weights to a cache directory.
-This directory can be set using the `TORCH_MODEL_ZOO` environment variable. See
-:func:`torch.utils.model_zoo.load_url` for details.
-
-Some models use modules which have different training and evaluation
-behavior, such as batch normalization. To switch between these modes, use
-``model.train()`` or ``model.eval()`` as appropriate. See
-:meth:`~torch.nn.Module.train` or :meth:`~torch.nn.Module.eval` for details.
-
-All pre-trained models expect input images normalized in the same way,
-i.e. mini-batches of 3-channel RGB images of shape (3 x H x W),
-where H and W are expected to be at least 224.
-The images have to be loaded into a range of [0, 1] and then normalized
-using ``mean = [0.485, 0.456, 0.406]`` and ``std = [0.229, 0.224, 0.225]``.
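-
-For instance, a minimal end-to-end inference sketch that follows these
-conventions could look as follows (the input file ``dog.jpg``, the choice of
-``resnet18`` and the top-5 read-out are illustrative assumptions, not a fixed
-recipe)::
-
-    import torch
-    from PIL import Image
-    from torchvision import models, transforms
-
-    preprocess = transforms.Compose([
-        transforms.Resize(256),
-        transforms.CenterCrop(224),
-        transforms.ToTensor(),  # PIL image -> float tensor in [0, 1]
-        transforms.Normalize(mean=[0.485, 0.456, 0.406],
-                             std=[0.229, 0.224, 0.225]),
-    ])
-
-    img = Image.open("dog.jpg").convert("RGB")
-    batch = preprocess(img).unsqueeze(0)  # shape (1, 3, 224, 224)
-
-    model = models.resnet18(pretrained=True).eval()
-    with torch.no_grad():
-        probabilities = model(batch).softmax(dim=1)
-    top5 = probabilities.topk(5)  # scores and class indices of the 5 best classes
-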
-You can use the following transform to normalize:: - - normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], - std=[0.229, 0.224, 0.225]) - -An example of such normalization can be found in the imagenet example -`here `_ - -The process for obtaining the values of `mean` and `std` is roughly equivalent -to:: - - import torch - from torchvision import datasets, transforms as T - - transform = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()]) - dataset = datasets.ImageNet(".", split="train", transform=transform) - - means = [] - stds = [] - for img in subset(dataset): - means.append(torch.mean(img)) - stds.append(torch.std(img)) - - mean = torch.mean(torch.tensor(means)) - std = torch.mean(torch.tensor(stds)) - -Unfortunately, the concrete `subset` that was used is lost. For more -information see `this discussion `_ -or `these experiments `_. - -The sizes of the EfficientNet models depend on the variant. For the exact input sizes -`check here `_ - -ImageNet 1-crop error rates - -================================ ============= ============= -Model Acc@1 Acc@5 -================================ ============= ============= -AlexNet 56.522 79.066 -VGG-11 69.020 88.628 -VGG-13 69.928 89.246 -VGG-16 71.592 90.382 -VGG-19 72.376 90.876 -VGG-11 with batch normalization 70.370 89.810 -VGG-13 with batch normalization 71.586 90.374 -VGG-16 with batch normalization 73.360 91.516 -VGG-19 with batch normalization 74.218 91.842 -ResNet-18 69.758 89.078 -ResNet-34 73.314 91.420 -ResNet-50 76.130 92.862 -ResNet-101 77.374 93.546 -ResNet-152 78.312 94.046 -SqueezeNet 1.0 58.092 80.420 -SqueezeNet 1.1 58.178 80.624 -Densenet-121 74.434 91.972 -Densenet-169 75.600 92.806 -Densenet-201 76.896 93.370 -Densenet-161 77.138 93.560 -Inception v3 77.294 93.450 -GoogleNet 69.778 89.530 -ShuffleNet V2 x1.0 69.362 88.316 -ShuffleNet V2 x0.5 60.552 81.746 -MobileNet V2 71.878 90.286 -MobileNet V3 Large 74.042 91.340 -MobileNet V3 Small 67.668 87.402 -ResNeXt-50-32x4d 77.618 93.698 -ResNeXt-101-32x8d 79.312 94.526 -Wide ResNet-50-2 78.468 94.086 -Wide ResNet-101-2 78.848 94.284 -MNASNet 1.0 73.456 91.510 -MNASNet 0.5 67.734 87.490 -EfficientNet-B0 77.692 93.532 -EfficientNet-B1 78.642 94.186 -EfficientNet-B2 80.608 95.310 -EfficientNet-B3 82.008 96.054 -EfficientNet-B4 83.384 96.594 -EfficientNet-B5 83.444 96.628 -EfficientNet-B6 84.008 96.916 -EfficientNet-B7 84.122 96.908 -regnet_x_400mf 72.834 90.950 -regnet_x_800mf 75.212 92.348 -regnet_x_1_6gf 77.040 93.440 -regnet_x_3_2gf 78.364 93.992 -regnet_x_8gf 79.344 94.686 -regnet_x_16gf 80.058 94.944 -regnet_x_32gf 80.622 95.248 -regnet_y_400mf 74.046 91.716 -regnet_y_800mf 76.420 93.136 -regnet_y_1_6gf 77.950 93.966 -regnet_y_3_2gf 78.948 94.576 -regnet_y_8gf 80.032 95.048 -regnet_y_16gf 80.424 95.240 -regnet_y_32gf 80.878 95.340 -================================ ============= ============= - - -.. _AlexNet: https://arxiv.org/abs/1404.5997 -.. _VGG: https://arxiv.org/abs/1409.1556 -.. _ResNet: https://arxiv.org/abs/1512.03385 -.. _SqueezeNet: https://arxiv.org/abs/1602.07360 -.. _DenseNet: https://arxiv.org/abs/1608.06993 -.. _Inception: https://arxiv.org/abs/1512.00567 -.. _GoogLeNet: https://arxiv.org/abs/1409.4842 -.. _ShuffleNet: https://arxiv.org/abs/1807.11164 -.. _MobileNetV2: https://arxiv.org/abs/1801.04381 -.. _MobileNetV3: https://arxiv.org/abs/1905.02244 -.. _ResNeXt: https://arxiv.org/abs/1611.05431 -.. _MNASNet: https://arxiv.org/abs/1807.11626 -.. _EfficientNet: https://arxiv.org/abs/1905.11946 -.. _RegNet: https://arxiv.org/abs/2003.13678 - -.. 
currentmodule:: torchvision.models - -Alexnet -------- - -.. autofunction:: alexnet - -VGG ---- - -.. autofunction:: vgg11 -.. autofunction:: vgg11_bn -.. autofunction:: vgg13 -.. autofunction:: vgg13_bn -.. autofunction:: vgg16 -.. autofunction:: vgg16_bn -.. autofunction:: vgg19 -.. autofunction:: vgg19_bn - - -ResNet ------- - -.. autofunction:: resnet18 -.. autofunction:: resnet34 -.. autofunction:: resnet50 -.. autofunction:: resnet101 -.. autofunction:: resnet152 - -SqueezeNet ----------- - -.. autofunction:: squeezenet1_0 -.. autofunction:: squeezenet1_1 - -DenseNet ---------- - -.. autofunction:: densenet121 -.. autofunction:: densenet169 -.. autofunction:: densenet161 -.. autofunction:: densenet201 - -Inception v3 ------------- - -.. autofunction:: inception_v3 - -.. note :: - This requires `scipy` to be installed - - -GoogLeNet ------------- - -.. autofunction:: googlenet - -.. note :: - This requires `scipy` to be installed - - -ShuffleNet v2 -------------- - -.. autofunction:: shufflenet_v2_x0_5 -.. autofunction:: shufflenet_v2_x1_0 -.. autofunction:: shufflenet_v2_x1_5 -.. autofunction:: shufflenet_v2_x2_0 - -MobileNet v2 -------------- - -.. autofunction:: mobilenet_v2 - -MobileNet v3 -------------- - -.. autofunction:: mobilenet_v3_large -.. autofunction:: mobilenet_v3_small - -ResNext -------- - -.. autofunction:: resnext50_32x4d -.. autofunction:: resnext101_32x8d - -Wide ResNet ------------ - -.. autofunction:: wide_resnet50_2 -.. autofunction:: wide_resnet101_2 - -MNASNet --------- - -.. autofunction:: mnasnet0_5 -.. autofunction:: mnasnet0_75 -.. autofunction:: mnasnet1_0 -.. autofunction:: mnasnet1_3 - -EfficientNet ------------- - -.. autofunction:: efficientnet_b0 -.. autofunction:: efficientnet_b1 -.. autofunction:: efficientnet_b2 -.. autofunction:: efficientnet_b3 -.. autofunction:: efficientnet_b4 -.. autofunction:: efficientnet_b5 -.. autofunction:: efficientnet_b6 -.. autofunction:: efficientnet_b7 - -RegNet ------------- - -.. autofunction:: regnet_y_400mf -.. autofunction:: regnet_y_800mf -.. autofunction:: regnet_y_1_6gf -.. autofunction:: regnet_y_3_2gf -.. autofunction:: regnet_y_8gf -.. autofunction:: regnet_y_16gf -.. autofunction:: regnet_y_32gf -.. autofunction:: regnet_x_400mf -.. autofunction:: regnet_x_800mf -.. autofunction:: regnet_x_1_6gf -.. autofunction:: regnet_x_3_2gf -.. autofunction:: regnet_x_8gf -.. autofunction:: regnet_x_16gf -.. autofunction:: regnet_x_32gf - -Quantized Models ----------------- - -The following architectures provide support for INT8 quantized models. You can get -a model with random weights by calling its constructor: - -.. code:: python - - import torchvision.models as models - googlenet = models.quantization.googlenet() - inception_v3 = models.quantization.inception_v3() - mobilenet_v2 = models.quantization.mobilenet_v2() - mobilenet_v3_large = models.quantization.mobilenet_v3_large() - resnet18 = models.quantization.resnet18() - resnet50 = models.quantization.resnet50() - resnext101_32x8d = models.quantization.resnext101_32x8d() - shufflenet_v2_x0_5 = models.quantization.shufflenet_v2_x0_5() - shufflenet_v2_x1_0 = models.quantization.shufflenet_v2_x1_0() - shufflenet_v2_x1_5 = models.quantization.shufflenet_v2_x1_5() - shufflenet_v2_x2_0 = models.quantization.shufflenet_v2_x2_0() - -Obtaining a pre-trained quantized model can be done with a few lines of code: - -.. 
code:: python - - import torchvision.models as models - model = models.quantization.mobilenet_v2(pretrained=True, quantize=True) - model.eval() - # run the model with quantized inputs and weights - out = model(torch.rand(1, 3, 224, 224)) - -We provide pre-trained quantized weights for the following models: - -================================ ============= ============= -Model Acc@1 Acc@5 -================================ ============= ============= -MobileNet V2 71.658 90.150 -MobileNet V3 Large 73.004 90.858 -ShuffleNet V2 68.360 87.582 -ResNet 18 69.494 88.882 -ResNet 50 75.920 92.814 -ResNext 101 32x8d 78.986 94.480 -Inception V3 77.176 93.354 -GoogleNet 69.826 89.404 -================================ ============= ============= - - -Semantic Segmentation -===================== - -The models subpackage contains definitions for the following model -architectures for semantic segmentation: - -- `FCN ResNet50, ResNet101 `_ -- `DeepLabV3 ResNet50, ResNet101, MobileNetV3-Large `_ -- `LR-ASPP MobileNetV3-Large `_ - -As with image classification models, all pre-trained models expect input images normalized in the same way. -The images have to be loaded in to a range of ``[0, 1]`` and then normalized using -``mean = [0.485, 0.456, 0.406]`` and ``std = [0.229, 0.224, 0.225]``. -They have been trained on images resized such that their minimum size is 520. - -For details on how to plot the masks of such models, you may refer to :ref:`semantic_seg_output`. - -The pre-trained models have been trained on a subset of COCO train2017, on the 20 categories that are -present in the Pascal VOC dataset. You can see more information on how the subset has been selected in -``references/segmentation/coco_utils.py``. The classes that the pre-trained model outputs are the following, -in order: - - .. code-block:: python - - ['__background__', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', - 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse', 'motorbike', - 'person', 'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor'] - -The accuracies of the pre-trained models evaluated on COCO val2017 are as follows - -================================ ============= ==================== -Network mean IoU global pixelwise acc -================================ ============= ==================== -FCN ResNet50 60.5 91.4 -FCN ResNet101 63.7 91.9 -DeepLabV3 ResNet50 66.4 92.4 -DeepLabV3 ResNet101 67.4 92.4 -DeepLabV3 MobileNetV3-Large 60.3 91.2 -LR-ASPP MobileNetV3-Large 57.9 91.2 -================================ ============= ==================== - - -Fully Convolutional Networks ----------------------------- - -.. autofunction:: torchvision.models.segmentation.fcn_resnet50 -.. autofunction:: torchvision.models.segmentation.fcn_resnet101 - - -DeepLabV3 ---------- - -.. autofunction:: torchvision.models.segmentation.deeplabv3_resnet50 -.. autofunction:: torchvision.models.segmentation.deeplabv3_resnet101 -.. autofunction:: torchvision.models.segmentation.deeplabv3_mobilenet_v3_large - - -LR-ASPP -------- - -.. autofunction:: torchvision.models.segmentation.lraspp_mobilenet_v3_large - -.. 
_object_det_inst_seg_pers_keypoint_det: - -Object Detection, Instance Segmentation and Person Keypoint Detection -===================================================================== - -The models subpackage contains definitions for the following model -architectures for detection: - -- `Faster R-CNN `_ -- `Mask R-CNN `_ -- `RetinaNet `_ -- `SSD `_ -- `SSDlite `_ - -The pre-trained models for detection, instance segmentation and -keypoint detection are initialized with the classification models -in torchvision. - -The models expect a list of ``Tensor[C, H, W]``, in the range ``0-1``. -The models internally resize the images but the behaviour varies depending -on the model. Check the constructor of the models for more information. The -output format of such models is illustrated in :ref:`instance_seg_output`. - - -For object detection and instance segmentation, the pre-trained -models return the predictions of the following classes: - - .. code-block:: python - - COCO_INSTANCE_CATEGORY_NAMES = [ - '__background__', 'person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', - 'train', 'truck', 'boat', 'traffic light', 'fire hydrant', 'N/A', 'stop sign', - 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow', - 'elephant', 'bear', 'zebra', 'giraffe', 'N/A', 'backpack', 'umbrella', 'N/A', 'N/A', - 'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball', - 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard', 'tennis racket', - 'bottle', 'N/A', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', - 'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', - 'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed', 'N/A', 'dining table', - 'N/A', 'N/A', 'toilet', 'N/A', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone', - 'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'N/A', 'book', - 'clock', 'vase', 'scissors', 'teddy bear', 'hair drier', 'toothbrush' - ] - - -Here are the summary of the accuracies for the models trained on -the instances set of COCO train2017 and evaluated on COCO val2017. - -====================================== ======= ======== =========== -Network box AP mask AP keypoint AP -====================================== ======= ======== =========== -Faster R-CNN ResNet-50 FPN 37.0 - - -Faster R-CNN MobileNetV3-Large FPN 32.8 - - -Faster R-CNN MobileNetV3-Large 320 FPN 22.8 - - -RetinaNet ResNet-50 FPN 36.4 - - -SSD300 VGG16 25.1 - - -SSDlite320 MobileNetV3-Large 21.3 - - -Mask R-CNN ResNet-50 FPN 37.9 34.6 - -====================================== ======= ======== =========== - -For person keypoint detection, the accuracies for the pre-trained -models are as follows - -================================ ======= ======== =========== -Network box AP mask AP keypoint AP -================================ ======= ======== =========== -Keypoint R-CNN ResNet-50 FPN 54.6 - 65.0 -================================ ======= ======== =========== - -For person keypoint detection, the pre-trained model return the -keypoints in the following order: - - .. 
code-block:: python - - COCO_PERSON_KEYPOINT_NAMES = [ - 'nose', - 'left_eye', - 'right_eye', - 'left_ear', - 'right_ear', - 'left_shoulder', - 'right_shoulder', - 'left_elbow', - 'right_elbow', - 'left_wrist', - 'right_wrist', - 'left_hip', - 'right_hip', - 'left_knee', - 'right_knee', - 'left_ankle', - 'right_ankle' - ] - -Runtime characteristics ------------------------ - -The implementations of the models for object detection, instance segmentation -and keypoint detection are efficient. - -In the following table, we use 8 GPUs to report the results. During training, -we use a batch size of 2 per GPU for all models except SSD which uses 4 -and SSDlite which uses 24. During testing a batch size of 1 is used. - -For test time, we report the time for the model evaluation and postprocessing -(including mask pasting in image), but not the time for computing the -precision-recall. - -====================================== =================== ================== =========== -Network train time (s / it) test time (s / it) memory (GB) -====================================== =================== ================== =========== -Faster R-CNN ResNet-50 FPN 0.2288 0.0590 5.2 -Faster R-CNN MobileNetV3-Large FPN 0.1020 0.0415 1.0 -Faster R-CNN MobileNetV3-Large 320 FPN 0.0978 0.0376 0.6 -RetinaNet ResNet-50 FPN 0.2514 0.0939 4.1 -SSD300 VGG16 0.2093 0.0744 1.5 -SSDlite320 MobileNetV3-Large 0.1773 0.0906 1.5 -Mask R-CNN ResNet-50 FPN 0.2728 0.0903 5.4 -Keypoint R-CNN ResNet-50 FPN 0.3789 0.1242 6.8 -====================================== =================== ================== =========== - - -Faster R-CNN ------------- - -.. autofunction:: torchvision.models.detection.fasterrcnn_resnet50_fpn -.. autofunction:: torchvision.models.detection.fasterrcnn_mobilenet_v3_large_fpn -.. autofunction:: torchvision.models.detection.fasterrcnn_mobilenet_v3_large_320_fpn - - -RetinaNet ---------- - -.. autofunction:: torchvision.models.detection.retinanet_resnet50_fpn - - -SSD ---- - -.. autofunction:: torchvision.models.detection.ssd300_vgg16 - - -SSDlite -------- - -.. autofunction:: torchvision.models.detection.ssdlite320_mobilenet_v3_large - - -Mask R-CNN ----------- - -.. autofunction:: torchvision.models.detection.maskrcnn_resnet50_fpn - - -Keypoint R-CNN --------------- - -.. autofunction:: torchvision.models.detection.keypointrcnn_resnet50_fpn - - -Video classification -==================== - -We provide models for action recognition pre-trained on Kinetics-400. -They have all been trained with the scripts provided in ``references/video_classification``. - -All pre-trained models expect input images normalized in the same way, -i.e. mini-batches of 3-channel RGB videos of shape (3 x T x H x W), -where H and W are expected to be 112, and T is a number of video frames in a clip. -The images have to be loaded in to a range of [0, 1] and then normalized -using ``mean = [0.43216, 0.394666, 0.37645]`` and ``std = [0.22803, 0.22145, 0.216989]``. - - -.. note:: - The normalization parameters are different from the image classification ones, and correspond - to the mean and std from Kinetics-400. - -.. note:: - For now, normalization code can be found in ``references/video_classification/transforms.py``, - see the ``Normalize`` function there. Note that it differs from standard normalization for - images because it assumes the video is 4d. 
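-
-As a rough sketch of how a single clip could be prepared and classified (the
-random dummy clip, the clip length of 16 frames and the use of ``r3d_18`` are
-illustrative assumptions), one might write:
-
-.. code:: python
-
-    import torch
-    from torchvision.models.video import r3d_18
-
-    # a dummy clip with values already in [0, 1]: 3 channels, 16 frames of 112x112
-    clip = torch.rand(3, 16, 112, 112)
-
-    # normalize each channel with the Kinetics-400 statistics quoted above
-    mean = torch.tensor([0.43216, 0.394666, 0.37645]).view(3, 1, 1, 1)
-    std = torch.tensor([0.22803, 0.22145, 0.216989]).view(3, 1, 1, 1)
-    clip = (clip - mean) / std
-
-    model = r3d_18(pretrained=True).eval()
-    with torch.no_grad():
-        scores = model(clip.unsqueeze(0))  # batch of one clip -> (1, 400) class scores
-    print(scores.argmax(dim=1))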
- -Kinetics 1-crop accuracies for clip length 16 (16x112x112) - -================================ ============= ============= -Network Clip acc@1 Clip acc@5 -================================ ============= ============= -ResNet 3D 18 52.75 75.45 -ResNet MC 18 53.90 76.29 -ResNet (2+1)D 57.50 78.81 -================================ ============= ============= - - -ResNet 3D ----------- - -.. autofunction:: torchvision.models.video.r3d_18 - -ResNet Mixed Convolution ------------------------- - -.. autofunction:: torchvision.models.video.mc3_18 - -ResNet (2+1)D -------------- - -.. autofunction:: torchvision.models.video.r2plus1d_18 diff --git a/0.11./_sources/ops.rst.txt b/0.11./_sources/ops.rst.txt deleted file mode 100644 index f4f03bdb298..00000000000 --- a/0.11./_sources/ops.rst.txt +++ /dev/null @@ -1,38 +0,0 @@ -.. _ops: - -torchvision.ops -=============== - -.. currentmodule:: torchvision.ops - -:mod:`torchvision.ops` implements operators that are specific for Computer Vision. - -.. note:: - All operators have native support for TorchScript. - - -.. autofunction:: batched_nms -.. autofunction:: box_area -.. autofunction:: box_convert -.. autofunction:: box_iou -.. autofunction:: clip_boxes_to_image -.. autofunction:: deform_conv2d -.. autofunction:: generalized_box_iou -.. autofunction:: masks_to_boxes -.. autofunction:: nms -.. autofunction:: ps_roi_align -.. autofunction:: ps_roi_pool -.. autofunction:: remove_small_boxes -.. autofunction:: roi_align -.. autofunction:: roi_pool -.. autofunction:: sigmoid_focal_loss -.. autofunction:: stochastic_depth - -.. autoclass:: RoIAlign -.. autoclass:: PSRoIAlign -.. autoclass:: RoIPool -.. autoclass:: PSRoIPool -.. autoclass:: DeformConv2d -.. autoclass:: MultiScaleRoIAlign -.. autoclass:: FeaturePyramidNetwork -.. autoclass:: StochasticDepth diff --git a/0.11./_sources/training_references.rst.txt b/0.11./_sources/training_references.rst.txt deleted file mode 100644 index fc22ac5eba6..00000000000 --- a/0.11./_sources/training_references.rst.txt +++ /dev/null @@ -1,29 +0,0 @@ -Training references -=================== - -On top of the many models, datasets, and image transforms, Torchvision also -provides training reference scripts. These are the scripts that we use to train -the :ref:`models ` which are then available with pre-trained weights. - -These scripts are not part of the core package and are instead available `on -GitHub `_. We currently -provide references for -`classification `_, -`detection `_, -`segmentation `_, -`similarity learning `_, -and `video classification `_. - -While these scripts are largely stable, they do not offer backward compatibility -guarantees. - -In general, these scripts rely on the latest (not yet released) pytorch version -or the latest torchvision version. This means that to use them, **you might need -to install the latest pytorch and torchvision versions**, with e.g.:: - - conda install pytorch torchvision -c pytorch-nightly - -If you need to rely on an older stable version of pytorch or torchvision, e.g. -torchvision 0.10, then it's safer to use the scripts from that corresponding -release on GitHub, namely -https://github.com/pytorch/vision/tree/v0.10.0/references. diff --git a/0.11./_sources/transforms.rst.txt b/0.11./_sources/transforms.rst.txt deleted file mode 100644 index 3d4ba9542ec..00000000000 --- a/0.11./_sources/transforms.rst.txt +++ /dev/null @@ -1,295 +0,0 @@ -.. _transforms: - -torchvision.transforms -====================== - -.. 
currentmodule:: torchvision.transforms - -Transforms are common image transformations. They can be chained together using :class:`Compose`. -Most transform classes have a function equivalent: :ref:`functional -transforms ` give fine-grained control over the -transformations. -This is useful if you have to build a more complex transformation pipeline -(e.g. in the case of segmentation tasks). - -Most transformations accept both `PIL `_ -images and tensor images, although some transformations are :ref:`PIL-only -` and some are :ref:`tensor-only -`. The :ref:`conversion_transforms` may be used to -convert to and from PIL images. - -The transformations that accept tensor images also accept batches of tensor -images. A Tensor Image is a tensor with ``(C, H, W)`` shape, where ``C`` is a -number of channels, ``H`` and ``W`` are image height and width. A batch of -Tensor Images is a tensor of ``(B, C, H, W)`` shape, where ``B`` is a number -of images in the batch. - -The expected range of the values of a tensor image is implicitly defined by -the tensor dtype. Tensor images with a float dtype are expected to have -values in ``[0, 1)``. Tensor images with an integer dtype are expected to -have values in ``[0, MAX_DTYPE]`` where ``MAX_DTYPE`` is the largest value -that can be represented in that dtype. - -Randomized transformations will apply the same transformation to all the -images of a given batch, but they will produce different transformations -across calls. For reproducible transformations across calls, you may use -:ref:`functional transforms `. - -The following examples illustrate the use of the available transforms: - - * :ref:`sphx_glr_auto_examples_plot_transforms.py` - - .. figure:: ../source/auto_examples/images/sphx_glr_plot_transforms_001.png - :align: center - :scale: 65% - - * :ref:`sphx_glr_auto_examples_plot_scripted_tensor_transforms.py` - - .. figure:: ../source/auto_examples/images/sphx_glr_plot_scripted_tensor_transforms_001.png - :align: center - :scale: 30% - -.. warning:: - - Since v0.8.0 all random transformations are using torch default random generator to sample random parameters. - It is a backward compatibility breaking change and user should set the random state as following: - - .. code:: python - - # Previous versions - # import random - # random.seed(12) - - # Now - import torch - torch.manual_seed(17) - - Please, keep in mind that the same seed for torch random generator and Python random generator will not - produce the same results. - - -Scriptable transforms ---------------------- - -In order to script the transformations, please use ``torch.nn.Sequential`` instead of :class:`Compose`. - -.. code:: python - - transforms = torch.nn.Sequential( - transforms.CenterCrop(10), - transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)), - ) - scripted_transforms = torch.jit.script(transforms) - -Make sure to use only scriptable transformations, i.e. that work with ``torch.Tensor`` and does not require -`lambda` functions or ``PIL.Image``. - -For any custom transformations to be used with ``torch.jit.script``, they should be derived from ``torch.nn.Module``. - - -Compositions of transforms --------------------------- - -.. autoclass:: Compose - - -Transforms on PIL Image and torch.\*Tensor ------------------------------------------- - -.. autoclass:: CenterCrop - :members: - -.. autoclass:: ColorJitter - :members: - -.. autoclass:: FiveCrop - :members: - -.. autoclass:: Grayscale - :members: - -.. autoclass:: Pad - :members: - -.. 
autoclass:: RandomAffine - :members: - -.. autoclass:: RandomApply - -.. autoclass:: RandomCrop - :members: - -.. autoclass:: RandomGrayscale - :members: - -.. autoclass:: RandomHorizontalFlip - :members: - -.. autoclass:: RandomPerspective - :members: - -.. autoclass:: RandomResizedCrop - :members: - -.. autoclass:: RandomRotation - :members: - -.. autoclass:: RandomSizedCrop - :members: - -.. autoclass:: RandomVerticalFlip - :members: - -.. autoclass:: Resize - :members: - -.. autoclass:: Scale - :members: - -.. autoclass:: TenCrop - :members: - -.. autoclass:: GaussianBlur - :members: - -.. autoclass:: RandomInvert - :members: - -.. autoclass:: RandomPosterize - :members: - -.. autoclass:: RandomSolarize - :members: - -.. autoclass:: RandomAdjustSharpness - :members: - -.. autoclass:: RandomAutocontrast - :members: - -.. autoclass:: RandomEqualize - :members: - -.. _transforms_pil_only: - -Transforms on PIL Image only ----------------------------- - -.. autoclass:: RandomChoice - -.. autoclass:: RandomOrder - -.. _transforms_tensor_only: - -Transforms on torch.\*Tensor only ---------------------------------- - -.. autoclass:: LinearTransformation - :members: - -.. autoclass:: Normalize - :members: - -.. autoclass:: RandomErasing - :members: - -.. autoclass:: ConvertImageDtype - -.. _conversion_transforms: - -Conversion Transforms ---------------------- - -.. autoclass:: ToPILImage - :members: - -.. autoclass:: ToTensor - :members: - -.. autoclass:: PILToTensor - :members: - - -Generic Transforms ------------------- - -.. autoclass:: Lambda - :members: - - -Automatic Augmentation Transforms ---------------------------------- - -`AutoAugment `_ is a common Data Augmentation technique that can improve the accuracy of Image Classification models. -Though the data augmentation policies are directly linked to their trained dataset, empirical studies show that -ImageNet policies provide significant improvements when applied to other datasets. -In TorchVision we implemented 3 policies learned on the following datasets: ImageNet, CIFAR10 and SVHN. -The new transform can be used standalone or mixed-and-matched with existing transforms: - -.. autoclass:: AutoAugmentPolicy - :members: - -.. autoclass:: AutoAugment - :members: - -`RandAugment `_ is a simple high-performing Data Augmentation technique which improves the accuracy of Image Classification models. - -.. autoclass:: RandAugment - :members: - -`TrivialAugmentWide `_ is a dataset-independent data-augmentation technique which improves the accuracy of Image Classification models. - -.. autoclass:: TrivialAugmentWide - :members: - -.. _functional_transforms: - -Functional Transforms ---------------------- - -Functional transforms give you fine-grained control of the transformation pipeline. -As opposed to the transformations above, functional transforms don't contain a random number -generator for their parameters. -That means you have to specify/generate all parameters, but the functional transform will give you -reproducible results across calls. - -Example: -you can apply a functional transform with the same parameters to multiple images like this: - -.. code:: python - - import torchvision.transforms.functional as TF - import random - - def my_segmentation_transforms(image, segmentation): - if random.random() > 0.5: - angle = random.randint(-30, 30) - image = TF.rotate(image, angle) - segmentation = TF.rotate(segmentation, angle) - # more transforms ... 
- return image, segmentation - - -Example: -you can use a functional transform to build transform classes with custom behavior: - -.. code:: python - - import torchvision.transforms.functional as TF - import random - - class MyRotationTransform: - """Rotate by one of the given angles.""" - - def __init__(self, angles): - self.angles = angles - - def __call__(self, x): - angle = random.choice(self.angles) - return TF.rotate(x, angle) - - rotation_transform = MyRotationTransform(angles=[-30, -15, 0, 15, 30]) - - -.. automodule:: torchvision.transforms.functional - :members: diff --git a/0.11./_sources/utils.rst.txt b/0.11./_sources/utils.rst.txt deleted file mode 100644 index b0a2d743d4e..00000000000 --- a/0.11./_sources/utils.rst.txt +++ /dev/null @@ -1,14 +0,0 @@ -.. _utils: - -torchvision.utils -================= - -.. currentmodule:: torchvision.utils - -.. autofunction:: make_grid - -.. autofunction:: save_image - -.. autofunction:: draw_bounding_boxes - -.. autofunction:: draw_segmentation_masks diff --git a/0.11./_static/basic.css b/0.11./_static/basic.css deleted file mode 100644 index b3bdc004066..00000000000 --- a/0.11./_static/basic.css +++ /dev/null @@ -1,861 +0,0 @@ -/* - * basic.css - * ~~~~~~~~~ - * - * Sphinx stylesheet -- basic theme. - * - * :copyright: Copyright 2007-2021 by the Sphinx team, see AUTHORS. - * :license: BSD, see LICENSE for details. - * - */ - -/* -- main layout ----------------------------------------------------------- */ - -div.clearer { - clear: both; -} - -div.section::after { - display: block; - content: ''; - clear: left; -} - -/* -- relbar ---------------------------------------------------------------- */ - -div.related { - width: 100%; - font-size: 90%; -} - -div.related h3 { - display: none; -} - -div.related ul { - margin: 0; - padding: 0 0 0 10px; - list-style: none; -} - -div.related li { - display: inline; -} - -div.related li.right { - float: right; - margin-right: 5px; -} - -/* -- sidebar --------------------------------------------------------------- */ - -div.sphinxsidebarwrapper { - padding: 10px 5px 0 10px; -} - -div.sphinxsidebar { - float: left; - width: 230px; - margin-left: -100%; - font-size: 90%; - word-wrap: break-word; - overflow-wrap : break-word; -} - -div.sphinxsidebar ul { - list-style: none; -} - -div.sphinxsidebar ul ul, -div.sphinxsidebar ul.want-points { - margin-left: 20px; - list-style: square; -} - -div.sphinxsidebar ul ul { - margin-top: 0; - margin-bottom: 0; -} - -div.sphinxsidebar form { - margin-top: 10px; -} - -div.sphinxsidebar input { - border: 1px solid #98dbcc; - font-family: sans-serif; - font-size: 1em; -} - -div.sphinxsidebar #searchbox form.search { - overflow: hidden; -} - -div.sphinxsidebar #searchbox input[type="text"] { - float: left; - width: 80%; - padding: 0.25em; - box-sizing: border-box; -} - -div.sphinxsidebar #searchbox input[type="submit"] { - float: left; - width: 20%; - border-left: none; - padding: 0.25em; - box-sizing: border-box; -} - - -img { - border: 0; - max-width: 100%; -} - -/* -- search page ----------------------------------------------------------- */ - -ul.search { - margin: 10px 0 0 20px; - padding: 0; -} - -ul.search li { - padding: 5px 0 5px 20px; - background-image: url(file.png); - background-repeat: no-repeat; - background-position: 0 7px; -} - -ul.search li a { - font-weight: bold; -} - -ul.search li div.context { - color: #888; - margin: 2px 0 0 30px; - text-align: left; -} - -ul.keywordmatches li.goodmatch a { - font-weight: bold; -} - -/* -- index page 
------------------------------------------------------------ */ - -table.contentstable { - width: 90%; - margin-left: auto; - margin-right: auto; -} - -table.contentstable p.biglink { - line-height: 150%; -} - -a.biglink { - font-size: 1.3em; -} - -span.linkdescr { - font-style: italic; - padding-top: 5px; - font-size: 90%; -} - -/* -- general index --------------------------------------------------------- */ - -table.indextable { - width: 100%; -} - -table.indextable td { - text-align: left; - vertical-align: top; -} - -table.indextable ul { - margin-top: 0; - margin-bottom: 0; - list-style-type: none; -} - -table.indextable > tbody > tr > td > ul { - padding-left: 0em; -} - -table.indextable tr.pcap { - height: 10px; -} - -table.indextable tr.cap { - margin-top: 10px; - background-color: #f2f2f2; -} - -img.toggler { - margin-right: 3px; - margin-top: 3px; - cursor: pointer; -} - -div.modindex-jumpbox { - border-top: 1px solid #ddd; - border-bottom: 1px solid #ddd; - margin: 1em 0 1em 0; - padding: 0.4em; -} - -div.genindex-jumpbox { - border-top: 1px solid #ddd; - border-bottom: 1px solid #ddd; - margin: 1em 0 1em 0; - padding: 0.4em; -} - -/* -- domain module index --------------------------------------------------- */ - -table.modindextable td { - padding: 2px; - border-collapse: collapse; -} - -/* -- general body styles --------------------------------------------------- */ - -div.body { - min-width: 450px; - max-width: 800px; -} - -div.body p, div.body dd, div.body li, div.body blockquote { - -moz-hyphens: auto; - -ms-hyphens: auto; - -webkit-hyphens: auto; - hyphens: auto; -} - -a.headerlink { - visibility: hidden; -} - -a.brackets:before, -span.brackets > a:before{ - content: "["; -} - -a.brackets:after, -span.brackets > a:after { - content: "]"; -} - -h1:hover > a.headerlink, -h2:hover > a.headerlink, -h3:hover > a.headerlink, -h4:hover > a.headerlink, -h5:hover > a.headerlink, -h6:hover > a.headerlink, -dt:hover > a.headerlink, -caption:hover > a.headerlink, -p.caption:hover > a.headerlink, -div.code-block-caption:hover > a.headerlink { - visibility: visible; -} - -div.body p.caption { - text-align: inherit; -} - -div.body td { - text-align: left; -} - -.first { - margin-top: 0 !important; -} - -p.rubric { - margin-top: 30px; - font-weight: bold; -} - -img.align-left, figure.align-left, .figure.align-left, object.align-left { - clear: left; - float: left; - margin-right: 1em; -} - -img.align-right, figure.align-right, .figure.align-right, object.align-right { - clear: right; - float: right; - margin-left: 1em; -} - -img.align-center, figure.align-center, .figure.align-center, object.align-center { - display: block; - margin-left: auto; - margin-right: auto; -} - -img.align-default, figure.align-default, .figure.align-default { - display: block; - margin-left: auto; - margin-right: auto; -} - -.align-left { - text-align: left; -} - -.align-center { - text-align: center; -} - -.align-default { - text-align: center; -} - -.align-right { - text-align: right; -} - -/* -- sidebars -------------------------------------------------------------- */ - -div.sidebar, -aside.sidebar { - margin: 0 0 0.5em 1em; - border: 1px solid #ddb; - padding: 7px; - background-color: #ffe; - width: 40%; - float: right; - clear: right; - overflow-x: auto; -} - -p.sidebar-title { - font-weight: bold; -} - -div.admonition, div.topic, blockquote { - clear: left; -} - -/* -- topics ---------------------------------------------------------------- */ - -div.topic { - border: 1px solid #ccc; - padding: 7px; - 
margin: 10px 0 10px 0; -} - -p.topic-title { - font-size: 1.1em; - font-weight: bold; - margin-top: 10px; -} - -/* -- admonitions ----------------------------------------------------------- */ - -div.admonition { - margin-top: 10px; - margin-bottom: 10px; - padding: 7px; -} - -div.admonition dt { - font-weight: bold; -} - -p.admonition-title { - margin: 0px 10px 5px 0px; - font-weight: bold; -} - -div.body p.centered { - text-align: center; - margin-top: 25px; -} - -/* -- content of sidebars/topics/admonitions -------------------------------- */ - -div.sidebar > :last-child, -aside.sidebar > :last-child, -div.topic > :last-child, -div.admonition > :last-child { - margin-bottom: 0; -} - -div.sidebar::after, -aside.sidebar::after, -div.topic::after, -div.admonition::after, -blockquote::after { - display: block; - content: ''; - clear: both; -} - -/* -- tables ---------------------------------------------------------------- */ - -table.docutils { - margin-top: 10px; - margin-bottom: 10px; - border: 0; - border-collapse: collapse; -} - -table.align-center { - margin-left: auto; - margin-right: auto; -} - -table.align-default { - margin-left: auto; - margin-right: auto; -} - -table caption span.caption-number { - font-style: italic; -} - -table caption span.caption-text { -} - -table.docutils td, table.docutils th { - padding: 1px 8px 1px 5px; - border-top: 0; - border-left: 0; - border-right: 0; - border-bottom: 1px solid #aaa; -} - -table.footnote td, table.footnote th { - border: 0 !important; -} - -th { - text-align: left; - padding-right: 5px; -} - -table.citation { - border-left: solid 1px gray; - margin-left: 1px; -} - -table.citation td { - border-bottom: none; -} - -th > :first-child, -td > :first-child { - margin-top: 0px; -} - -th > :last-child, -td > :last-child { - margin-bottom: 0px; -} - -/* -- figures --------------------------------------------------------------- */ - -div.figure, figure { - margin: 0.5em; - padding: 0.5em; -} - -div.figure p.caption, figcaption { - padding: 0.3em; -} - -div.figure p.caption span.caption-number, -figcaption span.caption-number { - font-style: italic; -} - -div.figure p.caption span.caption-text, -figcaption span.caption-text { -} - -/* -- field list styles ----------------------------------------------------- */ - -table.field-list td, table.field-list th { - border: 0 !important; -} - -.field-list ul { - margin: 0; - padding-left: 1em; -} - -.field-list p { - margin: 0; -} - -.field-name { - -moz-hyphens: manual; - -ms-hyphens: manual; - -webkit-hyphens: manual; - hyphens: manual; -} - -/* -- hlist styles ---------------------------------------------------------- */ - -table.hlist { - margin: 1em 0; -} - -table.hlist td { - vertical-align: top; -} - - -/* -- other body styles ----------------------------------------------------- */ - -ol.arabic { - list-style: decimal; -} - -ol.loweralpha { - list-style: lower-alpha; -} - -ol.upperalpha { - list-style: upper-alpha; -} - -ol.lowerroman { - list-style: lower-roman; -} - -ol.upperroman { - list-style: upper-roman; -} - -:not(li) > ol > li:first-child > :first-child, -:not(li) > ul > li:first-child > :first-child { - margin-top: 0px; -} - -:not(li) > ol > li:last-child > :last-child, -:not(li) > ul > li:last-child > :last-child { - margin-bottom: 0px; -} - -ol.simple ol p, -ol.simple ul p, -ul.simple ol p, -ul.simple ul p { - margin-top: 0; -} - -ol.simple > li:not(:first-child) > p, -ul.simple > li:not(:first-child) > p { - margin-top: 0; -} - -ol.simple p, -ul.simple p { - margin-bottom: 0; -} - 
-dl.footnote > dt, -dl.citation > dt { - float: left; - margin-right: 0.5em; -} - -dl.footnote > dd, -dl.citation > dd { - margin-bottom: 0em; -} - -dl.footnote > dd:after, -dl.citation > dd:after { - content: ""; - clear: both; -} - -dl.field-list { - display: grid; - grid-template-columns: fit-content(30%) auto; -} - -dl.field-list > dt { - font-weight: bold; - word-break: break-word; - padding-left: 0.5em; - padding-right: 5px; -} - -dl.field-list > dt:after { - content: ":"; -} - -dl.field-list > dd { - padding-left: 0.5em; - margin-top: 0em; - margin-left: 0em; - margin-bottom: 0em; -} - -dl { - margin-bottom: 15px; -} - -dd > :first-child { - margin-top: 0px; -} - -dd ul, dd table { - margin-bottom: 10px; -} - -dd { - margin-top: 3px; - margin-bottom: 10px; - margin-left: 30px; -} - -dl > dd:last-child, -dl > dd:last-child > :last-child { - margin-bottom: 0; -} - -dt:target, span.highlighted { - background-color: #fbe54e; -} - -rect.highlighted { - fill: #fbe54e; -} - -dl.glossary dt { - font-weight: bold; - font-size: 1.1em; -} - -.optional { - font-size: 1.3em; -} - -.sig-paren { - font-size: larger; -} - -.versionmodified { - font-style: italic; -} - -.system-message { - background-color: #fda; - padding: 5px; - border: 3px solid red; -} - -.footnote:target { - background-color: #ffa; -} - -.line-block { - display: block; - margin-top: 1em; - margin-bottom: 1em; -} - -.line-block .line-block { - margin-top: 0; - margin-bottom: 0; - margin-left: 1.5em; -} - -.guilabel, .menuselection { - font-family: sans-serif; -} - -.accelerator { - text-decoration: underline; -} - -.classifier { - font-style: oblique; -} - -.classifier:before { - font-style: normal; - margin: 0.5em; - content: ":"; -} - -abbr, acronym { - border-bottom: dotted 1px; - cursor: help; -} - -/* -- code displays --------------------------------------------------------- */ - -pre { - overflow: auto; - overflow-y: hidden; /* fixes display issues on Chrome browsers */ -} - -pre, div[class*="highlight-"] { - clear: both; -} - -span.pre { - -moz-hyphens: none; - -ms-hyphens: none; - -webkit-hyphens: none; - hyphens: none; -} - -div[class*="highlight-"] { - margin: 1em 0; -} - -td.linenos pre { - border: 0; - background-color: transparent; - color: #aaa; -} - -table.highlighttable { - display: block; -} - -table.highlighttable tbody { - display: block; -} - -table.highlighttable tr { - display: flex; -} - -table.highlighttable td { - margin: 0; - padding: 0; -} - -table.highlighttable td.linenos { - padding-right: 0.5em; -} - -table.highlighttable td.code { - flex: 1; - overflow: hidden; -} - -.highlight .hll { - display: block; -} - -div.highlight pre, -table.highlighttable pre { - margin: 0; -} - -div.code-block-caption + div { - margin-top: 0; -} - -div.code-block-caption { - margin-top: 1em; - padding: 2px 5px; - font-size: small; -} - -div.code-block-caption code { - background-color: transparent; -} - -table.highlighttable td.linenos, -span.linenos, -div.doctest > div.highlight span.gp { /* gp: Generic.Prompt */ - user-select: none; -} - -div.code-block-caption span.caption-number { - padding: 0.1em 0.3em; - font-style: italic; -} - -div.code-block-caption span.caption-text { -} - -div.literal-block-wrapper { - margin: 1em 0; -} - -code.descname { - background-color: transparent; - font-weight: bold; - font-size: 1.2em; -} - -code.descclassname { - background-color: transparent; -} - -code.xref, a code { - background-color: transparent; - font-weight: bold; -} - -h1 code, h2 code, h3 code, h4 code, h5 code, h6 code { 
- background-color: transparent; -} - -.viewcode-link { - float: right; -} - -.viewcode-back { - float: right; - font-family: sans-serif; -} - -div.viewcode-block:target { - margin: -1px -10px; - padding: 0 10px; -} - -/* -- math display ---------------------------------------------------------- */ - -img.math { - vertical-align: middle; -} - -div.body div.math p { - text-align: center; -} - -span.eqno { - float: right; -} - -span.eqno a.headerlink { - position: absolute; - z-index: 1; -} - -div.math:hover a.headerlink { - visibility: visible; -} - -/* -- printout stylesheet --------------------------------------------------- */ - -@media print { - div.document, - div.documentwrapper, - div.bodywrapper { - margin: 0 !important; - width: 100%; - } - - div.sphinxsidebar, - div.related, - div.footer, - #top-link { - display: none; - } -} \ No newline at end of file diff --git a/0.11./_static/binder_badge_logo.svg b/0.11./_static/binder_badge_logo.svg deleted file mode 100644 index 327f6b639a9..00000000000 --- a/0.11./_static/binder_badge_logo.svg +++ /dev/null @@ -1 +0,0 @@ - launchlaunchbinderbinder \ No newline at end of file diff --git a/0.11./_static/broken_example.png b/0.11./_static/broken_example.png deleted file mode 100644 index 4fea24e7df4..00000000000 Binary files a/0.11./_static/broken_example.png and /dev/null differ diff --git a/0.11./_static/check-solid.svg b/0.11./_static/check-solid.svg deleted file mode 100644 index 92fad4b5c0b..00000000000 --- a/0.11./_static/check-solid.svg +++ /dev/null @@ -1,4 +0,0 @@ - - - - diff --git a/0.11./_static/clipboard.min.js b/0.11./_static/clipboard.min.js deleted file mode 100644 index 54b3c463811..00000000000 --- a/0.11./_static/clipboard.min.js +++ /dev/null @@ -1,7 +0,0 @@ -/*! - * clipboard.js v2.0.8 - * https://clipboardjs.com/ - * - * Licensed MIT © Zeno Rocha - */ -!function(t,e){"object"==typeof exports&&"object"==typeof module?module.exports=e():"function"==typeof define&&define.amd?define([],e):"object"==typeof exports?exports.ClipboardJS=e():t.ClipboardJS=e()}(this,function(){return n={686:function(t,e,n){"use strict";n.d(e,{default:function(){return o}});var e=n(279),i=n.n(e),e=n(370),u=n.n(e),e=n(817),c=n.n(e);function a(t){try{return document.execCommand(t)}catch(t){return}}var f=function(t){t=c()(t);return a("cut"),t};var l=function(t){var e,n,o,r=1 - - - - diff --git a/0.11./_static/copybutton.css b/0.11./_static/copybutton.css deleted file mode 100644 index 5d291490ce5..00000000000 --- a/0.11./_static/copybutton.css +++ /dev/null @@ -1,81 +0,0 @@ -/* Copy buttons */ -button.copybtn { - position: absolute; - display: flex; - top: .3em; - right: .5em; - width: 1.7em; - height: 1.7em; - opacity: 0; - transition: opacity 0.3s, border .3s, background-color .3s; - user-select: none; - padding: 0; - border: none; - outline: none; - border-radius: 0.4em; - border: #e1e1e1 1px solid; - background-color: rgb(245, 245, 245); -} - -button.copybtn.success { - border-color: #22863a; -} - -button.copybtn img { - width: 100%; - padding: .2em; -} - -div.highlight { - position: relative; -} - -.highlight:hover button.copybtn { - opacity: 1; -} - -.highlight button.copybtn:hover { - background-color: rgb(235, 235, 235); -} - -.highlight button.copybtn:active { - background-color: rgb(187, 187, 187); -} - -/** - * A minimal CSS-only tooltip copied from: - * https://codepen.io/mildrenben/pen/rVBrpK - * - * To use, write HTML like the following: - * - *

 * <p class="o-tooltip--left" data-tooltip="Hey">Short</p>

- */ - .o-tooltip--left { - position: relative; - } - - .o-tooltip--left:after { - opacity: 0; - visibility: hidden; - position: absolute; - content: attr(data-tooltip); - padding: .2em; - font-size: .8em; - left: -.2em; - background: grey; - color: white; - white-space: nowrap; - z-index: 2; - border-radius: 2px; - transform: translateX(-102%) translateY(0); - transition: opacity 0.2s cubic-bezier(0.64, 0.09, 0.08, 1), transform 0.2s cubic-bezier(0.64, 0.09, 0.08, 1); -} - -.o-tooltip--left:hover:after { - display: block; - opacity: 1; - visibility: visible; - transform: translateX(-100%) translateY(0); - transition: opacity 0.2s cubic-bezier(0.64, 0.09, 0.08, 1), transform 0.2s cubic-bezier(0.64, 0.09, 0.08, 1); - transition-delay: .5s; -} diff --git a/0.11./_static/copybutton.js b/0.11./_static/copybutton.js deleted file mode 100644 index 482bda03cdc..00000000000 --- a/0.11./_static/copybutton.js +++ /dev/null @@ -1,197 +0,0 @@ -// Localization support -const messages = { - 'en': { - 'copy': 'Copy', - 'copy_to_clipboard': 'Copy to clipboard', - 'copy_success': 'Copied!', - 'copy_failure': 'Failed to copy', - }, - 'es' : { - 'copy': 'Copiar', - 'copy_to_clipboard': 'Copiar al portapapeles', - 'copy_success': '¡Copiado!', - 'copy_failure': 'Error al copiar', - }, - 'de' : { - 'copy': 'Kopieren', - 'copy_to_clipboard': 'In die Zwischenablage kopieren', - 'copy_success': 'Kopiert!', - 'copy_failure': 'Fehler beim Kopieren', - }, - 'fr' : { - 'copy': 'Copier', - 'copy_to_clipboard': 'Copié dans le presse-papier', - 'copy_success': 'Copié !', - 'copy_failure': 'Échec de la copie', - }, - 'ru': { - 'copy': 'Скопировать', - 'copy_to_clipboard': 'Скопировать в буфер', - 'copy_success': 'Скопировано!', - 'copy_failure': 'Не удалось скопировать', - }, - 'zh-CN': { - 'copy': '复制', - 'copy_to_clipboard': '复制到剪贴板', - 'copy_success': '复制成功!', - 'copy_failure': '复制失败', - } -} - -let locale = 'en' -if( document.documentElement.lang !== undefined - && messages[document.documentElement.lang] !== undefined ) { - locale = document.documentElement.lang -} - -let doc_url_root = DOCUMENTATION_OPTIONS.URL_ROOT; -if (doc_url_root == '#') { - doc_url_root = ''; -} - -const path_static = `${doc_url_root}_static/`; - -/** - * Set up copy/paste for code blocks - */ - -const runWhenDOMLoaded = cb => { - if (document.readyState != 'loading') { - cb() - } else if (document.addEventListener) { - document.addEventListener('DOMContentLoaded', cb) - } else { - document.attachEvent('onreadystatechange', function() { - if (document.readyState == 'complete') cb() - }) - } -} - -const codeCellId = index => `codecell${index}` - -// Clears selected text since ClipboardJS will select the text when copying -const clearSelection = () => { - if (window.getSelection) { - window.getSelection().removeAllRanges() - } else if (document.selection) { - document.selection.empty() - } -} - -// Changes tooltip text for two seconds, then changes it back -const temporarilyChangeTooltip = (el, oldText, newText) => { - el.setAttribute('data-tooltip', newText) - el.classList.add('success') - setTimeout(() => el.setAttribute('data-tooltip', oldText), 2000) - setTimeout(() => el.classList.remove('success'), 2000) -} - -// Changes the copy button icon for two seconds, then changes it back -const temporarilyChangeIcon = (el) => { - img = el.querySelector("img"); - img.setAttribute('src', `${path_static}check-solid.svg`) - setTimeout(() => img.setAttribute('src', `${path_static}copy-button.svg`), 2000) -} - -const addCopyButtonToCodeCells = () => { - 
// If ClipboardJS hasn't loaded, wait a bit and try again. This - // happens because we load ClipboardJS asynchronously. - if (window.ClipboardJS === undefined) { - setTimeout(addCopyButtonToCodeCells, 250) - return - } - - // Add copybuttons to all of our code cells - const codeCells = document.querySelectorAll('div.highlight pre') - codeCells.forEach((codeCell, index) => { - const id = codeCellId(index) - codeCell.setAttribute('id', id) - - const clipboardButton = id => - `` - codeCell.insertAdjacentHTML('afterend', clipboardButton(id)) - }) - -function escapeRegExp(string) { - return string.replace(/[.*+?^${}()|[\]\\]/g, '\\$&'); // $& means the whole matched string -} - -// Callback when a copy button is clicked. Will be passed the node that was clicked -// should then grab the text and replace pieces of text that shouldn't be used in output -function formatCopyText(textContent, copybuttonPromptText, isRegexp = false, onlyCopyPromptLines = true, removePrompts = true, copyEmptyLines = true, lineContinuationChar = "", hereDocDelim = "") { - - var regexp; - var match; - - // Do we check for line continuation characters and "HERE-documents"? - var useLineCont = !!lineContinuationChar - var useHereDoc = !!hereDocDelim - - // create regexp to capture prompt and remaining line - if (isRegexp) { - regexp = new RegExp('^(' + copybuttonPromptText + ')(.*)') - } else { - regexp = new RegExp('^(' + escapeRegExp(copybuttonPromptText) + ')(.*)') - } - - const outputLines = []; - var promptFound = false; - var gotLineCont = false; - var gotHereDoc = false; - const lineGotPrompt = []; - for (const line of textContent.split('\n')) { - match = line.match(regexp) - if (match || gotLineCont || gotHereDoc) { - promptFound = regexp.test(line) - lineGotPrompt.push(promptFound) - if (removePrompts && promptFound) { - outputLines.push(match[2]) - } else { - outputLines.push(line) - } - gotLineCont = line.endsWith(lineContinuationChar) & useLineCont - if (line.includes(hereDocDelim) & useHereDoc) - gotHereDoc = !gotHereDoc - } else if (!onlyCopyPromptLines) { - outputLines.push(line) - } else if (copyEmptyLines && line.trim() === '') { - outputLines.push(line) - } - } - - // If no lines with the prompt were found then just use original lines - if (lineGotPrompt.some(v => v === true)) { - textContent = outputLines.join('\n'); - } - - // Remove a trailing newline to avoid auto-running when pasting - if (textContent.endsWith("\n")) { - textContent = textContent.slice(0, -1) - } - return textContent -} - - -var copyTargetText = (trigger) => { - var target = document.querySelector(trigger.attributes['data-clipboard-target'].value); - return formatCopyText(target.innerText, '', false, true, true, true, '', '') -} - - // Initialize with a callback so we can modify the text before copy - const clipboard = new ClipboardJS('.copybtn', {text: copyTargetText}) - - // Update UI with error/success messages - clipboard.on('success', event => { - clearSelection() - temporarilyChangeTooltip(event.trigger, messages[locale]['copy'], messages[locale]['copy_success']) - temporarilyChangeIcon(event.trigger) - }) - - clipboard.on('error', event => { - temporarilyChangeTooltip(event.trigger, messages[locale]['copy'], messages[locale]['copy_failure']) - }) -} - -runWhenDOMLoaded(addCopyButtonToCodeCells) \ No newline at end of file diff --git a/0.11./_static/copybutton_funcs.js b/0.11./_static/copybutton_funcs.js deleted file mode 100644 index b9168c55654..00000000000 --- a/0.11./_static/copybutton_funcs.js +++ /dev/null @@ -1,58 +0,0 
@@ -function escapeRegExp(string) { - return string.replace(/[.*+?^${}()|[\]\\]/g, '\\$&'); // $& means the whole matched string -} - -// Callback when a copy button is clicked. Will be passed the node that was clicked -// should then grab the text and replace pieces of text that shouldn't be used in output -export function formatCopyText(textContent, copybuttonPromptText, isRegexp = false, onlyCopyPromptLines = true, removePrompts = true, copyEmptyLines = true, lineContinuationChar = "", hereDocDelim = "") { - - var regexp; - var match; - - // Do we check for line continuation characters and "HERE-documents"? - var useLineCont = !!lineContinuationChar - var useHereDoc = !!hereDocDelim - - // create regexp to capture prompt and remaining line - if (isRegexp) { - regexp = new RegExp('^(' + copybuttonPromptText + ')(.*)') - } else { - regexp = new RegExp('^(' + escapeRegExp(copybuttonPromptText) + ')(.*)') - } - - const outputLines = []; - var promptFound = false; - var gotLineCont = false; - var gotHereDoc = false; - const lineGotPrompt = []; - for (const line of textContent.split('\n')) { - match = line.match(regexp) - if (match || gotLineCont || gotHereDoc) { - promptFound = regexp.test(line) - lineGotPrompt.push(promptFound) - if (removePrompts && promptFound) { - outputLines.push(match[2]) - } else { - outputLines.push(line) - } - gotLineCont = line.endsWith(lineContinuationChar) & useLineCont - if (line.includes(hereDocDelim) & useHereDoc) - gotHereDoc = !gotHereDoc - } else if (!onlyCopyPromptLines) { - outputLines.push(line) - } else if (copyEmptyLines && line.trim() === '') { - outputLines.push(line) - } - } - - // If no lines with the prompt were found then just use original lines - if (lineGotPrompt.some(v => v === true)) { - textContent = outputLines.join('\n'); - } - - // Remove a trailing newline to avoid auto-running when pasting - if (textContent.endsWith("\n")) { - textContent = textContent.slice(0, -1) - } - return textContent -} diff --git a/0.11./_static/css/custom_torchvision.css b/0.11./_static/css/custom_torchvision.css deleted file mode 100644 index aa0e74c753a..00000000000 --- a/0.11./_static/css/custom_torchvision.css +++ /dev/null @@ -1,12 +0,0 @@ -/* This rule (and possibly this entire file) should be removed once -https://github.com/pytorch/pytorch_sphinx_theme/issues/125 is fixed. - -We override the rule so that the links to the notebooks aren't hidden in the -gallery examples. pytorch_sphinx_theme is supposed to customize those links so -that they render nicely (look at the nice links on top of the tutorials -examples) but it doesn't work for repos that are not the tutorial repo, and in -torchvision it just hides the links. So we have to put them back here */ -article.pytorch-article .sphx-glr-download-link-note.admonition.note, -article.pytorch-article .reference.download.internal, article.pytorch-article .sphx-glr-signature { - display: block; -} diff --git a/0.11./_static/css/theme.css b/0.11./_static/css/theme.css deleted file mode 100644 index 42b57139336..00000000000 --- a/0.11./_static/css/theme.css +++ /dev/null @@ -1,12375 +0,0 @@ -@charset "UTF-8"; -/*! - * Bootstrap v4.0.0 (https://getbootstrap.com) - * Copyright 2011-2018 The Bootstrap Authors - * Copyright 2011-2018 Twitter, Inc. 
- * Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE) - */ -:root { - --blue: #007bff; - --indigo: #6610f2; - --purple: #6f42c1; - --pink: #e83e8c; - --red: #dc3545; - --orange: #fd7e14; - --yellow: #ffc107; - --green: #28a745; - --teal: #20c997; - --cyan: #17a2b8; - --white: #fff; - --gray: #6c757d; - --gray-dark: #343a40; - --primary: #007bff; - --secondary: #6c757d; - --success: #28a745; - --info: #17a2b8; - --warning: #ffc107; - --danger: #dc3545; - --light: #f8f9fa; - --dark: #343a40; - --breakpoint-xs: 0; - --breakpoint-sm: 576px; - --breakpoint-md: 768px; - --breakpoint-lg: 992px; - --breakpoint-xl: 1200px; - --font-family-sans-serif: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, "Helvetica Neue", Arial, sans-serif, "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol"; - --font-family-monospace: SFMono-Regular, Menlo, Monaco, Consolas, "Liberation Mono", "Courier New", monospace; -} - -*, -*::before, -*::after { - -webkit-box-sizing: border-box; - box-sizing: border-box; -} - -html { - font-family: sans-serif; - line-height: 1.15; - -webkit-text-size-adjust: 100%; - -ms-text-size-adjust: 100%; - -ms-overflow-style: scrollbar; - -webkit-tap-highlight-color: rgba(0, 0, 0, 0); -} - -@-ms-viewport { - width: device-width; -} -article, aside, dialog, figcaption, figure, footer, header, hgroup, main, nav, section { - display: block; -} - -body { - margin: 0; - font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, "Helvetica Neue", Arial, sans-serif, "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol"; - font-size: 1rem; - font-weight: 400; - line-height: 1.5; - color: #212529; - text-align: left; - background-color: #fff; -} - -[tabindex="-1"]:focus { - outline: 0 !important; -} - -hr { - -webkit-box-sizing: content-box; - box-sizing: content-box; - height: 0; - overflow: visible; -} - -h1, h2, h3, h4, h5, h6 { - margin-top: 0; - margin-bottom: 0.5rem; -} - -p { - margin-top: 0; - margin-bottom: 1rem; -} - -abbr[title], -abbr[data-original-title] { - text-decoration: underline; - -webkit-text-decoration: underline dotted; - text-decoration: underline dotted; - cursor: help; - border-bottom: 0; -} - -address { - margin-bottom: 1rem; - font-style: normal; - line-height: inherit; -} - -ol, -ul, -dl { - margin-top: 0; - margin-bottom: 1rem; -} - -ol ol, -ul ul, -ol ul, -ul ol { - margin-bottom: 0; -} - -dt { - font-weight: 700; -} - -dd { - margin-bottom: .5rem; - margin-left: 0; -} - -blockquote { - margin: 0 0 1rem; -} - -dfn { - font-style: italic; -} - -b, -strong { - font-weight: bolder; -} - -small { - font-size: 80%; -} - -sub, -sup { - position: relative; - font-size: 75%; - line-height: 0; - vertical-align: baseline; -} - -sub { - bottom: -.25em; -} - -sup { - top: -.5em; -} - -a { - color: #007bff; - text-decoration: none; - background-color: transparent; - -webkit-text-decoration-skip: objects; -} -a:hover { - color: #0056b3; - text-decoration: underline; -} - -a:not([href]):not([tabindex]) { - color: inherit; - text-decoration: none; -} -a:not([href]):not([tabindex]):hover, a:not([href]):not([tabindex]):focus { - color: inherit; - text-decoration: none; -} -a:not([href]):not([tabindex]):focus { - outline: 0; -} - -pre, -code, -kbd, -samp { - font-family: monospace, monospace; - font-size: 1em; -} - -pre { - margin-top: 0; - margin-bottom: 1rem; - overflow: auto; - -ms-overflow-style: scrollbar; -} - -figure { - margin: 0 0 1rem; -} - -img { - vertical-align: middle; - border-style: none; -} - -svg:not(:root) { - overflow: 
hidden; -} - -table { - border-collapse: collapse; -} - -caption { - padding-top: 0.75rem; - padding-bottom: 0.75rem; - color: #6c757d; - text-align: left; - caption-side: bottom; -} - -th { - text-align: inherit; -} - -label { - display: inline-block; - margin-bottom: .5rem; -} - -button { - border-radius: 0; -} - -button:focus { - outline: 1px dotted; - outline: 5px auto -webkit-focus-ring-color; -} - -input, -button, -select, -optgroup, -textarea { - margin: 0; - font-family: inherit; - font-size: inherit; - line-height: inherit; -} - -button, -input { - overflow: visible; -} - -button, -select { - text-transform: none; -} - -button, -html [type="button"], -[type="reset"], -[type="submit"] { - -webkit-appearance: button; -} - -button::-moz-focus-inner, -[type="button"]::-moz-focus-inner, -[type="reset"]::-moz-focus-inner, -[type="submit"]::-moz-focus-inner { - padding: 0; - border-style: none; -} - -input[type="radio"], -input[type="checkbox"] { - -webkit-box-sizing: border-box; - box-sizing: border-box; - padding: 0; -} - -input[type="date"], -input[type="time"], -input[type="datetime-local"], -input[type="month"] { - -webkit-appearance: listbox; -} - -textarea { - overflow: auto; - resize: vertical; -} - -fieldset { - min-width: 0; - padding: 0; - margin: 0; - border: 0; -} - -legend { - display: block; - width: 100%; - max-width: 100%; - padding: 0; - margin-bottom: .5rem; - font-size: 1.5rem; - line-height: inherit; - color: inherit; - white-space: normal; -} - -progress { - vertical-align: baseline; -} - -[type="number"]::-webkit-inner-spin-button, -[type="number"]::-webkit-outer-spin-button { - height: auto; -} - -[type="search"] { - outline-offset: -2px; - -webkit-appearance: none; -} - -[type="search"]::-webkit-search-cancel-button, -[type="search"]::-webkit-search-decoration { - -webkit-appearance: none; -} - -::-webkit-file-upload-button { - font: inherit; - -webkit-appearance: button; -} - -output { - display: inline-block; -} - -summary { - display: list-item; - cursor: pointer; -} - -template { - display: none; -} - -[hidden] { - display: none !important; -} - -h1, h2, h3, h4, h5, h6, -.h1, .h2, .h3, .h4, .h5, .h6 { - margin-bottom: 0.5rem; - font-family: inherit; - font-weight: 500; - line-height: 1.2; - color: inherit; -} - -h1, .h1 { - font-size: 2.5rem; -} - -h2, .h2 { - font-size: 2rem; -} - -h3, .h3 { - font-size: 1.75rem; -} - -h4, .h4 { - font-size: 1.5rem; -} - -h5, .h5 { - font-size: 1.25rem; -} - -h6, .h6 { - font-size: 1rem; -} - -.lead { - font-size: 1.25rem; - font-weight: 300; -} - -.display-1 { - font-size: 6rem; - font-weight: 300; - line-height: 1.2; -} - -.display-2 { - font-size: 5.5rem; - font-weight: 300; - line-height: 1.2; -} - -.display-3 { - font-size: 4.5rem; - font-weight: 300; - line-height: 1.2; -} - -.display-4 { - font-size: 3.5rem; - font-weight: 300; - line-height: 1.2; -} - -hr { - margin-top: 1rem; - margin-bottom: 1rem; - border: 0; - border-top: 1px solid rgba(0, 0, 0, 0.1); -} - -small, -.small { - font-size: 80%; - font-weight: 400; -} - -mark, -.mark { - padding: 0.2em; - background-color: #fcf8e3; -} - -.list-unstyled { - padding-left: 0; - list-style: none; -} - -.list-inline { - padding-left: 0; - list-style: none; -} - -.list-inline-item { - display: inline-block; -} -.list-inline-item:not(:last-child) { - margin-right: 0.5rem; -} - -.initialism { - font-size: 90%; - text-transform: uppercase; -} - -.blockquote { - margin-bottom: 1rem; - font-size: 1.25rem; -} - -.blockquote-footer { - display: block; - font-size: 80%; - color: 
#6c757d; -} -.blockquote-footer::before { - content: "\2014 \00A0"; -} - -.img-fluid { - max-width: 100%; - height: auto; -} - -.img-thumbnail { - padding: 0.25rem; - background-color: #fff; - border: 1px solid #dee2e6; - border-radius: 0.25rem; - max-width: 100%; - height: auto; -} - -.figure { - display: inline-block; -} - -.figure-img { - margin-bottom: 0.5rem; - line-height: 1; -} - -.figure-caption { - font-size: 90%; - color: #6c757d; -} - -code, -kbd, -pre, -samp { - font-family: SFMono-Regular, Menlo, Monaco, Consolas, "Liberation Mono", "Courier New", monospace; -} - -code { - font-size: 87.5%; - color: #e83e8c; - word-break: break-word; -} -a > code { - color: inherit; -} - -kbd { - padding: 0.2rem 0.4rem; - font-size: 87.5%; - color: #fff; - background-color: #212529; - border-radius: 0.2rem; -} -kbd kbd { - padding: 0; - font-size: 100%; - font-weight: 700; -} - -pre { - display: block; - font-size: 87.5%; - color: #212529; -} -pre code { - font-size: inherit; - color: inherit; - word-break: normal; -} - -.pre-scrollable { - max-height: 340px; - overflow-y: scroll; -} - -.container { - width: 100%; - padding-right: 15px; - padding-left: 15px; - margin-right: auto; - margin-left: auto; -} -@media (min-width: 576px) { - .container { - max-width: 540px; - } -} -@media (min-width: 768px) { - .container { - max-width: 720px; - } -} -@media (min-width: 992px) { - .container { - max-width: 960px; - } -} -@media (min-width: 1200px) { - .container { - max-width: 1140px; - } -} - -.container-fluid { - width: 100%; - padding-right: 15px; - padding-left: 15px; - margin-right: auto; - margin-left: auto; -} - -.row { - display: -webkit-box; - display: -ms-flexbox; - display: flex; - -ms-flex-wrap: wrap; - flex-wrap: wrap; - margin-right: -15px; - margin-left: -15px; -} - -.no-gutters { - margin-right: 0; - margin-left: 0; -} -.no-gutters > .col, -.no-gutters > [class*="col-"] { - padding-right: 0; - padding-left: 0; -} - -.col-1, .col-2, .col-3, .col-4, .col-5, .col-6, .col-7, .col-8, .col-9, .col-10, .col-11, .col-12, .col, -.col-auto, .col-sm-1, .col-sm-2, .col-sm-3, .col-sm-4, .col-sm-5, .col-sm-6, .col-sm-7, .col-sm-8, .col-sm-9, .col-sm-10, .col-sm-11, .col-sm-12, .col-sm, -.col-sm-auto, .col-md-1, .col-md-2, .col-md-3, .col-md-4, .col-md-5, .col-md-6, .col-md-7, .col-md-8, .col-md-9, .col-md-10, .col-md-11, .col-md-12, .col-md, -.col-md-auto, .col-lg-1, .col-lg-2, .col-lg-3, .col-lg-4, .col-lg-5, .col-lg-6, .col-lg-7, .col-lg-8, .col-lg-9, .col-lg-10, .col-lg-11, .col-lg-12, .col-lg, -.col-lg-auto, .col-xl-1, .col-xl-2, .col-xl-3, .col-xl-4, .col-xl-5, .col-xl-6, .col-xl-7, .col-xl-8, .col-xl-9, .col-xl-10, .col-xl-11, .col-xl-12, .col-xl, -.col-xl-auto { - position: relative; - width: 100%; - min-height: 1px; - padding-right: 15px; - padding-left: 15px; -} - -.col { - -ms-flex-preferred-size: 0; - flex-basis: 0; - -webkit-box-flex: 1; - -ms-flex-positive: 1; - flex-grow: 1; - max-width: 100%; -} - -.col-auto { - -webkit-box-flex: 0; - -ms-flex: 0 0 auto; - flex: 0 0 auto; - width: auto; - max-width: none; -} - -.col-1 { - -webkit-box-flex: 0; - -ms-flex: 0 0 8.3333333333%; - flex: 0 0 8.3333333333%; - max-width: 8.3333333333%; -} - -.col-2 { - -webkit-box-flex: 0; - -ms-flex: 0 0 16.6666666667%; - flex: 0 0 16.6666666667%; - max-width: 16.6666666667%; -} - -.col-3 { - -webkit-box-flex: 0; - -ms-flex: 0 0 25%; - flex: 0 0 25%; - max-width: 25%; -} - -.col-4 { - -webkit-box-flex: 0; - -ms-flex: 0 0 33.3333333333%; - flex: 0 0 33.3333333333%; - max-width: 33.3333333333%; -} - -.col-5 { - 
-webkit-box-flex: 0; - -ms-flex: 0 0 41.6666666667%; - flex: 0 0 41.6666666667%; - max-width: 41.6666666667%; -} - -.col-6 { - -webkit-box-flex: 0; - -ms-flex: 0 0 50%; - flex: 0 0 50%; - max-width: 50%; -} - -.col-7 { - -webkit-box-flex: 0; - -ms-flex: 0 0 58.3333333333%; - flex: 0 0 58.3333333333%; - max-width: 58.3333333333%; -} - -.col-8 { - -webkit-box-flex: 0; - -ms-flex: 0 0 66.6666666667%; - flex: 0 0 66.6666666667%; - max-width: 66.6666666667%; -} - -.col-9 { - -webkit-box-flex: 0; - -ms-flex: 0 0 75%; - flex: 0 0 75%; - max-width: 75%; -} - -.col-10 { - -webkit-box-flex: 0; - -ms-flex: 0 0 83.3333333333%; - flex: 0 0 83.3333333333%; - max-width: 83.3333333333%; -} - -.col-11 { - -webkit-box-flex: 0; - -ms-flex: 0 0 91.6666666667%; - flex: 0 0 91.6666666667%; - max-width: 91.6666666667%; -} - -.col-12 { - -webkit-box-flex: 0; - -ms-flex: 0 0 100%; - flex: 0 0 100%; - max-width: 100%; -} - -.order-first { - -webkit-box-ordinal-group: 0; - -ms-flex-order: -1; - order: -1; -} - -.order-last { - -webkit-box-ordinal-group: 14; - -ms-flex-order: 13; - order: 13; -} - -.order-0 { - -webkit-box-ordinal-group: 1; - -ms-flex-order: 0; - order: 0; -} - -.order-1 { - -webkit-box-ordinal-group: 2; - -ms-flex-order: 1; - order: 1; -} - -.order-2 { - -webkit-box-ordinal-group: 3; - -ms-flex-order: 2; - order: 2; -} - -.order-3 { - -webkit-box-ordinal-group: 4; - -ms-flex-order: 3; - order: 3; -} - -.order-4 { - -webkit-box-ordinal-group: 5; - -ms-flex-order: 4; - order: 4; -} - -.order-5 { - -webkit-box-ordinal-group: 6; - -ms-flex-order: 5; - order: 5; -} - -.order-6 { - -webkit-box-ordinal-group: 7; - -ms-flex-order: 6; - order: 6; -} - -.order-7 { - -webkit-box-ordinal-group: 8; - -ms-flex-order: 7; - order: 7; -} - -.order-8 { - -webkit-box-ordinal-group: 9; - -ms-flex-order: 8; - order: 8; -} - -.order-9 { - -webkit-box-ordinal-group: 10; - -ms-flex-order: 9; - order: 9; -} - -.order-10 { - -webkit-box-ordinal-group: 11; - -ms-flex-order: 10; - order: 10; -} - -.order-11 { - -webkit-box-ordinal-group: 12; - -ms-flex-order: 11; - order: 11; -} - -.order-12 { - -webkit-box-ordinal-group: 13; - -ms-flex-order: 12; - order: 12; -} - -.offset-1 { - margin-left: 8.3333333333%; -} - -.offset-2 { - margin-left: 16.6666666667%; -} - -.offset-3 { - margin-left: 25%; -} - -.offset-4 { - margin-left: 33.3333333333%; -} - -.offset-5 { - margin-left: 41.6666666667%; -} - -.offset-6 { - margin-left: 50%; -} - -.offset-7 { - margin-left: 58.3333333333%; -} - -.offset-8 { - margin-left: 66.6666666667%; -} - -.offset-9 { - margin-left: 75%; -} - -.offset-10 { - margin-left: 83.3333333333%; -} - -.offset-11 { - margin-left: 91.6666666667%; -} - -@media (min-width: 576px) { - .col-sm { - -ms-flex-preferred-size: 0; - flex-basis: 0; - -webkit-box-flex: 1; - -ms-flex-positive: 1; - flex-grow: 1; - max-width: 100%; - } - - .col-sm-auto { - -webkit-box-flex: 0; - -ms-flex: 0 0 auto; - flex: 0 0 auto; - width: auto; - max-width: none; - } - - .col-sm-1 { - -webkit-box-flex: 0; - -ms-flex: 0 0 8.3333333333%; - flex: 0 0 8.3333333333%; - max-width: 8.3333333333%; - } - - .col-sm-2 { - -webkit-box-flex: 0; - -ms-flex: 0 0 16.6666666667%; - flex: 0 0 16.6666666667%; - max-width: 16.6666666667%; - } - - .col-sm-3 { - -webkit-box-flex: 0; - -ms-flex: 0 0 25%; - flex: 0 0 25%; - max-width: 25%; - } - - .col-sm-4 { - -webkit-box-flex: 0; - -ms-flex: 0 0 33.3333333333%; - flex: 0 0 33.3333333333%; - max-width: 33.3333333333%; - } - - .col-sm-5 { - -webkit-box-flex: 0; - -ms-flex: 0 0 41.6666666667%; - flex: 0 0 
41.6666666667%; - max-width: 41.6666666667%; - } - - .col-sm-6 { - -webkit-box-flex: 0; - -ms-flex: 0 0 50%; - flex: 0 0 50%; - max-width: 50%; - } - - .col-sm-7 { - -webkit-box-flex: 0; - -ms-flex: 0 0 58.3333333333%; - flex: 0 0 58.3333333333%; - max-width: 58.3333333333%; - } - - .col-sm-8 { - -webkit-box-flex: 0; - -ms-flex: 0 0 66.6666666667%; - flex: 0 0 66.6666666667%; - max-width: 66.6666666667%; - } - - .col-sm-9 { - -webkit-box-flex: 0; - -ms-flex: 0 0 75%; - flex: 0 0 75%; - max-width: 75%; - } - - .col-sm-10 { - -webkit-box-flex: 0; - -ms-flex: 0 0 83.3333333333%; - flex: 0 0 83.3333333333%; - max-width: 83.3333333333%; - } - - .col-sm-11 { - -webkit-box-flex: 0; - -ms-flex: 0 0 91.6666666667%; - flex: 0 0 91.6666666667%; - max-width: 91.6666666667%; - } - - .col-sm-12 { - -webkit-box-flex: 0; - -ms-flex: 0 0 100%; - flex: 0 0 100%; - max-width: 100%; - } - - .order-sm-first { - -webkit-box-ordinal-group: 0; - -ms-flex-order: -1; - order: -1; - } - - .order-sm-last { - -webkit-box-ordinal-group: 14; - -ms-flex-order: 13; - order: 13; - } - - .order-sm-0 { - -webkit-box-ordinal-group: 1; - -ms-flex-order: 0; - order: 0; - } - - .order-sm-1 { - -webkit-box-ordinal-group: 2; - -ms-flex-order: 1; - order: 1; - } - - .order-sm-2 { - -webkit-box-ordinal-group: 3; - -ms-flex-order: 2; - order: 2; - } - - .order-sm-3 { - -webkit-box-ordinal-group: 4; - -ms-flex-order: 3; - order: 3; - } - - .order-sm-4 { - -webkit-box-ordinal-group: 5; - -ms-flex-order: 4; - order: 4; - } - - .order-sm-5 { - -webkit-box-ordinal-group: 6; - -ms-flex-order: 5; - order: 5; - } - - .order-sm-6 { - -webkit-box-ordinal-group: 7; - -ms-flex-order: 6; - order: 6; - } - - .order-sm-7 { - -webkit-box-ordinal-group: 8; - -ms-flex-order: 7; - order: 7; - } - - .order-sm-8 { - -webkit-box-ordinal-group: 9; - -ms-flex-order: 8; - order: 8; - } - - .order-sm-9 { - -webkit-box-ordinal-group: 10; - -ms-flex-order: 9; - order: 9; - } - - .order-sm-10 { - -webkit-box-ordinal-group: 11; - -ms-flex-order: 10; - order: 10; - } - - .order-sm-11 { - -webkit-box-ordinal-group: 12; - -ms-flex-order: 11; - order: 11; - } - - .order-sm-12 { - -webkit-box-ordinal-group: 13; - -ms-flex-order: 12; - order: 12; - } - - .offset-sm-0 { - margin-left: 0; - } - - .offset-sm-1 { - margin-left: 8.3333333333%; - } - - .offset-sm-2 { - margin-left: 16.6666666667%; - } - - .offset-sm-3 { - margin-left: 25%; - } - - .offset-sm-4 { - margin-left: 33.3333333333%; - } - - .offset-sm-5 { - margin-left: 41.6666666667%; - } - - .offset-sm-6 { - margin-left: 50%; - } - - .offset-sm-7 { - margin-left: 58.3333333333%; - } - - .offset-sm-8 { - margin-left: 66.6666666667%; - } - - .offset-sm-9 { - margin-left: 75%; - } - - .offset-sm-10 { - margin-left: 83.3333333333%; - } - - .offset-sm-11 { - margin-left: 91.6666666667%; - } -} -@media (min-width: 768px) { - .col-md { - -ms-flex-preferred-size: 0; - flex-basis: 0; - -webkit-box-flex: 1; - -ms-flex-positive: 1; - flex-grow: 1; - max-width: 100%; - } - - .col-md-auto { - -webkit-box-flex: 0; - -ms-flex: 0 0 auto; - flex: 0 0 auto; - width: auto; - max-width: none; - } - - .col-md-1 { - -webkit-box-flex: 0; - -ms-flex: 0 0 8.3333333333%; - flex: 0 0 8.3333333333%; - max-width: 8.3333333333%; - } - - .col-md-2 { - -webkit-box-flex: 0; - -ms-flex: 0 0 16.6666666667%; - flex: 0 0 16.6666666667%; - max-width: 16.6666666667%; - } - - .col-md-3 { - -webkit-box-flex: 0; - -ms-flex: 0 0 25%; - flex: 0 0 25%; - max-width: 25%; - } - - .col-md-4 { - -webkit-box-flex: 0; - -ms-flex: 0 0 33.3333333333%; - flex: 0 0 
33.3333333333%; - max-width: 33.3333333333%; - } - - .col-md-5 { - -webkit-box-flex: 0; - -ms-flex: 0 0 41.6666666667%; - flex: 0 0 41.6666666667%; - max-width: 41.6666666667%; - } - - .col-md-6 { - -webkit-box-flex: 0; - -ms-flex: 0 0 50%; - flex: 0 0 50%; - max-width: 50%; - } - - .col-md-7 { - -webkit-box-flex: 0; - -ms-flex: 0 0 58.3333333333%; - flex: 0 0 58.3333333333%; - max-width: 58.3333333333%; - } - - .col-md-8 { - -webkit-box-flex: 0; - -ms-flex: 0 0 66.6666666667%; - flex: 0 0 66.6666666667%; - max-width: 66.6666666667%; - } - - .col-md-9 { - -webkit-box-flex: 0; - -ms-flex: 0 0 75%; - flex: 0 0 75%; - max-width: 75%; - } - - .col-md-10 { - -webkit-box-flex: 0; - -ms-flex: 0 0 83.3333333333%; - flex: 0 0 83.3333333333%; - max-width: 83.3333333333%; - } - - .col-md-11 { - -webkit-box-flex: 0; - -ms-flex: 0 0 91.6666666667%; - flex: 0 0 91.6666666667%; - max-width: 91.6666666667%; - } - - .col-md-12 { - -webkit-box-flex: 0; - -ms-flex: 0 0 100%; - flex: 0 0 100%; - max-width: 100%; - } - - .order-md-first { - -webkit-box-ordinal-group: 0; - -ms-flex-order: -1; - order: -1; - } - - .order-md-last { - -webkit-box-ordinal-group: 14; - -ms-flex-order: 13; - order: 13; - } - - .order-md-0 { - -webkit-box-ordinal-group: 1; - -ms-flex-order: 0; - order: 0; - } - - .order-md-1 { - -webkit-box-ordinal-group: 2; - -ms-flex-order: 1; - order: 1; - } - - .order-md-2 { - -webkit-box-ordinal-group: 3; - -ms-flex-order: 2; - order: 2; - } - - .order-md-3 { - -webkit-box-ordinal-group: 4; - -ms-flex-order: 3; - order: 3; - } - - .order-md-4 { - -webkit-box-ordinal-group: 5; - -ms-flex-order: 4; - order: 4; - } - - .order-md-5 { - -webkit-box-ordinal-group: 6; - -ms-flex-order: 5; - order: 5; - } - - .order-md-6 { - -webkit-box-ordinal-group: 7; - -ms-flex-order: 6; - order: 6; - } - - .order-md-7 { - -webkit-box-ordinal-group: 8; - -ms-flex-order: 7; - order: 7; - } - - .order-md-8 { - -webkit-box-ordinal-group: 9; - -ms-flex-order: 8; - order: 8; - } - - .order-md-9 { - -webkit-box-ordinal-group: 10; - -ms-flex-order: 9; - order: 9; - } - - .order-md-10 { - -webkit-box-ordinal-group: 11; - -ms-flex-order: 10; - order: 10; - } - - .order-md-11 { - -webkit-box-ordinal-group: 12; - -ms-flex-order: 11; - order: 11; - } - - .order-md-12 { - -webkit-box-ordinal-group: 13; - -ms-flex-order: 12; - order: 12; - } - - .offset-md-0 { - margin-left: 0; - } - - .offset-md-1 { - margin-left: 8.3333333333%; - } - - .offset-md-2 { - margin-left: 16.6666666667%; - } - - .offset-md-3 { - margin-left: 25%; - } - - .offset-md-4 { - margin-left: 33.3333333333%; - } - - .offset-md-5 { - margin-left: 41.6666666667%; - } - - .offset-md-6 { - margin-left: 50%; - } - - .offset-md-7 { - margin-left: 58.3333333333%; - } - - .offset-md-8 { - margin-left: 66.6666666667%; - } - - .offset-md-9 { - margin-left: 75%; - } - - .offset-md-10 { - margin-left: 83.3333333333%; - } - - .offset-md-11 { - margin-left: 91.6666666667%; - } -} -@media (min-width: 992px) { - .col-lg { - -ms-flex-preferred-size: 0; - flex-basis: 0; - -webkit-box-flex: 1; - -ms-flex-positive: 1; - flex-grow: 1; - max-width: 100%; - } - - .col-lg-auto { - -webkit-box-flex: 0; - -ms-flex: 0 0 auto; - flex: 0 0 auto; - width: auto; - max-width: none; - } - - .col-lg-1 { - -webkit-box-flex: 0; - -ms-flex: 0 0 8.3333333333%; - flex: 0 0 8.3333333333%; - max-width: 8.3333333333%; - } - - .col-lg-2 { - -webkit-box-flex: 0; - -ms-flex: 0 0 16.6666666667%; - flex: 0 0 16.6666666667%; - max-width: 16.6666666667%; - } - - .col-lg-3 { - -webkit-box-flex: 0; - -ms-flex: 
0 0 25%; - flex: 0 0 25%; - max-width: 25%; - } - - .col-lg-4 { - -webkit-box-flex: 0; - -ms-flex: 0 0 33.3333333333%; - flex: 0 0 33.3333333333%; - max-width: 33.3333333333%; - } - - .col-lg-5 { - -webkit-box-flex: 0; - -ms-flex: 0 0 41.6666666667%; - flex: 0 0 41.6666666667%; - max-width: 41.6666666667%; - } - - .col-lg-6 { - -webkit-box-flex: 0; - -ms-flex: 0 0 50%; - flex: 0 0 50%; - max-width: 50%; - } - - .col-lg-7 { - -webkit-box-flex: 0; - -ms-flex: 0 0 58.3333333333%; - flex: 0 0 58.3333333333%; - max-width: 58.3333333333%; - } - - .col-lg-8 { - -webkit-box-flex: 0; - -ms-flex: 0 0 66.6666666667%; - flex: 0 0 66.6666666667%; - max-width: 66.6666666667%; - } - - .col-lg-9 { - -webkit-box-flex: 0; - -ms-flex: 0 0 75%; - flex: 0 0 75%; - max-width: 75%; - } - - .col-lg-10 { - -webkit-box-flex: 0; - -ms-flex: 0 0 83.3333333333%; - flex: 0 0 83.3333333333%; - max-width: 83.3333333333%; - } - - .col-lg-11 { - -webkit-box-flex: 0; - -ms-flex: 0 0 91.6666666667%; - flex: 0 0 91.6666666667%; - max-width: 91.6666666667%; - } - - .col-lg-12 { - -webkit-box-flex: 0; - -ms-flex: 0 0 100%; - flex: 0 0 100%; - max-width: 100%; - } - - .order-lg-first { - -webkit-box-ordinal-group: 0; - -ms-flex-order: -1; - order: -1; - } - - .order-lg-last { - -webkit-box-ordinal-group: 14; - -ms-flex-order: 13; - order: 13; - } - - .order-lg-0 { - -webkit-box-ordinal-group: 1; - -ms-flex-order: 0; - order: 0; - } - - .order-lg-1 { - -webkit-box-ordinal-group: 2; - -ms-flex-order: 1; - order: 1; - } - - .order-lg-2 { - -webkit-box-ordinal-group: 3; - -ms-flex-order: 2; - order: 2; - } - - .order-lg-3 { - -webkit-box-ordinal-group: 4; - -ms-flex-order: 3; - order: 3; - } - - .order-lg-4 { - -webkit-box-ordinal-group: 5; - -ms-flex-order: 4; - order: 4; - } - - .order-lg-5 { - -webkit-box-ordinal-group: 6; - -ms-flex-order: 5; - order: 5; - } - - .order-lg-6 { - -webkit-box-ordinal-group: 7; - -ms-flex-order: 6; - order: 6; - } - - .order-lg-7 { - -webkit-box-ordinal-group: 8; - -ms-flex-order: 7; - order: 7; - } - - .order-lg-8 { - -webkit-box-ordinal-group: 9; - -ms-flex-order: 8; - order: 8; - } - - .order-lg-9 { - -webkit-box-ordinal-group: 10; - -ms-flex-order: 9; - order: 9; - } - - .order-lg-10 { - -webkit-box-ordinal-group: 11; - -ms-flex-order: 10; - order: 10; - } - - .order-lg-11 { - -webkit-box-ordinal-group: 12; - -ms-flex-order: 11; - order: 11; - } - - .order-lg-12 { - -webkit-box-ordinal-group: 13; - -ms-flex-order: 12; - order: 12; - } - - .offset-lg-0 { - margin-left: 0; - } - - .offset-lg-1 { - margin-left: 8.3333333333%; - } - - .offset-lg-2 { - margin-left: 16.6666666667%; - } - - .offset-lg-3 { - margin-left: 25%; - } - - .offset-lg-4 { - margin-left: 33.3333333333%; - } - - .offset-lg-5 { - margin-left: 41.6666666667%; - } - - .offset-lg-6 { - margin-left: 50%; - } - - .offset-lg-7 { - margin-left: 58.3333333333%; - } - - .offset-lg-8 { - margin-left: 66.6666666667%; - } - - .offset-lg-9 { - margin-left: 75%; - } - - .offset-lg-10 { - margin-left: 83.3333333333%; - } - - .offset-lg-11 { - margin-left: 91.6666666667%; - } -} -@media (min-width: 1200px) { - .col-xl { - -ms-flex-preferred-size: 0; - flex-basis: 0; - -webkit-box-flex: 1; - -ms-flex-positive: 1; - flex-grow: 1; - max-width: 100%; - } - - .col-xl-auto { - -webkit-box-flex: 0; - -ms-flex: 0 0 auto; - flex: 0 0 auto; - width: auto; - max-width: none; - } - - .col-xl-1 { - -webkit-box-flex: 0; - -ms-flex: 0 0 8.3333333333%; - flex: 0 0 8.3333333333%; - max-width: 8.3333333333%; - } - - .col-xl-2 { - -webkit-box-flex: 0; - -ms-flex: 
0 0 16.6666666667%; - flex: 0 0 16.6666666667%; - max-width: 16.6666666667%; - } - - .col-xl-3 { - -webkit-box-flex: 0; - -ms-flex: 0 0 25%; - flex: 0 0 25%; - max-width: 25%; - } - - .col-xl-4 { - -webkit-box-flex: 0; - -ms-flex: 0 0 33.3333333333%; - flex: 0 0 33.3333333333%; - max-width: 33.3333333333%; - } - - .col-xl-5 { - -webkit-box-flex: 0; - -ms-flex: 0 0 41.6666666667%; - flex: 0 0 41.6666666667%; - max-width: 41.6666666667%; - } - - .col-xl-6 { - -webkit-box-flex: 0; - -ms-flex: 0 0 50%; - flex: 0 0 50%; - max-width: 50%; - } - - .col-xl-7 { - -webkit-box-flex: 0; - -ms-flex: 0 0 58.3333333333%; - flex: 0 0 58.3333333333%; - max-width: 58.3333333333%; - } - - .col-xl-8 { - -webkit-box-flex: 0; - -ms-flex: 0 0 66.6666666667%; - flex: 0 0 66.6666666667%; - max-width: 66.6666666667%; - } - - .col-xl-9 { - -webkit-box-flex: 0; - -ms-flex: 0 0 75%; - flex: 0 0 75%; - max-width: 75%; - } - - .col-xl-10 { - -webkit-box-flex: 0; - -ms-flex: 0 0 83.3333333333%; - flex: 0 0 83.3333333333%; - max-width: 83.3333333333%; - } - - .col-xl-11 { - -webkit-box-flex: 0; - -ms-flex: 0 0 91.6666666667%; - flex: 0 0 91.6666666667%; - max-width: 91.6666666667%; - } - - .col-xl-12 { - -webkit-box-flex: 0; - -ms-flex: 0 0 100%; - flex: 0 0 100%; - max-width: 100%; - } - - .order-xl-first { - -webkit-box-ordinal-group: 0; - -ms-flex-order: -1; - order: -1; - } - - .order-xl-last { - -webkit-box-ordinal-group: 14; - -ms-flex-order: 13; - order: 13; - } - - .order-xl-0 { - -webkit-box-ordinal-group: 1; - -ms-flex-order: 0; - order: 0; - } - - .order-xl-1 { - -webkit-box-ordinal-group: 2; - -ms-flex-order: 1; - order: 1; - } - - .order-xl-2 { - -webkit-box-ordinal-group: 3; - -ms-flex-order: 2; - order: 2; - } - - .order-xl-3 { - -webkit-box-ordinal-group: 4; - -ms-flex-order: 3; - order: 3; - } - - .order-xl-4 { - -webkit-box-ordinal-group: 5; - -ms-flex-order: 4; - order: 4; - } - - .order-xl-5 { - -webkit-box-ordinal-group: 6; - -ms-flex-order: 5; - order: 5; - } - - .order-xl-6 { - -webkit-box-ordinal-group: 7; - -ms-flex-order: 6; - order: 6; - } - - .order-xl-7 { - -webkit-box-ordinal-group: 8; - -ms-flex-order: 7; - order: 7; - } - - .order-xl-8 { - -webkit-box-ordinal-group: 9; - -ms-flex-order: 8; - order: 8; - } - - .order-xl-9 { - -webkit-box-ordinal-group: 10; - -ms-flex-order: 9; - order: 9; - } - - .order-xl-10 { - -webkit-box-ordinal-group: 11; - -ms-flex-order: 10; - order: 10; - } - - .order-xl-11 { - -webkit-box-ordinal-group: 12; - -ms-flex-order: 11; - order: 11; - } - - .order-xl-12 { - -webkit-box-ordinal-group: 13; - -ms-flex-order: 12; - order: 12; - } - - .offset-xl-0 { - margin-left: 0; - } - - .offset-xl-1 { - margin-left: 8.3333333333%; - } - - .offset-xl-2 { - margin-left: 16.6666666667%; - } - - .offset-xl-3 { - margin-left: 25%; - } - - .offset-xl-4 { - margin-left: 33.3333333333%; - } - - .offset-xl-5 { - margin-left: 41.6666666667%; - } - - .offset-xl-6 { - margin-left: 50%; - } - - .offset-xl-7 { - margin-left: 58.3333333333%; - } - - .offset-xl-8 { - margin-left: 66.6666666667%; - } - - .offset-xl-9 { - margin-left: 75%; - } - - .offset-xl-10 { - margin-left: 83.3333333333%; - } - - .offset-xl-11 { - margin-left: 91.6666666667%; - } -} -.table { - width: 100%; - max-width: 100%; - margin-bottom: 1rem; - background-color: transparent; -} -.table th, -.table td { - padding: 0.75rem; - vertical-align: top; - border-top: 1px solid #dee2e6; -} -.table thead th { - vertical-align: bottom; - border-bottom: 2px solid #dee2e6; -} -.table tbody + tbody { - border-top: 2px solid 
#0056b3; - text-decoration: none; - background-color: #e9ecef; - border-color: #dee2e6; -} -.page-link:focus { - z-index: 2; - outline: 0; - -webkit-box-shadow: 0 0 0 0.2rem rgba(0, 123, 255, 0.25); - box-shadow: 0 0 0 0.2rem rgba(0, 123, 255, 0.25); -} -.page-link:not(:disabled):not(.disabled) { - cursor: pointer; -} - -.page-item:first-child .page-link { - margin-left: 0; - border-top-left-radius: 0.25rem; - border-bottom-left-radius: 0.25rem; -} -.page-item:last-child .page-link { - border-top-right-radius: 0.25rem; - border-bottom-right-radius: 0.25rem; -} -.page-item.active .page-link { - z-index: 1; - color: #fff; - background-color: #007bff; - border-color: #007bff; -} -.page-item.disabled .page-link { - color: #6c757d; - pointer-events: none; - cursor: auto; - background-color: #fff; - border-color: #dee2e6; -} - -.pagination-lg .page-link { - padding: 0.75rem 1.5rem; - font-size: 1.25rem; - line-height: 1.5; -} -.pagination-lg .page-item:first-child .page-link { - border-top-left-radius: 0.3rem; - border-bottom-left-radius: 0.3rem; -} -.pagination-lg .page-item:last-child .page-link { - border-top-right-radius: 0.3rem; - border-bottom-right-radius: 0.3rem; -} - -.pagination-sm .page-link { - padding: 0.25rem 0.5rem; - font-size: 0.875rem; - line-height: 1.5; -} -.pagination-sm .page-item:first-child .page-link { - border-top-left-radius: 0.2rem; - border-bottom-left-radius: 0.2rem; -} -.pagination-sm .page-item:last-child .page-link { - border-top-right-radius: 0.2rem; - border-bottom-right-radius: 0.2rem; -} - -.badge { - display: inline-block; - padding: 0.25em 0.4em; - font-size: 75%; - font-weight: 700; - line-height: 1; - text-align: center; - white-space: nowrap; - vertical-align: baseline; - border-radius: 0.25rem; -} -.badge:empty { - display: none; -} - -.btn .badge { - position: relative; - top: -1px; -} - -.badge-pill { - padding-right: 0.6em; - padding-left: 0.6em; - border-radius: 10rem; -} - -.badge-primary { - color: #fff; - background-color: #007bff; -} -.badge-primary[href]:hover, .badge-primary[href]:focus { - color: #fff; - text-decoration: none; - background-color: #0062cc; -} - -.badge-secondary { - color: #fff; - background-color: #6c757d; -} -.badge-secondary[href]:hover, .badge-secondary[href]:focus { - color: #fff; - text-decoration: none; - background-color: #545b62; -} - -.badge-success { - color: #fff; - background-color: #28a745; -} -.badge-success[href]:hover, .badge-success[href]:focus { - color: #fff; - text-decoration: none; - background-color: #1e7e34; -} - -.badge-info { - color: #fff; - background-color: #17a2b8; -} -.badge-info[href]:hover, .badge-info[href]:focus { - color: #fff; - text-decoration: none; - background-color: #117a8b; -} - -.badge-warning { - color: #212529; - background-color: #ffc107; -} -.badge-warning[href]:hover, .badge-warning[href]:focus { - color: #212529; - text-decoration: none; - background-color: #d39e00; -} - -.badge-danger { - color: #fff; - background-color: #dc3545; -} -.badge-danger[href]:hover, .badge-danger[href]:focus { - color: #fff; - text-decoration: none; - background-color: #bd2130; -} - -.badge-light { - color: #212529; - background-color: #f8f9fa; -} -.badge-light[href]:hover, .badge-light[href]:focus { - color: #212529; - text-decoration: none; - background-color: #dae0e5; -} - -.badge-dark { - color: #fff; - background-color: #343a40; -} -.badge-dark[href]:hover, .badge-dark[href]:focus { - color: #fff; - text-decoration: none; - background-color: #1d2124; -} - -.jumbotron { - padding: 2rem 1rem; - 
margin-bottom: 2rem; - background-color: #e9ecef; - border-radius: 0.3rem; -} -@media (min-width: 576px) { - .jumbotron { - padding: 4rem 2rem; - } -} - -.jumbotron-fluid { - padding-right: 0; - padding-left: 0; - border-radius: 0; -} - -.alert { - position: relative; - padding: 0.75rem 1.25rem; - margin-bottom: 1rem; - border: 1px solid transparent; - border-radius: 0.25rem; -} - -.alert-heading { - color: inherit; -} - -.alert-link { - font-weight: 700; -} - -.alert-dismissible { - padding-right: 4rem; -} -.alert-dismissible .close { - position: absolute; - top: 0; - right: 0; - padding: 0.75rem 1.25rem; - color: inherit; -} - -.alert-primary { - color: #004085; - background-color: #cce5ff; - border-color: #b8daff; -} -.alert-primary hr { - border-top-color: #9fcdff; -} -.alert-primary .alert-link { - color: #002752; -} - -.alert-secondary { - color: #383d41; - background-color: #e2e3e5; - border-color: #d6d8db; -} -.alert-secondary hr { - border-top-color: #c8cbcf; -} -.alert-secondary .alert-link { - color: #202326; -} - -.alert-success { - color: #155724; - background-color: #d4edda; - border-color: #c3e6cb; -} -.alert-success hr { - border-top-color: #b1dfbb; -} -.alert-success .alert-link { - color: #0b2e13; -} - -.alert-info { - color: #0c5460; - background-color: #d1ecf1; - border-color: #bee5eb; -} -.alert-info hr { - border-top-color: #abdde5; -} -.alert-info .alert-link { - color: #062c33; -} - -.alert-warning { - color: #856404; - background-color: #fff3cd; - border-color: #ffeeba; -} -.alert-warning hr { - border-top-color: #ffe8a1; -} -.alert-warning .alert-link { - color: #533f03; -} - -.alert-danger { - color: #721c24; - background-color: #f8d7da; - border-color: #f5c6cb; -} -.alert-danger hr { - border-top-color: #f1b0b7; -} -.alert-danger .alert-link { - color: #491217; -} - -.alert-light { - color: #818182; - background-color: #fefefe; - border-color: #fdfdfe; -} -.alert-light hr { - border-top-color: #ececf6; -} -.alert-light .alert-link { - color: #686868; -} - -.alert-dark { - color: #1b1e21; - background-color: #d6d8d9; - border-color: #c6c8ca; -} -.alert-dark hr { - border-top-color: #b9bbbe; -} -.alert-dark .alert-link { - color: #040505; -} - -@-webkit-keyframes progress-bar-stripes { - from { - background-position: 1rem 0; - } - to { - background-position: 0 0; - } -} - -@keyframes progress-bar-stripes { - from { - background-position: 1rem 0; - } - to { - background-position: 0 0; - } -} -.progress { - display: -webkit-box; - display: -ms-flexbox; - display: flex; - height: 1rem; - overflow: hidden; - font-size: 0.75rem; - background-color: #e9ecef; - border-radius: 0.25rem; -} - -.progress-bar { - display: -webkit-box; - display: -ms-flexbox; - display: flex; - -webkit-box-orient: vertical; - -webkit-box-direction: normal; - -ms-flex-direction: column; - flex-direction: column; - -webkit-box-pack: center; - -ms-flex-pack: center; - justify-content: center; - color: #fff; - text-align: center; - background-color: #007bff; - -webkit-transition: width 0.6s ease; - transition: width 0.6s ease; -} - -.progress-bar-striped { - background-image: linear-gradient(45deg, rgba(255, 255, 255, 0.15) 25%, transparent 25%, transparent 50%, rgba(255, 255, 255, 0.15) 50%, rgba(255, 255, 255, 0.15) 75%, transparent 75%, transparent); - background-size: 1rem 1rem; -} - -.progress-bar-animated { - -webkit-animation: progress-bar-stripes 1s linear infinite; - animation: progress-bar-stripes 1s linear infinite; -} - -.media { - display: -webkit-box; - display: -ms-flexbox; - 
display: flex; - -webkit-box-align: start; - -ms-flex-align: start; - align-items: flex-start; -} - -.media-body { - -webkit-box-flex: 1; - -ms-flex: 1; - flex: 1; -} - -.list-group { - display: -webkit-box; - display: -ms-flexbox; - display: flex; - -webkit-box-orient: vertical; - -webkit-box-direction: normal; - -ms-flex-direction: column; - flex-direction: column; - padding-left: 0; - margin-bottom: 0; -} - -.list-group-item-action { - width: 100%; - color: #495057; - text-align: inherit; -} -.list-group-item-action:hover, .list-group-item-action:focus { - color: #495057; - text-decoration: none; - background-color: #f8f9fa; -} -.list-group-item-action:active { - color: #212529; - background-color: #e9ecef; -} - -.list-group-item { - position: relative; - display: block; - padding: 0.75rem 1.25rem; - margin-bottom: -1px; - background-color: #fff; - border: 1px solid rgba(0, 0, 0, 0.125); -} -.list-group-item:first-child { - border-top-left-radius: 0.25rem; - border-top-right-radius: 0.25rem; -} -.list-group-item:last-child { - margin-bottom: 0; - border-bottom-right-radius: 0.25rem; - border-bottom-left-radius: 0.25rem; -} -.list-group-item:hover, .list-group-item:focus { - z-index: 1; - text-decoration: none; -} -.list-group-item.disabled, .list-group-item:disabled { - color: #6c757d; - background-color: #fff; -} -.list-group-item.active { - z-index: 2; - color: #fff; - background-color: #007bff; - border-color: #007bff; -} - -.list-group-flush .list-group-item { - border-right: 0; - border-left: 0; - border-radius: 0; -} -.list-group-flush:first-child .list-group-item:first-child { - border-top: 0; -} -.list-group-flush:last-child .list-group-item:last-child { - border-bottom: 0; -} - -.list-group-item-primary { - color: #004085; - background-color: #b8daff; -} -.list-group-item-primary.list-group-item-action:hover, .list-group-item-primary.list-group-item-action:focus { - color: #004085; - background-color: #9fcdff; -} -.list-group-item-primary.list-group-item-action.active { - color: #fff; - background-color: #004085; - border-color: #004085; -} - -.list-group-item-secondary { - color: #383d41; - background-color: #d6d8db; -} -.list-group-item-secondary.list-group-item-action:hover, .list-group-item-secondary.list-group-item-action:focus { - color: #383d41; - background-color: #c8cbcf; -} -.list-group-item-secondary.list-group-item-action.active { - color: #fff; - background-color: #383d41; - border-color: #383d41; -} - -.list-group-item-success { - color: #155724; - background-color: #c3e6cb; -} -.list-group-item-success.list-group-item-action:hover, .list-group-item-success.list-group-item-action:focus { - color: #155724; - background-color: #b1dfbb; -} -.list-group-item-success.list-group-item-action.active { - color: #fff; - background-color: #155724; - border-color: #155724; -} - -.list-group-item-info { - color: #0c5460; - background-color: #bee5eb; -} -.list-group-item-info.list-group-item-action:hover, .list-group-item-info.list-group-item-action:focus { - color: #0c5460; - background-color: #abdde5; -} -.list-group-item-info.list-group-item-action.active { - color: #fff; - background-color: #0c5460; - border-color: #0c5460; -} - -.list-group-item-warning { - color: #856404; - background-color: #ffeeba; -} -.list-group-item-warning.list-group-item-action:hover, .list-group-item-warning.list-group-item-action:focus { - color: #856404; - background-color: #ffe8a1; -} -.list-group-item-warning.list-group-item-action.active { - color: #fff; - background-color: #856404; - 
border-color: #856404; -} - -.list-group-item-danger { - color: #721c24; - background-color: #f5c6cb; -} -.list-group-item-danger.list-group-item-action:hover, .list-group-item-danger.list-group-item-action:focus { - color: #721c24; - background-color: #f1b0b7; -} -.list-group-item-danger.list-group-item-action.active { - color: #fff; - background-color: #721c24; - border-color: #721c24; -} - -.list-group-item-light { - color: #818182; - background-color: #fdfdfe; -} -.list-group-item-light.list-group-item-action:hover, .list-group-item-light.list-group-item-action:focus { - color: #818182; - background-color: #ececf6; -} -.list-group-item-light.list-group-item-action.active { - color: #fff; - background-color: #818182; - border-color: #818182; -} - -.list-group-item-dark { - color: #1b1e21; - background-color: #c6c8ca; -} -.list-group-item-dark.list-group-item-action:hover, .list-group-item-dark.list-group-item-action:focus { - color: #1b1e21; - background-color: #b9bbbe; -} -.list-group-item-dark.list-group-item-action.active { - color: #fff; - background-color: #1b1e21; - border-color: #1b1e21; -} - -.close { - float: right; - font-size: 1.5rem; - font-weight: 700; - line-height: 1; - color: #000; - text-shadow: 0 1px 0 #fff; - opacity: .5; -} -.close:hover, .close:focus { - color: #000; - text-decoration: none; - opacity: .75; -} -.close:not(:disabled):not(.disabled) { - cursor: pointer; -} - -button.close { - padding: 0; - background-color: transparent; - border: 0; - -webkit-appearance: none; -} - -.modal-open { - overflow: hidden; -} - -.modal { - position: fixed; - top: 0; - right: 0; - bottom: 0; - left: 0; - z-index: 1050; - display: none; - overflow: hidden; - outline: 0; -} -.modal-open .modal { - overflow-x: hidden; - overflow-y: auto; -} - -.modal-dialog { - position: relative; - width: auto; - margin: 0.5rem; - pointer-events: none; -} -.modal.fade .modal-dialog { - -webkit-transition: -webkit-transform 0.3s ease-out; - transition: -webkit-transform 0.3s ease-out; - transition: transform 0.3s ease-out; - transition: transform 0.3s ease-out, -webkit-transform 0.3s ease-out; - -webkit-transform: translate(0, -25%); - transform: translate(0, -25%); -} -.modal.show .modal-dialog { - -webkit-transform: translate(0, 0); - transform: translate(0, 0); -} - -.modal-dialog-centered { - display: -webkit-box; - display: -ms-flexbox; - display: flex; - -webkit-box-align: center; - -ms-flex-align: center; - align-items: center; - min-height: calc(100% - (0.5rem * 2)); -} - -.modal-content { - position: relative; - display: -webkit-box; - display: -ms-flexbox; - display: flex; - -webkit-box-orient: vertical; - -webkit-box-direction: normal; - -ms-flex-direction: column; - flex-direction: column; - width: 100%; - pointer-events: auto; - background-color: #fff; - background-clip: padding-box; - border: 1px solid rgba(0, 0, 0, 0.2); - border-radius: 0.3rem; - outline: 0; -} - -.modal-backdrop { - position: fixed; - top: 0; - right: 0; - bottom: 0; - left: 0; - z-index: 1040; - background-color: #000; -} -.modal-backdrop.fade { - opacity: 0; -} -.modal-backdrop.show { - opacity: 0.5; -} - -.modal-header { - display: -webkit-box; - display: -ms-flexbox; - display: flex; - -webkit-box-align: start; - -ms-flex-align: start; - align-items: flex-start; - -webkit-box-pack: justify; - -ms-flex-pack: justify; - justify-content: space-between; - padding: 1rem; - border-bottom: 1px solid #e9ecef; - border-top-left-radius: 0.3rem; - border-top-right-radius: 0.3rem; -} -.modal-header .close { - padding: 
1rem; - margin: -1rem -1rem -1rem auto; -} - -.modal-title { - margin-bottom: 0; - line-height: 1.5; -} - -.modal-body { - position: relative; - -webkit-box-flex: 1; - -ms-flex: 1 1 auto; - flex: 1 1 auto; - padding: 1rem; -} - -.modal-footer { - display: -webkit-box; - display: -ms-flexbox; - display: flex; - -webkit-box-align: center; - -ms-flex-align: center; - align-items: center; - -webkit-box-pack: end; - -ms-flex-pack: end; - justify-content: flex-end; - padding: 1rem; - border-top: 1px solid #e9ecef; -} -.modal-footer > :not(:first-child) { - margin-left: .25rem; -} -.modal-footer > :not(:last-child) { - margin-right: .25rem; -} - -.modal-scrollbar-measure { - position: absolute; - top: -9999px; - width: 50px; - height: 50px; - overflow: scroll; -} - -@media (min-width: 576px) { - .modal-dialog { - max-width: 500px; - margin: 1.75rem auto; - } - - .modal-dialog-centered { - min-height: calc(100% - (1.75rem * 2)); - } - - .modal-sm { - max-width: 300px; - } -} -@media (min-width: 992px) { - .modal-lg { - max-width: 800px; - } -} -.tooltip { - position: absolute; - z-index: 1070; - display: block; - margin: 0; - font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, "Helvetica Neue", Arial, sans-serif, "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol"; - font-style: normal; - font-weight: 400; - line-height: 1.5; - text-align: left; - text-align: start; - text-decoration: none; - text-shadow: none; - text-transform: none; - letter-spacing: normal; - word-break: normal; - word-spacing: normal; - white-space: normal; - line-break: auto; - font-size: 0.875rem; - word-wrap: break-word; - opacity: 0; -} -.tooltip.show { - opacity: 0.9; -} -.tooltip .arrow { - position: absolute; - display: block; - width: 0.8rem; - height: 0.4rem; -} -.tooltip .arrow::before { - position: absolute; - content: ""; - border-color: transparent; - border-style: solid; -} - -.bs-tooltip-top, .bs-tooltip-auto[x-placement^="top"] { - padding: 0.4rem 0; -} -.bs-tooltip-top .arrow, .bs-tooltip-auto[x-placement^="top"] .arrow { - bottom: 0; -} -.bs-tooltip-top .arrow::before, .bs-tooltip-auto[x-placement^="top"] .arrow::before { - top: 0; - border-width: 0.4rem 0.4rem 0; - border-top-color: #000; -} - -.bs-tooltip-right, .bs-tooltip-auto[x-placement^="right"] { - padding: 0 0.4rem; -} -.bs-tooltip-right .arrow, .bs-tooltip-auto[x-placement^="right"] .arrow { - left: 0; - width: 0.4rem; - height: 0.8rem; -} -.bs-tooltip-right .arrow::before, .bs-tooltip-auto[x-placement^="right"] .arrow::before { - right: 0; - border-width: 0.4rem 0.4rem 0.4rem 0; - border-right-color: #000; -} - -.bs-tooltip-bottom, .bs-tooltip-auto[x-placement^="bottom"] { - padding: 0.4rem 0; -} -.bs-tooltip-bottom .arrow, .bs-tooltip-auto[x-placement^="bottom"] .arrow { - top: 0; -} -.bs-tooltip-bottom .arrow::before, .bs-tooltip-auto[x-placement^="bottom"] .arrow::before { - bottom: 0; - border-width: 0 0.4rem 0.4rem; - border-bottom-color: #000; -} - -.bs-tooltip-left, .bs-tooltip-auto[x-placement^="left"] { - padding: 0 0.4rem; -} -.bs-tooltip-left .arrow, .bs-tooltip-auto[x-placement^="left"] .arrow { - right: 0; - width: 0.4rem; - height: 0.8rem; -} -.bs-tooltip-left .arrow::before, .bs-tooltip-auto[x-placement^="left"] .arrow::before { - left: 0; - border-width: 0.4rem 0 0.4rem 0.4rem; - border-left-color: #000; -} - -.tooltip-inner { - max-width: 200px; - padding: 0.25rem 0.5rem; - color: #fff; - text-align: center; - background-color: #000; - border-radius: 0.25rem; -} - -.popover { - position: absolute; - top: 0; - 
left: 0; - z-index: 1060; - display: block; - max-width: 276px; - font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, "Helvetica Neue", Arial, sans-serif, "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol"; - font-style: normal; - font-weight: 400; - line-height: 1.5; - text-align: left; - text-align: start; - text-decoration: none; - text-shadow: none; - text-transform: none; - letter-spacing: normal; - word-break: normal; - word-spacing: normal; - white-space: normal; - line-break: auto; - font-size: 0.875rem; - word-wrap: break-word; - background-color: #fff; - background-clip: padding-box; - border: 1px solid rgba(0, 0, 0, 0.2); - border-radius: 0.3rem; -} -.popover .arrow { - position: absolute; - display: block; - width: 1rem; - height: 0.5rem; - margin: 0 0.3rem; -} -.popover .arrow::before, .popover .arrow::after { - position: absolute; - display: block; - content: ""; - border-color: transparent; - border-style: solid; -} - -.bs-popover-top, .bs-popover-auto[x-placement^="top"] { - margin-bottom: 0.5rem; -} -.bs-popover-top .arrow, .bs-popover-auto[x-placement^="top"] .arrow { - bottom: calc((0.5rem + 1px) * -1); -} -.bs-popover-top .arrow::before, .bs-popover-auto[x-placement^="top"] .arrow::before, -.bs-popover-top .arrow::after, -.bs-popover-auto[x-placement^="top"] .arrow::after { - border-width: 0.5rem 0.5rem 0; -} -.bs-popover-top .arrow::before, .bs-popover-auto[x-placement^="top"] .arrow::before { - bottom: 0; - border-top-color: rgba(0, 0, 0, 0.25); -} -.bs-popover-top .arrow::after, .bs-popover-auto[x-placement^="top"] .arrow::after { - bottom: 1px; - border-top-color: #fff; -} - -.bs-popover-right, .bs-popover-auto[x-placement^="right"] { - margin-left: 0.5rem; -} -.bs-popover-right .arrow, .bs-popover-auto[x-placement^="right"] .arrow { - left: calc((0.5rem + 1px) * -1); - width: 0.5rem; - height: 1rem; - margin: 0.3rem 0; -} -.bs-popover-right .arrow::before, .bs-popover-auto[x-placement^="right"] .arrow::before, -.bs-popover-right .arrow::after, -.bs-popover-auto[x-placement^="right"] .arrow::after { - border-width: 0.5rem 0.5rem 0.5rem 0; -} -.bs-popover-right .arrow::before, .bs-popover-auto[x-placement^="right"] .arrow::before { - left: 0; - border-right-color: rgba(0, 0, 0, 0.25); -} -.bs-popover-right .arrow::after, .bs-popover-auto[x-placement^="right"] .arrow::after { - left: 1px; - border-right-color: #fff; -} - -.bs-popover-bottom, .bs-popover-auto[x-placement^="bottom"] { - margin-top: 0.5rem; -} -.bs-popover-bottom .arrow, .bs-popover-auto[x-placement^="bottom"] .arrow { - top: calc((0.5rem + 1px) * -1); -} -.bs-popover-bottom .arrow::before, .bs-popover-auto[x-placement^="bottom"] .arrow::before, -.bs-popover-bottom .arrow::after, -.bs-popover-auto[x-placement^="bottom"] .arrow::after { - border-width: 0 0.5rem 0.5rem 0.5rem; -} -.bs-popover-bottom .arrow::before, .bs-popover-auto[x-placement^="bottom"] .arrow::before { - top: 0; - border-bottom-color: rgba(0, 0, 0, 0.25); -} -.bs-popover-bottom .arrow::after, .bs-popover-auto[x-placement^="bottom"] .arrow::after { - top: 1px; - border-bottom-color: #fff; -} -.bs-popover-bottom .popover-header::before, .bs-popover-auto[x-placement^="bottom"] .popover-header::before { - position: absolute; - top: 0; - left: 50%; - display: block; - width: 1rem; - margin-left: -0.5rem; - content: ""; - border-bottom: 1px solid #f7f7f7; -} - -.bs-popover-left, .bs-popover-auto[x-placement^="left"] { - margin-right: 0.5rem; -} -.bs-popover-left .arrow, .bs-popover-auto[x-placement^="left"] .arrow { - 
right: calc((0.5rem + 1px) * -1); - width: 0.5rem; - height: 1rem; - margin: 0.3rem 0; -} -.bs-popover-left .arrow::before, .bs-popover-auto[x-placement^="left"] .arrow::before, -.bs-popover-left .arrow::after, -.bs-popover-auto[x-placement^="left"] .arrow::after { - border-width: 0.5rem 0 0.5rem 0.5rem; -} -.bs-popover-left .arrow::before, .bs-popover-auto[x-placement^="left"] .arrow::before { - right: 0; - border-left-color: rgba(0, 0, 0, 0.25); -} -.bs-popover-left .arrow::after, .bs-popover-auto[x-placement^="left"] .arrow::after { - right: 1px; - border-left-color: #fff; -} - -.popover-header { - padding: 0.5rem 0.75rem; - margin-bottom: 0; - font-size: 1rem; - color: inherit; - background-color: #f7f7f7; - border-bottom: 1px solid #ebebeb; - border-top-left-radius: calc(0.3rem - 1px); - border-top-right-radius: calc(0.3rem - 1px); -} -.popover-header:empty { - display: none; -} - -.popover-body { - padding: 0.5rem 0.75rem; - color: #212529; -} - -.carousel { - position: relative; -} - -.carousel-inner { - position: relative; - width: 100%; - overflow: hidden; -} - -.carousel-item { - position: relative; - display: none; - -webkit-box-align: center; - -ms-flex-align: center; - align-items: center; - width: 100%; - -webkit-transition: -webkit-transform 0.6s ease; - transition: -webkit-transform 0.6s ease; - transition: transform 0.6s ease; - transition: transform 0.6s ease, -webkit-transform 0.6s ease; - -webkit-backface-visibility: hidden; - backface-visibility: hidden; - -webkit-perspective: 1000px; - perspective: 1000px; -} - -.carousel-item.active, -.carousel-item-next, -.carousel-item-prev { - display: block; -} - -.carousel-item-next, -.carousel-item-prev { - position: absolute; - top: 0; -} - -.carousel-item-next.carousel-item-left, -.carousel-item-prev.carousel-item-right { - -webkit-transform: translateX(0); - transform: translateX(0); -} -@supports (transform-style: preserve-3d) { - .carousel-item-next.carousel-item-left, - .carousel-item-prev.carousel-item-right { - -webkit-transform: translate3d(0, 0, 0); - transform: translate3d(0, 0, 0); - } -} - -.carousel-item-next, -.active.carousel-item-right { - -webkit-transform: translateX(100%); - transform: translateX(100%); -} -@supports (transform-style: preserve-3d) { - .carousel-item-next, - .active.carousel-item-right { - -webkit-transform: translate3d(100%, 0, 0); - transform: translate3d(100%, 0, 0); - } -} - -.carousel-item-prev, -.active.carousel-item-left { - -webkit-transform: translateX(-100%); - transform: translateX(-100%); -} -@supports (transform-style: preserve-3d) { - .carousel-item-prev, - .active.carousel-item-left { - -webkit-transform: translate3d(-100%, 0, 0); - transform: translate3d(-100%, 0, 0); - } -} - -.carousel-control-prev, -.carousel-control-next { - position: absolute; - top: 0; - bottom: 0; - display: -webkit-box; - display: -ms-flexbox; - display: flex; - -webkit-box-align: center; - -ms-flex-align: center; - align-items: center; - -webkit-box-pack: center; - -ms-flex-pack: center; - justify-content: center; - width: 15%; - color: #fff; - text-align: center; - opacity: 0.5; -} -.carousel-control-prev:hover, .carousel-control-prev:focus, -.carousel-control-next:hover, -.carousel-control-next:focus { - color: #fff; - text-decoration: none; - outline: 0; - opacity: .9; -} - -.carousel-control-prev { - left: 0; -} - -.carousel-control-next { - right: 0; -} - -.carousel-control-prev-icon, -.carousel-control-next-icon { - display: inline-block; - width: 20px; - height: 20px; - background: transparent 
no-repeat center center; - background-size: 100% 100%; -} - -.carousel-control-prev-icon { - background-image: url("data:image/svg+xml;charset=utf8,%3Csvg xmlns='http://www.w3.org/2000/svg' fill='%23fff' viewBox='0 0 8 8'%3E%3Cpath d='M5.25 0l-4 4 4 4 1.5-1.5-2.5-2.5 2.5-2.5-1.5-1.5z'/%3E%3C/svg%3E"); -} - -.carousel-control-next-icon { - background-image: url("data:image/svg+xml;charset=utf8,%3Csvg xmlns='http://www.w3.org/2000/svg' fill='%23fff' viewBox='0 0 8 8'%3E%3Cpath d='M2.75 0l-1.5 1.5 2.5 2.5-2.5 2.5 1.5 1.5 4-4-4-4z'/%3E%3C/svg%3E"); -} - -.carousel-indicators { - position: absolute; - right: 0; - bottom: 10px; - left: 0; - z-index: 15; - display: -webkit-box; - display: -ms-flexbox; - display: flex; - -webkit-box-pack: center; - -ms-flex-pack: center; - justify-content: center; - padding-left: 0; - margin-right: 15%; - margin-left: 15%; - list-style: none; -} -.carousel-indicators li { - position: relative; - -webkit-box-flex: 0; - -ms-flex: 0 1 auto; - flex: 0 1 auto; - width: 30px; - height: 3px; - margin-right: 3px; - margin-left: 3px; - text-indent: -999px; - background-color: rgba(255, 255, 255, 0.5); -} -.carousel-indicators li::before { - position: absolute; - top: -10px; - left: 0; - display: inline-block; - width: 100%; - height: 10px; - content: ""; -} -.carousel-indicators li::after { - position: absolute; - bottom: -10px; - left: 0; - display: inline-block; - width: 100%; - height: 10px; - content: ""; -} -.carousel-indicators .active { - background-color: #fff; -} - -.carousel-caption { - position: absolute; - right: 15%; - bottom: 20px; - left: 15%; - z-index: 10; - padding-top: 20px; - padding-bottom: 20px; - color: #fff; - text-align: center; -} - -.align-baseline { - vertical-align: baseline !important; -} - -.align-top { - vertical-align: top !important; -} - -.align-middle { - vertical-align: middle !important; -} - -.align-bottom { - vertical-align: bottom !important; -} - -.align-text-bottom { - vertical-align: text-bottom !important; -} - -.align-text-top { - vertical-align: text-top !important; -} - -.bg-primary { - background-color: #007bff !important; -} - -a.bg-primary:hover, a.bg-primary:focus, -button.bg-primary:hover, -button.bg-primary:focus { - background-color: #0062cc !important; -} - -.bg-secondary { - background-color: #6c757d !important; -} - -a.bg-secondary:hover, a.bg-secondary:focus, -button.bg-secondary:hover, -button.bg-secondary:focus { - background-color: #545b62 !important; -} - -.bg-success { - background-color: #28a745 !important; -} - -a.bg-success:hover, a.bg-success:focus, -button.bg-success:hover, -button.bg-success:focus { - background-color: #1e7e34 !important; -} - -.bg-info { - background-color: #17a2b8 !important; -} - -a.bg-info:hover, a.bg-info:focus, -button.bg-info:hover, -button.bg-info:focus { - background-color: #117a8b !important; -} - -.bg-warning { - background-color: #ffc107 !important; -} - -a.bg-warning:hover, a.bg-warning:focus, -button.bg-warning:hover, -button.bg-warning:focus { - background-color: #d39e00 !important; -} - -.bg-danger { - background-color: #dc3545 !important; -} - -a.bg-danger:hover, a.bg-danger:focus, -button.bg-danger:hover, -button.bg-danger:focus { - background-color: #bd2130 !important; -} - -.bg-light { - background-color: #f8f9fa !important; -} - -a.bg-light:hover, a.bg-light:focus, -button.bg-light:hover, -button.bg-light:focus { - background-color: #dae0e5 !important; -} - -.bg-dark { - background-color: #343a40 !important; -} - -a.bg-dark:hover, a.bg-dark:focus, 
-button.bg-dark:hover, -button.bg-dark:focus { - background-color: #1d2124 !important; -} - -.bg-white { - background-color: #fff !important; -} - -.bg-transparent { - background-color: transparent !important; -} - -.border { - border: 1px solid #dee2e6 !important; -} - -.border-top { - border-top: 1px solid #dee2e6 !important; -} - -.border-right { - border-right: 1px solid #dee2e6 !important; -} - -.border-bottom { - border-bottom: 1px solid #dee2e6 !important; -} - -.border-left { - border-left: 1px solid #dee2e6 !important; -} - -.border-0 { - border: 0 !important; -} - -.border-top-0 { - border-top: 0 !important; -} - -.border-right-0 { - border-right: 0 !important; -} - -.border-bottom-0 { - border-bottom: 0 !important; -} - -.border-left-0 { - border-left: 0 !important; -} - -.border-primary { - border-color: #007bff !important; -} - -.border-secondary { - border-color: #6c757d !important; -} - -.border-success { - border-color: #28a745 !important; -} - -.border-info { - border-color: #17a2b8 !important; -} - -.border-warning { - border-color: #ffc107 !important; -} - -.border-danger { - border-color: #dc3545 !important; -} - -.border-light { - border-color: #f8f9fa !important; -} - -.border-dark { - border-color: #343a40 !important; -} - -.border-white { - border-color: #fff !important; -} - -.rounded { - border-radius: 0.25rem !important; -} - -.rounded-top { - border-top-left-radius: 0.25rem !important; - border-top-right-radius: 0.25rem !important; -} - -.rounded-right { - border-top-right-radius: 0.25rem !important; - border-bottom-right-radius: 0.25rem !important; -} - -.rounded-bottom { - border-bottom-right-radius: 0.25rem !important; - border-bottom-left-radius: 0.25rem !important; -} - -.rounded-left { - border-top-left-radius: 0.25rem !important; - border-bottom-left-radius: 0.25rem !important; -} - -.rounded-circle { - border-radius: 50% !important; -} - -.rounded-0 { - border-radius: 0 !important; -} - -.clearfix::after { - display: block; - clear: both; - content: ""; -} - -.d-none { - display: none !important; -} - -.d-inline { - display: inline !important; -} - -.d-inline-block { - display: inline-block !important; -} - -.d-block { - display: block !important; -} - -.d-table { - display: table !important; -} - -.d-table-row { - display: table-row !important; -} - -.d-table-cell { - display: table-cell !important; -} - -.d-flex { - display: -webkit-box !important; - display: -ms-flexbox !important; - display: flex !important; -} - -.d-inline-flex { - display: -webkit-inline-box !important; - display: -ms-inline-flexbox !important; - display: inline-flex !important; -} - -@media (min-width: 576px) { - .d-sm-none { - display: none !important; - } - - .d-sm-inline { - display: inline !important; - } - - .d-sm-inline-block { - display: inline-block !important; - } - - .d-sm-block { - display: block !important; - } - - .d-sm-table { - display: table !important; - } - - .d-sm-table-row { - display: table-row !important; - } - - .d-sm-table-cell { - display: table-cell !important; - } - - .d-sm-flex { - display: -webkit-box !important; - display: -ms-flexbox !important; - display: flex !important; - } - - .d-sm-inline-flex { - display: -webkit-inline-box !important; - display: -ms-inline-flexbox !important; - display: inline-flex !important; - } -} -@media (min-width: 768px) { - .d-md-none { - display: none !important; - } - - .d-md-inline { - display: inline !important; - } - - .d-md-inline-block { - display: inline-block !important; - } - - .d-md-block { - display: block 
!important; - } - - .d-md-table { - display: table !important; - } - - .d-md-table-row { - display: table-row !important; - } - - .d-md-table-cell { - display: table-cell !important; - } - - .d-md-flex { - display: -webkit-box !important; - display: -ms-flexbox !important; - display: flex !important; - } - - .d-md-inline-flex { - display: -webkit-inline-box !important; - display: -ms-inline-flexbox !important; - display: inline-flex !important; - } -} -@media (min-width: 992px) { - .d-lg-none { - display: none !important; - } - - .d-lg-inline { - display: inline !important; - } - - .d-lg-inline-block { - display: inline-block !important; - } - - .d-lg-block { - display: block !important; - } - - .d-lg-table { - display: table !important; - } - - .d-lg-table-row { - display: table-row !important; - } - - .d-lg-table-cell { - display: table-cell !important; - } - - .d-lg-flex { - display: -webkit-box !important; - display: -ms-flexbox !important; - display: flex !important; - } - - .d-lg-inline-flex { - display: -webkit-inline-box !important; - display: -ms-inline-flexbox !important; - display: inline-flex !important; - } -} -@media (min-width: 1200px) { - .d-xl-none { - display: none !important; - } - - .d-xl-inline { - display: inline !important; - } - - .d-xl-inline-block { - display: inline-block !important; - } - - .d-xl-block { - display: block !important; - } - - .d-xl-table { - display: table !important; - } - - .d-xl-table-row { - display: table-row !important; - } - - .d-xl-table-cell { - display: table-cell !important; - } - - .d-xl-flex { - display: -webkit-box !important; - display: -ms-flexbox !important; - display: flex !important; - } - - .d-xl-inline-flex { - display: -webkit-inline-box !important; - display: -ms-inline-flexbox !important; - display: inline-flex !important; - } -} -@media print { - .d-print-none { - display: none !important; - } - - .d-print-inline { - display: inline !important; - } - - .d-print-inline-block { - display: inline-block !important; - } - - .d-print-block { - display: block !important; - } - - .d-print-table { - display: table !important; - } - - .d-print-table-row { - display: table-row !important; - } - - .d-print-table-cell { - display: table-cell !important; - } - - .d-print-flex { - display: -webkit-box !important; - display: -ms-flexbox !important; - display: flex !important; - } - - .d-print-inline-flex { - display: -webkit-inline-box !important; - display: -ms-inline-flexbox !important; - display: inline-flex !important; - } -} -.embed-responsive { - position: relative; - display: block; - width: 100%; - padding: 0; - overflow: hidden; -} -.embed-responsive::before { - display: block; - content: ""; -} -.embed-responsive .embed-responsive-item, -.embed-responsive iframe, -.embed-responsive embed, -.embed-responsive object, -.embed-responsive video { - position: absolute; - top: 0; - bottom: 0; - left: 0; - width: 100%; - height: 100%; - border: 0; -} - -.embed-responsive-21by9::before { - padding-top: 42.8571428571%; -} - -.embed-responsive-16by9::before { - padding-top: 56.25%; -} - -.embed-responsive-4by3::before { - padding-top: 75%; -} - -.embed-responsive-1by1::before { - padding-top: 100%; -} - -.flex-row { - -webkit-box-orient: horizontal !important; - -webkit-box-direction: normal !important; - -ms-flex-direction: row !important; - flex-direction: row !important; -} - -.flex-column { - -webkit-box-orient: vertical !important; - -webkit-box-direction: normal !important; - -ms-flex-direction: column !important; - flex-direction: 
column !important; -} - -.flex-row-reverse { - -webkit-box-orient: horizontal !important; - -webkit-box-direction: reverse !important; - -ms-flex-direction: row-reverse !important; - flex-direction: row-reverse !important; -} - -.flex-column-reverse { - -webkit-box-orient: vertical !important; - -webkit-box-direction: reverse !important; - -ms-flex-direction: column-reverse !important; - flex-direction: column-reverse !important; -} - -.flex-wrap { - -ms-flex-wrap: wrap !important; - flex-wrap: wrap !important; -} - -.flex-nowrap { - -ms-flex-wrap: nowrap !important; - flex-wrap: nowrap !important; -} - -.flex-wrap-reverse { - -ms-flex-wrap: wrap-reverse !important; - flex-wrap: wrap-reverse !important; -} - -.justify-content-start { - -webkit-box-pack: start !important; - -ms-flex-pack: start !important; - justify-content: flex-start !important; -} - -.justify-content-end { - -webkit-box-pack: end !important; - -ms-flex-pack: end !important; - justify-content: flex-end !important; -} - -.justify-content-center { - -webkit-box-pack: center !important; - -ms-flex-pack: center !important; - justify-content: center !important; -} - -.justify-content-between { - -webkit-box-pack: justify !important; - -ms-flex-pack: justify !important; - justify-content: space-between !important; -} - -.justify-content-around { - -ms-flex-pack: distribute !important; - justify-content: space-around !important; -} - -.align-items-start { - -webkit-box-align: start !important; - -ms-flex-align: start !important; - align-items: flex-start !important; -} - -.align-items-end { - -webkit-box-align: end !important; - -ms-flex-align: end !important; - align-items: flex-end !important; -} - -.align-items-center { - -webkit-box-align: center !important; - -ms-flex-align: center !important; - align-items: center !important; -} - -.align-items-baseline { - -webkit-box-align: baseline !important; - -ms-flex-align: baseline !important; - align-items: baseline !important; -} - -.align-items-stretch { - -webkit-box-align: stretch !important; - -ms-flex-align: stretch !important; - align-items: stretch !important; -} - -.align-content-start { - -ms-flex-line-pack: start !important; - align-content: flex-start !important; -} - -.align-content-end { - -ms-flex-line-pack: end !important; - align-content: flex-end !important; -} - -.align-content-center { - -ms-flex-line-pack: center !important; - align-content: center !important; -} - -.align-content-between { - -ms-flex-line-pack: justify !important; - align-content: space-between !important; -} - -.align-content-around { - -ms-flex-line-pack: distribute !important; - align-content: space-around !important; -} - -.align-content-stretch { - -ms-flex-line-pack: stretch !important; - align-content: stretch !important; -} - -.align-self-auto { - -ms-flex-item-align: auto !important; - align-self: auto !important; -} - -.align-self-start { - -ms-flex-item-align: start !important; - align-self: flex-start !important; -} - -.align-self-end { - -ms-flex-item-align: end !important; - align-self: flex-end !important; -} - -.align-self-center { - -ms-flex-item-align: center !important; - align-self: center !important; -} - -.align-self-baseline { - -ms-flex-item-align: baseline !important; - align-self: baseline !important; -} - -.align-self-stretch { - -ms-flex-item-align: stretch !important; - align-self: stretch !important; -} - -@media (min-width: 576px) { - .flex-sm-row { - -webkit-box-orient: horizontal !important; - -webkit-box-direction: normal !important; - -ms-flex-direction: row 
!important; - flex-direction: row !important; - } - - .flex-sm-column { - -webkit-box-orient: vertical !important; - -webkit-box-direction: normal !important; - -ms-flex-direction: column !important; - flex-direction: column !important; - } - - .flex-sm-row-reverse { - -webkit-box-orient: horizontal !important; - -webkit-box-direction: reverse !important; - -ms-flex-direction: row-reverse !important; - flex-direction: row-reverse !important; - } - - .flex-sm-column-reverse { - -webkit-box-orient: vertical !important; - -webkit-box-direction: reverse !important; - -ms-flex-direction: column-reverse !important; - flex-direction: column-reverse !important; - } - - .flex-sm-wrap { - -ms-flex-wrap: wrap !important; - flex-wrap: wrap !important; - } - - .flex-sm-nowrap { - -ms-flex-wrap: nowrap !important; - flex-wrap: nowrap !important; - } - - .flex-sm-wrap-reverse { - -ms-flex-wrap: wrap-reverse !important; - flex-wrap: wrap-reverse !important; - } - - .justify-content-sm-start { - -webkit-box-pack: start !important; - -ms-flex-pack: start !important; - justify-content: flex-start !important; - } - - .justify-content-sm-end { - -webkit-box-pack: end !important; - -ms-flex-pack: end !important; - justify-content: flex-end !important; - } - - .justify-content-sm-center { - -webkit-box-pack: center !important; - -ms-flex-pack: center !important; - justify-content: center !important; - } - - .justify-content-sm-between { - -webkit-box-pack: justify !important; - -ms-flex-pack: justify !important; - justify-content: space-between !important; - } - - .justify-content-sm-around { - -ms-flex-pack: distribute !important; - justify-content: space-around !important; - } - - .align-items-sm-start { - -webkit-box-align: start !important; - -ms-flex-align: start !important; - align-items: flex-start !important; - } - - .align-items-sm-end { - -webkit-box-align: end !important; - -ms-flex-align: end !important; - align-items: flex-end !important; - } - - .align-items-sm-center { - -webkit-box-align: center !important; - -ms-flex-align: center !important; - align-items: center !important; - } - - .align-items-sm-baseline { - -webkit-box-align: baseline !important; - -ms-flex-align: baseline !important; - align-items: baseline !important; - } - - .align-items-sm-stretch { - -webkit-box-align: stretch !important; - -ms-flex-align: stretch !important; - align-items: stretch !important; - } - - .align-content-sm-start { - -ms-flex-line-pack: start !important; - align-content: flex-start !important; - } - - .align-content-sm-end { - -ms-flex-line-pack: end !important; - align-content: flex-end !important; - } - - .align-content-sm-center { - -ms-flex-line-pack: center !important; - align-content: center !important; - } - - .align-content-sm-between { - -ms-flex-line-pack: justify !important; - align-content: space-between !important; - } - - .align-content-sm-around { - -ms-flex-line-pack: distribute !important; - align-content: space-around !important; - } - - .align-content-sm-stretch { - -ms-flex-line-pack: stretch !important; - align-content: stretch !important; - } - - .align-self-sm-auto { - -ms-flex-item-align: auto !important; - align-self: auto !important; - } - - .align-self-sm-start { - -ms-flex-item-align: start !important; - align-self: flex-start !important; - } - - .align-self-sm-end { - -ms-flex-item-align: end !important; - align-self: flex-end !important; - } - - .align-self-sm-center { - -ms-flex-item-align: center !important; - align-self: center !important; - } - - .align-self-sm-baseline { - 
-ms-flex-item-align: baseline !important; - align-self: baseline !important; - } - - .align-self-sm-stretch { - -ms-flex-item-align: stretch !important; - align-self: stretch !important; - } -} -@media (min-width: 768px) { - .flex-md-row { - -webkit-box-orient: horizontal !important; - -webkit-box-direction: normal !important; - -ms-flex-direction: row !important; - flex-direction: row !important; - } - - .flex-md-column { - -webkit-box-orient: vertical !important; - -webkit-box-direction: normal !important; - -ms-flex-direction: column !important; - flex-direction: column !important; - } - - .flex-md-row-reverse { - -webkit-box-orient: horizontal !important; - -webkit-box-direction: reverse !important; - -ms-flex-direction: row-reverse !important; - flex-direction: row-reverse !important; - } - - .flex-md-column-reverse { - -webkit-box-orient: vertical !important; - -webkit-box-direction: reverse !important; - -ms-flex-direction: column-reverse !important; - flex-direction: column-reverse !important; - } - - .flex-md-wrap { - -ms-flex-wrap: wrap !important; - flex-wrap: wrap !important; - } - - .flex-md-nowrap { - -ms-flex-wrap: nowrap !important; - flex-wrap: nowrap !important; - } - - .flex-md-wrap-reverse { - -ms-flex-wrap: wrap-reverse !important; - flex-wrap: wrap-reverse !important; - } - - .justify-content-md-start { - -webkit-box-pack: start !important; - -ms-flex-pack: start !important; - justify-content: flex-start !important; - } - - .justify-content-md-end { - -webkit-box-pack: end !important; - -ms-flex-pack: end !important; - justify-content: flex-end !important; - } - - .justify-content-md-center { - -webkit-box-pack: center !important; - -ms-flex-pack: center !important; - justify-content: center !important; - } - - .justify-content-md-between { - -webkit-box-pack: justify !important; - -ms-flex-pack: justify !important; - justify-content: space-between !important; - } - - .justify-content-md-around { - -ms-flex-pack: distribute !important; - justify-content: space-around !important; - } - - .align-items-md-start { - -webkit-box-align: start !important; - -ms-flex-align: start !important; - align-items: flex-start !important; - } - - .align-items-md-end { - -webkit-box-align: end !important; - -ms-flex-align: end !important; - align-items: flex-end !important; - } - - .align-items-md-center { - -webkit-box-align: center !important; - -ms-flex-align: center !important; - align-items: center !important; - } - - .align-items-md-baseline { - -webkit-box-align: baseline !important; - -ms-flex-align: baseline !important; - align-items: baseline !important; - } - - .align-items-md-stretch { - -webkit-box-align: stretch !important; - -ms-flex-align: stretch !important; - align-items: stretch !important; - } - - .align-content-md-start { - -ms-flex-line-pack: start !important; - align-content: flex-start !important; - } - - .align-content-md-end { - -ms-flex-line-pack: end !important; - align-content: flex-end !important; - } - - .align-content-md-center { - -ms-flex-line-pack: center !important; - align-content: center !important; - } - - .align-content-md-between { - -ms-flex-line-pack: justify !important; - align-content: space-between !important; - } - - .align-content-md-around { - -ms-flex-line-pack: distribute !important; - align-content: space-around !important; - } - - .align-content-md-stretch { - -ms-flex-line-pack: stretch !important; - align-content: stretch !important; - } - - .align-self-md-auto { - -ms-flex-item-align: auto !important; - align-self: auto !important; 
[Deleted stylesheet hunk continues, flattened during extraction. Recoverable content: Bootstrap 4 responsive utility classes (flex alignment/justification, float, position, sizing, margin/padding spacing scale, text alignment and transforms, screen-reader helpers), print media styles, the Rouge/GitHub syntax-highlighting theme, FreightSans and IBMPlexMono @font-face declarations, and the PyTorch documentation theme rules for the header, main menu, mobile menu, footer, article body, Sphinx-Gallery thumbnails, API signature (function/class/attribute) styling, and admonition boxes.]
.warning ol li, -article.pytorch-article .tip ul li, -article.pytorch-article .tip ol li, -article.pytorch-article .seealso ul li, -article.pytorch-article .seealso ol li, -article.pytorch-article .hint ul li, -article.pytorch-article .hint ol li, -article.pytorch-article .important ul li, -article.pytorch-article .important ol li, -article.pytorch-article .caution ul li, -article.pytorch-article .caution ol li, -article.pytorch-article .danger ul li, -article.pytorch-article .danger ol li, -article.pytorch-article .attention ul li, -article.pytorch-article .attention ol li, -article.pytorch-article .error ul li, -article.pytorch-article .error ol li { - color: #262626; -} -article.pytorch-article .note p, -article.pytorch-article .warning p, -article.pytorch-article .tip p, -article.pytorch-article .seealso p, -article.pytorch-article .hint p, -article.pytorch-article .important p, -article.pytorch-article .caution p, -article.pytorch-article .danger p, -article.pytorch-article .attention p, -article.pytorch-article .error p { - margin-top: 1.125rem; -} -article.pytorch-article .note .admonition-title { - background: #54c7ec; -} -article.pytorch-article .warning .admonition-title { - background: #e94f3b; -} -article.pytorch-article .tip .admonition-title { - background: #6bcebb; -} -article.pytorch-article .seealso .admonition-title { - background: #6bcebb; -} -article.pytorch-article .hint .admonition-title { - background: #a2cdde; -} -article.pytorch-article .important .admonition-title { - background: #5890ff; -} -article.pytorch-article .caution .admonition-title { - background: #f7923a; -} -article.pytorch-article .danger .admonition-title { - background: #db2c49; -} -article.pytorch-article .attention .admonition-title { - background: #f5a623; -} -article.pytorch-article .error .admonition-title { - background: #cc2f90; -} -article.pytorch-article .sphx-glr-download-link-note.admonition.note, -article.pytorch-article .reference.download.internal, article.pytorch-article .sphx-glr-signature { - display: none; -} -article.pytorch-article .admonition > p:last-of-type { - margin-bottom: 0; - padding-bottom: 1.125rem !important; -} - -.pytorch-article div.sphx-glr-download a { - background-color: #f3f4f7; - background-image: url("../images/arrow-down-orange.svg"); - background-repeat: no-repeat; - background-position: left 10px center; - background-size: 15px 15px; - border-radius: 0; - border: none; - display: block; - text-align: left; - padding: 0.9375rem 3.125rem; - position: relative; - margin: 1.25rem auto; -} -@media screen and (min-width: 768px) { - .pytorch-article div.sphx-glr-download a:after { - content: ""; - display: block; - width: 0; - height: 1px; - position: absolute; - bottom: 0; - left: 0; - background-color: #e44c2c; - -webkit-transition: width .250s ease-in-out; - transition: width .250s ease-in-out; - } - .pytorch-article div.sphx-glr-download a:hover:after { - width: 100%; - } -} -@media screen and (min-width: 768px) { - .pytorch-article div.sphx-glr-download a:after { - background-color: #ee4c2c; - } -} -@media screen and (min-width: 768px) { - .pytorch-article div.sphx-glr-download a { - background-position: left 20px center; - } -} -.pytorch-article div.sphx-glr-download a:hover { - -webkit-box-shadow: none; - box-shadow: none; - text-decoration: none; - background-image: url("../images/arrow-down-orange.svg"); - background-color: #f3f4f7; -} -.pytorch-article div.sphx-glr-download a span.pre { - background-color: transparent; - font-size: 1.125rem; - padding: 
0; - color: #262626; -} -.pytorch-article div.sphx-glr-download a code, .pytorch-article div.sphx-glr-download a kbd, .pytorch-article div.sphx-glr-download a pre, .pytorch-article div.sphx-glr-download a samp, .pytorch-article div.sphx-glr-download a span.pre { - font-family: FreightSans, Helvetica Neue, Helvetica, Arial, sans-serif; -} - -.pytorch-article p.sphx-glr-script-out { - margin-bottom: 1.125rem; -} - -.pytorch-article div.sphx-glr-script-out { - margin-bottom: 2.5rem; -} -.pytorch-article div.sphx-glr-script-out .highlight { - margin-left: 0; - margin-top: 0; -} -.pytorch-article div.sphx-glr-script-out .highlight pre { - background-color: #fdede9; - padding: 1.5625rem; - color: #837b79; -} -.pytorch-article div.sphx-glr-script-out + p { - margin-top: unset; -} - -article.pytorch-article .wy-table-responsive table { - border: none; - border-color: #ffffff !important; - table-layout: fixed; -} -article.pytorch-article .wy-table-responsive table thead tr { - border-bottom: 2px solid #6c6c6d; -} -article.pytorch-article .wy-table-responsive table thead th { - line-height: 1.75rem; - padding-left: 0.9375rem; - padding-right: 0.9375rem; -} -article.pytorch-article .wy-table-responsive table tbody .row-odd { - background-color: #f3f4f7; -} -article.pytorch-article .wy-table-responsive table tbody td { - color: #6c6c6d; - white-space: normal; - padding: 0.9375rem; - font-size: 1rem; - line-height: 1.375rem; -} -article.pytorch-article .wy-table-responsive table tbody td .pre { - background: #ffffff; - color: #ee4c2c; - font-size: 87.5%; -} -article.pytorch-article .wy-table-responsive table tbody td code { - font-size: 87.5%; -} - -a[rel~="prev"], a[rel~="next"] { - padding: 0.375rem 0 0 0; -} - -img.next-page, -img.previous-page { - width: 8px; - height: 10px; - position: relative; - top: -1px; -} - -img.previous-page { - -webkit-transform: scaleX(-1); - transform: scaleX(-1); -} - -.rst-footer-buttons { - margin-top: 1.875rem; - margin-bottom: 1.875rem; -} -.rst-footer-buttons .btn:focus, -.rst-footer-buttons .btn.focus { - -webkit-box-shadow: none; - box-shadow: none; -} - -article.pytorch-article blockquote { - margin-left: 3.75rem; - color: #6c6c6d; -} - -article.pytorch-article .caption { - color: #6c6c6d; - letter-spacing: 0.25px; - line-height: 2.125rem; -} - -article.pytorch-article .math { - color: #262626; - width: auto; - text-align: center; -} -article.pytorch-article .math img { - width: auto; -} - -.pytorch-breadcrumbs-wrapper { - width: 100%; -} -@media screen and (min-width: 1101px) { - .pytorch-breadcrumbs-wrapper { - float: left; - margin-left: 3%; - width: 75%; - } -} -@media screen and (min-width: 1600px) { - .pytorch-breadcrumbs-wrapper { - width: 850px; - margin-left: 1.875rem; - } -} -.pytorch-breadcrumbs-wrapper .pytorch-breadcrumbs-aside { - float: right; -} -.pytorch-breadcrumbs-wrapper .pytorch-breadcrumbs-aside .fa.fa-github { - margin-top: 5px; - display: block; -} - -.pytorch-article .container { - padding-left: 0; - padding-right: 0; - max-width: none; -} - -a:link, -a:visited, -a:hover { - color: #ee4c2c; -} - -::-webkit-input-placeholder { - color: #ee4c2c; -} - -::-moz-placeholder { - color: #ee4c2c; -} - -:-ms-input-placeholder { - color: #ee4c2c; -} - -:-moz-placeholder { - color: #ee4c2c; -} - -@media screen and (min-width: 768px) { - .site-footer a:hover { - color: #ee4c2c; - } -} - -.docs-tutorials-resources a { - color: #ee4c2c; -} - -.header-holder { - position: relative; - z-index: 201; -} - -.header-holder .main-menu ul li.active:after { - 
color: #ee4c2c; -} -.header-holder .main-menu ul li.active a { - color: #ee4c2c; -} -@media screen and (min-width: 1100px) { - .header-holder .main-menu ul li a:hover { - color: #ee4c2c; - } -} - -.mobile-main-menu.open ul li.active a { - color: #ee4c2c; -} - -.version { - padding-bottom: 1rem; -} - -.pytorch-call-to-action-links { - padding-top: 0; - display: -webkit-box; - display: -ms-flexbox; - display: flex; -} -@media screen and (min-width: 768px) { - .pytorch-call-to-action-links { - padding-top: 2.5rem; - } -} -@media (min-width: 768px) and (max-width: 1239px) { - .pytorch-call-to-action-links { - padding-top: 0; - } -} -@media (min-width: 1100px) and (max-width: 1239px) { - .pytorch-call-to-action-links { - padding-top: 2.5rem; - } -} -.pytorch-call-to-action-links #tutorial-type { - display: none; -} -.pytorch-call-to-action-links .call-to-action-img, .pytorch-call-to-action-links .call-to-action-notebook-img { - height: 1.375rem; - width: 1.375rem; - margin-right: 10px; -} -.pytorch-call-to-action-links .call-to-action-notebook-img { - height: 1rem; -} -.pytorch-call-to-action-links a { - padding-right: 1.25rem; - color: #000000; - cursor: pointer; -} -.pytorch-call-to-action-links a:hover { - color: #e44c2c; -} -.pytorch-call-to-action-links a .call-to-action-desktop-view { - display: none; -} -@media screen and (min-width: 768px) { - .pytorch-call-to-action-links a .call-to-action-desktop-view { - display: block; - } -} -.pytorch-call-to-action-links a .call-to-action-mobile-view { - display: block; -} -@media screen and (min-width: 768px) { - .pytorch-call-to-action-links a .call-to-action-mobile-view { - display: none; - } -} -.pytorch-call-to-action-links a #google-colab-link, .pytorch-call-to-action-links a #download-notebook-link, -.pytorch-call-to-action-links a #github-view-link { - padding-bottom: 0.625rem; - border-bottom: 1px solid #f3f4f7; - padding-right: 2.5rem; - display: -webkit-box; - display: -ms-flexbox; - display: flex; - -webkit-box-align: center; - -ms-flex-align: center; - align-items: center; -} -.pytorch-call-to-action-links a #google-colab-link:hover, .pytorch-call-to-action-links a #download-notebook-link:hover, -.pytorch-call-to-action-links a #github-view-link:hover { - border-bottom-color: #e44c2c; - color: #e44c2c; -} - -#tutorial-cards-container #tutorial-cards { - width: 100%; -} -#tutorial-cards-container .tutorials-nav { - padding-left: 0; - padding-right: 0; - padding-bottom: 0; -} -#tutorial-cards-container .tutorials-hr { - margin-top: 1rem; - margin-bottom: 1rem; -} -#tutorial-cards-container .card.tutorials-card { - border-radius: 0; - border-color: #f3f4f7; - height: 98px; - margin-bottom: 1.25rem; - margin-bottom: 1.875rem; - overflow: scroll; - background-color: #f3f4f7; - cursor: pointer; -} -@media screen and (min-width: 1240px) { - #tutorial-cards-container .card.tutorials-card { - height: 200px; - overflow: inherit; - } -} -@media (min-width: 768px) and (max-width: 1239px) { - #tutorial-cards-container .card.tutorials-card { - height: 200px; - overflow: scroll; - } -} -#tutorial-cards-container .card.tutorials-card .tutorials-image { - position: absolute; - top: 0px; - right: 0px; - height: 96px; - width: 96px; - opacity: 0.5; -} -#tutorial-cards-container .card.tutorials-card .tutorials-image img { - height: 100%; - width: 100%; -} -@media screen and (min-width: 768px) { - #tutorial-cards-container .card.tutorials-card .tutorials-image { - height: 100%; - width: 25%; - } -} -@media (min-width: 768px) and (max-width: 1239px) { - 
#tutorial-cards-container .card.tutorials-card .tutorials-image { - height: 100%; - width: 198px; - } -} -#tutorial-cards-container .card.tutorials-card .tutorials-image:before { - content: ''; - position: absolute; - top: 0; - left: 0; - bottom: 0; - right: 0; - z-index: 1; - background: #000000; - opacity: .075; -} -#tutorial-cards-container .card.tutorials-card .card-title-container { - width: 70%; - display: -webkit-inline-box; - display: -ms-inline-flexbox; - display: inline-flex; -} -@media screen and (min-width: 768px) { - #tutorial-cards-container .card.tutorials-card .card-title-container { - width: 75%; - } -} -@media (min-width: 768px) and (max-width: 1239px) { - #tutorial-cards-container .card.tutorials-card .card-title-container { - width: 70%; - } -} -#tutorial-cards-container .card.tutorials-card .card-title-container h4 { - margin-bottom: 1.125rem; - margin-top: 0; - font-size: 1.5rem; -} -#tutorial-cards-container .card.tutorials-card p.card-summary, #tutorial-cards-container .card.tutorials-card p.tags { - font-size: 0.9375rem; - line-height: 1.5rem; - margin-bottom: 0; - color: #6c6c6d; - font-weight: 400; - width: 70%; -} -@media screen and (min-width: 768px) { - #tutorial-cards-container .card.tutorials-card p.card-summary, #tutorial-cards-container .card.tutorials-card p.tags { - width: 75%; - } -} -@media (min-width: 768px) and (max-width: 1239px) { - #tutorial-cards-container .card.tutorials-card p.card-summary, #tutorial-cards-container .card.tutorials-card p.tags { - width: 70%; - } -} -#tutorial-cards-container .card.tutorials-card p.tags { - margin-top: 30px; - text-overflow: ellipsis; - white-space: nowrap; - overflow: hidden; -} -#tutorial-cards-container .card.tutorials-card h4 { - color: #262626; - margin-bottom: 1.125rem; -} -#tutorial-cards-container .card.tutorials-card a { - height: 100%; -} -@media screen and (min-width: 768px) { - #tutorial-cards-container .card.tutorials-card a { - min-height: 190px; - } -} -@media (min-width: 768px) and (max-width: 1239px) { - #tutorial-cards-container .card.tutorials-card a { - min-height: 234px; - } -} -@media screen and (min-width: 768px) { - #tutorial-cards-container .card.tutorials-card:after { - content: ""; - display: block; - width: 0; - height: 1px; - position: absolute; - bottom: 0; - left: 0; - background-color: #e44c2c; - -webkit-transition: width .250s ease-in-out; - transition: width .250s ease-in-out; - } - #tutorial-cards-container .card.tutorials-card:hover:after { - width: 100%; - } -} -#tutorial-cards-container .card.tutorials-card:hover { - background-color: #ffffff; - border: 1px solid #e2e2e2; - border-bottom: none; -} -#tutorial-cards-container .card.tutorials-card:hover p.card-summary { - color: #262626; -} -#tutorial-cards-container .card.tutorials-card:hover .tutorials-image { - opacity: unset; -} -#tutorial-cards-container .tutorial-tags-container { - width: 75%; -} -#tutorial-cards-container .tutorial-tags-container.active { - width: 0; -} -#tutorial-cards-container .tutorial-filter-menu ul { - list-style-type: none; - padding-left: 1.25rem; -} -#tutorial-cards-container .tutorial-filter-menu ul li { - padding-right: 1.25rem; - word-break: break-all; -} -#tutorial-cards-container .tutorial-filter-menu ul li a { - color: #979797; -} -#tutorial-cards-container .tutorial-filter-menu ul li a:hover { - color: #e44c2c; -} -#tutorial-cards-container .tutorial-filter { - cursor: pointer; -} -#tutorial-cards-container .filter-btn { - color: #979797; - border: 1px solid #979797; - display: 
inline-block; - text-align: center; - white-space: nowrap; - vertical-align: middle; - padding: 0.375rem 0.75rem; - font-size: 1rem; - line-height: 1.5; - margin-bottom: 5px; -} -#tutorial-cards-container .filter-btn:hover { - border: 1px solid #e44c2c; - color: #e44c2c; -} -#tutorial-cards-container .filter-btn.selected { - background-color: #e44c2c; - border: 1px solid #e44c2c; - color: #ffffff; -} -#tutorial-cards-container .all-tag-selected { - background-color: #979797; - color: #ffffff; -} -#tutorial-cards-container .all-tag-selected:hover { - border-color: #979797; - color: #ffffff; -} -#tutorial-cards-container .pagination .page { - border: 1px solid #dee2e6; - padding: 0.5rem 0.75rem; -} -#tutorial-cards-container .pagination .active .page { - background-color: #dee2e6; -} - -article.pytorch-article .tutorials-callout-container { - padding-bottom: 50px; -} -article.pytorch-article .tutorials-callout-container .col-md-6 { - padding-bottom: 10px; -} -article.pytorch-article .tutorials-callout-container .text-container { - padding: 10px 0px 30px 0px; - padding-bottom: 10px; -} -article.pytorch-article .tutorials-callout-container .text-container .body-paragraph { - color: #666666; - font-weight: 300; - font-size: 1.125rem; - line-height: 1.875rem; -} -article.pytorch-article .tutorials-callout-container .btn.callout-button { - font-size: 1.125rem; - border-radius: 0; - border: none; - background-color: #f3f4f7; - color: #6c6c6d; - font-weight: 400; - position: relative; - letter-spacing: 0.25px; -} -@media screen and (min-width: 768px) { - article.pytorch-article .tutorials-callout-container .btn.callout-button:after { - content: ""; - display: block; - width: 0; - height: 1px; - position: absolute; - bottom: 0; - left: 0; - background-color: #e44c2c; - -webkit-transition: width .250s ease-in-out; - transition: width .250s ease-in-out; - } - article.pytorch-article .tutorials-callout-container .btn.callout-button:hover:after { - width: 100%; - } -} -article.pytorch-article .tutorials-callout-container .btn.callout-button a { - color: inherit; -} - -.pytorch-container { - margin: 0 auto; - padding: 0 1.875rem; - width: auto; - position: relative; -} -@media screen and (min-width: 1100px) { - .pytorch-container { - padding: 0; - } -} -@media screen and (min-width: 1101px) { - .pytorch-container { - margin-left: 25%; - } -} -@media screen and (min-width: 1600px) { - .pytorch-container { - margin-left: 350px; - } -} -.pytorch-container:before, .pytorch-container:after { - content: ""; - display: table; -} -.pytorch-container:after { - clear: both; -} -.pytorch-container { - *zoom: 1; -} - -.pytorch-content-wrap { - background-color: #ffffff; - display: -webkit-box; - display: -ms-flexbox; - display: flex; - position: relative; - padding-top: 0; -} -.pytorch-content-wrap:before, .pytorch-content-wrap:after { - content: ""; - display: table; -} -.pytorch-content-wrap:after { - clear: both; -} -.pytorch-content-wrap { - *zoom: 1; -} -@media screen and (min-width: 1101px) { - .pytorch-content-wrap { - padding-top: 45px; - float: left; - width: 100%; - display: block; - } -} -@media screen and (min-width: 1600px) { - .pytorch-content-wrap { - width: 100%; - } -} - -.pytorch-content { - background: #ffffff; - width: 100%; - max-width: 700px; - position: relative; -} - -.pytorch-content-left { - min-height: 100vh; - margin-top: 2.5rem; - width: 100%; -} -@media screen and (min-width: 1101px) { - .pytorch-content-left { - margin-top: 0; - margin-left: 3%; - width: 75%; - float: left; - } -} 
-@media screen and (min-width: 1600px) { - .pytorch-content-left { - width: 850px; - margin-left: 30px; - } -} -.pytorch-content-left .main-content { - padding-top: 0.9375rem; -} -.pytorch-content-left .main-content ul.simple { - padding-bottom: 1.25rem; -} -.pytorch-content-left .main-content .note:nth-child(1), .pytorch-content-left .main-content .warning:nth-child(1) { - margin-top: 0; -} - -.pytorch-content-right { - display: none; - position: relative; - overflow-x: hidden; - overflow-y: hidden; -} -@media screen and (min-width: 1101px) { - .pytorch-content-right { - display: block; - margin-left: 0; - width: 19%; - float: left; - height: 100%; - } -} -@media screen and (min-width: 1600px) { - .pytorch-content-right { - width: 280px; - } -} - -@media screen and (min-width: 1101px) { - .pytorch-side-scroll { - position: relative; - overflow-x: hidden; - overflow-y: scroll; - height: 100%; - } -} - -.pytorch-menu-vertical { - padding: 1.25rem 1.875rem 2.5rem 1.875rem; -} -@media screen and (min-width: 1101px) { - .pytorch-menu-vertical { - display: block; - padding-top: 0; - padding-right: 13.5%; - padding-bottom: 5.625rem; - } -} -@media screen and (min-width: 1600px) { - .pytorch-menu-vertical { - padding-left: 0; - padding-right: 1.5625rem; - } -} - -.pytorch-left-menu { - display: none; - background-color: #f3f4f7; - color: #262626; - overflow: scroll; -} -@media screen and (min-width: 1101px) { - .pytorch-left-menu { - display: block; - overflow-x: hidden; - overflow-y: hidden; - padding-bottom: 110px; - padding: 0 1.875rem 0 0; - width: 25%; - z-index: 200; - float: left; - } - .pytorch-left-menu.make-fixed { - position: fixed; - top: 0; - bottom: 0; - left: 0; - float: none; - } -} -@media screen and (min-width: 1600px) { - .pytorch-left-menu { - padding: 0 0 0 1.875rem; - width: 350px; - } -} - -.expand-menu, .hide-menu { - color: #6c6c6d; - padding-left: 10px; - cursor: pointer; -} - -.collapse { - display: none; -} - -.left-nav-top-caption { - padding-top: 1rem; -} - -.pytorch-left-menu p.caption { - color: #262626; - display: block; - font-size: 1rem; - line-height: 1.375rem; - margin-bottom: 1rem; - text-transform: none; - white-space: normal; -} - -.pytorch-left-menu-search { - margin-bottom: 2.5rem; -} -@media screen and (min-width: 1101px) { - .pytorch-left-menu-search { - margin: 1.25rem 0.625rem 1.875rem 0; - } -} - -.pytorch-left-menu-search ::-webkit-input-placeholder { - color: #262626; -} -.pytorch-left-menu-search ::-moz-placeholder { - color: #262626; -} -.pytorch-left-menu-search :-ms-input-placeholder { - color: #262626; -} -.pytorch-left-menu-search ::-ms-input-placeholder { - color: #262626; -} -.pytorch-left-menu-search ::placeholder { - color: #262626; -} - -.pytorch-left-menu-search :focus::-webkit-input-placeholder { - color: transparent; -} -.pytorch-left-menu-search :focus::-moz-placeholder { - color: transparent; -} -.pytorch-left-menu-search :focus:-ms-input-placeholder { - color: transparent; -} -.pytorch-left-menu-search :focus::-ms-input-placeholder { - color: transparent; -} -.pytorch-left-menu-search :focus::placeholder { - color: transparent; -} - -.pytorch-left-menu-search input[type=text] { - border-radius: 0; - padding: 0.5rem 0.75rem; - border-color: #ffffff; - color: #262626; - border-style: solid; - font-size: 1rem; - width: 100%; - background-color: #f3f4f7; - background-image: url("../images/search-icon.svg"); - background-repeat: no-repeat; - background-size: 18px 18px; - background-position: 12px 10px; - padding-left: 40px; - 
background-color: #ffffff; -} -.pytorch-left-menu-search input[type=text]:focus { - outline: 0; -} - -@media screen and (min-width: 1101px) { - .pytorch-left-menu .pytorch-side-scroll { - width: 120%; - } -} -@media screen and (min-width: 1600px) { - .pytorch-left-menu .pytorch-side-scroll { - width: 340px; - } -} - -.pytorch-right-menu { - min-height: 100px; - overflow-x: hidden; - overflow-y: hidden; - left: 0; - z-index: 200; - padding-top: 0; - position: relative; -} -@media screen and (min-width: 1101px) { - .pytorch-right-menu { - width: 100%; - } - .pytorch-right-menu.scrolling-fixed { - position: fixed; - top: 45px; - left: 83.5%; - width: 14%; - } - .pytorch-right-menu.scrolling-absolute { - position: absolute; - left: 0; - } -} -@media screen and (min-width: 1600px) { - .pytorch-right-menu { - left: 0; - width: 380px; - } - .pytorch-right-menu.scrolling-fixed { - position: fixed; - top: 45px; - left: 1230px; - } - .pytorch-right-menu.scrolling-absolute { - position: absolute; - left: 0; - } -} - -.pytorch-left-menu ul, -.pytorch-right-menu ul { - list-style-type: none; - padding-left: 0; - margin-bottom: 2.5rem; -} -.pytorch-left-menu > ul, -.pytorch-right-menu > ul { - margin-bottom: 2.5rem; -} -.pytorch-left-menu a:link, -.pytorch-left-menu a:visited, -.pytorch-left-menu a:hover, -.pytorch-right-menu a:link, -.pytorch-right-menu a:visited, -.pytorch-right-menu a:hover { - color: #6c6c6d; - font-size: 0.875rem; - line-height: 1rem; - padding: 0; - text-decoration: none; -} -.pytorch-left-menu a:link.reference.internal, -.pytorch-left-menu a:visited.reference.internal, -.pytorch-left-menu a:hover.reference.internal, -.pytorch-right-menu a:link.reference.internal, -.pytorch-right-menu a:visited.reference.internal, -.pytorch-right-menu a:hover.reference.internal { - margin-bottom: 0.3125rem; - position: relative; -} -.pytorch-left-menu li code, -.pytorch-right-menu li code { - border: none; - background: inherit; - color: inherit; - padding-left: 0; - padding-right: 0; -} -.pytorch-left-menu li span.toctree-expand, -.pytorch-right-menu li span.toctree-expand { - display: block; - float: left; - margin-left: -1.2em; - font-size: 0.8em; - line-height: 1.6em; -} -.pytorch-left-menu li.on a, .pytorch-left-menu li.current > a, -.pytorch-right-menu li.on a, -.pytorch-right-menu li.current > a { - position: relative; - border: none; -} -.pytorch-left-menu li.on a span.toctree-expand, .pytorch-left-menu li.current > a span.toctree-expand, -.pytorch-right-menu li.on a span.toctree-expand, -.pytorch-right-menu li.current > a span.toctree-expand { - display: block; - font-size: 0.8em; - line-height: 1.6em; -} -.pytorch-left-menu li.toctree-l1.current > a, -.pytorch-right-menu li.toctree-l1.current > a { - color: #ee4c2c; -} -.pytorch-left-menu li.toctree-l1.current > a:before, -.pytorch-right-menu li.toctree-l1.current > a:before { - content: "\2022"; - display: inline-block; - position: absolute; - left: -15px; - top: -10%; - font-size: 1.375rem; - color: #ee4c2c; -} -@media screen and (min-width: 1101px) { - .pytorch-left-menu li.toctree-l1.current > a:before, - .pytorch-right-menu li.toctree-l1.current > a:before { - left: -20px; - } -} -.pytorch-left-menu li.toctree-l1.current li.toctree-l2 > ul, .pytorch-left-menu li.toctree-l2.current li.toctree-l3 > ul, -.pytorch-right-menu li.toctree-l1.current li.toctree-l2 > ul, -.pytorch-right-menu li.toctree-l2.current li.toctree-l3 > ul { - display: none; -} -.pytorch-left-menu li.toctree-l1.current li.toctree-l2.current > ul, .pytorch-left-menu 
li.toctree-l2.current li.toctree-l3.current > ul, -.pytorch-right-menu li.toctree-l1.current li.toctree-l2.current > ul, -.pytorch-right-menu li.toctree-l2.current li.toctree-l3.current > ul { - display: block; -} -.pytorch-left-menu li.toctree-l2.current li.toctree-l3 > a, -.pytorch-right-menu li.toctree-l2.current li.toctree-l3 > a { - display: block; -} -.pytorch-left-menu li.toctree-l3, -.pytorch-right-menu li.toctree-l3 { - font-size: 0.9em; -} -.pytorch-left-menu li.toctree-l3.current li.toctree-l4 > a, -.pytorch-right-menu li.toctree-l3.current li.toctree-l4 > a { - display: block; -} -.pytorch-left-menu li.toctree-l4, -.pytorch-right-menu li.toctree-l4 { - font-size: 0.9em; -} -.pytorch-left-menu li.current ul, -.pytorch-right-menu li.current ul { - display: block; -} -.pytorch-left-menu li ul, -.pytorch-right-menu li ul { - margin-bottom: 0; - display: none; -} -.pytorch-left-menu li ul li a, -.pytorch-right-menu li ul li a { - margin-bottom: 0; -} -.pytorch-left-menu a, -.pytorch-right-menu a { - display: inline-block; - position: relative; -} -.pytorch-left-menu a:hover, -.pytorch-right-menu a:hover { - cursor: pointer; -} -.pytorch-left-menu a:active, -.pytorch-right-menu a:active { - cursor: pointer; -} - -.pytorch-left-menu ul { - padding-left: 0; -} - -.pytorch-right-menu a:link, -.pytorch-right-menu a:visited, -.pytorch-right-menu a:hover { - color: #6c6c6d; -} -.pytorch-right-menu a:link span.pre, -.pytorch-right-menu a:visited span.pre, -.pytorch-right-menu a:hover span.pre { - color: #6c6c6d; -} -.pytorch-right-menu a.reference.internal.expanded:before { - content: "-"; - font-family: monospace; - position: absolute; - left: -12px; -} -.pytorch-right-menu a.reference.internal.not-expanded:before { - content: "+"; - font-family: monospace; - position: absolute; - left: -12px; -} -.pytorch-right-menu li.active > a { - color: #ee4c2c; -} -.pytorch-right-menu li.active > a span.pre, .pytorch-right-menu li.active > a:before { - color: #ee4c2c; -} -.pytorch-right-menu li.active > a:after { - content: "\2022"; - color: #e44c2c; - display: inline-block; - font-size: 1.375rem; - left: -17px; - position: absolute; - top: 1px; -} -.pytorch-right-menu .pytorch-side-scroll > ul > li > ul > li { - margin-bottom: 0; -} -.pytorch-right-menu ul ul { - padding-left: 0; -} -.pytorch-right-menu ul ul li { - padding-left: 0px; -} -.pytorch-right-menu ul ul li a.reference.internal { - padding-left: 0; -} -.pytorch-right-menu ul ul li ul { - display: none; - padding-left: 10px; -} -.pytorch-right-menu ul ul li li a.reference.internal { - padding-left: 0; -} -.pytorch-right-menu li ul { - display: block; -} - -.pytorch-right-menu .pytorch-side-scroll { - padding-top: 20px; -} -@media screen and (min-width: 1101px) { - .pytorch-right-menu .pytorch-side-scroll { - width: 120%; - } -} -@media screen and (min-width: 1600px) { - .pytorch-right-menu .pytorch-side-scroll { - width: 400px; - } -} -.pytorch-right-menu .pytorch-side-scroll > ul { - padding-left: 10%; - padding-right: 10%; - margin-bottom: 0; -} -@media screen and (min-width: 1600px) { - .pytorch-right-menu .pytorch-side-scroll > ul { - padding-left: 25px; - } -} -.pytorch-right-menu .pytorch-side-scroll > ul > li > a.reference.internal { - color: #262626; - font-weight: 500; -} -.pytorch-right-menu .pytorch-side-scroll ul li { - position: relative; -} - -#pytorch-right-menu .side-scroll-highlight { - color: #ee4c2c; -} - -.header-container { - max-width: none; - margin-top: 4px; -} -@media screen and (min-width: 1101px) { - 
.header-container { - margin-top: 0; - } -} -@media screen and (min-width: 1600px) { - .header-container { - margin-top: 0; - } -} - -.container-fluid.header-holder { - padding-right: 0; - padding-left: 0; -} - -.header-holder .container { - max-width: none; - padding-right: 1.875rem; - padding-left: 1.875rem; -} -@media screen and (min-width: 1101px) { - .header-holder .container { - padding-right: 1.875rem; - padding-left: 1.875rem; - } -} - -.header-holder .main-menu { - -webkit-box-pack: unset; - -ms-flex-pack: unset; - justify-content: unset; - position: relative; -} -@media screen and (min-width: 1101px) { - .header-holder .main-menu ul { - padding-left: 0; - margin-left: 26%; - } -} -@media screen and (min-width: 1600px) { - .header-holder .main-menu ul { - padding-left: 38px; - margin-left: 310px; - } -} - -.pytorch-page-level-bar { - display: none; - -webkit-box-align: center; - -ms-flex-align: center; - align-items: center; - background-color: #ffffff; - border-bottom: 1px solid #e2e2e2; - width: 100%; - z-index: 201; -} -@media screen and (min-width: 1101px) { - .pytorch-page-level-bar { - left: 0; - display: -webkit-box; - display: -ms-flexbox; - display: flex; - height: 45px; - padding-left: 0; - width: 100%; - position: absolute; - z-index: 1; - } - .pytorch-page-level-bar.left-menu-is-fixed { - position: fixed; - top: 0; - left: 25%; - padding-left: 0; - right: 0; - width: 75%; - } -} -@media screen and (min-width: 1600px) { - .pytorch-page-level-bar { - left: 0; - right: 0; - width: auto; - z-index: 1; - } - .pytorch-page-level-bar.left-menu-is-fixed { - left: 350px; - right: 0; - width: auto; - } -} -.pytorch-page-level-bar ul, .pytorch-page-level-bar li { - margin: 0; -} - -.pytorch-shortcuts-wrapper { - display: none; -} -@media screen and (min-width: 1101px) { - .pytorch-shortcuts-wrapper { - font-size: 0.875rem; - float: left; - margin-left: 2%; - } -} -@media screen and (min-width: 1600px) { - .pytorch-shortcuts-wrapper { - margin-left: 1.875rem; - } -} - -.cookie-banner-wrapper { - display: none; -} -.cookie-banner-wrapper .container { - padding-left: 1.875rem; - padding-right: 1.875rem; - max-width: 1240px; -} -.cookie-banner-wrapper.is-visible { - display: block; - position: fixed; - bottom: 0; - background-color: #f3f4f7; - min-height: 100px; - width: 100%; - z-index: 401; - border-top: 3px solid #ededee; -} -.cookie-banner-wrapper .gdpr-notice { - color: #6c6c6d; - margin-top: 1.5625rem; - text-align: left; - max-width: 1440px; -} -@media screen and (min-width: 768px) { - .cookie-banner-wrapper .gdpr-notice { - width: 77%; - } -} -@media (min-width: 768px) and (max-width: 1239px) { - .cookie-banner-wrapper .gdpr-notice { - width: inherit; - } -} -.cookie-banner-wrapper .gdpr-notice .cookie-policy-link { - color: #343434; -} -.cookie-banner-wrapper .close-button { - -webkit-appearance: none; - -moz-appearance: none; - appearance: none; - background: transparent; - border: 1px solid #f3f4f7; - height: 1.3125rem; - position: absolute; - bottom: 42px; - right: 0; - top: 0; - cursor: pointer; - outline: none; -} -@media screen and (min-width: 768px) { - .cookie-banner-wrapper .close-button { - right: 20%; - top: inherit; - } -} -@media (min-width: 768px) and (max-width: 1239px) { - .cookie-banner-wrapper .close-button { - right: 0; - top: 0; - } -} - -.main-menu ul li .resources-dropdown a { - cursor: pointer; -} -.main-menu ul li .dropdown-menu { - border-radius: 0; - padding: 0; -} -.main-menu ul li .dropdown-menu .dropdown-item { - color: #6c6c6d; - border-bottom: 
1px solid #e2e2e2; -} -.main-menu ul li .dropdown-menu .dropdown-item:last-of-type { - border-bottom-color: transparent; -} -.main-menu ul li .dropdown-menu .dropdown-item:hover { - background-color: #e44c2c; -} -.main-menu ul li .dropdown-menu .dropdown-item p { - font-size: 1rem; - color: #979797; -} -.main-menu ul li .dropdown-menu a.dropdown-item:hover { - color: #ffffff; -} -.main-menu ul li .dropdown-menu a.dropdown-item:hover p { - color: #ffffff; -} - -.resources-dropdown-menu { - left: -75px; - width: 226px; - display: none; - position: absolute; - z-index: 1000; - display: none; - float: left; - min-width: 10rem; - padding: 0.5rem 0; - font-size: 1rem; - color: #212529; - text-align: left; - list-style: none; - background-color: #ffffff; - background-clip: padding-box; - border: 1px solid rgba(0, 0, 0, 0.15); - border-radius: 0.25rem; -} - -.resources-dropdown:hover .resources-dropdown-menu { - display: block; -} - -.main-menu ul li .resources-dropdown-menu { - border-radius: 0; - padding: 0; -} -.main-menu ul li.active:hover .resources-dropdown-menu { - display: block; -} - -.main-menu ul li .resources-dropdown-menu .dropdown-item { - color: #6c6c6d; - border-bottom: 1px solid #e2e2e2; -} - -.resources-dropdown .with-down-orange-arrow { - padding-right: 2rem; - position: relative; - background: url("../images/chevron-down-orange.svg"); - background-size: 14px 18px; - background-position: top 7px right 10px; - background-repeat: no-repeat; -} - -.with-down-arrow { - padding-right: 2rem; - position: relative; - background-image: url("../images/chevron-down-black.svg"); - background-size: 14px 18px; - background-position: top 7px right 10px; - background-repeat: no-repeat; -} -.with-down-arrow:hover { - background-image: url("../images/chevron-down-orange.svg"); - background-repeat: no-repeat; -} - -.header-holder .main-menu ul li .resources-dropdown .doc-dropdown-option { - padding-top: 1rem; -} - -.header-holder .main-menu ul li a.nav-dropdown-item { - display: block; - font-size: 1rem; - line-height: 1.3125rem; - width: 100%; - padding: 0.25rem 1.5rem; - clear: both; - font-weight: 400; - color: #979797; - text-align: center; - background-color: transparent; - border-bottom: 1px solid #e2e2e2; -} -.header-holder .main-menu ul li a.nav-dropdown-item:last-of-type { - border-bottom-color: transparent; -} -.header-holder .main-menu ul li a.nav-dropdown-item:hover { - background-color: #e44c2c; - color: white; -} -.header-holder .main-menu ul li a.nav-dropdown-item .dropdown-title { - font-size: 1.125rem; - color: #6c6c6d; - letter-spacing: 0; - line-height: 34px; -} - -.header-holder .main-menu ul li a.nav-dropdown-item:hover .dropdown-title { - background-color: #e44c2c; - color: white; -} - -/*# sourceMappingURL=theme.css.map */ diff --git a/0.11./_static/doctools.js b/0.11./_static/doctools.js deleted file mode 100644 index 61ac9d266f9..00000000000 --- a/0.11./_static/doctools.js +++ /dev/null @@ -1,321 +0,0 @@ -/* - * doctools.js - * ~~~~~~~~~~~ - * - * Sphinx JavaScript utilities for all documentation. - * - * :copyright: Copyright 2007-2021 by the Sphinx team, see AUTHORS. - * :license: BSD, see LICENSE for details. 
- * - */ - -/** - * select a different prefix for underscore - */ -$u = _.noConflict(); - -/** - * make the code below compatible with browsers without - * an installed firebug like debugger -if (!window.console || !console.firebug) { - var names = ["log", "debug", "info", "warn", "error", "assert", "dir", - "dirxml", "group", "groupEnd", "time", "timeEnd", "count", "trace", - "profile", "profileEnd"]; - window.console = {}; - for (var i = 0; i < names.length; ++i) - window.console[names[i]] = function() {}; -} - */ - -/** - * small helper function to urldecode strings - * - * See https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/decodeURIComponent#Decoding_query_parameters_from_a_URL - */ -jQuery.urldecode = function(x) { - if (!x) { - return x - } - return decodeURIComponent(x.replace(/\+/g, ' ')); -}; - -/** - * small helper function to urlencode strings - */ -jQuery.urlencode = encodeURIComponent; - -/** - * This function returns the parsed url parameters of the - * current request. Multiple values per key are supported, - * it will always return arrays of strings for the value parts. - */ -jQuery.getQueryParameters = function(s) { - if (typeof s === 'undefined') - s = document.location.search; - var parts = s.substr(s.indexOf('?') + 1).split('&'); - var result = {}; - for (var i = 0; i < parts.length; i++) { - var tmp = parts[i].split('=', 2); - var key = jQuery.urldecode(tmp[0]); - var value = jQuery.urldecode(tmp[1]); - if (key in result) - result[key].push(value); - else - result[key] = [value]; - } - return result; -}; - -/** - * highlight a given string on a jquery object by wrapping it in - * span elements with the given class name. - */ -jQuery.fn.highlightText = function(text, className) { - function highlight(node, addItems) { - if (node.nodeType === 3) { - var val = node.nodeValue; - var pos = val.toLowerCase().indexOf(text); - if (pos >= 0 && - !jQuery(node.parentNode).hasClass(className) && - !jQuery(node.parentNode).hasClass("nohighlight")) { - var span; - var isInSVG = jQuery(node).closest("body, svg, foreignObject").is("svg"); - if (isInSVG) { - span = document.createElementNS("http://www.w3.org/2000/svg", "tspan"); - } else { - span = document.createElement("span"); - span.className = className; - } - span.appendChild(document.createTextNode(val.substr(pos, text.length))); - node.parentNode.insertBefore(span, node.parentNode.insertBefore( - document.createTextNode(val.substr(pos + text.length)), - node.nextSibling)); - node.nodeValue = val.substr(0, pos); - if (isInSVG) { - var rect = document.createElementNS("http://www.w3.org/2000/svg", "rect"); - var bbox = node.parentElement.getBBox(); - rect.x.baseVal.value = bbox.x; - rect.y.baseVal.value = bbox.y; - rect.width.baseVal.value = bbox.width; - rect.height.baseVal.value = bbox.height; - rect.setAttribute('class', className); - addItems.push({ - "parent": node.parentNode, - "target": rect}); - } - } - } - else if (!jQuery(node).is("button, select, textarea")) { - jQuery.each(node.childNodes, function() { - highlight(this, addItems); - }); - } - } - var addItems = []; - var result = this.each(function() { - highlight(this, addItems); - }); - for (var i = 0; i < addItems.length; ++i) { - jQuery(addItems[i].parent).before(addItems[i].target); - } - return result; -}; - -/* - * backward compatibility for jQuery.browser - * This will be supported until firefox bug is fixed. 
- */ -if (!jQuery.browser) { - jQuery.uaMatch = function(ua) { - ua = ua.toLowerCase(); - - var match = /(chrome)[ \/]([\w.]+)/.exec(ua) || - /(webkit)[ \/]([\w.]+)/.exec(ua) || - /(opera)(?:.*version|)[ \/]([\w.]+)/.exec(ua) || - /(msie) ([\w.]+)/.exec(ua) || - ua.indexOf("compatible") < 0 && /(mozilla)(?:.*? rv:([\w.]+)|)/.exec(ua) || - []; - - return { - browser: match[ 1 ] || "", - version: match[ 2 ] || "0" - }; - }; - jQuery.browser = {}; - jQuery.browser[jQuery.uaMatch(navigator.userAgent).browser] = true; -} - -/** - * Small JavaScript module for the documentation. - */ -var Documentation = { - - init : function() { - this.fixFirefoxAnchorBug(); - this.highlightSearchWords(); - this.initIndexTable(); - if (DOCUMENTATION_OPTIONS.NAVIGATION_WITH_KEYS) { - this.initOnKeyListeners(); - } - }, - - /** - * i18n support - */ - TRANSLATIONS : {}, - PLURAL_EXPR : function(n) { return n === 1 ? 0 : 1; }, - LOCALE : 'unknown', - - // gettext and ngettext don't access this so that the functions - // can safely bound to a different name (_ = Documentation.gettext) - gettext : function(string) { - var translated = Documentation.TRANSLATIONS[string]; - if (typeof translated === 'undefined') - return string; - return (typeof translated === 'string') ? translated : translated[0]; - }, - - ngettext : function(singular, plural, n) { - var translated = Documentation.TRANSLATIONS[singular]; - if (typeof translated === 'undefined') - return (n == 1) ? singular : plural; - return translated[Documentation.PLURALEXPR(n)]; - }, - - addTranslations : function(catalog) { - for (var key in catalog.messages) - this.TRANSLATIONS[key] = catalog.messages[key]; - this.PLURAL_EXPR = new Function('n', 'return +(' + catalog.plural_expr + ')'); - this.LOCALE = catalog.locale; - }, - - /** - * add context elements like header anchor links - */ - addContextElements : function() { - $('div[id] > :header:first').each(function() { - $('\u00B6'). - attr('href', '#' + this.id). - attr('title', _('Permalink to this headline')). - appendTo(this); - }); - $('dt[id]').each(function() { - $('\u00B6'). - attr('href', '#' + this.id). - attr('title', _('Permalink to this definition')). - appendTo(this); - }); - }, - - /** - * workaround a firefox stupidity - * see: https://bugzilla.mozilla.org/show_bug.cgi?id=645075 - */ - fixFirefoxAnchorBug : function() { - if (document.location.hash && $.browser.mozilla) - window.setTimeout(function() { - document.location.href += ''; - }, 10); - }, - - /** - * highlight the search words provided in the url in the text - */ - highlightSearchWords : function() { - var params = $.getQueryParameters(); - var terms = (params.highlight) ? 
params.highlight[0].split(/\s+/) : []; - if (terms.length) { - var body = $('div.body'); - if (!body.length) { - body = $('body'); - } - window.setTimeout(function() { - $.each(terms, function() { - body.highlightText(this.toLowerCase(), 'highlighted'); - }); - }, 10); - $('') - .appendTo($('#searchbox')); - } - }, - - /** - * init the domain index toggle buttons - */ - initIndexTable : function() { - var togglers = $('img.toggler').click(function() { - var src = $(this).attr('src'); - var idnum = $(this).attr('id').substr(7); - $('tr.cg-' + idnum).toggle(); - if (src.substr(-9) === 'minus.png') - $(this).attr('src', src.substr(0, src.length-9) + 'plus.png'); - else - $(this).attr('src', src.substr(0, src.length-8) + 'minus.png'); - }).css('display', ''); - if (DOCUMENTATION_OPTIONS.COLLAPSE_INDEX) { - togglers.click(); - } - }, - - /** - * helper function to hide the search marks again - */ - hideSearchWords : function() { - $('#searchbox .highlight-link').fadeOut(300); - $('span.highlighted').removeClass('highlighted'); - }, - - /** - * make the url absolute - */ - makeURL : function(relativeURL) { - return DOCUMENTATION_OPTIONS.URL_ROOT + '/' + relativeURL; - }, - - /** - * get the current relative url - */ - getCurrentURL : function() { - var path = document.location.pathname; - var parts = path.split(/\//); - $.each(DOCUMENTATION_OPTIONS.URL_ROOT.split(/\//), function() { - if (this === '..') - parts.pop(); - }); - var url = parts.join('/'); - return path.substring(url.lastIndexOf('/') + 1, path.length - 1); - }, - - initOnKeyListeners: function() { - $(document).keydown(function(event) { - var activeElementType = document.activeElement.tagName; - // don't navigate when in search box, textarea, dropdown or button - if (activeElementType !== 'TEXTAREA' && activeElementType !== 'INPUT' && activeElementType !== 'SELECT' - && activeElementType !== 'BUTTON' && !event.altKey && !event.ctrlKey && !event.metaKey - && !event.shiftKey) { - switch (event.keyCode) { - case 37: // left - var prevHref = $('link[rel="prev"]').prop('href'); - if (prevHref) { - window.location.href = prevHref; - return false; - } - case 39: // right - var nextHref = $('link[rel="next"]').prop('href'); - if (nextHref) { - window.location.href = nextHref; - return false; - } - } - } - }); - } -}; - -// quick alias for translations -_ = Documentation.gettext; - -$(document).ready(function() { - Documentation.init(); -}); diff --git a/0.11./_static/documentation_options.js b/0.11./_static/documentation_options.js deleted file mode 100644 index 14f07141c1d..00000000000 --- a/0.11./_static/documentation_options.js +++ /dev/null @@ -1,12 +0,0 @@ -var DOCUMENTATION_OPTIONS = { - URL_ROOT: document.getElementById("documentation_options").getAttribute('data-url_root'), - VERSION: 'main', - LANGUAGE: 'None', - COLLAPSE_INDEX: false, - BUILDER: 'html', - FILE_SUFFIX: '.html', - LINK_SUFFIX: '.html', - HAS_SOURCE: true, - SOURCELINK_SUFFIX: '.txt', - NAVIGATION_WITH_KEYS: true -}; \ No newline at end of file diff --git a/0.11./_static/file.png b/0.11./_static/file.png deleted file mode 100644 index a858a410e4f..00000000000 Binary files a/0.11./_static/file.png and /dev/null differ diff --git a/0.11./_static/fonts/FreightSans/freight-sans-bold-italic.woff b/0.11./_static/fonts/FreightSans/freight-sans-bold-italic.woff deleted file mode 100644 index e317248423c..00000000000 Binary files a/0.11./_static/fonts/FreightSans/freight-sans-bold-italic.woff and /dev/null differ diff --git 
a/0.11./_static/fonts/FreightSans/freight-sans-bold-italic.woff2 b/0.11./_static/fonts/FreightSans/freight-sans-bold-italic.woff2 deleted file mode 100644 index cec2dc94fbb..00000000000 Binary files a/0.11./_static/fonts/FreightSans/freight-sans-bold-italic.woff2 and /dev/null differ diff --git a/0.11./_static/fonts/FreightSans/freight-sans-bold.woff b/0.11./_static/fonts/FreightSans/freight-sans-bold.woff deleted file mode 100644 index de46625edfc..00000000000 Binary files a/0.11./_static/fonts/FreightSans/freight-sans-bold.woff and /dev/null differ diff --git a/0.11./_static/fonts/FreightSans/freight-sans-bold.woff2 b/0.11./_static/fonts/FreightSans/freight-sans-bold.woff2 deleted file mode 100644 index dc05cd82bc4..00000000000 Binary files a/0.11./_static/fonts/FreightSans/freight-sans-bold.woff2 and /dev/null differ diff --git a/0.11./_static/fonts/FreightSans/freight-sans-book-italic.woff b/0.11./_static/fonts/FreightSans/freight-sans-book-italic.woff deleted file mode 100644 index a50e5038a40..00000000000 Binary files a/0.11./_static/fonts/FreightSans/freight-sans-book-italic.woff and /dev/null differ diff --git a/0.11./_static/fonts/FreightSans/freight-sans-book-italic.woff2 b/0.11./_static/fonts/FreightSans/freight-sans-book-italic.woff2 deleted file mode 100644 index fe284db6614..00000000000 Binary files a/0.11./_static/fonts/FreightSans/freight-sans-book-italic.woff2 and /dev/null differ diff --git a/0.11./_static/fonts/FreightSans/freight-sans-book.woff b/0.11./_static/fonts/FreightSans/freight-sans-book.woff deleted file mode 100644 index 6ab8775f00b..00000000000 Binary files a/0.11./_static/fonts/FreightSans/freight-sans-book.woff and /dev/null differ diff --git a/0.11./_static/fonts/FreightSans/freight-sans-book.woff2 b/0.11./_static/fonts/FreightSans/freight-sans-book.woff2 deleted file mode 100644 index 2688739f1f0..00000000000 Binary files a/0.11./_static/fonts/FreightSans/freight-sans-book.woff2 and /dev/null differ diff --git a/0.11./_static/fonts/FreightSans/freight-sans-light-italic.woff b/0.11./_static/fonts/FreightSans/freight-sans-light-italic.woff deleted file mode 100644 index beda58d4e21..00000000000 Binary files a/0.11./_static/fonts/FreightSans/freight-sans-light-italic.woff and /dev/null differ diff --git a/0.11./_static/fonts/FreightSans/freight-sans-light-italic.woff2 b/0.11./_static/fonts/FreightSans/freight-sans-light-italic.woff2 deleted file mode 100644 index e2fa0134b1a..00000000000 Binary files a/0.11./_static/fonts/FreightSans/freight-sans-light-italic.woff2 and /dev/null differ diff --git a/0.11./_static/fonts/FreightSans/freight-sans-light.woff b/0.11./_static/fonts/FreightSans/freight-sans-light.woff deleted file mode 100644 index 226a0bf8358..00000000000 Binary files a/0.11./_static/fonts/FreightSans/freight-sans-light.woff and /dev/null differ diff --git a/0.11./_static/fonts/FreightSans/freight-sans-light.woff2 b/0.11./_static/fonts/FreightSans/freight-sans-light.woff2 deleted file mode 100644 index 6d8ff2c045b..00000000000 Binary files a/0.11./_static/fonts/FreightSans/freight-sans-light.woff2 and /dev/null differ diff --git a/0.11./_static/fonts/FreightSans/freight-sans-medium-italic.woff b/0.11./_static/fonts/FreightSans/freight-sans-medium-italic.woff deleted file mode 100644 index a42115d63b3..00000000000 Binary files a/0.11./_static/fonts/FreightSans/freight-sans-medium-italic.woff and /dev/null differ diff --git a/0.11./_static/fonts/FreightSans/freight-sans-medium-italic.woff2 
b/0.11./_static/fonts/FreightSans/freight-sans-medium-italic.woff2 deleted file mode 100644 index 16a7713a451..00000000000 Binary files a/0.11./_static/fonts/FreightSans/freight-sans-medium-italic.woff2 and /dev/null differ diff --git a/0.11./_static/fonts/FreightSans/freight-sans-medium.woff b/0.11./_static/fonts/FreightSans/freight-sans-medium.woff deleted file mode 100644 index 5ea34539c6f..00000000000 Binary files a/0.11./_static/fonts/FreightSans/freight-sans-medium.woff and /dev/null differ diff --git a/0.11./_static/fonts/FreightSans/freight-sans-medium.woff2 b/0.11./_static/fonts/FreightSans/freight-sans-medium.woff2 deleted file mode 100644 index c58b6a528bb..00000000000 Binary files a/0.11./_static/fonts/FreightSans/freight-sans-medium.woff2 and /dev/null differ diff --git a/0.11./_static/fonts/IBMPlexMono/IBMPlexMono-Light.woff b/0.11./_static/fonts/IBMPlexMono/IBMPlexMono-Light.woff deleted file mode 100644 index cf37a5c50bd..00000000000 Binary files a/0.11./_static/fonts/IBMPlexMono/IBMPlexMono-Light.woff and /dev/null differ diff --git a/0.11./_static/fonts/IBMPlexMono/IBMPlexMono-Light.woff2 b/0.11./_static/fonts/IBMPlexMono/IBMPlexMono-Light.woff2 deleted file mode 100644 index 955a6eab5bb..00000000000 Binary files a/0.11./_static/fonts/IBMPlexMono/IBMPlexMono-Light.woff2 and /dev/null differ diff --git a/0.11./_static/fonts/IBMPlexMono/IBMPlexMono-Medium.woff b/0.11./_static/fonts/IBMPlexMono/IBMPlexMono-Medium.woff deleted file mode 100644 index fc65a679c22..00000000000 Binary files a/0.11./_static/fonts/IBMPlexMono/IBMPlexMono-Medium.woff and /dev/null differ diff --git a/0.11./_static/fonts/IBMPlexMono/IBMPlexMono-Medium.woff2 b/0.11./_static/fonts/IBMPlexMono/IBMPlexMono-Medium.woff2 deleted file mode 100644 index c352e40e34a..00000000000 Binary files a/0.11./_static/fonts/IBMPlexMono/IBMPlexMono-Medium.woff2 and /dev/null differ diff --git a/0.11./_static/fonts/IBMPlexMono/IBMPlexMono-Regular.woff b/0.11./_static/fonts/IBMPlexMono/IBMPlexMono-Regular.woff deleted file mode 100644 index 7d63d89f24b..00000000000 Binary files a/0.11./_static/fonts/IBMPlexMono/IBMPlexMono-Regular.woff and /dev/null differ diff --git a/0.11./_static/fonts/IBMPlexMono/IBMPlexMono-Regular.woff2 b/0.11./_static/fonts/IBMPlexMono/IBMPlexMono-Regular.woff2 deleted file mode 100644 index d0d7ded9079..00000000000 Binary files a/0.11./_static/fonts/IBMPlexMono/IBMPlexMono-Regular.woff2 and /dev/null differ diff --git a/0.11./_static/fonts/IBMPlexMono/IBMPlexMono-SemiBold.woff b/0.11./_static/fonts/IBMPlexMono/IBMPlexMono-SemiBold.woff deleted file mode 100644 index 1da7753cf28..00000000000 Binary files a/0.11./_static/fonts/IBMPlexMono/IBMPlexMono-SemiBold.woff and /dev/null differ diff --git a/0.11./_static/fonts/IBMPlexMono/IBMPlexMono-SemiBold.woff2 b/0.11./_static/fonts/IBMPlexMono/IBMPlexMono-SemiBold.woff2 deleted file mode 100644 index 79dffdb85f7..00000000000 Binary files a/0.11./_static/fonts/IBMPlexMono/IBMPlexMono-SemiBold.woff2 and /dev/null differ diff --git a/0.11./_static/images/arrow-down-orange.svg b/0.11./_static/images/arrow-down-orange.svg deleted file mode 100644 index e9d8e9ecf24..00000000000 --- a/0.11./_static/images/arrow-down-orange.svg +++ /dev/null @@ -1,19 +0,0 @@ - - - - Group 5 - Created with Sketch. 
diff --git a/0.11./_static/images/arrow-right-with-tail.svg b/0.11./_static/images/arrow-right-with-tail.svg deleted file mode 100644 index 5843588fca6..00000000000 --- a/0.11./_static/images/arrow-right-with-tail.svg +++ /dev/null @@ -1,19 +0,0 @@
diff --git a/0.11./_static/images/chevron-down-black.svg b/0.11./_static/images/chevron-down-black.svg deleted file mode 100644 index 097bc076ecf..00000000000 --- a/0.11./_static/images/chevron-down-black.svg +++ /dev/null @@ -1,16 +0,0 @@
diff --git a/0.11./_static/images/chevron-down-grey.svg b/0.11./_static/images/chevron-down-grey.svg deleted file mode 100644 index 82d6514f250..00000000000 --- a/0.11./_static/images/chevron-down-grey.svg +++ /dev/null @@ -1,18 +0,0 @@
diff --git a/0.11./_static/images/chevron-down-orange.svg b/0.11./_static/images/chevron-down-orange.svg deleted file mode 100644 index fd79a57854c..00000000000 --- a/0.11./_static/images/chevron-down-orange.svg +++ /dev/null @@ -1,16 +0,0 @@
diff --git a/0.11./_static/images/chevron-down-white.svg b/0.11./_static/images/chevron-down-white.svg deleted file mode 100644 index e6c94e27b64..00000000000 --- a/0.11./_static/images/chevron-down-white.svg +++ /dev/null @@ -1,16 +0,0 @@
diff --git a/0.11./_static/images/chevron-right-orange.svg b/0.11./_static/images/chevron-right-orange.svg deleted file mode 100644 index 7033fc93bf4..00000000000 --- a/0.11./_static/images/chevron-right-orange.svg +++ /dev/null @@ -1,17 +0,0 @@
diff --git a/0.11./_static/images/chevron-right-white.svg b/0.11./_static/images/chevron-right-white.svg deleted file mode 100644 index dd9e77f2616..00000000000 --- a/0.11./_static/images/chevron-right-white.svg +++ /dev/null @@ -1,17 +0,0 @@
diff --git a/0.11./_static/images/home-footer-background.jpg b/0.11./_static/images/home-footer-background.jpg deleted file mode 100644 index b307bb57f48..00000000000 Binary files a/0.11./_static/images/home-footer-background.jpg and /dev/null differ
diff --git a/0.11./_static/images/icon-close.svg b/0.11./_static/images/icon-close.svg deleted file mode 100644 index 348964e79f7..00000000000 --- a/0.11./_static/images/icon-close.svg +++ /dev/null @@ -1,21 +0,0 @@
diff --git a/0.11./_static/images/icon-menu-dots-dark.svg b/0.11./_static/images/icon-menu-dots-dark.svg deleted file mode 100644 index fa2ad044b3f..00000000000 --- a/0.11./_static/images/icon-menu-dots-dark.svg +++ /dev/null @@ -1,42 +0,0 @@
diff --git a/0.11./_static/images/logo-dark.svg b/0.11./_static/images/logo-dark.svg deleted file mode 100644 index 9b4c1a56ac6..00000000000 --- a/0.11./_static/images/logo-dark.svg +++ /dev/null @@ -1,30 +0,0 @@
diff --git a/0.11./_static/images/logo-facebook-dark.svg b/0.11./_static/images/logo-facebook-dark.svg deleted file mode 100644 index cff17915c4f..00000000000 --- a/0.11./_static/images/logo-facebook-dark.svg +++ /dev/null @@ -1,8 +0,0 @@
diff --git a/0.11./_static/images/logo-icon.svg b/0.11./_static/images/logo-icon.svg deleted file mode 100644 index 575f6823e47..00000000000 --- a/0.11./_static/images/logo-icon.svg +++ /dev/null @@ -1,12 +0,0 @@
diff --git a/0.11./_static/images/logo-twitter-dark.svg b/0.11./_static/images/logo-twitter-dark.svg deleted file mode 100644 index 1572570f88c..00000000000 --- a/0.11./_static/images/logo-twitter-dark.svg +++ /dev/null @@ -1,16 +0,0 @@
diff --git a/0.11./_static/images/logo-youtube-dark.svg b/0.11./_static/images/logo-youtube-dark.svg deleted file mode 100644 index e3cfedd79d1..00000000000 --- a/0.11./_static/images/logo-youtube-dark.svg +++ /dev/null @@ -1,21 +0,0 @@
diff --git a/0.11./_static/images/logo.svg b/0.11./_static/images/logo.svg deleted file mode 100644 index f8d44b98425..00000000000 --- a/0.11./_static/images/logo.svg +++ /dev/null @@ -1,31 +0,0 @@
diff --git a/0.11./_static/images/pytorch-colab.svg b/0.11./_static/images/pytorch-colab.svg deleted file mode 100644 index 2ab15e2f307..00000000000 --- a/0.11./_static/images/pytorch-colab.svg +++ /dev/null @@ -1,24 +0,0 @@
diff --git a/0.11./_static/images/pytorch-download.svg b/0.11./_static/images/pytorch-download.svg deleted file mode 100644 index cc37d638e92..00000000000 --- a/0.11./_static/images/pytorch-download.svg +++ /dev/null @@ -1,10 +0,0 @@
diff --git a/0.11./_static/images/pytorch-github.svg b/0.11./_static/images/pytorch-github.svg deleted file mode 100644 index 2c2570da1de..00000000000 --- a/0.11./_static/images/pytorch-github.svg +++ /dev/null @@ -1,15 +0,0 @@
diff --git a/0.11./_static/images/pytorch-x.svg b/0.11./_static/images/pytorch-x.svg deleted file mode 100644 index 74856ea9fda..00000000000 --- a/0.11./_static/images/pytorch-x.svg +++ /dev/null @@ -1,10 +0,0 @@
diff --git a/0.11./_static/images/search-icon.svg b/0.11./_static/images/search-icon.svg deleted file mode 100644 index ebb0df86773..00000000000 --- a/0.11./_static/images/search-icon.svg +++ /dev/null @@ -1,19 +0,0 @@
- - - - - - - - - - - - - - - diff --git a/0.11./_static/images/view-page-source-icon.svg b/0.11./_static/images/view-page-source-icon.svg deleted file mode 100644 index 6f5bbe0748f..00000000000 --- a/0.11./_static/images/view-page-source-icon.svg +++ /dev/null @@ -1,13 +0,0 @@ - - - - - - - - - - diff --git a/0.11./_static/img/pytorch-logo-dark.png b/0.11./_static/img/pytorch-logo-dark.png deleted file mode 100644 index 0288a564e22..00000000000 Binary files a/0.11./_static/img/pytorch-logo-dark.png and /dev/null differ diff --git a/0.11./_static/img/pytorch-logo-dark.svg b/0.11./_static/img/pytorch-logo-dark.svg deleted file mode 100644 index 717a3ce942f..00000000000 --- a/0.11./_static/img/pytorch-logo-dark.svg +++ /dev/null @@ -1,24 +0,0 @@ - - - - - - - - - - - - - diff --git a/0.11./_static/img/pytorch-logo-flame.png b/0.11./_static/img/pytorch-logo-flame.png deleted file mode 100644 index 370633f2ec2..00000000000 Binary files a/0.11./_static/img/pytorch-logo-flame.png and /dev/null differ diff --git a/0.11./_static/img/pytorch-logo-flame.svg b/0.11./_static/img/pytorch-logo-flame.svg deleted file mode 100644 index 5f2fb76be77..00000000000 --- a/0.11./_static/img/pytorch-logo-flame.svg +++ /dev/null @@ -1,33 +0,0 @@ - -image/svg+xml diff --git a/0.11./_static/jquery-3.5.1.js b/0.11./_static/jquery-3.5.1.js deleted file mode 100644 index 50937333b99..00000000000 --- a/0.11./_static/jquery-3.5.1.js +++ /dev/null @@ -1,10872 +0,0 @@ -/*! - * jQuery JavaScript Library v3.5.1 - * https://jquery.com/ - * - * Includes Sizzle.js - * https://sizzlejs.com/ - * - * Copyright JS Foundation and other contributors - * Released under the MIT license - * https://jquery.org/license - * - * Date: 2020-05-04T22:49Z - */ -( function( global, factory ) { - - "use strict"; - - if ( typeof module === "object" && typeof module.exports === "object" ) { - - // For CommonJS and CommonJS-like environments where a proper `window` - // is present, execute the factory and get jQuery. - // For environments that do not have a `window` with a `document` - // (such as Node.js), expose a factory as module.exports. - // This accentuates the need for the creation of a real `window`. - // e.g. var jQuery = require("jquery")(window); - // See ticket #14549 for more info. - module.exports = global.document ? - factory( global, true ) : - function( w ) { - if ( !w.document ) { - throw new Error( "jQuery requires a window with a document" ); - } - return factory( w ); - }; - } else { - factory( global ); - } - -// Pass this if window is not defined yet -} )( typeof window !== "undefined" ? window : this, function( window, noGlobal ) { - -// Edge <= 12 - 13+, Firefox <=18 - 45+, IE 10 - 11, Safari 5.1 - 9+, iOS 6 - 9.1 -// throw exceptions when non-strict code (e.g., ASP.NET 4.5) accesses strict mode -// arguments.callee.caller (trac-13335). But as of jQuery 3.0 (2016), strict mode should be common -// enough that all such attempts are guarded in a try block. -"use strict"; - -var arr = []; - -var getProto = Object.getPrototypeOf; - -var slice = arr.slice; - -var flat = arr.flat ? 
function( array ) { - return arr.flat.call( array ); -} : function( array ) { - return arr.concat.apply( [], array ); -}; - - -var push = arr.push; - -var indexOf = arr.indexOf; - -var class2type = {}; - -var toString = class2type.toString; - -var hasOwn = class2type.hasOwnProperty; - -var fnToString = hasOwn.toString; - -var ObjectFunctionString = fnToString.call( Object ); - -var support = {}; - -var isFunction = function isFunction( obj ) { - - // Support: Chrome <=57, Firefox <=52 - // In some browsers, typeof returns "function" for HTML elements - // (i.e., `typeof document.createElement( "object" ) === "function"`). - // We don't want to classify *any* DOM node as a function. - return typeof obj === "function" && typeof obj.nodeType !== "number"; - }; - - -var isWindow = function isWindow( obj ) { - return obj != null && obj === obj.window; - }; - - -var document = window.document; - - - - var preservedScriptAttributes = { - type: true, - src: true, - nonce: true, - noModule: true - }; - - function DOMEval( code, node, doc ) { - doc = doc || document; - - var i, val, - script = doc.createElement( "script" ); - - script.text = code; - if ( node ) { - for ( i in preservedScriptAttributes ) { - - // Support: Firefox 64+, Edge 18+ - // Some browsers don't support the "nonce" property on scripts. - // On the other hand, just using `getAttribute` is not enough as - // the `nonce` attribute is reset to an empty string whenever it - // becomes browsing-context connected. - // See https://github.com/whatwg/html/issues/2369 - // See https://html.spec.whatwg.org/#nonce-attributes - // The `node.getAttribute` check was added for the sake of - // `jQuery.globalEval` so that it can fake a nonce-containing node - // via an object. - val = node[ i ] || node.getAttribute && node.getAttribute( i ); - if ( val ) { - script.setAttribute( i, val ); - } - } - } - doc.head.appendChild( script ).parentNode.removeChild( script ); - } - - -function toType( obj ) { - if ( obj == null ) { - return obj + ""; - } - - // Support: Android <=2.3 only (functionish RegExp) - return typeof obj === "object" || typeof obj === "function" ? - class2type[ toString.call( obj ) ] || "object" : - typeof obj; -} -/* global Symbol */ -// Defining this global in .eslintrc.json would create a danger of using the global -// unguarded in another place, it seems safer to define global only for this module - - - -var - version = "3.5.1", - - // Define a local copy of jQuery - jQuery = function( selector, context ) { - - // The jQuery object is actually just the init constructor 'enhanced' - // Need init if jQuery is called (just allow error to be thrown if not included) - return new jQuery.fn.init( selector, context ); - }; - -jQuery.fn = jQuery.prototype = { - - // The current version of jQuery being used - jquery: version, - - constructor: jQuery, - - // The default length of a jQuery object is 0 - length: 0, - - toArray: function() { - return slice.call( this ); - }, - - // Get the Nth element in the matched element set OR - // Get the whole matched element set as a clean array - get: function( num ) { - - // Return all the elements in a clean array - if ( num == null ) { - return slice.call( this ); - } - - // Return just the one element from the set - return num < 0 ? 
this[ num + this.length ] : this[ num ]; - }, - - // Take an array of elements and push it onto the stack - // (returning the new matched element set) - pushStack: function( elems ) { - - // Build a new jQuery matched element set - var ret = jQuery.merge( this.constructor(), elems ); - - // Add the old object onto the stack (as a reference) - ret.prevObject = this; - - // Return the newly-formed element set - return ret; - }, - - // Execute a callback for every element in the matched set. - each: function( callback ) { - return jQuery.each( this, callback ); - }, - - map: function( callback ) { - return this.pushStack( jQuery.map( this, function( elem, i ) { - return callback.call( elem, i, elem ); - } ) ); - }, - - slice: function() { - return this.pushStack( slice.apply( this, arguments ) ); - }, - - first: function() { - return this.eq( 0 ); - }, - - last: function() { - return this.eq( -1 ); - }, - - even: function() { - return this.pushStack( jQuery.grep( this, function( _elem, i ) { - return ( i + 1 ) % 2; - } ) ); - }, - - odd: function() { - return this.pushStack( jQuery.grep( this, function( _elem, i ) { - return i % 2; - } ) ); - }, - - eq: function( i ) { - var len = this.length, - j = +i + ( i < 0 ? len : 0 ); - return this.pushStack( j >= 0 && j < len ? [ this[ j ] ] : [] ); - }, - - end: function() { - return this.prevObject || this.constructor(); - }, - - // For internal use only. - // Behaves like an Array's method, not like a jQuery method. - push: push, - sort: arr.sort, - splice: arr.splice -}; - -jQuery.extend = jQuery.fn.extend = function() { - var options, name, src, copy, copyIsArray, clone, - target = arguments[ 0 ] || {}, - i = 1, - length = arguments.length, - deep = false; - - // Handle a deep copy situation - if ( typeof target === "boolean" ) { - deep = target; - - // Skip the boolean and the target - target = arguments[ i ] || {}; - i++; - } - - // Handle case when target is a string or something (possible in deep copy) - if ( typeof target !== "object" && !isFunction( target ) ) { - target = {}; - } - - // Extend jQuery itself if only one argument is passed - if ( i === length ) { - target = this; - i--; - } - - for ( ; i < length; i++ ) { - - // Only deal with non-null/undefined values - if ( ( options = arguments[ i ] ) != null ) { - - // Extend the base object - for ( name in options ) { - copy = options[ name ]; - - // Prevent Object.prototype pollution - // Prevent never-ending loop - if ( name === "__proto__" || target === copy ) { - continue; - } - - // Recurse if we're merging plain objects or arrays - if ( deep && copy && ( jQuery.isPlainObject( copy ) || - ( copyIsArray = Array.isArray( copy ) ) ) ) { - src = target[ name ]; - - // Ensure proper type for the source value - if ( copyIsArray && !Array.isArray( src ) ) { - clone = []; - } else if ( !copyIsArray && !jQuery.isPlainObject( src ) ) { - clone = {}; - } else { - clone = src; - } - copyIsArray = false; - - // Never move original objects, clone them - target[ name ] = jQuery.extend( deep, clone, copy ); - - // Don't bring in undefined values - } else if ( copy !== undefined ) { - target[ name ] = copy; - } - } - } - } - - // Return the modified object - return target; -}; - -jQuery.extend( { - - // Unique for each copy of jQuery on the page - expando: "jQuery" + ( version + Math.random() ).replace( /\D/g, "" ), - - // Assume jQuery is ready without the ready module - isReady: true, - - error: function( msg ) { - throw new Error( msg ); - }, - - noop: function() {}, - - isPlainObject: function( 
obj ) { - var proto, Ctor; - - // Detect obvious negatives - // Use toString instead of jQuery.type to catch host objects - if ( !obj || toString.call( obj ) !== "[object Object]" ) { - return false; - } - - proto = getProto( obj ); - - // Objects with no prototype (e.g., `Object.create( null )`) are plain - if ( !proto ) { - return true; - } - - // Objects with prototype are plain iff they were constructed by a global Object function - Ctor = hasOwn.call( proto, "constructor" ) && proto.constructor; - return typeof Ctor === "function" && fnToString.call( Ctor ) === ObjectFunctionString; - }, - - isEmptyObject: function( obj ) { - var name; - - for ( name in obj ) { - return false; - } - return true; - }, - - // Evaluates a script in a provided context; falls back to the global one - // if not specified. - globalEval: function( code, options, doc ) { - DOMEval( code, { nonce: options && options.nonce }, doc ); - }, - - each: function( obj, callback ) { - var length, i = 0; - - if ( isArrayLike( obj ) ) { - length = obj.length; - for ( ; i < length; i++ ) { - if ( callback.call( obj[ i ], i, obj[ i ] ) === false ) { - break; - } - } - } else { - for ( i in obj ) { - if ( callback.call( obj[ i ], i, obj[ i ] ) === false ) { - break; - } - } - } - - return obj; - }, - - // results is for internal usage only - makeArray: function( arr, results ) { - var ret = results || []; - - if ( arr != null ) { - if ( isArrayLike( Object( arr ) ) ) { - jQuery.merge( ret, - typeof arr === "string" ? - [ arr ] : arr - ); - } else { - push.call( ret, arr ); - } - } - - return ret; - }, - - inArray: function( elem, arr, i ) { - return arr == null ? -1 : indexOf.call( arr, elem, i ); - }, - - // Support: Android <=4.0 only, PhantomJS 1 only - // push.apply(_, arraylike) throws on ancient WebKit - merge: function( first, second ) { - var len = +second.length, - j = 0, - i = first.length; - - for ( ; j < len; j++ ) { - first[ i++ ] = second[ j ]; - } - - first.length = i; - - return first; - }, - - grep: function( elems, callback, invert ) { - var callbackInverse, - matches = [], - i = 0, - length = elems.length, - callbackExpect = !invert; - - // Go through the array, only saving the items - // that pass the validator function - for ( ; i < length; i++ ) { - callbackInverse = !callback( elems[ i ], i ); - if ( callbackInverse !== callbackExpect ) { - matches.push( elems[ i ] ); - } - } - - return matches; - }, - - // arg is for internal usage only - map: function( elems, callback, arg ) { - var length, value, - i = 0, - ret = []; - - // Go through the array, translating each of the items to their new values - if ( isArrayLike( elems ) ) { - length = elems.length; - for ( ; i < length; i++ ) { - value = callback( elems[ i ], i, arg ); - - if ( value != null ) { - ret.push( value ); - } - } - - // Go through every key on the object, - } else { - for ( i in elems ) { - value = callback( elems[ i ], i, arg ); - - if ( value != null ) { - ret.push( value ); - } - } - } - - // Flatten any nested arrays - return flat( ret ); - }, - - // A global GUID counter for objects - guid: 1, - - // jQuery.support is not used in Core but other projects attach their - // properties to it so it needs to exist. 
- support: support -} ); - -if ( typeof Symbol === "function" ) { - jQuery.fn[ Symbol.iterator ] = arr[ Symbol.iterator ]; -} - -// Populate the class2type map -jQuery.each( "Boolean Number String Function Array Date RegExp Object Error Symbol".split( " " ), -function( _i, name ) { - class2type[ "[object " + name + "]" ] = name.toLowerCase(); -} ); - -function isArrayLike( obj ) { - - // Support: real iOS 8.2 only (not reproducible in simulator) - // `in` check used to prevent JIT error (gh-2145) - // hasOwn isn't used here due to false negatives - // regarding Nodelist length in IE - var length = !!obj && "length" in obj && obj.length, - type = toType( obj ); - - if ( isFunction( obj ) || isWindow( obj ) ) { - return false; - } - - return type === "array" || length === 0 || - typeof length === "number" && length > 0 && ( length - 1 ) in obj; -} -var Sizzle = -/*! - * Sizzle CSS Selector Engine v2.3.5 - * https://sizzlejs.com/ - * - * Copyright JS Foundation and other contributors - * Released under the MIT license - * https://js.foundation/ - * - * Date: 2020-03-14 - */ -( function( window ) { -var i, - support, - Expr, - getText, - isXML, - tokenize, - compile, - select, - outermostContext, - sortInput, - hasDuplicate, - - // Local document vars - setDocument, - document, - docElem, - documentIsHTML, - rbuggyQSA, - rbuggyMatches, - matches, - contains, - - // Instance-specific data - expando = "sizzle" + 1 * new Date(), - preferredDoc = window.document, - dirruns = 0, - done = 0, - classCache = createCache(), - tokenCache = createCache(), - compilerCache = createCache(), - nonnativeSelectorCache = createCache(), - sortOrder = function( a, b ) { - if ( a === b ) { - hasDuplicate = true; - } - return 0; - }, - - // Instance methods - hasOwn = ( {} ).hasOwnProperty, - arr = [], - pop = arr.pop, - pushNative = arr.push, - push = arr.push, - slice = arr.slice, - - // Use a stripped-down indexOf as it's faster than native - // https://jsperf.com/thor-indexof-vs-for/5 - indexOf = function( list, elem ) { - var i = 0, - len = list.length; - for ( ; i < len; i++ ) { - if ( list[ i ] === elem ) { - return i; - } - } - return -1; - }, - - booleans = "checked|selected|async|autofocus|autoplay|controls|defer|disabled|hidden|" + - "ismap|loop|multiple|open|readonly|required|scoped", - - // Regular expressions - - // http://www.w3.org/TR/css3-selectors/#whitespace - whitespace = "[\\x20\\t\\r\\n\\f]", - - // https://www.w3.org/TR/css-syntax-3/#ident-token-diagram - identifier = "(?:\\\\[\\da-fA-F]{1,6}" + whitespace + - "?|\\\\[^\\r\\n\\f]|[\\w-]|[^\0-\\x7f])+", - - // Attribute selectors: http://www.w3.org/TR/selectors/#attribute-selectors - attributes = "\\[" + whitespace + "*(" + identifier + ")(?:" + whitespace + - - // Operator (capture 2) - "*([*^$|!~]?=)" + whitespace + - - // "Attribute values must be CSS identifiers [capture 5] - // or strings [capture 3 or capture 4]" - "*(?:'((?:\\\\.|[^\\\\'])*)'|\"((?:\\\\.|[^\\\\\"])*)\"|(" + identifier + "))|)" + - whitespace + "*\\]", - - pseudos = ":(" + identifier + ")(?:\\((" + - - // To reduce the number of selectors needing tokenize in the preFilter, prefer arguments: - // 1. quoted (capture 3; capture 4 or capture 5) - "('((?:\\\\.|[^\\\\'])*)'|\"((?:\\\\.|[^\\\\\"])*)\")|" + - - // 2. simple (capture 6) - "((?:\\\\.|[^\\\\()[\\]]|" + attributes + ")*)|" + - - // 3. 
anything else (capture 2) - ".*" + - ")\\)|)", - - // Leading and non-escaped trailing whitespace, capturing some non-whitespace characters preceding the latter - rwhitespace = new RegExp( whitespace + "+", "g" ), - rtrim = new RegExp( "^" + whitespace + "+|((?:^|[^\\\\])(?:\\\\.)*)" + - whitespace + "+$", "g" ), - - rcomma = new RegExp( "^" + whitespace + "*," + whitespace + "*" ), - rcombinators = new RegExp( "^" + whitespace + "*([>+~]|" + whitespace + ")" + whitespace + - "*" ), - rdescend = new RegExp( whitespace + "|>" ), - - rpseudo = new RegExp( pseudos ), - ridentifier = new RegExp( "^" + identifier + "$" ), - - matchExpr = { - "ID": new RegExp( "^#(" + identifier + ")" ), - "CLASS": new RegExp( "^\\.(" + identifier + ")" ), - "TAG": new RegExp( "^(" + identifier + "|[*])" ), - "ATTR": new RegExp( "^" + attributes ), - "PSEUDO": new RegExp( "^" + pseudos ), - "CHILD": new RegExp( "^:(only|first|last|nth|nth-last)-(child|of-type)(?:\\(" + - whitespace + "*(even|odd|(([+-]|)(\\d*)n|)" + whitespace + "*(?:([+-]|)" + - whitespace + "*(\\d+)|))" + whitespace + "*\\)|)", "i" ), - "bool": new RegExp( "^(?:" + booleans + ")$", "i" ), - - // For use in libraries implementing .is() - // We use this for POS matching in `select` - "needsContext": new RegExp( "^" + whitespace + - "*[>+~]|:(even|odd|eq|gt|lt|nth|first|last)(?:\\(" + whitespace + - "*((?:-\\d)?\\d*)" + whitespace + "*\\)|)(?=[^-]|$)", "i" ) - }, - - rhtml = /HTML$/i, - rinputs = /^(?:input|select|textarea|button)$/i, - rheader = /^h\d$/i, - - rnative = /^[^{]+\{\s*\[native \w/, - - // Easily-parseable/retrievable ID or TAG or CLASS selectors - rquickExpr = /^(?:#([\w-]+)|(\w+)|\.([\w-]+))$/, - - rsibling = /[+~]/, - - // CSS escapes - // http://www.w3.org/TR/CSS21/syndata.html#escaped-characters - runescape = new RegExp( "\\\\[\\da-fA-F]{1,6}" + whitespace + "?|\\\\([^\\r\\n\\f])", "g" ), - funescape = function( escape, nonHex ) { - var high = "0x" + escape.slice( 1 ) - 0x10000; - - return nonHex ? - - // Strip the backslash prefix from a non-hex escape sequence - nonHex : - - // Replace a hexadecimal escape sequence with the encoded Unicode code point - // Support: IE <=11+ - // For values outside the Basic Multilingual Plane (BMP), manually construct a - // surrogate pair - high < 0 ? 
- String.fromCharCode( high + 0x10000 ) : - String.fromCharCode( high >> 10 | 0xD800, high & 0x3FF | 0xDC00 ); - }, - - // CSS string/identifier serialization - // https://drafts.csswg.org/cssom/#common-serializing-idioms - rcssescape = /([\0-\x1f\x7f]|^-?\d)|^-$|[^\0-\x1f\x7f-\uFFFF\w-]/g, - fcssescape = function( ch, asCodePoint ) { - if ( asCodePoint ) { - - // U+0000 NULL becomes U+FFFD REPLACEMENT CHARACTER - if ( ch === "\0" ) { - return "\uFFFD"; - } - - // Control characters and (dependent upon position) numbers get escaped as code points - return ch.slice( 0, -1 ) + "\\" + - ch.charCodeAt( ch.length - 1 ).toString( 16 ) + " "; - } - - // Other potentially-special ASCII characters get backslash-escaped - return "\\" + ch; - }, - - // Used for iframes - // See setDocument() - // Removing the function wrapper causes a "Permission Denied" - // error in IE - unloadHandler = function() { - setDocument(); - }, - - inDisabledFieldset = addCombinator( - function( elem ) { - return elem.disabled === true && elem.nodeName.toLowerCase() === "fieldset"; - }, - { dir: "parentNode", next: "legend" } - ); - -// Optimize for push.apply( _, NodeList ) -try { - push.apply( - ( arr = slice.call( preferredDoc.childNodes ) ), - preferredDoc.childNodes - ); - - // Support: Android<4.0 - // Detect silently failing push.apply - // eslint-disable-next-line no-unused-expressions - arr[ preferredDoc.childNodes.length ].nodeType; -} catch ( e ) { - push = { apply: arr.length ? - - // Leverage slice if possible - function( target, els ) { - pushNative.apply( target, slice.call( els ) ); - } : - - // Support: IE<9 - // Otherwise append directly - function( target, els ) { - var j = target.length, - i = 0; - - // Can't trust NodeList.length - while ( ( target[ j++ ] = els[ i++ ] ) ) {} - target.length = j - 1; - } - }; -} - -function Sizzle( selector, context, results, seed ) { - var m, i, elem, nid, match, groups, newSelector, - newContext = context && context.ownerDocument, - - // nodeType defaults to 9, since context defaults to document - nodeType = context ? 
context.nodeType : 9; - - results = results || []; - - // Return early from calls with invalid selector or context - if ( typeof selector !== "string" || !selector || - nodeType !== 1 && nodeType !== 9 && nodeType !== 11 ) { - - return results; - } - - // Try to shortcut find operations (as opposed to filters) in HTML documents - if ( !seed ) { - setDocument( context ); - context = context || document; - - if ( documentIsHTML ) { - - // If the selector is sufficiently simple, try using a "get*By*" DOM method - // (excepting DocumentFragment context, where the methods don't exist) - if ( nodeType !== 11 && ( match = rquickExpr.exec( selector ) ) ) { - - // ID selector - if ( ( m = match[ 1 ] ) ) { - - // Document context - if ( nodeType === 9 ) { - if ( ( elem = context.getElementById( m ) ) ) { - - // Support: IE, Opera, Webkit - // TODO: identify versions - // getElementById can match elements by name instead of ID - if ( elem.id === m ) { - results.push( elem ); - return results; - } - } else { - return results; - } - - // Element context - } else { - - // Support: IE, Opera, Webkit - // TODO: identify versions - // getElementById can match elements by name instead of ID - if ( newContext && ( elem = newContext.getElementById( m ) ) && - contains( context, elem ) && - elem.id === m ) { - - results.push( elem ); - return results; - } - } - - // Type selector - } else if ( match[ 2 ] ) { - push.apply( results, context.getElementsByTagName( selector ) ); - return results; - - // Class selector - } else if ( ( m = match[ 3 ] ) && support.getElementsByClassName && - context.getElementsByClassName ) { - - push.apply( results, context.getElementsByClassName( m ) ); - return results; - } - } - - // Take advantage of querySelectorAll - if ( support.qsa && - !nonnativeSelectorCache[ selector + " " ] && - ( !rbuggyQSA || !rbuggyQSA.test( selector ) ) && - - // Support: IE 8 only - // Exclude object elements - ( nodeType !== 1 || context.nodeName.toLowerCase() !== "object" ) ) { - - newSelector = selector; - newContext = context; - - // qSA considers elements outside a scoping root when evaluating child or - // descendant combinators, which is not what we want. - // In such cases, we work around the behavior by prefixing every selector in the - // list with an ID selector referencing the scope context. - // The technique has to be used as well when a leading combinator is used - // as such selectors are not recognized by querySelectorAll. - // Thanks to Andrew Dupont for this technique. - if ( nodeType === 1 && - ( rdescend.test( selector ) || rcombinators.test( selector ) ) ) { - - // Expand context for sibling selectors - newContext = rsibling.test( selector ) && testContext( context.parentNode ) || - context; - - // We can use :scope instead of the ID hack if the browser - // supports it & if we're not changing the context. - if ( newContext !== context || !support.scope ) { - - // Capture the context ID, setting it first if necessary - if ( ( nid = context.getAttribute( "id" ) ) ) { - nid = nid.replace( rcssescape, fcssescape ); - } else { - context.setAttribute( "id", ( nid = expando ) ); - } - } - - // Prefix every selector in the list - groups = tokenize( selector ); - i = groups.length; - while ( i-- ) { - groups[ i ] = ( nid ? 
"#" + nid : ":scope" ) + " " + - toSelector( groups[ i ] ); - } - newSelector = groups.join( "," ); - } - - try { - push.apply( results, - newContext.querySelectorAll( newSelector ) - ); - return results; - } catch ( qsaError ) { - nonnativeSelectorCache( selector, true ); - } finally { - if ( nid === expando ) { - context.removeAttribute( "id" ); - } - } - } - } - } - - // All others - return select( selector.replace( rtrim, "$1" ), context, results, seed ); -} - -/** - * Create key-value caches of limited size - * @returns {function(string, object)} Returns the Object data after storing it on itself with - * property name the (space-suffixed) string and (if the cache is larger than Expr.cacheLength) - * deleting the oldest entry - */ -function createCache() { - var keys = []; - - function cache( key, value ) { - - // Use (key + " ") to avoid collision with native prototype properties (see Issue #157) - if ( keys.push( key + " " ) > Expr.cacheLength ) { - - // Only keep the most recent entries - delete cache[ keys.shift() ]; - } - return ( cache[ key + " " ] = value ); - } - return cache; -} - -/** - * Mark a function for special use by Sizzle - * @param {Function} fn The function to mark - */ -function markFunction( fn ) { - fn[ expando ] = true; - return fn; -} - -/** - * Support testing using an element - * @param {Function} fn Passed the created element and returns a boolean result - */ -function assert( fn ) { - var el = document.createElement( "fieldset" ); - - try { - return !!fn( el ); - } catch ( e ) { - return false; - } finally { - - // Remove from its parent by default - if ( el.parentNode ) { - el.parentNode.removeChild( el ); - } - - // release memory in IE - el = null; - } -} - -/** - * Adds the same handler for all of the specified attrs - * @param {String} attrs Pipe-separated list of attributes - * @param {Function} handler The method that will be applied - */ -function addHandle( attrs, handler ) { - var arr = attrs.split( "|" ), - i = arr.length; - - while ( i-- ) { - Expr.attrHandle[ arr[ i ] ] = handler; - } -} - -/** - * Checks document order of two siblings - * @param {Element} a - * @param {Element} b - * @returns {Number} Returns less than 0 if a precedes b, greater than 0 if a follows b - */ -function siblingCheck( a, b ) { - var cur = b && a, - diff = cur && a.nodeType === 1 && b.nodeType === 1 && - a.sourceIndex - b.sourceIndex; - - // Use IE sourceIndex if available on both nodes - if ( diff ) { - return diff; - } - - // Check if b follows a - if ( cur ) { - while ( ( cur = cur.nextSibling ) ) { - if ( cur === b ) { - return -1; - } - } - } - - return a ? 
1 : -1; -} - -/** - * Returns a function to use in pseudos for input types - * @param {String} type - */ -function createInputPseudo( type ) { - return function( elem ) { - var name = elem.nodeName.toLowerCase(); - return name === "input" && elem.type === type; - }; -} - -/** - * Returns a function to use in pseudos for buttons - * @param {String} type - */ -function createButtonPseudo( type ) { - return function( elem ) { - var name = elem.nodeName.toLowerCase(); - return ( name === "input" || name === "button" ) && elem.type === type; - }; -} - -/** - * Returns a function to use in pseudos for :enabled/:disabled - * @param {Boolean} disabled true for :disabled; false for :enabled - */ -function createDisabledPseudo( disabled ) { - - // Known :disabled false positives: fieldset[disabled] > legend:nth-of-type(n+2) :can-disable - return function( elem ) { - - // Only certain elements can match :enabled or :disabled - // https://html.spec.whatwg.org/multipage/scripting.html#selector-enabled - // https://html.spec.whatwg.org/multipage/scripting.html#selector-disabled - if ( "form" in elem ) { - - // Check for inherited disabledness on relevant non-disabled elements: - // * listed form-associated elements in a disabled fieldset - // https://html.spec.whatwg.org/multipage/forms.html#category-listed - // https://html.spec.whatwg.org/multipage/forms.html#concept-fe-disabled - // * option elements in a disabled optgroup - // https://html.spec.whatwg.org/multipage/forms.html#concept-option-disabled - // All such elements have a "form" property. - if ( elem.parentNode && elem.disabled === false ) { - - // Option elements defer to a parent optgroup if present - if ( "label" in elem ) { - if ( "label" in elem.parentNode ) { - return elem.parentNode.disabled === disabled; - } else { - return elem.disabled === disabled; - } - } - - // Support: IE 6 - 11 - // Use the isDisabled shortcut property to check for disabled fieldset ancestors - return elem.isDisabled === disabled || - - // Where there is no isDisabled, check manually - /* jshint -W018 */ - elem.isDisabled !== !disabled && - inDisabledFieldset( elem ) === disabled; - } - - return elem.disabled === disabled; - - // Try to winnow out elements that can't be disabled before trusting the disabled property. - // Some victims get caught in our net (label, legend, menu, track), but it shouldn't - // even exist on them, let alone have a boolean value. 
- } else if ( "label" in elem ) { - return elem.disabled === disabled; - } - - // Remaining elements are neither :enabled nor :disabled - return false; - }; -} - -/** - * Returns a function to use in pseudos for positionals - * @param {Function} fn - */ -function createPositionalPseudo( fn ) { - return markFunction( function( argument ) { - argument = +argument; - return markFunction( function( seed, matches ) { - var j, - matchIndexes = fn( [], seed.length, argument ), - i = matchIndexes.length; - - // Match elements found at the specified indexes - while ( i-- ) { - if ( seed[ ( j = matchIndexes[ i ] ) ] ) { - seed[ j ] = !( matches[ j ] = seed[ j ] ); - } - } - } ); - } ); -} - -/** - * Checks a node for validity as a Sizzle context - * @param {Element|Object=} context - * @returns {Element|Object|Boolean} The input node if acceptable, otherwise a falsy value - */ -function testContext( context ) { - return context && typeof context.getElementsByTagName !== "undefined" && context; -} - -// Expose support vars for convenience -support = Sizzle.support = {}; - -/** - * Detects XML nodes - * @param {Element|Object} elem An element or a document - * @returns {Boolean} True iff elem is a non-HTML XML node - */ -isXML = Sizzle.isXML = function( elem ) { - var namespace = elem.namespaceURI, - docElem = ( elem.ownerDocument || elem ).documentElement; - - // Support: IE <=8 - // Assume HTML when documentElement doesn't yet exist, such as inside loading iframes - // https://bugs.jquery.com/ticket/4833 - return !rhtml.test( namespace || docElem && docElem.nodeName || "HTML" ); -}; - -/** - * Sets document-related variables once based on the current document - * @param {Element|Object} [doc] An element or document object to use to set the document - * @returns {Object} Returns the current document - */ -setDocument = Sizzle.setDocument = function( node ) { - var hasCompare, subWindow, - doc = node ? node.ownerDocument || node : preferredDoc; - - // Return early if doc is invalid or already selected - // Support: IE 11+, Edge 17 - 18+ - // IE/Edge sometimes throw a "Permission denied" error when strict-comparing - // two documents; shallow comparisons work. - // eslint-disable-next-line eqeqeq - if ( doc == document || doc.nodeType !== 9 || !doc.documentElement ) { - return document; - } - - // Update global variables - document = doc; - docElem = document.documentElement; - documentIsHTML = !isXML( document ); - - // Support: IE 9 - 11+, Edge 12 - 18+ - // Accessing iframe documents after unload throws "permission denied" errors (jQuery #13936) - // Support: IE 11+, Edge 17 - 18+ - // IE/Edge sometimes throw a "Permission denied" error when strict-comparing - // two documents; shallow comparisons work. - // eslint-disable-next-line eqeqeq - if ( preferredDoc != document && - ( subWindow = document.defaultView ) && subWindow.top !== subWindow ) { - - // Support: IE 11, Edge - if ( subWindow.addEventListener ) { - subWindow.addEventListener( "unload", unloadHandler, false ); - - // Support: IE 9 - 10 only - } else if ( subWindow.attachEvent ) { - subWindow.attachEvent( "onunload", unloadHandler ); - } - } - - // Support: IE 8 - 11+, Edge 12 - 18+, Chrome <=16 - 25 only, Firefox <=3.6 - 31 only, - // Safari 4 - 5 only, Opera <=11.6 - 12.x only - // IE/Edge & older browsers don't support the :scope pseudo-class. - // Support: Safari 6.0 only - // Safari 6.0 supports :scope but it's an alias of :root there. 
- support.scope = assert( function( el ) { - docElem.appendChild( el ).appendChild( document.createElement( "div" ) ); - return typeof el.querySelectorAll !== "undefined" && - !el.querySelectorAll( ":scope fieldset div" ).length; - } ); - - /* Attributes - ---------------------------------------------------------------------- */ - - // Support: IE<8 - // Verify that getAttribute really returns attributes and not properties - // (excepting IE8 booleans) - support.attributes = assert( function( el ) { - el.className = "i"; - return !el.getAttribute( "className" ); - } ); - - /* getElement(s)By* - ---------------------------------------------------------------------- */ - - // Check if getElementsByTagName("*") returns only elements - support.getElementsByTagName = assert( function( el ) { - el.appendChild( document.createComment( "" ) ); - return !el.getElementsByTagName( "*" ).length; - } ); - - // Support: IE<9 - support.getElementsByClassName = rnative.test( document.getElementsByClassName ); - - // Support: IE<10 - // Check if getElementById returns elements by name - // The broken getElementById methods don't pick up programmatically-set names, - // so use a roundabout getElementsByName test - support.getById = assert( function( el ) { - docElem.appendChild( el ).id = expando; - return !document.getElementsByName || !document.getElementsByName( expando ).length; - } ); - - // ID filter and find - if ( support.getById ) { - Expr.filter[ "ID" ] = function( id ) { - var attrId = id.replace( runescape, funescape ); - return function( elem ) { - return elem.getAttribute( "id" ) === attrId; - }; - }; - Expr.find[ "ID" ] = function( id, context ) { - if ( typeof context.getElementById !== "undefined" && documentIsHTML ) { - var elem = context.getElementById( id ); - return elem ? [ elem ] : []; - } - }; - } else { - Expr.filter[ "ID" ] = function( id ) { - var attrId = id.replace( runescape, funescape ); - return function( elem ) { - var node = typeof elem.getAttributeNode !== "undefined" && - elem.getAttributeNode( "id" ); - return node && node.value === attrId; - }; - }; - - // Support: IE 6 - 7 only - // getElementById is not reliable as a find shortcut - Expr.find[ "ID" ] = function( id, context ) { - if ( typeof context.getElementById !== "undefined" && documentIsHTML ) { - var node, i, elems, - elem = context.getElementById( id ); - - if ( elem ) { - - // Verify the id attribute - node = elem.getAttributeNode( "id" ); - if ( node && node.value === id ) { - return [ elem ]; - } - - // Fall back on getElementsByName - elems = context.getElementsByName( id ); - i = 0; - while ( ( elem = elems[ i++ ] ) ) { - node = elem.getAttributeNode( "id" ); - if ( node && node.value === id ) { - return [ elem ]; - } - } - } - - return []; - } - }; - } - - // Tag - Expr.find[ "TAG" ] = support.getElementsByTagName ? 
- function( tag, context ) { - if ( typeof context.getElementsByTagName !== "undefined" ) { - return context.getElementsByTagName( tag ); - - // DocumentFragment nodes don't have gEBTN - } else if ( support.qsa ) { - return context.querySelectorAll( tag ); - } - } : - - function( tag, context ) { - var elem, - tmp = [], - i = 0, - - // By happy coincidence, a (broken) gEBTN appears on DocumentFragment nodes too - results = context.getElementsByTagName( tag ); - - // Filter out possible comments - if ( tag === "*" ) { - while ( ( elem = results[ i++ ] ) ) { - if ( elem.nodeType === 1 ) { - tmp.push( elem ); - } - } - - return tmp; - } - return results; - }; - - // Class - Expr.find[ "CLASS" ] = support.getElementsByClassName && function( className, context ) { - if ( typeof context.getElementsByClassName !== "undefined" && documentIsHTML ) { - return context.getElementsByClassName( className ); - } - }; - - /* QSA/matchesSelector - ---------------------------------------------------------------------- */ - - // QSA and matchesSelector support - - // matchesSelector(:active) reports false when true (IE9/Opera 11.5) - rbuggyMatches = []; - - // qSa(:focus) reports false when true (Chrome 21) - // We allow this because of a bug in IE8/9 that throws an error - // whenever `document.activeElement` is accessed on an iframe - // So, we allow :focus to pass through QSA all the time to avoid the IE error - // See https://bugs.jquery.com/ticket/13378 - rbuggyQSA = []; - - if ( ( support.qsa = rnative.test( document.querySelectorAll ) ) ) { - - // Build QSA regex - // Regex strategy adopted from Diego Perini - assert( function( el ) { - - var input; - - // Select is set to empty string on purpose - // This is to test IE's treatment of not explicitly - // setting a boolean content attribute, - // since its presence should be enough - // https://bugs.jquery.com/ticket/12359 - docElem.appendChild( el ).innerHTML = "" + - ""; - - // Support: IE8, Opera 11-12.16 - // Nothing should be selected when empty strings follow ^= or $= or *= - // The test attribute must be unknown in Opera but "safe" for WinRT - // https://msdn.microsoft.com/en-us/library/ie/hh465388.aspx#attribute_section - if ( el.querySelectorAll( "[msallowcapture^='']" ).length ) { - rbuggyQSA.push( "[*^$]=" + whitespace + "*(?:''|\"\")" ); - } - - // Support: IE8 - // Boolean attributes and "value" are not treated correctly - if ( !el.querySelectorAll( "[selected]" ).length ) { - rbuggyQSA.push( "\\[" + whitespace + "*(?:value|" + booleans + ")" ); - } - - // Support: Chrome<29, Android<4.4, Safari<7.0+, iOS<7.0+, PhantomJS<1.9.8+ - if ( !el.querySelectorAll( "[id~=" + expando + "-]" ).length ) { - rbuggyQSA.push( "~=" ); - } - - // Support: IE 11+, Edge 15 - 18+ - // IE 11/Edge don't find elements on a `[name='']` query in some cases. - // Adding a temporary attribute to the document before the selection works - // around the issue. - // Interestingly, IE 10 & older don't seem to have the issue. 
- input = document.createElement( "input" ); - input.setAttribute( "name", "" ); - el.appendChild( input ); - if ( !el.querySelectorAll( "[name='']" ).length ) { - rbuggyQSA.push( "\\[" + whitespace + "*name" + whitespace + "*=" + - whitespace + "*(?:''|\"\")" ); - } - - // Webkit/Opera - :checked should return selected option elements - // http://www.w3.org/TR/2011/REC-css3-selectors-20110929/#checked - // IE8 throws error here and will not see later tests - if ( !el.querySelectorAll( ":checked" ).length ) { - rbuggyQSA.push( ":checked" ); - } - - // Support: Safari 8+, iOS 8+ - // https://bugs.webkit.org/show_bug.cgi?id=136851 - // In-page `selector#id sibling-combinator selector` fails - if ( !el.querySelectorAll( "a#" + expando + "+*" ).length ) { - rbuggyQSA.push( ".#.+[+~]" ); - } - - // Support: Firefox <=3.6 - 5 only - // Old Firefox doesn't throw on a badly-escaped identifier. - el.querySelectorAll( "\\\f" ); - rbuggyQSA.push( "[\\r\\n\\f]" ); - } ); - - assert( function( el ) { - el.innerHTML = "" + - ""; - - // Support: Windows 8 Native Apps - // The type and name attributes are restricted during .innerHTML assignment - var input = document.createElement( "input" ); - input.setAttribute( "type", "hidden" ); - el.appendChild( input ).setAttribute( "name", "D" ); - - // Support: IE8 - // Enforce case-sensitivity of name attribute - if ( el.querySelectorAll( "[name=d]" ).length ) { - rbuggyQSA.push( "name" + whitespace + "*[*^$|!~]?=" ); - } - - // FF 3.5 - :enabled/:disabled and hidden elements (hidden elements are still enabled) - // IE8 throws error here and will not see later tests - if ( el.querySelectorAll( ":enabled" ).length !== 2 ) { - rbuggyQSA.push( ":enabled", ":disabled" ); - } - - // Support: IE9-11+ - // IE's :disabled selector does not pick up the children of disabled fieldsets - docElem.appendChild( el ).disabled = true; - if ( el.querySelectorAll( ":disabled" ).length !== 2 ) { - rbuggyQSA.push( ":enabled", ":disabled" ); - } - - // Support: Opera 10 - 11 only - // Opera 10-11 does not throw on post-comma invalid pseudos - el.querySelectorAll( "*,:x" ); - rbuggyQSA.push( ",.*:" ); - } ); - } - - if ( ( support.matchesSelector = rnative.test( ( matches = docElem.matches || - docElem.webkitMatchesSelector || - docElem.mozMatchesSelector || - docElem.oMatchesSelector || - docElem.msMatchesSelector ) ) ) ) { - - assert( function( el ) { - - // Check to see if it's possible to do matchesSelector - // on a disconnected node (IE 9) - support.disconnectedMatch = matches.call( el, "*" ); - - // This should fail with an exception - // Gecko does not error, returns false instead - matches.call( el, "[s!='']:x" ); - rbuggyMatches.push( "!=", pseudos ); - } ); - } - - rbuggyQSA = rbuggyQSA.length && new RegExp( rbuggyQSA.join( "|" ) ); - rbuggyMatches = rbuggyMatches.length && new RegExp( rbuggyMatches.join( "|" ) ); - - /* Contains - ---------------------------------------------------------------------- */ - hasCompare = rnative.test( docElem.compareDocumentPosition ); - - // Element contains another - // Purposefully self-exclusive - // As in, an element does not contain itself - contains = hasCompare || rnative.test( docElem.contains ) ? - function( a, b ) { - var adown = a.nodeType === 9 ? a.documentElement : a, - bup = b && b.parentNode; - return a === bup || !!( bup && bup.nodeType === 1 && ( - adown.contains ? 
- adown.contains( bup ) : - a.compareDocumentPosition && a.compareDocumentPosition( bup ) & 16 - ) ); - } : - function( a, b ) { - if ( b ) { - while ( ( b = b.parentNode ) ) { - if ( b === a ) { - return true; - } - } - } - return false; - }; - - /* Sorting - ---------------------------------------------------------------------- */ - - // Document order sorting - sortOrder = hasCompare ? - function( a, b ) { - - // Flag for duplicate removal - if ( a === b ) { - hasDuplicate = true; - return 0; - } - - // Sort on method existence if only one input has compareDocumentPosition - var compare = !a.compareDocumentPosition - !b.compareDocumentPosition; - if ( compare ) { - return compare; - } - - // Calculate position if both inputs belong to the same document - // Support: IE 11+, Edge 17 - 18+ - // IE/Edge sometimes throw a "Permission denied" error when strict-comparing - // two documents; shallow comparisons work. - // eslint-disable-next-line eqeqeq - compare = ( a.ownerDocument || a ) == ( b.ownerDocument || b ) ? - a.compareDocumentPosition( b ) : - - // Otherwise we know they are disconnected - 1; - - // Disconnected nodes - if ( compare & 1 || - ( !support.sortDetached && b.compareDocumentPosition( a ) === compare ) ) { - - // Choose the first element that is related to our preferred document - // Support: IE 11+, Edge 17 - 18+ - // IE/Edge sometimes throw a "Permission denied" error when strict-comparing - // two documents; shallow comparisons work. - // eslint-disable-next-line eqeqeq - if ( a == document || a.ownerDocument == preferredDoc && - contains( preferredDoc, a ) ) { - return -1; - } - - // Support: IE 11+, Edge 17 - 18+ - // IE/Edge sometimes throw a "Permission denied" error when strict-comparing - // two documents; shallow comparisons work. - // eslint-disable-next-line eqeqeq - if ( b == document || b.ownerDocument == preferredDoc && - contains( preferredDoc, b ) ) { - return 1; - } - - // Maintain original order - return sortInput ? - ( indexOf( sortInput, a ) - indexOf( sortInput, b ) ) : - 0; - } - - return compare & 4 ? -1 : 1; - } : - function( a, b ) { - - // Exit early if the nodes are identical - if ( a === b ) { - hasDuplicate = true; - return 0; - } - - var cur, - i = 0, - aup = a.parentNode, - bup = b.parentNode, - ap = [ a ], - bp = [ b ]; - - // Parentless nodes are either documents or disconnected - if ( !aup || !bup ) { - - // Support: IE 11+, Edge 17 - 18+ - // IE/Edge sometimes throw a "Permission denied" error when strict-comparing - // two documents; shallow comparisons work. - /* eslint-disable eqeqeq */ - return a == document ? -1 : - b == document ? 1 : - /* eslint-enable eqeqeq */ - aup ? -1 : - bup ? 1 : - sortInput ? - ( indexOf( sortInput, a ) - indexOf( sortInput, b ) ) : - 0; - - // If the nodes are siblings, we can do a quick check - } else if ( aup === bup ) { - return siblingCheck( a, b ); - } - - // Otherwise we need full lists of their ancestors for comparison - cur = a; - while ( ( cur = cur.parentNode ) ) { - ap.unshift( cur ); - } - cur = b; - while ( ( cur = cur.parentNode ) ) { - bp.unshift( cur ); - } - - // Walk down the tree looking for a discrepancy - while ( ap[ i ] === bp[ i ] ) { - i++; - } - - return i ? - - // Do a sibling check if the nodes have a common ancestor - siblingCheck( ap[ i ], bp[ i ] ) : - - // Otherwise nodes in our document sort first - // Support: IE 11+, Edge 17 - 18+ - // IE/Edge sometimes throw a "Permission denied" error when strict-comparing - // two documents; shallow comparisons work. 
- /* eslint-disable eqeqeq */ - ap[ i ] == preferredDoc ? -1 : - bp[ i ] == preferredDoc ? 1 : - /* eslint-enable eqeqeq */ - 0; - }; - - return document; -}; - -Sizzle.matches = function( expr, elements ) { - return Sizzle( expr, null, null, elements ); -}; - -Sizzle.matchesSelector = function( elem, expr ) { - setDocument( elem ); - - if ( support.matchesSelector && documentIsHTML && - !nonnativeSelectorCache[ expr + " " ] && - ( !rbuggyMatches || !rbuggyMatches.test( expr ) ) && - ( !rbuggyQSA || !rbuggyQSA.test( expr ) ) ) { - - try { - var ret = matches.call( elem, expr ); - - // IE 9's matchesSelector returns false on disconnected nodes - if ( ret || support.disconnectedMatch || - - // As well, disconnected nodes are said to be in a document - // fragment in IE 9 - elem.document && elem.document.nodeType !== 11 ) { - return ret; - } - } catch ( e ) { - nonnativeSelectorCache( expr, true ); - } - } - - return Sizzle( expr, document, null, [ elem ] ).length > 0; -}; - -Sizzle.contains = function( context, elem ) { - - // Set document vars if needed - // Support: IE 11+, Edge 17 - 18+ - // IE/Edge sometimes throw a "Permission denied" error when strict-comparing - // two documents; shallow comparisons work. - // eslint-disable-next-line eqeqeq - if ( ( context.ownerDocument || context ) != document ) { - setDocument( context ); - } - return contains( context, elem ); -}; - -Sizzle.attr = function( elem, name ) { - - // Set document vars if needed - // Support: IE 11+, Edge 17 - 18+ - // IE/Edge sometimes throw a "Permission denied" error when strict-comparing - // two documents; shallow comparisons work. - // eslint-disable-next-line eqeqeq - if ( ( elem.ownerDocument || elem ) != document ) { - setDocument( elem ); - } - - var fn = Expr.attrHandle[ name.toLowerCase() ], - - // Don't get fooled by Object.prototype properties (jQuery #13807) - val = fn && hasOwn.call( Expr.attrHandle, name.toLowerCase() ) ? - fn( elem, name, !documentIsHTML ) : - undefined; - - return val !== undefined ? - val : - support.attributes || !documentIsHTML ? - elem.getAttribute( name ) : - ( val = elem.getAttributeNode( name ) ) && val.specified ? 
- val.value : - null; -}; - -Sizzle.escape = function( sel ) { - return ( sel + "" ).replace( rcssescape, fcssescape ); -}; - -Sizzle.error = function( msg ) { - throw new Error( "Syntax error, unrecognized expression: " + msg ); -}; - -/** - * Document sorting and removing duplicates - * @param {ArrayLike} results - */ -Sizzle.uniqueSort = function( results ) { - var elem, - duplicates = [], - j = 0, - i = 0; - - // Unless we *know* we can detect duplicates, assume their presence - hasDuplicate = !support.detectDuplicates; - sortInput = !support.sortStable && results.slice( 0 ); - results.sort( sortOrder ); - - if ( hasDuplicate ) { - while ( ( elem = results[ i++ ] ) ) { - if ( elem === results[ i ] ) { - j = duplicates.push( i ); - } - } - while ( j-- ) { - results.splice( duplicates[ j ], 1 ); - } - } - - // Clear input after sorting to release objects - // See https://github.com/jquery/sizzle/pull/225 - sortInput = null; - - return results; -}; - -/** - * Utility function for retrieving the text value of an array of DOM nodes - * @param {Array|Element} elem - */ -getText = Sizzle.getText = function( elem ) { - var node, - ret = "", - i = 0, - nodeType = elem.nodeType; - - if ( !nodeType ) { - - // If no nodeType, this is expected to be an array - while ( ( node = elem[ i++ ] ) ) { - - // Do not traverse comment nodes - ret += getText( node ); - } - } else if ( nodeType === 1 || nodeType === 9 || nodeType === 11 ) { - - // Use textContent for elements - // innerText usage removed for consistency of new lines (jQuery #11153) - if ( typeof elem.textContent === "string" ) { - return elem.textContent; - } else { - - // Traverse its children - for ( elem = elem.firstChild; elem; elem = elem.nextSibling ) { - ret += getText( elem ); - } - } - } else if ( nodeType === 3 || nodeType === 4 ) { - return elem.nodeValue; - } - - // Do not include comment or processing instruction nodes - - return ret; -}; - -Expr = Sizzle.selectors = { - - // Can be adjusted by the user - cacheLength: 50, - - createPseudo: markFunction, - - match: matchExpr, - - attrHandle: {}, - - find: {}, - - relative: { - ">": { dir: "parentNode", first: true }, - " ": { dir: "parentNode" }, - "+": { dir: "previousSibling", first: true }, - "~": { dir: "previousSibling" } - }, - - preFilter: { - "ATTR": function( match ) { - match[ 1 ] = match[ 1 ].replace( runescape, funescape ); - - // Move the given value to match[3] whether quoted or unquoted - match[ 3 ] = ( match[ 3 ] || match[ 4 ] || - match[ 5 ] || "" ).replace( runescape, funescape ); - - if ( match[ 2 ] === "~=" ) { - match[ 3 ] = " " + match[ 3 ] + " "; - } - - return match.slice( 0, 4 ); - }, - - "CHILD": function( match ) { - - /* matches from matchExpr["CHILD"] - 1 type (only|nth|...) - 2 what (child|of-type) - 3 argument (even|odd|\d*|\d*n([+-]\d+)?|...) - 4 xn-component of xn+y argument ([+-]?\d*n|) - 5 sign of xn-component - 6 x of xn-component - 7 sign of y-component - 8 y of y-component - */ - match[ 1 ] = match[ 1 ].toLowerCase(); - - if ( match[ 1 ].slice( 0, 3 ) === "nth" ) { - - // nth-* requires argument - if ( !match[ 3 ] ) { - Sizzle.error( match[ 0 ] ); - } - - // numeric x and y parameters for Expr.filter.CHILD - // remember that false/true cast respectively to 0/1 - match[ 4 ] = +( match[ 4 ] ? 
- match[ 5 ] + ( match[ 6 ] || 1 ) : - 2 * ( match[ 3 ] === "even" || match[ 3 ] === "odd" ) ); - match[ 5 ] = +( ( match[ 7 ] + match[ 8 ] ) || match[ 3 ] === "odd" ); - - // other types prohibit arguments - } else if ( match[ 3 ] ) { - Sizzle.error( match[ 0 ] ); - } - - return match; - }, - - "PSEUDO": function( match ) { - var excess, - unquoted = !match[ 6 ] && match[ 2 ]; - - if ( matchExpr[ "CHILD" ].test( match[ 0 ] ) ) { - return null; - } - - // Accept quoted arguments as-is - if ( match[ 3 ] ) { - match[ 2 ] = match[ 4 ] || match[ 5 ] || ""; - - // Strip excess characters from unquoted arguments - } else if ( unquoted && rpseudo.test( unquoted ) && - - // Get excess from tokenize (recursively) - ( excess = tokenize( unquoted, true ) ) && - - // advance to the next closing parenthesis - ( excess = unquoted.indexOf( ")", unquoted.length - excess ) - unquoted.length ) ) { - - // excess is a negative index - match[ 0 ] = match[ 0 ].slice( 0, excess ); - match[ 2 ] = unquoted.slice( 0, excess ); - } - - // Return only captures needed by the pseudo filter method (type and argument) - return match.slice( 0, 3 ); - } - }, - - filter: { - - "TAG": function( nodeNameSelector ) { - var nodeName = nodeNameSelector.replace( runescape, funescape ).toLowerCase(); - return nodeNameSelector === "*" ? - function() { - return true; - } : - function( elem ) { - return elem.nodeName && elem.nodeName.toLowerCase() === nodeName; - }; - }, - - "CLASS": function( className ) { - var pattern = classCache[ className + " " ]; - - return pattern || - ( pattern = new RegExp( "(^|" + whitespace + - ")" + className + "(" + whitespace + "|$)" ) ) && classCache( - className, function( elem ) { - return pattern.test( - typeof elem.className === "string" && elem.className || - typeof elem.getAttribute !== "undefined" && - elem.getAttribute( "class" ) || - "" - ); - } ); - }, - - "ATTR": function( name, operator, check ) { - return function( elem ) { - var result = Sizzle.attr( elem, name ); - - if ( result == null ) { - return operator === "!="; - } - if ( !operator ) { - return true; - } - - result += ""; - - /* eslint-disable max-len */ - - return operator === "=" ? result === check : - operator === "!=" ? result !== check : - operator === "^=" ? check && result.indexOf( check ) === 0 : - operator === "*=" ? check && result.indexOf( check ) > -1 : - operator === "$=" ? check && result.slice( -check.length ) === check : - operator === "~=" ? ( " " + result.replace( rwhitespace, " " ) + " " ).indexOf( check ) > -1 : - operator === "|=" ? result === check || result.slice( 0, check.length + 1 ) === check + "-" : - false; - /* eslint-enable max-len */ - - }; - }, - - "CHILD": function( type, what, _argument, first, last ) { - var simple = type.slice( 0, 3 ) !== "nth", - forward = type.slice( -4 ) !== "last", - ofType = what === "of-type"; - - return first === 1 && last === 0 ? - - // Shortcut for :nth-*(n) - function( elem ) { - return !!elem.parentNode; - } : - - function( elem, _context, xml ) { - var cache, uniqueCache, outerCache, node, nodeIndex, start, - dir = simple !== forward ? "nextSibling" : "previousSibling", - parent = elem.parentNode, - name = ofType && elem.nodeName.toLowerCase(), - useCache = !xml && !ofType, - diff = false; - - if ( parent ) { - - // :(first|last|only)-(child|of-type) - if ( simple ) { - while ( dir ) { - node = elem; - while ( ( node = node[ dir ] ) ) { - if ( ofType ? 
- node.nodeName.toLowerCase() === name : - node.nodeType === 1 ) { - - return false; - } - } - - // Reverse direction for :only-* (if we haven't yet done so) - start = dir = type === "only" && !start && "nextSibling"; - } - return true; - } - - start = [ forward ? parent.firstChild : parent.lastChild ]; - - // non-xml :nth-child(...) stores cache data on `parent` - if ( forward && useCache ) { - - // Seek `elem` from a previously-cached index - - // ...in a gzip-friendly way - node = parent; - outerCache = node[ expando ] || ( node[ expando ] = {} ); - - // Support: IE <9 only - // Defend against cloned attroperties (jQuery gh-1709) - uniqueCache = outerCache[ node.uniqueID ] || - ( outerCache[ node.uniqueID ] = {} ); - - cache = uniqueCache[ type ] || []; - nodeIndex = cache[ 0 ] === dirruns && cache[ 1 ]; - diff = nodeIndex && cache[ 2 ]; - node = nodeIndex && parent.childNodes[ nodeIndex ]; - - while ( ( node = ++nodeIndex && node && node[ dir ] || - - // Fallback to seeking `elem` from the start - ( diff = nodeIndex = 0 ) || start.pop() ) ) { - - // When found, cache indexes on `parent` and break - if ( node.nodeType === 1 && ++diff && node === elem ) { - uniqueCache[ type ] = [ dirruns, nodeIndex, diff ]; - break; - } - } - - } else { - - // Use previously-cached element index if available - if ( useCache ) { - - // ...in a gzip-friendly way - node = elem; - outerCache = node[ expando ] || ( node[ expando ] = {} ); - - // Support: IE <9 only - // Defend against cloned attroperties (jQuery gh-1709) - uniqueCache = outerCache[ node.uniqueID ] || - ( outerCache[ node.uniqueID ] = {} ); - - cache = uniqueCache[ type ] || []; - nodeIndex = cache[ 0 ] === dirruns && cache[ 1 ]; - diff = nodeIndex; - } - - // xml :nth-child(...) - // or :nth-last-child(...) or :nth(-last)?-of-type(...) - if ( diff === false ) { - - // Use the same loop as above to seek `elem` from the start - while ( ( node = ++nodeIndex && node && node[ dir ] || - ( diff = nodeIndex = 0 ) || start.pop() ) ) { - - if ( ( ofType ? - node.nodeName.toLowerCase() === name : - node.nodeType === 1 ) && - ++diff ) { - - // Cache the index of each encountered element - if ( useCache ) { - outerCache = node[ expando ] || - ( node[ expando ] = {} ); - - // Support: IE <9 only - // Defend against cloned attroperties (jQuery gh-1709) - uniqueCache = outerCache[ node.uniqueID ] || - ( outerCache[ node.uniqueID ] = {} ); - - uniqueCache[ type ] = [ dirruns, diff ]; - } - - if ( node === elem ) { - break; - } - } - } - } - } - - // Incorporate the offset, then check against cycle size - diff -= last; - return diff === first || ( diff % first === 0 && diff / first >= 0 ); - } - }; - }, - - "PSEUDO": function( pseudo, argument ) { - - // pseudo-class names are case-insensitive - // http://www.w3.org/TR/selectors/#pseudo-classes - // Prioritize by case sensitivity in case custom pseudos are added with uppercase letters - // Remember that setFilters inherits from pseudos - var args, - fn = Expr.pseudos[ pseudo ] || Expr.setFilters[ pseudo.toLowerCase() ] || - Sizzle.error( "unsupported pseudo: " + pseudo ); - - // The user may use createPseudo to indicate that - // arguments are needed to create the filter function - // just as Sizzle does - if ( fn[ expando ] ) { - return fn( argument ); - } - - // But maintain support for old signatures - if ( fn.length > 1 ) { - args = [ pseudo, pseudo, "", argument ]; - return Expr.setFilters.hasOwnProperty( pseudo.toLowerCase() ) ? 
- markFunction( function( seed, matches ) { - var idx, - matched = fn( seed, argument ), - i = matched.length; - while ( i-- ) { - idx = indexOf( seed, matched[ i ] ); - seed[ idx ] = !( matches[ idx ] = matched[ i ] ); - } - } ) : - function( elem ) { - return fn( elem, 0, args ); - }; - } - - return fn; - } - }, - - pseudos: { - - // Potentially complex pseudos - "not": markFunction( function( selector ) { - - // Trim the selector passed to compile - // to avoid treating leading and trailing - // spaces as combinators - var input = [], - results = [], - matcher = compile( selector.replace( rtrim, "$1" ) ); - - return matcher[ expando ] ? - markFunction( function( seed, matches, _context, xml ) { - var elem, - unmatched = matcher( seed, null, xml, [] ), - i = seed.length; - - // Match elements unmatched by `matcher` - while ( i-- ) { - if ( ( elem = unmatched[ i ] ) ) { - seed[ i ] = !( matches[ i ] = elem ); - } - } - } ) : - function( elem, _context, xml ) { - input[ 0 ] = elem; - matcher( input, null, xml, results ); - - // Don't keep the element (issue #299) - input[ 0 ] = null; - return !results.pop(); - }; - } ), - - "has": markFunction( function( selector ) { - return function( elem ) { - return Sizzle( selector, elem ).length > 0; - }; - } ), - - "contains": markFunction( function( text ) { - text = text.replace( runescape, funescape ); - return function( elem ) { - return ( elem.textContent || getText( elem ) ).indexOf( text ) > -1; - }; - } ), - - // "Whether an element is represented by a :lang() selector - // is based solely on the element's language value - // being equal to the identifier C, - // or beginning with the identifier C immediately followed by "-". - // The matching of C against the element's language value is performed case-insensitively. - // The identifier C does not have to be a valid language name." - // http://www.w3.org/TR/selectors/#lang-pseudo - "lang": markFunction( function( lang ) { - - // lang value must be a valid identifier - if ( !ridentifier.test( lang || "" ) ) { - Sizzle.error( "unsupported lang: " + lang ); - } - lang = lang.replace( runescape, funescape ).toLowerCase(); - return function( elem ) { - var elemLang; - do { - if ( ( elemLang = documentIsHTML ? 
- elem.lang : - elem.getAttribute( "xml:lang" ) || elem.getAttribute( "lang" ) ) ) { - - elemLang = elemLang.toLowerCase(); - return elemLang === lang || elemLang.indexOf( lang + "-" ) === 0; - } - } while ( ( elem = elem.parentNode ) && elem.nodeType === 1 ); - return false; - }; - } ), - - // Miscellaneous - "target": function( elem ) { - var hash = window.location && window.location.hash; - return hash && hash.slice( 1 ) === elem.id; - }, - - "root": function( elem ) { - return elem === docElem; - }, - - "focus": function( elem ) { - return elem === document.activeElement && - ( !document.hasFocus || document.hasFocus() ) && - !!( elem.type || elem.href || ~elem.tabIndex ); - }, - - // Boolean properties - "enabled": createDisabledPseudo( false ), - "disabled": createDisabledPseudo( true ), - - "checked": function( elem ) { - - // In CSS3, :checked should return both checked and selected elements - // http://www.w3.org/TR/2011/REC-css3-selectors-20110929/#checked - var nodeName = elem.nodeName.toLowerCase(); - return ( nodeName === "input" && !!elem.checked ) || - ( nodeName === "option" && !!elem.selected ); - }, - - "selected": function( elem ) { - - // Accessing this property makes selected-by-default - // options in Safari work properly - if ( elem.parentNode ) { - // eslint-disable-next-line no-unused-expressions - elem.parentNode.selectedIndex; - } - - return elem.selected === true; - }, - - // Contents - "empty": function( elem ) { - - // http://www.w3.org/TR/selectors/#empty-pseudo - // :empty is negated by element (1) or content nodes (text: 3; cdata: 4; entity ref: 5), - // but not by others (comment: 8; processing instruction: 7; etc.) - // nodeType < 6 works because attributes (2) do not appear as children - for ( elem = elem.firstChild; elem; elem = elem.nextSibling ) { - if ( elem.nodeType < 6 ) { - return false; - } - } - return true; - }, - - "parent": function( elem ) { - return !Expr.pseudos[ "empty" ]( elem ); - }, - - // Element/input types - "header": function( elem ) { - return rheader.test( elem.nodeName ); - }, - - "input": function( elem ) { - return rinputs.test( elem.nodeName ); - }, - - "button": function( elem ) { - var name = elem.nodeName.toLowerCase(); - return name === "input" && elem.type === "button" || name === "button"; - }, - - "text": function( elem ) { - var attr; - return elem.nodeName.toLowerCase() === "input" && - elem.type === "text" && - - // Support: IE<8 - // New HTML5 attribute values (e.g., "search") appear with elem.type === "text" - ( ( attr = elem.getAttribute( "type" ) ) == null || - attr.toLowerCase() === "text" ); - }, - - // Position-in-collection - "first": createPositionalPseudo( function() { - return [ 0 ]; - } ), - - "last": createPositionalPseudo( function( _matchIndexes, length ) { - return [ length - 1 ]; - } ), - - "eq": createPositionalPseudo( function( _matchIndexes, length, argument ) { - return [ argument < 0 ? argument + length : argument ]; - } ), - - "even": createPositionalPseudo( function( matchIndexes, length ) { - var i = 0; - for ( ; i < length; i += 2 ) { - matchIndexes.push( i ); - } - return matchIndexes; - } ), - - "odd": createPositionalPseudo( function( matchIndexes, length ) { - var i = 1; - for ( ; i < length; i += 2 ) { - matchIndexes.push( i ); - } - return matchIndexes; - } ), - - "lt": createPositionalPseudo( function( matchIndexes, length, argument ) { - var i = argument < 0 ? - argument + length : - argument > length ? 
- length : - argument; - for ( ; --i >= 0; ) { - matchIndexes.push( i ); - } - return matchIndexes; - } ), - - "gt": createPositionalPseudo( function( matchIndexes, length, argument ) { - var i = argument < 0 ? argument + length : argument; - for ( ; ++i < length; ) { - matchIndexes.push( i ); - } - return matchIndexes; - } ) - } -}; - -Expr.pseudos[ "nth" ] = Expr.pseudos[ "eq" ]; - -// Add button/input type pseudos -for ( i in { radio: true, checkbox: true, file: true, password: true, image: true } ) { - Expr.pseudos[ i ] = createInputPseudo( i ); -} -for ( i in { submit: true, reset: true } ) { - Expr.pseudos[ i ] = createButtonPseudo( i ); -} - -// Easy API for creating new setFilters -function setFilters() {} -setFilters.prototype = Expr.filters = Expr.pseudos; -Expr.setFilters = new setFilters(); - -tokenize = Sizzle.tokenize = function( selector, parseOnly ) { - var matched, match, tokens, type, - soFar, groups, preFilters, - cached = tokenCache[ selector + " " ]; - - if ( cached ) { - return parseOnly ? 0 : cached.slice( 0 ); - } - - soFar = selector; - groups = []; - preFilters = Expr.preFilter; - - while ( soFar ) { - - // Comma and first run - if ( !matched || ( match = rcomma.exec( soFar ) ) ) { - if ( match ) { - - // Don't consume trailing commas as valid - soFar = soFar.slice( match[ 0 ].length ) || soFar; - } - groups.push( ( tokens = [] ) ); - } - - matched = false; - - // Combinators - if ( ( match = rcombinators.exec( soFar ) ) ) { - matched = match.shift(); - tokens.push( { - value: matched, - - // Cast descendant combinators to space - type: match[ 0 ].replace( rtrim, " " ) - } ); - soFar = soFar.slice( matched.length ); - } - - // Filters - for ( type in Expr.filter ) { - if ( ( match = matchExpr[ type ].exec( soFar ) ) && ( !preFilters[ type ] || - ( match = preFilters[ type ]( match ) ) ) ) { - matched = match.shift(); - tokens.push( { - value: matched, - type: type, - matches: match - } ); - soFar = soFar.slice( matched.length ); - } - } - - if ( !matched ) { - break; - } - } - - // Return the length of the invalid excess - // if we're just parsing - // Otherwise, throw an error or return tokens - return parseOnly ? - soFar.length : - soFar ? - Sizzle.error( selector ) : - - // Cache the tokens - tokenCache( selector, groups ).slice( 0 ); -}; - -function toSelector( tokens ) { - var i = 0, - len = tokens.length, - selector = ""; - for ( ; i < len; i++ ) { - selector += tokens[ i ].value; - } - return selector; -} - -function addCombinator( matcher, combinator, base ) { - var dir = combinator.dir, - skip = combinator.next, - key = skip || dir, - checkNonElements = base && key === "parentNode", - doneName = done++; - - return combinator.first ? 
- - // Check against closest ancestor/preceding element - function( elem, context, xml ) { - while ( ( elem = elem[ dir ] ) ) { - if ( elem.nodeType === 1 || checkNonElements ) { - return matcher( elem, context, xml ); - } - } - return false; - } : - - // Check against all ancestor/preceding elements - function( elem, context, xml ) { - var oldCache, uniqueCache, outerCache, - newCache = [ dirruns, doneName ]; - - // We can't set arbitrary data on XML nodes, so they don't benefit from combinator caching - if ( xml ) { - while ( ( elem = elem[ dir ] ) ) { - if ( elem.nodeType === 1 || checkNonElements ) { - if ( matcher( elem, context, xml ) ) { - return true; - } - } - } - } else { - while ( ( elem = elem[ dir ] ) ) { - if ( elem.nodeType === 1 || checkNonElements ) { - outerCache = elem[ expando ] || ( elem[ expando ] = {} ); - - // Support: IE <9 only - // Defend against cloned attroperties (jQuery gh-1709) - uniqueCache = outerCache[ elem.uniqueID ] || - ( outerCache[ elem.uniqueID ] = {} ); - - if ( skip && skip === elem.nodeName.toLowerCase() ) { - elem = elem[ dir ] || elem; - } else if ( ( oldCache = uniqueCache[ key ] ) && - oldCache[ 0 ] === dirruns && oldCache[ 1 ] === doneName ) { - - // Assign to newCache so results back-propagate to previous elements - return ( newCache[ 2 ] = oldCache[ 2 ] ); - } else { - - // Reuse newcache so results back-propagate to previous elements - uniqueCache[ key ] = newCache; - - // A match means we're done; a fail means we have to keep checking - if ( ( newCache[ 2 ] = matcher( elem, context, xml ) ) ) { - return true; - } - } - } - } - } - return false; - }; -} - -function elementMatcher( matchers ) { - return matchers.length > 1 ? - function( elem, context, xml ) { - var i = matchers.length; - while ( i-- ) { - if ( !matchers[ i ]( elem, context, xml ) ) { - return false; - } - } - return true; - } : - matchers[ 0 ]; -} - -function multipleContexts( selector, contexts, results ) { - var i = 0, - len = contexts.length; - for ( ; i < len; i++ ) { - Sizzle( selector, contexts[ i ], results ); - } - return results; -} - -function condense( unmatched, map, filter, context, xml ) { - var elem, - newUnmatched = [], - i = 0, - len = unmatched.length, - mapped = map != null; - - for ( ; i < len; i++ ) { - if ( ( elem = unmatched[ i ] ) ) { - if ( !filter || filter( elem, context, xml ) ) { - newUnmatched.push( elem ); - if ( mapped ) { - map.push( i ); - } - } - } - } - - return newUnmatched; -} - -function setMatcher( preFilter, selector, matcher, postFilter, postFinder, postSelector ) { - if ( postFilter && !postFilter[ expando ] ) { - postFilter = setMatcher( postFilter ); - } - if ( postFinder && !postFinder[ expando ] ) { - postFinder = setMatcher( postFinder, postSelector ); - } - return markFunction( function( seed, results, context, xml ) { - var temp, i, elem, - preMap = [], - postMap = [], - preexisting = results.length, - - // Get initial elements from seed or context - elems = seed || multipleContexts( - selector || "*", - context.nodeType ? [ context ] : context, - [] - ), - - // Prefilter to get matcher input, preserving a map for seed-results synchronization - matcherIn = preFilter && ( seed || !selector ) ? - condense( elems, preMap, preFilter, context, xml ) : - elems, - - matcherOut = matcher ? - - // If we have a postFinder, or filtered seed, or non-seed postFilter or preexisting results, - postFinder || ( seed ? preFilter : preexisting || postFilter ) ? 
- - // ...intermediate processing is necessary - [] : - - // ...otherwise use results directly - results : - matcherIn; - - // Find primary matches - if ( matcher ) { - matcher( matcherIn, matcherOut, context, xml ); - } - - // Apply postFilter - if ( postFilter ) { - temp = condense( matcherOut, postMap ); - postFilter( temp, [], context, xml ); - - // Un-match failing elements by moving them back to matcherIn - i = temp.length; - while ( i-- ) { - if ( ( elem = temp[ i ] ) ) { - matcherOut[ postMap[ i ] ] = !( matcherIn[ postMap[ i ] ] = elem ); - } - } - } - - if ( seed ) { - if ( postFinder || preFilter ) { - if ( postFinder ) { - - // Get the final matcherOut by condensing this intermediate into postFinder contexts - temp = []; - i = matcherOut.length; - while ( i-- ) { - if ( ( elem = matcherOut[ i ] ) ) { - - // Restore matcherIn since elem is not yet a final match - temp.push( ( matcherIn[ i ] = elem ) ); - } - } - postFinder( null, ( matcherOut = [] ), temp, xml ); - } - - // Move matched elements from seed to results to keep them synchronized - i = matcherOut.length; - while ( i-- ) { - if ( ( elem = matcherOut[ i ] ) && - ( temp = postFinder ? indexOf( seed, elem ) : preMap[ i ] ) > -1 ) { - - seed[ temp ] = !( results[ temp ] = elem ); - } - } - } - - // Add elements to results, through postFinder if defined - } else { - matcherOut = condense( - matcherOut === results ? - matcherOut.splice( preexisting, matcherOut.length ) : - matcherOut - ); - if ( postFinder ) { - postFinder( null, results, matcherOut, xml ); - } else { - push.apply( results, matcherOut ); - } - } - } ); -} - -function matcherFromTokens( tokens ) { - var checkContext, matcher, j, - len = tokens.length, - leadingRelative = Expr.relative[ tokens[ 0 ].type ], - implicitRelative = leadingRelative || Expr.relative[ " " ], - i = leadingRelative ? 1 : 0, - - // The foundational matcher ensures that elements are reachable from top-level context(s) - matchContext = addCombinator( function( elem ) { - return elem === checkContext; - }, implicitRelative, true ), - matchAnyContext = addCombinator( function( elem ) { - return indexOf( checkContext, elem ) > -1; - }, implicitRelative, true ), - matchers = [ function( elem, context, xml ) { - var ret = ( !leadingRelative && ( xml || context !== outermostContext ) ) || ( - ( checkContext = context ).nodeType ? - matchContext( elem, context, xml ) : - matchAnyContext( elem, context, xml ) ); - - // Avoid hanging onto element (issue #299) - checkContext = null; - return ret; - } ]; - - for ( ; i < len; i++ ) { - if ( ( matcher = Expr.relative[ tokens[ i ].type ] ) ) { - matchers = [ addCombinator( elementMatcher( matchers ), matcher ) ]; - } else { - matcher = Expr.filter[ tokens[ i ].type ].apply( null, tokens[ i ].matches ); - - // Return special upon seeing a positional matcher - if ( matcher[ expando ] ) { - - // Find the next relative operator (if any) for proper handling - j = ++i; - for ( ; j < len; j++ ) { - if ( Expr.relative[ tokens[ j ].type ] ) { - break; - } - } - return setMatcher( - i > 1 && elementMatcher( matchers ), - i > 1 && toSelector( - - // If the preceding token was a descendant combinator, insert an implicit any-element `*` - tokens - .slice( 0, i - 1 ) - .concat( { value: tokens[ i - 2 ].type === " " ? 
"*" : "" } ) - ).replace( rtrim, "$1" ), - matcher, - i < j && matcherFromTokens( tokens.slice( i, j ) ), - j < len && matcherFromTokens( ( tokens = tokens.slice( j ) ) ), - j < len && toSelector( tokens ) - ); - } - matchers.push( matcher ); - } - } - - return elementMatcher( matchers ); -} - -function matcherFromGroupMatchers( elementMatchers, setMatchers ) { - var bySet = setMatchers.length > 0, - byElement = elementMatchers.length > 0, - superMatcher = function( seed, context, xml, results, outermost ) { - var elem, j, matcher, - matchedCount = 0, - i = "0", - unmatched = seed && [], - setMatched = [], - contextBackup = outermostContext, - - // We must always have either seed elements or outermost context - elems = seed || byElement && Expr.find[ "TAG" ]( "*", outermost ), - - // Use integer dirruns iff this is the outermost matcher - dirrunsUnique = ( dirruns += contextBackup == null ? 1 : Math.random() || 0.1 ), - len = elems.length; - - if ( outermost ) { - - // Support: IE 11+, Edge 17 - 18+ - // IE/Edge sometimes throw a "Permission denied" error when strict-comparing - // two documents; shallow comparisons work. - // eslint-disable-next-line eqeqeq - outermostContext = context == document || context || outermost; - } - - // Add elements passing elementMatchers directly to results - // Support: IE<9, Safari - // Tolerate NodeList properties (IE: "length"; Safari: ) matching elements by id - for ( ; i !== len && ( elem = elems[ i ] ) != null; i++ ) { - if ( byElement && elem ) { - j = 0; - - // Support: IE 11+, Edge 17 - 18+ - // IE/Edge sometimes throw a "Permission denied" error when strict-comparing - // two documents; shallow comparisons work. - // eslint-disable-next-line eqeqeq - if ( !context && elem.ownerDocument != document ) { - setDocument( elem ); - xml = !documentIsHTML; - } - while ( ( matcher = elementMatchers[ j++ ] ) ) { - if ( matcher( elem, context || document, xml ) ) { - results.push( elem ); - break; - } - } - if ( outermost ) { - dirruns = dirrunsUnique; - } - } - - // Track unmatched elements for set filters - if ( bySet ) { - - // They will have gone through all possible matchers - if ( ( elem = !matcher && elem ) ) { - matchedCount--; - } - - // Lengthen the array for every element, matched or not - if ( seed ) { - unmatched.push( elem ); - } - } - } - - // `i` is now the count of elements visited above, and adding it to `matchedCount` - // makes the latter nonnegative. - matchedCount += i; - - // Apply set filters to unmatched elements - // NOTE: This can be skipped if there are no unmatched elements (i.e., `matchedCount` - // equals `i`), unless we didn't visit _any_ elements in the above loop because we have - // no element matchers and no seed. - // Incrementing an initially-string "0" `i` allows `i` to remain a string only in that - // case, which will result in a "00" `matchedCount` that differs from `i` but is also - // numerically zero. 
- if ( bySet && i !== matchedCount ) { - j = 0; - while ( ( matcher = setMatchers[ j++ ] ) ) { - matcher( unmatched, setMatched, context, xml ); - } - - if ( seed ) { - - // Reintegrate element matches to eliminate the need for sorting - if ( matchedCount > 0 ) { - while ( i-- ) { - if ( !( unmatched[ i ] || setMatched[ i ] ) ) { - setMatched[ i ] = pop.call( results ); - } - } - } - - // Discard index placeholder values to get only actual matches - setMatched = condense( setMatched ); - } - - // Add matches to results - push.apply( results, setMatched ); - - // Seedless set matches succeeding multiple successful matchers stipulate sorting - if ( outermost && !seed && setMatched.length > 0 && - ( matchedCount + setMatchers.length ) > 1 ) { - - Sizzle.uniqueSort( results ); - } - } - - // Override manipulation of globals by nested matchers - if ( outermost ) { - dirruns = dirrunsUnique; - outermostContext = contextBackup; - } - - return unmatched; - }; - - return bySet ? - markFunction( superMatcher ) : - superMatcher; -} - -compile = Sizzle.compile = function( selector, match /* Internal Use Only */ ) { - var i, - setMatchers = [], - elementMatchers = [], - cached = compilerCache[ selector + " " ]; - - if ( !cached ) { - - // Generate a function of recursive functions that can be used to check each element - if ( !match ) { - match = tokenize( selector ); - } - i = match.length; - while ( i-- ) { - cached = matcherFromTokens( match[ i ] ); - if ( cached[ expando ] ) { - setMatchers.push( cached ); - } else { - elementMatchers.push( cached ); - } - } - - // Cache the compiled function - cached = compilerCache( - selector, - matcherFromGroupMatchers( elementMatchers, setMatchers ) - ); - - // Save selector and tokenization - cached.selector = selector; - } - return cached; -}; - -/** - * A low-level selection function that works with Sizzle's compiled - * selector functions - * @param {String|Function} selector A selector or a pre-compiled - * selector function built with Sizzle.compile - * @param {Element} context - * @param {Array} [results] - * @param {Array} [seed] A set of elements to match against - */ -select = Sizzle.select = function( selector, context, results, seed ) { - var i, tokens, token, type, find, - compiled = typeof selector === "function" && selector, - match = !seed && tokenize( ( selector = compiled.selector || selector ) ); - - results = results || []; - - // Try to minimize operations if there is only one selector in the list and no seed - // (the latter of which guarantees us context) - if ( match.length === 1 ) { - - // Reduce context if the leading compound selector is an ID - tokens = match[ 0 ] = match[ 0 ].slice( 0 ); - if ( tokens.length > 2 && ( token = tokens[ 0 ] ).type === "ID" && - context.nodeType === 9 && documentIsHTML && Expr.relative[ tokens[ 1 ].type ] ) { - - context = ( Expr.find[ "ID" ]( token.matches[ 0 ] - .replace( runescape, funescape ), context ) || [] )[ 0 ]; - if ( !context ) { - return results; - - // Precompiled matchers will still verify ancestry, so step up a level - } else if ( compiled ) { - context = context.parentNode; - } - - selector = selector.slice( tokens.shift().value.length ); - } - - // Fetch a seed set for right-to-left matching - i = matchExpr[ "needsContext" ].test( selector ) ? 
0 : tokens.length; - while ( i-- ) { - token = tokens[ i ]; - - // Abort if we hit a combinator - if ( Expr.relative[ ( type = token.type ) ] ) { - break; - } - if ( ( find = Expr.find[ type ] ) ) { - - // Search, expanding context for leading sibling combinators - if ( ( seed = find( - token.matches[ 0 ].replace( runescape, funescape ), - rsibling.test( tokens[ 0 ].type ) && testContext( context.parentNode ) || - context - ) ) ) { - - // If seed is empty or no tokens remain, we can return early - tokens.splice( i, 1 ); - selector = seed.length && toSelector( tokens ); - if ( !selector ) { - push.apply( results, seed ); - return results; - } - - break; - } - } - } - } - - // Compile and execute a filtering function if one is not provided - // Provide `match` to avoid retokenization if we modified the selector above - ( compiled || compile( selector, match ) )( - seed, - context, - !documentIsHTML, - results, - !context || rsibling.test( selector ) && testContext( context.parentNode ) || context - ); - return results; -}; - -// One-time assignments - -// Sort stability -support.sortStable = expando.split( "" ).sort( sortOrder ).join( "" ) === expando; - -// Support: Chrome 14-35+ -// Always assume duplicates if they aren't passed to the comparison function -support.detectDuplicates = !!hasDuplicate; - -// Initialize against the default document -setDocument(); - -// Support: Webkit<537.32 - Safari 6.0.3/Chrome 25 (fixed in Chrome 27) -// Detached nodes confoundingly follow *each other* -support.sortDetached = assert( function( el ) { - - // Should return 1, but returns 4 (following) - return el.compareDocumentPosition( document.createElement( "fieldset" ) ) & 1; -} ); - -// Support: IE<8 -// Prevent attribute/property "interpolation" -// https://msdn.microsoft.com/en-us/library/ms536429%28VS.85%29.aspx -if ( !assert( function( el ) { - el.innerHTML = ""; - return el.firstChild.getAttribute( "href" ) === "#"; -} ) ) { - addHandle( "type|href|height|width", function( elem, name, isXML ) { - if ( !isXML ) { - return elem.getAttribute( name, name.toLowerCase() === "type" ? 1 : 2 ); - } - } ); -} - -// Support: IE<9 -// Use defaultValue in place of getAttribute("value") -if ( !support.attributes || !assert( function( el ) { - el.innerHTML = ""; - el.firstChild.setAttribute( "value", "" ); - return el.firstChild.getAttribute( "value" ) === ""; -} ) ) { - addHandle( "value", function( elem, _name, isXML ) { - if ( !isXML && elem.nodeName.toLowerCase() === "input" ) { - return elem.defaultValue; - } - } ); -} - -// Support: IE<9 -// Use getAttributeNode to fetch booleans when getAttribute lies -if ( !assert( function( el ) { - return el.getAttribute( "disabled" ) == null; -} ) ) { - addHandle( booleans, function( elem, name, isXML ) { - var val; - if ( !isXML ) { - return elem[ name ] === true ? name.toLowerCase() : - ( val = elem.getAttributeNode( name ) ) && val.specified ? 
- val.value : - null; - } - } ); -} - -return Sizzle; - -} )( window ); - - - -jQuery.find = Sizzle; -jQuery.expr = Sizzle.selectors; - -// Deprecated -jQuery.expr[ ":" ] = jQuery.expr.pseudos; -jQuery.uniqueSort = jQuery.unique = Sizzle.uniqueSort; -jQuery.text = Sizzle.getText; -jQuery.isXMLDoc = Sizzle.isXML; -jQuery.contains = Sizzle.contains; -jQuery.escapeSelector = Sizzle.escape; - - - - -var dir = function( elem, dir, until ) { - var matched = [], - truncate = until !== undefined; - - while ( ( elem = elem[ dir ] ) && elem.nodeType !== 9 ) { - if ( elem.nodeType === 1 ) { - if ( truncate && jQuery( elem ).is( until ) ) { - break; - } - matched.push( elem ); - } - } - return matched; -}; - - -var siblings = function( n, elem ) { - var matched = []; - - for ( ; n; n = n.nextSibling ) { - if ( n.nodeType === 1 && n !== elem ) { - matched.push( n ); - } - } - - return matched; -}; - - -var rneedsContext = jQuery.expr.match.needsContext; - - - -function nodeName( elem, name ) { - - return elem.nodeName && elem.nodeName.toLowerCase() === name.toLowerCase(); - -}; -var rsingleTag = ( /^<([a-z][^\/\0>:\x20\t\r\n\f]*)[\x20\t\r\n\f]*\/?>(?:<\/\1>|)$/i ); - - - -// Implement the identical functionality for filter and not -function winnow( elements, qualifier, not ) { - if ( isFunction( qualifier ) ) { - return jQuery.grep( elements, function( elem, i ) { - return !!qualifier.call( elem, i, elem ) !== not; - } ); - } - - // Single element - if ( qualifier.nodeType ) { - return jQuery.grep( elements, function( elem ) { - return ( elem === qualifier ) !== not; - } ); - } - - // Arraylike of elements (jQuery, arguments, Array) - if ( typeof qualifier !== "string" ) { - return jQuery.grep( elements, function( elem ) { - return ( indexOf.call( qualifier, elem ) > -1 ) !== not; - } ); - } - - // Filtered directly for both simple and complex selectors - return jQuery.filter( qualifier, elements, not ); -} - -jQuery.filter = function( expr, elems, not ) { - var elem = elems[ 0 ]; - - if ( not ) { - expr = ":not(" + expr + ")"; - } - - if ( elems.length === 1 && elem.nodeType === 1 ) { - return jQuery.find.matchesSelector( elem, expr ) ? [ elem ] : []; - } - - return jQuery.find.matches( expr, jQuery.grep( elems, function( elem ) { - return elem.nodeType === 1; - } ) ); -}; - -jQuery.fn.extend( { - find: function( selector ) { - var i, ret, - len = this.length, - self = this; - - if ( typeof selector !== "string" ) { - return this.pushStack( jQuery( selector ).filter( function() { - for ( i = 0; i < len; i++ ) { - if ( jQuery.contains( self[ i ], this ) ) { - return true; - } - } - } ) ); - } - - ret = this.pushStack( [] ); - - for ( i = 0; i < len; i++ ) { - jQuery.find( selector, self[ i ], ret ); - } - - return len > 1 ? jQuery.uniqueSort( ret ) : ret; - }, - filter: function( selector ) { - return this.pushStack( winnow( this, selector || [], false ) ); - }, - not: function( selector ) { - return this.pushStack( winnow( this, selector || [], true ) ); - }, - is: function( selector ) { - return !!winnow( - this, - - // If this is a positional/relative selector, check membership in the returned set - // so $("p:first").is("p:last") won't return true for a doc with two "p". - typeof selector === "string" && rneedsContext.test( selector ) ? 
- jQuery( selector ) : - selector || [], - false - ).length; - } -} ); - - -// Initialize a jQuery object - - -// A central reference to the root jQuery(document) -var rootjQuery, - - // A simple way to check for HTML strings - // Prioritize #id over to avoid XSS via location.hash (#9521) - // Strict HTML recognition (#11290: must start with <) - // Shortcut simple #id case for speed - rquickExpr = /^(?:\s*(<[\w\W]+>)[^>]*|#([\w-]+))$/, - - init = jQuery.fn.init = function( selector, context, root ) { - var match, elem; - - // HANDLE: $(""), $(null), $(undefined), $(false) - if ( !selector ) { - return this; - } - - // Method init() accepts an alternate rootjQuery - // so migrate can support jQuery.sub (gh-2101) - root = root || rootjQuery; - - // Handle HTML strings - if ( typeof selector === "string" ) { - if ( selector[ 0 ] === "<" && - selector[ selector.length - 1 ] === ">" && - selector.length >= 3 ) { - - // Assume that strings that start and end with <> are HTML and skip the regex check - match = [ null, selector, null ]; - - } else { - match = rquickExpr.exec( selector ); - } - - // Match html or make sure no context is specified for #id - if ( match && ( match[ 1 ] || !context ) ) { - - // HANDLE: $(html) -> $(array) - if ( match[ 1 ] ) { - context = context instanceof jQuery ? context[ 0 ] : context; - - // Option to run scripts is true for back-compat - // Intentionally let the error be thrown if parseHTML is not present - jQuery.merge( this, jQuery.parseHTML( - match[ 1 ], - context && context.nodeType ? context.ownerDocument || context : document, - true - ) ); - - // HANDLE: $(html, props) - if ( rsingleTag.test( match[ 1 ] ) && jQuery.isPlainObject( context ) ) { - for ( match in context ) { - - // Properties of context are called as methods if possible - if ( isFunction( this[ match ] ) ) { - this[ match ]( context[ match ] ); - - // ...and otherwise set as attributes - } else { - this.attr( match, context[ match ] ); - } - } - } - - return this; - - // HANDLE: $(#id) - } else { - elem = document.getElementById( match[ 2 ] ); - - if ( elem ) { - - // Inject the element directly into the jQuery object - this[ 0 ] = elem; - this.length = 1; - } - return this; - } - - // HANDLE: $(expr, $(...)) - } else if ( !context || context.jquery ) { - return ( context || root ).find( selector ); - - // HANDLE: $(expr, context) - // (which is just equivalent to: $(context).find(expr) - } else { - return this.constructor( context ).find( selector ); - } - - // HANDLE: $(DOMElement) - } else if ( selector.nodeType ) { - this[ 0 ] = selector; - this.length = 1; - return this; - - // HANDLE: $(function) - // Shortcut for document ready - } else if ( isFunction( selector ) ) { - return root.ready !== undefined ? 
- root.ready( selector ) : - - // Execute immediately if ready is not present - selector( jQuery ); - } - - return jQuery.makeArray( selector, this ); - }; - -// Give the init function the jQuery prototype for later instantiation -init.prototype = jQuery.fn; - -// Initialize central reference -rootjQuery = jQuery( document ); - - -var rparentsprev = /^(?:parents|prev(?:Until|All))/, - - // Methods guaranteed to produce a unique set when starting from a unique set - guaranteedUnique = { - children: true, - contents: true, - next: true, - prev: true - }; - -jQuery.fn.extend( { - has: function( target ) { - var targets = jQuery( target, this ), - l = targets.length; - - return this.filter( function() { - var i = 0; - for ( ; i < l; i++ ) { - if ( jQuery.contains( this, targets[ i ] ) ) { - return true; - } - } - } ); - }, - - closest: function( selectors, context ) { - var cur, - i = 0, - l = this.length, - matched = [], - targets = typeof selectors !== "string" && jQuery( selectors ); - - // Positional selectors never match, since there's no _selection_ context - if ( !rneedsContext.test( selectors ) ) { - for ( ; i < l; i++ ) { - for ( cur = this[ i ]; cur && cur !== context; cur = cur.parentNode ) { - - // Always skip document fragments - if ( cur.nodeType < 11 && ( targets ? - targets.index( cur ) > -1 : - - // Don't pass non-elements to Sizzle - cur.nodeType === 1 && - jQuery.find.matchesSelector( cur, selectors ) ) ) { - - matched.push( cur ); - break; - } - } - } - } - - return this.pushStack( matched.length > 1 ? jQuery.uniqueSort( matched ) : matched ); - }, - - // Determine the position of an element within the set - index: function( elem ) { - - // No argument, return index in parent - if ( !elem ) { - return ( this[ 0 ] && this[ 0 ].parentNode ) ? this.first().prevAll().length : -1; - } - - // Index in selector - if ( typeof elem === "string" ) { - return indexOf.call( jQuery( elem ), this[ 0 ] ); - } - - // Locate the position of the desired element - return indexOf.call( this, - - // If it receives a jQuery object, the first element is used - elem.jquery ? elem[ 0 ] : elem - ); - }, - - add: function( selector, context ) { - return this.pushStack( - jQuery.uniqueSort( - jQuery.merge( this.get(), jQuery( selector, context ) ) - ) - ); - }, - - addBack: function( selector ) { - return this.add( selector == null ? - this.prevObject : this.prevObject.filter( selector ) - ); - } -} ); - -function sibling( cur, dir ) { - while ( ( cur = cur[ dir ] ) && cur.nodeType !== 1 ) {} - return cur; -} - -jQuery.each( { - parent: function( elem ) { - var parent = elem.parentNode; - return parent && parent.nodeType !== 11 ? 
parent : null; - }, - parents: function( elem ) { - return dir( elem, "parentNode" ); - }, - parentsUntil: function( elem, _i, until ) { - return dir( elem, "parentNode", until ); - }, - next: function( elem ) { - return sibling( elem, "nextSibling" ); - }, - prev: function( elem ) { - return sibling( elem, "previousSibling" ); - }, - nextAll: function( elem ) { - return dir( elem, "nextSibling" ); - }, - prevAll: function( elem ) { - return dir( elem, "previousSibling" ); - }, - nextUntil: function( elem, _i, until ) { - return dir( elem, "nextSibling", until ); - }, - prevUntil: function( elem, _i, until ) { - return dir( elem, "previousSibling", until ); - }, - siblings: function( elem ) { - return siblings( ( elem.parentNode || {} ).firstChild, elem ); - }, - children: function( elem ) { - return siblings( elem.firstChild ); - }, - contents: function( elem ) { - if ( elem.contentDocument != null && - - // Support: IE 11+ - // elements with no `data` attribute has an object - // `contentDocument` with a `null` prototype. - getProto( elem.contentDocument ) ) { - - return elem.contentDocument; - } - - // Support: IE 9 - 11 only, iOS 7 only, Android Browser <=4.3 only - // Treat the template element as a regular one in browsers that - // don't support it. - if ( nodeName( elem, "template" ) ) { - elem = elem.content || elem; - } - - return jQuery.merge( [], elem.childNodes ); - } -}, function( name, fn ) { - jQuery.fn[ name ] = function( until, selector ) { - var matched = jQuery.map( this, fn, until ); - - if ( name.slice( -5 ) !== "Until" ) { - selector = until; - } - - if ( selector && typeof selector === "string" ) { - matched = jQuery.filter( selector, matched ); - } - - if ( this.length > 1 ) { - - // Remove duplicates - if ( !guaranteedUnique[ name ] ) { - jQuery.uniqueSort( matched ); - } - - // Reverse order for parents* and prev-derivatives - if ( rparentsprev.test( name ) ) { - matched.reverse(); - } - } - - return this.pushStack( matched ); - }; -} ); -var rnothtmlwhite = ( /[^\x20\t\r\n\f]+/g ); - - - -// Convert String-formatted options into Object-formatted ones -function createOptions( options ) { - var object = {}; - jQuery.each( options.match( rnothtmlwhite ) || [], function( _, flag ) { - object[ flag ] = true; - } ); - return object; -} - -/* - * Create a callback list using the following parameters: - * - * options: an optional list of space-separated options that will change how - * the callback list behaves or a more traditional option object - * - * By default a callback list will act like an event callback list and can be - * "fired" multiple times. - * - * Possible options: - * - * once: will ensure the callback list can only be fired once (like a Deferred) - * - * memory: will keep track of previous values and will call any callback added - * after the list has been fired right away with the latest "memorized" - * values (like a Deferred) - * - * unique: will ensure a callback can only be added once (no duplicate in the list) - * - * stopOnFalse: interrupt callings when a callback returns false - * - */ -jQuery.Callbacks = function( options ) { - - // Convert options from String-formatted to Object-formatted if needed - // (we check in cache first) - options = typeof options === "string" ? 
- createOptions( options ) : - jQuery.extend( {}, options ); - - var // Flag to know if list is currently firing - firing, - - // Last fire value for non-forgettable lists - memory, - - // Flag to know if list was already fired - fired, - - // Flag to prevent firing - locked, - - // Actual callback list - list = [], - - // Queue of execution data for repeatable lists - queue = [], - - // Index of currently firing callback (modified by add/remove as needed) - firingIndex = -1, - - // Fire callbacks - fire = function() { - - // Enforce single-firing - locked = locked || options.once; - - // Execute callbacks for all pending executions, - // respecting firingIndex overrides and runtime changes - fired = firing = true; - for ( ; queue.length; firingIndex = -1 ) { - memory = queue.shift(); - while ( ++firingIndex < list.length ) { - - // Run callback and check for early termination - if ( list[ firingIndex ].apply( memory[ 0 ], memory[ 1 ] ) === false && - options.stopOnFalse ) { - - // Jump to end and forget the data so .add doesn't re-fire - firingIndex = list.length; - memory = false; - } - } - } - - // Forget the data if we're done with it - if ( !options.memory ) { - memory = false; - } - - firing = false; - - // Clean up if we're done firing for good - if ( locked ) { - - // Keep an empty list if we have data for future add calls - if ( memory ) { - list = []; - - // Otherwise, this object is spent - } else { - list = ""; - } - } - }, - - // Actual Callbacks object - self = { - - // Add a callback or a collection of callbacks to the list - add: function() { - if ( list ) { - - // If we have memory from a past run, we should fire after adding - if ( memory && !firing ) { - firingIndex = list.length - 1; - queue.push( memory ); - } - - ( function add( args ) { - jQuery.each( args, function( _, arg ) { - if ( isFunction( arg ) ) { - if ( !options.unique || !self.has( arg ) ) { - list.push( arg ); - } - } else if ( arg && arg.length && toType( arg ) !== "string" ) { - - // Inspect recursively - add( arg ); - } - } ); - } )( arguments ); - - if ( memory && !firing ) { - fire(); - } - } - return this; - }, - - // Remove a callback from the list - remove: function() { - jQuery.each( arguments, function( _, arg ) { - var index; - while ( ( index = jQuery.inArray( arg, list, index ) ) > -1 ) { - list.splice( index, 1 ); - - // Handle firing indexes - if ( index <= firingIndex ) { - firingIndex--; - } - } - } ); - return this; - }, - - // Check if a given callback is in the list. - // If no argument is given, return whether or not list has callbacks attached. - has: function( fn ) { - return fn ? - jQuery.inArray( fn, list ) > -1 : - list.length > 0; - }, - - // Remove all callbacks from the list - empty: function() { - if ( list ) { - list = []; - } - return this; - }, - - // Disable .fire and .add - // Abort any current/pending executions - // Clear all callbacks and values - disable: function() { - locked = queue = []; - list = memory = ""; - return this; - }, - disabled: function() { - return !list; - }, - - // Disable .fire - // Also disable .add unless we have memory (since it would have no effect) - // Abort any pending executions - lock: function() { - locked = queue = []; - if ( !memory && !firing ) { - list = memory = ""; - } - return this; - }, - locked: function() { - return !!locked; - }, - - // Call all callbacks with the given context and arguments - fireWith: function( context, args ) { - if ( !locked ) { - args = args || []; - args = [ context, args.slice ? 
args.slice() : args ]; - queue.push( args ); - if ( !firing ) { - fire(); - } - } - return this; - }, - - // Call all the callbacks with the given arguments - fire: function() { - self.fireWith( this, arguments ); - return this; - }, - - // To know if the callbacks have already been called at least once - fired: function() { - return !!fired; - } - }; - - return self; -}; - - -function Identity( v ) { - return v; -} -function Thrower( ex ) { - throw ex; -} - -function adoptValue( value, resolve, reject, noValue ) { - var method; - - try { - - // Check for promise aspect first to privilege synchronous behavior - if ( value && isFunction( ( method = value.promise ) ) ) { - method.call( value ).done( resolve ).fail( reject ); - - // Other thenables - } else if ( value && isFunction( ( method = value.then ) ) ) { - method.call( value, resolve, reject ); - - // Other non-thenables - } else { - - // Control `resolve` arguments by letting Array#slice cast boolean `noValue` to integer: - // * false: [ value ].slice( 0 ) => resolve( value ) - // * true: [ value ].slice( 1 ) => resolve() - resolve.apply( undefined, [ value ].slice( noValue ) ); - } - - // For Promises/A+, convert exceptions into rejections - // Since jQuery.when doesn't unwrap thenables, we can skip the extra checks appearing in - // Deferred#then to conditionally suppress rejection. - } catch ( value ) { - - // Support: Android 4.0 only - // Strict mode functions invoked without .call/.apply get global-object context - reject.apply( undefined, [ value ] ); - } -} - -jQuery.extend( { - - Deferred: function( func ) { - var tuples = [ - - // action, add listener, callbacks, - // ... .then handlers, argument index, [final state] - [ "notify", "progress", jQuery.Callbacks( "memory" ), - jQuery.Callbacks( "memory" ), 2 ], - [ "resolve", "done", jQuery.Callbacks( "once memory" ), - jQuery.Callbacks( "once memory" ), 0, "resolved" ], - [ "reject", "fail", jQuery.Callbacks( "once memory" ), - jQuery.Callbacks( "once memory" ), 1, "rejected" ] - ], - state = "pending", - promise = { - state: function() { - return state; - }, - always: function() { - deferred.done( arguments ).fail( arguments ); - return this; - }, - "catch": function( fn ) { - return promise.then( null, fn ); - }, - - // Keep pipe for back-compat - pipe: function( /* fnDone, fnFail, fnProgress */ ) { - var fns = arguments; - - return jQuery.Deferred( function( newDefer ) { - jQuery.each( tuples, function( _i, tuple ) { - - // Map tuples (progress, done, fail) to arguments (done, fail, progress) - var fn = isFunction( fns[ tuple[ 4 ] ] ) && fns[ tuple[ 4 ] ]; - - // deferred.progress(function() { bind to newDefer or newDefer.notify }) - // deferred.done(function() { bind to newDefer or newDefer.resolve }) - // deferred.fail(function() { bind to newDefer or newDefer.reject }) - deferred[ tuple[ 1 ] ]( function() { - var returned = fn && fn.apply( this, arguments ); - if ( returned && isFunction( returned.promise ) ) { - returned.promise() - .progress( newDefer.notify ) - .done( newDefer.resolve ) - .fail( newDefer.reject ); - } else { - newDefer[ tuple[ 0 ] + "With" ]( - this, - fn ? 
[ returned ] : arguments - ); - } - } ); - } ); - fns = null; - } ).promise(); - }, - then: function( onFulfilled, onRejected, onProgress ) { - var maxDepth = 0; - function resolve( depth, deferred, handler, special ) { - return function() { - var that = this, - args = arguments, - mightThrow = function() { - var returned, then; - - // Support: Promises/A+ section 2.3.3.3.3 - // https://promisesaplus.com/#point-59 - // Ignore double-resolution attempts - if ( depth < maxDepth ) { - return; - } - - returned = handler.apply( that, args ); - - // Support: Promises/A+ section 2.3.1 - // https://promisesaplus.com/#point-48 - if ( returned === deferred.promise() ) { - throw new TypeError( "Thenable self-resolution" ); - } - - // Support: Promises/A+ sections 2.3.3.1, 3.5 - // https://promisesaplus.com/#point-54 - // https://promisesaplus.com/#point-75 - // Retrieve `then` only once - then = returned && - - // Support: Promises/A+ section 2.3.4 - // https://promisesaplus.com/#point-64 - // Only check objects and functions for thenability - ( typeof returned === "object" || - typeof returned === "function" ) && - returned.then; - - // Handle a returned thenable - if ( isFunction( then ) ) { - - // Special processors (notify) just wait for resolution - if ( special ) { - then.call( - returned, - resolve( maxDepth, deferred, Identity, special ), - resolve( maxDepth, deferred, Thrower, special ) - ); - - // Normal processors (resolve) also hook into progress - } else { - - // ...and disregard older resolution values - maxDepth++; - - then.call( - returned, - resolve( maxDepth, deferred, Identity, special ), - resolve( maxDepth, deferred, Thrower, special ), - resolve( maxDepth, deferred, Identity, - deferred.notifyWith ) - ); - } - - // Handle all other returned values - } else { - - // Only substitute handlers pass on context - // and multiple values (non-spec behavior) - if ( handler !== Identity ) { - that = undefined; - args = [ returned ]; - } - - // Process the value(s) - // Default process is resolve - ( special || deferred.resolveWith )( that, args ); - } - }, - - // Only normal processors (resolve) catch and reject exceptions - process = special ? - mightThrow : - function() { - try { - mightThrow(); - } catch ( e ) { - - if ( jQuery.Deferred.exceptionHook ) { - jQuery.Deferred.exceptionHook( e, - process.stackTrace ); - } - - // Support: Promises/A+ section 2.3.3.3.4.1 - // https://promisesaplus.com/#point-61 - // Ignore post-resolution exceptions - if ( depth + 1 >= maxDepth ) { - - // Only substitute handlers pass on context - // and multiple values (non-spec behavior) - if ( handler !== Thrower ) { - that = undefined; - args = [ e ]; - } - - deferred.rejectWith( that, args ); - } - } - }; - - // Support: Promises/A+ section 2.3.3.3.1 - // https://promisesaplus.com/#point-57 - // Re-resolve promises immediately to dodge false rejection from - // subsequent errors - if ( depth ) { - process(); - } else { - - // Call an optional hook to record the stack, in case of exception - // since it's otherwise lost when execution goes async - if ( jQuery.Deferred.getStackHook ) { - process.stackTrace = jQuery.Deferred.getStackHook(); - } - window.setTimeout( process ); - } - }; - } - - return jQuery.Deferred( function( newDefer ) { - - // progress_handlers.add( ... ) - tuples[ 0 ][ 3 ].add( - resolve( - 0, - newDefer, - isFunction( onProgress ) ? - onProgress : - Identity, - newDefer.notifyWith - ) - ); - - // fulfilled_handlers.add( ... 
) - tuples[ 1 ][ 3 ].add( - resolve( - 0, - newDefer, - isFunction( onFulfilled ) ? - onFulfilled : - Identity - ) - ); - - // rejected_handlers.add( ... ) - tuples[ 2 ][ 3 ].add( - resolve( - 0, - newDefer, - isFunction( onRejected ) ? - onRejected : - Thrower - ) - ); - } ).promise(); - }, - - // Get a promise for this deferred - // If obj is provided, the promise aspect is added to the object - promise: function( obj ) { - return obj != null ? jQuery.extend( obj, promise ) : promise; - } - }, - deferred = {}; - - // Add list-specific methods - jQuery.each( tuples, function( i, tuple ) { - var list = tuple[ 2 ], - stateString = tuple[ 5 ]; - - // promise.progress = list.add - // promise.done = list.add - // promise.fail = list.add - promise[ tuple[ 1 ] ] = list.add; - - // Handle state - if ( stateString ) { - list.add( - function() { - - // state = "resolved" (i.e., fulfilled) - // state = "rejected" - state = stateString; - }, - - // rejected_callbacks.disable - // fulfilled_callbacks.disable - tuples[ 3 - i ][ 2 ].disable, - - // rejected_handlers.disable - // fulfilled_handlers.disable - tuples[ 3 - i ][ 3 ].disable, - - // progress_callbacks.lock - tuples[ 0 ][ 2 ].lock, - - // progress_handlers.lock - tuples[ 0 ][ 3 ].lock - ); - } - - // progress_handlers.fire - // fulfilled_handlers.fire - // rejected_handlers.fire - list.add( tuple[ 3 ].fire ); - - // deferred.notify = function() { deferred.notifyWith(...) } - // deferred.resolve = function() { deferred.resolveWith(...) } - // deferred.reject = function() { deferred.rejectWith(...) } - deferred[ tuple[ 0 ] ] = function() { - deferred[ tuple[ 0 ] + "With" ]( this === deferred ? undefined : this, arguments ); - return this; - }; - - // deferred.notifyWith = list.fireWith - // deferred.resolveWith = list.fireWith - // deferred.rejectWith = list.fireWith - deferred[ tuple[ 0 ] + "With" ] = list.fireWith; - } ); - - // Make the deferred a promise - promise.promise( deferred ); - - // Call given func if any - if ( func ) { - func.call( deferred, deferred ); - } - - // All done! - return deferred; - }, - - // Deferred helper - when: function( singleValue ) { - var - - // count of uncompleted subordinates - remaining = arguments.length, - - // count of unprocessed arguments - i = remaining, - - // subordinate fulfillment data - resolveContexts = Array( i ), - resolveValues = slice.call( arguments ), - - // the master Deferred - master = jQuery.Deferred(), - - // subordinate callback factory - updateFunc = function( i ) { - return function( value ) { - resolveContexts[ i ] = this; - resolveValues[ i ] = arguments.length > 1 ? slice.call( arguments ) : value; - if ( !( --remaining ) ) { - master.resolveWith( resolveContexts, resolveValues ); - } - }; - }; - - // Single- and empty arguments are adopted like Promise.resolve - if ( remaining <= 1 ) { - adoptValue( singleValue, master.done( updateFunc( i ) ).resolve, master.reject, - !remaining ); - - // Use .then() to unwrap secondary thenables (cf. gh-3000) - if ( master.state() === "pending" || - isFunction( resolveValues[ i ] && resolveValues[ i ].then ) ) { - - return master.then(); - } - } - - // Multiple arguments are aggregated like Promise.all array elements - while ( i-- ) { - adoptValue( resolveValues[ i ], updateFunc( i ), master.reject ); - } - - return master.promise(); - } -} ); - - -// These usually indicate a programmer mistake during development, -// warn about them ASAP rather than swallowing them by default. 
-var rerrorNames = /^(Eval|Internal|Range|Reference|Syntax|Type|URI)Error$/; - -jQuery.Deferred.exceptionHook = function( error, stack ) { - - // Support: IE 8 - 9 only - // Console exists when dev tools are open, which can happen at any time - if ( window.console && window.console.warn && error && rerrorNames.test( error.name ) ) { - window.console.warn( "jQuery.Deferred exception: " + error.message, error.stack, stack ); - } -}; - - - - -jQuery.readyException = function( error ) { - window.setTimeout( function() { - throw error; - } ); -}; - - - - -// The deferred used on DOM ready -var readyList = jQuery.Deferred(); - -jQuery.fn.ready = function( fn ) { - - readyList - .then( fn ) - - // Wrap jQuery.readyException in a function so that the lookup - // happens at the time of error handling instead of callback - // registration. - .catch( function( error ) { - jQuery.readyException( error ); - } ); - - return this; -}; - -jQuery.extend( { - - // Is the DOM ready to be used? Set to true once it occurs. - isReady: false, - - // A counter to track how many items to wait for before - // the ready event fires. See #6781 - readyWait: 1, - - // Handle when the DOM is ready - ready: function( wait ) { - - // Abort if there are pending holds or we're already ready - if ( wait === true ? --jQuery.readyWait : jQuery.isReady ) { - return; - } - - // Remember that the DOM is ready - jQuery.isReady = true; - - // If a normal DOM Ready event fired, decrement, and wait if need be - if ( wait !== true && --jQuery.readyWait > 0 ) { - return; - } - - // If there are functions bound, to execute - readyList.resolveWith( document, [ jQuery ] ); - } -} ); - -jQuery.ready.then = readyList.then; - -// The ready event handler and self cleanup method -function completed() { - document.removeEventListener( "DOMContentLoaded", completed ); - window.removeEventListener( "load", completed ); - jQuery.ready(); -} - -// Catch cases where $(document).ready() is called -// after the browser event has already occurred. -// Support: IE <=9 - 10 only -// Older IE sometimes signals "interactive" too soon -if ( document.readyState === "complete" || - ( document.readyState !== "loading" && !document.documentElement.doScroll ) ) { - - // Handle it asynchronously to allow scripts the opportunity to delay ready - window.setTimeout( jQuery.ready ); - -} else { - - // Use the handy event callback - document.addEventListener( "DOMContentLoaded", completed ); - - // A fallback to window.onload, that will always work - window.addEventListener( "load", completed ); -} - - - - -// Multifunctional method to get and set values of a collection -// The value/s can optionally be executed if it's a function -var access = function( elems, fn, key, value, chainable, emptyGet, raw ) { - var i = 0, - len = elems.length, - bulk = key == null; - - // Sets many values - if ( toType( key ) === "object" ) { - chainable = true; - for ( i in key ) { - access( elems, fn, i, key[ i ], true, emptyGet, raw ); - } - - // Sets one value - } else if ( value !== undefined ) { - chainable = true; - - if ( !isFunction( value ) ) { - raw = true; - } - - if ( bulk ) { - - // Bulk operations run against the entire set - if ( raw ) { - fn.call( elems, value ); - fn = null; - - // ...except when executing function values - } else { - bulk = fn; - fn = function( elem, _key, value ) { - return bulk.call( jQuery( elem ), value ); - }; - } - } - - if ( fn ) { - for ( ; i < len; i++ ) { - fn( - elems[ i ], key, raw ? 
- value : - value.call( elems[ i ], i, fn( elems[ i ], key ) ) - ); - } - } - } - - if ( chainable ) { - return elems; - } - - // Gets - if ( bulk ) { - return fn.call( elems ); - } - - return len ? fn( elems[ 0 ], key ) : emptyGet; -}; - - -// Matches dashed string for camelizing -var rmsPrefix = /^-ms-/, - rdashAlpha = /-([a-z])/g; - -// Used by camelCase as callback to replace() -function fcamelCase( _all, letter ) { - return letter.toUpperCase(); -} - -// Convert dashed to camelCase; used by the css and data modules -// Support: IE <=9 - 11, Edge 12 - 15 -// Microsoft forgot to hump their vendor prefix (#9572) -function camelCase( string ) { - return string.replace( rmsPrefix, "ms-" ).replace( rdashAlpha, fcamelCase ); -} -var acceptData = function( owner ) { - - // Accepts only: - // - Node - // - Node.ELEMENT_NODE - // - Node.DOCUMENT_NODE - // - Object - // - Any - return owner.nodeType === 1 || owner.nodeType === 9 || !( +owner.nodeType ); -}; - - - - -function Data() { - this.expando = jQuery.expando + Data.uid++; -} - -Data.uid = 1; - -Data.prototype = { - - cache: function( owner ) { - - // Check if the owner object already has a cache - var value = owner[ this.expando ]; - - // If not, create one - if ( !value ) { - value = {}; - - // We can accept data for non-element nodes in modern browsers, - // but we should not, see #8335. - // Always return an empty object. - if ( acceptData( owner ) ) { - - // If it is a node unlikely to be stringify-ed or looped over - // use plain assignment - if ( owner.nodeType ) { - owner[ this.expando ] = value; - - // Otherwise secure it in a non-enumerable property - // configurable must be true to allow the property to be - // deleted when data is removed - } else { - Object.defineProperty( owner, this.expando, { - value: value, - configurable: true - } ); - } - } - } - - return value; - }, - set: function( owner, data, value ) { - var prop, - cache = this.cache( owner ); - - // Handle: [ owner, key, value ] args - // Always use camelCase key (gh-2257) - if ( typeof data === "string" ) { - cache[ camelCase( data ) ] = value; - - // Handle: [ owner, { properties } ] args - } else { - - // Copy the properties one-by-one to the cache object - for ( prop in data ) { - cache[ camelCase( prop ) ] = data[ prop ]; - } - } - return cache; - }, - get: function( owner, key ) { - return key === undefined ? - this.cache( owner ) : - - // Always use camelCase key (gh-2257) - owner[ this.expando ] && owner[ this.expando ][ camelCase( key ) ]; - }, - access: function( owner, key, value ) { - - // In cases where either: - // - // 1. No key was specified - // 2. A string key was specified, but no value provided - // - // Take the "read" path and allow the get method to determine - // which value to return, respectively either: - // - // 1. The entire cache object - // 2. The data stored at the key - // - if ( key === undefined || - ( ( key && typeof key === "string" ) && value === undefined ) ) { - - return this.get( owner, key ); - } - - // When the key is not a string, or both a key and value - // are specified, set or extend (existing objects) with either: - // - // 1. An object of properties - // 2. A key and value - // - this.set( owner, key, value ); - - // Since the "set" path can have two possible entry points - // return the expected data based on which path was taken[*] - return value !== undefined ? 
value : key; - }, - remove: function( owner, key ) { - var i, - cache = owner[ this.expando ]; - - if ( cache === undefined ) { - return; - } - - if ( key !== undefined ) { - - // Support array or space separated string of keys - if ( Array.isArray( key ) ) { - - // If key is an array of keys... - // We always set camelCase keys, so remove that. - key = key.map( camelCase ); - } else { - key = camelCase( key ); - - // If a key with the spaces exists, use it. - // Otherwise, create an array by matching non-whitespace - key = key in cache ? - [ key ] : - ( key.match( rnothtmlwhite ) || [] ); - } - - i = key.length; - - while ( i-- ) { - delete cache[ key[ i ] ]; - } - } - - // Remove the expando if there's no more data - if ( key === undefined || jQuery.isEmptyObject( cache ) ) { - - // Support: Chrome <=35 - 45 - // Webkit & Blink performance suffers when deleting properties - // from DOM nodes, so set to undefined instead - // https://bugs.chromium.org/p/chromium/issues/detail?id=378607 (bug restricted) - if ( owner.nodeType ) { - owner[ this.expando ] = undefined; - } else { - delete owner[ this.expando ]; - } - } - }, - hasData: function( owner ) { - var cache = owner[ this.expando ]; - return cache !== undefined && !jQuery.isEmptyObject( cache ); - } -}; -var dataPriv = new Data(); - -var dataUser = new Data(); - - - -// Implementation Summary -// -// 1. Enforce API surface and semantic compatibility with 1.9.x branch -// 2. Improve the module's maintainability by reducing the storage -// paths to a single mechanism. -// 3. Use the same single mechanism to support "private" and "user" data. -// 4. _Never_ expose "private" data to user code (TODO: Drop _data, _removeData) -// 5. Avoid exposing implementation details on user objects (eg. expando properties) -// 6. Provide a clear path for implementation upgrade to WeakMap in 2014 - -var rbrace = /^(?:\{[\w\W]*\}|\[[\w\W]*\])$/, - rmultiDash = /[A-Z]/g; - -function getData( data ) { - if ( data === "true" ) { - return true; - } - - if ( data === "false" ) { - return false; - } - - if ( data === "null" ) { - return null; - } - - // Only convert to a number if it doesn't change the string - if ( data === +data + "" ) { - return +data; - } - - if ( rbrace.test( data ) ) { - return JSON.parse( data ); - } - - return data; -} - -function dataAttr( elem, key, data ) { - var name; - - // If nothing was found internally, try to fetch any - // data from the HTML5 data-* attribute - if ( data === undefined && elem.nodeType === 1 ) { - name = "data-" + key.replace( rmultiDash, "-$&" ).toLowerCase(); - data = elem.getAttribute( name ); - - if ( typeof data === "string" ) { - try { - data = getData( data ); - } catch ( e ) {} - - // Make sure we set the data so it isn't changed later - dataUser.set( elem, key, data ); - } else { - data = undefined; - } - } - return data; -} - -jQuery.extend( { - hasData: function( elem ) { - return dataUser.hasData( elem ) || dataPriv.hasData( elem ); - }, - - data: function( elem, name, data ) { - return dataUser.access( elem, name, data ); - }, - - removeData: function( elem, name ) { - dataUser.remove( elem, name ); - }, - - // TODO: Now that all calls to _data and _removeData have been replaced - // with direct calls to dataPriv methods, these can be deprecated. 
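The Data class above backs both stores created right after it (dataPriv for internal bookkeeping, dataUser for user data), while getData and dataAttr convert HTML5 data-* attributes into typed values. A small sketch of the user-facing behavior, assuming a page containing <div id="box" data-user-id="42" data-flag="true"> (the element and its attributes are illustrative):

var box = jQuery( "#box" );

box.data( "userId" );       // 42 - the key is camelCased and "42" is converted to a number by getData
box.data( "flag" );         // true - "true"/"false"/"null" and JSON strings are converted as well

box.data( "userId", 7 );    // writes to the dataUser cache only...
box.attr( "data-user-id" ); // ..."42" - the attribute itself is never rewritten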
- _data: function( elem, name, data ) { - return dataPriv.access( elem, name, data ); - }, - - _removeData: function( elem, name ) { - dataPriv.remove( elem, name ); - } -} ); - -jQuery.fn.extend( { - data: function( key, value ) { - var i, name, data, - elem = this[ 0 ], - attrs = elem && elem.attributes; - - // Gets all values - if ( key === undefined ) { - if ( this.length ) { - data = dataUser.get( elem ); - - if ( elem.nodeType === 1 && !dataPriv.get( elem, "hasDataAttrs" ) ) { - i = attrs.length; - while ( i-- ) { - - // Support: IE 11 only - // The attrs elements can be null (#14894) - if ( attrs[ i ] ) { - name = attrs[ i ].name; - if ( name.indexOf( "data-" ) === 0 ) { - name = camelCase( name.slice( 5 ) ); - dataAttr( elem, name, data[ name ] ); - } - } - } - dataPriv.set( elem, "hasDataAttrs", true ); - } - } - - return data; - } - - // Sets multiple values - if ( typeof key === "object" ) { - return this.each( function() { - dataUser.set( this, key ); - } ); - } - - return access( this, function( value ) { - var data; - - // The calling jQuery object (element matches) is not empty - // (and therefore has an element appears at this[ 0 ]) and the - // `value` parameter was not undefined. An empty jQuery object - // will result in `undefined` for elem = this[ 0 ] which will - // throw an exception if an attempt to read a data cache is made. - if ( elem && value === undefined ) { - - // Attempt to get data from the cache - // The key will always be camelCased in Data - data = dataUser.get( elem, key ); - if ( data !== undefined ) { - return data; - } - - // Attempt to "discover" the data in - // HTML5 custom data-* attrs - data = dataAttr( elem, key ); - if ( data !== undefined ) { - return data; - } - - // We tried really hard, but the data doesn't exist. - return; - } - - // Set the data... 
- this.each( function() { - - // We always store the camelCased key - dataUser.set( this, key, value ); - } ); - }, null, value, arguments.length > 1, null, true ); - }, - - removeData: function( key ) { - return this.each( function() { - dataUser.remove( this, key ); - } ); - } -} ); - - -jQuery.extend( { - queue: function( elem, type, data ) { - var queue; - - if ( elem ) { - type = ( type || "fx" ) + "queue"; - queue = dataPriv.get( elem, type ); - - // Speed up dequeue by getting out quickly if this is just a lookup - if ( data ) { - if ( !queue || Array.isArray( data ) ) { - queue = dataPriv.access( elem, type, jQuery.makeArray( data ) ); - } else { - queue.push( data ); - } - } - return queue || []; - } - }, - - dequeue: function( elem, type ) { - type = type || "fx"; - - var queue = jQuery.queue( elem, type ), - startLength = queue.length, - fn = queue.shift(), - hooks = jQuery._queueHooks( elem, type ), - next = function() { - jQuery.dequeue( elem, type ); - }; - - // If the fx queue is dequeued, always remove the progress sentinel - if ( fn === "inprogress" ) { - fn = queue.shift(); - startLength--; - } - - if ( fn ) { - - // Add a progress sentinel to prevent the fx queue from being - // automatically dequeued - if ( type === "fx" ) { - queue.unshift( "inprogress" ); - } - - // Clear up the last queue stop function - delete hooks.stop; - fn.call( elem, next, hooks ); - } - - if ( !startLength && hooks ) { - hooks.empty.fire(); - } - }, - - // Not public - generate a queueHooks object, or return the current one - _queueHooks: function( elem, type ) { - var key = type + "queueHooks"; - return dataPriv.get( elem, key ) || dataPriv.access( elem, key, { - empty: jQuery.Callbacks( "once memory" ).add( function() { - dataPriv.remove( elem, [ type + "queue", key ] ); - } ) - } ); - } -} ); - -jQuery.fn.extend( { - queue: function( type, data ) { - var setter = 2; - - if ( typeof type !== "string" ) { - data = type; - type = "fx"; - setter--; - } - - if ( arguments.length < setter ) { - return jQuery.queue( this[ 0 ], type ); - } - - return data === undefined ? 
- this : - this.each( function() { - var queue = jQuery.queue( this, type, data ); - - // Ensure a hooks for this queue - jQuery._queueHooks( this, type ); - - if ( type === "fx" && queue[ 0 ] !== "inprogress" ) { - jQuery.dequeue( this, type ); - } - } ); - }, - dequeue: function( type ) { - return this.each( function() { - jQuery.dequeue( this, type ); - } ); - }, - clearQueue: function( type ) { - return this.queue( type || "fx", [] ); - }, - - // Get a promise resolved when queues of a certain type - // are emptied (fx is the type by default) - promise: function( type, obj ) { - var tmp, - count = 1, - defer = jQuery.Deferred(), - elements = this, - i = this.length, - resolve = function() { - if ( !( --count ) ) { - defer.resolveWith( elements, [ elements ] ); - } - }; - - if ( typeof type !== "string" ) { - obj = type; - type = undefined; - } - type = type || "fx"; - - while ( i-- ) { - tmp = dataPriv.get( elements[ i ], type + "queueHooks" ); - if ( tmp && tmp.empty ) { - count++; - tmp.empty.add( resolve ); - } - } - resolve(); - return defer.promise( obj ); - } -} ); -var pnum = ( /[+-]?(?:\d*\.|)\d+(?:[eE][+-]?\d+|)/ ).source; - -var rcssNum = new RegExp( "^(?:([+-])=|)(" + pnum + ")([a-z%]*)$", "i" ); - - -var cssExpand = [ "Top", "Right", "Bottom", "Left" ]; - -var documentElement = document.documentElement; - - - - var isAttached = function( elem ) { - return jQuery.contains( elem.ownerDocument, elem ); - }, - composed = { composed: true }; - - // Support: IE 9 - 11+, Edge 12 - 18+, iOS 10.0 - 10.2 only - // Check attachment across shadow DOM boundaries when possible (gh-3504) - // Support: iOS 10.0-10.2 only - // Early iOS 10 versions support `attachShadow` but not `getRootNode`, - // leading to errors. We need to check for `getRootNode`. - if ( documentElement.getRootNode ) { - isAttached = function( elem ) { - return jQuery.contains( elem.ownerDocument, elem ) || - elem.getRootNode( composed ) === elem.ownerDocument; - }; - } -var isHiddenWithinTree = function( elem, el ) { - - // isHiddenWithinTree might be called from jQuery#filter function; - // in that case, element will be second argument - elem = el || elem; - - // Inline style trumps all - return elem.style.display === "none" || - elem.style.display === "" && - - // Otherwise, check computed style - // Support: Firefox <=43 - 45 - // Disconnected elements can have computed display: none, so first confirm that elem is - // in the document. - isAttached( elem ) && - - jQuery.css( elem, "display" ) === "none"; - }; - - - -function adjustCSS( elem, prop, valueParts, tween ) { - var adjusted, scale, - maxIterations = 20, - currentValue = tween ? - function() { - return tween.cur(); - } : - function() { - return jQuery.css( elem, prop, "" ); - }, - initial = currentValue(), - unit = valueParts && valueParts[ 3 ] || ( jQuery.cssNumber[ prop ] ? 
"" : "px" ), - - // Starting value computation is required for potential unit mismatches - initialInUnit = elem.nodeType && - ( jQuery.cssNumber[ prop ] || unit !== "px" && +initial ) && - rcssNum.exec( jQuery.css( elem, prop ) ); - - if ( initialInUnit && initialInUnit[ 3 ] !== unit ) { - - // Support: Firefox <=54 - // Halve the iteration target value to prevent interference from CSS upper bounds (gh-2144) - initial = initial / 2; - - // Trust units reported by jQuery.css - unit = unit || initialInUnit[ 3 ]; - - // Iteratively approximate from a nonzero starting point - initialInUnit = +initial || 1; - - while ( maxIterations-- ) { - - // Evaluate and update our best guess (doubling guesses that zero out). - // Finish if the scale equals or crosses 1 (making the old*new product non-positive). - jQuery.style( elem, prop, initialInUnit + unit ); - if ( ( 1 - scale ) * ( 1 - ( scale = currentValue() / initial || 0.5 ) ) <= 0 ) { - maxIterations = 0; - } - initialInUnit = initialInUnit / scale; - - } - - initialInUnit = initialInUnit * 2; - jQuery.style( elem, prop, initialInUnit + unit ); - - // Make sure we update the tween properties later on - valueParts = valueParts || []; - } - - if ( valueParts ) { - initialInUnit = +initialInUnit || +initial || 0; - - // Apply relative offset (+=/-=) if specified - adjusted = valueParts[ 1 ] ? - initialInUnit + ( valueParts[ 1 ] + 1 ) * valueParts[ 2 ] : - +valueParts[ 2 ]; - if ( tween ) { - tween.unit = unit; - tween.start = initialInUnit; - tween.end = adjusted; - } - } - return adjusted; -} - - -var defaultDisplayMap = {}; - -function getDefaultDisplay( elem ) { - var temp, - doc = elem.ownerDocument, - nodeName = elem.nodeName, - display = defaultDisplayMap[ nodeName ]; - - if ( display ) { - return display; - } - - temp = doc.body.appendChild( doc.createElement( nodeName ) ); - display = jQuery.css( temp, "display" ); - - temp.parentNode.removeChild( temp ); - - if ( display === "none" ) { - display = "block"; - } - defaultDisplayMap[ nodeName ] = display; - - return display; -} - -function showHide( elements, show ) { - var display, elem, - values = [], - index = 0, - length = elements.length; - - // Determine new display value for elements that need to change - for ( ; index < length; index++ ) { - elem = elements[ index ]; - if ( !elem.style ) { - continue; - } - - display = elem.style.display; - if ( show ) { - - // Since we force visibility upon cascade-hidden elements, an immediate (and slow) - // check is required in this first loop unless we have a nonempty display value (either - // inline or about-to-be-restored) - if ( display === "none" ) { - values[ index ] = dataPriv.get( elem, "display" ) || null; - if ( !values[ index ] ) { - elem.style.display = ""; - } - } - if ( elem.style.display === "" && isHiddenWithinTree( elem ) ) { - values[ index ] = getDefaultDisplay( elem ); - } - } else { - if ( display !== "none" ) { - values[ index ] = "none"; - - // Remember what we're overwriting - dataPriv.set( elem, "display", display ); - } - } - } - - // Set the display of the elements in a second loop to avoid constant reflow - for ( index = 0; index < length; index++ ) { - if ( values[ index ] != null ) { - elements[ index ].style.display = values[ index ]; - } - } - - return elements; -} - -jQuery.fn.extend( { - show: function() { - return showHide( this, true ); - }, - hide: function() { - return showHide( this ); - }, - toggle: function( state ) { - if ( typeof state === "boolean" ) { - return state ? 
this.show() : this.hide(); - } - - return this.each( function() { - if ( isHiddenWithinTree( this ) ) { - jQuery( this ).show(); - } else { - jQuery( this ).hide(); - } - } ); - } -} ); -var rcheckableType = ( /^(?:checkbox|radio)$/i ); - -var rtagName = ( /<([a-z][^\/\0>\x20\t\r\n\f]*)/i ); - -var rscriptType = ( /^$|^module$|\/(?:java|ecma)script/i ); - - - -( function() { - var fragment = document.createDocumentFragment(), - div = fragment.appendChild( document.createElement( "div" ) ), - input = document.createElement( "input" ); - - // Support: Android 4.0 - 4.3 only - // Check state lost if the name is set (#11217) - // Support: Windows Web Apps (WWA) - // `name` and `type` must use .setAttribute for WWA (#14901) - input.setAttribute( "type", "radio" ); - input.setAttribute( "checked", "checked" ); - input.setAttribute( "name", "t" ); - - div.appendChild( input ); - - // Support: Android <=4.1 only - // Older WebKit doesn't clone checked state correctly in fragments - support.checkClone = div.cloneNode( true ).cloneNode( true ).lastChild.checked; - - // Support: IE <=11 only - // Make sure textarea (and checkbox) defaultValue is properly cloned - div.innerHTML = ""; - support.noCloneChecked = !!div.cloneNode( true ).lastChild.defaultValue; - - // Support: IE <=9 only - // IE <=9 replaces "; - support.option = !!div.lastChild; -} )(); - - -// We have to close these tags to support XHTML (#13200) -var wrapMap = { - - // XHTML parsers do not magically insert elements in the - // same way that tag soup parsers do. So we cannot shorten - // this by omitting or other required elements. - thead: [ 1, "", "
" ], - col: [ 2, "", "
" ], - tr: [ 2, "", "
" ], - td: [ 3, "", "
" ], - - _default: [ 0, "", "" ] -}; - -wrapMap.tbody = wrapMap.tfoot = wrapMap.colgroup = wrapMap.caption = wrapMap.thead; -wrapMap.th = wrapMap.td; - -// Support: IE <=9 only -if ( !support.option ) { - wrapMap.optgroup = wrapMap.option = [ 1, "" ]; -} - - -function getAll( context, tag ) { - - // Support: IE <=9 - 11 only - // Use typeof to avoid zero-argument method invocation on host objects (#15151) - var ret; - - if ( typeof context.getElementsByTagName !== "undefined" ) { - ret = context.getElementsByTagName( tag || "*" ); - - } else if ( typeof context.querySelectorAll !== "undefined" ) { - ret = context.querySelectorAll( tag || "*" ); - - } else { - ret = []; - } - - if ( tag === undefined || tag && nodeName( context, tag ) ) { - return jQuery.merge( [ context ], ret ); - } - - return ret; -} - - -// Mark scripts as having already been evaluated -function setGlobalEval( elems, refElements ) { - var i = 0, - l = elems.length; - - for ( ; i < l; i++ ) { - dataPriv.set( - elems[ i ], - "globalEval", - !refElements || dataPriv.get( refElements[ i ], "globalEval" ) - ); - } -} - - -var rhtml = /<|&#?\w+;/; - -function buildFragment( elems, context, scripts, selection, ignored ) { - var elem, tmp, tag, wrap, attached, j, - fragment = context.createDocumentFragment(), - nodes = [], - i = 0, - l = elems.length; - - for ( ; i < l; i++ ) { - elem = elems[ i ]; - - if ( elem || elem === 0 ) { - - // Add nodes directly - if ( toType( elem ) === "object" ) { - - // Support: Android <=4.0 only, PhantomJS 1 only - // push.apply(_, arraylike) throws on ancient WebKit - jQuery.merge( nodes, elem.nodeType ? [ elem ] : elem ); - - // Convert non-html into a text node - } else if ( !rhtml.test( elem ) ) { - nodes.push( context.createTextNode( elem ) ); - - // Convert html into DOM nodes - } else { - tmp = tmp || fragment.appendChild( context.createElement( "div" ) ); - - // Deserialize a standard representation - tag = ( rtagName.exec( elem ) || [ "", "" ] )[ 1 ].toLowerCase(); - wrap = wrapMap[ tag ] || wrapMap._default; - tmp.innerHTML = wrap[ 1 ] + jQuery.htmlPrefilter( elem ) + wrap[ 2 ]; - - // Descend through wrappers to the right content - j = wrap[ 0 ]; - while ( j-- ) { - tmp = tmp.lastChild; - } - - // Support: Android <=4.0 only, PhantomJS 1 only - // push.apply(_, arraylike) throws on ancient WebKit - jQuery.merge( nodes, tmp.childNodes ); - - // Remember the top-level container - tmp = fragment.firstChild; - - // Ensure the created nodes are orphaned (#12392) - tmp.textContent = ""; - } - } - } - - // Remove wrapper from fragment - fragment.textContent = ""; - - i = 0; - while ( ( elem = nodes[ i++ ] ) ) { - - // Skip elements already in the context collection (trac-4087) - if ( selection && jQuery.inArray( elem, selection ) > -1 ) { - if ( ignored ) { - ignored.push( elem ); - } - continue; - } - - attached = isAttached( elem ); - - // Append to fragment - tmp = getAll( fragment.appendChild( elem ), "script" ); - - // Preserve script evaluation history - if ( attached ) { - setGlobalEval( tmp ); - } - - // Capture executables - if ( scripts ) { - j = 0; - while ( ( elem = tmp[ j++ ] ) ) { - if ( rscriptType.test( elem.type || "" ) ) { - scripts.push( elem ); - } - } - } - } - - return fragment; -} - - -var - rkeyEvent = /^key/, - rmouseEvent = /^(?:mouse|pointer|contextmenu|drag|drop)|click/, - rtypenamespace = /^([^.]*)(?:\.(.+)|)/; - -function returnTrue() { - return true; -} - -function returnFalse() { - return false; -} - -// Support: IE <=9 - 11+ -// focus() and blur() are 
asynchronous, except when they are no-op. -// So expect focus to be synchronous when the element is already active, -// and blur to be synchronous when the element is not already active. -// (focus and blur are always synchronous in other supported browsers, -// this just defines when we can count on it). -function expectSync( elem, type ) { - return ( elem === safeActiveElement() ) === ( type === "focus" ); -} - -// Support: IE <=9 only -// Accessing document.activeElement can throw unexpectedly -// https://bugs.jquery.com/ticket/13393 -function safeActiveElement() { - try { - return document.activeElement; - } catch ( err ) { } -} - -function on( elem, types, selector, data, fn, one ) { - var origFn, type; - - // Types can be a map of types/handlers - if ( typeof types === "object" ) { - - // ( types-Object, selector, data ) - if ( typeof selector !== "string" ) { - - // ( types-Object, data ) - data = data || selector; - selector = undefined; - } - for ( type in types ) { - on( elem, type, selector, data, types[ type ], one ); - } - return elem; - } - - if ( data == null && fn == null ) { - - // ( types, fn ) - fn = selector; - data = selector = undefined; - } else if ( fn == null ) { - if ( typeof selector === "string" ) { - - // ( types, selector, fn ) - fn = data; - data = undefined; - } else { - - // ( types, data, fn ) - fn = data; - data = selector; - selector = undefined; - } - } - if ( fn === false ) { - fn = returnFalse; - } else if ( !fn ) { - return elem; - } - - if ( one === 1 ) { - origFn = fn; - fn = function( event ) { - - // Can use an empty set, since event contains the info - jQuery().off( event ); - return origFn.apply( this, arguments ); - }; - - // Use same guid so caller can remove using origFn - fn.guid = origFn.guid || ( origFn.guid = jQuery.guid++ ); - } - return elem.each( function() { - jQuery.event.add( this, types, fn, data, selector ); - } ); -} - -/* - * Helper functions for managing events -- not part of the public interface. - * Props to Dean Edwards' addEvent library for many of the ideas. - */ -jQuery.event = { - - global: {}, - - add: function( elem, types, handler, data, selector ) { - - var handleObjIn, eventHandle, tmp, - events, t, handleObj, - special, handlers, type, namespaces, origType, - elemData = dataPriv.get( elem ); - - // Only attach events to objects that accept data - if ( !acceptData( elem ) ) { - return; - } - - // Caller can pass in an object of custom data in lieu of the handler - if ( handler.handler ) { - handleObjIn = handler; - handler = handleObjIn.handler; - selector = handleObjIn.selector; - } - - // Ensure that invalid selectors throw exceptions at attach time - // Evaluate against documentElement in case elem is a non-element node (e.g., document) - if ( selector ) { - jQuery.find.matchesSelector( documentElement, selector ); - } - - // Make sure that the handler has a unique ID, used to find/remove it later - if ( !handler.guid ) { - handler.guid = jQuery.guid++; - } - - // Init the element's event structure and main handler, if this is the first - if ( !( events = elemData.events ) ) { - events = elemData.events = Object.create( null ); - } - if ( !( eventHandle = elemData.handle ) ) { - eventHandle = elemData.handle = function( e ) { - - // Discard the second event of a jQuery.event.trigger() and - // when an event is called after a page has unloaded - return typeof jQuery !== "undefined" && jQuery.event.triggered !== e.type ? 
- jQuery.event.dispatch.apply( elem, arguments ) : undefined; - }; - } - - // Handle multiple events separated by a space - types = ( types || "" ).match( rnothtmlwhite ) || [ "" ]; - t = types.length; - while ( t-- ) { - tmp = rtypenamespace.exec( types[ t ] ) || []; - type = origType = tmp[ 1 ]; - namespaces = ( tmp[ 2 ] || "" ).split( "." ).sort(); - - // There *must* be a type, no attaching namespace-only handlers - if ( !type ) { - continue; - } - - // If event changes its type, use the special event handlers for the changed type - special = jQuery.event.special[ type ] || {}; - - // If selector defined, determine special event api type, otherwise given type - type = ( selector ? special.delegateType : special.bindType ) || type; - - // Update special based on newly reset type - special = jQuery.event.special[ type ] || {}; - - // handleObj is passed to all event handlers - handleObj = jQuery.extend( { - type: type, - origType: origType, - data: data, - handler: handler, - guid: handler.guid, - selector: selector, - needsContext: selector && jQuery.expr.match.needsContext.test( selector ), - namespace: namespaces.join( "." ) - }, handleObjIn ); - - // Init the event handler queue if we're the first - if ( !( handlers = events[ type ] ) ) { - handlers = events[ type ] = []; - handlers.delegateCount = 0; - - // Only use addEventListener if the special events handler returns false - if ( !special.setup || - special.setup.call( elem, data, namespaces, eventHandle ) === false ) { - - if ( elem.addEventListener ) { - elem.addEventListener( type, eventHandle ); - } - } - } - - if ( special.add ) { - special.add.call( elem, handleObj ); - - if ( !handleObj.handler.guid ) { - handleObj.handler.guid = handler.guid; - } - } - - // Add to the element's handler list, delegates in front - if ( selector ) { - handlers.splice( handlers.delegateCount++, 0, handleObj ); - } else { - handlers.push( handleObj ); - } - - // Keep track of which events have ever been used, for event optimization - jQuery.event.global[ type ] = true; - } - - }, - - // Detach an event or set of events from an element - remove: function( elem, types, handler, selector, mappedTypes ) { - - var j, origCount, tmp, - events, t, handleObj, - special, handlers, type, namespaces, origType, - elemData = dataPriv.hasData( elem ) && dataPriv.get( elem ); - - if ( !elemData || !( events = elemData.events ) ) { - return; - } - - // Once for each type.namespace in types; type may be omitted - types = ( types || "" ).match( rnothtmlwhite ) || [ "" ]; - t = types.length; - while ( t-- ) { - tmp = rtypenamespace.exec( types[ t ] ) || []; - type = origType = tmp[ 1 ]; - namespaces = ( tmp[ 2 ] || "" ).split( "." ).sort(); - - // Unbind all events (on this namespace, if provided) for the element - if ( !type ) { - for ( type in events ) { - jQuery.event.remove( elem, type + types[ t ], handler, selector, true ); - } - continue; - } - - special = jQuery.event.special[ type ] || {}; - type = ( selector ? 
special.delegateType : special.bindType ) || type; - handlers = events[ type ] || []; - tmp = tmp[ 2 ] && - new RegExp( "(^|\\.)" + namespaces.join( "\\.(?:.*\\.|)" ) + "(\\.|$)" ); - - // Remove matching events - origCount = j = handlers.length; - while ( j-- ) { - handleObj = handlers[ j ]; - - if ( ( mappedTypes || origType === handleObj.origType ) && - ( !handler || handler.guid === handleObj.guid ) && - ( !tmp || tmp.test( handleObj.namespace ) ) && - ( !selector || selector === handleObj.selector || - selector === "**" && handleObj.selector ) ) { - handlers.splice( j, 1 ); - - if ( handleObj.selector ) { - handlers.delegateCount--; - } - if ( special.remove ) { - special.remove.call( elem, handleObj ); - } - } - } - - // Remove generic event handler if we removed something and no more handlers exist - // (avoids potential for endless recursion during removal of special event handlers) - if ( origCount && !handlers.length ) { - if ( !special.teardown || - special.teardown.call( elem, namespaces, elemData.handle ) === false ) { - - jQuery.removeEvent( elem, type, elemData.handle ); - } - - delete events[ type ]; - } - } - - // Remove data and the expando if it's no longer used - if ( jQuery.isEmptyObject( events ) ) { - dataPriv.remove( elem, "handle events" ); - } - }, - - dispatch: function( nativeEvent ) { - - var i, j, ret, matched, handleObj, handlerQueue, - args = new Array( arguments.length ), - - // Make a writable jQuery.Event from the native event object - event = jQuery.event.fix( nativeEvent ), - - handlers = ( - dataPriv.get( this, "events" ) || Object.create( null ) - )[ event.type ] || [], - special = jQuery.event.special[ event.type ] || {}; - - // Use the fix-ed jQuery.Event rather than the (read-only) native event - args[ 0 ] = event; - - for ( i = 1; i < arguments.length; i++ ) { - args[ i ] = arguments[ i ]; - } - - event.delegateTarget = this; - - // Call the preDispatch hook for the mapped type, and let it bail if desired - if ( special.preDispatch && special.preDispatch.call( this, event ) === false ) { - return; - } - - // Determine handlers - handlerQueue = jQuery.event.handlers.call( this, event, handlers ); - - // Run delegates first; they may want to stop propagation beneath us - i = 0; - while ( ( matched = handlerQueue[ i++ ] ) && !event.isPropagationStopped() ) { - event.currentTarget = matched.elem; - - j = 0; - while ( ( handleObj = matched.handlers[ j++ ] ) && - !event.isImmediatePropagationStopped() ) { - - // If the event is namespaced, then each handler is only invoked if it is - // specially universal or its namespaces are a superset of the event's. 
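jQuery.event.add, jQuery.event.remove and the dispatch/handlers pair implement both event namespaces and delegation: delegated handleObj entries are kept ahead of directly bound ones (delegateCount) so dispatch can walk the delegation path first. A hedged sketch of the bookkeeping this supports; the ".item" selector and the namespaces are made-up illustrations:

// Delegated, namespaced handler: runs for current and future ".item" descendants.
jQuery( document ).on( "click.widget", ".item", function( event ) {
	window.console.log( "delegated click on", event.currentTarget );
} );

// Directly bound handler on the same element, different namespace.
jQuery( document ).on( "click.analytics", function() {
	window.console.log( "document-level click" );
} );

// Removing by namespace detaches only the matching handleObj entries.
jQuery( document ).off( ".widget" );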
- if ( !event.rnamespace || handleObj.namespace === false || - event.rnamespace.test( handleObj.namespace ) ) { - - event.handleObj = handleObj; - event.data = handleObj.data; - - ret = ( ( jQuery.event.special[ handleObj.origType ] || {} ).handle || - handleObj.handler ).apply( matched.elem, args ); - - if ( ret !== undefined ) { - if ( ( event.result = ret ) === false ) { - event.preventDefault(); - event.stopPropagation(); - } - } - } - } - } - - // Call the postDispatch hook for the mapped type - if ( special.postDispatch ) { - special.postDispatch.call( this, event ); - } - - return event.result; - }, - - handlers: function( event, handlers ) { - var i, handleObj, sel, matchedHandlers, matchedSelectors, - handlerQueue = [], - delegateCount = handlers.delegateCount, - cur = event.target; - - // Find delegate handlers - if ( delegateCount && - - // Support: IE <=9 - // Black-hole SVG instance trees (trac-13180) - cur.nodeType && - - // Support: Firefox <=42 - // Suppress spec-violating clicks indicating a non-primary pointer button (trac-3861) - // https://www.w3.org/TR/DOM-Level-3-Events/#event-type-click - // Support: IE 11 only - // ...but not arrow key "clicks" of radio inputs, which can have `button` -1 (gh-2343) - !( event.type === "click" && event.button >= 1 ) ) { - - for ( ; cur !== this; cur = cur.parentNode || this ) { - - // Don't check non-elements (#13208) - // Don't process clicks on disabled elements (#6911, #8165, #11382, #11764) - if ( cur.nodeType === 1 && !( event.type === "click" && cur.disabled === true ) ) { - matchedHandlers = []; - matchedSelectors = {}; - for ( i = 0; i < delegateCount; i++ ) { - handleObj = handlers[ i ]; - - // Don't conflict with Object.prototype properties (#13203) - sel = handleObj.selector + " "; - - if ( matchedSelectors[ sel ] === undefined ) { - matchedSelectors[ sel ] = handleObj.needsContext ? - jQuery( sel, this ).index( cur ) > -1 : - jQuery.find( sel, this, null, [ cur ] ).length; - } - if ( matchedSelectors[ sel ] ) { - matchedHandlers.push( handleObj ); - } - } - if ( matchedHandlers.length ) { - handlerQueue.push( { elem: cur, handlers: matchedHandlers } ); - } - } - } - } - - // Add the remaining (directly-bound) handlers - cur = this; - if ( delegateCount < handlers.length ) { - handlerQueue.push( { elem: cur, handlers: handlers.slice( delegateCount ) } ); - } - - return handlerQueue; - }, - - addProp: function( name, hook ) { - Object.defineProperty( jQuery.Event.prototype, name, { - enumerable: true, - configurable: true, - - get: isFunction( hook ) ? - function() { - if ( this.originalEvent ) { - return hook( this.originalEvent ); - } - } : - function() { - if ( this.originalEvent ) { - return this.originalEvent[ name ]; - } - }, - - set: function( value ) { - Object.defineProperty( this, name, { - enumerable: true, - configurable: true, - writable: true, - value: value - } ); - } - } ); - }, - - fix: function( originalEvent ) { - return originalEvent[ jQuery.expando ] ? - originalEvent : - new jQuery.Event( originalEvent ); - }, - - special: { - load: { - - // Prevent triggered image.load events from bubbling to window.load - noBubble: true - }, - click: { - - // Utilize native event to ensure correct state for checkable inputs - setup: function( data ) { - - // For mutual compressibility with _default, replace `this` access with a local var. - // `|| data` is dead code meant only to preserve the variable through minification. 
- var el = this || data; - - // Claim the first handler - if ( rcheckableType.test( el.type ) && - el.click && nodeName( el, "input" ) ) { - - // dataPriv.set( el, "click", ... ) - leverageNative( el, "click", returnTrue ); - } - - // Return false to allow normal processing in the caller - return false; - }, - trigger: function( data ) { - - // For mutual compressibility with _default, replace `this` access with a local var. - // `|| data` is dead code meant only to preserve the variable through minification. - var el = this || data; - - // Force setup before triggering a click - if ( rcheckableType.test( el.type ) && - el.click && nodeName( el, "input" ) ) { - - leverageNative( el, "click" ); - } - - // Return non-false to allow normal event-path propagation - return true; - }, - - // For cross-browser consistency, suppress native .click() on links - // Also prevent it if we're currently inside a leveraged native-event stack - _default: function( event ) { - var target = event.target; - return rcheckableType.test( target.type ) && - target.click && nodeName( target, "input" ) && - dataPriv.get( target, "click" ) || - nodeName( target, "a" ); - } - }, - - beforeunload: { - postDispatch: function( event ) { - - // Support: Firefox 20+ - // Firefox doesn't alert if the returnValue field is not set. - if ( event.result !== undefined && event.originalEvent ) { - event.originalEvent.returnValue = event.result; - } - } - } - } -}; - -// Ensure the presence of an event listener that handles manually-triggered -// synthetic events by interrupting progress until reinvoked in response to -// *native* events that it fires directly, ensuring that state changes have -// already occurred before other listeners are invoked. -function leverageNative( el, type, expectSync ) { - - // Missing expectSync indicates a trigger call, which must force setup through jQuery.event.add - if ( !expectSync ) { - if ( dataPriv.get( el, type ) === undefined ) { - jQuery.event.add( el, type, returnTrue ); - } - return; - } - - // Register the controller as a special universal handler for all event namespaces - dataPriv.set( el, type, false ); - jQuery.event.add( el, type, { - namespace: false, - handler: function( event ) { - var notAsync, result, - saved = dataPriv.get( this, type ); - - if ( ( event.isTrigger & 1 ) && this[ type ] ) { - - // Interrupt processing of the outer synthetic .trigger()ed event - // Saved data should be false in such cases, but might be a leftover capture object - // from an async native handler (gh-4350) - if ( !saved.length ) { - - // Store arguments for use when handling the inner native event - // There will always be at least one argument (an event object), so this array - // will not be confused with a leftover capture object. 
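leverageNative, sketched in the comment above, routes a synthetic .trigger() through the element's native method so that state changes (a checkbox toggling, an element gaining focus) have already happened by the time handlers run. The observable effect, with an illustrative checkbox id:

// Assumes <input type="checkbox" id="agree"> exists on the page.
jQuery( "#agree" ).on( "click", function() {

	// The native click has already toggled the box when this handler runs.
	window.console.log( "checked is now", this.checked );
} );

jQuery( "#agree" ).trigger( "click" ); // toggles the checkbox, then logs the new state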
- saved = slice.call( arguments ); - dataPriv.set( this, type, saved ); - - // Trigger the native event and capture its result - // Support: IE <=9 - 11+ - // focus() and blur() are asynchronous - notAsync = expectSync( this, type ); - this[ type ](); - result = dataPriv.get( this, type ); - if ( saved !== result || notAsync ) { - dataPriv.set( this, type, false ); - } else { - result = {}; - } - if ( saved !== result ) { - - // Cancel the outer synthetic event - event.stopImmediatePropagation(); - event.preventDefault(); - return result.value; - } - - // If this is an inner synthetic event for an event with a bubbling surrogate - // (focus or blur), assume that the surrogate already propagated from triggering the - // native event and prevent that from happening again here. - // This technically gets the ordering wrong w.r.t. to `.trigger()` (in which the - // bubbling surrogate propagates *after* the non-bubbling base), but that seems - // less bad than duplication. - } else if ( ( jQuery.event.special[ type ] || {} ).delegateType ) { - event.stopPropagation(); - } - - // If this is a native event triggered above, everything is now in order - // Fire an inner synthetic event with the original arguments - } else if ( saved.length ) { - - // ...and capture the result - dataPriv.set( this, type, { - value: jQuery.event.trigger( - - // Support: IE <=9 - 11+ - // Extend with the prototype to reset the above stopImmediatePropagation() - jQuery.extend( saved[ 0 ], jQuery.Event.prototype ), - saved.slice( 1 ), - this - ) - } ); - - // Abort handling of the native event - event.stopImmediatePropagation(); - } - } - } ); -} - -jQuery.removeEvent = function( elem, type, handle ) { - - // This "if" is needed for plain objects - if ( elem.removeEventListener ) { - elem.removeEventListener( type, handle ); - } -}; - -jQuery.Event = function( src, props ) { - - // Allow instantiation without the 'new' keyword - if ( !( this instanceof jQuery.Event ) ) { - return new jQuery.Event( src, props ); - } - - // Event object - if ( src && src.type ) { - this.originalEvent = src; - this.type = src.type; - - // Events bubbling up the document may have been marked as prevented - // by a handler lower down the tree; reflect the correct value. - this.isDefaultPrevented = src.defaultPrevented || - src.defaultPrevented === undefined && - - // Support: Android <=2.3 only - src.returnValue === false ? - returnTrue : - returnFalse; - - // Create target properties - // Support: Safari <=6 - 7 only - // Target should not be a text node (#504, #13143) - this.target = ( src.target && src.target.nodeType === 3 ) ? 
- src.target.parentNode : - src.target; - - this.currentTarget = src.currentTarget; - this.relatedTarget = src.relatedTarget; - - // Event type - } else { - this.type = src; - } - - // Put explicitly provided properties onto the event object - if ( props ) { - jQuery.extend( this, props ); - } - - // Create a timestamp if incoming event doesn't have one - this.timeStamp = src && src.timeStamp || Date.now(); - - // Mark it as fixed - this[ jQuery.expando ] = true; -}; - -// jQuery.Event is based on DOM3 Events as specified by the ECMAScript Language Binding -// https://www.w3.org/TR/2003/WD-DOM-Level-3-Events-20030331/ecma-script-binding.html -jQuery.Event.prototype = { - constructor: jQuery.Event, - isDefaultPrevented: returnFalse, - isPropagationStopped: returnFalse, - isImmediatePropagationStopped: returnFalse, - isSimulated: false, - - preventDefault: function() { - var e = this.originalEvent; - - this.isDefaultPrevented = returnTrue; - - if ( e && !this.isSimulated ) { - e.preventDefault(); - } - }, - stopPropagation: function() { - var e = this.originalEvent; - - this.isPropagationStopped = returnTrue; - - if ( e && !this.isSimulated ) { - e.stopPropagation(); - } - }, - stopImmediatePropagation: function() { - var e = this.originalEvent; - - this.isImmediatePropagationStopped = returnTrue; - - if ( e && !this.isSimulated ) { - e.stopImmediatePropagation(); - } - - this.stopPropagation(); - } -}; - -// Includes all common event props including KeyEvent and MouseEvent specific props -jQuery.each( { - altKey: true, - bubbles: true, - cancelable: true, - changedTouches: true, - ctrlKey: true, - detail: true, - eventPhase: true, - metaKey: true, - pageX: true, - pageY: true, - shiftKey: true, - view: true, - "char": true, - code: true, - charCode: true, - key: true, - keyCode: true, - button: true, - buttons: true, - clientX: true, - clientY: true, - offsetX: true, - offsetY: true, - pointerId: true, - pointerType: true, - screenX: true, - screenY: true, - targetTouches: true, - toElement: true, - touches: true, - - which: function( event ) { - var button = event.button; - - // Add which for key events - if ( event.which == null && rkeyEvent.test( event.type ) ) { - return event.charCode != null ? event.charCode : event.keyCode; - } - - // Add which for click: 1 === left; 2 === middle; 3 === right - if ( !event.which && button !== undefined && rmouseEvent.test( event.type ) ) { - if ( button & 1 ) { - return 1; - } - - if ( button & 2 ) { - return 3; - } - - if ( button & 4 ) { - return 2; - } - - return 0; - } - - return event.which; - } -}, jQuery.event.addProp ); - -jQuery.each( { focus: "focusin", blur: "focusout" }, function( type, delegateType ) { - jQuery.event.special[ type ] = { - - // Utilize native event if possible so blur/focus sequence is correct - setup: function() { - - // Claim the first handler - // dataPriv.set( this, "focus", ... ) - // dataPriv.set( this, "blur", ... ) - leverageNative( this, type, expectSync ); - - // Return false to allow normal processing in the caller - return false; - }, - trigger: function() { - - // Force setup before trigger - leverageNative( this, type ); - - // Return non-false to allow normal event-path propagation - return true; - }, - - delegateType: delegateType - }; -} ); - -// Create mouseenter/leave events using mouseover/out and event-time checks -// so that event delegation works in jQuery. 
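The focus/blur specials defined just above delegate through the bubbling focusin/focusout types, which is what makes delegated focus handlers work at all. A short sketch, with an illustrative form selector:

// focus does not bubble, but the special handler maps delegated focus onto
// focusin, so this fires for any input inside #signup, including ones added later.
jQuery( "#signup" ).on( "focus", "input", function() {
	jQuery( this ).addClass( "active" );
} );

jQuery( "#signup" ).on( "blur", "input", function() {
	jQuery( this ).removeClass( "active" );
} );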
-// Do the same for pointerenter/pointerleave and pointerover/pointerout -// -// Support: Safari 7 only -// Safari sends mouseenter too often; see: -// https://bugs.chromium.org/p/chromium/issues/detail?id=470258 -// for the description of the bug (it existed in older Chrome versions as well). -jQuery.each( { - mouseenter: "mouseover", - mouseleave: "mouseout", - pointerenter: "pointerover", - pointerleave: "pointerout" -}, function( orig, fix ) { - jQuery.event.special[ orig ] = { - delegateType: fix, - bindType: fix, - - handle: function( event ) { - var ret, - target = this, - related = event.relatedTarget, - handleObj = event.handleObj; - - // For mouseenter/leave call the handler if related is outside the target. - // NB: No relatedTarget if the mouse left/entered the browser window - if ( !related || ( related !== target && !jQuery.contains( target, related ) ) ) { - event.type = handleObj.origType; - ret = handleObj.handler.apply( this, arguments ); - event.type = fix; - } - return ret; - } - }; -} ); - -jQuery.fn.extend( { - - on: function( types, selector, data, fn ) { - return on( this, types, selector, data, fn ); - }, - one: function( types, selector, data, fn ) { - return on( this, types, selector, data, fn, 1 ); - }, - off: function( types, selector, fn ) { - var handleObj, type; - if ( types && types.preventDefault && types.handleObj ) { - - // ( event ) dispatched jQuery.Event - handleObj = types.handleObj; - jQuery( types.delegateTarget ).off( - handleObj.namespace ? - handleObj.origType + "." + handleObj.namespace : - handleObj.origType, - handleObj.selector, - handleObj.handler - ); - return this; - } - if ( typeof types === "object" ) { - - // ( types-object [, selector] ) - for ( type in types ) { - this.off( type, selector, types[ type ] ); - } - return this; - } - if ( selector === false || typeof selector === "function" ) { - - // ( types [, fn] ) - fn = selector; - selector = undefined; - } - if ( fn === false ) { - fn = returnFalse; - } - return this.each( function() { - jQuery.event.remove( this, types, fn, selector ); - } ); - } -} ); - - -var - - // Support: IE <=10 - 11, Edge 12 - 13 only - // In IE/Edge using regex groups here causes severe slowdowns. - // See https://connect.microsoft.com/IE/feedback/details/1736512/ - rnoInnerhtml = /\s*$/g; - -// Prefer a tbody over its parent table for containing new rows -function manipulationTarget( elem, content ) { - if ( nodeName( elem, "table" ) && - nodeName( content.nodeType !== 11 ? content : content.firstChild, "tr" ) ) { - - return jQuery( elem ).children( "tbody" )[ 0 ] || elem; - } - - return elem; -} - -// Replace/restore the type attribute of script elements for safe DOM manipulation -function disableScript( elem ) { - elem.type = ( elem.getAttribute( "type" ) !== null ) + "/" + elem.type; - return elem; -} -function restoreScript( elem ) { - if ( ( elem.type || "" ).slice( 0, 5 ) === "true/" ) { - elem.type = elem.type.slice( 5 ); - } else { - elem.removeAttribute( "type" ); - } - - return elem; -} - -function cloneCopyEvent( src, dest ) { - var i, l, type, pdataOld, udataOld, udataCur, events; - - if ( dest.nodeType !== 1 ) { - return; - } - - // 1. Copy private data: events, handlers, etc. 
- if ( dataPriv.hasData( src ) ) { - pdataOld = dataPriv.get( src ); - events = pdataOld.events; - - if ( events ) { - dataPriv.remove( dest, "handle events" ); - - for ( type in events ) { - for ( i = 0, l = events[ type ].length; i < l; i++ ) { - jQuery.event.add( dest, type, events[ type ][ i ] ); - } - } - } - } - - // 2. Copy user data - if ( dataUser.hasData( src ) ) { - udataOld = dataUser.access( src ); - udataCur = jQuery.extend( {}, udataOld ); - - dataUser.set( dest, udataCur ); - } -} - -// Fix IE bugs, see support tests -function fixInput( src, dest ) { - var nodeName = dest.nodeName.toLowerCase(); - - // Fails to persist the checked state of a cloned checkbox or radio button. - if ( nodeName === "input" && rcheckableType.test( src.type ) ) { - dest.checked = src.checked; - - // Fails to return the selected option to the default selected state when cloning options - } else if ( nodeName === "input" || nodeName === "textarea" ) { - dest.defaultValue = src.defaultValue; - } -} - -function domManip( collection, args, callback, ignored ) { - - // Flatten any nested arrays - args = flat( args ); - - var fragment, first, scripts, hasScripts, node, doc, - i = 0, - l = collection.length, - iNoClone = l - 1, - value = args[ 0 ], - valueIsFunction = isFunction( value ); - - // We can't cloneNode fragments that contain checked, in WebKit - if ( valueIsFunction || - ( l > 1 && typeof value === "string" && - !support.checkClone && rchecked.test( value ) ) ) { - return collection.each( function( index ) { - var self = collection.eq( index ); - if ( valueIsFunction ) { - args[ 0 ] = value.call( this, index, self.html() ); - } - domManip( self, args, callback, ignored ); - } ); - } - - if ( l ) { - fragment = buildFragment( args, collection[ 0 ].ownerDocument, false, collection, ignored ); - first = fragment.firstChild; - - if ( fragment.childNodes.length === 1 ) { - fragment = first; - } - - // Require either new content or an interest in ignored elements to invoke the callback - if ( first || ignored ) { - scripts = jQuery.map( getAll( fragment, "script" ), disableScript ); - hasScripts = scripts.length; - - // Use the original fragment for the last item - // instead of the first because it can end up - // being emptied incorrectly in certain situations (#8070). 
- for ( ; i < l; i++ ) { - node = fragment; - - if ( i !== iNoClone ) { - node = jQuery.clone( node, true, true ); - - // Keep references to cloned scripts for later restoration - if ( hasScripts ) { - - // Support: Android <=4.0 only, PhantomJS 1 only - // push.apply(_, arraylike) throws on ancient WebKit - jQuery.merge( scripts, getAll( node, "script" ) ); - } - } - - callback.call( collection[ i ], node, i ); - } - - if ( hasScripts ) { - doc = scripts[ scripts.length - 1 ].ownerDocument; - - // Reenable scripts - jQuery.map( scripts, restoreScript ); - - // Evaluate executable scripts on first document insertion - for ( i = 0; i < hasScripts; i++ ) { - node = scripts[ i ]; - if ( rscriptType.test( node.type || "" ) && - !dataPriv.access( node, "globalEval" ) && - jQuery.contains( doc, node ) ) { - - if ( node.src && ( node.type || "" ).toLowerCase() !== "module" ) { - - // Optional AJAX dependency, but won't run scripts if not present - if ( jQuery._evalUrl && !node.noModule ) { - jQuery._evalUrl( node.src, { - nonce: node.nonce || node.getAttribute( "nonce" ) - }, doc ); - } - } else { - DOMEval( node.textContent.replace( rcleanScript, "" ), node, doc ); - } - } - } - } - } - } - - return collection; -} - -function remove( elem, selector, keepData ) { - var node, - nodes = selector ? jQuery.filter( selector, elem ) : elem, - i = 0; - - for ( ; ( node = nodes[ i ] ) != null; i++ ) { - if ( !keepData && node.nodeType === 1 ) { - jQuery.cleanData( getAll( node ) ); - } - - if ( node.parentNode ) { - if ( keepData && isAttached( node ) ) { - setGlobalEval( getAll( node, "script" ) ); - } - node.parentNode.removeChild( node ); - } - } - - return elem; -} - -jQuery.extend( { - htmlPrefilter: function( html ) { - return html; - }, - - clone: function( elem, dataAndEvents, deepDataAndEvents ) { - var i, l, srcElements, destElements, - clone = elem.cloneNode( true ), - inPage = isAttached( elem ); - - // Fix IE cloning issues - if ( !support.noCloneChecked && ( elem.nodeType === 1 || elem.nodeType === 11 ) && - !jQuery.isXMLDoc( elem ) ) { - - // We eschew Sizzle here for performance reasons: https://jsperf.com/getall-vs-sizzle/2 - destElements = getAll( clone ); - srcElements = getAll( elem ); - - for ( i = 0, l = srcElements.length; i < l; i++ ) { - fixInput( srcElements[ i ], destElements[ i ] ); - } - } - - // Copy the events from the original to the clone - if ( dataAndEvents ) { - if ( deepDataAndEvents ) { - srcElements = srcElements || getAll( elem ); - destElements = destElements || getAll( clone ); - - for ( i = 0, l = srcElements.length; i < l; i++ ) { - cloneCopyEvent( srcElements[ i ], destElements[ i ] ); - } - } else { - cloneCopyEvent( elem, clone ); - } - } - - // Preserve script evaluation history - destElements = getAll( clone, "script" ); - if ( destElements.length > 0 ) { - setGlobalEval( destElements, !inPage && getAll( elem, "script" ) ); - } - - // Return the cloned set - return clone; - }, - - cleanData: function( elems ) { - var data, elem, type, - special = jQuery.event.special, - i = 0; - - for ( ; ( elem = elems[ i ] ) !== undefined; i++ ) { - if ( acceptData( elem ) ) { - if ( ( data = elem[ dataPriv.expando ] ) ) { - if ( data.events ) { - for ( type in data.events ) { - if ( special[ type ] ) { - jQuery.event.remove( elem, type ); - - // This is a shortcut to avoid jQuery.event.remove's overhead - } else { - jQuery.removeEvent( elem, type, data.handle ); - } - } - } - - // Support: Chrome <=35 - 45+ - // Assign undefined instead of using delete, see Data#remove 
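jQuery.cleanData and the remove helper above are exactly what separates .remove() from .detach(). A brief sketch, assuming an existing #panel element (the id is illustrative):

var panel = jQuery( "#panel" ).on( "click", function() {
	window.console.log( "clicked" );
} );

// .detach() skips cleanData, so the dataPriv store and its handlers survive re-insertion.
panel.detach().appendTo( "body" ); // the click handler still fires afterwards

// .remove() runs jQuery.cleanData over the subtree, dropping events and data.
panel.remove();
jQuery( "body" ).append( panel );  // same nodes back in the page, but the handler is gone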
- elem[ dataPriv.expando ] = undefined; - } - if ( elem[ dataUser.expando ] ) { - - // Support: Chrome <=35 - 45+ - // Assign undefined instead of using delete, see Data#remove - elem[ dataUser.expando ] = undefined; - } - } - } - } -} ); - -jQuery.fn.extend( { - detach: function( selector ) { - return remove( this, selector, true ); - }, - - remove: function( selector ) { - return remove( this, selector ); - }, - - text: function( value ) { - return access( this, function( value ) { - return value === undefined ? - jQuery.text( this ) : - this.empty().each( function() { - if ( this.nodeType === 1 || this.nodeType === 11 || this.nodeType === 9 ) { - this.textContent = value; - } - } ); - }, null, value, arguments.length ); - }, - - append: function() { - return domManip( this, arguments, function( elem ) { - if ( this.nodeType === 1 || this.nodeType === 11 || this.nodeType === 9 ) { - var target = manipulationTarget( this, elem ); - target.appendChild( elem ); - } - } ); - }, - - prepend: function() { - return domManip( this, arguments, function( elem ) { - if ( this.nodeType === 1 || this.nodeType === 11 || this.nodeType === 9 ) { - var target = manipulationTarget( this, elem ); - target.insertBefore( elem, target.firstChild ); - } - } ); - }, - - before: function() { - return domManip( this, arguments, function( elem ) { - if ( this.parentNode ) { - this.parentNode.insertBefore( elem, this ); - } - } ); - }, - - after: function() { - return domManip( this, arguments, function( elem ) { - if ( this.parentNode ) { - this.parentNode.insertBefore( elem, this.nextSibling ); - } - } ); - }, - - empty: function() { - var elem, - i = 0; - - for ( ; ( elem = this[ i ] ) != null; i++ ) { - if ( elem.nodeType === 1 ) { - - // Prevent memory leaks - jQuery.cleanData( getAll( elem, false ) ); - - // Remove any remaining nodes - elem.textContent = ""; - } - } - - return this; - }, - - clone: function( dataAndEvents, deepDataAndEvents ) { - dataAndEvents = dataAndEvents == null ? false : dataAndEvents; - deepDataAndEvents = deepDataAndEvents == null ? 
dataAndEvents : deepDataAndEvents; - - return this.map( function() { - return jQuery.clone( this, dataAndEvents, deepDataAndEvents ); - } ); - }, - - html: function( value ) { - return access( this, function( value ) { - var elem = this[ 0 ] || {}, - i = 0, - l = this.length; - - if ( value === undefined && elem.nodeType === 1 ) { - return elem.innerHTML; - } - - // See if we can take a shortcut and just use innerHTML - if ( typeof value === "string" && !rnoInnerhtml.test( value ) && - !wrapMap[ ( rtagName.exec( value ) || [ "", "" ] )[ 1 ].toLowerCase() ] ) { - - value = jQuery.htmlPrefilter( value ); - - try { - for ( ; i < l; i++ ) { - elem = this[ i ] || {}; - - // Remove element nodes and prevent memory leaks - if ( elem.nodeType === 1 ) { - jQuery.cleanData( getAll( elem, false ) ); - elem.innerHTML = value; - } - } - - elem = 0; - - // If using innerHTML throws an exception, use the fallback method - } catch ( e ) {} - } - - if ( elem ) { - this.empty().append( value ); - } - }, null, value, arguments.length ); - }, - - replaceWith: function() { - var ignored = []; - - // Make the changes, replacing each non-ignored context element with the new content - return domManip( this, arguments, function( elem ) { - var parent = this.parentNode; - - if ( jQuery.inArray( this, ignored ) < 0 ) { - jQuery.cleanData( getAll( this ) ); - if ( parent ) { - parent.replaceChild( elem, this ); - } - } - - // Force callback invocation - }, ignored ); - } -} ); - -jQuery.each( { - appendTo: "append", - prependTo: "prepend", - insertBefore: "before", - insertAfter: "after", - replaceAll: "replaceWith" -}, function( name, original ) { - jQuery.fn[ name ] = function( selector ) { - var elems, - ret = [], - insert = jQuery( selector ), - last = insert.length - 1, - i = 0; - - for ( ; i <= last; i++ ) { - elems = i === last ? this : this.clone( true ); - jQuery( insert[ i ] )[ original ]( elems ); - - // Support: Android <=4.0 only, PhantomJS 1 only - // .get() because push.apply(_, arraylike) throws on ancient WebKit - push.apply( ret, elems.get() ); - } - - return this.pushStack( ret ); - }; -} ); -var rnumnonpx = new RegExp( "^(" + pnum + ")(?!px)[a-z%]+$", "i" ); - -var getStyles = function( elem ) { - - // Support: IE <=11 only, Firefox <=30 (#15098, #14150) - // IE throws on elements created in popups - // FF meanwhile throws on frame elements through "defaultView.getComputedStyle" - var view = elem.ownerDocument.defaultView; - - if ( !view || !view.opener ) { - view = window; - } - - return view.getComputedStyle( elem ); - }; - -var swap = function( elem, options, callback ) { - var ret, name, - old = {}; - - // Remember the old values, and insert the new ones - for ( name in options ) { - old[ name ] = elem.style[ name ]; - elem.style[ name ] = options[ name ]; - } - - ret = callback.call( elem ); - - // Revert the old values - for ( name in options ) { - elem.style[ name ] = old[ name ]; - } - - return ret; -}; - - -var rboxStyle = new RegExp( cssExpand.join( "|" ), "i" ); - - - -( function() { - - // Executing both pixelPosition & boxSizingReliable tests require only one layout - // so they're executed at the same time to save the second computation. 
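computeStyleTests below is a lazy, run-once measurement: the probe elements sit unused until the first accessor call, the layout numbers are read in a single pass, and the div is nulled so later calls return cached booleans. A standalone sketch of the same pattern (the names here are illustrative, not jQuery API):

// Lazy feature test: build the probe up front, measure on first use,
// then discard it so subsequent calls are plain cached reads.
var probe = document.createElement( "div" );
var cachedResult;

function measureOnce() {
	if ( !probe ) {
		return cachedResult;
	}
	document.documentElement.appendChild( probe );
	cachedResult = window.getComputedStyle( probe ).display === "block";
	document.documentElement.removeChild( probe );
	probe = null; // marks the test as already performed
	return cachedResult;
}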
- function computeStyleTests() { - - // This is a singleton, we need to execute it only once - if ( !div ) { - return; - } - - container.style.cssText = "position:absolute;left:-11111px;width:60px;" + - "margin-top:1px;padding:0;border:0"; - div.style.cssText = - "position:relative;display:block;box-sizing:border-box;overflow:scroll;" + - "margin:auto;border:1px;padding:1px;" + - "width:60%;top:1%"; - documentElement.appendChild( container ).appendChild( div ); - - var divStyle = window.getComputedStyle( div ); - pixelPositionVal = divStyle.top !== "1%"; - - // Support: Android 4.0 - 4.3 only, Firefox <=3 - 44 - reliableMarginLeftVal = roundPixelMeasures( divStyle.marginLeft ) === 12; - - // Support: Android 4.0 - 4.3 only, Safari <=9.1 - 10.1, iOS <=7.0 - 9.3 - // Some styles come back with percentage values, even though they shouldn't - div.style.right = "60%"; - pixelBoxStylesVal = roundPixelMeasures( divStyle.right ) === 36; - - // Support: IE 9 - 11 only - // Detect misreporting of content dimensions for box-sizing:border-box elements - boxSizingReliableVal = roundPixelMeasures( divStyle.width ) === 36; - - // Support: IE 9 only - // Detect overflow:scroll screwiness (gh-3699) - // Support: Chrome <=64 - // Don't get tricked when zoom affects offsetWidth (gh-4029) - div.style.position = "absolute"; - scrollboxSizeVal = roundPixelMeasures( div.offsetWidth / 3 ) === 12; - - documentElement.removeChild( container ); - - // Nullify the div so it wouldn't be stored in the memory and - // it will also be a sign that checks already performed - div = null; - } - - function roundPixelMeasures( measure ) { - return Math.round( parseFloat( measure ) ); - } - - var pixelPositionVal, boxSizingReliableVal, scrollboxSizeVal, pixelBoxStylesVal, - reliableTrDimensionsVal, reliableMarginLeftVal, - container = document.createElement( "div" ), - div = document.createElement( "div" ); - - // Finish early in limited (non-browser) environments - if ( !div.style ) { - return; - } - - // Support: IE <=9 - 11 only - // Style of cloned element affects source element cloned (#8908) - div.style.backgroundClip = "content-box"; - div.cloneNode( true ).style.backgroundClip = ""; - support.clearCloneStyle = div.style.backgroundClip === "content-box"; - - jQuery.extend( support, { - boxSizingReliable: function() { - computeStyleTests(); - return boxSizingReliableVal; - }, - pixelBoxStyles: function() { - computeStyleTests(); - return pixelBoxStylesVal; - }, - pixelPosition: function() { - computeStyleTests(); - return pixelPositionVal; - }, - reliableMarginLeft: function() { - computeStyleTests(); - return reliableMarginLeftVal; - }, - scrollboxSize: function() { - computeStyleTests(); - return scrollboxSizeVal; - }, - - // Support: IE 9 - 11+, Edge 15 - 18+ - // IE/Edge misreport `getComputedStyle` of table rows with width/height - // set in CSS while `offset*` properties report correct values. - // Behavior in IE 9 is more subtle than in newer versions & it passes - // some versions of this test; make sure not to make it pass there! 
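	// A small sketch of how the flag below is consumed in getWidthOrHeight()
	// later in this file: when it reports false and the element is a <tr>,
	// the computed dimension is distrusted in favor of the offset* value, e.g.
	//
	//     if ( !support.reliableTrDimensions() && nodeName( elem, "tr" ) ) {
	//         val = elem.offsetHeight; // rather than getComputedStyle( elem ).height
	//     }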
- reliableTrDimensions: function() { - var table, tr, trChild, trStyle; - if ( reliableTrDimensionsVal == null ) { - table = document.createElement( "table" ); - tr = document.createElement( "tr" ); - trChild = document.createElement( "div" ); - - table.style.cssText = "position:absolute;left:-11111px"; - tr.style.height = "1px"; - trChild.style.height = "9px"; - - documentElement - .appendChild( table ) - .appendChild( tr ) - .appendChild( trChild ); - - trStyle = window.getComputedStyle( tr ); - reliableTrDimensionsVal = parseInt( trStyle.height ) > 3; - - documentElement.removeChild( table ); - } - return reliableTrDimensionsVal; - } - } ); -} )(); - - -function curCSS( elem, name, computed ) { - var width, minWidth, maxWidth, ret, - - // Support: Firefox 51+ - // Retrieving style before computed somehow - // fixes an issue with getting wrong values - // on detached elements - style = elem.style; - - computed = computed || getStyles( elem ); - - // getPropertyValue is needed for: - // .css('filter') (IE 9 only, #12537) - // .css('--customProperty) (#3144) - if ( computed ) { - ret = computed.getPropertyValue( name ) || computed[ name ]; - - if ( ret === "" && !isAttached( elem ) ) { - ret = jQuery.style( elem, name ); - } - - // A tribute to the "awesome hack by Dean Edwards" - // Android Browser returns percentage for some values, - // but width seems to be reliably pixels. - // This is against the CSSOM draft spec: - // https://drafts.csswg.org/cssom/#resolved-values - if ( !support.pixelBoxStyles() && rnumnonpx.test( ret ) && rboxStyle.test( name ) ) { - - // Remember the original values - width = style.width; - minWidth = style.minWidth; - maxWidth = style.maxWidth; - - // Put in the new values to get a computed value out - style.minWidth = style.maxWidth = style.width = ret; - ret = computed.width; - - // Revert the changed values - style.width = width; - style.minWidth = minWidth; - style.maxWidth = maxWidth; - } - } - - return ret !== undefined ? - - // Support: IE <=9 - 11 only - // IE returns zIndex value as an integer. - ret + "" : - ret; -} - - -function addGetHookIf( conditionFn, hookFn ) { - - // Define the hook, we'll check on the first run if it's really needed. - return { - get: function() { - if ( conditionFn() ) { - - // Hook not needed (or it's not possible to use it due - // to missing dependency), remove it. - delete this.get; - return; - } - - // Hook needed; redefine it so that the support test is not executed again. 
- return ( this.get = hookFn ).apply( this, arguments ); - } - }; -} - - -var cssPrefixes = [ "Webkit", "Moz", "ms" ], - emptyStyle = document.createElement( "div" ).style, - vendorProps = {}; - -// Return a vendor-prefixed property or undefined -function vendorPropName( name ) { - - // Check for vendor prefixed names - var capName = name[ 0 ].toUpperCase() + name.slice( 1 ), - i = cssPrefixes.length; - - while ( i-- ) { - name = cssPrefixes[ i ] + capName; - if ( name in emptyStyle ) { - return name; - } - } -} - -// Return a potentially-mapped jQuery.cssProps or vendor prefixed property -function finalPropName( name ) { - var final = jQuery.cssProps[ name ] || vendorProps[ name ]; - - if ( final ) { - return final; - } - if ( name in emptyStyle ) { - return name; - } - return vendorProps[ name ] = vendorPropName( name ) || name; -} - - -var - - // Swappable if display is none or starts with table - // except "table", "table-cell", or "table-caption" - // See here for display values: https://developer.mozilla.org/en-US/docs/CSS/display - rdisplayswap = /^(none|table(?!-c[ea]).+)/, - rcustomProp = /^--/, - cssShow = { position: "absolute", visibility: "hidden", display: "block" }, - cssNormalTransform = { - letterSpacing: "0", - fontWeight: "400" - }; - -function setPositiveNumber( _elem, value, subtract ) { - - // Any relative (+/-) values have already been - // normalized at this point - var matches = rcssNum.exec( value ); - return matches ? - - // Guard against undefined "subtract", e.g., when used as in cssHooks - Math.max( 0, matches[ 2 ] - ( subtract || 0 ) ) + ( matches[ 3 ] || "px" ) : - value; -} - -function boxModelAdjustment( elem, dimension, box, isBorderBox, styles, computedVal ) { - var i = dimension === "width" ? 1 : 0, - extra = 0, - delta = 0; - - // Adjustment may not be necessary - if ( box === ( isBorderBox ? 
"border" : "content" ) ) { - return 0; - } - - for ( ; i < 4; i += 2 ) { - - // Both box models exclude margin - if ( box === "margin" ) { - delta += jQuery.css( elem, box + cssExpand[ i ], true, styles ); - } - - // If we get here with a content-box, we're seeking "padding" or "border" or "margin" - if ( !isBorderBox ) { - - // Add padding - delta += jQuery.css( elem, "padding" + cssExpand[ i ], true, styles ); - - // For "border" or "margin", add border - if ( box !== "padding" ) { - delta += jQuery.css( elem, "border" + cssExpand[ i ] + "Width", true, styles ); - - // But still keep track of it otherwise - } else { - extra += jQuery.css( elem, "border" + cssExpand[ i ] + "Width", true, styles ); - } - - // If we get here with a border-box (content + padding + border), we're seeking "content" or - // "padding" or "margin" - } else { - - // For "content", subtract padding - if ( box === "content" ) { - delta -= jQuery.css( elem, "padding" + cssExpand[ i ], true, styles ); - } - - // For "content" or "padding", subtract border - if ( box !== "margin" ) { - delta -= jQuery.css( elem, "border" + cssExpand[ i ] + "Width", true, styles ); - } - } - } - - // Account for positive content-box scroll gutter when requested by providing computedVal - if ( !isBorderBox && computedVal >= 0 ) { - - // offsetWidth/offsetHeight is a rounded sum of content, padding, scroll gutter, and border - // Assuming integer scroll gutter, subtract the rest and round down - delta += Math.max( 0, Math.ceil( - elem[ "offset" + dimension[ 0 ].toUpperCase() + dimension.slice( 1 ) ] - - computedVal - - delta - - extra - - 0.5 - - // If offsetWidth/offsetHeight is unknown, then we can't determine content-box scroll gutter - // Use an explicit zero to avoid NaN (gh-3964) - ) ) || 0; - } - - return delta; -} - -function getWidthOrHeight( elem, dimension, extra ) { - - // Start with computed style - var styles = getStyles( elem ), - - // To avoid forcing a reflow, only fetch boxSizing if we need it (gh-4322). - // Fake content-box until we know it's needed to know the true value. - boxSizingNeeded = !support.boxSizingReliable() || extra, - isBorderBox = boxSizingNeeded && - jQuery.css( elem, "boxSizing", false, styles ) === "border-box", - valueIsBorderBox = isBorderBox, - - val = curCSS( elem, dimension, styles ), - offsetProp = "offset" + dimension[ 0 ].toUpperCase() + dimension.slice( 1 ); - - // Support: Firefox <=54 - // Return a confounding non-pixel value or feign ignorance, as appropriate. - if ( rnumnonpx.test( val ) ) { - if ( !extra ) { - return val; - } - val = "auto"; - } - - - // Support: IE 9 - 11 only - // Use offsetWidth/offsetHeight for when box sizing is unreliable. - // In those cases, the computed value can be trusted to be border-box. - if ( ( !support.boxSizingReliable() && isBorderBox || - - // Support: IE 10 - 11+, Edge 15 - 18+ - // IE/Edge misreport `getComputedStyle` of table rows with width/height - // set in CSS while `offset*` properties report correct values. - // Interestingly, in some cases IE 9 doesn't suffer from this issue. 
- !support.reliableTrDimensions() && nodeName( elem, "tr" ) || - - // Fall back to offsetWidth/offsetHeight when value is "auto" - // This happens for inline elements with no explicit setting (gh-3571) - val === "auto" || - - // Support: Android <=4.1 - 4.3 only - // Also use offsetWidth/offsetHeight for misreported inline dimensions (gh-3602) - !parseFloat( val ) && jQuery.css( elem, "display", false, styles ) === "inline" ) && - - // Make sure the element is visible & connected - elem.getClientRects().length ) { - - isBorderBox = jQuery.css( elem, "boxSizing", false, styles ) === "border-box"; - - // Where available, offsetWidth/offsetHeight approximate border box dimensions. - // Where not available (e.g., SVG), assume unreliable box-sizing and interpret the - // retrieved value as a content box dimension. - valueIsBorderBox = offsetProp in elem; - if ( valueIsBorderBox ) { - val = elem[ offsetProp ]; - } - } - - // Normalize "" and auto - val = parseFloat( val ) || 0; - - // Adjust for the element's box model - return ( val + - boxModelAdjustment( - elem, - dimension, - extra || ( isBorderBox ? "border" : "content" ), - valueIsBorderBox, - styles, - - // Provide the current computed size to request scroll gutter calculation (gh-3589) - val - ) - ) + "px"; -} - -jQuery.extend( { - - // Add in style property hooks for overriding the default - // behavior of getting and setting a style property - cssHooks: { - opacity: { - get: function( elem, computed ) { - if ( computed ) { - - // We should always get a number back from opacity - var ret = curCSS( elem, "opacity" ); - return ret === "" ? "1" : ret; - } - } - } - }, - - // Don't automatically add "px" to these possibly-unitless properties - cssNumber: { - "animationIterationCount": true, - "columnCount": true, - "fillOpacity": true, - "flexGrow": true, - "flexShrink": true, - "fontWeight": true, - "gridArea": true, - "gridColumn": true, - "gridColumnEnd": true, - "gridColumnStart": true, - "gridRow": true, - "gridRowEnd": true, - "gridRowStart": true, - "lineHeight": true, - "opacity": true, - "order": true, - "orphans": true, - "widows": true, - "zIndex": true, - "zoom": true - }, - - // Add in properties whose names you wish to fix before - // setting or getting the value - cssProps: {}, - - // Get and set the style property on a DOM Node - style: function( elem, name, value, extra ) { - - // Don't set styles on text and comment nodes - if ( !elem || elem.nodeType === 3 || elem.nodeType === 8 || !elem.style ) { - return; - } - - // Make sure that we're working with the right name - var ret, type, hooks, - origName = camelCase( name ), - isCustomProp = rcustomProp.test( name ), - style = elem.style; - - // Make sure that we're working with the right name. We don't - // want to query the value if it is a CSS custom property - // since they are user-defined. 
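		// For illustration, a custom property therefore bypasses finalPropName()
		// and is routed to style.setProperty() / getPropertyValue() unchanged,
		// e.g. (sketch only):
		//
		//     jQuery( elem ).css( "--main-color", "rebeccapurple" ); // set
		//     jQuery( elem ).css( "--main-color" );                  // get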
- if ( !isCustomProp ) { - name = finalPropName( origName ); - } - - // Gets hook for the prefixed version, then unprefixed version - hooks = jQuery.cssHooks[ name ] || jQuery.cssHooks[ origName ]; - - // Check if we're setting a value - if ( value !== undefined ) { - type = typeof value; - - // Convert "+=" or "-=" to relative numbers (#7345) - if ( type === "string" && ( ret = rcssNum.exec( value ) ) && ret[ 1 ] ) { - value = adjustCSS( elem, name, ret ); - - // Fixes bug #9237 - type = "number"; - } - - // Make sure that null and NaN values aren't set (#7116) - if ( value == null || value !== value ) { - return; - } - - // If a number was passed in, add the unit (except for certain CSS properties) - // The isCustomProp check can be removed in jQuery 4.0 when we only auto-append - // "px" to a few hardcoded values. - if ( type === "number" && !isCustomProp ) { - value += ret && ret[ 3 ] || ( jQuery.cssNumber[ origName ] ? "" : "px" ); - } - - // background-* props affect original clone's values - if ( !support.clearCloneStyle && value === "" && name.indexOf( "background" ) === 0 ) { - style[ name ] = "inherit"; - } - - // If a hook was provided, use that value, otherwise just set the specified value - if ( !hooks || !( "set" in hooks ) || - ( value = hooks.set( elem, value, extra ) ) !== undefined ) { - - if ( isCustomProp ) { - style.setProperty( name, value ); - } else { - style[ name ] = value; - } - } - - } else { - - // If a hook was provided get the non-computed value from there - if ( hooks && "get" in hooks && - ( ret = hooks.get( elem, false, extra ) ) !== undefined ) { - - return ret; - } - - // Otherwise just get the value from the style object - return style[ name ]; - } - }, - - css: function( elem, name, extra, styles ) { - var val, num, hooks, - origName = camelCase( name ), - isCustomProp = rcustomProp.test( name ); - - // Make sure that we're working with the right name. We don't - // want to modify the value if it is a CSS custom property - // since they are user-defined. - if ( !isCustomProp ) { - name = finalPropName( origName ); - } - - // Try prefixed name followed by the unprefixed name - hooks = jQuery.cssHooks[ name ] || jQuery.cssHooks[ origName ]; - - // If a hook was provided get the computed value from there - if ( hooks && "get" in hooks ) { - val = hooks.get( elem, true, extra ); - } - - // Otherwise, if a way to get the computed value exists, use that - if ( val === undefined ) { - val = curCSS( elem, name, styles ); - } - - // Convert "normal" to computed value - if ( val === "normal" && name in cssNormalTransform ) { - val = cssNormalTransform[ name ]; - } - - // Make numeric if forced or a qualifier was provided and val looks numeric - if ( extra === "" || extra ) { - num = parseFloat( val ); - return extra === true || isFinite( num ) ? num || 0 : val; - } - - return val; - } -} ); - -jQuery.each( [ "height", "width" ], function( _i, dimension ) { - jQuery.cssHooks[ dimension ] = { - get: function( elem, computed, extra ) { - if ( computed ) { - - // Certain elements can have dimension info if we invisibly show them - // but it must have a current display style that would benefit - return rdisplayswap.test( jQuery.css( elem, "display" ) ) && - - // Support: Safari 8+ - // Table columns in Safari have non-zero offsetWidth & zero - // getBoundingClientRect().width unless display is changed. - // Support: IE <=11 only - // Running getBoundingClientRect on a disconnected node - // in IE throws an error. 
- ( !elem.getClientRects().length || !elem.getBoundingClientRect().width ) ? - swap( elem, cssShow, function() { - return getWidthOrHeight( elem, dimension, extra ); - } ) : - getWidthOrHeight( elem, dimension, extra ); - } - }, - - set: function( elem, value, extra ) { - var matches, - styles = getStyles( elem ), - - // Only read styles.position if the test has a chance to fail - // to avoid forcing a reflow. - scrollboxSizeBuggy = !support.scrollboxSize() && - styles.position === "absolute", - - // To avoid forcing a reflow, only fetch boxSizing if we need it (gh-3991) - boxSizingNeeded = scrollboxSizeBuggy || extra, - isBorderBox = boxSizingNeeded && - jQuery.css( elem, "boxSizing", false, styles ) === "border-box", - subtract = extra ? - boxModelAdjustment( - elem, - dimension, - extra, - isBorderBox, - styles - ) : - 0; - - // Account for unreliable border-box dimensions by comparing offset* to computed and - // faking a content-box to get border and padding (gh-3699) - if ( isBorderBox && scrollboxSizeBuggy ) { - subtract -= Math.ceil( - elem[ "offset" + dimension[ 0 ].toUpperCase() + dimension.slice( 1 ) ] - - parseFloat( styles[ dimension ] ) - - boxModelAdjustment( elem, dimension, "border", false, styles ) - - 0.5 - ); - } - - // Convert to pixels if value adjustment is needed - if ( subtract && ( matches = rcssNum.exec( value ) ) && - ( matches[ 3 ] || "px" ) !== "px" ) { - - elem.style[ dimension ] = value; - value = jQuery.css( elem, dimension ); - } - - return setPositiveNumber( elem, value, subtract ); - } - }; -} ); - -jQuery.cssHooks.marginLeft = addGetHookIf( support.reliableMarginLeft, - function( elem, computed ) { - if ( computed ) { - return ( parseFloat( curCSS( elem, "marginLeft" ) ) || - elem.getBoundingClientRect().left - - swap( elem, { marginLeft: 0 }, function() { - return elem.getBoundingClientRect().left; - } ) - ) + "px"; - } - } -); - -// These hooks are used by animate to expand properties -jQuery.each( { - margin: "", - padding: "", - border: "Width" -}, function( prefix, suffix ) { - jQuery.cssHooks[ prefix + suffix ] = { - expand: function( value ) { - var i = 0, - expanded = {}, - - // Assumes a single number if not a string - parts = typeof value === "string" ? value.split( " " ) : [ value ]; - - for ( ; i < 4; i++ ) { - expanded[ prefix + cssExpand[ i ] + suffix ] = - parts[ i ] || parts[ i - 2 ] || parts[ 0 ]; - } - - return expanded; - } - }; - - if ( prefix !== "margin" ) { - jQuery.cssHooks[ prefix + suffix ].set = setPositiveNumber; - } -} ); - -jQuery.fn.extend( { - css: function( name, value ) { - return access( this, function( elem, name, value ) { - var styles, len, - map = {}, - i = 0; - - if ( Array.isArray( name ) ) { - styles = getStyles( elem ); - len = name.length; - - for ( ; i < len; i++ ) { - map[ name[ i ] ] = jQuery.css( elem, name[ i ], false, styles ); - } - - return map; - } - - return value !== undefined ? - jQuery.style( elem, name, value ) : - jQuery.css( elem, name ); - }, name, value, arguments.length > 1 ); - } -} ); - - -function Tween( elem, options, prop, end, easing ) { - return new Tween.prototype.init( elem, options, prop, end, easing ); -} -jQuery.Tween = Tween; - -Tween.prototype = { - constructor: Tween, - init: function( elem, options, prop, end, easing, unit ) { - this.elem = elem; - this.prop = prop; - this.easing = easing || jQuery.easing._default; - this.options = options; - this.start = this.now = this.cur(); - this.end = end; - this.unit = unit || ( jQuery.cssNumber[ prop ] ? 
"" : "px" ); - }, - cur: function() { - var hooks = Tween.propHooks[ this.prop ]; - - return hooks && hooks.get ? - hooks.get( this ) : - Tween.propHooks._default.get( this ); - }, - run: function( percent ) { - var eased, - hooks = Tween.propHooks[ this.prop ]; - - if ( this.options.duration ) { - this.pos = eased = jQuery.easing[ this.easing ]( - percent, this.options.duration * percent, 0, 1, this.options.duration - ); - } else { - this.pos = eased = percent; - } - this.now = ( this.end - this.start ) * eased + this.start; - - if ( this.options.step ) { - this.options.step.call( this.elem, this.now, this ); - } - - if ( hooks && hooks.set ) { - hooks.set( this ); - } else { - Tween.propHooks._default.set( this ); - } - return this; - } -}; - -Tween.prototype.init.prototype = Tween.prototype; - -Tween.propHooks = { - _default: { - get: function( tween ) { - var result; - - // Use a property on the element directly when it is not a DOM element, - // or when there is no matching style property that exists. - if ( tween.elem.nodeType !== 1 || - tween.elem[ tween.prop ] != null && tween.elem.style[ tween.prop ] == null ) { - return tween.elem[ tween.prop ]; - } - - // Passing an empty string as a 3rd parameter to .css will automatically - // attempt a parseFloat and fallback to a string if the parse fails. - // Simple values such as "10px" are parsed to Float; - // complex values such as "rotate(1rad)" are returned as-is. - result = jQuery.css( tween.elem, tween.prop, "" ); - - // Empty strings, null, undefined and "auto" are converted to 0. - return !result || result === "auto" ? 0 : result; - }, - set: function( tween ) { - - // Use step hook for back compat. - // Use cssHook if its there. - // Use .style if available and use plain properties where available. - if ( jQuery.fx.step[ tween.prop ] ) { - jQuery.fx.step[ tween.prop ]( tween ); - } else if ( tween.elem.nodeType === 1 && ( - jQuery.cssHooks[ tween.prop ] || - tween.elem.style[ finalPropName( tween.prop ) ] != null ) ) { - jQuery.style( tween.elem, tween.prop, tween.now + tween.unit ); - } else { - tween.elem[ tween.prop ] = tween.now; - } - } - } -}; - -// Support: IE <=9 only -// Panic based approach to setting things on disconnected nodes -Tween.propHooks.scrollTop = Tween.propHooks.scrollLeft = { - set: function( tween ) { - if ( tween.elem.nodeType && tween.elem.parentNode ) { - tween.elem[ tween.prop ] = tween.now; - } - } -}; - -jQuery.easing = { - linear: function( p ) { - return p; - }, - swing: function( p ) { - return 0.5 - Math.cos( p * Math.PI ) / 2; - }, - _default: "swing" -}; - -jQuery.fx = Tween.prototype.init; - -// Back compat <1.8 extension point -jQuery.fx.step = {}; - - - - -var - fxNow, inProgress, - rfxtypes = /^(?:toggle|show|hide)$/, - rrun = /queueHooks$/; - -function schedule() { - if ( inProgress ) { - if ( document.hidden === false && window.requestAnimationFrame ) { - window.requestAnimationFrame( schedule ); - } else { - window.setTimeout( schedule, jQuery.fx.interval ); - } - - jQuery.fx.tick(); - } -} - -// Animations created synchronously will run synchronously -function createFxNow() { - window.setTimeout( function() { - fxNow = undefined; - } ); - return ( fxNow = Date.now() ); -} - -// Generate parameters to create a standard animation -function genFx( type, includeWidth ) { - var which, - i = 0, - attrs = { height: type }; - - // If we include width, step value is 1 to do all cssExpand values, - // otherwise step value is 2 to skip over Left and Right - includeWidth = includeWidth ? 
1 : 0; - for ( ; i < 4; i += 2 - includeWidth ) { - which = cssExpand[ i ]; - attrs[ "margin" + which ] = attrs[ "padding" + which ] = type; - } - - if ( includeWidth ) { - attrs.opacity = attrs.width = type; - } - - return attrs; -} - -function createTween( value, prop, animation ) { - var tween, - collection = ( Animation.tweeners[ prop ] || [] ).concat( Animation.tweeners[ "*" ] ), - index = 0, - length = collection.length; - for ( ; index < length; index++ ) { - if ( ( tween = collection[ index ].call( animation, prop, value ) ) ) { - - // We're done with this property - return tween; - } - } -} - -function defaultPrefilter( elem, props, opts ) { - var prop, value, toggle, hooks, oldfire, propTween, restoreDisplay, display, - isBox = "width" in props || "height" in props, - anim = this, - orig = {}, - style = elem.style, - hidden = elem.nodeType && isHiddenWithinTree( elem ), - dataShow = dataPriv.get( elem, "fxshow" ); - - // Queue-skipping animations hijack the fx hooks - if ( !opts.queue ) { - hooks = jQuery._queueHooks( elem, "fx" ); - if ( hooks.unqueued == null ) { - hooks.unqueued = 0; - oldfire = hooks.empty.fire; - hooks.empty.fire = function() { - if ( !hooks.unqueued ) { - oldfire(); - } - }; - } - hooks.unqueued++; - - anim.always( function() { - - // Ensure the complete handler is called before this completes - anim.always( function() { - hooks.unqueued--; - if ( !jQuery.queue( elem, "fx" ).length ) { - hooks.empty.fire(); - } - } ); - } ); - } - - // Detect show/hide animations - for ( prop in props ) { - value = props[ prop ]; - if ( rfxtypes.test( value ) ) { - delete props[ prop ]; - toggle = toggle || value === "toggle"; - if ( value === ( hidden ? "hide" : "show" ) ) { - - // Pretend to be hidden if this is a "show" and - // there is still data from a stopped show/hide - if ( value === "show" && dataShow && dataShow[ prop ] !== undefined ) { - hidden = true; - - // Ignore all other no-op show/hide data - } else { - continue; - } - } - orig[ prop ] = dataShow && dataShow[ prop ] || jQuery.style( elem, prop ); - } - } - - // Bail out if this is a no-op like .hide().hide() - propTween = !jQuery.isEmptyObject( props ); - if ( !propTween && jQuery.isEmptyObject( orig ) ) { - return; - } - - // Restrict "overflow" and "display" styles during box animations - if ( isBox && elem.nodeType === 1 ) { - - // Support: IE <=9 - 11, Edge 12 - 15 - // Record all 3 overflow attributes because IE does not infer the shorthand - // from identically-valued overflowX and overflowY and Edge just mirrors - // the overflowX value there. 
- opts.overflow = [ style.overflow, style.overflowX, style.overflowY ]; - - // Identify a display type, preferring old show/hide data over the CSS cascade - restoreDisplay = dataShow && dataShow.display; - if ( restoreDisplay == null ) { - restoreDisplay = dataPriv.get( elem, "display" ); - } - display = jQuery.css( elem, "display" ); - if ( display === "none" ) { - if ( restoreDisplay ) { - display = restoreDisplay; - } else { - - // Get nonempty value(s) by temporarily forcing visibility - showHide( [ elem ], true ); - restoreDisplay = elem.style.display || restoreDisplay; - display = jQuery.css( elem, "display" ); - showHide( [ elem ] ); - } - } - - // Animate inline elements as inline-block - if ( display === "inline" || display === "inline-block" && restoreDisplay != null ) { - if ( jQuery.css( elem, "float" ) === "none" ) { - - // Restore the original display value at the end of pure show/hide animations - if ( !propTween ) { - anim.done( function() { - style.display = restoreDisplay; - } ); - if ( restoreDisplay == null ) { - display = style.display; - restoreDisplay = display === "none" ? "" : display; - } - } - style.display = "inline-block"; - } - } - } - - if ( opts.overflow ) { - style.overflow = "hidden"; - anim.always( function() { - style.overflow = opts.overflow[ 0 ]; - style.overflowX = opts.overflow[ 1 ]; - style.overflowY = opts.overflow[ 2 ]; - } ); - } - - // Implement show/hide animations - propTween = false; - for ( prop in orig ) { - - // General show/hide setup for this element animation - if ( !propTween ) { - if ( dataShow ) { - if ( "hidden" in dataShow ) { - hidden = dataShow.hidden; - } - } else { - dataShow = dataPriv.access( elem, "fxshow", { display: restoreDisplay } ); - } - - // Store hidden/visible for toggle so `.stop().toggle()` "reverses" - if ( toggle ) { - dataShow.hidden = !hidden; - } - - // Show elements before animating them - if ( hidden ) { - showHide( [ elem ], true ); - } - - /* eslint-disable no-loop-func */ - - anim.done( function() { - - /* eslint-enable no-loop-func */ - - // The final step of a "hide" animation is actually hiding the element - if ( !hidden ) { - showHide( [ elem ] ); - } - dataPriv.remove( elem, "fxshow" ); - for ( prop in orig ) { - jQuery.style( elem, prop, orig[ prop ] ); - } - } ); - } - - // Per-property setup - propTween = createTween( hidden ? dataShow[ prop ] : 0, prop, anim ); - if ( !( prop in dataShow ) ) { - dataShow[ prop ] = propTween.start; - if ( hidden ) { - propTween.end = propTween.start; - propTween.start = 0; - } - } - } -} - -function propFilter( props, specialEasing ) { - var index, name, easing, value, hooks; - - // camelCase, specialEasing and expand cssHook pass - for ( index in props ) { - name = camelCase( index ); - easing = specialEasing[ name ]; - value = props[ index ]; - if ( Array.isArray( value ) ) { - easing = value[ 1 ]; - value = props[ index ] = value[ 0 ]; - } - - if ( index !== name ) { - props[ name ] = value; - delete props[ index ]; - } - - hooks = jQuery.cssHooks[ name ]; - if ( hooks && "expand" in hooks ) { - value = hooks.expand( value ); - delete props[ name ]; - - // Not quite $.extend, this won't overwrite existing keys. 
- // Reusing 'index' because we have the correct "name" - for ( index in value ) { - if ( !( index in props ) ) { - props[ index ] = value[ index ]; - specialEasing[ index ] = easing; - } - } - } else { - specialEasing[ name ] = easing; - } - } -} - -function Animation( elem, properties, options ) { - var result, - stopped, - index = 0, - length = Animation.prefilters.length, - deferred = jQuery.Deferred().always( function() { - - // Don't match elem in the :animated selector - delete tick.elem; - } ), - tick = function() { - if ( stopped ) { - return false; - } - var currentTime = fxNow || createFxNow(), - remaining = Math.max( 0, animation.startTime + animation.duration - currentTime ), - - // Support: Android 2.3 only - // Archaic crash bug won't allow us to use `1 - ( 0.5 || 0 )` (#12497) - temp = remaining / animation.duration || 0, - percent = 1 - temp, - index = 0, - length = animation.tweens.length; - - for ( ; index < length; index++ ) { - animation.tweens[ index ].run( percent ); - } - - deferred.notifyWith( elem, [ animation, percent, remaining ] ); - - // If there's more to do, yield - if ( percent < 1 && length ) { - return remaining; - } - - // If this was an empty animation, synthesize a final progress notification - if ( !length ) { - deferred.notifyWith( elem, [ animation, 1, 0 ] ); - } - - // Resolve the animation and report its conclusion - deferred.resolveWith( elem, [ animation ] ); - return false; - }, - animation = deferred.promise( { - elem: elem, - props: jQuery.extend( {}, properties ), - opts: jQuery.extend( true, { - specialEasing: {}, - easing: jQuery.easing._default - }, options ), - originalProperties: properties, - originalOptions: options, - startTime: fxNow || createFxNow(), - duration: options.duration, - tweens: [], - createTween: function( prop, end ) { - var tween = jQuery.Tween( elem, animation.opts, prop, end, - animation.opts.specialEasing[ prop ] || animation.opts.easing ); - animation.tweens.push( tween ); - return tween; - }, - stop: function( gotoEnd ) { - var index = 0, - - // If we are going to the end, we want to run all the tweens - // otherwise we skip this part - length = gotoEnd ? 
animation.tweens.length : 0; - if ( stopped ) { - return this; - } - stopped = true; - for ( ; index < length; index++ ) { - animation.tweens[ index ].run( 1 ); - } - - // Resolve when we played the last frame; otherwise, reject - if ( gotoEnd ) { - deferred.notifyWith( elem, [ animation, 1, 0 ] ); - deferred.resolveWith( elem, [ animation, gotoEnd ] ); - } else { - deferred.rejectWith( elem, [ animation, gotoEnd ] ); - } - return this; - } - } ), - props = animation.props; - - propFilter( props, animation.opts.specialEasing ); - - for ( ; index < length; index++ ) { - result = Animation.prefilters[ index ].call( animation, elem, props, animation.opts ); - if ( result ) { - if ( isFunction( result.stop ) ) { - jQuery._queueHooks( animation.elem, animation.opts.queue ).stop = - result.stop.bind( result ); - } - return result; - } - } - - jQuery.map( props, createTween, animation ); - - if ( isFunction( animation.opts.start ) ) { - animation.opts.start.call( elem, animation ); - } - - // Attach callbacks from options - animation - .progress( animation.opts.progress ) - .done( animation.opts.done, animation.opts.complete ) - .fail( animation.opts.fail ) - .always( animation.opts.always ); - - jQuery.fx.timer( - jQuery.extend( tick, { - elem: elem, - anim: animation, - queue: animation.opts.queue - } ) - ); - - return animation; -} - -jQuery.Animation = jQuery.extend( Animation, { - - tweeners: { - "*": [ function( prop, value ) { - var tween = this.createTween( prop, value ); - adjustCSS( tween.elem, prop, rcssNum.exec( value ), tween ); - return tween; - } ] - }, - - tweener: function( props, callback ) { - if ( isFunction( props ) ) { - callback = props; - props = [ "*" ]; - } else { - props = props.match( rnothtmlwhite ); - } - - var prop, - index = 0, - length = props.length; - - for ( ; index < length; index++ ) { - prop = props[ index ]; - Animation.tweeners[ prop ] = Animation.tweeners[ prop ] || []; - Animation.tweeners[ prop ].unshift( callback ); - } - }, - - prefilters: [ defaultPrefilter ], - - prefilter: function( callback, prepend ) { - if ( prepend ) { - Animation.prefilters.unshift( callback ); - } else { - Animation.prefilters.push( callback ); - } - } -} ); - -jQuery.speed = function( speed, easing, fn ) { - var opt = speed && typeof speed === "object" ? 
jQuery.extend( {}, speed ) : { - complete: fn || !fn && easing || - isFunction( speed ) && speed, - duration: speed, - easing: fn && easing || easing && !isFunction( easing ) && easing - }; - - // Go to the end state if fx are off - if ( jQuery.fx.off ) { - opt.duration = 0; - - } else { - if ( typeof opt.duration !== "number" ) { - if ( opt.duration in jQuery.fx.speeds ) { - opt.duration = jQuery.fx.speeds[ opt.duration ]; - - } else { - opt.duration = jQuery.fx.speeds._default; - } - } - } - - // Normalize opt.queue - true/undefined/null -> "fx" - if ( opt.queue == null || opt.queue === true ) { - opt.queue = "fx"; - } - - // Queueing - opt.old = opt.complete; - - opt.complete = function() { - if ( isFunction( opt.old ) ) { - opt.old.call( this ); - } - - if ( opt.queue ) { - jQuery.dequeue( this, opt.queue ); - } - }; - - return opt; -}; - -jQuery.fn.extend( { - fadeTo: function( speed, to, easing, callback ) { - - // Show any hidden elements after setting opacity to 0 - return this.filter( isHiddenWithinTree ).css( "opacity", 0 ).show() - - // Animate to the value specified - .end().animate( { opacity: to }, speed, easing, callback ); - }, - animate: function( prop, speed, easing, callback ) { - var empty = jQuery.isEmptyObject( prop ), - optall = jQuery.speed( speed, easing, callback ), - doAnimation = function() { - - // Operate on a copy of prop so per-property easing won't be lost - var anim = Animation( this, jQuery.extend( {}, prop ), optall ); - - // Empty animations, or finishing resolves immediately - if ( empty || dataPriv.get( this, "finish" ) ) { - anim.stop( true ); - } - }; - doAnimation.finish = doAnimation; - - return empty || optall.queue === false ? - this.each( doAnimation ) : - this.queue( optall.queue, doAnimation ); - }, - stop: function( type, clearQueue, gotoEnd ) { - var stopQueue = function( hooks ) { - var stop = hooks.stop; - delete hooks.stop; - stop( gotoEnd ); - }; - - if ( typeof type !== "string" ) { - gotoEnd = clearQueue; - clearQueue = type; - type = undefined; - } - if ( clearQueue ) { - this.queue( type || "fx", [] ); - } - - return this.each( function() { - var dequeue = true, - index = type != null && type + "queueHooks", - timers = jQuery.timers, - data = dataPriv.get( this ); - - if ( index ) { - if ( data[ index ] && data[ index ].stop ) { - stopQueue( data[ index ] ); - } - } else { - for ( index in data ) { - if ( data[ index ] && data[ index ].stop && rrun.test( index ) ) { - stopQueue( data[ index ] ); - } - } - } - - for ( index = timers.length; index--; ) { - if ( timers[ index ].elem === this && - ( type == null || timers[ index ].queue === type ) ) { - - timers[ index ].anim.stop( gotoEnd ); - dequeue = false; - timers.splice( index, 1 ); - } - } - - // Start the next in the queue if the last step wasn't forced. - // Timers currently will call their complete callbacks, which - // will dequeue but only if they were gotoEnd. - if ( dequeue || !gotoEnd ) { - jQuery.dequeue( this, type ); - } - } ); - }, - finish: function( type ) { - if ( type !== false ) { - type = type || "fx"; - } - return this.each( function() { - var index, - data = dataPriv.get( this ), - queue = data[ type + "queue" ], - hooks = data[ type + "queueHooks" ], - timers = jQuery.timers, - length = queue ? 
queue.length : 0; - - // Enable finishing flag on private data - data.finish = true; - - // Empty the queue first - jQuery.queue( this, type, [] ); - - if ( hooks && hooks.stop ) { - hooks.stop.call( this, true ); - } - - // Look for any active animations, and finish them - for ( index = timers.length; index--; ) { - if ( timers[ index ].elem === this && timers[ index ].queue === type ) { - timers[ index ].anim.stop( true ); - timers.splice( index, 1 ); - } - } - - // Look for any animations in the old queue and finish them - for ( index = 0; index < length; index++ ) { - if ( queue[ index ] && queue[ index ].finish ) { - queue[ index ].finish.call( this ); - } - } - - // Turn off finishing flag - delete data.finish; - } ); - } -} ); - -jQuery.each( [ "toggle", "show", "hide" ], function( _i, name ) { - var cssFn = jQuery.fn[ name ]; - jQuery.fn[ name ] = function( speed, easing, callback ) { - return speed == null || typeof speed === "boolean" ? - cssFn.apply( this, arguments ) : - this.animate( genFx( name, true ), speed, easing, callback ); - }; -} ); - -// Generate shortcuts for custom animations -jQuery.each( { - slideDown: genFx( "show" ), - slideUp: genFx( "hide" ), - slideToggle: genFx( "toggle" ), - fadeIn: { opacity: "show" }, - fadeOut: { opacity: "hide" }, - fadeToggle: { opacity: "toggle" } -}, function( name, props ) { - jQuery.fn[ name ] = function( speed, easing, callback ) { - return this.animate( props, speed, easing, callback ); - }; -} ); - -jQuery.timers = []; -jQuery.fx.tick = function() { - var timer, - i = 0, - timers = jQuery.timers; - - fxNow = Date.now(); - - for ( ; i < timers.length; i++ ) { - timer = timers[ i ]; - - // Run the timer and safely remove it when done (allowing for external removal) - if ( !timer() && timers[ i ] === timer ) { - timers.splice( i--, 1 ); - } - } - - if ( !timers.length ) { - jQuery.fx.stop(); - } - fxNow = undefined; -}; - -jQuery.fx.timer = function( timer ) { - jQuery.timers.push( timer ); - jQuery.fx.start(); -}; - -jQuery.fx.interval = 13; -jQuery.fx.start = function() { - if ( inProgress ) { - return; - } - - inProgress = true; - schedule(); -}; - -jQuery.fx.stop = function() { - inProgress = null; -}; - -jQuery.fx.speeds = { - slow: 600, - fast: 200, - - // Default speed - _default: 400 -}; - - -// Based off of the plugin by Clint Helfers, with permission. -// https://web.archive.org/web/20100324014747/http://blindsignals.com/index.php/2009/07/jquery-delay/ -jQuery.fn.delay = function( time, type ) { - time = jQuery.fx ? 
jQuery.fx.speeds[ time ] || time : time; - type = type || "fx"; - - return this.queue( type, function( next, hooks ) { - var timeout = window.setTimeout( next, time ); - hooks.stop = function() { - window.clearTimeout( timeout ); - }; - } ); -}; - - -( function() { - var input = document.createElement( "input" ), - select = document.createElement( "select" ), - opt = select.appendChild( document.createElement( "option" ) ); - - input.type = "checkbox"; - - // Support: Android <=4.3 only - // Default value for a checkbox should be "on" - support.checkOn = input.value !== ""; - - // Support: IE <=11 only - // Must access selectedIndex to make default options select - support.optSelected = opt.selected; - - // Support: IE <=11 only - // An input loses its value after becoming a radio - input = document.createElement( "input" ); - input.value = "t"; - input.type = "radio"; - support.radioValue = input.value === "t"; -} )(); - - -var boolHook, - attrHandle = jQuery.expr.attrHandle; - -jQuery.fn.extend( { - attr: function( name, value ) { - return access( this, jQuery.attr, name, value, arguments.length > 1 ); - }, - - removeAttr: function( name ) { - return this.each( function() { - jQuery.removeAttr( this, name ); - } ); - } -} ); - -jQuery.extend( { - attr: function( elem, name, value ) { - var ret, hooks, - nType = elem.nodeType; - - // Don't get/set attributes on text, comment and attribute nodes - if ( nType === 3 || nType === 8 || nType === 2 ) { - return; - } - - // Fallback to prop when attributes are not supported - if ( typeof elem.getAttribute === "undefined" ) { - return jQuery.prop( elem, name, value ); - } - - // Attribute hooks are determined by the lowercase version - // Grab necessary hook if one is defined - if ( nType !== 1 || !jQuery.isXMLDoc( elem ) ) { - hooks = jQuery.attrHooks[ name.toLowerCase() ] || - ( jQuery.expr.match.bool.test( name ) ? boolHook : undefined ); - } - - if ( value !== undefined ) { - if ( value === null ) { - jQuery.removeAttr( elem, name ); - return; - } - - if ( hooks && "set" in hooks && - ( ret = hooks.set( elem, value, name ) ) !== undefined ) { - return ret; - } - - elem.setAttribute( name, value + "" ); - return value; - } - - if ( hooks && "get" in hooks && ( ret = hooks.get( elem, name ) ) !== null ) { - return ret; - } - - ret = jQuery.find.attr( elem, name ); - - // Non-existent attributes return null, we normalize to undefined - return ret == null ? 
undefined : ret; - }, - - attrHooks: { - type: { - set: function( elem, value ) { - if ( !support.radioValue && value === "radio" && - nodeName( elem, "input" ) ) { - var val = elem.value; - elem.setAttribute( "type", value ); - if ( val ) { - elem.value = val; - } - return value; - } - } - } - }, - - removeAttr: function( elem, value ) { - var name, - i = 0, - - // Attribute names can contain non-HTML whitespace characters - // https://html.spec.whatwg.org/multipage/syntax.html#attributes-2 - attrNames = value && value.match( rnothtmlwhite ); - - if ( attrNames && elem.nodeType === 1 ) { - while ( ( name = attrNames[ i++ ] ) ) { - elem.removeAttribute( name ); - } - } - } -} ); - -// Hooks for boolean attributes -boolHook = { - set: function( elem, value, name ) { - if ( value === false ) { - - // Remove boolean attributes when set to false - jQuery.removeAttr( elem, name ); - } else { - elem.setAttribute( name, name ); - } - return name; - } -}; - -jQuery.each( jQuery.expr.match.bool.source.match( /\w+/g ), function( _i, name ) { - var getter = attrHandle[ name ] || jQuery.find.attr; - - attrHandle[ name ] = function( elem, name, isXML ) { - var ret, handle, - lowercaseName = name.toLowerCase(); - - if ( !isXML ) { - - // Avoid an infinite loop by temporarily removing this function from the getter - handle = attrHandle[ lowercaseName ]; - attrHandle[ lowercaseName ] = ret; - ret = getter( elem, name, isXML ) != null ? - lowercaseName : - null; - attrHandle[ lowercaseName ] = handle; - } - return ret; - }; -} ); - - - - -var rfocusable = /^(?:input|select|textarea|button)$/i, - rclickable = /^(?:a|area)$/i; - -jQuery.fn.extend( { - prop: function( name, value ) { - return access( this, jQuery.prop, name, value, arguments.length > 1 ); - }, - - removeProp: function( name ) { - return this.each( function() { - delete this[ jQuery.propFix[ name ] || name ]; - } ); - } -} ); - -jQuery.extend( { - prop: function( elem, name, value ) { - var ret, hooks, - nType = elem.nodeType; - - // Don't get/set properties on text, comment and attribute nodes - if ( nType === 3 || nType === 8 || nType === 2 ) { - return; - } - - if ( nType !== 1 || !jQuery.isXMLDoc( elem ) ) { - - // Fix name and attach hooks - name = jQuery.propFix[ name ] || name; - hooks = jQuery.propHooks[ name ]; - } - - if ( value !== undefined ) { - if ( hooks && "set" in hooks && - ( ret = hooks.set( elem, value, name ) ) !== undefined ) { - return ret; - } - - return ( elem[ name ] = value ); - } - - if ( hooks && "get" in hooks && ( ret = hooks.get( elem, name ) ) !== null ) { - return ret; - } - - return elem[ name ]; - }, - - propHooks: { - tabIndex: { - get: function( elem ) { - - // Support: IE <=9 - 11 only - // elem.tabIndex doesn't always return the - // correct value when it hasn't been explicitly set - // https://web.archive.org/web/20141116233347/http://fluidproject.org/blog/2008/01/09/getting-setting-and-removing-tabindex-values-with-javascript/ - // Use proper attribute retrieval(#12072) - var tabindex = jQuery.find.attr( elem, "tabindex" ); - - if ( tabindex ) { - return parseInt( tabindex, 10 ); - } - - if ( - rfocusable.test( elem.nodeName ) || - rclickable.test( elem.nodeName ) && - elem.href - ) { - return 0; - } - - return -1; - } - } - }, - - propFix: { - "for": "htmlFor", - "class": "className" - } -} ); - -// Support: IE <=11 only -// Accessing the selectedIndex property -// forces the browser to respect setting selected -// on the option -// The getter ensures a default option is selected -// when in an 
optgroup -// eslint rule "no-unused-expressions" is disabled for this code -// since it considers such accessions noop -if ( !support.optSelected ) { - jQuery.propHooks.selected = { - get: function( elem ) { - - /* eslint no-unused-expressions: "off" */ - - var parent = elem.parentNode; - if ( parent && parent.parentNode ) { - parent.parentNode.selectedIndex; - } - return null; - }, - set: function( elem ) { - - /* eslint no-unused-expressions: "off" */ - - var parent = elem.parentNode; - if ( parent ) { - parent.selectedIndex; - - if ( parent.parentNode ) { - parent.parentNode.selectedIndex; - } - } - } - }; -} - -jQuery.each( [ - "tabIndex", - "readOnly", - "maxLength", - "cellSpacing", - "cellPadding", - "rowSpan", - "colSpan", - "useMap", - "frameBorder", - "contentEditable" -], function() { - jQuery.propFix[ this.toLowerCase() ] = this; -} ); - - - - - // Strip and collapse whitespace according to HTML spec - // https://infra.spec.whatwg.org/#strip-and-collapse-ascii-whitespace - function stripAndCollapse( value ) { - var tokens = value.match( rnothtmlwhite ) || []; - return tokens.join( " " ); - } - - -function getClass( elem ) { - return elem.getAttribute && elem.getAttribute( "class" ) || ""; -} - -function classesToArray( value ) { - if ( Array.isArray( value ) ) { - return value; - } - if ( typeof value === "string" ) { - return value.match( rnothtmlwhite ) || []; - } - return []; -} - -jQuery.fn.extend( { - addClass: function( value ) { - var classes, elem, cur, curValue, clazz, j, finalValue, - i = 0; - - if ( isFunction( value ) ) { - return this.each( function( j ) { - jQuery( this ).addClass( value.call( this, j, getClass( this ) ) ); - } ); - } - - classes = classesToArray( value ); - - if ( classes.length ) { - while ( ( elem = this[ i++ ] ) ) { - curValue = getClass( elem ); - cur = elem.nodeType === 1 && ( " " + stripAndCollapse( curValue ) + " " ); - - if ( cur ) { - j = 0; - while ( ( clazz = classes[ j++ ] ) ) { - if ( cur.indexOf( " " + clazz + " " ) < 0 ) { - cur += clazz + " "; - } - } - - // Only assign if different to avoid unneeded rendering. - finalValue = stripAndCollapse( cur ); - if ( curValue !== finalValue ) { - elem.setAttribute( "class", finalValue ); - } - } - } - } - - return this; - }, - - removeClass: function( value ) { - var classes, elem, cur, curValue, clazz, j, finalValue, - i = 0; - - if ( isFunction( value ) ) { - return this.each( function( j ) { - jQuery( this ).removeClass( value.call( this, j, getClass( this ) ) ); - } ); - } - - if ( !arguments.length ) { - return this.attr( "class", "" ); - } - - classes = classesToArray( value ); - - if ( classes.length ) { - while ( ( elem = this[ i++ ] ) ) { - curValue = getClass( elem ); - - // This expression is here for better compressibility (see addClass) - cur = elem.nodeType === 1 && ( " " + stripAndCollapse( curValue ) + " " ); - - if ( cur ) { - j = 0; - while ( ( clazz = classes[ j++ ] ) ) { - - // Remove *all* instances - while ( cur.indexOf( " " + clazz + " " ) > -1 ) { - cur = cur.replace( " " + clazz + " ", " " ); - } - } - - // Only assign if different to avoid unneeded rendering. - finalValue = stripAndCollapse( cur ); - if ( curValue !== finalValue ) { - elem.setAttribute( "class", finalValue ); - } - } - } - } - - return this; - }, - - toggleClass: function( value, stateVal ) { - var type = typeof value, - isValidValue = type === "string" || Array.isArray( value ); - - if ( typeof stateVal === "boolean" && isValidValue ) { - return stateVal ? 
this.addClass( value ) : this.removeClass( value ); - } - - if ( isFunction( value ) ) { - return this.each( function( i ) { - jQuery( this ).toggleClass( - value.call( this, i, getClass( this ), stateVal ), - stateVal - ); - } ); - } - - return this.each( function() { - var className, i, self, classNames; - - if ( isValidValue ) { - - // Toggle individual class names - i = 0; - self = jQuery( this ); - classNames = classesToArray( value ); - - while ( ( className = classNames[ i++ ] ) ) { - - // Check each className given, space separated list - if ( self.hasClass( className ) ) { - self.removeClass( className ); - } else { - self.addClass( className ); - } - } - - // Toggle whole class name - } else if ( value === undefined || type === "boolean" ) { - className = getClass( this ); - if ( className ) { - - // Store className if set - dataPriv.set( this, "__className__", className ); - } - - // If the element has a class name or if we're passed `false`, - // then remove the whole classname (if there was one, the above saved it). - // Otherwise bring back whatever was previously saved (if anything), - // falling back to the empty string if nothing was stored. - if ( this.setAttribute ) { - this.setAttribute( "class", - className || value === false ? - "" : - dataPriv.get( this, "__className__" ) || "" - ); - } - } - } ); - }, - - hasClass: function( selector ) { - var className, elem, - i = 0; - - className = " " + selector + " "; - while ( ( elem = this[ i++ ] ) ) { - if ( elem.nodeType === 1 && - ( " " + stripAndCollapse( getClass( elem ) ) + " " ).indexOf( className ) > -1 ) { - return true; - } - } - - return false; - } -} ); - - - - -var rreturn = /\r/g; - -jQuery.fn.extend( { - val: function( value ) { - var hooks, ret, valueIsFunction, - elem = this[ 0 ]; - - if ( !arguments.length ) { - if ( elem ) { - hooks = jQuery.valHooks[ elem.type ] || - jQuery.valHooks[ elem.nodeName.toLowerCase() ]; - - if ( hooks && - "get" in hooks && - ( ret = hooks.get( elem, "value" ) ) !== undefined - ) { - return ret; - } - - ret = elem.value; - - // Handle most common string cases - if ( typeof ret === "string" ) { - return ret.replace( rreturn, "" ); - } - - // Handle cases where value is null/undef or number - return ret == null ? "" : ret; - } - - return; - } - - valueIsFunction = isFunction( value ); - - return this.each( function( i ) { - var val; - - if ( this.nodeType !== 1 ) { - return; - } - - if ( valueIsFunction ) { - val = value.call( this, i, jQuery( this ).val() ); - } else { - val = value; - } - - // Treat null/undefined as ""; convert numbers to string - if ( val == null ) { - val = ""; - - } else if ( typeof val === "number" ) { - val += ""; - - } else if ( Array.isArray( val ) ) { - val = jQuery.map( val, function( value ) { - return value == null ? "" : value + ""; - } ); - } - - hooks = jQuery.valHooks[ this.type ] || jQuery.valHooks[ this.nodeName.toLowerCase() ]; - - // If set returns undefined, fall back to normal setting - if ( !hooks || !( "set" in hooks ) || hooks.set( this, val, "value" ) === undefined ) { - this.value = val; - } - } ); - } -} ); - -jQuery.extend( { - valHooks: { - option: { - get: function( elem ) { - - var val = jQuery.find.attr( elem, "value" ); - return val != null ? 
- val : - - // Support: IE <=10 - 11 only - // option.text throws exceptions (#14686, #14858) - // Strip and collapse whitespace - // https://html.spec.whatwg.org/#strip-and-collapse-whitespace - stripAndCollapse( jQuery.text( elem ) ); - } - }, - select: { - get: function( elem ) { - var value, option, i, - options = elem.options, - index = elem.selectedIndex, - one = elem.type === "select-one", - values = one ? null : [], - max = one ? index + 1 : options.length; - - if ( index < 0 ) { - i = max; - - } else { - i = one ? index : 0; - } - - // Loop through all the selected options - for ( ; i < max; i++ ) { - option = options[ i ]; - - // Support: IE <=9 only - // IE8-9 doesn't update selected after form reset (#2551) - if ( ( option.selected || i === index ) && - - // Don't return options that are disabled or in a disabled optgroup - !option.disabled && - ( !option.parentNode.disabled || - !nodeName( option.parentNode, "optgroup" ) ) ) { - - // Get the specific value for the option - value = jQuery( option ).val(); - - // We don't need an array for one selects - if ( one ) { - return value; - } - - // Multi-Selects return an array - values.push( value ); - } - } - - return values; - }, - - set: function( elem, value ) { - var optionSet, option, - options = elem.options, - values = jQuery.makeArray( value ), - i = options.length; - - while ( i-- ) { - option = options[ i ]; - - /* eslint-disable no-cond-assign */ - - if ( option.selected = - jQuery.inArray( jQuery.valHooks.option.get( option ), values ) > -1 - ) { - optionSet = true; - } - - /* eslint-enable no-cond-assign */ - } - - // Force browsers to behave consistently when non-matching value is set - if ( !optionSet ) { - elem.selectedIndex = -1; - } - return values; - } - } - } -} ); - -// Radios and checkboxes getter/setter -jQuery.each( [ "radio", "checkbox" ], function() { - jQuery.valHooks[ this ] = { - set: function( elem, value ) { - if ( Array.isArray( value ) ) { - return ( elem.checked = jQuery.inArray( jQuery( elem ).val(), value ) > -1 ); - } - } - }; - if ( !support.checkOn ) { - jQuery.valHooks[ this ].get = function( elem ) { - return elem.getAttribute( "value" ) === null ? "on" : elem.value; - }; - } -} ); - - - - -// Return jQuery for attributes-only inclusion - - -support.focusin = "onfocusin" in window; - - -var rfocusMorph = /^(?:focusinfocus|focusoutblur)$/, - stopPropagationCallback = function( e ) { - e.stopPropagation(); - }; - -jQuery.extend( jQuery.event, { - - trigger: function( event, data, elem, onlyHandlers ) { - - var i, cur, tmp, bubbleType, ontype, handle, special, lastElement, - eventPath = [ elem || document ], - type = hasOwn.call( event, "type" ) ? event.type : event, - namespaces = hasOwn.call( event, "namespace" ) ? event.namespace.split( "." ) : []; - - cur = lastElement = tmp = elem = elem || document; - - // Don't do events on text and comment nodes - if ( elem.nodeType === 3 || elem.nodeType === 8 ) { - return; - } - - // focus/blur morphs to focusin/out; ensure we're not firing them right now - if ( rfocusMorph.test( type + jQuery.event.triggered ) ) { - return; - } - - if ( type.indexOf( "." ) > -1 ) { - - // Namespaced trigger; create a regexp to match event type in handle() - namespaces = type.split( "." ); - type = namespaces.shift(); - namespaces.sort(); - } - ontype = type.indexOf( ":" ) < 0 && "on" + type; - - // Caller can pass in a jQuery.Event object, Object, or just an event type string - event = event[ jQuery.expando ] ? 
- event : - new jQuery.Event( type, typeof event === "object" && event ); - - // Trigger bitmask: & 1 for native handlers; & 2 for jQuery (always true) - event.isTrigger = onlyHandlers ? 2 : 3; - event.namespace = namespaces.join( "." ); - event.rnamespace = event.namespace ? - new RegExp( "(^|\\.)" + namespaces.join( "\\.(?:.*\\.|)" ) + "(\\.|$)" ) : - null; - - // Clean up the event in case it is being reused - event.result = undefined; - if ( !event.target ) { - event.target = elem; - } - - // Clone any incoming data and prepend the event, creating the handler arg list - data = data == null ? - [ event ] : - jQuery.makeArray( data, [ event ] ); - - // Allow special events to draw outside the lines - special = jQuery.event.special[ type ] || {}; - if ( !onlyHandlers && special.trigger && special.trigger.apply( elem, data ) === false ) { - return; - } - - // Determine event propagation path in advance, per W3C events spec (#9951) - // Bubble up to document, then to window; watch for a global ownerDocument var (#9724) - if ( !onlyHandlers && !special.noBubble && !isWindow( elem ) ) { - - bubbleType = special.delegateType || type; - if ( !rfocusMorph.test( bubbleType + type ) ) { - cur = cur.parentNode; - } - for ( ; cur; cur = cur.parentNode ) { - eventPath.push( cur ); - tmp = cur; - } - - // Only add window if we got to document (e.g., not plain obj or detached DOM) - if ( tmp === ( elem.ownerDocument || document ) ) { - eventPath.push( tmp.defaultView || tmp.parentWindow || window ); - } - } - - // Fire handlers on the event path - i = 0; - while ( ( cur = eventPath[ i++ ] ) && !event.isPropagationStopped() ) { - lastElement = cur; - event.type = i > 1 ? - bubbleType : - special.bindType || type; - - // jQuery handler - handle = ( - dataPriv.get( cur, "events" ) || Object.create( null ) - )[ event.type ] && - dataPriv.get( cur, "handle" ); - if ( handle ) { - handle.apply( cur, data ); - } - - // Native handler - handle = ontype && cur[ ontype ]; - if ( handle && handle.apply && acceptData( cur ) ) { - event.result = handle.apply( cur, data ); - if ( event.result === false ) { - event.preventDefault(); - } - } - } - event.type = type; - - // If nobody prevented the default action, do it now - if ( !onlyHandlers && !event.isDefaultPrevented() ) { - - if ( ( !special._default || - special._default.apply( eventPath.pop(), data ) === false ) && - acceptData( elem ) ) { - - // Call a native DOM method on the target with the same name as the event. 
- // Don't do default actions on window, that's where global variables be (#6170) - if ( ontype && isFunction( elem[ type ] ) && !isWindow( elem ) ) { - - // Don't re-trigger an onFOO event when we call its FOO() method - tmp = elem[ ontype ]; - - if ( tmp ) { - elem[ ontype ] = null; - } - - // Prevent re-triggering of the same event, since we already bubbled it above - jQuery.event.triggered = type; - - if ( event.isPropagationStopped() ) { - lastElement.addEventListener( type, stopPropagationCallback ); - } - - elem[ type ](); - - if ( event.isPropagationStopped() ) { - lastElement.removeEventListener( type, stopPropagationCallback ); - } - - jQuery.event.triggered = undefined; - - if ( tmp ) { - elem[ ontype ] = tmp; - } - } - } - } - - return event.result; - }, - - // Piggyback on a donor event to simulate a different one - // Used only for `focus(in | out)` events - simulate: function( type, elem, event ) { - var e = jQuery.extend( - new jQuery.Event(), - event, - { - type: type, - isSimulated: true - } - ); - - jQuery.event.trigger( e, null, elem ); - } - -} ); - -jQuery.fn.extend( { - - trigger: function( type, data ) { - return this.each( function() { - jQuery.event.trigger( type, data, this ); - } ); - }, - triggerHandler: function( type, data ) { - var elem = this[ 0 ]; - if ( elem ) { - return jQuery.event.trigger( type, data, elem, true ); - } - } -} ); - - -// Support: Firefox <=44 -// Firefox doesn't have focus(in | out) events -// Related ticket - https://bugzilla.mozilla.org/show_bug.cgi?id=687787 -// -// Support: Chrome <=48 - 49, Safari <=9.0 - 9.1 -// focus(in | out) events fire after focus & blur events, -// which is spec violation - http://www.w3.org/TR/DOM-Level-3-Events/#events-focusevent-event-order -// Related ticket - https://bugs.chromium.org/p/chromium/issues/detail?id=449857 -if ( !support.focusin ) { - jQuery.each( { focus: "focusin", blur: "focusout" }, function( orig, fix ) { - - // Attach a single capturing handler on the document while someone wants focusin/focusout - var handler = function( event ) { - jQuery.event.simulate( fix, event.target, jQuery.event.fix( event ) ); - }; - - jQuery.event.special[ fix ] = { - setup: function() { - - // Handle: regular nodes (via `this.ownerDocument`), window - // (via `this.document`) & document (via `this`). - var doc = this.ownerDocument || this.document || this, - attaches = dataPriv.access( doc, fix ); - - if ( !attaches ) { - doc.addEventListener( orig, handler, true ); - } - dataPriv.access( doc, fix, ( attaches || 0 ) + 1 ); - }, - teardown: function() { - var doc = this.ownerDocument || this.document || this, - attaches = dataPriv.access( doc, fix ) - 1; - - if ( !attaches ) { - doc.removeEventListener( orig, handler, true ); - dataPriv.remove( doc, fix ); - - } else { - dataPriv.access( doc, fix, attaches ); - } - } - }; - } ); -} -var location = window.location; - -var nonce = { guid: Date.now() }; - -var rquery = ( /\?/ ); - - - -// Cross-browser xml parsing -jQuery.parseXML = function( data ) { - var xml; - if ( !data || typeof data !== "string" ) { - return null; - } - - // Support: IE 9 - 11 only - // IE throws on parseFromString with invalid input. 
- try { - xml = ( new window.DOMParser() ).parseFromString( data, "text/xml" ); - } catch ( e ) { - xml = undefined; - } - - if ( !xml || xml.getElementsByTagName( "parsererror" ).length ) { - jQuery.error( "Invalid XML: " + data ); - } - return xml; -}; - - -var - rbracket = /\[\]$/, - rCRLF = /\r?\n/g, - rsubmitterTypes = /^(?:submit|button|image|reset|file)$/i, - rsubmittable = /^(?:input|select|textarea|keygen)/i; - -function buildParams( prefix, obj, traditional, add ) { - var name; - - if ( Array.isArray( obj ) ) { - - // Serialize array item. - jQuery.each( obj, function( i, v ) { - if ( traditional || rbracket.test( prefix ) ) { - - // Treat each array item as a scalar. - add( prefix, v ); - - } else { - - // Item is non-scalar (array or object), encode its numeric index. - buildParams( - prefix + "[" + ( typeof v === "object" && v != null ? i : "" ) + "]", - v, - traditional, - add - ); - } - } ); - - } else if ( !traditional && toType( obj ) === "object" ) { - - // Serialize object item. - for ( name in obj ) { - buildParams( prefix + "[" + name + "]", obj[ name ], traditional, add ); - } - - } else { - - // Serialize scalar item. - add( prefix, obj ); - } -} - -// Serialize an array of form elements or a set of -// key/values into a query string -jQuery.param = function( a, traditional ) { - var prefix, - s = [], - add = function( key, valueOrFunction ) { - - // If value is a function, invoke it and use its return value - var value = isFunction( valueOrFunction ) ? - valueOrFunction() : - valueOrFunction; - - s[ s.length ] = encodeURIComponent( key ) + "=" + - encodeURIComponent( value == null ? "" : value ); - }; - - if ( a == null ) { - return ""; - } - - // If an array was passed in, assume that it is an array of form elements. - if ( Array.isArray( a ) || ( a.jquery && !jQuery.isPlainObject( a ) ) ) { - - // Serialize the form elements - jQuery.each( a, function() { - add( this.name, this.value ); - } ); - - } else { - - // If traditional, encode the "old" way (the way 1.3.2 or older - // did it), otherwise encode params recursively. - for ( prefix in a ) { - buildParams( prefix, a[ prefix ], traditional, add ); - } - } - - // Return the resulting serialization - return s.join( "&" ); -}; - -jQuery.fn.extend( { - serialize: function() { - return jQuery.param( this.serializeArray() ); - }, - serializeArray: function() { - return this.map( function() { - - // Can add propHook for "elements" to filter or add form elements - var elements = jQuery.prop( this, "elements" ); - return elements ? 
jQuery.makeArray( elements ) : this; - } ) - .filter( function() { - var type = this.type; - - // Use .is( ":disabled" ) so that fieldset[disabled] works - return this.name && !jQuery( this ).is( ":disabled" ) && - rsubmittable.test( this.nodeName ) && !rsubmitterTypes.test( type ) && - ( this.checked || !rcheckableType.test( type ) ); - } ) - .map( function( _i, elem ) { - var val = jQuery( this ).val(); - - if ( val == null ) { - return null; - } - - if ( Array.isArray( val ) ) { - return jQuery.map( val, function( val ) { - return { name: elem.name, value: val.replace( rCRLF, "\r\n" ) }; - } ); - } - - return { name: elem.name, value: val.replace( rCRLF, "\r\n" ) }; - } ).get(); - } -} ); - - -var - r20 = /%20/g, - rhash = /#.*$/, - rantiCache = /([?&])_=[^&]*/, - rheaders = /^(.*?):[ \t]*([^\r\n]*)$/mg, - - // #7653, #8125, #8152: local protocol detection - rlocalProtocol = /^(?:about|app|app-storage|.+-extension|file|res|widget):$/, - rnoContent = /^(?:GET|HEAD)$/, - rprotocol = /^\/\//, - - /* Prefilters - * 1) They are useful to introduce custom dataTypes (see ajax/jsonp.js for an example) - * 2) These are called: - * - BEFORE asking for a transport - * - AFTER param serialization (s.data is a string if s.processData is true) - * 3) key is the dataType - * 4) the catchall symbol "*" can be used - * 5) execution will start with transport dataType and THEN continue down to "*" if needed - */ - prefilters = {}, - - /* Transports bindings - * 1) key is the dataType - * 2) the catchall symbol "*" can be used - * 3) selection will start with transport dataType and THEN go to "*" if needed - */ - transports = {}, - - // Avoid comment-prolog char sequence (#10098); must appease lint and evade compression - allTypes = "*/".concat( "*" ), - - // Anchor tag for parsing the document origin - originAnchor = document.createElement( "a" ); - originAnchor.href = location.href; - -// Base "constructor" for jQuery.ajaxPrefilter and jQuery.ajaxTransport -function addToPrefiltersOrTransports( structure ) { - - // dataTypeExpression is optional and defaults to "*" - return function( dataTypeExpression, func ) { - - if ( typeof dataTypeExpression !== "string" ) { - func = dataTypeExpression; - dataTypeExpression = "*"; - } - - var dataType, - i = 0, - dataTypes = dataTypeExpression.toLowerCase().match( rnothtmlwhite ) || []; - - if ( isFunction( func ) ) { - - // For each dataType in the dataTypeExpression - while ( ( dataType = dataTypes[ i++ ] ) ) { - - // Prepend if requested - if ( dataType[ 0 ] === "+" ) { - dataType = dataType.slice( 1 ) || "*"; - ( structure[ dataType ] = structure[ dataType ] || [] ).unshift( func ); - - // Otherwise append - } else { - ( structure[ dataType ] = structure[ dataType ] || [] ).push( func ); - } - } - } - }; -} - -// Base inspection function for prefilters and transports -function inspectPrefiltersOrTransports( structure, options, originalOptions, jqXHR ) { - - var inspected = {}, - seekingTransport = ( structure === transports ); - - function inspect( dataType ) { - var selected; - inspected[ dataType ] = true; - jQuery.each( structure[ dataType ] || [], function( _, prefilterOrFactory ) { - var dataTypeOrTransport = prefilterOrFactory( options, originalOptions, jqXHR ); - if ( typeof dataTypeOrTransport === "string" && - !seekingTransport && !inspected[ dataTypeOrTransport ] ) { - - options.dataTypes.unshift( dataTypeOrTransport ); - inspect( dataTypeOrTransport ); - return false; - } else if ( seekingTransport ) { - return !( selected = dataTypeOrTransport ); - } 
- } ); - return selected; - } - - return inspect( options.dataTypes[ 0 ] ) || !inspected[ "*" ] && inspect( "*" ); -} - -// A special extend for ajax options -// that takes "flat" options (not to be deep extended) -// Fixes #9887 -function ajaxExtend( target, src ) { - var key, deep, - flatOptions = jQuery.ajaxSettings.flatOptions || {}; - - for ( key in src ) { - if ( src[ key ] !== undefined ) { - ( flatOptions[ key ] ? target : ( deep || ( deep = {} ) ) )[ key ] = src[ key ]; - } - } - if ( deep ) { - jQuery.extend( true, target, deep ); - } - - return target; -} - -/* Handles responses to an ajax request: - * - finds the right dataType (mediates between content-type and expected dataType) - * - returns the corresponding response - */ -function ajaxHandleResponses( s, jqXHR, responses ) { - - var ct, type, finalDataType, firstDataType, - contents = s.contents, - dataTypes = s.dataTypes; - - // Remove auto dataType and get content-type in the process - while ( dataTypes[ 0 ] === "*" ) { - dataTypes.shift(); - if ( ct === undefined ) { - ct = s.mimeType || jqXHR.getResponseHeader( "Content-Type" ); - } - } - - // Check if we're dealing with a known content-type - if ( ct ) { - for ( type in contents ) { - if ( contents[ type ] && contents[ type ].test( ct ) ) { - dataTypes.unshift( type ); - break; - } - } - } - - // Check to see if we have a response for the expected dataType - if ( dataTypes[ 0 ] in responses ) { - finalDataType = dataTypes[ 0 ]; - } else { - - // Try convertible dataTypes - for ( type in responses ) { - if ( !dataTypes[ 0 ] || s.converters[ type + " " + dataTypes[ 0 ] ] ) { - finalDataType = type; - break; - } - if ( !firstDataType ) { - firstDataType = type; - } - } - - // Or just use first one - finalDataType = finalDataType || firstDataType; - } - - // If we found a dataType - // We add the dataType to the list if needed - // and return the corresponding response - if ( finalDataType ) { - if ( finalDataType !== dataTypes[ 0 ] ) { - dataTypes.unshift( finalDataType ); - } - return responses[ finalDataType ]; - } -} - -/* Chain conversions given the request and the original response - * Also sets the responseXXX fields on the jqXHR instance - */ -function ajaxConvert( s, response, jqXHR, isSuccess ) { - var conv2, current, conv, tmp, prev, - converters = {}, - - // Work with a copy of dataTypes in case we need to modify it for conversion - dataTypes = s.dataTypes.slice(); - - // Create converters map with lowercased keys - if ( dataTypes[ 1 ] ) { - for ( conv in s.converters ) { - converters[ conv.toLowerCase() ] = s.converters[ conv ]; - } - } - - current = dataTypes.shift(); - - // Convert to each sequential dataType - while ( current ) { - - if ( s.responseFields[ current ] ) { - jqXHR[ s.responseFields[ current ] ] = response; - } - - // Apply the dataFilter if provided - if ( !prev && isSuccess && s.dataFilter ) { - response = s.dataFilter( response, s.dataType ); - } - - prev = current; - current = dataTypes.shift(); - - if ( current ) { - - // There's only work to do if current dataType is non-auto - if ( current === "*" ) { - - current = prev; - - // Convert response if prev dataType is non-auto and differs from current - } else if ( prev !== "*" && prev !== current ) { - - // Seek a direct converter - conv = converters[ prev + " " + current ] || converters[ "* " + current ]; - - // If none found, seek a pair - if ( !conv ) { - for ( conv2 in converters ) { - - // If conv2 outputs current - tmp = conv2.split( " " ); - if ( tmp[ 1 ] === current ) { - - // If 
prev can be converted to accepted input - conv = converters[ prev + " " + tmp[ 0 ] ] || - converters[ "* " + tmp[ 0 ] ]; - if ( conv ) { - - // Condense equivalence converters - if ( conv === true ) { - conv = converters[ conv2 ]; - - // Otherwise, insert the intermediate dataType - } else if ( converters[ conv2 ] !== true ) { - current = tmp[ 0 ]; - dataTypes.unshift( tmp[ 1 ] ); - } - break; - } - } - } - } - - // Apply converter (if not an equivalence) - if ( conv !== true ) { - - // Unless errors are allowed to bubble, catch and return them - if ( conv && s.throws ) { - response = conv( response ); - } else { - try { - response = conv( response ); - } catch ( e ) { - return { - state: "parsererror", - error: conv ? e : "No conversion from " + prev + " to " + current - }; - } - } - } - } - } - } - - return { state: "success", data: response }; -} - -jQuery.extend( { - - // Counter for holding the number of active queries - active: 0, - - // Last-Modified header cache for next request - lastModified: {}, - etag: {}, - - ajaxSettings: { - url: location.href, - type: "GET", - isLocal: rlocalProtocol.test( location.protocol ), - global: true, - processData: true, - async: true, - contentType: "application/x-www-form-urlencoded; charset=UTF-8", - - /* - timeout: 0, - data: null, - dataType: null, - username: null, - password: null, - cache: null, - throws: false, - traditional: false, - headers: {}, - */ - - accepts: { - "*": allTypes, - text: "text/plain", - html: "text/html", - xml: "application/xml, text/xml", - json: "application/json, text/javascript" - }, - - contents: { - xml: /\bxml\b/, - html: /\bhtml/, - json: /\bjson\b/ - }, - - responseFields: { - xml: "responseXML", - text: "responseText", - json: "responseJSON" - }, - - // Data converters - // Keys separate source (or catchall "*") and destination types with a single space - converters: { - - // Convert anything to text - "* text": String, - - // Text to html (true = no transformation) - "text html": true, - - // Evaluate text as a json expression - "text json": JSON.parse, - - // Parse text as xml - "text xml": jQuery.parseXML - }, - - // For options that shouldn't be deep extended: - // you can add your own custom options here if - // and when you create one that shouldn't be - // deep extended (see ajaxExtend) - flatOptions: { - url: true, - context: true - } - }, - - // Creates a full fledged settings object into target - // with both ajaxSettings and settings fields. - // If target is omitted, writes into ajaxSettings. - ajaxSetup: function( target, settings ) { - return settings ? 
- - // Building a settings object - ajaxExtend( ajaxExtend( target, jQuery.ajaxSettings ), settings ) : - - // Extending ajaxSettings - ajaxExtend( jQuery.ajaxSettings, target ); - }, - - ajaxPrefilter: addToPrefiltersOrTransports( prefilters ), - ajaxTransport: addToPrefiltersOrTransports( transports ), - - // Main method - ajax: function( url, options ) { - - // If url is an object, simulate pre-1.5 signature - if ( typeof url === "object" ) { - options = url; - url = undefined; - } - - // Force options to be an object - options = options || {}; - - var transport, - - // URL without anti-cache param - cacheURL, - - // Response headers - responseHeadersString, - responseHeaders, - - // timeout handle - timeoutTimer, - - // Url cleanup var - urlAnchor, - - // Request state (becomes false upon send and true upon completion) - completed, - - // To know if global events are to be dispatched - fireGlobals, - - // Loop variable - i, - - // uncached part of the url - uncached, - - // Create the final options object - s = jQuery.ajaxSetup( {}, options ), - - // Callbacks context - callbackContext = s.context || s, - - // Context for global events is callbackContext if it is a DOM node or jQuery collection - globalEventContext = s.context && - ( callbackContext.nodeType || callbackContext.jquery ) ? - jQuery( callbackContext ) : - jQuery.event, - - // Deferreds - deferred = jQuery.Deferred(), - completeDeferred = jQuery.Callbacks( "once memory" ), - - // Status-dependent callbacks - statusCode = s.statusCode || {}, - - // Headers (they are sent all at once) - requestHeaders = {}, - requestHeadersNames = {}, - - // Default abort message - strAbort = "canceled", - - // Fake xhr - jqXHR = { - readyState: 0, - - // Builds headers hashtable if needed - getResponseHeader: function( key ) { - var match; - if ( completed ) { - if ( !responseHeaders ) { - responseHeaders = {}; - while ( ( match = rheaders.exec( responseHeadersString ) ) ) { - responseHeaders[ match[ 1 ].toLowerCase() + " " ] = - ( responseHeaders[ match[ 1 ].toLowerCase() + " " ] || [] ) - .concat( match[ 2 ] ); - } - } - match = responseHeaders[ key.toLowerCase() + " " ]; - } - return match == null ? null : match.join( ", " ); - }, - - // Raw string - getAllResponseHeaders: function() { - return completed ? 
responseHeadersString : null; - }, - - // Caches the header - setRequestHeader: function( name, value ) { - if ( completed == null ) { - name = requestHeadersNames[ name.toLowerCase() ] = - requestHeadersNames[ name.toLowerCase() ] || name; - requestHeaders[ name ] = value; - } - return this; - }, - - // Overrides response content-type header - overrideMimeType: function( type ) { - if ( completed == null ) { - s.mimeType = type; - } - return this; - }, - - // Status-dependent callbacks - statusCode: function( map ) { - var code; - if ( map ) { - if ( completed ) { - - // Execute the appropriate callbacks - jqXHR.always( map[ jqXHR.status ] ); - } else { - - // Lazy-add the new callbacks in a way that preserves old ones - for ( code in map ) { - statusCode[ code ] = [ statusCode[ code ], map[ code ] ]; - } - } - } - return this; - }, - - // Cancel the request - abort: function( statusText ) { - var finalText = statusText || strAbort; - if ( transport ) { - transport.abort( finalText ); - } - done( 0, finalText ); - return this; - } - }; - - // Attach deferreds - deferred.promise( jqXHR ); - - // Add protocol if not provided (prefilters might expect it) - // Handle falsy url in the settings object (#10093: consistency with old signature) - // We also use the url parameter if available - s.url = ( ( url || s.url || location.href ) + "" ) - .replace( rprotocol, location.protocol + "//" ); - - // Alias method option to type as per ticket #12004 - s.type = options.method || options.type || s.method || s.type; - - // Extract dataTypes list - s.dataTypes = ( s.dataType || "*" ).toLowerCase().match( rnothtmlwhite ) || [ "" ]; - - // A cross-domain request is in order when the origin doesn't match the current origin. - if ( s.crossDomain == null ) { - urlAnchor = document.createElement( "a" ); - - // Support: IE <=8 - 11, Edge 12 - 15 - // IE throws exception on accessing the href property if url is malformed, - // e.g. 
http://example.com:80x/ - try { - urlAnchor.href = s.url; - - // Support: IE <=8 - 11 only - // Anchor's host property isn't correctly set when s.url is relative - urlAnchor.href = urlAnchor.href; - s.crossDomain = originAnchor.protocol + "//" + originAnchor.host !== - urlAnchor.protocol + "//" + urlAnchor.host; - } catch ( e ) { - - // If there is an error parsing the URL, assume it is crossDomain, - // it can be rejected by the transport if it is invalid - s.crossDomain = true; - } - } - - // Convert data if not already a string - if ( s.data && s.processData && typeof s.data !== "string" ) { - s.data = jQuery.param( s.data, s.traditional ); - } - - // Apply prefilters - inspectPrefiltersOrTransports( prefilters, s, options, jqXHR ); - - // If request was aborted inside a prefilter, stop there - if ( completed ) { - return jqXHR; - } - - // We can fire global events as of now if asked to - // Don't fire events if jQuery.event is undefined in an AMD-usage scenario (#15118) - fireGlobals = jQuery.event && s.global; - - // Watch for a new set of requests - if ( fireGlobals && jQuery.active++ === 0 ) { - jQuery.event.trigger( "ajaxStart" ); - } - - // Uppercase the type - s.type = s.type.toUpperCase(); - - // Determine if request has content - s.hasContent = !rnoContent.test( s.type ); - - // Save the URL in case we're toying with the If-Modified-Since - // and/or If-None-Match header later on - // Remove hash to simplify url manipulation - cacheURL = s.url.replace( rhash, "" ); - - // More options handling for requests with no content - if ( !s.hasContent ) { - - // Remember the hash so we can put it back - uncached = s.url.slice( cacheURL.length ); - - // If data is available and should be processed, append data to url - if ( s.data && ( s.processData || typeof s.data === "string" ) ) { - cacheURL += ( rquery.test( cacheURL ) ? "&" : "?" ) + s.data; - - // #9682: remove data so that it's not used in an eventual retry - delete s.data; - } - - // Add or update anti-cache param if needed - if ( s.cache === false ) { - cacheURL = cacheURL.replace( rantiCache, "$1" ); - uncached = ( rquery.test( cacheURL ) ? "&" : "?" ) + "_=" + ( nonce.guid++ ) + - uncached; - } - - // Put hash and anti-cache on the URL that will be requested (gh-1732) - s.url = cacheURL + uncached; - - // Change '%20' to '+' if this is encoded form body content (gh-2658) - } else if ( s.data && s.processData && - ( s.contentType || "" ).indexOf( "application/x-www-form-urlencoded" ) === 0 ) { - s.data = s.data.replace( r20, "+" ); - } - - // Set the If-Modified-Since and/or If-None-Match header, if in ifModified mode. - if ( s.ifModified ) { - if ( jQuery.lastModified[ cacheURL ] ) { - jqXHR.setRequestHeader( "If-Modified-Since", jQuery.lastModified[ cacheURL ] ); - } - if ( jQuery.etag[ cacheURL ] ) { - jqXHR.setRequestHeader( "If-None-Match", jQuery.etag[ cacheURL ] ); - } - } - - // Set the correct header, if data is being sent - if ( s.data && s.hasContent && s.contentType !== false || options.contentType ) { - jqXHR.setRequestHeader( "Content-Type", s.contentType ); - } - - // Set the Accepts header for the server, depending on the dataType - jqXHR.setRequestHeader( - "Accept", - s.dataTypes[ 0 ] && s.accepts[ s.dataTypes[ 0 ] ] ? - s.accepts[ s.dataTypes[ 0 ] ] + - ( s.dataTypes[ 0 ] !== "*" ? 
", " + allTypes + "; q=0.01" : "" ) : - s.accepts[ "*" ] - ); - - // Check for headers option - for ( i in s.headers ) { - jqXHR.setRequestHeader( i, s.headers[ i ] ); - } - - // Allow custom headers/mimetypes and early abort - if ( s.beforeSend && - ( s.beforeSend.call( callbackContext, jqXHR, s ) === false || completed ) ) { - - // Abort if not done already and return - return jqXHR.abort(); - } - - // Aborting is no longer a cancellation - strAbort = "abort"; - - // Install callbacks on deferreds - completeDeferred.add( s.complete ); - jqXHR.done( s.success ); - jqXHR.fail( s.error ); - - // Get transport - transport = inspectPrefiltersOrTransports( transports, s, options, jqXHR ); - - // If no transport, we auto-abort - if ( !transport ) { - done( -1, "No Transport" ); - } else { - jqXHR.readyState = 1; - - // Send global event - if ( fireGlobals ) { - globalEventContext.trigger( "ajaxSend", [ jqXHR, s ] ); - } - - // If request was aborted inside ajaxSend, stop there - if ( completed ) { - return jqXHR; - } - - // Timeout - if ( s.async && s.timeout > 0 ) { - timeoutTimer = window.setTimeout( function() { - jqXHR.abort( "timeout" ); - }, s.timeout ); - } - - try { - completed = false; - transport.send( requestHeaders, done ); - } catch ( e ) { - - // Rethrow post-completion exceptions - if ( completed ) { - throw e; - } - - // Propagate others as results - done( -1, e ); - } - } - - // Callback for when everything is done - function done( status, nativeStatusText, responses, headers ) { - var isSuccess, success, error, response, modified, - statusText = nativeStatusText; - - // Ignore repeat invocations - if ( completed ) { - return; - } - - completed = true; - - // Clear timeout if it exists - if ( timeoutTimer ) { - window.clearTimeout( timeoutTimer ); - } - - // Dereference transport for early garbage collection - // (no matter how long the jqXHR object will be used) - transport = undefined; - - // Cache response headers - responseHeadersString = headers || ""; - - // Set readyState - jqXHR.readyState = status > 0 ? 4 : 0; - - // Determine if successful - isSuccess = status >= 200 && status < 300 || status === 304; - - // Get response data - if ( responses ) { - response = ajaxHandleResponses( s, jqXHR, responses ); - } - - // Use a noop converter for missing script - if ( !isSuccess && jQuery.inArray( "script", s.dataTypes ) > -1 ) { - s.converters[ "text script" ] = function() {}; - } - - // Convert no matter what (that way responseXXX fields are always set) - response = ajaxConvert( s, response, jqXHR, isSuccess ); - - // If successful, handle type chaining - if ( isSuccess ) { - - // Set the If-Modified-Since and/or If-None-Match header, if in ifModified mode. 
- if ( s.ifModified ) { - modified = jqXHR.getResponseHeader( "Last-Modified" ); - if ( modified ) { - jQuery.lastModified[ cacheURL ] = modified; - } - modified = jqXHR.getResponseHeader( "etag" ); - if ( modified ) { - jQuery.etag[ cacheURL ] = modified; - } - } - - // if no content - if ( status === 204 || s.type === "HEAD" ) { - statusText = "nocontent"; - - // if not modified - } else if ( status === 304 ) { - statusText = "notmodified"; - - // If we have data, let's convert it - } else { - statusText = response.state; - success = response.data; - error = response.error; - isSuccess = !error; - } - } else { - - // Extract error from statusText and normalize for non-aborts - error = statusText; - if ( status || !statusText ) { - statusText = "error"; - if ( status < 0 ) { - status = 0; - } - } - } - - // Set data for the fake xhr object - jqXHR.status = status; - jqXHR.statusText = ( nativeStatusText || statusText ) + ""; - - // Success/Error - if ( isSuccess ) { - deferred.resolveWith( callbackContext, [ success, statusText, jqXHR ] ); - } else { - deferred.rejectWith( callbackContext, [ jqXHR, statusText, error ] ); - } - - // Status-dependent callbacks - jqXHR.statusCode( statusCode ); - statusCode = undefined; - - if ( fireGlobals ) { - globalEventContext.trigger( isSuccess ? "ajaxSuccess" : "ajaxError", - [ jqXHR, s, isSuccess ? success : error ] ); - } - - // Complete - completeDeferred.fireWith( callbackContext, [ jqXHR, statusText ] ); - - if ( fireGlobals ) { - globalEventContext.trigger( "ajaxComplete", [ jqXHR, s ] ); - - // Handle the global AJAX counter - if ( !( --jQuery.active ) ) { - jQuery.event.trigger( "ajaxStop" ); - } - } - } - - return jqXHR; - }, - - getJSON: function( url, data, callback ) { - return jQuery.get( url, data, callback, "json" ); - }, - - getScript: function( url, callback ) { - return jQuery.get( url, undefined, callback, "script" ); - } -} ); - -jQuery.each( [ "get", "post" ], function( _i, method ) { - jQuery[ method ] = function( url, data, callback, type ) { - - // Shift arguments if data argument was omitted - if ( isFunction( data ) ) { - type = type || callback; - callback = data; - data = undefined; - } - - // The url can be an options object (which then must have .url) - return jQuery.ajax( jQuery.extend( { - url: url, - type: method, - dataType: type, - data: data, - success: callback - }, jQuery.isPlainObject( url ) && url ) ); - }; -} ); - -jQuery.ajaxPrefilter( function( s ) { - var i; - for ( i in s.headers ) { - if ( i.toLowerCase() === "content-type" ) { - s.contentType = s.headers[ i ] || ""; - } - } -} ); - - -jQuery._evalUrl = function( url, options, doc ) { - return jQuery.ajax( { - url: url, - - // Make this explicit, since user can override this through ajaxSetup (#11264) - type: "GET", - dataType: "script", - cache: true, - async: false, - global: false, - - // Only evaluate the response if it is successful (gh-4126) - // dataFilter is not invoked for failure responses, so using it instead - // of the default converter is kludgy but it works. 
- converters: { - "text script": function() {} - }, - dataFilter: function( response ) { - jQuery.globalEval( response, options, doc ); - } - } ); -}; - - -jQuery.fn.extend( { - wrapAll: function( html ) { - var wrap; - - if ( this[ 0 ] ) { - if ( isFunction( html ) ) { - html = html.call( this[ 0 ] ); - } - - // The elements to wrap the target around - wrap = jQuery( html, this[ 0 ].ownerDocument ).eq( 0 ).clone( true ); - - if ( this[ 0 ].parentNode ) { - wrap.insertBefore( this[ 0 ] ); - } - - wrap.map( function() { - var elem = this; - - while ( elem.firstElementChild ) { - elem = elem.firstElementChild; - } - - return elem; - } ).append( this ); - } - - return this; - }, - - wrapInner: function( html ) { - if ( isFunction( html ) ) { - return this.each( function( i ) { - jQuery( this ).wrapInner( html.call( this, i ) ); - } ); - } - - return this.each( function() { - var self = jQuery( this ), - contents = self.contents(); - - if ( contents.length ) { - contents.wrapAll( html ); - - } else { - self.append( html ); - } - } ); - }, - - wrap: function( html ) { - var htmlIsFunction = isFunction( html ); - - return this.each( function( i ) { - jQuery( this ).wrapAll( htmlIsFunction ? html.call( this, i ) : html ); - } ); - }, - - unwrap: function( selector ) { - this.parent( selector ).not( "body" ).each( function() { - jQuery( this ).replaceWith( this.childNodes ); - } ); - return this; - } -} ); - - -jQuery.expr.pseudos.hidden = function( elem ) { - return !jQuery.expr.pseudos.visible( elem ); -}; -jQuery.expr.pseudos.visible = function( elem ) { - return !!( elem.offsetWidth || elem.offsetHeight || elem.getClientRects().length ); -}; - - - - -jQuery.ajaxSettings.xhr = function() { - try { - return new window.XMLHttpRequest(); - } catch ( e ) {} -}; - -var xhrSuccessStatus = { - - // File protocol always yields status code 0, assume 200 - 0: 200, - - // Support: IE <=9 only - // #1450: sometimes IE returns 1223 when it should be 204 - 1223: 204 - }, - xhrSupported = jQuery.ajaxSettings.xhr(); - -support.cors = !!xhrSupported && ( "withCredentials" in xhrSupported ); -support.ajax = xhrSupported = !!xhrSupported; - -jQuery.ajaxTransport( function( options ) { - var callback, errorCallback; - - // Cross domain only allowed if supported through XMLHttpRequest - if ( support.cors || xhrSupported && !options.crossDomain ) { - return { - send: function( headers, complete ) { - var i, - xhr = options.xhr(); - - xhr.open( - options.type, - options.url, - options.async, - options.username, - options.password - ); - - // Apply custom fields if provided - if ( options.xhrFields ) { - for ( i in options.xhrFields ) { - xhr[ i ] = options.xhrFields[ i ]; - } - } - - // Override mime type if needed - if ( options.mimeType && xhr.overrideMimeType ) { - xhr.overrideMimeType( options.mimeType ); - } - - // X-Requested-With header - // For cross-domain requests, seeing as conditions for a preflight are - // akin to a jigsaw puzzle, we simply never set it to be sure. - // (it can always be set on a per-request basis or even using ajaxSetup) - // For same-domain requests, won't change header if already provided. 
- if ( !options.crossDomain && !headers[ "X-Requested-With" ] ) { - headers[ "X-Requested-With" ] = "XMLHttpRequest"; - } - - // Set headers - for ( i in headers ) { - xhr.setRequestHeader( i, headers[ i ] ); - } - - // Callback - callback = function( type ) { - return function() { - if ( callback ) { - callback = errorCallback = xhr.onload = - xhr.onerror = xhr.onabort = xhr.ontimeout = - xhr.onreadystatechange = null; - - if ( type === "abort" ) { - xhr.abort(); - } else if ( type === "error" ) { - - // Support: IE <=9 only - // On a manual native abort, IE9 throws - // errors on any property access that is not readyState - if ( typeof xhr.status !== "number" ) { - complete( 0, "error" ); - } else { - complete( - - // File: protocol always yields status 0; see #8605, #14207 - xhr.status, - xhr.statusText - ); - } - } else { - complete( - xhrSuccessStatus[ xhr.status ] || xhr.status, - xhr.statusText, - - // Support: IE <=9 only - // IE9 has no XHR2 but throws on binary (trac-11426) - // For XHR2 non-text, let the caller handle it (gh-2498) - ( xhr.responseType || "text" ) !== "text" || - typeof xhr.responseText !== "string" ? - { binary: xhr.response } : - { text: xhr.responseText }, - xhr.getAllResponseHeaders() - ); - } - } - }; - }; - - // Listen to events - xhr.onload = callback(); - errorCallback = xhr.onerror = xhr.ontimeout = callback( "error" ); - - // Support: IE 9 only - // Use onreadystatechange to replace onabort - // to handle uncaught aborts - if ( xhr.onabort !== undefined ) { - xhr.onabort = errorCallback; - } else { - xhr.onreadystatechange = function() { - - // Check readyState before timeout as it changes - if ( xhr.readyState === 4 ) { - - // Allow onerror to be called first, - // but that will not handle a native abort - // Also, save errorCallback to a variable - // as xhr.onerror cannot be accessed - window.setTimeout( function() { - if ( callback ) { - errorCallback(); - } - } ); - } - }; - } - - // Create the abort callback - callback = callback( "abort" ); - - try { - - // Do send the request (this may raise an exception) - xhr.send( options.hasContent && options.data || null ); - } catch ( e ) { - - // #14683: Only rethrow if this hasn't been notified as an error yet - if ( callback ) { - throw e; - } - } - }, - - abort: function() { - if ( callback ) { - callback(); - } - } - }; - } -} ); - - - - -// Prevent auto-execution of scripts when no explicit dataType was provided (See gh-2432) -jQuery.ajaxPrefilter( function( s ) { - if ( s.crossDomain ) { - s.contents.script = false; - } -} ); - -// Install script dataType -jQuery.ajaxSetup( { - accepts: { - script: "text/javascript, application/javascript, " + - "application/ecmascript, application/x-ecmascript" - }, - contents: { - script: /\b(?:java|ecma)script\b/ - }, - converters: { - "text script": function( text ) { - jQuery.globalEval( text ); - return text; - } - } -} ); - -// Handle cache's special case and crossDomain -jQuery.ajaxPrefilter( "script", function( s ) { - if ( s.cache === undefined ) { - s.cache = false; - } - if ( s.crossDomain ) { - s.type = "GET"; - } -} ); - -// Bind script tag hack transport -jQuery.ajaxTransport( "script", function( s ) { - - // This transport only deals with cross domain or forced-by-attrs requests - if ( s.crossDomain || s.scriptAttrs ) { - var script, callback; - return { - send: function( _, complete ) { - script = jQuery( " - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
\ No newline at end of file
diff --git a/0.11./auto_examples/plot_repurposing_annotations.html b/0.11./auto_examples/plot_repurposing_annotations.html
deleted file mode 100644
index efa8e3e63ae..00000000000
--- a/0.11./auto_examples/plot_repurposing_annotations.html
+++ /dev/null
@@ -1,905 +0,0 @@
- Repurposing masks into bounding boxes — Torchvision main documentation

Repurposing masks into bounding boxes

-

The following example illustrates the operations available in the torchvision.ops module for repurposing segmentation masks into object localization annotations for different tasks (e.g. transforming masks used by instance and panoptic segmentation methods into bounding boxes used by object detection methods).

-
# sphinx_gallery_thumbnail_path = "../../gallery/assets/repurposing_annotations_thumbnail.png"
-
-import os
-import numpy as np
-import torch
-import matplotlib.pyplot as plt
-
-import torchvision.transforms.functional as F
-
-
-ASSETS_DIRECTORY = "assets"
-
-plt.rcParams["savefig.bbox"] = "tight"
-
-
-def show(imgs):
-    if not isinstance(imgs, list):
-        imgs = [imgs]
-    fix, axs = plt.subplots(ncols=len(imgs), squeeze=False)
-    for i, img in enumerate(imgs):
-        img = img.detach()
-        img = F.to_pil_image(img)
-        axs[0, i].imshow(np.asarray(img))
-        axs[0, i].set(xticklabels=[], yticklabels=[], xticks=[], yticks=[])
-
-
-
-

Masks

-

In tasks like instance and panoptic segmentation, masks are commonly defined (and are defined by this package) as a multi-dimensional array (e.g. a NumPy array or a PyTorch tensor) with the following shape:

-
-

(num_objects, height, width)

-
-

Here num_objects is the number of annotated objects in the image, and each (height, width) entry corresponds to exactly one object. For example, if your input image has dimensions 224 x 224 and contains four annotated objects, your masks annotation has the following shape:

-
-

(4, 224, 224).

-
-
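As a quick illustration of this convention (not part of the original example), the following sketch builds a dummy boolean masks tensor with that shape; the all-False values are placeholders.

import torch

# Hypothetical masks annotation: a 224 x 224 image with four annotated objects.
dummy_masks = torch.zeros((4, 224, 224), dtype=torch.bool)
print(dummy_masks.shape)  # torch.Size([4, 224, 224])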

A nice property of masks is that they can easily be repurposed for methods that solve a variety of object localization tasks.

-
-
-

Converting Masks to Bounding Boxes

-

For example, the masks_to_boxes() operation can be used to transform masks into bounding boxes that can be used as input to detection models such as FasterRCNN and RetinaNet. We will take images and masks from the PennFudan Dataset.

-
from torchvision.io import read_image
-
-img_path = os.path.join(ASSETS_DIRECTORY, "FudanPed00054.png")
-mask_path = os.path.join(ASSETS_DIRECTORY, "FudanPed00054_mask.png")
-img = read_image(img_path)
-mask = read_image(mask_path)
-
-
-

Here the mask is read from a PNG image whose pixel values encode the object ids; as the output below shows, it is loaded as a uint8 tensor, with 0 being the background. Notice that the spatial dimensions of the image and the mask match.

-
print(mask.size())
-print(img.size())
-print(mask)
-
-
-

Out:

-
torch.Size([1, 498, 533])
-torch.Size([3, 498, 533])
-tensor([[[0, 0, 0,  ..., 0, 0, 0],
-         [0, 0, 0,  ..., 0, 0, 0],
-         [0, 0, 0,  ..., 0, 0, 0],
-         ...,
-         [0, 0, 0,  ..., 0, 0, 0],
-         [0, 0, 0,  ..., 0, 0, 0],
-         [0, 0, 0,  ..., 0, 0, 0]]], dtype=torch.uint8)
-
-
-
# We get the unique colors, as these would be the object ids.
-obj_ids = torch.unique(mask)
-
-# first id is the background, so remove it.
-obj_ids = obj_ids[1:]
-
-# split the color-encoded mask into a set of boolean masks.
-# Note that this snippet would work as well if the masks were float values instead of ints.
-masks = mask == obj_ids[:, None, None]
-
-
-

Now the masks are a boolean tensor. The first dimension, in this case 3, denotes the number of instances: there are 3 people in the image. The other two dimensions are height and width, which are equal to the dimensions of the image. For each instance, the boolean tensor indicates whether a particular pixel belongs to that instance's segmentation mask.

-
print(masks.size())
-print(masks)
-
-
-

Out:

-
torch.Size([3, 498, 533])
-tensor([[[False, False, False,  ..., False, False, False],
-         [False, False, False,  ..., False, False, False],
-         [False, False, False,  ..., False, False, False],
-         ...,
-         [False, False, False,  ..., False, False, False],
-         [False, False, False,  ..., False, False, False],
-         [False, False, False,  ..., False, False, False]],
-
-        [[False, False, False,  ..., False, False, False],
-         [False, False, False,  ..., False, False, False],
-         [False, False, False,  ..., False, False, False],
-         ...,
-         [False, False, False,  ..., False, False, False],
-         [False, False, False,  ..., False, False, False],
-         [False, False, False,  ..., False, False, False]],
-
-        [[False, False, False,  ..., False, False, False],
-         [False, False, False,  ..., False, False, False],
-         [False, False, False,  ..., False, False, False],
-         ...,
-         [False, False, False,  ..., False, False, False],
-         [False, False, False,  ..., False, False, False],
-         [False, False, False,  ..., False, False, False]]])
-
-
-

Let us visualize an image and plot its corresponding segmentation masks. We will use the draw_segmentation_masks() utility to draw them.

-
from torchvision.utils import draw_segmentation_masks
-
-drawn_masks = []
-for mask in masks:
-    drawn_masks.append(draw_segmentation_masks(img, mask, alpha=0.8, colors="blue"))
-
-show(drawn_masks)
-
-
-plot repurposing annotations

To convert the boolean masks into bounding boxes, we will use the masks_to_boxes() operation from the torchvision.ops module. It returns the boxes in (xmin, ymin, xmax, ymax) format.

-
from torchvision.ops import masks_to_boxes
-
-boxes = masks_to_boxes(masks)
-print(boxes.size())
-print(boxes)
-
-
-

Out:

-
torch.Size([3, 4])
-tensor([[ 96., 134., 181., 417.],
-        [286., 113., 357., 331.],
-        [363., 120., 436., 328.]])
-
-
-

As the shape denotes, there are 3 boxes, in (xmin, ymin, xmax, ymax) format. These can be visualized very easily with the draw_bounding_boxes() utility provided in torchvision.utils.

-
from torchvision.utils import draw_bounding_boxes
-
-drawn_boxes = draw_bounding_boxes(img, boxes, colors="red")
-show(drawn_boxes)
-
-
-plot repurposing annotations

These boxes can now be used directly by detection models in torchvision. Here is a demo with a Faster R-CNN model loaded from fasterrcnn_resnet50_fpn().

-
from torchvision.models.detection import fasterrcnn_resnet50_fpn
-
-model = fasterrcnn_resnet50_fpn(pretrained=True, progress=False)
-print(img.size())
-
-img = F.convert_image_dtype(img, torch.float)
-target = {}
-target["boxes"] = boxes
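-# one label per mask instance; this dataset has a single class ('person')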
-target["labels"] = labels = torch.ones((masks.size(0),), dtype=torch.int64)
-detection_outputs = model(img.unsqueeze(0), [target])
-
-
-

Out:

-
Downloading: "https://download.pytorch.org/models/fasterrcnn_resnet50_fpn_coco-258fb6c6.pth" to /root/.cache/torch/hub/checkpoints/fasterrcnn_resnet50_fpn_coco-258fb6c6.pth
-torch.Size([3, 498, 533])
-/root/project/env/lib/python3.7/site-packages/torch/functional.py:445: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at  /opt/conda/conda-bld/pytorch_1634272092750/work/aten/src/ATen/native/TensorShape.cpp:2157.)
-  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
-
-
-
-
-

Converting Segmentation Dataset to Detection Dataset

-

With this utility it becomes very simple to convert a segmentation dataset into a detection dataset, which we can then use to train a detection model. One can similarly convert a panoptic dataset into a detection dataset. Here is an example where we re-purpose the dataset from the PennFudan Detection Tutorial.

-
class SegmentationToDetectionDataset(torch.utils.data.Dataset):
-    def __init__(self, root, transforms):
-        self.root = root
-        self.transforms = transforms
-        # load all image files, sorting them to
-        # ensure that they are aligned
-        self.imgs = list(sorted(os.listdir(os.path.join(root, "PNGImages"))))
-        self.masks = list(sorted(os.listdir(os.path.join(root, "PedMasks"))))
-
-    def __getitem__(self, idx):
-        # load images and masks
-        img_path = os.path.join(self.root, "PNGImages", self.imgs[idx])
-        mask_path = os.path.join(self.root, "PedMasks", self.masks[idx])
-
-        img = read_image(img_path)
-        mask = read_image(mask_path)
-
-        img = F.convert_image_dtype(img, dtype=torch.float)
-        mask = F.convert_image_dtype(mask, dtype=torch.float)
-
-        # We get the unique colors, as these would be the object ids.
-        obj_ids = torch.unique(mask)
-
-        # first id is the background, so remove it.
-        obj_ids = obj_ids[1:]
-
-        # split the color-encoded mask into a set of boolean masks.
-        masks = mask == obj_ids[:, None, None]
-
-        boxes = masks_to_boxes(masks)
-
-        # there is only one class
-        labels = torch.ones((masks.shape[0],), dtype=torch.int64)
-
-        target = {}
-        target["boxes"] = boxes
-        target["labels"] = labels
-
-        if self.transforms is not None:
-            img, target = self.transforms(img, target)
-
-        return img, target
-
-
-
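As a minimal usage sketch (not shown in the original example), the dataset class above can be wrapped in a DataLoader. This assumes the PennFudanPed archive has been downloaded and extracted to ./PennFudanPed (a hypothetical local path), and it uses a simple tuple-based collate function because detection targets vary in size from image to image.

from torch.utils.data import DataLoader

# Assumed local path to the extracted PennFudanPed dataset.
dataset = SegmentationToDetectionDataset("PennFudanPed", transforms=None)

# Detection targets are per-image dicts, so keep batches as tuples instead of stacking.
def collate_fn(batch):
    return tuple(zip(*batch))

loader = DataLoader(dataset, batch_size=2, shuffle=True, collate_fn=collate_fn)
images, targets = next(iter(loader))

# list(images) and list(targets) match the input format of torchvision detection models,
# e.g. loss_dict = model(list(images), list(targets)) with the model in training mode.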

Total running time of the script: ( 0 minutes 2.456 seconds)

- -

Gallery generated by Sphinx-Gallery

\ No newline at end of file
diff --git a/0.11./auto_examples/plot_scripted_tensor_transforms.html b/0.11./auto_examples/plot_scripted_tensor_transforms.html
deleted file mode 100644
index d590f1925c2..00000000000
--- a/0.11./auto_examples/plot_scripted_tensor_transforms.html
+++ /dev/null
@@ -1,807 +0,0 @@
- Tensor transforms and JIT — Torchvision main documentation

Tensor transforms and JIT

-

This example illustrates various features that are now supported by the image transformations on Tensor images. In particular, we show how image transforms can be performed on GPU, and how one can also script them using JIT compilation.

-

Prior to v0.8.0, transforms in torchvision were PIL-centric and presented multiple limitations as a result. Since v0.8.0, the transform implementations are compatible with both Tensor and PIL images, and we can achieve the following new features:

-
  • transform multi-band torch tensor images (with more than 3-4 channels)
  • torchscript transforms together with your model for deployment
  • support for GPU acceleration
  • batched transformation such as for videos
  • read and decode data directly as torch tensor with torchscript support (for PNG and JPEG image formats)
-
-

Note

-

These features are only possible with Tensor images.

-
-
from pathlib import Path
-
-import matplotlib.pyplot as plt
-import numpy as np
-
-import torch
-import torchvision.transforms as T
-from torchvision.io import read_image
-
-
-plt.rcParams["savefig.bbox"] = 'tight'
-torch.manual_seed(1)
-
-
-def show(imgs):
-    fix, axs = plt.subplots(ncols=len(imgs), squeeze=False)
-    for i, img in enumerate(imgs):
-        img = T.ToPILImage()(img.to('cpu'))
-        axs[0, i].imshow(np.asarray(img))
-        axs[0, i].set(xticklabels=[], yticklabels=[], xticks=[], yticks=[])
-
-
-

The read_image() function allows reading an image and loading it directly as a tensor.

-
dog1 = read_image(str(Path('assets') / 'dog1.jpg'))
-dog2 = read_image(str(Path('assets') / 'dog2.jpg'))
-show([dog1, dog2])
-
-
-plot scripted tensor transforms
-

Transforming images on GPU

-

Most transforms natively support tensors on top of PIL images (to visualize the effect of the transforms, you may refer to Illustration of transforms). Using tensor images, we can run the transforms on GPUs if CUDA is available!

- -plot scripted tensor transforms
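The code cell for this step does not appear on this page, so here is a minimal sketch of what it can look like; the particular transforms (RandomCrop, RandomHorizontalFlip) and the crop size are illustrative assumptions. It also defines the device variable that the Predictor example below relies on.

device = 'cuda' if torch.cuda.is_available() else 'cpu'

transforms = torch.nn.Sequential(
    T.RandomCrop(224),
    T.RandomHorizontalFlip(p=0.3),
)
scripted_transforms = torch.jit.script(transforms)

# Move the batch of tensor images to the selected device and transform it there.
dogs = torch.stack([dog1, dog2]).to(device)
transformed_dogs = scripted_transforms(dogs)
show([transformed_dogs[0], transformed_dogs[1]])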
-
-

Scriptable transforms for easier deployment via torchscript

-

We now show how to combine image transformations and a model forward pass, while using torch.jit.script to obtain a single scripted module.

-

Let’s define a Predictor module that transforms the input tensor and then applies an ImageNet-trained model to it.

-
import torch.nn as nn
-from torchvision.models import resnet18
-
-
-class Predictor(nn.Module):
-
-    def __init__(self):
-        super().__init__()
-        self.resnet18 = resnet18(pretrained=True, progress=False).eval()
-        self.transforms = nn.Sequential(
-            T.Resize([256, ]),  # We use single int value inside a list due to torchscript type restrictions
-            T.CenterCrop(224),
-            T.ConvertImageDtype(torch.float),
-            T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
-        )
-
-    def forward(self, x: torch.Tensor) -> torch.Tensor:
-        with torch.no_grad():
-            x = self.transforms(x)
-            y_pred = self.resnet18(x)
-            return y_pred.argmax(dim=1)
-
-
-

Now, let’s define scripted and non-scripted instances of Predictor and apply them to multiple tensor images of the same size.

-
predictor = Predictor().to(device)
-scripted_predictor = torch.jit.script(predictor).to(device)
-
-batch = torch.stack([dog1, dog2]).to(device)
-
-res = predictor(batch)
-res_scripted = scripted_predictor(batch)
-
-
-

Out:

-
Downloading: "https://download.pytorch.org/models/resnet18-f37072fd.pth" to /root/.cache/torch/hub/checkpoints/resnet18-f37072fd.pth
-
-
-

We can verify that the predictions of the scripted and non-scripted models are the same:

-
import json
-
-with open(Path('assets') / 'imagenet_class_index.json', 'r') as labels_file:
-    labels = json.load(labels_file)
-
-for i, (pred, pred_scripted) in enumerate(zip(res, res_scripted)):
-    assert pred == pred_scripted
-    print(f"Prediction for Dog {i + 1}: {labels[str(pred.item())]}")
-
-
-

Out:

-
Prediction for Dog 1: ['n02113023', 'Pembroke']
-Prediction for Dog 2: ['n02106662', 'German_shepherd']
-
-
-

Since the model is scripted, it can easily be saved to disk and re-used:

-
import tempfile
-
-with tempfile.NamedTemporaryFile() as f:
-    scripted_predictor.save(f.name)
-
-    dumped_scripted_predictor = torch.jit.load(f.name)
-    res_scripted_dumped = dumped_scripted_predictor(batch)
-assert (res_scripted_dumped == res_scripted).all()
-
-
-

Total running time of the script: ( 0 minutes 1.822 seconds)

- -

Gallery generated by Sphinx-Gallery

\ No newline at end of file
diff --git a/0.11./auto_examples/plot_transforms.html b/0.11./auto_examples/plot_transforms.html
deleted file mode 100644
index 1da0394d74b..00000000000
--- a/0.11./auto_examples/plot_transforms.html
+++ /dev/null
@@ -1,1009 +0,0 @@
- Illustration of transforms — Torchvision main documentation

Illustration of transforms

-

This example illustrates the various transforms available in the torchvision.transforms module.

-
# sphinx_gallery_thumbnail_path = "../../gallery/assets/transforms_thumbnail.png"
-
-from PIL import Image
-from pathlib import Path
-import matplotlib.pyplot as plt
-import numpy as np
-
-import torch
-import torchvision.transforms as T
-
-
-plt.rcParams["savefig.bbox"] = 'tight'
-orig_img = Image.open(Path('assets') / 'astronaut.jpg')
-# if you change the seed, make sure that the randomly-applied transforms
-# properly show that the image can be both transformed and *not* transformed!
-torch.manual_seed(0)
-
-
-def plot(imgs, with_orig=True, row_title=None, **imshow_kwargs):
-    if not isinstance(imgs[0], list):
-        # Make a 2d grid even if there's just 1 row
-        imgs = [imgs]
-
-    num_rows = len(imgs)
-    num_cols = len(imgs[0]) + with_orig
-    fig, axs = plt.subplots(nrows=num_rows, ncols=num_cols, squeeze=False)
-    for row_idx, row in enumerate(imgs):
-        row = [orig_img] + row if with_orig else row
-        for col_idx, img in enumerate(row):
-            ax = axs[row_idx, col_idx]
-            ax.imshow(np.asarray(img), **imshow_kwargs)
-            ax.set(xticklabels=[], yticklabels=[], xticks=[], yticks=[])
-
-    if with_orig:
-        axs[0, 0].set(title='Original image')
-        axs[0, 0].title.set_size(8)
-    if row_title is not None:
-        for row_idx in range(num_rows):
-            axs[row_idx, 0].set(ylabel=row_title[row_idx])
-
-    plt.tight_layout()
-
-
-
-

Pad

-

The Pad transform (see also pad()) fills image borders with some pixel values.

-
padded_imgs = [T.Pad(padding=padding)(orig_img) for padding in (3, 10, 30, 50)]
-plot(padded_imgs)
-
-
-Original image
-
-

Resize

-

The Resize transform (see also resize()) resizes an image.

-
resized_imgs = [T.Resize(size=size)(orig_img) for size in (30, 50, 100, orig_img.size)]
-plot(resized_imgs)
-
-
-Original image
-
-

CenterCrop

-

The CenterCrop transform (see also center_crop()) crops the given image at the center.

-
center_crops = [T.CenterCrop(size=size)(orig_img) for size in (30, 50, 100, orig_img.size)]
-plot(center_crops)
-
-
-Original image
-
-

FiveCrop

-

The FiveCrop transform (see also five_crop()) crops the given image into four corners and the central crop.

- -Original image
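The code cell for this transform appears to be missing from this page; a minimal sketch is shown below, with the (100, 100) crop size chosen purely for illustration.

(top_left, top_right, bottom_left, bottom_right, center) = T.FiveCrop(size=(100, 100))(orig_img)
plot([top_left, top_right, bottom_left, bottom_right, center])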
-
-

Grayscale

-

The Grayscale transform (see also to_grayscale()) converts an image to grayscale.

-
gray_img = T.Grayscale()(orig_img)
-plot([gray_img], cmap='gray')
-
-
-Original image
-
-

Random transforms

-

The following transforms are random, which means that the same transformer instance will produce different results each time it transforms a given image.

-
-

ColorJitter

-

The ColorJitter transform randomly changes the brightness, saturation, and other properties of an image.

-
jitter = T.ColorJitter(brightness=.5, hue=.3)
-jitted_imgs = [jitter(orig_img) for _ in range(4)]
-plot(jitted_imgs)
-
-
-Original image
-
-

GaussianBlur

-

The GaussianBlur transform (see also gaussian_blur()) applies a Gaussian blur to an image.

-
blurrer = T.GaussianBlur(kernel_size=(5, 9), sigma=(0.1, 5))
-blurred_imgs = [blurrer(orig_img) for _ in range(4)]
-plot(blurred_imgs)
-
-
-Original image
-
-

RandomPerspective

-

The RandomPerspective transform (see also perspective()) performs a random perspective transform on an image.

-
perspective_transformer = T.RandomPerspective(distortion_scale=0.6, p=1.0)
-perspective_imgs = [perspective_transformer(orig_img) for _ in range(4)]
-plot(perspective_imgs)
-
-
-Original image
-
-

RandomRotation

-

The RandomRotation transform (see also rotate()) rotates an image by a random angle.

-
rotater = T.RandomRotation(degrees=(0, 180))
-rotated_imgs = [rotater(orig_img) for _ in range(4)]
-plot(rotated_imgs)
-
-
-Original image
-
-

RandomAffine

-

The RandomAffine transform (see also affine()) performs a random affine transform on an image.

-
affine_transfomer = T.RandomAffine(degrees=(30, 70), translate=(0.1, 0.3), scale=(0.5, 0.75))
-affine_imgs = [affine_transfomer(orig_img) for _ in range(4)]
-plot(affine_imgs)
-
-
-Original image
-
-

RandomCrop

-

The RandomCrop transform (see also crop()) crops an image at a random location.

-
cropper = T.RandomCrop(size=(128, 128))
-crops = [cropper(orig_img) for _ in range(4)]
-plot(crops)
-
-
-Original image
-
-

RandomResizedCrop

-

The RandomResizedCrop transform (see also resized_crop()) crops an image at a random location, and then resizes the crop to a given size.

-
resize_cropper = T.RandomResizedCrop(size=(32, 32))
-resized_crops = [resize_cropper(orig_img) for _ in range(4)]
-plot(resized_crops)
-
-
-Original image
-
-

RandomInvert

-

The RandomInvert transform -(see also invert()) -randomly inverts the colors of the given image.

-
inverter = T.RandomInvert()
-invertered_imgs = [inverter(orig_img) for _ in range(4)]
-plot(invertered_imgs)
-
-
-Original image
-
-

RandomPosterize

-

The RandomPosterize transform -(see also posterize()) -randomly posterizes the image by reducing the number of bits -of each color channel.

-
posterizer = T.RandomPosterize(bits=2)
-posterized_imgs = [posterizer(orig_img) for _ in range(4)]
-plot(posterized_imgs)
-
-
-Original image
-
-

RandomSolarize

-

The RandomSolarize transform -(see also solarize()) -randomly solarizes the image by inverting all pixel values above -the threshold.

-
solarizer = T.RandomSolarize(threshold=192.0)
-solarized_imgs = [solarizer(orig_img) for _ in range(4)]
-plot(solarized_imgs)
-
-
-Original image
-
-

RandomAdjustSharpness

-

The RandomAdjustSharpness transform -(see also adjust_sharpness()) -randomly adjusts the sharpness of the given image.

-
sharpness_adjuster = T.RandomAdjustSharpness(sharpness_factor=2)
-sharpened_imgs = [sharpness_adjuster(orig_img) for _ in range(4)]
-plot(sharpened_imgs)
-
-
-Original image
-
-

RandomAutocontrast

-

The RandomAutocontrast transform -(see also autocontrast()) -randomly applies autocontrast to the given image.

-
autocontraster = T.RandomAutocontrast()
-autocontrasted_imgs = [autocontraster(orig_img) for _ in range(4)]
-plot(autocontrasted_imgs)
-
-
-Original image
-
-

RandomEqualize

-

The RandomEqualize transform -(see also equalize()) -randomly equalizes the histogram of the given image.

-
equalizer = T.RandomEqualize()
-equalized_imgs = [equalizer(orig_img) for _ in range(4)]
-plot(equalized_imgs)
-
-
-Original image
-
-

AutoAugment

-

The AutoAugment transform -automatically augments data based on a given auto-augmentation policy. -See AutoAugmentPolicy for the available policies.

-
policies = [T.AutoAugmentPolicy.CIFAR10, T.AutoAugmentPolicy.IMAGENET, T.AutoAugmentPolicy.SVHN]
-augmenters = [T.AutoAugment(policy) for policy in policies]
-imgs = [
-    [augmenter(orig_img) for _ in range(4)]
-    for augmenter in augmenters
-]
-row_title = [str(policy).split('.')[-1] for policy in policies]
-plot(imgs, row_title=row_title)
-
-
-Original image
-
-

RandAugment

-

The RandAugment transform automatically augments the data.

-
augmenter = T.RandAugment()
-imgs = [augmenter(orig_img) for _ in range(4)]
-plot(imgs)
-
-
-Original image
-
-

TrivialAugmentWide

-

The TrivialAugmentWide transform automatically augments the data.

-
augmenter = T.TrivialAugmentWide()
-imgs = [augmenter(orig_img) for _ in range(4)]
-plot(imgs)
-
-
-Original image
-
-
-

Randomly-applied transforms

-

Some transforms are randomly-applied given a probability p. That is, the -transformed image may actually be the same as the original one, even when -called with the same transformer instance!

-
-
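As a small hedged illustration of that point, comparing tensor versions of the input and output shows when the transform was a no-op:

# With p=0.5 the same RandomHorizontalFlip instance returns the input
# unchanged about half the time; tensors make the comparison easy.
maybe_flip = T.RandomHorizontalFlip(p=0.5)
out = maybe_flip(orig_img)
print("Output identical to input:", T.ToTensor()(out).equal(T.ToTensor()(orig_img)))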

RandomHorizontalFlip

-

The RandomHorizontalFlip transform -(see also hflip()) -performs horizontal flip of an image, with a given probability.

-
hflipper = T.RandomHorizontalFlip(p=0.5)
-transformed_imgs = [hflipper(orig_img) for _ in range(4)]
-plot(transformed_imgs)
-
-
-Original image
-
-

RandomVerticalFlip

-

The RandomVerticalFlip transform -(see also vflip()) -performs vertical flip of an image, with a given probability.

-
vflipper = T.RandomVerticalFlip(p=0.5)
-transformed_imgs = [vflipper(orig_img) for _ in range(4)]
-plot(transformed_imgs)
-
-
-Original image
-
-

RandomApply

-

The RandomApply transform -randomly applies a list of transforms, with a given probability.

-
applier = T.RandomApply(transforms=[T.RandomCrop(size=(64, 64))], p=0.5)
-transformed_imgs = [applier(orig_img) for _ in range(4)]
-plot(transformed_imgs)
-
-
-Original image

Total running time of the script: ( 0 minutes 8.589 seconds)

\ No newline at end of file
diff --git a/0.11./auto_examples/plot_video_api.html b/0.11./auto_examples/plot_video_api.html
deleted file mode 100644
index 2d730efbc30..00000000000
--- a/0.11./auto_examples/plot_video_api.html
+++ /dev/null
@@ -1,4718 +0,0 @@
-Video API — Torchvision main documentation

Video API

-

This example illustrates some of the APIs that torchvision offers for -videos, together with the examples on how to build datasets and more.

-
-

1. Introduction: building a new video object and examining the properties

-

First we select a video to test the object out. For the sake of argument -we’re using one from kinetics400 dataset. -To create it, we need to define the path and the stream we want to use.

-

Chosen video statistics:

- WUzgd7C1pWA.mp4
  - source: kinetics-400
  - video:
    - H-264
    - MPEG-4 AVC (part 10) (avc1)
    - fps: 29.97
  - audio:
    - MPEG AAC audio (mp4a)
    - sample rate: 48K Hz
import torch
-import torchvision
-from torchvision.datasets.utils import download_url
-
-# Download the sample video
-download_url(
-    "https://github.com/pytorch/vision/blob/main/test/assets/videos/WUzgd7C1pWA.mp4?raw=true",
-    ".",
-    "WUzgd7C1pWA.mp4"
-)
-video_path = "./WUzgd7C1pWA.mp4"
-
-
-

Out:

-
Downloading https://raw.githubusercontent.com/pytorch/vision/main/test/assets/videos/WUzgd7C1pWA.mp4 to ./WUzgd7C1pWA.mp4
-
-
-
-

Streams are defined in a similar fashion as torch devices. We encode them as strings in a form -of stream_type:stream_id where stream_type is a string and stream_id a long int. -The constructor accepts passing a stream_type only, in which case the stream is auto-discovered. -Firstly, let’s get the metadata for our particular video:

-
stream = "video"
-video = torchvision.io.VideoReader(video_path, stream)
-video.get_metadata()
-
-
-

Out:

-
{'video': {'duration': [10.9109], 'fps': [29.97002997002997]}, 'audio': {'duration': [10.9], 'framerate': [48000.0]}, 'subtitles': {'duration': []}, 'cc': {'duration': []}}
-
-
-

Here we can see that video has two streams - a video and an audio stream. -Currently available stream types include [‘video’, ‘audio’]. -Each descriptor consists of two parts: stream type (e.g. ‘video’) and a unique stream id -(which are determined by video encoding). -In this way, if the video container contains multiple streams of the same type, -users can access the one they want. -If only stream type is passed, the decoder auto-detects first stream of that type and returns it.

-
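If a container held several streams of the same type, we could pick one explicitly by its id. A minimal hedged sketch (the stream id 0 here is just for illustration; this particular file only has one video stream):

# Request the video stream with id 0 explicitly; for this file it is
# equivalent to passing just "video".
explicit_video = torchvision.io.VideoReader(video_path, "video:0")
print(explicit_video.get_metadata())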

Let’s read all the frames from the video stream. By default, the return value of -next(video_reader) is a dict containing the following fields.

-

The return fields are:

- data: containing a torch.tensor
- pts: containing a float timestamp of this particular frame
metadata = video.get_metadata()
-video.set_current_stream("audio")
-
-frames = []  # we are going to save the frames here.
-ptss = []  # pts is a presentation timestamp in seconds (float) of each frame
-for frame in video:
-    frames.append(frame['data'])
-    ptss.append(frame['pts'])
-
-print("PTS for first five frames ", ptss[:5])
-print("Total number of frames: ", len(frames))
-approx_nf = metadata['audio']['duration'][0] * metadata['audio']['framerate'][0]
-print("Approx total number of datapoints we can expect: ", approx_nf)
-print("Read data size: ", frames[0].size(0) * len(frames))
-
-
-

Out:

-
PTS for first five frames  [0.0, 0.021332999999999998, 0.042667, 0.064, 0.08533299999999999]
-Total number of frames:  511
-Approx total number of datapoints we can expect:  523200.0
-Read data size:  523264
-
-
-

But what if we only want to read a certain time segment of the video? That can be done easily using a combination of our seek function and the fact that each call to next returns the presentation timestamp of the returned frame in seconds.

-

Given that our implementation relies on python iterators, -we can leverage itertools to simplify the process and make it more pythonic.

-

For example, if we wanted to read ten frames starting from the 2nd second:

-
import itertools
-video.set_current_stream("video")
-
-frames = []  # we are going to save the frames here.
-
-# Seek to the 2nd second of the video and use islice to get the next 10 frames
-for frame, pts in itertools.islice(video.seek(2), 10):
-    frames.append(frame)
-
-print("Total number of frames: ", len(frames))
-
-
-

Out:

-
Total number of frames:  10
-
-
-

Or, if we wanted to read from the 2nd to the 5th second, we seek to the 2nd second of the video and then use itertools.takewhile to collect frames until we pass the 5-second mark:

-
video.set_current_stream("video")
-frames = []  # we are going to save the frames here.
-video = video.seek(2)
-
-for frame in itertools.takewhile(lambda x: x['pts'] <= 5, video):
-    frames.append(frame['data'])
-
-print("Total number of frames: ", len(frames))
-approx_nf = (5 - 2) * video.get_metadata()['video']['fps'][0]
-print("We can expect approx: ", approx_nf)
-print("Tensor size: ", frames[0].size())
-
-
-

Out:

-
Total number of frames:  90
-We can expect approx:  89.91008991008991
-Tensor size:  torch.Size([3, 256, 340])
-
-
-
-
-

2. Building a sample read_video function

-

We can utilize the methods above to build a read video function that follows the same API as the existing read_video function.

-
def example_read_video(video_object, start=0, end=None, read_video=True, read_audio=True):
-    if end is None:
-        end = float("inf")
-    if end < start:
-        raise ValueError(
-            "end time should be larger than start time, got "
-            "start time={} and end time={}".format(start, end)
-        )
-
-    video_frames = torch.empty(0)
-    video_pts = []
-    if read_video:
-        video_object.set_current_stream("video")
-        frames = []
-        for frame in itertools.takewhile(lambda x: x['pts'] <= end, video_object.seek(start)):
-            frames.append(frame['data'])
-            video_pts.append(frame['pts'])
-        if len(frames) > 0:
-            video_frames = torch.stack(frames, 0)
-
-    audio_frames = torch.empty(0)
-    audio_pts = []
-    if read_audio:
-        video_object.set_current_stream("audio")
-        frames = []
-        for frame in itertools.takewhile(lambda x: x['pts'] <= end, video_object.seek(start)):
-            frames.append(frame['data'])
-            audio_pts.append(frame['pts'])
-        if len(frames) > 0:
-            audio_frames = torch.cat(frames, 0)
-
-    return video_frames, audio_frames, (video_pts, audio_pts), video_object.get_metadata()
-
-
-# Total number of frames should be 327 for video and 523264 datapoints for audio
-vf, af, info, meta = example_read_video(video)
-print(vf.size(), af.size())
-
-
-

Out:

-
torch.Size([327, 3, 256, 340]) torch.Size([523264, 1])
-
-
-
-
-

3. Building an example randomly sampled dataset (can be applied to the training dataset of kinetics400)

-

Cool, so now we can use the same principle to make the sample dataset. We suggest trying out an iterable dataset for this purpose. Here, we are going to build an example dataset that reads 10 randomly selected frames of video.

-

Make sample dataset

-
import os
-os.makedirs("./dataset", exist_ok=True)
-os.makedirs("./dataset/1", exist_ok=True)
-os.makedirs("./dataset/2", exist_ok=True)
-
-
-

Download the videos

-
from torchvision.datasets.utils import download_url
-download_url(
-    "https://github.com/pytorch/vision/blob/main/test/assets/videos/WUzgd7C1pWA.mp4?raw=true",
-    "./dataset/1", "WUzgd7C1pWA.mp4"
-)
-download_url(
-    "https://github.com/pytorch/vision/blob/main/test/assets/videos/RATRACE_wave_f_nm_np1_fr_goo_37.avi?raw=true",
-    "./dataset/1",
-    "RATRACE_wave_f_nm_np1_fr_goo_37.avi"
-)
-download_url(
-    "https://github.com/pytorch/vision/blob/main/test/assets/videos/SOX5yA1l24A.mp4?raw=true",
-    "./dataset/2",
-    "SOX5yA1l24A.mp4"
-)
-download_url(
-    "https://github.com/pytorch/vision/blob/main/test/assets/videos/v_SoccerJuggling_g23_c01.avi?raw=true",
-    "./dataset/2",
-    "v_SoccerJuggling_g23_c01.avi"
-)
-download_url(
-    "https://github.com/pytorch/vision/blob/main/test/assets/videos/v_SoccerJuggling_g24_c01.avi?raw=true",
-    "./dataset/2",
-    "v_SoccerJuggling_g24_c01.avi"
-)
-
-
-

Out:

-
Downloading https://raw.githubusercontent.com/pytorch/vision/main/test/assets/videos/WUzgd7C1pWA.mp4 to ./dataset/1/WUzgd7C1pWA.mp4
-Downloading https://raw.githubusercontent.com/pytorch/vision/main/test/assets/videos/RATRACE_wave_f_nm_np1_fr_goo_37.avi to ./dataset/1/RATRACE_wave_f_nm_np1_fr_goo_37.avi
-Downloading https://raw.githubusercontent.com/pytorch/vision/main/test/assets/videos/SOX5yA1l24A.mp4 to ./dataset/2/SOX5yA1l24A.mp4
-Downloading https://raw.githubusercontent.com/pytorch/vision/main/test/assets/videos/v_SoccerJuggling_g23_c01.avi to ./dataset/2/v_SoccerJuggling_g23_c01.avi
-Downloading https://raw.githubusercontent.com/pytorch/vision/main/test/assets/videos/v_SoccerJuggling_g24_c01.avi to ./dataset/2/v_SoccerJuggling_g24_c01.avi
-
-

Housekeeping and utilities

-
import os
-import random
-
-from torchvision.datasets.folder import make_dataset
-from torchvision import transforms as t
-
-
-def _find_classes(dir):
-    classes = [d.name for d in os.scandir(dir) if d.is_dir()]
-    classes.sort()
-    class_to_idx = {cls_name: i for i, cls_name in enumerate(classes)}
-    return classes, class_to_idx
-
-
-def get_samples(root, extensions=(".mp4", ".avi")):
-    _, class_to_idx = _find_classes(root)
-    return make_dataset(root, class_to_idx, extensions=extensions)
-
-
-

We are going to define the dataset and some basic arguments. -We assume the structure of the FolderDataset, and add the following parameters:

-
- clip_len: length of a clip in frames
- frame_transform: transform for every frame individually
- video_transform: transform on a video sequence
-

Note

-

We actually add an epoch size, because using the IterableDataset class allows us to naturally oversample clips or images from each video if needed.

-
-
class RandomDataset(torch.utils.data.IterableDataset):
-    def __init__(self, root, epoch_size=None, frame_transform=None, video_transform=None, clip_len=16):
-        super().__init__()
-
-        self.samples = get_samples(root)
-
-        # Allow for temporal jittering
-        if epoch_size is None:
-            epoch_size = len(self.samples)
-        self.epoch_size = epoch_size
-
-        self.clip_len = clip_len
-        self.frame_transform = frame_transform
-        self.video_transform = video_transform
-
-    def __iter__(self):
-        for i in range(self.epoch_size):
-            # Get random sample
-            path, target = random.choice(self.samples)
-            # Get video object
-            vid = torchvision.io.VideoReader(path, "video")
-            metadata = vid.get_metadata()
-            video_frames = []  # video frame buffer
-
-            # Seek and return frames
-            max_seek = metadata["video"]['duration'][0] - (self.clip_len / metadata["video"]['fps'][0])
-            start = random.uniform(0., max_seek)
-            for frame in itertools.islice(vid.seek(start), self.clip_len):
-                video_frames.append(self.frame_transform(frame['data']))
-                current_pts = frame['pts']
-            # Stack it into a tensor
-            video = torch.stack(video_frames, 0)
-            if self.video_transform:
-                video = self.video_transform(video)
-            output = {
-                'path': path,
-                'video': video,
-                'target': target,
-                'start': start,
-                'end': current_pts}
-            yield output
-
-
-
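As a hedged usage sketch of that epoch_size argument (the numbers below are arbitrary): asking for more samples per epoch than there are videos simply re-draws random clips from the same files.

# Oversample: 20 random clips per epoch even though the folder only holds 5 videos.
oversampled = RandomDataset(
    "./dataset",
    epoch_size=20,
    frame_transform=t.Compose([t.Resize((112, 112))]),
    clip_len=16,
)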

Given a path of videos in a folder structure, i.e.:

- dataset
  - class 1
    - file 0
    - file 1
  - class 2
    - file 0
    - file 1

We can generate a dataloader and test the dataset.

-
transforms = [t.Resize((112, 112))]
-frame_transform = t.Compose(transforms)
-
-dataset = RandomDataset("./dataset", epoch_size=None, frame_transform=frame_transform)
-
-
-
from torch.utils.data import DataLoader
-loader = DataLoader(dataset, batch_size=12)
-data = {"video": [], 'start': [], 'end': [], 'tensorsize': []}
-for batch in loader:
-    for i in range(len(batch['path'])):
-        data['video'].append(batch['path'][i])
-        data['start'].append(batch['start'][i].item())
-        data['end'].append(batch['end'][i].item())
-        data['tensorsize'].append(batch['video'][i].size())
-print(data)
-
-
-

Out:

-
{'video': ['./dataset/1/WUzgd7C1pWA.mp4', './dataset/2/SOX5yA1l24A.mp4', './dataset/2/v_SoccerJuggling_g24_c01.avi', './dataset/1/WUzgd7C1pWA.mp4', './dataset/2/SOX5yA1l24A.mp4'], 'start': [1.6059886546397426, 2.8462735255185843, 5.794335670319363, 3.7124644717480897, 5.732515897132387], 'end': [2.135467, 3.370033, 6.306299999999999, 4.237566999999999, 6.239567], 'tensorsize': [torch.Size([16, 3, 112, 112]), torch.Size([16, 3, 112, 112]), torch.Size([16, 3, 112, 112]), torch.Size([16, 3, 112, 112]), torch.Size([16, 3, 112, 112])]}
-
-
-
-
-

4. Data Visualization

-

Example of visualized video

-
import matplotlib.pylab as plt
-
-plt.figure(figsize=(12, 12))
-for i in range(16):
-    plt.subplot(4, 4, i + 1)
-    plt.imshow(batch["video"][0, i, ...].permute(1, 2, 0))
-    plt.axis("off")
-
-

Clean up the video and dataset:

-
import os
-import shutil
-os.remove("./WUzgd7C1pWA.mp4")
-shutil.rmtree("./dataset")
-
-
-

Total running time of the script: ( 0 minutes 4.826 seconds)

\ No newline at end of file
diff --git a/0.11./auto_examples/plot_visualization_utils.html b/0.11./auto_examples/plot_visualization_utils.html
deleted file mode 100644
index bbbbb180702..00000000000
--- a/0.11./auto_examples/plot_visualization_utils.html
+++ /dev/null
@@ -1,1090 +0,0 @@
-Visualization utilities — Torchvision main documentation

Visualization utilities

-

This example illustrates some of the utilities that torchvision offers for -visualizing images, bounding boxes, and segmentation masks.

-
# sphinx_gallery_thumbnail_path = "../../gallery/assets/visualization_utils_thumbnail.png"
-
-import torch
-import numpy as np
-import matplotlib.pyplot as plt
-
-import torchvision.transforms.functional as F
-
-
-plt.rcParams["savefig.bbox"] = 'tight'
-
-
-def show(imgs):
-    if not isinstance(imgs, list):
-        imgs = [imgs]
-    fig, axs = plt.subplots(ncols=len(imgs), squeeze=False)
-    for i, img in enumerate(imgs):
-        img = img.detach()
-        img = F.to_pil_image(img)
-        axs[0, i].imshow(np.asarray(img))
-        axs[0, i].set(xticklabels=[], yticklabels=[], xticks=[], yticks=[])
-
-
-
-

Visualizing a grid of images

-

The make_grid() function can be used to create a tensor that represents multiple images in a grid. This util requires images of dtype uint8 as input.

-
from torchvision.utils import make_grid
-from torchvision.io import read_image
-from pathlib import Path
-
-dog1_int = read_image(str(Path('assets') / 'dog1.jpg'))
-dog2_int = read_image(str(Path('assets') / 'dog2.jpg'))
-
-grid = make_grid([dog1_int, dog2_int, dog1_int, dog2_int])
-show(grid)
-
-
-
-

Visualizing bounding boxes

-

We can use draw_bounding_boxes() to draw boxes on an -image. We can set the colors, labels, width as well as font and font size. -The boxes are in (xmin, ymin, xmax, ymax) format.

-
from torchvision.utils import draw_bounding_boxes
-
-
-boxes = torch.tensor([[50, 50, 100, 200], [210, 150, 350, 430]], dtype=torch.float)
-colors = ["blue", "yellow"]
-result = draw_bounding_boxes(dog1_int, boxes, colors=colors, width=5)
-show(result)
-
-

Naturally, we can also plot bounding boxes produced by torchvision detection models. Here is a demo with a Faster R-CNN model loaded with fasterrcnn_resnet50_fpn(). You can also try using a RetinaNet with retinanet_resnet50_fpn(), an SSDlite with ssdlite320_mobilenet_v3_large(), or an SSD with ssd300_vgg16(). For more details on the output of such models, you may refer to Instance segmentation models.

-
from torchvision.models.detection import fasterrcnn_resnet50_fpn
-from torchvision.transforms.functional import convert_image_dtype
-
-
-batch_int = torch.stack([dog1_int, dog2_int])
-batch = convert_image_dtype(batch_int, dtype=torch.float)
-
-model = fasterrcnn_resnet50_fpn(pretrained=True, progress=False)
-model = model.eval()
-
-outputs = model(batch)
-print(outputs)
-
-
-

Out:

-
[{'boxes': tensor([[215.9767, 171.1661, 402.0078, 378.7391],
-        [344.6341, 172.6735, 357.6114, 220.1435],
-        [153.1306, 185.5568, 172.9223, 254.7014]], grad_fn=<StackBackward0>), 'labels': tensor([18,  1,  1]), 'scores': tensor([0.9989, 0.0701, 0.0611], grad_fn=<IndexBackward0>)}, {'boxes': tensor([[ 23.5963, 132.4332, 449.9359, 493.0222],
-        [225.8183, 124.6292, 467.2861, 492.2621],
-        [ 18.5249, 135.4171, 420.9786, 479.2226]], grad_fn=<StackBackward0>), 'labels': tensor([18, 18, 17]), 'scores': tensor([0.9980, 0.0879, 0.0671], grad_fn=<IndexBackward0>)}]
-
-
-
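The same call pattern works for the other detection models mentioned above; here is a minimal hedged sketch with RetinaNet (its weights download separately, and its scores will differ from the Faster R-CNN ones):

from torchvision.models.detection import retinanet_resnet50_fpn

# RetinaNet returns the same list-of-dicts format (boxes, labels, scores).
retina_model = retinanet_resnet50_fpn(pretrained=True, progress=False).eval()
retina_outputs = retina_model(batch)
print([out['boxes'].shape for out in retina_outputs])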

Let’s plot the boxes detected by our model. We will only plot the boxes with a -score greater than a given threshold.

-
score_threshold = .8
-dogs_with_boxes = [
-    draw_bounding_boxes(dog_int, boxes=output['boxes'][output['scores'] > score_threshold], width=4)
-    for dog_int, output in zip(batch_int, outputs)
-]
-show(dogs_with_boxes)
-
-
-
-

Visualizing segmentation masks

-

The draw_segmentation_masks() function can be used to -draw segmentation masks on images. Semantic segmentation and instance -segmentation models have different outputs, so we will treat each -independently.

-
-

Semantic segmentation models

-

We will see how to use it with torchvision’s FCN Resnet-50, loaded with -fcn_resnet50(). You can also try using -DeepLabv3 (deeplabv3_resnet50()) or -lraspp mobilenet models -(lraspp_mobilenet_v3_large()).

-

Let’s start by looking at the output of the model. Remember that in general, images must be normalized before they’re passed to a semantic segmentation model.

-
from torchvision.models.segmentation import fcn_resnet50
-
-
-model = fcn_resnet50(pretrained=True, progress=False)
-model = model.eval()
-
-normalized_batch = F.normalize(batch, mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225))
-output = model(normalized_batch)['out']
-print(output.shape, output.min().item(), output.max().item())
-
-
-

Out:

-
Downloading: "https://download.pytorch.org/models/fcn_resnet50_coco-1167a1af.pth" to /root/.cache/torch/hub/checkpoints/fcn_resnet50_coco-1167a1af.pth
-torch.Size([2, 21, 500, 500]) -7.089669704437256 14.858256340026855
-
-
-
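As a hedged aside, DeepLabv3 can be swapped in with the same call pattern, since it also returns a dict whose 'out' entry has shape (batch_size, num_classes, H, W):

from torchvision.models.segmentation import deeplabv3_resnet50

# Same normalized input, same output layout as the FCN model above.
deeplab_model = deeplabv3_resnet50(pretrained=True, progress=False).eval()
deeplab_output = deeplab_model(normalized_batch)['out']
print(deeplab_output.shape)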

As we can see above, the output of the segmentation model is a tensor of shape -(batch_size, num_classes, H, W). Each value is a non-normalized score, and -we can normalize them into [0, 1] by using a softmax. After the softmax, -we can interpret each value as a probability indicating how likely a given -pixel is to belong to a given class.

-

Let’s plot the masks that have been detected for the dog class and for the -boat class:

-
sem_classes = [
-    '__background__', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus',
-    'car', 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse', 'motorbike',
-    'person', 'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor'
-]
-sem_class_to_idx = {cls: idx for (idx, cls) in enumerate(sem_classes)}
-
-normalized_masks = torch.nn.functional.softmax(output, dim=1)
-
-dog_and_boat_masks = [
-    normalized_masks[img_idx, sem_class_to_idx[cls]]
-    for img_idx in range(batch.shape[0])
-    for cls in ('dog', 'boat')
-]
-
-show(dog_and_boat_masks)
-
-

As expected, the model is confident about the dog class, but not so much for -the boat class.

-

The draw_segmentation_masks() function can be used to plot those masks on top of the original image. This function expects the masks to be boolean masks, but our masks above contain probabilities in [0, 1]. To get boolean masks, we can do the following:

-
class_dim = 1
-boolean_dog_masks = (normalized_masks.argmax(class_dim) == sem_class_to_idx['dog'])
-print(f"shape = {boolean_dog_masks.shape}, dtype = {boolean_dog_masks.dtype}")
-show([m.float() for m in boolean_dog_masks])
-
-

Out:

-
shape = torch.Size([2, 500, 500]), dtype = torch.bool
-
-
-

The line above where we define boolean_dog_masks is a bit cryptic, but you -can read it as the following query: “For which pixels is ‘dog’ the most likely -class?”

-
-

Note

-

While we’re using the normalized_masks here, we would have -gotten the same result by using the non-normalized scores of the model -directly (as the softmax operation preserves the order).

-
-
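A quick sketch of why: softmax is monotonic per pixel, so the argmax over classes is identical whether taken on the raw scores or on the normalized masks.

# Sanity check that the per-pixel argmax is unaffected by the softmax.
assert torch.equal(output.argmax(dim=1), normalized_masks.argmax(dim=1))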

Now that we have boolean masks, we can use them with -draw_segmentation_masks() to plot them on top of the -original images:

-
from torchvision.utils import draw_segmentation_masks
-
-dogs_with_masks = [
-    draw_segmentation_masks(img, masks=mask, alpha=0.7)
-    for img, mask in zip(batch_int, boolean_dog_masks)
-]
-show(dogs_with_masks)
-
-

We can plot more than one mask per image! Remember that the model returned as many masks as there are classes. Let’s ask the same query as above, but this time for all classes, not just the dog class: “For each pixel and each class C, is class C the most likely class?”

-

This one is a bit more involved, so we’ll first show how to do it with a single image, and then we’ll generalize to the batch.

-
num_classes = normalized_masks.shape[1]
-dog1_masks = normalized_masks[0]
-class_dim = 0
-dog1_all_classes_masks = dog1_masks.argmax(class_dim) == torch.arange(num_classes)[:, None, None]
-
-print(f"dog1_masks shape = {dog1_masks.shape}, dtype = {dog1_masks.dtype}")
-print(f"dog1_all_classes_masks = {dog1_all_classes_masks.shape}, dtype = {dog1_all_classes_masks.dtype}")
-
-dog_with_all_masks = draw_segmentation_masks(dog1_int, masks=dog1_all_classes_masks, alpha=.6)
-show(dog_with_all_masks)
-
-

Out:

-
dog1_masks shape = torch.Size([21, 500, 500]), dtype = torch.float32
-dog1_all_classes_masks = torch.Size([21, 500, 500]), dtype = torch.bool
-
-
-

We can see in the image above that only 2 masks were drawn: the mask for the -background and the mask for the dog. This is because the model thinks that -only these 2 classes are the most likely ones across all the pixels. If the -model had detected another class as the most likely among other pixels, we -would have seen its mask above.

-

Removing the background mask is as simple as passing -masks=dog1_all_classes_masks[1:], because the background class is the -class with index 0.

-
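For instance, a minimal sketch of re-running the call above without the background mask:

# Drop the background mask (class index 0) so only the remaining classes are drawn.
dog_without_background = draw_segmentation_masks(dog1_int, masks=dog1_all_classes_masks[1:], alpha=.6)
show(dog_without_background)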

Let’s now do the same but for an entire batch of images. The code is similar -but involves a bit more juggling with the dimensions.

-
class_dim = 1
-all_classes_masks = normalized_masks.argmax(class_dim) == torch.arange(num_classes)[:, None, None, None]
-print(f"shape = {all_classes_masks.shape}, dtype = {all_classes_masks.dtype}")
-# The first dimension is the classes now, so we need to swap it
-all_classes_masks = all_classes_masks.swapaxes(0, 1)
-
-dogs_with_masks = [
-    draw_segmentation_masks(img, masks=mask, alpha=.6)
-    for img, mask in zip(batch_int, all_classes_masks)
-]
-show(dogs_with_masks)
-
-

Out:

-
shape = torch.Size([21, 2, 500, 500]), dtype = torch.bool
-
-
-
-
-

Instance segmentation models

-

Instance segmentation models have a significantly different output from the -semantic segmentation models. We will see here how to plot the masks for such -models. Let’s start by analyzing the output of a Mask-RCNN model. Note that -these models don’t require the images to be normalized, so we don’t need to -use the normalized batch.

-
-

Note

-

We will here describe the output of a Mask-RCNN model. The models in -Object Detection, Instance Segmentation and Person Keypoint Detection all have a similar output -format, but some of them may have extra info like keypoints for -keypointrcnn_resnet50_fpn(), and some -of them may not have masks, like -fasterrcnn_resnet50_fpn().

-
-
from torchvision.models.detection import maskrcnn_resnet50_fpn
-model = maskrcnn_resnet50_fpn(pretrained=True, progress=False)
-model = model.eval()
-
-output = model(batch)
-print(output)
-
-
-

Out:

-
Downloading: "https://download.pytorch.org/models/maskrcnn_resnet50_fpn_coco-bf2d0c1e.pth" to /root/.cache/torch/hub/checkpoints/maskrcnn_resnet50_fpn_coco-bf2d0c1e.pth
-[{'boxes': tensor([[219.7444, 168.1722, 400.7379, 384.0263],
-        [343.9716, 171.2287, 358.3447, 222.6263],
-        [301.0303, 192.6917, 313.8879, 232.3154]], grad_fn=<StackBackward0>), 'labels': tensor([18,  1,  1]), 'scores': tensor([0.9987, 0.7187, 0.6525], grad_fn=<IndexBackward0>), 'masks': tensor([[[[0., 0., 0.,  ..., 0., 0., 0.],
-          [0., 0., 0.,  ..., 0., 0., 0.],
-          [0., 0., 0.,  ..., 0., 0., 0.],
-          ...,
-          [0., 0., 0.,  ..., 0., 0., 0.],
-          [0., 0., 0.,  ..., 0., 0., 0.],
-          [0., 0., 0.,  ..., 0., 0., 0.]]],
-
-
-        [[[0., 0., 0.,  ..., 0., 0., 0.],
-          [0., 0., 0.,  ..., 0., 0., 0.],
-          [0., 0., 0.,  ..., 0., 0., 0.],
-          ...,
-          [0., 0., 0.,  ..., 0., 0., 0.],
-          [0., 0., 0.,  ..., 0., 0., 0.],
-          [0., 0., 0.,  ..., 0., 0., 0.]]],
-
-
-        [[[0., 0., 0.,  ..., 0., 0., 0.],
-          [0., 0., 0.,  ..., 0., 0., 0.],
-          [0., 0., 0.,  ..., 0., 0., 0.],
-          ...,
-          [0., 0., 0.,  ..., 0., 0., 0.],
-          [0., 0., 0.,  ..., 0., 0., 0.],
-          [0., 0., 0.,  ..., 0., 0., 0.]]]], grad_fn=<UnsqueezeBackward0>)}, {'boxes': tensor([[ 44.6767, 137.9018, 446.5324, 487.3429],
-        [  0.0000, 288.0053, 489.9293, 490.2352]], grad_fn=<StackBackward0>), 'labels': tensor([18, 15]), 'scores': tensor([0.9978, 0.0697], grad_fn=<IndexBackward0>), 'masks': tensor([[[[0., 0., 0.,  ..., 0., 0., 0.],
-          [0., 0., 0.,  ..., 0., 0., 0.],
-          [0., 0., 0.,  ..., 0., 0., 0.],
-          ...,
-          [0., 0., 0.,  ..., 0., 0., 0.],
-          [0., 0., 0.,  ..., 0., 0., 0.],
-          [0., 0., 0.,  ..., 0., 0., 0.]]],
-
-
-        [[[0., 0., 0.,  ..., 0., 0., 0.],
-          [0., 0., 0.,  ..., 0., 0., 0.],
-          [0., 0., 0.,  ..., 0., 0., 0.],
-          ...,
-          [0., 0., 0.,  ..., 0., 0., 0.],
-          [0., 0., 0.,  ..., 0., 0., 0.],
-          [0., 0., 0.,  ..., 0., 0., 0.]]]], grad_fn=<UnsqueezeBackward0>)}]
-
-
-

Let’s break this down. For each image in the batch, the model outputs some detections (or instances). The number of detections varies for each input image. Each instance is described by its bounding box, its label, its score and its mask.

-

The way the output is organized is as follows: the output is a list of length batch_size. Each entry in the list corresponds to an input image, and it is a dict with keys ‘boxes’, ‘labels’, ‘scores’, and ‘masks’. Each value associated with those keys has num_instances elements in it. In our case above there are 3 instances detected in the first image, and 2 instances in the second one.
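As a quick sanity check of this structure, we can loop over the output computed above and print the per-image contents (a small sketch):

for i, prediction in enumerate(output):
    # each prediction is a dict with 'boxes', 'labels', 'scores' and 'masks'
    print(i, prediction['boxes'].shape, prediction['labels'],
          prediction['scores'].shape, prediction['masks'].shape)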

-

The boxes can be plotted with draw_bounding_boxes() as above, but here we’re more interested in the masks. These masks are quite different from the masks that we saw above for the semantic segmentation models.

-
dog1_output = output[0]
dog1_masks = dog1_output['masks']
print(f"shape = {dog1_masks.shape}, dtype = {dog1_masks.dtype}, "
      f"min = {dog1_masks.min()}, max = {dog1_masks.max()}")

Out:

-
shape = torch.Size([3, 1, 500, 500]), dtype = torch.float32, min = 0.0, max = 0.9999862909317017
-
-
-

Here the masks correspond to probabilities indicating, for each pixel, how likely it is to belong to the predicted label of that instance. Those predicted labels correspond to the ‘labels’ element in the same output dict. Let’s see which labels were predicted for the instances of the first image.

-
inst_classes = [
-    '__background__', 'person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus',
-    'train', 'truck', 'boat', 'traffic light', 'fire hydrant', 'N/A', 'stop sign',
-    'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',
-    'elephant', 'bear', 'zebra', 'giraffe', 'N/A', 'backpack', 'umbrella', 'N/A', 'N/A',
-    'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball',
-    'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard', 'tennis racket',
-    'bottle', 'N/A', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl',
-    'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza',
-    'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed', 'N/A', 'dining table',
-    'N/A', 'N/A', 'toilet', 'N/A', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone',
-    'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'N/A', 'book',
-    'clock', 'vase', 'scissors', 'teddy bear', 'hair drier', 'toothbrush'
-]
-
-inst_class_to_idx = {cls: idx for (idx, cls) in enumerate(inst_classes)}
-
-print("For the first dog, the following instances were detected:")
-print([inst_classes[label] for label in dog1_output['labels']])
-
-
-

Out:

-
For the first dog, the following instances were detected:
-['dog', 'person', 'person']
-
-
-

Interestingly, the model detects two persons in the image. Let’s go ahead and plot those masks. Since draw_segmentation_masks() expects boolean masks, we need to convert those probabilities into boolean values. Remember that the semantics of those masks is “How likely is this pixel to belong to the predicted class?”. As a result, a natural way of converting those masks into boolean values is to threshold them at a probability of 0.5 (one could also choose a different threshold).

-
proba_threshold = 0.5
dog1_bool_masks = dog1_output['masks'] > proba_threshold
print(f"shape = {dog1_bool_masks.shape}, dtype = {dog1_bool_masks.dtype}")

# There's an extra dimension (1) to the masks. We need to remove it
dog1_bool_masks = dog1_bool_masks.squeeze(1)

show(draw_segmentation_masks(dog1_int, dog1_bool_masks, alpha=0.9))

[figure: plot visualization utils]

Out:

-
shape = torch.Size([3, 1, 500, 500]), dtype = torch.bool
-
-
-

The model seems to have properly detected the dog, but it also confused trees with people. Looking more closely at the scores will help us plot more relevant masks:

-
print(dog1_output['scores'])
-
-
-

Out:

-
tensor([0.9987, 0.7187, 0.6525], grad_fn=<IndexBackward0>)
-
-
-

Clearly the model is more confident about the dog detection than it is about the people detections. That’s good news. When plotting the masks, we can ask for only those that have a good score. Let’s use a score threshold of .75 here, and also plot the masks of the second dog.

-
score_threshold = .75

boolean_masks = [
    out['masks'][out['scores'] > score_threshold] > proba_threshold
    for out in output
]

dogs_with_masks = [
    draw_segmentation_masks(img, mask.squeeze(1))
    for img, mask in zip(batch_int, boolean_masks)
]
show(dogs_with_masks)

[figure: plot visualization utils]

The two ‘people’ masks in the first image were not selected because they have a lower score than the score threshold. Similarly, in the second image the instance with class 15 (which corresponds to ‘bench’) was not selected.
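As a sketch, the same filtering can be used to list which detections survive the score threshold in each image, reusing the names defined above:

for i, out in enumerate(output):
    keep = out['scores'] > score_threshold
    print(f"image {i}:", [inst_classes[label] for label in out['labels'][keep]])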

-

Total running time of the script: ( 0 minutes 9.400 seconds)

- -

Gallery generated by Sphinx-Gallery

\ No newline at end of file
diff --git a/0.11./auto_examples/sg_execution_times.html b/0.11./auto_examples/sg_execution_times.html
deleted file mode 100644
index be8b0e980e4..00000000000
--- a/0.11./auto_examples/sg_execution_times.html
+++ /dev/null
@@ -1,662 +0,0 @@

Computation times — Torchvision main documentation
\ No newline at end of file
diff --git a/0.11./datasets.html b/0.11./datasets.html
deleted file mode 100644
index 4f205f27eeb..00000000000
--- a/0.11./datasets.html
+++ /dev/null
@@ -1,2548 +0,0 @@

torchvision.datasets — Torchvision main documentation

torchvision.datasets

-

All datasets are subclasses of torch.utils.data.Dataset, i.e., they have __getitem__ and __len__ methods implemented. Hence, they can all be passed to a torch.utils.data.DataLoader which can load multiple samples in parallel using torch.multiprocessing workers. For example:

-
import torch
import torchvision

imagenet_data = torchvision.datasets.ImageNet('path/to/imagenet_root/')
data_loader = torch.utils.data.DataLoader(imagenet_data,
                                          batch_size=4,
                                          shuffle=True,
                                          num_workers=args.nThreads)  # args.nThreads: user-defined worker count

All the datasets share a similar API. They all have two common arguments: transform and target_transform to transform the input and the target, respectively. You can also create your own datasets using the provided base classes.
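For example, a quick sketch with MNIST as a stand-in (the root path is a placeholder; download=True fetches the data on first use):

import torchvision
import torchvision.transforms as transforms

dataset = torchvision.datasets.MNIST(
    root='path/to/mnist_root/',
    download=True,
    transform=transforms.ToTensor(),             # applied to the input PIL image
    target_transform=lambda target: target % 2,  # applied to the target (e.g. odd/even label)
)
img, target = dataset[0]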

-
-

Caltech

-
-
-class torchvision.datasets.Caltech101(root: str, target_type: Union[List[str], str] = 'category', transform: Optional[Callable] = None, target_transform: Optional[Callable] = None, download: bool = False)[source]
-

Caltech 101 Dataset.

-
-

Warning

-

This class needs scipy to load target files from .mat format.

-
-
-
Parameters
-
    -
  • root (string) – Root directory of dataset where directory -caltech101 exists or will be saved to if download is set to True.

  • -
  • target_type (string or list, optional) – Type of target to use, category or -annotation. Can also be a list to output a tuple with all specified -target types. category represents the target class, and -annotation is a list of points from a hand-generated outline. -Defaults to category.

  • -
  • transform (callable, optional) – A function/transform that takes in an PIL image -and returns a transformed version. E.g, transforms.RandomCrop

  • -
  • target_transform (callable, optional) – A function/transform that takes in the -target and transforms it.

  • -
  • download (bool, optional) – If true, downloads the dataset from the internet and -puts it in root directory. If dataset is already downloaded, it is not -downloaded again.

  • -
-
-
-
-
-__getitem__(index: int)Tuple[Any, Any][source]
-
-
Parameters
-

index (int) – Index

-
-
Returns
-

(image, target) where the type of target specified by target_type.

-
-
Return type
-

tuple

-
-
-
- -
- -
-
-class torchvision.datasets.Caltech256(root: str, transform: Optional[Callable] = None, target_transform: Optional[Callable] = None, download: bool = False)[source]
-

Caltech 256 Dataset.

-
-
Parameters
-
    -
  • root (string) – Root directory of dataset where directory -caltech256 exists or will be saved to if download is set to True.

  • -
  • transform (callable, optional) – A function/transform that takes in an PIL image -and returns a transformed version. E.g, transforms.RandomCrop

  • -
  • target_transform (callable, optional) – A function/transform that takes in the -target and transforms it.

  • -
  • download (bool, optional) – If true, downloads the dataset from the internet and -puts it in root directory. If dataset is already downloaded, it is not -downloaded again.

  • -
-
-
-
-
-__getitem__(index: int)Tuple[Any, Any][source]
-
-
Parameters
-

index (int) – Index

-
-
Returns
-

(image, target) where target is index of the target class.

-
-
Return type
-

tuple

-
-
-
- -
- -
-
-

CelebA

-
-
-class torchvision.datasets.CelebA(root: str, split: str = 'train', target_type: Union[List[str], str] = 'attr', transform: Optional[Callable] = None, target_transform: Optional[Callable] = None, download: bool = False)[source]
-

Large-scale CelebFaces Attributes (CelebA) Dataset Dataset.

-
-
Parameters
-
    -
  • root (string) – Root directory where images are downloaded to.

  • -
  • split (string) – One of {‘train’, ‘valid’, ‘test’, ‘all’}. -Accordingly dataset is selected.

  • -
  • target_type (string or list, optional) –

    Type of target to use, attr, identity, bbox, -or landmarks. Can also be a list to output a tuple with all specified target types. -The targets represent:

    -
    -
      -
    • attr (np.array shape=(40,) dtype=int): binary (0, 1) labels for attributes

    • -
    • identity (int): label for each person (data points with the same identity are the same person)

    • -
    • bbox (np.array shape=(4,) dtype=int): bounding box (x, y, width, height)

    • -
    • landmarks (np.array shape=(10,) dtype=int): landmark points (lefteye_x, lefteye_y, righteye_x, -righteye_y, nose_x, nose_y, leftmouth_x, leftmouth_y, rightmouth_x, rightmouth_y)

    • -
    -
    -

    Defaults to attr. If empty, None will be returned as target.

    -

  • -
  • transform (callable, optional) – A function/transform that takes in an PIL image -and returns a transformed version. E.g, transforms.ToTensor

  • -
  • target_transform (callable, optional) – A function/transform that takes in the -target and transforms it.

  • -
  • download (bool, optional) – If true, downloads the dataset from the internet and -puts it in root directory. If dataset is already downloaded, it is not -downloaded again.

  • -
-
-
-
-
-__getitem__(index: int)Tuple[Any, Any][source]
-
-
Parameters
-

index (int) – Index

-
-
Returns
-

Sample and meta data, optionally transformed by the respective transforms.

-
-
Return type
-

(Any)

-
-
-
- -
- -
-
-

CIFAR

-
-
-class torchvision.datasets.CIFAR10(root: str, train: bool = True, transform: Optional[Callable] = None, target_transform: Optional[Callable] = None, download: bool = False)[source]
-

CIFAR10 Dataset.

-
-
Parameters
-
    -
  • root (string) – Root directory of dataset where directory -cifar-10-batches-py exists or will be saved to if download is set to True.

  • -
  • train (bool, optional) – If True, creates dataset from training set, otherwise -creates from test set.

  • -
  • transform (callable, optional) – A function/transform that takes in an PIL image -and returns a transformed version. E.g, transforms.RandomCrop

  • -
  • target_transform (callable, optional) – A function/transform that takes in the -target and transforms it.

  • -
  • download (bool, optional) – If true, downloads the dataset from the internet and -puts it in root directory. If dataset is already downloaded, it is not -downloaded again.

  • -
-
-
-
-
-__getitem__(index: int)Tuple[Any, Any][source]
-
-
Parameters
-

index (int) – Index

-
-
Returns
-

(image, target) where target is index of the target class.

-
-
Return type
-

tuple

-
-
-
- -
- -
-
-class torchvision.datasets.CIFAR100(root: str, train: bool = True, transform: Optional[Callable] = None, target_transform: Optional[Callable] = None, download: bool = False)[source]
-

CIFAR100 Dataset.

-

This is a subclass of the CIFAR10 Dataset.

-
- -
-
-

Cityscapes

-
-

Note

-

Requires the Cityscapes dataset to be downloaded.

-
-
-
-class torchvision.datasets.Cityscapes(root: str, split: str = 'train', mode: str = 'fine', target_type: Union[List[str], str] = 'instance', transform: Optional[Callable] = None, target_transform: Optional[Callable] = None, transforms: Optional[Callable] = None)[source]
-

Cityscapes Dataset.

-
-
Parameters
-
    -
  • root (string) – Root directory of dataset where directory leftImg8bit -and gtFine or gtCoarse are located.

  • -
  • split (string, optional) – The image split to use, train, test or val if mode=”fine” -otherwise train, train_extra or val

  • -
  • mode (string, optional) – The quality mode to use, fine or coarse

  • -
  • target_type (string or list, optional) – Type of target to use, instance, semantic, polygon -or color. Can also be a list to output a tuple with all specified target types.

  • -
  • transform (callable, optional) – A function/transform that takes in a PIL image -and returns a transformed version. E.g, transforms.RandomCrop

  • -
  • target_transform (callable, optional) – A function/transform that takes in the -target and transforms it.

  • -
  • transforms (callable, optional) – A function/transform that takes input sample and its target as entry -and returns a transformed version.

  • -
-
-
-

Examples

-

Get semantic segmentation target

-
dataset = Cityscapes('./data/cityscapes', split='train', mode='fine',
-                     target_type='semantic')
-
-img, smnt = dataset[0]
-
-
-

Get multiple targets

-
dataset = Cityscapes('./data/cityscapes', split='train', mode='fine',
-                     target_type=['instance', 'color', 'polygon'])
-
-img, (inst, col, poly) = dataset[0]
-
-
-

Validate on the “coarse” set

-
dataset = Cityscapes('./data/cityscapes', split='val', mode='coarse',
-                     target_type='semantic')
-
-img, smnt = dataset[0]
-
-
-
-
-__getitem__(index: int)Tuple[Any, Any][source]
-
-
Parameters
-

index (int) – Index

-
-
Returns
-

(image, target) where target is a tuple of all target types if target_type is a list with more -than one item. Otherwise target is a json object if target_type=”polygon”, else the image segmentation.

-
-
Return type
-

tuple

-
-
-
- -
- -
-
-

COCO

-
-

Note

-

These require the COCO API to be installed

-
-
-

Captions

-
-
-class torchvision.datasets.CocoCaptions(root: str, annFile: str, transform: Optional[Callable] = None, target_transform: Optional[Callable] = None, transforms: Optional[Callable] = None)[source]
-

MS Coco Captions Dataset.

-
-
Parameters
-
    -
  • root (string) – Root directory where images are downloaded to.

  • -
  • annFile (string) – Path to json annotation file.

  • -
  • transform (callable, optional) – A function/transform that takes in an PIL image -and returns a transformed version. E.g, transforms.ToTensor

  • -
  • target_transform (callable, optional) – A function/transform that takes in the -target and transforms it.

  • -
  • transforms (callable, optional) – A function/transform that takes input sample and its target as entry -and returns a transformed version.

  • -
-
-
-

Example

-
import torchvision.datasets as dset
-import torchvision.transforms as transforms
-cap = dset.CocoCaptions(root = 'dir where images are',
-                        annFile = 'json annotation file',
-                        transform=transforms.ToTensor())
-
-print('Number of samples: ', len(cap))
-img, target = cap[3] # load 4th sample
-
-print("Image Size: ", img.size())
-print(target)
-
-
-

Output:

-
Number of samples: 82783
-Image Size: (3L, 427L, 640L)
-[u'A plane emitting smoke stream flying over a mountain.',
-u'A plane darts across a bright blue sky behind a mountain covered in snow',
-u'A plane leaves a contrail above the snowy mountain top.',
-u'A mountain that has a plane flying overheard in the distance.',
-u'A mountain view with a plume of smoke in the background']
-
-
-
-
-__getitem__(index: int)Tuple[Any, Any]
-
-
Parameters
-

index (int) – Index

-
-
Returns
-

Sample and meta data, optionally transformed by the respective transforms.

-
-
Return type
-

(Any)

-
-
-
- -
- -
-
-

Detection

-
-
-class torchvision.datasets.CocoDetection(root: str, annFile: str, transform: Optional[Callable] = None, target_transform: Optional[Callable] = None, transforms: Optional[Callable] = None)[source]
-

MS Coco Detection Dataset.

-
-
Parameters
-
    -
  • root (string) – Root directory where images are downloaded to.

  • -
  • annFile (string) – Path to json annotation file.

  • -
  • transform (callable, optional) – A function/transform that takes in an PIL image -and returns a transformed version. E.g, transforms.ToTensor

  • -
  • target_transform (callable, optional) – A function/transform that takes in the -target and transforms it.

  • -
  • transforms (callable, optional) – A function/transform that takes input sample and its target as entry -and returns a transformed version.

  • -
-
-
-
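Example (a sketch mirroring the CocoCaptions example above; paths are placeholders):

import torchvision.datasets as dset
import torchvision.transforms as transforms
det = dset.CocoDetection(root = 'dir where images are',
                         annFile = 'json annotation file',
                         transform=transforms.ToTensor())

img, target = det[3]  # target: list of COCO annotation dicts for this image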
-
-__getitem__(index: int)Tuple[Any, Any][source]
-
-
Parameters
-

index (int) – Index

-
-
Returns
-

Sample and meta data, optionally transformed by the respective transforms.

-
-
Return type
-

(Any)

-
-
-
- -
- -
-
-
-

EMNIST

-
-
-class torchvision.datasets.EMNIST(root: str, split: str, **kwargs: Any)[source]
-

EMNIST Dataset.

-
-
Parameters
-
    -
  • root (string) – Root directory of dataset where EMNIST/processed/training.pt -and EMNIST/processed/test.pt exist.

  • -
  • split (string) – The dataset has 6 different splits: byclass, bymerge, -balanced, letters, digits and mnist. This argument specifies -which one to use.

  • -
  • train (bool, optional) – If True, creates dataset from training.pt, -otherwise from test.pt.

  • -
  • download (bool, optional) – If true, downloads the dataset from the internet and -puts it in root directory. If dataset is already downloaded, it is not -downloaded again.

  • -
  • transform (callable, optional) – A function/transform that takes in an PIL image -and returns a transformed version. E.g, transforms.RandomCrop

  • -
  • target_transform (callable, optional) – A function/transform that takes in the -target and transforms it.

  • -
-
-
-
- -
-
-

FakeData

-
-
-class torchvision.datasets.FakeData(size: int = 1000, image_size: Tuple[int, int, int] = (3, 224, 224), num_classes: int = 10, transform: Optional[Callable] = None, target_transform: Optional[Callable] = None, random_offset: int = 0)[source]
-

A fake dataset that returns randomly generated images as PIL images

-
-
Parameters
-
    -
  • size (int, optional) – Size of the dataset. Default: 1000 images

  • -
  • image_size (tuple, optional) – Size of the returned images. Default: (3, 224, 224)

  • -
  • num_classes (int, optional) – Number of classes in the dataset. Default: 10

  • -
  • transform (callable, optional) – A function/transform that takes in an PIL image -and returns a transformed version. E.g, transforms.RandomCrop

  • -
  • target_transform (callable, optional) – A function/transform that takes in the -target and transforms it.

  • -
  • random_offset (int) – Offsets the index-based random seed used to -generate each image. Default: 0

  • -
-
-
-
- -
-
-

Fashion-MNIST

-
-
-class torchvision.datasets.FashionMNIST(root: str, train: bool = True, transform: Optional[Callable] = None, target_transform: Optional[Callable] = None, download: bool = False)[source]
-

Fashion-MNIST Dataset.

-
-
Parameters
-
    -
  • root (string) – Root directory of dataset where FashionMNIST/processed/training.pt -and FashionMNIST/processed/test.pt exist.

  • -
  • train (bool, optional) – If True, creates dataset from training.pt, -otherwise from test.pt.

  • -
  • download (bool, optional) – If true, downloads the dataset from the internet and -puts it in root directory. If dataset is already downloaded, it is not -downloaded again.

  • -
  • transform (callable, optional) – A function/transform that takes in an PIL image -and returns a transformed version. E.g, transforms.RandomCrop

  • -
  • target_transform (callable, optional) – A function/transform that takes in the -target and transforms it.

  • -
-
-
-
- -
-
-

Flickr

-
-
-class torchvision.datasets.Flickr8k(root: str, ann_file: str, transform: Optional[Callable] = None, target_transform: Optional[Callable] = None)[source]
-

Flickr8k Entities Dataset.

-
-
Parameters
-
    -
  • root (string) – Root directory where images are downloaded to.

  • -
  • ann_file (string) – Path to annotation file.

  • -
  • transform (callable, optional) – A function/transform that takes in a PIL image -and returns a transformed version. E.g, transforms.ToTensor

  • -
  • target_transform (callable, optional) – A function/transform that takes in the -target and transforms it.

  • -
-
-
-
-
-__getitem__(index: int)Tuple[Any, Any][source]
-
-
Parameters
-

index (int) – Index

-
-
Returns
-

Tuple (image, target). target is a list of captions for the image.

-
-
Return type
-

tuple

-
-
-
- -
- -
-
-class torchvision.datasets.Flickr30k(root: str, ann_file: str, transform: Optional[Callable] = None, target_transform: Optional[Callable] = None)[source]
-

Flickr30k Entities Dataset.

-
-
Parameters
-
    -
  • root (string) – Root directory where images are downloaded to.

  • -
  • ann_file (string) – Path to annotation file.

  • -
  • transform (callable, optional) – A function/transform that takes in a PIL image -and returns a transformed version. E.g, transforms.ToTensor

  • -
  • target_transform (callable, optional) – A function/transform that takes in the -target and transforms it.

  • -
-
-
-
-
-__getitem__(index: int)Tuple[Any, Any][source]
-
-
Parameters
-

index (int) – Index

-
-
Returns
-

Tuple (image, target). target is a list of captions for the image.

-
-
Return type
-

tuple

-
-
-
- -
- -
-
-

HMDB51

-
-
-class torchvision.datasets.HMDB51(root: str, annotation_path: str, frames_per_clip: int, step_between_clips: int = 1, frame_rate: Optional[int] = None, fold: int = 1, train: bool = True, transform: Optional[Callable] = None, _precomputed_metadata: Optional[Dict[str, Any]] = None, num_workers: int = 1, _video_width: int = 0, _video_height: int = 0, _video_min_dimension: int = 0, _audio_samples: int = 0)[source]
-

HMDB51 -dataset.

-

HMDB51 is an action recognition video dataset. This dataset considers every video as a collection of video clips of fixed size, specified by frames_per_clip, where the step in frames between each clip is given by step_between_clips.

-

To give an example, for 2 videos with 10 and 15 frames respectively, if frames_per_clip=5 and step_between_clips=5, the dataset size will be (2 + 3) = 5, where the first two elements will come from video 1, and the next three elements from video 2. Note that we drop clips which do not have exactly frames_per_clip elements, so not all frames in a video might be present.
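The per-video clip count in that example can be sketched as (frames - frames_per_clip) // step_between_clips + 1; for the hypothetical 10- and 15-frame videos above:

frames_per_clip = 5
step_between_clips = 5
video_lengths = [10, 15]  # assumed frame counts from the example above
num_clips = sum((n - frames_per_clip) // step_between_clips + 1 for n in video_lengths)
print(num_clips)  # 2 + 3 = 5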

-

Internally, it uses a VideoClips object to handle clip creation.

-
-
Parameters
-
    -
  • root (string) – Root directory of the HMDB51 Dataset.

  • -
  • annotation_path (str) – Path to the folder containing the split files.

  • -
  • frames_per_clip (int) – Number of frames in a clip.

  • -
  • step_between_clips (int) – Number of frames between each clip.

  • -
  • fold (int, optional) – Which fold to use. Should be between 1 and 3.

  • -
  • train (bool, optional) – If True, creates a dataset from the train split, -otherwise from the test split.

  • -
  • transform (callable, optional) – A function/transform that takes in a TxHxWxC video -and returns a transformed version.

  • -
-
-
Returns
-

A 3-tuple with the following entries:

-
-
    -
  • video (Tensor[T, H, W, C]): The T video frames

  • -
  • audio(Tensor[K, L]): the audio frames, where K is the number of channels -and L is the number of points

  • -
  • label (int): class of the video clip

  • -
-
-

-
-
Return type
-

tuple

-
-
-
-
-__getitem__(idx: int)Tuple[torch.Tensor, torch.Tensor, int][source]
-
-
Parameters
-

index (int) – Index

-
-
Returns
-

Sample and meta data, optionally transformed by the respective transforms.

-
-
Return type
-

(Any)

-
-
-
- -
- -
-
-

ImageNet

-
-
-class torchvision.datasets.ImageNet(root: str, split: str = 'train', download: Optional[str] = None, **kwargs: Any)[source]
-

ImageNet 2012 Classification Dataset.

-
-
Parameters
-
    -
  • root (string) – Root directory of the ImageNet Dataset.

  • -
  • split (string, optional) – The dataset split, supports train, or val.

  • -
  • transform (callable, optional) – A function/transform that takes in an PIL image -and returns a transformed version. E.g, transforms.RandomCrop

  • -
  • target_transform (callable, optional) – A function/transform that takes in the -target and transforms it.

  • -
  • loader – A function to load an image given its path.

  • -
-
-
-
- -
-

Note

-

This requires scipy to be installed

-
-
-
-

iNaturalist

-
-
-class torchvision.datasets.INaturalist(root: str, version: str = '2021_train', target_type: Union[List[str], str] = 'full', transform: Optional[Callable] = None, target_transform: Optional[Callable] = None, download: bool = False)[source]
-

iNaturalist Dataset.

-
-
Parameters
-
    -
  • root (string) – Root directory of dataset where the image files are stored. -This class does not require/use annotation files.

  • -
  • version (string, optional) – Which version of the dataset to download/use. One of -‘2017’, ‘2018’, ‘2019’, ‘2021_train’, ‘2021_train_mini’, ‘2021_valid’. -Default: 2021_train.

  • -
  • target_type (string or list, optional) –

    Type of target to use, for 2021 versions, one of:

    -
      -
    • full: the full category (species)

    • -
    • kingdom: e.g. “Animalia”

    • -
    • phylum: e.g. “Arthropoda”

    • -
    • class: e.g. “Insecta”

    • -
    • order: e.g. “Coleoptera”

    • -
    • family: e.g. “Cleridae”

    • -
    • genus: e.g. “Trichodes”

    • -
    -

    for 2017-2019 versions, one of:

    -
      -
    • full: the full (numeric) category

    • -
    • super: the super category, e.g. “Amphibians”

    • -
    -

    Can also be a list to output a tuple with all specified target types. -Defaults to full.

    -

  • -
  • transform (callable, optional) – A function/transform that takes in an PIL image -and returns a transformed version. E.g, transforms.RandomCrop

  • -
  • target_transform (callable, optional) – A function/transform that takes in the -target and transforms it.

  • -
  • download (bool, optional) – If true, downloads the dataset from the internet and -puts it in root directory. If dataset is already downloaded, it is not -downloaded again.

  • -
-
-
-
-
-__getitem__(index: int)Tuple[Any, Any][source]
-
-
Parameters
-

index (int) – Index

-
-
Returns
-

(image, target) where the type of target specified by target_type.

-
-
Return type
-

tuple

-
-
-
- -
-
-category_name(category_type: str, category_id: int)str[source]
-
-
Parameters
-
    -
  • category_type (str) – one of “full”, “kingdom”, “phylum”, “class”, “order”, “family”, “genus” or “super”

  • -
  • category_id (int) – an index (class id) from this category

  • -
-
-
Returns
-

the name of the category

-
-
-
- -
- -
-
-

Kinetics-400

-
-
-class torchvision.datasets.Kinetics400(root: str, frames_per_clip: int, num_classes: Optional[Any] = None, split: Optional[Any] = None, download: Optional[Any] = None, num_download_workers: Optional[Any] = None, **kwargs: Any)[source]
-

Kinetics-400 -dataset.

-

Kinetics-400 is an action recognition video dataset. This dataset considers every video as a collection of video clips of fixed size, specified by frames_per_clip, where the step in frames between each clip is given by step_between_clips.

-

To give an example, for 2 videos with 10 and 15 frames respectively, if frames_per_clip=5 and step_between_clips=5, the dataset size will be (2 + 3) = 5, where the first two elements will come from video 1, and the next three elements from video 2. Note that we drop clips which do not have exactly frames_per_clip elements, so not all frames in a video might be present.

-

Internally, it uses a VideoClips object to handle clip creation.

-
-
Parameters
-
    -
  • root (string) –

    Root directory of the Kinetics-400 Dataset. Should be structured as follows:

    -
    root/
    -├── class1
    -│   ├── clip1.avi
    -│   ├── clip2.avi
    -│   ├── clip3.mp4
    -│   └── ...
    -└── class2
    -    ├── clipx.avi
    -    └── ...
    -
    -
    -

  • -
  • frames_per_clip (int) – number of frames in a clip

  • -
  • step_between_clips (int) – number of frames between each clip

  • -
  • transform (callable, optional) – A function/transform that takes in a TxHxWxC video -and returns a transformed version.

  • -
-
-
Returns
-

A 3-tuple with the following entries:

-
-
    -
  • video (Tensor[T, H, W, C]): the T video frames

  • -
  • audio(Tensor[K, L]): the audio frames, where K is the number of channels -and L is the number of points

  • -
  • label (int): class of the video clip

  • -
-
-

-
-
Return type
-

tuple

-
-
-
-
-__getitem__(idx: int)Tuple[torch.Tensor, torch.Tensor, int]
-
-
Parameters
-

index (int) – Index

-
-
Returns
-

Sample and meta data, optionally transformed by the respective transforms.

-
-
Return type
-

(Any)

-
-
-
- -
- -
-
-

KITTI

-
-
-class torchvision.datasets.Kitti(root: str, train: bool = True, transform: Optional[Callable] = None, target_transform: Optional[Callable] = None, transforms: Optional[Callable] = None, download: bool = False)[source]
-

KITTI Dataset.

-

It corresponds to the “left color images of object” dataset, for object detection.

-
-
Parameters
-
    -
  • root (string) –

    Root directory where images are downloaded to. -Expects the following folder structure if download=False:

    -
    <root>
    -    └── Kitti
    -        └─ raw
    -            ├── training
    -            |   ├── image_2
    -            |   └── label_2
    -            └── testing
    -                └── image_2
    -
    -
    -

  • -
  • train (bool, optional) – Use train split if true, else test split. -Defaults to train.

  • -
  • transform (callable, optional) – A function/transform that takes in a PIL image -and returns a transformed version. E.g, transforms.ToTensor

  • -
  • target_transform (callable, optional) – A function/transform that takes in the -target and transforms it.

  • -
  • transforms (callable, optional) – A function/transform that takes input sample -and its target as entry and returns a transformed version.

  • -
  • download (bool, optional) – If true, downloads the dataset from the internet and -puts it in root directory. If dataset is already downloaded, it is not -downloaded again.

  • -
-
-
-
-
-__getitem__(index: int)Tuple[Any, Any][source]
-

Get item at a given index.

-
-
Parameters
-

index (int) – Index

-
-
Returns
-

(image, target), where -target is a list of dictionaries with the following keys:

-
    -
  • type: str

  • -
  • truncated: float

  • -
  • occluded: int

  • -
  • alpha: float

  • -
  • bbox: float[4]

  • -
  • dimensions: float[3]

  • -
  • locations: float[3]

  • -
  • rotation_y: float

  • -
-

-
-
Return type
-

tuple

-
-
-
- -
- -
-
-

KMNIST

-
-
-class torchvision.datasets.KMNIST(root: str, train: bool = True, transform: Optional[Callable] = None, target_transform: Optional[Callable] = None, download: bool = False)[source]
-

Kuzushiji-MNIST Dataset.

-
-
Parameters
-
    -
  • root (string) – Root directory of dataset where KMNIST/processed/training.pt -and KMNIST/processed/test.pt exist.

  • -
  • train (bool, optional) – If True, creates dataset from training.pt, -otherwise from test.pt.

  • -
  • download (bool, optional) – If true, downloads the dataset from the internet and -puts it in root directory. If dataset is already downloaded, it is not -downloaded again.

  • -
  • transform (callable, optional) – A function/transform that takes in an PIL image -and returns a transformed version. E.g, transforms.RandomCrop

  • -
  • target_transform (callable, optional) – A function/transform that takes in the -target and transforms it.

  • -
-
-
-
- -
-
-

LFW

-
-
-class torchvision.datasets.LFWPeople(root: str, split: str = '10fold', image_set: str = 'funneled', transform: Optional[Callable] = None, target_transform: Optional[Callable] = None, download: bool = False)[source]
-

LFW Dataset.

-
-
Parameters
-
    -
  • root (string) – Root directory of dataset where directory -lfw-py exists or will be saved to if download is set to True.

  • -
  • split (string, optional) – The image split to use. Can be one of train, test, -10fold (default).

  • -
  • image_set (str, optional) – Type of image funneling to use, original, funneled or -deepfunneled. Defaults to funneled.

  • -
  • transform (callable, optional) – A function/transform that takes in an PIL image -and returns a transformed version. E.g, transforms.RandomRotation

  • -
  • target_transform (callable, optional) – A function/transform that takes in the -target and transforms it.

  • -
  • download (bool, optional) – If true, downloads the dataset from the internet and -puts it in root directory. If dataset is already downloaded, it is not -downloaded again.

  • -
-
-
-
-
-__getitem__(index: int)Tuple[Any, Any][source]
-
-
Parameters
-

index (int) – Index

-
-
Returns
-

Tuple (image, target) where target is the identity of the person.

-
-
Return type
-

tuple

-
-
-
- -
- -
-
-class torchvision.datasets.LFWPairs(root: str, split: str = '10fold', image_set: str = 'funneled', transform: Optional[Callable] = None, target_transform: Optional[Callable] = None, download: bool = False)[source]
-

LFW Dataset.

-
-
Parameters
-
    -
  • root (string) – Root directory of dataset where directory -lfw-py exists or will be saved to if download is set to True.

  • -
  • split (string, optional) – The image split to use. Can be one of train, test, -10fold. Defaults to 10fold.

  • -
  • image_set (str, optional) – Type of image funneling to use, original, funneled or -deepfunneled. Defaults to funneled.

  • -
  • transform (callable, optional) – A function/transform that takes in an PIL image -and returns a transformed version. E.g, transforms.RandomRotation

  • -
  • target_transform (callable, optional) – A function/transform that takes in the -target and transforms it.

  • -
  • download (bool, optional) – If true, downloads the dataset from the internet and -puts it in root directory. If dataset is already downloaded, it is not -downloaded again.

  • -
-
-
-
-
-__getitem__(index: int)Tuple[Any, Any, int][source]
-
-
Parameters
-

index (int) – Index

-
-
Returns
-

(image1, image2, target) where target is 0 for different identities and 1 for same identities.

-
-
Return type
-

tuple

-
-
-
- -
- -
-
-

LSUN

-
-
-class torchvision.datasets.LSUN(root: str, classes: Union[str, List[str]] = 'train', transform: Optional[Callable] = None, target_transform: Optional[Callable] = None)[source]
-

LSUN dataset.

-

You will need to install the lmdb package to use this dataset: run -pip install lmdb

-
-
Parameters
-
    -
  • root (string) – Root directory for the database files.

  • -
  • classes (string or list) – One of {‘train’, ‘val’, ‘test’} or a list of categories to load, e.g. [‘bedroom_train’, ‘church_outdoor_train’].

  • -
  • transform (callable, optional) – A function/transform that takes in an PIL image -and returns a transformed version. E.g, transforms.RandomCrop

  • -
  • target_transform (callable, optional) – A function/transform that takes in the -target and transforms it.

  • -
-
-
-
-
-__getitem__(index: int)Tuple[Any, Any][source]
-
-
Parameters
-

index (int) – Index

-
-
Returns
-

Tuple (image, target) where target is the index of the target category.

-
-
Return type
-

tuple

-
-
-
- -
- -
-
-

MNIST

-
-
-class torchvision.datasets.MNIST(root: str, train: bool = True, transform: Optional[Callable] = None, target_transform: Optional[Callable] = None, download: bool = False)[source]
-

MNIST Dataset.

-
-
Parameters
-
    -
  • root (string) – Root directory of dataset where MNIST/processed/training.pt -and MNIST/processed/test.pt exist.

  • -
  • train (bool, optional) – If True, creates dataset from training.pt, -otherwise from test.pt.

  • -
  • download (bool, optional) – If true, downloads the dataset from the internet and -puts it in root directory. If dataset is already downloaded, it is not -downloaded again.

  • -
  • transform (callable, optional) – A function/transform that takes in an PIL image -and returns a transformed version. E.g, transforms.RandomCrop

  • -
  • target_transform (callable, optional) – A function/transform that takes in the -target and transforms it.

  • -
-
-
-
- -
-
-

Omniglot

-
-
-class torchvision.datasets.Omniglot(root: str, background: bool = True, transform: Optional[Callable] = None, target_transform: Optional[Callable] = None, download: bool = False)[source]
-

Omniglot Dataset.

-
-
Parameters
-
    -
  • root (string) – Root directory of dataset where directory -omniglot-py exists.

  • -
  • background (bool, optional) – If True, creates dataset from the “background” set, otherwise -creates from the “evaluation” set. This terminology is defined by the authors.

  • -
  • transform (callable, optional) – A function/transform that takes in an PIL image -and returns a transformed version. E.g, transforms.RandomCrop

  • -
  • target_transform (callable, optional) – A function/transform that takes in the -target and transforms it.

  • -
  • download (bool, optional) – If true, downloads the dataset zip files from the internet and -puts it in root directory. If the zip files are already downloaded, they are not -downloaded again.

  • -
-
-
-
- -
-
-

PhotoTour

-
-
-class torchvision.datasets.PhotoTour(root: str, name: str, train: bool = True, transform: Optional[Callable] = None, download: bool = False)[source]
-

Multi-view Stereo Correspondence Dataset.

-
-

Note

-

We only provide the newer version of the dataset, since the authors state that it

-
-

is more suitable for training descriptors based on difference of Gaussian, or Harris corners, as the -patches are centred on real interest point detections, rather than being projections of 3D points as is the -case in the old dataset.

-
-

The original dataset is available under http://phototour.cs.washington.edu/patches/default.htm.

-
-
-
Parameters
-
    -
  • root (string) – Root directory where images are.

  • -
  • name (string) – Name of the dataset to load.

  • -
  • transform (callable, optional) – A function/transform that takes in an PIL image -and returns a transformed version.

  • -
  • download (bool, optional) – If true, downloads the dataset from the internet and -puts it in root directory. If dataset is already downloaded, it is not -downloaded again.

  • -
-
-
-
-
-__getitem__(index: int)Union[torch.Tensor, Tuple[Any, Any, torch.Tensor]][source]
-
-
Parameters
-

index (int) – Index

-
-
Returns
-

(data1, data2, matches)

-
-
Return type
-

tuple

-
-
-
- -
- -
-
-

Places365

-
-
-class torchvision.datasets.Places365(root: str, split: str = 'train-standard', small: bool = False, download: bool = False, transform: Optional[Callable] = None, target_transform: Optional[Callable] = None, loader: Callable[[str], Any] = <function default_loader>)[source]
-

Places365 classification dataset.

-
-
Parameters
-
    -
  • root (string) – Root directory of the Places365 dataset.

  • -
  • split (string, optional) – The dataset split. Can be one of train-standard (default), train-challenge, -val.

  • -
  • small (bool, optional) – If True, uses the small images, i. e. resized to 256 x 256 pixels, instead of the -high resolution ones.

  • -
  • download (bool, optional) – If True, downloads the dataset components and places them in root. Already -downloaded archives are not downloaded again.

  • -
  • transform (callable, optional) – A function/transform that takes in an PIL image -and returns a transformed version. E.g, transforms.RandomCrop

  • -
  • target_transform (callable, optional) – A function/transform that takes in the -target and transforms it.

  • -
  • loader – A function to load an image given its path.

  • -
-
-
Raises
-
    -
  • RuntimeError – If download is False and the meta files, i. e. the devkit, are not present or corrupted.

  • -
  • RuntimeError – If download is True and the image archive is already extracted.

  • -
-
-
-
-
-__getitem__(index: int)Tuple[Any, Any][source]
-
-
Parameters
-

index (int) – Index

-
-
Returns
-

Sample and meta data, optionally transformed by the respective transforms.

-
-
Return type
-

(Any)

-
-
-
- -
- -
-
-

QMNIST

-
-
-class torchvision.datasets.QMNIST(root: str, what: Optional[str] = None, compat: bool = True, train: bool = True, **kwargs: Any)[source]
-

QMNIST Dataset.

-
-
Parameters
-
    -
  • root (string) – Root directory of dataset whose processed -subdir contains torch binary files with the datasets.

  • -
  • what (string,optional) – Can be ‘train’, ‘test’, ‘test10k’, -‘test50k’, or ‘nist’ for respectively the mnist compatible -training set, the 60k qmnist testing set, the 10k qmnist -examples that match the mnist testing set, the 50k -remaining qmnist testing examples, or all the nist -digits. The default is to select ‘train’ or ‘test’ -according to the compatibility argument ‘train’.

  • -
  • compat (bool,optional) – A boolean that says whether the target -for each example is class number (for compatibility with -the MNIST dataloader) or a torch vector containing the -full qmnist information. Default=True.

  • -
  • download (bool, optional) – If true, downloads the dataset from -the internet and puts it in root directory. If dataset is -already downloaded, it is not downloaded again.

  • -
  • transform (callable, optional) – A function/transform that -takes in an PIL image and returns a transformed -version. E.g, transforms.RandomCrop

  • -
  • target_transform (callable, optional) – A function/transform -that takes in the target and transforms it.

  • -
  • train (bool,optional,compatibility) – When argument ‘what’ is not specified, this boolean decides whether to load the training set or the testing set. Default: True.

  • -
-
-
-
- -
-
-

SBD

-
-
-class torchvision.datasets.SBDataset(root: str, image_set: str = 'train', mode: str = 'boundaries', download: bool = False, transforms: Optional[Callable] = None)[source]
-

Semantic Boundaries Dataset

-

The SBD currently contains annotations from 11355 images taken from the PASCAL VOC 2011 dataset.

-
-

Note

-

Please note that the train and val splits included with this dataset are different from the splits in the PASCAL VOC dataset. In particular some “train” images might be part of VOC2012 val. If you are interested in testing on VOC 2012 val, then use image_set=’train_noval’, which excludes all val images.
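A minimal sketch of that suggestion (the root path is a placeholder):

sbd = torchvision.datasets.SBDataset('path/to/sbd_root', image_set='train_noval', mode='segmentation')
img, target = sbd[0]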

-
-
-

Warning

-

This class needs scipy to load target files from .mat format.

-
-
-
Parameters
-
    -
  • root (string) – Root directory of the Semantic Boundaries Dataset

  • -
  • image_set (string, optional) – Select the image_set to use, train, val or train_noval. -Image set train_noval excludes VOC 2012 val images.

  • -
  • mode (string, optional) – Select target type. Possible values ‘boundaries’ or ‘segmentation’. -In case of ‘boundaries’, the target is an array of shape [num_classes, H, W], -where num_classes=20.

  • -
  • download (bool, optional) – If true, downloads the dataset from the internet and -puts it in root directory. If dataset is already downloaded, it is not -downloaded again.

  • -
  • transforms (callable, optional) – A function/transform that takes input sample and its target as entry -and returns a transformed version. Input sample is PIL image and target is a numpy array -if mode=’boundaries’ or PIL image if mode=’segmentation’.

  • -
-
-
-
-
-__getitem__(index: int)Tuple[Any, Any][source]
-
-
Parameters
-

index (int) – Index

-
-
Returns
-

Sample and meta data, optionally transformed by the respective transforms.

-
-
Return type
-

(Any)

-
-
-
- -
- -
-
-

SBU

-
-
-class torchvision.datasets.SBU(root: str, transform: Optional[Callable] = None, target_transform: Optional[Callable] = None, download: bool = True)[source]
-

SBU Captioned Photo Dataset.

-
-
Parameters
-
    -
  • root (string) – Root directory of dataset where tarball -SBUCaptionedPhotoDataset.tar.gz exists.

  • -
  • transform (callable, optional) – A function/transform that takes in a PIL image -and returns a transformed version. E.g, transforms.RandomCrop

  • -
  • target_transform (callable, optional) – A function/transform that takes in the -target and transforms it.

  • -
  • download (bool, optional) – If True, downloads the dataset from the internet and -puts it in root directory. If dataset is already downloaded, it is not -downloaded again.

  • -
-
-
-
-
-__getitem__(index: int)Tuple[Any, Any][source]
-
-
Parameters
-

index (int) – Index

-
-
Returns
-

(image, target) where target is a caption for the photo.

-
-
Return type
-

tuple

-
-
-
- -
- -
-
-

SEMEION

-
-
-class torchvision.datasets.SEMEION(root: str, transform: Optional[Callable] = None, target_transform: Optional[Callable] = None, download: bool = True)[source]
-

SEMEION Dataset.

-
-
Parameters
-
    -
  • root (string) – Root directory of dataset where directory -semeion.py exists.

  • -
  • transform (callable, optional) – A function/transform that takes in an PIL image -and returns a transformed version. E.g, transforms.RandomCrop

  • -
  • target_transform (callable, optional) – A function/transform that takes in the -target and transforms it.

  • -
  • download (bool, optional) – If true, downloads the dataset from the internet and -puts it in root directory. If dataset is already downloaded, it is not -downloaded again.

  • -
-
-
-
-
-__getitem__(index: int)Tuple[Any, Any][source]
-
-
Parameters
-

index (int) – Index

-
-
Returns
-

(image, target) where target is index of the target class.

-
-
Return type
-

tuple

-
-
-
- -
- -
-
-

STL10

-
-
-class torchvision.datasets.STL10(root: str, split: str = 'train', folds: Optional[int] = None, transform: Optional[Callable] = None, target_transform: Optional[Callable] = None, download: bool = False)[source]
-

STL10 Dataset.

-
-
Parameters
-
    -
  • root (string) – Root directory of dataset where directory -stl10_binary exists.

  • -
  • split (string) – One of {‘train’, ‘test’, ‘unlabeled’, ‘train+unlabeled’}. -Accordingly dataset is selected.

  • -
  • folds (int, optional) – One of {0-9} or None. -For training, loads one of the 10 pre-defined folds of 1k samples for the -standard evaluation procedure. If no value is passed, loads the 5k samples.

  • -
  • transform (callable, optional) – A function/transform that takes in an PIL image -and returns a transformed version. E.g, transforms.RandomCrop

  • -
  • target_transform (callable, optional) – A function/transform that takes in the -target and transforms it.

  • -
  • download (bool, optional) – If true, downloads the dataset from the internet and -puts it in root directory. If dataset is already downloaded, it is not -downloaded again.

  • -
-
-
-
-
-__getitem__(index: int)Tuple[Any, Any][source]
-
-
Parameters
-

index (int) – Index

-
-
Returns
-

(image, target) where target is index of the target class.

-
-
Return type
-

tuple

-
-
-
- -
- -
-
-

SVHN

-
-
-class torchvision.datasets.SVHN(root: str, split: str = 'train', transform: Optional[Callable] = None, target_transform: Optional[Callable] = None, download: bool = False)[source]
-

SVHN Dataset. Note: The SVHN dataset assigns the label 10 to the digit 0. However, in this Dataset, we assign the label 0 to the digit 0 to be compatible with PyTorch loss functions which expect the class labels to be in the range [0, C-1].
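A small illustration of that label convention (illustrative only, not the loader’s actual code):

raw_labels = [10, 1, 9]              # as stored in the original .mat files, where 10 means digit 0
labels = [l % 10 for l in raw_labels]
print(labels)                        # [0, 1, 9]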

-
-

Warning

-

This class needs scipy to load data from .mat format.

-
-
-
Parameters
-
    -
  • root (string) – Root directory of dataset where directory -SVHN exists.

  • -
  • split (string) – One of {‘train’, ‘test’, ‘extra’}. -Accordingly dataset is selected. ‘extra’ is Extra training set.

  • -
  • transform (callable, optional) – A function/transform that takes in an PIL image -and returns a transformed version. E.g, transforms.RandomCrop

  • -
  • target_transform (callable, optional) – A function/transform that takes in the -target and transforms it.

  • -
  • download (bool, optional) – If true, downloads the dataset from the internet and -puts it in root directory. If dataset is already downloaded, it is not -downloaded again.

  • -
-
-
-
-
-__getitem__(index: int)Tuple[Any, Any][source]
-
-
Parameters
-

index (int) – Index

-
-
Returns
-

(image, target) where target is index of the target class.

-
-
Return type
-

tuple

-
-
-
- -
- -
-
-

UCF101

-
-
-class torchvision.datasets.UCF101(root: str, annotation_path: str, frames_per_clip: int, step_between_clips: int = 1, frame_rate: Optional[int] = None, fold: int = 1, train: bool = True, transform: Optional[Callable] = None, _precomputed_metadata: Optional[Dict[str, Any]] = None, num_workers: int = 1, _video_width: int = 0, _video_height: int = 0, _video_min_dimension: int = 0, _audio_samples: int = 0)[source]
-

UCF101 dataset.

-

UCF101 is an action recognition video dataset. This dataset considers every video as a collection of video clips of fixed size, specified by frames_per_clip, where the step in frames between each clip is given by step_between_clips. The dataset itself can be downloaded from the dataset website; annotations that annotation_path should be pointing to can be downloaded from here <https://www.crcv.ucf.edu/data/UCF101/UCF101TrainTestSplits-RecognitionTask.zip>.

-

To give an example, for 2 videos with 10 and 15 frames respectively, if frames_per_clip=5 and step_between_clips=5, the dataset size will be (2 + 3) = 5, where the first two elements will come from video 1, and the next three elements from video 2. Note that we drop clips which do not have exactly frames_per_clip elements, so not all frames in a video might be present.

-

Internally, it uses a VideoClips object to handle clip creation.

-
-
Parameters
-
    -
  • root (string) – Root directory of the UCF101 Dataset.

  • -
  • annotation_path (str) – path to the folder containing the split files; -see docstring above for download instructions of these files

  • -
  • frames_per_clip (int) – number of frames in a clip.

  • -
  • step_between_clips (int, optional) – number of frames between each clip.

  • -
  • fold (int, optional) – which fold to use. Should be between 1 and 3.

  • -
  • train (bool, optional) – if True, creates a dataset from the train split, -otherwise from the test split.

  • -
  • transform (callable, optional) – A function/transform that takes in a TxHxWxC video -and returns a transformed version.

  • -
-
-
Returns
-

A 3-tuple with the following entries:

-
-
    -
  • video (Tensor[T, H, W, C]): the T video frames

  • -
  • audio(Tensor[K, L]): the audio frames, where K is the number of channels -and L is the number of points

  • -
  • label (int): class of the video clip

  • -
-
-

-
-
Return type
-

tuple

-
-
-
-
-__getitem__(idx: int)Tuple[torch.Tensor, torch.Tensor, int][source]
-
-
Parameters
-

index (int) – Index

-
-
Returns
-

Sample and meta data, optionally transformed by the respective transforms.

-
-
Return type
-

(Any)

-
-
-
- -
- -
-
-

USPS

-
-
-class torchvision.datasets.USPS(root: str, train: bool = True, transform: Optional[Callable] = None, target_transform: Optional[Callable] = None, download: bool = False)[source]
-

USPS Dataset. The data format is: [label [index:value ]*256 n] * num_lines, where label lies in [1, 10]. The value for each pixel lies in [-1, 1]. Here we transform the label into [0, 9] and make pixel values lie in [0, 255].
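A small illustration of those ranges (illustrative only, not the loader’s exact code):

raw_label = 10                          # raw labels lie in [1, 10]
label = raw_label - 1                   # stored targets lie in [0, 9]
raw_pixel = -1.0                        # raw pixel values lie in [-1, 1]
pixel = int((raw_pixel + 1) / 2 * 255)  # rescaled to [0, 255]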

-
-
Parameters
-
    -
  • root (string) – Root directory of dataset to store USPS data files.

  • -
  • train (bool, optional) – If True, creates dataset from usps.bz2, -otherwise from usps.t.bz2.

  • -
  • transform (callable, optional) – A function/transform that takes in an PIL image -and returns a transformed version. E.g, transforms.RandomCrop

  • -
  • target_transform (callable, optional) – A function/transform that takes in the -target and transforms it.

  • -
  • download (bool, optional) – If true, downloads the dataset from the internet and -puts it in root directory. If dataset is already downloaded, it is not -downloaded again.

  • -
-
-
-
-
-__getitem__(index: int)Tuple[Any, Any][source]
-
-
Parameters
-

index (int) – Index

-
-
Returns
-

(image, target) where target is index of the target class.

-
-
Return type
-

tuple

-
-
-
- -
- -
-
-

VOC

-
-
-class torchvision.datasets.VOCSegmentation(root: str, year: str = '2012', image_set: str = 'train', download: bool = False, transform: Optional[Callable] = None, target_transform: Optional[Callable] = None, transforms: Optional[Callable] = None)[source]
-

Pascal VOC Segmentation Dataset.

-
-
Parameters
-
    -
  • root (string) – Root directory of the VOC Dataset.

  • -
  • year (string, optional) – The dataset year, supports years "2007" to "2012".

  • -
  • image_set (string, optional) – Select the image_set to use, "train", "trainval" or "val". If -year=="2007", can also be "test".

  • -
  • download (bool, optional) – If true, downloads the dataset from the internet and -puts it in root directory. If dataset is already downloaded, it is not -downloaded again.

  • -
  • transform (callable, optional) – A function/transform that takes in an PIL image -and returns a transformed version. E.g, transforms.RandomCrop

  • -
  • target_transform (callable, optional) – A function/transform that takes in the -target and transforms it.

  • -
  • transforms (callable, optional) – A function/transform that takes input sample and its target as entry -and returns a transformed version.

  • -
-
-
-
-
-__getitem__(index: int)Tuple[Any, Any][source]
-
-
Parameters
-

index (int) – Index

-
-
Returns
-

(image, target) where target is the image segmentation.

-
-
Return type
-

tuple

-
-
-
- -
- -
-
-class torchvision.datasets.VOCDetection(root: str, year: str = '2012', image_set: str = 'train', download: bool = False, transform: Optional[Callable] = None, target_transform: Optional[Callable] = None, transforms: Optional[Callable] = None)[source]
-

Pascal VOC Detection Dataset.
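A brief usage sketch (download=True fetches the VOC2012 trainval archive into ./data; the nested keys shown simply mirror the VOC XML annotation):

from torchvision import datasets

voc = datasets.VOCDetection(root="./data", year="2012",
                            image_set="train", download=True)
image, target = voc[0]  # image is a PIL image, target is a dict of the XML tree
# e.g. the class name of the first annotated object in the image:
print(target["annotation"]["object"][0]["name"])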

-
-
Parameters
-
    -
  • root (string) – Root directory of the VOC Dataset.

  • -
  • year (string, optional) – The dataset year, supports years "2007" to "2012".

  • -
  • image_set (string, optional) – Select the image_set to use, "train", "trainval" or "val". If -year=="2007", can also be "test".

  • -
  • download (bool, optional) – If true, downloads the dataset from the internet and puts it in root directory. If dataset is already downloaded, it is not downloaded again.

  • -
  • transform (callable, optional) – A function/transform that takes in an PIL image -and returns a transformed version. E.g, transforms.RandomCrop

  • -
  • target_transform (callable, optional) – A function/transform that takes in the target and transforms it.

  • -
  • transforms (callable, optional) – A function/transform that takes input sample and its target as entry -and returns a transformed version.

  • -
-
-
-
-
-__getitem__(index: int)Tuple[Any, Any][source]
-
-
Parameters
-

index (int) – Index

-
-
Returns
-

(image, target) where target is a dictionary of the XML tree.

-
-
Return type
-

tuple

-
-
-
- -
- -
-
-

WIDERFace

-
-
-class torchvision.datasets.WIDERFace(root: str, split: str = 'train', transform: Optional[Callable] = None, target_transform: Optional[Callable] = None, download: bool = False)[source]
-

WIDERFace Dataset.
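A short usage sketch (assuming the archives can be downloaded into ./data; note that the annotation dict is only available for the train and val splits):

from torchvision import datasets

widerface = datasets.WIDERFace(root="./data", split="train", download=True)
image, target = widerface[0]
# target is a dict of per-face annotations; it is None for the test split.
print(type(image), None if target is None else sorted(target.keys()))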

-
-
Parameters
-
    -
  • root (string) –

    Root directory where images and annotations are downloaded to. -Expects the following folder structure if download=False:

    -
    <root>
    -    └── widerface
    -        ├── wider_face_split ('wider_face_split.zip' if compressed)
    -        ├── WIDER_train ('WIDER_train.zip' if compressed)
    -        ├── WIDER_val ('WIDER_val.zip' if compressed)
    -        └── WIDER_test ('WIDER_test.zip' if compressed)
    -
    -
    -

  • -
  • split (string) – The dataset split to use. One of {train, val, test}. -Defaults to train.

  • -
  • transform (callable, optional) – A function/transform that takes in a PIL image -and returns a transformed version. E.g, transforms.RandomCrop

  • -
  • target_transform (callable, optional) – A function/transform that takes in the -target and transforms it.

  • -
  • download (bool, optional) – If true, downloads the dataset from the internet and -puts it in root directory. If dataset is already downloaded, it is not -downloaded again.

  • -
-
-
-
-
-__getitem__(index: int)Tuple[Any, Any][source]
-
-
Parameters
-

index (int) – Index

-
-
Returns
-

(image, target) where target is a dict of annotations for all faces in the image. -target=None for the test split.

-
-
Return type
-

tuple

-
-
-
- -
- -
-
-

Base classes for custom datasets

-
-
-class torchvision.datasets.DatasetFolder(root: str, loader: Callable[[str], Any], extensions: Optional[Tuple[str, ]] = None, transform: Optional[Callable] = None, target_transform: Optional[Callable] = None, is_valid_file: Optional[Callable[[str], bool]] = None)[source]
-

A generic data loader.

-

This default directory structure can be customized by overriding the -find_classes() method.
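As an illustration, here is a sketch of using DatasetFolder with a custom loader for .npy files (the loader, the extension and the ./data layout are assumptions for the example):

import numpy as np
import torch
from torchvision.datasets import DatasetFolder

def npy_loader(path):
    # Load a NumPy array from disk and convert it to a tensor.
    return torch.from_numpy(np.load(path))

# Expects a layout like ./data/class_a/xxx.npy, ./data/class_b/yyy.npy, ...
dataset = DatasetFolder(root="./data", loader=npy_loader, extensions=(".npy",))
sample, class_index = dataset[0]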

-
-
Parameters
-
    -
  • root (string) – Root directory path.

  • -
  • loader (callable) – A function to load a sample given its path.

  • -
  • extensions (tuple[string]) – A list of allowed extensions. Both extensions and is_valid_file should not be passed.

  • -
  • transform (callable, optional) – A function/transform that takes in -a sample and returns a transformed version. -E.g, transforms.RandomCrop for images.

  • -
  • target_transform (callable, optional) – A function/transform that takes -in the target and transforms it.

  • -
  • is_valid_file – A function that takes the path of a file and checks if the file is a valid file (used to check for corrupt files). Both extensions and is_valid_file should not be passed.

  • -
-
-
-
-
-__getitem__(index: int)Tuple[Any, Any][source]
-
-
Parameters
-

index (int) – Index

-
-
Returns
-

(sample, target) where target is class_index of the target class.

-
-
Return type
-

tuple

-
-
-
- -
-
-find_classes(directory: str)Tuple[List[str], Dict[str, int]][source]
-

Find the class folders in a dataset structured as follows:

-
directory/
-├── class_x
-│   ├── xxx.ext
-│   ├── xxy.ext
-│   └── ...
-│       └── xxz.ext
-└── class_y
-    ├── 123.ext
-    ├── nsdf3.ext
-    └── ...
-    └── asd932_.ext
-
-
-

This method can be overridden to only consider -a subset of classes, or to adapt to a different dataset directory structure.
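For example, a sketch of overriding find_classes to keep only a chosen subset of class folders (the class names here are placeholders):

from typing import Dict, List, Tuple
from torchvision.datasets import ImageFolder

class FilteredImageFolder(ImageFolder):
    def find_classes(self, directory: str) -> Tuple[List[str], Dict[str, int]]:
        classes, _ = super().find_classes(directory)
        # Keep only the folders we care about; samples of other classes are skipped.
        keep = [c for c in classes if c in ("cat", "dog")]
        return keep, {cls: i for i, cls in enumerate(keep)}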

-
-
Parameters
-

directory (str) – Root directory path, corresponding to self.root

-
-
Raises
-

FileNotFoundError – If dir has no class folders.

-
-
Returns
-

List of all classes and dictionary mapping each class to an index.

-
-
Return type
-

(Tuple[List[str], Dict[str, int]])

-
-
-
- -
-
-static make_dataset(directory: str, class_to_idx: Dict[str, int], extensions: Optional[Tuple[str, ]] = None, is_valid_file: Optional[Callable[[str], bool]] = None)List[Tuple[str, int]][source]
-

Generates a list of samples of a form (path_to_sample, class).

-

This can be overridden to e.g. read files from a compressed zip file instead of from the disk.

-
-
Parameters
-
    -
  • directory (str) – root dataset directory, corresponding to self.root.

  • -
  • class_to_idx (Dict[str, int]) – Dictionary mapping class name to class index.

  • -
  • extensions (optional) – A list of allowed extensions. -Either extensions or is_valid_file should be passed. Defaults to None.

  • -
  • is_valid_file (optional) – A function that takes the path of a file and checks if the file is a valid file (used to check for corrupt files). Both extensions and is_valid_file should not be passed. Defaults to None.

  • -
-
-
Raises
-
    -
  • ValueError – In case class_to_idx is empty.

  • -
  • ValueError – In case extensions and is_valid_file are None or both are not None.

  • -
  • FileNotFoundError – In case no valid file was found for any class.

  • -
-
-
Returns
-

samples of a form (path_to_sample, class)

-
-
Return type
-

List[Tuple[str, int]]

-
-
-
- -
- -
-
-class torchvision.datasets.ImageFolder(root: str, transform: Optional[Callable] = None, target_transform: Optional[Callable] = None, loader: Callable[[str], Any] = <function default_loader>, is_valid_file: Optional[Callable[[str], bool]] = None)[source]
-

A generic data loader where the images are arranged in this way by default:

-
root/dog/xxx.png
-root/dog/xxy.png
-root/dog/[...]/xxz.png
-
-root/cat/123.png
-root/cat/nsdf3.png
-root/cat/[...]/asd932_.png
-
-
-

This class inherits from DatasetFolder so -the same methods can be overridden to customize the dataset.
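A minimal usage sketch (the ./data root and the ToTensor transform are illustrative):

from torchvision import datasets, transforms

# Expects a layout like ./data/dog/xxx.png, ./data/cat/123.png, ...
dataset = datasets.ImageFolder(root="./data", transform=transforms.ToTensor())
image, class_index = dataset[0]
print(dataset.classes[class_index])  # folder name of the sample's class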

-
-
Parameters
-
    -
  • root (string) – Root directory path.

  • -
  • transform (callable, optional) – A function/transform that takes in an PIL image -and returns a transformed version. E.g, transforms.RandomCrop

  • -
  • target_transform (callable, optional) – A function/transform that takes in the -target and transforms it.

  • -
  • loader (callable, optional) – A function to load an image given its path.

  • -
  • is_valid_file – A function that takes the path of an Image file and checks if the file is a valid file (used to check for corrupt files)

  • -
-
-
-
-
-__getitem__(index: int)Tuple[Any, Any]
-
-
Parameters
-

index (int) – Index

-
-
Returns
-

(sample, target) where target is class_index of the target class.

-
-
Return type
-

tuple

-
-
-
- -
- -
-
-class torchvision.datasets.VisionDataset(root: str, transforms: Optional[Callable] = None, transform: Optional[Callable] = None, target_transform: Optional[Callable] = None)[source]
-

Base class for making datasets which are compatible with torchvision. It is necessary to override the __getitem__ and __len__ methods.
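A toy subclass, as a sketch of the minimum that needs to be implemented (the dataset itself is purely illustrative):

from torchvision.datasets import VisionDataset

class SquaresDataset(VisionDataset):
    """Toy dataset returning (index, index ** 2) pairs."""

    def __getitem__(self, index: int):
        sample, target = index, index ** 2
        if self.transforms is not None:
            # self.transforms is built by VisionDataset from the constructor arguments.
            sample, target = self.transforms(sample, target)
        return sample, target

    def __len__(self) -> int:
        return 10

ds = SquaresDataset(root=".")
print(len(ds), ds[3])  # 10 (3, 9)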

-
-
Parameters
-
    -
  • root (string) – Root directory of dataset.

  • -
  • transforms (callable, optional) – A function/transforms that takes in -an image and a label and returns the transformed versions of both.

  • -
  • transform (callable, optional) – A function/transform that takes in an PIL image -and returns a transformed version. E.g, transforms.RandomCrop

  • -
  • target_transform (callable, optional) – A function/transform that takes in the -target and transforms it.

  • -
-
-
-
-

Note

-

transforms and the combination of transform and target_transform are mutually exclusive.

-
-
-
-__getitem__(index: int)Any[source]
-
-
Parameters
-

index (int) – Index

-
-
Returns
-

Sample and meta data, optionally transformed by the respective transforms.

-
-
Return type
-

(Any)

-
-
-
- -
- -
-
- - -
- -
- - -
-
- - -
-
- - - - - - - - - - - - - - - - - - - - - - - - - - -
-
-
-
-

-
-
-
- - - - - - - - - -
-
-
-
- - -
-
-
- - -
- - - - - - - - \ No newline at end of file diff --git a/0.11./feature_extraction.html b/0.11./feature_extraction.html deleted file mode 100644 index 79f3c6e6da5..00000000000 --- a/0.11./feature_extraction.html +++ /dev/null @@ -1,946 +0,0 @@ - - - - - - - - - - - - torchvision.models.feature_extraction — Torchvision main documentation - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-
-
- - - - - - - - - - - - - - - - -
- -
    - -
-
- -
-
- - - -
- -
-
- -
-

torchvision.models.feature_extraction

-

Feature extraction utilities let us tap into our models to access intermediate -transformations of our inputs. This could be useful for a variety of -applications in computer vision. Just a few examples are:

-
    -
  • Visualizing feature maps.

  • -
  • Extracting features to compute image descriptors for tasks like facial -recognition, copy-detection, or image retrieval.

  • -
  • Passing selected features to downstream sub-networks for end-to-end training -with a specific task in mind. For example, passing a hierarchy of features -to a Feature Pyramid Network with object detection heads.

  • -
-

Torchvision provides create_feature_extractor() for this purpose. -It works by following roughly these steps:

-
    -
  1. Symbolically tracing the model to get a graphical representation of -how it transforms the input, step by step.

  2. -
  3. Setting the user-selected graph nodes as outputs.

  4. -
  5. Removing all redundant nodes (anything downstream of the output nodes).

  6. -
  7. Generating python code from the resulting graph and bundling that into a -PyTorch module together with the graph itself.

  8. -
-
-

-
-

The torch.fx documentation -provides a more general and detailed explanation of the above procedure and -the inner workings of the symbolic tracing.

-

About Node Names

-

In order to specify which nodes should be output nodes for extracted -features, one should be familiar with the node naming convention used here -(which differs slightly from that used in torch.fx). A node name is -specified as a . separated path walking the module hierarchy from top level -module down to leaf operation or leaf module. For instance "layer4.2.relu" -in ResNet-50 represents the output of the ReLU of the 2nd block of the 4th -layer of the ResNet module. Here are some finer points to keep in mind:

-
    -
  • When specifying node names for create_feature_extractor(), you may -provide a truncated version of a node name as a shortcut. To see how this -works, try creating a ResNet-50 model and printing the node names with -train_nodes, _ = get_graph_node_names(model) print(train_nodes) and -observe that the last node pertaining to layer4 is -"layer4.2.relu_2". One may specify "layer4.2.relu_2" as the return -node, or just "layer4" as this, by convention, refers to the last node -(in order of execution) of layer4.

  • -
  • If a certain module or operation is repeated more than once, node names get -an additional _{int} postfix to disambiguate. For instance, maybe the -addition (+) operation is used three times in the same forward -method. Then there would be "path.to.module.add", -"path.to.module.add_1", "path.to.module.add_2". The counter is -maintained within the scope of the direct parent. So in ResNet-50 there is -a "layer4.1.add" and a "layer4.2.add". Because the addition -operations reside in different blocks, there is no need for a postfix to -disambiguate.

  • -
-

An Example

-

Here is an example of how we might extract features for MaskRCNN:

-
import torch
-from torchvision.models import resnet50
-from torchvision.models.feature_extraction import get_graph_node_names
-from torchvision.models.feature_extraction import create_feature_extractor
-from torchvision.models.detection.mask_rcnn import MaskRCNN
-from torchvision.models.detection.backbone_utils import LastLevelMaxPool
-from torchvision.ops.feature_pyramid_network import FeaturePyramidNetwork
-
-
-# To assist you in designing the feature extractor you may want to print out
-# the available nodes for resnet50.
-m = resnet50()
-train_nodes, eval_nodes = get_graph_node_names(resnet50())
-
-# The lists returned, are the names of all the graph nodes (in order of
-# execution) for the input model traced in train mode and in eval mode
-# respectively. You'll find that `train_nodes` and `eval_nodes` are the same
-# for this example. But if the model contains control flow that's dependent
-# on the training mode, they may be different.
-
-# To specify the nodes you want to extract, you could select the final node
-# that appears in each of the main layers:
-return_nodes = {
-    # node_name: user-specified key for output dict
-    'layer1.2.relu_2': 'layer1',
-    'layer2.3.relu_2': 'layer2',
-    'layer3.5.relu_2': 'layer3',
-    'layer4.2.relu_2': 'layer4',
-}
-
-# But `create_feature_extractor` can also accept truncated node specifications
-# like "layer1", as it will just pick the last node that's a descendent of
-# of the specification. (Tip: be careful with this, especially when a layer
-# has multiple outputs. It's not always guaranteed that the last operation
-# performed is the one that corresponds to the output you desire. You should
-# consult the source code for the input model to confirm.)
-return_nodes = {
-    'layer1': 'layer1',
-    'layer2': 'layer2',
-    'layer3': 'layer3',
-    'layer4': 'layer4',
-}
-
-# Now you can build the feature extractor. This returns a module whose forward
-# method returns a dictionary like:
-# {
-#     'layer1': output of layer 1,
-#     'layer2': output of layer 2,
-#     'layer3': output of layer 3,
-#     'layer4': output of layer 4,
-# }
-create_feature_extractor(m, return_nodes=return_nodes)
-
-# Let's put all that together to wrap resnet50 with MaskRCNN
-
-# MaskRCNN requires a backbone with an attached FPN
-class Resnet50WithFPN(torch.nn.Module):
-    def __init__(self):
-        super(Resnet50WithFPN, self).__init__()
-        # Get a resnet50 backbone
-        m = resnet50()
-        # Extract 4 main layers (note: MaskRCNN needs this particular name
-        # mapping for return nodes)
-        self.body = create_feature_extractor(
-            m, return_nodes={f'layer{k}': str(v)
-                             for v, k in enumerate([1, 2, 3, 4])})
-        # Dry run to get number of channels for FPN
-        inp = torch.randn(2, 3, 224, 224)
-        with torch.no_grad():
-            out = self.body(inp)
-        in_channels_list = [o.shape[1] for o in out.values()]
-        # Build FPN
-        self.out_channels = 256
-        self.fpn = FeaturePyramidNetwork(
-            in_channels_list, out_channels=self.out_channels,
-            extra_blocks=LastLevelMaxPool())
-
-    def forward(self, x):
-        x = self.body(x)
-        x = self.fpn(x)
-        return x
-
-
-# Now we can build our model!
-model = MaskRCNN(Resnet50WithFPN(), num_classes=91).eval()
-
-
-
-

API Reference

-
-
-torchvision.models.feature_extraction.create_feature_extractor(model: torch.nn.modules.module.Module, return_nodes: Optional[Union[List[str], Dict[str, str]]] = None, train_return_nodes: Optional[Union[List[str], Dict[str, str]]] = None, eval_return_nodes: Optional[Union[List[str], Dict[str, str]]] = None, tracer_kwargs: Dict = {}, suppress_diff_warning: bool = False)torch.fx.graph_module.GraphModule[source]
-

Creates a new graph module that returns intermediate nodes from a given model as a dictionary with user-specified keys as strings, and the requested outputs as values. This is achieved by re-writing the computation graph of the model via FX to return the desired nodes as outputs. All unused nodes are removed, together with their corresponding parameters.

-

Desired output nodes must be specified as a . separated -path walking the module hierarchy from top level module down to leaf -operation or leaf module. For more details on the node naming conventions -used here, please see the relevant subheading -in the documentation.

-

Not all models will be FX traceable, although with some massaging they can -be made to cooperate. Here’s a (not exhaustive) list of tips:

-
-
    -
  • If you don’t need to trace through a particular, problematic -sub-module, turn it into a “leaf module” by passing a list of -leaf_modules as one of the tracer_kwargs (see example below). -It will not be traced through, but rather, the resulting graph will -hold a reference to that module’s forward method.

  • -
  • Likewise, you may turn functions into leaf functions by passing a -list of autowrap_functions as one of the tracer_kwargs (see -example below).

  • -
  • Some inbuilt Python functions can be problematic. For instance, -int will raise an error during tracing. You may wrap them in your -own function and then pass that in autowrap_functions as one of -the tracer_kwargs.

  • -
-
-

For further information on FX see the -torch.fx documentation.

-
-
Parameters
-
    -
  • model (nn.Module) – model on which we will extract the features

  • -
  • return_nodes (list or dict, optional) – either a List or a Dict -containing the names (or partial names - see note above) -of the nodes for which the activations will be returned. If it is -a Dict, the keys are the node names, and the values -are the user-specified keys for the graph module’s returned -dictionary. If it is a List, it is treated as a Dict mapping -node specification strings directly to output names. In the case -that train_return_nodes and eval_return_nodes are specified, -this should not be specified.

  • -
  • train_return_nodes (list or dict, optional) – similar to -return_nodes. This can be used if the return nodes -for train mode are different than those from eval mode. -If this is specified, eval_return_nodes must also be specified, -and return_nodes should not be specified.

  • -
  • eval_return_nodes (list or dict, optional) – similar to -return_nodes. This can be used if the return nodes -for train mode are different than those from eval mode. -If this is specified, train_return_nodes must also be specified, -and return_nodes should not be specified.

  • -
  • tracer_kwargs (dict, optional) – a dictionary of keyword arguments for NodePathTracer (which passes them onto its parent class torch.fx.Tracer).

  • -
  • suppress_diff_warning (bool, optional) – whether to suppress a warning -when there are discrepancies between the train and eval version of -the graph. Defaults to False.

  • -
-
-
-

Examples:

-
>>> # Feature extraction with resnet
->>> model = torchvision.models.resnet18()
->>> # extract layer1 and layer3, giving as names `feat1` and `feat2`
->>> model = create_feature_extractor(
->>>     model, {'layer1': 'feat1', 'layer3': 'feat2'})
->>> out = model(torch.rand(1, 3, 224, 224))
->>> print([(k, v.shape) for k, v in out.items()])
->>>     [('feat1', torch.Size([1, 64, 56, 56])),
->>>      ('feat2', torch.Size([1, 256, 14, 14]))]
-
->>> # Specifying leaf modules and leaf functions
->>> def leaf_function(x):
->>>     # This would raise a TypeError if traced through
->>>     return int(x)
->>>
->>> class LeafModule(torch.nn.Module):
->>>     def forward(self, x):
->>>         # This would raise a TypeError if traced through
->>>         int(x.shape[0])
->>>         return torch.nn.functional.relu(x + 4)
->>>
->>> class MyModule(torch.nn.Module):
->>>     def __init__(self):
->>>         super().__init__()
->>>         self.conv = torch.nn.Conv2d(3, 1, 3)
->>>         self.leaf_module = LeafModule()
->>>
->>>     def forward(self, x):
->>>         leaf_function(x.shape[0])
->>>         x = self.conv(x)
->>>         return self.leaf_module(x)
->>>
->>> model = create_feature_extractor(
->>>     MyModule(), return_nodes=['leaf_module'],
->>>     tracer_kwargs={'leaf_modules': [LeafModule],
->>>                    'autowrap_functions': [leaf_function]})
-
-
-
- -
-
-torchvision.models.feature_extraction.get_graph_node_names(model: torch.nn.modules.module.Module, tracer_kwargs: Dict = {}, suppress_diff_warning: bool = False)Tuple[List[str], List[str]][source]
-

Dev utility to return node names in order of execution. See note on node -names under create_feature_extractor(). Useful for seeing which node -names are available for feature extraction. There are two reasons that -node names can’t easily be read directly from the code for a model:

-
-
    -
  1. Not all submodules are traced through. Modules from torch.nn all -fall within this category.

  2. -
  3. Nodes representing the repeated application of the same operation -or leaf module get a _{counter} postfix.

  4. -
-
-

The model is traced twice: once in train mode, and once in eval mode. Both -sets of node names are returned.

-

For more details on the node naming conventions used here, please see the -relevant subheading in the -documentation.

-
-
Parameters
-
    -
  • model (nn.Module) – model for which we’d like to print node names

  • -
  • tracer_kwargs (dict, optional) –

    a dictionary of keyword arguments for NodePathTracer (they are eventually passed onto torch.fx.Tracer).

    -

  • -
  • suppress_diff_warning (bool, optional) – whether to suppress a warning -when there are discrepancies between the train and eval version of -the graph. Defaults to False.

  • -
-
-
Returns
-

a list of node names from tracing the model in -train mode, and another from tracing the model in eval mode.

-
-
Return type
-

tuple(list, list)

-
-
-

Examples:

-
>>> model = torchvision.models.resnet18()
->>> train_nodes, eval_nodes = get_graph_node_names(model)
-
-
-
- -
-
- - -
- -
- - -
-
- - -
-
- - - - - - - - - - - - - - - - - - - - - - - - - - -
-
-
-
-

-
-
-
- - - - - - - - - -
-
-
-
- - -
-
-
- - -
- - - - - - - - \ No newline at end of file diff --git a/0.11./genindex.html b/0.11./genindex.html deleted file mode 100644 index bf6b562a62c..00000000000 --- a/0.11./genindex.html +++ /dev/null @@ -1,1490 +0,0 @@ - - - - - - - - - - - - Index — Torchvision main documentation - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-
-
- - - - - - - - - - - - - - - - -
- - - - -
-
- -
-
-
-
- - - - - - - - - -
-
-
-
- - -
-
-
- - -
- - - - - - - - \ No newline at end of file diff --git a/0.11./index.html b/0.11./index.html deleted file mode 100644 index 05705ca8adc..00000000000 --- a/0.11./index.html +++ /dev/null @@ -1,813 +0,0 @@ - - - - - - - - - - - - torchvision — Torchvision main documentation - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-
-
- - - - - - - - - - - - - - - - -
- - - - -
-
- -
-
- -
-
- - - -
- -
-
- -
-

torchvision

-

This library is part of the PyTorch project. PyTorch is an open source -machine learning framework.

-

Features described in this documentation are classified by release status:

-
-

Stable: These features will be maintained long-term and there should generally -be no major performance limitations or gaps in documentation. -We also expect to maintain backwards compatibility (although -breaking changes can happen and notice will be given one release ahead -of time).

-

Beta: Features are tagged as Beta because the API may change based on -user feedback, because the performance needs to improve, or because -coverage across operators is not yet complete. For Beta features, we are -committing to seeing the feature through to the Stable classification. -We are not, however, committing to backwards compatibility.

-

Prototype: These features are typically not available as part of -binary distributions like PyPI or Conda, except sometimes behind run-time -flags, and are at an early stage for feedback and testing.

-
-

The torchvision package consists of popular datasets, model -architectures, and common image transformations for computer vision.

- -
-

Examples and training references

- -
-
-
-torchvision.get_image_backend()[source]
-

Gets the name of the package used to load images

-
- -
-
-torchvision.get_video_backend()[source]
-

Returns the currently active video backend used to decode videos.

-
-
Returns
-

Name of the video backend. one of {‘pyav’, ‘video_reader’}.

-
-
Return type
-

str

-
-
-
- -
-
-torchvision.set_image_backend(backend)[source]
-

Specifies the package used to load images.

-
-
Parameters
-

backend (string) – Name of the image backend. one of {‘PIL’, ‘accimage’}. -The accimage package uses the Intel IPP library. It is -generally faster than PIL, but does not support as many operations.

-
-
-
- -
-
-torchvision.set_video_backend(backend)[source]
-

Specifies the package used to decode videos.

-
-
Parameters
-

backend (string) – Name of the video backend. one of {‘pyav’, ‘video_reader’}. -The pyav package uses the 3rd party PyAv library. It is a Pythonic -binding for the FFmpeg libraries. -The video_reader package includes a native C++ implementation on -top of FFMPEG libraries, and a python API of TorchScript custom operator. -It generally decodes faster than pyav, but is perhaps less robust.

-
-
-
-

Note

-

Building with FFMPEG is disabled by default in the latest main. If you want to use the ‘video_reader’ -backend, please compile torchvision from source.

-
-
- - -
-

Indices

- -
-
- - -
- -
-
- - - - - - -
- - - -
-

- - -
- -
-
- -
-
-
- - -
-
-
-
-
- - - - - - - - - - - - - - - - - - - - - - - - - - -
-
-
-
-

-
-
-
- - - - - - - - - -
-
-
-
- - -
-
-
- - -
- - - - - - - - \ No newline at end of file diff --git a/0.11./io.html b/0.11./io.html deleted file mode 100644 index 2db05a84452..00000000000 --- a/0.11./io.html +++ /dev/null @@ -1,1140 +0,0 @@ - - - - - - - - - - - - torchvision.io — Torchvision main documentation - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-
-
- - - - - - - - - - - - - - - - -
- - - - -
-
- -
-
- -
-
- - - -
- -
-
- -
-

torchvision.io

-

The torchvision.io package provides functions for performing IO -operations. They are currently specific to reading and writing video and -images.

-
-

Video

-
-
-torchvision.io.read_video(filename: str, start_pts: Union[float, fractions.Fraction] = 0, end_pts: Optional[Union[float, fractions.Fraction]] = None, pts_unit: str = 'pts')Tuple[torch.Tensor, torch.Tensor, Dict[str, Any]][source]
-

Reads a video from a file, returning both the video frames as well as -the audio frames
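A short usage sketch (the file name is a placeholder; reading requires the PyAV backend or a torchvision build with video support):

import torchvision

# Read the segment between 0 and 2 seconds of a local video file.
vframes, aframes, info = torchvision.io.read_video(
    "some_video.mp4", start_pts=0, end_pts=2, pts_unit="sec")
print(vframes.shape, aframes.shape, info)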

-
-
Parameters
-
    -
  • filename (str) – path to the video file

  • -
  • start_pts (int if pts_unit = 'pts', float / Fraction if pts_unit = 'sec', optional) – The start presentation time of the video

  • -
  • end_pts (int if pts_unit = 'pts', float / Fraction if pts_unit = 'sec', optional) – The end presentation time

  • -
  • pts_unit (str, optional) – unit in which start_pts and end_pts values will be interpreted, -either ‘pts’ or ‘sec’. Defaults to ‘pts’.

  • -
-
-
Returns
-

the T video frames; aframes (Tensor[K, L]): the audio frames, where K is the number of channels and L is the number of points; info (Dict): metadata for the video and audio, which can contain the fields video_fps (float) and audio_fps (int)

-
-
Return type
-

vframes (Tensor[T, H, W, C])

-
-
-
- -
-
-torchvision.io.read_video_timestamps(filename: str, pts_unit: str = 'pts')Tuple[List[int], Optional[float]][source]
-

List the video frames timestamps.

-

Note that the function decodes the whole video frame-by-frame.

-
-
Parameters
-
    -
  • filename (str) – path to the video file

  • -
  • pts_unit (str, optional) – unit in which timestamp values will be returned -either ‘pts’ or ‘sec’. Defaults to ‘pts’.

  • -
-
-
Returns
-

presentation timestamps for each one of the frames in the video. -video_fps (float, optional): the frame rate for the video

-
-
Return type
-

pts (List[int] if pts_unit = ‘pts’, List[Fraction] if pts_unit = ‘sec’)

-
-
-
- -
-
-torchvision.io.write_video(filename: str, video_array: torch.Tensor, fps: float, video_codec: str = 'libx264', options: Optional[Dict[str, Any]] = None, audio_array: Optional[torch.Tensor] = None, audio_fps: Optional[float] = None, audio_codec: Optional[str] = None, audio_options: Optional[Dict[str, Any]] = None)None[source]
-

Writes a 4d tensor in [T, H, W, C] format in a video file
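A minimal sketch of writing a clip from an in-memory tensor (the output file name and frame contents are arbitrary; writing requires the PyAV package):

import torch
import torchvision

# 16 random RGB frames of size 64x64, as a uint8 tensor in [T, H, W, C] layout.
frames = torch.randint(0, 256, (16, 64, 64, 3), dtype=torch.uint8)
torchvision.io.write_video("out.mp4", frames, fps=8)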

-
-
Parameters
-
    -
  • filename (str) – path where the video will be saved

  • -
  • video_array (Tensor[T, H, W, C]) – tensor containing the individual frames, -as a uint8 tensor in [T, H, W, C] format

  • -
  • fps (Number) – video frames per second

  • -
  • video_codec (str) – the name of the video codec, i.e. “libx264”, “h264”, etc.

  • -
  • options (Dict) – dictionary containing options to be passed into the PyAV video stream

  • -
  • audio_array (Tensor[C, N]) – tensor containing the audio, where C is the number of channels -and N is the number of samples

  • -
  • audio_fps (Number) – audio sample rate, typically 44100 or 48000

  • -
  • audio_codec (str) – the name of the audio codec, i.e. “mp3”, “aac”, etc.

  • -
  • audio_options (Dict) – dictionary containing options to be passed into the PyAV audio stream

  • -
-
-
-
- -
-
-

Fine-grained video API

-

In addition to the read_video function, we provide a high-performance -lower-level API for more fine-grained control compared to the read_video function. -It does all this whilst fully supporting torchscript.

-
-
-class torchvision.io.VideoReader(path: str, stream: str = 'video')[source]
-

Fine-grained video-reading API. -Supports frame-by-frame reading of various streams from a single video -container.

-

Example

-

The following example creates a VideoReader object, seeks into the 2s point, and returns a single frame:

-
import torchvision
-video_path = "path_to_a_test_video"
-reader = torchvision.io.VideoReader(video_path, "video")
-reader.seek(2.0)
-frame = next(reader)
-
-
-

VideoReader implements the iterable API, which makes it suitable for use in conjunction with itertools for more advanced reading. As such, we can use a VideoReader instance inside for loops:

-
reader.seek(2)
-for frame in reader:
-    frames.append(frame['data'])
-# additionally, `seek` implements a fluent API, so we can do
-for frame in reader.seek(2):
-    frames.append(frame['data'])
-
-
-

With itertools, we can read all frames between 2 and 5 seconds with the -following code:

-
for frame in itertools.takewhile(lambda x: x['pts'] <= 5, reader.seek(2)):
-    frames.append(frame['data'])
-
-
-

and similarly, reading 10 frames after the 2s timestamp can be achieved -as follows:

-
for frame in itertools.islice(reader.seek(2), 10):
-    frames.append(frame['data'])
-
-
-
-

Note

-

Each stream descriptor consists of two parts: stream type (e.g. ‘video’) and a unique stream id (which are determined by the video encoding). In this way, if the video container contains multiple streams of the same type, users can access the one they want. If only stream type is passed, the decoder auto-detects the first stream of that type.

-
-
-
Parameters
-
    -
  • path (string) – Path to the video file in supported format

  • -
  • stream (string, optional) – descriptor of the required stream, followed by the stream id, -in the format {stream_type}:{stream_id}. Defaults to "video:0". -Currently available options include ['video', 'audio']

  • -
-
-
-

Examples using VideoReader:

-
-
-
-__next__()Dict[str, Any][source]
-

Decodes and returns the next frame of the current stream. -Frames are encoded as a dict with mandatory -data and pts fields, where data is a tensor, and pts is a -presentation timestamp of the frame expressed in seconds -as a float.

-
-
Returns
-

a dictionary and containing decoded frame (data) -and corresponding timestamp (pts) in seconds

-
-
Return type
-

(dict)

-
-
-
- -
-
-get_metadata()Dict[str, Any][source]
-

Returns video metadata

-
-
Returns
-

dictionary containing duration and frame rate for every stream

-
-
Return type
-

(dict)

-
-
-
- -
-
-seek(time_s: float)torchvision.io.VideoReader[source]
-

Seek within current stream.

-
-
Parameters
-

time_s (float) – seek time in seconds

-
-
-
-

Note

-

The current implementation performs a so-called precise seek. This means that, following a seek, a call to next() will return the frame with the exact timestamp if it exists, or the first frame with a timestamp larger than time_s.

-
-
- -
-
-set_current_stream(stream: str)bool[source]
-

Set current stream. -Explicitly define the stream we are operating on.

-
-
Parameters
-

stream (string) – descriptor of the required stream. Defaults to "video:0". Currently available stream types include ['video', 'audio']. Each descriptor consists of two parts: stream type (e.g. ‘video’) and a unique stream id (which are determined by video encoding). In this way, if the video container contains multiple streams of the same type, users can access the one they want. If only stream type is passed, the decoder auto-detects the first stream of that type and returns it.

-
-
Returns
-

True on success, False otherwise

-
-
Return type
-

(bool)

-
-
-
- -
- -

Example of inspecting a video:

-
import torchvision
-video_path = "path to a test video"
-# Constructor allocates memory and a threaded decoder
-# instance per video. At the moment it takes two arguments:
-# path to the video file, and a wanted stream.
-reader = torchvision.io.VideoReader(video_path, "video")
-
-# The information about the video can be retrieved using the
-# `get_metadata()` method. It returns a dictionary for every stream, with
-# duration and other relevant metadata (often frame rate)
-reader_md = reader.get_metadata()
-
-# metadata is structured as a dict of dicts with following structure
-# {"stream_type": {"attribute": [attribute per stream]}}
-#
-# following would print out the list of frame rates for every present video stream
-print(reader_md["video"]["fps"])
-
-# we explicitly select the stream we would like to operate on. In
-# the constructor we select a default video stream, but
-# in practice, we can set whichever stream we would like
-video.set_current_stream("video:0")
-
-
-
-
-

Image

-
-
-class torchvision.io.ImageReadMode(value)[source]
-

Support for various modes while reading images.

-

Use ImageReadMode.UNCHANGED for loading the image as-is, -ImageReadMode.GRAY for converting to grayscale, -ImageReadMode.GRAY_ALPHA for grayscale with transparency, -ImageReadMode.RGB for RGB and ImageReadMode.RGB_ALPHA for -RGB with transparency.

-
- -
-
-torchvision.io.read_image(path: str, mode: torchvision.io.image.ImageReadMode = <ImageReadMode.UNCHANGED: 0>)torch.Tensor[source]
-

Reads a JPEG or PNG image into a 3 dimensional RGB Tensor. -Optionally converts the image to the desired format. -The values of the output tensor are uint8 between 0 and 255.
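A short usage sketch (the image path is a placeholder):

from torchvision.io import read_image, ImageReadMode

# Decode a local image file directly into a uint8 tensor in CHW layout.
img = read_image("some_image.png", mode=ImageReadMode.RGB)
print(img.shape, img.dtype)  # torch.Size([3, H, W]), torch.uint8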

-
-
Parameters
-
    -
  • path (str) – path of the JPEG or PNG image.

  • -
  • mode (ImageReadMode) – the read mode used for optionally converting the image. -Default: ImageReadMode.UNCHANGED. -See ImageReadMode class for more information on various -available modes.

  • -
-
-
Returns
-

output (Tensor[image_channels, image_height, image_width])

-
-
-

Examples using read_image:

-
- -
-
-torchvision.io.decode_image(input: torch.Tensor, mode: torchvision.io.image.ImageReadMode = <ImageReadMode.UNCHANGED: 0>)torch.Tensor[source]
-

Detects whether an image is a JPEG or PNG and performs the appropriate -operation to decode the image into a 3 dimensional RGB Tensor.

-

Optionally converts the image to the desired format. -The values of the output tensor are uint8 between 0 and 255.

-
-
Parameters
-
    -
  • input (Tensor) – a one dimensional uint8 tensor containing the raw bytes of the -PNG or JPEG image.

  • -
  • mode (ImageReadMode) – the read mode used for optionally converting the image. -Default: ImageReadMode.UNCHANGED. -See ImageReadMode class for more information on various -available modes.

  • -
-
-
Returns
-

output (Tensor[image_channels, image_height, image_width])

-
-
-
- -
-
-torchvision.io.encode_jpeg(input: torch.Tensor, quality: int = 75)torch.Tensor[source]
-

Takes an input tensor in CHW layout and returns a buffer with the contents -of its corresponding JPEG file.

-
-
Parameters
-
    -
  • input (Tensor[channels, image_height, image_width])) – int8 image tensor of -c channels, where c must be 1 or 3.

  • -
  • quality (int) – Quality of the resulting JPEG file, it must be a number between -1 and 100. Default: 75

  • -
-
-
Returns
-

-
A one dimensional int8 tensor that contains the raw bytes of the

JPEG file.

-
-
-

-
-
Return type
-

output (Tensor[1])

-
-
-
- -
-
-torchvision.io.decode_jpeg(input: torch.Tensor, mode: torchvision.io.image.ImageReadMode = <ImageReadMode.UNCHANGED: 0>, device: str = 'cpu')torch.Tensor[source]
-

Decodes a JPEG image into a 3 dimensional RGB Tensor. -Optionally converts the image to the desired format. -The values of the output tensor are uint8 between 0 and 255.

-
-
Parameters
-
    -
  • input (Tensor[1]) – a one dimensional uint8 tensor containing -the raw bytes of the JPEG image. This tensor must be on CPU, -regardless of the device parameter.

  • -
  • mode (ImageReadMode) – the read mode used for optionally -converting the image. Default: ImageReadMode.UNCHANGED. -See ImageReadMode class for more information on various -available modes.

  • -
  • device (str or torch.device) – The device on which the decoded image will -be stored. If a cuda device is specified, the image will be decoded -with nvjpeg. This is only -supported for CUDA version >= 10.1

  • -
-
-
Returns
-

output (Tensor[image_channels, image_height, image_width])

-
-
-
- -
-
-torchvision.io.write_jpeg(input: torch.Tensor, filename: str, quality: int = 75)[source]
-

Takes an input tensor in CHW layout and saves it in a JPEG file.

-
-
Parameters
-
    -
  • input (Tensor[channels, image_height, image_width]) – int8 image tensor of c -channels, where c must be 1 or 3.

  • -
  • filename (str) – Path to save the image.

  • -
  • quality (int) – Quality of the resulting JPEG file, it must be a number -between 1 and 100. Default: 75

  • -
-
-
-
- -
-
-torchvision.io.encode_png(input: torch.Tensor, compression_level: int = 6)torch.Tensor[source]
-

Takes an input tensor in CHW layout and returns a buffer with the contents -of its corresponding PNG file.

-
-
Parameters
-
    -
  • input (Tensor[channels, image_height, image_width]) – int8 image tensor of c channels, where c must be 3 or 1.

  • -
  • compression_level (int) – Compression factor for the resulting file, it must be a number -between 0 and 9. Default: 6

  • -
-
-
Returns
-

-
A one dimensional int8 tensor that contains the raw bytes of the

PNG file.

-
-
-

-
-
Return type
-

Tensor[1]

-
-
-
- -
-
-torchvision.io.decode_png(input: torch.Tensor, mode: torchvision.io.image.ImageReadMode = <ImageReadMode.UNCHANGED: 0>)torch.Tensor[source]
-

Decodes a PNG image into a 3 dimensional RGB Tensor. -Optionally converts the image to the desired format. -The values of the output tensor are uint8 between 0 and 255.

-
-
Parameters
-
    -
  • input (Tensor[1]) – a one dimensional uint8 tensor containing -the raw bytes of the PNG image.

  • -
  • mode (ImageReadMode) – the read mode used for optionally -converting the image. Default: ImageReadMode.UNCHANGED. -See ImageReadMode class for more information on various -available modes.

  • -
-
-
Returns
-

output (Tensor[image_channels, image_height, image_width])

-
-
-
- -
-
-torchvision.io.write_png(input: torch.Tensor, filename: str, compression_level: int = 6)[source]
-

Takes an input tensor in CHW layout (or HW in the case of grayscale images) -and saves it in a PNG file.

-
-
Parameters
-
    -
  • input (Tensor[channels, image_height, image_width]) – int8 image tensor of -c channels, where c must be 1 or 3.

  • -
  • filename (str) – Path to save the image.

  • -
  • compression_level (int) – Compression factor for the resulting file, it must be a number -between 0 and 9. Default: 6

  • -
-
-
-
- -
-
-torchvision.io.read_file(path: str)torch.Tensor[source]
-

Reads and outputs the bytes contents of a file as a uint8 Tensor -with one dimension.

-
-
Parameters
-

path (str) – the path to the file to be read

-
-
Returns
-

data (Tensor)

-
-
-
- -
-
-torchvision.io.write_file(filename: str, data: torch.Tensor)None[source]
-

Writes the contents of a uint8 tensor with one dimension to a -file.

-
-
Parameters
-
    -
  • filename (str) – the path to the file to be written

  • -
  • data (Tensor) – the contents to be written to the output file

  • -
-
-
-
- -
-
- - -
- -
- - -
-
- - -
-
- - - - - - - - - - - - - - - - - - - - - - - - - - -
-
-
-
-

-
-
-
- - - - - - - - - -
-
-
-
- - -
-
-
- - -
- - - - - - - - \ No newline at end of file diff --git a/0.11./models.html b/0.11./models.html deleted file mode 100644 index 0ed958ab9de..00000000000 --- a/0.11./models.html +++ /dev/null @@ -1,3138 +0,0 @@ - - - - - - - - - - - - torchvision.models — Torchvision main documentation - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-
-
- - - - - - - - - - - - - - - - -
- -
    - -
-
- -
-
- - - -
- -
-
- -
-

torchvision.models

-

The models subpackage contains definitions of models for addressing -different tasks, including: image classification, pixelwise semantic -segmentation, object detection, instance segmentation, person -keypoint detection and video classification.

-
-

Note

-

Backward compatibility is guaranteed for loading a serialized state_dict into a model created with an older PyTorch version. In contrast, loading entire saved models or serialized ScriptModules (serialized using older versions of PyTorch) may not preserve the historic behaviour. Refer to the following documentation

-
-
-

Classification

-

The models subpackage contains definitions for the following model -architectures for image classification:

- -

You can construct a model with random weights by calling its constructor:

-
import torchvision.models as models
-resnet18 = models.resnet18()
-alexnet = models.alexnet()
-vgg16 = models.vgg16()
-squeezenet = models.squeezenet1_0()
-densenet = models.densenet161()
-inception = models.inception_v3()
-googlenet = models.googlenet()
-shufflenet = models.shufflenet_v2_x1_0()
-mobilenet_v2 = models.mobilenet_v2()
-mobilenet_v3_large = models.mobilenet_v3_large()
-mobilenet_v3_small = models.mobilenet_v3_small()
-resnext50_32x4d = models.resnext50_32x4d()
-wide_resnet50_2 = models.wide_resnet50_2()
-mnasnet = models.mnasnet1_0()
-efficientnet_b0 = models.efficientnet_b0()
-efficientnet_b1 = models.efficientnet_b1()
-efficientnet_b2 = models.efficientnet_b2()
-efficientnet_b3 = models.efficientnet_b3()
-efficientnet_b4 = models.efficientnet_b4()
-efficientnet_b5 = models.efficientnet_b5()
-efficientnet_b6 = models.efficientnet_b6()
-efficientnet_b7 = models.efficientnet_b7()
-regnet_y_400mf = models.regnet_y_400mf()
-regnet_y_800mf = models.regnet_y_800mf()
-regnet_y_1_6gf = models.regnet_y_1_6gf()
-regnet_y_3_2gf = models.regnet_y_3_2gf()
-regnet_y_8gf = models.regnet_y_8gf()
-regnet_y_16gf = models.regnet_y_16gf()
-regnet_y_32gf = models.regnet_y_32gf()
-regnet_x_400mf = models.regnet_x_400mf()
-regnet_x_800mf = models.regnet_x_800mf()
-regnet_x_1_6gf = models.regnet_x_1_6gf()
-regnet_x_3_2gf = models.regnet_x_3_2gf()
-regnet_x_8gf = models.regnet_x_8gf()
-regnet_x_16gf = models.regnet_x_16gf()
-regnet_x_32gf = models.regnet_x_32gf()
-
-
-

We provide pre-trained models, using the PyTorch torch.utils.model_zoo. -These can be constructed by passing pretrained=True:

-
import torchvision.models as models
-resnet18 = models.resnet18(pretrained=True)
-alexnet = models.alexnet(pretrained=True)
-squeezenet = models.squeezenet1_0(pretrained=True)
-vgg16 = models.vgg16(pretrained=True)
-densenet = models.densenet161(pretrained=True)
-inception = models.inception_v3(pretrained=True)
-googlenet = models.googlenet(pretrained=True)
-shufflenet = models.shufflenet_v2_x1_0(pretrained=True)
-mobilenet_v2 = models.mobilenet_v2(pretrained=True)
-mobilenet_v3_large = models.mobilenet_v3_large(pretrained=True)
-mobilenet_v3_small = models.mobilenet_v3_small(pretrained=True)
-resnext50_32x4d = models.resnext50_32x4d(pretrained=True)
-wide_resnet50_2 = models.wide_resnet50_2(pretrained=True)
-mnasnet = models.mnasnet1_0(pretrained=True)
-efficientnet_b0 = models.efficientnet_b0(pretrained=True)
-efficientnet_b1 = models.efficientnet_b1(pretrained=True)
-efficientnet_b2 = models.efficientnet_b2(pretrained=True)
-efficientnet_b3 = models.efficientnet_b3(pretrained=True)
-efficientnet_b4 = models.efficientnet_b4(pretrained=True)
-efficientnet_b5 = models.efficientnet_b5(pretrained=True)
-efficientnet_b6 = models.efficientnet_b6(pretrained=True)
-efficientnet_b7 = models.efficientnet_b7(pretrained=True)
-regnet_y_400mf = models.regnet_y_400mf(pretrained=True)
-regnet_y_800mf = models.regnet_y_800mf(pretrained=True)
-regnet_y_1_6gf = models.regnet_y_1_6gf(pretrained=True)
-regnet_y_3_2gf = models.regnet_y_3_2gf(pretrained=True)
-regnet_y_8gf = models.regnet_y_8gf(pretrained=True)
-regnet_y_16gf = models.regnet_y_16gf(pretrained=True)
-regnet_y_32gf = models.regnet_y_32gf(pretrained=True)
-regnet_x_400mf = models.regnet_x_400mf(pretrained=True)
-regnet_x_800mf = models.regnet_x_800mf(pretrained=True)
-regnet_x_1_6gf = models.regnet_x_1_6gf(pretrained=True)
-regnet_x_3_2gf = models.regnet_x_3_2gf(pretrained=True)
-regnet_x_8gf = models.regnet_x_8gf(pretrained=True)
-regnet_x_16gf = models.regnet_x_16gf(pretrained=True)
-regnet_x_32gf = models.regnet_x_32gf(pretrained=True)
-
-
-

Instantiating a pre-trained model will download its weights to a cache directory. This directory can be set using the TORCH_MODEL_ZOO environment variable. See torch.utils.model_zoo.load_url() for details.

-

Some models use modules which have different training and evaluation -behavior, such as batch normalization. To switch between these modes, use -model.train() or model.eval() as appropriate. See -train() or eval() for details.
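As a quick sketch of the typical inference setup (the input here is random data, only to illustrate the expected shape):

import torch
import torchvision.models as models

model = models.resnet18(pretrained=True)
model.eval()  # switch layers such as batch normalization to evaluation behavior

with torch.no_grad():
    output = model(torch.rand(1, 3, 224, 224))
print(output.shape)  # torch.Size([1, 1000])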

-

All pre-trained models expect input images normalized in the same way, i.e. mini-batches of 3-channel RGB images of shape (3 x H x W), where H and W are expected to be at least 224. The images have to be loaded into a range of [0, 1] and then normalized using mean = [0.485, 0.456, 0.406] and std = [0.229, 0.224, 0.225]. You can use the following transform to normalize:

-
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
-                                 std=[0.229, 0.224, 0.225])
-
-
-

An example of such normalization can be found in the imagenet example -here

-

The process for obtaining the values of mean and std is roughly equivalent -to:

-
import torch
-from torchvision import datasets, transforms as T
-
-transform = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])
-dataset = datasets.ImageNet(".", split="train", transform=transform)
-
-means = []
-stds = []
-for img in subset(dataset):
-    means.append(torch.mean(img))
-    stds.append(torch.std(img))
-
-mean = torch.mean(torch.tensor(means))
-std = torch.mean(torch.tensor(stds))
-
-
-

Unfortunately, the concrete subset that was used is lost. For more -information see this discussion -or these experiments.

-

The sizes of the EfficientNet models depend on the variant. For the exact input sizes -check here

-

ImageNet 1-crop accuracies (Acc@1 / Acc@5)

- ----- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Model

Acc@1

Acc@5

AlexNet

56.522

79.066

VGG-11

69.020

88.628

VGG-13

69.928

89.246

VGG-16

71.592

90.382

VGG-19

72.376

90.876

VGG-11 with batch normalization

70.370

89.810

VGG-13 with batch normalization

71.586

90.374

VGG-16 with batch normalization

73.360

91.516

VGG-19 with batch normalization

74.218

91.842

ResNet-18

69.758

89.078

ResNet-34

73.314

91.420

ResNet-50

76.130

92.862

ResNet-101

77.374

93.546

ResNet-152

78.312

94.046

SqueezeNet 1.0

58.092

80.420

SqueezeNet 1.1

58.178

80.624

Densenet-121

74.434

91.972

Densenet-169

75.600

92.806

Densenet-201

76.896

93.370

Densenet-161

77.138

93.560

Inception v3

77.294

93.450

GoogleNet

69.778

89.530

ShuffleNet V2 x1.0

69.362

88.316

ShuffleNet V2 x0.5

60.552

81.746

MobileNet V2

71.878

90.286

MobileNet V3 Large

74.042

91.340

MobileNet V3 Small

67.668

87.402

ResNeXt-50-32x4d

77.618

93.698

ResNeXt-101-32x8d

79.312

94.526

Wide ResNet-50-2

78.468

94.086

Wide ResNet-101-2

78.848

94.284

MNASNet 1.0

73.456

91.510

MNASNet 0.5

67.734

87.490

EfficientNet-B0

77.692

93.532

EfficientNet-B1

78.642

94.186

EfficientNet-B2

80.608

95.310

EfficientNet-B3

82.008

96.054

EfficientNet-B4

83.384

96.594

EfficientNet-B5

83.444

96.628

EfficientNet-B6

84.008

96.916

EfficientNet-B7

84.122

96.908

regnet_x_400mf

72.834

90.950

regnet_x_800mf

75.212

92.348

regnet_x_1_6gf

77.040

93.440

regnet_x_3_2gf

78.364

93.992

regnet_x_8gf

79.344

94.686

regnet_x_16gf

80.058

94.944

regnet_x_32gf

80.622

95.248

regnet_y_400mf

74.046

91.716

regnet_y_800mf

76.420

93.136

regnet_y_1_6gf

77.950

93.966

regnet_y_3_2gf

78.948

94.576

regnet_y_8gf

80.032

95.048

regnet_y_16gf

80.424

95.240

regnet_y_32gf

80.878

95.340

-
-

Alexnet

-
-
-torchvision.models.alexnet(pretrained: bool = False, progress: bool = True, **kwargs: Any)torchvision.models.alexnet.AlexNet[source]
-

AlexNet model architecture from the -“One weird trick…” paper. -The required minimum input size of the model is 63x63.

-
-
Parameters
-
    -
  • pretrained (bool) – If True, returns a model pre-trained on ImageNet

  • -
  • progress (bool) – If True, displays a progress bar of the download to stderr

  • -
-
-
-
- -
-
-

VGG

torchvision.models.vgg11(pretrained: bool = False, progress: bool = True, **kwargs: Any) → torchvision.models.vgg.VGG    [source]
torchvision.models.vgg11_bn(pretrained: bool = False, progress: bool = True, **kwargs: Any) → torchvision.models.vgg.VGG    [source]
torchvision.models.vgg13(pretrained: bool = False, progress: bool = True, **kwargs: Any) → torchvision.models.vgg.VGG    [source]
torchvision.models.vgg13_bn(pretrained: bool = False, progress: bool = True, **kwargs: Any) → torchvision.models.vgg.VGG    [source]
torchvision.models.vgg16(pretrained: bool = False, progress: bool = True, **kwargs: Any) → torchvision.models.vgg.VGG    [source]
torchvision.models.vgg16_bn(pretrained: bool = False, progress: bool = True, **kwargs: Any) → torchvision.models.vgg.VGG    [source]
torchvision.models.vgg19(pretrained: bool = False, progress: bool = True, **kwargs: Any) → torchvision.models.vgg.VGG    [source]
torchvision.models.vgg19_bn(pretrained: bool = False, progress: bool = True, **kwargs: Any) → torchvision.models.vgg.VGG    [source]

    VGG models from “Very Deep Convolutional Networks For Large-Scale Image Recognition”: vgg11 (configuration “A”), vgg13 (“B”), vgg16 (“D”) and vgg19 (“E”); the *_bn variants add batch normalization. The required minimum input size of the models is 32x32.

    Parameters (common to all VGG constructors):
      • pretrained (bool) – If True, returns a model pre-trained on ImageNet
      • progress (bool) – If True, displays a progress bar of the download to stderr

ResNet

torchvision.models.resnet18(pretrained: bool = False, progress: bool = True, **kwargs: Any) → torchvision.models.resnet.ResNet    [source]
torchvision.models.resnet34(pretrained: bool = False, progress: bool = True, **kwargs: Any) → torchvision.models.resnet.ResNet    [source]
torchvision.models.resnet50(pretrained: bool = False, progress: bool = True, **kwargs: Any) → torchvision.models.resnet.ResNet    [source]
torchvision.models.resnet101(pretrained: bool = False, progress: bool = True, **kwargs: Any) → torchvision.models.resnet.ResNet    [source]
torchvision.models.resnet152(pretrained: bool = False, progress: bool = True, **kwargs: Any) → torchvision.models.resnet.ResNet    [source]

    ResNet-18/34/50/101/152 models from “Deep Residual Learning for Image Recognition”.

    Parameters (common to all ResNet constructors):
      • pretrained (bool) – If True, returns a model pre-trained on ImageNet
      • progress (bool) – If True, displays a progress bar of the download to stderr

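The ResNet constructors above are a common starting point for transfer learning: load ImageNet weights and replace the final fully connected layer. The sketch below is only an illustration; the 10-class head and the frozen backbone are arbitrary choices, not part of the API.

import torch.nn as nn
import torchvision.models as models

# Start from ImageNet weights and re-purpose the network for a new task.
model = models.resnet18(pretrained=True)

# Optionally freeze the backbone so only the new head is trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer; 10 output classes is an example.
model.fc = nn.Linear(model.fc.in_features, 10)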
SqueezeNet

torchvision.models.squeezenet1_0(pretrained: bool = False, progress: bool = True, **kwargs: Any) → torchvision.models.squeezenet.SqueezeNet    [source]
    SqueezeNet model architecture from the “SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size” paper. The required minimum input size of the model is 21x21.
    Parameters:
      • pretrained (bool) – If True, returns a model pre-trained on ImageNet
      • progress (bool) – If True, displays a progress bar of the download to stderr

torchvision.models.squeezenet1_1(pretrained: bool = False, progress: bool = True, **kwargs: Any) → torchvision.models.squeezenet.SqueezeNet    [source]
    SqueezeNet 1.1 model from the official SqueezeNet repo. SqueezeNet 1.1 has 2.4x less computation and slightly fewer parameters than SqueezeNet 1.0, without sacrificing accuracy. The required minimum input size of the model is 17x17.
    Parameters:
      • pretrained (bool) – If True, returns a model pre-trained on ImageNet
      • progress (bool) – If True, displays a progress bar of the download to stderr

DenseNet

torchvision.models.densenet121(pretrained: bool = False, progress: bool = True, **kwargs: Any) → torchvision.models.densenet.DenseNet    [source]
torchvision.models.densenet169(pretrained: bool = False, progress: bool = True, **kwargs: Any) → torchvision.models.densenet.DenseNet    [source]
torchvision.models.densenet161(pretrained: bool = False, progress: bool = True, **kwargs: Any) → torchvision.models.densenet.DenseNet    [source]
torchvision.models.densenet201(pretrained: bool = False, progress: bool = True, **kwargs: Any) → torchvision.models.densenet.DenseNet    [source]

    Densenet-121/169/161/201 models from “Densely Connected Convolutional Networks”. The required minimum input size of the models is 29x29.

    Parameters (common to all DenseNet constructors):
      • pretrained (bool) – If True, returns a model pre-trained on ImageNet
      • progress (bool) – If True, displays a progress bar of the download to stderr
      • memory_efficient (bool) – If True, uses checkpointing, which is much more memory efficient but slower. Default: False. See the “paper”.

Inception v3

torchvision.models.inception_v3(pretrained: bool = False, progress: bool = True, **kwargs: Any) → torchvision.models.inception.Inception3    [source]

    Inception v3 model architecture from “Rethinking the Inception Architecture for Computer Vision”. The required minimum input size of the model is 75x75.

    Note: in contrast to the other models, inception_v3 expects tensors with a size of N x 3 x 299 x 299, so ensure your images are sized accordingly.

    Parameters:
      • pretrained (bool) – If True, returns a model pre-trained on ImageNet
      • progress (bool) – If True, displays a progress bar of the download to stderr
      • aux_logits (bool) – If True, adds an auxiliary branch that can improve training. Default: True
      • transform_input (bool) – If True, preprocesses the input according to the method with which it was trained on ImageNet. Default: False

    Note: this model requires scipy to be installed.
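Because inception_v3 expects a different input size than the other classifiers, a small sketch of its evaluation path is shown below. The 342-pixel resize before the 299 crop is a common choice, not something prescribed here, and the random tensor stands in for a preprocessed image batch.

import torch
import torchvision.models as models
import torchvision.transforms as T

# inception_v3 expects 299x299 inputs instead of the usual 224x224.
preprocess = T.Compose([
    T.Resize(342),
    T.CenterCrop(299),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

model = models.inception_v3(pretrained=True)
model.eval()

x = torch.rand(1, 3, 299, 299)  # stand-in for a preprocessed image batch
with torch.no_grad():
    out = model(x)
print(out.shape)  # torch.Size([1, 1000])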

GoogLeNet

torchvision.models.googlenet(pretrained: bool = False, progress: bool = True, **kwargs: Any) → torchvision.models.googlenet.GoogLeNet    [source]

    GoogLeNet (Inception v1) model architecture from “Going Deeper with Convolutions”. The required minimum input size of the model is 15x15.

    Parameters:
      • pretrained (bool) – If True, returns a model pre-trained on ImageNet
      • progress (bool) – If True, displays a progress bar of the download to stderr
      • aux_logits (bool) – If True, adds two auxiliary branches that can improve training. Default: False when pretrained is True, otherwise True
      • transform_input (bool) – If True, preprocesses the input according to the method with which it was trained on ImageNet. Default: False

    Note: this model requires scipy to be installed.

ShuffleNet v2

torchvision.models.shufflenet_v2_x0_5(pretrained: bool = False, progress: bool = True, **kwargs: Any) → torchvision.models.shufflenetv2.ShuffleNetV2    [source]
torchvision.models.shufflenet_v2_x1_0(pretrained: bool = False, progress: bool = True, **kwargs: Any) → torchvision.models.shufflenetv2.ShuffleNetV2    [source]
torchvision.models.shufflenet_v2_x1_5(pretrained: bool = False, progress: bool = True, **kwargs: Any) → torchvision.models.shufflenetv2.ShuffleNetV2    [source]
torchvision.models.shufflenet_v2_x2_0(pretrained: bool = False, progress: bool = True, **kwargs: Any) → torchvision.models.shufflenetv2.ShuffleNetV2    [source]

    Constructs a ShuffleNetV2 with 0.5x, 1.0x, 1.5x or 2.0x output channels respectively, as described in “ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design”.

    Parameters (common to all ShuffleNetV2 constructors):
      • pretrained (bool) – If True, returns a model pre-trained on ImageNet
      • progress (bool) – If True, displays a progress bar of the download to stderr

MobileNet v2

torchvision.models.mobilenet_v2(pretrained: bool = False, progress: bool = True, **kwargs: Any) → torchvision.models.mobilenetv2.MobileNetV2    [source]
    Constructs a MobileNetV2 architecture from “MobileNetV2: Inverted Residuals and Linear Bottlenecks”.
    Parameters:
      • pretrained (bool) – If True, returns a model pre-trained on ImageNet
      • progress (bool) – If True, displays a progress bar of the download to stderr

MobileNet v3

torchvision.models.mobilenet_v3_large(pretrained: bool = False, progress: bool = True, **kwargs: Any) → torchvision.models.mobilenetv3.MobileNetV3    [source]
    Constructs a large MobileNetV3 architecture from “Searching for MobileNetV3”.
    Parameters:
      • pretrained (bool) – If True, returns a model pre-trained on ImageNet
      • progress (bool) – If True, displays a progress bar of the download to stderr

torchvision.models.mobilenet_v3_small(pretrained: bool = False, progress: bool = True, **kwargs: Any) → torchvision.models.mobilenetv3.MobileNetV3    [source]
    Constructs a small MobileNetV3 architecture from “Searching for MobileNetV3”.
    Parameters:
      • pretrained (bool) – If True, returns a model pre-trained on ImageNet
      • progress (bool) – If True, displays a progress bar of the download to stderr

ResNext

torchvision.models.resnext50_32x4d(pretrained: bool = False, progress: bool = True, **kwargs: Any) → torchvision.models.resnet.ResNet    [source]
    ResNeXt-50 32x4d model from “Aggregated Residual Transformations for Deep Neural Networks”.
    Parameters:
      • pretrained (bool) – If True, returns a model pre-trained on ImageNet
      • progress (bool) – If True, displays a progress bar of the download to stderr

torchvision.models.resnext101_32x8d(pretrained: bool = False, progress: bool = True, **kwargs: Any) → torchvision.models.resnet.ResNet    [source]
    ResNeXt-101 32x8d model from “Aggregated Residual Transformations for Deep Neural Networks”.
    Parameters:
      • pretrained (bool) – If True, returns a model pre-trained on ImageNet
      • progress (bool) – If True, displays a progress bar of the download to stderr

Wide ResNet

torchvision.models.wide_resnet50_2(pretrained: bool = False, progress: bool = True, **kwargs: Any) → torchvision.models.resnet.ResNet    [source]
torchvision.models.wide_resnet101_2(pretrained: bool = False, progress: bool = True, **kwargs: Any) → torchvision.models.resnet.ResNet    [source]

    Wide ResNet-50-2 and Wide ResNet-101-2 models from “Wide Residual Networks”. The models are the same as ResNet except that the number of bottleneck channels is twice as large in every block. The number of channels in the outer 1x1 convolutions is unchanged, e.g. the last block in ResNet-50 has 2048-512-2048 channels, while in Wide ResNet-50-2 it has 2048-1024-2048.

    Parameters (common to both constructors):
      • pretrained (bool) – If True, returns a model pre-trained on ImageNet
      • progress (bool) – If True, displays a progress bar of the download to stderr

MNASNet

torchvision.models.mnasnet0_5(pretrained: bool = False, progress: bool = True, **kwargs: Any) → torchvision.models.mnasnet.MNASNet    [source]
torchvision.models.mnasnet0_75(pretrained: bool = False, progress: bool = True, **kwargs: Any) → torchvision.models.mnasnet.MNASNet    [source]
torchvision.models.mnasnet1_0(pretrained: bool = False, progress: bool = True, **kwargs: Any) → torchvision.models.mnasnet.MNASNet    [source]
torchvision.models.mnasnet1_3(pretrained: bool = False, progress: bool = True, **kwargs: Any) → torchvision.models.mnasnet.MNASNet    [source]

    MNASNet with depth multipliers of 0.5, 0.75, 1.0 and 1.3 respectively, from “MnasNet: Platform-Aware Neural Architecture Search for Mobile”.

    Parameters (common to all MNASNet constructors):
      • pretrained (bool) – If True, returns a model pre-trained on ImageNet
      • progress (bool) – If True, displays a progress bar of the download to stderr

EfficientNet

torchvision.models.efficientnet_b0(pretrained: bool = False, progress: bool = True, **kwargs: Any) → torchvision.models.efficientnet.EfficientNet    [source]
    The constructors efficientnet_b1, efficientnet_b2, efficientnet_b3, efficientnet_b4, efficientnet_b5, efficientnet_b6 and efficientnet_b7 share the same signature and return type.

    Constructs the corresponding EfficientNet B0–B7 architecture from “EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks”.

    Parameters (common to all EfficientNet constructors):
      • pretrained (bool) – If True, returns a model pre-trained on ImageNet
      • progress (bool) – If True, displays a progress bar of the download to stderr

RegNet

torchvision.models.regnet_y_400mf(pretrained: bool = False, progress: bool = True, **kwargs: Any) → torchvision.models.regnet.RegNet    [source]
    The constructors regnet_y_800mf, regnet_y_1_6gf, regnet_y_3_2gf, regnet_y_8gf, regnet_y_16gf, regnet_y_32gf, regnet_x_400mf, regnet_x_800mf, regnet_x_1_6gf, regnet_x_3_2gf, regnet_x_8gf, regnet_x_16gf and regnet_x_32gf share the same signature and return type.

    Each constructor builds the corresponding RegNetY / RegNetX architecture (400MF up to 32GF) from “Designing Network Design Spaces”.

    Parameters (common to all RegNet constructors):
      • pretrained (bool) – If True, returns a model pre-trained on ImageNet
      • progress (bool) – If True, displays a progress bar of the download to stderr

Quantized Models

The following architectures provide support for INT8 quantized models. You can get a model with random weights by calling its constructor:

import torchvision.models as models

googlenet = models.quantization.googlenet()
inception_v3 = models.quantization.inception_v3()
mobilenet_v2 = models.quantization.mobilenet_v2()
mobilenet_v3_large = models.quantization.mobilenet_v3_large()
resnet18 = models.quantization.resnet18()
resnet50 = models.quantization.resnet50()
resnext101_32x8d = models.quantization.resnext101_32x8d()
shufflenet_v2_x0_5 = models.quantization.shufflenet_v2_x0_5()
shufflenet_v2_x1_0 = models.quantization.shufflenet_v2_x1_0()
shufflenet_v2_x1_5 = models.quantization.shufflenet_v2_x1_5()
shufflenet_v2_x2_0 = models.quantization.shufflenet_v2_x2_0()

Obtaining a pre-trained quantized model can be done with a few lines of code:

import torch
import torchvision.models as models

model = models.quantization.mobilenet_v2(pretrained=True, quantize=True)
model.eval()
# run the model with quantized inputs and weights
out = model(torch.rand(1, 3, 224, 224))
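To see what quantization buys in practice, it can help to compare the quantized model against its float counterpart on the same input. The sketch below is only illustrative; the random tensor stands in for a real, normalized image batch, on which the two top-5 rankings would normally agree closely.

import torch
import torchvision.models as models

x = torch.rand(1, 3, 224, 224)  # stand-in for a normalized image batch

float_model = models.mobilenet_v2(pretrained=True).eval()
quant_model = models.quantization.mobilenet_v2(pretrained=True, quantize=True).eval()

with torch.no_grad():
    float_top5 = float_model(x).topk(5).indices
    quant_top5 = quant_model(x).topk(5).indices

print(float_top5, quant_top5)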

We provide pre-trained quantized weights for the following models:

Model                    Acc@1     Acc@5
MobileNet V2             71.658    90.150
MobileNet V3 Large       73.004    90.858
ShuffleNet V2            68.360    87.582
ResNet 18                69.494    88.882
ResNet 50                75.920    92.814
ResNext 101 32x8d        78.986    94.480
Inception V3             77.176    93.354
GoogleNet                69.826    89.404

Semantic Segmentation

The models subpackage contains definitions for the following model architectures for semantic segmentation: FCN ResNet50, FCN ResNet101, DeepLabV3 ResNet50, DeepLabV3 ResNet101, DeepLabV3 MobileNetV3-Large and LR-ASPP MobileNetV3-Large.

As with image classification models, all pre-trained models expect input images normalized in the same way. The images have to be loaded into a range of [0, 1] and then normalized using mean = [0.485, 0.456, 0.406] and std = [0.229, 0.224, 0.225]. They have been trained on images resized such that their minimum size is 520.

For details on how to plot the masks of such models, you may refer to Semantic segmentation models.

The pre-trained models have been trained on a subset of COCO train2017, on the 20 categories that are present in the Pascal VOC dataset. You can see more information on how the subset has been selected in references/segmentation/coco_utils.py. The classes that the pre-trained model outputs are the following, in order:

['__background__', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus',
 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse', 'motorbike',
 'person', 'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor']
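To make the class list above concrete, here is a minimal inference sketch. It is illustrative only: the random tensor stands in for a normalized image batch, and VOC_CLASSES simply repeats the list quoted above.

import torch
import torchvision.models as models

VOC_CLASSES = [
    '__background__', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus',
    'car', 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse', 'motorbike',
    'person', 'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor',
]

model = models.segmentation.deeplabv3_resnet50(pretrained=True)
model.eval()

x = torch.rand(1, 3, 520, 520)  # stand-in for a normalized image batch
with torch.no_grad():
    out = model(x)["out"]       # shape [1, 21, 520, 520]

# Per-pixel class index and the set of class names present in the prediction.
pred = out.argmax(dim=1)        # shape [1, 520, 520]
present = {VOC_CLASSES[i] for i in pred.unique().tolist()}
print(present)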

The accuracies of the pre-trained models evaluated on COCO val2017 are as follows:

Network                        mean IoU    global pixelwise acc
FCN ResNet50                   60.5        91.4
FCN ResNet101                  63.7        91.9
DeepLabV3 ResNet50             66.4        92.4
DeepLabV3 ResNet101            67.4        92.4
DeepLabV3 MobileNetV3-Large    60.3        91.2
LR-ASPP MobileNetV3-Large      57.9        91.2

Fully Convolutional Networks

torchvision.models.segmentation.fcn_resnet50(pretrained: bool = False, progress: bool = True, num_classes: int = 21, aux_loss: Optional[bool] = None, **kwargs: Any) → torch.nn.modules.module.Module    [source]
    Constructs a Fully-Convolutional Network model with a ResNet-50 backbone.
    Parameters:
      • pretrained (bool) – If True, returns a model pre-trained on COCO train2017 which contains the same classes as Pascal VOC
      • progress (bool) – If True, displays a progress bar of the download to stderr
      • num_classes (int) – number of output classes of the model (including the background)
      • aux_loss (bool) – If True, it uses an auxiliary loss

torchvision.models.segmentation.fcn_resnet101(pretrained: bool = False, progress: bool = True, num_classes: int = 21, aux_loss: Optional[bool] = None, **kwargs: Any) → torch.nn.modules.module.Module    [source]
    Constructs a Fully-Convolutional Network model with a ResNet-101 backbone.
    Parameters:
      • pretrained (bool) – If True, returns a model pre-trained on COCO train2017 which contains the same classes as Pascal VOC
      • progress (bool) – If True, displays a progress bar of the download to stderr
      • num_classes (int) – number of output classes of the model (including the background)
      • aux_loss (bool) – If True, it uses an auxiliary loss

DeepLabV3

torchvision.models.segmentation.deeplabv3_resnet50(pretrained: bool = False, progress: bool = True, num_classes: int = 21, aux_loss: Optional[bool] = None, **kwargs: Any) → torch.nn.modules.module.Module    [source]
    Constructs a DeepLabV3 model with a ResNet-50 backbone.
    Parameters:
      • pretrained (bool) – If True, returns a model pre-trained on COCO train2017 which contains the same classes as Pascal VOC
      • progress (bool) – If True, displays a progress bar of the download to stderr
      • num_classes (int) – number of output classes of the model (including the background)
      • aux_loss (bool) – If True, it uses an auxiliary loss

torchvision.models.segmentation.deeplabv3_resnet101(pretrained: bool = False, progress: bool = True, num_classes: int = 21, aux_loss: Optional[bool] = None, **kwargs: Any) → torch.nn.modules.module.Module    [source]
    Constructs a DeepLabV3 model with a ResNet-101 backbone.
    Parameters:
      • pretrained (bool) – If True, returns a model pre-trained on COCO train2017 which contains the same classes as Pascal VOC
      • progress (bool) – If True, displays a progress bar of the download to stderr
      • num_classes (int) – the number of classes
      • aux_loss (bool) – If True, include an auxiliary classifier

torchvision.models.segmentation.deeplabv3_mobilenet_v3_large(pretrained: bool = False, progress: bool = True, num_classes: int = 21, aux_loss: Optional[bool] = None, **kwargs: Any) → torch.nn.modules.module.Module    [source]
    Constructs a DeepLabV3 model with a MobileNetV3-Large backbone.
    Parameters:
      • pretrained (bool) – If True, returns a model pre-trained on COCO train2017 which contains the same classes as Pascal VOC
      • progress (bool) – If True, displays a progress bar of the download to stderr
      • num_classes (int) – number of output classes of the model (including the background)
      • aux_loss (bool) – If True, it uses an auxiliary loss

LR-ASPP

torchvision.models.segmentation.lraspp_mobilenet_v3_large(pretrained: bool = False, progress: bool = True, num_classes: int = 21, **kwargs: Any) → torch.nn.modules.module.Module    [source]
    Constructs a Lite R-ASPP Network model with a MobileNetV3-Large backbone.
    Parameters:
      • pretrained (bool) – If True, returns a model pre-trained on COCO train2017 which contains the same classes as Pascal VOC
      • progress (bool) – If True, displays a progress bar of the download to stderr
      • num_classes (int) – number of output classes of the model (including the background)

Object Detection, Instance Segmentation and Person Keypoint Detection

The models subpackage contains definitions for the following model architectures for detection: Faster R-CNN, Mask R-CNN, RetinaNet, SSD, SSDlite and Keypoint R-CNN.

The pre-trained models for detection, instance segmentation and keypoint detection are initialized with the classification models in torchvision.

The models expect a list of Tensor[C, H, W], in the range 0-1. The models internally resize the images, but the behaviour varies depending on the model. Check the constructor of the models for more information. The output format of such models is illustrated in Instance segmentation models.

For object detection and instance segmentation, the pre-trained models return the predictions of the following classes:

COCO_INSTANCE_CATEGORY_NAMES = [
    '__background__', 'person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus',
    'train', 'truck', 'boat', 'traffic light', 'fire hydrant', 'N/A', 'stop sign',
    'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',
    'elephant', 'bear', 'zebra', 'giraffe', 'N/A', 'backpack', 'umbrella', 'N/A', 'N/A',
    'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball',
    'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard', 'tennis racket',
    'bottle', 'N/A', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl',
    'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza',
    'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed', 'N/A', 'dining table',
    'N/A', 'N/A', 'toilet', 'N/A', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone',
    'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'N/A', 'book',
    'clock', 'vase', 'scissors', 'teddy bear', 'hair drier', 'toothbrush'
]
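The sketch below shows how the label indices returned by a detection model map onto the category names just listed. It is illustrative only: the random tensor stands in for a real image in [0, 1], the 0.8 score threshold is an arbitrary choice, and COCO_INSTANCE_CATEGORY_NAMES is assumed to be defined as above in the same scope.

import torch
import torchvision.models as models

model = models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

images = [torch.rand(3, 480, 640)]   # stand-in for real images in [0, 1]
with torch.no_grad():
    predictions = model(images)

# Keep detections above a confidence threshold and translate label ids to names.
pred = predictions[0]
keep = pred["scores"] > 0.8
for label, box in zip(pred["labels"][keep], pred["boxes"][keep]):
    print(COCO_INSTANCE_CATEGORY_NAMES[label], box.tolist())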

Here is a summary of the accuracies for the models trained on the instances set of COCO train2017 and evaluated on COCO val2017 (– indicates that the metric does not apply to the model):

Network                                   box AP    mask AP    keypoint AP
Faster R-CNN ResNet-50 FPN                37.0      –          –
Faster R-CNN MobileNetV3-Large FPN        32.8      –          –
Faster R-CNN MobileNetV3-Large 320 FPN    22.8      –          –
RetinaNet ResNet-50 FPN                   36.4      –          –
SSD300 VGG16                              25.1      –          –
SSDlite320 MobileNetV3-Large              21.3      –          –
Mask R-CNN ResNet-50 FPN                  37.9      34.6       –

For person keypoint detection, the accuracies for the pre-trained models are as follows:

Network                         box AP    mask AP    keypoint AP
Keypoint R-CNN ResNet-50 FPN    54.6      –          65.0

For person keypoint detection, the pre-trained models return the keypoints in the following order:

COCO_PERSON_KEYPOINT_NAMES = [
    'nose',
    'left_eye',
    'right_eye',
    'left_ear',
    'right_ear',
    'left_shoulder',
    'right_shoulder',
    'left_elbow',
    'right_elbow',
    'left_wrist',
    'right_wrist',
    'left_hip',
    'right_hip',
    'left_knee',
    'right_knee',
    'left_ankle',
    'right_ankle'
]
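A short sketch of how the keypoint order above pairs with the model output follows. It is illustrative only: the random tensor stands in for a real image, and COCO_PERSON_KEYPOINT_NAMES is assumed to be defined as above in the same scope (on random input there may simply be no detections).

import torch
import torchvision.models as models

model = models.detection.keypointrcnn_resnet50_fpn(pretrained=True)
model.eval()

images = [torch.rand(3, 480, 640)]   # stand-in for a real image in [0, 1]
with torch.no_grad():
    pred = model(images)[0]

# Pair each predicted keypoint of the first detected person with its name.
if len(pred["keypoints"]) > 0:
    for name, (x, y, v) in zip(COCO_PERSON_KEYPOINT_NAMES, pred["keypoints"][0]):
        print(name, float(x), float(y), float(v))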

Runtime characteristics

The implementations of the models for object detection, instance segmentation and keypoint detection are efficient.

The results in the following table were obtained with 8 GPUs. During training, we use a batch size of 2 per GPU for all models except SSD, which uses 4, and SSDlite, which uses 24. During testing, a batch size of 1 is used.

For test time, we report the time for the model evaluation and postprocessing (including mask pasting in image), but not the time for computing the precision-recall.

Network                                   train time (s / it)    test time (s / it)    memory (GB)
Faster R-CNN ResNet-50 FPN                0.2288                 0.0590                5.2
Faster R-CNN MobileNetV3-Large FPN        0.1020                 0.0415                1.0
Faster R-CNN MobileNetV3-Large 320 FPN    0.0978                 0.0376                0.6
RetinaNet ResNet-50 FPN                   0.2514                 0.0939                4.1
SSD300 VGG16                              0.2093                 0.0744                1.5
SSDlite320 MobileNetV3-Large              0.1773                 0.0906                1.5
Mask R-CNN ResNet-50 FPN                  0.2728                 0.0903                5.4
Keypoint R-CNN ResNet-50 FPN              0.3789                 0.1242                6.8

Faster R-CNN

torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=False, progress=True, num_classes=91, pretrained_backbone=True, trainable_backbone_layers=None, **kwargs)    [source]

    Constructs a Faster R-CNN model with a ResNet-50-FPN backbone.

    Reference: “Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks”.

    The input to the model is expected to be a list of tensors, each of shape [C, H, W], one for each image, and should be in 0-1 range. Different images can have different sizes.

    The behavior of the model changes depending on whether it is in training or evaluation mode.

    During training, the model expects both the input tensors and a targets argument (list of dictionaries), containing:
      • boxes (FloatTensor[N, 4]): the ground-truth boxes in [x1, y1, x2, y2] format, with 0 <= x1 < x2 <= W and 0 <= y1 < y2 <= H.
      • labels (Int64Tensor[N]): the class label for each ground-truth box

    The model returns a Dict[Tensor] during training, containing the classification and regression losses for both the RPN and the R-CNN.

    During inference, the model requires only the input tensors, and returns the post-processed predictions as a List[Dict[Tensor]], one for each input image. The fields of the Dict are as follows, where N is the number of detections:
      • boxes (FloatTensor[N, 4]): the predicted boxes in [x1, y1, x2, y2] format, with 0 <= x1 < x2 <= W and 0 <= y1 < y2 <= H.
      • labels (Int64Tensor[N]): the predicted labels for each detection
      • scores (Tensor[N]): the scores of each detection

    For more details on the output, you may refer to Instance segmentation models.

    Faster R-CNN is exportable to ONNX for a fixed batch size with input images of fixed size.

    Example:

    >>> model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
    >>> # For training
    >>> images, boxes = torch.rand(4, 3, 600, 1200), torch.rand(4, 11, 4)
    >>> labels = torch.randint(1, 91, (4, 11))
    >>> images = list(image for image in images)
    >>> targets = []
    >>> for i in range(len(images)):
    >>>     d = {}
    >>>     d['boxes'] = boxes[i]
    >>>     d['labels'] = labels[i]
    >>>     targets.append(d)
    >>> output = model(images, targets)
    >>> # For inference
    >>> model.eval()
    >>> x = [torch.rand(3, 300, 400), torch.rand(3, 500, 400)]
    >>> predictions = model(x)
    >>>
    >>> # optionally, if you want to export the model to ONNX:
    >>> torch.onnx.export(model, x, "faster_rcnn.onnx", opset_version=11)

    Parameters:
      • pretrained (bool) – If True, returns a model pre-trained on COCO train2017
      • progress (bool) – If True, displays a progress bar of the download to stderr
      • num_classes (int) – number of output classes of the model (including the background)
      • pretrained_backbone (bool) – If True, returns a model with backbone pre-trained on Imagenet
      • trainable_backbone_layers (int) – number of trainable (not frozen) resnet layers starting from final block. Valid values are between 0 and 5, with 5 meaning all backbone layers are trainable.
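A common follow-on use of fasterrcnn_resnet50_fpn is fine-tuning on a custom dataset by replacing the box predictor head. The sketch below is illustrative only: the 3-class head (two object classes plus background) is an arbitrary example, and FastRCNNPredictor comes from torchvision.models.detection.faster_rcnn.

import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Load a model pre-trained on COCO.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)

# Replace the classification head; num_classes includes the background.
num_classes = 3  # example value: 2 object classes + background
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)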
torchvision.models.detection.fasterrcnn_mobilenet_v3_large_fpn(pretrained=False, progress=True, num_classes=91, pretrained_backbone=True, trainable_backbone_layers=None, **kwargs)    [source]

    Constructs a high resolution Faster R-CNN model with a MobileNetV3-Large FPN backbone. It works similarly to Faster R-CNN with a ResNet-50 FPN backbone. See fasterrcnn_resnet50_fpn() for more details.

    Example:

    >>> model = torchvision.models.detection.fasterrcnn_mobilenet_v3_large_fpn(pretrained=True)
    >>> model.eval()
    >>> x = [torch.rand(3, 300, 400), torch.rand(3, 500, 400)]
    >>> predictions = model(x)

    Parameters:
      • pretrained (bool) – If True, returns a model pre-trained on COCO train2017
      • progress (bool) – If True, displays a progress bar of the download to stderr
      • num_classes (int) – number of output classes of the model (including the background)
      • pretrained_backbone (bool) – If True, returns a model with backbone pre-trained on Imagenet
      • trainable_backbone_layers (int) – number of trainable (not frozen) layers starting from final block. Valid values are between 0 and 6, with 6 meaning all backbone layers are trainable.

torchvision.models.detection.fasterrcnn_mobilenet_v3_large_320_fpn(pretrained=False, progress=True, num_classes=91, pretrained_backbone=True, trainable_backbone_layers=None, **kwargs)    [source]

    Constructs a low resolution Faster R-CNN model with a MobileNetV3-Large FPN backbone tuned for mobile use-cases. It works similarly to Faster R-CNN with a ResNet-50 FPN backbone. See fasterrcnn_resnet50_fpn() for more details.

    Example:

    >>> model = torchvision.models.detection.fasterrcnn_mobilenet_v3_large_320_fpn(pretrained=True)
    >>> model.eval()
    >>> x = [torch.rand(3, 300, 400), torch.rand(3, 500, 400)]
    >>> predictions = model(x)

    Parameters:
      • pretrained (bool) – If True, returns a model pre-trained on COCO train2017
      • progress (bool) – If True, displays a progress bar of the download to stderr
      • num_classes (int) – number of output classes of the model (including the background)
      • pretrained_backbone (bool) – If True, returns a model with backbone pre-trained on Imagenet
      • trainable_backbone_layers (int) – number of trainable (not frozen) layers starting from final block. Valid values are between 0 and 6, with 6 meaning all backbone layers are trainable.

RetinaNet

torchvision.models.detection.retinanet_resnet50_fpn(pretrained=False, progress=True, num_classes=91, pretrained_backbone=True, trainable_backbone_layers=None, **kwargs)    [source]

    Constructs a RetinaNet model with a ResNet-50-FPN backbone.

    Reference: “Focal Loss for Dense Object Detection”.

    The input to the model is expected to be a list of tensors, each of shape [C, H, W], one for each image, and should be in 0-1 range. Different images can have different sizes.

    The behavior of the model changes depending on whether it is in training or evaluation mode.

    During training, the model expects both the input tensors and a targets argument (list of dictionaries), containing:
      • boxes (FloatTensor[N, 4]): the ground-truth boxes in [x1, y1, x2, y2] format, with 0 <= x1 < x2 <= W and 0 <= y1 < y2 <= H.
      • labels (Int64Tensor[N]): the class label for each ground-truth box

    The model returns a Dict[Tensor] during training, containing the classification and regression losses.

    During inference, the model requires only the input tensors, and returns the post-processed predictions as a List[Dict[Tensor]], one for each input image. The fields of the Dict are as follows, where N is the number of detections:
      • boxes (FloatTensor[N, 4]): the predicted boxes in [x1, y1, x2, y2] format, with 0 <= x1 < x2 <= W and 0 <= y1 < y2 <= H.
      • labels (Int64Tensor[N]): the predicted labels for each detection
      • scores (Tensor[N]): the scores of each detection

    For more details on the output, you may refer to Instance segmentation models.

    Example:

    >>> model = torchvision.models.detection.retinanet_resnet50_fpn(pretrained=True)
    >>> model.eval()
    >>> x = [torch.rand(3, 300, 400), torch.rand(3, 500, 400)]
    >>> predictions = model(x)

    Parameters:
      • pretrained (bool) – If True, returns a model pre-trained on COCO train2017
      • progress (bool) – If True, displays a progress bar of the download to stderr
      • num_classes (int) – number of output classes of the model (including the background)
      • pretrained_backbone (bool) – If True, returns a model with backbone pre-trained on Imagenet
      • trainable_backbone_layers (int) – number of trainable (not frozen) resnet layers starting from final block. Valid values are between 0 and 5, with 5 meaning all backbone layers are trainable.

SSD

torchvision.models.detection.ssd300_vgg16(pretrained: bool = False, progress: bool = True, num_classes: int = 91, pretrained_backbone: bool = True, trainable_backbone_layers: Optional[int] = None, **kwargs: Any)    [source]

    Constructs an SSD model with input size 300x300 and a VGG16 backbone.

    Reference: “SSD: Single Shot MultiBox Detector”.

    The input to the model is expected to be a list of tensors, each of shape [C, H, W], one for each image, and should be in 0-1 range. Different images can have different sizes, but they will be resized to a fixed size before being passed to the backbone.

    The behavior of the model changes depending on whether it is in training or evaluation mode.

    During training, the model expects both the input tensors and a targets argument (list of dictionaries), containing:
      • boxes (FloatTensor[N, 4]): the ground-truth boxes in [x1, y1, x2, y2] format, with 0 <= x1 < x2 <= W and 0 <= y1 < y2 <= H.
      • labels (Int64Tensor[N]): the class label for each ground-truth box

    The model returns a Dict[Tensor] during training, containing the classification and regression losses.

    During inference, the model requires only the input tensors, and returns the post-processed predictions as a List[Dict[Tensor]], one for each input image. The fields of the Dict are as follows, where N is the number of detections:
      • boxes (FloatTensor[N, 4]): the predicted boxes in [x1, y1, x2, y2] format, with 0 <= x1 < x2 <= W and 0 <= y1 < y2 <= H.
      • labels (Int64Tensor[N]): the predicted labels for each detection
      • scores (Tensor[N]): the scores for each detection

    Example:

    >>> model = torchvision.models.detection.ssd300_vgg16(pretrained=True)
    >>> model.eval()
    >>> x = [torch.rand(3, 300, 300), torch.rand(3, 500, 400)]
    >>> predictions = model(x)

    Parameters:
      • pretrained (bool) – If True, returns a model pre-trained on COCO train2017
      • progress (bool) – If True, displays a progress bar of the download to stderr
      • num_classes (int) – number of output classes of the model (including the background)
      • pretrained_backbone (bool) – If True, returns a model with backbone pre-trained on Imagenet
      • trainable_backbone_layers (int) – number of trainable (not frozen) layers starting from final block. Valid values are between 0 and 5, with 5 meaning all backbone layers are trainable.

SSDlite

torchvision.models.detection.ssdlite320_mobilenet_v3_large(pretrained: bool = False, progress: bool = True, num_classes: int = 91, pretrained_backbone: bool = False, trainable_backbone_layers: Optional[int] = None, norm_layer: Optional[Callable[[], torch.nn.modules.module.Module]] = None, **kwargs: Any)    [source]

    Constructs an SSDlite model with input size 320x320 and a MobileNetV3-Large backbone, as described at “Searching for MobileNetV3” and “MobileNetV2: Inverted Residuals and Linear Bottlenecks”.

    See ssd300_vgg16() for more details.

    Example:

    >>> model = torchvision.models.detection.ssdlite320_mobilenet_v3_large(pretrained=True)
    >>> model.eval()
    >>> x = [torch.rand(3, 320, 320), torch.rand(3, 500, 400)]
    >>> predictions = model(x)

    Parameters:
      • pretrained (bool) – If True, returns a model pre-trained on COCO train2017
      • progress (bool) – If True, displays a progress bar of the download to stderr
      • num_classes (int) – number of output classes of the model (including the background)
      • pretrained_backbone (bool) – If True, returns a model with backbone pre-trained on Imagenet
      • trainable_backbone_layers (int) – number of trainable (not frozen) layers starting from final block. Valid values are between 0 and 6, with 6 meaning all backbone layers are trainable.
      • norm_layer (callable, optional) – Module specifying the normalization layer to use.

Mask R-CNN

torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=False, progress=True, num_classes=91, pretrained_backbone=True, trainable_backbone_layers=None, **kwargs)    [source]

    Constructs a Mask R-CNN model with a ResNet-50-FPN backbone.

    Reference: “Mask R-CNN”.

    The input to the model is expected to be a list of tensors, each of shape [C, H, W], one for each image, and should be in 0-1 range. Different images can have different sizes.

    The behavior of the model changes depending on whether it is in training or evaluation mode.

    During training, the model expects both the input tensors and a targets argument (list of dictionaries), containing:
      • boxes (FloatTensor[N, 4]): the ground-truth boxes in [x1, y1, x2, y2] format, with 0 <= x1 < x2 <= W and 0 <= y1 < y2 <= H.
      • labels (Int64Tensor[N]): the class label for each ground-truth box
      • masks (UInt8Tensor[N, H, W]): the segmentation binary masks for each instance

    The model returns a Dict[Tensor] during training, containing the classification and regression losses for both the RPN and the R-CNN, and the mask loss.

    During inference, the model requires only the input tensors, and returns the post-processed predictions as a List[Dict[Tensor]], one for each input image. The fields of the Dict are as follows, where N is the number of detected instances:
      • boxes (FloatTensor[N, 4]): the predicted boxes in [x1, y1, x2, y2] format, with 0 <= x1 < x2 <= W and 0 <= y1 < y2 <= H.
      • labels (Int64Tensor[N]): the predicted labels for each instance
      • scores (Tensor[N]): the scores of each instance
      • masks (UInt8Tensor[N, 1, H, W]): the predicted masks for each instance, in 0-1 range. In order to obtain the final segmentation masks, the soft masks can be thresholded, generally with a value of 0.5 (mask >= 0.5)

    For more details on the output and on how to plot the masks, you may refer to Instance segmentation models.

    Mask R-CNN is exportable to ONNX for a fixed batch size with input images of fixed size.

    Example:

    >>> model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
    >>> model.eval()
    >>> x = [torch.rand(3, 300, 400), torch.rand(3, 500, 400)]
    >>> predictions = model(x)
    >>>
    >>> # optionally, if you want to export the model to ONNX:
    >>> torch.onnx.export(model, x, "mask_rcnn.onnx", opset_version=11)

    Parameters:
      • pretrained (bool) – If True, returns a model pre-trained on COCO train2017
      • progress (bool) – If True, displays a progress bar of the download to stderr
      • num_classes (int) – number of output classes of the model (including the background)
      • pretrained_backbone (bool) – If True, returns a model with backbone pre-trained on Imagenet
      • trainable_backbone_layers (int) – number of trainable (not frozen) resnet layers starting from final block. Valid values are between 0 and 5, with 5 meaning all backbone layers are trainable.

Keypoint R-CNN

torchvision.models.detection.keypointrcnn_resnet50_fpn(pretrained=False, progress=True, num_classes=2, num_keypoints=17, pretrained_backbone=True, trainable_backbone_layers=None, **kwargs)    [source]

    Constructs a Keypoint R-CNN model with a ResNet-50-FPN backbone.

    Reference: “Mask R-CNN”.

    The input to the model is expected to be a list of tensors, each of shape [C, H, W], one for each image, and should be in 0-1 range. Different images can have different sizes.

    The behavior of the model changes depending on whether it is in training or evaluation mode.

    During training, the model expects both the input tensors and a targets argument (list of dictionaries), containing:
      • boxes (FloatTensor[N, 4]): the ground-truth boxes in [x1, y1, x2, y2] format, with 0 <= x1 < x2 <= W and 0 <= y1 < y2 <= H.
      • labels (Int64Tensor[N]): the class label for each ground-truth box
      • keypoints (FloatTensor[N, K, 3]): the K keypoint locations for each of the N instances, in the format [x, y, visibility], where visibility=0 means that the keypoint is not visible.

    The model returns a Dict[Tensor] during training, containing the classification and regression losses for both the RPN and the R-CNN, and the keypoint loss.

    During inference, the model requires only the input tensors, and returns the post-processed predictions as a List[Dict[Tensor]], one for each input image. The fields of the Dict are as follows, where N is the number of detected instances:
      • boxes (FloatTensor[N, 4]): the predicted boxes in [x1, y1, x2, y2] format, with 0 <= x1 < x2 <= W and 0 <= y1 < y2 <= H.
      • labels (Int64Tensor[N]): the predicted labels for each instance
      • scores (Tensor[N]): the scores of each instance
      • keypoints (FloatTensor[N, K, 3]): the locations of the predicted keypoints, in [x, y, v] format.

    For more details on the output, you may refer to Instance segmentation models.

    Keypoint R-CNN is exportable to ONNX for a fixed batch size with input images of fixed size.

    Example:

    >>> model = torchvision.models.detection.keypointrcnn_resnet50_fpn(pretrained=True)
    >>> model.eval()
    >>> x = [torch.rand(3, 300, 400), torch.rand(3, 500, 400)]
    >>> predictions = model(x)
    >>>
    >>> # optionally, if you want to export the model to ONNX:
    >>> torch.onnx.export(model, x, "keypoint_rcnn.onnx", opset_version=11)

    Parameters:
      • pretrained (bool) – If True, returns a model pre-trained on COCO train2017
      • progress (bool) – If True, displays a progress bar of the download to stderr
      • num_classes (int) – number of output classes of the model (including the background)
      • num_keypoints (int) – number of keypoints, default 17
      • pretrained_backbone (bool) – If True, returns a model with backbone pre-trained on Imagenet
      • trainable_backbone_layers (int) – number of trainable (not frozen) resnet layers starting from final block. Valid values are between 0 and 5, with 5 meaning all backbone layers are trainable.

Video classification


We provide models for action recognition pre-trained on Kinetics-400. They have all been trained with the scripts provided in references/video_classification.

All pre-trained models expect input images normalized in the same way, i.e. mini-batches of 3-channel RGB videos of shape (3 x T x H x W), where H and W are expected to be 112, and T is the number of video frames in a clip. The images have to be loaded into a range of [0, 1] and then normalized using mean = [0.43216, 0.394666, 0.37645] and std = [0.22803, 0.22145, 0.216989].


Note

The normalization parameters are different from the image classification ones, and correspond to the mean and std from Kinetics-400.

Note

For now, normalization code can be found in references/video_classification/transforms.py, see the Normalize function there. Note that it differs from standard normalization for images because it assumes the video is 4d.
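
As a rough sketch (this is not the Normalize transform from the references), per-channel normalization of a 4D clip with the statistics quoted above could look like this:

import torch

# Normalize a (C, T, H, W) clip with the Kinetics-400 statistics quoted above.
# Broadcasting differs from the usual (C, H, W) image case because of the extra
# temporal dimension T.
mean = torch.tensor([0.43216, 0.394666, 0.37645]).view(3, 1, 1, 1)
std = torch.tensor([0.22803, 0.22145, 0.216989]).view(3, 1, 1, 1)

clip = torch.rand(3, 16, 112, 112)      # dummy clip already in [0, 1]
normalized = (clip - mean) / std        # broadcasts over T, H and W
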

Kinetics 1-crop accuracies for clip length 16 (16x112x112)

Network          Clip acc@1    Clip acc@5
ResNet 3D 18        52.75         75.45
ResNet MC 18        53.90         76.29
ResNet (2+1)D       57.50         78.81


ResNet 3D

torchvision.models.video.r3d_18(pretrained: bool = False, progress: bool = True, **kwargs: Any) → torchvision.models.video.resnet.VideoResNet[source]

Constructs an 18-layer ResNet3D model as described in https://arxiv.org/abs/1711.11248
Parameters

  • pretrained (bool) – If True, returns a model pre-trained on Kinetics-400

  • progress (bool) – If True, displays a progress bar of the download to stderr

Returns

R3D-18 network

Return type

nn.Module
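
A short, illustrative usage sketch (not taken from the original page): it runs r3d_18 on a random clip-shaped tensor standing in for a properly normalized 16-frame clip, and the same pattern applies to mc3_18 and r2plus1d_18 below.

import torch
import torchvision

# Minimal sketch: run r3d_18 on a dummy batch containing one 16-frame clip.
# A real clip should be normalized as described above before inference.
model = torchvision.models.video.r3d_18(pretrained=True)
model.eval()

clips = torch.rand(1, 3, 16, 112, 112)   # (batch, channels, frames, height, width)
with torch.no_grad():
    scores = model(clips)                # (1, 400) Kinetics-400 class scores
print(scores.argmax(dim=1))
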

ResNet Mixed Convolution

torchvision.models.video.mc3_18(pretrained: bool = False, progress: bool = True, **kwargs: Any) → torchvision.models.video.resnet.VideoResNet[source]

Constructs an 18-layer Mixed Convolution (MC3) network as described in https://arxiv.org/abs/1711.11248
Parameters

  • pretrained (bool) – If True, returns a model pre-trained on Kinetics-400

  • progress (bool) – If True, displays a progress bar of the download to stderr

Returns

MC3 Network definition

Return type

nn.Module

ResNet (2+1)D

torchvision.models.video.r2plus1d_18(pretrained: bool = False, progress: bool = True, **kwargs: Any) → torchvision.models.video.resnet.VideoResNet[source]

Constructs an 18-layer deep R(2+1)D network as described in https://arxiv.org/abs/1711.11248
Parameters

  • pretrained (bool) – If True, returns a model pre-trained on Kinetics-400

  • progress (bool) – If True, displays a progress bar of the download to stderr

Returns

R(2+1)D-18 network

Return type

nn.Module
\ No newline at end of file
diff --git a/0.11./objects.inv b/0.11./objects.inv
deleted file mode 100644
index 6628fe1086e..00000000000
Binary files a/0.11./objects.inv and /dev/null differ
diff --git a/0.11./ops.html b/0.11./ops.html
deleted file mode 100644
index e19eb6fb25d..00000000000
--- a/0.11./ops.html
+++ /dev/null
@@ -1,1225 +0,0 @@
torchvision.ops — Torchvision main documentation

torchvision.ops


torchvision.ops implements operators that are specific to Computer Vision.

Note

All operators have native support for TorchScript.

torchvision.ops.batched_nms(boxes: torch.Tensor, scores: torch.Tensor, idxs: torch.Tensor, iou_threshold: float) → torch.Tensor[source]

Performs non-maximum suppression in a batched fashion.

Each index value corresponds to a category, and NMS will not be applied between elements of different categories.

Parameters

  • boxes (Tensor[N, 4]) – boxes where NMS will be performed. They are expected to be in (x1, y1, x2, y2) format with 0 <= x1 < x2 and 0 <= y1 < y2.

  • scores (Tensor[N]) – scores for each one of the boxes

  • idxs (Tensor[N]) – indices of the categories for each one of the boxes.

  • iou_threshold (float) – discards all overlapping boxes with IoU > iou_threshold

Returns

int64 tensor with the indices of the elements that have been kept by NMS, sorted in decreasing order of scores

Return type

Tensor
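
A brief usage sketch (not part of the original reference) showing how per-category suppression keeps overlapping boxes that belong to different categories; the values are illustrative.

import torch
from torchvision.ops import batched_nms

# Two heavily overlapping boxes with different category indices: batched NMS
# keeps both, whereas plain NMS would drop the lower-scoring one.
boxes = torch.tensor([[0., 0., 10., 10.],
                      [1., 1., 11., 11.],
                      [0., 0., 10., 10.]])
scores = torch.tensor([0.9, 0.8, 0.7])
idxs = torch.tensor([0, 1, 0])            # category index per box

keep = batched_nms(boxes, scores, idxs, iou_threshold=0.5)
print(keep)   # indices of kept boxes, sorted by decreasing score
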

torchvision.ops.box_area(boxes: torch.Tensor) → torch.Tensor[source]

Computes the area of a set of bounding boxes, which are specified by their (x1, y1, x2, y2) coordinates.

Parameters

boxes (Tensor[N, 4]) – boxes for which the area will be computed. They are expected to be in (x1, y1, x2, y2) format with 0 <= x1 < x2 and 0 <= y1 < y2.

Returns

the area for each box

Return type

Tensor[N]

torchvision.ops.box_convert(boxes: torch.Tensor, in_fmt: str, out_fmt: str) → torch.Tensor[source]

Converts boxes from given in_fmt to out_fmt. Supported in_fmt and out_fmt are:

‘xyxy’: boxes are represented via corners, x1, y1 being top left and x2, y2 being bottom right. This is the format that torchvision utilities expect.

‘xywh’: boxes are represented via corner, width and height, x1, y1 being top left, w, h being width and height.

‘cxcywh’: boxes are represented via centre, width and height, cx, cy being center of box, w, h being width and height.

Parameters

  • boxes (Tensor[N, 4]) – boxes which will be converted.

  • in_fmt (str) – Input format of given boxes. Supported formats are [‘xyxy’, ‘xywh’, ‘cxcywh’].

  • out_fmt (str) – Output format of given boxes. Supported formats are [‘xyxy’, ‘xywh’, ‘cxcywh’].

Returns

Boxes in the converted format.

Return type

Tensor[N, 4]
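
A short, hedged example of the conversion (not from the original page); the numbers are arbitrary.

import torch
from torchvision.ops import box_convert

# One box given as (x, y, w, h): top-left corner at (10, 20), size 30 x 40.
boxes_xywh = torch.tensor([[10., 20., 30., 40.]])
boxes_xyxy = box_convert(boxes_xywh, in_fmt="xywh", out_fmt="xyxy")
print(boxes_xyxy)   # tensor([[10., 20., 40., 60.]])
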

torchvision.ops.box_iou(boxes1: torch.Tensor, boxes2: torch.Tensor) → torch.Tensor[source]

Return intersection-over-union (Jaccard index) between two sets of boxes.

Both sets of boxes are expected to be in (x1, y1, x2, y2) format with 0 <= x1 < x2 and 0 <= y1 < y2.

Parameters

  • boxes1 (Tensor[N, 4]) – first set of boxes

  • boxes2 (Tensor[M, 4]) – second set of boxes

Returns

the NxM matrix containing the pairwise IoU values for every element in boxes1 and boxes2

Return type

Tensor[N, M]

torchvision.ops.clip_boxes_to_image(boxes: torch.Tensor, size: Tuple[int, int]) → torch.Tensor[source]

Clip boxes so that they lie inside an image of size size.

Parameters

  • boxes (Tensor[N, 4]) – boxes in (x1, y1, x2, y2) format with 0 <= x1 < x2 and 0 <= y1 < y2.

  • size (Tuple[height, width]) – size of the image

Returns

clipped boxes

Return type

Tensor[N, 4]

torchvision.ops.deform_conv2d(input: torch.Tensor, offset: torch.Tensor, weight: torch.Tensor, bias: Optional[torch.Tensor] = None, stride: Tuple[int, int] = (1, 1), padding: Tuple[int, int] = (0, 0), dilation: Tuple[int, int] = (1, 1), mask: Optional[torch.Tensor] = None) → torch.Tensor[source]

Performs Deformable Convolution v2, described in Deformable ConvNets v2: More Deformable, Better Results if mask is not None, and performs Deformable Convolution, described in Deformable Convolutional Networks, if mask is None.

Parameters

  • input (Tensor[batch_size, in_channels, in_height, in_width]) – input tensor

  • offset (Tensor[batch_size, 2 * offset_groups * kernel_height * kernel_width, out_height, out_width]) – offsets to be applied for each position in the convolution kernel.

  • weight (Tensor[out_channels, in_channels // groups, kernel_height, kernel_width]) – convolution weights, split into groups of size (in_channels // groups)

  • bias (Tensor[out_channels]) – optional bias of shape (out_channels,). Default: None

  • stride (int or Tuple[int, int]) – distance between convolution centers. Default: 1

  • padding (int or Tuple[int, int]) – height/width of padding of zeroes around each image. Default: 0

  • dilation (int or Tuple[int, int]) – the spacing between kernel elements. Default: 1

  • mask (Tensor[batch_size, offset_groups * kernel_height * kernel_width, out_height, out_width]) – masks to be applied for each position in the convolution kernel. Default: None

Returns

result of convolution

Return type

Tensor[batch_sz, out_channels, out_h, out_w]

Examples:

>>> input = torch.rand(4, 3, 10, 10)
>>> kh, kw = 3, 3
>>> weight = torch.rand(5, 3, kh, kw)
>>> # offset and mask should have the same spatial size as the output
>>> # of the convolution. In this case, for an input of 10, stride of 1
>>> # and kernel size of 3, without padding, the output size is 8
>>> offset = torch.rand(4, 2 * kh * kw, 8, 8)
>>> mask = torch.rand(4, kh * kw, 8, 8)
>>> out = deform_conv2d(input, offset, weight, mask=mask)
>>> print(out.shape)
torch.Size([4, 5, 8, 8])

torchvision.ops.generalized_box_iou(boxes1: torch.Tensor, boxes2: torch.Tensor) → torch.Tensor[source]

Return generalized intersection-over-union (Jaccard index) between two sets of boxes.

Both sets of boxes are expected to be in (x1, y1, x2, y2) format with 0 <= x1 < x2 and 0 <= y1 < y2.

Parameters

  • boxes1 (Tensor[N, 4]) – first set of boxes

  • boxes2 (Tensor[M, 4]) – second set of boxes

Returns

the NxM matrix containing the pairwise generalized IoU values for every element in boxes1 and boxes2

Return type

Tensor[N, M]

torchvision.ops.masks_to_boxes(masks: torch.Tensor) → torch.Tensor[source]

Compute the bounding boxes around the provided masks.

Returns a [N, 4] tensor containing bounding boxes. The boxes are in (x1, y1, x2, y2) format with 0 <= x1 < x2 and 0 <= y1 < y2.

Parameters

masks (Tensor[N, H, W]) – masks to transform where N is the number of masks and (H, W) are the spatial dimensions.

Returns

bounding boxes

Return type

Tensor[N, 4]

Examples using masks_to_boxes:
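
As a small illustration (not one of the original gallery links), the box of a single rectangular mask can be recovered like this; the mask contents are made up.

import torch
from torchvision.ops import masks_to_boxes

# One 32x32 binary mask with a filled rectangle; masks_to_boxes returns the
# tight (x1, y1, x2, y2) box around the non-zero region.
mask = torch.zeros(1, 32, 32, dtype=torch.uint8)
mask[0, 5:20, 10:25] = 1
print(masks_to_boxes(mask))   # tensor([[10.,  5., 24., 19.]])
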
torchvision.ops.nms(boxes: torch.Tensor, scores: torch.Tensor, iou_threshold: float) → torch.Tensor[source]

Performs non-maximum suppression (NMS) on the boxes according to their intersection-over-union (IoU).

NMS iteratively removes lower scoring boxes which have an IoU greater than iou_threshold with another (higher scoring) box.

If multiple boxes have the exact same score and satisfy the IoU criterion with respect to a reference box, the selected box is not guaranteed to be the same between CPU and GPU. This is similar to the behavior of argsort in PyTorch when repeated values are present.

Parameters

  • boxes (Tensor[N, 4]) – boxes to perform NMS on. They are expected to be in (x1, y1, x2, y2) format with 0 <= x1 < x2 and 0 <= y1 < y2.

  • scores (Tensor[N]) – scores for each one of the boxes

  • iou_threshold (float) – discards all overlapping boxes with IoU > iou_threshold

Returns

int64 tensor with the indices of the elements that have been kept by NMS, sorted in decreasing order of scores

Return type

Tensor
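
A minimal, illustrative call (not from the original page): the second box overlaps the first with IoU above the threshold and is therefore suppressed.

import torch
from torchvision.ops import nms

boxes = torch.tensor([[0., 0., 10., 10.],
                      [1., 1., 11., 11.],    # overlaps the first box heavily
                      [20., 20., 30., 30.]])
scores = torch.tensor([0.9, 0.8, 0.7])
print(nms(boxes, scores, iou_threshold=0.5))   # tensor([0, 2])
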

torchvision.ops.ps_roi_align(input: torch.Tensor, boxes: torch.Tensor, output_size: int, spatial_scale: float = 1.0, sampling_ratio: int = -1) → torch.Tensor[source]

Performs Position-Sensitive Region of Interest (RoI) Align operator mentioned in Light-Head R-CNN.

Parameters

  • input (Tensor[N, C, H, W]) – The input tensor, i.e. a batch with N elements. Each element contains C feature maps of dimensions H x W.

  • boxes (Tensor[K, 5] or List[Tensor[L, 4]]) – the box coordinates in (x1, y1, x2, y2) format where the regions will be taken from. The coordinates must satisfy 0 <= x1 < x2 and 0 <= y1 < y2. If a single Tensor is passed, then the first column should contain the index of the corresponding element in the batch, i.e. a number in [0, N - 1]. If a list of Tensors is passed, then each Tensor will correspond to the boxes for an element i in the batch.

  • output_size (int or Tuple[int, int]) – the size of the output (in bins or pixels) after the pooling is performed, as (height, width).

  • spatial_scale (float) – a scaling factor that maps the input coordinates to the box coordinates. Default: 1.0

  • sampling_ratio (int) – number of sampling points in the interpolation grid used to compute the output value of each pooled output bin. If > 0, then exactly sampling_ratio x sampling_ratio sampling points per bin are used. If <= 0, then an adaptive number of grid points are used (computed as ceil(roi_width / output_width), and likewise for height). Default: -1

Returns

The pooled RoIs

Return type

Tensor[K, C / (output_size[0] * output_size[1]), output_size[0], output_size[1]]

torchvision.ops.ps_roi_pool(input: torch.Tensor, boxes: torch.Tensor, output_size: int, spatial_scale: float = 1.0) → torch.Tensor[source]

Performs Position-Sensitive Region of Interest (RoI) Pool operator described in R-FCN

Parameters

  • input (Tensor[N, C, H, W]) – The input tensor, i.e. a batch with N elements. Each element contains C feature maps of dimensions H x W.

  • boxes (Tensor[K, 5] or List[Tensor[L, 4]]) – the box coordinates in (x1, y1, x2, y2) format where the regions will be taken from. The coordinates must satisfy 0 <= x1 < x2 and 0 <= y1 < y2. If a single Tensor is passed, then the first column should contain the index of the corresponding element in the batch, i.e. a number in [0, N - 1]. If a list of Tensors is passed, then each Tensor will correspond to the boxes for an element i in the batch.

  • output_size (int or Tuple[int, int]) – the size of the output (in bins or pixels) after the pooling is performed, as (height, width).

  • spatial_scale (float) – a scaling factor that maps the input coordinates to the box coordinates. Default: 1.0

Returns

The pooled RoIs.

Return type

Tensor[K, C / (output_size[0] * output_size[1]), output_size[0], output_size[1]]

torchvision.ops.remove_small_boxes(boxes: torch.Tensor, min_size: float) → torch.Tensor[source]

Remove boxes which contain at least one side smaller than min_size.

Parameters

  • boxes (Tensor[N, 4]) – boxes in (x1, y1, x2, y2) format with 0 <= x1 < x2 and 0 <= y1 < y2.

  • min_size (float) – minimum size

Returns

indices of the boxes that have both sides larger than min_size

Return type

Tensor[K]
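
A tiny illustrative call (not from the original page); the second box is only 2 pixels tall and is filtered out.

import torch
from torchvision.ops import remove_small_boxes

boxes = torch.tensor([[0., 0., 100., 100.],   # 100 x 100, kept
                      [0., 0., 100., 2.]])    # 100 x 2, too thin
print(remove_small_boxes(boxes, min_size=5.))  # tensor([0])
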

torchvision.ops.roi_align(input: torch.Tensor, boxes: Union[torch.Tensor, List[torch.Tensor]], output_size: None, spatial_scale: float = 1.0, sampling_ratio: int = -1, aligned: bool = False) → torch.Tensor[source]

Performs Region of Interest (RoI) Align operator with average pooling, as described in Mask R-CNN.

Parameters

  • input (Tensor[N, C, H, W]) – The input tensor, i.e. a batch with N elements. Each element contains C feature maps of dimensions H x W. If the tensor is quantized, we expect a batch size of N == 1.

  • boxes (Tensor[K, 5] or List[Tensor[L, 4]]) – the box coordinates in (x1, y1, x2, y2) format where the regions will be taken from. The coordinates must satisfy 0 <= x1 < x2 and 0 <= y1 < y2. If a single Tensor is passed, then the first column should contain the index of the corresponding element in the batch, i.e. a number in [0, N - 1]. If a list of Tensors is passed, then each Tensor will correspond to the boxes for an element i in the batch.

  • output_size (int or Tuple[int, int]) – the size of the output (in bins or pixels) after the pooling is performed, as (height, width).

  • spatial_scale (float) – a scaling factor that maps the input coordinates to the box coordinates. Default: 1.0

  • sampling_ratio (int) – number of sampling points in the interpolation grid used to compute the output value of each pooled output bin. If > 0, then exactly sampling_ratio x sampling_ratio sampling points per bin are used. If <= 0, then an adaptive number of grid points are used (computed as ceil(roi_width / output_width), and likewise for height). Default: -1

  • aligned (bool) – If False, use the legacy implementation. If True, pixel shift the box coordinates by -0.5 for a better alignment with the two neighboring pixel indices. This version is used in Detectron2.

Returns

The pooled RoIs.

Return type

Tensor[K, C, output_size[0], output_size[1]]
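
An illustrative sketch (not part of the original reference): pooling one 7x7 region from a dummy feature map that is assumed to be downscaled 16x relative to the image, hence spatial_scale=1/16.

import torch
from torchvision.ops import roi_align

features = torch.rand(1, 256, 50, 50)
# One box in image coordinates, prefixed with its batch index in column 0.
rois = torch.tensor([[0., 64., 64., 256., 256.]])
pooled = roi_align(features, rois, output_size=(7, 7),
                   spatial_scale=1.0 / 16, sampling_ratio=2, aligned=True)
print(pooled.shape)   # torch.Size([1, 256, 7, 7])
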

torchvision.ops.roi_pool(input: torch.Tensor, boxes: Union[torch.Tensor, List[torch.Tensor]], output_size: None, spatial_scale: float = 1.0) → torch.Tensor[source]

Performs Region of Interest (RoI) Pool operator described in Fast R-CNN

Parameters

  • input (Tensor[N, C, H, W]) – The input tensor, i.e. a batch with N elements. Each element contains C feature maps of dimensions H x W.

  • boxes (Tensor[K, 5] or List[Tensor[L, 4]]) – the box coordinates in (x1, y1, x2, y2) format where the regions will be taken from. The coordinates must satisfy 0 <= x1 < x2 and 0 <= y1 < y2. If a single Tensor is passed, then the first column should contain the index of the corresponding element in the batch, i.e. a number in [0, N - 1]. If a list of Tensors is passed, then each Tensor will correspond to the boxes for an element i in the batch.

  • output_size (int or Tuple[int, int]) – the size of the output after the cropping is performed, as (height, width)

  • spatial_scale (float) – a scaling factor that maps the input coordinates to the box coordinates. Default: 1.0

Returns

The pooled RoIs.

Return type

Tensor[K, C, output_size[0], output_size[1]]

torchvision.ops.sigmoid_focal_loss(inputs: torch.Tensor, targets: torch.Tensor, alpha: float = 0.25, gamma: float = 2, reduction: str = 'none')[source]

Original implementation from https://github.com/facebookresearch/fvcore/blob/master/fvcore/nn/focal_loss.py. Loss used in RetinaNet for dense detection: https://arxiv.org/abs/1708.02002.

Parameters

  • inputs – A float tensor of arbitrary shape. The predictions for each example.

  • targets – A float tensor with the same shape as inputs. Stores the binary classification label for each element in inputs (0 for the negative class and 1 for the positive class).

  • alpha – (optional) Weighting factor in range (0, 1) to balance positive vs negative examples, or -1 to ignore the weighting. Default = 0.25

  • gamma – Exponent of the modulating factor (1 - p_t) to balance easy vs hard examples.

  • reduction – 'none' | 'mean' | 'sum'. 'none': No reduction will be applied to the output. 'mean': The output will be averaged. 'sum': The output will be summed.

Returns

Loss tensor with the reduction option applied.
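
A brief, illustrative call (not from the original page); the logits and labels are random stand-ins for real detector outputs.

import torch
from torchvision.ops import sigmoid_focal_loss

# Raw logits for 8 anchors over 4 classes, with random 0/1 targets per element.
inputs = torch.randn(8, 4)
targets = torch.randint(0, 2, (8, 4)).float()
loss = sigmoid_focal_loss(inputs, targets, alpha=0.25, gamma=2.0, reduction="mean")
print(loss)
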

torchvision.ops.stochastic_depth(input: torch.Tensor, p: float, mode: str, training: bool = True) → torch.Tensor[source]

Implements the Stochastic Depth from “Deep Networks with Stochastic Depth” used for randomly dropping residual branches of residual architectures.

Parameters

  • input (Tensor[N, ...]) – The input tensor of arbitrary dimensions with the first one being its batch, i.e. a batch with N rows.

  • p (float) – probability of the input to be zeroed.

  • mode (str) – "batch" or "row". "batch" randomly zeroes the entire input, "row" zeroes randomly selected rows from the batch.

  • training – apply stochastic depth if True. Default: True

Returns

The randomly zeroed tensor.

Return type

Tensor[N, ...]
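
A short sketch of the "row" mode (not from the original page): each sample in the batch is independently zeroed with probability p, and the surviving rows are rescaled so the expected value is preserved during training.

import torch
from torchvision.ops import stochastic_depth

torch.manual_seed(0)
x = torch.ones(4, 3)
# Some rows come back as zeros, the rest are scaled up relative to the input.
print(stochastic_depth(x, p=0.5, mode="row", training=True))
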

class torchvision.ops.RoIAlign(output_size: None, spatial_scale: float, sampling_ratio: int, aligned: bool = False)[source]

See roi_align().

class torchvision.ops.PSRoIAlign(output_size: int, spatial_scale: float, sampling_ratio: int)[source]

See ps_roi_align().

class torchvision.ops.RoIPool(output_size: None, spatial_scale: float)[source]

See roi_pool().

class torchvision.ops.PSRoIPool(output_size: int, spatial_scale: float)[source]

See ps_roi_pool().

class torchvision.ops.DeformConv2d(in_channels: int, out_channels: int, kernel_size: int, stride: int = 1, padding: int = 0, dilation: int = 1, groups: int = 1, bias: bool = True)[source]

See deform_conv2d().

class torchvision.ops.MultiScaleRoIAlign(featmap_names: List[str], output_size: Union[int, Tuple[int], List[int]], sampling_ratio: int, *, canonical_scale: int = 224, canonical_level: int = 4)[source]

Multi-scale RoIAlign pooling, which is useful for detection with or without FPN.

It infers the scale of the pooling via the heuristics specified in eq. 1 of the Feature Pyramid Network paper. The keyword-only parameters canonical_scale and canonical_level correspond respectively to 224 and k0=4 in eq. 1, and have the following meaning: canonical_level is the target level of the pyramid from which to pool a region of interest with w x h = canonical_scale x canonical_scale.

Parameters

  • featmap_names (List[str]) – the names of the feature maps that will be used for the pooling.

  • output_size (List[Tuple[int, int]] or List[int]) – output size for the pooled region

  • sampling_ratio (int) – sampling ratio for ROIAlign

  • canonical_scale (int, optional) – canonical_scale for LevelMapper

  • canonical_level (int, optional) – canonical_level for LevelMapper

Examples:

>>> m = torchvision.ops.MultiScaleRoIAlign(['feat1', 'feat3'], 3, 2)
>>> i = OrderedDict()
>>> i['feat1'] = torch.rand(1, 5, 64, 64)
>>> i['feat2'] = torch.rand(1, 5, 32, 32)  # this feature won't be used in the pooling
>>> i['feat3'] = torch.rand(1, 5, 16, 16)
>>> # create some random bounding boxes
>>> boxes = torch.rand(6, 4) * 256; boxes[:, 2:] += boxes[:, :2]
>>> # original image size, before computing the feature maps
>>> image_sizes = [(512, 512)]
>>> output = m(i, [boxes], image_sizes)
>>> print(output.shape)
torch.Size([6, 5, 3, 3])

class torchvision.ops.FeaturePyramidNetwork(in_channels_list: List[int], out_channels: int, extra_blocks: Optional[torchvision.ops.feature_pyramid_network.ExtraFPNBlock] = None)[source]

Module that adds an FPN on top of a set of feature maps. This is based on “Feature Pyramid Network for Object Detection”.

The feature maps are currently supposed to be in increasing depth order.

The input to the model is expected to be an OrderedDict[Tensor], containing the feature maps on top of which the FPN will be added.

Parameters

  • in_channels_list (list[int]) – number of channels for each feature map that is passed to the module

  • out_channels (int) – number of channels of the FPN representation

  • extra_blocks (ExtraFPNBlock or None) – if provided, extra operations will be performed. It is expected to take the fpn features, the original features and the names of the original features as input, and returns a new list of feature maps and their corresponding names

Examples:

>>> m = torchvision.ops.FeaturePyramidNetwork([10, 20, 30], 5)
>>> # get some dummy data
>>> x = OrderedDict()
>>> x['feat0'] = torch.rand(1, 10, 64, 64)
>>> x['feat2'] = torch.rand(1, 20, 16, 16)
>>> x['feat3'] = torch.rand(1, 30, 8, 8)
>>> # compute the FPN on top of x
>>> output = m(x)
>>> print([(k, v.shape) for k, v in output.items()])
[('feat0', torch.Size([1, 5, 64, 64])),
 ('feat2', torch.Size([1, 5, 16, 16])),
 ('feat3', torch.Size([1, 5, 8, 8]))]

class torchvision.ops.StochasticDepth(p: float, mode: str)[source]

See stochastic_depth().
\ No newline at end of file
diff --git a/0.11./py-modindex.html b/0.11./py-modindex.html
deleted file mode 100644
index ecfc5507977..00000000000
--- a/0.11./py-modindex.html
+++ /dev/null
@@ -1,648 +0,0 @@
Python Module Index — Torchvision main documentation
\ No newline at end of file
diff --git a/0.11./search.html b/0.11./search.html
deleted file mode 100644
index bdedc075c0e..00000000000
--- a/0.11./search.html
+++ /dev/null
@@ -1,641 +0,0 @@
Search — Torchvision main documentation
\ No newline at end of file
diff --git a/0.11./searchindex.js b/0.11./searchindex.js
deleted file mode 100644
index a2197c7dbae..00000000000
--- a/0.11./searchindex.js
+++ /dev/null
@@ -1 +0,0 @@
ces365:9,plane:7,plant:[5,11],platform:11,pleas:[7,8,9,14],plot:[1,3,5,11,15],plot_repurposing_annot:[1,6],plot_scripted_tensor_transform:[2,6],plot_transform:[3,6],plot_video_api:[4,6],plot_visualization_util:[5,6],plt:[1,2,3,4,5],plu:14,plume:7,png:[1,2,3,5,7,10,15],pngimag:1,point:[1,7,8,10,12,14,15],poli:7,polici:[3,14],polygon:7,pool:12,popular:9,popularli:14,portion:14,posit:[12,14],possibl:[2,7],post:[11,14],poster:[3,14],posterized_img:3,postfix:8,postprocess:11,pot:[5,11],pottedpl:[5,11],power:14,practic:[10,11,14],pre:[7,11,13],precis:[10,11],pred:2,pred_script:2,predict:[2,5,11,12],predictor:2,prefer:14,preprocess:11,present:[2,4,7,10,11,12],preserv:[5,11,14],pretrain:[1,2,5,11],pretrained_backbon:11,pretrainedtru:11,previou:14,principl:4,print:[1,2,4,5,7,8,10,12],prior:2,proba_threshold:5,probabl:[3,5,12,14],problemat:8,procedur:[7,8],process:[4,7,11],produc:[3,5,14],product:14,progress:[1,2,5,11],project:[1,7,9],properli:[3,5],properti:[1,3],proport:14,propos:11,prototyp:9,provid:[1,7,8,10,11,12,13,14],ps_roi_align:12,ps_roi_pool:12,psroialign:12,psroipool:12,pth:[1,2,5],pts:[4,10],pts_unit:10,ptss:4,purpos:[1,4,8],put:[7,8],pyav:[9,10],pylab:4,pypi:9,pyplot:[1,2,3,5],pyramid:[8,12],python3:1,python:[0,1,2,3,4,5,8,9,14],pytorch:[1,2,4,5,7,8,11,12,13],pytorch_1634272092750:1,qmnist:9,qualiti:[7,10],quantiz:12,queri:5,quit:5,r2plus1d_18:11,r3d:11,r3d_18:11,racket:[5,11],rais:[4,7,8,14],rand:[8,11,12],randaug:14,randint:[11,14],randn:8,random:[4,7,11,12,14,15],random_offset:7,randomadjustsharp:14,randomaffin:14,randomappli:14,randomautocontrast:14,randomchoic:14,randomcrop:[2,7,14],randomdataset:4,randomequ:14,randomeras:14,randomgrayscal:14,randomhorizontalflip:[2,14],randominvert:14,randomli:[7,12,14],randomord:14,randomperspect:14,randomposter:14,randomresizedcrop:14,randomrot:[7,14],randomsizedcrop:14,randomsolar:14,randomverticalflip:14,rang:[3,4,5,7,11,12,14,15],rate:[4,10,11],rather:[7,8,15],ratio:[12,14],ratrace_wave_f_nm_np1_fr_goo_37:4,raw:[4,7,10],rcnn:5,rcparam:[1,2,3,5],read:[2,4,5,7,8,10],read_audio:4,read_fil:10,read_imag:[1,2,5,10],read_video:10,read_video_timestamp:10,reader:10,reader_md:10,real:[7,11,14],reason:8,recal:11,recognit:[7,8,11],recognitiontask:7,rectangl:14,red:[1,15],reduc:[3,14],reduct:12,redund:8,refer:[2,5,11,12,14],reflect:14,refriger:[5,11],regardless:10,region:[11,12,14],regnet_x_16gf:11,regnet_x_1_6gf:11,regnet_x_32gf:11,regnet_x_3_2gf:11,regnet_x_400mf:11,regnet_x_800mf:11,regnet_x_8gf:11,regnet_y_16gf:11,regnet_y_1_6gf:11,regnet_y_32gf:11,regnet_y_3_2gf:11,regnet_y_400mf:11,regnet_y_800mf:11,regnet_y_8gf:11,regnetx_16gf:11,regnetx_1:11,regnetx_32gf:11,regnetx_3:11,regnetx_400mf:11,regnetx_800mf:11,regnetx_8gf:11,regnety_16gf:11,regnety_1:11,regnety_32gf:11,regnety_3:11,regnety_400mf:11,regnety_800mf:11,regnety_8gf:11,regress:11,releas:[1,9,13],relev:[5,8,10],reli:[4,13],relu:8,relu_2:8,remain:7,remap:14,rememb:5,remot:[5,11],remov:[1,4,5,8,12,14],remove_small_box:12,repeat:[8,12,14],repo:11,report:11,repres:[1,5,7,8,12,14,15],represent:[8,12],reproduc:14,repurpos:[0,6,10,11,12,14,15],repurposing_annotations_thumbnail:1,request:[8,15],requir:[1,5,7,8,10,11,14],res:2,res_script:2,res_scripted_dump:2,resampl:14,rescal:14,reshap:14,resid:8,residu:[11,12],resiz:[2,4,7,11,14],resize_cropp:3,resized_crop:[3,14],resized_img:3,resnet101:11,resnet152:11,resnet18:[2,8,11],resnet34:11,resnet3d:11,resnet50:[8,11],resnet50withfpn:8,resnet:[5,8],resnext101_32x8d:11,resnext50_32x4d:11,resolut:[7,11],respect:[7,8,12,14,15],restrict:2,result:[3,5,8,10,11,12,1
4,15],result_avg:14,rethink:11,retinanet:[1,5,12],retinanet_resnet50_fpn:[5,11],retriev:[8,10],return_nod:8,revers:14,rgb:[10,11,14,15],rgb_alpha:10,rgb_to_grayscal:14,rgba:14,right:[12,14],right_ankl:11,right_ear:11,right_elbow:11,right_ey:11,right_hip:11,right_kne:11,right_should:11,right_wrist:11,righteye_i:7,righteye_x:7,rightmouth_i:7,rightmouth_x:7,rmtree:4,robust:9,roi:12,roi_align:12,roi_pool:12,roi_width:12,roialign:12,roipool:12,root:[1,2,4,5,7],rotat:[3,14],rotated_img:3,rotation_i:7,rotation_transform:14,roughli:[8,11],row:[3,12,15],row_idx:3,row_titl:3,rpn:11,run:[1,2,3,4,5,7,8,9,11],runtimeerror:[7,14],sacrif:11,safer:13,sai:7,sake:4,same:[2,3,4,5,7,8,10,11,12,14,15],sampl:[7,10,12,14],sampling_ratio:12,sandwich:[5,11],satisfi:12,satur:[3,14],saturation_factor:14,save:[2,4,7,10,11,15],save_imag:15,savefig:[1,2,3,5],saw:5,sbd:9,sbdataset:7,sbu:9,sbucaptionedphotodataset:7,scale:[3,7,11,12,14,15],scale_each:15,scale_rang:14,scandir:4,scipi:[7,11],scissor:[5,11],scope:8,score:[5,11,12],score_threshold:5,script:[1,2,3,4,5,11,13,14],scriptabl:9,scripted_predictor:2,scripted_transform:14,scriptmodul:11,search:[11,14,15],sec:10,second:[1,2,3,4,5,10,12,14],see:[2,3,4,5,7,8,9,10,11,12,14],seed:[3,7,14],seek:[4,10],seem:[5,14],seen:5,segment:[4,7,9,13,14,15],segmentationtodetectiondataset:1,select:[4,5,7,8,10,11,12,14],self:[1,2,4,7,8,14],sem_class:5,sem_class_to_idx:5,semant:[7,9,14],semeion:9,sensit:12,separ:[8,15],sequenc:[4,14],sequenti:[2,14],seral:11,serial:11,serv:14,set:[1,2,3,5,7,8,10,11,12,14],set_current_stream:[4,10],set_image_backend:9,set_siz:3,set_video_backend:9,sever:14,shadow:14,shape:[1,5,7,8,11,12,14,15],sharp:[3,14],sharpen:14,sharpened_img:3,sharpness_adjust:3,sharpness_factor:[3,14],shear:14,sheep:[5,11],shift:[12,14,15],shortcut:8,shorter:14,shot:11,should:[4,7,8,9,11,12,14,15],show:[1,2,3,5,14],shown:14,shuffl:7,shufflenet_v2_x0_5:11,shufflenet_v2_x1_0:11,shufflenet_v2_x1_5:11,shufflenet_v2_x2_0:11,shufflenetv2:11,shutil:4,side:[12,14],sigma:[3,14],sigma_i:14,sigma_max:14,sigma_min:14,sigma_x:14,sigmoid_focal_loss:12,sign:[5,11],signific:14,significantli:5,similar:[4,5,7,8,12,13],similarli:[1,5,10,11],simpl:[1,5,14],simplifi:4,sinc:[2,4,5,7,14],singl:[2,5,10,11,12,14,15],sink:[5,11],site:1,size:[1,2,3,4,5,7,8,11,12,14,15],skateboard:[5,11],ski:[5,11],sky:7,slightli:[8,11,14],slower:11,small:[7,11],smaller:[12,14],smnt:7,smoke:7,snippet:1,snow:7,snowboard:[5,11],snowi:7,sofa:[5,11],soft:11,softmax:5,solar:[3,14],solarized_img:3,solid:14,solv:1,some:[3,4,5,7,8,11,12,14],sometim:9,sort:[1,4,12],sourc:[0,1,2,3,4,5,7,8,9,10,11,12,14,15],sox5ya1l24a:4,space:[11,12,14],spatial:[1,12],spatial_scal:12,speci:7,specif:[8,10,12],specifi:[7,8,9,10,11,12,14,15],sphinx:[0,1,2,3,4,5],sphinx_gallery_thumbnail_path:[1,3,5],split:[1,3,7,11,12],spoon:[5,11],sport:[5,11],squar:14,squeez:[1,2,3,5],squeezenet1_0:11,squeezenet1_1:11,src:1,ssd300:11,ssd300_vgg16:[5,11],ssd:5,ssdlite320:11,ssdlite320_mobilenet_v3_larg:[5,11],ssdlite:5,stabl:[9,13],stack:[2,4,5,14],stackbackward0:5,stage:9,standalon:14,standard:[7,11,14],start:[4,5,10,11],start_pt:10,startpoint:14,state:[7,14],state_dict:11,statist:4,statu:9,std:[5,11,14],stderr:11,step:[7,8],step_between_clip:7,stereo:7,still:14,stl10:9,stl10_binari:7,stochast:12,stochastic_depth:12,stochasticdepth:12,stop:[5,11],store:[7,10,12,14],str:[2,3,5,7,8,9,10,12,14,15],strategi:14,stream:[4,7,10],stream_id:[4,10],stream_typ:[4,10],stride:12,string:[4,7,8,9,10,15],structur:[4,7,10],studi:14,sub:8,subclass:7,subdir:7,subhead:8,submodul:8,subpackag
:11,subplot:[1,2,3,4,5],subset:[7,11],subtitl:4,subtract:14,succ:10,suggest:4,suitabl:[7,10],suitcas:[5,11],sum:12,summari:11,support:[2,7,9,10,11,12,14],suppos:[12,14],suppress:[8,12],suppress_diff_warn:8,sure:[3,14],surfboard:[5,11],svd:14,svhn:[3,9,14],swap:5,swapax:5,symbol:8,symmetr:14,system:15,tabl:[5,11],tag:9,take:[1,7,10,12],taken:[7,12],takewhil:[4,10],tap:8,tar:7,tarbal:7,target:[1,4,7,11,12,14],target_transform:7,target_typ:7,task:[1,8,11,14],techniqu:14,teddi:[5,11],tempfil:2,tempor:4,ten:[4,14],ten_crop:14,tencrop:14,tenni:[5,11],tensor:[0,1,4,5,6,7,9,10,11,12,15],tensors:4,tensorshap:1,term:9,terminolog:7,test10k:7,test50k:7,test:[4,7,9,10,11,14],text:14,than:[2,4,5,7,8,9,10,11,12,14,15],thei:[1,4,5,7,8,10,11,12,13,14],them:[1,2,4,5,7,8,13],therefor:14,thi:[1,2,3,4,5,7,8,9,10,11,12,13,14,15],think:5,those:[5,8],though:14,thread:10,three:[7,8],threshold:[3,5,11,14],through:[8,9],tie:[5,11],tight:[1,2,3,5],tight_layout:3,time:[1,2,3,4,5,8,9,10,11,14],time_:10,timestamp:[4,10],tip:8,titl:3,tl_flip:14,to_grayscal:[3,14],to_pil_imag:[1,5,14],to_tensor:14,toaster:[5,11],togeth:[2,4,8,14],toilet:[5,11],toothbrush:[5,11],top:[2,5,7,8,9,12,13,14,15],top_left:3,top_right:3,topilimag:[2,14],torch:[1,2,3,4,5,7,8,9,10,11,12,15],torch_model_zoo:11,torchaudio:9,torchelast:9,torchscript:[9,10,12,14],torchserv:9,torchtext:9,torchvis:[1,2,3,4,5,13],total:[1,2,3,4,5,6],totensor:[7,11,14],toward:11,tr_flip:14,trace:8,traceabl:8,tracer:8,tracer_kwarg:8,tradition:2,traffic:[5,11],train2017:11,train:[1,5,7,8,11,12,14],train_extra:7,train_nod:8,train_nov:7,train_return_nod:8,trainabl:11,trainable_backbone_lay:11,trainval:7,transfom:3,transform:[0,1,4,5,6,7,8,9,10,11,12,13],transform_input:11,transform_num:14,transformation_matrix:14,transformed_dog1:2,transformed_dog2:2,transformed_img:3,transforms_thumbnail:3,translat:[3,14],transpar:[10,14,15],treat:[5,8],tree:[5,7,13],trichod:7,trick:11,trigger:1,trivialaug:14,trivialaugmentwid:14,truck:[5,11],truetyp:15,truncat:[7,8],truth:11,tun:11,tune:14,tupl:[7,8,10,12,14,15],turn:[8,14],tutori:1,tvmonitor:[5,11],twice:[8,11],two:[1,4,5,7,8,10,11,12,14],txhxwxc:7,type:[1,2,4,7,8,9,10,11,12,14,15],typeerror:8,typic:[9,10],ucf101:9,ucf101traintestsplit:7,ucf:7,uint8:[1,5,10,14,15],uint8tensor:11,umbrella:[5,11],unchang:[10,14],under:[7,8],underli:14,unfortun:11,uniform:[4,14],uniformli:14,union:[7,8,10,12,15],uniqu:[1,4,10],unit:10,unlabel:7,unsqueez:1,unsqueezebackward0:5,unus:8,upcom:1,upper:14,use:[1,2,4,5,7,9,10,11,12,13,14,15],used:[1,2,5,7,8,9,10,11,12,14,15],useful:[8,12,14],user:[4,8,9,10,14],userwarn:1,uses:[7,9,11],using:[2,4,5,7,10,11,12,14,15],usp:9,util:[0,1,4,6,7,8,9,10,11,12,14],v_soccerjuggling_g23_c01:4,v_soccerjuggling_g24_c01:4,val2017:11,val:7,valid:[7,11],valu:[1,2,3,4,5,7,8,10,11,12,14,15],value_rang:15,valueerror:[4,7],vari:[5,11],variabl:11,variant:11,varieti:[1,8],variou:[2,3,10],vase:[5,11],vector:[7,14],veri:[1,11],verifi:2,version:[7,8,10,11,12,13,14],vertic:[3,14],vertical_flip:14,vflip:[3,14],vflipper:3,vframe:10,vgg11:11,vgg11_bn:11,vgg13:11,vgg13_bn:11,vgg16:11,vgg16_bn:11,vgg19:11,vgg19_bn:11,via:[8,12],vid:4,video:[0,2,6,7,9,13,14],video_arrai:10,video_classif:11,video_codec:10,video_fp:10,video_fram:4,video_object:4,video_path:[4,10],video_pt:4,video_read:[4,9],video_transform:4,videoclip:7,videoread:[4,10],videoresnet:11,view:[7,14],visibl:11,vision:[4,8,9,11,12,13],visiondataset:7,visual:[0,1,2,6,8,10,11,14,15],visualization_utils_thumbnail:5,voc2012:7,voc:[9,11],vocdetect:7,vocsegment:7,wai:[4,5,7,10,11],walk:8,want:[4,8,
9,10,11],warn:8,washington:7,websit:7,weight:[11,12,13],weird:11,well:[1,5,10,11,14],were:[1,5],what:[4,7],when:[3,5,7,8,11,12,14,15],where:[1,4,5,7,10,11,12,14,15],whether:[7,8,10],which:[1,3,4,5,7,8,10,11,12,13,14],whichev:10,whilst:10,white:14,whiten:14,whole:[10,14],whose:[7,8],wide:14,wide_resnet101_2:11,wide_resnet50_2:11,wider_face_split:7,wider_test:7,wider_train:7,wider_v:7,widerfac:9,width:[1,5,7,12,14,15],window:15,wine:[5,11],with_orig:3,within:[8,10],without:[11,12,14],won:12,word:15,work:[1,8,11,14],worker:7,would:[1,5,8,10],wrap:8,write:[8,10],write_fil:10,write_jpeg:10,write_png:10,write_video:10,written:10,wuzgd7c1pwa:4,www:7,xla:9,xmax:[1,5,15],xmin:[1,5,15],xml:7,xtick:[1,2,3,5],xticklabel:[1,2,3,5],xxx:7,xxy:7,xxz:7,xywh:12,xyxi:12,y_pred:2,ycbcr:14,year:7,yellow:5,yet:[9,13,14],yield:4,ylabel:3,ymax:[1,5,15],ymin:[1,5,15],you:[2,3,5,7,8,9,11,13,14,15],your:[1,2,7,8,11,14],ytick:[1,2,3,5],yticklabel:[1,2,3,5],zebra:[5,11],zero:[12,14],zhong:14,zip:[0,2,5,7]},titles:["Example gallery","Repurposing masks into bounding boxes","Tensor transforms and JIT","Illustration of transforms","Video API","Visualization utilities","Computation times","torchvision.datasets","torchvision.models.feature_extraction","torchvision","torchvision.io","torchvision.models","torchvision.ops","Training references","torchvision.transforms","torchvision.utils"],titleterms:{"400":7,"class":7,"function":[4,14],"new":4,alexnet:11,api:[4,8,10],appli:[3,4],aspp:11,augment:14,autoaug:3,automat:14,base:7,bound:[1,5],box:[1,5],build:4,caltech:7,can:4,caption:7,celeba:7,centercrop:3,characterist:11,cifar:7,cityscap:7,classif:11,cnn:11,coco:7,colorjitt:3,composit:14,comput:6,convers:14,convert:1,convolut:11,custom:7,data:4,dataest:4,dataset:[1,4,7],deeplabv3:11,densenet:11,deploy:2,detect:[1,7,11],easier:2,efficientnet:11,emnist:7,examin:4,exampl:[0,4,9],fakedata:7,fashion:7,faster:11,feature_extract:8,fine:10,fivecrop:3,flickr:7,fulli:11,galleri:0,gaussianblur:3,gener:14,googlenet:11,gpu:2,grain:10,grayscal:3,grid:5,hmdb51:7,illustr:3,imag:[2,5,10,14],imagenet:7,inaturalist:7,incept:11,indic:9,instanc:[5,11],introduct:4,jit:2,keypoint:11,kinet:7,kinetics400:4,kitti:7,kmnist:7,lfw:7,librari:9,lsun:7,mask:[1,5,11],mix:11,mnasnet:11,mnist:7,mobilenet:11,model:[5,8,11],network:11,object:[4,11],omniglot:7,onli:14,ops:12,packag:9,pad:3,person:11,phototour:7,pil:14,places365:7,properti:4,pytorch:9,qmnist:7,quantiz:11,randaug:3,random:3,randomadjustsharp:3,randomaffin:3,randomappli:3,randomautocontrast:3,randomcrop:3,randomequ:3,randomhorizontalflip:3,randominvert:3,randomli:[3,4],randomperspect:3,randomposter:3,randomresizedcrop:3,randomrot:3,randomsolar:3,randomverticalflip:3,read_video:4,refer:[8,9,13],regnet:11,repurpos:1,resiz:3,resnet:11,resnext:11,retinanet:11,runtim:11,sampl:4,sbd:7,sbu:7,scriptabl:[2,14],segment:[1,5,11],semant:[5,11],semeion:7,shufflenet:11,squeezenet:11,ssd:11,ssdlite:11,stl10:7,svhn:7,tensor:[2,14],time:6,torch:14,torchscript:2,torchvis:[7,8,9,10,11,12,14,15],train:[4,9,13],transform:[2,3,14],trivialaugmentwid:3,ucf101:7,usp:7,util:[5,15],vgg:11,via:2,video:[4,10,11],visual:[4,5],voc:7,wide:11,widerfac:7}}) \ No newline at end of file diff --git a/0.11./training_references.html b/0.11./training_references.html deleted file mode 100644 index 747160bd1df..00000000000 --- a/0.11./training_references.html +++ /dev/null @@ -1,663 +0,0 @@ - - - - - - - - - - - - Training references — Torchvision main documentation - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 
Training references

-

On top of the many models, datasets, and image transforms, Torchvision also provides training reference scripts. These are the scripts that we use to train the models, which are then made available with pre-trained weights.

-

These scripts are not part of the core package and are instead available on -GitHub. We currently -provide references for -classification, -detection, -segmentation, -similarity learning, -and video classification.

-

While these scripts are largely stable, they do not offer backward compatibility -guarantees.

-

In general, these scripts rely on the latest (not yet released) pytorch version -or the latest torchvision version. This means that to use them, you might need -to install the latest pytorch and torchvision versions, with e.g.:

-
conda install pytorch torchvision -c pytorch-nightly
-
-
-

If you need to rely on an older stable version of pytorch or torchvision, e.g. -torchvision 0.10, then it’s safer to use the scripts from that corresponding -release on GitHub, namely -https://github.com/pytorch/vision/tree/v0.10.0/references.

diff --git a/0.11./transforms.html b/0.11./transforms.html deleted file mode 100644 index 9859804d7de..00000000000 --- a/0.11./transforms.html +++ /dev/null @@ -1,3578 +0,0 @@
-torchvision.transforms — Torchvision main documentation

torchvision.transforms

-

Transforms are common image transformations. They can be chained together using Compose. -Most transform classes have a function equivalent: functional -transforms give fine-grained control over the -transformations. -This is useful if you have to build a more complex transformation pipeline -(e.g. in the case of segmentation tasks).

-

Most transformations accept both PIL -images and tensor images, although some transformations are PIL-only and some are tensor-only. The Conversion Transforms may be used to -convert to and from PIL images.
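For instance, a minimal round-trip sketch with the conversion transforms; the file name dog.jpg below is only a placeholder for any RGB image:

import torch
from PIL import Image
from torchvision import transforms

pil_img = Image.open("dog.jpg")                    # placeholder path; any RGB image works
tensor_img = transforms.PILToTensor()(pil_img)     # uint8 tensor of shape (C, H, W)
back_to_pil = transforms.ToPILImage()(tensor_img)  # back to a PIL image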

-

The transformations that accept tensor images also accept batches of tensor images. A Tensor Image is a tensor with (C, H, W) shape, where C is the number of channels and H and W are the image height and width. A batch of Tensor Images is a tensor of (B, C, H, W) shape, where B is the number of images in the batch.

-

The expected range of the values of a tensor image is implicitly defined by -the tensor dtype. Tensor images with a float dtype are expected to have -values in [0, 1). Tensor images with an integer dtype are expected to -have values in [0, MAX_DTYPE] where MAX_DTYPE is the largest value -that can be represented in that dtype.
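As a small sketch of this dtype convention (the shapes and values below are illustrative, not captured output):

import torch
from torchvision import transforms

uint8_img = torch.randint(0, 256, (3, 32, 32), dtype=torch.uint8)   # integer dtype: values in [0, 255]
float_img = transforms.ConvertImageDtype(torch.float32)(uint8_img)  # float dtype: rescaled to [0.0, 1.0]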

-

Randomized transformations will apply the same transformation to all the -images of a given batch, but they will produce different transformations -across calls. For reproducible transformations across calls, you may use -functional transforms.
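One common pattern, sketched below under the assumption of an image/segmentation-mask pair, is to sample the random parameters once with a transform's static get_params method and then apply the same functional calls to both inputs, which keeps the two spatially aligned:

import torch
import torchvision.transforms as T
import torchvision.transforms.functional as F

def paired_random_crop(image, mask, output_size=(128, 128)):
    # Sample the crop coordinates once, then apply them to both inputs.
    i, j, h, w = T.RandomCrop.get_params(image, output_size=output_size)
    image = F.crop(image, i, j, h, w)
    mask = F.crop(mask, i, j, h, w)
    # Flip both or neither, using a single coin toss.
    if torch.rand(1) < 0.5:
        image = F.hflip(image)
        mask = F.hflip(mask)
    return image, mask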

-

The following examples illustrate the use of the available transforms:

-
-
-
-

Warning

-

Since v0.8.0 all random transformations use the torch default random generator to sample random parameters. This is a backward-compatibility-breaking change, and users should set the random state as follows:

-
# Previous versions
-# import random
-# random.seed(12)
-
-# Now
-import torch
-torch.manual_seed(17)
-
-
-

Please keep in mind that the same seed for the torch random generator and the Python random generator will not produce the same results.

-
-
-

Scriptable transforms

-

In order to script the transformations, please use torch.nn.Sequential instead of Compose.

-
transforms = torch.nn.Sequential(
-    transforms.CenterCrop(10),
-    transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
-)
-scripted_transforms = torch.jit.script(transforms)
-
-
-

Make sure to use only scriptable transformations, i.e. those that work with torch.Tensor and do not require lambda functions or PIL.Image.

-

For any custom transformations to be used with torch.jit.script, they should be derived from torch.nn.Module.
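A minimal sketch of such a custom transform; the 255-rescaling it performs is only an illustrative operation, not a torchvision API:

import torch
import torch.nn as nn

class ScaleTo01(nn.Module):
    """Hypothetical custom transform: rescale a uint8 tensor image to [0, 1]."""
    def forward(self, img: torch.Tensor) -> torch.Tensor:
        return img.to(torch.float32) / 255.0

scripted = torch.jit.script(ScaleTo01())  # works because the transform derives from nn.Module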

-
-
-

Compositions of transforms

-
-
-class torchvision.transforms.Compose(transforms)[source]
-

Composes several transforms together. This transform does not support torchscript. -Please, see the note below.

-
-
Parameters
-

transforms (list of Transform objects) – list of transforms to compose.

-
-
-

Example

-
>>> transforms.Compose([
->>>     transforms.CenterCrop(10),
->>>     transforms.PILToTensor(),
->>>     transforms.ConvertImageDtype(torch.float),
->>> ])
-
-
-
-

Note

-

In order to script the transformations, please use torch.nn.Sequential as below.

-
>>> transforms = torch.nn.Sequential(
->>>     transforms.CenterCrop(10),
->>>     transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
->>> )
->>> scripted_transforms = torch.jit.script(transforms)
-
-
-

Make sure to use only scriptable transformations, i.e. those that work with torch.Tensor and do not require lambda functions or PIL.Image.

-
-

Examples using Compose:

-
-
- -
-
-

Transforms on PIL Image and torch.*Tensor

-
-
-class torchvision.transforms.CenterCrop(size)[source]
-

Crops the given image at the center. -If the image is torch Tensor, it is expected -to have […, H, W] shape, where … means an arbitrary number of leading dimensions. -If image size is smaller than output size along any edge, image is padded with 0 and then center cropped.

-
-
Parameters
-

size (sequence or int) – Desired output size of the crop. If size is an -int instead of sequence like (h, w), a square crop (size, size) is -made. If provided a sequence of length 1, it will be interpreted as (size[0], size[0]).

-
-
-

Examples using CenterCrop:

-
-
-forward(img)[source]
-
-
Parameters
-

img (PIL Image or Tensor) – Image to be cropped.

-
-
Returns
-

Cropped image.

-
-
Return type
-

PIL Image or Tensor

-
-
-
- -
- -
-
-class torchvision.transforms.ColorJitter(brightness=0, contrast=0, saturation=0, hue=0)[source]
-

Randomly change the brightness, contrast, saturation and hue of an image. -If the image is torch Tensor, it is expected -to have […, 1 or 3, H, W] shape, where … means an arbitrary number of leading dimensions. -If img is PIL Image, mode “1”, “I”, “F” and modes with transparency (alpha channel) are not supported.

-
-
Parameters
-
    -
  • brightness (float or tuple of python:float (min, max)) – How much to jitter brightness. -brightness_factor is chosen uniformly from [max(0, 1 - brightness), 1 + brightness] -or the given [min, max]. Should be non negative numbers.

  • -
  • contrast (float or tuple of python:float (min, max)) – How much to jitter contrast. -contrast_factor is chosen uniformly from [max(0, 1 - contrast), 1 + contrast] -or the given [min, max]. Should be non negative numbers.

  • -
  • saturation (float or tuple of python:float (min, max)) – How much to jitter saturation. -saturation_factor is chosen uniformly from [max(0, 1 - saturation), 1 + saturation] -or the given [min, max]. Should be non negative numbers.

  • -
  • hue (float or tuple of python:float (min, max)) – How much to jitter hue. -hue_factor is chosen uniformly from [-hue, hue] or the given [min, max]. -Should have 0<= hue <= 0.5 or -0.5 <= min <= max <= 0.5.

  • -
-
-
-

Examples using ColorJitter:

-
-
-forward(img)[source]
-
-
Parameters
-

img (PIL Image or Tensor) – Input image.

-
-
Returns
-

Color jittered image.

-
-
Return type
-

PIL Image or Tensor

-
-
-
- -
-
-static get_params(brightness: Optional[List[float]], contrast: Optional[List[float]], saturation: Optional[List[float]], hue: Optional[List[float]]) → Tuple[torch.Tensor, Optional[float], Optional[float], Optional[float], Optional[float]][source]
-

Get the parameters for the randomized transform to be applied on image.

-
-
Parameters
-
    -
  • brightness (tuple of python:float (min, max), optional) – The range from which the brightness_factor is chosen -uniformly. Pass None to turn off the transformation.

  • -
  • contrast (tuple of python:float (min, max), optional) – The range from which the contrast_factor is chosen -uniformly. Pass None to turn off the transformation.

  • -
  • saturation (tuple of python:float (min, max), optional) – The range from which the saturation_factor is chosen -uniformly. Pass None to turn off the transformation.

  • -
  • hue (tuple of python:float (min, max), optional) – The range from which the hue_factor is chosen uniformly. -Pass None to turn off the transformation.

  • -
-
-
Returns
-

The parameters used to apply the randomized transform -along with their random order.

-
-
Return type
-

tuple

-
-
-
- -
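A sketch (not library code) of using the get_params method above to apply one identical color jitter to several inputs, for example two frames of a video; the helper re-applies the sampled factors in the sampled order via the functional API:

import torchvision.transforms as T
import torchvision.transforms.functional as F

jitter = T.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4, hue=0.1)
fn_idx, b, c, s, h = T.ColorJitter.get_params(
    jitter.brightness, jitter.contrast, jitter.saturation, jitter.hue)

def apply_jitter(img):
    # Apply the sampled factors in the sampled order.
    for fn_id in fn_idx:
        if fn_id == 0 and b is not None:
            img = F.adjust_brightness(img, b)
        elif fn_id == 1 and c is not None:
            img = F.adjust_contrast(img, c)
        elif fn_id == 2 and s is not None:
            img = F.adjust_saturation(img, s)
        elif fn_id == 3 and h is not None:
            img = F.adjust_hue(img, h)
    return img

# frame1, frame2 are assumed PIL Images or tensor images:
# jittered1, jittered2 = apply_jitter(frame1), apply_jitter(frame2)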
- -
-
-class torchvision.transforms.FiveCrop(size)[source]
-

Crop the given image into four corners and the central crop. -If the image is torch Tensor, it is expected -to have […, H, W] shape, where … means an arbitrary number of leading -dimensions

-
-

Note

-

This transform returns a tuple of images and there may be a mismatch in the number of -inputs and targets your Dataset returns. See below for an example of how to deal with -this.

-
-
-
Parameters
-

size (sequence or int) – Desired output size of the crop. If size is an int -instead of sequence like (h, w), a square crop of size (size, size) is made. -If provided a sequence of length 1, it will be interpreted as (size[0], size[0]).

-
-
-

Example

-
>>> transform = Compose([
->>>    FiveCrop(size), # this is a list of PIL Images
->>>    Lambda(lambda crops: torch.stack([ToTensor()(crop) for crop in crops])) # returns a 4D tensor
->>> ])
->>> #In your test loop you can do the following:
->>> input, target = batch # input is a 5d tensor, target is 2d
->>> bs, ncrops, c, h, w = input.size()
->>> result = model(input.view(-1, c, h, w)) # fuse batch size and ncrops
->>> result_avg = result.view(bs, ncrops, -1).mean(1) # avg over crops
-
-
-

Examples using FiveCrop:

-
-
-forward(img)[source]
-
-
Parameters
-

img (PIL Image or Tensor) – Image to be cropped.

-
-
Returns
-

tuple of 5 images. Image can be PIL Image or Tensor

-
-
-
- -
- -
-
-class torchvision.transforms.Grayscale(num_output_channels=1)[source]
-

Convert image to grayscale. -If the image is torch Tensor, it is expected -to have […, 3, H, W] shape, where … means an arbitrary number of leading dimensions

-
-
Parameters
-

num_output_channels (int) – (1 or 3) number of channels desired for output image

-
-
Returns
-

Grayscale version of the input.

-
    -
  • If num_output_channels == 1 : returned image is single channel

  • -
  • If num_output_channels == 3 : returned image is 3 channel with r == g == b

  • -
-

-
-
Return type
-

PIL Image

-
-
-

Examples using Grayscale:

-
-
-forward(img)[source]
-
-
Parameters
-

img (PIL Image or Tensor) – Image to be converted to grayscale.

-
-
Returns
-

Grayscaled image.

-
-
Return type
-

PIL Image or Tensor

-
-
-
- -
- -
-
-class torchvision.transforms.Pad(padding, fill=0, padding_mode='constant')[source]
-

Pad the given image on all sides with the given “pad” value. -If the image is torch Tensor, it is expected -to have […, H, W] shape, where … means at most 2 leading dimensions for mode reflect and symmetric, -at most 3 leading dimensions for mode edge, -and an arbitrary number of leading dimensions for mode constant

-
-
Parameters
-
    -
  • padding (int or sequence) –

    Padding on each border. If a single int is provided this -is used to pad all borders. If sequence of length 2 is provided this is the padding -on left/right and top/bottom respectively. If a sequence of length 4 is provided -this is the padding for the left, top, right and bottom borders respectively.

    -
    -

    Note

    -

    In torchscript mode padding as single int is not supported, use a sequence of -length 1: [padding, ].

    -
    -

  • -
  • fill (number or str or tuple) – Pixel fill value for constant fill. Default is 0. If a tuple of -length 3, it is used to fill R, G, B channels respectively. -This value is only used when the padding_mode is constant. -Only number is supported for torch Tensor. -Only int or str or tuple value is supported for PIL Image.

  • -
  • padding_mode (str) –

    Type of padding. Should be: constant, edge, reflect or symmetric. -Default is constant.

    -
      -
    • constant: pads with a constant value, this value is specified with fill

    • -
    • edge: pads with the last value at the edge of the image. -If input a 5D torch Tensor, the last 3 dimensions will be padded instead of the last 2

    • -
    • reflect: pads with reflection of image without repeating the last value on the edge. -For example, padding [1, 2, 3, 4] with 2 elements on both sides in reflect mode -will result in [3, 2, 1, 2, 3, 4, 3, 2]

    • -
    • symmetric: pads with reflection of image repeating the last value on the edge. -For example, padding [1, 2, 3, 4] with 2 elements on both sides in symmetric mode -will result in [2, 1, 1, 2, 3, 4, 4, 3]

    • -
    -

  • -
-
-
-

Examples using Pad:

-
-
-forward(img)[source]
-
-
Parameters
-

img (PIL Image or Tensor) – Image to be padded.

-
-
Returns
-

Padded image.

-
-
Return type
-

PIL Image or Tensor

-
-
-
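To make Pad's padding_mode options concrete, here is a small sketch on a single row of pixel values; the commented results follow from the reflect/symmetric rules described above rather than from captured output:

import torch
import torchvision.transforms.functional as F

row = torch.tensor([[[1, 2, 3, 4]]], dtype=torch.float32)          # shape (1, 1, 4), i.e. [..., H, W]

F.pad(row, padding=[2, 0], padding_mode="reflect")                 # -> [3, 2, 1, 2, 3, 4, 3, 2]
F.pad(row, padding=[2, 0], padding_mode="symmetric")               # -> [2, 1, 1, 2, 3, 4, 4, 3]
F.pad(row, padding=[2, 0], fill=0, padding_mode="constant")        # -> [0, 0, 1, 2, 3, 4, 0, 0]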
- -
- -
-
-class torchvision.transforms.RandomAffine(degrees, translate=None, scale=None, shear=None, interpolation=<InterpolationMode.NEAREST: 'nearest'>, fill=0, fillcolor=None, resample=None)[source]
-

Random affine transformation of the image keeping center invariant. -If the image is torch Tensor, it is expected -to have […, H, W] shape, where … means an arbitrary number of leading dimensions.

-
-
Parameters
-
    -
  • degrees (sequence or number) – Range of degrees to select from. -If degrees is a number instead of sequence like (min, max), the range of degrees -will be (-degrees, +degrees). Set to 0 to deactivate rotations.

  • -
  • translate (tuple, optional) – tuple of maximum absolute fraction for horizontal -and vertical translations. For example translate=(a, b), then horizontal shift -is randomly sampled in the range -img_width * a < dx < img_width * a and vertical shift is -randomly sampled in the range -img_height * b < dy < img_height * b. Will not translate by default.

  • -
  • scale (tuple, optional) – scaling factor interval, e.g (a, b), then scale is -randomly sampled from the range a <= scale <= b. Will keep original scale by default.

  • -
  • shear (sequence or number, optional) – Range of degrees to select from. -If shear is a number, a shear parallel to the x axis in the range (-shear, +shear) -will be applied. Else if shear is a sequence of 2 values a shear parallel to the x axis in the -range (shear[0], shear[1]) will be applied. Else if shear is a sequence of 4 values, -a x-axis shear in (shear[0], shear[1]) and y-axis shear in (shear[2], shear[3]) will be applied. -Will not apply shear by default.

  • -
  • interpolation (InterpolationMode) – Desired interpolation enum defined by -torchvision.transforms.InterpolationMode. Default is InterpolationMode.NEAREST. -If input is Tensor, only InterpolationMode.NEAREST, InterpolationMode.BILINEAR are supported. -For backward compatibility integer values (e.g. PIL.Image.NEAREST) are still acceptable.

  • -
  • fill (sequence or number) – Pixel fill value for the area outside the transformed -image. Default is 0. If given a number, the value is used for all bands respectively.

  • -
  • fillcolor (sequence or number, optional) – deprecated argument and will be removed since v0.10.0. -Please use the fill parameter instead.

  • -
  • resample (int, optional) – deprecated argument and will be removed since v0.10.0. -Please use the interpolation parameter instead.

  • -
-
-
-

Examples using RandomAffine:

-
-
-forward(img)[source]
-
-

img (PIL Image or Tensor): Image to be transformed.

-
-
-
Returns
-

Affine transformed image.

-
-
Return type
-

PIL Image or Tensor

-
-
-
- -
-
-static get_params(degrees: List[float], translate: Optional[List[float]], scale_ranges: Optional[List[float]], shears: Optional[List[float]], img_size: List[int]) → Tuple[float, Tuple[int, int], float, Tuple[float, float]][source]
-

Get parameters for affine transformation

-
-
Returns
-

params to be passed to the affine transformation

-
-
-
- -
- -
-
-class torchvision.transforms.RandomApply(transforms, p=0.5)[source]
-

Apply randomly a list of transformations with a given probability.

-
-

Note

-

In order to script the transformation, please use torch.nn.ModuleList as input instead of list/tuple of -transforms as shown below:

-
>>> transforms = transforms.RandomApply(torch.nn.ModuleList([
->>>     transforms.ColorJitter(),
->>> ]), p=0.3)
->>> scripted_transforms = torch.jit.script(transforms)
-
-
-

Make sure to use only scriptable transformations, i.e. that work with torch.Tensor, does not require -lambda functions or PIL.Image.

-
-
-
Parameters
-
-
-
-

Examples using RandomApply:

-
- -
-
-class torchvision.transforms.RandomCrop(size, padding=None, pad_if_needed=False, fill=0, padding_mode='constant')[source]
-

Crop the given image at a random location. -If the image is torch Tensor, it is expected -to have […, H, W] shape, where … means an arbitrary number of leading dimensions, -but if non-constant padding is used, the input is expected to have at most 2 leading dimensions

-
-
Parameters
-
    -
  • size (sequence or int) – Desired output size of the crop. If size is an -int instead of sequence like (h, w), a square crop (size, size) is -made. If provided a sequence of length 1, it will be interpreted as (size[0], size[0]).

  • -
  • padding (int or sequence, optional) –

    Optional padding on each border -of the image. Default is None. If a single int is provided this -is used to pad all borders. If sequence of length 2 is provided this is the padding -on left/right and top/bottom respectively. If a sequence of length 4 is provided -this is the padding for the left, top, right and bottom borders respectively.

    -
    -

    Note

    -

    In torchscript mode padding as single int is not supported, use a sequence of -length 1: [padding, ].

    -
    -

  • -
  • pad_if_needed (boolean) – It will pad the image if smaller than the -desired size to avoid raising an exception. Since cropping is done -after padding, the padding seems to be done at a random offset.

  • -
  • fill (number or str or tuple) – Pixel fill value for constant fill. Default is 0. If a tuple of -length 3, it is used to fill R, G, B channels respectively. -This value is only used when the padding_mode is constant. -Only number is supported for torch Tensor. -Only int or str or tuple value is supported for PIL Image.

  • -
  • padding_mode (str) –

    Type of padding. Should be: constant, edge, reflect or symmetric. -Default is constant.

    -
      -
    • constant: pads with a constant value, this value is specified with fill

    • -
    • edge: pads with the last value at the edge of the image. -If input a 5D torch Tensor, the last 3 dimensions will be padded instead of the last 2

    • -
    • reflect: pads with reflection of image without repeating the last value on the edge. -For example, padding [1, 2, 3, 4] with 2 elements on both sides in reflect mode -will result in [3, 2, 1, 2, 3, 4, 3, 2]

    • -
    • symmetric: pads with reflection of image repeating the last value on the edge. -For example, padding [1, 2, 3, 4] with 2 elements on both sides in symmetric mode -will result in [2, 1, 1, 2, 3, 4, 4, 3]

    • -
    -

  • -
-
-
-

Examples using RandomCrop:

-
-
-forward(img)[source]
-
-
Parameters
-

img (PIL Image or Tensor) – Image to be cropped.

-
-
Returns
-

Cropped image.

-
-
Return type
-

PIL Image or Tensor

-
-
-
- -
-
-static get_params(img: torch.Tensor, output_size: Tuple[int, int]) → Tuple[int, int, int, int][source]
-

Get parameters for crop for a random crop.

-
-
Parameters
-
    -
  • img (PIL Image or Tensor) – Image to be cropped.

  • -
  • output_size (tuple) – Expected output size of the crop.

  • -
-
-
Returns
-

params (i, j, h, w) to be passed to crop for random crop.

-
-
Return type
-

tuple

-
-
-
- -
- -
-
-class torchvision.transforms.RandomGrayscale(p=0.1)[source]
-

Randomly convert image to grayscale with a probability of p (default 0.1). -If the image is torch Tensor, it is expected -to have […, 3, H, W] shape, where … means an arbitrary number of leading dimensions

-
-
Parameters
-

p (float) – probability that image should be converted to grayscale.

-
-
Returns
-

Grayscale version of the input image with probability p, unchanged with probability (1-p). If the input image is 1 channel, the grayscale version is 1 channel; if the input image is 3 channel, the grayscale version is 3 channel with r == g == b.

-
-
Return type
-

PIL Image or Tensor

-
-
-
-
-forward(img)[source]
-
-
Parameters
-

img (PIL Image or Tensor) – Image to be converted to grayscale.

-
-
Returns
-

Randomly grayscaled image.

-
-
Return type
-

PIL Image or Tensor

-
-
-
- -
- -
-
-class torchvision.transforms.RandomHorizontalFlip(p=0.5)[source]
-

Horizontally flip the given image randomly with a given probability. -If the image is torch Tensor, it is expected -to have […, H, W] shape, where … means an arbitrary number of leading -dimensions

-
-
Parameters
-

p (float) – probability of the image being flipped. Default value is 0.5

-
-
-

Examples using RandomHorizontalFlip:

-
-
-forward(img)[source]
-
-
Parameters
-

img (PIL Image or Tensor) – Image to be flipped.

-
-
Returns
-

Randomly flipped image.

-
-
Return type
-

PIL Image or Tensor

-
-
-
- -
- -
-
-class torchvision.transforms.RandomPerspective(distortion_scale=0.5, p=0.5, interpolation=<InterpolationMode.BILINEAR: 'bilinear'>, fill=0)[source]
-

Performs a random perspective transformation of the given image with a given probability. -If the image is torch Tensor, it is expected -to have […, H, W] shape, where … means an arbitrary number of leading dimensions.

-
-
Parameters
-
    -
  • distortion_scale (float) – argument to control the degree of distortion and ranges from 0 to 1. -Default is 0.5.

  • -
  • p (float) – probability of the image being transformed. Default is 0.5.

  • -
  • interpolation (InterpolationMode) – Desired interpolation enum defined by -torchvision.transforms.InterpolationMode. Default is InterpolationMode.BILINEAR. -If input is Tensor, only InterpolationMode.NEAREST, InterpolationMode.BILINEAR are supported. -For backward compatibility integer values (e.g. PIL.Image.NEAREST) are still acceptable.

  • -
  • fill (sequence or number) – Pixel fill value for the area outside the transformed -image. Default is 0. If given a number, the value is used for all bands respectively.

  • -
-
-
-

Examples using RandomPerspective:

-
-
-forward(img)[source]
-
-
Parameters
-

img (PIL Image or Tensor) – Image to be perspectively transformed.

-
-
Returns
-

Randomly transformed image.

-
-
Return type
-

PIL Image or Tensor

-
-
-
- -
-
-static get_params(width: int, height: int, distortion_scale: float) → Tuple[List[List[int]], List[List[int]]][source]
-

Get parameters for perspective for a random perspective transform.

-
-
Parameters
-
    -
  • width (int) – width of the image.

  • -
  • height (int) – height of the image.

  • -
  • distortion_scale (float) – argument to control the degree of distortion and ranges from 0 to 1.

  • -
-
-
Returns
-

List containing [top-left, top-right, bottom-right, bottom-left] of the original image, -List containing [top-left, top-right, bottom-right, bottom-left] of the transformed image.

-
-
-
- -
- -
-
-class torchvision.transforms.RandomResizedCrop(size, scale=(0.08, 1.0), ratio=(0.75, 1.3333333333333333), interpolation=<InterpolationMode.BILINEAR: 'bilinear'>)[source]
-

Crop a random portion of image and resize it to a given size.

-

If the image is torch Tensor, it is expected -to have […, H, W] shape, where … means an arbitrary number of leading dimensions

-

A crop of the original image is made: the crop has a random area (H * W) -and a random aspect ratio. This crop is finally resized to the given -size. This is popularly used to train the Inception networks.
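As a sketch, a typical Inception-style training pipeline built around this transform might look as follows; the normalization statistics are the commonly used ImageNet values, not something mandated by this class:

import torch
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224),          # random area and aspect ratio, resized to 224x224
    transforms.RandomHorizontalFlip(),
    transforms.PILToTensor(),
    transforms.ConvertImageDtype(torch.float32),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])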

-
-
Parameters
-
    -
  • size (int or sequence) –

    expected output size of the crop, for each edge. If size is an -int instead of sequence like (h, w), a square output size (size, size) is -made. If provided a sequence of length 1, it will be interpreted as (size[0], size[0]).

    -
    -

    Note

    -

    In torchscript mode size as single int is not supported, use a sequence of length 1: [size, ].

    -
    -

  • -
  • scale (tuple of python:float) – Specifies the lower and upper bounds for the random area of the crop, -before resizing. The scale is defined with respect to the area of the original image.

  • -
  • ratio (tuple of python:float) – lower and upper bounds for the random aspect ratio of the crop, before -resizing.

  • -
  • interpolation (InterpolationMode) – Desired interpolation enum defined by -torchvision.transforms.InterpolationMode. Default is InterpolationMode.BILINEAR. -If input is Tensor, only InterpolationMode.NEAREST, InterpolationMode.BILINEAR and -InterpolationMode.BICUBIC are supported. -For backward compatibility integer values (e.g. PIL.Image.NEAREST) are still acceptable.

  • -
-
-
-

Examples using RandomResizedCrop:

-
-
-forward(img)[source]
-
-
Parameters
-

img (PIL Image or Tensor) – Image to be cropped and resized.

-
-
Returns
-

Randomly cropped and resized image.

-
-
Return type
-

PIL Image or Tensor

-
-
-
- -
-
-static get_params(img: torch.Tensor, scale: List[float], ratio: List[float]) → Tuple[int, int, int, int][source]
-

Get parameters for crop for a random sized crop.

-
-
Parameters
-
    -
  • img (PIL Image or Tensor) – Input image.

  • -
scale (list) – range of the fraction of the original image area to be cropped

  • -
ratio (list) – range of aspect ratios of the crop

  • -
-
-
Returns
-

params (i, j, h, w) to be passed to crop for a random -sized crop.

-
-
Return type
-

tuple

-
-
-
- -
- -
-
-class torchvision.transforms.RandomRotation(degrees, interpolation=<InterpolationMode.NEAREST: 'nearest'>, expand=False, center=None, fill=0, resample=None)[source]
-

Rotate the image by angle. -If the image is torch Tensor, it is expected -to have […, H, W] shape, where … means an arbitrary number of leading dimensions.

-
-
Parameters
-
    -
  • degrees (sequence or number) – Range of degrees to select from. -If degrees is a number instead of sequence like (min, max), the range of degrees -will be (-degrees, +degrees).

  • -
  • interpolation (InterpolationMode) – Desired interpolation enum defined by -torchvision.transforms.InterpolationMode. Default is InterpolationMode.NEAREST. -If input is Tensor, only InterpolationMode.NEAREST, InterpolationMode.BILINEAR are supported. -For backward compatibility integer values (e.g. PIL.Image.NEAREST) are still acceptable.

  • -
  • expand (bool, optional) – Optional expansion flag. -If true, expands the output to make it large enough to hold the entire rotated image. -If false or omitted, make the output image the same size as the input image. -Note that the expand flag assumes rotation around the center and no translation.

  • -
  • center (sequence, optional) – Optional center of rotation, (x, y). Origin is the upper left corner. -Default is the center of the image.

  • -
  • fill (sequence or number) – Pixel fill value for the area outside the rotated -image. Default is 0. If given a number, the value is used for all bands respectively.

  • -
  • resample (int, optional) – deprecated argument and will be removed since v0.10.0. -Please use the interpolation parameter instead.

  • -
-
-
-

Examples using RandomRotation:

-
-
-forward(img)[source]
-
-
Parameters
-

img (PIL Image or Tensor) – Image to be rotated.

-
-
Returns
-

Rotated image.

-
-
Return type
-

PIL Image or Tensor

-
-
-
- -
-
-static get_params(degrees: List[float]) → float[source]
-

Get parameters for rotate for a random rotation.

-
-
Returns
-

angle parameter to be passed to rotate for random rotation.

-
-
Return type
-

float

-
-
-
- -
- -
-
-class torchvision.transforms.RandomSizedCrop(*args, **kwargs)[source]
-

Note: This transform is deprecated in favor of RandomResizedCrop.

-
- -
-
-class torchvision.transforms.RandomVerticalFlip(p=0.5)[source]
-

Vertically flip the given image randomly with a given probability. -If the image is torch Tensor, it is expected -to have […, H, W] shape, where … means an arbitrary number of leading -dimensions

-
-
Parameters
-

p (float) – probability of the image being flipped. Default value is 0.5

-
-
-

Examples using RandomVerticalFlip:

-
-
-forward(img)[source]
-
-
Parameters
-

img (PIL Image or Tensor) – Image to be flipped.

-
-
Returns
-

Randomly flipped image.

-
-
Return type
-

PIL Image or Tensor

-
-
-
- -
- -
-
-class torchvision.transforms.Resize(size, interpolation=<InterpolationMode.BILINEAR: 'bilinear'>, max_size=None, antialias=None)[source]
-

Resize the input image to the given size. -If the image is torch Tensor, it is expected -to have […, H, W] shape, where … means an arbitrary number of leading dimensions

-
-

Warning

-

The output image might be different depending on its type: when downsampling, the interpolation of PIL images and tensors is slightly different, because PIL applies antialiasing. This may lead to significant differences in the performance of a network. Therefore, it is preferable to train and serve a model with the same input types. See also the antialias parameter below, which can help make the output of PIL images and tensors closer.
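A short sketch of the tensor code path with anti-aliasing enabled, which should bring the result closer to the PIL output; img below is a stand-in float tensor image:

import torch
from torchvision import transforms

img = torch.rand(3, 512, 512)                    # stand-in float tensor image in [0, 1]
resize = transforms.Resize(256, antialias=True)  # bilinear interpolation with anti-aliasing
small = resize(img)                              # shape (3, 256, 256) for this square input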

-
-
-
Parameters
-
    -
  • size (sequence or int) –

    Desired output size. If size is a sequence like -(h, w), output size will be matched to this. If size is an int, -smaller edge of the image will be matched to this number. -i.e, if height > width, then image will be rescaled to -(size * height / width, size).

    -
    -

    Note

    -

    In torchscript mode size as single int is not supported, use a sequence of length 1: [size, ].

    -
    -

  • -
  • interpolation (InterpolationMode) – Desired interpolation enum defined by -torchvision.transforms.InterpolationMode. Default is InterpolationMode.BILINEAR. -If input is Tensor, only InterpolationMode.NEAREST, InterpolationMode.BILINEAR and -InterpolationMode.BICUBIC are supported. -For backward compatibility integer values (e.g. PIL.Image.NEAREST) are still acceptable.

  • -
  • max_size (int, optional) – The maximum allowed for the longer edge of -the resized image: if the longer edge of the image is greater -than max_size after being resized according to size, then -the image is resized again so that the longer edge is equal to -max_size. As a result, size might be overruled, i.e the -smaller edge may be shorter than size. This is only supported -if size is an int (or a sequence of length 1 in torchscript -mode).

  • -
  • antialias (bool, optional) –

antialias flag. If img is a PIL Image, the flag is ignored and anti-aliasing is always used. If img is a Tensor, the flag is False by default and can be set to True only for InterpolationMode.BILINEAR mode. This can help make the output for PIL images and tensors closer.

    -
    -

    Warning

    -

    There is no autodiff support for antialias=True option with input img as Tensor.

    -
    -

  • -
-
-
-

Examples using Resize:

-
-
-
-forward(img)[source]
-
-
Parameters
-

img (PIL Image or Tensor) – Image to be scaled.

-
-
Returns
-

Rescaled image.

-
-
Return type
-

PIL Image or Tensor

-
-
-
- -
- -
-
-class torchvision.transforms.Scale(*args, **kwargs)[source]
-

Note: This transform is deprecated in favor of Resize.

-
- -
-
-class torchvision.transforms.TenCrop(size, vertical_flip=False)[source]
-

Crop the given image into four corners and the central crop plus the flipped version of -these (horizontal flipping is used by default). -If the image is torch Tensor, it is expected -to have […, H, W] shape, where … means an arbitrary number of leading -dimensions

-
-

Note

-

This transform returns a tuple of images and there may be a mismatch in the number of -inputs and targets your Dataset returns. See below for an example of how to deal with -this.

-
-
-
Parameters
-
    -
  • size (sequence or int) – Desired output size of the crop. If size is an -int instead of sequence like (h, w), a square crop (size, size) is -made. If provided a sequence of length 1, it will be interpreted as (size[0], size[0]).

  • -
  • vertical_flip (bool) – Use vertical flipping instead of horizontal

  • -
-
-
-

Example

-
>>> transform = Compose([
->>>    TenCrop(size), # this is a list of PIL Images
->>>    Lambda(lambda crops: torch.stack([ToTensor()(crop) for crop in crops])) # returns a 4D tensor
->>> ])
->>> #In your test loop you can do the following:
->>> input, target = batch # input is a 5d tensor, target is 2d
->>> bs, ncrops, c, h, w = input.size()
->>> result = model(input.view(-1, c, h, w)) # fuse batch size and ncrops
->>> result_avg = result.view(bs, ncrops, -1).mean(1) # avg over crops
-
-
-
-
-forward(img)[source]
-
-
Parameters
-

img (PIL Image or Tensor) – Image to be cropped.

-
-
Returns
-

tuple of 10 images. Image can be PIL Image or Tensor

-
-
-
- -
- -
-
-class torchvision.transforms.GaussianBlur(kernel_size, sigma=(0.1, 2.0))[source]
-

Blurs image with randomly chosen Gaussian blur. -If the image is torch Tensor, it is expected -to have […, C, H, W] shape, where … means an arbitrary number of leading dimensions.

-
-
Parameters
-
    -
  • kernel_size (int or sequence) – Size of the Gaussian kernel.

  • -
  • sigma (float or tuple of python:float (min, max)) – Standard deviation to be used for -creating kernel to perform blurring. If float, sigma is fixed. If it is tuple -of float (min, max), sigma is chosen uniformly at random to lie in the -given range.

  • -
-
-
Returns
-

Gaussian blurred version of the input image.

-
-
Return type
-

PIL Image or Tensor

-
-
-

Examples using GaussianBlur:
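A minimal usage sketch, complementing the gallery examples referenced here; the tensor is a stand-in image:

import torch
from torchvision import transforms

blurrer = transforms.GaussianBlur(kernel_size=(5, 9), sigma=(0.1, 5.0))
img = torch.rand(3, 224, 224)   # stand-in tensor image
blurred = blurrer(img)          # a new sigma is drawn on every call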

-
-
-forward(img: torch.Tensor) → torch.Tensor[source]
-
-
Parameters
-

img (PIL Image or Tensor) – image to be blurred.

-
-
Returns
-

Gaussian blurred image

-
-
Return type
-

PIL Image or Tensor

-
-
-
- -
-
-static get_params(sigma_min: float, sigma_max: float) → float[source]
-

Choose sigma for random gaussian blurring.

-
-
Parameters
-
    -
  • sigma_min (float) – Minimum standard deviation that can be chosen for blurring kernel.

  • -
  • sigma_max (float) – Maximum standard deviation that can be chosen for blurring kernel.

  • -
-
-
Returns
-

Standard deviation to be passed to calculate kernel for gaussian blurring.

-
-
Return type
-

float

-
-
-
- -
- -
-
-class torchvision.transforms.RandomInvert(p=0.5)[source]
-

Inverts the colors of the given image randomly with a given probability. -If img is a Tensor, it is expected to be in […, 1 or 3, H, W] format, -where … means it can have an arbitrary number of leading dimensions. -If img is PIL Image, it is expected to be in mode “L” or “RGB”.

-
-
Parameters
-

p (float) – probability of the image being color inverted. Default value is 0.5

-
-
-

Examples using RandomInvert:

-
-
-forward(img)[source]
-
-
Parameters
-

img (PIL Image or Tensor) – Image to be inverted.

-
-
Returns
-

Randomly color inverted image.

-
-
Return type
-

PIL Image or Tensor

-
-
-
- -
- -
-
-class torchvision.transforms.RandomPosterize(bits, p=0.5)[source]
-

Posterize the image randomly with a given probability by reducing the -number of bits for each color channel. If the image is torch Tensor, it should be of type torch.uint8, -and it is expected to have […, 1 or 3, H, W] shape, where … means an arbitrary number of leading dimensions. -If img is PIL Image, it is expected to be in mode “L” or “RGB”.

-
-
Parameters
-
    -
  • bits (int) – number of bits to keep for each channel (0-8)

  • -
p (float) – probability of the image being posterized. Default value is 0.5

  • -
-
-
-

Examples using RandomPosterize:

-
-
-forward(img)[source]
-
-
Parameters
-

img (PIL Image or Tensor) – Image to be posterized.

-
-
Returns
-

Randomly posterized image.

-
-
Return type
-

PIL Image or Tensor

-
-
-
- -
- -
-
-class torchvision.transforms.RandomSolarize(threshold, p=0.5)[source]
-

Solarize the image randomly with a given probability by inverting all pixel -values above a threshold. If img is a Tensor, it is expected to be in […, 1 or 3, H, W] format, -where … means it can have an arbitrary number of leading dimensions. -If img is PIL Image, it is expected to be in mode “L” or “RGB”.

-
-
Parameters
-
    -
  • threshold (float) – all pixels equal or above this value are inverted.

  • -
p (float) – probability of the image being solarized. Default value is 0.5

  • -
-
-
-

Examples using RandomSolarize:

-
-
-forward(img)[source]
-
-
Parameters
-

img (PIL Image or Tensor) – Image to be solarized.

-
-
Returns
-

Randomly solarized image.

-
-
Return type
-

PIL Image or Tensor

-
-
-
- -
- -
-
-class torchvision.transforms.RandomAdjustSharpness(sharpness_factor, p=0.5)[source]
-

Adjust the sharpness of the image randomly with a given probability. If the image is torch Tensor, -it is expected to have […, 1 or 3, H, W] shape, where … means an arbitrary number of leading dimensions.

-
-
Parameters
-
    -
  • sharpness_factor (float) – How much to adjust the sharpness. Can be -any non negative number. 0 gives a blurred image, 1 gives the -original image while 2 increases the sharpness by a factor of 2.

  • -
p (float) – probability of the image being sharpened. Default value is 0.5

  • -
-
-
-

Examples using RandomAdjustSharpness:

-
-
-forward(img)[source]
-
-
Parameters
-

img (PIL Image or Tensor) – Image to be sharpened.

-
-
Returns
-

Randomly sharpened image.

-
-
Return type
-

PIL Image or Tensor

-
-
-
- -
- -
-
-class torchvision.transforms.RandomAutocontrast(p=0.5)[source]
-

Autocontrast the pixels of the given image randomly with a given probability. -If the image is torch Tensor, it is expected -to have […, 1 or 3, H, W] shape, where … means an arbitrary number of leading dimensions. -If img is PIL Image, it is expected to be in mode “L” or “RGB”.

-
-
Parameters
-

p (float) – probability of the image being autocontrasted. Default value is 0.5

-
-
-

Examples using RandomAutocontrast:

-
-
-forward(img)[source]
-
-
Parameters
-

img (PIL Image or Tensor) – Image to be autocontrasted.

-
-
Returns
-

Randomly autocontrasted image.

-
-
Return type
-

PIL Image or Tensor

-
-
-
- -
- -
-
-class torchvision.transforms.RandomEqualize(p=0.5)[source]
-

Equalize the histogram of the given image randomly with a given probability. -If the image is torch Tensor, it is expected -to have […, 1 or 3, H, W] shape, where … means an arbitrary number of leading dimensions. -If img is PIL Image, it is expected to be in mode “P”, “L” or “RGB”.

-
-
Parameters
-

p (float) – probability of the image being equalized. Default value is 0.5

-
-
-

Examples using RandomEqualize:
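A minimal sketch; tensor inputs are expected to be uint8 for histogram equalization, and the random tensor below is only a stand-in:

import torch
import torchvision.transforms as T

img = torch.randint(0, 256, (3, 64, 64), dtype=torch.uint8)  # placeholder uint8 image

equalize = T.RandomEqualize(p=1.0)  # p=1.0 so the equalization is always applied
out = equalize(img)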

-
-
-forward(img)[source]
-
-
Parameters
-

img (PIL Image or Tensor) – Image to be equalized.

-
-
Returns
-

Randomly equalized image.

-
-
Return type
-

PIL Image or Tensor

-
-
-
- -
- -
-
-

Transforms on PIL Image only

-
-
-class torchvision.transforms.RandomChoice(transforms, p=None)[source]
-

Apply a single transformation randomly picked from a list. This transform does not support torchscript.

-
- -
-
-class torchvision.transforms.RandomOrder(transforms)[source]
-

Apply a list of transformations in a random order. This transform does not support torchscript.
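A minimal sketch contrasting the two transforms above; the inner transforms and the random input are arbitrary choices for illustration:

import torch
import torchvision.transforms as T

img = torch.randint(0, 256, (3, 64, 64), dtype=torch.uint8)  # placeholder image

pick_one = T.RandomChoice([T.RandomInvert(), T.RandomAutocontrast()])     # applies exactly one transform from the list
shuffled = T.RandomOrder([T.RandomHorizontalFlip(), T.RandomEqualize()])  # applies all of them, in a random order

out = shuffled(pick_one(img))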

-
- -
-
-

Transforms on torch.*Tensor only

-
-
-class torchvision.transforms.LinearTransformation(transformation_matrix, mean_vector)[source]
-

Transform a tensor image with a square transformation matrix and a mean_vector computed -offline. -This transform does not support PIL Image. -Given transformation_matrix and mean_vector, will flatten the torch.*Tensor and -subtract mean_vector from it which is then followed by computing the dot -product with the transformation matrix and then reshaping the tensor to its -original shape.

-
-
Applications:

whitening transformation: Suppose X is an [N x D] matrix of zero-centered, flattened data. Compute the [D x D] data covariance matrix with torch.mm(X.t(), X), perform SVD on this matrix, and use its factors to build the whitening matrix that is passed as transformation_matrix (see the sketch below).
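A rough sketch of that recipe, assuming random data as a stand-in for a real dataset and an arbitrary eps regularizer for numerical stability (a ZCA-style whitening matrix, not the only possible choice):

import torch
import torchvision.transforms as T

N, C, H, W = 1000, 3, 8, 8
data = torch.randn(N, C * H * W)            # placeholder for N flattened training images

mean_vector = data.mean(dim=0)
X = data - mean_vector                      # zero-center the data
cov = torch.mm(X.t(), X) / N                # [D x D] covariance matrix
U, S, _ = torch.svd(cov)                    # SVD of the covariance
eps = 1e-5                                  # assumed regularizer to keep the inverse sqrt stable
W_zca = U @ torch.diag(1.0 / torch.sqrt(S + eps)) @ U.t()

whiten = T.LinearTransformation(transformation_matrix=W_zca, mean_vector=mean_vector)
out = whiten(torch.randn(C, H, W))          # applied to a single [C, H, W] image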

-
-
-
-
Parameters
-
    -
  • transformation_matrix (Tensor) – tensor [D x D], D = C x H x W

  • -
  • mean_vector (Tensor) – tensor [D], D = C x H x W

  • -
-
-
-
-
-forward(tensor: torch.Tensor)torch.Tensor[source]
-
-
Parameters
-

tensor (Tensor) – Tensor image to be whitened.

-
-
Returns
-

Transformed image.

-
-
Return type
-

Tensor

-
-
-
- -
- -
-
-class torchvision.transforms.Normalize(mean, std, inplace=False)[source]
-

Normalize a tensor image with mean and standard deviation. -This transform does not support PIL Image. -Given mean: (mean[1],...,mean[n]) and std: (std[1],..,std[n]) for n -channels, this transform will normalize each channel of the input -torch.*Tensor i.e., -output[channel] = (input[channel] - mean[channel]) / std[channel]

-
-

Note

-

This transform acts out of place, i.e., it does not mutate the input tensor.

-
-
-
Parameters
-
    -
  • mean (sequence) – Sequence of means for each channel.

  • -
  • std (sequence) – Sequence of standard deviations for each channel.

  • -
  • inplace (bool,optional) – Bool to make this operation in-place.

  • -
-
-
-

Examples using Normalize:
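A typical usage sketch on a float tensor; the mean/std values below are the commonly used ImageNet statistics, shown only as an example:

import torch
import torchvision.transforms as T

img = torch.rand(3, 224, 224)  # float image in [0, 1], e.g. the output of ToTensor()

normalize = T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
out = normalize(img)  # per channel: (input - mean) / std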

-
-
-forward(tensor: torch.Tensor)torch.Tensor[source]
-
-
Parameters
-

tensor (Tensor) – Tensor image to be normalized.

-
-
Returns
-

Normalized Tensor image.

-
-
Return type
-

Tensor

-
-
-
- -
- -
-
-class torchvision.transforms.RandomErasing(p=0.5, scale=(0.02, 0.33), ratio=(0.3, 3.3), value=0, inplace=False)[source]
-

Randomly selects a rectangle region in a torch Tensor image and erases its pixels. This transform does not support PIL Image. See “Random Erasing Data Augmentation” by Zhong et al., https://arxiv.org/abs/1708.04896

-
-
Parameters
-
    -
  • p – probability that the random erasing operation will be performed.

  • -
  • scale – range of proportion of erased area against input image.

  • -
  • ratio – range of aspect ratio of erased area.

  • -
  • value – erasing value. Default is 0. If a single int, it is used to -erase all pixels. If a tuple of length 3, it is used to erase -R, G, B channels respectively. -If a str of ‘random’, erasing each pixel with random values.

  • -
  • inplace – boolean to make this transform inplace. Default set to False.

  • -
-
-
Returns
-

Erased Image.

-
-
-

Example

-
>>> transform = transforms.Compose([
->>>   transforms.RandomHorizontalFlip(),
->>>   transforms.PILToTensor(),
->>>   transforms.ConvertImageDtype(torch.float),
->>>   transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
->>>   transforms.RandomErasing(),
->>> ])
-
-
-
-
-forward(img)[source]
-
-
Parameters
-

img (Tensor) – Tensor image to be erased.

-
-
Returns
-

Erased Tensor image.

-
-
Return type
-

img (Tensor)

-
-
-
- -
-
-static get_params(img: torch.Tensor, scale: Tuple[float, float], ratio: Tuple[float, float], value: Optional[List[float]] = None)Tuple[int, int, int, int, torch.Tensor][source]
-

Get the parameters to be passed to erase for a random erasing.

-
-
Parameters
-
    -
  • img (Tensor) – Tensor image to be erased.

  • -
  • scale (sequence) – range of proportion of erased area against input image.

  • -
  • ratio (sequence) – range of aspect ratio of erased area.

  • -
  • value (list, optional) – erasing value. If None, it is interpreted as “random” -(erasing each pixel with random values). If len(value) is 1, it is interpreted as a number, -i.e. value[0].

  • -
-
-
Returns
-

params (i, j, h, w, v) to be passed to erase for random erasing.

-
-
Return type
-

tuple

-
-
-
- -
- -
-
-class torchvision.transforms.ConvertImageDtype(dtype: torch.dtype)[source]
-

Convert a tensor image to the given dtype and scale the values accordingly. This function does not support PIL Image.

-
-
Parameters
-

dtype (torch.dtype) – Desired data type of the output

-
-
-
-

Note

-

When converting from a smaller to a larger integer dtype the maximum values are not mapped exactly. -If converted back and forth, this mismatch has no effect.

-
-
-
Raises
-

RuntimeError – When trying to cast torch.float32 to torch.int32 or torch.int64 as - well as for trying to cast torch.float64 to torch.int64. These conversions might lead to - overflow errors since the floating point dtype cannot store consecutive integers over the whole range - of the integer dtype.

-
-
-

Examples using ConvertImageDtype:
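A minimal sketch of the round trip between uint8 and float (the random tensor is a placeholder):

import torch
import torchvision.transforms as T

img_uint8 = torch.randint(0, 256, (3, 32, 32), dtype=torch.uint8)  # placeholder image

to_float = T.ConvertImageDtype(torch.float32)
img_float = to_float(img_uint8)            # values rescaled from [0, 255] to [0.0, 1.0]

to_uint8 = T.ConvertImageDtype(torch.uint8)
img_back = to_uint8(img_float)             # rescaled back to the uint8 range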

-
- -
-
-

Conversion Transforms

-
-
-class torchvision.transforms.ToPILImage(mode=None)[source]
-

Convert a tensor or an ndarray to PIL Image. This transform does not support torchscript.

-

Converts a torch.*Tensor of shape C x H x W or a numpy ndarray of shape -H x W x C to a PIL Image while preserving the value range.

-
-
Parameters
-

mode (PIL.Image mode) – color space and pixel depth of input data (optional). If mode is None (default) there are some assumptions made about the input data:

  • If the input has 4 channels, the mode is assumed to be RGBA.

  • If the input has 3 channels, the mode is assumed to be RGB.

  • If the input has 2 channels, the mode is assumed to be LA.

  • If the input has 1 channel, the mode is determined by the data type (i.e. int, float, short).

-
-
-

Examples using ToPILImage:

-
- -
-
-class torchvision.transforms.ToTensor[source]
-

Convert a PIL Image or numpy.ndarray to tensor. This transform does not support torchscript.

-

Converts a PIL Image or numpy.ndarray (H x W x C) in the range -[0, 255] to a torch.FloatTensor of shape (C x H x W) in the range [0.0, 1.0] -if the PIL Image belongs to one of the modes (L, LA, P, I, F, RGB, YCbCr, RGBA, CMYK, 1) -or if the numpy.ndarray has dtype = np.uint8

-

In the other cases, tensors are returned without scaling.

-
-

Note

-

Because the input image is scaled to [0.0, 1.0], this transformation should not be used when -transforming target image masks. See the references for implementing the transforms for image masks.

-
-
- -
-
-class torchvision.transforms.PILToTensor[source]
-

Convert a PIL Image to a tensor of the same type. This transform does not support torchscript.

-

Converts a PIL Image (H x W x C) to a Tensor of shape (C x H x W).
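A short sketch contrasting PILToTensor with ToTensor (the solid-color PIL image is just a placeholder):

import torch
from PIL import Image
import torchvision.transforms as T

pil_img = Image.new("RGB", (32, 32), color=(255, 128, 0))  # placeholder PIL image

as_uint8 = T.PILToTensor()(pil_img)   # dtype torch.uint8, values kept in [0, 255]
as_float = T.ToTensor()(pil_img)      # dtype torch.float32, values scaled to [0.0, 1.0]
print(as_uint8.dtype, as_float.dtype)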

-
- -
-
-

Generic Transforms

-
-
-class torchvision.transforms.Lambda(lambd)[source]
-

Apply a user-defined lambda as a transform. This transform does not support torchscript.

-
-
Parameters
-

lambd (function) – Lambda/function to be used for transform.

-
-
-
- -
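A minimal sketch of wrapping an arbitrary callable with Lambda; the clamping function below is just an example:

import torch
import torchvision.transforms as T

clamp = T.Lambda(lambda x: x.clamp(0.1, 0.9))  # any callable taking and returning an image works

img = torch.rand(3, 16, 16)  # placeholder float image
out = clamp(img)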
-
-

Automatic Augmentation Transforms

-

AutoAugment is a common Data Augmentation technique that can improve the accuracy of Image Classification models. Although each augmentation policy is directly linked to the dataset it was learned on, empirical studies show that ImageNet policies provide significant improvements when applied to other datasets. In TorchVision we implemented 3 policies learned on the following datasets: ImageNet, CIFAR10 and SVHN. The new transform can be used standalone or mixed-and-matched with existing transforms, as in the sketch below:
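A minimal sketch of such a mix, assuming a uint8 tensor input (AutoAugment expects uint8 tensors or PIL images); the policy choice and the placeholder image are arbitrary:

import torch
import torchvision.transforms as T

img = torch.randint(0, 256, (3, 224, 224), dtype=torch.uint8)  # placeholder image

transform = T.Compose([
    T.AutoAugment(policy=T.AutoAugmentPolicy.CIFAR10),  # learned augmentation policy
    T.ConvertImageDtype(torch.float),                   # followed by existing transforms
])
out = transform(img)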

-
-
-class torchvision.transforms.AutoAugmentPolicy(value)[source]
-

AutoAugment policies learned on different datasets. -Available policies are IMAGENET, CIFAR10 and SVHN.

-

Examples using AutoAugmentPolicy:

-
- -
-
-class torchvision.transforms.AutoAugment(policy: torchvision.transforms.autoaugment.AutoAugmentPolicy = <AutoAugmentPolicy.IMAGENET: 'imagenet'>, interpolation: torchvision.transforms.functional.InterpolationMode = <InterpolationMode.NEAREST: 'nearest'>, fill: Optional[List[float]] = None)[source]
-

AutoAugment data augmentation method based on -“AutoAugment: Learning Augmentation Strategies from Data”. -If the image is torch Tensor, it should be of type torch.uint8, and it is expected -to have […, 1 or 3, H, W] shape, where … means an arbitrary number of leading dimensions. -If img is PIL Image, it is expected to be in mode “L” or “RGB”.

-
-
Parameters
-
    -
  • policy (AutoAugmentPolicy) – Desired policy enum defined by -torchvision.transforms.autoaugment.AutoAugmentPolicy. Default is AutoAugmentPolicy.IMAGENET.

  • -
  • interpolation (InterpolationMode) – Desired interpolation enum defined by -torchvision.transforms.InterpolationMode. Default is InterpolationMode.NEAREST. -If input is Tensor, only InterpolationMode.NEAREST, InterpolationMode.BILINEAR are supported.

  • -
  • fill (sequence or number, optional) – Pixel fill value for the area outside the transformed -image. If given a number, the value is used for all bands respectively.

  • -
-
-
-

Examples using AutoAugment:

-
-
-forward(img: torch.Tensor)torch.Tensor[source]
-
-

img (PIL Image or Tensor): Image to be transformed.

-
-
-
Returns
-

AutoAugmented image.

-
-
Return type
-

PIL Image or Tensor

-
-
-
- -
-
-static get_params(transform_num: int)Tuple[int, torch.Tensor, torch.Tensor][source]
-

Get parameters for autoaugment transformation

-
-
Returns
-

params required by the autoaugment transformation

-
-
-
- -
- -

RandAugment is a simple high-performing Data Augmentation technique which improves the accuracy of Image Classification models.

-
-
-class torchvision.transforms.RandAugment(num_ops: int = 2, magnitude: int = 9, num_magnitude_bins: int = 31, interpolation: torchvision.transforms.functional.InterpolationMode = <InterpolationMode.NEAREST: 'nearest'>, fill: Optional[List[float]] = None)[source]
-

RandAugment data augmentation method based on -“RandAugment: Practical automated data augmentation with a reduced search space”. -If the image is torch Tensor, it should be of type torch.uint8, and it is expected -to have […, 1 or 3, H, W] shape, where … means an arbitrary number of leading dimensions. -If img is PIL Image, it is expected to be in mode “L” or “RGB”.

-
-
Parameters
-
    -
  • num_ops (int) – Number of augmentation transformations to apply sequentially.

  • -
  • magnitude (int) – Magnitude for all the transformations.

  • -
  • num_magnitude_bins (int) – The number of different magnitude values.

  • -
  • interpolation (InterpolationMode) – Desired interpolation enum defined by -torchvision.transforms.InterpolationMode. Default is InterpolationMode.NEAREST. -If input is Tensor, only InterpolationMode.NEAREST, InterpolationMode.BILINEAR are supported.

  • -
  • fill (sequence or number, optional) – Pixel fill value for the area outside the transformed -image. If given a number, the value is used for all bands respectively.

  • -
-
-
-

Examples using RandAugment:
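A minimal sketch with the default hyper-parameters spelled out; the random uint8 tensor is a placeholder:

import torch
import torchvision.transforms as T

img = torch.randint(0, 256, (3, 224, 224), dtype=torch.uint8)  # placeholder image

augment = T.RandAugment(num_ops=2, magnitude=9)  # defaults, shown explicitly
out = augment(img)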

-
-
-forward(img: torch.Tensor)torch.Tensor[source]
-
-

img (PIL Image or Tensor): Image to be transformed.

-
-
-
Returns
-

Transformed image.

-
-
Return type
-

PIL Image or Tensor

-
-
-
- -
- -

TrivialAugmentWide is a dataset-independent data-augmentation technique which improves the accuracy of Image Classification models.

-
-
-class torchvision.transforms.TrivialAugmentWide(num_magnitude_bins: int = 31, interpolation: torchvision.transforms.functional.InterpolationMode = <InterpolationMode.NEAREST: 'nearest'>, fill: Optional[List[float]] = None)[source]
-

Dataset-independent data-augmentation with TrivialAugment Wide, as described in -“TrivialAugment: Tuning-free Yet State-of-the-Art Data Augmentation” <https://arxiv.org/abs/2103.10158>. -If the image is torch Tensor, it should be of type torch.uint8, and it is expected -to have […, 1 or 3, H, W] shape, where … means an arbitrary number of leading dimensions. -If img is PIL Image, it is expected to be in mode “L” or “RGB”.

-
-
Parameters
-
    -
  • num_magnitude_bins (int) – The number of different magnitude values.

  • -
  • interpolation (InterpolationMode) – Desired interpolation enum defined by -torchvision.transforms.InterpolationMode. Default is InterpolationMode.NEAREST. -If input is Tensor, only InterpolationMode.NEAREST, InterpolationMode.BILINEAR are supported.

  • -
  • fill (sequence or number, optional) – Pixel fill value for the area outside the transformed -image. If given a number, the value is used for all bands respectively.

  • -
-
-
-

Examples using TrivialAugmentWide:
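A minimal sketch; unlike AutoAugment and RandAugment there is no policy or magnitude to tune (the input tensor is a placeholder):

import torch
import torchvision.transforms as T

img = torch.randint(0, 256, (3, 224, 224), dtype=torch.uint8)  # placeholder image

augment = T.TrivialAugmentWide()
out = augment(img)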

-
-
-forward(img: torch.Tensor)torch.Tensor[source]
-
-

img (PIL Image or Tensor): Image to be transformed.

-
-
-
Returns
-

Transformed image.

-
-
Return type
-

PIL Image or Tensor

-
-
-
- -
- -
-
-

Functional Transforms

-

Functional transforms give you fine-grained control of the transformation pipeline. -As opposed to the transformations above, functional transforms don’t contain a random number -generator for their parameters. -That means you have to specify/generate all parameters, but the functional transform will give you -reproducible results across calls.

-

Example: -you can apply a functional transform with the same parameters to multiple images like this:

-
import torchvision.transforms.functional as TF
-import random
-
-def my_segmentation_transforms(image, segmentation):
-    if random.random() > 0.5:
-        angle = random.randint(-30, 30)
-        image = TF.rotate(image, angle)
-        segmentation = TF.rotate(segmentation, angle)
-    # more transforms ...
-    return image, segmentation
-
-
-

Example: -you can use a functional transform to build transform classes with custom behavior:

-
import torchvision.transforms.functional as TF
-import random
-
-class MyRotationTransform:
-    """Rotate by one of the given angles."""
-
-    def __init__(self, angles):
-        self.angles = angles
-
-    def __call__(self, x):
-        angle = random.choice(self.angles)
-        return TF.rotate(x, angle)
-
-rotation_transform = MyRotationTransform(angles=[-30, -15, 0, 15, 30])
-
-
-
-
-class torchvision.transforms.functional.InterpolationMode(value)[source]
-

Interpolation modes. Available interpolation methods are nearest, bilinear, bicubic, box, hamming, and lanczos.

-
- -
-
-torchvision.transforms.functional.adjust_brightness(img: torch.Tensor, brightness_factor: float)torch.Tensor[source]
-

Adjust brightness of an image.

-
-
Parameters
-
    -
  • img (PIL Image or Tensor) – Image to be adjusted. -If img is torch Tensor, it is expected to be in […, 1 or 3, H, W] format, -where … means it can have an arbitrary number of leading dimensions.

  • -
  • brightness_factor (float) – How much to adjust the brightness. Can be -any non negative number. 0 gives a black image, 1 gives the -original image while 2 increases the brightness by a factor of 2.

  • -
-
-
Returns
-

Brightness adjusted image.

-
-
Return type
-

PIL Image or Tensor

-
-
-
- -
-
-torchvision.transforms.functional.adjust_contrast(img: torch.Tensor, contrast_factor: float)torch.Tensor[source]
-

Adjust contrast of an image.

-
-
Parameters
-
    -
  • img (PIL Image or Tensor) – Image to be adjusted. -If img is torch Tensor, it is expected to be in […, 1 or 3, H, W] format, -where … means it can have an arbitrary number of leading dimensions.

  • -
  • contrast_factor (float) – How much to adjust the contrast. Can be any -non negative number. 0 gives a solid gray image, 1 gives the -original image while 2 increases the contrast by a factor of 2.

  • -
-
-
Returns
-

Contrast adjusted image.

-
-
Return type
-

PIL Image or Tensor

-
-
-
- -
-
-torchvision.transforms.functional.adjust_gamma(img: torch.Tensor, gamma: float, gain: float = 1)torch.Tensor[source]
-

Perform gamma correction on an image.

-

Also known as Power Law Transform. Intensities in RGB mode are adjusted -based on the following equation:

-
-\[I_{\text{out}} = 255 \times \text{gain} \times \left(\frac{I_{\text{in}}}{255}\right)^{\gamma}\]
-

See Gamma Correction for more details.
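A short sketch of the equation above in code form; the random uint8 image and the gamma values are arbitrary:

import torch
import torchvision.transforms.functional as F

img = torch.randint(0, 256, (3, 64, 64), dtype=torch.uint8)  # placeholder image

darker = F.adjust_gamma(img, gamma=2.0)               # gamma > 1 darkens the shadows
lighter = F.adjust_gamma(img, gamma=0.5, gain=1.0)    # gamma < 1 brightens dark regions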

-
-
Parameters
-
    -
  • img (PIL Image or Tensor) – PIL Image to be adjusted. -If img is torch Tensor, it is expected to be in […, 1 or 3, H, W] format, -where … means it can have an arbitrary number of leading dimensions. -If img is PIL Image, modes with transparency (alpha channel) are not supported.

  • -
  • gamma (float) – Non negative real number, same as \(\gamma\) in the equation. -gamma larger than 1 make the shadows darker, -while gamma smaller than 1 make dark regions lighter.

  • -
  • gain (float) – The constant multiplier.

  • -
-
-
Returns
-

Gamma correction adjusted image.

-
-
Return type
-

PIL Image or Tensor

-
-
-
- -
-
-torchvision.transforms.functional.adjust_hue(img: torch.Tensor, hue_factor: float)torch.Tensor[source]
-

Adjust hue of an image.

-

The image hue is adjusted by converting the image to HSV and -cyclically shifting the intensities in the hue channel (H). -The image is then converted back to original image mode.

-

hue_factor is the amount of shift in H channel and must be in the -interval [-0.5, 0.5].

-

See Hue for more details.

-
-
Parameters
-
    -
  • img (PIL Image or Tensor) – Image to be adjusted. -If img is torch Tensor, it is expected to be in […, 1 or 3, H, W] format, -where … means it can have an arbitrary number of leading dimensions. -If img is PIL Image mode “1”, “I”, “F” and modes with transparency (alpha channel) are not supported.

  • -
  • hue_factor (float) – How much to shift the hue channel. Should be in -[-0.5, 0.5]. 0.5 and -0.5 give complete reversal of hue channel in -HSV space in positive and negative direction respectively. -0 means no shift. Therefore, both -0.5 and 0.5 will give an image -with complementary colors while 0 gives the original image.

  • -
-
-
Returns
-

Hue adjusted image.

-
-
Return type
-

PIL Image or Tensor

-
-
-
- -
-
-torchvision.transforms.functional.adjust_saturation(img: torch.Tensor, saturation_factor: float)torch.Tensor[source]
-

Adjust color saturation of an image.

-
-
Parameters
-
    -
  • img (PIL Image or Tensor) – Image to be adjusted. -If img is torch Tensor, it is expected to be in […, 1 or 3, H, W] format, -where … means it can have an arbitrary number of leading dimensions.

  • -
  • saturation_factor (float) – How much to adjust the saturation. 0 will -give a black and white image, 1 will give the original image while -2 will enhance the saturation by a factor of 2.

  • -
-
-
Returns
-

Saturation adjusted image.

-
-
Return type
-

PIL Image or Tensor

-
-
-
- -
-
-torchvision.transforms.functional.adjust_sharpness(img: torch.Tensor, sharpness_factor: float)torch.Tensor[source]
-

Adjust the sharpness of an image.

-
-
Parameters
-
    -
  • img (PIL Image or Tensor) – Image to be adjusted. -If img is torch Tensor, it is expected to be in […, 1 or 3, H, W] format, -where … means it can have an arbitrary number of leading dimensions.

  • -
  • sharpness_factor (float) – How much to adjust the sharpness. Can be -any non negative number. 0 gives a blurred image, 1 gives the -original image while 2 increases the sharpness by a factor of 2.

  • -
-
-
Returns
-

Sharpness adjusted image.

-
-
Return type
-

PIL Image or Tensor

-
-
-

Examples using adjust_sharpness:

-
- -
-
-torchvision.transforms.functional.affine(img: torch.Tensor, angle: float, translate: List[int], scale: float, shear: List[float], interpolation: torchvision.transforms.functional.InterpolationMode = <InterpolationMode.NEAREST: 'nearest'>, fill: Optional[List[float]] = None, resample: Optional[int] = None, fillcolor: Optional[List[float]] = None)torch.Tensor[source]
-

Apply affine transformation on the image keeping image center invariant. -If the image is torch Tensor, it is expected -to have […, H, W] shape, where … means an arbitrary number of leading dimensions.

-
-
Parameters
-
    -
  • img (PIL Image or Tensor) – image to transform.

  • -
  • angle (number) – rotation angle in degrees between -180 and 180, clockwise direction.

  • -
  • translate (sequence of integers) – horizontal and vertical translations (post-rotation translation)

  • -
  • scale (float) – overall scale

  • -
  • shear (float or sequence) – shear angle value in degrees, between -180 and 180, clockwise direction. If a sequence is specified, the first value corresponds to a shear parallel to the x axis, while the second value corresponds to a shear parallel to the y axis.

  • -
  • interpolation (InterpolationMode) – Desired interpolation enum defined by -torchvision.transforms.InterpolationMode. Default is InterpolationMode.NEAREST. -If input is Tensor, only InterpolationMode.NEAREST, InterpolationMode.BILINEAR are supported. -For backward compatibility integer values (e.g. PIL.Image.NEAREST) are still acceptable.

  • -
  • fill (sequence or number, optional) –

    Pixel fill value for the area outside the transformed -image. If given a number, the value is used for all bands respectively.

    -
    -

    Note

    -

    In torchscript mode single int/float value is not supported, please use a sequence -of length 1: [value, ].

    -
    -

  • -
  • fillcolor (sequence, int, float) – deprecated argument and will be removed since v0.10.0. -Please use the fill parameter instead.

  • -
  • resample (int, optional) – deprecated argument and will be removed since v0.10.0. -Please use the interpolation parameter instead.

  • -
-
-
Returns
-

Transformed image.

-
-
Return type
-

PIL Image or Tensor

-
-
-

Examples using affine:

-
- -
-
-torchvision.transforms.functional.autocontrast(img: torch.Tensor)torch.Tensor[source]
-

Maximize contrast of an image by remapping its -pixels per channel so that the lowest becomes black and the lightest -becomes white.

-
-
Parameters
-

img (PIL Image or Tensor) – Image on which autocontrast is applied. -If img is torch Tensor, it is expected to be in […, 1 or 3, H, W] format, -where … means it can have an arbitrary number of leading dimensions. -If img is PIL Image, it is expected to be in mode “L” or “RGB”.

-
-
Returns
-

An image that was autocontrasted.

-
-
Return type
-

PIL Image or Tensor

-
-
-

Examples using autocontrast:

-
- -
-
-torchvision.transforms.functional.center_crop(img: torch.Tensor, output_size: List[int])torch.Tensor[source]
-

Crops the given image at the center. -If the image is torch Tensor, it is expected -to have […, H, W] shape, where … means an arbitrary number of leading dimensions. -If image size is smaller than output size along any edge, image is padded with 0 and then center cropped.

-
-
Parameters
-
    -
  • img (PIL Image or Tensor) – Image to be cropped.

  • -
  • output_size (sequence or int) – (height, width) of the crop box. If int or sequence with single int, -it is used for both directions.

  • -
-
-
Returns
-

Cropped image.

-
-
Return type
-

PIL Image or Tensor

-
-
-

Examples using center_crop:

-
- -
-
-torchvision.transforms.functional.convert_image_dtype(image: torch.Tensor, dtype: torch.dtype = torch.float32)torch.Tensor[source]
-

Convert a tensor image to the given dtype and scale the values accordingly. This function does not support PIL Image.

-
-
Parameters
-
    -
  • image (torch.Tensor) – Image to be converted

  • -
  • dtype (torch.dtype) – Desired data type of the output

  • -
-
-
Returns
-

Converted image

-
-
Return type
-

Tensor

-
-
-
-

Note

-

When converting from a smaller to a larger integer dtype the maximum values are not mapped exactly. -If converted back and forth, this mismatch has no effect.

-
-
-
Raises
-

RuntimeError – When trying to cast torch.float32 to torch.int32 or torch.int64 as - well as for trying to cast torch.float64 to torch.int64. These conversions might lead to - overflow errors since the floating point dtype cannot store consecutive integers over the whole range - of the integer dtype.

-
-
-

Examples using convert_image_dtype:

-
- -
-
-torchvision.transforms.functional.crop(img: torch.Tensor, top: int, left: int, height: int, width: int)torch.Tensor[source]
-

Crop the given image at specified location and output size. -If the image is torch Tensor, it is expected -to have […, H, W] shape, where … means an arbitrary number of leading dimensions. -If image size is smaller than output size along any edge, image is padded with 0 and then cropped.

-
-
Parameters
-
    -
  • img (PIL Image or Tensor) – Image to be cropped. (0,0) denotes the top left corner of the image.

  • -
  • top (int) – Vertical component of the top left corner of the crop box.

  • -
  • left (int) – Horizontal component of the top left corner of the crop box.

  • -
  • height (int) – Height of the crop box.

  • -
  • width (int) – Width of the crop box.

  • -
-
-
Returns
-

Cropped image.

-
-
Return type
-

PIL Image or Tensor

-
-
-

Examples using crop:

-
- -
-
-torchvision.transforms.functional.equalize(img: torch.Tensor)torch.Tensor[source]
-

Equalize the histogram of an image by applying -a non-linear mapping to the input in order to create a uniform -distribution of grayscale values in the output.

-
-
Parameters
-

img (PIL Image or Tensor) – Image on which equalize is applied. -If img is torch Tensor, it is expected to be in […, 1 or 3, H, W] format, -where … means it can have an arbitrary number of leading dimensions. -The tensor dtype must be torch.uint8 and values are expected to be in [0, 255]. -If img is PIL Image, it is expected to be in mode “P”, “L” or “RGB”.

-
-
Returns
-

An image that was equalized.

-
-
Return type
-

PIL Image or Tensor

-
-
-

Examples using equalize:

-
- -
-
-torchvision.transforms.functional.erase(img: torch.Tensor, i: int, j: int, h: int, w: int, v: torch.Tensor, inplace: bool = False)torch.Tensor[source]
-

Erase the input Tensor Image with given value. -This transform does not support PIL Image.

-
-
Parameters
-
    -
  • img (Tensor Image) – Tensor image of size (C, H, W) to be erased

  • -
  • i (int) – i in (i,j) i.e coordinates of the upper left corner.

  • -
  • j (int) – j in (i,j) i.e coordinates of the upper left corner.

  • -
  • h (int) – Height of the erased region.

  • -
  • w (int) – Width of the erased region.

  • -
  • v – Erasing value.

  • -
  • inplace (bool, optional) – For in-place operations. By default is set False.

  • -
-
-
Returns
-

Erased image.

-
-
Return type
-

Tensor Image

-
-
-
- -
-
-torchvision.transforms.functional.five_crop(img: torch.Tensor, size: List[int])Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor][source]
-

Crop the given image into four corners and the central crop. -If the image is torch Tensor, it is expected -to have […, H, W] shape, where … means an arbitrary number of leading dimensions

-
-

Note

-

This transform returns a tuple of images and there may be a -mismatch in the number of inputs and targets your Dataset returns.

-
-
-
Parameters
-
    -
  • img (PIL Image or Tensor) – Image to be cropped.

  • -
  • size (sequence or int) – Desired output size of the crop. If size is an -int instead of sequence like (h, w), a square crop (size, size) is -made. If provided a sequence of length 1, it will be interpreted as (size[0], size[0]).

  • -
-
-
Returns
-

tuple (tl, tr, bl, br, center) -Corresponding top left, top right, bottom left, bottom right and center crop.

-
-
Return type
-

tuple

-
-
-

Examples using five_crop:

-
- -
-
-torchvision.transforms.functional.gaussian_blur(img: torch.Tensor, kernel_size: List[int], sigma: Optional[List[float]] = None)torch.Tensor[source]
-

Performs Gaussian blurring on the image with the given kernel. If the image is torch Tensor, it is expected to have […, H, W] shape, where … means an arbitrary number of leading dimensions.

-
-
Parameters
-
    -
  • img (PIL Image or Tensor) – Image to be blurred

  • -
  • kernel_size (sequence of ints or int) –

    Gaussian kernel size. Can be a sequence of integers -like (kx, ky) or a single integer for square kernels.

    -
    -

    Note

    -

    In torchscript mode kernel_size as single int is not supported, use a sequence of -length 1: [ksize, ].

    -
    -

  • -
  • sigma (sequence of floats or float, optional) –

    Gaussian kernel standard deviation. Can be a -sequence of floats like (sigma_x, sigma_y) or a single float to define the -same sigma in both X/Y directions. If None, then it is computed using -kernel_size as sigma = 0.3 * ((kernel_size - 1) * 0.5 - 1) + 0.8. -Default, None.

    -
    -

    Note

    -

    In torchscript mode sigma as single float is -not supported, use a sequence of length 1: [sigma, ].

    -
    -

  • -
-
-
Returns
-

Gaussian Blurred version of the image.

-
-
Return type
-

PIL Image or Tensor

-
-
-

Examples using gaussian_blur:
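A short sketch of both ways of specifying sigma (the random image is a placeholder):

import torch
import torchvision.transforms.functional as F

img = torch.rand(3, 128, 128)  # placeholder float image

blurred = F.gaussian_blur(img, kernel_size=[5, 5], sigma=[1.5, 1.5])  # explicit sigma
auto = F.gaussian_blur(img, kernel_size=[5, 5])  # sigma=None: derived from the kernel size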

-
- -
-
-torchvision.transforms.functional.get_image_num_channels(img: torch.Tensor)int[source]
-

Returns the number of channels of an image.

-
-
Parameters
-

img (PIL Image or Tensor) – The image to be checked.

-
-
Returns
-

The number of channels.

-
-
Return type
-

int

-
-
-
- -
-
-torchvision.transforms.functional.get_image_size(img: torch.Tensor)List[int][source]
-

Returns the size of an image as [width, height].

-
-
Parameters
-

img (PIL Image or Tensor) – The image to be checked.

-
-
Returns
-

The image size.

-
-
Return type
-

List[int]

-
-
-
- -
-
-torchvision.transforms.functional.hflip(img: torch.Tensor)torch.Tensor[source]
-

Horizontally flip the given image.

-
-
Parameters
-

img (PIL Image or Tensor) – Image to be flipped. If img -is a Tensor, it is expected to be in […, H, W] format, -where … means it can have an arbitrary number of leading -dimensions.

-
-
Returns
-

Horizontally flipped image.

-
-
Return type
-

PIL Image or Tensor

-
-
-

Examples using hflip:

-
- -
-
-torchvision.transforms.functional.invert(img: torch.Tensor)torch.Tensor[source]
-

Invert the colors of an RGB/grayscale image.

-
-
Parameters
-

img (PIL Image or Tensor) – Image to have its colors inverted. -If img is torch Tensor, it is expected to be in […, 1 or 3, H, W] format, -where … means it can have an arbitrary number of leading dimensions. -If img is PIL Image, it is expected to be in mode “L” or “RGB”.

-
-
Returns
-

Color inverted image.

-
-
Return type
-

PIL Image or Tensor

-
-
-

Examples using invert:

-
- -
-
-torchvision.transforms.functional.normalize(tensor: torch.Tensor, mean: List[float], std: List[float], inplace: bool = False)torch.Tensor[source]
-

Normalize a float tensor image with mean and standard deviation. -This transform does not support PIL Image.

-
-

Note

-

This transform acts out of place by default, i.e., it does not mutate the input tensor.

-
-

See Normalize for more details.

-
-
Parameters
-
    -
  • tensor (Tensor) – Float tensor image of size (C, H, W) or (B, C, H, W) to be normalized.

  • -
  • mean (sequence) – Sequence of means for each channel.

  • -
  • std (sequence) – Sequence of standard deviations for each channel.

  • -
  • inplace (bool,optional) – Bool to make this operation inplace.

  • -
-
-
Returns
-

Normalized Tensor image.

-
-
Return type
-

Tensor

-
-
-

Examples using normalize:

-
- -
-
-torchvision.transforms.functional.pad(img: torch.Tensor, padding: List[int], fill: int = 0, padding_mode: str = 'constant')torch.Tensor[source]
-

Pad the given image on all sides with the given “pad” value. -If the image is torch Tensor, it is expected -to have […, H, W] shape, where … means at most 2 leading dimensions for mode reflect and symmetric, -at most 3 leading dimensions for mode edge, -and an arbitrary number of leading dimensions for mode constant

-
-
Parameters
-
    -
  • img (PIL Image or Tensor) – Image to be padded.

  • -
  • padding (int or sequence) –

    Padding on each border. If a single int is provided this -is used to pad all borders. If sequence of length 2 is provided this is the padding -on left/right and top/bottom respectively. If a sequence of length 4 is provided -this is the padding for the left, top, right and bottom borders respectively.

    -
    -

    Note

    -

    In torchscript mode padding as single int is not supported, use a sequence of -length 1: [padding, ].

    -
    -

  • -
  • fill (number or str or tuple) – Pixel fill value for constant fill. Default is 0. -If a tuple of length 3, it is used to fill R, G, B channels respectively. -This value is only used when the padding_mode is constant. -Only number is supported for torch Tensor. -Only int or str or tuple value is supported for PIL Image.

  • -
  • padding_mode (str) –

    Type of padding. Should be: constant, edge, reflect or symmetric. -Default is constant.

    -
      -
    • constant: pads with a constant value, this value is specified with fill

    • -
    • edge: pads with the last value at the edge of the image. -If input a 5D torch Tensor, the last 3 dimensions will be padded instead of the last 2

    • -
    • reflect: pads with reflection of image without repeating the last value on the edge. -For example, padding [1, 2, 3, 4] with 2 elements on both sides in reflect mode -will result in [3, 2, 1, 2, 3, 4, 3, 2]

    • -
    • symmetric: pads with reflection of image repeating the last value on the edge. -For example, padding [1, 2, 3, 4] with 2 elements on both sides in symmetric mode -will result in [2, 1, 1, 2, 3, 4, 4, 3]

    • -
    -

  • -
-
-
Returns
-

Padded image.

-
-
Return type
-

PIL Image or Tensor

-
-
-

Examples using pad:
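A short sketch of the padding conventions described above; the input tensor is a placeholder:

import torch
import torchvision.transforms.functional as F

img = torch.rand(3, 32, 32)  # placeholder image

padded_const = F.pad(img, padding=[4, 8], fill=0)                  # 4 px left/right, 8 px top/bottom
padded_reflect = F.pad(img, padding=[2], padding_mode="reflect")   # 2 px on every side
print(padded_const.shape, padded_reflect.shape)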

-
- -
-
-torchvision.transforms.functional.perspective(img: torch.Tensor, startpoints: List[List[int]], endpoints: List[List[int]], interpolation: torchvision.transforms.functional.InterpolationMode = <InterpolationMode.BILINEAR: 'bilinear'>, fill: Optional[List[float]] = None)torch.Tensor[source]
-

Perform perspective transform of the given image. -If the image is torch Tensor, it is expected -to have […, H, W] shape, where … means an arbitrary number of leading dimensions.

-
-
Parameters
-
    -
  • img (PIL Image or Tensor) – Image to be transformed.

  • -
  • startpoints (list of lists of ints) – List containing four lists of two integers corresponding to the four corners [top-left, top-right, bottom-right, bottom-left] of the original image.

  • -
  • endpoints (list of lists of ints) – List containing four lists of two integers corresponding to the four corners [top-left, top-right, bottom-right, bottom-left] of the transformed image.

  • -
  • interpolation (InterpolationMode) – Desired interpolation enum defined by -torchvision.transforms.InterpolationMode. Default is InterpolationMode.BILINEAR. -If input is Tensor, only InterpolationMode.NEAREST, InterpolationMode.BILINEAR are supported. -For backward compatibility integer values (e.g. PIL.Image.NEAREST) are still acceptable.

  • -
  • fill (sequence or number, optional) –

    Pixel fill value for the area outside the transformed -image. If given a number, the value is used for all bands respectively.

    -
    -

    Note

    -

    In torchscript mode single int/float value is not supported, please use a sequence -of length 1: [value, ].

    -
    -

  • -
-
-
Returns
-

transformed Image.

-
-
Return type
-

PIL Image or Tensor

-
-
-

Examples using perspective:

-
- -
-
-torchvision.transforms.functional.pil_to_tensor(pic)[source]
-

Convert a PIL Image to a tensor of the same type. -This function does not support torchscript.

-

See PILToTensor for more details.

-
-

Note

-

A deep copy of the underlying array is performed.

-
-
-
Parameters
-

pic (PIL Image) – Image to be converted to tensor.

-
-
Returns
-

Converted image.

-
-
Return type
-

Tensor

-
-
-
- -
-
-torchvision.transforms.functional.posterize(img: torch.Tensor, bits: int)torch.Tensor[source]
-

Posterize an image by reducing the number of bits for each color channel.

-
-
Parameters
-
    -
  • img (PIL Image or Tensor) – Image to have its colors posterized. -If img is torch Tensor, it should be of type torch.uint8 and -it is expected to be in […, 1 or 3, H, W] format, where … means -it can have an arbitrary number of leading dimensions. -If img is PIL Image, it is expected to be in mode “L” or “RGB”.

  • -
  • bits (int) – The number of bits to keep for each channel (0-8).

  • -
-
-
Returns
-

Posterized image.

-
-
Return type
-

PIL Image or Tensor

-
-
-

Examples using posterize:

-
- -
-
-torchvision.transforms.functional.resize(img: torch.Tensor, size: List[int], interpolation: torchvision.transforms.functional.InterpolationMode = <InterpolationMode.BILINEAR: 'bilinear'>, max_size: Optional[int] = None, antialias: Optional[bool] = None)torch.Tensor[source]
-

Resize the input image to the given size. -If the image is torch Tensor, it is expected -to have […, H, W] shape, where … means an arbitrary number of leading dimensions

-
-

Warning

-

The output image might be different depending on its type: when downsampling, the interpolation of PIL images -and tensors is slightly different, because PIL applies antialiasing. This may lead to significant differences -in the performance of a network. Therefore, it is preferable to train and serve a model with the same input -types. See also below the antialias parameter, which can help making the output of PIL images and tensors -closer.

-
-
-
Parameters
-
    -
  • img (PIL Image or Tensor) – Image to be resized.

  • -
  • size (sequence or int) –

    Desired output size. If size is a sequence like -(h, w), the output size will be matched to this. If size is an int, -the smaller edge of the image will be matched to this number maintaining -the aspect ratio. i.e, if height > width, then image will be rescaled to -\(\left(\text{size} \times \frac{\text{height}}{\text{width}}, \text{size}\right)\).

    -
    -

    Note

    -

    In torchscript mode size as single int is not supported, use a sequence of length 1: [size, ].

    -
    -

  • -
  • interpolation (InterpolationMode) – Desired interpolation enum defined by -torchvision.transforms.InterpolationMode. -Default is InterpolationMode.BILINEAR. If input is Tensor, only InterpolationMode.NEAREST, -InterpolationMode.BILINEAR and InterpolationMode.BICUBIC are supported. -For backward compatibility integer values (e.g. PIL.Image.NEAREST) are still acceptable.

  • -
  • max_size (int, optional) – The maximum allowed for the longer edge of -the resized image: if the longer edge of the image is greater -than max_size after being resized according to size, then -the image is resized again so that the longer edge is equal to -max_size. As a result, size might be overruled, i.e the -smaller edge may be shorter than size. This is only supported -if size is an int (or a sequence of length 1 in torchscript -mode).

  • -
  • antialias (bool, optional) –

    antialias flag. If img is PIL Image, the flag is ignored and anti-alias -is always used. If img is Tensor, the flag is False by default and can be set to True for -InterpolationMode.BILINEAR only mode. This can help making the output for PIL images and tensors -closer.

    -
    -

    Warning

    -

    There is no autodiff support for antialias=True option with input img as Tensor.

    -
    -

  • -
-
-
Returns
-

Resized image.

-
-
Return type
-

PIL Image or Tensor

-
-
-

Examples using resize:
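A sketch combining size, max_size and antialias, assuming a float tensor input; the sizes are arbitrary:

import torch
import torchvision.transforms.functional as F
from torchvision.transforms import InterpolationMode

img = torch.rand(3, 300, 500)  # placeholder image, (C, H, W)

# Match the smaller edge to 256 while keeping the aspect ratio,
# but never let the longer edge exceed 384
out = F.resize(img, size=256, interpolation=InterpolationMode.BILINEAR,
               max_size=384, antialias=True)
print(out.shape)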

-
- -
-
-torchvision.transforms.functional.resized_crop(img: torch.Tensor, top: int, left: int, height: int, width: int, size: List[int], interpolation: torchvision.transforms.functional.InterpolationMode = <InterpolationMode.BILINEAR: 'bilinear'>)torch.Tensor[source]
-

Crop the given image and resize it to desired size. -If the image is torch Tensor, it is expected -to have […, H, W] shape, where … means an arbitrary number of leading dimensions

-

Notably used in RandomResizedCrop.

-
-
Parameters
-
    -
  • img (PIL Image or Tensor) – Image to be cropped. (0,0) denotes the top left corner of the image.

  • -
  • top (int) – Vertical component of the top left corner of the crop box.

  • -
  • left (int) – Horizontal component of the top left corner of the crop box.

  • -
  • height (int) – Height of the crop box.

  • -
  • width (int) – Width of the crop box.

  • -
  • size (sequence or int) – Desired output size. Same semantics as resize.

  • -
  • interpolation (InterpolationMode) – Desired interpolation enum defined by -torchvision.transforms.InterpolationMode. -Default is InterpolationMode.BILINEAR. If input is Tensor, only InterpolationMode.NEAREST, -InterpolationMode.BILINEAR and InterpolationMode.BICUBIC are supported. -For backward compatibility integer values (e.g. PIL.Image.NEAREST) are still acceptable.

  • -
-
-
Returns
-

Cropped image.

-
-
Return type
-

PIL Image or Tensor

-
-
-

Examples using resized_crop:

-
- -
-
-torchvision.transforms.functional.rgb_to_grayscale(img: torch.Tensor, num_output_channels: int = 1)torch.Tensor[source]
-

Convert RGB image to grayscale version of image. -If the image is torch Tensor, it is expected -to have […, 3, H, W] shape, where … means an arbitrary number of leading dimensions

-
-

Note

-

Note that this method supports only RGB images as input. For inputs in other color spaces, consider using to_grayscale() with a PIL Image.

-
-
-
Parameters
-
    -
  • img (PIL Image or Tensor) – RGB Image to be converted to grayscale.

  • -
  • num_output_channels (int) – number of channels of the output image. Value can be 1 or 3. Default, 1.

  • -
-
-
Returns
-

Grayscale version of the image.

-
    -
  • if num_output_channels = 1 : returned image is single channel

  • -
  • if num_output_channels = 3 : returned image is 3 channel with r = g = b

  • -
-

-
-
Return type
-

PIL Image or Tensor

-
-
-
- -
-
-torchvision.transforms.functional.rotate(img: torch.Tensor, angle: float, interpolation: torchvision.transforms.functional.InterpolationMode = <InterpolationMode.NEAREST: 'nearest'>, expand: bool = False, center: Optional[List[int]] = None, fill: Optional[List[float]] = None, resample: Optional[int] = None)torch.Tensor[source]
-

Rotate the image by angle. -If the image is torch Tensor, it is expected -to have […, H, W] shape, where … means an arbitrary number of leading dimensions.

-
-
Parameters
-
    -
  • img (PIL Image or Tensor) – image to be rotated.

  • -
  • angle (number) – rotation angle value in degrees, counter-clockwise.

  • -
  • interpolation (InterpolationMode) – Desired interpolation enum defined by -torchvision.transforms.InterpolationMode. Default is InterpolationMode.NEAREST. -If input is Tensor, only InterpolationMode.NEAREST, InterpolationMode.BILINEAR are supported. -For backward compatibility integer values (e.g. PIL.Image.NEAREST) are still acceptable.

  • -
  • expand (bool, optional) – Optional expansion flag. -If true, expands the output image to make it large enough to hold the entire rotated image. -If false or omitted, make the output image the same size as the input image. -Note that the expand flag assumes rotation around the center and no translation.

  • -
  • center (sequence, optional) – Optional center of rotation. Origin is the upper left corner. -Default is the center of the image.

  • -
  • fill (sequence or number, optional) –

    Pixel fill value for the area outside the transformed -image. If given a number, the value is used for all bands respectively.

    -
    -

    Note

    -

    In torchscript mode single int/float value is not supported, please use a sequence -of length 1: [value, ].

    -
    -

  • -
-
-
Returns
-

Rotated image.

-
-
Return type
-

PIL Image or Tensor

-
-
-

Examples using rotate:

-
- -
-
-torchvision.transforms.functional.solarize(img: torch.Tensor, threshold: float)torch.Tensor[source]
-

Solarize an RGB/grayscale image by inverting all pixel values above a threshold.

-
-
Parameters
-
    -
  • img (PIL Image or Tensor) – Image to have its colors inverted. -If img is torch Tensor, it is expected to be in […, 1 or 3, H, W] format, -where … means it can have an arbitrary number of leading dimensions. -If img is PIL Image, it is expected to be in mode “L” or “RGB”.

  • -
  • threshold (float) – All pixels equal or above this value are inverted.

  • -
-
-
Returns
-

Solarized image.

-
-
Return type
-

PIL Image or Tensor

-
-
-

Examples using solarize:

-
- -
-
-torchvision.transforms.functional.ten_crop(img: torch.Tensor, size: List[int], vertical_flip: bool = False)List[torch.Tensor][source]
-

Generate ten cropped images from the given image. -Crop the given image into four corners and the central crop plus the -flipped version of these (horizontal flipping is used by default). -If the image is torch Tensor, it is expected -to have […, H, W] shape, where … means an arbitrary number of leading dimensions

-
-

Note

-

This transform returns a tuple of images and there may be a -mismatch in the number of inputs and targets your Dataset returns.

-
-
-
Parameters
-
    -
  • img (PIL Image or Tensor) – Image to be cropped.

  • -
  • size (sequence or int) – Desired output size of the crop. If size is an -int instead of sequence like (h, w), a square crop (size, size) is -made. If provided a sequence of length 1, it will be interpreted as (size[0], size[0]).

  • -
  • vertical_flip (bool) – Use vertical flipping instead of horizontal

  • -
-
-
Returns
-

tuple (tl, tr, bl, br, center, tl_flip, tr_flip, bl_flip, br_flip, center_flip) -Corresponding top left, top right, bottom left, bottom right and -center crop and same for the flipped image.

-
-
Return type
-

tuple

-
-
-
- -
-
-torchvision.transforms.functional.to_grayscale(img, num_output_channels=1)[source]
-

Convert PIL image of any mode (RGB, HSV, LAB, etc) to grayscale version of image. -This transform does not support torch Tensor.

-
-
Parameters
-
    -
  • img (PIL Image) – PIL Image to be converted to grayscale.

  • -
  • num_output_channels (int) – number of channels of the output image. Value can be 1 or 3. Default is 1.

  • -
-
-
Returns
-

Grayscale version of the image.

-
    -
  • if num_output_channels = 1 : returned image is single channel

  • -
  • if num_output_channels = 3 : returned image is 3 channel with r = g = b

  • -
-

-
-
Return type
-

PIL Image

-
-
-

Examples using to_grayscale:

-
- -
-
-torchvision.transforms.functional.to_pil_image(pic, mode=None)[source]
-

Convert a tensor or an ndarray to PIL Image. This function does not support torchscript.

-

See ToPILImage for more details.

-
-
Parameters
-
    -
  • pic (Tensor or numpy.ndarray) – Image to be converted to PIL Image.

  • -
  • mode (PIL.Image mode) – color space and pixel depth of input data (optional).

  • -
-
-
-
-
Returns
-

Image converted to PIL Image.

-
-
Return type
-

PIL Image

-
-
-

Examples using to_pil_image:

-
- -
-
-torchvision.transforms.functional.to_tensor(pic)[source]
-

Convert a PIL Image or numpy.ndarray to tensor. -This function does not support torchscript.

-

See ToTensor for more details.

-
-
Parameters
-

pic (PIL Image or numpy.ndarray) – Image to be converted to tensor.

-
-
Returns
-

Converted image.

-
-
Return type
-

Tensor

-
-
-
- -
-
-torchvision.transforms.functional.vflip(img: torch.Tensor)torch.Tensor[source]
-

Vertically flip the given image.

-
-
Parameters
-

img (PIL Image or Tensor) – Image to be flipped. If img -is a Tensor, it is expected to be in […, H, W] format, -where … means it can have an arbitrary number of leading -dimensions.

-
-
Returns
-

Vertically flipped image.

-
-
Return type
-

PIL Image or Tensor

-
-
-

Examples using vflip:

-
- -
\ No newline at end of file
diff --git a/0.11./utils.html b/0.11./utils.html
deleted file mode 100644
index 3aaf959e182..00000000000
--- a/0.11./utils.html
+++ /dev/null
@@ -1,776 +0,0 @@
-torchvision.utils — Torchvision main documentation

torchvision.utils

-
-
-torchvision.utils.make_grid(tensor: Union[torch.Tensor, List[torch.Tensor]], nrow: int = 8, padding: int = 2, normalize: bool = False, value_range: Optional[Tuple[int, int]] = None, scale_each: bool = False, pad_value: int = 0, **kwargs)torch.Tensor[source]
-

Make a grid of images.

-
-
Parameters
-
    -
  • tensor (Tensor or list) – 4D mini-batch Tensor of shape (B x C x H x W) -or a list of images all of the same size.

  • -
  • nrow (int, optional) – Number of images displayed in each row of the grid. -The final grid size is (B / nrow, nrow). Default: 8.

  • -
  • padding (int, optional) – amount of padding. Default: 2.

  • -
  • normalize (bool, optional) – If True, shift the image to the range (0, 1), -by the min and max values specified by value_range. Default: False.

  • -
  • value_range (tuple, optional) – tuple (min, max) where min and max are numbers, -then these numbers are used to normalize the image. By default, min and max -are computed from the tensor.

  • -
  • scale_each (bool, optional) – If True, scale each image in the batch of -images separately rather than the (min, max) over all images. Default: False.

  • -
  • pad_value (float, optional) – Value for the padded pixels. Default: 0.

  • -
-
-
Returns
-

the tensor containing grid of images.

-
-
Return type
-

grid (Tensor)

-
-
-

Examples using make_grid:
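A minimal sketch combining make_grid with save_image; the random mini-batch and the output filename are placeholders:

import torch
from torchvision.utils import make_grid, save_image

batch = torch.rand(16, 3, 32, 32)  # placeholder mini-batch of images

grid = make_grid(batch, nrow=4, padding=2, normalize=True)  # 4 images per row
save_image(grid, "grid.png")  # arbitrary output path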

-
- -
-
-torchvision.utils.save_image(tensor: Union[torch.Tensor, List[torch.Tensor]], fp: Union[str, pathlib.Path, BinaryIO], format: Optional[str] = None, **kwargs)None[source]
-

Save a given Tensor into an image file.

-
-
Parameters
-
    -
  • tensor (Tensor or list) – Image to be saved. If given a mini-batch tensor, -saves the tensor as a grid of images by calling make_grid.

  • -
  • fp (string or file object) – A filename or a file object

  • -
  • format (Optional) – If omitted, the format to use is determined from the filename extension. -If a file object was used instead of a filename, this parameter should always be used.

  • -
  • **kwargs – Other arguments are documented in make_grid.

  • -
-
-
-
- -
-
-torchvision.utils.draw_bounding_boxes(image: torch.Tensor, boxes: torch.Tensor, labels: Optional[List[str]] = None, colors: Optional[Union[List[Union[str, Tuple[int, int, int]]], str, Tuple[int, int, int]]] = None, fill: Optional[bool] = False, width: int = 1, font: Optional[str] = None, font_size: int = 10)torch.Tensor[source]
-

Draws bounding boxes on the given image. The values of the input image should be uint8, between 0 and 255. If fill is True, the resulting Tensor should be saved as a PNG image.

-
-
Parameters
-
    -
  • image (Tensor) – Tensor of shape (C x H x W) and dtype uint8.

  • -
  • boxes (Tensor) – Tensor of size (N, 4) containing bounding boxes in (xmin, ymin, xmax, ymax) format. Note that -the boxes are absolute coordinates with respect to the image. In other words: 0 <= xmin < xmax < W and -0 <= ymin < ymax < H.

  • -
  • labels (List[str]) – List containing the labels of bounding boxes.

  • -
  • colors (Union[List[Union[str, Tuple[int, int, int]]], str, Tuple[int, int, int]]) – List containing the colors -or a single color for all of the bounding boxes. The colors can be represented as str or -Tuple[int, int, int].

  • -
  • fill (bool) – If True fills the bounding box with specified color.

  • -
  • width (int) – Width of bounding box.

  • -
  • font (str) – A filename containing a TrueType font. If the file is not found in this filename, the loader may -also search in other directories, such as the fonts/ directory on Windows or /Library/Fonts/, -/System/Library/Fonts/ and ~/Library/Fonts/ on macOS.

  • -
  • font_size (int) – The requested font size in points.

  • -
-
-
Returns
-

Image Tensor of dtype uint8 with bounding boxes plotted.

-
-
Return type
-

img (Tensor[C, H, W])

-
-
-

Examples using draw_bounding_boxes:
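A minimal sketch; the image tensor, box coordinates, labels and colors below are all made up for illustration:

import torch
from torchvision.utils import draw_bounding_boxes

img = torch.randint(0, 256, (3, 100, 100), dtype=torch.uint8)  # placeholder uint8 image

boxes = torch.tensor([[10, 10, 50, 50], [30, 40, 90, 80]], dtype=torch.float)  # (xmin, ymin, xmax, ymax)
out = draw_bounding_boxes(img, boxes, labels=["cat", "dog"],
                          colors=["red", "#00FF00"], width=2)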

-
- -
-
-torchvision.utils.draw_segmentation_masks(image: torch.Tensor, masks: torch.Tensor, alpha: float = 0.8, colors: Optional[List[Union[str, Tuple[int, int, int]]]] = None)torch.Tensor[source]
-

Draws segmentation masks on the given RGB image. The values of the input image should be uint8, between 0 and 255.

-
-
Parameters
-
    -
  • image (Tensor) – Tensor of shape (3, H, W) and dtype uint8.

  • -
  • masks (Tensor) – Tensor of shape (num_masks, H, W) or (H, W) and dtype bool.

  • -
  • alpha (float) – Float number between 0 and 1 denoting the transparency of the masks. -0 means full transparency, 1 means no transparency.

  • -
  • colors (list or None) – List containing the colors of the masks. The colors can -be represented as PIL strings e.g. “red” or “#FF00FF”, or as RGB tuples e.g. (240, 10, 157). -When masks has a single entry of shape (H, W), you can pass a single color instead of a list -with one element. By default, random colors are generated for each mask.

  • -
-
-
Returns
-

Image Tensor, with segmentation masks drawn on top.

-
-
Return type
-

img (Tensor[C, H, W])

-
-
-

Examples using draw_segmentation_masks:
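A minimal sketch drawing a single boolean mask; the image and the mask region are placeholders:

import torch
from torchvision.utils import draw_segmentation_masks

img = torch.randint(0, 256, (3, 100, 100), dtype=torch.uint8)  # placeholder RGB image

mask = torch.zeros(100, 100, dtype=torch.bool)  # one (H, W) mask covering the top-left quarter
mask[:50, :50] = True

out = draw_segmentation_masks(img, masks=mask, alpha=0.6, colors="blue")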

-
- -
diff --git a/0.11/_downloads/07fcc19ba03226cd3d83d4e40ec44385/auto_examples_python.zip b/0.11/_downloads/07fcc19ba03226cd3d83d4e40ec44385/auto_examples_python.zip
index c3644a3c9e0..386aeea972b 100644
Binary files a/0.11/_downloads/07fcc19ba03226cd3d83d4e40ec44385/auto_examples_python.zip and b/0.11/_downloads/07fcc19ba03226cd3d83d4e40ec44385/auto_examples_python.zip differ
diff --git a/0.11/_downloads/6f1e7a639e0699d6164445b55e6c116d/auto_examples_jupyter.zip b/0.11/_downloads/6f1e7a639e0699d6164445b55e6c116d/auto_examples_jupyter.zip
index eaa9c5c9c45..edefd9291ca 100644
Binary files a/0.11/_downloads/6f1e7a639e0699d6164445b55e6c116d/auto_examples_jupyter.zip and b/0.11/_downloads/6f1e7a639e0699d6164445b55e6c116d/auto_examples_jupyter.zip differ
diff --git a/0.11/_images/sphx_glr_plot_video_api_001.png b/0.11/_images/sphx_glr_plot_video_api_001.png
index 88e5d7f3051..0305457b9fc 100644
Binary files a/0.11/_images/sphx_glr_plot_video_api_001.png and b/0.11/_images/sphx_glr_plot_video_api_001.png differ
diff --git a/0.11/_images/sphx_glr_plot_video_api_thumb.png b/0.11/_images/sphx_glr_plot_video_api_thumb.png
index 9a9f9d5a7dd..c4555201856 100644
Binary files a/0.11/_images/sphx_glr_plot_video_api_thumb.png and b/0.11/_images/sphx_glr_plot_video_api_thumb.png differ
diff --git a/0.11/_sources/auto_examples/plot_repurposing_annotations.rst.txt b/0.11/_sources/auto_examples/plot_repurposing_annotations.rst.txt
index 1adb0ea2e23..082093ea317 100644
--- a/0.11/_sources/auto_examples/plot_repurposing_annotations.rst.txt
+++ b/0.11/_sources/auto_examples/plot_repurposing_annotations.rst.txt
@@ -430,7 +430,7 @@ Here is an example where we re-purpose the dataset from the

 .. rst-class:: sphx-glr-timing

-   **Total running time of the script:** ( 0 minutes 2.531 seconds)
+   **Total running time of the script:** ( 0 minutes 2.456 seconds)


 .. _sphx_glr_download_auto_examples_plot_repurposing_annotations.py:
diff --git a/0.11/_sources/auto_examples/plot_scripted_tensor_transforms.rst.txt b/0.11/_sources/auto_examples/plot_scripted_tensor_transforms.rst.txt
index 001bd6bf430..9c02fd67a58 100644
--- a/0.11/_sources/auto_examples/plot_scripted_tensor_transforms.rst.txt
+++ b/0.11/_sources/auto_examples/plot_scripted_tensor_transforms.rst.txt
@@ -281,7 +281,7 @@ Since the model is scripted, it can be easily dumped on disk and re-used

 .. rst-class:: sphx-glr-timing

-   **Total running time of the script:** ( 0 minutes 2.409 seconds)
+   **Total running time of the script:** ( 0 minutes 1.822 seconds)


 .. _sphx_glr_download_auto_examples_plot_scripted_tensor_transforms.py:
diff --git a/0.11/_sources/auto_examples/plot_transforms.rst.txt b/0.11/_sources/auto_examples/plot_transforms.rst.txt
index 0c0f6c0eee2..251dcf0c605 100644
--- a/0.11/_sources/auto_examples/plot_transforms.rst.txt
+++ b/0.11/_sources/auto_examples/plot_transforms.rst.txt
@@ -763,7 +763,7 @@ randomly applies a list of transforms, with a given probability.

 .. rst-class:: sphx-glr-timing

-   **Total running time of the script:** ( 0 minutes 8.766 seconds)
+   **Total running time of the script:** ( 0 minutes 8.589 seconds)


 .. _sphx_glr_download_auto_examples_plot_transforms.py:
diff --git a/0.11/_sources/auto_examples/plot_video_api.rst.txt b/0.11/_sources/auto_examples/plot_video_api.rst.txt
index dca154634cb..fcfe603cbb1 100644
--- a/0.11/_sources/auto_examples/plot_video_api.rst.txt
+++ b/0.11/_sources/auto_examples/plot_video_api.rst.txt
@@ -568,7 +568,7 @@ We can generate a dataloader and test the dataset.

 .. code-block:: none

-    {'video': ['./dataset/2/v_SoccerJuggling_g24_c01.avi', './dataset/1/RATRACE_wave_f_nm_np1_fr_goo_37.avi', './dataset/2/SOX5yA1l24A.mp4', './dataset/1/WUzgd7C1pWA.mp4', './dataset/2/SOX5yA1l24A.mp4'], 'start': [4.527937796860869, 0.32808810871331967, 2.1127425502685773, 6.973557045986544, 4.402754675265702], 'end': [5.038367, 0.833333, 2.635967, 7.474132999999999, 4.9049], 'tensorsize': [torch.Size([16, 3, 112, 112]), torch.Size([16, 3, 112, 112]), torch.Size([16, 3, 112, 112]), torch.Size([16, 3, 112, 112]), torch.Size([16, 3, 112, 112])]}
+    {'video': ['./dataset/1/WUzgd7C1pWA.mp4', './dataset/2/SOX5yA1l24A.mp4', './dataset/2/v_SoccerJuggling_g24_c01.avi', './dataset/1/WUzgd7C1pWA.mp4', './dataset/2/SOX5yA1l24A.mp4'], 'start': [1.6059886546397426, 2.8462735255185843, 5.794335670319363, 3.7124644717480897, 5.732515897132387], 'end': [2.135467, 3.370033, 6.306299999999999, 4.237566999999999, 6.239567], 'tensorsize': [torch.Size([16, 3, 112, 112]), torch.Size([16, 3, 112, 112]), torch.Size([16, 3, 112, 112]), torch.Size([16, 3, 112, 112]), torch.Size([16, 3, 112, 112])]}
@@ -626,7 +626,7 @@ Cleanup the video and dataset:

 .. rst-class:: sphx-glr-timing

-   **Total running time of the script:** ( 0 minutes 3.665 seconds)
+   **Total running time of the script:** ( 0 minutes 4.826 seconds)


 .. _sphx_glr_download_auto_examples_plot_video_api.py:
diff --git a/0.11/_sources/auto_examples/plot_visualization_utils.rst.txt b/0.11/_sources/auto_examples/plot_visualization_utils.rst.txt
index 6618fe39485..1ce1c492e36 100644
--- a/0.11/_sources/auto_examples/plot_visualization_utils.rst.txt
+++ b/0.11/_sources/auto_examples/plot_visualization_utils.rst.txt
@@ -789,7 +789,7 @@ instance with class 15 (which corresponds to 'bench') was not selected.

 .. rst-class:: sphx-glr-timing

-   **Total running time of the script:** ( 0 minutes 10.765 seconds)
+   **Total running time of the script:** ( 0 minutes 9.400 seconds)


 .. _sphx_glr_download_auto_examples_plot_visualization_utils.py:
diff --git a/0.11/_sources/auto_examples/sg_execution_times.rst.txt b/0.11/_sources/auto_examples/sg_execution_times.rst.txt
index c03a006765b..2425834e3b0 100644
--- a/0.11/_sources/auto_examples/sg_execution_times.rst.txt
+++ b/0.11/_sources/auto_examples/sg_execution_times.rst.txt
@@ -5,16 +5,16 @@
 Computation times
 =================

-**00:28.136** total execution time for **auto_examples** files:
+**00:27.092** total execution time for **auto_examples** files:

 +-----------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_auto_examples_plot_visualization_utils.py` (``plot_visualization_utils.py``)                | 00:10.765 | 0.0 MB |
+| :ref:`sphx_glr_auto_examples_plot_visualization_utils.py` (``plot_visualization_utils.py``)                | 00:09.400 | 0.0 MB |
 +-----------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_auto_examples_plot_transforms.py` (``plot_transforms.py``)                                  | 00:08.766 | 0.0 MB |
+| :ref:`sphx_glr_auto_examples_plot_transforms.py` (``plot_transforms.py``)                                  | 00:08.589 | 0.0 MB |
 +-----------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_auto_examples_plot_video_api.py` (``plot_video_api.py``)                                    | 00:03.665 | 0.0 MB |
+| :ref:`sphx_glr_auto_examples_plot_video_api.py` (``plot_video_api.py``)                                    | 00:04.826 | 0.0 MB |
 +-----------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_auto_examples_plot_repurposing_annotations.py` (``plot_repurposing_annotations.py``)        | 00:02.531 | 0.0 MB |
+| :ref:`sphx_glr_auto_examples_plot_repurposing_annotations.py` (``plot_repurposing_annotations.py``)        | 00:02.456 | 0.0 MB |
 +-----------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_auto_examples_plot_scripted_tensor_transforms.py` (``plot_scripted_tensor_transforms.py``)  | 00:02.409 | 0.0 MB |
+| :ref:`sphx_glr_auto_examples_plot_scripted_tensor_transforms.py` (``plot_scripted_tensor_transforms.py``)  | 00:01.822 | 0.0 MB |
 +-----------------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/0.11/auto_examples/plot_repurposing_annotations.html b/0.11/auto_examples/plot_repurposing_annotations.html
index a402b851b1a..d051bf12dc8 100644
--- a/0.11/auto_examples/plot_repurposing_annotations.html
+++ b/0.11/auto_examples/plot_repurposing_annotations.html
@@ -566,7 +566,7 @@

 Converting Segmentation Dataset to Detection Dataset
     return img, target
-Total running time of the script: ( 0 minutes 2.531 seconds)
+Total running time of the script: ( 0 minutes 2.456 seconds)
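The hunk above touches the section on converting a segmentation dataset into a detection dataset. For orientation, here is a minimal sketch of what such a wrapper can look like; it assumes a base dataset whose target is a stack of per-instance boolean masks of shape (N, H, W) and relies on ``torchvision.ops.masks_to_boxes``. The class name and the single-class labelling are illustrative, not the tutorial's exact code.

.. code-block:: python

    import torch
    from torchvision.ops import masks_to_boxes


    class SegmentationToDetection(torch.utils.data.Dataset):
        """Wrap a segmentation dataset so it yields detection-style targets."""

        def __init__(self, base):
            self.base = base  # assumed to return (img, masks), masks: (N, H, W) bool

        def __len__(self):
            return len(self.base)

        def __getitem__(self, idx):
            img, masks = self.base[idx]
            boxes = masks_to_boxes(masks)  # (N, 4) boxes in xyxy format
            labels = torch.ones((masks.shape[0],), dtype=torch.int64)  # one foreground class
            target = {"boxes": boxes, "labels": labels, "masks": masks}
            return img, target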

-Total running time of the script: ( 0 minutes 2.409 seconds)
+Total running time of the script: ( 0 minutes 1.822 seconds)
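The 2.409 to 1.822 second change above matches the scripted tensor transforms page, whose source hunk notes that the scripted model "can be easily dumped on disk and re-used". As a reminder of the pattern, a sketch follows (the transform choice and file name are assumptions, not the example's exact code): tensor-only transforms composed with ``torch.nn.Sequential`` can be scripted with ``torch.jit.script``, saved, and loaded back.

.. code-block:: python

    import torch
    import torch.nn as nn
    import torchvision.transforms as T

    # Compose tensor transforms with nn.Sequential (T.Compose is not scriptable).
    pipeline = nn.Sequential(
        T.CenterCrop(224),
        T.RandomHorizontalFlip(p=0.5),
        T.ConvertImageDtype(torch.float32),
    )

    scripted = torch.jit.script(pipeline)
    scripted.save("scripted_transforms.pt")               # dump to disk ...
    reloaded = torch.jit.load("scripted_transforms.pt")   # ... and re-use later

    dummy = torch.randint(0, 256, (3, 300, 400), dtype=torch.uint8)
    print(reloaded(dummy).shape)  # torch.Size([3, 224, 224])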

-Original image
-Total running time of the script: ( 0 minutes 8.766 seconds)
+Original image
+Total running time of the script: ( 0 minutes 8.589 seconds)

 Out:
-{'video': ['./dataset/2/v_SoccerJuggling_g24_c01.avi', './dataset/1/RATRACE_wave_f_nm_np1_fr_goo_37.avi', './dataset/2/SOX5yA1l24A.mp4', './dataset/1/WUzgd7C1pWA.mp4', './dataset/2/SOX5yA1l24A.mp4'], 'start': [4.527937796860869, 0.32808810871331967, 2.1127425502685773, 6.973557045986544, 4.402754675265702], 'end': [5.038367, 0.833333, 2.635967, 7.474132999999999, 4.9049], 'tensorsize': [torch.Size([16, 3, 112, 112]), torch.Size([16, 3, 112, 112]), torch.Size([16, 3, 112, 112]), torch.Size([16, 3, 112, 112]), torch.Size([16, 3, 112, 112])]}
+{'video': ['./dataset/1/WUzgd7C1pWA.mp4', './dataset/2/SOX5yA1l24A.mp4', './dataset/2/v_SoccerJuggling_g24_c01.avi', './dataset/1/WUzgd7C1pWA.mp4', './dataset/2/SOX5yA1l24A.mp4'], 'start': [1.6059886546397426, 2.8462735255185843, 5.794335670319363, 3.7124644717480897, 5.732515897132387], 'end': [2.135467, 3.370033, 6.306299999999999, 4.237566999999999, 6.239567], 'tensorsize': [torch.Size([16, 3, 112, 112]), torch.Size([16, 3, 112, 112]), torch.Size([16, 3, 112, 112]), torch.Size([16, 3, 112, 112]), torch.Size([16, 3, 112, 112])]}
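Each entry in the dictionary above records, for one sampled clip, the source video path, the start and end presentation timestamps in seconds, and the size of a 16-frame 112x112 tensor. A rough sketch of a dataset that yields records of this shape is given below; it uses the ``torchvision.io.VideoReader`` seek-and-iterate API together with ``itertools.islice``, while the class name, the ``epoch_size`` parameter, and the resize step are assumptions rather than the tutorial's exact code.

.. code-block:: python

    import itertools
    import random

    import torch
    import torchvision
    import torchvision.transforms.functional as F


    class RandomClipDataset(torch.utils.data.IterableDataset):
        """Yield fixed-length clips sampled at random start times."""

        def __init__(self, video_paths, clip_len=16, size=(112, 112), epoch_size=5):
            self.video_paths = video_paths
            self.clip_len = clip_len
            self.size = size
            self.epoch_size = epoch_size

        def __iter__(self):
            for _ in range(self.epoch_size):
                path = random.choice(self.video_paths)
                reader = torchvision.io.VideoReader(path, "video")
                duration = reader.get_metadata()["video"]["duration"][0]
                start = random.uniform(0.0, max(duration - 1.0, 0.0))

                frames, ptss = [], []
                for frame in itertools.islice(reader.seek(start), self.clip_len):
                    frames.append(F.resize(frame["data"], list(self.size)))
                    ptss.append(frame["pts"])

                clip = torch.stack(frames)  # (clip_len, 3, 112, 112)
                yield {"path": path, "start": ptss[0], "end": ptss[-1], "video": clip}

Fed through a standard ``DataLoader``, such records carry the same kind of fields (paths, start/end timestamps, per-clip tensor shapes) as the batch summary shown in the hunk.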
 
@@ -4378,7 +4378,7 @@
 4. Data Visualization
     shutil.rmtree("./dataset")
-Total running time of the script: ( 0 minutes 3.665 seconds)
+Total running time of the script: ( 0 minutes 4.826 seconds)