From b2e29b817e1c18a84b63d6246a96715baa172f6b Mon Sep 17 00:00:00 2001 From: Sergey Serebryakov Date: Mon, 16 Apr 2018 16:40:27 -0700 Subject: [PATCH] Updating documentation. --- README.md | 102 +--------- docs/_sidebar.md | 5 +- docs/data/data.md | 156 ++++++++++----- docs/index.md | 58 ++---- docs/intro/advanced_intro.md | 365 +++++++++++++++++++++++++++++++++++ docs/intro/imgs/chart.png | Bin 0 -> 37705 bytes docs/intro/intro.md | 2 +- docs/models/models.md | 4 +- docs/precision/precision.md | 6 +- 9 files changed, 504 insertions(+), 194 deletions(-) mode change 100644 => 120000 README.md create mode 100644 docs/intro/advanced_intro.md create mode 100644 docs/intro/imgs/chart.png diff --git a/README.md b/README.md deleted file mode 100644 index b1f0bbf..0000000 --- a/README.md +++ /dev/null @@ -1,101 +0,0 @@ -# __Deep Learning Benchmarking Suite__ -Deep Learning Benchmarking Suite (DLBS) is a collection of tools for providing consistent and reproducible benchmark experiments on various hardware/software combinations. In particular, DLBS provides the following functionality: -1. Implements internally various deep models. Our goal is to provide same model implementations for all supported frameworks. Deep models that are supported include various VGGs, ResNets, AlexNet, GoogleNet and others. -2. Benchmarks single node CPU/multi-GPU configurations. Frameworks that are now supported: BVLC/NVIDIA/Intel Caffe, Caffe2, TensorFlow, MXNet and TensorRT. Due to rapid development progress of these frameworks, we fix framework versions to particular commit that we have tested. -3. Supports inference and training phases. -4. Benchmarking tools can use real data if dataset is available. Else, falls back to synthetic data. -5. Supports bare metal and docker environments. - -## Supported platforms -Deep Learning Benchmarking Suite was tested on various servers with Ubuntu / -RedHat / CentOS operating systems with/without NVIDIA GPUs. It may not work with -Mac OS due to slightly different command line API of some of the tools we use -(like, for instance, sed) - we will fix this in one of the next releases. - -## Installation -1. Install Docker and NVIDIA Docker for containerized benchmarks. Read [here](/docker/docker.md?id=docker) why we prefer to use docker and [here](/docker/install_docker.md?id=installing-docker) for installing/troubleshooting tips. This is not required. DLBS can work with bare metal framework installations. -2. Clone Deep Learning Benchmarking Suite from [GitHub](https://github.com/HewlettPackard/dlcookbook-dlbs.git) - ```bash - git clone https://github.com/HewlettPackard/dlcookbook-dlbs dlbs - ``` -3. Build/pull docker images for containerized benchmarks or build/install host frameworks for bare metal benchmarks. - 1. [TensorFlow](http://tensorflow.org) - 2. [BVLC Caffe](http://caffe.berkeleyvision.org/) - 3. [NVIDIA Caffe](https://github.com/NVIDIA/caffe) - 4. [Intel Caffe](https://github.com/intel/caffe) - 5. [Caffe2](http://caffe2.ai) - 6. [MXNet](http://mxnet.io) - 7. [TensorRT](https://developer.nvidia.com/tensorrt) - - There are several ways to get Docker images. Read [here](/docker/pull_build_images.md?id=buildpull-docker-images) about various options. - -## Quick start -Assuming TensorFlow is installed and CUDA enabled GPU is present, execute the following commands to run simple experiment with ResNet50 model (if you do not have GPUs, see below): -```bash -# Go to DLBS home folder -cd dlbs -# Build TensorFlow image that's set as default in standard configuration files. 
-# Alternatively, you can skip this step and use your own image or pull image from NVIDIA GPU Cloud -cd ./docker -./build tensorflow/cuda9-cudnn7 -cd .. -# Setup python paths -export PYTHONPATH=$(pwd)/python:$PYTHONPATH -# Run experiment. It will run containerized GPU TensorFlow with default image 'hpe/tensorflow:cuda9-cudnn7' -# If you want to use your own image, add this argument: -Ptensorflow.docker_image='"YOUR_DOCKER_IMAGE_NAME"' -python ./python/dlbs/experimenter.py run -Pexp.framework='"tensorflow"' -Pexp.model='"resnet50"' -Pexp.gpus='"0"' -Pexp.bench_root='"./benchmarks/my_experiment"' -Pexp.log_file='"./benchmarks/my_experiment/tf.log"' -# Print some results -python ./python/dlbs/logparser.py --keys exp.device_type results.time exp.framework_title exp.model_title exp.replica_batch -- ./benchmarks/my_experiment/tf.log -``` - -If you do not have NVIDIA GPUs, run TensorFlow in CPU mode (the only difference is that -GPUs set to empty string: `--exp.gpus=""`): -```bash -# First steps same as in above GPU example - go to DLBS root folder and build/pull image. -# You may want to build a CPU only version of TensorFlow. By default, experimenter will use -# 'docker' to run CPU workloads what may not work. In the example below I override this -# behavior by providing exp.docker_launcher parameter. -cd dlbs -# Setup python paths -export PYTHONPATH=$(pwd)/python:$PYTHONPATH -# Run experiment -python ./python/dlbs/experimenter.py run -Pexp.framework='"tensorflow"' -Pexp.model='"resnet50"' -Pexp.gpus='""' -Pexp.log_file='"./benchmarks/my_experiment/tf.log"' -Pexp.docker_launcher='"nvidia-docker"' -# Print some results -python ./python/dlbs/logparser.py --keys exp.device_type results.time exp.framework_title exp.model_title exp.replica_batch -- ./benchmarks/my_experiment/tf.log -``` - -If everything is OK, you should expect seeing this JSON (training time - an average batch time - of course will be different depending on your GPU/CPU models): -```json -{ - "data": [ - { - "exp.device_type": "gpu", - "exp.replica_batch": "16", - "exp.framework_title": "TensorFlow", - "exp.model_title": "ResNet50", - "results.time": 255.59105431309905 - } - ] -} -``` - -If `results.time` is not there, study ./benchmarks/my_experiment/tf.log for error messages. - - - -## Deep Learning CookBook -Deep Learning Benchmarking Suite is part of HPE's Deep Learning CookBook project. -A project overview can be found on HPE developer portal [here](https://developer.hpe.com/platform/deep-learning-cookbook/home) - -## Documentation - -We host documentation on GitHub pages [here](http://hewlettpackard.github.io/dlcookbook-dlbs). - -## License - -Deep Learning Benchmarking Suite is released under the [Apache 2.0 license](./LICENSE). 
- -## Contact us - -* Natalia Vassilieva -* Sergey Serebryakov diff --git a/README.md b/README.md new file mode 120000 index 0000000..ed9e8db --- /dev/null +++ b/README.md @@ -0,0 +1 @@ +./docs/index.md \ No newline at end of file diff --git a/docs/_sidebar.md b/docs/_sidebar.md index 6a5c46f..027d1be 100644 --- a/docs/_sidebar.md +++ b/docs/_sidebar.md @@ -4,7 +4,7 @@ - [Install](/docker/install_docker.md?id=installing-docker) - [Network](/docker/docker_network.md?id=docker-networking) - [Pull/build images](/docker/pull_build_images.md?id=buildpull-docker-images) -- [Introduction](/intro/intro.md?id=introduction) +- [Introduction](/intro/intro.md?id=introduction-to-benchmarking-suite) - [Tutorials](/tutorials/tutorials.md?id=tutorials) - [Models](/models/models.md?id=models) - [Parameters](/parameters/parameters.md?id=parameters) @@ -16,7 +16,8 @@ - [TensorFlow](/frameworks/tensorflow.md?id=tensorflow) - [PyTorch](/frameworks/pytorch.md?id=pytorch) - [TensorRT](/frameworks/tensorrt.md?id=tensorrt) -- [Data](/data/data.md?id=data) +- [Input data](/data/data.md?id=data) +- [Data Precision](/precision/precision.md?id=data-precision) - [Resource monitor](/monitor/monitor.md?id=resource-monitor) - [System information](/sysinfo/sysinfo.md?id=system-information) - [Extending DLBS](/extend/dlbs.md?id=extending-deep-learning-benchmarking-suite) diff --git a/docs/data/data.md b/docs/data/data.md index 0ced2c7..c506c36 100644 --- a/docs/data/data.md +++ b/docs/data/data.md @@ -1,58 +1,120 @@ # __Data__ -The benchmarking suite supports real and synthetic data. By default, synthetic data is used. Synthetic data is basically a randomly initialized tensor of an appropriate shape. - -Two parameters are defined in `exp` namespace that can be used to provide an additional information to a processing scripts once benchmark has been done: - -1. `exp.data` (synthetic, real). Indicates if real data was used. Must be provided by a user now. Default value is synthetic. -2. `exp.data_store` (mem, local-hdd, local-ssd, nfs ...) - a user defined value that described where data was located. Must be provided by a user. - -Benchmarkers are welcome to introduce any other parameters they need to describe data ingestion pipeline in a more granular way. - -For now, every framework has it's specific data ingestion parameters. However, a path to a dataset is always defined by a parameter `${framework_family}.data_dir`, for instance, `tensorflow.data_dir`, `caffe.data_dir`, `mxnet.data_dir` etc. - -> TensorRT does not support real data - only synthetic. - -> In current version, only image-type of datasets are supported. However, if input pipeline -> is only specified by a directory, it will work. - -One thing to remember preparing benchmark dataset is that various models define their own shape for input images. For instance, InceptionV3's input shape is `3x299x299` while ResNet50's input shape is `3x224x224`. The [models](/models/models.md?id=supported-models) section provides detailed information on all supported models and their input shapes. +DLBS can benchmark DL workloads with __synthetic__ or __real__ data. Synthetic data means there's no real dataset. Instead, +a synthetic (fake, dummy) dataset stored in memory is used. This basically means that with synthetic data we do not take +into account overhead associated with ingesting data. Why is synthetic data useful? + +1. It gives an optimal performance assuming overhead associated with data ingestion pipeline is zero. 
Comparing this upper bound with runs that use real data makes it possible to
+   evaluate the performance of ingestion pipelines from both software and hardware (storage) perspectives.
+2. Data ingestion is infrastructure dependent. Data, depending on its size, can be stored in memory, on a local HDD/SSD, on NFS or on some
+   high performance parallel/distributed file system. We do not know this in advance, and wrong assumptions may lead to
+   incorrect numbers that are either too optimistic or too pessimistic.
+3. As mentioned above, synthetic data shows the performance you would get if data ingestion were completely overlapped with
+   forward/backward computations. Comparing against it is a good way to benchmark ingestion libraries and various data storage options.
+
+What needs to be taken into account when benchmarking ingestion pipelines with DLBS?
+1. Some of the supported frameworks provide additional parameters for tuning ingestion pipelines, such as the
+   number of preprocessing or loader threads. Setting these parameters properly may have a significant impact on performance,
+   especially if the benchmarked models are not computationally expensive (for example, AlexNetOWT or fully connected neural nets).
+2. Components that load data and preprocess it (scale, mirror etc.) may not be optimally written. DLBS tries to reuse what is
+   available in the frameworks as much as possible. In some cases, such as PyTorch, a custom data loader that reads
+   data from Caffe's LMDB datasets was written, and it can be improved.
+3. Various preprocessing options significantly influence preprocessing time. By default, DLBS uses a minimal set of
+   transformations including crop/scale and mirror. No heavy distortions are enabled.
+4. In general, it is a good idea to benchmark the ingestion pipeline on its own to get its performance in isolation (no
+   computations involved). At the moment, only the PyTorch backend provides this functionality.
+5. The location of data may have an impact on performance, especially for light models such as AlexNetOWT that require
+   high ingress traffic to keep GPUs busy.
+6. Data caching can have a very significant impact on performance. The very first time data is accessed it may
+   get cached by the operating system (if the dataset is not too large). Thus, the first epoch will be slow and the following
+   epochs will be dramatically faster. This means that benchmarkers need to understand exactly what they are
+   benchmarking. A possible strategy is the following:
+   1. Decide what kind of dataset is used. If it is small or medium sized, assume the data will be cached. Either run a
+      warm-up epoch to force the operating system to cache the dataset or put it in /dev/shm.
+   2. For large datasets make sure the data is not cached. Either disable the file system cache or make sure the data is removed
+      from the cache before running a new epoch/benchmark (the [dd](https://www.gnu.org/software/coreutils/manual/html_node/dd-invocation.html)
+      utility can do that - search for _nocache_ there). In the current version of DLBS there is no option to invoke a custom
+      user callback before running a new epoch. Contact us if you need this.
+
+By default, the benchmarking suite uses synthetic data. There are three global parameters and multiple framework-specific parameters
+that affect ingestion pipelines:
+ 1. `exp.data_dir` A full path to a folder where data is stored. Caffe's forks use LMDB/LEVELDB datasets, TensorFlow
+    uses files in tfrecord format, Caffe2 and PyTorch use LMDB datasets, and MXNet uses recordio files. The backend for the NVIDIA
+    inference engine TensorRT does not support real data.
+    The default value of this parameter is empty, which means synthetic data is used.
+ 2. `exp.data` (synthetic, real). By default, the value of this parameter is set by the experimenter script. It is 'synthetic'
+    if `exp.data_dir` is empty and 'real' otherwise. It can be used to search for experiments with real data.
+ 3. `exp.data_store` This is an optional parameter that indicates what type of storage was used. It is a user defined string
+    with no specific format that describes storage properties. Benchmarkers can introduce any other parameters they need to
+    provide additional details in a more structured way.
+
+> Only CNNs support real data. Other models, such as fully connected ones (DeepMNIST, AcousticModel), do not support
+> real data and can only be used with synthetic data.
+
+As mentioned above, the `exp.data_dir` parameter defines the path to a dataset. This parameter is fine if a single framework
+is benchmarked. If two or more frameworks are benchmarked in the same experiment, it may not be convenient to add extension
+sections that define the value of this parameter depending on the framework. In this case, users can use framework specific
+dataset paths of the form `${framework_family}.data_dir`: `tensorflow.data_dir`, `mxnet.data_dir`, `caffe2.data_dir` etc.,
+and no extensions are required. By default, the value of the `exp.data_dir` parameter is set to `"${${exp.framework}.data_dir}"`,
+so it will pick whatever dataset is specified for the currently active framework.
+
+One thing to remember when preparing a benchmark dataset is that various models define their own shape for input images. For instance,
+InceptionV3's input shape is `3x299x299` while ResNet50's input shape is `3x224x224`. The
+[models](/models/models.md?id=supported-models) section provides detailed information on all supported models and their input shapes.
+
+The following sections describe framework specific parameters. They are divided into three categories: (1) __mandatory__ parameters
+that need to be specified to enable real data, (2) __optional__ parameters that may be skipped and (3) __critical__ parameters that can
+significantly influence performance. Normally, you want to try several values for the critical parameters to see what works best for
+your particular configuration. Default values should work OK for compute intensive models such as ResNet50 that do not require a
+large number of images per second.
 ### Caffe
-> Caffe can work with datasets stored in LMDB or LEVELDB databases.
-
-1. `caffe.data_dir` A data directory if real data should be used. If empty, synthetic data is used (no data ingestion pipeline).
-2. `caffe.mirror` In case of real data, specifies if 'mirrowing' should be applied.
-3. `caffe.data_mean_file` In case of real data, specifies path to an image mean file."
-4. `caffe.data_backend` In case of real data, specifies its storage backend ('LMDB' or 'LEVELDB').
+Caffe can work with datasets stored in LMDB or LEVELDB databases.
+1. Mandatory parameters
+   * `caffe.data_dir=""` A data directory if real data should be used. If empty, synthetic data is used (no data ingestion pipeline).
+   * `caffe.data_mean_file=""` In case of real data, specifies path to an image mean file.
+   * `caffe.data_backend="LMDB"` In case of real data, specifies its storage backend ('LMDB' or 'LEVELDB').
+2. Optional parameters
+   * `caffe.mirror=true` In case of real data, specifies if 'mirroring' should be applied.
 ### Caffe2
-> Caffe2 can work with datasets stored in LMDB database.
-
-1. 
`caffe2.data_dir` A data directory if real data should be used. If empty, synthetic data is used (no data ingestion pipeline).
-2. `caffe2.data_backend` In case of real data, specifies its storage backend ('lmdb').
+Caffe2 can work with datasets stored in an LMDB database.
+1. Mandatory parameters
+   * `caffe2.data_dir=""` A data directory if real data should be used. If empty, synthetic data is used (no data ingestion pipeline).
+   * `caffe2.data_backend="lmdb"` In case of real data, specifies its storage backend.
+2. Critical parameters
+   * `caffe2.num_decode_thread=1` Number of image decode threads when a real dataset is used. For deep compute intensive models
+     it can be as small as 1. For high throughput models such as AlexNetOWT it should be set to 6-8 threads for 4 V100 GPUs to
+     provide ~9k images/second (depending on the model of your processor).
 ### MXNet
-> Caffe2 can work with datasets stored in \*.rec files
-
-1. `mxnet.data_dir` A data directory if real data should be used. If empty, synthetic data is used (no data ingestion pipeline).
-
-### TensorFlow
-
-> TensorFlow can work with datasets stored in \*.tfrecord files. Basically, experimenter
-> exposes a subset of data-related parameters of a tf_cnn_benchmarks project.
-
-1. `tensorflow.data_dir` A data directory if real data should be used. If empty, synthetic data is used (no data ingestion pipeline). See tf_cnn_benchmarks.py for more details.
-2. `tensorflow.data_name` This is a 'data_name' parameter for tf_cnn_benchmarks. See tf_cnn_benchmarks.py for more details.
-3. `tensorflow.distortions` This is a 'distortions' parameter for tf_cnn_benchmarks. See tf_cnn_benchmarks.py for more details.
-
-> Setting `tensorflow.distortions` to true will significantly slow down easy computable
-> models such as AlexNet.
+MXNet can work with datasets stored in \*.rec files.
+1. Mandatory parameters
+   * `mxnet.data_dir` A data directory if real data should be used. If empty, synthetic data is used (no data ingestion pipeline).
+2. Critical parameters
+   * `mxnet.preprocess_threads=4` Number of preprocessing threads for the data ingestion pipeline when real data is used.
+   * `mxnet.prefetch_buffer=10` Number of batches to prefetch (buffer size).
 ### PyTorch
-> PyTorch can now work with datasets of [raw images](http://pytorch.org/docs/master/torchvision/datasets.html#imagefolder).
+PyTorch works with Caffe's LMDB datasets.
+1. Mandatory parameters
+   * `pytorch.data_dir=""` A data directory if real data should be used. If empty, synthetic data is used (no data ingestion pipeline).
+   * `pytorch.data_backend="caffe_lmdb"` The type of dataset specified by *pytorch.data_dir*. Two dataset types are supported. The first one
+     is *caffe_lmdb*. This is exactly the same type of dataset that the Caffe frameworks use. The second type is *image_folder* that can
+     be read by torchvision's [ImageFolder dataset](https://github.com/pytorch/vision/blob/master/torchvision/datasets/folder.py#L72).
+2. Optional parameters
+   * `pytorch.data_shuffle=false` Enable/disable shuffling for both real and synthetic datasets.
+3. Critical parameters
+   * `pytorch.num_loader_threads=4` Number of worker threads to be used by the data loader (for real datasets).
-1. `pytorch.data_dir` A data directory if real data should be used. If empty, synthetic data is used (no data ingestion pipeline).
-2. `pytorch.data_backend` The type of dataset specified by *pytorch.data_dir*. Two datasets are supported. The first one is *caffe_lmdb*. This is exactly the same type of datasets that Caffe frameworks use.
The second type is *image_folder* that can be read by a torchvision's [ImageFolder dataset](https://github.com/pytorch/vision/blob/master/torchvision/datasets/folder.py#L72). -3. `pytorch.data_shuffle` Enable/disable shuffling for both real and synthetic datasets. -4. `pytorch.num_loader_threads` Number of worker threads to be used by data loader (for synthetic and real datasets). +### TensorFlow +TensorFlow can work with datasets stored in \*.tfrecord files. Basically, experimenter exposes a subset of data-related parameters of +a tf_cnn_benchmarks project. +1. Mandatory parameters + * `tensorflow.data_dir=""` A data directory if real data should be used. If empty, synthetic data is used (no data ingestion pipeline). + See tf_cnn_benchmarks.py for more details. + * `tensorflow.data_name=""` This is a 'data_name' parameter for tf_cnn_benchmarks. See tf_cnn_benchmarks.py for more details. If you use imagenet + type of dataset, set it to "imagenet". +2. Critical parameters + * `tensorflow.distortions=false` This is a 'distortions' parameter for tf_cnn_benchmarks. See tf_cnn_benchmarks.py for more details. + This activates additional image transformations and will significantly decrease throughput that ingestion pipeline can provide. diff --git a/docs/index.md b/docs/index.md index deb11f3..4261857 100644 --- a/docs/index.md +++ b/docs/index.md @@ -1,5 +1,5 @@ # __Deep Learning Benchmarking Suite__ -Deep Learning Benchmarking Suite (DLBS) is a collection of command line tools for providing consistent and reproducible benchmark experiments on various hardware/software combinations. In particular, DLBS provides the following functionality: +Deep Learning Benchmarking Suite (DLBS) is a collection of command line tools for running consistent and reproducible benchmark experiments on various hardware/software combinations. In particular, DLBS provides the following functionality: 1. Implements internally various deep models. Our goal is to provide same model implementations for all supported frameworks. Deep models that are supported include various VGGs, ResNets, AlexNet and GoogleNet models. 2. Benchmarks single node multi GPU configurations. Frameworks that are now supported: BVLC Caffe, NVIDIA Caffe, Intel Caffe, Caffe2, TensorFlow, MXNet, PyTorch and NVIDIA inference engine TensorRT. 3. Supports inference and training phases. @@ -33,36 +33,33 @@ Mac OS due to slightly different command line API of some of the tools we use There are several ways to get Docker images. Read [here](/docker/pull_build_images.md?id=buildpull-docker-images) about various options including images from [NVIDIA GPU Cloud](https://www.nvidia.com/en-us/gpu-cloud/). We may not support the newest framework versions due to API change. ## Quick start -Assuming TensorFlow is installed and CUDA enabled GPU is present, execute the following commands to run simple experiment with ResNet50 model (if you do not have GPUs, see below): +Assuming CUDA enabled GPU is present, execute the following commands to run simple experiment with ResNet50 model (if you do not have GPUs, see below): ```bash # Go to DLBS home folder cd dlbs # Setup python paths export PYTHONPATH=$(pwd)/python:$PYTHONPATH +# Build TensorFlow image. In the case of TensorFlow, the `hpe/tensorflow:cuda9-cudnn7` image +# located in tensorflow/cuda9-cudnn7 is the default TensorFlow image. +# Alternatively, you can skip this step and use your own image, pull image from NVIDIA GPU Cloud +# or use your bare metal TensorFlow installation. 
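+# Note (illustrative): if you already have a suitable image, you can skip the build
+# step below and instead point DLBS at that image when running the experimenter,
+# e.g. by adding -Ptensorflow.docker_image='"YOUR_DOCKER_IMAGE_NAME"' (placeholder,
+# substitute your actual image name) to the experimenter command further down.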
+# This will build an image named `hpe/tensorflow:cuda9-cudnn7` +cd ./docker +./build.sh tensorflow/cuda9-cudnn7 +cd .. # Create folder for experiment results mkdir -p ./benchmarks/my_experiment # Run experiment python ./python/dlbs/experimenter.py run -Pexp.framework='"tensorflow"' -Pexp.model='"resnet50"' -Pexp.gpus='"0"' -Pexp.log_file='"./benchmarks/my_experiment/tf.log"' # Print some results -python ./python/dlbs/logparser.py ./benchmarks/my_experiment/tf.log --output_params "exp.device_type,exp.phase,results.time,exp.framework_title,exp.model_title,exp.replica_batch,exp.framework_ver" +python ./python/dlbs/logparser.py ./benchmarks/my_experiment/tf.log --output_params "exp.device_type,exp.phase,results.time,results.throughput,exp.framework_title,exp.model_title,exp.replica_batch,exp.framework_ver" ``` -If you do not have NVIDIA GPUs, run TensorFlow in CPU mode (the only difference is that -GPUs set to empty string: `--exp.gpus=""`): -```bash -# Go to DLBS home folder -cd dlbs -# Setup python paths -export PYTHONPATH=$(pwd)/python:$PYTHONPATH -# Create folder for experiment results -mkdir -p ./benchmarks/my_experiment -# Run experiment -python ./python/dlbs/experimenter.py run -Pexp.framework='"tensorflow"' -Pexp.model='"resnet50"' -Pexp.device_type='"cpu"' -Pexp.log_file='"./benchmarks/my_experiment/tf.log"' -# Print some results -python ./python/dlbs/logparser.py ./benchmarks/my_experiment/tf.log --output_params "exp.device_type,exp.phase,results.time,exp.framework_title,exp.model_title,exp.replica_batch,exp.framework_ver" -``` +To use multiple GPUs with data parallel schema, provide list of GPUs i.e. `--exp.gpus='"0,1,2,3"'` +to use 4 GPUs. If you do not have NVIDIA GPUs, set list of GPUs to empty value i.e. `--exp.gpus='""'`. That will instruct +benchmarking suite to use CPUs. -If everything is OK, you should expect seeing this JSON (training time - an average batch time - of course will be different): +If everything is OK, you should expect seeing JSON similar to this one: ```json { "data": [ @@ -73,31 +70,18 @@ If everything is OK, you should expect seeing this JSON (training time - an aver "exp.model_title": "ResNet50", "exp.phase": "training", "exp.replica_batch": 16, - "results.time": 273.27070879590093 + "results.time": 273.27, + "results.throughput": 58.55 } ] } ``` +The `results.time` - is an average time in milliseconds to process one batch of data. If it is not there, +study ./benchmarks/my_experiment/tf.log for error messages. The `results.throughput` parameter is the number +of instances per second, in this case, number of images/seconds. -If `results.time` is not there, study ./benchmarks/my_experiment/tf.log for error messages. - -## Further reading +The [advanced introduction](./intro/advanced_intro.md?id=advanced-introduction-to-benchmarking-suite) contains more examples of what DLBS can do. 
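+
+For example, the four-GPU run mentioned above can be launched like this. This is only a sketch that
+reuses the folders and parameters from the Quick start example; the log file name `tf_4gpu.log` is an
+arbitrary choice:
+
+```bash
+# Weak scaling: the replica (per-GPU) batch runs on each of the four GPUs.
+python ./python/dlbs/experimenter.py run -Pexp.framework='"tensorflow"' -Pexp.model='"resnet50"' -Pexp.gpus='"0,1,2,3"' -Pexp.log_file='"./benchmarks/my_experiment/tf_4gpu.log"'
+# Print some results
+python ./python/dlbs/logparser.py ./benchmarks/my_experiment/tf_4gpu.log --output_params "exp.device_type,exp.phase,results.time,results.throughput,exp.framework_title,exp.model_title,exp.replica_batch,exp.framework_ver"
+```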
-- [Docker](/docker/docker.md?id=docker) - - [Install](/docker/install_docker.md?id=installing-docker) - - [Network](/docker/docker_network.md?id=docker-networking) - - [Pull/build images](/docker/pull_build_images.md?id=buildpull-docker-images) -- [Introduction](/intro/intro.md?id=introduction) -- [Tutorials](/tutorials/tutorials.md?id=tutorials) -- [Frameworks](/frameworks/frameworks.md?id=frameworks) - - [Caffe](/frameworks/caffe.md?id=caffe) - - [Caffe2](/frameworks/caffe2.md?id=caffe2) - - [MXNet](/frameworks/mxnet.md?id=mxnet) - - [TensorFlow](/frameworks/tensorflow.md?id=tensorflow) - - [TensorRT](/frameworks/tensorrt.md?id=tensorrt) - - [PyTorch](/frameworks/pytorch.md?id=pytorch) -- [Extending DLBS](/extend/dlbs.md?id=extending-deep-learning-benchmarking-suite) -- [Resource monitor](/monitor/monitor.md?id=resource-monitor) ## Contact us diff --git a/docs/intro/advanced_intro.md b/docs/intro/advanced_intro.md new file mode 100644 index 0000000..c946e10 --- /dev/null +++ b/docs/intro/advanced_intro.md @@ -0,0 +1,365 @@ +__Advanced introduction to Benchmarking Suite__ +============================================================================== + +Overview +-------- + +In this document we introduce [Deep Learning Benchmarking Suite +(DLBS)](https://github.com/HewlettPackard/dlcookbook-dlbs) and show step by step how one +can use this tool to perform end-to-end performance analysis of deep learning +workloads, from running benchmarks to reporting results. + +Deep Learning Benchmarking Suite +-------------------------------- + +The [Deep Learning Benchmarking Suite](https://hewlettpackard.github.io/dlcookbook-dlbs) (DLBS) is a +collection of tools which assist in running DL benchmarks in a consistent and reproducible manner across +a range of software and hardware combinations. Out of the box the DLBS supports the following: + +1. Single-node, multi-GPU benchmarks. + +2. Seven DL frameworks including TensorFlow, BVLC/NVIDIA/Intel Caffe, Caffe2, MXNet and + PyTorch and one inference engine NVIDIA TensorRT. + +3. Eighteen + [models](https://hewlettpackard.github.io/dlcookbook-dlbs/\#/models/models?id=supported-models) + for all supported frameworks. We try to make sure that a model + implementation is consistent (the same) across all frameworks. + +4. Either bare metal or containerized frameworks including containers from + [NVIDIA GPU Cloud.](https://ngc.nvidia.com/) + +5. [Basic + dockerfiles](https://hewlettpackard.github.io/dlcookbook-dlbs/\#/docker/pull_build_images?id=buildpull-docker-images) + for all frameworks which users can use to build containers with DL + frameworks locally on their machines. + +6. Simple resource monitoring that tracks parameters such as CPU/GPU + utilization, memory and power consumption etc. + +7. Basic reporting capabilities to build exploratory reports as well as reports + that investigate both weak and strong scaling. Scripts to plot charts are + also included. We plan to include advanced python notebook-based reporting + capabilities in the nearest future. + +More detailed information can be found on +[GitHub](https://github.com/HewlettPackard/dlcookbook-dlbs), +[documentation](https://hewlettpackard.github.io/dlcookbook-dlbs) and HPE +[developer](https://developer.hpe.com/platform/deep-learning-cookbook/home) +portals. + +Installation +------------ + +1. Install Docker and NVIDIA Docker for running containerized benchmarks. 
We have a quick overview
+   [here](https://hewlettpackard.github.io/dlcookbook-dlbs/#/docker/docker?id=docker)
+   of why we recommend using docker. If you want to use bare metal framework
+   installations, skip all steps specific to containers.
+
+2. Clone Deep Learning Benchmarking Suite from
+   [GitHub](https://github.com/HewlettPackard/dlcookbook-dlbs):
+
+   ```bash
+   git clone https://github.com/HewlettPackard/dlcookbook-dlbs dlbs
+   ```
+
+3. The DLBS depends on modules from the standard python library (python 2.7 only, python
+   3.x is not supported currently). Optional dependencies that do not influence
+   the benchmarking process are listed in `python/requirements.txt`. If these
+   dependencies are not found, the code that uses them is disabled.
+
+4. Build/pull docker images. If you do not have your own docker images, you can
+   pull images or build images yourself. This
+   [page](https://hewlettpackard.github.io/dlcookbook-dlbs/#/docker/pull_build_images?id=buildpull-docker-images)
+   provides more details.
+
+Benchmarking workflow
+---------------------
+
+1. The user defines the configuration they want to explore.
+
+   The configuration can be provided as a JSON file or as command line arguments. In
+   this post we will be using command line arguments for simplicity. The
+   configuration can include definitions of frameworks, models, datasets etc.
+   Once this is done,
+
+2. The user runs this configuration.
+
+   Depending on the exploration space, this process may take several minutes or
+   several days. The result of this stage is a collection of raw textual log
+   files. Those log files contain framework specific outputs as well as
+   benchmark information logged by the DLBS. Then,
+
+3. The user runs the log parser.
+
+   This is the part of DLBS that parses log files, extracts information and
+   serializes it as a JSON file. There is an option to produce a compressed JSON
+   file since a textual JSON file may be quite large.
+
+4. The user performs analysis of the results.
+
+   The DLBS provides basic functionality for querying the JSON file and
+   extracting the results of interest. The DLBS can also build basic
+   exploratory, weak and strong scaling reports as well as plot various charts.
+
+Before going through the benchmark steps described below, the user will need to
+set up the benchmark directories. In the following description it is assumed
+that the user runs benchmarks in the *DLBS_ROOT/benchmarks/benchmark* folder,
+where *DLBS_ROOT* is the root folder the DLBS was cloned into:
+
+```bash
+# Go to the root folder of the DLBS
+cd ./dlbs
+
+# Create benchmark directories
+mkdir -p ./benchmarks/benchmark
+
+# Go to that folder
+cd ./benchmarks/benchmark
+
+# Setup host environment including python paths etc. Make sure you use Python 2.7.
+source ../../scripts/environment.sh
+
+python --version  # Make sure it is 2.7
+echo $DLBS_NAME   # You should see here "Deep Learning Benchmarking Suite".
+echo $DLBS_ROOT   # You should see here path to your root directory.
+
+# Define shortcuts to scripts that we will use most
+experimenter=../../python/dlbs/experimenter.py
+parser=../../python/dlbs/logparser.py
+```
+
+For every series of benchmarks it is generally good practice to create a shell
+script that performs all these initializations so that they do not need to be
+re-entered for each run. Also, note that the benchmarking directory can reside
+anywhere on the file system as long as the DLBS host environment is properly
+initialized by calling the `environment.sh` script.
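+
+For example, such a helper script could look like the following minimal sketch. The file
+name `init_benchmark_env.sh` is just an illustration (it is not part of DLBS); it simply
+replays the initialization steps shown above:
+
+```bash
+# init_benchmark_env.sh -- hypothetical helper; source it from the benchmark
+# directory (e.g. DLBS_ROOT/benchmarks/benchmark) before running benchmarks:
+#   source ./init_benchmark_env.sh
+
+# Initialize the DLBS host environment (python paths, DLBS_NAME, DLBS_ROOT, ...).
+source ../../scripts/environment.sh
+
+# Shortcuts to the scripts that are used most often in this document.
+experimenter=../../python/dlbs/experimenter.py
+parser=../../python/dlbs/logparser.py
+reporter=../../python/dlbs/reports/summary_builder.py
+plotter=../../python/dlbs/reports/series_builder.py
+```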
+
+Getting help in command line
+----------------------------
+
+The DLBS provides basic help for input/output parameters and frameworks:
+
+```bash
+# Show help module functionality
+python $experimenter help --help
+
+# Show list of supported frameworks
+python $experimenter help --frameworks
+
+# Show list of commonly used parameters for tensorflow
+python $experimenter help --frameworks tensorflow
+
+# Show help message for parameter 'exp.gpus' (it can be a regular expression)
+python $experimenter help --params exp.gpus
+
+# Perform full text search in descriptions (it can be a regular expression)
+python $experimenter help --text cuda
+```
+
+Do not worry if the output of the above-mentioned commands does not make sense
+yet; the input and output parameters and their specifications will be
+explained later in this post.
+
+Benchmark configuration
+-----------------------
+
+In this particular example a series of benchmarks will be run on a 4 GPU
+machine, using several frameworks and one neural network model. The command line
+that launches the benchmarks is as follows:
+
+```bash
+python $experimenter run --log-level=info\
+    -Vexp.framework='["mxnet", "tensorflow", "caffe2"]'\
+    -Vexp.gpus='["0", "0,1", "0,1,2,3"]'\
+    -Vexp.model='["resnet50"]'\
+    -Vexp.replica_batch='[16]'\
+    -Pexp.docker=true\
+    -Pexp.num_warmup_batches=10\
+    -Pexp.num_batches=100\
+    -Pexp.phase='"training"'\
+    -Pexp.log_file='"${BENCH_ROOT}/logs/${exp.framework}/${exp.model}_${exp.num_gpus}_${exp.effective_batch}.log"'\
+    -Pmxnet.docker_image='"hpe/mxnet:cuda9-cudnn7"'\
+    -Pcaffe2.docker_image='"hpe/caffe2:cuda8-cudnn6"'\
+    -Ptensorflow.docker_image='"nvcr.io/nvidia/tensorflow:17.11"'
+```
+
+Configuration parameters are specified with the `-V` and `-P` command line
+arguments. The `V` parameters are called *variables*. The experimenter script
+uses them to generate multiple benchmarks by computing the Cartesian product of
+the variables' value ranges. The `P` parameters are called *parameters*; they do
+not contribute to generating benchmark variations and simply define values for
+specific parameters. A particular configuration parameter can be a variable or
+a parameter in different configurations, depending on your needs.
+
+The `V` and `P` arguments are followed by a parameter name and its value. Let's
+consider the configuration provided above in detail. It defines 4 variables -
+**exp.framework**, **exp.gpus**, **exp.model** and **exp.replica_batch**:
+
+- **exp.framework** A framework identifier to run.
+
+- **exp.gpus** A list of GPUs to use. If empty, CPUs will be used instead.
+
+- **exp.model** A neural network model to benchmark.
+
+- **exp.replica_batch** A replica batch size. Another term for a device batch
+  size.
+
+Variables are usually assigned lists with different options. In the
+example configuration shown above the variables are:
+
+- Three frameworks: *MXNet*, *TensorFlow* and *Caffe2*.
+
+- Three different sets of GPUs using 1, 2 and 4 GPUs respectively in
+  particular combinations of GPU IDs (1 GPU: 0; 2 GPUs: 0, 1; 4 GPUs: 0, 1, 2, 3).
+
+- One value for the replica batch size, *16*; by default, the experimenter uses
+  the weak scaling strategy.
+
+- Similarly, only one value for the model, *ResNet-50*.
+
+So, given this configuration, the experimenter will run in total `3 * 3 * 1 * 1 = 9`
+benchmarks.
+
+The remaining configuration parameters are benchmark parameters that do not need
+to be varied (i.e.
they do not contribute to generating new benchmark
+configurations), though they may have their specific values in different
+benchmarks, like `exp.log_file` in the example above:
+
+- **exp.docker** A boolean parameter specifying if docker containers should be
+  used.
+
+- **exp.num_warmup_batches** Number of warm-up batches to run.
+
+- **exp.num_batches** Number of benchmark batches to run.
+
+- **exp.phase** The benchmark phase (training/inference).
+
+- **exp.log_file** Benchmark log file. As can be seen, parameters may refer to
+  other parameters. This is similar to variable expansion in bash, though greatly
+  simplified.
+
+These are so-called general parameters. There are also framework specific
+parameters that are used to simplify configurations. Framework specific
+parameters belong to a framework specific namespace (i.e. they start with
+*framework.*):
+
+- **mxnet.docker_image** A docker image for *MXNet*.
+
+- **caffe2.docker_image** A docker image for *Caffe2*.
+
+- **tensorflow.docker_image** A docker image for *TensorFlow*.
+
+> The configuration used in this post is simplified and is not intended to be
+> used in real benchmarks. To use this configuration in a reasonable
+> benchmark, at a minimum, the number of warmup and benchmark batches needs to
+> be increased.
+
+> DISCLAIMER: The benchmarks were run on a machine with the GPUs' frequencies
+> reduced for maintenance reasons.
+
+Parsing log files
+-----------------
+
+As specified by the `exp.log_file` parameter, log files produced by individual
+benchmarks will be stored in the *${BENCH_ROOT}/logs* directory. It may be
+worthwhile spending some time browsing those files. In addition to framework
+specific log output (e.g., the output from TensorFlow), they contain metadata
+about the frameworks, the experiment configuration parameters and variables,
+system performance monitoring information, time series output of iteration
+performance, system configuration information, and the parameters that were
+specified by the user on the configuration command line or in JSON configuration
+files. Most importantly, they contain summarized information on the
+performance and timing results of the benchmark in the *results.* namespace.
+
+The log parser tool can parse those log files and print the information to a
+console or write it to a JSON file. This JSON file can then be used to plot charts and
+build strong/weak/exploration reports. This JSON file will also be an input to
+more advanced reporting tools that we plan to release as open source in the near
+future.
+
+To parse log files and extract all information, run the following command:
+
+```bash
+python $parser ./logs --recursive --output_file ./benchmarks.json
+```
+
+> The log parser can write compressed files. To enable this, set the file
+> extension to *json.gz*
+
+Performing results analysis
+---------------------------
+
+### Weak scaling reports
+
+The DLBS provides basic functionality for analyzing results. The following
+command will generate a weak scaling report for benchmarks using the *MXNet*
+framework with the *ResNet50* model:
+
+```bash
+reporter=../../python/dlbs/reports/summary_builder.py
+python $reporter --summary-file ./benchmarks.json \
+                 --type weak-scaling \
+                 --target-variable results.time \
+                 --query '{"exp.framework":"mxnet","exp.model":"resnet50"}'
+```
+
+It will print out several tables outlining the average batch times in
+milliseconds, the throughput for various numbers of GPUs, and speedup and
+efficiency estimates.
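+
+For instance, assuming the same `benchmarks.json` and query as above, a *strong-scaling*
+report (one of the other report types mentioned below) can be requested simply by changing
+the report type. This is only a sketch, not output from a real run:
+
+```bash
+# Same reporter script as above; only the --type value changes.
+python $reporter --summary-file ./benchmarks.json \
+                 --type strong-scaling \
+                 --target-variable results.time \
+                 --query '{"exp.framework":"mxnet","exp.model":"resnet50"}'
+```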
If your benchmarks contain more data, you can also
+try to build a *strong-scaling* report, as sketched above. For single GPU training or inference
+benchmarks, the *exploration* report will provide useful insights. The value of the
+target variable must be *results.time*. This is the output parameter that
+contains the time in milliseconds for one batch. The *query* parameter specifies a
+query that selects the data points used to build a report. It is a JSON dictionary
+that maps keys (parameter names) to their values (constraints).
+
+The following report will be printed:
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Batch time (milliseconds)
+Network   Batch       1        2        4
+ResNet50     16  913.44  1029.24  1065.12
+
+Inferences Per Second (IPS, throughput)
+Network   Batch   1   2   4
+ResNet50     16  17  31  60
+
+Speedup (instances per second)
+Network   Batch   1     2     4
+ResNet50     16   1  1.82  3.53
+
+Efficiency = 100% * t1 / tN
+Network   Batch       1      2      4
+ResNet50     16  100.00  88.74  85.75
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+### Building charts
+
+To build a graphical report, run the following command:
+
+```bash
+plotter=../../python/dlbs/reports/series_builder.py
+python $plotter ./benchmarks.json \
+       --xparam exp.num_gpus \
+       --yparam results.throughput \
+       --chart-file ./chart.png \
+       --chart-type line \
+       --series '[{"exp.framework":"mxnet","exp.model":"resnet50"},{"exp.framework":"caffe2","exp.model":"resnet50"},{"exp.framework":"tensorflow","exp.model":"resnet50"}]' \
+       --aggregation avg \
+       --chart-opts '{"title":"ResNet50 performance","xlabel":"Number of GPUs","ylabel":"Throughput","legend":["MXNet","Caffe2","TensorFlow"]}'
+```
+
+The following chart will be created:

+Next Steps
+----------
+
+In this document we demonstrated how DLBS assists in running basic single-node
+multi-GPU benchmarks. A number of tutorial scripts that demonstrate more advanced
+usage of DLBS and can be used as examples are located in the tutorials
+[folder](https://github.com/HewlettPackard/dlcookbook-dlbs/tree/master/tutorials/dlcookbook).
diff --git a/docs/intro/imgs/chart.png b/docs/intro/imgs/chart.png
new file mode 100644
index 0000000000000000000000000000000000000000..87e46cb4fb1d4216a0e9637c458d753c241165cb
GIT binary patch
literal 37705
[binary PNG data omitted]
zI|Gj&RASmj%ba<^SKYNxs3(xodsq8qSsv2g!ETvd%@`yb){8MQqfk_VoXbst=gzkl zj${g1$S8_8betD9*)Bc0JmjU6u<;Z7=MQEuuC6W+If0g6*;ewL(RlNDmS%IR6@Q?? z2$lRUcX;~@=1HSb&FAgBRoiAx8Sm1T$K6=X^~?-gBX`8@`&5ba;hlfTn(KDVH$QY| zkbmY{wjUiuuEwBoQA(G~)6sh)<*WaIW31>2Y~#c{4&oL5Bv};0s)Jq7t20+Oz-8Ku zQ*}9rJVbTP-zWj-XtpMvk_&lWCno9tD08ZRH+hjc zJm5w|#LsGO+%EO0Qwaw#vD4g;EY?$hW>mv~r$C%J`^E7|iNhj&HZwU#SXVb@nEs$6 zsC)$;k>c(dv@dr1iNm_-l2?C!AzZIbvD26JvS} z|F-VJo=71h$(v4nUH5x`@3jhjRrT$BmUQb=t%2F$x)o>$LGQDg!aU6wn61#?k;Gu| z%5NqyOxjkl#z0j^KGgd%jJ=RPqkF&x5A~f2c)?Hi6ylk4*3({Q(+vHomRKWF@1gWk z3rtgcu9|LZTjH?ZdlMGQ-f{N9*&^fgbdzIrCGDSBn8%<}a(LH|bzhcoWej6rz|7@V z4m#-R<+oxlxEn` zgoU2W=X7h!e3xc2Lo&S&`fW0da!R0x0IVs&F!8OS+HS{ko5+31Mu)bY`>-BW&ZzH_L{DY0mV))c?qys@ zW6eF}#uMaGRu8kYE3P7UX;**+NLg!Df!0kJ&Tu=hOXsA-#203w_I4-dlbw_8kE@YM z`t%wmX<7#0QL&|Gd`blac*+Q|O9x||l3#3CEiE?aW?qJdPewTVp#}pd5F56Wo%7dK zkcC6oF@nz`!>DGPL}z%rqE!!n{?poO{(0XHgEnRe3^3?Ep1<#lkh!fsfHemx!5{;J zLCU_4c+oe3eGQUQ>gnU;hUp2nS`Q=)RgD1p3WE4jwdpM%wWDhFKoO7Si)@nr*OQ{Rm06kN+85$MlfOt=t(M!SFU=v>2MozXc&S2ujf_mk|e}gB9@&iX6SI^xF7yzhp8>Aa1ZCUr7fi695Mk@mYfxjO%Kt7yRh?xlG zBloOw#=|ZiFpqp%op(p#sR4X1)nI?j@PPI;_8Y;7Pwr@pOx(~p68}$z2i}9Z56RWi zDDysHCK@j<#ZUBvFAaAvN<|0o9FTkW z-7QC#J%cZe6Nt9&lLx5U8#KmGOavyScD`%EV~K_^h)M(y(43+V3N}6cJaONNx*Ws$ zN>O&@SvX)I&$BLm;nB%2H!_Tb`cymcl{{~lIzgqCgD1ZqnhmZES-8s9 z+*4jE*QZ{%L2$vPrHUFVi0&W$~zjzh z7bQoZ(wrEgUTGw3SqsrfRFm693G=ET1NWX1K0L++AAW2~9Du&ye*BtnXg8-k$pl^Q zmE*{|aFTy(?&vl9<--R%XAS7X)Lv2MSQq|C*%K@M*hgReYX6lHof{79Z|k~TjefGqbY_K+}5iJ)nzrz9J#2l`*yH(+$Xmlv7-L+!d{p%8c&3>On z+Oc@}d&3(v{J*~8o4A`K+8$l=j@L~-7`Pc3d9O_^n`zj;Qvy5+cvEKD$r>ez{qsmZE2rBp53sKc~2^* zuELK@cYacc+bx)GcQ!?pDzna9e#qKo3g>xe)A z_Mqxl&RM8S4i;nG%_Ak|-S@wi?>tLeo!33p=(sE4u|H`fz4~bNmXPix7`|Art0Gsg z3MDy0Tawezt8#QkNdKg}PZ*T>GduT-e9pdF`&LxA^=lpstm$lil>7Fc)oOpc;X6QY zS_7tQ2p_t#icguSb+e3QEs3PTv+kG%!7XY-EJw8!^Mp{H=?B}gi$=i?wC%XAP>`AG^jtYhC6N_+WEhsqO|zNh}@ zL<1MPan|jy?zt;OKwa?32jE26ANpO7X>Lgu+s+&g&L8>2WgYJKubJ}L;{H3? 
zeG*!r+wy~_V$~`$i>asdBDM_2kNqX;7*Jl2Jk?gI2%Zy-Fd*5UY9z9PU6#hCg z?&L}%8Rv?ei_o$X{`~pBE+g3)1|{AIecIgHK=gMEd*s@*yb=pEI#7Rc9ps-i-H11N zUyhkfU7e>pc-}aF#5ZSbjvnkM=sSWdmm1jPgwVV_@I#9rpx-TlOOh5 z(B|9%DCj=G);@N;cr7)4cq4Hl;;T`O`sv+vdiwkR5d_Hatk3)G`5u4?NK^ajjE=(U zV<;{*z5VX~a$;kwZmn6w9Iv}RodCIHlnGDR0NA8YKJaF($b$N%JG#xIUV(mGE;`*$ zvF+H6T|L5oqKSVcM|XGYo@9#g))~IUE$;@w8#HbFt@HDrlOEF7yyYR#MG^m@Fjova z>+<#2RNCVa*hj?9$VL11U0es$eqv0Vp2H&AbayrCp({rdvvnD`bat5v=1mWg&NX&k zzM(5U_8DVoRBsUUGgUQWgk=Oy8u^a^OwwSdxiuB-Yq@9M!M0M4{zca>UUPiPmAK`s zF@GfSnvkxIKt1x&qvMwC_VAdtm~0``p7#38_S-?v?7GKdOS8EN@F=z|_LnwlPsX>c zH7d^iBnqOM)a(%?pgCWgxSwY$L1Ugd7%M|E#-Z=hsir~msZ(*|NqaO?l@krhUe}(G zh|6qbgTlWTQbjK0BqJTM`=PNw1$-_iTpOk{93>gp zSJmCF@*D9Jo*OZt%^In+rM;}1Et$VMoNucp?C~7Rg&7+8!Hr8W1#B!qV}!Mk zfI?_YDOhD*LroI$8#6!EWWPYK%pgl-Qc@QbZ}*^P0%^2l%dSKp=nw`qF)O{Y?Vdtx zI;VpNxy=CzbM(bHiv`?TI*W1W?{HET+?tsy_}ib(h|s=nP!`+NylLZROv z_plBu%^aY@1*{JmT3QWIX#}JJ5wve=eO1Tt|GzfnU(mIe7G60p3f0`*I>b`K(xog{ z@t>T(CNdxWA?Zi`-isrbJq}Wp7bsZu0cUxYx9DmcCDgm#C#=%7!Y@R2z;Wg(-WgW8tFfmSHPB;L-`$ts2`sTff540HwqxW?> zxVvBUyOcyc?8*wc9JD?W1_l5=#>o4=sp&BmsQ;lb)QRg% zIe&WP%9uQqT^mCxAf@MDE{mZVz#wQQ*(gEU_Bun@b%u^V^|uHXb~|s^__CMFZ)Y27 z{^SsFi$NKmEIn>NTDzRf8Da9~sD8Sk(VQkzRwy6}kAX!nK5xkkkO?X$vRpj+Cx(Ea zsk%OY7*$F10XRejV-QhGPiUcjI$kM`0uCrpn9u~o#<_^??d_}L;$?mFF`$M9MT*MB z17JIIa2M?|wgwYZ|7iYMh5&^BYP(X6@)a zzW+jgySb2Ob^FfOecWe!;ik8xsO&MIgm8!JQu+1RuScBU-d{syE@t&~6fvr(s3=D$ zTl%(p6%3dve>f^TCm%{=_GUK=$V3GFR1Gxwa7{s45~xl(4?!dis3ehlkJWOriDu9|Yg>`?pqFmr<HgPV+B_e3 zUqsG0gdU-e7ZXsN=Y0j-PTsz{iTBJppuj1d{W4Z9$MWVVZs%+QUb51Q3uNRM?n1G^ zF>mr$-3QafBp$EwnItsVOmyD=-{vdwh zj!Q&kKq|r95g6ioX=sB&NYsH2YE@%2KS91Du@e=ro^3GTy9BDzA44JCuR4>&LS7t z2;H$@bJo`#kRk<0!MqF!X(}p{ffOf9 zM|<pf6@K^eoV-_@pxs68HXylWGg&}pB@@bRr`>*X z0A@9)D1QX1n61pLGoUpIdDR$6cMJh{?*RDn7*((FRfguam8IC+zf-F698cE4wsY;B zXy2)L-2U9T$=ug99~p(B$^A0C|Cnbq177y*VZug62;?*{T(=ppCW3@q${ajw;a#g3kBq@R=I-3;4tp;ykF!cu&a zANHGHRZUsEdkpk=Pl2Z)@Uc%thBarK^Sc3AdcOONq)!nHvIhDNs!-Fi*U58tP|S^` zM1640=e5MVZee1vD?5iEl+H4 z1*g07V zx6LZKl`gau(}+w%IOq`EMm!9o!MH9$e9b{7BUuHR<=E8DeFmpK>ziG2B_xyD*d3EY zf70+5*QCF_hsMD8L#B$&-xk}L7afigb<%9c*iLcD<+6`4L;_*Q|CWMVm!6S1@ zQ*3ee4lCmE7{gUG*dyyHy;TKVkIkIUgqMnouiNGRsC@bcV=_muJv#o8wM=Aw@I(c` zA@O5vJ4JTMoTQK_n{UH2apGK|LPZ&JZ$NTJYGm)BA+u4(&M@+H(qQ@n?UoMuw=AGy zx1_~2gGpk`_2{%80Pl6uvj6y~((1#C^|6Y}UEzY=E(^aUG>ThHd-6-#ztP64P7A3l zJ-84j(G}D%RTu=CNL{a)fA1$G1bp3#LCX6J^H`ti_Bi z4HxKapwY*5uiXa|@5t80e-wxD!>`qtzjLbeA8|)P6$V+HEXt@tLB?}@*BxCi(Ftzi z5LHW-Q>y9S-VD1>LmT7Z>;z&Q0Y@ey5>H09+bU58>^GMqmWErHp;aMo$|7M4f%$}f zB-TmN3s)9MQMg@ma7<4tLFPxf#mWfjC;9ic&!KXfgM-eH9inAhCoqqc{0BTt^45iGK^a6nSurv~~e{8zSVV z#e?UOnSsi?nK0LV!vqh_y$TJtn`eu-qcucTH7-u+4juK7LozT7)*yG>Am=Tmn}o10 z+9E>GS0+AE%IR%iS1P2xsa|yP-yvYbz$9P&@7(~Kru=SvMLE@_|)++E)}QS;$Jh z5*!#gaxA-MOGAG~@1MdTf{inG+H~zrpJ|nyMXmy^+DLu6jpocj@EU!ail_MbRhIp!{jppf97 z1$JtGB(@TmAZn5i9YJ1Ae{d_UJesG4VgE}f&Js$NUl;zS`jDeaVxCSMN3bh%@MuIX zH$(lWlrx&WvEfIJh@hwiU%H?g&LnoV4Z0Ym&aUIX@!Eff(3BRk*M#Q;1sXR&U0iVa z@OU9vk+|}JZ4s&0JCqpI#D@oyuhspM?JtV1k(eJ`epb-F&-JRlxM1X?uc^pw3=>Z6 zNPnDepz&?}T*#9iUdHKu^~cIMA=Iz9(u`FuDk++Wn#KBV?p*)c(G`aI zKJl8;q`o3ef(7-;q@z59Vu$9Os9yFb`AFP6ExHi5cJW(K$CCv~MT>AA$uz7OAyMV&fq}_ek2u zW#f>f;^fj~>lqc}Nt2G#7tvwS4~;cp)lpHxET4Dk#y-{Tifap@8SdMv_TAH@2{j>& z)Mcb|$1&@1HvUE+Cw66SCM^&bbyJ1`!H+#b57GPun_I zc@noqvM^Ov3K7cPf0SXk5?Z9b>*kx}?=%)ADn3_c7)sPAa zZ>!juI1c$E{-xmzTo-zBvnh+1t$QuXQ`pknhi)zmw_bVK80<=nF|bI)A4Dxh?_+_N zjjF!#3QGTS_OKhpno?X01gEG4H7>~xh_tMevxGQ>HHf~D7J_J*a$$xX+TPN?c z56SH=7J}qx6MlI9JBu-&Mv7mChPqq_D9NK!T@{0allC&Jufr5or_*n>MTAAx7b%ne zF^TP}i%m0V9GmpT)0|0ja!;yo9T7Qu_8rCDnsv;lxzNY+$0TtbKg#j2%{(2C*IKGj 
z_#TAoT{x$|?R`&`IPRs3^X4sSNge`SYo8HzxjCf+HiF!Z%7Gha(1hv z``dliH!L*tPDLtefqV3tgYK%mB@W|INbN*tbyeH2@S333$pdBMXDly>&fu&&Y)#gM z67An^mnHr~>Y_Lg$fMp)PE`3n=*Om`HL;z9C&wgg=&`tZlNy0&Lo2^+^k&;8*IqS9 zHi8&*zS(2XN43)N;Cef_8wZf@xgspf^d_0oC-{_;z3Dn+SDV?%~xO4n4phe z47yi#ZoZn0x0j-qN!c3gc_7jJfAHYt92!JJ@W3VdAPZK4lQ#igc5Zxi$PY}gs@>9vQ$ra z2;}Gr+)nkg4RV+=Y|Cewh?4x|9Og|E8UB!2V!OVT`UuGOAK}-A%ZUDSA-dJX`?N^> z;wum+KN2PN=QCe9DQf484_Kk3doJ_9JLvT`Lq}JIr|aWZf38tFG%E{jg!Nb&e7evX zKK&^r9|x;r9k#yfO-Q&%vM6$fde; zA4W&woDt!V#?OE5o9RWE+2jnj#;RDCK3->bbH|16pM8c5p1EVeTl)Cs9@X2*WU5+w z$P(yBC1I;DxgX0=qxnf z<1manLiy`_ySN!#x)j}DRrH9S z5c-zf-FTQ^;Udy#Ah_m#jFyco7kOfR6DvQF{Q|-{|Y_cLbCN!gz04!hUHp z{Z&}lLy&0&b!i9q{u{pRnMX>%nh*Wp31h)77YPUm+`1;bu)J_vPA)DpXu(xox~urR ztG}=BI{L`N*n8y$z|sgTOMXs0J(C>Tg6M9bASJ>98`{Yc`ix#*xbX^5`<5lWY|r2W zFNH-!1VIHB1h*C@YHA8bU}5zmaDbha5iUMHH+YB-FeooSr=@)Z{Q?o2p~+<(MMZsA zXo3XM4hW^mLA|8YF8%uh6pg^kXFt<6Hkw#9wX_tUKK=UlYTC-mB>yS}q40MQS!x~>QyOF_^F~@gUTn1`n;0IBMj(QN1Nf`SlwE5N;`fvquJKybWdOs^e z>lvs(WUS9$gdLfdmKKZ%AjpCa#2OcfK>?XUoLza#%LHZbsMy$6M79KkS3-d$JcN>! zFd3T4LDWRN3^uzrbPV#o(Lf}-09xF{C;8T_;FnFOo%~5E@EQP4u!!DoIll=}I>7iU z^sX`l_blxJp93Ofn`5}e6$k_n46tvc1FeJ$S*9PlxMcsmJdtV7h9ey;J)BL~v zbvTC=y@&JtIwud0vl~wbH#c|b_J9Ep+g{=0>-@C>_UkTO5;Fw%7a+sS>bDXFXn$+i zc);h>I3Q#IE&(kaolTIbHE|h-HdJLbH4ad60ybP|Bm+8@M;p_Puzf|kupnO>Y-UPN z2;Tr@36St)73NO@V&2_RXSJ|{7cXAW2iF0~4)Y=5t>Tds z>g=2YiPQHVJ_v$<9gENU&FX`tES^;jMMZeC;VW0Kz5sS=XiW3&5eJTYUbhfFz-|Jb z^M-6q-c=rGIR&YBgkKrRR3IS*K}3AicX!Ot?|2(w8R1Cuc+RDnR;UNu zW9@ew{<_{n=>G#V^qwK$Eo(dWJ{_4#6{bB}${1BwRuP*#QZ2F_#4@K=iwWU1*A6 zwFH7^um#gAD`QJuCpNof6%~>!EG*GgQBHt@1`GYb(C|L&cp!iV&YziYR-t#AQc#?i zY~9wXc0t#v3Dk6lwGc^TfCQd9d$taw?Z9_TE-wCnG@?ST(4s3*1!!GBizsjm3#sNG zpz){%gB?c4p9Z@ap)5iDo}E3*%Blvd4N}mcv{nFZwg`p`QKp%@Ff}#R9w!0SFEH-J!0rDsEbK7=s8K;dL6fViZz3WnVb)-!6n@QjUo*B5F$i+kY&`~z zNx-Sc9dh#X&xi3>Kc&z`suoI0xGGIpqwCOfF1wObULFevD46QyACdq3GyEr!f2iD* z2Sksdy5YP!Dqisf9v0pM5q?A@4D@rr+An=ug1Sy1LJb0TKU*hj%$x4)7UCR)(1VoF zWu@1qJIlX#KY)5R*umu=OQ2oYemKLY{QbHj9_2MWo^snfB= zS1dWu1MDT&$pTwj?^bMp>=P9&ZE$q7ZrC7f(8zD!1QAyIm((1~#Z(9T`;M@EXmiCq zH?Po!ecAVgiy`t#h|prWD|TTpmz75++hM(e`rVE^GJZd-iZ*+0v*thh$zHJmr4vUG zA+DfWXtv(99j`3gcl7m@zIyd4=>6clefzc+8kiAr?!W2dXzBw*k9fRF5~ykl87H8X z7UVK3eoFu!{-|RUGb|nutctnkr2@4W_y(+kLNS@^mo7aA3B95rt5V-XH|YMgrAr1V zW}Bv(bR#6s%fQI#OQ$F)w%KEPZ= zVPKC&%fLG&fBN(s1fNvn#gMg*b4Nu5RL0O#APbgOTKduVaA)I}Ws<7}^Wno5C?vqCa9LXV_}fw?5Zd((4n~4vHBh<#0=}rIsHn=VPDw2-Ez^8G zzOn(3))F8KB%x+kfAF9lQ3NV?;R4>Nio;mXiD3uTrpVHD&@hf4W z`-tzO@Wj%xWat*-`SXnc&y|NK2bTpDe2e;p%yZxOTQyo0onvHd1KoKTdsxtBpc)D2r~Xxs@AozUmnR6C&|o[Impl](https://github.com/HewlettPackard/dlcookbook-dlbs/blob/master/python/pytorch_benchmarks/models/deep_mnist.py) - eng_acoustic_model[EngAcousticModel](http://ethereon.github.io/netscope/#/gist/10f5dee56b6f7bbb5da26749bd37ae16) + eng_acoustic_model[AcousticModel](http://ethereon.github.io/netscope/#/gist/10f5dee56b6f7bbb5da26749bd37ae16) 540x1x1 34,678,784133 [Impl](https://github.com/HewlettPackard/dlcookbook-dlbs/blob/master/python/tf_cnn_benchmarks/models/engacoustic_model.py) @@ -181,7 +181,7 @@ The experimenter script accepts ``--model`` command line argument that specifies 1. __AlexNet__ Same as [BVLC Caffe's version](https://github.com/BVLC/caffe/tree/master/models/bvlc_alexnet) _without_ grouped convolutions in layers 2, 4 and 5 (`group=1`). This does not significantly change number of trainable parameters but does change computational profile - roughly from 0.7 gFLOP to 1.14 gFLOP for forward pass. 2. 
 2. __DeepMNIST__ A fully-connected architecture mentioned [here](http://yann.lecun.com/exdb/mnist/) described in this [paper](http://arxiv.org/abs/1003.0358).
-3. __EngAcousticModel__ A fully-connected architecture that's typically used in hybrid HMM-DNN speech recognition systems (English language) for acoustic modeling. Similar to a speech network described in Large Scale Distributed Deep Networks [paper](https://research.google.com/archive/large_deep_networks_nips2012.html).
+3. __AcousticModel__ A fully-connected architecture that's typically used in hybrid HMM-DNN speech recognition systems (English language) for acoustic modeling. Similar to a speech network described in Large Scale Distributed Deep Networks [paper](https://research.google.com/archive/large_deep_networks_nips2012.html).
 4. __GoogleNet__ Same as version implemented in BVLC Caffe [here](https://github.com/BVLC/caffe/tree/master/models/bvlc_googlenet). Reference publication is [here](http://arxiv.org/abs/1409.4842).
 5. __Inception3__ and __Inception4__ are based on original implementation in [tf_cnn_benchmarks](https://github.com/HewlettPackard/dlcookbook-dlbs/blob/master/python/tf_cnn_benchmarks/inception_model.py). Inception3 model publication is [here](http://arxiv.org/abs/1512.00567). Inception4 publication is [here](http://arxiv.org/abs/1602.07261).
 6. __Overfeat__ A model described in this [paper](https://arxiv.org/pdf/1312.6229.pdf). Based on Google's tf_cnn_benchmarks with additional dropout operators applied to 6th and 7th layers as described in the paper.
diff --git a/docs/precision/precision.md b/docs/precision/precision.md
index 302a156..aeef303 100644
--- a/docs/precision/precision.md
+++ b/docs/precision/precision.md
@@ -54,9 +54,7 @@ python experimenter.py ... --Pnvidia_caffe.precision=`"mixed"` ...
 The `exp.use_tensor_core` does not affect behavior of NVIDIA Caffe at this point.

 ### TensorRT
-TensorRT supports single, half and int8 inference. Use `exp.dtype` to control it. If I am
-not mistaken, at this point (we use 2.1 version) TensorRT does not support tensor core
-operations on Volta GPUs - the `exp.use_tensor_core` does not affect behavior of TensorRT at this point.
+TensorRT supports single, half and int8 inference. Use `exp.dtype` to select the precision.

 ### MXNet
 The MXNet framework supports float32/float16 with optional tensor core math. DL Benchmarking
@@ -64,7 +62,7 @@ Suite will set up the environment. Use standard parameters `exp.dtype` and `exp.
 to specify benchmark settings. The data type can be either float32 or float16. The tensor
 core math is controlled via environmental variable MXNET_CUDA_ALLOW_TENSOR_CORE. See this
 [code snippet](https://github.com/apache/incubator-mxnet/blob/a36bf573ad82550dbb6692a89d7ddd1d5e4487fd/src/common/cuda_utils.h)
-how it is used. This environmental variable will be set automatically by benchmarking tool.
+for how it is used. This environment variable will be set automatically by the benchmarking suite.

 ### Caffe2
 The Caffe2 framework supports float32/float16 data types, tensor core operations and