Merge pull request #231 from plaidml/050-final
0.5.0
Brian Retford committed Feb 5, 2019
2 parents 7623a07 + c73156b commit ec748d1
Showing 205 changed files with 18,991 additions and 5,104 deletions.
4 changes: 4 additions & 0 deletions BUILD
@@ -1 +1,5 @@
# Copyright 2019 Intel Corporation
#
# For build instructions, see <docs/building.md>.

package(default_visibility = ["//visibility:public"])
141 changes: 67 additions & 74 deletions README.md
@@ -2,96 +2,87 @@

*A platform for making deep learning work everywhere.*

**Vertex.AI (the creators of PlaidML) is excited to join Intel's Artificial Intelligence Products Group. PlaidML will soon be re-licensed under Apache 2. Read the announcement [here!](http://vertex.ai)**

[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://github.com/plaidml/plaidml/blob/master/LICENSE)

[![Build Status](https://travis-ci.org/plaidml/plaidml.svg?branch=master)](https://travis-ci.org/plaidml/plaidml)

PlaidML is the *easiest, fastest* way to learn and deploy deep learning on any device, especially those running macOS or Windows:
* **Fastest:** PlaidML is often 10x faster (or more) than popular platforms (like TensorFlow CPU) because it supports all GPUs, *independent of make and model*.
* PlaidML accelerates deep learning on AMD, Intel, NVIDIA, ARM, and embedded GPUs.
* **Easiest:** PlaidML is simple to [install](docs/installing.md) and supports multiple frontends (Keras and ONNX currently).
* **Free:** PlaidML is completely open source and doesn't rely on any vendor libraries with proprietary and restrictive licenses.

For most platforms, getting started with accelerated deep learning is as easy as running a few commands (assuming you have Python v2 or v3 installed; if this doesn't work, see the [installation instructions](docs/installing.md)):
```
virtualenv plaidml
source plaidml/bin/activate
pip install plaidml-keras plaidbench
```
Choose which accelerator you'd like to use (many computers, especially laptops, have multiple):
```
plaidml-setup
```
- [Documentation](https://vertexai-plaidml.readthedocs-hosted.com/)
- [Installation Instructions](docs/install.rst)
- [Building PlaidML](docs/building.md)
- [Contributing](docs/contributing.rst)
- [Reporting Issues](#reporting-issues)

Next, try benchmarking MobileNet inference performance:
```
plaidbench keras mobilenet
```
Or, try training MobileNet:
```
plaidbench --batch-size 16 keras --train mobilenet
```

PlaidML is an advanced and portable tensor compiler that enables deep learning
on laptops, embedded devices, and other devices where the available
computing hardware is not well supported or the available software stack
contains unpalatable license restrictions.

# About PlaidML
PlaidML sits underneath common machine learning frameworks, enabling users to
access any hardware supported by PlaidML. PlaidML supports Keras, ONNX, and nGraph.

PlaidML is a multi-language acceleration platform that:

* Enables practitioners to deploy high-performance neural nets on any device
* Allows hardware developers to quickly integrate with high-level frameworks
* Allows framework developers to easily add support for many kinds of hardware
* Works on all major platforms — Linux, [macOS](http://vertex.ai/blog/plaidml-mac-preview), [Windows](http://vertex.ai/blog/deep-learning-for-everyone-plaidml-for-windows)
* Allows developers to create hardware-accelerated, novel, performance-portable research kernels.
As a component within the [nGraph Compiler stack], PlaidML further extends the
capabilities of specialized deep-learning hardware (especially GPUs) and makes
it both easier and faster to access or make use of subgraph-level optimizations
that would otherwise be bounded by the compute limitations of the device.

For examples and benchmarks, see our [blog](http://vertex.ai/blog).
As a component under [Keras], PlaidML can accelerate training workloads with
customized or automatically-generated Tile code. It works especially well on
GPUs, and it doesn't require use of CUDA/cuDNN on Nvidia* hardware, while
achieving comparable performance.
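In practice, pointing Keras at PlaidML is a two-line change. Here's a minimal sketch (assuming the `plaidml-keras` package from the quickstart above is installed and `plaidml-setup` has been run; the dummy input is just for illustration):

```python
# Minimal sketch: route Keras through PlaidML.
# Assumes `pip install plaidml-keras` and a completed `plaidml-setup`.
import numpy as np

import plaidml.keras
plaidml.keras.install_backend()  # must run before the first `import keras`

from keras.applications import mobilenet

model = mobilenet.MobileNet(weights=None)  # weights=None skips the download
print(model.predict(np.zeros((1, 224, 224, 3))).shape)  # -> (1, 1000)
```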

- [Documentation](https://vertexai-plaidml.readthedocs-hosted.com/)
- [Installation Instructions](docs/installing.md)
- [Building PlaidML](docs/building.md)
- [Contributing](docs/contributing.rst)
- [Reporting Issues](#reporting-issues)
It works on all major operating systems: Linux, macOS, and Windows.


## Getting started

### Recent Release Notes
* PlaidML 0.3.3 - 0.3.5
  * Support Keras 2.2.0 - 2.2.2
  * Support ONNX 1.2.1
  * Upgrade kernel scheduling
  * Revise documentation
  * Add HALs for CUDA and Metal
  * Various bugfixes and improvements
* PlaidML 0.3.2
  * Now supports ONNX 1.1.0 as a backend through [onnx-plaidml](https://github.com/plaidml/onnx-plaidml) (see the sketch after this list)
  * Preliminary support for LLVM. Currently only supports CPUs, and only on Linux and macOS. More soon.
  * Support for LSTMs & RNNs with static loop sizes, such as examples/imdb_lstm.py (from Keras)
    * Training networks with embeddings is especially slow (#96)
    * RNNs are only statically sized if the input's sequence length is explicitly specified (#97)
    * Fixes bug related to embeddings (#92)
  * Adds a shared generic op library in python to make creating frontends easier
    * plaidml-keras now uses this library
  * Uses [plaidml/toolchain](https://github.com/plaidml/toolchain) for builds
    * Building for ARM is now simple (`--config=linux_arm_32v7`)
  * Various fixes for bugs (#89)
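As a sketch of what the ONNX path looks like, assuming onnx-plaidml implements the standard ONNX backend interface (`prepare()`/`run()`); the model path and input shape here are hypothetical:

```python
# Hypothetical sketch of running an ONNX model through onnx-plaidml.
# Assumes the standard ONNX backend interface; "model.onnx" is a placeholder.
import numpy as np
import onnx
import onnx_plaidml.backend

model = onnx.load("model.onnx")
rep = onnx_plaidml.backend.prepare(model)
outputs = rep.run([np.random.rand(1, 3, 224, 224).astype(np.float32)])
print(outputs[0].shape)
```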
For most platforms, getting started with accelerated deep learning is as easy as
running a few commands (assuming you have Python v2 or v3 installed). If this
doesn't work, see the [installation instructions]:

virtualenv plaidml
source plaidml/bin/activate
pip install plaidml-keras plaidbench

Choose which accelerator you'd like to use (many computers, especially laptops, have multiple):

plaidml-setup

Next, try benchmarking MobileNet inference performance:

plaidbench keras mobilenet

Or, try training MobileNet:

plaidbench --batch-size 16 keras --train mobilenet
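If you're curious which devices `plaidml-setup` is choosing between, you can also enumerate them from Python. A rough sketch, assuming the `plaidml.Context` and `plaidml.devices` helpers behave as they did in 0.x releases (this isn't part of the documented quickstart):

```python
# Rough sketch: list candidate devices, much as plaidml-setup does.
# Assumes the 0.x-era plaidml.devices(ctx, limit, return_all) signature.
import plaidml

ctx = plaidml.Context()
devices, _ = plaidml.devices(ctx, limit=10, return_all=True)
for dev in devices:
    print(dev.id.decode(), dev.description.decode())
```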


### Validated Hardware

Vertex.AI runs a comprehensive set of tests for each release against these hardware targets:

* AMD
  * R9 Nano
  * RX 480
  * Vega 10
* Intel
  * HD4000
  * HD Graphics 505
* NVIDIA
  * K80, GTX 780, GT 640M
  * GTX 1070, 1050

### Validated Networks

We support all of the Keras application networks from current versions of Keras 2.x.
Validated networks are tested for performance and correctness as part of our
continuous integration system.

* CNNs
  * Inception v3
  * ResNet50
  * VGG19
@@ -100,12 +91,12 @@ correctness as part of our continuous integration system.
  * DenseNet
  * ShuffleNet

* LSTM
  * examples/imdb_lstm.py (from Keras)

## Installation Instructions

See detailed per platform instructions [here](docs/installing.md).
See detailed per-platform instructions [here].

### Plaidvision and Plaidbench

@@ -119,6 +110,7 @@ We've developed two open source projects:
### Hello VGG
One of the great things about Keras is how easy it is to play with state-of-the-art networks. Here's all the code you
need to run VGG-19:

```python
#!/usr/bin/env python
import numpy as np
@@ -151,11 +143,12 @@ print("Ran in {} seconds".format(time.time() - start))

```

## License

PlaidML is licensed under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).
## Reporting Issues
Either open a ticket on [GitHub] or post to [plaidml-dev].

Our open source goals include 1) helping students get started with deep learning as easily as possible and 2) helping researchers develop new methods more quickly than is possible with other tools. PlaidML is unique in being fully open source and free of dependence on libraries like cuDNN that carry revocable and redistribution-prohibiting licenses. For situations where an alternate license is preferable, please contact [solutions@vertex.ai](mailto:solutions@vertex.ai).

## Reporting Issues
Either open a ticket on [GitHub](https://github.com/plaidml/plaidml/issues) or post to [plaidml-dev](https://groups.google.com/forum/#!forum/plaidml-dev).
[nGraph Compiler stack]: https://ngraph.nervanasys.com/docs/latest/
[Keras]: https://keras.io/
[here]: docs/install.rst
[GitHub]: https://github.com/plaidml/plaidml/issues
[plaidml-dev]: https://groups.google.com/forum/#!forum/plaidml-dev
4 changes: 3 additions & 1 deletion WORKSPACE
@@ -1,10 +1,12 @@
# Bazel Workspace for PlaidML
workspace(name = "com_intel_plaidml")

load("@bazel_tools//tools/build_defs/repo:git.bzl", "git_repository")

git_repository(
    name = "toolchain",
    remote = "https://github.com/plaidml/toolchain",
    tag = "0.1.2",
    commit = "a487bf9f2cc4edc47d376606abaaf29d85fffcd8",
)

load(
2 changes: 0 additions & 2 deletions base/util/BUILD
@@ -16,7 +16,6 @@ plaidml_cc_library(
        "json_transfer.cc",
        "logging.cc",
        "perf_counter.cc",
        "printstring.cc",
        "uuid.cc",
        "zipfile.cc",
    ],
@@ -37,7 +36,6 @@ plaidml_cc_library(
        "lookup.h",
        "pdebug.h",
        "perf_counter.h",
        "printstring.h",
        "stream_container.h",
        "sync.h",
        "throw.h",
4 changes: 3 additions & 1 deletion base/util/json_transfer.cc
@@ -1,13 +1,15 @@
#include "base/util/json_transfer.h"

#include <boost/format.hpp>

namespace vertexai {

static const std::map<Json::ValueType, std::string> g_type_to_str{
    {Json::objectValue, "object"}, {Json::arrayValue, "array"}, {Json::stringValue, "string"},
    {Json::booleanValue, "bool"},  {Json::intValue, "int"},     {Json::realValue, "real"},
    {Json::nullValue, "null"}};

std::string exception_msg(const Json::ValueType& t) { return printstring("unknown json type with enum %d", t); }
std::string exception_msg(const Json::ValueType& t) { return str(boost::format("unknown json type with enum %d") % t); }

void throw_bad_type(const Json::ValueType& found_type, const Json::ValueType& expected_type) {
auto found_it = g_type_to_str.find(found_type);
7 changes: 4 additions & 3 deletions base/util/lookup.h
@@ -2,7 +2,8 @@

#include <map>

#include "base/util/printstring.h"
#include <boost/format.hpp>

#include "base/util/throw.h"

namespace vertexai {
@@ -11,7 +12,7 @@
typename M::mapped_type& safe_at(M* map, const typename M::key_type& key) {
  auto it = map->find(key);
  if (it == map->end()) {
    throw_with_trace(std::runtime_error(printstring("Key not found: %s", to_string(key).c_str())));
    throw_with_trace(std::runtime_error(str(boost::format("Key not found: %s") % to_string(key))));
  }
  return it->second;
}
@@ -20,7 +21,7 @@
const typename M::mapped_type& safe_at(const M& map, const typename M::key_type& key) {
  auto it = map.find(key);
  if (it == map.end()) {
    throw_with_trace(std::runtime_error(printstring("Key not found: %s", to_string(key).c_str())));
    throw_with_trace(std::runtime_error(str(boost::format("Key not found: %s") % to_string(key))));
  }
  return it->second;
}
37 changes: 0 additions & 37 deletions base/util/printstring.cc

This file was deleted.

11 changes: 0 additions & 11 deletions base/util/printstring.h

This file was deleted.

18 changes: 12 additions & 6 deletions base/util/runfiles_db.cc
@@ -36,13 +36,19 @@ RunfilesDB::RunfilesDB(const char* prefix, const char* environ_override_var) {
  std::string manifest_filename = runfiles_dir;
  manifest_filename += "/MANIFEST";
  std::ifstream manifest{manifest_filename};
  while (manifest) {
    std::string logical_name;
    std::string physical_name;
    manifest >> logical_name >> physical_name;
    if (manifest) {
      logical_to_physical_[logical_name] = physical_name;
  for (;;) {
    std::string line;
    std::getline(manifest, line);
    if (!manifest) {
      break;
    }
    auto split_pos = line.find(' ');
    if (split_pos == std::string::npos) {
      continue;
    }
    std::string logical_name = line.substr(0, split_pos);
    std::string physical_name = line.substr(split_pos + 1);
    logical_to_physical_[logical_name] = physical_name;
    }
  }
}
6 changes: 3 additions & 3 deletions base/util/transfer_object.h
@@ -11,14 +11,14 @@
#include <utility>
#include <vector>

#include "base/util/printstring.h"
#include <boost/format.hpp>

namespace vertexai {

class deserialization_error : public std::runtime_error {
 public:
  explicit deserialization_error(const std::string& err)
      : std::runtime_error(printstring("deserialization: %s", err.c_str())) {}
      : std::runtime_error(str(boost::format("deserialization: %s") % err)) {}
};

class transfer_flags {
@@ -225,7 +225,7 @@ void transfer_field(Context& ctx, const std::string& name, int tag, Object& obj,
  } else {
    if (!ctx.has_field(name, tag)) {
      if (flags & TF_STRICT) {
        throw deserialization_error(printstring("Field '%s' is missing and strict is set", name.c_str()));
        throw deserialization_error(str(boost::format("Field '%s' is missing and strict is set") % name));
      }
      if (flags & TF_NO_DEFAULT) {
        return;
9 changes: 8 additions & 1 deletion bzl/boost.BUILD
@@ -39,7 +39,7 @@ cc_library(
cc_library(
    name = "stacktrace",
    defines = select({
        "@toolchain//:macos_x86_64": ["_GNU_SOURCE"],
        "@toolchain//:macos_x86_64": ["BOOST_STACKTRACE_GNU_SOURCE_NOT_REQUIRED"],
        "//conditions:default": [],
    }),
    linkopts = select({
@@ -96,3 +96,10 @@ cc_library(
        ":system",
    ],
)

genrule(
    name = "license",
    srcs = ["LICENSE_1_0.txt"],
    outs = ["boost-LICENSE"],
    cmd = "cp $(SRCS) $@",
)
