Branch 199651315 #19839

Merged
merged 58 commits on Jun 7, 2018
Changes from all commits (58 commits)
30947aa
Automated g4 rollback of changelist 199140117
tensorflower-gardener Jun 6, 2018
da264cf
Fix the bug in python3 where the devices list in multi_worker_strategy beco…
Jun 6, 2018
93cb963
Fixes an error where a defun with no outputs crashes when called on i…
tensorflower-gardener Jun 6, 2018
5621de9
Add distributed all-reduce for multi-worker mirrored strategy.
Jun 6, 2018
980c390
Misc fixes.
shashishekhar Jun 6, 2018
879fc34
Use memmove instead of memcpy for the large tensors on Linux.
ezhulenev Jun 6, 2018
6aeb1fd
[XLA:GPU] Allow intermediate outputs for reduce input fusions.
d0k Jun 6, 2018
88ac13a
Rename some functions in MatrixMatrixBlockPanelEmitter; NFC
Jun 6, 2018
bbe49e7
Split out HloBatchNormInstruction as subclasses from HloInstruction.
tensorflower-gardener Jun 6, 2018
57c68dd
Limit number of entries in the cache.
Jun 6, 2018
20d3228
Fix URLs in security/index.md and point SECURITY.md's vuln list to se…
Jun 6, 2018
51f0ff1
boosted_trees: follow up on previous double precision commit. Using t…
yk5 Jun 6, 2018
ae2a2ae
enhance Tensorflow GBDT and GBRT model by exposing a new two dimensio…
tensorflower-gardener Jun 6, 2018
8b46062
Fixes eager safety problems with tf.contrib.lookup
alextp Jun 6, 2018
8f2e5f0
[TF:XLA] Add an implementation of RandomShuffle.
tensorflower-gardener Jun 6, 2018
9dc20c7
Support taking gradients of de-serialized cond.
saxenasaurabh Jun 6, 2018
eccec6b
Adding gradients for the LogMatrixDeterminant op + tests.
tensorflower-gardener Jun 6, 2018
b6aeb32
Fix runtime failure in executor_benchmark.
mrry Jun 6, 2018
2cce1a8
Use get*ArrayRegion instead of get*ArrayElements in TFlite JNI code.
tensorflower-gardener Jun 6, 2018
4a2104c
Estimate Squeeze cost in the same way as Reshape.
yacoder Jun 6, 2018
7cb4b12
Removed parts of numbers_test that caused asan/msan/tsan failure
Jun 6, 2018
b1e5c6e
Remove _USE_C_API staging in tests now that the C API is enabled by d…
skye Jun 6, 2018
617405d
[TF:XLA] Fix the control edges for ops without inputs/outputs passed …
tensorflower-gardener Jun 6, 2018
64204dd
Allow SavedModelBuilder to use custom Savers, and pass custom Savers …
Jun 6, 2018
c4a3763
quantize_weights flag for tflite_convert.
Jun 6, 2018
032f804
Add support for dilation. This was previously missed and would result in
Jun 6, 2018
40a5601
Updated documentation relating to quantized input stats.
Jun 6, 2018
60bd73a
Make the LLVM IR GEMM tile size configurable; NFC
Jun 6, 2018
4a1889c
Code cleanup: use absl::string_view to pass string-like objects.
tatianashp Jun 7, 2018
068255c
Run cross_tower_ops_test with test sharding.
Jun 7, 2018
86cfb0b
Make the noop returned by tpu.replicate() trigger TPU computations.
tensorflower-gardener Jun 7, 2018
68d1fc4
Fix taking higher-order derivatives of cond_v2.
skye Jun 7, 2018
cf6e709
Remove _USE_C_API test_util methods now that the C API is enabled by …
skye Jun 7, 2018
f6ead21
Download tf.keras datasets from GCS and add license information.
fchollet Jun 7, 2018
8c649dd
Automated g4 rollback of changelist 199476694
Jun 7, 2018
74fd9ce
Update variable recording and add benchmark with defun.
tensorflower-gardener Jun 7, 2018
cccbb9b
Cache the rematerializable status.
tensorflower-gardener Jun 7, 2018
39cb0e4
Fix the docstring as it is stale. The initializer has no default in
Jun 7, 2018
cd5fa01
Disable broken keras_test on guitar.
gunan Jun 7, 2018
a82c2b8
Disable scoped_allocator_test in msan
gunan Jun 7, 2018
e4e2708
Disabling broken zip_test_conv
gunan Jun 7, 2018
9aa1154
ArgMax supports quantization, so make the transformation know that.
Jun 7, 2018
c2368f8
Apply if_override_eigen_strong_inline to three more ops
tensorflower-gardener Jun 7, 2018
c70b712
Implementation of TensorFlowEqual and TensorFlowNotEqual.
tensorflower-gardener Jun 7, 2018
3ddc925
Improve performance of HloComputation::MakeInstructionPostOrder
tensorflower-gardener Jun 7, 2018
fcc3282
Update revision of clang in download scripts
ilya-biryukov Jun 7, 2018
54773fd
Add GetAllRegisteredKernels helper
jtkeeling Jun 7, 2018
4b3c9fe
Implement scatter_nd_add for resource variables.
adria-p Jun 7, 2018
866bc31
Update ops-related pbtxt files.
tensorflower-gardener Jun 7, 2018
3f31670
Go: Update generated wrapper functions for TensorFlow ops.
tensorflower-gardener Jun 7, 2018
537e8c7
Remove _USE_C_API staging from session.py.
skye Jun 7, 2018
f66782c
Add convolution and convolution1d to the public API
tensorflower-gardener Jun 7, 2018
086d96a
Fix bug due to incorrect nesting of return statement in eager iterato…
pavithrasv Jun 7, 2018
bf1ab06
Allow replace_expression to generate simple names, not just Expr node…
Jun 7, 2018
bff89b6
Typos in documentation and style improvements in tests.
tensorflower-gardener Jun 7, 2018
a3c46fc
Change unimplemented ops error message.
Jun 7, 2018
796fff8
[XLA:GPU] Fix non-const reduce init value generation to handle multi-…
d0k Jun 7, 2018
4771467
Merge commit for internal changes
Jun 7, 2018
11 changes: 3 additions & 8 deletions SECURITY.md
@@ -242,12 +242,7 @@ v//Fw6ZeY+HmRDFdirjD7wXtIuER4vqCryIqR6Xe9X8oJXz9L/Jhslc=
-----END PGP PUBLIC KEY BLOCK-----
```

### Known vulnerabilities

| Type | Versions affected | Reported by | Additional Information |
|--------------------|:-----------------:|-----------------------|-----------------------------|
| TensorFlow Lite TOCO FlatBuffer Parsing Vulnerability | <= 1.7 | Blade Team of Tencent | [security advisory](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/docs_src/security/advisory/tfsa-2018-003.md) |
| GIF File Parsing Null Pointer Dereference Error | <= 1.5 | Blade Team of Tencent | [security advisory](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/docs_src/security/advisory/tfsa-2018-002.md) |
| BMP File Parser Out-of-bounds Read | <= 1.6 | Blade Team of Tencent | [security advisory](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/docs_src/security/advisory/tfsa-2018-001.md) |
| Out Of Bounds Read | <=1.4 | Blade Team of Tencent | [issue report](https://github.com/tensorflow/tensorflow/issues/14959) |
### Known Vulnerabilities

For a list of known vulnerabilities and security advisories for TensorFlow,
[click here](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/security/index.md).
2 changes: 2 additions & 0 deletions tensorflow/compiler/tests/BUILD
@@ -545,7 +545,9 @@ tf_xla_py_test(
],
deps = [
":xla_test",
"//tensorflow/python:array_ops",
"//tensorflow/python:framework",
"//tensorflow/python:math_ops",
"//tensorflow/python:platform_test",
"//tensorflow/python:random_ops",
],
38 changes: 32 additions & 6 deletions tensorflow/compiler/tests/random_ops_test.py
@@ -22,6 +22,8 @@

from tensorflow.compiler.tests.xla_test import XLATestCase
from tensorflow.python.framework import dtypes
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import math_ops
from tensorflow.python.ops import random_ops
from tensorflow.python.platform import googletest

@@ -47,18 +49,18 @@ def _testRngIsNotConstant(self, rng, dtype):
# We use exact equality here. If the random-number generator is producing
# deterministic output, all three outputs will be bitwise identical.
self.assertTrue((not np.array_equal(y, z)) or
(not np.array_equal(z, w)) or
(not np.array_equal(y, w)))
(not np.array_equal(z, w)) or (not np.array_equal(y, w)))

def testRandomUniformIsNotConstant(self):

def rng(dtype):
return random_ops.random_uniform(shape=[2], dtype=dtype,
maxval=1000000)
return random_ops.random_uniform(shape=[2], dtype=dtype, maxval=1000000)

for dtype in self._random_types():
self._testRngIsNotConstant(rng, dtype)

def testRandomNormalIsNotConstant(self):

def rng(dtype):
return random_ops.random_normal(shape=[2], dtype=dtype)

@@ -70,13 +72,14 @@ def testRandomUniformIsInRange(self):
for dtype in self._random_types():
with self.test_session() as sess:
with self.test_scope():
x = random_ops.random_uniform(shape=[1000], dtype=dtype, minval=-2,
maxval=33)
x = random_ops.random_uniform(
shape=[1000], dtype=dtype, minval=-2, maxval=33)
y = sess.run(x)
self.assertTrue((y >= -2).sum() == 1000)
self.assertTrue((y < 33).sum() == 1000)

def testTruncatedNormalIsNotConstant(self):

def rng(dtype):
return random_ops.truncated_normal(shape=[2], dtype=dtype)

@@ -94,6 +97,29 @@ def testTruncatedNormalIsInRange(self):
self.assertTrue((y >= -2).sum() == count)
self.assertTrue((y <= 2).sum() == count)

def testShuffle1d(self):
with self.test_session() as sess:
with self.test_scope():
x = math_ops.range(20)
shuffle = random_ops.random_shuffle(x)
result = sess.run(shuffle)
expected = range(20)
# Compare value sets rather than exact order, so the test survives changes
# in shuffle behavior while still verifying that all values are present.
self.assertAllEqual(set(result), set(expected))

def testShuffle2d(self):
with self.test_session() as sess:
with self.test_scope():
x = array_ops.diag(math_ops.range(20))
shuffle = random_ops.random_shuffle(x)
result = sess.run(shuffle)
expected = np.diag(range(20)).flatten()
# Compare value sets rather than exact order, so the test survives changes
# in shuffle behavior while still verifying that all values are present.
self.assertAllEqual(len(result.flatten()), len(expected))
self.assertAllEqual(set(result.flatten()), set(expected))


if __name__ == '__main__':
googletest.main()
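The two shuffle tests above compare sets of values rather than exact order. A minimal standalone sketch of the same check in plain NumPy (the `same_values` helper is hypothetical, not part of the test file):

```python
import numpy as np

def same_values(result, expected):
  # Order-independent check in the spirit of testShuffle1d/testShuffle2d:
  # equal length plus equal value sets. A stricter variant would compare
  # np.sort(result) with np.sort(expected), which also catches mismatched
  # multiplicities of repeated values.
  result = np.asarray(result).flatten()
  expected = np.asarray(expected).flatten()
  return len(result) == len(expected) and set(result) == set(expected)

assert same_values(np.random.permutation(20), range(20))    # 1-D case
assert same_values(np.diag(range(20)), np.diag(range(20)))  # 2-D case
```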
8 changes: 7 additions & 1 deletion tensorflow/compiler/tf2xla/functionalize_control_flow.cc
@@ -1438,7 +1438,13 @@ Status FunctionalizeControlFlow(const FunctionLibraryDefinition* lookup_library,
// connected to all source nodes in the graph. Many graphs violate this
// invariant.
std::vector<ControlFlowInfo> cf_info;
TF_RETURN_IF_ERROR(BuildControlFlowInfo(graph, &cf_info));
std::vector<string> unreachable_nodes;
TF_RETURN_IF_ERROR(BuildControlFlowInfo(graph, &cf_info, &unreachable_nodes));
if (!unreachable_nodes.empty()) {
return errors::InvalidArgument(
"The following nodes are unreachable from the source in the graph: ",
tensorflow::str_util::Join(unreachable_nodes, ", "));
}

// Builds Frames, indexed by name.
std::unordered_map<string, Frame> frames;
92 changes: 92 additions & 0 deletions tensorflow/compiler/tf2xla/kernels/random_ops.cc
@@ -17,6 +17,8 @@ limitations under the License.
// TODO(misard,phawkins): handle random number generator seeds/states correctly.
// TODO(misard,phawkins): add tests.

#include "tensorflow/compiler/tf2xla/kernels/gather_op_helpers.h"
#include "tensorflow/compiler/tf2xla/lib/util.h"
#include "tensorflow/compiler/tf2xla/lib/while_loop.h"
#include "tensorflow/compiler/tf2xla/shape_util.h"
#include "tensorflow/compiler/tf2xla/xla_helpers.h"
@@ -56,6 +58,96 @@ class RandomUniformOp : public XlaOpKernel {
REGISTER_XLA_OP(Name("RandomUniform").CompileTimeConstInput("shape"),
RandomUniformOp);

class RandomShuffleOp : public XlaOpKernel {
public:
explicit RandomShuffleOp(OpKernelConstruction* ctx) : XlaOpKernel(ctx) {}

void Compile(XlaOpKernelContext* ctx) override {
auto builder = ctx->builder();
xla::XlaOp input = ctx->Input(0);
TensorShape input_shape = ctx->InputShape(0);
const int64 n = input_shape.dim_size(0);
int64 num_elements = 1;
for (tensorflow::TensorShapeDim dimension : input_shape) {
num_elements *= dimension.size;
}
if (num_elements <= 1 || n <= 1) {
// No shuffling is required, so copy input directly to output
ctx->SetOutput(0, input);
} else {
// Generate the random swaps for the indices.
auto zero = builder->Broadcast(
builder->ConstantLiteral(xla::Literal::Zero(xla::S32)),
gtl::ArraySlice<int64>({n}));
auto n_maxval = builder->Broadcast(builder->ConstantR0<int32>(n),
gtl::ArraySlice<int64>({n}));
auto swaps_shape = xla::ShapeUtil::MakeShape(xla::S32, {n});
auto swaps = builder->RngUniform(zero, n_maxval, swaps_shape);

// Generate range(n) as the initial value for the indices to be swapped.
auto index_init_body_fn = [&](xla::XlaOp i,
gtl::ArraySlice<xla::XlaOp> loop_vars,
xla::XlaBuilder* builder)
-> xla::StatusOr<std::vector<xla::XlaOp>> {
auto indices = loop_vars[0];
i = builder->Reshape(i, {}, {1});
// indices[i] = i
indices = builder->DynamicUpdateSlice(indices, i, i);
return std::vector<xla::XlaOp>{indices};
};
// for i in range(n):
xla::XlaOp index_zeros = Zeros(builder, swaps_shape);
auto index_init_loop_result =
XlaForEachIndex(n, xla::S32, index_init_body_fn, {index_zeros},
"index_init_loop", builder)
.ValueOrDie();
auto indices = index_init_loop_result[0];

// Swap the indices at i and swaps[i].
auto swap_body_fn = [&](xla::XlaOp i,
gtl::ArraySlice<xla::XlaOp> loop_vars,
xla::XlaBuilder* builder)
-> xla::StatusOr<std::vector<xla::XlaOp>> {
auto swaps = loop_vars[0];
auto indices = loop_vars[1];
i = builder->Reshape(i, {}, {1});
// temp = indices[i]
auto temp = builder->DynamicSlice(indices, i, {1});
// swap_index = swaps[i]
auto swap_index = builder->DynamicSlice(swaps, i, {1});
// swap_value = indices[swaps[i]]
auto swap_value = builder->DynamicSlice(indices, swap_index, {1});
// indices[i] = indices[swaps[i]]
indices = builder->DynamicUpdateSlice(indices, swap_value, i);
// indices[swaps[i]] = temp
indices = builder->DynamicUpdateSlice(indices, temp, swap_index);
return std::vector<xla::XlaOp>{swaps, indices};
};
// for i in range(n):
auto swap_loop_result =
XlaForEachIndex(n, xla::S32, swap_body_fn, {swaps, indices},
"indices_swap_loop", builder)
.ValueOrDie();
auto swapped_indices = swap_loop_result[1];

// Gather the data using the swapped indices as the shuffled order.
auto indices_tensor_shape = TensorShape({n});
DataType type = ctx->expected_output_dtype(0);
xla::XlaOp gather;
OP_REQUIRES_OK(ctx, XlaGather(input, input_shape, swapped_indices,
indices_tensor_shape,
/*axis=*/0, /*indices_are_nd=*/false, type,
DT_INT32, builder, &gather));
ctx->SetOutput(0, gather);
}
}

private:
TF_DISALLOW_COPY_AND_ASSIGN(RandomShuffleOp);
};

REGISTER_XLA_OP(Name("RandomShuffle"), RandomShuffleOp);

class RandomUniformIntOp : public XlaOpKernel {
public:
explicit RandomUniformIntOp(OpKernelConstruction* ctx) : XlaOpKernel(ctx) {}
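RandomShuffleOp above lowers RandomShuffle along dimension 0 by drawing a random swap partner for every index, applying the swaps in a loop, and gathering rows in the resulting order. A minimal NumPy sketch of that algorithm (assumed semantics; `xla_style_shuffle` is a hypothetical name):

```python
import numpy as np

def xla_style_shuffle(x, rng=np.random):
  n = x.shape[0]
  if n <= 1 or x.size <= 1:
    return x.copy()                    # no shuffling required
  swaps = rng.randint(0, n, size=n)    # RngUniform(zero, n_maxval, swaps_shape)
  indices = np.arange(n)               # index-init loop: indices[i] = i
  for i in range(n):                   # swap loop
    j = swaps[i]
    indices[i], indices[j] = indices[j], indices[i]
  return x[indices]                    # XlaGather along axis 0

print(xla_style_shuffle(np.arange(20)))
```

Note that the swap partner ranges over all of [0, n) rather than [i, n), so this differs from a textbook Fisher-Yates shuffle and is not exactly uniform over permutations.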
1 change: 1 addition & 0 deletions tensorflow/compiler/tf2xla/xla_compiler.cc
@@ -652,6 +652,7 @@ Status XlaCompiler::CompileSingleOp(
.Finalize(graph.get(), &node);
TF_RETURN_IF_ERROR(status);
}
FixupSourceAndSinkEdges(graph.get());

return CompileGraph(options, name, std::move(graph), args, result);
}
38 changes: 38 additions & 0 deletions tensorflow/compiler/tf2xla/xla_compiler_test.cc
@@ -34,6 +34,7 @@ limitations under the License.
#include "tensorflow/core/framework/resource_mgr.h"
#include "tensorflow/core/framework/tensor.h"
#include "tensorflow/core/framework/tensor_testutil.h"
#include "tensorflow/core/graph/algorithm.h"
#include "tensorflow/core/graph/graph.h"
#include "tensorflow/core/graph/graph_constructor.h"
#include "tensorflow/core/lib/core/status_test_util.h"
@@ -1049,5 +1050,42 @@ TEST_F(XlaCompilerTest, NodeWithInvalidDataType) {
<< status.error_message();
}

TEST_F(XlaCompilerTest, SingleOpWithoutInputs) {
std::unique_ptr<Graph> graph(new Graph(OpRegistry::Global()));
NodeDef no_op;
no_op.set_name("NoOp");
no_op.set_op("NoOp");
Status status;
graph->AddNode(no_op, &status);
TF_ASSERT_OK(status);

std::vector<XlaCompiler::Argument> args;
XlaCompiler compiler(DefaultOptions());
// No control edge linking NoOp with source/sink.
{
std::unique_ptr<Graph> graph_copy(new Graph(OpRegistry::Global()));
CopyGraph(*graph, graph_copy.get());
XlaCompiler::CompilationResult result;
status = compiler.CompileGraph(XlaCompiler::CompileOptions(), "NoOp",
std::move(graph_copy), args, &result);
ASSERT_FALSE(status.ok());
EXPECT_TRUE(str_util::StrContains(status.error_message(),
"The following nodes are unreachable "
"from the source in the graph: NoOp"))
<< status.error_message();
}

// Fix control edges for NoOp.
{
std::unique_ptr<Graph> graph_copy(new Graph(OpRegistry::Global()));
CopyGraph(*graph, graph_copy.get());
EXPECT_TRUE(FixupSourceAndSinkEdges(graph_copy.get()));
XlaCompiler::CompilationResult result;
TF_ASSERT_OK(compiler.CompileGraph(XlaCompiler::CompileOptions(), "NoOp",
std::move(graph_copy), args, &result));
EXPECT_EQ(0, result.resource_updates.size());
}
}

} // namespace
} // namespace tensorflow
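The new test exercises both paths: compilation fails while the NoOp is disconnected, and succeeds once FixupSourceAndSinkEdges has run. A self-contained sketch of the assumed repair semantics (the graph representation here is hypothetical, not the TensorFlow Graph API):

```python
def fixup_source_and_sink_edges(nodes, edges):
  # Assumed behavior: nodes with no incoming edge get a control edge from
  # _SOURCE, and nodes with no outgoing edge get a control edge to _SINK,
  # making every node reachable from the source.
  changed = False
  for node in nodes:
    if not any(dst == node for _, dst in edges):
      edges.append(('_SOURCE', node))
      changed = True
    if not any(src == node for src, _ in edges):
      edges.append((node, '_SINK'))
      changed = True
  return changed

# A lone NoOp, as in SingleOpWithoutInputs above: no edges until fixed up.
edges = []
assert fixup_source_and_sink_edges(['NoOp'], edges)
assert edges == [('_SOURCE', 'NoOp'), ('NoOp', '_SINK')]
```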
6 changes: 5 additions & 1 deletion tensorflow/compiler/xla/service/BUILD
@@ -269,6 +269,7 @@ cc_library(
"dfs_hlo_visitor.cc",
"hlo_computation.cc",
"hlo_instruction.cc",
"hlo_instructions.cc",
"hlo_module.cc",
"hlo_opcode.cc",
"hlo_sharding.cc",
@@ -280,11 +281,13 @@
"hlo_computation.h",
"hlo_domain_metadata.h",
"hlo_instruction.h",
"hlo_instructions.h",
"hlo_module.h",
"hlo_opcode.h",
"hlo_sharding.h",
],
deps = [
":hlo_casting_utils",
":hlo_module_config",
":hlo_proto",
":hlo_reachability",
@@ -3015,13 +3018,14 @@ cc_library(
cc_library(
name = "hlo_casting_utils",
hdrs = ["hlo_casting_utils.h"],
deps = [":hlo"],
deps = ["//tensorflow/core:lib"],
)

tf_cc_test(
name = "hlo_casting_utils_test",
srcs = ["hlo_casting_utils_test.cc"],
deps = [
":hlo",
":hlo_casting_utils",
"//tensorflow/compiler/xla/tests:xla_internal_test_main", # fixdeps: keep
"//tensorflow/core:test",
39 changes: 39 additions & 0 deletions tensorflow/compiler/xla/service/cpu/cpu_options.cc
@@ -16,6 +16,7 @@ limitations under the License.
#include "tensorflow/compiler/xla/service/cpu/cpu_options.h"

#include "tensorflow/core/lib/strings/numbers.h"
#include "tensorflow/core/lib/strings/str_util.h"

namespace {

@@ -24,6 +25,7 @@ const char* const kXlaDisableVectorizedReduce = "xla_disable_vectorized_reduce";
const char* const kLlvmIrDotTilingFactor = "xla_llvm_dot_tiling_factor";
const char* const kXlaEnableExperimentalLlvmIrGemm =
"xla_enable_experimental_llvm_ir_gemm";
const char* const kLlvmIrGemmTileSize = "xla_llvm_ir_gemm_tile_size";

} // namespace

@@ -62,6 +64,43 @@ bool EnableExperimentalLlvmIrGemm(const HloModuleConfig& config) {
return extra_options_map.count(kXlaEnableExperimentalLlvmIrGemm) > 0;
}

static tensorflow::StringPiece RemoveSuffix(tensorflow::StringPiece str,
tensorflow::StringPiece suffix) {
CHECK_GE(str.size(), suffix.size());
CHECK_EQ(str.substr(str.size() - suffix.size()), suffix);
return str.substr(0, str.size() - suffix.size());
}

tensorflow::gtl::optional<std::tuple<int64, int64, int64>> LlvmIrGemmTileSize(
const HloModuleConfig& config) {
const auto& extra_options_map =
config.debug_options().xla_backend_extra_options();
auto it = extra_options_map.find(kLlvmIrGemmTileSize);
if (it == extra_options_map.end()) {
return tensorflow::gtl::nullopt;
}

std::vector<string> tile_components =
tensorflow::str_util::Split(it->second, ':');
CHECK_EQ(tile_components.size(), 3);

int64 tile_size_m;
int64 tile_size_k;
int64 tile_size_n_in_vector_width;

CHECK(tensorflow::strings::safe_strto64(tile_components[0], &tile_size_m));
CHECK(tensorflow::strings::safe_strto64(tile_components[1], &tile_size_k));

tensorflow::StringPiece tile_size_n_in_vector_width_str =
RemoveSuffix(tile_components[2], "*vectwidth");

CHECK(tensorflow::strings::safe_strto64(tile_size_n_in_vector_width_str,
&tile_size_n_in_vector_width));

return std::tuple<int64, int64, int64>(tile_size_m, tile_size_k,
tile_size_n_in_vector_width);
}

} // namespace options
} // namespace cpu
} // namespace xla
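LlvmIrGemmTileSize above expects the backend extra option to be formatted as M:K:N*vectwidth, with N expressed in multiples of the vector width. A small Python sketch of the same parse (assumed format, mirroring the CHECKs above; the helper name is hypothetical):

```python
def parse_llvm_ir_gemm_tile_size(value):
  # Three ':'-separated fields; the third must carry a '*vectwidth' suffix,
  # mirroring RemoveSuffix/safe_strto64 in LlvmIrGemmTileSize above.
  m_str, k_str, n_str = value.split(':')
  suffix = '*vectwidth'
  assert n_str.endswith(suffix), value
  return int(m_str), int(k_str), int(n_str[:-len(suffix)])

# e.g. xla_llvm_ir_gemm_tile_size=8:16:2*vectwidth in xla_backend_extra_options
assert parse_llvm_ir_gemm_tile_size('8:16:2*vectwidth') == (8, 16, 2)
```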
2 changes: 2 additions & 0 deletions tensorflow/compiler/xla/service/cpu/cpu_options.h
@@ -29,6 +29,8 @@ bool VectorizedReduceDisabled(const HloModuleConfig& config);
bool EnableExperimentalLlvmIrGemm(const HloModuleConfig& config);
tensorflow::gtl::optional<int64> LlvmIrGemvTilingFactor(
const HloModuleConfig& config);
tensorflow::gtl::optional<std::tuple<int64, int64, int64>> LlvmIrGemmTileSize(
const HloModuleConfig& config);

} // namespace options
} // namespace cpu