This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

Commit

Move block.optimize_for backend_opts to kwargs
Signed-off-by: Serge Panev <spanev@nvidia.com>
Kh4L committed Oct 21, 2020
1 parent 0bc01e9 commit 7373172
Showing 4 changed files with 13 additions and 13 deletions.
6 changes: 3 additions & 3 deletions example/extensions/lib_pass/README.md
@@ -88,15 +88,15 @@ The `optimize_for` API takes at least 1 argument, `backend` which is a string th
For the Gluon API, `hybridize` can be called on HybridBlocks to execute a graph pass on the internal CachedOp Symbol.

```python
block.hybridize(backend=None, backend_opts=None, **kwargs)
block.hybridize(backend=None, **kwargs)
```

The `hybridize` function prepares the HybridBlock to be converted into a backend symbol. The `backend` argument is a string that identifies which pass will be executed on the model. The `backend_opts` argument takes other user-specified options that will be passed to the backend APIs. The actual pass runs once, just before the first forward pass.
The `hybridize` function prepares the HybridBlock to be converted into a backend symbol. The `backend` argument is a string that identifies which pass will be executed on the model. Any other user-specified options passed through `**kwargs` will be forwarded to the backend APIs. The actual pass runs once, just before the first forward pass.

If you just want to run a graph pass on the HybridBlock but not run a complete forward pass, you can use the `optimize_for` API, which combines the work done in the `hybridize` API with part of the work done in the forward pass.

```python
block.optimize_for(x, backend=None, backend_opts=None, **kwargs)
block.optimize_for(x, backend=None, **kwargs)
```

When the `optimize_for` API is called on a HybridBlock, it runs the graph pass immediately. This lets users export the modified model without running a complete forward pass.
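The effect of this change on call sites can be sketched without MXNet installed. `MockBlock` below is a hypothetical stand-in for `HybridBlock` that only records what `optimize_for` receives; the pass name `myPass` is likewise illustrative:

```python
class MockBlock:
    """Hypothetical stand-in for HybridBlock; records the options
    optimize_for receives. MXNet itself is not required."""
    def optimize_for(self, x, backend=None, clear=True, **kwargs):
        # Old style: block.optimize_for(x, backend=..., backend_opts={'opt': val})
        # New style: block.optimize_for(x, backend=..., opt=val)
        self.backend = backend
        self.opts = dict(kwargs)
        return x

block = MockBlock()
block.optimize_for('x', backend='myPass', dedup_subgraph=True)
print(block.opts)  # {'dedup_subgraph': True}
```

The user-facing change is purely syntactic: options that were previously collected into a `backend_opts` dict are now passed as bare keyword arguments.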
6 changes: 3 additions & 3 deletions example/extensions/lib_subgraph/README.md
@@ -107,15 +107,15 @@ The `optimize_for` API takes at least 1 argument, `backend` which is a string th
For the Gluon API, `hybridize` can be called on HybridBlocks to partition the internal CachedOp Symbol.

```python
block.hybridize(backend=None, backend_opts=None, clear=True, **kwargs)
block.hybridize(backend=None, clear=True, **kwargs)
```

The `hybridize` function prepares the HybridBlock to be converted into a backend symbol. The `backend` argument is a string that identifies which backend will partition the model. The `backend_opts` argument takes other user-specified options (as a Python dictionary of strings mapped to strings) that will be passed to the backend partitioning APIs. The `clear` argument defaults to `True` and clears any previous optimizations done on the block. If you want to chain optimizations together, set `clear` to `False`. The actual partitioning takes place during the forward pass. If you want to use `hybridize` to chain multiple optimizations, be sure to execute a forward pass after each call to `hybridize`.
The `hybridize` function prepares the HybridBlock to be converted into a backend symbol. The `backend` argument is a string that identifies which backend will partition the model. Any other user-specified options passed through `**kwargs` (as strings mapped to strings) will be forwarded to the backend partitioning APIs. The `clear` argument defaults to `True` and clears any previous optimizations done on the block. If you want to chain optimizations together, set `clear` to `False`. The actual partitioning takes place during the forward pass. If you want to use `hybridize` to chain multiple optimizations, be sure to execute a forward pass after each call to `hybridize`.

If you just want to partition the HybridBlock but not run a complete forward pass, you can use the `optimize_for` API, which combines the work done in the `hybridize` API with part of the work done in the forward pass.

```python
block.optimize_for(x, backend=None, backend_opts=None, clear=True, **kwargs)
block.optimize_for(x, backend=None, clear=True, **kwargs)
```

When the `optimize_for` API is called on a HybridBlock, it partitions immediately. This lets users export the partitioned model without running a complete forward pass. Chaining multiple optimizations is as simple as calling `optimize_for` multiple times; unlike `hybridize`, there is no need to execute a forward pass between calls.
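The interaction between `clear` and chained calls can be sketched with a minimal mock, assuming nothing about MXNet internals. `MockBlock` and the backend name `myProp` are hypothetical; `addInputPass` is the backend used later in `test_subgraph.py`:

```python
class MockBlock:
    """Hypothetical model of how `clear` interacts with chained
    optimize_for calls; it only tracks which backends were applied."""
    def __init__(self):
        self.applied = []
    def optimize_for(self, x, backend=None, clear=True, **kwargs):
        if clear:
            # clear=True (the default) drops any earlier optimizations
            self.applied = []
        self.applied.append(backend)
        return x

block = MockBlock()
block.optimize_for('x', backend='myProp', dedup_subgraph=True)
block.optimize_for('x', backend='addInputPass', clear=False, dedup_subgraph=True)
print(block.applied)  # ['myProp', 'addInputPass']
```

Had the second call used the default `clear=True`, only `addInputPass` would remain, which is why chained optimizations must pass `clear=False`.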
6 changes: 3 additions & 3 deletions example/extensions/lib_subgraph/test_subgraph.py
@@ -92,7 +92,7 @@ def test(backend):
inputs = [a,b]
sym_block = nn.SymbolBlock(sym, inputs)
sym_block.initialize()
sym_block.hybridize(backend=backend, backend_opts={'dedup_subgraph':True})
sym_block.hybridize(backend=backend, dedup_subgraph=True)
out2 = sym_block(mx.nd.ones((3,2)),mx.nd.ones((3,2)))
print(out2)

@@ -103,14 +103,14 @@ def test(backend):
sym_block2 = nn.SymbolBlock(sym, inputs)
sym_block2.initialize()
sym_block2.optimize_for(mx.nd.ones((3,2)), mx.nd.ones((3,2)), backend=backend,
backend_opts={'dedup_subgraph':True})
dedup_subgraph=True)
sym_block2.export('partitioned')

# Test with additional input to subgraph op
print('-------------------------------')
print('Testing %s Gluon Hybridize partitioning with extra input' % backend)
sym_block2.optimize_for(mx.nd.ones((3,2)), mx.nd.ones((3,2)), backend="addInputPass",
clear=False, backend_opts={'dedup_subgraph':True})
clear=False, dedup_subgraph=True)
out3 = sym_block2(mx.nd.ones((3,2)),mx.nd.ones((3,2)))
print(out3)

8 changes: 4 additions & 4 deletions python/mxnet/gluon/block.py
@@ -1059,7 +1059,7 @@ def _call_cached_op(self, *args):
out = [out]
return _regroup(out, self._out_format)

def optimize_for(self, x, *args, backend=None, backend_opts=None, clear=True, **kwargs):
def optimize_for(self, x, *args, backend=None, clear=True, static_alloc=False, static_shape=False, **kwargs):
"""Partitions the current HybridBlock and optimizes it for a given backend
without executing a forward pass. Modifies the HybridBlock in-place.
@@ -1087,19 +1087,19 @@ def optimize_for(self, x, *args, backend=None, backend_opts=None, clear=True, **
other inputs to model
backend : str
The name of backend, as registered in `SubgraphBackendRegistry`, default None
backend_opts : dict of user-specified options to pass to the backend for partitioning, optional
Passed on to `PrePartition` and `PostPartition` functions of `SubgraphProperty`
clear : bool, default True
    Clears any previous optimizations done on the block
static_alloc : bool, default False
Statically allocate memory to improve speed. Memory usage may increase.
static_shape : bool, default False
Optimize for invariant input shapes between iterations. Must also
set static_alloc to True. Change of input shapes is still allowed
but slower.
**kwargs: The backend options, optional
Passed on to `PrePartition` and `PostPartition` functions of `SubgraphProperty`
"""

# do hybridize API call
self.hybridize(True, backend, backend_opts, clear, **kwargs)
self.hybridize(True, backend, kwargs, clear, static_alloc=static_alloc, static_shape=static_shape)

# do part of forward API call
has_symbol, has_ndarray, ctx_set, _ = _gather_type_ctx_info([x] + list(args))
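The forwarding pattern in the new `optimize_for` can be sketched in plain Python. The helper below is a deliberately simplified stand-in, assuming only what the diff shows: the internal `hybridize` call still receives the backend options as a single dict, while the public API now accepts bare keyword arguments:

```python
def _hybridize(active, backend, backend_opts, clear,
               static_alloc=False, static_shape=False):
    # Simplified stand-in: internally, the options still travel
    # as one dict argument, just as in the real hybridize call.
    return {'backend': backend, 'opts': backend_opts, 'clear': clear}

def optimize_for(x, backend=None, clear=True, static_alloc=False,
                 static_shape=False, **kwargs):
    # The public API collects bare keyword arguments into **kwargs
    # and packs them back into the dict the internal call expects.
    return _hybridize(True, backend, kwargs, clear,
                      static_alloc=static_alloc, static_shape=static_shape)

result = optimize_for('x', backend='myProp', dedup_subgraph=True)
print(result['opts'])  # {'dedup_subgraph': True}
```

This keeps the internal plumbing unchanged while giving callers the more idiomatic keyword-argument surface.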
