Merge pull request #786 from adrianeboyd/chore/update-develop-for-v8.2-1
* Move compatibility-related code into a separate `compat` module (#652)

* Add `compat` module to encapsulate imports of optional 3rd party frameworks/libraries

* Replace references to compat code in `.util` with references to `.compat`
Remove `cupy_ops.has_cupy`, `backends.has_cupy`, and `api.has_cupy`

* Update example notebook

* `util.set_active_gpu`: Return `None` if GPU is unavailable

* `util`: Import tensorflow and mxnet with shorthand names
Fix markdown formatting

* `api`: Re-export `has_cupy` from `compat`

* `backends`: Preserve `has_cupy` export for bwd-compat, remove superfluous imports

* Revert "Update example notebook"

This reverts commit 9f068a4.

* `util`: Revert changes to `set_active_gpu`, raise an error if no GPU is detected
Clarify docs

* NumpyOps: Add a method to get a table of C BLAS functions (#643)

* NumpyOps: Add a method to get a table of C BLAS functions

This table can be used for downstream `cdef nogil` functions that need
to use a BLAS function from the BLAS implementation used by an Ops
subclass.

* Bump blis requirement to >=0.9.0,<0.10.0

* NumpyOps: do not construct CBlas on every NumpyOps.cblas() call

* api-backends: Fix superfluous wording

* Fix a unit test in the PyTorch wrapper (#663)

* Fix a unit test in the PyTorch wrapper

This test checked whether the allocator was set to the PyTorch allocator
when the PyTorch shim is used. However, this is not the case when
PyTorch is installed, but CuPy isn't, so the test would fail. Since this
test relies on CuPy, disable it when CuPy is not available.

* Fix merge fallout

* `CupyOps`: Simplify `asarray` (#661)

* `CupyOps`: Simplify `asarray`

* Remove `cast_array` flag and use `astype` unconditionally

* Revert unconditional call to `astype`

* Remove no-op

* NumpyOps: Better type-casting in `asarray` (#656)

* `NumpyOps`: Better type-casting in `asarray`

* Simplify `dtype` check

* Update thinc/backends/numpy_ops.pyx

Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>

* Simplify casting further, avoid copies if possible

* Remove no-op

Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>

* Fix out-of-bounds writes in NumpyOps/CupyOps (#664)

* Fix out-of-bounds writes in NumpyOps/CupyOps

- Using `{CupyOps,NumpyOps}.adam` with incompatible shapes for weights,
  gradients, or moments resulted in out-of-bound writes.
- Using `NumpyOps.adam` with non-float32 arrays resulted in arrays being
  filled with incorrect data.

* Remove print debugging remnants

Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>

* More print debugging remnants

Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>

Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>

* Set version to v8.1.0.dev0 (#666)

* Fix model.copy() bug where layer used more than once (#659)

* Fix model.copy() bug where layer used more than once

* Expand functionality to include shims

* Corrections after review

* Added default for Model._copy()

* `conftest.py`: Handle exception caused by `pytest` options being added twice in CI builds (#670)

* Auto-format code with `black` + Pin `black` requirement (#673)

* Add `autoblack` GitHub action

* Fix command

* Add `black` to `requirements.txt`

* Add support for bot-invoked slow tests (#672)

* `Shim`: Fix potential data race when allocated on different threads

* Fix two warnings (#676)

- torch.nn.functional.sigmoid is deprecated in favor of torch.sigmoid.
- Clip cosh input in sechsq to avoid overflow.
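As a rough illustration of the second fix (the clip bound below is an assumed
value, not necessarily the one used in Thinc):

```
import numpy as np

def sechsq(x):
    # sech^2(x) = 1 / cosh^2(x); cosh overflows for large |x|, so clip
    # the input first. The bound of 20.0 is an illustrative assumption.
    x = np.clip(x, -20.0, 20.0)
    return (1.0 / np.cosh(x)) ** 2
```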

* Replace use of gpu_is_available with has_cupy_gpu (#675)

* Replace use of gpu_is_available with has_cupy_gpu

This PR is in preparation for better non-CUDA device support. Once we
support non-CUDA GPUs, there may be GPUs available that are not 'CuPy
GPUs'. In all places where we use `gpu_is_available` we actually mean:
is CuPy available with a CUDA GPU? So, this PR replaces uses of
`gpu_is_available` with `has_cupy_gpu`. This allows us to use
`gpu_is_available` in the future to check if any GPU is available.

In addition to that, some code had expressions like

```
has_cupy and gpu_is_available()
```

This PR simplifies such conditions to `has_cupy_gpu`, since `has_cupy_gpu`
implies `has_cupy`.
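Schematically, call sites change from the two-step check to the single flag
(assuming `has_cupy_gpu` is importable from `thinc.compat`):

```
from thinc.compat import has_cupy_gpu

# Before: has_cupy and gpu_is_available()
# After: one flag that already implies CuPy is importable and a CUDA GPU exists.
if has_cupy_gpu:
    ...
```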

* Remove unused import

* Improve error message when no CUDA GPU is found

* Fix another error message when no CUDA GPU is found

* Fixes for slow tests (#671)

* `test_uniqued`: Disable test timing for `test_uniqued_doesnt_change_result` (#678)

* `test_to_categorical`: Ensure that `label_smoothing < 0.5` (#680)

* `test_to_categorical`: Ensure that `label_smoothing < 0.5`

* Use `exclude_max` instead of clamping to `0.49`

* test_ops: do not lower precision in conversion to Torch tensor (#681)

* test_ops: do not lower precision in conversion to Torch tensor

float64 test values close to zero were rounded by conversion to a
float32 Torch tensor, resulting in mismatches between Thinc and Torch
gradients. This change prevents the loss in precision.

* test_ops: compare arrays on same device in Torch comparison

* test_maxout: compare arrays with same precision

* Add `test_slow_gpu` explosion-bot command

* Auto-format code with black (#682)

Co-authored-by: explosion-bot <explosion-bot@users.noreply.github.com>

* Azure: pin protobuf to fix Tensorflow

* Extend typing_extensions to <4.2.0 (#689)

* xp2{tensorflow,torch}: convert NumPy arrays using dlpack (#686)

* xp2{tensorflow,torch}: convert NumPy arrays using dlpack

Newer versions of NumPy can expose arrays as dlpack capsules. Use this
functionality (when supported) to speed up NumPy -> Torch/Tensorflow
array conversion.
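Sketched for the NumPy -> Torch direction (an illustration of the idea, not
Thinc's exact code; assumes NumPy >= 1.22 and a recent PyTorch):

```
import numpy as np
import torch

def numpy_to_torch(arr):
    # Newer NumPy arrays implement the __dlpack__ protocol, letting Torch
    # adopt the buffer without an extra copy.
    if hasattr(arr, "__dlpack__"):
        return torch.from_dlpack(arr)
    # Fallback for older NumPy versions.
    return torch.from_numpy(arr)
```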

* Fix up copy paste error

* `test_model_gpu`: Use TF memory pool if available, feature-gate test (#688)

* `test_model_gpu`: Use TF memory pool if available, feature-gate test

* Fix typo

* `test_predict_extensive`: Disable test time monitoring

* Fix imports, use `has_cupy_gpu` for forward-compat

* `conftest`: Use `pytest_sessionstart` to enable TF GPU memory growth

* Bump version to v8.1.0.dev1 (#694)

* `NumpyOps`: Do not use global for `CBlas` (#697)

* Merge pytorch-device branch into master (#695)

* Remove use of `torch.set_default_tensor_type` (#674)

* Remove use of `torch.set_default_tensor_type`

This PR removes use of `torch.set_default_tensor_type`. There are
various reasons why we should probably move away from using this
function:

- Upstream will deprecate and remove it:
  pytorch/pytorch#53124
- We cannot use this mechanism for other devices than CPU/CUDA, such as
  Metal Performance Shaders.
- It offers little flexibility in allocating Torch models on different
  devices.

This PR makes `PyTorchWrapper`/`PyTorchShim` flexible in terms of the
devices it can use. Both classes add a `device` argument to their
constructors that takes a `torch.device` instance. The shim ensures that
the model is on the given device. The wrapper ensures that input tensors
are on the correct device, by calling `xp2torch` with the new `device`
keyword argument.

Even though this approach offers more flexibility, as a default we want
to use the `cpu` device when `NumpyOps` is used and `cuda:N` when
CupyOps is used. In order to do so, this PR also adds a new function
`get_torch_default_device` that returns the correct device for the
currently active Ops. `PyTorchWrapper`/`PyTorchShim`/`xp2torch` use this
function when `None` is given as the device to fall back on this
default, mimicking the behavior from before this PR.
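A hedged usage sketch of that behavior (the exact constructor signature may
differ slightly from this illustration):

```
import torch
from thinc.api import PyTorchWrapper_v2, get_torch_default_device

# Explicit placement on a given device:
model = PyTorchWrapper_v2(torch.nn.Linear(32, 32), device=torch.device("cpu"))

# With device=None (the default), the wrapper falls back on
# get_torch_default_device(), which maps the active Ops to a torch.device.
default_device = get_torch_default_device()
```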

* Add some typing fixes

* Remove spurious cupy import

* Small fixes

- Use `torch.cuda.current_device()` to get the current PyTorch CUDA
  device.
- Do not use `torch_set_default_tensor_type` in `set_active_gpu`.

* Add `test_slow_gpu` explosion-bot command

* Auto-format code with black (#682)

Co-authored-by: explosion-bot <explosion-bot@users.noreply.github.com>

* Azure: pin protobuf to fix Tensorflow

* Extend typing_extensions to <4.2.0 (#689)

* Add support for PyTorch Metal Performance Shaders (#685)

* Add `test_slow_gpu` explosion-bot command

* Auto-format code with black (#682)

Co-authored-by: explosion-bot <explosion-bot@users.noreply.github.com>

* Add support for PyTorch Metal Performance Shaders

Nightly PyTorch versions add support for Metal Performance Shaders
(MPS). Metal is a low-level graphics API for Apple platforms that also
supports compute kernels (shaders). MPS is a framework of
highly-optimized compute and graphics kernels, including kernels for
neural networks. MPS is supported on Apple Silicon, such as the M1
family of SoCs, as well as on a range of AMD GPUs used in Macs.

Since devices are handled in Thinc through a specific `Ops`
implementation (e.g. `CupyOps` == CUDA GPUs), this change introduces the
`MPSOps` class. This class is a subclass of `NumpyOps` or
`AppleOps` (when available). `MPSOps` does not override any methods, but
is used to signal to relevant code paths (e.g. `xp2torch`) that Torch
tensors should be placed on the MPS device.

The mapping in the previously introduced `get_torch_default_device`
function is updated to:

- `NumpyOps` -> `cpu`
- `CupyOps` -> `cuda:N`, where N is the selected CUDA device.
- `MPSOps` -> `mps`

to ensure placement of Torch tensors on the `mps` device when `MPSOps`
is active.

Finally, the following booleans have been added to or changed in
`compat`:

- `has_torch_mps` (new): PyTorch has MPS support
- `has_torch_mps_gpu` (new): PyTorch has MPS support and an
  MPS-capable GPU is available.
- `has_torch_cuda_gpu` (new): PyTorch has CUDA support and a
  CUDA-capable GPU is available.
- `has_torch_gpu` (changed): PyTorch has a GPU available (CUDA
  or MPS).
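Illustratively, downstream code can key off these flags (a minimal sketch;
names as listed above):

```
from thinc.api import MPSOps
from thinc.compat import has_torch_mps_gpu

if has_torch_mps_gpu:
    # MPSOps behaves like NumpyOps on the Thinc side, but signals to the
    # Torch interop code that tensors belong on the "mps" device.
    ops = MPSOps()
```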

* Test PyTorch wrapper with all xp ops

* Azure: pin protobuf to fix Tensorflow

* Extend typing_extensions to <4.2.0 (#689)

* Fix type checking error

* Only back-off to NumpyOps on import error

We do not want to hide other issues while importing thinc_apple_ops.

* Remove unneeded `has_torch_mps` bool

* Add `has_gpu` bool and use it in `util`

* Replace another expression by has_gpu

* Set `has_torch_gpu` to `has_torch_cuda_gpu`

We need to decide whether we want to make the potentially breaking
change from `has_torch_cuda_gpu` to `has_torch_cuda_gpu or
has_torch_mps_gpu`. But since the latter is not needed for this PR,
remove the change.

* Update thinc/util.py

Co-authored-by: Sofie Van Landeghem <svlandeg@users.noreply.github.com>

Co-authored-by: shademe <shadeMe@users.noreply.github.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: explosion-bot <explosion-bot@users.noreply.github.com>
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
Co-authored-by: Sofie Van Landeghem <svlandeg@users.noreply.github.com>

Co-authored-by: shademe <shadeMe@users.noreply.github.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: explosion-bot <explosion-bot@users.noreply.github.com>
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
Co-authored-by: Sofie Van Landeghem <svlandeg@users.noreply.github.com>

* Expose `get_torch_default_device` through `thinc.api` (#698)

* Make `CBlas` methods standalone functions to avoid using vtables (#700)

* Make CBlas methods standalone functions to avoid using vtables

When testing #696, we found that adding new CBlas methods results in an
ABI incompatibility. This would mean that every time we add a CBlas
method, we also have to rebuild spaCy.

The ABI incompatibility occurs because Cython generates a vtable for
cdef methods, even when the class or its methods are final. This vtable
is used by the caller to look up the addresses of the methods. When
methods are added, the caller's vtable is out of sync until the calling
code is recompiled.

This change works around this issue by making the methods of CBlas
standalone functions.

* Add link to PR in comments

For future reference.

* Add Dockerfile for building the website (#699)

* Add Dockerfile for building the website

This Dockerfile was taken from spaCy.

* README: Remove command substitution in example

* Bump version to v8.1.0.dev2 (#701)

* Use blis~=0.7.8 (#704)

Until the haswell bug is fixed in BLIS v0.9, switch back to blis~=0.7.8.

* Set version to v8.1.0.dev3 (#705)

* Speed up HashEmbed layer by avoiding large temporary arrays (#696)

* Speed up HashEmbed layer by avoiding large temporary arrays

The HashEmbed layer sums up keyed embeddings. For instance, a key matrix
of the shape (50000, 4) will result in 50,000 embeddings, each computed
by summing 4 embeddings. The HashEmbed layer computed the embeddings as
follows:

vectors[keys].sum(axis=1)

where `vectors` is an embedding matrix. However, this way of computing
embeddings results in very large allocations. Suppose that `vectors`
is (4000, 64). Even though the final embedding matrix is (50000, 64),
the first expression will construct a temporary array of shape
(50000, 4, 64).

This change avoids this by introducing a `gather_add` op as a
counterpart to `scatter_add`. In this particular example, the `NumpyOps`
implementation only allocates the final (50000, 64) array, computing
the embeddings in-place using the BLAS saxpy function.

In benchmarks with an M1 Max on de_core_news_lg, this improved
processing speed from 40511 WPS to 45591 (12.5% faster).
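The semantics of the new op, sketched in plain NumPy (the real
`NumpyOps.gather_add` accumulates in place via BLAS `saxpy`; this is just the
equivalent computation):

```
import numpy as np

def gather_add(table, indices):
    # table: (n_vectors, width), indices: (n_rows, n_lookups).
    # Equivalent to table[indices].sum(axis=1), but accumulates one lookup
    # column at a time, avoiding the (n_rows, n_lookups, width) temporary.
    output = np.zeros((indices.shape[0], table.shape[1]), dtype=table.dtype)
    for j in range(indices.shape[1]):
        output += table[indices[:, j]]
    return output
```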

* Simplify saxpy call

* Fixup types

* NumpyOps.gather_add: add support for double

* NumpyOps.gather_add: support int and unsigned int indices

* Add gather_add CUDA kernel

* Add tests for gather_add

* Comment fixup

Co-authored-by: Sofie Van Landeghem <svlandeg@users.noreply.github.com>

* api-backends: document Ops.gather_add

* Ops.gather_add: arguments should be 2D arrays

* Comment fix

* Ops.gather_add returns Float2d

* docs: Ops.gather_add is new in 8.1

Co-authored-by: Sofie Van Landeghem <svlandeg@users.noreply.github.com>

* Auto-format code with black (#706)

Co-authored-by: explosion-bot <explosion-bot@users.noreply.github.com>

* Fix MyPy error when Torch without MPS support is installed (#708)

* Check that Torch-verified activations obey `inplace` (#709)

And fix some activations that do not obey the `inplace` kwarg.

* Increase test deadline to 30 minutes to prevent spurious test failures (#714)

* `test_mxnet_wrapper`: Feature-gate GPU test (#717)

* Add Ops.reduce_{first,last} plus tests (#710)

* Add Ops.reduce_{first,last} plus tests

* Add docs for reduce_{first,last}

* Typing fix

Co-authored-by: Sofie Van Landeghem <svlandeg@users.noreply.github.com>

* Typing fixes (use InT)

* Fix some some reduction issues when using CuPy

* One maxout test fails with the latest CuPy.

Values of 5.9e-39 and 0 have an infinite relative difference. Accept
with a very strict tolerance (1e-10).

Co-authored-by: Sofie Van Landeghem <svlandeg@users.noreply.github.com>

* Label smooth threshold fix (#707)

* correcting label smoothing param constraint

* test new label smooth validation error

* less than 0 input validation

* string concat

* small update to error msg

* fix max smoothing coefficient

* double check error message

* Update thinc/util.py

Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>

* test error message fix

Co-authored-by: Sofie Van Landeghem <svlandeg@users.noreply.github.com>
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>

* Set version to v8.1.0 (#718)

* `get_array_module` with non-array input returns `None` (#703)

* if not xp array module is None

* raise error

* update test

* more detailed error

* Update thinc/tests/test_util.py

Co-authored-by: Daniël de Kok <me@github.danieldk.eu>

* Update thinc/util.py

Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>

* Update thinc/tests/test_util.py

Co-authored-by: Daniël de Kok <me@github.danieldk.eu>
Co-authored-by: svlandeg <svlandeg@github.com>
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>

* Update build constraints and requirements for aarch64 wheels (#722)

* Extend build constraints for aarch64

* Skip mypy for aarch64

* Auto-format code with black (#723)

Co-authored-by: explosion-bot <explosion-bot@users.noreply.github.com>

* Fix version string (#724)

* Extend to mypy<0.970 (#725)

* Fix typo

* Update build constraints for arm64 and aarch64 wheels (#716)

* Ops: replace FloatsType by constrained typevar (#720)

* Ops: replace FloatsType by constrained typevar

Ops used the `FloatsType`, which had `FloatsXd` as its bound. MyPy could
not infer that code such as the following is correct,

```
def dish(self, X: FloatsType, inplace: bool = False) -> FloatsType:
    tmp = X * X
    # ...
```

because the inferred type is the union (or a subtype). If we instead
constrain the type variable as follows:

```
FloatsType = TypeVar("FloatsType",
    Floats1d, Floats2d, Floats3d, Floats4d)
```

the type parameter will be instantiated with a single concrete type,
solving such issues.

* Remove a bunch of casts and ignores that are not necessary anymore

* Unroll `argmax` in `maxout` for small sizes of `P` (#702)

* Unroll `argmax` in `maxout` for small sizes of `P`

`maxout` uses the `argmax` function to determine the index of the
maximum value among each group of `P` inputs. `argmax` uses a generic
array loop, which impedes speculative execution and could also prevent
unrolling of the outer `maxout` loop.

This change unrolls `argmax` with small values of `P` using a variadic
template. This leads to a small performance improvement.

* Unmodernize struct initialization

* Change Docker image tag to thinc-ai (#732)

This is purely a cosmetic change, but less confusing than thinc-io :).

* Add `with_signpost_interval` layer (#711)

* Add with_signpost_interval layer

This layer wraps a layer, adding macOS interval signposts for the
forward and backward pass. These intervals can then be visualized
in the macOS Instruments.app timeline.

* Fix reference in api-layers.md

Co-authored-by: Madeesh Kannan <shadeMe@users.noreply.github.com>

* End message is optional since signpost 0.0.3

* with_signpost_interval: also wrap init callback

* docs: we wrap init as well

* Add documentation fixes

Suggested by @svlandeg.

Co-authored-by: Madeesh Kannan <shadeMe@users.noreply.github.com>

* Docs: Fix/update `label_smoothing` description, run prettier (#733)

* Add Dish activation (#719)

* Add Ops.(backprop_)dish and CUDA kernel

Dish is a Swish/GELU-like activation function. Since it does not rely on
elementary operations like `exp` or `erf`, it can generally be computed
faster than Swish and GELU:

https://twitter.com/danieldekok/status/1484898130441166853
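A NumPy sketch of a Dish-style activation; the exact formula here is an
assumption based on the description above, the authoritative definition being
`Ops.dish`:

```
import numpy as np

def dish(x):
    # Swish/GELU-like shape built only from multiply, add, divide and sqrt
    # (no exp or erf), which is where the speedup comes from.
    # Assumed form: 0.5 * x * (x / sqrt(1 + x^2) + 1).
    return 0.5 * x * (x / np.sqrt(1.0 + x * x) + 1.0)
```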

* Make mypy happy

Apparently, X * X does not typecheck (?!?).

* test_compare_activations_to_torch: test with different dY

Also fix the backprop_dish CUDA kernel, which would fail now (thanks
@shadeMe).

* test_compare_activations_to_torch: be slightly more (absolute) tolerant

Or the Dish test would fail (possibly different accuracies for sqrt?).

* doc fix

* Update dish types to use `FloatsXdT`

* docs: add version tag to `(backprop_)dish`

* Add Dish Thinc layer

* Add Dish layer docs

Also update description as suggested by @kadarakos.

* Fix dish description

Co-authored-by: Madeesh Kannan <shadeMe@users.noreply.github.com>

Co-authored-by: Madeesh Kannan <shadeMe@users.noreply.github.com>

* Auto-format code with black (#737)

Co-authored-by: explosion-bot <explosion-bot@users.noreply.github.com>

* Increment `blis` version upper-bound to `0.10.0` (#736)

* asarrayDf: take `Sequence[float]`, not `Sequence[int]` (#739)

* Use confection for configurations (#745)

* Remove redundant tests. Add confection to requirements.txt and setup.cfg. Adjust config.py.

* Add reference to confection in website/docs/usage-config.md.

* Update confection reference in docs.

* Extend imports from confection for backwards compatibility.

* `PyTorchGradScaler`: Cache `_found_inf` on the CPU (#746)

* `PyTorchGradScaler`: Cache `_found_inf` on the CPU

This prevents unnecessary overhead from launching kernels on the GPU in hot backward passes.

* Only pin `_found_inf` to the CPU

* Always store `_found_inf` as a `bool`

* More general remap_ids (#726)

* work with cupy arrays and 2d arrays

* force mypy pass

* addressing comments

* return correct shape empty array

* test remap_ids with Ints2d

* Update thinc/layers/remap_ids.py

Co-authored-by: Daniël de Kok <me@github.danieldk.eu>

* use numpy array

* remove cupy import

* mini fix

* more strict typing

* adjust test

* Update thinc/layers/remap_ids.py

Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>

* remove check

* Update thinc/layers/remap_ids.py

Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>

* address reviews

* Update thinc/layers/remap_ids.py

Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>

* simplify casting

* Update thinc/layers/remap_ids.py

Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>

* Update thinc/layers/remap_ids.py

Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>

* remap_ids legacy

* legacy

* test version 1 and 2

* rename legacy to v1

* adding old test back

* remap_ids docs update

* Update website/docs/api-layers.md

Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>

* Update website/docs/api-layers.md

Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>

* make init/forward attribute setting more clear

* Update website/docs/api-layers.md

Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>

* Update website/docs/api-layers.md

Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>

* Update website/docs/api-layers.md

Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>

* prettier

* update model type

* prettier

* Use new _v2 instead of renamed _v1

Co-authored-by: Daniël de Kok <me@github.danieldk.eu>
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>

* Auto-format code with black (#753)

Co-authored-by: explosion-bot <explosion-bot@users.noreply.github.com>

* Switch to macos-latest (#755)

* `util`: Explicitly call `__dlpack__` built-in method in `xp2tensorflow` (#757)

`tf.experimental.dlpack.from_dlpack` expects a `PyCapsule` object.
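Roughly, the change amounts to something like this (illustrative; TensorFlow's
dlpack API is experimental, and this assumes a NumPy new enough to expose
`__dlpack__`):

```
import numpy as np
import tensorflow as tf

def numpy_to_tf(arr):
    # from_dlpack() wants a PyCapsule, so call the array's __dlpack__()
    # built-in explicitly rather than passing the array itself.
    capsule = arr.__dlpack__()
    return tf.experimental.dlpack.from_dlpack(capsule)
```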

* Set version to 8.1.1 (#758)

* Remove references to FastAPI being an Explosion product (#761)

* Remove references to FastAPI being an Explosion product.

* Remove period at end of subheader.

* Update code example for Ragged (#756)

* Update code example for Ragged.

* Import from thinc.api.
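For context, constructing a `Ragged` looks roughly like this (a minimal
sketch, not necessarily the snippet used in the docs):

```
from thinc.api import NumpyOps, Ragged

ops = NumpyOps()
# Five rows of width 4, grouped into two sequences of lengths 2 and 3.
data = ops.alloc2f(5, 4)
lengths = ops.asarray1i([2, 3])
ragged = Ragged(data, lengths)
```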

* Update setup.cfg (#748)

Register fix_random_seed as a pytest-randomly entry point.

* Update cupy extras, quickstart (#740)

* Update cupy extras, quickstart

* Rename extra cuda-wheel to cuda-autodetect

* disable mypy run for Python 3.10 (#768)

* disable mypy run for Python 3.10

* dot

* Reorder requirements in requirements.txt (#770)

Move `confection` to the section with required explosion packages.

* Revert blis range to <0.8.0 (#772)

Due to more reports of access violations in windows, reduce supported
blis versions back to `<0.8.0`.

* Set version to v8.1.2 (#773)

* Fix `fix_random_seed` entrypoint in setup.cfg (#775)

* Support both Python 3.6 and Pydantic 1.10 (#779)

* support both Python 3.6 and Pydantic 1.10

* Simplify according to Adriane's suggestion

Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>

Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>

* update to latest mypy and exclude Python 3.6 (#776)

* update to latest mypy and exclude Python 3.6

* fix typing of ops.alloc

* fix ArrayT usage in types.py

* Set version to v8.1.3 (#781)

* Update CI around conflicting extras requirements (#783)

* Update torch install, update package requirements after installing extra deps

* Only reinstall requirements

* Run test suite twice

* Check package requirements after extras

* Update thinc-apple-ops test for current macos jobs

* Move notebook extras

* Skip mypy in tests with extras

* Use torch<1.12.0

* Try to figure out numpy version (non)requirements

* More numpy version tests

* Adjust for all

Co-authored-by: Sofie Van Landeghem <svlandeg@users.noreply.github.com>
Co-authored-by: Madeesh Kannan <shadeMe@users.noreply.github.com>
Co-authored-by: Daniël de Kok <me@danieldk.eu>
Co-authored-by: Richard Hudson <richard@explosion.ai>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: explosion-bot <explosion-bot@users.noreply.github.com>
Co-authored-by: kadarakos <kadar.akos@gmail.com>
Co-authored-by: Daniël de Kok <me@github.danieldk.eu>
Co-authored-by: svlandeg <svlandeg@github.com>
Co-authored-by: Christian Clauss <cclauss@me.com>
Co-authored-by: Paul O'Leary McCann <polm@dampfkraft.com>
Co-authored-by: Raphael Mitsch <r.mitsch@outlook.com>
Co-authored-by: Will Frey <jfrey89@gmail.com>
Co-authored-by: Timothée Mazzucotelli <pawamoy@pm.me>
15 people authored Oct 12, 2022
2 parents f6eee9a + 07b7a09 commit 52b23f9
Showing 86 changed files with 2,000 additions and 2,923 deletions.
44 changes: 44 additions & 0 deletions .github/workflows/autoblack.yml
@@ -0,0 +1,44 @@
# GitHub Action that uses Black to reformat all Python code and submits a PR
# in regular intervals. Inspired by: https://github.com/cclauss/autoblack

name: autoblack
on:
workflow_dispatch: # allow manual trigger
schedule:
- cron: '0 8 * * 5' # every Friday at 8am UTC

jobs:
autoblack:
if: github.repository_owner == 'explosion'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
with:
ref: ${{ github.head_ref }}
- uses: actions/setup-python@v2
- run: pip install black
- name: Auto-format code if needed
run: black thinc
# We can't run black --check here because that returns a non-zero excit
# code and makes GitHub think the action failed
- name: Check for modified files
id: git-check
run: echo ::set-output name=modified::$(if git diff-index --quiet HEAD --; then echo "false"; else echo "true"; fi)
- name: Create Pull Request
if: steps.git-check.outputs.modified == 'true'
uses: peter-evans/create-pull-request@v3
with:
title: Auto-format code with black
labels: meta
commit-message: Auto-format code with black
committer: GitHub <noreply@github.com>
author: explosion-bot <explosion-bot@users.noreply.github.com>
body: _This PR is auto-generated._
branch: autoblack
delete-branch: true
draft: false
- name: Check outputs
if: steps.git-check.outputs.modified == 'true'
run: |
echo "Pull Request Number - ${{ steps.cpr.outputs.pull-request-number }}"
echo "Pull Request URL - ${{ steps.cpr.outputs.pull-request-url }}"
2 changes: 1 addition & 1 deletion .github/workflows/explosionbot.yml
@@ -23,5 +23,5 @@ jobs:
env:
INPUT_TOKEN: ${{ secrets.EXPLOSIONBOT_TOKEN }}
INPUT_BK_TOKEN: ${{ secrets.BUILDKITE_SECRET }}
ENABLED_COMMANDS: "test_gpu"
ENABLED_COMMANDS: "test_gpu,test_slow,test_slow_gpu"
ALLOWED_TEAMS: "spacy-maintainers"
2 changes: 1 addition & 1 deletion README.md
@@ -2,7 +2,7 @@

# Thinc: A refreshing functional take on deep learning, compatible with your favorite libraries

### From the makers of [spaCy](https://spacy.io), [Prodigy](https://prodi.gy) and [FastAPI](https://fastapi.tiangolo.com)
### From the makers of [spaCy](https://spacy.io) and [Prodigy](https://prodi.gy)

[Thinc](https://thinc.ai) is a **lightweight deep learning library** that offers an elegant,
type-checked, functional-programming API for **composing models**, with support
29 changes: 21 additions & 8 deletions azure-pipelines.yml
@@ -23,7 +23,7 @@ jobs:
imageName: 'windows-2019'
python.version: '3.6'
Python37Mac:
imageName: 'macos-10.15'
imageName: 'macos-latest'
python.version: '3.7'
Python38Linux:
imageName: 'ubuntu-latest'
@@ -63,6 +63,7 @@ jobs:
- script: |
python -m mypy thinc
displayName: 'Run mypy'
condition: ne(variables['python.version'], '3.6')
- task: DeleteFiles@1
inputs:
@@ -82,25 +83,37 @@ jobs:
- script: |
pip install -r requirements.txt
pip install "tensorflow~=2.5.0"
pip install "mxnet; sys_platform != 'win32'"
pip install "torch==1.9.0+cpu" -f https://download.pytorch.org/whl/torch_stable.html
pip install ipykernel pydot graphviz
python -m ipykernel install --name thinc-notebook-tests --user
displayName: 'Install test dependencies'
python -m pytest --pyargs thinc --cov=thinc --cov-report=term
displayName: 'Run tests without extras'
- script: |
pip install "protobuf~=3.20.0" "tensorflow~=2.5.0"
pip install "mxnet; sys_platform != 'win32'"
pip install torch --extra-index-url https://download.pytorch.org/whl/cpu
# torch does not have a direct numpy requirement but is compiled against
# a newer version than the oldest supported numpy for windows and
# python 3.10; this version of numpy would not work with
# tensorflow~=2.5.0 as specified above, but there is no release for
# python 3.10 anyway
pip install "numpy~=1.23.0; python_version=='3.10' and sys_platform=='win32'"
pip install -r requirements.txt
pip uninstall -y mypy
displayName: 'Install extras for testing'
- script: |
python -m pytest --pyargs thinc --cov=thinc --cov-report=term
displayName: 'Run tests'
displayName: 'Run tests with extras'
- script: |
pip uninstall -y tensorflow
pip install thinc-apple-ops
python -m pytest --pyargs thinc_apple_ops
displayName: 'Run tests for thinc-apple-ops'
condition: and(startsWith(variables['imageName'], 'macos'), eq(variables['python.version'], '3.9'))
condition: and(startsWith(variables['imageName'], 'macos'), eq(variables['python.version'], '3.10'))
- script: |
python -m pytest --pyargs thinc
displayName: 'Run tests with thinc-apple-ops'
condition: and(startsWith(variables['imageName'], 'macos'), eq(variables['python.version'], '3.9'))
condition: and(startsWith(variables['imageName'], 'macos'), eq(variables['python.version'], '3.10'))
6 changes: 4 additions & 2 deletions build-constraints.txt
@@ -1,6 +1,8 @@
# build version constraints for use with wheelwright + multibuild
numpy==1.15.0; python_version<='3.7'
numpy==1.17.3; python_version=='3.8'
numpy==1.15.0; python_version<='3.7' and platform_machine!='aarch64'
numpy==1.19.2; python_version<='3.7' and platform_machine=='aarch64'
numpy==1.17.3; python_version=='3.8' and platform_machine!='aarch64'
numpy==1.19.2; python_version=='3.8' and platform_machine=='aarch64'
numpy==1.19.3; python_version=='3.9'
numpy==1.21.3; python_version=='3.10'
numpy; python_version>='3.11'
9 changes: 7 additions & 2 deletions examples/transformers_tagger.py
@@ -132,7 +132,9 @@ def forward(
return TokensPlus(**token_data), lambda d_tokens: []

return Model(
"tokenizer", forward, attrs={"tokenizer": AutoTokenizer.from_pretrained(name)},
"tokenizer",
forward,
attrs={"tokenizer": AutoTokenizer.from_pretrained(name)},
)


@@ -166,11 +168,14 @@ def convert_transformer_outputs(model, inputs_outputs, is_train):

def backprop(d_tokvecs: List[Floats2d]) -> ArgsKwargs:
# Restore entries for bos and eos markers.
shim = model.shims[0]
row = model.ops.alloc2f(1, d_tokvecs[0].shape[1])
d_tokvecs = [model.ops.xp.vstack((row, arr, row)) for arr in d_tokvecs]
return ArgsKwargs(
args=(torch_tokvecs,),
kwargs={"grad_tensors": xp2torch(model.ops.pad(d_tokvecs))},
kwargs={
"grad_tensors": xp2torch(model.ops.pad(d_tokvecs, device=shim.device))
},
)

return tokvecs, backprop
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -5,7 +5,7 @@ requires = [
"murmurhash>=1.0.2,<1.1.0",
"cymem>=2.0.2,<2.1.0",
"preshed>=3.0.2,<3.1.0",
"blis>=0.4.0,<0.8.0",
"blis>=0.7.8,<0.8.0",
"numpy>=1.15.0",
]
build-backend = "setuptools.build_meta"
10 changes: 6 additions & 4 deletions requirements.txt
@@ -2,17 +2,18 @@
murmurhash>=1.0.2,<1.1.0
cymem>=2.0.2,<2.1.0
preshed>=3.0.2,<3.1.0
blis>=0.4.0,<0.8.0
blis>=0.7.8,<0.8.0
srsly>=2.4.0,<3.0.0
wasabi>=0.8.1,<1.1.0
catalogue>=2.0.4,<2.1.0
confection>=0.0.1,<1.0.0
ml_datasets>=0.2.0,<0.3.0
# Third-party dependencies
pydantic>=1.7.4,!=1.8,!=1.8.1,<1.10.0
pydantic>=1.7.4,!=1.8,!=1.8.1,<1.11.0
numpy>=1.15.0
# Backports of modern Python features
dataclasses>=0.6,<1.0; python_version < "3.7"
typing_extensions>=3.7.4.1,<4.0.0.0; python_version < "3.8"
typing_extensions>=3.7.4.1,<4.2.0; python_version < "3.8"
contextvars>=2.4,<3; python_version < "3.7"
# Development dependencies
cython>=0.25.0,<3.0
@@ -22,7 +23,7 @@ pytest-cov>=2.7.0,<2.8.0
coverage>=5.0.0,<6.0.0
mock>=2.0.0,<3.0.0
flake8>=3.5.0,<3.6.0
mypy>=0.901,<0.960
mypy>=0.980,<0.990; platform_machine != "aarch64" and python_version >= "3.7"
types-mock>=0.1.1
types-contextvars>=0.1.2; python_version < "3.7"
types-dataclasses>=0.1.3; python_version < "3.7"
@@ -33,3 +34,4 @@ nbconvert>=5.6.1,<6.2.0
nbformat>=5.0.4,<5.2.0
# Test to_disk/from_disk against pathlib.Path subclasses
pathy>=0.3.5
black>=22.0,<23.0
21 changes: 17 additions & 4 deletions setup.cfg
@@ -35,24 +35,29 @@ setup_requires =
cymem>=2.0.2,<2.1.0
preshed>=3.0.2,<3.1.0
murmurhash>=1.0.2,<1.1.0
blis>=0.4.0,<0.8.0
blis>=0.7.8,<0.8.0
install_requires =
# Explosion-provided dependencies
blis>=0.4.0,<0.8.0
blis>=0.7.8,<0.8.0
murmurhash>=1.0.2,<1.1.0
cymem>=2.0.2,<2.1.0
preshed>=3.0.2,<3.1.0
wasabi>=0.8.1,<1.1.0
srsly>=2.4.0,<3.0.0
catalogue>=2.0.4,<2.1.0
confection>=0.0.1,<1.0.0
# Third-party dependencies
setuptools
numpy>=1.15.0
pydantic>=1.7.4,!=1.8,!=1.8.1,<1.10.0
pydantic>=1.7.4,!=1.8,!=1.8.1,<1.11.0
# Backports of modern Python features
dataclasses>=0.6,<1.0; python_version < "3.7"
typing_extensions>=3.7.4.1,<4.0.0.0; python_version < "3.8"
typing_extensions>=3.7.4.1,<4.2.0; python_version < "3.8"
contextvars>=2.4,<3; python_version < "3.7"

[options.entry_points]
pytest_randomly.random_seeder =
thinc = thinc.api:fix_random_seed

[options.extras_require]
cuda =
@@ -83,6 +88,14 @@ cuda114 =
cupy-cuda114>=5.0.0b4
cuda115 =
cupy-cuda115>=5.0.0b4
cuda116 =
cupy-cuda116>=5.0.0b4
cuda117 =
cupy-cuda117>=5.0.0b4
cuda11x =
cupy-cuda11x>=11.0.0
cuda-autodetect =
cupy-wheel>=11.0.0
datasets =
ml_datasets>=0.2.0,<0.3.0
torch =
3 changes: 2 additions & 1 deletion setup.py
@@ -17,14 +17,15 @@

PACKAGES = find_packages()
MOD_NAMES = [
"thinc.backends.cblas",
"thinc.backends.linalg",
"thinc.backends.numpy_ops",
"thinc.extra.search",
"thinc.layers.sparselinear",
]
COMPILE_OPTIONS = {
"msvc": ["/Ox", "/EHsc"],
"other": ["-O3", "-Wno-strict-prototypes", "-Wno-unused-function"],
"other": ["-O3", "-Wno-strict-prototypes", "-Wno-unused-function", "-std=c++11"],
}
COMPILER_DIRECTIVES = {
"language_level": -3,
2 changes: 1 addition & 1 deletion thinc/about.py
@@ -1,2 +1,2 @@
__version__ = "8.0.15"
__version__ = "8.1.3"
__release__ = True
7 changes: 5 additions & 2 deletions thinc/api.py
@@ -16,16 +16,18 @@
from .util import DataValidationError, data_validation
from .util import to_categorical, get_width, get_array_module, to_numpy
from .util import torch2xp, xp2torch, tensorflow2xp, xp2tensorflow, mxnet2xp, xp2mxnet
from .util import get_torch_default_device
from .compat import has_cupy
from .backends import get_ops, set_current_ops, get_current_ops, use_ops
from .backends import Ops, CupyOps, NumpyOps, has_cupy, set_gpu_allocator
from .backends import Ops, CupyOps, MPSOps, NumpyOps, set_gpu_allocator
from .backends import use_pytorch_for_gpu_memory, use_tensorflow_for_gpu_memory

from .layers import Dropout, Embed, expand_window, HashEmbed, LayerNorm, Linear
from .layers import Maxout, Mish, MultiSoftmax, Relu, softmax_activation, Softmax, LSTM
from .layers import CauchySimilarity, ParametricAttention, Logistic
from .layers import resizable, sigmoid_activation, Sigmoid, SparseLinear
from .layers import ClippedLinear, ReluK, HardTanh, HardSigmoid
from .layers import HardSwish, HardSwishMobilenet, Swish, Gelu
from .layers import Dish, HardSwish, HardSwishMobilenet, Swish, Gelu
from .layers import PyTorchWrapper, PyTorchRNNWrapper, PyTorchLSTM
from .layers import TensorFlowWrapper, keras_subclass, MXNetWrapper
from .layers import PyTorchWrapper_v2, Softmax_v2
@@ -38,6 +40,7 @@
from .layers import with_reshape, with_getitem, strings2arrays, list2array
from .layers import list2ragged, ragged2list, list2padded, padded2list, remap_ids
from .layers import array_getitem, with_cpu, with_debug, with_nvtx_range
from .layers import with_signpost_interval
from .layers import tuplify

from .layers import reduce_first, reduce_last, reduce_max, reduce_mean, reduce_sum
18 changes: 10 additions & 8 deletions thinc/backends/__init__.py
@@ -5,13 +5,15 @@
import threading

from .ops import Ops
from .cupy_ops import CupyOps, has_cupy
from .cupy_ops import CupyOps
from .numpy_ops import NumpyOps
from .mps_ops import MPSOps
from ._cupy_allocators import cupy_tensorflow_allocator, cupy_pytorch_allocator
from ._param_server import ParamServer
from ..util import assert_tensorflow_installed, assert_pytorch_installed
from ..util import is_cupy_array, set_torch_tensor_type_for_ops, require_cpu
from ..util import get_torch_default_device, is_cupy_array, require_cpu
from .. import registry
from ..compat import cupy, has_cupy


context_ops: ContextVar[Optional[Ops]] = ContextVar("context_ops", default=None)
@@ -46,9 +48,11 @@ def use_pytorch_for_gpu_memory() -> None: # pragma: no cover
We'd like to support routing Tensorflow memory allocation via PyTorch as well
(or vice versa), but do not currently have an implementation for it.
"""
import cupy.cuda

assert_pytorch_installed()

if get_torch_default_device().type != "cuda":
return

pools = context_pools.get()
if "pytorch" not in pools:
pools["pytorch"] = cupy.cuda.MemoryPool(allocator=cupy_pytorch_allocator)
@@ -65,8 +69,6 @@ def use_tensorflow_for_gpu_memory() -> None: # pragma: no cover
We'd like to support routing PyTorch memory allocation via Tensorflow as
well (or vice versa), but do not currently have an implementation for it.
"""
import cupy.cuda

assert_tensorflow_installed()
pools = context_pools.get()
if "tensorflow" not in pools:
@@ -94,7 +96,7 @@ def get_ops(name: str, **kwargs) -> Ops:

cls: Optional[Callable[..., Ops]] = None
if name == "cpu":
_import_extra_cpu_backends()
_import_extra_cpu_backends()
cls = ops_by_name.get("numpy")
cls = ops_by_name.get("apple", cls)
cls = ops_by_name.get("bigendian", cls)
@@ -137,7 +139,6 @@ def set_current_ops(ops: Ops) -> None:
"""Change the current backend object."""
context_ops.set(ops)
_get_thread_state().ops = ops
set_torch_tensor_type_for_ops(ops)


def contextvars_eq_thread_ops() -> bool:
@@ -173,6 +174,7 @@ def _create_thread_local(
"ParamServer",
"Ops",
"CupyOps",
"MPSOps",
"NumpyOps",
"has_cupy",
]
