16 changes: 16 additions & 0 deletions docs/source/api/layers.md
@@ -6,6 +6,10 @@

[[autodoc]] kernels.use_kernel_forward_from_hub

### use_kernel_func_from_hub

[[autodoc]] kernels.use_kernel_func_from_hub

### replace_kernel_forward_from_hub

[[autodoc]] kernels.replace_kernel_forward_from_hub
@@ -36,14 +40,26 @@

[[autodoc]] kernels.Mode

### FuncRepository

[[autodoc]] kernels.FuncRepository

### LayerRepository

[[autodoc]] kernels.LayerRepository

### LocalFuncRepository

[[autodoc]] kernels.LocalFuncRepository

### LocalLayerRepository

[[autodoc]] kernels.LocalLayerRepository

### LockedFuncRepository

[[autodoc]] kernels.LockedFuncRepository

### LockedLayerRepository

[[autodoc]] kernels.LockedLayerRepository
45 changes: 45 additions & 0 deletions docs/source/layers.md
@@ -43,6 +43,36 @@ replace_kernel_forward_from_hub(SiluAndMul, "SiluAndMul")
it signifies that the maintainer intends to keep the `forward` signature
compatible with layers from the hub.

### Using a function as a layer

Sometimes it is useful to make a function extensible, for example because
the function cannot be replaced by a layer. In such cases, you can
annotate the function with the `use_kernel_func_from_hub` decorator:

```python
import torch
import torch.nn.functional as F
from kernels import use_kernel_func_from_hub


@use_kernel_func_from_hub("silu_and_mul")
def silu_and_mul(x: torch.Tensor) -> torch.Tensor:
    d = x.shape[-1] // 2
    return F.silu(x[..., :d]) * x[..., d:]
```

This replaces the function with a singleton `torch.nn.Module` instance
whose `forward` method calls the original function.

**Note:** For kernelization to see the function, it must be assigned as a
member of another `torch.nn.Module` that is part of the model. For example:

```python
import torch
from torch import nn


class FeedForward(nn.Module):
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        # Note: silu_and_mul is a Torch module.
        self.silu_and_mul = silu_and_mul

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.silu_and_mul(self.linear(x))
```
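
A decorated function behaves like any other submodule once it is assigned
to a model. The sketch below shows one way this could fit together; the
shapes, `Mode`, and device arguments are illustrative assumptions, and
kernelization itself is covered in the next section:

```python
import torch

from kernels import Mode, kernelize

# The linear layer must produce an even-sized last dimension, since
# silu_and_mul splits it in half.
model = FeedForward(in_features=2048, out_features=4096).to("cuda")

# Kernelize so that a registered kernel mapping can replace silu_and_mul.
model = kernelize(model, mode=Mode.INFERENCE, device="cuda")

x = torch.randn(8, 2048, device="cuda")
y = model(x)  # shape: (8, 2048)
```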

## Kernelizing a model

A model will not use Hub kernels by default, even if it contains extensible
@@ -157,6 +187,21 @@ with use_kernel_mapping(kernel_layer_mapping):
This ensures that the mapping is no longer active outside the
`with` scope.

If the layer is stateless (its `forward` method does not use member
variables, _or_ it was originally a function that was converted into a
kernel layer with `use_kernel_func_from_hub`), it can also be mapped to a
kernel function:

```python
kernel_layer_mapping = {
    "SiluAndMul": {
        "cuda": FuncRepository(
            repo_id="kernels-community/activation",
            func_name="silu_and_mul",
        ),
    }
}
```
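
As with layer mappings, this function mapping can be applied within a
`use_kernel_mapping` scope before kernelizing. A short sketch, assuming
`model` contains an extensible `SiluAndMul` layer or decorated function
(the `Mode` and device arguments are illustrative):

```python
from kernels import Mode, kernelize, use_kernel_mapping

with use_kernel_mapping(kernel_layer_mapping):
    # Inside this scope, "SiluAndMul" resolves to the silu_and_mul function
    # from kernels-community/activation on CUDA devices.
    model = kernelize(model, mode=Mode.INFERENCE, device="cuda")
```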

### Using version bounds

Kernels are versioned using tags of the form `v<major>.<minor>.<patch>`.
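
As a sketch of how a bound might look, a repository reference could pin a
version range; the `version` argument and its specifier syntax below are
assumptions, so defer to the rest of this section for the authoritative
form:

```python
from kernels import LayerRepository

# Hypothetical bound: accept any 0.0.x release at or above 0.0.4.
repo = LayerRepository(
    repo_id="kernels-community/activation",
    layer_name="SiluAndMul",
    version=">=0.0.4,<0.1.0",  # assumed Python-style version specifier
)
```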
17 changes: 13 additions & 4 deletions src/kernels/__init__.py
@@ -2,15 +2,22 @@

__version__ = importlib.metadata.version("kernels")

from kernels.layer import Device, CUDAProperties
from kernels.layer import kernelize, register_kernel_mapping, use_kernel_mapping
from kernels.layer import Mode
from kernels.layer import (
CUDAProperties,
Device,
FuncRepository,
LayerRepository,
LocalFuncRepository,
LocalLayerRepository,
LockedFuncRepository,
LockedLayerRepository,
Mode,
kernelize,
register_kernel_mapping,
replace_kernel_forward_from_hub,
use_kernel_forward_from_hub,
use_kernel_func_from_hub,
use_kernel_mapping,
)
from kernels.utils import (
get_kernel,
@@ -25,8 +32,11 @@
"__version__",
"CUDAProperties",
"Device",
"FuncRepository",
"LayerRepository",
"LocalFuncRepository",
"LocalLayerRepository",
"LockedFuncRepository",
"LockedLayerRepository",
"Mode",
"get_kernel",
@@ -38,7 +48,6 @@
"load_kernel",
"register_kernel_mapping",
"replace_kernel_forward_from_hub",
"replace_kernel_func_from_hub",
"use_kernel_forward_from_hub",
"use_kernel_func_from_hub",
"use_kernel_mapping",
12 changes: 11 additions & 1 deletion src/kernels/layer/__init__.py
@@ -1,4 +1,10 @@
from .device import Device, CUDAProperties
from .device import CUDAProperties, Device
from .func import (
FuncRepository,
LocalFuncRepository,
LockedFuncRepository,
use_kernel_func_from_hub,
)
from .kernelize import (
kernelize,
register_kernel_mapping,
@@ -16,13 +22,17 @@
__all__ = [
"CUDAProperties",
"Device",
"FuncRepository",
"LayerRepository",
"LocalFuncRepository",
"LocalLayerRepository",
"LockedFuncRepository",
"LockedLayerRepository",
"Mode",
"kernelize",
"register_kernel_mapping",
"replace_kernel_forward_from_hub",
"use_kernel_forward_from_hub",
"use_kernel_func_from_hub",
"use_kernel_mapping",
]