Kamae is a Python package comprising a set of reusable components for preprocessing inputs offline (Spark) and online (TensorFlow).
Build all your big-data preprocessing pipelines in Spark, and get your Keras preprocessing model for free!
The library is designed with three main usage patterns in mind:
- Import and use Keras preprocessing layers directly.
This is the recommended usage pattern for complex use cases, for example when your data is not tabular or when you need preprocessing steps that the provided Spark Pipeline interface does not support. The library provides a set of subclassed Keras layers that can be imported and used directly in a Keras model. You can chain these layers together to create complex preprocessing steps, and then use the resulting model as the input to a trainable model.
- Use the provided Spark Pipeline interface to build Keras preprocessing models.
This is the recommended usage pattern for big-data use cases (classification, regression, ranking) where your data is tabular and you want to apply standard preprocessing steps such as normalization and one-hot encoding. The library provides Spark transformers, estimators, and pipelining so that a user can chain together preprocessing steps in Spark, fit the pipeline on a Spark DataFrame, and then export the result as a Keras model. Unit tests ensure parity between the Spark and Keras implementations of the preprocessing layers.
- Use the provided Sklearn Pipeline interface to build Keras preprocessing models.
This works in the same way as the Spark Pipeline interface, just using Scikit-learn transformers, estimators, and pipelines. It is the recommended usage pattern for small-data use cases (classification, regression, ranking) where your data is tabular and you want to apply standard preprocessing steps such as normalization and one-hot encoding.
Note: This is provided as an example of how Kamae could be extended to support other pipeline SDKs, but it is NOT actively supported. It is far behind the Spark interface in terms of transformer coverage and enhancements we have made such as type and shape parity. Contributions are welcome, but please use at your own risk.
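To make the first pattern concrete, here is a minimal sketch of chaining preprocessing layers into a Keras model whose output feeds a trainable model. Stock Keras layers stand in for Kamae's subclassed layers (whose class names are not shown here); the chaining and model wiring work the same way, and the particular transforms are illustrative only.

```python
import tensorflow as tf

# Chain preprocessing layers into a standalone Keras model. Kamae's
# subclassed layers would slot in exactly where these stock layers sit.
raw = tf.keras.Input(shape=(3,), name="raw_feature")
x = tf.keras.layers.Rescaling(scale=0.5)(raw)               # e.g. a scaling step
x = tf.keras.layers.Lambda(lambda t: tf.math.log1p(t))(x)   # e.g. a log transform
preprocessing_model = tf.keras.Model(inputs=raw, outputs=x)

# The preprocessing model can now be called on raw inputs, or used as the
# front end of a trainable model.
out = preprocessing_model(tf.constant([[1.0, 3.0, 7.0]]))
```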
Keras Tuner support is also provided for both the Spark and Scikit-learn Pipeline interfaces: a model-builder function is returned so that the hyperparameters of the preprocessing steps can be tuned using the Keras Tuner API.
Once you have created a Kamae preprocessing model, you can use it as the input to a trainable model. See these docs for more information.
For advice on achieving type parity between the Spark and Keras implementations of the preprocessing layers, see these docs.
For information on achieving shape parity between the Spark and Keras implementations of the preprocessing layers, see these docs.
See the examples directory for various examples of how to use the Spark and Scikit-learn Pipeline interfaces. Follow the development instructions below to run the examples locally.
Transformation | Description | Keras Layer | Spark Transformer | Scikit-learn Transformer |
---|---|---|---|---|
AbsoluteValue | Applies the abs(x) transform. | Link | Link | Not yet implemented |
ArrayConcatenate | Assembles multiple features into a single array. | Link | Link | Link |
ArrayCrop | Crops or pads a feature array to a consistent size. | Link | Link | Not yet implemented |
ArraySplit | Splits a feature array into multiple features. | Link | Link | Link |
ArraySubtractMinimum | Subtracts the minimum element in an array from the rest to compute a timestamp difference. Ignores padded values. | Link | Link | Not yet implemented |
BearingAngle | Computes the bearing angle (https://en.wikipedia.org/wiki/Bearing_(navigation)) between two pairs of lat/long. | Link | Link | Not yet implemented |
Bin | Bins a numerical column into string categorical bins. Users can specify the bin values, labels and a default label. | Link | Link | Not yet implemented |
BloomEncode | Hash encodes a string feature multiple times to create an array of indices. Useful for compressing input dimensions for embeddings. Paper: https://arxiv.org/pdf/1706.03993.pdf | Link | Link | Not yet implemented |
Bucketize | Buckets a numerical column into integer bins. | Link | Link | Not yet implemented |
ConditionalStandardScale | Normalises by the mean and standard deviation, with the ability to apply a mask on another column, skip scaling zeros, and apply a non-standard scaling function. | Link | Link | Not yet implemented |
CosineSimilarity | Computes the cosine similarity between two array features. | Link | Link | Not yet implemented |
CurrentDate | Returns the current date for use in other transformers. | Link | Link | Not yet implemented |
CurrentDateTime | Returns the current date time in the format yyyy-MM-dd HH:mm:ss.SSS for use in other transformers. | Link | Link | Not yet implemented |
CurrentUnixTimestamp | Returns the current unix timestamp in either seconds or milliseconds for use in other transformers. | Link | Link | Not yet implemented |
DateAdd | Adds a static or dynamic number of days to a date feature. NOTE: Destroys any time component of the datetime if present. | Link | Link | Not yet implemented |
DateDiff | Computes the number of days between two date features. | Link | Link | Not yet implemented |
DateParse | Parses a string date of format YYYY-MM-DD to extract a given date part. E.g. day of year. | Link | Link | Not yet implemented |
DateTimeToUnixTimestamp | Converts a UTC datetime string to unix timestamp. | Link | Link | Not yet implemented |
Divide | Divides a single feature by a constant or divides multiple features against each other. | Link | Link | Not yet implemented |
Exp | Applies the exp(x) operation to the feature. | Link | Link | Not yet implemented |
Exponent | Applies the x^exponent to a single feature or x^y for multiple features. | Link | Link | Not yet implemented |
HashIndex | Transforms strings to indices via a hash table of predetermined size. | Link | Link | Not yet implemented |
HaversineDistance | Computes the haversine distance between latitude and longitude pairs. | Link | Link | Not yet implemented |
Identity | Applies the identity operation, leaving the input the same. | Link | Link | Link |
IfStatement | Computes a simple if statement on a set of columns/tensors and/or constants. | Link | Link | Not yet implemented |
Impute | Performs imputation of either mean or median value of the data over a specified mask. | Link | Link | Not yet implemented |
LambdaFunction | Transforms an input (or multiple inputs) to an output (or multiple outputs) with a user provided tensorflow function. | Link | Link | Not yet implemented |
ListMax | Computes the listwise max of a feature, optionally calculated only on the top items based on another given feature. | Link | Link | Not yet implemented |
ListMean | Computes the listwise mean of a feature, optionally calculated only on the top items based on another given feature. | Link | Link | Not yet implemented |
ListMedian | Computes the listwise median of a feature, optionally calculated only on the top items based on another given feature. | Link | Link | Not yet implemented |
ListMin | Computes the listwise min of a feature, optionally calculated only on the top items based on another given feature. | Link | Link | Not yet implemented |
ListStdDev | Computes the listwise standard deviation of a feature, optionally calculated only on the top items based on another given feature. | Link | Link | Not yet implemented |
Log | Applies the natural logarithm log(alpha + x) transform. | Link | Link | Link |
LogicalAnd | Performs an and(x, y) operation on multiple boolean features. | Link | Link | Not yet implemented |
LogicalNot | Performs a not(x) operation on a single boolean feature. | Link | Link | Not yet implemented |
LogicalOr | Performs an or(x, y) operation on multiple boolean features. | Link | Link | Not yet implemented |
Max | Computes the maximum of a feature with a constant or multiple other features. | Link | Link | Not yet implemented |
Mean | Computes the mean of a feature with a constant or multiple other features. | Link | Link | Not yet implemented |
Min | Computes the minimum of a feature with a constant or multiple other features. | Link | Link | Not yet implemented |
Modulo | Computes the modulo of a feature with the mod divisor being a constant or another feature. | Link | Link | Not yet implemented |
Multiply | Multiplies a single feature by a constant or multiplies multiple features together. | Link | Link | Not yet implemented |
NumericalIfStatement | Performs a simple if-else statement with a given operator. The value to check and the results if true or false can be constants or features. | Link | Link | Not yet implemented |
OneHotEncode | Transforms a string to a one-hot array. | Link | Link | Not yet implemented |
OrdinalArrayEncode | Encodes strings in an array according to the order in which they appear. Only for 2D tensors. | Link | Link | Not yet implemented |
Round | Rounds a floating feature to the nearest integer using a ceil, floor, or standard round op. | Link | Link | Not yet implemented |
RoundToDecimal | Rounds a floating feature to the nearest decimal precision. | Link | Link | Not yet implemented |
SharedOneHotEncode | Transforms a string to a one-hot array, using labels across multiple inputs to determine the one-hot size. | Link | Link | Not yet implemented |
SharedStringIndex | Transforms strings to indices via a vocabulary lookup, sharing the vocabulary across multiple inputs. | Link | Link | Not yet implemented |
SingleFeatureArrayStandardScale | Normalises by the mean and standard deviation calculated over all elements of all inputs, with ability to mask a specified value. | Link | Link | Not yet implemented |
StandardScale | Normalises by the mean and standard deviation, with ability to mask a specified value. | Link | Link | Link |
StringAffix | Prefixes and suffixes a string with provided constants. | Link | Link | Not yet implemented |
StringArrayConstant | Inserts provided string array constant into a column. | Link | Link | Not yet implemented |
StringCase | Applies an upper or lower casing operation to the feature. | Link | Link | Not yet implemented |
StringConcatenate | Joins string columns using the provided separator. | Link | Link | Not yet implemented |
StringContains | Checks for the existence of a constant or tensor-element substring within a feature. | Link | Link | Not yet implemented |
StringContainsList | Checks for the existence of any string from a list of string constants within a feature. | Link | Link | Not yet implemented |
StringEqualsIfStatement | Performs a simple if-else statement on string equality. The value to check and the results if true or false can be constants or features. | Link | Link | Not yet implemented |
StringIndex | Transforms strings to indices via a vocabulary lookup. | Link | Link | Not yet implemented |
StringListToString | Concatenates a list of strings to a single string with a given delimiter. | Link | Link | Not yet implemented |
StringMap | Maps a list of string values to a list of other string values with a standard CASE WHEN statement. Can provide a default value for ELSE. | Link | Link | Not yet implemented |
StringIsInList | Checks if the feature is equal to at least one of the strings provided. | Link | Link | Not yet implemented |
StringReplace | Performs a regex replace operation on a feature with constant params or between multiple features. | Link | Link | Not yet implemented |
StringToStringList | Splits a string by a separator, returning a list of parametrised length (with a default value for missing inputs). | Link | Link | Not yet implemented |
SubStringDelimAtIndex | Splits a string column using the provided delimiter, and returns the value at the given index. If the index is out of bounds, returns a given default value. | Link | Link | Not yet implemented |
Subtract | Subtracts a constant from a single feature or subtracts multiple features from each other. | Link | Link | Not yet implemented |
Sum | Adds a constant to a single feature or sums multiple features together. | Link | Link | Not yet implemented |
UnixTimestampToDateTime | Converts a unix timestamp to a UTC datetime string. | Link | Link | Not yet implemented |
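For intuition, the geospatial transforms in the table above (HaversineDistance and BearingAngle) compute the standard great-circle formulas. A plain-Python sketch follows; the helper names and Earth-radius constant are assumptions for illustration, not Kamae's API, and the library's Spark/Keras implementations may differ in argument order or constants.

```python
import math

EARTH_RADIUS_KM = 6371.0  # mean Earth radius; an assumed constant, not Kamae's

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial bearing in degrees (0 = north, clockwise) from point 1 to point 2."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlam = math.radians(lon2 - lon1)
    y = math.sin(dlam) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlam)
    return math.degrees(math.atan2(y, x)) % 360.0
```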
From `tensorflow>=2.13.0` onwards, TensorFlow directly releases builds for Mac ARM chips. Kamae supports `tensorflow>=2.9.1,<2.19.0`; however, if you require `tensorflow<2.13.0` and are using a Mac ARM chip, you will need to install `tensorflow-macos<2.13.0` yourself.
From `tensorflow>=2.18.0` onwards, TensorFlow does not release builds for Mac x86_64 chips. If you are on an older Mac chip, please bear this in mind when using the library.
The Kamae package is pushed to PyPI, and can be installed using the command:
pip install kamae
Alternatively, the package can be installed from the source code by downloading the latest release .tar file from the Releases page and running the following command:
pip install kamae-<version>.tar
Local development uses Python 3.10, which `uv` can install for you once you have run `make setup-uv`; then run `make install`. The final package supports Python 3.8 through 3.12.
`pipx` is used to install `uv` and `pre-commit` in isolated environments. Installing `pipx` depends on your operating system; see the pipx installation instructions.
Once Python 3.10 and `pipx` are installed, run the make command below to set up the project:
make setup
A Makefile is provided to simplify common development tasks. The available commands can be listed by running:
make help
To get set up for local development, you will need to install the project dependencies and pre-commit hooks. This can be done by running:
make setup
Once the dependencies are installed, tests, formatting & linting can be run by running:
make all
You can run an example of the package by running:
make run-example
You can test the inference of a model served by TensorFlow Serving by running:
make test-tf-serving
Lastly, you can run both an example and test the inference of a model (above two commands) in one command by running:
make test-end-to-end
See the docs here for more details on testing inference.
For local development, dependency management is handled by the `uv` package, which can be installed by following the instructions here.
To contribute to the project, create a branch from the `main` branch and open a pull request when the changes are ready to be reviewed.
Please follow these docs for contributing new transformers.
The project uses pre-commit hooks to enforce linting and formatting standards. You should install the pre-commit hooks before committing for the first time by running:
uv run pre-commit install
Additionally, for a pull request to be accepted, the code must pass the unit tests found in the `tests/` directory. The full suite of formatting, linting, coverage checks, and tests can be run locally with the command:
make all
Versioning for the project is performed by the semantic-release package. When a pull request is merged into the `main` branch, the package version is automatically updated based on the squashed commit message taken from the PR title. Commits prefixed with `fix:` trigger a patch version update, `feat:` triggers a minor version update, and `BREAKING CHANGE:` triggers a major version update. Note that `BREAKING CHANGE:` needs to be in the commit body/footer, as detailed here. All other commit prefixes trigger no version update. PR titles should therefore be prefixed accordingly.
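As an illustration of the convention above (this helper is hypothetical, not part of Kamae or semantic-release), the mapping from a squashed-commit title and body to the resulting version bump can be sketched as:

```python
def bump_for_commit(title: str, body: str = "") -> str:
    """Hypothetical helper: map a squashed-commit title/body to the
    semantic-release version bump it would trigger."""
    if "BREAKING CHANGE:" in body:  # must appear in the commit body/footer
        return "major"
    if title.startswith("feat:"):
        return "minor"
    if title.startswith("fix:"):
        return "patch"
    return "none"  # all other prefixes trigger no version update

bump = bump_for_commit("feat: add a new transformer")
```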
For any questions or concerns please reach out to the team.