Data and AI Assets Catalog and Execution Engine
Allows upload, registration, execution, and deployment of:
- AI pipelines and pipeline components
- Models
- Datasets
- Notebooks
Additionally it provides:
- Automated sample pipeline code generation to execute registered models, datasets, and notebooks
- Pipeline execution engine powered by Kubeflow Pipelines on Tekton, the core of Watson AI Pipelines
- Components registry for Kubeflow Pipelines
- Dataset management by Datashim (a minimal registration example follows this list)
- Preregistered Datasets from Data Asset Exchange (DAX) and Models from Model Asset Exchange (MAX)
- Serving engine by KFServing
- Model Metadata schemas
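As an illustration of the Datashim-backed dataset management, a dataset can be registered as a `Dataset` custom resource. The sketch below follows Datashim's upstream S3/Cloud Object Storage example; the resource name and all credential, endpoint, and bucket values are placeholders:

```shell
# Register a Datashim Dataset backed by S3/Cloud Object Storage.
# All values are placeholders; the CRD group/version follow Datashim's docs.
cat <<EOF | kubectl apply -f -
apiVersion: com.ie.ibm.hpsys/v1alpha1
kind: Dataset
metadata:
  name: example-dataset
spec:
  local:
    type: "COS"
    accessKeyID: "<access-key>"
    secretAccessKey: "<secret-key>"
    endpoint: "<s3-endpoint-url>"
    bucket: "<bucket-name>"
EOF
```

Datashim then exposes the registered dataset as a PVC of the same name that pipeline steps can mount.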
For more details about the project, please see the announcement blog post.
To get MLX up and running quickly with the asset catalog only, we created a Quickstart Guide using Docker Compose.
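A minimal sketch of that quickstart, assuming a local clone of the MLX repository and that the compose file lives in a `quickstart` directory (the Quickstart Guide has the authoritative steps):

```shell
# Clone the MLX repository and start the catalog-only stack with Docker Compose.
git clone https://github.com/machine-learning-exchange/mlx.git
cd mlx/quickstart        # directory name assumed; see the Quickstart Guide
docker-compose up
```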
For a slightly more resource-hungry local deployment that allows pipeline execution, we created the MLX with Kubernetes in Docker (KIND) deployment option.
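The KIND option runs MLX on a local Kubernetes-in-Docker cluster. A sketch of the cluster-creation step is below; the cluster name is arbitrary and the MLX install itself follows the KIND deployment guide:

```shell
# Create a local Kubernetes cluster with KIND and point kubectl at it.
kind create cluster --name mlx
kubectl cluster-info --context kind-mlx
# The MLX manifests are then applied on top of this cluster per the KIND guide.
```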
For a full deployment, we use the Kubeflow kfctl tooling.
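A sketch of a kfctl-based install; the KfDef manifest name below is a placeholder for the MLX configuration linked from the deployment documentation:

```shell
# Apply the MLX KfDef configuration with kfctl (manifest path is a placeholder).
kfctl apply -V -f mlx-kfdef.yaml
```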
By default, the MLX UI is available at http://<cluster_node_ip>:30380/mlx/
If you deployed on a Kubernetes cluster, run the following and look for the EXTERNAL-IP column to find the public IP of a node.
```shell
kubectl get node -o wide
```
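Alternatively, the external IP of the first node can be extracted directly with a JSONPath query (assuming the node reports an `ExternalIP` address):

```shell
# Print the ExternalIP address of the first cluster node.
kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="ExternalIP")].address}'
```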
If you deployed using OpenShift, you can use the Istio Ingress Gateway route. You can find it in the OpenShift Console or using the CLI.
```shell
oc get route -n istio-system
```
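The gateway host can also be read directly from the route; `istio-ingressgateway` is the default Istio route name and may differ in your installation:

```shell
# Print the hostname of the Istio ingress gateway route.
oc get route istio-ingressgateway -n istio-system -o jsonpath='{.spec.host}'
```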
For information on how to import data and AI assets using MLX's catalog importer, use this guide.
For information on how to use MLX and create assets, check out this guide.
Contributions can be made to either the UI or API.
For information about adding new features, fixing bugs, communication, or UI and API setup, please refer to this document.
MLX Troubleshooting Instructions
- Slack: @lfaifoundation/ml-exchange
- Mailing lists:
  - MLX-Announce for top-level milestone messages and announcements
  - MLX-TSC for top-level governance discussions and decisions
  - MLX-Technical-Discuss for technical discussions and questions