Spice.ai makes it easy for developers to build apps that learn and adapt by streamlining the use of machine learning (ML) in software. Combined with time-series data, developers can create applications that continuously improve using ML recommendations.
Spice.ai takes a developer-first approach, and is focused on a fast, iterative, inner development loop, enabling developers to get started with ML in minutes instead of months.
📢 Read the Spice.ai announcement blog post at blog.spiceai.org.
📺 View a getting started walkthrough of Spice.ai in action here.
💻 Learn about our approach to building intelligent applications and not just "doing AI".
The Spice.ai runtime is written in Golang and Python and runs as a container or microservice. It's deployable to any public cloud, on-premises, and edge. It is configured with a simple manifest and accessed by HTTP APIs.
- A lightweight, portable ML runtime accessible by simple HTTP APIs, allowing developers to use their preferred languages and frameworks
- A dashboard to visualize data and learning
- A developer-friendly CLI
- Simple, git-committable, configuration and code
Spice.ai also includes a library of community-driven data components for streaming and processing time series data, enabling developers to quickly and easily combine data with learning to create intelligent models.
Modern developers build with the community by leveraging registries such as npm, NuGet, and pip. The Spice.ai platform includes spicerack.org, the registry for ML building blocks.
As the community shares its ML building blocks (aka Spicepods, or pods for short), developers can add them to their Spice.ai applications to stream data and build learning into their apps quickly and easily. Initially, Spicepods contain simple definitions of how the app should learn; eventually they will enable the sharing and use of fully trained models.
Spice.ai is for developers who want to build intelligent applications but don't have the time or resources to learn, build and integrate the required ML to do so.
Imagine you have timestamped measurements of the room temperature and access to air-conditioning controls. If you had a time-series ML engine, your application could optimize when the A/C activates. You could reduce energy usage by not overcooling the room as the temperature drops.
Now imagine learning Python or R, neural networks, deep-learning algorithms and building a system that streams and processes time-series data to do that. With Spice.ai — which includes a time-series ML engine accessible over HTTP APIs, a library of community-driven components for data streaming and processing, and an ecosystem of pre-created ML configurations — you can build upon the experience of the community instead of doing it all yourself. You can focus on business logic and building your application instead of the ML.
Spice.ai and spicerack.org are both pre-release, early, alpha software. Until v1.0, Spice.ai may have gaps, including limited deep learning algorithms, training-at-scale, and simulated environments. Also, Spicepods aren't searchable or listed on spicerack.org yet.
Our intention with this preview is to work with developers early to define and create the developer experience together. 🚀 See the Roadmap to v1.0-stable for upcoming features.
We greatly appreciate and value your support! You can help Spice.ai in a number of ways:
- ⭐️ Star this repo.
- Build an app with Spice.ai and send us feedback and suggestions at email@example.com or on Discord.
- File an issue if you see something not quite working correctly.
- Follow us on Reddit, Twitter, and LinkedIn.
- Join our team (We’re hiring!)
- Contribute code or documentation to the project (see CONTRIBUTING.md).
We’re also starting a community call series soon!
Thank you for sharing this journey with us.
First, ⭐️ star this repo! Thank you for your support! 🙏
- Docker is required. Self-hosted and bare-metal support is on the roadmap.
- Only macOS and Linux are natively supported. WSL 2 is required for Windows.
- darwin/arm64 is not yet supported (i.e. Apple's M1 Macs). We use M1s ourselves, so we hope to support this very soon. :-)
⭐️ We highly recommend using GitHub Codespaces to get started. Codespaces enables you to run Spice.ai in a virtual environment in the cloud. If you use Codespaces, the install is not required and you may skip to the Getting Started with Codespaces section.
- Install Docker
- Install the Spice CLI
Step 1. Install Docker: While self-hosting on bare-metal hardware will be supported, the Developer Preview currently requires Docker. To install Docker, please follow these instructions.
Step 2. Install the Spice CLI: Run the following `curl` command in your terminal.

```bash
curl https://install.spiceai.org | /bin/bash
```
You may need to restart your terminal for the `spice` command to be added to your PATH.
The recommended way to get started with Spice.ai is to use GitHub Codespaces.
Create a new GitHub Codespace in the spiceai/quickstarts repo at github.com/spiceai/quickstarts/codespaces.
Once you open the Codespace, Spice.ai and everything you need to get started will already be installed. Continue on to train your first pod.
A Spicepod is simply a collection of configuration and data that is used to train and deploy your own AI.
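To make this concrete, here is a minimal sketch of what a Spicepod manifest might look like. The field names and reward syntax below are illustrative assumptions, not the exact schema; see the Spice.ai documentation for the real format.

```yaml
# Hypothetical Spicepod manifest -- field names are assumptions, not the exact schema
name: serverops
dataspaces:
  - from: metrics          # assumed data connector name
    name: cpu
    measurements:
      - name: usage        # a time-series measurement the pod learns from
actions:
  - name: perform_maintenance
  - name: do_nothing
training:
  rewards:
    - reward: do_nothing   # assumed reward-definition syntax
      with: reward = 1
```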
We will add intelligence to a sample application, ServerOps, by creating and training a Spicepod that offers recommendations to the application for different server operations, such as performing server maintenance.
If you are using GitHub Codespaces, skip Step 1. and continue with Step 2., as the repository will already be cloned.
Step 1. Clone the Spice.ai quickstarts repository:

```bash
cd $HOME
git clone https://github.com/spiceai/quickstarts
cd quickstarts/serverops
```
Step 2. Start the Spice runtime with `spice run`:

```bash
cd $HOME/quickstarts/serverops
spice run
```
Step 3. In a new terminal, add the ServerOps quickstart pod:

So that we can leave Spice.ai running, add the quickstart pod in a new terminal tab or window. If you are running in GitHub Codespaces, you can open a new terminal by clicking the split-terminal button in VS Code.

```bash
spice add quickstarts/serverops
```
The Spice.ai CLI will download the ServerOps quickstart pod and add the pod manifest to your project.
The Spice runtime will then automatically detect the pod and start your first training run!
Note: automatic training relies on your system's file watcher. In some cases, it may be disabled or not work as expected. If training does not start, use the retraining command shown below.
Navigate to http://localhost:8000 in your favorite browser. You will see an overview of your pods. From here, you can click on the serverops pod to see a chart of the pod's training progress.
In addition to automatic training on each manifest change, training can be started by using the Spice CLI from within your app directory.
```bash
spice train serverops
```
After training the pod, you can now get a recommendation for an action from it!
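As a sketch of what fetching a recommendation over the runtime's HTTP API could look like, here is a small Python example using only the standard library. The endpoint path and the response shape (an `action` field) are assumptions about the preview API, not its confirmed contract; consult the Spice.ai API docs for the exact routes.

```python
import json
from urllib.request import urlopen

# Default runtime address used throughout this guide.
RUNTIME = "http://localhost:8000"


def recommendation_url(pod: str) -> str:
    """Build the recommendation endpoint URL for a pod.

    The path below is an assumption about the preview HTTP API.
    """
    return f"{RUNTIME}/api/v0.1/pods/{pod}/recommendation"


def recommended_action(payload: str) -> str:
    """Extract the recommended action name from a JSON response body.

    The response shape (a top-level "action" field) is assumed for illustration.
    """
    return json.loads(payload)["action"]


if __name__ == "__main__":
    # Requires the runtime to be running locally with a trained serverops pod.
    with urlopen(recommendation_url("serverops")) as resp:
        print(recommended_action(resp.read().decode()))
```

Because the recommendation is just JSON over HTTP, the same call works from Node, PowerShell, or any other language your app is written in.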
To see how Spice.ai makes creating intelligent applications easy, try running and reviewing the sample ServerOps Node or Powershell apps:

```bash
npm install
node serverops.js
```
Congratulations! In just a few minutes you downloaded and installed the Spice.ai CLI and runtime, created your first Spicepod, trained it, and got a recommendation from it.
This is just the start of the journey with Spice.ai. Next, try one of the quickstart tutorials or in-depth samples for creating intelligent applications.
- ServerOps sample - a more in-depth version of the quickstart you just completed, using CPU metrics from your own machine
- Gardener sample - Intelligently water a simulated garden
- Trader quickstart - a basic Bitcoin trading bot
Spice.ai started with the vision to make AI easy for developers. We are building Spice.ai in the open and with the community. Reach out on Discord or by email to get involved. We will be starting a community call series soon!