Superduper is a Python-based framework for building end-to-end AI-data workflows and applications on your own data, integrating with major databases. It supports the latest technologies and techniques, including LLMs, vector search, RAG and multimodality, as well as classical AI and ML paradigms.
Developers may leverage Superduper by building compositional and declarative objects which outsource the details of deployment, orchestration, versioning and more to the Superduper engine. This allows developers to completely avoid implementing MLOps, ETL pipelines, model deployment, data migration and synchronization.
Using Superduper is simply "CAPE": Connect to your data, apply arbitrary AI to that data, package and reuse the application on arbitrary data, and execute AI-database queries and predictions on the resulting AI outputs and data.
- Connect
- Apply
- Package
- Execute
Connect
```python
db = superduper('mongodb|postgres|mysql|sqlite|duckdb|snowflake://<your-db-uri>')
```
Apply
```python
listener = MyLLM('self_hosted_llm', architecture='llama-3.2', postprocess=my_postprocess).to_listener('documents', key='txt')
db.apply(listener)
```
Package
```python
application = Application('my-analysis-app', components=[listener, vector_index])
template = Template('my-analysis', component=application, substitutions={'documents': 'table'})
template.export('my-analysis')
```
Execute
```python
query = db['documents'].like({'txt': 'Tell me about Superduper'}, vector_index='my-index').select()
query.execute()
```
Superduper may be run anywhere; you can also contact us to learn more about the enterprise platform for bringing your Superduper workflows to production at scale.
Superduper is flexible enough to support a huge range of AI techniques and paradigms. We have a range of pre-built functionality in the plugins and templates directories. In particular, Superduper excels when AI and data need to interact in a continuous and tightly integrated fashion. Here are some illustrative examples, which you may try out from our templates:
- Semantic multimodal vector search (images, text, video)
- Retrieval augmented generation with specialized requirements (data fetching involves semantic search as well as business rules and pre-processing)
- LLM finetuning on database hosted data
- Transfer learning using multimodal data
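For instance, a retrieval-augmented flow over data already in your databackend might look roughly like the following sketch; the `'my-index'` vector index and `'rag'` model are assumed to have been applied beforehand (for example via the `simple_rag` template), and the connection URI is illustrative:

```python
from superduper import superduper

# Connect to the databackend (illustrative URI)
db = superduper('mongodb://localhost:27017/test_db')

# Fetch semantically similar documents via the vector index ...
context = db['documents'].like(
    {'txt': 'How do I add vector search to my database?'},
    vector_index='my-index',
).select().execute()

# ... and query the RAG model, which grounds its answer in that data
answer = db['rag'].predict('How do I add vector search to my database?')
```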
We're looking to connect with enthusiastic developers to contribute to the repertoire of amazing pre-built templates and workflows available in Superduper open-source. Please join the discussion by contributing issues and pull requests!
- Create a Superduper data-AI connection/datalayer consisting of your own (see the sketch after this list):
  - databackend (database/datalake/datawarehouse)
  - metadata store (the same as, or separate from, the databackend)
  - artifact store (to store big objects)
  - compute implementation
- Build complex units of functionality (`Component`) using a declarative programming model, which integrate closely with data in your databackend, using a simple set of primitives and base classes.
- Build larger units of functionality wrapping several interrelated `Component` instances into an AI-data `Application`.
- Reuse battle-tested `Component`, `Model` and `Application` instances using `Template`, giving developers an easy starting point for difficult AI implementations.
- A transparent, human-readable, web-friendly and highly portable serialization protocol, "Superduper-protocol", to communicate results of experimentation, make `Application` lineage and versioning easy to follow, and create an elegant segue from the AI world to the databasing/typed-data worlds.
- Execute queries using a combination of outputs of `Model` instances as well as primary databackend data, to enable the latest generation of AI-data applications, including all flavours of vector search, RAG, and much, much more.
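As a minimal sketch of the first point above, a datalayer with explicitly separated stores might be configured roughly as follows; the keyword arguments and URI schemes shown here are assumptions and may differ between versions:

```python
from superduper import superduper

# Hypothetical datalayer with explicitly separated stores; the keyword
# arguments and URI schemes below are assumptions, not a fixed API.
db = superduper(
    'postgres://localhost:5432/analytics',       # databackend
    metadata_store='sqlite:///metadata.db',      # metadata store (may differ from the databackend)
    artifact_store='filesystem://./artifacts',   # artifact store for big objects
)
```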
Massive flexibility
Combine any Python-based AI model or API from the ecosystem with the most established, battle-tested databases and warehouses: Snowflake, MongoDB, Postgres, MySQL, SQL Server, SQLite, BigQuery and ClickHouse are all supported.
Seamless integration avoiding MLOps
Remove the need to implement MLOps by using declarative and compositional Superduper components, which specify the end state that the models and data should reach.
Promote code reusability and portability
Package components as templates, exposing the key parameters required to reuse and communicate AI applications in your community and organization.
Cost savings
Implement vector search and embedding generation without requiring a dedicated vector database. Effortlessly toggle between self-hosted models and API-hosted models, without major code changes.
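As a rough sketch of what this looks like, the embedding model in a declaration like the one below can be swapped without touching the rest of the application; `MySelfHostedEmbedding`, `APIHostedEmbedding` and the exact `VectorIndex` signature are illustrative placeholders rather than concrete plugin APIs:

```python
from superduper import VectorIndex

# Illustrative placeholder models; swap the class to move between a
# self-hosted embedding model and an API-hosted one.
embedder = MySelfHostedEmbedding('embedder')
# embedder = APIHostedEmbedding('embedder', model='<provider-model-name>')

# Index embeddings of the 'txt' field of 'documents' in the databackend itself,
# so no dedicated vector database is required.
listener = embedder.to_listener('documents', key='txt')
db.apply(VectorIndex('my-index', indexing_listener=listener))
```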
Move to production without any additional effort
Superduper's REST API allows installed models to be served without additional development work. For enterprise-grade scalability, failsafes, security and logging, applications and workflows created with Superduper may be deployed in one click on Superduper Enterprise.
We are working on an upcoming release, 0.4.0. In this release we have:
This will enable a large diversity of `Component` types, in addition to the well-established `Model`, `Listener` and `VectorIndex`.

This will allow developers to create a range of functionality which reacts to incoming data changes.
Components saved as `Template` instances will allow users to easily redeploy their already deployed and tested `Component` and `Application` implementations on alternative data sources, and with key parameters toggled to cater to operational requirements.
These `Template` instances may be applied with Superduper using a single command:
```bash
superduper apply <template> '{"variable_1": "value_1", "variable_2": ...}'
```
or:
```python
from superduper import templates

# `template` below is assumed to be one of the pre-built templates exposed via `templates`
app = template(variable_1='value_1', variable_2='value_2', ...)
db.apply(app)
```
Now you may view your `Component`, `Application` and `Template` instances in the user interface, and execute queries using `QueryTemplate` instances directly against the REST server.
```bash
superduper start
```
Installation:
```bash
pip install superduper-framework
```
View available pre-built templates:
```bash
superduper ls
```
Connect and apply a pre-built template:
(Note: the pre-built templates are only supported by Python 3.10; you may use all of the other features in Python 3.11+.)
```bash
# e.g. 'mongodb://localhost:27017/test_db'
SUPERDUPER_DATA_BACKEND=<your-db-uri> superduper apply simple_rag
```
Execute a query or prediction on the results:
```python
from superduper import superduper

db = superduper('<your-db-uri>')  # e.g. 'mongodb://localhost:27017/test_db'
db['rag'].predict('Tell me about superduper')
```
View and monitor everything in the Superduper interface. From the command line:
```bash
superduper start
```
After doing this you are ready to build your own components, applications and templates!
Get started by copying an existing template, to your own development environment:
```bash
superduper bootstrap <template_name> --destination templates/my-template
```
Edit the `build.ipynb` notebook to build your own functionality.
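As a minimal illustration of the kind of functionality you might build there, the sketch below wraps a plain Python function as a model and attaches it to a table; `ObjectModel` and its signature are assumptions about the API, so check the documentation for the current interface:

```python
from superduper import superduper, ObjectModel  # ObjectModel import path is an assumption

db = superduper('<your-db-uri>')

# Wrap a plain Python function as a model component ...
word_count = ObjectModel('word-count', object=lambda txt: len(txt.split()))

# ... and have it process the 'txt' field of 'documents' as new rows arrive.
db.apply(word_count.to_listener('documents', key='txt'))
```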
- MongoDB
- MongoDB Atlas
- Snowflake
- PostgreSQL
- MySQL
- SQLite
- DuckDB
- Google BigQuery
- Microsoft SQL Server (MSSQL)
- ClickHouse
If you have any problems, questions, comments, or ideas:
- Join our Slack (we look forward to seeing you there).
- Search through our GitHub Discussions, or add a new question.
- Comment on an existing issue or create a new one.
- Help us to improve Superduper by providing your valuable feedback here!
- Email us at gethelp@superduper.io.
- Visit our YouTube channel.
- Follow us on Twitter (now X).
- Connect with us on LinkedIn.
- Feel free to contact a maintainer or community volunteer directly!
There are many ways to contribute, and they are not limited to writing code. We welcome all contributions such as:
- Bug reports
- Documentation improvements
- Enhancement suggestions
- Feature requests
- Expanding the tutorials and use case examples
Please see our Contributing Guide for details.
Thanks goes to these wonderful people:
Superduper is open-source and intended to be a community effort, and it wouldn't be possible without your support and enthusiasm. It is distributed under the terms of the Apache 2.0 license. Any contribution made to this project will be subject to the same provisions.
We are looking for nice people who are invested in the problem we are trying to solve to join us full-time. Find roles that we are trying to fill here!