Vespa sample applications
Getting started - Basic Sample Applications
This is the introductory application to Vespa. Learn how to configure the schema for simple recommendation and search use cases.
A simple application demonstrating vector ANN search through HNSW, creating embedding vectors from a language model inside Vespa, and hybrid text and semantic ranking.
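The HNSW-based ANN search mentioned above is enabled per tensor field in the schema. As a minimal sketch (field name and dimension are illustrative, not taken from the sample app):

```
field embedding type tensor<float>(x[384]) {
    indexing: attribute | index
    attribute {
        distance-metric: angular
    }
    index {
        hnsw {
            max-links-per-node: 16
            neighbors-to-explore-at-insert: 200
        }
    }
}
```

The `hnsw` block builds the approximate nearest neighbor index; `distance-metric` should match how the embedding vectors were trained (e.g. `angular` for cosine similarity).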
This is the sample application used in the Vespa tutorial; follow the tutorial for a guided walkthrough. It demonstrates basic search functionality, as well as how to build a recommendation system where approximate nearest neighbor search in a shared user/item embedding space retrieves recommended content for a user. It also demonstrates the use of parent-child relationships.
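Retrieval in such a shared embedding space is done with a `nearestNeighbor` query. A hedged sketch of the HTTP query body one might send to Vespa's `/search/` endpoint; the field name (`embedding`), query tensor name (`q`), and rank profile (`ann`) are illustrative placeholders, not names from the sample app:

```python
import json

def ann_query_body(query_vector, target_hits=100):
    """Build a Vespa search request body for nearestNeighbor retrieval.

    targetHits controls how many candidates the HNSW index exposes to
    first-phase ranking; the rank profile then orders them.
    """
    return {
        "yql": f"select * from sources * where "
               f"{{targetHits: {target_hits}}}nearestNeighbor(embedding, q)",
        "input.query(q)": query_vector,
        "ranking.profile": "ann",
        "hits": 10,
    }

body = ann_query_body([0.1, 0.2, 0.3])
print(json.dumps(body, indent=2))
```

The same request shape works whether the query vector is a user embedding (recommendation) or an embedded text query (semantic search).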
This sample application demonstrates billion-scale image search using CLIP retrieval. It features separation of compute from storage, and query-time vector-similarity de-duplication.
Full-fledged State-of-the-Art Search, Ranking and Question Answering applications
These are great starting points for bringing the latest advancements in Search and Ranking to your domain!
This sample application demonstrates state-of-the-art text ranking using Transformer (BERT) and GBDT models, evaluated on the MS Marco passage and document ranking datasets.
The document ranking part of the sample app uses a Learning to Rank (LTR) GBDT model trained with LightGBM. The passage ranking part uses multiple state-of-the-art pretrained language models in a multi-phase retrieval and ranking pipeline. See also the Pretrained Transformer Models for Search blog post series. A simpler ranking app, text-search, uses the same MS Marco relevancy dataset with traditional IR text matching (BM25/Vespa nativeRank).
Create an end-to-end e-commerce shopping engine with use-case-shopping, which also bundles a frontend application. Built on the Amazon product dataset, it demonstrates next-generation e-commerce search using Vespa.
This sample application demonstrates end-to-end question answering using Facebook's DPR (Dense Passage Retriever) models. It uses Vespa's approximate nearest neighbor search to efficiently retrieve text passages from a Wikipedia-based collection of 21M passages. A BERT-based reader component reads the top-ranked passages and produces the textual answer to the question. See also Efficient Open Domain Question Answering with Vespa and Scaling Question Answering with Vespa.
This sample application demonstrates search-as-you-type, where the best matching documents are retrieved for each keystroke. It also demonstrates search suggestions (query autocompletion).
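Search-as-you-type can be implemented with YQL prefix matching. A hedged sketch of building such a query; the field name `default` is an illustrative placeholder, and the real sample app may use a different matching strategy:

```python
def suggest_query(typed: str, hits: int = 5) -> dict:
    """Build a Vespa query body matching documents whose field value
    starts with the text typed so far, using the {prefix: true}
    YQL annotation."""
    escaped = typed.replace('"', '\\"')  # keep the YQL string literal valid
    return {
        "yql": f'select * from sources * where '
               f'default contains ({{prefix: true}}"{escaped}")',
        "hits": hits,
    }

print(suggest_query("bat")["yql"])
```

Sending this body on every keystroke returns candidate completions; ranking then decides which suggestions surface first.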
These sample applications demonstrate various Vespa features and capabilities.
A sample Vespa application which demonstrates using Vespa as a stateless ML model inference server, where Vespa takes care of distributing ML models to multiple serving containers, offering horizontal scaling, safe deployment, model versioning, and a feature processing pipeline. Stateless ML model serving can also be used in state-of-the-art retrieval and ranking pipelines, e.g. for query classification and for encoding text queries into dense vector representations for efficient retrieval using Vespa's approximate nearest neighbor search.
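Deployed models can be invoked over Vespa's stateless model-evaluation REST API. A hedged sketch of constructing such a request URL, assuming the `/model-evaluation/v1/<model>/eval` endpoint; the model name and input parameter below are hypothetical placeholders:

```python
import urllib.parse

def eval_url(endpoint: str, model: str, inputs: dict) -> str:
    """Build a URL for evaluating a deployed model via the
    stateless model-evaluation REST API (sketch under assumptions)."""
    query = urllib.parse.urlencode(inputs)  # percent-encode input tensors
    return f"{endpoint}/model-evaluation/v1/{model}/eval?{query}"

# Hypothetical model name and input tensor, for illustration only.
url = eval_url("http://localhost:8080", "my_model", {"input": "[[1.0, 2.0]]"})
print(url)
```

A GET to such a URL would return the model's output tensor as JSON, letting the container cluster scale inference horizontally.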
Note: Applications with pom.xml are Java/Maven projects and must be built before being deployed. Refer to the Developer Guide for more information.
Contribute to the Vespa sample applications.