
Internet search engine for text-oriented websites. Indexing the small, old and weird web.





Marginalia Search

This is the source code for Marginalia Search.

The aim of the project is to develop new and alternative discovery methods for the Internet. It's an experimental workshop as much as it is a public service; the overarching goal is to elevate the more human, non-commercial sides of the Internet.

A side goal is to do this without requiring datacenters or enterprise hardware budgets: the operation should run on affordable hardware with minimal operational overhead.

The long-term plan is to refine the search engine so that it provides enough public value that the project can be funded through grants, donations, and commercial API licenses (non-commercial share-alike use is always free).

The system can be run either as a copy of Marginalia Search, or as a white-label search engine for your own data (either crawled or side-loaded). At present the logic isn't very configurable, and many of the judgements made are based on the Marginalia project's goals, but additional configurability is being worked on!

Here's a demo of the set-up and operation of the self-hostable barebones mode of the search engine: 🌎

Set up

To set up a local test environment, follow the instructions in 📄 run/!

Further documentation is available at 🌎

Before compiling, it's necessary to run ⚙️ run/. This downloads supplementary model data that is necessary to run the code; the same data is also needed to run the tests.

If you wish to hack on the code, check out 📄 doc/

Hardware Requirements

A production-like environment requires a lot of RAM and ideally enterprise SSDs for the index, as well as several additional terabytes of slower hard drives for storing crawl data. It can be made to run on smaller hardware by limiting the size of the index.

The system will definitely run on a 32 GB machine, possibly smaller, but at that size it may not perform very well, as it relies on disk caching for speed.

A local developer's deployment is possible with much smaller hardware (and index size).

Project Structure

📁 code/ - The Source Code. See 📄 code/ for a further breakdown of the structure and architecture.

📁 run/ - Scripts and files used to run the search engine locally

📁 third-party/ - Third party code

📁 doc/ - Supplementary documentation

📄 - How to contribute

📄 - License terms


You can email with any questions or feedback.


The bulk of the project is available under AGPL 3.0, with exceptions. Some parts are co-licensed under MIT, and third-party code may have different licenses. See the appropriate /


The project uses a modified Calendar Versioning scheme, where the first two numbers are the year and month of the latest crawling operation, and the third number is a patch number.


For example, 23.03.02 is a release with crawl data from March 2023 (released in May 2023). It is the second patch for the 23.03 release.

Versions with the same year and month are compatible with each other, or at least offer an upgrade path where the same data set can be used. Across different crawl sets, however, data format changes may be introduced, and you're generally expected to re-crawl the data from scratch. Crawl data has a shelf life roughly as long as this project's major release cycle; after about 2-3 months it gets noticeably stale, with many dead links.
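The versioning scheme above can be sketched as a small parser and compatibility check. This is an illustration only; these helper functions are hypothetical and not part of the Marginalia codebase.

```python
# Illustration of the modified CalVer scheme: "YY.MM.PP", where YY.MM
# tracks the crawl date and PP is a patch number. Hypothetical helpers,
# not part of the Marginalia codebase.

def parse_version(v: str) -> tuple[int, int, int]:
    """Split a version string like '23.03.02' into (year, month, patch)."""
    year, month, patch = (int(part) for part in v.split("."))
    return year, month, patch

def same_crawl_set(a: str, b: str) -> bool:
    """Versions sharing year and month use the same crawl data set."""
    return parse_version(a)[:2] == parse_version(b)[:2]
```

Under this scheme, `same_crawl_set("23.03.01", "23.03.02")` holds (same crawl, patch upgrade), while `same_crawl_set("23.03.02", "23.05.00")` does not, reflecting that a new crawl set may change data formats.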

For development purposes, crawling is discouraged and sample data is available. See 📄 run/ for more information.



Consider donating to the project.


This project was funded through the NGI0 Entrust Fund, a fund established by NLnet with financial support from the European Commission's Next Generation Internet programme, under the aegis of DG Communications Networks, Content and Technology under grant agreement No 101069594.
