No ads, no tracking, no profit
Mwmbl is a non-profit, open source search engine where the community determines the rankings. We aim to be a replacement for commercial search engines such as Google and Bing.
We have our own index powered by our community. Our index is currently much smaller than those of commercial search engines, with around 500 million unique URLs (more stats). The quality is a long way off from matching the commercial engines at the moment, but you can help change that by joining us! We aim to have 1 billion unique URLs indexed by the end of 2024, 10 billion by the end of 2025, and 100 billion by the end of 2026, by which point we should be comparable with the commercial search engines.
Our main community is on Matrix, but we also have a Discord server for non-development-related discussion.
The community is responsible for crawling the web (see below) and curating search results. We are friendly and welcoming. Join us!
All documentation is at https://book.mwmbl.org.
Crawling is distributed across the community, while indexing is centralised on the main server.
If you have spare computer power and bandwidth, the best way you can help is by running our command line crawler with as many threads as you can spare.
If you have Firefox you can help out by installing our extension. This will crawl the web in the background. It does not use or access any of your personal data. Instead it crawls a set of URLs sent from our central server. After extracting a summary of each page, it batches these up and sends the data to the central server to be stored and indexed.
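To make that flow concrete, here is a minimal Python sketch of the crawl-and-submit loop. The server host, endpoint paths, payload shape, and summary extraction are illustrative assumptions for the example, not the extension's actual protocol.

```python
import requests

CENTRAL = "https://example-mwmbl-server.org"  # placeholder host, not the real API

def crawl_batch() -> None:
    # Ask the central server for a batch of URLs to crawl (hypothetical endpoint).
    urls = requests.get(f"{CENTRAL}/batches/new", timeout=30).json()

    results = []
    for url in urls:
        try:
            response = requests.get(url, timeout=30)
        except requests.RequestException:
            continue
        # Extract a short summary; a real crawler would parse the HTML properly.
        results.append({"url": url, "summary": response.text[:500]})

    # Send the batch back to the central server to be stored and indexed
    # (hypothetical endpoint and payload shape).
    requests.post(f"{CENTRAL}/batches", json={"items": results}, timeout=30)
```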
The motives of ad-funded search engines are at odds with providing an optimal user experience. These sites are optimised for ad revenue, with user experience taking second place. This means that pages are loaded with ads which are often not clearly distinguished from search results. Also, eitland on Hacker News comments:
Thinking about it it seems logical that for a search engine that practically speaking has monopoly both on users and as mattgb points out - [to some] degree also on indexing - serving the correct answer first is just dumb: if they can keep me going between their search results and tech blogs with their ads embedded one, two or five times extra that means one, two or five times more ad impressions.
The space of alternative search engines has expanded rapidly in recent years. Here's a very incomplete list of some that have interested me:
- search.marginalia.nu - a search engine favouring text-heavy websites
- SearXNG - an open source meta search engine
- YaCy - an open source distributed search engine
- Stract - an open source search engine with a focus on privacy and customizability
- Brave
- DuckDuckGo
- Kagi
Of these, YaCy is the closest in spirit to the idea of a non-profit search engine. The index is distributed across a peer-to-peer network. Unfortunately this design decision slows the fetching of search results.
Marginalia Search is fantastic, but our goals are different: we aim to be a replacement for commercial search engines whereas Marginalia aims to provide a different type of search.
All other search engines that I've come across are for-profit. Please let me know if I've missed one!
To be a good search engine, we need to store many items, but the cost of running the engine is at least proportional to the number of items stored. Our main consideration is thus to reduce the cost per item stored.
The design is founded on the observation that most items rank for a small set of terms. In the extreme version of this, where each item ranks for a single term, the usual inverted index design is grossly inefficient, since we have to store each term at least twice: once in the index and once in the item data itself.
Our design is a giant hash map. We have a single store consisting of a fixed number N of pages. Each page is of a fixed size (currently 4096 bytes to match a page of memory), and consists of a compressed list of items. Given a term for which we want an item to rank, we compute a hash of the term, a value between 0 and N - 1. The item is then stored in the corresponding page.
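As a rough sketch of this page-addressing scheme, the following Python maps a term to a page and packs a compressed item list into a fixed-size page. The hash function, page count, item encoding, and zlib compression are assumptions made for illustration, not the engine's actual implementation.

```python
import hashlib
import json
import zlib

NUM_PAGES = 10_240_000   # N: the fixed number of pages (illustrative value)
PAGE_SIZE = 4096         # each page matches one 4 KiB page of memory

def page_index(term: str) -> int:
    """Map a term to a page number between 0 and NUM_PAGES - 1."""
    digest = hashlib.sha256(term.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % NUM_PAGES

def encode_page(items: list[dict]) -> bytes:
    """Compress a page's item list, length-prefix it, and pad to the page size."""
    blob = zlib.compress(json.dumps(items).encode("utf-8"))
    data = len(blob).to_bytes(4, "big") + blob
    if len(data) > PAGE_SIZE:
        raise ValueError("items do not fit in one page")
    return data.ljust(PAGE_SIZE, b"\x00")
```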
To retrieve results, we compute the hash of each term in the user query, load the corresponding pages, filter the items in each page to those containing the term, and rank them. Since each page is small, this can be done very quickly.
Because we compress the list of items, we can rank for more than a single term and maintain an index smaller than the inverted index design. At least, that's the theory. This idea has yet to be tested on a large scale.
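Retrieval is the reverse of the storage sketch above: hash each query term, load the matching pages, decompress them, filter, and rank. Continuing under the same illustrative assumptions (it reuses `page_index` and the length-prefixed, zlib-compressed JSON encoding from that sketch, plus a toy term-count score):

```python
def decode_page(page: bytes) -> list[dict]:
    """Read the length prefix and decompress a page back into its item list."""
    length = int.from_bytes(page[:4], "big")
    return json.loads(zlib.decompress(page[4:4 + length]))

def search(query: str, read_page) -> list[dict]:
    """Look up each query term's page and rank the matching items.

    `read_page` is any callable returning the raw bytes of page i,
    e.g. a slice of a memory-mapped index file.
    """
    terms = query.lower().split()
    results = []
    for term in terms:
        for item in decode_page(read_page(page_index(term))):
            if term in item.get("terms", []):
                results.append(item)
    # Toy ranking: items matching more of the query terms come first.
    results.sort(key=lambda item: -sum(t in item.get("terms", []) for t in terms))
    return results
```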
There are multiple ways to help:
- Help us crawl the web
- Donate some money towards hosting costs and supporting our volunteers
- Give feedback/suggestions
- Assist in development of the engine itself
If you would like to help in any of these or other ways, thank you! Please join our Matrix chat server or email the main author (email address is in the git commit history).
For trying out the service locally see the section in the Mwmbl book.
Note: this method is not recommended as it is more involved, and your index will not include any data unless you set up a crawler that sends data to your server. You will need to set up your own Backblaze or S3-equivalent storage, or have access to the production keys, which we probably won't give you.
Follow the deployment instructions
Like "mumble". I live in Mumbles, which is spelt "Mwmbwls" in Welsh. But the intended meaning is "to mumble", as in "don't search, just mwmbl!"