
Improve speed of current search #41

Closed
oliversauter opened this Issue Mar 1, 2017 · 66 comments

oliversauter (Collaborator) commented Mar 1, 2017

Currently we use PouchDB quick-search for...well....search :)

But PouchDB-QS is already very slow with just a few dozen pages.
It is also limited in the filter options we can build in without workarounds.

The task is to come up with a search solution that is:

  • fast
  • scalable (to a limit; it does not have to handle two years' worth of indexed content, though if it could, great)
  • able to run in the browser
  • open to custom rankings
  • compatible with PouchDB
  • able to support a variety of custom filters (e.g. time, domain, source, or other fields that are saved)
  • able to persist the index

@Treora any other requirements that come to mind?

oliversauter (Collaborator, Author) commented Mar 1, 2017

A possible idea is to copy/mimic how WorldBrain does it right now: https://github.com/WorldBrain/Research-Engine/blob/ed1e6bc4e2a0f3386105fb5249492560e9418d39/src/js/background.js

@Treora can you give a short description of how the mechanism works?

Treora (Collaborator) commented Mar 2, 2017

Search should indeed be faster; pouchdb-quick-search was not built for the scale we use it for. In its creator's own words:

@Treora: that sounds quite hopeless to me.

We could try to create an index that largely lives in memory, or do less indexing but keep recent documents in memory (such that searching is only slow for older documents), or possibly something else. Regarding less indexing:

@Treora can you give a short description of how the mechanism works?

I have not dived into the details of how the WorldBrain (or actually Falcon) implementation of full-text search works, but what I understood is that it orders visited pages by time, pulls the pages of a certain time range (default two weeks) into memory, and then simply does an indexOf for the query words in each of these pages, returning the pages that contain the given words. No text index at all, but quite fast for a reasonable number of pages (20k pages still appears to be fine).

In the short term, we could try some index-less approach for quickly filtering recent documents containing particular words; in the longer term, I would love to find or build a scalable, proper search engine, but that is a project in itself.
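(A minimal sketch of such an index-less filter; the { url, visitTime, text } page shape is hypothetical, and it assumes the pages for the chosen time range have already been pulled into memory:)

```js
// Hypothetical index-less search over recently visited pages.
// `pages` is assumed to be an array of { url, visitTime, text } objects
// already loaded from the database for the chosen time range.
function searchRecentPages(pages, query) {
  const words = query.toLowerCase().split(/\s+/).filter(Boolean);
  return pages.filter(page => {
    const text = page.text.toLowerCase();
    // A page matches only if its text contains every query word.
    return words.every(word => text.indexOf(word) !== -1);
  });
}
```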

Treora added the performance label Mar 2, 2017

reficul31 (Contributor) commented Mar 3, 2017

Please tell me if I am wrong, but instead of putting the whole text of the page into the DB, how about we filter out certain common words such as "as", "the", "in", "a", etc.? These words would rarely come up in a search query. We would take the whole text of the page and keep only the important words, leaving out the common ones. For example, if the page says "The effects of global warming on different countries", we would save just the words "effects", "global", "warming", "different", "countries". Then we could use something like Levenshtein approximate string matching to show the results. I think this would cut the search time considerably.
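(For illustration, a sketch of that preprocessing step; the stop-word list here is a tiny sample, a real one would be far longer:)

```js
// Illustrative stop-word filtering, as proposed above.
// STOP_WORDS is a small sample; a real list would be much longer.
const STOP_WORDS = new Set(['a', 'an', 'as', 'the', 'in', 'of', 'on', 'and']);

function extractIndexableWords(pageText) {
  return pageText
    .toLowerCase()
    .split(/\W+/)
    .filter(word => word.length > 0 && !STOP_WORDS.has(word));
}

// extractIndexableWords('The effects of global warming on different countries')
// -> ['effects', 'global', 'warming', 'different', 'countries']
```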

mangalutsav commented Mar 3, 2017

Can we use Elasticsearch on keywords? It's very fast and reliable.

hackboxlive commented Mar 3, 2017

I strongly agree with @reficul31 and would further suggest that we could do one of these things:

  • Instead of filtering out certain common words and saving the rest of the words, we could map each page to certain tags using learning algorithms (something like auto-tagging). This would speed up the search multifold.

  • The other thing that could be done is to implement an already known search algorithm like PageRank or HITS. I say we should go for the latter.

oliversauter (Collaborator, Author) commented Mar 3, 2017

Have any of you worked with Lunr.js? Might that be an option to implement?

Treora (Collaborator) commented Mar 3, 2017

Thanks everybody for thinking along. I'll try to address the points quickly:
@reficul31: I did not mention removing stop words, but this is indeed a preprocessing step in practically every text search algorithm. The pouchdb-quick-search we currently use does it, and so does Falcon (both fixed to English though, but language detection is a task for another time). Using an approximate string matching algorithm (e.g. Levenshtein) on the query words would not make things faster, but it is a nice feature for being more lenient with typos and word endings (word stemming helps too).
@mangalutsav: Elasticsearch does not run in a browser, so that is not an option here.
@hackboxlive: extracting only a few topic words/tags from a page, instead of indexing all but the obvious stop words, may be interesting; it would not be full-text search as such. Something like LSA may be a good addition at some point, but I don't see a ready-made approach to do this now. PageRank or HITS seem out of the question here.

Despite the validity of some of your suggestions, I would like to keep the scope of the current issue to getting a quick solution for faster search. I'd love to see or create a good in-browser search engine in the long term, but that requires more skill and time than appears to be available here and now.

Treora (Collaborator) commented Mar 3, 2017

@oliversauter: lunr.js would indeed be a possible tool of choice for stemming, stop-word filtering, etcetera. It is also used by pouchdb-quick-search. The question is still how to persist and access the index, which will become too large to keep entirely in memory. Here is some interesting example code using Lunr and Dexie (a wrapper for IndexedDB), by the developer of pouchdb (and pouchdb-quick-search).
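(That pattern, roughly sketched; the database and table names here are made up, and the builder-style API shown is that of lunr 2.x:)

```js
import lunr from 'lunr';
import Dexie from 'dexie';

// Hypothetical Dexie schema for persisting a serialised lunr index.
const db = new Dexie('search-demo');
db.version(1).stores({ indexes: 'name' });

async function buildAndPersist(docs) {
  // lunr 2.x builds an immutable index; documents are added in the builder.
  const idx = lunr(function () {
    this.ref('url');
    this.field('title');
    this.field('text');
    docs.forEach(doc => this.add(doc));
  });
  // lunr indexes serialise to plain JSON, so one table row suffices.
  await db.indexes.put({ name: 'pages', data: JSON.stringify(idx) });
}

async function loadAndSearch(query) {
  // Restore the index on startup instead of re-indexing every page.
  const saved = await db.indexes.get('pages');
  return lunr.Index.load(JSON.parse(saved.data)).search(query);
}
```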

As indicated, this may be quite a task to do properly; I removed the new-comer tag accordingly. If somebody feels comfortable with the tools I'd be happy to see experiments though, but we could also start with something simple and limit the scope of the problem, e.g. by searching only recently visited pages by default.

mangalutsav commented Mar 4, 2017

@oliversauter there is a new package for the browser:
https://www.npmjs.com/package/elasticsearch-browser
so it could be taken into consideration.

Treora (Collaborator) commented Mar 4, 2017

@mangalutsav: that is only a client that connects to a remote Elasticsearch server, not an in-browser search engine.

oliversauter (Collaborator, Author) commented Mar 4, 2017

I just found this: http://elasticlunr.com/

Does anybody have experience with it?

arpitgogia (Collaborator) commented Mar 4, 2017

I have no experience with it, but it looks like a very good in-browser alternative to Elasticsearch. It would be ideal performance-wise.

mangalutsav commented Mar 5, 2017

@Treora my bad, but I was sure something like that existed. @oliversauter I don't have experience with it but it looks good.


rutujasurve94 commented Mar 6, 2017

Hello,
I found this: https://github.com/hexagon/thinker-fts. It has options for direct and indirect hits (a weighted ranker for partial matching). It can retrieve results within 10 ms on a dataset of 5,000 average-Wikipedia-sized documents.
I also found http://fusejs.io/, which is a fuzzy-search JavaScript library.

rutujasurve94 commented Mar 6, 2017

I also came across this:
https://github.com/eklem/search-index-norch-cookbook
search-index has been able to index up to 1.3 million documents so far. It has mechanisms for auto-completion, and there are ways to index really large datasets, e.g. by raising a Node process's memory limit as follows, or by using big batch sizes:
$ node --max-old-space-size=8192 [your indexing script]
I need to research this a bit more, though.

rutujasurve94 commented Mar 6, 2017

In terms of speed, this one seems to be the fastest (it uses compressed index files): https://github.com/shibukawa/fm-index.jsx. I'm not sure if we can use it in the browser.
Elasticlunr seems to be the best option found so far.

rutujasurve94 commented Mar 6, 2017

This may perhaps be useful:
https://github.com/anywhichway/reasondb uses JOQuLaR (JavaScript Object Query Language and Representation), an SQL-like syntax for the ReasonDB database. It has a lot of predicate types.
Also this: https://github.com/cshum/levi (uses an IndexedDB database).

Treora (Collaborator) commented Mar 6, 2017

Thanks all for the great research. I apparently underestimated how much already existed. It would be nice to test whether one of these tools could be a good replacement for pouchdb-quick-search, or whether they would have similar performance problems or perhaps other issues. Some questions to ask of each:

  • Can it keep just an index without storing the documents themselves?
  • Does it need to have the whole index in memory at runtime? (and is that prohibitively large?)
  • If so, does it provide an easy way to persist the index in local storage (disk)?
  • Does it provide good search features: stemming, stop-words, multi-field, field boosting, unicode normalisation, prefix search, fuzzy matches?

From a quick glance at each, I organised the suggestions into three categories (note I may be wrong about things):

  1. I do not expect these to be a likely match for our purposes:

    • Fuse.js: focus is on fuzzy string matching, few other features, I expect bad performance at large scale.
    • fm-index: lacks features, e.g. no multi-field search.
    • reasondb: tries to be a full database.
  2. Possibly interesting:

    • elasticlunr.js (or lunr.js): looks nice, but works completely in-memory. Does provide an easy way to load/save its index, but we'd have to store it to disk every time.
    • thinker: also in-memory; not sure if it's suitable for in-browser use.
  3. Ones that seem worth a better look:

    • Levi: keeps index in IndexedDB
    • search-index (+cookbook): pretty code; supposedly persists the index, but I have not discovered how.
    • Fullproof: keeps index in IndexedDB, nice search features.
mangalutsav commented Mar 7, 2017

So can I start working now? I think we should go with Levi.

oliversauter (Collaborator, Author) commented Mar 7, 2017

Hey @mangalutsav

Thanks for your offer to take it on. :)
Can you elaborate on why Levi is the best choice for you?

mangalutsav commented Mar 8, 2017

Probably because its code is easy to understand. But I realize now that it is a dead project, so search-index is also a good choice, as it is relatively more active and has a well-defined API.

oliversauter (Collaborator, Author) commented Mar 8, 2017

@mangalutsav

I think it is important that we make a good evaluation of the search technology to implement, because it will have many implications and will take a while to implement.

Therefore, it would be great to hear a little more about how you evaluated your choice, taking into account the requirements mentioned in the initial post. (Obviously, if you have other requirements that you think are important, we are very happy to hear them.)

If anything is unclear that prevents you from making your analysis, feel free to let us know.

Thanks a lot :)

mangalutsav commented Mar 9, 2017

@oliversauter I understand that, so I will evaluate all the options that are available.

mangalutsav commented Mar 10, 2017

Elasticlunr is the best choice if we aren't too worried about memory. It is much better than lunr.js in terms of speed; its query time is on par with Elasticsearch:
http://fiatjaf.alhur.es/js-search-engines-comparison/
You can see there that lunr is one of the fastest and elasticlunr is even faster. If we are aiming for high speed we should go with elasticlunr. I have yet to evaluate fullproof and search-index.
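(For anyone wanting to reproduce such numbers, a minimal timing harness could look like this sketch; the `index` and `queries` variables are assumed to already exist:)

```js
// Hypothetical micro-benchmark: average query time against an
// already-built search index (`index` and `queries` are assumed).
function averageQueryTimeMs(index, queries) {
  const t0 = performance.now();
  for (const q of queries) index.search(q);
  const t1 = performance.now();
  return (t1 - t0) / queries.length;
}
```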

Treora (Collaborator) commented Mar 26, 2017

Nice work @RajPratim21, looking forward to seeing your exact setup and results.

@mangalutsav: did you by any chance get around to running some tests as well?

mangalutsav commented Mar 27, 2017

Yup, I agree with @RajPratim21: elasticlunr is fast, scalable, and compatible with PouchDB.

RajPratim21 (Contributor) commented Mar 29, 2017

@Treora @oliversauter I have reduced the index size from 38 MB to 24 MB for a data size of 32 MB by setting the parameter index.saveDocument(false).
It further enhances performance by bringing search time down to 25-30 milliseconds on average, without query boosting. http://elasticlunr.com/docs/index.html
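(For context, a minimal sketch of that setting in an elasticlunr configuration; the field names and document are illustrative:)

```js
const elasticlunr = require('elasticlunr');

const index = elasticlunr(function () {
  this.addField('title');
  this.addField('body');
  this.setRef('id');
});

// Store only the inverted index, not the documents themselves; search
// results then contain refs (ids) instead of full documents.
index.saveDocument(false);

index.addDoc({ id: 1, title: 'Global warming', body: 'The effects of global warming...' });
// Query boosting: weight "title" matches twice as heavily as "body" matches.
index.search('warming', { fields: { title: { boost: 2 }, body: { boost: 1 } } });
```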

oliversauter (Collaborator, Author) commented Mar 29, 2017

Great stuff :)

So the difference between false and true is that the former only saves the fields that need to be indexed, and the latter saves all fields, even if they are not among the searchable fields?


RajPratim21 (Contributor) commented Mar 31, 2017

@oliversauter @Treora @arpitgogia I have completed my analysis of the search libraries elasticlunr, search-index, and Levi, plus a bit of fullproof. Sorry for making this post a bit long, but it is indeed an analysis in detail.
Dataset used: https://www.kaggle.com/stanfordu/stanford-question-answering-dataset (size 32 MB),
and I have got some interesting results (something we really needed).

Elasticlunr
As discussed earlier, it's the fastest; nobody can beat it, but that speed comes at the cost of space. @oliversauter, you were right that elasticlunr stores everything in memory all at once and never goes back to secondary storage; that's why it's so fast. There's a feature request in elasticlunr for external storage: weixsong/elasticlunr.js#29. The maximum space optimization we can achieve is by removing docs from the index and keeping only ids, which I have already done and which resulted in an index size of 24 MB. It's not suitable for a client-side application.

search-index
By using search-index with preprocessed data (changing the JSON format) I achieved greater speed than last time; this time the average search speed is 250-300 milliseconds, half the previous figure.
But there is something I didn't notice last time: search-index's DB, which contains the index file in binary format, keeps growing after every search. The reason is that it saves some metadata (about 2 MB of search-related data) every time a search is made. This can be cleared to reduce size, but if it is not cleared, the DB grows to 150 MB for data of size 32 MB.
The whole DB does not get loaded into RAM (main memory); only the index is loaded from the DB and the rest is stored locally (on the hard drive). According to the creators of search-index, for a data size of 100 MB the loaded index size is approximately 100 MB; see the end of the discussion at fergiemcdowall/search-index#255 (comment).
search-index < elasticlunr, both in terms of speed and in terms of space taken in memory.

Levi
Levi has got some magic :P, though its search speed is comparatively slow next to elasticlunr and search-index. For data of size 32 MB it takes approximately 500-700 milliseconds per search, which is still acceptable according to @oliversauter (under 1 second).
Levi's magic is in its index compression. For a data size of 32 MB, its DB size is 4.2 MB. Yes, you read that correctly: 4.2 MB, roughly 8 times less than the data size.
It also stores metadata every time a search is made, but the metadata size is 2 KB for our given dataset, compared to 2 MB in search-index. How does Levi compress the index so much? Well, @oliversauter, it uses the n-grams and cosine-distance approach that I mentioned earlier to compress the data, and it also maps the data not present in the n-grams (which I was not clear on how to achieve earlier) and provides excellent support for full-text search.
A point to note: if search-index is used as the search library, it needs additional tokenization preprocessing; that is, for a search like "eminem is awesome rapper" you need to pass "eminem", "awesome", "rapper" into the pipeline, which takes time and makes Levi's performance comparable to search-index's.
One more salient feature: Levi's DB takes comparatively much less time than search-index to initially load the data into the index file required for setup.

FullProof
All the above analyses were done in a Node.js application, but fullproof does not support server-side applications; it only supports browsers:
reyesr/fullproof#1
reyesr/fullproof#36
The feature above was requested once in 2012 and again in 2015, with no positive response, and in the more recent one no response at all. It's clear that server-side support is not coming any time soon, so when WorldBrain ports to the server side this search API won't be supported and we would have to port to something else; therefore I didn't proceed with testing this one, as I am looking for a long-term solution.
Also, the docs are not very clear with fullproof: no running code demonstrated, etc.

From the above analysis I would pick Levi.js as the best one for our case, but that's just me; the final choice is made by the experts here.

@arpitgogia, regarding access to hard-disk data via Chrome extensions: Chrome doesn't allow an extension to access any of the hard disk's contents, but does allow it to store its own data (in our case IndexedDB) in the local space provided to it. In short, it can store its own data in the space provided to it, but it can't access anyone else's data.
http://stackoverflow.com/questions/5364062/how-can-i-save-information-locally-in-my-chrome-extension
https://superuser.com/questions/507536/where-does-google-chrome-save-localstorage-from-extensions
https://productforums.google.com/forum/#!topic/chrome/6EVtjeaWObs/discussion%5B1-25%5D

@Treora Here are my repos:
https://github.com/RajPratim21/search_index
https://github.com/RajPratim21/levi-test
https://github.com/RajPratim21/elasticlunr_test

Here is a JSON data source of size 300 MB and 400 MB if someone wants to do some real tests:
http://times.cs.uiuc.edu/~wang296/Data/

This could also be useful; I haven't tested this one though:
http://text-analytics101.rxnlp.com/2011/07/user-review-datasets_20.html

Sorry to make this long....

Visualization
Blue for Levi, red for search-index, green for elasticlunr.
[plot: search time per query for the three libraries]

arpitgogia (Collaborator) commented Apr 1, 2017

@RajPratim21 Such a detailed analysis deserved a long post 😄 . You've done some amazing work 👍 💯
Small question: what are the X and Y axes in the graph?
@oliversauter @Treora, having concluded that Levi is a good combination of speed and low memory consumption, can we help Levi's performance by modifying our own data such that it becomes more easily searchable?

RajPratim21 (Contributor) commented Apr 1, 2017

@arpitgogia The y-axis is time in milliseconds and the x-axis is the different search queries, which are labelled with the integers 1, 2, 3, ...; in the file they are labeled as follows:
datafile.txt

RajPratim21 (Contributor) commented Apr 1, 2017

@arpitgogia @Treora @oliversauter Levi provides support for searching through specific fields, which gives better results. For example, suppose our doc contains two fields, "title" and "text contained in the doc". If we want full-text search, we may only be interested in the "text inside docs" field and not in "title"; if that's the case, we can boost the "text" field more, which results in better performance, or at least more accurate results. We could also give the user a choice of which fields to search via some selection list next to the search bar; for example, the user could choose to search through titles, or through texts, etc. I recently tested that Levi provides faster search results when searching through individual fields via query boosting: https://www.npmjs.com/package/levi#searchstreamquery-options
Levi also provides query-expansion functionality for search: I can search for "lorem ips" and it can return results for "lorem ipsum", "lorem ipso", etc.; a maximum of 10 expansions are done.
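(A rough sketch of how that looks, based on the API described on the npm page linked above; the option values, field names, and document are illustrative:)

```js
const levi = require('levi');

// Index with the default text-processing pipeline (tokenizer, stemmer,
// stop-word filter), persisted in LevelDB/IndexedDB under 'db'.
const lv = levi('db')
  .use(levi.tokenizer())
  .use(levi.stemmer())
  .use(levi.stopword());

lv.put('doc1', { title: 'Lorem Ipsum', text: 'lorem ipsum dolor sit amet' }, () => {
  lv.searchStream('lorem ips', {
    fields: { text: 10, '*': 1 }, // boost the "text" field over all others
    expansions: 10                // expand "ips" to at most 10 completions
  }).toArray(results => console.log(results));
});
```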

oliversauter (Collaborator, Author) commented Apr 1, 2017

@RajPratim21 Yes indeed, this analysis deserves such a long post. :) Thanks for that grand work! 🎉

Sounds like Levi could indeed be a candidate.

it also maps the data not present in the n-grams

This means we can still have full-text search for the words that are not in the n-grams?

Levi also provides query-expansion functionality for search

Aw sweet, that was not available in the current search implementation (of Research-Engine) :)

arpitgogia (Collaborator) commented Apr 1, 2017

Levi provides faster search results when searching through individual fields via query boosting

Does this mean that making multiple queries on different fields is faster than making one query on multiple fields?

RajPratim21 (Contributor) commented Apr 1, 2017

@oliversauter Yes, we can have full-text search for every word.
@arpitgogia I didn't mean exactly that, and I can't comment on it as I haven't tested it that way; but what I have tested is that making one query on one field is faster than making one query on multiple fields.

Treora (Collaborator) commented Apr 21, 2017

@RajPratim21:
Sorry for my slow reply. I was about to reply much earlier, but got lost in doing some further research. Thanks again for the comparison! It is really nice to get a quantitative impression of the performance of each.

A few thoughts:

  • I don't know how much a comparison run in Node.js tells us about the performance in browsers, as there it will use a different database underneath. It would be worth running the tests in the browser.
  • I wonder why you conclude Levi is faster, while in the graphs you plot its line is mostly below that of search-index.

To get a better feel for things, I also tried testing Levi in the browser, using 2k Wikipedia articles as the data set (the Q/A texts seemed somewhat short). You already mentioned it was slow during initialisation (= indexing, I assume). It appears to take almost a second per article, which sounds way too long to me. I have not investigated the bottleneck. Code is here. I would like to also add search-index to this test (help welcome).

By the way, you say about Levi that:

it uses the n-grams and cosine-distance approach that I mentioned earlier to compress the data

I read the source code (which is actually pretty short), but did not notice any n-gram approach; is it advertised like that somewhere? If I am not mistaken, it just computes the usual cosine similarity on the tf-idf vectors of the text tokens (words).
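(For reference, that scoring scheme in a nutshell; a toy illustration, not Levi's actual code:)

```js
// Toy tf-idf cosine scoring (not Levi's actual code).
// tfidf(term, doc) = tf(term, doc) * log(N / df(term)); a document's score
// for a query is the cosine of the angle between their tf-idf vectors.
function cosineSimilarity(vecA, vecB) {
  let dot = 0, normA = 0, normB = 0;
  const terms = new Set([...Object.keys(vecA), ...Object.keys(vecB)]);
  for (const term of terms) {
    const a = vecA[term] || 0;
    const b = vecB[term] || 0;
    dot += a * b;
    normA += a * a;
    normB += b * b;
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB) || 1);
}
```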

arpitgogia (Collaborator) commented Apr 24, 2017

I wonder why you conclude Levi is faster, while in the graphs you plot its line is mostly below that of search-index.

@Treora the index size of Levi is also 1/8th the size of the data.
I think at some point there has to be a trade-off between space and performance.

Treora (Collaborator) commented Apr 24, 2017

I wonder why you conclude Levi is faster, while in the graphs you plot its line is mostly below that of search-index.

@Treora the index size of Levi is also 1/8th the size of the data.
I think at some point there has to be a trade-off between space and performance.

OK, maybe that is what was meant by "levi beats everyone". I wonder what causes the difference between the two (and whether it is similar when run in the browser).

arpitgogia (Collaborator) commented Apr 25, 2017

I wonder if we can use two indices: a small index of about 100 MB made using elasticlunr (the fastest according to the above graph), and another index for older documents using Levi. The latter wouldn't be loaded into RAM, but rather called on demand whenever the Search page is used, while the former index can be used for the omnibar search.

RajPratim21 (Contributor) commented Apr 28, 2017

@Treora I was unavailable for some time due to my exams. I believe @arpitgogia has answered why I claimed it to be the best (it's based on the comparative search performance and time); in our case the major problem was with space. Regarding the n-gram approach: I read about it only in their documentation, as advertised by them. When they implement tf-idf they tokenize the words into unigrams only, that is, single words, removing the stop words and other fancy stuff; that's a basic n-gram approach where n=1, and they may not have gone for higher n.
Finally, I will try my hand at the things you mentioned.

Treora (Collaborator) commented Jun 2, 2017

Current status update: we decided to collaborate on creating a new text search engine, more focussed on performance than any of the ones we know of, built for IndexedDB specifically. This is being worked on by @bwbroersma as his GSoC project for WorldBrain.

As a quick stop-gap I might just quit using pouchdb-quick-search and do search simply by running a plain stupid literal word matcher on all pages. Bad approach with mediocre results, but much faster, at least when the number of documents is small.

Treora (Collaborator) commented Jun 21, 2017

The quick plain stupid word filter solution was implemented in #105, thus getting rid of pouchdb-quick-search. It's not beautiful, but it works for now. As said, a better search engine is in the works.
Closing this issue.

gpakosz commented Jul 27, 2017

@RajPratim21 Hello, I landed here googling for "fusejs elasticlunr". I'm curious: is there something in particular that made you decide not to even benchmark Fuse.js in the first place?

RajPratim21 (Contributor) commented Aug 7, 2017

@gpakosz I didn't skip benchmarking Fuse.js for any specific reason; I just didn't get time to go through that one as well, and spent more time studying the ones mentioned above. As you mentioned in your personal email to me, you are using elasticlunr.js for product documentation built by a static site generator. You need to decide how much data you will be handling. Elasticlunr.js beats everyone in terms of speed when memory space is not an issue, i.e. it is perfectly suitable for systems where the whole thing is deployed on a dedicated server/cloud/system, as it does in-memory search: the whole index is loaded into primary memory and search queries are handled from there.
In a nutshell: if you are dealing with a small amount of data on a non-dedicated system, or have a dedicated server/cloud/system even for a large amount of data, no one can beat elasticlunr.js.

oliversauter (Collaborator, Author) commented Aug 8, 2017

Hey guys! For your information, we decided to go with search-index for now, as long as Benjamin is not yet finished with his slow-search library.
The corresponding PR is here: WorldBrain/Memex#69

gpakosz commented Aug 8, 2017

@RajPratim21 Thanks for the follow-up. I'm happy with Elasticlunr.js so far, with index sizes < 2 MB. I'm building the index when generating the static site, which means readers have to perform an HTTP GET to fetch it to their client, but it makes index generation time a non-issue.

@oliversauter is the rationale for going with search-index documented somewhere by chance?

@mangalutsav Hi, reading through all the comments again: did you evaluate Fuse.js in the end?

oliversauter (Collaborator, Author) commented Aug 8, 2017

@gpakosz No, but the rationale is fairly quick to state:
it's the only search library that does not run entirely in memory and is still under active development.
We were looking for one that runs on IndexedDB/LevelDB so we can handle large amounts of data in the browser (1 GB of base data).
