
Query time slower than gensim? #10

Closed · DomHudson opened this issue May 18, 2018 · 4 comments
Labels: optimization, question (Further information is requested)

@DomHudson

Hi!

I really hope this question doesn't come across as critical - I think this project is a great idea and I'm really loving the speed at which it can lazy-load models.

I had one question - loading the Google News vectors is massively quicker than with gensim, however I'm finding that querying is significantly slower. Is this to be expected? It's quite possible that this is a trade-off against loading time, but I want to confirm that there's nothing weird going on in my environment.

Code I'm using for testing:

import json
import timeit

ITERATIONS = 500

# Tokens are loaded from disk.
# tokens = ...
tokens_json = json.dumps(tokens)  # serialised so the list can be embedded in the setup strings

mag = timeit.timeit(
    '''
for token in tokens:
    try:
        getVector(token)
    except Exception:
        pass
''',
    setup='''
from pymagnitude import Magnitude
vec = Magnitude('/home/dom/Code/ner/ner/data/GoogleNews-vectors-negative300.magnitude')
getVector = vec.query
tokens = {}
'''.format(tokens_json),
    number=ITERATIONS,
)

gensim = timeit.timeit(
    '''
for token in tokens:
    try:
        getVector(token)
    except Exception:
        pass
''',
    setup='''
from gensim.models import KeyedVectors
vec = KeyedVectors.load('/home/dom/Code/ner/ner/data/GoogleNews-vectors-negative300.w2v', mmap='r')
getVector = vec.__getitem__
tokens = {}
'''.format(tokens_json),
    number=ITERATIONS,
)

print('Gensim is {}x faster'.format(mag / gensim))

With the code above, I get gensim being approximately 5x faster when the vectors are memory-mapped, and over 13x faster when they are not.

@AjayP13
Contributor

AjayP13 commented May 18, 2018

Hey Dom,

The answer is complicated, but I can give you a bit more information on how Magnitude works and on the caching and loading options available, which may help you match its speed to your use case. Out of the box, Magnitude's settings are configured to make local development and iteration on word-vector models reasonably fast, while approaching in-memory speed for production server deployments. The trade-off is that we eliminate the initial load time to make iterating much faster, at the cost of slightly slower initial queries; over many repeated queries, we want Magnitude to approach Gensim so that it can still be used in production.

This means that on benchmarks that don't simulate repeated lookups, Magnitude may appear slow, because those benchmarks don't fully utilize the cache that would be warm in an average production scenario. Moreover, background threads run when Magnitude boots up and begin eagerly pre-fetching data into the caches. This is useful in production environments, where data gets pre-fetched in the background, but it will also negatively impact a benchmark score.

How it works

Magnitude ultimately works by using a SQLite index to look up a token and retrieve its vector. It lets SQLite and the OS manage caching data into memory, which should be quite well optimized.
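As an illustration, the lookup is conceptually just a single-row SELECT against that index. A minimal sketch, assuming the .magnitude file can be opened directly as a standard SQLite database; the `magnitude` table and `key` column come from the `_vector_for_key` snippet later in this thread:

import sqlite3

# Open the .magnitude file read-only as a SQLite database.
conn = sqlite3.connect('file:/path/to/w2v.magnitude?mode=ro', uri=True)

# Fetch the row for one token: the first column is the key, the
# remaining columns hold the vector components.
row = conn.execute(
    "SELECT * FROM `magnitude` WHERE key = ? LIMIT 1;",
    ('the',),
).fetchone()

if row is not None:
    key, components = row[0], row[1:]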

However, there is one additional layer of caching introduced by Magnitude: query calls are LRU-cached (the size of the LRU cache can be configured with the lazy_loading constructor argument and is unbounded by default). This means that, over time, Magnitude gets faster as the same words are looked up over and over again. We found this to work really well in practice due to Zipf's law. Even though the first lookup of the word "the" might be a little slow compared to Gensim's mmap method, that cost is negligible, since every subsequent lookup of "the" will be fast because it hits the in-memory LRU cache.
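The effect is the same as wrapping the single-row lookup sketched above in a memoizing cache. A minimal illustration using the standard library and reusing `conn` from the previous sketch (Magnitude itself uses a vendored repoze.lru, visible in the profile output later in this thread):

from functools import lru_cache

@lru_cache(maxsize=None)  # unbounded, like Magnitude's default LRU
def cached_query(token):
    # Only the first call per token pays the SQLite cost; repeated
    # lookups of frequent words like "the" hit the in-memory cache.
    return conn.execute(
        "SELECT * FROM `magnitude` WHERE key = ? LIMIT 1;",
        (token,),
    ).fetchone()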

Different configurations of Magnitude

Here are a few things you can try:

  1. Turn off the background threads that eagerly load the LRU caches:
vectors = Magnitude('/path/to/w2v.magnitude', eager=False)
  2. Turn on blocking (this turns off lazy-loading and requires you to wait a little before you can perform queries, but makes the queries themselves faster):
vectors = Magnitude('/path/to/w2v.magnitude', blocking=True)
  3. Use the raw NumPy vectors mmap, as sketched below:
vectors.get_vectors_mmap()

This requires knowing the index of the word you want to look up (you may need a separate data structure for that). It also takes some time to build this mmap, so you will have to wait. However, it is cached between different runs of Python in your computer/server's /tmp/ directory, so you only have to wait once.
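A minimal sketch of that workflow; the token-to-index dict is a hypothetical side structure, and the assumption that the model's iteration order matches the mmap's row order should be verified before relying on it:

from pymagnitude import Magnitude

vectors = Magnitude('/path/to/w2v.magnitude')

# Hypothetical side structure: token -> row index, built by iterating
# over (key, vector) pairs. Assumes iteration order matches the mmap.
key_to_index = {key: i for i, (key, _) in enumerate(vectors)}

# Slow to build the first time, but cached in /tmp/ between runs.
all_vectors = vectors.get_vectors_mmap()

vec = all_vectors[key_to_index['the']]  # raw NumPy row for the token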

Recommendation

Overall, I suspect that if you want or need to eke out every bit of performance in your application, there may well be faster approaches than what we use in Magnitude, but we try to make Magnitude the simplest way to make both development and production use of word vectors reasonably fast, without needing one configuration/library for development and another configuration/library for production. We also add a ton of nice-to-have features like misspelling lookups, OOV lookups, approximate lookups, multi-threading support, etc. Whether it makes sense to trade these off really depends on your application.

If your usage turns up any ways of making Magnitude faster that don't affect its current performance, feel free to submit a PR!

AjayP13 self-assigned this May 18, 2018
AjayP13 added the question (Further information is requested) and optimization labels May 18, 2018
@DomHudson
Author

Thank you very much for the highly informative response - it is massively appreciated! I'm including the profile data at the bottom of this comment for posterity; it absolutely agrees with your points.

One thing I did spot is the use of fetchall in _vector_for_key.

Altering this method to the following (using fetchone) did result in a minor performance boost, despite the LIMIT statement.

    def _vector_for_key(self, key):
        """Queries the database for a single key."""
        result = self._db().execute(
            """
                SELECT *
                FROM `magnitude`
                WHERE key = ?
                ORDER BY key = ? COLLATE BINARY DESC
                LIMIT 1;""",
            (key, key)).fetchone()
        if result is None or self._key_t(result[0]) != self._key_t(key):
            return None
        else:
            return self._db_result_to_vec(result[1:])
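For anyone who wants to reproduce the difference in isolation, here is a minimal, hypothetical micro-benchmark (it drops the ORDER BY clause for brevity, and timings will vary by machine):

import sqlite3
import timeit

conn = sqlite3.connect('/path/to/w2v.magnitude')
sql = "SELECT * FROM `magnitude` WHERE key = ? LIMIT 1;"

# fetchall materializes a result list even though LIMIT 1 guarantees
# at most one row; fetchone returns that row directly.
t_all = timeit.timeit(lambda: conn.execute(sql, ('the',)).fetchall(), number=10000)
t_one = timeit.timeit(lambda: conn.execute(sql, ('the',)).fetchone(), number=10000)
print('fetchall: {:.3f}s  fetchone: {:.3f}s'.format(t_all, t_one))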

I will also do some further research, and see if I can spot anything else - many thanks.


         8947786 function calls (8945672 primitive calls) in 94.256 seconds

   Ordered by: internal time

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
      793   88.668    0.112   88.668    0.112 {method 'execute' of 'sqlite3.Cursor' objects}
980896/980000    2.783    0.000   94.256    0.000 /home/dom/Code/magnitude/pymagnitude/third_party/repoze/lru/__init__.py:337(cached_wrapper)
   980137    0.855    0.000    0.855    0.000 /home/dom/Code/magnitude/pymagnitude/third_party/repoze/lru/__init__.py:102(get)
  2941929    0.724    0.000    1.062    0.000 /home/dom/Code/magnitude/pymagnitude/third_party/repoze/lru/__init__.py:343(<genexpr>)
  1962414    0.339    0.000    0.339    0.000 {built-in method builtins.isinstance}
      793    0.229    0.000    0.229    0.000 {method 'fetchall' of 'sqlite3.Cursor' objects}
   980896    0.149    0.000    0.149    0.000 /home/dom/Code/magnitude/pymagnitude/third_party/repoze/lru/__init__.py:344(<genexpr>)
     7327    0.122    0.000    0.122    0.000 {method 'uniform' of 'mtrand.RandomState' objects}
   980898    0.102    0.000    0.102    0.000 {method 'items' of 'dict' objects}
      154    0.038    0.000   88.751    0.576 /home/dom/Code/magnitude/pymagnitude/__init__.py:515(_out_of_vocab_vector)

plasticity-admin pushed a commit that referenced this issue May 20, 2018: "…nstead of `fetchall` where applicable"
@AjayP13
Contributor

AjayP13 commented May 21, 2018

@DomHudson The CI had some trouble deploying to PyPI since a dependency broke, but it's all fixed now. The SQLite query optimization is now in v0.1.18. Run pip install pymagnitude -U or pip3 install pymagnitude -U to update.

Thanks for the tip! Feel free to open another issue or PR if there are any other changes you see fit.

@DomHudson
Author

Great, thanks!
