
questions #2

Open
guypaskar opened this issue Jun 14, 2015 · 16 comments

@guypaskar

Hi, I like this project a lot. A couple of questions:

  1. Where do you save the data? In the main process memory or on disk?
  2. Is it possible to work with it without cluster, i.e. only with the main process?

Thanks,
Guy

@PaquitoSoft (Owner)

Hi @guypaskar

  1. Memored stores data in the memory of the main process (see the sketch below).
  2. There is no point in using this library in non-clustered environments; it was born to help in that context. If you need an in-memory cache for a single-process application, I'd suggest Isaac's https://github.com/isaacs/node-lru-cache.
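
Roughly, usage looks like this (a minimal sketch; check the README for the exact signatures):

```js
// Minimal sketch: the worker stores and reads through memored, but the data
// itself lives in the master process' memory.
var cluster = require('cluster');
var memored = require('memored');

if (cluster.isMaster) {
  cluster.fork();
} else {
  memored.store('user:1', { name: 'Guy' }, 60000, function() {
    memored.read('user:1', function(err, value) {
      console.log('Value read from the master process:', value);
      process.exit(0);
    });
  });
}
```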

Please, let me know if these answers are ok for you.

Thanks!

@guypaskar (Author)

@PaquitoSoft, thank you for the quick answer.

  1. So in fact we only have the memory of one Node process and we are bound to it, even if we run 10 processes, right?

  2. My app can be configured to run with or without cluster; this is why I wanted simple support for it, just for compatibility.

@PaquitoSoft (Owner)

Hi @guypaskar

  1. Yes. There's only the main process' memory available (I think by default it's something near 1.4 GB).

  2. In my experience, it's a good idea to always run in cluster mode (if you're considering that idea) even if you only run one process, because you get the benefit of using domains to better handle unexpected errors (sketch below). I don't think adding support for a non-cluster environment makes sense for this module.
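
To make point 2 concrete, this is the kind of pattern I mean (a rough sketch following the Node.js domain docs, not something specific to this module):

```js
// Even with a single worker, cluster + domains let you answer the failing
// request and then recycle the crashed process.
var cluster = require('cluster');
var domain = require('domain');
var http = require('http');

if (cluster.isMaster) {
  cluster.fork();
  cluster.on('exit', function() {
    cluster.fork(); // replace a crashed worker
  });
} else {
  http.createServer(function(req, res) {
    var d = domain.create();
    d.on('error', function(err) {
      console.error('Unexpected error:', err.stack);
      res.statusCode = 500;
      res.end('Internal error');
      cluster.worker.disconnect(); // let the master fork a fresh worker
    });
    d.run(function() {
      // ...handle the request...
      res.end('ok');
    });
  }).listen(3000);
}
```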

Thanks.

@guypaskar (Author)

Is there any LRU mechanism to prevent the memory from growing too high?

Is it possible to do multi-insert or multi-delete? That would be very helpful!

@PaquitoSoft (Owner)

Hi @guypaskar

Currently, as there is no memory limit, there's no eviction policy implemented.
I think this is something this module must have, but it implies some extra checks that may lower its performance. I definitely need to think about this problem to find the best solution (maybe follow Isaac's decisions).
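
For reference, this is roughly the behaviour Isaac's module gives you with a size bound (its API at the time of writing; not something memored supports yet):

```js
// Sketch of the lru-cache approach: a hard entry limit plus a max age, with
// least-recently-used entries evicted when the limit is reached.
var LRU = require('lru-cache');

var cache = LRU({
  max: 500,              // keep at most 500 entries
  maxAge: 1000 * 60 * 5  // and drop anything older than 5 minutes
});

cache.set('user:1', { name: 'Guy' });
console.log(cache.get('user:1')); // -> { name: 'Guy' }, and marks it recently used
```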

On the other hand, adding multi-insert/delete should be much easier to implement.
I will do it as soon as I find some time to work on it.

Thanks.

@guypaskar (Author)

Thanks @PaquitoSoft

It seems like after a while reads/writes to memored get really slow. At some points the callback returns after 40 seconds (saving an SQL query result of ~9000 rows).

Also, it seems like the memory keeps increasing as if there is a memory leak. I store the data like this:
`memored.store(key, { stream: stream }, 180000, function() {})`, so it's supposed to be cleared...

Any ideas on why these 2 issues are happening?

Guy

@PaquitoSoft (Owner)

Hi @guypaskar

Regarding the first issue, I haven't tested this module with lots of keys, and maybe some checks are not well optimized when working on a large dataset.

The second one may be caused by the rate of new keys. I mean, if you insert new keys faster than they get removed (TTL), memory will increase.

As soon as I have time to work on this project again, I will definitely try to find solutions to the problems you pointed out.

@PaquitoSoft (Owner)

Hi @guypaskar,

I've just published a new version (v1.1.0) with support for multi-insert/read/remove operations.
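
If memory serves, the new operations are exposed as multiStore/multiRead/multiRemove; treat the names and callback shapes below as an assumption and check the README, but usage looks roughly like this:

```js
// Hypothetical sketch of the v1.1.0 multi operations; verify the exact API
// against the README before using it.
memored.multiStore({ 'user:1': { name: 'Guy' }, 'user:2': { name: 'Ana' } }, 60000, function() {
  memored.multiRead(['user:1', 'user:2'], function(err, values) {
    console.log(values);
    memored.multiRemove(['user:1', 'user:2'], function() {
      console.log('Both entries removed');
    });
  });
});
```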

I'll create a separate issue for the cache size limit requirement.

I'm leaving this issue open because I need to take a look at what you said about memored being slow in some situations.

Cheers.

@nicola-spb

Hi, @PaquitoSoft

Can I use your module across several clusters (shared memory on several machines)? I want to have a common memored.

@PaquitoSoft (Owner)

Hi @nicola-spb,

I'm not sure if I get your question, but I think the answer is no.

If you're trying to share the same memory among different applications running in cluster mode, that's not possible with this module.

Cheers.

@PaquitoSoft (Owner)

@guypaskar, regarding the memory leak issue, are you setting up memored to run the cache cleanup? https://github.com/PaquitoSoft/memored#invalidation-management

If you do not specify the interval for the cache cleanup, cache entries will not be deleted unless you try to read them and they are expired.
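
Enabling it is a one-liner in the master process (option name as in that README section; a minimal sketch):

```js
var cluster = require('cluster');
var memored = require('memored');

if (cluster.isMaster) {
  // Purge expired entries every 10 seconds instead of only discarding them
  // lazily when an expired key is read.
  memored.setup({ purgeInterval: 10000 });
  cluster.fork();
}
```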

@crostagnol

Hi,
What if the main process fails/crashes? Is all data stored in memored lost, or is there a way to retrieve it when the main process starts again?

Thanks.

@PaquitoSoft (Owner)

Hi @crostagnol

Yes, if the main process fails then all the data is lost.
This is an in-memory cache which stores all the data in the main process.

Regards.

@PaquitoSoft (Owner)

@guypaskar, I committed a new file (/demo/test-load.js) with an example of intensive use of this module.
In the example I launched up to 25 workers for the same master, writing and reading cache entries, both with purge enabled and disabled.
Even with thousands of entries in the cache under this high load, the average read time is below 2 milliseconds.
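
If you want to check the numbers in your own app, a rough way to time a single read from a worker (hypothetical key; a sketch, not part of the demo file):

```js
// Inside a worker: measure how long one cache read takes.
var memored = require('memored');

var start = Date.now();
memored.read('some-key', function(err, value) {
  console.log('read took', Date.now() - start, 'ms');
});
```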

You can play with that file by tweaking the constants at the top of it.

It would be great if you could set up a repo with an example of your read delays so I can verify your scenario.

Thanks.

@inpras

inpras commented Mar 12, 2017

How do I propagate value changes (for example, onChange) across interested workers? Can you give some hints, please?

@PaquitoSoft (Owner)

PaquitoSoft commented Mar 13, 2017

Hello @inpras

If you mean being able to know when a specific cache value has changed, that's not supported in this library. I haven't seen that feature in other caching libraries either.
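
That said, if all you need today is a change notification between workers, plain cluster messaging can do it outside this library; something along these lines (a sketch, nothing memored provides):

```js
// Sketch of a manual "onChange" broadcast using plain cluster IPC: a worker
// notifies the master, and the master forwards the event to the other workers.
var cluster = require('cluster');

if (cluster.isMaster) {
  cluster.fork();
  cluster.fork();

  Object.keys(cluster.workers).forEach(function(id) {
    cluster.workers[id].on('message', function(message) {
      if (message && message.type === 'cache-changed') {
        Object.keys(cluster.workers).forEach(function(otherId) {
          if (otherId !== id) {
            cluster.workers[otherId].send(message);
          }
        });
      }
    });
  });
} else {
  // After updating a cache entry, announce the change...
  process.send({ type: 'cache-changed', key: 'user:1' });

  // ...and react to changes announced by other workers.
  process.on('message', function(message) {
    if (message && message.type === 'cache-changed') {
      console.log('Cache key changed in another worker:', message.key);
    }
  });
}
```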

You can open another issue requesting that feature and I will look into it when I have some time available.

Thanks.
