Support for > 1 MB of data #54

Closed
ghost opened this issue Dec 30, 2010 · 3 comments

Comments


ghost commented Dec 30, 2010

I'd like to see support for data values bigger than 1 MB; memcached can already be configured to allow this (using the "-I" option). I propose the following two alternative patches, which add the support to Dalli. In the first, the limit is set by the user; in the second, it is determined at runtime. Both were tested with memcached 1.4.4.

Set by user:

--- lib/dalli/server.rb.orig        2010-12-30 17:15:42.000000000 +0100
+++ lib/dalli/server.rb     2010-12-30 17:27:01.000000000 +0100
@@ -17,7 +17,9 @@
       # times a socket operation may fail before considering the server dead
       :socket_max_failures => 2,
       # amount of time to sleep between retries when a failure occurs
-      :socket_failure_delay => 0.01
+      :socket_failure_delay => 0.01,
+      # maximum size of a data value in bytes (= size of a memcached slab page; default is 1 MB, can be overridden with "memcached -I <size>")
+      :value_max_bytes => 1024 * 1024
     }

     def initialize(attribs, options = {})
@@ -126,8 +128,6 @@
       Thread.current[:dalli_multi]
     end

-    ONE_MB = 1024 * 1024
-
     def get(key)
       req = [REQUEST, OPCODES[:get], key.bytesize, 0, 0, 0, key.bytesize, 0, 0, key].pack(FORMAT[:get])
       write(req)
@@ -255,7 +255,7 @@
         value = Zlib::Deflate.deflate(value)
         compressed = true
       end
-      raise Dalli::DalliError, "Value too large, memcached can only store 1MB of data per key" if value.bytesize > ONE_MB
+      raise Dalli::DalliError, "Value too large, memcached can only store #{@options[:value_max_bytes]} bytes of data per key" if value.bytesize > @options[:value_max_bytes]
       flags = 0
       flags |= FLAG_COMPRESSED if compressed
       flags |= FLAG_MARSHALLED if marshalled

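A hedged usage sketch for the first variant: the new :value_max_bytes option would be passed through the ordinary Dalli options hash (which Dalli::Client hands on to each Dalli::Server, the same way the existing :socket_* defaults are overridden), and it has to match the page size the server was actually started with. The option name and the 2 MB figure below come from the patch and this example only, not from a released Dalli API.

require 'dalli'

# assumes the server was started with: memcached -I 2m
cache = Dalli::Client.new('localhost:11211', :value_max_bytes => 2 * 1024 * 1024)
cache.set('big-key', 'x' * 1_500_000)   # ~1.5 MB value, below the raised limit
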
Runtime check:

--- lib/dalli/server.rb.orig        2010-12-30 17:15:42.000000000 +0100
+++ lib/dalli/server.rb     2010-12-30 19:29:46.000000000 +0100
@@ -126,8 +126,6 @@
       Thread.current[:dalli_multi]
     end

-    ONE_MB = 1024 * 1024
-
     def get(key)
       req = [REQUEST, OPCODES[:get], key.bytesize, 0, 0, 0, key.bytesize, 0, 0, key].pack(FORMAT[:get])
       write(req)
@@ -255,7 +253,7 @@
         value = Zlib::Deflate.deflate(value)
         compressed = true
       end
-      raise Dalli::DalliError, "Value too large, memcached can only store 1MB of data per key" if value.bytesize > ONE_MB
+      raise Dalli::DalliError, "Value too large, memcached can only store #{@value_max_bytes} bytes of data per key" if value.bytesize > @value_max_bytes
       flags = 0
       flags |= FLAG_COMPRESSED if compressed
       flags |= FLAG_MARSHALLED if marshalled
@@ -366,6 +364,9 @@
         @version = version # trigger actual connect
         sasl_authentication if Dalli::Server.need_auth?
         up!
+        # maximum size of a data value in bytes (= size of a memcached slab page; default is 1 MB, can be overridden with "memcached -I <size>")
+        item_size_max = stats('settings')['item_size_max']
+        @value_max_bytes = item_size_max ? item_size_max.to_i : (1024 * 1024)
       rescue Dalli::DalliError # SASL auth failure
         raise
       rescue SystemCallError, Timeout::Error, EOFError
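
For reference, here is a minimal sketch (plain Ruby sockets, independent of Dalli) of the lookup the second patch performs during connect: ask memcached for its "settings" stats and read item_size_max, falling back to the 1 MB default when the entry is missing. Host, port and variable names are illustrative only.

require 'socket'

sock = TCPSocket.new('localhost', 11211)
sock.write("stats settings\r\n")
item_size_max = nil
# each reply line looks like "STAT item_size_max 1048576"; the list ends with "END"
while (line = sock.gets.strip) != 'END'
  name, value = line.split[1, 2]
  item_size_max = value.to_i if name == 'item_size_max'
end
sock.close

value_max_bytes = item_size_max || (1024 * 1024)
puts value_max_bytes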

mperham commented Dec 31, 2010

It's doubtful I will accept this, as caching values this large has always struck me as a design smell; your application design is probably questionable if you are doing it.

On a side note, you can enable compression to get larger values, assuming your values are compressible.
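
Just to illustrate that suggestion: the deflate call below is the same one lib/dalli/server.rb applies when compression is enabled (the exact client option for turning compression on depends on the Dalli version, so check its README). A highly compressible 2 MB value easily fits under the default 1 MB limit once deflated.

require 'zlib'

value = 'a' * (2 * 1024 * 1024)          # 2 MB of highly compressible data
deflated = Zlib::Deflate.deflate(value)
puts deflated.bytesize                   # a few KB, well under the 1 MB default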


ghost commented Dec 31, 2010

OK, you certainly have the right to make your software "opinionated", and I respect your choice. To clarify my position: I am used to working with software that "... was not designed to stop its users from doing stupid things, as that would also stop them from doing clever things".

Note also that with the second patch, Dalli automatically queries the server for its actual limit. In the case where the user has intentionally configured memcached to support larger values, unpatched Dalli will incorrectly report that memcached is limited to 1 MB, which is clearly a bug in my opinion.


mperham commented Dec 31, 2010

Support greater than 1MB values, closed by 55def36

This issue was closed.