
Comparing changes

Choose two branches to see what’s changed or to start a new pull request.

base repository: petergoldstein/dalli
base: main
head repository: Shopify/dalli
compare: main
Can’t automatically merge. Don’t worry, you can still create the pull request.

Commits on Sep 4, 2023

  1. Bump actions/checkout from 3 to 4

    Bumps [actions/checkout](https://github.com/actions/checkout) from 3 to 4.
    - [Release notes](https://github.com/actions/checkout/releases)
    - [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
    - [Commits](actions/checkout@v3...v4)
    
    ---
    updated-dependencies:
    - dependency-name: actions/checkout
      dependency-type: direct:production
      update-type: version-update:semver-major
    ...
    
    Signed-off-by: dependabot[bot] <support@github.com>
    dependabot[bot] authored Sep 4, 2023

    Verified: this commit was created on GitHub.com and signed with GitHub’s verified signature (the key has expired).
    SHA: 54d1f2b

Commits on Oct 16, 2024

  1. example adding support for arbitrary meta flags and for getting them back as part of the response

    danmayer committed Oct 16, 2024

    Verified: this commit was signed with the committer’s (danmayer, Dan Mayer) verified signature.
    SHA: dfdd8f1
  2. fix rubocop

    danmayer committed Oct 16, 2024

    Verified · SHA: 23d36b9
  3. Verified · SHA: 9929eb7
  4. Verified · SHA: 45aaa77
  5. Verified · SHA: a1840e1

Commits on Oct 17, 2024

  1. fix for Ruby head tests

    danmayer committed Oct 17, 2024

    Verified · SHA: 707a361

Commits on Oct 21, 2024

  1. Merge pull request #5 from Shopify/add_initial_benchmark

    add example benchmark showing Dalli doesn't optimally handle large strings
    danmayer authored Oct 21, 2024

    Verified · SHA: d545443

Commits on Oct 22, 2024

  1. Verified · SHA: dcbc756

Commits on Oct 24, 2024

  1. alternative approach to multi commands that doesn't require an extra MN command; instead the last call isn't quiet, so one can ensure the buffer is clean. The improvement is small, but on a remote ring it is one less network trip per hundred keys.

    danmayer committed Oct 24, 2024

    Verified · SHA: 6271dfa
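The quiet-flag technique this commit describes can be sketched as follows. This is a hypothetical helper, not Dalli's internal API: every pipelined meta-get except the last carries the `q` (quiet) flag, so the guaranteed reply to the final non-quiet command marks the end of the response stream without an extra `mn` (meta no-op) round trip.

```ruby
# Build a batch of memcached meta-get commands where all but the
# last are quiet. Quiet commands only reply on a hit, so on their
# own they give no end-of-stream marker; the final non-quiet 'mg'
# always gets a response, letting the client know the buffer is drained.
def build_pipelined_mg(keys)
  keys.each_with_index.map do |key, i|
    quiet = i == keys.length - 1 ? '' : ' q'
    "mg #{key} v f k#{quiet}\r\n"
  end
end

commands = build_pipelined_mg(%w[a b c])
# commands.last carries no 'q' flag, so memcached will always answer it.
```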
  2. Verified · SHA: c780b56
  3. additional benchmarks

    danmayer committed Oct 24, 2024

    Verified · SHA: 1cf3ad2
  4. rubocop

    danmayer committed Oct 24, 2024

    Verified · SHA: 45c72ce
  5. ruby only stackprof

    danmayer committed Oct 24, 2024

    Verified · SHA: b3eb73f
  6. improved meta_set

    danmayer committed Oct 24, 2024

    Verified · SHA: 6ed8d63

Commits on Oct 25, 2024

  1. perf fix for get_multi

    danmayer committed Oct 25, 2024

    Verified · SHA: 0eb7c2f
  2. fix rubocop

    danmayer committed Oct 25, 2024

    Verified · SHA: af716a1
  3. Verified · SHA: a75d37d

Commits on Oct 28, 2024

  1. Merge pull request #4 from Shopify/example_meta_get_with_meta_flags

    example meta get with optional flags
    danmayer authored Oct 28, 2024

    Verified · SHA: 3411cca

Commits on Oct 29, 2024

  1. Unverified: this user has not yet uploaded their public signing key.
    SHA: cafc2e6
  2. fix get multi bug

    danmayer committed Oct 29, 2024

    Verified · SHA: ec1d47f
  3. Verified · SHA: 21d8c86
  4. handle network errors

    danmayer committed Oct 29, 2024

    Verified · SHA: 47fd792

Commits on Nov 3, 2024

  1. fix rubocop and tests

    danmayer committed Nov 3, 2024

    Verified · SHA: edfbd1e
  2. cleanup

    danmayer committed Nov 3, 2024

    Verified · SHA: f4f95f2
  3. Verified · SHA: df841fd
  4. rubocop

    danmayer committed Nov 3, 2024

    Verified · SHA: b6935b9
  5. fix tests

    danmayer committed Nov 3, 2024

    Verified · SHA: ffcbceb
  6. remove ruby 2.6

    danmayer committed Nov 3, 2024

    Verified · SHA: 270cd0b
  7. more bundler fixes

    danmayer committed Nov 3, 2024

    Verified · SHA: a09e408
  8. ignore vendored files

    danmayer committed Nov 3, 2024

    Verified · SHA: b4e6849
  9. start toxiproxy

    danmayer committed Nov 3, 2024

    Verified · SHA: 67855cd

Commits on Nov 4, 2024

  1. remove older JRuby

    danmayer committed Nov 4, 2024

    Verified · SHA: edd781c
  2. skip network test on binary

    danmayer committed Nov 4, 2024

    Verified · SHA: d05db6c
  3. Update lib/dalli/client.rb

    Co-authored-by: Mark Rattle <113051003+mrattle@users.noreply.github.com>
    danmayer and mrattle authored Nov 4, 2024

    Verified · SHA: 082bcfe

Commits on Nov 5, 2024

  1. Merge pull request #6 from Shopify/experimental_multi_set

    optimize multi_set for faster command and single node ring
    danmayer authored Nov 5, 2024

    Verified · SHA: 42b7792

Commits on Nov 7, 2024

  1. working state

    nickamorim committed Nov 7, 2024
    SHA: 0e4fe56
  2. SHA: eda56b8
  3. SHA: 7337986

Commits on Nov 11, 2024

  1. SHA: 83495d9

Commits on Nov 12, 2024

  1. Minor cleanups in Dalli

    grcooper committed Nov 12, 2024

    Unverified · SHA: 7b44603
  2. Merge pull request #10 from Shopify/grcooper/minor-fixes-and-clarifications

    Minor cleanups in Dalli
    grcooper authored Nov 12, 2024

    Verified · SHA: 8a60218
  3. SHA: 4236c1c
  4. SHA: 6e22d31

Commits on Nov 13, 2024

  1. Merge pull request #9 from Shopify/nickamorim/timeouts

    Use IO#timeout for Timeouts
    nickamorim authored Nov 13, 2024

    Verified · SHA: 0840ae8
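The IO#timeout mechanism this PR adopts (available since Ruby 3.2) can be sketched as below. This is an illustrative standalone example, not Dalli's actual wiring: once a per-IO timeout is set, blocking reads raise `IO::TimeoutError` instead of hanging, which replaces hand-rolled `read_nonblock` + `IO.select` loops.

```ruby
require 'socket'

# A socket that will never receive data, with a per-IO timeout set.
server = TCPServer.new('127.0.0.1', 0)
client = TCPSocket.new('127.0.0.1', server.addr[1])
client.timeout = 0.05 # seconds (Ruby 3.2+)

result =
  begin
    client.gets # no data will ever arrive on this socket
  rescue IO::TimeoutError
    :timed_out
  end

client.close
server.close
```

The same timeout applies to writes, so a single setting covers both directions of the connection.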

Commits on Nov 19, 2024

  1. Unverified · SHA: b48104f
  2. Merge pull request #16 from Shopify/grcooper/remove-old-ruby-versions

    Remove support for < ruby 3.2
    grcooper authored Nov 19, 2024

    Verified · SHA: 43fc82d
  3. Update required ruby version

    grcooper committed Nov 19, 2024

    Unverified · SHA: dcc26a1
  4. Fix for new rubocop rules

    grcooper committed Nov 19, 2024

    Unverified · SHA: 0187a50
  5. Merge pull request #17 from Shopify/grcooper/update-required-ruby-version

    Update required ruby version
    grcooper authored Nov 19, 2024

    Verified · SHA: 4d3d9a0
Showing with 2,723 additions and 2,654 deletions.
  1. +26 −0 .github/workflows/benchmarks.yml
  2. +38 −0 .github/workflows/profile.yml
  3. +1 −1 .github/workflows/rubocop.yml
  4. +3 −7 .github/workflows/tests.yml
  5. +1 −0 .gitignore
  6. +7 −1 .rubocop.yml
  7. +1 −1 .standard.yml
  8. +2 −0 CHANGELOG.md
  9. +3 −0 Gemfile
  10. +226 −0 bin/benchmark
  11. +177 −0 bin/profile
  12. +16 −0 bin/start-toxiproxy.sh
  13. +1 −1 dalli.gemspec
  14. +7 −1 lib/dalli.rb
  15. +37 −20 lib/dalli/client.rb
  16. +8 −2 lib/dalli/key_manager.rb
  17. +1 −1 lib/dalli/options.rb
  18. +17 −6 lib/dalli/pipelined_getter.rb
  19. +29 −0 lib/dalli/pipelined_setter.rb
  20. +7 −1 lib/dalli/protocol/base.rb
  21. +0 −173 lib/dalli/protocol/binary.rb
  22. +0 −117 lib/dalli/protocol/binary/request_formatter.rb
  23. +0 −36 lib/dalli/protocol/binary/response_header.rb
  24. +0 −239 lib/dalli/protocol/binary/response_processor.rb
  25. +0 −60 lib/dalli/protocol/binary/sasl_authentication.rb
  26. +21 −3 lib/dalli/protocol/connection_manager.rb
  27. +97 −7 lib/dalli/protocol/meta.rb
  28. +1 −1 lib/dalli/protocol/meta/key_regularizer.rb
  29. +4 −9 lib/dalli/protocol/meta/request_formatter.rb
  30. +48 −5 lib/dalli/protocol/meta/response_processor.rb
  31. +2 −2 lib/dalli/protocol/server_config_parser.rb
  32. +2 −2 lib/dalli/protocol/value_compressor.rb
  33. +4 −1 lib/dalli/protocol/value_marshaller.rb
  34. +17 −3 lib/dalli/protocol/value_serializer.rb
  35. +2 −2 lib/dalli/ring.rb
  36. +2 −2 lib/dalli/server.rb
  37. +32 −33 lib/dalli/socket.rb
  38. +2 −2 lib/rack/session/dalli.rb
  39. +3 −13 scripts/install_memcached.sh
  40. +13 −14 test/benchmark_test.rb
  41. +4 −4 test/helper.rb
  42. +36 −47 test/helpers/memcached.rb
  43. +1 −1 test/integration/test_authentication.rb
  44. +254 −258 test/integration/test_cas.rb
  45. +27 −31 test/integration/test_compressor.rb
  46. +39 −43 test/integration/test_concurrency.rb
  47. +6 −10 test/integration/test_connection_pool.rb
  48. +13 −17 test/integration/test_encoding.rb
  49. +109 −117 test/integration/test_failover.rb
  50. +19 −23 test/integration/test_marshal.rb
  51. +41 −45 test/integration/test_memcached_admin.rb
  52. +68 −72 test/integration/test_namespace_and_key.rb
  53. +228 −254 test/integration/test_network.rb
  54. +382 −271 test/integration/test_operations.rb
  55. +184 −81 test/integration/test_pipelined_get.rb
  56. +57 −0 test/integration/test_pipelined_set.rb
  57. +217 −221 test/integration/test_quiet.rb
  58. +0 −89 test/integration/test_sasl.rb
  59. +15 −19 test/integration/test_serializer.rb
  60. +20 −24 test/integration/test_ttl.rb
  61. +11 −11 test/protocol/meta/test_request_formatter.rb
  62. +0 −111 test/protocol/test_binary.rb
  63. +77 −33 test/protocol/test_value_serializer.rb
  64. +2 −1 test/test_client_options.rb
  65. +32 −32 test/test_rack_session.rb
  66. +10 −22 test/test_ring.rb
  67. +13 −14 test/utils/memcached_manager.rb
  68. +0 −37 test/utils/memcached_mock.rb
26 changes: 26 additions & 0 deletions .github/workflows/benchmarks.yml
Original file line number Diff line number Diff line change
@@ -0,0 +1,26 @@
name: Benchmarks

on: [push, pull_request]

jobs:
build:
runs-on: ubuntu-latest

steps:
- uses: actions/checkout@v4
- name: Install Memcached 1.6.23
working-directory: scripts
env:
MEMCACHED_VERSION: 1.6.23
run: |
chmod +x ./install_memcached.sh
./install_memcached.sh
memcached -d
memcached -d -p 11222
- name: Set up Ruby
uses: ruby/setup-ruby@v1
with:
ruby-version: 3.2
bundler-cache: true # 'bundle install' and cache
- name: Run Benchmarks
run: RUBY_YJIT_ENABLE=1 BENCH_TARGET=all bundle exec bin/benchmark
38 changes: 38 additions & 0 deletions .github/workflows/profile.yml
@@ -0,0 +1,38 @@
name: Profiles

on: [push, pull_request]

jobs:
build:
runs-on: ubuntu-latest

steps:
- uses: actions/checkout@v4
- name: Install Memcached 1.6.23
working-directory: scripts
env:
MEMCACHED_VERSION: 1.6.23
run: |
chmod +x ./install_memcached.sh
./install_memcached.sh
memcached -d
- name: Set up Ruby
uses: ruby/setup-ruby@v1
with:
ruby-version: 3.4
bundler-cache: true # 'bundle install' and cache
- name: Run Profiles
run: RUBY_YJIT_ENABLE=1 BENCH_TARGET=all bundle exec bin/profile
- name: Upload profile results
uses: actions/upload-artifact@v4
with:
name: profile-results
path: |
client_get_profile.json
socket_get_profile.json
client_set_profile.json
socket_set_profile.json
client_get_multi_profile.json
socket_get_multi_profile.json
client_set_multi_profile.json
socket_set_multi_profile.json
2 changes: 1 addition & 1 deletion .github/workflows/rubocop.yml
@@ -11,7 +11,7 @@ jobs:
- name: Set up Ruby
uses: ruby/setup-ruby@v1
with:
ruby-version: 2.6
ruby-version: 3.2
bundler-cache: true # 'bundle install' and cache
- name: Run RuboCop
run: bundle exec rubocop --parallel
10 changes: 3 additions & 7 deletions .github/workflows/tests.yml
@@ -13,13 +13,7 @@ jobs:
- head
- '3.3'
- '3.2'
- '3.1'
- '3.0'
- '2.7'
- '2.6'
- jruby-9.3
- jruby-9.4
memcached-version: ['1.5.22', '1.6.23']
memcached-version: ['1.6.23']

steps:
- uses: actions/checkout@v4
@@ -30,6 +24,8 @@ jobs:
run: |
chmod +x ./install_memcached.sh
./install_memcached.sh
- name: Install and start toxiproxy
run: ./bin/start-toxiproxy.sh
- name: Set up Ruby ${{ matrix.ruby-version }}
uses: ruby/setup-ruby@v1
with:
1 change: 1 addition & 0 deletions .gitignore
@@ -25,6 +25,7 @@ profile.html
## Environment normalisation:
/.bundle/
/lib/bundler/man/
dev.yml

# for a library or gem, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
8 changes: 7 additions & 1 deletion .rubocop.yml
@@ -7,7 +7,10 @@ require:

AllCops:
NewCops: enable
TargetRubyVersion: 2.6
TargetRubyVersion: 3.2
Exclude:
- 'bin/**/*'
- 'vendor/**/*'

Metrics/BlockLength:
Max: 50
@@ -17,3 +20,6 @@ Metrics/BlockLength:
Style/Documentation:
Exclude:
- 'test/**/*'

Metrics/MethodLength:
Max: 20
2 changes: 1 addition & 1 deletion .standard.yml
@@ -1,6 +1,6 @@
fix: false # default: false
parallel: true # default: false
ruby_version: 2.5.1 # default: RUBY_VERSION
ruby_version: 3.3.0 # default: RUBY_VERSION
default_ignores: false # default: true

ignore: # default: []
2 changes: 2 additions & 0 deletions CHANGELOG.md
@@ -5,6 +5,8 @@ Unreleased
==========

- Fix cannot read response data included terminator `\r\n` when use meta protocol (matsubara0507)
- Remove binary protocol support (grcooper)
- Add support for `raw` client option (nherson)

3.2.8
==========
3 changes: 3 additions & 0 deletions Gemfile
@@ -6,6 +6,7 @@ gemspec

group :development, :test do
gem 'connection_pool'
gem 'debug'
gem 'minitest', '~> 5'
gem 'rack', '~> 2.0', '>= 2.2.0'
gem 'rake', '~> 13.0'
@@ -14,6 +15,8 @@ group :development, :test do
gem 'rubocop-performance'
gem 'rubocop-rake'
gem 'simplecov'
gem 'stackprof', platform: :mri
gem 'toxiproxy'
end

group :test do
226 changes: 226 additions & 0 deletions bin/benchmark
@@ -0,0 +1,226 @@
#!/usr/bin/env ruby
# frozen_string_literal: true

# This helps benchmark current performance of Dalli
# as well as compare performance of optimized and non-optimized calls like multi-set vs set
#
# run with:
# bundle exec bin/benchmark
# RUBY_YJIT_ENABLE=1 BENCH_TARGET=get bundle exec bin/benchmark
require 'bundler/inline'
require 'json'

gemfile do
source 'https://rubygems.org'
gem 'benchmark-ips'
gem 'logger'
end

require_relative '../lib/dalli'
require 'benchmark/ips'
require 'monitor'

##
# StringSerializer is a serializer that avoids the overhead of Marshal or JSON.
##
class StringSerializer
def self.dump(value)
value
end

def self.load(value)
value
end
end

dalli_url = ENV['BENCH_CACHE_URL'] || "127.0.0.1:11211"

if dalli_url.include?('unix')
dalli_url = dalli_url.gsub('unix://', '')
end
bench_target = ENV['BENCH_TARGET'] || 'set'
bench_time = (ENV['BENCH_TIME'] || 10).to_i
bench_warmup = (ENV['BENCH_WARMUP'] || 3).to_i
bench_payload_size = (ENV['BENCH_PAYLOAD_SIZE'] || 700_000).to_i
payload = 'B' * bench_payload_size
TERMINATOR = "\r\n"
puts "yjit: #{RubyVM::YJIT.enabled?}"

client = Dalli::Client.new('localhost', serializer: StringSerializer, compress: false, raw: true)
multi_client = Dalli::Client.new('localhost:11211,localhost:11222', serializer: StringSerializer, compress: false, raw: true)

# The raw socket implementation is used to benchmark the performance of dalli & the overhead of the various abstractions
# in the library.
sock = TCPSocket.new('127.0.0.1', '11211', connect_timeout: 1)
sock.setsockopt(Socket::IPPROTO_TCP, Socket::TCP_NODELAY, true)
sock.setsockopt(Socket::SOL_SOCKET, Socket::SO_KEEPALIVE, true)
# Benchmarks didn't see any performance gains from increasing the SO_RCVBUF buffer size
# sock.setsockopt(Socket::SOL_SOCKET, ::Socket::SO_RCVBUF, 1024 * 1024 * 8)
# Benchmarks did see an improvement in performance when increasing the SO_SNDBUF buffer size
sock.setsockopt(Socket::SOL_SOCKET, Socket::SO_SNDBUF, 1024 * 1024 * 8)

# ensure the clients are all connected and working
client.set('key', payload)
sock.write("set sock_key 0 3600 #{payload.bytesize}\r\n")
sock.write(payload)
sock.write(TERMINATOR)
sock.flush
sock.readline # clear the buffer

# ensure we have basic data for the benchmarks and get calls
payload_smaller = 'B' * 50_000
pairs = {}
100.times do |i|
pairs["multi_#{i}"] = payload_smaller
end
client.quiet do
pairs.each do |key, value|
client.set(key, value, 3600, raw: true)
end
end

###
# GC Suite
# benchmark without GC skewing things
###
class GCSuite
def warming(*)
run_gc
end

def running(*)
run_gc
end

def warmup_stats(*); end

def add_report(*); end

private

def run_gc
GC.enable
GC.start
GC.disable
end
end
suite = GCSuite.new

def sock_get_multi(sock, pairs)
count = pairs.length
pairs.each_key do |key|
count -= 1
tail = count.zero? ? '' : 'q'
sock.write("mg #{key} v f k #{tail}\r\n")
end
sock.flush
# read all the memcached responses back and build a hash of key value pairs
results = {}
last_result = false
while (line = sock.readline.chomp!(TERMINATOR)) != ''
last_result = true if line.start_with?('EN ')
next unless line.start_with?('VA ') || last_result

_, value_length, _flags, key = line.split
results[key[1..]] = sock.read(value_length.to_i)
sock.read(TERMINATOR.length)
break if results.size == pairs.size
break if last_result
end
results
end


if %w[all set].include?(bench_target)
Benchmark.ips do |x|
x.config(warmup: bench_warmup, time: bench_time, suite: suite)
x.report('client set') { client.set('key', payload) }
#x.report('multi client set') { multi_client.set('string_key', payload) }
x.report('raw sock set') do
sock.write("ms sock_key #{payload.bytesize} T3600 MS\r\n")
sock.write(payload)
sock.write("\r\n")
sock.flush
sock.readline # clear the buffer
end
x.compare!
end
end

@lock = Monitor.new
if %w[all get].include?(bench_target)
Benchmark.ips do |x|
x.config(warmup: bench_warmup, time: bench_time, suite: suite)
x.report('get dalli') { client.get('key') }
# NOTE: while this is the fastest, it is not thread safe and uses blocking reads rather than IO-sharing-friendly nonblocking IO
x.report('get sock') do
sock.write("get sock_key\r\n")
sock.readline
sock.read(payload.bytesize)
end
# NOTE: this shows that adding thread safety & non-blocking IO makes us slower in the single process/thread use case
x.report('get sock non-blocking') do
@lock.synchronize do
sock.write("get sock_key\r\n")
sock.readline
count = payload.bytesize
value = String.new(capacity: count + 1)
loop do
begin
value << sock.read_nonblock(count - value.bytesize)
rescue Errno::EAGAIN
IO.select([sock])
retry
rescue EOFError
puts "EOFError"
break
end
break if value.bytesize == count
end
end
end
x.compare!
end
end

if %w[all get_multi].include?(bench_target)
Benchmark.ips do |x|
x.config(warmup: bench_warmup, time: bench_time, suite: suite)
x.report('get 100 keys') { client.get_multi(pairs.keys) }
x.report('get 100 keys raw sock') { sock_get_multi(sock, pairs) }
x.compare!
end
end

if %w[all set_multi].include?(bench_target)
Benchmark.ips do |x|
x.config(warmup: bench_warmup, time: bench_time, suite: suite)
x.report('write 100 keys simple') do
client.quiet do
pairs.each do |key, value|
client.set(key, value, 3600, raw: true)
end
end
end
x.report('multi client set_multi 100') do
multi_client.set_multi(pairs, 3600, raw: true)
end
x.report('write 100 keys rawsock') do
count = pairs.length
tail = ''
value_bytesize = payload_smaller.bytesize
ttl = 3600

pairs.each do |key, value|
count -= 1
tail = count.zero? ? '' : 'q'
sock.write(String.new("ms #{key} #{value_bytesize} c F0 T#{ttl} MS #{tail}\r\n",
capacity: key.size + value_bytesize + 40) << value << TERMINATOR)
end
sock.flush
sock.gets(TERMINATOR) # clear the buffer
end
x.report('write_multi 100 keys') { client.set_multi(pairs, 3600, raw: true) }
x.compare!
end
end
177 changes: 177 additions & 0 deletions bin/profile
@@ -0,0 +1,177 @@
#!/usr/bin/env ruby
# frozen_string_literal: true

# This helps profile specific call paths in Dalli
# finding and fixing performance issues in these profiles should result in improvements in the dalli benchmarks
#
# run with:
# RUBY_YJIT_ENABLE=1 bundle exec bin/profile
require 'bundler/inline'
require 'json'

gemfile do
source 'https://rubygems.org'
gem 'benchmark-ips'
gem 'vernier'
gem 'logger'
end

require_relative '../lib/dalli'
require 'benchmark/ips'
require 'vernier'

##
# StringSerializer is a serializer that avoids the overhead of Marshal or JSON.
##
class StringSerializer
def self.dump(value)
value
end

def self.load(value)
value
end
end

dalli_url = ENV['BENCH_CACHE_URL'] || "127.0.0.1:11211"

if dalli_url.include?('unix')
dalli_url = dalli_url.gsub('unix://', '')
end
bench_target = ENV['BENCH_TARGET'] || 'get'
bench_time = (ENV['BENCH_TIME'] || 10).to_i
bench_payload_size = (ENV['BENCH_PAYLOAD_SIZE'] || 700_000).to_i
TERMINATOR = "\r\n"
puts "yjit: #{RubyVM::YJIT.enabled?}"

client = Dalli::Client.new('localhost', serializer: StringSerializer, compress: false)

# The raw socket implementation is used to benchmark the performance of dalli & the overhead of the various abstractions
# in the library.
sock = TCPSocket.new('127.0.0.1', '11211', connect_timeout: 1)
sock.setsockopt(Socket::IPPROTO_TCP, Socket::TCP_NODELAY, true)
sock.setsockopt(Socket::SOL_SOCKET, Socket::SO_KEEPALIVE, true)
# Benchmarks didn't see any performance gains from increasing the SO_RCVBUF buffer size
# sock.setsockopt(Socket::SOL_SOCKET, ::Socket::SO_RCVBUF, 1024 * 1024 * 8)
# Benchmarks did see an improvement in performance when increasing the SO_SNDBUF buffer size
sock.setsockopt(Socket::SOL_SOCKET, Socket::SO_SNDBUF, 1024 * 1024 * 8)

payload = 'B' * bench_payload_size
dalli_key = 'dalli_key'
# ensure the clients are all connected and working
client.set(dalli_key, payload)
sock.write("set sock_key 0 3600 #{payload.bytesize}\r\n")
sock.write(payload)
sock.write(TERMINATOR)
sock.flush
sock.readline # clear the buffer

# ensure we have basic data for the benchmarks and get calls
payload_smaller = 'B' * 50_000
pairs = {}
100.times do |i|
pairs["multi_#{i}"] = payload_smaller
end
client.quiet do
pairs.each do |key, value|
client.set(key, value, 3600, raw: true)
end
end

def sock_get_multi(sock, pairs)
count = pairs.length
pairs.each_key do |key|
count -= 1
tail = count.zero? ? '' : 'q'
sock.write("mg #{key} v f k #{tail}\r\n")
end
sock.flush
# read all the memcached responses back and build a hash of key value pairs
results = {}
last_result = false
while (line = sock.readline.chomp!(TERMINATOR)) != ''
last_result = true if line.start_with?('EN ')
next unless line.start_with?('VA ') || last_result

_, value_length, _flags, key = line.split
results[key[1..]] = sock.read(value_length.to_i)
sock.read(TERMINATOR.length)
break if results.size == pairs.size
break if last_result
end
results
end

def sock_set_multi(sock, pairs)
count = pairs.length
tail = ''
ttl = 3600

pairs.each do |key, value|
count -= 1
tail = count.zero? ? '' : 'q'
sock.write(String.new("ms #{key} #{value.bytesize} c F0 T#{ttl} MS #{tail}\r\n", capacity: key.size + value.bytesize + 40))
sock.write(value)
sock.write(TERMINATOR)
end
sock.flush
sock.gets(TERMINATOR) # clear the buffer
end

if %w[all get].include?(bench_target)
Vernier.profile(out: 'client_get_profile.json') do
start_time = Time.now
client.get(dalli_key) while Time.now - start_time < bench_time
end

Vernier.profile(out: 'socket_get_profile.json') do
start_time = Time.now
while Time.now - start_time < bench_time do
sock.write("get sock_key\r\n")
sock.readline
sock.read(payload.bytesize)
end
end
end

if %w[all set].include?(bench_target)
Vernier.profile(out: 'client_set_profile.json') do
start_time = Time.now
client.set(dalli_key, payload, 3600, raw: true) while Time.now - start_time < bench_time
end

Vernier.profile(out: 'socket_set_profile.json') do
start_time = Time.now
while Time.now - start_time < bench_time
sock.write("ms sock_key #{payload.bytesize} T3600 MS\r\n")
sock.write(payload)
sock.write("\r\n")
sock.flush
sock.readline # clear the buffer
end
end
end

if %w[all get_multi].include?(bench_target)
Vernier.profile(out: 'client_get_multi_profile.json') do
start_time = Time.now
client.get_multi(pairs.keys) while Time.now - start_time < bench_time
end

Vernier.profile(out: 'socket_get_multi_profile.json') do
start_time = Time.now
sock_get_multi(sock, pairs) while Time.now - start_time < bench_time
end
end

if %w[all set_multi].include?(bench_target)
Vernier.profile(out: 'client_set_multi_profile.json') do
start_time = Time.now
client.set_multi(pairs, 3600, raw: true) while Time.now - start_time < bench_time
end

Vernier.profile(out: 'socket_set_multi_profile.json') do
start_time = Time.now
sock_set_multi(sock, pairs) while Time.now - start_time < bench_time
end
end
16 changes: 16 additions & 0 deletions bin/start-toxiproxy.sh
@@ -0,0 +1,16 @@
#!/bin/bash -e

VERSION='v2.4.0'

if [[ "$OSTYPE" == "linux"* ]]; then
DOWNLOAD_TYPE="linux-amd64"
elif [[ "$OSTYPE" == "darwin"* ]]; then
DOWNLOAD_TYPE="darwin-amd64"
fi

echo "[download toxiproxy for $DOWNLOAD_TYPE]"
curl --silent -L https://github.com/Shopify/toxiproxy/releases/download/$VERSION/toxiproxy-server-$DOWNLOAD_TYPE -o ./bin/toxiproxy-server

echo "[start toxiproxy]"
chmod +x ./bin/toxiproxy-server
nohup bash -c "./bin/toxiproxy-server 2>&1 | sed -e 's/^/[toxiproxy] /' &"
2 changes: 1 addition & 1 deletion dalli.gemspec
@@ -17,7 +17,7 @@ Gem::Specification.new do |s|
'Gemfile'
]
s.homepage = 'https://github.com/petergoldstein/dalli'
s.required_ruby_version = '>= 2.6'
s.required_ruby_version = '>= 3.2'

s.metadata = {
'bug_tracker_uri' => 'https://github.com/petergoldstein/dalli/issues',
8 changes: 7 additions & 1 deletion lib/dalli.rb
@@ -12,6 +12,9 @@ class DalliError < RuntimeError; end
# socket/server communication error
class NetworkError < DalliError; end

# socket/server communication error, but we should retry
class RetryableNetworkError < NetworkError; end

# no server available/alive error
class RingError < DalliError; end

@@ -27,6 +30,9 @@ class ValueOverMaxSize < DalliError; end
# operation is not permitted in a multi block
class NotPermittedMultiOpError < DalliError; end

# server error, raised when Memcached responds with a SERVER_ERROR
class ServerError < DalliError; end

# Implements the NullObject pattern to store an application-defined value for 'Key not found' responses.
class NilObject; end # rubocop:disable Lint/EmptyClass
NOT_FOUND = NilObject.new
@@ -60,10 +66,10 @@ def self.logger=(logger)
require_relative 'dalli/client'
require_relative 'dalli/key_manager'
require_relative 'dalli/pipelined_getter'
require_relative 'dalli/pipelined_setter'
require_relative 'dalli/ring'
require_relative 'dalli/protocol'
require_relative 'dalli/protocol/base'
require_relative 'dalli/protocol/binary'
require_relative 'dalli/protocol/connection_manager'
require_relative 'dalli/protocol/meta'
require_relative 'dalli/protocol/response_buffer'
57 changes: 37 additions & 20 deletions lib/dalli/client.rb
@@ -1,14 +1,14 @@
# frozen_string_literal: true

require 'digest/md5'
require 'set'

# encoding: ascii
module Dalli
##
# Dalli::Client is the main class which developers will use to interact with
# Memcached.
##
# rubocop:disable Metrics/ClassLength
class Client
##
# Dalli::Client is the main class which developers will use to interact with
@@ -35,6 +35,8 @@ class Client
# to 0 or forever.
# - :compress - if true Dalli will compress values larger than compression_min_size bytes before sending them
# to memcached. Default: true.
# - :raw - if true, Dalli will not serialize values; this can be overridden by explicitly passing
# `:raw => false` as a request option when writing data. Default: false.
# - :compression_min_size - the minimum size (in bytes) for which Dalli will compress values sent to Memcached.
# Defaults to 4K.
# - :serializer - defaults to Marshal
@@ -43,8 +45,6 @@ class Client
# #fetch operations.
# - :digest_class - defaults to Digest::MD5, allows you to pass in an object that responds to the hexdigest method,
# useful for injecting a FIPS compliant hash object.
# - :protocol - one of either :binary or :meta, defaulting to :binary. This sets the protocol that Dalli uses
# to communicate with memcached.
#
def initialize(servers = nil, options = {})
@normalized_servers = ::Dalli::ServersArgNormalizer.normalize_servers(servers)
@@ -68,8 +68,8 @@ def get(key, req_options = nil)
# Gat (get and touch) fetch an item and simultaneously update its expiration time.
#
# If a value is not found, then +nil+ is returned.
def gat(key, ttl = nil)
perform(:gat, key, ttl_or_default(ttl))
def gat(key, ttl = nil, req_options = nil)
perform(:gat, key, ttl_or_default(ttl), req_options)
end

##
@@ -95,6 +95,7 @@ def get_cas(key)
# Fetch multiple keys efficiently.
# If a block is given, yields key/value pairs one at a time.
# Otherwise returns a hash of { 'key' => 'value', 'key2' => 'value2' }
# rubocop:disable Metrics/AbcSize
def get_multi(*keys)
keys.flatten!
keys.compact!
@@ -103,12 +104,15 @@ def get_multi(*keys)

if block_given?
pipelined_getter.process(keys) { |k, data| yield k, data.first }
elsif ring.servers.size == 1
pipelined_getter.process(keys)
else
{}.tap do |hash|
pipelined_getter.process(keys) { |k, data| hash[k] = data.first }
end
end
end
# rubocop:enable Metrics/AbcSize

##
# Fetch multiple keys efficiently, including available metadata such as CAS.
@@ -155,8 +159,8 @@ def fetch(key, ttl = nil, req_options = nil)
# - nil if the key did not exist.
# - false if the value was changed by someone else.
# - true if the value was successfully updated.
def cas(key, ttl = nil, req_options = nil, &block)
cas_core(key, false, ttl, req_options, &block)
def cas(key, ttl = nil, req_options = nil, &)
cas_core(key, false, ttl, req_options, &)
end

##
@@ -166,8 +170,8 @@ def cas(key, ttl = nil, req_options = nil, &block)
# Returns:
# - false if the value was changed by someone else.
# - true if the value was successfully updated.
def cas!(key, ttl = nil, req_options = nil, &block)
cas_core(key, true, ttl, req_options, &block)
def cas!(key, ttl = nil, req_options = nil, &)
cas_core(key, true, ttl, req_options, &)
end

##
@@ -197,6 +201,23 @@ def quiet
end
alias multi quiet

##
# Set multiple keys efficiently in pipelined quiet mode. Returns nil.
def set_multi(pairs, ttl, req_options = nil)
return if pairs.empty?

if ring.servers.length == 1
pipelined_setter.process(pairs, ttl, req_options)
else
quiet do
pairs.each do |key, value|
set(key, value, ttl, req_options)
end
end
end
nil
end

def set(key, value, ttl = nil, req_options = nil)
set_cas(key, value, 0, ttl, req_options)
end
@@ -392,16 +413,7 @@ def ttl_or_default(ttl)
end

def ring
@ring ||= Dalli::Ring.new(@normalized_servers, protocol_implementation, @options)
end

def protocol_implementation
@protocol_implementation ||= case @options[:protocol]&.to_s
when 'meta'
Dalli::Protocol::Meta
else
Dalli::Protocol::Binary
end
@ring ||= Dalli::Ring.new(@normalized_servers, @options)
end

##
@@ -424,7 +436,7 @@ def perform(*all_args)

server = ring.server_for_key(key)
server.request(op, key, *args)
rescue NetworkError => e
rescue RetryableNetworkError => e
Dalli.logger.debug { e.inspect }
Dalli.logger.debug { 'retrying request with new server' }
retry
@@ -440,5 +452,10 @@ def normalize_options(opts)
def pipelined_getter
PipelinedGetter.new(ring, @key_manager)
end

def pipelined_setter
PipelinedSetter.new(ring)
end
end
end
# rubocop:enable Metrics/ClassLength
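The `set_multi` contract added above can be sketched with an in-memory stand-in. `TinyStore` is a hypothetical class used only for illustration (it is not part of Dalli): it accepts a `{key => value}` hash plus a TTL, writes each pair through `#set` the way the multi-server fallback path does, and always returns nil.

```ruby
# TinyStore is a hypothetical in-memory stand-in illustrating the set_multi
# contract: a {key => value} hash plus a TTL, with a nil return value.
class TinyStore
  def initialize
    @data = {}
  end

  def set(key, value, _ttl = nil, _req_options = nil)
    @data[key] = value
  end

  # Mirrors the multi-server fallback path: each pair is written with #set.
  def set_multi(pairs, ttl, req_options = nil)
    return if pairs.empty?

    pairs.each { |key, value| set(key, value, ttl, req_options) }
    nil
  end

  def get(key)
    @data[key]
  end
end

store = TinyStore.new
result = store.set_multi({ 'a' => 1, 'b' => 2 }, 3600)
```

Against a real client the single-server case instead dispatches to the pipelined setter, but the observable behavior is the same: the pairs are stored and nil comes back.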
10 changes: 8 additions & 2 deletions lib/dalli/key_manager.rb
@@ -30,7 +30,7 @@ class KeyManager

def initialize(client_options)
@key_options =
DEFAULTS.merge(client_options.select { |k, _| OPTIONS.include?(k) })
DEFAULTS.merge(client_options.slice(*OPTIONS))
validate_digest_class_option(@key_options)

@namespace = namespace_from_options
@@ -70,14 +70,20 @@ def key_without_namespace(key)
key.sub(namespace_regexp, '')
end

def key_values_without_namespace(key_values)
return key_values if namespace.nil?

key_values.transform_keys! { |key| key_without_namespace(key) }
end

def digest_class
@digest_class ||= @key_options[:digest_class]
end

def namespace_regexp
return /\A#{Regexp.escape(evaluate_namespace)}:/ if namespace.is_a?(Proc)

@namespace_regexp ||= /\A#{Regexp.escape(namespace)}:/.freeze unless namespace.nil?
@namespace_regexp ||= /\A#{Regexp.escape(namespace)}:/ unless namespace.nil?
end

def validate_digest_class_option(opts)
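The new `key_values_without_namespace` helper strips the namespace prefix from every key of a results hash in place. A minimal sketch of that transformation (standalone, with an illustrative namespace rather than Dalli's internals):

```ruby
# Sketch of key_values_without_namespace: remove a leading "namespace:" prefix
# from every key of a results hash, mutating the hash in place.
namespace = 'app'
namespace_regexp = /\A#{Regexp.escape(namespace)}:/

results = { 'app:a' => 1, 'app:b' => 2 }
results.transform_keys! { |key| key.sub(namespace_regexp, '') }
```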
2 changes: 1 addition & 1 deletion lib/dalli/options.rb
@@ -6,7 +6,7 @@ module Dalli
# Make Dalli threadsafe by using a lock around all
# public server methods.
#
# Dalli::Protocol::Binary.extend(Dalli::Threadsafe)
# Dalli::Protocol::Meta.extend(Dalli::Threadsafe)
#
module Threadsafe
def self.extended(obj)
23 changes: 17 additions & 6 deletions lib/dalli/pipelined_getter.rb
@@ -10,18 +10,29 @@ def initialize(ring, key_manager)
@key_manager = key_manager
end

def optimized_for_single_server(keys)
keys.map! { |a| @key_manager.validate_key(a.to_s) }
results = @ring.servers.first.request(:read_multi_req, keys)
@key_manager.key_values_without_namespace(results)
end

##
# Yields, one at a time, keys and their values+attributes.
#
def process(keys, &block)
return {} if keys.empty?

@ring.lock do
servers = setup_requests(keys)
start_time = Time.now
servers = fetch_responses(servers, start_time, @ring.socket_timeout, &block) until servers.empty?
# The optimized path only works for single-server setups at the moment
if @ring.servers.size > 1 || block
@ring.lock do
servers = setup_requests(keys)
start_time = Time.now
servers = fetch_responses(servers, start_time, @ring.socket_timeout, &block) until servers.empty?
end
else
optimized_for_single_server(keys)
end
rescue NetworkError => e
rescue RetryableNetworkError => e
Dalli.logger.debug { e.inspect }
Dalli.logger.debug { 'retrying pipelined gets because of timeout' }
retry
@@ -112,7 +123,7 @@ def fetch_responses(servers, start_time, timeout, &block)
servers
rescue NetworkError
# Abort and raise if we encountered a network error. This triggers
# a retry at the top level.
# a retry at the top level on RetryableNetworkError
abort_without_timeout(servers)
raise
end
29 changes: 29 additions & 0 deletions lib/dalli/pipelined_setter.rb
@@ -0,0 +1,29 @@
# frozen_string_literal: true

module Dalli
##
# Contains logic for the pipelined sets implemented by the client.
##
class PipelinedSetter
def initialize(ring)
@ring = ring
end

##
# Writes multiple keys and values to the server.
##
def process(pairs, ttl, req_options = nil)
return if pairs.empty?

# Single-server performance optimization: no locking and no grouping of pairs by server.
# Note: groups_for_keys(pairs.keys) is slow, so we avoid it.
raise 'not yet implemented' unless @ring.servers.length == 1

@ring.servers.first.request(:write_multi_storage_req, :set, pairs, ttl, 0, req_options)
rescue RetryableNetworkError => e
Dalli.logger.debug { e.inspect }
Dalli.logger.debug { 'retrying pipelined set because of timeout' }
retry
end
end
end
8 changes: 7 additions & 1 deletion lib/dalli/protocol/base.rb
@@ -7,7 +7,7 @@
module Dalli
module Protocol
##
# Base class for a single Memcached server, containing logic common to all
# Base class for a single Memcached client, containing logic common to all
# protocols. Contains logic for managing connection state to the server and value
# handling.
##
@@ -197,6 +197,12 @@ def ensure_connected!
connected?
end

def meta_flag_options(opts)
return nil unless opts.is_a?(Hash)

opts[:meta_flags]
end

def cache_nils?(opts)
return false unless opts.is_a?(Hash)

173 changes: 0 additions & 173 deletions lib/dalli/protocol/binary.rb

This file was deleted.

117 changes: 0 additions & 117 deletions lib/dalli/protocol/binary/request_formatter.rb

This file was deleted.

36 changes: 0 additions & 36 deletions lib/dalli/protocol/binary/response_header.rb

This file was deleted.

239 changes: 0 additions & 239 deletions lib/dalli/protocol/binary/response_processor.rb

This file was deleted.

60 changes: 0 additions & 60 deletions lib/dalli/protocol/binary/sasl_authentication.rb

This file was deleted.

24 changes: 21 additions & 3 deletions lib/dalli/protocol/connection_manager.rb
@@ -4,7 +4,7 @@
require 'socket'
require 'timeout'

require 'dalli/pid_cache'
require_relative '../pid_cache'

module Dalli
module Protocol
@@ -146,6 +146,12 @@ def abort_request!
@request_in_progress = false
end

def readline
@sock.readline
rescue SystemCallError, *TIMEOUT_ERRORS, EOFError => e
error_on_request!(e)
end

def read_line
data = @sock.gets("\r\n")
error_on_request!('EOF in read_line') if data.nil?
@@ -155,7 +161,13 @@ def read_line
end

def read(count)
@sock.readfull(count)
@sock.read(count)
rescue SystemCallError, *TIMEOUT_ERRORS, EOFError => e
error_on_request!(e)
end

def read_exact(count)
@sock.read(count)
rescue SystemCallError, *TIMEOUT_ERRORS, EOFError => e
error_on_request!(e)
end
@@ -172,6 +184,12 @@ def read_nonblock
@sock.read_available
end

def flush
@sock.flush
rescue SystemCallError, *TIMEOUT_ERRORS, EOFError => e
error_on_request!(e)
end

def max_allowed_failures
@max_allowed_failures ||= @options[:socket_max_failures] || 2
end
@@ -192,7 +210,7 @@ def error_on_request!(err_or_string)
def reconnect!(message)
close
sleep(options[:socket_failure_delay]) if options[:socket_failure_delay]
raise Dalli::NetworkError, message
raise Dalli::RetryableNetworkError, message
end

def reset_down_info
104 changes: 97 additions & 7 deletions lib/dalli/protocol/meta.rb
@@ -13,22 +13,103 @@ module Protocol
##
class Meta < Base
TERMINATOR = "\r\n"
SUPPORTS_CAPACITY = Gem::Version.new(RUBY_VERSION) >= Gem::Version.new('3.4.0')

def response_processor
@response_processor ||= ResponseProcessor.new(@connection_manager, @value_marshaller)
end

# NOTE: Additional public methods should be overridden in Dalli::Threadsafe

private

# NOTE: experimental write_multi_storage_req for optimized pipelined storage
# * only supports single server
# * only supports set at the moment
# * doesn't support cas at the moment
def write_multi_storage_req(_mode, pairs, ttl = nil, _cas = nil, options = {})
ttl = TtlSanitizer.sanitize(ttl) if ttl

pairs.each do |key, raw_value|
(value, bitflags) = @value_marshaller.store(key, raw_value, options)
encoded_key, _base64 = KeyRegularizer.encode(key)
value_bytesize = value.bytesize
# if last pair of hash, add TERMINATOR

# NOTE: This is the most efficient way to build the command:
# * avoid making new strings; capacity is set to the max possible size of the command
# * socket-write each piece independently to avoid extra allocation
# * the first chunk uses interpolated values to avoid extra allocation, while larger 'value' strings are written separately
# * avoid the request formatter pattern in favor of a single inline builder
@connection_manager.write("ms #{encoded_key} #{value_bytesize} c F#{bitflags} T#{ttl} MS q\r\n")
@connection_manager.write(value)
@connection_manager.write(TERMINATOR)
end
write_noop
@connection_manager.flush

response_processor.consume_all_responses_until_mn
end

# rubocop:disable Metrics/AbcSize
# rubocop:disable Metrics/MethodLength
# rubocop:disable Metrics/CyclomaticComplexity
# rubocop:disable Metrics/PerceivedComplexity
def read_multi_req(keys)
# Pre-allocate the results hash with expected size
results = SUPPORTS_CAPACITY ? Hash.new(nil, capacity: keys.size) : {}
optimized_for_raw = @value_marshaller.raw_by_default
key_index = optimized_for_raw ? 2 : 3

post_get_req = optimized_for_raw ? "v k q\r\n" : "v f k q\r\n"
keys.each do |key|
@connection_manager.write("mg #{key} #{post_get_req}")
end
@connection_manager.write("mn\r\n")
@connection_manager.flush

terminator_length = TERMINATOR.length
while (line = @connection_manager.readline)
break if line == TERMINATOR || line[0, 2] == 'MN'
next unless line[0, 3] == 'VA '

# VA value_length flags key
tokens = line.split
value = @connection_manager.read_exact(tokens[1].to_i)
bitflags = optimized_for_raw ? 0 : @response_processor.bitflags_from_tokens(tokens)
@connection_manager.read_exact(terminator_length) # read the terminator
results[tokens[key_index].byteslice(1..-1)] =
@value_marshaller.retrieve(value, bitflags)
end
results
end
# rubocop:enable Metrics/AbcSize
# rubocop:enable Metrics/MethodLength
# rubocop:enable Metrics/CyclomaticComplexity
# rubocop:enable Metrics/PerceivedComplexity

# Retrieval Commands
# rubocop:disable Metrics/AbcSize
# rubocop:disable Metrics/CyclomaticComplexity
# rubocop:disable Metrics/PerceivedComplexity
def get(key, options = nil)
encoded_key, base64 = KeyRegularizer.encode(key)
req = RequestFormatter.meta_get(key: encoded_key, base64: base64)
write(req)
response_processor.meta_get_with_value(cache_nils: cache_nils?(options))
meta_options = meta_flag_options(options)

if !meta_options && !base64 && !quiet? && @value_marshaller.raw_by_default
write("mg #{encoded_key} v\r\n")
else
write(RequestFormatter.meta_get(key: encoded_key, base64: base64, meta_flags: meta_options))
end
if !meta_options && !base64 && !quiet? && @value_marshaller.raw_by_default
response_processor.meta_get_with_value(cache_nils: cache_nils?(options), skip_flags: true)
elsif meta_options
response_processor.meta_get_with_value_and_meta_flags(cache_nils: cache_nils?(options))
else
response_processor.meta_get_with_value(cache_nils: cache_nils?(options))
end
end
# rubocop:enable Metrics/AbcSize
# rubocop:enable Metrics/CyclomaticComplexity
# rubocop:enable Metrics/PerceivedComplexity

def quiet_get_request(key)
encoded_key, base64 = KeyRegularizer.encode(key)
@@ -38,9 +119,14 @@ def quiet_get_request(key)
def gat(key, ttl, options = nil)
ttl = TtlSanitizer.sanitize(ttl)
encoded_key, base64 = KeyRegularizer.encode(key)
req = RequestFormatter.meta_get(key: encoded_key, ttl: ttl, base64: base64)
req = RequestFormatter.meta_get(key: encoded_key, ttl: ttl, base64: base64,
meta_flags: meta_flag_options(options))
write(req)
response_processor.meta_get_with_value(cache_nils: cache_nils?(options))
if meta_flag_options(options)
response_processor.meta_get_with_value_and_meta_flags(cache_nils: cache_nils?(options))
else
response_processor.meta_get_with_value(cache_nils: cache_nils?(options))
end
end

def touch(key, ttl)
@@ -85,6 +171,8 @@ def write_storage_req(mode, key, raw_value, ttl = nil, cas = nil, options = {})
bitflags: bitflags, cas: cas,
ttl: ttl, mode: mode, quiet: quiet?, base64: base64)
write(req)
write(value)
write(TERMINATOR)
end
# rubocop:enable Metrics/ParameterLists

@@ -105,6 +193,8 @@ def write_append_prepend_req(mode, key, value, ttl = nil, cas = nil, _options =
req = RequestFormatter.meta_set(key: encoded_key, value: value, base64: base64,
cas: cas, ttl: ttl, mode: mode, quiet: quiet?)
write(req)
write(value)
write(TERMINATOR)
end
# rubocop:enable Metrics/ParameterLists
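The wire framing that `write_multi_storage_req` emits can be sketched as plain string building. The sketch below is illustrative only: it fixes the bitflags to `F0` and takes the `c`, `MS`, and `q` tokens from the command shown above, without touching a socket.

```ruby
# Sketch of the pipelined meta-set frames written by write_multi_storage_req:
# one "ms" header per pair, then the raw value, then the terminator, followed
# by a final "mn" so the server marks the end of the quiet batch.
TERMINATOR = "\r\n"

def ms_frames(pairs, ttl)
  frames = pairs.flat_map do |key, value|
    ["ms #{key} #{value.bytesize} c F0 T#{ttl} MS q#{TERMINATOR}", value, TERMINATOR]
  end
  frames << "mn#{TERMINATOR}"
end

frames = ms_frames({ 'greeting' => 'hi' }, 60)
```

Because every `ms` carries the `q` (quiet) flag, successful stores produce no response; the trailing `mn` forces an `MN\r\n` reply that `consume_all_responses_until_mn` uses as the end-of-batch marker.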

2 changes: 1 addition & 1 deletion lib/dalli/protocol/meta/key_regularizer.rb
@@ -10,7 +10,7 @@ class Meta
# memcached supports the use of base64 hashes for keys containing
# whitespace or non-ASCII characters, provided the 'b' flag is included in the request.
class KeyRegularizer
WHITESPACE = /\s/.freeze
WHITESPACE = /\s/

def self.encode(key)
return [key, false] if key.ascii_only? && !WHITESPACE.match(key)
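The guard above decides when a key must be base64-encoded (the meta protocol's 'b' flag): a sketch of that predicate, inverted for readability.

```ruby
# Sketch of KeyRegularizer's encode decision: keys that are ASCII-only and
# contain no whitespace go over the wire as-is; anything else needs base64.
WHITESPACE = /\s/

def needs_base64?(key)
  !(key.ascii_only? && !WHITESPACE.match(key))
end
```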
13 changes: 4 additions & 9 deletions lib/dalli/protocol/meta/request_formatter.rb
@@ -13,31 +13,29 @@ class RequestFormatter
# and introducing an intermediate object seems like overkill.
#
# rubocop:disable Metrics/CyclomaticComplexity
# rubocop:disable Metrics/MethodLength
# rubocop:disable Metrics/ParameterLists
# rubocop:disable Metrics/PerceivedComplexity
def self.meta_get(key:, value: true, return_cas: false, ttl: nil, base64: false, quiet: false)
def self.meta_get(key:, value: true, return_cas: false, ttl: nil, base64: false, quiet: false, meta_flags: nil)
cmd = "mg #{key}"
cmd << ' v f' if value
cmd << ' c' if return_cas
cmd << ' b' if base64
cmd << " T#{ttl}" if ttl
cmd << " #{meta_flags.join(' ')}" if meta_flags
cmd << ' k q s' if quiet # Return the key in the response if quiet
cmd + TERMINATOR
end

def self.meta_set(key:, value:, bitflags: nil, cas: nil, ttl: nil, mode: :set, base64: false, quiet: false)
cmd = "ms #{key} #{value.bytesize}"
cmd << ' c' unless %i[append prepend].include?(mode)
cmd << ' c' if !quiet && !%i[append prepend].include?(mode)
cmd << ' b' if base64
cmd << " F#{bitflags}" if bitflags
cmd << cas_string(cas)
cmd << cas_string(cas) if cas && cas != 0
cmd << " T#{ttl}" if ttl
cmd << " M#{mode_to_token(mode)}"
cmd << ' q' if quiet
cmd << TERMINATOR
cmd << value
cmd + TERMINATOR
end

def self.meta_delete(key:, cas: nil, ttl: nil, base64: false, quiet: false)
@@ -62,7 +60,6 @@ def self.meta_arithmetic(key:, delta:, initial:, incr: true, cas: nil, ttl: nil,
cmd + TERMINATOR
end
# rubocop:enable Metrics/CyclomaticComplexity
# rubocop:enable Metrics/MethodLength
# rubocop:enable Metrics/ParameterLists
# rubocop:enable Metrics/PerceivedComplexity

@@ -87,7 +84,6 @@ def self.stats(arg = nil)
cmd + TERMINATOR
end

# rubocop:disable Metrics/MethodLength
def self.mode_to_token(mode)
case mode
when :add
@@ -102,7 +98,6 @@ def self.mode_to_token(mode)
'S'
end
end
# rubocop:enable Metrics/MethodLength

def self.cas_string(cas)
cas = parse_to_64_bit_int(cas, nil)
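The arbitrary-meta-flags change to `meta_get` can be shown with a cut-down re-implementation of the builder (defaults only: value requested, no cas/base64/ttl/quiet), so it is clear where caller-supplied flags land in the command.

```ruby
# Cut-down sketch of RequestFormatter.meta_get: "mg <key>", the default
# "v f" (value + bitflags) tokens, then any caller-supplied meta flags.
TERMINATOR = "\r\n"

def meta_get(key, meta_flags: nil)
  cmd = "mg #{key}"
  cmd << ' v f'
  cmd << " #{meta_flags.join(' ')}" if meta_flags
  cmd + TERMINATOR
end

req = meta_get('session', meta_flags: %w[h t])
```

Here `h` (hit-before) and `t` (remaining TTL) are standard meta-protocol request flags; the server echoes them back on the `VA`/`HD` response line.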
53 changes: 48 additions & 5 deletions lib/dalli/protocol/meta/response_processor.rb
@@ -21,18 +21,23 @@ class ResponseProcessor
STAT = 'STAT'
VA = 'VA'
VERSION = 'VERSION'
SERVER_ERROR = 'SERVER_ERROR'

def initialize(io_source, value_marshaller)
@io_source = io_source
@value_marshaller = value_marshaller
end

def meta_get_with_value(cache_nils: false)
def meta_get_with_value(cache_nils: false, skip_flags: false)
tokens = error_on_unexpected!([VA, EN, HD])
return cache_nils ? ::Dalli::NOT_FOUND : nil if tokens.first == EN
return true unless tokens.first == VA

@value_marshaller.retrieve(read_data(tokens[1].to_i), bitflags_from_tokens(tokens))
if skip_flags
@value_marshaller.retrieve(read_data(tokens[1].to_i), 0)
else
@value_marshaller.retrieve(read_data(tokens[1].to_i), bitflags_from_tokens(tokens))
end
end

def meta_get_with_value_and_cas
@@ -45,6 +50,18 @@ def meta_get_with_value_and_cas
[@value_marshaller.retrieve(read_data(tokens[1].to_i), bitflags_from_tokens(tokens)), cas]
end

def meta_get_with_value_and_meta_flags(cache_nils: false)
tokens = error_on_unexpected!([VA, EN, HD])
return [(cache_nils ? ::Dalli::NOT_FOUND : nil), {}] if tokens.first == EN

meta_flags = meta_flags_from_tokens(tokens)
return [(cache_nils ? ::Dalli::NOT_FOUND : nil), meta_flags] unless tokens.first == VA

value, bitflag = @value_marshaller.retrieve(read_data(tokens[1].to_i), bitflags_from_tokens(tokens))
meta_flags[:bitflag] = bitflag
[value, meta_flags]
end

def meta_get_without_value
tokens = error_on_unexpected!([EN, HD])
tokens.first == EN ? nil : true
@@ -167,9 +184,21 @@ def header_from_buffer(buf)

def error_on_unexpected!(expected_codes)
tokens = next_line_to_tokens
raise Dalli::DalliError, "Response error: #{tokens.first}" unless expected_codes.include?(tokens.first)

tokens
return tokens if expected_codes.include?(tokens.first)

raise Dalli::ServerError if tokens.first == SERVER_ERROR

raise Dalli::DalliError, "Response error: #{tokens.first}"
end

def meta_flags_from_tokens(tokens)
{
c: cas_from_tokens(tokens),
h: hit_from_tokens(tokens),
l: last_accessed_from_tokens(tokens),
t: ttl_remaining_from_tokens(tokens)
}
end

def bitflags_from_tokens(tokens)
@@ -180,6 +209,18 @@ def cas_from_tokens(tokens)
value_from_tokens(tokens, 'c')&.to_i
end

def hit_from_tokens(tokens)
value_from_tokens(tokens, 'h')&.to_i != 0
end

def last_accessed_from_tokens(tokens)
value_from_tokens(tokens, 'l')&.to_i
end

def ttl_remaining_from_tokens(tokens)
value_from_tokens(tokens, 't')&.to_i
end

def key_from_tokens(tokens)
encoded_key = value_from_tokens(tokens, 'k')
base64_encoded = tokens.any?('b')
@@ -207,7 +248,9 @@ def next_line_to_tokens
end

def read_data(data_size)
@io_source.read(data_size + TERMINATOR.bytesize)&.chomp!(TERMINATOR)
resp_data = @io_source.read(data_size)
@io_source.read(TERMINATOR.bytesize)
resp_data
end
end
end
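The flag-extraction helpers above all follow one pattern: each token on the response line carries a one-letter prefix (`c` = cas, `h` = hit-before, `l` = last access, `t` = remaining TTL) followed by its value. A standalone sketch of `meta_flags_from_tokens` (helper names mirror the diff; the generic `value_from_tokens` is simplified):

```ruby
# Sketch of meta-flag parsing: find the token with the given one-letter
# prefix and return the rest of it as the flag's value.
def value_from_tokens(tokens, flag)
  tokens.find { |t| t.start_with?(flag) && t.length > 1 }&.slice(1..)
end

def meta_flags_from_tokens(tokens)
  {
    c: value_from_tokens(tokens, 'c')&.to_i,
    h: value_from_tokens(tokens, 'h')&.to_i != 0,
    l: value_from_tokens(tokens, 'l')&.to_i,
    t: value_from_tokens(tokens, 't')&.to_i
  }
end

flags = meta_flags_from_tokens('VA 5 c42 h1 l10 t300'.split)
```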
4 changes: 2 additions & 2 deletions lib/dalli/protocol/server_config_parser.rb
@@ -6,7 +6,7 @@ module Dalli
module Protocol
##
# Dalli::Protocol::ServerConfigParser parses a server string passed to
# a Dalli::Protocol::Binary instance into the hostname, port, weight, and
# a Dalli::Protocol::Meta instance into the hostname, port, weight, and
# socket_type.
##
class ServerConfigParser
@@ -16,7 +16,7 @@ class ServerConfigParser
# can limit character set to LDH + '.'. Hex digit section
# is there to support IPv6 addresses, which need to be specified with
# a bounding []
SERVER_CONFIG_REGEXP = /\A(\[([\h:]+)\]|[^:]+)(?::(\d+))?(?::(\d+))?\z/.freeze
SERVER_CONFIG_REGEXP = /\A(\[([\h:]+)\]|[^:]+)(?::(\d+))?(?::(\d+))?\z/

DEFAULT_PORT = 11_211
DEFAULT_WEIGHT = 1
4 changes: 2 additions & 2 deletions lib/dalli/protocol/value_compressor.rb
@@ -35,7 +35,7 @@ def initialize(client_options)
end

@compression_options =
DEFAULTS.merge(client_options.select { |k, _| OPTIONS.include?(k) })
DEFAULTS.merge(client_options.slice(*OPTIONS))
end

def store(value, req_options, bitflags)
@@ -47,7 +47,7 @@ def store(value, req_options, bitflags)
end

def retrieve(value, bitflags)
compressed = (bitflags & FLAG_COMPRESSED) != 0
compressed = bitflags.anybits?(FLAG_COMPRESSED)
compressed ? compressor.decompress(value) : value

# TODO: We likely want to move this rescue into the Dalli::Compressor / Dalli::GzipCompressor
5 changes: 4 additions & 1 deletion lib/dalli/protocol/value_marshaller.rb
@@ -22,12 +22,15 @@ class ValueMarshaller
def_delegators :@value_serializer, :serializer
def_delegators :@value_compressor, :compressor, :compression_min_size, :compress_by_default?

attr_reader :raw_by_default

def initialize(client_options)
@value_serializer = ValueSerializer.new(client_options)
@value_compressor = ValueCompressor.new(client_options)
@raw_by_default = client_options[:raw] || false

@marshal_options =
DEFAULTS.merge(client_options.select { |k, _| OPTIONS.include?(k) })
DEFAULTS.merge(client_options.slice(*OPTIONS))
end

def store(key, value, options = nil)
20 changes: 17 additions & 3 deletions lib/dalli/protocol/value_serializer.rb
@@ -10,6 +10,7 @@ module Protocol
##
class ValueSerializer
DEFAULTS = {
raw: false,
serializer: Marshal
}.freeze

@@ -23,18 +24,18 @@ class ValueSerializer

def initialize(protocol_options)
@serialization_options =
DEFAULTS.merge(protocol_options.select { |k, _| OPTIONS.include?(k) })
DEFAULTS.merge(protocol_options.slice(*OPTIONS))
end

def store(value, req_options, bitflags)
do_serialize = !(req_options && req_options[:raw])
do_serialize = serialize_value?(req_options)
store_value = do_serialize ? serialize_value(value) : value.to_s
bitflags |= FLAG_SERIALIZED if do_serialize
[store_value, bitflags]
end

def retrieve(value, bitflags)
serialized = (bitflags & FLAG_SERIALIZED) != 0
serialized = bitflags.anybits?(FLAG_SERIALIZED)
if serialized
begin
serializer.load(value)
@@ -61,6 +62,19 @@ def serialize_value(value)
exc.set_backtrace e.backtrace
raise exc
end

private

# Defaults to client-level option, unless a request-level override is provided
def serialize_value?(req_options)
return !raw_by_default? unless req_options && !req_options[:raw].nil?

!req_options[:raw]
end

def raw_by_default?
@serialization_options[:raw]
end
end
end
end
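The precedence rule in `serialize_value?` is easy to misread, so here it is isolated as a pure function: the client-level `:raw` default wins unless the request options carry an explicit `:raw` override (parameter names are illustrative).

```ruby
# Sketch of ValueSerializer#serialize_value?: request-level :raw, when
# explicitly set, overrides the client-level raw default.
def serialize_value?(raw_default, req_options)
  return !raw_default unless req_options && !req_options[:raw].nil?

  !req_options[:raw]
end
```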
4 changes: 2 additions & 2 deletions lib/dalli/ring.rb
@@ -23,9 +23,9 @@ class Ring

attr_accessor :servers, :continuum

def initialize(servers_arg, protocol_implementation, options)
def initialize(servers_arg, options)
@servers = servers_arg.map do |s|
protocol_implementation.new(s, options)
Dalli::Protocol::Meta.new(s, options)
end
@continuum = nil
@continuum = build_continuum(servers) if servers.size > 1
4 changes: 2 additions & 2 deletions lib/dalli/server.rb
@@ -1,6 +1,6 @@
# frozen_string_literal: true

module Dalli # rubocop:disable Style/Documentation
warn 'Dalli::Server is deprecated, use Dalli::Protocol::Binary instead'
Server = Protocol::Binary
warn 'Dalli::Server is deprecated, use Dalli::Protocol::Meta instead'
Server = Protocol::Meta
end
65 changes: 32 additions & 33 deletions lib/dalli/socket.rb
@@ -8,24 +8,17 @@ module Dalli
# Various socket implementations used by Dalli.
##
module Socket
NON_BLOCK_SIZE = 8196
##
# Common methods for all socket implementations.
##
module InstanceMethods
def readfull(count)
value = String.new(capacity: count + 1)
loop do
result = read_nonblock(count - value.bytesize, exception: false)
value << result if append_to_buffer?(result)
break if value.bytesize == count
end
value
end
WAIT_RCS = %i[wait_writable wait_readable].freeze

def read_available
value = +''
loop do
result = read_nonblock(8196, exception: false)
result = read_nonblock(NON_BLOCK_SIZE, exception: false)
break if WAIT_RCS.include?(result)
raise Errno::ECONNRESET, "Connection reset: #{logged_options.inspect}" unless result

@@ -34,25 +27,20 @@ def read_available
value
end

WAIT_RCS = %i[wait_writable wait_readable].freeze
FILTERED_OUT_OPTIONS = %i[username password].freeze
def logged_options
options.except(*FILTERED_OUT_OPTIONS)
end

def append_to_buffer?(result)
raise Timeout::Error, "IO timeout: #{logged_options.inspect}" if nonblock_timed_out?(result)
raise Timeout::Error, "IO timeout: #{logged_options.inspect}" if read_nonblock_timed_out?(result)
raise Errno::ECONNRESET, "Connection reset: #{logged_options.inspect}" unless result

!WAIT_RCS.include?(result)
end

def nonblock_timed_out?(result)
return true if result == :wait_readable && !wait_readable(options[:socket_timeout])

# TODO: Do we actually need this? Looks to be only used in read_nonblock
result == :wait_writable && !wait_writable(options[:socket_timeout])
end

FILTERED_OUT_OPTIONS = %i[username password].freeze
def logged_options
options.reject { |k, _| FILTERED_OUT_OPTIONS.include? k }
def read_nonblock_timed_out?(result)
result == :wait_readable && !wait_readable(options[:socket_timeout])
end
end

@@ -104,18 +92,18 @@ def self.create_socket_with_timeout(host, port, options)
# To check this we are using the fact that resolv-replace
# aliases TCPSocket#initialize method to #original_resolv_initialize.
# https://github.com/ruby/resolv-replace/blob/v0.1.1/lib/resolv-replace.rb#L21
if RUBY_VERSION >= '3.0' &&
!::TCPSocket.private_instance_methods.include?(:original_resolv_initialize)
sock = new(host, port, connect_timeout: options[:socket_timeout])
yield(sock)
else
if ::TCPSocket.private_instance_methods.include?(:original_resolv_initialize)
Timeout.timeout(options[:socket_timeout]) do
sock = new(host, port)
yield(sock)
end
else
sock = new(host, port, connect_timeout: options[:socket_timeout])
yield(sock)
end
end

# rubocop:disable Metrics/AbcSize
def self.init_socket_options(sock, options)
sock.setsockopt(::Socket::IPPROTO_TCP, ::Socket::TCP_NODELAY, true)
sock.setsockopt(::Socket::SOL_SOCKET, ::Socket::SO_KEEPALIVE, true) if options[:keepalive]
@@ -124,13 +112,18 @@ def self.init_socket_options(sock, options)

return unless options[:socket_timeout]

seconds, fractional = options[:socket_timeout].divmod(1)
microseconds = fractional * 1_000_000
timeval = [seconds, microseconds].pack('l_2')
if sock.respond_to?(:timeout=)
sock.timeout = options[:socket_timeout]
else
seconds, fractional = options[:socket_timeout].divmod(1)
microseconds = fractional * 1_000_000
timeval = [seconds, microseconds].pack('l_2')

sock.setsockopt(::Socket::SOL_SOCKET, ::Socket::SO_RCVTIMEO, timeval)
sock.setsockopt(::Socket::SOL_SOCKET, ::Socket::SO_SNDTIMEO, timeval)
sock.setsockopt(::Socket::SOL_SOCKET, ::Socket::SO_RCVTIMEO, timeval)
sock.setsockopt(::Socket::SOL_SOCKET, ::Socket::SO_SNDTIMEO, timeval)
end
end
# rubocop:enable Metrics/AbcSize

def self.wrapping_ssl_socket(tcp_socket, host, ssl_context)
ssl_socket = Dalli::Socket::SSLSocket.new(tcp_socket, ssl_context)
@@ -151,7 +144,6 @@ def initialize(*_args)
end
end
else

##
# UNIX represents a UNIX domain socket, which is an interprocess communication
# mechanism between processes on the same host. Used when the Memcached server
@@ -168,9 +160,16 @@ def self.open(path, options = {})
Timeout.timeout(options[:socket_timeout]) do
sock = new(path)
sock.options = { path: path }.merge(options)
init_socket_options(sock, options)
sock
end
end

def self.init_socket_options(sock, options)
# Following the options supported in https://man7.org/linux/man-pages/man7/unix.7.html
sock.setsockopt(::Socket::SOL_SOCKET, ::Socket::SO_SNDBUF, options[:sndbuf]) if options[:sndbuf]
sock.timeout = options[:socket_timeout] if options[:socket_timeout] && sock.respond_to?(:timeout=)
end
end
end
end
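The fallback branch of `init_socket_options` (for sockets without `timeout=`) splits a fractional `socket_timeout` into whole seconds and microseconds before packing a native timeval for `SO_RCVTIMEO`/`SO_SNDTIMEO`. A sketch of just that arithmetic (the explicit `to_i` is added here for clarity; `pack` would truncate the float itself):

```ruby
# Sketch of the timeval computation used for SO_RCVTIMEO/SO_SNDTIMEO:
# split 1.5s into 1 second + 500_000 microseconds, pack as two native longs.
socket_timeout = 1.5
seconds, fractional = socket_timeout.divmod(1)
microseconds = (fractional * 1_000_000).to_i
timeval = [seconds, microseconds].pack('l_2')
```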
4 changes: 2 additions & 2 deletions lib/rack/session/dalli.rb
@@ -175,8 +175,8 @@ def ensure_connection_pool_added!
raise e
end

def with_dalli_client(result_on_error = nil, &block)
@data.with(&block)
def with_dalli_client(result_on_error = nil, &)
@data.with(&)
rescue ::Dalli::DalliError, Errno::ECONNREFUSED
raise if $ERROR_INFO.message.include?('undefined class')

16 changes: 3 additions & 13 deletions scripts/install_memcached.sh
@@ -4,26 +4,16 @@ version=$MEMCACHED_VERSION


sudo apt-get -y remove memcached
sudo apt-get install libevent-dev libsasl2-dev sasl2-bin
sudo apt-get install libevent-dev

echo Installing Memcached version ${version}

# Install memcached with SASL and TLS support
# Install memcached TLS support
wget https://memcached.org/files/memcached-${version}.tar.gz
tar -zxvf memcached-${version}.tar.gz
cd memcached-${version}
./configure --enable-sasl --enable-tls
./configure --enable-tls
make
sudo mv memcached /usr/local/bin/

echo Memcached version ${version} installation complete

echo Configuring SASL

# Create SASL credentials for testing
echo 'mech_list: plain' | sudo tee -a /usr/lib/sasl2/memcached.conf > /dev/null

echo testtest | sudo saslpasswd2 -a memcached -c testuser -p
sudo chmod 644 /etc/sasldb2

echo SASL configuration complete
27 changes: 13 additions & 14 deletions test/benchmark_test.rb
@@ -4,11 +4,11 @@
require_relative 'helper'
require 'benchmark'

def profile(&block)
def profile(&)
return yield unless ENV['PROFILE']

prof = RubyProf::Profile.new
result = prof.profile(&block)
result = prof.profile(&)
rep = RubyProf::GraphHtmlPrinter.new(result)
file = File.new('profile.html', 'w')
rep.print(file)
@@ -36,13 +36,12 @@ def profile(&block)
end

it 'runs benchmarks' do
protocol = :binary
memcached(protocol, @port) do
memcached(@port) do
profile do
Benchmark.bm(37) do |x|
n = 2500

@m = Dalli::Client.new(@servers, protocol: protocol)
@m = Dalli::Client.new(@servers)
x.report('set:plain:dalli') do
n.times do
@m.set @key1, @marshalled, 0, raw: true
@@ -54,7 +53,7 @@ def profile(&block)
end
end

@m = Dalli::Client.new(@servers, protocol: protocol)
@m = Dalli::Client.new(@servers)
x.report('setq:plain:dalli') do
@m.multi do
n.times do
@@ -68,7 +67,7 @@ def profile(&block)
end
end

@m = Dalli::Client.new(@servers, protocol: protocol)
@m = Dalli::Client.new(@servers)
x.report('set:ruby:dalli') do
n.times do
@m.set @key1, @value
@@ -80,7 +79,7 @@ def profile(&block)
end
end

@m = Dalli::Client.new(@servers, protocol: protocol)
@m = Dalli::Client.new(@servers)
x.report('get:plain:dalli') do
n.times do
@m.get @key1, raw: true
@@ -92,7 +91,7 @@ def profile(&block)
end
end

@m = Dalli::Client.new(@servers, protocol: protocol)
@m = Dalli::Client.new(@servers)
x.report('get:ruby:dalli') do
n.times do
@m.get @key1
@@ -104,15 +103,15 @@ def profile(&block)
end
end

@m = Dalli::Client.new(@servers, protocol: protocol)
@m = Dalli::Client.new(@servers)
x.report('multiget:ruby:dalli') do
n.times do
# We don't use the keys array because splat is slow
@m.get_multi @key1, @key2, @key3, @key4, @key5, @key6
end
end

@m = Dalli::Client.new(@servers, protocol: protocol)
@m = Dalli::Client.new(@servers)
# rubocop:disable Lint/SuppressedException
x.report('missing:ruby:dalli') do
n.times do
@@ -126,7 +125,7 @@ def profile(&block)
end
# rubocop:enable Lint/SuppressedException

@m = Dalli::Client.new(@servers, protocol: protocol)
@m = Dalli::Client.new(@servers)
x.report('mixed:ruby:dalli') do
n.times do
@m.set @key1, @value
@@ -144,7 +143,7 @@ def profile(&block)
end
end

@m = Dalli::Client.new(@servers, protocol: protocol)
@m = Dalli::Client.new(@servers)
x.report('mixedq:ruby:dalli') do
n.times do
@m.multi do
@@ -173,7 +172,7 @@ def profile(&block)
end
end

@m = Dalli::Client.new(@servers, protocol: protocol)
@m = Dalli::Client.new(@servers)
x.report('incr:ruby:dalli') do
counter = 'foocount'
n.times do
8 changes: 4 additions & 4 deletions test/helper.rb
@@ -7,11 +7,11 @@
require 'minitest/autorun'
require_relative 'helpers/memcached'

ENV['SASL_CONF_PATH'] = "#{File.dirname(__FILE__)}/sasl/memcached.conf"

require 'dalli'
require 'logger'
require 'securerandom'
require 'toxiproxy'
require 'debug'

Dalli.logger = Logger.new($stdout)
Dalli.logger.level = Logger::ERROR
@@ -27,8 +27,8 @@ module Minitest
class Spec
include Memcached::Helper

def assert_error(error, regexp = nil, &block)
ex = assert_raises(error, &block)
def assert_error(error, regexp = nil, &)
ex = assert_raises(error, &)

assert_match(regexp, ex.message, "#{ex.class.name}: #{ex.message}\n#{ex.backtrace.join("\n\t")}")
end
83 changes: 36 additions & 47 deletions test/helpers/memcached.rb
@@ -3,29 +3,9 @@
require 'socket'
require_relative '../utils/certificate_generator'
require_relative '../utils/memcached_manager'
require_relative '../utils/memcached_mock'

module Memcached
module Helper
# Forks the current process and starts a new mock Memcached server on
# port 22122.
#
# memcached_mock(lambda {|sock| socket.write('123') }) do
# assert_equal "PONG", Dalli::Client.new('localhost:22122').get('abc')
# end
#
def memcached_mock(prc, meth = :start, meth_args = [])
return unless supports_fork?

begin
pid = fork_mock_process(prc, meth, meth_args)
sleep 0.3 # Give time for the socket to start listening.
yield
ensure
kill_process(pid)
end
end

# Launches a memcached process with the specified arguments. Takes
# a block to which an initialized Dalli::Client and the port_or_socket
# is passed.
@@ -36,28 +16,54 @@ def memcached_mock(prc, meth = :start, meth_args = [])
# client_options - Options passed to the Dalli::Client on initialization
# terminate_process - whether to terminate the memcached process on
# exiting the block
def memcached(protocol, port_or_socket, args = '', client_options = {}, terminate_process: true)
dc = MemcachedManager.start_and_flush_with_retry(port_or_socket, args, client_options.merge(protocol: protocol))
def memcached(port_or_socket, args = '', client_options = {}, terminate_process: true)
dc = MemcachedManager.start_and_flush_with_retry(port_or_socket, args, client_options)
yield dc, port_or_socket if block_given?
memcached_kill(port_or_socket) if terminate_process
end

# Launches a memcached process using the memcached method in this module,
# but sets terminate_process to false ensuring that the process persists
# past execution of the block argument.
# rubocop:disable Metrics/ParameterLists
def memcached_persistent(protocol = :binary, port_or_socket = 21_345, args = '', client_options = {}, &block)
memcached(protocol, port_or_socket, args, client_options, terminate_process: false, &block)
def memcached_persistent(port_or_socket = 21_345, args = '', client_options = {}, &)
memcached(port_or_socket, args, client_options, terminate_process: false, &)
end

###
# Launches a persistent memcached process that is proxied through Toxiproxy
# to test network errors.
# uses port 21347 for the Toxiproxy proxy port and the specified port_or_socket
# for the memcached process.
###
def toxi_memcached_persistent(
port = MemcachedManager::TOXIPROXY_UPSTREAM_PORT,
args = '',
client_options = {},
&
)
raise 'Toxiproxy does not support unix sockets' if port.to_i.zero?

unless @toxy_configured
Toxiproxy.populate([{
name: 'dalli_memcached',
listen: "localhost:#{MemcachedManager::TOXIPROXY_MEMCACHED_PORT}",
upstream: "localhost:#{port}"
}])
end
@toxy_configured ||= true
memcached_persistent(port, args, client_options) do |dc, _|
dc.close # We don't need the client to talk directly to memcached
end
dc = Dalli::Client.new("localhost:#{MemcachedManager::TOXIPROXY_MEMCACHED_PORT}", client_options)
yield dc, port
end
# rubocop:enable Metrics/ParameterLists

# Launches a persistent memcached process, configured to use SSL
def memcached_ssl_persistent(protocol = :binary, port_or_socket = rand(21_397..21_896), &block)
memcached_persistent(protocol,
port_or_socket,
def memcached_ssl_persistent(port_or_socket = rand(21_397..21_896), &)
memcached_persistent(port_or_socket,
CertificateGenerator.ssl_args,
{ ssl_context: CertificateGenerator.ssl_context },
&block)
&)
end

# Kills the memcached process that was launched using this helper on the
@@ -66,25 +72,8 @@ def memcached_kill(port_or_socket)
MemcachedManager.stop(port_or_socket)
end

# Launches a persistent memcached process, configured to use SASL authentication
def memcached_sasl_persistent(port_or_socket = 21_398, &block)
memcached_persistent(:binary, port_or_socket, '-S', sasl_credentials, &block)
end

# The SASL credentials used for the test SASL server
def sasl_credentials
{ username: 'testuser', password: 'testtest' }
end

private

def fork_mock_process(prc, meth, meth_args)
fork do
trap('TERM') { exit }
MemcachedMock.send(meth, *meth_args) { |*args| prc.call(*args) }
end
end

def kill_process(pid)
return unless pid

2 changes: 1 addition & 1 deletion test/integration/test_authentication.rb
@@ -7,7 +7,7 @@
let(:username) { SecureRandom.hex(5) }
it 'raises an error if the username is set' do
err = assert_raises Dalli::DalliError do
memcached_persistent(:meta, 21_345, '', username: username) do |dc|
memcached_persistent(21_345, '', username: username) do |dc|
dc.flush
dc.set('key1', 'abcd')
end
512 changes: 254 additions & 258 deletions test/integration/test_cas.rb

Large diffs are not rendered by default.

58 changes: 27 additions & 31 deletions test/integration/test_compressor.rb
@@ -14,43 +14,39 @@ def self.decompress(data)
end

describe 'Compressor' do
MemcachedManager.supported_protocols.each do |p|
describe "using the #{p} protocol" do
it 'default to Dalli::Compressor' do
memcached(p, 29_199) do |dc|
dc.set 1, 2
it 'default to Dalli::Compressor' do
memcached(29_199) do |dc|
dc.set 1, 2

assert_equal Dalli::Compressor, dc.instance_variable_get(:@ring).servers.first.compressor
end
end
assert_equal Dalli::Compressor, dc.instance_variable_get(:@ring).servers.first.compressor
end
end

it 'support a custom compressor' do
memcached(p, 29_199) do |_dc|
memcache = Dalli::Client.new('127.0.0.1:29199', { compressor: NoopCompressor })
memcache.set 1, 2
begin
assert_equal NoopCompressor,
memcache.instance_variable_get(:@ring).servers.first.compressor

memcached(p, 19_127) do |newdc|
assert newdc.set('string-test', 'a test string')
assert_equal('a test string', newdc.get('string-test'))
end
end
it 'support a custom compressor' do
memcached(29_199) do |_dc|
memcache = Dalli::Client.new('127.0.0.1:29199', { compressor: NoopCompressor })
memcache.set 1, 2
begin
assert_equal NoopCompressor,
memcache.instance_variable_get(:@ring).servers.first.compressor

memcached(19_127) do |newdc|
assert newdc.set('string-test', 'a test string')
assert_equal('a test string', newdc.get('string-test'))
end
end
end
end

describe 'GzipCompressor' do
it 'compress and uncompress data using Zlib::GzipWriter/Reader' do
memcached(p, 19_127) do |_dc|
memcache = Dalli::Client.new('127.0.0.1:19127', { compress: true, compressor: Dalli::GzipCompressor })
data = (0...1025).map { rand(65..90).chr }.join
describe 'GzipCompressor' do
it 'compress and uncompress data using Zlib::GzipWriter/Reader' do
memcached(19_127) do |_dc|
memcache = Dalli::Client.new('127.0.0.1:19127', { compress: true, compressor: Dalli::GzipCompressor })
data = (0...1025).map { rand(65..90).chr }.join

assert memcache.set('test', data)
assert_equal(data, memcache.get('test'))
assert_equal Dalli::GzipCompressor, memcache.instance_variable_get(:@ring).servers.first.compressor
end
end
assert memcache.set('test', data)
assert_equal(data, memcache.get('test'))
assert_equal Dalli::GzipCompressor, memcache.instance_variable_get(:@ring).servers.first.compressor
end
end
end
82 changes: 39 additions & 43 deletions test/integration/test_concurrency.rb
@@ -3,53 +3,49 @@
require_relative '../helper'

describe 'concurrent behavior' do
MemcachedManager.supported_protocols.each do |p|
describe "using the #{p} protocol" do
it 'supports multithreaded access' do
memcached_persistent(p) do |cache|
cache.flush
workers = []

cache.set('f', 'zzz')

assert op_cas_succeeds((cache.cas('f') do |value|
value << 'z'
end))
assert_equal 'zzzz', cache.get('f')

# Have a bunch of threads perform a bunch of operations at the same time.
# Verify the result of each operation to ensure the request and response
# are not intermingled between threads.
10.times do
workers << Thread.new do
100.times do
cache.set('a', 9)
cache.set('b', 11)
cache.incr('cat', 10, 0, 10)
cache.set('f', 'zzz')
res = cache.cas('f') do |value|
value << 'z'
end

refute_nil res
refute cache.add('a', 11)
assert_equal({ 'a' => 9, 'b' => 11 }, cache.get_multi(%w[a b]))
inc = cache.incr('cat', 10)

assert_equal 0, inc % 5
cache.decr('cat', 5)

assert_equal 11, cache.get('b')

assert_equal %w[a b], cache.get_multi('a', 'b', 'c').keys.sort
end
it 'supports multithreaded access' do
memcached_persistent do |cache|
cache.flush
workers = []

cache.set('f', 'zzz')

assert op_cas_succeeds((cache.cas('f') do |value|
value << 'z'
end))
assert_equal 'zzzz', cache.get('f')

# Have a bunch of threads perform a bunch of operations at the same time.
# Verify the result of each operation to ensure the request and response
# are not intermingled between threads.
10.times do
workers << Thread.new do
100.times do
cache.set('a', 9)
cache.set('b', 11)
cache.incr('cat', 10, 0, 10)
cache.set('f', 'zzz')
res = cache.cas('f') do |value|
value << 'z'
end
end

workers.each(&:join)
cache.flush
refute_nil res
refute cache.add('a', 11)
assert_equal({ 'a' => 9, 'b' => 11 }, cache.get_multi(%w[a b]))
inc = cache.incr('cat', 10)

assert_equal 0, inc % 5
cache.decr('cat', 5)

assert_equal 11, cache.get('b')

assert_equal %w[a b], cache.get_multi('a', 'b', 'c').keys.sort
end
end
end

workers.each(&:join)
cache.flush
end
end
end
16 changes: 6 additions & 10 deletions test/integration/test_connection_pool.rb
@@ -3,19 +3,15 @@
require_relative '../helper'

describe 'connection pool behavior' do
MemcachedManager.supported_protocols.each do |p|
describe "using the #{p} protocol" do
it 'can masquerade as a connection pool using the with method' do
memcached_persistent(p) do |dc|
dc.with { |c| c.set('some_key', 'some_value') }
it 'can masquerade as a connection pool using the with method' do
memcached_persistent do |dc|
dc.with { |c| c.set('some_key', 'some_value') }

assert_equal 'some_value', dc.get('some_key')
assert_equal 'some_value', dc.get('some_key')

dc.with { |c| c.delete('some_key') }
dc.with { |c| c.delete('some_key') }

assert_nil dc.get('some_key')
end
end
assert_nil dc.get('some_key')
end
end
end
30 changes: 13 additions & 17 deletions test/integration/test_encoding.rb
@@ -3,27 +3,23 @@
require_relative '../helper'

describe 'Encoding' do
MemcachedManager.supported_protocols.each do |p|
describe "using the #{p} protocol" do
it 'supports Unicode values' do
memcached_persistent(p) do |dc|
key = 'foo'
utf8 = 'ƒ©åÍÎ'
it 'supports Unicode values' do
memcached_persistent do |dc|
key = 'foo'
utf8 = 'ƒ©åÍÎ'

assert dc.set(key, utf8)
assert_equal utf8, dc.get(key)
end
end
assert dc.set(key, utf8)
assert_equal utf8, dc.get(key)
end
end

it 'supports Unicode keys' do
memcached_persistent(p) do |dc|
utf_key = utf8 = 'ƒ©åÍÎ'
it 'supports Unicode keys' do
memcached_persistent do |dc|
utf_key = utf8 = 'ƒ©åÍÎ'

dc.set(utf_key, utf8)
dc.set(utf_key, utf8)

assert_equal utf8, dc.get(utf_key)
end
end
assert_equal utf8, dc.get(utf_key)
end
end
end
226 changes: 109 additions & 117 deletions test/integration/test_failover.rb
@@ -3,175 +3,167 @@
require_relative '../helper'

describe 'failover' do
MemcachedManager.supported_protocols.each do |p|
describe "using the #{p} protocol" do
# Timeouts on JRuby work differently and aren't firing, meaning we're
# not testing the condition
unless defined? JRUBY_VERSION
describe 'timeouts' do
it 'not lead to corrupt sockets' do
memcached_persistent(p) do |dc|
value = { test: '123' }
begin
Timeout.timeout 0.01 do
start_time = Time.now
10_000.times do
dc.set('test_123', value)
end

flunk("Did not timeout in #{Time.now - start_time}")
end
rescue Timeout::Error
# Ignore expected timeout
end

assert_equal(value, dc.get('test_123'))
describe 'timeouts' do
it 'not lead to corrupt sockets' do
memcached_persistent do |dc|
value = { test: '123' }
begin
Timeout.timeout 0.01 do
start_time = Time.now
10_000.times do
dc.set('test_123', value)
end

flunk("Did not timeout in #{Time.now - start_time}")
end
rescue Timeout::Error
# Ignore expected timeout
end

assert_equal(value, dc.get('test_123'))
end
end

describe 'assuming some bad servers' do
it 'silently reconnect if server hiccups' do
server_port = 30_124
memcached_persistent(p, server_port) do |dc, port|
dc.set 'foo', 'bar'
foo = dc.get 'foo'
describe 'assuming some bad servers' do
it 'silently reconnect if server hiccups' do
server_port = 30_124
memcached_persistent(server_port) do |dc, port|
dc.set 'foo', 'bar'
foo = dc.get 'foo'

assert_equal('bar', foo)
assert_equal('bar', foo)

memcached_kill(port)
memcached_persistent(p, port) do
foo = dc.get 'foo'
memcached_kill(port)
memcached_persistent(port) do
foo = dc.get 'foo'

assert_nil foo
assert_nil foo

memcached_kill(port)
end
memcached_kill(port)
end
end
end

it 'reconnects if server idles the connection' do
port1 = 32_112
port2 = 37_887
it 'reconnects if server idles the connection' do
port1 = 32_112
port2 = 37_887

memcached(p, port1, '-o idle_timeout=1') do |_, first_port|
memcached(p, port2, '-o idle_timeout=1') do |_, second_port|
dc = Dalli::Client.new ["localhost:#{first_port}", "localhost:#{second_port}"]
dc.set 'foo', 'bar'
dc.set 'foo2', 'bar2'
foo = dc.get_multi 'foo', 'foo2'
memcached(port1, '-o idle_timeout=1') do |_, first_port|
memcached(port2, '-o idle_timeout=1') do |_, second_port|
dc = Dalli::Client.new ["localhost:#{first_port}", "localhost:#{second_port}"]
dc.set 'foo', 'bar'
dc.set 'foo2', 'bar2'
foo = dc.get_multi 'foo', 'foo2'

assert_equal({ 'foo' => 'bar', 'foo2' => 'bar2' }, foo)
assert_equal({ 'foo' => 'bar', 'foo2' => 'bar2' }, foo)

# wait for socket to expire and get cleaned up
sleep 5
# wait for socket to expire and get cleaned up
sleep 5

foo = dc.get_multi 'foo', 'foo2'
foo = dc.get_multi 'foo', 'foo2'

assert_equal({ 'foo' => 'bar', 'foo2' => 'bar2' }, foo)
end
assert_equal({ 'foo' => 'bar', 'foo2' => 'bar2' }, foo)
end
end
end

it 'handle graceful failover' do
port1 = 31_777
port2 = 32_113
memcached_persistent(p, port1) do |_first_dc, first_port|
memcached_persistent(p, port2) do |_second_dc, second_port|
dc = Dalli::Client.new ["localhost:#{first_port}", "localhost:#{second_port}"]
dc.set 'foo', 'bar'
foo = dc.get 'foo'
it 'handle graceful failover' do
port1 = 31_777
port2 = 32_113
memcached_persistent(port1) do |_first_dc, first_port|
memcached_persistent(port2) do |_second_dc, second_port|
dc = Dalli::Client.new ["localhost:#{first_port}", "localhost:#{second_port}"]
dc.set 'foo', 'bar'
foo = dc.get 'foo'

assert_equal('bar', foo)
assert_equal('bar', foo)

memcached_kill(first_port)
memcached_kill(first_port)

dc.set 'foo', 'bar'
foo = dc.get 'foo'
dc.set 'foo', 'bar'
foo = dc.get 'foo'

assert_equal('bar', foo)
assert_equal('bar', foo)

memcached_kill(second_port)
memcached_kill(second_port)

assert_raises Dalli::RingError, message: 'No server available' do
dc.set 'foo', 'bar'
end
assert_raises Dalli::RingError, message: 'No server available' do
dc.set 'foo', 'bar'
end
end
end
end

it 'handle them gracefully in get_multi' do
port1 = 32_971
port2 = 34_312
memcached_persistent(p, port1) do |_first_dc, first_port|
memcached(p, port2) do |_second_dc, second_port|
dc = Dalli::Client.new ["localhost:#{first_port}", "localhost:#{second_port}"]
dc.set 'a', 'a1'
result = dc.get_multi ['a']
it 'handle them gracefully in get_multi' do
port1 = 32_971
port2 = 34_312
memcached_persistent(port1) do |_first_dc, first_port|
memcached(port2) do |_second_dc, second_port|
dc = Dalli::Client.new ["localhost:#{first_port}", "localhost:#{second_port}"]
dc.set 'a', 'a1'
result = dc.get_multi ['a']

assert_equal({ 'a' => 'a1' }, result)
assert_equal({ 'a' => 'a1' }, result)

memcached_kill(first_port)
memcached_kill(first_port)

result = dc.get_multi ['a']
result = dc.get_multi ['a']

assert_equal({ 'a' => 'a1' }, result)
end
assert_equal({ 'a' => 'a1' }, result)
end
end
end

it 'handle graceful failover in get_multi' do
port1 = 34_541
port2 = 33_044
memcached_persistent(p, port1) do |_first_dc, first_port|
memcached_persistent(p, port2) do |_second_dc, second_port|
dc = Dalli::Client.new ["localhost:#{first_port}", "localhost:#{second_port}"]
dc.set 'foo', 'foo1'
dc.set 'bar', 'bar1'
result = dc.get_multi %w[foo bar]
it 'handle graceful failover in get_multi' do
port1 = 34_541
port2 = 33_044
memcached_persistent(port1) do |_first_dc, first_port|
memcached_persistent(port2) do |_second_dc, second_port|
dc = Dalli::Client.new ["localhost:#{first_port}", "localhost:#{second_port}"]
dc.set 'foo', 'foo1'
dc.set 'bar', 'bar1'
result = dc.get_multi %w[foo bar]

assert_equal({ 'foo' => 'foo1', 'bar' => 'bar1' }, result)
assert_equal({ 'foo' => 'foo1', 'bar' => 'bar1' }, result)

memcached_kill(first_port)
memcached_kill(first_port)

dc.set 'foo', 'foo1'
dc.set 'bar', 'bar1'
result = dc.get_multi %w[foo bar]
dc.set 'foo', 'foo1'
dc.set 'bar', 'bar1'
result = dc.get_multi %w[foo bar]

assert_equal({ 'foo' => 'foo1', 'bar' => 'bar1' }, result)
assert_equal({ 'foo' => 'foo1', 'bar' => 'bar1' }, result)

memcached_kill(second_port)
memcached_kill(second_port)

result = dc.get_multi %w[foo bar]
result = dc.get_multi %w[foo bar]

assert_empty(result)
end
assert_empty(result)
end
end
end
end

it 'stats it still properly report' do
port1 = 34_547
port2 = 33_219
memcached_persistent(p, port1) do |_first_dc, first_port|
memcached_persistent(p, port2) do |_second_dc, second_port|
dc = Dalli::Client.new ["localhost:#{first_port}", "localhost:#{second_port}"]
result = dc.stats
it 'stats it still properly report' do
port1 = 34_547
port2 = 33_219
memcached_persistent(port1) do |_first_dc, first_port|
memcached_persistent(port2) do |_second_dc, second_port|
dc = Dalli::Client.new ["localhost:#{first_port}", "localhost:#{second_port}"]
result = dc.stats

assert_instance_of Hash, result["localhost:#{first_port}"]
assert_instance_of Hash, result["localhost:#{second_port}"]
assert_instance_of Hash, result["localhost:#{first_port}"]
assert_instance_of Hash, result["localhost:#{second_port}"]

memcached_kill(first_port)
memcached_kill(first_port)

dc = Dalli::Client.new ["localhost:#{first_port}", "localhost:#{second_port}"]
result = dc.stats
dc = Dalli::Client.new ["localhost:#{first_port}", "localhost:#{second_port}"]
result = dc.stats

assert_instance_of NilClass, result["localhost:#{first_port}"]
assert_instance_of Hash, result["localhost:#{second_port}"]
assert_instance_of NilClass, result["localhost:#{first_port}"]
assert_instance_of Hash, result["localhost:#{second_port}"]

memcached_kill(second_port)
end
end
memcached_kill(second_port)
end
end
end
42 changes: 19 additions & 23 deletions test/integration/test_marshal.rb
@@ -4,35 +4,31 @@
require 'json'

describe 'Serializer configuration' do
MemcachedManager.supported_protocols.each do |p|
describe "using the #{p} protocol" do
it 'does not allow values over the 1MB limit' do
memcached_persistent(p) do |dc|
value = SecureRandom.random_bytes((1024 * 1024) + 30_000)
it 'does not allow values over the 1MB limit' do
memcached_persistent do |dc|
value = SecureRandom.random_bytes((1024 * 1024) + 30_000)

with_nil_logger do
assert_raises Dalli::ValueOverMaxSize do
dc.set('verylarge', value)
end
end
with_nil_logger do
assert_raises Dalli::ValueOverMaxSize do
dc.set('verylarge', value)
end
end
end
end

it 'allow large values under the limit to be set' do
memcached_persistent(p) do |dc|
value = '0' * 1024 * 1024
it 'allow large values under the limit to be set' do
memcached_persistent do |dc|
value = '0' * 1024 * 1024

assert dc.set('verylarge', value, nil, compress: true)
end
end
assert dc.set('verylarge', value, nil, compress: true)
end
end

it 'errors appropriately when the value cannot be marshalled' do
memcached_persistent(p) do |dc|
with_nil_logger do
assert_raises Dalli::MarshalError do
dc.set('a', proc { true })
end
end
it 'errors appropriately when the value cannot be marshalled' do
memcached_persistent do |dc|
with_nil_logger do
assert_raises Dalli::MarshalError do
dc.set('a', proc { true })
end
end
end
86 changes: 41 additions & 45 deletions test/integration/test_memcached_admin.rb
@@ -4,64 +4,60 @@
require 'json'

describe 'memcached admin commands' do
MemcachedManager.supported_protocols.each do |p|
describe "using the #{p} protocol" do
describe 'stats' do
it 'support stats' do
memcached_persistent(p) do |dc|
# make sure that get_hits would not equal 0
dc.set(:a, '1234567890' * 100_000)
dc.get(:a)
describe 'stats' do
it 'support stats' do
memcached_persistent do |dc|
# make sure that get_hits would not equal 0
dc.set(:a, '1234567890' * 100_000)
dc.get(:a)

stats = dc.stats
servers = stats.keys
stats = dc.stats
servers = stats.keys

assert(servers.any? do |s|
stats[s]['get_hits'].to_i != 0
end, 'general stats failed')
assert(servers.any? do |s|
stats[s]['get_hits'].to_i != 0
end, 'general stats failed')

stats_items = dc.stats(:items)
servers = stats_items.keys
stats_items = dc.stats(:items)
servers = stats_items.keys

assert(servers.all? do |s|
stats_items[s].keys.any? do |key|
key =~ /items:[0-9]+:number/
end
end, 'stats items failed')
assert(servers.all? do |s|
stats_items[s].keys.any? do |key|
key =~ /items:[0-9]+:number/
end
end, 'stats items failed')

stats_slabs = dc.stats(:slabs)
servers = stats_slabs.keys
stats_slabs = dc.stats(:slabs)
servers = stats_slabs.keys

assert(servers.all? do |s|
stats_slabs[s].keys.any?('active_slabs')
end, 'stats slabs failed')
assert(servers.all? do |s|
stats_slabs[s].keys.any?('active_slabs')
end, 'stats slabs failed')

# reset_stats test
results = dc.reset_stats
# reset_stats test
results = dc.reset_stats

assert(results.all? { |x| x })
stats = dc.stats
servers = stats.keys
assert(results.all? { |x| x })
stats = dc.stats
servers = stats.keys

# check if reset was performed
servers.each do |s|
assert_equal 0, dc.stats[s]['get_hits'].to_i
end
end
# check if reset was performed
servers.each do |s|
assert_equal 0, dc.stats[s]['get_hits'].to_i
end
end
end
end

describe 'version' do
it 'support version operation' do
memcached_persistent(p) do |dc|
v = dc.version
servers = v.keys
describe 'version' do
it 'support version operation' do
memcached_persistent do |dc|
v = dc.version
servers = v.keys

assert(servers.any? do |s|
!v[s].nil?
end, 'version failed')
end
end
assert(servers.any? do |s|
!v[s].nil?
end, 'version failed')
end
end
end
140 changes: 68 additions & 72 deletions test/integration/test_namespace_and_key.rb
@@ -3,94 +3,90 @@
require_relative '../helper'

describe 'Namespace and key behavior' do
MemcachedManager.supported_protocols.each do |p|
describe "using the #{p} protocol" do
it 'handles namespaced keys' do
memcached_persistent(p) do |_, port|
dc = Dalli::Client.new("localhost:#{port}", namespace: 'a')
dc.set('namespaced', 1)
dc2 = Dalli::Client.new("localhost:#{port}", namespace: 'b')
dc2.set('namespaced', 2)

assert_equal 1, dc.get('namespaced')
assert_equal 2, dc2.get('namespaced')
end
end
it 'handles namespaced keys' do
memcached_persistent do |_, port|
dc = Dalli::Client.new("localhost:#{port}", namespace: 'a')
dc.set('namespaced', 1)
dc2 = Dalli::Client.new("localhost:#{port}", namespace: 'b')
dc2.set('namespaced', 2)

assert_equal 1, dc.get('namespaced')
assert_equal 2, dc2.get('namespaced')
end
end

it 'handles a nil namespace' do
memcached_persistent(p) do |_, port|
dc = Dalli::Client.new("localhost:#{port}", namespace: nil)
dc.set('key', 1)
it 'handles a nil namespace' do
memcached_persistent do |_, port|
dc = Dalli::Client.new("localhost:#{port}", namespace: nil)
dc.set('key', 1)

assert_equal 1, dc.get('key')
end
end
assert_equal 1, dc.get('key')
end
end

it 'truncates cache keys that are too long' do
memcached_persistent(p) do |_, port|
dc = Dalli::Client.new("localhost:#{port}", namespace: 'some:namspace')
key = 'this-cache-key-is-far-too-long-so-it-must-be-hashed-and-truncated-and-stuff' * 10
value = 'some value'
it 'truncates cache keys that are too long' do
memcached_persistent do |_, port|
dc = Dalli::Client.new("localhost:#{port}", namespace: 'some:namspace')
key = 'this-cache-key-is-far-too-long-so-it-must-be-hashed-and-truncated-and-stuff' * 10
value = 'some value'

assert op_addset_succeeds(dc.set(key, value))
assert_equal value, dc.get(key)
end
end
assert op_addset_succeeds(dc.set(key, value))
assert_equal value, dc.get(key)
end
end

it 'handles namespaced keys in get_multi' do
memcached_persistent(p) do |_, port|
dc = Dalli::Client.new("localhost:#{port}", namespace: 'a')
dc.set('a', 1)
dc.set('b', 2)
it 'handles namespaced keys in get_multi' do
memcached_persistent do |_, port|
dc = Dalli::Client.new("localhost:#{port}", namespace: 'a')
dc.set('a', 1)
dc.set('b', 2)

assert_equal({ 'a' => 1, 'b' => 2 }, dc.get_multi('a', 'b'))
end
end
assert_equal({ 'a' => 1, 'b' => 2 }, dc.get_multi('a', 'b'))
end
end

it 'handles special Regexp characters in namespace with get_multi' do
memcached_persistent(p) do |_, port|
# /(?!)/ is a contradictory PCRE and should never be able to match
dc = Dalli::Client.new("localhost:#{port}", namespace: '(?!)')
dc.set('a', 1)
dc.set('b', 2)
it 'handles special Regexp characters in namespace with get_multi' do
memcached_persistent do |_, port|
# /(?!)/ is a contradictory PCRE and should never be able to match
dc = Dalli::Client.new("localhost:#{port}", namespace: '(?!)')
dc.set('a', 1)
dc.set('b', 2)

assert_equal({ 'a' => 1, 'b' => 2 }, dc.get_multi('a', 'b'))
end
end
assert_equal({ 'a' => 1, 'b' => 2 }, dc.get_multi('a', 'b'))
end
end

it 'allows whitespace characters in keys' do
memcached_persistent(p) do |dc|
dc.set "\t", 1
it 'allows whitespace characters in keys' do
memcached_persistent do |dc|
dc.set "\t", 1

assert_equal 1, dc.get("\t")
dc.set "\n", 1
assert_equal 1, dc.get("\t")
dc.set "\n", 1

assert_equal 1, dc.get("\n")
dc.set ' ', 1
assert_equal 1, dc.get("\n")
dc.set ' ', 1

assert_equal 1, dc.get(' ')
end
end
assert_equal 1, dc.get(' ')
end
end

it 'does not allow blanks for keys' do
memcached_persistent(p) do |dc|
assert_raises ArgumentError do
dc.set '', 1
end
assert_raises ArgumentError do
dc.set nil, 1
end
end
it 'does not allow blanks for keys' do
memcached_persistent do |dc|
assert_raises ArgumentError do
dc.set '', 1
end
assert_raises ArgumentError do
dc.set nil, 1
end
end
end

it 'allow the namespace to be a symbol' do
memcached_persistent(p) do |_, port|
dc = Dalli::Client.new("localhost:#{port}", namespace: :wunderschoen)
dc.set 'x' * 251, 1
it 'allow the namespace to be a symbol' do
memcached_persistent do |_, port|
dc = Dalli::Client.new("localhost:#{port}", namespace: :wunderschoen)
dc.set 'x' * 251, 1

assert_equal(1, dc.get(('x' * 251).to_s))
end
end
assert_equal(1, dc.get(('x' * 251).to_s))
end
end
end
482 changes: 228 additions & 254 deletions test/integration/test_network.rb

Large diffs are not rendered by default.

653 changes: 382 additions & 271 deletions test/integration/test_operations.rb

Large diffs are not rendered by default.

265 changes: 184 additions & 81 deletions test/integration/test_pipelined_get.rb
@@ -3,103 +3,206 @@
require_relative '../helper'

describe 'Pipelined Get' do
MemcachedManager.supported_protocols.each do |p|
describe "using the #{p} protocol" do
it 'supports pipelined get' do
memcached_persistent(p) do |dc|
dc.close
dc.flush
resp = dc.get_multi(%w[a b c d e f])

assert_empty(resp)

dc.set('a', 'foo')
dc.set('b', 123)
dc.set('c', %w[a b c])

# Invocation without block
resp = dc.get_multi(%w[a b c d e f])
expected_resp = { 'a' => 'foo', 'b' => 123, 'c' => %w[a b c] }

assert_equal(expected_resp, resp)

# Invocation with block
dc.get_multi(%w[a b c d e f]) do |k, v|
assert(expected_resp.key?(k) && expected_resp[k] == v)
expected_resp.delete(k)
end

assert_empty expected_resp

# Perform a big quiet set with 1000 elements.
arr = []
dc.multi do
1000.times do |idx|
dc.set idx, idx
arr << idx
end
end

# Retrieve the elements with a pipelined get
result = dc.get_multi(arr)

assert_equal(1000, result.size)
assert_equal(50, result['50'])
it 'supports pipelined get' do
memcached_persistent do |dc|
dc.close
dc.flush
resp = dc.get_multi(%w[a b c d e f])

assert_empty(resp)

dc.set('a', 'foo')
dc.set('b', 123)
dc.set('c', %w[a b c])

# Invocation without block
resp = dc.get_multi(%w[a b c d e f])
expected_resp = { 'a' => 'foo', 'b' => 123, 'c' => %w[a b c] }

assert_equal(expected_resp, resp)

# Invocation with block
dc.get_multi(%w[a b c d e f]) do |k, v|
assert(expected_resp.key?(k) && expected_resp[k] == v)
expected_resp.delete(k)
end

assert_empty expected_resp

# Perform a big quiet set with 1000 elements.
arr = []
dc.multi do
1000.times do |idx|
dc.set idx, idx
arr << idx
end
end

it 'supports pipelined get with keys containing Unicode or spaces' do
memcached_persistent(p) do |dc|
dc.close
dc.flush
# Retrieve the elements with a pipelined get
result = dc.get_multi(arr)

assert_equal(1000, result.size)
assert_equal(50, result['50'])
end
end
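The two invocation styles exercised above (hash return vs. block yield) boil down to a simple pattern; a minimal sketch using a plain Hash as a stand-in for the cache (illustrative only, not Dalli's implementation):

```ruby
# Sketch of get_multi's dual calling convention: yield hits to a block when
# one is given, otherwise collect them into a Hash. Misses are simply
# absent from the result (no default values).
def fetch_multi(store, keys)
  if block_given?
    keys.each { |k| yield(k, store[k]) if store.key?(k) }
    nil
  else
    keys.each_with_object({}) { |k, h| h[k] = store[k] if store.key?(k) }
  end
end
```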

it 'supports pipelined get for single key' do
memcached_persistent do |dc|
dc.close
dc.flush

keys_to_query = ['a']

resp = dc.get_multi(keys_to_query)

assert_empty(resp)

dc.set('a', 'foo')

# Invocation without block
resp = dc.get_multi(keys_to_query)
expected_resp = { 'a' => 'foo' }

assert_equal(expected_resp, resp)

# Invocation with block
dc.get_multi(keys_to_query) do |k, v|
assert(expected_resp.key?(k) && expected_resp[k] == v)
expected_resp.delete(k)
end

assert_empty expected_resp
end
end

keys_to_query = ['a', 'b', 'contains space', 'ƒ©åÍÎ']
it 'meta read_multi_req supports optimized raw get' do
memcached_persistent do |_, port|
dc = Dalli::Client.new("localhost:#{port}", namespace: 'some:namespace', raw: true)
dc.close
dc.flush

resp = dc.get_multi(keys_to_query)
keys_to_query = %w[a b]

resp = dc.get_multi(keys_to_query)

assert_empty(resp)

dc.set('a', 'foo')
dc.set('b', 'bar')

resp = dc.get_multi(keys_to_query)
expected_resp = { 'a' => 'foo', 'b' => 'bar' }

assert_equal(expected_resp, resp)
end
end

assert_empty(resp)
it 'meta read_multi_req pipelined get does not default values on misses' do
memcached_persistent do |_, port|
dc = Dalli::Client.new("localhost:#{port}", namespace: 'some:namespace')
dc.close
dc.flush

dc.set('a', 'foo')
dc.set('contains space', 123)
dc.set('ƒ©åÍÎ', %w[a b c])
keys_to_query = %w[a b]

# Invocation without block
resp = dc.get_multi(keys_to_query)
expected_resp = { 'a' => 'foo', 'contains space' => 123, 'ƒ©åÍÎ' => %w[a b c] }
resp = dc.get_multi(keys_to_query)

assert_equal(expected_resp, resp)
assert_empty(resp)

# Invocation with block
dc.get_multi(keys_to_query) do |k, v|
assert(expected_resp.key?(k) && expected_resp[k] == v)
expected_resp.delete(k)
end
dc.set('a', 'foo')

assert_empty expected_resp
# Invocation without block
resp = dc.get_multi(keys_to_query)
expected_resp = { 'a' => 'foo' }

assert_nil(resp['b'])
assert_equal(expected_resp, resp)

# Invocation with block
dc.get_multi(keys_to_query) do |k, v|
assert(expected_resp.key?(k) && expected_resp[k] == v)
expected_resp.delete(k)
end

assert_empty expected_resp
end
end

it 'raises network errors' do
toxi_memcached_persistent(19_997, '', { down_retry_delay: 0 }) do |dc|
dc.close
dc.flush

resp = dc.get_multi(%w[a b c d e f])

assert_empty(resp)

dc.set('a', 'foo')
dc.set('b', 123)
dc.set('c', %w[a b c])

Toxiproxy[/dalli_memcached/].down do
assert_raises Dalli::NetworkError do
dc.get_multi(%w[a b c d e f])
end
end

describe 'pipeline_next_responses' do
it 'raises NetworkError when called before pipeline_response_setup' do
memcached_persistent(p) do |dc|
server = dc.send(:ring).servers.first
server.request(:pipelined_get, %w[a b])
assert_raises Dalli::NetworkError do
server.pipeline_next_responses
end
end
val = dc.get_multi(%w[a b c d e f])

assert_equal({ 'a' => 'foo', 'b' => 123, 'c' => %w[a b c] }, val)
end
end

it 'supports pipelined get with keys containing Unicode or spaces' do
memcached_persistent do |dc|
dc.close
dc.flush

keys_to_query = ['a', 'b', 'contains space', 'ƒ©åÍÎ']

resp = dc.get_multi(keys_to_query)

assert_empty(resp)

dc.set('a', 'foo')
dc.set('contains space', 123)
dc.set('ƒ©åÍÎ', %w[a b c])

# Invocation without block
resp = dc.get_multi(keys_to_query)
expected_resp = { 'a' => 'foo', 'contains space' => 123, 'ƒ©åÍÎ' => %w[a b c] }

assert_equal(expected_resp, resp)

# Invocation with block
dc.get_multi(keys_to_query) do |k, v|
assert(expected_resp.key?(k) && expected_resp[k] == v)
expected_resp.delete(k)
end

assert_empty expected_resp
end
end

describe 'pipeline_next_responses' do
it 'raises NetworkError when called before pipeline_response_setup' do
memcached_persistent do |dc|
server = dc.send(:ring).servers.first
server.request(:pipelined_get, %w[a b])
assert_raises Dalli::NetworkError do
server.pipeline_next_responses
end
end
end

it 'raises NetworkError when called after pipeline_abort' do
memcached_persistent(p) do |dc|
server = dc.send(:ring).servers.first
server.request(:pipelined_get, %w[a b])
server.pipeline_response_setup
server.pipeline_abort
assert_raises Dalli::NetworkError do
server.pipeline_next_responses
end
end
it 'raises NetworkError when called after pipeline_abort' do
memcached_persistent do |dc|
server = dc.send(:ring).servers.first
server.request(:pipelined_get, %w[a b])
server.pipeline_response_setup
server.pipeline_abort
assert_raises Dalli::NetworkError do
server.pipeline_next_responses
end
end
end
57 changes: 57 additions & 0 deletions test/integration/test_pipelined_set.rb
@@ -0,0 +1,57 @@
# frozen_string_literal: true

require_relative '../helper'

describe 'Pipelined Set' do
it 'supports pipelined set' do
memcached_persistent do |dc|
dc.close
dc.flush
end
toxi_memcached_persistent do |dc|
dc.close
dc.flush

resp = dc.get_multi(%w[a b c d e f])

assert_empty(resp)

pairs = { 'a' => 'foo', 'b' => 123, 'c' => 'raw' }
dc.set_multi(pairs, 60, raw: true)

# Invocation without block
resp = dc.get_multi(%w[a b c d e f])
expected_resp = { 'a' => 'foo', 'b' => '123', 'c' => 'raw' }

assert_equal(expected_resp, resp)
end
end

it 'pipelined set raises network errors' do
memcached_persistent do |dc|
dc.close
dc.flush
end
toxi_memcached_persistent(19_997, '', { down_retry_delay: 0 }) do |dc|
dc.close
dc.flush

resp = dc.get_multi(%w[a b c d e f])

assert_empty(resp)

pairs = { 'a' => 'foo', 'b' => 123, 'c' => 'raw' }

Toxiproxy[/dalli_memcached/].down do
assert_raises Dalli::NetworkError do
dc.set_multi(pairs, 60, raw: true)
end
end
# Invocation without block should reconnect and not have set any keys
resp = dc.get_multi(%w[a b c d e f])
expected_resp = {}

assert_equal(expected_resp, resp)
end
end
end
438 changes: 217 additions & 221 deletions test/integration/test_quiet.rb

Large diffs are not rendered by default.

89 changes: 0 additions & 89 deletions test/integration/test_sasl.rb

This file was deleted.

34 changes: 15 additions & 19 deletions test/integration/test_serializer.rb
@@ -4,28 +4,24 @@
require 'json'

describe 'Serializer configuration' do
MemcachedManager.supported_protocols.each do |p|
describe "using the #{p} protocol" do
it 'defaults to Marshal' do
memcached(p, 29_198) do |dc|
dc.set 1, 2
it 'defaults to Marshal' do
memcached(29_198) do |dc|
dc.set 1, 2

assert_equal Marshal, dc.instance_variable_get(:@ring).servers.first.serializer
end
end
assert_equal Marshal, dc.instance_variable_get(:@ring).servers.first.serializer
end
end

it 'supports a custom serializer' do
memcached(p, 29_198) do |_dc, port|
memcache = Dalli::Client.new("127.0.0.1:#{port}", serializer: JSON)
memcache.set 1, 2
begin
assert_equal JSON, memcache.instance_variable_get(:@ring).servers.first.serializer
it 'supports a custom serializer' do
memcached(29_198) do |_dc, port|
memcache = Dalli::Client.new("127.0.0.1:#{port}", serializer: JSON)
memcache.set 1, 2
begin
assert_equal JSON, memcache.instance_variable_get(:@ring).servers.first.serializer

memcached(p, 21_956) do |newdc|
assert newdc.set('json_test', { 'foo' => 'bar' })
assert_equal({ 'foo' => 'bar' }, newdc.get('json_test'))
end
end
memcached(21_956) do |newdc|
assert newdc.set('json_test', { 'foo' => 'bar' })
assert_equal({ 'foo' => 'bar' }, newdc.get('json_test'))
end
end
end
44 changes: 20 additions & 24 deletions test/integration/test_ttl.rb
@@ -3,36 +3,32 @@
require_relative '../helper'

describe 'TTL behavior' do
MemcachedManager.supported_protocols.each do |p|
describe "using the #{p} protocol" do
it 'raises error with invalid client level expires_in' do
bad_data = [{ bad: 'expires in data' }, Hash, [1, 2, 3]]
it 'raises error with invalid client level expires_in' do
bad_data = [{ bad: 'expires in data' }, Hash, [1, 2, 3]]

bad_data.each do |bad|
assert_raises ArgumentError do
Dalli::Client.new('foo', { expires_in: bad })
end
end
bad_data.each do |bad|
assert_raises ArgumentError do
Dalli::Client.new('foo', { expires_in: bad })
end
end
end

it 'supports a TTL on set' do
memcached_persistent(p) do |dc|
key = 'foo'
it 'supports a TTL on set' do
memcached_persistent do |dc|
key = 'foo'

assert dc.set(key, 'bar', 1)
assert_equal 'bar', dc.get(key)
sleep 1.2
assert dc.set(key, 'bar', 1)
assert_equal 'bar', dc.get(key)
sleep 1.2

assert_nil dc.get(key)
end
end
assert_nil dc.get(key)
end
end

it 'generates an ArgumentError for ttl that does not support to_i' do
memcached_persistent(p) do |dc|
assert_raises ArgumentError do
dc.set('foo', 'bar', [])
end
end
it 'generates an ArgumentError for ttl that does not support to_i' do
memcached_persistent do |dc|
assert_raises ArgumentError do
dc.set('foo', 'bar', [])
end
end
end
22 changes: 11 additions & 11 deletions test/protocol/meta/test_request_formatter.rb
@@ -41,61 +41,61 @@
let(:ttl) { rand(500..999) }

it 'returns the default (treat as a set, no CAS check) when just passed key, datalen, and bitflags' do
assert_equal "ms #{key} #{val.bytesize} c F#{bitflags} MS\r\n#{val}\r\n",
assert_equal "ms #{key} #{val.bytesize} c F#{bitflags} MS\r\n",
Dalli::Protocol::Meta::RequestFormatter.meta_set(key: key, value: val, bitflags: bitflags)
end

it 'supports the add mode' do
assert_equal "ms #{key} #{val.bytesize} c F#{bitflags} ME\r\n#{val}\r\n",
assert_equal "ms #{key} #{val.bytesize} c F#{bitflags} ME\r\n",
Dalli::Protocol::Meta::RequestFormatter.meta_set(key: key, value: val, bitflags: bitflags,
mode: :add)
end

it 'supports the replace mode' do
assert_equal "ms #{key} #{val.bytesize} c F#{bitflags} MR\r\n#{val}\r\n",
assert_equal "ms #{key} #{val.bytesize} c F#{bitflags} MR\r\n",
Dalli::Protocol::Meta::RequestFormatter.meta_set(key: key, value: val, bitflags: bitflags,
mode: :replace)
end

it 'passes a TTL if one is provided' do
assert_equal "ms #{key} #{val.bytesize} c F#{bitflags} T#{ttl} MS\r\n#{val}\r\n",
assert_equal "ms #{key} #{val.bytesize} c F#{bitflags} T#{ttl} MS\r\n",
Dalli::Protocol::Meta::RequestFormatter.meta_set(key: key, value: val, ttl: ttl, bitflags: bitflags)
end

it 'omits the CAS flag on append' do
assert_equal "ms #{key} #{val.bytesize} MA\r\n#{val}\r\n",
assert_equal "ms #{key} #{val.bytesize} MA\r\n",
Dalli::Protocol::Meta::RequestFormatter.meta_set(key: key, value: val, mode: :append)
end

it 'omits the CAS flag on prepend' do
assert_equal "ms #{key} #{val.bytesize} MP\r\n#{val}\r\n",
assert_equal "ms #{key} #{val.bytesize} MP\r\n",
Dalli::Protocol::Meta::RequestFormatter.meta_set(key: key, value: val, mode: :prepend)
end

it 'passes a CAS if one is provided' do
assert_equal "ms #{key} #{val.bytesize} c F#{bitflags} C#{cas} MS\r\n#{val}\r\n",
assert_equal "ms #{key} #{val.bytesize} c F#{bitflags} C#{cas} MS\r\n",
Dalli::Protocol::Meta::RequestFormatter.meta_set(key: key, value: val, bitflags: bitflags, cas: cas)
end

it 'excludes CAS if set to 0' do
assert_equal "ms #{key} #{val.bytesize} c F#{bitflags} MS\r\n#{val}\r\n",
assert_equal "ms #{key} #{val.bytesize} c F#{bitflags} MS\r\n",
Dalli::Protocol::Meta::RequestFormatter.meta_set(key: key, value: val, bitflags: bitflags, cas: 0)
end

it 'excludes non-numeric CAS values' do
assert_equal "ms #{key} #{val.bytesize} c F#{bitflags} MS\r\n#{val}\r\n",
assert_equal "ms #{key} #{val.bytesize} c F#{bitflags} MS\r\n",
Dalli::Protocol::Meta::RequestFormatter.meta_set(key: key, value: val, bitflags: bitflags,
cas: "\nset importantkey 1 1000 8\ninjected")
end

it 'sets the quiet mode if configured' do
assert_equal "ms #{key} #{val.bytesize} c F#{bitflags} MS q\r\n#{val}\r\n",
assert_equal "ms #{key} #{val.bytesize} F#{bitflags} MS q\r\n",
Dalli::Protocol::Meta::RequestFormatter.meta_set(key: key, value: val, bitflags: bitflags,
quiet: true)
end

it 'sets the base64 mode if configured' do
assert_equal "ms #{key} #{val.bytesize} c b F#{bitflags} MS\r\n#{val}\r\n",
assert_equal "ms #{key} #{val.bytesize} c b F#{bitflags} MS\r\n",
Dalli::Protocol::Meta::RequestFormatter.meta_set(key: key, value: val, bitflags: bitflags,
base64: true)
end
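The flag strings asserted above follow the memcached meta protocol's `ms` command. A simplified request-line builder that reproduces those expected strings (a sketch: the flag letters `c`, `b`, `F`, `T`, `C`, `q` come from the assertions above, but the builder itself is not Dalli's actual formatter, and the value payload is sent separately from this header line):

```ruby
# Simplified sketch of a meta-set ("ms") request-line builder matching the
# expected strings in the tests above. Not Dalli's actual formatter.
def meta_set_line(key:, value:, bitflags: nil, ttl: nil, cas: nil,
                  mode: :set, quiet: false, base64: false)
  mode_flag = { set: 'MS', add: 'ME', replace: 'MR',
                append: 'MA', prepend: 'MP' }.fetch(mode)
  concat_mode = %i[append prepend].include?(mode)
  flags = []
  flags << 'c' unless concat_mode || quiet # request CAS back, except on concat/quiet
  flags << 'b' if base64                   # key is base64-encoded
  flags << "F#{bitflags}" if bitflags && !concat_mode
  flags << "T#{ttl}" if ttl
  flags << "C#{cas}" if cas.is_a?(Integer) && cas.positive? # numeric CAS only
  flags << mode_flag
  flags << 'q' if quiet                    # quiet: suppress success response
  "ms #{key} #{value.bytesize} #{flags.join(' ')}\r\n"
end
```

Note how a non-numeric CAS argument is simply dropped, which is what the injection-string assertion above checks for.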
111 changes: 0 additions & 111 deletions test/protocol/test_binary.rb

This file was deleted.

110 changes: 77 additions & 33 deletions test/protocol/test_value_serializer.rb
@@ -35,54 +35,98 @@
let(:vs_options) { { serializer: serializer } }
let(:raw_value) { Object.new }

describe 'when the request options are nil' do
let(:req_options) { nil }
describe 'when raw is not default' do
describe 'when the request options are nil' do
let(:req_options) { nil }

it 'serializes the value' do
serializer.expect :dump, serialized_dummy, [raw_value]
val, newbitflags = vs.store(raw_value, req_options, bitflags)
it 'serializes the value' do
serializer.expect :dump, serialized_dummy, [raw_value]
val, newbitflags = vs.store(raw_value, req_options, bitflags)

assert_equal val, serialized_dummy
assert_equal newbitflags, (bitflags | 0x1)
serializer.verify
assert_equal val, serialized_dummy
assert_equal newbitflags, (bitflags | 0x1)
serializer.verify
end
end
end

describe 'when the request options do not specify a value for the :raw key' do
let(:req_options) { { other: SecureRandom.hex(4) } }
describe 'when the request options do not specify a value for the :raw key' do
let(:req_options) { { other: SecureRandom.hex(4) } }

it 'serializes the value' do
serializer.expect :dump, serialized_dummy, [raw_value]
val, newbitflags = vs.store(raw_value, req_options, bitflags)
it 'serializes the value' do
serializer.expect :dump, serialized_dummy, [raw_value]
val, newbitflags = vs.store(raw_value, req_options, bitflags)

assert_equal val, serialized_dummy
assert_equal newbitflags, (bitflags | 0x1)
serializer.verify
assert_equal val, serialized_dummy
assert_equal newbitflags, (bitflags | 0x1)
serializer.verify
end
end
end

describe 'when the request options value for the :raw key is false' do
let(:req_options) { { raw: false } }
describe 'when the request options value for the :raw key is false' do
let(:req_options) { { raw: false } }

it 'serializes the value' do
serializer.expect :dump, serialized_dummy, [raw_value]
val, newbitflags = vs.store(raw_value, req_options, bitflags)
it 'serializes the value' do
serializer.expect :dump, serialized_dummy, [raw_value]
val, newbitflags = vs.store(raw_value, req_options, bitflags)

assert_equal val, serialized_dummy
assert_equal newbitflags, (bitflags | 0x1)
serializer.verify
assert_equal val, serialized_dummy
assert_equal newbitflags, (bitflags | 0x1)
serializer.verify
end
end

describe 'when the request options value for the :raw key is true' do
let(:req_options) { { raw: true } }

it 'does not call the serializer and just converts the input value to a string' do
val, newbitflags = vs.store(raw_value, req_options, bitflags)

assert_equal val, raw_value.to_s
assert_equal newbitflags, bitflags
serializer.verify
end
end
end

describe 'when the request options value for the :raw key is true' do
let(:req_options) { { raw: true } }
describe 'when raw is default' do
let(:vs) { Dalli::Protocol::ValueSerializer.new(vs_options) }
let(:vs_options) { { serializer: serializer, raw: true } }

it 'does not call the serializer and just converts the input value to a string' do
val, newbitflags = vs.store(raw_value, req_options, bitflags)
describe 'when the request options do not specify a value for the :raw key' do
let(:req_options) { { other: SecureRandom.hex(4) } }

assert_equal val, raw_value.to_s
assert_equal newbitflags, bitflags
serializer.verify
it 'uses string conversion as a default' do
val, newbitflags = vs.store(raw_value, req_options, bitflags)

assert_equal raw_value.to_s, val
assert_equal newbitflags, bitflags
serializer.verify
end
end

describe 'when the request options value for the :raw key is false' do
let(:req_options) { { raw: false } }

it 'serializes the value' do
serializer.expect :dump, serialized_dummy, [raw_value]
val, newbitflags = vs.store(raw_value, req_options, bitflags)

assert_equal val, serialized_dummy
assert_equal newbitflags, (bitflags | 0x1)
serializer.verify
end
end

describe 'when the request options value for the :raw key is true' do
let(:req_options) { { raw: true } }

it 'does not call the serializer and just converts the input value to a string' do
val, newbitflags = vs.store(raw_value, req_options, bitflags)

assert_equal val, raw_value.to_s
assert_equal newbitflags, bitflags
serializer.verify
end
end
end
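The branching these cases exercise can be summarized in a few lines: a per-request `:raw` option overrides a client-level default, and only non-raw values go through the serializer (picking up the serialized bitflag). A sketch under assumed names (this is not `Dalli::Protocol::ValueSerializer` itself):

```ruby
# Sketch of the raw-vs-serialized store decision exercised above.
# FLAG_SERIALIZED and store_value are assumed names for illustration.
FLAG_SERIALIZED = 0x1

def store_value(value, req_options, bitflags, default_raw: false)
  raw = req_options&.key?(:raw) ? req_options[:raw] : default_raw
  if raw
    [value.to_s, bitflags]                            # pass through as a string
  else
    [Marshal.dump(value), bitflags | FLAG_SERIALIZED] # mark value as serialized
  end
end
```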

3 changes: 2 additions & 1 deletion test/test_client_options.rb
@@ -4,9 +4,10 @@

describe 'Dalli client options' do
it 'not warn about valid options' do
dc = Dalli::Client.new('foo', compress: true)
dc = Dalli::Client.new('foo', compress: true, raw: true)
# Rails.logger.expects :warn
assert_operator dc.instance_variable_get(:@options), :[], :compress
assert_operator dc.instance_variable_get(:@options), :[], :raw
end

describe 'servers configuration' do
64 changes: 32 additions & 32 deletions test/test_rack_session.rb
@@ -9,7 +9,7 @@
describe Rack::Session::Dalli do
before do
@port = 19_129
memcached_persistent(:binary, @port)
memcached_persistent(@port)
Rack::Session::Dalli::DEFAULT_DALLI_OPTIONS[:memcache_server] = "localhost:#{@port}"

# test memcache connection
@@ -114,7 +114,7 @@
res = Rack::MockRequest.new(rsd).get('/')

assert_includes res['Set-Cookie'], "#{session_key}="
assert_equal '{"counter"=>1}', res.body
assert_equal '{"counter"=>1}', res.body.delete(' ')
end

it 'determines session from a cookie' do
@@ -123,8 +123,8 @@
res = req.get('/')
cookie = res['Set-Cookie']

assert_equal '{"counter"=>2}', req.get('/', 'HTTP_COOKIE' => cookie).body
assert_equal '{"counter"=>3}', req.get('/', 'HTTP_COOKIE' => cookie).body
assert_equal '{"counter"=>2}', req.get('/', 'HTTP_COOKIE' => cookie).body.delete(' ')
assert_equal '{"counter"=>3}', req.get('/', 'HTTP_COOKIE' => cookie).body.delete(' ')
end

it 'determines session only from a cookie by default' do
@@ -133,8 +133,8 @@
res = req.get('/')
sid = res['Set-Cookie'][session_match, 1]

assert_equal '{"counter"=>1}', req.get("/?rack.session=#{sid}").body
assert_equal '{"counter"=>1}', req.get("/?rack.session=#{sid}").body
assert_equal '{"counter"=>1}', req.get("/?rack.session=#{sid}").body.delete(' ')
assert_equal '{"counter"=>1}', req.get("/?rack.session=#{sid}").body.delete(' ')
end

it 'determines session from params' do
@@ -143,8 +143,8 @@
res = req.get('/')
sid = res['Set-Cookie'][session_match, 1]

assert_equal '{"counter"=>2}', req.get("/?rack.session=#{sid}").body
assert_equal '{"counter"=>3}', req.get("/?rack.session=#{sid}").body
assert_equal '{"counter"=>2}', req.get("/?rack.session=#{sid}").body.delete(' ')
assert_equal '{"counter"=>3}', req.get("/?rack.session=#{sid}").body.delete(' ')
end

it 'survives nonexistent cookies' do
@@ -153,7 +153,7 @@
res = Rack::MockRequest.new(rsd)
.get('/', 'HTTP_COOKIE' => bad_cookie)

assert_equal '{"counter"=>1}', res.body
assert_equal '{"counter"=>1}', res.body.delete(' ')
cookie = res['Set-Cookie'][session_match]

refute_match(/#{bad_cookie}/, cookie)
@@ -173,32 +173,32 @@
rsd = Rack::Session::Dalli.new(incrementor, expire_after: 3)
res = Rack::MockRequest.new(rsd).get('/')

assert_includes res.body, '"counter"=>1'
assert_includes res.body.delete(' '), '"counter"=>1'
cookie = res['Set-Cookie']
puts 'Sleeping to expire session' if $DEBUG
sleep 4
res = Rack::MockRequest.new(rsd).get('/', 'HTTP_COOKIE' => cookie)

refute_equal cookie, res['Set-Cookie']
assert_includes res.body, '"counter"=>1'
assert_includes res.body.delete(' '), '"counter"=>1'
end

it 'maintains freshness of existing sessions' do
rsd = Rack::Session::Dalli.new(incrementor, expire_after: 3)
res = Rack::MockRequest.new(rsd).get('/')

assert_includes res.body, '"counter"=>1'
assert_includes res.body.delete(' '), '"counter"=>1'
cookie = res['Set-Cookie']
res = Rack::MockRequest.new(rsd).get('/', 'HTTP_COOKIE' => cookie)

assert_equal cookie, res['Set-Cookie']
assert_includes res.body, '"counter"=>2'
assert_includes res.body.delete(' '), '"counter"=>2'
puts 'Sleeping to expire session' if $DEBUG
sleep 4
res = Rack::MockRequest.new(rsd).get('/', 'HTTP_COOKIE' => cookie)

refute_equal cookie, res['Set-Cookie']
assert_includes res.body, '"counter"=>1'
assert_includes res.body.delete(' '), '"counter"=>1'
end

it 'does not send the same session id if it did not change' do
@@ -208,17 +208,17 @@
res0 = req.get('/')
cookie = res0['Set-Cookie'][session_match]

assert_equal '{"counter"=>1}', res0.body
assert_equal '{"counter"=>1}', res0.body.delete(' ')

res1 = req.get('/', 'HTTP_COOKIE' => cookie)

assert_nil res1['Set-Cookie']
assert_equal '{"counter"=>2}', res1.body
assert_equal '{"counter"=>2}', res1.body.delete(' ')

res2 = req.get('/', 'HTTP_COOKIE' => cookie)

assert_nil res2['Set-Cookie']
assert_equal '{"counter"=>3}', res2.body
assert_equal '{"counter"=>3}', res2.body.delete(' ')
end

it 'deletes cookies with :drop option' do
@@ -230,17 +230,17 @@
res1 = req.get('/')
session = (cookie = res1['Set-Cookie'])[session_match]

assert_equal '{"counter"=>1}', res1.body
assert_equal '{"counter"=>1}', res1.body.delete(' ')

res2 = dreq.get('/', 'HTTP_COOKIE' => cookie)

assert_nil res2['Set-Cookie']
assert_equal '{"counter"=>2}', res2.body
assert_equal '{"counter"=>2}', res2.body.delete(' ')

res3 = req.get('/', 'HTTP_COOKIE' => cookie)

refute_equal session, res3['Set-Cookie'][session_match]
assert_equal '{"counter"=>1}', res3.body
assert_equal '{"counter"=>1}', res3.body.delete(' ')
end

it 'provides new session id with :renew option' do
@@ -252,23 +252,23 @@
res1 = req.get('/')
session = (cookie = res1['Set-Cookie'])[session_match]

assert_equal '{"counter"=>1}', res1.body
assert_equal '{"counter"=>1}', res1.body.delete(' ')

res2 = rreq.get('/', 'HTTP_COOKIE' => cookie)
new_cookie = res2['Set-Cookie']
new_session = new_cookie[session_match]

refute_equal session, new_session
assert_equal '{"counter"=>2}', res2.body
assert_equal '{"counter"=>2}', res2.body.delete(' ')

res3 = req.get('/', 'HTTP_COOKIE' => new_cookie)

assert_equal '{"counter"=>3}', res3.body
assert_equal '{"counter"=>3}', res3.body.delete(' ')

# Old cookie was deleted
res4 = req.get('/', 'HTTP_COOKIE' => cookie)

assert_equal '{"counter"=>1}', res4.body
assert_equal '{"counter"=>1}', res4.body.delete(' ')
end

it 'omits cookie with :defer option but still updates the state' do
@@ -281,15 +281,15 @@
res0 = dreq.get('/')

assert_nil res0['Set-Cookie']
assert_equal '{"counter"=>1}', res0.body
assert_equal '{"counter"=>1}', res0.body.delete(' ')

res0 = creq.get('/')
res1 = dreq.get('/', 'HTTP_COOKIE' => res0['Set-Cookie'])

assert_equal '{"counter"=>2}', res1.body
assert_equal '{"counter"=>2}', res1.body.delete(' ')
res2 = dreq.get('/', 'HTTP_COOKIE' => res0['Set-Cookie'])

assert_equal '{"counter"=>3}', res2.body
assert_equal '{"counter"=>3}', res2.body.delete(' ')
end

it 'omits cookie and state update with :skip option' do
@@ -302,15 +302,15 @@
res0 = sreq.get('/')

assert_nil res0['Set-Cookie']
assert_equal '{"counter"=>1}', res0.body
assert_equal '{"counter"=>1}', res0.body.delete(' ')

res0 = creq.get('/')
res1 = sreq.get('/', 'HTTP_COOKIE' => res0['Set-Cookie'])

assert_equal '{"counter"=>2}', res1.body
assert_equal '{"counter"=>2}', res1.body.delete(' ')
res2 = sreq.get('/', 'HTTP_COOKIE' => res0['Set-Cookie'])

assert_equal '{"counter"=>2}', res2.body
assert_equal '{"counter"=>2}', res2.body.delete(' ')
end

it 'updates deep hashes correctly' do
@@ -332,13 +332,13 @@
ses0 = JSON.parse(res0.body)

refute_nil ses0
assert_equal '{"a"=>"b", "c"=>{"d"=>"e"}, "f"=>{"g"=>{"h"=>"i"}}, "test"=>true}', ses0.to_s
assert_equal '{"a"=>"b", "c"=>{"d"=>"e"}, "f"=>{"g"=>{"h"=>"i"}}, "test"=>true}'.delete(' '), ses0.to_s.delete(' ')

res1 = req.get('/', 'HTTP_COOKIE' => cookie)
ses1 = JSON.parse(res1.body)

refute_nil ses1
assert_equal '{"a"=>"b", "c"=>{"d"=>"e"}, "f"=>{"g"=>{"h"=>"j"}}, "test"=>true}', ses1.to_s
assert_equal '{"a"=>"b", "c"=>{"d"=>"e"}, "f"=>{"g"=>{"h"=>"j"}}, "test"=>true}'.delete(' '), ses1.to_s.delete(' ')

refute_equal ses0, ses1
end
32 changes: 10 additions & 22 deletions test/test_ring.rb
@@ -2,23 +2,11 @@

require_relative 'helper'

class TestServer
attr_reader :name

def initialize(attribs, _client_options = {})
@name = attribs
end

def weight
1
end
end

describe 'Ring' do
describe 'a ring of servers' do
it 'have the continuum sorted by value' do
servers = ['localhost:11211', 'localhost:9500']
ring = Dalli::Ring.new(servers, TestServer, {})
ring = Dalli::Ring.new(servers, {})
previous_value = 0
ring.continuum.each do |entry|
assert_operator entry.value, :>, previous_value
@@ -27,7 +15,7 @@ def weight
end

it 'raise when no servers are available/defined' do
ring = Dalli::Ring.new([], TestServer, {})
ring = Dalli::Ring.new([], {})
assert_raises Dalli::RingError, message: 'No server available' do
ring.server_for_key('test')
end
@@ -36,16 +24,16 @@ def weight
describe 'containing only a single server' do
it "raise correctly when it's not alive" do
servers = ['localhost:12345']
ring = Dalli::Ring.new(servers, Dalli::Protocol::Binary, {})
ring = Dalli::Ring.new(servers, {})
assert_raises Dalli::RingError, message: 'No server available' do
ring.server_for_key('test')
end
end

it "return the server when it's alive" do
servers = ['localhost:19191']
ring = Dalli::Ring.new(servers, Dalli::Protocol::Binary, {})
memcached(:binary, 19_191) do |mc|
ring = Dalli::Ring.new(servers, {})
memcached(19_191) do |mc|
ring = mc.send(:ring)

assert_equal ring.servers.first.port, ring.server_for_key('test').port
@@ -56,16 +44,16 @@ def weight
describe 'containing multiple servers' do
it 'raise correctly when no server is alive' do
servers = ['localhost:12345', 'localhost:12346']
ring = Dalli::Ring.new(servers, Dalli::Protocol::Binary, {})
ring = Dalli::Ring.new(servers, {})
assert_raises Dalli::RingError, message: 'No server available' do
ring.server_for_key('test')
end
end

it 'return an alive server when at least one is alive' do
servers = ['localhost:12346', 'localhost:19191']
ring = Dalli::Ring.new(servers, Dalli::Protocol::Binary, {})
memcached(:binary, 19_191) do |mc|
ring = Dalli::Ring.new(servers, {})
memcached(19_191) do |mc|
ring = mc.send(:ring)

assert_equal ring.servers.first.port, ring.server_for_key('test').port
@@ -74,13 +62,13 @@ def weight
end

it 'detect when a dead server is up again' do
memcached(:binary, 19_997) do
memcached(19_997) do
down_retry_delay = 0.5
dc = Dalli::Client.new(['localhost:19997', 'localhost:19998'], down_retry_delay: down_retry_delay)

assert_equal 1, dc.stats.values.compact.count

memcached(:binary, 19_998) do
memcached(19_998) do
assert_equal 2, dc.stats.values.compact.count
end
end
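The first assertion in this file (continuum entries sorted by value) reflects standard consistent hashing: each server is hashed to many points on a ring, and a key maps to the first point at or after its own hash, wrapping around. A minimal sketch of that idea (illustrative only, not `Dalli::Ring`'s implementation):

```ruby
# Sketch of a sorted hash continuum for consistent hashing. Entry values
# come from hashing "server:index" pairs; lookup walks to the first entry
# at or past the key's hash, wrapping to the start of the ring.
require 'digest'

ContinuumEntry = Struct.new(:value, :server)

def build_continuum(servers, points_per_server: 160)
  servers.flat_map do |server|
    Array.new(points_per_server) do |i|
      ContinuumEntry.new(Digest::SHA1.hexdigest("#{server}:#{i}")[0, 8].to_i(16), server)
    end
  end.sort_by(&:value)
end

def server_for_key(continuum, key)
  h = Digest::SHA1.hexdigest(key)[0, 8].to_i(16)
  (continuum.find { |e| e.value >= h } || continuum.first).server
end
```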
27 changes: 13 additions & 14 deletions test/utils/memcached_manager.rb
@@ -1,9 +1,11 @@
# frozen_string_literal: true

require 'tempfile'
##
# Utility module for spinning up memcached instances locally, and generating a corresponding
# Dalli::Client to access the local instance. Supports access via TCP and UNIX domain socket.
##
# rubocop:disable Metrics/ModuleLength
module MemcachedManager
# TODO: This is all UNIX specific. To support
# running CI on Windows we'll need to conditionally
@@ -16,10 +18,14 @@ module MemcachedManager
].freeze

MEMCACHED_CMD = 'memcached'
MEMCACHED_VERSION_CMD = "#{MEMCACHED_CMD} -h | head -1"
MEMCACHED_VERSION_REGEXP = /^memcached (\d\.\d\.\d+)/.freeze
MEMCACHED_VERSION_CMD = "#{MEMCACHED_CMD} -h | head -1".freeze
MEMCACHED_VERSION_REGEXP = /^memcached (\d\.\d\.\d+)/
MEMCACHED_MIN_MAJOR_VERSION = ::Dalli::MIN_SUPPORTED_MEMCACHED_VERSION

TOXIPROXY_MEMCACHED_PORT = 21_347
TOXIPROXY_UPSTREAM_PORT = 21_348
UNIX_SOCKET_PATH = (f = Tempfile.new('dalli_test')
f.close
f.path)
@running_pids = {}

def self.start_and_flush_with_retry(port_or_socket, args = '', client_options = {})
@@ -47,8 +53,9 @@ def self.client_for_port_or_socket(port_or_socket, client_options)
end

def self.start(port_or_socket, args)
cmd_with_args, key = cmd_with_args(port_or_socket, args)
raise 'Do not re-use toxiproxy port for memcached' if port_or_socket == TOXIPROXY_MEMCACHED_PORT

cmd_with_args, key = cmd_with_args(port_or_socket, args)
@running_pids[key] ||= begin
pid = IO.popen(cmd_with_args).pid
at_exit do
@@ -105,24 +112,15 @@ def self.version
end

MIN_META_VERSION = '1.6'
def self.supported_protocols
return [] unless version

version > MIN_META_VERSION ? %i[binary meta] : %i[binary]
end

META_DELETE_CAS_FIX_PATCH_VERSION = '13'
def self.supports_delete_cas?(protocol)
return true unless protocol == :meta

def self.supports_delete_cas?
return false unless version > MIN_META_VERSION

minor_patch_delimiter = version.index('.', 2)
minor_version = version[0...minor_patch_delimiter]
return true if minor_version > MIN_META_VERSION

patch_version = version[minor_patch_delimiter + 1..]

patch_version >= META_DELETE_CAS_FIX_PATCH_VERSION
end
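The minor/patch string slicing above is easy to get wrong, since string comparison of patch numbers is lexicographic. An alternative sketch using `Gem::Version`, which compares version segments numerically (illustrative only, not the repository's code):

```ruby
# Sketch: the meta delete-with-CAS fix check expressed with Gem::Version.
# '1.6.13' is taken from the META_DELETE_CAS_FIX_PATCH_VERSION above.
MIN_DELETE_CAS_VERSION = Gem::Version.new('1.6.13')

def supports_delete_cas?(version_string)
  Gem::Version.new(version_string) >= MIN_DELETE_CAS_VERSION
end
```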

@@ -147,3 +145,4 @@ def self.determine_cmd
raise Errno::ENOENT, "Unable to find memcached #{MEMCACHED_MIN_MAJOR_VERSION}+ locally"
end
end
# rubocop:enable Metrics/ModuleLength
37 changes: 0 additions & 37 deletions test/utils/memcached_mock.rb

This file was deleted.