
Add opaque data for KEY_TOKEN token #951

Open · wants to merge 1 commit into base: next

Conversation

ton31337 commented Nov 7, 2022

No description provided.

dormando (Member) commented Nov 8, 2022

This is a pretty clever idea. Not sure I can take the patch but I definitely give it points for creativity. I'll think it over when I get some time.

Thanks!

ton31337 (Author) commented Nov 8, 2022

@dormando thanks for the quick response. We considered using the LiteSpeed Memcached implementation, filtering with eBPF (regex), or writing our own memcached implementation (i.e. a proxy), but decided it's much better to stick with vanilla memcached, with its performance and all the excellent work already in it, and make a patch that other people could use as well.

dormando (Member) commented Nov 8, 2022

The proxy mode I've been working on might be a better place for this sort of thing, though it will still be a while before I have it wired properly. That would add a small abstraction layer you can use to do these sorts of things without having to hook it so deeply into the main code.

The missing piece of the proxy right now is local query resolution (this works via a hack but it's not as fast/native), as it can only proxy requests to others over the network. I would also need to think about where to expose the ipv6 client token at the API level, which would be pretty cool to support.

ton31337 (Author)

@dormando a quick update for you: we tested this in a shared hosting environment with lots of WordPress installs, and so far the tests are promising (as expected).

This can be used to implement pseudo-namespacing, using the last 8 bytes
(64 bits) of the client's IPv6 address as the client id (opaque data).

For example:

```
$ ip6tables -t nat -A POSTROUTING -p tcp --dport 11211 -j SNAT --to-source 2a02:4780:1000::ffff:ffff
```

When a request comes from this source IP with key `key1`, the key will be
rewritten to `4294967295_key1`, where 4294967295 is the decimal value of
ffff:ffff from the IPv6 address.

This is mostly useful in shared-hosting environments where you need memcached,
but namespace isolation is also a MUST.

Each website has its own dedicated IPv6 address, so it is easy to distinguish
the client and rewrite the key on each request.
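
To make the mapping concrete, here is a minimal standalone C sketch of how the low 64 bits of a client's IPv6 address translate into the decimal prefix shown above. This is only an illustration of the arithmetic, not the actual patch; the key name and buffer size are arbitrary:

```
/* Sketch only: the real patch performs this mapping inside memcached,
 * per connection, based on the client's source address. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    struct in6_addr addr;
    if (inet_pton(AF_INET6, "2a02:4780:1000::ffff:ffff", &addr) != 1)
        return 1;

    /* Take the low 8 bytes (64 bits) of the address, in network byte
     * order, as the per-client opaque id. */
    uint64_t id = 0;
    for (int i = 8; i < 16; i++)
        id = (id << 8) | addr.s6_addr[i];

    /* Prefix the requested key with the decimal id, as in the debug
     * output below: "key1" -> "4294967295_key1". */
    char namespaced[256];
    snprintf(namespaced, sizeof(namespaced), "%llu_%s",
             (unsigned long long)id, "key1");
    printf("%s\n", namespaced);
    return 0;
}
```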

Run:

```
./memcached -vvv -o opaque_ipv6_ns
```

Debug:

first source IPv6:
```
<24 new auto-negotiating client connection
24: going from conn_new_cmd to conn_waiting
24: going from conn_waiting to conn_read
24: going from conn_read to conn_parse_cmd
24: Client using the ascii protocol
<24 set key1 0 0 10 noreply
24: going from conn_parse_cmd to conn_nread
> NOT FOUND 4294967295_key1
>24 NOREPLY STORED
24: going from conn_nread to conn_new_cmd
24: going from conn_new_cmd to conn_parse_cmd
<24 get key1
> FOUND KEY 4294967295_key1
>24 sending key 4294967295_key1
>24 END
```

second source IPv6:
```
<24 new auto-negotiating client connection
24: going from conn_new_cmd to conn_waiting
24: going from conn_waiting to conn_read
24: going from conn_read to conn_parse_cmd
24: Client using the ascii protocol
<24 set key1 0 0 10 noreply
24: going from conn_parse_cmd to conn_nread
> NOT FOUND 4043374591_key1
>24 NOREPLY STORED
24: going from conn_nread to conn_new_cmd
24: going from conn_new_cmd to conn_parse_cmd
<24 get key1
> FOUND KEY 4043374591_key1
>24 sending key 4043374591_key1
>24 END
```

Implemented only for ASCII mode.

Signed-off-by: Donatas Abraitis <donatas@hostinger.com>

dormando (Member) commented Apr 3, 2023

Just tossing an update in here so you don't think I forgot; proxy has come a long way in the last few months. I'm working on a major API revision at the moment, so I'm hoping within a month or two we'll see a good way to hook this in. Doesn't take long to write the code but validating/baking takes a few weeks.

It will definitely be a little slower but probably barely measurable unless your request rate is millions/sec per machine.

ton31337 (Author) commented Apr 3, 2023

> Just tossing an update in here so you don't think I forgot; proxy has come a long way in the last few months. I'm working on a major API revision at the moment, so I'm hoping within a month or two we'll see a good way to hook this in. Doesn't take long to write the code but validating/baking takes a few weeks.
>
> It will definitely be a little slower but probably barely measurable unless your request rate is millions/sec per machine.

Cool, thanks.
