
Multiple pushes per client for subscriptions that have a context_id #1064

Closed
mupkoo opened this issue Apr 22, 2021 · 26 comments

@mupkoo

mupkoo commented Apr 22, 2021

Environment

  • Elixir version (elixir -v): 1.11.3 (compiled with Erlang/OTP 23)
  • Absinthe version (mix deps | grep absinthe): 1.6.4

Expected behavior

Publishing to a subscription should result in just a single push per client.

Actual behavior

Using a subscription with a context_id results in the same number of pushes as the number of connected clients. For example, if 5 clients are connected and you publish to a given subscription, each of the 5 clients receives the data 5 times.

This is not true when there is no context_id.

Relevant Schema/Middleware Code

I have created a repository to showcase the problem.
https://github.com/mupkoo/absinthe-duplicate-subscriptions

Here is a quick view of schema.ex

defmodule ShowcaseWeb.Schema do
  use Absinthe.Schema

  object :item do
    field :title, :string
  end

  query do
  end

  subscription do
    field :change, :item do
      arg :id, non_null(:id)
      config fn %{id: id}, _ -> {:ok, topic: id, context_id: "global"} end
    end

    field :change_private, :item do
      arg :id, non_null(:id)
      config fn %{id: id}, _ -> {:ok, topic: id} end
    end

    field :change_no_args, :item do
      config fn _, _ -> {:ok, topic: "*", context_id: "global"} end
    end
  end
end

This is the test case that showcases the behaviour.
https://github.com/mupkoo/absinthe-duplicate-subscriptions/blob/master/test/showcase_web/schema_test.exs#L5-L38
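For reference, the duplicated pushes can be triggered with a plain Absinthe.Subscription.publish/3 call once several clients are subscribed to the same topic. This is a minimal sketch against the showcase schema above; the "item-1" topic and the payload map are made-up values:

# With e.g. 5 clients subscribed to `change(id: "item-1")`, each of them
# receives this result 5 times instead of once, because the field uses
# context_id: "global".
Absinthe.Subscription.publish(
  ShowcaseWeb.Endpoint,
  %{title: "Updated title"},
  change: "item-1"
)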


I have mentioned this problem in the absinthe_phoenix repository as well:
absinthe-graphql/absinthe_phoenix#83 (comment)

@benwilson512
Contributor

Thank you so much for creating an example! I will look into this today.

@youroff

youroff commented Jun 17, 2021

We observe the same behavior, and even worse: setting context_id: "global" made messages from adjacent topics leak to users who weren't subscribed to them.

@benwilson512 benwilson512 self-assigned this Jun 21, 2021
@Nitrodist

We are severely affected by this, and it prevents us from upgrading from 1.5 to 1.6 👀

@v-anyukov

In addition, it looks like deduplication itself no longer works. Resolution happens for every subscribed client, which leads to performance issues.

@mupkoo
Author

mupkoo commented Nov 19, 2021

It seems like this PR (#775) disabled the test that makes sure de-duplication works as expected:

https://github.com/absinthe-graphql/absinthe/pull/775/files#diff-2d1ba39315325b35cec6a60875bc11082cfbaf47c74482abbc9907735b79c90aR396

@adampash

adampash commented Jan 8, 2022

I'm using Absinthe in production and have started running into this problem as well. Has anyone had any luck with workarounds, or is using 1.5 the best option for now?

@dorian-marchal
Contributor

In addition, it looks like deduplication itself no longer works. Resolution happens for every subscribed client, which leads to performance issues.

Same issue here on 1.6.6: resolution happens for every subscribed client, and all clients receive multiple updates.

@bismark

bismark commented Mar 9, 2022

[Updated example below]

This seems like a pretty significant security flaw. It appears to effectively cause the topic to be ignored, i.e. if I have the following subscription (modified from the OP's showcase):

subscription do
  field :change, :item do
    config fn _, %{context: %{user_id: user_id}} ->
      {:ok, topic: user_id, context_id: "global"}
    end

    resolve fn _, %{context: %{user_id: user_id}} ->
      {:ok, %{user_id: user_id}}
    end
  end
end

All subscribers with the same document, regardless of their user_id, will receive all pushes to :change.

Example:

Start two websockets and subscribe on both:

$ wscat -c "localhost:4000/socket/websocket?user_id=1&vsn=2.0.0"
connected (press CTRL+C to quit)
> ["9","9","__absinthe__:control","phx_join",{}]
< ["9","9","__absinthe__:control","phx_reply",{"response":{},"status":"ok"}]
> ["9","12","__absinthe__:control","doc",{"query":"subscription change {\n change { user_id }\n }\n","variables":{}}]
< ["9","12","__absinthe__:control","phx_reply",{"response":{"subscriptionId":"__absinthe__:doc:global:57C9CEDCFE29923D7B2F2639A344D94CE6F4EBA88AD97484D3A83B9F64F52032"},"status":"ok"}]
$ wscat -c "localhost:4000/socket/websocket?user_id=2&vsn=2.0.0"
connected (press CTRL+C to quit)
> ["9","9","__absinthe__:control","phx_join",{}]
< ["9","9","__absinthe__:control","phx_reply",{"response":{},"status":"ok"}]
> ["9","12","__absinthe__:control","doc",{"query":"subscription change {\n change { user_id }\n }\n","variables":{}}]
< ["9","12","__absinthe__:control","phx_reply",{"response":{"subscriptionId":"__absinthe__:doc:global:57C9CEDCFE29923D7B2F2639A344D94CE6F4EBA88AD97484D3A83B9F64F52032"},"status":"ok"}]

First is subscribed on topic: "1", second on topic: "2".

I then run

Absinthe.Subscription.publish(ShowcaseWeb.Endpoint, nil, change: "1")

and receive on both sockets:

< [null,null,"__absinthe__:doc:global:57C9CEDCFE29923D7B2F2639A344D94CE6F4EBA88AD97484D3A83B9F64F52032","subscription:data",{"result":{"data":{"change":{"user_id":"1"}}},"subscriptionId":"__absinthe__:doc:global:57C9CEDCFE29923D7B2F2639A344D94CE6F4EBA88AD97484D3A83B9F64F52032"}]

If you are relying on topic as a way of controlling access to sensitive information, context_id will bypass that.

@benwilson512
Contributor

Ah, this is by far the most succinct description of the issue; thanks for doing this. Let me look into it this weekend.

@girishramnani

Any update on this ticket?

@akeemboatswain

Also curious whether you've made any progress on this, @benwilson512.

@girishramnani

girishramnani commented Jun 27, 2022

I have been trying to debug this and have mostly found a fix for it. I have tested my fix on the example provided by @mupkoo (thank you!!). But it's a hacky solution in my opinion, so I thought I should explain the problem here so someone can find a better solution. (I will still open a PR for the fix so people can get going.)

Background

  • There are two pub/sub systems at play here: Phoenix, which handles pub/sub using topics, and Absinthe, which uses field keys, e.g. {:change_no_args, "*"} for

field :change_no_args, :item do
  config fn _, _ -> {:ok, topic: "*", context_id: "global"} end
end

Flow

  • When context_id: "global" and topic: "*" are used (basically, when everyone gets the same data), the Phoenix pub/sub topic and the Absinthe document topic stay the same for all subscribers,
    e.g. __absinthe__:doc:global:84250C7CA81A8CA68FA652D8661949F1025FD6D92D8EEB7220BC014009B20539

So if there are 5 subscribers on the Phoenix side, they are all connected to the same topic.

This wasn't a problem when no context_id was provided, because in that case Absinthe defaults to :erlang.unique_integer, so on the Phoenix pub/sub side all 5 topics are unique and each has only 1 subscriber.
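To make the difference concrete, here is a simplified sketch of how the Phoenix-side document topic ends up being derived. This is illustrative only, not Absinthe's actual source; the module and function names are made up:

defmodule Sketch.DocTopic do
  # With a context_id, every client running the same document hashes to
  # the same topic, so one publish fans out once per registered copy.
  def derive(context_id, document_text) when is_binary(context_id) do
    hash = Base.encode16(:crypto.hash(:sha256, document_text))
    "__absinthe__:doc:#{context_id}:#{hash}"
  end

  # Without a context_id, each subscriber gets a unique topic.
  def derive(nil, _document_text) do
    "__absinthe__:doc:#{:erlang.unique_integer([:positive])}"
  end
end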

PR - #1175

Open to any advice on how to fix this properly.

  • I thought of making the Absinthe.Pubsub.Registry not allow duplicate keys (see the sketch below).
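For context on that idea, Elixir's built-in Registry already distinguishes :unique from :duplicate keys. A minimal sketch of the difference (the registry names are made up, and this is not Absinthe's registry code):

# A :duplicate registry accepts several registrations under one key;
# a :unique registry rejects the second one.
{:ok, _} = Registry.start_link(keys: :duplicate, name: Sketch.DupRegistry)
{:ok, _} = Registry.register(Sketch.DupRegistry, "topic", :client_a)
{:ok, _} = Registry.register(Sketch.DupRegistry, "topic", :client_b)

{:ok, _} = Registry.start_link(keys: :unique, name: Sketch.UniqRegistry)
{:ok, _} = Registry.register(Sketch.UniqRegistry, "topic", :client_a)
{:error, {:already_registered, _pid}} =
  Registry.register(Sketch.UniqRegistry, "topic", :client_b)

# Either way, entries are removed automatically when the owning process exits.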

@cospin

cospin commented Sep 13, 2022

Any progress on this? @benwilson512 @maartenvanvliet

@girishramnani

girishramnani commented Sep 13, 2022

@cospin you can use my PR - #1175; it's already being used in production at my company.

@v-anyukov

@benwilson512 @maartenvanvliet could we merge the PR from the previous comment?

@benwilson512
Contributor

The PR has no tests, and it also says that it isn't the best solution.

@girishramnani

I can add tests, but yeah, this is not the best solution. It does work, though.

@benwilson512
Contributor

Tests would be very helpful.

@girishramnani

Ok, will add tests (will spend time on this over the weekend).

@girishramnani

girishramnani commented Dec 4, 2022

I added a test similar to the one provided here: https://github.com/mupkoo/absinthe-duplicate-subscriptions/blob/master/test/showcase_web/schema_test.exs#L5-L38

@v-anyukov

Call me a necroposter, but... it would be nice to merge this if it hasn't been solved somewhere else.

@girishramnani

It has been nearly a year now, and it's being used at scale at my company with no errors.

@beligante
Contributor

@girishramnani I think what's missing in your tests is the message pattern.

Instead of:

refute_receive(:broadcast)

you should use:

refute_received({:broadcast, _})

With that change, your tests should fail when run without your fix applied -- as Ben previously requested. I'm happy to make that adjustment -- but I don't want to steal your thunder here.
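To illustrate why the message shape matters, here is a self-contained sketch. It assumes the test pubsub forwards broadcasts to the test process as {:broadcast, payload} tuples, as implied above; the payload value is made up:

defmodule BroadcastShapeTest do
  use ExUnit.Case, async: true

  test "refuting the bare atom misses tuple-shaped broadcasts" do
    send(self(), {:broadcast, %{event: "subscription:data"}})

    # Passes even though a broadcast is sitting in the mailbox, because
    # the message is a tuple, not the bare atom :broadcast.
    refute_received :broadcast

    # This is the assertion that actually catches the stray broadcast;
    # uncommenting it makes the test fail, which is exactly what the
    # test should do while the duplicate-push bug is unfixed.
    # refute_received {:broadcast, _}
  end
end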


Also, a side note: you mentioned before that your solution could be better. Your goal is to avoid duplicates in the registry. The thing is, by design, the registry is cleaned up when the connection dies anyway, right? So we already get that out of the box even with duplicate entries. That said, in an ideal world this could be fixed by redesigning the whole registry process and adding supervisors to deregister things when connection processes die. However, your solution is still valid, IMO. But I'll leave the sign-off to @benwilson512.

@warmwaffles

Has #1249 solved this?

@beligante
Contributor

Has #1249 solved this?

Yessir. We've been running with the latest build in production for ~2 months with no issues.

@seivan
Contributor

seivan commented Oct 6, 2023

@benwilson512 You might want to close this ticket, given the PR was merged 👍🏼
