
Swaps out the redis implementation for a different shard #48

Closed
wants to merge 11 commits

Conversation

jwoertink
Collaborator

@jwoertink jwoertink commented Jul 17, 2022

Fixes #46

This PR swaps out the redis implementation for a new shard. With the new shard, the monkey patch is no longer required, and it brings several stability improvements: better connection pooling and support for things like Redis Cluster. It also adds a new RedisPinger to ensure the Redis connection stays alive.

The main breaking change is that some of the settings have been removed. These settings will need to be removed from your config after upgrading:

setting pool_redis_publish : Bool = false
setting redis_pool_size : Int32 = 5
setting redis_pool_timeout : Float64 = 5.0
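
For reference, here's a minimal sketch of what a config could look like after upgrading. The route and token settings are shown only as typical Cable settings for context (they aren't part of this change); the point is simply that the three pool settings above go away:

# A sketch only; adjust names and values to your app.
Cable.configure do |settings|
  settings.route = "/cable"
  settings.token = "token"
  settings.url = ENV.fetch("REDIS_URL", "redis://localhost:6379")

  # Removed by this PR; delete these if you had them:
  # settings.pool_redis_publish = false
  # settings.redis_pool_size = 5
  # settings.redis_pool_timeout = 5.0
end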

src/cable/server.cr: review thread (outdated, resolved)
@jwoertink
Collaborator Author

The CI failures are because Ameba doesn't support Crystal < 1.4.0, and Ameba isn't necessary here. I can either take it back out, or we can just lock Cable to Crystal 1.4.0+ too. (This is what we did with Lucky.)

@jwoertink
Collaborator Author

@jgaskins if you're poking around this, you'll see that even though the specs pass, they throw all of those IO::Error exceptions https://github.com/cable-cr/cable/runs/7380811910?check_suite_focus=true#step:6:7 if that helps any.

@jwoertink jwoertink marked this pull request as ready for review July 18, 2022 02:02
@jwoertink jwoertink added the BREAKING CHANGE This includes a change that breaks backwards compatibility label Jul 18, 2022
@jwoertink jwoertink mentioned this pull request Oct 5, 2022
@@ -24,6 +24,9 @@ module Cable
# Use a single connection
getter redis_subscribe = Redis::Connection.new(URI.parse(Cable.settings.url))
getter fiber_channel = ::Channel({String, String}).new
getter pinger : Cable::RedisPinger do
Cable::RedisPinger.new(self)
end
Contributor

💯
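
For anyone skimming this hunk: the pinger is essentially a background fiber that periodically issues a PING so the subscribe connection isn't silently dropped. A rough sketch of that shape, assuming the redis_subscribe connection from the diff above (the class, method, interval, and the `run` call are illustrative assumptions, not the shipped Cable::RedisPinger):

# Hypothetical keep-alive pinger, for illustration only.
class KeepAlivePinger
  def initialize(@server : Cable::Server, @interval : Time::Span = 15.seconds)
  end

  def start
    spawn(name: "redis-pinger") do
      loop do
        sleep @interval
        ping
      end
    end
  end

  private def ping
    # Whether the connection exposes a ping-style command (and what it's
    # called) depends on the Redis shard's API; `run` is assumed here.
    @server.redis_subscribe.run({"ping"})
  rescue e : IO::Error
    Cable::Logger.error { "KeepAlivePinger: #{e.class.name} #{e.message}" }
  end
end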

@jwoertink
Collaborator Author

jwoertink commented Oct 6, 2022

Good news: the memory leak is fixed. Bad news: there's still some issue where, when either the Docker container swaps or Redis disconnects, it requires a full reboot to fix. It didn't seem to happen before 6b11799; it felt like autoreconnect was being handled just fine, but less than 24 hours after deploying that change I had to reboot the whole machine.

Just to add to this, it looks like sending messages is fine, but it's on the broadcast back where they fail. It's definitely something in cable, since I was getting the same issue on the master branch. It's something that happens over time, so it's not immediately reproducible. Could it be related to how the Cluster is handled?

@jgaskins
Contributor

jgaskins commented Oct 6, 2022

The folks at EatApp had a Redis disconnect issue with pub/sub too, related to message size. Does any of the discussion in jgaskins/redis#7 help?

@mjeffrey18
Contributor

Hey @jwoertink @jgaskins, I'm going to create another PR with just the error-handling stuff on its own, since we added some error-rate handling logic that has been super useful.

Although I love the idea of putting @jgaskins' Redis shard in here, as we use it also, shall we consider an adapter pattern long term? There are some differences in the way each Redis shard handles types.
For instance, for hgetall I had to write a converter:

  # Converts the flat [field, value, field, value, ...] array returned for
  # HGETALL into a Hash of field name => integer value.
  def fetch_statuses
    return unless (vals = redis.hgetall(cache_key))
    vals = vals.as(Array(Redis::Value))

    if vals.empty?
      nil
    else
      data = {} of String => Int64 | Int32
      vals.each_with_index do |val, idx|
        # Even indexes are field names; the following odd index is the value.
        if idx.even?
          data[val.to_s] = vals[idx + 1].to_s.to_i
        end
      end
      data
    end
  end

It was mildly painful but worth it, since we have our own version of the cable shard. But for others, upgrading could result in substantial breaking changes, which maybe just need to happen anyway to make this more stable.

I was also toying with the idea of a Postgres pub/sub backend a while back, when Redis was giving me a massive headache. Rails supports a few other backends, so we could have one adapter for each of the Redis shards as a v1.
Another thing I was experimenting with was Redis Streams, so we don't miss pub/sub messages.
Could be an interesting approach to allow a buffet of options; there's a rough sketch of the adapter idea just below.
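
To make the adapter idea concrete, here's roughly the kind of interface I mean. All of these names are made up for illustration; nothing like this exists in Cable today:

# Illustrative only, not Cable's current API.
module Cable
  abstract class BackendAdapter
    # Publish a message to a channel on the backing store.
    abstract def publish(channel : String, message : String)

    # Subscribe and yield each incoming message for the channel.
    abstract def subscribe(channel : String, &block : String ->)

    # Health check, e.g. for the pinger.
    abstract def ping : Bool
  end
end

# Then one small adapter shard per implementation, e.g. one for
# jgaskins/redis, one for stefanwille/crystal-redis, one for Postgres,
# each subclassing Cable::BackendAdapter.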

@jwoertink
Collaborator Author

I definitely want to add a postgres backend #49, but doing a redis backend gets a little tricky. We ran into this issue with Mosquito: we can't include both redis shards in the same project at the same time. So in order for Cable to have a generic pluggable redis interface, we'd need some sort of mock-redis object as a "default", and then we'd have to maintain 2 additional shards (i.e. cable-cr/jgaskins-redis-adapter, cable-cr/stefanwille-redis-adapter). Doable for sure, but also a bit of a pain to manage (for me).

I agree about the type handling. There's this issue jgaskins/redis#32 which I think will make that a lot nicer.

I appreciate the info @mjeffrey18! It sounds like you're running into a lot of the same stuff we are, so that's a huge help 🙏

rescue e : IO::Error
Cable::Logger.error { "#{e.class.name} Exception: #{e.message} -> #{self.class.name}#call { socket.on_message(message) }" }
# Redis may have some error, restart Cable
socket.close(HTTP::WebSocket::CloseCode::NormalClosure, "Farewell")
Collaborator Author

In #54 @mjeffrey18 added the socket.close, but this rescue branch doesn't exist on master... Should we be closing the socket here too? If so, which CloseCode should be used?
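
If we do keep the close here, one option for discussion (a sketch only; InternalServerError is one of the standard codes in Crystal's HTTP::WebSocket::CloseCode enum, but whether it's the right signal is exactly the open question):

rescue e : IO::Error
  Cable::Logger.error { "#{e.class.name} Exception: #{e.message} -> #{self.class.name}#call { socket.on_message(message) }" }
  # Signal an abnormal shutdown so clients know to reconnect,
  # rather than treating this as a clean "Farewell".
  socket.close(HTTP::WebSocket::CloseCode::InternalServerError, "Redis connection lost")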

@jwoertink
Collaborator Author

This PR is no longer necessary. The current implementation allows for choosing which Redis shard you want to use.

@jwoertink jwoertink closed this Nov 29, 2022
Labels
BREAKING CHANGE This includes a change that breaks backwards compatibility
Projects
None yet
Development

Successfully merging this pull request may close these issues.

Look at using alternate Redis shard
3 participants