create function is not synchronized #31
@moxgeek , Thank you for opening this issue. I'm not sure I understand where this happens. Are these many different clients, or one WebSocket client making many requests? Do you have example code I can see? Thanks.
@boazsegev thank you again ... so my code is like this:
every client sends different data to the server, but at the same time.
i believe that iodine should handle this itself, rather than putting it in the controller.
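To illustrate the class of problem being described, here is a hypothetical sketch (not the actual controller code; the shared Hash merely stands in for external storage such as Redis): when several threads perform an unsynchronized read-modify-write at the same time, writes can clobber each other and data appears to be "lost".

```ruby
# Hypothetical sketch: concurrent, unsynchronized read-modify-write.
# The Hash stands in for shared storage (e.g. a Redis key); each thread
# plays the role of one client hitting `create` at the same time.
store = { "items" => [] }

threads = 10.times.map do |i|
  Thread.new do
    current = store["items"]        # read
    sleep 0.001                     # widen the race window
    store["items"] = current + [i]  # write back (may clobber other writes)
  end
end
threads.each(&:join)

# Typically far fewer than 10 items survive: concurrent writers each
# read the same old value, so later writes overwrite earlier ones.
puts store["items"].length
```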
Hi @moxgeek , I think I understand a little better, thank you. You have many clients creating data at the same time. If I understand correctly, this might be an issue related to database connection corruption or database access. Are you using ActiveRecord or Sequel? Are you using something different for database access? The iodine gem tries to support ActiveRecord and Sequel out of the box, because it's important to close and reconnect the connection pool every time a worker is spawned. As long as you are using a connection pool properly, this shouldn't happen. Which version of iodine are you using? Which version of Plezi? Thanks.
Plezi version: 0.16.3
I can't recreate the issue. Do you have an example I can run? A controller? Also, are the client connections WebSocket or HTTP? I tried recreating with both, but couldn't reproduce a storage error (no corrupt or missing data).
Which Redis connector are you using? Are you sure it isn't related to the Redis connection? Consider that concurrency is important for web servers and applications. All connections should be able to run in parallel; otherwise the application might be slow whenever it needs database access (exactly like running a single thread). This means that database connections should protect against data-corruption issues that relate to concurrency. If this is iodine's Redis connection, maybe the issue is in that connection's management? If it's another gem, maybe there's an issue with that gem instead.
it is HTTP, not websocket ( i use websocket to send data )
my old controller ( here the problem can be seen ) :
for the redis version , i use only
@moxgeek , After reviewing the issue, I think I have two observations you might want to consider.
As for the first concern: I think it will need a change of approach, such as pushing new data into a Redis list (using `LPUSH`). I can't help with this error because it is very application specific. It should be considered a bug and resolved on your end. Using a Mutex solved it in a very local way; it doesn't help with scaling, since the Mutex is process/machine bound. As for the second concern: there are two possible solutions.
For a connection pool, you might consider something similar to this (untested) code:

```ruby
require 'thread'
require 'redis'

# Use a Ruby Queue primitive for a thread-safe pool
REDIS_POOL = Queue.new

# Initialize the pool when iodine starts
Iodine.on_state(:on_start) do
  # 16 connections per process
  16.times { REDIS_POOL << Redis.new }
end

# Close pooled connections when we're done
Iodine.on_state(:on_finish) do
  REDIS_POOL.pop.quit until REDIS_POOL.empty?
end

# Replace this sample with real code.
class ExampleCtrl
  def create
    # collect a connection from the pool
    redis = REDIS_POOL.pop
    # ... do stuff, then return the connection to the pool
    REDIS_POOL.push redis
  end
end
```

Using iodine's bundled Redis connection might look something like this:

```ruby
REDIS_CONNECTION = if Iodine::PubSub.default.is_a? Iodine::PubSub::Redis
                     # redis was initialized from the command-line
                     Iodine::PubSub.default
                   else
                     # create a new redis connection instance
                     Iodine::PubSub::Redis.new "redis://localhost:6379/"
                   end

# Replace this sample with real code.
class ExampleCtrl
  def create
    begin
      data = JSON.parse(request.body.read)
    rescue
      # replace this line with: render(:error) or whatever you use
      return [400, {}, ["Bad request!"]]
    end
    content = data['content'].to_json
    # replace redis.get & redis.set with a list command, `LPUSH`
    REDIS_CONNECTION.cmd("LPUSH", "User:#{x}", content) do
      # publish data only after it was submitted to the Redis database
      publish CHANNEL + x.to_s, content
    end
    "Scheduled!"
  end
end
```
hi , sorry i was away from my computer. i tried your proposition , and it's working well ...
everything works great if i submit post data one by one , however if i send data with threads ( 10 for my example ) it shows a "bizarre" exception. the full stack: https://www.pastiebin.com/5c544b9b3fc7e . i tried to solve it by updating the ruby version to 2.6.0 and 2.6.1 and nothing changed
Hi @moxgeek , Thank you for exposing this issue 🙏! I found the issue and I'm working on a fix. You'll have it later today, I hope. Bo.
thanks ! it works ( the client receives a well-formatted json response ) , but if we send this :
the client receives a "bizarre" response !
Hi @moxgeek , Thank you for your patience and for exposing the Redis issue. I released a patch for the Redis engine (v. 0.7.21). This should fix the issue where Ruby crashes due to a Global Lock violation. As for the JSON issue: I can't replicate it. Maybe this is related to the content of XXXX, or maybe I'm not replicating the issue properly. Please let me know if the patch fixes your issue, and if you can show me a way to replicate the JSON error. Thanks!
hello @boazsegev , thank you , i will pull iodine and let you know . about the json , the XXXX is not confidential ; the json example is what i really send to plezi .
when i print the result on the terminal after getting the body from the request, it shows the same value ( working )
the result is :
if i replace
i just tried it with a simpler json
and it works with
maybe it is linked to the length of the json .
i fixed the problem by using redis without iodine :
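A workaround along those lines might look like the following. This is an untested, hypothetical sketch: the controller name, the `id` field and the injected connection are illustrative, while `lpush` is the redis gem's list-push command (the atomic list operation suggested earlier in the thread).

```ruby
require 'json'

# Hypothetical sketch of the workaround: use the `redis` gem for storage
# commands instead of iodine's bundled Redis engine. The connection is
# injected so the class can also be exercised with a stand-in object.
class CreateCtrl
  def initialize(redis)
    @redis = redis  # e.g. Redis.new(url: "redis://localhost:6379/")
  end

  def create(body)
    data = JSON.parse(body)
    content = data['content'].to_json
    # LPUSH appends atomically on the Redis side, so concurrent calls
    # cannot overwrite each other's data
    @redis.lpush("User:#{data['id']}", content)
    "Scheduled!"
  end
end
```

With a real connection this would be `CreateCtrl.new(Redis.new)`; any object that responds to `lpush` works, which keeps the class testable.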
Hi @moxgeek , Thank you for keeping me posted. I understand that you upgraded to the latest release. Please note that:
As for the actual issue: I'm currently traveling in Costa Rica, and my computer time and internet access are both limited. I will keep looking into the issue. I suspect it might be related to memory reference counting or marking, either in relation to the Ruby GC or in relation to facil.io's memory allocations... which means that engaging the memory allocator might be required for reproducing the issue. I am still trying to reproduce it. Kindly,
hello again @boazsegev . to resolve the encoding problem , i used the redis gem instead of iodine's Redis .
Hi @moxgeek , Thank you very much for all the information, it was very helpful in exposing the issue. I managed to re-create the issue and I think I fixed it in my latest commit. I'm not releasing the patch yet, since I want to do some more tests, but I would appreciate if you could confirm that the patch actually solves the encoding issue on your end. Thank you very much for exposing this concern! It only happens with JSON encoded messages in the Redis engine reply that have many escaped characters, so it wasn't easy to pinpoint. Cheers! |
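To make that failure mode concrete, here is a hypothetical illustration (not the actual fix): messages containing many characters that JSON must escape are the ones described as triggering the bug, and a correct transport has to round-trip them exactly.

```ruby
require 'json'

# Hypothetical payload with many escaped characters: quotes, backslashes,
# newlines and tabs all become backslash escape sequences in the encoded
# JSON, which is the kind of message reported to trigger the bug.
payload = { "content" => "line1\nline2\t\"quoted\" and \\back\\slashes\\" }
encoded = JSON.generate(payload)

# The wire format is full of escape sequences
puts encoded

# A correct encode/decode cycle must restore the original exactly;
# the reported bug corrupted messages like this one in transit.
decoded = JSON.parse(encoded)
puts decoded == payload   # true
```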
(y) thanks , it's working without any encoding problem . i have a question : what is the difference between Iodine::PubSub::Redis and Redis.new from the ruby gem ?
I'm happy we managed to solve this for you. Thank you again for all the information and for opening this issue 👍🏻🙏🏻 As for your question, the Redis gem and iodine's Redis engine are significantly different in design. TL;DR: the iodine Redis engine is more opinionated and makes many tasks easier for the developer (set and forget). The Redis gem offers more control over connectivity, but assumes the developer will manage connections, process forking and any other design choices.
hello again ...
while trying to send a considerable number of requests via HTTP ( as well as ws ) to the create function , i figured out that some data is lost when the create function is called many times at the same time ...
i tried to use a mutex to handle the problem , but nothing changed
when i start iodine with one worker and one thread, nothing is lost .
EDIT
i apologise about the Mutex : it does resolve the problem when used like this :
however i still think that iodine should handle that without a Mutex in my code ...
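The Mutex workaround mentioned in the edit might be sketched like this (hypothetical code; the class and the in-memory Array stand in for the real controller and storage). It serializes the critical section within one process, which is why it stops the data loss locally but cannot help once iodine forks multiple workers.

```ruby
# Hypothetical sketch of the Mutex workaround: one process-wide lock
# serializes the read-modify-write inside `create`.
class MutexCtrl
  LOCK = Mutex.new
  @@records = []    # stands in for the shared storage

  def create(item)
    LOCK.synchronize do
      # atomic within this process, but NOT across forked workers
      current = @@records
      @@records = current + [item]
    end
    "Scheduled!"
  end

  def self.count
    @@records.length
  end
end

# Ten concurrent calls: with the lock held, no update is lost
threads = 10.times.map { |i| Thread.new { MutexCtrl.new.create(i) } }
threads.each(&:join)
puts MutexCtrl.count   # 10
```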