Use enif_consume_timeslice and don't monopolize scheduler thread #49
Wow. That's much smaller than I was expecting to convert to using the new timeslice function. I'd like to see a couple changes before merging this though.
First, don't make this a compile-time switch. Just convert the whole thing over and use a single #ifdef to change the definition of consume_timeslice so that it returns 0 if the function doesn't exist. I'd probably put this into util.c as
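A rough sketch of what such a util.c shim could look like. The guard macro HAS_ENIF_CONSUME_TIMESLICE, the wrapper name, and the ErlNifEnv stub in the fallback branch are illustrative assumptions, not the actual patch:

```c
#include <stddef.h>

#ifdef HAS_ENIF_CONSUME_TIMESLICE
#include "erl_nif.h"

/* Newer OTP releases: forward to the real API. */
static int
consume_timeslice(ErlNifEnv* env, int percent)
{
    return enif_consume_timeslice(env, percent);
}
#else
/* Older OTP releases: the function doesn't exist, so always report
 * "no yield needed" and let the NIF run as before. */
typedef void ErlNifEnv; /* stub so this sketch compiles stand-alone */

static int
consume_timeslice(ErlNifEnv* env, int percent)
{
    (void) env;
    (void) percent;
    return 0;
}
#endif
```

The rest of the code can then call consume_timeslice unconditionally, and the #ifdef lives in exactly one place.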
Secondly, the calculation used to give a percentage to
Third, the processed byte calculation looks subtly broken when encoding bignums (via
All in all, this looks fairly solid. Definitely a lot cleaner than I was expecting.
I have removed the compile-time switch.
This check is not necessary because it is present at
I can squash commits into one if you wish.
Do you agree with everything else?
@urbanserj First off, this is quite awesome. You made all the changes I requested just fine, and I'm planning on merging this, but I'm still trying to reason my way through the calculation for the time slice call.
I'm new to cycle.h but as I read the file itself it seems quite adamant that we shouldn't be trying to convert it to a time unit:
I did some googling on various time functions to see if we couldn't cover most platforms, and I was reminded of Windows' terribleness with time.
I'm thinking about switching back to something along the lines of your original patch, but phrased slightly differently. For both decoding and encoding we'll add an option specifying how much data is processed before yielding back to Erlang. Then, before yielding, we just call enif_consume_timeslice(env, 100) to count it as a full time slice. This makes a lot more sense to me than trying to play games with Erlang's idea of a time slice as a unit of time.
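A minimal sketch of that batching scheme, with the encoder and the yield reduced to stubs so the control flow is testable on its own (the function names and the 4096-byte batch size are illustrative assumptions, not jiffy's actual API):

```c
#include <stddef.h>

/* Encode up to `batch` bytes starting at `off`; returns the new offset.
 * Stands in for jiffy's real encode loop. */
static size_t
encode_some(size_t off, size_t len, size_t batch)
{
    size_t end = off + batch;
    return end > len ? len : end;
}

/* Drive the encode in `batch`-sized steps and return how many times we
 * yielded back to the VM.  In the NIF, each yield point is where
 * enif_consume_timeslice(env, 100) would be called, counting the work
 * done so far as a full time slice, before returning a continuation. */
static int
encode_with_yields(size_t len, size_t batch)
{
    int yields = 0;
    size_t off = 0;
    while (off < len) {
        off = encode_some(off, len, batch);
        if (off < len)
            yields++; /* enif_consume_timeslice(env, 100); then yield */
    }
    return yields;
}
```

With a batch of 4096 bytes, a 10000-byte input yields twice (after 4096 and 8192 bytes) and finishes on the third pass.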
How does that sound?
1 millisecond from
Before the first call to jiffy, a process can produce 1999 reductions. If jiffy calls
I suppose that since no one can change the number of reductions in the beam, this setting won't be popular.
I did some research using
On Mon, Aug 26, 2013 at 4:41 PM, Sergey Urbanovich <firstname.lastname@example.org> wrote:
If I'm taking too much of your time I can try and address it in the next
Thanks for your help so far, though! I'd put this off for a long time
I've been playing around a bit with these changes and ran into a problem causing the beam to segfault. When encoding a bignum, jiffy returns an iolist and the continuation seems to not handle this very well.
Here's how to reproduce it (at least on R16B02):
1> jiffy:encode([trunc(math:pow(2, 64)) || _ <- lists:seq(1, 1000)]).
11130 segmentation fault (core dumped)  erl -pa ebin
As a bonus, here is something very strange:
1> jiffy:encode([trunc(math:pow(2, 64)) || _ <- lists:seq(1, 960)]), ok.
ok
2> jiffy:encode([trunc(math:pow(2, 64)) || _ <- lists:seq(1, 960)]), ok.
11372 segmentation fault (core dumped)  erl -pa ebin