How high can we grow the group chat size limit? #701
Comments
Adding and removing large numbers of members is the concern.
Do you have benchmarks for how long it would take? I have been working on benchmarking larger groups, but it's hard to get measurements that aren't dominated by setup or memory. For a 1000-member group, for example, I see 734ms to add a new member.
Hello @franziskuskiefer! I'm doing benchmarking and have some early findings here: #793. I plan to have the full HTML report uploaded today, but I'm running into some bugs with all the runs. Also, I'm benchmarking libxmtp end-to-end here, which will probably yield slightly different results than the pure MLS benchmarks. 734 ms for adding a member to a 1000-person group seems to check out, however. Right now, bulk-adding ~8000 people to an empty group takes ~8 seconds end-to-end in libxmtp. There is a minor bug to sort out before benchmarking adding a single member to a large group; I hope to follow up with a benchmark for that soon once the bug is fixed.
Great! This sounds roughly like what I'd expect.
This makes sense and is also great news for the MLS side; those numbers are awesome. I will have to review my benchmarks to make sure I'm not accidentally benching memory allocations where I shouldn't be (in the current state I definitely am). I also think that, at least on the libxmtp side, benchmarks will be dominated by network overhead, and there are definitely some internal optimizations that would make things faster, so I'm still pessimistic about getting the libxmtp benchmarks much better purely by modifying the benchmarking code. I am very interested in the code you have for this too. Also, I'm curious about your opinion on criterion: do you think we should stay away from it, or does it still make sense to use for our benchmarks?
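For context on keeping allocations out of the measurement, criterion's `iter_batched` runs setup outside the timed closure. A minimal sketch of that pattern is below; `Group`, `build_group`, and `add_member` are placeholder stand-ins, not the OpenMLS or libxmtp API.

```rust
use criterion::{criterion_group, criterion_main, BatchSize, Criterion};

// Hypothetical stand-ins so the sketch is self-contained; in a real bench
// these would be the group type and add-member call under test.
struct Group {
    members: Vec<u64>,
}

fn build_group(size: usize) -> Group {
    Group {
        members: (0..size as u64).collect(),
    }
}

fn add_member(group: &mut Group, id: u64) {
    group.members.push(id);
}

fn bench_add_member(c: &mut Criterion) {
    c.bench_function("add member to 1000-member group", |b| {
        b.iter_batched(
            // Setup runs outside the timed section: build the group here.
            || build_group(1000),
            // Only this closure is timed.
            |mut group| add_member(&mut group, 1001),
            // Batch size hint; inputs are created and dropped outside the
            // measured region.
            BatchSize::SmallInput,
        )
    });
}

criterion_group!(benches, bench_add_member);
criterion_main!(benches);
```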
Once we make sure the benchmarks are somewhat aligned, we should be able to estimate the overhead on the libxmtp side and see if it makes sense to reduce it.
The benchmarks are on a branch here; I'll get them PRed over the next few days. I first tried a couple of different things to move the memory operations outside of the measured code. Since none of that worked (I think I learned this a couple of times before, but for some reason I had to re-learn the lesson), I wrote a simple benchmarking macro. Essentially, I'm trying not to move any memory around. There is always one
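A minimal sketch of what such a macro could look like, assuming the goal is simply timing an expression over many iterations with no allocation inside the loop; the actual macro on the linked branch may differ.

```rust
// Times `$body` over `$iters` iterations without allocating in the loop.
macro_rules! bench {
    ($name:expr, $iters:expr, $body:expr) => {{
        let start = std::time::Instant::now();
        for _ in 0..$iters {
            // `black_box` keeps the optimizer from eliding the work.
            std::hint::black_box($body);
        }
        let elapsed = start.elapsed();
        println!("{}: {:?} per iteration", $name, elapsed / $iters);
    }};
}

fn main() {
    // Preallocate so no reallocation happens inside the measured loop.
    let mut v: Vec<u64> = Vec::with_capacity(1_000_000);
    bench!("push", 1_000_000u32, v.push(1));
}
```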
These are known issues. I filed something a couple of years back to make sure it's on record.
Up to 20k. How close can we get to that performantly?