Switch to parking_lot and fix Clippy warnings #146
Conversation
@locka99 any feedback or reactions on this one? Thanks!
What are your thoughts on using `tokio::sync::Mutex` instead of `parking_lot`?
It's not possible to switch to `tokio::sync::Mutex` here.
Force-pushed from 8bf53de to 598b1b9
I somehow missed this merge request - I'll have to think about it some more, but it would be nice to break this up a bit, e.g. rename the macros trace_read_lock_unwrap() / trace_write_lock_unwrap() to reduce the noise. I'm not sure of the issue with threads and RwLock / Mutex. The server is using futures but clones and moves the refcount into the future's async function. From then on it is pinned to the future, so it can basically be used by whatever worker thread is making progress on the future.
Closed by mistake
Well, there are a couple of functions that I cannot make async without this change (see the error I added in the initial post). I started out by trying to work around that and keep using `std::sync`. I'm perfectly fine to rename the macros back for now and change those in a later PR if you prefer that. But maybe I should first open a PR to show what the async client will look like based on this branch? Then we can discuss these related PRs by looking at both of them at the same time. Does that make sense? Or are you fine with just looking at this one as long as the macros are renamed and the PR is rebased?
Force-pushed from cbf6ca9 to 2810e23
@locka99 I noticed you already renamed the macros, so that clears up this PR quite a bit already. It's still big, but most of it is pretty easy to read now. Let me know how you want to proceed. I was offline for a while, but I'm finding my way back to my desk again, so I will pick up the async client related work in a bit.
Yes, I renamed to simplify your patch a bit. I'm actually away on vacation at the moment, so I don't know if I have time to carefully review it yet, but I thought it would help to cut some of the noise out for when I do.
Thanks for the update and enjoy your vacation (the PR can wait)!
Force-pushed from 087e3f5 to 87693ac
I'm pushing some clippy fixes which should reduce the size of this patch at the same time.
Oh man... 😞 @locka99, is there any way forward with this PR? These rebases are killing me.

EDIT: Looking a bit closer, it seems not as bad as I initially thought, but in general rebasing this branch is quite labor intensive, so any approach or suggestion on how to move this forward would be great.
Force-pushed from 87693ac to aee1bcf
I know it's painful, but this patch is huge and incorporates things which are unrelated to just the lock change. That's why I've renamed macros and done clippy fixes to reduce the patch to something smaller. But there are changes to functionality, clippy fixes, reformatting, refactoring and changes in behaviour around the client `loop {}`. If we can tackle each of these in smaller patches, or understand the differences, then I would probably just take the rwlock change, or at least try it. For example, if you can run `cargo +nightly fmt` it might reduce a lot of noise. Then we can tackle the other diffs and try to understand what they're about.
I fully understand what you are saying @locka99, and I think I can maybe (without it being too much work) split this PR in two parts... I'm a little bit ill at the moment, but I'll have a look at it later this week. Thanks!
Force-pushed from 85297c6 to c3e238a
Hmm... My idea didn't work out very well (it turned out to be a lot of work after all)... I did run `cargo +nightly fmt` though. I'm open to going through the PR together (using Teams or something like that) if that would help, so I can explain the changes for each block that is unclear?
Force-pushed from 85ab250 to 008805c
Force-pushed from 008805c to 4ba7b29
@locka99 is there any chance we can get this merged before moving to the new crate setup announced in #192? I will make sure this is rebased tomorrow morning, and I'm available to discuss or work on any additional questions/requirements this week if needed... I do have a working async client based on this PR which I'm hoping to be able to contribute soon. I'm currently testing the client in several environments and it looks good, but it seems there is at least 1 issue left with something blocking the worker threads when using multiple clients in the same runtime over a longer period of time. I didn't have time to validate and/or debug that yet.
Force-pushed from c306723 to fac9a7d
Rebased and refactored the changes so they can be applied and used with the new crate layout...
Force-pushed from fac9a7d to b9b1f8a
@locka99: what would you think about sharing the maintenance burden of this crate, if it's hard to find the time to review large PRs? It looks like @svanharmelen is using this crate professionally, and so are we at @enexflow.
I've created a clean parking_lot branch for this work without any noise or potential merge conflicts. Please check it out and see if it satisfies your needs.
I've landed the parking_lot changes, switching RwLock and Mutex to the versions from that crate and updating the imports, so I'll close this PR. If there are any other refactors, they should be submitted separately as smaller patches.
Check... I'm going to check out the branch you pointed out later this evening to see how things work out using that branch 👍🏻

As for all the clippy changes, how would you like to receive those? A PR per feature (previously per crate)? Or a PR per file? Please let me know what you prefer...
Depends how complicated they are. I prefer patches which fix one thing instead of many, so I can see what each is doing.
Thanks for all your time and effort @locka99, and I will for sure not create such big PRs again!
@locka99 I understand this is a massive PR which also contains backwards-incompatible changes, so let me try to explain why I think this PR is really worth it...

The main thing this PR does is switch from `std::sync::{Mutex, RwLock}` to `parking_lot::{Mutex, RwLock}`. This change is needed because `std::sync::{Mutex, RwLock}` cannot be shared between threads safely (a.k.a. it is missing the `std::marker::Send` trait), which is required when using it together with `futures::Future` (see the error I added in the initial post).

I'm hitting this problem as I'm trying to create a full `async_client` without changing too much of the existing logic (so without trying to change mutexes to channels, for example). So really just making the first step towards having a full async client which can then be further optimized along the way, but can be used right away.

Initially I think it makes the most sense to add it next to the existing `client`, but of course it would be very nice (and much easier to maintain) if they can be merged one day. Yet before I can even offer a PR with a full `async_client` that we can talk about, I need to be able to use all the existing mutexes in async code, which is not possible without switching to `parking_lot`.

Additionally, I tried to fix as many Clippy warnings as I could to silence my editor a bit. These are all relatively small fixes which in no way change any of the logic! They just improve the code a tiny bit by adding small optimizations and rewriting a few loops or `if else` statements, which should improve the efficiency or readability. But as you can see/test for yourself, all the tests still pass and everything still works the same as before.

The last little thing I did in this PR (which I can revert if you want) is changing `~1.8` to just `1.8` for all `tokio` dependencies. Having it locked to `~1.8` means that any other project that wants to use this crate is forced to also use this exact same version. This caused some issues for us, so for now I stripped the `~`. If that is not acceptable I will add it back, but that would mean I need to maintain a fork to be able to use the client in our own project.

I really hope I can work with you to get this PR merged. I understand that changing to `parking_lot` could mean that some people using this crate (these crates) will need to update some of their code when they want to use the latest version. Yet the fix is as simple as removing a few `.unwrap()` calls, which the compiler will also point out. So while it does have impact, I would argue the impact is very low and the potential gain (having a full async client) could be worth the pain...

Thanks, and curious to hear your thoughts on this one!