Major-release candidate #122
Conversation
canndrew and others added some commits on Apr 18, 2016
sstewartgallus suggested changes on Mar 11, 2017
sstewartgallus left a comment
Too many useless comments. Specify why, not what.
```rust
pub fn new(t: T) -> CachePadded<T> {
    // Assert the validity.
```
konstin
Mar 12, 2017
Contributor
It's not all "useless comments", it's @ticki's coding style. It's explained in the internals thread corresponding to this PR: https://internals.rust-lang.org/t/crossbeam-request-for-help/4933/16.
After the experience of working with a huge undocumented code base, I appreciate this style very much. Even if those comments are directly deducible from the code, a human-readable comment still improves readability.
sstewartgallus
Mar 12, 2017
@konstin "It's somebody's coding style" is not an argument. Labeling things redundantly directly impedes readability. Any piece of the source code, even a comment, should pay its weight or go to the chopping block. Most software metrics, such as cyclomatic complexity and number of bugs, correlate directly with the number of lines of code. More code means more room for bugs and out-of-date documentation.
drewhk
Mar 14, 2017
I agree with @konstin. Concurrency code can benefit from extensive documentation, at higher granularity than usual if necessary.

> Most software metrics, such as cyclomatic complexity and number of bugs, correlate directly with the number of lines of code.

I don't see how comments contribute to cyclomatic complexity, or to code complexity in general.

> More code means more room for bugs and out-of-date documentation.

So, conversely, zero documentation means no room for being out of date? There is obviously a balance here. For core, concurrency-related code, I prefer heavy commenting. Even tiny things can have significance. Also, out-of-date comments should be less of an issue, since refactoring core concurrency code should be infrequent and heavily reviewed anyway.
Just my 2c.
jeehoonkang
Mar 14, 2017
Contributor
I generally agree with @drewhk that extensive documentation helps a lot, especially for concurrency code. But for this specific comment, I agree with @sstewartgallus: `Assert the validity.` doesn't give us any information beyond the code itself (`assert_valid::<T>();`). It would be very helpful to write down the reason why we need to assert the validity.
drewhk
Mar 14, 2017
Sure, there is a discussion to be had about the right balance here ;) I just wanted to voice my concern about blanket statements about comment granularity.
```rust
assert_valid::<T>();

// Construct the (zeroed) type.
```
```rust
};

// Copy the data into the untyped buffer.
```
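Taken together, the snippets quoted here describe a constructor that asserts validity, zeroes a buffer, and copies the value in. A minimal self-contained sketch of that shape (the layout, the `CACHE_LINE` value, and the helper names are assumptions, not the PR's exact code; dropping the inner `T` is omitted):

```rust
use std::marker::PhantomData;
use std::mem;
use std::ptr;

const CACHE_LINE: usize = 64;

// Assumed layout: an aligned, untyped byte buffer holding a `T`.
#[repr(align(64))]
struct CachePadded<T> {
    bytes: [u8; CACHE_LINE],
    _marker: PhantomData<T>,
}

fn new<T>(t: T) -> CachePadded<T> {
    // Assert the validity: `T` must fit in the padded buffer and its
    // alignment must not exceed the struct's 64-byte alignment.
    assert!(mem::size_of::<T>() <= CACHE_LINE);
    assert!(mem::align_of::<T>() <= 64);
    // Construct the (zeroed) buffer.
    let mut padded = CachePadded {
        bytes: [0u8; CACHE_LINE],
        _marker: PhantomData,
    };
    // Copy the data into the untyped buffer; forget the source so it
    // is not dropped while its bytes live on in the buffer.
    unsafe {
        ptr::copy_nonoverlapping(
            &t as *const T as *const u8,
            padded.bytes.as_mut_ptr(),
            mem::size_of::<T>(),
        );
    }
    mem::forget(t);
    padded
}

fn get<T>(padded: &CachePadded<T>) -> &T {
    // The buffer is the only sized field of a 64-byte-aligned struct,
    // so its address satisfies any alignment up to 64.
    unsafe { &*(padded.bytes.as_ptr() as *const T) }
}

fn main() {
    let p = new(0xDEAD_BEEF_u64);
    assert_eq!(*get(&p), 0xDEAD_BEEF);
    assert_eq!(mem::size_of::<CachePadded<u64>>(), CACHE_LINE);
}
```

This also makes the later size discussion concrete: a `T` larger than `CACHE_LINE` trips the assertion instead of silently corrupting memory.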
```rust
/// `None` maps to the null pointer.
#[inline]
fn opt_shared_into_raw<T>(val: Option<Shared<T>>) -> *mut T {
    // If `None`, return the null pointer.
```
```rust
/// `None` maps to the null pointer.
#[inline]
fn opt_box_into_raw<T>(val: Option<Box<T>>) -> *mut T {
    // If `None`, return the null pointer.
```
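The body of this helper is elided from the diff; a plausible runnable sketch, based only on the doc comment (the `match` arms are an assumption):

```rust
use std::ptr;

/// `None` maps to the null pointer.
#[inline]
fn opt_box_into_raw<T>(val: Option<Box<T>>) -> *mut T {
    match val {
        // A live box becomes a raw pointer that the caller now owns.
        Some(b) => Box::into_raw(b),
        // If `None`, return the null pointer.
        None => ptr::null_mut(),
    }
}

fn main() {
    // `None` becomes null...
    assert!(opt_box_into_raw::<i32>(None).is_null());
    // ...and `Some` becomes an allocation that must be reclaimed.
    let p = opt_box_into_raw(Some(Box::new(7)));
    unsafe { assert_eq!(*Box::from_raw(p), 7); }
}
```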
```rust
guard.unlinked(head);
// Read the data.
```
```rust
unsafe {
    // Unlink the head node from the epoch.
```
```rust
let next = head.next.load(Relaxed, &guard);
if self.head.cas_shared(Some(head), next, Release) {
    // Set the node to the tail node, unless it changed (ABA condition).
```
```rust
            None => return None,
        }
    }
}

/// Check if this queue is empty.
pub fn is_empty(&self) -> bool {
    // Pin the epoch.
```
```rust
let guard = epoch::pin();
// Test if the head is a null pointer.
```
schets reviewed on Mar 11, 2017
```rust
/// call).
pub fn enter(&self) -> bool {
    // Increment the counter.
    if self.in_critical.fetch_add(1, atomic::Ordering::Relaxed) > 0 {
```
schets
Mar 11, 2017
Member
I expect this would make pinning more expensive. On Intel, you've essentially replaced a `load -> store -> mfence` sequence with a `lock xadd -> mfence` sequence, which isn't too different from an `mfence -> mfence` sequence. Maybe the second fence is fast and can be reordered around, since the previous fence already blocked everything, but it seems inefficient at first glance. The story is better, but not great, on LL/SC architectures. Since it's not a big gain in clarity over `load -> add -> store`, I think it should be changed back.
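To make the trade-off concrete, here is a hedged sketch of the two sequences being compared (illustrative names, not crossbeam's actual API; the load/store variant is only sound in the real design because each thread writes its own counter):

```rust
use std::sync::atomic::{fence, AtomicUsize, Ordering};

// Shared counter standing in for `in_critical` (illustrative).
static IN_CRITICAL: AtomicUsize = AtomicUsize::new(0);

// Variant 1: a read-modify-write (`lock xadd` on x86), then a fence.
fn enter_rmw() -> bool {
    let was_critical = IN_CRITICAL.fetch_add(1, Ordering::Relaxed) > 0;
    fence(Ordering::SeqCst);
    was_critical
}

// Variant 2: separate load and store (sound only when a single thread
// writes this counter, as in per-thread pinning), then a fence.
fn enter_load_store() -> bool {
    let prev = IN_CRITICAL.load(Ordering::Relaxed);
    IN_CRITICAL.store(prev + 1, Ordering::Relaxed);
    fence(Ordering::SeqCst);
    prev > 0
}

fn main() {
    assert!(!enter_rmw());       // counter goes 0 -> 1
    assert!(enter_load_store()); // counter goes 1 -> 2
    assert_eq!(IN_CRITICAL.load(Ordering::Relaxed), 2);
}
```

The objection above is that variant 1 pairs an atomic RMW with a full fence, costing roughly two serializing operations on x86, while variant 2 needs only one.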
pitdicker reviewed on Mar 11, 2017
```rust
//! - When the thread subsequently reads from a lock-free data structure, the pointers it extracts
//!   act like references with lifetime tied to the `Guard`. This allows threads to safely read
//!   from snapshotted data, being guaranteed that the data will remain allocated until they exit
//!   the epoch.
//!
//! To put the `Guard` to use, Crossbeam provides a set of three pointer types meant to work together:
//!
```
arthurprs requested changes on Mar 11, 2017
arthurprs left a comment
I did a first pass and the code looks good.
```rust
// think, especially if several threads constantly creates new garbage).
loop {
    // Load the head.
    let head = self.head.load(atomic::Ordering::Acquire);
```
arthurprs
Mar 11, 2017
I don't see the problem: it loads the head to set `n.next`, then tries to replace the head with `n`.
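The pattern described here is the classic Treiber-stack push: load the head, point the new node at it, then CAS. A self-contained sketch with `std` atomics (simplified: no epoch-based reclamation, so `pop` frees immediately, which is only safe single-threaded):

```rust
use std::ptr;
use std::sync::atomic::{AtomicPtr, Ordering};

struct Node<T> {
    data: T,
    next: *mut Node<T>,
}

struct Stack<T> {
    head: AtomicPtr<Node<T>>,
}

impl<T> Stack<T> {
    fn new() -> Self {
        Stack { head: AtomicPtr::new(ptr::null_mut()) }
    }

    fn push(&self, data: T) {
        let n = Box::into_raw(Box::new(Node { data, next: ptr::null_mut() }));
        loop {
            // Load the head to set `n.next`...
            let head = self.head.load(Ordering::Acquire);
            unsafe { (*n).next = head; }
            // ...then try to replace the head with `n`, retrying on contention.
            if self.head
                .compare_exchange(head, n, Ordering::Release, Ordering::Relaxed)
                .is_ok()
            {
                return;
            }
        }
    }

    fn pop(&self) -> Option<T> {
        loop {
            let head = self.head.load(Ordering::Acquire);
            if head.is_null() {
                return None;
            }
            let next = unsafe { (*head).next };
            if self.head
                .compare_exchange(head, next, Ordering::AcqRel, Ordering::Relaxed)
                .is_ok()
            {
                // Freeing here is exactly where epoch-based reclamation
                // (and the ABA problem) enters for the real structure.
                return Some(unsafe { Box::from_raw(head) }.data);
            }
        }
    }
}

fn main() {
    let s = Stack::new();
    s.push(1);
    s.push(2);
    assert_eq!(s.pop(), Some(2));
    assert_eq!(s.pop(), Some(1));
    assert_eq!(s.pop(), None);
}
```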
```rust
    }
}

impl<T> ArcCell<T> {
```
```rust
/// Returns `None` if the queue is observed to be empty.
pub fn try_pop(&self) -> Option<T> {
/// This returns `None` if the queue is observed to be empty.
pub fn dequeue(&self) -> Option<T> {
```
```rust
data: t,
/// Push an element on top of the stack.
pub fn push(&self, elem: T) {
    // Construct the node.
```
arthurprs
Mar 11, 2017
Overall I'd refrain from these comments unless the comment is badly worded or incorrect. The code-review surface is already huge without nitpicking these.
```rust
/// There may only be one worker per deque, and operations on the worker
/// require mutable access to the worker itself.
#[derive(Debug)]
pub struct Worker<T> {
```
I have two general comments to make about the …
Vtec234 reviewed on Mar 11, 2017
```rust
/// The next node.
next: &'a Atomic<ParticipantNode>,
// Has `self.next()` **not** been called before?
first: bool,
```
Given the size and complexity of this whole thing, I've come to think that it might be better if @ticki could split this into several smaller PRs. It's barely been reviewed and we're already crawling through an 80-comment thread. A potential organisation: …
What do people think?
@Vtec234 I like that. It's definitely a mistake of mine to make such a big PR with a lot of unrelated changes.
btw, I've enabled it so that everyone with write access can add comments. Please feel free to use it.
gnzlbg
commented
Mar 12, 2017
Are the tests already running with the thread sanitizer on CI by default?
I strongly agree with @Vtec234's suggestion to split this PR (#122 (comment)). I would like someone to close this PR and make a tracking issue for each change item mentioned above. Speaking of "someone": I feel this project currently lacks a proper decision-making procedure, even for closing issues and merging PRs :\ For now I would like to ask if @ticki could provide leadership for this PR (which happens to be the most important one at the moment). If you are short of time, please kindly tell me what you want me to do :) Of course, in the longer term we will eventually need to establish some form of governance.
I really want some help with one thing in particular: the bug in … Either way, it's hard for me to do alone.
See #124 for the time being. We'll have to play it a bit by ear for a while, I think, and gradually formalize a procedure.
jeehoonkang reviewed on Mar 15, 2017
```rust
/// Do not let your code's safety or correctness depend on this being the real size. The actual
/// value may not be equal to this.
// TODO: For now, treat this as an arch-independent constant.
const CACHE_LINE: usize = 64;
```
jeehoonkang
Mar 15, 2017
Contributor
On x86-64, the size of `CachePadded<T>` changed from 256 (`sizeof(usize) * CACHE_LINE = 8 * 32 = 256`) to 64 (`sizeof(u8) * CACHE_LINE = 1 * 64 = 64`). But 64 bytes are not sufficient: e.g. `Participant`, which is fed to `CachePadded`, is 104 bytes. That causes all the test failures.
The tests still fail for other reasons even after fixing this by setting `const CACHE_LINE: usize = 256;`, though...
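The size arithmetic can be checked directly. Here is an illustrative sketch (not the PR's implementation) that pads via alignment instead of a fixed byte buffer: with `#[repr(align(64))]` the compiler rounds the struct size up to a multiple of 64, so a 104-byte payload gets two cache lines rather than overflowing a 64-byte buffer.

```rust
use std::mem;

// Illustrative only: padding via alignment rather than a fixed buffer.
#[repr(align(64))]
struct CachePadded<T> {
    value: T,
}

fn main() {
    // A small type is rounded up to one 64-byte line.
    assert_eq!(mem::size_of::<CachePadded<u8>>(), 64);
    // A 104-byte type (like `Participant` in this thread) gets two lines.
    assert_eq!(mem::size_of::<CachePadded<[u8; 104]>>(), 128);
    let p = CachePadded { value: 7u8 };
    assert_eq!(p.value, 7);
}
```

This sidesteps the fixed-buffer bug entirely, at the cost of making the padded size depend on `size_of::<T>()`.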
ticki
Mar 15, 2017
Author
Member
Holy fuck! So simple.

> The tests still fail for other reasons even after fixing this by setting `const CACHE_LINE: usize = 256;`, though...

All the tests run for me...
arthurprs
commented
Mar 22, 2017
I spent a few hours on this with no success. Most commits left the build broken and a later commit would fix it, so `git bisect` was useless. I tried getting each commit to compile, but after a few hours I gave up; that's probably the only way to find it, though, short of eyeballing or some magical debug powers.
Why?
Not sure I agree with this.
I feel like this is a mistake. It's not unsafe or anything, I just think people will use it incorrectly.
Not sure about this either.
It's my horrible habit: just write, and then fix when you're done. It makes things hard to split up.
The previous implementation blocked as well. AFAICS there is no non-blocking way of doing this, and the blocking part only happens during initialization, and even then it is rare.
That is, well, a very weird thing. Like, of course it calls the inner destructor, and even the free makes more sense than the vector hack. If you want a changed destructor, you can just wrap your …
I see literally no way of 'misusing' this?
This one is interesting. I don't think it is an issue in reality, as …
Maybe blocking was the wrong word? I'm sorry. The manual implementation is lock-free.
This is true, but it expands the scope in which it can be used incorrectly. I don't think it should be done idly, but I don't see much of a benefit to it either. The old invariant is "only used on …
I think I may have caused some confusion here, so I'll drop this point. Sorry.
That's a good point. It can be tempting to store it, say, in a struct.
I am closing this, as (1) we decided to split this PR into smaller ones, and (2) there is not much traffic on this PR these days. As a follow-up, I will make smaller PRs of the changes made in this PR. As noted above, the commits are quite mixed, so it will be hard to maintain them as-is; I will make new commits based on the diffs.
jeehoonkang
closed this
Apr 4, 2017
arthurprs
commented
Apr 4, 2017
@jeehoonkang thanks for picking this up. I'll do my best to review in a timely manner.
ticki commented Mar 10, 2017 (edited by arthurprs)
Changes
- Remove `chase_lev`, as it is too complex to be in the crossbeam core crate (I plan making a side crate with this).
- Remove `Owned<T>` in favor of `Box<T>`.
- … `Shared<T>` from `Guard`.
- … `Pinned`.
- Remove the `mem` module, moving its items to the crate root.
- Remove `ArcCell` (eventually moved to the secondary crate).
- … `cas()`.
- Remove `AtomicOption`, which basically mirrored `epoch::Atomic`.
- Rename `try_pop` to `pop`.
- … `Atomic<T>`.
- Add `Default` to most of the types.
- Add `map` to `Shared<T>`, allowing internal referencing with `owning_ref`.
- Use `lazy_static` rather than a manual implementation of it.
- … `CachePadded`

(was that all?)
Right now, I'm going to get some sleep. I don't have much time tomorrow, so I request someone to help me with this: the tests are broken, from what I suspect to be a simple bug in `CachePadded`, but I'm too tired to figure it out.