
Added RefCell blanket implementation for I2C to allow sharing, see #35 (#55)

Closed

Conversation

@therealprof (Contributor) commented Mar 5, 2018

Signed-off-by: Daniel Egger <daniel@eggers-club.de>
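For context, the idea under discussion can be sketched roughly like this. It is a minimal, self-contained stand-in: the `I2cWrite` trait and `FakeBus` type below are hypothetical placeholders for embedded-hal's blocking I2C trait and a real bus peripheral, not the actual items from this PR.

```rust
use std::cell::RefCell;

// Hypothetical stand-in for embedded-hal's blocking I2C write trait.
trait I2cWrite {
    type Error;
    fn write(&mut self, addr: u8, bytes: &[u8]) -> Result<(), Self::Error>;
}

// Blanket implementation: a shared reference to a RefCell-wrapped bus
// also implements the trait, so several drivers can each hold a
// `&RefCell<Bus>` and borrow the bus mutably per transaction.
impl<'a, T: I2cWrite> I2cWrite for &'a RefCell<T> {
    type Error = T::Error;
    fn write(&mut self, addr: u8, bytes: &[u8]) -> Result<(), Self::Error> {
        self.borrow_mut().write(addr, bytes)
    }
}

// A fake bus that records transactions, for demonstration only.
struct FakeBus {
    log: Vec<(u8, Vec<u8>)>,
}

impl I2cWrite for FakeBus {
    type Error = ();
    fn write(&mut self, addr: u8, bytes: &[u8]) -> Result<(), ()> {
        self.log.push((addr, bytes.to_vec()));
        Ok(())
    }
}

fn demo() -> Vec<(u8, Vec<u8>)> {
    let bus = RefCell::new(FakeBus { log: Vec::new() });
    // Two independent "drivers" each hold a shared reference to the bus.
    let mut driver_a = &bus;
    let mut driver_b = &bus;
    driver_a.write(0x20, &[1, 2]).unwrap();
    driver_b.write(0x21, &[3]).unwrap();
    bus.into_inner().log
}

fn main() {
    let log = demo();
    assert_eq!(log.len(), 2);
    println!("{:?}", log);
}
```

Note that this only works within a single execution context; the borrow-checking happens at runtime and is not interrupt-safe, which is exactly the concern raised later in this thread.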

@therealprof (Contributor, Author) commented Mar 5, 2018

@japaric Any chance you could check this out and let me know whether this is what you had in mind?

@burrbull (Contributor) commented Mar 6, 2018

This is also useful for blocking RefCell, etc.

@therealprof (Contributor, Author) commented Mar 6, 2018

@burrbull "blocking RefCell"? You probably mean other blocking buses? Yes, it is. I just wanted to start with one I can actually test, to make sure it hits the spot.

@burrbull (Contributor) commented Mar 6, 2018

I mean RefCell for the blocking Delay traits (DelayUs, DelayMs). GitHub removed the brackets along with their content.

@therealprof (Contributor, Author) commented Mar 6, 2018

Yes, totally agree that this would be useful.

@japaric (Member) commented Mar 10, 2018

I know I originally suggested this, and I know there are at least two other threads discussing this
issue, which I haven't had the chance to catch up with yet (*), so take my comment with a grain of
salt.

I'm concerned about the scalability and the "coupling" of this approach.

To elaborate: RefCell is not interrupt or thread safe; that means that if you want to use the same
I2C bus with two or more drivers that live in different execution contexts you'll need some other
protection mechanism. There's no standard Mutex in Cortex-M land; there's an interrupt based one
in the cortex-m crate but every (RT)OS (binding) is going to have its own Mutex type. That means
that every one of these protection mechanisms will have to have a similar blanket implementation.

That's the scalability issue; the "coupling" problem is that cortex-m, cortex-m-rtfm and every
RTOS crate will have to depend on the embedded-hal crate, or the other way around, to provide
these blanket implementations because of coherence. Ideally I'd like to keep the concerns separated:
the RTOS crates should only care about multitasking and resource protection, and embedded-hal
should only care about providing I/O interfaces; where they need to interact they should do so via
some common third party interface, like Futures and Generators -- this way one doesn't have to
depend on the other. The practical problem I see with coupling is that it restricts what minor
versions of embedded-hal one can use with $RTOS v0.x.y and vice versa; it could also lead to minor
version bumps of $RTOS whenever there's a new minor version of embedded-hal or vice versa.

I'd like us to explore, and exhaust, all the other options before committing to one approach -- I
see this PR as committing to the "couple resource protection to the driver" approach.

(*) I'll post some other idea that I have in one of those threads if it has not been brought up
already.

@hannobraun (Member) commented Jul 17, 2018

As there are valid arguments against this solution, and as the discussion in #35 has since moved on to other approaches, it seems to me that this pull request can be closed.

@therealprof Do you agree?

@therealprof (Contributor, Author) commented Jul 17, 2018

The validity of the arguments against this solution depends on the point of view, but as long as there is a workable solution I'm happy.

@Rahix commented Jul 17, 2018

@therealprof did you take a look at my proposed solution yet? I tried implementing your proxy idea and it looks promising in my opinion.
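A rough illustration of such a proxy approach might look like the following. The `BusManager` and `BusProxy` names and the use of a standard-library `Mutex` are my own simplifications for this sketch, not the API of the actual proposal; an embedded implementation would use a platform-appropriate lock instead.

```rust
use std::sync::Mutex;

// Sketch of the "bus proxy" idea: the manager owns the bus behind a
// lock and hands out lightweight proxies that lock per transaction.
struct BusManager<T> {
    bus: Mutex<T>,
}

impl<T> BusManager<T> {
    fn new(bus: T) -> Self {
        BusManager { bus: Mutex::new(bus) }
    }

    fn acquire(&self) -> BusProxy<'_, T> {
        BusProxy { manager: self }
    }
}

struct BusProxy<'a, T> {
    manager: &'a BusManager<T>,
}

impl<'a, T> BusProxy<'a, T> {
    // Each transaction holds the lock only for its own duration.
    fn with_bus<R>(&self, f: impl FnOnce(&mut T) -> R) -> R {
        let mut guard = self.manager.bus.lock().unwrap();
        f(&mut *guard)
    }
}

fn demo() -> Vec<u8> {
    // Use a Vec as a stand-in "bus" that records device addresses.
    let manager = BusManager::new(Vec::new());
    let p1 = manager.acquire();
    let p2 = manager.acquire();
    p1.with_bus(|b| b.push(0x20));
    p2.with_bus(|b| b.push(0x21));
    manager.bus.into_inner().unwrap()
}

fn main() {
    assert_eq!(demo(), vec![0x20, 0x21]);
}
```

The appeal over the RefCell blanket impl is that the locking strategy is pluggable per platform, so embedded-hal itself does not have to know about any particular Mutex type.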

@therealprof (Contributor, Author) commented Jul 17, 2018

@Rahix Not yet but I will. ;)

The problem I see is that the number of hoops one has to jump through to use Rust on MCUs is accumulating. Every time we say "well, we can't do that because it's considered unsafe, and in Rust we don't do unsafe" about something that is quite common everywhere else, we drive people away from Rust instead of attracting them.

@hannobraun (Member) commented Jul 18, 2018

> The problem I see is that the number of hoops one has to jump through to use Rust on MCUs is accumulating

Without commenting on the specifics of this case, I believe that to some degree this is unavoidable. There's something we want to get out of using Rust, and often we need to pay for that in complexity.

That doesn't mean we shouldn't do our best to minimize that additional complexity though.
