
Async drivers #279


Merged
merged 54 commits into from
Aug 22, 2023

Conversation

ivmarkov
Collaborator

@ivmarkov ivmarkov commented Jul 8, 2023

Overview

This PR introduces or reworks async support for four drivers: GPIO, ADC, I2S, and SPI:

  • GPIO: existing async support simplified and more flexible now
  • ADC: added both blocking and async "continuous" reading mode (with DMA) - only for ESP IDF 5+ for now
  • I2S: added read_async and write_async async methods; removed unsafe subscription to ISR events
  • SPI: added read_async and write_async async methods

The async support for all of the above drivers shares a common pattern, which is also the reason I did a single PR for all of them: I wanted to see if this simple approach would work everywhere:

  • All async support is implemented as an incremental add-on to the existing blocking drivers, by just adding async functions next to the blocking ones. No blocking code was harmed in this implementation :-) The semantics of the blocking code stay the same
  • The async support for all drivers (except GPIO which is even simpler) follows a very simple pattern:
    • It subscribes an ISR callback to the driver (all of the above drivers support such ISR callback subscription)
    • When our ISR callback is called, an "embassy-sync"-like Notification object is set to "triggered", which wakes the async executor and schedules any task awaiting (polling) the Notification object - see below
    • The new async read_async / write_async methods in each driver are simply wrappers that - in a loop - call the blocking read/write methods with a timeout of 0 (so they in fact don't block), and if these methods return ESP_ERR_TIMEOUT (meaning the read/write was unsuccessful because the driver's FreeRtos receive queue was empty or its FreeRtos send queue was full), they just await the Notification which is triggered from the ISR
    • This simple trick (should) work just fine, because in all these drivers the native ISR handling is invoked either to fetch the next data from the driver's FreeRtos send queue and push it to the actual hardware, or to fetch incoming data from the hardware and push it into the driver's FreeRtos receive queue. Since our own callbacks are called after this ISR queue processing, waking our read/write futures is exactly what we should do, as the chance that the nonblocking read/write will succeed afterwards is very high (if it does not succeed, the async code just awaits the notification again)

Next steps (short term)

  • Test the changes:
    • ADC: I've tested a bit the ADC continuous driver which is brand new
    • GPIO: I'll test the GPIO driver shortly
    • SPI: Pending. @Vollbrecht would appreciate if you can take a look
    • I2S: Pending. @dacut would appreciate if you can take a look
  • Code-review the changes:
    • I2S: @dacut I would really appreciate a code review by you. The changes to the I2S driver are a bit bigger than to the others (though - again - no blocking code should have been semantically changed!), because of the following roadblock: the read/write blocking API was defined by you in a trait. Unfortunately, I couldn't just add async read_async/async write_async method declarations to your traits, because async methods in traits are not yet supported in stable Rust. This forced me to retire your traits and instead implement the read/write methods directly on the I2sDriver structure. This - in turn, and to avoid enormous code repetition - meant that I needed to fold all of I2sStdDriver, I2sPdmDriver and I2sTdmDriver into I2sDriver itself. As a result, all constructors have a std/tdm/pdm suffix now, as in I2sStdDriver::new_rx() became I2sDriver::new_std_rx(). But... the outcome (IMO) is a simpler API, as the only I2S driver we have now is the I2sDriver struct, and the pdm vs std vs tdm selection is solved by using the appropriate constructor.
    • SPI: @Vollbrecht @Dominaezzz I would really appreciate a code review by you. @Dominaezzz - I do realize that this approach is quite a bit different from your original PR, where you went with a completely separate async driver for SPI, and therefore I would be very interested in your assessment of my approach - specifically if/why it would be inferior to what you were implementing originally. I was actually looking at your approach, and I think there is only one significant difference - you try to schedule multiple ESP IDF transactions per device to the native ESP IDF driver, while I only allow a single active transaction per device (which is actually what allows me to simplify the code so much and implement it as a simple add-on to the existing driver). In my opinion we don't actually lose anything with my approach, but let's argue over this once you start reviewing
    • ADC: If somebody wants to look into this - welcome! This is brand new code for the blocking case as well
    • GPIO: ditto. I got rid of the explicit InputFuture in favor of async/await (why would we need it? it's not as if writing poll implementations is fun), removed the need for alloc, and made the async support a tad more flexible, in that the GPIO driver no longer does an ISR subscribe/unsubscribe every time you await a pin input state, which means the chance of missing state changes is lower

Next steps (mid term)

If the above approach (trigger notification from the ISR and then just wrap around the existing driver's blocking read/write methods with a timeout of 0) proves successful, we need to "operate" - in a similar fashion - all of UART, I2C, CAN, PWM and LEDC. UART, I2C and CAN specifically will need small PRs to the ESP IDF itself, as they don't currently offer a way for the user to subscribe and be called back from the driver's ISR handling routine. But these drivers have mostly everything else in place (i.e. FreeRtos queues and read/write methods with a timeout) so the changes to the ESP IDF should be minimal.

@ivmarkov ivmarkov force-pushed the async-drivers branch 12 times, most recently from 111e209 to 03c8cf7 on July 9, 2023 12:02
Contributor

@Dominaezzz Dominaezzz left a comment


I've only looked at SPI. (I'll look at gpio at some point since I know I'll have some thoughts about the interrupt handling 🙂)

Besides the inability to queue multiple transactions I don't think your implementation is lacking any features that mine had.

I do think this feature is quite important though as it means the next ESP-IDF transactions can still happen even when the runtime is too busy polling other futures.

The reason my implementation was so complex is because "async in traits" weren't a thing at the time, so I had to make an SpiFuture struct to use a GAT return type.
With "async in traits" being available now, I'd have just written a much simpler for loop with all the state being implicit on the stack.

Also, the reason I went with a separate async driver was to avoid mixing polling and interrupt transactions but &mut self should be enough to prevent this. The other reason was because the e-hal async trait has the same method names as the sync trait, which will make the compiler unhappy. I still think it's best to separate them but this is up to you.

This PR is good timing though, as I had just thought about how to take advantage of "async in traits" but I hadn't written any code yet, so I can put my ideas here.

I have more comments but they're sort of contingent on how you respond to my comment about the Operation trait, so I'll just wait until we resolve that.

src/spi.rs Outdated
@@ -300,10 +308,11 @@ pub mod config {
}

pub struct SpiBusDriver<T> {
_lock: Lock,
lock: Option<Lock>,
Contributor


Why has this become optional? I saw transaction_async takes is_locked: bool but why would users want to not have a lock? That would lead to some non-transactional behavior, which users can already achieve by individually calling the operations themselves.

Collaborator Author

@ivmarkov ivmarkov Jul 10, 2023


Yeah. Think about multiple devices on a single bus. This case is impossible to do asynchronously (assuming no changes are made to the ESP IDF itself and no horrible hacks). Reason: ESP IDF's bus-locking functions are blocking/synchronous, and unfortunately we have to use them to implement proper embedded-hal (or if you wish - "Rust") transactions, as the native ESP IDF transactions are constrained in terms of buffer size. Therefore:

  • Either you have just one device on the bus, and then locking the bus always completes without waiting
  • Or you need to lift the "lock the bus" requirement - which - of course - means you are not doing "real" transactions, from e-hal POV, but at least the stuff runs without blocking.

The esp-idf-hal SPI public *_async APIs do take a lock_bus: bool extra parameter precisely for that reason. I mean, the user - in the current state of things - has to pick the lesser evil. Which one is the lesser evil (try to lock the bus and block if there are multiple devices on it, or don't lock the bus but then no real transactions - no big deal for stuff like LCDs, I guess) only the user knows.

Contributor


Ah, I hadn't considered the blocking aspect of this.
I see your point. I think it'd be better if users just make the individual calls to read, write, etc. if they want lock_bus = false behaviour.

spi_device_acquire_bus takes a timeout though, so you could make it poll like you've done with spi_device_get_trans_result I think.

Collaborator Author


spi_device_acquire_bus takes a timeout though, so you could make it poll like you've done with spi_device_get_trans_result I think.

Timeout is of no use if I don't get notified when to try again. Hmmm...

Contributor


Well, don't you have that problem already with spi_device_queue_trans? It internally locks the bus, which is why it takes the timeout parameter as well.

                loop {
                    match esp!(unsafe {
                        spi_device_queue_trans(handle, &mut transaction as *mut _, delay::NON_BLOCK)
                    }) {
                        Ok(_) => break,
                        Err(e) if e.code() != ESP_ERR_TIMEOUT => return Err(e),
                        _ => NOTIFIER[*host as usize].wait().await,
                    }
                }

Ahhh, the shared notifier makes this work. Pesky pesky. I don't know if this is going too far, but maybe some shared state per driver could keep a bag of all the wakers waiting for transactions to end?

There's also the option of calling the waker immediately but I'll get scolded for that suggestion haha.

Contributor


Yes I agree with you. Now that you've pointed out the native esp-idf driver problem, the issue I brought up still stands.

How do you solve this problem for spi_device_queue_trans? Since you don't know when the bus lock is released. (spi_device_queue_trans and spi_device_polling_transmit lock the bus)

Collaborator Author


We pre-lock it. Which we anyway have to do if the keep-cs-active flag is raised (and it always is by us, as the ESP IDF SPI transactions don't fit the notion of E-HAL transactions). Otherwise the SPI driver returns an error.

Collaborator Author

@ivmarkov ivmarkov Aug 5, 2023


... but as the table at the top of the driver says - you can't currently use async when you have multiple devices on the bus. Or actually you can, but it will not be "async".

Contributor


Ah fair enough, it all makes sense now.
At least folks who want multiple devices can create an SpiBusDriver, share it with an async mutex and use software CS.
In the future, it'd be interesting to consider injecting some kind of thread to perform the blocking.

A fun thing you can do now to enforce the documentation is to only implement the async methods on SpiDevice when T is BorrowMut instead of just Borrow.

Collaborator Author


Ah fair enough, it all makes sense now. At least folks who want multiple devices can create an SpiBusDriver, share it with an async mutex and use software CS. In the future, it'd be interesting to consider injecting some kind of thread to perform the blocking.

Coincidentally, I just implemented something similar to ^^^ in Spi*DeviceDriver (including the "SoftCs" one): they all now use a hidden shared async mutex which is centralized in SpiDriver before hitting the blocking call to "lock the bus". Thus, if all devices on the bus are used in an async way, the blocking "lock the bus" call is no longer an issue.

src/spi.rs Outdated
Comment on lines 1438 to 1603
pub fn fetch<F: Future>(mut fut: F) -> F::Output {
    // safety: we don't move the future after this line.
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };

    let raw_waker = RawWaker::new(core::ptr::null(), &VTABLE);
    let waker = unsafe { Waker::from_raw(raw_waker) };
    let mut cx = Context::from_waker(&waker);

    match fut.as_mut().poll(&mut cx) {
        Poll::Ready(output) => output,
        Poll::Pending => panic!("Future is not ready yet"),
    }
}
Contributor


This seems like a complicated way to re-use the do_read between read and read_async. It's a bit awkward to generate a Future for synchronous code.

There's a way to implement the reuse without messing with a waker, context and panic.
See the Operation trait I had in my async SPI PR (but ignore the CsRule stuff I did; using peekable like @Vollbrecht did is better).

trait Operation {
    fn fill(&mut self, transaction: &mut spi_transaction_t) -> bool;
}

The trait just gives a way to generate the ESP-IDF transactions, then you can write a function to send operations synchronously and one to send operations asynchronously.

Of course I didn't do this for e-hal transactions yet, but it should be doable - see the TransferTrail enum I wrote. Worst case scenario, the method can be duplicated for e-hal transactions. The duplication will be needed for transaction delays anyway.

Collaborator Author


This seems like a complicated way to re-use the do_read between read and read_async. It's a bit awkward to generate a Future for synchronous code.

I'm in good company though! See what I found just the other day.

Seriously though - good point, I'll look into it. If we get a net win in terms of code size by adopting Operation + ChunksMut - I'll switch.

Collaborator Author


Yeah. I'm not really sure that the Operation approach resulted in net savings in terms of LOCs, but I think it was a win in terms of simplicity when I applied it consistently throughout the codebase... as I was able to centralize the actual execution of transactions in like 3-4 key methods. Everything else now in this (large) driver is just delegating boilerplate.

I haven't followed your approach with the Operation trait literally - it is more of a switch to monadic combinators (map, flat_map and so on) to build an iterator of spi_transaction_t instances out of everything possible. This iterator is then executed either synchronously or asynchronously. But same idea, I think.

Contributor


Did a quick skim and this iterator approach is much cleaner than the Operation trait. I'll take a proper look in a bit.

@dacut
Contributor

dacut commented Jul 10, 2023

Will try to get a look at it this evening (US Pacific, -0700).

@ivmarkov
Collaborator Author

ivmarkov commented Jul 10, 2023

I do think this feature is quite important though as it means the next ESP-IDF transactions can still happen even when the runtime is too busy polling other futures.

Queueing multiple transactions across several devices is still possible, and will naturally happen. For multiple transactions on a single device... I have to think a bit if that's possible to implement without complicating the code too much, which is a key requirement for me to have a first pass at async. I can't escape the "YAGNI" feeling though.
It goes like this:

If the CPU is constantly busy by polling too many other futures, one is mixing an IO bound async execution (SPI and other peripherals) with a CPU-bound execution ("async" futures which are not really async as they do computationally intensive tasks). This is an anti-pattern IMO. One should offload the CPU-bound computations to a thread pool (or on embedded - to a separate thread-local executor which runs within a lower priority thread). This way, the critical IO-bound workloads (as in scheduling new SPI transactions) will always preempt the lower prio computations and run without delays.

The reason my implementation was so complex is because "async in traits" weren't a thing a the time so I had to make an SpiFuture struct to use a GAT return type. With "async in traits" being available now, I'd have just written a much simpler for loop with all the state being implicit on the stack.

Let me set some context here. From my POV, "async fn in trait" (AFIT) is not a thing yet. :-( It is (a) in nightly only and (b) still even marked as "incomplete". Worse, they blew all deadlines outlined in their May Issue of the Inside Rust blog. As in - none of the deadlines were met, so AFIT won't hit stable this year, in my opinion.

Another context: for this reason (and others), esp-idf-hal and esp-idf-svc do have their own public APIs, which are independent of embedded-hal and embedded-svc. Programming for ESP32 only? Sure, use the esp-idf-hal public blocking APIs directly - no need to go via the embedded-hal traits. Willing to use async and favoring stable Rust even at the expense of platform-neutrality? Sure, use the esp-idf-hal public async APIs directly, as they are not trait based either and hence do not need AFIT (and implement embedded-hal-async's AFIT only when the nightly feature is enabled anyway).

The other reason was because the e-hal async trait has the same method names as the sync trait, which will make the compiler unhappy. I still think it's best to separate them but this is up to you.

My patch does not even implement yet the AFIT traits of embedded-hal-async, but that's not the point and that would be trivial to do. The patch currently only implements the future public async API of esp-idf-hal. Hence the need to suffix the async stuff with _async. I think splitting the driver just to have two public APIs both named e.g. read - one - blocking, and the other - async is not worth it. And again, this is completely orthogonal to the fact that the blocking and async traits have name collisions in their e.g. read method names. The latter is a non-issue.

This PR is good timing though, as I had just thought about how to take advantage of "async in traits" but I hadn't written any code yet, so I can put my ideas here.

As per above, AFIT - for now - would only be a thing when the "nightly" feature is enabled, and it would only be used as much as to implement the embedded-hal-async traits by a simple delegation to the _async methods of the esp-idf-hal drivers. Not more.

With that said, aren't you confusing stuff? Why do you need AFIT? It is only related to implementing traits. These - in turn - are necessary only when you want to abstract concrete implementations and hide them behind a layer of trait-based generics. This is only strictly necessary when you implement the embedded-hal-async API, but for nothing else. For everything else, you have async-methods-in-structs, which work just fine IMO. And of course all the lower-level Future APIs.

I have more comments but they're sort of contingent on how you response to my comment about the Operation trait, so I'll just wait until we resolve that.

Let me see the operation trait...

@Dominaezzz
Contributor

My patch does not even implement yet the AFIT traits of embedded-hal-async, but that's not the point and that would be trivial to do.

With that said, aren't you confusing stuff? Why do you need AFIT? It is only related to implementing traits.

I think you misunderstand me or missed the point I was making with AFIT.
In my PR I was trying to return a named type from the e-hal-async methods since that was the only way (without boxing) to do async in traits at the time.
When I say take advantage of AFIT, what I mean is removing the tedious restriction of having to return a named type. Be it in a trait or a stand alone method of a struct.

This is an anti-pattern IMO. One should offload the CPU-bound computations to a thread pool (or on embedded - to a separate thread-local executor which runs within a lower priority thread). This way, the critical IO-bound workloads (as in scheduling new SPI transactions) will always preempt the lower prio computations and run without delays.

Yes, you achieve this by queuing. The peripheral would then be able to execute transactions without CPU needing to spend time setting up the next IO bound transaction. Though I see what you mean, at the end of the day this is an optimisation.

would only be a thing when the "nightly" feature is enabled

Doesn't building Rust for esp already require nightly or am I mistaken?

And again, this is completely orthogonal to the fact that the blocking and async traits have name collisions in their e.g. read method names.

What do you mean? At the end of the day don't you want to implement the e-hal traits? You'll still have to take them into consideration when designing the API.
A little side note, isn't it also unappealing to see code that has _async suffixes everywhere? Even ignoring e-hal side of things.

@ivmarkov
Collaborator Author

ivmarkov commented Jul 10, 2023

My patch does not even implement yet the AFIT traits of embedded-hal-async, but that's not the point and that would be trivial to do.

With that said, aren't you confusing stuff? Why do you need AFIT? It is only related to implementing traits.

I think you misunderstand me or missed the point I was making with AFIT. In my PR I was trying to return a named type from the e-hal-async methods since that was the only way (without boxing) to do async in traits at the time. When I say take advantage of AFIT, what I mean is removing the tedious restriction of having to return a named type. Be it in a trait or a stand alone method of a struct.

But then again: you don't have any of these restrictions to methods in regular structs. These can be trivially async, and can return impl Whatever as much as they want to. Restrictions (i.e. the need to use nightly) apply only to async in traits (AFIT) and to type-alias-impl-trait. Given that the code in this PR does not use any traits whatsoever in the first place, I do not understand how any of these nightly-only features would help us?

This is an anti-pattern IMO. One should offload the CPU-bound computations to a thread pool (or on embedded - to a separate thread-local executor which runs within a lower priority thread). This way, the critical IO-bound workloads (as in scheduling new SPI transactions) will always preempt the lower prio computations and run without delays.

Yes, you achieve this by queuing. The peripheral would then be able to execute transactions without CPU needing to spend time setting up the next IO bound transaction. Though I see what you mean, at the end of the day this is an optimisation.

Both do queueing, just on a different level. Each executor has a hidden task queue. By (trivially) using two async executors, you get two such hidden queues, where the tasks in the queue of the first executor are prioritized over the tasks in the queue of the second executor, if the second executor runs in a lower priority thread.

But then and as I said - if not too much trouble - I'll look into supporting the native queueing caps of the ESP IDF SPI driver.

would only be a thing when the "nightly" feature is enabled

Doesn't building Rust for esp already require nightly or am I mistaken?

Yes and no. The ESP-RS toolchain is - strictly speaking - a nightly toolchain, so you can enable and use nightly features. Yet, my understanding is that it is always branched from the same commit which becomes "the next stable Rust", so to say. So it is also stable.

The one and only reason why the esp-idf-* crates really need a nightly Rust is because cargo -Zbuild-std is nightly only. And I really want to have a usable subset of the crates which relies only on this single nightly feature. Who knows - in future that might get stabilized as well.

And again, this is completely orthogonal to the fact that the blocking and async traits have name collisions in their e.g. read method names.

What do you mean? At the end of the day don't you want to implement the e-hal traits? You'll still have to take them into consideration when designing the API.

I want to implement them, but behind a feature flag: nightly. Only when this feature flag is enabled would the async traits be "implemented". If that's an argument, this is also how Embassy does things - nightly-only AFIT & friends is behind a nightly feature flag. It gives you abstractions and implementations of the e-hal traits. The rest is still usable (but with no abstraction) via regular async-methods-in-structs.

A little side note, isn't it also unappealing to see code that has _async suffixes everywhere? Even ignoring e-hal side of things.

What is the lesser evil: (a) having two drivers, just because of this (and like a common "core" ugggh, more complexity and annoying delegation boilerplate code to the common "core"), or (b) a single driver and the _async ugliness?

@Dominaezzz
Contributor

I do not understand how any of these nightly-only features would help us?

The nightly feature doesn't help. The removal of the restriction of having to return a named type helps.

What is the lesser evil: (a) having two drivers, just because of this (and like a common "core" ugggh, more complexity and annoying delegation boilerplate code to the common "core"), or (b) a single driver and the _async ugliness?

Well, if you're asking me what I think, I'll have to choose (a). It depends how the code looks after you look into the user context and Operation trait.

@ivmarkov
Collaborator Author

ivmarkov commented Jul 10, 2023

What is the lesser evil: (a) having two drivers, just because of this (and like a common "core" ugggh, more complexity and annoying delegation boilerplate code to the common "core"), or (b) a single driver and the _async ugliness?

Well, if you're asking me what I think, I'll have to choose (a). It depends how the code looks after you look into the user context and Operation trait.

Again - and if that's an argument - Embassy also chooses (b). Except that the ugliness goes to the blocking part - for obvious reasons in their case, as they don't have a traditional task scheduler ala FreeRtos.

@ivmarkov
Collaborator Author

I do not understand how any of these nightly-only features would help us?

The nightly feature doesn't help. The removal of the restriction of having to return a named type helps.

I really need an example as to what you mean. Sorry - I feel lost in this particular item of the debate. :-)

@Dominaezzz
Contributor

Embassy also chooses (b)

Any particular reason to follow embassy so closely?

I really need an example as to what you mean. Sorry - I feel lost in this particular item of the debate. :-)

Let's just drop this, no point wasting energy on it. In my initial comment, I was only explaining why my implementation was complex. It doesn't really change anything about this PR.

@ivmarkov
Collaborator Author

ivmarkov commented Jul 11, 2023

Embassy also chooses (b)

Any particular reason to follow embassy so closely?

I trust Dario's judgement on embedded design - particularly w.r.t. async - as I've seen a lot of their stuff in Embassy stand the test of time. (Not that we are always in agreement - look at the discussion around the TCP async traits in embedded-nal-async back in time.) He also listens - the split of the former embassy monolith into micro-crates came originally upon my request I believe - as everything in their ecosystem except the HALs (obviously) and the all-static embassy-executor (not so obvious) applies to the esp-idf-* ecosystem as well.

Also I personally view the Embassy HAL crates as a modernization on the original ones. I've mostly copied from there the PinDriver metaphor (modulo the names; I mean Flex - really?), as well as the PeripheralRef / Peripheral stuff, which reduced the generics' explosion considerably and re-introduced drop in the whole E-HAL mess. All clever stuff, I must say.

I really need an example as to what you mean. Sorry - I feel lost in this particular item of the debate. :-)

Let's just drop this, no point wasting energy on it. In my initial comment, I was only explaining why my implementation was complex. It doesn't really change anything about this PR.

Sure, np.

Contributor

@dacut dacut left a comment


I think there's one case where an OutputPin was flipped to an InputPin by mistake (see comments); feel free to change this and submit.

I have some comments about the removal of Option<> on the pins; there was an intent there, but I've since forgotten it (probably related to ADC/DAC support). I doubt that will come to fruition, as it seems to be poorly supported in ESP-IDF (requires fiddling with the GPIO MUX, which isn't supported).

The core driver changes look good. Undoes some of the earlier review comments about splitting the driver, but that's alright. :-)

I have not studied the async workflow thoroughly by any means, and am definitely not read up enough to comment on the Waker bits. It looks correct, and I'm certainly looking forward to being able to use it (it looks far more ergonomic).


dacut commented Jul 21, 2023

@ivmarkov, do you have a rough estimate as to when this will land? I have a few fixes for remmy (esp-rs@matrix) on the I2S side that I'd like him to test; if it'll be a while, I'll go ahead and have him test against a branch over on my side. Thanks!

@ivmarkov
Collaborator Author

> @ivmarkov, do you have a rough estimate as to when this will land? I have a few fixes for remmy (esp-rs@matrix) on the I2S side that I'd like him to test; if it'll be a while, I'll go ahead and have him test against a branch over on my side. Thanks!

Hopefully next weekend (I'm traveling right now, and there has been quite some activity on the matter-rs project - sorry for the delays).

What I would strongly suggest is to PR the changes against this branch (async-drivers) instead of master. That way, they will be ready when this branch lands in master.


dacut commented Jul 22, 2023

No need to be sorry! Just wanted to coordinate. Safe travels!


dacut commented Jul 27, 2023

Just FYI, @ivmarkov, you'll probably need to add a couple of Clippy #[allow] statements in src/private.rs due to a new lint introduced into nightly that affects RISC-V (but not Xtensa yet).

dacut@b247e94 has the changes.

Just a #[allow(unknown_lints, clippy::needless_pass_by_ref_mut)] to decorate the poll_wait() methods.

  • Since they take a &mut Context but never call any &mut methods on it, Clippy now complains; the signature is supposed to be &mut Context here for async, though
  • unknown_lints is required for Xtensa since clippy::needless_pass_by_ref_mut isn't known on that side yet.
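To make the shape of the fix concrete, here is a minimal, self-contained sketch of the attribute in context. `Notification` and `poll_wait` here are simplified stand-ins for the real types in src/private.rs, and the no-op waker is only there to build a `Context` for the demo:

```rust
use std::ptr;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

struct Notification;

impl Notification {
    // Simplified stand-in for the poll_wait() methods in src/private.rs.
    // The body never calls a &mut method on `cx`, which is what trips
    // clippy::needless_pass_by_ref_mut - but the signature must stay
    // `&mut Context` to match Future::poll. `unknown_lints` keeps the
    // attribute from erroring on toolchains (e.g. Xtensa) where that
    // lint doesn't exist yet.
    #[allow(unknown_lints, clippy::needless_pass_by_ref_mut)]
    fn poll_wait(&self, _cx: &mut Context<'_>) -> Poll<()> {
        Poll::Pending
    }
}

fn main() {
    // A no-op Waker, just to be able to construct a Context here.
    const VTABLE: RawWakerVTable = RawWakerVTable::new(
        |_| RawWaker::new(ptr::null(), &VTABLE),
        |_| {},
        |_| {},
        |_| {},
    );
    let waker = unsafe { Waker::from_raw(RawWaker::new(ptr::null(), &VTABLE)) };
    let mut cx = Context::from_waker(&waker);

    assert!(Notification.poll_wait(&mut cx).is_pending());
}
```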


ivmarkov commented Aug 11, 2023

@dacut Hey, this ate a good part of my ~~Saturday~~ Friday afternoon, but - yay! - I can confirm that after the refactoring, at least STD mode, in TX + MSB stereo, works OK! Including write_async!

... and the above with the upcoming BT support in esp-idf-svc (A2DP sink)


dacut commented Aug 11, 2023

Sorry about that, @ivmarkov, but glad you got it working!

Comment on lines +1716 to +1717:

```rust
if let Some(notification) =
    unsafe { (transaction.user as *mut Notification as *const Notification).as_ref() }
```

Any reason not to split this into multiple lines?

Suggested change:

```diff
-if let Some(notification) =
-    unsafe { (transaction.user as *mut Notification as *const Notification).as_ref() }
+let condition = transaction.user as *mut Notification as *const Notification;
+if let Some(notification) = unsafe { condition.as_ref() }
```

```rust
        self.notified.store(false, Ordering::SeqCst);
    }

    #[allow(unused)]
```

This macro still needed?

@ivmarkov ivmarkov merged commit 9fc2e8c into master Aug 22, 2023
@ivmarkov ivmarkov deleted the async-drivers branch August 27, 2023 08:05
@oliverocean

@ivmarkov I realize this change was a little while ago, but thank you for adding adc_read_raw() to adc.rs! I used it on a project at work today to read an NTC thermistor on an esp32s3.

I was using esp-idf-hal version 0.41, which only has adc_read(), and I couldn't figure out how to convert the return value of adc_read() to a temperature (it applies some voltage conversion to the raw reading that I don't quite understand). After I looked through the adc.rs source code for version 0.42 to see how adc_read() works, I saw there was a new function, adc_read_raw(), that returns the raw ADC value. Perfect!

I'm trying to push for replacing C code with Rust at my company and this will be very helpful to continue to do so. So thank you (and your fellow contributors) for all your hard work! :)
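For anyone following along: converting a raw ADC count to a temperature is typically a resistor-divider calculation followed by the beta-parameter thermistor model. Here is a minimal sketch; all component values (12-bit ADC, 10 kOhm series resistor from Vcc, thermistor to GND, 10 kOhm at 25 C, B = 3950) are illustrative assumptions, not part of esp-idf-hal - substitute your part's datasheet values:

```rust
/// Beta-parameter model for an NTC thermistor read through a resistor
/// divider: series resistor from Vcc to the ADC node, thermistor from
/// the node to GND. `raw` is a 12-bit count, e.g. from adc_read_raw().
fn ntc_celsius(raw: u16) -> f64 {
    const ADC_MAX: f64 = 4095.0; // full-scale 12-bit reading
    const R_SERIES: f64 = 10_000.0; // fixed resistor from Vcc (assumed)
    const R0: f64 = 10_000.0; // thermistor resistance at 25 C (assumed)
    const B: f64 = 3950.0; // beta coefficient from the datasheet (assumed)
    const T0: f64 = 298.15; // 25 C in Kelvin

    // Divider: raw/ADC_MAX = R_ntc / (R_SERIES + R_ntc), solved for R_ntc.
    let r_ntc = R_SERIES * raw as f64 / (ADC_MAX - raw as f64);
    // Beta equation: 1/T = 1/T0 + (1/B) * ln(R_ntc/R0)
    let inv_t = 1.0 / T0 + (r_ntc / R0).ln() / B;
    1.0 / inv_t - 273.15
}

fn main() {
    // A mid-scale reading means R_ntc is roughly R_SERIES = R0, i.e. ~25 C.
    println!("{:.1} C", ntc_celsius(2047));
}
```

Note that for an NTC, resistance rises as temperature falls, so with this divider topology a larger raw count means a colder thermistor.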

@oliverocean

ps. thank you also @MabezDev, @Dominaezzz , and @dacut!

5 participants