
Feature/polling #12

Open · wants to merge 11 commits into master

Conversation

TremblingMen

Added polling support to improve performance.

@coveralls

coveralls commented Jul 6, 2022

Coverage Status

Coverage decreased (-4.3%) to 93.691% when pulling c479fe8 on embedded-rust-iml:feature/Polling into 23a2a93 on eldruin:master.

@eldruin (Owner) left a comment

Interesting. Thank you for this proposal.
A couple of thoughts:

  • It could be that in some applications this repeated polling is undesirable, since it blocks the I2C bus. I think a straightforward solution would be to add a disable_polling method. Could you do that?
  • Looking at the Storage::wait method, it seems we could simplify it by implementing the loop directly instead of using the repeat_timeout! and nb::block! with something like:
// I just wrote this down, please excuse small errors.
loop {
    match self.count_down.wait() {
        Err(nb::Error::Other(e)) => break Err(e),
        Err(nb::Error::WouldBlock) => {
            if self.eeprom.polling {
                match self.eeprom.read_byte(0) {
                    Ok(_) => break Ok(()), // done
                    Err(Error::I2C(_)) => {} // not ready, repeat
                    // TODO this error probably needs some mapping to the outer error type
                    Err(e) => break Err(e),
                }
            }
        }
        Ok(_) => break Ok(()),
    }
}
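The control flow of the sketch above can be exercised in isolation with stand-in types. Everything below is invented for illustration (a local `NbError` stands in for `nb::Error`, and the mock countdown/EEPROM behavior is made up); it is not the crate's real API, just a model of the loop:

```rust
// Minimal model of the suggested wait loop. `NbError` stands in for
// nb::Error; the mock types and their behavior are assumptions made
// for illustration, not the crate's actual API.

#[derive(Debug, PartialEq)]
enum Error {
    I2C,
}

enum NbError<E> {
    Other(E),
    WouldBlock,
}

struct MockCountDown {
    remaining: u32, // ticks until the countdown expires
}

impl MockCountDown {
    fn wait(&mut self) -> Result<(), NbError<Error>> {
        if self.remaining == 0 {
            Ok(())
        } else {
            self.remaining -= 1;
            Err(NbError::WouldBlock)
        }
    }
}

struct MockEeprom {
    polling: bool,
    busy_reads: u32, // reads that still fail (device busy writing)
}

impl MockEeprom {
    fn read_byte(&mut self, _addr: u32) -> Result<u8, Error> {
        if self.busy_reads == 0 {
            Ok(0)
        } else {
            self.busy_reads -= 1;
            Err(Error::I2C) // device NACKs while the write cycle runs
        }
    }
}

fn wait(count_down: &mut MockCountDown, eeprom: &mut MockEeprom) -> Result<(), Error> {
    loop {
        match count_down.wait() {
            Err(NbError::Other(e)) => break Err(e),
            Err(NbError::WouldBlock) => {
                if eeprom.polling {
                    match eeprom.read_byte(0) {
                        Ok(_) => break Ok(()), // device acked: write cycle done
                        Err(Error::I2C) => {}  // not ready yet, keep polling
                    }
                }
            }
            Ok(_) => break Ok(()), // countdown expired: assume write finished
        }
    }
}

fn main() {
    // Polling finishes before the countdown: 2 busy reads, 10 ticks.
    let mut cd = MockCountDown { remaining: 10 };
    let mut ee = MockEeprom { polling: true, busy_reads: 2 };
    assert_eq!(wait(&mut cd, &mut ee), Ok(()));
    // Two WouldBlock iterations polled a busy device; the third read acked.
    assert_eq!(cd.remaining, 7);
    println!("polling path ok");
}
```

With `polling` false, the loop degrades to waiting out the countdown alone, which is the existing behavior.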

TODO:

  • Remove embedded-timeout-macros if solution above is viable
  • Changelog
  • Tests
  • Docs, README update
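The opt-out asked for in the first bullet could look something like this sketch (the struct, field, and method names are assumptions, not the crate's actual API):

```rust
// Hypothetical sketch of a disable_polling opt-out. Names are invented
// for illustration and do not reflect the crate's real API.
struct Eeprom24x {
    /// When false, `wait` relies on the countdown timer alone and
    /// never touches the (shared) I2C bus to poll the device.
    polling: bool,
}

impl Eeprom24x {
    fn disable_polling(&mut self) {
        self.polling = false;
    }

    fn enable_polling(&mut self) {
        self.polling = true;
    }
}

fn main() {
    let mut eeprom = Eeprom24x { polling: true };
    eeprom.disable_polling();
    assert!(!eeprom.polling);
    println!("polling disabled");
}
```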

Cargo.toml (resolved)
@eldruin (Owner) left a comment

Would you be able to add some tests? I would take care of the rest.

@eldruin (Owner) left a comment

Thank you for adding the tests! Only a small thing left.

@@ -25,7 +25,7 @@ where
 // Furthermore, we also have to wait for the countdown before reading the eeprom again.
 // Basically, we have to wait before any I2C access and ensure that the countdown is
 // running again afterwards.
-count_down.start(Duration::from_millis(0));
+count_down.start(Duration::from_millis(5));
@eldruin (Owner) commented:

Any reason for this change?


I guess @eldruin is right and the initial countdown should be zero. The rationale is already given in the comment above.
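The rationale can be made concrete with a toy countdown (the type and its behavior are invented for illustration; ticks stand in for milliseconds): starting at zero means the very first `wait` already reports "elapsed", so the first I2C access is not artificially delayed, whereas starting at five forces a wait before the device is even touched.

```rust
// Toy model (invented names): a countdown whose wait() returns Ok once
// the started duration has elapsed; ticks stand in for milliseconds.
struct CountDown {
    remaining: u64,
}

impl CountDown {
    fn start(&mut self, ms: u64) {
        self.remaining = ms;
    }

    fn wait(&mut self) -> Result<(), ()> {
        if self.remaining == 0 {
            Ok(())
        } else {
            self.remaining -= 1;
            Err(()) // would block
        }
    }
}

fn main() {
    let mut cd = CountDown { remaining: 0 };

    // Started at zero: the first wait already reports "elapsed",
    // so the first EEPROM access proceeds immediately.
    cd.start(0);
    assert_eq!(cd.wait(), Ok(()));

    // Started at five: wait blocks until the time has passed,
    // delaying even the very first access.
    cd.start(5);
    assert!(cd.wait().is_err());
    println!("ok");
}
```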

@@ -25,6 +25,7 @@ resolver = "2"
 embedded-hal = "0.2"
 embedded-storage = "0.2.0"
 nb = "1.0.0"
+embedded-timeout-macros = "0.3"
@eldruin (Owner) commented:

Suggested change (remove the line):
-embedded-timeout-macros = "0.3"

@guillaume-michel

Hi,
Can we have a status on this PR, please?
I am willing to help get this merged.

@eldruin
Owner

eldruin commented Dec 2, 2022

I will merge this and do the last fixes myself on it over the next few days.
I will comment here once that is done.


5 participants