
propose new interface for delay #4

Open

GoogleCodeExporter opened this issue May 17, 2015 · 10 comments

Comments

@GoogleCodeExporter

Narcoleptic's delay interface is defined as

void delay(int milliseconds);

I propose to make it:  void delay(uint32_t milliseconds);

The current implementation has a maximum sleep of ~32767 milliseconds ==> ~half a minute.
The proposed interface has a maximum sleep of ~4294967295 milliseconds ==> ~49.7 days.

The proposed interface is backwards compatible, and it makes Narcoleptic's delay()
interface identical to Arduino's delay().
Cost: 2 bytes.
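For illustration (my own host-side sketch, not from the original report): on AVR an int is 16 bits, so any request over 32767 ms wraps before delay() even sees it:

#include <cstdint>
#include <cstdio>

int main() {
    // On AVR, int is 16 bits: a 60 s request wraps to a negative value,
    // so every `milliseconds >= ...` guard inside delay() fails immediately.
    int16_t current = (int16_t)60000;
    uint32_t proposed = 60000UL;  // the proposed parameter type carries it intact
    std::printf("delay(int)      receives %d ms\n", (int)current);             // -5536
    std::printf("delay(uint32_t) receives %lu ms\n", (unsigned long)proposed); // 60000
    return 0;
}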
Original issue reported on code.google.com by rob.till...@gmail.com on 16 Sep 2013 at 11:19

@GoogleCodeExporter

uint32_t might be a little overkill.

uint16_t takes an extra 113 bytes -> but can hold about a minute, which is enough for most uses.
 int32_t takes an extra 212 bytes -> ~24.8 days; the highest benefit per byte.
uint32_t takes an extra 390 bytes -> I don't think it's valuable to sleep for a month.
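For reference, the ranges behind those trade-offs (a quick host-side check of mine, not part of the original comment):

#include <cstdint>
#include <cstdio>

int main() {
    // Maximum sleep each candidate parameter type can represent, in ms.
    std::printf(" int16_t: %d ms (~32.8 s, current)\n", (int)INT16_MAX);
    std::printf("uint16_t: %u ms (~65.5 s)\n", (unsigned)UINT16_MAX);
    std::printf(" int32_t: %ld ms (~24.8 days)\n", (long)INT32_MAX);
    std::printf("uint32_t: %lu ms (~49.7 days)\n", (unsigned long)UINT32_MAX);
    return 0;
}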


Original comment by erdem...@gmail.com on 17 Oct 2013 at 2:54


@GoogleCodeExporter

UPDATED VERSION (some new insights)

I'm sorry, you are right: the 2 bytes is only the interface delta.
The Arduino will also pull in math code for unsigned arithmetic etc.

I redid your test with the MerlinTheCat.ino example (UNO, IDE 1.54r2) to test some
variations.

delay(int)      -> 1682 bytes (reference)
delay(uint16_t) -> 1764 bytes = +82
delay(long)     -> 1890 bytes = +208
delay(uint32_t) -> 2084 bytes = +402

So approximately the same numbers; using long would indeed give the most range per byte.


a "8x less narcoleptic" delay function
'''
void NarcolepticClass::delay(int milliseconds) {
  while (milliseconds >= 8000) { sleep(WDTO_8S);    milliseconds -= 8000; }
  while (milliseconds >= 1000) { sleep(WDTO_1S);    milliseconds -= 1000; }
  while (milliseconds >= 125)  { sleep(WDTO_120MS); milliseconds -= 120; }
  while (milliseconds >= 16)   { sleep(WDTO_15MS);  milliseconds -= 15; }
}
'''
resulted in
delay(int)      -> 1582 bytes = -100
delay(long)     -> 1678 bytes = -4

So there is a trade-off between code size and sleep efficiency.
I merged this idea with issue #5:
added a #define in the .h and some conditional code in the .cpp.

narcoleptic.h:
---------------
// uncomment to have a slightly smaller lib
// #define NARCOLEPTIC_SMALL_LIB


narcoleptic.cpp:
-----------------
#ifdef NARCOLEPTIC_SMALL_LIB

void NarcolepticClass::delay(int milliseconds) {
#ifdef WDP3
  while (milliseconds >= 8000)      { sleep(WDTO_8S);    milliseconds -= 8000; }
#endif
  while (milliseconds >= 1000)      { sleep(WDTO_1S);    milliseconds -= 1000; }
  while (milliseconds >= 125)       { sleep(WDTO_120MS); milliseconds -= 120; }
  while (milliseconds >= 16)        { sleep(WDTO_15MS);  milliseconds -= 15; }
}

#else

void NarcolepticClass::delay(int milliseconds) {
#ifdef WDP3
  while (milliseconds >= 8000)      { sleep(WDTO_8S); milliseconds -= 8000; }
  if (milliseconds >= 4000)         { sleep(WDTO_4S); milliseconds -= 4000; }
  if (milliseconds >= 2000)         { sleep(WDTO_2S); milliseconds -= 2000; }
#else
  while (milliseconds >= 2000)      { sleep(WDTO_2S); milliseconds -= 2000; }
#endif
  if (milliseconds >= 1000)         { sleep(WDTO_1S); milliseconds -= 1000; }
  if (milliseconds >= 500)          { sleep(WDTO_500MS); milliseconds -= 500; }
  if (milliseconds >= 250)          { sleep(WDTO_250MS); milliseconds -= 250; }
  if (milliseconds >= 125)          { sleep(WDTO_120MS); milliseconds -= 120; }
  if (milliseconds >= 64)           { sleep(WDTO_60MS); milliseconds -= 60; }
  if (milliseconds >= 32)           { sleep(WDTO_30MS); milliseconds -= 30; }
  if (milliseconds >= 16)           { sleep(WDTO_15MS); milliseconds -= 15; }
}

#endif
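A usage sketch of mine (assuming the switch above is compiled into the library): enabling NARCOLEPTIC_SMALL_LIB keeps the API identical but changes how a request is decomposed. For example, delay(500) becomes four 120 ms naps plus one 15 ms nap instead of a single WDTO_500MS nap, so the MCU wakes five times instead of once.

// Hypothetical sketch: the API is the same either way; only the nap
// decomposition (and therefore wakeup count and code size) differs.
#include <Narcoleptic.h>

void setup() {}

void loop() {
  Narcoleptic.delay(500);  // watchdog sleep instead of a busy-wait delay()
}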

Original comment by rob.till...@gmail.com on 18 Oct 2013 at 8:28

@GoogleCodeExporter

Thanks for this good solution.

I tried to solve the uint32_t issue by using C++ templates. It works and
automatically adjusts the function to your needs, but when the user passes 2
different variable types, it creates a bigger overhead (one instantiation per
type). So it's better to keep away from templates. :)
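For illustration, a minimal sketch (my reconstruction, not the actual experiment) of the template approach and why mixed argument types duplicate code:

// Hypothetical template version: the compiler emits one copy of the
// function body per distinct argument type, so calling with an int and
// an unsigned long in the same sketch produces two instantiations.
template <typename T>
void narcoleptic_delay(T milliseconds) {
  while (milliseconds >= 1000) { /* sleep(WDTO_1S);   */ milliseconds -= 1000; }
  while (milliseconds >= 16)   { /* sleep(WDTO_15MS); */ milliseconds -= 15; }
}

void demo() {
  narcoleptic_delay(1000);    // instantiates narcoleptic_delay<int>
  narcoleptic_delay(1000UL);  // instantiates narcoleptic_delay<unsigned long>
}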

Original comment by erdem...@gmail.com on 18 Oct 2013 at 11:04

@GoogleCodeExporter

I saw you fixed issue #6, so it might be time to get the 1b version of the
lib out :)

Original comment by rob.till...@gmail.com on 18 Oct 2013 at 11:09

@GoogleCodeExporter

Yes. I think the author will update the library soon.
Thanks to Peter Knight for this beautiful and efficient piece of code.

Original comment by erdem...@gmail.com on 18 Oct 2013 at 11:18

@GoogleCodeExporter

The lib also contains this definition of microseconds:

uint32_t microseconds = milliseconds * 1000L;

Is this a problem when milliseconds is already a uint32_t, i.e. with delay(uint32_t milliseconds)?

Original comment by cgru...@uni-osnabrueck.de on 15 Jan 2015 at 11:30

@GoogleCodeExporter

Only if 1000 * milliseconds needs a wider type than the type of milliseconds,
but that is easily avoided by

uint32_t microseconds = (uint32_t)milliseconds * 1000L;

But for me, it's better to change it to

int32_t microseconds = milliseconds * 1000L;

to avoid using uint32_t. 2 billion microseconds is enough for everyone. :)
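A small host-side demonstration (my own, with made-up values) of the overflow being discussed: once milliseconds is a uint32_t, the product with 1000 can exceed the 32-bit range no matter how the operands are cast, so only a wider result type preserves the value.

#include <cstdint>
#include <cinttypes>
#include <cstdio>

int main() {
    uint32_t milliseconds = 5000000UL;                 // ~83 minutes
    uint32_t wrapped = milliseconds * 1000UL;          // 5e9 doesn't fit in 32 bits; stored value wraps
    uint64_t exact   = (uint64_t)milliseconds * 1000;  // wide enough to hold it
    std::printf("wrapped: %" PRIu32 " us\n", wrapped); // 705032704
    std::printf("exact:   %" PRIu64 " us\n", exact);   // 5000000000
    return 0;
}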

Original comment by erdem...@gmail.com on 16 Jan 2015 at 3:50

@GoogleCodeExporter

Using uint32_t should be better. It uses the same storage space as int32_t but
doesn't allow negative values, which aren't valid in this case anyway, and
gives you double the useful range. The only case where it might be worse is
if the machine-code representation of operations on that type were worse on
your processor for some reason, but that seems unlikely.
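To make the signedness point concrete, a tiny host-side sketch (my own illustration): a duration past INT32_MAX milliseconds goes negative as int32_t, so the `milliseconds >= ...` guards in delay() would never fire, while uint32_t handles it.

#include <cstdint>
#include <cstdio>

int main() {
    uint32_t big = 3000000000UL;       // ~34.7 days, valid as uint32_t
    int32_t asSigned = (int32_t)big;   // wraps negative; every `>= 16` guard fails
    std::printf("as int32_t:  %ld ms\n", (long)asSigned);
    std::printf("as uint32_t: %lu ms\n", (unsigned long)big);
    return 0;
}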

Original comment by andrew.p...@gmail.com on 18 Jan 2015 at 12:29
