
Fixing end of word, adding support for microseconds #1

Merged
merged 4 commits into zeroflag:master on Aug 19, 2016

Conversation

gpummer
Copy link
Contributor

@gpummer gpummer commented Aug 15, 2016

This is my first pull request ever, so any comments are appreciated :)

  • commit 43d5759 should be trivial.
  • commit b58237e should support Linux/Mac users. My development environment is a Mac (and an Ubuntu VM for cross compiling).
  • commit bc43d2d is a first step toward supporting microseconds (delays, reading the time). I needed microseconds because the HC-SR04 (for measuring distances) echoes a pulse whose length is on the order of microseconds. Ideally the trigger pulse is also 10 microseconds long, although 1 ms (1 delay) is OK. Hence also the addition in the event handling. Maybe it's better to fill in the time_us field (forth_gpio.c) optionally.

I ran the 'test' module successfully.

@zeroflag
Copy link
Owner

Very nice job, thanks. I wonder if we could get rid of the old hundredth-of-a-second-based time word and leave only the new microsecond-based one. The same applies to delay.

What do you think? Do you use both time and time-us, as well as delay and delay-us?

@gpummer
Copy link
Contributor Author

gpummer commented Aug 16, 2016

Hi!

Very nice job, thanks.

:) Thanks. But you are the one who did the great job!

I wonder if we could get rid of the old hundredth second based time word and leave only the new microsecond base one. The same applies to the delay.

ANS Forth only defines the word MS ( n -- ), which delays execution for n milliseconds. It has no word for getting milliseconds from the system. There is a nice summary of how multiple Forth dialects handle this at https://www.rosettacode.org/wiki/System_time#Forth.

So, I would vote for a new/additional word MS ( n -- ) which takes over from DELAY. MS@ ( -- ms ) could take over from TIME. With this naming scheme, we could use US ( n -- ) to delay microseconds and US@ ( -- n ) to get microseconds from the system.

I would not remove the millisecond variant, because "everybody" knows that ms are used to calculate/convert dates. Additionally, the microsecond variant is too short (if one CELL is used), i.e. it doesn't last long enough before wrapping around.

What do you think? Do you use both time and time-us, as well as delay and delay-us?

This is a very good question! I thought about it as I implemented the .time_us field in the event structure, because it seems so redundant. Maybe it is possible to remove the .time (milliseconds) field, because everyone knows how to convert microseconds to milliseconds, and this "time" is mainly used to calculate time spans. All of your samples use ms, and I think ms is good enough for 95+% of all use cases. The us variants are the exception to the rule. delay is nice to the system (other ESP tasks keep working); I am not sure if delay-us is nice to the system, but the programmer using it knows why he is using it. I definitely use delay-us. I mainly use time-us in the case of the event. The cool thing about Forth is that it is a high-level language even for low-level problems.

I will try to get more experience in the next few days. In my eyes punyforth is "work and fun in progress", so I think it is acceptable that not all words/features are perfect the first time around. I have no problem with words/features changing, as long as the change is documented.



@zeroflag
Copy link
Owner

So, I would vote for a new/additional word MS ( n — ) which overtakes DELAY. MS@ ( — ms ) could overtake TIME. With this naming system, we could use US ( n — ) to delay microseconds and US@ ( — n ) to get microseconds from the system.

This sounds like a good naming convention, I like it. The only thing I'm not sure about is the difference between sdk_os_delay_us and vTaskDelay. The current delay implementation uses the latter. If these functions work significantly differently (e.g. if vTaskDelay blocks only the current task, while sdk_os_delay_us blocks everything), then ms and us don't sound so right, because based on the names I would expect the only difference between them to be the length of the waiting period.

I'll try to figure out the difference between sdk_os_delay_us and vTaskDelay, or if you already know, please let me know.

By the way, isn't vTaskDelay(500000/portTICK_RATE_US) the same as sdk_os_delay_us(500000)?

This is a very good question! I thought about it as I implemented the .time_us field in the event structure, because it seems so redundant. Maybe it is possible to remove the .time (milliseconds) field, because everyone knows how to convert microseconds to milliseconds, and this "time" is mainly used to calculate time spans. All of your samples use ms, and I think ms is good enough for 95+% of all use cases. The us variants are the exception to the rule. delay is nice to the system (other ESP tasks keep working); I am not sure if delay-us is nice to the system, but the programmer using it knows why he is using it. I definitely use delay-us. I mainly use time-us in the case of the event. The cool thing about Forth is that it is a high-level language even for low-level problems.

Yeah, changing it later is always an option.

What worries me when I add new words is that the dictionary space on the ESP is quite small (24 kB). It can quickly be filled up by loading 5-6 modules (although this is not the typical case). I can increase it up to 30 kB, but doing so could result in stability issues because, for example, LWIP allocates netconn buffers dynamically and needs additional space. Maybe some tree shaking, dead code elimination, or minification applied to uber.forth will be the ultimate solution for the limited space.

@gpummer
Copy link
Contributor Author

gpummer commented Aug 18, 2016

This sounds like a good naming convention, I like it. The only thing I'm not sure about is the difference between sdk_os_delay_us and vTaskDelay. The current delay implementation uses the latter. If these functions work significantly differently (e.g. if vTaskDelay blocks only the current task, while sdk_os_delay_us blocks everything), then ms and us don't sound so right, because based on the names I would expect the only difference between them to be the length of the waiting period.

I'll try to figure out the difference between sdk_os_delay_us and vTaskDelay, or if you already know, please let me know.

I couldn't find any "hard facts" from Espressif regarding busy waiting in (sdk_)os_delay_us(). But I think os_delay_us() does busy waiting. The maximum value of 0xffff adds to this impression. The Arduino implementation (delayMicroseconds()) also gives the impression of busy waiting. In that case I would agree with you that ms and us are not such good names. Maybe "busy-wait-us" is a better name; at least it expresses the intention?

By the way, isn't vTaskDelay(500000/portTICK_RATE_US) the same as sdk_os_delay_us(500000)?

I'm not sure if I understand your question. The net result - waiting 50 ms - should be the same if you use 50000 instead of 500000, because of the maximum value for sdk_os_delay_us(). I hope to get access to an oscilloscope in the next few days to verify us and ms.

What worries me when I add new words is that the dictionary space on the ESP is quite small (24 kB). It can quickly be filled up by loading 5-6 modules (although this is not the typical case). I can increase it up to 30 kB, but doing so could result in stability issues because, for example, LWIP allocates netconn buffers dynamically and needs additional space. Maybe some tree shaking, dead code elimination, or minification applied to uber.forth will be the ultimate solution for the limited space.

I like the modules.py approach, because it makes it possible to have a big source code base on one side and (typically) small application code on the other. Yes, if the code base grows, the granularity of these modules could become a problem, as could the sheer number of modules and their dependencies. What I do not like so much is the compilation process at startup (and printing the "%" prompt). Sorry, no offense intended. I thought about "compiling" the modules dictionary on the cross-compiling system, similar to words.S. This could solve both problems (space and startup processing). At the moment my Forth knowledge is way too small for that. Dead or unused code elimination - at the mentioned compilation stage - sounds cool, but I'm not sure if it would break dynamically used words.


@zeroflag zeroflag merged commit f1153de into zeroflag:master Aug 19, 2016
@zeroflag
Copy link
Owner

zeroflag commented Aug 19, 2016

I couldn't find any "hard facts" from Espressif regarding busy waiting in (sdk_)os_delay_us(). But I think os_delay_us() does busy waiting.

Yeah, I think the same. Anyway, I convinced myself that it's OK :). I still like the ms, ms@, us, us@ naming.

What I do not like so much is the compilation process at startup (and printing the "%" prompt). Sorry, no offense intended. I thought about "compiling" the modules dictionary on the cross-compiling system.

I agree. Hiding the prompt is probably not too difficult; storing the binary dictionary instead of the source code in flash is trickier. At first I went in that direction but ran into problems I can't recall at the moment. There is room for improvement in this regard.

I merged the code and then renamed the following:

  • delay to ms
  • delay-us to us
  • time to ms@
  • time-us to us@

How about renaming fields in the event structure: .time -> .ms and .time-us -> .us ?

@gpummer
Copy link
Contributor Author

gpummer commented Aug 20, 2016

I merged the code and then renamed the following:

Cool!

How about renaming fields in the event structure: .time -> .ms and .time-us -> .us ?

Yes! This seems consistent!

@zeroflag
Copy link
Owner

Yes! This seems consistent!

Done.

@zeroflag
Copy link
Owner

zeroflag commented Aug 24, 2016

This is the response that the developer of esp-open-rtos gave regarding sdk_os_delay_us and vTaskDelay.

Hi Attila,

For microsecond precision delay, you'll want sdk_os_delay_us().

  • vTaskDelay suspends the current task in the FreeRTOS scheduler for N ticks (this is why you divide by portTICK_RATE_MS, etc., when passing the argument). The default scheduler tick is 10 ms; although you can configure this tick time, you generally don't want it to be too low (due to the overhead of context switching each time a tick happens).
  • sdk_os_delay_us() "busy-waits" for the specified amount of time. The processor sits in a loop until that amount of time has gone past.

vTaskDelay() is better for long or imprecise delays, because it lets another task wake up and run while the first task is suspended.

sdk_os_delay_us() is better for very precise short delays; you can also surround such a call with vTaskEnterCritical / vTaskExitCritical to disable interrupts. This will guarantee very precise timing except when the NMI triggers (which you can't prevent, unfortunately). Just bear in mind that while delaying in this fashion you are preventing other tasks or interrupt handlers in the system from running.

The final options for predictable delays are timers. You could use one of the timer peripherals and its interrupt (have a look in core/include/esp/timers.h). This gives you a predictable event that will interrupt normal task execution at a finite time.

There is also a way to hook a timer event handler into the NMI handler. I don't actually remember the details of this, it's pretty hacky and AFAIK not documented as part of esp-open-rtos... :/

Angus

So yes, sdk_os_delay_us is implemented as a busy loop, and vTaskDelay is not suitable for high precision delays. A further improvement could be made by using vTaskEnterCritical / vTaskExitCritical around sdk_os_delay_us to make it more precise.

@gpummer
Copy link
Contributor Author

gpummer commented Aug 24, 2016

So yes, sdk_os_delay_us is implemented as a busy loop, and vTaskDelay is not suitable for high precision delays. A further improvement could be made by using vTaskEnterCritical / vTaskExitCritical around sdk_os_delay_us to make it more precise.

Many thanks for your investigation! This is great news.
