create timer with clock #211
Conversation
One thing that looks like an obvious typo, but otherwise LGTM.
rclpy/src/rclpy/_rclpy.c
Outdated
rcl_clock_t * clock = (rcl_clock_t *)PyCapsule_GetPointer(pyentity, "rcl_clock_t");
rcl_ret_t ret_clock = rcl_clock_fini(clock);
PyMem_Free(timer);
Is this supposed to be clock instead?
Good catch! Absolutely a bug - fixed in fa732b9.
rclpy/src/rclpy/_rclpy.c
Outdated
@@ -1193,20 +1193,33 @@ rclpy_create_timer(PyObject * Py_UNUSED(self), PyObject * args)
    return NULL;
  }

  rcl_clock_t * clock = (rcl_clock_t *) PyMem_Malloc(sizeof(rcl_clock_t));
I see that timer already gets allocated with PyMem_Malloc, but what I'm wondering is why we do this rather than use the rcl_allocator_t. Maybe @sloretz or whoever did this originally has an idea. Even if we don't have to use PyMem_Malloc, it might be the right preference anyway. It's just that we're mixing the two, which happen to be the same right now, but are not necessarily both using malloc under the hood.

We control both the allocation and free of these resources, so either should work, I think. We're never relying on Python to free them for us (I think).
I don't have a good reason for using this over malloc via an rcl_allocator_t. There is more info about pymalloc here. As long as memory is freed by the same allocator it was allocated with, I wouldn't expect problems from mixing the two.
> We control both the allocation and free of these resources, so either should work I think. We're never relying on Python to free them for us (I think).
AFAIK this is correct.
The typo (freeing the wrong variable) has been fixed.
Could you comment on the next steps for this? We have Clocks in rclpy now (I just merged #209), and I'm wondering if we plan to let users pass a clock into the Timer in the rclpy API at a later date or not. (Probably this PR is just to keep it compiling with changes in connected PRs, and the API change is a follow-up, but I want to double check.)
I would suggest getting the current PRs merged. In a follow-up PR we can add new API (probably to the Node class) to create timers with an arbitrary clock type. Internally it has to maintain (at least) one clock per type. Additional timers with the same clock type should be able to share the clock.
e2a2e0e to f938bac (compare)
* Alternative way of giving timers a clock: 1) Pass clock into rclpy_create_timer 2) Make use of PyCapsule destructor
Match API change from ros2/rcl#272. Connect to ros2/rclcpp#523.