pathviewR: Tools to import, clean, and visualize animal movement data in R #409
Comments
Thanks @vbaliga and co-authors for your submission! I have a few minor comments/questions below, and will now start looking for reviewers. Editor checks:
Editor comments
Reviewers:
Please add this badge to the package README
Hi @maelle, Thank you for looking over our submission.
Thanks for pointing this out. We very much welcome users to reach out to us regarding further import functions. We created an Issue template in our repository to allow users to request further import functions, and a link to our Issues templates is now provided in the section of the vignette that you specified.
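For readers curious what such a request template can look like, here is a minimal sketch (purely illustrative — the field names and wording are hypothetical, not pathviewr's actual template):

```markdown
---
name: Request a new import function
about: Ask us to support an additional motion capture data format
---

**Software that produced the data** (name and version):

**Sample file**: (please attach or link a small example file)

**Format notes**: (column meanings, units, any header/metadata rows)
```

A template like this lives in `.github/ISSUE_TEMPLATE/` and pre-fills the issue body when a user opens a new issue of that type.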
Thanks for catching this! We have now added a citation of the Manual of this package. It appears both as a
Please forgive our ignorance. We use Travis CI because I had the impression that it was emphasized in the rOpenSci devguide and because of my familiarity with it in authoring a previous rOpenSci package (ropensci/workloopR). We added GitHub Actions because we were interested in seeing what it had to offer. Admittedly, we are less familiar with GitHub Actions' breadth of coverage (moreover, it is a newer service, right?). Would you advise we use only GitHub Actions? Or is it okay that we continue using both Travis CI and GitHub Actions?
The badge has been added. Thanks again for considering our package! Best regards,
👋 @vbaliga, @scienceisfiction and @epress12! Thanks for your work!
I've found a reviewer, whom I'll assign soon, and am still looking for the second reviewer.
Thanks a lot @asbonnetlebrun for agreeing to review! Your review is due on 2020-12-09. Please find links to our reviewer guide and review template.
Hi @maelle, Thanks very much for your advice. We've expanded our issue template to welcome people to contribute functions and added a question to the template accordingly (see ropensci/pathviewr@7c4c32d). We have also now shifted to using GitHub Actions exclusively for CI, using the "standard" check across Mac/Win/Linux, and revised our method of updating Codecov through Actions as well (see ropensci/pathviewr@f3a787b). Thanks again!
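For readers unfamiliar with the "standard" check, such a GitHub Actions workflow typically runs R CMD check on the three major operating systems. The sketch below is illustrative only (loosely modeled on r-lib/actions; pathviewr's actual workflow file may differ):

```yaml
# Illustrative R CMD check workflow across macOS, Windows, and Linux.
on: [push, pull_request]

name: R-CMD-check

jobs:
  R-CMD-check:
    runs-on: ${{ matrix.config.os }}
    strategy:
      fail-fast: false
      matrix:
        config:
          - {os: macOS-latest,   r: 'release'}
          - {os: windows-latest, r: 'release'}
          - {os: ubuntu-latest,  r: 'release'}
    steps:
      - uses: actions/checkout@v2
      - uses: r-lib/actions/setup-r@v1
        with:
          r-version: ${{ matrix.config.r }}
      - name: Check package
        run: |
          install.packages(c("rcmdcheck", "remotes"))
          remotes::install_deps(dependencies = TRUE)
          rcmdcheck::rcmdcheck(args = "--no-manual", error_on = "warning")
        shell: Rscript {0}
```

Coverage reporting is typically handled in a separate workflow that runs the tests under `covr` and uploads the results to Codecov.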
Thanks a lot @marcosci for agreeing to review! Your review is due on 2020-12-21. Please find links to our reviewer guide and review template.
With this, now both reviewers @asbonnetlebrun and @marcosci are assigned. 😸
Hi @asbonnetlebrun. I'm rOpenSci's Community Manager. If you would like to join our Slack group, please email me at
Thanks for the update @asbonnetlebrun!
Package Review
Please check off boxes as applicable, and elaborate in comments below. Your review is not limited to these topics, as described in the reviewer guide.
I don’t have any working relationship with any of the package authors.
Documentation
The package includes all the following forms of documentation:
There are examples for almost all exported functions in R help. Exceptions are the
Only in the DESCRIPTION file, but not in the README.
Functionality
See details in my comments about some potential coding mistakes.
All tests pass on my machine as they have been designed (but see in my comments for one test that likely should be re-written).
Estimated hours spent reviewing: 8
Review Comments

I want to first point out that my background is in animal movement data, but not with motion capture cameras (I focus on tracking of wild animals, using GPS or similar loggers), so I was not fully familiar with the interests/culture of the field. I thought the package was well documented overall, and the three vignettes in particular really helped me understand what the package was for. That being said, I think it would still benefit from some additional information in the different documents (vignettes, README, R help). See my comments below for some specific points that would be good to add. I also very much appreciated the effort made to make it all work with pipes. In the rest of my review, I'll first focus on comments on the three vignettes to help clarify some points for users. Then I'll focus on the code of the visual perception functions, where I think I found some mistakes that would be worth checking.

Data import and cleaning

Maybe that was me not being very familiar with the kind of experiments the package is relevant for, but I had trouble understanding when we would use the

Also, considering how the

Minor point: the link to the vignette for managing frame gaps is missing in the text.

Managing frame gaps

Could there be cases when frame gaps vary between devices (i.e. if I understood well, in the case of the Motive data, between subjects)? If so, would it be relevant to allow the autodetect approach to be applied to each subject separately? (Or maybe that is not relevant...)

Visual perception

Also, my understanding is that here you implement only two cases:
On this, a question: is there any plan to code more situations, or are these the typical situations for these kinds of experiments? If you welcome suggestions from users of other settings, maybe you could say so in the vignette, in the same way you mention that you are happy to work towards the inclusion of more data types? More generally, I would have appreciated some definition of what you call "spatial frequency" and "visual angle". I understand that these might be obvious for people in the field (who are ultimately the target users of the package), but I feel like these terms (visual angle, and in particular spatial frequency) are of such broad usage that they might deserve a better description of what they mean in the context of the package. Maybe include a diagram in the vignette for users to visualise what these values represent? In particular, the Value section in the R help for the

Comments on the code of the visual perception functions

My understanding of the visual angle is as follows: let's consider an isosceles triangle whose side of unique length lies on the wall and has a length equal to the wavelength of the stimulus, and whose third vertex is at the bird's eye, with the bisector of the angle at the bird's eye being the segment of shortest distance between the bird and the wall (this would have been easier with a diagram, sorry). Then the angle at the bird's eye should be the visual angle -- right? If I am correct, let's dig into the
On that note, there is no need for an
Can you please correct the calculations, and adapt the test-calc_vis_angle_box.R file? Also on this, I noticed that the user can supply negative values for the

Finally, still on distances, inspecting the resulting distances to the negative and positive walls from a range of positive and negative width positions seems to indicate that the distance calculations for the other functions are correct.

Spatial frequency

It currently is:
I think, unless I got the trigonometry wrong, that this represents the number of cycles per 2 degrees of visual angle, and therefore should be changed to:
Just one final, broader question: don't these calculations assume that the bird is parallel to the length axis of the tunnel? Is that not important, or is it common practice not to use the rotation information? Many thanks for your contribution, and I hope this helps!
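To make the triangle geometry in the review above concrete: the visual angle subtended by one stimulus cycle follows from basic trigonometry, and spatial frequency in cycles per degree is its reciprocal. A hedged sketch, in Python purely for illustration (the function names are mine, not pathviewr's, and this is not the package's actual code):

```python
import math

def visual_angle_deg(wavelength, distance):
    """Visual angle (in degrees) subtended by one stimulus cycle of the
    given wavelength, viewed from the given perpendicular distance.
    Half the triangle's base over the distance gives half the apex angle."""
    return math.degrees(2 * math.atan((wavelength / 2) / distance))

def spatial_freq_cpd(wavelength, distance):
    """Spatial frequency in cycles per degree of visual angle:
    one cycle divided by the angle that cycle subtends."""
    return 1 / visual_angle_deg(wavelength, distance)

# For example, a 0.1 m grating cycle viewed from 0.5 m subtends
# about 11.42 degrees, i.e. roughly 0.088 cycles per degree.
```

For distances that are large relative to the wavelength, this approaches the small-angle approximation (angle ≈ wavelength/distance, in radians).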
Many thanks @asbonnetlebrun for your thorough review! 🙏
@asbonnetlebrun, thank you very much for your thoughtful & constructive review! My co-authors and I have each read through your comments and appreciate the time & effort you put into reviewing our package.
@maelle, this is a slightly awkward time for us -- our university's semester is wrapping up and of course the holiday season is approaching. That said, I am certain we can address the reviews in a timely manner. Since we are still awaiting one more review, would it be OK if we held off on making changes until both reviews are in? Given that the due date for @marcosci's review is 2020-12-21, I am confident that my co-authors and I could address both reviews within a couple of weeks after (perhaps by 2021-01-04?). Would that be OK with you?
Package Review
Please check off boxes as applicable, and elaborate in comments below. Your review is not limited to these topics, as described in the reviewer guide.
Documentation
The package includes all the following forms of documentation:
Functionality
Estimated hours spent reviewing: 5
Review Comments
Following @asbonnetlebrun's example -- I am coming from landscape ecology, so I am looking at your package from an outside point of view. I tried the package out with a .csv file from my phone that I tracked while going for a walk, and everything worked just fine. Overall, there is really not much one can add to @asbonnetlebrun's comments. The package is in exceptional shape; there are:
All examples in the vignettes run just fine. Furthermore, the package itself is well documented, I will list a couple of things that crossed my mind:
Other than that ... impressive work! The package is in excellent shape and is, in my opinion, a nice addition to the rOpenSci bestiary.
Thanks a ton @marcosci for your review! 😸
Such a neat idea! Regarding the package name, yes, we recommend all lower-case, especially if the package isn't on CRAN yet. I myself renamed my Ropenaq package after review and I don't regret doing it.
Hi everyone -- hope you have been enjoying the holiday season!
@marcosci -- thank you very much for your constructive review.
@maelle -- thank you for your advice; we are considering renaming the package. We'll likely do this as a final step after addressing all the reviewers' other comments.
@asbonnetlebrun and @marcosci -- we would be happy to list you both as "rev" in our description file. Could you please let us know how you'd like your names to appear and what other contact info (email, ORCID, etc.) you wish to have listed? As you may have seen, we have been working on addressing all the comments. We'll let you know when we have rounded out all our edits. Thanks!
Happy New Year @vbaliga, @scienceisfiction and @epress12, and thanks for the update! (Also Happy New Year to @asbonnetlebrun & @marcosci)
Hi @asbonnetlebrun, @marcosci, and @maelle, Thank you sincerely for taking the time and effort to review our submission. Below, you will find point-by-point responses to each of your comments. Your original comments may not appear fully (for the sake of concision), but please note that we believe we have addressed each item to the best of our capabilities. Since addressing these items was handled over several commits, the most appropriate commit that reflects each change will also be noted within each of our responses. Please also note that the package is now entitled pathviewr.

Review from Reviewer 1 (asbonnetlebrun)

Documentation

Examples for all exported functions in R Help that run successfully locally
We added in examples for each of the following functions. Please note that we consolidated some of the visual guidance functions, so some of the original functions referred to in this reviewer comment have now been replaced or renamed.
Community guidelines including contribution guidelines in the README or CONTRIBUTING, and DESCRIPTION with URL, BugReports and Maintainer (which may be autogenerated via Authors@R).
We have added contribution guidelines (with links to Issues pages) in our README. (f5d47e7)
Functionality
We will address these items in the
We have added you as a "rev" within our description file (5b0661). Please let us know if you would like your name to appear differently and/or would like to adjust contact info etc. Review CommentsData import and cleaning
We have clarified the language of this vignette to indicate that relabeling & gathering are only necessary in certain cases (e.g. using Motive data). Please let us know what you think. (591dd50)
We have clarified the circumstances under which standardization is needed and what types of landmarks are appropriate (591dd50). Additionally, in the Help file for
Thanks! We have revised the language of this vignette on what (0,0,0) represents (591dd50)
Thanks for catching this -- we have added the link (591dd50) Managing frame gaps
We have added details of what
Sorry for being unclear - this function actually has this feature! We have made a slight modification to the language that will hopefully make it more clear (62b63b7) Visual perception
Thanks. We have heavily revised the language of the first couple of paragraphs accordingly, and a figure has also been inserted to hopefully give readers a better sense of the concepts at hand. Please let us know what you think! (97704bc)
We have revised the language of how these experiments are introduced (97704bc). We have also provided links to the Issues page where users can request e.g. different tunnel setups (20bb54a)
We have added in definitions along with citations of some journal articles that provide nice summaries of these topics. (fa7b12e) Comments on the code of the visual perception functions
Thanks for the suggestion. The visual perception functions now include calculations for the end walls, though the outputs from these functions only include the end wall the subject is moving towards. Please see the changes to the following functions:
Note from VBB: for brevity, the details of your calculations will not be included here.
Thanks! We have now replaced the
Thanks sincerely for catching this! We have made corrections to the calculations of vis angle and SF (6a36758)(99c9ef5). We have also updated
Great catch! We have now added guidance to the Help file of

Spatial frequency
Thanks, we have made corrections to the calculation of SF (99c9ef5)
Yes, this is true and a great point. It is actually not common practice in the field to use rotation information -- quite a few studies still rely solely on positional data. That said, we definitely agree that adding rotation is an important way to advance our understanding of visual guidance. At this time, we allow for rotation data to be imported & wrangled along with all the positional data, but rotation data have not yet been integrated into the visual perception functions. This is largely because how rotation data are encoded can be tricky (depending on how the rotation matrix is composed and/or how orientation axes are defined on each subject). Accordingly, we are still working on a way that will work generically for most use cases. For now, we have added a note in the Help files for each visual perception function that rotation information may be integrated in future pathviewr updates (46ba850). We have also done so in the Visual perception vignette (a3a7795)

Review from Reviewer 2 (marcosci)
We have added you as a "rev" within our description file (5b0661). Please let us know if you would like your name to appear differently and/or would like to adjust contact info etc. Review Comments
Yes thanks, this is a good point and we agree. We have made a slight alteration to our description and README. We don't want to oversell the features of our package, so we hope that where we have landed with the language strikes a nice balance.
We agree and added a short walkthrough of what movement data look like, both generally and specifically in Motive and Flydra (591dd50)
We have added contribution guidelines provided by rOpenSci (with slight modification) (5252918)
We've updated the Maintainer field with VBB as the maintainer (9146e09)
Yeah, we see what you mean and went ahead and changed the name to pathviewr.

Additional items from the editor (maelle)
As noted above, we changed the name to pathviewr.

On behalf of @scienceisfiction and @epress12, thanks again for all your feedback and advice! And sorry for the slight delay in getting back to you. My (VBB's) wife just gave birth to our first child last week -- it's been a pretty wild ride! Best regards,
@vbaliga wow, congratulations! 🥚 🐢 ✨ @vbaliga @scienceisfiction and @epress12, thanks a lot for your work and detailed response! @asbonnetlebrun @marcosci Could you please indicate whether you are happy with the authors' response? Template https://devguide.ropensci.org/approval2template.html
@asbonnetlebrun @marcosci Friendly reminder 😸 Could you please indicate whether you are happy with the authors' response? Template devguide.ropensci.org/approval2template.html
@maelle @vbaliga Sorry again for taking so long to have a look. I've now checked the authors' response and I am very happy with it. In particular, the authors did a really nice job of clarifying what the different functions of the package are for (very nice figure in the visual perception vignette!). Great job, thanks!
Reviewer Response
Sorry - I was quite buried in things the last couple of weeks 😮
Final approval (post-review)
Estimated hours spent reviewing: I may have forgotten how much time I spent reviewing ...
Thank you @asbonnetlebrun @marcosci!
@ropensci-review-bot approve |
Approved! Thanks @vbaliga for submitting and @asbonnetlebrun, @marcosci for your reviews! 😁 To-dos:
Should you want to acknowledge your reviewers in your package DESCRIPTION, you can do so by making them
Welcome aboard! We'd love to host a post about your package - either a short introduction to it with an example for a technical audience, or a longer post with some narrative about its development or something you learned, and an example of its use for a broader readership. If you are interested, consult the blog guide, and tag @stefaniebutland in your reply. She will get in touch about timing and can answer any questions. We've put together an online book with our best practices and tips; this chapter starts the 3rd section, which is about guidance for after onboarding. Please tell us what could be improved; the corresponding repo is here.
(don't mind me testing the bot now works correctly 😅 ) |
Hi everyone - thanks so much for reviewing pathviewr!
Cheers,
I can now ask one last question: why the turtle? 🙂 |
Lots of little reasons, but among them one of the bigger ones is: https://xkcd.com/889/ |
Hi @maelle, thanks again for all your help. We have taken care of all the items on the to-do list.
Happy to! Thanks again for the reviews.
Thanks, we are discussing the idea of this and will be sure to reach out to Stefanie if we opt to contribute |
Awesome, thanks for the update! Please also tell me if other authors need to be invited to the ropensci GitHub organization. |
Submitting Author: Name (@vbaliga)
Due date for @asbonnetlebrun: 2020-12-05
Repository: https://github.com/vbaliga/pathviewr
Version submitted: v0.9.4 (ropensci/pathviewr@6e04870)
Editor: @maelle
Reviewers: @asbonnetlebrun, @marcosci
Due date for @marcosci: 2020-12-21
Archive: TBD
Version accepted: TBD
Scope
Please indicate which category or categories from our package fit policies this package falls under: (Please check an appropriate box below. If you are unsure, we suggest you make a pre-submission inquiry.):
Explain how and why the package falls under these categories (briefly, 1-2 sentences):
pathviewR provides tools to extract data from motion capture software such as Optitrack's Motive and the Straw Lab's Flydra (among other sources). Tools to wrangle and clean data sets to isolate specific trajectories of animal movement are offered, along with the capability to use automated workflows for processing data from high-throughput laboratory experiments.

Who is the target audience and what are scientific applications of this package?

Users of motion capture software, including researchers who study animal and/or object motion in a laboratory setting, especially for analyses of visual guidance of behavior.

Are there other R packages that accomplish the same thing? If so, how does yours differ or meet our criteria for best-in-category?

We are not aware of any other R (or otherwise open-source) packages that import & wrangle data from Optitrack's Motive or the Straw Lab's Flydra. Although there are other R packages that handle animal movement trajectory data, these packages tend to be geared towards analyses of large-scale, geospatial movement data (e.g. trajectories, or various R packages in the AniMove software suite). In contrast, pathviewR specifically targets tracking of animals (or other subjects) in laboratory settings (e.g. in a flight tunnel) and offers tools to estimate visual perceptions as subjects move in relation to their surroundings.

(If applicable) Does your package comply with our guidance around Ethics, Data Privacy and Human Subjects Research?

pathviewR, although not specifically geared towards collecting data from human subjects, is in compliance with rOpenSci's policy.

If you made a pre-submission enquiry, please paste the link to the corresponding issue, forum post, or other discussion, or @tag the editor you contacted.
Technical checks
Confirm each of the following by checking the box.
This package:
Publication options
JOSS Options
paper.md matching JOSS's requirements with a high-level description in the package root or in inst/.
MEE Options
Code of conduct