
Identify requests #137

Merged: 9 commits into master from identify_requests, Oct 6, 2020
Conversation

@joesilber (Contributor):

Code now distinguishes "TRACKED" targets from "REQUESTED".

TRACKED:
These are errors calculated with respect to POS_T, POS_P. They show whether we have control over the robots.

REQUESTED:
Code now parses the MOVE_CMD field and identifies the requested targets from the caller. Due to anticollision, travel limits, and enabled/disabled state, the petal of course has the right to refuse some of these targets, so REQUESTED will differ from TRACKED in those cases. Errors calculated with respect to REQUESTED show how well we are getting fibers to desired locations, and whether those desired locations are achievable at all.
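A minimal sketch of the two error definitions above (the data layout and field values here are hypothetical, made up for illustration; the actual code reads POS_T, POS_P and parses MOVE_CMD from the positioner move database):

```python
import math

def xy_error_um(meas_xy, ref_xy):
    """Euclidean error in microns between a measured (x, y) and a reference (x, y), both in mm."""
    return math.hypot(meas_xy[0] - ref_xy[0], meas_xy[1] - ref_xy[1]) * 1000.0

# Hypothetical record for one positioner (all coordinates in mm):
row = {
    "meas_xy":      (1.002, -0.498),  # measured fiber position
    "tracked_xy":   (1.000, -0.500),  # internally tracked target (from POS_T, POS_P)
    "requested_xy": (1.010, -0.500),  # target the caller asked for (parsed from MOVE_CMD)
}

# TRACKED error: do we have control over the robot?
err_tracked = xy_error_um(row["meas_xy"], row["tracked_xy"])
# REQUESTED error: did the fiber reach the location the caller actually wanted?
err_requested = xy_error_um(row["meas_xy"], row["requested_xy"])
```

When the petal refuses a target (anticollision, travel limits, disabled state), the two references diverge, and only the REQUESTED error reveals it.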

@julienguy (Collaborator) left a comment:

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Can you please add (to the github conversation) an example I could run at beyonce or desicleanroom2 so I can test the new functionalities?

@joesilber (Contributor, Author):

git checkout posmoves-tp-updates
get_posmoves --host beyonce.lbl.gov --port 5432 --password reader --petal-ids 1 --exposure-ids 3094 --pos-ids M02226,M02182,M06357,M06074,M06389,M06303,M01981,M06304,M02136,M01504,M06522,M01867,M01858,M06949,M06073,M06574 -o ~/jhsilber/posmovedata/test_3094/ -c -t
git checkout identify_requests
analyze_pos_performance -i ~/jhsilber/posmovedata/test_3094/* -o ~/jhsilber/posmovedata/test_3094/

@joesilber (Contributor, Author):

Example result:

[screenshot attached in the original thread]

@julienguy (Collaborator):

Thanks. I verified I can run this at NERSC, so I am going to merge, but I have one comment: is there a metric that would allow us to detect an actual collision? The max error in this example is 100 um, but it is not clear whether that is due to poor blind-move accuracy, friction, or a collision.

@julienguy merged commit 832b551 into master on Oct 6, 2020
@joesilber (Contributor, Author):

Thanks. I do not have a quantitative metric that would say whether a 100 um error was or was not a "glancing" collision. It is about 3x the RMS error, so it sits in a gray area between a collision and something else.

One would be tempted to just run 100 targets with a restricted patrol radius, but that is not exactly comparable, because the errors should be magnified with both arms extended.

What I can do is run my new replay.py tool and generate an animation of what the move tables say should have happened. We can look at the animation and see if there was a "close call" between that positioner and any others.
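The "3x RMS" gray area discussed above could at least be turned into a crude outlier flag. A sketch follows; the function name, the fixed reference RMS, and the error values are all assumptions for illustration, not part of this PR:

```python
def flag_suspect_moves(errors_um, rms_um, n_sigma=3.0):
    """Return indices of positioners whose error exceeds n_sigma x a reference RMS.

    This is only a heuristic cut: by itself it cannot distinguish a glancing
    collision from friction or poor blind-move accuracy.
    """
    return [i for i, e in enumerate(errors_um) if e > n_sigma * rms_um]

# Made-up per-positioner errors in microns, with a reference RMS of 30 um
# taken from known-good moves:
errors = [22.0, 27.0, 31.0, 25.0, 100.0]
suspects = flag_suspect_moves(errors, rms_um=30.0)  # only index 4 exceeds 3 x 30 um
```

Flagged positioners would then be candidates for inspection with a tool like replay.py rather than being declared collisions outright.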

@joesilber (Contributor, Author):

I would be delighted if you would like to try out replay.py; it is very useful, and I am feeling bottlenecked in terms of what I can get through in a day.

@julienguy deleted the identify_requests branch on October 21, 2020