Test the performance of the controller #18
It would be nice to come up with a sort of workspace estimation, maybe just for a limited portion of the reaching space of the robot. In the past, we did something similar for the iCub: https://github.com/robotology/icub-workspace-estimation. I'm not saying that we should do the same thing, but we could visualize a similar, narrower estimation.

[video: iCubLeftRightArmJointWorkspace.mp4]

See https://github.com/robotology/icub-workspace-estimation/tree/master/examples.
I started working on this activity. This is a WIP. I started developing a "test" to understand which poses are reachable with the planning group.

[video: Screencast.from.11-08-2023.03.39.22.PM.webm]

In this video, 100% of the trajectory is achieved 3 times out of 10; the other poses are not completely reachable due to some detected collisions (I enabled collision avoidance between links when computing the Cartesian trajectory). I also printed out the desired pose and the current pose, so it's possible to compute the error between them.
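For reference, a minimal sketch of such a reachability check with the MoveIt Python commander; the planning-group name "right_arm" is a hypothetical placeholder, and the actual test code may differ:

```python
# Minimal sketch of the reachability test, assuming the MoveIt Python
# commander and a hypothetical planning group called "right_arm".
import sys
import rospy
import moveit_commander

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("reachability_test")
group = moveit_commander.MoveGroupCommander("right_arm")

target = group.get_random_pose().pose   # random but kinematically valid eef pose

# eef_step = 1 cm interpolation step, jump_threshold disabled (0.0),
# collision checking enabled while interpolating the Cartesian path.
plan, fraction = group.compute_cartesian_path([target], 0.01, 0.0,
                                              avoid_collisions=True)
if fraction == 1.0:
    group.execute(plan, wait=True)

current = group.get_current_pose().pose  # compare against the desired pose
print("fraction achieved: %.2f" % fraction)
```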
Today I repeated the test with 100 poses instead of 10, with ~23/100 poses reached. I saved the results and tried to plot them with scatter3 in MATLAB, obtaining:

[image: scatter3 plot of the sampled poses]

Of course, these are not enough yet; it was only a first attempt to see the result. I'm planning to collect something like 1000 poses. I noticed that when, from a non-reached pose, I tried to come back to the home position with a Cartesian trajectory again, sometimes the controller was not able to compute the inverse movement and got stuck. From that point on, the trajectory generation was affected and returned a lot of failures. To prevent this situation, after the random valid pose for the end effector is reached, the return movement is performed in joint space rather than in Cartesian space, so that each trajectory generation starts from the same configuration of the kinematic chain (see the sketch below).
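One possible way to do the joint-space reset between trials, assuming a named target "home" defined in the SRDF (both names are hypothetical):

```python
# Joint-space return to home between trials; "home" is a hypothetical
# named target that would be defined in the SRDF.
import sys
import rospy
import moveit_commander

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("home_reset")
group = moveit_commander.MoveGroupCommander("right_arm")

group.set_named_target("home")
group.go(wait=True)        # joint-space motion: no Cartesian IK involved
group.stop()
group.clear_pose_targets()
```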
How are these random poses generated? 23/100 seems quite a low success rate.
This is a potential problem that we should solve on our end. We ought to be able to reinstate Cartesian control irrespective of the previous result. Maybe an initialization problem of the solver?
As said in the previous comment, I used the getRandomPose method provided by MoveIt. It generates random but valid poses, so they should be solvable without any problem. Maybe they are not because of collisions detected between adjacent links during execution (so far, collisions are avoided, so the IK resolution fails as soon as one is detected).
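For context, getRandomPose (get_random_pose in the Python commander) samples random joint values within limits and runs forward kinematics for the end-effector link, so the returned pose is reachable by construction, but collisions are not accounted for. A quick check, with the group name again hypothetical:

```python
# get_random_pose() returns a geometry_msgs/PoseStamped obtained via FK
# from a random joint configuration, hence kinematically valid.
import sys
import rospy
import moveit_commander

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("random_pose_demo")
group = moveit_commander.MoveGroupCommander("right_arm")  # hypothetical name

pose = group.get_random_pose()
print(pose.pose)   # valid eef pose, but it may still be in self-collision
```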
I could investigate. I'm going to read some documentation and the paper about the TRAC-IK implementation.
Some updates on this activity:
Doing more tests on the collision issue, I noticed that, with the iCub model, the pairs of links that went into collision most frequently are […]. I checked the self-collision matrix from the […]. Those collisions are probably due to the covers, but they have to be taken into account when running the test on the real robot, so I think they should not be disabled. In this case, the colliding links are […]. Instead of using […].
Nice investigation @martinaxgloria 👍🏻 At any rate, I think that we can make things simpler.
Today, @Nicogene and I did some visual analysis of the collisions retrieved during the tests. In particular, we started from the observation that, during workspace sampling, a collision between […] is reported.

[image: contact between the forearm cover and the hand mesh, wrist piece highlighted in red]

As you can see from the image above, the collision is due to contact between the forearm cover and the hand mesh: in this particular model, the hand shrinkwrap is really different from the real one, since we don't have the fingers, and the cover is modeled with some extra pieces that are not present on the real robot, like the one on the wrist, highlighted in red in the figure, that comes into contact. For this reason, we decided that we could disable the self-collision check between the two links mentioned above. After doing that, we could sample the workspace again, but this time with […].

cc @pattacini
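In MoveIt, such a pair can be excluded from self-collision checking in the SRDF; a sketch with hypothetical link names standing in for the actual forearm-cover and hand links:

```xml
<!-- Hypothetical SRDF entry: skip collision checking between the forearm
     cover and the hand, whose meshes differ from the real robot. -->
<disable_collisions link1="r_forearm" link2="r_hand" reason="User"/>
```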
After a f2f alignment with @pattacini and @Nicogene, we decided to go on without collision checking for the time being, to reduce the number of variables that could cause failures. Moreover, we decided to investigate whether there's a correspondence (or even a dependency) between the pose error (the difference between the pose to be reached and the one actually reached) and the computeCartesianPath return value (as stated in the API documentation, this function "return a value that is between 0.0 and 1.0 indicating the fraction of the path achieved as described by the waypoints. Return -1.0 in case of error."). Today I'm going to collect this data and make the analysis (a possible collection loop is sketched below).
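A possible data-collection loop for that analysis, pairing each trial's achieved fraction with the final position error; group and named-target names are hypothetical, as in the earlier sketches:

```python
# Pair compute_cartesian_path's achieved fraction with the final position
# error for each random target; collisions are not checked, per the comment.
import sys
import csv
import rospy
import moveit_commander
import numpy as np

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("fraction_vs_error")
group = moveit_commander.MoveGroupCommander("right_arm")

rows = []
for _ in range(100):
    target = group.get_random_pose().pose
    plan, fraction = group.compute_cartesian_path([target], 0.01, 0.0,
                                                  avoid_collisions=False)
    if fraction > 0.0:
        group.execute(plan, wait=True)
    cur = group.get_current_pose().pose
    err = np.linalg.norm([target.position.x - cur.position.x,
                          target.position.y - cur.position.y,
                          target.position.z - cur.position.z])
    rows.append((fraction, err))
    group.set_named_target("home")      # joint-space reset between trials
    group.go(wait=True)

with open("fraction_vs_error.csv", "w") as f:
    csv.writer(f).writerows(rows)
```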
From a first preliminary analysis (thanks @pattacini for the help), it turned out that the fraction of the path achieved is somehow related to the position error between the ideal and the actual final pose. However, even for a quite high achieved fraction (~80%), the position error can be equally high (~4 cm). For this reason, in agreement with @Nicogene as well, we decided to give up on these fractions and rely only on the errors in both position and orientation.
The data analysis regarding only the position error resulted in something like this:

[plots: position error over the workspace in front of iCub, for downward and leftward eef orientations]

The two plots represent the workspace in front of iCub, sampled with a spatial step of 5 cm in each direction, one with the eef (i.e., the right hand) oriented downward, the other with the eef oriented leftward (a reaching-like orientation). The colormap indicates the error between the desired and the current positions. Regarding the orientation, this afternoon I read up on how to compute the error angle with quaternions. Tomorrow I'm going to add the other plots.
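One standard way to reduce two unit quaternions to a single error angle, accounting for the fact that q and -q represent the same rotation (a sketch, not necessarily the exact formula used here):

```python
# Angle (rad) of the rotation between desired and current orientations,
# both given as unit quaternions (x, y, z, w).
import numpy as np

def orientation_error(q_des, q_cur):
    dot = abs(np.dot(q_des, q_cur))       # |<q_d, q_c>| handles the double cover
    return 2.0 * np.arccos(np.clip(dot, 0.0, 1.0))

# 0.2 rad rotation about z against the identity orientation: prints ~0.2
print(orientation_error(np.array([0.0, 0.0, 0.0, 1.0]),
                        np.array([0.0, 0.0, np.sin(0.1), np.cos(0.1)])))
```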
Some updates on that. During the last few days, I collected other data to have a more detailed analysis of the performance of this controller. In particular, I started from this configuration file with […]. In this sense, I decided to stay on […].
For this reason, I increased the values of […]. I'll add more updates after this test.
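For reference, TRAC-IK's behavior under MoveIt is typically tuned in kinematics.yaml; a hypothetical sketch of the kind of parameters involved (the values below are illustrative, not the ones used in these tests):

```yaml
# Hypothetical kinematics.yaml for the arm group; parameter names follow
# the TRAC-IK MoveIt plugin, values are only illustrative.
right_arm:
  kinematics_solver: trac_ik_kinematics_plugin/TRAC_IKKinematicsPlugin
  kinematics_solver_timeout: 0.01   # seconds allowed per IK attempt
  solve_type: Distance              # prefer solutions close to the seed state
  epsilon: 1.0e-5                   # Cartesian error tolerance
```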
After the workaround of […].
Here are the final plots:

[plots: position and orientation errors for downward and leftward orientations of the right hand]

In the end, I plotted the position and orientation estimation for both the downward and leftward orientations of the right hand, with the TRAC-IK parameters tuned as per the previous comment. Please let me know if they are OK, so that I can start writing the report about this activity to be included in this repo.
Fine with me!
Thanks @pattacini! cc @Nicogene
Fine also for me!
I think most of it is done; maybe something should be adjusted or added. I'll do my best.
In the meantime, I'll read the report again to see if anything is missing, in my opinion.

I opened a PR with everything about these tests, including the report:
Superb!
Once done, we could write a report to be added to the repo.