
Make tests on the performance of the controller #18

Closed
Nicogene opened this issue Oct 30, 2023 · 22 comments · Fixed by #19
Labels
domain-software (Related to Software), team-fix (Related to Team Fix)

Comments

@Nicogene
Member

Once done we could write a report to be added to the repo

@Nicogene added the domain-software and team-fix labels on Oct 30, 2023
@pattacini
Member

It would be nice to come up with a sort of workspace estimation, maybe just for a limited portion of the reaching space of the robot.

In the far past, we did something similar for the iCub: https://github.com/robotology/icub-workspace-estimation.

I'm not saying that we should do the same thing, but we could visualize a similar narrower estimation.

iCubLeftRightArmJointWorkspace.mp4

See https://github.com/robotology/icub-workspace-estimation/tree/master/examples.

@martinaxgloria
Member

martinaxgloria commented Nov 8, 2023

I started working on this activity. This is a WIP.

I started developing a "test" to understand which poses are reachable with the planning group torso + right_arm. I used the getRandomPose method provided by the moveit::planning_interface::MoveGroup class to generate valid random poses (position + orientation) for the end effector (i.e. the right hand). So far, I tried with 10 poses and this is what I obtained:

Screencast.from.11-08-2023.03.39.22.PM.webm

In this video, 100% of the trajectory is reached in 3 cases out of 10; the other poses are not completely reachable because of collisions that get detected (collision checking between links is enabled when computing the Cartesian trajectory). I also printed out the desired pose and the current pose, so it's possible to compute the error between them.
I'll keep working on it
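
For reference, the core of such a test might look roughly like the sketch below (this is not the actual test code; the planning group name "torso_right_arm" and the step values are my assumptions):

```cpp
// Minimal sketch: sample one random (but valid) end-effector pose and try to plan
// a Cartesian path towards it with MoveIt. Group name and step sizes are placeholders.
#include <moveit/move_group_interface/move_group_interface.h>
#include <rclcpp/rclcpp.hpp>

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  auto node = std::make_shared<rclcpp::Node>("cartesian_reach_test");

  moveit::planning_interface::MoveGroupInterface group(node, "torso_right_arm");

  // Random yet valid pose (position + orientation) for the end-effector link.
  geometry_msgs::msg::PoseStamped target = group.getRandomPose();

  // Plan a Cartesian path through a single waypoint; collisions are checked by default.
  std::vector<geometry_msgs::msg::Pose> waypoints{target.pose};
  moveit_msgs::msg::RobotTrajectory trajectory;
  const double eef_step = 0.01;       // interpolation resolution [m]
  const double jump_threshold = 0.0;  // 0.0 disables the joint-space jump check
  const double fraction = group.computeCartesianPath(waypoints, eef_step, jump_threshold, trajectory);

  RCLCPP_INFO(node->get_logger(), "fraction of the path achieved: %.2f", fraction);
  if (fraction >= 1.0)
  {
    group.execute(trajectory);
  }

  rclcpp::shutdown();
  return 0;
}
```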

cc @Nicogene @pattacini

@martinaxgloria
Member

Today I repeated the test with 100 poses instead of 10, with a result of ~23/100 poses reached. I saved the results and tried to plot them with scatter3 in MATLAB, obtaining:

scatter

Of course, these are not enough data points yet; it was only a first attempt to look at the results. I'm planning to collect something like 1000 poses.

I noticed that when, starting from a non-reached pose, I tried to come back to the home position with another Cartesian trajectory, sometimes the controller was not able to compute the inverse movement and got stuck. From that point on, the trajectory generation was affected and returned a lot of failures. To prevent this situation, after each random valid pose for the end effector is attempted, the movement back home is now performed in joint space rather than in Cartesian space, so that every trajectory generation starts from the same configuration of the kinematic chain.
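
As an illustration, the return-to-home step could be as simple as the following sketch (assuming a "home" named target is defined in the SRDF; a vector of joint values passed to setJointValueTarget would work as well):

```cpp
// Sketch of the joint-space "come back home" workaround: after each Cartesian
// attempt, move back to a fixed configuration so that every trial starts from
// the same state of the kinematic chain.
void return_home(moveit::planning_interface::MoveGroupInterface & group)
{
  group.setNamedTarget("home");  // "home" is an assumed named target
  group.move();                  // plan and execute in joint space
}
```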

@pattacini
Member

How are these random poses generated? 23/100 seems quite a low success rate.

I noticed that when, starting from a non-reached pose, I tried to come back to the home position with another Cartesian trajectory, sometimes the controller was not able to compute the inverse movement and got stuck. From that point on, the trajectory generation was affected and returned a lot of failures. To prevent this situation, after each random valid pose for the end effector is attempted, the movement back home is now performed in joint space rather than in Cartesian space, so that every trajectory generation starts from the same configuration of the kinematic chain.

This is a potential problem that we should solve on our end. We ought to be able to reinstate the Cartesian control irrespective of the previous result. Maybe an initialization problem of the solver?

@martinaxgloria
Member

How are these random poses generated? 23/100 seems quite a low success rate.

As said in the previous comment, I used the getRandomPose method provided by MoveIt. It generates random but valid poses, so they should be solvable without any problem. Maybe they are not because of collisions detected between adjacent links during the execution (so far, collision checking is enabled, so the IK resolution fails when a collision is detected).

Maybe an initialization problem of the solver?

I could investigate. I'm going to read some documentation and the paper about TRAC-IK implementation.

@martinaxgloria
Member

martinaxgloria commented Nov 10, 2023

Some updates on this activity:

  • there's a TRAC-IK parameter called position_only_ik (it is not well documented; you can find its definition here). As stated in this configuration file, the default value is false: this means that both orientation and position are taken into account. In this configuration, the solver often fails because of some link collision (see for example this for reference).
  • I tried to set this position_only_ik parameter to true and requested 10 random poses, with a result of 9 successes out of 10. This suggests that the failures are probably due to the orientation rather than the position. Of course, this configuration does not fit our case, since we want to control both position and orientation.
  • I can try to use both position and orientation and retrieve which kind of collision is detected, and between which links/joints/covers, whenever 100% of the trajectory is not achieved.
  • At the same time, instead of using the getRandomPose function, we could sample the workspace as done here

cc @Nicogene @pattacini

@martinaxgloria
Member

Doing more tests on the collision issue, I noticed that with the iCub model the links that go into collision most frequently are the pairs chest - root_link and r_forearm - r_hand.

Screenshot from 2023-11-13 16-16-51

I checked the self-collision matrix from the MoveIt Setup Assistant and those pairs have the collision check enabled, as expected, since they're not adjacent and they can come into contact.
Screenshot from 2023-11-13 16-18-15
Screenshot from 2023-11-13 16-18-31

Those collisions are probably due to the covers, but they have to be taken into account when running the test on the real robot, so I think they should not be disabled.
Moreover, I double-checked with the ergoCub model to understand whether these collisions were detected in that case too, and indeed they are:
Screenshot from 2023-11-13 17-12-45

In this case, the colliding links are r_hand_palm - r_wrist_1, which are not adjacent in the kinematic chain and have the self-collision check enabled.

Instead of using the getRandomPose method, I tried sampling the space as done in the icub-workspace-estimation repo but, after carefully reading the code, I wasn't able to understand how to sample the orientation in addition to the position.

cc @Nicogene @pattacini

@pattacini
Member

Nice investigation @martinaxgloria 👍🏻

At any rate, I think that we can make things simpler.
Let's do the following:

  • Disable all types of collision checking straight away, no matter what. Let's pretend that covers are not there and that we only rely on the given joint bounds. We can deepen this aspect later.
  • We should only sample a quite limited workspace in front of the robot. Regarding orientation, let's narrow down our analysis to just 2 orientations:
    • palm down
    • palm oriented leftward considering the right hand and vice-versa for the left hand.

@martinaxgloria
Member

martinaxgloria commented Nov 15, 2023

Today, @Nicogene and I did some visual analysis of the collisions retrieved during the tests. In particular, since a collision between r_hand and r_forearm is often reported during workspace sampling, we moved r_wrist_pitch to its upper limit from the yarpmotorgui, using the iCubGazeboV2_5 (no hands) model, in order to reproduce the collision:

image

As you can see from the image above, the collision is due to contact between the forearm cover and the hand mesh. In this particular model, the hand shrinkwrap is quite different from the real one: there are no fingers, and the cover is modeled with extra pieces that are not present on the real robot, such as the one on the wrist (highlighted in red in the figure) that comes into contact. For this reason, we agreed that we could disable the self-collision check between these two links. After doing that, we could sample the workspace again, this time with collision checking enabled during Cartesian trajectory generation, and see the results.
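
The usual place to disable such a pair is a <disable_collisions> entry in the SRDF (the same file the Setup Assistant writes). As an alternative sketch, assuming a PlanningSceneMonitor is available, the pair could also be allowed programmatically through the AllowedCollisionMatrix:

```cpp
// Sketch only: mark contacts between r_forearm and r_hand as allowed in the
// monitored planning scene, so self-collision checking skips this pair.
// Assumes an already-initialized planning_scene_monitor::PlanningSceneMonitorPtr psm.
#include <moveit/planning_scene_monitor/planning_scene_monitor.h>

void allow_forearm_hand_contact(const planning_scene_monitor::PlanningSceneMonitorPtr & psm)
{
  planning_scene_monitor::LockedPlanningSceneRW scene(psm);
  scene->getAllowedCollisionMatrixNonConst().setEntry("r_forearm", "r_hand", true /* allowed */);
}
```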

cc @pattacini

@pattacini
Member

pattacini commented Nov 15, 2023

Orientation error in axis-angle notation, from Sciavicco-Siciliano:

image
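
For the record, a transcription of that angle/axis error (to be checked against the book; $\boldsymbol{R}_d$ is the desired and $\boldsymbol{R}_e$ the current end-effector orientation):

$$
\boldsymbol{R}(\vartheta, \boldsymbol{r}) = \boldsymbol{R}_d \boldsymbol{R}_e^{T}, \qquad
\vartheta = \cos^{-1}\!\left(\frac{r_{11}+r_{22}+r_{33}-1}{2}\right), \qquad
\boldsymbol{r} = \frac{1}{2\sin\vartheta}
\begin{bmatrix} r_{32}-r_{23} \\ r_{13}-r_{31} \\ r_{21}-r_{12} \end{bmatrix},
\qquad
\boldsymbol{e}_O = \boldsymbol{r}\,\sin\vartheta
$$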

cc @martinaxgloria @Nicogene

@martinaxgloria
Member

martinaxgloria commented Nov 16, 2023

After a f2f alignment with @pattacini and @Nicogene, we decided to go on without collision checking for the time being, to reduce the number of variables that could cause failures. Moreover, we decided to investigate whether there's a correspondence (or even a dependency) between the pose error (the difference between the pose to be reached and the one actually reached) and the value returned by computeCartesianPath (as said in the API documentation, this function "return[s] a value that is between 0.0 and 1.0 indicating the fraction of the path achieved as described by the waypoints. Return -1.0 in case of error.").
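
A sketch of the kind of check involved (illustrative only; it reuses a MoveGroupInterface object like the one above, and the target pose is whatever waypoint was requested):

```cpp
#include <cmath>
#include <moveit/move_group_interface/move_group_interface.h>
#include <rclcpp/rclcpp.hpp>

// Illustrative only: plan towards `target`, execute whatever fraction of the path
// was achieved, then measure the residual Cartesian position error of the eef.
void log_fraction_vs_error(const rclcpp::Node::SharedPtr & node,
                           moveit::planning_interface::MoveGroupInterface & group,
                           const geometry_msgs::msg::Pose & target)
{
  std::vector<geometry_msgs::msg::Pose> waypoints{target};
  moveit_msgs::msg::RobotTrajectory trajectory;
  const double fraction = group.computeCartesianPath(waypoints, 0.01, 0.0, trajectory);
  if (fraction < 0.0)  // -1.0 signals an error
    return;

  group.execute(trajectory);

  const geometry_msgs::msg::Pose reached = group.getCurrentPose().pose;
  const double ex = target.position.x - reached.position.x;
  const double ey = target.position.y - reached.position.y;
  const double ez = target.position.z - reached.position.z;
  RCLCPP_INFO(node->get_logger(), "fraction = %.2f, position error = %.4f m",
              fraction, std::sqrt(ex * ex + ey * ey + ez * ez));
}
```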

Today I'm going to collect this data and make the analysis.
cc @Nicogene @pattacini

@martinaxgloria
Member

From a first preliminary analysis (thanks @pattacini for the help), it turned out that the fraction of the path achieved is somehow related to the position error between the ideal pose and the actual current pose.

MicrosoftTeams-image

However, we observed that even with a quite high success fraction (~80%), the position error can be equally quite high (~4 cm). For this reason, also in agreement with @Nicogene, we decided to give up on these fractions and rely only on the errors in both position and orientation.

@martinaxgloria
Member

martinaxgloria commented Nov 16, 2023

The data analysis regarding only the position error resulted in something like this:

position_error

The two plots represent the workspace in front of iCub, sampled with a spatial step of 5 cm in each direction, one with the eef (i.e. the right hand) oriented downward, the other with the eef oriented leftward (a reaching-like orientation). The colormap indicates the error between the desired and the current positions.
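
For reference, the sampling itself can be sketched as a simple grid scan in front of the robot with a fixed hand orientation (the box limits and the orientation value below are placeholders, not the ones actually used):

```cpp
#include <vector>
#include <geometry_msgs/msg/pose.hpp>

// Sketch: sample target poses on a regular 5 cm grid in front of the robot,
// keeping the hand orientation fixed (e.g. palm down, or palm leftward).
std::vector<geometry_msgs::msg::Pose> sample_workspace(const geometry_msgs::msg::Quaternion & hand_orientation)
{
  std::vector<geometry_msgs::msg::Pose> targets;
  const double step = 0.05;                   // 5 cm in each direction
  for (double x = 0.1; x <= 0.5; x += step)   // placeholder box limits [m]
    for (double y = -0.4; y <= 0.1; y += step)
      for (double z = -0.1; z <= 0.4; z += step)
      {
        geometry_msgs::msg::Pose p;
        p.position.x = x;
        p.position.y = y;
        p.position.z = z;
        p.orientation = hand_orientation;
        targets.push_back(p);
      }
  return targets;
}
```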

Regarding the orientation, instead, this afternoon I read the literature on how to compute the error angle with quaternions. Tomorrow I'm going to add more plots.
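
One common formulation (a sketch; not necessarily the exact one that will end up in the report) computes the angle of the relative rotation between the desired and current unit quaternions:

```cpp
#include <algorithm>
#include <cmath>
#include <geometry_msgs/msg/quaternion.hpp>

// Angle of the relative rotation between two unit quaternions:
// theta = 2 * acos(|<q_desired, q_current>|), in radians.
double orientation_error_angle(const geometry_msgs::msg::Quaternion & qd,
                               const geometry_msgs::msg::Quaternion & qc)
{
  double dot = qd.x * qc.x + qd.y * qc.y + qd.z * qc.z + qd.w * qc.w;
  dot = std::clamp(std::abs(dot), 0.0, 1.0);  // guard against numerical noise
  return 2.0 * std::acos(dot);
}
```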

cc @Nicogene @pattacini

@martinaxgloria
Member

Some updates on this. During the last few days, I collected more data in order to have a more detailed analysis of the performance of this controller. In particular, starting from this configuration file with the description of the TRAC-IK parameters, I tried to sample the workspace (different positions, but with the orientation of the right hand always the same, i.e. leftward) while changing the value of epsilon, whose default value is 1e-5. The resulting plots showed very little difference when this parameter is set to 1e-3, 1e-4 or 1e-5, but with 1e-6 it resulted in very high errors in both position and orientation.

Given that, I decided to stay with epsilon = 1e-4, and this time I changed the parameter called solve_type, which can be set to Distance (by default), Speed, Manipulation1 or Manipulation2 (here is the documentation about this). I obtained the worst results with Speed, while with Distance I often ran into:

[ERROR] |gazebo-yarp-plugins.plugins.GazeboYarpControlBoard| An hardware fault occurred on joint  0  torque too big! (  2047.52  )

For this reason, I increased the values of max_torques to very high, unrealistic values (like 100000.0), and now I'm repeating the acquisition.

I'll add more updates after this test.

cc @Nicogene @pattacini

@martinaxgloria
Member

After the max_torques workaround, the best solve_type is Distance, both in terms of position and of orientation. For this reason, I will take this configuration (solve_type = Distance and epsilon = 1e-4) as the best one, and I'll post here the final plots for both of the chosen right-hand orientations (leftward and downward, as per this comment).

cc @Nicogene @pattacini

@martinaxgloria
Member

Here are the final plots. In the end, I plotted the position and orientation error estimation for both the downward and leftward orientations of the right hand, with the TRAC-IK parameters tuned as per the previous comment.

final_plot

Please let me know if they are ok, so that I can start writing the report about this activity to be included in this repo.

cc @Nicogene @pattacini

@pattacini
Member

Fine with me!
Well done 🚀

@martinaxgloria
Member

martinaxgloria commented Nov 23, 2023

Thanks @pattacini!
If you want, you can find a first attempt at the report here. I created a new ros2 package on the feat/test_controller branch with the code and the report. I'm also going to add the MATLAB snippets I used to generate the plots. Any comment/suggestion is welcome.

cc @Nicogene

@Nicogene
Member Author

Fine also for me!
It would be super to have the report ready for tomorrow's review 💪🏻

@martinaxgloria
Member

I think most of it is done; maybe something should be adjusted/added. I'll do my best.

@martinaxgloria
Member

In the meantime, while I read the report again to check whether something is missing, I opened a PR with everything about these tests, including the report:

cc @Nicogene @pattacini

@pattacini linked a pull request on Nov 23, 2023 that will close this issue
@pattacini
Member

Thanks @pattacini! If you want, you can find a first attempt at the report here. I created a new ros2 package on the feat/test_controller branch with the code and the report. I'm also going to add the MATLAB snippets I used to generate the plots. Any comment/suggestion is welcome.

cc @Nicogene

Superb!
