
Contact forces are zero in Ant-v2/v3 #1541

Closed
pratikac opened this issue Jun 20, 2019 · 13 comments

@pratikac

The contact forces are all zero in the MuJoCo Ant-v2/v3 environments. If one runs

import gym
import numpy as np

e = gym.make('Ant-v3')
for _ in range(100):
    e.reset()
    for i in range(1000):
        x = e.step(e.action_space.sample())[0]   # observation
        print(np.linalg.norm(x[27:]))            # norm of the contact-force dimensions

the last 84 of the 111 state dimensions are always zero. These dimensions are the contact forces in ant_v3.py (https://github.com/openai/gym/blob/master/gym/envs/mujoco/ant_v3.py#L124). Since the ant is in contact with the ground, there should be some non-zero contact forces. Can you please clarify whether this is the expected behavior?
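For context, the index 27 in the snippet above follows from the Ant observation layout. A minimal sketch of the arithmetic, assuming the standard ant.xml model (one free joint plus 8 hinge joints, 14 bodies including the world body):

```python
# Breakdown of the 111-D Ant-v3 observation (arithmetic check only;
# body and joint counts assume the standard ant.xml model).
n_qpos = 15 - 2        # 7 free-joint + 8 hinge coordinates, minus the excluded x/y position
n_qvel = 6 + 8         # free-joint + hinge velocities
n_contact = 14 * 6     # cfrc_ext: one 6-D external wrench per body (incl. world)
print(n_qpos + n_qvel)              # 27: start index of the contact-force block
print(n_qpos + n_qvel + n_contact)  # 111: full observation size
```

So `x[27:]` is exactly the flattened cfrc_ext block, which is why its norm being zero indicates missing contact forces.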

@pratikac pratikac changed the title External forces are zero in Ant-v2/v3 Contact forces are zero in Ant-v2/v3 Jun 20, 2019
@AlexanderGri

The same goes for the Humanoid environment: contact forces are zero under MuJoCo 2.0. The environments (v2 and v3) use the cfrc_ext field of PyMjData, which is a proxy for the cfrc_ext field of the MuJoCo C structure _mjData. But the MuJoCo 2.0 changelog contains the following line:

The function mj_rnePostConstraint is no longer called by default. Instead, it is called only when the model contains force-related sensors which need the results of this function. The user can still call it directly in order to compute the mjData fields cacc, cfrc_ext, cfrc_int if desired.

Because of this, the Humanoid and Ant reward structure differs between MuJoCo 2.0 and earlier versions. Is this intentional?

@pshvechikov

I also found this very frustrating.
Algorithm benchmarks differ between versions of MuJoCo.
This can mislead researchers and invalidate comparisons to previously published papers.
So this seems like a severe bug.
@christopherhesse @pzhokhov, please shed some light on this issue as soon as you can.
Thanks in advance.

@christopherhesse
Contributor

Thanks for reporting this bug! It looks like this could have been introduced by #1401

Can you guys verify that using mujoco-py==1.50.1.68 fixes this issue?

@AlexanderGri

Thanks for your response!

Using mujoco-py<2.0 will fix this issue, since older versions enforce MuJoCo<2.0, which computes contact forces by default.

I see two other options that would avoid losing out on MuJoCo updates:

  • Use force-related sensors in the .xml models, as suggested in the changelog:

The function mj_rnePostConstraint is no longer called by default. Instead, it is called only when the model contains force-related sensors which need the results of this function. The user can still call it directly in order to compute the mjData fields cacc, cfrc_ext, cfrc_int if desired.

  • Alternatively, it may be possible to call mj_rnePostConstraint through the mujoco-py API without modifying the library. Here is an example of a dirty solution that requires a recompile.
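The second option might look something like the following. This is an untested sketch, not a verified fix; it assumes mujoco-py 2.0's auto-generated C API wrappers under mujoco_py.functions include mj_rnePostConstraint with a (model, data) signature:

```python
# Untested sketch: manually populate cfrc_ext under MuJoCo 2.0 by calling
# mj_rnePostConstraint after a step. Assumes mujoco-py >= 2.0 exposes the
# C API wrapper mujoco_py.functions.mj_rnePostConstraint(model, data).
import gym
import numpy as np
import mujoco_py

e = gym.make('Ant-v3')
e.reset()
e.step(e.action_space.sample())

# mj_rnePostConstraint fills the mjData fields cacc, cfrc_ext and cfrc_int
mujoco_py.functions.mj_rnePostConstraint(e.sim.model, e.sim.data)
print(np.abs(e.sim.data.cfrc_ext).max())  # expected non-zero while in contact
```

Note that the environment assembles its observation before any such call, so to fix the returned observation one would have to invoke mj_rnePostConstraint inside the environment's step (e.g. in a subclass), not afterwards from the outside.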

@christopherhesse
Contributor

I don't know if we are supporting two different versions of MuJoCo for any particular reason. I think I'd rather just stick to the old version if there is no compelling reason to upgrade. Any objections?

@AlexanderGri

I don't know if we are supporting two different versions of MuJoCo for any particular reason. I think I'd rather just stick to the old version if there is no compelling reason to upgrade. Any objections?

I see several possible issues with this decision:

  • MuJoCo<2.0 can't be used to simulate interactions with deformable objects. I saw at least two RL papers of this kind last month (one uses Bullet, the other MuJoCo 2.0).
  • MuJoCo 2.0 ships several new features in its simulator, */mujoco200/sample/simulate.cpp (though I'm not sure whether these are new functionality or just new code samples).
  • gym would restrict itself to the existing feature set and be unable to incorporate new ones (in effect, a generalization of the previous two points).

@christopherhesse
Contributor

christopherhesse commented Nov 8, 2019

gym doesn't depend on any particular MuJoCo version; only the gym mujoco environments do, so this should not affect other uses of gym + MuJoCo 2.0.

@christopherhesse
Contributor

This means you can't use the gym mujoco environments with MuJoCo 2.0 (unless you override the requirements), but they don't really work with MuJoCo 2.0 anyway. Users of pip install gym will not be affected; only users of pip install gym[mujoco] will.

Thanks for the bug report @pratikac!

@DanielTakeshi

@christopherhesse Sorry for the late question, but I just want to make sure I understand the implication of research that has used MuJoCo 2.0 for these environments (Ant-v2, Hopper-v2, HalfCheetah-v2, etc). Does this issue mean that results and learning curves in papers which use those -v2 environments cannot be considered trusted, reliable, or interesting?

@tarod13

tarod13 commented Nov 22, 2019

I'd say that those results were clearly obtained under different conditions, so comparisons with results that use previous versions of MuJoCo are pointless. However, comparisons between methods on the same environments should be as valid as any other. In fact, these environments could be harder, since less information is provided to the agent.

@DanielTakeshi

@tarod13 That's a good point. My main concern would be if we see a set of papers that show:

  • Under MuJoCo 1.5, with environments at v1, algorithm X is better than prior state of the art methods.
  • Under MuJoCo 2.0, with environments at v2, algorithm Y is better than prior state of the art methods, including algorithm X.

But, under MuJoCo 1.5, algorithm X is actually still better than Y.

I guess this might be unavoidable, though.

@brickerino

brickerino commented Nov 22, 2019

@tarod13 @DanielTakeshi
Guys, that's a different thing. Environment versions do not depend on the MuJoCo version, and vice versa. You can use v2 or v3 with MuJoCo 1.5, for example.

@tarod13

tarod13 commented Nov 22, 2019

@brickerino Yes, you are right. However, different versions of MuJoCo make the v2 and v3 environments provide different information. That's why I say they are not comparable when the MuJoCo version differs, and also why using MuJoCo 2.0 may result in effectively harder environments, even though they are nominally the same.
