Understanding BoundedConstraint class #62

Closed
nicholasprayogo opened this issue Aug 10, 2022 · 6 comments
Labels
question Further information is requested

Comments

@nicholasprayogo

nicholasprayogo commented Aug 10, 2022

I'm not sure if this is a bug or if I'm using the functionality wrong, so please let me know either way.

Here is how I'm trying to use BoundedConstraint, as specified in my task_config YAML file:

constraints:
  - constraint_form: bounded_constraint
    lower_bounds: [0, 0, 0] # should match state dim
    upper_bounds: [2.6, 0, 0] 
    constrained_variable: state
    active_dims: 0  # only position

When I use BoundedConstraint to constrain the state (I have 3 states but want to constrain only 1), I realized I needed to supply lower_bounds and upper_bounds with shapes equal to the number of states (e.g. 3 if I have 3 states). This is because self.dim is first defined as self.dim = env.state_dim, which is then used in self.constraint_filter = np.eye(self.dim)[active_dims] here, where the filter is supposed to extract only the active_dims entries of the state being constrained.

But when I do so, i.e. lower_bounds matches the shape of env.state_dim, the assertion here assert A.shape[1] == self.dim, '[ERROR] A has the wrong dimension!' fails.

This seems to fail because in Constraint, inside the code chunk below, self.dim is overwritten by len(active_dims) after constraint_filter is defined. In my case len(active_dims) is 1, while A already has shape (6, 3) because self.dim was set to env.state_dim (3) earlier.

if self.constrained_variable == ConstrainedVariableType.STATE:
    self.dim = env.state_dim
elif self.constrained_variable == ConstrainedVariableType.INPUT:
    self.dim = env.action_dim
elif self.constrained_variable == ConstrainedVariableType.INPUT_AND_STATE:
    self.dim = env.state_dim + env.action_dim
else:
    raise NotImplementedError('[ERROR] invalid constrained_variable (use STATE, INPUT or INPUT_AND_STATE).')
# Save the strictness attribute
self.strict = strict
# Only want to select specific dimensions, implemented via a filter matrix.
if active_dims is not None:
    if isinstance(active_dims, int):
        active_dims = [active_dims]
    assert isinstance(active_dims, (list, np.ndarray)), '[ERROR] active_dims is not a list/array.'
    assert (len(active_dims) <= self.dim), '[ERROR] more active_dim than constrainable self.dim'
    assert all(isinstance(n, int) for n in active_dims), '[ERROR] non-integer active_dim.'
    assert all((n < self.dim) for n in active_dims), '[ERROR] active_dim not stricly smaller than self.dim.'
    assert (len(active_dims) == len(set(active_dims))), '[ERROR] duplicates in active_dim'
    self.constraint_filter = np.eye(self.dim)[active_dims]
    self.dim = len(active_dims)
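To make the shape bookkeeping concrete, here is a stripped-down NumPy sketch of what I believe happens with my config above (variable names are mine, not the library's exact code):

import numpy as np

# 3 states, bounds supplied for all 3, but only dimension 0 is active.
state_dim = 3
lower_bounds = np.array([0.0, 0.0, 0.0])
upper_bounds = np.array([2.6, 0.0, 0.0])
active_dims = [0]

# BoundedConstraint builds A and b from the bounds.
dim = lower_bounds.shape[0]                   # 3
A = np.vstack((-np.eye(dim), np.eye(dim)))    # shape (6, 3)
b = np.hstack((-lower_bounds, upper_bounds))  # shape (6,)

# Constraint.__init__ builds the filter from state_dim, then overwrites dim.
constraint_filter = np.eye(state_dim)[active_dims]  # shape (1, 3)
dim = len(active_dims)                               # now 1

# The LinearConstraint assertion compares A's column count to the overwritten dim.
print(A.shape[1] == dim)  # False -> '[ERROR] A has the wrong dimension!'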

Could you please check if this was the issue?

Another attempt I took was to set lower_bounds to the same shape as active_dims, e.g. 1 (since I only want to constrain 1 state). That doesn't work either, because the matmul fails here in self.sym_func = lambda x: self.A @ self.constraint_filter @ x - self.b for LinearConstraint.
Full error: ValueError: matmul: Input operand 1 has a mismatch in its core dimension 0, with gufunc signature (n?,k),(k,m?)->(n?,m?) (size 1 is different from 3)

Summary:

What is the right shape for lower_bounds and upper_bounds?

  • When I set it equal to the shape of env.state_dim, I get an error from assert A.shape[1] == self.dim, '[ERROR] A has the wrong dimension!' because self.dim = len(active_dims).
  • When I set it equal to len(active_dims), it throws ValueError: matmul: Input operand 1 has a mismatch in its core dimension 0, with gufunc signature (n?,k),(k,m?)->(n?,m?) (size 1 is different from 3) because of self.sym_func = lambda x: self.A @ self.constraint_filter @ x - self.b.

I would love to know if this is indeed a bug or if I'm using it wrong. I'll also keep trying in case I missed anything.

Thank you!

@adamhall @JacopoPan @Justin-Yuan

nicholasprayogo changed the title from "BoundedConstraint seems to not work" to "Understanding BoundedConstraint class" Aug 10, 2022
@adamhall
Contributor

Hi Nicholas,

Thanks for the comment. You actually have to set lower_bounds and upper_bounds to have the same dimension as active_dims. For example, the following works as expected for me

  constraints:
    - constraint_form: bounded_constraint
      lower_bounds: [ 0 ] # should match active_dims
      upper_bounds: [ 2.6 ]
      constrained_variable: state
      active_dims: 0  # only position

as self.dim = len(active_dims) is set here. Can you try this and see if it works? If not, can you post a minimal working example of your bug so I can recreate it? An alternative option is to set the dimensions you do not want to constrain to large values:

  constraints:
    - constraint_form: bounded_constraint
      lower_bounds: [ 0, -100, -100 ]
      upper_bounds: [ 2.6, 100, 100]
      constrained_variable: state

On another note, why do you only have 3 states? Cartpole has 4, and the quadrotor has either 2, 6, or 12, depending on which quad you run.

@JacopoPan
Member

JacopoPan commented Aug 11, 2022

@adamhall

If it is missing, can you then create a patch/PR with a dimension check on lower_bounds, upper_bounds, and active_dims, raising an exception with an error message?

It could also be something to mention in the docstrings of BoundedConstraint's constructor (and of any other similar class):

Args:
    env (BenchmarkEnv): The environment to constrain.
    lower_bounds (np.array or list): Lower bound of constraint.
    upper_bounds (np.array or list): Upper bound of constraint.
    constrained_variable (ConstrainedVariableType): Type of constraint.
    strict (optional, bool): Whether the constraint is also violated when exactly equal to its threshold.
    active_dims (list or int): List specifying which dimensions the constraint is active for.
    tolerance (float): The distance at which is_almost_active(env) triggers.

(I think Nicholas has a different state vector length because he wants to work on creating a new environment.)
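For illustration, the dimension check suggested above might look roughly like the following standalone sketch (a hypothetical helper, not the actual patch):

import numpy as np

def check_bounds_dims(lower_bounds, upper_bounds, active_dims, default_dim):
    # Hypothetical helper sketching the requested check (names are illustrative).
    if active_dims is None:
        expected = default_dim  # state_dim, action_dim, or their sum
    elif isinstance(active_dims, int):
        expected = 1
    else:
        expected = len(active_dims)
    assert len(np.atleast_1d(lower_bounds)) == expected, \
        '[ERROR] lower_bounds has the wrong dimension.'
    assert len(np.atleast_1d(upper_bounds)) == expected, \
        '[ERROR] upper_bounds has the wrong dimension.'

Called early in BoundedConstraint's constructor, something along these lines would turn the silent shape mismatch into an explicit error message.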

@adamhall
Contributor

@JacopoPan Yes for sure, but I just want to make sure this is actually the issue first, so I'll wait until Nicholas gets it working.

JacopoPan added the question (Further information is requested) label Aug 12, 2022
@nicholasprayogo
Author

nicholasprayogo commented Aug 15, 2022

Hi @adamhall

Thanks a lot for the detailed response and explanation, and for checking that it works on your end.

Yes, I actually tried that as well. As I mentioned in the post above, when I set lower_bounds to the same dimension as active_dims, I get the following error:

ValueError: matmul: Input operand 1 has a mismatch in its core dimension 0, with gufunc signature (n?,k),(k,m?)->(n?,m?) (size 1 is different from 3)

However, I did a sanity check with the following code, and it does work:

import numpy as np

lower_bounds = np.array([0.2])
upper_bounds = np.array([5])

dim = lower_bounds.shape[0]

A = np.vstack((-np.eye(dim), np.eye(dim)))
b = np.hstack((-lower_bounds, upper_bounds))

active_dims = [0]

x = np.array([3, 0, 0])  # first state set to 3 so the result is non-zero

dim_state = x.shape[0]
constraint_filter = np.eye(dim_state)[active_dims]

print(A @ constraint_filter @ x)
# array([-3.,  3.])

After further debugging, I realized that, since I've been experimenting with classical control for my own environment, I hadn't been paying attention to my _set_observation_space() function, which I was mainly using when testing RL methods.

I just found out that env.state_dim is derived from state_space (or, if that is missing, from observation_space), and not from my env.state:

self.action_dim = self.action_space.shape[0]
self.obs_dim = self.observation_space.shape[0]
if hasattr(self, "state_space"):
    self.state_dim = self.state_space.shape[0]
else:
    self.state_dim = self.obs_dim

so when I forgot to update my observation_space accordingly, env.state_dim became incorrect.

I fixed the observation_space, and now the constraints are returned properly.

Thank you so much for your clarification.

It was a silly mistake on my part to forget to keep my observation_space up to date while experimenting with different state dimensions 😄, but perhaps this realization will come in handy for other users implementing their own environments (based on BenchmarkEnv) and using BoundedConstraint: lower_bounds and upper_bounds should have the same shape as active_dims, and the spaces reported by the environment should match the actual state.
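In case it helps other people building their own environments, here is a rough sketch of what I mean; the class name and numbers are illustrative, and only the space attributes matter, since they are what BenchmarkEnv reads (as in the snippet above):

import numpy as np
from gym import spaces

class MyCustomEnv:  # in practice this would build on BenchmarkEnv
    def __init__(self):
        self.state = np.zeros(3)  # 3-dimensional state
        # state_dim is derived from state_space (or observation_space if
        # state_space is missing), so keep them in sync with self.state.
        self.observation_space = spaces.Box(low=-np.inf, high=np.inf,
                                            shape=self.state.shape,
                                            dtype=np.float64)
        self.state_space = self.observation_space
        self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(1,),
                                       dtype=np.float64)

With the spaces reporting a 3-dimensional state, the config from the answer above (lower_bounds: [0], upper_bounds: [2.6], active_dims: 0) lines up: A has shape (2, 1), constraint_filter has shape (1, 3), and A @ constraint_filter @ x evaluates cleanly for a 3-dimensional x.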

@JacopoPan
Member

can you then create a patch/PR with a dimension check on lower_bounds, upper_bounds, and active_dims, raising an exception with an error message?

@adamhall PR patch to main and/or dev-experiment-class?

adamhall added a commit to adamhall/safe-control-gym that referenced this issue Sep 3, 2022
@adamhall
Contributor

adamhall commented Sep 5, 2022

Some warnings and checks for the dimensions of the bounds and active_dims were added as part of PR #88.

adamhall closed this as completed Sep 5, 2022