Hi, when I run the command "python run.py --config-path config_halfcheetah_sappo_convex.json",
it uses the auto_LiRPA package, which I have installed.
It fails with the following error:
File "atla/src/policy_gradients/steps.py", line 796, in robust_ppo_step
stdev=stdev).mean()
File "atla/src/policy_gradients/convex_relaxation.py", line 100, in get_kl_bound
ilb, iub = model.compute_bounds(inputs, IBP=True, C=None, method=None, bound_lower=True, bound_upper=True)
File "anaconda3/lib/python3.7/site-packages/auto_LiRPA-0.1-py3.7.egg/auto_LiRPA/bound_general.py", line 592, in compute_bounds
lower, upper = self._IBP_general(node=final, C=C)
File "anaconda3/lib/python3.7/site-packages/auto_LiRPA-0.1-py3.7.egg/auto_LiRPA/bound_general.py", line 763, in _IBP_general
node.interval = node.interval_propagate(*inp, C=C)
File "anaconda3/lib/python3.7/site-packages/auto_LiRPA-0.1-py3.7.egg/auto_LiRPA/bound_ops.py", line 694, in interval_propagate
center, deviation = BoundLinear._propogate_Linf(h_L, h_U, w)
File "anaconda3/lib/python3.7/site-packages/auto_LiRPA-0.1-py3.7.egg/auto_LiRPA/bound_ops.py", line 647, in _propogate_Linf
center = torch.bmm(mid.unsqueeze(1), w.transpose(-1, -2)).squeeze(1)
RuntimeError: Expected tensor to have size 64 at dimension 0, but got size 1 for argument #2 'batch2' (while checking arguments for bmm)
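For what it's worth, the error says torch.bmm received two tensors whose batch dimensions disagree (64 vs 1): bmm requires identical batch sizes and does not broadcast, whereas matmul-style multiplication broadcasts a size-1 batch dimension. Here is a minimal NumPy sketch of that shape rule (the shapes 64, 1, 16, 8 are illustrative assumptions, not the actual network dimensions):

```python
import numpy as np

# mid after unsqueeze(1): a batch of 64 row vectors, shape (64, 1, 16)
mid = np.ones((64, 1, 16))
# w transposed: one shared weight matrix carrying a size-1 batch dim, shape (1, 16, 8)
w_t = np.ones((1, 16, 8))

# torch.bmm would reject this pairing, since argument 2 has batch size 1
# instead of 64 -- exactly the RuntimeError above. Broadcasting matmul
# (np.matmul here, torch.matmul in PyTorch) expands the size-1 batch dim:
center = np.matmul(mid, w_t)
print(center.shape)  # (64, 1, 8)
```

So one plausible direction is that the weight tensor reaching `_propogate_Linf` is missing its per-example batch dimension (or the code should use a broadcasting multiply rather than bmm), though the real fix depends on how auto_LiRPA builds that weight.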
Could you give me some ideas on how to fix this auto_LiRPA error?