Error encountered when running multitask Bayesian optimization #111

Closed
wuzheng-sjtu opened this issue Apr 8, 2021 · 3 comments

wuzheng-sjtu commented Apr 8, 2021

  • Operating System: Ubuntu 16.04
  • Python version: 3.7
  • summit version used: 0.8.0

Description

Hi @marcosfelt, thanks for sharing the awesome work!
I encountered an error while trying to apply multitask Bayesian optimization to my own task.

When I call the suggest_experiments function, it raises the following error:

  File "obj_funcs/mtbo_transfer_summit.py", line 184, in <module>
    mtbo_transfer(100, pretrain_data)
  File "obj_funcs/mtbo_transfer_summit.py", line 161, in mtbo_transfer
    result = mtbo.suggest_experiments(num_experiments=1, prev_res=prev_res)
  File "/home/bizon/anaconda3/envs/primitive/lib/python3.7/site-packages/summit/strategies/multitask.py", line 138, in suggest_experiments
    fit_gpytorch_model(mll)
  File "/home/bizon/anaconda3/envs/primitive/lib/python3.7/site-packages/botorch/fit.py", line 126, in fit_gpytorch_model
    mll, _ = optimizer(mll, track_iterations=False, **kwargs)
  File "/home/bizon/anaconda3/envs/primitive/lib/python3.7/site-packages/botorch/optim/fit.py", line 247, in fit_gpytorch_scipy
    callback=cb,
  File "/home/bizon/anaconda3/envs/primitive/lib/python3.7/site-packages/scipy/optimize/_minimize.py", line 620, in minimize
    callback=callback, **options)
  File "/home/bizon/anaconda3/envs/primitive/lib/python3.7/site-packages/scipy/optimize/lbfgsb.py", line 308, in _minimize_lbfgsb
    finite_diff_rel_step=finite_diff_rel_step)
  File "/home/bizon/anaconda3/envs/primitive/lib/python3.7/site-packages/scipy/optimize/optimize.py", line 262, in _prepare_scalar_function
    finite_diff_rel_step, bounds, epsilon=epsilon)
  File "/home/bizon/anaconda3/envs/primitive/lib/python3.7/site-packages/scipy/optimize/_differentiable_functions.py", line 136, in __init__
    self._update_fun()
  File "/home/bizon/anaconda3/envs/primitive/lib/python3.7/site-packages/scipy/optimize/_differentiable_functions.py", line 226, in _update_fun
    self._update_fun_impl()
  File "/home/bizon/anaconda3/envs/primitive/lib/python3.7/site-packages/scipy/optimize/_differentiable_functions.py", line 133, in update_fun
    self.f = fun_wrapped(self.x)
  File "/home/bizon/anaconda3/envs/primitive/lib/python3.7/site-packages/scipy/optimize/_differentiable_functions.py", line 130, in fun_wrapped
    return fun(x, *args)
  File "/home/bizon/anaconda3/envs/primitive/lib/python3.7/site-packages/scipy/optimize/optimize.py", line 74, in __call__
    self._compute_if_needed(x, *args)
  File "/home/bizon/anaconda3/envs/primitive/lib/python3.7/site-packages/scipy/optimize/optimize.py", line 68, in _compute_if_needed
    fg = self.fun(x, *args)
  File "/home/bizon/anaconda3/envs/primitive/lib/python3.7/site-packages/botorch/optim/utils.py", line 219, in _scipy_objective_and_grad
    raise e  # pragma: nocover
  File "/home/bizon/anaconda3/envs/primitive/lib/python3.7/site-packages/botorch/optim/utils.py", line 212, in _scipy_objective_and_grad
    output = mll.model(*train_inputs)
  File "/home/bizon/anaconda3/envs/primitive/lib/python3.7/site-packages/gpytorch/models/exact_gp.py", line 257, in __call__
    res = super().__call__(*inputs, **kwargs)
  File "/home/bizon/anaconda3/envs/primitive/lib/python3.7/site-packages/gpytorch/module.py", line 28, in __call__
    outputs = self.forward(*inputs, **kwargs)
  File "/home/bizon/anaconda3/envs/primitive/lib/python3.7/site-packages/botorch/models/multitask.py", line 167, in forward
    covar = covar_x.mul(covar_i)
  File "/home/bizon/anaconda3/envs/primitive/lib/python3.7/site-packages/gpytorch/lazy/lazy_tensor.py", line 1162, in mul
    return self._mul_matrix(lazify(other))
  File "/home/bizon/anaconda3/envs/primitive/lib/python3.7/site-packages/gpytorch/lazy/lazy_tensor.py", line 506, in _mul_matrix
    return NonLazyTensor(self.evaluate() * other.evaluate())
  File "/home/bizon/anaconda3/envs/primitive/lib/python3.7/site-packages/gpytorch/utils/memoize.py", line 59, in g
    return _add_to_cache(self, cache_name, method(self, *args, **kwargs), *args, kwargs_pkl=kwargs_pkl)
  File "/home/bizon/anaconda3/envs/primitive/lib/python3.7/site-packages/gpytorch/lazy/lazy_tensor.py", line 906, in evaluate
    res = self.matmul(eye)
  File "/home/bizon/anaconda3/envs/primitive/lib/python3.7/site-packages/gpytorch/lazy/interpolated_lazy_tensor.py", line 402, in matmul
    right_interp_res = left_t_interp(self.right_interp_indices, self.right_interp_values, tensor, base_size)
  File "/home/bizon/anaconda3/envs/primitive/lib/python3.7/site-packages/gpytorch/utils/interpolation.py", line 230, in left_t_interp
    summing_matrix = cls(summing_matrix_indices, summing_matrix_values, size)
RuntimeError: size is inconsistent with indices: for dim 1, size is 1 but found index 1

Here is how I call the function:

from summit.strategies import MTBO

mtbo = MTBO(
    domain=domain,
    pretraining_data=pretraining_data,
    task=1,
)

result = mtbo.suggest_experiments(num_experiments=1, prev_res=prev_res)

Both pretraining_data and prev_res are wrapped in summit's DataSet format (a construction sketch follows the preview below).
Here is what the concatenation of pretraining_data and prev_res looks like:

new data: 
NAME approach_stiffness_trans approach_stiffness_ang  ... strategy     task
TYPE                     DATA                   DATA  ... METADATA METADATA
0                   37.500000             112.500000  ...      LHS        1
1                  112.500000              37.500000  ...      LHS        1
0                  109.855384             146.133033  ...     MTBO        1
1                   17.365006              95.320634  ...     MTBO        1
2                   88.126421              49.029255  ...     MTBO        1
..                        ...                    ...  ...      ...      ...
495                  1.076072             137.851873  ...     MTBO        1
496                 34.013880             108.785283  ...     MTBO        1
497                 30.227277             112.787455  ...     MTBO        1
498                 79.603186             126.381992  ...     MTBO        1
499                 54.544665             103.928718  ...     MTBO        1
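
For reference, a minimal sketch of how a table like this could be wrapped into a summit DataSet. The column values are only illustrative, and the DataSet.from_df helper with a metadata_columns argument is assumed from summit 0.8.0; check your version's API:

import pandas as pd
from summit.utils.dataset import DataSet

# Illustrative subset of the columns shown above; the real table has more DATA columns.
df = pd.DataFrame(
    {
        "approach_stiffness_trans": [37.5, 112.5],
        "approach_stiffness_ang": [112.5, 37.5],
        "strategy": ["LHS", "LHS"],
        "task": [1, 1],
    }
)

# Columns listed in metadata_columns become METADATA columns (as in the preview);
# the remaining columns become DATA columns.
pretraining_data = DataSet.from_df(df, metadata_columns=["strategy", "task"])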

I'm wondering if that's the correct way to construct the previous results and pretraining data from other tasks.
Could you share some insights on how to debug this? Thank you very much!

wuzheng-sjtu (Author) commented

After digging for a while, I found similar problems in facebook/Ax#433 and facebook/Ax#183. It seems this is a bug introduced in gpytorch 1.3.0; however, upgrading to 1.4.0 still does not resolve the error on my side.

marcosfelt added and then removed the bug label on Apr 9, 2021

marcosfelt (Member) commented

Hi Zheng, what if you change the task in the pretraining data to be 0? The pretraining data should have a different task than the optimization task.
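
A minimal sketch of that fix, reusing the variable names from the issue and assuming summit's DataSet exposes the task column as a ("task", "METADATA") tuple, as shown in the preview above. Relabeling the pretraining rows as task 0 is consistent with the traceback, where the task covariance ends up with size 1 while an index of 1 is requested:

# Relabel all pretraining experiments as task 0 (assumption: the task column is
# addressable as a ("task", "METADATA") tuple, as in the data preview above).
pretraining_data[("task", "METADATA")] = 0

# Optimize task 1, which is now distinct from the pretraining task.
mtbo = MTBO(
    domain=domain,
    pretraining_data=pretraining_data,
    task=1,
)
result = mtbo.suggest_experiments(num_experiments=1, prev_res=prev_res)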

wuzheng-sjtu (Author) commented

Hi @marcosfelt, the fix works! Thank you so much for your advice.
