
Issue with tolerance for floating point and its relevance when using log_scale = True #2183

Closed
VMLC-PV opened this issue Feb 6, 2024 · 7 comments


VMLC-PV commented Feb 6, 2024

Hi,

I was playing around today with some parameters with very low values, typically ranging from 1e-14 to 1e-19.
I used to always pass Ax the log-transformed values, so there were no issues. Now, however, I have a slightly different problem that forces me to add a constraint, so I can no longer use the log-transformed values. When I pass the true values to Ax, it raises the following error:

UserInputError: Parameter range (9.9999e-15) is very small and likely to cause numerical errors. Consider reparameterizing your problem by scaling the parameter.

I understand where that comes from, but I also find it very restrictive, especially since I would prefer to work with the log of these values anyway.
To illustrate, I wrote a small example of what my code looks like:

params1 = {'name': 'params1', 'type': 'range', 'bounds': [5e-10, 1e-6], 'value_type': 'float', 'log_scale': True}
params2 = {'name': 'params2', 'type': 'range', 'bounds': [5e-10, 1e-6], 'value_type': 'float', 'log_scale': True}
params3 = {'name': 'params3', 'type': 'range', 'bounds': [1e-19, 1e-14], 'value_type': 'float', 'log_scale': True}

# Constraint
A = ...  # some float value (elided)
parameter_constraints = ['params2 <= 4*params1', f'params3 <= {A}*(params1 + params2)']

Because of the second constraint, I cannot just give the log value to Ax.

My suggestion is the following: when log_scale is used, couldn't we just give the log values to the surrogate and only exponentiate them back when printing the results of the optimization and evaluating the constraints?
Wouldn't it also make more sense for the surrogate to actually be trained on the log values when the log_scale option is used?

Thanks for your help,
Vincent

@VMLC-PV VMLC-PV changed the title Issue with tolerance for floating point and it's relevance when using log_scale = True Issue with tolerance for floating point and its relevance when using log_scale = True Feb 6, 2024
@esantorella esantorella self-assigned this Feb 6, 2024
esantorella (Contributor) commented Feb 6, 2024

Hi! I see how it seems imperfect that the validation is on the parameters in their original scale rather than the log scale, since all the modeling happens on a log scale. However, we have this error because there sometimes are operations on parameters in their original scale. For example, Ax may suggest a candidate that, when serialized or saved, is converted from double to single precision and back. On such a small scale, precision loss can cause the candidate to be outside the bounds.

As a workaround, would you be able to log the parameters and bounds before passing them to Ax and not use the log transform?
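
A minimal sketch of that workaround (not from the thread), assuming the dict-based search-space format from the example above; the name params1_log and the recovery step are illustrative:

import math

# Hypothetical log-space reparameterization of params1 from the example above:
# optimize x = log10(params1) directly and turn the built-in log transform off.
params1_log = {
    'name': 'params1_log',
    'type': 'range',
    'bounds': [math.log10(5e-10), math.log10(1e-6)],  # ~[-9.30, -6.0]
    'value_type': 'float',
    'log_scale': False,
}

# In the evaluation function, recover the physical value:
# params1 = 10 ** parameterization['params1_log']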

VMLC-PV (Author) commented Feb 6, 2024

No, because of the constraint I need to use. It's a physics problem, so I cannot really get away from it.
If I take the log values of the parameters and bounds (which I usually do), then the constraint does not work with Ax, because it would need to be:

10**params3 <= A*(10**params1 + 10**params2)

which, unless I am wrong, does not work, right?
As for the argument about operations on the parameters in their original scale: if the code first converts things from log back to the original scale, this should not be a problem, no?
Basically, only the surrogate model training would see the log-scaled values; everything else still gets the real-scale ones.
I also feel like most surrogates would perform better when given the log values.

esantorella (Contributor) commented:

Oh yes, you're right, nonlinear parameter constraints aren't supported. There's more discussion of this at #153 and in several other issues. Perhaps you could multiply the parameters by large constants before passing them to Ax?

Regarding operations on parameters in their original scale, it's true that modeling only happens in log space. The concern is Ax operations that aren't modeling, which often happen on the original scale. For example, serialization with Pandas in Ax can force float64 data to float32.

VMLC-PV (Author) commented Feb 6, 2024

I thought about multiplying by a large constant; I did not mention it because I was hoping for a more elegant, less 'hacky' solution. But I understand the issue.
It would be great to have support for non-linear constraints in Ax too.
For the time being, I'll make it work the hacky way.
If you think nothing more will happen on this topic, feel free to close the issue. Otherwise, I'll wait a while (in case someone else comes up with something) and then close it myself.
Thanks for the help!

esantorella (Contributor) commented:

Another solution would be to use nonlinear inequality constraints in BoTorch and not use Ax. See here on how to set them up and documentation at botorch.org for using BoTorch generally. But that would be considerably more effort than the hacky "multiply by a constant" solution.
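
For illustration, a minimal sketch of what that BoTorch route might look like for the constraint in this thread. This is an assumption-laden sketch, not code from the thread: it assumes a recent BoTorch API (older releases took bare callables instead of (callable, is_intrapoint) tuples for nonlinear_inequality_constraints), and the toy data, bounds, and value of A are illustrative.

import torch
from botorch.acquisition import ExpectedImprovement
from botorch.fit import fit_gpytorch_mll
from botorch.models import SingleTaskGP
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

A = 1.0  # placeholder for the physical constant from the thread

# log10-space bounds for the three parameters from the example above
bounds = torch.tensor([[-9.3, -9.3, -19.0],
                       [-6.0, -6.0, -14.0]], dtype=torch.double)

# Toy training data; replace with real observations of the objective.
train_x = bounds[0] + (bounds[1] - bounds[0]) * torch.rand(8, 3, dtype=torch.double)
train_y = -(train_x ** 2).sum(dim=-1, keepdim=True)

gp = SingleTaskGP(train_x, train_y)
fit_gpytorch_mll(ExactMarginalLogLikelihood(gp.likelihood, gp))
acqf = ExpectedImprovement(gp, best_f=train_y.max())

def constraint(x: torch.Tensor) -> torch.Tensor:
    # Feasible iff constraint(x) >= 0, i.e. 10**x3 <= A*(10**x1 + 10**x2),
    # with x holding the log10-scaled parameters (x1, x2, x3).
    return A * (10 ** x[..., 0] + 10 ** x[..., 1]) - 10 ** x[..., 2]

# Nonlinear constraints require explicit starting points (num_restarts x q x d);
# with these bounds and A > 0, random points are feasible by construction.
init_x = bounds[0] + (bounds[1] - bounds[0]) * torch.rand(5, 1, 3, dtype=torch.double)

candidate, acq_value = optimize_acqf(
    acq_function=acqf,
    bounds=bounds,
    q=1,
    num_restarts=5,
    nonlinear_inequality_constraints=[(constraint, True)],
    batch_initial_conditions=init_x,
)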

I'm going to close this since we have other issues discussing nonlinear inequality constraints in Ax (or rather, their absence), but please feel free to reopen with any additional questions.

VMLC-PV (Author) commented Feb 12, 2024

Since probably nothing more will happen here, I'll close this issue.
Thanks again @esantorella for the help!

@VMLC-PV VMLC-PV closed this as completed Feb 12, 2024
VMLC-PV (Author) commented Feb 23, 2024

Just for completeness: the trick suggested above by @esantorella works just fine.

> multiply the parameters by large constants before passing them to Ax
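
For completeness, a minimal sketch of that rescaling (not code from the thread); the scale factors c12 and c3 and the value of A are illustrative, and the constraint constant is rescaled accordingly:

# Hypothetical rescaled search space: multiply each parameter by a constant so
# that the ranges are no longer tiny. The scale factors are illustrative choices.
c12 = 1e9   # scale for params1 and params2: [5e-10, 1e-6] -> [0.5, 1e3]
c3 = 1e18   # scale for params3: [1e-19, 1e-14] -> [0.1, 1e4]
A = 1.0     # placeholder for the physical constant

params1 = {'name': 'params1', 'type': 'range', 'bounds': [0.5, 1e3], 'value_type': 'float', 'log_scale': True}
params2 = {'name': 'params2', 'type': 'range', 'bounds': [0.5, 1e3], 'value_type': 'float', 'log_scale': True}
params3 = {'name': 'params3', 'type': 'range', 'bounds': [0.1, 1e4], 'value_type': 'float', 'log_scale': True}

# The constraints stay linear under per-parameter scaling:
# params3/c3 <= A*(params1/c12 + params2/c12)  =>  params3 <= (A*c3/c12)*(params1 + params2)
k = A * c3 / c12
parameter_constraints = [
    'params2 <= 4*params1',  # unchanged: both sides share the same scale c12
    f'params3 <= {k}*params1 + {k}*params2',
]

# In the evaluation function, divide by the scale factors to recover the
# physical values, e.g. p1 = parameterization['params1'] / c12.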
