
Test master #420

Closed · wants to merge 1 commit

Conversation

ChrisRackauckas (Member)

No description provided.

@ChrisRackauckas (Member, Author)

@vikram-s-narayan can I get some help trying to figure out what regressed here?

@vikram-s-narayan (Contributor)

> @vikram-s-narayan can I get some help trying to figure out what regressed here?

The failing tests can be traced back to QuasiMonteCarlo.jl. This gist helps reproduce the results and may help pinpoint the issue for the folks at QuasiMonteCarlo.

I have also raised an issue with QuasiMonteCarlo.jl.

@vikram-s-narayan (Contributor) commented on Dec 13, 2022

QuasiMonteCarlo@0.2.19 includes the lower bound as the first sample when we use GoldenSample(). This was not the case with QuasiMonteCarlo@0.2.16, which maintained an offset. For example, if we do:


using QuasiMonteCarlo
lb = [0.125, 5.0, 5.0]
ub = [1.0, 10.0, 10.0]
n_test = 5
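# sample returns a 3×5 matrix (one point per column); transpose to show one point per row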
transpose(QuasiMonteCarlo.sample(n_test, lb, ub, GoldenSample()))

We get the following results:

With QuasiMonteCarlo@0.2.16:

 0.192681  5.96681  6.29918
 0.697863  9.43361  5.09836
 0.328044  7.90042  8.89754
 0.833226  6.36723  7.69671
 0.463407  9.83403  6.49589

With QuasiMonteCarlo@0.2.19 (note that the first record is the same as the lower bound):

 0.125     5.0      5.0
 0.841776  8.35522  7.7485
 0.683552  6.71044  5.497
 0.525328  5.06565  8.24551
 0.367104  8.42087  5.99401

This was causing the test failure, as I had used GoldenSample for generating the test dataset. The training dataset used SobolSample, which does not include the lower bound. So essentially, the model was being asked to evaluate a point outside the region covered by its training data, and hence the results were sub-optimal.

If we remove the first datapoint, the results are much better, as can be seen from this gist: the root mean square error falls from ~200 to ~50.
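As a minimal sketch of that workaround (assuming the test design is built as in the snippet above; x_test is an illustrative name, not the actual test code), one could drop the first column whenever it coincides with the lower bound:

using QuasiMonteCarlo

lb = [0.125, 5.0, 5.0]
ub = [1.0, 10.0, 10.0]
n_test = 5

# GoldenSample() in QuasiMonteCarlo@0.2.19 places the lower bound in the first column
x_test = QuasiMonteCarlo.sample(n_test, lb, ub, GoldenSample())

# drop that point so the surrogate is never evaluated outside the region its
# Sobol-based training set covers
if isapprox(x_test[:, 1], lb)
    x_test = x_test[:, 2:end]
end

Sampling n_test + 1 points and discarding the first would keep the test-set size unchanged.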

The issue raised earlier on QuasiMonteCarlo.jl has also been updated with this new information. Hoping that this test will automatically pass once we fix QuasiMonteCarlo.jl :)

@ChrisRackauckas ChrisRackauckas deleted the ChrisRackauckas-patch-2 branch September 22, 2023 14:43