
Added a new best-fit plotting function #18

Merged — 8 commits into ICSM:main, Oct 19, 2022

Conversation

KathrynJones1 (Contributor):
This adds a plotting function that plots the best-fit GP model, along with the confidence region and the raw data. I've tried to adjust it to the new naming scheme, so see what you think :)
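For context on what the function draws: the "confidence region" around a GP mean is the pointwise band mean ± 2 standard deviations (this is what GPyTorch's `confidence_region()` returns). A minimal sketch in plain torch — the function name `confidence_region` here is illustrative, not taken from the PR:

```python
import torch

def confidence_region(mean, stddev, n_sigma=2.0):
    """Pointwise lower/upper bounds of the confidence band around a GP mean.

    mean, stddev: 1-D tensors of the predictive mean and standard deviation.
    n_sigma: half-width of the band in standard deviations (2 by default).
    """
    lower = mean - n_sigma * stddev
    upper = mean + n_sigma * stddev
    return lower, upper
```

These bounds are what get passed to `ax.fill_between` further down in the diff.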

@pscicluna pscicluna (Collaborator) left a comment:

Looks very good, just a few small comments!

y_raw = ydata()

# creating array of 10000 test points across the range of the data
x_fine = torch.linspace(x_raw.min(), x_raw.max(), 10000)
pscicluna (Collaborator):

I think x_fine should be defined in terms of self._xdata_transformed instead of self.xdata_raw; this would also have some knock-on effects further down.

Suggested change:
- x_fine = torch.linspace(x_raw.min(), x_raw.max(), 10000)
+ x_fine = torch.linspace(self._xdata_transformed.min(), self._xdata_transformed.max(), 10000)

ax.plot(x_raw.numpy(), y_raw.numpy(), 'k*')

# Plot predictive GP mean as blue line
ax.plot(x_fine.numpy(), observed_pred.mean.numpy(), 'b')
pscicluna (Collaborator):

This is the next place where the transformation matters, as x_fine is now defined in the transformed system. I think this should look something like:

Suggested change:
- ax.plot(x_fine.numpy(), observed_pred.mean.numpy(), 'b')
+ ax.plot(self.x_transform.inverse(x_fine).numpy(), observed_pred.mean.numpy(), 'b')

ax.plot(x_fine.numpy(), observed_pred.mean.numpy(), 'b')

# Shade between the lower and upper confidence bounds
ax.fill_between(x_fine.numpy(), lower.numpy(), upper.numpy(), alpha=0.5)
pscicluna (Collaborator):

This line would also need an inverse transform for x_fine, just like line 259.
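Taken together, the three suggestions amount to one rule: build the test grid in the *transformed* coordinate system, then map it back through the inverse transform everywhere it is plotted (mean line and confidence band alike). A minimal sketch of the corrected flow — `AffineTransform`, `plot_best_fit`, and the `predict` callback are hypothetical stand-ins for the model's `x_transform`, plotting method, and GP posterior, which are not shown in the diff:

```python
import torch
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs without a display
import matplotlib.pyplot as plt

class AffineTransform:
    """Toy stand-in for the model's x_transform (assumed affine here)."""
    def __init__(self, scale, shift):
        self.scale, self.shift = scale, shift
    def __call__(self, x):
        return (x - self.shift) / self.scale
    def inverse(self, x):
        return x * self.scale + self.shift

def plot_best_fit(x_raw, y_raw, x_transform, predict, n_test=10000):
    """Plot raw data, GP mean, and confidence band, all in raw coordinates."""
    x_t = x_transform(x_raw)
    # Test points live in the *transformed* system, matching what the GP saw...
    x_fine = torch.linspace(float(x_t.min()), float(x_t.max()), n_test)
    mean, lower, upper = predict(x_fine)
    # ...and are mapped back through the inverse transform for plotting.
    x_plot = x_transform.inverse(x_fine)
    fig, ax = plt.subplots()
    ax.plot(x_raw.numpy(), y_raw.numpy(), 'k*')            # raw data
    ax.plot(x_plot.numpy(), mean.numpy(), 'b')             # predictive mean
    ax.fill_between(x_plot.numpy(), lower.numpy(), upper.numpy(), alpha=0.5)
    return fig, ax
```

Keeping the transform logic in one place like this means the mean line and the shaded band cannot drift apart onto different coordinate systems, which is the failure mode the review is guarding against.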

setup.py — two review threads, both outdated and resolved.
KathrynJones1 and others added 5 commits October 19, 2022 16:16
Co-authored-by: Peter Scicluna <pscicluna@users.noreply.github.com>
@pscicluna pscicluna merged commit 5e67fee into ICSM:main Oct 19, 2022