
Does it make sense to optimize Gaussian Process hyperparameters during active learning? #107

@tumble-weed

Description


Hi,

In the active learning for regression example, we use Gaussian processes. The scikit-learn version seems to keep its length-scale and noise parameters static (maybe I am doing something wrong), while other implementations, e.g. GPyTorch, allow updating them via gradient descent.
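If I understand the scikit-learn docs correctly, `GaussianProcessRegressor` does re-optimize the kernel hyperparameters on every `fit()` call by maximizing the log marginal likelihood; the fitted values end up in `gpr.kernel_`, while `gpr.kernel` keeps the initial values, so maybe I was only inspecting the latter. A minimal sketch with toy data (all names and data here are illustrative):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(20, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(20)

kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1.0)
gpr = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=5)
gpr.fit(X, y)

print(gpr.kernel)   # initial hyperparameters -- these never change
print(gpr.kernel_)  # fitted hyperparameters -- re-optimized on every fit()

# To genuinely keep the hyperparameters static, disable the optimizer:
frozen = GaussianProcessRegressor(kernel=kernel, optimizer=None)
```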

Under batch learning we would have the whole training set, so tuning the hyperparameters to maximize the log marginal likelihood makes sense. But does it also make sense during active learning, when the labelled dataset is still very small?
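To make the question concrete, here is a minimal pure-scikit-learn sketch of what re-optimizing at every active-learning round would look like, using a max-variance query strategy over a toy 1-D pool (this is just an illustration, not the modAL example itself):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X_pool = np.linspace(0, 10, 200).reshape(-1, 1)
y_pool = np.sin(X_pool).ravel() + 0.1 * rng.standard_normal(200)

# Start from a very small labelled set, as in active learning.
labelled = list(rng.choice(len(X_pool), size=3, replace=False))

gpr = GaussianProcessRegressor(
    kernel=RBF(1.0) + WhiteKernel(0.1), n_restarts_optimizer=5
)

for step in range(10):
    gpr.fit(X_pool[labelled], y_pool[labelled])  # hyperparameters re-optimized here
    _, std = gpr.predict(X_pool, return_std=True)
    std[labelled] = -np.inf                      # don't re-query labelled points
    query_idx = int(np.argmax(std))              # max-variance query strategy
    labelled.append(query_idx)
    print(step, gpr.kernel_)  # watch the length scale / noise evolve as data grows
```

With only a handful of points, I would expect the fitted length scale and noise level to jump around a lot between rounds, which is what makes me unsure whether re-optimizing so early is a good idea.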
