Storing intermediate results of a Composite Model #841
@olivierlabayle Thanks for raising this interesting question about creating new interface points for composite models. Of course, if you are not interested in the ordinary […]. But I am interested in clarifying exactly what you want here; I see two possible objectives. Do you want the output of […]?
@ablaom Thanks for getting back to me so quickly. I am working on causal inference, which means that my scenario differs from the traditional MLJ framework in the following ways: […]
The reason why I am so interested in the learning network API is that I think it provides a nice caching and scheduling mechanism. For instance, again in my use case, I might want to change one hyperparameter of model3 (see below) so that the whole procedure will not refit model1 and model2, because their upstream has not changed. To cut it short, I think using the […] is the way to go. Hope this helps!

```julia
using MLJ
using Statistics  # for `mean` in the summary nodes below

LinearRegressor = @load LinearRegressor pkg=MLJLinearModels verbosity=0

mutable struct MyModel <: MLJ.DeterministicComposite
    model1
    model2
    model3
end

function MLJ.fit(m::MyModel, verbosity, X, W, y)
    Xs = source(X)
    Ws = source(W)
    ys = source(y)

    # First stage: one machine per input source
    mach1 = machine(m.model1, Xs, ys)
    mach2 = machine(m.model2, Ws, ys)
    ypred1 = MLJ.predict(mach1, Xs)
    ypred2 = MLJ.predict(mach2, Ws)

    # Second stage: fit model3 on the stacked first-stage predictions
    Y = hcat(ypred1, ypred2)
    mach3 = machine(m.model3, Y, ys)
    ypred3 = MLJ.predict(mach3, Y)

    # Summarize the final predictions by their mean and variance
    μpred = node(x -> mean(x), ypred3)
    σpred = node((x, μ) -> mean((x .- μ).^2), ypred3, μpred)
    estimate = node((μ, σ2) -> (μ, σ2), μpred, σpred)

    # Export the network with `estimate` as the predict node
    mach = machine(Deterministic(), Xs, ys; predict=estimate)
    return!(mach, m, verbosity)
end

X, y = make_regression(500, 5)
model = MyModel(LinearRegressor(), LinearRegressor(), LinearRegressor())
mach = machine(model, X, X, y)
fit!(mach)
estimate = MLJ.predict(mach)
```
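To make the caching claim above concrete, here is a minimal sketch (assuming MLJ's usual smart retraining of exported networks; `fit_intercept` is just a convenient `LinearRegressor` hyperparameter to mutate):

```julia
# Change a hyperparameter of model3 only; on refit, mach1 and mach2 are
# not retrained, since nothing upstream of model3 has changed.
model.model3.fit_intercept = false
fit!(mach)  # only the third machine and its downstream nodes recompute
```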
@olivierlabayle I've played around with this a bit today and will get your feedback on one experiment in the next day or so.
@ablaom That's great, very happy to hear that, thanks a lot!
@olivierlabayle Please have a look at JuliaAI/MLJBase.jl#644, which addresses the original suggestion, and give me your feedback. I think that, in the immediate term, causal inference with targeted learning is out of scope; my focus for the next few months will be moving towards version 1.0. Perhaps you can hack around the other obstacles for now, e.g. by exporting a predict node that you have no intention of using. You might also want to conceptualise your model as a transformer with a single tuple […]
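One possible reading of the (truncated) transformer suggestion, offered only as a hedged sketch: export the network through `transform` instead of `predict`, so the `(μ, σ2)` tuple becomes the single output of an unsupervised transformer. This would replace the last two lines of the `MLJ.fit` definition above, with `MyModel` subtyping `MLJ.UnsupervisedComposite` instead:

```julia
# Hypothetical export step using an Unsupervised surrogate:
mach = machine(Unsupervised(), Xs, ys; transform=estimate)
return!(mach, m, verbosity)
```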
Yes, I understand, and I wasn't planning on having a dedicated MLJ structure for this. As you say, I will be hacking a bit; for now it's a model with an unused predict node, but I like the transformer idea. I think with this pull request I should be good to go and benefit from the learning network machinery.
Wouldn't it be more intuitive/self-explanatory to add a `report` keyword argument, so that

```julia
mach = machine(Deterministic(), Xs, ys; predict=ypred3, μpred=μpred, σpred=σpred)
```

would become

```julia
mach = machine(Deterministic(), Xs, ys; predict=ypred3, report=(μpred=μpred, σpred=σpred))
```

@ablaom I also stumbled over this issue while implementing composite detectors, which should store training scores in the report of the composite model.
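Presumably, under this proposal, the stored values would then surface through MLJ's existing `report` accessor, along the lines of this sketch:

```julia
fit!(mach)
r = report(mach)  # r.μpred and r.σpred hold the training-time values
```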
@davnn Thanks for chiming in here. I also thought of this, but it seemed a bit more complicated. But yes, as you say this may be "more intuitive/self-explanatory". I should be happy to make that change.
Ah, yes, I can imagine that could be so. Does this mean we need to expedite this somewhat? Currently this is low on my priorities, as I am swamped with other stuff.
Nope, consider it low priority as well; I'm just using a custom […] for now.
@ablaom Thank you for implementing this feature!
I'm having a difficult time converting my custom function

```julia
function return_with_scores!(network_mach, model, verbosity, scores_train, X)
    fitresult, cache, report = MLJ.return!(network_mach, model, verbosity)
    report = merge(report, (scores=scores_train(X),))
    return fitresult, cache, report
end
```

which I currently use instead of `return!`.
@davnn Good point. I suggest we add a […]. What do you think?
Thank you for your detailed thoughts on how we could go forward. I need some more time to think about it. I'm a bit afraid of feature creep in MLJ, but maybe that's not a big problem.
Alternatively, we could introduce more generic accessor functions […]. In your use case, you would overload […]
I would prefer to keep the API simple, with a […]. It might make sense to follow the uniform access principle for things like the models' report, i.e. discourage or even disallow direct access to model intrinsics such as […]
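For context, MLJ already exposes accessor functions in this spirit, so the uniform-access style would extend an existing pattern, e.g.:

```julia
fp = fitted_params(mach)  # public accessor, rather than mach.fitresult
r  = report(mach)         # public accessor, rather than mach.report
```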
Thanks for these points. I have some ideas about how to do this properly (and also how to greatly simplify the learning networks "export" process), but it's going to take a little time. I will keep you posted, and I appreciate your patience.
Hi!
Is your feature request related to a problem? Please describe.
I am trying to use the learning network API and would like to store additional results in the `fitresult` of my composite model. Could you provide some guidance on how to do this properly?
Describe the solution you'd like
Ideally I'd like to be able to store the value of any node that was computed at training time.
Describe alternatives you've considered
It seems that only the submodels' `fitresults` are natively stored, so one way to do it, I guess, would be to define some kind of `ResultModel` as a submodel for whatever value I would like, and compute the result in the `fit!` function of this model.
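A minimal sketch of what such a `ResultModel` might look like (hypothetical; not a type provided by MLJ): an unsupervised model whose `fit` simply captures, as its fitresult, whatever value flows into it at training time.

```julia
import MLJModelInterface
const MMI = MLJModelInterface

# Hypothetical capture model: its fitresult is the training-time input.
mutable struct ResultModel <: MMI.Unsupervised end

MMI.fit(::ResultModel, verbosity, X) = (X, nothing, NamedTuple())
MMI.transform(::ResultModel, fitresult, Xnew) = Xnew

# In a learning network, bind it to the node whose value should be kept,
# e.g. machine(ResultModel(), estimate); after fit!, the captured value
# is retrievable via fitted_params.
```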
Additional context
I must add that the learning network I am trying to build is not regular, in that it will never be used for prediction; however, I feel that what I'm trying to do may be of general use in MLJ. For instance, the example above works fine, except that I can't retrieve the value of the final node, because of the anonymization in `return!`. Moreover, I don't think this is appropriate, as I guess all computations (except fitting) would be made again each time I call the node, right?
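For completeness, a sketch of the node-calling behaviour referred to here (reusing the `estimate` node from the network above, before export): in a raw learning network, calling a node with no arguments evaluates it on the training data, re-running every computation downstream of the fitted machines.

```julia
val = estimate()  # re-runs predictions, hcat, mean, etc. on training data
```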