ENH: Run auto model by default #26
Conversation
So it looks like …
Can you elaborate? What's the fitlins-optimal approach, and what is it currently doing?
The interface I set up assumes one model instead of a list of models, and they're loaded in-interface. If we can't combine it into one big model, I'll need to figure out the best way to handle lists.
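One way to handle the "one model vs. a list of models" problem described above is to normalize the input up front, so downstream code always iterates over a list of independent models. A minimal hypothetical sketch; `normalize_models` is illustrative only and not part of the fitlins API:

```python
# Hypothetical helper: accept either a single model spec (a dict, as
# loaded from a BIDS-StatsModel JSON file) or a list of such specs,
# and always return a list so the rest of the pipeline can loop.

def normalize_models(spec):
    """Wrap a single model dict in a list; pass lists through."""
    if isinstance(spec, dict):
        return [spec]
    return list(spec)

# Usage: both call styles yield a uniform list of models.
single = normalize_models({"name": "model1"})
several = normalize_models([{"name": "a"}, {"name": "b"}])
```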
Ohhh, I see. Sorry, I'd forgotten. What I think we could do, though, that might work for your needs is to allow the user to pass keyword selectors to …
Well, the point here would be to auto run, so I wouldn't assume enough selectors from users to get a unique model, as currently built. If we can't have a single model that handles all the tasks, then I should handle lists over here.
I think if you're going to go down that road (which seems reasonable), you'll need to implicitly loop over all detected tasks and run the whole fitlins pipeline for each one... so it seems like a more general problem than what's happening in …
I guess an alternative would be to have a flag on …
I was thinking of combining in the sense of building a model as a series of independent chains (i.e. block n might depend on blocks 1..n-1, but doesn't need to depend on all). The goal would not be to change the semantics; just the organization.
@Shotgunosine I think in principle the approach you suggest already works under the current spec. But fitting one big model across multiple tasks is something of an edge case, and has a different meaning (and will produce different results) from the typical scenario, where one is more likely to have several different models, each of which needs to be fit to a different task. @effigies I may be misunderstanding what you're saying, but the models coming back from …
@tyarkoni I agree each element of the list should be independent. I was suggesting there might be (I don't know) a way of combining into a single model where the blocks are independent.
Just pushed a proposed refactoring, which always assumes multiple independent models. I suspect flattening inside the node will probably end up producing an untenable situation when we try to recombine at the second level... It's possible that what we want is to just load the models (whether they're json files or auto-generated) and construct entirely independent pipelines for each.
I can't think of any situation in which a user would want/need to recombine models at the second level in a way that can be easily automated. I think we can safely construct completely independent pipelines for each task (beginning with the subsetting of the input image list based on the …
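The "completely independent pipelines per task" idea can be sketched in a few lines. This is a self-contained illustration only; the helper names and the filename parsing here are assumptions for the sketch, not fitlins code:

```python
# Sketch: detect the tasks present in a BIDS-style image list, then
# subset the inputs per task so each task gets a fully independent
# pipeline. `detect_tasks` and `subset_images` are hypothetical helpers.

def detect_tasks(image_list):
    """Collect the set of task labels appearing in BIDS filenames."""
    tasks = set()
    for fname in image_list:
        for part in fname.split("_"):
            if part.startswith("task-"):
                tasks.add(part[len("task-"):])
    return sorted(tasks)

def subset_images(image_list, task):
    """Keep only the images belonging to the given task."""
    return [f for f in image_list if f"task-{task}_" in f]

images = [
    "sub-01_task-rest_bold.nii.gz",
    "sub-01_task-stopsignal_bold.nii.gz",
    "sub-02_task-rest_bold.nii.gz",
]

# One independent input subset (and hence pipeline) per detected task:
pipelines = {task: subset_images(images, task) for task in detect_tasks(images)}
```

Each value in `pipelines` would then feed its own model fit, with no second-level recombination across tasks.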
Think this is ready, by the way. Reviews welcome. Will merge tomorrow unless somebody requests more time to review.
Guess I forgot.
Closes #24