Analyzer extension exit status #3347
Conversation
|
Hi Jonah. `data_loadable` and `run_completed` are a bit redundant (unless the computation is done but the writing failed). |
|
I also fixed the edge case where (for some reason) the run completes, but then the data file gets deleted; it will now raise a warning upon loading the sorting analyzer, and return None on calls to get_extension(), since the extension is no longer usable and should be re-computed as if it had never been run. |
|
Also — I still think my naive / default usage of this code would be to check `has_extension()`. Just taking the third example:

```python
if sorting_analyzer.has_extension("spike_amplitudes"):
    peak_amplitudes = sorting_analyzer.get_extension("spike_amplitudes").get_data()
else:
    peak_amplitudes = None
```

right now, that will fail if the extension doesn't have data, since `get_extension()` can return `None`. Instead I'd need:

```python
spike_amps = sorting_analyzer.get_extension("spike_amplitudes")
if spike_amps:
    peak_amplitudes = spike_amps.get_data()
else:
    peak_amplitudes = None
```

(or with a ternary operator plus an assignment expression it can fit on one line)

```python
peak_amplitudes = _ext.get_data() if (_ext := sorting_analyzer.get_extension("spike_amplitudes")) else None
```

but the pattern still just feels confusing 🤷
|
@samuelgarcia this is good to merge on my side. I added backward-compatibility logic for loading folders/zarr produced prior to this change |
```python
for r, result in enumerate(results):
    extension_name, variable_name = result_routage[r]
    extension_instances[extension_name].data[variable_name] = result
    extension_instances[extension_name].run_info["runtime_s"] = runtime_s
```
OK for me, but if we want the total run time, then summing `run_time` will be wrong because this run time is shared.
This is the best estimate we can get :)
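To make the over-counting concern concrete, here is a small sketch (the numbers and extension names are made up for illustration): when one `run_node_pipeline` pass computes several extensions, each extension records the same shared runtime, so a naive sum double-counts it.

```python
# Hypothetical: one shared pipeline pass computed both extensions in 12.5 s,
# so each extension's run_info records that same shared runtime.
shared_pipeline_runtime_s = 12.5

run_info_per_extension = {
    "spike_amplitudes": {"run_completed": True, "runtime_s": shared_pipeline_runtime_s},
    "spike_locations": {"run_completed": True, "runtime_s": shared_pipeline_runtime_s},
}

# Naively summing runtime_s over extensions double-counts the shared pass:
naive_total = sum(info["runtime_s"] for info in run_info_per_extension.values())
# naive_total is 25.0, even though the wall-clock cost was only 12.5 s
```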
|
Thank you very much Jonah and Alessio. |
Fixes #3329
Here's a first pass at adding information about an extension's completion (or not) to its metadata.

This PR adds a `run_info.json` file to each extension's folder. This contains keys:

- `run_completed` (whether or not the call to `AnalyzerExtension._run()` finished)
- `data_loadable` (whether or not `AnalyzerExtension.load_data()` can be called without an error)
- `runtime_s` (the runtime of `AnalyzerExtension._run()` in seconds)

The core logic is implemented in `AnalyzerExtension.run()` and wraps the call to the extension-specific `_run()`. Then when `AnalyzerExtension.get_extension()` is called (and redirects to `AnalyzerExtension.load()` internally), if the run wasn't completed, it simply returns None as if the extension doesn't exist, and the user is able to catch that and re-run the extension.

There is a mild complication with extensions that use the `run_node_pipeline` if called through `compute_several_extensions`, because then `AnalyzerExtension.run()` is never actually called. Instead, it looks like `AnalyzerExtension.save()` is used, so I added the relevant lines there to catch that, and assume that if the code makes it there, the run is completed.

TODO:

- Check that `merge()` and `copy()` are correct; I don't totally understand when those methods are expected to be called / what's wrapping them.

Notes:

- It seemed cleaner to add a separate `run_info.json` file than trying to cram this into `info.json`, since that is more about the code itself.
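For reference, a `run_info.json` written under this scheme might look like the sketch below. Only the three key names come from the description above; the values are made up for illustration.

```python
import json

# Hypothetical contents of one extension's run_info.json; the three key
# names match the PR description, the values are illustrative only.
run_info = {
    "run_completed": True,   # AnalyzerExtension._run() finished
    "data_loadable": True,   # load_data() can be called without an error
    "runtime_s": 3.7,        # runtime of _run() in seconds
}

# Serialize and parse back, as the extension folder round-trip would do
text = json.dumps(run_info, indent=4)
loaded = json.loads(text)
```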