More flexible execution #358
Conversation
eggerdj left a comment
I wonder if it would not make more sense to introduce more structure into what these hooks can be? Right now they seem like a loose collection of functions whose signatures often vary. Looking at them a little closer, I also see that the order of the arguments is inconsistent, for example job and experiment_data below:
def set_analysis(experiment, analysis, job, experiment_data, **kwargs):
def add_job_metadata(experiment, experiment_data, job, run_options, **kwargs):
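For reference, keyword-only signatures would make the inconsistent ordering harmless; a minimal sketch (placeholder bodies, not the PR's actual code):

# The bare '*' makes every parameter keyword-only, so call sites cannot
# depend on the (inconsistent) positional order of these hooks.
def set_analysis(*, experiment, analysis, job, experiment_data, **kwargs):
    ...

def add_job_metadata(*, experiment, experiment_data, job, run_options, **kwargs):
    ...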
timing_constraints["acquire_alignment"] = getattr(
    timing_constraints, "acquire_alignment", 16
)
Is this 16 hard-coded here? This should be configurable, e.g. in an option.
Yes, agreed. I just copied it from the original code.
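A minimal sketch of making it configurable, assuming the value lives in the experiment options (the option name acquire_alignment here is an assumption, not the real API):

# Sketch: read the alignment from an option and fall back to 16 only
# when the option is unset (option name assumed for illustration).
alignment = getattr(self.experiment_options, "acquire_alignment", 16)
timing_constraints["acquire_alignment"] = alignment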
Thanks Daniel for the comment. In the current implementation I assume no function takes *args, i.e. arguments are always given as keywords, so their order doesn't matter. The returned value is used to override the variable dict in the event hook's namespace (so each function returns its values in a dict); thus every function takes all variables generated during the execution chain. So we could remove all the explicit arguments, but I guess there is a much better way to pass them (perhaps using some function decorator).
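A minimal sketch of the dict-override mechanism described above (the runner and hook below are illustrative, not the PR's exact code):

# Each hook receives all variables accumulated so far as keywords and may
# return a dict whose entries override/extend the shared namespace.
def run_chain(hooks, **namespace):
    for hook in hooks:
        updates = hook(**namespace)
        if updates:
            namespace.update(updates)
    return namespace

def set_analysis(experiment=None, analysis=None, **kwargs):
    # Returning a dict makes the new "analysis" visible to later hooks.
    return {"analysis": analysis}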
Hmm, here are some things I don't like about this:
All of those complaints could be mitigated, but at a cost, and whether the cost is worth it depends on what the benefit of using this abstraction layer is. One alternative approach is to continue to add more empty hook methods that subclasses can override.
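For concreteness, a sketch of that alternative (the hook name is taken from the existing method mentioned in the PR description; the base class here is illustrative):

class BaseExperiment:
    def _postprocess_transpiled_circuits(self, circuits, **run_options):
        """An intentionally empty hook; subclasses override it as needed."""
        return circuits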
Thanks @wshanks, I agree with your points. Actually, this is a similar approach to the Qiskit transpiler, in which each step is registered as a class initialized in advance with arbitrary variables in the pass manager's namespace. This class approach may improve visibility, but it still cannot return arbitrary new variables (it is limited to the circuit). I also find another downside of the method approach, namely code reusability. For example, T1, T2, and Tphi characterization all need timing constraints, but they are directly created from the
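For context, the transpiler-pass pattern referenced above looks roughly like this (TransformationPass is Qiskit's real base class; the pass body is illustrative):

from qiskit.transpiler.basepasses import TransformationPass

class CountOpsPass(TransformationPass):
    # The step is a class configured in advance with arbitrary variables...
    def __init__(self, tracked_gates):
        super().__init__()
        self.tracked_gates = tracked_gates

    def run(self, dag):
        # ...but run() must return the DAG, so a pass cannot hand arbitrary
        # new variables to later steps the way the event hooks can.
        return dag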
Hi Naoki, while the suggested runner and events framework allows a high degree of flexibility, I do think it's overkill and adds considerable complexity to the existing experiments framework design. Here are a few points that crossed my mind when reviewing the suggested code:
So as a big fan of the KISS principle, I think that what Will suggested in his comment (which also goes in line with what issue #151 suggests) is the right approach: breaking the run flow into smaller pieces which are part of the base class.
Thanks Eli. I agree the event handler approach has a readability issue, and we now lean toward the approach of adding more methods. Though I'm not attached to the proposed code here, what do you think of the reusability of the code? For example, logic defined inside one experiment's methods cannot easily be reused by an unrelated experiment. Perhaps we can avoid this situation by very carefully designing the methods, i.e. providing a method for each atomic procedure; however, this also makes the base class messy and makes it harder to track the execution order. Another problematic situation would be given
What about making the methods call externally defined functions?
I think that is a reasonable approach. However, if we allow the methods to call externally defined functions, this is maybe the same as the event handler approach, if that approach took callbacks instead of strings. The only difference would be
What do you think?
An alternative solution would be allowing each method to take pre/post callbacks, like self._transpile(..., pre_callback=[step_a, step_b], post_callback=[step_c]), if external functions are allowed.
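A sketch of what that signature might look like (the callback semantics are assumed; transpile is Qiskit's standard function):

from qiskit import transpile

class BaseExperiment:
    def _transpile(self, circuits, pre_callback=(), post_callback=(), **options):
        # Pre-callbacks may adjust the experiment before transpilation.
        for step in pre_callback:
            step(self)
        transpiled = transpile(circuits, **options)
        # Post-callbacks receive and may replace the transpiled circuits.
        for step in post_callback:
            transpiled = step(transpiled)
        return transpiled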
If I understand correctly, for the alternative solution you are basically suggesting to have this code:

class SomeExp(BaseExperiment):
    def _transpile(self):
        super()._transpile(...., pre_callback=[step_a, step_b], post_callback=[step_c])

instead of:

class SomeExp(BaseExperiment):
    def _transpile(self):
        step_a()
        step_b()
        super()._transpile(....)
        step_c()

I may be missing some benefits of the callback approach, but not seeing them, I personally prefer the second option (i.e. explicit calls in the derived class). Maybe other developers can chime in as well, as I think we kind of agreed on the general approach and now it's down to style (again, unless I'm missing some fundamental advantage of the callback approach). Actually, now that I'm thinking about it a bit more, with the explicit method call approach you can take the results of super()._transpile() and manipulate them directly:

class SomeExp(BaseExperiment):
    def _transpile(self):
        step_a()
        step_b()
        circuits = super()._transpile(....)
        transform_circuits_somehow(circuits)
        step_c(circuits)

I'm not sure how you will have this level of flexibility with the callback approach.
I think this is just a preference of style, as you said. I'm fine with the second one. However, I find an issue with this style:

class SomeExp(BaseExperiment):
    def _transpile(self):
        step_a()
        step_b()
        circuits = super()._transpile(....)
        transform_circuits_somehow(circuits)
        step_c(circuits)

In this example we need to check what has been done in the superclass to know the full event chain. If the class is nested multiple times, tracking the whole chain of events might be hard. I still prefer having explicit hook methods, at the expense of simplicity...
I think that Oliver made a good point yesterday about the entry barrier for users when it comes to adding new experiments. Since deep class hierarchies will probably not be a very common case (I think it's reasonable to assume that most experiments will derive from BaseExperiment, or maybe from a common experiment class deriving from BaseExperiment, e.g. for cals), I think we should balance towards simplicity and user friendliness.
Okay, that makes sense. Given we don't have deep class hierarchies, the hook methods have another advantage. For example, if we want to add some extra feature (like an add-on) to an experiment, perhaps a mix-in class is a useful approach, like we are discussing in #251 (still not sure if we will adopt it).

class MyExperiment1(BaseExperiment):
    pass

class AddOnFeature(BaseExperiment):
    def _post_transpile(self, circuits):
        add_custom_gate_definition(circuits)
        return circuits

class MyExperiment2(MyExperiment1, AddOnFeature):
    pass

This allows us to separately define characterization and calibration experiments, i.e. append parameter update logic to the base characterization logic to create a calibration. Then no more characterization vs. calibration discussion happens. Though this is just one of the possibilities for now, if we remove the hook methods, we cannot realize this framework, i.e. even if we provide

So something like

class BaseExperiment:
    def run(self, *args, **kwargs):
        ...
        # run pre transpile if the subclass (or a mix-in) defines it
        pre_transpile = getattr(self, "_pre_transpile", None)
        if pre_transpile:
            pre_transpile(*args, **kwargs)
        # run standard transpile
        self._transpile(*args, **kwargs)
        # run post transpile if the subclass (or a mix-in) defines it
        post_transpile = getattr(self, "_post_transpile", None)
        if post_transpile:
            post_transpile(*args, **kwargs)
        ...

works. Perhaps this is not acceptable in the sense of user (contributor) friendliness.
I got your point, makes sense. But why not simply override the method? In addition, I think that in your approach you are bound to how the
I agree. However, this doesn't fit in with the parallel experiment. If we think of the situation where we run RB and T1 experiments simultaneously on different qubits (e.g. RB on q0 and T1 on q1), these experiments need different execution logic.
If we control the execution with a single
we can cope with this situation. Namely, we can execute 1 and 3 for each sub-experiment, while 2 is provided by the base class and is not overridable. Within each piece (1 and 3), perhaps calling externally defined methods is the easiest-to-understand approach, as you mentioned. Or perhaps we can define the entire processing routine externally and attach it to a class member, something like:

class RB(BaseExperiment):
    __sub_routine_transpile__ = transpile_with_count
    __sub_routine_expdata__ = base_analysis
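A sketch of how the base class might dispatch such an attribute (the dunder names follow the snippet above; the resolution logic is assumed):

from qiskit import transpile

class BaseExperiment:
    # Subclasses may point this at an externally defined routine.
    __sub_routine_transpile__ = None

    def _transpile(self, circuits, **options):
        routine = type(self).__sub_routine_transpile__
        if routine is not None:
            # The external routine replaces the whole transpile step.
            return routine(self, circuits, **options)
        return transpile(circuits, **options)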
We have had enough discussion, and the PR was finalized in #380.
Summary
This PR adds a more flexible execution chain for experiment runs.
Close #151
Details and comments
The execution chain can differ slightly for each experiment, and it seems difficult to implement perfect base class logic for every experiment.
For example:
_postprocess_transpiled_circuits is only used by RB to count circuit operations, and this method name covers too broad a context. If we want to implement an experiment that needs the counting together with other features (and is not an RB subclass), we cannot reuse the counting code.

In this PR, the execution chain is divided into single pieces of processing, and a developer can combine them to create a full chain. Each processing step (pre-process/transpile/execution/post-process) is stored in a class attribute as a list of function names for serialization.
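A minimal sketch of the name-based chain described above (the attribute and step names are illustrative; storing names rather than callables is what keeps the chain serializable):

class BaseExperiment:
    # Step names, not function objects, so the chain can be serialized.
    __execution_chain__ = ["_pre_process", "_transpile", "_execute", "_post_process"]

    def _run_chain(self, **kwargs):
        for name in self.__execution_chain__:
            step = getattr(self, name)
            step(**kwargs)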