Support for Autotuning DSL #1
@jesus-gorronogoitia Can you please comment? |
Hi @kartshy That's fine to include the Autotuning DSL as part of the AADM DSL. We only need to identify where (within the AADM) an Autotuning model should be embedded. |
It is a sister node for optimization. It is relevant for a particular application (skyline_extractor in the Snow example) |
In the example we have optimisation as a single entry. Instead it should be a list: optimisation{ … } |
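A purely hypothetical sketch of the change being requested (the original snippet is not reproduced here; field names are illustrative, not from this thread):

```python
# Hypothetical sketch only: the original snippet was lost; names are illustrative.
# Before: optimisation as a single entry.
before = {"optimisation": "xla"}

# After: optimisation as a block listing several optimisations together.
after = {"optimisation": {"xla": True, "autotuning": False, "opt_build": True}}
```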
@jesus-gorronogoitia , @gmeditsk @zoevas I hope this gives a clear view of where we want to go with the optimisation DSL.
|
Need to add
|
@jesus-gorronogoitia Please find below the updated DSL in json format.
|
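The attached JSON is not reproduced above. As a plausible reconstruction assembled only from field names that appear later in this thread (app_type, config.ai_framework, xla, autotuning, opt_build), written as a Python dict so the assumptions can be annotated:

```python
import json

# Hypothetical reconstruction: the real JSON was not preserved.
# The field names come from later comments; the nesting is assumed.
optimization_dsl = {
    "optimization": {
        "app_type": "AI_training",                 # e.g. AI_training | HPC (assumed set)
        "config": {"ai_framework": "TensorFlow"},  # TensorFlow | PyTorch | Keras
        "ai_framework": {
            "TensorFlow": {"xla": True}            # section shown once the framework is chosen
        },
        "autotuning": {"enabled": False},
        "opt_build": {"enabled": True},
    }
}

print(json.dumps(optimization_dsl, indent=2))  # what the IDE would serialize
```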
Hello @kartshy , @jesus-gorronogoitia With the current implementation, having optimizations as a simple list, the reasoner takes the type of framework by:
and, in conjunction with the capabilities in the AADM, proposes optimizations. Now that the optimizations won't be a list, and since it is a block containing information for app_type, I suppose that according to the selected app_type the reasoner will propose optimizations based on capabilities. That's how I understand it:
So, I suppose that 1) XLA is proposed by the reasoner; 2) if app_type = HPC, the reasoner should return whether the MPI, OPENMP, OPENACC etc. blocks should be enabled; 3) about autotuning and opt_build, are there specific criteria by which they are enabled, or does the user decide that? Please correct me if I am wrong on any point. Thank you so much in advance. |
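A minimal sketch of the selection logic described in this comment; this is not the actual KB reasoner, and the capability inputs (e.g. GPU count) are assumptions:

```python
# Illustrative only: the decision logic as understood above, not the KB reasoner.
def propose_optimizations(app_type, ai_framework=None, num_gpus=0):
    proposals = []
    # Case 1): AI framework chosen and GPUs available -> propose XLA.
    if app_type == "AI_training" and ai_framework == "TensorFlow" and num_gpus > 0:
        proposals.append("xla")
    # Case 2): HPC -> decide which parallelisation blocks to enable.
    if app_type == "HPC":
        proposals += ["MPI", "OPENMP", "OPENACC"]
    # Case 3): autotuning / opt_build criteria remain the open question above.
    return proposals

print(propose_optimizations("AI_training", "TensorFlow", num_gpus=2))  # -> ['xla']
```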
For 1): TensorFlow, PyTorch and Keras are types of AI frameworks.
In general I have designed this in such a way that the DSL expands based on AoE-specified options. For example, there are some entries which can be inferred from the application or infrastructure model.
I will add these criteria and send you later. |
DSL with constraints. (Added ETL for HPC also)
|
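The constrained DSL attached here is not reproduced above. A sketch of the delta only, with an assumed constraint syntax; the SSD and GPU constraints are the two named later in this thread:

```python
# Hypothetical fragment: the constraint syntax is assumed, not from the source.
constrained_entries = {
    "xla": {"constraint": "num_gpus > 0"},       # enable only when GPUs are present
    "ETL": {"constraint": "storage == 'SSD'"},   # ETL, now also available under HPC
}
```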
Hi @kartshy |
Hi @kartshy |
Hi @kartshy |
Yes. But that will be done in Y3; the Y2 focus is only HPC and AI_training. There is BigData in AI_training (ETL), so we may skip BigData as there is no mapping to use-case applications. |
Currently constraints exist only for the entries I have mentioned in the DSL above. My understanding is that, based on the Application and Target model, these entries may be enabled or disabled for the user. I can also enable or disable them when I process them in WP4, but we agreed the logic should be in the KB. For example, for the constraint (number of GPUs > 0) we should get the number of GPUs from the infrastructure model and then use the condition to enable or disable the optimisation. |
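As an illustration of the enable/disable flow just described (a sketch, not the KB implementation; the infra dict is a stand-in for the real infrastructure model):

```python
# Sketch only: 'infra' stands in for values read from the infrastructure model.
infra = {"num_gpus": 4, "storage": "SSD"}

def optimisation_enabled(constraint_expr, infra):
    # The thread fixes no constraint syntax; evaluating a Python expression
    # over the infra values is used here purely for illustration.
    return bool(eval(constraint_expr, {}, dict(infra)))

print(optimisation_enabled("num_gpus > 0", infra))  # True -> enable the optimisation
```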
Yes, for HPC they are boolean and for AI_training it is an integer. We can change HPC to integer also. I don't have a full picture for HPC yet. |
It is still unclear to me how to support the formalization of constraints. Questions: … I think, for the moment, while waiting for you to be more precise about the constraint format, I will support this generic format for constraints: |
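The generic format mentioned at the end of this comment is not reproduced above; one plausible reading, with every field name assumed, is a property/operator/value triple:

```python
# Assumed shape only: the actual generic constraint format posted here is unknown.
generic_constraint = {
    "property": "num_gpus",  # attribute resolved against the infrastructure model
    "operator": ">",         # e.g. >, >=, <, <=, ==, !=
    "value": 0,
}
```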
Hi @kartshy |
Hi @kartshy
I assume that once the app_type is chosen (e.g. AI_Training), only that section appears below in the model. Similarly, once config.ai_framework (or config.parallelisation) is chosen, only that section appears below in the model (e.g. TensorFlow or MPI). Please confirm. |
@jesus-gorronogoitia You are fast. I am on leave (UK holiday) today; I will have a look tomorrow. |
take your time, enjoy your day off. |
First version of autotuning DSL and editor implemented. See #4 for details |
We can discuss it in the Thursday call. |
This autotuning DSL is big. My initial thought was that we won't support it as a DSL but as a string or file name (input) which is passed to the tool. I will email you the full DSL. |
If the autotuning DSL is big and complex, it should be provided by the users as you suggest, referencing the path to an external file. Then, during AADM deployment, the IDE can retrieve this file and attach its content as a string to the AADM model to be sent to the KB. Let's discuss this point at the Thursday meeting. |
Agreement: Autotuning DSL editing will not be supported by the IDE inline in the Optimization DSL editor. It will be provided by the user as a separate file. Upon saving or deployment of the AADM model, if an autotuning block is present, the user will be prompted to provide the autotuning content file. The content of this file will be embedded as a string into the autotuning input property. |
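A minimal sketch of the agreed flow, assuming hypothetical file paths and key names:

```python
from pathlib import Path

# Sketch of the agreement above: on save/deploy, if an autotuning block exists,
# prompt for a file and embed its content as a string. All names are assumptions.
aadm = {"optimization": {"autotuning": {"input": None}}}

tuning_file = Path("skyline_extractor.tuning")  # hypothetical user-selected file
aadm["optimization"]["autotuning"]["input"] = tuning_file.read_text()
```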
Can we add the file to the artifacts instead of embedding it as a string? |
Hi @kartshy What do you mean by adding the autotuning file to the artifacts? Do you mean sending the autotuning file to the KB not embedded as a string in the optimization model, but as a separate file? |
Implemented IDE support for selecting the autotuning model from the file system: the user is prompted to select the model, whose path is associated with the input property of autotuning. It remains to be decided how this autotuning model will be sent to the Optimization engine. |
Hello @kartshy , @jesus-gorronogoitia, Regarding the optimization section and whether its schema is supported by the KB: suppose that the optimization schema is not supported by the KB; I am wondering whether the reasoner can infer all the input it needs from the rest of the AADM so as to return optimizations. In the optimization section, I understand that the following variables are the ones needed: …
So, if the optimization schema is not supported by the KB, do you know if it is possible? … Thanks in advance |
Hello @kartshy, Just for confirmation: I understood from the last ontology meeting that, for now, the reasoner will … For finding the applicable optimizations, the reasoner checks the capabilities of a node template and a few parameters from the optimization JSON, such as app_type, cpu_type, ai_framework. Please inform me in case any other parameter should be taken into account, and correct me if I am wrong on any point. So as to start the implementation as soon as possible, could you provide an end-to-end example? … Anything else that we should keep in mind regarding the reasoner? Should the reasoner send anything to the optimizer? Or, for now, will just the IDE send the serialized optimization JSON to the optimizer, and the reasoner only assist the IDE by sending the applicable optimizations to the IDE? Thank you so much in advance, |
The reasoner can infer many things based on the application and infrastructure node, but I don't have a clear view to define those now, so we have pushed that work past M18. The reasoner is only involved in implementing the constraints in the optimisation DSL, like SSDs and number of GPUs. For example, to enable XLA optimisations we need number of GPUs > 0, so my expectation is for the reasoner to find out if the number of GPUs is > 0 and then enable the XLA optimisation.
In the current DSL spec there are only two constraints, for SSD and number of GPUs. Both values need to be inferred from the infrastructure node. |
Hello @kartshy , Thanks for the answer. For the interim review, regarding the optimizations, what needs to be implemented by the reasoner? I remember from the ontology meeting that constraints such as (Constraint: number of GPUs > 0) will be removed from the optimization DSL, and in the future QoE will provide those constraints to the reasoner. Also, the IDE will serialize the optimization DSL to JSON. So, should the reasoner send anything to the optimizer? Or should the reasoner assist the IDE by proposing optimizations, based on the optimization JSON selections (e.g. app_type) and the capabilities of the AADM? Or anything else that I have missed? Regards, Zoe |
We should discuss it in the weekly call. But I don't expect the reasoner to interact with the optimiser. The IDE will provide the optimisation DSL to WP4 and we will handle everything there. |
@kartshy About HPC: MPI and OpenMP, when do they get enabled? |
MPI and OpenMP will be enabled by default. We call applications that use MPI and OpenMP traditional HPC applications. They can use OpenACC or OpenCL for accelerators like GPUs. |
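Expressed as a sketch of the defaults just described (field names are assumed, not from the source):

```python
# Illustrative defaults for a traditional HPC application, per the comment above.
hpc_defaults = {
    "parallelisation": {
        "MPI": True,       # enabled by default
        "OPENMP": True,    # enabled by default
        "OPENACC": False,  # opt-in, for accelerators such as GPUs
        "OPENCL": False,   # opt-in, for accelerators such as GPUs
    }
}
```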
Closing this as it is supported in the M18 deliverable. Will create a new issue if required. |
Need to support an Autotuning DSL as part of the Application optimisations.
The DSL has the following properties:
• Tuning parameters can be defined, constrained, and injected into the application source, build, or run
• It is easy to integrate with the application build and run
• It can tune for any metric output, not just runtime, and take the max, min, or average of a set of runs
The example input is:
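As a hypothetical stand-in (the actual example input is not reproduced here), a tuning specification exercising the three properties above might look like this; every name below is illustrative:

```python
# Hypothetical stand-in for the example input; all names are illustrative.
autotuning_spec = {
    "parameters": {
        "TILE_SIZE":   {"values": [8, 16, 32, 64], "inject": "source"},  # into source
        "OPT_LEVEL":   {"values": ["-O2", "-O3"],  "inject": "build"},   # into build
        "NUM_THREADS": {"values": [1, 2, 4, 8],    "inject": "run"},     # into run
    },
    "build": "make CFLAGS='$OPT_LEVEL'",
    "run": "OMP_NUM_THREADS=$NUM_THREADS ./skyline_extractor",
    "metric": {"name": "runtime", "objective": "min", "aggregate": "avg", "runs": 5},
}
```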
We need to be able to supply this input in the SODALITE IDE and then process it in the application optimiser. It should be part of an Autotuning section!