[Question] Customize the number of input/output variables in generated graphs #117
Do you want to generate only a chain of operators? Or do you hope that, model-wise, you only need to create one input and compare one output? Both of these would need to be implemented. Just let me know :)
Hi @ntcmp2u, thanks for your interest in nnsmith. I assume you mean "model-wise" single input/output.
@ganler @lazycal Thank you so much for the quick response. Yes, what I mean is the "model-wise" single input/output. Although this may impact the diversity of the generated graphs, I believe it could be valuable for simulating a real-world scenario -- most of the time, a trained model has a single input. I will investigate the pick_var_group function to implement it. Thank you again for your assistance.
There are two ways to make a model produce only one output. The first is what I suggested: generate a graph with only one leaf node. Another, simpler way is to "cheat" by marking only one value in the graph as an output (the others can then be cut by DCE). Since you are talking about real-world scenarios, I assume you mean the first one.

For that, pick_var_groups is not the right place to work on, and it is completely unnecessary (and hard) to hack the codebase there: I made that function for grouping connections to rank/dtype-compatible variables and placeholders, whereas your desired constraint is on the model topology. The easiest way, IMO, to achieve this topological constraint is to (i) generate a large model; then (ii) pick an intermediate value, mark it alive, run a use-def analysis, and preserve only its use chain (this is similar to clipping a subgraph out of a larger one such that the subgraph has a single input and a single output). This is very easy in NNSmith because the GraphIR is in SSA form and has built-in use-def analysis support.

That being said, let me know if you want me to quickly implement this for you, so that you can enable it with a command-line flag. Of course, you are always welcome to try it on your own and even upstream the patch.

NB: for real-world-like models, or whatever structure users intend, we are building a DSL for describing arbitrary desired model patterns. It is not going to ship very soon, but stay tuned :)
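The slicing idea above can be sketched roughly as follows. This is a minimal toy example of backward slicing over a use-def graph, not NNSmith's actual GraphIR API; the `defs` mapping, value names, and operator comments are made-up assumptions for illustration only.

```python
# Hypothetical sketch of step (ii): pick one value as the sole output and
# keep only what it transitively depends on; everything else is dead code.

def backward_slice(defs, output):
    """Return every value the chosen `output` transitively depends on.

    `defs` maps each SSA value to the list of values it is computed from.
    Values outside the returned set are dead once `output` is the only
    live output, so removing them yields a single-output subgraph.
    """
    live, stack = set(), [output]
    while stack:
        v = stack.pop()
        if v in live:
            continue
        live.add(v)
        stack.extend(defs.get(v, []))  # walk def edges backwards
    return live

# Toy SSA graph with two leaves (v5, v6); we keep only v5's slice.
defs = {
    "v2": ["v0"],        # v2 = relu(v0)
    "v3": ["v0", "v1"],  # v3 = add(v0, v1)
    "v4": ["v2"],        # v4 = sigmoid(v2)
    "v5": ["v4"],        # v5 = tanh(v4)   <- chosen output
    "v6": ["v3"],        # v6 = abs(v3)    <- dead after slicing
}
print(sorted(backward_slice(defs, "v5")))  # → ['v0', 'v2', 'v4', 'v5']
```

Note that in this toy example the slice ends at a single source (`v0`), so the clipped subgraph is single-input as well; in general you may need to pick the output value so that its slice reaches only one placeholder.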
@ganler I tried to implement it for a few hours but failed to do it the right way. Perhaps I still need some time to get familiar with NNSmith's implementation. If possible, could you implement it and expose a command-line flag? Thank you so much for your help.
@ntcmp2u No worries at all. I will find time this weekend. -- And don't feel frustrated: it is hard to extend a big, weakly documented codebase at the beginning (enhancing the docs is a longer-term plan, since the project is currently maintained and developed by a very small team). Meanwhile, feel free to post any questions about the implementation if you are interested. Thanks.
@ganler Hi, sorry for bothering you. I just want to know whether the one-leaf generation has been implemented. If you can't find the time, perhaps I can try to understand the use-def analysis and implement it myself.
@ntcmp2u Sorry for the delay (I totally forgot about this... I need to update my TODO list more promptly lol). Once #119 is merged you can try: `python nnsmith/cli/model_gen.py model.type=torch backend.type="torchjit" debug.viz=1 mgen.method="single-io-cinit" mgen.max_nodes=10`. It should generate a graph with a single input and a single output.
@ganler Thank you so much for the help!
Hi. I am currently learning to use nnsmith. I would like to know whether I can constrain the generated graph to have only a single input variable and a single output variable.
Is there any way to implement this quickly?