
allow users to evaluate an AST directly #51

Closed
wants to merge 1 commit

Conversation

jmdejong (Contributor)

Sometimes the AST will come from a different source (for example when using hy).
Calling run directly misses some initialization and error handling.

This adds an eval_ast function, which is like eval but takes an AST instead of a string.
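A minimal stdlib-only illustration of the idea (this is not asteval's actual API): Python itself can evaluate a pre-built AST by compiling the node directly, which is the capability an eval_ast entry point would expose for the interpreter.

```python
import ast

# Build an AST from some source other than a string handed to eval();
# here ast.parse stands in for a tool such as hy that emits standard
# Python AST nodes.
tree = ast.parse("1 + 2 * 3", mode="eval")

# The stdlib can evaluate the AST directly by compiling the node
# itself, skipping the string-parsing step.
result = eval(compile(tree, filename="<ast>", mode="eval"))
print(result)  # prints 7
```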

@codecov-io commented Sep 28, 2018

Codecov Report

Merging #51 into master will increase coverage by 0.08%.
The diff coverage is 100%.


@@            Coverage Diff             @@
##           master      #51      +/-   ##
==========================================
+ Coverage   91.44%   91.52%   +0.08%     
==========================================
  Files           4        4              
  Lines         736      743       +7     
==========================================
+ Hits          673      680       +7     
  Misses         63       63
Impacted Files Coverage Δ
asteval/asteval.py 91.28% <100%> (+0.1%) ⬆️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 392b9c6...74e27b2.

@newville (Member)

Why not just pass the ast to run? I think that should work, but maybe not. Maybe it is just a documentation issue?

Don't know anything about hy. I would caution against saying that we support ast generated by hy or any other ast-generator without a good amount of testing and probably talking with the relevant developers.

@jmdejong (Contributor, Author)

> Why not just pass the ast to run?

run misses a few features, like resetting self.error and self.start_time. In my case I kept getting time-limit-exceeded errors because self.start_time was never reset.
run also does not have the same error handling as eval, which makes the two inconsistent.
These problems are easy to fix on the user side too, but this turns them into actual API functionality.
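The state-reset problem described above can be sketched with a toy class. The attribute names (self.error, self.start_time) mirror those mentioned in this thread, but the class itself is illustrative and is not asteval's implementation:

```python
import ast
import time

class TinyInterpreter:
    """Toy model of the pattern under discussion: eval-style entry
    points reset per-call state (error list, start time) that a bare
    run() does not."""

    max_time = 5.0  # seconds allowed per evaluation (invented limit)

    def __init__(self):
        self.error = []
        self.start_time = time.time()

    def run(self, node):
        # A bare run() trusts that state was already reset; if
        # start_time is stale, a long-lived interpreter hits spurious
        # "time limit exceeded" errors.
        if time.time() - self.start_time > self.max_time:
            self.error.append("time limit exceeded")
            return None
        expr = ast.Expression(node.body[0].value)
        return eval(compile(expr, "<ast>", "eval"))

    def eval_ast(self, node):
        # The proposed entry point: reset per-call state, then run.
        self.error = []
        self.start_time = time.time()
        return self.run(node)

interp = TinyInterpreter()
interp.start_time -= 100           # simulate a stale timer from an old call
tree = ast.parse("2 + 3")
assert interp.run(tree) is None    # bare run() trips the stale limit
assert interp.eval_ast(tree) == 5  # eval_ast resets state first
```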

> I would caution against saying that we support ast generated by hy

Hy parses its code into the same AST as Python. If a program can evaluate the output of ast.parse, it can evaluate the AST produced by hy.

On the other hand, I don't mind holding this until the refactoring of the exception handling. I think that would already fix a lot of the issues here.

@newville (Member)

> Why not just pass the ast to run?
>
> run misses a few features, like resetting self.error and self.start_time. In my case I kept getting time limit exceeded errors because it wasn't reset.
> run also does not have the same error handling as in eval, making that inconsistent.
> These problems are easy to fix on the user side too, but this makes it into actual API functionality.

I would say to try to fix run rather than add a different method to evaluate an AST.

> I would caution against saying that we support ast generated by hy

> Hy parses its code into the same AST as Python. If a program can evaluate the output of ast.parse it can evaluate the AST produced by hy.

Uh, well, maybe. First, why does hy generate an AST, and why would someone want to use asteval to evaluate that AST? If hy is generating AST, what do they intend to do with it? For sure, other tools generate AST for lexical analysis and so on.

asteval expects a symbol table to be present and for symbols named in the AST (including functions and builtins) to be found. asteval does not support all AST branches that can be generated by Python. Because of that, we cannot guarantee that AST generated by some other tool can be evaluated by asteval. It may be that it can work in some cases. And it may be that we (or the people who write hy) want that to work, but no one has actually said that.
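The limitation above can be made concrete with a short sketch. The whitelist here is invented for illustration and is not asteval's real supported-node set; the point is only that an evaluator supporting a subset of AST node types can detect unsupported branches in foreign AST before trying to run it:

```python
import ast

# Hypothetical whitelist of supported node types (illustrative only).
ALLOWED = {ast.Module, ast.Expr, ast.Expression, ast.BinOp,
           ast.Constant, ast.Add, ast.Mult, ast.Name, ast.Load}

def check_supported(tree):
    """Return the names of node types in `tree` not covered by the whitelist."""
    return {type(n).__name__ for n in ast.walk(tree) if type(n) not in ALLOWED}

print(check_supported(ast.parse("1 + 2")))                # set() -> fully supported
print(sorted(check_supported(ast.parse("lambda x: x"))))  # ['Lambda', 'arg', 'arguments']
```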

Changing asteval to be able to evaluate AST generated by other tools is not a high priority. It might be desirable, but it might not be. If it is, we'd probably want to work with the people generating that AST.

> On the other hand I don't mind waiting with this until the refactoring of the exception handling. I think that would already fix a lot of the issues here.

I'm pretty sure we do not want two methods that can evaluate AST.

@newville (Member)

@jmdejong closing this. I don't think we want to try to evaluate AST produced by some other module without carefully identifying the source of that code.

newville closed this Nov 17, 2018
3 participants