
Autolev parser #14758

Merged
merged 1 commit into from Jul 26, 2018

6 participants
@NikhilPappu
Contributor

NikhilPappu commented May 30, 2018

  • parsing
    • Added a submodule autolev which can be used to parse Autolev code to SymPy code.
  • physics.mechanics
    • Added a center of mass function in functions.py which returns the position vector of the center of
      mass of a system of bodies.
    • Added a corner case check in kane.py (Passes dummy symbols to q_ind and kd_eqs if not passed in
      to prevent errors which shouldn't occur).
  • physics.vector
    • Changed _w_diff_dcm in frame.py to get the correct results.
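As a rough illustration of the new center_of_mass helper (a minimal sketch; the particle names are made up):

```python
from sympy import symbols
from sympy.physics.mechanics import Particle, Point, ReferenceFrame
from sympy.physics.mechanics.functions import center_of_mass

m = symbols('m')
N = ReferenceFrame('N')
o = Point('o')

# two equal point masses at 0 and 2 along N.x
pa = Particle('pa', o.locatenew('pa', 0 * N.x), m)
pb = Particle('pb', o.locatenew('pb', 2 * N.x), m)

# their center of mass lies halfway between them, at 1*N.x from o
com = center_of_mass(o, pa, pb)
```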
@NikhilPappu

Contributor Author

NikhilPappu commented May 31, 2018

The directory and files aren't properly in place.
I haven't taken care of the dependencies and imports either.
I will update it soon by making the directory sympy/parsing/autolev_parser similar to sympy/parsing/latex which also uses ANTLR.

@asmeurer

Member

asmeurer commented May 31, 2018

This should follow some of the same patterns as the LaTeX parser. Files that are autogenerated should contain a warning at the top that they are autogenerated and shouldn't be edited by hand.

@asmeurer

Member

asmeurer commented May 31, 2018

Also it should use the same version of antlr if possible, so that we don't have to install different versions for the tests.

@NikhilPappu

Contributor Author

NikhilPappu commented May 31, 2018

@asmeurer I will add the warning. I don't think there will be much of a change between different versions of antlr4. I'll update the version to the one used in the LaTeX parser anyway.

@NikhilPappu

Contributor Author

NikhilPappu commented Jun 1, 2018

@asmeurer Seems like the antlr version I used (4.7.1) is the same as the one used in the LaTeX parser.
Were the LaTeX parser files generated with the target language Python 2? I am generating the antlr files with target language Python 3 and it is using type annotations (supposedly a Python 3.5 feature) everywhere which is failing on the Python 2 tests.

@NikhilPappu

Contributor Author

NikhilPappu commented Jun 1, 2018

@asmeurer I don't understand the Travis errors. Could you tell me why they are occurring?
There seems to be some error with test_setup.py
$ bin/test_travis.sh
+ [[ true == \t\r\u\e ]]
+ python bin/test_setup.py
Traceback (most recent call last):
  File "bin/test_setup.py", line 26, in <module>
    assert setup.modules == module_list, set(setup.modules).symmetric_difference(set(module_list))
AssertionError: set([])

Also the line from antlr4 import * in the antlr generated files is throwing an error regarding implicit imports. This line appears in the antlr files of the LaTeX parser as well. Why is that passing while this isn't?

@asmeurer

Member

asmeurer commented Jun 2, 2018

@NikhilPappu I would take a look at the LaTeX parser PR (#13706) to get an idea of what sorts of things you should do here to make this work.

@NikhilPappu

Contributor Author

NikhilPappu commented Jun 9, 2018

@certik @moorepants Can you please go over this PR and give me feedback? Thanks!

@certik

Member

certik commented Jun 12, 2018

I think it looks good. Overall I am against checking such large autogenerated files into git, as it increases the size of the git repository (forever). We can generate them for the release tarball.

Regarding tests, is this tested on Travis? It should be.

I see some test failures; would you mind fixing those, please?

@NikhilPappu

Contributor Author

NikhilPappu commented Jun 13, 2018

@certik I'm not sure I quite understood what you meant. Do you mean the antlr generated files? How would this be different from the LaTeX parser which also has large autogenerated files? I think generating them only for a release tarball is a good idea though.

Did you mean the unit tests? I did add a file test_autolev.py which checks the parser generated outputs against the correct respective outputs for the example Autolev codes in the test_examples folder. I think this is a good way to write tests as it is not easy to write simple one-liner unit tests. For example, the Kane() command needs the context of the whole system and can't be checked on its own.

I'll make sure the Travis build passes.


vec: ID ('>')+
| '0>'
| '1>>';


@certik

certik Jun 13, 2018

Member

In here, please add spaces before "|", so that it is aligned with the ":", just like you have done for the expr: below. I think that seems to be the accepted formatting for the g4 files.


@certik

certik Jun 13, 2018

Member

And the same for things above.

@certik

Member

certik commented Jun 13, 2018

@NikhilPappu yes, I meant the antlr generated parser. Yes, it is the same for the latex parser. If you look at the files, they even have a binary part at the beginning (properly encoded as a Python string). I think in general it's best if such files are not checked into the git repository, but rather generated in the release tarball (or on Travis for tests). It does make the git repository not "self-contained" in a sense that you have to have antlr installed in order to generate those, but I think users should use the tarball (or Conda packages, or other packages) all of which would have it generated (since they are usually built from the tarball). /cc @asmeurer

Regarding the tests in this PR, I think the way you did it is perfect. I agree with your point.

So as far as I am concerned, just do:

  • fix the Travis tests
  • change the formatting in the g4 based on my two comments above
@NikhilPappu

Contributor Author

NikhilPappu commented Jun 13, 2018

@certik
One error was due to this in the doctests and tests:

>>> from sympy.parsing.autolev import parse_autolev
>>> directory = "sympy/parsing/autolev/test_examples/"
>>> parse_autolev(directory+"test7_in.txt", "print")

Maybe this was a reason it passed on Windows but failed in Travis which uses Unix I think.
Will changing it to parse_autolev(os.path.abspath(directory+"test7_in.txt"), "print") work?
What would be the best way to read a file in a particular directory for the tests and doctests?

@moorepants

Member

moorepants commented Jun 13, 2018

Make sure to use os.path.join to build directory strings.
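For instance (file names are just illustrative):

```python
import os

# build paths from components; os.path.join inserts the right separator
# for the operating system instead of hard-coding '/'
directory = os.path.join("sympy", "parsing", "autolev", "test_examples")
path = os.path.join(directory, "test7_in.txt")
```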

from sympy.external import import_module

def parse_autolev(inp, outFile=None):
"""


@moorepants

moorepants Jun 13, 2018

Member

Please follow numpydoc standards for docstrings.


from sympy.external import import_module

def parse_autolev(inp, outFile=None):


@moorepants

moorepants Jun 13, 2018

Member

use a PEP8 linter and follow its suggestions, for example: outFile -> out_file.

@@ -0,0 +1,59 @@
Newtonian N


@moorepants

moorepants Jun 13, 2018

Member

Where do these examples come from? We need to assess the copyright and our use of them. Adding them to the sympy repo means they are licensed under the BSD license.


@NikhilPappu

NikhilPappu Jun 13, 2018

Author Contributor

All the examples added till now are from this Autolev Tutorial.
http://web.mae.ufl.edu/~fregly/PDFs/autolev_tutorial.pdf


@NikhilPappu

NikhilPappu Jun 13, 2018

Author Contributor

I would like to use some examples from Dynamics Online henceforth but as you said previously we might need to alter the code a little due to copyright issues.


@moorepants

moorepants Jun 13, 2018

Member

Does the tutorial have any copyright information?


@moorepants

moorepants Jun 13, 2018

Member

@certik Do you have any thoughts on how to handle this? I'm pretty sure both of these sources are not under any kind of open license.


@NikhilPappu

NikhilPappu Jun 13, 2018

Author Contributor

The first page says Copyright 1996-2005 by Paul Mitiguy and Keith Reckdahl. All Rights Reserved.


@certik

certik Jun 13, 2018

Member

Yes, regarding the copyright, there are only two ways out. Either

  • ask the authors to license the files we use under a BSD license

or

  • rewrite the files from scratch (without referencing them), i.e., write them as good tests, without having any similarity to the original files. You can still use the original files on your local computer to ensure it actually works, but in sympy we have to have our own files.
zero = sm.Matrix([i.collect(g) for i in zero]).reshape((zero).shape[0], (zero).shape[1])

import scipy.integrate as sc
import pydy.codegen.ode_function_generators as pd


@moorepants

moorepants Jun 13, 2018

Member

This makes pydy a dependency of sympy, no? I don't think that we want any of the numerics based on pydy functions in sympy. SymPy is currently a strict dependency of PyDy, not the other way around. What you can do is generate numeric evaluation results using evalf(). You can compare the arbitrary precision values.


@NikhilPappu

NikhilPappu Jun 13, 2018

Author Contributor

I'm not sure what you mean by using evalf() here. Don't we need to integrate the EOMs? Is it fine to use the other imports?
Is it fine to do it like you have done here: http://www.moorepants.info/blog/npendulum.html?
You don't seem to use a PyDy import there.


@moorepants

moorepants Jun 13, 2018

Member

If you want to check numerical results in SymPy you need to use things like xreplace, subs, and evalf. This calls the underlying mpmath library and outputs arbitrary precision results. For long complicated expressions, evaluating them in both Autolev and SymPy numerically for random inputs is a good way to check the correctness.

You can use lambdify as in the npendulum blog post too, but this will give you floating point precision.
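The substitute-then-evalf idea might look like this (the expression is just a stand-in, not parser output):

```python
import sympy as sm

x, y = sm.symbols('x y')
expr = sm.sin(x)**2 + sm.cos(x)**2 + y

# replace the symbols with exact rationals, then evaluate to 30 digits
# via the underlying mpmath library; the result agrees with the exact
# value 3 to well beyond floating point precision
val = expr.xreplace({x: sm.Rational(1, 3), y: sm.Integer(2)}).evalf(30)
```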


@NikhilPappu

NikhilPappu Jun 13, 2018

Author Contributor

I haven't written tests to compare the numerical outputs of SymPy and Autolev yet.
The current tests in test_autolev.py and parsing/test_examples just compare the parser generated output against the correct output (the files test_out1-10 contain what I consider the correct outputs) so that if anything is changed one can see if anything is broken.
You can look at the file test_autolev.py and it will become clear how the tests work.

The code generated here isn't for comparing numerical outputs.
The code generated here is the code for Simulation of EOMs after the system has been specified. This is the parsed equivalent of Autolev's Kane(), Input, Output and Code Dynamics() commands.

@moorepants

Member

moorepants commented Jun 13, 2018

This is looking great so far!

Some big picture comments:

  • This PR is way too big to review easily. For future PRs, think about the smallest changes you can make and add them incrementally to build up functionality. For this one, don't add any more features (reduce them if you can).
  • It isn't clear where you do equivalency checks for autolev output and input.
  • We decided to commit the autogenerated antlr files in the latex parser code, so that is what is done here. A separate PR could try to address not including these files and only having them generated in the sympy build process, as per Ondrej's suggestion. But for now I think we should follow suit.
  • I don't recommend trying to get the bicycle or any other complicated examples working here. A PR per simple autolev example would be a better way to start. Build up the functionality incrementally.
@moorepants

Member

moorepants commented Jun 13, 2018

We don't have an equivalent for NICheck(). But it seems like it wouldn't be hard to implement. The equation it creates is given on Pg 293 of Online Dynamics.

It may be that our momentum methods on particle and body do this. Not sure about dotting into the partial velocities. It might be a matter of something of this flavor:

sigma = 0
for body_or_particle in system_objects:
    sigma += body_or_particle.momentum().dot(body_or_particle.partial_velocity)

Maybe you can create a function that does what NICheck() does. We need a better, more informative name though...
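A runnable toy version of that idea for a single particle, using the existing partial_velocity helper (the momentum() and partial_velocity attributes in the snippet above are hypothetical, and this is not NICheck itself):

```python
from sympy import symbols
from sympy.physics.mechanics import (Particle, Point, ReferenceFrame,
                                     dynamicsymbols, partial_velocity)

m = symbols('m')
u = dynamicsymbols('u')          # one generalized speed
N = ReferenceFrame('N')

p = Point('p')
p.set_vel(N, u * N.x)
par = Particle('par', p, m)

# the partial velocity of p with respect to u is N.x
pvels = partial_velocity([p.vel(N)], [u], N)

# momentum dotted into the partial velocity, summed over all bodies
# (here just the one particle): m*u*N.x . N.x = m*u
sigma = par.linear_momentum(N).dot(pvels[0][0])
```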

@NikhilPappu

Contributor Author

NikhilPappu commented Jun 13, 2018

@moorepants

Some clarifications:

  • As I have mentioned in one of the above replies, I am using test_examples/test1-10_in.txt files as the
    input and checking the parser generated output against the files named test1-10_out.py which contain
    the correct outputs.
    You can have a look at parsing/tests/test_autolev.py.
  • I have checked that the numerical results for the examples used till now are the same in both Autolev
    and the generated SymPy code but I haven't written tests for it. The current tests just check the output
    codes.
  • I have parsed the Autolev commands Kane(), Input, Output and Code Dynamics() using scipy and numpy
    code for simulation of the EOMs. You seem to have confused them with the numerical tests, I think.

What I plan to do next:

  • I won't be adding any large amount of code for a while. I will work on cleaning things up a bit (fixing
    the Travis errors, pep8 conventions, adding more comments, small code changes for the convenience of
    use etc).

  • I will focus on using more simple examples like you suggested and won't include Bicycle Autolev,
    Autolev Tutorial examples 5.8 and 5.9 as they are more complicated.

  • I will add Examples 5.2 and 5.7 followed by some simple examples from Kane's book (probably by
    tweaking the names a little) after I clean up the code to complete this PR.
    Any code additions they will need should be minimal (as most parts of these examples can already be
    parsed) so the PR won't change much.

  • Should I leave the commented parts of Example 5.6 and 5.7 (have a look at them once in the Autolev
    Tutorial) for later (the more difficult examples PR)?
    They use a command of the type Kane(F1, F2) which shuffles the equations to be solved for the
    specified forces f1 and f2 instead of for the coordinates and speeds. They go on to plot these forces
    with time.
    I am thinking I'll leave NiCheck for later as well.

@certik

Member

certik commented Jun 13, 2018

Regarding the antlr generated files: the problem is that once this is merged, even if the files are removed later, the git repository will still contain them, and thus the size of it will be increased forever.

@asmeurer, what do you think we should do here?

@NikhilPappu

Contributor Author

NikhilPappu commented Jun 13, 2018

@moorepants I'll change the names of test1-10_out.py to expected_output1-10.txt as the name seems misleading. These are the correct parsed code for the input Autolev codes provided. Not the actual tests.
The tests are in parsing/tests/test_autolev.py.

@NikhilPappu

Contributor Author

NikhilPappu commented Jun 14, 2018

@asmeurer @certik
This is the kind of code I am using in test_autolev.py

directory = "sympy/parsing/tests/test_examples/"
parse_autolev(os.path.join(*(directory+inFileName).split('/')), os.path.join(*(directory+"output.py").split('/')))
correctFile = open(os.path.join(*(directory+outFileName).split('/')), 'r')
outputFile = open(os.path.join(*(directory+"output.py").split('/')), 'r')

This is passing on my PC when I run bin/test parsing but it is causing Travis errors of this sort:

 File "/home/travis/virtualenv/python2.7.14/lib/python2.7/site-packages/sympy-1.1.2.dev0-py2.7.egg/sympy/utilities/runtests.py", line 1274, in _timeout
    function()
  File "/home/travis/virtualenv/python2.7.14/lib/python2.7/site-packages/sympy-1.1.2.dev0-py2.7.egg/sympy/parsing/tests/test_autolev.py", line 39, in test_example1
    _test_examples("test1_in.txt", "expected_output1.txt")
  File "/home/travis/virtualenv/python2.7.14/lib/python2.7/site-packages/sympy-1.1.2.dev0-py2.7.egg/sympy/parsing/tests/test_autolev.py", line 12, in _test_examples
    correctFile = open(os.path.join(*(directory+outFileName).split('/')), 'r')
IOError: [Errno 2] No such file or directory: 'sympy/parsing/tests/test_examples/expected_output1.txt'

How can I fix this? How can I access specific files in the SymPy directories?
Will using something like os.path.abspath() or os.getcwd() work?
I think fixing this would fix the Travis errors.
What would be the best way to do this type of thing?

@moorepants

Member

moorepants commented Jun 14, 2018

You should use __file__ to get relative locations to the currently executing file.
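For example, inside a test module (the file names are illustrative):

```python
import os

# locate data files relative to the executing module, not the current
# working directory, so the tests pass regardless of where they are run
directory = os.path.dirname(os.path.abspath(__file__))
path = os.path.join(directory, "test_examples", "test1_in.txt")
```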

@NikhilPappu

Contributor Author

NikhilPappu commented Jun 14, 2018

@moorepants Can you show me an example? Also tests seem to be run using utilities/runtests.py in Travis while I use bin/test on my PC. Would that make any difference?

@certik

Member

certik commented Jun 14, 2018

@NikhilPappu not sure how to fix it. Try to ask on the mailinglist if anyone knows.


@certik

certik approved these changes Jul 26, 2018

@certik certik merged commit 355d9a7 into sympy:master Jul 26, 2018

2 checks passed

continuous-integration/travis-ci/pr The Travis CI build passed
sympy-bot/release-notes The release notes look OK
@certik

Member

certik commented Jul 26, 2018

I think this looks good. Thank you for your contribution!

@asmeurer

Member

asmeurer commented Jul 26, 2018

In the future, do not squash large pull requests like this into one commit. The release notes entry shows four distinct changes, so there should be distinct commits. At the very least, your commit message should be at least as descriptive as your release notes (see https://github.com/sympy/sympy/wiki/Development-workflow#writing-commit-messages).

asmeurer added a commit to sympy/sympy-bot that referenced this pull request Jul 26, 2018

@asmeurer

Member

asmeurer commented Jul 26, 2018

Also please take a look at the bot status whenever you edit the release notes or before merging to verify that they look correct. In this case, there was a bug in the way it handled the multiple bullet points (which I have fixed at sympy/sympy-bot#15).

@NikhilPappu

Contributor Author

NikhilPappu commented Jul 26, 2018

@asmeurer Sorry about the commits, Aaron. I shall follow these conventions and the rule of thumb for release notes from next time for sure. I shall also check the bot status.

Parameters
----------
inp: str


@moorepants

moorepants Jul 30, 2018

Member

Why not "input" instead of "inp"?


@NikhilPappu

NikhilPappu Jul 31, 2018

Author Contributor

Sorry I changed that. I thought it was conflicting with Python but it was just the syntax highlighting on my editor.

1. Can be the name of an output file to which the SymPy code should be written to.
2. Can be the string "print". In this case the SymPy code is written to stdout.
3. Can be the string "list". In this case it returns a list containing the SymPy code.
Each element in the list corresponds to one line of code.


@moorepants

moorepants Jul 30, 2018

Member

What happens when output=None? Maybe the default should be to stdout.


@NikhilPappu

NikhilPappu Jul 31, 2018

Author Contributor

I think that is better. I'll remove the print option and set the default to stdout.

>>> l.append("INPUT Q1=.1,Q2=.2,U1=0,U2=0") # doctest: +SKIP
>>> l.append("INPUT TFINAL=10, INTEGSTP=.01") # doctest: +SKIP
>>> l.append("CODE DYNAMICS() double_pendulum.c") # doctest: +SKIP
>>> parse_autolev("\n".join(l), "print") # doctest: +SKIP


@moorepants

moorepants Jul 30, 2018

Member

Any reason this can't be written like:

>>> """\
... My
... Autolev
... code\
... """
...

?


@NikhilPappu

NikhilPappu Jul 31, 2018

Author Contributor

It can be written in a better way. I shall change it.

kane = me.KanesMethod(frame_n, q_ind=[q1,q2], u_ind=[u1, u2], kd_eqs = kd_eqs)
fr, frstar = kane.kanes_equations([particle_p, particle_r], forceList)
zero = fr+frstar
from pydy.system import System


@moorepants

moorepants Jul 30, 2018

Member

I think the PyDy code output should be optional. We are creating a circular dependency of sorts by including it. Either the autolev parser belongs in PyDy or an extension to the parser belongs in PyDy. I think we should stick to outputting only what SymPy can do with SymPy code.

Another option could be a flag: parse_autolev(..., include_pydy=True).


@NikhilPappu

NikhilPappu Jul 31, 2018

Author Contributor

I think adding the flag is a good idea. I shall do that.

from sympy.external import import_module


def parse_autolev(inp, output=None):


@moorepants

moorepants Jul 30, 2018

Member

What version(s) of autolev code does this parse?


@NikhilPappu

NikhilPappu Jul 31, 2018

Author Contributor

I have followed the Autolev Tutorial and the Autolev version you sent me (these are version 4.1) and Dynamics Online (which uses an older version) as guides. I didn't find too much of a difference when I used the older codes in the newer version but I think I went with the newer version in case of a conflict or deprecation warning. So I would say 4.1.
Also, one has to keep in mind some conventions in some cases while writing Autolev code for the parser for it to work properly (I shall discuss all the nuances in the documentation).


@moorepants

moorepants Aug 8, 2018

Member

Ok, stating clearly that we are following the autolev 4.1 spec is good.

__import__kwargs={'fromlist': ['AutolevLexer']}).AutolevLexer
AutolevListener = import_module('sympy.parsing.autolev._antlr.autolevlistener',
__import__kwargs={'fromlist': ['AutolevListener']}).AutolevListener
except Exception:


@moorepants

moorepants Jul 30, 2018

Member

This should be a specific exception.


@NikhilPappu

NikhilPappu Jul 31, 2018

Author Contributor

I copied this style from the LaTeX parser.
I think ImportError should work though.
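Catching only ImportError can be sketched like this (an illustrative helper; sympy.external.import_module already behaves this way, returning None when the dependency is missing):

```python
def optional_import(name):
    """Return the imported module, or None if it is not installed."""
    try:
        return __import__(name)
    except ImportError:
        # only missing-dependency failures are swallowed;
        # genuine bugs raised inside the module still propagate
        return None
```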

# angvelmat = diffed * dcm2diff.T
# This one seems to produce the correct result when I checked using Autolev.
angvelmat = dcm2diff*diffed.T


@moorepants

moorepants Jul 30, 2018

Member

This should be in a separate PR with a specific test for it. What is the test case that is broken?


@NikhilPappu

NikhilPappu Jul 31, 2018

Author Contributor

I shall add a test for it.
There was no test case for this to begin with so I didn't break anything.

@@ -381,6 +382,55 @@ def gravity(acceleration, *bodies):

return gravity_force


def center_of_mass(point, *bodies):


@moorepants

moorepants Jul 30, 2018

Member

Are there any tests for this?


@NikhilPappu

NikhilPappu Jul 31, 2018

Author Contributor

I haven't added a test for this. I shall do so in the next PR.

@moorepants

Member

moorepants commented Jul 30, 2018

@NikhilPappu I've had some time to look at this. Sorry it is after it was merged. This is really nice work and a great addition. I've left some comments that can be addressed in new PRs.

Also, I'm a bit confused on how the testing works. What I imagine as a good test for this is that we write some autolev code, run the autolev code to get its symbolic results, then we run the parser on the autolev code to generate the sympy code, and then run the sympy code to get its symbolic results.

Once we have the symbolic results from both programs we can compare them in two ways:

  1. Check whether the symbolics are the same. This would involve converting the autolev results to sympy expressions, subtracting the results from the parsed code, and then simplifying to see if we get zero.

Or

  2. We substitute random arbitrary precision or floating point numbers into the symbolic expressions from both systems and compare the numerical results.

It isn't clear to me if either of these are done. And thus I am not sure how to tell whether the autolev parser generates equivalent code. Can you comment on this?

@certik

Member

certik commented Jul 30, 2018

@moorepants thanks for the review. Yes, @NikhilPappu please address the stuff in new PRs.

To answer your question about tests --- my understanding is that they test the (current) results from the parser. So if you change some code in the parser, it will break the tests, thus catching unintended changes to the parser's output. This has the advantage that it runs quickly (no pydy necessary). If a bug is discovered, i.e., the tests, as currently written, test an incorrect result, then we correct the test and fix the bug.

We can always write slower and deeper tests that actually run pydy and autolev side by side and compare the results. But the above seems like an excellent way to test the parser itself.

@moorepants

Member

moorepants commented Jul 30, 2018

I see how these are regression tests. But it isn't clear to me how we know the parser emits correct (symbolic and numerically speaking) code.

@certik

Member

certik commented Jul 30, 2018

@moorepants, yes, these are regression tests. We do not know that the parser is emitting correct code. In order to know that, we need to finish some tests in pydy, I assume. It would be nice if @NikhilPappu could write such tests also, probably in pydy, before the GSoC is over. At least a few examples.

@moorepants

Member

moorepants commented Jul 30, 2018

I'll discuss with him and I should be able to help with some.

@NikhilPappu NikhilPappu deleted the NikhilPappu:autolev_parser branch Aug 1, 2018

@NikhilPappu NikhilPappu restored the NikhilPappu:autolev_parser branch Aug 1, 2018

NikhilPappu added a commit to NikhilPappu/sympy that referenced this pull request Aug 2, 2018

Updated the parser code and made changes requested in sympy#14758.
The major changes in this commit include the code I have changed
in _listener_autolev_antlr.py. The changes to other files are
minor. I have also made the changes requested in PR sympy#14758
after it had been merged.

Some specific changes are:
1. Changed the input rule in the grammar and parser code to fix errors.
2. Added a flag include_pydy in parse_autolev.
3. Changed the doctest in __init__.py to make it look better.
4. Removed the print option. stdout is now the default.
5. Made various changes to _listener_autolev_antlr to parse
   more files. Revamped the processVariables function quite a bit.
   Changed the mass function and the pydy output code a bit followed
   by some minor changes.
6. I have also added a .subs(kindiffdict()) in the forcing full method
   of kane.py. This is required for the pydy numerical code to work in
   some cases. This doesn't break any of the test cases.

@NikhilPappu NikhilPappu referenced this pull request Aug 2, 2018

Merged

Updated parser code #15006

NikhilPappu added a commit to NikhilPappu/sympy that referenced this pull request Aug 2, 2018

Updated the parser code and made changes requested in sympy#14758.
(Same commit message as the commit above, with one addition:)
7. Changed zip to zip_longest in test_autolev.py. Also added commented
   code for the tests in the GitLab repo.

NikhilPappu added a commit to NikhilPappu/sympy that referenced this pull request Aug 5, 2018

(Same commit message as the Aug 2 commit above.)
from sympy.external import import_module


def parse_autolev(inp, output=None):


@moorepants

moorepants Aug 8, 2018

Member

I had a new thought about this main parsing function. Most functions in python that operate on text are more versatile if they can take more than a string as an input. For example the pandas read_csv() function has this primary argument:

filepath_or_buffer : str, pathlib.Path, py._path.local.LocalPath or any object with a read() method (such as a file handle or StringIO)
    The string could be a URL. Valid URL schemes include http, ftp, s3, and file. For file URLs, a host is expected. For instance, a local file could be file://localhost/path/to/table.csv

The parser function then outputs a string. This string could be sent to a buffer or file or whatever the user wants. I'm not sure that having the output file as a kwarg is a good idea, because it silos the user into doing one type of thing and isn't flexible. It basically means that users would only ever want to write a file to disk.

With that said, I think the parser API would be more flexible if it followed these more pythonic conventions.
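A hypothetical sketch of that str-or-buffer convention (read_source is an invented name, not part of the parser):

```python
import io
import os

def read_source(inp):
    """Accept a file path, an open file-like object, or raw source text."""
    if hasattr(inp, 'read'):      # file handle, io.StringIO, ...
        return inp.read()
    if os.path.isfile(inp):       # a path on disk
        with open(inp) as f:
            return f.read()
    return inp                    # otherwise treat it as the source itself

# both of these yield the same text
text1 = read_source(io.StringIO("Newtonian N"))
text2 = read_source("Newtonian N")
```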

@certik, do you have any thoughts on this?
