
Autolev parser #14758

Merged (1 commit) on Jul 26, 2018
Conversation

@NikhilPappu (Contributor) commented May 30, 2018

  • parsing
    • Added a submodule autolev which can be used to parse Autolev code to SymPy code.
  • physics.mechanics
    • Added a center of mass function in functions.py which returns the position vector of the center of
      mass of a system of bodies.
    • Added a corner case check in kane.py (Passes dummy symbols to q_ind and kd_eqs if not passed in
      to prevent errors which shouldn't occur).
  • physics.vector
    • Changed _w_diff_dcm in frame.py to get the correct results.

@NikhilPappu (Contributor, Author) commented May 31, 2018

The directory and files aren't properly in place.
I haven't taken care of the dependencies and imports either.
I will update it soon by making the directory sympy/parsing/autolev_parser similar to sympy/parsing/latex which also uses ANTLR.

@asmeurer (Member)

This should follow some of the same patterns as the LaTeX parser. Files that are autogenerated should contain a warning at the top that they are autogenerated and shouldn't be edited by hand.

@asmeurer (Member)

Also it should use the same version of antlr if possible, so that we don't have to install different versions for the tests.

@NikhilPappu (Contributor, Author)

@asmeurer I will add the warning. I don't think there will be much of a change between different versions of antlr4. I'll update the version to the one used in the LaTeX parser anyway.

@Abdullahjavednesar added the "PR: author's turn" label (The PR has been reviewed and the author needs to submit more changes.) on May 31, 2018
@NikhilPappu (Contributor, Author)

@asmeurer Seems like the antlr version I used (4.7.1) is the same as the one used in the LaTeX parser.
Were the LaTeX parser files generated with the target language Python 2? I am generating the antlr files with target language Python 3 and it is using type annotations (supposedly a Python 3.5 feature) everywhere which is failing on the Python 2 tests.

@NikhilPappu (Contributor, Author)

@asmeurer I don't understand the Travis errors. Could you tell me why they are occurring?
There seems to be some error with test_setup.py
$ bin/test_travis.sh
+ [[ true == \t\r\u\e ]]
+ python bin/test_setup.py
Traceback (most recent call last):
  File "bin/test_setup.py", line 26, in <module>
    assert setup.modules == module_list, set(setup.modules).symmetric_difference(set(module_list))
AssertionError: set([])

Also the line from antlr4 import * in the antlr generated files is throwing an error regarding implicit imports. This line appears in the antlr files of the LaTeX parser as well. Why is that passing while this isn't?

@asmeurer (Member) commented Jun 2, 2018

@NikhilPappu I would take a look at the LaTeX parser PR (#13706) to get an idea of what sorts of things you should do here to make this work.

@NikhilPappu (Contributor, Author) commented Jun 9, 2018

@certik @moorepants Can you please go over this PR and give me feedback? Thanks!

@certik (Member) commented Jun 12, 2018

I think it looks good. Overall I am against checking such large autogenerated files into git, as it increases the size of the git repository (forever). We can generate them for the release tarball.

Regarding tests, is this tested on Travis? It should be.

I see some test failures; would you mind fixing those, please?

@NikhilPappu (Contributor, Author)

@certik I'm not sure I quite understood what you meant. Do you mean the antlr generated files? How would this be different from the LaTeX parser which also has large autogenerated files? I think generating them only for a release tarball is a good idea though.

Did you mean the unit tests? I did add a file test_autolev.py which checks the parser-generated outputs against the correct respective outputs for the example Autolev codes in the test_examples folder. I think this is a good way to write tests as it is not easy to write simple one-liner unit tests. For example, the Kane() command needs the context of the whole system and can't be checked on its own.

I'll make sure the Travis build passes.


vec: ID ('>')+
| '0>'
| '1>>';
Review comment (Member):

In here, please add spaces before "|", so that it is aligned with the ":", just like you have done for the expr: below. I think that seems to be the accepted formatting for the g4 files.

Review comment (Member):

And the same for things above.

@certik (Member) commented Jun 13, 2018

@NikhilPappu yes, I meant the antlr generated parser. Yes, it is the same for the latex parser. If you look at the files, they even have a binary part at the beginning (properly encoded as a Python string). I think in general it's best if such files are not checked into the git repository, but rather generated in the release tarball (or on Travis for tests). It does make the git repository not "self-contained" in a sense that you have to have antlr installed in order to generate those, but I think users should use the tarball (or Conda packages, or other packages) all of which would have it generated (since they are usually built from the tarball). /cc @asmeurer

Regarding the tests in this PR, I think the way you did it is perfect. I agree with your point.

So as far as I am concerned, just do:

  • fix the Travis tests
  • change the formatting in the g4 based on my two comments above

@NikhilPappu (Contributor, Author) commented Jun 13, 2018

@certik
One error was due to this in the doctests and tests:

>>> from sympy.parsing.autolev import parse_autolev
>>> directory = "sympy/parsing/autolev/test_examples/"
>>> parse_autolev(directory+"test7_in.txt", "print")

Maybe that is why it passed on Windows but failed on Travis, which I think uses Unix.
Will changing it to parse_autolev(os.path.abspath(directory+"test7_in.txt"), "print") work?
What would be the best way to read a file in a particular directory for the tests and doctests?

@moorepants (Member)

Make sure to use os.path.join to build directory strings.

from sympy.external import import_module

def parse_autolev(inp, outFile=None):
"""
Review comment (Member):

Please follow numpydoc standards for docstrings.


from sympy.external import import_module

def parse_autolev(inp, outFile=None):
Review comment (Member):

use a PEP8 linter and follow its suggestions, for example: outFile -> out_file.

@@ -0,0 +1,59 @@
Newtonian N
Review comment (Member):

Where do these examples come from? We need to assess the copyright and our use of them. Adding them to the sympy repo means they are licensed under the BSD license.

Reply (PR author):

All the examples added till now are from this Autolev Tutorial.
http://web.mae.ufl.edu/~fregly/PDFs/autolev_tutorial.pdf

Reply (PR author):

I would like to use some examples from Dynamics Online henceforth but as you said previously we might need to alter the code a little due to copyright issues.

Review comment (Member):

Does the tutorial have any copyright information?

Review comment (Member):

@certik Do you have any thoughts on how to handle this? I'm pretty sure both of these sources are not under any kind of open license.

Reply (PR author):

The first page says Copyright 1996-2005 by Paul Mitiguy and Keith Reckdahl. All Rights Reserved.

Review comment (Member):

Yes, regarding the copyright, there are only two ways out. Either

  • ask the authors to license the files we use under a BSD license

or

  • rewrite the files from scratch (without referencing them), i.e., write them as good tests, without having any similarity to the original files. You can still use the original files on your local computer to ensure it actually works, but in sympy we have to have our own files.

zero = sm.Matrix([i.collect(g) for i in zero]).reshape(zero.shape[0], zero.shape[1])

import scipy.integrate as sc
import pydy.codegen.ode_function_generators as pd
Review comment (Member):

This makes pydy a dependency of sympy, no? I don't think that we want any of the numerics based on pydy functions in sympy. SymPy is currently a strict dependency of PyDy, not the other way around. What you can do is generate numeric evaluation results using evalf(). You can compare the arbitrary precision values.

Reply from @NikhilPappu (PR author), Jun 13, 2018:

I'm not sure what you mean by using evalf() here. Don't we need to integrate the EoMs? Is it fine to use the other imports?
Is it fine to do it like you have done here: http://www.moorepants.info/blog/npendulum.html?
You don't seem to use a PyDy import there.

Review comment (Member):

If you want to check numerical results in SymPy you need to use things like xreplace, subs, and evalf. This calls the underlying mpmath library and outputs arbitrary precision results. For long complicated expressions, evaluating them in both Autolev and SymPy numerically for random inputs is a good way to check the correctness.

You can use lambdify as in the npendulum blog post too, but this will give you floating point precision.
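
A minimal sketch of such a numerical spot check; the expressions and test values here are illustrative, not taken from this PR:

from sympy import Rational, cos, sin, symbols

q1, q2 = symbols('q1 q2')
expr_parsed = sin(q1)*cos(q2) + q1*q2       # stand-in for an expression from the parser-generated code
expr_reference = cos(q2)*sin(q1) + q2*q1    # the same quantity written down independently

# Substitute exact rationals, then evaluate with evalf (backed by mpmath) to a
# chosen precision; the two results should agree to roughly that precision.
vals = {q1: Rational(1, 3), q2: Rational(7, 5)}
v1 = expr_parsed.xreplace(vals).evalf(30)
v2 = expr_reference.xreplace(vals).evalf(30)
assert abs(v1 - v2) < Rational(1, 10)**25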

Reply from @NikhilPappu (PR author), Jun 13, 2018:

I haven't written tests to compare the numerical outputs of SymPy and Autolev yet.
The current tests in test_autolev.py and parsing/test_examples just compare the parser generated output against the correct output (the files test_out1-10 contain what I consider the correct outputs) so that if anything is changed one can see if anything is broken.
You can look at the file test_autolev.py and it will become clear how the tests work.

The code generated here isn't for comparing numerical outputs; it is the code for simulating the EoMs after the system has been specified. This is the parsed equivalent of Autolev's Kane(), Input, Output and Code Dynamics() commands.

@moorepants (Member)

This is looking great so far!

Some big picture comments:

  • This PR is way too big to review easily. For future PRs think about the smallest changes you can make and add them incrementally to build up functionality. For this one, don't add any more features (reduce them if you can).
  • It isn't clear where you do equivalency checks for autolev output and input.
  • We decided to commit the autogenerated antlr files in the latex parser code, so that is what is done here. A separate PR could try to address not including these files and only having them generated in the sympy build process, as per Ondrej's suggestion. But for now I think we should follow suit.
  • I don't recommend trying to get the bicycle or any other complicated examples working here. A PR per simple autolev example would be a better way to start. Build up the functionality incrementally.

@moorepants (Member)

We don't have an equivalent for NICheck(). But it seems like it wouldn't be hard to implement. The equation it creates is:
[screenshot of the equation from p. 293 of Dynamics Online]

It may be that our momentum methods on particle and body do this; not sure about dotting into the partial velocities. It might be a matter of something of this flavor:

sigma = 0
for body_or_particle in system_objects:
    sigma += body_or_particle.momentum().dot(body_or_particle.partial_velocity)

Maybe you can create a function that does what NICheck() does. We need a better, more informative name though...

@NikhilPappu (Contributor, Author) commented Jun 13, 2018

@moorepants

Some clarifications:

  • As I have mentioned in one of the above replies, I am using test_examples/test1-10_in.txt files as the
    input and checking the parser generated output against the files named test1-10_out.py which contain
    the correct outputs.
    You can have a look at parsing/tests/test_autolev.py.
  • I have checked that the numerical results for the examples used till now are the same in both Autolev
    and the generated SymPy code but I haven't written tests for it. The current tests just check the output
    codes.
  • I have parsed the Autolev commands Kane(), Input, Output and Code Dynamics() into scipy and numpy
    code for simulation of the EoMs. I think you may have confused these with numerical tests.

What I plan to do next:

  • I won't be adding any large amount of code for a while. I will work on cleaning things up a bit (fixing
    the Travis errors, following PEP 8 conventions, adding more comments, small code changes for
    convenience of use, etc.).

  • I will focus on using more simple examples like you suggested and won't include Bicycle Autolev,
    Autolev Tutorial examples 5.8 and 5.9 as they are more complicated.

  • I will add Examples 5.2 and 5.7 followed by some simple examples from Kane's book (probably by
    tweaking the names a little) after I clean up the code to complete this PR.
    Any code additions they will need should be minimal (as most parts of these examples can already be
    parsed) so the PR won't change much.

  • Should I leave the commented parts of Examples 5.6 and 5.7 (have a look at them in the Autolev
    Tutorial) for later (the more difficult examples PR)?
    They use a command of the type Kane(F1, F2) which shuffles the equations to be solved for the
    specified forces f1 and f2 instead of for the coordinates and speeds. They go on to plot these forces
    with time.
    I am thinking I'll leave NiCheck for later as well.

@certik (Member) commented Jun 13, 2018

Regarding the antlr generated files: the problem is that once this is merged, even if the files are removed later, the git repository will still contain them, and thus the size of it will be increased forever.

@asmeurer, what do you think we should do here?

@NikhilPappu (Contributor, Author) commented Jun 13, 2018

@moorepants I'll change the names of test1-10_out.py to expected_output1-10.txt as the current names seem misleading. These files contain the correct parsed code for the provided input Autolev codes; they are not the actual tests.
The tests are in parsing/tests/test_autolev.py.

@NikhilPappu (Contributor, Author)

@asmeurer @certik
This is the kind of code I am using in test_autolev.py

directory = "sympy/parsing/tests/test_examples/"
parse_autolev(os.path.join(*(directory+inFileName).split('/')), os.path.join(*(directory+"output.py").split('/')))
correctFile = open(os.path.join(*(directory+outFileName).split('/')), 'r')
outputFile = open(os.path.join(*(directory+"output.py").split('/')), 'r')`

This is passing on my PC when I run bin/test parsing but it is causing Travis errors of this sort:

 File "/home/travis/virtualenv/python2.7.14/lib/python2.7/site-packages/sympy-1.1.2.dev0-py2.7.egg/sympy/utilities/runtests.py", line 1274, in _timeout
    function()
  File "/home/travis/virtualenv/python2.7.14/lib/python2.7/site-packages/sympy-1.1.2.dev0-py2.7.egg/sympy/parsing/tests/test_autolev.py", line 39, in test_example1
    _test_examples("test1_in.txt", "expected_output1.txt")
  File "/home/travis/virtualenv/python2.7.14/lib/python2.7/site-packages/sympy-1.1.2.dev0-py2.7.egg/sympy/parsing/tests/test_autolev.py", line 12, in _test_examples
    correctFile = open(os.path.join(*(directory+outFileName).split('/')), 'r')
IOError: [Errno 2] No such file or directory: 'sympy/parsing/tests/test_examples/expected_output1.txt'

How can I fix this? How can I access specific files in the SymPy directories?
Will using something like os.path.abspath() or os.getcwd() work?
I think fixing this would fix the Travis errors.
What would be the best way to do this type of thing?

@moorepants (Member)

You should use __file__ to get relative locations to the currently executing file.
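
A minimal sketch of that pattern (the directory and file names are illustrative):

import os

def _example_path(filename):
    # Resolve paths relative to the test module itself, so they work no matter
    # where the test runner is invoked from.
    this_dir = os.path.dirname(os.path.abspath(__file__))
    return os.path.join(this_dir, 'test_examples', filename)

# e.g. open(_example_path('test1_in.txt')) inside sympy/parsing/tests/test_autolev.py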

@NikhilPappu (Contributor, Author)

@moorepants Can you show me an example? Also tests seem to be run using utilities/runtests.py in Travis while I use bin/test on my PC. Would that make any difference?

@certik (Member) commented Jun 14, 2018

@NikhilPappu not sure how to fix it. Try to ask on the mailing list if anyone knows.

@moorepants (Member)

@asmeurer (Member)

Also please take a look at the bot status whenever you edit the release notes or before merging to verify that they look correct. In this case, there was a bug in the way it handled the multiple bullet points (which I have fixed at sympy/sympy-bot#15).

@NikhilPappu (Contributor, Author)

@asmeurer Sorry about the commits, Aaron. I shall follow these conventions and the release notes rule of thumb from next time for sure. I shall also check the bot status.


Parameters
----------
inp: str
Review comment (Member):

Why not "input" instead of "inp"?

Reply (PR author):

Sorry, I changed that. I thought it was conflicting with the Python builtin, but it was just the syntax highlighting in my editor.

1. Can be the name of an output file to which the SymPy code should be written to.
2. Can be the string "print". In this case the SymPy code is written to stdout.
3. Can be the string "list". In this it returns a list containing the SymPy code.
Each element in the list corresponds to one line of code.
Review comment (Member):

What happens when output=None? Maybe the default should be to stdout.

Reply (PR author):

I think that is better. I'll remove the print option and set the default to stdout.

>>> l.append("INPUT Q1=.1,Q2=.2,U1=0,U2=0)" # doctest: +SKIP
>>> l.append("INPUT TFINAL=10, INTEGSTP=.01)" # doctest: +SKIP
>>> l.append("CODE DYNAMICS() double_pendulum.c)" # doctest: +SKIP
>>> parse_autolev("\\n".join(l), "print") # doctest: +SKIP
Review comment from @moorepants (Member), Jul 30, 2018:

Any reason this can't be written like:

>>> """\
... My
... Autolev
... code\
... """
...

?

Reply (PR author):

It can be written in a better way. I shall change it.

kane = me.KanesMethod(frame_n, q_ind=[q1,q2], u_ind=[u1, u2], kd_eqs = kd_eqs)
fr, frstar = kane.kanes_equations([particle_p, particle_r], forceList)
zero = fr+frstar
from pydy.system import System
Review comment (Member):

I think the PyDy code output should be optional. We are creating a circular dependency of sorts by including it. Either the autolev parser belongs in PyDy or an extension to the parser belongs in PyDy. I think we should stick to outputting only what SymPy can do with SymPy code.

Another option could be a flag: parse_autolev(..., include_pydy=True).
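
A sketch of how such a flag might look from the caller's side; the file name is illustrative and include_pydy is only the keyword proposed above, not an existing argument:

from sympy.parsing.autolev import parse_autolev

sympy_only = parse_autolev("double_pendulum.al")                        # plain SymPy code
with_numerics = parse_autolev("double_pendulum.al", include_pydy=True)  # also emit the PyDy/SciPy simulation code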

Reply (PR author):

I think adding the flag is a good idea. I shall do that.

from sympy.external import import_module


def parse_autolev(inp, output=None):
Review comment (Member):

What version(s) of autolev code does this parse?

Reply from @NikhilPappu (PR author), Jul 31, 2018:

I have followed the Autolev Tutorial, the Autolev version you sent me (these are version 4.1), and Dynamics Online (which uses an older version) as guides. I didn't find much of a difference when I used the older codes in the newer version, but I think I went with the newer version in case of a conflict or deprecation warning, so I would say 4.1.
Also, one has to follow some conventions in some cases when writing Autolev code for the parser for it to work properly (I shall discuss all the nuances in the documentation).

Review comment (Member):

OK, stating clearly that we are following the Autolev 4.1 spec is good.

__import__kwargs={'fromlist': ['AutolevLexer']}).AutolevLexer
AutolevListener = import_module('sympy.parsing.autolev._antlr.autolevlistener',
__import__kwargs={'fromlist': ['AutolevListener']}).AutolevListener
except Exception:
Review comment (Member):

This should be a specific exception.

Reply (PR author):

I copied this style from the LaTeX parser.
I think ImportError should work though.
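
A sketch of what catching the specific exception might look like; the module path is the one used in this PR, and the fallback behaviour is illustrative:

try:
    from sympy.parsing.autolev._antlr.autolevlistener import AutolevListener
except ImportError:
    AutolevListener = None  # antlr4 runtime or generated files not available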

angvelmat = diffed * dcm2diff.T
# angvelmat = diffed * dcm2diff.T
# This one seems to produce the correct result when I checked using Autolev.
angvelmat = dcm2diff*diffed.T
Review comment (Member):

This should be in a separate PR with a specific test for it. What is the test case that is broken?

Reply (PR author):

I shall add a test for it.
There was no test case for this to begin with so I didn't break anything.

Review comment (Member):

This change should not have been merged. Just wanted to note that this broke working code, as seen in #16824. This PR seems to have been merged without my concern being addressed.

@@ -381,6 +382,55 @@ def gravity(acceleration, *bodies):

return gravity_force


def center_of_mass(point, *bodies):
Review comment (Member):

Are there any tests for this?

Reply (PR author):

I haven't added a test for this. I shall do so in the next PR.
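
A sketch of what such a test might look like, assuming the center_of_mass(point, *bodies) signature added in this PR and that it is importable from sympy.physics.mechanics.functions:

from sympy import symbols
from sympy.physics.mechanics import Particle, Point, ReferenceFrame
from sympy.physics.mechanics.functions import center_of_mass

m = symbols('m')
N = ReferenceFrame('N')
o = Point('o')
p1, p2 = Point('p1'), Point('p2')
p1.set_pos(o, N.x)
p2.set_pos(o, -N.x)
pa1, pa2 = Particle('pa1', p1, m), Particle('pa2', p2, m)

# Two equal masses placed symmetrically about o: the center of mass should have
# no component along N.x.
assert center_of_mass(o, pa1, pa2).dot(N.x) == 0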

Review comment (Member):

This was also not addressed before merging.

@moorepants (Member)

@NikhilPappu I've had some time to look at this. Sorry it is after it was merged. This is really nice work and a great addition. I've left some comments that can be addressed in new PRs.

Also, I'm a bit confused about how the testing works. What I imagine as a good test for this is that we write some Autolev code, run it to get its symbolic results, then run the parser on the Autolev code to generate the SymPy code, and then run the SymPy code to get its symbolic results.

Once we have the symbolic results from both programs we can compare them in two ways:

  1. Check whether the symbolics are the same. This would involve converting Autolev results to
     SymPy expressions, subtracting the results from the parsed code, and then simplifying to see if
     we get zero.

or

  2. Substitute random arbitrary-precision or floating point numbers into the symbolic expressions
     from both systems and compare the numerical results.

It isn't clear to me whether either of these is done, and thus I am not sure how to tell whether the autolev parser generates equivalent code. Can you comment on this?
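
A rough sketch of the two checks described above, with illustrative stand-in expressions for the Autolev-derived and parser-generated results:

import random
from sympy import symbols, simplify, sin, cos

u1, u2 = symbols('u1 u2')
expr_autolev = sin(u1 + u2)                      # stand-in for the result transcribed from Autolev
expr_parsed = sin(u1)*cos(u2) + cos(u1)*sin(u2)  # stand-in for the result of the parsed SymPy code

# 1. Symbolic check: the difference should simplify to zero.
assert simplify(expr_autolev - expr_parsed) == 0

# 2. Numerical check: substitute random values and compare to high precision.
vals = {s: random.uniform(-1, 1) for s in (u1, u2)}
assert abs(expr_autolev.evalf(30, subs=vals) - expr_parsed.evalf(30, subs=vals)) < 1e-20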

@certik (Member) commented Jul 30, 2018

@moorepants thanks for the review. Yes, @NikhilPappu please address the stuff in new PRs.

To answer your question about tests --- my understanding is that they test the (current) results from the parser. So if you change some code in the parser, it will break the tests, which ensures that unintended changes to the parser's output are caught. This has the advantage that it runs quickly (no pydy necessary). If a bug is discovered, i.e., the tests, as currently written, check an incorrect result, then we correct the test and fix the bug.

We can always write slower and deeper tests that actually run pydy and Autolev side by side and compare the results. But the above seems like an excellent way to test the parser itself.

@moorepants (Member) commented Jul 30, 2018

I see how these are regression tests. But it isn't clear to me how we know the parser emits correct (symbolic and numerically speaking) code.

@certik (Member) commented Jul 30, 2018

@moorepants, yes, these are regression tests. We do not know that the parser is emitting correct code. In order to know that, we need to finish some tests in pydy, I assume. It would be nice if @NikhilPappu could write such tests as well, probably in pydy, before GSoC is over. At least a few examples.

@moorepants (Member)

I'll discuss with him and I should be able to help with some.

@NikhilPappu deleted the autolev_parser branch on August 1, 2018 19:33
@NikhilPappu restored the autolev_parser branch on August 1, 2018 19:35
@NikhilPappu mentioned this pull request on Aug 2, 2018
NikhilPappu added a commit to NikhilPappu/sympy that referenced this pull request Aug 5, 2018
The major changes in this commit include the code I have changed
in _listener_autolev_antlr.py. The changes to other files are
minor. I have also made the changes requested in PR sympy#14758
after it had been merged.

Some specific changes are:
1. Changed the input rule in the grammar and parser code to fix errors.
2. Added a flag include_pydy in parse_autolev.
3. Changed the doctest in __init__.py to make it look better.
4. Removed the print option. stdout is now the default.
5. Made various changes to _listener_autolev_antlr to parse
   more files. Revamped the processVariables function quite a bit.
   Changed the mass function and the pydy output code a bit followed
   by some minor changes.
6. I have also added a .subs(kindiffdict()) in the forcing_full method
   of kane.py. This is required for the pydy numerical code to work in
   some cases. This doesn't break any of the test cases.
7. Changed zip to zip_longest in test_autolev.py. Also added commented
   code for the tests in the GitLab repo.
from sympy.external import import_module


def parse_autolev(inp, output=None):
Review comment (Member):

I had a new thought about this main parsing function. Most functions in Python that operate on text are more versatile if they can take more than just a string as input. For example, the pandas read_csv() function has this primary argument:

filepath_or_buffer : str, pathlib.Path, py._path.local.LocalPath or any object with a read() method (such as a file handle or StringIO)
    The string could be a URL. Valid URL schemes include http, ftp, s3, and file. For file URLs, a host is expected. For instance, a local file could be file://localhost/path/to/table.csv

The parser function then outputs a string. This string could be sent to a buffer or file or whatever the user wants. I'm not sure that having the output file as a kwarg is a good idea, because it silos the user into doing one type of thing and isn't flexible. It basically assumes that users would only ever want to write a file to disk.

With that said, I think the parser API would be more flexible if it followed these more pythonic conventions.

@certik, do you have any thoughts on this?
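
A sketch of the kind of signature being suggested here (the function name and fallback logic are illustrative, not the current implementation): accept a path, a raw string of Autolev code, or any object with a read() method, and simply return the generated SymPy code as a string so the caller decides where it goes.

import io
import os

def parse_autolev_flexible(source):
    if hasattr(source, 'read'):                                # file handle, StringIO, ...
        code = source.read()
    elif isinstance(source, str) and os.path.isfile(source):   # path on disk
        with open(source) as f:
            code = f.read()
    else:                                                      # raw Autolev source
        code = source
    # ... run the ANTLR-generated parser on `code` ...
    generated = code  # placeholder for the real translation step
    return generated

# usage: parse_autolev_flexible(io.StringIO("NEWTONIAN N"))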

@certik (Member) commented Oct 16, 2019 via email

@moorepants (Member)

It was fixed in PR #16828. The issue is that I'm teaching a course with the package now and the bug is present in the latest release of SymPy which is what my students are using.

@@ -121,6 +121,9 @@ def __init__(self, frame, q_ind, u_ind, kd_eqs=None, q_dependent=None,
                 u_auxiliary=None):

        """Please read the online documentation. """
        if not q_ind:
            q_ind = [dynamicsymbols('dummy_q')]
            kd_eqs = [dynamicsymbols('dummy_kd')]
Review comment (Member):

This change was also introduced with no tests or explanation.

@certik (Member) commented Oct 16, 2019

> It was fixed in PR #16828. The issue is that I'm teaching a course with the package now and the bug is present in the latest release of SymPy which is what my students are using.

I merged it, I am really sorry that it broke things. I am glad it is fixed. What are the possible paths forward to get it working for your students? I know @asmeurer is trying to do a new release, but that might not be on time for your classes. We can upload a conda package into a special channel just for you, using the latest master. Let me know what you want to do.

@moorepants (Member)

I'm running a new jupyterhub server for the class. I can make a new docker image with the patch to make it available to the students. Don't worry about it. I just wanted to document what happened here and review the other changes.

@certik (Member) commented Oct 16, 2019 via email
