Conversation

@rgreinho
Contributor

Creates a CLI for OpenAlchemy. The first command available allows users to build a package for the models and a distributable archive.

@rgreinho
Contributor Author

rgreinho commented Oct 11, 2020

I usually do not test my CLIs, because they are simply wrappers around backend functions which are well tested. I was not sure what your take on this was, so I did not remove it from the coverage.

If you would like me to test it, would you happen to have some ideas/feedback on how to do so? I know Click has some mechanism for testing, but at the end of the day it only validates the command arguments, which the CLI parser already does anyway. 🤷

@rgreinho force-pushed the add-cli branch 2 times, most recently from 72ac06f to 485769c on October 11, 2020 14:04
@rgreinho
Contributor Author

I created another draft PR (#202), which adds a new subcommand, but it is more for discussion for now.

Adding the other command was useful to see how to refactor and test the code. I would still like to submit 2 different PRs, one for each command, but I thought it would be a good idea to discuss the implementation beforehand.

@rgreinho
Contributor Author

In this PR, I removed the .python-versions file, because it was preventing me from starting VS Code, and even from creating my venv.

I figured out that, since there are 3 Python versions specified in the file, ALL of them MUST be installed with pyenv. Maybe putting only the latest version in this file would be enough then?
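
For reference, a pyenv version file simply lists one version per line, and every listed version must already be installed (e.g. via pyenv install) before pyenv-aware tools will run; the version numbers here are illustrative:

3.7.9
3.8.6
3.9.0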

@jdkandersson
Owner

jdkandersson commented Oct 13, 2020

> In this PR, I removed the .python-versions file, because it was preventing me from starting VS Code, and even from creating my venv.
>
> I figured out that, since there are 3 Python versions specified in the file, ALL of them MUST be installed with pyenv. Maybe putting only the latest version in this file would be enough then?

I don't mind the removal; I believe it was required for tox. I should remove the tox tests, as I haven't been using them, and instead only test the earlier versions and other platforms on CI.

@jdkandersson
Owner

jdkandersson commented Oct 13, 2020

> I usually do not test my CLIs, because they are simply wrappers around backend functions which are well tested. I was not sure what your take on this was, so I did not remove it from the coverage.
>
> If you would like me to test it, would you happen to have some ideas/feedback on how to do so? I know Click has some mechanism for testing, but at the end of the day it only validates the command arguments, which the CLI parser already does anyway. 🤷

Is it possible to test it in a similar way to what was done with the build command, i.e. just create a subprocess and try to run the command?

The argument for testing the CLI is to ensure that, if someone changes the interface of the functions exposed by the CLI, those tests fail and let the person know to also update the CLI.

Understanding this better now, it would be really good to exercise the CLI Python functions. A stretch would be to also execute example CLI commands as described above, since they can also serve as examples. I have never worked with CLIs before but want to get better. If you would like, I'm happy to merge this into a feature branch and write the tests on your behalf. I'm equally happy for you to write those tests.
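
A minimal sketch of such a subprocess-based test, assuming the package installs an openalchemy console script (the script name and the asserted output are illustrative):

import subprocess


def test_cli_help():
    """Run the real entry point in a subprocess and check that it responds."""
    result = subprocess.run(
        ["openalchemy", "--help"], capture_output=True, text=True, check=False
    )
    assert result.returncode == 0
    # The help text should mention the available subcommand (illustrative).
    assert "build" in result.stdout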

@jdkandersson
Owner

Please add documentation 😄

Please also update the README with the additional feature.

@rgreinho
Contributor Author

> > In this PR, I removed the .python-versions file, because it was preventing me from starting VS Code, and even from creating my venv.
> > I figured out that, since there are 3 Python versions specified in the file, ALL of them MUST be installed with pyenv. Maybe putting only the latest version in this file would be enough then?
>
> I don't mind the removal; I believe it was required for tox. I should remove the tox tests, as I haven't been using them, and instead only test the earlier versions and other platforms on CI.

That works for me. I'll remove the file. 👍

@rgreinho
Contributor Author

> > I usually do not test my CLIs, because they are simply wrappers around backend functions which are well tested. I was not sure what your take on this was, so I did not remove it from the coverage.
> > If you would like me to test it, would you happen to have some ideas/feedback on how to do so? I know Click has some mechanism for testing, but at the end of the day it only validates the command arguments, which the CLI parser already does anyway. 🤷
>
> Is it possible to test it in a similar way to what was done with the build command, i.e. just create a subprocess and try to run the command?
>
> The argument for testing the CLI is to ensure that, if someone changes the interface of the functions exposed by the CLI, those tests fail and let the person know to also update the CLI.
>
> Understanding this better now, it would be really good to exercise the CLI Python functions. A stretch would be to also execute example CLI commands as described above, since they can also serve as examples. I have never worked with CLIs before but want to get better. If you would like, I'm happy to merge this into a feature branch and write the tests on your behalf. I'm equally happy for you to write those tests.

Since argparse uses sys.argv as input, I just set it to the value I want in the test. We could also achieve the same thing using the run() function, but I think it is simpler this way.

The test_main() test is parametrized and allows calling any command we want with any combination of arguments. If you want to add more test cases, that is where they would go.
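
A self-contained sketch of that technique (the parser below is a stand-in, not the actual OpenAlchemy CLI):

import argparse
import sys

import pytest


def main():
    """A stand-in entry point that parses sys.argv the way the real CLI would."""
    parser = argparse.ArgumentParser(prog="openalchemy")
    subparsers = parser.add_subparsers(dest="command", required=True)
    build = subparsers.add_parser("build")
    build.add_argument("specfile")
    return parser.parse_args()  # reads sys.argv[1:] by default


@pytest.mark.parametrize(
    "argv, expected_command",
    [(["openalchemy", "build", "openapi.yml"], "build")],
)
def test_main(argv, expected_command, monkeypatch):
    # Setting sys.argv is enough because argparse defaults to reading it.
    monkeypatch.setattr(sys, "argv", argv)
    args = main()
    assert args.command == expected_command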

@rgreinho
Contributor Author

I believe I addressed all your comments, except for the documentation.

The README is starting to get really long. What would you think of creating a dedicated page for the CLI? Or maybe giving a small example in the README, but then having an "Advanced usage" page (something more or less like https://www.scrapd.org/usage.html)?

We could also use an extension which generates parts of the CLI documentation from the argparse parser: https://sphinx-argparse.readthedocs.io/en/latest/.

Let me know what your thoughts are regarding this part and I'll start working on the documentation.
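
For reference, sphinx-argparse is typically wired up by adding "sphinxarg.ext" to the extensions list in docs/conf.py and then pointing its directive at the function that builds the parser; the module and function names below are assumptions, not the actual code:

.. argparse::
   :module: open_alchemy.cli
   :func: build_parser
   :prog: openalchemy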

@rgreinho
Contributor Author

Oh, you switched the CI to GitHub Actions! Very cool! However, for some reason that seems unrelated to my PR, the CI is very unhappy with this patch 🤔

@jdkandersson
Owner

> Oh, you switched the CI to GitHub Actions! Very cool! However, for some reason that seems unrelated to my PR, the CI is very unhappy with this patch 🤔

I think it is because some of the jobs are failing, so I assume it cancels the rest once it detects a failure.

@jdkandersson
Owner

> > I usually do not test my CLIs, because they are simply wrappers around backend functions which are well tested. I was not sure what your take on this was, so I did not remove it from the coverage.
> > If you would like me to test it, would you happen to have some ideas/feedback on how to do so? I know Click has some mechanism for testing, but at the end of the day it only validates the command arguments, which the CLI parser already does anyway. 🤷
> >
> > Is it possible to test it in a similar way to what was done with the build command, i.e. just create a subprocess and try to run the command?
> > The argument for testing the CLI is to ensure that, if someone changes the interface of the functions exposed by the CLI, those tests fail and let the person know to also update the CLI.
> > Understanding this better now, it would be really good to exercise the CLI Python functions. A stretch would be to also execute example CLI commands as described above, since they can also serve as examples. I have never worked with CLIs before but want to get better. If you would like, I'm happy to merge this into a feature branch and write the tests on your behalf. I'm equally happy for you to write those tests.
>
> Since argparse uses sys.argv as input, I just set it to the value I want in the test. We could also achieve the same thing using the run() function, but I think it is simpler this way.
>
> The test_main() test is parametrized and allows calling any command we want with any combination of arguments. If you want to add more test cases, that is where they would go.

Yeah, I think that is ok. It would still be good to have 1 test using run() just to confirm any assumptions we're making about sys.argv.

@jdkandersson
Owner

jdkandersson commented Oct 14, 2020

> I believe I addressed all your comments, except for the documentation.
>
> The README is starting to get really long. What would you think of creating a dedicated page for the CLI? Or maybe giving a small example in the README, but then having an "Advanced usage" page (something more or less like https://www.scrapd.org/usage.html)?
>
> We could also use an extension which generates parts of the CLI documentation from the argparse parser: https://sphinx-argparse.readthedocs.io/en/latest/.
>
> Let me know what your thoughts are regarding this part and I'll start working on the documentation.

I think it would just be a feature listed in the README; it doesn't need an example there.

I think that should be ok; I'm curious to see what they look like. It is important that the docs can still be generated locally (using make html), and it would be good to know if there are any impacts on the current Read the Docs integration.

"""
# Ensure the file exists.
if not specfile.exists():
raise exceptions.BaseError(f"the specfile '{specfile}' cannot be found")
Owner

@jdkandersson commented Oct 14, 2020


My apologies, I wasn't specific enough. I wouldn't expect BaseError to ever get raised; I would expect a more specific error that inherits from BaseError to be raised. BaseError is designed to go into a try ... except ... block to catch any OpenAlchemy error. I don't think there is a good error at the moment; perhaps a CliError can be defined?
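
A minimal sketch of what that could look like, reusing BaseError from the excerpt above (the module layout is assumed):

# In the OpenAlchemy exceptions module (location assumed):
class CliError(BaseError):
    """Raised when the CLI is invoked with invalid input, e.g. a missing spec file."""


# The check above would then become:
if not specfile.exists():
    raise exceptions.CliError(f"the specfile '{specfile}' cannot be found")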

@jdkandersson
Owner

jdkandersson commented Oct 14, 2020

Regarding the tests, it looks like there are quite a few flake8 checks that are failing. I could also spot 2 test cases that failed.

Actually, it looks like flake8 was updated to a newer version and is now enforcing a maximum line length and flagging unused imports.

There is also a mypy test that is failing. I'll open a branch and try to resolve them and you should then be able to rebase.

Do you think it is caused by the change to the setup.cfg file? Maybe it isn't valid, so flake8 doesn't read it?

I just tried that and it doesn't cause that behaviour. Very strange. Maybe let's finish off the pull request and I'll stick it into a branch and investigate. The flake8 configuration that should remove those errors is in the setup.cfg file. I created another pull request here and it seems to work fine: https://github.com/jdkandersson/OpenAlchemy/pull/203/checks?check_run_id=1252333254

@rgreinho
Contributor Author

> > I believe I addressed all your comments, except for the documentation.
> > The README is starting to get really long. What would you think of creating a dedicated page for the CLI? Or maybe giving a small example in the README, but then having an "Advanced usage" page (something more or less like https://www.scrapd.org/usage.html)?
> > We could also use an extension which generates parts of the CLI documentation from the argparse parser: https://sphinx-argparse.readthedocs.io/en/latest/.
> > Let me know what your thoughts are regarding this part and I'll start working on the documentation.
>
> I think it would just be a feature listed in the README; it doesn't need an example there.
>
> I think that should be ok; I'm curious to see what they look like. It is important that the docs can still be generated locally (using make html), and it would be good to know if there are any impacts on the current Read the Docs integration.

It looks like none of the available extensions for automatically creating argparse documentation work, so I will abandon this idea 😢.

@jdkandersson
Owner

My apologies, I intended to review this today but I got stuck at work until ~10 pm ☹️. I will try to do this tomorrow.

@rgreinho
Contributor Author

No worries. I still have to fix the tests/linter checks. I'll probably do that over the weekend.

@rgreinho
Contributor Author

Did you change some version or configuration of flake8 or pytest by any chance?

When running pytest locally I keep getting this error:

---------- coverage: platform darwin, python 3.9.0-final-0 -----------
Name                  Stmts   Miss Branch BrPart  Cover   Missing
-----------------------------------------------------------------
examples/app/api.py      32     32     10      0     0%   3-49
-----------------------------------------------------------------
TOTAL                  4362     32   1586      0    99%

97 files skipped due to complete coverage.
Coverage XML written to file coverage.xml

FAIL Required test coverage of 100.0% not reached. Total coverage: 99.29%
==================================================================== short test summary info =====================================================================
FAILED examples/app/models_autogenerated.py::FLAKE8
FAILED tests/open_alchemy/models_file/test_integration.py::test_generate_type_return[relationship] - assert 1 == 0
FAILED tests/open_alchemy/models_file/test_integration.py::test_generate_type_return[simple] - assert 1 == 0
ERROR tests/examples/app/test_app.py::test_delete_id_hit - ModuleNotFoundError: No module named 'api'
ERROR tests/examples/app/test_app.py::test_models_autogen_from_dict[not required not given] - ModuleNotFoundError: No module named 'api'
ERROR tests/examples/app/test_app.py::test_models_autogen_from_dict[not required given] - ModuleNotFoundError: No module named 'api'
ERROR tests/examples/app/test_app.py::test_delete_id_miss - ModuleNotFoundError: No module named 'api'
ERROR tests/examples/app/test_app.py::test_get_id_hit - ModuleNotFoundError: No module named 'api'
ERROR tests/examples/app/test_app.py::test_models_autogen_init[not required not given] - ModuleNotFoundError: No module named 'api'
ERROR tests/examples/app/test_app.py::test_post - ModuleNotFoundError: No module named 'api'
ERROR tests/examples/app/test_app.py::test_get_id_miss - ModuleNotFoundError: No module named 'api'
ERROR tests/examples/app/test_app.py::test_models_autogen_init[not required given] - ModuleNotFoundError: No module named 'api'
ERROR tests/examples/app/test_app.py::test_patch_id_hit - ModuleNotFoundError: No module named 'api'
ERROR tests/examples/app/test_app.py::test_get - ModuleNotFoundError: No module named 'api'
ERROR tests/examples/app/test_app.py::test_post_duplicate - ModuleNotFoundError: No module named 'api'
ERROR tests/examples/app/test_app.py::test_patch_id_miss - ModuleNotFoundError: No module named 'api'
=============================================== 3 failed, 2828 passed, 325 skipped, 1 warning, 13 errors in 39.68s ===============================================

It seems that the examples/app/models_autogenerated.py file is somehow being regenerated when the tests run and is incorrectly formatted.

For the other errors, I am not exactly sure where they come from.

When running in the CI (https://github.com/jdkandersson/OpenAlchemy/runs/1303815410?check_suite_focus=true), I get:

=========================== short test summary info ============================
FAILED open_alchemy/helpers/__init__.py::FLAKE8
FAILED open_alchemy/column_factory/__init__.py::FLAKE8
FAILED examples/app/models_autogenerated.py::FLAKE8
FAILED open_alchemy/schemas/validation/helpers/__init__.py::FLAKE8
FAILED open_alchemy/models_file/artifacts/__init__.py::FLAKE8
============ 5 failed, 3164 passed, 14 warnings in 73.71s (0:01:13) ============

But I haven't modified any of these files.

I did rebase my branch on top of master.

It would be great if you had some pointers to help me get past these errors 🙏. In the meantime, I'll keep digging 😀

@jdkandersson
Owner

jdkandersson commented Oct 25, 2020

> Did you change some version or configuration of flake8 or pytest by any chance?
>
> […]
>
> It would be great if you had some pointers to help me get past these errors 🙏. In the meantime, I'll keep digging 😀

The models_autogenerated.py file for the app tests does get re-generated on each test run; it just needs black applied to it and it should be good to go. I have been meaning to automate that; the pre-commit hooks should be taking care of it.
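
A hedged sketch of such a hook in .pre-commit-config.yaml (the pinned rev is illustrative):

repos:
  - repo: https://github.com/psf/black
    rev: 20.8b1  # illustrative pin
    hooks:
      - id: black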

Something weird is going on with the CI for this PR; it does look like it is enabling some flake8 rules that should be turned off. The F401 errors should be disabled for __init__.py files but, for some reason, in this PR they are not. Maybe you need to do something similar to what you did for flake8-max-line-length = 88, but for the following flake8 settings:

per-file-ignores =
    */__init__.py:F401
    */models_autogenerated.py:E501
    *models_auto.py:E501

@rgreinho
Contributor Author

Ok, first finding: flake8 and pytest-flake8 don't use the same configuration section in the configuration file AND don't use the same configuration format (https://github.com/tholo/pytest-flake8#configuring-flake8-options-per-project-and-file).
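
A sketch of what the two flavours can look like side by side in setup.cfg; the rule codes are taken from the snippet quoted earlier, and the flake8-* option names follow the pytest-flake8 README:

# Read by the flake8 command itself.
[flake8]
max-line-length = 88
per-file-ignores =
    */__init__.py:F401
    */models_autogenerated.py:E501

# Read by pytest-flake8 when running under pytest.
[tool:pytest]
flake8-max-line-length = 88
flake8-ignore =
    */__init__.py F401
    */models_autogenerated.py E501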

But I still have a lot of weird errors about modules not found 🤔

@rgreinho
Contributor Author

OK, some progress: I got it working on my laptop:

---------- coverage: platform darwin, python 3.9.0-final-0 -----------
Name    Stmts   Miss Branch BrPart  Cover   Missing
---------------------------------------------------
---------------------------------------------------
TOTAL    4362      0   1586      0   100%

98 files skipped due to complete coverage.
Coverage XML written to file coverage.xml

Required test coverage of 100.0% reached. Total coverage: 100.00%
======================================================== 2844 passed, 325 skipped, 14 warnings in 35.17s

@rgreinho force-pushed the add-cli branch 2 times, most recently from f65c219 to 706fcd6 on November 1, 2020 17:05
@rgreinho
Contributor Author

rgreinho commented Nov 1, 2020

Alright! Everything was addressed! ✅

Since I could not get the CLI documentation to be generated from the code, I wrote some instructions by hand, finding some inspiration in the Docker CLI (https://docs.docker.com/engine/reference/commandline/build/). 📚

@rgreinho force-pushed the add-cli branch 2 times, most recently from bb0076d to e82c9a3 on November 1, 2020 17:59
Extended Description
^^^^^^^^^^^^^^^^^^^^

The ``openalchemy build`` command builds a reusable Python package based off of
Owner

@jdkandersson commented Nov 2, 2020


please use :samp:`openalchemy build` for code syntax in rst files 😄

└── simple.egg-info

By default, a source and a wheel package are built, but this behavior can be
adjusted by using the ``--format`` option.
Owner


please use :samp:`--format` for code syntax in rst files 😄
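
Concretely, the suggestion amounts to rst source like the following (the sentence is the one from the excerpt above; the same :samp: role applies to the earlier openalchemy build occurrence):

By default, a source and a wheel package are built, but this behavior can be
adjusted by using the :samp:`--format` option.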

@jdkandersson
Owner

jdkandersson commented Nov 2, 2020

> Alright! Everything was addressed! ✅
>
> Since I could not get the CLI documentation to be generated from the code, I wrote some instructions by hand, finding some inspiration in the Docker CLI (https://docs.docker.com/engine/reference/commandline/build/). 📚

Excellent, so close! I think you just need to add openalchemy and SPECFILE to the cspell.json file and then the CI will be happy 😄. Here is what you need: https://github.com/jdkandersson/OpenAlchemy/blob/pr/201/cspell.json

Creates a CLI for OpenAlchemy. The first command available allows users to build a package for the models and a distributable archive.

Other work items:
* Adds `cli` marker and unit tests.
* Updates the documentation accordingly.
@jdkandersson merged commit 65b3fa6 into jdkandersson:master on Nov 3, 2020
@rgreinho deleted the add-cli branch on November 3, 2020 02:48