
List support: append method #1709

Merged: 83 commits into devel on Feb 14, 2024

Conversation


@Farouk-Echaref Farouk-Echaref commented Feb 2, 2024

Add support for the append() method of Python lists to the semantic stage, and add handling of that method to the Python printer. This fixes #1689.

Commit Summary

  • Add a class representation for the append() method, inheriting from PyccelInternalFunction. The constructor ensures compatibility between the datatype of the append() argument and that of the list, for homogeneous lists.
  • Create a ClassDef representing a NativeHomogeneousList. It includes a method append() implemented by the ListAppend class.
  • Update the condition in the get_cls_base() function to properly check the classtype of the containers.
  • Add the Python printer method _print_ListAppend() to generate the appropriate Python representation of the append() method.
  • Add a check in the semantic stage to ensure that the correct class definition (PythonList) is assigned to variables created via expressions such as a = [1].
  • Add tests for the append() method covering many scenarios in which it can be used.
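To illustrate the first bullet, here is a minimal sketch of the idea (hypothetical code, not Pyccel's actual implementation; the base class is a stand-in and the attribute names are assumptions):

```python
class PyccelInternalFunction:
    """Stand-in for Pyccel's internal-function base class (simplified)."""
    def __init__(self, *args):
        self._args = args

class TypedNode:
    """Tiny stand-in for a typed AST node."""
    def __init__(self, dtype):
        self.dtype = dtype

class HomogeneousList(TypedNode):
    """Stand-in for a homogeneous list variable; dtype is the element type."""

class ListAppend(PyccelInternalFunction):
    """Represents a call to list.append() in the semantic stage.

    The constructor checks that the datatype of the appended element is
    compatible with the element datatype of the homogeneous list.
    """
    def __init__(self, list_obj, new_elem):
        if list_obj.dtype != new_elem.dtype:
            raise TypeError(
                f"cannot append {new_elem.dtype} to a list of {list_obj.dtype}")
        super().__init__(list_obj, new_elem)
        self.list_obj = list_obj
        self.new_elem = new_elem
```

A matching _print_ListAppend() on the Python printer would then simply emit the call back as `<list>.append(<arg>)` source text for such a node.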

@Farouk-Echaref Farouk-Echaref added the Feature (adding new features) and Containers (tuples/lists/sets/maps) labels Feb 2, 2024
@Farouk-Echaref Farouk-Echaref self-assigned this Feb 2, 2024

pyccel-bot bot commented Feb 2, 2024

Hello again! Thank you for this new pull request 🤩.

Please begin by requesting your checklist using the command /bot checklist

@github-actions github-actions bot marked this pull request as draft February 2, 2024 11:55

Farouk-Echaref commented Feb 2, 2024

Here is your checklist. Please tick items off when you have completed them or determined that they are not necessary for this pull request:

  • Write a clear PR description
  • Add tests to check your code works as expected
  • Update documentation if necessary
  • Update Changelog
  • Ensure any relevant issues are linked
  • Ensure new tests are passing

@Farouk-Echaref

/bot commands


pyccel-bot bot commented Feb 3, 2024

This bot reacts to all comments which begin with /bot. This phrase can be followed by any of these commands:

  • show tests : Lists the tests which can be triggered
  • run X : Triggers the test X (acceptable values for X can be seen using show tests). Multiple tests can be specified separated by spaces.
  • try V X : Triggers the test X (acceptable values for X can be seen using show tests) using Python version V. Multiple tests can be specified separated by spaces, but all will use the same Python version.
  • mark as ready : Runs the PR tests. If they pass then it adds the appropriate review flag and requests reviews. This command should be used when the PR is first ready for review, or when a review has been answered.
  • commands : Shows this list detailing all the commands which are understood.
  • trust user X : Tells the bot that a new user X is trusted to run workflows (this prevents misuse of GitHub actions for mining, etc.). This command can only be used by a trusted reviewer.

Beware: if you have never contributed to this repository and you are not a member of the Pyccel organisation, the bot will ignore all requests to run tests until permitted by a trusted reviewer.

@Farouk-Echaref

/bot show tests


pyccel-bot bot commented Feb 3, 2024

The following is a list of keywords which can be used to run tests. Tests in bold are run by pull requests when they are marked as ready for review:

  • linux : Runs the unit tests on a Linux system.
  • windows : Runs the unit tests on a Windows system.
  • macosx : Runs the unit tests on a MacOS X system.
  • coverage : Runs the unit tests on a Linux system and checks the coverage of the tests.
  • docs : Checks if the documentation follows the numpydoc format.
  • pylint : Runs pylint on files which are too big to be handled by codacy.
  • pyccel_lint : Runs a linter to check that Pyccel's best practices are followed.
  • spelling : Checks if everything in the documentation is spelled (and capitalised) correctly.
  • pr_tests : Runs all the tests marked in bold.
  • pickle : Checks that .pyccel files have been correctly generated and installed by the installation process.
  • editable_pickle : Checks that .pyccel files have been correctly generated and installed by the editable installation process.
  • pickle_wheel : Checks that .pyccel files have been correctly generated and packaged into the wheel.
  • anaconda_linux : Runs the unit tests on a Linux system using Anaconda for Python.
  • anaconda_windows : Runs the unit tests on a Windows system using Anaconda for Python.
  • intel : Runs the unit tests on a Linux system using the Intel compiler.

These tests can be run with the command /bot run X (multiple tests can be specified separated by spaces), or with try V X to test on Python version V.

@Farouk-Echaref Farouk-Echaref marked this pull request as ready for review February 3, 2024 16:05
@github-actions github-actions bot marked this pull request as draft February 3, 2024 16:07
Resolved review threads (outdated): CHANGELOG.md, pyccel/ast/builtin_objects/list_functions.py (two threads), pyccel/parser/semantic.py
Comment on lines 9 to 18
@pytest.mark.parametrize( 'language', [
pytest.param("c", marks = [
pytest.mark.skip(reason="append() not implemented in c"),
pytest.mark.c]),
pytest.param("fortran", marks = [
pytest.mark.skip(reason="append() not implemented in fortran"),
pytest.mark.fortran]),
pytest.param("python", marks = pytest.mark.python)
]
)
Member

Could we not define a language fixture in this file, rather than copying this long decorator many times?

Member

Is it possible to put xfail marks in a fixture? The xfail marks should be removed in a few weeks or months once the method is supported

Contributor Author

Could you please explain how that would be possible? I was unable to do it, since I'm not that familiar with writing test files using pytest, so I did it in the most basic way.

Member

Is it possible to put xfail marks in a fixture? The xfail marks should be removed in a few weeks or months once the method is supported

It should be possible:

@pytest.fixture( params=[
        pytest.param("fortran", marks = [
            pytest.mark.skip(reason="list methods not implemented in fortran"),
            pytest.mark.fortran]),
        pytest.param("c", marks = [
            pytest.mark.skip(reason="list methods not implemented in c"),
            pytest.mark.c]),
        pytest.param("python", marks = pytest.mark.python)
    ],
    scope = "module"
)
def language(request):
    return request.param

Contributor Author

Can you check the refactored version, please?

Contributor Author

It did work on my Linux machine 🤞

Member

Looks good but I wonder if we should name this fixture something like language_python_ok. At some point we will have some tests in this file which can use the existing language fixture (without the xfails) and some which need this one (and maybe some that need one where only C or only Fortran fails)

Member

Good point. How about using multiple test files and moving the tests from one to the other as they start passing?

Member

This is also possible, if you have a suggestion for good names. We may need up to 4 files:

  • Passing in Python only
  • Passing in C and Python
  • Passing in Fortran and Python
  • Passing in C, Fortran and Python

Member

Well, using four files is a bit cumbersome... We could call them

  • test_epyccel_lists.py
  • test_epyccel_lists__no_c.py
  • test_epyccel_lists__no_fortran.py
  • test_epyccel_lists__python_only.py

Another solution is using a single file, but providing a utility function which generates the correct decorators with minimum user input.

In any case I suggest that we tackle this in a future PR.
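For illustration, the utility function suggested above could look roughly like this (a hypothetical sketch based on the review discussion, not code from the PR; the function name and argument are assumptions):

```python
import pytest

def language_params(skip=()):
    """Build the pytest.param list for a `language` fixture, marking the
    languages named in `skip` as skipped (e.g. because a feature is not
    yet supported in those backends). Hypothetical helper, sketched from
    the review discussion."""
    params = []
    for lang in ("fortran", "c", "python"):
        # Always attach the per-language mark (pytest.mark.fortran, etc.)
        marks = [getattr(pytest.mark, lang)]
        if lang in skip:
            marks.append(pytest.mark.skip(reason=f"not implemented in {lang}"))
        params.append(pytest.param(lang, marks=marks))
    return params

# The fixture from the earlier comment would then become a one-liner per file:
# @pytest.fixture(params=language_params(skip=("c", "fortran")), scope="module")
# def language(request):
#     return request.param
```

This keeps the long decorator in one place, with each test file declaring only which languages currently fail.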

@github-actions github-actions bot marked this pull request as draft February 14, 2024 09:48
@pyccel-bot pyccel-bot bot removed the Ready_to_merge (Approved by senior developer. Ready for final approval and merge) label Feb 14, 2024

pyccel-bot bot commented Feb 14, 2024

@Farouk-Echaref, @yguclu has a few questions/comments about your code. Can you go through and see if you agree with them? If not, go ahead and explain why. Once you've addressed all the comments, let me know with /bot mark as ready and we will see if we can get approval.


yguclu commented Feb 14, 2024

/bot run linux

@yguclu yguclu (Member) left a comment

Good job @Farouk-Echaref!


yguclu commented Feb 14, 2024

/bot run pr_tests

@pyccel-bot pyccel-bot bot left a comment

Good job! Your PR is using all the code it added/changed.


yguclu commented Feb 14, 2024

/bot show tests


pyccel-bot bot commented Feb 14, 2024

The following is a list of keywords which can be used to run tests (identical to the list in the bot's earlier comment above).


yguclu commented Feb 14, 2024

/bot run pyccel_lint

@yguclu yguclu marked this pull request as ready for review February 14, 2024 20:39
@pyccel-bot pyccel-bot bot left a comment

Good job! Your PR is using all the code it added/changed.

@yguclu yguclu changed the title List support/append method List support: append method Feb 14, 2024
@pyccel-bot pyccel-bot bot added the Ready_to_merge (Approved by senior developer. Ready for final approval and merge) label Feb 14, 2024
@yguclu yguclu merged commit f614845 into devel Feb 14, 2024
11 checks passed
@yguclu yguclu deleted the List_Support/append_method branch February 14, 2024 21:07
Labels
Containers (tuples/lists/sets/maps) · Feature (adding new features) · Ready_to_merge (Approved by senior developer. Ready for final approval and merge)
Development

Successfully merging this pull request may close these issues.

append: used for adding elements to the end of the list. (#1689)
4 participants