diff --git a/README.rst b/README.rst index e28f6b2b9..e52651368 100644 --- a/README.rst +++ b/README.rst @@ -16,7 +16,7 @@ It also provides how-to information and summarizes packaging, coding, and documentation styles. It concludes with an abstractions section that explains how apps and complex services expose only functionalities that matter to users. -From the **About** area for the repository, you can click the link for the latest web-based -release of this guide. In the `Releases `_ +From the **About** area for the repository, you can click the link to view +the latest web-based release of this guide. In the `Releases `_ area, you can view information about all releases. In the **Assets** area for -any release, you can download a PDF. +any release, you can download a PDF file of the guide. diff --git a/doc/source/abstractions/data-transfer.rst b/doc/source/abstractions/data-transfer.rst index 9853d13a4..a6dd08c08 100644 --- a/doc/source/abstractions/data-transfer.rst +++ b/doc/source/abstractions/data-transfer.rst @@ -102,7 +102,7 @@ languages and its efficiency over REST in terms of speed, memory, and payload size. Typically, REST data exchanges should be limited to short messages -transferred via JSON files, and gRPC should be used for large data +that are transferred using JSON files, and gRPC should be used for large data transfers and bidirectional streaming. 
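As a minimal illustration of the short-message case above, the following hedged Python sketch shows the kind of small JSON payload a REST exchange carries well; the message fields are invented for the example, and larger or streamed data is where the text recommends gRPC instead.

```python
import json

# Hypothetical short command message of the kind REST + JSON handles well.
# Field names are illustrative, not a real PyAnsys protocol.
message = {"command": "solve", "tolerance": 1e-6, "iterations": 200}

payload = json.dumps(message)   # serialize for the request body
restored = json.loads(payload)  # what the receiving service would decode

print(restored["command"])  # solve
```

For bulk result arrays or bidirectional streaming, serializing through text-based JSON like this becomes the bottleneck, which is the trade-off the section describes.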
Choosing gRPC over REST is generally preferable due to the performance diff --git a/doc/source/abstractions/service.rst b/doc/source/abstractions/service.rst index 3a85a3329..b78f9b0c1 100644 --- a/doc/source/abstractions/service.rst +++ b/doc/source/abstractions/service.rst @@ -55,13 +55,13 @@ The approach on the right has a number of advantages, including: - Simplified interface for starting MAPDL - Full documentation strings for all classes, methods, and functions -To properly abstract a service, users must have the option to +To properly abstract a service, you must have the option to either launch the service and connect to it locally if the software exists on -their machines or connect to a remote instance of the service. One +your machines or connect to a remote instance of the service. One way to do this is to include a function to launch the service. -This example includes ``launch_mapdl``, which brokers a connection via the -``Mapdl`` class: +This example includes the ``launch_mapdl`` function, which brokers a connection with +the ``Mapdl`` class: .. code:: pycon @@ -74,7 +74,7 @@ This example includes ``launch_mapdl``, which brokers a connection via the ansys.mapdl Version: 0.59.dev0 This straightforward approach connects to a local or remote instance -of MAPDL via gRPC by instantiating an instance of the ``Mapdl`` class. +of MAPDL using gRPC by instantiating an instance of the ``Mapdl`` class. At this point, because the assumption is that MAPDL is always remote, it's possible to issue commands to MAPDL, including those requiring file transfer like ``Mapdl.input``. 
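The launch-and-connect pattern described above can be sketched as follows. This is an illustrative stand-in, not the real PyMAPDL API: ``Service`` plays the role of a class like ``Mapdl``, and ``launch_service`` plays the role of ``launch_mapdl``.

```python
# Minimal sketch of the "launch or connect" service abstraction.
# All names here are hypothetical stand-ins for the real classes.


class Service:
    """Client that hides whether the backing instance is local or remote."""

    def __init__(self, host, port):
        self._host = host
        self._port = port

    @property
    def address(self):
        """Address of the instance this client is connected to."""
        return f"{self._host}:{self._port}"


def launch_service(host="127.0.0.1", port=50052):
    """Broker a connection to the service.

    A real implementation would start the product locally when it is
    installed on this machine, or simply connect when ``host`` is remote.
    """
    return Service(host, port)


client = launch_service()
print(client.address)  # 127.0.0.1:50052
```

Because callers only ever hold a ``Service`` instance, the rest of the code can assume a remote connection, which is the design point the section makes about always treating MAPDL as remote.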
diff --git a/doc/source/all-styles.rst b/doc/source/all-styles.rst index cc803c455..3dd716ad9 100644 --- a/doc/source/all-styles.rst +++ b/doc/source/all-styles.rst @@ -3,33 +3,33 @@ Style ===== In the PyAnsys ecosystem, three key styles contribute to an -enhanced developer experience: packaging style, coding style, and documentation style +enhanced developer experience: -- Packaging style focuses on creating clear, open source APIs hosted on GitHub,allowing +- **Packaging style**: Focuses on creating clear, GitHub-hosted open source APIs, allowing for reusable packages that can be updated independently of the Ansys release schedule. -- Coding style adheres to `PEP 8`_ and aligns with the conventions of major data science packages - like `numpy`_, `scipy`_, and `pandas`_, ensuring consistency and readability. -- Documentation style emphasizes the significance of well-documented APIs, - offering increased adoption, an improved on-boarding experiences, and streamlined code maintenance. +- **Coding style**: Ensures that code adheres to `PEP 8`_ and aligns with the conventions of + major data science packages like `NumPy`_, `SciPy`_, and `pandas`_ for consistency and readability. +- **Documentation style**: Emphasizes the significance of cohesive and user-friendly + content and well-documented APIs to improve the on-boarding experience and increase library adoption. All PyAnsys libraries follow the `Google developer documentation style guide - `_, incorporating sentence case, - active voice, and concise, clear sentences for cohesive and user-friendly documentation. + `_, which includes using sentence-case titles, active voice, + present tense, and clear, concise sentences. .. grid:: 3 - .. grid-item-card:: :octicon:`file-directory` Packaging + .. grid-item-card:: :octicon:`file-directory` Packaging style :link: packaging/index :link-type: doc Best practices for distributing Python code. - .. grid-item-card:: :octicon:`codespaces` Coding + .. 
grid-item-card:: :octicon:`codespaces` Coding style :link: coding-style/index :link-type: doc Best practices for writing Python code. - .. grid-item-card:: :octicon:`pencil` Documentation + .. grid-item-card:: :octicon:`pencil` Documentation style :link: doc-style/index :link-type: doc diff --git a/doc/source/coding-style/deprecation.rst b/doc/source/coding-style/deprecation.rst index e46c73b49..7269adc90 100644 --- a/doc/source/coding-style/deprecation.rst +++ b/doc/source/coding-style/deprecation.rst @@ -2,17 +2,18 @@ Deprecation best practices ========================== While deprecation best practices are outlined in -this `Deprecation library `_ , +this `deprecation documentation `_, there is no official guidance on deprecating features within Python. Thus, this topic provides deprecation best practices for PyAnsys libraries. -Whenever you deprecate a method, class, or function, you must either: +Whenever you deprecate a method, class, or function, you must take one of +these actions: -- Have the old method call the new method and raise a warning -- Raise an ``AttributeError`` if you remove the method entirely +- Have the old method call the new method and raise a warning. +- Raise an ``AttributeError`` if you remove the method entirely. -In the docstring of the old method, provide a `Sphinx Deprecated Directive +In the docstring of the old method, use a Sphinx `deprecated directive `_ that links to the new method. This way, you notify your users when you make an API change and give them a chance to change their code. Otherwise, diff --git a/doc/source/coding-style/formatting-tools.rst b/doc/source/coding-style/formatting-tools.rst index 7291282c1..0e1aa2a60 100644 --- a/doc/source/coding-style/formatting-tools.rst +++ b/doc/source/coding-style/formatting-tools.rst @@ -19,10 +19,10 @@ maintained by the Python Software Foundation. It allows for a minimum configuration to ensure that the Python code format looks almost the same across projects. 
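The deprecation guidance above — have the old method warn and delegate to the new one, and document the change with a ``deprecated`` directive in the docstring — can be sketched like this; the ``Model`` class and both method names are invented for the example.

```python
import warnings


class Model:
    """Toy class showing the deprecation pattern (names are illustrative)."""

    def new_solve(self, value):
        """Solve the model (replacement method)."""
        return 2 * value

    def solve(self, value):
        """Solve the model.

        .. deprecated:: 0.2.0
           Use :meth:`new_solve` instead.
        """
        # Old method raises a warning and delegates to the new method.
        warnings.warn(
            "'solve' is deprecated; use 'new_solve' instead.",
            DeprecationWarning,
            stacklevel=2,
        )
        return self.new_solve(value)


model = Model()
print(model.new_solve(3))  # 6
```

If the old method is removed entirely rather than delegated, the guidance is to raise an ``AttributeError`` with a message pointing at the replacement.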
-While `PEP 8`_ imposes a default line length of 79 characters, `black`_ has +While `PEP 8`_ imposes a default line length of 79 characters, Black has a default line length of 88 characters. -The minimum `black`_ configuration for a PyAnsys project should look like this: +The minimum Black configuration for a PyAnsys project should look like this: .. code-block:: toml @@ -30,19 +30,19 @@ The minimum `black`_ configuration for a PyAnsys project should look like this: line-length = "" -The ``isort`` -------------- +The ``isort`` tool +------------------ The goal of `isort`_ is to properly format ``import`` statements by making sure that they follow the standard order: -#. library -#. third-party libraries -#. custom libraries +#. Library +#. Third-party libraries +#. Custom libraries When using `isort`_ with `Black`_, it is important to properly configure both tools so that no conflicts arise. To accomplish this, use the -``--profile black`` flag in `isort`_. +``--profile black`` flag in ``isort``. .. code-block:: toml @@ -56,14 +56,14 @@ tools so that no conflicts arise. To accomplish this, use the Flake8 ------ -The goal of `Flake8` is to act as a `PEP 8`_ compliance checker. Again, if +The goal of `Flake8`_ is to act as a `PEP 8`_ compliance checker. Again, if this tool is being used with `Black`_, it is important to make sure that no conflicts arise. -The following configuration is the minimum one to set up `flake8`_ together with -`black`_. +The following configuration is the minimum one to set up Flake8 together with +Black. -The configuration for `flake8`_ must be specified in a ``.flake8`` file. +The configuration for Flake8 must be specified in a ``.flake8`` file. .. code-block:: toml @@ -72,8 +72,9 @@ The configuration for `flake8`_ must be specified in a ``.flake8`` file. extend-ignore = 'E203' Flake8 has many options that can be set within the configuration file. -For more information, see this `Flake8 documentation topic -`__. 
+For more information, see `Full Listing of Options and Their Descriptions +`__ in the Flake8 +documentation. The example configuration defines these options: @@ -83,16 +84,15 @@ The example configuration defines these options: - ``select`` Sequence of error codes that Flake8 is to report errors for. The set in the preceding configuration is a basic set of errors - for checking and is not an exhaustive list. - - For a full list of error codes and their descriptions, see this `Flake8 - documentation topic `__. + for checking and is not an exhaustive list. For more information, see + `Error/Violation Codes `__ + in the Flake8 documentation. - ``count`` Total number of errors to print when checking ends. - ``max-complexity`` - Maximum allowed McCabe complexity value for a block of code. + Maximum allowed McCabe complexity value for a block of code. The value of 10 was chosen because it is a common default. - ``statistics`` @@ -100,18 +100,19 @@ The example configuration defines these options: to print as a report when checking ends. -Add-license-headers ------------------- +The ``Add-license-headers`` pre-commit hook +------------------------------------------- The goal of the ``add-license-headers`` pre-commit hook is to add and update license headers -for files with `REUSE `_. By default, the hook runs on +for files with `REUSE `_ software. By default, the hook runs on PROTO files in any directory and on Python files in the ``src``, ``examples``, and ``tests`` directories. -You can find the MIT license that is added to the files in -`ansys/pre-commit-hooks repository - MIT.txt -`_. +In the ``ansys/pre-commit-hooks`` repository, you can find the `MIT.txt +`_ file +that is added to the files. -For information on customizing the hook, see the `README ` file. +For information on customizing the hook, see the +`README `_ file in this same repository.
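Putting the formatting tools together, a minimal ``pyproject.toml`` fragment combining the Black and isort settings discussed earlier might look like the following sketch; the line length shown is only illustrative and should match whatever limit your project standardizes on.

```toml
[tool.black]
line-length = 88  # illustrative; use your project's chosen limit

[tool.isort]
profile = "black"  # avoids conflicts between isort and Black
line_length = 88
```

The ``profile = "black"`` setting is the configuration-file equivalent of passing the ``--profile black`` flag on the isort command line.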
Code coverage ------------- @@ -124,7 +125,7 @@ For PyAnsys libraries, code coverage is done using `pytest-cov`_, a `pytest`_ pl that triggers code coverage analysis once your test suite has executed. Considering the layout presented in :ref:`Required files`, the following -configuration for code coverage is the minimum one required for a ``PyAnsys`` +configuration for code coverage is the minimum one required for a PyAnsys project: .. code-block:: toml @@ -135,18 +136,17 @@ project: [tool.coverage.report] show_missing = true -The ``pre-commit`` ------------------- +The ``pre-commit`` tool +----------------------- To ensure that every commit you make is compliant with the code style guidelines for PyAnsys, you can take advantage of `pre-commit`_ in your project. -Every time you stage some changes and try to commit them, `pre-commit`_ only +Every time you stage some changes and try to commit them, ``pre-commit`` only allows them to be committed if all defined hooks succeed. -The configuration for `pre-commit`_ must be defined in a +You must define the configuration for ``pre-commit`` in a ``.pre-commit-config.yaml`` file. The following lines present a minimum -`pre-commit`_ configuration that includes both code and documentation -formatting tools. +configuration that includes both code and documentation formatting tools. .. code-block:: yaml @@ -184,48 +184,47 @@ formatting tools. hooks: - id: add-license-headers -Installing ``pre-commit`` -~~~~~~~~~~~~~~~~~~~~~~~~~ +Install ``pre-commit`` +~~~~~~~~~~~~~~~~~~~~~~ -You can install ``pre-commit`` by running: +You can install ``pre-commit`` by running this command: .. code-block:: bash python -m pip install pre-commit -Then, ensure that you install it as a ``Git hook`` by running: +Then, ensure that you install it as a ``Git hook`` by running this command: .. 
code-block:: bash pre-commit install -Using ``pre-commit`` ~~~~~~~~~~~~~~~~~~~~ +Use ``pre-commit`` ~~~~~~~~~~~~~~~~~~ Once installed as described, ``pre-commit`` automatically triggers every time -that you try to commit a change. If any hook defined in `.pre-commit-config.yaml` -fails, you must fix the failing files, stage the new changes, and try to commit +that you try to commit a change. If any hook defined in the ``.pre-commit-config.yaml`` +file fails, you must fix the failing files, stage the new changes, and try to commit them again. -If you want to manually run ``pre-commit``, you can run: +If you want to manually run ``pre-commit``, run this command: .. code-block:: bash pre-commit run --all-files --show-diff-on-failure -This command shows the current and expected style of the code if any of -the hooks fail. +If any of the hooks fail, this command shows the current and expected style of the code. -Tox --- +The ``tox`` tool ---------------- You might consider using `tox`_ in your project. While this automation tool is similar to `Make`_, it supports testing of your package in a temporary virtual environment. Being able to test your package in isolation rather than in "local" mode guarantees reproducible builds. -Configuration for `tox`_ is stored in a ``tox.ini`` file. The minimum -configuration for a PyAnsys ``py-`` project should be: +Configuration for ``tox`` is stored in a ``tox.ini`` file. Here is the minimum +configuration for a PyAnsys ``py-`` project: .. tab-set:: @@ -242,8 +241,8 @@ contains ``requirements_tests.txt`` and ``requirements_doc.txt`` files. In addition, the ``style`` environment must execute ``pre-commit``, which guarantees the usage of this tool in your project.
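A minimal ``tox.ini`` along the lines described above might look like the following sketch. The environment names (``py``, ``style``, ``doc``) follow the conventions mentioned in the text, and the ``style`` environment runs ``pre-commit`` as required; the exact deps and commands are assumptions to adapt per project.

```ini
[tox]
envlist = style, py, doc

[testenv]
; test dependencies come from the requirements file mentioned above
deps = -r requirements_tests.txt
commands = pytest

[testenv:style]
; the style environment must execute pre-commit
deps = pre-commit
commands = pre-commit run --all-files --show-diff-on-failure

[testenv:doc]
deps = -r requirements_doc.txt
commands = sphinx-build doc/source doc/_build/html
```

With this file in place, ``tox -e style,py`` runs the style and test environments together, and plain ``tox`` runs everything in ``envlist``.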
-Installing ``tox`` -~~~~~~~~~~~~~~~~~~ +Install ``tox`` +~~~~~~~~~~~~~~~ You can install ``tox`` like any other Python package: @@ -251,10 +250,10 @@ You can install ``tox`` like any other Python package: python -m pip install tox -Using ``tox`` -~~~~~~~~~~~~~ +Use ``tox`` +~~~~~~~~~~~ -`tox`_ uses ``environments``, which are similar to ``Makefile`` rules, +The ``tox`` tool uses ``environments``, which are similar to ``Makefile`` rules, to make it highly customizable. Descriptions follow of some of the most widely used environments: @@ -262,6 +261,8 @@ widely used environments: - ``tox -e py``: Runs your test suite. - ``tox -e doc``: Builds the documentation of your project. -It is possible to run multiple environments by separating them with commas ``tox --e ,,...```. To run all available environments, simply -run ``tox``. +It is possible to run multiple environments by separating them with commas: + +``tox -e ,,...`` + +To run all available environments, simply type ``tox``. diff --git a/doc/source/coding-style/index.rst b/doc/source/coding-style/index.rst index fe52f8f17..d967e8504 100644 --- a/doc/source/coding-style/index.rst +++ b/doc/source/coding-style/index.rst @@ -1,25 +1,28 @@ + + +.. _coding_style: + Coding style ============ Coding style refers to the different rules defined in a software project that must be followed when writing source code. These rules ensure that all -the source code looks the same across different files of the -project. +source code looks the same across different files of the project. Because the PyAnsys ecosystem consists of many projects, coding style rules -are critical to: +are critical. Their use helps to achieve these goals: -#. Prevent against common programming errors -#. Limit product complexity -#. Provide an easily readable, understandable, and maintainable product -#. Establish a consistent style -#. 
Implement an objective basis for code review +- Prevent common programming errors +- Limit product complexity +- Provide an easily readable, understandable, and maintainable product +- Establish a consistent style +- Implement an objective basis for code review All PyAnsys libraries are expected to follow `PEP 8`_ and be consistent in style and -formatting with the libraries for the 'big three' data science packages: `numpy`_, -`scipy`_, and `pandas`_. +formatting with the libraries for the "big three" data science packages: `NumPy`_, +`SciPy`_, and `pandas`_. -The purpose of this section is not to repeat coding style documentation, +The purpose of this section is not to repeat coding style documentation but rather to describe coding best practices applicable to the `PyAnsys`_ project when there are any delineations, clarifications, or additional procedures above and beyond `PEP 8`_. For example, this section provides a topic on deprecation diff --git a/doc/source/coding-style/pep8.rst b/doc/source/coding-style/pep8.rst index 46304f04a..2a377a7fe 100644 --- a/doc/source/coding-style/pep8.rst +++ b/doc/source/coding-style/pep8.rst @@ -5,7 +5,7 @@ This section summarizes important coding style guidelines from `PEP 8`_ and how they apply to PyAnsys libraries. The Python community devised `PEP 8`_ to increase the readability of Python code. Some of the most popular packages within the Python ecosystem have adopted `PEP 8`_, -including `numpy`_, `scipy`_, and `pandas`_. +including `NumPy`_, `SciPy`_, and `pandas`_. Imports ------- @@ -85,7 +85,7 @@ so that they are easily searchable. Multiple imports ~~~~~~~~~~~~~~~~ -You should place imports in separate lines unless they are modules from the same +You should place imports on separate lines unless they are modules from the same package. .. tab-set:: @@ -123,11 +123,12 @@ Absolute versus relative imports You should use absolute imports over relative imports because they are more readable and reliable. -. 
tab-set:: +.. tab-set:: .. tab-item:: Use .. code-block:: python + from ansys.mapdl.core.plotting import general_plotter .. tab-item:: Avoid @@ -136,6 +137,7 @@ more readable and reliable. from .core.plotting import general_plotter + Import namespaces ~~~~~~~~~~~~~~~~~ @@ -161,21 +163,21 @@ in the *Python Anti-Patterns* documentation. Naming conventions ------------------ -To achieve readable and maintainable code, use concise and descriptive names for classes, -methods, functions, and constants. Regardless of the programming language, you must follow these +To achieve readable and maintainable code, use concise and descriptive names for functions, +classes, methods, and constants. Regardless of the programming language, you must follow these global rules to determine the correct names: -#. Choose descriptive and unambiguous names. -#. Make meaningful distinctions. -#. Use pronounceable names. -#. Use searchable names. -#. Replace magic numbers with named constants. -#. Avoid encodings. Do not append prefixes or type information. +- Choose descriptive and unambiguous names. +- Make meaningful distinctions. +- Use pronounceable names. +- Use searchable names. +- Replace magic numbers with named constants. +- Avoid encodings. Do not append prefixes or type information. Variables ~~~~~~~~~ -Do not use the characters ``'l'``, ``'O'`` , or ``'I'`` as single-character +Do not use the characters ``"l"``, ``"O"``, or ``"I"`` as single-character variable names. In some fonts, these characters are indistinguishable from the numerals one and zero. @@ -216,8 +218,8 @@ improve readability, separate words with underscores. When naming methods, follow these conventions: - Enclose only `dunder methods`_ with double underscores. -- Start a method that is to be considered private with double underscores. -- Start a method that is to be considered protected with a single underscore. 
+- Start a method that is to be protected with a single underscore. .. code:: python @@ -255,7 +257,7 @@ When naming methods, follow these conventions: .. note:: - Remember that these are only conventions for naming functions and methods. In Python + Remember that these are only conventions for naming functions and methods. In Python, there are no private or protected members, meaning that you can always access even those members that start with underscores. @@ -289,9 +291,9 @@ indentation level and avoid tabs. Indentation should be used to emphasize: - - Body of a control statement, such as a loop or a select statement - - Body of a conditional statement - - New scope blocks +- Body of a control statement, such as a loop or a select statement +- Body of a conditional statement +- New scope blocks .. code:: python @@ -308,7 +310,7 @@ Indentation should be used to emphasize: return To improve readability, add blank lines and wrap lines. You -should add two blank lines before and after all class and function +should add two blank lines before and after all function and class definitions. Inside a class, add a single blank line before any method definition. @@ -326,11 +328,9 @@ Inside a class, add a single blank line before any method definition. """Second method docstring.""" return -To make it clear when a 'paragraph' of code is complete and a new section +To make it clear when a "paragraph" of code is complete and a new section is starting, use a blank line to separate logical sections. -Instead of: .. tab-set:: .. tab-item:: Use @@ -376,7 +376,7 @@ For source code, best practice is to keep the line length at or below the length at or below 72 characters. Lines longer than these recommended limits might not display properly -on some terminals and tools or might be difficult to follow. For example, +on some terminals and tools, or might be difficult to follow. For example, this line is difficult to follow: .. tab-set:: @@ -423,15 +423,15 @@ letter. 
Here are general guidelines for writing comments: -#. Always try to explain yourself in code by making it +- Always try to explain yourself in code by making it self-documenting with clear variable names. -#. Don't be redundant. -#. Don't add obvious noise. -#. Don't use closing brace comments. -#. Don't comment out code that is unused. Remove it. -#. Use explanations of intent. -#. Clarify the code. -#. Warn of consequences. +- Don't be redundant. +- Don't add obvious noise. +- Don't use closing brace comments. +- Don't comment out code that is unused. Remove it. +- Use explanations of intent. +- Clarify the code. +- Warn of consequences. Obvious portions of the source code should not be commented. For example, the following comment is not needed: @@ -441,7 +441,7 @@ For example, the following comment is not needed: # increment the counter i += 1 -However, if code behavior is not self-apparent, it should be documented. +However, if code behavior is not apparent, it should be documented. Otherwise, future developers might remove code that they see as unnecessary. .. code:: python @@ -457,7 +457,7 @@ Inline comments Use inline comments sparingly. An inline comment is a comment on the same line as a statement. -Inline comments should be separated by two spaces from the statement. +Inline comments should be separated by two spaces from the statement: .. code:: python @@ -499,7 +499,7 @@ Write docstrings for all public modules, functions, classes, and methods. Docstrings are not necessary for private methods, but such methods should have comments that describe what they do. -To create a docstring, surround the comments with three double quotes +To create a docstring, surround the comments with three double quotation marks on either side. For a one-line docstring, keep both the starting and ending ``"""`` on the @@ -544,7 +544,7 @@ equivalence operator. 
if my_bool == True: return result -Knowing that empty sequences are evaluated to ``False``, don't compare the +Because empty sequences are evaluated to ``False``, don't compare the length of these objects but rather consider how they would evaluate by using ``bool()``. @@ -592,7 +592,7 @@ especially important when parsing arguments. Handling strings ~~~~~~~~~~~~~~~~ -Use ``.startswith()`` and ``.endswith()`` instead of slicing. +Use the ``.startswith()`` and ``.endswith()`` functions instead of slicing. .. tab-set:: @@ -638,12 +638,12 @@ systems. ) Duplicated code -~~~~~~~~~~~~~~~' +~~~~~~~~~~~~~~~ Follow the DRY principle, which states that "Every piece of knowledge must have a single, unambiguous, authoritative representation within a system." Follow this principle unless it overly complicates -the code. For instance, the following example converts Fahrenheit to Kelvin +the code. The following "Avoid" example converts Fahrenheit to Kelvin twice, which now requires the developer to maintain two separate lines that do the same thing. @@ -697,7 +697,7 @@ for this method. np.testing.assert_allclose(12.7778, fahr_to_kelvin(55)) Now, you have only one line of code to verify. You can also use -a testing framework such as ``pytest`` to test that the method is +a testing framework such as `pytest`_ to test that the method is correct. Nested blocks @@ -768,8 +768,8 @@ to circumvent nested loops. squares = [0, 1, 4, 9, 16, 25, 36, 49, 64, 81] If the loop is too complicated for creating a list comprehension, -consider creating small functions and calling these instead. For -example, to extract all consonants in a sentence: +consider creating small functions and calling these instead. Assume +that you want to extract all consonants in a sentence. .. tab-set:: @@ -807,7 +807,7 @@ example, to extract all consonants in a sentence: consonants = ['T', 'h', 's', 's', 's', 'm', 'p', 'l', 's', 'n', 't', 'n', 'c'] -The second approach is more readable and better documented. 
Additionally, +The "Use" approach is more readable and better documented. Additionally, you could implement a unit test for ``is_consonant``. Security considerations diff --git a/doc/source/coding-style/required-standard.rst b/doc/source/coding-style/required-standard.rst index c93ac5c0e..f12a52c3e 100644 --- a/doc/source/coding-style/required-standard.rst +++ b/doc/source/coding-style/required-standard.rst @@ -1,15 +1,15 @@ Required standards ================== -This section collects the required standards for any ``PyAnsys`` project. The +This page collects the required standards for any ``PyAnsys`` project. The individual configurations for the tools presented in :ref:`Code style tools` and :ref:`Documentation style tools` are combined together. The following lines should be included in :ref:`The \`\`pyproject.toml\`\` file` to indicate the configuration of the different code and documentation style tools. -Required ``pyproject.toml`` configuration ------------------------------------------ +Required ``pyproject.toml`` file configuration +---------------------------------------------- .. code-block:: toml @@ -35,8 +35,8 @@ Required ``pyproject.toml`` configuration [tool.pydocstyle] convention = "numpy" -Required ``flake8`` configuration ---------------------------------- +Required Flake8 configuration +----------------------------- The following ``.flake8`` file is also required: @@ -48,7 +48,8 @@ The following ``.flake8`` file is also required: Required ``pre-commit`` configuration ------------------------------------- -You can take advantage of :ref:`the \`\`pre-commit\`\`` by including a + +You can take advantage of `pre-commit`_ by including a ``.pre-commit-config.yaml`` file like this one in your project: .. 
code-block:: yaml @@ -86,7 +87,8 @@ GitHub CI/CD integration ------------------------ Finally, you can :ref:`Test using GitHub actions` and -create a ``style.yml`` workflow file in ``.github/workflows/``: +create a ``style.yml`` workflow file in the ``.github/workflows`` +directory: .. code-block:: yaml diff --git a/doc/source/content-writing/_static/notice-new-package-release.png b/doc/source/content-writing/_static/notice-new-package-release.png index 606c1b670..c6fafcce5 100644 Binary files a/doc/source/content-writing/_static/notice-new-package-release.png and b/doc/source/content-writing/_static/notice-new-package-release.png differ diff --git a/doc/source/content-writing/content-contrib-setup/content-dev-environment.rst b/doc/source/content-writing/content-contrib-setup/content-dev-environment.rst index 609c23009..940118c77 100644 --- a/doc/source/content-writing/content-contrib-setup/content-dev-environment.rst +++ b/doc/source/content-writing/content-contrib-setup/content-dev-environment.rst @@ -53,7 +53,7 @@ In a GitHub project, you can perform many tasks, including these: - Create a local branch, make commits with suggested additions or changes, and then create your own PR for project maintainers to review, approve, and merge. -- Integrate CI/CD (continuous integration/continuous deployment). +- Integrate GitHub CI/CD. - Automate testing and deployment processes. @@ -70,17 +70,16 @@ For more information on GitHub, see :ref:`Contributing` and the `GitHub document Install Git ----------- -You must have `Git `_ or a graphical user interface (GUI) client for Git installed. +You must have `Git`_ or a graphical user interface (GUI) client for Git installed. Because developers are accustomed to working in terminal environments, they tend to prefer using the Git command line, especially because it provides for greater control, customization, and automation. 
However, non-developer team members, such as project managers, designers, and writers, tend to prefer using a GUI client for Git. -With a GUI client like `GitHub Desktop `_ or `Git Extensions `_, -rather than having to remember complex command sequences, non-developer team members can use the -visual clues that the GUI provides to better understand branching, PRs, -merging, and history visualization. +With a GUI client like `GitHub Desktop`_ or `Git Extensions`_, rather than having to +remember complex command sequences, non-developer team members can use the visual cues +that the GUI provides to better understand branching, PRs, merging, and history visualization. If you do not yet have Git or a GUI client for Git installed, install your preferred tool from an official channel. @@ -103,7 +102,7 @@ GitHub credentials. - Join the `Ansys Internal GitHub organization `_. Writers outside of Ansys who want to contribute to the documentation for a PyAnsys -library can email `pyansys.core@ansys.com `_ +library can contact the `PyAnsys core team `_ for permission to access this project's repository. .. _Ansys_Python_Manager: @@ -137,7 +136,7 @@ you can perform these steps to install and immediately begin using this app: #. On the **Install Python** tab, install a selected Python version: #. For **Installation type**, choose **Standard** to install the standard - installation from the `Python `_ organization. + installation from the `Python`_ organization. #. For **Python version**, choose **Python 3.11** to install the latest available version. @@ -305,7 +304,7 @@ To run ``pre-commit`` locally, you must install it in your development environme Install Vale ------------ -`Vale `_ is a tool for maintaining a consistent style and voice in your +`Vale`_ is a tool for maintaining a consistent style and voice in your documentation based on a given style guide. 
When the `Ansys templates `_ tool is used to create a PyAnsys project from the ``pyansys`` or ``pyansys-advanced`` template, Vale is one of the many documentation style tools that is configured to run as part of the diff --git a/doc/source/content-writing/content-contrib-setup/doc-resources.rst b/doc/source/content-writing/content-contrib-setup/doc-resources.rst index 7874f9115..84845fae1 100644 --- a/doc/source/content-writing/content-contrib-setup/doc-resources.rst +++ b/doc/source/content-writing/content-contrib-setup/doc-resources.rst @@ -69,28 +69,28 @@ building of PyAnsys documentation. PyAnsys documentation. - `Ansys templates `_: Provides templates for creating PyAnsys projects that are compliant with PyAnsys guidelines. -- `Git `_: Provides a distributed version control system for tracking changes +- `Git`_: Provides a distributed version control system for tracking changes in source code during software development. -- `Git Extensions `_: Provides a GUI client for Git. -- `GitHub `_: Provides a web-based platform that uses Git as its underlying +- `Git Extensions`_: Provides a GUI client for Git. +- `GitHub`_: Provides a web-based platform that uses Git as its underlying version control system but goes way beyond version control, offering a collaborative environment for hosting, managing, and sharing Git repositories. -- `GitHub Desktop `_: Provides a GUI client for Git. +- `GitHub Desktop`_: Provides a GUI client for Git. - `Notepad\+\+ `_: Provides an open source text and code editor for use on Microsoft Windows, supporting around 80 programming languages with syntax highlighting and code folding. -- `pip `_: Provides a package manager for installing Python packages from the +- `pip`_: Provides a package manager for installing Python packages from the `Python Package Index `_ (PyPI). 
-- `pre-commit `_: Provides for checking the conformance of your code +- `pre-commit`_: Provides for checking the conformance of your code against predefined code style conventions. -- `Python `_: Provides a general-purpose programming language that runs on +- `Python`_: Provides a general-purpose programming language that runs on almost all system architectures and is used for a wide range of applications in different fields. - `Python in Visual Studio Code `_: Provides an extension that makes `Visual Studio Code `_ an excellent Python editor. - `Sphinx `_: Provides a Python documentation generator for generating documentation from RST, MD, and PY files. -- `Vale `_: Provides for checking RST and MD files for consistent +- `Vale`_: Provides for checking RST and MD files for consistent style and voice. - `Visual Studio Code `_: Provides a lightweight but powerful source code editor that runs on your desktop and is available for Windows, macOS, and Linux. diff --git a/doc/source/content-writing/content-contrib-setup/essentials.rst b/doc/source/content-writing/content-contrib-setup/essentials.rst index c898e1572..f7c0fd0bc 100644 --- a/doc/source/content-writing/content-contrib-setup/essentials.rst +++ b/doc/source/content-writing/content-contrib-setup/essentials.rst @@ -27,7 +27,7 @@ While you should become familiar with this entire style guide, periodically revi `Highlights `_ page to ensure that you are adhering to its most important points. When the `Ansys templates `_ tool is used to create a PyAnsys project from the -``pyansys`` or ``pyansys-advanced`` template, `Vale `_, a rule-based tool for maintaining +``pyansys`` or ``pyansys-advanced`` template, `Vale`_, a rule-based tool for maintaining a consistent style and voice in your documentation, is implemented. This tool, which is one of many run by the CI/CD process, is configured to check content in RST and Markdown (MD) files based on the *Google developer documentation style guide*.
diff --git a/doc/source/content-writing/content-how-tos/add-sphinx-extensions.rst b/doc/source/content-writing/content-how-tos/add-sphinx-extensions.rst index 62dd720b1..bfc5e6459 100644 --- a/doc/source/content-writing/content-how-tos/add-sphinx-extensions.rst +++ b/doc/source/content-writing/content-how-tos/add-sphinx-extensions.rst @@ -97,6 +97,8 @@ your documentation. Here are some examples: For more information, see :ref:`API_object_links`. +.. _add_native_sphinx_ext: + Add a native extension ---------------------- diff --git a/doc/source/content-writing/content-how-tos/create-PR.rst b/doc/source/content-writing/content-how-tos/create-PR.rst index 506827d43..2dd33b116 100644 --- a/doc/source/content-writing/content-how-tos/create-PR.rst +++ b/doc/source/content-writing/content-how-tos/create-PR.rst @@ -27,11 +27,11 @@ Run pre-commit locally ---------------------- `pre-commit `_ is a tool for ensuring that all the changes that you make to -files in a project successfully pass all checks run by the code style tools that are +files in your project successfully pass all checks run by the code style tools that are configured as part of the CI/CD process. These tools, which typically include `Black `_, `isort `_, and `Flake8 `_, analyze, format, review, and improve code quality and security. For more information on the code style tools most commonly -used in PyAnsys projects, see :ref:`code_style_tools`. +configured for use in PyAnsys projects, see :ref:`code_style_tools`. Because you do not want the **Code style** check for your PyAnsys project to fail when you create or push changes to a PR, you want to periodically run ``pre-commit`` @@ -459,3 +459,33 @@ Once the PR is merged, use your GitHub tool to pull all changes from the remote m branch on GitHub into the main branch of your locally cloned repository. Also delete the local branch with the changes that have now been merged. For additional changes, create another local branch to work in.
+ +Remove untracked files and directories +-------------------------------------- + +To remove untracked files and directories from your working directory, from the +``doc`` folder, periodically run this command: + +``git clean -fdx .`` + +For more information on this Git command, see :ref:`git_clean`. + +When you next run ``pre-commit``, the code style tools configured for +your PyAnsys project must be initialized once again. For more information, +see :ref:`run_precommit`. + +Before you can run Vale again locally, you must download the latest rules for the +*Google developer documentation style guide* to the ``doc/styles/Google`` folder +by running this command: + +.. code-block:: bash + + vale sync + +You can then run Vale with this command: + +.. code-block:: bash + + vale . + +For more information, see :ref:`run_Vale_locally`. \ No newline at end of file diff --git a/doc/source/content-writing/content-how-tos/create-issues-discussions.rst b/doc/source/content-writing/content-how-tos/create-issues-discussions.rst index 98dc160c7..26672e70d 100644 --- a/doc/source/content-writing/content-how-tos/create-issues-discussions.rst +++ b/doc/source/content-writing/content-how-tos/create-issues-discussions.rst @@ -20,8 +20,8 @@ on the **Issues** page and **Discussions** page of a GitHub repository. Create an issue --------------- -To report an issue, request a new feature, or ask for troubleshooting assistance, create -an issue in the project's repository: +To report an issue, request a new feature, or ask for project-specific troubleshooting assistance, +create an issue in the project's repository: #. Go to the project repository. #. Click the **Issues** tab to go to the **Issues** page. @@ -43,7 +43,8 @@ an issue in the project's repository: any relevant error messages, code snippets, and screenshots. #. Click **Submit new issue**. -To contact the project support team about an issue, email `pyansys.core@ansys.com `_.
+If you have general questions about the PyAnsys ecosystem, contact the +`PyAnsys core team `_. Create a discussion ------------------- @@ -72,4 +73,4 @@ the project's repository: - **Show and tell**: For showing off something that you've made #. Complete the form, providing a clear and detailed title and description. -#. Click **Start discussion**. \ No newline at end of file +#. Click **Start discussion**. diff --git a/doc/source/content-writing/content-how-tos/edit-on-GitHub.rst b/doc/source/content-writing/content-how-tos/edit-on-GitHub.rst index ac142d945..33e45e825 100644 --- a/doc/source/content-writing/content-how-tos/edit-on-GitHub.rst +++ b/doc/source/content-writing/content-how-tos/edit-on-GitHub.rst @@ -6,7 +6,7 @@ Edit on GitHub The easiest way to contribute to a PyAnsys project is to make changes in GitHub, letting the CI/CD process handle the build. When you are viewing PyAnsys documentation, the right navigation pane might display an **Edit on GitHub** link. -You can use this feature to submit changes to this page via a PR on GitHub: +You can use this feature to submit changes to this page by creating a PR on GitHub: #. Click the **Edit on GitHub** link. diff --git a/doc/source/content-writing/index.rst b/doc/source/content-writing/index.rst index f988c8a89..965c037af 100644 --- a/doc/source/content-writing/index.rst +++ b/doc/source/content-writing/index.rst @@ -4,7 +4,7 @@ Content writing =============== Earlier sections of this guide are written primarily for PyAnsys developers by PyAnsys -developers. This section is written for anyone who wants to contributing new or revise +developers. This section is written for anyone who wants to contribute new or revise existing content in the documentation for a PyAnsys library. The goal is to provide content contributors with the information that they need to write clear, consistent, effective, and user-friendly content in the order in which they need to know it.
diff --git a/doc/source/content-writing/py-files-writers/docstring-format-rules.rst b/doc/source/content-writing/py-files-writers/docstring-format-rules.rst index 1f956ba53..5fe6ba623 100644 --- a/doc/source/content-writing/py-files-writers/docstring-format-rules.rst +++ b/doc/source/content-writing/py-files-writers/docstring-format-rules.rst @@ -66,7 +66,7 @@ If it is not present, documentation style tools in the CI/CD process raise error when you push changes to a PR. You can declare the short summary on the same line as the opening quotation marks -of the docstring or on the next line. While `PEP 257 `_ accepts both ways, +of the docstring or on the next line. While `PEP 257`_ accepts both ways, docstrings must be consistent across your project. If the developers of a PyAnsys library are declaring the short summary on the same line as the opening quotation marks, they have turned off ``"GL01"`` checking in the ``numpydoc_validation_checks`` dictionary diff --git a/doc/source/content-writing/py-files-writers/index.rst b/doc/source/content-writing/py-files-writers/index.rst index 2ef780944..3423f2277 100644 --- a/doc/source/content-writing/py-files-writers/index.rst +++ b/doc/source/content-writing/py-files-writers/index.rst @@ -24,9 +24,12 @@ A Python client library consists of PY files that are organized and packaged in a way that makes it easy for users to interact with the library and its underlying APIs and services. -In a PyAnsys client library, PY files are usually organized in the -``src\ansys`` directory. However, there are exceptions. For example, in PyAEDT, -the Python files are organized in the ``pyaedt`` directory. +In a PyAnsys client library, PY files are organized in the ``src`` directory. + +.. warning:: + + The folder and file names in the ``src`` directory cannot contain spaces or hyphens. + Replace these characters with an underscore (``_``). 
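The two short-summary placements that PEP 257 accepts can be sketched with a pair of hypothetical functions; the function names, docstring text, and comments below are illustrative only and do not come from any PyAnsys library:

```python
# Two hypothetical functions showing where a docstring's short summary
# can start. PEP 257 accepts both placements; a project must pick one
# and use it consistently.


def add(a, b):
    """Return the sum of two numbers.

    The short summary above starts on the same line as the opening
    quotation marks, so a project using this style turns off "GL01"
    checking in the ``numpydoc_validation_checks`` dictionary.
    """
    return a + b


def subtract(a, b):
    """
    Return the difference of two numbers.

    Here the short summary starts on the line after the opening
    quotation marks, which is the placement the "GL01" check expects.
    """
    return a - b


# The short summary is the first line of text in each docstring.
print(add.__doc__.splitlines()[0])  # Return the sum of two numbers.
```

Whichever placement your project uses, documentation tools read only that first line of text as the short summary, so it must be a single, complete sentence.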
Each subpackage contains an :file:`__init__.py` file, which contains any necessary package-level initialization code. diff --git a/doc/source/content-writing/rst-files-writers/code-blocks.rst b/doc/source/content-writing/rst-files-writers/code-blocks.rst index e779177e9..847d3302d 100644 --- a/doc/source/content-writing/rst-files-writers/code-blocks.rst +++ b/doc/source/content-writing/rst-files-writers/code-blocks.rst @@ -4,17 +4,38 @@ Code blocks =========== You can introduce a short standalone code block in an RST file by ending a sentence with two -colons (``::``). Here is an example of how to to use this method to create a standalone -code block:: +colons (``::``). - This is a normal text paragraph in your RST file. The next paragraph is a code sample:: +This example shows how to use this method to create a standalone code block. - A code sample is not processed in any way. In the documentation, - it is rendered in a monsopaced font in a gray block. +.. tab-set:: - A code sample can span multiple lines. + .. tab-item:: Doc source code + + .. code-block:: rst + + This is a normal text paragraph in your RST file. The next paragraph is a code sample:: + + A code sample is not processed in any way. In the documentation, it is rendered in + a monospaced font in a gray block. + + A code sample can span multiple lines. + + This is a normal text paragraph again. + + .. tab-item:: Rendered doc + + This is a normal text paragraph in your RST file. The next paragraph is a code sample: + + .. code-block:: rst + + A code sample is not processed in any way. In the documentation, it is rendered in + a monospaced font in a gray block. + + A code sample can span multiple lines. + + This is a normal text paragraph again. - This is a normal text paragraph again. In most cases, to create standalone code blocks that contain multiple lines of code, you should use either the ``code`` or ``code-block`` directive.
While you can use @@ -25,44 +46,67 @@ Both the ``code`` and ``code-block`` directives support a ``language`` option for specifying the programming language that the code is written in. When you specify the language, the code block uses the syntax highlighting for this language. -This ``code`` directive shows how to import a function (``my_function``) -from a Python module (``mylibrary``) and then demonstrates how to use it:: +This example shows a ``code-block`` directive for importing and using a Python function. + +.. tab-set:: + + .. tab-item:: Doc source code + + .. code-block:: rst - .. code:: python + This ``code-block`` directive shows how to import a function (``my_function``) + from a Python module (``mylibrary``) and then uses it. - from mylibrary import my_function + .. code-block:: python - # Usage example - result = my_function(42) - print(result) + from mylibrary import my_function -Here is how the preceding code block is rendered in the documentation: + # Usage example + result = my_function(42) + print(result) -.. code:: python + .. tab-item:: Rendered doc - from mylibrary import my_function + This ``code-block`` directive shows how to import a function (``my_function``) + from a Python module (``mylibrary``) and then uses it. - # Usage example - result = my_function(42) - print(result) + .. code-block:: python -This ``code-block`` directive shows how to turn off a log handler:: - .. code-block:: rst + from mylibrary import my_function + + # Usage example + result = my_function(42) + print(result) + - for handler in design_logger.handlers: - if isinstance(handler, logging.FileHandler): - handler.close() - design_logger.removeHandler(handler) -Here is how the preceding code block is rendered in the documentation: +This example shows a ``code-block`` directive for turning off a log handler. -.. code-block:: rst +.. tab-set:: + + .. tab-item:: Doc source code + + .. code-block:: rst + + This ``code-block`` directive shows how to turn off a log handler. + + ..
code-block:: python + + for handler in design_logger.handlers: + if isinstance(handler, logging.FileHandler): + handler.close() + design_logger.removeHandler(handler) + + .. tab-item:: Rendered doc + + This ``code-block`` directive shows how to turn off a log handler. + + .. code-block:: python + + for handler in design_logger.handlers: + if isinstance(handler, logging.FileHandler): + handler.close() + design_logger.removeHandler(handler) - for handler in design_logger.handlers: - if isinstance(handler, logging.FileHandler): - handler.close() - design_logger.removeHandler(handler) Code blocks can include comments and message strings that you might need to edit. Because comments and message strings are more often seen in PY files than in RST @@ -71,103 +115,152 @@ files, see :ref:`py_code_comments_message_strings` in the section on PY files. Number lines in a code block ---------------------------- -With the ``code-block`` directive, you can use the optional ``linenos`` attribute -to generate line numbers for a code block:: +You can use the optional ``linenos`` attribute to generate line numbers for a code block. + +This example shows a ``code-block`` directive that uses the ``linenos`` attribute without +any value to generate line numbers for all lines. + +.. tab-set:: + + .. tab-item:: Doc source code - .. code-block:: - :linenos: + .. code-block:: rst - from __future__ import division - import numpy + This ``code-block`` directive shows how to generate line numbers for all lines. - def volume(height, radius): - pi = 3.14 - vol = (1.0/3.0) * height * pi * pow(radius,2) - return vol + .. code-block:: python + :linenos: - vol = volume(2.0, 10) - print vol, "(m^3)" + from __future__ import division + import numpy -Here is how the preceding code block is rendered in the documentation: + def volume(height, radius): + pi = 3.14 + vol = (1.0/3.0) * height * pi * pow(radius,2) + return vol -.. vale off + vol = volume(2.0, 10) + print(vol, "(m^3)") -.. code-block:: - :linenos: + ..
tab-item:: Rendered doc - from __future__ import division - import numpy + This ``code-block`` directive shows how to generate line numbers for all lines. - def volume(height, radius): - pi = 3.14 - vol = (1.0/3.0) * height * pi * pow(radius,2) - return vol + .. code-block:: python + :linenos: - vol = volume(2.0, 10) - print vol, "(m^3)" + from __future__ import division + import numpy -.. vale on + def volume(height, radius): + pi = 3.14 + vol = (1.0/3.0) * height * pi * pow(radius,2) + return vol -To set the line where numbering is to start, you can use the optional ``lineno-start`` -attribute, which automatically activates the ``linenos`` attribute:: + vol = volume(2.0, 10) + print(vol, "(m^3)") - .. code-block:: - :lineno-start: 12 - Some more Python code, with line numbering starting at line 12. +To set the line where numbering is to start, you use the optional ``lineno-start`` +attribute, which automatically activates the ``linenos`` attribute. -Here is how the preceding code block is rendered in the documentation: +This example shows a ``code-block`` directive that uses the ``lineno-start`` attribute to +start numbering at line 12. -.. code-block:: - :lineno-start: 12 +.. tab-set:: + + .. tab-item:: Doc source code + + .. code-block:: rst + + This ``code-block`` directive starts numbering at line 12. + + .. code-block:: text + :lineno-start: 12 + + Some more Python code, with line numbering starting at line 12. + + .. tab-item:: Rendered doc + + This ``code-block`` directive starts numbering at line 12. + + .. code-block:: text + :lineno-start: 12 + + Some more Python code, with line numbering starting at line 12. - Some more Python code, with line numbering starting at line 12. Emphasize lines of code ----------------------- With the ``code-block`` directive, you can use the optional ``emphasize-lines`` attribute -to emphasize particular lines of code by highlighting them.
+ +This example shows a ``code-block`` directive that emphasizes lines 3 and 5. + +.. tab-set:: + + .. tab-item:: Doc source code - .. code-block:: python - :emphasize-lines: 3,5 + .. code-block:: rst - def some_function(): - interesting = False - print("This line is highlighted.") - print("This line is no highlighted.") - print("This line is highlighted.") + This ``code-block`` directive emphasizes lines 3 and 5. -Here is how the preceding code block is rendered in the documentation: + .. code-block:: python + :emphasize-lines: 3,5 -.. code-block:: python - :emphasize-lines: 3,5 + def some_function(): + interesting = False + print("This line is highlighted.") + print("This line is not highlighted.") + print("This line is highlighted.") + + .. tab-item:: Rendered doc + + This ``code-block`` directive emphasizes lines 3 and 5. + + .. code-block:: python + :emphasize-lines: 3,5 + + def some_function(): + interesting = False + print("This line is highlighted.") + print("This line is not highlighted.") + print("This line is highlighted.") - def some_function(): - interesting = False - print("This line is highlighted.") - print("This line is no highlighted.") - print("This line is highlighted.") Define a caption and name for referencing a code block ------------------------------------------------------ With the ``code-block`` directive, you can use the optional ``caption`` and ``name`` attributes to use either the ``ref`` or ``numref`` role to reference this code block from -elsewhere in your documentation:: +elsewhere in your documentation. + +This example shows a ``code-block`` directive that uses the ``caption`` and ``name`` attributes. + +.. tab-set:: + + .. tab-item:: Doc source code + + .. code-block:: rst + + This ``code-block`` directive uses the optional ``caption`` and ``name`` attributes. + + .. code-block:: python + :caption: this.py + :name: this-py + + print("Explicit is better than implicit.") - .. code-block:: python - :caption: this.py - :name: this-py + ..
tab-item:: Rendered doc - print("Explicit is better than implicit.") + This ``code-block`` directive uses the optional ``caption`` and ``name`` attributes. -Here is how the preceding code block is rendered in the documentation: + .. code-block:: python + :caption: this.py + :name: this-py -.. code-block:: python - :caption: this.py - :name: this-py + print("Explicit is better than implicit.") - print("Explicit is better than implicit.") You then give the ``name`` attribute to the ``numref`` role to create the cross-reference:: diff --git a/doc/source/content-writing/rst-files-writers/index.rst b/doc/source/content-writing/rst-files-writers/index.rst index f9fc95546..5ea20bc34 100644 --- a/doc/source/content-writing/rst-files-writers/index.rst +++ b/doc/source/content-writing/rst-files-writers/index.rst @@ -62,6 +62,11 @@ guide, the ``maxdepth`` attribute is set to ``3`` for all sections. The ``toctre then includes an ordered list of the RST files to show in the **Section Navigation** pane. The RST extensions for the files in this list are omitted. +.. note:: + + In the ``doc/source`` directory, folder and file names should use hyphens as space delimiters + for search optimization of the generated HTML documentation. For example, ``rst-file-formatting.rst``. + To see the ``toctree`` directives for the other sections in this guide, in the project's `repository `_, go to the ``doc/source`` directory and look at the ``index.rst`` files in the child directories for the @@ -72,57 +77,60 @@ For more information on RST file setup, see :ref:`rst_files_developers` and .. _readme_files: -README file ----------- +README files ------------ -Each PyAnsys repository has a README file in its root directory. This file explains the -project and points readers to key documentation. The README file can be an RST file +Each PyAnsys repository has a README file in its root directory that explains the +project and points readers to the documentation.
The README file can be an RST file or a GitHub Flavored Markdown (MD) file. While RST and MD files are similar, the syntax differs. If the README file in your repository is an MD file, see `GitHub Flavored Markdown Spec `_ -and `Using Markdown and Liquid in GitHub Docs `_. +and `Using Markdown and Liquid in GitHub Docs `_ for +syntax information. -If your README file is an RST file, you can reuse content in it in the overall ``index.rst`` -file for a library's documentation. However, you cannot reuse content if your README file -is an MD file. Thus, the disadvantages of having to use a different syntax in the MD file and the -inability to reuse this MD file on the initial page of your documentation may influence you to -use an RST file for your README file. +If your README file is an RST file, you can reuse its content within the main ``index.rst`` +file for the library's documentation or in the ``index.rst`` file for its "Getting started" +section. However, if your README file is an MD file, you cannot reuse the content in it. +Thus, the disadvantages of having to use a different syntax in the MD file and the +inability to reuse its content in your documentation may influence you to use an +RST file for your README file. -To reuse all content from a ``README.rst`` file in the overall ``index.rst`` file for your +To reuse all content in a ``README.rst`` file in the main ``index.rst`` file for your documentation, use the ``include`` directive:: .. include:: ../../README.rst -To reuse only a portion of the content in the ``README.rst`` file, use this directive's ``start-line`` +To reuse only a portion of the content in a ``README.rst`` file, use this directive's ``start-line`` and ``end-line`` attributes:: .. include:: ../../README.rst :start-line: 4 :end-line: 72 -Using the preceding attributes necessitates having to change the line numbers -if content is later added to or removed from the ``README.rst`` file.
Thus, you +Because using the preceding attributes necessitates having to change the line numbers +if content is later added to or removed from the ``README.rst`` file, you might want to use this directive's ``start-after`` attribute instead. It allows you to reuse content from a given point to the end of the file. You first insert a target in the ``README.rst`` file where you want to start the reuse. For example, assume that the ``README.rst`` file has an "Overview" section where you want the reuse -to begin. Before this section, insert a target like this, followed by a blank line:: +to begin. Before this section, insert an explicit target name like this, followed by a blank line:: .. reuse_start -In the overall ``index.rst`` file for your library's documentation, now insert an ``include`` -directive with a ``start-after`` attribute that specifies this target:: +In the main ``index.rst`` file for your library's documentation, now insert an ``include`` +directive with a ``start-after`` attribute that specifies this explicit target name:: .. include:: ../../README.rst :start-after: .. reuse_start If your ``README.rst`` file has links to sections or pages in the library's documentation, you must -use external links. You can use either URLs or named targets. However, named targets must be inserted -at the bottom of the ``README.rst`` file. If your project has a central ``links.rst`` -file in the project's ``doc/source`` directory, inserting named targets for links in the -``README.rst`` file does not work because the GitHub renderer is unaware of this ``links.rst`` file. -For more information, see :ref:`doc_links_external`. +either use URLs or insert explicit targets at the bottom of the ``README.rst`` file that you can then +use in this file. If your project has a central ``links.rst`` file in the ``doc/source`` directory, +you might be tempted to simply use the explicit target names defined in it in the ``README.rst`` +file.
However, the GitHub renderer is unaware of the ``links.rst`` file. For more information, see +:ref:`doc_links_external`. + .. toctree:: :maxdepth: 3 diff --git a/doc/source/content-writing/rst-files-writers/rst-format-rules.rst b/doc/source/content-writing/rst-files-writers/rst-format-rules.rst index 3c10dc27f..293c7d989 100644 --- a/doc/source/content-writing/rst-files-writers/rst-format-rules.rst +++ b/doc/source/content-writing/rst-files-writers/rst-format-rules.rst @@ -126,6 +126,19 @@ page in the *Google developer documentation style guide*. ``code`` or ``code-block`` directive. For more information on code blocks, see :ref:`code_blocks`. +- To comment out lines in an RST file so that they do not render in the documentation, + place two periods (``..``) and a space before each line that you want to hide:: + + .. When content is drafted on reusable RST files, add the topic here. + .. Also add links to this new topic in the ``documenting.rst`` file. + + While this approach is useful if the native ``sphinx.ext.todo`` extension has not been + added to the ``extensions`` variable in your documentation's Sphinx configuration + (``doc/source/conf.py``) file, adding this extension is recommended. The specially + formatted block of text for the ``.. todo::`` directive does not render in the + documentation by default. Plus, you can easily search for occurrences of this directive + later. For more information, see :ref:`add_native_sphinx_ext`. + Subsequent pages describe how to use other common Sphinx roles and directives. For comprehensive lists of roles and directives, see `Roles `_ and `Directives `_ in the Sphinx documentation. diff --git a/doc/source/doc-style/doc-configuration.rst b/doc/source/doc-style/doc-configuration.rst index 580ebf165..ec7933e12 100644 --- a/doc/source/doc-style/doc-configuration.rst +++ b/doc/source/doc-style/doc-configuration.rst @@ -6,7 +6,7 @@ building the documentation of a PyAnsys library.
When installing `Sphinx`_, a program named ``sphinx-build`` also gets installed. This program is in charge of collecting, parsing, and rendering all -ReStructuredText (RST) files in :ref:`The \`\`doc\`\` directory`. The behavior of the ``sphinx-build`` program is controlled through either a ``Makefile`` (for POSIX systems) or a ``make.bat`` file (for Windows systems). @@ -28,21 +28,20 @@ Automation files As indicated earlier on this page, the ``sphinx-build`` program and all its options and arguments can be automated by using a ``Makefile`` file or a ``make.bat`` file. These files should be placed at the -first level of :ref:`The \`\`doc/\`\` directory`, next to the ``source/`` -directory. +first level of the ``doc`` directory, next to the ``source`` directory. -Notice that both files contain a ``SPHINXOPTS`` variable with these flags: ``-j``, +Notice that both files contain a ``SPHINXOPTS`` variable with these options: ``-j``, ``-W``, and ``--keep-going``. -- The ``-j`` flag indicates the number of jobs (number of cores) to use. +- ``-j``: Indicates the number of jobs (number of cores) to use. The default value is ``auto``, which means that the number of cores in the CPU is to be automatically detected. -- The ``-W`` flag turns warnings into errors. This guarantees that documentation +- ``-W``: Turns warnings into errors. This guarantees that documentation health is maximized. -- The ``--keep-going`` flag specifies whether to render the whole documentation, - even if a warning is found. This flag enables developers to be aware of the +- ``--keep-going``: Specifies whether to render the whole documentation, + even if a warning is found. This option enables developers to be aware of the full set of warnings. A special rule named ``pdf`` is also included.
This rule is in charge of diff --git a/doc/source/doc-style/docstrings.rst b/doc/source/doc-style/docstrings.rst index eb8d87164..2d1c2ce8c 100644 --- a/doc/source/doc-style/docstrings.rst +++ b/doc/source/doc-style/docstrings.rst @@ -51,7 +51,7 @@ or function to briefly describe what the class or function does. The raises an error. The short summary can be declared on the same line as the opening quotes or on -the next line. While `PEP 257 `_ accepts both ways, you must be consistent across your +the next line. While `PEP 257`_ accepts both ways, you must be consistent across your project. If you decide to declare the short summary on the same line, refer to :ref:`Numpydoc validation` because ``"GL01"`` checking must be turned off. diff --git a/doc/source/doc-style/formatting-tools.rst b/doc/source/doc-style/formatting-tools.rst index 44b98773c..7080d47c7 100644 --- a/doc/source/doc-style/formatting-tools.rst +++ b/doc/source/doc-style/formatting-tools.rst @@ -8,26 +8,26 @@ presents some of the most popular ones in the Python ecosystem. A minimum configuration is provided for each one so you can easily include them in your PyAnsys project. -Most of the tools presented can be configured using :ref:`the +Most of the tools presented can be configured using :ref:`The \`\`pyproject.toml\`\` file`, avoiding dotfiles and thus leading to a much cleaner root project directory. -The ``blacken-docs`` --------------------- +The ``blacken-docs`` tool +------------------------- When writing documentation, code blocks are frequently used to provide examples. However, these code snippets cannot be verified with the usual code -formatting tools. This is where the `blacken-docs`_ comes into play. You can execute +formatting tools. This is where `blacken-docs`_ comes into play. You can execute this tool by running: .. 
code:: bash blacken-docs -l doc/**/*.rst -The ``codespell`` ------------------ +The ``codespell`` tool +---------------------- -The `codespell`_ checks for common misspellings in text files. This implies that it +The `codespell`_ tool checks for common misspellings in text files. This implies that it is not limited to Python files but can run checks on any human-readable file. It is possible to ignore words that are flagged as misspelled. You can specify these words in a @@ -37,24 +37,24 @@ file that can then be passed to ``codespell`` by running: codespell --write-changes --ignore-words= -The ``docformatter`` --------------------- +The ``docformatter`` tool +------------------------- -The `docformatter`_ automatically formats Python docstrings according -to `PEP 257 `_. To make sure ``docformatter`` wraps your docstrings at a given -number of characters, use the following configuration: +The `docformatter`_ tool automatically formats Python docstrings according +to `PEP 257`_. To make sure ``docformatter`` wraps your docstrings at a given +number of characters, use this configuration: .. code:: bash docformatter -r -i --wrap-summaries --wrap-descriptions src -The ``doctest`` ---------------- +The ``doctest`` tool +-------------------- -The `doctest`_ is a module from the Python standard library, which means it is +The `doctest`_ tool is a module from the Python standard library, which means it is included by default with your Python installation. It is used for checking the examples provided inside docstrings to make sure that they reflect the current usage -of the source code. The `doctest`_ can be integrated with ``pytest`` in :ref:`The +of the source code. You can integrate `doctest`_ with ``pytest`` in :ref:`The \`\`pyproject.toml\`\` file`: .. code:: toml @@ -62,12 +62,12 @@ of the source code. 
The `doctest`_ can be integrated with ``pytest`` in :ref:`Th [tool.pytest.ini_options] addopts = "--doctest-modules" -The ``interrogate`` -------------------- +The ``interrogate`` tool +------------------------ -The `interrogate`_ is a tool for checking docstring coverage. Similar to source code -coverage tools, this tool tests how many functions, classes, and modules in a Python -library hold a docstring. +The `interrogate`_ tool checks docstring coverage. Similar to source code +coverage tools, this tool tests how many modules, functions, classes, and +methods in a Python library hold a docstring. .. code:: toml @@ -85,19 +85,19 @@ Numpydoc validation ------------------- To validate the style of :ref:`Numpydoc docstrings`, you can -take advantage of the `numpydoc`_ Sphinx extension. Note that this extension +take advantage of the Sphinx `numpydoc`_ extension. Note that this extension checks only for those objects whose docstrings must be rendered. It is not a command line tool that checks the style of all docstrings in your source code. Because ``numpydoc`` is a Sphinx extension, it must be configured in the -``conf.py`` file. See :ref:`The \`\`doc/\`\` directory`. Start by adding it to the +``conf.py`` file. See :ref:`The \`\`doc\`\` directory`. Start by adding it to the list of extensions: .. code-block:: python extensions = ["numpydoc", ...] -Once the ``numpydoc`` extension is added, you can select which `validation checks +Once the ``numpydoc`` extension is added, you can select which `built-in validation checks `_ must be addressed by using the ``numpydoc_validation_checks`` dictionary: @@ -111,20 +111,16 @@ This issues the following warning for any object without a docstring: "The object does not have a docstring" -For a complete list of available checks, see the `full mapping of -validation checks -`_. 
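The docstring tools covered above (``doctest``, ``interrogate``, numpydoc validation) all operate on the same artifact. As a minimal sketch of what they check, here is a numpydoc-style docstring whose ``Examples`` section ``doctest`` can execute; the ``add_numbers`` function is hypothetical, not part of any PyAnsys library:

```python
import doctest


def add_numbers(a, b):
    """Add two numbers.

    Parameters
    ----------
    a : int or float
        First number.
    b : int or float
        Second number.

    Returns
    -------
    int or float
        Sum of the two numbers.

    Examples
    --------
    >>> add_numbers(2, 3)
    5
    """
    return a + b


# ``doctest`` executes the ``Examples`` section and reports a failure
# if the actual output differs from the expected output shown there
results = doctest.testmod()
```

With ``addopts = "--doctest-modules"`` in the ``pyproject.toml`` file, ``pytest`` collects and runs such examples automatically.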
-The ``pydocstyle`` ------------------- +The ``pydocstyle`` tool +----------------------- -The `pydocstyle`_ is a tool for checking the compliance of Python docstrings with `PEP 257 `_. -Its configuration can be defined in the -:ref:`the \`\`pyproject.toml\`\` file`. +The `pydocstyle`_ tool checks the compliance of Python docstrings with `PEP 257`_. +Its configuration can be defined in the :ref:`The \`\`pyproject.toml\`\` file`. By default, `pydocstyle`_ matches all ``*.py`` files except those starting with ``test_*.py``. The default configuration should be enough for a PyAnsys project. -However, if additional configuration is needed, it must be included -it under the ``[tool.pydocstyle]`` entry: +However, if additional configuration is needed, it must be included under the +``[tool.pydocstyle]`` entry: .. code:: toml @@ -144,27 +140,28 @@ and Markdown (MD) files. After a PyAnsys team member implements ``Vale`` in your PyAnsys library, you can check any content changes that you make in supported files locally. -In the library's ``doc`` folder, download the package with: +In the library's ``doc`` folder, download the package with this command: .. code-block:: bash vale sync -Check all files in the ``doc`` folder with: +Check all files in the ``doc`` folder with this command: .. code-block:: bash vale . -Check all files in the repository, by going to the ``root`` directory and running: +Check all files in the repository by going to the ``root`` directory and running +this command: .. code-block:: bash vale --config=doc/.vale.ini . -Check all files in only a particular folder with ``vale`` followed by the +Check all files in only a particular folder by typing ``vale`` followed by the name of the folder. 
-Address any warnings and issues that display by either editing the -file to fix or adding a term to the ``accept.txt`` file under the +Address any warnings and issues that display by either editing +files to fix them or adding entries to the ``accept.txt`` file under the ``doc`` folder in ``styles\Vocab\ANSYS``. diff --git a/doc/source/doc-style/index.rst b/doc/source/doc-style/index.rst index 0e3efe015..28ec9dd51 100644 --- a/doc/source/doc-style/index.rst +++ b/doc/source/doc-style/index.rst @@ -6,10 +6,9 @@ Documentation style Good API documentation drives library adoption and usage and is the foundation for a good developer experience. Even with the best interfaces and the most functional product, an API is not adopted -if users aren't satisfied with the documentation or the examples -that they are presented with. +if users aren't satisfied with a library's documentation or examples. -Good API documentation provides: +Good API documentation provides many benefits, including these: * Increased adoption and improved experiences for developers using the API or library. @@ -23,9 +22,9 @@ Good API documentation provides: The documentation for a PyAnsys library should contain: -* Module, class, method, and function docstrings. See +* Module, function, class, and method docstrings. See :ref:`docstrings`. -* Full gallery of examples. See `PyMAPDL examples +* A full gallery of examples. See `PyMAPDL examples `_. * General content on installing, using, and contributing. * Link to the library's documentation from the repository's README file. @@ -41,10 +40,9 @@ follow the guidelines in the `Google developer documentation style guide To help you follow the Google guidelines and any custom rules developed by Ansys, you can implement `Vale `_. -This command-line tool brings code-like linting to prose. For more -information, see :ref:`Vale`. +This command-line tool brings code-like linting to prose. 
-Finally, the documentation should be public and hosted via gh-pages, either as +Finally, the documentation should be public and hosted using GitHub Pages, either as a branch named ``gh-pages`` within the library repository or within a ``gh-pages`` branch within ``-docs``. @@ -53,7 +51,8 @@ see :ref:`DNS configuration`. You should ensure that you are compliant with the naming convention for your CNAME. For procedural information related to crafting, building, and deploying -documentation, see :ref:`Documenting` in the :ref:`How-to` section. +documentation, see :ref:`documenting_developers`. For comprehensive information +on writing content for PyAnsys developers, see :ref:`content_writing`. .. toctree:: :hidden: diff --git a/doc/source/getting-started/administration.rst b/doc/source/getting-started/administration.rst index 4845ffef1..86280d3e0 100644 --- a/doc/source/getting-started/administration.rst +++ b/doc/source/getting-started/administration.rst @@ -1,20 +1,19 @@ Project approval and public release =================================== -Most of the projects in PyAnsys expose the functionality of Ansys -products. Due to intellectual property reasons, the public release of a PyAnsys -library must go through a project approval process. +Most of the projects in the `Ansys organization `_ +expose the functionality of Ansys products. Due to intellectual property reasons, +the public release of a PyAnsys library must go through a project approval process. -First steps ----------- +First step +---------- -To trigger the public release process, project leads must fill in this form: +To trigger the public release process, project leads must first complete the +`Release request workflow initiation form `_. -* `Release request workflow initiation form `_ - -The form lets the different parties involved in the public release process know that -there is a request to release a project.
If your intent is to release an Ansys Open -Source project, then continue to the next section. +This form lets the different parties involved in the public release process know that +there is a request to release a project. If your intent is to release an Ansys open +source project, then continue to the next section. Approval process ---------------- @@ -47,24 +46,21 @@ The approval process is divided into three parts: .. important:: An approval from each of these three parts is required to release a project to the public. - - Once approved, a project can be published to the :ref:`Public PyPI`. + Once all approvals are received, a project can be published to the :ref:`Public PyPI`. -When releasing a project to the public, you should: +When releasing a project to the public, you must perform these tasks: * Coordinate with the product line development team, if applicable. * Maintain the project by means of fixing bugs and providing support for new releases. * Uphold Ansys' reputation in the open source community. -Once all three approvals have been awarded, project leads must complete -the next form: - -* `OSS approval request form `_ +Once all three approvals have been awarded, project leads must then complete +the `OSS (Open Source Software) approval request form `_. This form serves as a final checklist to verify that all approvals have been processed -and to formalize the OSS approval process as a final record. You can find frequently -asked questions in :ref:`Questions asked in the OSS approval request form`. +and to formalize the OSS approval process as a final record. For more information, see +:ref:`Questions asked on the OSS approval request form`. Managerial ^^^^^^^^^^ @@ -101,15 +97,15 @@ Legal Legal review approval ensures that the entire project complies with Ansys' legal policies. 
-Start by completing the legal review request form for open sourcing the code: +Click the following button to complete the legal review request form for open sourcing the code: .. button-link:: https://github.com/ansys-internal/oss-approval-tracklist/issues/new?assignees=MaxJPRey%2C+RobPasMue%2C+jorgepiloto%2C+&labels=&projects=&template=oss_final_signature.yml&title=Name+of+the+package+to+release :color: black :expand: - **Open Source Code Release Request Form** + **Open source code release request form** -The following checks are required when performing the legal review of the project: +These checks are required when performing the legal review of the project: .. card:: |uncheck| The project contains the right licensing. @@ -118,7 +114,7 @@ The following checks are required when performing the legal review of the projec | |uncheck| Ansys official logos and branding images are used in the project. | |uncheck| The Ansys copyright appears in the correct location as required by the Legal department. - | |uncheck| The copyright has the proper formatting, which is: + | |uncheck| The copyright has the proper formatting: ``Copyright (C) YYYY ANSYS, Inc. and/or its affiliates.``. | |uncheck| The contribution does not embody any unapproved Ansys intellectual property for open sourcing. @@ -139,7 +135,7 @@ Technical approval ensures that the project follows the best and latest software development practices. Request a technical review by sending an email to `pyansys.core@ansys.com `_. -The technical review of the project verifies the following: +The PyAnsys core team performs these checks during the technical review of the project: .. card:: |uncheck| The project contains the right metadata information.
@@ -189,10 +185,10 @@ The technical review of the project verifies the following: -Questions asked in the OSS approval request form +Questions asked on the OSS approval request form ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -When filling in the `OSS approval request form`_, project leads must +When completing the OSS approval request form, project leads must supply responses to several types of questions: .. card:: |uncheck| General questions diff --git a/doc/source/getting-started/basic.rst b/doc/source/getting-started/basic.rst index 16e764551..bfe98c528 100644 --- a/doc/source/getting-started/basic.rst +++ b/doc/source/getting-started/basic.rst @@ -1,10 +1,12 @@ PyAnsys project organization ============================ -The `PyAnsys `_ project is hosted on GitHub at -`Ansys GitHub organization`_. It contains several repositories with Python -libraries that interface with Ansys products or services. To try out a -library, visit one of these links: +The `PyAnsys project `_ is a collection of many +Python packages for using Ansys products through Python. The +`Ansys organization `_ on GitHub contains +several repositories with Python libraries for interfacing with Ansys +products or services. 
To go to the repository for a main PyAnsys library, +visit one of these links: * `PyAEDT`_ * `PyDPF-Core `_ @@ -19,45 +21,46 @@ visit these links: * `PyAnsys developer's guide `_ * `Ansys Sphinx Theme documentation `_ * `gRPC Hello-world example `_ -* `Material Example data `_ +* `Material example data `_ -Using the following tools, developers generate library packages from +Developers use the following tools to generate library packages from PROTO files, create coverage reports, and report on system coverage: * `pyansys-protos-generator `_ * `example-coverage `_ * `pyansys-tools-report `_ -Quick start guide ------------------ +PyAnsys repository creation +--------------------------- This is an overview on how to create your own PyAnsys repository in the Ansys GitHub organization. A repository is generally a project for a particular PyAnsys library. #. **Create the repository:** Create a repository from the - `pyansys/template`_. See `Creating a repository from a template`_. - Be sure that the `repository visibility`_ is initially private. + `ansys/template`_ repository. See `Creating a repository from a template`_ + in the GitHub documentation. Be sure that the `repository visibility`_ is initially private. #. **Rename the package:** Rename ``ansys/product/library`` to match your product or library. For example, the package name for PyMAPDL is ``ansys/mapdl/core``. Do the - same renaming in ``setup.py``. Do this as a pull request. In fact, only add - code as pull requests. Do not push to ``main``. + same renaming in the ``setup.py`` file. Do this as a pull request. In fact, only add + code as pull requests. Do not push to the ``main`` branch of the repository. #. **Add source:** Add your source files to - ``ansys//`` or create them. Also add unit tests to - ``tests/`` following the `pytest`_ convention. Be sure to maintain - sufficient coverage when adding your library. See `pytest-cov`_. + ``ansys//`` or create them. 
Also add unit tests to the + ``tests`` directory, following the `pytest`_ convention. Be sure to maintain + sufficient coverage when adding to your library. See the `pytest-cov`_ documentation. .. note:: If your tests require an active service, app, or product, - be sure to set up this app to run in an automated manner. + be sure to set it up to run in an automated manner. #. **Update documentation:** The documentation source and content - vary from repository to repository. In ``doc/``, there are folders for - different types of documentation, which can include guides, examples, - and API. Ensure that all documentation is updated. See :ref:`Documentation + vary from repository to repository. In the ``doc`` directory, there are child + directories for different sections of the documentation, which can include getting + started and user guides, examples, and an API reference. Ensure that all + documentation is updated. See :ref:`Documentation style`. #. **Prepare the package for release:** When you are ready to release diff --git a/doc/source/getting-started/componentization.rst b/doc/source/getting-started/componentization.rst index 1d8a077b4..4b4b233b6 100644 --- a/doc/source/getting-started/componentization.rst +++ b/doc/source/getting-started/componentization.rst @@ -3,26 +3,31 @@ Componentizing Ansys packages ============================= -Componentization is the process of subdividing the functionality of large applications +Componentization is the process of subdividing the functionality of large apps into multiple self-contained services with independent APIs. API creation surrounding existing Ansys products naturally aligns to publishing packages that mimic the full -domain and scope of each product. 
Emphasizing component libraries and services during -API exposure sets a new paradigm for Ansys product architecture that inherently breaks -apart larger monolithic desktop applications into subsets of functionality, with the -expectation of compatibility and reusability across the entire Ansys portfolio. +domain and scope of each product. + +Emphasizing component libraries and services during API exposure sets a new paradigm +for Ansys product architecture that inherently breaks apart larger monolithic desktop +apps into subsets of functionality, with the expectation of compatibility and reusability +across the entire Ansys portfolio. Many Ansys products already have a scripting solution in place, and wrapping that execution environment with a ``RunScript`` API endpoint is a low-barrier option to gain access to -remote, programmatic execution. This solution lacks API granularity, as the abstraction is -simply an unvalidated script input and some blob output that must be parsed and evaluated -without a prescribed response definition. The documentation for the scripting environment -still remains deep within the product and each script execution request is hard to organize -and maintain. Thus, there remains a significant cognitive disconnect when consuming this API -abstraction. The lack of API definition within the top-level abstraction also makes data -management difficult and direct compatibility with other PyAnsys libraries challenging. +remote, programmatic execution. This solution lacks API granularity, however, as the abstraction +is simply an unvalidated script input and some blob output that must be parsed and evaluated +without a prescribed response definition. + +The documentation for the scripting environment still remains deep within the product and +each script execution request is hard to organize and maintain. Thus, there remains a +significant cognitive disconnect when consuming this API abstraction. 
The lack of API +definition within the top-level abstraction also makes data management difficult and direct +ompatibility with other PyAnsys libraries challenging. + In addition to API clarity, the underlying product keeps a very large installation -footprint that is a burden in modern, flexible cloud deployments. Avoiding re-architecting -a product in the short-term can give a quick win; but in most cases, it should only be used +footprint that is a burden in modern, flexible cloud deployments. While avoiding re-architecting +a product in the short-term can give a quick win, in most cases, it should only be used as a stopgap solution, providing a window of opportunity to learn more about how the user prefers to consume the individual functionalities of a product. diff --git a/doc/source/getting-started/index.rst b/doc/source/getting-started/index.rst index 05a18886b..10a994654 100644 --- a/doc/source/getting-started/index.rst +++ b/doc/source/getting-started/index.rst @@ -2,18 +2,19 @@ Getting started =============== -The PyAnsys project exposes Ansys technologies via libraries in the -Python ecosystem. Each library provides clear, concise, and -maintainable APIs. Useful Pythonic functions, classes, and plugins -allow users to interact with targeted products and services in a -high-level, object-orientated approach. +The `PyAnsys project `_ exposes Ansys technologies +in client libraries within the Python ecosystem. Each library provides clear, +concise, and maintainable APIs. Useful Pythonic functions, classes, and plugins +provide for interacting with targeted products and services in a high-level, +object-orientated approach. -The PyAnsys ecosystem refines the :doc:`component-level interaction -with Ansys solvers and tools `, and eliminates the -inconsistent and restrictive scripting environments found within product -installations. +The PyAnsys ecosystem refines the component-level interaction +with Ansys solvers and tools. 
It also eliminates the inconsistent and +restrictive scripting environments found within product +installations. For more information, see :doc:`componentization`. -These component libraries play a vital role in: +Additionally, libraries play vital roles in key simulation tasks, +including these: - Application automation - Machine learning @@ -22,21 +23,21 @@ These component libraries play a vital role in: - Workflow orchestration - Data manipulation and export -The libraries also include plugins and interfaces to packages in the vast Python -ecosystem. Examples include: +Libraries also include plugins and interfaces to packages in the vast Python +ecosystem. Here are some examples: -- Arrays using `numpy `_ -- Data structures and tables with `pandas `_ -- 2D visualization using `matplotlib `_ -- 3D visualization using `pyvista `_ -- Advanced scientific computing using `scipy`_ -- Machine learning using `tensorflow `_ +- Arrays using `NumPy `_ +- Data structures and tables using `pandas `_ +- 2D visualization using `Matplotlib `_ +- 3D visualization using `PyVista `_ +- Advanced scientific computing using `SciPy`_ +- Machine learning using `TensorFlow `_ .. note:: - If you are new to GitHub, you should visit `The ReadMe Project - `_. It is a dedicated platform for highlighting - the best from the open source software community. Each monthly newsletter - provides links to feature articles, developer stories, guides, and podcasts. + If you are new to GitHub, see `The ReadMe Project + `_. This monthly newsletter highlights + the best from the open source software community, providing links + to feature articles, developer stories, guides, and podcasts. .. toctree:: :hidden: @@ -45,12 +46,11 @@ ecosystem.
Examples include: basic administration componentization - Glossary of abbreviations Contributing to this guide ~~~~~~~~~~~~~~~~~~~~~~~~~~ -If you would like to contribute to this development guide, maintainers gladly +If you would like to contribute to this guide, maintainers gladly review all pull requests. For more information, see :ref:`Documentation style`. This repository uses `pre-commit `_ to @@ -67,7 +67,7 @@ This performs various style and spelling checks to ensure your contributions meet minimum coding style and documentation standards. You can make sure that these checks are always run prior to ``git commit`` -running them by installing ``pre-commit`` as a git hook with this command:: +running them by installing ``pre-commit`` as a Git hook with this command:: pre-commit install diff --git a/doc/source/how-to/compatibility.rst b/doc/source/how-to/compatibility.rst index 5567610ff..99198c5fe 100644 --- a/doc/source/how-to/compatibility.rst +++ b/doc/source/how-to/compatibility.rst @@ -1,8 +1,8 @@ -Ansys product compatibility -=========================== +Product compatibility +===================== -As the different PyAnsys libraries evolve, backward and forward compatibility -issues can occur. Some of the most common cases are: +As PyAnsys libraries evolve, backward and forward compatibility issues can +occur. Here are examples of two common issues: * An Ansys product has implemented certain features in its new server version that are not available in previous server versions. This causes backward @@ -10,24 +10,23 @@ issues can occur. * New server versions have suppressed support for certain operations after a given version. This causes forward incompatibility issues. -Though there are different ways to handle these issues, some of the PyAnsys libraries, +Although there are different ways to handle these issues, some PyAnsys libraries, such as `PyMAPDL`_ and `PyDPF-Core `_, handle them in
To homogenize implementations in PyAnsys libraries, -following their approach is recommended. +the same way. To homogenize implementations in PyAnsys libraries, following their +approach is recommended. ``check_version.py`` module approach ------------------------------------ A *version checking* module determines whether the Ansys product server you are connecting -to provides support for certain operations. For implementation examples, see the -``check_version.py`` files for the following PyAnsys libraries: +to provides support for certain operations. For an implementation example, see the +`check_version.py `_ +file for the DPF-Core library. -* `ansys/dpf/core/check_version.py `_ - -One of the easiest ways to keep track of the versions supported is setting up a -**minimum version** data structure, in which forward compatibility is ensured. +One of the easiest ways to keep track of the versions supported is to set up a +*minimum version* data structure in which forward compatibility is ensured. Server versions earlier than the minimum version do not have access to this -functionality. In the case of `ansys/dpf/core/check_version.py`_, this is the +functionality. The previously referenced ``check_version.py`` file uses the ``VERSIONS_MAP`` structure. Most Ansys products provide forward compatibility, meaning that features @@ -35,16 +34,17 @@ introduced in an earlier version are also supported in later versions. Suppressi a feature would lead to a backward compatibility issue. Because the same type of issues can happen with the PyAnsys servers wrapping -Ansys products, creating a similar data structure for a maximum version is -is necessary. While there are no implementations yet of this feature, it should work -in the same way as the minimum version mechanism works. +Ansys products, creating a similar *maximum version* data structure is +is necessary. 
While there are no such implementations yet, it should work +in the same way as the minimum version data structure. ``version_requires`` decorator ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -The ``@version_requires`` decorator applies version logic to the different +The ``@version_requires`` decorator applies version logic to functionalities and methods available in the client. You can see how this -decorator is used in `ansys/dpf/core/check_version.py`_. Here is a generalized example: +decorator is used in the `check_version.py `_ +file. Here is a generalized example: .. code:: python @@ -70,7 +70,7 @@ specified in the decorator's call. If it is not, a ``VersionError`` is raised. The ``self._server`` member of the class is the server that the client is connected to. This member is expected to have version information in its ``self._server._server_version`` attribute. The decorator uses this version information to determine if the version is -above the threshold. +higher than the threshold. You can create a ``@max_version_supported`` decorator to implement this same kind of logic for forward incompatibility. Changing the ``@version_requires`` diff --git a/doc/source/how-to/continuous-integration.rst b/doc/source/how-to/continuous-integration.rst index 0156f0771..f911ce789 100644 --- a/doc/source/how-to/continuous-integration.rst +++ b/doc/source/how-to/continuous-integration.rst @@ -1,11 +1,11 @@ .. _continuous_integration: -Using continuous integration -============================ +Continuous integration +====================== -Continuous Integration (CI) is the process of merging new changes into the main +Continuous integration (CI) is the process of merging new changes into the main code base while ensuring that these changes are functional and do not break the existing -logic. +code. This process is automated as much as possible to alleviate the developer's workload and ensure a quick development workflow. 
@@ -17,26 +17,26 @@ Enable GitHub actions --------------------- By default, ``Actions`` are enabled in new repositories and can be accessed -using the associated :ref:`GitHub repository sections`. +using the associated :ref:`GitHub repository sections `. -If ``Actions`` are not enabled, you can enable them by changing ``Actions -Permissions`` in +If ``Actions`` are not enabled, you can enable them. For more information, see +`Managing GitHub Actions permissions for your repository +`_ +in the GitHub documentation. -``Settings -> Actions -> General``. - -Use GitHub actions +Use GitHub Actions ------------------ -Actions to be executed in the CI process must be declared in a ``YML`` and -stored in the ``.github/workflows/`` directory. Although each action is -different, they all have a common structure: +You must declare the GitHub Actions to be executed in the CI process in a +common ``ci.yml`` file in the ``.github/workflows`` directory. Although each +action is different, they all have a common structure: -- A ``name`` identifying the action. -- A collection of ``triggering events`` that run the action when required. -- A collection of ``concurrent`` workflows conditions to, for example, avoid running +- A name identifying the action. +- A collection of triggering events that run the action when required. +- A collection of concurrent workflows conditions to, for example, avoid running several workflows for the same branch. (Multiple consecutive pushes could lead to multiple ongoing workflows when you want only the last push to run). -- A collection of ``jobs`` with different steps to follow during the CI process. +- A collection of jobs with different steps to follow during the CI process. .. code-block:: yaml @@ -55,14 +55,14 @@ Disable concurrent workflows ---------------------------- Handling hardware resources is a big deal, especially when running with self-hosted agents. 
-Also, if you are using public GitHub hardware for running your workflows, you should try to -care about the environment and sustainability. +If you are using public GitHub hardware for running your workflows, disabling concurrent +CI workflows is a way to show that you care about the environment and sustainability. -Disabling concurrent CI workflows is a good way to do so. For example, imagine the following situation: +For example, imagine the following situation: * You push some changes to your branch. * The CI workflow kicks in and starts executing the different stages. -* You suddenly realize that there is a typo/file missing. +* You suddenly realize that there is a typo or a file missing. * You push the new commit to your PR. * A new CI workflow kicks in and starts running. @@ -70,8 +70,8 @@ At this moment, you probably have two parallel workflows running at the same tim though you are only interested in the results from the last one. One way to solve this is manually cancelling the oldest workflow. However, it is also possible to -automatically cancel pre-existing workflows for a certain branch/PR. To do so, prior to the -``jobs`` section, you should add the following lines to your workflow: +automatically cancel pre-existing workflows for a PR. To do so, prior to the +``jobs`` section in the ``ci.yml`` file, add the following lines to your workflow: .. code-block:: yaml @@ -83,39 +83,43 @@ automatically cancel pre-existing workflows for a certain branch/PR. 
To do so, p Required workflows ------------------ -These workflows are required for any ``PyAnsys`` project: +PyAnsys projects require workflows for performing these types of checks: -- :ref:`Coding style` workflow -- :ref:`Documentation style`, :ref:`Build documentation`, and :ref:`Deploying documentation` Workflows -- :ref:`Testing` and :ref:`Test code coverage` workflows -- :ref:`Releasing and publishing` workflow +- :ref:`Code style ` +- :ref:`Documentation style` +- :ref:`Documentation building ` +- :ref:`Documentation deployment ` +- :ref:`Testing` +- :ref:`Test code coverage` +- :ref:`release_publish` -You should collect all workflows under a common -``ci.yml`` file. For more information, see :ref:`Workflow examples`. +You should collect all workflows in a common ``ci.yml`` file. For more information, +see :ref:`Workflow examples`. Parametrize workflows --------------------- -It is important to test a ``PyAnsys`` library on different operating systems +It is important to test a PyAnsys library on different operating systems using different Python versions: .. math:: \text{Num. Workflows} = \text{Num. Operating Systems} \times \text{Num. Python Versions} -The most common operating systems are ``Windows``, ``macOS``, and ``Linux``. For -Python versions, see :ref:`Supporting Python versions`. +The most common operating systems are Windows, macOS, and Linux/UNIX. For supported +Python versions, see :ref:`Python versions`. -Because having a ``YML`` file for each workflow would be tedious, ``GitHub -Actions`` provides the ``matrix`` parameter inside the ``strategy``. For more -information, see `Using a Matrix for your Jobs -`_. +Because having a YML file for each workflow would be tedious, GitHub +Actions provides the ``matrix`` parameter inside the ``strategy``. For more +information, see `Using a matrix for your jobs +`_ +in the GitHub documentation. -Consider this example of a parametrized workflow example: +Consider this example of a parametrized workflow: ..
tab-set:: - .. tab-item:: Workflow File + .. tab-item:: Workflow file .. code-block:: yaml @@ -129,7 +133,7 @@ Consider this example of a parametrized workflow example: steps: - echo "Running Python ${{ matrix.python }} in ${{ matrix.os }}" - .. tab-item:: Actions Log File + .. tab-item:: Actions log file .. code-block:: text @@ -149,8 +153,9 @@ Consider this example of a parametrized workflow example: Workflow examples ----------------- -Workflow examples are provided for checking :ref:`Coding style`, -:ref:`Documenting`, :ref:`Testing`, and :ref:`Releasing and publishing`. +Workflow examples are provided for various checks, such as :ref:`code style `, +:ref:`tests `, :ref:`documentation style `, +:ref:`documentation building `, and :ref:`releasing `. .. tab-set:: diff --git a/doc/source/how-to/contributing.rst b/doc/source/how-to/contributing.rst index 8fda6914b..2baf9de84 100644 --- a/doc/source/how-to/contributing.rst +++ b/doc/source/how-to/contributing.rst @@ -8,32 +8,31 @@ coding paradigms used for PyAnsys development. #. Follow the `Zen of Python `_. As silly as core Python developers are sometimes, there's much to be - gained by following the basic guidelines listed in PEP 20. As suggested + gained by following the basic guidelines listed in `PEP 20`_. As suggested in these guidelines, focus on making your additions intuitive, novel, - and helpful for PyAnsys users. When in doubt, use ``import this``. + and helpful for users. When in doubt, use ``import this``. For Ansys code quality standards, see :ref:`Coding style`. #. Document your contributions. Include a docstring for any added function, - method, or class, following :ref:`Numpydoc docstrings` as specified by + class, or method, following :ref:`Numpydoc docstrings` as specified by PyAnsys :ref:`Documentation style`. Always provide at least one simple use case for a new feature. #. Test your contribution. Because Python is an interpreted language, if it's not tested, it's probably broken. 
At the minimum, include a unit test for each new feature within the ``tests`` directory. Ensure that - each new method, class, or function has a reasonable (>80%) coverage. - For information about automated testing, see :ref:`Testing`. + each new function, class, or method has a reasonable coverage (greater + than 80%). For information on automated testing, see :ref:`Testing`. #. Do not include any datasets for which a license is not available or commercial use is prohibited. -#. Review the Ansys `Code of Conduct +#. Review the Ansys `Contributor Code of Conduct `_. -All ``PyAnsys`` projects are hosted in `GitHub `_ in -the form of :ref:`Git` repositories. GitHub is a platform that not only provides -storage for projects but also additional features like code reviews or issue -boards. +All PyAnsys projects are hosted on `GitHub`_ in the form of :ref:`Git` +repositories. GitHub is a platform that not only provides storage for +projects but also additional features like code reviews or issue boards. .. raw:: html @@ -48,17 +47,18 @@ Create a GitHub account To use GitHub, start by creating an account for the platform. Follow the `GitHub Join Process `_. -For Ansys employees: +For Ansys employees who would like to join the Ansys GitHub organization, +visit `Join Ansys GitHub Organization `_. -If you would like to join the Ansys GitHub organization, visit `Join Ansys GitHub Organization `_. +.. _github_repo_sections: -GitHub repository sections --------------------------- +Interact with GitHub repository sections +---------------------------------------- Once you have a GitHub account and access to the Ansys GitHub organization, you are able to interact with the different repositories. While each repository contains all tabbed sections in the following list, -your access level determines tabbed sections you can see. +your access level determines which tabbed sections you can see. .. 
figure:: images/github_sections.png :alt: GitHub repository sections @@ -78,53 +78,50 @@ your access level determines tabbed sections you can see. Create an issue --------------- -You create an issue to either report a bug or request help or a new feature. Commenting -allows you to interact with other users, developers, and project maintainers. +You create an issue to report a bug, request a new feature, or ask for library-specific +help. You can comment on an issue to interact with other users, developers, and project +maintainers. -To open an issue, select the ``Issues`` tab in the :ref:`GitHub repository -sections` and click ``New Issue``. Then, select a template for the type of issue -to open. +To open an issue, select the ``Issues`` tab in the :ref:`github_repo_sections` and click +``New Issue``. Then, select a template for the type of issue to open. -GitHub issues require the usage of Markdown files instead of ReStructured Text +GitHub issues require the usage of Markdown files instead of ReStructuredText (RST) files. For more information, see `Basic writing and formatting syntax -`_. +`_ +in the GitHub documentation. + +Report bugs +~~~~~~~~~~~ + +If you encounter a bug in the code, open a new issue and select the template +for creating a bug report. In the bug report, take these actions: + +- Indicate the operating system, Python version, and library version that you are using. +- Include a small piece of code to allow others to reproduce the bug that you found. +- Add any additional information that you consider useful for fixing the bug. Request new features ~~~~~~~~~~~~~~~~~~~~ -If you would like a new feature to be added to a PyAnsys library, you open a +If you would like a new feature to be added to a PyAnsys library, open a new issue and select either the template for code enhancements or a -feature idea. In the issue, you then do the following: +feature idea. 
In the issue, take these actions: - Describe the main goal of the feature that you'd like to have added and why it is beneficial to the project. - - Describe how this feature might possibly be implemented and the steps that should be followed. - - Add any references that could help during the development process. -Report bugs -~~~~~~~~~~~ - -If you encounter a bug in the code, you open a new issue and select the template -for creating a bug report. In the bug report, try to: - -- Indicate the operating system, Python version, and library version that you are using. - -- Include a small piece of code to allow others to reproduce the bug you found. - -- Add any additional information that you consider useful for fixing the bug. - Fork a repository ----------------- Forking a repository is like copying and pasting a project into your own GitHub -profile. Notice that only ``public`` labeled repositories can be forked. You +profile. Notice that only repositories labeled as ``public`` can be forked. You cannot fork a repository labeled as ``internal`` or ``private``. -To fork a repository, click the ``Fork`` button at the top of the project's -``Code`` tabbed section. +To fork a repository, click the **Fork** button at the top of the project's +**Code** page. Clone a repository ------------------ @@ -135,22 +132,21 @@ doing this (``HTTPS`` or ``SSH``), to force the usage of ``SSH``, only this meth Clone using SSH ~~~~~~~~~~~~~~~ -Cloning using ``SSH`` requires :ref:`Enabling SSH`. After that, you can -clone a repository by running: +Cloning using ``SSH`` requires that SSH be enabled. For more information, see :ref:`Enable SSH`. +To clone a repository using SSH, run this command: .. code-block:: bash git clone git@github.com:/.git -For example, clone the `PyMAPDL`_ -project with: +For example, clone the `PyMAPDL`_ project with this command: .. 
code-block:: bash git clone git@github.com:ansys/pymapdl.git -Install in editable mode ------------------------- +Install a library in editable mode +---------------------------------- You can install a Python library in *editable mode*, which allows you to modify the source code and have these new changes @@ -161,13 +157,13 @@ To install a Python library in editable mode: 1. Ensure that you :ref:`Create` and :ref:`Activate` a Python virtual environment, as explained in the :ref:`Virtual environments` section. -2. Update ``pip`` with: +2. Update ``pip`` with this command: .. code-block:: bash python -m pip install --upgrade pip -3. Install the library with: +3. Install the library with this command: .. code-block:: bash @@ -176,13 +172,13 @@ To install a Python library in editable mode: Create a branch --------------- -It is likely that the default branch name is ``main`` or ``master``. This is the -development branch for PyAnsys projects. For more information, see :ref:`Branch model`. +It is likely that the repository's default branch name is ``main`` or ``master``. This is the +development branch for PyAnsys projects. For more information, see :ref:`Branching model`. You must implement new contributions in a different branch and then :ref:`Create a pull request` -so that you can merge these changes into the ``main`` branch. +so that these changes can later be merged into the repository's ``main`` branch. -You create a branch with: +To create a branch, run this command: .. code-block:: bash @@ -190,7 +186,7 @@ You create a branch with: .. _branch_naming: -Branch naming conventions +Branch-naming conventions ~~~~~~~~~~~~~~~~~~~~~~~~~ The following requirements for naming branches helps to streamline @@ -210,29 +206,30 @@ changes any given branch is introducing before looking at the code. 
- ``testing/``: Improvements or changes to testing
- ``release/``: Releases (see below)

-Push a new branch
------------------
+Push your branch
+----------------

-Once you have implemented new changes and committed them, you push your
-branch, which uploads your changes to the repository. These changes are only
-visible in the branch that you just pushed.
+Once you have implemented new changes and committed them, push your
+branch with this command:

.. code-block:: bash

    git push -u origin

+Your changes are uploaded to the repository, but they are only visible in the branch
+that you just pushed.
+
Create a pull request
---------------------

Once you have tested your branch locally, create a pull request (PR) and target your merge to
-``main``. This automatically runs CI testing and verifies that your changes
+the repository's ``main`` branch. This automatically runs CI testing and verifies that your changes
work across all supported platforms. For procedural information, see
`Creating a pull request
`_ in the GitHub documentation.

-After you submit your PR, someone from the PyAnsys development team reviews
-your code to verify that it meets the :ref:`Packaging style`, :ref:`Coding
-style`, and :ref:`Documentation style`.
+After you submit your PR, a project maintainer reviews your code to verify that it meets
+the :ref:`Packaging style`, :ref:`Coding style`, and :ref:`Documentation style`.

Once your code is approved, if you have write permission, you can merge the PR and
then delete the PR branch. If you don't have write permission, the reviewer
@@ -240,8 +237,7 @@ or someone else with write permission must merge your PR and then delete your PR
.. admonition:: Always delete your PR branch after merging it into the main branch.

-   You can set up automatic deletion
-   of branches in **Settings -> General -> Pull Requests**.
+   You can set up automatic deletion of branches in **Settings > General > Pull Requests**.
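Put together, the branch-create-commit-push cycle described above can be sketched end to end. This is a local dry run in a throwaway repository (the branch name ``fix/readme-typo``, the file name, and the commit identity are illustrative only):

```shell
# Create a throwaway repository to demonstrate the branch workflow locally.
tmp="$(mktemp -d)"
cd "$tmp"
git init -q .

# Create a branch whose name follows the fix/ naming convention above.
git checkout -q -b fix/readme-typo

# Make a change and commit it (inline identity flags keep the sketch self-contained).
echo "PyAnsys" > README.rst
git add README.rst
git -c user.name=demo -c user.email=demo@example.com commit -qm "fix: add README"

# In a real contribution you would now run: git push -u origin fix/readme-typo
git branch --show-current   # prints: fix/readme-typo
```

In a real repository, the final step is the ``git push -u origin <branch-name>`` shown above, which makes the branch available for a pull request.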
Use GitHub CLI
--------------

@@ -251,4 +247,4 @@ GitHub offers a `command-line interface (CLI)
`_. This program allows you to
interact with most of the features available in the
web version of GitHub. For available commands, see the
-`official GitHub CLI manual `_.
+`GitHub CLI `_ documentation.
diff --git a/doc/source/how-to/diag/doc_layout.rst b/doc/source/how-to/diag/doc_layout.rst
index f3bbd04ad..345585912 100644
--- a/doc/source/how-to/diag/doc_layout.rst
+++ b/doc/source/how-to/diag/doc_layout.rst
@@ -1,7 +1,7 @@
.. _proposed doc layout:

.. graphviz::
-    :caption: Generic structure for the PyAnsys library documentation.
-    :alt: Generic structure for the PyAnsys library documentation.
+    :caption: Generic structure for PyAnsys library documentation.
+    :alt: Generic structure for PyAnsys library documentation.
    :align: center

    digraph "sphinx-ext-graphviz" {
diff --git a/doc/source/how-to/dns-configuration.rst b/doc/source/how-to/dns-configuration.rst
index 5d084ba59..b9660aae1 100644
--- a/doc/source/how-to/dns-configuration.rst
+++ b/doc/source/how-to/dns-configuration.rst
@@ -1,34 +1,36 @@
DNS configuration
=================

-As explained in :ref:`Documenting`, PyAnsys projects publish their documentation
-online under the following canonical name (CNAME) convention:
+As explained in :ref:`documenting_developers`, documentation for PyAnsys libraries is published
+online following the canonical name (CNAME) convention:

``https://.docs.pyansys.com``

To request a CNAME for the ``pyansys.com`` domain, contact the
-`PyAnsys Core Team`_, so one of the members can handle the
-creation of the requested PyAnsys subdomain.
+`PyAnsys core team `_. They handle the creation of all PyAnsys subdomains.

Once the CNAME is created, repository administrators can configure their published
documentation in GitHub pages to be exposed through it. To configure the CNAME
-for your documentation, refer to `Managing a custom domain for your GitHub Pages site`_.
+for your documentation, see `Managing a custom domain for your GitHub Pages site`_
+in the GitHub documentation.

DNS TXT verification
--------------------

Once a CNAME is registered under the ``pyansys.com`` domain, the next step is
-to perform a DNS TXT verification. All PyAnsys subdomains are required by Ansys'
+to perform a DNS TXT verification. All PyAnsys subdomains are required by the Ansys
IT department to provide a DNS TXT verification. To verify a new CNAME for an
-organization, refer to `Verifying a domain for your organization site`_. This guide
-shows how to create DNS TXT verification elements for GitHub Pages sites.
+organization, see `Verifying a domain for your organization site`_ in the GitHub
+documentation. This article shows how to create DNS TXT verification elements
+for GitHub Pages sites.

.. warning::

    Only users with privilege access to the ``pyansys.com`` DNS zone can
-    perform this operation. Contact the `PyAnsys Core Team`_ if needed.
+    perform a DNS TXT verification. If assistance is needed, contact the
+    `PyAnsys core team `_.

-PyAnsys verified domains
+PyAnsys-verified domains
------------------------

In the Ansys GitHub organization, these domains have been verified:

@@ -39,41 +41,41 @@ In the Ansys GitHub organization, these domains have been verified:

.. warning::

    Only CNAME requests with **one** subdomain before the previous verified
-    domains are allowed. The reasons behind this measure are explained in
-    :ref:`DNS protection measures`.
+    domains are allowed. For more information, see :ref:`DNS protection measures`.

DNS protection measures
-----------------------

-The rationale behind choosing the previous CNAME convention is due to cybersecurity reasons.
+The rationale behind choosing the previous CNAME convention is related to cybersecurity.
+As the `Verifying a domain for your organization site`_ article explains, GitHub provides for verifying domains for users and organizations. **Having a verified domain prevents users external to the organization from -taking over existing direct subdomains**. However, GitHub does not verify -deeper subdomains. +taking over existing direct subdomains**. + +However, GitHub does not verify deeper subdomains. This is better explained with the following examples: -Case scenario - **protected** subdomain -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Scenario for a **protected** subdomain +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -- Consider that the domain ``docs.pyansys.com`` has been verified for the Ansys GitHub organization. +- The ``docs.pyansys.com`` domain has been verified for the Ansys GitHub organization. - This CNAME is requested: ``subdomain.docs.pyansys.com``. This CNAME can only be used by repositories inside the Ansys GitHub organization. Any attempt by an external user to take over this CNAME is identified and rejected by GitHub. -Case scenario - **vulnerable** subdomain -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Scenario for a **vulnerable** subdomain +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - The domain ``docs.pyansys.com`` has been verified for the Ansys GitHub organization. - This CNAME is requested: ``subsubdomain.subdomain.docs.pyansys.com``. This CNAME **can** be used by external users for their repositories. For this reason, -you must avoid creating CNAME requests that are not verified by the organization. +you must avoid creating CNAME requests that are not verified by the Ansys GitHub organization. -Preventing CNAME takeover +CNAME takeover prevention ------------------------- CNAME values have been taken over in the past by external users, typically due to @@ -87,16 +89,13 @@ these reasons: Thus, it is important that you follow these guidelines: * Ensure that your GitHub organization has verified domains for hosting GitHub pages. 
-* Check that the CNAME that you request does not have a subdomain depth larger than **1** with respect to the verified domains. +* Check that the CNAME that you request does not have a subdomain depth larger than **one** with respect to the verified domains. * Request a CNAME only when needed, which is just prior to publishing the site. * Request deletion of the CNAME once it is no longer used to prevent others from hosting their sites on it. -.. - Links +.. Links .. _PyAnsys DNS Zones: https://portal.azure.com/#@ansys.com/resource/subscriptions/2870ae10-53f8-46b1-8971-93761377c38b/resourceGroups/pyansys/providers/Microsoft.Network/dnszones/pyansys.com/overview -.. _PyAnsys Core Team: mailto:pyansys.core@ansys.com -.. _PyAnsys GitHub organization: https://github.com/ansys .. _Managing a custom domain for your GitHub Pages site: https://docs.github.com/en/pages/configuring-a-custom-domain-for-your-github-pages-site/managing-a-custom-domain-for-your-github-pages-site .. _Verifying a domain for your organization site: https://docs.github.com/en/pages/configuring-a-custom-domain-for-your-github-pages-site/verifying-your-custom-domain-for-github-pages#verifying-a-domain-for-your-organization-site \ No newline at end of file diff --git a/doc/source/how-to/documenting.rst b/doc/source/how-to/documenting.rst index 23867fe30..a0d61f0fb 100644 --- a/doc/source/how-to/documenting.rst +++ b/doc/source/how-to/documenting.rst @@ -4,9 +4,9 @@ Documenting =========== PyAnsys documentation must not only be written but also maintained. If you are -new to writing PyAnsys documentation, see the `Google_dev_doc_style_guide`_, -which provides the general guidelines that you are to follow. This page supplies guidance specific -to PyAnsys documentation. +contributing to PyAnsys documentation, see the `Google developer documentation style guide +`_, which provides the general guidelines that you are to follow. +This page supplies guidance specific to PyAnsys documentation. .. 
note:: For comprehensive information on contributing new content or revising existing @@ -30,17 +30,17 @@ Documentation sources -The generation of PyAnsys documentation uses `Sphinx`_ and the Ansys-branded Sphinx theme -(`Ansys_Sphinx_theme_repo`_) to assemble content in: +The generation of PyAnsys documentation uses `Sphinx`_ and the `Ansys-branded Sphinx theme +`_ to assemble content from these resources: -- Docstrings +- Docstrings in Python (PY) files - reStructuredText (RST) files -- Python (PY) example files +- Examples in PY files Docstrings ~~~~~~~~~~ -You must format docstrings so that Sphinx can parse them. Sphinx provides +You must format docstrings in PY files so that Sphinx can parse them. Sphinx provides these extensions for docstring formatting: - `numpydoc extension `_ @@ -48,13 +48,13 @@ these extensions for docstring formatting: Using the ``numpydoc`` extension is preferred because it supports an API documentation structure with one page per method, providing Python community -members with documentation like that generated for the `pandas`_ -and `numpy`_ packages. If your API is very linear, you +members with documentation like that generated for the +`numpy`_ and `pandas`_ packages. If your API is very linear, you can use the ``napoleon`` extension because it supports a documentation structure where everything needed to solve a certain problem can be shown on one page. -The ``numpydoc`` manual provides explains how to use the extension with Sphinx and -includes a `style guide `_ . The ``napoleon`` extension, +The `numpydoc manual `_ explains how to use the ``numpydoc`` extension with +Sphinx and includes a `style guide `_. The ``napoleon`` extension, which parses both numpydoc and Google style docstrings, refers you to the `Google Python Style Guide `_. 
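A minimal numpydoc-style docstring, of the kind the ``numpydoc`` extension parses into one API page per function, might look like the following sketch. The function itself is a made-up illustration, not part of any PyAnsys API:

```python
def scale(values, factor):
    """Scale a sequence of numbers by a constant factor.

    Parameters
    ----------
    values : list of float
        Numbers to scale.
    factor : float
        Multiplier applied to each number.

    Returns
    -------
    list of float
        The scaled numbers.

    Examples
    --------
    >>> scale([1.0, 2.0], 10.0)
    [10.0, 20.0]
    """
    return [v * factor for v in values]
```

Note the standard numpydoc sections (``Parameters``, ``Returns``, ``Examples``); the ``Examples`` section doubles as a doctest, which helps keep the documentation and the code in sync.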
@@ -68,8 +68,8 @@ RST files ~~~~~~~~~ To provide general usage information in your documentation, use your favorite -editor to create RST (ReStructuredText) files that you then place in :ref:`The \`\`doc/\`\` -directory`. The ``index.rst`` file in the ``doc/source`` directory +editor to create RST (ReStructuredText) files that you then place in +:ref:`The \`\`doc\`\` directory`. The ``index.rst`` file in the ``doc/source`` directory defines the first level of your documentation hierarchy. The ``toctree`` directive (which stands for "table of contents tree") indicates the maximum number of heading levels that the documentation is to display. Following this @@ -78,8 +78,7 @@ directive are the directory names for your documentation sections. .. include:: diag/doc_layout.rst Each documentation section has its own ``index.rst`` file, as shown by the preceding -figure. The documentation layout can be modeled using the following code in -each one of the ``index.rst`` files. +figure. The following RST files provide examples of how to create ``index.rst`` files. .. tab-set:: @@ -127,13 +126,13 @@ each one of the ``index.rst`` files. ... While you do not include the ``.rst`` extension when defining the section -structure, the index file referenced for each section must be named -``index.rst``. +structure in the ``toctree`` directive, the index file referenced for each +section should be named ``index.rst``. After you build documentation locally as described in :ref:`Build documentation`, the first-level heading in the ``index.rst`` file for each -section is shown as a clickable link in the header of the -documentation's generated HTML output. For more information on defining the +section is shown as a clickable link in the title bar of the +library's HTML documentation. For more information on defining the documentation structure, see `Getting Started `_ in the Sphinx documentation. 
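For a concrete picture of the hierarchy described above, the ``index.rst`` file in ``doc/source`` might contain a ``toctree`` such as this sketch (the section directory names are illustrative and should match your own layout):

```rst
PyAnsys library documentation
#############################

.. toctree::
   :maxdepth: 2

   getting-started/index
   api/index
   examples/index
   contributing
```

Each entry points at a section's own ``index.rst`` (referenced without the ``.rst`` extension), and ``:maxdepth: 2`` limits how many heading levels appear in the rendered table of contents.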
@@ -146,9 +145,9 @@ Within RST files, heading titles are to use sentence case per the in the *Google developer documentation style guide*. The line that follows the heading title must have a string of characters that is the same length as the heading title. If the length of the characters under the heading title -do not match the length of the heading title, Sphinx generates a warning. +is less than the length of the heading title, Sphinx generates a warning. -For consistency within PyAnsys libraries, the use of these special characters +For consistency within PyAnsys libraries, the use of the following special characters is recommended for headings but is not enforced: - For section-level headings, use ``###``. @@ -160,15 +159,16 @@ is recommended for headings but is not enforced: For comprehensive syntax information, see the `reStrucutredText Markup Specification `_. -Because you must be familiar with the content in this guide, explore its HTML pages -and then the RST files in its `repository `_. This should help you -to understand the syntax and see how RST files are nested to create the guide. +Because you must be familiar with the content in this guide before contributing to +a PyAnsys library, explore its pages and then the RST files in its `repository `_. +This should help you to understand the syntax and see how RST files are nested to create the +structure of the guide. Recommended sections ++++++++++++++++++++ -Although each PyAnsys library is different, its documentation has the same goal: -provide instructions and guidelines for users. Thus, you can find some common sections +Although each PyAnsys library is different, its documentation has the same goal, which +is to provide instructions and guidelines for users. Thus, you can find some common sections across the documentation for many PyAnsys libraries. 
Try to include these top-level sections in your library's documentation: @@ -177,28 +177,28 @@ sections in your library's documentation: - ``API reference`` Documents API resources provided by the library. - ``Examples``: Provides fully fledged examples for using the library. - ``Contributing``: Refers to the *PyAnsys developer's guide* - for overall guidance and provides library-specific contribution information. + for overall guidance and then provides library-specific contribution information. Examples ~~~~~~~~ Examples come in two formats: -- Basic code snippets demonstrating the feature +- Basic code snippets demonstrating features - Full-fledged standalone examples that are meant to be run as downloadable scripts -Place basic code snippets in the ``doc/source/`` directory. -Place full-fledged standalone examples in the ``examples/`` directory -at the root of the repository. All of these examples must be compliant +Place basic code snippets in the ``doc/source`` directory. +Place full-fledged standalone examples in the ``examples`` directory, +which is at the root of the repository. All of these examples must be compliant with :ref:`PEP 8`. They are compiled dynamically during the build process. Always ensure that your examples run properly locally because they are -verified through the CI performed via GitHub Actions. +verified through the CI performed by GitHub Actions. Adding a new standalone example consists of placing it in an applicable -subdirectory in the ``examples/`` directory. If none of the existing directories +subdirectory in the ``examples`` directory. If none of the existing directories match the category of your example, create a new subdirectory with a -``README.txt`` file describing the new category which implies -the Python project has the following structure: +``README.txt`` file describing the new category. Here is an example of what +the structure for a PyAnsys library typically looks like: .. 
code-block:: text @@ -217,7 +217,8 @@ the Python project has the following structure: └── README.txt (or .rst) -In the Sphinx configuration file (``doc/conf.py``), enable the ``sphinx-gallery`` exentenion: +In the Sphinx configuration file (``doc/conf.py``), enable the `Sphinx-Gallery +`_ extension: .. code:: Python @@ -226,8 +227,8 @@ In the Sphinx configuration file (``doc/conf.py``), enable the ``sphinx-gallery` 'sphinx_gallery.gen_gallery', ] -The following configuration declares the location of the `examples` directory -to be ``../examples`` and the `output` directory to be ``examples``: +The following configuration declares the location of the ``examples`` directory +to be ``../examples`` and the ``output`` directory to be ``examples``: .. code:: Python @@ -236,26 +237,22 @@ to be ``../examples`` and the `output` directory to be ``examples``: 'gallery_dirs': 'examples', # path where the gallery generated outputs are to be saved } -Because these examples are -built using the `sphinx-gallery -`_ extension, you must -follow its `coding guidelines -`_. +Because these examples are built using Sphinx-Gallery, you must +follow its coding guidelines. -:ref:`General example` uses Python and the ``sphinx gallery`` -extension. +:ref:`General example` uses Python and Sphinx-Gallery. Document Python code -------------------- -You can use the `sphinx.ext.autodoc` extension to generate documentation from your Python +You can use the native ``sphinx.ext.autodoc`` extension to generate documentation from your Python code. When using this extension, you can include these directives in your :ref:`RST files`: -* ``automodule``: For documenting modules. -* ``autoclass``: For documenting classes. -* ``autofunction``: For documenting methods and functions. 
+* ``automodule``: For documenting modules +* ``autoclass``: For documenting classes +* ``autofunction``: For documenting methods and functions -For a full list of 'auto' directives, see `Include documentation from docstrings +For a full list of ``auto`` directives, see `Include documentation from docstrings `_ in the Sphinx documentation. @@ -264,7 +261,7 @@ Document classes There are two main ways of using Sphinx to document a class: -* Manually describe 'how' and 'why' you use a class in :ref:`RST files`. +* Manually describe *why* and *how* you use a class in RST files. * Automatically generate documentation for classes using the ``autoclass`` or ``autosummary`` directive in RST files. @@ -272,7 +269,7 @@ There are two main ways of using Sphinx to document a class: Manually generate documentation +++++++++++++++++++++++++++++++ -To describe 'why' and 'how' you use a class within RST files, use the +To describe *why* and *how* to use a class in RST files, use the ``code-block`` directive: .. tab-set:: @@ -367,11 +364,11 @@ the two preceding approaches. To accomplish this, you include multiple necessary to describe the relationships between the classes. For example, the Granta MI BoM Analytics library uses this combined approach: -:external+grantami-bomanalytics:doc:`Part Compliance page ` +:external+grantami-bomanalytics:doc:`Part compliance` first describes the :external+grantami-bomanalytics:class:`~ansys.grantami.bomanalytics.queries.PartComplianceQuery` class. It then describes the -:external+grantami-bomanalytics:class:`~ansys.grantami.bomanalytics._query_results.PartComplianceQueryResult`, +:external+grantami-bomanalytics:class:`~ansys.grantami.bomanalytics._query_results.PartComplianceQueryResult` and :external+grantami-bomanalytics:class:`~ansys.grantami.bomanalytics._item_results.PartWithComplianceResult` classes returned by the query. 
Because the classes are only ever @@ -389,16 +386,17 @@ Build documentation ------------------- `Sphinx`_ is used to build the documentation. You configure the entire build process in the -``conf.py`` file, located in the ``source/`` directory in :ref:`The \`\`doc/\`\` directory`. +``conf.py`` file, which is located in the ``doc/source`` directory. -This directory also contains a ``Makefile`` file and a ``make.bat`` file for +The ``doc`` directory contains a ``Makefile`` file and a ``make.bat`` file for automating the building process. Different builders render different -documentation output, such as ``HTML``, ``LaTeX`` or ``PDF``. +documentation output, such as ``HTML`` and ``PDF``. Build HTML documentation ~~~~~~~~~~~~~~~~~~~~~~~~ -You build HTML documentation with: +You can build HTML documentation locally with the command for your OS. On macOS +or Linux, you use ``Makefile``. On Windows, you use the ``make.bat`` file. .. tab-set:: @@ -414,19 +412,15 @@ You build HTML documentation with: make.bat html -The resulting HTML files are created in the ``_build/html`` directory, -located in :ref:`The \`\`doc/\`\` directory`. +The resulting HTML files are created in the ``doc/_build/html`` directory. -You can display the HTML documentation with: - -.. code-block:: text - - doc/_build/html/index.html +To view the HTML documentation in your browser, navigate to this directory +and double-click the ``index.html`` file. Build PDF documentation ~~~~~~~~~~~~~~~~~~~~~~~ -To build PDF documentation, the following rules must be added to +To build PDF documentation locally, you must add the following rules to the ``Makefile`` and ``make.bat`` files: .. tab-set:: @@ -453,7 +447,7 @@ the ``Makefile`` and ``make.bat`` files: cd "%BUILDDIR%\latex" pdflatex \*.tex --interaction=nonstopmode -You can call the previous rules by running: +You can then build the PDF documentation locally with the command for your OS: .. 
tab-set:: @@ -470,7 +464,7 @@ You can call the previous rules by running: make.bat pdf The resulting PDF and intermediate LaTeX files are created in the -``_build/latex`` folder, located in :ref:`The \`\`doc/\`\` directory`. +``doc/_build/latex`` directory. .. admonition:: Always verify the content of your PDF file. @@ -479,18 +473,19 @@ The resulting PDF and intermediate LaTeX files are created in the .. _multi_version_enabling: -Enabling multi-version documentation ------------------------------------- +Enable multi-version documentation +---------------------------------- With the release of `ansys/actions@v4 -`_ , projects can -benefit from multi-version documentation. Projects taking advantage of this +`_ , libraries can +benefit from multi-version documentation. Libraries taking advantage of this feature need to apply different configurations according to their level of maturity. -Follow these steps to enable multi-version documentation in your project: +Follow these steps to enable multi-version documentation in your library: -- Use ``ansys-sphinx-theme>=0.8`` for building the documentation in your project. +- Use `ansys-sphinx-theme `_ 0.8 or later for building + your library's documentation. - Include the following lines in :ref:`The \`\`conf.py\`\` file`: .. code-block:: python @@ -518,18 +513,19 @@ Follow these steps to enable multi-version documentation in your project: The idea is that the canonical name (CNAME) is only defined in a single place, so it can be easily changed if required. -- Enable documentation deployment for development and stable versions, see - :ref:`Deploying documentation`. +- Enable documentation deployment for development and stable versions. For more + information, see :ref:`Deploy documentation`. -With all the previous configuration, your project is ready to use multi-version +With all the previous configuration, your library is ready to use multi-version documentation in an automated way. 
This means that every time you release a -new version, a link to the documentation for this version is added to the +drop-down button in the upper right corner of the documentation's title bar. +You use this drop-down button to switch from viewing the documentation for the +latest stable release to viewing the documentation for the development version +or previously released versions. -.. admonition:: Controlling the desired amount of versions showing up in the drop-down +.. admonition:: Controlling the number of versions shown in the drop-down button Only the development branch and the last three stable versions are shown by default in the documentation drop-down button. To show more versions, @@ -537,20 +533,21 @@ released versions. action `_. -If you require support for migrating to the multi-version documentation, email -`pyansys.core@ansys.com `_. +If you require support for migrating to the multi-version documentation, contact the +`PyAnsys core team `_. -Deploying documentation ------------------------ +Deploy documentation +-------------------- -PyAnsys libraries deploy their documentation online via `GitHub Actions`_ to -`GitHub Pages`_. This documentation is hosted on the `gh-pages`_ branch of the -repository of the project. Documentation deployment is done by uploading the -HTML documentation artifact to the `gh-pages`_ branch of the repository, see -`enabling GitHub pages`_. +PyAnsys libraries use `GitHub Actions`_ to deploy their documentation online to +`GitHub Pages`_. This documentation is hosted on the ``gh-pages`` branch of the +library's repository.
Documentation deployment is done by uploading the +HTML documentation artifact to the ``gh-pages`` branch of the library's repository. +For more information, see `Creating a GitHub Pages site `_ +in the GitHub documentation. -Add the following workflow job to deploy both development and stable documentation -in an automated way. +To deploy both development and stable documentation in an automated way, add the +``doc-deploy-dev`` and ``doc-deploy-stable`` jobs: .. code-block:: yaml @@ -588,19 +585,18 @@ in an automated way. cname: ${{ env.DOCUMENTATION_CNAME }} token: ${{ secrets.GITHUB_TOKEN }} -Deploying to another repository -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Deploy to another repository +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ If you are planning to deploy documentation to a repository other than the one -for your project, make sure you create this new repository before deploying your +for your library, make sure you create this repository before deploying your documentation for the first time. -Using the ``{{ secrets.GITHUB_TOKEN }}`` when deploying to another repository is +Using the ``{{ secrets.GITHUB_TOKEN }}`` token when deploying to another repository is not possible due to the level of credentials of this token. Instead, use the -secrets generated by the PyAnsy Bot application. +secrets generated by the PyAnsys Bot app. -For deploying the documentation to another repository, use the following -workflow: +For deploying the documentation to another repository, use this workflow: .. code-block:: yaml @@ -680,7 +676,7 @@ constructed using the following structure: ``https://.docs.pyansys.com`` You can generally access the latest development version of the documentation by -adding the ``dev`` path to the URL as follows: +adding the ``dev`` path to the URL: ``https://.docs.pyansys.com/dev`` @@ -696,12 +692,12 @@ For example, consider the PyAEDT documentation: - The URL for documentation of the latest stable release is ``_.
- The URL for documentation of the latest development version is ``_. -The latest development versions of both the library and its documentation are -automatically kept up to date via GitHub actions. +GitHub Actions automatically keep the latest development versions of both the +library and its documentation up to date. To make documentation changes, you create a branch with a name that begins with a prefix of ``doc/`` that is then followed by a short description of what you -are changing. For more information, see :ref:`Branch model`. +are changing. For more information, see :ref:`Branching model`. As you are making changes in this branch, you want to periodically generate the documentation locally so that you can test your changes before you create a @@ -711,65 +707,69 @@ Using PyMeilisearch as search engine ------------------------------------ PyMeilisearch is a Python client library that enables you to utilize -MeiliSearch, an open-source search engine, to provide fast and relevant +MeiliSearch, an open source search engine, to provide fast and relevant search capabilities for your application's data. -To enable multi-version documentation in your project, follow these steps: +By completing the following steps, you can effectively enable and use PyMeilisearch as +a search engine for multi-version documentation in your project. -- Use ``ansys-sphinx-theme>=0.9`` for building the documentation in your project. +#. Use ``ansys-sphinx-theme>=0.9`` for building the documentation in your project. -- Include the following lines in the conf.py file: +#. Include the following lines in the ``conf.py`` file: - .. code-block:: python + .. 
code-block:: python - import os + import os - from ansys_sphinx_theme import convert_version_to_pymeilisearch + from ansys_sphinx_theme import convert_version_to_pymeilisearch - cname = os.getenv("DOCUMENTATION_CNAME", "") - """The canonical name of the webpage hosting the documentation.""" + cname = os.getenv("DOCUMENTATION_CNAME", "") + """The canonical name of the webpage hosting the documentation.""" - html_theme_options = { - "use_meilisearch": { - "api_key": os.getenv("MEILISEARCH_API_KEY", ""), - "index_uids": { - f"{convert_version_to_pymeilisearch(__version__)}": "index name to be displayed", # noqa: E501 - }, - }, - ... - } + html_theme_options = { + "use_meilisearch": { + "api_key": os.getenv("MEILISEARCH_API_KEY", ""), + "index_uids": { + f"{convert_version_to_pymeilisearch(__version__)}": "index name to be displayed", # noqa: E501 + }, + }, + ... + } - In this code, replace with the desired name for your MeiliSearch index. - The ``convert_version_to_pymeilisearch`` function is to convert your package's version into a format suitable for MeiliSearch indexing. - - - Enable documentation index deployment for development and stable versions using GitHub Actions: + #. In these lines, replace ** with the desired name for your MeiliSearch index. - .. code-block:: yaml + The ``convert_version_to_pymeilisearch`` function converts your package's version into + a format suitable for MeiliSearch indexing. 
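The preceding step notes that ``convert_version_to_pymeilisearch`` turns your package's version into a format suitable for MeiliSearch indexing. As a hedged illustration only — the helper name below is hypothetical and the real function's output format may differ — a conversion of this kind can be sketched as:

```python
# Hypothetical sketch of a version-to-index-name conversion. The helper
# name `to_meilisearch_index` is illustrative; in practice you would call
# ansys_sphinx_theme.convert_version_to_pymeilisearch instead.
def to_meilisearch_index(version: str) -> str:
    """Map a version such as '0.59.dev0' to an index name such as 'v0-59'."""
    major_minor = version.split(".")[:2]  # keep only the major and minor parts
    return "v" + "-".join(major_minor)


print(to_meilisearch_index("0.59.dev0"))  # v0-59
```

The ``v<major>-<minor>`` shape mirrors the ``VERSION_MEILI`` value that the deployment workflow in this section computes with shell commands.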
- jobs: - doc-deploy-index: - name: "Index the documentation and scrap using PyMeilisearch" - runs-on: ubuntu-latest - needs: doc-deploy - if: github.event_name == 'push' - steps: - - name: Scrape the stable documentation to PyMeilisearch - run: | - VERSION=$(python -c "from import __version__; print('.'.join(__version__.split('.')[:2]))") - VERSION_MEILI=$(python -c "from import __version__; print('-'.join(__version__.split('.')[:2]))") - echo "Calculated VERSION: $VERSION" - echo "Calculated VERSION_MEILI: $VERSION_MEILI" - - - name: "Deploy the latest documentation index" - uses: ansys/actions/doc-deploy-index@v4.1 - with: - cname: ".docs.pyansys.com/version/$VERSION" - index-name: "v$VERSION_MEILI" - host-url: "" - api-key: ${{ secrets.MEILISEARCH_API_KEY }} - -Replace , , and with appropriate values for your project. -The version of your package is automatically calculated and used for indexing, ensuring that your documentation remains up-to-date. -For more information, see the `PyMeilisearch`_ and `ansys-sphinx-theme-doc`_ documentation. -By following these steps, you can effectively use PyMeilisearch as a search engine for multi-version documentation in your project. + #. Enable documentation index deployment for development and stable versions using GitHub Actions: + + .. 
code-block:: yaml + + jobs: + doc-deploy-index: + name: "Index the documentation and scrape using PyMeilisearch" + runs-on: ubuntu-latest + needs: doc-deploy + if: github.event_name == 'push' + steps: + - name: Scrape the stable documentation to PyMeilisearch + run: | + VERSION=$(python -c "from import __version__; print('.'.join(__version__.split('.')[:2]))") + VERSION_MEILI=$(python -c "from import __version__; print('-'.join(__version__.split('.')[:2]))") + echo "Calculated VERSION: $VERSION" + echo "Calculated VERSION_MEILI: $VERSION_MEILI" + + - name: "Deploy the latest documentation index" + uses: ansys/actions/doc-deploy-index@v4.1 + with: + cname: ".docs.pyansys.com/version/$VERSION" + index-name: "v$VERSION_MEILI" + host-url: "" + api-key: ${{ secrets.MEILISEARCH_API_KEY }} + +#. Replace **, **, and ** with appropriate values + for your project. + + The version of your package is automatically calculated and used for indexing, ensuring that your documentation + remains up to date. For more information, see the `PyMeilisearch`_ and `ansys-sphinx-theme-doc`_ documentation. diff --git a/doc/source/how-to/grpc-api-packages.rst index 04e770eca..1c8d543cd 100644 --- a/doc/source/how-to/grpc-api-packages.rst +++ b/doc/source/how-to/grpc-api-packages.rst @@ -3,11 +3,11 @@ gRPC API packages Protobuf service definitions provide the API specification for underlying server implementations so that each consuming client library has a clear -contract for gRPC data messages.
Ideally, the Protobuf (``.proto``) files +have a single repository established as the source of truth, organized by +API version increment as the API definition expands and changes. Because +most client libraries are custom implementations enhancing the developer +experience when consuming the service, releasing the Protobuf definitions publicly gives full flexibility to developers to operate at the abstraction layer they choose. @@ -15,52 +15,54 @@ Maintain API definition repository ---------------------------------- Because the Protobuf definition of the service is language agnostic, the repository -containing the Protobuf files can be created within the top-level +containing the PROTO files can be created within the top-level `Ansys GitHub organization`_. -Every update of the Protobuf files follows a standard -pull request process as a sanity check for API definition accuracy. Language- -specific packages can be generated for each merge or on a set cadence. +Every update of the PROTO files follows a standard pull request process as a +sanity check for API definition accuracy. Language-specific packages can be +generated for each merge or on a set cadence. -Managing Protobuf definitions for Python clients ------------------------------------------------- +Manage Protobuf definitions for Python clients +---------------------------------------------- Within Ansys, and more specifically in the PyAnsys environment, most client libraries -have a dedicated Python package containing the needed ``.proto`` files compiled as +have a dedicated Python package containing the needed PROTO files compiled as Python source code. These are typically consumed by the PyAnsys client libraries -for being able to communicate with their respective services. +for communicating with their respective services. 
-For example, `PyMAPDL_ consumes the ``ansys-api-mapdl`` package, which is built in the +For example, `PyMAPDL`_ consumes the ``ansys-api-mapdl`` package, which is built in the `ansys-api-mapdl`_ repository. -How to build an ``ansys-api-`` repository -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Build an ``ansys-api-`` repository +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The Ansys GitHub organization has a dedicated template repository for creating -these ``.proto`` file repositories and the needed files to generate the Python API +PROTO file repositories and the needed files to generate the Python API packages to be consumed by the PyAnsys clients. -To set up an API repository like the `ansys-api-mapdl`` one, +To set up an API repository like the ``ansys-api-mapdl`` one, select the `ansys-api-template `_ repository when creating a repository within the Ansys GitHub organization. -Follow the instructions on the `ansys-api-template - Expected usage `_ -section to understand how to use the template repository. +To understand how to use the ``ansys-api-template`` repository, see +`Expected usage `_ +in this repository's README. -Building Python stub classes -~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Build Python stub classes +~~~~~~~~~~~~~~~~~~~~~~~~~ -The template repository uses the `ansys-tools-protoc-helper `_ -library to auto-generate Python wheels that can be consumed by downstream Python client libraries. +The ``ansys-api-template`` repository uses the `ansys-tools-protoc-helper `_ +utility to auto-generate Python wheels that can be consumed by downstream Python client libraries. -To use this, include this tool in the ``pyproject.toml`` file as a build dependency: +To use the ``ansys-tools-protoc-helper`` utility, include it in the ``pyproject.toml`` file as a build dependency: ..
code-block:: toml [build-system] requires = ["setuptools >= 42.0.0", "wheel", "ansys_tools_protoc_helper"] -Then generate a Python wheel containing the autogenerated Python source with: +Then generate a Python wheel containing the autogenerated Python source with +these commands: .. code-block:: bash @@ -68,15 +70,15 @@ Then generate a Python wheel containing the autogenerated Python source with: pip install build python -m build -Publishing Python API package -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Publish the API package +~~~~~~~~~~~~~~~~~~~~~~~ -PyPI is the common package manager where API packages are released. +`PyPI`_ is the common package manager where API packages are released. Here is an example of a workflow pipeline for building and publishing the Python stub package. -In this example, the ``ansys-api-geometry`` workflow is shown. However, this workflow can be -easily copied and adapted. Only the ``PYTHON_PACKAGE_IMPORT`` environment variable -would have to be changed: +In this example, the ``ansys-api-geometry`` workflow is shown. However, you can easily copy +and adapt this workflow. Only the ``PYTHON_PACKAGE_IMPORT`` environment variable would have +to be changed: .. code-block:: yaml @@ -165,14 +167,14 @@ would have to be changed: ./**/*.tar.gz ./**/*.pdf -Versioning -^^^^^^^^^^ +Version the API package +^^^^^^^^^^^^^^^^^^^^^^^ PyPI packages follow semantic versioning while gRPC Protobuf API versions -typically follow a simplified ``v*`` versioning pattern. It is not expected to -synchronize the PyPI package version with the Protobuf API version, and +typically follow a simplified ``v*`` versioning pattern. The PyPI package +version is not expected to synchronize with the Protobuf API version, and multiple public APIs can be exposed simultaneously. For example, if you have a -``v0`` for MAPDL exposed, you can access it via: +``v0`` for MAPDL exposed, you can access it with this code: .. 
code:: python @@ -186,48 +188,47 @@ While if the API has a ``v1`` API exposed, a different library could also use: Ansys follows `Microsoft's gRPC versioning `_ -recommendations which stipulate that incrementing the gRPC Protobuf version is +recommendations, which stipulate that incrementing the gRPC Protobuf version is only necessary when making a backwards breaking change. Non-breaking changes include: -* Adding a new service -* Adding a new method to a service +* Adding a service +* Adding a method to a service * Adding a field to a request message However, this only applies to the ``vN`` gRPC Protobuf API. Python packages -tend to follow semantic versioning, and PyAnsys packages follow that +tend to follow semantic versioning, and PyAnsys packages follow this approach. Therefore, these Python gRPC API packages should also follow semantic -versioning. Plan on releasing a new minor version when: +versioning. -* Adding or removing features, messages, services, etc. +- Plan on releasing a new minor version when adding or removing features, messages, + and services. -Release a patch release when: +- Plan on releasing a patch release when fixing bugs that do not change the behavior + of the API. -* Fixing bugs that do not change the behavior of the API. - -Only plan on releasing a major release once the API is stable and you plan no -major in the near future. +Only plan on releasing a major release once the API is stable and no +major release is scheduled in the near future. This way, you can expose a ``v0`` and/or ``v1`` gRPC Protobuf API and release frequent updates using semantic versioning. -Releasing -^^^^^^^^^ +Release the API package +^^^^^^^^^^^^^^^^^^^^^^^ -As shown in the ``release`` section of GitHub workflow, once the Python +As shown in the ``release`` section of the previous GitHub workflow, once the Python API package is compiled it is then uploaded to the public PyPI. 
In order to do so, it is necessary to have access to the ``PYPI_TOKEN`` within the GitHub -repository. To get the needed credentials, contact the PyAnsys Core team -at `pyansys.core@ansys.com `_. +repository. To get the needed credentials, contact the +`PyAnsys core team `_. -If the repository cannot be uploaded to the public PyPI yet, but your Python -client library needs to consume this Python API package, it can also be -uploaded to the private PyAnsys PyPI. Email the PyAnsys Core team at -`pyansys.core@ansys.com`_ for the required ``PYANSYS_PYPI_PRIVATE_PAT`` -password. +If the repository cannot be uploaded to the public PyPI yet but your Python +client library needs to consume this Python API package, it can be +uploaded to the private PyAnsys PyPI. For the required ``PYANSYS_PYPI_PRIVATE_PAT`` +password, contact the `PyAnsys core team `_. -In this last case, the workflow section ``Upload to Public PyPi`` should be -replaced by: +In this last case, the ``Upload to Public PyPi`` workflow section should be +replaced with the ``Upload to Private PyPi`` workflow section: .. code-block:: yaml @@ -241,15 +242,15 @@ replaced by: TWINE_PASSWORD: ${{ secrets.PYANSYS_PYPI_PRIVATE_PAT }} TWINE_REPOSITORY_URL: https://pkgs.dev.azure.com/pyansys/_packaging/pyansys/pypi/upload -Consuming the API package within Python -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Consume the API package within Python +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -Once the API package has been published to PyPI, a reference can be included -within the client library build dependencies. To know how to specify project -dependencies, see :ref:`Required Dependencies`. +Once the API package has been published to PyPI, you can include a reference +within the client library build dependencies. For information on how to specify +a project's required dependencies, see :ref:`Required Dependencies`.
-Using the API package within the Python client -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Use the API package within the Python client +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The stub imports follow a standard pattern. For each API service, there is a ``*_pb2`` module that defines all messages within a specific service file and @@ -275,31 +276,31 @@ underlying implementations. For each client library release, only a single gRPC API version should be wrapped to maintain a consistent API abstraction expectation for the supporting server instances. -Public vs private Python API package -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Public versus private Python API package +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -Making these ``.proto`` files repositories public or private is up to the owner of each repository. +Making a PROTO file repository public or private is up to the owner of each repository. -In terms of intellectual property (IP) concerns, the ``.proto`` files are typically not an -issue since they do not expose any critical service logic or knowledge - and in most cases -the APIs being exposed through the ``.proto`` files are already exposed through other -mechanisms publicly. +In terms of intellectual property (IP) concerns, the PROTO files are typically not an +issue because they do not expose any critical service logic or knowledge. In most cases, +the APIs being exposed through the PROTO files are already exposed publicly through other +mechanisms. Thus, the general recommendation is to make these repositories public as soon as possible. The -main reasons behind are: +main reasons for doing so follow: * Private Python package dependencies usually involve workarounds when setting up the workflow. It is best to keep the workflows as standard and simple as possible. That - implies making all its dependencies public - including this API Python package. + implies making all its dependencies public, including this API Python package.
-* The API Python package generated eventually has to be uploaded to the public PyPI, so +* The API Python package generated eventually must be uploaded to the public PyPI so that it can be consumed by its corresponding Python client library (when it is publicly released). - So, better make it public sooner than later if there are no issues with it. + So, if there are no issues with making it public, it is better to do so sooner rather than later. -* Once the Python API package is publicly released to PyPI, there is no reason behind keeping the - repository private since all users which consume the Python API package have direct access - to the ``.proto`` files that are in the repository. +* Once the Python API package is publicly released to PyPI, there is no reason to keep the + repository private because all users who consume the Python API package have direct access + to the PROTO files that are in the repository. -However, before making any repository public with the `Ansys GitHub organization`_, review -the `Ansys open-source guide documentation `_ +However, before making any repository public in the `Ansys GitHub organization`_, review +the `Ansys Open Source Developer's Guide `_ to verify that the repository is compliant with all the needed requirements. diff --git a/doc/source/how-to/index.rst b/doc/source/how-to/index.rst index 4f6b6472a..004765ed1 100644 --- a/doc/source/how-to/index.rst +++ b/doc/source/how-to/index.rst @@ -1,10 +1,10 @@ How-to ====== -This section describes several guidelines and best practices for creating -effective and efficient Python libraries to interface with Ansys products and -services. Topics also demonstrate how apps and complex services expose -functionalities such as logging, data transfer, and app APIs. +This section describes how to create effective and efficient Python libraries +for interfacing with Ansys products and services. 
It also explains how apps +and complex services expose functionalities such as logging, data transfer, +and app APIs. .. grid:: 3 @@ -15,7 +15,7 @@ functionalities such as logging, data transfer, and app APIs. How to set up a development environment. - .. grid-item-card:: :fas:`fa-solid fa-code-compare` Versions + .. grid-item-card:: :fas:`fa-solid fa-code-compare` Python versions :link: supporting-python-versions :link-type: doc :padding: 2 2 2 2 @@ -27,14 +27,14 @@ functionalities such as logging, data transfer, and app APIs. :link-type: doc :padding: 2 2 2 2 - How to contribute to PyAnsys. + How to contribute to a PyAnsys library. .. grid-item-card:: :fas:`fa-solid fa-box-open` Packaging :link: packaging :link-type: doc :padding: 2 2 2 2 - How to package Python libraries. + How to package a PyAnsys library. .. grid-item-card:: :fas:`fa-solid fa-cubes` gRPC API packages :link: grpc-api-packages @@ -69,9 +69,9 @@ functionalities such as logging, data transfer, and app APIs. :link-type: doc :padding: 2 2 2 2 - How to use GitHub actions for continuous integration. + How to use GitHub Actions for continuous integration. - .. grid-item-card:: :fas:`fa-solid fa-upload` Releasing & publishing + .. grid-item-card:: :fas:`fa-solid fa-upload` Releasing and publishing :link: releasing :link-type: doc :padding: 2 2 2 2 diff --git a/doc/source/how-to/logging.rst index c8b89551f..03752d739 100644 --- a/doc/source/how-to/logging.rst +++ b/doc/source/how-to/logging.rst @@ -1,19 +1,19 @@ Logging ======= -This page provides guidelines for logging in PyAnsys libraries. These -guidelines are best practices discovered through implementing logging services -and modules within PyAnsys libraries. Suggestions and improvements are welcomed. +The following logging guidelines are best practices discovered through implementing +logging services and modules within PyAnsys libraries. Suggestions and improvements +are welcomed.
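As a minimal, hypothetical sketch of the pattern these guidelines build on — sending messages through Python's standard :mod:`logging` machinery so that severity filtering and formatting apply — consider the following; the logger name is illustrative, not an Ansys API:

```python
import io
import logging

# Illustrative logger name; a PyAnsys library would use its own.
logger = logging.getLogger("py_demo")
logger.setLevel(logging.INFO)

# Attach a stream handler with a formatter instead of calling print().
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("%(levelname)s - %(name)s - %(message)s"))
logger.addHandler(handler)

logger.debug("Below the INFO level, so this message is filtered out.")
logger.info("Connection established.")

print(stream.getvalue().strip())  # INFO - py_demo - Connection established.
```

Because the handler, not the logger, owns the stream, the same message can be routed to the console, a file, or both, without touching the calling code.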
-Visit the `Official Python Logging Tutorial -`_ for logging techniques. In -particular, see: +For logging techniques, see the `Logging HOWTO +`_ in the Python +documentation. These tutorials are particularly helpful: -- `Python Basic Logging Tutorial `_ -- `Python Advanced Logging Tutorial `_ +- `Basic Logging Tutorial `_ +- `Advanced Logging Tutorial `_ -Description and usage ---------------------- +Logging overview +---------------- Logging helps to track events occurring in the app. A log record is created for each event. This record contains detailed information about the @@ -21,28 +21,27 @@ current app operation. Whenever information must be exposed, displayed, and shared, logging is the way to do it. Logging is beneficial to both users and app developers. It serves several -purposes: +purposes, including these: -- Extracts some valuable data for the final users to know the status of their work -- Tracks the progress and the course of the app usage +- Extracts some valuable data for users to know the status of their work +- Tracks the progress and course of app usage - Provides the developer with as much information as possible if an issue happens The message logged can contain generic information or embed data specific to the -current session. Message content is associated to a severity level, such as info, -warning, and error. Generally, the severity level indicates the recipient of the message. -For example, an info message is directed to the user, while a debug message is directed -to the developer. +current session. Message content is associated with a severity level, such as ``INFO``, +``WARNING``, ``ERROR``, and ``DEBUG``. Generally, the severity level indicates the +recipient of the message. For example, an ``INFO`` message is directed to the user, +while a ``DEBUG`` message is directed to the developer. Logging best practices ---------------------- -The logging capabilities in PyAnsys libraries should be built upon the `standard -logging `__ library. 
A PyAnsys -library should not replace the standard logging library but rather provide a -standardized way for the built-in :mod:`logging` library and the PyAnsys library -to interact. Subsequent sections provide some best practices. - -Avoiding printing to the console -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +The logging capabilities in PyAnsys libraries should be built upon Python's standard +:mod:`logging` library. A PyAnsys library should not replace the standard logging library +but rather provide a way for both it and the PyAnsys library to interact. Subsequent +sections provide best logging practices. + +Avoid printing to the console +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ A common habit while prototyping a new feature is to print a message into the command line executable. Instead of using the common `print `_ method, you should use a @@ -50,18 +49,17 @@ command line executable. Instead of using the common `print `_ and redirect its content. This allows messages to be filtered based on their severity level and apply formatting properly. To accomplish this, add a -Boolean argument in the initializer of the `Logger -`_ class that +Boolean argument in the initializer of the :class:`logging.Logger` class that specifies how to handle the stream. -Turning on and off handlers -~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Turn on and off handlers +~~~~~~~~~~~~~~~~~~~~~~~~ You might sometimes want to turn off a specific handler, such as a file handler where log messages are written. If so, you must properly close and remove the existing handler. Otherwise, you might be denied file access later when you try to write new log content. -Here is an example of how to close a log handler: +This code snippet shows how to turn off a log handler: ..
code-block:: python @@ -71,18 +69,16 @@ Here is an example of how to close a log handler: design_logger.removeHandler(handler) -Using app filters -~~~~~~~~~~~~~~~~~ +Use app filters +~~~~~~~~~~~~~~~~ An app filter shows all its value when the content of a message depends on some conditions. It injects contextual information in the core of the message. This can be used to harmonize message rendering when the app output varies based on the data processed. -Using an app filter requires the creation of a class based on the -`logging filter `_ and the -implementation of the `filter -`_ method. -This method contains all modified content to send to the stream: +Using an app filter requires the creation of a class based on the :class:`logging.Filter` +class from the :mod:`logging` module and the implementation of its :meth:`~logging.Filter.filter` +method. This method contains all modified content to send to the stream: .. code:: python @@ -142,11 +138,11 @@ PyAnsys libraries use app and service logging modules to extend or expose features from an Ansys app, product, or service, which can be local or remote. -There are two main loggers for a PyAnsys library that expose or +For a PyAnsys library, there are two main loggers that expose or extend a service-based app: -- Global logger -- Instance logger +- :ref:`global_logger` +- :ref:`instance_logger` These loggers are customized classes that wrap the :class:`logging.Logger` class from the :mod:`logging` module and add specific features to it. This @@ -160,28 +156,31 @@ of the global and instance loggers. :alt: Logging in PyMAPDL :figclass: align-center -You can find the source for this example logger in the following collapsible section -and in the ``dev_guide`` repository at `pyansys_logging.py -`_. +You can see the source for a custom PyAnsys logger in the first of the following +collapsible sections and in the `pyansys_logging.py +`_ +file in the ``pyansys-dev-guide`` repository.
The second collapsible section shows some unit tests +that show how to use this custom PyAnsys logger: -.. collapse:: Example PyAnsys Custom Logger Module +.. collapse:: Example of a custom PyAnsys logger .. literalinclude:: code/pyansys_logging.py -Some unit tests demonstrating how to use the PyAnsys custom logger module implemented -in the preceding code are shown in this collapsible section: -.. collapse:: How to Use the PyAnsys Custom Logger Module +.. collapse:: How to use the PyAnsys custom logger .. literalinclude:: code/test_pyansys_logging.py + +.. _global_logger: + Global logger ------------- A global logger named ``py*_global`` is created when importing ``ansys.product.service`` (``ansys.product.service.__init__``). This logger -does not track instances but rather is used globally. Consequently, using -it is recommended for most scenarios, especially those where simple modules +does not track instances but rather is used globally. Consequently, its use +is recommended for most scenarios, especially those where simple modules or classes are involved. For example, if you intend to log the initialization of a library or module, @@ -192,14 +191,14 @@ import the global logger at the top of your script or module: from ansys.product.service import LOG If the default name of the global logger is in conflict with the name of -another logger, you can rename it with: +another logger, rename it: .. code:: python from ansys.product.service import LOG as logger The default logging level of the global logger is ``ERROR`` (``logging.ERROR``). -You can change the output to a different error level with: +You can change the output to a different error level like this: .. 
code:: python @@ -207,8 +206,8 @@ You can change the output to a different error level with: LOG.file_handler.setLevel("DEBUG") # if present LOG.stdout_handler.setLevel("DEBUG") # if present -Alternatively, you can use this approach to ensure that all -handlers are set to the desired log level: +Alternatively, to ensure that all handlers are set to the desired log level, +use this approach: .. code:: python @@ -226,7 +225,7 @@ a file handler: LOG.log_to_file(file_path) If you want to change the characteristics of the global logger from the beginning of -the execution, you must edit the file ``__init__`` in the directory of your +the execution, you must edit the ``__init__`` file in the directory of your library. To log using the global logger, simply call the desired method as a normal logger: @@ -242,14 +241,17 @@ To log using the global logger, simply call the desired method as a normal logge |----------|------------|--------------|-------------|--------------------------- | DEBUG | | __init__ | | This is LOG debug message. + +.. _instance_logger: + Instance logger --------------- -An instance logger is created every time that the class ``_MapdlCore`` is +An instance logger is created every time that the ``_MapdlCore`` class is instantiated. Using this instance logger is recommended when using the ``pool`` -library or when using multiple instances of ``Mapdl``. The main feature of the instance +library or when using multiple instances of MAPDL. The main feature of the instance logger is that it tracks each instance and includes the instance name when logging. -The names of instances are unique. For example, when using the ``gRPC`` ``Mapdl`` +The names of instances are unique. For example, when using the MAPDL gRPC version, the instance name includes the IP and port of the corresponding instance, making the logger unique. 
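Using only the standard :mod:`logging` library, this per-instance naming idea can be sketched with child loggers whose names embed the connection details. The ``pyproduct_global`` name and the ``get_instance_logger`` helper below are hypothetical illustrations, not part of any PyAnsys API:

```python
import logging

# Hypothetical global logger for a library (the real PyAnsys loggers
# wrap logging.Logger and add extra features on top of it).
global_logger = logging.getLogger("pyproduct_global")
global_logger.setLevel(logging.DEBUG)


def get_instance_logger(ip: str, port: int) -> logging.Logger:
    """Return a child logger whose name embeds the instance address.

    Child loggers inherit the handlers and level of their parent,
    which mirrors how instance loggers inherit from the global logger.
    """
    return global_logger.getChild(f"{ip}:{port}")


# Two instances get two distinct, trackable loggers.
instance_a = get_instance_logger("127.0.0.1", 50052)
instance_b = get_instance_logger("127.0.0.1", 50053)
```

Because the address is part of the logger name, every record emitted through ``instance_a`` can be traced back to the instance that produced it.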
@@ -257,15 +259,15 @@ You can access instance loggers in two places: * ``_MapdlCore._log`` for backward compatibility * ``LOG._instances``, which is a field of the ``dict`` data type with a key that - is the name of the created logger. + is the name of the created logger These instance loggers inherit from the ``pymapdl_global`` output handlers and logging level unless otherwise specified. An instance logger works similarly to -the global logger. If you want to add a file handler, use the method -``log_to_file``. If you want to change the log level, use the method -:meth:`logging.Logger.setLevel`. +the global logger. If you want to add a file handler, use the ``log_to_file`` +method. If you want to change the log level, use the :meth:`logging.Logger.setLevel` +method. -Here is an example of how to use an instance logger: +This code snippet shows how to use an instance logger: .. code:: pycon @@ -282,16 +284,16 @@ Ansys product loggers --------------------- An Ansys product, due to its architecture, can have several loggers. The -``logging`` library features support working with a finite number of loggers. The -factory function ``logging.getLogger()`` helps to access each logger by its name. In +:mod:`logging` library supports working with a finite number of loggers. The +:func:`logging.getLogger` factory function helps to access each logger by its name. In addition to name mappings, a hierarchy can be established to structure the loggers' parenting and their connections. For example, if an Ansys product is using a pre-existing custom logger -encapsulated inside the product itself, the ** benefits from +encapsulated inside the product itself, the PyAnsys library benefits from exposing it through the standard Python tools. You should use the standard library as much as possible. 
It facilitates every contribution -to the **, both external and internal, by exposing common tools that +to the PyAnsys library, both external and internal, by exposing common tools that are widely adopted. Each developer is able to operate quickly and autonomously. The project takes advantage of the entire set of features exposed in the standard logger and all the upcoming improvements. @@ -303,7 +305,7 @@ You might need to catch Ansys product messages and redirect them to another logger. For example, Ansys Electronics Desktop (AEDT) has its own internal logger called the **message manager**, which has three main destinations: -- **Global**, which is for the entire project manager +- **Global** which is for the entire project manager - **Project**, which is related to the project - **Design**, which is related to the design, making it the most specific destination of the three loggers @@ -311,7 +313,7 @@ The message manager does not use the standard Python logging module, which can be a problem when exporting messages and data from it to a common tool. In most cases, it is easier to work with the standard Python module to extract data. To overcome this AEDT limitation, you must wrap the existing message -manager into a logger based on the standard Python :mod:`logging` module: +manager into a logger based on the standard :mod:`logging` library: .. figure:: images/log_flow.png :align: center @@ -319,9 +321,9 @@ manager into a logger based on the standard Python :mod:`logging` module: :figclass: align-center The wrapper implementation is essentially a custom handler based on a -class inherited from ``logging.Handler``. The initializer of this class -requires the message manager to be passed as an argument to link the standard -logging service with the AEDT message manager. +class inherited from the :class:`logging.Handler` class. 
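A minimal sketch of such a wrapper follows. The message-manager methods used here (``add_info_message`` and ``add_error_message``) are hypothetical and only illustrate the dispatching idea:

```python
import logging


class MessageManagerHandler(logging.Handler):
    """Illustrative handler that forwards records to a message manager."""

    def __init__(self, message_manager, level=logging.INFO):
        super().__init__(level)
        self._manager = message_manager

    def emit(self, record):
        # Dispatch each record to the destination that matches its
        # severity in the product's own message manager.
        message = self.format(record)
        if record.levelno >= logging.ERROR:
            self._manager.add_error_message(message)
        else:
            self._manager.add_info_message(message)
```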
The initializer +of this class requires the message manager to be passed as an argument to link the standard +logging service with the AEDT message manager: .. code:: python @@ -336,14 +338,14 @@ logging service with the AEDT message manager. def emit(self, record): pass -The purpose of this class is to send log messages in the AEDT logging stream. -One of the mandatory actions is to overwrite the ``emit`` function. This method -operates as a proxy, dispatching all log messages to the message manager. -Based on the record level, the message is sent to the appropriate log level, such -as debug, info, or error, into the message manager to fit the level provided by -the Ansys product. As a reminder, the record is an object containing all kind of -information related to the event logged. +The purpose of this custom handler class is to send log messages in the +AEDT logging stream. One of the mandatory actions is to overwrite the :func:`emit` +function. This function operates as a proxy, dispatching all log messages to the +message manager. Based on the record level, the message is sent to the appropriate +log level, such as ``INFO``, ``ERROR``, or ``DEBUG``, into the message manager to +fit the level provided by the Ansys product. As a reminder, the record is an object +containing all kinds of information related to the event logged. This custom handler is used in the new logger instance (the one based on the -standard library). To avoid any conflict or message duplication, before adding +:mod:`logging` library). To avoid any conflict or message duplication, before adding a handler on any logger, verify if an appropriate handler is already available. diff --git a/doc/source/how-to/packaging.rst b/doc/source/how-to/packaging.rst index b2863e550..5ef738bf5 100644 --- a/doc/source/how-to/packaging.rst +++ b/doc/source/how-to/packaging.rst @@ -3,18 +3,17 @@ Packaging Packaging is the process for distributing software to guarantee that final users can use it.
By packaging Python libraries, it is possible to declare which -source code or binary files need to be distributed, project metadata and third -party dependencies. +source code or binary files must be distributed, along with project metadata and +third-party dependencies. -The fundamentals of Python packaging together with the packaging style -guidelines that apply to PyAnsys projects are collected in the :ref:`Packaging -style` section. +:ref:`Packaging style` collects the fundamentals of Python packaging and packaging style +guidelines that apply to PyAnsys projects. -Specifying dependencies ------------------------ +Dependencies +------------ -It is common to take advantage of third party libraries to simplify -source code. The formal way of doing so is by specifying these third party +It is common to take advantage of third-party libraries to simplify +source code. The formal way of doing so is by specifying these third-party libraries as dependencies. There are two types of dependencies: :ref:`Required dependencies` and :ref:`Optional dependencies`. @@ -72,26 +71,26 @@ Optional dependencies Optional dependencies are third-party libraries without which a software is not able to execute particular features. This makes it convenient to declare -dependencies for ancillary functions such as *plotting*, *tests*, or *docs*. You +dependencies for ancillary functions such as plotting, tests, or documentation. You can programmatically integrate dependencies that are to be installed as optional requirements rather than individual packages.
You may want to have optional packages for your PyAnsys library for a variety of reasons, including: -- **Not all users want to use the feature.** - For example, you might want - to make using `matplotlib `_ or `pyvista +- **Not all users want to use the feature.** For example, you might want + to make using `Matplotlib `_ or `PyVista `_ optional if you expect your PyAnsys library is to be used primarily for headless scripting rather than visualization. -- **Not all users can install the optional package.** - For certain less popular +- **Not all users can install the optional package.** For certain less popular or obscure environments, some binary wheels might not be available or compatible with the user's environment. For example, if a user of CentOS 6.9 needs to - have ``manylinux1`` but the package only supports ``manylinux2014`` (CentOS - 7+) or newer, the user's environment wouldn't be able to run the PyAnsys + have a ``manylinux1`` wheel but the package only supports ``manylinux2014`` (CentOS + 7 and later), the user's environment wouldn't be able to run the PyAnsys library. -- **Reducing dependency bloat** - Removing the package as a "required" +- **Reduce dependency bloat.** Removing the package as a "required" dependency reduces the number of packages to install at installation time, speeding up the installation and reducing the possibility of dependency conflicts. The trade-off here is that any user who wants to access features that @@ -100,11 +99,11 @@ reasons, including: If you choose to implement optional packages for your PyAnsys library, some helpful best practices follow.
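One of these practices, covered in the following sections, is the *lazy import*: defer the import of the optional package until the feature is used and raise a helpful error when the package is missing. This minimal sketch uses an illustrative ``plot_values`` function:

```python
def plot_values(values):
    """Plot values, relying on the optional Matplotlib dependency."""
    try:
        # Deferred (lazy) import: only runs when the feature is used.
        import matplotlib.pyplot as plt
    except ModuleNotFoundError:
        # Raise a helpful error instead of a bare import failure.
        raise ModuleNotFoundError(
            "To use this feature, install Matplotlib with:\n\n"
            "pip install matplotlib"
        ) from None
    plt.plot(values)
    plt.show()
```

Users who never call plotting features never pay the import (or installation) cost of Matplotlib.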
-Implementing optional packages in the build system -++++++++++++++++++++++++++++++++++++++++++++++++++ +Implement optional packages in the build system ++++++++++++++++++++++++++++++++++++++++++++++++ -Here's how to implement and use optional requirements for the three most -popular build systems: +The following code snippets show how to implement and use optional requirements for +the three most popular build systems: .. tab-set:: @@ -126,7 +125,7 @@ popular build systems: "pyside", ] - Install ``package-name`` with the optional ``qt`` packages with: + Install ``package-name`` with the optional ``qt`` packages with this command .. code-block:: text @@ -157,7 +156,7 @@ popular build systems: "pyside", ] - Install ``package-name`` with the optional ``qt`` packages with: + Install ``package-name`` with the optional ``qt`` packages with this command: .. code-block:: text @@ -180,18 +179,18 @@ popular build systems: ..., ) - Install ``package-name`` with the optional ``qt`` packages with: + Install ``package-name`` with the optional ``qt`` packages with this command: .. code-block:: text pip install package-name[qt] -Implementing optional libraries in features -+++++++++++++++++++++++++++++++++++++++++++ +Implement optional libraries in features +++++++++++++++++++++++++++++++++++++++++ One of the best ways to implement an optional dependency is to execute a *lazy import* at runtime for the feature in question. For example, if your library -has an optional dependency on ``matplotlib``, you can implement it with: +has an optional dependency on Matplotlib, you can implement it like this: .. code:: python @@ -232,7 +231,7 @@ error is expected because the feature relies on an optional dependency. If you have many methods that rely on an optional feature, you can implement a `decorator `_ to make it -easier to add these lazy imports and helpful error messages. For example: +easier to add these lazy imports and helpful error messages. Here is an example: .. 
code:: python @@ -285,7 +284,7 @@ easier to add these lazy imports and helpful error messages. For example: return decorator -You use the decorator with a method with: +You use the decorator with a method like this: .. code:: python @@ -308,7 +307,7 @@ You use the decorator with a method with: plt.plot(self._a, self._b) -In practice, if the user does not have ``matplotlib`` installed, this is the +In practice, if the user does not have Matplotlib installed, this is the behavior that the user would expect: .. code-block:: pycon @@ -326,8 +325,8 @@ behavior that the user would expect: Dependabot ---------- -Dependabot is a built-in tool which allows to keep project dependencies updated, -by informing of latest releases of the packages being used. +Dependabot is a built-in tool for keeping project dependencies updated. It informs +you of the latest releases of the packages being used. The ``dependabot.yml`` file ~~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -390,32 +389,32 @@ according to the type of file in which the dependencies are specified: This file should be located in the ``.github`` folder of your repository for -GitHub to detect it automatically. As it can be seen there are several main options: +GitHub to detect it automatically. There are several main options: -* **package-ecosystem**: which lets Dependabot know what your package manager is. - PyAnsys projects typically use ``pip``, but another example could be ``conda``. -* **directory**: which lets Dependabot where your requirement files are located. +* **package-ecosystem**: Lets Dependabot know what your package manager is. + PyAnsys projects typically use ``pip``. However, ``conda`` could also be used. +* **directory**: Lets Dependabot know where your requirement files are located. PyAnsys projects typically contain all their requirements inside a ``requirements`` - folder. Other directories could be provided. 
-* **schedule**: which lets Dependabot know the frequency at which its subroutines - should be performed for checking for updates. + directory. Other directories could be used. +* **schedule**: Lets Dependabot know how frequently its subroutines + should run to check for updates. Dependabot updates ~~~~~~~~~~~~~~~~~~ Dependabot determines (using semantic versioning) whether a requirement should be updated due to the existence of a newer version. When Dependabot identifies -an outdated dependency, it raises a Pull Request to update these requirement +an outdated dependency, it raises a pull request to update these requirement files. Dependabot allows for two different types of updates: -* **Dependabot security updates**: automated pull requests that help update +* **Dependabot security updates**: Automated pull requests that help update dependencies with known vulnerabilities. -* **Dependabot version updates**: automated pull requests that keep dependencies updated, +* **Dependabot version updates**: Automated pull requests that keep dependencies updated, even when they don’t have any vulnerabilities. To check the status of version updates, - navigate to the ``Insights`` tab of your repository, then ``Dependency Graph``, - and ``Dependabot``. + navigate to the **Insights** tab of your repository and then select **Dependency Graph** + and **Dependabot**. .. caution:: @@ -423,17 +422,18 @@ Dependabot allows for two different types of updates: Dependabot only works for *pinned-down* versions of requirements (or, at most, versions with an *upper-limits* requirement such as ``pyvista <= 0.34.0``). However, this is not a best practice for *run-time* dependencies (that is, the usage of a package should support - the oldest available version, if possible). Thus, it is only recommended to fully pin + the oldest available version if possible). Thus, it is only recommended to fully pin **documentation** and **testing** requirements (that is, using ``==``).
Having the latest - dependencies available in your requirements **testing** files allows to test the + dependencies available in your requirements testing files allows you to test the *latest* packages against your library. Dependabot version updates ~~~~~~~~~~~~~~~~~~~~~~~~~~ -In order to enable version updates for your repository, please go to +To enable version updates for your repository, see `Enabling Dependabot version updates -`_. +`_ +in the GitHub documentation. Dependabot security updates ~~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -442,6 +442,7 @@ Dependabot security updates make it easier for you to fix vulnerable dependencie repository. If you enable this feature, when a Dependabot alert is raised for a vulnerable dependency in the dependency graph of your repository, Dependabot automatically tries to fix it. -To enable security updates and notifications for your repository, go to +For information on enabling security updates and notifications for your repository, see `Enabling or disabling Dependabot security updates for an individual repository -`_. +`_ +in the GitHub documentation. diff --git a/doc/source/how-to/releasing.rst b/doc/source/how-to/releasing.rst index b1953b4ae..dd7eb197d 100644 --- a/doc/source/how-to/releasing.rst +++ b/doc/source/how-to/releasing.rst @@ -1,16 +1,18 @@ +.. _release_publish: + Releasing and publishing ======================== Releasing a new version is a critical procedure. It should be automated as much as possible to avoid human error. -This sections explains the :ref:`Git` workflow and steps that must be followed +This section explains the :ref:`Git` workflow and steps that you must follow to create a successful release. .. attention:: - A project needs to be authorized to be released into public by following the - process explained in the :ref:`Project approval and public release` section. + A project must be authorized to be publicly released.
For an explanation + of the process, see :ref:`Project approval and public release`. Semantic versioning ------------------- @@ -22,11 +24,11 @@ PyAnsys follows `Semantic Versioning`_, which produces release names in the form of ``X.Y.Z``, where each letter corresponds to an integer value. This notation can also be understood as ``MAJOR.MINOR.PATCH``: -* **Major** version when you make incompatible API changes. -* **Minor** version when you add a feature in a backwards-compatible manner. -* **Patch** version when you make backwards compatible bug fixes. +* A ``MAJOR`` version is when you make incompatible API changes. +* A ``MINOR`` version is when you add a feature in a backwards-compatible manner. +* A ``PATCH`` version is when you make backwards-compatible bug fixes. -To match the versioning methodology used by the 'big three' data science Python +To match the versioning methodology used by the "big three" data science Python packages, `numpy`_, `scipy`_, and `pandas`_, ``MAJOR`` versions of PyAnsys packages are not released when any incompatible API change is made but rather when major, globally breaking API changes are made. @@ -35,15 +37,16 @@ Note that ``0.MINOR.PATCH`` packages are expected to have fluid APIs and should be solidified at the ``1.MINOR.PATCH`` release. At that point, APIs are expected to be much more stable. -.. admonition:: PyAnsys libraries should not match product versions. +.. admonition:: PyAnsys library versions should not match product versions. - For example, the PyMAPDL library ``ansys-mapdl-core`` might have the version - ``0.59.0`` whereas the product version is 22.2 (2022 R2). The reason behind - this is PyAnsys libraries are expected to be developed outside the product - release cycle in a rapid CI/CD manner. + PyAnsys libraries are expected to be developed outside the product + release cycle in a rapid CI/CD manner. Thus, library versions should + not match product versions.
For example, the PyMAPDL library (``ansys-mapdl-core``) + might have the version ``0.59.0``, whereas the product, Ansys Parametric + Design Language (APDL), might have the version ``22.2`` (2022 R2). -Branch model ------------- +Branching model +--------------- The branching model for a PyAnsys project enables rapid development of features without sacrificing stability. The model closely follows the @@ -76,10 +79,10 @@ features without sacrificing stability. The model closely follows the .. include:: diag/release_branch.rst -Releasing new versions ----------------------- +New releases +------------ -Releasing is the process of creating a version of a software that developers +Releasing is the process of creating a version of the software that developers consider useful for customers or other developers. Releases are usually labeled with *tags*. These tags are used to quickly identify a release in the version control system. @@ -92,9 +95,10 @@ control system. | |uncheck| All documentation builds successfully. | |uncheck| The project builds successfully. -.. dropdown:: Releasing major and minor versions +.. dropdown:: Release major and minor versions - Before performing a release, you must verify that your ``origin main`` branch is up to date using the these commands: + Before performing a release, you must verify that your ``origin main`` branch is up to date + with these commands: .. code-block:: text @@ -102,11 +106,11 @@ control system. git fetch origin main git rebase origin/main - If you encounter any issues when running the preceding command, solve them before + If you encounter any issues when running the preceding commands, solve them before continuing with the release. Ensure that your style, tests, and documentation checks are passing too. - Create a new branch for the version you want to release with this command: + Create a new branch for the version that you want to release with this command: .. code-block:: text @@ -117,22 +121,22 @@ control system.
Check all locations, including :ref:`The \`\`setup.py\`\` file`, :ref:`The \`\`pyproject.toml\`\` file`, and any - ``__init__.py`` or ``__version__.py`` your project may contain. + ``__init__.py`` or ``__version__.py`` files that your project may contain. - Stash and commit previous changes with the commands: + Stash and commit previous changes with these commands: .. code-block:: text git add git commit -m "Bump version X.Y.0" - Tag the previous commit using this command: + Tag the previous commit with this command: .. code-block:: text git tag vX.Y.0 - Push the commit and the tag with these commands: + Push the commit and the tag with these commands: .. code-block:: text @@ -140,13 +144,15 @@ control system. git push origin vX.Y.0 -.. dropdown:: Releasing patched versions +.. dropdown:: Release patched versions Patched versions allow you to fix issues discovered in published releases by - cherry-picking these fixes from the ``main`` branch. + cherry-picking these fixes from the ``main`` branch. For more information, see + the `git-cherry-pick `_ description + in the Git documentation. Before performing a patch release, you must first identify which - ``release/X.Y`` branch it belongs to. + ``release/X.Y`` branch it belongs to with these commands: .. code-block:: text @@ -154,9 +160,9 @@ control system. git fetch origin release/X.Y git reset --hard origin/release/X.Y - Now, use the following code to `cherry-pick `_ - the fix commit from ``main``, which solves for the bug. Do not merge changes from - ``main`` into the release branch. Always cherry-pick them. + Next, use the following code to cherry-pick the fix commit from the ``main`` + branch, which fixes the bug. Do not merge changes from the + ``main`` branch into the release branch. Always cherry-pick them: .. code-block:: text @@ -165,7 +171,7 @@ control system. Ensure that your style, tests, and documentation checks are also passing. Increase by one unit the value of ``Z`` in your project version.
Stash and - amend these new changes using this command: + amend these new changes with these commands: .. code-block:: text @@ -178,26 +184,26 @@ control system. git tag vX.Y.Z - Push the commit and the tag using this command: + Push the commit and the tag using these commands: .. code-block:: text git push -u origin release/X.Y git push origin vX.Y.Z -Publishing artifacts +Artifact publication -------------------- When a new version is released, some artifacts are provided with it. In Python, -these :ref:`Artifacts` are typically the ``Wheel`` and ``Source`` files. -Documentation in the form of PDF and HTML files are also considered artifacts. +these :ref:`Artifacts` are typically *wheel* and *source* files. +Documentation in the form of HTML and PDF files are also considered artifacts. .. attention:: Do not distribute artifacts without approval. - A project needs to be authorized to be released into public by following the - process explained in :ref:`Project approval and public release`. + A project must be authorized to be publicly released. For an explanation + of the process, see :ref:`Project approval and public release`. There are three possible places where artifacts can be published: @@ -214,7 +220,7 @@ There are three possible places where artifacts can be published: :link: public-pypi :link-type: ref - This is the `public PyPI` used by the Python community to distribute + This is the public PyPI used by the Python community to distribute libraries. A project requires Ansys authorization before being published in this index. @@ -237,11 +243,11 @@ hosted on the public `PyPI`_. For example, if a PyAnsys library requires auto-generated gRPC interface files from a feature or service that is still private, this package should be hosted on a private PyPI repository. -ANSYS, Inc. has a private repository at `PyAnsys PyPI`_. Access is controlled -via a username and a password: +ANSYS, Inc. has a private repository at `PyAnsys PyPI`_.
You must have the proper +credentials for publishing to this private repository: +---------------------------------------------+-------------------------------------------------------------------------+ -| Credentials for publishing to private PyPI | Value | +| Credentials | Value | +=============================================+=========================================================================+ | Username | ``__token__`` | +---------------------------------------------+-------------------------------------------------------------------------+ @@ -250,32 +256,32 @@ via a username and a password: | repository-url | ``https://pkgs.dev.azure.com/pyansys/_packaging/pyansys/pypi/upload`` | +---------------------------------------------+-------------------------------------------------------------------------+ -When running this from the command line using `twine `_, -be sure to add in `--repository-url`` as an extra option. Otherwise ``twine`` -attempts to upload the package to the public PyPI repository. - The ``PYANSYS_PYPI_PRIVATE_PAT`` is a password in the form of a GitHub secret -which is available only to repositories within `PyAnsys`_. This secret is +that is available only to `PyAnsys projects `_. This secret is available during the execution of the CI/CD. Its value is never shown or shared in the log files. +When using `Twine `_ from the command line, you must +add in ``--repository-url`` as an extra option. Otherwise, Twine attempts to upload +the package to the public PyPI repository. + Forked GitHub repositories do not have access to GitHub secrets. This is -designed to protect against pull-requests that could potentially scrape -tokens from PyAnsys CI/CD. +designed to protect against pull requests that could potentially scrape +tokens from the PyAnsys CI/CD. -Here's a cross-platform one liner for uploading using ``twine```: +Here's a cross-platform, one-line command for using Twine to upload a package: .. 
code:: python -m twine upload dist/* --repository-url https://pkgs.dev.azure.com/pyansys/_packaging/pyansys/pypi/upload -u __token__ -p -Replace ```` with the private PyPI token respectively. +Replace ```` with the private PyPI token. -.. dropdown:: Using GitHub actions +.. dropdown:: Use GitHub Actions - The following code allows you to publish any Python :ref:`Artifacts` contained in - the ``dist/`` directory to the private PyPI. It is expected to be included when - :ref:`Use GitHub actions`: + The following code allows you to publish Python :ref:`Artifacts` in + the ``dist`` directory to the private PyPI. This code is expected to be included when you + :ref:`Use GitHub Actions`: .. code-block:: yaml @@ -291,7 +297,7 @@ Replace ```` with the private PyPI token respectively. twine-token: ${{ secrets.PYANSYS_PYPI_PRIVATE_PAT }} -.. dropdown:: Using the command line +.. dropdown:: Use the command line Alternatively, instead of command-line tool arguments for Twine, you can use environment variables: @@ -361,24 +367,24 @@ unique to each project. It can only be obtained after the first release to the public PyPI. Follow the :ref:`Project approval and public release` process to obtain public release authorization. -Once authorized, contact `pyansys.core@ansys.com `_ to -get support during the first release of the project. The team then enables the +Once authorized, contact the `PyAnsys core team `_ to -get support during the first release of the project. The team enables the custom ``PYPI_TOKEN`` once your project has been successfully released for the -first time. For future releases, everything is then automated. +first time. For future releases, everything is automated. -Here's a one liner for downloading: +Here's a cross-platform, one-line command for downloading a package with ``pip``:
code:: python -m pip install --index-url @pkgs.dev.azure.com/pyansys/_packaging/pyansys/pypi/simple/ -Replace ```` and ```` with the package name and the private PyPI token respectively. +Replace ```` and ```` with the package name and private PyPI token, respectively. -.. dropdown:: Using GitHub actions +.. dropdown:: Use GitHub Actions The following code allows you to publish any Python :ref:`Artifacts` contained in - the ``dist/`` directory to the public PyPI. It is expected to be included when - :ref:`Use GitHub actions`: + the ``dist`` directory to the public PyPI. It is expected to be included when you + :ref:`Use GitHub Actions`: .. code-block:: yaml @@ -398,19 +404,19 @@ Replace ```` and ```` with the package name and th GitHub ~~~~~~ -Publishing :ref:`Artifacts` to GitHub is also possible. These are available in +You can publish :ref:`Artifacts` to GitHub, which makes them available in the ``https://github.com/ansys/project-name/releases`` section. The visibility of these artifacts follows the one in the repository. Visibility can -be private, internal or public. +be private, internal, or public. For enabling public visibility of a repository, follow the process explained in -the :ref:`Project approval and public release` section. +:ref:`Project approval and public release`. -.. dropdown:: Using GitHub actions +.. dropdown:: Use GitHub Actions The following code allows you to publish any Python :ref:`Artifacts` contained in - the ``dist/`` directory to the GitHub release created. It is expected to be included - when :ref:`Use GitHub actions`: + the ``dist`` directory to the GitHub release created. It is expected to be included + when you :ref:`Use GitHub Actions`: .. code-block:: yaml @@ -423,13 +429,12 @@ the :ref:`Project approval and public release` section.
with: library-name: "ansys--" -Downloading artifacts ---------------------- +Artifact download +----------------- -Artifacts can be downloaded from all previous sources: Ansys private PyPI, -public PyPI and GitHub. +You can download artifacts from the Ansys private PyPI, public PyPI, and GitHub. -.. dropdown:: Downloading artifacts from the Ansys private PyPI +.. dropdown:: Download artifacts from the Ansys private PyPI Request the value of the ``PYANSYS_PYPI_PRIVATE_PAT`` token by sending an email to `pyansys.core@ansys.com `_. @@ -492,22 +497,21 @@ public PyPI and GitHub. --index-url $INDEX_URL \ --no-dependencies -.. dropdown:: Downloading artifacts from the public PyPI +.. dropdown:: Download artifacts from the public PyPI - Downloading artifacts from the public PyPI can be done by using the default - settings by ``pip``: + To download artifacts from the public PyPI, use ``pip``: .. code-block:: bash python -m pip install -.. dropdown:: Downloading artifacts from GitHub +.. dropdown:: Download artifacts from GitHub Downloading artifacts from GitHub can be done by checking the ``https://github.com/ansys/project-name/releases`` section. - Note that if you download the ``Wheel`` of a Python package, you still need - to manually install it by running: + Note that if you download the wheel of a Python package, you must manually install + it with a command like this: .. code-block:: bash diff --git a/doc/source/how-to/repository-protection.rst b/doc/source/how-to/repository-protection.rst index d466708b5..72f9f0159 100644 --- a/doc/source/how-to/repository-protection.rst +++ b/doc/source/how-to/repository-protection.rst @@ -1,116 +1,99 @@ Repository protection ===================== -Handling repositories also implies handling sensitive information, especially -when looking into workflows. Access to private servers to acquire Ansys product -licenses, secrets handling and so on.
Thus, it is important that the PyAnsys -libraries have good protection rules implemented. +Handling repositories implies handling sensitive information, especially +when configuring workflows to access private servers. Because workflows +acquire Ansys product licenses and handle secrets, it is important to +implement good protection rules. In the following sections, different safety measures are presented. General configuration --------------------- -Being an owner/admin of a repository gives you access to the ``Settings`` menu. -One accesses the *General configuration* of the repository by: - - ``Repository`` >> ``Settings`` >> ``General`` - -The PyAnsys team recommends the following measures in the **Pull Requests** (PRs) section: - -* **Only allow for** ``Squash merging``: this option forces all commits of a PR - to be condensed into a single commit. That way, in case a PR was not successful - for some reason, or wants to be reverted, it is easier to just revert that commit. -* **Enable the** ``Default to PR title for squash merge commits`` **checkbox**: - this provides a uniform way of naming PRs and merging them to the main branch. -* **Enable the** ``Always suggest updating pull request branches`` **checkbox**: - though this can be enforced (as it is explained in upcoming sections), it is - recommended to always have your branch updated to the ``main`` branch before merging. -* **Enable the** ``Automatically delete head branches`` **checkbox**: this is more - intended for cleaning purposes. Once a PR is merged into the ``main`` branch, the - PR-related branch should be deleted so that only active branches are available in - the repository. +Being an owner or an administrator of a repository gives you access to the +**Settings** menu. To access the general configuration settings for the repository, +select **Settings > General**. 
+ +The PyAnsys core team recommends choosing these settings in the **Pull Requests** +section: + +* Select only **Allow squash merging** (clearing the other merge options) to force + all commits of a pull request (PR) to be condensed into a single commit. This way, + if a PR is not successful, it can be reverted easily. +* Select **Default to pull request title for squash merge commits** to + provide a uniform way of naming PRs and merging them to the ``main`` branch. +* Select **Always suggest updating pull request branches** so that you are + prompted to update your branch from the ``main`` branch before merging. (This can be + enforced as explained later.) +* Select **Automatically delete head branches** for cleanup purposes. + Once a PR is merged into the ``main`` branch, the PR-related branch is + deleted so that the repository contains only active branches. Branch protection ----------------- Branch protection is critical for avoiding malicious code insertion and access
-* **Enable the** ``Require review from Code Owners`` **checkbox**: this is intended for - workflow protection as well. That way, it is ensured that no malicious code is - merged into the main branch. Code owners should be capable of identifying pieces - of code which should not be allowed, though it forces them to review all PRs. -* **It is recommended to create an Owners/Admins team**: this team should contain - multiple members, so that if somebody is not available, others can still approve - and merge. -* **Enable the** ``Require status checks to pass before merging`` **checkbox**: this - is the sole purpose of CI/CD. Only code which compiles, passes tests, is formatted - accordingly etc. should be allowed to merge. -* **Enable the** ``Require branches to be up to date before merging`` **checkbox**: this - is an important concept, since sometimes, somebody may have merged into ``main`` a - certain piece of code which clashes with your own developments. By activating this, - it is ensured that all code which is merged has been tested and is compatible with - the current ``main`` branch. -* **Enable the** ``Status Checks required`` **checkbox**: whenever possible, all workflow - stages should be included, but at least: ``Style``, ``Documentation - Style Check``, ``Build and Unit testing``, ``Documentation build``, and ``Smoke tests`` - should be activated. -* **Enable the** ``Require conversation resolution before merging`` **checkbox**: - this forces assignees to actually go through all comments. It is just a safety - measure so that at least any comment left by reviewers is read (and hopefully applied). +to confidential data. To access the branch protection rulesets for the repository, +select **Settings > Code and automation > Branches**. + +Next to **Branch protection rules**, click **Add rule**. 
+ +Under **Branch name pattern**, type the name of the branch that you want to protect +(usually ``main``, but you can also protect other branches, such as ``gh-pages``). +Regular expressions (also known as ``regex``) are +accepted. For example, you might want to protect all ``release/*`` branches. + +The PyAnsys core team recommends setting these rules for the ``main`` branch: + +* Select **Require a pull request before merging** so that only owners + or administrators are able to directly merge to the ``main`` branch. +* Select **Require approvals** to ensure that all PRs are reviewed. (PRs + created by owners or administrators do not require approval.) +* Select **Require review from Code Owners** so that code owners are forced to review + all PRs to prevent malicious code from being merged into the ``main`` branch. + Code owners should be able to identify pieces of code that are not allowed. + To ensure a code owner is always available to approve and merge PRs, creating an + **Owners/Admins** team with multiple members is recommended. +* Select **Require status checks to pass before merging** so that only + code that compiles, passes tests, and is formatted correctly can be merged. This + is the sole purpose of CI/CD. +* Select **Require branches to be up to date before merging** to ensure + that all code has been tested and is compatible with the ``main`` branch + before it can be merged. This is an important concept because someone may have merged + code into the ``main`` branch that clashes with your code. +* Specify the status checks that are required so that the PR + cannot be merged until all workflow checks are successful. The minimal checks to + implement are for code style, documentation style, build and unit testing, + documentation building, and smoke tests. +* Select **Require conversation resolution before merging** to require that + all review conversations are resolved before the PR can be merged.
This ensures that all comments are read and + possibly applied. Tag protection -------------- -Tags should also be protected, so that only code owners/admins can create them. This can -be done by accessing: - - ``Repository`` >> ``Settings`` >> ``Tags`` - -Once inside, and following the PyAnsys tagging convention, the following tags should be -protected: +Protect tags so that only code owners and administrators can create them. +To access the tag protection settings for the repository, select **Settings > +Code and automation > Tags**. - ``v*`` +Following the PyAnsys tagging convention, protect tags that match the ``v*`` pattern. Workflow protection ------------------- -Finally, workflows can also be protected. - - ``Repository`` >> ``Settings`` >> ``Actions`` >> ``General`` - -Once inside, focus should be set on the ``Fork pull request workflows from outside collaborators``. -In this case, the PyAnsys team suggests the following: -Preferred option for public repositories: - -* **Enable the** ``Require approval for all outside collaborators`` **checkbox when going public**. - -Minimum option for public repositories: +Protect workflows in the repository's actions settings. The focus here is on forks, +which let you make changes to a project without affecting the original repository. To +access the actions settings for the repository, select **Settings > Actions > General**. -* **Enable the** ``Require approval for first-time contributors`` **while still internal/private**. +Under **Fork pull request workflows from outside collaborators**, the preferred option +is **Require approval for all outside collaborators** for repositories that are to be +released publicly. The minimum option is **Require approval for first-time contributors**. -Workflows contain sensitive information and it is important to preserve security and control over it. -However, these rules are more flexible.
For example, if you have a common outside collaborator, which -has been contributing for a time doing you may consider adding it as a member/collaborator of the -repository so that its PR workflows do not have to be accepted every time he is intending to run it. +Because workflows contain sensitive information, it is important to preserve security and control. +The rules for workflows are more flexible. For example, if you have frequent outside collaborators who +have been contributing for some time, you may want to add them as members of the repository so that +their PR workflows do not have to be approved every time that they intend to run them. Internal and private repositories are only available to organization users and repository members, respectively. Thus, no specific rules for outside collaborators are needed. diff --git a/doc/source/how-to/setting-up.rst b/doc/source/how-to/setting-up.rst index bc975e787..40b6b9927 100644 --- a/doc/source/how-to/setting-up.rst +++ b/doc/source/how-to/setting-up.rst @@ -1,7 +1,7 @@ .. _setting_up_dev_environment: -Setting up your development environment -======================================= +Environment setup +================= Before you can contribute to any PyAnsys project, you must set up your developer environment. @@ -21,13 +21,13 @@ All PyAnsys projects require a Python interpreter for interacting with PyAnsys libraries. Therefore, you must ensure that at least one Python interpreter is installed on your local machine. -Installation -~~~~~~~~~~~~ +Install Python +~~~~~~~~~~~~~~ There are several ways to install Python on your local machine: -- Use an official installer from the `official Python download section `_. -- Install via a package manager or "store" on your machine. +- Use an installer from the `official Python download page `_. +- Use a package manager or "store" on your machine. .. warning:: @@ -47,7 +47,7 @@ There are several ways to install a Python package on your local machine: ..
tab-item:: macOS - To install Python on a machine running the macOS: + To install Python on a machine running macOS: 1. Download the `latest stable Python version for macOS `_. 2. Execute the installer, referring to `Using Python on @@ -77,7 +77,7 @@ There are several ways to install a Python package on your local machine: Verify Python version ~~~~~~~~~~~~~~~~~~~~~ -Once your Python installation is complete, verify it with: +Once your Python installation is complete, verify it with this command: .. code-block:: text @@ -98,8 +98,8 @@ the venv module `_. Check ~~~~~ -Before creating a new virtual environment, you must run this code to see if you are already -working with one: +Before creating a virtual environment, you must run the command for your OS to see if you are already +using one: .. tab-set:: @@ -125,17 +125,17 @@ working with one: which python -This command returns the path to the Python virtual environment that your system is currently using. +The command returns the path to the Python virtual environment that your system is currently using. Ensure that it points to your default installation and not to a virtual environment. If it points to a virtual environment, see :ref:`Deactivate` for -information on deactivating your virtual environment. +information on deactivating the virtual environment. Create ~~~~~~ Usually, virtual environments are named ``venv`` or ``.venv``. -You can create a virtual environment named ```` with: +You can create a virtual environment named ```` with this command: .. code-block:: text @@ -174,7 +174,7 @@ You would activate the preceding virtual environment with the command for your O Deactivate ~~~~~~~~~~ -You could deactivate a virtual environment with the command for your OS: +You can deactivate a virtual environment with the command for your OS: .. code-block:: text @@ -192,13 +192,13 @@ Git > -`Git `_ is an open source version control system (VCS). It +`Git `_ is an open source VCS (version control system). 
It is used to track changes and register new content in software-related projects. Git registers the author and date of the changes so that accurate tracking of the software's evolution is available. -Installation -~~~~~~~~~~~~ +Install Git +~~~~~~~~~~~ .. tab-set:: @@ -211,7 +211,7 @@ Installation .. tab-item:: macOS - To install Git on a machine running the macOS: + To install Git on a machine running macOS: 1. Check the `latest stable Git version for macOS `_. 2. Run the installation command for your package manager. @@ -226,60 +226,61 @@ Installation Verify Git version ~~~~~~~~~~~~~~~~~~ -Once your installation process is complete, verify your Git installation with: +Once your Git installation finishes, verify it with this command: .. code-block:: text git --version -Usage -~~~~~ +Use Git +~~~~~~~ -If you're new to Git, see the `Git Reference Manual `_ +If you're new to Git, see the `Git documentation `_ for comprehensive usage information. For an understanding of Git workflows and branching strategies, -see the `Learning Git branching `_ tutorial. +see the `Learn Git Branching `_ tutorial. -If you're unfamiliar with GitHub, see the -`GitHub Training Manual `_ for guidance. +If you're unfamiliar with GitHub, see +`The Official GitHub Training Manual `_ +for guidance. -Configuration +Configure Git ~~~~~~~~~~~~~ It is very important to properly configure Git so that every modification that you make -to the code points to you. There are two types of configuration: +to the code points to you. There are two types of Git configuration: :ref:`Global` and :ref:`Local`. It is also possible to combine both to have a :ref:`Dynamic` configuration. 
Global ++++++ -Global configuration are automatically included in every Git repository on +A global configuration is automatically included in every Git repository on your machine unless overridden by a :ref:`Local` configuration. The global configuration file is located in ``C:\Users\\.gitconfig`` for Windows users or in ``/home//.gitconfig`` for macOS, Linux, or UNIX users. -You can set the value for any variable in a field with: +You can set the value for any variable in a field with this command: .. code-block:: bash git config --global . -Some examples follow. +Some examples of setting values follow. -**Set up your name** +**Set your name** .. code-block:: bash git config --global user.name -**Set up your email** +**Set your email** .. code-block:: bash git config --global user.email -**Set up the default branch name** +**Set the default branch name** .. code-block:: bash @@ -289,13 +290,13 @@ Local +++++ There might be a time when you want to declare a specific configuration to use only -in a given project. To override the :ref:`Global` configuration, you can declare a local +in a given project. To override the global configuration, you can declare a local configuration. -In a local configuration, the commands are the same as in the :ref:`Global` configuration. The +In a local configuration, the commands are the same as in the global configuration. The one exception is that instead of using the ``--global`` flag, you use the ``--local`` flag. -Ensure that you run the commands in the root directory of your project and that a ``.git/`` -folder exists. +Ensure that you run the commands in the root directory of your project and that a ``.git`` +directory exists. If you would like to manually modify your local configuration, it is saved in the ``.git/config`` file. @@ -303,22 +304,21 @@ Dynamic +++++++ -It is possible to configure :ref:`Git` such that it selects between multiple -configuration profiles according to whether your project is located in your system.
-This allows you to define common configurations for working under ``PyAnsys``, -``Ansys``, or open source projects from which the company benefits. +It is possible to configure Git such that it selects between multiple +configuration profiles according to where your project is located on your system. +This allows you to define common configurations for working under +``Ansys`` or other open source projects from which Ansys benefits. -As an example, consider the following scenario for setting up two :ref:`Git` -configuration profiles for working with ``Ansys`` and personal projects. +As an example, consider the following scenario for setting up two Git +configuration profiles for working with Ansys projects and personal projects. Create the two files, naming them so that they are easily distinguishable. For -example, ``.gitconfig-ansys`` and ``.gitconfig-personal``. Then, use `Git -Conditional Includes -`_ to control which -:ref:`Git` configuration is applied based on whether the project is located in -your system. +example, name them ``.gitconfig-ansys`` and ``.gitconfig-personal``. Then, use Git +`Conditional includes `_ +to control which Git configuration is applied based on where the project is located +on your system. -Each one of these files can look like this: +Here are examples of what these files might look like: .. tab-set:: @@ -353,33 +353,34 @@ Each one of these files can look like this: signingkey = -Signing commits -~~~~~~~~~~~~~~~ +Sign commits +~~~~~~~~~~~~ To verify which code changes were made by you, signing the commit is required. To sign a commit, you must generate a ``GPG`` key, associate it with -GitHub, and specify it in your ``Git`` :ref:`Configuration`. +GitHub, and specify it in your Git configuration. -For an explanation of the process, in the GitHub documentation, see `Verify -Commit Signatures `_. +For an explanation of the process, see `Manage commit signature verification +`_ +in the GitHub documentation.
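The Git side of this signing setup can be sketched with two commands. The key ID below is a placeholder; use the ID that ``gpg --list-secret-keys`` reports for your own key:

```shell
# Tell Git which GPG key to sign with (placeholder key ID) and enable
# signing for every commit by default.
git config --global user.signingkey 3AA5C34371567BD2
git config --global commit.gpgsign true

# Read the values back to confirm the configuration.
git config --global user.signingkey
git config --global commit.gpgsign
```

With ``commit.gpgsign`` enabled, you no longer need to pass ``-S`` to ``git commit`` for every commit.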
-Enabling SSH -~~~~~~~~~~~~ +Enable SSH +~~~~~~~~~~ Working with Secure Shell Protocol (SSH) is not only a good practice but also required for contributing to PyAnsys projects. Without an SSH key, you are not able to clone **internal** or **private** repositories or to push new changes. -For information on setting up SSH with GitHub, in the GitHub documentation, -see `Connecting to GitHub with SSH -`_. +For information on setting up SSH with GitHub, see `Connecting to GitHub with SSH +`_ +in the GitHub documentation. -Handling line endings -~~~~~~~~~~~~~~~~~~~~~ +Handle line endings +~~~~~~~~~~~~~~~~~~~ -Every time you introduce a new line by pressing the **Enter** key, an invisible +Every time you introduce a new line by pressing the **Enter** key, an invisible character is introduced to represent a line ending. Each operating system manages these end-of-line (EOL) characters in its own way. For Windows, the EOL is also known as a `CRLF`, while in Linux it is known as a `LF`. @@ -390,45 +391,75 @@ different operating systems, you can specify an EOL policy in a ``.gitattributes`` In a ``.gitattributes`` file that you have committed to your repository, you can customize the type of EOL characters that you expect developers to use. Git then automatically manages these EOL characters so that developers do not -need to worry about them. Consider this example presented in `Configuring Git to handle line endings `_: +need to worry about them. Consider this example in `Configuring Git to handle line endings +`_ +in the GitHub documentation: .. code:: text - # Set the default behavior, in case people don't have core.autocrlf set. + # Set the default behavior, in case people don't have core.autocrlf set. * text=auto - # Explicitly declare text files you want to always be normalized and converted + # Explicitly declare text files that you want to always be normalized and converted # to native line endings on checkout.
*.c text *.h text - # Declare files that will always have CRLF line endings on checkout. + # Declare files that always have CRLF line endings on checkout. *.sln text eol=crlf # Denote all files that are truly binary and should not be modified. *.png binary *.jpg binary +.. _git_clean: + +Remove files and directories untracked by Git +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +To remove files and directories that are not tracked by Git from your working directory, +you can periodically run this command: + +``git clean -fdx .`` + +Descriptions follow for each command option: + +- ``f`` forces deletion of untracked files (and directories if ``d`` is also specified) + without requiring additional confirmation. + +- ``d`` deletes untracked directories. By default, the ``git clean`` + command does not recurse into untracked directories to avoid deleting too much. + +- ``x`` deletes ignored files, which are those specified in your ``.gitignore`` file. + You use this option when you want to clean up all untracked files, including build + products. + +The trailing ``.`` specifies the current directory as the starting point for the +cleaning. For example, to clean all of the untracked files that are generated by +building documentation locally, you would run the ``git clean -fdx .`` command +from the ``doc`` directory. + WSL2 ---- Some developers prefer using Windows as the operating system for their machines. However, they might like to take advantage of some features provided by a Linux -operating system. The `Windows Subsystem for Linux -`_ was devised to solve -this problem. +operating system. The Windows Subsystem for Linux (WSL) was devised to solve +this problem. For installation information, see `How to install Linux on Windows with WSL +`_ in the Microsoft Windows +documentation.
-Installation +Install WSL2 ~~~~~~~~~~~~ -Open a new PowerShell session and install the Windows Subsystem for Linux -(WSL) with: +Open a new PowerShell session and install WSL with this command: .. code-block:: powershell wsl --install -After installing WSL, ensure that you are running the WSL2 version with: +After installing WSL, ensure that you are running the WSL2 version with this +command: .. code-block:: powershell @@ -437,17 +468,17 @@ After installing WSL, ensure that you are running the WSL2 version with: Verify WSL version ~~~~~~~~~~~~~~~~~~ -Verify your WSL version with: +Verify your WSL version with this command: .. code-block:: powershell wsl --list -v -Linux distribution -~~~~~~~~~~~~~~~~~~ +Install Linux distribution +~~~~~~~~~~~~~~~~~~~~~~~~~~ After WSL2 is installed, install a Linux distribution. -Get a list of available distributions with: +To get a list of available distributions, run this command: .. code-block:: powershell @@ -456,14 +487,14 @@ Get a list of available distributions with: Most developers choose `Ubuntu `_ because it is a well maintained Linux distribution with a huge collection of packages. -Install the Linux distribution of your choice with: +To install the Linux distribution of your choice, run this command: .. code-block:: powershell wsl --install -d -You can use this command to install multiple Linux distributions. Indicate -the distributions that you would like to use with WSL2 with: +You can use the preceding command to install multiple Linux distributions. Indicate +the distributions that you would like to use with WSL2 with this command: .. code-block:: powershell @@ -472,22 +503,15 @@ the distributions that you would like to use with WSL2 with: Windows terminal ---------------- -.. image:: images/windows_terminal.png - :align: center - :alt: The Windows Terminal with different active shell sessions. - -.. raw:: html - -
- -The `Windows Terminal `_ is +`Windows Terminal `_ is an app that integrates multiple shells into a single console. Windows ships by default with two shells: ``CMD`` and ``PowerShell``. If :ref:`WSL2` is -installed, a Linux shell is added. Hence, the goal of the Windows Terminal -is to collect and manage all shell sessions in a single program. - -Installation -~~~~~~~~~~~~ +installed, a Linux shell is added. Hence, the goal of Windows Terminal +is to collect and manage all shell sessions in a single program. You can install +Windows Terminal from the `Windows Terminal page +`_ +on the Microsoft Store. -You can install Windows Terminal` directly from the `official Microsoft Store package -`_. +.. image:: images/windows_terminal.png + :align: center + :alt: Windows Terminal with different active shell sessions. diff --git a/doc/source/how-to/supporting-python-versions.rst b/doc/source/how-to/supporting-python-versions.rst index 498803960..a8b2858c0 100644 --- a/doc/source/how-to/supporting-python-versions.rst +++ b/doc/source/how-to/supporting-python-versions.rst @@ -1,10 +1,11 @@ -Supporting Python versions -========================== +Python versions +=============== Like other programming languages, Python evolves with time. New -features get added to the language, and others get deprecated. For -more information, see `Status of Python branches -`_. +features get added to the language, and other features get deprecated. For +more information, see `Status of Python versions +`_ in the *Python +Developer's Guide*. +---------+------------+-------------+-----------------------+--------+ | Version | PEP | Released | Security Support Ends | Status | @@ -29,14 +30,15 @@ more information, see `Status of Python branches fixes. Versions labeled as ``dev`` are still receiving bug fixes. Expect stable versions to be the most commonly used Python versions. 
Some -packages like `numpy `_ drop support for older versions of +packages like `NumPy`_ drop support for older versions of Python earlier than their end of life (EOL) as outlined in `NEP 29 `_. -You can still install an older version from PyPI using ``pip`` as +You can still install an older version from PyPI using `pip`_ as your package manager. When ``pip`` is used, it downloads and installs the most recent version of the library that supports your version of Python. You -can enforce a minimum-required Python version within ``setup.py`` with: +can enforce a minimum-required Python version within the ``setup.py`` file with +this code: .. code:: python @@ -52,13 +54,12 @@ support which versions of Python. You can also impose an upper limit if you're sure you don't support certain versions of Python. For example, if you only support Python 3.9 through 3.12, your command would look like this: ``python_requires='>=3.9, <3.13'``. -Verifying support ------------------ +Verify Python support +--------------------- The best way to validate whether a Python library supports a version of Python -is by :ref:`Using continuous integration`. An example GitHub -workflow testing Python 3.9 through Python 3.12 on Windows and Linux would -start with: +is by :ref:`continuous_integration`. An example GitHub workflow testing Python +3.9 through Python 3.12 on Windows and Linux would start like this: .. code-block:: yaml :linenos: diff --git a/doc/source/how-to/testing.rst b/doc/source/how-to/testing.rst index 8b397d2da..ad0b0d44b 100644 --- a/doc/source/how-to/testing.rst +++ b/doc/source/how-to/testing.rst @@ -4,16 +4,16 @@ Testing ======= Unit testing and integration testing are critical for the successful continuous -integration and delivery of any program or libraries belonging to the PyAnsys +integration (CI) and delivery of any library belonging to the PyAnsys project. -`Test Driven Development (TDD)`_ is the practice of writing unit tests before writing -production code.
The benefit of this practice is that you know each new line of -code is working as soon as it is written. It's easier to track down problems -because only a small amount of code has been implemented since the execution of the -last test. Furthermore, all test cases do not have to be implemented at once but -rather gradually as the code evolves. In 1993, Kent Beck developed TDD as part -of the Extreme Programming software development process. +In 1993, Kent Beck developed `Test Driven Development (TDD)`_ as part +of the Extreme Programming software development process. TDD is the practice +of writing unit tests before writing production code. The benefit of this practice +is that you know each new line of code is working as soon as it is written. It's +easier to track down problems because only a small amount of code has been implemented +since the execution of the last test. Furthermore, all test cases do not have to be +implemented at once but rather gradually as the code evolves. You should follow TDD when developing your PyAnsys project. Examples and best practices for unit tests follow. @@ -26,20 +26,20 @@ Test framework -For consistency, PyAnsys tools and libraries should use either the `unittest -`_ or `pytest`_ framework -for unit testing. This last framework is recommended unless any constraint +For consistency, PyAnsys tools and libraries should use either the `pytest`_ or +`unittest `_ framework +for unit testing. The ``pytest`` framework is recommended unless a constraint in your project prevents you from using it. As described in :ref:`Required files`, -unit tests should be placed in :ref:`The -\`\`tests/\`\` directory` in the library's root directory. +you should place unit tests in :ref:`The \`\`tests\`\` directory` in the library's +root directory. 
Add testing dependencies ~~~~~~~~~~~~~~~~~~~~~~~~ -Requirements for testing dependencies should be included either in :ref:`The -\`\`setup.py\`\` file`, :ref:`The \`\`pyproject.toml\`\` file` or in a +Requirements for testing dependencies should be included in :ref:`The +\`\`setup.py\`\` file`, :ref:`The \`\`pyproject.toml\`\` file`, or a ``requirements_tests.txt`` file. Only ``pytest`` and `pytest-cov`_ -must be specified as third-party dependencies because``unittest`` is included +must be specified as third-party dependencies because ``unittest`` is included in `The Python Standard Library `_. .. tab-set:: @@ -81,7 +81,7 @@ in `The Python Standard Library `_. pytest pytest-cov -These dependencies can be installed using ``pip``: +You can use ``pip`` to install these testing dependencies: .. tab-set:: @@ -101,21 +101,21 @@ These dependencies can be installed using ``pip``: Organize test files ~~~~~~~~~~~~~~~~~~~ -You must collect test files in :ref:`The \`\`tests/\`\` directory`. To -guarantee that tests are run against the library source code, follow a ``src/`` -layout as explained in :ref:`The \`\`src/\`\` directory` rather than +You must place your test files in :ref:`The \`\`tests\`\` directory`. To +guarantee that tests are run against the library source code, follow a ``src`` +layout as explained in :ref:`The \`\`src\`\` directory` rather than having your Python library source located directly in the repository root directory. -This helps you to: +This helps you to achieve these objectives: -- Avoid testing the source of the repository rather than testing the installed package +- Avoid testing the source of the repository rather than testing the installed package. - Catch errors caused by files that might be missed by the installer, including any C extensions or additional internal packages. 
-Run tests ---------- +Test execution +-------------- -Once you have installed ``pytest``, you can execute the test suite with: +Once you have installed ``pytest``, you can execute the test suite with this command: .. code-block:: text @@ -127,21 +127,22 @@ Filter tests To run a subset of all available tests, you can take advantage of the ``keywords`` and ``markers`` flags: -**Filtering tests by keywords** +**Filter tests by keywords** .. code-block:: text pytest -k '' pytest -k 'not ' -**Filtering tests by markers** +**Filter tests by markers** .. code-block:: text pytest -m slow -For more information about filtering tests, see `Working with Custom Markers -`_ . +For more information about filtering tests, see `Working with custom markers +`_ in the ``pytest`` +documentation. Testing methodology ------------------- @@ -157,7 +158,7 @@ integration, and functional. app or software stack. For example, if your library extends or wraps the features of an external service, you must test that service in conjunction with your library. On GitHub, the ideal approach for this would - be to start your service via Docker and test accordingly. You should still be + be to start your service using `Docker`_ and then test accordingly. You should still be testing at the individual class or method level, but you can now test how multiple libraries or services interact. This is mandatory for testing APIs and is preferred over mocking the service. @@ -175,7 +176,6 @@ testing framework. Consider implementing functional tests as examples within your project's documentation examples. This allows you to write helpful user-facing tests while accomplishing functional testing. - Unit testing ~~~~~~~~~~~~ Unit testing tests at the lowest possible level, isolated @@ -185,9 +185,13 @@ from other applications or libraries. For Python tool libraries like ..
_ansys-tools-protoc-helper: https://github.com/ansys/ansys-tools-protoc-helper -These tests should be written to test a single method in isolation. For -example, if you have a method that deserializes chunks, the associated test -file would be: +These tests should be written to test a single method in isolation. For example, +the following ``parse_chunks.py`` file has a method that deserializes chunks. The +associated ``test_parse_chunks.py`` file tests this method in isolation. + +.. note:: + This example assumes that you do not have a ``serialize_chunks`` function in your + library. If you did, you could exclude it from the ``test_parse_chunks.py`` file. .. tab-set:: @@ -261,9 +265,6 @@ file would be: parsed_array = parsed_array.reshape(-1, 3) assert np.allclose(sample_array, parsed_array) -This assumes that you do not have a ``serialize_chunks`` function in your -library. If you did, you could exclude it from ``test_parse_chunks.py``. - Integration testing ~~~~~~~~~~~~~~~~~~~ @@ -277,14 +278,14 @@ Any PyAnsys library that provides features by wrapping a gRPC interface should include tests of the gRPC methods exposed by the PROTO files and wrapped by the Python library. They would not be expected to test the features of the server but rather the APIs exposed by the server. For example, if testing -the gRPC method ``GetNode``, then your integration test would test the wrapped +the ``GetNode`` gRPC method, then your integration test would test the wrapped Python function. If the Python library wraps this gRPC method with a -``get_node`` method, your test would be implemented within -``tests/test_nodes.py``: +``get_node`` method, your test would be implemented within the +``tests/test_nodes.py`` file. -.. tab-set::: +.. tab-set:: - .. tab-item:: gRPC Code + .. tab-item:: gRPC code .. code-block:: rust @@ -310,14 +311,14 @@ Python function. If the Python library wraps this gRPC method with a // other methods } - .. tab-item:: Python Code + .. tab-item:: Python code .. code-block:: python from ansys.product.service.v0 import service_pb2 def get_node(self, index): - """Return the coordinates of a node for a given index. + """Get the coordinates of a node for a given index. Parameters ---------- @@ -343,7 +344,7 @@ Python function. If the Python library wraps this gRPC method with a return resp.x, resp.y, resp.z - .. tab-item:: Unit Test + .. tab-item:: Unit test .. code-block:: python @@ -356,8 +357,8 @@ Python function. If the Python library wraps this gRPC method with a assert srv.get_node(node_index) == node_coord The goal of the unit test should be to test the API rather than the product or -service. In the case of ``GetNode``, this method should have already -been tested when designing and developing the service. +service. The ``GetNode`` gRPC method should have already been tested when +designing and developing the service. Test using remote method invocation +++++++++++++++++++++++++++++++++++ @@ -369,17 +370,18 @@ server using a custom API or language only available within the context of the service. For example, if a method has an RMI service definition named ``SendCommand()`` and -a Python wrapping named ``send_command``, the example test would be: +a Python wrapping named ``send_command``, your code and the example test would look +like this: .. tab-set:: - .. tab-item:: gRPC Code + .. tab-item:: gRPC code .. code-block:: rust message SendCommand() - .. tab-item:: Python Code + .. tab-item:: Python code .. code-block:: python @@ -392,7 +394,7 @@ a Python wrapping named ``send_command``, the example test would be: Command to run on the remote server. """ - .. tab-item:: Unit Test + .. tab-item:: Unit test ..
code-block:: python @@ -400,7 +402,7 @@ a Python wrapping named ``send_command``, the example test would be: output = srv.send_command("CREATE,1") assert "Created 1" in output -Note that this test only validates that the command ``"CREATE,1"`` has been +Note that this test only validates that the ``"CREATE,1"`` command has been received, executed, and sent back to the client. It does not validate all commands. Running such a test is necessary only if there are edge cases, which include characters that cannot be streamed or use long-running commands. @@ -413,14 +415,12 @@ that are expected to be executed by the user. Unlike unit or integration testing, functional tests are testing the library as a whole by calling several methods to accomplish a task. You should run these tests only after unit and integration testing is complete. Ideally, you should run them outside the -``pytest`` framework while building documentation with `sphinx-gallery`_. +``pytest`` framework while building documentation with `Sphinx-Gallery `_. .. note:: Functional tests should not contribute to global library coverage. Testing should always be done on individual functions or methods. -.. _sphinx-gallery: https://sphinx-gallery.github.io/ - Test code coverage ------------------ @@ -436,8 +436,8 @@ Configure code coverage If you do not configure code coverage properly, the resulting report does not show the real scope covered by the test suite. -Assuming that a ``PyAnsys`` project follows :ref:`The \`\`src/\`\` directory` layout, -you must pass the following flag when :ref:`Run tests`: +Assuming that a ``PyAnsys`` project follows :ref:`The \`\`src\`\` directory` layout, +you must pass the following flag when :ref:`Test execution`: .. 
code-block:: text @@ -445,7 +445,7 @@ you must pass the following flag when :ref:`Run tests`: This command tells ``pytest-cov`` to look for source code in the ``src/ansys/`` directory and generate a terminal report for all tests -located in :ref:`The \`\`tests/\`\` directory`. +located in :ref:`The \`\`tests\`\` directory`. While 100% coverage is ideal, the law of diminishing returns applies to the coverage of a Python library. Consequently, achieving 80-90% coverage is @@ -475,9 +475,9 @@ Enforce code coverage One way of enforcing unit test coverage with a project on GitHub is to use ``codecov.io`` to enforce minimum patch (and optionally project) coverage. Because -this app is already available to the `Ansys Organization -`_, you can simply add a ``codecov.yml`` file to the root -directory of your repository. This example file provides a sample configuration: +this app is already available to the `Ansys GitHub organization`_, you can simply +add a ``codecov.yml`` file to the root directory of your repository. This example +file provides a sample configuration: .. code:: yaml @@ -516,10 +516,9 @@ development environment, it is often not possible to enforce testing on multiple platforms, or even to enforce unit testing in general. However, with the proper automated CI/CD, such testing can still occur and be enforced automatically. -GitHub Actions is the preferred automated CI/CD platform for running Python +`GitHub Actions`_ is the preferred automated CI/CD platform for running Python library unit tests for PyAnsys. It can be used immediately by cloning the -project `template `_. If you are -unfamiliar with GitHub actions, see `GitHub Actions`_ for an overview. +project `template `_. .. literalinclude:: code/tests.yml :language: yaml diff --git a/doc/source/links.rst b/doc/source/links.rst index 412fb7b66..839eb7b31 100644 --- a/doc/source/links.rst +++ b/doc/source/links.rst @@ -22,7 +22,7 @@ .. _ansys-templates: https://templates.ansys.com/index.html .. 
_Git: https://git-scm.com/ -.. _Git_Extensions: https://gitextensions.github.io/ +.. _Git Extensions: https://gitextensions.github.io/ .. _Notepadpp: https://notepad-plus-plus.org/ .. _pip: https://pypi.org/project/pip/ .. _Python: https://www.python.org/ @@ -56,6 +56,7 @@ .. #Other libraries .. _Black: https://black.readthedocs.io/en/latest/ .. _codespell: https://github.com/codespell-project/codespell +.. _Docker: https://www.docker.com/ .. _Flake8: https://flake8.pycqa.org/en/latest/ .. _isort: https://pycqa.github.io/isort/ .. _numpy: https://numpy.org/ @@ -85,7 +86,7 @@ .. _Sphinx_autoapi: https://pypi.org/project/sphinx-autoapi/ .. _Sphinx_contrib_org: https://github.com/sphinx-contrib/ .. _Sphinx_ext_sphinx_design: https://sphinx-design.readthedocs.io/en/latest/index.html -.. _Sphinx_ext_sphinx_gallery: https://sphinx-gallery.github.io/stable/index.html +.. _Sphinx_ext_sphinx_gallery: https://sphinx-gallery.github.io/ .. _Sphinx_ext_sphinx_gallery_structure: https://sphinx-gallery.github.io/stable/syntax.html .. _Sphinx_extensions: https://www.sphinx-doc.org/en/master/usage/extensions/index.html .. _Sphinx_nbsphinx_extension: https://nbsphinx.readthedocs.io/en/0.9.3/ @@ -107,7 +108,7 @@ .. #GitHub links: .. _Join_GitHub: https://github.com/join .. _GitHub: https://github.com/ -.. _GitHub_Desktop: https://desktop.github.com/ +.. _GitHub Desktop: https://desktop.github.com/ .. _GitHub_doc: https://docs.github.com/en .. _GitHub_doc_flavored_markdown: https://docs.github.com/en/contributing/writing-for-github-docs/using-markdown-and-liquid-in-github-docs .. _Liquid: https://shopify.github.io/liquid/basics/introduction/ @@ -130,7 +131,8 @@ .. _MIT License: https://opensource.org/licenses/MIT .. _PEP 8: https://www.python.org/dev/peps/pep-0008/ .. _PEP8: https://peps.python.org/pep-0008/#introduction -.. _PEP-257: https://peps.python.org/pep-0257 +.. _PEP 20: https://peps.python.org/pep-0020/ +.. _PEP 257: https://peps.python.org/pep-0257 .. 
_PEP 420: https://peps.python.org/pep-0420/ .. _PEP 517: https://peps.python.org/pep-0517/ .. _PEP 518: https://peps.python.org/pep-0518/ @@ -146,7 +148,7 @@ .. _Building and Distributing Packages with Setuptools: https://setuptools.pypa.io/en/latest/setuptools.html .. _Configuring setuptools using setup.cfg files: https://setuptools.pypa.io/en/latest/userguide/declarative_config.html -.. _setuptools: https://setuptools.pypa.io/en/latest/index.html +.. _Setuptools: https://setuptools.pypa.io/en/latest/index.html .. _blacken-docs: https://github.com/asottile/blacken-docs .. _interrogate: https://interrogate.readthedocs.io/en/latest/ @@ -156,7 +158,7 @@ .. _docformatter: https://github.com/PyCQA/docformatter .. _pydocstyle: https://www.pydocstyle.org/en/stable/ -.. _pyansys/template: https://github.com/ansys/template +.. _ansys/template: https://github.com/ansys/template .. _gRPC: https://grpc.io/ .. _pythoncom: http://timgolden.me.uk/pywin32-docs/pythoncom.html @@ -176,7 +178,6 @@ .. _pip Documentation: https://pip.pypa.io/en/stable/cli/pip_install/ .. _Semantic Versioning: https://semver.org/ -.. _MANIFEST.in: https://packaging.python.org/en/latest/guides/using-manifest-in/ .. _Flit: https://flit.pypa.io/en/latest/ .. _flit pyproject.toml guidelines: https://flit.readthedocs.io/en/latest/pyproject_toml.html .. _Poetry: https://python-poetry.org/ diff --git a/doc/source/packaging/build-systems.rst b/doc/source/packaging/build-systems.rst index f14630522..cca6281f4 100644 --- a/doc/source/packaging/build-systems.rst +++ b/doc/source/packaging/build-systems.rst @@ -13,17 +13,18 @@ Artifacts The build system allows maintainers to generate artifacts for their Python libraries. Here, `artifacts` refers to both wheel and source files: -- ``Wheel files`` have the ``*.whl`` file extension. -- ``Source files`` have the ``*.tar.gz`` or ``*.zip`` extension. +- Wheel files have a WHL extension. +- Source files have a TAR.GZ or ZIP extension. 
-These are the files to upload to `PyPI`_ when releasing a new version of a +These are the files that you upload to `PyPI`_ when releasing a new version of a PyAnsys project. .. warning:: - Not all files are included by default in the source distribution. A - `MANIFEST.in`_ is required at the root of the project to specify additional - files. + Not all files are included by default in the source distribution. A ``MANIFEST.in`` + file is required at the root of the project to specify additional + files. For more information, see `Controlling files in the distribution `_ + in the Setuptools documentation. The interaction between the maintainer and the build system is performed using a build system tool. This tool provides both a frontend and a backend. The maintainers @@ -51,8 +52,8 @@ These problems led to the acceptance of `PEP 517`_ and `PEP 518`_. PEP 517 ------- -PEP 517 allows Python developers to specify the build backend tool for -generating artifacts. The previous :numref:`build system diag` diagram shows the +PEP 517 allows Python developers to specify the build-backend tool for +generating artifacts. :numref:`build system diag` shows the most popular backends: - Setuptools, while very popular, lacks the ability to declare build-time dependencies @@ -67,14 +68,14 @@ PEP 518 ------- In addition to the ``setup.py`` file, PEP 518 includes a project file named -``pyproject.toml``. The main goal of this file is to specify build time dependencies. -However, some build system tools like Flit or Poetry are able to specify all -project metadata inside the ``pyproject.toml`` file and eliminate usage of the -``setup.py`` file. +``pyproject.toml``. Its main goal is to specify build-time dependencies. +However, some build-system tools like Flit or Poetry are able to specify all +project metadata inside the ``pyproject.toml`` file and eliminate the need to use +the ``setup.py`` file. 
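For instance, here is a sketch of a ``pyproject.toml`` file in which Flit reads all project metadata, eliminating the ``setup.py`` file entirely. The project name, version, and dependency list are placeholders, and the ``[project]`` table assumes a recent ``flit_core`` that supports PEP 621 metadata:

```toml
[build-system]
requires = ["flit_core >=3.2,<4"]     # Defined by PEP 518
build-backend = "flit_core.buildapi"  # Defined by PEP 517

[project]
name = "ansys-product-library"
version = "0.1.0"
description = "Placeholder metadata declared entirely in pyproject.toml."
requires-python = ">=3.9"
dependencies = ["numpy"]
```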
-To specify the build time requirements, the ``[build-system]`` table must be +To specify the build-time requirements, the ``[build-system]`` table must be declared in the ``pyproject.toml`` file. Within it, the ``requires`` key is -assigned to a list of strings, which are the build time requirements. +assigned to a list of strings, which are the build-time requirements. The combination of PEP 517 and PEP 518 leads to the following syntax in a ``pyproject.toml`` file: @@ -85,7 +86,7 @@ The combination of PEP 517 and PEP 518 leads to the following syntax in a requires = ["flit"] # Defined by PEP 518 build-backend = "flit_core.api" # Defined by PEP 517 -Build backend tools +build-backend tools =================== This section lists some of the most popular build systems in the @@ -98,7 +99,7 @@ metadata. Setuptools ---------- -Setuptools has been a part of the Python ecosystem for a long time. Unless +`Setuptools `_ has been a part of the Python ecosystem for a long time. Unless you require high control over your project's installation steps, you should use Flit or Poetry. @@ -113,7 +114,7 @@ specified either in the ``setup.py`` or ``setup.cfg`` file. Flit ---- -Flit is a modern and lightweight build system that requires developers +`Flit`_ is a modern and lightweight build system that requires developers to manage virtual environments on their own. Developers must: * Create a virtual environment and activate it. @@ -129,22 +130,24 @@ guidelines`_. Poetry ------ -Because of its ``poetry.lock`` file, Poetry provides strong dependency pinning. When +`Poetry`_ has a ``poetry.lock`` file, which provides strong dependency pinning. When installing a package, Poetry creates a virtual environment, thus ensuring an isolated package development environment. -Nevertheless, it is possible to make Poetry ignore the ``poetry.lock`` file with: +Nevertheless, it is possible to make Poetry ignore the ``poetry.lock`` file with this +command: .. 
code:: bash poetry config virtualenvs.create false --local -Using Poetry is popular because it: +Using Poetry is popular because it offers these features: -* Supports pinning dependency versions via a ``poetry.lock`` file that can be +* Supports pinning dependency versions using a ``poetry.lock`` file that can be used for testing and CI * Allows downstream packages to still consume a loose dependency specification * Integrates with `dependabot`_ to update the pinned version The ``[tool.poetry]`` section contains metadata and defines project -dependencies. For more information, see `poetry pyproject.toml documentation`_. +dependencies. For more information, see `The pyproject.toml file `_ +in the Poetry documentation. diff --git a/doc/source/packaging/code/code_of_conduct_file.md b/doc/source/packaging/code/code_of_conduct_file.md index 8131c63c3..94b9a7803 100644 --- a/doc/source/packaging/code/code_of_conduct_file.md +++ b/doc/source/packaging/code/code_of_conduct_file.md @@ -49,7 +49,7 @@ offensive, or harmful. This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project email -address, posting via an official social media account, or acting as an appointed +address, posting using an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers. diff --git a/doc/source/packaging/code/contributing_file.md b/doc/source/packaging/code/contributing_file.md index e86d61092..0fd7a58bb 100644 --- a/doc/source/packaging/code/contributing_file.md +++ b/doc/source/packaging/code/contributing_file.md @@ -1,7 +1,13 @@ -# Contributing +# Contribute -Please, refer to the [PyAnsys Developer's Guide] for contributing to this project. 
+Overall guidance on contributing to a PyAnsys library appears in the +[Contributing] topic in the *PyAnsys developer's guide*. Ensure that you +are thoroughly familiar with this guide before attempting to contribute to +X. -[PyAnsys Developer's Guide]: https://dev.docs.pyansys.com/index.html +The following contribution information is specific to X. - +[Contributing]: https://dev.docs.pyansys.com/how-to/contributing.html + + + diff --git a/doc/source/packaging/diag/doc_structure_diag.rst b/doc/source/packaging/diag/doc_structure_diag.rst index adde9ee71..df4d15864 100644 --- a/doc/source/packaging/diag/doc_structure_diag.rst +++ b/doc/source/packaging/diag/doc_structure_diag.rst @@ -1,7 +1,7 @@ .. _doc structure diag: .. graphviz:: - :caption: Generic structure for the ``doc/`` directory. - :alt: Generic structure for the ``doc/`` directory. + :caption: Generic structure for the ``doc`` directory. + :alt: Generic structure for the ``doc`` directory. :align: center digraph "sphinx-ext-graphviz" { diff --git a/doc/source/packaging/diag/src_structure_diag.rst b/doc/source/packaging/diag/src_structure_diag.rst index 53013a6b6..2a1e05d56 100644 --- a/doc/source/packaging/diag/src_structure_diag.rst +++ b/doc/source/packaging/diag/src_structure_diag.rst @@ -1,7 +1,7 @@ .. _src structure diag: .. graphviz:: - :caption: Generic structure for the src/ directory. - :alt: Generic structure for the src/ directory. + :caption: Generic structure for the ``src`` directory. + :alt: Generic structure for the ``src`` directory. :align: center digraph "sphinx-ext-graphviz" { diff --git a/doc/source/packaging/index.rst b/doc/source/packaging/index.rst index 7342b58a2..52f512aa1 100644 --- a/doc/source/packaging/index.rst +++ b/doc/source/packaging/index.rst @@ -2,19 +2,19 @@ Packaging style =============== -A PyAnsys library eliminates the need to share snippets of code that -perform actions. 
Users can instead create workflows consisting of -their own Python modules and third-party libraries. This extends -Ansys's products in a way that matches how libraries are created -in the Python community while maintaining the separation between -products, APIs, and PyAnsys client libraries. +A PyAnsys library eliminates the need to share code snippets for +performing actions. You can instead create workflows consisting of +Python modules and third-party libraries. This extends Ansys products +in a way that matches how libraries are created in the Python community +while maintaining the separation between products, APIs, and PyAnsys +client libraries. To avoid the anti-pattern of providing single-use scripts, the -general pattern for a PyAnsys library ensures: +general pattern for a PyAnsys library provides these features: -* Clear, open source APIs that are consistent with community standards - are hosted on GitHub -* Reusable packages can be updated and patched outside of the +* Clear, GitHub-hosted open source APIs that are consistent with community + standards +* Reusable packages that can be updated and patched outside of the Ansys release schedule, while still being directly dependent on Ansys products * Unit testing, release packaging, and documentation @@ -27,7 +27,7 @@ This diagram shows the general pattern that each PyAnsys library should follow: The Ansys product or service exposes an interface that is locally accessible (for example, .NET using `pythoncom`_, `SWIG`_, or `C extensions`_) or a service that is both locally and remotely -accessible (`REST`_ or `gRPC`_). This interface is referred to as the +accessible using `REST`_ or `gRPC`_. This interface is referred to as the API (Application Programming Interface). 
While this API can be directly accessed, this often results in unreadable and unmaintainable code that forces users to rewrite setup boilerplate and other methods diff --git a/doc/source/packaging/structure.rst b/doc/source/packaging/structure.rst index 87c3950ce..2798daa72 100644 --- a/doc/source/packaging/structure.rst +++ b/doc/source/packaging/structure.rst @@ -12,21 +12,21 @@ Naming convention ================= Large organizations providing Python packages follow a consistent naming -pattern. Ansys follows two naming conventions, depending on the nature of the project. +convention. Ansys follows two naming conventions, depending on the nature of the project. PyAnsys library --------------- -- The project name is to be ``Py``. For example, ``PyMAPDL`` is the - project name for MAPDL and ``PyAEDT`` is the project name for AEDT. +- The project name is in the format ``Py``. For example, ``PyAEDT`` is the + project name for AEDT (Ansys Electronics Desktop) and ``PyMAPDL`` is the + project name for MAPDL (an abbreviation for Mechanical APDL). - The repository name as hosted on GitHub should be all lowercase to follow - GitHub community standards. For example, `pymapdl`_ and `pyaedt`_. + GitHub community standards. For example, `pyaedt`_ and `pymapdl`_. -- The Python library name is to be in the format - ``ansys--``. For example, `ansys-mapdl-core - `_ is the name for the core MAPDL - library. +- The Python library name is in the format ``ansys--``. + For example, `ansys-mapdl-core `_ + is the name for the core MAPDL library. .. include:: diag/pyansys_namespace_diag.rst @@ -50,7 +50,7 @@ named ``ansys-api-`` and may contain an additional level: .. include:: diag/grpc_structure_diag.rst -This structure leads to the following namespace within ``*.proto`` files: +This structure leads to the following namespace within Protobuf (PROTO) files: .. 
code:: @@ -60,8 +60,8 @@ Python libraries ================ A Python library is the formal way of distributing Python source code. It allows -for reuse and for specifying Python code dependencies. The guidance presented in this section -is compliant with the `Python Packaging Authority`_ (PyPA) and PyAnsys recommendations. +for reuse and for specifying Python code dependencies. Guidelines in this section +are compliant with `Python Packaging Authority`_ (PyPA) and PyAnsys recommendations. .. note:: @@ -73,19 +73,19 @@ Scripts, modules, subpackages, and packages -------------------------------------------- To understand the structure of a Python Library, it is important to know -the difference between Python scripts, modules, sub-packages, and packages. +the difference between Python scripts, modules, subpackages, and packages. * ``Script``: Any Python file with logic source code * ``Module``: Any Python script hosted next to an ``__init__.py`` file -* ``Sub-package``: Any directory containing various Python modules -* ``Package``: Any directory containing Python modules and sub-packages +* ``Subpackage``: Any directory containing various Python modules +* ``Package``: Any directory containing Python modules and subpackages Differences between a Python package and library ------------------------------------------------ Although the terms *package* and *library* are often used interchangeably, there is a key difference between them. A Python package is a collection of Python modules and -sub-packages, while a Python Library is a collection of Python packages. Figure +subpackages, while a Python library is a collection of Python packages. :numref:`python pkg lib diag` exposes this. .. include:: diag/python_library_diag.rst @@ -93,117 +93,122 @@ sub-packages, while a Python Library is a collection of Python packages. 
Figure Required files ============== -The structure of any PyAnsys library contains these files and directories: +The structure of any PyAnsys library contains these directories and files: .. include:: diag/pyproduct_library_structure_diag.rst -Descriptions follow for some of the directories in the structure: +Descriptions follow for some of the directories in this structure: -- ``doc/`` contains files related to documentation, guidelines, and examples. +- ``doc``: Contains files related to documentation, guidelines, and examples -- ``src/`` contains all Python modules and scripts that form the project. +- ``src``: Contains all Python modules and scripts that form the project -- ``tests/`` contains all unit tests for checking the integrity of the project. +- ``tests``: Contains all unit tests for checking the integrity of the project -- ``setup.py`` or ``pyproject.toml`` is the project file. +- ``setup.py`` or ``pyproject.toml``: Configures the project. -The ``doc/`` directory +The ``doc`` directory ---------------------- -When distributing software, it is important to document it. Documenting software -means giving guidelines on how to install it and describing all functions, -methods, and classes that it ships with. Case scenarios and examples should also -be part of the documentation. +Prior to distributing software, it is important to document it. Documenting software +consists of explaining how to install and use all functions, classes, and methods that +it ships with. The documentation should also include use case scenarios. 
-A PyAnsys project should have the following documentation sections: +A PyAnsys project typically has these documentation sections: -- ``Getting Started``: Defines requirements and provides installation information -- ``User Guide``: Explains how to use the software -- ``API Reference``: Describes the source code +- ``Getting started``: Defines requirements and provides installation information +- ``User guide``: Explains how to use the software +- ``API reference``: Describes the source code - ``Examples``: Provides use case scenarios that demonstrate the capabilities of the software -- ``Contributing``: Supplies project-specific contribution guides and can link to general PyAnsys contribution guidelines +- ``Contribute``: Links to the *PyAnsys developer's guide* for overall guidance and supplies + project-specific contribution information -Projects in the PyAnsys ecosystem take advantage of `Sphinx`_, a tool used for +Projects in the PyAnsys ecosystem take advantage of `Sphinx`_, a tool for building documentation for Python-based projects. As shown in :numref:`doc structure diag`, -`Sphinx`_ requires a ``doc/`` directory with a specific structure: +Sphinx requires a ``doc`` directory with a specific structure: .. include:: diag/doc_structure_diag.rst -- ``_build/`` contains the rendered documentation in various formats, such as HTML +- ``_build``: Contains the rendered documentation in various formats, such as HTML and PDF. -- ``source/`` contains the RST files that are used to render the documentation. +- ``source``: Contains the RST files with the manually authored content. Folder + and file names in this directory should use hyphens as space delimiters for search + optimization of the generated HTML documentation. -- ``make.bat`` and ``Makefile`` are used to automate cleaning and building +- ``make.bat`` and ``Makefile``: Automate documentation cleaning and building commands.
You use ``make.bat`` when running on Windows and ``Makefile`` - when running on MacOS or Linux. The required configuration for these files is - explained in the :ref:`Automation files` section. + when running on macOS or Linux. For information on the required configuration for + these files, see :ref:`Automation files`. -The ``source/`` directory must contain at least these files: +The ``source`` directory must contain at least these files: -- ``conf.py`` is a Python script used to declare the configuration of `Sphinx`_. - The minimum required configuration for this file is explained in :ref:`The +- ``conf.py``: Python script that declares the configuration of `Sphinx`_. + The minimum required configuration is explained in :ref:`The \`\`conf.py\`\` file`. -- ``index.rst`` is the index page of the documentation. In this file, try to reuse the - ``README.rst`` file to avoid duplication. +- ``index.rst``: Main index (landing) page for the overall documentation. Some + projects reuse ``README.rst`` files in the main ``index.rst`` file. + For more information, see :ref:`readme_files`. In newer projects, however, the ``index.rst`` + file uses a grid of cards to present the organization of the documentation in a visual manner. -If you would like to include images or documents, add them in the ``_static/`` +You generally add any images or documents that you would like to include in a ``_static`` directory. -The ``src/`` directory +The ``src`` directory ---------------------- -All the Python source code must be located in the ``src/`` directory. This is where the +All the Python source code must be located in the ``src`` directory. This is where the build system looks when generating the wheel and source distributions. .. warning:: - Folders inside the ``src/`` directory cannot contain spaces or hyphens. Replace these - characters with an underscore '_'. + The names of directories and files in the ``src`` directory cannot contain spaces or hyphens. 
+   Replace these characters with an underscore (``_``).

-The structure of the ``src/`` directory determines the namespace of the PyAnsys
-library. A namespace allow you to easily split sub-packages from a package into
+The structure of the ``src`` directory determines the namespace of the PyAnsys
+library. A namespace allows you to easily split subpackages from a package into
 single, independent distributions.

-There are different approaches available for creating a namespace package.
-Ansys namespaces use the `PEP 420`_ `native namespace packages`_ approach.
+There are different approaches available for creating a namespace.
+Ansys namespaces use the `native namespace packages`_ approach from
+`PEP 420`_.

 Therefore, the source directory of any PyAnsys library must look like the one
-shown in the diagram :numref:`src structure diag`:
+in :numref:`src structure diag`:

 .. include:: diag/src_structure_diag.rst

-The ``tests/`` directory
+The ``tests`` directory
 ------------------------

 To guarantee the integrity of a PyAnsys project, a good test suite is required.
-PyAnsys projects use the `pytest`_ testing framework.
+PyAnsys projects use the `pytest`_ framework.

 A good practice is to emulate the structure of the ``src/ansys/product/library``
 directory, although this is not always necessary.

 .. include:: diag/tests_structure_diag.rst

-Notice the use of ``tests_*/`` when creating new directories inside the
-``tests/`` one. For unit testing files, names use the ``test_*.py`` prefix.
-This is the preferred way of naming directories and files inside the
-``tests/`` directory.
+Notice the use of ``tests_*`` when creating child directories within the
+``tests`` directory. For unit testing files, names use the ``test_*.py`` prefix.
+This is the preferred way of naming directories and files in the
+``tests`` directory.

 The ``AUTHORS`` file
 --------------------

-An ``AUTHORS`` file is used to specify the authorship of the repository.
Its
-format has been defined by the Ansys Legal department. External contributors
-may be added on demand to this file. Make sure to adapt the project name on
-your specific repository's ``AUTHORS`` file.
+You use the ``AUTHORS`` file to specify the authorship of the repository. The
+Ansys Legal department has defined its format. You can add external contributors
+to this file on demand. Make sure that you adapt the project name on your
+specific repository's ``AUTHORS`` file.

 .. include:: code/authors_code.rst

 The ``CHANGELOG.md`` file
 -------------------------

-This file is used to collect the new features, fixed bugs, documentation
-improvements and new contributors. It allows to have a quick-view on the latest
+You use the ``CHANGELOG.md`` file to collect new features, fixed bugs, documentation
+improvements, and new contributors. It provides a summary of the latest
 enhancements to the project.

 .. literalinclude:: code/changelog_file.md
@@ -212,34 +217,37 @@ enhancements to the project.

 The ``CODE_OF_CONDUCT.md`` file
 -------------------------------

-This file is used to specify how users, developers, and maintainers should behave
-while working in the project. PyAnsys projects usually adopt the ``Contributor
-Covenant Code of Conduct``, which is very popular across open source projects.
+You use the ``CODE_OF_CONDUCT.md`` file to specify how users, developers, and maintainers
+should behave while working in the project. PyAnsys projects usually adopt the *Contributor
+Covenant Code of Conduct*, which is very popular across open source projects.

 .. literalinclude:: code/code_of_conduct_file.md
   :language: markdown

 The ``CONTRIBUTING.md`` file
 ----------------------------

-This file is used as a quick entry-point for developers wiling to contribute to
-the project. It usually provides references to:
+You use the ``CONTRIBUTING.md`` file to provide a quick entry point for developers
+who are willing to contribute to the project.
It usually provides references to
+this information:

 - Where the source code of the project is hosted.
-- Which steps need to be followed to install the software in "development" mode.
-- Additional ways of contributing to the source code.
+- Which steps must be followed to install the software in "development" mode.
+- Ways of contributing to the source code.

-Ideally, the ``CONTRIBUTING.md`` file for a PyAnsys project should be pointing
-towards the `PyAnsys developer's guide `_.
+Ideally, the ``CONTRIBUTING.md`` file for a PyAnsys project should link
+to the `PyAnsys developer's guide `_ for overall
+guidance.

 .. literalinclude:: code/contributing_file.md
   :language: markdown

 The ``CONTRIBUTORS.md`` file
 ----------------------------
-The ``CONTRIBUTORS.md`` file is used to list the contributors to the repository. Its
+
+You use the ``CONTRIBUTORS.md`` file to list the contributors to the repository. Its
 purpose is to credit the authors for their individual contributions and provide a
-record of authorship for the codebase. Provide your first and last names and
-a link to your GitHub username.
+record of authorship for the codebase. Provide first and last names and
+links to GitHub usernames.

 .. literalinclude:: code/contributors_file.md
   :language: markdown

@@ -247,36 +255,42 @@ a link to your GitHub username.

 The ``LICENSE`` file
 --------------------

-The ``LICENSE`` file provides the legal framework for the software. `PyAnsys`_
-projects must use `MIT License`_. A template for
-this license is provided below:
+The ``LICENSE`` file provides the legal framework for the software. PyAnsys projects
+must use the `MIT License`_. Here is the template:

 .. include:: code/license_mit_code.rst

-.. note::
+.. caution::

-   Just because a software does not ship with a LICENSE file, it does not mean
+   Just because software does not ship with a ``LICENSE`` file, it does not mean
    it is free or open source.
If you need to use unlicensed software, contact
-   its development team so they can provide you with the correct license.
+   its development team so that they can provide you with the correct license.

-The ``README.rst`` file
------------------------
+The ``README`` file
+--------------------

-Each PyAnsys library should have a ``README.rst`` file in the root directory.
+Each PyAnsys library must have a ``README`` file in the root directory.
 The preferred format of this file is `reStructuredText Markup Syntax`_,
-although `Markdown Syntax`_ can be used too. While Markdown syntax has better
-GitHub support, ReStructured Text (RST) files can be reused within Sphinx documentation.
-This avoids duplication between the ``README.rst`` and the main ``index.rst`` in
-the ``doc/source/`` directory.
+although you can also use `Markdown Syntax`_. While Markdown syntax has better
+GitHub support, you can reuse reStructuredText (RST) files within Sphinx documentation.
+For more information, see :ref:`readme_files`.

-The ``README.rst`` file should at the minimum contain these elements:
+The ``README`` file should at the minimum contain these elements:

 - PyAnsys library title
 - General description
-- Installation directions (via ``pip install`` and ``git clone``)
-- Basic usage
-- Links to the full documentation
+- Installation instructions (using ``pip install`` and ``git clone``) but only if the library
+  reuses ``README`` file content in its documentation
+
+.. note::
+   While older projects tend to reuse content in their ``README.rst`` files in the
+   main ``index.rst`` files in their ``doc/source`` directories, newer projects do not.
+   Instead, they provide a bulleted list with documentation links and descriptions in
+   their ``README`` files. In the main ``index.rst`` files for their documentation,
+   they then use a grid of cards to visually explain and link to documentation sections.
+ This keeps the ``README`` file focused on why you might want to explore the + library and allows you to quickly view documentation sections of interest. The ``README.rst`` file is also reused within the project file metadata. It is usually included in the ``long-description`` field. @@ -284,37 +298,36 @@ usually included in the ``long-description`` field. The ``pyproject.toml`` file --------------------------- -`PEP 518`_ introduced the usage of a project file named -``pyproject.toml``. - -The ``pyproject.toml`` file is mandatory because it allows ``pip`` to resolve the +`PEP 518`_ introduced the use of a project file named ``pyproject.toml``. +This file is mandatory because it allows `pip`_ to resolve the requirements for building the library. The following tabs expose the ``[build-system]`` section -for some of the most popular build-system backend tools in the Python ecosystem: +for build-system backend tools commonly used in the Python ecosystem: .. include:: code/pyproject_code.rst The ``setup.py`` file --------------------- -For a long time, the ``setup.py`` file was generally used to build and -distribute Python libraries. Unlike a static ``pyproject.toml`` file, the +For a long time, Python developers used the ``setup.py`` file to build and +distribute their libraries. Unlike a static ``pyproject.toml`` file, the ``setup.py`` file is a Python script. This means that Python code is interpreted when building the library. This approach supports customizing the build process but can also introduce security issues. .. note:: - The ``setup.py`` file is only compatible with `setuptools`_. Consider using a - ``pyproject.toml`` file instead. + The ``setup.py`` file is only compatible with `Setuptools`_, which is why + you should consider using a ``pyproject.toml`` file instead. -While a ``setup.cfg`` file can be used to specify the metadata and packages, the ``setup.py`` -file must also be present. 
For more information, see: +While you can use a ``setup.cfg`` file to specify the metadata and packages, the ``setup.py`` +file must also be present. For more information, see these pages in the Setuptools +documentation: * `Building and Distributing Packages with Setuptools`_ * `Configuring setuptools using setup.cfg files`_ -As a minimum configuration for a PyAnsys project, the following ``setup.py`` -template can be used: +As a minimum configuration for a PyAnsys project, you can use this ``setup.py`` +template: .. include:: code/setup_file_code.rst diff --git a/doc/source/packaging/templates.rst b/doc/source/packaging/templates.rst index 46fd969d1..ad2015c71 100644 --- a/doc/source/packaging/templates.rst +++ b/doc/source/packaging/templates.rst @@ -20,17 +20,18 @@ Ansys templates documentation. Here are important links for this tool: - **Repository**: https://github.com/ansys/ansys-templates - **Documentation**: https://templates.ansys.com -- **Issues board**: https://github.com/ansys/ansys-templates/issues +- **Issues**: https://github.com/ansys/ansys-templates/issues .. note:: If you encounter any problem during the installation or usage of this tool, - open a new issue on the `ansys-templates issues board`_. + open a new issue on the repository's `Issues `_ + page. -PyAnsys available templates -=========================== +PyAnsys templates +================= -There are two templates that you can use to create PyAnsys +The ``ansys-templates`` tool provides two templates for creating PyAnsys projects: ``pyansys`` and ``pyansys-advanced``. .. important:: @@ -42,18 +43,18 @@ projects: ``pyansys`` and ``pyansys-advanced``. PyAnsys template ---------------- -The ``pyansys`` template ships only with the required directories and files to -quickly set up a PyAnsys-compliant project. 
This template provides the following: +The ``pyansys`` template ships only with required directories and files to +quickly set up a PyAnsys-compliant project: - A ``src/ansys/product/library/`` directory - A ``setup.py`` file -- Generation of ``doc/`` and ``tests/`` directories +- Generation of ``doc`` and ``tests`` directories - A generic ``.gitignore`` file for Python libraries -- Build, doc, and test requirements files -- Metadata files like ``README.rst`` and ``LICENSE`` +- Build, documentation, and test requirements files +- Metadata files like those for the ``README`` and ``LICENSE`` To create a project based on the ``pyansys`` template, run -this code: +this command: .. code:: bash @@ -63,15 +64,14 @@ PyAnsys advanced template ------------------------- The ``pyansys-advanced`` template is an enhanced version of the ``pyansys`` template. -It ships with the same files as the preceding template but also includes additional -features: +It ships with the same directories and files and supports additional features: - Allows you to select the project file (``setup.py`` or ``pyproject.toml``) -- Uses `Tox`_ for testing and task automation -- Includes GitHub Actions for CI purposes +- Uses `tox`_ for testing and task automation +- Includes `GitHub Actions`_ for CI purposes - Uses `pre-commit`_ for checking coding style -To create a project based on the ``pyansys-advanced`` template, run this code: +To create a project based on the ``pyansys-advanced`` template, run this command: .. 
code:: bash diff --git a/doc/styles/Vocab/ANSYS/accept.txt b/doc/styles/Vocab/ANSYS/accept.txt index fd4f1f8f9..11c360409 100644 --- a/doc/styles/Vocab/ANSYS/accept.txt +++ b/doc/styles/Vocab/ANSYS/accept.txt @@ -80,11 +80,13 @@ pytest pythoncom pyvista rebasing +recurse reusability Rey REST RPC RST +rulesets scipy [Ss]etuptools [Ss]phinx @@ -101,6 +103,7 @@ turbomachinery UDFs uncomment unittest +untracked unvalidated URIs Vale diff --git a/examples/pyvista_example.py b/examples/pyvista_example.py index 07aed87a7..fd6b268f6 100644 --- a/examples/pyvista_example.py +++ b/examples/pyvista_example.py @@ -4,31 +4,37 @@ Adding a new gallery example ============================ -This example demonstrates how to add a new PyAnsys `Sphinx Gallery -`_ example as well as being a template that -can be used in their creation. +This example shows how to add a new example to the PyAnsys `Sphinx-Gallery +`_. You can use this example as a template +for adding your examples. + Each example should have a reference tag/key in the form: + ``.. __example:``. The ``.. _`` is necessary. Everything that follows is your reference tag, which -can potentially be used within a docstring. As convention, we keep all -references in ``snake_case``. -This section should give a brief overview of what the example is about and/or -demonstrates. The title should be changed to reflect the topic your example -covers. -New examples should be added as python scripts to: +can potentially be used within a docstring. All references should be in snake case. + +The first section, which is text, provides a brief overview of what the example is. +When using this example as a template, you would change the title to an appropriate +one for your example. + +Add new examples as Python scripts like this: + ``examples/-/.py`` .. note:: - Avoid creating new directories unless absolutely necessary. 
If you *must*
-   create a new folder, make sure to add a ``README.txt`` containing a
-   reference, a title and a single sentence description of the folder.
-   Otherwise the new folder will be ignored by Sphinx.
+   Avoid creating directories unless absolutely necessary. If you *must*
+   create a directory, make sure to add a ``README.txt`` file containing a
+   reference, a title, and a one-sentence description of the directory.
+   Otherwise, Sphinx ignores the new directory.
+
+Example file names should use lowercase words separated by hyphens:

-Example file names should be hyphen separated snake case:
 ``some-example.py``

-After this preamble is complete, the first code block begins. This is where you
+
+After this text section is the first code block. This is where you
 typically set up your imports.
 """
@@ -38,20 +44,20 @@
 # Section title
 # ~~~~~~~~~~~~~
 #
-# Code blocks can be broken up with text "sections" which are interpreted as
-# restructured text.
+# Code blocks can be broken up with text sections, which are interpreted as
+# reStructuredText.
 #
-# This will also be translated into a markdown cell in the generated Jupyter
-# notebook or the HTML page.
+# The text sections are also translated into a Markdown cell in the generated Jupyter
+# notebook or in the HTML documentation.
 #
-# Sections can contain any information you may have regarding the example
-# such as step-by-step comments or notes regarding motivations etc.
+# Text sections can contain any information that you may have regarding the example,
+# such as step-by-step comments and notes regarding motivations.
 #
 # As in Jupyter notebooks, if a statement is unassigned at the end of a code
-# block, output will be generated and printed to the screen according to its
-# ``__repr__`` method. Otherwise, you can use ``print()`` to output text.
+# block, output is generated and printed to the screen according to its
+# ``__repr__`` method. Otherwise, you can use the ``print()`` function to output text.
-# Create a dataset and exercise its repr method
+# Create a dataset and exercise its ``__repr__`` method

 dataset = pv.Sphere()
 dataset

@@ -59,30 +65,30 @@
 ###############################################################################
 # Plots and images
 # ~~~~~~~~~~~~~~~~
-# If you use anything that outputs an image (for example,
-# :func:`pyvista.Plotter.show`) the resulting image is rendered within the
-# output HTML.
+# If you use anything that outputs an image (for example, the
+# :func:`pyvista.Plotter.show` function), the resulting image is rendered in the
+# HTML documentation.
 #
 # .. note::
 #
 #    Unless ``sphinx_gallery_thumbnail_number = `` is included at the top
-#    of the example script, first figure (this one) is used for the
+#    of the example script, the first figure (this one) is used for the
 #    gallery thumbnail image.
 #
-# Also note that this image number uses one based indexing.
+# Also note that this image number uses one-based indexing.

 dataset.plot(text="Example Figure")

###############################################################################
 # Caveat - plotter must be within one cell
 # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-# It's not possible for a single :class:`pyvista.Plotter` object across
+# It's not possible to have a single :class:`pyvista.Plotter` object across
 # multiple cells because these are closed out automatically at the end of a
 # cell.
 #
-# Here we just exercise the :class:`pyvista.Actor` ``repr`` for demonstrating
+# This code exercises the :class:`pyvista.Actor` ``repr`` to demonstrate
 # why you might want to instantiate a plotter without showing it in the same
-# cell.
+# cell:

 pl = pv.Plotter()
 actor = pl.add_mesh(dataset)

@@ -91,10 +97,9 @@
 ###############################################################################
 # This cell cannot run the plotter
 # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-# The plotter is already closed by ``sphinx_gallery``.
- -# This cannot be run here because the plotter is already closed and would raise -# an error: +# Because the plotter is already closed by Sphinx-Gallery, the following code +# would raise an error: +# # >>> pl.show() # You can, however, close out the plotter or access other attributes. @@ -102,17 +107,16 @@ pl.close() ############################################################################### -# Making a pull request +# Create a pull request # ~~~~~~~~~~~~~~~~~~~~~ -# Once your example is complete and you've verified it builds locally, you can -# make a pull request (PR). +# Once your example is complete and you've verified that it builds locally, you can +# create a pull request. # -# Branches containing examples should be prefixed with `docs/` as per the branch -# naming conventions found in out `Contributing Guidelines +# Branches containing examples should be prefixed with ``docs/`` as per `Branch-naming conventions # `_. # # .. note:: # -# You only need to create the Python source example (``*.py``). The Jupyter -# notebook and the example HTML are auto-generated via `sphinx-gallery -# `_. +# You only need to create the Python source example (PY file). Sphinx-Gallery +# automatically generates the Jupyter notebook and the RST file for generating +# the HTML documentation page.