From 32eaa209871803637fd844a084554fee73f70b14 Mon Sep 17 00:00:00 2001
From: Pavel Tisnovsky
Date: Thu, 17 Jul 2025 12:40:30 +0200
Subject: [PATCH] LCORE-304: Info about tests

---
 CONTRIBUTING.md | 117 ++----------------------------
 README.md       |  24 ++++--
 docs/README.md  |   1 +
 docs/testing.md | 189 ++++++++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 211 insertions(+), 120 deletions(-)
 create mode 100644 docs/README.md
 create mode 100644 docs/testing.md

diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 93e7b429..f760ff5b 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -46,8 +46,8 @@ The development requires at least [Python 3.12](https://docs.python.org/3/whatsn
 
 ### Tooling installation
 
-1. `pip install --user pdm`
-1. `pdm --version` -- should return no error
+1. `pip install --user uv`
+1. `uv --version` -- should return no error
@@ -60,10 +60,10 @@
 git clone https://github.com/YOUR-GIT-PROFILE/lightspeed-stack.git
 
 # move into the directory
 cd lightspeed-stack
 
-# setup your devel environment with pdm
-pdm install -G dev
+# set up your development environment with uv
+uv sync --group dev
 
-# Now you can run test commands trough make targets, or prefix the rest of commands with `pdm run`, eg. `pdm run make test`
+# Now you can run test commands through make targets, or prefix the rest of the commands with `uv run`, e.g. `uv run make test`
 
 # run unit tests
 make unit-tests
@@ -173,114 +173,7 @@ Static security check is performed by _Bandit_ tool. The check can be started by
 ```
 make security-check
 ```
-## Testing
-Two groups of software tests are used in this repository, each group from the test suite having different granularity. These groups are designed to represent three layers:
-
-1. Unit Tests
-1. 
Integration Tests - -Unit tests followed by integration and e2e tests can be started by using the following command: - -``` -make test -``` - -It is also possible to run just one selected group of tests: - -``` -make test-unit Run the unit tests -make test-integration Run integration tests tests -make test-e2e Run end to end tests -``` - -All tests are based on the [Pytest framework](https://docs.pytest.org/en/) and code coverage is measured by the plugin [pytest-cov](https://github.com/pytest-dev/pytest-cov). For mocking and patching, the [unittest framework](https://docs.python.org/3/library/unittest.html) is used. - -Currently code coverage threshold for integration tests is set to 60%. This value is specified directly in Makefile, because the coverage threshold is different from threshold required for unit tests. - -As specified in Definition of Done, new changes need to be covered by tests. - - - -### Tips and hints for developing unit tests - -#### Patching - -**WARNING**: -Since tests are executed using Pytest, which relies heavily on fixtures, -we discourage use of `patch` decorators in all test code, as they may interact with one another. - -It is possible to use patching inside the test implementation as a context manager: - -```python -def test_xyz(): - with patch("ols.config", new=Mock()): - ... - ... - ... -``` - -- `new=` allow us to use different function or class -- `return_value=` allow us to define return value (no mock will be called) - - -#### Verifying that some exception is thrown - -Sometimes it is needed to test whether some exception is thrown from a tested function or method. 
In this case `pytest.raises` can be used: - - -```python -def test_conversation_cache_wrong_cache(invalid_cache_type_config): - """Check if wrong cache env.variable is detected properly.""" - with pytest.raises(ValueError): - CacheFactory.conversation_cache(invalid_cache_type_config) -``` - -It is also possible to check if the exception is thrown with the expected message. The message (or its part) is written as regexp: - -```python -def test_constructor_no_provider(): - """Test that constructor checks for provider.""" - # we use bare Exception in the code, so need to check - # message, at least - with pytest.raises(Exception, match="ERROR: Missing provider"): - load_llm(provider=None) -``` - -#### Checking what was printed and logged to stdout or stderr by the tested code - -It is possible to capture stdout and stderr by using standard fixture `capsys`: - -```python -def test_foobar(capsys): - """Test the foobar function that prints to stdout.""" - foobar("argument1", "argument2") - - # check captured log output - captured_out = capsys.readouterr().out - assert captured_out == "Output printed by foobar function" - captured_err = capsys.readouterr().err - assert captured_err == "" -``` - -Capturing logs: - -```python -@patch.dict(os.environ, {"LOG_LEVEL": "INFO"}) -def test_logger_show_message_flag(mock_load_dotenv, capsys): - """Test logger set with show_message flag.""" - logger = Logger(logger_name="foo", log_level=logging.INFO, show_message=True) - logger.logger.info("This is my debug message") - - # check captured log output - # the log message should be captured - captured_out = capsys.readouterr().out - assert "This is my debug message" in captured_out - - # error output should be empty - captured_err = capsys.readouterr().err - assert captured_err == "" -``` ## Code style diff --git a/README.md b/README.md index 143b21dd..20d08bbf 100644 --- a/README.md +++ b/README.md @@ -12,10 +12,10 @@ Lightspeed Core Stack (LCS) is an AI-powered assistant that provides 
answers to * [Architecture](#architecture) - * [Integration with Llama Stack](#integration-with-llama-stack) * [Prerequisites](#prerequisites) * [Installation](#installation) * [Configuration](#configuration) + * [Integration with Llama Stack](#integration-with-llama-stack) * [Llama Stack as separate server](#llama-stack-as-separate-server) * [Llama Stack as client library](#llama-stack-as-client-library) * [System prompt](#system-prompt) @@ -23,6 +23,7 @@ Lightspeed Core Stack (LCS) is an AI-powered assistant that provides answers to * [Make targets](#make-targets) * [Running Linux container image](#running-linux-container-image) * [Endpoints](#endpoints) + * [OpenAPI specification](#openapi-specification) * [Readiness Endpoint](#readiness-endpoint) * [Liveness Endpoint](#liveness-endpoint) * [Publish the service as Python package on PyPI](#publish-the-service-as-python-package-on-pypi) @@ -30,6 +31,7 @@ Lightspeed Core Stack (LCS) is an AI-powered assistant that provides answers to * [Upload distribution archives into selected Python registry](#upload-distribution-archives-into-selected-python-registry) * [Packages on PyPI and Test PyPI](#packages-on-pypi-and-test-pypi) * [Contributing](#contributing) +* [Testing](#testing) * [License](#license) * [Additional tools](#additional-tools) * [Utility to generate OpenAPI schema](#utility-to-generate-openapi-schema) @@ -52,12 +54,6 @@ Overall architecture with all main parts is displayed below: Lightspeed Core Stack is based on the FastAPI framework (Uvicorn). The service is split into several parts described below. -## Integration with Llama Stack - -![Integration with Llama Stack](docs/core2llama-stack_interface.png) - - - # Prerequisites * Python 3.12, or 3.13 @@ -73,9 +69,10 @@ Installation steps depends on operation system. 
Please look at instructions for
- [macOS installation](https://lightspeed-core.github.io/lightspeed-stack/installation_macos)
-
 # Configuration
 
+## Integration with Llama Stack
+
 The Llama Stack can be run as a standalone server and accessed via its
 REST API. However, instead of direct communication via the REST API (and
 JSON format), there is an even better alternative. It is based on the so-called
@@ -83,6 +80,8 @@ Llama Stack Client. It is a library available for Python, Swift, Node.js or
 Kotlin, which "wraps" the REST API stack in a suitable way, which is easier
 for many applications.
 
+![Integration with Llama Stack](docs/core2llama-stack_interface.png)
+
 ## Llama Stack as separate server
 
 If Llama Stack runs as a separate server, the Lightspeed service needs to be configured to be able to access it. For example, if the server runs on localhost:8321, the service configuration should look like:
@@ -311,10 +310,19 @@ If this configuration file does not exist, you will be prompted to specify API t
 
 * https://test.pypi.org/project/lightspeed-stack/0.1.0/
 
+
 # Contributing
 
 * See [contributors](CONTRIBUTING.md) guide.
 
+
+
+# Testing
+
+* See [testing](docs/testing.md) guide.
+
+
+
 # License
 
 Published under the Apache 2.0 License

diff --git a/docs/README.md b/docs/README.md
new file mode 100644
index 00000000..eaece51d
--- /dev/null
+++ b/docs/README.md
@@ -0,0 +1 @@
+# GitHub pages

diff --git a/docs/testing.md b/docs/testing.md
new file mode 100644
index 00000000..512c4835
--- /dev/null
+++ b/docs/testing.md
@@ -0,0 +1,189 @@
+# Testing
+
+Three groups of software tests are used in this repository, each group having a different granularity. These groups are designed to represent three layers:
+
+1. Unit Tests
+1. Integration Tests
+1. 
End to end Tests
+
+
+
+## Running tests
+
+Unit tests, followed by integration and end to end tests, can be started by using the following command:
+
+```
+make test
+```
+
+It is also possible to run just one selected group of tests:
+
+```
+make test-unit           Run the unit tests
+make test-integration    Run the integration tests
+make test-e2e            Run the end to end tests
+```
+
+
+
+## Unit tests
+
+Unit tests are based on the [Pytest framework](https://docs.pytest.org/en/) and code coverage is measured by the plugin [pytest-cov](https://github.com/pytest-dev/pytest-cov). For mocking and patching, the [unittest framework](https://docs.python.org/3/library/unittest.html) is used.
+
+Currently, the code coverage threshold for integration tests is set to 60%. This value is specified directly in the Makefile, because the coverage threshold differs from the one required for unit tests.
+
+As specified in the Definition of Done, new changes need to be covered by tests.
+
+### Unit tests structure
+
+* Defined in [tests/unit](https://github.com/lightspeed-core/lightspeed-stack/tree/main/tests/unit)
+
+
+```
+├── app
+│   ├── endpoints
+│   │   ├── __init__.py
+│   │   ├── test_authorized.py
+│   │   ├── test_config.py
+│   │   ├── test_feedback.py
+│   │   ├── test_health.py
+│   │   ├── test_info.py
+│   │   ├── test_models.py
+│   │   ├── test_query.py
+│   │   ├── test_root.py
+│   │   └── test_streaming_query.py
+│   ├── __init__.py
+│   └── test_routers.py
+├── auth
+│   ├── __init__.py
+│   ├── test_auth.py
+│   ├── test_k8s.py
+│   ├── test_noop.py
+│   ├── test_noop_with_token.py
+│   └── test_utils.py
+├── __init__.py
+├── models
+│   ├── __init__.py
+│   ├── test_config.py
+│   ├── test_requests.py
+│   └── test_responses.py
+├── runners
+│   ├── __init__.py
+│   ├── test_data_collector_runner.py
+│   └── test_uvicorn_runner.py
+├── services
+│   └── test_data_collector.py
+├── test_client.py
+├── test_configuration.py
+├── test_lightspeed_stack.py
+├── test_log.py
+└── utils
+    ├── __init__.py
+    ├── test_checks.py
+    ├── test_common.py
+    ├── test_endpoints.py
+    ├── test_suid.py
+    └── test_types.py
+```
+
+* Please note that the directory structure of the unit tests is similar to the source directory structure. This makes it easier to select just one test to run.
+
+
+
+## Integration tests
+
+Integration tests are based on the [Pytest framework](https://docs.pytest.org/en/) and code coverage is measured by the plugin [pytest-cov](https://github.com/pytest-dev/pytest-cov). For mocking and patching, the [unittest framework](https://docs.python.org/3/library/unittest.html) is used.
+
+* Defined in [tests/integration](https://github.com/lightspeed-core/lightspeed-stack/tree/main/tests/integration)
+
+
+
+## End to end tests
+
+End to end tests are based on the [Behave](https://behave.readthedocs.io/en/stable/) framework. Tests are specified in the form of [test scenarios](e2e_scenarios.md).
+
+* Defined in [tests/e2e](https://github.com/lightspeed-core/lightspeed-stack/tree/main/tests/e2e)
+
+
+
+## Tips and hints
+
+### Developing unit tests
+
+#### Patching
+
+**WARNING**:
+Since tests are executed using Pytest, which relies heavily on fixtures,
+we discourage the use of `patch` decorators in test code, as they may interact with one another.
+
+It is possible to use patching inside the test implementation as a context manager:
+
+```python
+from unittest.mock import Mock, patch
+
+def test_xyz():
+    with patch("ols.config", new=Mock()):
+        ...
+        ...
+        ...
+```
+
+- `new=` allows us to substitute a different function or class
+- `return_value=` allows us to define the return value (the patched callable is not actually invoked)
+
+
+#### Verifying that an exception is raised
+
+Sometimes it is necessary to test whether an exception is raised from the tested function or method. 
In this case `pytest.raises` can be used:
+
+
+```python
+def test_conversation_cache_wrong_cache(invalid_cache_type_config):
+    """Check if wrong cache env.variable is detected properly."""
+    with pytest.raises(ValueError):
+        CacheFactory.conversation_cache(invalid_cache_type_config)
+```
+
+It is also possible to check that the exception is raised with the expected message. The message (or a part of it) is written as a regular expression:
+
+```python
+def test_constructor_no_provider():
+    """Test that constructor checks for provider."""
+    # we use bare Exception in the code, so we need to check
+    # at least the message
+    with pytest.raises(Exception, match="ERROR: Missing provider"):
+        load_llm(provider=None)
+```
+
+#### Checking what was printed and logged to stdout or stderr by the tested code
+
+It is possible to capture stdout and stderr by using the standard fixture `capsys`:
+
+```python
+def test_foobar(capsys):
+    """Test the foobar function that prints to stdout."""
+    foobar("argument1", "argument2")
+
+    # readouterr() consumes the captured streams, so call it only once
+    captured = capsys.readouterr()
+    assert captured.out == "Output printed by foobar function"
+    assert captured.err == ""
+```
+
+Capturing logs:
+
+```python
+@patch.dict(os.environ, {"LOG_LEVEL": "INFO"})
+def test_logger_show_message_flag(capsys):
+    """Test logger set with show_message flag."""
+    logger = Logger(logger_name="foo", log_level=logging.INFO, show_message=True)
+    logger.logger.info("This is my debug message")
+
+    # the log message should be captured and the error output should be empty
+    captured = capsys.readouterr()
+    assert "This is my debug message" in captured.out
+    assert captured.err == ""
+```
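The `capsys` approach above only sees log records that end up on stdout or stderr. When a logger writes to its own handlers instead, the same check can be done by attaching a temporary in-memory handler, which is essentially what pytest's `caplog` fixture does under the hood. The sketch below is a generic illustration, not part of this repository: the helper name `capture_log_output` and the logger name `foo` are made up for the example.

```python
import io
import logging


def capture_log_output(logger_name, level, emit):
    """Run `emit(logger)` while capturing everything the named logger writes."""
    logger = logging.getLogger(logger_name)
    old_level = logger.level
    logger.setLevel(level)

    # attach a temporary handler that writes into an in-memory buffer
    stream = io.StringIO()
    handler = logging.StreamHandler(stream)
    logger.addHandler(handler)
    try:
        emit(logger)
    finally:
        # always restore the logger to its previous state
        logger.removeHandler(handler)
        logger.setLevel(old_level)
    return stream.getvalue()


def test_info_message_is_captured():
    """The info message should appear in the captured output."""
    captured = capture_log_output(
        "foo", logging.INFO, lambda log: log.info("This is my debug message")
    )
    assert "This is my debug message" in captured
```

In most tests, pytest's built-in `caplog` fixture provides the same capability with less boilerplate: a test that accepts `caplog` can inspect `caplog.text` for the captured log text and `caplog.records` for the individual records.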